IEEE 802.3: On the Issue of Backwards Compatibility

As this is my first blog post, I want to take a second and let you know why this blog is important to me. I know that not everyone who uses PoE can be part of the IEEE committee and this is one more way to reach out and ask for feedback on what is important to people who actually use the technology that we are creating. I hope that as someone who is interested in PoE (why else would you be reading this blog?), you will help create a lively discussion that I can use to bring your ideas to the committee. With that said, let’s get started.

(Image: high-power IEEE 802.3 standard)

One of the most interesting dilemmas the (soon-to-be) IEEE 802.3bt task force faces is how to define new mutual identification protocols while maintaining backwards compatibility and maximizing interoperability. In the original PoE project (IEEE 802.3af), mutual identification consisted of a detection stage, in which the PD presents a resistance of 25 kΩ to the PSE to signal that it is a valid device, and an optional physical-layer classification stage, in which the PD draws a fixed current as a way of asking the PSE for a certain amount of power. When the specification was updated by the IEEE 802.3at task force, detection was left unchanged, but classification was altered to support the new higher-power devices by introducing Class 4 and two-finger physical-layer classification.
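To make the two stages concrete, here is a rough sketch of the PSE-side decision logic. The numeric windows are approximate illustrations only (consult the standard for normative values), and the function names are my own:

```python
from typing import Optional

# Sketch of IEEE 802.3af/at PSE-side mutual identification (illustrative;
# the numeric windows below are approximate, not normative values).

VALID_DETECT_KOHM = (19.0, 26.5)  # signature range a PSE accepts as a valid PD

# Approximate classification-current windows (mA) and the class they request.
CLASS_WINDOWS = [
    ((0.0, 5.0), 0),    # Class 0: default
    ((8.0, 13.0), 1),   # Class 1
    ((16.0, 21.0), 2),  # Class 2
    ((25.0, 31.0), 3),  # Class 3
    ((35.0, 45.0), 4),  # Class 4 (added by 802.3at)
]

def detect(signature_kohm: float) -> bool:
    """True if the measured signature resistance looks like a valid PD."""
    lo, hi = VALID_DETECT_KOHM
    return lo <= signature_kohm <= hi

def classify(class_current_ma: float) -> Optional[int]:
    """Map the PD's fixed classification current to the class it requests."""
    for (lo, hi), cls in CLASS_WINDOWS:
        if lo <= class_current_ma <= hi:
            return cls
    return None  # out-of-window current

# A PD presenting ~25 kΩ and drawing ~28 mA passes detection and requests Class 3.
print(detect(25.0), classify(28.0))
```

The point this makes for backwards compatibility: every generation of PSE shares the same `detect()` window, which is exactly why widening the valid resistance range risks false positives.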

The 4-pair PoE study group has already begun to discuss the issues of backwards compatibility and has adopted two draft objectives to guide our work in this area. They are:

  • 4PPoE PDs which operate at power levels consistent with IEEE 802.3-2012 PDs will interoperate with IEEE 802.3-2012 PSEs.
  • 4PPoE PSEs will be backwards compatible with IEEE 802.3-2012 PDs.

While the wording of these objectives may seem vague, or even odd, to someone who was not in the room while they were being debated, there is one crucial point to pull from them: both objectives indicate (in my opinion) that we will not change the detection process in any meaningful way. This is critical, as any change in the valid detection resistance range would open the possibility of false positives (i.e., the PSE believing a valid PD is connected at the other end of the cable when that is not the case).

So, while it seems detection will remain unchanged, classification is a different story altogether. To add new power classes, we will definitely have to make some changes. There are many ideas on how to do this, including adding new current ranges to the classification schedule, adding more class fingers, and various hybrid approaches. Which one will be adopted? It is too early to know, but that means you can still have some input.
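To picture the "more class fingers" idea, here is a toy sketch. It is entirely hypothetical, not an adopted proposal: just one way extra fingers could encode higher power requests while a legacy PSE still sees familiar behaviour:

```python
# Toy sketch of how additional class fingers could signal higher power
# (hypothetical illustration only; not an adopted IEEE 802.3bt proposal).

def interpret_fingers(finger_classes: list) -> str:
    """Interpret the class the PD returned on each successive finger."""
    if not finger_classes:
        return "no classification: assume Class 0"
    first = finger_classes[0]
    if first < 4:
        return "Type 1 PD, Class %d" % first
    # In 802.3at, seeing Class 4 on two fingers identifies a Type 2 PD;
    # a future scheme could keep counting fingers to encode new classes.
    n = len(finger_classes)
    if n == 1:
        return "Class 4 once: legacy behaviour, treat as Type 1"
    if n == 2:
        return "Class 4 twice: Type 2 PD (25.5 W)"
    return "Class 4 on %d fingers: hypothetical higher-power 4PPoE class" % n

print(interpret_fingers([4, 4]))
```

The attraction of a scheme like this is that an older PSE simply stops probing after the fingers it knows about, so a new PD degrades gracefully to the legacy power level.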

What do you think? Do you agree that leaving detection unchanged is critical? Is there a certain way you hope to see the new classification protocol implemented? Anything else you are hoping to see the committee consider?

To read all of my blogs on the IEEE 802.3 meetings, click here. Don't miss out on future Power House blogs: subscribe using the button at the top right-hand side of this post.

Additional Resources:

*Edit: I answered the question: "What markets are driving the need for >25.5W PoE and a new IEEE 802.3 standard?" on Power Electronics community for those interested.

  • I think that detection should be left alone and that classification should consist of a low-level requirement that draws minimal power over a single pair, with a higher-level protocol over the Ethernet link then used to request more power over whatever pairs are available.  This could be extended to active power management, whereby the PSE may request the PD to enter a low-power mode when it deems it necessary.

    In today's environment, reducing power requirements, or at least actively controlling them, is more important than ever.  End customers are starting to consider the total cost of ownership, and this includes the energy footprint.

    Adding more power so that dumb designs can be energy inefficient is not the way forward.  Adding intelligence to use the extra power available as and when it's needed is a better idea.


  • Matt,

    Thanks for the response.  I agree with almost every point you make, but I have some questions about the classification protocol...

    When you say that detection should be left alone, do you envision detection being performed on a single pair set or on both pair sets?  If detection is only done on a single pair set, are you suggesting that power is applied only to that pair set, meaning that power could be applied to the second pair set only once the "higher level protocol over the Ethernet link" has been performed?

    If the scenario above is what you intended (or even if it isn't), do you have any concerns about applying power to a pair set that detection was not performed on?  This is a situation that 2-pair PoE does not have to deal with as detection is always performed before applying power to a pair set.  In my mind, this raises some very interesting questions.

    Another interesting point to consider is how midspan power injectors could work with the proposed scheme.  In my opinion, a new classification algorithm must contain some form of physical-layer classification, as any system that relies on Ethernet communication will not work with midspans.

    Thanks again for your response and I look forward to your next one.


  • Just my opinion, but....

    I think it would be dangerous to apply power to a pair set that has not met detection, even if a higher-level protocol suggests it is OK; errors in patching may be unlikely but not impossible.  So detection indicates one or more pair sets, taking into account that legacy PDs will diode-OR the pair sets together, i.e. a legacy PD is counted as having one pair set.

    H/w classification is carried out and the link powered up over a single pair set.  Higher level protocols then determine what to do next.

    I don't think it's too much to ask for a low-cost CPU to be included in mid-span PSEs to carry out the protocol and provide a level of reporting and management.  If the midspan is going to provide several ports at up to 50 W (say) per port, it is either going to have a huge power supply or need to be configured to provision the power appropriately.

    It is, of course, possible that a dumb mid-span is limited to supplying ~15 W per pair set, i.e. treating each pair set as a legacy PD.  However, I'm in favour of active power management, and if we start with clever mid-spans, perhaps we will be more aware of the power we use and more inclined to turn things off and on as needed.


    P.S. Sorry for the late response - I thought I'd get an email on blog activity.

  • Matt, we are looking into why you didn't get notified when my response was posted.  In the meantime, you can "subscribe to comments" on the right-hand side of this blog to guarantee you are notified of replies. You might also want to click on "Power House" and choose "subscribe to this blog by email" so that you do not miss future blog posts.

    Thanks again for your response.  As I have said before, the more opinions and information contributed to the conversation, the more I can take to the committee, leading to a stronger standard.  I can see the value of smart midspan power injectors and active power management, although I think that most midspans will continue to be built without CPUs.  In addition, as you will see in my next blog post (which talks about LED lighting powered by PoE and should be posted within a day or two), there are applications for higher-power PoE that may not even have or use a data link.  I believe having a physical classification protocol that enables these applications would be a big advantage for PoE.


  • Hi Dave,

    I've subscribed to comments.  Perhaps that was it.  I had subscribed to the blog thinking I'd get everything concerned with it but hadn't noticed a separate subscription for comments - user error I guess.

    I can see that there is an area for dumb end products, but wouldn't it be nice to switch the LEDs on (remotely) over a data link, or schedule patterns, or do all the whizzy things you can do with LEDs...

    Room for more people to comment I feel.

    Best Regards,