DP83867E: Failing to meet 1Gbps (and much lower)

Part Number: DP83867E
Other Parts Discussed in Thread: DP83869HM, DP83869EVM, DP83869

Hi all,

I've been doing testing with an Ixia Novus One to communicate over 1Gbps Ethernet Copper. I am running the DP83867E PHY in reverse loopback mode with auto-negotiation enabled, where the PHY is automatically configured to master.

I drop packets when issuing Random Packet or Quad Gaussian tests, even at 60% utilization. Fixed-size packets show no drops until higher utilization (at 100% utilization, 64-byte packets drop 0.005%).

Inserting an Ethernet switch between the PHY and the traffic generator significantly improves performance. Manually configuring the PHY as slave instead of master slightly improves performance.

I have seen this problem on several iterations of custom platforms using the DP83867E, several iterations of custom platforms using the DP83869HM, and on the DP83869EVM Development Board.

I have tried the same tests using a Marvell 88E1111 PHY and Quad Gaussian passes at 95% utilization with no drops.

Thanks in advance,

George

  • Hi George,

    Would you be able to share what cable length is being used for this test?

    Also, if there are two 867's or two 869's connected to each other, are the same types of packet errors occurring? Or are these errors only when the Ixia is connected to those devices?

    Thank you for bringing this up!

    Lysny

  • Hi Lysny,

    We've been doing our testing with 10 ft cat5e cables and have also seen the issue with cat6 cables.

    What do you mean two connected to each other? Back to back? We have a platform where data passes between multiple PHYs of the same model, but there are other components on the board which process the traffic.

  • Hi George,

    Yes, I mean where one 867 is the host and the other 867 is the client. Or similar with 869.

    Could you please provide a block diagram that includes how many PHYs the data is passing through, as well as the other significant components along that path?

    Thanks,

    Lysny

  • Lysny,

We don't currently have a strict PHY-to-PHY configuration, nor can we set one up: on my platform an FPGA sits in the path, blocking a direct PHY-to-PHY connection.

  • Hi George,

    Are you able to check the connection of two 867's over the copper cable?

Also, what is the interpacket gap when transmitting 64-byte packets? If you are using a tight IPG, could you please try programming register 0x53 to 0x2054?

    Thanks,
    Lysny

  • Hi Lysny,

    Please refer to my drawing attached. It shows how deep we are able to go into the platform.

We're using a standard IPG of 12 (looking into that register, I believe we can leave it at the default then).

  • Hi George,

    Please try the suggested register configuration and let me know of your results!

    Thanks,

    Lysny

  • Hi Lysny,

    Much better results! Though still not fixed entirely.

    Results on TI Platform before Register Change:

    Trial # | Util % | Packet Type                  | Frames Delta | Loss %
    1       | 100    | 64-Byte                      | 19167        | 0.002
    2       | 100    | 96-Byte                      | 3230762      | 0.775
    3       | 95     | Random (64-1518 Byte)        | 662088       | 0.754
    4       | 95     | Quad Gaussian (64-1518 Byte) | 707451       | 0.785

    Results TI Platform after Register Change:

    Trial # | Util % | Packet Type                  | Frames Delta | Loss %
    1       | 100    | 64-Byte                      | 0            | 0
    6       | 100    | 96-Byte                      | 0            | 0
    7       | 95     | Random (64-1518 Byte)        | 1000         | 0.001
    8       | 95     | Quad Gaussian (64-1518 Byte) | 1150         | 0.001

    Highest Util % on TI Platform without Drops:

    Trial # | Util % | Packet Type                  | Frames Delta | Loss %
    9       | 56     | Random (64-1518 Byte)        | 0            | 0
    10      | 51     | Quad Gaussian (64-1518 Byte) | 0            | 0

Note: Quad Gaussian at 52-56% utilization was also tested and dropped 1 packet in each case; higher rates produced higher drop percentages.

    Compared to a Marvell 88E1111

    Trial # | Util % | Packet Type                  | Frames Delta | Loss %
    11      | 95     | Quad Gaussian (64-1518 Byte) | 0            | 0

    Thankfully, we're getting much closer. My questions now are as follows:

1. Do you have any more insight into this register? The register description only documents two values (0x4 and 0x5). What about all the other values? I may try to replicate some tests with the value set to 0x3.

    2. Do you have any other suggestions for the varied-packet tests?

    Thanks in advance,

    George

  • Hi George,

Glad to see improvements! Register 0x53 is not in the datasheet; you may be looking at Table 53, which is register 0x32. I will get back to you with more register details and some further recommendations tomorrow!

    Thanks,

    Lysny

  • Hi Lysny,

    To clarify, I've been looking at these two datasheets for the two respective platforms which have been dropping packets:

DP83867E - the PHY corresponding to the tests above (much better results, but I'd still like to eliminate the last bit of drops). This datasheet provides info for register 0x0053 (which is what I changed to improve performance).

DP83869HM - the PHY on the other platform facing this large packet-drop issue. I don't see a register 0x0053 here, or any register similar to the DP83867E's 0x0053.

    Thanks,

    George

  • Hi George,

Can you please check registers 0x13 and 0x15 after running one of the tests that sees errors? The interrupt needs to be enabled in register 0x12[2]; registers 0x13 and 0x15 will then report any errors. Could you also read register 0x32? Looking at these will help locate where the issue is occurring.

Register 0x53 adjusts the number of idle cycles needed to resynchronize during gigabit reception.

    One other suggestion is to increase the IPG to 13.

    Thanks,

    Lysny

  • Hi Lysny,

Here are the results of reading the registers before / during / after running a test that resulted in ~1000 dropped packets. This was with register 0x0053 set to 0x2054:

    Register | Before Test | During Test                                     | After Test
    0x13     | 16'h1C42    | 16'h0104 (first read); bit 2 went high at times | 16'h0504
    0x15     | 16'h0000    | 16'h0000                                        | 16'h0000
    0x32     | 16'h0054    | 16'h0054                                        | 16'h0054

A second test was run to get the values in the "After Test" column, since some of these bits are self-clearing. I'm thinking bit 10 (link status change) likely just marks the beginning/end of a test, as it does not go to '1' during the test.

As a side note, I have seen slightly better performance after changing the value from 0x2054 to 0x2053, and worse performance after changing it to 0x2055 and, surprisingly, even 0x2052.

I suspect the Ixia traffic generator is occasionally transmitting with an IPG below 12; we see much worse performance with the Ixia IPG set to 10. Are you aware of any PHY register that would tell us if the IPG ever dips below 12?

    Thanks,

    George

  • Hi George,

Okay, it looks like the over/underflow is happening on the xGMII side. I believe register 0x32[2] needs to be cleared in order for register 0x15 to show an error count. Would you be able to try that adjustment and see if register 0x15 shows any changes?

Also, I believe that if you check register 0x43 bit 4, that bit should go high if the detected IPG differs from the programmed IPG. I will have to double-check that this is the case, though.

    Were you able to increase the IPG on the Ixia to 13?

    Thanks,

    Lysny
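    A one-liner sketch of that bit test, using the 0x43 readback value of 0x07A0 reported in this thread as sample input (toy data in place of a real MDIO read):

```python
# Sketch: test bit 4 of register 0x43. The sample value 0x07A0 is the
# readback reported in this thread; a real implementation would fetch it
# over MDIO instead.

def bit_is_set(value, bit):
    """Return True if the given bit position is set in value."""
    return bool((value >> bit) & 1)

reg_0x43 = 0x07A0  # sample readback from the thread
ipg_mismatch = bit_is_set(reg_0x43, 4)
```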

It looks like register 0x32 bit 2 is read-only and "writes are ignored," though I tried anyway.

    Here is a comparison of the two tests (note: I power cycled in between tests):

    Util % | Packet Type                  | Frames Tx  | Frames Rx  | Frames Delta | Loss % | IPG | Duration | Register 0x15 | Register 0x43
    95     | Quad Gaussian (64-1518 Byte) | 90,137,641 | 90,136,370 | 1,271        | 0.001  | 12  | 10:00    | 0x0000        | 0x07A0
    95     | Quad Gaussian (64-1518 Byte) | 90,137,641 | 90,137,641 | 0            | 0.000  | 13  | 10:00    | 0x0000        | 0x07A0

However, I'd still like to see no drops with the Ixia IPG set to 12.

    Thanks in advance,

    George

  • Hi George,

    Looks like that bit is different between the 867 and 869. I'm glad that setting the IPG to 13 was able to show no frame loss. I will look into what other registers we can tweak in order to keep the IPG at 12 and get back to you tomorrow!

    Thanks,

    Lysny

  • Hi Lysny,

I have just caught an error in the .tcl script I use to read some of these registers. After revising it and re-running a dropped-packet test, I get these values:

    Register | Before | After
    0x13     | 1C42   | 0000
    0x15     | 0000   | 0000
    0x32     | 00D3   | 00D3
    0x43     | 07A0   | 07A0

    Sorry for the confusion.

    Thanks,

    George

  • Hi George,

    Are you able to run this test with the DP83869EVM?

    Could you provide the cable length and category as well?

    Also, if you are able to provide a schematic of the DP83867 custom board, I can review.

    Thanks,

    Lysny

  • Hi Lysny,

    We've been using a 100m CAT5e cable in the prior tests.

We have switched to a standard 10 ft cable with the DP83867E and register 0x0053 set to either 0x2053 or 0x2054. It looks like cable length has no discernible effect on performance.

    And some tests with the DP83869EVM:

    Util % | Packet Type                  | Frames Delta | Loss % | IPG
    95     | Quad Gaussian (64-1518 Byte) | 1,688,954    | 1.874  | 12
    95     | Quad Gaussian (64-1518 Byte) | 2249         | 0.002  | 13

    It should be noted we have connected the clock to the on-board 25-MHz CMOS oscillator.

    I'll get back to you on the schematic.

  • Hi George,

Is there a reason you chose the CMOS oscillator? Are you able to test this with the crystal oscillator? We have previously tested the EVM with the crystal oscillator and not seen any packet loss.

    Was the EVM test still connected through the FPGA? 

    Also, feel free to send schematic whenever. :)

    Thanks,
    Lysny

  • Hi Lysny,

    We chose the CMOS oscillator as a performance check. The crystal resulted in packet loss as well, though a little better than the CMOS oscillator. The EVM test was using loopback from Ixia to the DP83869EVM dev board, so no FPGA.

    Here are results from the DP83869 on our custom platform in PHY loopback, using the FPGA clock (most accurate ppm between the custom and dev board platforms):

    Util % | Packet Type                  | Frames Delta | Loss % | IPG
    100    | 64-Byte                      | 1,102,064    | 0.129  | 12
    100    | 96-Byte                      | 118,632      | 0.024  | 12
    95     | Random (64-1518 Byte)        | 1,754,741    | 1.997  | 12
    95     | Quad Gaussian (64-1518 Byte) | 1,893,394    | 2.101  | 12

    Would you mind giving your email so I could provide the schematic?

    Thanks,

    George

  • Hi George,

Just sent a friend request that has my email attached! Let's continue over email.

    Thanks,

    Lysny