EVM6678L board - How to enable Flow Control?

Dear everyone, I would like to ask for the forum's assistance in resolving a problem I am experiencing on the EVM6678L board.

I wrote an application based on the PA_simpleExample. The application sends and receives packets to/from an external device. Everything works correctly except that flow control is disabled.

I added the following lines to the Init_Switch function:

flowControlCfg.p0FlowEnable = 1;   /* port 0: host (CPPI) port  */
flowControlCfg.p1FlowEnable = 1;   /* port 1: EMAC/SGMII port 1 */
flowControlCfg.p2FlowEnable = 1;   /* port 2: EMAC/SGMII port 2 */
CSL_CPSW_3GF_setFlowControlReg(&flowControlCfg);

But flow control is still disabled.

I see that the function Init_MDIO is empty. 

Is my problem related to the MDIO initialization, or to something else?

How can I configure the MDIO to enable flow control?

Thanks,

Enrique

  • Enrique,

     

    How are you determining that flow control is not enabled? What test are you running to check whether flow control is enabled or not?

     

    The PHY on the EVM should have flow control enabled by default. For MDIO configuration, we have functions available at the CSL level which you can use to modify the PHY configuration. You can find those inline functions in the csl_mdioaux.h file.

     

    Regards,

    Bhavin

  • Bhavin,

    Thanks for your answer.

    1. The test sends packets at 300 MB/s to an external bounce-back device and receives the packets back.

    When I free the received packets quickly and do not let the RX free queue empty, I receive all the packets without packet loss. (Each test sent several million packets.)

    But if I free packets more slowly than they arrive, so that the RX free queue empties, then packets are dropped.

    From the point of view of the bounce-back device (I can print its full Ethernet statistics), it never receives any pause frame that would pause its transmission toward the EVM-C6678, so I lose several packets.


    2. I tried these functions in the following way:

        CSL_MDIO_getUserAccessRegister(port_num, &user_access_reg);

        user_access_reg.phyAddr = phy_addr;

        user_access_reg.regAddr = 4;       // Auto-Negotiation Advertisement Register (address 04h)

        user_access_reg.data   |= 0x0C00;  // set bits 10 and 11 (asymmetric pause and pause capable)

        CSL_MDIO_setUserAccessRegister(port_num, &user_access_reg);

    I see the initial register (address 0x4) value is 0xFFFF, and this value is not valid, right?

    I tried to read/write using port_num 0 and port_num 1.


    I didn't see any change.

    Maybe I need to configure something else in the DSP in order to enable flow control?

    Thanks,

    Enrique


     

  • This issue is urgent for our company.

    So, does anyone know how to enable Ethernet flow control on the EVM-C6678?

  • Enrique,

    In the C6678 device, in the Ethernet switch subsystem, there is a register called FLOW_CONTROL which controls Ethernet flow control on both ports. This register is at address 0x02090824. Can you enable flow control on all the ports?

    That flow control setting will allow pause frames to be sent whenever there is congestion.
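
    As a quick sanity check, you can also read the register back directly from the memory map. Below is a minimal sketch; the bit positions (P0_FLOW_EN = bit 0, P1_FLOW_EN = bit 1, P2_FLOW_EN = bit 2) are my assumption here, so please confirm them against the switch subsystem user's guide.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical helper: dump the CPSW FLOW_CONTROL register (0x02090824).
       The per-port enable bit positions are assumed; verify in the user's guide. */
    static void dump_cpsw_flow_control (void)
    {
        volatile uint32_t *flowCtrlReg = (volatile uint32_t *) 0x02090824;
        uint32_t val = *flowCtrlReg;

        printf ("FLOW_CONTROL = 0x%08x (p0=%u, p1=%u, p2=%u)\n",
                (unsigned) val,
                (unsigned) (val & 1u),
                (unsigned) ((val >> 1) & 1u),
                (unsigned) ((val >> 2) & 1u));
    }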

    Regards,

    Bhavin

    Also, information on this register can be found on page 96 of the Ethernet switch subsystem user's guide, or just search for FLOW_CONTROL in the user's guide.

    Regards,

    Bhavin

  • Enrique,

    The above-mentioned register should be set by the piece of code below.

    flowControlCfg.p0FlowEnable = 1;
    flowControlCfg.p1FlowEnable = 1;
    flowControlCfg.p2FlowEnable = 1;
    CSL_CPSW_3GF_setFlowControlReg(&flowControlCfg);

    Tx PAUSE frames and Rx PAUSE frames can be checked in the statistics registers of the Ethernet switch subsystem. Attached is a GEL file which can dump those registers to the CCS console so you can verify whether any PAUSE frames are being transmitted or received by the C6678.

    6470.cpsw_stats_print.gel

    If the above flow control register configuration is not making any difference, then the packets are being dropped by the NetCP PktDMA just before they would be stored in a descriptor popped from the Rx free descriptor queue (Rx FDQ). The NetCP PktDMA Rx flow is by default configured to drop packets if the Rx FDQ is empty. Ideally you should check your software architecture to see whether you are running out of free descriptors in the FDQ. If you follow the convention of recycling descriptors between the Rx FDQ and the Rx destination queue (or Rx queue) and the Rx FDQ still runs out of descriptors, then either there is a bug in your recycling or you have not planned for the worst-case scenario, in which case you should either create a new memory region of descriptors or steal some descriptors from another queue. A minimal recycling sketch is shown below.
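
    For reference, here is the recycling convention I mean; a minimal sketch assuming the QMSS LLD calls Qmss_queuePop()/Qmss_queuePushDesc() and hypothetical queue handles rxQueHnd and rxFreeQueHnd from your application.

    /* Pop a received packet from the Rx destination queue (rxQueHnd and
       rxFreeQueHnd are hypothetical handle names from the application). */
    Cppi_HostDesc *pHostDesc;

    pHostDesc = (Cppi_HostDesc *) QMSS_DESC_PTR (Qmss_queuePop (rxQueHnd));
    if (pHostDesc != NULL)
    {
        /* ... consume pHostDesc->buffPtr / pHostDesc->buffLen here ... */

        /* Recycle: return the descriptor to the Rx FDQ so the PktDMA
           does not starve and start dropping packets. */
        Qmss_queuePushDesc (rxFreeQueHnd, (void *) pHostDesc);
    }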

    Regards, Bhavin

     

  • Enrique,

    One more quick thing to try: the default configuration of the Rx flow can be changed so that it does not drop packets but stalls them instead. That is, set the Rx flow to keep retrying for Rx free descriptors. This is done with the RX_ERROR_HANDLING field in the Rx Flow Configuration Register A (see section 4.2.4.1 in the Multicore Navigator User's Guide). This will stall the packets in the PA and switch subsystem, and once all the internal buffers in the PA and switch subsystem are filled with incoming packets, the switch subsystem should start generating PAUSE frames.

    One important thing to note: if you set the Rx flow to "retry", then you also need to set the TIMEOUT field of the Performance Control Register (see section 4.2.1.2 of the Navigator User's Guide) so that the PktDMA does not retry the FDQ too quickly and flood the VBUSM with pop requests. A sketch of the Rx flow setting is shown below.
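
    To illustrate, here is a minimal sketch of that Rx flow setting using the CPPI LLD; the names rxQueNum, rxFreeQueNum and cppiHnd are hypothetical placeholders from your application, only the fields relevant to this discussion are shown, and the TIMEOUT field still has to be set separately in the Performance Control Register.

    Cppi_RxFlowCfg rxFlowCfg;
    Cppi_FlowHnd   rxFlowHnd;
    uint8_t        isAllocated;

    memset (&rxFlowCfg, 0, sizeof (rxFlowCfg));
    rxFlowCfg.flowIdNum         = CPPI_PARAM_NOT_SPECIFIED;  /* let the LLD pick a flow */
    rxFlowCfg.rx_desc_type      = Cppi_DescType_HOST;
    rxFlowCfg.rx_dest_qnum      = rxQueNum;                  /* hypothetical Rx queue   */
    rxFlowCfg.rx_fdq0_sz0_qnum  = rxFreeQueNum;              /* hypothetical Rx FDQ     */
    rxFlowCfg.rx_error_handling = 1;  /* 1 = retry for a free descriptor instead of dropping */

    rxFlowHnd = Cppi_configureRxFlow (cppiHnd, &rxFlowCfg, &isAllocated);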

    I still suggest reconsidering your design for the number of free descriptors in the Rx FDQ, because it is not advisable to have many PAUSE frames in the system. You can also make the Rx destination queue one of the accumulator queues so that it generates an interrupt once it holds more than N descriptors. You can use that interrupt to push descriptors back to the Rx FDQ, which is likely to be nearly empty when N descriptors are sitting in the Rx destination queue.

    Please let me know if these items fix your problem.

    Regards,

    Bhavin

  • Bhavin,

    Thanks for your complete answer.

    But our problem is not resolved yet, because we need the DSP to send TX pause frames to an external device when that device sends packets to the DSP at a rate higher than the DSP can process, regardless of how the queues are implemented.

    Now I've checked your suggestions:

    1. The GEL script you sent is very useful, and I see that no TX pause frames were sent.


    C66xx_0: GEL Output: Ethernet Statistics A.
    C66xx_0: GEL Output: -----------------------------------------------
    C66xx_0: GEL Output: RX Good Frames ................ 0x0x00061A81
    C66xx_0: GEL Output: RX Broadcast Frames ........... 0x0x00000000
    C66xx_0: GEL Output: RX Multicast Frames ........... 0x0x00000000
    C66xx_0: GEL Output: RX Pause Frames ............... 0x0x00000000
    C66xx_0: GEL Output: RX CRC Errors ................. 0x0x00000000
    C66xx_0: GEL Output: RX Align/Code Errors .......... 0x0x00000000
    C66xx_0: GEL Output: RX Oversized Frames ........... 0x0x00000000
    C66xx_0: GEL Output: RX Jabber Frames .............. 0x0x00000000
    C66xx_0: GEL Output: RX Undersized Frames .......... 0x0x00000000
    C66xx_0: GEL Output: RX Fragments .................. 0x0x00000000
    C66xx_0: GEL Output: RX Octets ..................... 0x0x1DB4FB42
    C66xx_0: GEL Output: TX Good Frames ................ 0x0x0001B0DD
    C66xx_0: GEL Output: TX Broadcast Frames ........... 0x0x0000221F
    C66xx_0: GEL Output: TX Multicast Frames ........... 0x0x00000A99
    C66xx_0: GEL Output: TX Pause Frames ............... 0x0x00000000
    C66xx_0: GEL Output: TX Deferred Frames ............ 0x0x00000000
    C66xx_0: GEL Output: TX Collision Frames ........... 0x0x00000000
    C66xx_0: GEL Output: TX Single Collision Frames .... 0x0x00000000
    C66xx_0: GEL Output: TX Multiple Collision Frames .. 0x0x00000000
    C66xx_0: GEL Output: TX Excessive Collision Frames . 0x0x00000000
    C66xx_0: GEL Output: TX Late Collisions ............ 0x0x00000000
    C66xx_0: GEL Output: TX Underrun ................... 0x0x00000000
    C66xx_0: GEL Output: TX Carrier Sense Errors ....... 0x0x00000000
    C66xx_0: GEL Output: TX Octets ..................... 0x0x0771BDE4
    C66xx_0: GEL Output: 64 Byte Octet Frames .......... 0x0x00001E24
    C66xx_0: GEL Output: 65 to 127 Byte Octet Frames ... 0x0x000009BC
    C66xx_0: GEL Output: 128 to 255 Byte Octet Frames .. 0x0x0000029C
    C66xx_0: GEL Output: 256 to 511 Byte Octet Frames .. 0x0x000001F0
    C66xx_0: GEL Output: 512 to 1024 Byte Octet Frames . 0x0x00000035
    C66xx_0: GEL Output: Over 1024 Byte Octet Frames . 0x0x00079EC1
    C66xx_0: GEL Output: Net Octets .................... 0x0x2526BA26
    C66xx_0: GEL Output: RX Start of Frame Overruns .... 0x0x00000000
    C66xx_0: GEL Output: RX Middle of Frame Overruns ... 0x0x000005DA
    C66xx_0: GEL Output: RX DMA Overruns ............... 0x0x00000000
    C66xx_0: GEL Output: -----------------------------------------------

    2. I've checked the register at address 0x02090824 after calling CSL_CPSW_3GF_setFlowControlReg(&flowControlCfg), and I see that flow control is enabled for all the ports.

    3. Yes, we configure the PA with RX_ERROR_HANDLING = 1 and TIMEOUT = 4000000, and we verified that the internal flow control works properly: when one core sends many packets to another core, the received packets are not freed, and the Rx FDQ empties, the PA stalls the transmit and no packets are lost.

    All the internal traffic between the cores works properly.

    4. I added all the MAC init functions related to flow control:

    CSL_CPGMAC_SL_enableFullDuplex (macPortNum);
    CSL_CPGMAC_SL_enableExtControl (macPortNum);
    CSL_CPGMAC_SL_enableRxFlowControl(macPortNum);
    CSL_CPGMAC_SL_enableTxFlowControl(macPortNum);
    CSL_CPGMAC_SL_enableRxCMF(macPortNum);

    But I don't see any difference.

    I think the problem is that when the RX FDQ is empty, the MAC flow control is not triggered by this condition. I think the trigger is only connected to the MAC FIFO and not to the RX FDQ of the PA.

    Any idea?

    Thanks,

    Enrique


  • Enrique,

    I looked at your statistics. There is one suspicious statistic, which I am showing below:

    C66xx_0: GEL Output: RX Middle of Frame Overruns ... 0x0x000005DA

    The switch subsystem has three ports: two ports go out of the device and one port connects to the other modules inside the device. To me it looks like we are dropping the packets on that internal port. I am still verifying this here and will get back to you soon.

    Regards,

    Bhavin

     

  • Enrique,

    Does the frame drop count match, or come close to, the count shown in the statistic below?

    C66xx_0: GEL Output: RX Middle of Frame Overruns ... 0x0x000005DA

    Can you keep sending frames from the host in the error condition and check the statistics multiple times to see whether the RX Middle of Frame Overruns count keeps increasing?

    Regards,

    Bhavin

  • Enrique,

    Have you configured the two registers below with the following values for flow control?

       P1_MAX_BLKS = 0xd7

       P2_MAX_BLKS = 0xd7

     

    Also, have you configured the MAC_CONTROL register for both EMACs to enable RX_FLOW_EN? I know that you configured FLOW_CONTROL in the switch registers, but the register below also needs to be configured in the EMACs.

       MAC_CONTROL, rx_flow_en = 1  (both EMACs)

    If not, then let's try enabling those and see whether you get PAUSE frames or not. A minimal sketch is shown below.
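
    For reference, a minimal sketch of enabling flow control on both EMAC ports with the CSL_CPGMAC_SL calls mentioned earlier in this thread (the assumption here is that the two external EMACs are numbered 0 and 1 at the CPGMAC_SL level):

    Uint32 macPortNum;

    for (macPortNum = 0; macPortNum < 2; macPortNum++)
    {
        CSL_CPGMAC_SL_enableFullDuplex    (macPortNum);
        CSL_CPGMAC_SL_enableRxFlowControl (macPortNum);  /* rx_flow_en = 1 in MAC_CONTROL */
        CSL_CPGMAC_SL_enableTxFlowControl (macPortNum);
    }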

    Regards,

    Bhavin

  • Bhavin,

    Yes, the RX Middle of Frame Overruns counter does not increase when an external device sends packets to the DSP. The MAC gets all the frames, because I see that RX Good Frames exactly matches the number of packets sent from the external device.

    I think that at the MAC level we don't have packet loss. The packet loss happens at the PA level.


    Enrique.


  • Bhavin,

    Thanks for your suggestions.

    I'll check the P1_MAX_BLKS values (when I get back to my office).

    I enabled the EMAC flow control using the functions:

    CSL_CPGMAC_SL_enableFullDuplex (macPortNum);
    CSL_CPGMAC_SL_enableExtControl (macPortNum);
    CSL_CPGMAC_SL_enableRxFlowControl(macPortNum);
    CSL_CPGMAC_SL_enableTxFlowControl(macPortNum);

    But maybe there is a bug or something I am missing.

    Thanks,

    Enrique


  • Enrique,

    Did you check P1_MAX_BLKS?

    Also, if packets are getting dropped at the PA level, then the PA LLD has APIs which can give you statistics at the PA level. Can you check those to see if you are dropping packets at the PA level?

    Regards,

    Bhavin

  • Hi Enrique,

    Do you still have an issue with the flow control? Were you able to check the PA-level stats as Bhavin suggested? Please advise.

    Thanks,

    Arun.

  • Hi Arun,

    Yes, the flow control issue is still open.

    Next week I'll return to my work and I'll check the PA level stats as Bhavin suggested.

    Thanks,

    Enrique

  • Hi Arun,

    Yes, the flow control problem is very critical for our company.

    I checked the PA stats and saw something interesting.

    I did the same test: an external device sends packets to the DSP.

    When the DSP application reads the packets at a high rate and the RX FDQ never empties (normal test), the PA stats show normal values:

    [C66xx_0] --- PA STATS ---
    [C66xx_0] C1 number of packets: 401976
    [C66xx_0] C1 number IPv4 packets: 0
    [C66xx_0] C1 number IPv6 packets: 0
    [C66xx_0] C1 number custom packets: 0
    [C66xx_0] C1 number non IP packets: 0
    [C66xx_0] C1 number llc/snap fail: 127
    [C66xx_0] C1 number table matched: 400862
    [C66xx_0] C1 number failed table matched: 962
    [C66xx_0] C1 number IP frags: 0
    [C66xx_0] C1 number IP depth overflow: 0
    [C66xx_0] C1 number vlan depth overflow: 0
    [C66xx_0] C1 number gre depth overflow: 0
    [C66xx_0] C1 number mpls packets: 0
    [C66xx_0] C1 number of parse fail: 127
    [C66xx_0] C1 number invalid IPv6 opts: 0
    [C66xx_0] C1 number of command failures: 0
    [C66xx_0] C1 number invalid reply dests: 0
    [C66xx_0] C1 number of silent discard: 1089
    [C66xx_0] C1 number of invalid control: 0
    [C66xx_0] C1 number of invalid states: 0
    [C66xx_0] C1 number of system fails: 0

    But when the DSP application reads the packets at a slow rate and the RX FDQ is empty for a moment (flow control test), the PA stats show overflowed values:

    [C66xx_0] --- PA STATS ---
    [C66xx_0] C1 number of packets: 293601280
    [C66xx_0] C1 number IPv4 packets: 168463852
    [C66xx_0] C1 number IPv6 packets: 168433153
    [C66xx_0] C1 number custom packets: -1995190271
    [C66xx_0] C1 number non IP packets: -1207662317
    [C66xx_0] C1 number llc/snap fail: 463230641
    [C66xx_0] C1 number table matched: -1801249892
    [C66xx_0] C1 number failed table matched: -2064918640
    [C66xx_0] C1 number IP frags: 562339
    [C66xx_0] C1 number IP depth overflow: 1811939397
    [C66xx_0] C1 number vlan depth overflow: 0
    [C66xx_0] C1 number gre depth overflow: 2006847749
    [C66xx_0] C1 number mpls packets: -771683830
    [C66xx_0] C1 number of parse fail: 503384586
    [C66xx_0] C1 number invalid IPv6 opts: -1978430957
    [C66xx_0] C1 number of command failures: 22528
    [C66xx_0] C1 number invalid reply dests: 892613426
    [C66xx_0] C1 number of silent discard: 959985462
    [C66xx_0] C1 number of invalid control: 1027357498
    [C66xx_0] C1 number of invalid states: 1094729534
    [C66xx_0] C1 number of system fails: 1162101570

    It looks like a bug in the PA, caused by overflows when packets are sent to the PA while the RX FDQ is empty; after that the PA stops working correctly.

    We urgently need to solve this problem.

    Regards,

    Enrique

  • Hi Arun,

    Can you tell me whether the PA behavior of crashing when the Rx FDQ empties is reproduced on your system?

    Thanks,

    Enrique

  • Hi Enrique,

    Let me take a look. I will update you today.

    Thanks,

    Arun.

  • Hi Arun,

    I want to add more information about this problem:

    The RX FDQ is opened as in the TI example:

    Qmss_queueOpen (Qmss_QueueType_STARVATION_COUNTER_QUEUE, CONFIG_PKT_PA_RX_BUFFER_Q, &isAlloc);

    and the Qmss_setQueueThreshold() function is NOT used for this queue.

    Do you know what the threshold function does? Could it help?

    Regards,

    Enrique

  • Arun,

    We have solved the problem. We redesigned our Ethernet application and the PA crash has disappeared. It may have been a memory access collision on the RX free buffers between the software on the core and the PA.

    Thanks for all the information.

    Regards,

    Enrique