This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

RTOS/DRA746: GMAC AVB configuration

Part Number: DRA746
Other Parts Discussed in Thread: SYSBIOS

Tool/software: TI-RTOS

Hi,

    we are trying to transmit AVB traffic with the GMAC. The design is that GMAC interrupts are routed to both the IPU and the MPU: the AVB talker is implemented on the IPU under SYS/BIOS, and non-AVB traffic is handled on the MPU under Linux. CPDMA channel 7 is reserved for AVB traffic (priority 2), and CPDMA channel 0 is reserved for non-AVB traffic (priority 2). The TO_PORT field in the CPPI TX buffer descriptors is set so that AVB packets are sent directly to port 1. We disable all channels' TX interrupts in WR_C0_TX_EN except channel 7's. On channel 7, SYS/BIOS transmits two AVB packets every 1.333 ms. On the MPU, channel 0's TX handler is called based on the 1.333 ms AVB interrupts.
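For reference, the directed-packet setup described above might look like the sketch below. The bit positions (TO_PORT_EN at bit 20, TO_PORT at bits 17:16 of the descriptor flags word) follow the common CPSW CPPI layout, but please verify them against the DRA7 TRM; the function name is ours for illustration, not from the driver.

```c
#include <stdint.h>

/* Assumed bit positions in the CPPI TX buffer descriptor flags word
 * (check the DRA7 TRM "CPPI Buffer Descriptors" section). */
#define CPPI_SOP            (1u << 31)  /* start of packet */
#define CPPI_EOP            (1u << 30)  /* end of packet */
#define CPPI_OWNER          (1u << 29)  /* owned by the port until done */
#define CPPI_TO_PORT_EN     (1u << 20)  /* enable directed-packet mode */
#define CPPI_TO_PORT_SHIFT  16u         /* TO_PORT field, bits 17:16 */

/* Build the flags word for a single-buffer packet forced out of
 * external port 'port' (1 or 2), bypassing ALE-based forwarding. */
uint32_t cppi_tx_flags(uint32_t pkt_len, uint32_t port)
{
    return CPPI_SOP | CPPI_EOP | CPPI_OWNER |
           CPPI_TO_PORT_EN | (port << CPPI_TO_PORT_SHIFT) |
           (pkt_len & 0x7FFu);
}
```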

     The above is our design framework. However, we found that the non-AVB traffic on the MPU side cannot be pushed very high, even though the AVB registers are configured according to the TRM. The following is our configuration:

#pri 2 --> switch queue 3, prio 0 --> switch queue 2

devmem2 0x48484118 w 0x302

devmem2 0x48484218 w 0x302

#port 1, priority 3 queue as the shaping queue
devmem2 0x48484010 w 0x40000
#channel 7 is rate limited, and rate-limit mode is enabled
devmem2 0x48484210 w 0x88240c0
#P1 send percent 10%
devmem2 0x48484228 w 0xa0a00
#channel 7 is rate limited
devmem2 0x48484820 w 0x8001


#priority 2 rate
devmem2 0x48484838 w 0x10009
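The priority-map writes above (value 0x302) pack one 4-bit entry per packet priority, with priority N in bits [4N+3:4N]; decoding 0x302 gives priority 0 -> 2 and priority 2 -> 3, which matches the comment at the top of the script. A small decode helper (our own, for illustration; check the exact register definition in the TRM):

```c
#include <stdint.h>

/* Extract the 4-bit map entry for packet priority 'prio' (0..7) from a
 * CPSW-style priority-map register value: priority N sits in bits
 * [4N+3:4N]. E.g. 0x302 maps priority 0 -> 2 and priority 2 -> 3. */
uint32_t pri_map_entry(uint32_t map, unsigned prio)
{
    return (map >> (4u * prio)) & 0xFu;
}
```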

   With this configuration, when we increase the non-AVB bandwidth to 60 Mbps on the MPU side with iperf, the 1.333 ms AVB traffic interval on the IPU side becomes unstable, which prevents the listener from locking. We added log prints to check the time difference between an AVB packet being submitted to the channel queue and its TX interrupt, and observed that the gap is sometimes very large; some samples are given below. In my opinion, AVB traffic should be guaranteed to transmit first, and since we disable all channels' TX interrupts except the AVB channel's, the interval between packet submission and the TX interrupt should be stable. I'm not sure whether my AVB configuration is correct, or whether, once the CPDMA starts fetching packets from channel 0, it cannot switch to channel 7 until all packets on channel 0 are completed. Could anyone give some advice?

5400, 8250, 5400, 6900, 5850, 6750, 5550, 8400, 5700, 6600, 5700, 6900, 5400, 7050, 5250, 619200, 159300, 6750, 5100, 6900
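To quantify the jitter, one can scan the logged submit-to-interrupt gaps for the worst case; in the samples above, most gaps cluster around 5000-8000 while the outlier (619200) is roughly two orders of magnitude larger. A trivial helper for that scan (illustration only, not from the driver):

```c
/* Return the largest value in a list of measured submit-to-interrupt
 * gaps; a single huge outlier indicates the AVB channel was starved. */
long max_gap(const long *gaps, int n)
{
    long m = gaps[0];
    for (int i = 1; i < n; i++)
        if (gaps[i] > m)
            m = gaps[i];
    return m;
}
```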

  • Hi,

    Have you tried setting a higher priority for the CPDMA channel corresponding to AVB traffic? Also where do you configure the 4/3 ms interval for AVB transmission?

    Regards,
    Anand

  • Hi Anand,

       Have you tried setting a higher priority for the CPDMA channel corresponding to AVB traffic?

        ----do you mean the AVB packet priority? It is specified by the customer.

        Also where do you configure the 4/3 ms interval for AVB transmission?

        ----it is implemented via a timer interrupt.

    Best Regards

  • Hi, 

      Could someone familiar with the GMAC give some advice?

       Thanks very much.

    Best Regards 

  • Hi Kenshion,

    Please help me understand the problem better. Do you see a large gap between AVB packet TX interrupts only when you increase the throughput of non-AVB traffic? Ideally, traffic in one channel shouldn't be affected this much by another; I will look at the CPDMA implementation and get back to you on that. If it is really the case that the CPDMA sees equal priority for both channels and decides to switch to the non-AVB channel's buffer dequeue for some reason, we may have to play with the priorities a bit. I will take a look at the implementation of the submitPacketQueue() function and get back to you if that is actually what happens.

    Regards,
    Anand

  • Hi Anand,

       Do you see a large gap between AVB packet Tx interrupts only when you increase throughput of non-AVB traffic? 

       ----yes, a large gap between an AVB packet being submitted (LOCAL_submitPacketQueueToChannel) and its TX interrupt is observed when the throughput of non-AVB traffic is increased.

        I also observed the following description in TRM:

       "The port will transmit packets until all queued packets have been transmitted and the queue(s) are empty."

        I'm not sure, when AVB packets are queued in the high-priority rate-limited channel while the port is transmitting non-AVB packets from the non-rate-limited channel, whether the port can switch to the rate-limited channel before all packets queued on the non-rate-limited channel have been transmitted.

    Best Regards 

  • Hi Anand,

      we tried enabling dual MAC mode on the Linux side, and the gap between AVB packet submission and the AVB TX interrupt then became much better, even when the bandwidth of non-AVB traffic is large. Our system can tolerate this gap, although sometimes it still exceeds 100 us. Is there any distinct difference in the packet-transmit implementation between single MAC and dual MAC mode that could affect TX completion time?

    Best Regards

  • Hi Kenshion,

    I am not sure how dual MAC mode is implemented in Linux. Anyway, I wonder why it would matter, as you are only using a single port in your application. Are you sure there were no other changes, and that there are no packet drops whatsoever in the case where it improved?

    Regards,
    Anand

  • Hi Anand,

         why it would matter as you are only using single port in your application.

    ------I noticed the following description of the P0_TX_IN_CTL register in the TRM. I want to put port 0 in rate-limiting mode, so I tried single MAC mode first.

         "Note that Dual MAC mode is not compatible with escalation or shaping because dual MAC mode forces round robin priority on FIFO egress. Rate-limiting and shaping are still available for Port 1 and Port 2 when Port 0 is set in dual MAC mode."

         You sure there were no other changes which you did, and there are no packet drops whatsoever in the case where it improved?

    -----yes, there are no other changes. And from the iperf UDP test results, the packet drop rate can be tolerated.

    Best Regards

  • Hi Kenshion,

    I am not quite sure I understand your requirement in that case. So ideally you want to use only P1, since you need to apply rate limiting on the host port. Is that right?

    Regards,
    Anand

  • Hi Anand,

         So ideally you want to use only P1, since you need to apply rate limiting on the host port. Is it?

        ----- My requirement is that AVB packets are forwarded to P1 while other Ethernet packets are forwarded to P2. To ensure that AVB packets are fetched by the CPDMA and forwarded to P1 before other packets, rate limiting is used on host port 0. I'm not sure my description is clear enough; if anything is confusing, please let me know.

    Best Regards

                                 

  • Hi Kenshion,

    Help me understand the scenario better. You need both ports, P1 and P2. P2 will be your gen-ethernet port and P1 will be AVB. Please note that P0 is the CPDMA or host port. P1 and P2 are the MAC ports available to you in the board. You are in dual mac mode, and you are able to receive both AVB and Ethernet traffic, as expected. So the only issue is that AVB TX is delayed sometimes? Especially when you increase non-AVB traffic? Is that the only issue? Or is there something more?

    Regards,
    Anand

  • Hi Anand,

        Please note that P0 is the CPDMA or host port. P1 and P2 are the MAC ports available to you in the board.

       ---we totally agree and understand. 

         You are in dual mac mode, and you are able to receive both AVB and Ethernet traffic, as expected.So the only issue is that AVB TX is delayed sometimes? Especially when you increase non-AVB traffic? Is that the only issue?

         -----yes, the only issue is that AVB TX is sometimes delayed, especially when increasing non-AVB traffic, regardless of single MAC or dual MAC mode. But in dual MAC mode the AVB TX delay is much smaller and can meet our requirement.

    Best Regards

  • Hi Kenshion,

    What is the restriction on using dual MAC mode?

    Regards,
    Anand

  • Hi Anand,

         I have posted the consideration that made me choose single MAC mode in a previous reply.

    "

    ------I noticed the following description of the P0_TX_IN_CTL register in the TRM. I want to put port 0 in rate-limiting mode, so I tried single MAC mode first.

         "Note that Dual MAC mode is not compatible with escalation or shaping because dual MAC mode forces round robin priority on FIFO egress. Rate-limiting and shaping are still available for Port 1 and Port 2 when Port 0 is set in dual MAC mode."

    "

        I'm not sure whether my understanding is correct. Does this mean that only port 0's egress policy is restricted, or is the CPDMA's rate-limit mode also restricted? Could you give a detailed explanation of this description in the TRM?

    Best Regards

  • Hi Kenshion,

    Port 0 here means the host port, or the gate to CPDMA. Since this is the common endpoint for both the external ports when used in dual mac mode, it makes sense to do the FIFO egress following a round robin priority assignment. Port 1 and 2 are the external MAC ports which have different independent FIFOs. So applying rate limiting on P1 and P2 can be done regardless of Dual MAC Mode or not.

    Regards,
    Anand

  • Hi Kenshion,

    Any updates on this issue?

    Regards,
    Anand

  • Hi Anand,

       Sorry for the delay in responding. I understand the round-robin priority assignment on port 0 in dual MAC mode. However, what I want is rate-limited mode on port 0, so that packets queued in the high-priority channel are transmitted first. This is why I want to use single MAC mode: in dual MAC mode only round-robin priority is allowed on port 0. What confuses me is that in single MAC mode, rate-limited mode adds more delay to transmitting the high-priority channel's packets.

    Best Regards

  • Hi kenshion,

    Your requirement is to use both MAC ports anyway, I suppose (you mentioned this in a previous reply). So isn't using single MAC mode out of the question? You can still use traffic shaping on the external MAC ports and then perhaps play around with the tx_priority rates to ensure that the higher priority is sent out first. I think I would only be able to help further if I recreate the setup. Which version of the SDK are you using?

    Regards,
    Anand

  • Hi kenshion,

    Are there any updates on this?

    Regards,
    Anand

  • Hi Anand,

        isn't using single mac mode out of question?

    -----as I mentioned previously, there is no explicit requirement for single or dual MAC mode. What we actually need is for AVB traffic to be transmitted at high priority with minimal delay. Unfortunately, in single MAC mode a long delay was observed even with the rate-limit configuration.

         Which version of the SDK are you using?

    ----Our supplier ported the GMAC_SW driver from the Vision SDK to the IPU, but they did not provide any version information to us. Do you need any other information?

    Best Regards 

  • Hi Kenshion,

    I don't understand when you say "ported from Vision SDK to IPU". The NSP GMACSW driver is based on SYS-BIOS and will be built for IPU anyways. How have you implemented the AVB talker in IPU?

    Regards,
    Anand

  • Hi Anand,

       our system on the M4 is highly customized, based on SYS/BIOS. There was no GMACSW driver or AVB code in the original system; our supplier ported the GMAC driver from the Vision SDK to the IPU and implemented the AVB talker based on TI's talker sample.

    Best Regards 

  • Hi Kenshion,

    I am not sure in that case if I will be able to reproduce the issue at my side. Did you port just the AVBTP driver or both AVB and NSP?

    Regards,
    Anand

  • Hi Kenshion,

    Please give more details on the porting done.

    Regards,
    Anand

  • Hi Anand,

        Both AVB and NSP were ported. In our original system, Ethernet was not required, so there were no Ethernet-related modules.

    Best Regards

  • Hi Kenshion,

    Thanks for clarifying. In that case, from a SW point of view I can't really tell you what to change, as you are not using the RTOS drivers provided. But from an IP perspective, I would like to give conclusive answer:

    Since you need AVB traffic on one port and non-AVB on another, you have to use dual MAC mode. I am not sure what your ALE settings are, so I can't really predict the behaviour of the second port of the GMACSW hardware when used in single MAC mode. The reduced delay in AVB TX interrupts that you observed in dual MAC mode might be because the CPDMA has a better chance of egressing arriving packets, since you are directing packets to the external ports. I think the only solution left is to reduce the priority of the non-AVB traffic, as you are stuck with a fixed AVB traffic priority.

    Regards,
    Anand

  • Hi Kenshion,

    Any updates on this?

    Regards,
    Anand

  • Hi Anand,

        Since you need AVB traffic on one port, and NON-AVB on another port, you have to use dual mac mode.

        ------I thought single MAC mode could also be used while distributing different traffic to different external ports according to the TO_PORT field in the CPPI buffer descriptor.

         I am not sure how your ALE settings are.

        ---- At power-up, SYS/BIOS first applies the ALE settings inherited from the Vision NSP SDK defaults, then Linux applies its ALE settings with the original cpsw driver. One slight difference is that the ALE_BYPASS bit is set. Since the TO_PORT field in the CPPI buffer descriptor is set, I assumed the ALE settings would not affect packets forwarded from port 0 to external port 1 or 2.

        I think the only solution left is to reduce the priority of non-AVB traffic as you're stuck on a fixed AVB traffic priority. 

        -----as I described previously, in single MAC mode, configuring a low priority for the non-AVB traffic does not reduce the delay between packet submission and the TX interrupt.

        As a result, can I conclude that the GMAC does not fully support FQTSS in single MAC mode?

    Best Regards

  • Hi Kenshion,

    With the current implementation of the driver, I guess that would be the case.

    Regards,
    Anand

  • Hi Anand,

       Got it. I appreciate your kindness and support.

    Best Regards