66AK2E05: Road to 1 Gb/s on PA EMAC-based Ethernet

Part Number: 66AK2E05

Hi !!

So I have been using the PA EMAC example as a base to transfer data between my PC and the K2E's ARM core.


My problem is simple: when I try to transfer UDP packets at rates above 200-300 Mb/s, I get a lot of packet loss.

I know that the issue is not in the packet accelerator but rather in the queue management and/or interrupt handling, as I am able to see the packets leaving the PA at the higher bit rates.
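In case it helps: this is roughly how I check the fill levels to localize the loss (a sketch; the handles and queue numbers come from my init code, and the calls are the QMSS LLD ones as I have them in PDK 4.0.2):

    #include <stdio.h>
    #include <ti/drv/qmss/qmss_drv.h>

    /* Sketch: localize the loss by watching fill levels. Handles come
     * from Qmss_queueOpen() in my init code; 704 is my PA dest queue. */
    extern Qmss_QueueHnd rxDestQ;   /* queue 704, PA output */
    extern Qmss_QueueHnd rxFdq;     /* Rx free descriptor queue */

    void dumpRxQueueLevels(void)
    {
        /* If the FDQ count trends to 0 while the dest queue stays near 0
         * (the accumulator pops it), descriptors are not being recycled
         * fast enough and the Rx PKTDMA starves. */
        printf("dest=%u fdq=%u\n",
               (unsigned)Qmss_getQueueEntryCount(rxDestQ),
               (unsigned)Qmss_getQueueEntryCount(rxFdq));
    }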

My receive path looks like this:

PA -> queue 704 -> accumulator -> interrupt (just freeing descriptors etc., nothing heavy done here)

I have tried multiple accumulator configurations (larger page entries, interrupt pacing delays, etc.), but I always end up with packet loss.
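For reference, this is roughly how I program the channel (a sketch; the field names follow the QMSS LLD's Qmss_AccCmdCfg as I have it in PDK 4.0.2, so please double-check them against qmss_acc.h):

    #include <string.h>
    #include <ti/drv/qmss/qmss_drv.h>
    #include <ti/drv/qmss/qmss_acc.h>

    /* Sketch: program one high-priority accumulator channel. On Keystone,
     * high-priority queue 704+N pairs with accumulator channel N, so my
     * queue 704 uses channel 0. The list must hold two pages (ping/pong)
     * of (pageEntries + 1) words and be visible to the QMSS PDSP. */
    Qmss_Result setupAccChannel(uint8_t channel, uint32_t queue,
                                uint32_t *list, uint32_t pageEntries,
                                uint16_t pacingTicks)
    {
        Qmss_AccCmdCfg cfg;

        memset(&cfg, 0, sizeof(cfg));
        cfg.channel             = channel;
        cfg.command             = Qmss_AccCmd_ENABLE_CHANNEL;
        cfg.queueEnListAddress  = (uint32_t)list;      /* global address */
        cfg.queMgrIndex         = queue;               /* monitored queue */
        cfg.maxPageEntries      = pageEntries + 1;     /* entries + count word */
        cfg.timerLoadCount      = pacingTicks;         /* interrupt pacing delay */
        cfg.interruptPacingMode = Qmss_AccPacingMode_LAST_INTERRUPT;
        cfg.listEntrySize       = Qmss_AccEntrySize_REG_D;  /* desc pointer only */
        cfg.listCountMode       = Qmss_AccCountMode_ENTRY_COUNT;
        cfg.multiQueueMode      = Qmss_AccQueueMode_SINGLE_QUEUE;

        return Qmss_programAccumulator(Qmss_PdspId_PDSP1, &cfg);
    }

    /* My current setup: channel 0 on queue 704, 16-entry pages */
    static uint32_t accList[2 * (16 + 1)] __attribute__ ((aligned (16)));
    /* ... setupAccChannel(0, 704, accList, 16, 20); ... */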

So my questions are:

  1. Is the queue manager fast enough to achieve 1 Gb/s with just one queue?
  2. How can I check where the packet loss is occurring after the packets leave the PA?
  3. If a single queue isn't fast enough, can I use multiple accumulator channels to create an alternating load-and-reset mechanism that uses 2 queues, 2 channels, and 2 interrupts, i.e. half the packets in queue 1 and half in queue 2? (See the sketch after this list.)
  4. Is there any other way I can get 1 Gb/s without using the NDK or Linux?
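For question 3, what I have in mind would just run the helper from my sketch above once per channel (untested; it assumes the packets already get spread across the two queues, e.g. by two Rx flows or PA routing rules):

    #include <ti/drv/qmss/qmss_drv.h>
    #include <ti/drv/qmss/qmss_acc.h>

    /* Sketch for question 3 (untested): two channels, two queues, two
     * interrupts, so one half of the traffic can be drained while the
     * other half accumulates. setupAccChannel() is from my sketch above. */
    extern Qmss_Result setupAccChannel(uint8_t channel, uint32_t queue,
                                       uint32_t *list, uint32_t pageEntries,
                                       uint16_t pacingTicks);

    static uint32_t accList0[2 * (16 + 1)] __attribute__ ((aligned (16)));
    static uint32_t accList1[2 * (16 + 1)] __attribute__ ((aligned (16)));

    void setupPingPong(void)
    {
        setupAccChannel(0, 704, accList0, 16, 0);   /* INTC event for channel 0 */
        setupAccChannel(1, 705, accList1, 16, 0);   /* separate event, channel 1 */
    }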

My system specs are as follows:

K2E, ARM core 0

PDK 4.0.2

My application is bare metal but reuses ~90% of the PA EMAC example code.

Send and receive work perfectly at lower rates.

The original PA_emac example has the same issue.

Regards

  • Hi,

    I've notified the team. Their feedback will be posted here.

    Best Regards,
    Yordan
  • Please note the following observations:

    With 32 descriptors in the Rx FDQ, I get 0% packet loss when the number of incoming packets is less than ~32-34.

    With 48 descriptors in the Rx FDQ, I get 0% packet loss when the number of incoming packets is less than ~50.

    My accumulator list page has a length of 16 packets.

    If I send more packets than the number of Rx descriptors, I get packet loss.
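    Back-of-envelope, this matches the descriptor headroom I would expect to need; every number below is an assumption about my traffic, not a measurement:

        /* Back-of-envelope descriptor headroom; all numbers here are
         * assumptions about my traffic, not measurements. */
        #include <stdio.h>

        int main(void)
        {
            double lineRate   = 1e9;             /* bit/s                      */
            double frameBits  = (128 + 20) * 8;  /* ~128B frame + preamble/IFG */
            double pktRate    = lineRate / frameBits;  /* ~845k packets/s      */
            double isrLatency = 40e-6;           /* assumed worst case until the
                                                    ISR has recycled descriptors */

            /* The FDQ must bridge the recycle latency or the PKTDMA starves:
               ~845k pkt/s * 40 us ~= 34 descriptors, which lines up with the
               ~32-34 packet limit I see with a 32-descriptor FDQ. */
            printf("pkts/s = %.0f, FDQ depth needed >= %.0f\n",
                   pktRate, pktRate * isrLatency);
            return 0;
        }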
  • Hi,

    The best way to achieve high throughput is to use the Linux OS. For RTOS, there was an HUA demo developed for Keystone I devices based on the NDK, but it was discontinued in the Processor SDK RTOS release. The PA example under the EMAC driver is intended to show packet Tx/Rx, not high throughput. I need to check how to improve it or what throughput can be expected. Do you see the packet drops in the Rx direction (PC to K2E) or in Tx?

    Regards, Eric
  • Hello Eric !!

    The packets drop at the Rx end. More specifically, I think I am not able to replenish the Rx free descriptor queue fast enough.
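    For context, my accumulator ISR is essentially the following (a sketch; the ack/EOI calls are the QMSS LLD ones, the remaining names are mine):

        #include <ti/drv/qmss/qmss_drv.h>

        /* Sketch of my accumulator ISR: walk the page the firmware just
         * filled, recycle every descriptor to the Rx FDQ, then ack + EOI
         * so the channel can reuse the page. Handles are set up at init. */
        #define ACC_CHANNEL      0
        #define ACC_PAGE_ENTRIES 16           /* packets per page (+1 count word) */

        extern uint32_t      accList[];       /* ping/pong list given to the PDSP */
        extern Qmss_QueueHnd rxFdq;           /* Rx free descriptor queue */
        static uint32_t      usePong = 0;

        void rxAccIsr(void)
        {
            /* Each page is (entries + 1) words: a count word, then pointers */
            uint32_t *page  = &accList[usePong * (ACC_PAGE_ENTRIES + 1)];
            uint32_t  count = page[0];
            uint32_t  i;

            for (i = 0; i < count; i++)
            {
                void *desc = (void *)QMSS_DESC_PTR(page[1 + i]);
                /* ... consume the payload here, as briefly as possible ... */
                Qmss_queuePushDesc(rxFdq, desc);   /* recycle to the FDQ ASAP */
            }

            usePong ^= 1;

            /* Ack the page and signal end of interrupt to the INTD */
            Qmss_ackInterrupt(ACC_CHANNEL, 1);
            Qmss_setEoiVector(Qmss_IntdInterruptType_HIGH, ACC_CHANNEL);
        }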
  • Well, I was wrong in assuming that the issue is in the free descriptor queue. The real problem is that the frame count at host port 0 != port 1, as can be seen below. Any help in resolving this?

    Stats for block number: 0
    ********************************************
           ...
           Good Frames Sent                          576
           ...
           Total Tx&Rx with Octet Size of 65 to 127  576
           ...
           Sum of all Octets Tx or Rx on the Network 72576
    ********************************************

    Stats for block number: 1
    ********************************************
           Good Frames Received                      800
           ...
           Total Tx&Rx with Octet Size of 65 to 127  800
           ...
           Sum of all Octets Tx or Rx on the Network 100800
    ********************************************

  • Hi,

    Is this your customized board or the TI K2E EVM? The switch is a 5-port Gb switch. Can you use a bigger UDP packet size (e.g. 1500 bytes) to see whether you still get packet loss in the switch? If the host port Tx good frame count is less than the SGMII port Rx good frame count, do you see any error packet counts on the host port?

    Regards, Eric
  • Hello !!

    This is the stock K2E EVM. The problem started happening when I set "rxFlowCfg.rx_error_handling = 1;" while creating the Rx flow. If I set this value to 0, I get 800 on both port 1 and port 0, but then there is high packet loss after the data exits the PA. All other stats values are 0 and hence omitted.
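    For reference, my flow setup looks roughly like the sketch below (only the fields that matter here; cppiHnd comes from Cppi_open() at init). If I read the Navigator spec right, rx_error_handling selects what the PKTDMA does on FDQ starvation: 0 drops the packet inside the PKTDMA, so both switch ports count the same frames and the loss shows up after the PA, while 1 makes the PKTDMA retry until a descriptor appears, which back-pressures into the switch and explains the 576 vs 800 above.

        #include <string.h>
        #include <ti/drv/cppi/cppi_drv.h>

        /* Sketch: only the Rx flow fields relevant to the drop behavior */
        Cppi_FlowHnd openRxFlow(Cppi_Handle cppiHnd, uint32_t destQ, uint32_t fdq)
        {
            Cppi_RxFlowCfg rxFlowCfg;
            uint8_t        isAllocated;

            memset(&rxFlowCfg, 0, sizeof(rxFlowCfg));
            rxFlowCfg.flowIdNum         = CPPI_PARAM_NOT_SPECIFIED; /* LLD picks */
            rxFlowCfg.rx_dest_qnum      = destQ;   /* 704 in my case */
            rxFlowCfg.rx_fdq0_sz0_qnum  = fdq;     /* Rx free descriptor queue */
            rxFlowCfg.rx_desc_type      = Cppi_DescType_HOST;
            rxFlowCfg.rx_error_handling = 1;       /* 0 = drop on starvation,
                                                      1 = retry (back-pressure) */

            return Cppi_configureRxFlow(cppiHnd, &rxFlowCfg, &isAllocated);
        }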
  • Waiting for your update !!