TDA3: Network TX Link throughput improvement

Part Number: TDA3


Hi Stanley,

Here is the E2E request to help improve Network TX throughput. Please provide instructions to modify the code on both the TDA3 and the external host, per our discussion.

Thanks,

--Khai

  • Hi Khai,

    On the TDA3 side, please go to the following file: ~/vision_sdk/links_fw/src/rtos/links_common/network_tx/networkTxLink_drv.c.

    In NetworkTxLink_drvSendData(), you will find NetworkTxLink_drvWriteHeader() and NetworkTxLink_drvWritePayload() are being called.

    The first, NetworkTxLink_drvWriteHeader(), sends the buffer-info header via TCP.

    The second, NetworkTxLink_drvWritePayload(), sends the radar data.

    You can skip the header for now to see whether performance improves.

    On the PC side, please go to ~/vision_sdk/apps/tools/network_tools/network_rx/src/network_rx_main.c.

    In RecvData(), ReadCmdHeader() and ReadData() are called.

    If you skip the header on the TDA3 side, you need to skip it on the PC side as well.
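
    For illustration, a minimal sketch of the TDA3-side experiment is below, with the header write guarded by a build flag. The two function names come from networkTxLink_drv.c as quoted above, but the parameter lists here are placeholders, not the real SDK signatures.

        /* Hypothetical sketch only: make the buffer-info header write
         * switchable so the throughput experiment is easy to toggle.
         * The extern prototypes are stand-ins for the real SDK ones. */
        #define NETWORK_TX_SKIP_HEADER 1  /* 1 = skip header, 0 = original behavior */

        extern int NetworkTxLink_drvWriteHeader(void *pObj, void *pBuf);  /* assumed signature */
        extern int NetworkTxLink_drvWritePayload(void *pObj, void *pBuf); /* assumed signature */

        static int sendOneBuffer(void *pObj, void *pBuf)
        {
            int status = 0;

        #if !NETWORK_TX_SKIP_HEADER
            /* Buffer-info header over TCP; skipped during the experiment */
            status = NetworkTxLink_drvWriteHeader(pObj, pBuf);
        #endif
            if (status == 0)
            {
                /* Radar payload */
                status = NetworkTxLink_drvWritePayload(pObj, pBuf);
            }
            return status;
        }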

    Regards,
    Stanley

  • Hi Stanley,

    I can't afford to drop the header info. I need to know the channel number of the incoming streams, since I have 3 channels sent over from the TDA3; the channel number identifies the stream so I know which one it is.

    Regards,
    --Khai
  • Hi Khai,

    The channel number is used to identify different radars, not radar channels within the same radar. Radar data from all Rx channels will only use ch0 in the Link.

    Are you connecting to multiple radar devices?

    Regards,
    Stanley

  • Hi Stanley,

    I have made changes to the Network_tx Link to output 3 channels: RDM (all 4 Rx channels are lumped into this RDM channel, as described in the EVE doc), metadata (which describes the RDM buffer), and the PkDetect target list.

    On the PC side, I run:

    ./network_rx --host_ip xx.xx.xx.xx --target_ip xx.xx.xx.xx -usetfdtp --verbose --files rdm meta peak

    This will capture the 3 streaming channels into the 3 files.

    So when a stream comes over, I need the header info to tell me which stream it is, so I can write it to the right file descriptor.
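
    Conceptually, the demux I need looks like the sketch below. The StreamHeader layout shown is purely illustrative; the real layout is whatever ReadCmdHeader() parses.

        /* Illustrative only: route each incoming frame to the file that
         * matches its channel number. */
        #include <stdio.h>
        #include <stdint.h>

        typedef struct {
            uint32_t chNum;       /* 0 = rdm, 1 = meta, 2 = peak (hypothetical) */
            uint32_t payloadSize; /* payload bytes that follow the header */
        } StreamHeader;

        /* outFiles[] holds one open FILE* per channel (rdm, meta, peak). */
        static int demuxOneFrame(const StreamHeader *hdr, const void *payload,
                                 FILE *outFiles[], uint32_t numCh)
        {
            if (hdr->chNum >= numCh)
                return -1;  /* unknown stream */

            size_t written = fwrite(payload, 1, hdr->payloadSize,
                                    outFiles[hdr->chNum]);
            return (written == hdr->payloadSize) ? 0 : -1;
        }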

    Regards,

    --Khai

  • Khai,

    The NetworkTx Link with TFDTP streaming is most efficient when sending one big buffer instead of multiple small buffers.

    If you can combine the 3 types of data into one buffer to send, and unpack them on the PC side, it will improve network throughput; see the packing sketch below.

    Otherwise, you will have to live with frame drops on small-buffer transfers, due to the per-transfer overhead.
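
    As a minimal sketch of that packing (the sub-header layout is an assumption for illustration, not an SDK format):

        /* Hypothetical packing scheme: concatenate the three payloads into
         * one transmit buffer, each preceded by a small sub-header so the
         * PC side can split them apart again. */
        #include <stdint.h>
        #include <string.h>

        typedef struct {
            uint32_t chNum; /* which logical stream: rdm, meta, or peak */
            uint32_t size;  /* payload bytes that follow this sub-header */
        } SectionHdr;

        /* Append one section at offset 'off'; returns the new offset,
         * or 0 if the buffer would overflow. */
        static size_t packSection(uint8_t *buf, size_t bufSize, size_t off,
                                  uint32_t chNum, const void *data, uint32_t size)
        {
            SectionHdr h = { chNum, size };

            if (off + sizeof(h) + size > bufSize)
                return 0;
            memcpy(buf + off, &h, sizeof(h));
            memcpy(buf + off + sizeof(h), data, size);
            return off + sizeof(h) + size;
        }

    The PC side then walks the received buffer section by section, reading each sub-header and handing its payload to the matching consumer, so you pay the transfer overhead once per frame instead of three times.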

    Regards,
    Stanley

  • Hi Piyali/Stanley,

    Reducing the amount of data output over Ethernet allows me to increase the frame rate much further. Next, I would like to understand a bit more about the statistics output from the 'p' command, specifically the latency lines quoted below:

    [IPU1-0] 74.956322 s: [ ti.radar.fft ] LATENCY,
    [IPU1-0] 74.956383 s: ********************
    [IPU1-0] 74.956444 s: Local Link Latency : Avg = 2256 us, Min = 2226 us, Max = 2349 us,
    [IPU1-0] 74.956566 s: Source to Link Latency : Avg = 2507 us, Min = 2440 us, Max = 2654 us,
    [IPU1-0] 74.956688 s:
    [IPU1-0] 74.956749 s: CPU [ DSP1], LinkID [ 25], Link Statistics not available !
    [IPU1-0] 74.956902 s:
    [IPU1-0] 74.956963 s: ### CPU [ DSP1], LinkID [ 50],
    [IPU1-0] 74.957054 s:
    [IPU1-0] 74.957268 s: [ ti.radar.pkDetect ] Link Statistics,
    [IPU1-0] 74.957359 s: ******************************
    [IPU1-0] 74.957420 s:
    [IPU1-0] 74.957481 s: Elapsed time = 23973 msec
    [IPU1-0] 74.957542 s:
    [IPU1-0] 74.957603 s: New data Recv = 140.78 fps
    [IPU1-0] 74.957695 s:
    [IPU1-0] 74.957725 s: Input Statistics,
    [IPU1-0] 74.957786 s:
    [IPU1-0] 74.957847 s: CH | In Recv | In Drop | In User Drop | In Process
    [IPU1-0] 74.958396 s:    | FPS     | FPS     | FPS          | FPS
    [IPU1-0] 74.958518 s: --------------------------------------------------
    [IPU1-0] 74.958610 s:  0 | 140.78  | 0.0     | 0.0          | 140.78
    [IPU1-0] 74.958762 s:
    [IPU1-0] 74.958793 s: Output Statistics,
    [IPU1-0] 74.958854 s:
    [IPU1-0] 74.958915 s: CH | Out | Out    | Out Drop | Out User Drop
    [IPU1-0] 74.958976 s:    | ID  | FPS    | FPS      | FPS
    [IPU1-0] 74.959067 s: ---------------------------------------------
    [IPU1-0] 74.959220 s:  0 | 0   | 140.78 | 0.0      | 0.0
    [IPU1-0] 74.959372 s:
    [IPU1-0] 74.959403 s: [ ti.radar.pkDetect ] LATENCY,
    [IPU1-0] 74.959494 s: ********************
    [IPU1-0] 74.959555 s: Local Link Latency : Avg = 903 us, Min = 884 us, Max = 977 us,
    [IPU1-0] 74.959677 s: Source to Link Latency : Avg = 3529 us, Min = 3477 us, Max = 3721 us

    I get these statistics for each Link, down to the Network TX Link, which reports these latency values:

    [IPU1-0] 75.471237 s: [ NETWORK TX ] LATENCY,
    [IPU1-0] 75.471298 s: ********************
    [IPU1-0] 75.471359 s: Local Link Latency : Avg = 1792 us, Min = 122 us, Max = 6862 us,
    [IPU1-0] 75.471512 s: Source to Link Latency : Avg = 5616 us, Min = 3873 us, Max = 10675 us,

    What do these numbers tell me, and how do I interpret them?

    Regards,

    --Khai

  • Hi Khai,

    Please start a new post for a different topic so we can conclude the original issue in this thread.

    Could we close this thread, or do you have any more questions about Network TX throughput?

    Thanks.

    Regards,
    Stanley

  • Hi Stanley,

    I can create a new post, but the question I asked is still throughput-related: it concerns the performance of the Links.

    Thanks,

    --Khai

  • Hi Khai,

    "Local Link Latency" is the time taken for the frame arriving at the IPC_IN link to the frame arriving at the current Link.
    "Source to Link Latency" is the time taken from the frame captured by Capture Link to the frame arriving at the current Link.

    Regards,
    Stanley