
BCP Packet Miss

Hi,

In my application, we are using the BCP in loopback mode alongside other LTE uplink and downlink processing. Occasionally, no output comes out of the BCP for a loopback packet, i.e. the BCP does not pop a descriptor from the output FDQ and push it to the output queue for the configured flow ID. According to the BCP debug info, the packet has gone through the TM module (verified using destnTag). Neither the TM nor any other module reports an error.

Any input on this issue?

Ronak

  • Hi Ronak,

    Please provide the information below so we can help you.

    1. DSP Part number.

    2. Packages used and its version.

    Note:

    TCI66xx devices are supported directly through local Field Applications Engineers (FAEs). These devices are not supported on the E2E forum. Please contact your local FAE for support of these devices. If you are not sure who your local FAE is, please contact your local technical sales representative and they will put you in contact with your local FAE.

    Thanks.

  • Hi Rajasekaran,

    I am working on EVM 6670. 

    PDK version : pdk_C6670_1_0_0_21

    Let me know if anything else is required.

    Ronak

  • Hi Ronak,

    1. Are you using the accumulator or software polling to monitor the RX queue?
    2. After you miss an output packet, does the BCP stop completely or is it able to process further input packets and continue producing output packets?

    One reason for missing output packets could be descriptor starvation, i.e. the Rx free descriptor queue running out of free descriptors, either permanently due to an error in the descriptor recycling code, or momentarily because recycling does not keep up with BCP processing. Since you are using loopback mode, all submodules except TM are removed from the chain, so the BCP processes packets faster and consumes Rx free descriptors quicker than the application envisioned when dimensioning the free descriptor queue.

    1. The simplest way to prevent this is to increase the number of descriptors in the Rx FDQ.

    2. You can also enable error handling in the Rx Flow configuration, so that the Packet DMA retries instead of dropping the packet on starvation:
        Bcp_RxCfg   rxCfg;

        rxCfg.rxQNum                    =   rxQueueNum;
        rxCfg.bUseInterrupts            =   0;

        memset (&rxCfg.flowCfg, 0, sizeof(Cppi_RxFlowCfg));
        rxCfg.flowCfg.flowIdNum         =   flowIdx;
        rxCfg.flowCfg.rx_dest_qmgr      =   0;
        rxCfg.flowCfg.rx_dest_qnum      =   rxQueueNum;
        .... .... .... .... .... ....
        rxCfg.flowCfg.rx_error_handling =   1;  // 0 = drop the packet on starvation, do not retry; 1 = retry the transfer
     
    A value of 0 instructs the Packet DMA to drop the packet in case of descriptor starvation. Setting it to 1 instructs it to retry the operation, depending on the TIMEOUT value configured in the Packet DMA Performance Control register. Please see sections 4.2.4.1 (Rx Flow N Configuration Register A (0x000 + 32xN)) and 4.2.1.2 (Performance Control Register (0x04)) of the KeyStone Multicore Navigator User Guide for details on rx_error_handling.

    Another possible cause (if you are using the accumulator) is accumulator descriptor list overflow. If that is the case, try increasing the number of entries in the accumulator list.

    Regards
    -Nitin