
TDA4VE-Q1: Eth buffer BUSY

Part Number: TDA4VE-Q1


Hello,

I'm trying to initiate transmission of several Ethernet frames from the upper layers.

When a buffer is requested by the upper layer, Eth_Queue_remove() is called, which sets the Eth_CfgPtr->dmaCfgPtr->egressFifoCfgPtr->quePtr->head/tail pointers to NULL.

On the next buffer request, since these pointers are already NULL, the frame is not added to the queue and the Eth driver reports BUFFER_BUSY to the upper layer.

Those pointers are initialized in Eth_Queue_add(), which is called by Eth_TxConfirmation().
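
For illustration, here is a minimal sketch of the queue behavior I am describing. Eth_Queue_remove() and Eth_Queue_add() are the driver's functions, but the struct layout below is my simplified assumption, not the actual driver code:

    /* Simplified sketch of the egress queue behavior described above.
       The struct layout is an assumption for illustration only. */
    typedef struct Eth_QueueNode {
        struct Eth_QueueNode *next;
    } Eth_QueueNode;

    typedef struct {
        Eth_QueueNode *head;
        Eth_QueueNode *tail;
    } Eth_Queue;

    /* Removing the last queued buffer leaves head == tail == NULL... */
    static Eth_QueueNode *Eth_Queue_remove_sketch(Eth_Queue *q)
    {
        Eth_QueueNode *node = q->head;
        if (node != NULL) {
            q->head = node->next;
            if (q->head == NULL) {
                q->tail = NULL; /* queue now empty: next request gets BUFFER_BUSY */
            }
        }
        return node;
    }

    /* ...and Eth_Queue_add(), called from Eth_TxConfirmation(), restores them. */
    static void Eth_Queue_add_sketch(Eth_Queue *q, Eth_QueueNode *node)
    {
        node->next = NULL;
        if (q->tail == NULL) {
            q->head = node; /* first element re-initializes head and tail */
        } else {
            q->tail->next = node;
        }
        q->tail = node;
    }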

Initially I had only one FIFO configured in the Eth driver. I've tried adding a few more, but with no result.

Could you give me some hints on how I could approach this issue?

Regards,

Octavian

  • Hi,

    When a buffer is requested by the upper layer, Eth_Queue_remove() is called, which sets the Eth_CfgPtr->dmaCfgPtr->egressFifoCfgPtr->quePtr->head/tail pointers to NULL.

    How many buffers are configured? 

    If you use all of the allocated buffer space, there will be no buffer available for submitting a new transfer request until a pending transfer is completed.

    Could you give me some hints on how I could approach this issue?

    You also need to have descriptors to hold the buffers.

    As per the latest TI SDK, we have 16 descriptors and a total buffer size of 24576U (each descriptor with a 1536-byte buffer, i.e. 1536 * 16).
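
    For illustration, the sizing rule is simply the descriptor count times the per-descriptor buffer size. The macro names below are hypothetical, not from the SDK:

        /* Hypothetical macros illustrating the sizing rule above. */
        #define ETH_TX_DESC_COUNT      (16U)   /* number of Tx descriptors */
        #define ETH_TX_BUF_SIZE_BYTES  (1536U) /* buffer owned by each descriptor */
        #define ETH_TX_TOTAL_BUF_BYTES (ETH_TX_DESC_COUNT * ETH_TX_BUF_SIZE_BYTES) /* = 24576U */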

    Best Regards,
    Sudheer

  • Hi Kumar,

    Thank you for your quick response.

    I'm not sure which buffers and descriptors you are talking about.

    I've seen that, based on those configurable FIFOs, the Eth driver generates some buffers and descriptors.

    Do you mean that I should configure 16 FIFOs?

    Here are the generated variables for each FIFO:

  • Hi,

    Here are the generated variables for each FIFO:

    The above configuration is correct. It has 128 descriptors, i.e. 128 buffers, and a total buffer size of 196608 bytes (each descriptor with a 1536-byte buffer, i.e. 1536 * 128).

    Are you facing the same issue even after updating to 128 buffers?

    Which SDK version are you using?
    Have you verified how many buffers are requested by the application before transfer completion? Are all 128 requested?
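
    One way to verify this is to instrument the application with a simple in-flight counter; a sketch, where the two hook functions are placeholders for wherever your stack grants a Tx buffer and receives the Tx confirmation:

        #include <stdint.h>

        /* Sketch: count buffers held between a successful buffer request and
           the matching Eth_TxConfirmation(). Hook names are placeholders. */
        static volatile uint32_t inFlight    = 0U;
        static volatile uint32_t inFlightMax = 0U;

        void App_OnTxBufferGranted(void) /* after a successful buffer request */
        {
            inFlight++;
            if (inFlight > inFlightMax) {
                inFlightMax = inFlight; /* high-water mark: does it reach 128? */
            }
        }

        void App_OnTxConfirmed(void) /* from the Eth_TxConfirmation() path */
        {
            if (inFlight > 0U) {
                inFlight--;
            }
        }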

    Best Regards,
    Sudheer



  • From what I see, for each FIFO that I configure, two arrays of 128 elements are generated: Descriptor_0[128] and Descriptor_1[128].

    Do I need two or more FIFOs? Or would one be enough, meaning the issue is not related to how many FIFOs I configure?

    We are using pdk_09_02_00_30.

    It seems that all 128 are requested faster than transmission takes place.


    Several buffer requests take place while all buffers are in use, and that's why I get BUFFER_BUSY.

  • Hi,

    From what I see, for each FIFO that I configure, two arrays of 128 elements are generated: Descriptor_0[128] and Descriptor_1[128].

    Do I need two or more FIFOs? Or would one be enough, meaning the issue is not related to how many FIFOs I configure?

    We are using pdk_09_02_00_30.

    From the TI SDK configurator files, I could see only one set of memory, i.e. "Eth_Ctrl_0_Ingress_Descriptor_0".
    The same is mapped in the config.h file for "Eth_GetIngressFifoDescAddress". Please refer to the capture below for better understanding.

    It seems that all 128 are requested faster than transmission takes place.

    Yes, this is one possible cause of the buffer-busy condition.
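
    In that case the upper layer should treat the busy return as backpressure and retry after the next Tx confirmation instead of dropping the frame. A rough sketch, assuming a classic AUTOSAR-style Eth_ProvideTxBuffer(); the typedefs are placeholders and the exact prototype differs between AUTOSAR releases, so please check your MCAL headers:

        #include <stdint.h>

        /* Placeholder typedefs; use the ones from your MCAL headers. */
        typedef uint8_t  uint8;
        typedef uint16_t uint16;
        typedef uint32_t Eth_BufIdxType;
        typedef enum { BUFREQ_OK, BUFREQ_E_NOT_OK, BUFREQ_E_BUSY } BufReq_ReturnType;

        extern BufReq_ReturnType Eth_ProvideTxBuffer(uint8 CtrlIdx,
                                                     Eth_BufIdxType *BufIdxPtr,
                                                     uint8 **BufPtr,
                                                     uint16 *LenBytePtr);

        /* Returns 1 if a buffer was granted, 0 if the caller should keep the
           frame queued and retry after the next Eth_TxConfirmation(). */
        int App_TryGetTxBuffer(uint8 ctrlIdx, Eth_BufIdxType *idx,
                               uint8 **buf, uint16 *len)
        {
            BufReq_ReturnType ret = Eth_ProvideTxBuffer(ctrlIdx, idx, buf, len);

            if (ret == BUFREQ_E_BUSY) {
                /* All descriptors are in flight: back off and retry later,
                   do not drop the frame. */
                return 0;
            }
            return (ret == BUFREQ_OK) ? 1 : 0;
        }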

    Best Regards,
    Sudheer