TMS320C6457: PBM resource in NDK library

Part Number: TMS320C6457

I have received inquiries about the PBM (Packet Buffer Manager). These inquiries relate to the following E2E thread: https://e2e.ti.com/support/processors/f/791/p/905776/3370042#3370042.

1. We received an expert's comment that "PBM resources are shared between send and receive." Does this mean that the PBM resource pool is the sum of the "TCP send buffer size" and the "TCP receive buffer size"?

2. The expert said that the maximum TCP transmit buffer size is 65535 bytes. Is the maximum TCP receive buffer size also 65535 bytes? In other words, can the customer assume that the maximum PBM resource usage is 65535 bytes x 2?

3. Regarding PBM resources, could you share any helpful documentation or user-guide information? From what I could find, the following user guide may be helpful, but your expert comments on this would be appreciated.

Best regards, Miyazaki

  • Hi Takayuki,

    Hopefully this page answers your memory-related questions about the NDK: https://processors.wiki.ti.com/index.php/TI-RTOS_Networking_Stack_Memory_Usage

    Todd

  • Hello Todd,

    I tried to find the answers to #1 and #2 on that Processor Wiki page, but was not able to. According to the NDK user's guide (https://www.ti.com/lit/pdf/spru523), it seems the user is able to set MMALLOC_MAXSIZE to 65500. Can we have your comments on those inquiries, please?

    Best regards, Miyazaki

  • Miyazaki,

    The PBM packets are used by the low level EMAC driver.

    Let's look at the receiving side first. When an Ethernet packet comes in, it is placed into a PBM packet. A PBM packet is the maximum size of an Ethernet packet. If larger TCP or UDP packets are sent to the device, Ethernet will break them up, so it will take several PBM packets to receive the larger TCP or UDP packet. The re-assembly occurs higher in the NDK stack.

    On the sending side, if a larger buffer is sent, it will be broken into multiple PBM packets (and then become multiple Ethernet packets).

    This is the default behavior. The NDK does support jumbo packets, but that requires a rebuild of the NDK.
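
    For illustration only (application code never calls PBM directly), here is a rough sketch of what the receive path inside an Ethernet driver does with PBM packets. The PBM_alloc()/PBM_free()/PBM_getDataBuffer()/PBM_setValidLen() names are from the NDK PBM module as documented in SPRU524; the include path, helper name and frame size are assumptions.

        #include <string.h>
        #include <ti/ndk/inc/stkmain.h>   /* assumed include path for the PBM API */

        #define ETH_MAX_FRAME 1518        /* one PBM packet holds one Ethernet frame */

        /* Hypothetical driver-side helper: copy one received frame into a PBM packet. */
        static void rxOneFrame(unsigned char *dmaBuf, unsigned int frameLen)
        {
            PBM_Handle hPkt = PBM_alloc(ETH_MAX_FRAME);

            if (!hPkt)
                return;                   /* PBM pool exhausted */

            memcpy(PBM_getDataBuffer(hPkt), dmaBuf, frameLen);
            PBM_setValidLen(hPkt, frameLen);

            /* A real driver now hands hPkt up to the stack, which reassembles
               multi-frame TCP/UDP payloads; here it is simply freed again. */
            PBM_free(hPkt);
        }

    A large TCP send therefore occupies many such PBM packets on the wire, one per Ethernet frame, until the stack reassembles them.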

    Todd

  • Hello Todd,

    I have received additional inquiries about the NDK API Reference Guide (https://www.ti.com/lit/ug/spru524k/spru524k.pdf) from our customer. I'm not familiar with the NDK API, so I have simply translated their inquiries directly. Sorry.

    1. Regarding "_ipcfg.SockBufMinTx" (p173)

    Does "The send buffer size" mean “transmitBufSize” ? According to this description, This value is usually about 25% to 50% of the send buffer size. Customer seems to set around 6%. In this case, is there any problem?

    2. Regarding the definitions in the CCS configuration file

    Does each definition in the CCS configuration file correspond to the descriptions in the NDK documentation? In other words, the customer would like to confirm the following relationships between them.

    【app_p64P.cfg.xml】 → 【spru524】
    
    transmitBufSize → _ipcfg.SockTcpTxBufSize
    
    receiveBufSize  → _ipcfg.SockTcpRxBufSize
    
    maxReassemblySize  → _ipcfg.IpReasmMaxSize
    
    socketBufMinTxSize  → _ipcfg.SockBufMinTx
    
    socketBufMinRxSize  → _ipcfg.SockBufMinRx

    3. NDK_send() / NDK_recv()

    The customer is using the send()/recv() APIs at this time. Should the customer use NDK_send()/NDK_recv() instead?

    4. Regarding PBM

    The customer does not use the NDK_***() APIs in the current project. However, should the customer also use PBM in this case?

    5. The customer noticed that TaskCreate() is never called in their application software. In this case, should the customer use PBM buffers and perform Ethernet communication through PBM?

    6. When PBM is used, send()/recv() draw from the PBM pool, so there is a possibility that the PBM pool runs out, as in the explanation in the following thread (https://e2e.ti.com/support/processors/f/791/p/905776/3370042#3370042). Is that correct?

     

    It would be appreciated if you could share your comments on these inquiries.

     

    Best regards,

    Miyazaki

  • Miyazaki,

    Re: #1 - _ipcfg.SockBufMinTx

    When the description for _ipcfg.SockBufMinTx refers to the "send buffer size", it means the value defined by _ipcfg.SockTcpTxBufSize. During startup, this parameter is set from the Tcp.transmitBufSize defined in the application.cfg script, but it might be changed later by a call to CfgAddEntry().

    There is no problem setting _ipcfg.SockBufMinTx to 6% of the transmit buffer size. It just means that the socket will be marked as writable sooner, but with less room for new data. Note that _ipcfg.SockBufMinTx only matters when you are using fdSelect() with non-blocking sockets; the send() function by itself does not use this configuration parameter.
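
    As a minimal sketch of how SockBufMinTx shows up in practice (fdSelect() and the FD_* macros are the NDK file-descriptor API from SPRU524; the include and helper name are assumptions):

        #include <netmain.h>              /* assumed NDK include */

        /* Hypothetical helper: wait (up to 5 s) until socket s can accept more data. */
        static int waitWritable(SOCKET s)
        {
            fd_set writeSet;
            struct timeval tv = { 5, 0 };

            FD_ZERO(&writeSet);
            FD_SET(s, &writeSet);

            /* fdSelect() marks the socket writable once its TCP send buffer has
               at least _ipcfg.SockBufMinTx bytes of free space. */
            return fdSelect(0, 0, &writeSet, 0, &tv) > 0 && FD_ISSET(s, &writeSet);
        }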


    Re: #2 - CCS configuration vs NDK _ipcfg

    You have the correct relationship between these parameters. The values you specify in the application.cfg script are applied to the NDK _ipcfg parameters during the startup phase. When you create a new socket, it takes its values from _ipcfg. Once you have a socket, you can modify its parameters using the setsockopt() function.
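
    As an illustration of that flow, a hedged sketch: a new socket starts with the _ipcfg values (i.e. from the .cfg script) and can then be tuned per socket. SO_SNDBUF/SO_RCVBUF are assumed to be the relevant socket options; the include, helper name and sizes are illustrative only.

        #include <netmain.h>              /* assumed NDK include */

        /* Hypothetical helper: override the per-socket defaults inherited from
           _ipcfg.SockTcpTxBufSize / _ipcfg.SockTcpRxBufSize. */
        static void tuneSocketBuffers(SOCKET s)
        {
            int txSize = 16384;           /* illustrative values only */
            int rxSize = 16384;

            setsockopt(s, SOL_SOCKET, SO_SNDBUF, &txSize, sizeof(txSize));
            setsockopt(s, SOL_SOCKET, SO_RCVBUF, &rxSize, sizeof(rxSize));
        }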


    Re: #3 - BSD vs NDK

    It is recommended to use the BSD functions (send, recv). But an application is allowed to use both the BSD and the NDK functions (NDK_send, NDK_recv) at the same time. Both APIs must be invoked from an NDK task so that the proper fdSession is active.

    It might not be possible to use both APIs from the same source file because the header files may be incompatible with each other, but it should work if you separate the code into two source files. However, it would be best to simply use the BSD API alone.


    Re: #4 - PBM

    The customer should not be using any PBM functions. The only place PBM needs attention is in the configuration of the PBM pool. After that, the PBM buffers are used internally by the NDK and the driver; they should not be accessed directly by the application.


    Re: #5 - TaskCreate

    It is highly recommended that an application use TaskCreate() to create a new task from which the network APIs are invoked. This is needed to set up the correct fdSession context. It is possible to make BSD and NDK API calls directly from a kernel task, but that requires setting up the proper fdSession context manually, and this approach is not recommended.
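
    A minimal sketch of the recommended pattern, assuming the TaskCreate(), fdOpenSession(), fdCloseSession() and TaskSelf() functions from SPRU524; the include, stack size and OS_TASKPRINORM priority are assumptions taken from typical NDK examples:

        #include <netmain.h>              /* assumed NDK include */

        /* Network worker spawned with the NDK's TaskCreate(). */
        static void netWorker(void)
        {
            fdOpenSession(TaskSelf());    /* establish the fd session for this task */

            /* ... socket(), connect(), send(), recv() calls go here ... */

            fdCloseSession(TaskSelf());
        }

        void startNetWorker(void)
        {
            /* Stack size and priority below are illustrative only. */
            TaskCreate(netWorker, "netWorker", OS_TASKPRINORM, 4096, 0, 0, 0);
        }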


    Re: #6 - PBM pool

    Both the BSD and NDK APIs make use of the PBM pool. If the application is not structured correctly, it is possible for the PBM pool to run out and cause the network traffic to stop.

    The PBM buffers are used internally for both sending and receiving data. They are used to move data between the driver layer and the application layer, in both directions.

    The problem can occur on the receiving path. When new data is received, the driver automatically acquires PBM buffers to store the incoming data and then passes them up to the NDK stack. The data sits there until the application calls recv(), at which point it is transferred out of the PBM buffer into the application buffer and the empty PBM buffer is returned to the pool. If the application never calls recv(), or does not call it soon enough, the PBM pool can run out. If this happens and the application then attempts to send data, the send fails because there are no PBM buffers available to transfer the outbound data from the application to the driver.
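
    To make the receive-side point concrete, a hedged sketch (the include, helper name and buffer size are illustrative only): each recv() empties PBM buffers queued on the socket and returns them to the pool, so draining promptly also keeps the transmit path supplied.

        #include <netmain.h>              /* assumed NDK include */

        /* Hypothetical reader task body: pull data out of the stack promptly. */
        static void rxDrainLoop(SOCKET s)
        {
            static char appBuf[1460];     /* illustrative buffer size */
            int n;

            /* Each recv() copies data out of the PBM buffers held by the stack
               and frees them back to the pool. */
            while ((n = recv(s, appBuf, sizeof(appBuf), 0)) > 0) {
                /* ... process n bytes of application data ... */
            }
        }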

    ~Ramsey