
SRIO driver questions

Hi,

I have SRIO unit test code that successfully transfers data to the host processor. It uses the SRIO/CPPI/QMSS drivers from pdk_C6678_1_0_0_17.

1. I noticed that SIZE_HOST_DESC is defined as 48, but the structure defined in cppi_desc.h has 13 elements, so I expect sizeof(Cppi_HostDesc) to be 52. Moreover, I believe this value is required to be rounded up to the next multiple of 16, so shouldn't SIZE_HOST_DESC be 64? In some places the example code uses SIZE_HOST_DESC, while in others it uses sizeof(Cppi_HostDesc). The sample application works fine, but I wonder whether this is expected.

/**
 * @brief CPPI host descriptor layout
 */
typedef struct {
    /** Descriptor type, packet type, protocol specific region location, packet length */
    uint32_t          descInfo;
    /** Source tag, Destination tag */
    uint32_t          tagInfo;
    /** EPIB present, PS valid word count, error flags, PS flags, return policy,
     *  return push policy, packet return QM number, packet return queue number */
    uint32_t          packetInfo;
    /** Number of valid data bytes in the buffer */
    uint32_t          buffLen;
    /** Byte aligned memory address of the buffer associated with this descriptor */
    uint32_t          buffPtr;
    /** 32-bit word aligned memory address of the next buffer descriptor */
    uint32_t          nextBDPtr;
    /** Completion tag, original buffer size */
    uint32_t          origBufferLen;
    /** Original buffer pointer */
    uint32_t          origBuffPtr;
    /** Optional EPIB word0 */
    uint32_t          timeStamp;
    /** Optional EPIB word1 */
    uint32_t          softwareInfo0;
    /** Optional EPIB word2 */
    uint32_t          softwareInfo1;
    /** Optional EPIB word3 */
    uint32_t          softwareInfo2;
    /** Optional protocol specific data */
    uint32_t          psData;
} Cppi_HostDesc;
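
To make the arithmetic explicit: thirteen uint32_t fields give sizeof(Cppi_HostDesc) == 52 on C66x, and the next multiple of 16 is 64. Below is a minimal compile-time check of that reasoning; the macro and typedef names are made up for illustration and are not part of the PDK.

    #include "cppi_desc.h"   /* CPPI LLD header; adjust the include path for your PDK */

    /* Hypothetical helper: round a descriptor footprint up to the next
     * multiple of 16 bytes.                                             */
    #define DESC_SIZE_ALIGN16(sz)    (((sz) + 15u) & ~15u)

    /* Compile-time checks (illustrative names only): the build fails if
     * the sizes are not what the discussion above assumes.              */
    typedef char check_sizeof_is_52 [(sizeof(Cppi_HostDesc) == 52u) ? 1 : -1];
    typedef char check_rounded_is_64[(DESC_SIZE_ALIGN16(sizeof(Cppi_HostDesc)) == 64u) ? 1 : -1];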

 

2. When I try to bring the SRIO driver over to another application, SRIO transmit does not work. Srio_sockSend() returns without any error, but Srio_txCompletionIsr is never called. I copied over all the configuration from the working unit test code and cannot see anything that I have missed.

One difference I noticed is that the host buffer descriptor addresses for the unit test application were in L2SRAM for Core0 (I am only using Core0 for now). For the final application, however, the addresses are 0x1c200b60, 0x1c200b90, etc. Per the C6678 data sheet these fall in the reserved address range 0x17F08000 to 0x1FFFFFFF.

The buffer descriptors are allocated by the CPPI/QMSS driver, so I wonder whether this is expected.

I think I came across some requirement about a heap for CPPI. Note that the system heap was moved from multicore shared memory to DDR3, and I have only one heap defined. The unit test code still works with the same change, so I assume the heap is OK.
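
For reference, the heap dependency in CPPI usually comes in through its OSAL layer: the driver allocates through Osal_cppiMalloc()/Osal_cppiFree(), which the PDK examples back with a SYS/BIOS heap. A minimal sketch, patterned on the example OSAL, is below; using the default system heap (NULL handle) here is an assumption, and an application may pass a dedicated heap instead.

    #include <xdc/std.h>
    #include <xdc/runtime/Memory.h>
    #include <xdc/runtime/Error.h>

    /* CPPI OSAL memory hooks, backed here by the default system heap. */
    Ptr Osal_cppiMalloc (UInt32 num_bytes)
    {
        Error_Block eb;
        Error_init (&eb);
        return Memory_alloc (NULL, num_bytes, 0, &eb);
    }

    Void Osal_cppiFree (Ptr ptr, UInt32 num_bytes)
    {
        Memory_free (NULL, ptr, num_bytes);
    }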

 

Thanks!

Shivang

 

 

 

  • Well, for the second part, the addresses of the buffer descriptors are global. So 0x1c200b60 and 0x1c200b90 for Core0 would mean they are at 0x0c200b60 and 0x0c200b90, i.e. in the multicore shared memory. So at least they do not fall in the reserved area of the memory map.

    But the working unit test code has the descriptors in L2SRAM (0x805a90, 0x805ac0, ...). I haven't really configured the MSM for anything in particular; I know it can be configured as L2SRAM or L3SRAM (and then cacheable or not). I wonder whether having the descriptors in L2SRAM or MSM makes a difference.

  • OK, I can reproduce the problem in the unit test code as well. It is based on the sample SRIO loopback code that is part of the PDK. The buffer descriptors are allocated from host_region, which is a global array, and as I mentioned, the descriptors and the region go to L2SRAM by default in the small unit test code.

    #pragma DATA_ALIGN (host_region, 16)
    Uint8   host_region[NUM_HOST_DESC * SIZE_HOST_DESC];

    Linker command file for the unit test code:

    SECTIONS
    {
        .bootcode:
        {
            -lboot.ae66e<boot.oe66e>(.text:_c_int00)
        } > 0x800000
        .init_array:    load >> L2SRAM
        .srioSharedMem: load >> MSMCSRAM
        .qmss:          load >> MSMCSRAM
        .cppi:          load >> MSMCSRAM
    }

    As soon as I place host_region into the multicore shared memory explicitly (which is what happens by default in my firmware application), the SRIO transmit interrupts stop being generated. I think it is not just the interrupts: the transfers are not taking place at all, since I don't receive anything on the host side.

    //ST Check if it works if descriptors are in MSM
    #pragma DATA_SECTION(host_region, ".msm");
    #pragma DATA_ALIGN (host_region, 16)
    Uint8   host_region[NUM_HOST_DESC * SIZE_HOST_DESC];

    Linker command file for the unit test code:

    SECTIONS
    {
        .bootcode:
        {
            -lboot.ae66e<boot.oe66e>(.text:_c_int00)
        } > 0x800000
        .init_array:    load >> L2SRAM
        .srioSharedMem: load >> MSMCSRAM
        .qmss:          load >> MSMCSRAM
        .cppi:          load >> MSMCSRAM
        .msm:           load >> MSMCSRAM
    }

    So it seems host_region must be in L2SRAM. However, that appears to be a necessary but not a sufficient condition: I made sure host_region goes to L2SRAM in my large firmware application explicitly via the linker file, but the firmware application still does not transmit over SRIO. So there seem to be more requirements for using the driver that I am missing.

    Any pointers please?

    Thanks!

  • Shivang,

    Please update to the latest MCSDK at: http://software-dl.ti.com/sdoemb/sdoemb_public_sw/bios_mcsdk/latest/index_FDS.html

    It has been cleaned up considerably since the early version that you are using and probably addresses some of the issues listed in #1 above. 

    Regards,

    Travis

  • Not sure I follow...

    Each CorePac's L2 uses 0x00800000 as the local starting L2 address, and 0x10800000, 0x11800000, 0x12800000, etc. as the global starting L2 address, depending on the CorePac number. (A local-to-global conversion sketch is shown at the end of this post.)

    0x1C200B60 and 0x1C200B90 are reserved addresses and cannot be used. 0x0C000000-0x0C3FFFFF is MSM memory. There is no local address for MSM.

    Descriptors could be stored in L2 or MSM.

    Am I missing your point?

    Regards,

    Travis
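
    For reference, a minimal sketch of converting a local L2 address to its global alias, patterned on the mapping described above (the helper name is made up for illustration; DNUM is the CorePac number register on C66x):

    #include <c6x.h>        /* DNUM: CorePac number */
    #include <stdint.h>

    /* Illustrative helper: local L2 starts at 0x00800000, and the global
     * alias for CorePac n starts at 0x10800000 + (n << 24).             */
    static uint32_t l2_local_to_global (uint32_t localAddr)
    {
        if ((localAddr & 0xFF000000u) == 0x00000000u)   /* local L2 range */
            return (0x10000000u + ((uint32_t)DNUM << 24)) | localAddr;
        return localAddr;                               /* already global */
    }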

  • SThakkar said:
    As soon as I place host_region into the multicore shared memory explicitly (which is what happens by default in my firmware application), the SRIO transmit interrupts stop being generated. I think it is not just the interrupts: the transfers are not taking place at all, since I don't receive anything on the host side.

    I will investigate moving host_region to multicore shared memory and provide the needed changes. Then you can apply them to your application.

    Regards,

    Travis

  • Hi,

    We modified the source code and kept host_region in non-cacheable DDR, and with that change the code started working fine. Below is the piece of code I added to test_main.c, in the unitTestTask function, to configure the DDR to be non-cacheable for CorePac0.

    /***** Configure MAR128 so the DDR range 0x80000000 to 0x80FFFFFF is non-cacheable *****/
        volatile UInt32 *ptr;

        ptr  = (volatile UInt32 *) 0x01848200;   /* MAR128 register address */
        *ptr = 0x0;                              /* PC bit = 0 -> non-cacheable */

    The Srio_sockSend_TYPE11 function in the SRIO driver does not write the descriptor contents back to MSMC memory before pushing the descriptor onto the Tx queue. I need to check whether the SRIO driver is expected to do that write-back and is not doing it, or whether the application code is expected to handle cacheability. (A sketch of such a write-back is shown at the end of this post.)

    If you plan to use MSMC shared memory for host_region, then for this workaround you need to configure the MSMC to be non-cacheable. This can be done by configuring the MPAX registers for PrivID 0 (CorePac0) and creating one segment for the MSMC shared memory whose physical address is the MSMC address 0x0C000000, with that segment configured as non-cacheable.

    Please try the above options and let us know if that solves your problem.

    I will check on the driver implementation regarding cacheability and get back to you.

    Regards,

    Bhavin 

     
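    A minimal sketch of the kind of descriptor write-back discussed above, assuming the application handles cacheability itself and uses the CSL cache API (the helper name, pointer, and size are placeholders, not the SRIO driver's actual code):

    #include <c6x.h>                    /* _mfence() intrinsic */
    #include <ti/csl/csl_cacheAux.h>    /* CACHE_wbL1d / CACHE_wbL2 */

    /* Hypothetical helper: write back a descriptor that lives in cached
     * MSMC or DDR memory before pushing it onto a hardware Tx queue, so
     * the Navigator hardware sees the CPU's latest writes.              */
    static void desc_writeback (void *descPtr, Uint32 descSize)
    {
        CACHE_wbL1d (descPtr, descSize, CACHE_WAIT);    /* L1D write-back */
        CACHE_wbL2  (descPtr, descSize, CACHE_WAIT);    /* L2  write-back */
        _mfence ();         /* ensure the write-back lands before the push */
    }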

  • Thanks Travis and Bhavin. I am bypassing the issue for now by keeping the descriptors in L2SRAM. I will try out moving the host descriptors to a non-cacheable DDR region next week.