Cannot open FFTC Tx channel

I am trying to combine the QMSS Infrastructure Multicore Mode (PDK_INSTALL_PATH/ti/drv/qmss/example/InfrastructureModeMulticore) example code with the FFTC multicore (PDK_INSTALL_PATH/ti/drv/fftc/example/multicore) example code and I am getting a strange error during configuration:

[C66xx_1] Error opening Tx channel corresponding to FFTC queue : 1
[C66xx_1] Invalid FFTC Transmit configuration specified

I was able to see these messages by implementing Osal_fftcLog() in multicore_osal.c. Another important change was using monolithic descriptors instead of host descriptors in the FFTC example code.

The call sequence to get to this is, roughly:

Core 0:

Qmss_init()

Qmss_start()

Cppi_init()

Cppi_open() *

Qmss_insertMemoryRegion() for monolithic/sync descriptors *

Qmss_insertMemoryRegion() for monolithic descriptors for FFTC

several [seemingly irrelevant for the problem at hand] xxxx_initDescriptor() calls for memory regions 0 and 1

and finally, several Qmss_queueOpen() calls for the queues defined in the QMSS infrastructure example.

* these calls are specific to the QMSS InfrastructureModeMulticore example


After the above sequence ends, Core 1 does:

Qmss_start()

enable_fftc()

Fftc_init()

fftc_setup()  -> changed to use monolithic descriptors in memory region 2; this function will call Fftc_open()

populate an FFT_ExampleCfg struct

Fftc_txOpen()


... which is where I get the above error message and a NULL hTxObj.

I am not sure what else to try except changing the fftcInstNum variable (from CSL_FFTC_A to CSL_FFTC_B) and the txCfg.txQNum (from 0 to 1). None of the four possible combinations of instance number and txQNum works.

If I move all this code from Core 1 to Core 0, the Fftc_txOpen() call returns a non-null hTxObj. I haven't gone any further into using it, but the fact that it works hints that there is some sort of core-specific configuration I am unable to find, so I would like some advice from TI as to where else to look.


Best regards and thank you in advance

  • Igor, several things to check:

     

    1. Is Fftc_open being called?
    2. Are all of the descriptor memory regions in the newly combined code configured in ascending address order, and are you sure the regions are not overwriting one another? (If the first example created regions 0 and 1, and the second example created region 0, the combined example should create 0, 1 and 2 – each at an ascending address.)
    3. The first error (“Error opening Tx channel corresponding to FFTC queue”) occurs when Fftc_txQueueOpen calls Cppi_txChannelOpen. The possibilities for errors are:
      1. the channel number is invalid (should be 0-3 for FFTC)
      2. it fails RM permissions check (probably not the problem here)
      3. all channels are already taken
      4. the osal malloc routine failed to return a buffer (probably not the problem either)
    4. The second error message (“Invalid FFTC Transmit configuration specified”) is simply a restatement of the same error.

     It is hard to be more specific without more details of what your code is doing, but this is a configuration error.  The best advice I can give is to examine the two configurations and make sure they are working as one.  It is probably best to do all the configuration on a single core, in one sequence, where you don’t have to worry about core or task synchronization issues.

      -dave

     

  • Dave, thanks for your response.

    1. Yes. You can check in the FFTC multicore example that fftc_setup() calls Fftc_open() after populating an Fftc_DrvCfg struct. I have a question here:

    In the example, each core calls fftc_setup(), so the sum of fftcInitCfg.cppiFreeDescCfg[0].numDesc across all cores equals FFTC_NUM_HOST_DESC. Is that necessary, or could I just have one of the cores ask for all the free descriptors and have the others call Fftc_open() with, say, a NULL struct?

    2. I will double-check that shortly, but I have no reason to believe they are not in ascending address order. The hostDesc[] buffer from the FFTC multicore example (which would be memory region 2 in the combined code) is defined just like the monolithic and sync descriptors from the QMSS Infrastructure Multicore Mode example. The actual code I have here is:

    #pragma DATA_ALIGN (monolithicDesc, 16)
    UInt8                   monolithicDesc[SIZE_MONOLITHIC_DESC * NUM_MONOLITHIC_DESC];
    #pragma DATA_ALIGN (syncDesc, 16)
    UInt8                   syncDesc[SIZE_SYNC_DESC * NUM_SYNC_DESC];
    #pragma DATA_ALIGN (dataBuff, 16)
    UInt8                   dataBuff[SIZE_DATA_BUFFER];
    //a few other vars
    #pragma DATA_ALIGN (hostDesc, 16)
    UInt8                   hostDesc[FFTC_SIZE_MONOLITHIC_DESC * FFTC_NUM_MONOLITHIC_DESC];

    Do I need any higher-level statements similar to a .cmd file to be sure they are placed in ascending order?

    3. The first error (“Error opening Tx channel corresponding to FFTC queue”) occurs when Fftc_txQueueOpen calls Cppi_txChannelOpen -> Yes, I was able to trace that in the PDK_INSTALL_PATH/ti/drv/fftc/fftc.c file.

    a. Fftc_txQueueOpen()'s Fftc_QueueId parameter is a direct copy of one of the fields in Fftc_txOpen()'s Fftc_TxCfg parameter. The FFTC example code always uses 0 (zero) for it, though I have tried 1 as well, as I said in the original post. So I do not believe this is incorrect.

    b. Can you be more verbose?

    c. How can I be sure? As I understand it, no QMSS channels are opened until the QMSS infrastructure example code calls send_data(), which will not happen for quite some time. Could the code in sysInit() do anything to these channels?

    d. I guess I would have to rebuild the PDK library to be sure of this, right?

    4. Yes, I can see it is printed when Fftc_txQueueOpen() returns NULL.


    Using the global isQmssInitialized variable for synchronization worked fine to separate the initialization on core 0 from the FFTC configuration code on core 1, so I suspected some core-specific constraint, such as "channel modulo 4 == core number", as a possible source of this error. Up to this point in the code, it is a very straightforward copy-paste from the two examples.

    The "system initialization" part of the examples, sysInit() and system_init() from QMSS Infra example and FFTC multicore example respectively, were combined as I described in the original post. Question: could the Cppi_open() call, which is specific to the QMSS Infra example, do any harm to the code which does CPPI setup internally in the Fftc_xxxx() functions?

  • Argh, I'm having a hard time with this rich text editor. Please excuse the color change and numbering problems.

  • Igor,

     1) You can do it either way – have one or all cores call fftc_setup(). It mainly depends on whether or not you intend to access the FFTC LLD from each core (even if you did, you could still do all the setup on one core). If the FFTC example is putting its descriptors in L2, then these cannot be placed in the same memory region; you would need a separate region for each core's L2. One tip regarding the LLDs – if you haven't discovered the .chm files in the /docs directory of each LLD, they are a great way to navigate the functions, structs, enums, etc.

    2) You could use #pragma DATA_SECTION() statements that match labels in your .cmd linker file, but if all your regions are placed in the same memory then all you really need is to examine the .map file to make sure each region is at a higher address than the previous one. This restriction will go away with Keystone2 devices.
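    If explicit placement were needed, the pattern might look like this. The section and memory-range names below are placeholders, not taken from the actual example projects, and final placement should still be verified in the .map file.

```c
/* In the C source: give each descriptor array its own named section
 * (section names here are hypothetical). */
#pragma DATA_SECTION (monolithicDesc, ".qmssMonoDesc")
#pragma DATA_ALIGN   (monolithicDesc, 16)
UInt8 monolithicDesc[SIZE_MONOLITHIC_DESC * NUM_MONOLITHIC_DESC];

#pragma DATA_SECTION (hostDesc, ".fftcDesc")
#pragma DATA_ALIGN   (hostDesc, 16)
UInt8 hostDesc[FFTC_SIZE_MONOLITHIC_DESC * FFTC_NUM_MONOLITHIC_DESC];
```

```
/* In the .cmd linker command file: list the sections, within one memory
 * range, in the same order the memory regions are inserted. Check the
 * resulting addresses in the .map file. */
SECTIONS
{
    .qmssMonoDesc > L2SRAM
    .fftcDesc     > L2SRAM
}
```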

     Your numbering got confused following this, so I’m not sure what is referring to what.

     WRT initialization, the LLDs' "xxx_init" functions are meant to be called only ONCE per system (i.e. only on one core). Cppi_init doesn't do (or require) any initialization of the QM LLD, but Fftc_init assumes that Cppi_init has already been called (it calls Cppi_open internally). The LLDs' xxx_open functions can be called on each core, but after the first call they should return a handle corresponding to the cfg struct used in that first call. Your combined code should not be calling these functions multiple times.

       -dave

  • This is from the .map file:

    00859260   monolithicDesc
    0085ba60   syncDesc
    //...
    0085bf20   hostDesc

    the first two are from the QMSS example, and the last one is from the FFTC example; and they correspond to memory regions 0, 1 and 2. So it seems they are properly placed in memory.


    To clear the confusion: in your original post, you suggested in (3) that:

    a) the channel number might be wrong (not in the range [0-3] inclusive).

    I replied to this with:

    Fftc_txQueueOpen()'s Fftc_QueueId parameter is a direct copy of one of the fields in Fftc_txOpen()'s Fftc_TxCfg parameter. The FFTC example code always uses 0 (zero) for it, though I have tried 1 as well, as I said in the original post. So I do not believe this is incorrect.

    b) "it fails RM permissions check" -> I did not understand this

    c) "all channels are taken" -> How can I be sure? As I understand it, no QMSS channels are opened until the QMSS infrastructure example code calls send_data(), which will not happen for quite some time. Could the code in sysInit() do anything to these channels?

    d) "the osal malloc routine failed to return a buffer" -> I said I would have to rebuild the PDK library [with debugging stuff on] to be sure of this [since I can't 'step into' during debugging].

  • "it fails RM permissions check" -> I did not understand this

    The RM is a Resource Manager LLD available for TCI6614 parts. Not sure which PDK you are using.

    "all channels are taken" -> How can I be sure?

    sysInit() only appears to be initializing the LLDs, with the exception that a few FDQs are opened. The CPPI LLD function Cppi_txChannelOpen writes 0x8000_0000 to the Tx Channel N Global Config Reg A for the given Tx channel. Refer to the Navigator User Guide (SPRUGR9E) Sections 4.2.2.1 and 5.3.3 for information. You can examine these registers in a Memory View window and see when they are set.