Hi,
While optimizing my C6670 application, I came across an issue with the way monolithic descriptors are used in the FFTC LLD.
To confirm this, I ran the FFTC_MultiCore_testProject from pdk_C6670_1_1_2_6 with a single modification: using the test function in test_mono_singlecore.c instead of the default one defined in test_multicore.c.
During the run, the FFTC submit and receive data buffers are not 16-byte aligned, even though the descriptors in the memory region assigned to the driver were created with 16-byte alignment. Here is what I see in the run:
After calling Fftc_txGetRequestBuffer():
hRequestInfo  == 0x108A39C0 (monolithic descriptor)
pReqBuffer    == 0x108A39E4 (descriptor's data buffer)
After calling Fftc_rxGetResult():
hResultInfo   == 0x108A31C0 (monolithic descriptor)
pResultBuffer == 0x108A31CC (descriptor's data buffer)
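For reference, here is the quick check I used to confirm the offsets and the misalignment; the addresses are the ones printed above, and is_aligned() is just a local helper of mine, not an LLD call:

#include <stdio.h>
#include <stdint.h>

/* Local helper (mine, not from the LLD): true if p is aligned to 'align' bytes. */
static int is_aligned(uintptr_t p, uintptr_t align)
{
    return (p & (align - 1u)) == 0u;
}

int main(void)
{
    uintptr_t txDesc = 0x108A39C0u, txBuf = 0x108A39E4u;   /* submit side  */
    uintptr_t rxDesc = 0x108A31C0u, rxBuf = 0x108A31CCu;   /* receive side */

    printf("submit  offset %2u bytes, buffer 16-byte aligned: %d\n",
           (unsigned)(txBuf - txDesc), is_aligned(txBuf, 16));   /* prints 36, 0 */
    printf("receive offset %2u bytes, buffer 16-byte aligned: %d\n",
           (unsigned)(rxBuf - rxDesc), is_aligned(rxBuf, 16));   /* prints 12, 0 */
    return 0;
}

So the receive buffer sits 12 bytes past its descriptor and the submit buffer 36 bytes past its descriptor, and neither lands on a 16-byte boundary.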
Trying to trace the 12-byte offset in the receive buffer, I got as far as FFTC_CPPI_MONOLITHIC_DESC_SIZE == 12 in /pdk_C6670_1_1_2_6/packages/ti/drv/fftc/include/fftc_pvt.h, but changing that macro to 16 (and rebuilding the FFTC LLD) introduced odd behavior that I could not fully debug. The 36-byte offset in the submit descriptor, however, remains a mystery.
My algorithm performs element-wise complex vector multiplication and benefits greatly from 16-byte aligned data.
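The obvious workaround is to copy the data out of the LLD buffer into a 16-byte aligned scratch buffer before the multiply. A minimal sketch of what I mean (MAX_SAMPLES, scratch and copyResultAligned are just illustrative names of mine, not part of my actual code):

#include <string.h>
#include <stdint.h>

#define MAX_SAMPLES 1024                      /* example size only */

/* 16-byte aligned scratch buffer (TI C6000 compiler pragma). */
#pragma DATA_ALIGN(scratch, 16);
static int16_t scratch[2 * MAX_SAMPLES];      /* interleaved re/im samples */

void copyResultAligned(const int16_t *pResultBuffer, int numSamples)
{
    /* Extra copy purely to regain 16-byte alignment for the multiply kernel. */
    memcpy(scratch, pResultBuffer, 2 * (size_t)numSamples * sizeof(int16_t));
    /* ... element-wise complex multiply on 'scratch' using aligned loads ... */
}

That extra copy is exactly what I would like to avoid. Can anyone advise?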