This thread has been locked.


TDA2EXEVM: 2D-FFT Processing Questions

Part Number: TDA2EXEVM

TDA Processor Team:
We have the following questions, on which we seek clarity regarding FFT processing:
1. The Processor SDK has support for 12-, 14-, and 16-bit ADC input data, while the AWR datasheet specifies support for 12-bit ADC data only. Which one is correct? Are there any trade-offs in selecting one over the other?
2. There are 5 stages in FFT processing, and each stage has a bit-shift scaling factor specified by the user. Can you please explain how this bit shifting works at each FFT stage? I would like a concrete example: for a 256-point FFT, how are the 5 bit-shift scalars applied to the data?
3. There is also the twiddle factor. It appears that the twiddle factor is 16-bit. If the ADC data is also 16-bit, applying the twiddle factor to the ADC data produces 32-bit I and 32-bit Q samples before the bit shifting takes place. Please confirm our understanding.
4. How is overflow handled internally during processing, given that the FFT coherently adds the signal?
5. When collecting data, we experience a heavy DSP processing load unless 2 bit shifts per FFT stage are applied. What causes the heavy loading without the bit shifting, and how can the bit shifting be avoided while keeping the DSP usage at a reasonable level?
Thanks,
--Khai
  • Hi,

    We are checking internally to get the requested info.

    Regards,

    Stanley

  • Hi Khai,

     Can you confirm whether you are talking about the DSP or the EVE here?


    Regards,

    Anshu

  • It's for EVE.

  • Hi Khai,

        Thanks for the confirmation; based on point 5 above, it looked like the questions were for the DSP.

     Please find my responses to some of the questions:

    2. There are 5 stages in FFT processing, and each stage has a bit-shift scaling factor specified by the user. Can you please explain how this bit shifting works at each FFT stage? I would like a concrete example: for a 256-point FFT, how are the 5 bit-shift scalars applied to the data?

        After each stage of processing, the right shift given by the user is applied to the output if the output container size is the same as the input's. So for a 256-point FFT, to guarantee that no overflow happens in any stage, we would expect shifts of 2, 2, 2, 2, 1.
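    The per-stage scaling described above can be sketched as follows. This is a simplified illustration, not the actual EVE kernel: the function names and the growth model (each stage growing magnitudes by at most its shift amount) are assumptions made for the sketch only; the shift schedule [2, 2, 2, 2, 1] is taken from the reply.

```python
# Illustrative sketch of per-stage right shifts keeping FFT data inside a
# 16-bit output container (NOT TI's implementation; names are made up here).
STAGE_SHIFTS = [2, 2, 2, 2, 1]   # user-supplied scale factors for a 256-pt FFT
TOTAL_SHIFT = sum(STAGE_SHIFTS)  # 9 bits of scaling over all 5 stages

def apply_stage_shift(samples, shift):
    """Right-shift every (I, Q) sample after one stage, since the output
    container is the same 16-bit size as the input."""
    return [(i >> shift, q >> shift) for (i, q) in samples]

# Toy walk-through: model each stage's worst-case growth as '<< shift',
# then apply the user shift to bring values back into the int16 range.
data = [(1000, -1000)] * 4
for s in STAGE_SHIFTS:
    data = [(i << s, q << s) for (i, q) in data]  # stand-in for butterfly growth
    data = apply_stage_shift(data, s)
assert all(-32768 <= i <= 32767 and -32768 <= q <= 32767 for (i, q) in data)
```

    Note that the output then represents the true FFT result scaled down by 2^9; any downstream magnitude or detection thresholds would need to account for that scaling.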

    3. There is also the twiddle factor. It appears that the twiddle factor is 16-bit. If the ADC data is also 16-bit, applying the twiddle factor to the ADC data produces 32-bit I and 32-bit Q samples before the bit shifting takes place. Please confirm our understanding.

        This is correct. On EVE, each intermediate value can be 40 bits long.

    4. How is overflow handled internally during processing, given that the FFT coherently adds the signal?

       If the user provides the scale factor for each stage, then that will be used. If overflow is required to be handled internally, then for each stage the min and max values are calculated and the shift is applied based on them (note that extra cycles are required to compute these min and max values).
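    The internal overflow-handling path described above can be sketched as follows. This is an illustrative model only, assuming a signed 16-bit container; the function name and the exact shift-derivation rule are assumptions, not TI's implementation.

```python
# Sketch: derive a per-stage right shift from a block's peak magnitude,
# in the spirit of the min/max-based internal overflow handling above.
# (Scanning for the peak is the part that costs the extra cycles.)
def required_shift(samples, container_bits=16):
    """Return the smallest right shift such that every (I, Q) sample
    fits back into a signed 'container_bits'-wide container."""
    peak = max(max(abs(i), abs(q)) for (i, q) in samples)
    limit = (1 << (container_bits - 1)) - 1   # 32767 for int16
    shift = 0
    while (peak >> shift) > limit:
        shift += 1
    return shift

assert required_shift([(100, -100)]) == 0             # already fits, no shift
assert required_shift([(70000, -5), (-100000, 3)]) == 2
```

    The user-supplied fixed schedule avoids this min/max scan, which is why it is the cheaper option when worst-case scaling is acceptable.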

    5. When collecting data, we experience a heavy DSP processing load unless 2 bit shifts per FFT stage are applied. What causes the heavy loading without the bit shifting, and how can the bit shifting be avoided while keeping the DSP usage at a reasonable level?

        Is this point still about the DSP, or the EVE? I don't recall anything very specific related to bit shifts in EVE; if this point is for EVE, can you confirm whether you have enabled overflow detection?

    Regards,

    Anshu