This thread has been locked.

CC2564MODA: Non assisted A2DP glitches, SBC decoding seems slow

Part Number: CC2564MODA

I have a non-assisted design (i.e., the audio codec is connected to the host processor) and I am trying to get A2DP audio to play cleanly. All my tests show that time is lost somewhere in the UART/HCI/SBC decoding path, since my I2S DMA always consumes samples faster than they are produced. Adding multiple intermediate buffers does not help.

My parameters: UART@921600 Interrupt (Bluetopia HCI code), Host CPU@96MHz.

My questions:

1. Is it possible that my host processor is not fast enough to handle the SBC decoding?

2. What are the minimum requirements for a non-assisted design with 44.1 kHz audio streaming?

3. Are there any parameters on the HCI or host side to play with (buffer sizes, etc.)?

4. Is it worth switching to the DMA version of the HCI?

Any help would be appreciated.

  • Hi,
    We have received your query and will get back to you shortly.

    Thanks
    Saurabh
  • Anthony,

    Anthony Rabine14 said:

    My parameters: UART@921600 Interrupt (Bluetopia HCI code), Host CPU@96MHz.

    Please increase the baud rate of the UART interface if it is possible on the host.
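
    As a rough sanity check on why the baud rate matters, the link budget can be sketched in C. The numbers below are assumptions, not measured values: standard 8N1 UART framing (10 line bits per data byte) and a typical high-quality SBC stream of roughly 345 kbit/s.

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Effective payload throughput of a UART link in bit/s,
     * assuming 8N1 framing: 10 line bits per data byte. */
    static unsigned long uart_payload_bps(unsigned long baud)
    {
        return (baud / 10UL) * 8UL;
    }

    int main(void)
    {
        /* High-quality SBC (44.1 kHz, bitpool 53) streams at roughly
         * 345 kbit/s; this is a typical figure, not a measurement. */
        const unsigned long sbc_bps = 345000UL;

        unsigned long hci_bps = uart_payload_bps(921600UL);
        printf("UART payload: %lu bit/s, SBC stream: %lu bit/s\n",
               hci_bps, sbc_bps);

        /* 921600 baud leaves only ~2x headroom over the SBC bitrate
         * before HCI/L2CAP/AVDTP overhead and per-byte interrupt cost
         * are counted, which is why a higher baud rate helps. */
        assert(hci_bps > sbc_bps);
        return 0;
    }
    ```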

    Anthony Rabine14 said:

    1. Is it possible that my host processor is not fast enough to handle the SBC decoding?

    That is possible. In that case, using the assisted mode (A3DP) is the best solution.

    Anthony Rabine14 said:

    2. What are the minimum requirements for a non-assisted design with 44.1 kHz audio streaming?

    A host's decoding capability depends on factors such as core speed, architecture, and instruction set, so the minimum requirements are not quantified. You can use the AUDDemo running on the Cortex-M4 platform as a benchmark for the MCU requirements.

    Anthony Rabine14 said:

    3. Are there any parameters on the HCI or host side to play with (buffer sizes, etc.)?

    There are parameters such as SBC_BUFFER_SIZE and AUDIO_BUFFER_SIZE in AudioDecoder.c that you can tune to find the optimal setting for your system.
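
    To reason about what these sizes trade off, note that a larger PCM buffer absorbs more HCI jitter but also adds playback latency in proportion to its size. A minimal sketch, assuming 44.1 kHz, 16-bit, stereo output (the actual units of the buffers in AudioDecoder.c may differ):

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Playback latency added by a PCM buffer, in milliseconds,
     * assuming 44.1 kHz, 16-bit, stereo (176400 bytes/s). */
    static double pcm_buffer_latency_ms(unsigned bytes)
    {
        return bytes * 1000.0 / (44100.0 * 2 * 2);
    }

    int main(void)
    {
        /* e.g. an 8 KiB buffer holds about 46 ms of audio: enough to
         * ride out HCI jitter, but each extra buffer only adds delay;
         * it does not make the decoder itself any faster. */
        printf("%.1f ms\n", pcm_buffer_latency_ms(8192));
        return 0;
    }
    ```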

    Anthony Rabine14 said:

    4. Is it worth switching to the DMA version of the HCI?

    This will depend on the host MCU so it is really a question for that MCU manufacturer.

    For code reference, please refer to the AUDDemo for the non-assisted A2DP sink. For additional help, please refer to these resources.

    Best regards,

    Vihang

  • Thank you for your reply. Here are some updates:

    Raising the UART baud rate to nearly 4 Mb/s and increasing SBC_BUFFER_SIZE seem to help a little; the audio quality is improved. But increasing AUDIO_BUFFER_SIZE does not help and makes the sound noisier. I don't understand the influence of these two buffers; I would expect that the larger they are, the better.

    We are using an STM32F412 host processor, so the architecture is similar to the AUDIO demo project. Unfortunately, the example hard-faults, and the HCI code must be ported since we are using the new ST library API.

    Here are my questions:

    1. Do you have a host code example of the HCI over DMA using the new STM32Cube library?

    2. Since we are using the I2S in DMA mode, the code example for sample processing must be adapted. Do you have examples of good practices for this design (double buffering, buffer switching on DMA half-transfer, etc.)?
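
    The half-transfer pattern in question can be sketched platform-independently. This is an illustration, not Bluetopia or ST code: the refill source is a stand-in for the SBC decoder's output FIFO, and the buffer sizes are arbitrary. On STM32 parts, the two callbacks below correspond to HAL_I2S_TxHalfCpltCallback and HAL_I2S_TxCpltCallback when the I2S DMA runs in circular mode.

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    #define FRAMES_PER_HALF 128
    /* Circular DMA buffer: two halves, stereo samples. */
    static int16_t i2s_buf[2 * FRAMES_PER_HALF * 2];

    /* Fill one half of the DMA buffer with the next decoded PCM.
     * In a real design this pulls from the SBC decoder's output FIFO;
     * here it writes a counter so the pattern is testable. */
    static int16_t next_sample = 0;
    static void refill(int16_t *dst, size_t samples)
    {
        for (size_t i = 0; i < samples; i++)
            dst[i] = next_sample++;
    }

    /* Half-transfer interrupt: DMA is now playing the SECOND half,
     * so the FIRST half is safe to rewrite. */
    static void dma_half_complete(void)
    {
        refill(&i2s_buf[0], FRAMES_PER_HALF * 2);
    }

    /* Transfer-complete interrupt: DMA wrapped to the FIRST half,
     * so the SECOND half is safe to rewrite. */
    static void dma_full_complete(void)
    {
        refill(&i2s_buf[FRAMES_PER_HALF * 2], FRAMES_PER_HALF * 2);
    }

    int main(void)
    {
        /* Prime both halves, then simulate one more DMA half-period. */
        dma_half_complete();
        dma_full_complete();
        dma_half_complete();
        /* The first half now holds the third block of samples while
         * the second half (still being played) is untouched. */
        assert(i2s_buf[0] == 2 * FRAMES_PER_HALF * 2);
        assert(i2s_buf[FRAMES_PER_HALF * 2] == FRAMES_PER_HALF * 2);
        return 0;
    }
    ```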

  • Hello Anthony,

    Both your questions concern the driver package on a non-TI MCU platform. We do not have this solution, and I would not be able to provide much help with porting to a non-TI platform. For this, I recommend getting help from one of our 3rd party partners.

    Best regards,
    Vihang
  • OK, following your last answer, we now have some support from our MCU vendor.

    Nevertheless, here is a capture showing the time spent in the HCI interrupt (channel 0) and the time spent in the decoding thread (channel 1). Can it work like this? It seems there is enough time to perform everything, but the synchronization is not well managed (decoding is not performed when there are no interrupts!).

    What is your opinion?
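
    One common way to remove that coupling is to have the HCI interrupt only queue bytes and signal, while a separate decoder thread runs whenever data is pending, not just inside the interrupt window. A sketch using POSIX threads for illustration (the ISR and decoder bodies are stand-ins, not Bluetopia code, and the packet size is arbitrary):

    ```c
    #include <assert.h>
    #include <pthread.h>

    /* Counter of bytes queued by the HCI RX interrupt and drained by
     * the decoder thread.  The condition variable wakes the decoder as
     * soon as data arrives, so decoding never has to wait for the NEXT
     * interrupt to notice data from the previous one. */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  data_ready = PTHREAD_COND_INITIALIZER;
    static unsigned bytes_pending = 0;
    static unsigned bytes_decoded = 0;
    static int      stream_done = 0;

    /* Called from the HCI ISR (here: the main thread) per packet. */
    static void hci_rx(unsigned len)
    {
        pthread_mutex_lock(&lock);
        bytes_pending += len;
        pthread_cond_signal(&data_ready);
        pthread_mutex_unlock(&lock);
    }

    static void *decoder_thread(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        for (;;) {
            while (bytes_pending == 0 && !stream_done)
                pthread_cond_wait(&data_ready, &lock);
            if (bytes_pending == 0 && stream_done)
                break;
            bytes_decoded += bytes_pending;  /* stand-in for SBC decode */
            bytes_pending = 0;
        }
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, decoder_thread, NULL);
        for (int i = 0; i < 100; i++)
            hci_rx(119);                     /* arbitrary payload size */
        pthread_mutex_lock(&lock);
        stream_done = 1;
        pthread_cond_signal(&data_ready);
        pthread_mutex_unlock(&lock);
        pthread_join(t, NULL);
        assert(bytes_decoded == 100u * 119u);
        return 0;
    }
    ```

    On a bare-metal or RTOS target the same shape is usually a counting semaphore or event flag posted from the ISR and pended on by the decoding task.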