
Audio processing with DSP on OMAP3530


Hi,

I posted my question in the multimedia software codecs forum but got no answer. The Linux forum seems to be more responsive, so I decided to post it here.

Here is my situation:

I want to do audio processing using the DSP on the OMAP3530, starting with a simple application. The application will work like this:

The DSP will fetch the audio file. On the DSP I will have an algorithm that can filter my audio file (an .mp3, for example). The audio will then pass through a digital-to-analog converter (DAC) so it can be listened to through the headphones.
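Just to make clear what I mean by "filter", here is the kind of thing I have in mind: a very simple single-pole low-pass applied to 16-bit PCM samples. The function name and coefficient are only illustrative, not a tuned design:

    #include <stdint.h>
    #include <stddef.h>

    /* Very simple single-pole low-pass filter over 16-bit PCM samples.
     * alpha is a Q15 fixed-point coefficient (0..32767); the value the
     * caller picks is only an illustration, not a tuned design. */
    void lowpass_q15(const int16_t *in, int16_t *out, size_t n, int16_t alpha)
    {
        int32_t y = 0;                                  /* previous output */
        size_t  i;

        for (i = 0; i < n; i++) {
            /* y += alpha * (x - y), i.e. y[i] = y[i-1] + a*(x[i] - y[i-1]) */
            y += ((int32_t)alpha * ((int32_t)in[i] - y)) >> 15;
            out[i] = (int16_t)y;
        }
    }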

It seems simple, but I am pretty confused by all the information I have found on the internet. So I want to know the essential things to understand if I want to do audio processing without using DSPLink (I only want to use the DSP, develop the application in CCSv4, and load it into the DSP via a JTAG emulator (XDS100v2)).

If it is possible to do audio processing with only the DSP (without the Cortex-A8 or DSPLink), where can the DSP fetch the audio file from? (RAM? SD card? EEPROM? ...)

Is it necessary to use APIs that allow the DSP to access the peripherals directly (at the register level)? Is there any API that allows the DSP to access the peripherals (audio output, RAM, SD card, ...)?

Is it possible to do my audio processing with the DSP alone, without using the Cortex-A8 or DSPLink?

One of my ideas was to use the codecs that are available in the DVSDK. Is that a good idea? If so, how can I use them?

In my mind, it is the Cortex that controls all access to the peripherals. So if we want to listen to an audio file on the OMAP3530 EVM board, we have to go through the Cortex-A8: the Cortex-A8 sends the codec and the audio file to the DSP, the DSP performs the processing and decodes the file, and then it gives the decoded data back to the Cortex, which, as the only core with access to the peripherals, sends it to the audio output. Is that correct?

Do you have any example of an algorithm that performs audio processing?

You can surely tell from my explanation that things are confused in my mind; that is why I need your help to clear up all the doubts I have.

Thank you in advance,

Jean FAYE

  • Hi Jean,

    Let's discuss this matter here (as opposed to the other thread) since you provide more detail. The DVSDK demos include a "decode demo" which outputs video as well as audio. You can run the demo with, e.g., ./decode -a myfile.aac to decode an AAC-encoded file (and the DVSDK ships with one).

    You are right: the Cortex-A8 has access to the peripherals whereas the DSP does not. What the decode demo does is send the audio file, frame by frame, to the DSP for decoding, and then out to the audio DAC on the board (to your headphones). This sounds like what you want to do, and since the decode demo comes with source code it should be a good starting point.
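
    In outline, the ARM-side loop looks something like the sketch below. It is heavily simplified, and dsp_decode_frame() and audio_write() are placeholders standing in for the Codec Engine calls and the Linux audio output the real demo uses, not actual DVSDK APIs:

        #include <stdio.h>
        #include <stdint.h>

        /* Placeholder hooks: in the real demo these map to Codec Engine
         * calls that run the codec on the DSP and to the Linux sound
         * driver that feeds the on-board audio DAC. */
        extern int dsp_decode_frame(const uint8_t *enc, int encLen,
                                    int16_t *pcm, int *pcmLen);
        extern int audio_write(const int16_t *pcm, int pcmLen);

        /* Frame-by-frame decode loop, heavily simplified: the real demo
         * parses proper frame boundaries instead of reading fixed-size
         * chunks from the file. */
        int decode_loop(FILE *fp)
        {
            uint8_t enc[2048];
            int16_t pcm[4096];
            int     n, pcmLen;

            while ((n = (int)fread(enc, 1, sizeof enc, fp)) > 0) {
                if (dsp_decode_frame(enc, n, pcm, &pcmLen) != 0)
                    return -1;              /* decode error / end of stream */
                audio_write(pcm, pcmLen);   /* PCM out to the DAC and headphones */
            }
            return 0;
        }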

    If you want to develop your algorithm using CCS, you can start with the DSP-side app shipped with the AAC codec. It can be found at:

    dvsdk_3_00_02_44/cs1omap3530_1_00_01/packages/ti/sdo/codecs/aachedec/app

    This is a frame-by-frame test application; note, however, that it does not use files.
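
    In that kind of test harness the encoded frames are simply buffers in DSP memory (loaded by CCS or linked into the image), and the app iterates over them roughly as below. The names are made up for illustration, not the ones used in the shipped app:

        #include <stdint.h>

        #define NUM_FRAMES  4
        #define FRAME_SIZE  1024
        #define PCM_SAMPLES 4096

        /* Encoded frames are plain arrays in DSP memory (loaded by CCS or
         * linked into the image); no file system is involved. */
        extern const uint8_t inputFrames[NUM_FRAMES][FRAME_SIZE];
        static int16_t outputPcm[PCM_SAMPLES];

        /* Stand-in for the codec's per-frame process call (the shipped
         * test goes through the codec's XDM interface). */
        extern int codec_process_frame(const uint8_t *in, int inLen,
                                       int16_t *out, int outSamples);

        int main(void)
        {
            int i;

            for (i = 0; i < NUM_FRAMES; i++) {
                /* Decode one frame; inspect outputPcm in CCS or compare it
                 * against a reference buffer to verify the algorithm. */
                if (codec_process_frame(inputFrames[i], FRAME_SIZE,
                                        outputPcm, PCM_SAMPLES) != 0) {
                    return -1;
                }
            }
            return 0;
        }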

    Regards, Niclas