
Implementing a Digital EQ in real time on C2000 DSP



An associate and I are designing and implementing a digital equalizer on a C2000 DSP.  We are currently using a TI audio CODEC to perform the A/D and D/A conversions, and a resistive-touchscreen LCD to provide a GUI for controlling the EQ.  However, we have hit a fundamental knowledge barrier that neither of us seems able to overcome.

We have already designed the EQ in Matlab to get a feel for how to perform the computations on a DSP.  In Matlab, however, we used the wavread() function to read the entire audio source into memory before any filtering was performed (naturally), and this is where our roadblock arises.  We plan to use FIR filtering, convolution, gain, etc. for the actual computations, but are struggling to find a way to replicate something like wavread().  Clearly we aren't hunting for a DSP version of wavread(); we are just not sure how to use the A/D values in its place.  A buffer will obviously be needed to store the A/D samples, but it would be extremely inconvenient (and wrong) to essentially listen to the entire song while sampling it into a buffer, then perform the equalization, and then play it back.  Should we instead fill the buffer with a predetermined amount of data based on the FIR filter used, filter that block, send it out, and repeat until the song is over?

I apologize if this turned into an extremely muddled question.  To sum up: how do we go about storing and filtering in real time instead of sampling the entire song and then performing the filtering?

Thanks,

TGM

  • This will depend on a few things.  First, is the data being streamed in real time, or can it be sent faster than real time?

    If the data is sent in real time, you can use two identical buffers in a ping-pong arrangement instead of equalizing the entire song at once.

    What you do is fill BufferA with data from the song.  When BufferA is full, you perform equalization on BufferA and output it.  While BufferA is being processed, BufferB is filled with new data.  Once BufferA is done, BufferB is ready for processing; while BufferB is processed, BufferA is filled with new data, and so on.
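    In C, the ping-pong swap amounts to exchanging two buffer pointers at each block boundary.  A minimal single-threaded sketch of the scheme above (on real hardware the fill would happen in an ADC/DMA interrupt while the main loop processes the other buffer; buffer size and names are illustrative):

    ```c
    #include <stdio.h>

    #define BLOCK_SIZE 4

    static float bufA[BLOCK_SIZE], bufB[BLOCK_SIZE];

    int main(void)
    {
        /* fillBuf is the buffer the ADC side would be filling;
           procBuf is the buffer being equalized and output. */
        float *fillBuf = bufA;
        float *procBuf = bufB;
        int block;

        for (block = 0; block < 3; block++) {
            int i;

            /* Stand-in for the ADC ISR filling the capture buffer. */
            for (i = 0; i < BLOCK_SIZE; i++)
                fillBuf[i] = (float)(block * BLOCK_SIZE + i);

            /* Block boundary: swap roles of the two buffers. */
            float *tmp = fillBuf;
            fillBuf = procBuf;
            procBuf = tmp;

            /* Stand-in for EQ processing + DAC output. */
            printf("block %d: %g..%g\n",
                   block, procBuf[0], procBuf[BLOCK_SIZE - 1]);
        }
        return 0;
    }
    ```

    The key property is that filling and processing never touch the same buffer, so the ISR and the main loop cannot corrupt each other's data.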

    If the data can be sent faster than real time, you may be able to use just one buffer.  You would fill BufferA, process it, and then queue it for output.  Once the processed data in BufferA has been sent to the output buffer, you can load and process new data while the old data is being output.  This requires some sort of output buffer, and it assumes you have enough processing headroom to refill the buffer and complete the equalization before the old data finishes playing out in real time.

  • Thanks, that makes perfect sense! Ping pong method it is.