
About Reducing Background Noise

     Hi all,

     I am trying to reduce background noise using the Speex preprocessor functions. I use an AIC3204 with a C5505. I downloaded the eZdsp Audio Filter demo project and found it useful for getting samples via DMA. The only change I made was to replace the FIR function with the preprocess function, but I got bad results: I hear echoes in the voice (it sounds robotic). When I comment out the function, the voice is clean. I assume this is caused by the processing time of the preprocess function (5 ms), because when I remove the function and put a delay in its place instead, the voice degrades once the delay exceeds 1 ms. My frame and DMA transmit size is 160. Where did I make a mistake?

     Thanks.  

  • Hi metus.

    Your description suggests that you already know what is going on, but I'll try to give a more detailed explanation. You are right: the problem is the processing time of the algorithm. What you call a 'robotic voice' is in fact unintentional modulation. Since processing one frame takes 5 ms, while the ping buffer is being processed the DMA sends to the codec not only the pong buffer but also the ping buffer that is still in use at that time. And that's not all: as you measured, this cycle repeats a few times during a single processing run. As a result, the output stream consists of correctly processed frames separated by unpredictable samples. This is quite similar to upsampling, where you insert zeros between samples, but here the sampling frequency is fixed in the codec, so what you get is a shift in voice pitch plus noise.

     To be honest, it might be hard to fix this problem. Even if there is something to optimize in the processing code, you would need to make it execute 6-7 times faster. Of course, I assume the code is already fixed point; otherwise you just need to convert the float operations to fixed point (it will become at least 10 times faster). There is no other solution: your algorithm must be finished before the next DMA event. That is essential in DSP programming. Depending on the algorithm, frame size sometimes matters. Usually processing time is proportional to the number of samples, so frame size doesn't really change anything, but if there is another relation (e.g. T ~ N^2), a shorter frame can make a difference.