
Why do we need to use oversampling followed by the "decimation" method to increase the ADC resolution, and not oversampling followed by "averaging"?

Increasing the resolution from 12 bits to 14 bits can be done through the "oversampling and decimation" method.

The Atmel application note (http://www.atmel.com/Images/doc8003.pdf) says:

"The higher the number of samples averaged is, the more selective the low-pass filter will be, and the better the interpolation. The extra samples, m, achieved by oversampling the signal are added, just as in normal averaging, but the result is not divided by m as in normal averaging. Instead the result is right shifted by n, where n is the desired extra bits of resolution, to scale the answer correctly. Right shifting a binary number once is equal to dividing the binary number by a factor of 2."

"It is important to remember that normal averaging does not increase the resolution of the conversion. Decimation, or interpolation, is the averaging method which, combined with oversampling, increases the resolution."

This reference clearly says that in the decimation method the result is right shifted by the desired number of extra bits of resolution, and not divided by m as in normal averaging.
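
As a concrete illustration of that description, here is a minimal C sketch of the sum-and-shift (decimation) step for two extra bits of resolution. The read_adc12() function is a hypothetical driver call that returns one raw 12-bit sample, and the 4^n-samples-for-n-extra-bits factor follows the rule of thumb given in the application note.

```c
#include <stdint.h>

/* Hypothetical driver call: returns one raw 12-bit ADC sample (0..4095). */
extern uint16_t read_adc12(void);

/*
 * Oversample and decimate as described in the quoted passage: for n extra
 * bits of resolution, sum 4^n samples taken at the oversampled rate and
 * right shift the sum by n instead of dividing by the number of samples.
 * Here n = 2, so 4^2 = 16 samples are summed and the sum is shifted right
 * by 2, giving a 14-bit result in the range 0..16383.
 */
uint16_t read_adc14(void)
{
    uint32_t sum = 0;

    for (uint8_t i = 0; i < 16; i++) {   /* 4^n = 16 oversampled readings */
        sum += read_adc12();
    }

    return (uint16_t)(sum >> 2);         /* right shift by n = 2 bits */
}
```

Dividing the same sum by 16 (a right shift by 4) would bring the result back into the 0..4095 range, which is the "normal averaging" case the note says does not increase the resolution.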

So, the question is: why do we need to use decimation, and not normal averaging, after the oversampling to increase the ADC resolution?

  • Muhammad,

    Oversampling and decimating the input signal is how delta-sigma converters achieve multi-bit resolution even though they use a single-bit converter. Basically, the input signal is oversampled, meaning that samples are taken at a much higher rate than the output sampling frequency. Decimation then drops samples while interpolating between the samples in each sample period through a low-pass filter. A good reference explaining this methodology can be found in How Delta-Sigma ADCs Work. Averaging, by contrast, just takes a collection of samples at the sampling period, sums them together, and divides by the number of samples collected.

    If the noise is purely random, averaging tends to lower the noise and increase the SNR. By taking more samples in time, the noise will gradually even out and thus be averaged out. Averaging does not really add extra bits of resolution; it just makes the smallest bit a little more accurate. Note that we are sampling at the sampling frequency and not taking more samples in between the sampling periods. Let's take an example:

    The sampling frequency is 32kHz. This means we take a sample every 31.25usec.

    If we average 10 samples, we take 312.5usec to collect the data, sum them together, and divide by 10. So the signal is still sampled at 32kHz.

    If we oversample by 10x, we take a sample every 3.125usec, so we have 10 samples in 31.25usec, then decimate, interpolate and report one number every 31.25usec. The signal is sampled at 320kHz, so we get more values to track the changing signal, increasing the resolution of each sample.

    As you can see, what really adds more bits of resolution is to oversample the signal and then decimate it. If you just average a signal without oversampling, you will just cancel random noise. However, taking more samples at a higher rate and then interpolating between the samples is what provides the extra bits of resolution. A short code sketch contrasting the two approaches follows after this post.

    Best regards,

      Pedro
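
To make the contrast in Pedro's reply concrete, here is a minimal C sketch of plain averaging, using the same hypothetical read_adc12() call as in the earlier sketch; the timing figures in the comments are taken from the 32kHz example above.

```c
#include <stdint.h>

/* Hypothetical driver call: returns one raw 12-bit ADC sample (0..4095). */
extern uint16_t read_adc12(void);

/*
 * Plain averaging, as in the example above: 10 samples taken at the 32kHz
 * sampling rate (one every 31.25usec, 312.5usec in total), summed and then
 * divided by 10. The result still lies in 0..4095, i.e. it is still a
 * 12-bit number; random noise is reduced, but no extra bits of resolution
 * are gained.
 */
uint16_t average_10_at_32khz(void)
{
    uint32_t sum = 0;

    for (uint8_t i = 0; i < 10; i++) {
        sum += read_adc12();          /* one reading per 31.25usec period */
    }

    return (uint16_t)(sum / 10);      /* divide by m: range stays 12-bit */
}
```

Compare this with read_adc14() in the earlier sketch: there the 16 samples are taken at the oversampled rate and the sum is right shifted by only 2, so the result spans 0..16383 and keeps the two extra bits of resolution.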