To increase the ADC resolution from 12 bits to 14 bits, one can use the 'oversampling and decimation' method.
The quoted Atmel application note (http://www.atmel.com/Images/doc8003.pdf) says:
''The higher the number of samples averaged is, the more selective the low-pass filter will be, and the better the interpolation. The extra samples, m, achieved by oversampling the signal are added, just as in normal averaging, but the result is not divided by m as in normal averaging. Instead the result is right shifted by n, where n is the desired number of extra bits of resolution, to scale the answer correctly. Right shifting a binary number once is equal to dividing the binary number by a factor of 2.''
''It is important to remember that normal averaging does not increase the resolution of the conversion. Decimation, or interpolation, is the averaging method which, combined with oversampling, increases the resolution.''
This reference clearly says that in the decimation method the result is right-shifted by the desired number of extra bits of resolution, and not divided by m as in normal averaging.
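For concreteness, here is a minimal sketch in C of the two approaches, following the note's rule of taking 4^n samples for n extra bits (so 4^2 = 16 samples for 12 -> 14 bits). The function read_adc_12bit() is a hypothetical placeholder for the actual ADC driver call:

```c
#include <stdint.h>

/* Hypothetical 12-bit ADC read (0..4095); replace with the real driver call. */
extern uint16_t read_adc_12bit(void);

/* Normal averaging: sum 16 samples and divide by 16.
 * The result still spans 0..4095, so no resolution is gained. */
uint16_t average_12bit(void)
{
    uint32_t sum = 0;
    for (uint8_t i = 0; i < 16; i++)
        sum += read_adc_12bit();
    return (uint16_t)(sum / 16);   /* still a 12-bit result */
}

/* Oversampling and decimation: sum 4^n = 16 samples (n = 2 extra bits)
 * and right-shift by n = 2 instead of dividing by 16. */
uint16_t oversample_14bit(void)
{
    uint32_t sum = 0;
    for (uint8_t i = 0; i < 16; i++)
        sum += read_adc_12bit();
    return (uint16_t)(sum >> 2);   /* 0..16380, i.e. a 14-bit scale */
}
```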
So, the question is: why do we need to use decimation, and not normal averaging, after oversampling to increase the ADC resolution?