ADC, 8-bit resolution

Hello, I am using the TM4C129.

I do not think this is possible, based on my reading of the datasheet, but is it possible to get 8-bit precision from the ADC instead of 12 bits?

Thanks

Daniel

  • You are confusing "precision" with "accuracy." Precision is the "clustering" of results around some value; accuracy is the "deviation" between the measured value and the (real) value.

    Now, these MCUs are not (really) capable of full 12-bit accuracy. Even when fed a (perfect) input signal, the three least significant bits will almost always "bounce." I'm not "knocking" this vendor; my firm and I use ARM MCUs from multiple vendors, and they all suffer similarly.

    To obtain, "8 bit ACCURACY" you may (simply) reject the 4 lsb.   (i.e. mask them out)    You will find those 8 msbits to be very stable/consistent.

  • Hello cb1,

    Agreed. A right shift of 4 will also do, unless the signal level is such that it resides in the lower 4 bits, where either shift or mask will be of no help...

    Regards
    Amit
  • Hi Amit,

    Indeed your "Right Shift" (4 lsb into oblivion) will work. I chose "mask" as it (might) be easier for posters to comprehend.

    Now, if the (real) signal level is confined to the lower 4 bits, the poster is (a bit) "stuck," and a (real) quality ADC is then likely required, which your firm, ADI, and Linear Tech all supply...
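
    For completeness, a sketch contrasting the two variants (same made-up 12-bit input as above):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t sample12 = 0xABCu;             /* made-up 12-bit reading, 2748 */

        /* Right shift discards the 4 LSBs and rescales to a true
           8-bit range. */
        uint32_t shifted = sample12 >> 4;       /* 0..255, here 171 */

        /* Mask discards the same bits but keeps the 12-bit scale. */
        uint32_t masked  = sample12 & 0xFF0u;   /* 0..4080, here 2736 */

        printf("shifted=%u masked=%u\n", (unsigned)shifted, (unsigned)masked);
        return 0;
    }
    ```
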
  • Hello cb1,

    I was just troubleshooting the 8-bit ADC concept. If the signal level does not reach the range of the upper 8 bits, or only skims the lower end, most of the conversions would look suspiciously similar.

    Regards
    Amit
  • As cb1 and Amit have pointed out, it's quite possible to reduce the precision. Yes, "precision" is the right word, since we are referring to the precision of the instrument, what I was taught to call the precision measure of the instrument. However, that's a digression.

    However, I suspect these are answers to a different question than the one you thought you were asking. If it is the question you were asking, then I would ask: why? Why does it matter? The only case I can think of where it is useful is for compatibility with SW written for an 8-bit processor. Even in that case, it's probably worthwhile to at least evaluate increasing the internal resolution.

    I can think of no case offhand that actually benefits from this, even though it's not necessarily harmful to anything other than computation overhead.

    I have always gone the other direction myself: I actually add bits so that the internal representation is a signed 16-bit number. Filters and some other intermediate storage use 32 bits. The advantages of this are (see the sketch after this post):

    Internals are essentially independent of A/D precision up to 15 bits
    Filters carry enough information to avoid issues with round-off without needing to resort to floating point
    Single-pole IIR filters still only require addition, subtraction, and shift operations, thus are quite fast
    int must be at least 16 bits and long at least 32 for a compliant C/C++ compiler, so you are assured of being able to use them

    Robert
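
    A minimal sketch of the approach Robert describes (my illustration with invented names, not his actual code): widen the 12-bit reading into a signed 16-bit internal value, then run a single-pole IIR low-pass using only add, subtract, and shift, with a 32-bit accumulator so round-off does not build up:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Widen a 12-bit unsigned reading (0..4095) into a signed 16-bit
       internal representation centered on zero. */
    static int16_t widen(uint32_t sample12)
    {
        return (int16_t)((int32_t)(sample12 << 4) - 32768);
    }

    /* Single-pole IIR low-pass, y += (x - y) / 2^k, using only
       add/subtract/shift.  The 32-bit accumulator carries k extra
       fraction bits so round-off does not accumulate.
       (Assumes the usual arithmetic right shift of negative values.) */
    #define IIR_SHIFT 4

    static int32_t acc;   /* filtered value, scaled by 2^IIR_SHIFT */

    static int16_t iir_step(int16_t x)
    {
        acc += (int32_t)x - (acc >> IIR_SHIFT);
        return (int16_t)(acc >> IIR_SHIFT);
    }

    int main(void)
    {
        /* Feed a constant reading and watch the filter settle on it. */
        for (int i = 0; i < 8; i++)
            printf("%d\n", (int)iir_step(widen(3000u)));
        return 0;
    }
    ```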