MSP430G2111 & MSP430G2131 A/D (Slope versus 10-bit SAR)

Greetings,

Can you please provide a simple explanation of the difference between the MSP430G2111 and MSP430G2131 A/D options, i.e. the difference between a slope ADC and a 10-bit SAR ADC? I assume the 10-bit SAR is the better option, but can you explain what its strengths are, or the deficiencies of the slope A/D?

  • A slope ADC basically consists of a ramp generator and a comparator. The ramp generator produces a sawtooth signal, and the comparator triggers at the moment the sawtooth exceeds the input signal. The time elapsed from the beginning of the ramp is the conversion result. Precision depends on the precision of the ramp generation, which is often done with a capacitor and a current source, so the current and capacitor tolerances affect the timing. The precision of the time measurement, the linearity of the comparator, and the comparator latency also affect the result.
    While often present, a slope converter does not need a sampling capacitor. If the input signal changes during a conversion, it just means that the comparator will trigger sooner or later; it will still trigger at the moment the input signal matches the ramp, so the result is correct. There is, however, a certain uncertainty about the exact moment on the input timeline that the conversion result belongs to. Hence the occasional sampling capacitor (then it is clear that the result belongs to the moment the sampling stage ended).
    The conversion result is available as soon as the comparator has triggered.
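    The principle is easy to model in plain C. The sketch below is purely illustrative and is not MSP430 register code; the 10-bit ramp resolution and the 3.3 V reference are assumptions chosen just to show how the elapsed tick count becomes the conversion result:

    ```c
    #include <stdio.h>

    /* Idealized slope ADC model: a ramp rises one step per tick until it
     * exceeds the input; the tick count is the conversion result. */
    static unsigned int slope_convert(double vin, double vref)
    {
        unsigned int ticks = 0;
        double ramp = 0.0;
        const double ramp_step = vref / 1024.0;   /* assumed 10-bit ramp resolution */

        while (ramp < vin && ticks < 1024) {      /* comparator "triggers" when ramp >= vin */
            ramp += ramp_step;                    /* slope set by current source / capacitor */
            ticks++;
        }
        return ticks;                             /* conversion time depends on the input level */
    }

    int main(void)
    {
        printf("0.50 V, 3.3 V ramp -> %u ticks\n", slope_convert(0.50, 3.3));
        printf("3.00 V, 3.3 V ramp -> %u ticks\n", slope_convert(3.00, 3.3));
        return 0;
    }
    ```

    Note how a small input triggers the comparator early and a large input late, which is why the conversion time (unlike a SAR's) depends on the signal.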

    An SAR usually has a sampling capacitor and a comparator, and instead of a ramp generator it has a DAC. The DAC is used to generate an output voltage that is compared to the buffered input signal. First the MSB is set, so the input is compared to 1/2 Vref. Depending on the result, the bit is either cleared or kept, then the next lower bit is set, and so on. The value in the DAC register thus performs a successive approximation of the input signal, hence the term SAR (successive approximation register).
    The conversion is timed by a clock signal, and it takes one clock tick per bit (+1 for initialization), plus the preceding sampling time. So each conversion takes the same time, independent of the signal. Precision depends mainly on the precision of the DAC.
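    As a rough sketch of that bit-by-bit loop (not the actual ADC hardware), in C it looks like the following; input_at_or_above_dac() is a placeholder standing in for the real comparator and DAC:

    ```c
    #include <stdio.h>

    /* Placeholder comparator: nonzero if the input is at or above the DAC
     * output for the given code (here both are just integer codes). */
    static int input_at_or_above_dac(unsigned int dac_code, unsigned int input_code)
    {
        return input_code >= dac_code;
    }

    /* Idealized 10-bit SAR loop: set each bit from MSB to LSB, keep it if the
     * input is still at or above the DAC output, clear it otherwise. */
    static unsigned int sar_convert(unsigned int input_code)
    {
        unsigned int result = 0;
        for (int bit = 9; bit >= 0; bit--) {
            result |= 1u << bit;                    /* tentatively set the bit (MSB first: 1/2 Vref) */
            if (!input_at_or_above_dac(result, input_code))
                result &= ~(1u << bit);             /* input below DAC output: clear the bit */
        }
        return result;                              /* always 10 comparisons, one per bit */
    }

    int main(void)
    {
        printf("input 0x2A7 -> result 0x%03X\n", sar_convert(0x2A7));
        return 0;
    }
    ```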

    The MSP430's ADCs use a different approach that does not require a separate DAC. Instead, they use a cascaded set of binary-weighted capacitors as the sampling capacitor. On these capacitors (which can be produced quite precisely on silicon), a method named charge redistribution is used. The capacitors are charged against GND and then switched against VCC or against each other to determine their charge. Since adjacent capacitors have a 2:1 ratio, this also leads to a pretty precise result, and the linearity/tolerance of a DAC is not an issue here. The timing is the same as with a normal SAR.
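    In an idealized model (normalized 10-bit capacitor array, ideal switches, both assumptions for illustration only), the array's top-plate voltage takes the place of the DAC output, and the decision sequence reduces to the same binary search as above:

    ```c
    #include <stdio.h>

    /* Idealized charge-redistribution model: sampling leaves -Vin on the top
     * plate; during conversion, each binary-weighted capacitor switched to the
     * reference adds Vref * weight, and the comparator checks the polarity. */
    static unsigned int charge_redistribution_convert(double vin, double vref)
    {
        unsigned int result = 0;

        for (int bit = 9; bit >= 0; bit--) {
            unsigned int trial = result | (1u << bit);
            /* Fraction of the array currently switched to the reference: trial/1024 */
            double v_top = -vin + vref * ((double)trial / 1024.0);
            if (v_top <= 0.0)               /* top plate still at or below 0 V: keep the bit */
                result = trial;
        }
        return result;
    }

    int main(void)
    {
        printf("1.65 V, Vref = 3.3 V -> %u (expect ~512)\n",
               charge_redistribution_convert(1.65, 3.3));
        return 0;
    }
    ```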

    Some other MSPs use a Delta-Sigma ADC (SD16/SD24). This is a completely different approach: precision is quite high, but conversion speed is very low compared with an SAR or even a slope converter. I explained it in some other posts in this forum (and there are some good resources on the web).
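    For comparison, the Delta-Sigma principle can also be modeled in a few lines: an integrator, a 1-bit quantizer, feedback, and a decimation (averaging) filter. This is only an idealized first-order model, not the SD16/SD24 hardware, and the oversampling ratio of 256 is an arbitrary assumption; it mainly shows why many modulator cycles are needed per result, hence the low conversion rate:

    ```c
    #include <stdio.h>

    /* Idealized first-order Delta-Sigma modulator: the average of the 1-bit
     * output stream equals the input (normalized to 0..1); many samples are
     * accumulated (decimated) for one conversion result. */
    static double delta_sigma_convert(double input, unsigned int oversampling)
    {
        double integrator = 0.0;
        unsigned int ones = 0;

        for (unsigned int i = 0; i < oversampling; i++) {
            int bit = (integrator >= 0.0);            /* 1-bit quantizer */
            integrator += input - (bit ? 1.0 : 0.0);  /* feedback of the quantized value */
            ones += bit;
        }
        return (double)ones / oversampling;           /* decimation by simple averaging */
    }

    int main(void)
    {
        printf("input 0.30 -> %.4f\n", delta_sigma_convert(0.30, 256));
        printf("input 0.75 -> %.4f\n", delta_sigma_convert(0.75, 256));
        return 0;
    }
    ```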
