
SD16 ADC question

Other Parts Discussed in Thread: MSP430F2013

Hi,

I am learning about the MSP430F2013. Everything seemed fine so far until I met this guy - the SD16. I am trying to learn it, but I am still confused. What I understand is:

1. The analog signal goes to the differential input of the SD16.

2. That differential voltage goes to the Sigma-Delta module and is converted to a "bit stream" (a 1-bit stream) with frequency Fm.

3. Fm is much higher (oversampled) than Fs, the data output rate of the digital filter (which must be greater than the input frequency). The relation between Fs and Fm is Fs = Fm/OSR (OSR = oversampling ratio, e.g. 32, 64, 128, 256, 512 or 1024).

4. The output of the Sigma-Delta module, a bit stream at frequency Fm, goes to the digital filter (low pass). Depending on the OSR value, the output width ranges from 15 to 30 bits (according to the datasheet). This is where I get confused. Why does it range from 15 to 30 bits? Can anyone explain? Shouldn't it be 16 bits?

Also, I read a paper saying that the SD16 can access a total of 24 bits from the digital output. I think I need paper and pencil now... Anyway, any explanation would be much appreciated!

Regards,

  • Indeed, the SD16 isn't easy to understand.

    1. Yes. On some MSPs there is an optional amplifier which adds some gain to small signals, but it also reduces the SNR (signal-to-noise ratio), so its benefit is not as large as it seems at first glance (it still adds some bits of resolution).

    2. The SD16 has a second-order SD modulator. So it's not just a 1-bit stream, at least not exactly. But don't ask me how the second-order SD works - that's a chapter of its own.

    3. Not exactly. Fm is actually the modulation frequency (hence Fm) of the SD16 and also the output frequency of the bitstream. The oversampling ratio then determines the output frequency of the digital filter. In theory it would be possible to output one value from the filter with each sample, but then the digital filter would have to be way more complex (just as a moving average over n samples requires much more memory than simply adding up n samples and then dividing by n).
    But on the bottom line, this difference to your interpretation is rather 'internal'.

    4. Yes, that's confusing indeed. Yet it lies in the way the digital filter works. The more samples are fed into the digital filter, the more bits it produces. Just like building an average: if you feed 10 values in the range 0..9 into it and divide by ten, you get a result in the range 0.0 to 9.0; if you feed 100 values into it, you get a result in the range 0.00 to 9.00. The digital filter is a bit more complex, but basically the more values you feed into it, the more bits you get (and the meaning of the bits changes too). With the lowest OSR the filter won't produce more than 15 bits, and of these, some are more or less meaningless (below the SNR level). With an OSR of 1024, up to 30 bits are produced, still some of them meaningless because of the SNR, but the filter does not know about the SNR, so it provides all of them. Well, not all 30 at once, only the upper ones, as the registers work that way. I guess the current implementation, with a register offering the upper 16 bits and alternatively the lower 16 bits (even if these overlap), saves some programming effort compared to a plain 32-bit register with changing bit meanings (depending on the OSR). (See the sketch at the end of this post for how the bit count relates to the OSR.)
    The SD16 is called SD16 because under best conditions you'll get ~16 significant bits. But if one wants more bits, well, why not. You can average several samples to reduce the noise and make more bits significant if you really want.
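
    As a rough illustration (my own sketch, not TI's documented internals): a k-th order comb/sinc decimation filter fed with a 1-bit stream grows its output word to roughly k * log2(OSR) bits. Interestingly, k = 3 reproduces exactly the 15..30-bit range quoted from the datasheet for OSR = 32..1024, which would be consistent with a third-order comb filter - but that is only an inference.

        /* Hypothetical illustration: output word width of a k-th order
         * comb (sinc^k) decimation filter driven by a 1-bit stream is
         * about k * log2(OSR).  k = 3 reproduces the 15..30 bit range
         * quoted above for OSR = 32..1024. */
        #include <stdio.h>

        static unsigned log2u(unsigned x)        /* integer log2 */
        {
            unsigned b = 0;
            while (x >>= 1)
                b++;
            return b;
        }

        int main(void)
        {
            const unsigned osr[] = { 32, 64, 128, 256, 512, 1024 };
            unsigned i, k;

            for (i = 0; i < 6; i++) {
                printf("OSR %4u:", osr[i]);
                for (k = 1; k <= 3; k++)
                    printf("  order %u -> %2u bits", k, k * log2u(osr[i]));
                printf("\n");
            }
            return 0;
        }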

  • Hi Jens-Michael,

    So for example, I have a signal with an input frequency of 100Hz. My output rate will be Fs > 200Hz (for example 300Hz). If my OSR = 256, that means my Fm = 300 x 256 = 76800Hz.

    1. The input Fin = 100Hz goes into the SD16 module and is converted to a bit stream with frequency 76800Hz = 76800 samples/s.

    2. These 76800 samples are fed into the digital filter. They mostly contain noise in the bandwidth 76800 - 300 = 73800Hz. Is it true that ~73800 of the samples are noise?

    3. Out of the digital filter, the 76800 samples are filtered and downsampled to 300Hz, which means it filters out 73800 samples. Is that right?

    I can take more bits because I can choose not to filter out all 73800 samples, but instead filter out only 70000 samples, which gives me access to 3800 more samples = access to more bits (for example)???

    One more question: how do you know an OSR of 1024 can produce up to 30 bits? In other words, how do I know which bits are useful? And why is 6dB = 1 bit (as I read somewhere on the forum)?

    Regards,

  • 1. Yes. Yet normally Fm is just the internal ~1MHz and you only request the conversions more slowly than you could, or let it run continuously and read only every Nth result, so you read your 300 samples per second.
    But don't confuse an SD16 sample (a value that represents your signal) with a sample from the SD stage that is put into the digital filter. These samples are NOT complete values, only a bit (or 2, as it is a 2nd-order SD), indicating that your value is above or below a threshold value based on the current modulation.

    Imagine a tracking ADC. It has a D/A stage that starts at 0 and increases its output by 1 bit with each clock impulse (frequency Fm). This D/A output is compared to your signal; if the signal is higher, a '1' is produced, and if it is lower, a '0' is produced. This stream of '1's and '0's is accumulated, and after the DAC has stepped from 0 to maximum, the number of '1' bits generated is the digitized value of the signal (your 'sample').
    The SD works comparably, only that the bits are not generated by comparing to a D/A output and are not just accumulated but fed into a more complex digital filter. And the number of steps in the D/A is comparable to the oversampling ratio. If the OSR changes (the number of D/A steps per sample), then the output value changes meaning (more 'output bits' are generated). In the case of the tracking ADC, you get 1 more bit for each doubling of the DAC steps. For the SD, the ratio is a bit different. And I don't know the exact calculation.

    2. No. All samples from the SD stage are required for getting the final value. Yet each single sample is subject to noise. By taking many of them, the digital filter can provide more meaningful bits. E.g. with OSR=1024, the digital filter will get 1024 sample bits. Some of them are '0' where they should be '1' and some are '1' where they should be '0'. Yet after being passed through the digital filter, the filter will provide a 'bit count value' (still not correct, but easier to imagine without all the math) that has 30 bits which are likely to be significant, while there are only 15 possibly significant bits when the OSR is only 32. (Don't try to match the bit width of the output with the number of samples. The math in between is missing, so you cannot see the dependence.)

    3. See 2. Out of 256 bit samples from the bitstream, a digital value is generated that has 24 bits or so, of which many are insignificant.

    This insignificance has a reason: the calculations are done with the assumption of perfect components. Yet each component has tolerances, and there is noise added by each resistor or semiconductor inside the whole circuitry, etc., as well as rounding in the digital filter and quantization errors. All this is described by the SNR value, which is e.g. 72dB. That means that at full signal level, the signal is larger than the noise added by the system by a factor of 4096. But it also means (in the case of 72dB) that the noise introduced at maximum readout is in the range of the bits below the upper 12 bits. If your signal is only 1/2 of the maximum, it may well be that the upper 13 bits are significant, yet the MSB is of course 0 in this case, so you still have 12 significant bits (plus 1 fixed 0-bit), yet they have only 1/2 the value.

    Why 6dB per bit? Simple. dB expresses a ratio. 20dB is a voltage ratio of 10:1.
    Originally, dB describes a power ratio; there, 10dB (deciBel) or 1B (Bel) is 10:1. But since power goes with the square of the voltage and the calculation uses a logarithm, an additional factor of 2 is introduced for the 10:1 voltage ratio (log(x^2) = 2*log(x)).
    One bit, however, is a ratio of 2:1. 3 bits are 8:1, so ~3.33 bits are 10:1, and 20/3.33 is 6, so 6dB per bit. And this is why I had the 12 bits above: at an SNR of 72dB, the noise level is 72dB = 12 bits below the signal level.
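
    A quick numeric check of the dB-to-bits arithmetic above (plain C, nothing SD16-specific):

        /* dB per bit and SNR-to-bits conversion used above. */
        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            double db_per_bit = 20.0 * log10(2.0);   /* one bit = factor 2 in voltage */
            double snr_db     = 72.0;                /* example SNR from this post    */

            printf("dB per bit: %.2f\n", db_per_bit);               /* ~6.02 dB */
            printf("72 dB SNR : %.1f bits\n", snr_db / db_per_bit); /* ~12 bits */
            return 0;
        }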

     

  • Hi Jens-Michael,

    Thank you for your explanation of the 6dB factor. However, I am still not clear about the SD16 operation.

    I don't understand:

    Jens-Michael Gross said:
    with OSR=1024, the digital filter will get 1024 sample bits. Some of them are '0' where they should be '1' and some are '1' where they should be '0'. Yet after being passed through the digital filter, the filter will provide a 'bit count value' (still not correct, but easier to imagine without all the math) that has 30 bits which are likely to be significant, while there are only 15 possibly significant bits when the OSR is only 32.

    Why does the digital filter get 1024 (1-bit?) sample bits?

    Jens-Michael Gross said:
    the filter will provide a 'bit count value'

    - I suppose the filter will sum up all the bits that have value = 1? How do you know that it has 30 bits?


    So here is my thinking:

    I have Fm=1MHz, OSR=256 => Fs=4kHz (MSP430F2013 default)

    1. The voltage difference between the 2 SD16 inputs is measured and converted to a digital signal.

    2. This difference goes to the Sigma-Delta module, where it is compared, and the output of the SD module is a "1-bit" bit stream (1,0) at Fm=1MHz. This means I will have 1 million "1-bit" samples/s output at the SD module (e.g. 10001010101...1010 = 1 million bits/s output).

    3. This bit stream goes to the digital filter, which basically calculates the sum of the 1 million samples (sums up all bits that have value=1), then averages them by the OSR factor of 256 (e.g. sum of the 1 million samples / 256 = mean of the bit stream). By doing this it can reduce noise and reduce the sample rate to Fs=4kHz.

    4. Out of the whole SD16 module, all I have is 4000 samples/s (4000 mean values). Each sample will have 16 bits to represent its value.

    Thanks for being patient :D and answering my questions.

  • Uhm, I think I need to edit something here. At step 3:

    3. The bit stream goes to the digital filter at frequency Fm. The filter, with frequency Fs=4kHz, will take 4000 samples/s and average them out. Therefore, in 1 second, the output is 256 mean values (each value represented by 16 bits).

    4. So out of the SD16 module, all I have is 256 mean values of the differential input voltage. If I want more precision, I can simply sum all 256 mean values and take the average to get only 1 mean value, which basically gives me more precision.

    So Fm is the sampling frequency of the SD16 module,

    and Fs is the sampling frequency of the digital filter module, not the output frequency - is that right?

    I think this reasoning is more accurate than the previous one I posted.

     

    Regards,

  • Nogcas said:
    I am still not clear about the SD16 operation.

    Nor am I :) The exact inner workings are still TI's secret. All that's known about it is:
    "the converter is based on a second-order oversampling sigma-delta modulator and a digital comb-type decimation filter with selectable oversampling ratios up to 1024" - keywords are 'second-order', 'sigma-delta modulator', 'comb-type' and 'decimation filter'.

    I'm really not an expert on SD converters. It is a completely different approach than normal AD converters (including the ADC10 and ADC12 types in other MSPs)

    Nogcas said:
    Why does the digital filter get 1024 (1-bit?) sample bits?

    from the datasheet: The analog-to-digital conversion is performed by a 1-bit second-order sigma-delta modulator. A single-bit comparator within the modulator quantizes the input signal with the modulator frequency fM. The resulting 1-bit data stream is averaged by the digital filter for the conversion result.

    Nogcas said:
    1. The voltage difference between the 2 SD16 inputs is measured and converted to a digital signal.

    No, that's not how sigma-delta works.
    In an SD, the input signal is integrated and compared to a reference. The comparison result is output as a bit, and if the result was 1, the threshold voltage is also subtracted from the input. The resulting reduced input signal is again fed into the integrator and again compared, etc. So the bits mean either 'max' or 'zero', and their average is the average of the input signal. (A rough first-order simulation is sketched at the end of this post.)
    A second-order SD has (depending on the implementation) two integrator stages and two subtraction points, or the subtraction is based not only on the result of the last comparison but on the last two. In any case, a second-order SD does not add as much conversion noise to the bitstream as a first-order SD.

    This also explains the number of bits: the conversion noise of a first-order SD starts at ~-40dB at OSR=32 and ends at ~-80dB at OSR=1024. For a second-order SD, the conversion noise starts at ~-60dB and goes as low as ~-140dB at OSR=1024. So at OSR=32 the conversion noise is at the 11th bit, and at OSR=1024 it is at the 23rd bit of the result. Remember that the meaning of an SD bit in the bitstream is either max or zero, and max may very well be a 32-bit value. It gives you 23 meaningful bits, at least in theory. But of course there are some limitations. First, the analog noise inside the circuitry is often greater than the theoretical conversion noise (depending on OSR and modulation frequency), and not every possible value of the output bits will exist at all, depending on the digital filter (maybe the lower 10 bits are always 0 or 1, but nothing in between, or every third value does not exist, or so).
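
    For comparison, the usual textbook rule of thumb (an idealized figure from the sigma-delta literature, not a TI specification): every doubling of the OSR buys roughly (6L + 3) dB of quantization SNR for an L-th order modulator. The small sketch below prints the improvement going from OSR = 32 upwards, and it lands roughly at the ~40dB (first order) and ~80dB (second order) spans mentioned above:

        /* Textbook rule of thumb (not a TI number): an ideal L-th order
         * 1-bit sigma-delta modulator gains about (6*L + 3) dB of
         * quantization SNR per doubling of the oversampling ratio.
         * Only the improvement relative to OSR = 32 is printed here. */
        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            const double db_per_bit = 20.0 * log10(2.0);   /* ~6.02 dB */
            unsigned osr, order;

            for (order = 1; order <= 2; order++) {
                printf("order %u modulator:\n", order);
                for (osr = 64; osr <= 1024; osr *= 2) {
                    double doublings = log2((double)osr / 32.0);
                    double gain_db   = (6.0 * order + 3.0) * doublings;
                    printf("  OSR 32 -> %4u: about +%5.1f dB (+%4.1f bits)\n",
                           osr, gain_db, gain_db / db_per_bit);
                }
            }
            return 0;
        }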

    Nogcas said:
    The filter, with frequency Fs=4kHz, will take 4000 samples/s and average them out.

    No. The 'averaging' filter was just a simple example of a digital filter. An averaging filter with OSR=1024 would only produce 10 significant bits, and 8 bits with OSR=256. I think the term 'averaged' in the quote above is also used only as a simplified example.
    The comb-type digital decimation filter is a bit more complex. Yet I have no clue how it works internally. Maybe some more internet research will reveal more of the secret.
    Or maybe there is a technote from TI about this.

    Nogcas said:
    If I want more precision, I can simply sum all 256 mean values and take the average to get only 1 mean value, which basically gives me more precision.


    This would be true if the filter were a simple averaging filter. Yet it isn't. You won't get more precision by just doing some averaging. Averaging is just a low-pass filter. By averaging 2 values, you effectively halve the maximum frequency of the input signal. If your input signal is still of a (much) lower frequency, you'll eliminate the high-frequency noise in the sampled signal. Yet you won't get more precision. Every filter has its purpose :)
    And imagine getting 256 times the same value (e.g. 100). Averaging will give the same value (100) but will not tell you whether you had a value of 100.1 or 100.2 those 256 times, so you don't get more precision. It's the same reason why the SD works with modulation. A non-modulated 1-bit comparator will give you just 1 bit of precision. It will tell you whether the value is greater or smaller than the threshold. And no matter how often you sample, you'll always get the same result of 1 bit precision.

    Nogcas said:
    So Fm is the sampling frequency of the SD16 module,

    It is the modulation frequency - the frequency at which the modulation is applied to the inputs, the comparisons are done, and the bitstream is fed into the digital filter.

    There's an excellent article available in the net at http://www.beis.de/Elektronik/DeltaSigma/SigmaDelta.html 
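
    And, as promised above, here is a very rough first-order toy model (deliberately much simpler than the real second-order SD16 and its comb filter) just to make the 'integrate the difference, compare, feed back' loop and the bitstream averaging tangible:

        /* Toy first-order sigma-delta modulator (NOT a model of the SD16,
         * which is second order with a comb decimation filter).  It shows
         * that the density of '1's in the bitstream tracks the input. */
        #include <stdio.h>

        int main(void)
        {
            const double vref = 1.0;     /* feedback levels are +vref / -vref  */
            const double vin  = 0.30;    /* DC input to convert                */
            const int    osr  = 1024;    /* modulator bits per result          */

            double integrator = 0.0;
            double feedback   = -vref;
            int    ones       = 0;
            int    i;

            for (i = 0; i < osr; i++) {
                integrator += vin - feedback;      /* delta, then sigma        */
                int bit  = (integrator >= 0.0);    /* 1-bit quantizer          */
                feedback = bit ? vref : -vref;     /* 1-bit DAC fed back       */
                ones    += bit;
            }

            /* Averaging the 1-bit stream recovers the input:
             * density of ones = (vin + vref) / (2 * vref)                     */
            double estimate = ((double)ones / osr) * 2.0 * vref - vref;
            printf("ones: %d of %d  ->  estimated input %.4f V (true %.4f V)\n",
                   ones, osr, estimate, vin);
            return 0;
        }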

  • Hello Jens,

    Quick question here, based on the datasheet of the MSP430F2013 (page 45):

    SINAD = 63dB

    OSR=1024

    Gain=32

    Vin=15mV

    Fin=100Hz

    From this information, the Effective Number Of Bits (ENOB) = (SINAD - 1.76dB)/6.02 = (63dB - 1.76dB)/6.02 ~ 10 bits. This means the signal is 1024 (2^10) times bigger than the noise (signal = 1024*noise). Is this right? (Trying to interpret the datasheet :D.)

    With the same OSR and Fin but SINAD=87dB and Gain=1 (so Vin increases to 500mV... urg, less sensitive), the ENOB would be 12 bits, which means signal = 4096*noise.

    From the above calculation, I conclude that OSR=1024, Gain=1 is better at noise cancelling (better SNR) but less sensitive than OSR=1024, Gain=32. Is this true?

    So the next question: if ENOB = 10 bits, does this mean that the 11th bit up to the 16th bit represent the noise (4 bits of noise???)? (Bit 0 is the least significant bit, at the right-most position.)

    I very much appreciate your help with this. I have read the SD article you gave. Actually, I had read it before you gave it to me, but didn't clearly understand it. With your help I am step by step gaining more knowledge of this topic, and am now able to understand the SD converter (still some clouds around my mind, but now I can see the blue sky more clearly :)) ). And now I'm trying to interpret the information from the datasheet step by step.

    Regards,

  • Nogcas said:
    This means the signal is 1024 (2^10) times bigger than the noise (signal = 1024*noise). Is this right?

    More or less, yes.

    Nogcas said:
    From the above calculation, I conclude that OSR=1024, Gain=1 is better at noise cancelling (better SNR) but less sensitive than OSR=1024, Gain=32. Is this true?

    Yes, but... :)
    Don't forget the gain=32.
    With GAIN=1, a 15mV signal will be a factor of 100 below the 1.5V reference, which means that the upper 6 (almost 7) bits are always 0. With a gain of 32, your signal increases to 480mV, which is only a factor of 3 below the reference, so only ~1.5 bits are zero. You lose 2 bits of ENOB but gain 5 bits, so you end up with 3 bits more resolution for the small signal.
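
    As a numeric cross-check of that trade-off (taking the ENOB figures quoted earlier in this thread as givens):

        /* Cross-check of the gain trade-off: headroom bits recovered by
         * GAIN=32 versus ENOB bits lost.  The ENOB values are taken from
         * the figures quoted in this thread, not recomputed here. */
        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            const double vref = 1.5, vin = 0.015;            /* 15 mV signal */

            double zero_gain1  = log2(vref / (vin * 1.0));   /* ~6.6 unused top bits */
            double zero_gain32 = log2(vref / (vin * 32.0));  /* ~1.6 unused top bits */
            double enob_gain1  = 12.0;
            double enob_gain32 = 10.0;

            printf("gain  1: %.1f unused top bits, ENOB %.0f\n", zero_gain1,  enob_gain1);
            printf("gain 32: %.1f unused top bits, ENOB %.0f\n", zero_gain32, enob_gain32);
            printf("net resolution gained on the 15 mV signal: ~%.1f bits\n",
                   (zero_gain1 - zero_gain32) - (enob_gain1 - enob_gain32));
            return 0;
        }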

    Nogcas said:
    So the next question: if ENOB = 10 bits, does this mean that the 11th bit up to the 16th bit represent the noise (4 bits of noise???)? (Bit 0 is the least significant bit, at the right-most position.)


    Yes and no. Yes, if your signal is full-range. Remember that 10 bits means 0.1% precision, so the 11th bit is below 0.1%. (BTW, the numbering is wrong: if bit 0 is the LSB, then noise on bits 11 to 16 would be really BIG noise :) )
    If your signal level is only 1/4 of the reference, then the upper two bits are zero, then follow 10 bits of meaningful data, and then the noise follows. Still 0.1% resolution of the signal, not of the full range.
    The relative noise (in relation to the signal) is constant; the absolute noise (in relation to the reference) becomes smaller with a smaller signal.

    And it is not signal-noise cancelling that we are talking about; it is conversion-noise cancelling. Noise on the signal itself is (from the SD point of view) part of the signal.

    Nogcas said:
    I have read the SD article you gave. Actually, I had read it before you gave it to me [...] and am now able to understand the SD converter.


    When this thread started, my own understanding was rather limited too. But since I'm as much a hardware as a software engineer, reading this article led to instant enlightenment :)

    BTW, as the guy from this article pointed out, it should be DS rather than SD. What he's missing is the reason: he just states that Delta-Sigma was the name used by the developers. But it has a physical reason: the first stage in the converter is the difference building of signal and loopback (delta, the mathematical sign of a difference), and then comes the integrator (sigma, the mathematical sign of a sum). So Delta-Sigma is electronically correct.

  • Hello,

    Jens-Michael Gross said:
    BTW, as the guy from this article pointed out, it should be DS rather than SD. What he's missing is the reason: he just states that Delta-Sigma was the name used by the developers. But it has a physical reason: the first stage in the converter is the difference building of signal and loopback (delta, the mathematical sign of a difference), and then comes the integrator (sigma, the mathematical sign of a sum). So Delta-Sigma is electronically correct.

    I think the TI engineers assume that for the Delta, at the very first moment there is no signal to subtract (nothing in the loop, ideally)! So Sigma comes first. That's why they named it Sigma-Delta!!! lol, just my thought on why TI calls it an SD converter.

     

    OK, so on to the next question, about the Analog Input Characteristics (MSP430F2013 User's Guide, section 24.2.6, page 24-6).

    Suppose I have:

    AVcc=3V
    Vs+ = 9mV
    Vs- = -9mV
    According to the formula I have: VAx = 1.59V

    At GAIN=1, Rs=1kOhm (for example) => Tsettling > 30.1ns => fs < 16MHz

    At GAIN=32, Rs=1kOhm (for example) => Tsettling > 306.6ns => fs < 1.63MHz

    My question: fs is the maximum allowable input frequency, right?

    So with GAIN=1 I can input a signal with frequency < 16MHz, while with GAIN=32 I can only input frequencies < 1.63MHz, right?

    Regards,

  • Nogcas said:
    At GAIN=1, Rs=1kOhm (for example) => Tsettling > 30.1ns => fs < 16MHz

    You forgot Nyquist's sampling theorem and the other limitations.

    The settling time determines the minimum time required for a suddenly attached _static_ signal to settle before its level is properly recognized by the internal SD circuitry. In your case of a low-impedance source (Rs = 1k), it is a really short time. Yet the maximum frequency is limited by other factors:

    Fm is limited to 1MHz and you have at least an OSR of 32, so the maximum sampling frequency is 31250Hz under all circumstances. And the Nyquist theorem says that your sampling frequency Fs must be more than twice the maximum signal frequency, or you'll see alias frequencies in the sampled signal.

    So the maximum signal frequency is < 15625Hz for a periodic input signal under all circumstances. A (quasi-)static signal has a frequency of ~0 and is therefore below that.
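
    In numbers, a trivial sketch of these limits:

        /* Limits discussed above: with Fm capped at 1 MHz and a minimum
         * OSR of 32, the output rate and the usable signal bandwidth
         * follow from Fs = Fm / OSR and the Nyquist criterion. */
        #include <stdio.h>

        int main(void)
        {
            const unsigned long fm      = 1000000UL;   /* modulator frequency, Hz */
            const unsigned      osr_min = 32;

            unsigned long fs_max  = fm / osr_min;      /* 31250 Hz output rate    */
            unsigned long fin_max = fs_max / 2;        /* 15625 Hz per Nyquist    */

            printf("max output rate Fs  : %lu Hz\n", fs_max);
            printf("max signal frequency: < %lu Hz\n", fin_max);
            return 0;
        }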

    The formula for the settling time comes into effect if you have a high-impedance source (Rs > 100k), which is common in low-power applications, e.g. if you have an input signal of 10V and need to break it down to 2.5V with a 3:1 divider. Since this will result in a constant load, the resistors are usually chosen as high as possible. Yet the low internal resistance of 1k will distort this voltage divider (effectively being a low-impedance shortcut in parallel to the bottom resistor of the divider), and the signal takes some time to settle until no more current flows into Cs.

    Of course, in the case of a voltage divider you won't need the higher gain, so this whole part is rather academic except for some extreme circuitry. If only every part of the user's guides were that detailed - there are parts of the MSP where this would be way more necessary.
    Yet knowing the input impedance of a sensor might be important if the signal you measure is not only measured but also doing something else in the circuitry. Then it is important to know how the sensor influences the signal (by being a complex load).

  • Jens-Michael Gross said:
    If your signal level is only 1/4 of the reference, then the upper two bits are zero, then follow 10 bits of meaningful data, and then the noise follows. Still 0.1% resolution of the signal, not of the full range.

    I don't clearly understand this.

    For example: If my circuit has

    Vref = 1.2 V
    Vin = 9mV (max)
    Gain=2
    OSR=256

    According to the datasheet, with Gain=2 and OSR=256 => SINAD=77dB => ENOB=12 bits.

    So if my input is 1.2V, the 12 ENOB bits will be from the 15th bit to the 4th bit (15-4), and the last 4 bits (0-3) are noise.

    But if my max input is 9mV, with Gain=2, which means Vfsr = 0.3V,

    => 9mV = 985 LSB, or 1 LSB = 9.15uV

    So to represent 985 LSB I will need at most 10 bits (2^10 = 1024). This means I have only 7 bits, from the 10th to the 4th, to measure the signal: bits 15-11 = 0, bits 10-4 = signal, bits 3-0 = noise. (A quick cross-check of this arithmetic is sketched after the note below.)

        Bit:               15  | 14 13 12 11 | 10  9  8  7  6  5  4 | 3 2 1 0
        Full-range input:  sign|<-------- ENOB = 12 bits ---------->|  noise
        9mV input:         sign|  always 0   |        signal        |  noise

    Full-range input: signal = bits 14-4, noise = bits 3-0
    9mV input       : 0 = bits 14-11, signal = bits 10-4, noise = bits 3-0

    Is my thought right?

    So in case I want to get the signal only, I just do the following to clear the last 4 noise bits (with result standing for the current conversion result):

    result &= (unsigned int) 0xFFF0;   /* if the current conversion result > 0 */
    result &= (int) 0xFFF0;            /* if the current conversion result < 0 */

    Note: these are theoretical calculations; at this point I do not want to include any real-world problems. Just theory.
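
    As referenced above, here is a quick cross-check of the LSB arithmetic (my own calculation, assuming a bipolar full scale of +/- Vref/(2*GAIN) spread over 2^16 codes, which reproduces the 9.15uV figure):

        /* LSB cross-check, assuming a bipolar full scale of
         * +/- Vref / (2 * GAIN) split over 2^16 codes. */
        #include <stdio.h>

        int main(void)
        {
            const double vref = 1.2;     /* V                  */
            const double gain = 2.0;
            const double vin  = 0.009;   /* 9 mV maximum input */

            double fsr = vref / (2.0 * gain);       /* +/- 0.3 V         */
            double lsb = (2.0 * fsr) / 65536.0;     /* ~9.15 uV per code */

            printf("full scale: +/- %.3f V\n", fsr);
            printf("1 LSB     : %.2f uV\n", lsb * 1e6);
            printf("9 mV      : ~%.0f LSB (needs ~10 bits)\n", vin / lsb);
            return 0;
        }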


    I am trying to measure an input voltage with a maximum of 9mV, trying to capture each 18uV increment. I tried to do it with Gain=32 (1 LSB = 0.46uV), but it's not stable. It works fine when the input is hundreds of mV. Is there any way to make it stable?

    Regards,

  • Nogcas said:
    I tried to do it with Gain=32 (1 LSB = 0.46uV), but it's not stable. It works fine when the input is hundreds of mV. Is there any way to make it stable?

    Hmmm, it should be. A higher gain requires a longer settling time, IIRC, and maybe the bandwidth is a bit limited. But if these are not a problem, it should be stable. Do you have a ground loop that induces humming?

    About the bit calculations, well, the documentation leaves much room for interpretation.
    Yet unless someone did something completely stupid, there must be a shift in the noise depending on the signal. Note: it is signal-to-noise ratio, not reference-to-noise.

    So if the reference is 2.5V and the signal is 9mV, then the signal is 8 bits below the maximum (a quick check of this number is sketched at the end of this post). You'll get 8 zero bits, then your 12 ENOB bits, and then the noise.

    That's just an educated guess, as it would make no sense to offer 'upper 16 bit' and 'lower 16 bit' results and an auto-switch mechanism in the SD16 hardware if you could never get more than 16 useful non-noise bits under any circumstances.

    It's logical that the noise is smaller for smaller signals. All hints indicate it. Yet it isn't necessarily the truth :)
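
    And the quick check of the headroom estimate mentioned above (2.5V reference, 9mV signal):

        /* How far a 9 mV signal sits below a 2.5 V reference, in bits. */
        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            double vref = 2.5, vin = 0.009;
            double zero_bits = log2(vref / vin);   /* ~8.1 bits */

            printf("signal is %.1f bits below full scale\n", zero_bits);
            printf("expect ~%d leading zero bits, then the ENOB bits, then noise\n",
                   (int)zero_bits);
            return 0;
        }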
