
TMS320F28377S: ADC Readings Missing Lower 3 Bits

Part Number: TMS320F28377S
Other Parts Discussed in Thread: C2000WARE

Hello,

I'm stuck on an issue with the F28377S where, apparently, my ADC readings have lost 3 bits of resolution. The values are resolving to 0x___8 or 0x___0, and when I sweep the input voltage to the circuit, it only steps up or down in increments of 16 or 32. (e.g., 0x0118 -> 0x0128 -> 0x0148)

The ADC is configured in 12-bit single-ended signal mode, which I set using the C2000Ware function.

I have tried slowing down the ADCCLK and maximizing the acquisition window for each sample, but there has been no change. If this were a matter of settling time, I would still expect at least some amount of noise in the lower bits, even if they were inaccurate.
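
For reference, the setup is roughly this sketch (from memory, not the exact code; ADCA, SOC0, and ADCIN2 are placeholders for the module, SOC, and channel the real code uses):

    #include "driverlib.h"
    #include "device.h"

    void configADC(void)
    {
        // ADCCLK = SYSCLK / 4; I have also tried slower dividers
        ADC_setPrescaler(ADCA_BASE, ADC_CLK_DIV_4_0);

        // 12-bit, single-ended -- the C2000Ware function mentioned above
        ADC_setMode(ADCA_BASE, ADC_RESOLUTION_12BIT, ADC_MODE_SINGLE_ENDED);

        // Software-triggered SOC0 with a long acquisition window
        // (sample window is in SYSCLK cycles)
        ADC_setupSOC(ADCA_BASE, ADC_SOC_NUMBER0, ADC_TRIGGER_SW_ONLY,
                     ADC_CH_ADCIN2, 140U);

        ADC_enableConverter(ADCA_BASE);
        DEVICE_DELAY_US(1000U);   // power-up delay before converting
    }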

Additionally, the values read out have been "flickering" between significantly different values - for instance, with the input signal in a steady state, the ADC result jumps between 0x0A58 and 0x0A78, apparently without hitting any other values.

In some ways it is acting as if the values are left-shifted in the result register, but the magnitudes are close to what I expect - they're just missing precision.

I thought at first this was an artifact of monitoring the registers in Code Composer Studio, but I read the results out to debug variables and calculated the deltas between readings, with the same outcome.
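
The capture code is essentially this (trimmed down; the buffer size and the ADCA/SOC0 names are placeholders, and it assumes ADCINT1 is used as SOC0's end-of-conversion flag):

    #include "driverlib.h"

    #define NUM_SAMPLES 64U

    volatile uint16_t resultBuf[NUM_SAMPLES];   // watched in the debugger
    volatile int16_t  deltaBuf[NUM_SAMPLES];    // differences between readings

    void captureResults(void)
    {
        uint16_t i;

        // Flag end of SOC0's conversion on ADCINT1 so we can poll it
        ADC_setInterruptSource(ADCA_BASE, ADC_INT_NUMBER1, ADC_SOC_NUMBER0);
        ADC_enableInterrupt(ADCA_BASE, ADC_INT_NUMBER1);
        ADC_clearInterruptStatus(ADCA_BASE, ADC_INT_NUMBER1);

        for (i = 0U; i < NUM_SAMPLES; i++)
        {
            ADC_forceSOC(ADCA_BASE, ADC_SOC_NUMBER0);
            while (ADC_getInterruptStatus(ADCA_BASE, ADC_INT_NUMBER1) == false)
            {
            }
            ADC_clearInterruptStatus(ADCA_BASE, ADC_INT_NUMBER1);

            resultBuf[i] = ADC_readResult(ADCARESULT_BASE, ADC_SOC_NUMBER0);
            deltaBuf[i]  = (i > 0U) ? (int16_t)(resultBuf[i] - resultBuf[i - 1U]) : 0;
        }
    }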

I do not have any post-processing active.

Are there any configuration options that I've missed that would explain this truncation? I've read that accuracy can be lost because of an improperly designed input circuit, but in that case I would still expect some amount of noise in the lower bits.

Any other suggestions are also appreciated.

  • Hello,

    Based on your observations, this looks more like a result-alignment issue than lost resolution. On the F28377S, the 12-bit conversion result is left-justified in the 16-bit ADCRESULTx register: the valid 12 bits sit in [15:4], and [3:0] are always 0.

    If you look at the raw register value without shifting, you'll see codes change in steps of 0x10 (16) or 0x20 (32), and the hex values will often end in …0 (depending on how you're printing or masking, you may also notice …8 as you cross boundaries). So when reading the raw register, shift right by 4 to get a true 12-bit value, as in the sketch below.
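
    In code, the read would look something like this (a sketch; ADCA and SOC0 are placeholders for your module and SOC):

        #include "driverlib.h"

        // Right-justify the raw register value, per the layout described above
        uint16_t readAdc12Bit(void)
        {
            uint16_t raw = ADC_readResult(ADCARESULT_BASE, ADC_SOC_NUMBER0);
            return raw >> 4;   // keep bits [15:4] -> 0..4095
        }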

    Best Regards,

    Masoud

  • Hi Masoud,

    I'm confused on this point. I've seen suggestions before that the 12-bit results are left-aligned in the result registers, but that doesn't match what I'm seeing in my debug code. The 4 most significant bits are not populated in my results, and the channels that should be maxed out are presenting as 0x0FFF / 4095.

    (I might be missing some lower level access thing that's automatically doing the shifting.)

    Also, if it were just a shifting problem, I would still expect some kind of sensitivity in the results. Our ADC circuit is powered at 3.3V, so if I sweep the input voltage I should see the results change in steps of a little less than 1mV. But what I'm seeing is that the results refuse to change except in steps of 12.9mV or even 25.8mV.
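
    To put numbers on it (just the conversion math, nothing from the actual project):

        #include <stdint.h>

        // One 12-bit LSB at a 3.3V reference: 3300 mV / 4096 = ~0.806 mV.
        // The steps I'm seeing correspond to 16 and 32 codes:
        //   16 * 0.806 mV = ~12.9 mV
        //   32 * 0.806 mV = ~25.8 mV
        static inline float codeToMillivolts(uint16_t code)
        {
            return (float)code * (3300.0f / 4096.0f);
        }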

  • Hi Jeff,

    What you’re describing is exactly what you see when the raw ADC result is left-justified and the lower 4 bits are always zero. On F2837x devices in 12-bit mode, the conversion is stored in ADCRESULTx[15:4] and ADCRESULTx[3:0]=0. If you read the raw ADCRESULTx register, you must shift right by 4 to get a true 12-bit value (0–4095).

    Why you saw 0x0FFF (4095) for "maxed" channels: that suggests some layer in your code path is already giving you a right-justified 12-bit number (e.g., a helper/API that reads and shifts). If you then shift again (e.g., val >>= 4 before converting to mV), you discard 4 bits of real data, which produces the coarse steps. In other words, you likely have a double-shift situation, depending on whether you read via a driver/API or read the register directly; see the sketch below.
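
    As an illustration of how that pitfall produces your exact symptom (not your code, just a sketch with ADCA/SOC0 as placeholders):

        #include "driverlib.h"

        uint16_t readWithDoubleShift(void)
        {
            // Helper/API already returns a right-justified 12-bit code...
            uint16_t code = ADC_readResult(ADCARESULT_BASE, ADC_SOC_NUMBER0);

            // ...but the caller shifts again, discarding 4 bits of real data,
            // and later scales back to the 12-bit range. The lower 4 bits are
            // now always zero, so results move only in steps of 16 or more.
            uint16_t trimmed = code >> 4;
            return (uint16_t)(trimmed << 4);
        }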

    Best regards,

    Masoud

    I'm very confident that it's not a bit-shifting error. I've run the same code and used the same debugging process in other applications, and in those cases I've seen 12 bits of precision in the result registers. In fact, the same code I'm testing ran on the same processor on a different version of the board under test and had full 12-bit resolution. There were no relevant changes to the ADC circuitry as far as I can tell, so I'm casting about for any possible explanation of why the converters would exhibit this behavior.

    If, after right-shifting, you still see quantization jumps larger than 1 LSB, share a short snippet showing how you configure ADC_setMode(...), the SOC ACQPS, and how you read/log the result; but in almost all cases like this, the >> 4 is the missing step.

    Best regards,

    Masoud

  • Hi Masoud,

    Got it figured out - I'll mention this here for future reference. The root problem ended up being that the reference voltage was not fully stable while we were taking readings. I still do not entirely understand how that produced the truncation effect I was seeing, but once we improved the reference line I started getting full resolution again.

    Thanks for the suggestions!