
CC1310: Need better explanation for CC1310 A/D input structure/Reference design

Part Number: CC1310
Other Parts Discussed in Thread: ADS8512

The documentation for the CC1310 A/D (sensor) subsystem is pretty bad. There is this mysterious 4.3V reference, yet with an even more mysterious/undocumented divider with a max voltage of 1.49V. The registers indicate the reference can be "tweaked" to a level above the maximum allowed voltages. Can someone provide a diagram like a normal A/D vendor would provide?

I'm trying to perform a tolerance/error budget for an A/D measurement.

  • Hello MWagner,

    1)

    There is a simplified diagram and explanation in the sensor controller documentation:

    https://software-dl.ti.com/lprf/sensor_controller_studio/docs/cc13x0_cc26x0_help/html/adc__0.html 

    2)

    The reference is actually trimmed to a target of 1% above 4.3V, but the resulting gain and offset are measured and stored in FCFG for use by driverlib. If the driverlib API is used, the correct reference voltage in unscaled mode will be 1.478V.

    Even though the internal reference is scaled for 4.3 V, the maximum input voltage must never be higher than VDDS (max 3.8 V). The reference is not really 4.3 V, but the input signal is scaled down so the reference looks like 4.3 V relative to the actual input voltage.


    To ensure that the offset/gain is compensated correctly, the following driverlib commands should be used to read the ADC:

    int32_t gain = AUXADCGetAdjustmentGain(refSource);
    int32_t offset = AUXADCGetAdjustmentOffset(refSource);

    int32_t adcValue = AUXADCReadFifo();   /* raw sample, read once a conversion has been triggered */

    int32_t ADCResultAdjusted = AUXADCAdjustValueForGainAndOffset(adcValue, gain, offset);

    This is handled within the ADC TI-RTOS driver if configured for (hwAttrs->returnAdjustedVal).
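
    At the driver level, a read could look roughly like the sketch below. Board_ADC0 is a placeholder for the ADC index defined in your board file, and returnAdjustedVal is set in the ADCCC26XX hwAttrs there, so treat those names as project-specific assumptions.

    #include <stdint.h>
    #include <ti/drivers/ADC.h>
    #include "Board.h"                       /* placeholder board file providing Board_ADC0 */

    void readAdcOnce(void)
    {
        ADC_Params params;
        ADC_Handle adc;
        uint16_t adcValue;

        ADC_init();                          /* normally called once at startup */
        ADC_Params_init(&params);

        adc = ADC_open(Board_ADC0, &params);
        if (adc == NULL)
        {
            return;                          /* driver not configured for this index */
        }

        if (ADC_convert(adc, &adcValue) == ADC_STATUS_SUCCESS)
        {
            /* adcValue is already gain/offset adjusted when returnAdjustedVal is true */
        }

        ADC_close(adc);
    }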

  • Hi Eirik,

    Thank you for the feedback. I'd just like to make sure we understand correctly, since you mentioned that "the reference is actually trimmed to a target of 1% above 4.3V" but also "the reference is not really 4.3 V, but the input signal is scaled down so the reference looks like 4.3 V". Looking at the link and your comments, is our understanding below correct?

    - The actual reference voltage generated in the device is 4.30V typical, 4.343V (1% higher) max

    - The reference is always scaled down by a factor of 1408/4095, so the actual ADC reference voltage is always 1.478V in any mode. This reference is also the max input voltage when not using input scaling.

    - When scaling is enabled, the input is scaled by the same 1408/4095 factor which is the equivalent of having a 4.30V reference and no input scaling (except that max input is limited by the VDDS value).
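
    In other words, we read it as being equivalent to the conversion sketched below (the assumption that code 4095 corresponds to full scale is ours, taken from the 1408/4095 factor):

    #include <stdint.h>
    #include <stdbool.h>

    /* Sketch only: convert an already gain/offset-adjusted 12-bit code to volts. */
    static float adcCodeToVolts(int32_t adjustedCode, bool inputScalingEnabled)
    {
        /* 4.30 V equivalent full scale with scaling, 4.30 V * 1408 / 4095 = 1.478 V without */
        const float fullScale = inputScalingEnabled ? 4.30f : (4.30f * 1408.0f / 4095.0f);
        return ((float)adjustedCode * fullScale) / 4095.0f;
    }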

    Since the main concern is accuracy, can you please share your feedback on the below?:

    - Is the typical value for the actual internal reference 4.30V? One of the concerns is the single-decimal-place accuracy, since we specify it as the "equivalent" value with scaling rather than the actual reference voltage that is scaled down.

    - Are there any additional errors associated with the input scaling circuitry that should be accounted for?

    Thanks,

    Antonio

  • To be clear about what I'm expecting for specs, an example of Texas Instruments' documentation for an A/D converter can be found for your part ADS8512. Even with laser trimming, a built-in reference is rarely more accurate than 0.1%, and it is also specified for temperature drift. Please keep in mind that details like this are expected in the "hardware section" of a processor's documentation, as that work is done before a software spec is generated. I work on the hardware systems design and provide guidance to the software engineers on how to access it, ensuring accuracy expectations are met. Thanks for your help.

  • Hello Antonio,

    Only the ADC input is scaled down.

    The real reference (approx 1.48 V) can be derived from the scaled value (4.3 V) as follows: Vref = 4.3 V × 1408 / 4095 (see ADC Characteristics in the data sheet).

    I don't know the production variation of the reference itself; you will need to rely on the gain/offset error correction to achieve the equivalent 4.3V 1% accuracy.

    The maximum input voltage with scaling disabled is 1.49 V (refer to Absolute Maximum Ratings in data sheet).
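
    If it is useful, driverlib also has helpers for this conversion. The helper and constant names below (AUXADCValueToMicrovolts() and the AUXADC_FIXED_REF_VOLTAGE_NORMAL / AUXADC_FIXED_REF_VOLTAGE_UNSCALED values from aux_adc.h) should be treated as an assumption and checked against your driverlib version; the sketch assumes they are available:

    #include <stdint.h>
    #include <stdbool.h>
    #include <driverlib/aux_adc.h>

    /* Rough sketch: raw code -> gain/offset-adjusted code -> microvolts. */
    static int32_t adcCodeToMicrovolts(int32_t adcValue, uint32_t refSource, bool scalingEnabled)
    {
        int32_t gain = AUXADCGetAdjustmentGain(refSource);
        int32_t offset = AUXADCGetAdjustmentOffset(refSource);
        int32_t adjusted = AUXADCAdjustValueForGainAndOffset(adcValue, gain, offset);

        /* 4.3 V equivalent reference with input scaling enabled, approx. 1.478 V without */
        return AUXADCValueToMicrovolts(scalingEnabled ? AUXADC_FIXED_REF_VOLTAGE_NORMAL
                                                      : AUXADC_FIXED_REF_VOLTAGE_UNSCALED,
                                       adjusted);
    }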

  • Eirik,

    Please confirm the attached scan is an appropriate tolerance analysis for the CC1310 measurement system (YES/NO). Assume scaling IS ENABLED.

    Normally we would have a 3.7V nominal LiPo battery attached, but during charging, and during production/programming, this signal chain will impress 5.00V upon the signal "VBAT" in the sketch.

    -Best Regards,

    M.Wagner

  • Please see the next message; Antonio is going to insert the PDF of the sketch.

    -M.Wagner

  • Hello,

    I am not an ADC expert, but based on the definition of Total unadjusted error explained here:

    https://e2e.ti.com/blogs_/archives/b/precisionhub/archive/2014/10/14/adc-accuracy-part-2-total-unadjusted-error-explained 

    Total Unadjusted Error (TUE) is an indication of the accuracy you can expect from an ADC without applying any Offset or Gain Error correction.

    Using the numbers from the CC1310 datasheet this gives you a TUE of around 4.6 LSB or 4.9 mV with a 4.3V internal reference. This does not account for effects of different temperatures. Also note that this is without any Offset or Gain Error correction. If you do calibration in your setup your TUE number will go down and be dominated by DNL and INL.
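
    For context, that blog computes TUE as the root-sum-square of the static error terms before any calibration, i.e. roughly (all terms in LSB):

    \mathrm{TUE} \approx \sqrt{E_{\mathrm{offset}}^2 + E_{\mathrm{gain}}^2 + \mathrm{INL}^2}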

    Which formula did you use for uncertainty and how did you get the terms?

    I will check with the team to verify if your diagram is correct.

  • Please speak with someone who IS familiar with full A/D characterization. Knowing the accuracy without temperature effects is useless; our product is used outdoors from Florida to Alaska. This isn't a room-temperature lab software experiment. You seem to expect that the divider and reference are "perfect" (over temperature), which I doubt, so I need to know their performance over the FULL temperature range of the part (-40C to +85C). Even some of the best TI references I have used are at best 0.05% basic tolerance, NOT including temperature effects, so NO, it is not likely that it will be dominated by the INL.

    I recently used reference REF3430QDBVRQ1 in a design, which exhibits a 6ppm/C drift. We operate down to -25C, which gives a 50C span from ambient, resulting in an additional 0.03% error for instance (on top of the 0.05%).

    I detailed the elements of the equation, but all you have been able to give me is a "1%" accuracy. In that case the 1% error (over temperature) of the reference or divider will overwhelm a 2 LSB INL error, as the rough numbers below show.
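
    As a back-of-the-envelope sketch (using the 1% figure quoted in this thread and a 2 LSB INL, neither of which is a guaranteed datasheet number):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double codes = 4096.0;                 /* 12-bit ADC                    */
        const double refErrorLsb = 0.01 * codes;     /* 1% of full scale, approx 41 LSB */
        const double inlLsb = 2.0;                   /* quoted INL                    */
        const double rssLsb = sqrt(refErrorLsb * refErrorLsb + inlLsb * inlLsb);

        printf("ref/divider: %.1f LSB, INL: %.1f LSB, combined (RSS): %.1f LSB\n",
               refErrorLsb, inlLsb, rssLsb);
        return 0;
    }

    The reference/divider term clearly dominates.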

    If there ARE software corrections, I still need the residual error that results. In that case please point to the specific documentation covering which registers are to be used, which routines are to be run, and how often they need to be run (indicating the repeatability of the hardware over time/temperature).

    I'm likely OK with it being 1%, but I don't have the time or the sheer number of parts to make a statistically significant experiment to measure it. I need a fully characterized part so I can run the calculations on those numbers and see whether the resulting accuracy is adequate for our battery capacity estimation. I just need to know how the part was designed to perform.

    If it would be more efficient, I can do a teleconference, Teams, or Zoom meeting to fit within your local time if you can get the right people to speak with me.  We sell ~30-50K+ of these clips each year, and we have adapted our product for automated Covid-19 contact tracing and have potential orders this year to exceed 200K units. We have a risk with the current configuration of damaging the front end and need to make a decision on what to do next SOON.

    Thank you.
    -M.Wagner

  • Just to close off.

    Your diagram makes sense, but TI does not guarantee any max/min values.