
[FAQ] TMS320F28035: ADC Calibration and Total Unadjusted Error

Part Number: TMS320F28035
Other Parts Discussed in Thread: REF5030, OPA320

In looking at the F2803x datasheet, I'm trying to determine two things:

  1. What are all the strategies that can be deployed for ADC calibration, including steps necessary to implement them?
    1. self-recalibration
    2. offset calibration (if self-recalibration is occurring, is there anything else that can improve this?)
    3. two point gain calibration
    4. temperature compensation
    5. etc...
  2. After deploying all the necessary ADC calibration, what is the resulting total unadjusted error?

Let's assume that periodic self-recalibration occurs.  It appears as though the INL and DNL are not additive.  This would leave INL as the larger of the two at 4 counts.  What I don't understand is which other errors are additive:

  1. Channel-to-Channel offset and gain variation does not appear to be an absolute error added to the total unadjusted error of a given channel.
  2. gain error appears to be additive, and the total unadjusted error added would be a factor of the accuracy of a two point gain calibration reference.
    1. If gain calibration happens on only one channel, does the channel-to-channel variation come back into play?
  3. The temperature coefficient would appear to be additive to the total unadjusted error.  It could to some extent be compensated for with a temperature measurement.
    1. How much of the temperature coefficient is accounted for in the periodic self-recalibration?

In summary, I'm trying to understand everything that goes into determining the absolute worst-case total unadjusted error, and what strategies can be deployed to minimize it.

Thanks,

Stuart

  • Hi Stuart,

    The first thing to look at is how we calculate total unadjusted error (TUE).  From the ADC we usually consider gain error, offset error, INL, and DNL.  We also need to consider the error introduced by the ADC external reference.  We don't need to consider ADC channel-to-channel offset or gain error, since this is just a measure of how much the channels can vary from each other (the worst case specifications hold for all channels).

    The worst case ADC errors are usually independent (e.g. an ADC with worst case INL is unlikely to also have worst case gain error).  Because of this we don't directly add the worst case errors, since this would result in an overly pessimistic TUE.  Instead, we add the errors using root-sum-squares.

    Before we consider calibration, here is a comparison of the raw TUE of C28x Piccolo series ADCs.  The specifications for F2803x, F2802x, F2806x, and F2805x are similar, so they don't all have their own table entry.  Note that in all C28x datasheets, the Min/Max ADC errors are specified including drift for voltage, temperature, and manufacturing process variation.  If a temperature coefficient is also provided, this isn't added on top of the worst case error (but instead gives some idea of how much of the error is due to temperature drift).  Also note that the F2803x (and similar devices mentioned above) require at least one-time offset self-calibration (we don't support factory offset trim).  All values in the table below are with external reference.

    (Table comparing raw TUE of the C28x Piccolo series not reproduced.  Table note (1): executing one-time self-calibration.)

    The external reference gain error has some assumptions behind it; it assumes a very good external reference solution.

    For example, REF5030 has the following key specifications:

    • Initial accuracy = 0.05% => 0.05%*4096 LSBs = 2.0 LSBs
    • Temperature drift = 3ppm/deg. C => for standard temperature range (85C - 25C)*3ppm*4096 LSBs = 0.7 LSBs

    And then OPA320 (which we recommend to drive the 12-bit reference on F2807x and F28004x) has

    • Vos (max) = 150uV => 0.2 LSBs @ 3.0V

    So the total worst case reference error can be estimated as sqrt(2^2 + 0.7^2 + 0.2^2) = 2.1 LSBs (and 1.0 LSBs is just a guess for typical reference error).  
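The spec-to-LSB conversions above can be sketched as follows (helper names are hypothetical; a 12-bit converter is assumed):

```c
#define ADC_CODES 4096.0   /* 12-bit converter */

/* Reference initial accuracy in percent of full scale -> LSBs
   (0.05% -> ~2.0 LSBs). */
double initial_accuracy_lsb(double accuracy_percent)
{
    return (accuracy_percent / 100.0) * ADC_CODES;
}

/* Reference temperature drift in ppm/degC over a temperature span -> LSBs
   (3 ppm/degC over 85C - 25C = 60C -> ~0.7 LSBs). */
double drift_lsb(double ppm_per_degc, double delta_t_degc)
{
    return ppm_per_degc * 1e-6 * delta_t_degc * ADC_CODES;
}

/* Buffer op-amp offset voltage -> LSBs (150 uV at 3.0 V -> ~0.2 LSBs). */
double vos_lsb(double vos_volts, double vref_volts)
{
    return vos_volts * ADC_CODES / vref_volts;
}
```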

    Obviously if a different reference solution is used, the actual accuracy can also be calculated.  Getting something more accurate than the above example is possible, but will get very expensive very fast.  

    Now, to calibrate offset error, all of these devices have an internal connection to VREFLO (no external channel required).  In the F2803x datasheet we specify +/-20 LSBs with one-time offset calibration.  To get better performance, we can calibrate periodically:

    Procedure: Sample the internal VREFLO connection periodically.  Use these samples to adjust the ADC offset trim register accordingly.  Adjusting the HW trim register will adjust the ADC samples directly, so no additional SW post-processing is necessary.

    Limitations: Channel-to-channel offset variation will limit how close we can get to perfect offset trim.  On F2803x, the channel-to-channel offset is specified as +/-4 LSBs.

    Requirements:

    • You can use the internal connection, so no external pin is required.  
    • You also need to configure one of the SOCs to periodically sample the signal.  On F2803x, this can be an issue if you are already using all 16 SOCs.  There is also no way to automatically (in the SOC configuration) switch between the internal VREFLO connection and the external pin.  Both issues can be overcome by swapping some of the SW settings in the ADC ISR and then triggering more samples.

    Note: If the offset error is negative, the ADC conversion of VREFLO will read 0 and you won't know if the true offset error is -1 or something like -20.  If you are periodically calibrating offset error on-line, the easiest way to handle this is to trim towards an offset error of +1 instead of 0.  If the offset error is large and negative, successive rounds of calibration will eventually drive the offset error to +1.  You can then either accept the extra 1 LSB of error, or you can use the CPU to post-process the results.  ADC range will be reduced by 1 LSB.
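Here is a minimal sketch of the trim-toward-+1 logic described above (all names are hypothetical, and the device-specific write to the hardware offset trim register is omitted; the sign convention assumes the trim value is subtracted from every raw conversion):

```c
/* One round of periodic offset recalibration (software sketch only).
   Assumes the trim value is subtracted from every raw conversion, so
   increasing the trim lowers the reading.  Trims toward a VREFLO reading
   of +1 (not 0) so that negative offset error stays observable. */
int update_offset_trim(int current_trim, int vreflo_avg_code)
{
    const int target = 1;                 /* aim for +1 instead of 0 */

    if (vreflo_avg_code == 0) {
        /* Reading clipped at 0: the true offset could be -1 or -20.
           Step the trim down one LSB per round until VREFLO reads +1. */
        return current_trim - 1;
    }
    return current_trim + (vreflo_avg_code - target);
}
```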

    Calibrating gain error is a little more involved. 

    Procedure: Provide a single calibration voltage near the full-scale range.  Since we already have an internal method to calibrate offset error, we don't need to do 2-point gain trim.  Practically, using more calibration points will help average out INL errors.  This calibration voltage should be sampled periodically.  The CPU can then post-process the ADC results to remove the gain error.

    Limitations: 

    The error from sampling the calibration signal comes from a few sources:

    • Channel-to-channel gain variation => +/-4 LSBs on F2803x
    • INL +/-4 LSBs on F2803x
    • Error in the calibration source => probably not better than +/-2.1 LSBs
    • Offset error has a complicated effect, but mostly cancels out...if offset error is positive, the calibration voltage reads high and the gain calibration will cancel out the offset error at the calibration point (near full-scale-range).    

    The best we can do for sampling error of the calibration voltage is therefore sqrt(4^2 + 4^2 + 2.1^2) = 6.0 LSBs

    The error at full scale range scales up based on where the calibration was done.  So if the FSR is 3.0V and the calibration voltage is 2.5V, the total gain error would therefore be 3.0V/2.5V * 6.0LSBs = 7.2 LSBs.
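A sketch of the software gain correction and the full-scale error scaling (names hypothetical; on these devices the correction has to be applied by the CPU in post-processing):

```c
/* Derive a gain-correction factor from a calibration sample, and apply it
   to subsequent conversions in software. */
double gain_factor(double ideal_cal_code, double measured_cal_code)
{
    return ideal_cal_code / measured_cal_code;
}

double correct_sample(double raw_code, double factor)
{
    return raw_code * factor;
}

/* Residual calibration error referred to full scale grows by VFSR / Vcal,
   e.g. 6.0 LSBs of calibration error at 2.5 V -> 7.2 LSBs at 3.0 V FSR. */
double cal_error_at_fsr_lsb(double cal_error_lsb, double vfsr, double vcal)
{
    return cal_error_lsb * (vfsr / vcal);
}
```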

    Note: Because of the above, we want to do calibration as close to full-scale as possible.  However, there needs to be enough space between full-scale and the calibration voltage to allow for any uncalibrated drift.  For F2803x, the natural gain error is 40 LSBs = 30mV @ 3.0V VREFHI.  

    Requirements: 

    • An external channel to sample the calibration voltage
    • You also need to configure one of the SOCs to periodically sample the signal.
    • The CPU has to scale all the ADC results via post-processing...no HW method to compensate for gain error is available.  
    • The external calibration voltage needs to be accurate.  This can be achieved two ways:
      • A second precision voltage IC (see the first diagram below)
        • In this case, an op-amp is probably not needed to drive the ADC pin (just use a large capacitor directly on the ADC pin)
        • Calibration will also take care of any gain error in the external reference IC, so that IC can be a cheaper and less accurate IC in this case (within reason...it depends how much space you have between the calibration voltage and the full-scale-range).  
      • A precision voltage divider from the VREFHI reference IC (see the second diagram below)
        • You definitely need to use matched resistors in a single package to get good performance

    We can now fill back into the table the errors for F2803x with on-line calibration:

    (Table not reproduced.  Table notes: (1) executing periodic re-calibration; (2) error included in ADC gain calibration.)

    Some other notes:

    • Doing the calibration periodically should take care of any internal ADC temperature coefficients.  
      • Calibration should be done fast enough to deal with the expected rate of thermal change in the system.  Usually re-calibrating a couple of times per second should be plenty fast.
      • (You still need to consider the external temperature coefficients of your calibration voltage)
    • The DC code-spread for this ADC is roughly 4 LSBs.  When you take calibration readings, it is best to take many points and average.  I'd recommend averaging 256 points or more.
      • (For every 4x oversampling, you gain 1 bit of SNR, so 4^4 = 256 samples results in noise of about 4 LSBs * (0.5^4) = 0.25 LSBs of noise. 64 sample averages would give you about 0.5 LSBs of noise.)   
    • It's important to get good settling on the ADC inputs, because settling error is effectively random (or cross-talk from previous samples) and can't be corrected.  This also applies to your calibration input, so ensure settling error much less than 1 LSB on these calibration inputs.

  • Hi Devin-san,

    I have two questions about the above.

    Q1) Isn't DNL unnecessary in the formula above, since DNL is included in INL?

    In the description at the link below, INL was not included:

    https://e2e.ti.com/blogs_/b/precisionhub/archive/2014/10/14/adc-accuracy-part-2-total-unadjusted-error-explained

    Q2) TUE (F2803x: ±9.2 LSBs) is calculated by the above method, but is it necessary to include INL in the TUE calculation?

    Since the ADC gain error (F2803x: ±7.2 LSBs) is calculated including INL, I think INL will be counted twice if it is also included in the TUE calculation.

    Best regards,

    Sasaki

  • Hi Sasaki-san,

    Great questions!

    I have seen TUE estimated with and without DNL and I think you can make a reasonable argument either way. As you said, the INL is an integration of DNL, so it does include the DNL. On the other hand, since INL is specified as a MIN/MAX of an integration, the worst case tends to hit some specific spots in the ADC transfer function whereas DNL tends to be more random transition-to-transition. I think it actually makes the most sense to include DNL in a typical calculation and exclude DNL from a MAX calculation. In any case, the DNL error is not large and therefore doesn't make a large difference in the final TUE estimation.

    As far as INL affecting TUE when doing periodic gain calibration, this will indeed affect the total error twice. First it will affect the calibration sample, resulting in some error in the gain calibration factor. Then, when we go to take some sample as part of the application, this sample will also be subject to INL error. Therefore the sample gets the INL error twice: once from the calibration factor applied and a second time from the actual sample. Because INL varies randomly throughout the transfer function, it won't cancel out in the calibration factors like gain and offset.
  • Hi Devin-san,

    Thank you for your detailed answer !

    I understood it thanks to your comment :)

    Best regards,

    Sasaki

  • Devin,

    Thank you for this response. I have some follow-on questions related to the design we are working on. Normally, a one-time calibration is performed during manufacturing when the custom bootloader is programmed; this is done at room temperature. Based on the datasheet values, we know that the ADC can drift quite a bit over temperature, and with the existing one-time calibration, this would not be accounted for.

    The product is expected to be in a sealed box, and the inside ambient is expected to be close to 85C. When a DFSS analysis is performed on the current-sensing design, it shows that this high offset error in the ADC hurts the accuracy significantly. To counteract this, the plan has been to implement periodic calibration in the product to account for temperature drift and possibly lower the offset error.

    We have code in hand to test this periodic calibration procedure, but so far we have not been able to validate the results. When comparing boards with and without the procedure, both at 0C and at 85C (using a temperature chamber), we find no significant deviation in the offset counts. The experimental data does not seem to match the datasheet values for drift across temperature.

    Can you recommend a way to test the offset counts versus temperature with and without periodic calibration in a better way?

    Thanks,
    Stuart
  • Hi Stuart,

    Are you measuring the error in a sampled signal, or are you looking directly at the calibration value being written to the offset trim register? Is the measured offset error positive or negative?

    I'd suggest looking directly at how the offset trim register is changing. You would expect this to change with changing temperature. If this isn't changing, it may indicate that the offset trim procedure is not functioning correctly. The most likely cause of the offset trim not working correctly would be not adding some artificial offset so that you can correct negative offset error.

    If you are looking directly at a sampled signal, then what voltage(s) are being applied? To measure the offset error, you can either source in a voltage near to the VREFLO (e.g. 100mV) or you can source in 2 or more points across the ADC transfer function, then do a linear regression fit. In either case, you will want to ensure that the signal is being driven with a good low-impedance source, it is (at least ideally) buffered locally on the board by an op-amp (a large capacitor can work too if the sample rate is low), and is being measured at the pin by a DMM.
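The two-or-more-point approach can be sketched as a standard least-squares line fit (helper name hypothetical):

```c
#include <stddef.h>

/* Least-squares line fit of measured ADC codes against ideal codes.
   The intercept estimates the offset error (in LSBs); the slope
   estimates the gain. */
void fit_offset_gain(const double *ideal_codes, const double *measured_codes,
                     size_t n, double *offset_lsb, double *gain)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (size_t i = 0; i < n; i++) {
        sx  += ideal_codes[i];
        sy  += measured_codes[i];
        sxx += ideal_codes[i] * ideal_codes[i];
        sxy += ideal_codes[i] * measured_codes[i];
    }
    double denom = (double)n * sxx - sx * sx;
    *gain = ((double)n * sxy - sx * sy) / denom;
    *offset_lsb = (sy - *gain * sx) / (double)n;
}
```

The ideal codes come from the DMM-measured pin voltages divided by the LSB size (VREFHI/4096 for a 12-bit converter).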
  • Devin,

    After implementing the code to check the trim register value, we see that with temperature variation from 0C to 85C, the trim register value shows a 4 LSB shift. I have some additional questions for you:

    1. The datasheet specifies a +/-20 LSB offset error for the ADC.  With the execution of the periodic calibration, this error is supposed to be +/-4 LSB.  Please confirm.
    2. How exactly does the offset error vary with temperature?  Is it relatively flat in the temperature range of our interest (0-85C), with a sharp increase/decrease at the allowable operating temperature extremes of the Piccolo?  The reason I am asking is that we only see a 4 LSB shift, and not a +/-16 LSB shift, with temperature variation of 0-85C.
    3. All of our data was from a single Piccolo.  Does the +/-20 LSB figure account for part-to-part variation, ADC channel-to-channel variation, or any other process factors?  Can you elaborate on where the 20 LSB number comes from?  We suspect that the 20 LSB error accounts for worst-case conditions, and since we are not operating in those conditions we don't see the error.

    Thanks,

    Stuart

  • Stuart,

    Stuart Baker said:

    1. The datasheet specifies a +/-20 LSB offset error for the ADC.  With the execution of the periodic calibration, this error is supposed to be +/-4 LSB.  Please confirm.

    Correct.

    Stuart Baker said:

    2. How exactly does the offset error vary with temperature?  Is it relatively flat in the temperature range of our interest (0-85C), with a sharp increase/decrease at the allowable operating temperature extremes of the Piccolo?  The reason I am asking is that we only see a 4 LSB shift, and not a +/-16 LSB shift, with temperature variation of 0-85C.

    From the data I have reviewed, offset seems pretty linear across temperature, but the LSB/degC can vary fairly significantly device to device and can be positive or negative. 

    Stuart Baker said:

    3. All of our data was from a single Piccolo.  Does the +/-20 LSB figure account for part-to-part variation, ADC channel-to-channel variation, or any other process factors?  Can you elaborate on where the 20 LSB number comes from?  We suspect that the 20 LSB error accounts for worst-case conditions, and since we are not operating in those conditions we don't see the error.

    The 20 LSB figure represents the worst-case drift seen across devices from multiple fab lots, taken at different supply voltages and ADC conditions.  It does not include channel-to-channel offset.  It is certainly possible that some devices do not vary this much, depending on the silicon and system conditions.

    Regards,

    Joe