
TMS320F280041: ADC fractional clock divider errata details?

Part Number: TMS320F280041


We've recently updated all the tools for our project and have noticed that the 2.5x divider option for the ADC clock was removed from the library and the code no longer compiles. After some investigation we found the errata stating:

"Using fractional SYSCLK-to-ADCCLK dividers (controlled by the ADCCTL2.PRESCALE field) has been shown to cause degradation in ADC performance on this device"

What exactly does "degraded performance" mean? We've been observing some problems with ADC measurements, and we'd like to know whether they could be caused by the degradation listed in the errata before we investigate further. Does it cause increased conversion errors, inconsistent timing, or some other problem?

On a side note: is there any document that lists all the changes between versions of CGT/driverlib/etc.? It was quite time consuming to find out why the DIV_2_5 definition is no longer recognized.

  • Hi Rafal,

    The fractional dividers will cause the linearity (INL/DNL) of the ADC to degrade. Timing will still be consistent/deterministic.

    Can you comment on why you were using the fractional dividers? There generally isn't any advantage to running the ADCCLK slower than 50 MHz (for instance, there would be no power savings, because the clock doesn't run when the ADC isn't converting).

    I can definitely help you debug the performance issues with the ADC if you have a more detailed description of the issue you are seeing. Some common culprits for performance issues would be: (1) issues with operating conditions such as ADCCLK, VDDA, VREFHI, or even SYSCLK; (2) issues with trigger and read timing (e.g. triggering the ADC faster than it can process samples, or accidentally reading stale results); (3) issues with driving the ADC input (too high an input impedance, too low a bandwidth, or too short an S+H duration); (4) damage caused by over-voltage or under-voltage on one of the ADC pins.

    On the SW changelist side, I agree that the situation sounds frustrating.  I'm not sure what we provide though; I'll pull in someone from the SW team to comment. 

  • We were using a 40 MHz ADC clock because it made for convenient sampling timing.

    The product is a special-purpose 1.5 kW(RMS) DC/AC inverter, and we were wondering whether certain problems we're observing with ADC measurements could be attributed to the "degraded performance".

    Do you have any numerical figure for the INL/DNL degradation? We were observing non-linearity to the point where we can't use a linear function to implement calibration. The device is very noisy, and we had previously assumed that something was off with the analog front end but couldn't figure out where the fault was.

  • Hi Rafal,

    The linearity degradation is certainly not catastrophic; maybe an extra 2-4 LSBs of linearity error? 

    What S+H time are you using? It would generally make much more sense to add additional S+H time if you want to increase the ADC sample time, since (1) the S+H time has a resolution of 1 SYSCLK cycle and (2) adding S+H time allows for additional R-C filtering on the input (adding more R or C on the input slows down settling, requiring additional S+H time).

    Can you comment on what the impedance of the front end driving the ADC is and what S+H time you are using? You might find https://www.ti.com/lit/an/spract6/spract6.pdf useful for further analyzing the input front end yourself.