
ADS1218: Queries related to PGA and calibration commands

Part Number: ADS1218

Hi,

I have a few queries regarding the operation of the PGA and the calibration commands in the ADS1218:

1) Is the PGA used only for maximizing the input resolution, say 40mV for PGA = 1 and 70nV for PGA = 128? If it has any other purpose, please let me know.

2) I observe that the ADS1218 consumes more current on initial startup and also while reading the flash pages. Could you please share the logic behind this?

3) The datasheet states that the maximum data output rate is 1kHz ("PROGRAMMABLE DATA OUTPUT RATES UP TO 1kHz" on the first page, under Features). Does this mean that the maximum speed of the digital output from the DOUT pin is 1kHz? If not, what are the minimum and maximum frequency limits for digital communication with the ADS1218?

4) Do SELFCAL/SELFGCAL/SELFOCAL work only for PGA = 1?

4.a) During self calibration, the IC connects the analog pins to the reference voltage, which will always be 2.5V/1.25V, and performs the gain calibration. Thus the positive full-scale value will now be mapped to 2.5V/1.25V.

      What happens if PGA = 128, where the positive full-scale value is 19.531mV?

4.b) At PGA = 128, are the maximum input differential voltage and the output both 19.531mV?

4.c) When should we use self calibration versus system calibration? (Please differentiate.)

5) Should the differential input voltage always be greater than or equal to 0V? That is, should we avoid applying a negative voltage on any input channel (say, -1V or -2V on AIN0)? [Mentioned in the ADS1218 datasheet, page 38, Table 5, note (2)]

6) Is there any simulation tool available for testing this IC?

7) What is the relation between fdata and fmod, and what is the function of the buffer in the IC?

Thanks and Regards,

Makesh

  • Makesh,

    1. The PGA is basically a programmable gain amplifier. It allows for the measurement of smaller signals so that the resolution is better. For measurements using a 2.5V reference:

    PGA=1: Full-scale range=±2.5V, LSB size=3uV

    PGA=128: Full-scale range=±19.53mV, LSB size=23nV

    Note that the negative full-scale range is differential and just refers to an input where AINN is higher than AINP (for example, AINN=3.5V, and AINP=1.5V means that the input is -2V).

    There will be some noise that depends on the gain and decimation ratio, which you can calculate from the typical characteristic curves in Figures 1 through 7.
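    If it helps, here's a small C sketch (my own illustration, not TI code) of how a 24-bit two's-complement result maps back to volts for a given reference and PGA. It computes the ideal 24-bit code step, which is finer than the noise-limited resolution just mentioned:

```c
#include <stdint.h>
#include <stdio.h>

/* Convert a 24-bit two's-complement output code to volts. In bipolar
 * mode the full-scale range is +/-VREF/PGA, so the ideal code step is
 * (2 * VREF / PGA) / 2^24. Assumes two's-complement int32_t. */
static double code_to_volts(uint32_t code24, double vref, int pga)
{
    int32_t code = (int32_t)(code24 << 8) >> 8;     /* sign-extend bit 23 */
    return code * (2.0 * vref / pga) / 16777216.0;  /* 2^24 code steps */
}

int main(void)
{
    printf("%.7f V\n", code_to_volts(0x7FFFFF, 2.5, 1));   /* ~ +2.5V: positive full scale */
    printf("%.7f V\n", code_to_volts(0x800000, 2.5, 128)); /* -19.53mV: AINN above AINP */
    return 0;
}
```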

    2. As the ADS1218 powers up, there are internal registers that are read at start-up; these are programmed at our final test to trim out errors in the device. Therefore, at start-up there will be some extra supply current needed as these registers are loaded and set.

    3. The maximum output data rate of 1kHz refers to the output rate of the ADC conversion results. It is more appropriate to think of it as the number of samples per second of output data.

    The fastest digital output from the DOUT pin is different. This is based on the maximum speed at which DOUT can be clocked by SCLK. You can find this information in the Timing Specification Table; it is listed as the minimum SCLK period. For this device the minimum SCLK period is 4 tosc periods. In this case, an ADS1218 running at 2.4576MHz has a minimum SCLK period of 1.63us, or a maximum SCLK rate of 614kHz.
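    As a quick check of that arithmetic, here's a trivial C sketch:

```c
#include <stdio.h>

int main(void)
{
    double fosc = 2.4576e6;          /* oscillator frequency, Hz */
    double t_sclk_min = 4.0 / fosc;  /* minimum SCLK period is 4 tosc periods */
    printf("min SCLK period = %.2f us\n", t_sclk_min * 1e6);   /* ~1.63us */
    printf("max SCLK rate   = %.1f kHz\n", 1e-3 / t_sclk_min); /* ~614.4kHz */
    return 0;
}
```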

    4. I believe that you should only run SELFCAL and SELFGCAL at PGA=1. However, I think you can run SELFOCAL at any PGA. SELFOCAL is basically a measurement of the offset of the ADC, which should scale with the PGA gain. Once the offset is measured with SELFOCAL, it is subtracted from future measurements.

    However, for anything that involves a gain calibration, the device expects a positive full-scale input at the time of calibration. If there is gain (PGA > 1), I don't think the SELF calibration commands will work.

    a. Correct; if PGA=128, the self calibration won't work.

    b. At PGA=128 and a reference of 2.5V, the measurement will be in the range of ±19.53mV. Voltages above this range will appear as a positive full-scale reading of 7FFFFFh (or a negative full-scale reading of 800000h for negative over-voltages).

    c. The self calibration is used to remove offset and gain error within the device. The offset calibration removes any offset that appears from the input multiplexer/PGA/ADC. The gain calibration removes any gain error that the ADC sees from the input channel in comparison with the reference input channel.

    For system calibrations, imagine that you have an external amplifier. This will have its own gain and offset error. You can use the system calibrations to calibrate out those errors. For the system offset calibration, you would short the input of your external amplifier to capture the system's measured offset. Then, for the system gain calibration, you would apply what your system considers the full-scale input.

    Note that for the system calibrations, you should first ensure that the gain error is moderately small, then perform the system offset calibration, and then perform the system gain calibration. If the gain error is extremely large at the start, the offset calibration will be off as well. If you have a smaller gain error to start, then the offset calibration will be much more accurate. A sketch of this sequence is shown below.
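    To make that ordering concrete, here is a rough C sketch of the system calibration sequence. The SPI and DRDY helpers are hypothetical placeholders for your platform's driver, and the SYSOCAL/SYSGCAL opcodes are from my reading of the command summary, so verify them against the datasheet before relying on them:

```c
#include <stdint.h>

/* Hypothetical platform helpers -- replace with your SPI/GPIO driver. */
extern void spi_write_byte(uint8_t b);
extern void wait_for_drdy_low(void);  /* assumes completion is signaled on DRDY */

#define CMD_SYSOCAL 0xF3u  /* system offset calibration -- verify against the datasheet */
#define CMD_SYSGCAL 0xF4u  /* system gain calibration   -- verify against the datasheet */

/* Run a system calibration: offset first (inputs shorted), then gain
 * (full-scale input applied), per the ordering described above. */
void system_calibrate(void)
{
    /* 1. Short the input of your external signal chain, then measure
     *    and store the system offset. */
    spi_write_byte(CMD_SYSOCAL);
    wait_for_drdy_low();

    /* 2. Apply the system's full-scale input, then measure and store
     *    the gain correction. */
    spi_write_byte(CMD_SYSGCAL);
    wait_for_drdy_low();
}
```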

    5. I mentioned this in 1), but the analog input pins should not see a negative voltage. If AVDD=5V and GND=0V, then the AIN pins must be between 0V and 5V. If any pin goes outside this range by more than 0.3V (outside -0.3V to 5.3V), there may be damage to the device. As a differential input to the ADC, negative measurements come when AINN is higher than AINP.

    6. I don't think we have any software tools for this device. There was an ADS1218EVM, but it was made obsolete many years ago.

    7. The ADS1218 is a delta-sigma (or oversampling) type of ADC. That means that the ADC uses many samples of the input to produce one output ADC data value that you can read. The ratio of the number of input samples taken to create one ADC data value is known as the oversampling ratio (or decimation ratio). Most of the definitions are given at the end of the ADS1218 datasheet, but I'll summarize them here:

    fosc is the oscillator clock frequency. The typical value is 2.4576MHz, with a maximum of 5MHz.

    fmod is the modulator frequency. Generally, this is the frequency at which the input is sampled. This frequency is fmod=fosc/128 or fosc/256, depending on the SPEED bit in the configuration register. However, for higher gains, the input is sampled faster than fmod.

    fdata is the output data rate. This is the rate at which the ADC puts out a measurement reading.

    The decimation ratio is the ratio between fmod and fdata, so fdata = fmod / decimation ratio.
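    Putting these together, here's a small C sketch of the relationship (the SPEED bit value and decimation ratio below are just example settings, not recommendations):

```c
#include <stdio.h>

int main(void)
{
    double fosc = 2.4576e6;  /* oscillator frequency, Hz */
    int speed = 0;           /* SPEED bit: 0 -> fmod = fosc/128, 1 -> fosc/256 */
    int decimation = 1920;   /* example programmable decimation ratio */

    double fmod  = fosc / (speed ? 256.0 : 128.0);  /* modulator (input sampling) rate */
    double fdata = fmod / decimation;               /* output data rate */

    printf("fmod  = %.0f Hz\n", fmod);   /* 19200 Hz */
    printf("fdata = %.1f Hz\n", fdata);  /* 10.0 Hz */
    return 0;
}
```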

    b. The buffer is just a unity-gain buffer and is used to increase the input impedance of the ADC (reducing the loading on the signal source). The downside to the buffer is that it limits the input range of the analog inputs, as listed in the Electrical Characteristics table in the datasheet.

    Hopefully this answers your questions about the ADS1218. Out of curiosity, what are you measuring and how did you settle on this device? This is an older device, and while it is fine for use and is still a popular device, there are other devices with better specifications and more features than this one. If you do settle on this device for your system, feel free to post a schematic for review. There are always plenty of details to consider in constructing a system with a precision ADC and it's best to have the schematic reviewed.

    Joseph Wu

  • Hi Joseph,

    Thanks for the detailed response! All my previous questions are now clear.

    Sorry, I am not able to share the schematic here due to our policies. We basically screen ICs for our customers.

    We test some characteristics of the ICs. While doing so, I hit a scenario where the values obtained from the IC after issuing the RDATAC command deviate from the actual differential voltage. To overcome this, I issue STOPC and then RDATAC again; after doing this 3-4 times, I am able to get the actual values. (I collect 10 samples at every iteration, i.e., 30 bytes.)

    However, with the RDATA command, I am able to resolve a more accurate value for the differential input.

    What may be the possible reason that the RDATAC values deviate from the actual values at first?

    The IC configuration is given below:

    Fosc = 4 MHz

    AVDD, DVDD = 5V

    SCLK = 1kHz

    Channels = 1 (+ve) and 8 (-ve)

    Vref = 2.5V (tried both internal and external)

    PGA = 1

    Decimation ratio = 75 (at all other decimation ratios, the RDATAC values are nowhere near the actual values)

    Bipolar mode, auto digital filter.

    Please let me know if any other details are required.

    Once again, I wish to thank you for your support!

    Regards,

    Makesh

  • Hi Makesh,

    Can you give some examples of what you expect and what you are getting with RDATAC and RDATA? Also, just to be clear, is your SCLK really 1kHz? Can you send the actual values of all the register settings?

    One possibility for a difference between the two methods is timing. The RDATA command latches in the most recent conversion result when the command is decoded. With RDATAC, you must read out the entire result before the next conversion has completed. I would suggest using a scope to monitor DRDY and your communication to see if DRDY falls during the read of your conversion result. If it does, the data will be corrupted. A rough sketch of a DRDY-gated continuous read is shown below.
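    As an illustration, here is a rough C sketch of such a read. The SPI and GPIO helpers are hypothetical placeholders, and the RDATAC opcode should be verified against the command table in the datasheet:

```c
#include <stdint.h>

/* Hypothetical platform helpers -- replace with your SPI/GPIO driver. */
extern uint8_t spi_transfer_byte(uint8_t b);
extern int drdy_is_low(void);

#define CMD_RDATAC 0x03u  /* read data continuously -- verify against the datasheet */

/* Read n conversion results in RDATAC mode. Each 3-byte result is read
 * immediately after DRDY falls, so the read finishes well before the
 * next conversion completes; reading across a DRDY edge corrupts data. */
void read_continuous(int32_t *out, int n)
{
    spi_transfer_byte(CMD_RDATAC);           /* enter continuous-read mode */
    for (int i = 0; i < n; i++) {
        while (!drdy_is_low())               /* wait for a fresh conversion */
            ;
        uint32_t code = 0;
        for (int b = 0; b < 3; b++)          /* 24-bit result, MSB first */
            code = (code << 8) | spi_transfer_byte(0x00);
        out[i] = (int32_t)(code << 8) >> 8;  /* sign-extend to 32 bits */
    }
}
```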

    Best regards,
    Bob B