ADS1243-HT: Input range of 0 to 5V with a 5V reference

Part Number: ADS1243-HT
Other Parts Discussed in Thread: ADS1243

I have a circuit that was designed for a 0 to 2.5V input range with a 5V reference. We have so far used the ADS1243 with GAIN=1 and RANGE=1 providing the required input span of 0 to 2.5V.

We have now noticed that some of our sensors have an offset and go slightly higher than 2.5V, i.e. the input range is more like 0.2 to 2.7V. We tried changing the ADS1243 input span to cover 0 to 5V by setting RANGE=0, and it seems to work, although the datasheet states that for VDD=5V and RANGE=0 the reference must not exceed 2.5V.
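
For reference, this is the span arithmetic as I understand it from the datasheet (just a sanity-check sketch, not production code; the scaling formulas are my reading of the RANGE description, so please correct me if they are wrong):

```c
#include <stdio.h>

/* Differential full-scale span as I read the datasheet:
 * RANGE = 1: span = VREF / (2 * PGA)   (VREF may be up to 5V)
 * RANGE = 0: span = VREF / PGA         (but VREF is only specified up to 2.5V)
 * so the RANGE = 0 figure with VREF = 5V is outside the specified conditions. */
static double fullscale_span(double vref, int pga, int range)
{
    return (range == 1) ? vref / (2.0 * pga) : vref / (double)pga;
}

int main(void)
{
    double vref = 5.0;   /* +Vref = 5V, -Vref = 0V              */
    double vmax = 2.7;   /* highest sensor voltage we now see   */

    printf("RANGE=1, PGA=1: span = %.2f V -> 2.7 V %s\n",
           fullscale_span(vref, 1, 1),
           vmax > fullscale_span(vref, 1, 1) ? "over-ranges" : "fits");
    printf("RANGE=0, PGA=1: span = %.2f V (but Vref > 2.5 V is out of spec)\n",
           fullscale_span(vref, 1, 0));
    return 0;
}
```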

Before proceeding I would like to understand where the restrictions in the datasheet come from and what the consequences might be if they are violated.

I also thought of shifting the input span with the Offset DAC, but as I understand the datasheet, this would not work together with self-calibration. Is that correct?

I appreciate any thoughts, many thanks!

  • Hello

    To clarify, Vref is the difference between the Vref+ and Vref- pins.   The voltages on those two pins can be anything within the power rails, as long as the differential between the two is less than the specified limit.   So, in your case, if you increase Vref+ to 2.7V and Vref- to 0.2V, you are still maintaining that 2.5V differential and are fine.

    Also note that while the body of the text states a generic 2.5V limit, the datasheet parametric table shows that the differential can actually be up to 2.6V and the part will still maintain proper performance.

  • Hello Kirby,

    Many thanks for your reply. I am powering the device with 5V and have the +Vref pin connected to the supply rail (via an RC filter). The -Vref pin is connected to ground, i.e. the Vref differential is 5V.

    As described in my original email, this was used with RANGE=1 for an input span of 0 to 2.5V and as such was within the constraints stated by the datasheet.

    However, we discovered that in some cases our input signal can exceed 2.5V, and the question is whether we can accommodate this without a circuit change, i.e. maintaining the 5V reference and not adding any input attenuation. Basically, how to cover a 5V input span using a 5V reference.

    Setting RANGE=0 seems to work; it's only that the datasheet suggests it shouldn't be done when Vref=5V. My question therefore is: what exactly is the background for the constraint set out in the datasheet? I just want to make sure there are no nasty surprises later.

    Many thanks, MaVo

  • Another way to look at this is that the input range is limited to half the supply voltage range, whether you use RANGE=0 and Vref is 2.5V or RANGE=1 and Vref is 5V.

    If Vref is above the limit, the part could over-range and you could see clipping of the signal. There is a possibility it might also impact the linearity of the part, but we do not have test data for that.

  • Hello Kirby,

    Many thanks again for your reply. So, if I understand this correctly, the device is not designed to measure anything that is above 2.5V differential, regardless of whether RANGE=0 or RANGE=1. In practice it may work, but with some clipping or inferior linearity. Correct?

    I did a quick test with RANGE=0 and differential input voltages up to 4.5V and compared the ADC results with a multimeter. They both agree to within 1mV. So clipping doesn't seem to be an issue, but maybe linearity or overall precision could be impaired. My test is too crude to tell.
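
    For completeness, this is how I converted the 24-bit codes for that comparison. I am assuming binary two's-complement output data and that positive full scale corresponds to the nominal RANGE=0 span of VREF/PGA; please correct me if that reading of the data format is wrong:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* My code-to-voltage conversion for the bench comparison.  Assumptions
     * (mine, not from TI): the 24-bit result is binary two's complement and
     * positive full scale (0x7FFFFF) corresponds to VREF/PGA with RANGE=0. */
    static double code_to_volts(int32_t code24, double vref, int pga)
    {
        if (code24 & 0x800000)        /* sign bit of the 24-bit word set? */
            code24 -= 0x1000000;      /* sign-extend to a negative value  */
        return (double)code24 / 8388608.0 * (vref / pga);   /* 2^23 = 8388608 */
    }

    int main(void)
    {
        /* example reading with VREF = 5V, PGA = 1: roughly 90% of full scale */
        printf("code 0x733333 -> %.4f V\n", code_to_volts(0x733333, 5.0, 1));
        return 0;
    }
    ```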

    What I would like to understand is whether the combination of Vref=5V and RANGE=0 could impair the conversion result over the whole input range, or only when the input signal exceeds 2.5V. Remember, my input signal ranges from 0.2V to 2.7V, i.e. it only exceeds 2.5V marginally. I could easily live with somewhat reduced precision at the top end (above 2.5V), as long as the lower end is OK.

    In the end I see 3 options:

    1. Use the above approach (Vref=5V, RANGE=0) and hope for the best. The advantage would be that I don't need to change the PCB.

    2. Use Vref=5V and RANGE=1 and attenuate the input signal with a resistor divider, so that the ADC doesn't see more than 2.5V (a rough sizing sketch follows after option 3). This would require a PCB change and the tolerance of the resistors would add to the error budget. But I guess that would be the safest approach.

    3. Use Vref=5V and RANGE=1 and use the Offset DAC to subtract 200mV from the input signal, to bring it below 2.5V. However, I am not sure how the Offset DAC would affect the precision of the conversion. The datasheet implies that the Offset DAC is quite crude, and also that I wouldn't be able to use self-calibration in this case. Or am I interpreting this wrongly?
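
    For option 2, this is the rough divider sizing I have in mind (the component values below are purely illustrative, not from the existing design):

    ```c
    #include <stdio.h>

    /* Rough sizing for the option-2 divider (illustrative values only):
     * attenuate 2.7V to below 2.5V and estimate the worst-case gain error
     * from resistor tolerance by checking the tolerance corners. */
    int main(void)
    {
        double r_top = 1000.0;    /* series resistor, ohms (hypothetical)    */
        double r_bot = 10000.0;   /* resistor to ground, ohms (hypothetical) */
        double tol   = 0.001;     /* 0.1% resistors                          */
        double vin_max = 2.7;

        double nom      = r_bot / (r_top + r_bot);
        double worst_hi = (r_bot * (1 + tol)) / (r_top * (1 - tol) + r_bot * (1 + tol));
        double worst_lo = (r_bot * (1 - tol)) / (r_top * (1 + tol) + r_bot * (1 - tol));

        printf("nominal ratio %.4f -> %.3f V at 2.7 V in\n", nom, nom * vin_max);
        printf("ratio spread %.4f .. %.4f (about %.3f %% worst-case gain error)\n",
               worst_lo, worst_hi, 100.0 * (worst_hi / nom - 1.0));
        return 0;
    }
    ```

    With 0.1% parts the ratio error stays well below the offset I am worried about, although I realise the divider's source impedance would also have to be checked against the ADC input stage.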

    What would be your advice regarding these 3 options?

    Many thanks again, MaVo

  • I think you have a very good understanding of the part and I will provide some feedback.  However, I have just been filling in while our expert on the part has been out of the office.   He will return on Monday and may have some additional advice.

    Something I should have asked from the start: is your interest in the ADS1243 or the ADS1243-HT? Are you going to be using this in a high-temperature application?

    For option 3, using the DAC will impact performance over temperature. However, since you are only barely out of range, the DAC setting will be small and you might not see much impact. Your option 2 could create as much error as using the DAC. If you don't care about the higher voltages, then option 1 might be the best choice. If you are going to use this in a high-temperature application, you should check your final design at the maximum temperature.

  • Hello Kirby,

    Thanks again for your reply. Yes, we will be using the part at high temperature (175°C/350°F long-term). I will be doing my own oven tests, but none of the ovens is free at the moment, so for now I can only test at ambient temperature. I am looking forward to a reply from your colleague. I really appreciate your support.

    MaVo

  • Just wondering whether there are any more suggestions? Many thanks.

  • "However, I have just been filling in while our expert on the part has been out of the office. He will return on Monday and may have some additional advice"

    I would appreciate if your expert could look into this. Please get back even if you don't have an answer, so that I can move on. Many thanks.

  • Hi MaVo,

    Sorry for the delayed response.  I have a slightly different interpretation of the datasheet.  A 5V differential analog input signal is possible and does not violate the specs.

    From my perspective the following is a valid use case per the specs:

    VDD   = +5V
    VREF+ = +5V
    VREF- = 0V
    RANGE = 1
    PGA   = 1
    Full-scale input range = 5.0 Vpp differential = 2.5 Vpp single-ended

    So if your analog signal is split between the two inputs with a common-mode bias of VDD/2 = 2.5V, there should be no problem with a sensor voltage of 2.7V. The question, however, is how the 2.7V sensor voltage is being applied to the differential inputs. Can you provide the AC swing on each analog input around a common-mode bias point?
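
    Purely as an illustration of what I mean by splitting the signal around a mid-supply common mode (numbers only, not a circuit proposal):

    ```c
    #include <stdio.h>

    /* Illustration only: a differential signal riding on a common-mode bias of
     * VDD/2.  With Vcm = 2.5V, a 2.7V differential sensor voltage keeps both
     * pins comfortably inside the 0V to 5V rails. */
    int main(void)
    {
        double vcm   = 2.5;   /* common-mode bias = VDD / 2             */
        double vdiff = 2.7;   /* sensor voltage, applied differentially */
        double ainp  = vcm + vdiff / 2.0;
        double ainn  = vcm - vdiff / 2.0;

        printf("AINP = %.2f V, AINN = %.2f V, AINP - AINN = %.2f V\n",
               ainp, ainn, ainp - ainn);
        return 0;
    }
    ```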

    Thanks

    Christian

  • Hello Christian,

    Many thanks for your reply. Unfortunately I am measuring 6 other signals with that device, i.e. all my channels are referenced via AIN7 to ground. I therefore cannot apply a 2.5V bias and measure in differential mode. My other channels are all within the 0 - 2.5V range, i.e. it is only one channel that goes beyond the 2.5V limit.

    As I mentioned in my earlier posts, the device seems to work with a 0 - 5V input span when I set RANGE = 0, but this mode appears to be in contradiction with the datasheet. Ideally I would like to understand why the datasheet excludes this mode and what the implications might be if I go ahead and use the device in that way.

    As an alternative I was also wondering whether the offset DAC could be used to subtract 200mV from my signal and bring it within the 0 - 2.5V range. But the datasheet is not clear about where exactly the DAC voltage is applied, or whether the offset is preserved when running a self-calibration or simply cancelled out. I would appreciate it if you could shed some light on the operation of the offset DAC and whether I could use it to solve my problem.

    Many thanks for your help

    MaVo

  • Hi MaVo,

    Thanks for the clarification. I am reaching out to the development team for this device to get clarification on the specs. It definitely sounds like a violation of the datasheet when you apply a 5V reference (Vref+=5V and Vref-=0V) and set RANGE = 0. The datasheet clearly states that with a reference over 2.6V, RANGE should be 1. I'll let you know once I hear back.

    Thanks

    Christian

  • I got the following clarifications from the development team.  We are unable to assign a design resource that can investigate why the scenario you mention is working even though it is a clear violation of the spec.  Our recommendation is to ensure you operate within the specifications.

    This device cannot measure more than 2.5 Vpp, whether RANGE = 0 or RANGE = 1. When RANGE = 0, the reference voltage is limited to 2.5V; when RANGE = 1, the differential voltage is limited to VREF/(2*PGA).

    • What does the RANGE=0 bit do?  Customer reports applying a 5V REF+ with RANGE=0 and being able to convert a 4.5Vpp-se signal.  This does appear to be a violation of the spec.  Any information you have is appreciated.
      • This I do not know… If I had to guess, I would imagine it does some internal scaling of either the input or reference voltage. In the case of RANGE = 0, the ADC can measure a differential voltage up to VREF, but we only specify VREF to 2.6V. This might imply that the reference voltage is internally gained up (or double-sampled). However, I am surprised that this scenario is working for the customer.

    The offset DAC is applied in the analog domain between the multiplexer and the PGA. The datasheet section on calibration mentions that the ODAC should be disabled during calibration so as to not calibrate out the desired offset. Once calibration is complete, the ODAC offset can be re-applied.
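
    As a rough sketch of that ordering (the hooks below stand in for your own SPI driver routines and are not a TI-supplied API; the actual register addresses and command codes have to come from the datasheet):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the calibration ordering described above.  The two hooks are
     * placeholders for the user's own SPI driver; they are not TI functions. */
    typedef struct {
        void (*write_odac)(uint8_t code);   /* write the ODAC register         */
        void (*self_calibrate)(void);       /* start self-calibration and wait */
    } ads1243_hooks;

    static void calibrate_then_restore_odac(const ads1243_hooks *hw, uint8_t odac_code)
    {
        hw->write_odac(0);           /* 1. remove the deliberate offset first      */
        hw->self_calibrate();        /* 2. self-calibrate on the true zero         */
        hw->write_odac(odac_code);   /* 3. re-apply the intended offset afterwards */
    }

    /* Stub hooks so the sketch compiles stand-alone; replace with real SPI code. */
    static void stub_write_odac(uint8_t code) { printf("ODAC <- 0x%02X\n", code); }
    static void stub_self_calibrate(void)     { printf("self-calibration...\n"); }

    int main(void)
    {
        ads1243_hooks hw = { stub_write_odac, stub_self_calibrate };
        calibrate_then_restore_odac(&hw, 0x14);   /* 0x14 is an arbitrary example code */
        return 0;
    }
    ```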

    Thanks

    Christian