DDC112 Integration Time

Other Parts Discussed in Thread: DDC112, IVC102

Hello,

The DDC112 dual current input, 20-bit A/D converter specifies a maximum integration time of 1 second. What is the reason for the 1-second limit, and what effect would using a longer integration time have on the measurement or its accuracy? I'm looking for a technical explanation, so details are appreciated.

Thanks and Regards,

DF

  • David,

    Thanks for contacting us. I will get back to you soon.

    Regards,

    -Adam

  • David,

    Integration times that long can get tricky because the device's input bias current (Ibias) and noise keep accumulating on the integrator for the entire period. Accuracy will degrade, but you can certainly try it depending on your accuracy needs. Looking at the Noise vs. TINT plot on page 6 of the datasheet, you can get an idea of how this noise adds up over time. In theory the device will integrate until you tell it to stop; we just don't have data for TINT longer than what is listed in the datasheet. If you're looking to integrate a really low current signal, hence the long TINT, you might look into the IVC102. It has a typical Ibias of 100fA, but it has an analog output, so you would need an ADC too. (A quick sketch of how the signal level sets the usable TINT is at the end of this reply.)

    Please let me know if you have other questions.

    Regards,

    -Adam
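
    As a quick back-of-the-envelope sketch in Python of how the signal level sets the usable TINT for a given range: the 350pC full-scale range and the example currents below are assumptions for illustration only, not recommendations or measured data.

        # Rough sketch: how long a given input current takes to fill the integrator.
        # The integrator accumulates charge Q = I x t, so the usable TINT for a given
        # signal is bounded by the full-scale charge of the selected range.

        FSR_PC = 350.0   # assumed full-scale range in pC (depends on the selected range)

        def tint_to_full_scale(signal_pa):
            """Seconds until the signal current alone fills the full-scale charge."""
            return FSR_PC / signal_pa

        for signal_pa in (1000.0, 350.0, 50.0):
            print(f"{signal_pa:6.0f} pA fills {FSR_PC:.0f} pC in ~{tint_to_full_scale(signal_pa):.2f} s")
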
  • Hi Adam,

    Thanks for the reply. One of the tests in a product we use the DDC112 in checks the result of the DDC112 over a 5 second integration time. While I realize this is out of spec for the part, in your opinion would it be reasonable to expect consistent results from 5 second integration times from part to part? We're seeing quite a bit of variation. According to the datasheet on page 20, the zero input signal result should be 0000 0001 0000 0000 0000 (a quick decimal conversion of that code is at the end of this post). While some parts come close to this, in others it's quite a bit less. Integration times within the 1 second spec are much better, as expected. Could this be due to internal noise or leakage currents of the internal capacitors? On one of the DDC112 parts I have now, I have even lifted the signal input pins off of the PCB so they are floating in free air, but I have been unsuccessful in getting a zero result closer to what is listed above and in the datasheet.

    Thanks for your help!

    David
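
    For reference, a quick Python check of what that datasheet code works out to in decimal (assuming it is a straight-binary 20-bit code):

        # Zero-input output code from the datasheet, read as straight binary
        print(int("00000001000000000000", 2))   # -> 4096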

  • Hi David,

    Jumping into the conversation... Is the measurement repeatable from run to run on the same channel/device?

    If it mostly is, then what you are describing looks to me like it is related to the input bias current spec of the device (0.1pA typ, 10pA max). I am not sure if there are any other effects kicking in beyond a 1s integration time, but I would start there. This current is internal to the device, so disconnecting its input should make little difference, as you observed. I think you can pretty much assume that the current is constant for a given device input, so you can prove this by changing the integration time and seeing whether the output value changes proportionally (a small sketch of that check is at the end of this reply). This current will change with temperature, though, so if you intend to calibrate it out, make sure you keep an eye on your temperature.

    Give it a shot and let us know...

    Eduardo
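
    A small Python sketch of that check, under the assumption that the error is a constant current: the readings and the 350pC range below are placeholders for illustration, not real data.

        # Take zero-input readings at a few integration times and fit a line:
        # the slope (counts per second) converts directly to a bias current,
        # and the TINT = 0 intercept is the device's own zero/offset code.

        FSR_PC = 350.0          # assumed full-scale range in pC
        CODES = 2**20           # 20-bit converter

        # (TINT in seconds, averaged output code) -- replace with measurements
        readings = [(1.0, 4110), (2.0, 4125), (5.0, 4170)]

        n = len(readings)
        sx = sum(t for t, _ in readings)
        sy = sum(c for _, c in readings)
        sxx = sum(t * t for t, _ in readings)
        sxy = sum(t * c for t, c in readings)
        slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # counts per second
        intercept = (sy - slope * sx) / n                   # code at TINT = 0

        ibias_pa = slope / CODES * FSR_PC                   # counts/s -> pA
        print(f"slope ~{slope:.1f} counts/s -> Ibias ~{ibias_pa:.4f} pA")
        print(f"TINT=0 intercept ~{intercept:.0f} counts (nominal zero code is 4096)")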

  • Hi Eduardo,

    Generally speaking, yes, the 5 second readings are consistent for individual devices but vary from one device to another. I don't know the full reason why a 5 second integration time was originally chosen for the test, since I wasn't involved in the product development, but I think the designer's original thought process was that if they can get an ideal reading of 0000 0001 0000 0000 0000 or slightly less than that, then shorter integration times will give even better results, since less noise is accumulated for a smaller TINT. The range of values we are seeing for 5 second integrations with zero input signal is anywhere from 40 to 130. Could this be caused by the variation in input bias current from one device to the next?

    Thanks and Regards,

    David

    What range are you using? The problem with guessing from just one reading is that the spec is very wide (I am not sure of the reason, i.e., whether it is real or due to some test limitation). So, by doing the calculation, it is likely that what you see falls within that range, but that doesn't tell you 100% that it is the Ibias. The calculation is easy, though: output counts ~ Ibias (10pA max) x TINT (5s) / FSR (in pC) x 2^20. But again, that is the maximum error; the part is much more likely to be closer to 100x less than that (a worked version of this calculation is at the end of this reply).

    Again, if you change your integration time with the input disconnected and the error changes proportionally, then you are most likely looking at this effect... It is probably the best test. The same goes for changing the temperature; a ballpark is that Ibias may increase 2x for every 10C increase...

    Cheers,
    Eduardo
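
    A worked version of that calculation in Python; the formula is the one given above, and the 350pC full-scale range is just an assumed example (plug in whichever range you actually use):

        # Output-code error from a constant bias current over one integration:
        # counts ~ Ibias x TINT / FSR x 2^20

        FSR_PC = 350.0                     # assumed full-scale range in pC
        CODES = 2**20                      # 20-bit converter
        TINT_S = 5.0                       # integration time in seconds

        def ibias_to_counts(ibias_pa):
            return ibias_pa * TINT_S / FSR_PC * CODES

        print(f"max  (10 pA): ~{ibias_to_counts(10.0):,.0f} counts")   # worst-case spec
        print(f"typ (0.1 pA): ~{ibias_to_counts(0.1):,.0f} counts")
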
  • Hello again Everyone,

    I'm sorry to resurrect this old thread, but I have a couple more questions regarding this part. One of the production tests we do on the PCB where this part is used is to take a baseline zero reading from the device with nothing attached to the inputs. We're expecting to see a near-zero result, but I'm wondering if it's even meaningful to do this with floating inputs?

    Secondly, how sensitive is this device to leakage currents? We're working on improving our production process: better cleanliness, baking out PCB humidity, and adding conformal coating.

    Thanks and Regards,

    David
  • Hi David,

    No worries! That's what we are here for :)

    To your first question, I guess it depends on what you want to "baseline". If it is the device itself (its offset), then sure, disconnecting the input seems like a good way to do so. If you want to measure other leakage currents in the system, then obviously the inputs should still be connected...

    About sensitivity to leakage currents... Well, the device measures currents at its input with 20-bit resolution. As such, it is very sensitive (on purpose, that is its function) to input currents. Obviously the device cannot tell the difference between a real signal current (which we aim to measure) and a parasitic/leakage current. If those leakage currents are constant and small, and the signal current can be made zero, then you should be able to measure the leakage current and calibrate it out (a minimal sketch of that idea is at the end of this reply). The other, parallel approach is what you suggest: avoid or reduce the leakage currents to begin with, using techniques like the ones you describe.

    Hope this makes sense,
    Eduardo
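
    A minimal Python sketch of that calibrate-it-out idea, assuming the leakage/offset is constant; the reading values and names are placeholders, not a DDC112 driver:

        # With the signal forced to zero (or disconnected), average a few readings
        # taken at the SAME integration time and temperature to get a baseline,
        # then subtract that baseline from later measurements.

        def baseline(zero_codes):
            """Average zero-signal output code."""
            return sum(zero_codes) / len(zero_codes)

        def corrected(raw_code, zero_baseline):
            """Measurement with the constant offset/leakage removed."""
            return raw_code - zero_baseline

        zero_codes = [4140, 4138, 4143, 4141]   # example zero-input readings
        b = baseline(zero_codes)
        print(corrected(5250, b))                # ~ signal-only counts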