DAC7568EVM: Analog output inaccurate and changing - Calibration needed?

Part Number: DAC7568EVM
Other Parts Discussed in Thread: REF5025, DAC8568

Hi,

I'm planning on using the DAC8568C in a design (actually, I only need about 11-bit accuracy, but given all the inaccuracies I read about in the datasheet, I thought I'd better go for 16 bits). For evaluation, I'm using a DAC7568EVM (but I understand that the DACs are the same except for the resolution).

I'm powering the EVM with a TPS7A471EVM, which I've set to provide 5.4 V (it actually measures 5.439 V). The SPI connection to my Arduino runs just fine. I have a problem with the analog output, though.

If I power up the internal 2.5 V reference, I get a full-scale output of between 5.0006 and 5.0007 V according to my Fluke 187 true-RMS multimeter (so everything is as expected here; the output is overlaid with some noise according to the oscilloscope, but as long as the RMS value is okay, I don't care too much). However, the zero-scale output is 2.8 mV. Measuring again two hours later, after reconnecting all the wires, I now get 4.9997 V at full-scale and 2.7 mV at zero-scale.

At the time of the second measurement, if I connect the external reference (REF5025) provided on the evaluation board, I get 5.0000 V full-scale and 2.4 mV zero-scale output.

Now, I have some questions on my scenario:

  1. How is it possible that I'm so much off at zero-scale but quite accurate at full-scale? With 1 LSB initial accuracy, shouldn't I be off by a maximum of 1.22 mV? Of course, I also read about the voltage reference accuracy of 5 mV max., but since the full-scale output is correct, I can't see how that relates. What is the best way to compensate for this? EDIT: I just saw the zero-code error of up to 4 mV. Is this likely to be the problem? Can I somehow compensate for it?
  2. How can I expect to get 16-bit accuracy with an initial voltage reference accuracy of +/-5 mV? This looks more like 10-bit accuracy to me.

Can I get rid of those inaccuracies by calibration? If so, how can I do that (and is it enough to do that once in the Arduino code for all DACs, or do I have to do it for every single DAC, or even on every start-up of the DAC)? What is the best way to achieve the desired accuracy (11 bit for me, after subtracting all inaccuracies)?

Thank you so much for answering my questions! I'm quite inexperienced with analog signals, so I hope you can help me here.

Best regards,

Henrik

  • Hi Henrik,

    Welcome to E2E and thank you for your query. I am looking into the issue and will get back to you as soon as possible.

    Regards,
    Uttam Sahu
    Applications Engineer, Precision DACs
  • Thank you, Uttam!

  • Henrik,

    My two cents on this stuff...

    Henrik Hille said:
    How is it possible that I'm so much off at zero-scale but quite accurate at full-scale? With 1 LSB initial accuracy, shouldn't I be off by a maximum of 1.22 mV? Of course, I also read about the voltage reference accuracy of 5 mV max., but since the full-scale output is correct, I can't see how that relates. What is the best way to compensate for this? EDIT: I just saw the zero-code error of up to 4 mV. Is this likely to be the problem? Can I somehow compensate for it?

    The DAC8568 is a single-supply device, which in your case you are operating from a ~5.4 V VDD supply. Consider that the output amplifier's negative supply is GND while its positive supply is 5.4 V. For a positive full-scale output you have 400 mV of headroom above the 5 V full-scale output, so you are not limited by any output swing-to-rail issues there. At zero-scale, however, there is no foot-room, so you will observe some signature of the amplifier's output swing-to-rail capability, which the datasheet describes as the zero-code error. Since this is a single-supply device, there isn't much you can do to compensate for this error, and the measurements you have shared are in line with the datasheet figures.

    If getting closer to zero-scale is important to you, we can suggest other approaches. I think the most practical approach would be an unbuffered R-2R DAC, so that you can select an external amplifier with lower swing-to-rail limitations and/or operate it on a dual supply.

    Henrik Hille said:
    How can I expect to get 16-bit accuracy with an initial voltage reference accuracy of +/-5 mV? This looks more like 10-bit accuracy to me.

    Resolution and accuracy aren't the same thing. Accuracy describes how close you are to an absolute value, while resolution describes the step size / number of steps you can take. If you were to calibrate the offset and gain errors out of a DAC, within the linear region of operation you would essentially be left with only the INL error + noise as error sources, along with 16-bit step sizes. For the DAC8568, assuming perfect offset and gain calibration, you would have 0.018% FSR error in the linear region of operation, which the datasheet defines as the range from code 485 through code 64714 (staying away from the swing-to-rail limitations).

    Henrik Hille said:
    Can I get rid of those inaccuracies by calibration? If so, how can I do that (and is it enough to do that once in the Arduino code for all DACs, or do I have to do it for every single DAC, or even on every start-up of the DAC)?

    Each DAC needs to be calibrated individually. Calibration should not be required on each start-up, though over long enough time (years) some parameters will begin to shift. Heat on the PCB or in the ambient environment can also impact performance as defined by all of the datasheet drift coefficients.

    Henrik Hille said:
    What is the best way to achieve the desired accuracy (11 bit for me, after subtracting all inaccuracies)?

    The most conventional simple calibration to remove offset and gain errors is a two-point measurement of the unit at room temperature (25°C). From there you apply an offset and a scalar (gain) coefficient to the input data (think back to the old y = mx + b).
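
    Just to illustrate the idea (a rough sketch only, nothing I have verified on your hardware; the 5 V ideal range and the calibration codes are taken from this thread, and the two measured voltages are whatever you read with your multimeter at those codes), deriving the two coefficients on the Arduino could look something like this:

        // Two-point offset/gain calibration sketch (illustrative only).
        // Assumes a 16-bit DAC with an ideal 0-5 V transfer function.

        const float    V_FSR_IDEAL = 5.0;       // ideal full-scale range in volts
        const float    LSB_IDEAL   = V_FSR_IDEAL / 65536.0;

        const uint16_t CAL_CODE_LOW  = 485;     // low calibration code (datasheet linear region)
        const uint16_t CAL_CODE_HIGH = 64714;   // high calibration code (leaves correction headroom)

        float calGain;    // m in "corrected code = m * ideal code + b"
        float calOffset;  // b

        // vMeasLow / vMeasHigh are the voltages measured at the two calibration codes.
        void deriveCalibration(float vMeasLow, float vMeasHigh) {
          // Measured transfer function: V(code) = mMeas * code + bMeas
          float mMeas = (vMeasHigh - vMeasLow) / (float)(CAL_CODE_HIGH - CAL_CODE_LOW);
          float bMeas = vMeasLow - mMeas * (float)CAL_CODE_LOW;

          // Pre-distort the code so the measured output lands on the ideal value:
          // correctedCode = (V_ideal - bMeas) / mMeas, with V_ideal = LSB_IDEAL * idealCode
          calGain   = LSB_IDEAL / mMeas;
          calOffset = -bMeas / mMeas;
        }

    You would run this once per unit with the two measured voltages and keep calGain / calOffset somewhere non-volatile to apply to every code you write afterwards.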

  • Kevin, thank you so much for your answer! Things are much clearer to me now. :)

    Let me give you some more information about my complete system, so that my follow-up questions make more sense. I'm designing an encoder that is supposed to transform a linear position sensor's sin/cos signal into an analog voltage (maximum 5 V) that is proportional to the sensed position. The analog signal is later fed to a PLC analog input module for further processing.

    The application's accuracy requirement is quite high: I need to be able to distinguish 666 different positions (hence voltage values; as I understand from your text, that would be called the resolution requirement; before, it was 2000 steps, which is why I initially referred to 12-bit accuracy, meaning 2.5 mV) with the highest possible accuracy (+/-1 position step would be nice). One voltage step would consequently be around 7.5 mV (actually a little less, since I will avoid the zero range as explained below). The derivation of the position from the raw sensor signal works fine and is already running on my Arduino. Now I'm working on the analog output side, which seems to be a bit trickier.

    Kevin Duke said:

    The DAC8568 is a single-supply device, which in your case you are operating from a ~5.4 V VDD supply. Consider that the output amplifier's negative supply is GND while its positive supply is 5.4 V. For a positive full-scale output you have 400 mV of headroom above the 5 V full-scale output, so you are not limited by any output swing-to-rail issues there. At zero-scale, however, there is no foot-room, so you will observe some signature of the amplifier's output swing-to-rail capability, which the datasheet describes as the zero-code error. Since this is a single-supply device, there isn't much you can do to compensate for this error, and the measurements you have shared are in line with the datasheet figures.

    If getting closer to zero-scale is important to you, we can suggest other approaches. I think the most practical approach would be an unbuffered R-2R DAC, so that you can select an external amplifier with lower swing-to-rail limitations and/or operate it on a dual supply.

    Thank you for the explanation! This makes sense to me, and it actually occurred to me that I'd better not use the zero range anyway, so that the PLC can detect a broken wire. Do you mean that the upper boundary of the linear range for me won't be limited to code 64714 but is actually 65535, given that my power supply is 5.4 V? Then I could use everything between 37 mV (code 485) and 5 V and assume linearity between those points, right?
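
    For what it's worth, this is roughly how I imagine mapping my positions onto that code range (just my own sketch so far; the 666 positions and the code limits are my assumptions, nothing from the datasheet):

        // Map a position index (0..665) linearly onto DAC codes 485..65535,
        // so the region below ~37 mV stays reserved for wire-break detection.
        const uint16_t CODE_MIN = 485;       // ~37 mV with a 5 V full-scale range
        const uint16_t CODE_MAX = 65535;     // full-scale code
        const uint16_t NUM_POSITIONS = 666;

        uint16_t positionToCode(uint16_t position) {
          if (position >= NUM_POSITIONS) position = NUM_POSITIONS - 1;
          uint32_t span = (uint32_t)(CODE_MAX - CODE_MIN);
          return CODE_MIN + (uint16_t)(((uint32_t)position * span) / (NUM_POSITIONS - 1));
        }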

    Kevin Duke said:

    Resolution and accuracy aren't the same thing. Accuracy describes how close you are to an absolute value, while resolution describes the step size / number of steps you can take. If you were to calibrate the offset and gain errors out of a DAC, within the linear region of operation you would essentially be left with only the INL error + noise as error sources, along with 16-bit step sizes. For the DAC8568, assuming perfect offset and gain calibration, you would have 0.018% FSR error in the linear region of operation, which the datasheet defines as the range from code 485 through code 64714 (staying away from the swing-to-rail limitations).

    Those two terms are not 100% clear to me yet. Does the INL / FSR error actually influence accuracy or resolution? Maybe the resolution is always fixed at 16 bit by the DAC, and all I want is an accuracy of around 7.5 mV at the analog output?! Could I then even use the 12-bit DAC, since my desired accuracy is still coarser than 5 V/2^12 = 1.22 mV, and so changing the resolution from 16 to 12 bit won't affect my accuracy? 0.018% sounds quite nice. That is the INL error, right? What do you actually mean by "perfect calibration"? Can I actually get rid of the specified "full-scale error" completely by applying gain and offset calibration as described below?

    Kevin Duke said:

    The most conventional simple calibration to remove offset and gain errors is a two-point measurement of the unit at room temperature (25°C). From there you apply an offset and a scalar (gain) coefficient to the input data (think back to the old y = mx + b).

    That sounds quite simple. So to calibrate to my desired accuracy, it is sufficient to input code 485, measure the output, input code 65535, measure the output, derive offset and gain calibration factors from the difference to what I expected (37 mV and 5 V), and store them in EEPROM? In production use I would then apply those factors to each code before writing it to the DAC. That's really no magic. Sounds almost too simple to be true. :-)

    So, in short, by not using the zero range and applying offset and gain calibration, I should be able to reach my goal of around +/-7.5 mV accuracy, right? Can you also give me a hint as to whether that would still work if the application requirement changes to an accuracy of +/-5 mV or even 2.5 mV? Besides the DAC core, would the internal reference still be good enough in those cases? Thank you!

  • Hello again Henrik,

    Glad we're on the right path to helping you understand the datasheet details and align device selection with your application's needs.

    Henrik Hille said:
    Do you mean that the upper boundary of the linear range for me won't be limited to code 64714 but is actually 65535, given that my power supply is 5.4 V? Then I could use everything between 37 mV (code 485) and 5 V and assume linearity between those points, right?

    From an intuitive electrical perspective I would say that your upper code limit for the linear region of operation will be greater than 64714 if you have any headroom above 5 V. With the headroom you've described in your application, I believe you could expect to see no issue up to the full-scale code 65535. I think the datasheet specifies the high code for this two-point line of best fit the way it does for legacy reasons, because older parts with a 5 V FSR were also specified with a 5 V positive supply. So I cannot really guarantee anything beyond what the datasheet says, but I think you should have access all the way to the full-scale code within the linear region of operation. This also matches your measured observations.

    The only thing I'd add is that, without calibration, the offset and gain errors may still leave the full-scale value not exactly right, because these effects also impact the output at full-scale.

    Henrik Hille said:
    Those two terms are not 100% clear to me yet. Does the INL / FSR error actually influence accuracy or resolution? Maybe the resolution is always fixed at 16 bit by the DAC, and all I want is an accuracy of around 7.5 mV at the analog output?!

    There are two main specifications that describe the effective resolution of a data converter: differential non-linearity (DNL) and integral non-linearity (INL). I use percent of full-scale range (% FSR) as a term to describe errors relative to the complete range. So % FSR is a unit, while INL and DNL are specifications describing the device (usually INL and DNL are specified in LSBs, but you can convert them to % FSR).

    DNL specifies the error in the step between two adjacent codes; the DAC8568 is specified as a 16-bit device with a maximum DNL error of +/-1 LSB. That means for any two adjacent codes the output will always increment or decrement as intended and there are no missing codes, so you truly have a 16-bit device.

    INL is effectively the summation of the DNL errors. You can imagine that each step isn't going to be exactly 1 LSB; for example, a step may be 1.5 LSBs (here the error is 0.5 LSB). If you had two consecutive 1.5 LSB steps, your integrated error would be 1 LSB over those two steps. So essentially, if you were to perform calibration, the only error you'd be left with (in basic terms) is the INL error, which for the DAC8568 means that at no point over the entire transfer function (except for what you lose to calibration) would you see an error larger than 12 LSBs, or the 0.018% FSR number I mentioned before.

    I said INL and DNL are the main error sources. I think I alluded to this before as well, but you could also potentially see effects from noise and from the temperature drift coefficients.

    Henrik Hille said:
    Could I then even use the 12-bit DAC, since my desired accuracy is still coarser than 5 V/2^12 = 1.22 mV, and so changing the resolution from 16 to 12 bit won't affect my accuracy?

    Exactly. Accuracy = % FSR error. Resolution = Step Size.

    Henrik Hille said:
    0.018% sounds quite nice. That is the INL error, right? What do you actually mean by "perfect calibration"? Can I actually get rid of the specified "full-scale error" completely by applying gain and offset calibration as described below?

    Yes, that number is 12 LSBs converted to % FSR: 12 / 65536 * 100 = ~0.018%.

    When I say perfect calibration I mean that you have completely eliminated the effects of offset and gain error based on your two-point measurement. In reality it probably won't exactly eliminate these effects, given noise in the measurement, repeatability, etc. The calibration I proposed is purely in the digital domain, so end-point effects that result from the analog domain (like the output amplifier swing-to-rail limitations) would still remain. Offset error typically comes from the input offset voltage of the output amplifier, which is in the analog domain. Gain error comes from resistor matching in the DAC resistor string, which is also in the analog domain. So at the end-points, even with "perfect calibration", you are still potentially going to see errors, because you'll run out of codes to compensate with.

    Example: Imagine you have something extreme like -1% gain error on a 0-5V output 16-bit DAC, so at full-scale code 65535 you see 4.95V instead of 5V. You cannot input codes greater than 65535 to compensate for this.

    Henrik Hille said:
    That sounds quite simple. So to calibrate to my desired accuracy, it is sufficient to input code 485, measure the output, input code 65535, measure the output, derive offset and gain calibration factors from the difference to what I expected (37 mV and 5 V), and store them in EEPROM? In production use I would then apply those factors to each code before writing it to the DAC. That's really no magic. Sounds almost too simple to be true. :-)

    As mentioned above, you will need to stay below code 65535 in order to actually have room at the high end to perform this calibration. The datasheet high-code and low-code values are probably good guidelines.
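
    And just to sketch the "apply it on every write" part (again purely illustrative and untested; storing the coefficients with the standard Arduino EEPROM library and the writeDacCode() stub are my assumptions about your setup):

        #include <EEPROM.h>

        // Apply stored two-point calibration coefficients before every DAC write
        // (illustrative only; the coefficients would have been saved once per unit
        // during calibration, e.g. with EEPROM.put(0, cal)).

        struct CalData {
          float gain;    // m
          float offset;  // b
        };

        CalData cal;

        void writeDacCode(uint16_t code) {
          // Placeholder: replace with your existing SPI transfer to the DAC
          (void)code;
        }

        void loadCalibration() {
          EEPROM.get(0, cal);                 // restore the coefficients at start-up
        }

        uint16_t correctedCode(uint16_t idealCode) {
          float c = cal.gain * (float)idealCode + cal.offset;
          if (c < 0.0f)     c = 0.0f;         // cannot go below the zero code
          if (c > 65535.0f) c = 65535.0f;     // ...or above the full-scale code
          return (uint16_t)(c + 0.5f);        // round to the nearest code
        }

        void setOutput(uint16_t idealCode) {
          writeDacCode(correctedCode(idealCode));
        }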

    Henrik Hille said:
    So, in short, by not using the zero range and applying offset and gain calibration, I should be able to reach my goal of around +/-7.5 mV accuracy, right? Can you also give me a hint as to whether that would still work if the application requirement changes to an accuracy of +/-5 mV or even 2.5 mV? Besides the DAC core, would the internal reference still be good enough in those cases? Thank you!

    If you calibrate successfully as described, you will eliminate the effect of the internal reference's initial accuracy, as it is included in your gain calibration routine. The two effects are directly related.

    Again, if the calibration were perfect, the INL error alone (no noise etc., as I have no grounds for assumptions concerning bandwidth or operating temperature range) at the 12 LSB maximum would be 12 * 5 V / 65536 ≈ 915 µV, just a bit below 1 mV.

  • Great, that helps a lot! I really appreciate this easy way of getting individual and really helpful advice. Thank you!