TI E2E Community
Precision Data Converters Forum
DAC1220 output calculation with DIR, FCR and OCR
Dear support of Texas Instruments,
I would like to manually calibrate the DAC1220 in a circuit with an output amplifier. For this I would like to know how the DAC output can be calculated from the DIR, FCR and OCR. The DAC1220 is used in 20-bit mode with straight binary code. The formula on page 14 of the datasheet doesn't show how the output is calculated with FCR and OCR. Because the values have different sizes, they must be scaled before and after the calculation, as described on page 13.
But how is the 20-bit code placed in the 24-bit DIR? In particular, I would like to know how the don't-care bits are filled.
How is the 24-bit FCR multiplied in, which (in combination with the 24-bit DIR) would produce a 48-bit value?
And after that, how is the 24-bit OCR scaled to influence the calculated 48-bit value?
Maybe you make the calculations in a different way.
I would appreciate your help.
Welcome to the forum! If this were a delta-sigma ADC I could explain this easily. Unfortunately it is a little more complicated with the DAC because of the way the coefficients are calculated and because the result is used by the modulator. In simple terms, the offset is calculated by a comparison to VREF. The offset correction is added (with the value being either positive or negative) to the DIR value. As the modulator input needs to be in two's complement, the DIR will first be converted if it is in straight binary. The default setting for the OCR register is zero. The OCR value will be relative to the stability of the reference.
The FCR value defaults to 0x800000 and is multiplicative. This value is determined from an internally generated reference and compared similarly to the offset calibration, except that the value is not two's-complement adjusted. As far as how the numbers are adjusted, this is done internally, so I can't tell you how they are weighted or used by the modulator. The default value represents a gain scaling of 1, and the value is relative to VREF. If I have thought this through correctly, the resultant code voltage approximately equals 2 times (DIR times FCR, with the result right-shifted by 6).
Thank you for your answer.
Maybe I did not understand your explanation. The problem is that I measured an output voltage of 1.005 V at input code 0x33333 and 3.561 V at input code 0xCCCCC. The reference voltage is approximately 2.5 V.
The OCR is set to 0xF66DB7 and the FCR is set to 0x6E0148.
I would like to know how the outputs of 1.005 V and 3.561 V can be calculated.
A modified version of the formula on page 14 of the datasheet, V_OUT = 2 * V_REF * (code * FCR + OCR) / 2^20, does not work.
The given values are:
code = 0x33333 or 0xCCCCC
OCR = 0xF66DB7
FCR = 0x6E0148
V_REF = 2.5 V
The solution should be:
V_OUT = 1.005 V or 3.561 V
Maybe there are different steps to manipulate the code, or I am using the wrong formula.
I would appreciate it if you could give me the correct calculation (formula).
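For what it's worth, the scaling problem in the attempted formula can be shown numerically. This is just a hypothetical check of that one equation with the register values from this thread, not the device's actual transfer function:

```python
# Check of the attempted formula V_OUT = 2 * V_REF * (code * FCR + OCR) / 2^20
# using the register values given above.
V_REF = 2.5
FCR = 0x6E0148
OCR = 0xF66DB7

for code in (0x33333, 0xCCCCC):
    v_out = 2 * V_REF * (code * FCR + OCR) / 2**20
    # The result is on the order of millions of volts, so the
    # scaling in this form of the formula cannot be right.
    print(f"code=0x{code:05X}: V_OUT = {v_out:.3e} V")
```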
I currently do not have a specific formula, so we have to reverse engineer the data. Prior to calibration we need to know the voltages for the given codes; then we can calibrate and work backwards from there.
I will try to do this on my end, but it will take some time. I could try to use your data, but something is off. The values you give should produce outputs spaced equally above and below the reference, since the two codes are complements of each other. For straight binary this should produce voltages of 1 V and 4 V, and this is what I see on my system.
As far as the formula goes, it will be very confusing because the adjusted data goes to the modulator, which requires the data to be in two's complement. This means that for straight binary a number of conversions are involved. It is not clear how the modulator actually uses this data, so any formula we come up with may just be an approximation. Unfortunately, the designers of this part are no longer with TI, so I can't ask them for specifics, and the design documents are not clear.
Please verify the output voltages you gave me in the previous post and the calibration constants as well. In the meantime I will try to figure out a formula from data I collect.
I’ve validated the output:
At OCR = 0xF66DB7 and FCR = 0x6E0148:
code = 0x33333 gives V_OUT = 1.005 V
code = 0xCCCCC gives V_OUT = 3.561 V
At OCR = 0x000000 and FCR = 0x800000:
code = 0x33333 gives V_OUT = 0.985 V
code = 0xCCCCC gives V_OUT = 3.957 V
This is very interesting information and totally unexpected. As you can see, the uncalibrated values are close to what I would expect, which means something very strange is happening. I will soon be receiving a DAC1220EVM that I can use to read the registers; my current system only allows me to write. In my current tests, I get the expected results both before and after calibration.
Can you tell me the command register settings you are using so I can fully duplicate your setup? Also can you tell me what you are using for the reference (a schematic would be helpful)?
Also, I am assuming that the OCR and FCR values came from self-calibration and were not arbitrary values entered into the registers. Have you done repeated calibrations to see if these values change, and if so, by how much? If the reference is not stable when the calibration starts, these register values can fluctuate considerably.
Thank you for your efforts. Maybe there is a misunderstanding: I set the FCR and OCR manually to achieve a certain behavior at the DAC output. I have not used the self-calibration.
Therefore I would like to calculate the FCR and OCR values myself. But first I need to know how to calculate V_OUT from these values; if I knew that, I could rearrange the formula.
The configuration is:
ADPT = 0
CALPIN = 1
CRST = 1
RES = 1
CLR = 1
DF = 1
DISF = 0
BD = 0
MSB = 0
MD = 00
Thanks for the configuration and the explanation. I will attempt to verify, evaluate and give you a more precise formula as soon as I can get my hardware setup.
Here is some further detail as to how the self-calibration works. The first part of the calibration is the offset relative to 2.5 V, which in bipolar mode is code 0. This is the code value that should be either added or subtracted so that the DAC output is the same value as the reference (assuming that a 2.5 V reference is being used). This adjusted code value is placed in the OCR. The second part of the calibration procedure establishes a code value for the FCR by using an internally generated voltage near ground (which is near bipolar full scale). The offset-adjusted output is compared to the theoretical value, with the gain value adjusted until the reference equals the output.
What makes this all so complicated, and I presume why it is not in the datasheet, is that the modulator input is bipolar regardless of the unipolar/bipolar selection in the configuration. In the unipolar case, the code value is converted to bipolar and the OCR value adjusted. The result is then multiplied by the FCR.
One issue here is that if you adjust the OCR to a value at one extreme or the other relative to 2.5 V, you may have linearity issues and may quite possibly range beyond the capable output of the DAC1220. Any code you enter will be based on what the DAC1220 thinks is 2.5 V and full scale. The output range will mirror around the adjusted 2.5 V OCR setting.
I finally was able to verify how the calibration register data is used within the DAC1220. As I mentioned earlier, the calibration is not straightforward and is relative to the reference voltage. Internally the device is bipolar, so that code 0 is actually VREF. The first thing the internal calibration procedure completes is the adjustment of the offset: a comparator circuit adjusts the applied code until the output equals the reference voltage, and the adjustment value is placed in the OCR register. The second part of the calibration establishes the endpoint near full scale. Again a comparator is used, but at a level internally established at 29.3 mV (6144 codes at 20 bits with a 2.5 V reference). The default FCR value is negative full scale (0x80000 for 20 bits; you can ignore the last 4 bits in the register). The comparator adjusts the code until the output matches the desired value, and the result is placed into the FCR register.
Everything is then relative to the reference voltage and the formula used is slightly different depending on whether you are above or below the reference. The internal code value used will be determined by the following:
Vout > Vref: Internal Code = (DIR + OCR) + (DIR/FS) * (FCR(20-bit) - 0x80000)
Vout < Vref: Internal Code = (DIR + OCR) + (DIR/FS - 1) * (FCR(20-bit) - 0x80000)
In the ideal case where there is no offset or gain error the equation for calculating the Vout will have the Internal Code equal to DIR. Artificially adjusting offset and gain can create a number of problems as offset is relative to the reference voltage, and gain corrections assume a mid-point at the reference.
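The two cases above can be sketched in Python. Note that the 20-bit truncation of OCR, treating OCR as two's complement, and the final straight-binary scaling to a voltage are my own assumptions here, so this is only an approximation of the description above, not the device's exact internal arithmetic:

```python
FS = 2**20  # 20-bit full scale

def internal_code(dir_code, fcr24, ocr24):
    """Approximate internal code per the two cases above.

    Assumptions: only the upper 20 bits of the 24-bit FCR/OCR
    registers are used, OCR is two's complement, and DIR above
    mid-scale (0x80000, straight binary) means Vout > Vref.
    """
    fcr20 = fcr24 >> 4
    ocr20 = ocr24 >> 4
    if ocr20 >= 0x80000:            # interpret OCR as two's complement
        ocr20 -= 0x100000
    gain_err = fcr20 - 0x80000      # deviation from the default gain code
    if dir_code > 0x80000:          # Vout > Vref
        return (dir_code + ocr20) + (dir_code / FS) * gain_err
    else:                           # Vout < Vref
        return (dir_code + ocr20) + (dir_code / FS - 1) * gain_err

def vout(dir_code, fcr24, ocr24, vref=2.5):
    # Straight binary scaling: code 0 -> 0 V, full scale -> 2 * Vref
    return 2 * vref * internal_code(dir_code, fcr24, ocr24) / FS
```

With OCR = 0xF66DB7 and FCR = 0x6E0148 this sketch gives roughly 1.09 V at code 0x33333 and 3.53 V at code 0xCCCCC, in the neighborhood of the measured 1.005 V and 3.561 V but not exact, consistent with the caveat that any such formula is only an approximation.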
Thank you very much, that helped me a lot. Maybe I have found an easier way to calculate the output voltage. (I can't say whether this formula is right, but in my examples it works within some tolerance):
V_OUT = 2 * V_REF * (DIR * FCR / 0x800000 + OCR) / 2^24
Or for direct calculation with a specific 20 bit code:
V_OUT = 2 * V_REF * (code * 2^4 * FCR / 0x800000 + OCR) / 2^24
I am very glad to see that your problem has been answered already. But I am doing related research about OCR these days, so I want to share some information about it with you:
Actually, there are two basic types of core OCR algorithm, which may produce a ranked list of candidate characters. Matrix matching involves comparing an image to a stored glyph on a pixel-by-pixel basis; it is also known as "pattern matching" or "pattern recognition". This relies on the input glyph being correctly isolated from the rest of the image, and on the stored glyph being in a similar font and at the same scale. This technique works best with typewritten text and does not work well when new fonts are encountered. This is the technique the early physical photocell-based OCR implemented, rather directly.
Feature extraction decomposes glyphs into "features" like lines, closed loops, line direction, and line intersections. These are compared with an abstract vector-like representation of a character, which might reduce to one or more glyph prototypes. General techniques of feature detection in computer vision are applicable to this type of OCR, which is commonly seen in "intelligent" handwriting recognition and indeed most modern OCR software. Nearest-neighbour classifiers such as the k-nearest neighbors algorithm are used to compare image features with stored glyph features and choose the nearest match.
Software such as Cuneiform and Tesseract use a two-pass approach to character recognition. The second pass is known as "adaptive recognition" and uses the letter shapes recognized with high confidence on the first pass to better recognize the remaining letters on the second pass. This is advantageous for unusual fonts or low-quality scans where the font is distorted (e.g. blurred or faded). I hope this information will be helpful to people who want to learn about it. Good luck.
Best regards, Arron
This thread is not referring to optical character recognition (OCR) but rather to the offset calibration register (OCR) of a precision digital-to-analog converter (DAC).