
  • Resolved

DAC1220 output calculation with DIR, FCR and OCR

Dear Texas Instruments support,

I would like to manually calibrate the DAC1220 in a circuit with an output amplifier. To do this, I need to know how the DAC output is calculated from the DIR, FCR, and OCR. The DAC1220 is used in 20-bit mode with straight binary code. The formula on page 14 of the datasheet does not show how the output is calculated from the FCR and OCR. Because the registers have different sizes, the values must be scaled before and after the calculation, as described on page 13.

But how is the 20-bit code placed into the 24-bit DIR? In particular, I would like to know how the don't-care bits are filled.

How is the 24-bit DIR multiplied by the 24-bit FCR, which would produce a 48-bit value?

And after that, how is the 24-bit OCR scaled so that it affects the calculated 48-bit value?

Or perhaps you perform the calculation in a different way.
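
To show where I currently stand, here is my working assumption as a small Python sketch. The left-justification of the 20-bit code in the DIR, the treatment of the FCR as a gain with 0x800000 meaning 1.0, the signed interpretation of the OCR, and the 2 * VREF full-scale range are all my guesses from pages 13 and 14, not confirmed values, so please correct whatever is wrong:

    # Hypothetical DAC1220 output calculation (my assumptions, to be confirmed):
    #  - the 20-bit straight binary code is left-justified in the 24-bit DIR,
    #    so the four don't-care LSBs are assumed to act as zeros
    #  - FCR works as a gain factor, with 0x800000 corresponding to gain 1.0
    #  - OCR is added as a signed (two's complement) 24-bit offset after the gain

    def dac1220_output_code(code_20bit, fcr=0x800000, ocr=0x000000):
        """Effective 24-bit output value for a 20-bit straight binary code."""
        assert 0 <= code_20bit < (1 << 20)

        # Left-justify the 20-bit code into the 24-bit DIR (4 LSBs = 0).
        dir_24 = code_20bit << 4

        # 24-bit x 24-bit multiply gives an up-to-48-bit intermediate value;
        # shifting right by 23 bits treats FCR = 0x800000 as gain 1.0.
        gained_24 = (dir_24 * fcr) >> 23

        # Interpret OCR as a signed 24-bit value and add it as an offset.
        offset = ocr - (1 << 24) if ocr & 0x800000 else ocr

        # Clamp to the 24-bit output range.
        return max(0, min((1 << 24) - 1, gained_24 + offset))

    def dac1220_output_voltage(code_20bit, vref=2.5, fcr=0x800000, ocr=0x000000):
        """Ideal output voltage, assuming a full-scale range of 2 * VREF."""
        return 2.0 * vref * dac1220_output_code(code_20bit, fcr, ocr) / (1 << 24)

    # Example: mid-scale code with unity gain and zero offset.
    print(hex(dac1220_output_code(0x80000)))   # 0x800000
    print(dac1220_output_voltage(0x80000))     # 2.5 V with VREF = 2.5 V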

I would appreciate your help.

  • In reply to arron lee:

    Arron,

    In this thread, OCR refers not to optical character recognition but to the offset calibration register of a precision digital-to-analog converter (DAC).

    Best regards,

    Bob B

  • In reply to arron lee:

    arron lee

    I am very glad to see that your problem has already been answered. I have been doing related research on OCR these days, so I want to share some information about it with you:

    Actually, there are two basic types of core OCR algorithm, which may produce a ranked list of candidate characters.
    Matrix matching involves comparing an image to a stored glyph on a pixel-by-pixel basis; it is also known as "pattern matching" or "pattern recognition". This relies on the input glyph being correctly isolated from the rest of the image, and on the stored glyph being in a similar font and at the same scale. This technique works best with typewritten text and does not work well when new fonts are encountered. This is the technique the early physical photocell-based OCR implemented, rather directly.
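
    To make that concrete, here is a tiny illustrative sketch of matrix matching in Python; the 5x5 glyphs, the two templates, and the agreement score are invented purely for the example:

        # Toy matrix (template) matching: compare a binarized glyph to stored
        # templates pixel by pixel and rank templates by pixel agreement.

        def match_score(glyph, template):
            """Fraction of pixels on which glyph and template agree."""
            agree = sum(g == t
                        for row_g, row_t in zip(glyph, template)
                        for g, t in zip(row_g, row_t))
            return agree / (len(glyph) * len(glyph[0]))

        TEMPLATES = {
            "I": [[0, 0, 1, 0, 0]] * 5,                          # vertical bar
            "-": [[0] * 5, [0] * 5, [1] * 5, [0] * 5, [0] * 5],  # horizontal bar
        }

        def classify(glyph):
            """Candidate characters ranked from best to worst match."""
            return sorted(TEMPLATES,
                          key=lambda ch: match_score(glyph, TEMPLATES[ch]),
                          reverse=True)

        # A slightly noisy vertical bar should still rank "I" first.
        noisy_i = [[0, 0, 1, 0, 0],
                   [0, 1, 1, 0, 0],
                   [0, 0, 1, 0, 0],
                   [0, 0, 1, 0, 0],
                   [0, 0, 1, 0, 0]]
        print(classify(noisy_i))   # ['I', '-']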
    Feature extraction decomposes glyphs into "features" like lines, closed loops, line direction, and line intersections. These are compared with an abstract vector-like representation of a character, which might reduce to one or more glyph prototypes. General techniques of feature detection in computer vision are applicable to this type of OCR, which is commonly seen in "intelligent" handwriting recognition and in most modern free online OCR software. Nearest-neighbour classifiers such as the k-nearest neighbors algorithm are used to compare image features with stored glyph features and choose the nearest match.
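
    A correspondingly small sketch of the nearest-neighbour idea, with the feature vectors invented purely for illustration:

        # Toy nearest-neighbour classification over glyph features. Each stored
        # character is described by an invented feature vector:
        # (straight lines, closed loops, line endpoints).

        import math

        STORED_FEATURES = {
            "O": (0, 1, 0),   # one closed loop, no endpoints
            "L": (2, 0, 2),   # two lines, two endpoints
            "T": (2, 0, 3),   # two lines, three endpoints
        }

        def nearest(features, k=1):
            """The k stored characters closest in Euclidean distance."""
            ranked = sorted(STORED_FEATURES,
                            key=lambda ch: math.dist(features, STORED_FEATURES[ch]))
            return ranked[:k]

        # A glyph measured as two lines, no loops, three endpoints matches "T".
        print(nearest((2, 0, 3), k=2))   # ['T', 'L']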
    Software such as Cuneiform and Tesseract uses a two-pass approach to character recognition. The second pass is known as "adaptive recognition": it uses the letter shapes recognized with high confidence on the first pass to better recognize the remaining letters on the second pass. This is advantageous for unusual fonts or low-quality scans where the font is distorted (e.g. blurred or faded). I hope this information will be helpful to people who want to learn about it. Good luck.

    Thanks for your information; it saved me a lot of time getting acquainted with OCR.
