How can I create a mesh LDC LUT to correct lens distortion for TDA2P/TDA3?
The description of the LDC mesh table is available in the TDA2P/TDA3 TRM.
The logical content of the LDC mesh LUT is 2 columns of 16-bit signed integers (S16Q3). The first column is the horizontal offset and the second the vertical offset of an output pixel. The offsets are relative to the output pixel location, starting from 0, with 1/8-pixel precision. The table is typically down-sampled by 8x8 (m=3) or 16x16 (m=4).
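To make the S16Q3 format concrete, here is a small sketch of the conversion: a pixel offset is scaled by 8 (3 fractional bits, i.e. 1/8-pixel units), rounded, and clamped to the signed 16-bit range. The function name `to_s16q3` is just an illustration, not part of any TI tool.

```python
import numpy as np

def to_s16q3(offset_pixels):
    """Convert a pixel offset (float) to S16Q3: signed 16-bit, 3 fractional bits."""
    q = int(round(offset_pixels * 8))      # scale to 1/8-pixel units
    q = max(-32768, min(32767, q))         # clamp to the signed 16-bit range
    return np.int16(q)

# An offset of -2.625 pixels is exactly representable: -2.625 * 8 = -21
print(to_s16q3(-2.625))   # -21
print(to_s16q3(1.5))      # 12
```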
To create your own table, you need to first define your geometric mapping. The mapping takes each output pixel at location (h_p, v_p) to its location (h_d, v_d) in the input image. The table entry is (h_d - h_p, v_d - v_p) in S16Q3 integer format.
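The table-building step can be sketched as below. This is a hedged illustration, not the attached MATLAB/Octave code: the grid is sampled every 2^m pixels, and whether the grid should include the far image edge is an assumption here, so please check the exact grid extent against the TRM. The `mapping` argument stands in for whatever geometric mapping you defined above.

```python
import numpy as np

def build_mesh_lut(W, H, m, mapping):
    """Build an LDC mesh LUT on a 2^m down-sampled grid.

    mapping(h_p, v_p) -> (h_d, v_d): output-pixel location to input-image location.
    Returns an array of shape (rows, cols, 2) of S16Q3 int16 offsets
    (index 0: horizontal, index 1: vertical).
    """
    step = 1 << m                      # 8 for m=3, 16 for m=4
    cols = W // step + 1               # assumption: grid covers 0..W inclusive
    rows = H // step + 1
    lut = np.zeros((rows, cols, 2), dtype=np.int16)
    for r in range(rows):
        for c in range(cols):
            h_p, v_p = c * step, r * step
            h_d, v_d = mapping(h_p, v_p)
            # offsets relative to the output pixel, in 1/8-pixel units
            lut[r, c, 0] = np.clip(round((h_d - h_p) * 8), -32768, 32767)
            lut[r, c, 1] = np.clip(round((v_d - v_p) * 8), -32768, 32767)
    return lut

# Sanity check: the identity mapping produces an all-zero table
identity = lambda h, v: (h, v)
lut = build_mesh_lut(64, 48, 3, identity)
```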
I have some example MATLAB/Octave code below to create a table from a fisheye lens spec file (a text file with 2 columns: the first is the angle of view in degrees and the second the image height in mm). This example takes care of the table down-sampling and the S16Q3 integer format. "gen_lut( )" is the function you need to call, with the lens spec file name, sensor pixel pitch in mm, focal length in mm, input image width (W) and height (H), input image center (hc and vc), a scaling factor (s), and the table down-sampling factor (m). The output is a text file "mesh.txt".
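The attached MATLAB/Octave script is not reproduced in this thread, but the core backward mapping it describes can be sketched as follows. This is an assumption-laden Python illustration, not the author's code: it assumes the output image follows a rectilinear (pinhole) model scaled by s, converts each output radius to a field angle, and linearly interpolates the lens spec (angle in degrees vs. real image height in mm) to find where that ray actually lands on the sensor. The function and parameter names mirror the description above but are hypothetical.

```python
import numpy as np

def gen_offsets(angles_deg, heights_mm, pitch_mm, f_mm, W, H, hc, vc, s=1.0):
    """Per-pixel (dx, dy) = (h_d - h_p, v_d - v_p) in float pixels (sketch only).

    angles_deg / heights_mm: the two columns of the fisheye lens spec.
    """
    x = np.arange(W) - hc
    y = np.arange(H) - vc
    xx, yy = np.meshgrid(x, y)
    r_p = np.hypot(xx, yy)                                       # output radius, pixels
    theta = np.degrees(np.arctan(r_p * pitch_mm / (s * f_mm)))   # ideal field angle
    r_d = np.interp(theta, angles_deg, heights_mm) / pitch_mm    # distorted radius, pixels
    with np.errstate(invalid="ignore", divide="ignore"):
        scale = np.where(r_p > 0, r_d / r_p, 1.0)                # radial rescaling
    dx = xx * scale - xx
    dy = yy * scale - yy
    return dx, dy
```

As a sanity check, feeding in a spec of an ideal rectilinear lens (image height = f * tan(theta)) with s = 1 should yield offsets of (approximately) zero everywhere. The float offsets would then be down-sampled and quantized to S16Q3 as described above.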
Once you have your table as above, you can try it out in the DCC ISP tuning tool (which can also give you good parameters for the LDC output block size and block padding). If the table works as expected in the tuning tool, you can convert it into Vision SDK binary or header file format using "apps/tools/LDC_mesh_table_convert/convert.sh" in the Vision SDK.