Hello,
I'm implementing a feature detection and tracking algorithm on a DM648 board.
I want to use the VLIB_trackFeaturesLucasKanade_7x7 function to track the features I detected,
but I found that the function doesn't produce correct results.
Below is pseudo code showing how I implemented my test case.
- make a YCbCr image (im1) with a black background and a white 30x30 box (I use the Y plane only; im1 contains values of 0 or 255)
- let the feature coordinate be a corner of the box in im1 (setting x1[0] = 100*2^4, y1[0] = 100*2^4 and nfeatures = 1)
while (1)
{
- make a new image (im2) whose box is moved by 1 pixel along the x-axis (rightward)
- compute the gradients of im1 and track the feature with VLIB:
VLIB_xyGradientsAndMagnitude(im1, gradX+nx+1, gradY+nx+1, gradMag+nx+1, nx, ny); // nx = 720, ny = 240
VLIB_trackFeaturesLucasKanade_7x7(im1, im2, gradX, gradY, nx, ny, nfeatures, x1, y1, x2, y2, 10, buffer);
x1[0] = x2[0]; // feed the result back as the next starting point
y1[0] = y2[0];
- copy im2 to im1
}
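
In case it helps, here is roughly what my test harness looks like in C. The 16-bit buffer types and the header name are what I am assuming from the VLIB 2.2 example code, and drawBox() is a simplified stand-in for my image-creation routine:

#include <string.h>
#include "VLIB_prototypes.h"             /* VLIB header; the exact name may differ in my install */

#define NX 720
#define NY 240

static unsigned char im1[NX*NY], im2[NX*NY];              /* Y plane only, values 0 or 255 */
static short gradX[NX*NY], gradY[NX*NY], gradMag[NX*NY];  /* 16-bit gradient buffers */
static unsigned short x1q4[1], y1q4[1], x2q4[1], y2q4[1]; /* Q4 (x16) feature coordinates */
static unsigned char buffer[384];                         /* scratch buffer, as in my question */

/* black background, white 30x30 box with top-left corner at (bx, by) */
static void drawBox(unsigned char *img, int bx, int by)
{
    int x, y;
    memset(img, 0, NX * NY);                              /* black background */
    for (y = by; y < by + 30 && y < NY; y++)
        for (x = bx; x < bx + 30 && x < NX; x++)
            img[y * NX + x] = 255;                        /* white box */
}

void test(void)
{
    int boxX = 100, boxY = 100;
    int nfeatures = 1;

    drawBox(im1, boxX, boxY);
    x1q4[0] = boxX << 4;                 /* Q4, origin at upper-left, x rightward, y downward */
    y1q4[0] = boxY << 4;

    while (1)
    {
        boxX += 1;                       /* move the box 1 pixel to the right */
        drawBox(im2, boxX, boxY);

        /* gradients of the previous frame; +NX+1 skips the 1-pixel border */
        VLIB_xyGradientsAndMagnitude(im1, gradX + NX + 1, gradY + NX + 1,
                                     gradMag + NX + 1, NX, NY);

        VLIB_trackFeaturesLucasKanade_7x7(im1, im2, gradX, gradY, NX, NY,
                                          nfeatures, x1q4, y1q4, x2q4, y2q4,
                                          10, buffer);

        x1q4[0] = x2q4[0];               /* feed the result back as the next starting point */
        y1q4[0] = y2q4[0];

        memcpy(im1, im2, sizeof(im1));   /* previous frame <- current frame */
    }
}
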
But the Lucas-Kanade function produces wrong outputs, which are not x2 ~= 1600, y2 ~= 1600 as expected.
The first x2, y2 are 1632, 1600 (satisfactory), but the following outputs are 504, 1689 and never change (they point to empty black space).
I referred to the example code provided with VLIB 2.2 and I cannot find any difference between the two.
What aspects should I consider at this point?
Below are some candidates:
1. The origin of the image. Our vision system has its origin at the upper-left corner, with the x-axis pointing rightward and the y-axis pointing downward.
Is VLIB's convention different from this?
2. Scratch buffer size. I declared it as "unsigned char buffer[384];". Is that wrong?
3. Image smoothing. I saw in the VLIB manual that image smoothing is needed before calculating gradients for the Canny method.
Is it necessary here as well? The example code doesn't use it.
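
If smoothing is in fact needed before the gradient call, would a simple 3x3 average like the sketch below be enough, or is there a dedicated VLIB routine I should use instead? (This is plain C just to show what I mean; smooth3x3() is not a VLIB function.)

/* 3x3 box average on the Y plane; memcpy needs <string.h> */
static void smooth3x3(const unsigned char *src, unsigned char *dst, int nx, int ny)
{
    int x, y;
    memcpy(dst, src, nx * ny);                        /* keep the 1-pixel border as-is */
    for (y = 1; y < ny - 1; y++) {
        for (x = 1; x < nx - 1; x++) {
            int sum = src[(y-1)*nx + x-1] + src[(y-1)*nx + x] + src[(y-1)*nx + x+1]
                    + src[ y   *nx + x-1] + src[ y   *nx + x] + src[ y   *nx + x+1]
                    + src[(y+1)*nx + x-1] + src[(y+1)*nx + x] + src[(y+1)*nx + x+1];
            dst[y*nx + x] = (unsigned char)(sum / 9);  /* average of the 3x3 neighborhood */
        }
    }
}
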
Also, I tried the same thing on smoothed real video (from a camera), and the tracked features drift to awkward locations.
Thanks