
PROCESSOR-SDK-TDAX: Not using Camera Extrinsic Parameters of SFM

Part Number: PROCESSOR-SDK-TDAX

Hi

I am using the SFM algorithm for FCW (forward collision warning).

We don't have an external sensor, e.g. an IMU, mounted on the car, so we do not want to use the camera extrinsic parameters.

To find out how to disable the camera extrinsic parameters, I reviewed the camExtPrm field in StructureFromMotion_DSP_UserGuide.pdf:

Field: camExtPrm, Data Type: Float[]

Description:

   Camera Extrinsic Parameters. The format of the camera extrinsic parameters is [r00 r01 r02 t0 r10 r11 r12 t1 r20 r21 r22 t2], where the rij are the rotation parameters and the ti are the translation parameters. If this pointer is provided as NULL, then these parameters are computed internally.

 

The camExtPrm variable is declared as float camExtPrm[SFM_TI_CAMERA_EXTRENSIC_PARAM_SIZE];

How do I set the camExtPrm variable so that these parameters are computed internally?

 

Regards

JP Park

 

  • Hi Park,

    Computing the camera extrinsic parameters internally is not supported. They have to be available from an IMU, or if you don't have an IMU, you can write your own software module (e.g. using solvePnP in OpenCV) to estimate the camera extrinsic parameters.
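
    As a reference, here is a minimal sketch of such a module built around OpenCV's solvePnP; the 3D-2D correspondences are placeholders that you must supply yourself:

        // Sketch: estimate the camera pose from known 3D-2D correspondences
        // with cv::solvePnP. objectPoints and imagePoints are placeholders
        // for correspondences supplied by the caller.
        #include <opencv2/core.hpp>
        #include <opencv2/calib3d.hpp>
        #include <vector>

        cv::Mat estimatePose(const std::vector<cv::Point3f> &objectPoints,
                             const std::vector<cv::Point2f> &imagePoints,
                             const cv::Mat &cameraMatrix,
                             const cv::Mat &distCoeffs)
        {
            cv::Mat rvec, tvec;
            cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs,
                         rvec, tvec);
            cv::Mat R;
            cv::Rodrigues(rvec, R);    // rotation vector -> 3x3 rotation matrix
            cv::Mat Rt;
            cv::hconcat(R, tvec, Rt);  // 3x4 [R|t], the extrinsic matrix
            return Rt;
        }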

    Regards

    Deepak Poddar

  • Dear Poddar

    I have some questions about your reply.

    In the SFM module (sfm_link_algplugin.c), the extrinsic parameters are looked up per frameId, which means extrinsic parameters exist per frame (as in out_camera_data_proj.yml).
    If we use the solvePnP function (as you suggested), only fixed extrinsic parameters can be used, for the reasons below.
    I think that in the calibration phase we can obtain 3D object points corresponding to 2D image points using a chessboard, and we also know the camera intrinsic parameters and distortion coefficients, so we can estimate the camera pose (rotation and translation matrices) with the solvePnP function in OpenCV. However, we cannot estimate the camera pose in every frame with solvePnP, because we do not know the 3D points corresponding to the 2D points (the result of SOF).
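
    For reference, a sketch of that calibration-phase flow with OpenCV (the board dimensions and square size are example values only); note that the resulting pose is relative to the chessboard, which is why it is fixed:

        // Sketch of the calibration-phase pose estimation: the chessboard
        // supplies the known 3D points, so solvePnP is applicable here.
        // Board size and square size are example values.
        #include <opencv2/calib3d.hpp>
        #include <vector>

        bool chessboardPose(const cv::Mat &gray, const cv::Mat &K,
                            const cv::Mat &distCoeffs,
                            cv::Mat &rvec, cv::Mat &tvec)
        {
            const cv::Size boardSize(9, 6);   // inner corners (example)
            const float squareSize = 0.025f;  // 25 mm squares (example)

            std::vector<cv::Point2f> corners; // detected 2D image points
            if (!cv::findChessboardCorners(gray, boardSize, corners))
                return false;

            // Known 3D object points: the board defines its own Z = 0 plane
            std::vector<cv::Point3f> objectPoints;
            for (int r = 0; r < boardSize.height; r++)
                for (int c = 0; c < boardSize.width; c++)
                    objectPoints.push_back(
                        cv::Point3f(c * squareSize, r * squareSize, 0.0f));

            return cv::solvePnP(objectPoints, corners, K, distCoeffs,
                                rvec, tvec);
        }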

    1. Is there any problem with using the fixed parameters extracted from the calibration phase in SFM?
    2. If we have to estimate the extrinsic parameters of each frame, how can we estimate them?

    Regards

    JS Shin
  • Hello Poddar and team,

    Could you please check and answer JS Shin's questions?

    If there are any issues with discussing NDA- or SRA-related information on the open community, please send me an email.

    I am the TI Korea EPD FAE in charge of JS Shin's company (they have an NDA for this).

    Thank you.

    Best Regards,

    Ernest Cho

  • Hi,

    We support pose (or camera extrinsic) parameters only coming from an external sensor, which is an IMU in our case. But the SFM software does not restrict the user from computing the pose externally and providing it frame by frame if an IMU is not present.
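
    If the pose is computed externally, it is passed in through the camExtPrm buffer in the 12-float [R|t] layout quoted from the user guide above. A minimal packing sketch (assuming SFM_TI_CAMERA_EXTRENSIC_PARAM_SIZE is 12, to match that layout; the OpenCV types are only for illustration):

        // Sketch: pack a 3x3 rotation R and translation t into camExtPrm in
        // the row-major [r00 r01 r02 t0 r10 r11 r12 t1 r20 r21 r22 t2] order
        // documented in StructureFromMotion_DSP_UserGuide.pdf. Assumes
        // SFM_TI_CAMERA_EXTRENSIC_PARAM_SIZE == 12 (an assumption, not
        // confirmed from the SDK headers).
        #include <opencv2/core.hpp>

        void packExtrinsics(const cv::Matx33f &R, const cv::Vec3f &t,
                            float camExtPrm[12])
        {
            for (int i = 0; i < 3; i++) {
                camExtPrm[4 * i + 0] = R(i, 0);
                camExtPrm[4 * i + 1] = R(i, 1);
                camExtPrm[4 * i + 2] = R(i, 2);
                camExtPrm[4 * i + 3] = t(i);
            }
        }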

    A similar discussion happened in the threads below.

    You can go through the posts there; they will help you.

    There are ways to calculate the pose every frame: one of them is solvePnP, and another is from the fundamental matrix.

    Regards

    Deepak Poddar

  • Also, can you please provide more information about the use case you are trying to implement with SFM?
  • Hi
    Thank you for your reply.

    I have already read your posts on e2e.
    And I think the solvePnP function is not suitable, because we cannot know the 3D points corresponding to the 2D points.
    (Could you advise me: how can I estimate the rotation and translation matrices using solvePnP from only the 2D points of the previous and current frames (matched points)?)

    So, I implemented the calculation of the fundamental matrix and compared our result with the results of OpenCV and MATLAB using matched points [referenced from some SFM material].

    OpenCV (F = findFundamentalMat(points2, points1, CV_FM_8POINT, 3.0, 0.99);):

    F = [  0.0000002   -0.00000374   0.002412
           0.00000557  -0.0000001    0.001777
          -0.003632    -0.003372     1.0      ]

    MATLAB (F = estimateFundamentalMatrix(x2, x1, 'Method', 'RANSAC', 'NumTrials', 2000, 'DistanceThreshold', 1e-2)):

    F = [  2.459470104390675e-07   3.428153602086448e-06   0.002358838975283
           5.240980681886452e-06   9.201921607336470e-08   0.002071184215080
          -0.003611271974516      -0.003610942583508       1.0              ]

    Our result:

    F = [  0.00000020  -0.00000400   0.00255741
           0.00000575  -0.00000009   0.00148227
          -0.00373770  -0.00307414   0.99998392 ]

    We then estimated the rotation and translation matrices as follows:
    1. E = K.t * F * K (.t denotes transpose)
    2. [U, S, V] = SVD(E) [the result of the SVD function was also compared with MATLAB's]
    3. This yields 4 possible solutions:
    - Translation: T = [U(0,2), U(1,2), U(2,2)] => T, -T
    - Rotation: R1 = U*W*V.t and R2 = U*W.t*V.t, with W = [0,-1,0; 1,0,0; 0,0,1]
    4. Select 1 solution (R|T). <= not yet applied (see the sketch below)
    5. The estimated R|T matrix will be used as the camera extrinsic parameters in the SFM module. [will be calculated every frame]
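
    For step 4, OpenCV 3.x provides cv::recoverPose, which performs exactly this disambiguation: it triangulates the points and keeps the one candidate that places them in front of both cameras (the cheirality check). A minimal sketch, assuming points1/points2 are the matched points of the previous and current frames and K is a CV_64F camera matrix:

        // Sketch: recover the single physically valid (R, t) from matched
        // points (requires OpenCV 3.x for recoverPose).
        #include <opencv2/calib3d.hpp>
        #include <vector>

        void poseFromMatches(const std::vector<cv::Point2f> &points1,
                             const std::vector<cv::Point2f> &points2,
                             const cv::Mat &K, cv::Mat &R, cv::Mat &t)
        {
            cv::Mat F = cv::findFundamentalMat(points1, points2,
                                               cv::FM_8POINT, 3.0, 0.99);
            cv::Mat E = K.t() * F * K;  // essential from fundamental
            // recoverPose tests the 4 (R, t) candidates and returns the one
            // passing the cheirality check. Note: t is unit length, so the
            // absolute scale is not recoverable from F/E alone.
            cv::recoverPose(E, points1, points2, K, R, t);
        }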

    Currently, we have just applied each of the 4 possible R|T matrices to the SFM.
    However, it does not seem to work normally (not like the result of the SFM use case),
    even when using the provided intrinsic parameters and clip AVI file.

    Is this approach correct?

    Regards

    JS Shin
  • Hi,

    Your F matrix approach is on the correct path. We have also observed that the pose calculated from the F matrix is not robust, hence it is advisable to use solvePnP wherever it can be used.

    Your flow could be like this:

    1. For the first captured frame (#1), you cannot do anything.

    2. For the second captured frame, calculate the F matrix between the second (current, #2) frame and the previous frame, and compute the pose. Find the best pose among all the possible solutions by projecting the calculated 3D points back onto the image plane.

    3. From the third frame (#3) onward, solvePnP can be used. You have already calculated 3D points in the previous frame, which are not going to change, and for those 3D points the corresponding 2D points in frame #2 were known from step 2. Find the optical flow of the matching points from frame #2 to frame #3, so that you have corresponding 2D points in frame #3 for known 3D points. Now you can run solvePnP, find new 3D points, and revise the old known 3D points.

    4. In this whole process the scale will be lost, hence there should be some scale-correction mechanism to correct the 3D points and pose in every frame. The scale factor can be evaluated every frame from any known real-world information, e.g. the height of the camera above the ground; for this approach you will have to estimate the ground-plane equation in the 3D world. (A structural sketch of this flow follows.)
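
    A compact structural sketch of this flow with OpenCV 3.x (feature detection, track bookkeeping, outlier handling, and the ground-plane/scale step are simplified placeholders; only the shape of steps 2-4 is the point here):

        // Sketch of the per-frame flow above (OpenCV 3.x). K is assumed to
        // be a 3x3 CV_64F camera matrix.
        #include <opencv2/calib3d.hpp>
        #include <opencv2/video/tracking.hpp>
        #include <vector>

        struct SfmState {
            cv::Mat R, t;                     // current camera pose
            cv::Mat points4D;                 // triangulated points (4xN homogeneous)
            std::vector<cv::Point2f> tracked; // their 2D locations in the last frame
        };

        // Step 2: pose of frame #2 from the essential matrix, then triangulate.
        void bootstrapPose(const std::vector<cv::Point2f> &p1,
                           const std::vector<cv::Point2f> &p2,
                           const cv::Mat &K, SfmState &s)
        {
            cv::Mat E = cv::findEssentialMat(p1, p2, K, cv::RANSAC);
            cv::recoverPose(E, p1, p2, K, s.R, s.t); // cheirality picks 1 of 4

            cv::Mat P1 = K * cv::Mat::eye(3, 4, CV_64F); // frame #1 at origin
            cv::Mat Rt;
            cv::hconcat(s.R, s.t, Rt);
            cv::Mat P2 = K * Rt;
            cv::triangulatePoints(P1, P2, p1, p2, s.points4D);
            s.points4D.convertTo(s.points4D, CV_64F);
            s.tracked = p2;
        }

        // Step 3: frame #3 onward, track the 2D points with optical flow and
        // run solvePnP against the already-known 3D points.
        void trackAndSolve(const cv::Mat &prevGray, const cv::Mat &currGray,
                           const cv::Mat &K, SfmState &s)
        {
            std::vector<cv::Point2f> next;
            std::vector<uchar> status;
            std::vector<float> err;
            cv::calcOpticalFlowPyrLK(prevGray, currGray, s.tracked, next,
                                     status, err);

            // De-homogenize the 3D points for solvePnP (dropping the tracks
            // flagged as lost in 'status' is omitted here for brevity).
            std::vector<cv::Point3f> obj;
            for (int i = 0; i < s.points4D.cols; i++) {
                cv::Mat c = s.points4D.col(i);
                double w = c.at<double>(3, 0);
                obj.push_back(cv::Point3f((float)(c.at<double>(0, 0) / w),
                                          (float)(c.at<double>(1, 0) / w),
                                          (float)(c.at<double>(2, 0) / w)));
            }
            cv::Mat rvec, tvec;
            cv::solvePnPRansac(obj, next, K, cv::Mat(), rvec, tvec);
            cv::Rodrigues(rvec, s.R);
            s.t = tvec;
            s.tracked = next;
            // Step 4 would go here: fit the ground plane, compare the implied
            // camera height with the known real height, and rescale the 3D
            // points and translation accordingly.
        }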

    None of this logic is delivered as part of the TDA2x software; we are planning to do these things in the next version of the software release, for the next-gen h/w. Since this logic is independent of the h/w, it can be implemented on TDA2x by customers as well.

    Regards

    Deepak Poddar

  • Thank you for your reply and advice.

    Your answers have been very helpful.

    I think the easiest way to use SFM (just setting the configuration) in the current Vision SDK is to install an external sensor, such as an IMU, in our customer's test vehicle.
    If not, I think we may have to implement the SFM module ourselves.

    So could you give me detailed information about the IMU sensor (the same one used in the Vision SDK), and could you explain how we can extract or convert the R|T matrix from the output values of the IMU sensor?

    Regards

    JS Shin
  • Hi,

    We have used solvePnP from OpenCV, and we expect that customers will have their own sensor or software solution to find the pose (R & T) information.

    Regards
    Deepak Poddar