
TDA4VMXEVM: Structure from Motion (SfM) on Fish-eye Camera Application using a video input file

Part Number: TDA4VMXEVM

Dear Sir/Madam,

I followed the guide at software-dl.ti.com/.../group_apps_sfm_fish_eye.html
and successfully ran the demo application in x86 emulation mode.

I use PSDKRA 6.2.
This demo uses the following test data:
test_data_ptk\sequence0016\calibration_data
test_data_ptk\sequence0016\camera002
test_data_ptk\sequence0016\ins001
test_data_ptk\sequence0016\vcamera002_dof_ldc

I have a new MP4 test video file (call it video_A).
I want to use video_A as the new input data.
How can I replace the input data with video_A?

Best regards

-Jason

I appreciate your help.

  • Dear Jason,

    For the SFM algorithm to run on your camera data, you would need the following:

    1) Camera data in PNG format, which can be extracted from your video

    2) INS data collected at the same time the camera data is collected. This is a must.

    3) Calibration data for the camera. This is specific to your camera

    4) An App that takes {camera images, INS data, calibration, LUT tables} as input and generates the DOF_LDC output. This App is not given to customers.

    Given the above, it looks like you cannot proceed with running the SFM App on the PC with your data. You would need to run it on the EVM instead, but you would still need (1) to (3) above.
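    For item (1), frames are typically dumped from an MP4 with a tool such as ffmpeg. The sketch below only builds the ffmpeg command line; the output directory name and `frame_%06d.png` naming pattern are assumptions for illustration, not the demo's required naming.

```python
from pathlib import Path

def ffmpeg_png_extract_cmd(video_path, out_dir, fps=None):
    """Build an ffmpeg command that dumps every frame of `video_path`
    as numbered PNG files into `out_dir` (e.g. frame_000001.png)."""
    out_pattern = str(Path(out_dir) / "frame_%06d.png")
    cmd = ["ffmpeg", "-i", str(video_path)]
    if fps is not None:
        # Optionally resample to a fixed frame rate before dumping.
        cmd += ["-vf", f"fps={fps}"]
    cmd += [out_pattern]
    return cmd

cmd = ffmpeg_png_extract_cmd("video_A.mp4", "camera002", fps=30)
print(" ".join(cmd))
# To actually run it (requires ffmpeg on PATH):
#   import subprocess; subprocess.run(cmd, check=True)
```

    Note that extracting frames only addresses (1); items (2) and (3) still require hardware and calibration data that the video alone cannot provide.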

    Regards,

    Vijay

  • Dear Vijay,

     

    Thank you for your reply.

    My EVM will arrive soon.

    I think I can try the SfM App on the EVM.

    I would like to know more details about your reply.

     

    1) Camera data in PNG format, which can be extracted from your video

    >> What camera is used by the demo? Could you give me the camera's model name?

     

    2) INS data collected at the same time the camera data is collected. This is a must.

    >> Does the INS data come from the camera, or do I need another device to get it? If another device is needed, could you give me the model name of the device used in the demo?

     

    3) Calibration data for the camera. This is specific to your camera

    >> Does the calibration data come from the camera, or do I need another device to get it? If another device is needed, could you give me the model name of the device used in the demo?

     

    4) An App that takes {camera images, INS data, calibration, LUT tables} as input and generates the DOF_LDC output. This App is not given to customers.

    >> What are LUT tables? How can I get them? How can I get the App that generates the DOF_LDC output? Do I need to buy the App?

     

    I appreciate your help.

    Best regards

     

    -Jason

  • Hello Jason,

    For your questions:

    1) We used the OV10636, which is a fish-eye lens camera.

    2) No, the INS data does not come from the camera. You need an INS device. There are several INS devices, e.g. Novatel, VectorNav, etc.

    3) You need two calibrations here:

    •    Calibration between camera and INS
    •    Calibration between camera and ground: This is done by the method described in the user guide in /vision_apps/tools/3d_calibration_tool.

    4) On the EVM, you don't need the DOF_LDC output. You can run with camera images, INS data, calibration data, and LUT tables. The LUT table here is the camera distortion table. Because we use a fisheye camera, we have to correct the distortion before applying SfM. This table is created in a specific format based on the camera parameters. The user guide in /vision_apps/tools/3d_calibration_tool also describes how to create these tables.
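    To make the idea of a distortion LUT concrete: conceptually, a remap LUT stores, for each pixel of the corrected output image, the source coordinate in the fisheye image. The sketch below uses a simple equidistant fisheye model (r = f·θ) with made-up focal lengths; it is NOT the TI LDC LUT format, which must be generated by the 3d_calibration_tool.

```python
import math

def build_undistort_lut(width, height, f_rect, f_fish):
    """For each pixel of the rectilinear output image, compute the
    (x, y) source coordinate in the fisheye image, assuming an
    equidistant fisheye model (r_fish = f_fish * theta) and a shared
    optical center at the image center.  Returns a row-major list of
    (src_x, src_y) tuples -- conceptually what a remap LUT holds."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    lut = []
    for v in range(height):
        for u in range(width):
            dx, dy = u - cx, v - cy
            r_out = math.hypot(dx, dy)
            if r_out == 0.0:
                lut.append((cx, cy))          # optical axis maps to itself
                continue
            theta = math.atan(r_out / f_rect)  # angle of the viewing ray
            r_fish = f_fish * theta            # equidistant projection
            scale = r_fish / r_out
            lut.append((cx + dx * scale, cy + dy * scale))
    return lut

lut = build_undistort_lut(8, 8, f_rect=4.0, f_fish=4.0)
```

    Because θ < tan θ, output pixels far from the center map to fisheye pixels closer to the center, which is exactly the "stretching out" that undistortion performs.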

    This demo runs with calibrated multiple-sensor data. Replacing the image data alone won't work.

    Best regards,

    Do-Kyoung

  • Dear Do-Kyoung,

    Thank you for your reply.
    I would like to know more details about your reply.

    (1) Inertial Navigation System (INS)
    The SfM application uses the test input data under "test_data_ptk\sequence0016\ins001\".
    How can I get this data (under the sequence0016\ins001 directory) from an INS?
    Does it need a special format conversion?
    Do you have a step-by-step example?

    (2) Calibration
    <<Calibration between camera and INS>>
    Is there a guide on calibration between the camera and the INS?
    Is it possible to run on real-time video?

    <<Calibration between camera and ground>>
    I read /vision_apps/tools/3d_calibration_tool/PSDKRA_UserGuide_3D_SurroundView_Manual_CalibTool.pdf
    It is a tool that runs offline on a PC.
    Real-time calibration is necessary when the vehicle is driving on the road.
    Do you have a tool that can do real-time calibration?
    Can the SfM application run in real time?

    (3) detail user guide
    For example, the "Vision SDK TDA3xx 3D Surround View User Guide" has detailed setup steps. (refer: usermanual.wiki/.../VisionSDKUserGuide3DSurroundViewTDA3xxDemo.1482403017)
    Is there a detailed step-by-step guide for running the SfM application on the EVM?
    The guide should cover the camera, the INS, and calibration.
    With such a detailed guide, it would be easier for me to reproduce the SfM application on the EVM.

    I appreciate your help very much.

    -Jason

  • Hello Jason,

    1. The bin files in /data are raw data from the INS. The data format will depend on the INS. Our data acquisition system has a tool that saves raw data, timestamps, etc. We do not release the details of the acquisition system.
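    Since the real acquisition format is not released, the following is only a generic illustration of what "raw INS records in a bin file" might look like: a hypothetical fixed-size binary record (timestamp, position, attitude) packed and parsed with Python's `struct`. Every field name and layout here is an assumption, not TI's or any INS vendor's actual format.

```python
import struct

# Hypothetical fixed-size INS record -- NOT a real vendor format.
# <  : little-endian
# Q  : uint64 timestamp (microseconds)
# 3d : latitude, longitude, altitude
# 3d : roll, pitch, yaw (radians)
INS_RECORD = struct.Struct("<Q3d3d")

def pack_record(ts_us, lat, lon, alt, roll, pitch, yaw):
    return INS_RECORD.pack(ts_us, lat, lon, alt, roll, pitch, yaw)

def iter_records(blob):
    """Yield parsed records from a concatenation of fixed-size records."""
    for off in range(0, len(blob), INS_RECORD.size):
        yield INS_RECORD.unpack_from(blob, off)

blob = (pack_record(1000, 37.77, -122.41, 12.0, 0.0, 0.01, 1.57)
        + pack_record(1100, 37.78, -122.42, 12.1, 0.0, 0.01, 1.58))
records = list(iter_records(blob))
```

    The point is that each INS defines its own binary log layout, so a device-specific parser is needed before the data can resemble the ins001 test files.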

    2. We calibrate multiple sensors (camera, INS, lidar, radar) all together. There is no direct calibration between the INS and the camera. For example, the camera and the lidar are calibrated to each other, and the lidar and the INS are calibrated to each other. The calibration between the camera and the INS can then be derived indirectly. We do not release details about this yet.
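    The indirect chaining described above amounts to composing rigid transforms: if the camera-to-lidar and lidar-to-INS extrinsics are known as 4x4 homogeneous matrices, the camera-to-INS extrinsics are simply their product. A minimal sketch with made-up, translation-only extrinsics:

```python
def matmul4(a, b):
    """Multiply two 4x4 homogeneous transform matrices (row-major lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Rigid transform with identity rotation and the given translation."""
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

# Made-up extrinsics purely for illustration:
T_lidar_from_cam = translation(0.1, 0.0, -0.3)   # camera pose in lidar frame
T_ins_from_lidar = translation(0.0, 0.5, 1.2)    # lidar pose in INS frame

# Chain the two pairwise calibrations to get the camera pose in the INS frame.
T_ins_from_cam = matmul4(T_ins_from_lidar, T_lidar_from_cam)
```

    In practice each pairwise transform also carries a rotation, and the accuracy of the chained result depends on the accuracy of every link in the chain.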

    We have online calibration, which means that the calibration is done from the EVM directly, but the car should be parked with a chart around it (similar to the one in https://usermanual.wiki/Document/VisionSDKUserGuide3DSurroundViewTDA3xxDemo.1482403017).

    3. No, the user guide is not ready yet. It might be ready when the demo supports live sensors in the future.

    Running the SfM demo as-is with your live camera or image sequences is almost impossible, since it requires a calibrated camera and INS mounted on a (real) car. The purpose of this demo is to show that SfM can run in real time on the EVM; the demo is not yet designed for arbitrary camera data or live sensors. If you want to run SfM with your own camera data, you have to write your own SfM application.

    Best regards,

    Do-Kyoung