
RTOS/TDA3: tuning vision based object detection algorithm

Part Number: TDA3

Tool/software: TI-RTOS

Hi

We are developing a pedestrian detection application based on vision_sdk\apps\src\rtos\usecases\vip_single_cam_object_detection2.
We have several issues.

1. Angle coverage: the use case cannot detect objects located far from the image center, e.g. more than 45 degrees off-center.

2. High camera position: when the camera is mounted at 2.5 m and tilted downward by about 15 degrees, the use case cannot detect objects beyond 6.5 m.

Normally, objects far from the camera appear in the upper part of the image (see the sketch below).

3. Some non-moving objects, such as a cabinet, are falsely detected.
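
As a back-of-the-envelope illustration of issue 2, here is a small hypothetical pinhole-camera calculation (not SDK code; the focal length and image-center row below are assumptions):

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    const double camHeightM  = 2.5;   /* camera mounting height (issue 2)    */
    const double tiltDownDeg = 15.0;  /* downward tilt of the optical axis   */
    const double focalPx     = 700.0; /* ASSUMED focal length in pixels      */
    const double centerRowPx = 400.0; /* ASSUMED image center row (800 rows) */

    for (double distM = 2.0; distM <= 20.0; distM += 2.0) {
        /* Depression angle from the horizontal to the object's ground point. */
        double depressDeg = atan2(camHeightM, distM) * 180.0 / M_PI;
        /* Angle relative to the optical axis (positive = below the axis). */
        double offAxisDeg = depressDeg - tiltDownDeg;
        /* Image row of the ground point; rows grow downward from the top. */
        double rowPx = centerRowPx + focalPx * tan(offAxisDeg * M_PI / 180.0);
        printf("distance %5.1f m -> ground point at row %6.1f px\n", distM, rowPx);
    }
    return 0;
}

With these assumed numbers, the ground point of a distant object rises toward the horizon row as distance grows, so distant pedestrians end up in the upper part of the frame and can fall outside an ROI band that only covers the lower rows.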

I think issues 1 and 2 can be solved by changing some configuration such as the ROI parameters, but I have not managed to do so yet.

Issue 3 may be solved by retraining with higher-quality images.

Please help me.

Best regards,

Andrew

  • Hi Andrew,

    Yes, you can improve the coverage area with the ROI settings. But please keep in mind that these are machine-learning-based algorithms trained for specific camera properties and mounting, so you need to make sure that you retrain the classifier with training input captured from your system.

    Also, these algorithms are supposed to be used as reference software for customers to understand the building blocks and software architecture on TI devices; they should not be expected to work in all real-life conditions. We expect customers to use them as reference only.

    Thanks,
    With Regards,
    Pramod
  • Hi Pramod,

    I cannot understand the ROI setting values of the use case, which are quite large for width and height. Also, when I changed the values, there was no effect.
    Can you provide some example values with more details?

    We tried to change the ROI values in the function AlgorithmLink_objectDetectionInitIOBuffers. The code we tried is as follows:
    /* Anchor the detection ROI at the top-left corner of the frame. */
    pInBufs->bufDesc[0]->bufPlanes[0].frameROI.topLeft.x = 0;
    pInBufs->bufDesc[0]->bufPlanes[0].frameROI.topLeft.y = 0;
    /* Plane and ROI dimensions; 2*(x/4)*2 rounds down to a multiple of 4. */
    pInBufs->bufDesc[0]->bufPlanes[0].width           = 2*(width /4)*2;
    pInBufs->bufDesc[0]->bufPlanes[0].height          = 2*(height/4)*2*10;
    pInBufs->bufDesc[0]->bufPlanes[0].frameROI.width  = 2*(width /4)*2;
    pInBufs->bufDesc[0]->bufPlanes[0].frameROI.height = 2*(height/4)*2*10;

    Also, Alg_ImgPyramid, which is an upstream step of Alg_ObjectDetection, has its own ROI configuration. This also looks related to the final results.

    Please provide more details.
    Thanks in advance,
    Andrew
    Refer to the parameters below in the test bench file "modules\apps\ti_pd_feature_plane_computation\test\src\feature_plane_comp_tb.c" for setting the ROIs for HOG computation and object detection:

    RoiCenterX = 640
    RoiCenterY = 400
    RoiWidth = 1280
    RoiHeight = 144
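
    For illustration, here is a minimal sketch of how a top-left/width/height rectangle could be derived from these center-based parameters (the struct and helper below are hypothetical, not the actual definitions in feature_plane_comp_tb.c):

    #include <stdint.h>

    typedef struct {
        uint32_t topLeftX;  /* hypothetical type, for illustration only */
        uint32_t topLeftY;
        uint32_t width;
        uint32_t height;
    } RoiRect;

    /* Round down to a multiple of 4, matching the 2*(x/4)*2 idiom above. */
    static uint32_t alignDown4(uint32_t v) { return (v / 4U) * 4U; }

    static RoiRect makeRoi(uint32_t centerX, uint32_t centerY,
                           uint32_t width,  uint32_t height)
    {
        RoiRect r;
        r.width    = alignDown4(width);
        r.height   = alignDown4(height);
        r.topLeftX = centerX - r.width  / 2U;
        r.topLeftY = centerY - r.height / 2U;
        return r;
    }

    With the values above, makeRoi(640, 400, 1280, 144) gives top-left (0, 328) and size 1280x144, i.e. a full-width horizontal band centered on row 400 of the frame.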

    Regards,

    Kumar.D

  • A few more details.

    In TI's object detection implementation, the feature plane (HOG) is computed on the EVE, and the DSP then runs the classifier on this feature plane. So the ROI settings are applied during feature plane computation on the EVE and carried forward to classification/detection on the DSP. The EVE applet is "ti_pd_feature_plane_computation".

    If you refer to the documentation of this applet (section 3.12 in EVE_Applets_UserGuide.pdf), you will find that the scaleParams structure provides a way to define the ROI for each scale. You can control the ROI per scale, and the test application provides one example of a uniformly scaled ROI with a base-scale ROI of 1280x144, as mentioned in the post above.
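
    As a rough illustration of the per-scale idea, here is a hedged sketch of a uniformly scaled ROI setup across pyramid levels (the struct and field names are made up for illustration; the real scaleParams definition is in the applet headers and section 3.12 of EVE_Applets_UserGuide.pdf):

    #include <stdint.h>

    typedef struct {
        uint32_t roiTopLeftX;  /* hypothetical fields; see the applet  */
        uint32_t roiTopLeftY;  /* headers for the real scaleParams     */
        uint32_t roiWidth;
        uint32_t roiHeight;
    } ScaleRoi;

    /* Shrink a base ROI (e.g. 1280x144 as in the test bench) uniformly
     * across pyramid levels; ratioQ8 is a Q8 fixed-point scale factor
     * (e.g. 208 ~= 0.8125 per level). */
    static void setUniformScaledRoi(ScaleRoi params[], uint32_t numScales,
                                    uint32_t baseX, uint32_t baseY,
                                    uint32_t baseW, uint32_t baseH,
                                    uint32_t ratioQ8)
    {
        uint32_t x = baseX, y = baseY, w = baseW, h = baseH;
        for (uint32_t s = 0; s < numScales; s++) {
            params[s].roiTopLeftX = x;
            params[s].roiTopLeftY = y;
            params[s].roiWidth    = w & ~1U;  /* keep dimensions even */
            params[s].roiHeight   = h & ~1U;
            x = (x * ratioQ8) >> 8;           /* next (smaller) level */
            y = (y * ratioQ8) >> 8;
            w = (w * ratioQ8) >> 8;
            h = (h * ratioQ8) >> 8;
        }
    }

    /* Example: ScaleRoi rois[8]; setUniformScaledRoi(rois, 8, 0, 328, 1280, 144, 208); */

    The point is simply that each scale carries its own ROI, so widening the coverage at the scales corresponding to distant (small) pedestrians is what the scaleParams mechanism allows for issues 1 and 2.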

    Thanks,
    With Regards,
    Pramod
  • Hi Andrew,
    Just checking on this issue to see if it is closed from your end. Please let us know if you need any further support on this issue.

    Thanks,
    With Regards,
    Pramod