
TDA2E: Lane detection samples in Vision SDK: how can we further modify the sample to mark the lanes in a different format?

Part Number: TDA2E

Hi all,

I am trying to run the lane detection sample programs from the Vision SDK on a TDA2x board.

We found two samples available inside the Vision SDK toolkit.

1. One is under the use-case directory:

    "VISION_SDK_02_12_00_00\vision_sdk\examples\tda2xx\src\usecases\vip_single_cam_lane_detection"

2. The other is under the following directory:

    "C:\VISION_SDK_02_12_00_00\ti_components\algorithms_codecs\REL.200.V.LD.C66X.00.02.03.00\200.V.LD.C66X.00.02\modules\ti_lane_detection\test"

Can anyone please explain the difference between the two sample applications, and how we can use them for further development?

I tried running both the samples.

I am not clear on the output of the sample applications.

The lane detection example under use-cases shows the video with some zebra-line markings on the lane, on one side only.

Can anyone explain what the actual output should look like?

The other example takes individual monochrome frames as input and gives (x,y) coordinates as output.

How can we send video input to this test bench, and how can we view and further process the output frames?

How can I modify the algorithm to tune the lane detection to my requirements?

Can somebody please help me with this, as I am new to it?

Thanks.

Suganthi

  • Hi Suganthi,

    I have forwarded your question to an algorithm expert.

    Regards,
    Yordan
  • Hi Suganthi,

    "VISION_SDK_02_12_00_00\vision_sdk\examples\tda2xx\src\usecases\vip_single_cam_lane_detection" is the SDK use-case for lane detection. If you want to tune the lane detection algorithm for your video clip, then you need to use this and tune it for your clip.

    "\VISION_SDK_02_12_00_00\ti_components\algorithms_codecs\REL.200.V.LD.C66X.00.02.03.00\200.V.LD.C66X.00.02\modules\ti_lane_detection\test" is the standalone test bench for exercising the lane detection algorithm outside of the Vision SDK. It uses pre-defined input buffers stored under "\ti_lane_detection\test\testvecs\input".

    The Vision SDK use-case links to the lane detection algorithm via \REL.200.V.LD.C66X.00.02.03.00\200.V.LD.C66X.00.02\modules\ti_lane_detection\lib\lane_detection_algo.lib.

    Please read the lane detection document to tune the parameters for your clip. Typical parameters are the ROI definition, the edge detection thresholds, and the number of Hough maxima to be found.

  • Hi Prashanth,

    Thank you very much for the clarification.

    We tried running the Lane Detection Use-case under the vision_sdk.

    By default, the video input is taken through the HDMI source.
    We modified it to take the input from an OmniVision camera.

    On running the use-case, we are getting faint lane markings (appearing like zebra lines on one side of the lane) on the display, but the result is not clear.

    Query 1:
    Can you please explain what the actual output of the vision_sdk lane detection use-case will be? Do you have any sample image of how it will appear on the display?
    How are the lane points marked, and how are they shown on the display?

    Query 2:
    In the standalone test bench, we have the .y files in the input directory. We are not able to view these images. Which tool can we use to view image files in the .y format?


    Thanks,
    Suganthi
  • Hi Suganthi,

       The output of the use-case looks as shown in the link below:

    www.youtube.com/watch

    You can use the sample image shared in the lane_detection test module.

    The .y files can be viewed using 7YUV. You can download it from http://datahammer.de/

    You can also use ffmpeg to convert them to PNG or JPEG if you like.
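    As a rough illustration, a .y file can also be converted to a standard image format with a few lines of Python. Note that headerless .y files carry no dimensions, so the width and height must come from the test vector's documented resolution; the 4x2 frame below is a made-up example.

```python
from pathlib import Path

def y_to_pgm(y_path: str, pgm_path: str, width: int, height: int) -> None:
    """Convert a headerless 8-bit luma (.y) frame to binary PGM (P5).

    PGM is viewable in most image viewers, so this is a quick way to
    inspect raw luma test vectors without a dedicated YUV tool.
    """
    data = Path(y_path).read_bytes()
    expected = width * height
    if len(data) < expected:
        raise ValueError(f"file holds {len(data)} bytes, need {expected}")
    header = f"P5\n{width} {height}\n255\n".encode("ascii")
    Path(pgm_path).write_bytes(header + data[:expected])

# Example: a synthetic 4x2 gradient frame standing in for a real test vector
Path("frame.y").write_bytes(bytes(range(8)))
y_to_pgm("frame.y", "frame.pgm", width=4, height=2)
```

    For the real test vectors you would substitute the actual frame resolution for the placeholder 4x2.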

  • Hi Prashanth,

    Thanks for the input.

    We have a question regarding the input to the lane detection algorithm.

    We watched the video you shared, and we understand that the lane markings are shown between the two lanes.

    We also need clarification on the below things:

    1. Can the lane detection sample use-case under vision_sdk take video directly from the camera and process it?
    or
    2. Does any explicit conversion of the frames to luma frames need to be done in the use-case?
    or
    3. Is the lane detection algorithm plugin already doing that conversion?

    4. Currently, the sample use-case takes some default values for the threshold and theta parameters. Do we need to tune these parameters at our end as well?

    Thanks,
    Suganthi
  • Hi Suganthi,
    1. Yes, it can take video directly from the camera and process it. However, you need to tune parameters like the ROI, thresholds, etc., as I mentioned earlier.
    2. No. The algorithm uses only the luma channel as input for processing, so passing only the luma channel is sufficient.
    3. No explicit conversion is needed as long as your input is in YUV. If you are using another format like RCCC, then a conversion may be needed, although we have never tried that at our end.
    4. You can try using the default threshold and theta values. If you feel the performance is not good, then you can tune them.
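    To illustrate point 2 and 3 above: in common 4:2:0 layouts such as NV12 the luma plane is stored first in the frame buffer, so extracting the luma channel is a simple slice. This is a generic sketch for illustration, not the SDK's internal frame handling.

```python
def extract_luma(frame: bytes, width: int, height: int) -> bytes:
    """Return the Y (luma) plane of an NV12 / planar YUV 4:2:0 frame.

    In these layouts the luma plane occupies the first width*height
    bytes, so the lane detection algorithm can be fed that slice
    directly; the chroma data that follows is simply ignored.
    """
    y_size = width * height
    if len(frame) < y_size:
        raise ValueError("frame smaller than one luma plane")
    return frame[:y_size]

# Example: a 4x2 NV12 frame is 8 luma bytes followed by 4 chroma bytes
nv12 = bytes(range(8)) + b"\x80" * 4
luma = extract_luma(nv12, 4, 2)
```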

    The algorithm implementation is a very simple edge detection + Hough transform for lines. It does not have any advanced processing to figure out whether a marking is a zebra crossing, etc. You can always add such post-processing steps on top of our algorithm. For more details on the implementation, you can refer to the paper: ieeexplore.ieee.org/.../stamp.jsp
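    The edge detection + Hough pipeline mentioned above can be sketched in a toy form (pure Python for readability; the real C66x implementation is of course optimized and parameterized differently). The gradient threshold, theta resolution, and number of maxima below correspond loosely to the tunable parameters discussed earlier.

```python
import math

def hough_lines(img, grad_thresh=50, n_theta=180, top_k=2):
    """Toy lane-line detector: crude gradient edge detection followed
    by a Hough transform for straight lines. Returns the top_k
    (theta_radians, rho) accumulator maxima."""
    h, w = len(img), len(img[0])
    # 1. Edge detection: simple central differences, thresholded.
    edges = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            if abs(gx) + abs(gy) >= grad_thresh:
                edges.append((x, y))
    # 2. Hough voting: each edge point votes for every line through it,
    #    parameterized as rho = x*cos(theta) + y*sin(theta).
    acc = {}
    for x, y in edges:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    # 3. Keep the strongest accumulator cells (the "Hough maxima").
    best = sorted(acc.items(), key=lambda kv: -kv[1])[:top_k]
    return [(math.pi * t / n_theta, rho) for (t, rho), _ in best]

# Example: a 16x16 image with a bright vertical stripe at x == 8
# should yield a near-vertical line (theta close to 0, rho close to 8).
img = [[255 if x == 8 else 0 for x in range(16)] for _ in range(16)]
lines = hough_lines(img, top_k=1)
```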
  • Hi Prashanth,

    Thank you so much for the clarifications.
    Do you have any sample video of a road with lane markings and obstacles?
    We have a sample video, and we are testing with that.
    It would be helpful if you could share any other videos suitable for lane detection and obstacle detection, so that we can check with those as well.

    Thanks,
    Suganthi
  • Hi Prashanth,

    We have referred to the example use-case "vip_single_cam_analytics2", which uses all the algorithms (object detect, lane detect, SFM, FCW, TSR, CLR).

    Here, when we run the sample, we get the error below.

    325.633746 s: FILE: Reading file [SFM_POSE.bin] ...
    [IPU1-0] 325.633837 s: FILE: ERROR: Could not open FILE !!!
    [IPU1-0] 325.633898 s: Assertion @ Line: 187 in C:/VISION_SDK_02_12_00_00/vision_sdk/examples/tda2xx/src/usecases/common/chains_common_sfm.c: status==SYSTEM_LINK_STATUS_SOK : failed !!

    How to resolve this?

    Thanks,
    Suganthi
  • Hi Prashanth,

    We would also like to know how the SFM module is related to the lane detection module. In the file "chains_common_fc_analytics2.c", the "ldDrawEnable" parameter is enabled and disabled based on the frame number obtained from SFM via the function "ChainsCommon_Sfm_CaptureGetCurrentFrameNum()".

    Can you please clarify this?

    Thanks,
    Swati
  • Suganthi
    You need to copy SFM_POSE.bin onto the SD card. This file can be downloaded from CDDS along with the demo clips.
    Vision SDK demo clips are available at cdds.ext.ti.com/.../emxNavigator.jsp
    Refer to VisionSDK_UserGuide_TDA2xx.pdf, section 3.0 "Run the demo", 3.9.1 "Single channel demos with HDMI input".

    Swati,
    SFM as such is not related to lane detection, but SFM is used to find the distance between the camera and the object.

    Regards, Shiju