
AM62A7: Question about the availability of vision apps for AM62

Part Number: AM62A7

Dear TI staff,

I am currently investigating the availability of the vision apps demos on AM62, and have some related questions:

1. I only found the vision apps user guide for TDA4. Is there a dedicated guide for AM62?

2. The vision apps user guide for TDA4 mentions the PSDK_RTOS for TDA4, but I didn't find such a package for AM62. Instead I am using the firmware builder, since it seems similar to the PSDK_RTOS. Should I continue using it or switch to another package?

3. My intention is to compile app_single_cam in vision apps for AM62, but this demo is not enabled in the default makefile. While trying to enable it manually, I found some differences between tivx_soc_am62a.h and the headers for the other platforms: TIVX_TARGET_CAPTURE/DISPLAY are not defined in the AM62A version, which causes errors when building the demo (see the grep sketch below). I'm not sure how to proceed, so my questions are: is AM62A capable of running this demo, and if so, how can these capture/display targets be added?
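
For reference, the difference shows up when grepping the SoC headers; a sketch, assuming the usual tiovx layout (adjust the include paths to wherever your SDK unpacks tiovx):

    # Compare which capture/display targets each SoC header defines
    grep -n "TIVX_TARGET_CAPTURE\|TIVX_TARGET_DISPLAY" \
        tiovx/include/TI/soc/tivx_soc_j721e.h \
        tiovx/include/TI/soc/tivx_soc_am62a.h
    # The J721E header lists targets such as TIVX_TARGET_CAPTURE1 and
    # TIVX_TARGET_DISPLAY1; the AM62A header produces no matches, which is
    # why the demo fails to build.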

Thank you in advance for the support; I look forward to your reply.

Huang Jingjie

  • Hi Jingjie,

    1. I only found the vision apps user guide for TDA4. Is there a dedicated guide for AM62?

    There is no vision apps user guide for AM62A.

    2. The vision apps user guide for TDA4 mentions the PSDK_RTOS for TDA4, but I didn't find such a package for AM62. Instead I am using the firmware builder, since it seems similar to the PSDK_RTOS. Should I continue using it or switch to another package?

    There is no PSDK_RTOS for AM62A. The firmware builder is the right package.

    my questions are: is AM62A capable of running this demo, and if so, how can these capture/display targets be added?

    This demo is not supported on AM62A. 

    Regards,

    Jianzhong

  • Hi Jianzhong,

    Thank you very much for the reply. 

    I would like to further ask about the use case in the Technical White Paper: Camera Mirror Systems on AM62A. It says that the software includes a 'TI OpenVX based vision application performing the video streaming', which I believe is something similar to the vision apps mentioned above.

    Meanwhile, I also tried to modify the GStreamer-based Python application demo in the prebuilt AM62A image to run the specific deep learning algorithm from the white paper, with imx219 as the camera input. The framerate was not satisfactory. Judging by the latency figures in the white paper, a GStreamer-based application may not achieve the goal.
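
    For context, the modification was essentially a model swap in the demo's config file, roughly as below (the /opt layout and the model_path key follow the prebuilt image and may differ between SDK versions):

        # Hypothetical edit to the demo config, e.g. configs/object_detection.yaml:
        #   models:
        #     model0:
        #       model_path: /opt/model_zoo/TVM-OD-5120-ssdLite-mobDet-DSP-coco-320x320
        # Then run the Python demo against that config:
        cd /opt/edgeai-gst-apps/apps_python
        ./app_edgeai.py ../configs/object_detection.yaml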

    I would appreciate it if you could provide more detailed information on this use case. Thank you in advance.

    Regards,

    Huang Jingjie

  • Hi Jingjie,

    I would like to further ask about the use case in the Technical White Paper: Camera Mirror Systems on AM62A. It says that the software includes a 'TI OpenVX based vision application performing the video streaming', which I believe is something similar to the vision apps mentioned above.

    Sorry, I wasn't clear in my earlier response. The single camera demo was available in the QNX SDK, but not in the Linux SDK. The camera mirror system white paper you referred to was based on the QNX SDK.

    Meanwhile, I also tried to modify the GStreamer-based Python application demo in the prebuilt AM62A image to run the specific deep learning algorithm from the white paper, with imx219 as the camera input. The framerate was not satisfactory.

    Is it possible for you to provide more information here, e.g., what kind of deep learning model was running, what framerate was achieved, etc.? I can ask our analytics team to look into this if you provide the details.
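
    One standard way to log the measured framerate is GStreamer's fpsdisplaysink; a minimal sketch, with videotestsrc standing in for the real camera source:

        # Prints rendered/dropped/current/average fps messages when run with -v
        gst-launch-1.0 -v videotestsrc num-buffers=300 ! \
            fpsdisplaysink text-overlay=false video-sink=fakesink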

    Regards,

    Jianzhong

  • Hi Jianzhong,

    Thanks for the clarification. I'll spend some time going through the QNX SDK first.

    As for the modified demo, I basically just switched the model to TVM-OD-5120-ssdLite-mobDet-DSP-coco-320x320. The result was as follows:

    I am not sure if the model I chose is identical to the one mentioned in the white paper:

    Since you said that the demo in the white paper is based on QNX, this test result for the GStreamer-based program should not be a major issue anymore. Anyway, feel free to ask if there is any more info I can provide for this test.

    Regards,

    Huang Jingjie

  • Hi, the model we had chosen was TFL-OD-2030. Could you please check with this? You can access it from our model zoo:

    https://dev.ti.com/edgeaisession/index-AM62A.html

  • Hi Tarkesh,

    Thank you for the reply, but I'm afraid the link you posted is not accessible to customers.

    Our major concern here is the end-to-end latency. Image identification does contribute a lot, but it is highly influenced by the model and other factors, as you suggested in your reply. We were just trying to reproduce the demo from the white paper, without a preference for any specific model.

    We believe a GStreamer-based application would have higher latency, so the vision apps approach should be more suitable. Unfortunately, Jianzhong previously pointed out that only the QNX version is available. Is it possible to implement the vision apps demo on Linux? The use case is feasible on TDA4, so why doesn't it work on AM62?

    Regards,

    Huang Jingjie

  • Hi Huang,

    On TDA4 there are multiple R5 cores, so the capture drivers are implemented on the R5. On Sitara, with the current product offering, we have to implement the capture driver on the A53 cores. This is the main reason. Therefore we use a GStreamer pipeline with calls to OpenVX underneath.

    We have also constructed some optimized GStreamer pipelines called OptiFlow; for object detection, the pipeline is given here:

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-edgeai/AM62AX/09_00_00/exports/edgeai_docs/common/edgeai_dataflows.html#object-detection

    You can try this without the deep learning model; you will need to add a display.
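
    A minimal capture-to-display sketch of this style, without the deep learning stage, assuming an IMX219 sensor (the device nodes, subdev path, and DCC file locations are board-specific; check them with v4l2-ctl --list-devices on your EVM):

        # IMX219 capture -> tiovxisp (ISP) -> kmssink display; paths vary per setup
        gst-launch-1.0 v4l2src device=/dev/video2 io-mode=dmabuf-import ! \
            video/x-bayer,width=1920,height=1080,format=rggb ! \
            tiovxisp sink_0::device=/dev/v4l-subdev2 \
                sensor-name=SENSOR_SONY_IMX219_RPI \
                dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin \
                sink_0::dcc-2a-file=/opt/imaging/imx219/dcc_2a.bin format-msb=7 ! \
            video/x-raw,format=NV12 ! kmssink driver-name=tidss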

    You can see if this helps. It's based on zero buffer copy, and the team has a paper on it:

     https://library.imaging.org/admin/apis/public/api/ist/website/downloadArticle/ei/35/16/AVM-113

  • Hi Tarkesh,

    Appreciate the details you provided. We will spend some time going through the materials and perhaps reproduce the process on our EVM board first.

    Regards,

    Huang Jingjie

  • Some follow-ups about the test:

    We failed to load the imx219 camera using the prebuilt 9.0 image, so we ran the OptiFlow test on the 8.6 version instead.
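
    For reference, whether the sensor probed can be checked with standard V4L2 tooling; illustrative commands:

        dmesg | grep -i imx219      # did the driver probe?
        v4l2-ctl --list-devices     # which /dev/video* nodes were registered?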

    After modifying the input/output of the pipeline mentioned in this link (https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-edgeai/AM62AX/08_06_00/exports/docs/common/sample_apps.html#optiflow) to imx219/kmssink and adding a tiperfoverlay, the framerate was around 27~30.

    Using gst_trace, the result was as follows:

    We'll need some help regarding the imx219 driver. With that fixed, I think we can test the pipeline on version 9.0.
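
    For reference, a latency trace of this kind can be collected with GStreamer's built-in latency tracer; a sketch, with videotestsrc standing in for the actual pipeline:

        GST_TRACERS="latency(flags=pipeline+element)" GST_DEBUG="GST_TRACER:7" \
            GST_DEBUG_FILE=/tmp/latency.log \
            gst-launch-1.0 videotestsrc num-buffers=300 ! fakesink
        # Inspect /tmp/latency.log for per-element and end-to-end latency entries.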

    Regards,

    Huang Jingjie