Hi TI,
I am trying to configure the camera with the TIDL AVP use case. Is there a faster way to get this done? Thank you.
With best regards,
H.M. Owais
Hi Hafiz,
I have informed the correct person to answer this question; you should get a reply soon.
Regards,
Anshu
Hafiz,
There are three high-level things you need to take care of:
1. Hardware
You will need a TDA4VMid EVM, a Fusion1 board, and IMX390 + UB953 FPD-Link sensors for the cameras. If you have downloaded the 6.1 PSDKRA setup, do take a look at this page:
psdk_rtos_auto_j7_06_01_00/psdk_rtos_auto/docs/user_guide/evm_setup_j721e.html
2. Software
We don't have a ready-made example for providing camera input to the AVP demo (for reasons explained later). But if you wish to attempt it, you will have to look at two examples.
First, look at the single/multi camera application, which will help in capturing a frame and passing it through the VISS (ISP):
psdk_rtos_auto_j7_06_01_00_docs_only/vision_apps/docs/user_guide/group_apps_basic_demos_app_single_cam.html
Once you have this ready, you will have to feed its output into the AVP graph:
psdk_rtos_auto_j7_06_01_00/vision_apps/docs/user_guide/group_apps_dl_demos_app_tidl_avp.html
As both applications are written in OpenVX, it should be straightforward to connect all of the nodes together and execute a single graph end-to-end, as sketched below.
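To make this concrete, here is a minimal sketch (not code from the SDK) of how the two pipelines can share one image so that OpenVX treats them as a single graph. create_capture_chain() and create_avp_chain() are hypothetical placeholders for the node creation and parameter setup done in app_single_cam and app_tidl_avp, and the frame size is just an assumed IMX390-style resolution.

#include <VX/vx.h>

/* HYPOTHETICAL helpers: in a real port, copy the node creation and
 * parameter setup from app_single_cam and app_tidl_avp into these. */
static void create_capture_chain(vx_graph graph, vx_image out)
{
    /* capture -> VISS (ISP) -> AEWB -> LDC, writing the corrected frame to 'out' */
    (void)graph; (void)out;
}

static void create_avp_chain(vx_graph graph, vx_image in)
{
    /* MSC scaler -> pre-proc -> TIDL -> post-proc -> mosaic -> display, reading 'in' */
    (void)graph; (void)in;
}

int main(void)
{
    vx_context context = vxCreateContext();
    vx_graph   graph   = vxCreateGraph(context);

    /* The shared image is the hand-off point: one chain produces it and the
     * other consumes it, so OpenVX links everything into a single graph.
     * 1936x1096 NV12 is an assumed IMX390-style frame; match your sensor. */
    vx_image frame = vxCreateVirtualImage(graph, 1936, 1096, VX_DF_IMAGE_NV12);

    create_capture_chain(graph, frame);
    create_avp_chain(graph, frame);

    if (vxVerifyGraph(graph) == VX_SUCCESS)
    {
        vxProcessGraph(graph);  /* one frame; run in a loop for live capture */
    }

    vxReleaseImage(&frame);
    vxReleaseGraph(&graph);
    vxReleaseContext(&context);
    return 0;
}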
3. Algorithm
Please note that the TIDL models that come along with AVP are trained on a particular sequence. Hence we show a file-based demo just to demonstrate the DL capabilities.
If you provide a live camera input, there is no guarantee that the algorithm will work. You will have to calibrate the camera, capture samples, retrain on the new samples, and then run inference.
It is possible, but for best results you will have to train on samples captured using the cameras connected to the EVM and run inference using the same cameras.
Hope this helps.
Regards,
Shyam
Hi Shyam,
Thank you for your reply. I have successfully run and tested the two applications separately (i.e. both tidl_avp and single_cam). What I am interested in is connecting the two together. Since I am a beginner in OpenVX, could you also elaborate on how I can connect the single_cam pipeline with the AVP pipeline? Do I have to keep them within their own graphs and connect the outputs? Looking forward to your help. Thank you.
With best regards,
H.M. Owais
Hi TI,
Can anyone give a further update on this matter? Thank you.
With best regards,
H.M.Owais
Hi Hafiz,
Firstly, apologies for the late response; as it was the year-end holidays, most of us were on vacation.
You will have to look at two applications:
1. vision_apps\basic_demos\app_single_cam
The execution graph contains: capture node -> ISP (VISS) node -> auto exposure/white balance (AEWB) node -> lens distortion correction (VpacLdc) node -> display (DSS) node
2. vision_apps\dl_demos\app_tidl_avp
The execution graph contains: scaler (VpacMsc) node -> pre-proc node (C66x) -> TIDL node (C7x) -> post-proc node (C66x) -> mosaic node (VpacMsc) -> display (DSS) node
So to put these together, you will have to construct one single graph containing all of these nodes; a rough sketch follows below. The best tutorial as of now is the TIOVX and VISION_APPS documentation, together with the source code.
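As a rough illustration (again an assumption sketch, not SDK code), each arrow above becomes a virtual image connecting a producer node to a consumer node, and OpenVX derives the execution order from those links rather than from the order of the calls. add_node() below is a hypothetical stand-in for the per-node TIOVX creation calls (tivxCaptureNode, tivxVpacVissNode, tivxVpacLdcNode, tivxVpacMscScaleNode, tivxTIDLNode), whose real argument lists come from the two demo applications.

#include <VX/vx.h>

/* HYPOTHETICAL stand-in for the per-node TIOVX creation calls; each stage
 * maps to a real node (see the comments at the call sites below). */
static void add_node(vx_graph g, const char *stage, vx_image in, vx_image out)
{
    (void)g; (void)stage; (void)in; (void)out;
}

int main(void)
{
    vx_context ctx = vxCreateContext();
    vx_graph   g   = vxCreateGraph(ctx);

    /* One virtual image per arrow in the combined chain. */
    vx_image raw     = vxCreateVirtualImage(g, 0, 0, VX_DF_IMAGE_VIRT);
    vx_image isp_out = vxCreateVirtualImage(g, 0, 0, VX_DF_IMAGE_VIRT);
    vx_image ldc_out = vxCreateVirtualImage(g, 0, 0, VX_DF_IMAGE_VIRT);
    vx_image scaled  = vxCreateVirtualImage(g, 0, 0, VX_DF_IMAGE_VIRT);
    vx_image dl_out  = vxCreateVirtualImage(g, 0, 0, VX_DF_IMAGE_VIRT);

    add_node(g, "capture", NULL,    raw);      /* tivxCaptureNode            */
    add_node(g, "viss",    raw,     isp_out);  /* tivxVpacVissNode (+ AEWB)  */
    add_node(g, "ldc",     isp_out, ldc_out);  /* tivxVpacLdcNode            */
    add_node(g, "msc",     ldc_out, scaled);   /* tivxVpacMscScaleNode       */
    add_node(g, "tidl",    scaled,  dl_out);   /* pre-proc, TIDL, post-proc  */
    add_node(g, "display", dl_out,  NULL);     /* mosaic + DSS display       */

    if (vxVerifyGraph(g) == VX_SUCCESS)
    {
        vxProcessGraph(g);
    }

    vxReleaseImage(&raw);
    vxReleaseImage(&isp_out);
    vxReleaseImage(&ldc_out);
    vxReleaseImage(&scaled);
    vxReleaseImage(&dl_out);
    vxReleaseGraph(&g);
    vxReleaseContext(&ctx);
    return 0;
}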
Regards,
Shyam