This thread has been locked.

TDA4VM: How to view results on HDMI display?

Other Parts Discussed in Thread: TDA4VM

Hi,

I am using TDA4VM (J721E) board with SDK 09_00_00_00.

software-dl.ti.com/.../sample_apps.html

How to view results on HDMI display and also how to stream the processed frames through RTSP server which can be viewed from a remote machine?

I have compiled some custom models that work fine on the board and give the expected results. Using the EdgeAI Dataflows section of the SDK documentation, I was able to write an input GStreamer pipeline to consume frames in cv2. However, I am new to launching an RTSP server and to creating an output GStreamer pipeline with kmssink from Python and cv2. Is there a tutorial for this? The documentation gives GStreamer pipelines that can be run from the CLI, but with custom models that depend serially on each other, I need to write my own Python script to execute the task. In that case, how exactly to dump frames so they can be shown through kmssink or an RTSP server is still not clear from the documentation. I would request that you at least provide a tutorial we can follow.
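For readers with the same question: one common approach is to hand processed frames to a GStreamer sink through `cv2.VideoWriter` with an `appsrc` launch string. The sketch below is an assumption-laden starting point, not a documented TI recipe: the `driver-name=tidss` property, the caps, the encoder choice, and `REMOTE_IP` are all placeholders to verify against your own board and SDK version.

```python
# Sketch of two output pipelines for cv2.VideoWriter (all element
# properties below are assumptions to verify on your board).

WIDTH, HEIGHT, FPS = 1280, 720, 30  # placeholder frame geometry

# Display path: appsrc receives BGR frames from cv2.VideoWriter,
# videoconvert converts them, and kmssink scans them out over HDMI.
# `driver-name=tidss` is assumed for the J721E display subsystem.
display_pipeline = (
    "appsrc ! videoconvert ! video/x-raw,format=NV12 ! "
    "kmssink driver-name=tidss sync=false"
)

# Network path: H.264-encode the frames and send them as RTP over UDP
# to a remote machine (REMOTE_IP is a placeholder). The receiver can
# play this stream directly, or the same launch string can back a
# GstRtspServer media factory if a true RTSP server is needed.
REMOTE_IP = "192.168.1.100"  # placeholder address
stream_pipeline = (
    "appsrc ! videoconvert ! video/x-raw,format=I420 ! "
    "x264enc speed-preset=ultrafast tune=zerolatency ! "
    "rtph264pay ! udpsink host={} port=5000".format(REMOTE_IP)
)

def open_writer(pipeline, width=WIDTH, height=HEIGHT, fps=FPS):
    """Return a cv2.VideoWriter that pushes BGR frames into `pipeline`."""
    import cv2  # requires an OpenCV build with GStreamer support
    return cv2.VideoWriter(pipeline, cv2.CAP_GSTREAMER, 0,
                           float(fps), (width, height), True)

# In the processing loop, after the models produce a frame:
#   writer = open_writer(display_pipeline)
#   writer.write(processed_bgr_frame)  # must be HxWx3 uint8 BGR
```

If `VideoWriter.isOpened()` returns False, the OpenCV build likely lacks GStreamer support or an element in the launch string is missing from the target filesystem.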

Thanks and regards,
Sourabh

  • Hi Sourabh,

    TI does not support the development of cv2 python scripts, so we do not have any tutorials to share. We only support what is seen on the SDK documentation such as building GStreamer pipelines and running our python/C++ applications. There are some examples with remote streaming in the edge AI dataflows section that may help you here: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-edgeai/TDA4VM/09_00_00/exports/docs/common/edgeai_dataflows.html#semantic-segmentation

    Thank you,

    Fabiana

  • Hi Fabiana,

    Thank you for your response. I have successfully executed the GStreamer pipelines and Python applications provided by Texas Instruments (TI). However, as our requirements extend beyond the functionalities demonstrated in the provided applications, we seek guidance on developing custom artificial intelligence (AI) applications tailored to our specific needs.

    Specifically, our objective is to create AI applications that involve the sequential connection of multiple models, where the output of one model serves as input to the next. Additionally, we aim to integrate these models seamlessly with our user interface (UI), requiring the incorporation of Python scripts. Furthermore, we anticipate the necessity to stream or broadcast the predictions generated by our models over our private network.
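The serial connection described above (each model consuming the previous model's output) can be sketched in plain Python. The stage names below are hypothetical stand-ins; in a real application each stage would wrap a TIDL-accelerated runtime session created as shown in the SDK documentation.

```python
def run_chain(frame, stages):
    """Feed `frame` through each stage in order; each stage consumes
    the previous stage's output, matching the serial dependency
    between models described above."""
    result = frame
    for stage in stages:
        result = stage(result)
    return result

# Dummy stand-ins for real model invocations (hypothetical names):
def detector(img):
    # e.g. run a detection model and return the frame plus boxes
    return {"image": img, "boxes": [(0, 0, 64, 64)]}

def classifier(det):
    # e.g. classify each detected region from the previous stage
    return {"boxes": det["boxes"],
            "labels": ["person"] * len(det["boxes"])}

result = run_chain("raw-frame", [detector, classifier])
# `result` holds the final stage's predictions, ready to be drawn
# onto the frame and pushed to a display or network sink.
```

Keeping each model behind a simple callable interface like this makes it straightforward to reorder stages or insert pre/post-processing steps without touching the loop itself.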

    In the course of developing our custom product, relying solely on the provided demonstrations proves insufficient for our application requirements. Therefore, we kindly request comprehensive support for such use cases. We would greatly appreciate the provision of tutorials or documentation addressing the development of AI applications beyond the scope of the current demonstrations.

    We would like to know whether Texas Instruments plans to address these considerations in future updates, or whether current support is primarily focused on the provided demos. Your insights and guidance on this matter would be highly valuable for our ongoing development efforts.

    Thanks and regards,
    Sourabh

  • Hi Sourabh,

    Thank you for the feedback and for clearly stating your requirements. We provide many demos and examples for the sake of ramping customers up on our devices, but we do not claim to fully support custom application development. As of now, we do not have any plans to develop and add such tutorials. However, your needs will be shared with the team and strongly considered when we revise our roadmap.

    Thank you,

    Fabiana