
TDA4AEN-Q1: Problems encountered in practicing video streaming and deep learning inference on EVM development board

Part Number: TDA4VEN-Q1
Other Parts Discussed in Thread: AM67A

Note: The part number in this E2E issue's title was selected incorrectly. We are actually using the TDA4VEN.


After downloading version 10.01 of the SDK (as shown in Figure 1 of the attachment), we extracted the boot and tisdk tar.gz packages to the boot and root partitions of the SD card, and then booted the EVM development board in SD-card mode. The hardware connected to the development board includes an HDMI monitor and an IMX219 camera (see Figure 2 in the attachment). The board booted successfully, but we observed the following:
1. We tested the monitor with a GST pipeline and got a DRM-related error. (The pipeline is in the gst-pipeline.txt file in the attachment, and the error is shown in Figure 3 of the attachment.)
2. The IMX camera is not detected. We attempted to run the imx219 example in edgeai_tiovx_apps and encountered an error message indicating the absence of a camera device (see Figure 4 in the attachment).


Here are our questions:
1. Starting from the official SDK, what do we need to do to successfully drive the IMX camera and run deep learning demos like edgeai_tiovx_apps? In the AM62A documentation this is achieved by adding configuration to uEnv.txt, but we did not find similar guidance in the TDA4VEN documentation.
2. Can we use a GST pipeline similar to the AM62A series to obtain IMX camera image data from the v4l2src plugin?
3. How can we successfully drive the display to make kmssink available? (Attachment: 20250715_E2E.zip)

  • Hi,

    1. Starting from the official SDK, what do we need to do to successfully drive the IMX camera and run deep learning demos like edgeai_tiovx_apps? In the AM62A documentation this is achieved by adding configuration to uEnv.txt, but we did not find similar guidance in the TDA4VEN documentation.

    You need to load the k3-j722s-evm-csi2-quad-rpi-cam-imx219.dtso overlay. Add it to the name_overlays variable in uEnv.txt.
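    For example, the resulting line in uEnv.txt might look like this (a sketch; the exact entry depends on where the compiled .dtbo sits on the boot partition):

    name_overlays=ti/k3-j722s-vision-apps.dtbo k3-j722s-evm-csi2-quad-rpi-cam-imx219.dtbo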

    2. Can we use a GST pipeline similar to the AM62A series to obtain IMX camera image data from the v4l2src plugin?
    3. How can we successfully drive the display to make kmssink available?

    We have example pipelines here: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-am67a/latest/exports/edgeai-docs/common/edgeai_dataflows.html 
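    As a quick capture sanity check without any ISP processing, a minimal v4l2src pipeline could look like this (a sketch; /dev/video2 is an assumed node, substitute whichever node your sensor enumerated to):

    $ gst-launch-1.0 v4l2src device=/dev/video2 num-buffers=60 ! fakesink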

    Ensure that the display is working by running kmsprint and seeing if the display enumerates.

    If it's currently owned by Weston, run:

    $ systemctl stop weston

    Best,
    Jared

  • Hi Jared,

    Thank you for your answer, but we have already done that. Our uEnv.txt file currently contains the following content:

    # This uEnv.txt file can contain additional environment settings that you
    # want to set in U-Boot at boot time.  This can be simple variables such
    # as the serverip or custom variables.  The format of this file is:
    #    variable=value
    # NOTE: This file will be evaluated after the bootcmd is run and the
    #       bootcmd must be set to load this file if it exists (this is the
    #       default on all newer U-Boot images.  This also means that some
    #       variables such as bootdelay cannot be changed by this file since
    #       it is not evaluated until the bootcmd is run.
    
    # Update the Linux hostname based on board_name
    # The SK also requires an additional dtbo to boot. Prepend it to name_overlays depending on board_name
    uenvcmd=if test "$board_name" = "am67-sk"; then ; setenv args_all $args_all systemd.hostname=am67a-sk ; fi
    
    # Setting the right U-Boot environment variables
    dorprocboot=1
    name_overlays=ti/k3-j722s-vision-apps.dtbo k3-j722s-evm-csi2-quad-rpi-cam-imx219.dtbo

    The camera and display are still unavailable. We found that everything works with the AM67A EdgeAI SDK, but not with the TDA4VEN Linux SDK.

    We found a similar issue in another E2E thread:

    https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1491213/tda4ven-q1-failed-to-lock-all-rx-ports?keyMatch=tda4ven%20v4l2src&tisearch=universal_search

    Does this mean that other steps are required in the Linux SDK to enable the camera and display? In /opt/edgeai_tiovx_apps/configs/linux/imx219_cam.example.yaml in the Linux SDK, the camera is referenced as the /dev/video-imx219-cam0 device. In the EdgeAI SDK, it is enough to modify uEnv.txt as described earlier. What we are currently confused about is how to achieve the same on the Linux SDK.

    Best regards,

    Yangtian

  • Hi Yangtian,

    The camera and display are still unavailable. We found that everything works with the AM67A EdgeAI SDK, but not with the TDA4VEN Linux SDK.

    There should be no difference in whether the camera and display enumerate between the two SDKs. The difference between the two is that the Linux SDK doesn't include the tiovx gstreamer plugins to control the ISPs. You can stream raw data from the camera, but can't use the ISP.
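    As a minimal sketch, raw frames can be pulled straight off the capture node with v4l2-ctl (assuming the sensor enumerated as /dev/video2; adjust the node to your setup):

    $ v4l2-ctl -d /dev/video2 --stream-mmap --stream-count=30

    This only verifies that capture works; the frames are unprocessed Bayer data since the ISP is not in the path.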

    Does this mean that other steps are required in the Linux SDK to enable the camera and display? In /opt/edgeai_tiovx_apps/configs/linux/imx219_cam.example.yaml in the Linux SDK, the camera is referenced as the /dev/video-imx219-cam0 device. In the EdgeAI SDK, it is enough to modify uEnv.txt as described earlier. What we are currently confused about is how to achieve the same on the Linux SDK.

    The camera should still be probed; you can check the dmesg logs to confirm. It's simply not symbolically linked to /dev/video-imx219-cam0; that link is created by the setup_cameras.sh script in the EdgeAI SDK. You can still access the camera through whichever video node it enumerated to.
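    For example (assuming v4l-utils is installed on the target):

    $ dmesg | grep -i imx219
    $ v4l2-ctl --list-devices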

    I don't know what your issue is with the display. Can you run kmsprint to see if it enumerates?

    Best,
    Jared

  • Hi Jared,

    I didn't see any information about the camera in dmesg, and there are only video0, video1, and video2 under /dev/.
    Regarding the monitor, I obtained the following output from the kmsprint command:

    root@j722s-evm:/opt/vision_apps# kmsprint
    terminate called after throwing an instance of 'std::runtime_error'
      what():  No modesetting DRM card found
    Aborted (core dumped)

    I also ran the following GST pipeline:

    gst-launch-1.0 videotestsrc pattern=ball  ! video/x-raw, format=NV12, width=640, height=480,framerate=30/1 ! videoscale ! video/x-raw, format=NV12, width=1920, height=1080,framerate=30/1 ! kmssink driver-name=tidss
    and obtained the following error message:

    root@j722s-evm:/opt/vision_apps# gst-launch-1.0 videotestsrc pattern=ball  ! video/x-raw, format=NV12, width=640, height=480,framerate=30/1 ! videoscale ! video/x-raw, format=NV12, width=1920, height=1080,framerate=30/1 ! kmssink driver-name=tidss
    Setting pipeline to PAUSED ...
    ERROR: from element /GstPipeline:pipeline0/GstKMSSink:kmssink0: Could not open DRM module tidss
    Additional debug info:
    /usr/src/debug/gstreamer1.0-plugins-bad/1.22.12/sys/kms/gstkmssink.c(1160): gst_kms_sink_start (): /GstPipeline:pipeline0/GstKMSSink:kmssink0:
    reason: No such file or directory (2)
    ERROR: pipeline doesn't want to preroll.
    ERROR: from element /GstPipeline:pipeline0/GstKMSSink:kmssink0: GStreamer error: state change failed and some element failed to post a proper error message with the reason for the failure.
    Additional debug info:
    /usr/src/debug/gstreamer1.0/1.22.12/libs/gst/base/gstbasesink.c(5885): gst_base_sink_change_state (): /GstPipeline:pipeline0/GstKMSSink:kmssink0:
    Failed to start
    ERROR: pipeline doesn't want to preroll.
    Failed to set pipeline to PAUSED.
    Setting pipeline to NULL ...
    Freeing pipeline ...
    This is the result in both the Linux SDK and the prebuilt RTOS SDK.

    What should we do to ensure the normal operation of the GST pipeline?

    Best,

    Yangtian

  • Hi Yangtian,

    These errors mean that nothing is being enumerated. I assume there's an issue with how the SD card was set up.

    For now, can you flash the SD card with this image: https://dr-download.ti.com/software-development/software-development-kit-sdk/MD-NQjfZVt1aJ/11.00.00.08/tisdk-edgeai-image-j722s-evm.wic.xz 
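    One way to write the image from a Linux host is shown below (a sketch; /dev/sdX is a placeholder for your SD card device, double-check with lsblk first since dd overwrites the target):

    $ xz -dc tisdk-edgeai-image-j722s-evm.wic.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync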

    After that, check whether the screen enumerates properly. You can use kmsprint as well as ls the /dev/dri/ directory.
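    For example (on a working setup, /dev/dri/ should contain at least a card0 node once the tidss driver probes):

    $ kmsprint
    $ ls /dev/dri/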

    To ensure the cameras will work, you need to add the correct device tree overlay to the uEnv.txt under name_overlays.

    Best,
    Jared

  • Hi Jared,

    Thank you for replying.

    Yes, I have tried this EdgeAI SDK before, and it can indeed drive the cameras and display normally under Linux, which is what we expected. We are currently evaluating the Linux+RTOS SDK and the EdgeAI SDK. The features we want to implement include some CMS functions and the SRV function, and the officially provided SRV demo is in the RTOS SDK. The problems we are currently facing are:
    1. We are more familiar with the video streaming solution of the EdgeAI SDK (GStreamer), and it is easier to implement CMS functionality on the EdgeAI SDK. However, there is no SRV example on this SDK, so we do not know how to implement SRV functionality.
    2. There are SRV examples in the Linux+RTOS SDK, but its camera, display, and some other drivers are implemented in the R5 core firmware, and the video streaming pipelines are implemented through TIOVX programming. We are not familiar with this video streaming solution and find it difficult to develop CMS functionality on it.
    At present, we have two ideas:
    1. Implement an SRV example in the EdgeAI SDK.
    2. Implement Linux-driven camera, display, and other functions in the Linux+RTOS SDK, similar to the EdgeAI SDK.
    We hope to receive some suggestions and guidance on these two plans.

    Best,

    Yangtian

  • Hi Yangtian,

    The difference between the RTOS (ADAS) SDK and the EdgeAI SDK is that cameras, display, and ISP functions are controlled by the R5Fs and FreeRTOS instead of the A cores and Linux.

    2. Implement Linux-driven camera, display, and other functions in the Linux+RTOS SDK, similar to the EdgeAI SDK.

    This will break the SRV demo, as the SRV demo is built for those functions to run on the R5Fs. Resolving all of the different interdependencies would also be a difficult task.

    1. Implement an SRV example in the EdgeAI SDK.

    This is likely the easier option, but we will not offer support in the way of writing the application.

    Best,
    Jared