This thread has been locked.

J721S2XSOMXEVM: TIOVX feasibility question in Linux + RTOS

Part Number: J721S2XSOMXEVM

Hi,

I have a J721S2XSOMXEVM and am using the ADAS SDK: PROCESSOR-SDK-LINUX-J721S2 + PROCESSOR-SDK-RTOS-J721S2.

I plan to develop an application where a camera is connected, ISP / DL processing runs, and the result is retrieved.

I would like to send the camera images and other data to a PC over the network.

Q1) In the vision_apps camera demo linked below, it looks like the camera image does not reach Linux memory space.

https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/vision_apps/docs/user_guide/group_apps_dl_demos_app_tidl_cam.html

Can I import every frame of the camera image (RAW, or NV12 after ISP) into Linux memory space?

Q2) If I want to send compressed camera images over the network from Linux,

how can I do this? Is there a manual for this process?

(Not a video stream; just compressed images (JPEG?), but I hope to send every frame.)

Thanks for checking!

  • Hi,

    Q1) It looks like, in the vision_apps camera demo, the camera image does not reach Linux memory space.

    The camera frames are captured into dma-buf framework-based buffers, which reside in shared DDR memory.

    This means they are accessible to the A72 running Linux as well.

    In TIOVX, you can dequeue a buffer from the application using the Dequeue API in this application, and then access the buffer from the application.
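
    As a side note (an illustrative sketch, not TI SDK code): once a frame buffer is mapped into Linux user space, you need to know its plane layout to interpret the pixels. Assuming a tightly packed NV12 frame (stride == width; real capture buffers may pad each line, so check the stride reported by the driver), the layout can be computed as:

```python
def nv12_layout(width: int, height: int) -> dict:
    """Compute plane offsets/sizes for a tightly packed NV12 frame.

    NV12 stores a full-resolution Y (luma) plane followed by an
    interleaved UV (chroma) plane subsampled 2x2, i.e. 1.5 bytes
    per pixel overall. Assumes stride == width (no line padding).
    """
    y_size = width * height          # 1 byte per luma sample
    uv_size = (width * height) // 2  # interleaved U/V, quarter resolution each
    return {
        "y_offset": 0,
        "y_size": y_size,
        "uv_offset": y_size,         # chroma plane starts right after luma
        "uv_size": uv_size,
        "total": y_size + uv_size,
    }

# Example: one 1920x1080 NV12 frame occupies 3110400 bytes (~3 MB)
print(nv12_layout(1920, 1080)["total"])
```

    This is only a helper for interpreting a mapped buffer; the actual stride and plane addresses should always be taken from the image attributes reported by the framework.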

    Q2) If I want to send compressed camera images over the network from Linux,

    how can I do this? Is there a manual for this process?

    (Not a video stream; just compressed images (JPEG?), but I hope to send every frame.)

    For the second question, do you mean to encode video here, or to encode each frame individually?

    Regards,

    Nikhil

  • Hi Nikhil,

    Thank you for your answer! It's very helpful for me.

    Q1) That's clear now.
           One more thing: can I access the image at every processing step, e.g., the RAW image and the post-ISP image?

           Is there any documentation or source code about shared-memory handling for the camera image?

    Q2) I want to encode each frame and send it to another device, not a video stream.

           I hope to use a HW engine (or a DSP) for the encoding, not the A72. Is this possible?

    Regards,

    DH

  • Hi,

    Is there any documentation or source code about shared-memory handling for the camera image?

    You can refer to the single-camera application present in the SDK, in the vision_apps/apps/basic_demos/app_single_cam folder.

    I hope to use a HW engine (or a DSP) for the encoding, not the A72. Is this possible?

    For J721S2, we only support the codec drivers on the A72 (i.e., the Linux driver). We currently do not support RTOS-based drivers for the codec.

    Regards,

    Nikhil

  • Hi Nikhil,

    Thank you again.

    Q2) I mean I don't want to use A72 processing power.

          From your answer, it sounds like I can use a codec or HW accelerator (not for video, but for compressing frame images).

          Can you explain this function in more detail?

    Regards,

    DH

  • Hi DH, 

    The CODEC is a separate IP that enables hardware encoding and decoding. The CODEC driver stack runs on the A72 but offloads the actual processing to the dedicated hardware accelerator. In Linux we follow the Video4Linux2 (V4L2) framework to manage the buffers that get processed by the accelerator. However, at the application layer we use GStreamer, which is how you will run encode and decode processes from the command line.

    I see you are not trying to encode video data via h264/h265 compression, but rather single images with JPEG compression. In that case, there is no hardware accelerator for JPEG encoding/decoding. You can still use GStreamer, but it will use software encoding for the compression. The pipeline below creates a JPEG image from a raw RGB input:

    • gst-launch-1.0 filesrc location=your_input_image.raw ! rawvideoparse width=640 height=480 format=rgb ! jpegenc ! filesink location=output_image.jpg

    Please refer to GStreamer documentation here: https://gstreamer.freedesktop.org/documentation/tools/gst-launch.html?gi-language=c
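
    Once you have a JPEG per frame, one simple way to get it to the PC is a plain TCP socket. The sketch below is hypothetical (plain Python sockets, not something the SDK provides), and the 4-byte big-endian length prefix is just an assumed framing convention so the receiver can find frame boundaries:

```python
import socket
import struct

def send_frame(sock: socket.socket, jpeg_bytes: bytes) -> None:
    """Send one compressed frame, prefixed with its 4-byte big-endian
    length so the receiver knows where each frame ends."""
    sock.sendall(struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, looping because recv() may return less."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    """Read exactly one length-prefixed frame from the socket."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)
```

    On the EVM side you would call send_frame() once per encoded frame; the PC side loops on recv_frame() and writes each payload to a .jpg file.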

  • Thank you TI experts!

  • No problem.

    Regards,
    Sarabesh S.