
TDA4VM: Using DSS for image blending, and then outputting via CSI-TX.

Part Number: TDA4VM

Hi TI Experts,

To implement our application, we came up with a solution: using DSS for image blending, and then outputting via CSI-TX.
We are facing some problems and need help.

Application:

We are using the TDA4VM to develop an Around View Monitor (AVM) system.
The UI, developed with Qt, is overlaid on the AVM image.
The UI image and the AVM image need to be blended, and the
blended image is output via CSI-TX in YUV422 format.

SDK Version: 8.4

Our solution is as follows:
use the overlay feature of DSS for blending, then write the result back to memory to feed the data to CSI-TX.
We believe this solution is within the TDA4VM's hardware capability, but there are some problems on the software side.

For the software implementation, we have come up with three plans.

Plan A: Sharing Display
    One pipe for the AVM image (over the RTOS display stack) and one pipe for the UI (over the Linux display stack); after Overlay Manager blending, route the blended image to the writeback pipe
    and then feed it to CSI-TX.

  

    Problem:
        The Sharing Display feature was removed in the SDK 7.1 release,
        so we can't use it directly.
        Why was the Sharing Display feature removed?
        Is it possible to port the Sharing Display feature from an older SDK version to 8.4? How difficult would it be?

Plan B: Using the Linux Display Stack
    Both the AVM image and the UI image go over the Linux display stack.
    
    Problem 1:
        The DSS writeback feature has not been supported in Linux since the SDK 8.0 release;
        we would need to port it back.
    Problem 2:
        Assuming we successfully port Linux WB, how do we feed the data to CSI-TX?
        We learned from another thread that CSI-TX is not supported on Linux.
        How can we feed the Linux WB data to the TIOVX CSI-TX node with zero copies?
    Problem 3:
        How do we feed the AVM image from TIOVX to the Linux display stack? Can that be done zero-copy?

Plan C: Using the RTOS Display Stack
    Both the AVM image and the UI image go over the RTOS display stack.

    Problem 1:
        We learned from another thread that the TIOVX Display M2M node does not support blending.
        How can we make it support blending?
    Problem 2:
        Assuming we successfully modify the TIOVX Display M2M node to support blending,
        can Qt work on the RTOS display stack?

  • Hi,

    Well, yes, the WB pipeline on Linux is no longer supported, but I think it can be easily ported to the latest release, so can you please try porting it?

    Yes, since your use case requires blending of two pipelines, the TIOVX node cannot be used, as it supports only a single pipeline.

    Regards,

    Brijesh

  • Hi Brijesh

    Thank you.

    I will try porting it. I think I should first run the Linux DSS WB example in SDK 7.3.

    Are there any sample programs or documents about Linux DSS WB?

    Do you think "Plan B: Using the Linux Display Stack" is feasible?

    BR

    Tiancheng

  • Hi,

    I think you only need DSS WB and CSITX in single-channel mode. Depending on how the DSS WB path accepts buffers, I think it should be possible.

    If it can accept an externally allocated buffer, such as one from the dma-buf framework, it would be easier, because then we can allocate the buffers in the OpenVX framework and pass them on to the DSS WB path.

    If not, then you may need to do a memory copy, or maybe you can use the SwapBuffer API in OpenVX to swap buffer pointers and then submit the buffer to the CSITX node.

     

    Regards,

    Brijesh

  • Hi Brijesh

    1. Is there a Linux DSS WB example program?

    If there is an example program, it will be very helpful to me.
    I found an example program in another thread:
    git.ti.com/.../test-v4l2-m2m.c
    I can't run it correctly; can it work on the TDA4VM?


    2. I read the source code of WB in Linux SDK 7.3.
    WB M2M is implemented as a V4L2 mem2mem device interface.
    There are two I/O streams: output (sending frames from memory to the hardware) and capture (receiving the processed frames from the hardware into memory).
    We expect the display data to come in from the DRM device interface; however, in the current WB implementation, the display data has to be supplied through the output stream.
    I think that doesn't quite meet our requirement.
    Please correct me if my understanding is wrong.

    Merry Christmas to you in advance.

    BRs

    Tiancheng

  • Hi Tiancheng,

    Unfortunately, no, there is no example available to check the WB pipeline.

    Regarding the second question: if you have an input video pipeline connected to the display, then the capture streaming interface will be used. But if it is not actually connected to the display, then the output streaming interface will be used.

    Regards,

    Brijesh