Netra: Video path between DSP and HDVPSS

Hi,

I am using Integra/Netra EVM.

I do have access to TI EZSDK. The requirement is as follows.

- Capture 16-bit YUV data (1080i resolution) laid out as a 2x2 mosaic of D1 video images (the active 1440x960 region starting at (0,0), with the rest of the pixels blank).

- Demux this data into 4 separate D1 channels in memory, extract the unique metadata (e.g. channel number and a few other fields) from the YUV data of each of the 4 channels, and pass the 4 channels' data to the DSP for analytics.

Can someone please give me a top-level idea of how this can be achieved?

My basic intention is to know

1) how the DSP will communicate with HDVPSS to get frames

2) where I should implement the logic for demuxing all 4 channels, extracting metadata, etc.

Any suggestion on this will be highly appreciated.

Thanks in advance,

Sweta

 

  • Hi Sweta,

    I am not able to fully understand your question. Let me try to answer based on what I understood.

    VIP will capture the 1080i frame through the VIP port of the C6A816x. This frame will contain the 4 D1 video images. Now you want to separate the 4 D1 images into 4 different buffers using the DSP, right? Please correct me if I am wrong. The DSP can communicate with HDVPSS using the OpenMAX APIs. For that you will have to write a DSP OpenMAX component and create a link between the VFCC and the DSP OpenMAX component. Currently we don't have any OpenMAX component for the DSP, so you will have to create one taking the VFCC, VFPC or VFDC components as reference.
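
    Just to illustrate the idea, a bare-bones IL client that links two components with standard OMX IL 1.1 tunnelling would look roughly like the sketch below. The VFCC component name and the port indices are assumptions, and the DSP component name ("OMX.TI.DSP.VIDPROC") is purely hypothetical since no such component exists yet; whether the EZSDK components accept standard tunnelling in your version also needs to be checked.

    #include <OMX_Core.h>
    #include <OMX_Component.h>

    /* Minimal event handler; a real client must track state transitions here. */
    static OMX_ERRORTYPE onEvent(OMX_HANDLETYPE hComp, OMX_PTR pApp,
                                 OMX_EVENTTYPE eEvent, OMX_U32 nData1,
                                 OMX_U32 nData2, OMX_PTR pEventData)
    {
        return OMX_ErrorNone;
    }

    static OMX_CALLBACKTYPE cb = { onEvent, NULL, NULL };

    int link_vfcc_to_dsp(void)
    {
        OMX_HANDLETYPE hVfcc = NULL, hDsp = NULL;

        OMX_Init();

        /* Assumed capture component name; check the component registry
           of your EZSDK version. */
        OMX_GetHandle(&hVfcc, (OMX_STRING)"OMX.TI.VPSSM3.VFCC", NULL, &cb);

        /* Hypothetical DSP component -- this is the part that does not
           exist yet and would have to be written. */
        OMX_GetHandle(&hDsp, (OMX_STRING)"OMX.TI.DSP.VIDPROC", NULL, &cb);

        /* Standard IL tunnel: VFCC output port -> DSP input port.
           The port indices (1 and 0) are placeholders. */
        OMX_SetupTunnel(hVfcc, 1, hDsp, 0);

        /* ... configure the ports, move both components to Executing ... */
        return 0;
    }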

     

    Regards,

    Hardik Shah

     

  • Hi Sweta,

    I understood your question better now after reading it multiple times. I think you want to capture 4 multiplexed D1 channels and then display them as a 2x2 mosaic at 1080i60, with a timestamp and frame counter embedded in each frame. Please let me know if I have still not understood your question.

    Demultiplexing is done by the hardware itself. The hardware keeps the 4 multiplexed D1 channels in 4 different buffers. Handles to all the buffers are available to the application through the VFCC component of OpenMAX.

    For blending the frame counter and frame timestamp, you need to do the blending using the DSP. The communication between the VFCC and the DSP can be done by writing an OpenMAX component for the DSP and linking the VFCC and DSP OpenMAX components. Currently there is no OpenMAX component available for the DSP.

     

    Regards,

    Hardik Shah

  • Hi Hardik,

    Sorry, it is still not exactly the way you understood it.

    Let me give you more details to better understand the issue.

    There is another processor which will give me the 2x2 video at 1080i60. So it is not multiplexed data; it is 16-bit YUV422 data, which I need to de-mux into 4 separate D1 channels using HDVPSS/DSP.

    If I am not wrong, then my understanding is as below.

    - I need to write an OpenMAX component to separate these 4 channels, and then I need to add a frame counter, channel ID, etc. in the first few bytes of the YUV data (a possible layout is sketched after these points).

    - This OpenMAX component will run on HDVPSS. I'll implement the OpenMAX APIs on the DSP to get these 4 channels' data; the DSP application will extract the channel ID and frame counter and do further processing.
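
    To make the metadata part concrete, what I have in mind is a small header like the one below, written over the first few pixels of each D1 image on the sending side and read back on the DSP (the struct layout, field names and magic value are just my own placeholders, not anything from the SDK):

    #include <stdint.h>
    #include <string.h>

    /* Placeholder metadata layout, overlaid on the first bytes of the
       YUV422 data of each D1 channel. */
    typedef struct {
        uint32_t magic;         /* e.g. 0x4D455441 ("META"), for sanity checks */
        uint16_t channel_id;    /* 0..3 */
        uint16_t reserved;
        uint32_t frame_counter; /* incremented per captured frame */
    } ChanMeta;

    /* Sender side: stamp the metadata into the start of a D1 buffer. */
    static void write_meta(uint8_t *d1_buf, uint16_t ch, uint32_t counter)
    {
        ChanMeta m = { 0x4D455441u, ch, 0, counter };
        memcpy(d1_buf, &m, sizeof(m));   /* overwrites the first few pixels */
    }

    /* DSP side: read it back before running analytics on the frame. */
    static int read_meta(const uint8_t *d1_buf, ChanMeta *out)
    {
        memcpy(out, d1_buf, sizeof(*out));
        return (out->magic == 0x4D455441u) ? 0 : -1;
    }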

    Questions:

    Is there any way other than OpenMAX by which I can achieve the above functionality?

    Do you have any idea whether TI is planning to provide OpenMAX component APIs for the DSP in the near future? Can you please give an approximate timeline by which the DSP OpenMAX APIs will be available?

    Please let me know if I have not explained it fully.

    Thanks,
    Sweta

  • Hi Sweta,

    Sorry again, but I am not fully able to comprehend your use case. Let me re-phrase your question to make things clear to me.

    1. You will receive the 1080i60 frame through VIP in one single buffer, right? That buffer will contain 4 D1 frames at different offsets within the one big buffer, right?

    2. It is not clear to me whether you want to blend the various data with the video data, or append the data after the video data.

     

    If you want to append the data, then you will have to separate out the 4 D1 frames into 4 different buffers using the EDMA. If you want to blend data like a timestamp and counter, then you don't need to separate out the 4 D1 frames.

    In both of the above cases you will have to write an OpenMAX component to do the blending or appending of the data. The OpenMAX component will run on the DSP and not on HDVPSS. TI has a plan to provide some OpenMAX components for blending on the DSP, but I am not sure of the timeline.
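
    For reference, pulling one D1 quadrant out of the captured mosaic is just a 2D strided copy. The plain C loop below shows the addressing (720x480 D1, 16-bit YUV422, and I have assumed a capture pitch of 1920 pixels); on the device you would normally program the same geometry into an EDMA 2D transfer instead of using the CPU.

    #include <stdint.h>
    #include <string.h>

    #define D1_WIDTH   720            /* pixels per D1 line                */
    #define D1_HEIGHT  480            /* lines per D1 frame                */
    #define BPP        2              /* 16-bit YUV422: 2 bytes per pixel  */
    #define CAP_PITCH  (1920 * BPP)   /* assumed pitch of the 1080i buffer */
    #define D1_PITCH   (D1_WIDTH * BPP)

    /* Copy D1 quadrant 'ch' (0..3, laid out 2x2 from (0,0)) out of the
       captured mosaic into its own buffer. */
    static void extract_d1(const uint8_t *capture, uint8_t *d1_buf, int ch)
    {
        int col = (ch & 1) * D1_WIDTH;    /* 0 or 720 */
        int row = (ch >> 1) * D1_HEIGHT;  /* 0 or 480 */
        const uint8_t *src = capture + row * CAP_PITCH + col * BPP;
        int line;

        for (line = 0; line < D1_HEIGHT; line++) {
            memcpy(d1_buf + line * D1_PITCH, src + line * CAP_PITCH, D1_PITCH);
        }
    }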

     

    Regards,

    Hardik Shah

     

  • Hi Hardik,

    Thanks, you understood the first point correctly.

    For the second point, I'll append the data rather than blending it, because the purpose is not to display/overlay this data but to process it on the DSP.

    It is a kind of metadata that is unique to each of the 4 channels.


    One question that arises is: the existing SDK comes with OpenMAX components for the ARM only. So I need to write a component to capture the 16-bit YUV data for the DSP too. Am I right?

    As per my understanding, the following development will be an easier approach than writing a component to extract the 4 D1s.

    - Write an OpenMAX component for the DSP to capture the 16-bit video.

    As I understand it, I can have the captured buffer straight away in the DSP's DDR using wrapper OpenMAX calls for queue/dequeue. Right?

    I should be able to dequeue this buffer, separate the channels out in the DSP's DDR using EDMA, extract the metadata and perform further image processing.

    In other words, I may not need to write an OpenMAX component for separating the 4 D1s if I take care of this on the DSP, simply by capturing and getting the entire frame (containing the 4 D1s, with the metadata appended in the video pixels themselves). Please correct me if my understanding or approach is wrong/inefficient.
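
    To be concrete about the EDMA step, each D1 extraction would be one 2D (AB-synchronized) transfer with the geometry below. The struct only mirrors the relevant EDMA3 PaRAM fields for illustration; the actual transfer would be submitted through the EDMA3 LLD/CSL on the DSP, and the 1920-pixel capture pitch is my assumption.

    #include <stdint.h>

    /* Illustrative only: mirrors the EDMA3 PaRAM fields that matter for a
       2D (AB-synchronized) copy of one D1 quadrant. */
    typedef struct {
        uint32_t src;      /* start of the quadrant inside the captured frame */
        uint32_t dst;      /* destination D1 buffer in the DSP's DDR          */
        uint16_t acnt;     /* bytes per line                                  */
        uint16_t bcnt;     /* number of lines                                 */
        int16_t  srcbidx;  /* source line pitch                               */
        int16_t  dstbidx;  /* destination line pitch                          */
        uint16_t ccnt;     /* one 2D block per trigger                        */
    } Edma2DXfer;

    static Edma2DXfer d1_xfer(uint32_t cap_addr, uint32_t dst_addr, int ch)
    {
        const uint16_t bpp       = 2;           /* 16-bit YUV422         */
        const uint16_t cap_pitch = 1920 * bpp;  /* assumed capture pitch */
        const uint16_t d1_pitch  = 720 * bpp;
        Edma2DXfer x;

        x.src     = cap_addr + (ch >> 1) * 480 * cap_pitch + (ch & 1) * d1_pitch;
        x.dst     = dst_addr;
        x.acnt    = d1_pitch;    /* 1440 bytes per line                 */
        x.bcnt    = 480;         /* 480 lines per D1                    */
        x.srcbidx = cap_pitch;   /* skip a full capture line at source  */
        x.dstbidx = d1_pitch;    /* lines are packed in the destination */
        x.ccnt    = 1;
        return x;
    }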

    Thanks again for your inputs.

    Best Regards,
    Sweta

    If the video is already multiplexed and is available on a single video port, you could use the Linux VPSS V4L driver to capture the video. The video buffer after capture will be in DDR3. The DSP Codec Engine framework can then be used to process this video.
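
    Very roughly, the capture side then follows the usual V4L2 mmap streaming pattern, along the lines of the sketch below (device setup and error handling are assumed to be done elsewhere; the TI capture driver may have its own quirks):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Dequeue one captured frame, hand it off for DSP processing, then
       queue the buffer back.  Assumes the device is already opened and
       streaming (VIDIOC_S_FMT / VIDIOC_REQBUFS / VIDIOC_STREAMON done),
       with the buffers mmap'ed into mmap_bufs[]. */
    static void capture_one_frame(int fd, void *mmap_bufs[])
    {
        struct v4l2_buffer buf;

        memset(&buf, 0, sizeof(buf));
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;

        ioctl(fd, VIDIOC_DQBUF, &buf);   /* blocks until a frame is ready */

        /* mmap_bufs[buf.index] now holds the full 2x2 mosaic in DDR3;
           this is the buffer you would hand to the DSP for processing. */

        ioctl(fd, VIDIOC_QBUF, &buf);    /* give the buffer back to the driver */
    }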

    Another way to do this would be to write a Linux OpenMAX client application [a demo is available in the latest EZSDK]. In that application, subscribe to the OpenMAX VFCC callbacks. The video buffer pointer will be available in this callback for DSP processing [since TI uses non-standard tunnelling]. You can use the DSP Codec Engine here to process the buffer as well. After processing, you could queue the buffer for display in the OpenMAX VFDC component.
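
    In that model the interesting part of the IL client is just the FillBufferDone callback registered against the VFCC component, roughly like this (a sketch only -- the actual EZSDK demo has its own buffer management, and process_on_dsp() is a stand-in for whatever Codec Engine call you make):

    #include <OMX_Core.h>

    /* Stand-in for the DSP-side work (e.g. a Codec Engine process() call). */
    static void process_on_dsp(OMX_U8 *data, OMX_U32 len)
    {
        (void)data; (void)len;
    }

    /* Called by the VFCC component each time a capture buffer has been
       filled.  pBuffer->pBuffer points into shared DDR3, so the DSP can
       work on the frame in place before the buffer is recycled. */
    static OMX_ERRORTYPE vfcc_FillBufferDone(OMX_HANDLETYPE hComponent,
                                             OMX_PTR pAppData,
                                             OMX_BUFFERHEADERTYPE *pBuffer)
    {
        process_on_dsp(pBuffer->pBuffer, pBuffer->nFilledLen);

        /* Either pass the same buffer on to the display component, e.g.
           OMX_EmptyThisBuffer(hVfdc, pBuffer), or simply return it to the
           capture component for reuse, as done here. */
        OMX_FillThisBuffer(hComponent, pBuffer);
        return OMX_ErrorNone;
    }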

    It seems to me the OpenMAX components are more fully featured than the V4L drivers. AFAIK, VFCC, VFDC and VFPC are currently available in Linux only, and DSP OpenMAX support will be for audio encode/decode.

     

    RV

  • Hi RV,

    Thanks for the important update about the Linux-based capture driver. This is very good news.

    Well, I understand that we can capture the 16-bit YUV frame in Linux. I would like to explore this in detail to figure out the best possible way to fulfill my requirement.

    I am not very clear about the DSP Codec Engine framework. Is this Codec Engine concept the same as on DaVinci processors like the DM6446/67? Does that mean I need to use the VISA APIs for passing data from the ARM to the DSP?

    Please correct me if I am wrong.

    My intention behind passing the frames to the DSP is to perform analytics on all 4 D1 channels (extracted out of the 16-bit multiplexed data) and to get the analytics metadata back to the ARM.

    Could you please point me to any documentation or sample example which shows the data-sharing path between the ARM and the DSP using Codec Engine?

    Do you refer to ezsdk 5_01_01_80? Or ezsdk_5_11, available on the extranet?

    Please let me know the latest version of sdk.

    Thanks once again for your help.

     

    Best Regards,
    Sweta

    I have moved on from ezsdk_5_11 to ezsdk 5_01_01_80. The Codec Engine concept is mostly the same as on the DM6467, except the Codec Server runs on SYS/BIOS and IPC uses SYS/LINK instead of DSP/LINK. Examples are documented here. For a custom codec, I am using an IUNIVERSAL codec with the IDMA interface in ezsdk 5_01_01_80.
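
    For the IUNIVERSAL path, the ARM-side call sequence is the standard Codec Engine one, roughly as below (the engine and codec names are placeholders that must match your codec-server .cfg files, the buffers must come from CMEM/shared memory, and the params/args structures would be extended for a real algorithm):

    #include <xdc/std.h>
    #include <ti/sdo/ce/CERuntime.h>
    #include <ti/sdo/ce/Engine.h>
    #include <ti/sdo/ce/universal/universal.h>

    /* Placeholder names: must match the codec-server configuration. */
    static String engineName = "codecServer";
    static String algName    = "myalg_universal";

    int run_on_dsp(XDAS_Int8 *inBuf, XDAS_Int32 inLen,
                   XDAS_Int8 *outBuf, XDAS_Int32 outLen)
    {
        Engine_Error      ec;
        Engine_Handle     engine;
        UNIVERSAL_Handle  alg;
        UNIVERSAL_Params  params  = UNIVERSAL_PARAMS;        /* defaults */
        UNIVERSAL_InArgs  inArgs  = { sizeof(UNIVERSAL_InArgs) };
        UNIVERSAL_OutArgs outArgs = { sizeof(UNIVERSAL_OutArgs) };
        XDM1_BufDesc      inBufs = {0}, outBufs = {0}, inOutBufs = {0};

        CERuntime_init();
        engine = Engine_open(engineName, NULL, &ec);
        alg    = UNIVERSAL_create(engine, algName, &params);

        inBufs.numBufs           = 1;
        inBufs.descs[0].buf      = inBuf;      /* e.g. one D1 channel    */
        inBufs.descs[0].bufSize  = inLen;
        outBufs.numBufs          = 1;
        outBufs.descs[0].buf     = outBuf;     /* analytics results back */
        outBufs.descs[0].bufSize = outLen;
        inOutBufs.numBufs        = 0;

        /* Runs the algorithm's process() on the DSP; the buffers must be
           contiguous (CMEM) so both the ARM and the DSP can see them. */
        UNIVERSAL_process(alg, &inBufs, &outBufs, &inOutBufs, &inArgs, &outArgs);

        UNIVERSAL_delete(alg);
        Engine_close(engine);
        return 0;
    }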

    For getting DMA working properly on Netra, please see this post.

     

    RV


  • Hi RV,

    I am sorry, I have some confusion for below.

    [RV]: Another way to do this would be to write a Linux OpenMAX client application [a demo is available in the latest EZSDK]. In that application, subscribe to the OpenMAX VFCC callbacks. The video buffer pointer will be available in this callback for DSP processing [since TI uses non-standard tunnelling]. You can use the DSP Codec Engine here to process the buffer as well. After processing, you could queue the buffer for display in the OpenMAX VFDC component.

    [Sweta]: Do you mean that I can still have the OpenMAX component running on the ARM and can straight away receive the captured data in the DSP's memory (through a tunnel between the ARM and the DSP's memory pointer)? What I interpret from this is that the callback for a captured frame will put the captured data in the DSP's DDR.

    If this is true then I don't need to use Codec Engine; the DSP can take care of the rest of the things.

    Please correct me if I am wrong.

    Thanks,
    Sweta

     

     

    OpenMAX IL 1.1 supports 3 modes of buffer communication:

    1. Non-Tunneled – All buffer passing between components is handled by the application

    2. Tunneled – Components pass buffers to each other without application involvement

    3. Proprietary – Components pass buffers to each other using a non-standard, proprietary method

    TI is using the non-tunneled mode in the latest EZSDK, so all buffer passing between the components (DSP, VFCC, VFDC) is handled by the OpenMAX IL client application on Linux. So there is no tunnelling between the VFCC and the DSP.

    But the VFCC buffer pointers are available in shared DDR3 memory. The ARM OpenMAX Linux application (AFAIK, not the DSP) will get the notification from the VFCC that a buffer is full. You may have to run a task on the DSP that listens to a message queue. From ARM Linux you may have to write messages into this DSP task's message queue using SYS/LINK and IPC, letting the DSP know which buffers to process. The DSP will then have to send a message into another message queue on ARM Linux to indicate that processing is complete.
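
    The DSP side of that scheme is essentially a MessageQ loop; a minimal sketch with the SYS/BIOS IPC MessageQ module would be something like the following (the queue names, the message layout and the reply handling are all placeholders):

    #include <xdc/std.h>
    #include <ti/ipc/MessageQ.h>

    /* Placeholder message carrying one captured frame to process.  The
       MessageQ header must always be the first field. */
    typedef struct {
        MessageQ_MsgHeader header;
        UInt32 bufPhysAddr;   /* physical address of the captured frame */
        UInt32 bufLen;
    } FrameMsg;

    /* DSP task: block on the queue, process each frame the ARM announces,
       then reply so Linux knows the buffer can be recycled. */
    Void dspFrameTask(UArg arg0, UArg arg1)
    {
        MessageQ_Handle  inQ = MessageQ_create("DSP_FRAME_Q", NULL);
        MessageQ_QueueId replyQ;
        FrameMsg        *msg;

        MessageQ_open("ARM_REPLY_Q", &replyQ);   /* created by the Linux side */

        while (TRUE) {
            MessageQ_get(inQ, (MessageQ_Msg *)&msg, MessageQ_FOREVER);

            /* ... run the analytics on the frame at msg->bufPhysAddr ... */

            MessageQ_put(replyQ, (MessageQ_Msg)msg);   /* signal completion */
        }
    }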

    Codec Engine does this out of the box. You can also write your own code on top of SYS/LINK to do this.

     

    RV