
Understanding ipcFramesIn/Out on A8 and SwOSD flow

Hi TI gurus, as I'm quite a newbie to the McFW (IPNC RDK v3.0), I'm mainly doing some trial and error on 2 main flows:

1. Video Analytic mode that uses both the DSP and SwOSD (refer to multich_tristream_smartAnalytics.c).

2. Capture Display flow as I'm very interested in the YUV-->A8 feature (refer to multich_Stream_CaptureDisplay.c with YUV_FRAMES_TO_A8).

Regarding 1 - I'm trying to understand how the display eventually gets the SwOSD drawing. It's obvious from the code (and as the name states) that the SwOSD is simply a means to draw directly on the video buffer. What I don't understand is the actual flow: both the SwOSD and the Display link have a single ancestor, which is the display dup (merely a name). But if the dup has propagated the buffer pointers to all of its outputs, isn't the frame the SwOSD is working on one that the Display link has already queued into the display driver and hence probably displayed? I'm also still a bit confused about putFrame and emptyFrame. I guess link A sends a successor link B a frame, which is then locked until link B calls empty back to its predecessor (link A). If my understanding is correct, how does one know how the flow is synced? I.e., if I add some logic at the scaler/DSP/FD link, how do I know which frame the OSD drawing refers to? After all, the entire link system is somewhat asynchronous, isn't it?

Regarding 2 - I understand that due to ISR issues the implementation of A8 frames in/out isn't like the BIOS6 ones, but what I don't understand (and you can see it in the McFW PDF CaptureDisplay graph) is how the YUV frames that get to the A8 (M3-->A8) are then sent back from the A8 to the M3. From the graph creation phase it's obvious there's no link from HostIn to HostOut (HostIn->next = NULL, hostOut->prev = NULL), so where are the frames pulled from, if they are pulled at all...?

The more I delve into the McFW links, the more I feel that although the links concept is generic, each link has its own "special" features and it's hard to make real changes to a given flow... maybe if I better understand the queue and put/empty mechanism I can make changes without making hazardous mistakes.

One last thought: is it possible to send frames from the Host to the M3/C67 without using the Capture link, and if so... what triggers a new frame event?

Thanks in advance,

Roei

  • There is a detailed link training under /dvr_rdk/docs/Trainings/DVR_RDK_McFW_Link_API_Training.pdf. It is part of the DVR RDK release. Even if you don't use the DVR RDK, please ask the FAE supporting you to provide this document to cover the basics.

    Explanation of how SWOSD works is detailed in this thread: http://e2e.ti.com/support/dsp/davinci_digital_media_processors/f/717/p/240944/843247.aspx#843247

    To summarize, any two connected links _have_ to be on the same core (Video M3 / VPSS M3 / C674 / A8). If two links have to interact (data flow) across cores, ipc links have to be used to connect them.

    There are two flavours of ipc links:

       1. ipcFrames (to exchange 2D video frames)

       2. ipcBits (to exchange 1D buffers, like H.264 encoded data)

    ipcFrames supports an additional feature compared to all other links. All other links have only a prev and a next link; ipcFrames has the additional concept of a processLink. The processLink is explained in the post above and basically acts like a T junction: the frame is sent to the SWOSD for overlay, and once it is received back from the C674x, instead of being freed it is forwarded to the next link (display in this case). So it is assured that the overlay happens before display.
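
    To make the T-junction idea concrete, here is a minimal wiring sketch. It assumes DVR RDK-style create parameters (a baseCreateParams with processLink/notifyProcessLink fields), and all the link ID variables (dupLinkId, swOsdAlgLinkId, displayLinkId, ipcFramesOutLinkId) are placeholders; check the exact struct and field names against ipcLink.h and the multich usecase files in your release.

        /* Sketch only: needs the RDK link headers and <string.h>. The struct and
         * field names (baseCreateParams, processLink, notifyProcessLink) follow
         * the DVR RDK ipcFrames link API and may differ in your RDK version; all
         * link ID variables are placeholders.                                    */
        IpcFramesOutLinkRTOS_CreateParams ipcFramesOutPrm;

        memset(&ipcFramesOutPrm, 0, sizeof(ipcFramesOutPrm));

        /* Normal chain: input comes from the dup, output goes on to display.     */
        ipcFramesOutPrm.baseCreateParams.inQueParams.prevLinkId = dupLinkId;
        ipcFramesOutPrm.baseCreateParams.outQueParams.nextLink  = displayLinkId;

        /* T junction: each frame is first handed to the overlay/alg link on the
         * C674x; only when that link returns the frame is it forwarded to
         * nextLink, which guarantees the overlay happens before display.         */
        ipcFramesOutPrm.baseCreateParams.processLink       = swOsdAlgLinkId;
        ipcFramesOutPrm.baseCreateParams.notifyProcessLink = TRUE;
        ipcFramesOutPrm.baseCreateParams.notifyNextLink    = TRUE;

        System_linkCreate(ipcFramesOutLinkId, &ipcFramesOutPrm,
                          sizeof(ipcFramesOutPrm));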

     

    The A8 frames in/out are achieved by having a data flow like:

    Capture -> ipcFramesOut(VPSS M3) -> ipcFramesIn (A8)

    ipcFramesOut sends full frames to ipcFramesIn via a linked list in shared memory (only the pointers are exchanged).

    Once the application is done processing the frames received from ipcFramesIn, it returns the frames to ipcFramesIn, which in turn puts the emptied frames back to ipcFramesOut (VPSS M3) via another linked list in shared memory.

     

    ipcFramesOut then frees the frames back to the Capture link.
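
    To make the round trip concrete, a rough sketch of the A8-side loop is below. The get/put calls and the frame-list type are placeholders for whatever the ipcFramesIn HLOS link / mcfw_api of your RDK version actually exposes; only the ownership pattern is the point.

        /* Sketch of the A8-side loop described above. ipcFramesIn_getFullFrames /
         * ipcFramesIn_putEmptyFrames, FrameBufList, gAppRunning and
         * processYuvFrame() are placeholders, not RDK API names.                 */
        static void a8FrameLoop(unsigned int ipcFramesInLinkId)
        {
            FrameBufList frameList;   /* placeholder list of frame descriptors */

            while (gAppRunning)
            {
                /* 1. Take ownership of the frame pointers that ipcFramesOut
                 *    (VPSS M3) queued in the shared-memory "full" list. Only
                 *    pointers cross the core boundary, never the pixel data.  */
                ipcFramesIn_getFullFrames(ipcFramesInLinkId, &frameList);

                /* 2. A8-side processing on each frame's buffer address.       */
                for (unsigned int i = 0; i < frameList.numFrames; i++)
                    processYuvFrame(frameList.frames[i]);

                /* 3. Return the same descriptors via the shared-memory "empty"
                 *    list; ipcFramesIn hands them back to ipcFramesOut, which
                 *    finally frees them to the Capture link.                   */
                ipcFramesIn_putEmptyFrames(ipcFramesInLinkId, &frameList);
            }
        }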

     

    Roei Avraham said:
    The more I delve into the McFW links, the more I feel that although the links concept is generic, each link has its own "special" features and it's hard to make real changes to a given flow... maybe if I better understand the queue and put/empty mechanism I can make changes without making hazardous mistakes.

    - I don't know enough about the IPNC RDK to comment on this, but links definitely are not hardcoded to a data flow. A large number of completely different data flows have been realized across DVR/NVR/IPNC using the same links. There are h/w limitations related to the chroma formats accepted by different IPs, so it is not possible to connect any link to any other link.

     

    Roei Avraham said:
    is it possible to send frames from the Host to the M3/C67 without using the Capture link, and if so... what triggers a new frame event?

    Yes, it is possible. You can have a data flow like:

    ipcFramesOut (A8) -> ipcFramesIn (Video M3) -> EncLink (Video M3)

    A new frame event will be triggered in EncLink when ipcFramesIn receives a new frame.

    ipc links can operate either in notify mode, in which case the remote core is interrupted when frames are queued, or in noNotify (polling) mode. Polling mode is preferred for higher channel density, to avoid overloading the remote core with interrupts.
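
    A rough wiring sketch for that host-to-M3 direction is below. The create-parameter structs and fields mimic the DVR RDK usecase style (inQueParams/outQueParams, notifyNextLink, noNotifyMode) and the link ID variables are placeholders; verify everything against the usecase files shipped with your RDK.

        /* Sketch only: needs the RDK link headers and <string.h>; struct and field
         * names approximate the DVR RDK style, link IDs are placeholders.          */
        IpcFramesOutLinkHLOS_CreateParams ipcFramesOutHostPrm;  /* runs on A8       */
        IpcFramesInLinkRTOS_CreateParams  ipcFramesInVideoPrm;  /* runs on Video M3 */
        EncLink_CreateParams              encPrm;               /* runs on Video M3 */

        memset(&ipcFramesOutHostPrm, 0, sizeof(ipcFramesOutHostPrm));
        memset(&ipcFramesInVideoPrm, 0, sizeof(ipcFramesInVideoPrm));
        memset(&encPrm,              0, sizeof(encPrm));

        /* A8 side: the application injects frames; ipcFramesOut queues the frame
         * pointers in shared memory for the Video M3 side.                        */
        ipcFramesOutHostPrm.baseCreateParams.outQueParams.nextLink = ipcFramesInVideoId;
        ipcFramesOutHostPrm.baseCreateParams.notifyNextLink        = TRUE;  /* notify mode */
        ipcFramesOutHostPrm.baseCreateParams.noNotifyMode          = FALSE; /* TRUE = poll */

        /* Video M3 side: ipcFramesIn picks the frames out of shared memory and
         * raises the new-data event toward the encoder link.                      */
        ipcFramesInVideoPrm.baseCreateParams.inQueParams.prevLinkId = ipcFramesOutHostId;
        ipcFramesInVideoPrm.baseCreateParams.outQueParams.nextLink  = encLinkId;

        encPrm.inQueParams.prevLinkId = ipcFramesInVideoId;
        /* ... codec create params and EncLink's own outQueParams.nextLink ...     */

        System_linkCreate(ipcFramesOutHostId, &ipcFramesOutHostPrm, sizeof(ipcFramesOutHostPrm));
        System_linkCreate(ipcFramesInVideoId, &ipcFramesInVideoPrm, sizeof(ipcFramesInVideoPrm));
        System_linkCreate(encLinkId,          &encPrm,              sizeof(encPrm));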

  • Hi Badri, thanks for the help. I read the thread you mentioned thoroughly (several times), but it seems this isn't the principle behind the SWOSD in the IPNC, as there's no usage of a processLink there. In the IPNC the SWOSD is just another link (notify-next model), and when I dug further into the code I found 2 main OSD rendering techniques:

    1. Using (probably) some kind of overlay image that is used to draw several bitmap windows, e.g. the logo. Most of the actual code is in a closed library, so I can't get too much out of that.

    2. Drawing directly on the "passing" frame, rendering primitives like lines and rectangles (with consideration for the YUV420/422 format).
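
    For reference, technique 2 boils down to plain pixel writes into the frame buffer; a minimal sketch for the luma plane of a YUV420SP (NV12) frame is below (drawRectY and its arguments are illustrative, not an RDK API). If the core doing the drawing caches the buffer, the cache has to be written back before the frame is forwarded.

        #include <stdint.h>

        /* Illustrative only, not an RDK API: draw a 1-pixel rectangle border
         * directly into the luma plane of a YUV420SP (NV12) frame. Chroma is
         * left untouched; writing 0x80 into the interleaved CbCr plane for
         * the same region would make the border neutral grey.                */
        static void drawRectY(uint8_t *yPlane, int pitch,
                              int x, int y, int w, int h, uint8_t level)
        {
            int i;

            for (i = 0; i < w; i++) {                  /* top and bottom edges */
                yPlane[ y          * pitch + (x + i)] = level;
                yPlane[(y + h - 1) * pitch + (x + i)] = level;
            }
            for (i = 0; i < h; i++) {                  /* left and right edges */
                yPlane[(y + i) * pitch +  x         ] = level;
                yPlane[(y + i) * pitch + (x + w - 1)] = level;
            }
        }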

    The problem I'm experiencing is about how the display is updated with this data: how can one be sure that the frame I used somewhere in the flow (like on the DSP, the most common example), and then sent some drawing commands to the SWOSD about, is actually the same image? Maybe while I was doing work on the DSP, the SWOSD was already fed with a new frame from capture. Should I "freeze" the SWOSD until it's synced with the DSP-processed frame, and how can this be done while ensuring the frame I rendered on is the one I see on the screen and not frame + 1 / frame + 2 / etc.?

    I hope I'm clear enough... if not, let me know and I'll try to further explain my problems.

    Regards,

    Roei

  • I will ask someone from the IPNC RDK team to respond.

    My understanding is:

      -- There is a SWOSD link that runs on M3VPSS and uses VCOP to blend a graphics plane with video (this is probably the (1) link you mentioned).

      -- There is a facedetect link which supports some primitives like drawRectangle and which also runs on M3VPSS (this is probably the (2) link you mentioned).

    Links that operate on the input buffer just forward the buffer to the nextLink after processing is complete (roughly along the lines of the sketch below).
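
    Roughly, an in-place link's new-data handling has the shape sketched below. The System_ and Utils_ calls follow the DVR RDK link API, while MyLinkObj and outFrameQue are simplified placeholders, so treat it as the shape of the mechanism rather than actual link code.

        /* Simplified shape of an in-place link's processing; MyLinkObj and
         * outFrameQue are placeholders, and the System_ / Utils_ calls follow
         * the DVR RDK link API but should be checked against your release.   */
        static void inPlaceLink_processData(MyLinkObj *pObj)
        {
            FVID2_FrameList frameList;

            /* 1. Pull the full frames from the previous link's output queue.  */
            System_getLinksFullFrames(pObj->createArgs.inQueParams.prevLinkId,
                                      pObj->createArgs.inQueParams.prevLinkQueId,
                                      &frameList);

            /* 2. Modify the buffers in place (e.g. draw the OSD primitives).  */

            /* 3. Queue the same frames on this link's output queue and notify
             *    the next link that new data is available.                    */
            Utils_bufPutFull(&pObj->outFrameQue, &frameList);
            System_sendLinkCmd(pObj->createArgs.outQueParams.nextLink,
                               SYSTEM_CMD_NEW_DATA);

            /* 4. The frames flow back only when the NEXT link empties them on
             *    this link; at that point this link returns them to its
             *    previous link with System_putLinksEmptyFrames().             */
        }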

    Is your requirement to do some sort of object detection on the DSP, then draw a rectangle around the identified object and then display the frame? Or do you just want to display some text like date/time on the captured video frame?

     

  • Hi Badri. Thanks again for the help, you've been extremely insightful. My requirement is closer to the FD/DSP case: process the frame and then draw a rect on that EXACT frame.

    Regarding (2) - what I still don't understand is the relation between duping a link, manipulating the frame further down one branch at some later point, and the time at which the Display link can use it and send it to the display driver itself (like in the Smart Analytics chain). Does the Dup link "lock" all the split frames until all the next links have "emptied" them?

    Cheers,

    Roei

  • I don't know the data flow of the usecase you mentioned, so I am not sure what you are referring to.

    DupLink works as below:

        Input Queue ---- DupLink ---- OutQue 0
                                 ---- OutQue 1
                                  ...
                                 ---- OutQue N

     

    On receiving a frame on the input queue, multiple copies of the input frame info (not the buffer, only the frame info) are made, the reference count is incremented, and the frame is output on each of the output queues.

    Only when all the output queues free the frame (reference count reaches 0) is the original frame freed back to the previous link.

    The output of the DupLink should be treated as a read-only buffer, as it is being used by other links.

    In the DVR RDK a _single_ output queue may modify the duped buffer, in which special case the frame is sent to this output queue only after all the other output queues have freed the original frame.
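
    The reference counting can be pictured with the sketch below; it is a conceptual model, not the actual dupLink.c code (which also handles locking, whole frame lists, and the single modify-capable output queue case).

        /* Conceptual model of DupLink's frame accounting (not the real dupLink.c). */
        typedef struct {
            FVID2_Frame *origFrame;  /* frame received from the previous link      */
            int          refCount;   /* one reference per output queue it went to  */
        } DupFrameInfo;

        /* Called when an output queue's next link empties one of the duped copies. */
        static void dupLink_dupedFrameFreed(DupFrameInfo *info,
                                            unsigned int prevLinkId,
                                            unsigned int prevLinkQueId)
        {
            info->refCount--;        /* the real code protects this with a lock     */

            if (info->refCount == 0)
            {
                /* Every output queue is done with its copy of the frame info, so
                 * only now is the original buffer released to the previous link.   */
                FVID2_FrameList frameList = {0};

                frameList.numFrames = 1;
                frameList.frames[0] = info->origFrame;
                System_putLinksEmptyFrames(prevLinkId, prevLinkQueId, &frameList);
            }
        }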

  • Hi Badri, thanks again for the answers, but most of the things you're mentioning are what I had already understood from the code itself. Is there any way someone from the IPNC kit team can support me?

    I'm still a bit vague on how the display intercepts the SWOSD frame and at what cycle.

    Also, does a frame hold any frame counter? I hoped FVID2_Frame::fid would take the honors, but it seems it doesn't. Should I rely only on the timeStamp?

    Again, is there a person at TI who is the go-to source for IPNC-related questions?

    Regards,

    Roei