PROCESSOR-SDK-DRA8X-TDA4X: [TDA4] OpenVX time synchronization needed

Part Number: PROCESSOR-SDK-DRA8X-TDA4X


We found no time-synchronization mechanism in the TI OpenVX framework or sample code.

What we need are:

1. A mechanism to synchronize frames coming from different source nodes, running at different frame rates, or arriving with different delays. For example, a synchronization node (or equivalent) like the sync link in the Links-and-Chains architecture, which can synchronize video frames.

2. A mechanism to synchronize video frames with data from CAN, radar, lidar, GPS, etc. Video processing nodes such as TIDL carry no timestamp, so how can such data be synchronized with a video frame?

3. A global time base (millisecond precision) shared between Linux (A72) and the RTOS cores (R5F, DSP). CAN, lidar, IMU, and similar data may arrive over Ethernet, CAN, or UART, so a common time base is needed to synchronize them.

Please advise on possible solutions, or let us know TI's plan if no solution exists yet.

Thanks.

Jerry

  • Jerry,

    We do have an approved requirement to add support for timestamps based on a global time, either auto-populated in the capture node or added to data objects.  These timestamps will propagate through the graph, and the user can access them from any data object that is dequeued from the graph parameters.

    Also, even today, if the capture node has multiple homogeneous cameras, it will internally synchronize them.  However, beyond this there will not be any sync nodes (as there were sync links), because the OpenVX architecture and semantics are fundamentally different from Links.  In OpenVX, there are no dropped frames within a graph: any input to a graph is carried through to completion of the graph.  In Links this was not the case, and different parts of the chain could run at different rates, since everything had to live in one chain.

    OpenVX is more flexible than Links from the application-control perspective.  There can be many graphs in the application, all existing simultaneously and running at different rates if desired.  If there are multiple sources of data running at different rates, the expectation is that they are processed through different graphs, and the application has full control over synchronization of these graphs at the graph boundaries.  If data needs to be dropped, the application does so as part of the synchronization logic it performs between graphs.  For example, camera capture at one rate can come through one graph, lidar through another graph in parallel, and GPS through yet another.  The output of each of these graphs is dequeued by the application, and if the application wants to merge these data into downstream processing together, it manages enqueuing them into perhaps another graph, dropping frames as needed.

    Since the synchronization/frame-drop policy is highly dependent on the application requirements, it is managed entirely by the application, and the nodes within a graph can be generic and agnostic to which graph they are in, not requiring any application-specific tweaks or customizations related to frame-drop policies, etc.
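
    To make the flow concrete, here is a rough sketch (not code from the SDK) of the application-level pattern described above, written against the standard OpenVX pipelining APIs (vxGraphParameterDequeueDoneRef / vxGraphParameterEnqueueReadyRef).  The timestamp helper app_timestamp_of() and the TS_TOLERANCE_US window are hypothetical placeholders for whatever timestamping the application maintains today, and recycling of the fusion graph's buffers back to the source graphs is omitted for brevity.

    ```c
    /* Sketch: application-level synchronization of two graphs running at
     * different rates (camera and lidar), merging matched pairs into a
     * downstream fusion graph.  Hypothetical helpers are marked as such. */
    #include <stdint.h>
    #include <VX/vx.h>
    #include <VX/vx_khr_pipelining.h>

    #define TS_TOLERANCE_US  (5000u)   /* hypothetical match window */

    /* Hypothetical helper: application-maintained timestamp of a reference
     * (e.g. recorded by the application when the buffer was produced). */
    extern uint64_t app_timestamp_of(vx_reference ref);

    void app_sync_loop(vx_graph cam_graph,    vx_uint32 cam_out_idx,
                       vx_graph lidar_graph,  vx_uint32 lidar_out_idx,
                       vx_graph fusion_graph, vx_uint32 fusion_cam_idx,
                       vx_uint32 fusion_lidar_idx)
    {
        vx_reference cam_ref = NULL, lidar_ref = NULL;
        vx_uint32 num = 0;

        while (1)
        {
            /* Block until the camera graph produces an output frame. */
            vxGraphParameterDequeueDoneRef(cam_graph, cam_out_idx,
                                           &cam_ref, 1, &num);

            /* Drain lidar outputs until one is close enough in time to the
             * camera frame.  Older lidar buffers are "dropped" by recycling
             * them straight back to the lidar graph.  The drop policy is
             * entirely up to the application. */
            for (;;)
            {
                vxGraphParameterDequeueDoneRef(lidar_graph, lidar_out_idx,
                                               &lidar_ref, 1, &num);
                if (app_timestamp_of(lidar_ref) + TS_TOLERANCE_US >=
                    app_timestamp_of(cam_ref))
                {
                    break;  /* close enough in time: use this lidar buffer */
                }
                vxGraphParameterEnqueueReadyRef(lidar_graph, lidar_out_idx,
                                                &lidar_ref, 1);
            }

            /* Matched pair: hand both buffers to the downstream fusion graph. */
            vxGraphParameterEnqueueReadyRef(fusion_graph, fusion_cam_idx,
                                            &cam_ref, 1);
            vxGraphParameterEnqueueReadyRef(fusion_graph, fusion_lidar_idx,
                                            &lidar_ref, 1);
        }
    }
    ```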

    For reference, we have a demo application which includes an example of this kind of multi-modal synchronization (albeit without timestamps yet): apps/ptk_demos/app_valet_parking

    Regards,

    Jesse

  • Hi Jesse,

    Please explain the following statement. Where and how is this synchronization done?

    ================================================================================================

    Also, even today, if the capture node has multiple homogeneous cameras, it will internally do synchronization among them.

    ================================================================================================

    Thanks.

    Jerry

  • Downstream processing from the capture node will not begin until the capture node has received the camera buffers for ALL cameras.  For example, if there are 4 cameras connected to a single capture node, the assumption is that they all have the same meta information (width, height) and are running at the same rate.  Downstream processing will not start until all 4 cameras have finished filling their buffers in DDR.  In this way, the 4 cameras are synchronized for downstream processing and are all sent through the graph together.
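
    As an illustration only (not SDK code), the sketch below shows what this looks like from the application side, assuming the capture graph's output parameter is a vx_object_array with one element per camera (assumed here to be vx_image for simplicity): a single dequeue returns the frames of all cameras together, already aligned by the capture node.

    ```c
    #include <VX/vx.h>
    #include <VX/vx_khr_pipelining.h>

    #define NUM_CAMERAS 4u

    /* Sketch: one dequeue from the capture graph's output parameter returns a
     * vx_object_array holding one frame per camera.  cap_graph/cap_out_idx are
     * assumed to have been set up with the capture node elsewhere. */
    void app_consume_synchronized_frames(vx_graph cap_graph, vx_uint32 cap_out_idx)
    {
        vx_object_array frames = NULL;
        vx_uint32 num = 0;

        /* Blocks until the capture node has filled the buffers of ALL cameras. */
        vxGraphParameterDequeueDoneRef(cap_graph, cap_out_idx,
                                       (vx_reference *)&frames, 1, &num);

        for (vx_uint32 cam = 0; cam < NUM_CAMERAS; cam++)
        {
            /* Element type assumed to be vx_image for this illustration. */
            vx_image img = (vx_image)vxGetObjectArrayItem(frames, cam);
            /* ... per-camera processing or enqueue into a downstream graph ... */
            vxReleaseImage(&img);
        }

        /* Return the object array to the capture graph for reuse. */
        vxGraphParameterEnqueueReadyRef(cap_graph, cap_out_idx,
                                        (vx_reference *)&frames, 1);
    }
    ```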

  • Hi Jesse,  

    =================================================================

    We do have an approved requirement to add support for timestamps using global time

    =================================================================

    Is the global time an OS-level time service, for example the gettimeofday() function?

    And does it guarantee that all tasks/threads on all CPU cores see the same time at the same moment?

    Regards,

    Jerry

  • Yes, that is the intention of the requirement: all cores/processes shall see the same time at the same moment, using an underlying global timer/counter on the SoC.
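
    Purely as an illustration of what such a global time could look like to application code (the actual SDK API and the clock source backing it are still to be defined by TI), a Linux-side helper might read a monotonic microsecond counter as below, with the assumption that the R5F/DSP firmware would read an equivalent counter backed by the same SoC timer so timestamps taken on any core are directly comparable.

    ```c
    #include <stdint.h>
    #include <time.h>

    /* Illustrative only: a microsecond "global time" read on the Linux (A72)
     * side.  The assumption (not a committed SDK interface) is that the same
     * value would be obtainable from the RTOS cores via a shared SoC counter. */
    static uint64_t app_get_time_usecs(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)(ts.tv_nsec / 1000);
    }
    ```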