
TDA4VH-Q1: Multi-Client Encoder Support across different processes

Part Number: TDA4VH-Q1

  1. Does the QNX OMX encoder component (OMX.qnx.video.encoder) support multiple concurrent handles from different processes?
  2. If yes, does the driver create separate hardware contexts for each process, or are they serialized internally (shared single encoder context)?
  3. If only one process can use the encoder hardware at a time, how should multi-client encoding be implemented?
  4. Any sample applications, reference patches, or known limitations for multi-client encoder usage under QNX?
  • Hello,

    1. Yes, the QNX OMX encoder supports encoding multiple streams from separate parallel processes.
    2. There is only one hardware encoder, so the processes share a single encoder context, but the timing requirements for parallel processing are still met.
    3. & 4. Let me find some examples that show how to run multiple encoder processes with the OMX encoder test-app.

    Thanks,
    Sarabesh S.

  • Hello,

    Could you kindly share the examples you suggested in point #3 above?

    Thanks 

    Gajendra K N

  • Hello, 

    Thanks for your patience; I had some escalated tasks in progress. Here is a script that runs multiple encode processes. Copy it onto your QNX filesystem and run it for multi-instance encoding, after modifying it to point to the correct source and output file locations.

    https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/791/7506.encode_2D00_multi_2D00_instance.sh

    Thank you,
    Sarabesh S.

  • Hello Sarabesh,

    Thanks for your patience; we were busy with other release activities.

    The example script that was provided does not appear to be based on the omax_wrapper library code. It seems to be part of the VPU module, which includes its own sample test code.

    As per our requirement, we need a sample implementation that specifically uses the omax_wrapper library, demonstrating how multiple processes can use it concurrently.

    In our setup, we have two independent applications (Process A and Process B), each needing to encode NV12 video streams. Both processes create encoder handles using the same static library app_utils_omax_wrapper.a, which is built from the omax_wrapper.c sources.

    We would like to get feedback on the following points:

    1. Does the static library app_utils_omax_wrapper.a support multiple concurrent encoder handles from different processes?
    2. Are there any sample applications available for multi-client encoder usage under QNX using this omax_wrapper static library?
  • I'm checking on this and will get back to you tomorrow.

    Thanks,
    Sarabesh S.

  • We are still waiting for your response.

    Regards,

    Gajendra K N

  • Hello,

    Any feedback on this?

    Regards,

    Gajendra K N

  • Hi Gajendra, 

    Does the static library app_utils_omax_wrapper.a support multiple concurrent encoder handles from different processes?

    Yes, this is supported, though not by the library itself but within our encoder component. The omx_wrapper.a is only a thin layer for OMX component initialization, buffer allocation, state transitions, parameter settings, and so on; it does not manage or restrict multi-process use by itself. You should be unblocked and can proceed with encoding from two processes, as each process maintains its own instance and command queue. The only limitations are VPU memory and the maximum number of streams (32), so I wouldn't worry.

    Are there any sample applications available for multi-client encoder usage under QNX using this omax_wrapper static library?

    There are no official examples using this library. The concept is the same, since each OMX handle represents one encoder instance. All of our examples are based on the OMX-IL test-app.
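    Since each process maintains its own encoder instance, one way to exercise multi-client encoding is simply to launch two encoder processes in parallel from a shell script. The sketch below is a hypothetical launcher, not an official example: the `ENC_APP` binary name, its flags, and the file paths are placeholder assumptions to be replaced with your actual OMX encoder test-app and NV12 input/output locations.

```shell
#!/bin/sh
# Hypothetical multi-instance launcher sketch. ENC_APP, the flags, and the
# file paths are placeholders, not the actual test-app interface; point them
# at your real OMX encoder binary and NV12 input/output locations.
ENC_APP="${ENC_APP:-true}"   # 'true' is a no-op stand-in so this can be dry-run

# Each backgrounded process opens its own OMX handle, i.e. one encoder
# instance and command queue per process.
"$ENC_APP" --input /data/in_a.nv12 --output /data/out_a.h264 &
pid_a=$!
"$ENC_APP" --input /data/in_b.nv12 --output /data/out_b.h264 &
pid_b=$!

# Wait for both encode processes to complete before reporting.
wait "$pid_a"
wait "$pid_b"
echo "both encode processes finished"
```

    The same pattern extends to more clients, up to the stream and VPU memory limits mentioned above; each additional background process gets its own encoder instance.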

    Thank you,
    Sarabesh S.