This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

SK-AM69: Encoding and decoding performance testing issue

Part Number: SK-AM69
Other Parts Discussed in Thread: AM69, TDA4VH, AM69A

Hello, I want to test the real-time hardware encoding and decoding of the AM69 board. Our requirement is to achieve simultaneous real-time encoding of two streams of 4K@60, without dropping frames.

During testing, we found that the encoding capability was insufficient when encoding two streams of 4K@60 simultaneously. Please help us look into this issue.

The testing plan is as follows:

  1. First, decode the MP4 into a 3840x2160@60 NV12 raw file. (This file was provided by you last time.)
gst-launch-1.0 filesrc location=./bbb4k60_hevc.mp4 ! qtdemux name=demuxer demuxer.! h265parse ! queue ! v4l2h265dec ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 ! filesink location=/tmp/bbb4k60_hevc.nv12
  2. Encode the 3840x2160@60 NV12 file and save it as an MP4 file. The elapsed time is a little over 10 seconds, while the original video duration is 10 seconds, so this barely achieves the encoding capability of one stream of 4K@60.
time gst-launch-1.0 filesrc location=/tmp/bbb4k60_hevc.yuv ! rawvideoparse format=nv12 width=3840 height=2160 framerate=60/1 colorimetry=bt709 ! v4l2h265enc ! h265parse ! qtmux ! filesink location=/tmp/tmp_265.mp4


If I start two encoding processes simultaneously, I find that it takes 15 seconds for both to complete, which does not achieve the real-time encoding speed of two streams of 4K@60.




thanks!
  • Hello, 

    I believe the reason for your delay is the filesink plugin. Writing to a file (especially a 4K stream) takes an unavoidable and significant number of CPU memory copies, so the encoder has to wait while the CPU copies many megabytes of data to your storage device. You can try running the pipeline with fakesink instead, as shown here, and you should see the latency measurements improve:

    • GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! tee name=t \
      t. ! queue ! v4l2h265enc ! fakesink \
      t. ! queue ! v4l2h265enc ! fakesink

    Our performance metrics are measured in isolation on the v4l2h265 hardware accelerator element, so there may be other driver and buffer delays depending on the use case.
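    If it helps, the per-element latency entries that the GST_TRACERS pipeline above writes to gst_trace.log can be aggregated with a small script. This is only a sketch: the exact tracer line format varies between GStreamer versions, so the sample lines and the regex below are assumptions to adapt against your actual log.

```python
import re
from collections import defaultdict

# Hypothetical sample lines; check the real format in your gst_trace.log,
# it may differ slightly across GStreamer versions.
SAMPLE_LOG = """\
0:00:01.000 TRACE GST_TRACER :0:: element-latency, element=(string)v4l2h265enc0, time=(guint64)14000000;
0:00:01.016 TRACE GST_TRACER :0:: element-latency, element=(string)v4l2h265enc0, time=(guint64)16000000;
0:00:01.016 TRACE GST_TRACER :0:: element-latency, element=(string)v4l2video3h265enc0, time=(guint64)15000000;
"""

# Assumed shape: "element-latency, element=(string)NAME, ... time=(guint64)NANOSECONDS"
LINE_RE = re.compile(r"element-latency.*?element=\(string\)(\S+?),.*?time=\(guint64\)(\d+)")

def mean_latency_ms(log_text):
    """Return {element_name: mean latency in milliseconds} from tracer log text."""
    sums = defaultdict(lambda: [0, 0])  # name -> [total_ns, sample_count]
    for line in log_text.splitlines():
        m = LINE_RE.search(line)
        if m:
            name, ns = m.group(1), int(m.group(2))
            sums[name][0] += ns
            sums[name][1] += 1
    return {name: (total / count) / 1e6 for name, (total, count) in sums.items()}

print(mean_latency_ms(SAMPLE_LOG))  # both encoder instances average 15.0 ms here
```

    Averaging per encoder element lets you compare the two VPU instances directly instead of eyeballing the raw trace.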

    Best,
    Sarabesh S.

  • Hi Sarabesh Srinivasan,

    I attempted to use the fakesink plugin and barely achieved one stream at 4K60, but I was unable to achieve two simultaneous streams of 4K60 in real time. I found some information about the AM69 codec you mentioned: it supports resolutions up to 8192x4320, but not in real time.

    In real-time scenarios, it can only support up to one stream of 4K60.

    AM69: Inquery about AM69 Video Decoding Performance - Processors forum - Processors - TI E2E support forums 

  • Hello, 

    I believe you may not be specifying the correct VPU instance when running two streams of 4k60 on the AM69. I am currently looking into this. 

    I found some information about the AM69 codec you mentioned, which supports resolutions up to 8192x4320, but it’s not real-time.

    In that thread I explain that the 8192x4320 maximum encode/decode resolution and the 4K60 real-time guarantee are on a per-VPU-IP basis. The AM69 has two instances of the VPU IP, so the capabilities are doubled. Therefore, two streams of 4K60 with a real-time guarantee should be supported on the AM69.

    Thank you,
    Sarabesh S.

  • Hello, 

    The following pipeline should separate the 2x 4K60 streams so that each is encoded on its own IP instance on the AM69. I ran gst-inspect-1.0 | grep v4l2 to determine the names of the hardware accelerator elements.

    • GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! tee name=t \
      t. ! queue ! v4l2h265enc ! fakesink \
      t. ! queue ! v4l2video3h265enc ! fakesink

    BR,
    Sarabesh S.

  • Hello, 

    The testing results indicate that when encoding with just one thread, a 10-second video file took 9.59 seconds to process.

    GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! tee name=t \
    t. ! queue ! v4l2h265enc ! fakesink \
    t. ! queue ! v4l2video3h265enc ! fakesink

    When encoding with two threads simultaneously, the total processing time of 15 seconds exceeded the duration of the video file, making it impossible to achieve real-time encoding with two threads at 4k60.

  • Hello,

    Please refer to this FAQ (here) to determine how to measure the latency of the CODEC element given a GStreamer pipeline. 

    The delay you are measuring with the GStreamer pipeline takes into account pipeline setup, running software plugins, synchronization, closing the pipeline, etc. Therefore, it is not an accurate measurement of the real-time encoding performance of the actual hardware accelerator.

    BR,
    Sarabesh S.

  • Hello, 

    I tested with two different streams for 2x4k60 encode on the TDA4VH with the following test-cases:

    1. GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! tee name=t \
      t. ! queue ! v4l2h265enc ! fakesink \
      t. ! queue ! v4l2video3h265enc ! fakesink


    2. GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/jellyfish_42frm_3840x2160_60fps_nv12.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! tee name=t \
      t. ! queue ! v4l2h265enc ! fakesink \
      t. ! queue ! v4l2video3h265enc ! fakesink

    From these results you can see the processing latency for the streams is ~14 ms, with the output framerate being over 60 fps. Attached are the raw files used for the test.

    6406.4k_raw.zip

    Best,
    Sarabesh S.

  • Hello, 

    Your test results are normal; a single program's output framerate is over 60 fps.

    However, my issue is that when encoding two programs simultaneously, the frame rate cannot reach 60 fps; it only reaches 37 fps. In other words, two programs cannot encode in real time at 60 fps.

    This is the result of running a single program.

    This is the result of running two programs simultaneously: only 37 fps.

    gst_trace.txt gst_trace1.txt gst_trace2.log

  • Hi Sarabesh,

    I am not sure if there is a misunderstanding about the test method. There are two hardware VENC/VDEC units on the device, which should be able to support two-channel encoding in parallel. How can we ensure/confirm that the test used both units in parallel, one per stream, rather than encoding the two streams serially on only one unit?

  • Hi,

    Can you provide me with the pipeline you are running for both of these test cases? The v4l2h265enc0 element is running the encoder instance on VPU0 and the v4l2video3h265enc0 element is running the second encoder instance on VPU1. Therefore, in a single pipeline (like shown in my previous response) you are sending one stream to be encoded by VPU0 and another stream to be encoded by VPU1.

    The 'tee' element in the pipeline splits the data stream so it can be sent to multiple destinations, the 'queue' elements ensure that the streams run on separate threads for parallel processing, and the different v4l2 encoder elements are the different VPU destinations.

    Please provide both pipelines you ran so I can see what is giving you the 37 fps result. Also, if you can provide the exact stream you are using, I can test it in my own environment.

    BR,
    Sarabesh S.

  • GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! tee name=t \
    t. ! queue ! v4l2h265enc ! fakesink \
    t. ! queue ! v4l2video3h265enc ! fakesink

    I found the command/pipeline in previous replies; it is the same as yours. The customer runs two instances in different SSH consoles to ensure the two instances run in parallel.

    I am not sure how you run two instances in parallel. If they are executed one after another, they run serially, not in parallel.

    BTW, the customer used the same YUV file: bbb4k60_hevc.yuv

  • Hello,

    You only need one console and one pipeline to run a multi-instance parallel encode. In this pipeline:

    GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! tee name=t \
    t. ! queue ! v4l2h265enc ! fakesink \
    t. ! queue ! v4l2video3h265enc ! fakesink

    You are running a multi-instance encode. A single file source is duplicated with the 'tee name=t' element. One duplicate is taken as input, queued onto a separate thread, and processed by the first encoder (first VPU). The pipeline then takes the second duplicate, queues it onto a separate thread, and passes it to the second encoder (second VPU). Since each bitstream runs on its own thread, the encoder instances run concurrently.

    Thanks,
    Sarabesh S.

  • Okay, how do we change the pipeline to encode two different video source files in parallel?

  • Hi Tony, 

    It would look like the following: 

    • GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! queue ! v4l2h265enc ! fakesink \
      filesrc location=/jellyfish_42frm_3840x2160_60fps_nv12.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! queue ! v4l2video3h265enc ! fakesink

    Best,
    Sarabesh S.

  • Hi Sarabesh,

    I used the following command to test two different files, /tmp/bbb4k60_hevc.yuv and /tmp/bbb4k60_hevc.yuv1.yuv:

    • GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/tmp/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! queue ! v4l2h265enc ! fakesink \
      filesrc location=/tmp/bbb4k60_hevc.yuv1.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! queue ! v4l2video3h265enc ! fakesink

    The result still indicates that it is unable to achieve real-time 60 fps for two different files: elapsed time 13.71 s, output fps 40+.

  • Hi,

    Could you send me the full 600-frame input file you are using so I can test in my environment? You should be able to compress it and attach it to this thread. I do not have a 4K file of that size. Additionally, what SDK version are you on?

    Best,
    Sarabesh S.

  • The file is too large, at 7 GB. I can provide an MP4 file; you can generate the raw file from it using the following command.

    gst-launch-1.0 filesrc location=./bbb4k60_hevc.mp4 ! qtdemux name=demuxer demuxer.! h265parse ! queue ! \
    v4l2h265dec ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 ! filesink location=/tmp/bbb4k60_hevc.nv12
    Kernel version: 6.1.33-g8f7f371be2
    SDK: AM69A_09_00_01
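    For reference, the ~7 GB figure is consistent with 10 s of 4K@60 NV12. A quick back-of-the-envelope check (assuming exactly 600 frames):

```python
# Size of the raw NV12 dump: 10 s of 4K@60 = 600 frames.
width, height, fps, seconds = 3840, 2160, 60, 10
bytes_per_frame = width * height * 3 // 2        # NV12 is 12 bits per pixel
total_bytes = bytes_per_frame * fps * seconds

print(bytes_per_frame)  # 12441600 (~12.4 MB per frame)
print(total_bytes)      # 7464960000 (~7.5 GB decimal, ~7.0 GiB)
```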
  • Hello, 

    I am able to reproduce similar results, with the fps output being lower than 60. From my testing, this seems to be a problem only when using two filesrc locations in the pipeline. Could you confirm that the following pipeline, using the tee element, meets the timing requirements and produces an output fps above 60?

    • GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! tee name=t \
      t. ! queue ! v4l2h265enc ! fakesink \
      t. ! queue ! v4l2video3h265enc ! fakesink

    I will look into why this is the case when running two file sources. I believe there may be some limitation related to the input file size, but I will investigate further and update you soon.

    Thanks,
    Sarabesh S.

  • Hello, 

    [Q]  Could you confirm that the following pipeline using the tee element will meet the timing requirements and produce an output fps above 60?

    • GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! tee name=t \
      t. ! queue ! v4l2h265enc ! fakesink \
      t. ! queue ! v4l2video3h265enc ! fakesink

    [A]: The test data was provided to you earlier; it achieves approximately 62 fps.

    If this encoder has the capability for two streams of 4K60, shouldn't this result be closer to around 120 fps, rather than just 60+?

    Thanks

  • Hello, 

    if this encoder has the capability for two streams of 4K60, then this result should be closer to around 120, rather than just 60+???

    In this pipeline you are still running a 2x 4k60 encode. The tee element duplicates the stream after reading the data from the CPU once. So the device is still receiving two 4k60 inputs. That is, 1x 4k60 input per VPU. This is NOT encoding 1x 4k60 stream for both VPUs. Therefore, the result would NOT be closer to 120fps.

    I believe there may be some limitation on the input file size but will investigate further and update you soon.

    After looking into it, this behavior does make sense since the CPU is unable to read two large 4k60 files from the disk fast enough to meet the VPU timing requirements. However, as you can see from the test case you just ran, the VPU is able to handle two 4k60 inputs fine (in the case when it reads the file once and a tee is set up between different instances).

    Additionally, from a camera-capture -> encode perspective, the AM69 would be able to meet the 4K60 timing requirements for two camera instances if dma-buf were used to share buffers between the camera and the encoder, because everything would then be read from RAM rather than disk. However, I believe there is currently no hardware support for a 4K60 camera module, only 4K30. This is on the roadmap.

    BR,
    Sarabesh S.

  • Hello, 

    Sarabesh Srinivasan said:

    In this pipeline you are still running a 2x 4k60 encode. The tee element duplicates the stream after reading the data from the CPU once. So the device is still receiving two 4k60 inputs. That is, 1x 4k60 input per VPU. This is NOT encoding 1x 4k60 stream for both VPUs. Therefore, the result would NOT be closer to 120fps.

    I obtained the result using the following command, and it shows only 63 fps, so I think the performance does not reach 120 fps.

    • GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709  ! v4l2h265enc ! fakesink

    Sarabesh Srinivasan said:

    After looking into it, this behavior does make sense since the CPU is unable to read two large 4k60 files from the disk fast enough to meet the VPU timing

    I also suspect that the issue may be related to the CPU's inadequate speed in reading disk data. Therefore, I have stored the YUV file in the /tmp directory, which is a memory-based file system.

    I used this method to test the decoding performance and found that the fps can only reach 95; this testing method is not affected by disk performance.

    gst-launch-1.0 filesrc location=./bbb4k60_hevc.mp4 ! qtdemux name=demuxer demuxer.! h265parse ! queue ! \
    v4l2h265dec ! fakesink

    Are there any other testing methods to verify real-time 2-channel 4K60 encoding and decoding performance?

  • Hello,

    I am out of office this week so please expect a delay in response.

    Thanks,
    Sarabesh S.

  • Hello,

    I obtained the result using the following command, and it shows only 63 fps, so I think the performance does not reach 120 fps.

    • GST_DEBUG_FILE=gst_trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency(flags=element):v4l2" time gst-launch-1.0 filesrc location=/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709  ! v4l2h265enc ! fakesink

    You will not see 120 fps; that is above the pixel rate supported by the device. The maximum capability is 500 MP/s (megapixels per second). There is no 120 fps support for a stream of 4K resolution. The Wave5 on TDA4VH is clocked a bit higher, so we actually see more than 500 MP/s, which is why you are seeing 95 fps for decoding.
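    The pixel-rate arithmetic behind this can be checked directly:

```python
# 4K@60 pixel rate vs. the stated ~500 MP/s per-VPU limit.
width, height = 3840, 2160

mp_per_s_4k60 = width * height * 60 / 1e6    # one 4K@60 stream
mp_per_s_4k120 = width * height * 120 / 1e6  # hypothetical single 4K@120 stream

print(round(mp_per_s_4k60, 1))   # 497.7 -> one 4K60 stream nearly saturates one VPU
print(round(mp_per_s_4k120, 1))  # 995.3 -> 4K120 would need ~2x the per-VPU limit
```

    So a single encoder instance topping out at roughly 60 fps for 4K input is exactly what the 500 MP/s figure predicts.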

    I'm not sure where we said anything about getting 120 fps. You are NOT splitting the video between the two instances of the Wave5; this is not possible, as we do not support it. When Sarabesh shared the pipeline below, he was showing how one video can be duplicated into two sources so that the disk is not read twice for the two encoders. The tee element creates multiple data pads, duplicating the data between the two encoders. This pipeline is essentially the same as two different file sources, but it saves CPU bandwidth by using the tee element. Because of this, each Wave5 instance should give you approximately 60 fps, because that is what the Wave5 supports.

     gst-launch-1.0 filesrc location=/bbb4k60_hevc.yuv ! rawvideoparse width=3840 height=2160 format=nv12 framerate=60/1 colorimetry=bt709 ! tee name=t \
    t. ! queue ! v4l2h265enc ! fakesink \
    t. ! queue ! v4l2video3h265enc ! fakesink

    I used this method to test the decoding performance and found that the fps can only reach 95; this testing method is not affected by disk performance.

    gst-launch-1.0 filesrc location=./bbb4k60_hevc.mp4 ! qtdemux name=demuxer demuxer.! h265parse ! queue ! \
    v4l2h265dec ! fakesink

    You are seeing 95 fps here because nothing is synced. As I mentioned above, the Wave5 is clocked at a higher frequency, so you will see more than 4K 60 fps. You can change the pipeline to the following if you want to see 60 fps:

    gst-launch-1.0 filesrc location=./bbb4k60_hevc.mp4 ! qtdemux name=demuxer demuxer.! h265parse ! queue ! v4l2h265dec ! fakesink sync=true

    To further clarify, each Wave5 instance is both an encoder and a decoder. The TDA4VH has two instances of the Wave5. Each Wave5 can handle 500 MP/s, which can be any combination of encode/decode as long as it falls at or below 500 MP/s. The pipeline Sarabesh gave you already proves the real-time performance you are looking for.

    Please let me know if there is anything else I can elaborate on. 

    Thanks,

    Brandon

  • Hi Brandon,

    The customer needs to do 2-channel 4K60 encoding from two different sources. I am not sure the clarification above provides a clear command/pipeline to do that. Also, has the BU side verified this use case?

  • Hi Brandon,

    To supplement Tony's comment: here I want to test real-time encoding or decoding of two different video streams.

  • Hello, 

    For something like two channels of 4K60, the input would need to come from a camera source so that you can set up dmabufs between the camera and the encoder. The encoder can handle 4K60 with no issue; we have proven that above. The CPU is the limiting factor in this case. Having to read that much data from disk is too much: the CPU would have to read approximately 750 megabytes per second per stream, and you are into gigabytes per second with two streams. Two unique file sources is not feasible; it is simply too much raw data to read.
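    That ~750 MB/s per-stream figure follows directly from the NV12 frame size:

```python
# Required sustained disk read rate for one raw 4K@60 NV12 stream.
width, height, fps = 3840, 2160, 60
bytes_per_frame = width * height * 3 // 2      # NV12 is 12 bits per pixel
mb_per_s = bytes_per_frame * fps / 1e6

print(round(mb_per_s, 1))  # 746.5 MB/s per stream, so ~1.5 GB/s for two streams
```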

    Thanks,

    Brandon

  • Hello,

    Having to read that much data from disk is too much. Therefore, I have stored the YUV file in the /tmp directory, which is a memory-based file system. Is this approach feasible?

  • Hello, 

    I will explore this and check feasibility. Will get back to you with this information tomorrow.

    Thanks,

    Brandon

  • Hi Brandon,

           Is this approach feasible?

  • Hello, 

    Reading from /tmp is not feasible: Linux overhead prevents it from being a valid method to meet the timing requirements for the file reads. The correct way to validate two different 4K60 streams would be a system-level use case where you run a camera-capture-to-encode pipeline. Support for this is currently being discussed internally.

    Thanks,
    Sarabesh S.

  • Hi Brandon,

    The cpu would have to be reading approximately 750 mega bytes per second per stream. You're getting into gigabytes when you have two streams. Two unique file sources is not feasible - it is too much raw data that has to be read.

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-j784s4/09_02_00_05/exports/docs/devices/J7_Family/linux/Release_Specific_Performance_Guide.html?highlight=benchmark

    There is no table header in the SDK performance data; in the same row, some numbers are much higher while others are small.

    Can we draw a conclusion from the LMBench memory benchmark data?

  • Hi Tony, 

    Will look into this and let you know.

    Thanks,
    Sarabesh S.

  • Hi

    I tested these two commands on two separate consoles:

    GST_DEBUG_FILE=/run/trace3.log GST_DEBUG_NO_COLOR=1 GST_DEBUG=2,"GST_TRACER:7" GST_TRACERS="latency(flags=element)" gst-launch-1.0 multifilesrc stop-index=0 location="/opt/test_3840_2160.yuv" index=0 loop=1 caps="video/x-raw, width=3840, height=2160, format=NV12" ! v4l2video3h264enc !  fakesink

    GST_DEBUG_FILE=/run/trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG=2,"GST_TRACER:7" GST_TRACERS="latency(flags=element)" gst-launch-1.0 multifilesrc stop-index=0 location="/opt/test_3840_2160.yuv" index=0 loop=1 caps="video/x-raw, width=3840, height=2160, format=NV12" ! v4l2h264enc !  fakesink

    The frame rates are 56 and 58. Does this meet your requirements?

    Regards,

    Adam

  • Hi,

    I also tried reading from two different files:

    GST_DEBUG_FILE=/run/trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG=2,"GST_TRACER:7" GST_TRACERS="latency(flags=element)" gst-launch-1.0 multifilesrc stop-index=0 location="/opt/test_2_3840_2160.yuv" index=0 loop=1 caps="video/x-raw, width=3840, height=2160, format=NV12" ! v4l2h264enc !  fakesink

    GST_DEBUG_FILE=/run/trace3.log GST_DEBUG_NO_COLOR=1 GST_DEBUG=2,"GST_TRACER:7" GST_TRACERS="latency(flags=element)" gst-launch-1.0 multifilesrc stop-index=0 location="/opt/test_3840_2160.yuv" index=0 loop=1 caps="video/x-raw, width=3840, height=2160, format=NV12" ! v4l2video3h264enc !  fakesink

    The framerates are 58 and 57, respectively.

    Regards,

    Adam

  • I used 5 images for cyclic encoding, and the performance only reaches 40+ fps.

    GST_DEBUG_FILE=/run/trace.log GST_DEBUG_NO_COLOR=1 GST_DEBUG=2,"GST_TRACER:7" GST_TRACERS="latency(flags=element)" gst-launch-1.0 multifilesrc  location="/opt/test_%04d.yuv" start-index=0 stop-index=4 loop=1 caps="video/x-raw, width=3840, height=2160, format=NV12" ! v4l2h264enc !  fakesink

    I used the perf tool for analysis.

  • Hello,

    Will look into this and update you beginning of next week.

    Thanks,
    Sarabesh S.

  • Hello, 

    After taking a look at the pipeline, please note that you are still reading multiple 4K file sources from disk when running the cyclic encode over 5 files. This results in the fps requirements not being met, because the file sources are being read repeatedly from disk.

    Thanks,
    Sarabesh S.