Many of the serializers support splitting incoming super frames into multiple similar size frames and sending them on multiple outputs to drive multiple displays. This article explains how to generate the super frame from the DSS.
Please find the attached patch, which demonstrates how to use two video pipelines in the DSS and merge them in the DSS overlay manager to create a super frame. For this, the multi-camera example in vision apps is updated so that the two display pipelines are positioned side by side to form the super frame.
The LDC and mosaic nodes are removed from this example, and two cameras are used, each feeding one of the two display pipelines. Each display pipeline currently downscales its input image to 960x1080, and the two outputs are displayed side by side. The example has been verified on the EVM with an IMX390 camera.
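For illustration, below is a minimal sketch of how the two display pipelines could be placed side by side in the overlay. It assumes the TIOVX display kernel interface used by vision apps (tivx_display_params_t, tivxDisplayNode, vxCreateUserDataObject); the exact field names, macros, and header paths are assumptions and may differ between SDK versions, so please treat it as a configuration outline rather than the patch itself.

```c
/*
 * Sketch: two DSS video pipelines composed side by side into one
 * 1920x1080 super frame by the DSS overlay manager.
 *
 * Assumes the TIOVX display kernel interface used by vision apps
 * (tivx_display_params_t, tivxDisplayNode); names may differ per SDK.
 */
#include <string.h>
#include <TI/tivx.h>
#include <TI/hwa_kernels.h>   /* assumed header for the display kernel */

static vx_node create_display_node(vx_context context, vx_graph graph,
                                   vx_image in_img, uint32_t pipe_id,
                                   uint32_t pos_x)
{
    tivx_display_params_t prms;
    vx_user_data_object   cfg;

    memset(&prms, 0, sizeof(prms));
    prms.opMode    = TIVX_KERNEL_DISPLAY_ZERO_BUFFER_COPY_MODE;
    prms.pipeId    = pipe_id;   /* DSS VID pipeline to use            */
    prms.outWidth  = 960;       /* each pipeline outputs 960x1080     */
    prms.outHeight = 1080;
    prms.posX      = pos_x;     /* horizontal offset in the overlay   */
    prms.posY      = 0;

    cfg = vxCreateUserDataObject(context, "tivx_display_params_t",
                                 sizeof(tivx_display_params_t), &prms);

    return tivxDisplayNode(graph, cfg, in_img);
}

/* Left half of the super frame on one VID pipeline, right half on the
 * other; the overlay manager merges them into a single 1920x1080 frame
 * that the DSS sends out towards the serializer. */
void create_super_frame_displays(vx_context context, vx_graph graph,
                                 vx_image cam0_img, vx_image cam1_img)
{
    (void)create_display_node(context, graph, cam0_img, 1u, 0u);
    (void)create_display_node(context, graph, cam1_img, 2u, 960u);
}
```

With this placement, the serializer on the other end can split the 1920x1080 super frame back into two 960x1080 halves, one per output display, which is the use case described at the top of this article.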
Regards,
Brijesh