Query regarding performance of de-interlacer for C6A8147

Hi all,

We are evaluating the C6A8147 for a product, and the requirement is as follows.

Four D1 (NTSC/PAL) channels will be captured by a video decoder chip, which will produce 16-bit raw HD video.

We plan to feed this 16-bit video to the HDVPSS of the C6A8147 for de-interlacing.

Our objective is to get de-interlaced 16-bit video and to pass it on as a single HD frame (the 4 D1 channels combined) for further processing.

Our questions are as follows.

1) Is the C6A8147 capable of de-interlacing HD (4 D1 channels combined into a single HD frame)? If I connect only 1 D1 camera and the other 3 cameras are not connected, then 3 channels will be blank and 1 channel will carry video data.

Would this scenario affect the de-interlacing logic, or is it independent of the video content and equally effective in all cases?

2) What is the CPU utilization and performance cost of de-interlacing an HD video channel?

3) Which of the two options below, a) or b), will be more effective?

Option a: Capture the 4 channels as a single HD frame and de-interlace that HD frame.

Option b: Capture the 4 channels separately, de-interlace the 4 D1 streams as 4 separate channels, and combine the 4 D1 channels together without any additional copy operation/overhead.

In other words, in option b, is it possible to fetch the video data from a particular location via DMA, de-interlace it, and write the de-interlaced data back to the same address?

If yes, then without the overhead of a copy, we can de-interlace only the connected channels (instead of de-interlacing all 4 channels even if a camera is connected to only 1 channel) and save de-interlacer time and bandwidth as needed.

Please suggest which option is more suitable from the de-interlacer's performance point of view.

Thanks,
Sweta

 

  • Sweta,

    Option a) is not really possible since each source D1 image will be completely asynchronous to the others.

    You will need to capture and de-interlace each of the 4 source images separately.

    If you want to further process the 4 images by treating them as a single HD image, then you will need to store all 4 de-interlaced images into a separate memory buffer which is large enough for all 4 combined images. You can then treat this combined buffer as a single image which contains all 4 de-interlaced images.

    As mentioned above, though, the source video streams will be asynchronous to each other, so you need to be careful with your buffer management in order to ensure you do not see 'tearing' in your combined buffer output.
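    As a rough illustration of the layout arithmetic only (plain C with hypothetical names, not the actual HDVPSS driver API; D1 is assumed here to be 720x480 with 16-bit pixels, i.e. 2 bytes each), placing each de-interlaced D1 frame into one quadrant of the combined buffer could look like this:

    ```c
    #include <stdint.h>
    #include <string.h>

    #define D1_WIDTH       720              /* assumed active D1 width          */
    #define D1_HEIGHT      480              /* NTSC; use 576 for PAL            */
    #define BYTES_PER_PIX  2                /* 16-bit (e.g. YUV 4:2:2) pixels   */
    #define HD_WIDTH       (2 * D1_WIDTH)   /* 2x2 mosaic of D1 frames          */
    #define HD_HEIGHT      (2 * D1_HEIGHT)

    /* Copy one de-interlaced D1 frame into quadrant 'ch' (0..3) of the
     * combined buffer.  'src' is the de-interlacer output for that channel,
     * 'dst' is the large buffer later treated as a single HD frame.          */
    static void place_quadrant(uint8_t *dst, const uint8_t *src, int ch)
    {
        const size_t srcPitch = (size_t)D1_WIDTH * BYTES_PER_PIX;
        const size_t dstPitch = (size_t)HD_WIDTH * BYTES_PER_PIX;

        /* Top-left corner of the requested quadrant in the combined buffer. */
        uint8_t *quad = dst + (ch / 2) * (size_t)D1_HEIGHT * dstPitch
                            + (ch % 2) * srcPitch;

        for (int row = 0; row < D1_HEIGHT; row++)
            memcpy(quad + row * dstPitch, src + row * srcPitch, srcPitch);
    }
    ```

    If the driver lets you program a separate output base address and line pitch per channel, the memcpy() is avoidable and the sketch reduces to the address/pitch calculation. Either way, double-buffer the combined frame and switch buffers only after all four asynchronous channels have delivered a complete de-interlaced frame, otherwise you will see the tearing mentioned above.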

    De-interlacing does not require much CPU processing since it is handled directly by the hardware, but as you note, it does require memory bandwidth.

    If you know that one or more sources are disconnected, then you can certainly save memory bandwidth by not performing de-interlacing and/or the associated write to the output frame buffer for those channels.
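    As a rough sketch of that kind of gating (again with hypothetical names, not an actual driver call):

    ```c
    #include <stdint.h>

    /* Hypothetical driver call that runs the hardware de-interlacer on one
     * channel's captured fields; the real HDVPSS API will differ.           */
    extern void deinterlace_channel(const uint8_t *srcFields, uint8_t *dstFrame);

    typedef struct {
        int            connected;       /* from the decoder's sync-detect status */
        const uint8_t *capturedFields;
        uint8_t       *deinterlacedOut;
    } Channel;

    /* Spend de-interlacer bandwidth only on channels that carry video. */
    static void process_channels(Channel ch[4])
    {
        for (int i = 0; i < 4; i++) {
            if (!ch[i].connected)
                continue;               /* blank channel: skip the work */
            deinterlace_channel(ch[i].capturedFields, ch[i].deinterlacedOut);
        }
    }
    ```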

    BR,

    Steve

  • Hi Steve,

    Thanks for the reply.

    This answers my query.

    Regards,
    Sweta