Hi, I need your advice on improving performance on the EVM.
My scenario is as follows.
1. Capture from an HD (1280x720) camera via the component input port on the EVM.
2. Keep the original captured frame and create a downscaled copy (HD to VGA).
3. Create a grayscale image from the VGA frame only.
4. Run some image processing (e.g. face detection) on the VGA image and get the result (e.g. coordinate values).
5. Apply the result of step 4 to the original captured frame (e.g. draw a rectangle).
6. Send the result of step 5 to the display or the H.264 encoder.
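In pseudocode, the per-frame data flow I have in mind looks like this (the types and function names below are just placeholders of mine for illustration, not real APIs):

```c
/* Placeholder types and function names, only to show the per-frame data flow. */
typedef struct { unsigned char *data; int width, height; } frame_t;
typedef struct { int x, y, w, h; } face_rect_t;

void capture_hd_frame(frame_t *hd);                            /* step 1 */
void downscale_hd_to_vga(const frame_t *hd, frame_t *vga);     /* step 2 */
void extract_luma(const frame_t *vga, frame_t *gray);          /* step 3 */
void run_face_detection(const frame_t *gray, face_rect_t *r);  /* step 4 (DSP candidate) */
void draw_rectangle(frame_t *hd, const face_rect_t *r);        /* step 5 */
void send_to_display_or_encoder(const frame_t *hd);            /* step 6 */

void process_one_frame(void)
{
    frame_t hd, vga, gray;
    face_rect_t face;

    capture_hd_frame(&hd);              /* 1280x720 original, kept as-is      */
    downscale_hd_to_vga(&hd, &vga);     /* 640x480 working copy               */
    extract_luma(&vga, &gray);          /* grayscale from the VGA copy only   */
    run_face_detection(&gray, &face);   /* heavy part, planned for the DSP    */
    draw_rectangle(&hd, &face);         /* draw the result onto the HD frame  */
    send_to_display_or_encoder(&hd);    /* display or H.264 encode            */
}
```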
The problem I have is that the total processing time is too long. There are two bottlenecks:
one is steps 2 and 3, the other is step 4.
For step 4, I'm trying to develop the image processing algorithm on the DSP side.
How can I reduce the time spent on steps 2 and 3?
To do that, I tried a V4L2 application. The V4L2 application (saLoopback) has some limitations. One is the pixel format:
the display only supports the YUV422 interleaved format, so the frames are captured in YUV422 interleaved format as well.
Also, the saLoopback application runs on the ARM Linux side only, so it takes a long time to extract the Y channel values and resize the image.
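For reference, this is roughly what the ARM side currently has to do per frame to get a gray VGA image out of a YUV422 (YUYV) interleaved capture; the function name and the nearest-neighbour scaling are just my illustration, and doing this pixel by pixel for every 1280x720 frame is where the time goes:

```c
#include <stdint.h>
#include <stddef.h>

/* Pull the Y (luma) bytes out of a YUYV (YUV422 interleaved) HD frame and
 * downscale to VGA by simple nearest-neighbour sampling.
 * Illustration only; buffer names and the scaling method are my assumptions. */
void yuyv_hd_to_gray_vga(const uint8_t *yuyv_hd,   /* 1280x720, 2 bytes/pixel */
                         uint8_t *gray_vga)        /* 640x480, 1 byte/pixel   */
{
    const int src_w = 1280, src_h = 720;
    const int dst_w = 640,  dst_h = 480;

    for (int dy = 0; dy < dst_h; dy++) {
        int sy = dy * src_h / dst_h;               /* nearest source row      */
        const uint8_t *src_row = yuyv_hd + (size_t)sy * src_w * 2;
        for (int dx = 0; dx < dst_w; dx++) {
            int sx = dx * src_w / dst_w;           /* nearest source column   */
            /* In YUYV the luma of pixel sx sits at byte offset 2*sx          */
            gray_vga[dy * dst_w + dx] = src_row[2 * sx];
        }
    }
}
```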
I found that the OpenMAX components could be a good solution.
For step 1, I would use VFCC or V4L2, and for step 2, VFPC.
VFPC has one input port and multiple output ports, so I think it can receive the captured frame and generate two images (HD size and VGA size).
But I can't find any component that has two input ports and one output port.
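If VFPC can really produce two scaled outputs from one input, I imagine configuring its ports along these lines with standard OpenMAX IL calls. The port indices below are guesses on my part and not verified against the EZSDK headers; only the OMX IL structures and calls themselves are standard:

```c
#include <string.h>
#include <OMX_Core.h>
#include <OMX_Component.h>

/* Sketch: ask a VFPC-like component for one HD output and one VGA output.
 * The output port indices (1 and 2) are assumptions, not verified values. */
static OMX_ERRORTYPE setup_vfpc_outputs(OMX_HANDLETYPE vfpc)
{
    OMX_PARAM_PORTDEFINITIONTYPE port;
    OMX_ERRORTYPE err;

    memset(&port, 0, sizeof(port));
    port.nSize = sizeof(port);
    port.nVersion.s.nVersionMajor = 1;
    port.nVersion.s.nVersionMinor = 1;

    /* Output port A: pass-through HD (1280x720) */
    port.nPortIndex = 1;                      /* assumed output port index */
    err = OMX_GetParameter(vfpc, OMX_IndexParamPortDefinition, &port);
    if (err != OMX_ErrorNone) return err;
    port.format.video.nFrameWidth  = 1280;
    port.format.video.nFrameHeight = 720;
    err = OMX_SetParameter(vfpc, OMX_IndexParamPortDefinition, &port);
    if (err != OMX_ErrorNone) return err;

    /* Output port B: downscaled VGA (640x480) */
    port.nPortIndex = 2;                      /* assumed output port index */
    err = OMX_GetParameter(vfpc, OMX_IndexParamPortDefinition, &port);
    if (err != OMX_ErrorNone) return err;
    port.format.video.nFrameWidth  = 640;
    port.format.video.nFrameHeight = 480;
    return OMX_SetParameter(vfpc, OMX_IndexParamPortDefinition, &port);
}
```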
Is it possible for the DSP and the OpenMAX components (VFCC, VFPC, etc.) to work together?
If yes, how can I move the two images (HD size / VGA size) to the DSP side without a memory copy? Do I need to use the DMA/EDMA functions?
My idea is that, if the DSP and the OpenMAX components can work at the same time,
VFPC generates the two images and passes them to DSP memory; after image processing on the DSP side, the DSP sends the final result image to the VFDC or VENC component.
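For the zero-copy part, what I picture is allocating the frame buffers myself from physically contiguous shared memory, handing them to the OpenMAX component with OMX_UseBuffer, and then passing only the physical address of each frame to the DSP so it works on the same memory in place. The allocator functions below are placeholders of mine (e.g. something like CMEM would play that role); only OMX_UseBuffer is a standard IL call:

```c
#include <stddef.h>
#include <OMX_Core.h>
#include <OMX_Component.h>

/* Placeholders for a contiguous shared-memory allocator; not real API names. */
extern void *shared_contig_alloc(size_t size);
extern unsigned long shared_virt_to_phys(void *virt);

/* Sketch: give an OMX output port a buffer we allocated from shared,
 * physically contiguous memory, so the DSP can later be handed just the
 * physical address of the frame instead of receiving a copy.            */
static OMX_ERRORTYPE attach_shared_buffer(OMX_HANDLETYPE comp,
                                          OMX_U32 port_index,
                                          OMX_U32 buf_size,
                                          OMX_BUFFERHEADERTYPE **hdr_out,
                                          unsigned long *phys_out)
{
    OMX_U8 *buf = shared_contig_alloc(buf_size);
    if (buf == NULL)
        return OMX_ErrorInsufficientResources;

    /* The component uses our buffer instead of allocating its own. */
    OMX_ERRORTYPE err = OMX_UseBuffer(comp, hdr_out, port_index,
                                      NULL /* pAppPrivate */, buf_size, buf);
    if (err == OMX_ErrorNone)
        *phys_out = shared_virt_to_phys(buf);  /* address the DSP would use */
    return err;
}
```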
Is this possible, and is it a proper approach?
I would like to hear your suggestions.
Best regards.
jonpgil