I'm concerned by the decode_display demo's high CPU utilisation. At 1080p60 we are seeing roughly 30% utilisation per decode stream (as reported by "top")!
85% of the program's time is spent in the "Decode_DoChunking" function, which determines how big the next frame is. I have read the OpenMAX User Guide and see that TI's implementation of the VDEC component requires the start of the frame to coincide exactly with the start of the buffer passed to the component.
This is fair enough; however, the hardware must itself be determining the frame size as it decodes, so is there any way to get this information from the HDVICPs back to the application? That would eliminate the 85% of the time spent parsing the H.264 stream. This webpage mentions a "file-mode", but I have no documentation on implementing it, or even on whether TI's OpenMAX implementation supports it.
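For context on where those cycles go: I assume the chunking stage is essentially a byte-wise scan of the Annex-B elementary stream for the next 00 00 01 start code, so the ARM touches every byte of the bitstream just to find frame boundaries. A minimal sketch of that kind of scan (hypothetical helper, not TI's actual Decode_DoChunking code) looks like this:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch of what a Decode_DoChunking-style scanner must do:
 * a linear search for the Annex-B start code (00 00 01) that marks the
 * beginning of the next NAL unit / access unit. Scanning every byte of
 * a 1080p60 stream on the ARM is where the CPU time goes. */
static size_t find_next_start_code(const uint8_t *buf, size_t len, size_t pos)
{
    for (size_t i = pos; i + 2 < len; i++) {
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1)
            return i; /* offset of the start-code prefix */
    }
    return len; /* no further start code in this buffer */
}
```

If the codec engine already locates these boundaries internally, exposing that offset to the application would make a scan like this redundant.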
This is a really important issue for us, as at present it looks like we will have to use the DSP to parse the H.264 frames just so the ARM has enough spare utilisation to do anything else.
Thanks,
Ralph