I'm working with a prototype device that has a Ducati HW encoder/decoder. I'm interested in accessing the H.264 encoder/decoder.
Here's the decoder name (from kKeyDecoderComponent): OMX.TI.DUCATI1.VIDEO.DECODER
I'm using the private C/C++ API to access it via Stagefright/OpenMAX on Android Jelly Bean. I have a sample 720p H.264 stream captured from the HW encoder at 5 Mbps. My problem is that I cannot get the decoder to handle that stream in real time: no matter what I try, at best I can decode a 30-second stream in 35 seconds. Yet the very same stream, decoded with the Java MediaCodec API, plays back in real time.

The only big difference is that the Java API decodes the video directly to a native window, whereas from the C/C++ Stagefright API I request kClientNeedsFramebuffer, i.e. in my C++ code I ask for the decoded YUV frames so that I can use them. If I don't do anything with the YUV frames, it takes 35 seconds to decode 900 frames (30 seconds of video). If I do any processing of the decoded frames, that time increases from 35 to 150 seconds, as if the YUV processing were done by the same CPU that does the decoding (note that I pass the decoded YUV data to another thread for processing, so it doesn't block the video decoding pipeline).
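For reference, here's roughly what my decode path looks like. This is a simplified sketch, not my actual code: function and variable names (decodeStream, h264Source) are placeholders, error handling is trimmed, and the handoff to the processing thread is only indicated by a comment.

```cpp
#include <media/stagefright/foundation/ADebug.h>
#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MediaErrors.h>
#include <media/stagefright/MediaSource.h>
#include <media/stagefright/MetaData.h>
#include <media/stagefright/OMXClient.h>
#include <media/stagefright/OMXCodec.h>

using namespace android;

void decodeStream(const sp<MediaSource> &h264Source) {
    OMXClient client;
    CHECK_EQ(client.connect(), (status_t)OK);

    // No native window: kClientNeedsFramebuffer, so read() returns
    // the raw YUV frames to my code instead of rendering them.
    sp<MediaSource> decoder = OMXCodec::Create(
            client.interface(),
            h264Source->getFormat(),
            false,                              // createEncoder
            h264Source,
            NULL,                               // matchComponentName
            OMXCodec::kClientNeedsFramebuffer,
            NULL);                              // nativeWindow

    CHECK_EQ(decoder->start(), (status_t)OK);

    MediaBuffer *buffer;
    status_t err;
    while ((err = decoder->read(&buffer)) == OK
            || err == INFO_FORMAT_CHANGED) {
        if (err == INFO_FORMAT_CHANGED) {
            continue;  // output geometry changed, no buffer to consume
        }
        if (buffer->range_length() > 0) {
            // Queue the YUV data to a separate processing thread here;
            // even with this branch left empty, 900 frames take ~35 s.
        }
        buffer->release();
        buffer = NULL;
    }

    decoder->stop();
    client.disconnect();
}
```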
Any idea what's wrong? Why can't I achieve the same performance the Java API gets when decoding to a surface for display?