Ducati OpenMAX on Android

I'm working with a prototype device that has the Ducati HW encoder/decoder, and I'm interested in accessing the H.264 encoder/decoder.

Here's the decoder name, as reported by kKeyDecoderComponent: OMX.TI.DUCATI1.VIDEO.DECODER

I'm using the private C/C++ API to access it via stagefright/OpenMAX on Android Jelly Bean. I have a sample 720p H.264 stream captured from the HW encoder at 5 Mbps. My problem is that I cannot get the decoder to handle that stream in real time: no matter what I try, at best I can decode a 30-second stream in 35 seconds. Yet the very same stream decodes in real time through the MediaCodec Java API. The only big difference is that the Java API decodes directly to a native window, whereas from the C/C++ stagefright API I request kClientNeedsFramebuffer, i.e. my C++ code asks for the decoded YUV frames so that I can use them (a sketch of this setup follows the question). If I do nothing with the YUV frames, it takes 35 seconds to decode 900 frames (30 seconds of video). If I do any processing of the decoded frames, that time grows from 35 to 150 seconds, as if the YUV processing ran on the same CPU that does the decoding (note: I hand the decoded YUV data to another thread for processing, so it does not block the video decoding pipeline).

Any idea what's wrong? Why can't I achieve the same performance the Java API gets when decoding to a surface for display?
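For context, here is a minimal sketch of the C++ path described above, written against the stock Jelly Bean stagefright headers. The file path, the choice of track 0, and the missing error handling are simplifications, not the poster's actual code:

    #include <stdio.h>
    #include <binder/ProcessState.h>
    #include <media/stagefright/DataSource.h>
    #include <media/stagefright/MediaBuffer.h>
    #include <media/stagefright/MediaErrors.h>
    #include <media/stagefright/MediaExtractor.h>
    #include <media/stagefright/MediaSource.h>
    #include <media/stagefright/MetaData.h>
    #include <media/stagefright/OMXClient.h>
    #include <media/stagefright/OMXCodec.h>

    using namespace android;

    void decodeToYuv(const char *path) {            // path is a placeholder
        ProcessState::self()->startThreadPool();    // stagefright talks to mediaserver over binder
        DataSource::RegisterDefaultSniffers();

        sp<DataSource> file = DataSource::CreateFromURI(path);
        sp<MediaExtractor> extractor = MediaExtractor::Create(file);
        sp<MediaSource> track = extractor->getTrack(0);  // assumes track 0 is the video track

        OMXClient client;
        client.connect();

        // kClientNeedsFramebuffer asks for CPU-readable YUV output
        // instead of rendering into an ANativeWindow.
        sp<MediaSource> decoder = OMXCodec::Create(
                client.interface(), track->getFormat(),
                false /* createEncoder */, track,
                NULL /* matchComponentName */,
                OMXCodec::kClientNeedsFramebuffer);

        // This is where OMX.TI.DUCATI1.VIDEO.DECODER shows up.
        const char *component;
        if (decoder->getFormat()->findCString(kKeyDecoderComponent, &component)) {
            printf("decoder: %s\n", component);
        }

        decoder->start();
        MediaBuffer *buffer;
        status_t err;
        while ((err = decoder->read(&buffer)) == OK || err == INFO_FORMAT_CHANGED) {
            if (err == INFO_FORMAT_CHANGED) {
                continue;  // output geometry changed; re-query decoder->getFormat()
            }
            // buffer->data() + buffer->range_offset() is the decoded YUV frame;
            // hand it to a worker thread here, then release it.
            buffer->release();
        }
        decoder->stop();
        client.disconnect();
    }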

  • It is tough to answer this without looking at your code. One reason I can think of is the way the buffers are being set up. The stagefright APIs are hooked up with the HW codecs, gralloc, and the HWC so that the most efficient memory scheme (TILER) is used and decoded frames are rendered by the DSS hardware. I'm not sure your native app is doing the same.

  • Well, in my case I'm not doing anything special. I have an H.264-encoded stream and I want to get YUV frames. I don't render these YUV frames; I simply calculate a checksum to verify the correctness of the decoder (it's bit-exact with my software codec). The checksum takes some time (720p frames are over a megabyte each); if I replace it with a no-op, it still takes at best 35 seconds to decode 30 seconds of video. If I do the checksum, it suddenly takes far longer, as if the checksum calculation were stealing CPU time from the decoder. Is there any sample code that uses Ducati from stagefright? I'd like to review it to see how the Android player manages to play the same stream in real time without any problem.
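To put the checksum cost in numbers: a 720p NV12 frame is 1280 × 720 × 1.5 ≈ 1.4 MB before stride padding, so checksumming at 30 fps means reading roughly 40 MB/s of codec-written (typically uncached) memory. A minimal sketch of that hand-off, with illustrative names; one thing worth verifying in the real code is that each MediaBuffer is copied out and released immediately, because the codec has only a small pool of output buffers and stalls while the client holds them:

    #include <stddef.h>
    #include <stdint.h>
    #include <vector>

    // Illustrative helper: sum every byte of the decoded frame. Touching
    // ~1.4 MB per 720p frame is the expensive part, not the arithmetic.
    static uint32_t frameChecksum(const uint8_t *data, size_t size) {
        uint32_t sum = 0;
        for (size_t i = 0; i < size; ++i) {
            sum += data[i];
        }
        return sum;
    }

    // Copy the frame out of the MediaBuffer so it can be released at once;
    // the worker thread then checksums its private copy.
    static std::vector<uint8_t> copyFrame(const uint8_t *data, size_t size) {
        return std::vector<uint8_t>(data, data + size);
    }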

  • I found a similar question on SO. It appears that when the decoder is not rendering to a native surface (e.g. to the screen), decoding becomes very slow. I had that feeling, but wasn't sure whether I had done something incorrectly. So passing kClientNeedsFramebuffer to OMXCodec::Create makes decoding roughly twice as slow as rendering directly to the screen.
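The difference comes down to the last two arguments of OMXCodec::Create. A side-by-side sketch, reusing client and track from the sketch above and assuming nativeWindow was obtained from a Java Surface:

    // Fast path: decoded frames stay in gralloc/TILER buffers and are
    // handed straight to the display; the CPU never touches the pixels.
    sp<MediaSource> onScreen = OMXCodec::Create(
            client.interface(), track->getFormat(),
            false /* createEncoder */, track,
            NULL /* matchComponentName */,
            0 /* flags */,
            nativeWindow);

    // Slow path: the client wants CPU-readable YUV, so the codec output
    // cannot stay in (or must be converted out of) the tiled layout.
    sp<MediaSource> toMemory = OMXCodec::Create(
            client.interface(), track->getFormat(),
            false /* createEncoder */, track,
            NULL /* matchComponentName */,
            OMXCodec::kClientNeedsFramebuffer);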

  • That's right. If the client needs the decoded buffer, then either TILER buffers cannot be used, or the TILER 2D buffers need to be translated to 1D buffers in software. In both cases there will be a substantial performance drop.
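For a sense of what that software translation costs: a TILER 2D container is addressed through a large fixed line stride, so turning a tiled plane into a linear one is a strided copy over every decoded pixel. A rough illustration, assuming an 8-bit plane and a 16 KB container stride (the real TILER page layout is more involved):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    enum { kTilerStride = 16 * 1024 };  // assumed 8-bit 2D container line stride

    // Copy a tiled 2D plane into a packed 1D buffer. Every pixel is read
    // and written once more, which is where the substantial drop comes from.
    static void untilePlane(uint8_t *dst, const uint8_t *src,
                            int width, int height) {
        for (int y = 0; y < height; ++y) {
            memcpy(dst + (size_t)y * width,        // packed 1D destination
                   src + (size_t)y * kTilerStride, // strided 2D source
                   width);
        }
    }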