Hello,
I'm using the h264 decoder in closed loop mode (although forcing it into universal decoder mode shows no performance decrease, so maybe I'm not setting closed loop mode correctly). I'm using the low latency API and feeding slices to the decoder. The async calls are very fast and I feed multiple slices per call. Basically all of the decoder's time is spent in the VICP_Wait() call, and each async call takes around 30ns to execute, so I don't think the low latency implementation is responsible for the bad performance. However, decreasing the number of slices per frame (to a number that basically makes the low latency calls useless) improves performance.
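For reference, here is the shape of my decode loop. This is a minimal sketch: slice_t, get_next_slice(), submit_slices_async(), now_us() and the slice counts are placeholders I made up for this post so it compiles standalone; only the VICP_Wait() step corresponds to the real call I'm blocking on.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

typedef struct { const uint8_t *data; size_t len; } slice_t;

/* Stubs standing in for my actual code so the sketch compiles;
 * in the real application these feed the hardware decoder. */
static slice_t *get_next_slice(void) { static slice_t s; return &s; }
static void submit_slices_async(slice_t **slices, int n) { (void)slices; (void)n; }
static void VICP_Wait_stub(void) { /* the real VICP_Wait() blocks here */ }

/* Placeholder monotonic microsecond clock. */
static uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000ull + (uint64_t)ts.tv_nsec / 1000u;
}

static void decode_one_frame(int slices_per_call, int slices_per_frame)
{
    slice_t *batch[16];
    int fed = 0;
    uint64_t t0 = now_us();

    while (fed < slices_per_frame) {
        int n = 0;
        while (n < slices_per_call && n < 16 && fed < slices_per_frame) {
            batch[n++] = get_next_slice();
            fed++;
        }
        /* Each async call returns in ~30 ns, so feeding is cheap. */
        submit_slices_async(batch, n);
    }

    VICP_Wait_stub(); /* virtually all of the 15-17 ms per frame is spent here */

    printf("frame decode time: %llu us\n",
           (unsigned long long)(now_us() - t0));
}

int main(void)
{
    decode_one_frame(4, 32); /* e.g. 32 slices per 720p frame, 4 per call */
    return 0;
}
```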
Here is the performance I'm seeing:
- Average decode time with a 1080p@60fps display and a 720p@60fps stream is 16 to 16.5ms. However, it sometimes goes above 17ms, which breaks my decoder.
- I noticed that when the decoder slows down and my incoming buffer overflows (generating a decoding error) a few times, the decode time with the same setup can drop to 15ms. If I then break the stream (generating another decoder error), the decode time either stays at 15ms or goes back up to 16-17ms. So if the decoder has a small incoming buffer that keeps overflowing because of bad decoding performance, the average decode time automatically drops from 16 to 15ms after a few errors!
- I also noticed that when I decrease the display/OSD resolution from 1080p@60fps to 720p@60fps, the decoding performance is stable at 15ms or lower.
Is this expected behaviour? I guess so, since there is no way to resize the 720p stream to 1080p for a 1080p display anyway.
Thanks!