The current H.264 decoder implementation takes max-width and max-height parameters, which impose serious limitations on our system.
First, although these parameters are named "maximum width" and "maximum height", they do not actually mean that. The code checks the maximum width, but not the maximum height; instead, it checks maximum_width * maximum_height. This makes the parameters hard to reason about.
Second, these parameters do not match the standard. H.264 levels specify a limit on the maximum number of macroblocks per frame, not a max width/height. For instance, level 2.0 allows at most 396 macroblocks, which is exactly the macroblock count of a CIF picture (352x288). If max width/height is set to 704/576, which is far larger than CIF, we still cannot decode a 768x128 picture, even though at 384 macroblocks it is a perfectly legitimate level 2.0 picture.
Ideally, the decoder should not have these parameters at all. If the pre-allocated buffers turn out to be too small for a picture with unusual dimensions, the decoder should release the old buffers and allocate larger ones.
As suggested by CouthIT, we can work around the existing implementation as follows:
For instance, for level 2.0:
(a) Set max width to 1056 and max height to 96, so the macroblock count is (1056/16) * (96/16) = 396; this requires 69 KB of L2SRAM.
or
(b) Set max width to 1584 and max height to 64, so the macroblock count is (1584/16) * (64/16) = 396; this requires 77 KB of L2SRAM.
This work-around would cover most cases, but it still has limits: it cannot handle all extreme dimensions, and it needs special handling for each profile/level on our system.
I hope this problem can be fixed soon.
Thanks,
Daniel