Hello,
Should numBufPerCh for the encoder buffers be equal to the GOP size?
Thank you,
Ran
No, it is not related to GOP size. It should be set to a number such that the encoder link does not drop input frames. Encoder Link will drop an input buffer if it finds no output buffer available. An output buffer may not be available because of IVA-HD load (the encoder is not real time) or because the application on the HLOS is not freeing buffers back to ipcBitsInLink_HLOS in time. Typically 4 - 6 buffers per channel is sufficient.
Note that in DVR RDK we don't support B-frame encoding. If B-frame encoding were supported, the encoder would internally lock buffers and output them in decode order. In that case more encoder output buffers would be required, but this scenario is not supported in DVR RDK.
Hi Badri,
I would like to check a few more things on this issue...
The codec does support B-frames as I understand it. What does supporting B-frames mean for the framework? Does it mean the framework needs to rearrange frames before encoding/decoding? Does it mean that only Baseline Profile is supported (the other profiles use B-frames, if I'm right)?
Thank you very much for all the information!
Ran
The codec does _not_ support B-frame encoding in the processN mode that we use in DVR RDK. In processN mode, a single process call takes input for multiple channels and produces output for all of them. This is done to improve IVA utilization and increase channel density. The limitation is due to the IVA SL2 size.
If B-frame encoding is required:
1. processN APIs should not be used.
2. Framework changes are required in encLink to lock output buffers and release them based on codec outArgs.
The framework changes in encLink are minor, but there is currently no plan to implement B-frame encoding, as it is not a common requirement in DVR products and we can't disable the processN API in DVRs since it affects channel density.
Hi Badri,
- I assume then that the RDK framework does not support B-frames in decoding either, right?
- Is there a need to rearrange buffer order if B-frames are supported?
- I know that BP means there are no B-frames, but are the other profiles (MP, HP) in RDK also used with no B-frames?
Thanks,
Ran
Hi Badri,
Thank you very much, your answers are very helpful.
Can you please describe what you mean in
"2. Framework changes are required in encLink to lock output buffers and release them based on codec outArgs."? Does it also mean that the buffer count should be bigger? What is channel density?
Thank you very much!
Ran
The current behavior of encLink is:
1. Get input frame.
2. Get output bitstream buffer.
3. encode()
4. Send output buffer.
5. Free input buffer.
With B-frame encoding:
1. Get input frame.
2. Get output bitstream buffer.
3. encode()
4. Check if the output buffer can be released.
If the buffer is locked, check again during the next process call whether it can be released.
5. Free input buffer.
Does it also mean that the buffer count should be bigger?
-- Yes, that is correct. Since buffers are locked, more output buffers are required in encLink.
What is channel density ?
-- This is the number of encode/decode channels that are possible in a device. For example on 816x, we currently do 16 ch D1 @ 30 fps h264 encode + 16 ch CIF @ 30 fps h264 encode + 16 ch D1 @ 1 fps MJPEG encode + 16 ch D1 @ 30 fps H264 decode.
Hi Badri,
I understand that in order to support B-frames, processN should not be used. But is that because the processN API implies the codec does not use a lock mechanism, and when processN is disabled the lock mechanism is automatically used? From the description of the framework changes, the main difference is that instead of sending the buffer immediately, we should check whether it is locked and only then release it, i.e. the lock/unlock is done internally in the codec. Is that because processN is disabled?
Best Regard,
Ran
In processN mode of operation the codec has to maintain state for multiple channels in a single HDVICP process call. All of this state has to be maintained in the HDVICP internal memory (SL2), which is limited in size. The HDVICP SL2 was sized assuming single-channel operation. So when using processN mode, some features have to be compromised so that the state of multiple channels fits in the HDVICP internal memory (SL2). B-frame encoding is one such feature, which is why B-frame encoding doesn't work in processN mode. It is not related to locks. This is the same reason why the processN APIs cannot be applied to HD-resolution frames (SL2 size limitation).
Hi Badri,
Thanks for explaining processN mode. I thought that the memory map (SR1, SR2, ...) allocated in DDR should be properly designed to deal with the number of frames needed for encoding/decoding, but I see that the size of the internal memory (SL2) also imposes limitations. If there is not enough space in SL2, doesn't it just use DDR?
So if I'm right, the codec itself decides whether to lock or unlock, probably according to the frame type. When B-frames are not used there is no locking involved. As I understand it, the framework also has to know about the lock/unlock to properly release buffers.
Thank you,
Ran
"... but I see that the size of the internal memory (SL2) also imposes limitations. If there is not enough space in SL2, doesn't it just use DDR?"
- From a system-design point of view there is nothing explicit you have to take care of or can do, apart from being aware of the limitation. This limitation is due to the codec implementation of processN mode, and if you don't use processN it has no impact. It is not possible to fall back to DDR if SL2 runs out of space, due to the way the HDVICP hardware works.
"So if I'm right, the codec itself decides whether to lock or unlock, probably according to the frame type. When B-frames are not used there is no locking involved. As I understand it, the framework also has to know about the lock/unlock to properly release buffers."
- Yes, the codec makes the decision and informs the framework which buffers to lock/unlock via the outArgs it populates during each process call.
Hi Badri,
What actually is "channel state"? Is the state of an HD channel larger than that of a D1 channel?
I searched the codec user's guide for "channel state", but I couldn't find it.
Best Regards,
Ran
It is the persistent data required to process (encode/decode) a frame. It is proportional to the number of macroblocks in the frame, so D1 state will be smaller than HD state. As I mentioned previously, this is an internal codec implementation detail, and from an integration point of view being aware of the limitation is sufficient. The reason for the limitation is not documented in the user guide; I provided the background info for understanding purposes. If you find it confusing, you can ignore the information and just remember the restriction when using the processN APIs.