Hi, everyone
Is it possible to get 5MP H.264/MPEG4 video if I don't care about the frame rate? How can I implement this on the IPNC36X?
Hi,
5MP capture is already supported in DM36x IPNC Ref Design ver 2.0. Currently it only does capture and MJPEG.
For 5MP H.264 you need a new encoder. DM36x IPNC Ref Design ver 2.5 will have it already integrated in the system, so I would suggest you wait for that instead of implementing it on your own. It should be available in the next 2-3 weeks. You can contact your local sales/FAE to get a better update on the status.
5MP MPEG4 encoding is not currently supported on DM36x. What is the requirement for 5MP MPEG4? It will consume a lot of bits, and I believe H.264 would be a better solution at such a high resolution.
Regards,
Anshuman
PS: Please mark this post as verified, if you think it has answered your question. Thanks.
Hi, Anshuman,
My most important requirement is a snapshot during the video stream with a very short shot time delay (less than 10ms). The snapshot resolution is different from the video resolution, and a short pause of the video is acceptable. I also need to record a short video clip around the snapshot moment. I think it may be too complex for me to implement this myself, so I hope this feature is on Appro's development schedule.
Currently I take the snapshot during 5MP MJPEG video so that I get image enhancement such as AE/AWB.
Hi,
It is not trivial to achieve that kind of shot time delay. We have to change the capture mode from the regular video stream mode to a 5MP mode, and during this time the display and capture have to be stopped. It also depends on the sensor, i.e. at what rate it can capture and transmit the 5MP data.
But it is possible to do video recording while the 5MP snapshots are taken. We can surely resize from 5MP to the required resolution and continue video recording. Of course the frame rate will depend on the sensor fps @ 5MP.
I am not sure when we can have it done in the IP Camera Ref Design; currently this feature is not planned. We can surely guide you on how to do that, but getting the shot delay down to 10ms needs much more thought and a feasibility check. From my current understanding, it might not be possible.
Regards,
Anshuman
PS: Please mark this post as verified, if you think it has answered your question. Thanks.
Hi, Anshuman
Thanks!
Anshuman Saxena said: It is not trivial to achieve that kind of shot time delay. We have to change the capture mode from the regular video stream mode to a 5MP mode, and during this time the display and capture have to be stopped. It also depends on the sensor, i.e. at what rate it can capture and transmit the 5MP data.
The shot time delay I mentioned is from the snapshot trigger signal to the shutter start; it does not include sensor data readout time or the follow-up image enhancement and compression time.
If we can catch the trigger signal (a GPIO input pulse) in an ISR and then switch the sensor mode from continuous video to snapshot in the Linux kernel driver (this just sets several of the sensor's registers over the I2C bus), I think the time delay could be reduced. In addition, we wouldn't need to stop the capture and display tasks manually; they would just recognize the snapshot frame automatically (from the resolution difference or a flag set by the kernel driver), then skip the snapshot frame in the video stream and encode it to JPEG and save it.
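Just to illustrate the kernel-side idea, a rough sketch could look like the code below. This is not the actual IPNC driver code: the GPIO number, the I2C register address and the mode value are made-up placeholders. The hard IRQ handler only latches the trigger time and the I2C write is deferred to a threaded handler, because I2C transfers can sleep and must not run in hard-IRQ context:

#include <linux/types.h>
#include <linux/interrupt.h>
#include <linux/gpio.h>
#include <linux/i2c.h>
#include <linux/ktime.h>

#define SNAP_GPIO        42      /* placeholder: trigger input pin      */
#define SENSOR_REG_MODE  0x3000  /* placeholder: sensor mode register   */
#define SENSOR_MODE_SNAP 0x02    /* placeholder: 5MP single-shot mode   */

static struct i2c_client *sensor_client;  /* set when the sensor driver probes */
static ktime_t trigger_time;

/* Hard IRQ handler: only latch the trigger time, then wake the IRQ thread. */
static irqreturn_t snap_hard_irq(int irq, void *data)
{
	trigger_time = ktime_get();
	return IRQ_WAKE_THREAD;
}

/* Threaded handler: reprogram the sensor over I2C (this may sleep). */
static irqreturn_t snap_irq_thread(int irq, void *data)
{
	u8 buf[3] = { SENSOR_REG_MODE >> 8, SENSOR_REG_MODE & 0xff,
		      SENSOR_MODE_SNAP };

	i2c_master_send(sensor_client, buf, sizeof(buf));
	return IRQ_HANDLED;
}

static int snap_trigger_init(void)
{
	int irq = gpio_to_irq(SNAP_GPIO);

	return request_threaded_irq(irq, snap_hard_irq, snap_irq_thread,
				    IRQF_TRIGGER_RISING, "snap-trigger", NULL);
}

With this the latency from the GPIO pulse to the first I2C write is basically the IRQ thread wake-up plus the I2C transaction itself; whether the 10ms-to-shutter-start budget can actually be met still depends on the sensor.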
Because I am not familiar with Linux kernel driver programming, I currently catch the trigger signal and switch the image sensor mode in a separate thread in av_server, recognize the snapshot frame in the capture thread, record its frame number or timestamp, and save it as JPEG in the stream process thread.
Anshuman Saxena said: We can surely guide you on how to do that, but getting the shot delay down to 10ms needs much more thought and a feasibility check.
The problem I have encountered is that I don't know whether and how to notify all the threads in av_server (capture, AE, AWB, LDC, noise filter, etc.) when a different-resolution snapshot frame is taken. I think they may have different sensitivities to the resolution change, but I would still use the previous AE/AWB prediction parameters to reduce the time delay.
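One possible way to handle the notification (just a sketch with placeholder names, not av_server's real structures or buffer handling) is a single mutex-protected snapshot marker: the trigger thread arms it with the expected snapshot resolution, the capture thread stamps the frame number when the different-resolution frame arrives, and every other stage (AE/AWB, LDC, noise filter, encode/stream) only asks whether the frame it is processing is the marked one:

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

/* Shared snapshot marker; all names here are placeholders for illustration. */
struct snap_marker {
	pthread_mutex_t lock;
	bool            pending;        /* trigger seen, snapshot frame expected   */
	bool            marked;         /* capture thread has identified the frame */
	uint32_t        frame_no;       /* frame number stamped by capture thread  */
	uint32_t        width, height;  /* expected snapshot resolution            */
};

static struct snap_marker g_snap = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* Trigger thread: arm the marker when the GPIO pulse / kernel event is seen. */
void snap_request(uint32_t width, uint32_t height)
{
	pthread_mutex_lock(&g_snap.lock);
	g_snap.pending = true;
	g_snap.marked  = false;
	g_snap.width   = width;
	g_snap.height  = height;
	pthread_mutex_unlock(&g_snap.lock);
}

/* Capture thread: stamp the frame number of the different-resolution frame. */
bool snap_mark_if_snapshot(uint32_t frame_no, uint32_t w, uint32_t h)
{
	bool is_snap = false;

	pthread_mutex_lock(&g_snap.lock);
	if (g_snap.pending && w == g_snap.width && h == g_snap.height) {
		g_snap.frame_no = frame_no;
		g_snap.pending  = false;
		g_snap.marked   = true;
		is_snap = true;
	}
	pthread_mutex_unlock(&g_snap.lock);
	return is_snap;
}

/* AE/AWB/LDC/NF/stream threads: check whether a frame is the snapshot frame. */
bool snap_is_snapshot_frame(uint32_t frame_no)
{
	bool is_snap;

	pthread_mutex_lock(&g_snap.lock);
	is_snap = g_snap.marked && (frame_no == g_snap.frame_no);
	pthread_mutex_unlock(&g_snap.lock);
	return is_snap;
}

The AE/AWB and noise-filter threads would then simply not update their statistics on a frame for which snap_is_snapshot_frame() returns true (keeping the previous prediction parameters), and the stream thread would route that frame to the JPEG encoder instead of the video stream. This only shows the signaling; how the real av_server threads exchange buffers is a separate question.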