On the IPNC368 I want to suspend the D1 H.264/MPEG-4 stream, capture a 5MP JPEG picture, and then resume the D1 H.264/MPEG-4 video stream when triggered by an external signal

I don't have an IPNC368 camera or devkit at the moment, but I think there are two possible methods:

1. Suspend the H.264/MPEG-4 video stream, switch to snapshot mode to take the picture, then resume the D1 H.264/MPEG-4 video stream.
   The problem is that there is no separate Appro API function that can "switch to snapshot mode to get the picture" while the video stream is suspended.
   The Appro picture-capture method seems to only extract a frame from the active video stream, so the picture size cannot be larger than the video resolution.

2. Start an active D1 H.264 stream and a companion suspended MJPEG stream at startup. When a picture is needed, suspend the H.264 stream,
   activate the MJPEG stream at 5MP resolution, extract a frame from the MJPEG stream, then reactivate the H.264 stream and suspend the MJPEG stream again.

Can anyone offer some help? Is it possible to "resume" the video stream, or does it have to be reinitialized?
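
To illustrate what I mean by "resume": I am hoping the encode loop can simply be gated and released, rather than tearing down and reinitializing the whole pipeline. A minimal sketch of that idea using plain pthreads (stream_encode_one_frame() is only a placeholder I made up, not an Appro/IPNC API):

    #include <pthread.h>
    #include <stdbool.h>

    /* Placeholder for whatever captures and encodes one D1 frame;
     * not a real Appro/IPNC function. */
    static void stream_encode_one_frame(void) { }

    static pthread_mutex_t gate_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  gate_cond = PTHREAD_COND_INITIALIZER;
    static bool paused = false;

    /* Called around the snapshot: suspend and resume the D1 stream. */
    void stream_pause(void)
    {
        pthread_mutex_lock(&gate_lock);
        paused = true;
        pthread_mutex_unlock(&gate_lock);
    }

    void stream_resume(void)
    {
        pthread_mutex_lock(&gate_lock);
        paused = false;
        pthread_cond_signal(&gate_cond);
        pthread_mutex_unlock(&gate_lock);
    }

    /* D1 H.264/MPEG-4 encode thread: blocks while paused, carries on afterwards. */
    void *encode_loop(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&gate_lock);
            while (paused)
                pthread_cond_wait(&gate_cond, &gate_lock);
            pthread_mutex_unlock(&gate_lock);

            stream_encode_one_frame();
        }
        return NULL;
    }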

  • Hi,

    In general, for any sensor, if you want to capture a 5MP snapshot in the middle of a D1-resolution video stream, you have to change the sensor capture mode. For D1 resolution you would not be resizing 5MP captured data down to D1, and of course you need a higher frame rate for D1 video, so you would configure the sensor for D1 or a similar resolution.

    In the Appro camera as well, for D1 mode we configure the sensor width and height accordingly.

    Now, whenever a snapshot is required, you have to freeze the display, switch the sensor mode and then do the image capture.

    This is not inherently available in the IPNC Ref Design, but there are APIs to change the sensor mode and timings. Neither of the two options you suggested is likely to work directly. I am putting down some points that can help; a rough flow sketch follows the list.

    1. Create/allocate buffers for the 5MP size even if the stream is D1 resolution. This way you will not have to reallocate buffers when switching to 5MP mode.

    2. Create an MJPEG stream with 5MP resolution and leave it in a waiting state. It is not implemented this way in the IPNC Ref Design, but you can always create a thread for it.

    3. When the user input to take a 5MP snapshot arrives, freeze the display by not queueing any further display buffers.

    4. Change the sensor mode and sensor timing (you can refer to the 5MP use case in the IPNC Ref Design source code).

    5. Put the D1 H.264 stream into a wait state. Again, this is not implemented as-is in the IPNC Ref Design, but you can change it to create a separate encode thread and do a semaphore wait.

    6. Do the 5MP capture and MJPEG encode.

    7. Reverse all of the above steps to go back to D1 encoding.
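
    As a rough sketch of how steps 3-7 might be sequenced: all of the ipnc_* names below are placeholders for the corresponding IPNC Ref Design / Appro driver operations, not actual SDK functions, and the 5MP-sized buffers from step 1 and the waiting MJPEG stream from step 2 are assumed to exist already.

        /* Placeholder hooks; none of these are real IPNC Ref Design APIs. */
        extern void ipnc_display_stop_queueing(void);   /* step 3: freeze display                  */
        extern void ipnc_sensor_set_mode_5mp(void);     /* step 4: sensor mode and timing for 5MP  */
        extern void ipnc_h264_stream_pause(void);       /* step 5: encode thread blocks on its sem */
        extern void ipnc_capture_frame(void *buf);      /* step 6: raw 5MP capture                 */
        extern void ipnc_mjpeg_encode(const void *buf); /* step 6: JPEG encode                     */
        extern void ipnc_sensor_set_mode_d1(void);      /* step 7: back to D1 mode and timing      */
        extern void ipnc_h264_stream_resume(void);      /* step 7: release the H.264 encode thread */
        extern void ipnc_display_start_queueing(void);  /* step 7: resume display buffer queueing  */

        void take_5mp_snapshot(void *snapshot_buf)      /* buffer allocated at 5MP size at init */
        {
            ipnc_display_stop_queueing();
            ipnc_sensor_set_mode_5mp();
            ipnc_h264_stream_pause();

            ipnc_capture_frame(snapshot_buf);
            ipnc_mjpeg_encode(snapshot_buf);

            ipnc_sensor_set_mode_d1();
            ipnc_h264_stream_resume();
            ipnc_display_start_queueing();
        }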

    I hope this helps. Please note that streaming of the D1 video will be halted while the 5MP capture is happening.

    Regards,

    Anshuman

    PS: Please mark this post as verified, if you think it has answered your question. Thanks.

  • Thanks Anshuman.

    Now I know that I have to change the sensor mode and timing myself to switch resolution and frame rate. In fact there is no local display on my IPNC; I record the video stream and display it on a back-end PC over a TCP/IP network as a preview window (both require a low bitrate, so I can't simply use a 5MP MJPEG stream). A short pause/halt is acceptable for me.

  • I found new problems.

    If I get the snapshot from the MJPEG stream, first, the delay from the snapshot trigger signal to the actual snapshot will be long (it depends on the MJPEG frame rate, so it is effectively a random time), and second, I can't predict the snapshot parameters correctly, such as the exposure time (the camera has to control an external flash lamp when taking a snapshot at night or in a dark environment).

    If I want an immediate snapshot, how much work is left to do? I think I have to modify the image sensor driver to support a single-snapshot mode, along with the corresponding VPFE/VPSS drivers, and predict proper AE and AWB settings for the snapshot from the preceding continuous video stream. It seems very complicated to me.
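
    For the exposure problem, what I am considering (instead of a full single-shot AE/AWB) is to keep the last converged AE/AWB result from the running video stream and program it as fixed manual settings just before the snapshot. A rough sketch of that idea; every name here is a placeholder I made up, nothing from the IPNC Ref Design or the Appro 2A engine:

        /* Placeholder hooks into the sensor/ISP drivers (not real IPNC functions). */
        extern void sensor_set_exposure(unsigned exposure_us);
        extern void sensor_set_gain(unsigned analog_gain);
        extern void isp_set_wb_gains(const unsigned rgb_gains[3]);

        struct aewb_result {
            unsigned exposure_us;    /* last converged exposure time       */
            unsigned analog_gain;    /* last converged sensor gain         */
            unsigned rgb_gains[3];   /* last converged white-balance gains */
        };

        static struct aewb_result last_2a;   /* updated every frame while streaming */

        /* Called from the streaming AE/AWB loop after each iteration. */
        void save_2a_state(const struct aewb_result *r)
        {
            last_2a = *r;
        }

        /* Called just before the 5MP capture: reuse the streaming result,
         * optionally biased when the external flash lamp will fire. */
        void apply_2a_for_snapshot(int flash_on)
        {
            struct aewb_result s = last_2a;
            if (flash_on)
                s.exposure_us /= 2;   /* crude example bias; the real value needs tuning */
            sensor_set_exposure(s.exposure_us);
            sensor_set_gain(s.analog_gain);
            isp_set_wb_gains(s.rgb_gains);
        }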

    Does anyone know Appro's development schedule for this feature?