
DRA744: Camera scaling using VIP

Part Number: DRA744


Android JellyBean is running on a DRA744 (J6). An analog camera is connected to the DRA744 (J6) on pin VIN3a.

The analog camera output is connected to a video decoder (TW2964), which outputs YCbCr 4:2:2. The Android Camera HAL provided by TI (omap4xxx) converts YUV422 to YUV420 (NV12) format (OMAP4xxx/V4LCameraAdapter/).

From the DRA744 TRM I learned that the VIP is also capable of converting YUV422 to YUV420 (NV12).

I do not understand how, when I start camera preview in the Android HAL, I can get YUV420 (NV12) video directly from the VIP/VPE.

  • Hi Jigar,

    The Android Camera HAL, when used with an analog camera, first uses the VIP to capture YUYV fields, then passes these to the VPE for deinterlacing and color space conversion (CSC) to NV12. The NV12 frames are what the Camera HAL provides to the application.

    I'm not sure I understand your entire question, though: do you need direct access to the NV12 frames for some reason? Is this planned for use outside of the Camera App preview?

    In addition, is there a reason that you are still on JellyBean release? This is quite old now and more difficult for us to support.

    Thanks,

    David

  • Hi David

    Thanks for reply.

    Currently I am receiving interlaced video at the HAL in YUV422I format. After receiving the YUV422I frames, the HAL converts them to NV12 (YUV420) with the convertYUV422ToNV12Tiler() function.
    As per the DRA744 TRM, the VIP & VPE are capable of YUV422-to-YUV420 conversion, so I do not want to convert the frame format at the HAL level.
    I need help understanding how I can get NV12 (YUV420) video frames at the HAL, and what changes I need to make on the VIP/VPE side.

    Thanks
    Jigar
  • Hi Jigar,

    I recommend using the "processFrame" function instead of "convertYUV422ToNV12Tiler". The "convert..." function does the color space conversion (CSC) on the ARM Neon core, so unless I'm misunderstanding, you're skipping the VPE entirely. The "processFrame" function calls V4LM2M, which handles the VPE for the HAL. This will let you use the VPE for both CSC and deinterlacing.
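
    On the VPE side this is a standard V4L2 memory-to-memory (mem2mem) client: the OUTPUT queue is fed the YUYV fields from VIP, and the CAPTURE queue returns NV12. As a rough sketch of the format negotiation (the helper name, field settings, and frame size here are illustrative placeholders, not the actual HAL code):

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <linux/videodev2.h>

    /* Hypothetical sketch of how a V4L2 mem2mem client (such as the HAL's
     * V4LM2M path) could describe the VPE conversion. The OUTPUT queue
     * carries the source frames (YUYV from VIP), the CAPTURE queue the
     * converted NV12 frames. */
    static void fill_vpe_formats(struct v4l2_format *src, struct v4l2_format *dst,
                                 unsigned width, unsigned height)
    {
        memset(src, 0, sizeof(*src));
        src->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;   /* frames we feed in */
        src->fmt.pix_mp.width = width;
        src->fmt.pix_mp.height = height;
        src->fmt.pix_mp.pixelformat = V4L2_PIX_FMT_YUYV; /* interlaced YUYV */
        src->fmt.pix_mp.field = V4L2_FIELD_ALTERNATE;    /* let VPE deinterlace */

        memset(dst, 0, sizeof(*dst));
        dst->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;  /* frames we read back */
        dst->fmt.pix_mp.width = width;
        dst->fmt.pix_mp.height = height;
        dst->fmt.pix_mp.pixelformat = V4L2_PIX_FMT_NV12; /* progressive NV12 out */
        dst->fmt.pix_mp.field = V4L2_FIELD_NONE;
    }

    static void print_fourcc(const char *tag, unsigned f)
    {
        printf("%s: %c%c%c%c\n", tag, f & 0xff, (f >> 8) & 0xff,
               (f >> 16) & 0xff, (f >> 24) & 0xff);
    }

    int main(void)
    {
        struct v4l2_format src, dst;

        fill_vpe_formats(&src, &dst, 720, 480);
        print_fourcc("src", src.fmt.pix_mp.pixelformat);
        print_fourcc("dst", dst.fmt.pix_mp.pixelformat);
        /* A real client would now VIDIOC_S_FMT both queues on the VPE node,
         * request and queue buffers, and VIDIOC_STREAMON both queues. */
        return 0;
    }
    ```

    The key point is that the conversion is expressed entirely through the two queue formats; no per-frame CSC code runs on the ARM side.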

    Thanks,

    David

  • Hi David

    As per your recommendation, I am trying to use the "processFrame" function. But before that, when I call setupM2MDevice(), I get the error:

    "Error while adapter initialization: video capture not supported."

    Can you help with that?

    Thanks & Regards

    Jigar

  • Hi Jigar,

    That error is a normal occurrence; we haven't supported video capture (recording) at least since the move to J6. It shouldn't affect camera preview functionality. Are you getting a "Found VPE M2M device" message, though?

    Thanks,

    David

  • Hi David

    Yes, I am getting the "Found VPE M2M device" message.

    Now I am getting a "Can't set color format" error: allocBuffers() returns "invalid argument" for the source buffer. I am passing to allocBuffers():
    type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE
    srcFourcc = V4L2_PIX_FMT_YUYV

    Can you help with this issue?

    Thanks
    Jigar

  • Hi Jigar,

    Those values look fine. Just to clarify: is the VIDIOC_S_FMT ioctl in allocBuffers() returning EINVAL?

    Can you add debug code to the kernel driver "drivers/media/platform/ti-vpe/vpe.c" in the vpe_s_fmt(), __vpe_try_fmt(), and __vpe_s_fmt() functions and share where we're getting this error from?
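
    The tracing can be as simple as a pr_err() at each function's entry and at each -EINVAL return path, along these lines (sketch only; the exact signatures, local variables, and return sites depend on your kernel version):

    ```c
    /* Sketch only: tracing inside drivers/media/platform/ti-vpe/vpe.c */
    static int vpe_s_fmt(struct file *file, void *priv, struct v4l2_format *f)
    {
            pr_err("vpe_s_fmt: type=%u fourcc=0x%08x %ux%u\n",
                   f->type, f->fmt.pix_mp.pixelformat,
                   f->fmt.pix_mp.width, f->fmt.pix_mp.height);
            ...
            /* and at each failure path: */
            pr_err("vpe_s_fmt: rejecting format, returning -EINVAL\n");
            return -EINVAL;
    }
    ```

    The same pattern in __vpe_try_fmt() and __vpe_s_fmt() will show exactly which check is rejecting the format.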

    Thanks,

    David

  • Hi David

    After resolving the allocBuffers() error, I am able to stream video from the "/dev/video0" node.

    Now I want to stream multiple videos from different video nodes (e.g. /dev/video0, /dev/video1) at the same time, but detectM2MDevice() always returns the first device found, "/dev/video0", to setupM2MDevice().

    Can we process multiple videos at the same time using the VPE (processFrame() function)? If yes, is there any reference board/code available that streams multiple videos at the same time using the VPE (processFrame())?

    Thanks & Regards
    Jigar
  • Hi Jigar,

    I'm double-checking with another expert, but I believe that if you want to process multiple video streams with the VPE, you need to open a VPE client for each stream; the VPE driver handles the call serialization and so on from there.

    We don't support this in Camera HAL, but I'll let you know if anyone else knows of example code utilizing this functionality.

    Thanks,

    David

  • Hi Jigar,

    Confirmed: multiple VPE contexts can be open simultaneously to process multiple video streams. An example of this usage would be running "test-v4l2-m2m" multiple times in parallel.
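
    For reference, a minimal user-space sketch of the pattern (the device path is a placeholder; on a real system it would be the VPE node that detectM2MDevice() finds). Each open() on the same mem2mem node creates an independent context with its own formats, buffer queues, and job state:

    ```c
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *node = "/dev/video0"; /* placeholder VPE m2m node */
        int fds[2] = { -1, -1 };
        int opened = 0;
        int i;

        /* Two open() calls on the same node: two independent VPE contexts.
         * On a system without the node, both opens simply fail. */
        for (i = 0; i < 2; i++) {
            fds[i] = open(node, O_RDWR | O_NONBLOCK);
            if (fds[i] >= 0)
                opened++;
        }
        printf("independent contexts opened: %d\n", opened);

        /* Each context would now be configured separately (S_FMT, REQBUFS,
         * STREAMON) and fed its own stream; the driver serializes the
         * hardware jobs between contexts. */
        for (i = 0; i < 2; i++)
            if (fds[i] >= 0)
                close(fds[i]);
        printf("done\n");
        return 0;
    }
    ```

    The same idea applies inside the HAL: open the node once per stream rather than sharing a single file descriptor between streams.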

    Thanks,

    David