This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

Question regarding V4L2 saLoopback application

Other Parts Discussed in Thread: TVP7002

Hi, I'm not familiar with V4L2, so I'll ask some questions.

My environment is an EVM8168 with the VC board; the EZSDK version is 5.03.01.15 and the PSP version is 04.00.02.14.

My start-up loading script is as follows.

echo "Loading syslink"
modprobe syslink
sleep 1
echo "Loading new HDVPSS Firmware [modified by skypiri]"
./slaveloader startup VPSS-M3 ti816x_hdvpss.xem3
sleep 1
echo "Loading VPSS"
modprobe vpss
sleep 1
echo "Loading FrameBuffer"
modprobe ti81xxfb vram=0:40M,1:1M,2:1M
sleep 1
echo "Loading ti81xxvo"
modprobe ti81xxvo
sleep 1
echo "Loading tvp7002"
modprobe tvp7002
sleep 1
echo "Loading ti81xxvin"
modprobe ti81xxvin
sleep 1
echo "Loading ti81xxhdmi"
modprobe ti81xxhdmi

The saLoopback application works well with a 1080i camera.

 

1. What is the difference between MMAP mode and user pointer mode?

2. I added a simple image-processing function to the saLoopback application, shown below.

int Convert_yuv_to_gray( unsigned char *pYuvSrc, unsigned int nWidth, unsigned int nHeight )
{
    /* To turn YUYV into gray scale, set every U and V byte to 128 (0x80). */
    if( !pYuvSrc || nWidth == 0 || nHeight == 0 )
    {
        fprintf( stderr, "%s returns error.\n", __FUNCTION__ );
        return -1;
    }

    unsigned int nPosYuv = 0;
    /* One 4-byte YUYV macropixel (Y0 U Y1 V) covers two pixels, so the
     * frame holds nWidth * nHeight / 2 macropixels. */
    unsigned int nSize = nWidth * nHeight / 2;
    unsigned int nLoop = 0;

    for( nLoop = 0; nLoop < nSize; nLoop++ )
    {
        pYuvSrc[ nPosYuv + 1 ] = pYuvSrc[ nPosYuv + 3 ] = 0x80; /* U, V */
        nPosYuv += 4;
    }

    return 0;
}
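For comparison, the same conversion can also be done with one aligned 32-bit access per macropixel instead of two single-byte stores. This is only a sketch (the name `Convert_yuv_to_gray_w` is mine, not from the SDK): it assumes a little-endian CPU, which the Cortex-A8 is in this configuration, and a 4-byte-aligned buffer. If the V4L2 buffer is mapped non-cacheable, which is common for driver-allocated video memory, fewer and wider memory accesses can help, though copying the frame into a cached buffer first, or off-loading the work entirely, would help more.

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch only: same effect as Convert_yuv_to_gray(), but touches memory
 * one aligned 32-bit word at a time.  On a little-endian CPU one word of
 * YUYV data holds byte0=Y0, byte1=U, byte2=Y1, byte3=V, so we keep the
 * two Y bytes (mask 0x00FF00FF) and force U and V to 0x80. */
int Convert_yuv_to_gray_w( unsigned char *pYuvSrc,
                           unsigned int nWidth, unsigned int nHeight )
{
    if( !pYuvSrc || nWidth == 0 || nHeight == 0 )
        return -1;

    uint32_t *pWord  = (uint32_t *)pYuvSrc;        /* assumes 4-byte alignment */
    size_t    nWords = (size_t)nWidth * nHeight / 2; /* 1 word = 2 pixels */
    size_t    i;

    for( i = 0; i < nWords; i++ )
        pWord[i] = ( pWord[i] & 0x00FF00FFu ) | 0x80008000u;

    return 0;
}
```

The result is byte-for-byte identical to the per-byte version; only the memory access pattern changes.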

I call this function in the main while loop:

 

/* DEQUEUE CAPTURE */

/* DEQUEUE DISPLAY */

/* Exchange display and capture buffer pointers */
temp_buf.m.userptr         = pCaptureObj->buf.m.userptr;
pCaptureObj->buf.m.userptr = pDisplayObj->buf.m.userptr;

/* Image processing on the frame that will be displayed */
Convert_yuv_to_gray( (unsigned char *)temp_buf.m.userptr, 1920, 1080 );

pDisplayObj->buf.m.userptr = temp_buf.m.userptr;

/* ENQUEUE..... */

When I convert YUYV to gray scale, I get only 5~6 fps. But when I turn the conversion function off, I get 30 fps.

The conversion function is only meant to test image processing. The performance is much lower than I expected.

How can I increase the overall performance of image processing on the Cortex-A8 side?

My scenario is "capture -> image processing -> display" or "capture -> image processing -> encoding -> RTP Tx -> RTP Rx -> display".

So I would like to know how to solve this problem.

 

Regards,

Jongpil

 

  • Hi.

    Answers inline,

    Jongpil Won said:
    1. What is the difference between MMAP mode and user pointer mode?

    In user pointer mode, the application allocates the buffers and passes them to the driver. In MMAP mode, the driver allocates the buffers and the application maps them with mmap().

    Regarding performance: you will not get good performance if you try to do image processing on the A8 under Linux. You should do the image processing on the DSP or on other image-processing blocks.

    Regards,

    Hardik Shah

  • Thanks for your response.

    Regarding the performance, could you explain in more detail why it is so low?

    And what do you mean by "other image processing blocks"?

    I also plan to use the DSP, but first I want to know why image processing on the Linux side gives such low performance.

    Regards,

    Jongpil

  • Hi,

    I am not sure about the image-processing performance of Linux on the A8. You can use the DSP for your image processing. By image-processing blocks I am referring to the HDVPSS blocks, such as noise filtering, de-interlacing, scaling, and blending of graphics and video. What exact kind of processing are you targeting for the captured image?

    Regards,

    Hardik Shah

  • Thanks for your reply.

    I just want to know why the performance drops so much when I do my own image processing on the Linux A8 side.

    I already know from several tests that image processing in Linux takes too long, but I don't understand why. Could you explain the main reason?

    Currently, I want to downscale the captured image.

    The V4L2 capture driver at 1080i resolution gives me frames that are too large.

    So I want to downscale to 720p or VGA resolution.

     

    The other option is to use another input port. The 8168 EVM has HDMI, component, and VGA input ports. Is it possible to use the VGA input with the V4L2 sample application?

    Regards,

    Jongpil.