
Linux/AM5728: Capture VPE display issues

Part Number: AM5728
Other Parts Discussed in Thread: TVP5158

Tool/software: Linux

Hi,

I added CMEM support to capturevpedisplay (learned from the dual-camera demo).

I am trying to make our board work like this:

camera ---YUYV---> vpe ---RGB24---> display

and then :

./capturevpedisplay 704 288 yuyv 704 576 rgb24 1 3 -s 31:1920x1080

output is :

vip: G_FMT(start): width = 704, height = 288, 4cc = YUYV
vpe i/p: G_FMT: width = 704, height = 288, 4cc = YUYV
vpe o/p: G_FMT: width = 704, height = 576, 4cc = RGB3

allocating cmem buffer of size 0xc6000
ERROR:alloc_buffer:175: drmModeAddFB2 failed: Invalid argument (-22)
ERROR:get_cmem_buffers:198: allocation failed
allocating cmem buffer of addr 0xb1467000

allocating display buffer failed

My questions are:

1. I think the VPE is working normally, so the issue is with the pixel_format parameter of drmModeAddFB2:

int drmModeAddFB2(int fd, uint32_t width, uint32_t height,
uint32_t pixel_format, uint32_t bo_handles[4],
uint32_t pitches[4], uint32_t offsets[4],
uint32_t *buf_id, uint32_t flags);

It's not 0x33424752 (RGB3), so what should it be?

2. My CMEM configuration uses the default values:

40500000-405fffff : CMEM
a0000000-abffffff : CMEM

But then why is the allocated CMEM buffer address 0xb1467000? Is that right?

Best regards

  • Can you first try this with the output from the VPE as nv12? Also share the values of all the arguments passed to the drmModeAddFB2() API.

    ./capturevpedisplay 704 288 yuyv 704 576 nv12 1 0 -s 31:1920x1080
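
    For reference, a minimal way to capture those argument values is a debug printf right before the call (a sketch only, assuming the variable names used in the demo's alloc_buffer()):

    /* Debug sketch: dump every argument handed to drmModeAddFB2() */
    printf("drmModeAddFB2: fd=%d w=%u h=%u fourcc=0x%08x "
           "handles={%u,%u,%u,%u} pitches={%u,%u,%u,%u} offsets={%u,%u,%u,%u}\n",
           display->fd, buf->width, buf->height, fourcc,
           bo_handles[0], bo_handles[1], bo_handles[2], bo_handles[3],
           buf->pitches[0], buf->pitches[1], buf->pitches[2], buf->pitches[3],
           offsets[0], offsets[1], offsets[2], offsets[3]);
    ret = drmModeAddFB2(display->fd, buf->width, buf->height, fourcc,
                        bo_handles, buf->pitches, offsets, &buf->fb_id, 0);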
  • Hi,

    nv12 fails too, but yuyv is fine.

    vpe:/dev/video0 open success!!![ 5136.283158] tvp5158 2-0058: PAL video detected

    Input = 704 x 288 , 1448695129
    Output = 704 x 576 , 842094158
    vip open success!!!
    using 1 connectors, 1920x1080 display, multiplanar: 1
    Setting mode 1920x1080 on connector 31, crtc 34
    [ 5136.577464] tvp5158 2-0058: PAL video detected
    vip: G_FMT(start): width = 704, height = 288, 4cc = YUYV
    vpe i/p: G_FMT: width = 704, height = 288, 4cc = YUYV
    vpe o/p: G_FMT: width = 704, height = 576, 4cc = NV12

    allocating cmem buffer of size 0xc6000
    ERROR:alloc_buffer:163: drmModeAddFB2 failed: Invalid argument (-22)
    ERROR:get_cmem_buffers:186: allocation failed

    Below is my code:

    int bytes_pp = 3;             /* RGB24: 3 bytes per pixel */
    unsigned int bo_handles[4] = {0}, offsets[4] = {0};

    buf->fourcc = fourcc;
    buf->width = w;               /* 704 */
    buf->height = h;              /* 576 */
    buf->nbo = 1;
    buf->pitches[0] = w*bytes_pp; /* w*3 */

    /* Allocate the buffer from CMEM and get a DMABUF fd for it */
    buf->fd[0] = alloc_cmem_buffer(w*h*bytes_pp, 1, &buf->cmem_buf);
    if (buf->fd[0] < 0) {
        free_cmem_buffer(buf->cmem_buf);
        printf(" Cannot export CMEM buffer\n");
        return NULL;
    }

    /* Import the DMABUF fd as an omap buffer object for DRM */
    buf->bo[0] = omap_bo_from_dmabuf(display->dev, buf->fd[0]);
    if (buf->bo[0])
        bo_handles[0] = omap_bo_handle(buf->bo[0]);

    ret = drmModeAddFB2(display->fd, buf->width, buf->height, fourcc,
                        bo_handles, buf->pitches, offsets, &buf->fb_id, 0);

  • The original capturevpedisplay demo works with nv12 output. The bytes_pp requirement for the nv12 format is 1.5. Can you first run the demo with the original buffer allocation from omap_drm for nv12 output? Make sure that is working, print the input arguments to drmModeAddFB2 from that original demo, then switch the buffer allocation to CMEM and compare the inputs to drmModeAddFB2 between the working and non-working solutions.
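
    To illustrate the 1.5 bytes-per-pixel point: NV12 is a full-resolution Y plane followed by a half-height interleaved CbCr plane, so a single-buffer NV12 framebuffer for drmModeAddFB2 would look roughly like the sketch below (hand-written for illustration, not taken from the demo; DRM_FORMAT_NV12 comes from drm_fourcc.h, the other names are placeholders):

    /* NV12 in one contiguous buffer: Y plane, then interleaved UV plane */
    uint32_t pitches[4] = {0}, offsets[4] = {0}, bo_handles[4] = {0};

    pitches[0]    = w;        /* Y plane: 1 byte per pixel               */
    offsets[0]    = 0;
    pitches[1]    = w;        /* UV plane: interleaved Cb/Cr, h/2 rows   */
    offsets[1]    = w * h;    /* UV plane starts right after the Y plane */
    bo_handles[0] = bo_handles[1] = handle;   /* same buffer object      */

    /* total size = w*h (Y) + w*h/2 (UV) = 1.5 * w * h bytes */
    ret = drmModeAddFB2(fd, w, h, DRM_FORMAT_NV12,
                        bo_handles, pitches, offsets, &fb_id, 0);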

  • Hi, Manisha,

    We tried to do the color conversion ourselves.

    /*
     *  Copyright (c) 2013-2014, Texas Instruments Incorporated
     *  Author: alaganraj <alaganraj.s@ti.com>
     *
     *  Redistribution and use in source and binary forms, with or without
     *  modification, are permitted provided that the following conditions
     *  are met:
     *
     *  *  Redistributions of source code must retain the above copyright
     *     notice, this list of conditions and the following disclaimer.
     *
     *  *  Redistributions in binary form must reproduce the above copyright
     *     notice, this list of conditions and the following disclaimer in the
     *     documentation and/or other materials provided with the distribution.
     *
     *  *  Neither the name of Texas Instruments Incorporated nor the names of
     *     its contributors may be used to endorse or promote products derived
     *     from this software without specific prior written permission.
     *
     *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
     *  AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
     *  THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
     *  PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
     *  CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
     *  EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
     *  PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
     *  OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
     *  WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
     *  OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
     *  EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
     *
     *  Contact information for paper mail:
     *  Texas Instruments
     *  Post Office Box 655303
     *  Dallas, Texas 75265
     *  Contact information:
     *  http://www-k.ext.ti.com/sc/technical-support/product-information-centers.htm?
     *  DCMP=TIHomeTracking&HQS=Other+OT+home_d_contact
     *  ============================================================================
     *
     */
    
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <string.h>
    #include <errno.h>
    #include <time.h>
    #include <linux/videodev2.h>
    #include <linux/v4l2-controls.h>
    
    #include <sys/mman.h>
    #include <sys/ioctl.h>
    
    #include <xf86drm.h>
    #include <xf86drmMode.h>
    #include <omap_drm.h>
    #include <omap_drmif.h>
    #include "dma-buf.h"
    #include "util.h"
    #include "cmem_buf.h"
    #include "cmem.h"
    #include "vpe-common.c"
    #define DEBUG 1
    
    #define NUMBUF	6 //to be removed
    #define VPEBUF  6
    /** VIP file descriptor */
    static int vipfd  = -1;
    static int doOnce = 0;
    static volatile unsigned char* tcmem=NULL;
    struct buffer **shared_bufs;
    
    extern void TestCarDectet(char * frame,char * dst,bool bFlag);
    
    #define CMEM_BLOCKID CMEM_CMABLOCKID
    #define SAT(c) if (c & (~255)) { if (c < 0) c = 0; else c = 255; }
    
    CMEM_AllocParams cmem_params = {
    	CMEM_POOL,	/* type */
    	CMEM_CACHED,	/* flags */
    	0		/* alignment */
    };
    //cmem_params is same as 6 bufs in vpe queue
    double tdiff_calc(struct timespec *tp_start, struct timespec *tp_end)
    {
       return (double)(tp_end->tv_nsec -tp_start->tv_nsec) * 0.000001 + \
    	  (double)(tp_end->tv_sec - tp_start->tv_sec) * 1000.0;
     }
    
    static void yuyv_to_rgb24_normal (int width, int height, unsigned char *src, unsigned char *dst)
    {
       unsigned char *s;
       unsigned char *d;
       int l, c;
       int r, g, b, cr, cg, cb, y1, y2;
    
       l = height;
       s = src;
       d = dst;
       while (l--) {
          c = width >> 1;
          while (c--) {
             y1 = *s++;
             cb = ((*s - 128) * 454) >> 8;
             cg = (*s++ - 128) * 88;
             y2 = *s++;
             cr = ((*s - 128) * 359) >> 8;
             cg = (cg + (*s++ - 128) * 183) >> 8;
    
             r = y1 + cr;
             b = y1 + cb;
             g = y1 - cg;
    
             SAT(r);
             SAT(g);
             SAT(b);
    
             *d++ = b;
             *d++ = g;
             *d++ = r;
      
             r = y2 + cr;
             b = y2 + cb;
             g = y2 - cg;
    
             SAT(r);
             SAT(g);
             SAT(b);
         *d++ = b;   /* keep the same B,G,R byte order as the first pixel */
         *d++ = g;
         *d++ = r;
    	
          }
       }
    }
    
    int save_file(char *addr, int length, int index)
    {
        FILE *fp;
        char fname[256];
    
        memset(fname, 0x00, 256);
        sprintf(fname, "frame_%d.yuv", index);
        printf("save_file [%s]\n", fname);
        fp = fopen(fname, "wb");
        if (fp == NULL) {
            printf("%d: fopen error\n", __LINE__);
            return -1;
        }
    
        fwrite(addr, 1, length, fp);
        fflush(fp);
        fclose(fp);
    
        return 0;
    }
    
    
    /**
     *****************************************************************************
     * @brief:  set format for vip
     *
     * @param:  width  int
     * @param:  height int
     * @param:  fourcc int
     *
     * @return: 0 on success 
     *****************************************************************************
    */
    int vip_set_format(int width, int height, int fourcc)
    {
    	int ret;
    	struct v4l2_format fmt;
    
    	memset(&fmt, 0, sizeof fmt);
    	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    	fmt.fmt.pix.width = width;
    	fmt.fmt.pix.height = height;
    	fmt.fmt.pix.pixelformat = fourcc;
    	fmt.fmt.pix.field = V4L2_FIELD_ALTERNATE;
    
    	ret = ioctl(vipfd, VIDIOC_S_FMT, &fmt);
    	if (ret < 0)
    		pexit( "vip: S_FMT failed: %s\n", strerror(errno));
    
    	ret = ioctl(vipfd, VIDIOC_G_FMT, &fmt);
    	if (ret < 0)
    		pexit( "vip: G_FMT after set format failed: %s\n", strerror(errno));
    
    	printf("vip: G_FMT(start): width = %u, height = %u, 4cc = %.4s\n",
    			fmt.fmt.pix.width, fmt.fmt.pix.height,
    			(char*)&fmt.fmt.pix.pixelformat);
    
    	return 0;
    }
    
    /**
     *****************************************************************************
     * @brief:  request buffer for vip
     *
     * @return: 0 on success 
     *****************************************************************************
    */
    int vip_reqbuf(void)
    {
    	int ret;
    	struct v4l2_requestbuffers rqbufs;
    
    	memset(&rqbufs, 0, sizeof(rqbufs));
    	rqbufs.count = NUMBUF;
    	rqbufs.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    	rqbufs.memory = V4L2_MEMORY_DMABUF;
    
    	ret = ioctl(vipfd, VIDIOC_REQBUFS, &rqbufs);
    	if (ret < 0)
    		pexit( "vip: REQBUFS failed: %s\n", strerror(errno));
    
    	dprintf("vip: allocated buffers = %d\n", rqbufs.count);
    
    	return 0;
    }
    
    /**
     *****************************************************************************
     * @brief:  allocates shared buffer for vip and vpe
     *
     * @param:  vpe struct vpe pointer
     *
     * @return: 0 on success 
     *****************************************************************************
    */
    int allocate_shared_buffers(struct vpe *vpe)
    {
    	int i;
    
    	shared_bufs = disp_get_vid_buffers(vpe->disp, NUMBUF, vpe->src.fourcc,
    					   vpe->src.width, vpe->src.height);
    
    	if (!shared_bufs)
    		pexit("allocating shared buffer failed\n");
    
        	for (i = 0; i < NUMBUF; i++) {
    		/** Get DMABUF fd for corresponding buffer object */
    		vpe->input_buf_dmafd[i] = omap_bo_dmabuf(shared_bufs[i]->bo[0]);
    		shared_bufs[i]->fd[0] = vpe->input_buf_dmafd[i];
    		dprintf("vpe->input_buf_dmafd[%d] = %d\n", i, vpe->input_buf_dmafd[i]);
    	}
    
    	return 0;
    }
    
    /**
     *****************************************************************************
     * @brief:  queue shared buffer to vip
     *
     * @param:  vpe struct vpe pointer
     * @param:  index int
     *
     * @return: 0 on success 
     *****************************************************************************
    */
    int vip_qbuf(struct vpe *vpe, int index)
    {
    	int ret;
    	struct v4l2_buffer buf;
    
    	dprintf("vip buffer queue\n");
    
    	memset(&buf, 0, sizeof buf);
    	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    	buf.memory = V4L2_MEMORY_DMABUF;
    	buf.index = index;
    	buf.m.fd = vpe->input_buf_dmafd[index];
    
    	ret = ioctl(vipfd, VIDIOC_QBUF, &buf);
    	if (ret < 0)
    		pexit( "vip: QBUF failed: %s, index = %d\n", strerror(errno), index);
    
    	return 0;
    }
    
    /**
     *****************************************************************************
     * @brief:  dequeue shared buffer from vip
     *
     * @return: buf.index int 
     *****************************************************************************
    */
    int vip_dqbuf(struct vpe * vpe)
    {
    	int ret;
    	struct v4l2_buffer buf;
    	
    	dprintf("vip dequeue buffer\n");
    	
    	memset(&buf, 0, sizeof buf);
    
    	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    	buf.memory = V4L2_MEMORY_DMABUF;
    	ret = ioctl(vipfd, VIDIOC_DQBUF, &buf);
    	if (ret < 0)
    		pexit("vip: DQBUF failed: %s\n", strerror(errno));
    
    	dprintf("vip: DQBUF idx = %d, field = %s\n", buf.index,
    		buf.field == V4L2_FIELD_TOP? "Top" : "Bottom");
    	vpe->field = buf.field;
    
    	return buf.index;
    }
    
    int main(int argc, char *argv[])
    {
    	int i, index = -1, count = 0 ;
    	static int flag = 0;
    	struct buffer * testbuf =NULL;
    	unsigned char * dst=NULL;
    	struct	vpe *vpe;
    
    	if (argc != 11) {
    		printf (
    		"USAGE : <SRCWidth> <SRCHeight> <SRCFormat> "
    			"<DSTWidth> <DSTHeight> <DSTformat> "
    			"<interlace> <translen> -s <connector_id>:<mode>\n");
    
    		return 1;
    	}
    
    	init_cmem();
    	/** Open the device */
    	vpe = vpe_open();
    
    	vpe->src.width	= atoi (argv[1]);
    	vpe->src.height	= atoi (argv[2]);
    	describeFormat (argv[3], &vpe->src);
    
    	/* Force input format to be single plane */
    	vpe->src.coplanar = 0;
    
    	vpe->dst.width	= atoi (argv[4]);
    	vpe->dst.height = atoi (argv[5]);
    	describeFormat (argv[6], &vpe->dst);
    
    	vpe->deint = atoi (argv[7]);
    	vpe->translen = atoi (argv[8]);
    
    	printf ("Input  = %d x %d , %d\nOutput = %d x %d , %d\n",
    		vpe->src.width, vpe->src.height, vpe->src.fourcc,
    		vpe->dst.width, vpe->dst.height, vpe->dst.fourcc);
    
    	if (	vpe->src.height < 0 || vpe->src.width < 0 || vpe->src.fourcc < 0 || \
    		vpe->dst.height < 0 || vpe->dst.width < 0 || vpe->dst.fourcc < 0) {
    		pexit("Invalid parameters\n");
    	}
    
    
    	vipfd = open ("/dev/video1",O_RDWR);
    	if (vipfd < 0)
    		pexit("Can't open camera: /dev/video1\n");
    	
    	printf("vip open success!!!\n");
    
            vpe->disp = disp_open(argc, argv);
    	if(!vpe->disp)
    		pexit("Can't open display\n");
    
    	vpe->disp->multiplanar = false;
    
    	dprintf("display open success!!!\n");
    
    	vip_set_format(vpe->src.width, vpe->src.height, vpe->src.fourcc);
    
    	vip_reqbuf();
    	
    	vpe_input_init(vpe);
    
    	allocate_shared_buffers(vpe);
    
    	vpe_output_init(vpe);
    
    	for (i = 0; i < NUMBUF; i++)
    		vip_qbuf(vpe, i);
    
    	for (i = 0; i < VPEBUF; i++)
    		vpe_output_qbuf(vpe, i);
    
            /*************************************
                    Data is ready Now
            *************************************/
    
    	stream_ON(vipfd, V4L2_BUF_TYPE_VIDEO_CAPTURE);
    	stream_ON(vpe->fd, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
    
    	vpe->field = V4L2_FIELD_ANY;
    	while (1)
    	{
    		index = vip_dqbuf(vpe);
    
    		vpe_input_qbuf(vpe, index);
    
    		if (!doOnce) {
    			count ++;
    			for (i = 1; i <= NUMBUF; i++) {
				/** To start deinterlacing, a minimum of 3 frames is needed */
    				if (vpe->deint && count != 3) {
    					index = vip_dqbuf(vpe); 
    					vpe_input_qbuf(vpe, index);
    				} else {
    					stream_ON(vpe->fd, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
    					doOnce = 1;
    					printf("streaming started...\n");
    					break;
    				}
    				count ++;
    			}
    		}
    
    		index = vpe_output_dqbuf(vpe);
    
    
    		testbuf = vpe->disp_bufs[index];	
    	
    		/*****
    		 *****
    		 *****/
    		
    		if(tcmem==NULL){
    			int test;
    			
    			tcmem = CMEM_alloc2(1,704*576*3, &cmem_params);	//allocate a rgb buf in cmem
    			
    			test=CMEM_getPhys(tcmem);
    	
    			printf("\n temp cmem buffer of addr 0x%x \n", test);
    		}
    		
		// buf->cmem_buf is the CMEM buffer I added
		yuyv_to_rgb24_normal(704, 576, testbuf->cmem_buf, tcmem);	// if this function is enabled, the output on the HDMI display shakes
    
    
    		/*****
    		 *****
    		 *****/
    
    		display_buffer(vpe, index);
    
    		vpe_output_qbuf(vpe, index);
    
    		index = vpe_input_dqbuf(vpe);
    		vip_qbuf(vpe, index);
    	}
    	
    	/** Driver cleanup */
    	stream_OFF(vipfd, V4L2_BUF_TYPE_VIDEO_CAPTURE);
    	stream_OFF(vpe->fd, V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE);
    	stream_OFF(vpe->fd, V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE);
    
    	disp_close(vpe->disp);
    	vpe_close(vpe);
    	close(vipfd);
    	
    	return 0;
    }
    

    /*
     *  Copyright (c) 2013-2014, Texas Instruments Incorporated
     *  Author: alaganraj <alaganraj.s@ti.com>
     *
     *  Redistribution and use in source and binary forms, with or without
     *  modification, are permitted provided that the following conditions
     *  are met:
     *
     *  *  Redistributions of source code must retain the above copyright
     *     notice, this list of conditions and the following disclaimer.
     *
     *  *  Redistributions in binary form must reproduce the above copyright
     *     notice, this list of conditions and the following disclaimer in the
     *     documentation and/or other materials provided with the distribution.
     *
     *  *  Neither the name of Texas Instruments Incorporated nor the names of
     *     its contributors may be used to endorse or promote products derived
     *     from this software without specific prior written permission.
     *
     *  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
     *  AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
     *  THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
     *  PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
     *  CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
     *  EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
     *  PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
     *  OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
     *  WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
     *  OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
     *  EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
     *
     *  Contact information for paper mail:
     *  Texas Instruments
     *  Post Office Box 655303
     *  Dallas, Texas 75265
     *  Contact information:
     *  http://www-k.ext.ti.com/sc/technical-support/product-information-centers.htm?
     *  DCMP=TIHomeTracking&HQS=Other+OT+home_d_contact
     *  ============================================================================
     *
     */
    
    /*
     * @File        vpe-common.c
     * @Brief       vpe specific common functions, used to integrate vpe
     *		with other modules.
     *
     *		Input buffer must be allocated in application, queue it to vpe
     *		by passing buffer index
     *
     *		Output buffer allocated in vpe_output_init() as vpe output intended
     *		to display on LCD.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdint.h>
    #include <string.h>
    #include <errno.h>
    
    #include <linux/videodev2.h>
    #include <linux/v4l2-controls.h>
    
    #include <sys/mman.h>
    #include <sys/ioctl.h>
    #include <xf86drmMode.h>
    #include <xf86drm.h>
    #include <omap_drm.h>
    #include <omap_drmif.h>
    
    #include "util.h"
    
    //#include "cmem_buf.h"
    
    #define pexit(fmt, arg...) { \
    		printf(fmt, ## arg); \
    		exit(1); \
    }
    
    #define V4L2_CID_TRANS_NUM_BUFS         (V4L2_CID_PRIVATE_BASE)
    #define NUMBUF                          6
    
    //#define vpe_debug
    
    #ifdef vpe_debug
    #define dprintf(fmt, arg...) printf(fmt, ## arg)
    #else
    #define dprintf(fmt, arg...) do {} while(0)
    #endif
    
    struct image_params {
    	int width;
    	int height;
    	int fourcc;
    	int size;
    	int size_uv;
    	int coplanar;
    	enum v4l2_colorspace colorspace;
    	int numbuf;
    };
    
    struct vpe {
    	int fd;
    	int field;
    	int deint;
    	int translen;
    	struct image_params src;
    	struct image_params dst;
    	struct  v4l2_crop crop;
    	int input_buf_dmafd[NUMBUF];
    	int input_buf_dmafd_uv[NUMBUF];
    	int output_buf_dmafd[NUMBUF];
    	int output_buf_dmafd_uv[NUMBUF];
    	struct display *disp;
    	struct buffer **disp_bufs;
    };
    
    
    static struct buffer* alloc_buffer(struct display *display,
    		unsigned int fourcc, unsigned int w,
    		unsigned int h)
    {
    	struct buffer *buf;
    	unsigned int bo_handles[4] = {0}, offsets[4] = {0};
    	int ret;
    	int bytes_pp = 2; //capture buffer is in YUYV format
    	
    	buf = (struct buffer *) calloc(1, sizeof(struct buffer));
    	if (!buf) {
    		ERROR("allocation failed");
    		return NULL;
    	}
    
    	buf->fourcc = fourcc;
    	buf->width = w;
    	buf->height = h;
    	buf->nbo = 1;
    	buf->pitches[0] = w*bytes_pp;
    
    //#ifdef USE_CMEM_BUF
    	//Allocate buffer from CMEM and get the buffer descriptor
    
    	buf->fd[0] = alloc_cmem_buffer(w*h*bytes_pp, 1, &buf->cmem_buf);
    
    	if(buf->fd[0] < 0){
    		free_cmem_buffer(buf->cmem_buf);
    		printf(" Cannot export CMEM buffer\n");
    		return NULL;
    	}
    	//printf("alloc cmem address is 0x%x \n",buf->fd[0]);
    	/* Get the omap bo from the fd allocted using CMEM */
    	buf->bo[0] = omap_bo_from_dmabuf(display->dev, buf->fd[0]);
    	if (buf->bo[0]){
    		bo_handles[0] = omap_bo_handle(buf->bo[0]);
    	}
    
    	//printf("\n ***********cmem buffer of format 0x%x ***********\n",fourcc);
    	ret = drmModeAddFB2(display->fd, buf->width, buf->height, fourcc,
    		bo_handles, buf->pitches, offsets, &buf->fb_id, 0);
    
    	if (ret) {
    		ERROR("drmModeAddFB2 failed: %s (%d)", strerror(errno), ret);
    		return NULL;
    	}
    
    	return buf;
    }
    
    static struct buffer **get_cmem_buffers(struct display *display,
    		unsigned int n,
    		unsigned int fourcc, unsigned int w, unsigned int h)
    {
    	struct buffer **bufs;
    	unsigned int i = 0;
    
    	bufs = (struct buffer **) calloc(n, sizeof(*bufs));
    	if (!bufs) {
    		ERROR("allocation failed");
    		goto fail;
    	}
    
    	for (i = 0; i < n; i++) {
    		bufs[i] = alloc_buffer(display, fourcc, w, h);
    		if (!bufs[i]) {
    			ERROR("allocation failed");
    			goto fail;
    		}
    	}
    
    	if (bufs) {
    		/* if allocation succeeded, store in the unlocked
    		 * video buffer list
    		 */
    		list_init(&display->unlocked);
    		for (i = 0; i < n; i++)
    			list_add(&bufs[i]->unlocked, &display->unlocked);
    	}
    
    
    
    	return bufs;
    
    fail:
    	return NULL;
    }
    
    
    
    
    
    
    /**
     *****************************************************************************
     * @brief:  open the device
     *
     * @return: vpe  struct vpe pointer
     *****************************************************************************
    */
    struct vpe *vpe_open(void)
    {
    	char devname[20] = "/dev/video0";
    	struct vpe *vpe;
    
    	vpe = calloc(1, sizeof(*vpe));
    
    	vpe->fd =  open(devname, O_RDWR);
            if(vpe->fd < 0)
                    pexit("Cant open %s\n", devname);
    
            printf("vpe:%s open success!!!\n", devname);
    
    	return vpe;
    }
    
    /**
     *****************************************************************************
     * @brief:  close the device and free memory
     *
     * @param:  vpe  struct vpe pointer
     *
     * @return: 0 on success
     *****************************************************************************
    */
    int vpe_close(struct vpe *vpe)
    {
    	close(vpe->fd);
    	free(vpe);
    
    	return 0;
    }
    
    /**
     *****************************************************************************
     * @brief:  fills 4cc, size, coplanar, colorspace based on command line input
     *
     * @param:  format  char pointer
     * @param:  image  struct image_params pointer
     *
     * @return: 1 on success, 0 if the format string is not recognized
     *****************************************************************************
    */
    int describeFormat (char *format, struct image_params *image)
    {
            image->size   = -1;
            image->fourcc = -1;
            if (strcmp (format, "rgb24") == 0) {
                    image->fourcc = V4L2_PIX_FMT_RGB24;
                    image->size = image->height * image->width * 3;
                    image->coplanar = 0;
                    image->colorspace = V4L2_COLORSPACE_SRGB;
    
            } else if (strcmp (format, "bgr24") == 0) {
                    image->fourcc = V4L2_PIX_FMT_BGR24;
                    image->size = image->height * image->width * 3;
                    image->coplanar = 0;
                    image->colorspace = V4L2_COLORSPACE_SRGB;
    
            } else if (strcmp (format, "argb32") == 0) {
                    image->fourcc = V4L2_PIX_FMT_RGB32;
                    image->size = image->height * image->width * 4;
                    image->coplanar = 0;
                    image->colorspace = V4L2_COLORSPACE_SRGB;
    
            } else if (strcmp (format, "abgr32") == 0) {
                    image->fourcc = V4L2_PIX_FMT_BGR32;
                    image->size = image->height * image->width * 4;
                    image->coplanar = 0;
                    image->colorspace = V4L2_COLORSPACE_SRGB;
    
            } else if (strcmp (format, "yuv444") == 0) {
                    image->fourcc = V4L2_PIX_FMT_YUV444;
                    image->size = image->height * image->width * 3;
                    image->coplanar = 0;
                    image->colorspace = V4L2_COLORSPACE_SMPTE170M;
    
            } else if (strcmp (format, "yvyu") == 0) {
                    image->fourcc = V4L2_PIX_FMT_YVYU;
                    image->size = image->height * image->width * 2;
                    image->coplanar = 0;
                    image->colorspace = V4L2_COLORSPACE_SMPTE170M;
    
            } else if (strcmp (format, "yuyv") == 0) {
                    image->fourcc = V4L2_PIX_FMT_YUYV;
                    image->size = image->height * image->width * 2;
                    image->coplanar = 0;
                    image->colorspace = V4L2_COLORSPACE_SMPTE170M;
    
            } else if (strcmp (format, "uyvy") == 0) {
                    image->fourcc = V4L2_PIX_FMT_UYVY;
                    image->size = image->height * image->width * 2;
                    image->coplanar = 0;
                    image->colorspace = V4L2_COLORSPACE_SMPTE170M;
    
            } else if (strcmp (format, "vyuy") == 0) {
                    image->fourcc = V4L2_PIX_FMT_VYUY;
                    image->size = image->height * image->width * 2;
                    image->coplanar = 0;
                    image->colorspace = V4L2_COLORSPACE_SMPTE170M;
    
            } else if (strcmp (format, "nv16") == 0) {
                    image->fourcc = V4L2_PIX_FMT_NV16;
                    image->size = image->height * image->width * 2;
                    image->coplanar = 1;
                    image->colorspace = V4L2_COLORSPACE_SMPTE170M;
    
            } else if (strcmp (format, "nv61") == 0) {
                    image->fourcc = V4L2_PIX_FMT_NV61;
                    image->size = image->height * image->width * 2;
                    image->coplanar = 1;
                    image->colorspace = V4L2_COLORSPACE_SMPTE170M;
    
            } else if (strcmp (format, "nv12") == 0) {
                    image->fourcc = V4L2_PIX_FMT_NV12;
                    image->size = image->height * image->width * 1.5;
                    image->coplanar = 1;
                    image->colorspace = V4L2_COLORSPACE_SMPTE170M;
    
            } else if (strcmp (format, "nv21") == 0) {
                    image->fourcc = V4L2_PIX_FMT_NV21;
                    image->size = image->height * image->width * 1.5;
                    image->coplanar = 1;
                    image->colorspace = V4L2_COLORSPACE_SMPTE170M;
    
            } else {
                    return 0;
    
            }
    
            return 1;
    }
    
    /**
     *****************************************************************************
     * @brief:  sets crop parameters
     *
     * @param:  vpe  struct vpe pointer
     *
     * @return: 0 on success
     *****************************************************************************
    */
    static int set_ctrl(struct vpe *vpe)
    {
    	int ret;
    	struct	v4l2_control ctrl;
    
    	memset(&ctrl, 0, sizeof(ctrl));
    	ctrl.id = V4L2_CID_TRANS_NUM_BUFS;
    	ctrl.value = vpe->translen;
    	ret = ioctl(vpe->fd, VIDIOC_S_CTRL, &ctrl);
    	if (ret < 0)
    		pexit("vpe: S_CTRL failed\n");
    
    	return 0;
    }
    
    /**
     *****************************************************************************
     * @brief:  Initialize the vpe input by calling set_control, set_format,
     *	    set_crop, reqbuf ioctls
     *
     * @param:  vpe  struct vpe pointer
     *
     * @return: 0 on success
     *****************************************************************************
    */
    int vpe_input_init(struct vpe *vpe)
    {
    	int ret;
    	struct v4l2_format fmt;
    	struct v4l2_requestbuffers rqbufs;
    
    	set_ctrl(vpe);
    
    	memset(&fmt, 0, sizeof fmt);
    	fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
    	fmt.fmt.pix_mp.width = vpe->src.width;
    	fmt.fmt.pix_mp.height = vpe->src.height;
    	fmt.fmt.pix_mp.pixelformat = vpe->src.fourcc;
    	fmt.fmt.pix_mp.colorspace = vpe->src.colorspace;
    	fmt.fmt.pix_mp.num_planes = vpe->src.coplanar ? 2 : 1;
    
    	switch (vpe->deint) {
    	case 1:
    		fmt.fmt.pix_mp.field = V4L2_FIELD_ALTERNATE;
    		break;
    	case 2:
    		fmt.fmt.pix_mp.field = V4L2_FIELD_SEQ_TB;
    		break;
    	case 0:
    	default:
    		fmt.fmt.pix_mp.field = V4L2_FIELD_ANY;
    		break;
    	}
    
    	ret = ioctl(vpe->fd, VIDIOC_S_FMT, &fmt);
    	if (ret < 0) {
    		pexit( "vpe i/p: S_FMT failed: %s\n", strerror(errno));
    	} else {
                    vpe->src.size = fmt.fmt.pix_mp.plane_fmt[0].sizeimage;
                    vpe->src.size_uv = fmt.fmt.pix_mp.plane_fmt[1].sizeimage;
            }
    
    	ret = ioctl(vpe->fd, VIDIOC_G_FMT, &fmt);
    	if (ret < 0)
    		pexit( "vpe i/p: G_FMT_2 failed: %s\n", strerror(errno));
    
    	printf("vpe i/p: G_FMT: width = %u, height = %u, 4cc = %.4s\n",
    			fmt.fmt.pix_mp.width, fmt.fmt.pix_mp.height,
    			(char*)&fmt.fmt.pix_mp.pixelformat);
    
    	memset(&rqbufs, 0, sizeof(rqbufs));
    	rqbufs.count = NUMBUF;
    	rqbufs.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
    	rqbufs.memory = V4L2_MEMORY_DMABUF;
    
    	ret = ioctl(vpe->fd, VIDIOC_REQBUFS, &rqbufs);
    	if (ret < 0)
    		pexit( "vpe i/p: REQBUFS failed: %s\n", strerror(errno));
    
    	vpe->src.numbuf = rqbufs.count;
    	dprintf("vpe i/p: allocated buffers = %d\n", rqbufs.count);
    
    	return 0;
    
    }
    
    /**
     *****************************************************************************
     * @brief:  Initialize vpe output by calling set_format, reqbuf ioctls.
     *	    Also allocates buffer to display the vpe output.
     *
     * @param:  vpe  struct vpe pointer
     *
     * @return: 0 on success
     *****************************************************************************
    */
    int vpe_output_init(struct vpe *vpe)
    {
    	int ret, i;
    	struct v4l2_format fmt;
    	struct v4l2_requestbuffers rqbufs;
    	bool saved_multiplanar;
    
    	memset(&fmt, 0, sizeof fmt);
    	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
    	fmt.fmt.pix_mp.width = vpe->dst.width;
    	fmt.fmt.pix_mp.height = vpe->dst.height;
    	fmt.fmt.pix_mp.pixelformat = vpe->dst.fourcc;
    	fmt.fmt.pix_mp.field = V4L2_FIELD_ANY;
    	fmt.fmt.pix_mp.colorspace = vpe->dst.colorspace;
    	fmt.fmt.pix_mp.num_planes = vpe->dst.coplanar ? 2 : 1;
    
    	ret = ioctl(vpe->fd, VIDIOC_S_FMT, &fmt);
    	if (ret < 0)
    		pexit( "vpe o/p: S_FMT failed: %s\n", strerror(errno));
    
    	ret = ioctl(vpe->fd, VIDIOC_G_FMT, &fmt);
    	if (ret < 0)
    		pexit( "vpe o/p: G_FMT_2 failed: %s\n", strerror(errno));
    
    
    
    	printf("vpe o/p: G_FMT: width = %u, height = %u, 4cc = %.4s\n",
    			fmt.fmt.pix_mp.width, fmt.fmt.pix_mp.height,
    			(char*)&fmt.fmt.pix_mp.pixelformat);
    
    
    
    	memset(&rqbufs, 0, sizeof(rqbufs));
    	rqbufs.count = NUMBUF;
    	rqbufs.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
    	rqbufs.memory = V4L2_MEMORY_DMABUF;
    
    
    	ret = ioctl(vpe->fd, VIDIOC_REQBUFS, &rqbufs);
    	if (ret < 0)
    		pexit( "vpe o/p: REQBUFS failed: %s\n", strerror(errno));
    
    	vpe->dst.numbuf = rqbufs.count;
    	dprintf("vpe o/p: allocated buffers = %d\n", rqbufs.count);
    
    	/*
    	 * disp->multiplanar is used when allocating buffers to enable
    	 * allocating multiplane buffer in separate buffers.
	 * VPE does handle multiplane NV12 buffers correctly
    	 * but VIP can only handle single plane buffers
    	 * So by default we are setup to use single plane and only overwrite
    	 * it when allocating strictly VPE buffers.
	 * Here we save the current value and restore it after we are done
    	 * allocating the buffers VPE will use for output.
    	 */
    
    	saved_multiplanar = vpe->disp->multiplanar;
    	vpe->disp->multiplanar = true;
    
    	//vpe->disp_bufs = disp_get_vid_buffers(vpe->disp, NUMBUF, vpe->dst.fourcc,
    	//				      vpe->dst.width, vpe->dst.height);	
    
    	vpe->disp_bufs = get_cmem_buffers(vpe->disp, NUMBUF, vpe->dst.fourcc,
    					      vpe->dst.width, vpe->dst.height);
    
    
    	vpe->disp->multiplanar = saved_multiplanar;
    	if (!vpe->disp_bufs)
    		pexit("allocating display buffer failed\n");
    
    	/* SetCrtc with an RGB buffer first */
    	//disp_get_fb(vpe->disp);
    
    
    	for (i = 0; i < NUMBUF; i++) {
    		vpe->output_buf_dmafd[i]=vpe->disp_bufs[i]->fd[0];
    		//vpe->output_buf_dmafd[i] = omap_bo_dmabuf(vpe->disp_bufs[i]->bo[0]);
    		//vpe->disp_bufs[i]->fd[0] = vpe->output_buf_dmafd[i];
    
    		if(vpe->dst.coplanar) {
    			vpe->output_buf_dmafd_uv[i] = omap_bo_dmabuf(vpe->disp_bufs[i]->bo[1]);
    			vpe->disp_bufs[i]->fd[1] = vpe->output_buf_dmafd_uv[i];
    			
    		}
    		/* Scale back to display resolution */
    		vpe->disp_bufs[i]->noScale = false;
    		dprintf("vpe->disp_bufs_fd[%d] = %d\n", i, vpe->output_buf_dmafd[i]);
    	}
    
    	dprintf("allocating display buffer success\n");
    	return 0;
    }
    
    /**
     *****************************************************************************
     * @brief:  queue buffer to vpe input
     *
     * @param:  vpe  struct vpe pointer
     * @param:  index  buffer index to queue
     *
     * @return: 0 on success
     *****************************************************************************
    */
    int vpe_input_qbuf(struct vpe *vpe, int index)
    {
    	int ret;
    	struct v4l2_buffer buf;
    	struct v4l2_plane planes[2];
    
    	dprintf("vpe: src QBUF (%d):%s field", vpe->field,
    		vpe->field==V4L2_FIELD_TOP?"top":"bottom");
    
    	memset(&buf, 0, sizeof buf);
    	memset(&planes, 0, sizeof planes);
    
    	planes[0].length = planes[0].bytesused = vpe->src.size;
    	if(vpe->src.coplanar)
    		planes[1].length = planes[1].bytesused = vpe->src.size_uv;
    
    	planes[0].data_offset = planes[1].data_offset = 0;
    
    	buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
    	buf.memory = V4L2_MEMORY_DMABUF;
    	buf.index = index;
    	buf.m.planes = &planes[0];
    	buf.field = vpe->field;
    	if(vpe->src.coplanar)
    		buf.length = 2;
    	else
    		buf.length = 1;
    
    	buf.m.planes[0].m.fd = vpe->input_buf_dmafd[index];
    	if(vpe->src.coplanar)
    		buf.m.planes[1].m.fd = vpe->input_buf_dmafd_uv[index];
    
    	ret = ioctl(vpe->fd, VIDIOC_QBUF, &buf);
    	if (ret < 0)
    		pexit( "vpe i/p: QBUF failed: %s, index = %d\n",
    			strerror(errno), index);
    
    	return 0;
    }
    
    /**
     *****************************************************************************
     * @brief:  queue buffer to vpe output
     *
     * @param:  vpe  struct vpe pointer
     * @param:  index  buffer index to queue
     *
     * @return: 0 on success
     *****************************************************************************
    */
    int vpe_output_qbuf(struct vpe *vpe, int index)
    {
    	int ret;
    	struct v4l2_buffer buf;
    	struct v4l2_plane planes[2];
    
    	dprintf("vpe output buffer queue\n");
    
    	memset(&buf, 0, sizeof buf);
    	memset(&planes, 0, sizeof planes);
    
    	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
    	buf.memory = V4L2_MEMORY_DMABUF;
    	buf.index = index;
    	buf.m.planes = &planes[0];
    	if(vpe->dst.coplanar)
    		buf.length = 2;
    	else
    		buf.length = 1;
    
    	buf.m.planes[0].m.fd = vpe->output_buf_dmafd[index];
    
    	if(vpe->dst.coplanar)
    		buf.m.planes[1].m.fd = vpe->output_buf_dmafd_uv[index];
    
    	ret = ioctl(vpe->fd, VIDIOC_QBUF, &buf);
    	if (ret < 0)
    		pexit( "vpe o/p: QBUF failed: %s, index = %d\n",
    			strerror(errno), index);
    
    	return 0;
    }
    
    /**
     *****************************************************************************
     * @brief:  start stream
     *
     * @param:  fd  device fd
     * @param:  type  buffer type (CAPTURE or OUTPUT)
     *
     * @return: 0 on success
     *****************************************************************************
    */
    int stream_ON(int fd, int type)
    {
    	int ret;
    
    	ret = ioctl(fd, VIDIOC_STREAMON, &type);
    	if (ret < 0)
    		pexit("STREAMON failed,  %d: %s\n", type, strerror(errno));
    
    	dprintf("stream ON: done! fd = %d,  type = %d\n", fd, type);
    
    	return 0;
    }
    
    /**
     *****************************************************************************
     * @brief:  stop stream
     *
     * @param:  fd  device fd
     * @param:  type  buffer type (CAPTURE or OUTPUT)
     *
     * @return: 0 on success
     *****************************************************************************
    */
    int stream_OFF(int fd, int type)
    {
    	int ret;
    
    	ret = ioctl(fd, VIDIOC_STREAMOFF, &type);
    	if (ret < 0)
    		pexit("STREAMOFF failed, %d: %s\n", type, strerror(errno));
    
    	dprintf("stream OFF: done! fd = %d,  type = %d\n", fd, type);
    
    	return 0;
    }
    
    /**
     *****************************************************************************
     * @brief:  dequeue vpe input buffer
     *
     * @param:  vpe  struct vpe pointer
     *
     * @return: buf.index index of dequeued buffer
     *****************************************************************************
    */
    int vpe_input_dqbuf(struct vpe *vpe)
    {
    	int ret;
    	struct v4l2_buffer buf;
    	struct v4l2_plane planes[2];
    
    	dprintf("vpe input dequeue buffer\n");
    
    	memset(&buf, 0, sizeof buf);
    	memset(&planes, 0, sizeof planes);
    
    	buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
    	buf.memory = V4L2_MEMORY_DMABUF;
            buf.m.planes = &planes[0];
            if (vpe->src.coplanar)
    		buf.length = 2;
    	else
    		buf.length = 1;
    	ret = ioctl(vpe->fd, VIDIOC_DQBUF, &buf);
    	if (ret < 0)
    		pexit("vpe i/p: DQBUF failed: %s\n", strerror(errno));
    
    	dprintf("vpe i/p: DQBUF index = %d\n", buf.index);
    
    	return buf.index;
    }
    
    /**
     *****************************************************************************
     * @brief:  dequeue vpe output buffer
     *
     * @param:  vpe  struct vpe pointer
     *
     * @return: buf.index index of dequeued buffer
     *****************************************************************************
    */
    int vpe_output_dqbuf(struct vpe *vpe)
    {
    	int ret;
    	struct v4l2_buffer buf;
    	struct v4l2_plane planes[2];
    	//printf("**********************cxy_test**********************\n");
    	dprintf("vpe output dequeue buffer\n");
    
    	memset(&buf, 0, sizeof buf);
    	memset(&planes, 0, sizeof planes);
    
    	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
    	buf.memory = V4L2_MEMORY_DMABUF;
            buf.m.planes = &planes[0];
    	if(vpe->dst.coplanar)
    		buf.length = 2;
    	else
    		buf.length = 1;
    	ret = ioctl(vpe->fd, VIDIOC_DQBUF, &buf);
    	if (ret < 0)
    		pexit("vpe o/p: DQBUF failed: %s\n", strerror(errno));
    
    	dprintf("vpe o/p: DQBUF index = %d\n", buf.index);
    
    	return buf.index;
    }
    
    /**
     *****************************************************************************
     * @brief:  retrieves the buffer by index and displays its contents
     *
     * @param:  vpe  struct vpe pointer
     * @param: index index of dequeued output buffer
     *
     * @return: 0 on success
     *****************************************************************************
    */
    int display_buffer(struct vpe *vpe, int index)
    {
    	int ret;
    	struct buffer *buf;
    
    	buf = vpe->disp_bufs[index];
    	ret = disp_post_vid_buffer(vpe->disp, buf, 0, 0, vpe->dst.width,
    				   vpe->dst.height);
    	if (ret)
    		pexit("disp post vid buf failed\n");
    
    	return 0;
    }
    
    

    The CMEM pools I allocated:

    Block 1: Pool 0: 6 bufs size 0xc6000 (0xc6000 requested)

    Pool 0 busy bufs:

    Pool 0 free bufs:
    id 0: phys addr 0xbcf3a000
    id 1: phys addr 0xbce74000
    id 2: phys addr 0xbcdae000
    id 3: phys addr 0xbcce8000
    id 4: phys addr 0xbcc22000
    id 5: phys addr 0xbcb5c000

    Block 1: Pool 1: 1 bufs size 0x129000 (0x129000 requested)

    Pool 1 busy bufs:

    Pool 1 free bufs:
    id 0: phys addr 0xbc96d000

    The output on HDMI shakes, and I can't figure out what's wrong. Please help us.

    Below is an image I saved earlier; you can see that the image moves up by one line.

    yuv.zip

  • I looked at the capturevpedisplay application using VPE for yuyv-to-rgb24 conversion. I find the issue to be a fourcc format incompatibility between the V4L2 and DRM frameworks. The codes are the same for yuyv and nv12, and hence it works for those two formats. But for the RGB formats, the fourcc codes differ between the two frameworks.

    FOURCC('R','G','B','3') or FOURCC('B','G','R','3') in the V4L2 spec translates to FOURCC('R','G','2','4') in the DRM framework.

    I added the below cases to the switch() statement inside the alloc_buffer() function in the display-kms.c file:
    case FOURCC('B','G','R','3'):
    case FOURCC('R','G','2','4'):
            fourcc = FOURCC('R','G','2','4');
            buf->nbo = 1;
            buf->bo[0] = alloc_bo(disp, 24, buf->width, buf->height,
                                  &bo_handles[0], &buf->pitches[0]);
            break;
    case FOURCC('A','R','2','4'):
    case FOURCC('R','G','B','4'):
            fourcc = FOURCC('A','R','2','4');
            buf->nbo = 1;
            buf->bo[0] = alloc_bo(disp, 32, buf->width, buf->height,
                                  &bo_handles[0], &buf->pitches[0]);
            break;

    With these changes, the capturevpedisplay application works with the below arguments:
    #./capturevpedisplay 640 480 yuyv 1920 1080 bgr24 0 1 -s 36:1920x1080

    You need to make similar changes to your application that uses CMEM buffers.

    Hopefully with these changes/fixes you will not need the software-based color conversion, and hence will not need to debug the display issue occurring with it.
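
    A minimal sketch of the equivalent translation in the CMEM allocation path, following the V4L2-to-DRM fourcc mapping described above (hand-written for illustration; DRM_FORMAT_RGB888 and DRM_FORMAT_ARGB8888 come from drm_fourcc.h, the rest follows the alloc_buffer() code shared earlier):

    /* Translate the V4L2 RGB fourcc to its DRM equivalent before creating
     * the framebuffer from the CMEM-backed buffer object. */
    uint32_t drm_fourcc = fourcc;

    if (fourcc == V4L2_PIX_FMT_RGB24 || fourcc == V4L2_PIX_FMT_BGR24)
            drm_fourcc = DRM_FORMAT_RGB888;       /* FOURCC('R','G','2','4') */
    else if (fourcc == V4L2_PIX_FMT_RGB32)
            drm_fourcc = DRM_FORMAT_ARGB8888;     /* FOURCC('A','R','2','4') */

    ret = drmModeAddFB2(display->fd, buf->width, buf->height, drm_fourcc,
                        bo_handles, buf->pitches, offsets, &buf->fb_id, 0);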
  • Hi, Manisha,

    It worked after I followed your advice.

    But there is still one issue in this demo.

    When we try to do some processing (OpenCV) on buf->cmem_buf,

    the video stream shakes. It displays normally when nothing is processed.

    In my understanding, CMEM provides a buffer shared between V4L2 and DRM, correct?

    Can any other threads use these allocated CMEM buffers?

  • Hi, Manisha,
    I think the problem may be happening in the VPE.
    I tried:
    ./capturevpedisplay 1920 1080 yuyv 1280 720 bgr24 0 3 -s 31:1920x1080  (without deinterlacing)
    Processing (OpenCV) another camera (SDI 1080p) this way works fine.
    If I try:
    ./capturevpedisplay 704 288 yuyv 704 576 bgr24 1 3 -s 31:1920x1080
    the HDMI display jitters (as if no deinterlacing is done).

  • The interlaced display feature is broken and there are no plans to fix it.
  • Hi, Manisha,
    I know that the DSS can't support interlaced video display.
    Does that mean that if we insert our algo to process the video, the VPE will stop doing deinterlacing?
  • I don't understand what you meant by this -

    bulabula_yan said:
    Does that mean that if we insert our algo to process the video, the VPE will stop doing deinterlacing?

    What's the data flow for your use case?

  • Hi, Manisha,

    Our input is PAL (576i).

    So I use the capturevpedisplay demo to deinterlace and then display (576p) on HDMI (resized to 1080p or another resolution).

    I added our algo in process(); below is where I added it:

    index = vpe_output_dqbuf(vpe);    // buf[index]->cmem points to a CMEM buffer shared between VPE and DRM

    process(buf[index]->cmem);    // our algo

    display_buffer(vpe, index);    // display buf[index]->cmem

    If our algo is inserted into this demo, the VPE stops doing deinterlacing.

    HDMI then displays 576i video without deinterlacing.

  • The VPE should not depend on who the consumer of its output buffer is.

    Does your algorithm take a lot of time, depriving the VPE of the buffers it needs in real time?
    How about disabling the cached read/write, just to make sure cache maintenance is not causing the problem? (A rough sketch is below.)
    You may also try removing the display from the pipeline, doing a file write instead, and checking whether the issue shows up in the file-write output too.
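
    A rough sketch of those two cache options with the TI CMEM API (hand-written for illustration; buf->cmem_buf and cmem_params follow the code above, while size and process() are placeholders):

    /* Option 1: allocate the frame buffers non-cached, so no cache
     * maintenance is needed at all (slower CPU access, no stale lines). */
    CMEM_AllocParams cmem_params = {
            CMEM_POOL,          /* type  */
            CMEM_NONCACHED,     /* flags: was CMEM_CACHED */
            0                   /* alignment */
    };

    /* Option 2: keep CMEM_CACHED, but write back / invalidate around the
     * CPU processing of a frame that the hardware also touches. */
    CMEM_cacheInv(buf->cmem_buf, size);   /* before reading what VPE wrote   */
    process(buf->cmem_buf);               /* the CPU algorithm (placeholder) */
    CMEM_cacheWb(buf->cmem_buf, size);    /* before DRM/display reads it     */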
  • Hi, Manisha,

    I tried adding:

    usleep(5000);

    display_buffer(vpe, index);

    i.e. a sleep before the display call.

    If the usleep is more than 2 ms, the output is sometimes interlaced.

    Our algorithm needs about 30 ms per frame, and the input must be PAL (576i).

    Is there any solution?

  • If the VPE is fed 50 interlaced fields per second, it will produce 50 de-interlaced frames per second, and the display should also be expected to run at 50 fps. If your algorithm takes 30 ms, it slows down the entire pipeline. The algorithm needs to run at the speed of the producer (VPE) and the consumer (display).
  • Hi, Manisha,
    Thanks for your reply.
    I think I understand what you are saying.
    The VPE will output 50 de-interlaced frames per second for PAL (50 fields per second).
    But that leaves about 20 ms per frame (1/50 s) that we can use, right?
    Why does adding only 5 ms (usleep(5000);) disorganize the pipeline?
  • I don't know what is going on with your application or what that algorithm is doing, so I can't comment on why inserting a 2 ms sleep makes any difference. All I can say is that the VPE's de-interlacing is not dependent upon who is consuming the de-interlaced frames. If the VPE is correctly fed the input and output buffers with correct parameter settings, it will do as instructed.
  • Hi, Manisha,

    Here is what I've done in capturevpedisplay:

    line 327:

    index = vpe_output_dqbuf(vpe);

    //usleep(5000);

    display_buffer(vpe, index);

    As shown above, I used usleep(5000) as a stand-in for our algo.

    From that test I found that if the usleep is more than 2 ms, the VPE pipeline becomes disorganized.

    1. In my understanding, at 50 frames per second there should be about 20 ms per frame left for our algo, so why is there only about 2 ms?

    2. In fact, we only need the VPE to output 25 frames per second, which would leave more time for our algo. I tried to set the interlace field to 2 but it failed:

    https://e2e.ti.com/support/arm/sitara_arm/f/791/t/666961#pi316653=2

  • Can you try to increase the buffer depth of VIP and VPE?
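
    In this demo the buffer depth is the NUMBUF value used for the VIDIOC_REQBUFS calls in both capturevpedisplay.c and vpe-common.c, so the change is roughly the sketch below (the value 12 is only an example):

    /* Deeper queues give the CPU algorithm more slack before VIP/VPE starve. */
    #define NUMBUF  12      /* was 6, in both capturevpedisplay.c and vpe-common.c */

    /* The value then flows into every VIDIOC_REQBUFS call, e.g. in vip_reqbuf(): */
    rqbufs.count  = NUMBUF;
    rqbufs.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    rqbufs.memory = V4L2_MEMORY_DMABUF;
    ret = ioctl(vipfd, VIDIOC_REQBUFS, &rqbufs);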
  • Hi, Manisha,

    Thank you very much.

    It works!

    I changed NUMBUF to 12, and the VPE works more stably.

    But sometimes it still jitters.

    I tried to measure the time taken by the capture-display loop.

    I found the display is sometimes not stable. Below is what I added for debugging at capturevpedisplay.c line 327 in my source code:

    struct timespec tp0, tp1;
    double pre_time;

    clock_gettime(CLOCK_MONOTONIC, &tp0);   /* tp0: before display */
    display_buffer(vpe, index);
    clock_gettime(CLOCK_MONOTONIC, &tp1);   /* tp1: after display  */
    pre_time = tdiff_calc(&tp0, &tp1);
    printf("total is tdiff=%lf ms \n", pre_time);

    [output]:

    total is tdiff=7.727317 ms
    total is tdiff=19.982787 ms
    total is tdiff=7.717232 ms
    total is tdiff=7.735614 ms
    total is tdiff=7.746837 ms
    total is tdiff=7.749277 ms
    total is tdiff=7.723738 ms
    total is tdiff=7.727480 ms
    total is tdiff=7.675589 ms
    total is tdiff=7.718696 ms
    total is tdiff=7.718370 ms
    total is tdiff=2.509784 ms
    total is tdiff=2.298317 ms
    total is tdiff=3.245363 ms
    total is tdiff=5.429814 ms
    total is tdiff=7.704218 ms
    total is tdiff=7.629555 ms
    total is tdiff=7.623373 ms
    total is tdiff=7.714792 ms
    total is tdiff=7.694133 ms
    total is tdiff=6.831024 ms
    total is tdiff=7.660624 ms
    total is tdiff=7.732197 ms
    total is tdiff=7.705032 ms
    total is tdiff=7.689090 ms
    total is tdiff=7.737403 ms
    total is tdiff=7.707635 ms
    total is tdiff=7.691856 ms
    total is tdiff=7.724389 ms
    total is tdiff=7.646798 ms
    total is tdiff=7.708286 ms
    total is tdiff=7.756109 ms
    total is tdiff=7.703243 ms
    total is tdiff=7.715280 ms
    total is tdiff=14.580302 ms
    total is tdiff=7.594094 ms
    total is tdiff=7.703243 ms
    total is tdiff=7.733987 ms
    total is tdiff=7.737240 ms
    total is tdiff=7.687790 ms
    total is tdiff=7.742608 ms
    total is tdiff=7.658509 ms
    total is tdiff=7.729595 ms
    total is tdiff=7.734963 ms
    total is tdiff=1.427237 ms
    total is tdiff=2.248216 ms
    total is tdiff=2.358829 ms
    total is tdiff=5.100251 ms
    total is tdiff=7.607107 ms
    total is tdiff=7.710725 ms
    total is tdiff=7.670710 ms
    total is tdiff=19.948464 ms
    total is tdiff=7.708448 ms
    total is tdiff=7.642242 ms
    total is tdiff=7.684210 ms

    It occurs about every 50 frames (and very rarely the display takes about 15~16 ms).

  • Hi, Manisha,
    The VPE outputs 50 frames per second in de-interlaced mode.
    I found our algo sometimes takes more than 20 ms; I think that's why the VPE pipeline gets disordered.
    Can I configure the VPE to output 25 fps in de-interlace mode?
  • bulabula_yan said:
    Can I configure the VPE to output 25 fps in de-interlace mode?

    That's not supported. Today the VPE driver only supports motion-compensated de-interlacing, which produces a de-interlaced frame for every field it receives. You can try submitting alternate fields to the VPE for de-interlacing in your application; I am not sure if this will work or what effect it will have on picture quality, since it relies on motion compensation. The VPE IP supports spatial de-interlacing, which might have worked better with alternate-field drop, but that feature is not supported by the VPE driver.