Displaying video on OMAP 3530 LCD display

Hi all,

I would like to ask some questions about the OMAP 3530 LCD display. I am trying to display the .YUV output video from my decoder on the LCD display of the OMAP 3530, and I encountered some problems while doing this.

1.  My input to the display has a resolution of 176 x 144, but the display on the board is configured for 640 x 480. When I write into the frame buffer, it writes from the starting point out to 640 (X RES) and then wraps to the next line, i.e. it writes continuously, and I get a distorted display. I don't want it to go to 640 (X RES); it should go to 176 pixels and then move to the next line, and it should do this 144 times so that I get exactly the image I am transferring.

Please let me know how I can configure the display for any resolution I want. Are there any register settings or something else to do this?

2.  The output we get from our decoder is in YUV 4:2:0 format, and we converted it to YUYV (4:2:2) format for display on the LCD board.

Is this the right way to do it? Please let me know if there is another.

3. Please let me know the procedure for configuring the OMAP 3530 display for video. Please let me know if there are any references or documents on this.

Displaying pictures works fine, but only for RGB-format pictures. To display pictures in other formats such as .jpeg or .bmp, must they be converted to RGB format, or is there another procedure for displaying them without converting?

We are working on a Linux platform, i.e. Fedora Core 6.

Thanks and Regards,

M BHARATH

  • M BHARATH said:
    Please let me know how I can configure the display for any resolution I want. Are there any register settings or something else to do this?

    This is a rather broad question, as there is a great deal you may have to configure when moving to another LCD (with another resolution) aside from the resolution itself; in particular, there can be timing and formatting differences that need to be taken into account. However, based on your description of the problem, it sounds more like you need to do some scaling to get the video frames onto the LCD at the supported resolution, as opposed to changing the resolution, which would require a new LCD. That said, you could perform such scaling in software yourself, or, for more efficient scaling, you could use the resizer driver as discussed in the PSP user's guide to scale your output images to fit the larger 640x480 display.
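
As a rough sketch of the software-scaling option (the resizer driver is the more efficient path), a nearest-neighbour upscale of an interleaved 4:2:2 frame could look like the following; the function name and buffer layout are illustrative, not part of the PSP drivers:

```c
#include <stddef.h>

/* Nearest-neighbour upscale of an interleaved UYVY/YUYV frame.
 * 4:2:2 pixels travel in pairs (4 bytes per 2 pixels), so the source
 * x coordinate is snapped to an even pixel and the whole 4-byte pair
 * is copied, keeping the chroma/luma byte order intact.
 * Illustrative sketch only, not from the PSP drivers. */
void scale_nn_422(const unsigned char *src, int sw, int sh,
                  unsigned char *dst, int dw, int dh)
{
    int x, y;
    for (y = 0; y < dh; y++) {
        const unsigned char *srow = src + (size_t)(y * sh / dh) * sw * 2;
        unsigned char *drow = dst + (size_t)y * dw * 2;
        for (x = 0; x < dw; x += 2) {
            int sx = (x * sw / dw) & ~1;      /* snap to even pixel pair */
            drow[x * 2 + 0] = srow[sx * 2 + 0];
            drow[x * 2 + 1] = srow[sx * 2 + 1];
            drow[x * 2 + 2] = srow[sx * 2 + 2];
            drow[x * 2 + 3] = srow[sx * 2 + 3];
        }
    }
}
```

For 176x144 to 640x480 you would call it as `scale_nn_422(src, 176, 144, dst, 640, 480)`; the resizer driver also filters, so its output will look smoother than this.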

    M BHARATH said:
    Is this the right way to do it? Please let me know if there is another.

    This is correct; the display driver, as well as the resizer driver, expects interleaved YUV/YCbCr 4:2:2 formatted image data.
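
As a reference point for that conversion, a minimal planar 4:2:0 (I420) to interleaved YUYV sketch is shown below; each chroma row is simply reused for two luma rows (no filtering), and the function name is illustrative:

```c
/* Convert one planar YUV 4:2:0 frame (I420 order: full Y plane, then
 * quarter-size U plane, then quarter-size V plane) to interleaved
 * YUYV 4:2:2. width and height must be even. Illustrative sketch. */
void i420_to_yuyv(const unsigned char *y, const unsigned char *u,
                  const unsigned char *v, unsigned char *out,
                  int width, int height)
{
    int row, col;
    for (row = 0; row < height; row++) {
        const unsigned char *yp = y + row * width;
        const unsigned char *up = u + (row / 2) * (width / 2);
        const unsigned char *vp = v + (row / 2) * (width / 2);
        for (col = 0; col < width; col += 2) {
            *out++ = yp[col];      /* Y0 */
            *out++ = up[col / 2];  /* U shared by the pixel pair */
            *out++ = yp[col + 1];  /* Y1 */
            *out++ = vp[col / 2];  /* V shared by the pixel pair */
        }
    }
}
```

The output buffer needs width * height * 2 bytes, which matches what the display driver expects for a 4:2:2 window.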

    M BHARATH said:
    Please let me know the procedure for configuring the OMAP 3530 display for video. Please let me know if there are any references or documents on this.

    The best reference for using the display driver is probably the PSP user's guide, found within /OMAP35x_SDK_1.0.2/docs/OMAP35x/UserGuide_1_0_2.pdf in your SDK install. For an example of displaying video, though it is a rather large one, there is the decode demo within the DVSDK.

    M BHARATH said:
    To display pictures in other formats such as .jpeg or .bmp, must they be converted to RGB format, or is there another procedure for displaying them without converting?

    This is correct: you have to convert any images to the native format of the display window. The display driver is not capable of directly interpreting a compressed image like a .jpeg or a formatted image like a .bmp.

  • Hi all,

             I am having a problem displaying UYVY, YUYV, and YVU images. I tried displaying a UYVY image (decode_pal.uyvy) from "/home/bharath/dvsdk_3_00_00_29/dvsdk_demos_3_00_00_06/data/pics", the DVSDK install path on my system, and I am not getting the image properly: the distorted image is repeated three times. The same image displays properly in a UYVY image viewer. This is not only a problem with the UYVY image; it is the same with the YUYV and YVU formats as well. We also tried to display a frame of output from our decoder with a resolution of 176x144; it also gave three blocks of the same image, with the Y components in three blocks, the Cr components in three blocks, and the Cb components in three blocks.

    What could the problem be? Is there any setting to be configured?

    Please let me know the procedure for displaying a YUV file that is not the same resolution as the display, i.e. not 640x480.

    Here is the code I used for displaying the UYVY image mentioned above.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <linux/videodev2.h>

    #define X_RES             640
    #define Y_RES             480
    #define BITS_PER_PIXEL    16
    #define NUM_BITS_PER_BYTE 8

    /* Globals */
    int fd, fb_fd, screen_size;
    unsigned char test_arr[614400];

    int draw_bitmap(void)
    {
        int status, image_fd;

        fb_fd = open("/dev/fb0", O_RDWR);
        if (fb_fd < 0) {
            printf("Could not open framebuffer.\n");
            return -1;
        }

        image_fd = open("/usr/images/decode_ntsc.uyvy", O_RDONLY);
        if (image_fd < 0) {
            printf("Open failed on the image file\n");
            return -1;
        }

        status = read(image_fd, test_arr, screen_size);
        if (status < 0) {
            printf("Image read failed\n");
            return -2;
        }

        status = write(fb_fd, test_arr, screen_size);
        /* status = pwrite(fb_fd, test_arr, screen_size, OFFSET); */
        if (status < 0) {
            printf("Writing to frame buffer failed\n");
            return -3;
        }

        close(fb_fd);
        close(image_fd);
        return 0;
    }

    int main(void)
    {
        int ret, i;
        struct v4l2_requestbuffers reqbuf;
        struct v4l2_buffer buffer;
        struct v4l2_format fmt;

        struct {
            void   *start;
            size_t  length;
        } *buffers;

        fd = open("/dev/video1", O_RDWR);
        if (fd < 0) {
            printf("Could not open video device.\n");
            return -1;
        }

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        fmt.fmt.pix.width = 640;
        fmt.fmt.pix.height = 480;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
        ret = ioctl(fd, VIDIOC_S_FMT, &fmt);
        if (ret < 0) {
            perror("VIDIOC_S_FMT");
            close(fd);
            exit(0);
        }

        memset(&reqbuf, 0, sizeof(reqbuf));
        reqbuf.count = 1;
        reqbuf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        reqbuf.memory = V4L2_MEMORY_MMAP;
        ret = ioctl(fd, VIDIOC_REQBUFS, &reqbuf);
        if (ret < 0) {
            printf("Cannot allocate memory\n");
            close(fd);
            return -1;
        }

        buffers = calloc(reqbuf.count, sizeof(*buffers));
        screen_size = X_RES * Y_RES * (BITS_PER_PIXEL / NUM_BITS_PER_BYTE);

        for (i = 0; i < (int)reqbuf.count; i++) {
            memset(&buffer, 0, sizeof(buffer));
            buffer.index = i;
            buffer.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
            buffer.memory = V4L2_MEMORY_MMAP;
            if (ioctl(fd, VIDIOC_QUERYBUF, &buffer) < 0) {
                printf("Buffer query error.\n");
                close(fd);
                exit(-1);
            }
            buffers[i].length = buffer.length;
            buffers[i].start = mmap(NULL, buffer.length, PROT_READ | PROT_WRITE,
                                    MAP_SHARED, fd, buffer.m.offset);
        }

        while (1) {
            ret = draw_bitmap();
            if (ret < 0) {
                close(fd);
                munmap(buffers[i - 1].start, screen_size);
                exit(0);
            }
        }
    }

    Thanks and regards,

    M.BHARATH

  • If you could post a picture of your output it may help in identifying the issues that are causing your output distortion.

    M BHARATH said:
    What could the problem be? Is there any setting to be configured?

    I believe the problem is that you are attempting to copy an image directly into a display buffer that is not the same horizontal width as the image, which causes the image to wrap horizontally so that horizontal alignment is lost.

    M BHARATH said:
    Please let me know the procedure for displaying a YUV file that is not the same resolution as the display, i.e. not 640x480.

    As mentioned previously, in the typical case where you want the image to take up the entire display screen, you would scale it to fit the display's resolution. If you do not want it to fill the entire screen, you can still copy it into the display buffer so that it occupies only a portion of the display, but to do so you have to keep the horizontal resolution in mind (and the vertical as well, if you do not want it at the top of the screen). For example, in your case the 176-pixel-wide image would have to be copied line by line from the YUV file, and after each line you would have to offset your pointer into the display buffer by the difference between the width of your image and the width of the display, i.e. 640-176=464 pixels, or 928 bytes. This way your image is aligned to the display's horizontal resolution of 640, so all the horizontal scan lines of the image line up. It does not appear that your code is managing this, which likely explains the distortion.

    To put this visually, consider a 4x3 YUV image as an array of data of the form 111122223333, where each new digit is a new line of the image, and an 8x6 frame buffer as a continuous buffer of values X.

    I believe you want:

    1111XXXX
    2222XXXX
    3333XXXX
    XXXXXXXX
    XXXXXXXX
    XXXXXXXX

    But you are currently seeing this; note how horizontal alignment is lost when the image is copied into the frame buffer memory continuously instead of taking the resolutions into account:

    11112222
    3333XXXX
    XXXXXXXX
    XXXXXXXX
    XXXXXXXX
    XXXXXXXX

    And if you were to scale it:

    11111111
    11111111
    22222222
    22222222
    33333333
    33333333

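    The line-by-line copy described above can be sketched as follows (2 bytes per pixel for 4:2:2 data; the function name is illustrative):

```c
#include <string.h>
#include <stddef.h>

/* Copy a small 4:2:2 image into the top-left corner of a larger
 * display buffer, advancing the destination pointer by the display
 * stride (not the image width) after every line. */
void blit_to_display(unsigned char *dst, int dst_w,
                     const unsigned char *src, int src_w, int src_h)
{
    const int bpp = 2;  /* UYVY/YUYV: 2 bytes per pixel */
    int row;
    for (row = 0; row < src_h; row++)
        memcpy(dst + (size_t)row * dst_w * bpp,   /* start of display line */
               src + (size_t)row * src_w * bpp,   /* start of image line   */
               (size_t)src_w * bpp);              /* copy one image line   */
}
```

    For a 176x144 image on the 640-wide display, each line copy leaves 640-176=464 pixels (928 bytes) of the display line untouched, which is exactly the offset described above.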

  • Hi,

    What you said is right, and we are doing the same. While getting the output of our decoder, we convert it as you said, so that we get a 640x480 file in which only 176x144 is our image and the rest is set to ZERO. We are getting a distorted image that is not exactly 176x144 (it may be less than that), it is repeated three times, and the Y, U, and V components also come out separately. We also tried one more thing, i.e. converting the 176x144 image to a 640x480 image.

    The YUV file from our decoder, with one frame, was converted to a 640x480 UYVY image using a YUV converter. The converted image displays fine at 640x480 resolution in the tool, but we are not able to see the same with our code on the display of the OMAP board; the output is distorted. Please look into our code above and let us know whether anything else needs to be configured.

    What are all the formats we can display on the OMAP board? We are able to display only .R16 images properly; we were not able to display any other formats. Please let us know how to display other formats, or, if that is not possible, how to convert them to the .R16 format.

    Please let us know the procedure for displaying YUV files as well.

    We tried to set the colorspace in our code, which has enum values from 1 to 8 for different sources (camera, JPEG, ...). We tried to set the value to V4L2_COLORSPACE_SMPTE170M and V4L2_COLORSPACE_BT878, but it always takes V4L2_COLORSPACE_JPEG. Why is that, and what is the solution?

    [Screenshot: the actual YUV image given for display]

    [Screenshot: the output we got for the 640x480 YUV input shown above]

    Thanks and Regards,

    M.BHARATH

  • M BHARATH said:
    Please look into our code above and let us know whether anything else needs to be configured.

    It looks like you have a second issue that I did not catch before: you are trying to display a YUV image on the /dev/fb0 window, which is set up by the driver to accept RGB data. This is why an image with the .R16 extension works when you copy it in, and it likely explains the crazy-looking colors you are getting on the output.

    M BHARATH said:
    what are all the formats we can display on the OMAP board?

    The /dev/fb0 driver (fbdev) can handle RGB565, RGB444, or RGB888, and the /dev/v4l/video1 driver (V4L2) can handle interleaved YUV422, RGB565, RGB888, and RGB565X. Note that you have to configure the display driver for the format you intend to fill the buffer with; in other words, the display driver will not automatically determine the format of the image you copy into it. Just because you copy in a .yuv file does not mean the display will automatically switch to YUV mode (and in the case of fb0 it cannot handle it anyway). Details on the display driver and the formats it supports can be found in chapter 6 of the PSP User's Guide, OMAP35x_SDK_1.0.2\docs\OMAP35x\UserGuide_1_0_2.pdf.

    M BHARATH said:
    Please let us know the procedure for display of YUV files also.

    For a YUV file you need to use the V4L2 driver at /dev/v4l/video1, configured for interleaved YUV422 so that the color format matches properly.

    In your example code you are calling fb_fd = open("/dev/fb0", O_RDWR);, which opens the fbdev (RGB-only) driver, so it seems you are configuring the V4L2 driver but writing into the fbdev driver. You should be able to get proper output if you write your image into the V4L2 display buffer while it is configured for V4L2_PIX_FMT_YUYV.
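
    As a minimal sketch of that configuration, assuming /dev/video1 as in your code (error handling trimmed to the essentials; this is hardware-dependent, so it can only run on the board):

```c
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    struct v4l2_format fmt;
    int fd = open("/dev/video1", O_RDWR);
    if (fd < 0) {
        perror("open /dev/video1");
        return 1;
    }

    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    ioctl(fd, VIDIOC_G_FMT, &fmt);               /* start from the driver's current format */
    fmt.fmt.pix.width       = 640;
    fmt.fmt.pix.height      = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV; /* interleaved 4:2:2 */
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        perror("VIDIOC_S_FMT");
    /* The driver may adjust these fields, so re-check fmt.fmt.pix
     * (width, height, bytesperline, sizeimage) after the call. */
    close(fd);
    return 0;
}
```

    After VIDIOC_S_FMT you would request and mmap the buffers as in your existing code, then copy the YUYV data into the mmapped V4L2 buffer instead of /dev/fb0.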

  • Hi,

    So we have to remove /dev/fb0 from our code; is video1 alone enough? One more thing: we are using /dev/video1 in our code, in the function main, please check it. We tried to use /dev/v4l/video1, but video1 was present only in the /dev directory, so we are using that. Is configuring video1 enough, or do we have to configure anything else?

    We also configured V4L2_PIX_FMT_YUYV (fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY), but even then it is not coming up. Can you please tell us the exact procedure for configuring the display driver, i.e. everything we have to configure, in the proper order? We are referring to the same document you mentioned, but we are not clear on it.

    Thanks and regards,

    M.BHARATH

    I believe /dev/video1 is correct; it looks like the PSP User's Guide may be either out of date or referencing a future version with the /dev/v4l/video1 device path.

    I think the best option in this case is to look at an example: there is OMAP35x_SDK_1.0.0\examples\example\video\Sample_Applications\saMmapLoopback.c (you may have to extract the tar files in the example folder) that opens the display driver in the UYVY format, which should make a good piece of software to compare against. Note, however, that this example contains a lot more, so you will have to ignore the portions of code that set up a capture device, since in your case the capture device is replaced by the YUV file.

  • Hi Bernie,

    We finally got our YUV file displayed in UYVY format on the OMAP3530 display. Thanks for all the suggestions you gave; they helped us a lot.

    The other problem we are having is ARM and DSP communication. The display work is on the Linux platform, while the DSP work is on the Windows platform, where we use CCS. Due to the lack of display drivers on the Windows platform, we did the display work on Linux. I wanted to ask you a few questions about this ARM - DSP communication so that we don't waste much time trying out things that may not give proper results. That was the case when we started our display work: we wasted many days, and in the end the path you gave to the display examples sorted it out. So please let us know the following things.

    1.  How can this ARM - DSP communication be established?

    2. Are there any documents related to this that can be referred to? Please let us know if there are any.

    3. What tools have to be installed on Windows as well as on Linux for this communication to happen? Where can we get them?

    4. Please let us know if there are any examples of this so that we can understand what is being done.

    Thanks and Regards,

    M.BHARATH

  • M BHARATH said:
    1.  How can this ARM - DSP communication be established?

    Typically you would use the various software frameworks TI offers to implement this communication. In particular, the Codec Engine is the framework your Linux application deals with directly; for more information on Codec Engine, take a look at http://wiki.davincidsp.com/index.php?title=Codec_Engine_Overview

    Internally, Codec Engine sits on top of other frameworks, namely DSP/BIOS Link, and at the hardware level this communication is implemented with a shared-memory system.

    M BHARATH said:
    2. Are there any documents related to this that can be referred to? Please let us know if there are any.

    I think the best place to start right now would be the Wiki article mentioned above; that Wiki also references a number of documents that go into more detail. Another good overview of the whole process is the new dummies book, which you can request online for free at http://www.ti.com/dummiesbook

    M BHARATH said:
    3. What tools have to be installed on Windows as well as on Linux for this communication to happen? Where can we get them?

    Ultimately, once you are communicating between Linux on the ARM and DSP/BIOS on the DSP, you will largely be working entirely from Linux; however, you can still use CCS to debug a Codec Engine application as discussed in this Wiki article, it just gets more complicated. Typically I would suggest refining your algorithm with a standard CCS project and then, once it is all working the way you want, porting it into the Codec Engine to be loaded from Linux onto the DSP.

    M BHARATH said:
    4. Please let us know if there are any examples of this so that we can understand what is being done.

    For examples, I would probably start with the DVSDK demos and examples, as these show the use of the Codec Engine on the ARM Linux side. In particular, for examples of simplified DSP code, take a look at dvsdk_3_00_00_29\codec_engine_2_20_01\examples\build_instructions.html.