
OV10620 on the CCDC

Other Parts Discussed in Thread: MIO

I finally got the OmniVision sensor on my bench, and I tried to capture video from it without using the SCCB connection (i.e., with the default settings).

I get output, but the video is very poor, with a lot of noise. I am probably not interpreting the captured data the right way, so I would appreciate it if someone could explain how the output data are packed (probably YUV 4:2:2) and aligned. I'm using the OV10620 color sensor and I'd like to process and display just grayscale data.

I'm using the same "test" project as for the monochrome MT9V022 (with mini IO drivers), but I changed the parameters for the CCDC. In my opinion the same CCDC driver should be able to handle both sensors, because it just grabs the data provided at the video port input. The data are packed differently for the MT9V022 (10-bit raw data) and the OV10620 (probably 10-bit YUV 4:2:2), so I should convert the OV10620 data to grayscale and send it to the VPBE (also YUV 4:2:2). If anyone has an idea what I did wrong, or has a test project that I can use to get a clear picture, please let me know.

  • Which DaVinci platform are you using (DM355, DM6446, DM6437, DM6467, ...)?  Also, if you can confirm that the output of the OmniVision part is 10-bit YUV 4:2:2 (as opposed to the raw Bayer pattern normally associated with CCD sensors), we can advise accordingly.  Regardless of the pixel data length, most of our DaVinci processors store pixels in either 8-bit packed mode or 16-bit mode (two bytes per pixel); therefore, for 10-bit pixels (or 20-bit, if Y and CbCr are time multiplexed), there will be some padding of the data which needs to be taken into account when moving pixel data around to produce grayscale.  Also, I believe true grayscale is not supported by our DaVinci platforms, but you can get a black-and-white image by keeping the Y data and throwing away the Cb and Cr data.
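    As a minimal sketch of that Y-keeping step, assuming the 10-bit Y samples arrive right-justified in 16-bit words (the function name and layout here are illustrative, not from any TI driver):

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Convert 10-bit Y samples (assumed right-justified in 16-bit words,
     * as the VPFE typically stores them) to 8-bit grayscale by dropping
     * the two least-significant bits. Verify the justification against
     * your CCDC configuration before relying on this. */
    static void y10_to_gray8(const uint16_t *y10, uint8_t *gray, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            gray[i] = (uint8_t)((y10[i] & 0x3FF) >> 2);
    }
    ```

    If the samples were left-justified instead, the shift would change accordingly.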

  • I split the thread since this seems to be largely a new topic.

    dejan sajic said:
    I'm using the same "test" project as for the monochrome MT9V022 (with mini IO drivers), but I changed the parameters for the CCDC.

    If you are still using the DM6437, I am curious where you found a test project for the MT9V022; I have only seen a driver for the MT9T001 sensor. The driver will probably work at a basic level, but you will have to deal with any color-formatting differences between the sensors. For example, if the MT sensor was outputting raw Bayer-pattern data, then the driver was probably having it converted into YCbCr before passing it into DDR, which may not be necessary with the new sensor.

     

  • Hi Bernie,

    Yes, I'm still using the DM6437. I didn't find a test project; I made one myself, and I'm now using the same project to test the OVT10620 sensor.

    I now get a clear picture from the OmniVision sensor. I still have a few problems to solve: there is a blank/green band (like some offset problem) on the right side of the picture (see the attached picture), and I also have a delay problem.

    To get it working with the OVT10620, I used the same CCDC driver as for the Micron imager (MIO driver), but I changed the resolution to 752*2 x 480.

    Because I set the sensor up to provide YUV 4:2:2 data, the input buffers and the resolution had to be doubled. On every cycle I get either Y (10 bits) or U/V (10 bits), so I need double the resolution to be able to capture both components (YU or YV). It is also very important to set up the registers over the SCCB connection before you start capturing data; in particular, set the register at address 0x4E to 3. Without that change I was not able to get a clear picture from the sensor. I believe it improves the quality of the data signals (sets the voltage level to 3.3 V).
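    For reference, an SCCB register write is a three-phase transmission: device write address, register sub-address, then the data byte. A small sketch of building that frame follows; `SCCB_DEV_ADDR` is a hypothetical placeholder, so take the real write address from the OV10620 datasheet, and hand the frame to your board's I2C/SCCB driver:

    ```c
    #include <assert.h>
    #include <stdint.h>

    #define SCCB_DEV_ADDR 0x60  /* hypothetical device write address */

    /* Build the 3-phase SCCB write transmission for one register. */
    static void sccb_write_frame(uint8_t reg, uint8_t val, uint8_t frame[3])
    {
        frame[0] = SCCB_DEV_ADDR;  /* phase 1: slave write address */
        frame[1] = reg;            /* phase 2: register sub-address */
        frame[2] = val;            /* phase 3: data byte */
    }
    ```

    For the register mentioned above, that would be `sccb_write_frame(0x4E, 0x03, frame)`.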

    Does anyone have any idea how to remove blank part from the picture?

  • This seems like a minor problem, perhaps in defining the display window. I am not too familiar with the DM6437 software, but I would check the values of the following registers to make sure they make sense: HSTART, HSPLS, HINT, HVALID.

  • Hi all,

    I resolved the offset problem (it was just a matter of the sensor's configuration). Now everything works fine, but slowly. :)

    The video input format is YCbCr 4:2:2, which means that one pixel is represented with 4 bytes (YCb or YCr), and I am interested only in the Y component (the grayscale information). Therefore I have to prepare the buffer for the processing algorithms by discarding the Cb and Cr components and keeping only the Y component. That produces a half-sized buffer, and I decided to use EDMA for it. I configured one channel as an AB-synchronized regular EDMA channel with the following parameters:

    · B source offset = 2 (YCb or YCr) * pixelSize (2 bytes)
    · B destination offset = pixelSize (2 bytes)
    · C source offset = noOfImagerCols (752) * pixelSize (2 bytes) * 2 (YCb or YCr)
    · C destination offset = noOfImagerCols (752) * pixelSize (2 bytes)
    · A counter = 2 (YCb or YCr)
    · B counter = noOfImagerCols (752)
    · C counter = noOfImagerRows (480)

    An EDMA channel configured this way copies just the Y component from the original/source buffer (which contains YCb and YCr) to the destination buffer. It works fine, but because it has to run first and all the other processing blocks work on the destination buffer, I need to wait for it to finish. The problem is that the copy alone takes about 10 ms, while the entire rest of the application takes less than that.
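    The parameter set above can be modeled on a host machine to check the arithmetic. The following is only a software illustration of an AB-synchronized transfer (ACNT bytes per array, BCNT arrays per frame, CCNT frames, with B/C indexes applied to source and destination), not the DM6437 EDMA programming sequence itself:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* Host-side model of an AB-synchronized EDMA transfer. For each of
     * ccnt frames and bcnt arrays, copy acnt bytes, advancing the
     * source/destination addresses by the B and C indexes (the C index
     * here is taken relative to the frame start, as in AB-sync mode). */
    static void edma_ab_model(const uint8_t *src, uint8_t *dst,
                              int acnt, int bcnt, int ccnt,
                              int src_bidx, int dst_bidx,
                              int src_cidx, int dst_cidx)
    {
        for (int c = 0; c < ccnt; c++)
            for (int b = 0; b < bcnt; b++)
                memcpy(dst + c * dst_cidx + b * dst_bidx,
                       src + c * src_cidx + b * src_bidx,
                       (size_t)acnt);
    }
    ```

    With the values from the post this would be called as `edma_ab_model(src, dst, 2, 752, 480, 4, 2, 752*4, 752*2)`, pulling every Y word out of the interleaved YCb/YCr stream.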

    I believe, but am not sure, that the Previewer should be able to convert from YCbCr to 8/10-bit raw format, but I don't know how, or how fast it is.

    I'm sure this is a common problem, so you probably already know the best/fastest way to do it.

     

     By the way, I am working on the EVM6437 board and I am using IOM drivers (not the PSP drivers) to handle video input/output.


  • dejan sajic said:
    The video input format is YCbCr 4:2:2, which means that one pixel is represented with 4 bytes (YCb or YCr)

    I think this may just be a typo, but each individual pixel is actually represented by 2 bytes.

    dejan sajic said:
    I believe, but am not sure, that the Previewer should be able to convert from YCbCr to 8/10-bit raw format, but I don't know how, or how fast it is.

    The Previewer is actually designed around the reverse operation: taking raw RGB Bayer-pattern data and processing it into YCbCr data. Based on the block diagram, I do not believe you could use the Previewer to convert incoming YCbCr to raw RGB, and if you want a grayscale image this is not really what you would want anyway.

    dejan sajic said:
    I'm sure this is a common problem, so you probably already know the best/fastest way to do it.

    The DMA method you have is probably the fastest way, but the key is to have other processing going on during the DMA transfer, since the big advantage of a DMA setup like this is that it runs asynchronously to the CPU. Since you already have the DMA working this may not be much help, but there is a good DMA driver application note that talks about some similar format conversions with the DMA driver here.
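    To make the overlap concrete, here is a small host-side helper that just emits the call ordering of a hypothetical ping-pong (double-buffered) scheme: S = start DMA, W = wait for completion, P = process, and the digit is the buffer index. The real start/wait calls would come from the EDMA driver; this only shows that the copy of frame n+1 can run while frame n is being processed:

    ```c
    #include <assert.h>
    #include <string.h>

    /* Write the ping-pong schedule for nframes frames into out,
     * which must be large enough (4 chars per frame plus "S0\0"). */
    static void pipeline_schedule(int nframes, char *out)
    {
        int k = 0;
        out[k++] = 'S'; out[k++] = '0';            /* prime buffer 0 */
        for (int n = 0; n < nframes; n++) {
            int cur = n & 1, nxt = (n + 1) & 1;
            out[k++] = 'W'; out[k++] = '0' + cur;  /* frame n ready */
            if (n + 1 < nframes) {
                out[k++] = 'S'; out[k++] = '0' + nxt; /* fetch n+1 */
            }
            out[k++] = 'P'; out[k++] = '0' + cur;  /* CPU processes n */
        }
        out[k] = '\0';
    }
    ```

    For two frames the schedule is S0, W0, S1, P0, W1, P1: the second transfer runs concurrently with the processing of the first frame.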

  •  Hi Bernie,

    It's not a typo, but maybe I didn't explain it the right way. Normally (with the MT9V022, for example) on every pixel clock I receive one pixel: 10-bit raw data packed into a 16-bit word and stored in the VPFE buffer (752*480*2 bytes per frame). In this case (OV10620), on one pixel clock I receive the Y component (12-bit data) and on the next clock I receive either the Cb or the Cr component (12-bit data). So if we assume that one pixel is represented by YCb or YCr, one pixel is 12 bits (packed in 16 bits) + 12 bits (packed in 16 bits) = 32 bits of data. Therefore I need to set the CCDC up for double the resolution (752*480*2 samples per frame) and allocate double-sized buffers (752*480*4 bytes). When I receive one picture frame, the buffer contains, per pixel, a 2-byte packed Y component and a 2-byte packed Cb/Cr component. Because I'm working with grayscale data, I'd like to extract the Y component into a separate working buffer of 752*480*2 bytes.
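    The extraction described above can also be written as a plain CPU loop (the names and the Y-first word ordering are taken from this post, not from a TI driver); the EDMA setup earlier in the thread does the same job asynchronously:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stddef.h>

    /* The captured frame holds cols*rows sample pairs, each pair being
     * a 16-bit Y word followed by a 16-bit Cb or Cr word. Copy only
     * the Y words into a half-sized working buffer. */
    static void extract_y(const uint16_t *yuv, uint16_t *y,
                          size_t cols, size_t rows)
    {
        for (size_t i = 0; i < cols * rows; i++)
            y[i] = yuv[2 * i];   /* keep Y, skip the chroma word */
    }
    ```

    For the OV10620 setup above, `cols` would be 752 and `rows` 480, giving the 752*480*2-byte working buffer.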

  • Dear Dejan,

    I'm also using a DM6437-based platform, and we're planning to switch from the MT9V02x sensors to the OVT10620 sensor. Just a small question: did you use the BT.656 output of the OVT10620 (which I believe is pin-compatible with the MT9V02x RGB Bayer output)?

    Best Regards,

    Romain

  • Hi Romain,

    I'm using "Parallel Generic Raw" output from OVT10620.

    First I used it in YUV 4:2:2 mode, but now I'm using it with raw data directly from the sensor (DSP bypass mode).

    Best,

    -Dejan

  • Hi All,

    I am a postgraduate student currently working on an automotive safety device project that provides features like 'Road Sign Detection and Warning' and 'Lane Departure Warning'. For this purpose I am using the HDR-equipped OV10620 image sensor. I am finding it difficult to get hold of drivers for the OV10620 sensor for the OMAP3530-based DevKit8000 platform. Any help in the form of a basic driver would be appreciated.

    Thanks,

    Aman