
Camera Interface Subsystem - OMAP3530

Hi,

Please clarify my understanding:

1. The camera interface subsystem can support a maximum of 16 bits/pixel.

2. Does 8-bit raw data mean an R component (8 bits), a G component (8 bits), and a B component (8 bits)?

3. How is 8-bit raw data stored in memory when the CCDC output memory port is enabled?

4.What is the difference between YCbCr data on 16 bits and YCbCr data on 8 bits?

Thanks.

  • 1. The OMAP35x camera interface supports a 12-bit data bus, but note that this does not impose restrictions on pixel size.  It just means up to 12 bits of data can get through at a time.  The BT.656 standard requires YCbCr 4:2:2 16-bit pixels, which are normally transmitted 8 bits at a time using two clock cycles (8 bits are transferred across the data bus during each clock cycle).

    2. In RAW (or SYNC) mode, there are no real restrictions on the pixel format or size; you can transfer just about any data (RGB, YCbCr, Bayer-pattern pixel data, or any other data).  Therefore you can transfer RGB888 data if you wish; one way to do this would be to send 8 bits per clock for a total of three clock cycles to get an entire pixel of data in.

    3. In 8-bit raw mode, your SDRAM data will look like:

        byte address 0:  8-bit raw pixel 0
        byte address 1:  8-bit raw pixel 1
        byte address 2:  8-bit raw pixel 2
        byte address 3:  8-bit raw pixel 3

    and so on....

    4. I can interpret this in two different ways.  If you are referring to pixel size, then a 16-bit YCbCr pixel can represent a wider range of colors than an 8-bit YCbCr pixel (the latter is not very common).  If you are referring to data width, some of our processors have video interfaces that can read 16-bit pixels at one time (during one clock cycle); in this case you may choose to send 16-bit pixels in either 16-bit mode (one pixel per clock) or 8-bit mode (one pixel every two clocks = 16 bits).

     

    Hope this helps.

  • Thanks a lot.

    2. Are you referring to the pixel clock here?  If I use 3 clocks to transfer one pixel of data (RAW RGB888, 24 bits per pixel) for an image of 500x400, the camera interface will get 3 * 500 pixel clocks per line.  Please clarify this.

    I want to transfer RAW RGB888 (24 bits per pixel).

    For this, the register field configuration is:

    CCDC_SYN_MODE[INPMOD] = 0; // raw data

    CCDC_SYN_MODE[DATSIZ] = 0x7; // CAM_D signal width is 8 bits

    What will the following register field configuration be?

     

    CCDC_SYN_MODE[PACK8] = ???;//16 bits or 8 bits per pixel

    My understanding is that the camera interface will store a maximum of 16 bits per pixel in memory.  Please clarify this.

    I transfer RAW RGB888 (24 bits per pixel) in 3 clocks (8 bits per clock).  If I configure CCDC_SYN_MODE[PACK8] for 8 bits per pixel, then:

        byte address 0:  8-bit (byte0)
        byte address 1:  8-bit (byte1)
        byte address 2:  8-bit (byte2)
        byte address 3:  8-bit (byte3)

    and so on.....

    So the first pixel's data is stored at an SDRAM address (for example, 0x87200000) and reads back as 0x<byte3><byte2><byte1><byte0> when accessed as a 32-bit word.  Is that correct?

    So I have to rearrange this before displaying it on the LCD.  Please clarify this.

    The camera interface provides the option to set the pixel clock position at which data output to memory begins, via CCDC_HORZ_INFO[SPH].

    In an image of 500x400 I want to store only 400x400 of data, i.e., skip 100 pixels in every line.

    Since we are using 3 pixel clocks to transfer a complete pixel, do I have to configure the CCDC_HORZ_INFO[SPH] field with 100 * 3?  Is that correct?