
8-bit image data is cut

Hello all:

   I am working on capturing 800x600 raw Bayer data, but the captured data has a problem: the two highest bits (bit 7 and bit 6) of every pixel are always cut (always 0). The rest are OK. I've checked the hardware and the data is there, so it seems there is a limit of 0x3F on each input pixel value.

   Is there any register configuration to set such limit?

    Many thanks.

Jerry

  • Hi Jerry,

    Can you elaborate more on this? What hardware are you using?

  • Bhat, thank you for your reply.

    I am working with a DM368 to capture a video stream from an FPGA that mimics an OV5642 sensor. The data interface is 8 bits wide. The FPGA generates a test pattern (raw Bayer) with four colors, one in each quadrant. I am trying to do a sanity check using a simple API in which neither the previewer nor the resizer is used; that is, the raw data output from the ISIF is monitored directly. Two days ago I found that the two highest bits (CIN6 and CIN7) are always missing from each pixel, even though the data is physically present at the data port.

    The problem has now been resolved. It came from the register setup in CGAMMAWD: when the problem occurred, CGAMMAWD was set to 0x9 by the driver. The lowest bit, named CCDTBL, is automatically set by the driver, and that is what caused the two highest bits (CIN6 and CIN7) of each pixel to go missing. After disabling CCDTBL, these two bits are back.

    Although the capture now seems OK, I am still confused about the register setting. What does setting the CCDTBL bit mean? The user guide says "On/Off control of Gamma (A-LAW) table to ISIF data saved to SDRAM", but I don't understand what that means. In addition, the current setting of the GWDI field is 0x0100, which gives the value of 4 (bit 11), but I don't know why it is set like this. Since the data bus is 8 bits wide, a value of 8 (bit 7) in this field would make more sense to me, yet only a value of 4 works. The setting of the CCDW field in MODESET also seems confusing to me.

    Regards,

    Jerry

  • Hi Jerry,

    Based on your description it looks like you are dumping the raw data after the ISIF. It makes sense now that CCDTBL comes into play, since you are using the ISIF-to-SDRAM dump mode.

    It basically lets you apply A-Law compression (10-bit to 8-bit) to the data coming out of the ISIF before it is dumped to SDRAM. This is useful to reduce bandwidth.

    The ISIF interfaces with a 16-bit line, and you need to specify the exact MSB position of the raw data coming in from the sensor; that is what GWDI is used for.

  • Thank you very much for your reply.

    Yes, that's what I did. However, even though I used the ISIF-SDRAM dump mode, why is the output not what I expect? I mean, the two highest bits are missing whenever CCDTBL is set. I corrected it by manually modifying the CGAMMAWD setting in dm365ccdc.c, but I don't think that is the proper way.

    Another question regards the resizer A configuration in continuous mode. I am trying to adapt code that was used in a 1280x720 (720p-60) application (DM365) to 800x600. I found that the program hangs at the resizer A output: VDINT0 triggers fine by chaining the buffers infinitely, but imp_dma_isr is never triggered. It seems the resizer is waiting for something. I've checked everything related to the resizer configuration in the Resize.c file. In continuous mode, the configuration of resizer A is nothing but setting the ENABLE bit before calling RSZ_S_CONFIG; unlike one-shot mode, the other parameters in continuous mode, such as the image size, are configured by the CCDC driver. I guess the problem comes from an improper image-size configuration in the application, which makes the resizer expect a different image size than it receives.

    Jerry

  • Hi Jerry,

    I will check on resizer and get back to you.

  • Thank you so much Bhat.

    What I am doing is modifying code that works for 1280x720, 720p-60 capture to a non-standard resolution. I am using 800x600 for test purposes, and eventually 1440x1440, 1080p-60 will be used for the final product. The two projects have exactly the same hardware setup with an 8-bit data interface. I wonder if I should add a new standard to the driver in order to accommodate my needs; so far, simply changing the resolution setting in the API does not seem to work.

    Jerry