Finally I got the OmniVision sensor on my bench, and I tried to capture video from it without using the SCCB connection (i.e. with the default register settings).
I do get output, but the video is very poor, with a lot of noise. I'm probably not interpreting the captured data the right way, so I'd appreciate it if someone could explain how the output data are packed (probably YUV 4:2:2) and aligned. I'm using the OV10620 color sensor, and I'd like to process and display just the grayscale data.
I'm using the same "test" project as for the monochrome MT9V022 (with the mini I/O drivers), but I changed the CCDC parameters. In my opinion the same CCDC driver should be able to handle both sensors, because it just grabs whatever data is presented at the video port input. The data are packed differently for the MT9V022 (10-bit raw data) and the OV10620 (probably 10-bit YUV 4:2:2), so I need to convert the OV10620 data to grayscale and send it to the VPBE (also YUV 4:2:2). If anyone has an idea what I did wrong, or has a test project I can use to get a clean picture, please let me know.
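In case it clarifies what I mean by "convert to grayscale": here is a minimal sketch of the luma extraction I have in mind, assuming a standard packed YUYV (YUV 4:2:2) byte order. That ordering is an assumption on my part; the OV10620 may actually emit UYVY or some other arrangement, which is exactly what I'm trying to confirm.

```c
#include <stddef.h>
#include <stdint.h>

/* Extract the Y (luma) bytes from a packed YUV 4:2:2 buffer.
 * Assumed layout per pixel pair: Y0 U0 Y1 V0 ...
 * (hypothetical -- the real OV10620 packing may differ). */
static void yuyv_to_gray(const uint8_t *yuyv, uint8_t *gray, size_t num_pixels)
{
    for (size_t i = 0; i < num_pixels; i++)
        gray[i] = yuyv[2 * i];  /* with YUYV, a Y sample sits at every even byte */
}
```

If the sensor actually outputs UYVY, the luma would instead be at the odd byte offsets (`yuyv[2 * i + 1]`), so knowing the exact packing matters before I can get a clean picture.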