mt9p031 capture problem

I am trying to capture images from an mt9p031 sensor, using the predefined standard:

QXGA-MP-15, i.e. 2048 x 1536 at 15 fps.

The problem I have is that the capture seems to restart at some point.
The returned image starts correctly at the top. Let's say lines 0-1000 are correct,
and lines 1000-1500 are then the same as lines 0-500.

Hardware is an Appro photoelectron camera based on the DM365.
The kernel is a MontaVista-based 2.6.18 and looks very similar to what can be found in
lsp_02_10_00_14.

Capture format is 12-bit unpacked.
To obtain a viewable image, I pretended each pixel was a luminance value, put 0x80 in the chrominance, and then
used the JPEG encoder.
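
For reference, the conversion I do before handing the buffer to the JPEG encoder looks roughly like this (a minimal sketch; the function name is mine, and it assumes the unpacked data is one little-endian 16-bit word per pixel with 12 significant bits, an even pixel count, and UYVY packing on the encoder side):

    #include <stddef.h>
    #include <stdint.h>

    /* Treat each 12-bit sample as luma (drop the 4 LSBs) and pack UYVY
     * with neutral 0x80 chroma so a stock YUV 4:2:2 JPEG encoder will
     * accept the buffer. */
    static void gray12_to_uyvy(const uint16_t *in, uint8_t *out,
                               size_t width, size_t height)
    {
        size_t npix = width * height;          /* assumed even */

        for (size_t i = 0; i < npix; i += 2) {
            out[2 * i + 0] = 0x80;             /* U  (neutral chroma)  */
            out[2 * i + 1] = in[i] >> 4;       /* Y0 (12 bit -> 8 bit) */
            out[2 * i + 2] = 0x80;             /* V  (neutral chroma)  */
            out[2 * i + 3] = in[i + 1] >> 4;   /* Y1                   */
        }
    }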

The image below was resized to avoid hurting the forums, but the original image has the correct size.

[attachment: resized capture output showing the wrap defect]

  • bandini said:
    Hardware is an appro photoelectron camera based on DM365.

    If this is an Appro reference design, I am curious if they gave you a working software driver and a test case to start with?

    If you do have a working test case, I would be curious to see its output relative to the output you show here, in particular whether the upper portion of your image comes from just the upper part of the sensor or is the entire sensor frame squished into the top part of the image.

    In any case, this could be a configuration problem in the sensor (if it is outputting decimated data or resetting early for some reason) or in the VPFE (if the vertical line count is off and you are inadvertently capturing more lines than the sensor is going to give you). It could also be any number of other possibilities, but I would probably start by changing the vertical line count values in the VPFE, and in the sensor if available, to see how they impact the output image; a quick way to run that experiment is sketched below.
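
    For example, something like this (a hedged sketch using only standard V4L2 calls; if your LSP selects modes purely through its predefined standards, the equivalent experiment is editing the height field in the driver's standard table):

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/videodev2.h>

        /* Ask the driver for a slightly different capture height and report
         * what it grants; if the wrap point in the image moves with the
         * height, the vertical line count is the knob to look at. */
        static int try_height(const char *dev, unsigned int height)
        {
            struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
            int fd = open(dev, O_RDWR);

            if (fd < 0)
                return -1;
            if (ioctl(fd, VIDIOC_G_FMT, &fmt) == 0) {
                fmt.fmt.pix.height = height;   /* e.g. 1536, then 1528, ... */
                if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
                    perror("VIDIOC_S_FMT");
                printf("driver granted %ux%u\n",
                       fmt.fmt.pix.width, fmt.fmt.pix.height);
            }
            close(fd);
            return 0;
        }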

  • Well, I did not use their software, because it is CSL based, so I took the LSP that came with the board. This LSP looks very similar
    to the 02_10_14 version from MV/TI.

    I had to modify the driver in the following way (see the sketch below):
    -the I2C address of the Micron sensor
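
    Concretely, the change looked like this (a sketch; the macro name and value depend on the LSP version and the board strapping, so treat it as illustrative):

        /* The MT9P031 answers at 7-bit address 0x5D or 0x48 depending on how
         * its SADDR pin is strapped; check the schematic and the datasheet. */
        #define MT9P031_I2C_ADDR    0x48    /* was 0x5D in the stock driver */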

    The VGA and UXGA (1600 x 1200) formats work fine, so the register programming must
    be correct. I also checked it against the Micron datasheet, and it looks fine to me.
    There is no decimation or binning taking place here.

    Could it be a bandwidth problem?

  • Bandwidth problems typically show up as horizontal white-line noise.  This seems more like an interface problem between the sensor and the DM365 VPFE; it appears they are not speaking the same language (e.g. the configured frame sizes do not agree).  As Bernie suggested, perhaps playing with the vertical line count would fix this.

  • I have gone from 2048 x 1536 down to a lower resolution (1600 x 1200).
    The problem was then present on some images, while others would be fine.

    The defect never starts in the middle of a line.
    One defect is particularly strange. I have three buffers queued in the V4L2 capture driver.
    The V4L2 driver will tell me it has a new buffer ready for dequeuing, but the dequeued buffer was not modified.

    It is as if the buffer updating logic or code was wrong (see the sketch at the end of this post for how I check for this).

    The fact that I get correct images shows that, at least when it comes to frame size, the camera chip and the VPFE programming agree with each other.
    The fact that I get entire old images, or pieces of two images in the same frame, makes me think that the problem is not a hardware one but perhaps
    a problem with interrupt latency?

    If I use a pixel clock that is half as fast, all of these problems vanish (for 1600 x 1200).
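
    To pin down the stale-buffer symptom I dequeue like this (a sketch using standard V4L2 calls, assuming the driver fills buf.sequence; if it does not, stamping the first bytes of each buffer before QBUF and re-checking them after DQBUF works too):

        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <linux/videodev2.h>

        /* The sequence number should advance by exactly one per dequeued
         * frame; anything else means a dropped or recycled buffer. */
        static unsigned int last_seq = (unsigned int)-1;

        static int dequeue_and_check(int fd)
        {
            struct v4l2_buffer buf;

            memset(&buf, 0, sizeof(buf));
            buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buf.memory = V4L2_MEMORY_MMAP;
            if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
                return -1;
            if (buf.sequence != last_seq + 1)
                fprintf(stderr, "sequence jumped %u -> %u\n",
                        last_seq, buf.sequence);
            last_seq = buf.sequence;
            return ioctl(fd, VIDIOC_QBUF, &buf);   /* hand the buffer back */
        }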

  • This is very interesting.  Perhaps we are running into pixel clock limitations; I believe the pixel clock on the DM365 EVM is likely 74.25 MHz (the rate required for 720p support).  If we consider the relationship

          pixel clock = resolution  *  refresh rate

    Then it would mean that the fixed pixel clock would not be able to keep up with a higher resolution unless the refresh rate was adjusted accordingly.  I believe the demos included with the DVSDK (is this what you are using?) assume a constant 30 fps refresh rate.  Maybe we are just pushing the pixel clock beyond what the hardware can recover from...  If so, we will need to redesign the hardware with an appropriate pixel clock to support the resolution you require.  BTW, what is the camera refresh rate and/or pixel clock?
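
    For reference, plugging the active-pixel numbers from this thread into that relationship (blanking ignored, so the real pixel clocks run somewhat higher):

          1280 x  720 x 60 fps ≈ 55.3 Mpixels/s   (74.25 MHz once blanking is included)
          1600 x 1200 x 30 fps ≈ 57.6 Mpixels/s
          2048 x 1536 x 15 fps ≈ 47.2 Mpixels/s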

  • The settings I am using are those of the mt9p031 driver for DaVinci that is present in the DVSDK.
    It is said to be UXGA at 30 fps.

    So I believe the clocking is the same as for the EVM.

    According to the register programming, the pixel clock is four times the clock on EXTCLK, which is labelled
    on the schematics as 24 MHz.

    So the pixel clock should be 96 MHz.

    Tomorrow, I will try to output 8-bit data instead of 16-bit (12-bit unpacked).
    If the problem is at the input, there should be no change, because the ISIF will still see 12 bits @ 96 MHz.

    If the problem is in writing the image data to DDR RAM, halving the bitrate should improve the situation.
    But I am just guessing here.

  • So we have the following results:

    -1600 x 1200 @ 30 fps unpacked =>  unstable (see first post)
    -1600 x 1200 @ 15 fps unpacked => stable
    -1600 x 1200 @ 30 fps 8 bit packed => stable

    So clearly there is something going on here with either DDR bandwidth or the driver's interrupt handling of the buffers.
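
    Back-of-the-envelope capture write bandwidth for the three cases (12-bit unpacked stored as 2 bytes/pixel, 8-bit packed as 1 byte/pixel):

          1600 x 1200 x 30 fps x 2 bytes = 115.2 MB/s  ->  unstable
          1600 x 1200 x 15 fps x 2 bytes =  57.6 MB/s  ->  stable
          1600 x 1200 x 30 fps x 1 byte  =  57.6 MB/s  ->  stable

    Both stable cases land at the same 57.6 MB/s, which fits the idea that something in the write path tops out between 57.6 and 115.2 MB/s.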

  • Aside from capturing and displaying, are you doing anything else in the video pipeline?  The encode demo included with the DVSDK captures 720p @ 60 --> resize --> encode (the DDR2 bandwidth hog in this scenario) --> display 720p @ 60.  From a DDR2 bandwidth perspective, 1600 x 1200 @ 30 is only about 5% more than 720p @ 60, but considering the other stages... unless you are doing encoding at this larger resolution, I do not believe you would run into DDR2 issues.