
Questions on BT.656 fields

Anonymous

Other Parts Discussed in Thread: TVP5150, TVP5154

Hi All,

 

I would like to ask a question on BT.656 mode output for CCD/CMOS sensors.

For a BT.656 stream that is converted from a real PAL/NTSC signal, the two fields are NOT captured at the same time. But nowadays many BT.656 streams actually come from CCD/CMOS sensors; are the two fields in such a digital stream also captured at different times?

I figure that the internal mechanism of these CCD/CMOS sensors could be:

  1. Integrate the optical signal (the intensity of the light components, for example in a Bayer pattern) on the pixel sensor array every 1/30 second.
  2. After the optical integration:
    1) sample and quantize lines 1, 3, 5, ... 2k-1, generate the SAV and EAV bytes, and output them
    2) sample and quantize lines 2, 4, 6, ... 2k, generate the SAV and EAV bytes, and output them

If this is indeed their actual mode of operation, then neighboring fields belonging to a single frame are actually captured at the same time. Is that correct?
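
Below is a minimal C sketch of the split I have in mind, purely for illustration. It assumes one progressive capture taken at a single instant (the names and sizes, LINE_BYTES and FRAME_LINES, are mine, not from any sensor datasheet) and simply separates the odd and even display lines into the two fields:

```c
#include <stdint.h>
#include <string.h>

#define LINE_BYTES  1440   /* illustrative: 720 pixels x 2 bytes/pixel (4:2:2) */
#define FRAME_LINES 480    /* illustrative: NTSC active lines per frame        */

/* Split one progressive frame (captured at a single instant) into a
 * top field (display lines 1, 3, 5, ...) and a bottom field
 * (display lines 2, 4, 6, ...). */
static void split_into_fields(const uint8_t *frame,
                              uint8_t *top_field,
                              uint8_t *bottom_field)
{
    for (int line = 0; line < FRAME_LINES; line++) {
        const uint8_t *src = frame + (size_t)line * LINE_BYTES;
        if ((line & 1) == 0)   /* 0-based even index = odd display line */
            memcpy(top_field    + (size_t)(line / 2) * LINE_BYTES, src, LINE_BYTES);
        else
            memcpy(bottom_field + (size_t)(line / 2) * LINE_BYTES, src, LINE_BYTES);
    }
}
```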


Another question: in the VPFE on DM64x chips, is there any way for the interrupt service routine triggered by VDINT0 or VDINT1 to know the field ID (that is, whether the interrupt is coming from the top or the bottom field)?

 

Sincerely,
Zheng

  • Zheng,

    Are you trying to directly connect the sensor to the DM64x part using BT.656?

    Do you have information on the sensor that you are using?  The output format and framing could depend on how much is integrated into the sensor module.

    Which DM64x part are you using?

     

  • Anonymous in reply to Larry Taylor

    Dear Larry,


    I am not connecting a digital sensor directly to the VPFE. It is a small CMOS-based camera whose output is converted to an NTSC CVBS composite video signal. The EVM board I am using has a TVP5150 chip which is directly connected to the VPFE; the TVP5150 samples the analog CVBS signal and converts it into a parallel 8-bit BT.656 stream.

    Larry Taylor said:

    The output format and framing could depend on how much is integrated into the sensor module.

    By "format" and "framing", are you suggesting that

    1. The digital sensor can be configured to output formats other than a YCbCr BT.656 stream.
    2. In modes other than BT.656, each frame is output as a whole rather than as two interlaced fields one after another, so in that case I would no longer need to worry about the field ID issue.

    If I am understanding you correctly, then

    1. Of course, the majority of digital sensors do allow a choice between output formats.
    2. I am currently using this integrated camera module (lens + sensor + D-A converted CVBS output + wire and CVBS connector) in the development phase only because it is easy to connect to my EVM. It has only a power cable and an output connector, with no control buttons on the outside, let alone any register configuration over an I2C bus (there is physically no such connection, and the actual digital sensor is hidden inside the plastic body of the camera module).

    So in this particular situation, in which I have no choice but to receive the sampled BT.656 stream from the TVP5150, is there any way to identify the field ID (top or bottom) in the interrupt service routine?


    The DSP I am using is DM6437.
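
    What I have in mind for the ISR is roughly the sketch below. ccdc_read_field_id() is only a placeholder for whatever register read actually exposes the field indication on the DM6437 VPFE/CCDC (I do not know whether such a status bit exists, which is exactly my question), and all other names are mine:

```c
#include <stdint.h>

#define FIELD_TOP    0
#define FIELD_BOTTOM 1

static volatile int      g_current_field   = FIELD_TOP;
static volatile uint32_t g_field_count[2]  = { 0u, 0u };

/* Placeholder: replace with the real CCDC/VPFE status-register read
 * that reports the current field, if the hardware provides one. */
static int ccdc_read_field_id(void)
{
    return 0;   /* hypothetical: 0 = top field, 1 = bottom field */
}

/* Interrupt service routine hooked to VDINT0 (and/or VDINT1). */
void vdint_isr(void)
{
    /* Preferred: read the field ID latched by the capture hardware. */
    g_current_field = ccdc_read_field_id() ? FIELD_BOTTOM : FIELD_TOP;

    /* Fallback: if no status bit exists, toggle a software flag,
     * e.g. g_current_field ^= 1; -- but this only stays correct if
     * no field interrupt is ever missed. */

    g_field_count[g_current_field]++;

    /* The application can now pick the destination buffer for the
     * next field based on g_current_field. */
}
```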

     

    Sincerely,
    Zheng

  • Anonymous in reply to Larry Taylor

    Dear Larry,

    I found in a Micron sensor document that

    sensor document said:

    The **** outputs processed video as a standard ITU-R BT.656 stream, an RGB stream, or as processed or unprocessed Bayer data. The ITU-R BT.656 stream contains YCbCr 4:2:2 data with optional embedded synchronization codes. This output is typically suitable for subsequent display by standard video equipment or JPEG/MPEG compression. RGB functionality provides support for LCD devices.

    Does "output is typically suitable for subsequent display" mean that BT.656 YCbCr stream is not typically used directly in image/video processing? Is it the common practice that people usually do image/video processing on RGB signals?

               

               
    Sincerely,
    Zheng

  • Zheng,

    Most of our processors support the BT.656 interface, which is commonly used for interlaced NTSC and PAL SDTV.  If the sensor outputs a standard BT.656 stream, then a direct connection should work, but I would check the details with the manufacturer.

    An issue with this type of direct interface will be cable requirements.  Driving a parallel digital cable over a significant length may not be desirable.  Most SDTV camera installations requiring lengthy connections use the camera's analog composite video output.  A video decoder at the receiving end converts the analog composite signal to the BT.656 interface used with the processor.

    The sensors may have a BT.656 interface for local connection to a video encoder or processor, which can convert the digital YCbCr to analog composite video or another format for transmission.

    Most video is processed as YCbCr but is often converted to RGB for display on an LCD.
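
    For reference, that YCbCr-to-RGB step is roughly the fixed-point conversion sketched below, using the common integer approximations of the BT.601 studio-range coefficients (this is only an illustrative sketch, not code taken from a TI driver or library):

```c
#include <stdint.h>

static uint8_t clamp_u8(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Convert one studio-range (Y 16-235, Cb/Cr 16-240) YCbCr sample to
 * 8-bit RGB using BT.601 coefficients scaled by 256:
 * 1.164 -> 298, 1.596 -> 409, 0.391 -> 100, 0.813 -> 208, 2.018 -> 516. */
static void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                         uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = (int)y  - 16;
    int d = (int)cb - 128;
    int e = (int)cr - 128;

    *r = clamp_u8((298 * c + 409 * e           + 128) >> 8);
    *g = clamp_u8((298 * c - 100 * d - 208 * e + 128) >> 8);
    *b = clamp_u8((298 * c + 516 * d           + 128) >> 8);
}
```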

    The best approach will depend on what type of product you are building (a camera with multi-output options, or a video processing system for use with remote cameras).

     

  • Anonymous in reply to Larry Taylor

    Dear Larry,

    Many thanks, it is working well for me now.

     

    Zheng

  • Hi Zheng,

    I'm capturing some video frames provided by a TVP5154 decoder. Each frame contains only one field; do you have an idea of how I can recompose my image into a full frame?

    Thank you for your time.

    Regards

    Loïc

  • Anonymous in reply to Loic Akue

    Hi Loïc,

    My computer has been having problems recently, so I cannot view the code and provide the details right now.

    I only know the BT.656 format. In this format the frame synchronization information is embedded in the data stream, along with a field identifier usually called "FID"; you might want to consult the BT.656 standard for the details.
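
    From memory, the timing reference code in BT.656 is the four-byte sequence FF 00 00 XY, where in the XY word bit 6 is F (field), bit 5 is V (vertical blanking) and bit 4 is H (distinguishes EAV from SAV). A small sketch of extracting it (the names are mine):

```c
#include <stdint.h>

#define BT656_F_BIT 0x40   /* field: 0 = field 1 (top), 1 = field 2 (bottom) */
#define BT656_V_BIT 0x20   /* vertical blanking */
#define BT656_H_BIT 0x10   /* 0 = SAV, 1 = EAV */

/* xy is the fourth byte of an FF 00 00 XY timing reference code. */
static int bt656_field_id(uint8_t xy)
{
    return (xy & BT656_F_BIT) ? 1 : 0;
}

static int bt656_is_eav(uint8_t xy)
{
    return (xy & BT656_H_BIT) ? 1 : 0;
}
```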

    If the processor you choose has a video processing front end, it should be able to recognize that information and generate an interrupt for each field. To put the two fields back into the original frame, you need to change the destination address for each field. There should also be register configurations in your processor's video processing module for adjusting the line offset, which allows you to weave (de-interlace) the fields into a single frame.
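
    In software the recombination amounts to interleaving the two field buffers line by line, roughly as sketched below (buffer names and sizes are illustrative). On the DM6437 the VPFE can achieve the same thing in hardware by writing each field to a different start address with a doubled line offset; please check the exact CCDC registers (SDR_ADDR and SDOFST, if I remember correctly) in the VPFE reference guide.

```c
#include <stdint.h>
#include <string.h>

/* Weave two captured fields back into one interlaced frame.
 * top    holds display lines 1, 3, 5, ... of the frame,
 * bottom holds display lines 2, 4, 6, ...
 * line_bytes is the length of one line in bytes (e.g. 1440 for
 * 720-pixel 4:2:2); frame_lines is the total number of lines. */
static void weave_fields(const uint8_t *top, const uint8_t *bottom,
                         uint8_t *frame, int line_bytes, int frame_lines)
{
    for (int line = 0; line < frame_lines; line++) {
        const uint8_t *src = (line & 1)
            ? bottom + (size_t)(line / 2) * line_bytes
            : top    + (size_t)(line / 2) * line_bytes;
        memcpy(frame + (size_t)line * line_bytes, src, (size_t)line_bytes);
    }
}
```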

    I am working on PCB routing now, so I might not be able to provide more detailed help.


    Good luck,
    Zheng

  • Hi,

    Thank you for the quick reply.

    I'll look into my CCDC configuration.

    Best Regards,
    Loïc