
DM6467 SD Composite Output

Other Parts Discussed in Thread: THS8200, THS8135, TVP5147M1, TVP7002

(((EDIT: NEVERMIND ALL THIS.  The Spectrum Digital ADV7343 daughter card for the HD 1080P EVM includes SD (NTSC) Composite Out using the Analog Devices ADV7343 chip.  This auxiliary reference design includes the circuit diagram and technical reference.  See http://support.spectrumdigital.com/boards/evmdm6467t/revb/)))

I'm designing a new board, using as reference design the DM6467T EVM (Spectrum Digital calls it HD 1080P EVM).  I'm designing multiple daughter card options for various video I/O formats. (This post is about SD-in/SD-out at this time, not format conversion.)

The HD 1080P EVM has an SD (Standard-Definition) Composite Input, but does *NOT* include an SD Composite Output.  I'm trying to figure out the hardware to produce this from the DM6467T, which does not have built-in DACs like the DM6446 has. 

I found a post at http://e2e.ti.com/support/dsp/davinci_digital_media_processors/f/99/p/6920/26839.aspx#26839 entitled "DM6467 VGA output" that implies I might be able to output SD Composite (one cable) with the same THS8200 that the HD 1080P EVM uses to output HD Component (three cables).  (Parenthetical cable counts added to prevent readers from misreading Composite vs. Component.)  Is this true?  If so, can I just design the board per my reference design's use of the THS8200 for HD Component, then later make software changes to get SD Composite out of it, while still making efficient use of the DM6467T's built-in abilities rather than totally hard-coding and custom-handling the data to force out SD Composite?

[I added this paragraph to my post before seeing Steve's response below.]  While reading the THS8200 datasheet, I found this comment: "Its ITU-R.BT656 output port could be used to connect to an NTSC/PAL video encoder, such as the Texas Instruments TVP6000, for regular composite/S-video output".  Two things.  First, this implies the THS8200 can't produce SD Composite Output by itself using the same circuit I find in the reference design.  Second, I can't find the TVP6000 anywhere on focus.ti.com, although I do find an Analog Devices ADV7170/ADV7171 and could probably find others; yet I'm looking for an easy and fast development solution, and all-TI might make it easier.

Please note that right now I'm in the schematic and board design phase.  While I can save for later any suggestions about software, I don't have the mental bandwidth right now to focus on that.  At this time, I just need to make sure I'm putting enough of the right hardware on the board.

Thanks very much for your help.

P.S. My boss demands I design the board before trying to do proof-of-concept on the EVM, so I don't have the liberty of simply trying it out on the EVM first.

  • Helmut,

    The THS8200 cannot generate composite video, only component or RGB.

    TI does not currently have a discrete video encoder device so if the processor you are using does not have an integrated composite video encoder then you will, unfortunately, need to look at other silicon vendors for one.

    BR,

    Steve

  • Steve,

    Thanks for your advice.  Per the [bracketed] paragraph (4th paragraph) I added to my initial post, do you think I should connect a third-party NTSC encoder downstream from the THS8200, or go straight from the DM6467T to a third-party NTSC encoder?

    -Helmut

  • Helmut,

    You will always have better quality, and it will be much simpler to go straight from the digital outputs of the DM6467T.

    Using the output of the THS8200 would require you to convert back from analog to digital before feeding into the video encoder, most likely requiring a video decoder too (TI does have some great video decoders :) ).

    As a side note, you might also look at the THS8135 as a substitute for the THS8200 if you don't need color space conversion. It is smaller and may suit your needs too.

     

    BR,

    Steve

  • Steve,

    Thanks again.

    === Below this point all about color space conversion, not SD composite video ===

    About color space conversion...  Bottom line: might I be able to run my legacy video analytics, which operates in RGB, in such a manner that the DM6467T actually sends RGB to the THS8200, and then rely on the THS8200 to convert that RGB to the necessary color space for HD Component Out (YCbCr)?

    === Below is more detail that I wrote before writing the above paragraph. ===

    You make me realize I need to draw a color space flow chart, from input through my video analytics to output.  Perhaps you would be so kind as to comment.  For video input...

    • If I use the reference design TVP5147M1 for SD Composite In, the datasheet says it will convert to YCbCr.  
    • I don't plan on it, but if I used the same chip to do SD Component In, the chip will convert to YPbPr.
    • If I use the reference design TVP7002 for HD Component In, I'm unclear, but the reference design seems to get some kind of Y and C, using only two of the three 8-bit ports on the TVP7002.
    • If I use the Sil9135 HDMI receiver per TI's spraav4.pdf, I believe what I'll get is YCbCr

    Meanwhile, my legacy video analytics uses RGB but might be convertible to YXX.  If RGB, the above means I need a color space conversion on my input.  The legacy program already does a conversion from YUV422 to what I call RGB888.  It does this and other stuff so slowly in straight C-type code that I plan to upgrade to VLIB et al.  It would be great if I could coerce the chips mentioned above to just give me RGB, or if the DM6467T could convert to RGB without burning any cycles I might otherwise use for video analytics.
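    For my own reference, here is a minimal sketch of the kind of per-pixel math that YUV-to-RGB conversion involves, using the standard BT.601 studio-range equations in fixed point (the function names are mine, not VLIB's, and the real code would of course be vectorized):

```c
#include <stdint.h>

/* Clamp an intermediate value into the 0-255 range of an 8-bit component. */
static uint8_t clamp_u8(int v)
{
    return (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* Convert one YCbCr pixel (BT.601 studio range: Y 16-235, Cb/Cr 16-240)
   to full-range RGB888.  Coefficients are the usual 1.164/1.596/etc.
   values scaled by 256 for integer math. */
static void ycbcr_to_rgb888(uint8_t y, uint8_t cb, uint8_t cr,
                            uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = (int)y  - 16;
    int d = (int)cb - 128;
    int e = (int)cr - 128;

    *r = clamp_u8((298 * c           + 409 * e + 128) >> 8);
    *g = clamp_u8((298 * c - 100 * d - 208 * e + 128) >> 8);
    *b = clamp_u8((298 * c + 516 * d           + 128) >> 8);
}
```

    Even this simplest scalar form makes it obvious why I don't want to burn DSP cycles on it if a chip in the pipeline can do it for free.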

    Then when it comes to output...

    • I need to find a third-party chip for SD Composite Out, perhaps the Analog Devices ADV7170/ADV7171.  I don't yet know what color space it takes as input.
    • I don't plan on it, but I could use the same chip to do SD Component Out, or more likely the THS8200 for component out.  I'm not sure what inputs they take.
    • If I use the reference design's THS8200 for HD Component Out, I again need some kind of Y and C, using only two of the three 8-bit ports.
    • If I use the Sil9134 HDMI transmitter per TI's spraav4.pdf, I believe I send it YCbCr

    === Below this point is original ramblings before I came up with paragraphs above  ===

    Regarding THS8135, I don't know yet if I need color space conversion!  While I have a Ph.D. in Electrical Engineering and 35 years electronics experience, I am new to video.

    Note that my product is specifically intended to do video analytics in the DSP, and I've already gotten that code working in general on a commercially available camera with an embedded DM6446 (not DM6467, and to be upgraded to VLIB anyway).  I *could* do color space conversions in code, but that would probably be a waste of precious DSP cycles, which I'm sure I'll be running short of when trying to keep up 30 fps.

    So, I guess if I need color space conversion, I'll need it in hardware outside the DSP.

    Please tell me if my assumptions below are correct:

    1) If running same input and output video format, such as SD composite for both or HDMI for both, I should NOT need any color space conversion.

    2) If running different formats, such as SD composite in and HD Component out, I still think I do NOT need any color space conversion.

    However, I just looked up the meaning of "color space conversion" on Wikipedia, and now realize I know more about it than I thought; I just didn't know the term.  This would be, for example, RGB-to-YUV conversion.  This is INDEED an issue for me.  My preferred video analytics format is RGB, because there's a history of R&D behind it.  On the third-party camera where I've already gotten the algorithm running, the existing software platform provided YUV422 to my new XDAIS routine.  I had to convert it to RGB, and was expecting to have to do so again on my own board.  YOU MAKE ME THINK... I need to figure out a color space flow diagram from input to output, if such is really going to be under my control.  But isn't NTSC in YUV?

    Thanks,

    Helmut

  • Helmut,

    I think you are getting to grips with this now :)

    First, all analog video (note, not PC graphics, but video) will be in YUV (YCbCr) color space. This can then be digitized/decoded into either 444 or 422 digital formats (others exist but these are the most common).

    This means that the TVP51xx devices don't actually convert to YUV since this is the natural color space to start with (not technically a correct statement, but I won't go into detailed semantics!!). NTSC composite video is a modulated signal with a phase modulated color carrier super-imposed onto a luma level. It is not an RGB color space is the key point here. NTSC component video carries Y, Pb and Pr, which correspond very closely to Y, Cb and Cr (or YUV depending on your semantics).

    444 has 4 luma, 4 Cb and 4 Cr samples every 4 pixel samples.

    422 has 4 luma, 2 Cb and 2 Cr samples every 4 pixel samples, so there is decimation in the chroma domain. This is done to reduce the amount of actual data which needs to be transferred/processed, but does so at the expense of a slight image quality reduction. Human vision perception is most sensitive to luma, so perceptually the decimation has a less significant impact.

    The reference design using the TVP7002 outputs data in YUV 422 format which only needs 20 bits instead of 30 bits (16/24 if only needing 8 bits per component).

    Additionally, 422 can be sent over a 10 bit (or 8 bit) interface by sending luma and chroma alternately. This is what the standard ITU656 does.
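    As a sketch of that alternating order: each 4-byte group on a 656-style interface carries Cb, Y0, Cr, Y1, so two pixels come out sharing one Cb/Cr pair, which is exactly the 4:2:2 chroma decimation described above (struct and function names here are illustrative only, not from any TI header):

```c
#include <stdint.h>

/* One fully-sampled pixel after unpacking. */
typedef struct { uint8_t y, cb, cr; } ycbcr_pixel;

/* Unpack one 4-byte ITU-R BT.656-style group (Cb, Y0, Cr, Y1) into two
   pixels.  Both pixels share the same Cb/Cr pair -- that sharing is the
   4:2:2 decimation. */
static void unpack_cbycry(const uint8_t grp[4],
                          ycbcr_pixel *p0, ycbcr_pixel *p1)
{
    p0->cb = p1->cb = grp[0];
    p0->y  = grp[1];
    p0->cr = p1->cr = grp[2];
    p1->y  = grp[3];
}
```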

    Finally, you can have either discrete syncs or embedded syncs. Discrete syncs send the H, V, F and DE as, well, discrete signals. Embedded syncs use special data patterns to convey the same information without using additional signals.

    I can pretty much guarantee that all discrete video encoders, be they composite or component, will accept YUV 422. Some might accept YUV 444. Others may accept RGB.

    Bottom line is, as you have already mentioned, you need to draw out exactly what color spaces are needed/available at the various stages in your pipe. The source analog video will always be YUV (YCbCr/YPbPr) for video or RGB for PC graphics. Likewise the generated analog video will be the same. Some decoders/ADCs may be able to do the conversion for you, but then you need to make sure that the processor can actually accept 444 data. I could be wrong, but I don't think the 6467 can, so you are pretty much stuck bringing YUV422 into the processor.

    Now, there may be some options internal to the 6467 to help with conversion from YUV422 to RGB888 but I will need to defer to others more knowledgeable.

    On the output side the THS8200 can help by doing the RGB to YCbCr conversion for you.

    One option could be to integrate the color space conversion and interpolation into your analytics, since this will also potentially reduce the memory bandwidth requirements by eliminating CSC/interpolation memory reads and writes.
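    A rough sketch of what that folded-in output conversion looks like per pixel pair, assuming the usual BT.601 studio-range coefficients and simple chroma averaging across the pair (function name and details are illustrative, not from any particular library):

```c
#include <stdint.h>

/* Convert two adjacent RGB888 pixels into one 4:2:2 group: Y0, Y1, and
   a shared Cb/Cr averaged over the pair.  BT.601 studio range, fixed
   point scaled by 256.  Doing this inside the analytics loop avoids a
   separate memory-to-memory conversion pass. */
static void rgb_pair_to_yuv422(const uint8_t rgb0[3], const uint8_t rgb1[3],
                               uint8_t *y0, uint8_t *y1,
                               uint8_t *cb, uint8_t *cr)
{
    int cb0, cb1, cr0, cr1;

    *y0 = (uint8_t)(((66 * rgb0[0] + 129 * rgb0[1] + 25 * rgb0[2] + 128) >> 8) + 16);
    *y1 = (uint8_t)(((66 * rgb1[0] + 129 * rgb1[1] + 25 * rgb1[2] + 128) >> 8) + 16);

    cb0 = ((-38 * rgb0[0] - 74 * rgb0[1] + 112 * rgb0[2] + 128) >> 8) + 128;
    cb1 = ((-38 * rgb1[0] - 74 * rgb1[1] + 112 * rgb1[2] + 128) >> 8) + 128;
    cr0 = ((112 * rgb0[0] - 94 * rgb0[1] - 18 * rgb0[2] + 128) >> 8) + 128;
    cr1 = ((112 * rgb1[0] - 94 * rgb1[1] - 18 * rgb1[2] + 128) >> 8) + 128;

    *cb = (uint8_t)((cb0 + cb1) / 2);
    *cr = (uint8_t)((cr0 + cr1) / 2);
}
```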

    Anyhow, hope this helps a little.

    BR,

    Steve

  • Steve,

    Thanks.  I understand your post.  You end by mentioning memory bandwidth requirements for CSC/interpolation.  Who's accessing what memory?  Is the THS8200, for example, accessing the DSP's DDR somehow?  More importantly, is that going to cut into my video analytics horsepower?

    -Helmut

  • Helmut,

    Memory bandwidth usage is only relevant if the processor has to do any conversion.

    The THS8200 will not need any additional memory access specific to the color space conversion since it is done on the fly. Having said this, from what I understand the 6467 output format only supports YUV422, so it will be necessary to convert from RGB to YUV422 on the 6467, even though the THS8200 would be able to do the conversion for you if you could export RGB from the 6467.

    On the input side, the 6467 input video port also only supports 16-bit YUV 422, so it will be necessary to convert to RGB by doing a memory-to-memory conversion on the processor (unless you can integrate this conversion directly into your analytics to reduce memory bandwidth requirements).

    Unfortunately I think, unless you can modify your analytics code, it will be necessary to do 2 lots of CSC and interpolation/decimation since I don't believe there is any hardware in the 6467 to help.

    BR,

    Steve

  • Steve,

    All of your posts have been very helpful, with varying degrees of ease or difficulty in understanding.  Your last post is EXACTLY back to what I had always thought to be the case, and for which I've always been prepared.  So, having learned a lot about color space in the last half of this thread, I've come back to a point of confirmation of what I already believed.  This is a good thing.

    I wrote the legacy video analytics, which has always done YUV422-to-RGB888 in the front end for analysis.  Through a little magic, however, the output has been a modified version of the input, and thus still in YUV422 and not subject to conversion.  It was coded in a week, my having never done C64x+ before.  Since then, and for months, I've had an outline for how to recode using VLIB, including a planar structure that begins with VLIB's YUV422-to-RGB888 conversion, along with some Gaussian filtering and other stuff.  There's also some other library, not VLIB, that I have notes about and may use.  And then I may also just write some of my own wide-instruction stuff.  (I've done plenty of wide-instruction work in the past, but wrote the legacy code as straight C, erroneously believing the compiler would optimize it.  I soon learned it wouldn't, and found some C library functions that facilitate taking advantage of the DSP's internal parallelism.  These functions are the ones I'll add to VLIB and FORGOTNAMEOFLIB.)

    My next task now is to find the NTSC encoder chip to use, and figure out what wiring it needs.  In doing so, I want to find somewhere in the THS8200 spec why only the G and B ports are being used and not the R port.  I realize this relates directly to the fact it's 16 bit YUV422.  I just couldn't find that on my first pass through the THS8200 data sheet.  (If you feel like giving me a clue where to find it in the spec, or another keyword to search for, I'll thank you yet again.)

    Finally, for all in this community, I realize that now 50% or more of this thread is NOT specifically about "DM6467 SD Composite Output".  I know some folks will get their panties in a wad.  But I've always realized that nothing is ever really clean cut.  It's all inter-related.  The color space conversion stuff, the THS8200 and alternatives stuff, are all necessary to cast the context into which the SD Composite Output will exist.  Thus, it all informs decisions about the SD Composite Output.

    Thanks once more,

    -Helmut

     

  • Helmut,

    The data manager on the input of the THS8200 handles the format conversion/interpretation.

    The register data_dman_cntl(2:0) controls the input format. If the input format is 422 then only 16/20 bits are used. Section 4.2 Input Interface Formats shows which input connections are used for the various modes.

    Whilst looking at this, though, I did notice that RGB16-444 is a valid input format too, so if the output buffer of your analytics is in RGB16 mode then it may be possible to bypass the 6467 output CSC and simply dump this frame buffer directly to the THS8200, using the 8200's color space conversion to generate YPbPr output. This would eliminate a conversion stage. The downside is that the color fidelity would only be 16 bit instead of 24 bit.

    For the SD video encoder, as I mentioned previously, I can pretty much guarantee that YUV422 in 8 bit will be supported and most likely also 16 bit YUV422.

    BR,

    Steve

  • Steve,

    Thanks.  I find on THS8200 datasheet figure 4-3 a bit timing diagram that explains the non-use of the R port, as well as mention of ITU-R.BT656.

    Then, back on topic for this thread, I find in the ADV7171 NTSC video encoder datasheet, on the VERY FIRST LINE, the text "ITU-R BT601/656 YCrCb to NTSC/PAL video encoder", as well as, lower on the page, a 4:2:2 16-bit parallel input format.  While I had previously intended to ask your opinion, this assures me that the ADV7171 will be compatible with the DM6467T by virtue of commonality with the THS8200.  I'll be able to design my circuit based on this and other info, without having to go too deep into the function right now (time the boss won't afford).

    Meanwhile, I check the DM6467T datasheet, looking for "656", and discover it never says it's compatible with a 16-bit interface, only 8-bit.  I won't worry about this because of the commonality mentioned above.

    For the benefit of others, I guess I'll add more to this post later.  My search on the forum for ADV7171 has turned up some info, but mostly 2nd base info, not 1st base info.  1st base is knowing that there's actually a path to home base!

    -Helmut

    EDIT:  OOPS!  I forgot about the Spectrum Digital ADV7343 daughter card for the HD 1080P (aka DM6467T) EVM.  It's got NTSC (SD) composite output on it already.  I could have just gone straight to their design, the OTHER reference design.  Oh well, I learned a bunch along the way.  It uses the ADV7343, not the ADV7171.  There's probably source code out there to help me with the ADV7343, so I best go with it!