Basic Encoder Questions dm8168

Hello,

I am beginning to investigate the dm8168 as a tool for video encoding.  However, I am not clear: is the H.264 encoder codec available for this product?

For the dm6467t, I understood the method as follows:

- create a codec server for DSP (that will use the single HDVICP2 for encoding)

- use the codec engine to load the server onto DSP and create encoder instances

- use codec calls via DSPlink to control encoder instances
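For reference, that DM6467T-style flow can be sketched with the Codec Engine API as below. The engine name ("encode") and codec name ("h264enc") are placeholders that would come from your codec server configuration, and buffer setup, parameter filling, and error handling are omitted; this won't compile outside a Codec Engine build environment.

```c
#include <xdc/std.h>
#include <ti/sdo/ce/CERuntime.h>
#include <ti/sdo/ce/Engine.h>
#include <ti/sdo/ce/video1/videnc1.h>

void encode_one_frame(XDM1_BufDesc *inBufs, XDM_BufDesc *outBufs)
{
    Engine_Error    ec;
    Engine_Handle   hEngine;
    VIDENC1_Handle  hEnc;
    VIDENC1_Params  params  = { sizeof(VIDENC1_Params)  /* ...codec settings... */ };
    VIDENC1_InArgs  inArgs  = { sizeof(VIDENC1_InArgs)  };
    VIDENC1_OutArgs outArgs = { sizeof(VIDENC1_OutArgs) };

    CERuntime_init();                             /* once per process              */
    hEngine = Engine_open("encode", NULL, &ec);   /* loads the server onto the DSP */
    hEnc    = VIDENC1_create(hEngine, "h264enc", &params);

    /* one encode call, dispatched to the DSP via DSPLink under the hood */
    VIDENC1_process(hEnc, inBufs, outBufs, &inArgs, &outArgs);

    VIDENC1_delete(hEnc);
    Engine_close(hEngine);
}
```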

For dm8168, I cannot seem to find out how this will work.  Is it the same?  Where is the link to the codecs?

Here are more questions about the general setup:

Can all 3 HDVICP2 individually do encoding?  And how will that work?  Does the codec engine or codec server handle assigning codec instances to different HDVICP2s? 

I read a post that said that the DSP doesn't do anything if the HDVICP2 are doing the encoding.  How does that work?  Isn't the DSP controlling the HDVICP2s?

Where are the links to explain this?

Thanks for your help.  I thought that I understood how this was working from the dm6467t but I wonder now if maybe I have always been confused....

Brandy


  • It's easy to miss some of the architecture in a first read of the documentation. Here are some things to keep in mind (I'll be talking about the 'DM8168, though the 'DM8148 is very similar)...

    1. There's a pair of minimally documented ARM M3s controlling the HDVPSS. They are called the "Media Controller". You'll probably never see source code for their binary. You talk to them through the HDVPSS driver, or probably a higher abstraction. The latest I heard, the HDVPSS driver source will be going away in the upcoming release.

    2. The various video processing blocks aren't tied together in a processing chain like you'd find in some TV input IC. They are tied together with the also mysterious "VPDMA", which shouldn't be confused with the well-documented EDMA. Non-tunnelled architecture, present in the current EZSDK beta release, moves video between the blocks through buffers in RAM. It's also possible to "tunnel" the output of one block to another block, reducing SDRAM activity. The circuitry is generally clocked at higher than the pixel rate, so that the blocks can do a bit of multi-tasking.

    3. The datasheet and TRM reflect the capabilities of silicon, not the current state of the software. You need to study the available code too, in order to see what the part can actually do.

    4. The HDVICP2s can do almost all the repetitive processing for an encode or decode. H.264 encode/decode is supported by the current EZSDK release, but formats are limited. This makes the DSP available for things like motion detection, advanced audio processing, video analytics... maybe lots of things you don't need.

    The architecture is powerful. You can buy the DVR SDK at a nice price, and simultaneously encode and store the encoded data for 16 simultaneous D1 streams. The downside to the Netra architecture is that there are many, many "real-time" tasks going on, with only one VPDMA, maybe two SDRAM banks, three encode/decode blocks, lots of outputs and inputs... so allocation and prioritization of resources ("scheduling") becomes unmanageable without a great deal of work and knowledge of the part's internals. Even if you could do it, TI wouldn't be able to help you if you got stuck.

    TI is addressing this by providing standard interfaces, including OMX, GStreamer, OpenGL, V4L2, etc. This reduces the internal complexity to standard magic, at some loss of capability. Work isn't complete for any of these standards, though major releases of the still-beta EZSDK occur about once a quarter. Don't think of Netra's current state as a finished package for evaluation. TI is sharing their work-in-progress to help you get an early start. The parts aren't yet fully released to production, and the code is still clearly marked "beta".
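    To make the "tunnelled" idea from point 2 concrete in OpenMAX IL terms: the standard core call below connects an output port of one component directly to an input port of another, so frames don't round-trip through SDRAM between blocks. The handles and port indices here are purely illustrative.

```c
#include <OMX_Core.h>

/* Tunnel: the decoder's output port 1 feeds the scaler's input port 0
 * directly, instead of each component reading/writing buffers in RAM. */
OMX_ERRORTYPE tunnel_decoder_to_scaler(OMX_HANDLETYPE hDecoder,
                                       OMX_HANDLETYPE hScaler)
{
    return OMX_SetupTunnel(hDecoder, 1, hScaler, 0);
}
```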

    Right now, with the 'DM8168, the only video/graphics input you can have is non-interlaced component analog video. Interlacing in general is barely supported. One could simply read the posts here and come up with a sizeable list of fairly basic things that you can't currently do. I have a list that is focused on my needs, and I recommend that you keep one for yourself. Initial silicon had personality, and the required work-arounds also limit performance.

    Next-stepping silicon exists, and new software releases are imminent and scheduled. It's up to you to study the specs, the wiki, this forum, and the EZSDK to determine whether your application has become feasible, and whether it makes sense to get started anyway.

    Note that the DVR SDK has its own software interface, utilizing a "multi-stream framework", tuned for an SD DVR application. This platform hints at capabilities to come. Z3 makes a board-level "embedded" platform for the 'DM8168. It has hardware and software which is currently more capable than the SDK, though their standard eval offering doesn't include schematics and some of the source code (their goal is to make money, so deals may be possible). There's also a video-conferencing development underway, but its fruits appear to be by invitation only.

    I believe that TI has created a really cool piece of silicon that has the potential to do the strange things I have to do, and that TI's software effort is a work in progress that's heading in the fundamentally right direction, at least for TI. It's frustrating to be unable to dig in and do a product yet, but TI is getting close, and I know that when things fall into place, my competitors will be working on the platform, and it will be good if I beat them.

    Where are the links? The E2E forum is a good source for status and limitations, and the wiki deals more with defining software use and capabilities. You can start at http://processors.wiki.ti.com/index.php/Category:DaVinci, scroll down, and click on DM816x and DM814x, etc. in the Da Vinci Directory to get started with your exploration. Some things are well hidden in plain sight, like this gem: http://processors.wiki.ti.com/index.php/RoadMap. It's good to keep your own list of favorite E2E threads and other resources.

    Disclaimer - I don't work for TI, and have little "inside" knowledge. I do spend a great deal of time here and on the Wiki trying to understand the software's current status. Hopefully, if I've inadvertently fibbed, I'll be corrected shortly.

    Best wishes,

    -Herb

  • Hello Herb,

    I appreciate your discussion points.  As an active community member myself, I am sure I have inadvertently fibbed :) However this is my first dive into the new dm8168 part.

    I have worked with the ARM/DSP combo on the dm6467, but I am beginning to think it is not as similar as I had hoped.  My application does not care how it gets its video input.  In fact it will be a data stream, probably over PCIe.  Then I need to be able to point my encoders to the data and then act on the encoded data.

    Have you worked with the older versions - the dm6467?  I am trying to decide how my application will work on the newer parts.  Is there still a codec server that needs to be set up through the codec engine?  Is there still DMAI as the abstraction layer to the codec? 

    I am trying to avoid sifting through the new EZSDK for weeks ...

    Thanks!

  • Brandy,

    The DM6467T did not have dedicated video accelerators as the DM8168 does. The DSP on the 67T was considered the accelerator, whereas we have 'black box' accelerators in the DM8168.

    This trivializes the work going on behind the scenes, but basically picture the ARM calling a function which configures the DMA to copy the video data from RAM into the accelerator, which processes the video and sends the finished result back out to RAM. This detaches the accelerator from the video input, so the accelerator does not care where the video came from. Each piece is independent.

    The EZSDK leverages OpenMax (OMX) which replaces the CE model for video codecs used previously; all codecs are validated and supported through this OMX model. I think this link should help answer your question as I understand it, but let me/us know if you still have any confusion.
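    For a concrete picture, the OMX-style invocation looks roughly like the sketch below. The component name string is illustrative, not the exact name from the EZSDK, and the callback bodies, port configuration, and state transitions are omitted.

```c
#include <OMX_Core.h>
#include <OMX_Component.h>

/* Callbacks fire when the component has consumed an input buffer
 * (EmptyBufferDone) or produced encoded output (FillBufferDone). */
static OMX_CALLBACKTYPE callbacks; /* = { EventHandler, EmptyBufferDone, FillBufferDone } */

void start_encoder(void)
{
    OMX_HANDLETYPE hEnc;

    OMX_Init();
    /* Component name is illustrative -- check the EZSDK for the real one. */
    OMX_GetHandle(&hEnc, (OMX_STRING)"OMX.TI.VIDEO.ENCODER", NULL, &callbacks);

    /* ...configure ports, allocate buffers, move to Executing... then per frame: */
    /* OMX_EmptyThisBuffer(hEnc, pRawFrameBuf);    raw frame in          */
    /* OMX_FillThisBuffer(hEnc, pBitstreamBuf);    encoded bitstream out */
}
```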

  • Tim,


    I think I get it.

    So just one more clarification.  The 67t says it has "Dual Programmable High-Definition Video Image Co-Processors".  But what you are saying is this is actually the DSP?  But in the 8168, there are 3 HDVICP2s.  And these are not in the DSP but separate cores inside the chip and are controlled through the OMX (which will behave like the codec engine did before).  Therefore, not only am I not using ARM cycles but I am also not using DSP cycles to encode the data. Is the HDVICP2 its own processor so that I have to use Syslink to talk to it, or is it more like a core module inside the ARM (like the EDMA or TSIF was)?

    The link you sent says if I really, really wanted to, I could still use Codec Engine and the DSP to do my encoding as I did on the 67t, but that would be considered wasteful considering the other resources at my disposal.  Understood.

    And will I still use the CMEM? And will there be a DMAI interface to the OMX and CMEM?  Or will I have to dive into these levels myself?  I also notice that I don't use DSPlink any more but something called Syslink.  Is this basically the same?   I tried to download the EZSDK but I found I have to have Ubuntu.  That seems odd.  I would like to find a way around that.

    Thanks so much,

    Brandy


  • Brandy,

    Brandy Jabkiewicz said:
    So just one more clarification.  The 67t says it has "Dual Programmable High-Definition Video Image Co-Processors".  But what you are saying is this is actually the DSP?
    I apologize, I misspoke on this point. The DM6467T does indeed have dual accelerators on-chip. I was getting this device confused with some of the others which utilize the DSP for video compression rather than an accelerator.

    Brandy Jabkiewicz said:
    And these are not in the DSP but separate cores inside the chip and are controlled through the OMX (which will behave like the codec engine did before).  Therefore, not only am I not using ARM cycles but I am also not using DSP cycles to encode the data.
    This is correct in that the HDVICP2 is a completely separate entity requiring 0 DSP cycles and minimal ARM cycles for configuration/servicing interrupts, etc. The DSP is free for use in analytics or other signal processing tasks.

    Brandy Jabkiewicz said:
    Is the HDVICP2 its own processor so that I have to use Syslink to talk to it, or is it more like a core module inside the ARM (like the EDMA or TSIF was)?
    It's an independent processing block designed specifically for compression. The ARM doesn't communicate with it the same way the ARM communicates with the DSP, but rather you invoke the accelerator via the OMX calls.

    Brandy Jabkiewicz said:
    And will I still use the CMEM? And will there be a DMAI interface to the OMX and CMEM?  Or will I have to dive into these levels myself?  I also notice that I don't use DSPlink any more but something called Syslink.  Is this basically the same?
    I don't have quite as good a grasp on this bit, so rather than stuff my foot in my mouth I'll hope someone who knows this model better than I can chime in. I can say that SysLink is quite different from DSPLink. Consider it a non-backwards-compatible upgrade to DSPLink. There's an overview topic which discusses this here.

    Brandy Jabkiewicz said:
    I tried to download the EZSDK but I found I have to have Ubuntu.  That seems odd.  i would like to find away around that.
    There is a way around it, but it's unsupported because we have not tested the SDK outside of Ubuntu 10.04 LTS. See the note in the Installer FAQ and proceed at your own risk :)

  • Dear Tim,

    Is it possible to utilize the multi-stream framework in the latest EZSDK?

    We have a custom DM8168 processor card with 2G SDRAM and are able to do 8 decodes. We also got the M3 overlay for increasing the heap size and load memory.

    Please let me know if there is any possibility.


    Thanks,
    Vidya