
OMAP3530 video input analytics

Hi,

I've been working with the OMAP3530 (using the OMAP EVM) for a few months, primarily getting to know the environment and what multimedia packages are available for it. I've built and worked with ALSA, V4L2, the DVSDK, GStreamer and, most recently, OpenCV (face detection).  The DVSDK demos and GStreamer (AFAIK) both take advantage of the OMAP3530's architecture (using both the ARM and the DSP, as well as other IO of course); however, OpenCV obviously doesn't use anything other than brute-force ARM/NEON.  The difference (again, AFAIK) is that OpenCV is not intended for codec applications.  This leads to my question...

It appears to me that there is good support for "codec-ish" applications on the OMAP3530 because there is a clean separation between the actual encode/decode (done on the DSP) and the network interface (done on the ARM).  I don't see as clean a separation for other "non-codec-ish" applications.  Take, for example, the list of capabilities in the DM365: noise filtering, video stabilization, face detection, auto white balance, auto focus, auto exposure and edge enhancement allow for image improvements and added intelligent video processing.

Is there a plan (or maybe it already exists and I just haven't found it) to support this kind of video library on the OMAP3530?  If yes, is it likely to take advantage of the DSP?

Thanks,
Ed

  • This is a followup to my previous question...

    Is there any fundamental reason why the TI-"VLIB" (Video Analytics and Vision Library) functionality is not available for the OMAP3530?  Anyone know if TI is working on this or am I missing something here?  Is there something in the DaVinci parts that is not on the OMAP3530 that makes these libraries more feasible? 

    Hoping to get an answer from the TI folks on this one.

    Thanks,

    Ed

  • VLIB was developed for the 64x+ DSP core; therefore, it should apply to the OMAP3530 as well.  The reason you probably do not see it advertised for the OMAP3530 is that it was developed prior to the OMAP3530 launch, and the collateral had not been updated to include a mention of the OMAP3530... The VLIB collateral does state support for any 64x+ DSP device, and the OMAP3530 does have a 64x+ DSP core on it.

  • Juan,

    Thanks for responding.  Do you know how I can find out the status of VLIB (with regard to the OMAP3530)?  I did go through the process of getting access to that library (took a few weeks through www.ti.com/vlibrequest) and verified that the OMAP3530 is not supported; however, as of yet, I have not been able to find out when and/or if TI intends to get this running on the OMAP3530.

  • I have not seen the VLIB deliverable myself (I will try to get access to it), so it is hard for me to imagine what is preventing OMAP3530 support.  Can you describe what is preventing VLIB from running on OMAP?


  • That's a good question! :-)  AFAIK, it's a binary library (no source) with an API, so if it isn't advertised as already being available for the OMAP3530 environment then I just assumed my hands are tied.  Here is a line taken from the TI VLIB page http://software-dl.ti.com/dsps/dsps_registered_sw/video_vlib/S1VLIB/vlib_download_page.htm ...

    • Optimized functions for the TMS320C64x+™ DaVinci™ DSP core and the TMS320C64x™ DSP core, available in object form

    The model that I have been visualizing up till now is the one I've been assuming for the codecs... I just use the "magic" of the DVSDK environment to load and run these libraries (this is how I think it works on the DaVinci boards it is advertised to be available for).  I could be totally wrong here, but I think that is the model.

    See what I mean?  I'd *love* to find out that these hooks are applicable to the OMAP3530, and I agree that I can't see why they wouldn't be (hence, the reason why I was surprised to find out they weren't).

    Thank you for the follow-up; this helps clear things up a little in my mind.  FYI, I have also sent an internal communication asking for clarification on any limitations of using VLIB on the OMAP3530, so we should have more confirmation soon.

    So this is my take on how VLIB should be used.  Similarly to IMGLIB or DSPLIB, VLIB is a library optimized for a specific DSP core (or CPU).  Based on the link you sent me, it appears VLIB has been developed for two DSP CPUs (the 64x+ and 64x cores); I interpret this to mean that it can run on any TI device that includes one of these DSP cores (the OMAP3530 has a 64x+ core), the same model as DSPLIB and IMGLIB.  In my humble opinion, we should probably not mention any specific part numbers associated with VLIB, such as the DM6437 mentioned in many places, since it is not specific to the DM6437; FYI, DSPLIB and IMGLIB do not mention specific parts, but rather, more accurately, the DSP cores they support.

    That said, how do you use VLIB (or DSPLIB and IMGLIB, for that matter) in the DVSDK?  Well, very similarly to any other DSP algorithm.  The Codec Engine framework treats all DSP algorithms as libraries; therefore, you will need to include VLIB as a library in your build environment, and then you would be able to use it from your algorithm; please note that one library (your algorithm) can call on another library (VLIB) to do work.  Once all DSP algorithms are built as libraries, a DSP image (codec server) is built to include all the libraries along with the Codec Engine framework.  The framework provides APIs for creating instances of your DSP algorithms (or libraries) and calling them to do work.

    This is just a brief overview of how I envision things working...
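    To make the "one library calls another" idea concrete, here is a minimal host-only sketch in plain C.  Everything here is a stand-in: `fake_vlib_threshold` is a made-up placeholder for a real VLIB routine (the actual VLIB API names and signatures differ), and `my_alg_process` stands for the process entry point of your own algorithm library, which the framework would invoke on the DSP.

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Stand-in for a VLIB routine linked in as a static library.
     * (Hypothetical name/signature -- the real VLIB API differs.) */
    static void fake_vlib_threshold(const uint8_t *in, uint8_t *out,
                                    size_t n, uint8_t t)
    {
        for (size_t i = 0; i < n; i++)
            out[i] = (in[i] >= t) ? 255 : 0;
    }

    /* Your DSP algorithm's process() entry point: one library (your
     * algorithm) calling another library ("VLIB") to do the work. */
    void my_alg_process(const uint8_t *frame, uint8_t *result, size_t n)
    {
        fake_vlib_threshold(frame, result, n, 128);
    }

    int main(void)
    {
        uint8_t in[4] = { 10, 200, 128, 127 };
        uint8_t out[4];
        my_alg_process(in, out, 4);
        assert(out[0] == 0 && out[1] == 255 && out[2] == 255 && out[3] == 0);
        return 0;
    }
    ```

    The point is only the layering: your algorithm library is what the codec server links and invokes, and VLIB sits underneath it as just another linked library.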

  • Ok, so if I understand you correctly, there's a good chance that it already does work on the OMAP3530 (since its DSP core is 64x+).  I promise I won't hold you to that!  :-)  It just may not be packaged for the OMAP3530 in a way that makes it easy to "just build and run" (as some of the DVSDK codec stuff is).

    I think this means that my next step is to dig deeper into the hookup between my application on Linux (ARM) and the underlying libraries (whether it be a codec, VLIB or IMGLIB).  I guess I hadn't considered the likely fact that VLIB is a library that is built similar to a codec; so, if that is the case, then I may be in business!  If I learn more I'll post; meanwhile, when you get a response from your internal communication, I'll be interested to hear it.

    Note that another assumption I am making at this point is that I will not have to do any DSP programming to use this library (once again, similar to the model for the codecs).  Obviously, I realize that if the library doesn't do what I want I may have to start some DSP effort, but to connect to the advertised functionality of VLIB I write C code on ARM-Linux.  Do you agree with that?

    Thanks much for your help,

  • edsut said:

    Note that another assumption I am making at this point is that I will not have to do any DSP programming to use this library (once again, similar to the model for the codecs).  Obviously, I realize that if the library doesn't do what I want I may have to start some DSP effort, but to connect to the advertised functionality of VLIB I write C code on ARM-Linux.  Do you agree with that?

    Unfortunately this is not the case.  VLIB is a C64x+ library and would have to be built into some sort of DSP project; you could not access this directly from the ARM, but rather would need to integrate the library into a DSP application (in Codec Engine terminology this would be your server).  For the most part the communication with the DSP could still be handled with Codec Engine as if you were working with a codec, but there is a lot of software adaptation to be done.  If you wanted to do this you would probably start with IUNIVERSAL, though there may be some limitations of the VLIB library I am not aware of that would prevent you from doing this properly.

    Essentially the code will run on a C64x+, but it is not packaged such that you can just include it in one of the Codec Engine examples and make a DSP server for it, because it simply isn't a codec.
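    For what it's worth, here is a toy, host-only sketch of the shape of that flow: an ARM-side wrapper marshals buffers and hands them to an algorithm body that, on a real system, would run on the DSP behind a UNIVERSAL_process()-style call.  The struct and both function names below are hypothetical simplifications, not the real Codec Engine/XDM types.

    ```c
    #include <assert.h>
    #include <string.h>

    /* Toy stand-in for an XDM-style buffer descriptor
     * (hypothetical; the real types are richer). */
    typedef struct { char *bufs[1]; int lens[1]; int n; } BufDesc;

    /* "Remote" algorithm body; on a real system this runs on the DSP
     * inside the codec server. Here it just uppercases ASCII. */
    static int alg_process(BufDesc *in, BufDesc *out)
    {
        for (int i = 0; i < in->lens[0]; i++) {
            char c = in->bufs[0][i];
            out->bufs[0][i] = (c >= 'a' && c <= 'z') ? (char)(c - 32) : c;
        }
        out->lens[0] = in->lens[0];
        return 0;
    }

    /* ARM-side wrapper: in a real system this would marshal the
     * buffers and message the DSP via Codec Engine / DSP Link. */
    int universal_process(BufDesc *in, BufDesc *out)
    {
        return alg_process(in, out);
    }

    int main(void)
    {
        char src[] = "vlib";
        char dst[8] = {0};
        BufDesc in  = { { src }, { 4 }, 1 };
        BufDesc out = { { dst }, { 8 }, 1 };
        assert(universal_process(&in, &out) == 0);
        assert(strcmp(dst, "VLIB") == 0);
        return 0;
    }
    ```

    The adaptation work Bernie describes is essentially filling in the two halves around this boundary: wrapping VLIB calls inside a DSP-side algorithm, and exposing it to the ARM application through the IUNIVERSAL interface.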

  • edsut said:

    Ok, so if I understand you correctly, there's a good chance that it already does work on the OMAP3530 (since its DSP core is 64x+). 

    That would be my educated guess at this time.

    edsut said:

    Note that another assumption I am making at this point is that I will not have to do any DSP programming to use this library (once again, similar to the model for the codecs).  Obviously, I realize that if the library doesn't do what I want I may have to start some DSP effort, but to connect to the advertised functionality of VLIB I write C code on ARM-Linux.  Do you agree with that?


    VLIB is a DSP library and provides a set of APIs which are normally used by DSP algorithms.  If you want to access VLIB functionality from the ARM side, you will likely have to implement wrappers that translate VISA API calls (made by Linux apps to access/control algorithms) to VLIB APIs.

    What is your end-goal?  What exactly are you looking to do with VLIB?

  • Bernie Thompson said:

    Essentially the code will run on a C64x+, but it is not packaged such that you can just include it in one of the Codec Engine examples, and make a DSP server for it, because it simply isnt a codec.

    Ok, that may explain why it isn't quite as simple to just "build-n-run" a demo with this.

    Juan Gonzales said:

    What is your end-goal?  What exactly are you looking to do with VLIB?

    Here are two shoot-from-the-hip examples of what my goal would be...

    Parallel processing of incoming video: I'd like to be able to run an encode and some video analytics in parallel (i.e. on the same incoming video stream).  So, the DSP would receive the incoming frames and perform encode and analysis at the same time (in different DSP/BIOS tasks, I suppose).  Then, each of those tasks would communicate with different threads/processes on Linux, passing up the compressed stream data and some other information regarding the content of the stream.

    Pipelined processing of incoming video: I'd like to be able to do some analytics on the stream (lens correction, crop, white balance, etc.) and pass that to the encoder, also on the DSP.  Then the DSP would pass the pre-processed and compressed stream data up to Linux for further digestion.

    Does that seem reasonable?
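    The pipelined case can be sketched in plain C as two toy stages.  Both stage bodies below are made-up placeholders (a real system would use VLIB routines for the analytics and a real encoder on the DSP); only the structure matters: analytics first, then compression, per frame.

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Stage 1: crude white-balance stand-in -- scale every pixel.
     * (Placeholder for real VLIB-style preprocessing.) */
    static void preprocess(uint8_t *frame, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            frame[i] = (uint8_t)((frame[i] * 3) / 4);
    }

    /* Stage 2: "encoder" stand-in -- here just a checksum of the
     * frame, in place of a real codec. */
    static uint32_t encode(const uint8_t *frame, size_t n)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += frame[i];
        return sum;
    }

    /* The per-frame pipeline: analytics feeds the encoder. */
    uint32_t process_frame(uint8_t *frame, size_t n)
    {
        preprocess(frame, n);     /* analytics stage (VLIB territory)   */
        return encode(frame, n);  /* compression stage (codec territory) */
    }

    int main(void)
    {
        uint8_t frame[4] = { 100, 100, 100, 100 };
        assert(process_frame(frame, 4) == 4 * 75);
        return 0;
    }
    ```

    The parallel case is the same two stages run concurrently on copies of the frame (e.g. in separate DSP/BIOS tasks), each reporting its result to a different Linux thread.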

  • It certainly is doable, but it will require development of the analytics portion of the software; VLIB may help, but you will still need to do much of the work yourself... of course we will be here to help along the way.  FYI, another good resource, just in case you are not aware of it yet, is our wiki site:

    http://tiexpressdsp.com/wiki/index.php?title=Main_Page

    We normally write wiki articles in areas where we tend to get recurring questions from customers, and to communicate information which we feel is important for our customers to be aware of; it is a great resource to help tie in the various topics (especially on the software side).

    I would recommend you get yourself familiar with our software architecture with the help of our DVSDK docs and wiki, and please feel free to ask any questions you may have along the way.

  • This implies the need for Code Composer (i.e. not free to use), correct?

  • Actually, the toolchain (compiler, linker, ...) and the OS (DSP/BIOS) needed to develop DSP-side code are provided in the free DVSDK; so if you are OK with using these tools via the Linux command line, you are good to go.  However, if you prefer a nice IDE front-end, then you may want to consider CCS.  You can get a free (60 or 90 day, I do not recall at this moment) trial of CCS before you buy as well.

  • Here is the link to get your free CCS trial.

    http://focus.ti.com/dsp/docs/dspsupportaut.tsp?sectionId=3&tabId=416&familyId=44&toolTypeId=30

    Although if you are just getting started, I would probably try to familiarize myself with the software architecture first, maybe even use the command line tools, before ordering your free trial so you get maximum use of the trial period.

  • Unfortunately the CCS v3.3 trial version cannot be updated, so you are unable to bring it to a version that has support for the OMAP3, though you can still install it to see what CCS looks like.  One alternative for the low-cost route is CCSv4, which is currently in beta and can be found at http://wiki.davincidsp.com/index.php/CCSv4.

    Note that with any CCS you are somewhat dependent on having an emulator as well, something like an XDS510USB or XDS560; unfortunately, as this is hardware, there is no free or evaluation version, unless you can borrow one from a local TI contact (not necessarily an option).

  • I have no problem with command line tools, so for starters I think I'll try them out.  Any good starting point for that route?

    I have the DVSDK installed on my machine, so I assume the DSP build tools are in the cg6x_6_0_16 directory?

  • You are correct, the cg6x_6_0_16 directory contains the DSP toolchain (compiler, linker, ...).

    With regard to a starting point: if you want to get the complete picture (which can be overwhelming), I would look at the Codec Engine examples.  There you will find examples of writing codecs (a.k.a. DSP algorithms or libraries), servers (DSP image executables), and applications (ARM-side applications which call on the DSP server via Codec Engine APIs).  Of course, these are just pretend DSP algorithms to help you visualize how you would go about writing one.  A simpler example is the IUniversal example found on our wiki (http://tiexpressdsp.com/wiki/index.php?title=Main_Page); I have not used this much myself, so I am not sure if you get codec, server and app, or only part of the complete story.


  • I was reading about IUniversal from Bernie's earlier reply.  Sounds like that may be the best route if it is mature enough for a "new guy" to work with. Anyone else have any opinions on which route makes the most sense?  Just reading the intro text for IUniversal makes me think this is the best choice for me.

  • I suppose I already implied it, but to my knowledge IUNIVERSAL is the 'easiest' way of doing this.  Of course that does not mean it is truly easy, but it is better than the alternatives, which I see as either hacking into another existing example codec (which does not make much sense with IUNIVERSAL around), or using DSP Link (the communication layer under Codec Engine) or DSP Bridge (an alternative to Link) directly, which can get very complicated very quickly and are not very well supported on their own.

    Unfortunately, at the moment the only truly easy way to work with VLIB is to use a DM6437 instead of the OMAP3, but of course in your case this is not an option.

    If you can get it to work on OMAP3 through any method I look forward to seeing your results, if we can help at all during that process just keep posting around here :).