This thread has been locked.

Really Getting Started...

Other Parts Discussed in Thread: OMAP3530

Hello,


I am a research engineer in the robotics lab of a French university (http://cogrob.ensta.fr/), and I am expected to initiate a "vision on chip" activity with OMAP3530 devices. I mainly work with a MISTRAL EVM3530, but we also have a BeagleBoard.

I've already installed DVSDK 4 and followed the various Getting Started guides here and there, but I can't manage to get a working example of taking a video stream, applying some trivial processing to it (say, removing the red component), and outputting or displaying it. Simply building a codec server containing an empty codec generated by the wizard and trying to run it led me to an error in some obscure JavaScript file... so I must be following a wrong approach!

How would you start such a project? Using Codec Engine? Bypassing it to work only with DSP Link / DSP Bridge? From which (working) examples? -- and no, the Canny Edge Detector example is not (yet) working...
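For what it's worth, the processing step I have in mind really is trivial -- something like zeroing the red channel of every pixel. A plain-C sketch (assuming an interleaved RGB888 buffer, which I know is not what the capture driver actually delivers -- the pipeline around it is the hard part):

```c
#include <stddef.h>

/* Zero the red channel of an interleaved RGB888 buffer.
   Hypothetical helper for illustration only: real OMAP3530 capture
   buffers are typically packed YUV (VYUY), not RGB. */
void strip_red_rgb888(unsigned char *pix, size_t npixels)
{
    size_t i;
    for (i = 0; i < npixels; i++)
        pix[3 * i] = 0;   /* byte 0 of each 3-byte pixel is R in RGB888 */
}
```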

Thanks a lot for any tips or direction :-)


  • Anthony,

    I would use the edge detection demo in the DVSDK 4.00.00.22 (available from http://software-dl.ti.com/dsps/dsps_public_sw/sdo_sb/targetcontent/dvsdk/DVSDK_4_00/latest/index_FDS.html) as the basis for your work. The demo is in /dvsdk-demos_4_00_00_18/omap3530/edge_detection.

    This demo uses a C6Accel function (C6accel_IMG_sobel_3x3_8()) to apply a Sobel filter to the captured image before looping it back. It also uses C6Accel functions to deinterleave the OMAP's VYUY data format for processing.
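    For reference, what that DSP kernel computes is an ordinary 3x3 Sobel magnitude. A plain-C equivalent you can use to sanity-check results on the ARM side might look like the sketch below (it assumes that, like the IMGLIB kernel, the output is |Gx| + |Gy| saturated to 255 and the one-pixel border is left unprocessed -- check the C6Accel/IMGLIB documentation for the exact boundary behaviour):

```c
#include <stdlib.h>

/* Plain-C 3x3 Sobel reference for an 8-bit greyscale image.
   Output is |Gx| + |Gy| clamped to 255; the one-pixel border is
   left untouched (assumed to match the DSP kernel's behaviour). */
void sobel_3x3_8(const unsigned char *in, unsigned char *out,
                 int cols, int rows)
{
    int x, y;
    for (y = 1; y < rows - 1; y++) {
        for (x = 1; x < cols - 1; x++) {
            const unsigned char *p = in + y * cols + x;
            int gx = -p[-cols - 1] + p[-cols + 1]
                     - 2 * p[-1]   + 2 * p[1]
                     - p[cols - 1] + p[cols + 1];
            int gy = -p[-cols - 1] - 2 * p[-cols] - p[-cols + 1]
                     + p[cols - 1] + 2 * p[cols]  + p[cols + 1];
            int mag = abs(gx) + abs(gy);
            out[y * cols + x] = (unsigned char)(mag > 255 ? 255 : mag);
        }
    }
}
```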

    C6Accel uses the same iUniversal interface as the Canny example, so by looking at the source code and the wiki documentation you can get an idea of how to add your own code.

    Iain 

  • Thanks for your answer :-)

    I already tried to run the edge detection demo and have not managed yet :-( But I'll have to fight a bit more with the overcomplicated build system before conceding defeat...

    The point is that we wish to develop some custom image primitives running on the DSP, meaning I will have to assume all three roles of "codec developer", "server integrator", and "application developer"... Isn't that a bit too much?

    Actually, I wish I could identify some local (Paris or France) know-how regarding the OMAP platform and the (DV)SDK, which is undoubtedly a very powerful but (overly?) complex system...


  • Anthony,

    The guide to follow for SDK 4.0 is the Software Developer's Guide, which is in the /docs folder of the installed SDK. The key thing here is to use a 32-bit Ubuntu 10.04 image on the build machine and follow the instructions. We say you should use 10.04 because that is the release we've tested the setup scripts on; on any other image you have to do some configuration manually for TFTP, NFS, etc.

    How far did you get with the Sobel example?

    As to the different roles, the main one you are taking on is codec developer. The integrator role is primarily a one-shot action to add your new codec to the DSP server, and the application role is the code calling the iUniversal API, which will initially just be your test application based on an SDK loopback example (such as the edge detector).

    You are correct that the SDK is a very powerful build system: there are many components that can be modified, but equally it will run as-is with no modification to anything except your application source.

    The simpler approach is to just use the Cortex-A8 to write your image-processing algorithm and avoid the DSP entirely. This is probably a worthwhile first step to get to know the SDK and get video in and out of the device. Once you are familiar with the build system and the Linux code, you can start experimenting with the DSP via C6Accel or even a custom codec on the DSP.
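    For example, a first ARM-only step could be as simple as pulling the luma (Y) plane out of the captured packed-YUV frame and running your algorithm on that. A sketch, assuming a V0 Y0 U0 Y1 byte order for VYUY (two pixels per four bytes -- verify the exact ordering against the OMAP3530 ISP documentation before relying on it):

```c
#include <stddef.h>

/* Extract the luma plane from a packed VYUY buffer on the ARM side.
   Assumes bytes are laid out V0 Y0 U0 Y1, i.e. Y in the odd bytes;
   check the actual capture format before using this. */
void vyuy_extract_luma(const unsigned char *vyuy, unsigned char *y,
                       size_t npixels)
{
    size_t i;
    for (i = 0; i < npixels; i++)
        y[i] = vyuy[2 * i + 1];   /* Y sits in the odd byte of each pair */
}
```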

    Iain