Porting an OpenCV application to the DaVinci DM6446

Hi TI Employees,

I have been able to set up OpenCV applications for image capture from a Prosilica GigE camera on Ubuntu 8.10. I had a hard time using their SDK because it depends on wxGTK and I couldn't find wxGTK 2.8.8, so I decided to give OpenCV a try and it worked! This required Python 2.6 for the Python wrappers (I had issues with Python 3.0, since it is not backward compatible). Modifying the makefile took a bit longer than expected but eventually worked out as I intended. Is there a way I can port the same application to the ARM? I am sure I need to use the ARM9 cross-compiler (arm_v5t_le-gcc) instead of the regular gcc or g++. However, I am not sure whether I also need to install Python on the target platform (the DaVinci). I also learned that arm-gcc is different from arm_v5t_le-gcc. Has anyone tried this? I'd appreciate any guidance!

                          Thanks!
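For what it's worth, a cross-compile attempt on an autotools-based OpenCV tree might look something like the sketch below. The toolchain path and the configure flags are assumptions based on typical MontaVista DVSDK installs, not a verified recipe; check `./configure --help` in your OpenCV version for the exact option names. Note that if you stick to the OpenCV C API on the target, you can skip the Python wrappers entirely and avoid installing Python on the DM6446.

```shell
# Hypothetical sketch: cross-compiling an autotools-based OpenCV source tree
# for the DM6446's ARM926 core with the MontaVista toolchain shipped in the
# DVSDK.  The install path below is an assumption; adjust to your setup.
export PATH=/opt/mv_pro/montavista/pro/devkit/arm/v5t_le/bin:$PATH

# --without-python is an assumed flag name: the point is to disable the
# Python/SWIG wrappers so the target does not need a Python install.
./configure --host=armv5tl-linux \
            CC=arm_v5t_le-gcc CXX=arm_v5t_le-g++ \
            --without-python

make
make install DESTDIR=$HOME/dm6446-rootfs   # stage into a target root filesystem
```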

  • There has been some talk about this, though I am not sure that anyone has OpenCV running on the DM6446, or will any time soon. You may want to look at http://www.hbrobotics.org/wiki/index.php5/Beagle_Board for some discussion, though I am not sure they have actually leveraged the DSP at all yet; I believe they are just using the Cortex-A8 on the OMAP3 for this.

  • That is precisely what I was worried about: is the DSP being employed or not? The DSP runs faster (594 MHz) while the ARM is slower (297 MHz). Also, from what I read, the DSP has no memory management unit (MMU), hence we use the CMEM kernel module. I was wondering whether this CMEM needs to be replaced. Also, where do you tell the DaVinci to use the ARM side or the DSP side? I also know that the DSP grabs the codec from a heap area (something called ALG; I forget what TI calls it) and performs computation from there. Is the codec portion where this is addressed?

  • From our DVSDK software architecture perspective:

    1) CMEM is used as a contiguous memory allocator (allocate and deallocate). Precisely because the DSP has no MMU, and the ARM MMU is not aware of how the DSP will use the memory, the memory allocated by CMEM needs to be physically contiguous. Both the DSP and the ARM have access to the CMEM memory space, so you need not pass the entire buffer, just a pointer to the buffer and its size. So, to be clear, CMEM is not an MMU for the DSP side; it applies to both the ARM and the DSP, and it is similar to malloc and free, except that the memory allocated is contiguous and hence managed a bit differently.

    2) In the DVSDK architecture, the ARM is the master, so to speak. Typically you write a Linux program and run it on the ARM (e.g. the encodedecode demo). That Linux program can call Engine_open() to take the DSP out of reset and load an executable DSP image (a .x64P file) onto it. Once both the ARM and the DSP have executables running, the ARM side will normally initiate communication with the DSP via one of the VISA APIs (e.g. VIDENC_create to create an instance of a DSP algorithm such as an MPEG-4 video encoder, or VIDENC_process to have that encoder compress a video frame).
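    To make point 1 concrete, here is a hedged sketch of allocating a contiguous buffer from the ARM side with the CMEM user-space API from TI's linuxutils. The function names are the real CMEM API, but the header path varies by DVSDK version, and the program requires the cmemk.ko kernel module to be loaded on the target, so it will not run on a desktop host. The frame size is an illustrative assumption.

```c
/* Sketch: ARM-side contiguous allocation via CMEM (TI linuxutils).
 * Assumes cmemk.ko is loaded on the target with at least one pool
 * large enough for the request. */
#include <stdio.h>
#include <cmem.h>   /* header location varies across DVSDK versions */

int main(void)
{
    CMEM_AllocParams params = CMEM_DEFAULTPARAMS;
    size_t size = 720 * 480 * 2;   /* e.g. one UYVY D1 frame (assumption) */

    if (CMEM_init() < 0) {
        fprintf(stderr, "CMEM_init failed (is cmemk.ko loaded?)\n");
        return 1;
    }

    void *buf = CMEM_alloc(size, &params);   /* physically contiguous */
    if (buf != NULL) {
        /* Only the address (translated to physical) and the size are
         * handed to the DSP; the frame data itself is never copied. */
        printf("virt %p -> phys 0x%lx, %u bytes\n",
               buf, CMEM_getPhys(buf), (unsigned)size);
        CMEM_free(buf, &params);
    }

    CMEM_exit();
    return 0;
}
```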

    Let us know if this helps clear things up a bit.
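    The ARM-side call sequence in point 2 can be sketched as below with the Codec Engine VISA video-encode API. The engine name "encode" and codec name "videnc_copy" are assumptions borrowed from the DVSDK demos; the names your own engine configuration (.cfg) defines are what actually matter. This needs the DVSDK headers, libraries, and a DSP server image, so treat it as a build-time sketch rather than a host-runnable program.

```c
/* Sketch: ARM-side Codec Engine flow — open engine, create a VISA codec
 * instance on the DSP, tear down.  Engine/codec names are assumptions. */
#include <ti/sdo/ce/CERuntime.h>
#include <ti/sdo/ce/Engine.h>
#include <ti/sdo/ce/video/videnc.h>

int run_encoder(void)
{
    Engine_Error ec;

    CERuntime_init();   /* initialize Codec Engine runtime, once per process */

    /* Takes the DSP out of reset and loads the .x64P server image */
    Engine_Handle ce = Engine_open("encode", NULL, &ec);
    if (ce == NULL)
        return -1;

    /* Instantiate a DSP-side algorithm; NULL selects the codec's
     * default creation parameters */
    VIDENC_Handle enc = VIDENC_create(ce, "videnc_copy", NULL);
    if (enc != NULL) {
        /* A real application would fill XDM_BufDesc in/out descriptors
         * with CMEM-allocated buffers and call VIDENC_process() once
         * per frame here. */
        VIDENC_delete(enc);
    }

    Engine_close(ce);
    return 0;
}
```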

  • Hi Juan

    Thanks to you and Bernie for clarifying stuff!

    Yeah, Part 1 makes immense sense! So it is a sort of arbitrator between the ARM and the DSP to govern the shared memory space! This was the impression I had earlier after reading the book 'OMAP and DaVinci for Dummies'; I guess I should have phrased my sentence better!

    Part 2 seems to take the programmer a layer or two below the application layer, and it makes sense. So I need to get familiar with the Video, Imaging, Speech, and Audio (VISA) APIs, and try to develop my own if I want OpenCV running on the DSP! I don't know how intuitive that would be; I haven't even started writing my own codecs. But this info is extremely helpful and will help me in the gradual process of development!