
DM355 EVM Video Help

Hello,

I recently purchased the DM355 evaluation module and have started reading the manuals on interfacing with it. What I would like to do is read image data from a composite video camera at a specified rate (1 or 2 "images" per second, for example), do some processing on each "image", and then output it to view the results.

What I understand so far is that the VPFE controls video input and writes data to RAM, and then the VPBE reads the same data from the same RAM address and outputs it. Is this correct?

I could not find or understand how to limit the frame rate of the video in the first place, and then how to access the data in RAM to do some processing before placing it back.

Any help or pointers are greatly appreciated. I'm still new to the whole programming on a board thing.

Thanks for your help :)

Cheers

  • Hi S. Abb,

    there are two ways to control the VPBE and VPFE. The first, and the one with the most support, is using the Linux Support Package (LSP - sometimes included in the Platform Support Package, PSP) from TI.

    The LSP I used, though it is probably not the most up-to-date one, is http://software-dl.ti.com/dsps/dsps_registered_sw/sdo_sb/targetcontent/psp/mv_lsp_2_10/02_10_00_14/index_FDS.html

    The LSP consists of a Linux kernel including drivers for the DaVinci hardware. Most of these drivers are V4L drivers; some are character devices. You can find out more from the documents included in the LSP and the Digital Video Software Development Kit (DVSDK).

    http://software-dl.ti.com/dsps/dsps_public_sw/sdo_sb/targetcontent/dvsdk/DVSDK_3_10/latest/index_FDS.html

    The DVSDK also includes example programs for streaming video.

    The V4L framework can handle separate buffers for video input and output, which you can map to user space and then do your own frame processing on.
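
    For example, mapping the capture buffers and touching the pixels in user space looks roughly like this. This is only a sketch assuming the LSP driver behaves like a standard V4L2 capture device on /dev/video0; the exact ioctls and formats your driver version supports may differ, and error handling is left out.

        /* Sketch: map V4L2 capture buffers into user space and process frames.
         * Assumes a standard V4L2 capture device at /dev/video0. */
        #include <fcntl.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <linux/videodev2.h>

        int main(void)
        {
            int fd, i;
            void *buffers[3];
            struct v4l2_requestbuffers req;
            struct v4l2_buffer buf;
            enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

            fd = open("/dev/video0", O_RDWR);

            /* Ask the driver for a few mmap-able capture buffers. */
            memset(&req, 0, sizeof(req));
            req.count  = 3;
            req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            req.memory = V4L2_MEMORY_MMAP;
            ioctl(fd, VIDIOC_REQBUFS, &req);

            /* Map each buffer into user space and queue it. */
            for (i = 0; i < (int)req.count && i < 3; i++) {
                memset(&buf, 0, sizeof(buf));
                buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
                buf.memory = V4L2_MEMORY_MMAP;
                buf.index  = i;
                ioctl(fd, VIDIOC_QUERYBUF, &buf);
                buffers[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, buf.m.offset);
                ioctl(fd, VIDIOC_QBUF, &buf);
            }

            ioctl(fd, VIDIOC_STREAMON, &type);

            /* Dequeue a filled frame, process it in user space, give it back. */
            memset(&buf, 0, sizeof(buf));
            buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buf.memory = V4L2_MEMORY_MMAP;
            ioctl(fd, VIDIOC_DQBUF, &buf);
            /* ... your frame processing on buffers[buf.index] (buf.bytesused bytes) ... */
            ioctl(fd, VIDIOC_QBUF, &buf);

            return 0;
        }

    Depending on the LSP version, the display (VPBE) side is handled either as a V4L2 output device in a similar way or through the framebuffer driver (/dev/fb...).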

    If you want more general information I recommend TI's processor wiki (http://processors.wiki.ti.com/index.php/Main_Page). In the Linux subsection you can find all LSPs listed by processor ( http://processors.wiki.ti.com/index.php/Linux_Support_Package ).

    I am not sure whether a two-frames-per-second mode is supported by the LSP drivers. For that you will probably have to go a little deeper into the driver. A look at the file linux/drivers/media/video/davinci/vpfe_capture.c, which lists all available V4L functions for the VPFE, would be a good starting point.
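
    If the driver implements it, the standard V4L2 way to ask for a lower capture rate is the VIDIOC_S_PARM ioctl; whether vpfe_capture.c actually honours it is something you would have to check, otherwise you can simply skip frames in your application loop. A sketch:

        /* Sketch: ask the capture driver for roughly 2 frames per second.
         * That the VPFE driver implements VIDIOC_S_PARM is an assumption -
         * check vpfe_capture.c; if it does not, just drop frames in your
         * application loop instead. */
        #include <string.h>
        #include <sys/ioctl.h>
        #include <linux/videodev2.h>

        static int request_two_fps(int fd)
        {
            struct v4l2_streamparm parm;

            memset(&parm, 0, sizeof(parm));
            parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            parm.parm.capture.timeperframe.numerator   = 1;  /* 1/2 second per frame */
            parm.parm.capture.timeperframe.denominator = 2;
            return ioctl(fd, VIDIOC_S_PARM, &parm);  /* driver may adjust or ignore this */
        }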

    You said that you are new to the whole programming-a-board thing. Then you will probably need a toolchain, which consists of a compiler with C libraries, and a target filesystem including compiled libraries and binaries. A company called RidgeRun ( http://www.ridgerun.com/ ) offers such a toolchain for DaVinci DM3xx programmers for free, so you don't have to build it on your own. On the processor wiki you can also find a lot of information about compilers and toolchains (http://processors.wiki.ti.com/index.php/Linux_Toolchain).

    Good Luck!

    Sebastian

  • Hello Sebastian,

     

    You have been of great help so far, and there is a lot of reading material for me to get through. I sort of understand the point of LSPs and toolchains, but do not really understand the difference between the two.

    With my evaluation board I got a version of MontaVista Linux that I was able to boot through NFS and run a simple hello world program on. I downloaded the RidgeRun toolchain as you recommended but did not know how to boot it onto the board, even after reading the documentation. Note, the EVM came with instructions on how to install and load the LSP onto the board. I guess I do not know enough to go about doing this; I have an OK knowledge of Linux and basic commands, and similarly of C programming.

    I understood that what I want can be done using the V4L or V4L2 driver, but I believe (after searching) that this does not exist directly in the MontaVista package that came with the board. Can I simply copy, for example, the video drivers folder and place it into its respective place in MontaVista, or is my best bet to understand how to run RidgeRun?

    Sorry if my question seems basic, I'm just starting to grasp what's going on.

    Thanks for your help

  • Hello S.,

    My pleasure.

    With the MontaVista package you have a filesystem for your DaVinci and a toolchain for your PC. With that you don't need the package from RidgeRun.

    The MontaVista package should also include the kernel source; if not, you can get it from the DaVinci GIT kernel ( http://processors.wiki.ti.com/index.php/DaVinci_GIT_Linux_Kernel ).

    These three things - Rootfs, Toolchain, Kernel - are all you will need to develop your own applications.

    But before you configure and compile your own kernel, try to write and cross-compile your own "hello world" program, copy it to the directory of your network rootfs and see if it works. From there you can start using the V4L driver (/dev/video...) as documented in the LSP/PSP. If you don't know how to cross-compile, check out the included documentation on "setting up a build/development environment".
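
    As a first test, something as small as this is enough (the rootfs path in the comment is just an example, use wherever your NFS export lives):

        /* hello.c - minimal cross-compile test.
         * Build on the host PC with the MontaVista cross-compiler and copy the
         * binary into your NFS rootfs, for example:
         *   arm_v5t_le-gcc hello.c -o hello
         *   cp hello /home/user/nfs_rootfs/opt/     (path is just an example)
         * Then run ./hello from the board's console. */
        #include <stdio.h>

        int main(void)
        {
            printf("hello from the DM355 EVM\n");
            return 0;
        }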

    Your idea of simply copying the video drivers is the first step towards developing your own LSP/kernel drivers, and it's certainly not simple, except for the copying part. Using the existing Linux V4L driver from the LSP is certainly easier.

    But anyhow, it sounds like you are on the right track...

    I wish you good luck,

    Sebastian

     

  • Hey Sebastian,

    I was able to successfully cross-compile my own "hello world" program, and am now looking through the demo software that comes with the board. They have an encode/decode demo that I am currently reading through to understand the different parts of the code. I've bumped into a few roadblocks that I could not find the solution to.

    The demos are part of the DVSDK that comes on the board's flash. In the code I see several include headers that I cannot seem to find. For example,

    #include <rendezvous.h>
    #include <fifoutil.h>
    #include <linux/videodev2.h>

    Are these standard headers? Where can I find them to further understand what the code is doing?

    Secondly, are all these headers already available in the copy of MontaVista? I guess they should be, since that is the point of an LSP/toolchain, no?

    Finally, instead of building a kernel directly, I want to cross-compile my code and test it until it is working before loading it onto the NAND. Although a simple hello world program worked, I do not know how to get more complicated programs with multiple header and .c files to work, such as the encodedecode demo. Would using the arm_v5t_le-gcc compiler on each .c file suffice? I tried it on one file and got errors to do with the header files.

    Thanks again for your great support :)

     

     

  • Hi S.,

    I am not a pro in video encoding/decoding, but I know that video encoding and decoding is done by a dedicated video coprocessor inside the DM365/DM355, not by the ARM core. A framework is needed if the ARM core wants to access and control that coprocessor. There is also a book called "OMAP and DaVinci Software for Dummies" which includes a lot of information about this interaction framework.

    The <linux/videodev2.h> header is the standard V4L2 header and comes with the kernel sources; the other headers you cannot find are probably part of the demo sources or this framework.

    If you want to compile more than one .c file, you can tell the compiler to compile several input_x.c files and link them into one output file.

       arm_v5t_le-gcc -O2 -I/usr/src/linux-dm355evm-2.6.18/include input_1.c input_2.c input_3.c -o output_file

    If you want more information on the possibilities of gcc, you can have a look at the GNU Compiler Collection webpage. A very good introduction, an overview, and some interesting details about embedded Linux development are collected in the book "Embedded Linux System Design and Development". It also includes a chapter about cross-compiling applications using GCC and GNU Make.

    Good luck again,

    Sebastian

  • Hello again :)

    I posted in another topic here that after rebuilding the DVEVM software and running the demos on the board through NFS, they don't seem to work and give an error:

    Encodedecode Error: Failed to get the requested screen size: 720x480 at 16 bpp


    I cannot make sense of this error.

    Any ideas why it would not be working, assuming that everything was built correctly?

    Note: Same error for encode demo code.

    Thanks for your expertise :)