
Video processing algorithm in DSP

Other Parts Discussed in Thread: OMAP3530, LINUXDVSDK-DV, CCSTUDIO

(EVM6446)

I want to write a video processing algorithm for the DSP, but I don't know how to begin. I have two questions:

1. Which system should I select: Linux or Windows?  I think CCS + Windows is better; is that right?  If I use Linux, could you tell me which development tools are suitable for the DSP side?

 

2. About the video processing algorithm: for example, if I want to write an algorithm to detect the edges of an object, I think I should first fetch the video data from a block of DDR2, because that block has just been written by the VPFE, and I believe the data should be a one-dimensional array.  Second, the DSP should process the array with the algorithm.  Third, the processed array should be written back to the same block of DDR2, because that block will then be sent to the VPBE.

To describe the steps in detail:

a) ARM side: the VPFE writes video data into DDR2

b) DSP side: fetch the data from DDR2

c) DSP side: algorithm section (for example, edge detection)

d) DSP side: write the processed video data back to DDR2

e) ARM side: the VPBE reads the data from DDR2 and displays it on the LCD

I don't know whether my thinking is right.  Do you have any suggestions?
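For illustration, the edge-detection step in c) might look like this minimal Sobel-style sketch over an 8-bit luma plane stored as a one-dimensional array (the function and buffer names are hypothetical, not from any TI API):

```c
#include <stdlib.h>

/* Sobel-style edge detection on an 8-bit luma plane stored as a
 * one-dimensional array, as described in steps b)-d).  Border pixels
 * are left untouched for simplicity. */
void edge_detect(const unsigned char *in, unsigned char *out,
                 int width, int height)
{
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            const unsigned char *p = in + y * width + x;
            /* Horizontal and vertical gradients from the 3x3 neighborhood. */
            int gx = -p[-width-1] - 2*p[-1]     - p[width-1]
                     +p[-width+1] + 2*p[+1]     + p[width+1];
            int gy = -p[-width-1] - 2*p[-width] - p[-width+1]
                     +p[ width-1] + 2*p[ width] + p[ width+1];
            int mag = abs(gx) + abs(gy);        /* cheap |G| approximation */
            out[y * width + x] = mag > 255 ? 255 : (unsigned char)mag;
        }
    }
}
```

Steps b) and d) would then reduce to handing this function the input and output DDR2 buffer pointers.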

Thank you.

  • 1) For developing DSP algorithms, CCS running under Windows is your best choice.  At the current time, CCS does not run under Linux; this is why we only include the toolchain (compiler, linker, DSP/BIOS...) as part of the DVSDK installation, rather than a full-blown IDE such as CCS (which includes the toolchain under Windows).

    2) The scenario you described is exactly what the DVSDK software architecture does; therefore, the less of this software (hopefully just DSP algorithm) you have to write yourself, the better.

    a) This is taken care of by the V4L2 (VPFE) Linux video driver running on ARM.

    b) In this step, the DSP does not actually fetch the buffer; the ARM-side application which opened the V4L2 driver in the step above passes the DSP a buffer pointer via the codec engine API (codec engine is a framework developed by TI).  On a related note, buffers shared between the ARM and DSP reside in CMEM (a contiguous memory manager developed by TI); this is because, unlike the ARM, the DSP does not have a virtual memory manager and hence assumes memory buffers are contiguous (easier on the DSP video algorithm developer).  Also note that only buffer pointers are passed back and forth between the ARM and DSP, since both have access to the CMEM space.  CMEM does reside in DDR2 space.
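    A plain-C sketch of the pointer-passing idea (purely conceptual; in the real system the hand-off goes through a blocking codec engine call and DSPLINK rather than a direct function call, and the buffer comes from CMEM — all names below are illustrative):

```c
#include <stddef.h>

/* Conceptual sketch of the ARM/DSP hand-off described above: both sides
 * see the same physically contiguous buffer (CMEM in the real system),
 * so only a pointer and a length are exchanged, never the pixel data. */
typedef struct {
    unsigned char *data;  /* points into the shared contiguous region */
    size_t         len;
} FrameDesc;

/* Stand-in for the DSP-side algorithm: processes the buffer in place. */
static void dsp_process(FrameDesc *frame)
{
    for (size_t i = 0; i < frame->len; i++)
        frame->data[i] = (unsigned char)(255 - frame->data[i]); /* e.g. invert */
}

/* Stand-in for the ARM-side application: it "passes" only the descriptor. */
void arm_submit_frame(unsigned char *shared_buf, size_t len)
{
    FrameDesc frame = { shared_buf, len };
    dsp_process(&frame);  /* in reality: a blocking codec engine API call */
}
```

    The point is that dsp_process() never receives a copy of the pixels, only the descriptor; the same holds across the ARM/DSP boundary because both map the same physical CMEM region.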

    c) The DSP algorithm processes the buffer.  Please note that in order to fit into our codec engine framework, the DSP algorithm must be XDM compliant.  XDM is based on XDAIS, which requires that DSP algorithms ask for resources rather than take them directly; this enables DSP algorithms from different vendors to play nicely together.  There is documentation included in the DVSDK that details these requirements.

    d) The DSP algorithm returns the processed buffer pointer to the ARM side.  At present, the ARM/Linux side makes a blocking codec engine API call to the DSP, which eventually returns with the processed buffer pointer; the actual buffer is in CMEM space, which both the ARM and DSP can read and write.

    e) ARM side application can do anything it wants with the processed buffer (display it, store it in HDD, stream it to a network...).  If you want to display, then Linux application would call on the Linux frame buffer driver (VPBE).

    I hope this helps point you in the right direction; I would recommend you go over some of the codec engine documentation first (included in the corresponding folder under the DVSDK), and as you get closer to developing your algorithm, maybe learn a bit more about XDAIS (documentation also included in the DVSDK).  Understanding these is key.

     

     

  • After reading Codec Engine documents, I have these questions:

    1. Can I put an algorithm that is not about compression or decompression into Codec Engine (I'm sure it would be XDM-compliant)?  For example, an algorithm for edge detection, or dilation and erosion processing.

    2. An example:  Codec Engine: CE_InstallationPath/packages/ti/sdo/ce/video/videnc.h       XDAIS interface: XDAIS_path/packages/ti/xdais/dm/ividenc.h

                             Algorithm:  CE_InstallationPath/examples/codecs/videnc_copy/videnc_copy.c

     

        If I write an ARM application using the VIDENC_process function declared in videnc.h, how does XDM know that what I want to call is VIDENCCOPY_TI_process in videnc_copy.c?  I cannot find any relation between videnc_copy.c and videnc.h except that both of them include the same interface, ividenc.h.

     3. I don't know whether it is necessary to use threads (pthread.h) in the ARM application code when the code needs to use an XDM algorithm.  In the "encodedecode" demo, I find that the code uses threads to pass data between the "display thread" and the "camera thread".  In the "video thread", the camera data is encoded by the H.264 algorithm.

  • 1) Yes, you can create your own algorithm that does not involve "compression" or "decompression"; please see 'scale' example.

    2) Each DSP algorithm is required to implement a minimum set of functions in common (see IALGFXNS definition in videnc_copy.c); codec engine expands on the set by requiring two additional functions (process and control).  These functions are mapped (in a required order) to functions (which have more flexible naming requirements such as VIDENCCOPY_TI_process) within the DSP algorithm.  Since each DSP algorithm needs to define these functions and register them with the framework, the one called by the framework depends on which algorithm is instantiated in the user application calling VIDENC_Process.
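    The dispatch described above can be sketched in plain C as a table of function pointers.  This only illustrates the mechanism; the real IVIDENC_Fxns table has more entries and different signatures, and the names below are simplified:

```c
/* Simplified sketch of the function-table mechanism: the framework calls
 * through a table of pointers, so the table an algorithm registers
 * determines which concrete function actually runs. */
typedef struct {
    int (*process)(const unsigned char *in, unsigned char *out, int n);
    int (*control)(int cmd);
} IVidencFxns;

/* Concrete implementation provided by one vendor's codec package. */
static int VIDENCCOPY_TI_process(const unsigned char *in,
                                 unsigned char *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = in[i];             /* the "copy codec" just copies */
    return 0;
}

static int VIDENCCOPY_TI_control(int cmd) { (void)cmd; return 0; }

/* The table this codec registers with the framework. */
static const IVidencFxns VIDENCCOPY_TI_IVIDENC = {
    VIDENCCOPY_TI_process,
    VIDENCCOPY_TI_control,
};

/* What the generic VIDENC_process entry point effectively does: it only
 * knows which table the instantiated algorithm registered. */
int VIDENC_process(const IVidencFxns *fxns,
                   const unsigned char *in, unsigned char *out, int n)
{
    return fxns->process(in, out, n);
}
```

    This is why videnc.h and videnc_copy.c need no direct reference to one another: the connection is made at run time by whichever function table the instantiated algorithm registered.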

    3) It is not necessary to create pthread to call codec engine; FYI, each Linux process (user program) inherently has at least one thread.  We just chose to create multiple threads in our demos.

  • Hi Juan, 

    I feel that it is better to develop DSP algorithms in Linux than in Windows CCS.  I tried revising some code in the "videnc_copy" example in codecs, and then compiled it with the XDC tools.  After that, I applied the new algorithm in the encodedecode demo.  Everything seems to work.

    Could you tell me:

    1. Why did you say it's better to develop DSP algorithms in Windows CCS?  I feel Linux supports it very well too.

    2. I did not find any math library to support my algorithm on the DSP, for example FFT or DFT.  Could you tell me how to find such a math library, or do I have to code the FFT (etc.) myself?  (in the Linux system)

     

    Could you give me a suggestion:

    To be a DaVinci engineer, what is most important: becoming familiar with coding on DaVinci, or becoming familiar with circuit board design, for example PCB design?

    Thanks. 

  • Hi Lorry,

    1) The reason I suggested Windows CCS is that many customers prefer to use an IDE (e.g. CCS) during development/debugging of their DSP algorithm.  That said, we do offer everything but the GUI IDE in Linux (DSP/BIOS, compiler, linker...), so if you are OK working via the command line in Linux, then this is great.  In addition, during the integration phase you will need to move over to Linux anyway, so developing in Linux gives you a head start when you get to that step; you just give up a nice IDE (CCS) for debugging your DSP algorithm.

    2) I believe these functions are found under DSPLIB library: http://focus.ti.com/docs/toolsw/folders/print/sprc265.html

    Additionally, you can see a list of all available DSP libraries at the following link; please note that DM644X is based on 64x+ DSP core; hence not all libraries listed apply: http://focus.ti.com/dsp/docs/dspfindtoolswresults.tsp?sectionId=3&tabId=1620&familyId=44&toolTypeId=44&go=Go

    With regards to your question about what makes a good DaVinci engineer: I think both hardware and software design are important; however, I am a bit more biased toward the software side, since most questions we get are software-based.

     

  • Hi Juan,

    If I want to create a codec (a lib in CCS) using CCS, do I need to write a cmd file?  I think I do not need to, because the codec is called by an ARM application, so the embedded Linux system has already allocated a memory address and length for the application; a codec program does not need to consider it.  Am I right?

     

     

    Thanks.

  • Lorry,

    I am not too familiar with the DSP side of the equation, but I believe you will need a cmd file.  Our codec libraries are created and tested in CCS all by themselves (no ARM code needed), which leads me to think that they need a cmd file.  As a matter of fact, the ARM does not know anything about the DSP memory space.  The system designer's role is to divide the memory into ARM, DSP, and CMEM (shared) memory spaces.  The CE Engine_open call simply programs hardware registers to take the DSP out of reset and to tell the hardware where in memory to load the DSP image (defined by the system designer and specified in the DSP server configuration files).  I think if you start looking through some of the codec engine codecs, servers, and apps files, you will see files resembling linker cmd files (still needed).

    Anyway, I know this is not a clearly detailed answer, but hopefully I have given you enough info to get a little further.

  • A Codec Engine server (.x64P) is actually a full-blown executable, not just a library.  As Juan says, Linux does not allocate space for the DSP application to run; that is set up by the user in advance.  The DSPLINK-based application running in Linux, sitting under the Codec Engine framework APIs, loads the DSP executable into the appropriate area of DDR and sets the DSP off running it.  When you build a Codec Engine server, you do need a linker command file because you are making an executable; in a typical build environment you use DSP/BIOS to handle the generation of the linker command file for you, so it exists but you do not write it yourself.  You can adjust how the memory map is set up in your BIOS configuration file (.tcf) to determine how the linker command file is generated; since you are doing this in CCS, you can even open the .tcf file in your project in GUI form to modify these sorts of things.

    What you are proposing would be nice, but unfortunately the Codec Engine ARM-side framework is not intelligent enough to dynamically relocate and execute a DSP library.  Since it would have to go through part of the build process for the relocation (which has to target memory outside of Linux anyway, as the DSP has no MMU), this would probably slow things down too much to be worth the integration ease it would add.

  • If I just want to build a library (.a64P) using CCS, do I need a cmd file in CCS?

    I think another method to build an .x64P file is to build a library (.a64P) in CCS, then copy it to the Linux system and rename it (*.lib -> *.a64P).  Finally, I use the XDC tools to build the .x64P file in Linux.  I think this way of building an .x64P file is easier than building it directly in CCS.

     

  • Hi Lorry,

    This is not my area of expertise, but I do not believe you need to define a cmd file if you are building a codec library; as a matter of fact, I imagine it would be against XDAIS rules to do so, since a system integrator should be able to take your codec library and build a DSP server to run in a DSP memory space of his choosing (the integrator defines the memory map).  However, I imagine that at the library level you still need to define the appropriate memory segment names (I am not sure whether standardized memory segment names are pre-defined as part of XDAIS) so that they match those of the codec engine framework.  Anyway, I know I am not being as specific as you would probably like, but as I mentioned, this is not my area of expertise.  I just wanted to give you my opinion in hopes that you can make further progress; if you continue to have trouble, please drop us another note and I will see if I can get the right person to chime in next week.

  • Thanks Juan, I'll try what I've written in my post, if I have any questions, I'll post them here. :)

  • 1.

    I've tried compiling an archive file in CCS (for example, videnc_copy.lib) in Windows, and then renaming it (videnc_copy.a64P) in the lib folder in Linux.  I then compiled a codec server and copied it to "/home/usraccount/workdir/filesys/opt/dvevm".  I wanted to check whether my codec would work, but it failed.

    Sequence:

    a) I compiled videnc_copy.lib in CCS, and I'm sure that all the interfaces have been implemented.

    b) I copied videnc_copy.lib to /home/usracc/dvevm_x_xx/codec_engine_x_xx/examples/codecs/videnc_copy/lib, and renamed this lib file to videnc_copy.a64P.

    c) In the .../examples/servers/video_copy/ folder, I recompiled the codec server file and copied it to .../workdir/filesys/opt/dvevm.

    d) I wanted to test whether videnc_copy works.

    After checking, I find that the videnc works, but it is not the code that I compiled in CCS; it behaves like the original videnc_copy.a64P from the videnc_copy example in Linux.  It seems that videnc_copy.lib does not affect the codec server (video_copy.x64P).

    So I guess the new videnc_copy.a64P (renamed from videnc_copy.lib) is not picked up when generating video_copy.x64P.

     

    2. I found a tool named RTSC_Codec_And_Server_Package_Wizards; it can help us build a codec server in Windows without using CCS.  Can a codec generated by RTSC be debugged in CCS?

     

    3. If I run a program on the target, an error shows "EVM #  /dev/dsplink: No such file or directory" while debugging at "Engine_open".  Eventually I found that I had forgotten to run "./loadmodules.sh" at the beginning; I found the answer here in the thread titled "problem: Getting error when running sample applications on Linux".  I think there should be another "possible cause", although it is not related to my error.

    When compiling a codec server, I know it needs lots of archive files (osal_dsplink_bios.a64P, bioslog.a64P, etc.), but if "dsplink.lib" or "osal_dsplink_bios.a64P" is not pulled in by XDC when generating the codec server, I wonder whether this error can also occur when a program runs on the target board.

    So I think my description could be "possible cause 3", right?

     

     

  • Hi Lorry,

    Thank you for the detailed explanation.  It is my understanding that the codec library is pre-compiled ahead of time to produce .a64P library file; this library file is linked into the DSP Server when you build DSP Server.  That said,

    1) When you build your codec server thru RTSC wizard tool (or possibly other ways), you should be able to debug it using CCS provided it is a DSP server (most common); FYI, codec engine architecture allows us to build codec servers that run on ARM/Linux as well, but CCS would not be much help there.

    2) DSPLINK resides partially on the ARM and partially on the DSP.  Therefore, normally the DSP portion of DSPLINK is linked into the final DSP codec server executable (no need to have .lib or .a64P libraries on the target), and the ARM portion of DSPLINK is loaded via loadmodules.sh.

    Let me know if this helps 

  • Thank Juan,

    For 1)

    Actually, I don't think I have grasped the "spirit" of how to write and debug a codec server in CCS yet, even though I know the structure of the XDM interface and the other codec rules.  So I chose RTSC to do the build for me.  I would really like a document that shows me how to debug a codec in CCS, like one of your documents on how to debug an ARM program using DDD.  That document helped me a lot because it describes the whole sequence step by step, but I still have not found any document about making or debugging a codec in CCS.

    If this type of document does not exist, I think I should read the makefiles in the codec engine and codec server carefully to learn the entire process of building a codec engine and server.  I think that will then help me build the codec in CCS on Windows.

    For 2)

    That means "EVM #  /dev/dsplink: No such file or directory" tells us that the ARM program needs the dsplink driver or cannot find it, is that right?  But this error message occurs when I am stopped at the "Engine_open" line.  Is the error not related to the engine server on the DSP?

  • 1) I will find out if we have a debug document for DSP using CCS, but I think the CCS tutorials may be helpful.  More to come on this...

    2) Codec engine also resides partly on the ARM and partly on the DSP, hence you are correct that /dev/dsplink is a problem on the ARM as a result of calling codec engine; this probably indicates that loadmodules.sh did not load DSPLINK correctly.  Did you see any errors as a result of running loadmodules.sh?

  • Hi Lorry,

    After talking to a colleague of mine about 1) above, apparently there is a wiki article which describes this process:

    http://wiki.davincidsp.com/index.php?title=Debugging_the_DSP_side_of_a_CE_application_on_DaVinci_using_CCS

    The challenge is that an application (normally a Linux application) must be exercising the DSP server, and this makes debugging the DSP server by itself difficult.  Of course, the fact that the application normally resides on the ARM also makes it difficult.  But with some of the ingenuity described in the wiki article above, you can do this.

     

  • There may be a better way to do it (hopefully Juan is able to locate something), but I have always been under the impression that you would debug your algorithm in a self-constructed test bed application on the DSP (or even on another device, like the DM6437, that is more conducive to DSP development), as opposed to debugging an actual codec server.  Once the algorithm functions properly in the test bed, you would take it and place it into the codec engine framework for a functional system test.  The difficulty of this process is why we usually suggest customers purchase existing algorithms, as opposed to developing on the DSP directly, in the case of heterogeneous processors like the DM6446.

    EDIT: It seems that as I was typing this, Juan found the way to do it.  I still think it would be easier to do your initial development in a test bed rather than going through the complex steps to debug a server, or better yet, to buy the codec.

  • Hi all...

    Thank you very much for such a helpful discussion.  I am actually very new to the DVSDK (using the OMAP3530 EVM from Mistral Solutions).  I tried to rebuild the codec engine examples for codec_engine_2_20_01 as described in the build_instructions.html document.  Everything goes OK when I rebuild everything in the codecs and extensions directories.  However, when it comes to the servers directory, after running 'make' I get the following error:

    platform   = ti.platforms.evm3530
    ti.sdo.ce.examples.codecs.videnc_copy.close() ...
    ti.sdo.ce.ipc.bios.close() - setting powerSaveMemoryBlockName to DDR2
    js: "/home/ayildirim/dvsdk_3_00_00_29/xdctools_3_10_03/packages/xdc/cfg/Main.xs",
    line 201: xdc.services.global.XDCException: xdc.PACKAGE_NOT_FOUND: can't locate the package 'ti.bios.power' along the path:
    '/home/ayildirim/Desktop/examples;/home/ayildirim/dvsdk_3_00_00_29/codec_engine_2_20_01/packages;
    /home/ayildirim/dvsdk_3_00_00_29/xdais_6_20/packages;/home/ayildirim/dvsdk_3_00_00_29/dsplink_1.51/packages;
    /home/ayildirim/dvsdk_3_00_00_29/linuxutils_2_20/packages;
    /home/ayildirim/dvsdk_3_00_00_29/framework_components_2_20_01/packages;
    /home/ayildirim/dvsdk_3_00_00_29/biosutils_1_01_00/packages;/home/ayildirim/dvsdk_3_00_00_29/bios_5_32_04/packages;
    /home/ayildirim/dvsdk_3_00_00_29/xdctools_3_10_03/packages;/home/ayildirim/dvsdk_3_00_00_29/xdctools_3_10_03/packages;..;'.
    Ensure that the package path is set correctly.
    gmake: *** [package/cfg/video_copy_x64P.c] Error 1
    gmake: *** [package/cfg/video_copy_x64P.c] Deleting file `package/cfg/video_copy_x64Pcfg.cmd'
    gmake: *** [package/cfg/video_copy_x64P.c] Deleting file `package/cfg/video_copy_x64Pcfg_c.c'
    gmake: *** [package/cfg/video_copy_x64P.c] Deleting file `package/cfg/video_copy_x64Pcfg.s62'
    js: "/home/ayildirim/dvsdk_3_00_00_29/xdctools_3_10_03/packages/xdc/tools/Cmdr.xs", line 40: Error:
    xdc.tools.configuro: configuration failed due to earlier errors (status = 2); 'linker.cmd' deleted.
    gmake[2]: *** [video_copy] Error 1
    gmake[2]: Leaving directory `/home/ayildirim/Desktop/examples/ti/sdo/ce/examples/servers/video_copy/evm3530'
    gmake[1]: *** [all] Error 2
    gmake[1]: Leaving directory `/home/ayildirim/Desktop/examples/ti/sdo/ce/examples/servers/video_copy'
    gmake: *** [all] Error 2

    As a result, I could not manage to rebuild these examples.  Could you please help with this issue?

    Regards...

  • I think an incomplete XDC path will lead to this kind of error; the XDC tool cannot find some important components needed to build the codec server, so I suggest checking whether your XDC path includes all of the components.

  • The first error I see in your log is the "ti.bios.power" package not being found.  This could be because it is really not present, or because it was built in another location and copied to the corresponding ti/bios/power directory.  I would first ensure that you can find this package; try typing the following at the root dvsdk directory:

    find . -name power

    and see if you get a hit resembling ti/bios/power

  • hi Juan,

    I typed the command that you suggested and got the following output:

    root@celebro:/home/ayildirim# cd dvsdk_3_00_00_29
    root@celebro:/home/ayildirim/dvsdk_3_00_00_29# find . -name power
    ./local_power_manager_1_20_01/packages/ti/bios/power
    ./local_power_manager_1_20_01/docs/cdoc/ti/bios/power
    ./packages/ti/bios/power
    ./biosutils_1_01_00/packages/ti/bios/power
    ./codec_engine_2_20_01/packages/ti/bios/power
    ./codec_engine_2_20_01/xdoc/ti/bios/power
    root@celebro:/home/ayildirim/dvsdk_3_00_00_29#

    When I looked inside the given directories, I found that some tar files named ti_bios_power,omap3530.tar exist.  I could not actually figure out what I should do next...

  • This is a bug in the codec_engine_X_YY/examples/xdcpaths.mak file - it does not provide an explicit way to add the Local Power Manager (LPM) product, which contains the 'ti.bios.power' package, to the XDC_PATH.

    You can work around this by adding it yourself at the very bottom of that xdcpaths.mak file... something like

        XDC_PATH := /home/ayildirim/dvsdk_3_00_00_29/local_power_manager_1_20_01/packages;$(XDC_PATH)

    We'll fix this in the next release of Codec Engine.

    Chris

  • Hi Chris,

    Thank you for the help... I added the path as you showed... However, the problem unfortunately persists... Should I download the package online?

  • hi Chris...

    Ok I could solve the problem now... Thank you very much for the help once again...

    Regards...

  • Hi Juan,

    I also want to write a DSP algorithm, for video warping, similar to the one described by Lorry.  I wrote test code inside an (additional) app.c in the video_copy folder and built the app folder as described in the build_instructions.html file.  Additionally, I wrote the same algorithm inside videnc_copy.c.  Secondly, I tested the algorithm by writing additional code compiled with the arm2007q3 toolchain, in order to observe the performance of the algorithm when only the ARM core operates.

    The question is: how can we be sure which part of the app.c code runs on the DSP core and which part runs on the ARM core when we compile, for example, app.c in the video_copy folder with 'gmake'?  Can we say that any additional function defined in videnc_copy.c (in general, in the codec algorithms) will run on the DSP core?

    Regards...

  • Is there an example of this somewhere so we can get a feel for how this works?  I understand the CMEM pointer sharing, but what does the "DSP" driver look like in Linux?  How does the DSP side know that it has received a new image?  Can it be interrupt- or event-driven (instead of using a flag)?

  • mursel yildiz said:

    The question is: how can we be sure which part of the app.c code runs on the DSP core and which part runs on the ARM core when we compile, for example, app.c in the video_copy folder with 'gmake'?  Can we say that any additional function defined in videnc_copy.c (in general, in the codec algorithms) will run on the DSP core?

    Apologies for the delay; I am on vacation this week and hence not checking my e-mail that often.  Inside codec engine, you have three types of examples.

    1) The DSP codec (e.g. videnc_copy); this normally produces a library (a64P or lib file) that runs on DSP. 

    2) The DSP server which produces an x64P file.  This is the actual DSP image that you will load and run on DSP.

    3) ARM side Linux application which uses codec engine APIs to load DSP server and access codecs within a server.

    Actually, please note that codecs and DSP servers can also be built for the ARM; they do not necessarily have to be built for the DSP.  That said, the codec engine APIs access the DSPLINK driver, which loads the DSP image (the .x64P file) onto the DSP.  Codec Engine is a framework which resides partly on the ARM side (in the ARM application) and partly on the DSP side (in the DSP server).  Let me know if this helps and if there is anything else we can assist you with.

  • BMillikan said:

    Is there an example of this somewhere so we can get a feel for how this works?  I understand the CMEM pointer sharing, but what does the "DSP" driver look like in Linux?  How does the DSP side know that it has received a new image?  Can it be interrupt- or event-driven (instead of using a flag)?

    Hopefully my previous post will help you too.  The best place to see how it all fits together is to look at the codec engine examples.  FYI, only one DSP image can be loaded on the DSP at any one time, but this image can have multiple instances of DSP algorithms.  Let us know if you need any further assistance.

     

  • Juan Gonzales said:

    Inside codec engine, you have three types of examples.

    1) The DSP codec (e.g. videnc_copy); this normally produces a library (a64P or lib file) that runs on DSP. 

    2) The DSP server which produces an x64P file.  This is the actual DSP image that you will load and run on DSP.

    3) ARM side Linux application which uses codec engine APIs to load DSP server and access codecs within a server.

    FWIW, the CE examples reflect the user roles described here:

    http://wiki.davincidsp.com/index.php?title=Codec_Engine_Roles

    That article may also help with understanding what docs/tools are available as you march through the different roles.

    Chris

  • Hi all,

    I have written my algorithm inside the videnc_process() function and saw that it worked... However, it is unfortunately slow.  I have a for statement inside the function like:

    for (i = 0; i < 100000; i++) {
        x = myinbuf[i];    /* the memory read in question */
        ......
    }

    When I don't perform this memory read statement (x = myinbuf[i]), the algorithm runs very fast and I get 25 fps.  However, when I put this statement inside the for loop, I get only 1 fps.  There are actually lots of MAC operations and divisions inside this for statement, and they are performed very fast, as expected...  The myinbuf buffer is produced by a memcpy() from the CMEM area.  I could not understand why a memory read statement cost so much...  Should I try something different, or what may be the reason for such a case?

     

  • Sorry for the [i] symbols; they are of course [ i ] - the forum treats [i] as italics markup. :D

  • Hi mursel, I just tried the code you described above, but I could not reproduce your problem (I use the EVM6446).  I believe the read from memory is not the main reason for your issue, because this kind of operation is very common in algorithms.  So I think maybe... some other code operations collide with the read.  That's just my guess.  :)

  • hi Lorry...

    I actually have another question about this problem... (I should make it clearer :) )...  Having experimented with many algorithms on the DSP core (just to explore DSP performance), I realized that no matter what kind of statement I write, perhaps because of the pipeline architecture, once I write statements like:

     

    x  = input[i];

    y = x * a;

     

    i.e., where y waits for the read of x, the DSP algorithm becomes slower.  I think the second statement waits after the "instruction decode" phase for the first statement and delays the other instructions...  However, once I try

    x = input[i];

    ....(unrelated stm1)....

    ....(unrelated stm2)....

    ....(unrelated stm3)...

    y = x * a

    the DSP algorithm becomes faster...  Well, actually this is just a guess; I am not sure...  Do you think this may be the problem?
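    For what it's worth, one common way to attack the load-to-use stall being described here is to make loop iterations independent so the compiler can overlap loads with arithmetic.  A hedged sketch follows: both functions compute the same result, only the scheduling freedom differs, and the TI compiler may well perform this transformation itself at higher optimization levels when the loop allows it:

```c
/* Straightforward loop: each multiply uses the value just loaded. */
void scale_simple(const short *restrict in, short *restrict out,
                  short a, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = (short)(in[i] * a);
}

/* Same computation unrolled by four.  The four loads are independent,
 * so they can be issued back to back and their latency overlaps with
 * the multiplies; 'restrict' additionally promises the compiler that
 * the input and output buffers do not alias. */
void scale_unrolled(const short *restrict in, short *restrict out,
                    short a, int n)
{
    int i;
    for (i = 0; i + 3 < n; i += 4) {
        short x0 = in[i], x1 = in[i+1], x2 = in[i+2], x3 = in[i+3];
        out[i]   = (short)(x0 * a);
        out[i+1] = (short)(x1 * a);
        out[i+2] = (short)(x2 * a);
        out[i+3] = (short)(x3 * a);
    }
    for (; i < n; i++)                 /* remainder iterations */
        out[i] = (short)(in[i] * a);
}
```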

     

     

  • That's really a problem.  For example, if I want to blur an image and don't want to use the convolution in IMGLIB, I have to use code like "x = input[i]; y = x * a;" continuously.  If this is the main reason for a slow program, then my blur algorithm will be very slow.  I guess that if there are two or more threads in an algorithm, the collision you mentioned may occur there.

  • Hi Lorry...

    Is it possible to use a function (that is defined somewhere else) inside a codec engine process function?  Or let me ask: how may we define functions that are intended to be used inside a process function?

     

  • Of course; there are no special rules - it is just C.

  • I think there are... because any time I try to define an (arbitrary) function inside the videnc_copy.c file, I get errors of the "the function is not defined" type...
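    That error is usually just a missing forward declaration: in C, a function must be declared before its first use (or defined above its caller).  A minimal sketch of the pattern (names are illustrative, not from videnc_copy.c):

```c
/* Declare the helper before it is used; without this forward
 * declaration, calling my_clip() from process_sample() triggers the
 * "function is not defined" style of error. */
static int my_clip(int v, int lo, int hi);

int process_sample(int v)
{
    return my_clip(v * 2, 0, 255);   /* usable because declared above */
}

/* The definition may come after the caller, since the prototype exists. */
static int my_clip(int v, int lo, int hi)
{
    if (v < lo) return lo;
    if (v > hi) return hi;
    return v;
}
```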

  • I also want to develop a video processing algorithm with XDAIS (non-XDM).  In my opinion, the flow is:

    a) in CCS, write the XDAIS algorithm and compile it to a .lib file, i.e. an .a64P file in Linux;

    b) in Linux, produce the .x64P (codec server) from the .a64P;

    c) in Linux, write the application program that runs on the ARM, as well as the stub and skeleton;

    d) in Linux, install the executable program and the .x64P to the workdir and run them.

    Am I right?

    Questions:

    1. In b) above, how do I produce the .x64P file (DSP server or codec server) from the .a64P (suffix changed from .lib in CCS) in Linux?

    I've done the same as Lorry:

    LorryAstra said:

    I've tried compiling an archive file in CCS (for example, videnc_copy.lib) in Windows, and then renaming it (videnc_copy.a64P) in the lib folder in Linux.  I then compiled a codec server and copied it to "/home/usraccount/workdir/filesys/opt/dvevm".  I wanted to check whether my codec would work, but it failed.

    Sequence:

    a) I compiled videnc_copy.lib in CCS, and I'm sure that all the interfaces have been implemented.

    b) I copied videnc_copy.lib to /home/usracc/dvevm_x_xx/codec_engine_x_xx/examples/codecs/videnc_copy/lib, and renamed this lib file to videnc_copy.a64P.

    c) In the .../examples/servers/video_copy/ folder, I recompiled the codec server file and copied it to .../workdir/filesys/opt/dvevm.

    d) I wanted to test whether videnc_copy works.

    After checking, I find that the videnc works, but it is not the code that I compiled in CCS; it behaves like the original videnc_copy.a64P from the videnc_copy example in Linux.  It seems that videnc_copy.lib does not affect the codec server (video_copy.x64P).

    So I guess the new videnc_copy.a64P (renamed from videnc_copy.lib) is not picked up when generating video_copy.x64P.

    then  how can i produce  'my'  video_copy.x64p from 'my'  videnc_copy.a64P(renamed from videnc_copy.lib)?

    2. if i write an XDAIS algorithm in ccs and compile it to .lib, then write a test.c in ccs and produce  .out, can this .out file be codec server in linux after renamed as .x64p? 

    If not, then the .out file is just used to test the .lib, and after testing we integrate the .lib into the .x64P (codec server) in Linux. Am I right?
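
    For reference on question 1: in the CE examples, the server build is driven by an XDC config script (video_copy.cfg in examples/servers/video_copy), and the .a64P is pulled in through the codec *package*, not picked up by filename alone. So renaming the .lib is not enough; the .a64P must match the name and build profile the codec package declares, and the server must then be rebuilt (via the example's makefile / XDC tools) so it links your library. The relevant part of the server config looks approximately like this (module paths taken from the stock video_copy example; treat field details as approximate):

    ```javascript
    /* video_copy.cfg (sketch) - declares which codec packages the server links */
    var Server = xdc.useModule('ti.sdo.ce.Server');
    Server.threadAttrs.stackSize = 16384;
    Server.threadAttrs.priority  = Server.MINPRI;

    /* pulling in this module is what causes videnc_copy.a64P
       (from the codec package's lib directory) to be linked */
    var VIDENC_COPY =
        xdc.useModule('ti.sdo.ce.examples.codecs.videnc_copy.VIDENC_COPY');

    Server.algs = [
        {name: "videnc_copy", mod: VIDENC_COPY,
         threadAttrs: {stackSize: 4096, priority: Server.MINPRI + 1}},
    ];
    ```

    If the server still links the old library after you swap in your .a64P, it usually means the codec package was not rebuilt/re-released, so the server build is still resolving against the original package contents.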

     

  • Hi Juan,

    I have a similar question. I am using the DM6446. I have done all my image processing on the DSP side. Now I would like to send an image from the ARM to the DSP for processing and send the results back to the ARM. I have read how CE and DSP Link work, but I am a bit confused.

    Can you help? Please give or direct me to the simplest example of how to pass an image from the ARM to the DSP for processing and back to the ARM.

    Thanks in advance.

    Jonners

  • Juan Gonzales said:

    Is there an example of this somewhere so we can get a feel for how this works? I understand the CMEM pointer sharing, but what does the "DSP" driver look like in Linux? How does the DSP side know that it received a new image? Can it be interrupt- or event-driven (instead of using a flag)?

     

    Hopefully my previous post will help you too. The best place to see how it all fits together is to look at the codec engine examples. FYI, only one DSP image can be loaded on the DSP at any one time, but this image can have multiple instances of DSP algorithms. Let us know if you need any further assistance.

     


  • Hi Jonners,

    So first, I just want to clarify what you mean by "image", since 'DSP image' is a term we often use for the DSP binary that is loaded and executed on the DSP. From the context of your inquiry, it appears we are referring to sharing memory buffers (which can contain a video image or anything else) between the ARM and DSP. Just wanted to clarify this for anyone else reading this post.

    If we are aligned on the above, then the simplest form of this process is the demos (encode, decode, encodedecode) included with the DVSDK; these show the Linux application-level interface (commonly referred to as the VISA APIs) that lets the ARM pass a video buffer to the DSP (say, for encoding) and the DSP pass the processed buffer back to the ARM. In this scenario, all you need to learn is the VISA APIs (they use DSPLink underneath without the application developer having to learn it).

    If, however, you are writing not only the consumer Linux application but also the DSP algorithm, and packaging your algorithm into a DSP server image that the ARM Linux application can consume, then you will need to learn a bit more than the VISA APIs. In this case the best place to look is the codec engine examples, which include source code for DSP algorithms, DSP servers, and applications. Again, if you stick to our codec engine framework, this eventually leads back to the VISA APIs (or extensions of them) at the Linux application level. Let me know if this helps.
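
    To make the VISA-level flow concrete, here is a rough C sketch of what the Linux application side does for video encode, using CE/VISA calls as in the examples. This is a fragment, not a buildable program: the engine name, codec name, and the params/buffer/args variables are placeholders, and all error handling is trimmed.

    ```c
    #include <ti/sdo/ce/Engine.h>
    #include <ti/sdo/ce/video/videnc.h>

    /* Open the engine; this loads the DSP server image via DSPLink */
    Engine_Handle ce = Engine_open("encode", NULL, NULL);

    /* Create an instance of the DSP-side encoder algorithm */
    VIDENC_Handle enc = VIDENC_create(ce, "h264enc", &params);

    /* Per frame: in/out buffers must live in CMEM so the DSP can see them */
    VIDENC_process(enc, &inBufDesc, &outBufDesc, &inArgs, &outArgs);

    VIDENC_delete(enc);
    Engine_close(ce);
    ```

    The point is that the application never touches DSPLink directly; the buffer hand-off to the DSP and back is entirely behind the VIDENC_process() call.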

  • [root@OMAP3EVM example]# ./app_remote.xv5T
    @0x0008bff1:[T:0x4001cfb0] ti.sdo.ce.examples.apps.video_copy.singlecpu - main> ti.sdo.ce.exampl<1>Existing entry's end address is covered by given entry's start & end address]
    Existing entry's end address is covered by given entry's start & end address., can not create TLB entry for address: [0x86000000] size: [0x1000000]
    es.apps.video_copy.singlecpu
    @0<1> DSP_init status [0x80008050]
     DSP_init status [0x80008050]
    x0008c272:[T:0x4001cfb0] ti.sdo.ce.examples.apps.video_copy.singlecpu - App-> Application started.
    app: error: can't open engine video_copy
    @0x0009787a:[T:0x4001cfb0] ti.sdo.ce.examples.apps.video_copy.singlecpu - app done.

    In xdcpaths.mak, I set PROGRAMS = DSP_SERVER APP_CLIENT.

     

  • Hi Juan

    I am working on a BeagleBoard running Android. I am able to encode video data from a file using the Video Encode application that comes as part of the DMAI sample applications, with the H.264 encoder running on the DSP side. The DMAI build manual instructs that the Video Encode executable and the cs.x64P codec server image be placed in the same directory. Does this mean that when the Video Encode executable runs, it will in turn load the cs.x64P codec server image onto the DSP via DSPLink?

    Juan, I also have a general doubt regarding encoding. Let's say I am getting video data from my video camera. How should I give this data to the encoder? Should I give it to the encoder directly, or store it in some buffer and then pass it to the encoder?

     

    Thanks & Regards

    Ananth

  • ananth36082 said:

    I am working on a BeagleBoard running Android. I am able to encode video data from a file using the Video Encode application that comes as part of the DMAI sample applications, with the H.264 encoder running on the DSP side. The DMAI build manual instructs that the Video Encode executable and the cs.x64P codec server image be placed in the same directory. Does this mean that when the Video Encode executable runs, it will in turn load the cs.x64P codec server image onto the DSP via DSPLink?

    Yes, normally the name of the DSP server image (the x64P file) is built into the Linux application executable; the Linux application looks for this file in the directory it was run from and loads it onto the DSP via DSPLink.

    ananth36082 said:

    Juan, I also have a general doubt regarding encoding. Let's say I am getting video data from my video camera. How should I give this data to the encoder? Should I give it to the encoder directly, or store it in some buffer and then pass it to the encoder?

    This is a very good question. DDR2 (SDRAM) memory is partitioned into Linux space (ARM), CMEM (shared between ARM and DSP), and DSP space. When you capture data via a camera, the Linux driver participates, so the data most likely lands in Linux space; the DSP algorithm cannot see Linux space, so the application normally allocates a buffer in CMEM space and copies the data there before passing it on to the DSP. The DSP may allocate any other buffers it needs from its own DSP space.

    That said, theoretically the Linux application could allocate a buffer in CMEM space and pass it to the video capture driver so that the driver fills this buffer with data coming from the camera; the application could then pass the buffer directly to the DSP, avoiding a copy. I say theoretically because I do not believe this has been implemented yet.
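
    The copy described above can be illustrated in plain C. Note that malloc stands in for the CMEM allocation purely for illustration; on the target you would use the CMEM/Memory APIs so the buffer is physically contiguous and visible to the DSP:

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define FRAME_SIZE 16   /* stand-in for width * height * bytes-per-pixel */

    int main(void)
    {
        /* Buffer filled by the capture driver (Linux space on the target) */
        unsigned char *captureBuf = malloc(FRAME_SIZE);
        /* Buffer the DSP can see (CMEM space on the target) */
        unsigned char *sharedBuf  = malloc(FRAME_SIZE);

        for (int i = 0; i < FRAME_SIZE; i++)
            captureBuf[i] = (unsigned char)i;    /* fake captured frame */

        /* The extra copy the post describes: Linux space -> CMEM space */
        memcpy(sharedBuf, captureBuf, FRAME_SIZE);

        printf("%d\n", sharedBuf[5]);            /* prints 5 */

        free(captureBuf);
        free(sharedBuf);
        return 0;
    }
    ```

    The "theoretical" zero-copy path simply deletes the memcpy by handing the CMEM buffer to the capture driver in the first place.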

  • Hi Juan

                Thanks a lot for your reply. I have a few doubts regarding the build process of DMAI applications. As you said, the sample video encode application is configured, built, and compiled in such a way that it looks for the DSP server image cs.x64P in its current folder. The codec_engine_2_24/packages/ti/sdo/ce/Engine.xs file states that the "target application will look for the DSP server image cs.x64P in its current directory". Juan, do you have any idea which part of the application actually does this loading of the DSP server image onto the DSP? Does linker.cmd tell the executable that it should look for the DSP server image in the current directory?

                My intention is to create a shared library out of the sample video encode application and use it in my Android application. In Android it is possible to load native C libraries and access their functions via the Java Native Interface (JNI). In my Android app I will load the shared library (created from the video encode app) and call the corresponding JNI function to encode the video data.

                I renamed the main() function of the sample video encode application to EncodeVideoData() and created a shared library (.so) out of it, instead of creating an executable. I wrote a JNI wrapper for EncodeVideoData() and I am calling it from my Android application.

                Now the problem arises. I call the JNI wrapper for EncodeVideoData() from my Android application, and control passes to EncodeVideoData(), which checks for a raw YUV input file and other misc. parameters and then proceeds to open the codec engine. I am failing at this point: the call to Engine_open() fails with "unable to open the codec engine!!!".

                Juan, as far as I understand, I suspect that the DSP server image cs.x64P is not being loaded onto the DSP. I believe the Android application is not able to find the location of the DSP server image and thus fails to load it onto the DSP. I have placed the DSP server image in the same folder as my Android app. I have not yet checked the error code returned by Engine_open(); I will check and let you know. In my case the target application is an Android app, so my problem is how to tell the Android app where it should look for the server image. Please help me out, Juan.

    Thanks & Regards
    Ananth

    If you are working with Android, I would recommend using the distro at www.arowboat.org; I believe Android has been ported on top of the drivers and codec engine framework there. This is likely the most up-to-date Android distro for OMAP35x. At this site you will also find links to IRC and other useful sites where you can get support from the third parties and TIers that work closely with this Android release.

    That said, to answer your questions:

    Question: do you have any idea which part of the application actually does the loading of the DSP server image onto the DSP? Does linker.cmd tell the executable to look for the DSP server image in the current directory? This is done by the Engine_open() call, which uses DSPLink underneath to load the DSP image (aka DSP executable) onto the DSP and take the DSP out of reset. If this call is failing, it means that the application cannot find the x64P file it is expecting. This could be due to the way it was built (name mismatch, incompatible build) or the location of the file.

    I do not know enough about Android or JNI, but I do know that the rowboat distro above uses codec engine underneath in a similar fashion. So my guess is that the issue is in the way the x64P file was built. Can you put the original demo and x64P file back in place and see if that works, just to ensure that your environment setup has not been compromised?
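
    One quick diagnostic worth adding: Engine_open() takes an optional error-code out-parameter, which narrows down why the open failed. A sketch (the engine name here is from the examples, and this fragment assumes the CE headers and libraries from your DVSDK build):

    ```c
    #include <stdio.h>
    #include <ti/sdo/ce/Engine.h>

    Engine_Error ec;
    Engine_Handle ce = Engine_open("video_copy", NULL, &ec);
    if (ce == NULL) {
        /* e.g. a DSP-load error here typically means the x64P file
           could not be found or loaded onto the DSP */
        printf("Engine_open failed, error code 0x%x\n", (unsigned)ec);
    }
    ```

    Comparing this code against the Engine error constants in Engine.h should tell you whether it is a missing server image, a DSPLink problem, or something else.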

  • Hi Juan

    I am working with the rowboat distro you recommended. As you suggested, I put the original demo and x64P files back and tried encoding video data from a file, then provided the encoded file as input to the decoder. Both encoding and decoding were successful, so I guess the environment setup and the way the x64P is built are fine. Can you tell me how to enable DSP traces? Does GT_Xtrace (where X = 0..6) display anything to the console? Currently GT_trace is defined as '0'; I changed it to 1 and recompiled codec_engine and the sample DMAI application, but I could not see any DSP traces while executing the original demo application. Could you tell me how to view the DSP traces? I thought of comparing the DSP traces of the original demo application and my Android app.

     

    Thanks & Regards

    Ananth

    I have not tried this in the arowboat distro myself, but I like using the CE_DEBUG methodology described in the following wiki

    http://wiki.davincidsp.com/index.php/CE_DEBUG

    to enable tracing, because I do not need to rebuild anything and can turn it off whenever I choose. Let us know if this works in arowboat; I would guess yes, since it is still Codec Engine underneath.
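
    For anyone finding this later: the wiki method boils down to setting an environment variable when launching the application, with higher levels giving more verbose trace (the application name below is just a placeholder for your own executable):

    ```shell
    # CE_DEBUG=1: minimal (warnings/errors), 2: typical debugging, 3: verbose
    CE_DEBUG=2 ./your_app
    ```

    Because this is read at runtime, nothing needs to be rebuilt, and unsetting the variable turns the tracing back off.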

  • Juan,

     

    Your upcoming advice is GREATLY appreciated...

     

    I just got an urgent call from a client who needs me to get something running for them on the DM6446. I have 30+ years of experience in hardware and software, but never with the DM6446, and only superficial experience with video, none at the internals level.

     

    I've read a few of your posts on THIS thread, and I believe the DM6446 is the correct tool, and I hope you are the correct person to answer my primary question at this time.

     

    This is urgent and it might go quicker that way!

     

    QUESTION: What in total do we need to order so that we can develop our own video image processing algorithm to run on the DM6446?

     

    So far, I think we need the

    - TMDSEVM6446 (DM6446 Digital Video Evaluation Module)

    - LINUXDVSDK-DV (Linux Digital Video Software Development Kit (DVSDK) for DaVinci Devices)

    - CCSTUDIO (Code Composer Studio (CCStudio) Integrated Development Environment (IDE) - v4.x)

     

    I surely have this wrong, please correct me.

     

    OUR NEED: What we need to do is actual processing of video frames.  We'll analyze each frame of the video, modify the appearance of each frame, and send the modified video out.  For example, I want to point the camera at a US Flag, with red/white/blue colors.  Our algorithm could identify all the red and replace it with green.  The monitor then shows a waving green/white/blue flag.  Obviously, our proprietary video algorithm is a lot more complicated than just replacing red with green.  But if we can indeed write an algorithm to replace red with green, I believe we can implement our proprietary algorithm as well.
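
    [Editor's note: the red-to-green substitution described above is, at its core, a per-pixel test and replace. A toy version in plain C on a packed RGB buffer follows; the DM6446 VPFE typically delivers YUV data, so take this as a sketch of the algorithm only, not of the actual frame format.]

    ```c
    #include <stdio.h>

    /* Replace "mostly red" pixels with pure green, in place.
       Pixels are packed as R,G,B bytes; thresholds are arbitrary. */
    static void red_to_green(unsigned char *rgb, int npixels)
    {
        for (int i = 0; i < npixels; i++) {
            unsigned char *p = &rgb[3 * i];
            if (p[0] > 128 && p[1] < 100 && p[2] < 100) {  /* crude "is red" test */
                p[0] = 0; p[1] = 255; p[2] = 0;
            }
        }
    }

    int main(void)
    {
        /* one red pixel, one blue pixel */
        unsigned char frame[6] = {200, 20, 30,   10, 10, 220};
        red_to_green(frame, 2);
        printf("%d %d %d\n", frame[0], frame[1], frame[2]);  /* prints 0 255 0 */
        printf("%d %d %d\n", frame[3], frame[4], frame[5]);  /* prints 10 10 220 */
        return 0;
    }
    ```

    On the DM6446 this loop would live inside the process() function of an XDAIS-compliant algorithm running on the DSP, operating on the shared buffers described earlier in the thread.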

     

    From what I've been able to deduce, the TMDSEVM6446 includes only the ability to use existing codecs. It appears we need to write our OWN video processing routines, XDAIS compliant, and run them as if they were codecs. In one place I read that this requires the DVSDK; in another, that it requires CCS.

     

    Thanks very much,

    Helmut.

  • Thanks, Juan.  

    Yep, she wants something in two [or three] weeks, just like I thought!

    The green/white/blue flag analogy is close to the actual algorithm.

    I think I can do it.

    -Helmut