Which app and codec examples should I pattern after?

I want to produce a single-purpose appliance based on the DM6467T and developed with the DM6467T EVM.  Pretend it does a type of live chroma keying, where the input video subject has a green background, and I simply want the output video to have that green replaced with blue.  (Don't worry about substituting an image like true chroma keying does.)  I envision doing this with a simple app that uses a simple codec.

I've looked at the example vidanalytics_copy codec.  The process function simply does a memcpy from the inBufs to outBufs, and I can replace that with my green-to-blue translation.  Therefore, I believe the vidanalytics_copy codec is the correct model for me to follow.  Do you agree?
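To make that concrete, here's roughly the translation I'd drop in where the memcpy is now.  It's only a sketch: I'm assuming UYVY (YUV 4:2:2 interleaved) buffers, and the "green" test is a crude chroma threshold, so the format and thresholds would need checking against what the EVM actually delivers.

    #include <stdint.h>
    #include <stddef.h>

    /* Replace "green" chroma with "blue" in a UYVY (YUV 4:2:2) buffer.
     * In YUV, pure green has low U and low V; raising U while leaving
     * Y alone reads as blue.  Thresholds are guesses, not tuned values. */
    void greenToBlue(const uint8_t *in, uint8_t *out, size_t numBytes)
    {
        size_t i;
        for (i = 0; i + 3 < numBytes; i += 4) {   /* U0 Y0 V0 Y1 = 2 pixels */
            uint8_t u = in[i], v = in[i + 2];
            if (u < 100 && v < 100) {             /* crude "green screen" test */
                u = 200;                          /* high U gives a blue cast */
                v = 110;                          /* near-neutral V */
            }
            out[i]     = u;
            out[i + 1] = in[i + 1];               /* Y0 unchanged */
            out[i + 2] = v;
            out[i + 3] = in[i + 3];               /* Y1 unchanged */
        }
    }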

I'm having more difficulty finding a proper example app.  Is there a good explanation of what each app (or codec) does?  I haven't been able to find one on the wiki or in the docs that come with Codec Engine.

As best I can tell, the vidanalytics example app uses the vidanalytics_copy codec; however, it uses files for input and output, not video.  Meanwhile, the encodedecode demo uses video for input and output, but is organized around calling two codecs, not just one.  I need an example that uses video for input and output and calls only one codec to process that video.  Any suggestions, please?

Thanks very much,

Helmut

  • I'm considering testing out and basing my project on "C64x+ iUniversal Codec Creation - from memcpy to Canny Edge Detector" at http://processors.wiki.ti.com/index.php/C64x%2B_iUniversal_Codec_Creation_-_from_memcpy_to_Canny_Edge_Detector.

    If I do this, then it may mean that I substantially WASTED the last two weeks trying to build a codec and codec engine based on the vidanalytics_copy example, and an app based on the encodedecode source included with DVSDK 3.10.

    Before embarking down this path, I would like to make sure that it won't be another two weeks WASTED.

    I'm looking for advice in this regard.  Specifically, the vidanalytics_copy/encodedecode combination doesn't really go together well, as implied in my prior post on this thread.  Meanwhile, the page I link above notes that "Using slices allows optimal use of the small but fast internal memory available on these devices to improve performance."  I believe no such optimization exists in the other available samples; as I understand it, the slice idea amounts to something like the sketch below.
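
    Only a sketch: dmaCopy() is a hypothetical stand-in for the real EDMA/ACPY3 transfer calls, the section name is made up, and greenToBlue() is the translation from my first post.

        #include <stddef.h>
        #include <stdint.h>

        extern void dmaCopy(void *dst, const void *src, size_t bytes);          /* hypothetical DMA helper */
        extern void greenToBlue(const uint8_t *in, uint8_t *out, size_t bytes); /* my translation, above */

        #define SLICE_LINES 8
        #define LINE_BYTES  (1280 * 2)             /* one 720p UYVY line */

        /* The section name is made up; it would be linked to internal RAM. */
        #pragma DATA_SECTION(sliceBuf, ".sliceMem")
        static uint8_t sliceBuf[SLICE_LINES * LINE_BYTES];

        /* Process a frame one slice at a time so the working set sits in
         * small, fast internal memory instead of external DDR.
         * Assumes numLines is a multiple of SLICE_LINES. */
        void processFrameInSlices(const uint8_t *frameIn, uint8_t *frameOut, int numLines)
        {
            int line;
            for (line = 0; line < numLines; line += SLICE_LINES) {
                size_t off   = (size_t)line * LINE_BYTES;
                size_t bytes = SLICE_LINES * LINE_BYTES;
                dmaCopy(sliceBuf, frameIn + off, bytes);   /* slice in         */
                greenToBlue(sliceBuf, sliceBuf, bytes);    /* work in fast RAM */
                dmaCopy(frameOut + off, sliceBuf, bytes);  /* slice out        */
            }
        }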

    Please advise.

    Thanks,

    Helmut

  • These things typically come down to 3 things - 1) input, 2) processing, 3) output.

    For #1 and #3, I can't give much advice, other than to find an example that does #1 and #3 as close to what you want as possible.  The CE examples intentionally keep #1 and #3 simple to focus on the system integration and algorithms... and just use file I/O.  Probably not that interesting for your use case.  The encodedecode demo might be pretty close to what you want, as might the Canny Edge Detector app you referenced, but I don't know much about them.

    For #2, the recommendation will come down to what the intent of your alg is.  If it's to take raw video as input and generate compressed output, you probably want to look at the video encoders.  If it's to take a compressed image as input and generate raw image output, look at the image decoders.  Etc.  In your case, it sounds like you want to take raw video in and generate raw video out.  So none of the encoders/decoders are right, nor is video analytics.  You probably want to use the IUNIVERSAL interface for this, much like the Canny Edge Detector shows.
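
    From the app side, the IUNIVERSAL flow looks roughly like the following.  Treat it as a sketch written from memory, not a verified implementation; check it against the Codec Engine docs.  "myengine" and "mycodec" are placeholder names.

        #include <xdc/std.h>
        #include <ti/sdo/ce/Engine.h>
        #include <ti/sdo/ce/universal/universal.h>

        /* One raw-frame-in, raw-frame-out pass through a single IUNIVERSAL
         * codec.  Error checks are omitted; assumes CERuntime_init() ran at
         * startup.  In a real app you'd open the engine and create the codec
         * once, not per frame. */
        void processOneFrame(XDAS_Int8 *rawIn, XDAS_Int8 *rawOut, XDAS_Int32 frameBytes)
        {
            Engine_Handle     ce  = Engine_open("myengine", NULL, NULL);
            UNIVERSAL_Handle  alg = UNIVERSAL_create(ce, "mycodec", NULL);
            XDM1_BufDesc      inBufs, outBufs, inOutBufs;
            UNIVERSAL_InArgs  inArgs  = { sizeof(UNIVERSAL_InArgs) };
            UNIVERSAL_OutArgs outArgs = { sizeof(UNIVERSAL_OutArgs), 0 };

            inBufs.numBufs           = 1;
            inBufs.descs[0].buf      = rawIn;
            inBufs.descs[0].bufSize  = frameBytes;
            outBufs.numBufs          = 1;
            outBufs.descs[0].buf     = rawOut;
            outBufs.descs[0].bufSize = frameBytes;
            inOutBufs.numBufs        = 0;          /* nothing processed in place */

            UNIVERSAL_process(alg, &inBufs, &outBufs, &inOutBufs, &inArgs, &outArgs);

            UNIVERSAL_delete(alg);
            Engine_close(ce);
        }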

    Finally, don't underestimate the work you've done.  The time spent exposed you to the build and integration - stuff you would have had to work through even if you'd picked the perfect examples to start from (whatever those may be).  All of that work (getting the wizards to run, understanding Linux vs Windows dev hosts, getting DSP Link to build, becoming familiar with build issues, etc) is relevant experience.

    FWIW, TI offers training courses (including free online ones!) that probably would have helped you get a better overall understanding of the system before diving right in.

    Chris

  • Chris,

    Thanks for your effort in making me feel better about the wasted time.  Training courses might have sped me along, but in past experience I've always done better, faster, without them.  Perhaps not in this case.  Calendar-wise, perhaps I'm still ahead, because I would have had to schedule the training course.

    I'm looking closer at canny, and find that there's DM6467-flavor source code derived from... wait for it... the encodedecode demo, of all things.

    If you don't mind, please tell me what you think over on _________, oh I'll put it right here.  Next post down on this thread...

  • CANNY MEETS ENCODEDECODE...   I appreciate Chris's continued assistance.  If someone else has insight here, PLEASE jump in.  Thanks.  My text below encompasses six specific questions, marked (#N).

    So, I went down a codec(vidanalytics_copy)/app(encodedecode) path and hit a roadblock because encodedecode really wanted to access two codecs, not one, and I only wanted to provide one to do my not-quite-chroma-key raw video to raw video application.

    So I started looking at the canny edge detect wiki referenced at the top of this thread...

    First, I hit a problem: my environment won't compile it, which makes me worry about another waste of time.  SUSPENDED this path for the moment.

    Second, discovered canny.txt is derived from encodedecode.txt.  Found correct version of canny source in dvsdk_2_00_00_22_Canny_iUniversal\dvsdk_demos_2_00_00_07\dm6467\canny that I extracted from \iUniversal_Canny_C64x_release_1_0.  

    SURE ENOUGH, it looks like canny came from encodedecode.  Now, I have encodedecode running on my DM6467***T*** EVM.  Perhaps I really can get canny running there, after I fight with the configuration demons.  So, I plan on inspecting this canny source code tree further.  Perhaps the canny developer did for me exactly what I need: s/he converted encodedecode from wanting two codecs (encode and decode) to wanting just one codec to process.  (#1) Does anyone know if this is true?  Does my plan sound sound?  If it is true, the main loop I'm hoping to end up with is sketched below.
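
    A pure sketch of that loop: captureGetFrame()/displayPutFrame() are hypothetical stand-ins for whatever the demo really uses for video I/O (DMAI, I assume), and processFrame() would wrap the single codec call.

        #include <stdint.h>

        extern uint8_t *captureGetFrame(void);                          /* hypothetical capture helper */
        extern void     displayPutFrame(uint8_t *frame);                /* hypothetical display helper */
        extern void     processFrame(const uint8_t *in, uint8_t *out);  /* wraps the single codec call */

        /* Raw video in, ONE codec, raw video out: no encode/decode pair. */
        void runLoop(uint8_t *workBuf, volatile int *quit)
        {
            while (!*quit) {
                uint8_t *frame = captureGetFrame();
                processFrame(frame, workBuf);
                displayPutFrame(workBuf);
            }
        }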

    NEXT, I have some confusion about the intended output from the canny project.  I can't find it now, but I read something on the wiki that made it sound like one sees the result of canny on the host PC, as if it's perhaps IP video.  I think it was canny I was reading about.  Yet, in multiple places the wiki talks about composite display.  SURELY it's raw video in and raw video out.  (#2) Please correct me if I'm wrong.

    NEXT, (#3) what in the blazes is D1 camera input?  Is it this: http://en.wikipedia.org/wiki/D-1_(Sony) ?

    I believe the DM6467 must have supported it somehow, but the DM6467T not so much.  The modified canny.txt speaks of defaulting to D1 (the "-y 0" parameter), but my EVM's encodedecode.txt defaults to 720p @ 60fps (the "-y 3" parameter), and I've seen some explanation that this is all the ____ can support.  Therefore, I'm sure I'm going to have to tame this dragon too, that is, get canny working with the "-y 3" option.  (#4) Please correct me if I'm wrong.

    FINALLY, this is starting to sound like a good path for me.  I trust that Chris or others will be able to answer some of the processing-centric questions I have, and will have (#5) insight into whether or not this path is good from a processing-centric point of view.  But Chris mentioned his lesser expertise with the input and output aspects, and it's there that the DM6467***T*** varies from the platform(s) on which canny was developed.  (#6) So, is there another TI guru out there who might fill in this blank: will this path lead to success, a dead end, or many trials for me?

    Thanks very much,

    Helmut