
Linux system integration



Hello, everyone:

The platform I use is the DaVinci DM6446.

I have now finished video capture, video display, video encode, and video decode, as well as speech capture, speech encode, speech decode, and speech play, and the RTP transmission. How can I integrate all of this?

I want to integrate them using multithreading, just like the demos do. But the demos are too complex; they involve mutexes and condition variables (a minimal sketch of that pattern is shown below). Can you give me an easy way to integrate my code quickly? Or can I write my code based on one of the demos, such as extending the encode demo? How should I do that?
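As far as I understand it, the hand-off pattern the demos are built on looks roughly like this minimal sketch (the names are my own, for illustration only; this is not the actual demo code):

    /* One-slot producer/consumer hand-off with a mutex and a condition
     * variable. Illustrative names only -- not taken from the TI demos. */
    #include <pthread.h>

    static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
    static void *sharedFrame = NULL;            /* one-slot "mailbox" */

    void producer_put(void *frame)              /* e.g. capture thread */
    {
        pthread_mutex_lock(&lock);
        while (sharedFrame != NULL)             /* wait until slot is free */
            pthread_cond_wait(&ready, &lock);
        sharedFrame = frame;
        pthread_cond_signal(&ready);
        pthread_mutex_unlock(&lock);
    }

    void *consumer_get(void)                    /* e.g. encode thread */
    {
        void *frame;
        pthread_mutex_lock(&lock);
        while (sharedFrame == NULL)             /* wait until a frame arrives */
            pthread_cond_wait(&ready, &lock);
        frame = sharedFrame;
        sharedFrame = NULL;
        pthread_cond_signal(&ready);
        pthread_mutex_unlock(&lock);
        return frame;
    }

Is this the kind of pattern I have to write for every pair of threads, or is there a simpler way?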

I also know there is a training course on the TI web site: http://focus.ti.com/docs/training/catalog/events/event.jhtml?sku=4DW102644 (DaVinci™ System Integration using Linux). Can somebody give me a copy of the material?

Thanks very much!

  • If these are the basic modules that you want in your application, then you should straight away use the TI demo examples. They are scalable: you can add customised modules along the lines of the inherent modules, like video capture/display, audio record/play, etc.

    I think you could skip the training and just study these demos well.

     

  • Thanks very much.

    The hardware board was designed by myself, but the software architecture is the same as the DM6446's.

    In my application, the data flow is like this: video capture ---> (on the hardware, the frames are transferred to the DSP through the VPORT) the DSP processes the frames with a pattern recognition algorithm ---> the DSP transfers the result to the ARM over the HPI port ---> video encoding using MPEG-4 ---> the compressed frames are transmitted to the server using an open-source RTP stack. For testing there is also a video display thread, and there is a speech thread as well; a rough thread skeleton of what I have in mind is sketched below.

    These are all my modules.
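    To make that concrete, this is roughly the thread-per-stage skeleton I imagine (every function here is just a placeholder for one of my existing modules):

        /* Thread-per-stage skeleton; each stub would call into one of my
         * existing modules. All names here are placeholders of mine. */
        #include <pthread.h>

        static void *captureThread(void *arg) { /* VPORT -> DSP -> HPI result */ return NULL; }
        static void *encodeThread(void *arg)  { /* MPEG-4 encode              */ return NULL; }
        static void *rtpThread(void *arg)     { /* RTP send to the server     */ return NULL; }
        static void *displayThread(void *arg) { /* local display (test only)  */ return NULL; }
        static void *speechThread(void *arg)  { /* audio path                 */ return NULL; }

        int main(void)
        {
            void *(*stages[])(void *) = { captureThread, encodeThread, rtpThread,
                                          displayThread, speechThread };
            pthread_t tid[5];
            int i;

            for (i = 0; i < 5; i++)
                pthread_create(&tid[i], NULL, stages[i], NULL);
            for (i = 0; i < 5; i++)
                pthread_join(tid[i], NULL);
            return 0;
        }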

    So, can I use the demo architecture on the DM6446 to finish my design? In other words, can I make some modifications to make the demos suit my application? This is the most important question for me.

    And I found that the synchronization between threads is very complex; it involves mutexes and condition variables, and the communication between threads is also not easy. In the demos, they use a pipe to transfer a struct of data to another thread. I can't understand when I should use the pipe and when I should pass the data to the other thread directly; in a word, I have some trouble understanding the data communication between the threads in the demos. Can you give me some suggestions for this? This is the second most important question.

    Thanks.

  • TI's demo application is also much the same, with data being captured through ports and passed to the various modules to encode/display (I am not sure if there is a streamer component in the demos; I haven't been much into that area).

    Yes, you can add your customized modules to these demos and integrate them, taking reference from the other modules already there. For example, the video capture module is going to remain the same. You can devise another DSP algorithm module along the lines of the existing video encoder module, as long as the source for these modules is the video capture thread. And so on.

    Pipes are used so that you can send a host of information, like data pointers, data size, and ancillary information, to a thread in a single call. You could use bare semaphores throughout your application, but then it would become too cumbersome and large to keep track of segregated locks all over the code. I think the global FIFO concept helps.
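    For instance, something along these lines (illustrative only, not the actual demo code) lets one thread hand a whole buffer descriptor to another in one write()/read() pair:

        /* Hand a buffer descriptor between threads of one process through
         * a pipe. Illustrative sketch only -- not the actual demo code. */
        #include <unistd.h>
        #include <stddef.h>

        typedef struct BufferDesc {
            void  *data;        /* pointer to the frame buffer         */
            size_t size;        /* number of valid bytes in the buffer */
            long   timestamp;   /* any ancillary info you want to pass */
        } BufferDesc;

        int fds[2];             /* create once at startup with pipe(fds) */

        /* Producer side (e.g. the capture thread): */
        void sendBuffer(void *data, size_t size, long ts)
        {
            BufferDesc desc = { data, size, ts };
            write(fds[1], &desc, sizeof(desc));    /* one call, whole struct */
        }

        /* Consumer side (e.g. the encode thread): */
        int recvBuffer(BufferDesc *desc)
        {
            return read(fds[0], desc, sizeof(*desc)) == sizeof(*desc);
        }

    Since the descriptor is smaller than PIPE_BUF, the write is atomic, and the reading thread simply blocks until something arrives; that is one reason a pipe is so convenient for these hand-offs.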

    I am not aware of any documentation on the same, but the code is self-explanatory. Still, only TI would be able to answer that!!

    Thanx!

    Sundar

  • The demos were primarily written to demonstrate EVM capabilities and show how to use the Codec Engine APIs (Linux driver APIs are more common, and one can find many examples out there). That said, I do agree they can be simplified a bit, and this is probably a good place to start; we will be happy to answer questions regarding the use of the APIs, but unfortunately we do not have any simpler examples to provide. All the additional functionality, such as the RTP protocol, will need to be added by you.
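    For orientation, the general shape of the Codec Engine setup in the encode path is roughly as follows. This is a sketch only, assuming the xDM 1.x VIDENC1 interface; the engine and codec names ("encode", "mpeg4enc") are examples taken from the demo configuration and may differ in your build:

        /* Codec Engine encode setup, assuming the xDM 1.x VIDENC1
         * interface. Engine/codec names depend on your .cfg file. */
        #include <ti/sdo/ce/CERuntime.h>
        #include <ti/sdo/ce/Engine.h>
        #include <ti/sdo/ce/video1/videnc1.h>

        int setupEncoder(Engine_Handle *phEngine, VIDENC1_Handle *phEnc)
        {
            CERuntime_init();                               /* once per process */

            *phEngine = Engine_open("encode", NULL, NULL);  /* name from .cfg   */
            if (*phEngine == NULL)
                return -1;

            /* NULL params -> codec defaults; pass a VIDENC1_Params to tune */
            *phEnc = VIDENC1_create(*phEngine, "mpeg4enc", NULL);
            if (*phEnc == NULL) {
                Engine_close(*phEngine);
                return -1;
            }
            return 0;   /* frames then go through VIDENC1_process() */
        }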