
Is it possible to realize real-time background subtraction with VLIB 2.0?



Hi folks,

 

My project follows the XDAIS (xDM) software architecture.

I am trying to implement GMM background subtraction with VLIB 2.0 on the DM6467. The ARM core runs at 337 MHz, and the C64x+ DSP runs at twice the ARM frequency (674 MHz). I have measured the execution time of the VLIB GMM background subtraction, and the result does not look good: for a test video sequence at 384x288 resolution it only achieves about 8 fps, which is far from the real-time requirement. Is it possible to realize real-time background subtraction with VLIB 2.0?

  •  

    Hello Tsai,

    The performance you quoted above doesn't seem to be the optimal performance achievable using VLIB.

    I think it is because you are not using the DMA.

    Please implement a block based algorithm using ping-pong buffering. An example of block based algorithm is illustrated in the Canny application note(comes with VLIB).

    Please read and download the LLD package for programming the DMA in DM6437. http://processors.wiki.ti.com/index.php/Programming_the_EDMA3_using_the_Low-Level_Driver_%28LLD%29

    Feel free to ask for any further clarification.

    Regards

    Senthil

     

     

  • Dear Senthil Kumar:

According to the VLIB API specification document, the performance of the GMM API "VLIB_mixtureOfGaussiansS16(...)" is quoted as 31.3 cycles/pixel.

I found a document about background subtraction at the link below.
http://focus.ti.com/dsp/docs/dspcontent.tsp?contentId=1107
The document ID is SPRAAM6, "Video Background/Foreground Detection Implementation".
It lists four background subtraction algorithms in Table 2; the GMM statistical B/F detection takes only 1.13 cycles/pixel.

I also saw the TI NDA presentation given to my company. That report mentions that the VLIB GMM background subtraction (BGS) performance is 11.86 cycles/pixel.
There is an extreme performance difference among the three reports. Why? This is very confusing to me.


Now I am evaluating the VLIB GMM performance; pseudo code for the process() method of the xDM architecture is shown below.
Note that all buffers needed by the VLIB function are allocated in on-chip memory. The measured cost is about 100 DSP cycles/pixel, which is far from any of your official figures.
Could you give me more guidance? How do I improve the performance?

    MyCodec_process(...) {
      prepare data buffers (in on-chip memory);
      start_cycle = get DSP cycle count;
      VLIB_mixtureOfGaussiansS16(...);
      end_cycle = get DSP cycle count;
      cycles_per_pixel = (end_cycle - start_cycle) / total_pixels;
    }

  •  

Besides the GMM algorithm, there are other background subtraction algorithms, such as codebook.
A codebook implementation should have better performance, so why did TI's VLIB choose GMM background subtraction?

  • Tsai,

    I apologize for the confusion.
    Let me try to clarify things.

    SPRAAM6 does not implement GMM.
    It talks about the benchmark for a single Gaussian only.
    You might have got misled because of the reference to a GMM paper.

    I am not sure which TI NDA slides you saw.
    I suspect it to be the MOS demo which is again not GMM.

    Please use the numbers given in the VLIB documentation ONLY to avoid any confusion.

Since you mention that all the buffers are in on-chip memory, you should be getting the performance quoted in the documentation. Could you please benchmark a simple function from VLIB or IMGLIB, or one of your own functions, to see whether you get the expected performance? This will help verify whether the problem is with the VLIB kernel or with the project configuration. Please let me know if you need any more clarification.

Lastly, we chose GMM over codebook for VLIB because we received more customer requests for it.

    Regards
    Senthil



  • I think the codebook algorithm is difficult to optimize.