This thread has been locked.


Linux/TDA2PXEVM: Training for the object detection use case

Part Number: TDA2PXEVM

Tool/software: Linux

Hi,

I'd like to improve object detection in your use case.

https://e2e.ti.com/support/processors/f/791/t/737380?tisearch=e2e-sitesearch

In the comment above, you recommend using "Acf-jacinto".

But in this thread:

https://e2e.ti.com/support/processors/f/791/t/767946

you recommend using "caffe-jacinto".

What is the difference between "Acf-jacinto" and "caffe-jacinto"?

If I'd like to improve OD for your "vipSingleCameraAnalytics2" use case, which model should I use?

Best regards,

Heechang

  • Hi Heechang,

    "Acf-jacinto" is used by the Object Detection algorithm; please check PROCESSOR_SDK_VISION_03_06_00_00\ti_components\algorithms\REL.200.V.OD.C66X.00.06.02.00\200.V.OD.C66X.00.06\modules\ti_object_detection\docs\ObjectDetection_DSP_UserGuide.pdf

    "caffe-jacinto" is used by the TI Deep Learning Library (TIDL); see PROCESSOR_SDK_VISION_03_06_00_00\ti_components\algorithms\REL.TIDL.01.01.03.00\modules\ti_dl\docs\TIDeepLearningLibrary_UserGuide.pdf

    The vipSingleCameraAnalytics2 use case uses the ObjectDetection link, so you need to use Acf-jacinto.

    Regards,
    Yordan
  • Hi Yordan,

    1. If I use Acf-jacinto, do I need MATLAB?
    2. If I train with Acf-jacinto, how do I apply the result to the ObjectDetection link?

    Best regards,
    Heechang
  • Hi Yordan,

    I have a question.
    In your analytics2 use case, was the OD module trained on the INRIA dataset?

    Best regards,
    Heechang
  • Hi Yordan,

    I downloaded "github.com/.../acf-jacinto".
    When I run it, I get the error below.

    acfJacintoExample
    Extracting the dataset may take a long time. Do you wish to continue? Enter 1/0 (default: 0): 1
    Extracting images from: videos/set00/V000.seq done
    Extracting annotations from: annotations/set00/V000.vbb done
    Extracting images from: videos/set00/V001.seq done
    Extracting annotations from: annotations/set00/V001.vbb done
    Extracting images from: videos/set01/V000.seq done
    Extracting annotations from: annotations/set01/V000.vbb done
    Undefined function or variable 'gradientFastMex'.

    Error in gradientMagFast (line 52)
    [M, Gx, Gy] = gradientFastMex('gradientMagFast',I,clipGrad,accurate);

    I just modified the data path.

    Do you know what the problem is?

    Best regards,
    Heechang
  • The MEX file is already provided for 64-bit Windows. The file is acf-jacinto/channels/private/gradientFastMex.mexw64
    So on this platform, you may not face this issue.

    I guess you are using a different platform. If there is a chance of using MATLAB on 64-bit Windows, please try that. However, if you really want to use it on a different platform, you need to create the MEX file for that platform from gradientFastMex.cpp. MATLAB provides documentation on how to create a MEX file from a .cpp file.
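
    On 64-bit Linux, building the MEX file might look like this (a sketch only; it assumes gradientFastMex.cpp sits next to the prebuilt .mexw64 in channels/private, and that no extra compiler flags are needed):

```matlab
% Hypothetical sketch for building the missing MEX binary on Linux.
% Assumes a C++ compiler has already been configured via: mex -setup C++
cd('channels/private');  % folder holding gradientFastMex.cpp (assumption)
mex gradientFastMex.cpp  % produces gradientFastMex.mexa64 on 64-bit Linux
cd('../..');
```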
  • Hi Manu,

    Okay, I am using MATLAB on Linux.

    I have a question.
    In your analytics2 use case, was the OD module trained on the INRIA dataset?

    Best regards,
    Heechang
  • Hi Manu,

    I installed an evaluation version of MATLAB.

    When I run acfJacintoExample.m, the log shows that "gswin64c.exe" is needed.
    So I downloaded "gswin64c.exe" from www.ghostscript.com/.../Install.htm

    Then, I run again.

    But the process stops:

    Extracting the dataset may take a long time. Do you wish to continue? Enter 1/0 (default: 0): 1
    Extracting images from: videos/set00/V000.seq done
    Extracting annotations from: annotations/set00/V000.vbb done
    Extracting images from: videos/set00/V001.seq done
    Extracting annotations from: annotations/set00/V001.vbb done
    Extracting images from: videos/set01/V000.seq done
    Extracting annotations from: annotations/set01/V000.vbb done
    progress time is 0.327122 s
    [0x7FFF5B2B5560] ANOMALY: meaningless REX prefix used
    [0x7FFF5B37C0F0] ANOMALY: meaningless REX prefix used
    >>

    What is the problem?

    Best regards,
    Heechang
  • It worked after I changed gswin64.exe to gswin64c.exe.

    In acfJacinto,
    [Usage]
    Open MATLAB and navigate to the detector folder.
    Open acfJacintoExample.m in the editor.
    Make changes for your dataset path, list of videos and annotation files, object type to be trained, etc.
    Run the file to train and test.

    If I run acfJacintoExample.m, does that perform the training?
    Where is the training result saved?

    In the example code, only three inputs are used (set00/V000, set00/V001 and set01/V000).
    If I use more inputs, will the training result be better?

    Best regards,
    Heechang
  • The training will save two descriptor files - the code is here:

    https://github.com/tidsp/acf-jacinto/blob/master/detector/acfJacintoTrainTest.m#L86

    The one that ends with DetectorCascade.descriptor is what you need. 

    The training result is likely to improve with more training data. However, the correctness of the data is as important as the quantity of the data.

  • Hi Manu,

    Yes, I found the DetectorCascade.descriptor file in "\acf-jacinto-master\detector\models".
    I'd like to apply it to the analytics2 use case.

    I found a tool that converts *.descriptor to *.bin: algorithms/REL.200.V.OD.C66X.00.06.02.00/200.V.OD.C66X.00.06/modules/ti_object_detection/utils/AdaboostTableGen.exe.

    How do I use it?
    Is the converted file the weight file?

    Best regards,
    Heechang
  • Hi Heechang,

    I am glad to see that you have been able to progress until this point.

    I am not the author of this converter. Can you check the documentation that comes with Vision SDK for the usage of AdaboostTableGen.exe? If you still can't find the required information, I can forward this question to the correct author.

    Best regards,
  • Hi Manu,

    Thank you very much for your help.
    I have a question.

    I am using the Caltech dataset ("www.vision.caltech.edu/.../").
    There are many sets.
    Can I use all of them (set00~set10) for better accuracy?

    Best regards,
    Heechang
  • It depends on your test scenario.

    If your test scenario (camera parameters, nature of the images, etc.) is quite different from that of the Caltech training dataset, adding more such data will only hurt your test accuracy. In all this, let experimentation and analysis guide you.

    Best regards,

  • Hi Manu,

    I ran the training.
    But figure 1 shows the vehicles, and figure 2 shows "log-average miss rate = 97.56%".
    Is this miss rate correct?

    Best regards,
    Heechang
  • That is a very high miss rate. Please try to train on the INRIA Person dataset without any of your changes and make sure that you get a reasonable miss rate.
  • Hi Manu,

    I converted the binary file using AdaboostTableGen.exe.

    I'm reading the user guide of ObjectDetection_DSP.
    Is it OK to just replace pd_adaboost_weights.bin with the new one?

    Please forward this to the correct author.

    Thanks and BR,
    Heechang
  • Hi Manu,

    I cannot download the INRIA dataset ("pascal.inrialpes.fr/.../").
    Can you access this download site?

    Best regards,
    Heechang
  • Multiple questions are getting interleaved in this thread, and it's difficult for me to forward this to the Vision SDK experts. For the Vision SDK related question (how to use the converted descriptor in Vision SDK), can you open another thread?

    Regarding INRIA dataset, following is the link for download:
    www.vision.caltech.edu/.../

    You can find many other datasets to download in this page:
    www.vision.caltech.edu/.../
  • The access problem is due to our company's security policy.

    I will do training again with INRIA dataset.

    Best regards,
    Heechang
  • Okay, thanks.

    I will create a new thread.

    Best regards,
    Heechang
  • Hi Manu,

    Sorry, I have a question.

    In acfJacintoExample.m, vidList and vbbList appear to be split between training and test, like below.

    vidList={ ...
    %train
    {'other/inria_person/V000.seq', ...
    'other/inria_person/V001.seq', ...
    'ti/lindau/V106_2015sept_100_VIRB_VIRB0031_10m_10m.MP4' ... %V106
    'ti/munich/V007_2015jul_VIRB0008_0m_7m.MP4' ... %V007
    'ti/lindau/V110_2015sept_103_VIRB_VIRB0001.MP4' ... %V110
    'ti/lindau/V111_2015sept_104_VIRB_VIRB0001.MP4' }, ... %V111
    %test
    {'ti/lindau/V105_2015sept_100_VIRB_VIRB0031_0m_10m.MP4' } %V105
    };
    vbbList={ ...
    %train
    {'other/inria_person/V000.vbb', ...
    'other/inria_person/V001.vbb', ...
    'ti/lindau/V106_2015sept_100_VIRB_VIRB0031_10m_10m.vbb' ... %V106
    'ti/munich/V007_2015jul_VIRB0008_0m_7m.vbb' ... %V007
    'ti/lindau/V110_2015sept_103_VIRB_VIRB0001.vbb' ... %V110
    'ti/lindau/V111_2015sept_104_VIRB_VIRB0001.vbb' }, ... %V111
    %test
    {'ti/lindau/V105_2015sept_100_VIRB_VIRB0031_0m_10m.vbb' } %V105
    };

    Should I follow this format?
    Is only one seq and one vbb needed for testing?

    For example,
    I'd like to train on set00~set10.
    How should I modify this code?

    Is this fine?
    vidList={ ...
    %train
    {'videos/set00/V000.seq', ~~~~~
    'videos/set01/V000.seq', ~~~~~
    ~~~~
    'videos/set09/V000.seq', ~~~~~
    }, ...
    %test
    {'videos/set10/V000.seq'}
    };

    vbbList={ ...
    %train
    {'annotations/set00/V000.vbb', ~~~~~
    'annotations/set01/V000.vbb', ~~~~~
    ~~~~
    'annotations/set09/V000.vbb', ~~~~~
    }, ...
    %test
    {'annotations/set10/V000.vbb'}
    };

    Best regards,
    Heechang
  • It looks fine, but Caltech training might take too much time. Before attempting Caltech dataset training, I suggest you train on the INRIAPerson dataset and make sure that you get a reasonably low miss rate.
  • Hi Manu,

    I downloaded the INRIA dataset.
    But the INRIA dataset consists of neg and pos png files and annotations.
    How can I use it with acfJacintoExample.m?

    Best regards,
    Heechang
  • I am surprised that you are asking this after looking at acfJacintoExample.m for so long. INRIA is already set up here:

    https://github.com/tidsp/acf-jacinto/blob/master/detector/acfJacintoExample.m#L15

    You just have to provide the correct paths within that if condition.

  • Hi Manu,

    Yes, I checked that code.
    But the INRIA dataset consists of separate png files, not *.seq, *.mp4 or *.vbb files.
    The code looks like it is for Caltech-USA, since the Caltech-USA dataset consists of *.vbb and *.seq files.

    So is it OK to put the pos and neg INRIA png files into vidList?

    Best regards,
    Heechang
  • I loaded the png files and the txt (annotation) files.
    But an error is shown when the txt files are loaded.

    Can you check this?

    Best regards,
    Heechang
  • www.vision.caltech.edu/.../
    The dataset at the above site consists of vbb and seq files.

    But this site provides png and txt files:
    pascal.inrialpes.fr/.../

    www.vision.caltech.edu/.../
    Is this the right one?

    BR,
    Heechang
  • Yes, use the converted version of the INRIA dataset provided by Caltech.
  • Yes, I ran with the converted images.
    But this error is shown:

    Error : bbGt>loadAll (line 558)
    Assertion result : fail

    Error : bbGt (line 88)
    [varargout{:}] = feval(action,varargin{:});

    Do you know why this error is shown?
    Only figure 1 is shown.
    The miss-rate figure is not shown.

    BR,
    Heechang
  • Make sure you have replaced all the following paths with your correct paths:

    dataName='Inria';%'TIRoadDrive';
    if strcmp(dataName, 'Inria'),
    exptName='AcfJacintoInria';
    extractType='all';
    extractFormat='';
    dataDir='D:\files\work\code\vision\ti\bitbucket\algoref\vision-dataset\annotatedVbb\data-INRIA';
    vidList={ ...
    {'videos/set00/V000.seq', 'videos/set00/V001.seq'}, ...
    {'videos/set01/V000.seq'} ...
    };
    vbbList={ ...
    {'annotations/set00/V000.vbb', 'annotations/set00/V001.vbb'}, ...
    {'annotations/set01/V000.vbb'} ...
    };
  • Yes, my paths are correct, so figure 1 is shown.

    But figure 2 (miss rate) is not shown because of the error.
    Do you know why this error occurs?
    I only modified the paths.

    Best regards,
    Heechang
  • I have trained on 64-bit Windows and it works. If you are using another platform, then I am not sure whether it will work. If you can pinpoint the details of exactly what the error is, that might give a clue.

  • Hi Manu,

    Could you let me know what training data you used in the Analytics2 use case?
    I trained a model with INRIA and ran the use case.
    But the results look the same before and after.

    Best regards,
    Heechang
  • Hi Heechang,

    We have used an internal dataset.

    Regarding your INRIA training:
    Maybe the descriptor that you trained and provided is not taking effect. Since you have another thread on this topic, hopefully you will get an answer there about how to correctly pass it to Vision SDK.
  • Hi Manu,

    I found something strange.
    The trained binary file is always the same even though I use different input datasets.
    Is this correct?

    Best regards,
    Heechang
  • What exactly do you mean by "training file of binary"?
    Is the descriptor saved from acf-jacinto different for different datasets?
  • Yes.
    The output file (*.descriptor) of the acf-jacinto model is the same even though I use different input datasets.

    BR,
    Heechang
  • <acfJacintoExample.m>

    %% dataset
    dataName='Inria';%'TIRoadDrive';
    if strcmp(dataName, 'Inria'),
    exptName='AcfJacintoInria';
    extractType='all';
    extractFormat='';
    dataDir='D:\DNN\INRIA';
    vidList={ ...
    {'videos/set00/V000.seq', 'videos/set00/V001.seq', 'videos/set01/V000.seq', 'videos/set01/V001.seq'}, ...
    {'videos/set00/V000.seq'} ...
    };
    vbbList={ ...
    {'annotations/set00/V000.vbb', 'annotations/set00/V001.vbb','annotations/set01/V000.vbb', 'annotations/set01/V001.vbb'}, ...
    {'annotations/set00/V000.vbb'} ...
    };
    config = [];

    <command window>
    >> acfJacintoExample
    Extracting the dataset may take a long time. Do you wish to continue? Enter 1/0 (default: 0): 1
    Extracting images from: videos/set00/V000.seq done
    Extracting annotations from: annotations/set00/V000.vbb done
    Extracting images from: videos/set00/V001.seq done
    Extracting annotations from: annotations/set00/V001.vbb done
    Extracting images from: videos/set01/V000.seq done
    Extracting annotations from: annotations/set01/V000.vbb done
    Extracting images from: videos/set01/V001.seq done
    Extracting annotations from: annotations/set01/V001.vbb done
    process time 1.026879 s.
    >>

    I just modified the dataset path.

    BR,
    Heechang
  • Step through the code in the debugger and make sure extraction of the train and test folders happens for the new dataset.
    github.com/.../acfJacintoExample.m

    If that step is skipped, the old images will be used for training.
  • There are several things that you could try to find out the reason:

    Remove the extracted train and test folders and remove the descriptor files. Then debug through the code to confirm that extraction is correctly taking place. Then do the training. Make sure the extracted images contain the new dataset.

    Can you list what you have tried so far? Please take time, experiment, and list at least 10 different things that you tried.
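
    In MATLAB, the cleanup step above might look like this (a sketch only; the folder and file locations are assumptions based on the extraction log earlier in this thread):

```matlab
% Hypothetical cleanup sketch; folder and file names are assumptions.
% Remove previously extracted images so the new dataset is re-extracted,
% and remove saved models so training cannot silently reuse an old one.
dataDir = 'D:\DNN\INRIA';                    % your dataset root
subDirs = {'train', 'test'};
for k = 1:numel(subDirs)
    d = fullfile(dataDir, subDirs{k});
    if exist(d, 'dir'), rmdir(d, 's'); end   % 's' deletes recursively
end
delete(fullfile('detector', 'models', '*')); % cached detectors/descriptors
```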
  • Hi Manu,

    I downloaded the acf-jacinto model and the Caltech-USA dataset again.
    The newly downloaded image set is loaded.
    I trained on set00~set04 (58 videos and annotations in total).

    Then the cascade descriptor file is generated.
    I compared the new descriptor file with the old descriptor file.

    The files are identical.
    This is very strange.

    Should I modify "config" in lines 28, 97, 98 and 99 of acfJacintoExample.m?
    And is "extractType = all" correct, rather than "extractType = annotated"?

    Best regards,
    Heechang
  • It's probably not doing the training, but just loading a pre-trained model and then converting it to a descriptor. Please delete all the files in the detector/models directory and then train again.
  • Hi Manu,

    I solved this problem.
    I had always been training without changing 'dataName', only changing 'vidList' and 'vbbList'.
    After I created a new dataName, it worked well.
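
    For reference, the working change can be sketched as a new branch in acfJacintoExample.m (a sketch only; the dataName, exptName and paths below are illustrative, not the real ones):

```matlab
% Hypothetical sketch of the fix: a fresh dataName/exptName so that none of
% the cached state from the 'Inria' experiment is reused.
% All names and paths below are illustrative.
dataName = 'MyCaltech';
if strcmp(dataName, 'MyCaltech'),
    exptName = 'AcfJacintoMyCaltech'; % new experiment name -> new output files
    extractType = 'all';
    extractFormat = '';
    dataDir = 'D:\DNN\CaltechUSA';
    vidList = { ...
        {'videos/set00/V000.seq'}, ... % train
        {'videos/set10/V000.seq'} ...  % test
    };
    vbbList = { ...
        {'annotations/set00/V000.vbb'}, ...
        {'annotations/set10/V000.vbb'} ...
    };
    config = [];
end
```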

    Thank you very much.

    Best regards,
    Heechang
  • That's why I was suggesting that you step through the code, debug, and understand what's going on - to find out if there are any silly mistakes. Glad that the problem is solved.