This thread has been locked.


TDA3XEVM: Using VLIB_haarDetectObjectsSparse(…) function...

Part Number: TDA3XEVM

Dear sirs, 

I have tried to use the VLIB_haarDetectObjectsSparse(…) function. First, I used the integral image and the classifier tree with the pedestrian classifier from the example folder. I got the same number of outObjects, which contain the outObjLoc array, as in the example file. By subtracting outObjLoc from inObjLoc I obtained a point list and drew rectangles on the restored image (reconstructed from your integral image). You can see the result in the image below.

Then I took an OpenCV-trained classifier (a face classifier), parsed it, tried to build a classifier tree (as in the example file), and used it with the VLIB_haarDetectObjectsSparse(…) function. It did not work.

Please explain a few things to me:

1)    How to use VLIB_haarDetectObjectsSparse(…) function?

2)    Can I use OpenCV trained classifier with this function?

3)    What did I do wrong trying to use example?

4) Is it right to obtain the Y & X positions as the upper 16 bits for Y and the lower 16 bits for X of outObjects->objectPos, as the documentation says?

  • Hi,

1) VLIB_haarDetectObjectsSparse needs to be used in conjunction with the VLIB_haarDetectObjectsDense API. First the Dense API needs to be called to get a coarse estimate of locations, and then the Sparse API needs to be called multiple times to refine the final locations.

2) The OpenCV classifier format is different from the format expected by VLIB_haarDetectObjectsSparse & VLIB_haarDetectObjectsDense. The user is expected to do this format conversion.

3) The classifier data provided in the example folder is just for example purposes, to show the compute efficiency, not the quality of detection. Hence you are required to write a script to convert your classifier data into the required format.

4) Yes, the higher 16 bits are Y, and the lower 16 bits are X.

    Regards

    Deepak Poddar

  • Thanks for your answer.

    I will try to use VLIB_haarDetectObjectsDense also.

But I am concerned about the large number of output objects in the example files. I cannot understand why there are so many. Should it be like that?

    Regards,

    Marat

  • Hi,

The dense detection stage will have many probable candidates for the object, and these eventually get refined by later stages of the classifier. The example provided there is not an end-to-end application.

    Regards

    Deepak Poddar

Since there is no further query from you, I am closing this thread for now. If you face any other problem after following the suggestions provided in my previous replies, then please post it here.

    --Deepak

1) I get different results with the same image when running the VLIB_haarDetectObjectsDense function. Why is this occurring?

2) In the object list struct, each X and Y coordinate is packed with the upper 16 bits as the 'Y' coordinate and the lower 16 bits as the 'X' coordinate. How are X and Y packed in the VLIB_HAARDETOBJ_sRect struct?

    Regards


  • Hi,

As per the documentation:

    /**
    *******************************************************************************
    * @struct VLIB_HAARDETOBJ_sRect
    * @brief Defines each feature-rectangle's property from base location. Base location
    * is the top left location of probable object patch. E.g if the base location
    * is (x,y) then feature rectangle will be placed at (x + offsetTL, y + offsetTR).
    * Hence as base location changes, feature-rectangle property remains same.
    *
    * @param offsetTL Offset of top left corner of rectangular feature from base location
    * @param offsetTR Offset of top right corner of rectangular feature from base location
    * @param offsetBL Offset of bottom left corner of rectangular feature from base location
    * @param offsetBR Offset of bottom right corner of rectangular feature from base location
    *******************************************************************************
    */

typedef struct {
        int16_t offsetTL; /*!< Offset of top left corner of rectangular feature from base location */
        int16_t offsetTR; /*!< Offset of top right corner of rectangular feature from base location */
        int16_t offsetBL; /*!< Offset of bottom left corner of rectangular feature from base location */
        int16_t offsetBR; /*!< Offset of bottom right corner of rectangular feature from base location */
    } VLIB_HAARDETOBJ_sRect;

offsetTL, offsetTR, offsetBL, offsetBR are the offsets of the rectangle in the integral image. If the rectangle is at location (m,n) with width w and height h, in an image of width W, then

offsetTL = n*W + m, offsetTR = offsetTL + w, offsetBL = (n+h)*W + m, offsetBR = offsetBL + w

Can you provide a cut-down version of the test case (in the format of the VLIB test bench) where you see different output for different runs?

    regards

    Deepak

In general, the well-known standard flow for dense object search is:

treeScore = 0;

    for all trees {
        pTree = get current tree;
        index = 0;

        for all nodes in current tree {
            score_rect = 0;

            for all rectangles in current node {
                weight = get weight for given rectangle of given node;

                rectOffset = &pTree->rect[nodeId][rectId].offsetTL;

                /* Offsets for the current rectangle */
                a = *rectOffset++;
                b = *rectOffset++;
                c = *rectOffset++;
                d = *rectOffset++;

                topLeftVal  = integral_image[a];
                topRightVal = integral_image[b];
                botLeftVal  = integral_image[c];
                botRightVal = integral_image[d];

                /* rectangle scores are accumulated into the node score */
                score_rect += weight * ((topLeftVal + botRightVal) - (topRightVal + botLeftVal));
            }

            if (score_rect > node_threshold) {
                index |= (0x1 << nodeId);
            }
        } // for all nodes

        treeScore += pTree->result[index]; // precomputed score of the tree, indexed by the binary decisions of the node-score comparisons
    } // for all trees

The accumulated tree score is then compared with a threshold to get the probability of the object.

    regards

    Deepak 

  • Hi,

Thank you for the explanation. It makes clear how features are represented in the VLIB classifier tree.

    Regards,

    Marat Yanglichev