
TDA4VM: Unexpected output of the Custom TIDL Application

Part Number: TDA4VM
Other Parts Discussed in Thread: MANDO

Hi All,

We are using psdk_rtos_auto_j7_06_02_00_21, and TI has released a patch of TIDL and MMALIB to us through CDDS.

We have used:

TIDL patch: tidl_j7_01_01_01_01

MMA library: mmalib_01_01_00_02

Our application is similar to the tidl_avp example, except that we are running a single pedestrian-detection model.

We find that we get large bounding boxes as output and no pedestrians are detected when using the patch provided for the padded model. We tried dumping the images after the scaler and pre-processing nodes; those results look fine.

The same is not observed if we use the default TIDL (tidl_j7_01_01_00_10) and MMALIB (mmalib_01_01_00_00) that came with psdk_rtos_auto_j7_06_02_00_21; pedestrians are detected when using the default packages.

One more thing we observed: pedestrians are detected on the same set of images if we run the model standalone on the target using the patches provided for TIDL and MMALIB.

Please suggest where to look to resolve this issue.

  • Anshuman,

    Can you confirm that the issue is observed with the OpenVX-based application?

    Are you also observing the issue in the standalone application?

  • Hi Desappan, 

    The issue is with the OpenVX-based DL application (similar to app_tidl_avp). On the standalone target run, I am getting proper detection of the pedestrians.

  • Hi Anshuman,

    The AVP demo is not designed to run with out-of-box OD networks. We have created a separate standalone app for running OD. I will be making a PSDKRA 6.2 add-on package release by the 1st week of May. Please let me know if you are OK to wait and try the new app.

    Regards,
    Shyam

  • Hi Shyam,

    It would be great if you could release the intermediate software through CDDS. We have a scheduled software release at the end of April, and it would help us compare and check whether we have missed anything.

  • The DL patch is now available at the link below. Please take a look at the vision_apps/apps/dl_demos/app_tidl_od application.

    Look at Post Release Patches

  • Hi Shyam,

    We have a query regarding the input to the TIDL block for 16-bit inference. We were wondering whether the input tensor coming from the pre-proc block has to be 16-bit too [unsigned short], or whether it can be 8-bit while still running 16-bit inference. In other words, do numParamBits = 16 and numFeatureBits = 16 allow an 8-bit input?

    We had a look at the apps shared with the released patches and found that the OD app had no support for 16-bit in the pre-proc node, whereas the camera app did.

    Best Regards,

    Sankalp Kallakuri

  • If the inference is 16-bit, then the pre-proc module will create a 16-bit tensor and place the 8-bit input data in a 16-bit container.

    All of this is handled in the app_pre_proc_module.c file; see the app_update_pre_proc() function. All the DL apps handle it in the same manner (a rough sketch of the idea is shown below).

    Regards,

    Shyam
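
    A minimal sketch of this widening, assuming only that each 8-bit input value is stored as-is in a 16-bit element (illustrative, not the SDK's app_update_pre_proc() code):

        /* Illustrative only: copy 8-bit input pixels into a 16-bit container
         * so a 16-bit inference network can consume them; no scaling applied. */
        #include <stdint.h>
        #include <stddef.h>

        static void widen_u8_to_u16(const uint8_t *src, uint16_t *dst, size_t num_elems)
        {
            size_t i;
            for (i = 0; i < num_elems; i++)
            {
                dst[i] = (uint16_t)src[i];
            }
        }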

  • Hi Shyam,

    I am using the vision_apps_patch. 

    Please confirm whether the following understanding is correct.

    In the app_tidl_od application, in app_pre_proc_module.c:

        if((ioBufDesc->inElementType[0] == TIDL_UnsignedChar) || (ioBufDesc->inElementType[0] == TIDL_SignedChar))
        {
            preProcObj->params.tidl_8bit_16bit_flag = 0;
        }
        else if((ioBufDesc->inElementType[0] == TIDL_UnsignedShort) || (ioBufDesc->inElementType[0] == TIDL_SignedShort))
        {
            preProcObj->params.tidl_8bit_16bit_flag = 1;
        }

    The data type of the tensor in the application is obtained from the snippet above, i.e. from ioBufDesc->inElementType[0].

    That is, if the data type given by ioBufDesc->inElementType[0] is signed/unsigned char, then the tensor allocated (by the createOutputTensors function) is 8-bit, and if ioBufDesc->inElementType[0] is signed/unsigned short, then the allocated tensor is 16-bit.

    So if the inference is 8-bit, ioBufDesc->inElementType[0] will be unsigned/signed char, and if the inference is 16-bit, ioBufDesc->inElementType[0] will be unsigned/signed short.

    The data type information is then passed to the function tiadalg_image_preprocessing_c66, and there I see that only 8-bit is supported:

      if(data_type == TIADALG_DATA_TYPE_U08){
        dst_ptr_u8 = (uint8_t*)out_img;
      }else{
        /*Currently not supported any other data format for optimized flow*/
        ret_val = ret_val | TIADALG_IN_PRM_ERR;
      }

    Based on this understanding, I ran one MobileNet flipfalse model (16,16,3), but got the following error:

    [C6x_1 ]     55.353368 s:  VX_ZONE_ERROR:[tivxKernelImgPreProcProcess:256] tiadalg failed !!!
    [C6x_1 ]     55.353631 s:  VX_ZONE_ERROR:[tivxTargetKernelExecute:372] tivxTargetKernelExecute: Kernel process function for [com.ti.img_proc.img.preprocess] returned error code: 1
    

    Please correct me if my understanding is wrong. 

    One more thing I would like to find out is how I can see the values of the structure sTIDL_IOBufDesc_t in the import tool, as I could not find the "outputParamsFile" mentioned in the document:
    • The input and output tensor formats of a network for running inference are described by sTIDL_IOBufDesc_t.
    • This information is generated by the import tool during model translation ("outputParamsFile").



    • Hi Anshuman,

      Your understanding is correct; the optimized version of tiadalg_image_preprocessing_c66 seems to support only 8-bit data. As a workaround, I recommend forcing the natural-C version of the kernel, tiadalg_image_preprocessing_cn.

      You can make this change in the file psdk_rtos_auto_j7_06_02_00_21/vision_apps/kernels/img_proc/c66/vx_image_preprocessing_target.c:

      if(((in_img_desc->width & 7) == 0) && (data_type == TIADALG_DATA_TYPE_U08))
      {
          /* optimized C66 path: currently supports 8-bit input only */
          tiadalg_image_preprocessing_c66(...);
      }
      else
      {
          /* natural-C fallback: use this path to allow 16-bit input */
          tiadalg_image_preprocessing_cn(...);
      }

      For the outputParamsFile, I'll ask someone from the TIDL library team to reply.

      Regards,
      Shyam

    • outputParamsFile is a file specified by the user in the import config file.
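
      For reference, a hedged illustration of what the relevant import config entries might look like; the path below is a placeholder, and numParamBits/numFeatureBits are shown only because they were discussed above. The import tool writes the sTIDL_IOBufDesc_t information to the file derived from the user-specified outputParamsFile.

        numParamBits     = 16
        numFeatureBits   = 16
        outputParamsFile = "out/tidl_io_ped_od_"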

    • Regarding the (16,16,3) inference: as advised, we used the natural-C version of the code in the patched release of TIADALG's pre-proc. With this change, our application that reads from a file and tests a model now works with the (16,16,3) model. We were using the default TIDL and MMALIB; the results had false positives.

      In case we replace the default TIDL and MMALIB with the patched TIDL and MMALIB, the application does not run. The utilization for C66, C71 and R5F all goes up to saturation (99%).

       

    • Can you please confirm the TIDL and MMALIB versions used, and explain the steps you follow to swap the libraries?

      Also, please provide the UART console output when you run the demo.

      Regards,
      Shyam

    • Also, make sure you re-import the model for every new version of TIDL.

      Regards,
      Shyam

    • Hi,

      Q1. Can you please confirm the TIDL and MMALIB versions used and explain the steps you follow to swap the libraries?

      ANS: The following sets were used for the experiments.

      Set 1: Default tidl_j7_01_01_00_10 and mmalib_01_01_00_00

      Set 2: tidl_j7_01_01_01_01 and mmalib_01_01_00_02 released through CDDS for Mando

      Replace the TIDL and MMALIB paths in tiovx/psdkra_tools_path.mak with the respective TIDL and MMALIB versions.

      Delete the "lib" and "out" folders from psdk_rtos_auto_j7_06_02_00_21/vision_apps.

      Delete the "out" folder from psdk_rtos_auto_j7_06_02_00_21/tiovx.

      Run make sdk_scrub and make sdk, then load the application onto the SD card and test.

      For Set 1: if we run the (16,16,3) model using the file-based tidl_od application, the application runs properly but false positives are seen.

      For Set 2: if we run the (16,16,3) model using the file-based tidl_od application, the application does not start at all; I am not able to run any application.

      The UART log in that scenario is:

                                                
      root@j7-evm:/opt/vision_apps#                                                   
      root@j7-evm:/opt/vision_apps# ./vx_app_tidl_od.out --cfg app_msi_od.cfg         
      APP: Init ... !!!                                                               
      APP_LOG: Mapping 0xac000000 ...                                                 
      APP_LOG: Mapped 0xac000000 -> 0xffff93930000 of size 262144 bytes               
      MEM: Init ION ... !!!                                                           
      MEM: Initialized ION (fd=4) !!!                                                 
      MEM: Init ION ... Done !!!                                                      
      IPC: Init ... !!!                                                               
      APP_LOG: Mapping 0xac040000 ...                                                 
      APP_LOG: Mapped 0xac040000 -> 0xffff91990000 of size 33161216 bytes             
      APP_LOG: Mapping 0x30e00000 ...                                                 
      APP_LOG: Mapped 0x30e00000 -> 0xffff93e20000 of size 3072 bytes                 
      IPC: Init ... Done !!!                                                          
      REMOTE_SERVICE: Init ... !!!                                                    
      REMOTE_SERVICE: Init ... Done !!!                                               
      APP: Init ... Done !!!                                                          
           0.009880 s:  VX_ZONE_INIT:Enabled                                          
           0.009894 s:  VX_ZONE_ERROR:Enabled                                         
           0.009899 s:  VX_ZONE_WARNING:Enabled                                       
           0.010341 s:  VX_ZONE_INIT:[tivxInit:75] Initialization Done !!!            
           0.010471 s:  VX_ZONE_INIT:[tivxHostInit:44] Initialization Done for HOST !!
      Computing checksum at 0x0000FFFF8F0B88C0, size = 308352  

      This happens only when using the Set 2 configuration with the recently released vision_apps_patch. If I use the Set 2 configuration with the default vision_apps, then either the application runs with no detections for the (16,16,3) model, or an undesirably big detection box appears.

      Let me know if you need more details on anything.

    • Hi,

      Also make sure you re-import the model for every new version of TIDL

         -----> Yes, the models have been re-imported using the new TIDL version as well.

    • Hi Anshuman,

      Can you try building the application in host-emulation mode and see what happens for Set 2?

      Regards,
      Shyam

    • Dear Sir,

      We have experimented with the Caffe-based model using app_tidl_od in PC emulation and on target, and also with the test bench in PC emulation.

      We have a few observations:

      1) The performance numbers (in terms of precision and recall) for PC emulation and target with app_tidl_od match exactly.

      2) The performance numbers for PC emulation (test bench) and target (app_tidl_od) differ by -0.03 (precision) and -0.05 (recall).

      Why is such a difference observed in the performance numbers between the test bench and the application?

      Thanks and Regards,

      Vyom Mishra

    • Dear Sir,

      Gentle Reminder!

      Thanks and Regards,

      Vyom Mishra

    • Hi Vyom,

      To validate the difference observed, can you compare the inputs provided just before calling the TIDL library between PC emulation (test bench) and target (app_tidl_od), e.g. by dumping the raw input buffers and byte-comparing them (see the sketch below)?


      Regards,

      Shyam
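
      A minimal sketch of one way to do that comparison, assuming you already have a pointer to the mapped input data and its size in bytes just before the TIDL call (this helper is illustrative and not part of the SDK): dump the buffer in both the PC-emulation test bench and the target app_tidl_od run, then byte-compare the two files, e.g. with cmp.

        /* Illustrative helper: write a raw input buffer to a file so the
         * PC-emulation and target inputs can be compared offline. */
        #include <stdio.h>
        #include <stdint.h>
        #include <stddef.h>

        static int dump_buffer(const char *path, const uint8_t *buf, size_t size)
        {
            FILE *fp = fopen(path, "wb");
            size_t written;

            if (fp == NULL)
            {
                return -1;
            }
            written = fwrite(buf, 1, size, fp);
            fclose(fp);
            return (written == size) ? 0 : -1;
        }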