
TDA4VM: TIDL 10.01.00.04 runs model with memory output offset issue

Part Number: TDA4VM


Tool/software:

Hi, while deploying a network on the TDA4VM SDK 10.1 we found a memory output offset issue.

Phenomenon: The model reports correct pad and channelPitch information, but when the output image is saved with the writeTIDLOutput function in app_tidl_module.c of vision_apps, the viewed image is offset. Please refer to the two images below. Patches that resolved similar issues on entry-sdk10.0 do not work on vm-sdk10.1 with this specific model.

Patch and result display: shown below. Could you assist with joint debugging?

Error phenomenon, Figure 1:

[Image not included]

Error phenomenon, Figure 2:

[Image not included]

Previous patch for a similar memory offset issue on entry-sdk10.0, Figure 3:

[Image not included]

Network model: pModel.bin and pModel_io_1.bin are the models for the TDA4VM, built against RTOS SDK version 10.01.00.04.

Network input: input_50_0.rgb, a raw binary file.

Network output: viewable with 7yuv in 8bpp mode, width 120 and height 160 (padded width 121). On visualization, the following errors are observed:

  • tidl_bev_output_no_pad: the tensor with the invalid (pad) areas removed; there is still an offset inside the valid area, with black vertical bars.
  • tidl_bev_output_with_pad: the full tensor memory preserved, including the black-and-white dotted invalid areas; the valid area also shows an offset with black vertical bars.
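For illustration, here is a minimal sketch (names and signature are hypothetical, not the actual writeTIDLOutput code) of how a padded output channel is normally flattened before saving: each valid row of `width` bytes is copied out of a buffer whose rows are `linePitch` bytes apart, skipping the left/top pad. If the pitch or pad values used here do not match what TIDL actually produced, exactly the kind of diagonal offset with vertical bars described above appears.

```c
#include <stdint.h>
#include <string.h>

/* Copy one 8-bit channel of a padded tensor into a tightly packed
 * buffer. width/height describe the valid area; linePitch is the
 * padded row stride in bytes; padL/padT are the left/top pads. */
static void copy_channel_no_pad(const uint8_t *src, uint8_t *dst,
                                int width, int height,
                                int linePitch, int padL, int padT)
{
    for (int row = 0; row < height; row++) {
        const uint8_t *srcRow = src + (size_t)(row + padT) * linePitch + padL;
        memcpy(dst + (size_t)row * width, srcRow, (size_t)width);
    }
}
```

For the tensor described above (width 120, padded width 121), calling this with `linePitch = 121` would strip the one-pixel pad; using `linePitch = 120` by mistake produces a cumulative one-pixel-per-row shift.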

We also ran this model with TI_DEVICE_armv8_test_dl_algo_host_rt.out, and its output shows no memory offset. We suspect that somewhere in the tidl_rt updates, the vision_apps test cases or app_tidl_module were not updated to match. Please provide a patch, thank you!

All input and output materials are in model_in_out.zip.

  • Hi, I found a problem with pModel_io_1.zip: using this model we get correct output from TI_DEVICE_armv8_test_dl_algo_host_rt.out, but we still have the problem in vision_apps when saving the output in writeTIDLOutput (app_tidl_module.c). Why? What did we miss?

    Please help!

  • Hi

    Please set writeTraceLevel to 3 or 4 in your infer config file and in your vision_apps based app to save the intermediate results of running TI_DEVICE_armv8_test_dl_algo_host_rt.out and of your app. Then you can compare the saved results of the output layer to see if they are the same.
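    As a sketch, the infer config line would look something like the following (the file paths are placeholders for your own setup, not values from this thread):

    ```
    netBinFile      = "pModel.bin"
    ioConfigFile    = "pModel_io_1.bin"
    inData          = "input_50_0.rgb"
    writeTraceLevel = 3
    ```

    With tracing enabled on both sides, a byte-wise diff of the final layer's trace files quickly shows whether the inference result itself differs or only the save path does.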

    If they are the same, the problem may just be incorrect padding removal.

    Regards,

    Adam