Hi, while deploying a network on tda4vm-sdk10.1 we have hit a memory/output offset issue.
Phenomenon: The model reports correct pad and channelPitch information, but when the output is saved with the writeTIDLOutput image-saving function in app_tidl_module.c (vision_apps), the saved image is offset; please refer to the two images below. Previous patches that resolved similar issues on entry-sdk10.0 do not work on vm-sdk10.1 or with this specific model.
The patch and the resulting output are shown below. Could you assist with debugging this together?
Error Phenomenon, Figure 1:


[Image not included]

Error Phenomenon, Figure 2:
[Image not included]
Previous Patch for Similar Memory Offset Issue in entry-sdk10.0, Figure 3:
[Image not included]

Network model: pModel.bin and pModel_io_1.bin are the models for the VM variant; the SDK version is RTOS 10.01.00.04.
Network input: input_50_0.rgb, a binary input file.
Network output: viewable with 7yuv in 8bpp mode, width 120 and height 160 (padded width 121). Upon visualization, the following errors are observed:
tidl_bev_output_no_pad: the tensor with the invalid (pad) area removed; the valid area is still offset and shows black vertical bars.
tidl_bev_output_with_pad: the full tensor memory preserved, including the black-and-white-dot invalid area; the valid area is likewise offset and shows black vertical bars.
We also ran this model with TI_DEVICE_armv8_test_dl_algo_host_rt.out, and its output showed no memory offset. We therefore suspect that after the tidl_rt updates, the vision_apps test cases or app_tidl_module were not updated to match. Please provide a patch, thank you!
All input and output materials are in model_in_out.zip.