Other Parts Discussed in Thread: AM69A
Hi all,
We are trying to configure batch processing on a single-core platform (TDA4VM) using edgeai-tidl-tools. It seems that only parallel batch processing can be enabled, via advanced_options:inference_mode and advanced_options:num_cores, but those options apply to multi-core platforms. How should we proceed to compile a network in batch mode on TDA4VM? The edgeai-tidl-tools version we use is 09_01_08_00.
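For context, this is roughly how we pass those options during compilation. The two advanced_options keys are the ones mentioned above; the paths and values are illustrative placeholders, not our exact setup:

```python
# Sketch of the TIDL compilation options we experimented with.
# "advanced_options:inference_mode" and "advanced_options:num_cores" are the
# batch-related knobs we found, but they appear to target multi-core devices
# (e.g. AM69A), not the single-core TDA4VM.
compile_options = {
    "tidl_tools_path": "/opt/tidl_tools",      # placeholder path
    "artifacts_folder": "./model-artifacts",   # placeholder path
    "advanced_options:inference_mode": 1,      # parallel batch mode, per our understanding
    "advanced_options:num_cores": 4,           # only meaningful on multi-core parts
}

# On TDA4VM (single C7x core) it is unclear which of these, if any,
# enables batch compilation; that is the question above.
print(compile_options)
```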
NOTE:
We also tried the older approach (the standalone import tool from PSDK version 09.01.00.06 in a Linux environment), setting the numBatches parameter to the desired batch size. With that, the network compiled successfully, but when executed on the target we get correct results only for the first batch; the remaining batches are garbage. The results are correct on PC. After checking the results layer by layer, the DataConvert layer appears to be the problem: it converts the input tensor correctly only for the first batch, while the remaining batches are all zeros. We also checked with PSDK version 10.00.00.05 and observed the same behavior. Can you confirm that the DataConvert layer works correctly in batch mode on target, or point us to what we might be doing wrong?
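For reference, the relevant part of the legacy import config we used looked roughly like this (file names and values are illustrative placeholders; numBatches is the parameter in question):

```
# excerpt from the tidl import config file (illustrative values)
modelType       = 2            # ONNX model
numBatches      = 4            # desired batch size; compiles, but only batch 0 is correct on target
inputNetFile    = "model.onnx"
outputNetFile   = "tidl_net.bin"
```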