Part Number: AM62A3
Dear TIDL Expert,
We are deploying our model with SDK 8.6. After completing the model import on the PC, we run TI_DEVICE_armv8_test_dl_algo_host_rt.out on the AM62A to perform a single-frame inference test. With a model quantized to 8 bits, the test completes successfully. With a model quantized to 16 bits, however, the program stalls in the state shown below: it neither throws an error nor proceeds. Does SDK 8.6 support 16-bit models? Note that on the PC side, inference succeeds with both the 8-bit and 16-bit models.
Looking forward to a helpful response.
Processing config file #0 : testvecs/config/infer/tidl_infer_LTM.txt
APP: Init ... !!!
MEM: Init ... !!!
MEM: Initialized DMA HEAP (fd=6) !!!
MEM: Init ... Done !!!
IPC: Init ... !!!
IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
 6040.560306 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
 6040.560447 s: VX_ZONE_INIT:Enabled
 6040.560502 s: VX_ZONE_ERROR:Enabled
 6040.560538 s: VX_ZONE_WARNING:Enabled
 6040.561289 s: VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
 6040.561488 s: VX_ZONE_INIT:[tivxHostInitLocal:93] Initialization Done for HOST !!!
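For reference, the quantization bit depth was selected during the PC-side import roughly as sketched below (a minimal fragment, not our full config; numParamBits is the standard TIDL import-tool parameter for parameter quantization, and the file paths are placeholders):

```
# TIDL import config fragment (sketch) — only the quantization-related line differs
# between our two runs; all other entries (model paths, calibration images, etc.)
# are unchanged.
modelType       = 2            # e.g. ONNX; value depends on the source framework
numParamBits    = 16           # set to 8 for the working model, 16 for the stalling one
inputNetFile    = "model.onnx" # placeholder path
outputNetFile   = "tidl_net.bin"
outputParamsFile = "tidl_io_"
```

If there is an additional setting required on the target side for 16-bit networks, or a known limitation in SDK 8.6, please let us know.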