
AM62A3: Model deployment using TIDL on AM62A

Part Number: AM62A3

Dear TIDL Expert,

We are deploying our model using SDK 8.6. After completing the import on the PC side, we run TI_DEVICE_armv8_test_dl_algo_host_rt.out on the AM62A to perform a single-frame image inference test. With a model quantized to 8 bits, the test completes successfully. With a model quantized to 16 bits, however, the program stalls in the state shown below without throwing an error or making progress. Does SDK 8.6 support 16-bit models? Note that on the PC side we can run inference successfully with both the 8-bit and 16-bit models.
Looking forward to a helpful response.
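For context, the quantization bit depth is chosen at import time on the PC. A minimal sketch of the relevant import-config lines, assuming the TIDL-RT import flow (file paths are placeholders, and the exact parameter set depends on your model format):

```
# TIDL import config (sketch) -- switching between 8-bit and 16-bit quantization
modelType        = 2              # model-format code (e.g. ONNX); adjust for your model
inputNetFile     = "model.onnx"   # placeholder path
outputNetFile    = "tidl_net.bin" # placeholder path
numParamBits     = 16             # 8 for the working model, 16 for the failing one
numFeatureBits   = 16             # activation bit depth, usually matched to numParamBits
```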

Processing config file #0 : testvecs/config/infer/tidl_infer_LTM.txt 
APP: Init ... !!!
MEM: Init ... !!!
MEM: Initialized DMA HEAP (fd=6) !!!
MEM: Init ... Done !!!
IPC: Init ... !!!
IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
  6040.560306 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
  6040.560447 s:  VX_ZONE_INIT:Enabled
  6040.560502 s:  VX_ZONE_ERROR:Enabled
  6040.560538 s:  VX_ZONE_WARNING:Enabled
  6040.561289 s:  VX_ZONE_INIT:[tivxInitLocal:130] Initialization Done !!!
  6040.561488 s:  VX_ZONE_INIT:[tivxHostInitLocal:93] Initialization Done for HOST !!!

  • Hello Tian Liang,

    My first recommendation is to try this with the more recent 9.1 SDK. There have been many bug/stability fixes since the 8.6 release, which was the first software release for this device. This may immediately resolve your issue, so please try it first.

    The 8.6 SDK did have 16-bit support, but not for all layers. There are some layers, such as Resize/Deconv, that I believe have 16-bit issues. I am also assuming that only inference is problematic and that compilation is successful.

    I cannot learn much from the log you have shared. Please increase debugTraceLevel to 2 or 3, then share the resulting logs for both the 8-bit and 16-bit models.
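    For example, in the infer config you are already using (debugTraceLevel is a TIDL-RT infer-config parameter; the other entries shown are placeholders for your own files):

    ```
    # testvecs/config/infer/tidl_infer_LTM.txt (sketch)
    netBinFile      = "tidl_net.bin"   # placeholder -- compiled network
    ioConfigFile    = "tidl_io_1.bin"  # placeholder -- I/O descriptor
    inData          = "input.bin"      # placeholder -- single-frame test input
    debugTraceLevel = 3                # 0 = quiet; 2-3 = verbose per-layer logging
    ```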

    Perhaps the inference call has hung on a particular layer and is not completing. You can see layer outputs written to files (under /tmp on the SoC) by setting writeTraceLevel to 1 or 2. Please try PC host-emulation mode first, and let me know if there is a mismatch between host-emulation and target behavior.
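    As a sketch, the trace setting goes in the same infer config (writeTraceLevel is a TIDL-RT infer-config parameter):

    ```
    writeTraceLevel = 1   # dump each layer's output tensor to a trace file
    ```

    The last trace file written before the hang then indicates the layer at which 16-bit execution stalls, and the per-layer files can be compared against the host-emulation run to locate any mismatch.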

    Best,
    Reese