TDA4VEN-Q1: TIDL Output mismatch::SDK 10

Part Number: TDA4VEN-Q1

Hello sir,

I'm working on an application on the J722S platform, using SDK "ti-processor-sdk-rtos-j722s-evm-10_00_00_05". I compared the application's TIDL output dump with the output of standalone target inference for the same input image.

The two dumps do not match. I used int16 to dump the output in both cases. Is this behaviour expected in SDK 10?
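For reference, a minimal sketch of the sample-by-sample comparison I ran on the two dumps (the helper name and file-layout assumption, raw int16 samples with no header, are mine):

```c
#include <stdio.h>
#include <stdint.h>

/* Compare two raw int16 dump files sample by sample.
 * Returns the number of mismatching samples, or -1 on
 * open failure or size mismatch. */
long compare_int16_dumps(const char *app_path, const char *ref_path)
{
    FILE *fa = fopen(app_path, "rb");
    FILE *fb = fopen(ref_path, "rb");
    long mismatches = 0;
    int16_t a, b;

    if (!fa || !fb) {
        if (fa) fclose(fa);
        if (fb) fclose(fb);
        return -1;
    }

    for (;;) {
        size_t ra = fread(&a, sizeof a, 1, fa);
        size_t rb = fread(&b, sizeof b, 1, fb);
        if (ra != rb) { mismatches = -1; break; } /* files differ in length */
        if (ra == 0) break;                       /* both reached EOF */
        if (a != b) mismatches++;
    }

    fclose(fa);
    fclose(fb);
    return mismatches;
}
```

A nonzero return confirms the mismatch is in the dumped samples themselves rather than in how the files are viewed.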

Thanks,

Seetharama Raju.

  • Hi Raju, how do you perform the TIDL dump? Can you provide the script you used?

    Thanks and regards

    Wen Li

  • Hi sir,

    I used the TI-provided API "writeTIDLOutput" in app_run_graph of the application.

    The definition of the function in app_tidl_module.c ($vision_apps/modules/src) was modified slightly to dump int16 data.

    Code changes:

    for (k = 0; k < tensor_sizes[2]; k++) {
        /* Skip the top and left padding to reach the valid output region
           of channel k; tensor_sizes[0] is the padded line pitch. */
        int16_t *pOut = (int16_t *)data_ptr +
                        (tensor_sizes[0] * tensor_sizes[1] * k) +
                        (ioBufDesc->outPadT[tensor_id] * tensor_sizes[0]) +
                        ioBufDesc->outPadL[tensor_id];
        for (i = 0; i < ioBufDesc->outHeight[tensor_id]; i++) {
            fwrite(pOut, sizeof(int16_t), ioBufDesc->outWidth[tensor_id], fp);
            pOut += tensor_sizes[0]; /* advance by the padded line pitch */
        }
    }

    Thanks,

    Raju.

  • Hi sir,

    Any update regarding this? The issue still persists.

    Thanks,

    Raju

  • Hi sir,

    I would like to add a few points here.

    Our application pipeline is: file-based input -> custom pre-proc -> TIDL -> post-proc. Please find my debug results below.

    1) The pre-proc output is valid.

    2) I passed an RGB input directly to the TIDL node to confirm whether the TIDL node is the bottleneck. The TIDL output was found to be wrong.
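For the direct-RGB test, a small sketch of how a deterministic raw input file could be generated so that both the application path and standalone inference consume byte-identical data (the function name, planar 3xHxW uint8 layout, and channel order are my assumptions; adjust them to the model's import config):

```c
#include <stdio.h>
#include <stdint.h>

/* Write a deterministic planar RGB (3 x height x width, uint8) raw file.
 * A fixed-seed LCG generates the same byte pattern on every run, so the
 * same file can be fed to both inference paths. */
int write_rgb_plane_file(const char *path, int width, int height)
{
    FILE *fp = fopen(path, "wb");
    uint32_t state = 12345u; /* fixed seed -> reproducible pattern */

    if (!fp) return -1;

    for (long i = 0; i < 3L * width * height; i++) {
        state = state * 1664525u + 1013904223u; /* LCG step */
        uint8_t px = (uint8_t)(state >> 24);
        fwrite(&px, 1, 1, fp);
    }
    fclose(fp);
    return 0;
}
```

With identical input bytes on both paths, any remaining output difference points at the TIDL node configuration rather than the data feed.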

    Thanks,

    Raju

  • Hi sir,

    Any update regarding this? I request you to respond as soon as possible.

    Thanks,

    Raju