PROCESSOR-SDK-J784S4: Does TIDL support raw model output?

Part Number: PROCESSOR-SDK-J784S4
Other Parts Discussed in Thread: TDA4VH-Q1

SDK: RTOS 9.1.0.6

EVM: TDA4VH-Q1

I am building an app that invokes an end-to-end image enhancement model on a TDA4VH-Q1 EVM. By end-to-end, I mean the model outputs an RGB-image-like tensor. To do that, I am following the "app_tidl_seg" demo as a template to construct my own OpenVX pipeline. I am also creating my own post-processing kernel, which de-normalizes the output tensor and converts it to a vx_image in NV12 format.
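
For reference, here is a minimal host-side sketch of what I have in mind for the post-processing step, assuming the raw output is a 1x3xHxW float32 tensor normalized to [0, 1]. The function name, dimensions, and de-normalization scale are placeholders, and in the actual app this logic would sit inside the custom target kernel rather than on the host:

```c
#include <stdint.h>
#include <stdlib.h>
#include <VX/vx.h>
#include <VX/vxu.h>   /* vxuColorConvert */

/* Sketch only: de-normalize a raw 1x3xHxW float32 output tensor (assumed
 * range [0, 1]) into an RGB image, then convert to NV12. */
static vx_status denorm_tensor_to_nv12(vx_context context,
                                       vx_tensor  out_tensor, /* raw model output */
                                       vx_image   nv12_out,   /* VX_DF_IMAGE_NV12, width x height */
                                       vx_uint32  width,
                                       vx_uint32  height)
{
    /* OpenVX tensor dims are innermost-first: {W, H, C, N} for an NCHW tensor */
    vx_size start[4]  = {0, 0, 0, 0};
    vx_size end[4]    = {width, height, 3, 1};
    vx_size stride[4] = {sizeof(vx_float32),
                         (vx_size)width * sizeof(vx_float32),
                         (vx_size)width * height * sizeof(vx_float32),
                         (vx_size)width * height * 3 * sizeof(vx_float32)};
    vx_float32 *planar = (vx_float32 *)malloc((size_t)width * height * 3 * sizeof(vx_float32));

    /* 1. Copy the raw float output of the model into host memory */
    vx_status status = vxCopyTensorPatch(out_tensor, 4, start, end, stride,
                                         planar, VX_READ_ONLY, VX_MEMORY_TYPE_HOST);
    if (status != VX_SUCCESS)
    {
        free(planar);
        return status;
    }

    /* 2. De-normalize into an interleaved 8-bit RGB image */
    vx_image rgb = vxCreateImage(context, width, height, VX_DF_IMAGE_RGB);
    vx_rectangle_t rect = {0, 0, width, height};
    vx_imagepatch_addressing_t addr;
    vx_map_id map_id;
    void *base = NULL;
    status = vxMapImagePatch(rgb, &rect, 0, &map_id, &addr, &base,
                             VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST, 0);
    for (vx_uint32 y = 0; (status == VX_SUCCESS) && (y < height); y++)
    {
        uint8_t *row = (uint8_t *)base + y * addr.stride_y;
        for (vx_uint32 x = 0; x < width; x++)
        {
            for (vx_uint32 c = 0; c < 3; c++)
            {
                /* assumed de-normalization: [0, 1] -> [0, 255], clamped */
                float v = planar[(c * height + y) * width + x] * 255.0f;
                row[x * addr.stride_x + c] =
                    (uint8_t)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
            }
        }
    }
    vxUnmapImagePatch(rgb, map_id);

    /* 3. Standard OpenVX color convert handles RGB -> NV12 */
    status = vxuColorConvert(context, rgb, nv12_out);

    vxReleaseImage(&rgb);
    free(planar);
    return status;
}
```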

So far, I have found (please correct me if this is wrong) that most of the provided models are detection models, and that the demos (and the provided kernels) take their output from the TIDL detection layer. Since my model is not a detection model, I don't think I can use the detection layer. I would like to get the raw model output directly from TIDL.

My question: I went through the header file "ti-processor-sdk-rtos-j784s4-evm-09_01_00_06/targetfs/usr/include/processor_sdk/tidl_j7/arm-tidl/rt/inc/itidl_ti.h", but I could not find the output tensor pointer in any of the TIDL structures. Could anyone give me a hint as to where it is? Thanks.

  • Hi,

    As I see it, you are using the vision_apps flow for your custom use case.

    Could you check/refer to the way the post-process node takes the output tensor from the TIDL node, and then try adding your own custom node that implements your different functionality? A minimal wiring sketch is shown after this reply.

    Also, is it possible to share a graphical representation of the vision_apps graph that you are working with now?
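
    For illustration, a rough wiring sketch following the app_tidl_seg pattern could look like the below: the same vx_tensor is passed as the TIDL node's output and as the custom post-process node's input, so the post-process kernel receives the raw model output with no detection layer involved. The kernel names, parameter indices, and tensor dimensions are placeholders, and the exact tivxTIDLNode() arguments should be checked against j7_tidl.h in your SDK:

```c
#include <VX/vx.h>
#include <TI/tivx.h>
#include <TI/j7_tidl.h>

/* Sketch only: wire the TIDL node's raw output tensor into a custom
 * post-process node. Names and parameter indices are placeholders. */
vx_status wire_tidl_to_post_proc(vx_context   context,
                                 vx_graph     graph,
                                 vx_kernel    tidl_kernel,      /* e.g. from tivxAddKernelTIDL() */
                                 vx_reference tidl_params[],    /* config/network/createParams/inArgs/outArgs */
                                 vx_tensor    input_tensor,     /* pre-processed input */
                                 vx_kernel    post_proc_kernel, /* your registered user kernel */
                                 vx_image     nv12_output)
{
    /* Raw model output, e.g. 1x3xHxW float32 -- match your model's real output dims */
    vx_size dims[4] = {1280, 720, 3, 1};   /* {W, H, C, N}, innermost dimension first */
    vx_tensor out_tensor = vxCreateTensor(context, 4, dims, VX_TYPE_FLOAT32, 0);

    /* TIDL node writes its raw output directly into out_tensor */
    vx_tensor in_tensors[1]  = { input_tensor };
    vx_tensor out_tensors[1] = { out_tensor };
    vx_node tidl_node = tivxTIDLNode(graph, tidl_kernel, tidl_params,
                                     in_tensors, out_tensors);

    /* Custom post-process node consumes the same tensor and produces NV12
     * (parameter indices depend on how the user kernel was registered) */
    vx_node pp_node = vxCreateGenericNode(graph, post_proc_kernel);
    vxSetParameterByIndex(pp_node, 0, (vx_reference)out_tensor);
    vxSetParameterByIndex(pp_node, 1, (vx_reference)nv12_output);

    (void)tidl_node;
    (void)pp_node;
    return vxGetStatus((vx_reference)graph);
}
```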