Hi TI,
We used our own model to get the inference result, but we don't know how to parse it into the result we want. I referenced this demo (vision_apps/apps/dl_demos/app_tidl_od_cam).
For the following parsing code, how do we map it to our own model's data?
How is the output tensor data arranged in memory, and what does the arrangement depend on?
Looking forward to your reply, thanks.
Best regards!
Hi
The structures highlighted in the above image are available in the relative path "arm-tidl/rt/inc/itidl_ti.h" as part of the TIDL folder in the SDK, and I believe the fields of these structures should be self-explanatory. You can use them to parse the OD outputs. Hope this resolves the query.
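For illustration, a minimal sketch of walking such an OD output is shown below. The struct layouts and field names here are placeholders, not the real TIDL definitions; please substitute the actual structures (e.g. TIDL_ODLayerHeaderInfo / TIDL_ODLayerObjInfo) from itidl_ti.h.

/* Sketch only: parse an OD output tensor that starts with a small header
 * followed by fixed-size per-object records. Replace these placeholder
 * structs with the real definitions from arm-tidl/rt/inc/itidl_ti.h. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t numDetObjects;       /* number of detected objects            */
    uint32_t objInfoOffset;       /* byte offset from pOut to first record */
    uint32_t objInfoSize;         /* size of one object record, in bytes   */
} OdHeaderSketch;                 /* hypothetical layout                   */

typedef struct {
    float label;                  /* class id                              */
    float score;                  /* confidence                            */
    float xmin, ymin, xmax, ymax; /* box corners                           */
} OdObjectSketch;                 /* hypothetical layout                   */

static void parse_od_output(const uint8_t *pOut)
{
    const OdHeaderSketch *hdr = (const OdHeaderSketch *)pOut;
    const uint8_t *objBase    = pOut + hdr->objInfoOffset;

    for (uint32_t i = 0; i < hdr->numDetObjects; i++)
    {
        const OdObjectSketch *o =
            (const OdObjectSketch *)(objBase + i * hdr->objInfoSize);
        printf("obj %u: class=%d score=%.2f box=(%.2f, %.2f, %.2f, %.2f)\n",
               i, (int)o->label, o->score, o->xmin, o->ymin, o->xmax, o->ymax);
    }
}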
Regards,
Anand
If I design a different model myself, not OD or OC, such as keypoint detection, how can I parse the results I want? Thanks for your reply!
Best regards!
In that case, output_buffer is the actual pointer to the output data (might include padding), and the data can be accessed using the offset pointer pOut (which points to the actual data values) by typecasting to your output data type.
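As a minimal sketch (the pad/pitch values below are placeholders; take the real values from your model's output tensor descriptor, and replace float with your actual output element type):

/* Sketch only: output_buffer is the raw allocation, which may include
 * padding; pOut is the offset pointer to the first valid output value. */
#include <stdint.h>

static float *get_output_ptr(uint8_t *output_buffer,
                             uint32_t padTop,              /* placeholder */
                             uint32_t padLeft,             /* placeholder */
                             uint32_t linePitchInElements) /* placeholder */
{
    float *base = (float *)output_buffer;  /* assuming a float output tensor */
    float *pOut = base + (padTop * linePitchInElements) + padLeft;
    return pOut;                           /* points at the first data value */
}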
Regards,
Anand
and the data can be accessed using the offset pointer pOut (which points to the actual data values) by typecasting to your output data type
Hi Anand, now I'm confused by this output data type. What is this data structure related to, how can I design it, and is it related to my input model? Please let me know if there is any information about it.
Best regards!
Hi, I have responded to your query in the other thread; we can continue the discussion there.
Regards,
Anand
Hi
Please check the *.svg file generated in the compilation artifacts, hover over the last layer, and check the elementType (say, data_type). That gives the data type of the dumped output.
You can then typecast -- data_type *pOut = (data_type *)data_ptr;
And access the actual output using pOut.
The structure that you see in the demo is specific to OD networks. For other types of networks, there is no header information at the beginning of pOut, and the actual output data is available starting from the first element.
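For a keypoint model, for example, the access could look like the sketch below. The float element type and the (numKeypoints x 3) layout are assumptions for illustration; confirm both from your own model's .svg / ONNX definition.

/* Sketch only: non-OD outputs carry no header, so the data starts at the
 * first element. Here we assume elementType float and a layout of
 * (x, y, confidence) per keypoint -- adjust to your own network. */
#include <stdio.h>

static void print_keypoints(const void *data_ptr, int numKeypoints)
{
    const float *pOut = (const float *)data_ptr;  /* data_type from the .svg */

    for (int k = 0; k < numKeypoints; k++)
    {
        float x    = pOut[k * 3 + 0];
        float y    = pOut[k * 3 + 1];
        float conf = pOut[k * 3 + 2];
        printf("kp %d: x=%.2f y=%.2f conf=%.2f\n", k, x, y, conf);
    }
}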
Regards,
Anand
Hi experts,
Thank you for your reply. I don't really understand the content, and I can't get started.
Is there any related document or user guide that I can refer to?
Regards,
Hi
I would recommend using https://github.com/TexasInstruments/edgeai-tidl-tools to do the model import and inference. In that case, you would get the output from the Python script as defined by the ONNX model definition, and it can be easily interpreted.
Regards,
Anand