
TDA4VM: Tensor Enqueue error

Part Number: TDA4VM


Hi team,


My application pipeline is given below.

I am loading the grey image as an array of tensors, since the TIDL node expects only tensor inputs. For the TIDL node, num_input_tensors = 1 and num_output_tensors = 2.
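
For reference, this is roughly how each input tensor is created from a grey frame (context, q, IMG_WIDTH, IMG_HEIGHT, and grey_data are simplified placeholders, not my exact code):

vx_size dims[3]    = { IMG_WIDTH, IMG_HEIGHT, 1 };  /* W x H x 1 plane */
vx_size start[3]   = { 0, 0, 0 };
vx_size strides[3] = { sizeof(vx_uint8),
                       IMG_WIDTH * sizeof(vx_uint8),
                       IMG_WIDTH * IMG_HEIGHT * sizeof(vx_uint8) };

obj->inputTensorObj.img_tensor[q] = vxCreateTensor(context, 3, dims, VX_TYPE_UINT8, 0);

/* copy the raw grey image bytes into the tensor */
status = vxCopyTensorPatch(obj->inputTensorObj.img_tensor[q], 3, start, dims,
                           strides, grey_data, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);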

I am getting "Reference enqueue not supported at graph parameter index 0" error while trying to enqueue input tensor.

Please have a look at the snippet.

vx_tensor img_tensor[APP_MAX_TENSORS];
vx_object_array  img_tensor_arr[APP_PRE_PROC_MAX_TENSORS];

graph_parameter_index = 0;
status = add_graph_parameter_by_node_index(obj->graph, obj->tidlObj.node, 6);
obj->tidlObj.graph_parameter_index = graph_parameter_index;
graph_parameters_queue_params_list[graph_parameter_index].graph_parameter_index = graph_parameter_index;
graph_parameters_queue_params_list[graph_parameter_index].refs_list_size = 1;//APP_BUFFER_Q_DEPTH;
graph_parameters_queue_params_list[graph_parameter_index].refs_list = (vx_reference*)&obj->inputTensorObj.img_tensor[0];
graph_parameter_index++;

status = tivxSetNodeParameterNumBufByIndex(obj->tidlObj.node, 4, 2);
status = tivxSetNodeParameterNumBufByIndex(obj->tidlObj.node, 7, 2);
status = tivxSetNodeParameterNumBufByIndex(obj->orbpostprocObj.node, 2, 2);
status = tivxSetNodeParameterNumBufByIndex(obj->orbpostprocObj.node, 3, 2);

// Below call is throwing the error.
status = vxGraphParameterEnqueueReadyRef(obj->graph, obj->tidlObj.graph_parameter_index, (vx_reference*)&obj->inputTensorObj.img_tensor[obj->enqueueCnt], 1);

// For reference, the TIDL node is created earlier as:
obj->tidlObj.node = tivxTIDLNode(obj->graph, obj->tidlObj.kernel, params, input_tensor, output_tensor);

I have set APP_BUFFER_Q_DEPTH = 4, since the TIDL node has 1 input and 2 output buffers.
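
For context, my enqueue/dequeue loop follows roughly this pattern (simplified; num_frames is a placeholder):

vx_uint32 frame;
for (frame = 0; frame < num_frames; frame++)
{
    /* enqueue the next pre-filled input tensor */
    status = vxGraphParameterEnqueueReadyRef(obj->graph,
                 obj->tidlObj.graph_parameter_index,
                 (vx_reference*)&obj->inputTensorObj.img_tensor[obj->enqueueCnt],
                 1);

    obj->enqueueCnt = (obj->enqueueCnt + 1) % APP_BUFFER_Q_DEPTH;

    /* once the pipeline is full, dequeue a completed buffer before reusing it */
    if (frame >= (APP_BUFFER_Q_DEPTH - 1))
    {
        vx_reference done_ref;
        vx_uint32    num_refs;
        status = vxGraphParameterDequeueDoneRef(obj->graph,
                     obj->tidlObj.graph_parameter_index,
                     &done_ref, 1, &num_refs);
    }
}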

Am I missing something?

Please let me know.

  • Hi,

    From the shared code snippet it appears that the refs list size is set to 1, while you are enqueueing up to 4 buffers!

    Please set refs_list_size to 4 (i.e. APP_BUFFER_Q_DEPTH); this should solve the issue. The framework validates each enqueued reference against the registered refs_list, so the list needs to contain every tensor you plan to enqueue.
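
    A minimal sketch of the corrected setup, assuming img_tensor[] holds APP_BUFFER_Q_DEPTH (4) tensors:

    graph_parameters_queue_params_list[graph_parameter_index].refs_list_size = APP_BUFFER_Q_DEPTH; /* must cover every buffer that can be enqueued */
    graph_parameters_queue_params_list[graph_parameter_index].refs_list      = (vx_reference*)&obj->inputTensorObj.img_tensor[0]; /* array of 4 tensors */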

    Thanks

  • Hi Pratik,

    Thanks for the reply. Now the issue is solved.

    I am now facing a problem regarding the quantized model output. My model is 8-bit quantized. I am trying to validate the model outputs between standalone inference on TDA4VM and the TDA4VM application.

    Model details: 1 UINT8 image input and 2 float tensor outputs.

    Standalone inference gives me float tensor outputs as expected, but when I query the output tensor data type in the application, I get "int8", i.e. the TIDL node in the application produces its output in int8 (presumably because of quantization). How can I then validate the outputs against each other, when one run gives float and the other gives quantized int8?

    Because of this I am unable to validate my application. How can I get the floating-point output of an int8 quantized model?
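
    My current understanding is that the float values can be recovered by dividing the int8 values by the output tensor scale from the model's IO descriptor; roughly like this (the field name outTensorScale is my guess):

    #include <stdint.h>

    /* dequantize one int8 TIDL output tensor back to float */
    static void dequantize_int8(const int8_t *q_out, float *f_out,
                                uint32_t num_elems, float scale)
    {
        uint32_t i;
        for (i = 0; i < num_elems; i++)
        {
            f_out[i] = ((float)q_out[i]) / scale;  /* float = fixed / scale */
        }
    }

    /* scale would come from the model's IO buffer descriptor, e.g.
       ioBufDesc->outTensorScale[out_id] (field name is my assumption) */

    Is this the right approach, or does the SDK provide a utility for this?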

    Please help me with the same.

    SDK: 8.02

    Regards,
    Harib

  • Hi,

    Since the highlighted issue is resolved, can you please close this thread and file a new one for the TIDL-specific observations?