Hi,
There are guidelines in this link for writing specific modules in torch. These guidelines are meant to ease conversion from torchvision to ONNX to TIDL.
One of the notes in the guidelines states to use layers from xnn like
Hi Niranjan,
What is the type of the Sub and Mul operators in your model? Are they element-wise or channel-wise mul/sub?
The latest SDK and edgeai_tidl_tools release supports the Mul operator for element-wise operation and for multiplication with a channel-wise broadcastable constant.
We are yet to add support for the Sub operation, which will be similar to the current Mul; we are working on it.
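For illustration, the two supported Mul patterns could be sketched in PyTorch like this (a minimal example with arbitrary shapes, not taken from the thread):

```python
import torch

# Element-wise Mul: both inputs share the same 4D (N, C, H, W) shape.
a = torch.randn(1, 8, 4, 4)
b = torch.randn(1, 8, 4, 4)
elementwise = a * b

# Channel-wise broadcastable constant: a per-channel (1, C, 1, 1) tensor
# broadcast over the spatial H and W dimensions.
scale = torch.randn(1, 8, 1, 1)
channelwise = a * scale
```

Both products keep the (1, 8, 4, 4) shape of the full tensor.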
Regards,
Anand
Hi Anand,
I see from the documentation here that Add and Mul are supported from ONNX to TIDL. The rough ONNX model graph is here:

The output log for this model conversion is as follows
Hi Niranjan,
As you can see from the ALLOWLISTING check above, the layer is marked as unsupported because the input to the layer is expected to have 4 dimensions, while it has only 2 in your model. If you make your input 4-dimensional, I think that should solve the issue.
Regards,
Anand
Hi Anand,
Currently I am trying to perform the following operation:
tensor(shape: (1,1)) * tensor(shape: (1,2048))
Since the above tensors are not 4-dimensional, I have used a reshape:
tensor(shape: (1,1)) * tensor(shape: (1,1,1,2048))
However, I get the following error:
Hi Niranjan,
Is it not possible to just set the input to be a 4D tensor and export it to ONNX? That would be an easier fix, right?
e.g. something like this in pytorch:
Hi Anand,
I am able to export the model to ONNX. The problem comes when converting from ONNX to the TIDL format.
Are you suggesting to reshape the tensors from (1,2048) to (1,2048,1,1)? I believe two-dimensional tensors are supported, since most networks have fully connected layers, or am I misunderstanding something?
Thank You
Niranjan
Hi Niranjan,
This issue is specific to element-wise layers, e.g. the Add and Mul layers above; they expect 4-dimensional inputs. This does not apply to fully connected layers.
Will it be possible for you to share this particular network so I can run on my side? That can help in resolving the issue faster.
Regards,
Anand
Hi Anand,
I am sharing the small part of the network where the issue normally occurs. I think you can use the code below to create the ONNX model that fails to convert to the TIDL format. The major problem, as stated before, seems to be the Mul operator.
import torch
import torch.nn as nn
import onnx
import os

class recurse_network(nn.Module):
    def __init__(self):
        super(recurse_network, self).__init__()
        self.fc1 = nn.Linear(512, 1)
        self.sigmoid = nn.Sigmoid()
        self.flatten1 = torch.nn.Flatten(1)
        self.flatten2 = torch.nn.Flatten(1)

    def forward(self, input_val, prev_state, cumsum):
        diff = torch.subtract(input_val, prev_state)
        diff = self.flatten1(diff)
        diff = self.fc1(diff)
        imp = self.sigmoid(diff)
        input_mod = torch.multiply(imp, input_val)
        prev_state_mod = torch.multiply(cumsum, prev_state)
        next_state = torch.add(input_mod, prev_state_mod)
        next_state = torch.reshape(next_state, (1, 512))
        next_state = self.flatten2(next_state)
        next_cum_sum = torch.add(imp, cumsum)
        return next_state, next_cum_sum

net = recurse_network()
input_val = torch.randn(1, 512)
prev_state = torch.randn(1, 512)
cumsum = torch.randn(1, 1)
net.eval()
out1 = net(input_val, prev_state, cumsum)
print(out1)

def write_onnx_model(model, save_path, name, input):
    filepath = os.path.join(save_path, name)
    model.eval()
    print("onnx saved to file path ", filepath)
    opset_version = 11
    torch.onnx.export(model, input, f=filepath, export_params=True, verbose=False,
                      do_constant_folding=True, opset_version=opset_version)
    # infer shapes
    onnx.shape_inference.infer_shapes_path(filepath, filepath)
    # export torchscript model
    traced_model = torch.jit.trace(model, input)
    torch.jit.save(traced_model, os.path.splitext(filepath)[0] + '_model.pth')
    print('torchscript export done.')

write_onnx_model(net, '.', 'ti_model.onnx', (input_val, prev_state, cumsum))
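As a sketch of the 4D fix suggested above (an assumption, not a confirmed resolution: the Linear layer is swapped for an equivalent 1x1 Conv2d, and the (1,1,1,1)-broadcast multiplications may still rely on the channel-wise-constant path), the same recurrent step with every tensor kept (N, C, H, W) could look like:

```python
import torch
import torch.nn as nn

class RecurseNetwork4D(nn.Module):
    """Hypothetical 4D variant of recurse_network: all element-wise
    Mul/Add layers see 4-dimensional (N, C, H, W) tensors."""
    def __init__(self):
        super().__init__()
        # 1x1 conv over 512 channels, equivalent to Linear(512, 1) on (1, 512)
        self.fc1 = nn.Conv2d(512, 1, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, input_val, prev_state, cumsum):
        diff = input_val - prev_state                # (1, 512, 1, 1)
        imp = self.sigmoid(self.fc1(diff))           # (1, 1, 1, 1)
        input_mod = imp * input_val                  # broadcast over channels
        prev_state_mod = cumsum * prev_state         # (1, 512, 1, 1)
        next_state = input_mod + prev_state_mod      # (1, 512, 1, 1)
        next_cum_sum = imp + cumsum                  # (1, 1, 1, 1)
        return next_state, next_cum_sum

net4d = RecurseNetwork4D().eval()
input_val = torch.randn(1, 512, 1, 1)
prev_state = torch.randn(1, 512, 1, 1)
cumsum = torch.randn(1, 1, 1, 1)
next_state, next_cum_sum = net4d(input_val, prev_state, cumsum)
```

The state tensors are simply the original (1, 512) and (1, 1) shapes carried as (1, 512, 1, 1) and (1, 1, 1, 1).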
For reference, the source of TIDL_onnxMapMulBaseParams in tidl_onnxImport.cpp from the tidl_j7_08_00_00_10 directory:
int32_t TIDL_onnxMapMulBaseParams(GraphProto& onnGraph, int32_t i, sTIDL_LayerPC_t &layer)
{
  if(onnGraph.node(i).input_size() != 2)
  {
    printf("Multiplication operator is supported for elementwise operation with only 2 inputs \n");
    return -1;
  }
  if(gParams.modelType == TIDL_IMPORT_MODEL_FORMAT_ONNX_RT)
  {
    std::vector<std::vector<int32_t>> inputShapes;
    std::vector<int32_t> nodeInputDims;
    for(int j = 0; j < onnGraph.node(i).input_size(); j++)
    {
      nodeInputDims = getNodeInputShape(onnGraph, onnGraph.node(i).input(j), 0);
      inputShapes.push_back(nodeInputDims);
    }
    int n1 = inputShapes[0][0];
    int c1 = inputShapes[0][1];
    int h1 = inputShapes[0][2];
    int w1 = inputShapes[0][3];
    int n2 = inputShapes[1][0];
    int c2 = inputShapes[1][1];
    int h2 = inputShapes[1][2];
    int w2 = inputShapes[1][3];
    if((n1 == n2) && (c1 == c2) && (h1 == h2) && (w1 == w2))
    {
      layer.layerType = TIDL_EltWiseLayer;
      layer.layerParams.eltWiseParams.eltWiseType = TIDL_EltWiseProduct;
      layer.numInBufs = onnGraph.node(i).input_size();
    }
    else
    {
      printf("Only elementwise multiplication operator supported \n");
      return -1;
    }
  }
  else
  {
    layer.layerType = TIDL_EltWiseLayer;
    layer.layerParams.eltWiseParams.eltWiseType = TIDL_EltWiseProduct;
    layer.numInBufs = onnGraph.node(i).input_size();
  }
  return 0;
}