This thread has been locked.


J784S4XEVM: TDA4VM: ONNX Model import error

Part Number: J784S4XEVM

Hi TI,

I have an ONNX model (LaneATT) and used the tidlModelImport tool to convert it.

However, I ran into some errors. The log is as follows:

[libprotobuf FATAL ./google/protobuf/repeated_field.h:1537] CHECK failed: (index) < (current_size_): 
terminate called after throwing an instance of 'google::protobuf::FatalException'
  what():  CHECK failed: (index) < (current_size_): 

I am using SDK 08_06_00_10, and my import config (txt) file is as follows:

modelType          = 2
numParamBits       = 8
numFeatureBits     = 8
quantizationStyle  = 2
inputNetFile      = "../../test/testvecs/tidl_onnx_quantization/onnx_model/laneATT/LaneAtt.onnx"
outputNetFile      = "../../test/testvecs/tidl_onnx_quantization/output/LaneATT/tidl_net_laneATT.bin"
outputParamsFile   = "../../test/testvecs/tidl_onnx_quantization/output/LaneATT/tidl_io_laneATT_"
inDataNorm  = 1
inMean = 0.485 0.456 0.406
inScale = 0.229 0.224 0.225
inDataFormat = 1
inWidth  = 640
inHeight = 360
inNumChannels = 3
numFrames = 1
inData = "../../test/testvecs/tidl_onnx_quantization/detction_list_txt/detection_list.txt"
perfSimConfig   = ../../test/testvecs/config/import/device_config.cfg   
postProcType = 2
debugTraceLevel = 1     

best,

chuan

  • Hi,

    Could you please share your observations on model import with the latest 9.1 SDK?

    We have added a significant number of fixes in the latest SDK; the above issue may be resolved in it.

    Please reply with your observations from the 9.1 import tool.

  • Hi,

    I have done the import on the 9.1 SDK; the errors are shown below:

    Current ONNX OpSet Version   : 11  
    *** WARNING : Mul with constant tensor requires input dimensions of mul layer to be present as part of the network.      If present, this warning can be ignored. If not, please use open source runtimes offering to run this model or run shape inference on this model before executing import  *** 
    Could not find const or initializer of layer Reshape_56 !!!
    
    Unsupported Onnx import data type : 0 
    Could not find const or initializer of layer Reshape_74 !!!
    
    Unsupported Onnx import data type : 0 
    Running tidl_optimizeNet 
    terminate called after throwing an instance of 'std::bad_alloc'
      what():  std::bad_alloc
    

    I wonder whether I set a parameter incorrectly, or whether my LaneATT ONNX model is simply not supported?

    best,

    chuan

  • Let me check this internally and get back to you within a 2-3 day time frame.

  • Meanwhile, can you set the debugTraceLevel flag to 2 and share the complete import logs?

  • I set the debugTraceLevel flag to 2 and it produces the same errors.

    ONNX Model (Proto) File  : ../../test/testvecs/tidl_onnx_quantization/onnx_model/laneATT/LaneATT.onnx  
    TIDL Network File      : ../../test/testvecs/tidl_onnx_quantization/output/LaneATT/tidl_net_laneATT.bin  
    TIDL IO Info File      : ../../test/testvecs/tidl_onnx_quantization/output/LaneATT/tidl_io_laneATT_  
    Current ONNX OpSet Version   : 11  
    *** WARNING : Mul with constant tensor requires input dimensions of mul layer to be present as part of the network.      If present, this warning can be ignored. If not, please use open source runtimes offering to run this model or run shape inference on this model before executing import  *** 
    Could not find const or initializer of layer Reshape_56 !!!
    
    Unsupported Onnx import data type : 0 
    Could not find const or initializer of layer Reshape_74 !!!
    
    Unsupported Onnx import data type : 0 
    Running tidl_optimizeNet 
    terminate called after throwing an instance of 'std::bad_alloc'
      what():  std::bad_alloc
    

    The import txt file is as follows:

    modelType          = 2
    numParamBits       = 8
    numFeatureBits     = 8
    quantizationStyle  = 2
    inputNetFile      = "../../test/testvecs/tidl_onnx_quantization/onnx_model/laneATT/LaneATT.onnx"
    outputNetFile      = "../../test/testvecs/tidl_onnx_quantization/output/LaneATT/tidl_net_laneATT.bin"
    outputParamsFile   = "../../test/testvecs/tidl_onnx_quantization/output/LaneATT/tidl_io_laneATT_"
    inDataNorm  = 1
    inMean = 0.485 0.456 0.406
    inScale = 0.229 0.224 0.225
    inDataFormat = 1
    inWidth  = 640
    inHeight = 360
    inNumChannels = 3
    metaArchType = -1
    numFrames = 1
    inData = "../../test/testvecs/tidl_onnx_quantization/detction_list_txt/detection_list-traffic.txt"
    perfSimConfig   = ../../test/testvecs/config/import/device_config.cfg   
    inElementType   = 1
    postProcType = 0
    inDataNamesList = "modelInput"
    outDataNamesList = "reg_proposals"
    debugTraceLevel = 2  

    And here is the config txt of LaneATT:

    dataDirs = [./model/];
    engineName = LaneATT.engine;
    onnxFileName = LaneATT.engine;
    inputTensorNames = modelInput;
    outputTensorNames = [reg_proposals];
    mMean =  [0.485, 0.456, 0.406];
    mStd =  [0.229, 0.224, 0.225];
    dlaCore = -1;
    int8 = 0;
    fp16 = 0;
    mGridingNum = 100;
    INPUT_H = 360;
    INPUT_W = 640;
    N_OFFSETS = 72;
    N_STRIPS = 71;
    MAX_COL_BLOCKS = 1000;

  • Sure, let me check the logs and get back to you.

  • Can you share the model along with the import config files and the required input data, so I can try to reproduce this issue at my end?

    Please attach them as a zip file.

  • I have sent them to you via message.

    Thanks for your help!

  • Hi Chuan,

    Can you attach the zips here? (We recommend sharing the files in the thread so that, if needed, multiple experts can access them.)

    Thanks for your understanding.

  • laneATT.zip

    Hi,

    I attach the zip here.

  • Hi,

    The model you shared does not have output dimensions for each layer, as you can see in the error log here:

    Current ONNX OpSet Version : 11
    *** WARNING : Mul with constant tensor requires input dimensions of mul layer to be present as part of the network. If present, this warning can be ignored. If not, please use open source runtimes offering to run this model or run shape inference on this model before executing import ***
    Could not find const or initializer of layer Reshape_56 !!!

    The model import tool tries to read these values and fails. Can you try running shape inference on your current model file and redo the experiment?

    Please refer to the code snippet below:

    import onnx
    from onnx import shape_inference

    # Load the exported model, infer the missing intermediate tensor
    # shapes, and save the result as a new .onnx file.
    model = onnx.load("/path/to/model.onnx")
    inferred_model = shape_inference.infer_shapes(model)
    onnx.save_model(inferred_model, "/path/to/new/model.onnx")

    Please use the 9.1 SDK for model import, and let me know your observations after applying this suggested fix.

  • Hi,

    I am sorry, but I don't understand where I should apply the referenced code.

    Also, I find no .py file at the path ".../c7x-mma-tidl/ti_dl/utils/tidlModelImport" in the 9.1 SDK...

  • You can create a separate Python file containing the above source code, point it at your current model file, and generate a new model that can be used for model compilation.

    The idea is to apply shape inference to the model file so that the Network Compiler can read the dimensions.