TDA4VMXEVM: Unable to import onnx model using tidlModelImport

Part Number: TDA4VMXEVM

Hi,

I am trying to use the import utility to generate the .bin file for TDA4X.

SDK details: psdk_rtos_auto_j7_06_00_01_00

TIDL details: tidl_j7_00_09_01_00

I converted a PyTorch model (.pth) to ONNX (.onnx) format in order to use it. However, the import utility fails with a segmentation fault. Please find the logs below:

*******************************************************************************************************************************************************

~/psdk_rtos_auto_j7_06_00_01_00/tidl_j7_00_09_01_00/ti_dl/utils/tidlModelImport$ ./out/tidl_model_import.out ../../test/testvecs/config/import/public/onnx/tidl_import_espnet.txt 

TF Model (Proto) File : ../../test/testvecs/models/public/onnx/espnet/espnet_p_2_q_3.onnx
TIDL Network File : ../../test/testvecs/config/tidl_models/onnx/espnet/espnet_p_2_q_3.bin
TIDL IO Info File : ../../test/testvecs/config/tidl_models/onnx/espnet/espnet_p_2_q_3_
Segmentation fault (core dumped)

******************************************************************************************************************************************************

Kindly let me know how to proceed in this case.

Thanks in advance,

Prajakta

  • Hi Prajakta,

      Can you step into the import tool code and see where the segmentation fault is coming from?

    Regards,

    Anshu

  • Hi Prajakta,

       Do you have any deconvolution layers in this network?

    Regards,

    Anshu

  • Hi Anshu,

    Yes, I do have deconvolution layers in the network. Can that be handled? What changes will have to be made?

    Also, my observation is that the ONNX model files downloaded directly from the links given in the documentation (e.g. MobileNet, VGG16) can be imported successfully. However, if I try to convert a TensorFlow model into ONNX and use the generated file, it fails.

    Is there any specific script used for generating ONNX model files? Are there any constraints?

    -Prajakta

  • Hi Prajakta,

       The new TIDL release is now available. Can you try your network with it and see if you are still facing any issues?

    Regards,

    Anshu

  • Hi Anshu,

    I tried using TIDL 01_00 but I am facing issues. Please find the logs below.

    TF operator Sub is not suported now.. By passing
    TF operator ExpandDims is not suported now.. By passing
    TF operator Transpose is not suported now.. By passing
    TF operator ExpandDims is not suported now.. By passing
    TF operator ExpandDims is not suported now.. By passing
    [libprotobuf FATAL /home/kpit/psdk_rtos_auto_j7_06_00_01_00//protobuf-3.5.1.1/src/google/protobuf/repeated_field.h:1522] CHECK failed: (index) < (current_size_):
    terminate called after throwing an instance of 'google::protobuf::FatalException'
    what(): CHECK failed: (index) < (current_size_):

    -Prajakta

  • Hi Prajakta,

       Is it a new model or one of the models which you have shared with us? From the logs it looks like this network has a lot of unsupported layers. This looks similar to the network (inception_v1_tf1-1_opt). Can you check why these unsupported layers are appearing in your network?

    Regards,

    Anshu

  • Hi Anshu,

    The network that I am trying to import is not Inception. It is a different network for semantic segmentation.

    The model was trained using PyTorch. I converted it to .onnx and .pb. However, I am facing the issues in both cases.

    Is there anything wrong with the conversion of the model file?

    Also, I have a couple more questions after exploring the import tool and perf sim a bit more.

    Is there any specific way to convert a PyTorch model to TFLite if I want to use TFLite as per our discussion?

    Can I run perf sim using a .pb file or .onnx file directly (without running the import tool)? When I tried doing this, I faced an error which says the tidl_net.bin file could not be opened.

    -Prajakta

  • Hi Prajakta,

    We recommend TFLite only for TensorFlow-trained models.

    ONNX format is recommended for models trained in PyTorch.

    After exporting the ONNX model from PyTorch, visualize the model using Netron (https://github.com/lutzroeder/netron) and make sure the operators in the model are supported by TIDL by referring to the following section in the user guide:

     tidl_j7_01_00_00_00/ti_dl/docs/user_guide_html/md_tidl_layers_info.html

    I will share an example PyTorch script to export an ONNX model here soon.

  • Hi Kumar,

    Thank you for sharing this information.

    I tried using the ONNX model. However, it is creating some problems because of the batch norm and conv layers. Please find the logs below for reference:

    WARNING: Batch Norm Layer 's coeff cannot be found(or not match) in coef file, Random bias will be generated! Only for evaluation usage! Results are all random!
    WARNING: Conv Layer 's coeff cannot be found(or not match) in coef file, Random coeff will be generated! Only for evaluation usage! Results are all random!
    WARNING: Batch Norm Layer 's coeff cannot be found(or not match) in coef file, Random bias will be generated! Only for evaluation usage! Results are all random!
    WARNING: Conv Layer 's coeff cannot be found(or not match) in coef file, Random coeff will be generated! Only for evaluation usage! Results are all random!
    WARNING: Conv Layer 's coeff cannot be found(or not match) in coef file, Random coeff will be generated! Only for evaluation usage! Results are all random!
    Segmentation fault (core dumped)

    If I eliminate the batch norm layers and specify the initial nodes only, the import tool works.

    Kindly let me know what I should do to handle this.

    -Prajakta

  • You can use the example below to export an ONNX model from PyTorch:

    import os
    import torch
    import torchvision
    import datetime
    
    # some parameters
    date = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    dataset_name = 'image_folder_classification'
    model_name = 'resnet18'
    img_resize = (256,256)
    rand_crop = (224,224)
    
    # the saving path
    save_path = './'
    save_path = os.path.join(save_path, dataset_name, date + '_' + dataset_name + '_' + model_name)
    save_path += '_resize{}x{}_traincrop{}x{}'.format(img_resize[1], img_resize[0], rand_crop[1], rand_crop[0])
    os.makedirs(save_path, exist_ok=True)
    
    # create the model
    model = torchvision.models.resnet18(pretrained=True)
    
    # create a rand input
    rand_input = torch.rand(1, 3, rand_crop[0], rand_crop[1])
    
    # write the onnx model
    torch.onnx.export(model, rand_input, os.path.join(save_path, 'model.onnx'), export_params=True, verbose=False)

    FYI

    Below are the PyTorch versions that I have used:

    pytorch 1.3.0 py3.7_cuda101_cudnn7_0 pytorch
    torchvision 0.4.1 py37_cu101 pytorch

    If you find any issues in the inference, update the below function in tidl_import_common.cpp and rebuild the import tool (not mandatory):

    int32_t tidl_mergeFlattenLayer(sTIDL_OrgNetwork_t  &pOrgTIDLNetStructure, int32_t layerIndex)
    {
      int32_t i1, i2, i3, i4;
      int32_t status = 0;
      int32_t merged;
      for (i1 = 0; i1 < layerIndex; i1++)
      {
        if (pOrgTIDLNetStructure.TIDLPCLayers[i1].layerType == TIDL_FlattenLayer)
        {
          merged = 1;
          for (i2 = 0; i2 < 3; i2++)
          {
            if ((pOrgTIDLNetStructure.TIDLPCLayers[i1].inData[0].dimValues[i2] != 1) ||
              (pOrgTIDLNetStructure.TIDLPCLayers[i1].outData[0].dimValues[i2] != 1))
            {
              merged = 0;
              break;
            }
          }
          int32_t  inIdx = tidl_getInLayer(pOrgTIDLNetStructure, layerIndex, pOrgTIDLNetStructure.TIDLPCLayers[i1].inData[0].dataId);
          if (inIdx != -1)
          {
            sTIDL_LayerPC_t &TIDLPCLayersIn = pOrgTIDLNetStructure.TIDLPCLayers[inIdx];
    
            if ((TIDLPCLayersIn.layerType == TIDL_PoolingLayer) &&
                (TIDLPCLayersIn.layerParams.poolParams.poolingType == TIDL_AveragePooling) &&
                (TIDLPCLayersIn.outConsumerCnt[0] == 1) &&
                (TIDLPCLayersIn.layerParams.poolParams.kernelW == 0) &&
                (TIDLPCLayersIn.layerParams.poolParams.kernelH == 0))
            {
                merged = 1;
            }
          }
    
          if (merged == 1)
          {
            int32_t  idx = tidl_getInLayer(pOrgTIDLNetStructure, layerIndex, pOrgTIDLNetStructure.TIDLPCLayers[i1].inData[0].dataId);
            if (idx == -1)
            {
              return -1;
            }
            sTIDL_LayerPC_t &TIDLPCLayers = pOrgTIDLNetStructure.TIDLPCLayers[idx];
            TIDLPCLayers.numMacs += pOrgTIDLNetStructure.TIDLPCLayers[i1].numMacs;
            TIDLPCLayers.outData[0] = pOrgTIDLNetStructure.TIDLPCLayers[i1].outData[0];
            strcpy((char *)TIDLPCLayers.outDataNames[0], (char *)pOrgTIDLNetStructure.TIDLPCLayers[i1].outDataNames[0]);
            TIDLPCLayers.outConsumerCnt[0] = pOrgTIDLNetStructure.TIDLPCLayers[i1].outConsumerCnt[0];
            pOrgTIDLNetStructure.TIDLPCLayers[i1].numInBufs = -1;
            pOrgTIDLNetStructure.TIDLPCLayers[i1].numOutBufs = -1;
          }
        }
      }
    
      return 0;
    }
    

  • Hi Kumar,

    Thanks for the script. The script that I used is more or less the same, and it generates the same ONNX model file.

    I am facing the same issue as mentioned above.

    WARNING: Batch Norm Layer 's coeff cannot be found(or not match) in coef file, Random bias will be generated! Only for evaluation usage! Results are all random!
    WARNING: Conv Layer 's coeff cannot be found(or not match) in coef file, Random coeff will be generated! Only for evaluation usage! Results are all random!
    WARNING: Batch Norm Layer 's coeff cannot be found(or not match) in coef file, Random bias will be generated! Only for evaluation usage! Results are all random!
    WARNING: Conv Layer 's coeff cannot be found(or not match) in coef file, Random coeff will be generated! Only for evaluation usage! Results are all random!
    WARNING: Conv Layer 's coeff cannot be found(or not match) in coef file, Random coeff will be generated! Only for evaluation usage! Results are all random!
    Segmentation fault (core dumped)

    If I eliminate the batch norm layers and specify the initial nodes only, the import tool works.

    -Prajakta

  • Prajakta,

    Are you facing this issue for pre-trained models in torchvision also, or only for the model trained by you?

    From the errors/warnings, it looks like your ONNX model does not have the parameters ("Conv Layer 's coeff ") in the file. You may need to check your PyTorch code to see why this is happening.

    You can open the ONNX file in the Netron viewer to check the parameters on each operator.

  • Hi Kumar,

    The model is not listed in torchvision. It is a new network.

    Don't the warnings specify which conv or batch norm layers are creating a problem? There are multiple conv layers, and if I choose an intermediate section with around 5 conv layers and a few others, it works.

    Also, in Netron, I cannot see the coeff parameter for any of them.

    -Prajakta

  • Prajakta

    Netron should show the parameters as follows. Can you confirm that you could use the script that I have shared and import a pre-trained model from torchvision properly? This would confirm that your PyTorch environment is fine.

    Regarding "which conv or batch norm layers are creating a problem": the import tool is available as source in the package, and the user can add these traces as required.

    Also check the initial convolution layer in your PyTorch code; what is the difference with respect to the rest of the network?

  • Hi Kumar,

    I checked it and found that the batch norm and conv layers have proper values.

    However, when I tried to break the network down into a smaller piece by specifying input and output node names, I observed that it is failing at layers like 'Unsqueeze' and 'Pad'. Would that be causing the actual issue, and not the batch norm and conv layers?

    - Prajakta

  • Hi Prajakta,

    Refer "tidl_j7_01_00_00_00/ti_dl/docs/user_guide_html/md_tidl_layers_info.html" for supported layers.

    'Unsqueeze' operators are not supported now, and the parameters shall be part of the operator. I would recommend replacing PReLUs with ReLU to check whether the issue is only from PReLU.

    Regarding 'Pad', refer to the following from the user guide:

    "Padding will be taken care of during import process, and this layer will be automatically removed by import tool". It will be merged with average pooling in this case.
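    For the PReLU experiment, retraining is not needed; a minimal sketch (my own helper, assuming a standard torch.nn model) swaps the modules in place before the ONNX export:

```python
import torch.nn as nn

def replace_prelu_with_relu(model):
    """Recursively replace every nn.PReLU module with nn.ReLU.

    Only for checking importability: ReLU ignores the learned PReLU
    slopes, so the outputs will differ from the trained model.
    """
    for name, child in model.named_children():
        if isinstance(child, nn.PReLU):
            setattr(model, name, nn.ReLU())
        else:
            replace_prelu_with_relu(child)
    return model
```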

     

  • Hi Kumar,

    I did try to import a new model created by replacing PReLU with ReLU, and it is able to import the file. I don't see the unsqueeze layers present there.

    The bin files are generated and the perf sim output csv is generated too.

    However there still are a few warnings and errors. Please find the logs below.

    **********************************************************************************************************

    WARNING: Batch Norm Layer 's coeff cannot be found(or not match) in coef file, Random bias will be generated! Only for evaluation usage! Results are all random!
    WARNING: Conv Layer 's coeff cannot be found(or not match) in coef file, Random coeff will be generated! Only for evaluation usage! Results are all random!
    WARNING: Batch Norm Layer 's coeff cannot be found(or not match) in coef file, Random bias will be generated! Only for evaluation usage! Results are all random!
    WARNING: Conv Layer 's coeff cannot be found(or not match) in coef file, Random coeff will be generated! Only for evaluation usage! Results are all random!

    ~~~~~Running TIDL in PC emulation mode to collect Activations range for each layer~~~~~

    Processing config file #0 : /home/kpit/psdk_rtos_auto_j7_06_01_00_15/tidl_j7_01_00_00_00/ti_dl/utils/tidlModelImport/tempDir/qunat_stats_config.txt
    Error at line: 208 : in file src/tidl_tb.c, of function : tidl_tb_algCreate
    Error Type: TIDL_E_UNSUPPORTED_LAYER
    Error at line: 479 : in file src/tidl_tb.c, of function : tidlMultiInstanceTest
    Error Type: TIDL_E_UNSUPPORTED_LAYER

    ------------------ Network Compiler Traces -----------------------------
    Main iteration numer: 0....
    Preparing for memory allocation : internal iteration number: 0
    successful Memory allocation

    -------------------- Network Compiler : Analysis Results are available --------------------

    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 8x8 Pad 8x8 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 16x16 Pad 16x16 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 8x8 Pad 8x8 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 16x16 Pad 16x16 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 8x8 Pad 8x8 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 16x16 Pad 16x16 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 8x8 Pad 8x8 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 16x16 Pad 16x16 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 8x8 Pad 8x8 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 16x16 Pad 16x16 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 8x8 Pad 8x8 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 16x16 Pad 16x16 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 8x8 Pad 8x8 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 16x16 Pad 16x16 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 8x8 Pad 8x8 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 16x16 Pad 16x16 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 8x8 Pad 8x8 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 16x16 Pad 16x16 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 8x8 Pad 8x8 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 16x16 Pad 16x16 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 8x8 Pad 8x8 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 16x16 Pad 16x16 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 8x8 Pad 8x8 Bias 0
    WARNING: [TIDL_ConvolutionLayer] Paramater is not validated. Please be carefull with this layer!
    Kernel 3x3 Stride 1x1 dilation 16x16 Pad 16x16 Bias 0
    ERROR : [TIDL_PadLayer] should be removed in import process. if not, this model will not work!
    ERROR : [TIDL_PadLayer] should be removed in import process. if not, this model will not work!
    ERROR : [TIDL_PadLayer] should be removed in import process. if not, this model will not work!
    ****************************************************
    ** 0 WARNINGS 3 ERRORS **
    ****************************************************

    The pad layers are introduced while converting the .pth file to the .onnx file. How can this be handled now?

    -Prajakta
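    One common source of standalone Pad nodes in an export is explicit padding in the model definition (e.g. nn.ZeroPad2d or F.pad in front of a convolution). When the padding is symmetric zero padding, it can usually be folded into the convolution's own padding argument, which typically exports as a Conv attribute instead of a separate Pad operator. A hedged sketch of the equivalent rewrite (assuming this is indeed how the Pad nodes arise in this network):

```python
import torch
import torch.nn as nn

# Variant that typically exports a standalone Pad node before the Conv:
padded = nn.Sequential(nn.ZeroPad2d(1), nn.Conv2d(3, 8, 3, padding=0))

# Equivalent variant with the zero padding folded into the convolution:
folded = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1))

# Share the weights so both variants compute the same function
folded[0].weight = padded[1].weight
folded[0].bias = padded[1].bias
```

    If torch.onnx.export of the first variant shows a Pad node in Netron while the second does not, rewriting the model this way would avoid the standalone Pad layers without changing the import tool.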

  • Hi Prajakta,

       We are not sure why these pad layers are introduced during conversion; we are not observing the same behavior. To proceed further, we can remove these layers during TIDL import. Can you replace the following function in

    File : tidl_import_common.cpp

    Function : tidl_mergePadLayer

    int32_t tidl_mergePadLayer(sTIDL_OrgNetwork_t  &pOrgTIDLNetStructure, int32_t layerIndex)
    {
      int32_t i1, i2, i3, i4;
      int32_t status = 0;
      int32_t padW, padH;
      for (i1 = 0; i1 < layerIndex; i1++)
      {
        if (pOrgTIDLNetStructure.TIDLPCLayers[i1].layerType == TIDL_PadLayer)
        {
          int32_t  inIdx = tidl_getInLayer(pOrgTIDLNetStructure, layerIndex, pOrgTIDLNetStructure.TIDLPCLayers[i1].inData[0].dataId);
          if (inIdx == -1)
          {
            return -1;
          }
          int32_t  outIdx = tidl_getOutLayer(pOrgTIDLNetStructure, layerIndex, pOrgTIDLNetStructure.TIDLPCLayers[i1].outData[0].dataId);
          if (outIdx == -1)
          {
            return -1;
          }
    
          sTIDL_LayerPC_t &TIDLPCLayersIn = pOrgTIDLNetStructure.TIDLPCLayers[inIdx];
          sTIDL_LayerPC_t &TIDLPCLayersOut = pOrgTIDLNetStructure.TIDLPCLayers[outIdx];
    
          if (gParams.modelType == 1)
          {
            if (gloab_data_format == 0)
            {
              padW = pOrgTIDLNetStructure.TIDLPCLayers[i1].layerPCParams.padParams.padTensor[2 * 2 + 0];
              padH = pOrgTIDLNetStructure.TIDLPCLayers[i1].layerPCParams.padParams.padTensor[1 * 2 + 0];
            }
            else
            {
              padW = pOrgTIDLNetStructure.TIDLPCLayers[i1].layerPCParams.padParams.padTensor[3 * 2 + 0];
          padH = pOrgTIDLNetStructure.TIDLPCLayers[i1].layerPCParams.padParams.padTensor[2 * 2 + 0];
            }
          }
          else if (gParams.modelType == 2)
          {
            padW = pOrgTIDLNetStructure.TIDLPCLayers[i1].layerPCParams.padParams.padTensor[3];
            padH = pOrgTIDLNetStructure.TIDLPCLayers[i1].layerPCParams.padParams.padTensor[2];
          }
          else if (gParams.modelType == 3)
          {
            padW = pOrgTIDLNetStructure.TIDLPCLayers[i1].layerPCParams.padParams.padTensor[5];
            padH = pOrgTIDLNetStructure.TIDLPCLayers[i1].layerPCParams.padParams.padTensor[3];
          }
          else
          {
            printf("ERROR: PAD layer is NOT supported for modelType %d\n", gParams.modelType);
            return -1;
          }
    
    
          if ((TIDLPCLayersOut.layerType == TIDL_ConvolutionLayer) &&
            (pOrgTIDLNetStructure.TIDLPCLayers[i1].outConsumerCnt[0] == 1) &&
            /*(TIDLPCLayersIn.outConsumerCnt[0] == 1) &&*/
            /*(TIDLPCLayersOut.layerParams.convParams.strideW > 1) &&
            (TIDLPCLayersOut.layerParams.convParams.strideH > 1) &&*/
            ((TIDLPCLayersOut.layerParams.convParams.kernelW/2) == padW) &&
            ((TIDLPCLayersOut.layerParams.convParams.kernelH/2) == padH))
          {
            TIDLPCLayersOut.numMacs += pOrgTIDLNetStructure.TIDLPCLayers[i1].numMacs;
    
            TIDLPCLayersOut.layerParams.convParams.padW = padW;
            TIDLPCLayersOut.layerParams.convParams.padH = padH;
            if (gParams.modelType == 1 || gParams.modelType == 3)
            {
              TIDLPCLayersOut.strideOffsetMethod = TIDL_StrideOffsetTopLeft;
            }
    
              //TIDLPCLayersIn.outData[0]        = pOrgTIDLNetStructure.TIDLPCLayers[i1].outData[0];
            TIDLPCLayersOut.inData[0] = pOrgTIDLNetStructure.TIDLPCLayers[i1].inData[0];
            strcpy((char *)TIDLPCLayersOut.inDataNames[0], (char *)pOrgTIDLNetStructure.TIDLPCLayers[i1].inDataNames[0]);
            pOrgTIDLNetStructure.TIDLPCLayers[i1].numInBufs = -1;
            pOrgTIDLNetStructure.TIDLPCLayers[i1].numOutBufs = -1;
          }
          else if ((TIDLPCLayersOut.layerType == TIDL_PoolingLayer) &&
            (TIDLPCLayersOut.layerParams.poolParams.poolingType == TIDL_AveragePooling) &&
            (pOrgTIDLNetStructure.TIDLPCLayers[i1].outConsumerCnt[0] == 1))
          {
            TIDLPCLayersIn.numMacs += pOrgTIDLNetStructure.TIDLPCLayers[i1].numMacs;
    
            TIDLPCLayersOut.layerParams.poolParams.padW += padW;
            TIDLPCLayersOut.layerParams.poolParams.padH += padH;
           // if (gParams.modelType == 1 || gParams.modelType == 3)
            {
              TIDLPCLayersOut.inData[0] = pOrgTIDLNetStructure.TIDLPCLayers[i1].inData[0];
              strcpy((char *)TIDLPCLayersOut.inDataNames[0], (char *)pOrgTIDLNetStructure.TIDLPCLayers[i1].inDataNames[0]);
              pOrgTIDLNetStructure.TIDLPCLayers[i1].numInBufs = -1;
              pOrgTIDLNetStructure.TIDLPCLayers[i1].numOutBufs = -1;
            }
          }
          else
          {
            printf("ERROR: Currently PAD layer is supported only when the following layer is convolution or average pooling\n");
            return -1;
          }
        }
      }
    
      return 0;
    }

    Regards,

    Anshu

  • Hi Prajakta,

       Were you able to make progress on this?

    Regards,

    Anshu

  • Hi Anshu,

    After adding the ONNX model type in tidl_mergePadLayer, the model can finally be imported. I checked the spatial shapes as well using the graphviz utility, and the generated graph seems to be correct according to the architecture.

    Will validate the inference on the generated bin files now.

    Thank you for your inputs!

    -Prajakta