
PROCESSOR-SDK-J784S4: Unexpected indices shape from TIDL TopK: expected (1,1,30,1), got (1,4,30,1)

Part Number: PROCESSOR-SDK-J784S4
Other Parts Discussed in Thread: AM69A

Hello,

I'm working with an ONNX model that includes a TopK operator (model attached). I want to run it on TIDL with these parameters:

  • Input: float32[1,1,100,1]
  • K: [30]
  • axis = 2
  • largest = 1
  • sorted = 1

The outputs should therefore be:

  • Values: float32[1,1,30,1]
  • Indices: int64[1,1,30,1]

At runtime, using 16-bit quantization, I encounter the following shape mismatch: "Expected shape from model of {1,1,30,1} does not match actual shape of {1,2,30,1} for output indices"

These are the output shapes:
Top-K values shape: (1, 1, 30, 1)
Top-K indices shape: (1, 2, 30, 1)
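For comparison, a plain NumPy reference of the same top-k (a sketch with random data, entirely independent of TIDL) produces values and indices with identical shapes:

```python
import numpy as np

# Stand-in input matching the model's input shape
x = np.random.rand(1, 1, 100, 1).astype(np.float32)
k = 30

# Indices of the k largest elements along axis 2, sorted descending
# (mirrors largest=1, sorted=1 in the ONNX TopK attributes)
idx = np.argsort(-x, axis=2)[:, :, :k, :]
vals = np.take_along_axis(x, idx, axis=2)
print(vals.shape, idx.shape)  # both (1, 1, 30, 1)
```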

I'm using SDK ti-processor-sdk-rtos-j784s4-evm-10_01_00_04

Why is there a shape difference between the values and the indices?

Can you help me resolve my problem?

topk_model.zip

  • Hi Ghassen,

    Thank you for the details.  There is a pending problem with TopK that may be related.  I will check if you are also impacted.

    Regards,

    Chris

  • Hi Ghassen,

    It appears to be an output problem.  I have attached a Jupyter notebook with a tensor shape of [1,1,10,1] so it is easier to follow (100 is a lot of numbers to go through).  I also attached the script to generate a TopK model with [1,1,10,1] tensors.  The only point to note about the notebook is that you need a working 10_01_04_00 environment set up.

    Set your 10_01_04_00 environment path in modeldir = '<your_path>'+VERSION+'/edgeai-tidl-tools/tools/AM69A/tidl_tools/' (cell 4).

    You must read the output .bin file as 32-bit values: the data is in float32, while the indices are in int32.

    index=np.fromfile("out/my_data.bin",dtype=np.uint32)
    data=np.fromfile("out/my_data.bin",dtype=np.float32)
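    Since both dtypes are 32-bit words, a single raw buffer can also be reinterpreted per output instead of reading the file twice. The layout below (values first, then indices) is only an assumption for illustration; check the actual .bin layout on your side:

```python
import numpy as np

# Hypothetical buffer: 3 float32 values followed by 3 int32 indices
# (the real .bin layout may differ; this is only an illustration)
vals = np.array([9.5, 7.25, 3.0], dtype=np.float32)
idx = np.array([4, 1, 7], dtype=np.int32)
raw = vals.tobytes() + idx.tobytes()

words = np.frombuffer(raw, dtype=np.uint32)  # all 32-bit words
values = words[:3].view(np.float32)          # reinterpret first words as float32
indices = words[3:].view(np.int32)           # reinterpret last words as int32
print(values, indices)
```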

    If you prefer not to use Jupyter notebooks, just copy the -- parameters from the import and inference cells and run them from an import/inference config file.

    Regards,

    Chris

    import torch
    import torch.nn as nn
    
    class SimpleTopKModel(nn.Module):
        def forward(self, x):
            # Returns the 3 largest elements (and their indices) along dim 2
            values, indices = torch.topk(x, k=3, dim=2, largest=True, sorted=True)
            return values, indices
    
    # Create an instance of the model
    model = SimpleTopKModel()
    
    # Create a sample input tensor
    dummy_input = torch.randn(1, 1, 10, 1) # Example input of shape (1, 1, 10, 1)
    
    # Export the model to ONNX
    torch.onnx.export(model,
                      dummy_input,
                      "topk_model.onnx",
                      input_names=['input'],
                      output_names=['values', 'indices'],
                      opset_version=11) # Specify an appropriate opset version
    
    print("Model exported to topk_model.onnx")
    
    https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/791/topk_5F00_model.onnx
    https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/791/SimpleTopk.ipynb