
TIDL_EltWiseLayer for the ONNX Mul op: does it only support two inputs of the same shape?

Other Parts Discussed in Thread: TDA4VM

Hi,

I was trying to compile a model for the TDA4VM (J721E) board, but the debug console is showing me the following:

Unsupported (TIDL check) TIDL layer type ---             Mul

My model has a Mul operation that takes two tensors of different shapes, 1x512x64x64 and 1x512x1x1, so in PyTorch the multiplication is done by broadcasting. Given the different tensor shapes, is it possible that this is not supported by TIDL? I could see in the docs that Mul with two inputs is listed as a supported operation.

https://github.com/TexasInstruments/edgeai-tidl-tools/blob/09_00_00_00/docs/supported_ops_rts_versions.md

Kindly let me know where the issue might be and whether there is a possible workaround. If the problem is the tensor shapes, I think it should be doable with an operation like .expand().
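
For reference, here is a minimal PyTorch sketch of the .expand() idea (the module and output file names are made up for illustration): expanding the 1x512x1x1 tensor to the feature-map shape before the multiply makes both Mul inputs the same shape in the exported ONNX graph, at the cost of an extra Expand node whose TIDL support would still need to be checked.

```python
import torch
import torch.nn as nn

class ExplicitMul(nn.Module):
    """Make both Mul inputs the same shape so the ONNX Mul does not broadcast."""

    def forward(self, x, y):
        # x: 1x512x64x64 feature map, y: 1x512x1x1 per-channel scale
        y = y.expand_as(x)   # exported as an ONNX Expand node
        return x * y         # element-wise Mul on two 1x512x64x64 tensors

x = torch.randn(1, 512, 64, 64)
y = torch.randn(1, 512, 1, 1)
torch.onnx.export(ExplicitMul(), (x, y), "explicit_mul.onnx", opset_version=11)
```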


Thanks,
Sourabh

  • Hi,

    Thank you for posting your question; we are currently facing a high volume of questions on our platform.

    We will get back to you on this. Thank you for your patience.

  • Sure! Awaiting your reply.

    Regards,
    Sourabh

  • Hi Sourabh,

    The following combinations of Mul are currently supported in TIDL:

    1. Both inputs variable (derived from a previous layer): only element-wise multiplication is supported; broadcasting is not supported.

    2. Mul with a constant/initializer: Mul with a 1D vector or a scalar (single constant) is supported, and broadcasting is available in these cases. Mul with a constant tensor of more than one dimension is supported only if its dimensions are the same as the variable tensor's (broadcasting is not supported in this case).
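
    To make the rules above concrete, here is a small illustrative PyTorch snippet (the tensor names are invented and the shapes are simply the ones from this thread, not from any real model):

    ```python
    import torch

    feat = torch.randn(1, 512, 64, 64)   # variable tensor produced by a previous layer
    gate = torch.randn(1, 512, 1, 1)     # variable tensor produced by another branch

    # Case 1 (supported): both inputs variable with identical shapes -> plain element-wise Mul.
    out_elementwise = feat * torch.randn(1, 512, 64, 64)

    # Case 2 (supported): variable tensor multiplied by a scalar constant; broadcasting is fine here.
    out_scaled = feat * 0.5

    # Case 2 (supported only without broadcast): a constant tensor with more than one dimension
    # must match the variable tensor's shape exactly.
    out_const = feat * torch.ones(1, 512, 64, 64)

    # Not supported: two variable tensors of different shapes (the broadcast case in this thread).
    # out_broadcast = feat * gate
    ```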

    Hope this helps.

    Regards,

    Anand

  • Hi Anand,

    Thanks a lot for the clarification! It would be really helpful if you could suggest a workaround, since many models (especially attention-based ones) use broadcasting in multiplication.

    Regards,
    Sourabh

  • Hi Anand,

    I am working on converting a squeeze-and-excitation (SE) block to TIDL-supported operators. The final operator of the SE block is Mul. In PyTorch it is just out = x * y, where x has shape NCHW = 1x256x1x208 (the original feature) and y has shape NCHW = 1x256x1x1 (the SE features). Since my model is for audio/speech recognition, the feature is one-dimensional (height = 1).

    After running the TI benchmark, I got the message below. It seems that the two input shapes must be the same in order to do an element-wise multiply.

    ALLOWLISTING : ADD/MUL layer : Only elementwise operator supported if none of the inputs to add layer is a constant  --  file info - tidl_import_common_model_check.cpp , TIDL_checkAddMulSubDivTensorProperties , 308

    I think I am in the same situation as Sourabh.

    Could you suggest a solution or workaround for this block?
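
    In case it helps the discussion, here is a hypothetical sketch of an SE block with the shapes described above (the channel and reduction sizes are assumptions, not taken from the actual model). One possible rewrite of the final Mul expands the SE output to the feature shape first, so the Mul itself is element-wise; whether TIDL then handles the resulting Expand node is not confirmed in this thread.

    ```python
    import torch
    import torch.nn as nn

    class SEBlock1D(nn.Module):
        """Hypothetical SE block for a 1x256x1x208 audio feature map."""

        def __init__(self, channels=256, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)            # 1x256x1x208 -> 1x256x1x1
            self.fc = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            y = self.fc(self.pool(x))       # SE features, shape 1x256x1x1
            # return x * y                  # broadcasted Mul: triggers the allowlisting message
            return x * y.expand_as(x)       # same-shape, element-wise Mul (adds an Expand node)
    ```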

    Thanks.

    --Joy