Hi,
When I try to run a quantized PyTorch MobileNetV2 model on the TI edgeAI cloud, the kernel dies. I tried attaching the ONNX file to this post, but the upload fails for some reason...
Does TIDL support models that have been converted to INT8 using the PyTorch Quantization API (https://pytorch.org/docs/stable/quantization.html#general-quantization-flow)?
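For reference, this is roughly the quantization flow I followed before exporting to ONNX — a minimal sketch of PyTorch's eager-mode post-training static quantization, with a tiny stand-in network instead of MobileNetV2 and random calibration data, just for illustration:

```python
import torch
import torch.nn as nn

# Tiny stand-in for MobileNetV2, just to illustrate the quantization flow.
# QuantStub/DeQuantStub mark where tensors enter/leave the quantized region.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)
        self.relu = nn.ReLU()
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = TinyNet().eval()

# Attach a default INT8 qconfig and insert observers.
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
prepared = torch.ao.quantization.prepare(model)

# Calibrate with representative data (random tensors here for illustration).
for _ in range(4):
    prepared(torch.randn(1, 3, 32, 32))

# Convert observed modules to their quantized INT8 counterparts.
quantized = torch.ao.quantization.convert(prepared)
print(type(quantized.conv))
```

After a flow like this, `torch.onnx.export` produces an ONNX graph with quantized operators, which is the file that makes the edgeAI cloud kernel die.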
Thank you,
Isidora Radovanovic