
EDGE-AI-STUDIO: model format and level of optimization for the SoC?

Part Number: EDGE-AI-STUDIO


Hi,

May I ask a question about Edge AI Studio?

My customer is considering the AM62A7x.

Question1:

I think we can download AI models in ONNX format from Edge AI Studio.

Is another format available (could we select one)?

Question2:

If we train and compile a model in Edge AI Studio, how optimized is it for the C7x DSP?

Question3:

In addition to Question 2:

Is there any data that looks like this:

  Edge AI Studio model: including ~% float, ~% int

  model trained/compiled with another OSS tool: including ~% float, ~% int

Thanks,

GR

  • GR,

    For your first question, the models provided by Texas Instruments in Edge AI Studio are in ONNX format. Downloading in other formats is not supported.

    We will follow up on your other questions soon.

    Thanks,

    Martin

  • Hello GR,

    Thank you for your queries. I'll respond to the ones that Martin hasn't already addressed.

    Question2:

    If we train and compile a model in Edge AI Studio, how optimized is it for the C7x DSP?

    If you can be more specific in what you mean by 'optimization level', that would be helpful.

    Our TI Deep Learning (TIDL) software runs on the C7xMMA core (MMA = matrix multiply accelerator, which is where the 2 TOPS performance number comes from). Each neural network layer we support has been optimized to use the data movement mechanisms and the MMA core; most of these have been hand-optimized by our development team. Can you clarify what you mean by your question?

    Question3:

    In addition to Question 2:

    Is there any data that looks like this:

      Edge AI Studio model: including ~% float, ~% int

      model trained/compiled with another OSS tool: including ~% float, ~% int

    If I understand correctly, you want to see data about the accuracy of the floating-point vs. fixed-point versions of our models, as well as of the original OSS versions. The second point is relevant because we make minor changes to model architectures to better utilize the accelerator. These changed models should have a 'ti-lite' designator in their name.

    You can find information about the models we host in our model_zoo repository, but not all entries list fixed- vs. floating-point accuracy values. If an entry does not specify fixed or floating point, it is safe to assume floating point. We aim for <2% accuracy loss with 8-bit fixed point, and most models come in under that.
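    To illustrate where that fixed-point accuracy loss comes from, here is a minimal sketch of symmetric 8-bit quantization and its round-trip error on a synthetic weight tensor. This is a generic illustration, not necessarily TIDL's exact quantization scheme; the weight values and function names are made up for the example.

```python
import numpy as np

def quantize_int8_symmetric(x):
    # Symmetric 8-bit quantization: map [-max|x|, +max|x|] onto [-127, 127].
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original floating-point values.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.2, size=1000).astype(np.float32)

q, scale = quantize_int8_symmetric(weights)
max_err = np.abs(dequantize(q, scale) - weights).max()

# The per-value round-trip error is bounded by half a quantization step,
# i.e. scale / 2; accumulated over many layers this is what shows up as
# the small accuracy drop of the fixed-point model.
print(f"scale = {scale:.6f}, max abs error = {max_err:.6f}")
```

    How much of this per-layer rounding error survives to the final accuracy number depends on the model, which is why the model zoo reports the float vs. fixed-point accuracy per model rather than a single figure.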

    https://github.com/TexasInstruments/edgeai-modelzoo/tree/main/models/vision/classification <-- specific page for classification models; there are other types in the parent directory.

    https://github.com/TexasInstruments/edgeai-yolox/blob/main/README_2d_od.md <-- YOLOX nano information

    https://github.com/TexasInstruments/edgeai-yolov5 <-- YOLOv5

    I see that most of these are floating point, because they focus on the base model rather than the implementation on the SoC. You may find performance and accuracy values for models on a per-device basis at https://dev.ti.com/edgeaistudio/ -> Model Analyzer (login) -> Select AM62A -> Model Selection tool (left pane)

    BR,
    Reese

  • Hello Martin, Reese,

    Thank you for your support and the information.

    I understand.

    If you can be more specific in what you mean by 'optimization level', that would be helpful.

    I am sorry for asking a vague question.

    I mean: how much better performance can the AM6xAx achieve if we use Edge AI Studio for model training, compared with training using other OSS tools (e.g. AWS...)?

    Could we get shorter compute time? Lower power consumption?

    I think Edge AI Studio's training and compilation are optimized for the Sitara SoC.

    Best regards,

    GR