This thread has been locked.


SK-AM62A-LP: Custom Model

Part Number: SK-AM62A-LP

Hi Team,

Would you kindly guide me through the process of deploying a custom model on the SK-AM62A-LP? The model is already built, and I would like to know how to run it on the AM62A board. What changes need to be made in the config file, and how do I connect two or three models into a pipeline?

I looked through the FAQs but couldn't find a good resource.

Thanks!

Regards,

Marvin

  • Hello,

    Yes, we have documentation and guidance for this topic. Many user guides and documentation pages for custom models are on the edgeai-tidl-tools repository on GitHub.

    Please see here for custom models: https://github.com/TexasInstruments/edgeai-tidl-tools?tab=readme-ov-file#compile-and-benchmark-custom-model 

    Let me know if you still have questions! For putting together an end-to-end imaging pipeline with a live video source (file, camera, internet stream), please see the Edge AI SDK documentation: https://software-dl.ti.com/processor-sdk-linux/esd/AM62AX/10_00_00/exports/edgeai-docs/common/edgeai_dataflows.html 

    BR,
    Reese

  • Hi Reese,

    Thanks. These resources show how to run a single model or a few models in parallel. Is there a resource that explains how to run models one after another?

    Regards,

    Marvin

  • Hi Marvin,

    Yes, these resources show how to run a model, and they support running multiple models (for compilation or inference) in parallel. They do not show models run sequentially, largely because there is nothing special to account for when doing so.

    For multiple models in sequence, the same APIs will be used. There is no restriction on the order in which models are initialized or called, so long as their weights fit into the device's RAM/LPDDR4.

    To run multiple models one after another:

    • Initialize each model using your preferred runtime API, e.g. ONNX Runtime.
    • Within a loop:
      • Preprocess images as needed.
      • Pass the image as input to the model with the DL runtime API, e.g. ONNX Runtime.
      • If the output of one model feeds directly into the next, retrieve that output from the runtime, apply any further processing/transformation, and pass it into the next model.
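    As a minimal sketch of the loop above: the snippet below builds two tiny one-node ONNX graphs in memory purely for illustration (the op types, tensor shapes, and the `make_single_op_model` helper are placeholders, not part of any SDK). On the AM62A you would instead load your compiled model artifacts and pass the TIDL execution-provider options when creating each `InferenceSession`; the sequential structure is the same.

    ```python
    # Sketch: two models run back-to-back with ONNX Runtime.
    # The in-memory models here stand in for your real compiled artifacts.
    import numpy as np
    import onnxruntime as ort
    from onnx import TensorProto, helper

    def make_single_op_model(op_type: str) -> bytes:
        """Build a one-node [1,4] float model (e.g. 'Relu') as serialized bytes."""
        inp = helper.make_tensor_value_info("in", TensorProto.FLOAT, [1, 4])
        out = helper.make_tensor_value_info("out", TensorProto.FLOAT, [1, 4])
        node = helper.make_node(op_type, ["in"], ["out"])
        graph = helper.make_graph([node], "g", [inp], [out])
        model = helper.make_model(graph,
                                  opset_imports=[helper.make_opsetid("", 13)])
        return model.SerializeToString()

    # Initialize each model once, up front; initialization order does not matter.
    model_a = ort.InferenceSession(make_single_op_model("Abs"),
                                   providers=["CPUExecutionProvider"])
    model_b = ort.InferenceSession(make_single_op_model("Relu"),
                                   providers=["CPUExecutionProvider"])

    def run_pipeline(frame: np.ndarray) -> np.ndarray:
        # Preprocess as the first model expects (placeholder: just a cast).
        x = frame.astype(np.float32)
        # First inference.
        out_a = model_a.run(None, {model_a.get_inputs()[0].name: x})[0]
        # Intermediate step: any reshaping/transformation of A's output
        # into B's expected input would happen here.
        # Second inference consumes the (processed) output of the first.
        return model_b.run(None, {model_b.get_inputs()[0].name: out_a})[0]

    result = run_pipeline(np.array([[-1.0, 2.0, -3.0, 4.0]]))
    ```

    The per-frame loop simply calls `run_pipeline` on each preprocessed frame; nothing about the runtime changes because two sessions are active at once, as long as both fit in memory.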

    The multiple-sequential-model use case is not directly included in our examples; it is left to the developer to implement. Our model zoo does not include models that require sequential operation in this way (e.g. a region-proposal network --> classification network for two-stage object detection). Instead, we support single-shot detection models.

    BR,
    Reese