
TDA4VM: Support Request: YOLOv8 Deployment on Board Without Cloud Dependency

Part Number: TDA4VM

Tool/software:

Dear Texas Instruments Support Team,

I am currently working on a confidential government-related project and have a TDA4VM board from TI in hand. The goal is to deploy a YOLOv8-based object detection model on this edge device. After reviewing the available documentation and resources, I still have a few specific queries that I’d appreciate your assistance with:

  1. Native YOLOv8 Inference Support
    Can YOLOv8 models be directly used for inference on the TDA4VM board without conversion?
    If yes:

    • What is the recommended way to run such models (e.g., with OpenCV)?

    • What FPS can be expected during inference?

  2. Model Quantization Without Cloud Access
    I came across the edgeai-tidl-tools repository, which outlines a cloud-based model training, quantization, and calibration flow.
    Due to security policies, I cannot use any cloud-based services for training or deployment.

    • Is it possible to train, quantize, and calibrate YOLOv8 models on a host machine without internet access or cloud dependencies?

    • If so, could you please share the steps or documentation to support this flow?

  3. GUI Support on the Board
    As the TDA4VM board currently runs in a headless Linux shell environment:

    • Is there a way to enable and run a GUI-based application on this board?

    • If yes, could you guide me on the steps or tools required?

  4. Custom Application Development
    I intend to build a custom application that connects a camera and performs YOLO-based object detection.

    • Do you have any reference pipelines, examples, or documentation for building and deploying such applications?

  5. OS Compatibility (Ubuntu 20.04 / 22.04)
    I understand the board has specific OS support, but just to confirm:

    • Is there any possibility of running Ubuntu 20.04 or 22.04 on the TDA4VM board?

  6. Model Conversion Without Edge AI Studio
    Is there any alternative method to convert models without using Edge AI Studio?

    • If yes, could you please provide guidance or tools to support the same?

    • If no, please mention the reason in this thread.

I’d greatly appreciate your support in helping me proceed with an offline, secure, and efficient deployment for this project.

Looking forward to your response.

  • Hi Sanyam,

    Please give me some time to investigate all your questions. Will respond as soon as possible.

    Warm regards,

    Christina

  • Hi Sanyam,

    Answers in-line...

    1. Native YOLOv8 Inference Support
      Can YOLOv8 models be directly used for inference on the TDA4VM board without conversion?
      If yes:

      • What is the recommended way to run such models (e.g., with OpenCV)?

      • What FPS can be expected during inference?

        No, they need to be quantized to 8 or 16 bits first. 
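The quantization requirement above is controlled at compile time. As a rough, hedged sketch of what the offline compilation options look like (key names follow the style of the edgeai-tidl-tools OSRT examples, but the exact fields and paths below are placeholders that vary by SDK release):

```python
# Hypothetical sketch of TIDL offline-compilation options. Paths are
# placeholders; check your SDK's edgeai-tidl-tools release for exact keys.
compile_options = {
    "tidl_tools_path": "/opt/tidl_tools",            # placeholder path
    "artifacts_folder": "./model-artifacts/yolov8",  # placeholder path
    "tensor_bits": 8,      # TIDL inference is quantized: 8 or 16 bits
    "accuracy_level": 1,   # enables calibration-based quantization
    "advanced_options:calibration_frames": 20,       # local images, no cloud
    "advanced_options:calibration_iterations": 5,
}

# The quantization bit-width must be 8 or 16 for TIDL offload
assert compile_options["tensor_bits"] in (8, 16)
```

Calibration runs against a small set of local images, so nothing in this step requires internet or cloud access.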
    2. Model Quantization Without Cloud Access
      I came across the edgeai-tidl-tools repository, which outlines a cloud-based model training, quantization, and calibration flow.
      Due to security policies, I cannot use any cloud-based services for training or deployment.

    3. GUI Support on the Board
      As the TDA4VM board currently runs in a headless Linux shell environment:

      • Is there a way to enable and run a GUI-based application on this board?

        The demos are GUI-based, but model development and deployment are not.
      • If yes, could you guide me on the steps or tools required?

      • This requires Linux knowledge, a setup menu, and a display, which goes beyond the scope of an E2E post.
    4. Custom Application Development
      I intend to build a custom application that connects a camera and performs YOLO-based object detection.

      • Do you have any reference pipelines, examples, or documentation for building and deploying such applications?

      • Yes, examples are in edgeai-tidl-tools.  This is a common use case.
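A typical camera-to-detection pipeline preprocesses each frame into the tensor layout a YOLOv8 ONNX export expects before handing it to the runtime. The sketch below shows that preprocessing step only; the commented capture/inference calls use placeholder names (camera index, model file) and are not taken from a TI example:

```python
import numpy as np

def preprocess(frame, size=640):
    """Letterbox a HxWx3 BGR frame into a 1x3xSxS float32 tensor,
    the NCHW layout YOLOv8 ONNX exports typically expect."""
    h, w = frame.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize with plain NumPy indexing (cv2.resize would
    # normally be used; NumPy keeps this sketch dependency-free).
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = frame[ys][:, xs]
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)  # grey padding
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    x = canvas[:, :, ::-1].astype(np.float32) / 255.0  # BGR->RGB, 0..1
    return x.transpose(2, 0, 1)[None]                  # add batch dim

# In the full pipeline (placeholder names, not verified against TI demos):
#   cap = cv2.VideoCapture(0)
#   sess = onnxruntime.InferenceSession("yolov8.onnx", providers=[...])
#   ok, frame = cap.read()
#   out = sess.run(None, {sess.get_inputs()[0].name: preprocess(frame)})
```

On the board, the same session would be created with the TIDL artifacts produced at compile time so inference is offloaded to the accelerators rather than run on the A72 cores.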
    5. OS Compatibility (Ubuntu 20.04 / 22.04)
      I understand the board has specific OS support, but just to confirm:

      • Is there any possibility of running Ubuntu 20.04 or 22.04 on the TDA4VM board?

        Not from TI. It may be possible to port Ubuntu on your own, but TI does not support that flow.
    6. Model Conversion Without Edge AI Studio
      Is there any alternative method to convert models without using Edge AI Studio?

      • If yes, could you please provide guidance or tools to support the same?

        Again, edgeai-tidl-tools is where you start. The recommended interchange format is ONNX, but TFLite will also work. I would start by adding your model in examples/osrt_python/model_config.py and running it from examples/osrt_python/ort. Use python3 ./onnxrt_ep.py -c -m <your model name> to compile.
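For reference, a model_config.py entry tends to look like the sketch below. Key names follow the style of the edgeai-tidl-tools OSRT examples, but exact fields differ between SDK releases, and the model name and path here are placeholders to adapt, not a drop-in snippet:

```python
# Hypothetical model_config.py entry for a YOLOv8 ONNX export.
# All names and paths are placeholders; check your edgeai-tidl-tools
# release for the exact schema it expects.
models_configs = {
    "od-custom-yolov8": {
        "model_path": "models/yolov8s.onnx",      # placeholder path
        "mean": [0.0, 0.0, 0.0],                  # YOLOv8 expects 0..1 input
        "scale": [1 / 255, 1 / 255, 1 / 255],     # divide pixels by 255
        "num_images": 20,                         # local calibration images
        "model_type": "od",                       # object detection
        "session_name": "onnxrt",                 # compile via ONNX Runtime
    }
}
```

With an entry like this in place, compilation would then be invoked from examples/osrt_python/ort as python3 ./onnxrt_ep.py -c -m od-custom-yolov8, per the steps above.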
      • If no, please mention the reason in this thread.

    Regards,

    Chris