SK-TDA4VM: Starting TDA4VM code development

Part Number: SK-TDA4VM
Other Parts Discussed in Thread: TDA4VM

Hi,

We are interested in porting our object detector to the TDA4VM Edge AI Starter Kit. We are wondering how we can start the development of our C/C++ demo on this platform.

We have been looking into 4 of your repositories as suggested by the documentation, namely edge_ai_apps, edgeai-tidl-tools, tidl-api and tidl-utils. We have gone through the "Getting Started" documentation (Processor SDK Linux for Edge AI), and it seems to focus on board bring-up only, both hardware and software.

We were wondering how these repositories relate to each other and how we can build them on x86 (host running Ubuntu 20.04), then use the TIDL Importer tool to convert a model from, for instance, ONNX, then use the TIDL API to write our own C++ inference application based on this conversion. Eventually, we would need to know how to cross-compile and run the application on the target (Cortex-A / C7000+MMA).

In order to start development properly using all required software components from Texas Instruments and making the right design choices, we have a few questions:

  • What are all the components we need in order to start developing for the target, and where can we find them?
    • Compiler toolchain for the target
    • Dependencies and C++ API libraries (tidl-api for example) and drivers
    • Host/PC simulator (x86) (benchmarking)
    • Model conversion tools (TIDL Importer and TIDL Visualizer) (x86)
    • C/C++ example code we can use as a starting point to write our own embedded app.
  • Which is the best inference framework and runtime to use on TDA4VM in terms of performance and compatibility (TIDL-RT/OpenVX, TF Lite, ONNX Runtime, NEO AI)?
  • Any hardware/software limitations we should be aware of?
    • Memory size (model and input size)
    • Model operator/layer constraints (network topology constraints)
    • Fixed-point formats for 8/16 bits (restrictions on integer and fractional part sizes, e.g. HW accelerator register bit-size limits)

 

  • Hello Julien,

    Thank you for your interest. The goal of the edge AI SDK is to make such application development as easy as possible. For most common applications, you don't need to use the TIDL importer; you can use the open-source runtime libraries in any of the frameworks you mentioned (TF Lite, ONNX Runtime or TVM) and use the pre-compiled models that are part of our ModelZoo.
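To make the open-source runtime flow above concrete, here is a minimal sketch of how an inference session is typically configured when using ONNX Runtime with the TIDL execution provider, following the pattern in the edgeai-tidl-tools examples. The provider name "TIDLExecutionProvider" and the "artifacts_folder" option key are assumptions to verify against the SDK version you use; this is a configuration sketch, not a definitive implementation.

```python
# Hedged sketch: configuring ONNX Runtime inference with TIDL offload on the
# target, falling back to the plain CPU provider for a functional check on x86.
# Provider name and option keys are assumptions taken from edgeai-tidl-tools
# examples; verify them against your SDK release.

def build_session_config(artifacts_dir, on_target=True):
    """Return (providers, provider_options) for an onnxruntime InferenceSession."""
    if on_target:
        providers = ["TIDLExecutionProvider", "CPUExecutionProvider"]
        provider_options = [
            {"artifacts_folder": artifacts_dir},  # precompiled TIDL artifacts
            {},                                   # CPU fallback needs no options
        ]
    else:
        providers = ["CPUExecutionProvider"]      # accuracy check on the PC
        provider_options = [{}]
    return providers, provider_options

providers, options = build_session_config("./tidl_artifacts")
print(providers[0])  # -> TIDLExecutionProvider
```

On the target you would then create the session with `onnxruntime.InferenceSession(model_path, providers=providers, provider_options=provider_options)` and call `session.run(...)` exactly as with stock ONNX Runtime.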

    The link below points to the SDK documentation for an application example you can try out of the box in minutes.

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-sk-tda4vm/latest/exports/docs/running_advance_demos.html#multi-input-multi-instance-demo

    We also have Edge AI Academy with more examples at the link below.

    https://ti.com/edgeaiacademy

    The link below has an example of people counting using an object detection model in Python. You can see how easy it is to create such an application using the edgeAI framework.

    https://dev.ti.com/tirex/explore/content/edgeAI_academy_0_00_00_01_eng__all/modules/6_3_people_counting.html

    Please review these links and let us know if you have any questions.


    Thank you,

    Srik

  • Unfortunately, these links are not really what we are looking for and do not answer any of the listed points. We want to port an existing C/C++ code base to the TDA4VM for a production-grade product; Python-based demos are not exactly what we are looking for. That is why we want more details on how we can set up the development environment, write C++ code, and build it for both the PC and the TDA4VM target. We have seen some .cpp files and a C++ compiler being used in the edgeai-tidl-tools CMake build system. Should any C++ development replicate the build system and dependencies shown in edgeai-tidl-tools?

    As a reminder, here are the points we previously wanted to clarify:

    • What are all the components we need in order to start developing for the target, and where can we find them?
      • Compiler toolchain for the target
      • Dependencies and C++ API libraries (tidl-api for example) and drivers
      • Host/PC simulator (x86) (benchmarking)
      • Model conversion tools (TIDL Importer and TIDL Visualizer) (x86)
      • C/C++ example code we can use as a starting point to write our own embedded app.
    • Which is the best inference framework and runtime to use on TDA4VM in terms of performance and compatibility (TIDL-RT/OpenVX, TF Lite, ONNX Runtime, NEO AI)?
    • Any hardware/software limitations we should be aware of?
      • Memory size (model and input size)
      • Model operator/layer constraints (network topology constraints)
      • Fixed-point formats for 8/16 bits (restrictions on integer and fractional part sizes, e.g. HW accelerator register bit-size limits)
  • Ok, understood.

    What deep learning models are you using in this C/C++ code base? You do have to use the TIDL tools to compile these models and generate the artifacts that are used during inference. This compilation is done once, and it is done on the PC.

    You can compile the model using the documentation here.

    https://github.com/TexasInstruments/edgeai-tidl-tools

    That'd be the first step.
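For reference, here is a hedged sketch of the kind of delegate options the edgeai-tidl-tools example scripts pass to this one-time, PC-side compilation step. The option keys below (`tidl_tools_path`, `artifacts_folder`, `tensor_bits`) follow the repository's examples but may differ between releases, so treat them as assumptions to check against the version you clone.

```python
import os

# Hedged sketch of the PC-side model compilation options used by the
# edgeai-tidl-tools example scripts. Option names are assumptions to verify;
# the compilation runs once on x86 and emits artifacts deployed to the board.

def make_compile_options(artifacts_dir, tensor_bits=8):
    assert tensor_bits in (8, 16), "TIDL uses 8- or 16-bit fixed point"
    return {
        # set by the repo's setup script in the real flow
        "tidl_tools_path": os.environ.get("TIDL_TOOLS_PATH", ""),
        "artifacts_folder": artifacts_dir,  # compiled artifacts land here
        "tensor_bits": tensor_bits,         # quantization bit width
    }

opts = make_compile_options("./tidl_artifacts")
print(opts["tensor_bits"])  # -> 8
```

In the actual scripts these options are passed to an ONNX Runtime session created with the compilation provider (rather than the runtime provider), and the resulting artifacts directory is then copied to the TDA4VM for inference.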

    Once you have this, you would do the application development on the TDA4VM platform. We recommend that customers use the same edgeAI SDK framework with GStreamer, etc. This can be Python or C++. Could you tell us what kind of framework you are using in your application?
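As a rough illustration of the GStreamer-based approach, here is a sketch of a plain capture pipeline that an application could wrap. Only standard GStreamer elements are used below; the TI-accelerated elements the SDK actually provides (and their names and properties) should be taken from the SDK documentation, so consider this a generic starting point, not the SDK's own pipeline.

```python
# Hedged sketch: a generic GStreamer capture pipeline of the kind the edge AI
# SDK framework wraps. Uses only standard elements (v4l2src, videoconvert,
# videoscale, appsink); the SDK's TI-specific accelerated elements are not
# shown and should be looked up in the SDK docs.

def capture_pipeline(device="/dev/video2", width=640, height=480):
    """Build a pipeline string: camera -> RGB frames -> appsink for inference."""
    return (
        f"v4l2src device={device} ! "
        f"videoconvert ! videoscale ! "
        f"video/x-raw,format=RGB,width={width},height={height} ! "
        f"appsink name=sink"  # the application pulls frames here and runs inference
    )

print(capture_pipeline())
```

In C++ you would pass this string to `gst_parse_launch()` and pull buffers from the appsink; in Python, via `Gst.parse_launch()` from GObject introspection.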

    We have a webinar next week on this topic of application development using the edgeAI SDK, and our experts will be online as well for any questions. This might be useful for you. Here is the link with the webinar info and registration.

    e2e.ti.com/.../tda4vm-tda4vm-jacinto-edge-ai-monthly-webinar-03-02-2022-edge-ai-camera-application-development-made-simpler-faster-and-more-affordable

    Thanks,

    Srik

  • We are trying to run our custom object detector model for future embedded automotive applications. One of our engineers managed to extract a structure for building a C++ app from the edgeai-tidl-tools repo after a few modifications to the CMake build system. Usually we build our own pipeline application to stream video, but I guess the provided GStreamer pipeline may be enough to start evaluating and demoing. We will probably take a look at the webinar next week.

    Thanks