Other Parts Discussed in Thread: TDA4VM
Hi,
We are interested in porting our object detector to the TDA4VM Edge AI Starter Kit, and we are wondering how to start developing our C/C++ demo on this platform.
We have been looking into 4 of your repositories as suggested by the documentation, namely edge_ai_apps, edgeai-tidl-tools, tidl-api and tidl-utils. We have gone through the "Getting Started" documentation (Processor SDK Linux for Edge AI), but it seems to focus only on board bring-up, both hardware and software.
We were wondering how these repositories relate to each other, and how we can build them on an x86 host (running Ubuntu 20.04), use the TIDL Importer tool to convert a model (for instance from ONNX), and then use the TIDL API to write our own C++ inference application around the converted model. Eventually, we would also need to know how to cross-compile and run the application on the target (Cortex-A / C7000+MMA).
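For reference, our current understanding of the host-side conversion step, pieced together from the edgeai-tidl-tools examples, is roughly the sketch below. The provider names, option keys and paths are assumptions on our part, so please correct us if this is not the intended flow:

```python
# Sketch of host-side (x86) model compilation through ONNX Runtime with a TIDL
# execution provider, as we understand it from the edgeai-tidl-tools examples.
# Provider names, option keys and paths below are ASSUMPTIONS, not verified.

compile_options = {
    "tidl_tools_path": "/opt/tidl_tools",     # assumed install path of the TIDL tools
    "artifacts_folder": "./model-artifacts",  # where the compiled artifacts would land
    "tensor_bits": 8,                         # 8- or 16-bit fixed-point quantization
}

providers = ["TIDLCompilationProvider", "CPUExecutionProvider"]

# On a host with onnxruntime and the TI tools installed, compilation would then
# presumably be triggered by creating a session (commented out here, since it
# needs TI's toolchain to actually run):
# import onnxruntime as ort
# session = ort.InferenceSession("detector.onnx",
#                                providers=providers,
#                                provider_options=[compile_options, {}])
```

Is this the recommended path, or should we use the standalone TIDL Importer instead?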
In order to start development properly, using all required software components from Texas Instruments, and to make the right design choices, we have a few questions:
- What are all the components we need in order to start developing for the target, and where can we find them?
- Compiler toolchain for target
- Dependencies and C++ API libraries (tidl-api for example) and drivers
- Host/PC simulator (x86) for benchmarking
- Model conversion tools (TIDL Importer and TIDL Visualizer) (x86)
- C/C++ example code we can use as a starting point to write our own embedded app.
- Which inference framework and runtime is best to use on the TDA4VM in terms of performance and compatibility (TIDL-RT/OpenVX, TFLite, ONNX Runtime, Neo-AI)?
- Any hardware/software limitations we should be aware of?
- Memory size limits (model and input sizes)
- Model operator/layer constraints (network topology constraints)
- Fixed-point formats for 8/16-bit inference (restrictions on integer and fractional part sizes, e.g. hardware accelerator register bit-width limits)
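To make the last point concrete, our current mental model of the quantization is a generic symmetric fixed-point scheme, sketched below. This is only our assumption; we would like to know how TIDL's actual 8/16-bit scheme differs (e.g. per-layer scales, asymmetric offsets, power-of-two scale restrictions):

```python
def quantize_symmetric(x, scale, bits=8):
    """Quantize a float to a signed fixed-point integer at the given scale.

    Generic symmetric quantization as we understand it; TIDL's actual
    scheme may differ, which is exactly what we are asking about.
    """
    qmin = -(2 ** (bits - 1))          # e.g. -128 for 8 bits
    qmax = 2 ** (bits - 1) - 1         # e.g.  127 for 8 bits
    q = round(x / scale)
    return max(qmin, min(qmax, q))     # saturate to the register range

def dequantize(q, scale):
    return q * scale

# With scale 0.125 (a power of two, so exactly representable):
# quantize_symmetric(1.0, 0.125) -> 8, and values outside the
# representable range saturate at -128 / 127.
```

In particular, does the C7x/MMA impose power-of-two scales or a fixed split between integer and fractional bits?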