Can I deploy Darknet/Ultralytics YOLOv2/v3/v4/v5 directly, and will they work on the SK-TDA4VM?
Regards,
Nandu
Hi,
We currently support YOLOX in several flavours: 'yolox_s_lite', 'yolox_tiny_lite', 'yolox_nano_lite', 'yolox_pico_lite', and 'yolox_femto_lite'.
Of the other versions you listed, we currently support YOLOv5.
If you try out-of-the-box example models, they may not be fully offloaded for acceleration on the C7x-MMA; any kernels not supported by TIDL will run on the Arm core.
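For reference, here is a minimal sketch of running a compiled model through the ONNX Runtime flow from edgeai-tidl-tools, where supported layers go to the TIDL execution provider and the rest fall back to the CPU (Arm) provider. The model filename, artifacts path, and option keys here are assumptions based on the public examples and may differ in your SDK version:

```python
# Minimal sketch: inference with TIDL offload via ONNX Runtime
# (assumed setup from the edgeai-tidl-tools examples).
import onnxruntime as rt

# Artifacts produced by the TIDL model-compilation step (hypothetical path).
delegate_options = {
    "artifacts_folder": "./model-artifacts/yolov5s/",
}

so = rt.SessionOptions()
# The TIDL EP runs supported layers on C7x-MMA; the CPU EP is the Arm fallback.
session = rt.InferenceSession(
    "yolov5s.onnx",  # hypothetical model file
    providers=["TIDLExecutionProvider", "CPUExecutionProvider"],
    provider_options=[delegate_options, {}],
    sess_options=so,
)

input_name = session.get_inputs()[0].name
# outputs = session.run(None, {input_name: preprocessed_image})
```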
Regards,
Pratik
Can you clarify whether YOLOv2 specifically is supported and can run on the SK-TDA4VM, with or without optimization? Also, can YOLOv5 run directly on the board without optimization? Since you mentioned YOLOv5 is supported, which variant does that cover (optimized or not)?
Regards,
Nandu
Hi,
TIDL offload happens for layers that are supported by the TIDL optimized kernels.
Layers that are not supported fall back to the Arm core (the non-optimized flow).
So any network whose operators/kernels are all supported by TIDL will run fully in the optimized path.
We recommend checking the supported-operator list here and comparing it against any model you are trying to infer: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/supported_ops_rts_versions.md
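As an illustration (not part of the official tooling), a quick way to see which operator types an ONNX model uses, so you can compare them by hand against the supported-ops document. The model filename and the supported-op set below are assumptions; fill the set in from the linked document:

```python
# Hypothetical helper: list the operator types an ONNX model uses so they
# can be checked against TIDL's supported_ops_rts_versions.md document.
import onnx

model = onnx.load("yolov5s.onnx")  # hypothetical model file
op_types = sorted({node.op_type for node in model.graph.node})

# Illustrative subset only; populate from the supported-ops document.
tidl_supported = {"Conv", "Relu", "MaxPool", "Concat", "Resize"}

for op in op_types:
    status = "offloaded to C7x-MMA" if op in tidl_supported else "falls back to Arm"
    print(f"{op}: {status}")
```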
Regards,
Pratik
So YOLOv5 and YOLOv2 only work in optimized form on the target device (SK-TDA4VM), right?
As mentioned, for any neural network you are experimenting with: if all of its operators/kernels are supported by the TIDL library, the compiled network graph will be offloaded to the C7x-MMA (in layman's terms, it will run in optimized mode).
You can check the list of supported operators here and compare it against any model you are trying to infer: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/supported_ops_rts_versions.md
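For completeness, the same partial-offload behaviour applies in the TFLite flow, where the TIDL delegate claims supported kernels and everything else stays on the Arm core. A minimal sketch, assuming the delegate library name from the edgeai-tidl-tools examples; option keys, paths, and filenames are assumptions:

```python
# Minimal sketch: TFLite inference with the TIDL delegate (assumed setup
# from the edgeai-tidl-tools examples; keys and paths may vary by SDK).
import tflite_runtime.interpreter as tflite

delegate_options = {
    "artifacts_folder": "./model-artifacts/yolov5s-tflite/",  # hypothetical path
}

# Supported kernels run on C7x-MMA through this delegate; the rest run on Arm.
tidl_delegate = tflite.load_delegate("libtidl_tfl_delegate.so", delegate_options)

interpreter = tflite.Interpreter(
    model_path="yolov5s.tflite",  # hypothetical model file
    experimental_delegates=[tidl_delegate],
)
interpreter.allocate_tensors()
```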
Regards,
Pratik