IWRL6432BOOST: How to do machine learning inference on the board with TensorFlow Lite Micro?

Part Number: IWRL6432BOOST

My work requires performing inference on the radar data with a machine learning model on the board itself, so I am looking to include the TF Lite Micro library in my CCS project. The Arm Cortex-M4F on this board is not listed as an option for the target architecture when generating the TF Lite Micro library. Should I use the "Arm Cortex-M4" or "Arm Cortex-M4+FP" option that is provided among the TF Lite Micro target architectures? Also, is the cmsis_nn optimized kernel option supported on the microcontroller in this radar board?
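
For reference, this is roughly the inference path I intend to run on the M4F. It is only a minimal sketch assuming a recent TF Lite Micro release (where MicroInterpreter no longer takes an ErrorReporter); the model array name `g_radar_model_data`, the arena size, and the operator list are placeholders for illustration, not from an actual working build on this device.

```cpp
// Minimal TF Lite Micro inference sketch (placeholder names, not verified on IWRL6432).
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Model flatbuffer compiled into the image (e.g. output of `xxd -i model.tflite`).
extern const unsigned char g_radar_model_data[];

// Scratch memory for tensors; size must be tuned to the actual model.
constexpr int kTensorArenaSize = 16 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

int RunInference(const float* features, int num_features,
                 float* out, int num_out) {
  const tflite::Model* model = tflite::GetModel(g_radar_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) return -1;

  // Register only the ops the model actually uses to keep flash usage small.
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -2;

  // Copy the radar feature vector into the input tensor.
  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < num_features; ++i) input->data.f[i] = features[i];

  if (interpreter.Invoke() != kTfLiteOk) return -3;

  // Copy the model outputs (e.g. class scores) back to the caller.
  TfLiteTensor* output = interpreter.output(0);
  for (int i = 0; i < num_out; ++i) out[i] = output->data.f[i];
  return 0;
}
```

My question above is mainly about which generated library (target architecture and kernel option) this code should be linked against for the M4F on this board.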