Other Parts Discussed in Thread: IWRL6432
We would like to run deep learning model inference on the board itself. We trained a model on an x86_64 machine with TensorFlow and converted it to a TensorFlow Lite model; the generated model is 35 MB. We are now looking for a way to run the TensorFlow Lite runtime on the device's microcontroller and generate inferences there.
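For reference, a minimal sketch of the conversion step, assuming the standard TensorFlow Lite flow (the "saved_model_dir" path and the input shape in the calibration generator are placeholders). The post-training int8 quantization shown is optional but stores weights as 8-bit integers, cutting a float32 model to roughly a quarter of its size:

```python
import numpy as np
import tensorflow as tf

# Load the trained model; "saved_model_dir" is a placeholder path.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Optional post-training int8 quantization.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Placeholder calibration data; yield samples shaped like the
    # model's real input so the converter can pick quantization ranges.
    for _ in range(100):
        yield [np.random.rand(1, 128, 64, 1).astype(np.float32)]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Even fully quantized, the 35 MB float32 model would only come down to roughly 9 MB, so on-chip memory is still a concern, which leads to the offload question below.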
Given the program and data memory constraints on the IWRL6432, is it possible to offload this task to the DCA1000 and run the model inference there? I couldn't find specifications for the DCA1000's compute capability or memory capacity in its datasheet. Is there a way to program the DCA1000 to run the deep learning model and generate inferences after it receives data from the IWRL6432? If not, does TI offer a separate computing device for running model inference?