
IWRL6432BOOST: How to run Deep Learning model inference?

Part Number: IWRL6432BOOST
Other Parts Discussed in Thread: IWRL6432

We are looking to run deep learning model inference on the board itself. We have trained a model on an x86_64 machine with TensorFlow and converted it into a TensorFlow Lite model; the generated model is 35 MB, so we are looking for a way to use the TensorFlow Lite runtime on the onboard microcontroller to generate inferences. If there are program or data memory constraints, is it possible to offload the task to the DCA1000 and run the model inference on it? I couldn't find specifications for the DCA1000's computing power and memory in the datasheet.
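
For reference, this is a minimal sketch of the TensorFlow-to-TensorFlow Lite conversion flow described above, assuming the trained model was exported as a SavedModel at a hypothetical "saved_model_dir" path; the post-training quantization line is an optional assumption (it typically shrinks the file, though a 35 MB model would still far exceed the on-chip memory of the IWRL6432):

```python
import tensorflow as tf

# Load the trained model exported as a SavedModel (hypothetical path).
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")

# Optional assumption: post-training dynamic-range quantization,
# which usually reduces the model size by roughly 4x.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Convert and write out the .tflite flatbuffer.
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

print(f"Converted model size: {len(tflite_model) / 1e6:.1f} MB")
```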

Is there a way to program the DCA1000 to run the deep learning model and generate inferences after it receives the data from the IWRL6432? If not, does TI offer separate computing devices for running model inference?

  • Hello, 

    This model would be too large to load onto the device itself. Your question about programming the DCA1000 for this is a bit outside the scope of these forums, but I will try my best to answer. The DCA1000 is really just an FPGA; you could potentially add that capability if needed, but I see a couple of problems. First, the flash memory available for FPGA images is only 16 MB. Second, we do not provide the source code for handling the LVDS data on the DCA1000 side.

    Best Regards,

    Josh