
[FAQ] TDA4VM: How to perform power benchmarking of model inference/run on TDA4VM SoC

Part Number: TDA4VM
Other Parts Discussed in Thread: INA226


How can I perform power benchmarking (power consumption in watts) for supported deep learning models (object detection, classification, segmentation) on the TDA4VM SoC?

Thanks for the help.

  • Hi,

    Prerequisites:

    The power measurements are done using the INA226 Linux driver. Through the sysfs interface, power consumption can be measured on all 32 voltage rails (2 sets of 16 rails). One set of 16 rails can be measured at a time via the I2C mux.
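    The hwmon index (hwmon8 in this FAQ) can vary between boards and kernel versions, so it is safer to discover the INA226 nodes from the driver name than to hard-code the index. Below is a minimal sketch using the standard hwmon sysfs layout; the `find_ina226_nodes` helper name and the default path are illustrative.

```shell
# Sketch: locate hwmon nodes backed by the INA226 driver by reading each
# node's "name" attribute under /sys/class/hwmon (standard hwmon layout).
find_ina226_nodes() {
    local hwmon_root="${1:-/sys/class/hwmon}"
    for d in "$hwmon_root"/hwmon*; do
        [ -f "$d/name" ] || continue
        # Each hwmon node exposes its driver name in the "name" file.
        if grep -q "ina226" "$d/name"; then
            echo "$d"
        fi
    done
}
```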

    To capture a power snapshot on the target EVM, follow the steps below.

    Read the file /sys/class/hwmon/hwmon8/power1_input to get the power reading, as shown below; the readings are in microwatts.

    root@j7-evm:~# cat /sys/class/hwmon/hwmon8/power1_input
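    Since power1_input reports microwatts, a small conversion helper makes the readings easier to interpret. This is a sketch; the function names are illustrative, and the default path assumes the hwmon8 node from this FAQ.

```shell
# Sketch: read one power sample and convert it from microwatts to watts.
read_power_uw() {
    # Default path matches this FAQ; pass your board's hwmon node instead.
    cat "${1:-/sys/class/hwmon/hwmon8/power1_input}"
}

uw_to_w() {
    # power1_input is in microwatts; divide by 1,000,000 for watts.
    awk -v uw="$1" 'BEGIN { printf "%.3f", uw / 1000000 }'
}
```

    For example, a reading of 2500000 microwatts corresponds to 2.5 W.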

    You can run inference with any supported model and then read the file mentioned above to get the power reading.

    Example: inferencing an object detection model from the vision apps directory

    root@j7-evm:/opt/vision_apps# ./

    Read the /sys/class/hwmon/hwmon8/power1_input file before running model inference and note the power reading; let's say it is X microwatts. Then run the model inference as described above and take another reading while inference is running; let's say it is Y microwatts. The delta (Y - X) is your final reading.
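    The baseline/load delta described above can be sketched as a small helper. This is only a sketch: the sysfs path and the inference command are board- and application-specific and are passed in by the caller, and the settle delay before sampling under load is a tunable assumption.

```shell
# Sketch: take an idle (X) reading, start the inference command, take a
# reading under load (Y), and print the delta (Y - X) in microwatts.
measure_inference_power_uw() {
    local sample_file="$1"; shift       # e.g. /sys/class/hwmon/hwmon8/power1_input
    local idle_uw load_uw
    idle_uw=$(cat "$sample_file")       # X: baseline reading before inference
    "$@" &                              # launch the inference command in background
    local pid=$!
    sleep "${SETTLE_S:-2}"              # let the workload settle (tunable)
    load_uw=$(cat "$sample_file")       # Y: reading while inference is running
    wait "$pid"
    echo $(( load_uw - idle_uw ))       # final reading: delta in microwatts
}
```

    Usage would look like `measure_inference_power_uw /sys/class/hwmon/hwmon8/power1_input <your-inference-command>`, substituting the model launch command from /opt/vision_apps.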

    One can do power benchmarking for object classification or segmentation by following the same steps and invoking an inference run for those models.