TDA4VM: Inference benchmarking statistics - DDR BW Per Image

Part Number: TDA4VM

Hello,

When performing inference on the Cloud or on the EVM, I consistently observe very high DDR usage per image, whereas my colleagues reported only 0 MB two years ago.

I have tried various models on both the Cloud and the EVM, and I always get very high numbers.

On Cloud: (screenshot)

On EVM: (screenshot)

Old results with the same model: (screenshot)

Are these results typical, or has the calculation method in edgeai_tidl_tools changed?

Thanks and regards,

Azer

  • Hi,

    Responses will be delayed until next week due to the US Thanksgiving holidays.

    Regards,

    Suman

  • Hi,

    I still have this problem. Do you have any idea where it might come from?

    Could it be related to an issue that prevents me from running inference via TIDL-RT on the C7x core?

    Thanks,

    Azer

  • Hi Azer,

    I don't think we should compare against your co-worker's results from two years ago, unless you still have the exact software and setup to reproduce them.

    Looking at your screenshot, the DDR usage is unrealistically high. What software did you use to capture this screenshot?

    I will forward your question to the DDR expert. We will look at the TIDL-RT issue later.

    Thanks and regards

    Wen Li 

  • Hi,

    I get the same result on the EVM and on the Cloud, so for now I will detail how I obtain it via the Cloud.

    Here is the model I am using: /home/root/edgeai-modelzoo/models/vision/classification/imagenet1k/mlperf/mobilenet_v1_1.0_224.tflite.

    I run the vcls-tfl notebook, modified to run on ARM only.

    Then, using the following code, I get the benchmark results:

    from scripts.utils import plot_TI_performance_data, plot_TI_DDRBW_data, get_benchmark_output, print_soc_info
    import matplotlib.pyplot as plt

    # 'interpreter' and 'selected_model_id' are defined in the earlier cells of the notebook.
    stats = interpreter.get_TI_benchmark_data()
    fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(10, 5))
    plot_TI_performance_data(stats, axis=ax)
    plt.show()

    tt, st, rb, wb = get_benchmark_output(stats)
    print_soc_info()

    print(f'{selected_model_id.label} :')
    print(f' Inferences Per Second    : {1000.0/tt:7.2f} fps')
    print(f' Inference Time Per Image : {tt:7.2f} ms')
    print(f' DDR usage Per Image      : {rb + wb:7.2f} MB')

    I'm doing exactly the same on EVM, with other models too.
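    For reference, my understanding is that get_benchmark_output simply converts the raw counters in the stats dictionary returned by get_TI_benchmark_data() into milliseconds and MB. Below is a minimal sketch of that conversion; the key names and units are assumptions on my part, so please check scripts/utils.py in the installed edgeai-tidl-tools to confirm:

    # Sketch only - approximates what scripts.utils.get_benchmark_output is expected to do.
    # The dictionary keys ('ts:run_start', 'ddr:read_start', ...) and the ns/byte units
    # are assumptions; verify them against your edgeai-tidl-tools version.
    def get_benchmark_output_sketch(stats):
        # Wall-clock time of the run, assuming the timestamps are in nanoseconds.
        total_time_ms = (stats['ts:run_end'] - stats['ts:run_start']) / 1.0e6
        # DDR counters are assumed to be byte counts sampled before and after the run,
        # so the difference is the traffic attributed to this single inference.
        read_mb = (stats['ddr:read_end'] - stats['ddr:read_start']) / 1.0e6
        write_mb = (stats['ddr:write_end'] - stats['ddr:write_start']) / 1.0e6
        return total_time_ms, read_mb, write_mb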

    Regards and thanks,

    Flora

  • Ok, thanks for providing the details. I will try your script; I may need some time to set it up.

    Regards

    Wen Li

  • Hi, 

    I noticed that I only see this anomaly when running on ARM; the DDR results are normal on C7x+MMA.
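    As far as I understand, the ARM-only and C7x+MMA cases differ only in whether the TIDL delegate is attached to the TFLite interpreter. Here is a rough sketch of both configurations, assuming the standard tflite_runtime API and the libtidl_tfl_delegate.so shipped with edgeai-tidl-tools (the artifacts folder path is a placeholder and the option names may differ between versions):

    # Rough sketch; the delegate option names and the artifacts folder path are
    # placeholders - adjust them to your installation.
    import tflite_runtime.interpreter as tflite

    model_path = '/home/root/edgeai-modelzoo/models/vision/classification/imagenet1k/mlperf/mobilenet_v1_1.0_224.tflite'

    # ARM only: no delegate, the whole graph runs on the Cortex-A72 cores.
    interpreter_arm = tflite.Interpreter(model_path=model_path)

    # C7x+MMA: attach the TIDL delegate so the compiled subgraphs are offloaded.
    tidl_delegate = tflite.load_delegate(
        'libtidl_tfl_delegate.so',
        {'artifacts_folder': '/path/to/model-artifacts'})  # placeholder path
    interpreter_c7x = tflite.Interpreter(
        model_path=model_path,
        experimental_delegates=[tidl_delegate])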

    Regards,

    Azer