PROCESSOR-SDK-AM68A: Neural network profiling using SDK 10.00.08

Part Number: PROCESSOR-SDK-AM68A
Other Parts Discussed in Thread: AM68A

Hi,

I'm currently using TIDL (SDK v10.00.08) to profile a custom neural network on an AM68A board. I noticed that during model artifact generation, two CSV files are created (see the attached screenshot) with different per-layer runtimes but similar memory-footprint information. What is the difference between these two files?


  • Hi Mehdi; both files log TIDL performance parameters. One of them records individual per-layer timings, and the high-speed memory (L2) usage is recorded as well.
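
    As a quick illustration of how such a per-layer log can be inspected (the filename and column headers below are placeholders, not the exact names the tools generate), a short Python snippet can summarize the layer timings and memory usage:

        import pandas as pd

        # Placeholder filename: substitute the actual CSV produced during
        # artifact generation. Column names below are assumptions as well.
        df = pd.read_csv("layer_perf_info.csv")

        print("Number of layers:", len(df))
        # Assumed per-layer timing column
        print("Total layer time (ms):", df["Time (ms)"].sum())
        # Assumed L2 usage column
        print("Peak L2 usage (KB):", df["L2 (KB)"].max())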

    Thanks and regards

    Wen Li 

  • Hi Wen, thanks for your reply.

    I noticed two main differences in the contents of these two files:

    - The number of reshape and data-convert layers is not the same in both files: could that indicate that TIDL performs optimizations on the network to reduce the number of these layers? (Screenshots 1 & 2)

    - The MSMC SRAM BW is lower in the second file (i.e., the one with the reduced layer count), but the total runtime is slightly higher. If we assume an optimization strategy is what differentiates the two, is it possible to disable this optimization to keep the runtime at a minimum? (Screenshots 3 & 4; a rough comparison sketch is included below.)

    Is it possible to access the compiler documentation to better understand what kind of optimizations are run under the hood?
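
    For reference, something like the following could be used to tabulate the differences between the two files (the filenames and column names are placeholders, not the actual headers in the generated CSVs):

        import pandas as pd

        # Placeholder filenames for the two generated CSVs.
        file_a = pd.read_csv("network_perf_1.csv")
        file_b = pd.read_csv("network_perf_2.csv")

        for name, df in (("file 1", file_a), ("file 2", file_b)):
            # "Layer Type", "Time (ms)" and "MSMC BW" are assumed column names.
            counts = df["Layer Type"].value_counts()
            print(name,
                  "| Reshape:", counts.get("Reshape", 0),
                  "| DataConvert:", counts.get("DataConvert", 0),
                  "| total time (ms):", df["Time (ms)"].sum(),
                  "| total MSMC BW:", df["MSMC BW"].sum())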


    Screenshot 1: (attached image)
    Screenshot 2: (attached image)
    Screenshot 3: (attached image)
    Screenshot 4: (attached image)

  • Hi Mehdi;

    I have asked my co-workers about releasing the compiler documentation. Currently we don't have anything like that we can release, but I will ask the development/tool team whether there is any future plan for this.

    Meanwhile, you can find the TIDL documentation at this link:

    https://github.com/TexasInstruments/edgeai-tidl-tools/tree/master/docs

    I will close this ticket for now.

    But feel free to submit a new one for further questions. 

    Thanks and regards

    Wen Li