
SK-AM62A-LP: Edge AI Compile custom model

Part Number: SK-AM62A-LP

import os
import tflite_runtime.interpreter as tflite

# output_dir, calib_images, and tflite_model_path are defined earlier in the notebook.
compile_options = {
    'tidl_tools_path' : os.environ['TIDL_TOOLS_PATH'],
    'artifacts_folder' : output_dir,
    'tensor_bits' : 8,
    'accuracy_level' : 1,
    'advanced_options:calibration_frames' : len(calib_images),
    'advanced_options:calibration_iterations' : 3,
    'debug_level' : 3,
    'deny_list' : "1, 25", # For details of TFLite builtin ops please refer: github.com/.../builtin_ops.h
}

# Load the TIDL import (compilation) delegate and create the interpreter with it.
tidl_delegate = [tflite.load_delegate(os.path.join(os.environ['TIDL_TOOLS_PATH'], 'tidl_model_import_tflite.so'), compile_options)]
interpreter = tflite.Interpreter(model_path=tflite_model_path, experimental_delegates=tidl_delegate)
interpreter.allocate_tensors()
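
(For reference, a minimal sketch of the calibration loop that typically follows this setup in the example notebooks, assuming calib_images holds preprocessed arrays matching the model's input shape and dtype:)

input_details = interpreter.get_input_details()

# Invoking the interpreter over the calibration frames drives the TIDL import and
# quantization calibration; the artifacts (including allowedNode.txt) are written
# to the artifacts folder as this completes.
for img in calib_images:
    interpreter.set_tensor(input_details[0]['index'], img)
    interpreter.invoke()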

I used the custom-model-tfl.ipynb file to modify and compile our own model, but it did not generate allowedNode.txt after running. Could you please help me find out what the possible reason is?

Thank you!

  • Hello,

    Could you run that with debug_level set to 1 or 2? This is more stable for compiling custom models, as level 3 may generate traces for each layer, which is slower and is meant for inference only. You can also set the deny_list to an empty string unless you intentionally want to deny those layer types (I see this is a default setting from one of the example scripts); the adjusted options are sketched after this reply.

    Are there any other files aside from allowedNode.txt that were generated? Please also attach any logs and/or the generated files.

    Here is a doc with more info on custom model compilation: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/custom_model_evaluation.md

    Best,
    Reese
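
    For reference, a sketch of the adjusted options from the suggestion above (only debug_level and deny_list change; the other fields stay as in the original post, so output_dir and calib_images are assumed to be defined as before):

    compile_options = {
        'tidl_tools_path' : os.environ['TIDL_TOOLS_PATH'],
        'artifacts_folder' : output_dir,
        'tensor_bits' : 8,
        'accuracy_level' : 1,
        'advanced_options:calibration_frames' : len(calib_images),
        'advanced_options:calibration_iterations' : 3,
        'debug_level' : 2,   # 1 or 2 for compilation; 3 is meant for inference-time layer traces
        'deny_list' : "",    # leave empty unless specific TFLite builtin ops must stay on the CPU
    }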

  • Thank you!

    Previously, we had removed the compile option 'advanced_options:calibration_frames' : len(calib_images). We have now added a few facial images as calibration frames and restored this option, and allowedNode.txt is generated.

    Statistics:

    Inferences Per Second: 1.53 fps

    Inference Time Per Image: 654.85 ms

    DDR BW Per Image: 360.51 MB

    The model we use is MobileFacenet, which we use to detect faces. We use a camera to take photos for real-time prediction. When we run the compiled model on the SK-AM62A-LP, the prediction time for each image is 1 second. It seems that the accelerator is not working. Do you have any suggestions to shorten the prediction time?

  • Hello Roger,

    Based on this inference time, I agree that the model is most likely not utilizing the accelerator to its fullest extent. To run the model entirely on the accelerator, all layers need to be supported -- see that list here: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/supported_ops_rts_versions.md

    There are likely layers in your model that are not supported -- this leads to more places where data must pass from the accelerator to the CPU and back, which adds overhead on top of slower CPU-based execution. The number of times this must happen is reflected in the number of "subgraphs". The layers that are not offloaded should be shown in the compilation logs; you can get more verbose logs by setting debug_level: 2. There are also SVG files in the tempDir of the artifacts folder to help visualize the compiled model: 'runtimes_visualization.svg' shows all layers and subgraphs, and each individual subgraph likewise has an SVG with details about its layers in compiled form. A sketch of how the compiled artifacts are loaded for inference on the target is included after this reply.

    Further debugging recommendations are provided in the following page, although they are mostly focused on functional correctness of the output: https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/tidl_osr_debug.md.

    Best Regards,
    Reese
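
    For reference, a sketch of how the compiled artifacts are typically loaded on the target with the TFLite runtime and the TIDL inference delegate from edgeai-tidl-tools (libtidl_tfl_delegate.so); tflite_model_path and artifacts_folder are assumed to match the compilation step, and the zero-filled frame is only a placeholder for timing:

    import time
    import numpy as np
    import tflite_runtime.interpreter as tflite

    # Point the runtime delegate at the artifacts produced during compilation.
    runtime_options = {'artifacts_folder' : artifacts_folder}
    tidl_delegate = [tflite.load_delegate('libtidl_tfl_delegate.so', runtime_options)]
    interpreter = tflite.Interpreter(model_path=tflite_model_path, experimental_delegates=tidl_delegate)
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    frame = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])

    # Rough per-frame timing; the first invoke includes one-time setup, so time a later one.
    interpreter.set_tensor(input_details[0]['index'], frame)
    interpreter.invoke()
    start = time.time()
    interpreter.set_tensor(input_details[0]['index'], frame)
    interpreter.invoke()
    print('Inference time: %.2f ms' % ((time.time() - start) * 1000.0))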