
AM69A: How do I extract features from intermediate layers of an existing model?

Part Number: AM69A



I am trying to implement Gaussian anomaly detection. To do this, I need to extract outputs from the intermediate layers of an existing model, for example ONR-CL-6480-mobv3-lite-small, for feature extraction. The target device is the AM69A, and I would appreciate guidance on which tools to use to achieve this. Any relevant examples would be very helpful.

  • Hi Mitani-san,

    If you want to see the intermediate layer outputs in TIDL, the following instructions are for TIDL 10.1.  Make sure your tidl_tools directory is set up correctly (https://github.com/TexasInstruments/edgeai-tidl-tools/?tab=readme-ov-file#setup).

    Go to the edgeai-tidl-tools/tidl_tools directory.  To compile the model, copy the following contents into a file called "import_test":

    modelType = 2
    numParamBits = 8
    numFeatureBits = 8


    inputNetFile = "./resnet18_opset9.onnx"
    outputNetFile = "out/tidl_net.bin"
    outputParamsFile = "out/tidl_io_buff"


    addDataConvertToNet = 0
    foldPreBnConv2D = 0
    rawDataInElementType = 0
    inElementType = 0
    outElementType = 0


    tidlStatsTool = "./PC_dsp_test_dl_algo.out"
    perfSimTool = "./ti_cnnperfsim.out"
    graphVizTool = "./tidl_graphVisualiser.out"

    inWidth = 256
    inHeight = 256
    inNumChannels = 3
    numFrames = 1
    inFileFormat = 0
    inData = "./airshow.jpg"
    perfSimConfig = "device_config.cfg"
    debugTraceLevel = 4

    This assumes the standard model resnet18_opset9.onnx and airshow.jpg exist in the tidl_tools/ directory.  Then run (from tidl_tools/):

    ./tidl_model_import.out import_test

    Next you will need to run inference.  Copy the following contents to a file called inference_test.

    rawDataInElementType = 0
    inDataFormat = 1
    numFrames = 1
    netBinFile = "out/tidl_net.bin"
    ioConfigFile = "out/tidl_io_buff1.bin"
    inData = "./airshow.jpg"
    outData = "out/tidl_out.bin"
    debugTraceLevel = 4
    writeTraceLevel = 3
    postProcType = 1
    flowCtrl = 1

    Then run the model.

    ./PC_dsp_test_dl_algo.out s:inference_test

    This will create a number of files in the tidl_tools/trace/ directory.  These files are raw binary dumps of 8-bit ints (*.y files) or float32 values (*.bin files).  Here is a short Python script to read them.

    import json
    import numpy as np

    layer = 'inference_config_new_0144_00001_00001_00320x00320_float.bin'
    float_data = np.fromfile(layer, dtype=np.float32)
    with open(layer + '.txt', 'w') as f:
        json.dump(float_data.tolist(), f, indent=4)

    Change the dtype to np.int8 if reading a .y file.
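The same pattern works for the int8 .y traces. A minimal self-contained sketch (the file name here is synthetic, just to demonstrate the dtype change; your real trace files live under tidl_tools/trace/):

```python
import numpy as np

# Write a small synthetic int8 buffer to stand in for a .y trace file
np.array([-128, 0, 127], dtype=np.int8).tofile('example_trace.y')

# Read it back the same way as the float script, but with dtype=np.int8
int_data = np.fromfile('example_trace.y', dtype=np.int8)
print(int_data.tolist())  # [-128, 0, 127]
```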

    Regards,

    Chris

      


  • Hello, Chris.
    Thank you for the detailed explanation. I understand that the first thing I need to do is set up edgeai-tidl-tools. I still don't fully understand the content of the script you provided, but is it possible to extract features from multiple intermediate layers simultaneously with this method? For example, ResNet18 has 18 layers, so could I extract up to 18 feature maps at once? Thank you.

  • With TIDL, yes, but you need to follow the instructions I gave you above.  If you just want to dump ONNX outputs (without TIDL), you first need to update your model to add an output for each layer.  Here is a script that will do that (change model_name.onnx to your model).  It will output a file called model_name_ref.onnx.

    import onnx

    model_name = "model_name.onnx"
    modified_model_name = model_name.replace(".onnx", "_ref.onnx")

    def convert(model_name, modified_model_name):
        """
        Modify the ONNX model to include intermediate layer outputs
        as graph outputs. Useful for debugging and comparison.
        """
        # Load the original ONNX model
        onnx_model = onnx.load(model_name)

        # Append each node's outputs to the graph outputs
        for node in onnx_model.graph.node:
            for output_name in node.output:
                value_info = onnx.ValueInfoProto()
                value_info.name = output_name
                onnx_model.graph.output.append(value_info)

        onnx.save(onnx_model, modified_model_name)

    convert(model_name, modified_model_name)

    This will take a model and add an output node to each layer.

    Then you can write each output layer to a file or to STDOUT.  Here is an example of loading the _ref model and generating .bin files for each output in an onnx/ subdirectory.  You will have to provide an array of input data (the input_data variable in the script).

    import os
    import numpy as np
    import onnxruntime as ort

    model_path = "your_model_ref.onnx"
    session = ort.InferenceSession(model_path)

    # Get model input and output details
    input_name = session.get_inputs()[0].name
    output_details = session.get_outputs()

    # input_data: a numpy array matching the model's input shape
    output = list(session.run(None, {input_name: [input_data]}))

    output_dict = {}
    for i in range(len(output_details)):
        output_dict[output_details[i].name] = output[i]

    os.makedirs('onnx', exist_ok=True)
    base_name = os.path.basename(model_path).split('.')[0]
    for output_name, output_tensor in output_dict.items():
        output_name = output_name.replace('/', '_')
        out = np.array(output_tensor, dtype=np.float32)
        out.tofile('onnx/' + base_name + '_' + output_name + '.bin')
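Since the original goal was Gaussian anomaly detection, here is one way the dumped features could be used, sketched with synthetic stand-in features (the feature dimension, sample count, and regularization constant are assumptions, not values from TIDL):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for flattened intermediate-layer features of "good" samples
train_feats = rng.normal(0.0, 1.0, size=(200, 8))

# Fit a multivariate Gaussian: mean and (regularized) covariance
mu = train_feats.mean(axis=0)
cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(8)
cov_inv = np.linalg.inv(cov)

def mahalanobis(x):
    """Distance of a feature vector from the fitted Gaussian."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# An in-distribution sample scores low; a far-off sample scores high
normal_score = mahalanobis(rng.normal(0.0, 1.0, size=8))
anomaly_score = mahalanobis(np.full(8, 6.0))
print(normal_score < anomaly_score)  # True
```

Thresholding the Mahalanobis distance (e.g. at a chi-square quantile for the feature dimension) then flags anomalies.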

    Regards,

    Chris

  • So extracting features from multiple intermediate layers simultaneously means creating a new model, based on the existing one, that has multiple outputs. I understand now. Thank you very much.

    Lastly, may I ask just one more question? Is there any special procedure required to apply the pre-trained parameters of the original model to the newly created model with multiple outputs?

    Thank you in advance.

  • Hi Mitani-san,

    I will close this thread, but please start a new thread with the new question.  I will pick it up in the new E2E thread; we keep these threads searchable for future reference, so each one needs to focus on a single topic.  I look forward to your new question in a new E2E thread.

    Regards,

    Chris

  • I understand.

    Based on the information you provided, I will tackle my task.
    I appreciate you giving me clear direction, and I am grateful for your extensive knowledge.
    Thank you very much.