AM68A: FastBEV model deployment on board

Part Number: AM68A

I compiled the FastBEV model using edgeai-benchmark and generated a model package file. How can I run this model on the board using edgeai-gst-apps? Does the current edgeai-gst-apps code support the FastBEV model, and if not, how should I deploy it on the board?

  • Hi Yue,

    Assuming you used the OSRT tools, the outputs will be in edgeai-tidl-tools/model-artifacts/<model>. If you compiled it with TIDLRT, the outputs will be placed in the location specified in your import file.

    To run this on the device, run it from /opt/tidl_test. You will need an inference file like the one below.

    inFileFormat = 0
    rawDataInElementType = 0
    inDataFormat = 1
    numFrames = 1
    netBinFile = "out/tidl_net.bin"
    ioConfigFile = "out/tidl_io_buff1.bin"
    inData = "../jet.bmp"
    outData = "out/tidl_out.bin"

    Where netBinFile and ioConfigFile give the location of your model artifacts (net.bin and io.bin), and inData is your input data file.
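
    Since the inference file is plain key=value text, it can also be generated from a script. A minimal sketch (the write_infer_config helper and its parameter names are hypothetical; the keys and values match the example above):

    ```python
    # Hypothetical helper: writes a TIDL inference config file in the
    # key=value format shown above. Only the keys from the example are emitted.
    def write_infer_config(path, net_bin, io_bin, in_data, out_data,
                           num_frames=1):
        lines = [
            "inFileFormat = 0",
            "rawDataInElementType = 0",
            "inDataFormat = 1",
            f"numFrames = {num_frames}",
            f'netBinFile = "{net_bin}"',
            f'ioConfigFile = "{io_bin}"',
            f'inData = "{in_data}"',
            f'outData = "{out_data}"',
        ]
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")

    # Reproduce the example inference file from this thread.
    write_infer_config("infer_fastbev.txt",
                       net_bin="out/tidl_net.bin",
                       io_bin="out/tidl_io_buff1.bin",
                       in_data="../jet.bmp",
                       out_data="out/tidl_out.bin")
    ```

    You would then point the test binary at the generated file on the device.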

    To run this, while still in /opt/tidl_test:

    ./TI_DEVICE_a72_test_dl_algo_host_rt.out s:<inference_file_name>

    Regards,

    Chris