This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

TDA4VM: Neo-AI testing issue

Part Number: TDA4VM

Hi Champs:

I'm testing Neo-AI with AWS.

I ran into the error below. Could you guide me on how to solve it?

Thanks.

BR Rio

Installed /root/.local/lib/python3.7/site-packages/dlr-1.8.0-py3.7.egg
Processing dependencies for dlr==1.8.0
Searching for requests==2.18.4
Best match: requests 2.18.4
Adding requests 2.18.4 to easy-install.pth file

Using /usr/lib/python3/dist-packages
Searching for numpy==1.20.1
Best match: numpy 1.20.1
Processing numpy-1.20.1-py3.7-linux-x86_64.egg
numpy 1.20.1 is already the active version in easy-install.pth
Installing f2py script to /root/.local/bin
Installing f2py3 script to /root/.local/bin
Installing f2py3.7 script to /root/.local/bin

Using /root/.local/lib/python3.7/site-packages/numpy-1.20.1-py3.7-linux-x86_64.egg
Searching for distro==1.5.0
Best match: distro 1.5.0
Processing distro-1.5.0-py3.7.egg
distro 1.5.0 is already the active version in easy-install.pth
Installing distro script to /root/.local/bin

Using /root/.local/lib/python3.7/site-packages/distro-1.5.0-py3.7.egg
Finished processing dependencies for dlr==1.8.0
root@ubuntu-vm:/opt/Neo-AI/neo-ai-dlr/python# cd ..
root@ubuntu-vm:/opt/Neo-AI/neo-ai-dlr# cd tests/python/integration/
root@ubuntu-vm:/opt/Neo-AI/neo-ai-dlr/tests/python/integration# python load_and_run_tvm_model.py
Traceback (most recent call last):
File "load_and_run_tvm_model.py", line 2, in <module>
from dlr import DLRModel
ImportError: No module named dlr
root@ubuntu-vm:/opt/Neo-AI/neo-ai-dlr/tests/python/integration#
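
An `ImportError: No module named dlr` like the one above usually means the `python` command resolves to a different interpreter (often Python 2) than the one the dlr egg was installed for (python3.7 here). A minimal sketch for checking which interpreter you are running and whether it can see the package:

```python
# Print the running interpreter and check whether 'dlr' is importable
# from it -- 'python' and 'python3' may resolve to different installs.
import importlib.util
import sys

print("interpreter:", sys.executable, sys.version.split()[0])

spec = importlib.util.find_spec("dlr")
if spec is None:
    print("dlr is NOT importable from this interpreter")
else:
    print("dlr found at:", spec.origin)
```

Running this with both `python` and `python3` shows which command picks up the install under `/root/.local/lib/python3.7/site-packages`.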

  • Hi, 

I found I need to use python3 (not python) to run the test.

Please see my test log below. Is this correct? I'm confused.

    root@ubuntu-vm:/opt/Neo-AI/neo-ai-dlr/tests/python/integration# python3 load_and_run_tvm_model.py

    CALL HOME FEATURE ENABLED

    You acknowledge and agree that DLR collects the following metrics to help improve its performance.
    By default, Amazon will collect and store the following information from your device:

    record_type: <enum, internal record status, such as model_loaded, model_>,
    arch: <string, platform architecture, eg 64bit>,
    osname: <string, platform os name, eg. Linux>,
    uuid: <string, one-way non-identifable hashed mac address, eg. 8fb35b79f7c7aa2f86afbcb231b1ba6e>,
    dist: <string, distribution of os, eg. Ubuntu 16.04 xenial>,
    machine: <string, retuns the machine type, eg. x86_64 or i386>,
    model: <string, one-way non-identifable hashed model name, eg. 36f613e00f707dbe53a64b1d9625ae7d>

    If you wish to opt-out of this data collection feature, please follow the steps below:
    1. Disable it with through code:
    from dlr.counter.phone_home import PhoneHome
    PhoneHome.disable_feature()
    2. Or, create a config file, ccm_config.json inside your DLR target directory path, i.e. python3.6/site-packages/dlr/counter/ccm_config.json. Then added below format content in it, {"enable_phone_home" : false}
    3. Restart DLR application.
    4. Validate this feature is disabled by verifying this notification is no longer displayed, or programmatically with following command:
    from dlr.counter.phone_home import PhoneHome
    PhoneHome.is_enabled() # false as disabled
    Preparing model artifacts for resnet18_v1 ...
    Preparing model artifacts for mobilenet_v1_0.75_224_quant ...
    Preparing model artifacts for 4in2out ...
    Preparing model artifacts for assign_op ...
    2021-03-12 16:28:00,378 INFO Could not find libdlr.so in model artifact. Using dlr from /root/.local/lib/python3.7/site-packages/dlr-1.8.0-py3.7.egg/dlr/libdlr.so
    Testing inference on resnet18...
    2021-03-12 16:28:02,067 INFO Could not find libdlr.so in model artifact. Using dlr from /root/.local/lib/python3.7/site-packages/dlr-1.8.0-py3.7.egg/dlr/libdlr.so
    Testing inference on mobilenet_v1_0.75_224_quant...
    2021-03-12 16:28:03,293 INFO Could not find libdlr.so in model artifact. Using dlr from /root/.local/lib/python3.7/site-packages/dlr-1.8.0-py3.7.egg/dlr/libdlr.so
    Testing inference on mobilenet_v1_0.75_224_quant with float32 input...
    2021-03-12 16:28:04,168 ERROR error in running inference DLRModelImpl input data with name input should have dtype uint8 but float32 is provided
    Traceback (most recent call last):
    File "/root/.local/lib/python3.7/site-packages/dlr-1.8.0-py3.7.egg/dlr/api.py", line 112, in run
    return self._impl.run(input_values)
    File "/root/.local/lib/python3.7/site-packages/dlr-1.8.0-py3.7.egg/dlr/dlr_model.py", line 446, in run
    self._set_input(key, value)
    File "/root/.local/lib/python3.7/site-packages/dlr-1.8.0-py3.7.egg/dlr/dlr_model.py", line 318, in _set_input
    format(name, input_dtype, data.dtype.name))
    ValueError: input data with name input should have dtype uint8 but float32 is provided
    2021-03-12 16:28:04,169 INFO Could not find libdlr.so in model artifact. Using dlr from /root/.local/lib/python3.7/site-packages/dlr-1.8.0-py3.7.egg/dlr/libdlr.so
    Testing multi_input/multi_output support...
    2021-03-12 16:28:05,064 INFO Could not find libdlr.so in model artifact. Using dlr from /root/.local/lib/python3.7/site-packages/dlr-1.8.0-py3.7.egg/dlr/libdlr.so
    Testing _assign() operator...
    All tests passed!
    root@ubuntu-vm:/opt/Neo-AI/neo-ai-dlr/tests/python/integration#
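
As an aside, step 2 of the opt-out instructions quoted in the log (creating `ccm_config.json` under the dlr `counter` directory) can be sketched as below. The target directory is an assumption here: locate your actual dlr install path first, e.g. with `importlib.util.find_spec("dlr")`, and point `counter_dir` at its `counter` subdirectory.

```python
# Sketch of the DLR call-home opt-out (step 2 in the notice above):
# write {"enable_phone_home": false} to ccm_config.json inside the
# dlr/counter directory. "dlr/counter" below is a placeholder --
# substitute the counter directory of your actual dlr installation.
import json
import os

counter_dir = os.path.join("dlr", "counter")  # assumed path, adjust to your install
os.makedirs(counter_dir, exist_ok=True)

with open(os.path.join(counter_dir, "ccm_config.json"), "w") as f:
    json.dump({"enable_phone_home": False}, f)
```

After restarting the DLR application, the notice should no longer be displayed.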

  • Hi Rio Chan, output seems OK.

test_mobilenet_v1_0_75_224_quant_wrong_input_type() is failing due to a data type mismatch (dtype uint8 expected, but float32 provided); I remember seeing this error. The other tests passed. Just FYI, this is the list of tests run by this Python script:

    •     test_resnet()                                                                                
    •     test_mobilenet_v1_0_75_224_quant()                                                           
    •     test_multi_input_multi_output()                                                               
    •     test_mobilenet_v1_0_75_224_quant_wrong_input_type()                                         
    •     test_assign_op()

    Thank you,

    Paula                                     
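
For reference, the error in the log comes from feeding float32 data to a quantized model that expects uint8 input. If you hit this with your own model (outside the intentional negative test), rescaling and casting the input before `DLRModel.run()` avoids it. A minimal numpy sketch; the input shape is an assumed example:

```python
import numpy as np

# A quantized model expects uint8 input; float32 pixels in [0, 1]
# must be rescaled and cast first, otherwise DLR raises
# "input data ... should have dtype uint8 but float32 is provided".
float_img = np.random.rand(1, 224, 224, 3).astype("float32")    # assumed example input
uint8_img = np.clip(float_img * 255.0, 0, 255).astype("uint8")  # dtype the model expects
```

The cast input can then be passed to the model, e.g. `model.run({"input": uint8_img})`, where `"input"` is the input name reported in the error message.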

Hi Rio, I got clarification from a colleague that the "error" is actually part of a negative test: it deliberately feeds the quantized model float input and checks that an error is raised.

    Thank you,

    Paula

Hi Paula:

    Thanks.

But I don't fully understand what you mean.

So, do we need to fix this error, or can we ignore it?

    BR Rio

Rio, my apologies that my message was confusing. Yes, you can ignore the error.

    Paula