
PROCESSOR-SDK-RTOS-J784S4: J784S4 SDK Dependency Version Conflict

Part Number: PROCESSOR-SDK-RTOS-J784S4
Other Parts Discussed in Thread: TDA4VH, AM67A, AM69A

There is a library dependency version conflict in the SDK when I try to build with TARGET_PLATFORM=PC, which I have been hitting on both Docker and my local machine. I have tried various ways to resolve this, such as compiling with different versions of this library (3.11, 3.12, 3.13, 3.19, 3.21), but none have worked. (See attached screenshots.)

  • Hello Anna;

    Generally speaking, the build setup script will take care of everything, if you run it correctly at the beginning.

    So make sure to follow the SDK building steps carefully.

    For your convenience, I wrote the steps below.   

    Best regards

    Wen Li

    === How to build j784s4 RTOS PSDK ===

    1. In the SDK root directory, install the RTOS PSDK package

    •  ./sdk_builder/scripts/setup_psdk_rtos.sh

    // if you want to do "make tidl_pc_tools", then use the following option

    • ./sdk_builder/scripts/setup_psdk_rtos.sh --install_tidl_deps

    2. Set up the environment (or you can edit the file "sdk_builder/build_flags.mak" and modify the 4 variables below)

    • export BUILD_EMULATION_MODE=no
    • export BUILD_TARGET_MODE=yes
    • export SOC=j784s4
    • export PSDK_INSTALL_PATH=$(pwd)

    3. After the environment setup is finished, make sure $SOC has been set correctly

    • echo $SOC
    • it should show "j784s4"

    4. Run the following to build the full PSDK RTOS, with "N" being the number of parallel threads

    • cd sdk_builder
    • make sdk_scrub
    • make sdk -jN

    // "make sdk_scrub" ensures a clean build. Or you can simply use a new directory
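    For scripting convenience, the step-2 variables and the step-3 check can be folded into a tiny helper. This is a hypothetical sketch, not part of the SDK; the variable names and expected values come straight from the steps above.

```python
import os

def check_build_env(expected_soc="j784s4"):
    """Sanity-check the PSDK build variables (step 2) before running "make sdk".
    Hypothetical helper; raises if a variable is missing or mismatched."""
    expected = {
        "BUILD_EMULATION_MODE": "no",
        "BUILD_TARGET_MODE": "yes",
        "SOC": expected_soc,
    }
    for name, want in expected.items():
        got = os.environ.get(name)
        if got != want:
            raise RuntimeError(f"{name} is {got!r}, expected {want!r}")
    if not os.environ.get("PSDK_INSTALL_PATH"):
        raise RuntimeError("PSDK_INSTALL_PATH is not set")
    return os.environ["SOC"]
```

    Call it once before kicking off the build, so a stale shell environment fails fast instead of producing a confusing build error.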

  • Hi, thank you for your response. I am trying to use PC emulation mode, so I changed BUILD_EMULATION_MODE=yes and ran make sdk -j8. This error comes up. Am I missing additional packages?

  • So just to confirm: I need to wait for approval of my request for the additional add-on packages to run PC emulation mode? As indicated here: https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-j784s4/11_01_01_01/exports/docs/vision_apps/docs/user_guide/ENVIRONMENT_SETUP.html#:~:text=xx_xx_xx_xx.tar.gz-,Step%201b%3A%20Download%20and%20Install%20PSDK%20RTOS%20Add%2Don%20to%20run%20in%20PC%20Emulation%20Mode%20(Optional%2C%20only%20needed%20for%20PC%20emulation%20mode),-Download%20and%20Run

    Currently I get: fatal error: LibDenseOpticalFlow.h: No such file or directory

    So I am just wondering whether this is a missing file or whether I built the j784s4 RTOS PSDK wrong.

    Thanks.

  • Hi;

    I am not sure what kind of PC emulation you want to do. Do you want to do model inference?

    The TIDL edgeAI tools are better suited to the PC side. You can download them from GitHub; the documentation is on GitHub as well.

    Here is the link:

    https://github.com/TexasInstruments/edgeai-tidl-tools.git

    You can use "git clone" to get the repository.

    Best regards

    Wen Li

  • Thank you for your response. Our goal is to try and see if our model is compatible with the TDA4VH chip, which runs on the J784S4 processor. We’d like to emulate the model inference flow on PC to validate compatibility with TIDL. I initially thought this would be the right website to do model inference https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/06_01_01_12/exports/docs/tidl_j7_01_00_01_00/ti_dl/docs/user_guide_html/md_tidl_sample_test.html#:~:text=host%20emulation%20run.-,Steps%20to%20run%20the%20Host%20Emulation%20on%20PC,-Usage%3A 

    But basically I want to emulate inference on PC to check TIDL compatibility — would edgeAI tools be the right tool for that?

  • Hi Anna,

    Edgeai tools are the way to go if you want to do this yourself. Also, if you provide us the ONNX model you want to run, we can set it up for the TDA4VH. If the model can run on the TDA4VH device, we can provide you with a sample import and inference file to get you started. That may be a quicker path to completion.

    Regards,

    Chris

  • Hi Chris, thank you so much for your help. 

    Is there a way we can get in touch to send you the ONNX model?

    Thanks,

    Anna

  • Hi Anna,

    Please send me a message at s-tsongas@ti.com with a Box/Dropbox etc. location containing the model and config files. If you do not have access to a cloud location, I will set you up with my personal Box location.

    Regards,

    Chris

  • Hi Chris,

    Thank you very much for offering this help! I will probably try both methods of trying to do model emulation myself using Edgeai-tools and sending you a reduced model (IP issues).

    From the README for the edgeai tools, it isn't obvious how we can emulate our model for the TDA4VH. Are there any additional docs or folders in that repo that might explain it a bit more?

    Furthermore, I wanted to clarify that we do not have any hardware at the moment, so I want to confirm this repo will still work for us if we do not have any of the evaluation boards.

  • Hi Anna;

    If you can follow the out-of-box example that comes with the TIDL tools, you should have a good understanding of the flow that runs on the PC side. Here is the link.

     https://github.com/TexasInstruments/edgeai-tidl-tools?tab=readme-ov-file

    Please pay attention to the "Setup" section and everything after it.

    You may have to set up a docker first, if you have not done so. Please follow the link below for setting up a docker.

    https://github.com/TexasInstruments/edgeai-tidl-tools/blob/master/docs/advanced_setup.md#docker-based-setup-for-x86_pc

    Working on the PC only, without hardware, you can compile the model, generate the model's artifacts, and do some model inference simulation. But the timing and performance results are not accurate; for those you will have to use the EVM.
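    For reference, in the edgeai-tidl-tools OSRT examples the PC-side compile step boils down to creating an onnxruntime session with the TIDL compilation provider. Below is a minimal sketch; the option names ("tidl_tools_path", "artifacts_folder") are assumptions based on those examples, so check them against your SDK version.

```python
# Hypothetical sketch of the PC-side model compile step, modeled on the
# edgeai-tidl-tools OSRT examples. Paths and option names are assumptions.
def tidl_compile_args(tidl_tools_path, artifacts_dir):
    """Build the providers/provider_options lists for an onnxruntime
    compilation session; CPUExecutionProvider is the fallback."""
    options = {
        "tidl_tools_path": tidl_tools_path,  # from the TIDL tools install
        "artifacts_folder": artifacts_dir,   # where compiled artifacts land
    }
    providers = ["TIDLCompilationProvider", "CPUExecutionProvider"]
    return providers, [options, {}]

def compile_model(model_path, tidl_tools_path, artifacts_dir):
    # Needs the TI-patched onnxruntime wheel installed by the tools' setup.sh;
    # the stock pip onnxruntime does not know the TIDL providers.
    import onnxruntime as ort
    providers, provider_options = tidl_compile_args(tidl_tools_path, artifacts_dir)
    return ort.InferenceSession(
        model_path, providers=providers, provider_options=provider_options
    )
```

    Note that this is exactly why the "Unknown Provider Type: TIDLCompilationProvider" error below appears when the stock onnxruntime wheel is installed instead of the TI-patched one.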

    Regards

    Wen Li

  • Hi,

    Thank you for your help. When I try to compile the example using ./onnxrt_ep.py -m cl-ort-resnet18-v1 -c, I get the following error about TIDLCompilationProvider: Error Unknown Provider Type: TIDLCompilationProvider

    Running shape inference on model ../../../models/public/resnet18_opset9.onnx
    
    /usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:123: UserWarning: Specified provider 'TIDLCompilationProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
    warnings.warn(
    *************** EP Error ***************
    EP Error Unknown Provider Type: TIDLCompilationProvider when using ['TIDLCompilationProvider', 'CPUExecutionProvider']
    Falling back to ['CPUExecutionProvider'] and retrying.
    ****************************************
    Process Process-1:
    Traceback (most recent call last):
    File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
    File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
    File "/home/root/examples/osrt_python/ort/./onnxrt_ep.py", line 402, in run_model
    imgs, output, proc_time, sub_graph_time, ddr_bw, height, width = infer_image(sess, input_images, config)
    File "/home/root/examples/osrt_python/ort/./onnxrt_ep.py", line 219, in infer_image
    copy_time, sub_graphs_proc_time, totaltime, ddr_bw = get_benchmark_output(sess)
    File "/home/root/examples/osrt_python/ort/./onnxrt_ep.py", line 129, in get_benchmark_output
    benchmark_dict = interpreter.get_TI_benchmark_data()
    AttributeError: 'InferenceSession' object has no attribute 'get_TI_benchmark_data'
     

    Do you have any idea how to fix this?

  • Also, even after I add my model to model_configs.py, it seems like the compiler does not recognize it, since I do not see it in the terminal.

    Is this a formatting issue? Here is how I insert it at the top of model_configs:

    "ANNA-dense-encoder": create_model_config(
        task_type="classification",
        source=dict(
            model_url="/home/anna.tao/edgeai-tidl-tools/models/public/dense_sensing.onnx",
            infer_shape=True,
        ),
        preprocess=dict(
            resize=0,
            crop=0,
            data_layout="NCHW",
            resize_with_pad=False,
            reverse_channels=False,
        ),
        session=dict(
            session_name="onnxrt",
            model_path=os.path.join(models_base_path, "dense_sensing.onnx"),
            input_mean=[0, 0, 0],  # not used
            input_scale=[1, 1, 1],  # not used
            input_optimization=False,  # disable TI's input optimizer (important!)
        ),
        postprocess=dict(),
        extra_info=dict(
            num_images=1,  # your model is NOT an image model
            num_classes=64,  # your output size (embedding dim)
        ),
    ),
    And here is the output in terminal
    python3 onnxrt_ep.py
    2025-11-25 23:57:13.220127280 [W:onnxruntime:Default, device_discovery.cc:164 DiscoverDevicesForPlatform] GPU device discovery failed: device_discovery.cc:89 ReadFileContents Failed to open file: "/sys/class/drm/card0/device/vendor"
    Available execution providers :  ['AzureExecutionProvider', 'CPUExecutionProvider']
    
    Running 4 Models - ['cl-ort-resnet18-v1', 'od-ort-ssd-lite_mobilenetv2_fpn', 'cl-ort-resnet18-v1_low_latency', 'ss-ort-deeplabv3lite_mobilenetv2']
    
    
    Running_Model :  cl-ort-resnet18-v1  
    
    
    Running_Model :  od-ort-ssd-lite_mobilenetv2_fpn  
    
    
    Running_Model :  cl-ort-resnet18-v1_low_latency  
    
    
    Running_Model :  ss-ort-deeplabv3lite_mobilenetv2  
    
    /usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:123: UserWarning: Specified provider 'TIDLExecutionProvider' is not in available provider names.Available providers: 'AzureExecutionProvider, CPUExecutionProvider'
      warnings.warn(
    *************** EP Error ***************
    EP Error Unknown Provider Type: TIDLExecutionProvider when using ['TIDLExecutionProvider', 'CPUExecutionProvider']
    Falling back to ['CPUExecutionProvider'] and retrying.
    So I'm not sure if it is even recognizing my model, even though I put it under the models/public folder.
  • I ran these setup instructions:

    git clone github.com/.../edgeai-tidl-tools.git
    cd edgeai-tidl-tools
    git checkout <TAG Compatible with your SDK version>
    # Supported SOC name strings am62, am62a, (am68a or j721s2), (am68pa or j721e), (am69a or j784s4), (am67a or j722s)
    export SOC=<Your SOC name>
    source ./setup.sh


    cd edgeai-tidl-tools
    source ./setup_env.sh ${SOC}

    mkdir build && cd build
    cmake ../examples && make -j2 && cd ..
    source ./scripts/run_python_examples.sh -o
    python3 ./scripts/gen_test_report.py
  • I also get these errors, if it helps. Thanks.

    Accelerator Fatal Error: This file was compiled: -acc=gpu -gpu=cc50 -gpu=cc60 -gpu=cc60 -gpu=cc70 -gpu=cc75 -gpu=cc80 -gpu=cc80 -gpu=cc86 -gpu=cc90 -acc=host o
    Rebuild this file with -gpu=cc89 to use NVIDIA Tesla GPU 0
     File: /work/ti/OSRT/OSRTV2/Build/j784s4/c7x-mma-tidl/ti_dl/algo/src/ref/tidl_conv2d_base_ref.c
     Function: _Z24TIDL_refConv2dKernelFastILi3EffffEvPT0_PT1_PT2_PT3_S7_S7_iiiiiiiiiiiiiiiiiiijiiiiii:462
     Line: 473
    

  • Hi; let us start with the simple thing first: make sure you can run the examples successfully.

    I went through the entire running process today, and I wrote down all the steps that I performed. You can follow these steps and see if you get the same result as I did.

    1. Clone Github repo and checkout the corresponding TIDL/SDK version

    user@pc:~$ git clone github.com/.../edgeai-tidl-tools.git
    user@pc:~$ cd edgeai-tidl-tools
    // use "git tag" to list the available versions
    user@pc:~/edgeai-tidl-tools$ git checkout <TAG compatible with your SDK version> // e.g. git checkout 11_00_06_00

    2. One time setup for Docker:

    user@pc:~/edgeai-tidl-tools$ source ./scripts/docker/setup_docker.sh

    3. Build the docker image:
    user@pc:~/edgeai-tidl-tools$ source ./scripts/docker/build_docker.sh

    4. Run the docker image:


    sudo docker run -w /home/root -it --rm --shm-size=4096m -v /shared:/shared --mount source=$(pwd),target=/home/root,type=bind edgeai_tidl_tools_x86_ubuntu_22


    5. After starting the docker container, run the following steps inside the container you just created

    export SOC=am67a
    source ./setup.sh
    rm -rf build lib bin
    mkdir build
    cd build/
    cmake ../examples/
    make
    cd ..
    source ./scripts/run_python_examples.sh
    python3 ./scripts/gen_test_report.py

    6. You should get the result shown in the screenshot.

    7. If you get here, then as a next step you can replace the models with your own.

    Please let us know if you need further help.

    Best regards

    Wen Li 

  • The command in step 4 is a single-line command; make sure you enter it as one line.

  • Hi, I thought we use SOC=am69a for TDA4VH?

  • Hi Wen Li,

    I get these errors when I first run the setup script:

    pip3 install --no-input git+https://github.com/NVIDIA/TensorRT@release/8.5#subdirectory=tools/onnx-graphsurgeon
    Collecting git+https://github.com/NVIDIA/TensorRT@release/8.5#subdirectory=tools/onnx-graphsurgeon
      Cloning https://github.com/NVIDIA/TensorRT (to revision release/8.5) to /tmp/pip-req-build-a58ayj_7
      Running command git clone --filter=blob:none --quiet https://github.com/NVIDIA/TensorRT /tmp/pip-req-build-a58ayj_7
      Running command git checkout -b release/8.5 --track origin/release/8.5
      Switched to a new branch 'release/8.5'
      Branch 'release/8.5' set up to track remote branch 'release/8.5' from 'origin'.
      Resolved https://github.com/NVIDIA/TensorRT to commit 68b5072fdb9df6b6edab1392b02a705394b2e906
      Running command git submodule update --init --recursive -q
      Installing build dependencies ... done
      Getting requirements to build wheel ... error
      error: subprocess-exited-with-error
      
      × Getting requirements to build wheel did not run successfully.
      │ exit code: 1
      ╰─> [25 lines of output]
          Traceback (most recent call last):
            File "/usr/local/lib/python3.10/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 389, in <module>
              main()
            File "/usr/local/lib/python3.10/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 373, in main
              json_out["return_val"] = hook(**hook_input["kwargs"])
            File "/usr/local/lib/python3.10/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 143, in get_requires_for_build_wheel
              return hook(config_settings)
            File "/tmp/pip-build-env-lvaonlpc/overlay/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 331, in get_requires_for_build_wheel
              return self._get_build_requires(config_settings, requirements=[])
            File "/tmp/pip-build-env-lvaonlpc/overlay/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 301, in _get_build_requires
              self.run_setup()
            File "/tmp/pip-build-env-lvaonlpc/overlay/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 512, in run_setup
              super().run_setup(setup_script=setup_script)
            File "/tmp/pip-build-env-lvaonlpc/overlay/local/lib/python3.10/dist-packages/setuptools/build_meta.py", line 317, in run_setup
              exec(code, locals())
            File "<string>", line 19, in <module>
            File "/tmp/pip-req-build-a58ayj_7/tools/onnx-graphsurgeon/onnx_graphsurgeon/__init__.py", line 1, in <module>
              from onnx_graphsurgeon.exporters.onnx_exporter import export_onnx
            File "/tmp/pip-req-build-a58ayj_7/tools/onnx-graphsurgeon/onnx_graphsurgeon/exporters/__init__.py", line 1, in <module>
              from onnx_graphsurgeon.exporters.base_exporter import BaseExporter
            File "/tmp/pip-req-build-a58ayj_7/tools/onnx-graphsurgeon/onnx_graphsurgeon/exporters/base_exporter.py", line 18, in <module>
              from onnx_graphsurgeon.ir.graph import Graph
            File "/tmp/pip-req-build-a58ayj_7/tools/onnx-graphsurgeon/onnx_graphsurgeon/ir/graph.py", line 23, in <module>
              import numpy as np
          ModuleNotFoundError: No module named 'numpy'
          [end of output]
      
      note: This error originates from a subprocess, and is likely not a problem with pip.
    ERROR: Failed to build 'git+https://github.com/NVIDIA/TensorRT@release/8.5#subdirectory=tools/onnx-graphsurgeon' when getting requirements to build wheel
    installing the onnx graph optimization toolkit...
    running develop
    /usr/local/lib/python3.10/dist-packages/setuptools/_distutils/cmd.py:90: DevelopDeprecationWarning: develop command is deprecated.
    !!
    
            ********************************************************************************
            Please avoid running ``setup.py`` and ``develop``.
            Instead, use standards-based tools like pip or uv.
    
            This deprecation is overdue, please update your project and remove deprecated
            calls to avoid build errors in the future.
    
            See https://github.com/pypa/setuptools/issues/917 for details.
            ********************************************************************************
    
    !!
      self.initialize_options()

    Do you know what might have happened? 

    Thanks.

  • Yes, you can use am69a. But the most important thing for now is to make sure you can go through the entire process correctly; since you don't have hardware yet, I picked the simple case "am67a". Also, based on your post, I saw that you missed some steps, such as "build".

    If you want to use am69a, the only difference is in step 5:

    source ./scripts/run_python_examples.sh -n=1 

    Best regards

    Wen Li

  • Please create a new docker to have fresh start.

    Regards

    Wen Li

  • Hi Wen,

    I followed your steps to re-clone and recreate the Docker image; the master branch works for me. The output shows that all tests fail, though. Is that a concern?

    MEM: Deinit ... !!!
    MEM: Alloc's: 31 alloc's of 89461532 bytes 
    MEM: Free's : 31 free's  of 89461532 bytes 
    MEM: Open's : 0 allocs  of 0 bytes 
    MEM: Deinit ... Done !!!
    /home/root/scripts/osrt_cpp_scripts
    [{'Sl No.': 0, 'Runtime': 'tfl-py', 'Name': 'cl-tfl-mobilenet_v1_1.0_224', 'Output Image File': 'py_out_cl-tfl-mobilenet_v1_1.0_224_airshow.jpg', 'Output Bin File': 'py_out_cl-tfl-mobilenet_v1_1.0_224_airshow.bin', 'Functional Status': 'FAIL', 'Info': 'Output Bin File Mismatch'}, {'Sl No.': 1, 'Runtime': 'tfl-py', 'Name': 'ss-tfl-deeplabv3_mnv2_ade20k_float', 'Output Image File': 'py_out_ss-tfl-deeplabv3_mnv2_ade20k_float_ADE_val_00001801.jpg', 'Output Bin File': 'py_out_ss-tfl-deeplabv3_mnv2_ade20k_float_ADE_val_00001801.bin', 'Functional Status': 'FAIL', 'Info': 'Output Bin File Mismatch'}, {'Sl No.': 2, 'Runtime': 'tfl-py', 'Name': 'od-tfl-ssd_mobilenet_v2_300_float', 'Output Image File': 'py_out_od-tfl-ssd_mobilenet_v2_300_float_ADE_val_00001801.jpg', 'Output Bin File': 'py_out_od-tfl-ssd_mobilenet_v2_300_float_ADE_val_00001801.bin', 'Functional Status': 'FAIL', 'Info': 'Output Bin File Mismatch'}, {'Sl No.': 3, 'Runtime': 'tfl-py', 'Name': 'od-tfl-ssdlite_mobiledet_dsp_320x320_coco', 'Output Image File': 'py_out_od-tfl-ssdlite_mobiledet_dsp_320x320_coco_ADE_val_00001801.jpg', 'Output Bin File': 'py_out_od-tfl-ssdlite_mobiledet_dsp_320x320_coco_ADE_val_00001801.bin', 'Functional Status': 'FAIL', 'Info': 'Output Bin File Mismatch'}, {'Sl No.': 4, 'Runtime': 'ort-py', 'Name': 'cl-ort-resnet18-v1', 'Output Image File': 'py_out_cl-ort-resnet18-v1_airshow.jpg', 'Output Bin File': 'py_out_cl-ort-resnet18-v1_airshow.bin', 'Functional Status': 'FAIL', 'Info': 'Output Bin File Mismatch'}, {'Sl No.': 5, 'Runtime': 'ort-py', 'Name': 'od-ort-ssd-lite_mobilenetv2_fpn', 'Output Image File': 'py_out_od-ort-ssd-lite_mobilenetv2_fpn_ADE_val_00001801.jpg', 'Output Bin File': 'py_out_od-ort-ssd-lite_mobilenetv2_fpn_ADE_val_00001801.bin', 'Functional Status': 'FAIL', 'Info': 'Output Bin File Mismatch'}, {'Sl No.': 6, 'Runtime': 'tfl-cpp', 'Name': 'cl-tfl-mobilenet_v1_1.0_224', 'Output Image File': 'cpp_out_cl-tfl-mobilenet_v1_1.0_224.jpg', 'Output Bin File': 
'cpp_out_cl-tfl-mobilenet_v1_1.0_224.bin', 'Functional Status': 'FAIL', 'Info': 'Output Bin File Mismatch'}, {'Sl No.': 7, 'Runtime': 'tfl-cpp', 'Name': 'ss-tfl-deeplabv3_mnv2_ade20k_float', 'Output Image File': 'cpp_out_ss-tfl-deeplabv3_mnv2_ade20k_float.jpg', 'Output Bin File': 'cpp_out_ss-tfl-deeplabv3_mnv2_ade20k_float.bin', 'Functional Status': 'FAIL', 'Info': 'Output Bin File Mismatch'}, {'Sl No.': 8, 'Runtime': 'ort-cpp', 'Name': 'cl-ort-resnet18-v1', 'Output Image File': 'cpp_out_cl-ort-resnet18-v1.jpg', 'Output Bin File': 'cpp_out_cl-ort-resnet18-v1.bin', 'Functional Status': 'FAIL', 'Info': 'Output Bin File Mismatch'}, {'Sl No.': 9, 'Runtime': 'ort-cpp', 'Name': 'od-ort-ssd-lite_mobilenetv2_fpn', 'Output Image File': 'cpp_out_od-ort-ssd-lite_mobilenetv2_fpn.jpg', 'Output Bin File': 'cpp_out_od-ort-ssd-lite_mobilenetv2_fpn.bin', 'Functional Status': 'FAIL', 'Info': 'Output Bin File Mismatch'}]
    
    Func Pass: 0
    Func Fail: 10
    
    Please refer to the output_images and output_binaries directory for generated outputs
    TEST DONE!

  • Hi Anna;

    That does not sound right. They should all pass.

    Could you set:

    export SOC=am67a

    and do a quick test to see if all tests pass. The am69a run takes too long, and one of its tests has an issue.

    Best regards

    Wen Li  

  • Hi, so it turns out I need to compile my model using the advanced examples, since my model is not a simple classification model.

    After compilation, it produces some artifacts. How can I use these artifacts in my own application using TIDLRuntime in my own repo?

  • Hi Anna;

    Great, I am glad that you have used the advanced examples to compile your model successfully. 

    Next step, you can do some model inference to see if your model behaves as you expected.

    After that, you can look into the Vision Apps examples which come with the SDK; some of the examples are TIDL vision application related.

    Here is the link for the Vision Apps documentation.

    https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/vision_apps/docs/user_guide/index.html

    The source code will be in the downloaded SDK -> vision_apps
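    For the PC-emulation side, using the compiled artifacts from your own Python application generally follows the same OSRT pattern as the examples: create an onnxruntime session with TIDLExecutionProvider pointed at the artifacts folder. A hedged sketch follows; the "artifacts_folder" option name and paths are assumptions taken from the edgeai-tidl-tools examples.

```python
# Hypothetical sketch: run a TIDL-compiled model from your own application.
# The "artifacts_folder" option name follows the edgeai-tidl-tools examples.
def tidl_infer_args(artifacts_dir):
    """Providers/options for an inference session using compiled artifacts;
    CPUExecutionProvider is the fallback when TIDL is unavailable."""
    providers = ["TIDLExecutionProvider", "CPUExecutionProvider"]
    provider_options = [{"artifacts_folder": artifacts_dir}, {}]
    return providers, provider_options

def run_model(model_path, artifacts_dir, inputs):
    # Needs the TI-patched onnxruntime wheel installed by the tools' setup.sh.
    import onnxruntime as ort
    providers, provider_options = tidl_infer_args(artifacts_dir)
    sess = ort.InferenceSession(
        model_path, providers=providers, provider_options=provider_options
    )
    # inputs: dict mapping input names to numpy arrays
    return sess.run(None, inputs)
```

    On the target, the same artifacts folder is used; only the provider library differs, which is what makes the PC emulation flow useful for compatibility checks.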

    If this thread answered your original question, please close this ticket. If you have any questions in the future, you can easily submit a new ticket; that way your question will be answered by TI experts quickly.

    Thanks and regards

    Wen Li