This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

  • Resolved

AM5729: Problems related to using 4 EVEs only (excluding the 2 DSPs) for TIDL

Prodigy 205 points

Replies: 5

Views: 103

Part Number: AM5729

Dear Champs,

Thanks to your advice, I understood the Executor in the TIDL API.

However, a problem occurs when I try to use only the 4 EVEs as an executor. In this case, I run the OpenCL-Monitor F/W on IPU1 only, and customized F/W (for example, the IPC example program) instead of the OpenCL-Monitor F/W on DSP1 and DSP2.

When I use the TIDL API, even though the code defines an executor with only the 4 EVEs and no DSPs, the program terminates with the following error:

TIOCL FATAL: Internal Error: Number of message queues (0) does not match number of compute units (1)

I did some tracing through the code:

// If there are no devices capable of offloading TIDL on the SoC, exit
uint32_t num_eves = Executor::GetNumDevices(DeviceType::EVE);
uint32_t num_dsps = Executor::GetNumDevices(DeviceType::DSP);

I found that the error appears to occur when the clGetPlatformIDs function is executed inside the PlatformIsAM57() function of ocl_device.cpp in the tidl_api sources.
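For anyone who wants to reproduce this check outside of tidl_api, the same enumeration can be exercised directly with the standard OpenCL 1.x host API. This is only a sketch (the file name and build line are assumptions): on an AM57xx where no DSP is running the OpenCL-Monitor firmware, the TI OpenCL runtime's initialization triggered by clGetPlatformIDs is where the poster traced the fatal error to.

```cpp
// Minimal standalone probe of the OpenCL platform/device enumeration that
// PlatformIsAM57() relies on. Hypothetical build line on the EVM:
//   g++ probe.cpp -lOpenCL -o probe
#include <CL/cl.h>
#include <cstdio>

int main()
{
    cl_platform_id platform;
    cl_uint num_platforms = 0;

    // The TI OpenCL runtime initializes itself on the first platform query;
    // this is where the TIOCL FATAL message surfaces on the failing setup.
    cl_int err = clGetPlatformIDs(1, &platform, &num_platforms);
    if (err != CL_SUCCESS || num_platforms == 0)
    {
        std::printf("clGetPlatformIDs failed: err=%d\n", err);
        return 1;
    }

    // On AM57xx the DSPs are exposed as ACCELERATOR devices; the EVEs are
    // reached through the TIDL API and are not enumerated here.
    cl_uint num_devices = 0;
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 0, nullptr,
                         &num_devices);
    std::printf("ACCELERATOR devices: %u (err=%d)\n", num_devices, err);
    return 0;
}
```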

The logs below are from my AM5729 board:

The OpenCL-Monitor F/W on IPU1 looks fine:

root@am57xx-evm:~# cat /sys/kernel/debug/remoteproc/remoteproc0/trace0
[0] [0.000] Watchdog enabled: TimerBase = 0x68824000 SMP-Core = 0 Freq = 20000000
[0] [0.000] Watchdog enabled: TimerBase = 0x68826000 SMP-Core = 1 Freq = 20000000
[0] [0.000] Watchdog_restore registered as a resume callback
[0] [0.000] 17 Resource entries at 0x3000
[0] [0.000] [t = 0x000e08e7] xdc.runtime.Main: 4 EVEs Available
[0] [0.000] [t = 0x0011f435] xdc.runtime.Main: Creating msg queue ...
[0] [0.000] [t = 0x00136fd7] xdc.runtime.Main: OCL: EVEProxy: MsgQ ready
[0] [0.000] [t = 0x0014591b] xdc.runtime.Main: Heap for EVE ready
[0] [0.000] [t = 0x00151def] xdc.runtime.Main: Booting EVEs ...
[0] [0.000] [t = 0x00c32d93] xdc.runtime.Main: Starting BIOS ...
[0] [0.000] registering rpmsg-proto: rpmsg-proto service on 61 with HOST
[0] [0.000] [t = 0x00ca8779] xdc.runtime.Main: Attaching to EVEs ...
[0] [0.011] [t = 0x010fd679] xdc.runtime.Main: EVE1 attached
[0] [0.014] [t = 0x01234961] xdc.runtime.Main: EVE2 attached
[0] [0.017] [t = 0x0136c8cd] xdc.runtime.Main: EVE3 attached
[0] [0.020] [t = 0x014a574f] xdc.runtime.Main: EVE4 attached
[0] [0.020] [t = 0x014b806f] xdc.runtime.Main: Opening MsgQ on EVEs ...
[0] [1.020] [t = 0x1aa997c3] xdc.runtime.Main: OCL: EVE1: MsgQ opened
[0] [2.020] [t = 0x3408483f] xdc.runtime.Main: OCL: EVE2: MsgQ opened
[0] [3.020] [t = 0x4d673823] xdc.runtime.Main: OCL: EVE3: MsgQ opened
[0] [4.020] [t = 0x66c62743] xdc.runtime.Main: OCL: EVE4: MsgQ opened
[0] [4.020] [t = 0x66c74db7] xdc.runtime.Main: Pre-allocating msgs to EVEs ...
[0] [4.021] [t = 0x66cd0fe3] xdc.runtime.Main: Done OpenCL runtime initialization. Waiting for messages ...

The system status related to OpenCL and TIDL is as follows.

root@am57xx-evm:~# dmesg | grep -i cma
[0.000000] Reserved memory: created CMA memory pool at 0x0000000095800000, size 56 MiB
[0.000000] Reserved memory: created CMA memory pool at 0x0000000099000000, size 64 MiB
[0.000000] Reserved memory: created CMA memory pool at 0x000000009d000000, size 32 MiB
[0.000000] Reserved memory: created CMA memory pool at 0x000000009f000000, size 8 MiB
[0.000000] cma: Reserved 24 MiB at 0x00000000be400000
[0.000000] Memory: 441936K / 652288K available (8192K kernel code, 329K rwdata, 2680K rodata, 2048K init, 266K bss, 21936K reserved, 188416K cma-reserved, 103424K highmem)

root@am57xx-evm:~# cat /proc/cmem

Block 0: Pool 0: 1 bufs size 0x18000000 (0x18000000 requested)

Pool 0 busy bufs:
id 0: phys addr 0xa0000000 (cached)

Pool 0 free bufs:

root@am57xx-evm:~# systemctl status ti-mct-daemon.service

● ti-mct-daemon.service - TI MultiCore Tools Daemon
   Loaded: loaded (/lib/systemd/system/ti-mct-daemon.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2020-01-05 02:35:35 UTC; 1min 24s ago
  Process: 376 ExecStart=/usr/bin/ti-mctd (code=exited, status=0/SUCCESS)
  Process: 288 ExecStartPre=/sbin/insmod /lib/modules/4.19.79-g77dfab56c6/extra/cmemk.ko (code=exited, status=0/SUCCESS)
 Main PID: 463 (ti-mctd)
    Tasks: 1 (limit: 1035)
   Memory: 1.9M
   CGroup: /system.slice/ti-mct-daemon.service
           └─463 /usr/bin/ti-mctd

Jan 05 02:35:34 am57xx-evm systemd[1]: Starting TI MultiCore Tools Daemon...
Jan 05 02:35:35 am57xx-evm ti-mctd[376]: Shared Memory heaps created, size 262144 bytes
Jan 05 02:35:35 am57xx-evm systemd[1]: Started TI MultiCore Tools Daemon.

root@am57xx-evm:~# cat /etc/ti-mctd/ti_mctd_config.json
        "cmem-block-offchip": "0",
        "cmem-block-onchip": "1",
        "compute-unit-list": "0",
        "eve-devices-disable": "0",
        "linux-shmem-size-KB": "256",

I am using ti-processor-sdk-linux-am57xx-evm- version.

For reference:

When the OpenCL-Monitor F/W files included in ti-processor-sdk-linux-am57xx-evm- are executed on DSP1, DSP2, and IPU1, all the OpenCL and TIDL examples run OK!

We are eagerly waiting for your help.
Thank you.

  • Hi awesomeYJ, thanks a lot for all the details shared... We will do some investigation. In the meantime, if you have a chance, could you give an older PLSDK a try and see if this works?

    I will keep you posted.

    thank you,


  • In reply to Paula Carrillo:

    Hi awesomeYJ, please ignore my previous post; instead of going back to older releases, maybe we can try something quickly.

    After discussing with our OpenCL expert, he mentioned that there are two copies of the IPU1 firmware on the SD card: one in the VFAT partition for early boot, and a second one in the Linux partition under /lib/firmware. They need to be the same.

    Our Linux SDK takes care of this for you. But if you created your SD card in any other way, there could be a mismatch, and you will need to manually copy the correct one.

    Could you please try the steps below and let me know if this works for you?

    • cp /lib/firmware/dra7-ipu1-fw.xem4.opencl-monitor /run/media/mmcblk0p1/dra7-ipu1-fw.xem4
    • reboot
    • re-test
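Before rebooting, a quick way to confirm that the two firmware copies match is to compare them directly. This is a sketch using the paths from the steps above; the `mmcblk0p1` mount point is assumed to match your SD card layout.

```shell
# Compare the two IPU1 firmware copies; prints "identical" when the VFAT
# copy matches the one in the Linux rootfs (paths from the SDK layout).
FW_ROOTFS=/lib/firmware/dra7-ipu1-fw.xem4.opencl-monitor
FW_VFAT=/run/media/mmcblk0p1/dra7-ipu1-fw.xem4

cmp --silent "$FW_ROOTFS" "$FW_VFAT" && echo "identical" || echo "DIFFER"

# Checksums are also handy to quote in a bug report:
md5sum "$FW_ROOTFS" "$FW_VFAT" 2>/dev/null || true
```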

    thank you,


  • In reply to Paula Carrillo:

    Dear Paula,

    Thank you for your reply.

    I followed your advice:

    1) I checked the IPU1 F/W file at /run/media/mmcblk0p1/dra7-ipu1-fw.xem4 - I copied the file to the SD root dir (VFAT).

    2) I also compared the contents of the two files on the SDK RFS ( /lib/firmware/dra7-ipu1-fw.xem4.opencl-monitor and /run/media/mmcblk0p1/dra7-ipu1-fw.xem4 ).

        They are exactly the same; only the file names differ.

    But the problem remains the same.

    From this point of view, I have a question about the TI_MCTD configuration: when I want to use only the 4 EVEs, what should I fill in for the "compute-unit-list" field?

    root@am57xx-evm:/media/mmcblk0p1# cat /etc/ti-mctd/ti_mctd_config.json
        "cmem-block-offchip" : "0",
        "cmem-block-onchip" : "1",
        "compute-unit-list" : "????",
        "eve-devices-disable" : "0",
        "linux-shmem-size-KB" : "256",

    As far as I know, "0" is DSP1, "1" is DSP2, and "0,1" is both DSP1 and DSP2. Am I correct?

    Thank you for your kind reply and I'll wait for your next info about this.

  • In reply to awesomeYJ:

    Hi AwesomeYJ, I got some information from our OpenCL expert.

    • OpenCL at the current implementation requires at least one DSP running OpenCL firmware.

    • “compute-unit-list” is only for DSP cores. If you only want EVEs when using TIDL-API, then simply create Executors with only EVE cores and no DSP cores.

    • You can control how many EVE cores to use in TIDL-API when creating Executors.

    • TIDL-API is built on top of OpenCL runtime, OpenCL runtime assumes at least one DSP core running OpenCL firmware.
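The EVE-only Executor the expert describes can be sketched as follows, based on the TIDL API user guide; the configuration file name "net.cfg" is a placeholder, and the DSPs still need the OpenCL-Monitor firmware running underneath even though no DSP Executor is created here.

```cpp
// EVE-only Executor creation with the TIDL API (tidl-api).
#include "executor.h"
#include "configuration.h"

using namespace tidl;

int main()
{
    Configuration configuration;
    // "net.cfg" is a placeholder for your network configuration file.
    if (!configuration.ReadFromFile("net.cfg"))
        return 1;

    // All four EVEs, no DSPs: the device-id set handed to the Executor
    // decides which cores TIDL uses, independent of the OpenCL DSP
    // requirement underneath.
    DeviceIds eve_ids = { DeviceId::ID0, DeviceId::ID1,
                          DeviceId::ID2, DeviceId::ID3 };

    Executor executor(DeviceType::EVE, eve_ids, configuration);

    // ... create ExecutionObjectPipelines and process frames as in the
    // SDK examples ...
    return 0;
}
```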

     Not sure if you are aware of this link, just in case:

    Let me know if this helps, or if I am missing anything.

    Thank you,


  • In reply to Paula Carrillo:

    Thank you for your help.

    It's very helpful to my work !!!
