
66AK2H06: IPC_start problem

Part Number: 66AK2H06


Hello Team,


Our customer wants to implement data reception over PCIe with distribution to the DSPs, then processing of the data and bidirectional data transfer over SPI.

The original thread (a QMSS problem) is below:
e2e.ti.com/.../2433219

The customer asked an additional question about Ipc_start():

1) In the source code of the «transportQmssDspEpK2HC66TestProject» project, in the «dsp_ep.c» file, the IPC initialization is:

    /* Setup TransportRpmsg for host communication */
    IpcMgr_ipcStartup();

    /* Setup IPC for DSP to DSP communication */
    status = Ipc_start();

Ipc_start() then enters an endless loop.

However, the documentation says that Keystone 2 in Linux-DSP mode should use only IpcMgr_ipcStartup(). But when we do that, ti_sdo_ipc_SharedRegion_attach() and SharedRegion_getHeap() are never called, and the initialization in ti_sdo_ipc_GateMP_Instance_init() returns NULL.
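For reference, a minimal sketch of the two documented startup variants (assuming IPC 3.x header paths; the ipcInit wrapper and the dspToDspNeeded flag are hypothetical, not from the example):

#include <xdc/std.h>
#include <xdc/runtime/System.h>
#include <ti/ipc/Ipc.h>
#include <ti/ipc/ipcmgr/IpcMgr.h>

Void ipcInit(Bool dspToDspNeeded)
{
    Int status;

    /* Linux <-> DSP over TransportRpmsg: always required */
    IpcMgr_ipcStartup();

    /* DSP <-> DSP over SharedRegion/MessageQ: only when SR0 is defined */
    if (dspToDspNeeded) {
        status = Ipc_start();
        if (status < 0) {
            System_abort("Ipc_start failed\n");
        }
    }
}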

The question:

Should SharedRegion_module->regions[0].heap be initialized on the ARM-Linux side?

If so, why wasn't it initialized by «armEpTest_k2h.out» from «ipc-transport-qmss-test»?

The customer also adds:

1) If Ipc_start() is left in the DSP module, it does not enter the endless loop. But if we then try Ipc_attach(2) to another DSP, the call to status = Ipc_procSyncStart(remoteProcId, Ipc_module->ipcSharedAddr) returns status = -11 (Ipc_E_NOTREADY).

Log from DSP2:

2 Resource entries at 0x810000
Core 1 : ******************************************************
Core 1 : SYS/BIOS DSP TransportQmss Heterogeneous Test (DSP EP)
Core 1 : ******************************************************
Core 1 : Device name:               TMS320TCI6636
Core 1 : Processor names:           HOST,CORE0,CORE1
Core 1 : IPC Core ID:               2
Core 1 : Number of DSPs             2
Core 1 : Number of test iterations: 100
Core 1 : Starting IPC core 2 with name ("CORE1")
TransportRpmsg_Instance_init: remoteProc: 0
registering rpmsg-proto:rpmsg-proto service on 61 with HOST
[t=0x00052e1c] xdc.runtime.Main: NameMap_sendMessage: HOST 53, port=61

2) In the example project (transportQmssDspEpK2HC66TestProject) there is this code:

    /* The region heap will do the alignment */
    regionHeap = SharedRegion_getHeap(obj->regionId);

It returns NULL without Ipc_start() (i.e., with only IpcMgr_ipcStartup()).
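A minimal guard, assuming the standard SharedRegion and System APIs, could make this failure explicit at the call site (checkRegionHeap is a hypothetical helper; regionId stands for obj->regionId in the example source):

#include <xdc/std.h>
#include <xdc/runtime/IHeap.h>
#include <xdc/runtime/System.h>
#include <ti/ipc/SharedRegion.h>

Void checkRegionHeap(UInt16 regionId)
{
    IHeap_Handle regionHeap = SharedRegion_getHeap(regionId);

    if (regionHeap == NULL) {
        /* The SR0 heap is created by Ipc_start()/SharedRegion_start();
         * with IpcMgr_ipcStartup() alone it stays NULL */
        System_abort("SR0 heap not initialized; was Ipc_start() called?\n");
    }
}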

If we enable Ipc_start(), then when DSP2 tries to attach to DSP1 the process crashes and reinitializes (because through the RM server we can work only with the HOST).

At the moment, though, we do not need any operations between the DSP cores.

Call stack when NULL is returned:

initTsk >>>
GateMP_create >>>
ti_sdo_ipc_GateMP_create >>>
ti_sdo_ipc_GateMP_Instance_init__E >>>
ti_sdo_ipc_GateMP_Instance_init >>>
SharedRegion_getHeap(0) returns NULL!

Call stack when the error in Ipc_start() occurs on CORE0:

Ipc_start >>>
status = Ipc_attach(baseId = 2); >>>
status = ti_sdo_ipc_MessageQ_SetupTransportProxy_attach(2, 0); returns -1 !!! >>>
TransportRpmsgSetup_attach >>>
TransportRpmsg_Instance_init:
/* This MessageQ Transport over RPMSG only talks to the "HOST" for now: */

Could you please help us resolve our customer's problem? This is a very important project.

  • Hi Ilya,

    The team is notified. They will post their feedback directly here.

    BR
    Tsvetolin Shulev
  • Hi,

    Can you share which OS you are using, Linux or RTOS? Or are you running both (Linux on the ARM and RTOS on the DSP) and communicating between the two?

    Best Regards,
    Yordan
  • Hello Yordan,

    Our customer is running both.

    Edit 1: They build their own Linux image, adding the necessary modules:

    ARAGO_IMAGE_EXTRA_INSTALL ?= ""

    IMAGE_INSTALL += " \
        packagegroup-core-boot \
        ${ARAGO_IMAGE_EXTRA_INSTALL} \
    "

    IMAGE_INSTALL += " \
        packagegroup-arago-base \
        packagegroup-arago-console \
        packagegroup-arago-base-tisdk \
        packagegroup-arago-test \
        ${VIRTUAL-RUNTIME_initramfs} \
    "

    IMAGE_INSTALL += "openssh openssh-sftp openssh-sftp-server gdbserver multiprocmgr ipc-transport-qmss uio-module-drv qmss-lld qmss-pdsp-fw rm-lld"

    I will try to simplify the questions:
    The end result should be a system that receives data over PCIe, transfers it to the DSPs, processes it, and then transfers it bidirectionally over SPI.
    (The data rate should be up to 1.6 Gbps.)
    The customer tried to implement the ARM-DSP transfer and got stuck here.
    Could you please share the right example project with the needed configuration and SDK version?

    Their attempts are described in the previous post/thread:
    https://e2e.ti.com/support/dsp/c6000_multi-core_dsps/f/639/t/659342




  • Hi, Ilya,

    When running the example, make sure OpenCL is not running, as it will interfere with IPC/QMSS. The OpenCL daemon, ti-mctd, is started by default when the system comes up. I'll let a DSP engineer answer the DSP-related questions.

    Rex
  • Hi Ilya,

    Ipc_start() should only be called on a slave core if there is a need to initialize the SharedRegion module and/or to perform slave-to-slave IPC communication. If no SharedRegion is defined, then Ipc_start() will fail when it looks for SR0. On the host core, Ipc_start() should always be called. See processors.wiki.ti.com/.../IPC_3.x_FAQ

    Also, SharedRegion is only for RTOS-RTOS environments. If using Linux-RTOS, then CMEM should be used.
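
    For illustration, a minimal Linux-side CMEM sketch (assuming the user-space cmem API shipped with the Processor SDK; allocSharedBuffer and the buffer size are illustrative):

    #include <ti/cmem.h>

    int allocSharedBuffer(void)
    {
        CMEM_AllocParams params = CMEM_DEFAULTPARAMS;
        void *buf;

        if (CMEM_init() < 0) {
            return -1;
        }

        params.type  = CMEM_HEAP;
        params.flags = CMEM_CACHED;

        /* physically contiguous buffer that the DSP can also address */
        buf = CMEM_alloc(0x100000, &params);
        if (buf == NULL) {
            CMEM_exit();
            return -1;
        }

        /* ... share the buffer's physical address with the DSP ... */

        CMEM_free(buf, &params);
        CMEM_exit();
        return 0;
    }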
  • Hello Sahin,

    Thank you for the reply!
    We got two more emails from the customer based on your comments.

    1. The first email:
    Based on the latest version of the SDK:
    The thread (e2e.ti.com/.../372993) says that to start the data exchange only IpcMgr_ipcStartup() is needed; but it can connect only to TransportRpmsg from Linux. Also, SharedRegion does not start, even though var SharedRegion = xdc.useModule('ti.sdo.ipc.SharedRegion') was declared:
    SharedRegion.setEntryMeta(0,
        {   base:        SHAREDMEM,
            len:         SHAREDMEMSIZE,
            ownerProcId: 1,    /* Ensure CORE0 is the SR0 owner, NOT HOST! */
            isValid:     true,
            createHeap:  true,
            name:        "MSMC SRAM",
        });
    SharedRegion Regions:

    id  base       end        len       ownerProcId  cacheEnable  isValid  cacheLineSize  reservedSize  heap  name
    0   0xc000000  0xc1fffff  0x200000  1            true         true     128            0             0x0   MSMC SRAM
    1   0x0        0x0        0x0       0            true         false    128            0             0x0   null
    2   0x0        0x0        0x0       0            true         false    128            0             0x0   null
    3   0x0        0x0        0x0       0            true         false    128            0             0x0   null

    Accordingly, calling any of these functions results in a crash:
    GateMP_create(&gateMpParams);
    HeapBufMP_create(&heapBufParams);
    TransportQmss_create(&transQmssParams, &errorBlock);

    According to the source code, SharedRegion is started inside Ipc_start();
    but when it is called from CORE0 (DSP1), it tries to establish a connection to every core in the system:
    /* Loop to attach to all other processors in cluster */
    /* call Ipc_attach for every remote processor */
    do {
        status = Ipc_attach(baseId);
    } while (status == Ipc_E_NOTREADY);

    When this is later called with remoteProcId = 2:

    /* call attach to remote processor */
    status = ti_sdo_ipc_MessageQ_SetupTransportProxy_attach(remoteProcId,
        sharedAddr);

    it redirects to TransportRpmsgSetup_attach(remoteProcId, sharedAddr);

    which then calls:

    handle = TransportRpmsg_create(procId, &params, &eb);    /* where procId = 2 */

    and ends up in TransportRpmsg_Instance_init with remoteProcId = 2, which tells us that this transport can only be established with the HOST.
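
    For reference, a peer-selective attach loop, sketched under the assumption of the standard MultiProc API (attachDspPeers is hypothetical, not from the shipped example; note that this alone does not change which transport the MessageQ proxy uses):

    #include <xdc/std.h>
    #include <ti/ipc/Ipc.h>
    #include <ti/ipc/MultiProc.h>

    Void attachDspPeers(Void)
    {
        UInt16 hostId = MultiProc_getId("HOST");
        UInt16 procId;
        Int status;

        for (procId = 0; procId < MultiProc_getNumProcessors(); procId++) {
            if ((procId == MultiProc_self()) || (procId == hostId)) {
                continue;   /* skip self and the Linux HOST */
            }
            do {
                status = Ipc_attach(procId);
            } while (status == Ipc_E_NOTREADY);
        }
    }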

    2. The second email, based on your last reply:
    1) We return to last week's question: if the heap is set up on the ARM-Linux side, why doesn't the DSP see it?
    2) That would mean more than half of the SDK's examples are not relevant and do not work.
    3) Which existing example and which SDK version should work correctly for setting up fast data exchange?

    For example, in Yocto I had to add the following to the .dts:

    mpm_mem: dspmem@ {
        compatible = "ti,keystone-dsp-mem", "linux,rproc-user";
        reg = <0xa0000000 0x20000000 0x0C000000 0x600000 0x0bc00000 0x10000>;
        label = "dspmem";
    };

    Without it, the DSP did not load through mpmcl at all.

    And here is mpm_config.json:

    {
        "name": "local-msmc",
        "globaladdr": "0x0c000000",
        "length": "0x600000",
        "devicename": "/dev/dspmem"
    },
    {
        "name": "local-ddr",
        "globaladdr": "0x90000000",
        "length": "0x30000000",
        "devicename": "/dev/dspmem"
    },
    {
        "name": "mpax",
        "globaladdr": "0x0bc00000",
        "length": "0x10000",
        "devicename": "/dev/dspmem"
    }


    BR,
    Ilya.
  • Hi, Ilya,

    I had a discussion with Sahin about the PDK example the customer is trying. Sahin built that transportQmssDspEpK2HC66TestProject project for me, and I don't see any executable for Linux. The examples under PDK are for RTOS-to-RTOS environments only; Linux-to-RTOS examples go under the IPC directory. There may be some examples under PDK drv/qmss/test. The Linux-side binaries are released in the tisdk-server-extra-rootfs tarball. I will need to find out which DSP image is the Linux counterpart so I can try it.

    It may take a while for me to get this setup running. I will be out of the office for a week starting tomorrow. If I can't get it up and running today, I'll continue after I am back in the office.

    Rex
  • Rex,
    is there anyone who could continue your work on this post while you are out?
  • Hello Rex,

    The customer provided some more information:
    As a reference he used the repository below (it contains the projects for both Linux and the DSP):
    git://git.ti.com/keystone-linux/ipc-transport
    On the DSP side, transportQmssDspEpK2HC66TestProject was built from the latest version.
    On the Linux side, he uses the latest version of ipc-transport-qmss-test from Yocto (which also contains projects for Linux and the DSP).
    The customer tried these two variants and is now trying to understand the interaction mechanism.
    In both cases, heterogeneous_proc_test -> dsp_ep.c is the same.

    TransportRpmsg starts normally; the host synchronizes with the DSP and launches the DSP cores. As he understands it, SharedRegion is not needed here, because the cores should interact through RM, and then QMSS should be started. Is that correct?

    Currently: from the DSP CORE0 side there is an attempt to start GateMP and QMSS using a SharedRegion that is not initialized.

    Ilya.
  • This thread is now locked by the system. I'll mark this thread as closed.