Unable to get the IPC TransportQmss examples to work

Other Parts Discussed in Thread: SYSBIOS

Hi, 

I am currently evaluating the evm-k2h platform. 

I have built the ARM and DSP binaries following the documentation here: http://processors.wiki.ti.com/index.php/MCSDK_UG_Chapter_Developing_Transports#Heterogeneous_Processor_Test_2

I have started the rmServer.out process and loaded the DSP binaries as described in the MCSDK UG.

However, I can't get this example to fully work. On the ARM side I get the following logs from armEpTest_k2h.out:

*********************************************************
* ARMv7 Linux TransportQmss Heterogeneous Test (ARM EP) *
*********************************************************
TransportQmss Version : 0x02000001
Version String: Linux IPC Transports Revision: 2.0.0.01:Sep  7 2016:15:39:21
Process 1 : Initialized RM_Client0
Process 1 : Opening RM client socket /var/run/rm/rm_client0
Process 0 : Starting RM Message Hub
Process 0 : Created RM hub queue: RM_Message_Hub, Qid: 0x80
Process 0 : Opening RM_Client_DSP_1
Process 1 : Creating TransportQmss instance
Process 1 : Local MessageQ: TEST_MsgQ_Proc_0, QId: 0x81
Process 1 : Attempting to open DSP 1 queue: TEST_MsgQ_Proc_1


Here are the traces from DSP core 0:

root@k2hk-evm:~# cat /sys/kernel/debug/remoteproc/remoteproc0/trace0
3 Resource entries at 0x810000
Core 0 : ******************************************************
Core 0 : SYS/BIOS DSP TransportQmss Heterogeneous Test (DSP EP)
Core 0 : ******************************************************
Core 0 : Device name: TMS320TCI6636
Core 0 : Processor names: HOST,CORE0,CORE1
Core 0 : IPC Core ID: 1
Core 0 : Number of DSPs 2
Core 0 : Number of test iterations: 100
Core 0 : Starting IPC core 1 with name ("CORE0")
registering rpmsg-proto service on 61 with HOST

Here are the traces from DSP core 1: 

root@k2hk-evm:~# cat /sys/kernel/debug/remoteproc/remoteproc1/trace0
3 Resource entries at 0x810000
Core 1 : ******************************************************
Core 1 : SYS/BIOS DSP TransportQmss Heterogeneous Test (DSP EP)
Core 1 : ******************************************************
Core 1 : Device name: TMS320TCI6636
Core 1 : Processor names: HOST,CORE0,CORE1
Core 1 : IPC Core ID: 2
Core 1 : Number of DSPs 2
Core 1 : Number of test iterations: 100
Core 1 : Starting IPC core 2 with name ("CORE1")
registering rpmsg-proto service on 61 with HOST
ti.sysbios.knl.Semaphore: line 202: assertion failure: A_badContext: bad calling context. Must be called from a Task.
xdc.runtime.Error.raise: terminating execution

As you can see in the trace above, the DSP application on the second core is crashing badly.
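
If I read that assert right, it means a blocking SYS/BIOS call was made outside of a Task context, for example from main() before BIOS_start() or from a Hwi/Swi. A minimal sketch of the pattern that trips it, as I understand it (my own illustration, not the example's code):

/* A_badContext: blocking calls such as Semaphore_pend() are only legal
 * from a Task, i.e. after BIOS_start() has launched the scheduler. */
#include <xdc/std.h>
#include <ti/sysbios/BIOS.h>
#include <ti/sysbios/knl/Semaphore.h>
#include <ti/sysbios/knl/Task.h>

Semaphore_Handle sem;    /* assume this is created in main() or the .cfg */

Void workerTask(UArg a0, UArg a1)
{
    Semaphore_pend(sem, BIOS_WAIT_FOREVER);    /* fine: Task context */
}

Int main()
{
    /* Calling Semaphore_pend(sem, BIOS_WAIT_FOREVER) here, before
     * BIOS_start(), would raise exactly this A_badContext assert. */
    BIOS_start();
    return (0);
}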

Here is some more information that could be useful in identifying the issue:

  • I have made no modifications to the code for either the ARM application or the DSP application
  • I am running the tisdk-server-rootfs-image on the evaluation board
  • The multiProcessTest worked without any issues, so I know the RM server is working.

Am I missing something?

Regards,

- David

  • Hi David,

    I've notified the support team. Their feedback will be posted directly here.

    Best Regards,
    Yordan
  • Note, I am using the following packages:

    * processor-sdk-linux-k2hk-evm-03.00.00.04
    * mcsdk-03.01.04.07

    Regards,
    - David
  • OK, I have downloaded processor-sdk-rtos-k2hk-evm-03.00.00.04, so I am now using the latest version of the DSP applications.

    The DSP applications now load and no longer crash. However, I still can't get the example to work with the default rootfs shipped with the Processor SDK. After loading the DSP images and starting the RM server, I start the armEpTest_k2h.out application. Here is the trace I am getting:

    root@k2hk-evm:~# ./armEpTest_k2h.out
    *********************************************************
    * ARMv7 Linux TransportQmss Heterogeneous Test (ARM EP) *
    *********************************************************
    TransportQmss Version : 0x02000001
    Version String: Linux IPC Transports Revision: 2.0.0.01:Sep 7 2016:15:39:21
    Process 0 : Starting RM Message Hub
    Process 1 : Initialized RM_Client0
    Process 1 : Opening RM client socket /var/run/rm/rm_client0
    Process 0 : Created RM hub queue: RM_Message_Hub, Qid: 0x80
    Process 0 : Opening RM_Client_DSP_1
    Process 1 : Creating TransportQmss instance
    Process 1 : Local MessageQ: TEST_MsgQ_Proc_0, QId: 0x81
    Process 1 : Attempting to open DSP 1 queue: TEST_MsgQ_Proc_1
    [ 244.195376] rpmsg_proto rpmsg2: timeout waiting for a tx buffer
    [ 244.201320] rpmsg_sock_sendmsg: rpmsg_send failed: -512
    ERROR Process 1 : Error -1 when opening next DSP 0 MsgQ
    Cleaning test process
    [ 259.205285] rpmsg_proto rpmsg2: timeout waiting for a tx buffer
    [ 259.211226] rpmsg_sock_sendmsg: rpmsg_send failed: -512
    ERROR Process 0 : Error -1 when opening DSP MessageQ
    Process 0 : Cleaning up RM Message Hub
    [ 274.215196] rpmsg_proto rpmsg8: timeout waiting for a tx buffer
    [ 274.221138] rpmsg_proto rpmsg8: failed to announce service -512
    [ 275.215184] rpmsg_proto rpmsg6: timeout waiting for a tx buffer
    [ 275.221128] rpmsg_proto rpmsg6: failed to announce service -512
    [ 290.224951] rpmsg_proto rpmsg4: timeout waiting for a tx buffer
    [ 290.230894] rpmsg_proto rpmsg4: failed to announce service -512
    Test Complete!

    Here is the trace I am receiving on the DSP (both cores have very similar traces):

    root@k2hk-evm:~# cat /sys/kernel/debug/remoteproc/remoteproc0/trace0
    2 Resource entries at 0x810000
    Core 0 : ******************************************************
    Core 0 : SYS/BIOS DSP TransportQmss Heterogeneous Test (DSP EP)
    Core 0 : ******************************************************
    Core 0 : Device name: TMS320TCI6636
    Core 0 : Processor names: HOST,CORE0,CORE1
    Core 0 : IPC Core ID: 1
    Core 0 : Number of DSPs 2
    Core 0 : Number of test iterations: 100
    Core 0 : Starting IPC core 1 with name ("CORE0")
    registering rpmsg-proto:rpmsg-proto service on 61 with HOST

    Both apps seem to be stuck and never recover. 
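
    For what it's worth, my understanding is that MessageQ_open() on the ARM side resolves the DSP queue name through NameServer requests carried over rpmsg, so the "timeout waiting for a tx buffer" messages would be that lookup never being answered. The open step is normally a retry loop along these lines (a sketch under that assumption, not the actual armEpTest source):

    #include <unistd.h>
    #include <ti/ipc/Std.h>
    #include <ti/ipc/MessageQ.h>

    /* Sketch only: open the DSP queue named in the log above, retrying
     * until the NameServer lookup (carried over rpmsg) succeeds. */
    Int openDspQueue(MessageQ_QueueId *remoteQueueId)
    {
        Int status;

        do {
            status = MessageQ_open("TEST_MsgQ_Proc_1", remoteQueueId);
            if (status == MessageQ_E_NOTFOUND) {
                usleep(10000);    /* queue not created on the DSP yet */
            }
        } while (status == MessageQ_E_NOTFOUND);

        return status;    /* MessageQ_S_SUCCESS or a fatal error */
    }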

  • One more thing: I have tried the same test against the tip of all the repositories, basically rebuilding the tisdk-server-rootfs-image in Yocto, but I get different results.

    When starting the armEpTest application I get the following trace:

    *********************************************************
    * ARMv7 Linux TransportQmss Heterogeneous Test (ARM EP) *
    *********************************************************
    TransportQmss Version : 0x02000001
    Version String: Linux IPC Transports Revision: 2.0.0.01:Sep 7 2016:15:39:21
    Process 0 : Starting RM Message Hub
    Process 1 : Initialized RM_Client0
    Process 1 : Opening RM client socket /var/run/rm/rm_client0
    Process 0 : Created RM hub queue: RM_Message_Hub, Qid: 0x80
    Process 0 : Opening RM_Client_DSP_1
    Process 1 : Creating TransportQmss instance
    fw_memMap: Failed to find fd to map 0x02a00000.
    TransportQmss_create : mpm_transport_open failed
    ERROR Process 1 : Failed to create TransportQmss handle
    Cleaning test process

    Any ideas why the mpm_transport_open function call fails?

  • Hi,

    Any updates on this issue from the support team?

    Regards,
    - David
  • After debugging, it turns out the DSP applications are stuck in the Ipc_start() function.
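
    For context, my understanding of the startup sequence (based on the IPC documentation, not the example's actual code): with Ipc.procSync = Ipc.ProcSync_ALL, Ipc_start() attaches to every other processor in MultiProc and only returns once all of them have attached, so a peer that never comes up leaves the task parked inside the call. A sketch:

    #include <xdc/std.h>
    #include <xdc/runtime/System.h>
    #include <ti/ipc/Ipc.h>

    Void startIpc(Void)
    {
        Int status;

        /* Under ProcSync_ALL this loops over Ipc_attach() for HOST, CORE0
         * and CORE1 internally and blocks until every peer has attached. */
        status = Ipc_start();
        if (status < 0) {
            System_abort("Ipc_start failed\n");
        }
    }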

    Any reason why that would be happening?

    Regards,
    - David
  • Hi David,

    I haven't received feedback on this. I've sent a reminder.

    Regards,
    Yordan
  • Hi, 

    I have managed to get the example to work. While investigating, I got the transportQmssBenchmark project to work without changing anything, so I decided to take that project's SYS/BIOS configuration file, adapt it for the transportQmssDspEp project, and it worked. Here is the configuration file I used:

    /* 
     * Copyright (c) 2011-2015, Texas Instruments Incorporated
     * All rights reserved.
     *
     * Redistribution and use in source and binary forms, with or without
     * modification, are permitted provided that the following conditions
     * are met:
     *
     * *  Redistributions of source code must retain the above copyright
     *    notice, this list of conditions and the following disclaimer.
     *
     * *  Redistributions in binary form must reproduce the above copyright
     *    notice, this list of conditions and the following disclaimer in the
     *    documentation and/or other materials provided with the distribution.
     *
     * *  Neither the name of Texas Instruments Incorporated nor the names of
     *    its contributors may be used to endorse or promote products derived
     *    from this software without specific prior written permission.
     *
     * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
     * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
     * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
     * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
     * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
     * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
     * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
     * OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
     * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
     * OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
     * EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
     */
    
    var Task      = xdc.useModule('ti.sysbios.knl.Task'); 
    var Semaphore = xdc.useModule('ti.sysbios.knl.Semaphore');
    var Timestamp = xdc.useModule('xdc.runtime.Timestamp');
    var System    = xdc.useModule('xdc.runtime.System');
    var SysMin = xdc.useModule('xdc.runtime.SysMin');
    System.SupportProxy = SysMin;
    var BIOS = xdc.useModule('ti.sysbios.BIOS');
    BIOS.heapSize = 0xA000;
    /* BIOS.libType = BIOS.LibType_Debug; */ /* Uncomment to debug step BIOS and
                                                IPC code from CCS */
    var CpIntc    = xdc.useModule('ti.sysbios.family.c66.tci66xx.CpIntc');
    
    /* Load and use the CSL, CPPI, QMSS, and RM packages */
    var devType = "k2h"
    var Csl = xdc.useModule('ti.csl.Settings');
    Csl.deviceType = devType;
    var Cppi = xdc.loadPackage('ti.drv.cppi'); 
    var Qmss = xdc.loadPackage('ti.drv.qmss');
    var Rm   = xdc.loadPackage('ti.drv.rm');
    
    Program.sectMap[".qmss"] = new Program.SectionSpec();
    Program.sectMap[".qmss"] = "MSMCSRAM";
    
    Program.sectMap[".cppi"] = new Program.SectionSpec();
    Program.sectMap[".cppi"] = "MSMCSRAM";
    
    Program.sectMap[".desc"] = new Program.SectionSpec();
    Program.sectMap[".desc"] = "MSMCSRAM";
    
    Program.sectMap[".sharedGRL"] = new Program.SectionSpec();
    Program.sectMap[".sharedGRL"] = "L2SRAM";
    
    Program.sectMap[".sharedPolicy"] = new Program.SectionSpec();
    Program.sectMap[".sharedPolicy"] = "L2SRAM";
    
    Program.sectMap[".sync"] = new Program.SectionSpec();
    Program.sectMap[".sync"] = "MSMCSRAM";
    
    
    var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');
    /* Synchronize all processors (this will be done in Ipc_start using
     * TransportShmNotify transport) */
    Ipc.procSync = Ipc.ProcSync_ALL;
    
    var MessageQ = xdc.useModule('ti.sdo.ipc.MessageQ');
    var TransportQmss = xdc.useModule('ti.transport.ipc.c66.qmss.TransportQmss');
    
    var NotifyCirc = xdc.useModule('ti.sdo.ipc.notifyDrivers.NotifyDriverCirc');
    var Interrupt = xdc.useModule('ti.sdo.ipc.family.tci663x.Interrupt');
    NotifyCirc.InterruptProxy = Interrupt;
    
    /* Should be done internally */
    xdc.useModule("ti.ipc.namesrv.NameServerRemoteRpmsg");
    
    var VirtQueue = xdc.useModule('ti.ipc.family.tci6638.VirtQueue');
    
    /*  Notify brings in the ti.sdo.ipc.family.Settings module, which does
     *  lots of config magic which will need to be UNDONE later, or setup
     *  earlier, to get the necessary overrides to various IPC module proxies!
     */
    var Notify = xdc.module('ti.sdo.ipc.Notify');
    var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');
    
    /* Note: Must call this to override what's done in Settings.xs ! */
    Notify.SetupProxy = xdc.module('ti.sdo.ipc.family.tci663x.NotifyCircSetup');
    
    xdc.loadPackage('ti.ipc.ipcmgr');
    
    var MessageQ  = xdc.useModule('ti.sdo.ipc.MessageQ');
    var VirtioSetup = xdc.useModule('ti.ipc.transports.TransportRpmsgSetup');
    MessageQ.SetupTransportProxy = VirtioSetup;
    
    var HeapBuf = xdc.useModule('ti.sysbios.heaps.HeapBuf');
    var params = new HeapBuf.Params;
    params.align = 8;
    params.blockSize = 512;
    params.numBlocks = 256;
    var msgHeap = HeapBuf.create(params);
    MessageQ.registerHeapMeta(msgHeap, 0);
    
    var TransportRpmsg = xdc.useModule('ti.ipc.transports.TransportRpmsg');
    
    /* Set to disable error printouts */
    /* var Error = xdc.useModule('xdc.runtime.Error'); */
    /* Error.raiseHook = null; */
    
    var MultiProc = xdc.useModule('ti.sdo.utils.MultiProc');
    /* Cluster definitions - Example one cluster.
     * [Cluster Base ID: 0] - 1 Host + 2 DSPs (Procs) */
    MultiProc.numProcessors = 3;
    MultiProc.numProcsInCluster = 3;
    MultiProc.baseIdOfCluster = 0;
    var procNameList = ["HOST", "CORE0", "CORE1"];
    MultiProc.setConfig(null, procNameList);
    
    /* Note: MultiProc_self is set during VirtQueue_init based on DNUM. */
    var MultiProcSetup = xdc.useModule('ti.sdo.ipc.family.tci663x.MultiProcSetup');
    MultiProcSetup.configureProcId = false;
    
    Program.global.sysMinBufSize = 0x8000;
    SysMin.bufSize  =  Program.global.sysMinBufSize;
    
    /* Enable Memory Translation module that operates on the Resource Table */
    var Resource = xdc.useModule('ti.ipc.remoteproc.Resource');
    /* Make sure RemoteProc's .resource_table doesn't conflict with secure
     * kernel when on secure board.  Secure kernel is located from
     * 0x00800000 - 0x00810000 */
    Resource.loadAddr = 0x00810000;
    
    
    Program.global.DEVICENAME = Program.cpu.deviceName;
    Program.global.PROCNAMES = procNameList.join(",");
    Program.global.BUILDPROFILE = Program.build.profile;
    
    var HeapBufMP   = xdc.useModule('ti.sdo.ipc.heaps.HeapBufMP');
    var SharedRegion = xdc.useModule('ti.sdo.ipc.SharedRegion');
    SharedRegion.translate = false;
    SharedRegion.setEntryMeta(0,
        { base: 0x0C000000, 
          len: 0x00300000,
          ownerProcId: MultiProc.baseIdOfCluster + 1,  /* Needs to be global core ID of DSP Core 0 */
          isValid: true,
          cacheEnable: true,
          cacheLineSize: 128,  /* Allocated messages aligned to cache line */
          name: "internal_shared_mem",
        });
    
    

    Attachment: test.cfg
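
    One note for anyone reusing this cfg: the HeapBuf instance registered above with MessageQ.registerHeapMeta(msgHeap, 0) is what MessageQ allocates messages from on the DSP, so each message has to fit in one 512-byte block. Roughly (my illustration, not the example's code):

    #include <xdc/std.h>
    #include <ti/ipc/MessageQ.h>

    #define TEST_HEAP_ID  0    /* matches MessageQ.registerHeapMeta(msgHeap, 0) */

    /* Illustration only: allocate a message from heap 0; NULL means the
     * 256 x 512-byte HeapBuf blocks configured above are exhausted. */
    MessageQ_Msg allocTestMsg(UInt32 size)
    {
        return MessageQ_alloc(TEST_HEAP_ID, size);
    }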

  • Hey David,

    I believe this issue is a known bug that is being tracked internally. Right now the fix is targeted for the 3.2 release, but unfortunately that work has not been completed yet and I don't have a patch for you. Would your design constraints allow you to use a different IPC transport mechanism? I apologize for the lack of documentation on this issue and that you ran into it.

    Sincerely,

    John