
IPC use between ARM and C66x on TCI6638

Other Parts Discussed in Thread: 66AK2H12

I am trying to use IPC to send messages from the ARM to a DSP core on a 6638 EVM. I am using: IPC_3_00_00_20, CCS 5.4, XTCIEVMK2X.

On the DSP side I use the following .cfg:

MultiProc.setConfig(null, ["HOST", "CORE0"]);
Ipc.procSync = Ipc.ProcSync_PAIR;
MultiProc.numProcessors = 2;
var SharedRegion = xdc.useModule('ti.sdo.ipc.SharedRegion');
SharedRegion.setEntryMeta(0,
    { base: SHAREDMEM,
      len:  0x00200000,
      ownerProcId: 1,
      isValid: true,
      name: "MSMCSRAM_IPC",
    });

On the ARM side I am running Linux and using the MessageQ example

The DSP gets to a loop calling Ipc_attach(), which returns Ipc_E_NOTREADY as expected. On the ARM side, Ipc_start() fails.
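
For reference, the pattern in question looks roughly like this (a minimal sketch of the attach loop described in the IPC User's Guide; names and the sleep interval are illustrative):

    /* DSP-side attach loop: retry until the remote (HOST) side is ready */
    #include <xdc/std.h>
    #include <ti/ipc/Ipc.h>
    #include <ti/ipc/MultiProc.h>
    #include <ti/sysbios/knl/Task.h>

    Void attachToHost(Void)
    {
        Int status;
        UInt16 hostId = MultiProc_getId("HOST");

        Ipc_start();

        /* Ipc_E_NOTREADY just means the other side hasn't started yet */
        do {
            status = Ipc_attach(hostId);
            if (status == Ipc_E_NOTREADY) {
                Task_sleep(1);
            }
        } while (status == Ipc_E_NOTREADY);
    }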

On the ARM side I get the message: Ipc_start: NameServer_setup() failed: -1

In this scenario I have simplified things so as to set up communication between the ARM and CORE0 only.

Debugging into the ARM code I see an upfront assumption that there are 9 processors, ordered as HOST, CORE0, CORE1, ... The IPC User's Guide is strangely titled "SYS/BIOS IPC" and does not explain the Linux-side issues and details.

Question 1: Where does the ARM get its configuration information from? How does it know which CORE it should connect to? How is the equivalent of the DSP's .cfg information given to the ARM? That is, how would I run Ipc_attach() from the MessageQ example on the ARM without being able to convey the equivalent of "Ipc.procSync = Ipc.ProcSync_PAIR;" to the ARM?

Note: debugging into the ARM side I see that Client_Info[0] has been initialized to PID=1650, responseFIFOName[]=/tmp/LAD/1650, and Client_Info[1..31] are all null. On one occasion, stepping through the ARM side and stepping into every call possible, I got further than ever and the app terminated with: qTStatus: Remote communication error. Target disconnected.: Connection reset by peer.

If I step through (stepping over Ipc_start()) I get: Ipc_start: NameServer_setup() failed: -1

 

Question 2: On the ARM side, before running Ipc_start(), calling MultiProc_getNumProcessors(), MultiProc_getId("CORE0"), MultiProc_self(), and MultiProc_getId("HOST") returns: Number of processors = 0, CORE0 id = 65536, This core id = 0, HOST id = 65536.

On the DSP CORE0: Number of processors = 2, CORE0 id = 1, this core id = 0, HOST id = 0, name of id 0 is: HOST. Note that "this core id = 0" from MultiProc_self() is not what I expected!

I start the DSP by loading from CCS/JTAG and run the DSP until it gets to the Ipc_attach() loop. Then I let the ARM run the app and step through the MessageQ code.

I have been able to program all DSP cores to communicate with MessageQ in 2-, 3-, and 8-core configurations.

 

  • You need to use MPM to download the DSP image for IPC 3.x between the ARM and DSP (see http://processors.wiki.ti.com/index.php/MCSDK_UG_Chapter_Developing_Transports#KeyStone2_Specific_Details).

    Regards

    Sajesh

  • Sajesh,

    I followed the steps.

    I am now loading the DSP CORE0 image using

    mpm_reset()

    mpm_load()

    mpm_run(), and they all return success. However, I still have a problem.

    I tried

    MultiProc.setConfig(null, ["HOST", "CORE0"]); and

    MultiProc.setConfig("CORE0", ["HOST", "CORE0"]);

    In the first case, when I print some information on CORE0 using the MultiProc APIs, I get: Number of processors = 2, CORE0 id = 1, this core id = 0, HOST id = 0, name of id 0 is: HOST. (I expected "this core id" to be 1 instead of 0.) So Ipc_attach() on the CORE0 end needs to use the HOST id = 0??

    Before calling Ipc_start(), the MultiProc APIs report: Number of processors = 0, CORE0 id = 65535, This core (HOST) id = 0, HOST id = 65535.

    At this time Ipc_attach(0) is being called on the DSP, returning -11 in a loop.

    On the HOST side, the call stack after the call to Ipc_start() shows open(), _IO_file_open(), _IO_new_file_fopen(), openCommandFIFO(), initWrappers(), LAD_connect()

    and it never returns.

    Debugging further, I see that all clientInfo[] arrays are initialized to nulls and false.

     

    Thanks

    Shervin

  • shervin hojati said:

    I tried

    MultiProc.setConfig(null, ["HOST", "CORE0"]); and

    MultiProc.setConfig("CORE0", ["HOST", "CORE0"]);

    In the first case, when I print some information on CORE0 using the MultiProc APIs, I get: Number of processors = 2, CORE0 id = 1, this core id = 0, HOST id = 0, name of id 0 is: HOST. (I expected "this core id" to be 1 instead of 0.) So Ipc_attach() on the CORE0 end needs to use the HOST id = 0??

    When using "null" as the first parameter to MultiProc.setConfig(), there needs to be a way to later on tell MultiProc who you are.  This construct is typically used when you are loading more than 1 core with the same executable, and some startup code will assign the "self" ID at runtime.  Since you probably are not setting it at runtime, I assume the "null" causes your DSP's "self" ID to be 0, which is wrong.
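
    (For completeness: if you did want the "null" form, some startup code would have to assign the self ID at runtime, along these lines. This is a minimal sketch assuming the runtime MultiProc API; it must run early, before any IPC module uses the self ID.)

        /* Runtime self-ID assignment, only needed with the "null" form of
         * MultiProc.setConfig().  Normally the ID would be derived from,
         * e.g., the DSP core number rather than hard-coded. */
        #include <xdc/std.h>
        #include <ti/ipc/MultiProc.h>

        Void assignSelfId(Void)
        {
            MultiProc_setLocalId(MultiProc_getId("CORE0"));
        }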

    You should use your second form:
        MultiProc.setConfig("CORE0", ["HOST", "CORE0"]);
    which tells the DSP that it is "CORE0", which is procId 1 in the ["HOST", "CORE0"] list.

    shervin hojati said:

    Before calling Ipc_start(), the MultiProc APIs report: Number of processors = 0, CORE0 id = 65535, This core (HOST) id = 0, HOST id = 65535.

    The MultiProc module internally queries the actual MultiProc database from the LAD daemon.  It will cache this information locally in your process.  Ipc_start() queries the LAD daemon for the information, and until you call Ipc_start(), the local information will be uninitialized.  I'm assuming that the bogus information you are getting is the uninitialized values.
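
    To illustrate the ordering (a minimal Linux-side sketch; error handling trimmed):

        /* Query MultiProc only after Ipc_start() has contacted LAD */
        #include <stdio.h>
        #include <ti/ipc/Std.h>
        #include <ti/ipc/Ipc.h>
        #include <ti/ipc/MultiProc.h>

        int main(void)
        {
            Int status = Ipc_start();  /* populates the local MultiProc cache from LAD */

            if (status < 0) {
                printf("Ipc_start failed: %d\n", status);
                return -1;
            }

            /* These values are meaningful only after Ipc_start() succeeds */
            printf("numProcessors = %d\n", MultiProc_getNumProcessors());
            printf("CORE0 id = %d\n", MultiProc_getId("CORE0"));
            printf("self id = %d\n", MultiProc_self());

            Ipc_stop();
            return 0;
        }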

    shervin hojati said:

    At this time Ipc_attach(0) is being called on the DSP, returning -11 in a loop.

    On the HOST side, the call stack after the call to Ipc_start() shows open(), _IO_file_open(), _IO_new_file_fopen(), openCommandFIFO(), initWrappers(), LAD_connect()

    Did you start the LAD daemon first?:
        % ./lad_tci6638 log.txt

    If not, then there's your issue (but continue reading below, you're going to have to modify/rebuild LAD).

    If so, the file /tmp/LAD/log.txt will have some useful information printed about LAD's execution to that point.

    In your initial post on this topic you asked where the host gets its MultiProc information.  The LAD daemon is hard-coded with the information for your particular processor (note that the LAD executable name has the architecture in it, in this case tci6638).  The ARM does not communicate this information to the DSP, so it needs to be kept in-sync manually by specifying the core list identically for both DSP and ARM.

    The default for TCI6638 is as you stated, 9 cores ["HOST", "CORE0", "CORE1", ..., "CORE7"].  This is hardcoded in LAD, so in order to use your reduced list of ["HOST", "CORE0"] you will need to keep LAD in sync by modifying the file:
        <ipc_install_dir>/linux/src/daemon/MultiProcCfg_tci6638.c
    /* This must match BIOS side MultiProc configuration for given platform!: */
    MultiProc_Config _MultiProc_cfg =  {
       .numProcessors = 2,
       .nameList[0] = "HOST",
       .nameList[1] = "CORE0",
       .id = 0,                 /* The host is always zero */
    };

    Regards,

    - Rob

     

  • Rob,

    Thanks for the response.

    To sync LAD, does this mean I have to rebuild the kernel and file system and regenerate the disk images that are loaded at boot time (in my case currently booting via NFS)?

    Would using the procedure in the MCSDK User's Guide Exploring chapter, Yocto build instructions section,

    generate the following image files?

    uImage-rt.k2hk-evm.dtb

    arago-console-image.cpi.gz

    uImage-rt-keystone-evm.bin

    skern-keystone-evm.bin

    u-boot-spi-keystone-evm.gph

    Regards,

    Shervin

  • Using the above suggestions, I am getting the following response on the Linux side:

    Ipc_start(): NameServer_setup() failed: -1

    The lad.txt log shows:

    Initializing LAD...
        opening FIFO: /tmp/LAD/LADCMDS
    Retrieving command...
    LAD_CONNECT:
        client FIFO name = /tmp/LAD/1648
        client PID = 1648
        assigned client handle = 0
        FIFO /tmp/LAD/1648 created
        FIFO /tmp/LAD/1648 opened for writing
        sent response
    DONE
    Retrieving command...
    LAD_MULTIPROC_GETCONFIG: calling MultiProc_getConfig()...
    MultiProc_getConfig() - 2 procs
            Proc 0 - "HOST"
            Proc 1 - "CORE0"
        status = 0
    DONE
    Sending response...
    Retrieving command...
    LAD_NAMESERVER_SETUP: calling NameServer_setup()...
    NameServer_setup: entered, refCount=0
    NameServer_setup: created send socket: 5
    NameServer_setup: connect failed: 22, Invalid argument
        closing send socket: 5
    NameServer_setup: created recv socket: 5
    NameServer_setup: bind failed: 22, Invalid argument
        closing recv socket: 5
    NameServer_setup: creating listener thread
    NameServer_setup: exiting, refCount=1
    listener_cb: Entered Listener thread.
    NameServer: waiting for unblockFd: 2, and socks: maxfd: 3
        status = -1
    DONE
    Sending response...
    Retrieving command...

    Regards,

    Shervin

  • shervin hojati said:

    NameServer_setup: created send socket: 5

    NameServer_setup: connect failed: 22, Invalid argument

        closing send socket: 5

    This LAD error typically happens when the "other side" to which the socket is connecting has not been set up yet.  We have also seen this when the kernel is not correctly "patched" for supporting AF_RPMSG sockets.

    For the second guess, can you check your kernel's linux/include/socket.h file?  There should be the following line:
        #define AF_RPMSG        40      /* Remote-processor messaging   */

    There is a FAQ entry regarding this, although I doubt it's your issue if you're running the MessageQ example: http://processors.wiki.ti.com/index.php/IPC_3.x_FAQ#LAD_reports_NameServer_setup:_connect_failed:_22.2C_Invalid_argument

    Regards,

    - Rob

  • Rob, I see

    #define AF_RPMSG        40  in socket.h of

    linux-devkit/sysroots/armv7ahf-vfp-neon-oe-linux-gnueabi/usr/kernel/include/linux

    linux-devkit-rt/sysroots/armv7ahf-vfp-neon-oe-linux-gnueabi/usr/kernel/include/linux

    arago-2013.04/sysroots/armv7ahf-vfp-neon-oe-linux-gnueabi/usr/src/kernel/include/linux

    of the ARM tools on the host.

    I initially had set up IPC and MessageQ between all DSP cores on a 6678. That uses

    SharedRegion = xdc.useModule('ti.sdo.ipc.SharedRegion');
    SharedRegion.setEntryMeta(0,
        { base: SHAREDMEM,
          len:  0x00200000,
          ownerProcId: 1,
          isValid: true,
          name: "MSMCSRAM_IPC",
        });

    I then modified this application to communicate only from CORE0 to the HOST.

    So the DSP image that I use is based on a *.cfg with SetupTransportProxy set to ti.sdo.ipc.transports.TransportShmNotifySetup.

    Where is the MessageQ transport set to "TransportRpmsg"? I cannot find the source/project settings for the DSP side of the MessageQ examples.

    -My intention is to have messaging between CORE0 and the HOST, CORE0 and CORE1, and CORE0 and CORE2. Do I need two different methods for CORE0-HOST and CORE0-CORE1?

    -How would one transfer a large amount of data from a DSP core to the ARM host running Linux?

    -Is the ARM capable of using EDMA3 by allocating a DMA channel under Linux?

     Thanks,

    Shervin Hojati

     

  • shervin hojati said:

    Where is the MessageQ transport set to "TransportRpmsg"? I cannot find the source/project settings for the DSP side of the MessageQ examples.

    DSP-side code exists in <ipc>/packages/ti/ipc/tests.  In there you will find messageq_single.c as the DSP source code, and rpmsg_transport.cfg as its configuration file (where it sets MessageQ.SetupTransportProxy=xdc.useModule('ti.ipc.transports.TransportRpmsgSetup')).
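
    The shape of that DSP-side test is roughly the following (a sketch, not the actual file contents; the "SLAVE" queue name is an assumption):

        /* Echo loop: receive a message and send it back to its reply queue */
        #include <xdc/std.h>
        #include <ti/ipc/MessageQ.h>

        Void echoTask(Void)
        {
            MessageQ_Handle h = MessageQ_create("SLAVE", NULL);
            MessageQ_Msg msg;

            for (;;) {
                if (MessageQ_get(h, &msg, MessageQ_FOREVER) < 0) {
                    break;
                }
                MessageQ_put(MessageQ_getReplyQueue(msg), msg);
            }
        }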

    shervin hojati said:

    -My intention is to have messaging between CORE0 and the HOST, CORE0 and CORE1, and CORE0 and CORE2. Do I need two different methods for CORE0-HOST and CORE0-CORE1?

    Yes, core<->host needs TransportRpmsg and core<->core needs NotifyShm.

    I don't know the details, but I've been informed by a team member that there are examples for having 2 different transports for a single DSP core.  There is an example in <ipc>/packages/ti/ipc/tests named dual_transport, so please refer to that for illustration of what you need to do.  It is much more involved/complicated than the single transport situation, and involves creating/configuring things at runtime.

    You're on an older Ipc release, and newer releases have updated examples.  I've been informed that the ping_rpmsg example has been modified to use this dual transport mechanism.  You may not be able to switch to a newer Ipc release, but a newer release could be used for illustration of doing what you want to do in your older release.

    shervin hojati said:
    -How would one transfer a large amount of data from a DSP core to the ARM host running Linux?

    One would independently allocate a large contiguous buffer in shared memory and pass the address of that buffer between the cores in a MessageQ msg.  TI has a Linux Utils product containing a module named CMEM that can be used for this purpose, and there are other ways to get a large contiguous buffer, but it must be allocated from the Linux side and be physically contiguous.
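
    A rough sketch of the allocation side (assuming the CMEM API from Linux Utils; the header name and pool configuration vary by release):

        /* Allocate a physically contiguous buffer and get the physical
         * address that the DSP can use.  Error handling trimmed. */
        #include <stdio.h>
        #include <ti/cmem.h>

        int shareBuffer(void)
        {
            void *virt;
            unsigned long phys;

            if (CMEM_init() < 0) {
                return -1;
            }

            virt = CMEM_alloc(0x100000, NULL);  /* 1 MB, default params */
            if (virt == NULL) {
                CMEM_exit();
                return -1;
            }

            phys = (unsigned long)CMEM_getPhys(virt);
            printf("virt=%p phys=0x%lx\n", virt, phys);

            /* ... put 'phys' in a MessageQ msg and send it to the DSP ... */

            CMEM_free(virt, NULL);
            CMEM_exit();
            return 0;
        }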

    shervin hojati said:

    -Is the ARM capable of using EDMA3 by allocating a DMA channel under Linux?

    I'm not sure what you're asking.  The ARM Linux kernel is certainly capable of allocating and using edma3 channels, but access is limited to kernel code.

    Regards,

    - Rob

     

  • Robert,

    I have read in the IPC User's Guide that both sides need to call Ipc_start(), or Ipc_start() followed by Ipc_attach(), and that Ipc_attach() needs to be called in a loop until successful. I do not see any calls to Ipc_start()/Ipc_attach()!

     

    -Is the ARM capable of using EDMA3 by allocating a DMA channel under Linux?

    -I mean to transfer data from a DSP to Linux: Are you saying that Linux, using CMEM, allocates some memory in the shared region with an address usable by the DSP (which is a raw address of the buffer in shared memory, 0x0c000000-0x0C5FFFFF for the 6638), and communicates this address to the DSP, where it (the DSP) can program the EDMA to transfer the data?

    Or

    Can Linux on the ARM open a DMA channel via its existing drivers and the Linux DMA APIs, and transfer the data from a buffer in shared memory to its own memory? And is it really using the EDMA3 under the hood?

    Thanks,

    Shervin

     

  • shervin hojati said:

    I have read in the IPC User's Guide that both sides need to call Ipc_start(), or Ipc_start() followed by Ipc_attach(), and that Ipc_attach() needs to be called in a loop until successful. I do not see any calls to Ipc_start()/Ipc_attach()!

    Ipc_start() sets up SharedRegion-based Ipc for SYS/BIOS<->SYS/BIOS Ipc.  For SYS/BIOS<->ARM Linux situations, the SYS/BIOS app gets configured to call IpcMgr_ipcStartup() via the .cfg file, e.g., from rpmsg_transport.cfg:
        BIOS.addUserStartupFunction('&IpcMgr_ipcStartup');

    For dual transport mechanisms, as you're trying to get to, yet another function is called - IpcMgr_callIpcStart() - via the addUserStartupFunction mechanism (see dual_transports.cfg).

    If you're curious, check out <ipc>/packages/ti/ipc/ipcmgr/IpcMgr.c for the definition of these functions.

    shervin hojati said:

    -I mean to transfer data from a DSP to Linux: Are you saying that Linux, using CMEM, allocates some memory in the shared region with an address usable by the DSP (which is a raw address of the buffer in shared memory, 0x0c000000-0x0C5FFFFF for the 6638), and communicates this address to the DSP, where it (the DSP) can program the EDMA to transfer the data?

    Yes, this is what you would want to do.  However, CMEM doesn't use any "shared region" (unless you consider that term to be, in general, memory accessed by two different cores), at least in the Ipc SharedRegion sense.  CMEM is "given" memory that's been kept away from Linux (via the u-boot bootargs mem= construct).  The CMEM API CMEM_alloc() returns the user a virtual pointer, and the user would call CMEM_getPhys() on that to get the physical address that is then put in the MessageQ msg and is usable by the DSP.
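
    For example, the physical address could ride in an application-defined message (a sketch; the struct and function are hypothetical, but extending MessageQ_MsgHeader this way is the standard pattern):

        #include <ti/ipc/Std.h>
        #include <ti/ipc/MessageQ.h>

        /* Application message: the MessageQ header must come first */
        typedef struct {
            MessageQ_MsgHeader header;
            UInt32 physAddr;   /* from CMEM_getPhys() */
            UInt32 size;       /* buffer size in bytes */
        } App_Msg;

        Int sendBufferAddr(MessageQ_QueueId dspQueue, UInt32 phys, UInt32 size)
        {
            App_Msg *msg = (App_Msg *)MessageQ_alloc(0, sizeof(App_Msg));

            if (msg == NULL) {
                return MessageQ_E_MEMORY;
            }
            msg->physAddr = phys;
            msg->size = size;

            return MessageQ_put(dspQueue, (MessageQ_Msg)msg);
        }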

    For the DSP to use EDMA, you would probably want to use the EDMA3 LLD package from TI on the DSP.  I believe it illustrates how to remove EDMA channels from Linux so that the DSP can own them, and it has APIs for programming the EDMA.  FYI, neither Linux-based Ipc, nor CMEM, use any EDMA.

    Regards,

    - Rob

     

  • Hi,

    I am using an EVM 6638.
    I am using NFS boot.
    I have upgraded to MCSDK 3.0.2.14.
    I have upgraded to IPC 3.00.03.28.
    I rebuilt the IPC to get a correct lad_tci6638, where MultiProcCfg_tci6638.c
    was modified for one HOST and CORE0.
    I have built a static app from the MessageQ example.
    I have built the DSP image based on the same MCSDK/IPC.
    I load the DSP image using mpm_load(), mpm_run().
    I have copied the lad_tci6638 to the target file system /usr/bin.
    I have updated the libraries libtiipcutils.so.1.0.0 and libtiipc.so.1.0.0 in the target file system /usr/lib.


    The lad.txt log shows:

    Initializing LAD...
        opening FIFO: /tmp/LAD/LADCMDS
    Retrieving command...

    LAD_CONNECT:
        client FIFO name = /tmp/LAD/1916
        client PID = 1916
        assigned client handle = 0
        FIFO /tmp/LAD/1916 created
        FIFO /tmp/LAD/1916 opened for writing
        sent response
    DONE
    Retrieving command...
    LAD_MULTIPROC_GETCONFIG: calling MultiProc_getConfig()...
    MultiProc_getConfig() - 2 procs
            Proc 0 - "HOST"
            Proc 1 - "CORE0"
        status = 0
    DONE
    Sending response...
    Retrieving command...
    LAD_NAMESERVER_SETUP: calling NameServer_setup()...
    NameServer_setup: entered, refCount=0
    NameServer_setup: created send socket: 5
    NameServer_setup: connect failed: 22, Invalid argument
        closing send socket: 5
    NameServer_setup: created recv socket: 5
    NameServer_setup: bind failed: 22, Invalid argument
        closing recv socket: 5
    NameServer_setup: creating listener thread
    NameServer_setup: exiting, refCount=1
        status = -1
    listener_cb: Entered Listener thread.
    DONE
    Sending response...
    NameServer: waiting for unblockFd: 2, and socks: maxfd: 3
    Retrieving command...
        EOF detected on FIFO, closing FIFO: /tmp/LAD/LADCMDS

        opening FIFO: /tmp/LAD/LADCMDS


    To get to the bottom of this I am trying to build/run MessageQApp, MessageQMulti...
    1 - I do not see where the DSP-side image is loaded to the DSP by these apps.
        Am I supposed to build/install the DSP side and manually load them via JTAG?
    2 - If I use the source files ipc/tests/messageq_multi.c, messageq_multicore.c, ...
        I was told previously that I need to use MPM to load the DSP images, but I see
        no mpm calls in MessageQApp, ... on the ARM Linux side.
        
    3 - If I use the sources in ipc/tests/..., what .cfg do I use for messageq_single.c?
        Is messageq_multicore.c on the DSP side based on messageq_multicore.cfg? Its namelist[]
        seems to be for the 6678 and not the 6638.
    4 - If I use messageq_single.c, what is the .cfg to be used?

    5 - Following the IPC Linux install and IPC BIOS install/build guides, I do not see
        the DSP images installed/loaded on the target; is this hidden inside a library?

    Is the IPC package that I am using not tested for the 6638?

    Shervin

  • All,

    This customer is using the EVM for the part number 66AK2H12.  Please, when you read the above e-mail, replace 6638 with 66AK2H12.

    Regards,

    Hector Rivera

  • shervin hojati said:
    1 - I do not see where the DSP-side image is loaded to the DSP by these apps.
        Am I supposed to build/install the DSP side and manually load them via JTAG?

    The tests and examples don't use run-time mpm calls to load the image.  The user must use the mpmcl application to load the remote cores before running an application that communicates with them.

    For keystone II the method is to start/stop using the MPM (User space) Loader:
    mpmcl load dsp<n> <firmware.out>
    mpmcl run dsp<n>
    mpmcl reset dsp<n>
    mpmcl status dsp<n> 

    That's part of the MCSDK, and has user space and kernel side components.

    shervin hojati said:
    3 - If I use the sources in ipc/tests/..., what .cfg do I use for messageq_single.c?
        Is messageq_multicore.c on the DSP side based on messageq_multicore.cfg? Its namelist[]
        seems to be for the 6678 and not the 6638.

    messageq_single.c uses rpmsg_transport.cfg in the same directory.  rpmsg_transport.cfg loads other .cfg files with xdc.loadCapsule(), and for 6638 it loads messageq_common.cfg.xs.

    messageq_multicore.c does use messageq_multicore.cfg's namelist[].  You need to modify it to match the MultiProc that you're setting up for the lad daemon in MultiProcCfg_tci6638.c.

    Why do you say it seems to be for the 6678?  It is correct for 6638.

    shervin hojati said:
    5 - Following the IPC Linux install and IPC BIOS install/build guides, I do not see
        the DSP images installed/loaded on the target; is this hidden inside a library?

    Dsp images end up in <ipc>/packages/ti/ipc/tests/bin/<arch> (I don't recall the <arch> for 6638, but it should be apparent when looking there) for the build.  I don't know where they go for an install, perhaps you can do (on your target filesystem):
        % cd /
        % find . -name messageq_multi.xe66

    Regards,

    - Rob

     

  • The default MCSDK filesystem has messageq_single.xe66 in /lib/firmware. The IPC example application is run using a Matrix demo script (/usr/share/matrix-gui-2.0/apps/demo_ipc/demo_ipc.sh).

    If you are compiling other example applications, you need to copy them to the filesystem. If you look at the script, it will give you a sample sequence of commands to run an IPC application.

  • I finally managed to get the communication between the ARM and DSP going, using the examples.
    I load the DSP with an image that I built using CCS, from messageq_single.c
    and the .cfg and .xs files.
    If I iterate more than 256 times in a loop where the ARM sends a message to the DSP,
    and the DSP responds, the target crashes.
    The "params.numvlocks=256" in the .xs file seems to be related, as if the memory
    used for a sent message is not released in time while message 257 is sent!

    root@keystone-evm:~# [   93.125465]  remoteproc0: powering up 2620040.dsp0
    [   93.130273]  remoteproc0: Booting unspecified firmware
    [   93.137409] virtio_rpmsg_bus virtio0: rpmsg host is online
    [   93.137464] virtio_rpmsg_bus virtio0: creating channel rpmsg-proto addr 0x3d
    [   93.137563] rpmsg_proto rpmsg0: inserting rpmsg src: 1024, dst: 61
    [   93.156194]  remoteproc0: registered virtio0 (type 7)
    [   93.161263]  remoteproc0: remote processor 2620040.dsp0 is now up
    [   98.632335] virtio_rpmsg_bus virtio0: creating channel rpmsg-proto addr 0x3d
    [   98.639414] virtio_rpmsg_bus virtio0: channel rpmsg-proto:ffffffff:3d already exist
    [   98.647101] virtio_rpmsg_bus virtio0: __rpmsg_create_channel failed

    thanks

    shervin

  • shervin hojati said:
    If I iterate more than 256 times in a loop where the ARM sends a message to the DSP,
    and the DSP responds, the target crashes.

    Is this with the MessageQApp example, and you're giving it > 256 iterations for the command-line parameter?

    Should work fine.

    shervin hojati said:
    the "params.numvlocks=256" in .xs file, seems to be related as if the memory
    used for a sent message is not released on time whle the message 257 is sent!

    I've poked around but can't find anything to do with 'params.numvlocks=256' in TI content.  In what file did you find this?

    shervin hojati said:

    root@keystone-evm:~# [   93.125465]  remoteproc0: powering up 2620040.dsp0
    [   93.130273]  remoteproc0: Booting unspecified firmware
    [   93.137409] virtio_rpmsg_bus virtio0: rpmsg host is online
    [   93.137464] virtio_rpmsg_bus virtio0: creating channel rpmsg-proto addr 0x3d
    [   93.137563] rpmsg_proto rpmsg0: inserting rpmsg src: 1024, dst: 61
    [   93.156194]  remoteproc0: registered virtio0 (type 7)
    [   93.161263]  remoteproc0: remote processor 2620040.dsp0 is now up
    [   98.632335] virtio_rpmsg_bus virtio0: creating channel rpmsg-proto addr 0x3d
    [   98.639414] virtio_rpmsg_bus virtio0: channel rpmsg-proto:ffffffff:3d already exist
    [   98.647101] virtio_rpmsg_bus virtio0: __rpmsg_create_channel failed

    The above output indicates that the DSP executable crashed (as you stated above) and restarted, since that's the only reason I can think of that would explain a 2nd "creating channel rpmsg-proto addr 0x3d" print.

    Regards,

    - Rob

     

  • Rob,

    I am using the MessageQ example with a "minor change" where I load the DSPs with the .out image using

    mpm_reset(), mpm_load(), mpm_run(). I then change numLoops to 200, 300, ...

    The DSP simply waits for a message and responds with a reply.

    The output showing the crash is what the RS232 console shows for the target.

    As long as the loop count is < 254, no problem occurs.

    Also, I can run 254 loops, add a delay (such as a breakpoint and continue), and run another 254 loops.

    It seems like, even though I send one message from the ARM to the DSP, wait for the reply, and then send the next message, the messages are piling up and filling some limited queue before there is a chance to make room.

    When I set the counter to 254, no problem is seen, and I can repeat this without a problem; the console shows:

    root@keystone-evm:~# [   52.395539]  remoteproc0: powering up 2620040.dsp0
    [   52.400349]  remoteproc0: Booting unspecified firmware
    [   52.407646] virtio_rpmsg_bus virtio0: rpmsg host is online
    [   52.407698] virtio_rpmsg_bus virtio0: creating channel rpmsg-proto addr 0x3d
    [   52.407800] rpmsg_proto rpmsg0: inserting rpmsg src: 1024, dst: 61
    [   52.426434]  remoteproc0: registered virtio0 (type 7)
    [   52.431502]  remoteproc0: remote processor 2620040.dsp0 is now up

    for each successful run.

    The reason for this exercise is to measure the time for a transaction between the ARM and DSP, as I will need to make sure my system can keep up.

    Thanks,

    Shervin

  • Shervin,

    I am sorry if you have to repeat yourself, but can you help me reproduce your issue? Are you using your own image? Did you try with TI's prebuilt images or rebuilding TI's example projects? If it is your image with your modifications, can you possibly attach it so we can see if we can reproduce it quickly? I am sure we can narrow down the root cause quickly if we can reproduce the issue you are seeing. Again, any detailed description of how to reproduce it will be appreciated.

    Best regards,

    David Zhou