
IPC : MessageQ_open() always returns MessageQ_E_NOTFOUND on OMAP-L138 (DSP core)

Other Parts Discussed in Thread: OMAP-L138, SYSBIOS, OMAPL138

Objective: Use MessageQ to pass generic messages between the ARM and DSP cores within an OMAP-L138 installed on a custom board. So far I've not been able to get MessageQ_open() to successfully return on the DSP core, but one thing at a time.

Tool Versions:

CCS 6.1.0.00104

XDCtools 3.31.2.38_core

IPC 3.40.0.06

SYS/BIOS 6.42.2.29

I've referred extensively to the online IPC API documentation (downloads.ti.com/.../index.html).

I've also referred extensively to the ex02_messageq example provided with the IPC toolchain; however, while my two coworkers work on getting that example running (they coincidentally seem to be stuck at the same spot, with Linux on the ARM core and SYS/BIOS on the DSP core, but on an LCDK-OMAPL138), I thought I'd try building up two simple SYS/BIOS projects to get messaging working.

I'll post the relevant code below then further explain my difficulties.

arm.cfg:

var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');
Ipc.procSync = Ipc.ProcSync_PAIR;

var MultiProc = xdc.useModule('ti.sdo.utils.MultiProc');
MultiProc.numProcessors = 2;
MultiProc.setConfig("HOST", ["HOST", "DSP"]);

var SharedRegion = xdc.useModule('ti.sdo.ipc.SharedRegion');
SharedRegion.translate = false;
//SharedRegion.cacheLineSize = 32;
SharedRegion.numEntries = 4;
var SHAREDMEM = 0xC2000000;
var SHAREDMEMSIZE = 0x0E000000;

SharedRegion.setEntryMeta(0,
    { base: SHAREDMEM,
      len: SHAREDMEMSIZE,
      ownerProcId: 0,
      isValid: true,
      cacheEnable: false,
      cacheLineSize: 32,
      createHeap: true,
      name: "SR0"
    }
);

var MessageQ = xdc.useModule('ti.sdo.ipc.MessageQ');
MessageQ.maxRuntimeEntries = 2;

var SyncSwi = xdc.useModule('ti.sysbios.syncs.SyncSwi');


dsp.cfg:

var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');
Ipc.procSync = Ipc.ProcSync_PAIR;
//Ipc.procSync = Ipc.ProcSync_ALL;

var MessageQ = xdc.useModule('ti.sdo.ipc.MessageQ');
MessageQ.maxRuntimeEntries = 2;

var MultiProc = xdc.useModule('ti.sdo.utils.MultiProc');
MultiProc.numProcessors = 2;
MultiProc.setConfig("DSP", ["HOST", "DSP"]);

var SharedRegion = xdc.useModule('ti.sdo.ipc.SharedRegion');
SharedRegion.translate = false;
//SharedRegion.cacheLineSize = 32;
SharedRegion.numEntries = 4;
var SHAREDMEM = 0xC2000000;
var SHAREDMEMSIZE = 0x0E000000;

SharedRegion.setEntryMeta(0,
    { base: SHAREDMEM,
      len: SHAREDMEMSIZE,
      ownerProcId: 0,
      isValid: true,
      cacheEnable: false,
      cacheLineSize: 32,
      createHeap: true,
      name: "SR0"
    }
);


shared.h:

#ifndef SHARED_H
#define SHARED_H
#define MESSAGEQ_ARM_DSP ("mqad")
#endif


arm.c:

#include <xdc/std.h>
#include <ti/ipc/Ipc.h>
#include <ti/ipc/MessageQ.h>
#include <ti/sysbios/BIOS.h>
#include <ti/sysbios/knl/Swi.h>
#include <ti/sysbios/knl/Task.h>
#include <ti/sysbios/syncs/SyncSwi.h>
#include "shared.h"

extern Swi_Handle swi_messageq;
MessageQ_Handle messageQ;
MessageQ_Params messageQParams;
SyncSwi_Params syncSwiParams;
SyncSwi_Handle syncSwiHandle;

int main()
{
/* Task Creation */
/* ... */
BIOS_start();
return 0;
}
Void taskFxn(UArg arg0, UArg arg1)
{
Bool bContinue = TRUE;
Int status;
status = Ipc_start();
if(status < 0)
{
/* log error and system exit*/
}

/* Create a message queue using SyncSwi as the synchronizer */
SyncSwi_Params_init(&syncSwiParams);
syncSwiParams.swi = swi_messageq;
syncSwiHandle = SyncSwi_create(&syncSwiParams, NULL);

MessageQ_Params_init(&messageQParams);
messageQParams.synchronizer = SyncSwi_Handle_upCast(syncSwiHandle);
messageQ = MessageQ_create(MESSAGEQ_ARM_DSP, &messageQParams);
if(NULL == messageQ)
{
/* log error and system exit */
}
else
{
/* We're good! So far this seems to be working... */
}

while(bContinue)
{
Task_sleep(10);
}
}


dsp.c:

#include <xdc/std.h>
#include <ti/ipc/Ipc.h>
#include <ti/ipc/MessageQ.h>
#include <ti/sysbios/BIOS.h>
#include <ti/sysbios/knl/Task.h>
#include "shared.h"

int main()
{
/* Task Creation */
/* ... */
BIOS_start();
return 0;
}
Void taskFxn(UArg arg0, UArg arg1)
{
MessageQ_QueueId remoteQueueId;
Int status;

status = Ipc_start();
if(status < 0)
{
/* log error and system exit */
}

do
{
status = MessageQ_open(MESSAGEQ_ARM_DSP, &remoteQueueId);
Task_sleep(1);
} while(status < 0);
}
As you can see, I haven't done anything really complicated here. So far, my Ipc_start() calls both return successfully. My MultiProc_getId() and MultiProc_getName() calls produce what I expect on both cores. The call to MessageQ_create() returns a valid handle on the ARM. But the call to MessageQ_open() on the DSP keeps returning MessageQ_E_NOTFOUND indefinitely.
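
For reference, the sanity check I run on each core looks roughly like this (trimmed, untested sketch; the function name is mine):

#include <xdc/std.h>
#include <xdc/runtime/System.h>
#include <ti/ipc/MultiProc.h>

Void printMultiProcInfo(Void)
{
    UInt16 selfId = MultiProc_self();

    System_printf("self = %d (%s), numProcessors = %d\n",
                  selfId, MultiProc_getName(selfId),
                  MultiProc_getNumProcessors());
}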

Any ideas on where to start debugging this further? I'm suspecting it either has something to do with my SharedRegion not being set up correctly (although I would expect the calls to Ipc_start() to fail if this were true), or the processors aren't actually attached (even though Ipc_start() seems to indicate they are by returning successfully).

Thanks in advance for any help!

  • Derek Wilson said:
    or the processors aren't actually attached (even though Ipc_start() seems to indicate they are by returning successfully).  

    It's probably this.

    When you specify Ipc.procSync as Ipc.ProcSync_PAIR, you need to call Ipc_attach() for the "other" core in the pair (in addition to first calling Ipc_start()).

    Since there are only the ARM and DSP, I don't know why you wouldn't just use ProcSync_ALL, which doesn't require the additional step of Ipc_attach().
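
    If you do stay with ProcSync_PAIR, the extra step looks roughly like this (untested sketch; "DSP" here is whatever remote name you configured in MultiProc.setConfig):

    #include <xdc/std.h>
    #include <ti/ipc/Ipc.h>
    #include <ti/ipc/MultiProc.h>
    #include <ti/sysbios/knl/Task.h>

    Void attachToRemote(Void)
    {
        Int status;
        UInt16 remoteProcId = MultiProc_getId("DSP");   /* use "HOST" on the DSP side */

        status = Ipc_start();
        /* ... check status ... */

        /* Ipc_attach() returns Ipc_E_NOTREADY until the remote core gets there too */
        do {
            status = Ipc_attach(remoteProcId);
            if (status == Ipc_E_NOTREADY) {
                Task_sleep(1);
            }
        } while (status == Ipc_E_NOTREADY);
    }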

    Regards,

    - Rob

  • Thanks for the reply, Rob.

    I was initially using an IPC 3.36.x release with ProcSync_ALL, and Ipc_start() would return successfully.  Upon noticing yesterday that there was a 3.40 release earlier in the week, I upgraded (along with XDCtools and SYS/BIOS).  After that, Ipc_start() wouldn't return, so I changed to ProcSync_PAIR (not understanding the implication) and it would return, so I thought I was good.

    Now I've changed it back to ProcSync_ALL, and Ipc_start() on the ARM doesn't return.  Stepping through the Ipc_start() code, it seems to be stuck at:

    do {
        status = Ipc_attach(baseId);
    } while (status == Ipc_E_NOTREADY);

    where baseId = 1, which I think is correct.  Digging further, it seems to be stuck at:

    if(MultiProc_self() < remoteProcId) {
        /* wait for remote processor to finish */
        while(remote->startedKey != ti_sdo_ipc_Ipc_PROCSYNCFINISH &&
              remote->startedKey != ti_sdo_ipc_Ipc_PROCSYNCDETACH) {
              if(cacheEnabled) {
                  Cache_inv((Ptr)remote, reservedSize, Cache_Type_ALL, TRUE);
              }
        }
    }

    However, the 'remote' pointer seems to be incorrect (I'm seeing 0x28 in my Variables window).  This pointer should be an address offset from the beginning of the shared region; it comes from a call to (remote = ti_sdo_ipc_Ipc_getMasterAddr(remoteProcId, sharedAddr)).  I can see in the Variables window that remoteProcId = 1 (as I expect) and sharedAddr = 0xc2000000 (as I expect).  But outside of that function call (in ipc_attach), 'remote' is incorrect for whatever reason.

    Now, I haven't built the IPC modules myself, so my seeing an incorrect value could be an artifact of the lack of debug symbols (but I don't think so?).  

    While stepping through here though, I'm getting the impression that maybe the DSP should be considered the 'master' core (and thus the owner of the SharedRegion)?  But honestly, I would expect that either the DSP or the ARM should be able to 'own' the SharedRegion.  Am I wrong in assuming that?

    Sorry for the verbosity, but I'm trying to provide all the information and insights I notice.  Thanks again in advance for your help!  Please let me know what other information I can gather for you, but I will continue to look through here and see if I can understand why these pointers seem to be incorrect.

    FYI - I changed the SharedRegion owner to be the DSP, tried calling Ipc_start() on both cores, and it's still not returning, and it still seems to be pointer related (i.e. looking at the wrong location for the startedKey).  According to the IPC 3.x FAQ wiki, I should only need to call Ipc_start() on the 'master', but I've tried all combinations of master/slave and of where I call Ipc_start().  Is that documentation correct?

  • Derek Wilson said:
    According to the IPC 3.x FAQ (processors.wiki.ti.com), I should only need to call Ipc_start on the 'master', but I've tried all combinations of master/slave and where I call Ipc_start.  Is that documentation correct?

    That wiki documentation only applies when the IPC is between a Linux ARM host and the DSP.  When both the ARM and DSP are running SYS/BIOS, they both need to call Ipc_start().

    Let me see if I can summon some help from someone who is more familiar with SYS/BIOS <-> SYS/BIOS IPC.  My IPC knowledge is focused on Linux <-> SYS/BIOS IPC.

    Regards,

    - Rob

     

  • Derek,

    Try the following suggestions to see if they resolve the attach failure.

    1. Change cacheEnable to true in SharedRegion configuration for both ARM and DSP.

    2. Remove cacheLineSize from all SharedRegion configurations, both ARM and DSP. The default value should be correct.

    3. Add the following configuration to both programs:

    var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');
    Ipc.sr0MemorySetup = true;
    Ipc.generateSlaveDataForHost = false;

    ~Ramsey

    Thank you for your response, Ramsey.  That seems to have gotten me much further into Ipc_start().  I've gone back and forth on which core should be the SharedRegion owner (i.e. ownerProcId).  I've also gone back and forth on the MultiProc.setConfig order (["HOST", "DSP"] and ["DSP", "HOST"]).  But depending on which core I set as the SharedRegion owner, the owner now fails out of Ipc_start() with Ipc_E_FAIL and the other core spins somewhere inside Ipc_start().

    The owner's Ipc_start() fails with Ipc_E_FAIL, and stepping through Ipc_attach() shows that it's because the following call fails with Notify_E_FAIL:

    status = ti_sdo_utils_NameServer_SetupProxy_attach(remoteProcId, sharedAddr);
    if (status < 0) {
        /* free allocated SharedRegion heap */
        return (Ipc_E_FAIL);
    }

    I can't seem to find the code for that function.  It seems to resolve to a call to ti_sdo_ipc_nsremote_NameServerRemoteNotify_attach which I can't find in the IPC source (I can only find it in header files).

    The non-owner Ipc_start() spins because Ipc_attach() returns Ipc_E_NOTREADY, presumably because the owner hasn't finished initialization.

    This gives me a lot more to go on, but if you have any thoughts, feel free to add on!  Many thanks!

  • In order to try and fix this problem, I added the following lines to my configuration project (not knowing what else to try):

    var NameServer = xdc.useModule("ti.sdo.utils.NameServer");
    var NsRemote = xdc.useModule("ti.ipc.namesrv.NameServerRemoteRpmsg");
    NameServer.SetupProxy = NsRemote;

    However, I've only seen reference to the usage of these XDC modules under Linux builds.  Is it appropriate to have them in this SYS/BIOS - SYS/BIOS configuration?  I can find no documentation online for the NameServer.SetupProxy field, specifically what other NameServer modules I might use.

    Putting this into my configuration project, however, causes an unresolved symbol error upon linking: RPMessage_send.  I can't figure out in which module this function exists.  I see references online to ti.ipc.rpmsg, but inclusion of that module produces PACKAGE_NOT_FOUND errors.  This only furthers my belief that this is a Linux only module.

    I'm convinced I have to do something with the NameServer module in my XDC config, but I'm not sure where to find any documentation on doing so.  I've referred extensively to the online IPC Users Guide, IPC FAQ, and IPC API to no avail.  Thanks again for the help!

  • Derek Wilson said:
    I see references online to ti.ipc.rpmsg, but inclusion of that module produces PACKAGE_NOT_FOUND errors.  This only furthers my belief that this is a Linux only module.

    Your belief is correct: ti.ipc.namesrv.NameServerRemoteRpmsg is used only for Linux-based builds.  In general, everything you use for SYS/BIOS <-> SYS/BIOS should be under "ti.sdo.ipc", not "ti.ipc".  You should not be referencing anything with Rpmsg or RPMessage in the name, since those are used only for communication with Linux.

    I'll leave it to Ramsey (or someone else) to guide you on the correct NameServer configuration for the SYS/BIOS <-> SYS/BIOS scenario since I have only basic knowledge in that area.

    Regards,

    - Rob

  • Okay, I made a little progress, but I'm still having some issues.  In my cfg file, I included the Notify module and specified a SetupProxy:

    var Notify = xdc.useModule('ti.sdo.ipc.Notify');
    Notify.SetupProxy = xdc.useModule('ti.sdo.ipc.family.da830.NotifyCircSetup');

    per the document at: IPC Notify Drivers and Transports

    If I include those lines in the cfg file, I get past the previous point where ti_sdo_utils_NameServer_SetupProxy_attach() was throwing an error (I put a breakpoint where it calls the function, and it returns with status = 0).  However, the document indicates that the Notify module should be using NotifyDriverShm by default.  So I'm not sure why I need to specify a driver?

    Now I get to ti_sdo_ipc_MessageQ_SetupTransportProxy_attach() which fails with (-1).  So I tried adding a TransportProxy to the MessageQ module in the cfg project:

    var MessageQ = xdc.useModule('ti.sdo.ipc.MessageQ');
    MessageQ.SetupTransportProxy = xdc.module('ti.sdo.ipc.transports.TransportShmCircSetup');

    Again, according to the "IPC Notify Drivers and Transports" document listed above, I thought there was a default TransportProxy for MessageQ and specifying one wasn't necessary.  Regardless, the call to ti_sdo_ipc_MessageQ_SetupTransportProxy_attach() fails.  That function calls TransportShmCircSetup_attach():

    Int TransportShmCircSetup_attach(UInt16 remoteProcId, Ptr sharedAddr)
    {
        Int status = MessageQ_E_FAIL;
        TransportShmCirc_Handle handle;
        TransportShmCirc_Params params;
        Error_Block eb;

        /* Function and TransportShmCirc_Params initialization */
        if(Notify_intLineRegistered(remoteProcId, 0))
        {
            handle = TransportShmCirc_create(remoteProcId, &params, &eb);
            if(handle != NULL)
            {
                status = MessageQ_S_SUCCESS;
            }
        }
    
        return status;
    }

    and stepping through the function, I can see that handle is NULL after the call to TransportShmCirc_create().  (The NULL comes from the call to xdc_runtime_Core_createObject__I() in app_pe9.c which I don't have source for.)

    Any thoughts on why this would be failing?  Or how I should be configuring the MessageQ?

    Thanks again!

  • It looks like the call is failing due to an "out of mem" error.  I'm posting a picture of the Error_Block contents after the call.

    Where is it trying to allocate memory from for the object?  The SharedRegion memory is in DDR, and the ARM's program/data is in L3 (I tried switching these up with the same result).  My stack still has tons of memory with the 0xBEBEBEBE watermark, so I don't think it's that.  Is there possibly some explicit heap initialization I must do first?

  • Derek,

    I was trying to setup my OMAP-L138 LCDK using the same setup as you, but I'm having some fundamental problems. Meanwhile, I'll try to answer your questions.

    One item to mention is that IPC dropped OMAP-L138 support in the IPC 3.40 release. You might try going back to IPC 3.36.02.13. We no longer ship the ARM9 libraries. I would expect you to get linker errors unless you are building with Ipc.LibType_Custom. Otherwise, you are probably okay with the IPC 3.40 release.

    You should not need to configure any delegates for your proxies. This makes me wonder what platform you are using. The platform defines device names which are used by IPC to pick the appropriate delegate. Let me know what platform you are using and the device names.

    The platform also defines your memory map. I looked at a couple of platforms we ship (ti.platforms.evmDA830, ti.platforms.evmOMAPL138) and it seems both of these platforms assume Linux is running on the ARM. These platforms define only 16 MB of external memory. My board has 128 MB. I think you will either need to write your own platform or use the generic platform. At any rate, let me know what your memory map looks like (how much memory you have and how it is partitioned between ARM and DSP).

    You indicate that you are running out of memory. All memory allocations come from the SYS/BIOS heap which uses the Memory.defaultHeapInstance. Look for this in your config script. You can increase the size of the heap there. It looks something like this.

    /* create a default heap */
    var HeapMem = xdc.useModule('ti.sysbios.heaps.HeapMem');
    var heapMemParams = new HeapMem.Params();
    heapMemParams.size = 0x10000;

    var Memory = xdc.useModule('xdc.runtime.Memory');
    Memory.defaultHeapInstance = HeapMem.create(heapMemParams);

    It sounds like you are placing the ARM program into the on-chip memory (L3 CBA RAM). This memory is not very big (128 KB), maybe you are running out of room. Can you try placing your ARM program in external memory?

    I'm curious if you are building your ARM and DSP programs in CCS or are you using the command line tools?

    ~Ramsey

  • Derek,

    I've attached a zip file which contains three CCS projects that build for OMAP-L138. The hello_arm and hello_dsp projects each build their respective executable. The shared project contains files used by the other projects. If you actually try to build the projects, you will need to update the product versions on the RTSC project tab. I was using older versions when I first created these projects.

    If you want, you can just look at the project settings and config scripts to compare with your own projects.

    ~Ramsey

    0880.OMAP-L138.zip

  • Thanks, Ramsey.  I'll lead off by saying that I'm now succeeding with MessageQ_open() on the DSP which was the original point of the post.  Thanks for helping me get there!  I do have a few additional questions for you though.

    First, I did end up having to downgrade from IPC 3.40.x to 3.36.x.  I was having some other issues, but ultimately this did get me there.  I was not having any linker errors, but something bad was definitely happening.  Thank you for pointing out that future releases will no longer support the OMAP.  This revelation is somewhat disturbing to our engineering team, though.  I'm curious as to why support is being phased out for this device.  Is the OMAP-L138 nearing the end of its life?  Note that we are using a custom board with the OMAP-L138 on it (as opposed to the LCDK).

    On to my fixes... I was indeed sizing my heap too small.  I was using the default value of 4 kB.  In an attempt to fix it, I had increased that to 8 kB, and when that didn't fix it, I assumed I was on the wrong path.  I increased it to your suggested 64 kB, and I was able to allocate memory for the MessageQ transport object.  However, instead of implementing it the way you suggested (using the HeapMem and Memory modules), I simply did the following:

    var BIOS = xdc.useModule('ti.sysbios.BIOS');
    BIOS.heapSize = 0x10000;

    Is there an advantage to doing it the way you suggested?

    Specifying a proxy for the Notify module is indeed necessary in my case.  If I don't, Ipc_start() returns with an error for the HOST core, and Ipc_start() spins for the DSP core.  I didn't take the time to track down *why* the error occurs though since I see no reason to not be using the faster circular buffering scheme.

    Specifying a proxy for the MessageQ module is *not* necessary in my implementation.  Once I had enough heap memory, I was able to get through the MessageQ transport setup (ti_sdo_ipc_MessageQ_SetupTransportProxy_attach()) successfully.

    I have modified the RTSC platform memory map, but it's all pretty standard.  I'm modifying the xdc\platform\generic\package to get there.  I'm posting a screenshot of my memory sections.  In the RTSC tab of project properties, the "Target" for the ARM core is listed as "ti.targets.arm.elf.Arm9" and for the DSP is "ti.targets.elf.C674".  In each case the platform is my custom one.  Should I be using a different Target or Platform in order to not have to specify a Notify proxy?

    Here I'll post the relevant portions of my cfg files for both the ARM and DSP, which will hopefully be useful to someone in the future.  It was a bear figuring out the minimum modules necessary, since I couldn't find documentation where everything was detailed in one place and all the example projects seem to assume Linux on the ARM and SYS/BIOS on the DSP.  And the actual code is trivial: I simply call Ipc_start() on both the HOST and DSP cores at the top of my only task.  Then when that returns, I call MessageQ_create() on the HOST and MessageQ_open() on the DSP (see the sketch of the message exchange at the end of this post).

    ARM:

    var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');
    Ipc.procSync = Ipc.ProcSync_ALL;
    Ipc.sr0MemorySetup = true;
    Ipc.generateSlaveDataForHost = false;
    
    var MultiProc = xdc.useModule('ti.sdo.utils.MultiProc');
    MultiProc.numProcessors = 2;
    MultiProc.setConfig("HOST", ["HOST", "DSP"]);
    
    var SharedRegion = xdc.useModule('ti.sdo.ipc.SharedRegion');
    SharedRegion.translate = false;
    SharedRegion.numEntries = 1;
    var SHAREDMEM      = 0xC2000000;
    var SHAREDMEMSIZE  = 0x0E000000;
    
    SharedRegion.setEntryMeta(0,
        { base: SHAREDMEM,
          len: SHAREDMEMSIZE,
          ownerProcId: 0,
          isValid: true,
          cacheEnable: true,
          createHeap: true,
          name: "DDR_SHARED"
        }
    );
    
    var MessageQ = xdc.useModule('ti.sdo.ipc.MessageQ');
    MessageQ.maxRuntimeEntries = 2;
    //MessageQ.SetupTransportProxy = xdc.module('ti.sdo.ipc.transports.TransportShmCircSetup');
    
    var Notify = xdc.useModule('ti.sdo.ipc.Notify');
    Notify.SetupProxy = xdc.useModule('ti.sdo.ipc.family.da830.NotifyCircSetup'); /* without this, the ARM hits an exception... :( */
    

    DSP:

    var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');
    Ipc.procSync = Ipc.ProcSync_ALL;
    Ipc.sr0MemorySetup = true;
    Ipc.generateSlaveDataForHost = false;
    
    var MessageQ = xdc.useModule('ti.sdo.ipc.MessageQ');
    MessageQ.maxRuntimeEntries = 2;
    
    var MultiProc = xdc.useModule('ti.sdo.utils.MultiProc');
    MultiProc.numProcessors = 2;
    MultiProc.setConfig("DSP", ["HOST", "DSP"]);
    
    var SharedRegion = xdc.useModule('ti.sdo.ipc.SharedRegion');
    SharedRegion.translate = false;
    SharedRegion.numEntries = 1;
    var SHAREDMEM      = 0xC2000000;
    var SHAREDMEMSIZE  = 0x0E000000;
    
    SharedRegion.setEntryMeta(0,
        { base: SHAREDMEM,
          len: SHAREDMEMSIZE,
          ownerProcId: 0,
          isValid: true,
          cacheEnable: true,
          createHeap: true,
    	  name: "DDR_SHARED"
    	}
    );
    
    var Notify = xdc.useModule('ti.sdo.ipc.Notify');
    Notify.SetupProxy = xdc.useModule('ti.sdo.ipc.family.da830.NotifyCircSetup'); /* without this, the ARM hits an exception... :( */
    

    Now to actually get some comms going, but I think I'm 90% of the way there now.  Thanks for all your effort!  If I have additional questions, I'll start a new thread, but I'll probably post on this thread too since you'll be notified of responses.
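
    For completeness, here's a rough, untested sketch of the message exchange I plan to add next (my own names; it assumes a message heap has been registered as MessageQ heap 0 on both cores, e.g. via MessageQ_registerHeap(SharedRegion_getHeap(0), 0)):

    #include <xdc/std.h>
    #include <ti/ipc/MessageQ.h>

    #define HEAP_ID 0

    typedef struct {
        MessageQ_MsgHeader header;   /* required first field of every message */
        UInt32 payload;              /* application data */
    } MyMsg;

    /* DSP side: allocate a message and send it to the queue opened earlier */
    Void sendOne(MessageQ_QueueId remoteQueueId)
    {
        MyMsg *msg = (MyMsg *)MessageQ_alloc(HEAP_ID, sizeof(MyMsg));

        if (msg != NULL) {
            msg->payload = 42;
            MessageQ_put(remoteQueueId, (MessageQ_Msg)msg);
        }
    }

    /* ARM side: the Swi posted by SyncSwi drains the queue without blocking */
    extern MessageQ_Handle messageQ;

    Void swiFxn(UArg arg0, UArg arg1)
    {
        MessageQ_Msg msg;

        while (MessageQ_get(messageQ, &msg, 0) == MessageQ_S_SUCCESS) {
            /* ... process ((MyMsg *)msg)->payload ... */
            MessageQ_free(msg);
        }
    }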

  • Derek,

    Great to hear you have MessageQ_open working.

    Re: IPC 3.36

    I've been informed this is the last release stream which will support the ARM9 libraries. These are the libraries you would need when building SYS/BIOS for the ARM. It's best if you stay on this stream. The IPC 3.40 stream supports OMAP-L138, but only with Linux on the ARM. It is unclear how much longer this support will be maintained. However, the OMAP-L138 device will continue to be supported by Texas Instruments; it's only the IPC support I'm talking about. You should be safe from a hardware point of view. From a software point of view, you will need to stay where you are. That's the best I can offer.

    Re: SYS/BIOS heap size

    Using BIOS.heapSize is just a short-cut. Using it should be fine. However, if you decide to change to a different heap type, such as HeapBuf, then you would need to do it as I indicated. Once you are in final testing, you can turn on heap statistics and track your maximum usage. Then you can reduce your heap size to fit. In early development, larger is safer.
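
    If it helps with sizing, something like this (rough, untested sketch) reads the default heap usage at run time:

    #include <xdc/std.h>
    #include <xdc/runtime/Memory.h>
    #include <xdc/runtime/System.h>

    Void printHeapStats(Void)
    {
        Memory_Stats stats;

        /* NULL selects the configured Memory.defaultHeapInstance */
        Memory_getStats(NULL, &stats);

        System_printf("heap total=%d free=%d largestFree=%d\n",
                      (Int)stats.totalSize, (Int)stats.totalFreeSize,
                      (Int)stats.largestFreeSize);
    }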

    Re: Notify proxy

    I'm still troubled by the fact you need to specify the Notify proxy for the ARM. Did you try importing the projects I attached? You can look at the project configurations to see how I specified the platform.

    Using NotifyDriverCirc has some differences. For example, that driver will never return Notify_E_NOTINITIALIZED, Notify_E_EVTDISABLED, or Notify_E_EVTNOTREGISTERED. If your application plans to use these, you need to get NotifyDriverShm working. I know the optimization wiki page indicates that NotifyDriverCirc is faster, but this really depends on your run-time execution behavior. In some instances, NotifyDriverShm may be faster (but it does use more memory). Ideally, you would get both drivers working and then profile them to see which suits your application best.

    One final note. I assume you already figured this out, but I'll say it anyway. Both sides (ARM and DSP) must be using the same notify driver. If you are using NotifyDriverCirc on ARM (to get past the build errors), then you must also configure the DSP to do the same. You cannot have NotifyDriverCirc on ARM and NotifyDriverShm on DSP.

    Re: SetupProxy

    I should have mentioned this earlier. The proxy functions are generated during configuration. Look in your generated "big.c" file to find them. This file is under the configPkg project. The actual filename is a derivative of your application name. Be aware, this is a big file (hence the nickname).

    <project>/Debug/configPkg/package/cfg/<name>.c

    Re: custom RTSC package

    It sounds like you are modifying the official xdc.platforms.generic package? You should be using the Platform Wizard to create a new platform for your custom board. That way you can give it any name you wish. The wizard is limited, but an easy entry point. You will make two platforms, one for ARM and one for DSP. If you learn how to author platform packages, then you can write one platform for both, but the wizard cannot do this. You will find this tool in CCS, look in the Debug Perspective.

    CCS Debug Perspective
    Tools > RTSC Tools > Platform > New
    Package Name: acme.platforms.omapl138.arm
    Repository: use your ccs workspace or create an actual package repository
    Add Repository to Project Package Path > Select
    Device Family > arm
    Device Name > OMAPL138
    Next

    Clock Speed: 300.0
    External Memory > Insert Row
    Name: PROG, Base: 0xC4000000, Length: 0x1000000
    Memory Sections
    Code: PROG, Data: PROG, Stack: PROG
    Finish


    I've not actually built with this, so I might have a typo. But I think you get the idea. Then, in your project settings, specify the platform using your new platform you just created.

    Project Properties > CCS General > RTSC > Platform: acme.platforms.omapl138.arm

    Re: SharedRegion configuration

    I would set SharedRegion.numEntries = 4 as a minimum. I've never used 1.

    ~Ramsey

    Hi Ramsey,

    Thank you very much for your detailed post.  

    1.) You maintain that the OMAP-L138 product line is not going away.  You do, however, indicate that IPC is simply going to stay at the current stream.  So will software support for the OMAP-L138 continue, or will all software support be frozen at the current generation of tools?

    2.) I'm unable to import the 'shared' project: "Error: Import failed for project 'shared' because its meta-data cannot be interpreted.  Please contact support."

    3.) I imported the DSP and ARM projects and updated the XDCtools, SYSBIOS, and IPC versions.  I copied the target configuration (ccxml) over from my working project, as well as my GEL files.  Everything builds correctly; however, when I attempt to debug the project, my DSP doesn't break at the top of main.  And when I pause the project, the PC is somewhere off in nowhere land - seemingly not even close to any program data.

    4.) Any ideas why I must specify a setup proxy for the Notify module?  

    Thanks again!

  • Derek,

    Re: #1 OMAP-L138 Support

    Unfortunately, the IPC software support for OMAP-L138 will be frozen. IPC development will continue, but for other supported devices. If you do find a problem with IPC 3.36, you might get some suggestions in the future on how to fix it. But it seems unlikely that there will be any patch releases.

    Re: #2 Project Import

    Yes, I got the same error. Here is what I found which does seem to work. Starting with a clean workspace, I imported the hello_arm and hello_dsp projects using the CCS Projects wizard. I then imported the shared project using the Eclipse project wizard. I selected the same zip file I gave you for both import operations.

    Here are the steps:

    File > Import...
    Code Composer Studio > CCS Projects
    Next

    Select archive file: OMAP-L138.zip
    hello_arm > Select
    hello_dsp > Select
    Finish

    File > Import...
    General > Existing Projects into Workspace
    Next

    Select archive file: OMAP-L138.zip
    shared > Select
    Finish

    Re: #3 DSP failure

    Maybe once you can import my shared project, it will build correctly and run correctly. Let me know.

    Re: #4 Notify proxy

    I'll have to think about this some more. Sorry.

    ~Ramsey

  • Okay, I was able to import the projects.  I changed the tools versions to:

    ARM CGT: 5.2.4

    DSP CGT: 7.4.14

    XDC: 3.30.6.67

    IPC: 3.36.2.13

    SYSBIOS: 6.41.4.54

    Everything builds successfully, and the DSP stops at the top of main.  However, if I run both the ARM and DSP, neither gets out of Ipc_start().  I even took your suggestions from earlier in the thread to add:

    Ipc.sr0MemorySetup = true;

    Ipc.generateSlaveDataForHost = false;

    Also, if I try to single step through Ipc_start(), I don't seem to have any debug symbols as all I can see is disassembly.  The project properties do specify a debug build though.

    I noticed also that in my own project, if I don't put the DSP into internal memory, it won't get out of Ipc_start(), and I'm not sure why that is.  So I changed your shared/config.bld file to put the DSP into L2 IRAM.

  • Okay, I'm finding something which I don't understand in my own application.  (I haven't been able to get your example application out of Ipc_start() for either core.)

    If I put both the ARM code/data and the DSP code/data in DDR, my application won't get past ti_sdo_ipc_MessageQ_Module_startup() in MessageQ.c, which runs (from autoinit) before the first line of code is ever reached.  If I don't load and run the ARM, though, the DSP gets past that part and into Ipc_start() (from which, obviously, it never proceeds).

    I've tried with caching on and without caching.  I'm not sure how to debug this.  I've also verified there's no overlap in ARM/DSP memory except for the shared region memory.  I'm convinced it has something to do with caching, but I'm not sure how to verify this.  I don't think it has to do with my GEL file initialization of the DDR though since I can run either ARM or DSP out of DDR by itself.  But I could be wrong.

    Any ideas of where to look?

  • Derek,

    I have updated to the same product versions you posted. I'm using CCS 6.0.1. I've rebuilt and run my example and it still works. I'm running on an OMAP-L138 Development Kit (LCDK). I realize you are running on a custom board. Would it be possible to try running on an LCDK just to have a common baseline?

    Here are some details on how I run the example. I don't have a lot of experience with OMAP-L138, but this sequence seems reliable on my setup. On the first run after a power on event, I think it is important to run the ARM to main before loading the DSP. Something to do with programming the MMU maybe.

    Startup Sequence

    1. Power on OMAP-L138 LCDK
    2. Connect to ARM
    3. CPU Reset ARM
    4. Load program
    5. Run to main
    6. Connect to DSP
    7. CPU Reset DSP
    8. Load program
    9. Run to main
    10. Run both processors

    At this point, I use the following sequence to simply re-run the example. This also seems to work if I rebuild the example. But if I disconnect from the ARM, then it never works again. I have to start over with the startup sequence above (i.e. disconnect and power cycle the board).

    Restart Sequence

    1. CPU Reset DSP
    2. CPU Reset ARM
    3. Reload ARM
    4. Run to main (on ARM)
    5. Reload DSP
    6. Run to main (on DSP)
    7. Run both processors

    I always disable the auto-run to main feature in my target configuration. That way, when I load the program it does not start running. You will see the PC at the program entry point. Here is how to disable the auto-run to main feature. Make sure to terminate your debug session first.

    Open the Target Configuration window (View > Target Configuration)
    Right-Click on your ccxml file (OMAP-L138 LCDK.ccxml for me) > Properties
    Device > C674X_0
    Auto Run and Launch Options > Select
    On a program load or restart > Unselect
    Apply
    Device > ARM9_0
    Auto Run and Launch Options > Select
    On a program load or restart > Unselect
    OK

    I have also modified the configuration for both ARM and DSP to enable debug symbols for the IPC source code. Look in the respective config scripts; I've added the following to each.

    var Build = xdc.useModule('ti.sdo.ipc.Build');
    Build.libType = Build.LibType_Debug;

    When you rebuild your program, it will rebuild all IPC source code on-the-fly (just like it does for SYS/BIOS). When you run, you should be able to step into the IPC source code and see what is going on.

    I've attached a new, updated version of the example: 2018.OMAP-L138.zip

    Have a look at the project settings and verify that platform names are set correctly. It seems that when I modify the product versions and platform name, I have to do it several times before the changes actually stick. Must be a bug in CCS. I also have to make the changes in both Debug and Release configuration. The platforms should look like the following for ARM and DSP respectively.

    ti.platforms.evmOMAPL138:arm
    ti.platforms.evmOMAPL138:dsp

    These are platform instances created in the shared/config.bld script. Have a look at the memory map defined there. Are you familiar with the DSP MAR registers? These control which parts of memory are cacheable (from the DSP perspective). Each MAR bit controls a 16 MB block of memory, so you must ensure that SR-Zero is not sharing the same 16 MB section with the program code and data. You can also use ROV to look at the MAR bits. Here is the code in dsp.cfg which turns off the MAR bit for SR-Zero.

    Cache = xdc.useModule('ti.sysbios.family.c64p.Cache');
    Cache.MAR192_223 = (1 << 0);
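
    As a rough illustration of the arithmetic (mine, not from the IPC sources): each MAR register covers one 16 MB block, so the MAR index for an address is simply addr >> 24 (e.g. 0xC2000000 -> 0xC2 = MAR194, bit 2 of MAR192_223).

    #include <xdc/std.h>

    /* Which MAR register covers a given address? (16 MB granularity) */
    UInt marIndexForAddr(UInt32 addr)
    {
        return (UInt)(addr >> 24);    /* e.g. 0xC2000000 -> 194 */
    }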

    Let me know if you need more information on this.

    Re: your last post on memory and GEL files.

    I don't have any information on this. However, when I use my OMAP-L138 LCDK for SYS/BIOS on the ARM (i.e. not with Linux), I have to set the DIP switches as follows.

    1 2 3 4 5 6 7 8
    0 0 0 1 1 0 0 1

    I'm not sure what this controls, but it has to do with booting. You would need to understand these settings and then apply them to your own custom board.

    I do hope you can get the attached example to work. Then we would have a common baseline from which to move forward.

    ~Ramsey

    Okay, thanks, Ramsey, that worked for me.  I borrowed a coworker's LCDK and was able to get the example running and sending the Notify events.  Now I'll see if I can figure out how to get it running on my custom board.  It must have something to do with my target configuration and/or GEL file initialization.  The main difference I can think of is that with my custom configuration I'm loading a GEL file for both the DSP and the ARM, whereas for your example I just have the single LCDK GEL file (which I load with the ARM); that, and the steps you use to load and run everything.

    I have one interim question, though (and it may be a dumb question).  I can't figure out how to view the Log_print lines you have sprinkled about your code.  I know the System_print statements will appear in the CIO window after a System_flush, but I'm not sure how to view the Log_print lines.  In my own project, I use the UIA module and use Log_print, then use the RTOS analyzer tool to view the logs.

    Thanks again, Ramsey!

  • Derek,

    Good to hear you got the example working on an LCDK. Yes, I think the ARM gel file is key. It would be good if you can use just one gel file on your custom board. In the end, you will need to integrate most of the gel file into your ARM executable. When you get ready to deploy your application, you will probably boot from flash on ARM first. Since there is no CCS in a deployed system, the ARM executable will be responsible for doing that work. Then you will have to load and run the DSP from the ARM.

    Re: Log output

    In my example, I tried to show two different ways to raise log events. On the DSP I use Log_write and on the ARM I use System_printf. In your application, I would use Log_write on both processors; it is much more efficient than System_printf.

    When using the Log module to raise events, they are sent to a "logger". You can configure different loggers. In the example, I'm using xdc.runtime.LoggerBuf on the DSP. After you run the example, pause the DSP and open ROV. Look for a module called LoggerBuf. Click on the Records tab. Select the logger instance handle (it looks like an address); just select it (don't expand it) and you get all the log events in the table on the right.

    When using System_printf, the routing of the output depends on which system proxy you are using. If you use xdc.runtime.SysStd, then the output does go to the CIO buffer, which CCS monitors. But this approach is very intrusive to your application; CCS actually halts the processor while it reads the CIO buffer and renders the output in a console window. In the example, I'm using xdc.runtime.SysMin, which sends the output to a buffer in memory. CCS does not monitor this buffer. To view the contents, halt the ARM and use ROV again. Look for a module called SysMin. Click the OutputBuffer tab and you will see the text.
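
    For reference, a minimal sketch of raising a log event on the DSP (assumes the default logger is a LoggerBuf instance, as in the example configuration):

    #include <xdc/std.h>
    #include <xdc/runtime/Log.h>

    Void reportCount(UInt32 count)
    {
        /* Recorded in the LoggerBuf circular buffer; view it in ROV > LoggerBuf > Records */
        Log_info1("processed %d messages", count);
    }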

    ~Ramsey

  • Hi Ramsey, thanks for the reply.  I understand now how the Log events work.

    This may be the wrong thread to discuss this, but I'm having a problem with, I believe, my SDRAM configuration.  That's the only thing that I think can explain the behavior I'm seeing.  Consider these scenarios:

    - Your example application runs fine on the LCDK. (I cannot get the example application running on my custom board, but I'll explain why I think that is.)

    - My application runs fine on my custom board with all cache settings turned off.

    - My application runs fine on the LCDK with all cache settings turned on (i.e. MAR bits appropriately set and L1P/L1D/L2 cache sizes set to 32k).

    - My application runs fine on the LCDK with all cache settings turned off (i.e. MAR bits all 0 and L1P/L1D/L2 cache sizes set to 0k).

    - My application will not run on my custom board with cache settings turned on.

    - My application will not run on my custom board with all the DSP code in DDR (my ARM code is always in DDR).

    I load everything the same way in each scenario; however, when I start both cores, the ARM ends up in a bad memory location doing who knows what.  The DSP spins in Ipc_start() (obviously...).

    The ONLY difference between running my application (and for that matter, your application) on my board vs on the LCDK is in the DDR_config() function in the GEL file.  

    Certainly, the SDRAM configuration registers are correct for the LCDK.  I've double- and triple-checked my SDRAM configuration registers (by hand, comparing them to the datasheet, and using the mDDR/DDR2 spreadsheet provided by TI), and I'm almost certain they're correct and what I expect.  I've ensured that they are set properly by reading them back from the chip.

    We're using an mDDR part (Winbond W947D6HBHX5E). It has 128 Mb, 4 banks, 16-bit addressing, speed grade 5.  The datasheet is here: 

    I believe my SDRAM configuration is correct because DSP-only applications run fine completely out of DDR on our custom board, and ARM-only applications run fine completely out of DDR on our custom board.  But when both the ARM and DSP are run together out of DDR, one of the cores crashes (usually the ARM).  I have verified with the generated map files that there is absolutely no overlapping memory (except the SharedRegion).

    Are there any restrictions on the type of SDRAM which may be used with the OMAP-L138?

    Why would turning on caching prevent things from working only on my board vs the LCDK when the ONLY difference is the SDRAM configuration?

    I'm worried that our selection of an mDDR chip is a problem.  We're in the process of a new board design, and getting an appropriate mDDR chip is crucial at this stage before we spin things.

    Thanks in advance for your help!