Hello!
I'm trying to catch a Transfer Completion Interrupt from EDMA on the DSP side of OMAP-L138.
I'm using channel 10 for my DMA transfer, and in its OPT field TCINTEN is set to 1 (interrupt only on final completion). Setting the TCC field is trickier for me. How should I fill TCC and Scheduling->HWI in Code Composer to attach my function to the interrupt generated by EDMA? Also, should IER be set to (1 << 10) (the channel number which generates the interrupt), or to (1 << TCC)?
This topic seems similar but it confused me a little bit: http://e2e.ti.com/support/dsp/davinci_digital_media_processors/f/99/p/6711/25716.aspx#25716
Regards
Szymon
Hi Szymon
Are you making use of the EDMA3 LLD examples for creating your DMA transfer set up?
If you are creating your own example from scratch, here are some things to keep in mind.
In general, it is simplest to have the TCC value equal the DMA channel number, so if you are using DMA channel #10 you can set OPT.TCC to 10 as well. To have the EDMA3 CC generate transfer completion interrupts to the ARM or DSP interrupt controller, you will need to enable:
1) The EDMA3CC.IER bit corresponding to your TCC (via a write to EDMA3CC.IESR).
2) The DMA Region Access Enable Register bit corresponding to the TCC value you chose, for the right shadow region. On OMAP-L138, if you need to send the EDMA transfer completion interrupt to the ARM interrupt controller you would write to the DRAE0 register (region 0), and if you need to send it to the DSP interrupt controller you would write to the DRAE1 register (region 1), enabling the bit corresponding to the TCC value. (This is illustrated in Figure 12 of the EDMA3 user guide.)
3) In addition to the above, you will need to enable the interrupts appropriately in the DSP or ARM interrupt controller. Note that there is also an IER register in the DSP interrupt controller (I guess it could be confusing to have the same name for two different registers, and I hope that is not what was confusing you in the other post you referred to).
In general the above steps (and other initialization steps) are also summarized in Appendix B of the user guide; a minimal register-level sketch of steps 1) and 2) follows below.
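To make steps 1) and 2) concrete, a sketch might look like the following (addresses are those of EDMA3CC0 on OMAP-L138, a TCC below 32 is assumed, and the DSP is assumed to own shadow region 1; treat it as an illustration rather than production code):

#define EDMA3CC0_BASE  0x01C00000u
#define EDMA3CC_DRAE1  (*(volatile unsigned int *)(EDMA3CC0_BASE + 0x0348u)) /* DMA Region Access Enable, region 1 (DSP) */
#define EDMA3CC_IESR   (*(volatile unsigned int *)(EDMA3CC0_BASE + 0x1060u)) /* Interrupt Enable Set Register (global)   */

void edma3_enable_tcc_interrupt(unsigned int tcc)
{
    /* Step 2: let shadow region 1 (DSP) own this TCC. */
    EDMA3CC_DRAE1 |= (1u << tcc);
    /* Step 1: writing a 1 to a bit in IESR sets the same bit in IER.
       (For TCC values above 31 the corresponding ...H registers are used instead.) */
    EDMA3CC_IESR = (1u << tcc);
}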
Let us know if you still get stuck with making your setup work for you.
Regards
Mukul
I'm still having some doubts about TCINTs. Here's the whole situation:
EDMA channel 1 is triggered upon TXEVT from McASP. It does some preliminary data transfer and manipulation and then triggers channel 10, which is chained onto it (so the TCC field in channel 1 is set to 10 to allow chaining). When channel 10 completes its transfer it needs to raise an interrupt to the DSP. From what I understand I can't use TCC 10 in channel 10 since it's already used in channel 1's TCC field... Let's say I pick an arbitrary TCC like 8 for channel 10. Correct me if I'm wrong, but here is what I set:
1) 8th bit of EDMA3CC.IER using EDMA3.IESR
2) 8th bit of DRAE1 for interrupt on DSP
3) Now... how do I attach my code to the interrupt in Code Composer? What do I have to configure in the TCF file to do it? From http://wiki.davincidsp.com/index.php/Setting_up_interrupts_in_DSP_BIOS I know what has to be set (C64_enableIER), but what I don't understand is the mapping between the TCC value of channel 10's transfer completion interrupt and a HWI_INT.
Regards
Szymon
Hi Szymon
I think you are very close to getting this setup working, and you are looking at the right wiki references also.
So just focusing on the way you are currently trying to set up the transfer completion interrupt, it is OK to use a TCC value of 8. The EDMA3 channel controller has 64 DMA channels, and you are allowed to pick any TCC value between 0-63. Depending on the TCC value you picked, if you set OPT.TCINTEN (or ITCINTEN) to 1 it will set the IPR bit corresponding to the TCC value (for transfer completion indication), and/or if you set OPT.TCCHEN (or ITCCHEN) it will set the CER bit corresponding to the TCC value (for triggering a chained channel).
Taking your example settings, assume IPR[8] is set, IER[8] is set (enabled via IESR[8]) AND DRAE1[8] is set; this will result in the EDMA3_x_CC0_INT1 interrupt being sent to the DSP interrupt controller. OMAP-L138 has 2 EDMA3 modules, so x will depend on which channel controller you are using (EDMA3_0 in your case, as you are using a McASP event).
Now this interrupt needs to be mapped to the DSP interrupt controller, using the examples/illustrations shown in the wiki article http://wiki.davincidsp.com/index.php/Setting_up_interrupts_in_DSP_BIOS
I have snapshotted the Evt# for EDMA3_0_CC0_INT, so instead of the #42 shown in the wiki article your interrupt selection value will be 8; then appropriately associate your ISR function in the tcf file. This should set you up. You can use HWI_INT5 itself, as shown in the wiki article.
Things to note, and possibly the source of confusion:
1) There is no direct correlation between the TCC value used and the HWI_INTx. There is a single transfer completion interrupt from the CC module, and even if you had multiple channels with different TCC values reporting transfer completion, this is the single interrupt line going to the DSP INTC. This means that if you have more channels/TCCs trying to interrupt the DSP on transfer completion, you would need to hook up a smarter ISR that scans through all the IPR bits that are set and dispatches accordingly (more like a handler; recommendations on that are in the EDMA3 user guide, and a sketch follows after this list).
2) There are 128 peripheral events (EDMA3_0_CC0_INT1 being one of them) that can be mapped to the 12 HWIs in the DSP interrupt controller (since that is a large number of events/interrupts getting mapped to a handful of HWIs, there is also the concept of an event combiner in the DSP INTC). So the C64_enableIER function will enable one of the 12 HWIs (whichever you choose to map your EDMA transfer completion interrupt to in the tcf file).
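As a rough illustration of such a handler, a sketch is below (it uses the global completion registers of EDMA3CC0, assumes only TCCs 0-31 are in use, and calls a hypothetical per-TCC work function handle_tcc that your application would supply):

#define EDMA3CC0_BASE  0x01C00000u
#define EDMA3CC_IPR    (*(volatile unsigned int *)(EDMA3CC0_BASE + 0x1068u)) /* Interrupt Pending Register */
#define EDMA3CC_ICR    (*(volatile unsigned int *)(EDMA3CC0_BASE + 0x1070u)) /* Interrupt Clear Register   */

extern void handle_tcc(unsigned int tcc);  /* hypothetical application hook */

void edma3_completion_isr(void)
{
    unsigned int pending;
    /* Re-read IPR until nothing is pending, so a completion that arrives
       while we are already in the ISR is not missed. */
    while ((pending = EDMA3CC_IPR) != 0u)
    {
        unsigned int tcc;
        for (tcc = 0u; tcc < 32u; tcc++)
        {
            if (pending & (1u << tcc))
            {
                EDMA3CC_ICR = (1u << tcc);  /* clear the bit before handling it */
                handle_tcc(tcc);
            }
        }
    }
}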
Hope this helps. Let us know if you are still stuck.
Regards
Mukul
Hi
I wanted to address this separately to not clutter the previous response
Szymon Kuklinski said:When channel 10 completes its transfer it needs to raise an interrupt to the DSP. From what I understand I can't use TCC 10 in channel 10 since it's already used in channel 1's TCC field... Let's say I pick an arbitrary TCC like 8 for channel 10. Correct me if I'm wrong, but here is what I set:
Technically speaking you can have the same TCC assigned to multiple channels if that makes sense for your application (typically it doesn't). However, in this use case, if channel 1 is only chaining to channel 10 (you don't want channel 1 to generate a transfer completion interrupt, i.e. CH1 PARAM OPT.TCCHEN=1 but OPT.TCINTEN=0) BUT you want channel 10 to generate a transfer completion interrupt (CH10 PARAM OPT.TCINTEN=1), then you can use the same TCC, i.e. CH10 PARAM OPT.TCC=10. In this case you will still get an interrupt only after channel 10 has completed; the TCC value of 10 in channel 1 will only result in a chained event that triggers channel 10.
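To make the bit settings concrete, a sketch of the two OPT words for this use case could look as follows (TCC occupies bits 17:12, TCINTEN is bit 20 and TCCHEN is bit 22; this is an illustration, not a drop-in replacement for your setup code):

/* Channel 1 (McASP TXEVT): chain to channel 10, no completion interrupt. */
unsigned int opt_ch1  = (1u << 22)     /* TCCHEN = 1: chain on final transfer completion */
                      | (10u << 12);   /* TCC = 10: number of the chained channel        */

/* Channel 10: raise the transfer completion interrupt to the DSP. */
unsigned int opt_ch10 = (1u << 20)     /* TCINTEN = 1: interrupt on final completion */
                      | (10u << 12);   /* TCC = 10: sets IPR[10] on completion       */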
Hope this makes sense? Let me know if more explanation is needed.
Regards
Mukul
Also, assuming that your EDMA setup is correct, and you are just stuck at the HWI configuration etc, you might find the following ECAP_EPWM example on OMAPL137 handy.
http://wiki.davincidsp.com/index.php/C6747_eCAP_to_EHRPWM_example
This shows how the ECAP2 event/interrupt (#51) is mapped to HWI_INT7, and the additional setup required to enable IER etc. (for the DSP INTC) and to set up a user ISR (in this case for ECAP).
You should be able to use this as reference to setup your EDMA transfer completion interrupt to DSP interrupt controller.
Regards
Mukul
Hello!
Thank you for your help. I really appreciate it.
One last thing about the registers... Should I write to IESR[8] in the "Global Channel Registers" (address 0x01C0 1060 for OMAP-L138) or to the "Shadow Region 1 Channel Registers" (0x01C0 2260)?
Anyway, by setting both to 8, all others as you advised, and attaching my function to HWI_INT5 with interrupt selection number 8 etc., I got a situation in which, right after McASP triggers DMA, my program hangs around line 5853 of file edma3resmgr.c. It's a while loop that starts like this: "while (pendingIrqs)" - the EDMA LLD version is edma3_lld_01_10_00_01.
At first I thought it was because my ISR did not comply with what is written in chapter 2.9.2 of the EDMA documentation, but I quickly concluded that my ISR doesn't even run once. My program is a heavily modified and simplified version of the example given with edma3_lld_01_10_00_01.
Regards
Szymon
Szymon Kuklinski said:One last thing about the registers... Should I write to IESR[8] in "Global Channel Registers" (address 0x01C0 1060 for OMAP-L138) or to "Shadow Region 1 Channel Registers" (0x01C0 2260)?
If you wrote to IESR in the global channel registers, that should be sufficient.
If you want to manipulate the IER in the shadow region, you can write to the Shadow Region 1 Channel Registers; for that you need those bits to be enabled via the DRAE1 bits (as illustrated in Figure 11 of the EDMA user guide). Enabling both is not required. Typically shadow regions come in handy in heterogeneous processors like OMAP-L138, where you want to prevent the ARM and DSP from using the same resources in terms of channels and registers. Once you enable DRAE0 and DRAE1 appropriately (and mutually exclusively), you can have the ARM CPU manipulate the shadow region 0 register space and the DSP CPU manipulate the shadow region 1 registers (avoiding global altogether, to prevent any conflict on simultaneous accesses from the ARM and DSP to the same register, which could happen if you were using the global channel registers).
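For instance, the DSP-side enable from your earlier code could equally go through the shadow region 1 alias you quoted (a sketch only; 0x01C02260 is the shadow region 1 IESR address from your post, and the write only takes effect for TCC bits already enabled in DRAE1):

#define EDMA3CC0_SH1_IESR  (*(volatile unsigned int *)0x01C02260u) /* IESR, shadow region 1 (DSP) */

static void dsp_enable_tcc8_interrupt(void)
{
    /* Same effect as the global IESR write, but routed through the DSP's
       shadow region so the ARM can keep to region 0. */
    EDMA3CC0_SH1_IESR = (1u << 8);
}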
Szymon Kuklinski said:Anyway, by setting both to 8, all others as you advised, and attaching my function to HWI_INT5 with interrupt selection number 8 etc., I got a situation in which right after McASP triggers DMA my program hangs around line 5853 of file edma3resmgr.c. It's a while loop that starts like this: "while (pendingIrqs)" - EDMA LLD version is edma3_lld_01_10_00_01.
Hmm... so you are using EDMA3 LLD and the resource manager. I might need to get the software team to look at this more carefully to help you further. In general, with EDMA3 LLD you might not by default have control over TCC allocation, and it might allocate any "free" TCC available. Are you sure that by brute-forcing TCC=8 the initialization is still being done correctly under the EDMA3 LLD guidelines, so that it is really assigning TCC=8 and not an "additional" TCC? That would cause the hang in the loop, if it is polling for some other pending IRQ (an IPR[TCC] bit other than 8).
Also, do you think the transfers associated with your McASP and chained channel are happening correctly, and you are just not getting a completion interrupt? If the transfers did not happen, then there might be some more issues with your EDMA setup/programming.
If possible, you can also post the PARAM contents for your McASP channel and the chained channel before you initiate the transfer, I can verify that it is programmed as you expect it to be from the description provided above.
Regards
Mukul
Hi
I followed some hints you mentioned and still the program freezes in the mentioned while loop. Here's my EDMA config code...:
unsigned int chIdTX0 = EDMA3_DRV_HW_CHANNEL_EVENT_1;
unsigned int chIdTX1 = EDMA3_DRV_DMA_CHANNEL_ANY;
unsigned int tccTX0 = EDMA3_DRV_TCC_ANY;
unsigned int tccTX1 = EDMA3_DRV_TCC_ANY;
unsigned int tccTXLink0 = EDMA3_DRV_TCC_ANY;
unsigned int tccTXLink1 = EDMA3_DRV_TCC_ANY;
unsigned int chIdTXLink0 = EDMA3_DRV_LINK_CHANNEL;
unsigned int chIdTXLink1 = EDMA3_DRV_LINK_CHANNEL;
EDMA3_DRV_PaRAMRegs paramSetTX0 = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
EDMA3_DRV_PaRAMRegs paramSetTX1 = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
EDMA3_DRV_Result result = EDMA3_DRV_SOK;
//-------------------TX1--------------------
result = EDMA3_DRV_requestChannel(hEdma[0], &chIdTXLink1, &tccTXLink1, (EDMA3_RM_EventQueue)0, NULL, NULL);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_requestChannel(hEdma[0], &chIdTX1, &tccTX1, (EDMA3_RM_EventQueue)0, NULL, NULL);
if (result != EDMA3_DRV_SOK) {
return result;
}
printf("CHAN: %d\n", chIdTX1);
//returns 10
printf("LINK: %d\n", chIdTXLink1);
//returns 64
printf("TCC TX1: %d\n", tccTX1);
//returns 10
paramSetTX1.srcAddr = (unsigned int) txTmp;
paramSetTX1.destAddr = (unsigned int) 0x01D02000;
paramSetTX1.srcBIdx = 0;
paramSetTX1.destBIdx = 0;
paramSetTX1.srcCIdx = 0;
paramSetTX1.destCIdx = 0;
paramSetTX1.aCnt = 32;
paramSetTX1.bCnt = 5;
paramSetTX1.cCnt = 1;
paramSetTX1.bCntReload = 0;
paramSetTX1.linkAddr = 0x4400;
paramSetTX1.opt = (1 << 20) | (tccTX1 << 12);
//INT on completion, TCC is tccTX1
result = EDMA3_DRV_setPaRAM(hEdma[0], chIdTXLink1, &paramSetTX1);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_setPaRAM(hEdma[0], chIdTX1, &paramSetTX1);
if (result != EDMA3_DRV_SOK) {
return result;
}
*((unsigned int *) 0x01c01060) = (1 << tccTX1);  //IESR
*((unsigned int *) 0x01c00348) |= (1 << tccTX1); //DRAE1
//-------------------TX0-------------------
result = EDMA3_DRV_requestChannel(hEdma[0], &chIdTXLink0, &tccTXLink0, (EDMA3_RM_EventQueue)0, NULL, NULL);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_requestChannel(hEdma[0], &chIdTX0, &tccTX0, (EDMA3_RM_EventQueue)0, NULL, NULL);
if (result != EDMA3_DRV_SOK) {
return result;
}
printf("MAIN: %d\n", chIdTX0);
printf("LINK: %d\n", chIdTXLink0);
paramSetTX0.srcAddr = (unsigned int) txAddr;
paramSetTX0.destAddr = (unsigned int) txTmp;
paramSetTX0.srcBIdx = 5;
paramSetTX0.destBIdx = 4;
paramSetTX0.srcCIdx = 1;
paramSetTX0.destCIdx = 0;
paramSetTX0.aCnt = 1;
paramSetTX0.bCnt = 8;
paramSetTX0.cCnt = 5;
paramSetTX0.bCntReload = 0;
paramSetTX0.linkAddr = 0x4420;
paramSetTX0.opt = (1 << 23) | (1 << 22) | (chIdTX1 << 12) | (1 << 2);
//chain chIdTX1 every intermediate and final transfer, AB synchronized
result = EDMA3_DRV_setPaRAM(hEdma[0], chIdTXLink0, &paramSetTX0);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_setPaRAM(hEdma[0], chIdTX0, &paramSetTX0);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_enableTransfer(hEdma[0], chIdTX0, EDMA3_DRV_TRIG_MODE_EVENT);
I checked the registers: DRAE1 is 0x400, IER is 0x400 and IPR is 0x400. As I mentioned, the code freezes and my ISR does not execute. If I comment out the lines that set DRAE1 and IER it works just fine, but naturally I don't get any interrupts either.
Also, in my TCF file I have:
bios.HWI.instance("HWI_INT5").interruptSelectNumber = 8;
bios.HWI.instance("HWI_INT5").useDispatcher = 1;
bios.HWI.instance("HWI_INT5").fxn = prog.extern("pcm_isr");
bios.HWI.instance("HWI_INT5").monitor = "Nothing";
And at the beginning of main(): C64_enableIER(C64_EINT5);
Hi Szymon
I am still hoping that the EDMA3 LLD team might pitch in on this thread to provide more pointers on what might be going wrong with your setup, and on possible caveats of mixing and matching the code the way you have it (if any).
In general, I was told that EDMA3 LLD comes with multiple examples and most of them use the EDMA3 transfer completion interrupt mechanism to inform the user about the data transfer. The interrupt registration part is transparent to the user and embedded in the EDMA3 driver sample init library; the user only provides the callback function in the application while requesting the channel.
You might need to fix that in your EDMA3_DRV_requestChannel calls.
In addition to that, I am a bit confused by your A/B/C CNT and index programming (it is a don't-care if you are seeing the desired transfers happen and are just not getting the completion interrupt). For example, assuming paramSetTX1 is for the McASP transfers, you have ACNT = 32 (bytes); even if you are using multiple serializers, I would recommend using ACNT = 1/2/4 (bytes, depending on 8/16/32-bit data) and BCNT = number of serializers. Similarly, I didn't quite understand the indexing for paramSetTX0, but I guess if the data in the txTmp buffer looks OK and matches your expectation then you should be fine.
Regards
Mukul
Please see if the following document helps
http://wiki.davincidsp.com/images/5/5e/EDMA3_LLD.pdf
Page 31 onwards has some checklists that you might find handy.
Regards
Mukul
Hi!
Again, thank you for the time you've spent helping me with my problem.
I'm aware of the fact that I can hook up a callback function so that it runs as the ISR. If I do it that way my ISR does get executed, but the program still gets stuck in the mentioned while loop, not allowing me to proceed with my code.
Maybe I should explain what I'm trying to achieve; that way you'll better understand why I'm doing some things in a way that may seem overcomplicated and why I'm not taking full advantage of the API delivered by EDMA LLD.
In my project I will use both the ARM and the DSP. The ARM, with Linux running on it, will be responsible for some PBX functions, whereas the DSP will perform what it's best suited for - signal processing of audio data. I will use DSPLink to interface between the two and exchange data (RINGIO) and control messages (MSGQ). Now, the tricky part is that it would be most convenient for DMA and McASP to be configured from the DSP - only it will have the addresses of the arrays in which we place the processed samples to be shifted out of the OMAP as PCM, and the DSP will also monitor whether this shifting gets interrupted and in the worst case restart both DMA and McASP. Of course what concerned me was that both Linux and DSP/BIOS would be working on the EDMA registers at the same time. I tried some scenarios of how it might work for me:
1) The ARM side only wakes up McASP and does the appropriate PINMUX (I've read on this forum and noticed myself that if a DSP program loaded by DSPLink does something with the KICK registers it freezes or does something unexpected). The DSP side initializes both EDMA and McASP. On the oscilloscope I noticed that McASP started working and began shifting out some data (other than I expected, though), but after I do anything simple in Linux, like cat-ing a file, McASP indicates XUNDRN status. It may be because Linux messed something up with EDMA; at least XUNDRN indicates that a correct transfer did not occur.
The reason I'm reluctant to use API-specific functions and am trying to do things as close to the registers as possible is that I'm also considering these two options:
2) The ARM side takes care of everything (PSC, PINMUX, EDMA - by porting my LLD code to a Linux driver - and McASP). This way everything works well up to the point where I try to load even the simplest DSP program (like sending one NOTIFY) through DSPLink. Again, right away McASP indicates XUNDRN and I get zeroes out of the serializers instead of the correct output seen before.
3) The ARM side sets the PSC, PINMUX and EDMA regs (including setting the appropriate registers so my ISR executes on the DSP - this way only one side touches EDMA), the DSP configures only McASP, and in the TCF file of the program loaded by DSPLink I add the lines mapping HWI_INTx to interrupt 8 as you explained in the previous posts. Now, this way works well, but as you can see, since in this case I don't use EDMA LLD, I must understand everything register-wise to make it work. I started writing my code as API-independent as possible from the beginning, for fear of ending up in a situation in which I was forced to port my EDMA code to Linux. The reason I was using EDMA LLD was that Code Composer gives more debugging capabilities than I would have writing a kernel module in Linux; I was just trying to understand the idea behind it and then port my code to Linux.
Or maybe there is a better way I could achieve what I'm looking for? I would appreciate your opinion on that matter, seeing as you are far more experienced with EDMA than I am.
Regards.
Szymon
Hi Szymon
Thanks for taking the time to explain your overall software structure and components. I am still seeking help from the software team to try and get them to address this post and provide more debug tips/tricks for your original setup as explained in #1.
My initial reaction is that I would recommend you continue down the path of trying to resolve the issues with your approach #1. I am not the expert on EDMA3 LLD, but I would still be wary of the mods you are making in your approach #3, and I can see more pitfalls there; plus, the further you move away from the software components provided by TI, the harder it would be for us to debug and resolve the issues.
In searching through the forum, I found at least one forum post that sounded similar to your issues.
If you already found this, I hope it helps resolve the issues related to the KICK registers and audio driver conflicts etc.
Additionally, it would be good if you could also list the version numbers for the various software components in use (especially the EDMA3 LLD, DSP/BIOS PSP driver and DSPLink versions).
If the above forum post doesn't help and you don't think the issues in #1 and #2 are due to some resource management conflict (in terms of memory buffers, driver usage, EDMA3 usage etc.), then maybe we can also focus on "system performance" issues like 1) trying to place the McASP buffers in internal memory (if they are in external memory), and 2) reducing the McASP clock rate (what is the clock rate in your setup?).
Finally, the last option, in case you are not able to make progress on approach #1, would be to investigate the possibility of you sharing with us the DSP side code that you are trying to develop in approach #3, so that we can try to figure out the issues with your mods. This should really be the last option, as debugging code snippets that have deviated from the originally provided software components is always harder and could have a bigger turnaround time.
Hope this helps some.
Regards
Mukul
Hi
As you suggested, I did some research on #1. It turns out that when I force my DSP program to allocate the buffers used by DMA in L2 memory instead of external RAM, it works just fine, and doing complex stuff in Linux doesn't kill the DSP application (if anyone is interested, the procedure for placing buffers in different types of memory is described in focus.ti.com/lit/an/spraan4a/spraan4a.pdf).
The above may have been caused by the fact that I did not call Edma3_CacheFlush on the buffers in external RAM, as is suggested. Anyway, placing the buffers in L2 was on my to-do list, so I'm one step closer to the finish.
Now I want to hook up my ISR in my DSP code in which I'm using EDMA LLD. Let's consider this code:
void pcm_isr (unsigned int tcc, EDMA3_RM_TccStatus status, void *appData) {
(void)tcc;
(void)appData;
*((unsigned int *) 0x11800000) = 0x66;
switch (status)
{
case EDMA3_RM_XFER_COMPLETE:
break;
case EDMA3_RM_E_CC_DMA_EVT_MISS:
break;
case EDMA3_RM_E_CC_QDMA_EVT_MISS:
break;
default:
break;
}
}
//-------------------TX1-------------------
result = EDMA3_DRV_requestChannel(hEdma[0], &chIdTXLink1, &tccTXLink1, (EDMA3_RM_EventQueue)0, NULL , NULL);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_requestChannel(hEdma[0], &chIdTX1, &tccTX1, (EDMA3_RM_EventQueue)0, &pcm_isr, NULL);
if (result != EDMA3_DRV_SOK) {
return result;
}
paramSetTX1.srcAddr = (unsigned int) txTmp;
paramSetTX1.destAddr = (unsigned int) 0x01D02000;
paramSetTX1.srcBIdx = 0;
paramSetTX1.destBIdx = 0;
paramSetTX1.srcCIdx = 0;
paramSetTX1.destCIdx = 0;
paramSetTX1.aCnt = 32;
paramSetTX1.bCnt = 5;
paramSetTX1.cCnt = 1;
paramSetTX1.bCntReload = 0;
paramSetTX1.linkAddr = 0x4400;
paramSetTX1.opt = (1 << 20) | (tccTX1 << 12);
result = EDMA3_DRV_setPaRAM(hEdma[0], chIdTXLink1, &paramSetTX1);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_setPaRAM(hEdma[0], chIdTX1, &paramSetTX1);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_requestChannel(hEdma[0], &chIdTXLink0, &tccTXLink0, (EDMA3_RM_EventQueue)0, NULL, NULL);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_requestChannel(hEdma[0], &chIdTX0, &tccTX0, (EDMA3_RM_EventQueue)0, NULL, NULL);
if (result != EDMA3_DRV_SOK) {
return result;
}
paramSetTX0.srcAddr = (unsigned int) txAddr;
paramSetTX0.destAddr = (unsigned int) txTmp;
paramSetTX0.srcBIdx = 5;
paramSetTX0.destBIdx = 4;
paramSetTX0.srcCIdx = 1;
paramSetTX0.destCIdx = 0;
paramSetTX0.aCnt = 1;
paramSetTX0.bCnt = 8;
paramSetTX0.cCnt = 5;
paramSetTX0.bCntReload = 0;
paramSetTX0.linkAddr = 0x4420;
paramSetTX0.opt = (1 << 23) | (1 << 22) | (chIdTX1 << 12) | (1 << 2);
result = EDMA3_DRV_setPaRAM(hEdma[0], chIdTXLink0, &paramSetTX0);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_setPaRAM(hEdma[0], chIdTX0, &paramSetTX0);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_enableTransfer(hEdma[0], chIdTX0, EDMA3_DRV_TRIG_MODE_EVENT);
Do I have to configure anything else, in the TCF file for example, or does EDMA LLD take care of everything and execute the callback function on TCINT? I'm asking because my callback function does not get executed...
EDMA LLD version is 01.10.00.01
DSPLink 1.63
DSP/BIOS 5.41.00.06
xdctools 3.15.01.59
cgtools 6.1.9
I haven't yet carefully reviewed your transfer setup, but when you say your callback function does not get executed, should I assume that your EDMA transfers are happening correctly and the only thing missing now is the transfer completion interrupt?
I can't see anything visibly wrong in your setup above, but I need to confer with the EDMA3 LLD authors.
If you are now using EDMA3 LLD as-is, I think you shouldn't have to do anything additional in the tcf file, as the driver takes care of registering your callback function with edma3ComplHandler0. I think the driver hooks the EDMA transfer completion interrupt to the ECM dispatcher, not to a HWI (via the registerEdma3Interrupts function).
Note: I realized that when in earlier posts I gave you examples of mapping an interrupt to a HWI directly, there could still have been issues, as you later mentioned that you were using EDMA3 LLD. In that case you would still need the callback function, but the function mapped in the HWI_INTx properties should have been _lisrEdma3ComplHandler0. You would probably also have needed to ensure that the EDMA3 LLD driver was not additionally mapping it to the ECM dispatcher.
Hopefully you are close to getting this resolved.
BTW, thanks for providing the versions for the various software components.
Regards
Mukul
Hi
I have my McASP output connected to the oscilloscope, so I can easily verify whether the transfers are happening correctly. Also, just by examining the XSTAT register of McASP I'm able to verify if it's getting the data from DMA. I followed the instructions on pages 31-32 of http://wiki.davincidsp.com/images/5/5e/EDMA3_LLD.pdf and ended up with something like this in the TCF:
bios.MEM.HWISEG = prog.get("DDR");
bios.MEM.HWIVECSEG = prog.get("DDR");
bios.HWI.instance("HWI_INT5").interruptSelectNumber = 8;
bios.HWI.instance("HWI_INT5").useDispatcher = 1;
bios.HWI.instance("HWI_INT5").fxn = prog.extern("lisrEdma3ComplHandler0");
bios.HWI.instance("HWI_INT5").monitor = "Nothing";
But again, hooking the callback function like that:
result = EDMA3_DRV_requestChannel(hEdma[0], &chIdTX1, &tccTX1, (EDMA3_RM_EventQueue)0, &pcm_isr, NULL);
doesn't help...
I double-checked the register values and concluded that DRAE1 is 0x00000C03, IER (for shadow region 1) is 0x00000400, and after McASP starts working, IPR (for shadow region 1) gets set to 0x00000400. I get no interrupt though...
Regards.
Szymon
Hi Szymon
I have a few pointers.
Szymon Kuklinski said:I'm still having some doubts about TCINTs. Here's the whole situation:
EDMA channel 1 is triggered upon TXEVT from McASP. It does some preliminary data transfer and manipulation and then triggers channel 10, which is chained onto it (so the TCC field in channel 1 is set to 10 to allow chaining). When channel 10 completes its transfer it needs to raise an interrupt to the DSP. From what I understand I can't use TCC 10 in channel 10 since it's already used in channel 1's TCC field... Let's say I pick an arbitrary TCC like 8 for channel 10. Correct me if I'm wrong, but here is what I set:
The '10' which you use in the TCC field of channel 1 is DMA channel 10 and not TCC 10. Both of them are independent entities.
So by using 10 in the TCC field of channel 1 you are triggering DMA channel 10 and not TCC 10 (TCC 10 is still free and you can use it in the TCC field of channel 10).
Szymon Kuklinski said:Also In my TCF file I have:
bios.HWI.instance("HWI_INT5").interruptSelectNumber = 8;
bios.HWI.instance("HWI_INT5").useDispatcher = 1;
bios.HWI.instance("HWI_INT5").fxn = prog.extern("pcm_isr");
bios.HWI.instance("HWI_INT5").monitor = "Nothing";And at the beggining of main(): C64_enableIER(C64_EINT5);
I think this might be the reason for your problem.
I do not know why you have chosen HWI_INT5 and interrupt selection number 8, but you cannot provide your own HWI_INT and interrupt selection number if you are using the EDMA3 LLD sample libraries and not your own code to initialize the EDMA.
These sample libraries are built with the information about the HWI_INT and interrupt selection number already, and you will get proper interrupts only if you pass the same information through your tcf file. So please use the tcf file present in the EDMA3 LLD package. [It is in the <EDMA3LLD Install Dir>\examples\edma3_driver\evmOMAPL138\ccs3 folder]
And also, to confirm that you are getting interrupts from the EDMA3 module and that the problem lies only in how it is mapped to the DSP, you can poll the IPR [offset: 1068h] register, which will tell you whether an interrupt has been raised by the EDMA3 CC.
If the above-mentioned changes are made, I think you should be able to see at least one interrupt raised.
But I am not very sure about the complete working of your code, because of the way you have done the linking between the DMA channel and the LINK channel.
Szymon Kuklinski said:void pcm_isr (unsigned int tcc, EDMA3_RM_TccStatus status, void *appData) {
(void)tcc;
(void)appData;
*((unsigned int *) 0x11800000) = 0x66;
switch (status)
{
case EDMA3_RM_XFER_COMPLETE:
break;
case EDMA3_RM_E_CC_DMA_EVT_MISS:
break;
case EDMA3_RM_E_CC_QDMA_EVT_MISS:
break;
default:
break;
}
}................
paramSetTX1.bCntReload = 0;
paramSetTX1.linkAddr = 0x4400;
paramSetTX1.opt = (1 << 20) | (tccTX1 << 12);
result = EDMA3_DRV_setPaRAM(hEdma[0], chIdTXLink1, &paramSetTX1);
if (result != EDMA3_DRV_SOK) {
return result;
}
..................
paramSetTX0.bCntReload = 0;
paramSetTX0.linkAddr = 0x4420;
paramSetTX0.opt = (1 << 23) | (1 << 22) | (chIdTX1 << 12) | (1 << 2);
result = EDMA3_DRV_setPaRAM(hEdma[0], chIdTXLink0, &paramSetTX0);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_setPaRAM(hEdma[0], chIdTX0, &paramSetTX0);
if (result != EDMA3_DRV_SOK) {
return result;
}
result = EDMA3_DRV_enableTransfer(hEdma[0], chIdTX0, EDMA3_DRV_TRIG_MODE_EVENT);
I suggest you use the EDMA3_DRV_linkChannel() API to do the linking, because it is not enough to simply write the address of the PaRAM set (of the link channel) into the linkAddr field to ensure proper linking.
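For example, instead of hard-coding linkAddr = 0x4400, the link could be set up through the driver; a sketch using the handle and channel variables from the code you posted above:

/* Let the driver write the link address itself, so that on completion
   chIdTX1's PaRAM set is reloaded from chIdTXLink1's PaRAM set. */
result = EDMA3_DRV_linkChannel(hEdma[0], chIdTX1, chIdTXLink1);
if (result != EDMA3_DRV_SOK) {
    return result;
}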
Hope this helps.
Regards,
Sundaram
Hi
Thank you for joining the discussion and taking time to understand and help me with my problem. About your hints:
1) I understand the difference between a TCC and a DMA channel now. It's clear to me how to distinguish between the two.
2) This sounds very interesting. I wasn't aware that using the EDMA LLD sample code required me to use specific TCF options. What I need to do is combine my DSPLink and EDMA LLD TCF files to achieve what I want (a DSPLink program using EDMA LLD). Do you have any thoughts on that? Perhaps my approach isn't the best one? Maybe writing my own EDMA init code and not using the sample would be better? Do you know of any examples/projects that combined DSPLink and EDMA LLD the way I'm trying to use them?
3) I can assure you that not using EDMA3_DRV_linkChannel doesn't cause any errors in my transfer. I am aware that hard-wiring the addresses this way is not the proper way to do this, but when using EDMA3_DRV_linkChannel I encountered some unexpected problems (my PaRAM set being reloaded with a NULL one - described in http://e2e.ti.com/support/dsp/tms320c6000_high_performance_dsps/f/112/p/35641/130998.aspx#130998).
Regards.
Szymon
Szymon,
Szymon Kuklinski said:2) This sounds very interesting. I wasn't aware that using the EDMA LLD sample code required me to use specific TCF options. What I need to do is combine my DSPLink and EDMA LLD TCF files to achieve what I want (a DSPLink program using EDMA LLD). Do you have any thoughts on that? Perhaps my approach isn't the best one? Maybe writing my own EDMA init code and not using the sample would be better? Do you know of any examples/projects that combined DSPLink and EDMA LLD the way I'm trying to use them?
Before starting to think about the further implementation, can you check and confirm that interrupts are indeed raised by the EDMA3 module (though this is suggested by the fact that the corresponding bit in IPR is set) by using the tcf file from the EDMA3 example and seeing whether your ISR gets executed? That way we can doubly confirm that the problem lies only with the interrupt mapping (changes in the tcf file) and that we are not overlooking any other possible problems.
I will have to think about the "DSPLink program using EDMA LLD" you are suggesting, but as an alternative you don't have to write your own EDMA init code. If you want to use your own HWI number and interrupt selection number, you only have to change the interrupt mapping in the source code of the EDMA3 sample init library to suit yours and recompile the library, so that you can use your tcf file and get proper interrupts.
But first of all, why do you want to change the HWI number or the interrupt selection number? I think using the interrupt mapping from the EDMA example is the straightforward way to go; otherwise, the simplest way to use your own mapping, in my opinion, would be to recompile the EDMA sample init libraries.
Szymon Kuklinski said:3) I can assure you that not using EDMA3_DRV_linkChannel doesn't cause any errors in my transfer. I am aware that hard-wiring the addresses this way is not the proper way to do this, but when using EDMA3_DRV_linkChannel I encountered some unexpected problems (my PaRAM set being reloaded with a NULL one - described in http://e2e.ti.com/support/dsp/tms320c6000_high_performance_dsps/f/112/p/35641/130998.aspx#130998).
I have replied to this issue in its original thread.
Regards,
Sundaram
Hi
Thanks to your help I was able to make significant progress on my project. During the last tests of my work I encountered some odd behaviour, though.
In my application, after a whole EDMA transfer has occurred, the ISR needs to modify the addresses of the buffers from which EDMA transfers to McASP (running at 2 MHz; my buffers hold 5 8-bit PCM samples, 32 slots on each of 8 PCMs - I should get an interrupt every 625 us, i.e. 5 times the normal 125 us of a PCM frame). In the ISR I'm using the function EDMA3_DRV_setSrcParams to modify the SRC value of the PaRAM set. However, from time to time I can see on the oscilloscope that the output is invalid: the first two or three slots in my TDM output are taken from the "old" buffer, whereas the rest is taken from the beginning of the "new" buffer. I concluded that one of two things is happening:
1) Calling EDMA3_DRV_setSrcParams takes so much time in the ISR that DMA starts to shift out the "old" data and only after some time catches on to the new data.
2) The ISR does not start executing as fast as I expected and the buffer switch occurs too late.
To support my conclusion, I ran my EDMA configuration code in a simple program and it ran well. But when I use the exact same code in my more complex application, which utilizes several tasks and DSPLink components, it produces this random behaviour.
Is there a way of telling the application to treat my ISR with the highest possible priority? Am I correct in assuming that EDMA3 uses a HWI to manage my ISR?
Regards
Szymon
Szymon,
I'm not sure if you're still using HWI vector 5 for EDMA, because if you are, that would conflict with the DSPLink HWI configuration. By default, DSPLink uses HWI vectors 4 & 5 for the IPC interrupts between the ARM & DSP. So if you are using DSPLink, it would override your TCF configuration and end up plugging the DSPLink ISR into HWI vector 5. This would probably interfere with your EDMA. If you want to change DSPLink to use a different interrupt vector, this is possible.
See here for details:
http://wiki.davincidsp.com/index.php/Notify_Module_Overview#Configuration
Regards,
Mugdha
Hello
As Sundaram suggested, I'm using the following HWIs in the TCF (taken from the EDMA3 examples):
bios.HWI.instance("HWI_INT7").interruptSelectNumber = 0;
bios.HWI.instance("HWI_INT8").interruptSelectNumber = 1;
bios.HWI.instance("HWI_INT9").interruptSelectNumber = 2;
bios.HWI.instance("HWI_INT10").interruptSelectNumber = 3;
Until I merged my two projects I encountered no problems with how the DMA interrupts worked. I'm using the NOTIFY, MSGQ and RINGIO components (the RINGIO code is not run yet, but will be in the near future). Does MSGQ use the same HWIs as NOTIFY too?
Regards
Szymon
Szymon,
DSPLink only uses HWI vectors 4 & 5. So there does not appear to be a conflict. In this case, the issues you are seeing seem to be unrelated to DSPLink. MSGQ (and other modules in DSPLink) all use NOTIFY internally for their interrupt requirements.
Regards,
Mugdha
Hi
Thank you for your opinion on that.
Are you familiar with the calling convention of EDMA3_DRV_setSrcParams? In the EDMA3 documentation I did not find any restrictions on using this function in an ISR. What I'm planning to do next is to replace this function with a simple memory write to the correct register. I'm sure it's not what the EDMA3 authors had in mind, but I can't figure the problem out...
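For the record, the direct write I have in mind would look roughly like this (my own sketch, assuming the EDMA3CC0 PaRAM base of 0x01C04000 on OMAP-L138, 32-byte PaRAM sets and SRC at offset 0x04 within a set - please correct me if any of those assumptions are wrong):

#define EDMA3CC0_PARAM_BASE  0x01C04000u

/* Overwrite only the SRC word of PaRAM set 'paramNum' with the new buffer address. */
static void set_param_src(unsigned int paramNum, unsigned int newSrc)
{
    volatile unsigned int *src =
        (volatile unsigned int *)(EDMA3CC0_PARAM_BASE + (paramNum * 32u) + 0x04u);
    *src = newSrc;
}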
I also considered that a different HWI might interrupt my ISR and cause it to stall for a significant amount of time, but when I disable HWIs in the ISR and enable them once it finishes, I still get the same output.
Also, are HWIs prioritised on the OMAP-L138 DSP? From what I've read in SPRU423H it depends on the DSP. If they are, how can I increase the priority of the interrupts used by EDMA, so that my ISR won't be interrupted by HWIs from other sources (possibly DSPLink too)?
Regards
Szymon
Szymon,
I don't think the problem is because of the EDMA3_DRV_setSrcParams() API
Szymon Kuklinski said:Are you familliar with the calling convention of EDMA3_DRV_setSrcParams? In EDMA3 documentation I did not find any restrictions on using this function in an ISR. What I'm planning to do next is to replace this function with a simple memory write to the correct register. I'm sure it's not what EDMA3 authors had in mind but I can't figure the problem out...
Yes, there is no restriction on using this API in an ISR. There might not be much improvement from replacing this API with a simple memory write, as the API essentially does the same thing anyway.
Szymon Kuklinski said:However, from time to time I can see on the oscilloscope that the output is invalid: first two or three slots in my TDM output are taken from the "old" buffer, whereas the rest is taken form the beginning of the "new" buffer. I concluded that one of two things are happening:
1) Calling EDMA3_DRV_setSrcParams takes so much time in the ISR that DMA starts to shift out the "old" data and only after some time it catches on the new one
2) The ISR does not start its execution as fast as I expected and buffer switching occurs too late
To support my conclusion I run my EDMA configuration code on a simple program and it ran well.
I think your second conclusion is right. I guess the ISR is taking some time to start executing. So, to reduce the latency between the EDMA completion interrupt raised by the EDMA CC and the start of execution of your ISR, you can try to:
1. Use the early completion feature available in the EDMA CC by setting bit 11 in the OPT field of the PaRAM (see the sketch below).
2. Hook the EDMA CC completion interrupt directly to a HWI, rather than through the ECM event which in turn is hooked to the HWI.
This requires a change in the EDMA3 DRV sample library (and the library has to be recompiled) and in the tcf file of your application.
I think by using either of the above, or both, you can drastically reduce the latency of the HWI ISR for the EDMA completion interrupt, and your issue might get resolved.
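To illustrate suggestion 1, in the paramSetTX1 programming posted earlier this would amount to one extra bit in the OPT word (a sketch only):

/* Early completion (TCCMODE, OPT bit 11): the completion code is reported when
   the transfer request is submitted to the TC, not when the data has landed. */
paramSetTX1.opt = (1u << 20)       /* TCINTEN                    */
                | (1u << 11)       /* TCCMODE = early completion */
                | (tccTX1 << 12);  /* TCC                        */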
Regards,
Sundaram
Hi again
Apparently, calling EDMA3_DRV_setSrcParams is much more time-consuming than one could imagine. My ISR with EDMA3_DRV_setSrcParams takes about 2.8 us, whereas changing it to a plain memory write shortens it to 600 ns (there are 5 calls of the function, or 5 memory writes, per interrupt). 2.8 us is unacceptable, as I will show in the screenshots that picture my problem.
I followed your suggestion of using the early completion interrupt. This is a good lead, as it improves how my ISRs are executed, but not all of them. Every now and then I still get an unexpected delay in interrupt generation, which causes distortions on the second and third PCM slots (when new data shifting should occur only on the first). Take a look at the two screenshots:
The first one pictures correct behaviour, the second one incorrect behaviour. The signals are: purple - the data shifted out (I used it to easily trace when incorrect behaviour occurs); turquoise - PFS, indicating the start of the next frame; green - execution of the ISR. Using my superior graphic skills I also added two helper lines to indicate the crucial time moments for McASP. According to figure 25, page 46 of SPRUFM1, the blue line is the time at which the last AXEVT before the interrupt takes place. The DMA transfer initiated by this AXEVT raises the interrupt. The red line is the AXEVT for the new frame. My interrupt (in which the buffer switch occurs) needs to conclude before this time, or else I will start sending the same frame again. The white oscilloscope measurement lines mark the last time slot of the frame (3.9 us).
The combined time from the AXEVT firing, through performing the DMA transfer, to raising the interrupt has to be minimized. What I don't understand is why there is such a long delay between the AXEVT and the interrupt generation.
My question is: how feasible do you consider my timing constraints? Would modifying the EDMA driver help in any way? Or should I attach my interrupt at a lower level that would provide faster and perhaps more deterministic generation? In my opinion, a 2048 kHz PCM should be doable for a 300 MHz CPU without a problem.
Regards
Szymon
Hi Szymon
I need to read your last post more carefully; however, I wanted to address and confirm a few things with you.
You asked about HWI prioritization; this is explained in the C674x DSP CPU and Instruction Set Guide here. Table 5-1 has the details: HWI4 is the highest priority (after NMI and Reset) and HWI15 is the lowest. So, to map a peripheral event to the highest priority interrupt you could map it to HWI4, and as Sundaram mentioned, for your EDMA3 completion interrupt, modify the library code to map directly to a HWI rather than use the ECM dispatcher (mapped to INT6,7,8,9). You could try mapping the EDMA interrupt to HWI10 onwards (to test the theory that the ECM dispatcher overhead is the culprit). If you wanted to map it to HWI4 or 5, you would need to change the DSPLink interrupts too (as they are mapped to HWI4 and 5).
However, before you do that, can you please clarify a few things:
1) How many peripheral events/interrupts are in use in your complex application? I see that you might have DSPLink-based IPC interrupts and EDMA completion interrupts. What other modules are interrupting the DSP CPU? This would also give us an indication of whether there are additional peripheral interrupts on HWI6 (the combined event for EVT4 to EVT31, the EDMA3 completion interrupt being EVT8).
2) In general, the ISR code could be residing in slower memory (e.g. external memory); have you tried mapping your ISR code and critical stacks (interrupt and critical task stacks) to on-chip L2 or shared RAM?
3) If mapped to shared RAM, please also make sure that the MAR bits associated with shared RAM are enabled to allow caching of this memory region.
4) If the critical sections are allocated in DDR2/mDDR memory, you could also confirm that the mDDR/DDR2 controller register PBBPR is set to 0x20 (a register sketch follows below). You might find this thread on McASP underrun helpful (even though you are not reporting underrun issues). PBBPR is typically changed from the default 0xFF to 0x20 in the U-Boot/UBL code.
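If it helps, a minimal sketch of that PBBPR write is below (it assumes the DDR2/mDDR controller base of 0xB0000000 on OMAP-L138 with PBBPR at offset 0x20; please verify the offset against the memory controller documentation, since this is normally done once in UBL/U-Boot rather than in the application):

#define DDR2_MDDR_BASE  0xB0000000u
#define DDR2_PBBPR      (*(volatile unsigned int *)(DDR2_MDDR_BASE + 0x20u))

static void lower_ddr_burst_priority(void)
{
    /* Lower the "old count" threshold so CPU/EDMA accesses to DDR are not
       starved behind long bursts from other masters. */
    DDR2_PBBPR = 0x20u;
}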
Hope this helps some.
Regards
Mukul
Hi
Answering your questions...:
1) At the moment I'm only using the DMA interrupts and those originating from DSPLink. The interesting thing, however, is that removing much of the DSPLink code doesn't seem to help. When I comment out some time-consuming signal processing functions that run in the same task, I do see a significant improvement. Such behaviour is odd, as a task should not affect ISRs this way.
2) Could you please help me figure out how to move my ISR code from RAM to L2 memory? At the moment I only moved these:
bios.MEM.HWISEG = prog.get("IRAM");
bios.MEM.HWIVECSEG = prog.get("IRAM");
Judging from my MAP file ISR functions are in RAM at the moment.
Regards
Szymon
Szymon Kuklinski said:At the moment I'm only using the DMA interrupts and those originating from DSPLink. The interesting thing, however, is that removing much of the DSPLink code doesn't seem to help. When I comment out some time-consuming signal processing functions that run in the same task, I do see a significant improvement. Such behaviour is odd, as a task should not affect ISRs this way.
So it would seem that you don't have other competing interrupts, and you might not necessarily need to change the priority of the interrupts. It seems like the context save/restore time for the CPU to switch to your ISR could be the issue (as the CPU is busy doing other signal processing work when you get the interrupt)?
Szymon Kuklinski said:2) Could you please help me figure out how to move my ISR code from RAM to L2 memory? At the moment I only moved these:
bios.MEM.HWISEG = prog.get("IRAM");
bios.MEM.HWIVECSEG = prog.get("IRAM");
From SPRU423H , Pg 128
When running either an HWI or SWI, DSP/BIOS uses a dedicated system interrupt stack, called the system stack. Each task uses its own private stack. Therefore, if there are no TSK tasks in the system, all threads share the same system stack. Because DSP/BIOS uses separate stacks for each task, both the application and task stacks can be smaller. Because the system stack is smaller, you can place it in precious fast memory.
Please note that I checked with the author of the above guide, and they felt that sentence can be a bit misleading. While it is true that HWI and SWI processing is done on the system stack, the C register context is first saved on the preempted task stack; nested HWIs or SWIs are saved on the ISR stack. It's only the preempted TSK that has this context save. The HWI dispatcher first saves the C context on the task stack before switching to the ISR stack and calling the ISR. So, if the TSK stack is in external memory, then the register context will be saved to external memory.
Can you also see if you can fit your .sysstack and .stack sections in internal memory? Again, note that you should be able to make use of SHRAM as well (apart from IRAM), and it should give you better performance than external memory. Make sure the MAR bits for SHRAM are enabled.
Regards
Mukul
Hi
Is modifying prog.module("TSK").STACKSEG in the TCF file sufficient to achieve what you're suggesting? And what about moving the ISR code to IRAM?
Regards
Szymon
Szymon Kuklinski said:The interesting thing however is that removing much of the DSPLink code doesn't seem to help. When I comment some time consuming signal processing functions that run in the same Task I do see significant improvement. Such behaviour is odd as Task should not affect ISRs this way.
Are you using the --interrupt_threshold compiler option? If not, that means the compiler is allowed to disable interrupts for as long as it wants. There are certain scenarios, generally with very intensive signal processing code, where this can improve performance. So if you are compiling for speed, the compiler will try to "do you a favor" and make the code execute as fast as possible. You should use --interrupt_threshold to tell the compiler not to disable interrupts for longer than n cycles.
I recommend that you rebuild your entire project adding:
--interrupt_threshold=200
FYI, you don't want to make it TOO small, as that can restrict the compiler's ability to generate efficient code. If you make it too big, you can run into the issues you're currently experiencing!
One other hint I would recommend for BIOS interrupt prioritization is the following:
Just to be clear, the hardware priority matters little compared to how you configure the interrupt masks in BIOS. The hardware priority only serves as a "tie breaker" in the case where multiple interrupts are pending and the device needs to decide which one to service first. The "interrupt mask" is what decides which interrupts are allowed to interrupt you. When used incorrectly this can often lead to priority inversion.
Szymon Kuklinski said:Is modifying prog.module("TSK").STACKSEG in the TCF file sufficient to acheive what you're suggesting?
I didn't read the whole thread, but the command you mention would cause the stacks of dynamically created tasks to be allocated wherever you specify. Are you dynamically creating tasks, or do you create them statically in the tcf?
Szymon Kuklinski said:And what about moving ISR code to IRAM?
Above the ISR you can do:
#pragma CODE_SECTION(myisr, ".text:fastcode")
void myisr()
{
}
Then you add your own linker command file and have the following:
SECTIONS
{
.text:fastcode > IRAM
}
Szymon
I recommend trying Brad's suggestion on interrupt_threshold prior to investigating the memory placement of the stack and system stack.
Regards
Mukul
Hi
Here's what I tried:
1) The interrupt_threshold compiler option:
This seemed very promising, as what I'm experiencing is relevant to what this option does. I tried several settings of this parameter, but even going as low as --interrupt_threshold=1, or even omitting the =1, improves my situation only slightly. The interrupts in general do have less delay, but they still randomly get shifted so that I'm getting distortions on PCM slots 2 and 3. I recompiled only my project with this option; the TI libraries that I'm using I did not recompile.
2) I also tried placing my functions in IRAM the way Brad suggested. Using the pragma and adding a section in the cmd file did not help, however, as the MAP file still indicates that my functions are in DDR.
3) When I try placing prog.module("TSK").STACKSEG in IRAM like this: prog.module("TSK").STACKSEG = IRAM; I get the following error:
js: "./cct_omap_dsp.tcf", line 77: Reference constraint violation: IRAM is an illegal value for TSK.STACKSEG
4) I added the interrupt mask setting of "all" to the TCF file for the interrupts used by EDMA, but nothing changed.
I fear that using EDMA LLD to hook interrupts was not such a good idea considering my timing constraints. In your opinion, is changing the EDMA3 driver so that it uses HWIs directly and not through the ECM hard to accomplish?
Also, do you see any pitfalls in using EDMA3 LLD for channel and linking configuration but not for hooking interrupts? I would be doing the latter manually.
Regards
Szymon
Which TI libraries were you trying to recompile and what issues did you encounter?
For the new linker command file make sure that you have added it to your project!
Instead of "IRAM" you need to use whatever the name of internal memory is in your tcf file.
Hi
If you're asking why I didn't recompile TI's libraries with --interrupt_threshold, the answer is that I didn't think it was necessary. Recompiling the DSPLink and EDMA libs should not be a problem. I'm also linking against some DSP/BIOS libs; I'm writing from home and don't have access to my files to check whether those DSP/BIOS libs are recompilable or not.
I want to focus on modifying EDMA LLD to use HWIs directly and not through the ECM. I'm not quite sure how much effort it would take, but it's still one of the leads to follow.
Regards
Szymon
Szymon
Yes, that would be the next thing to try. If you attempt it and run into issues, please post them here for the EDMA3 LLD experts to review and help.
The EDMA3 LLD team is going to work on creating a wiki topic showing how to change the EDMA3 LLD code to map to a HWI instead of the ECM (changes would need to be made in the packages\ti\sdo\edma3\drv\sample\src\bios_edma3_drv_sample_init.c file and maybe more places). I am not sure of the timelines yet.
Is this going to be an immediate showstopper for you? Is there some local TI support available to you to work through these issues, in case some unofficial code snippets need to be provided to you offline until a formal wiki topic is created?
Regards
Mukul
Also, can you please clarify if you are also making use of DSPLIB for your signal processing functions?
Regards
Mukul
Hi
I'm not using DSPLIB. My colleague responsible for the signal processing code does make some use of TI's fastrts67x.lib. I recompiled this library yesterday with the -mi1 option and built my project against it. As I mentioned, there was only a slight improvement.
Unfortunately it is a significant setback for our project, and has been for some time now. We're upgrading our product by switching to a new, faster processor with a DSP (OMAP-L138), and most of the peripherals for our PBX are ready. Having working PCM is crucial for testing and for drawing the final PCB. We already have something like a thousand OMAPs in our warehouse waiting for this problem to be sorted out.
I will start modifying the code on my own and see how it goes. At first glance it seems that I need to look into packages\ti\sdo\edma3\drv\sample\src\bios_edma3_drv_sample_init_multi_edma.c (as OMAP-L138 has two EDMA instances) and perhaps packages\ti\sdo\edma3\drv\sample\src\bios_edma3_drv_sample_omapl138_cfg.c.
Regards
Szymon
Hi
I have another idea that in my opinion would be quicker to implement. As there are some EDMA LLD specialists helping me with this problem, I would like to ask whether it is possible to do something like this:
- remove from EDMA LLD all code responsible for registering interrupts and recompile the library
- in my TCF file, do something like this:
bios.HWI.instance("HWI_INT6").interruptSelectNumber = 8; // 8 is the EDMA transfer completion interrupt for shadow region 1 (DSP)
bios.HWI.instance("HWI_INT6").useDispatcher = 1;
bios.HWI.instance("HWI_INT6").interruptMask = "all"; // so that my ISR doesn't get preempted
bios.HWI.instance("HWI_INT6").fxn = prog.extern("edma_isr");
- make edma_isr comply with what's written in SPRUGP9A (2.9.2) - checking and clearing IPR etc., because of this interrupt's shared nature
- enable HWI_INT6 in main():
C64_enableIER (C64_EINT6);
- recompile the whole project against the new EDMA LLD, with the above changes
Does this seem like a sensible workaround to you? My superiors are pushing for a quick fix and not something that would take weeks to implement and test.
Regards
Szymon
Szymon,
This should work in theory, as you are simply statically connecting the completion interrupt from the EDMA3 IP to the HWI. Though we have not tested a direct static mapping like this, I guess there should be no problem if you remove all the interrupt registration and handling code from the sample library, recompile it, and use the new library with these changes in your application.
Let us know about any developments/issues you face as you make these changes.
Regards,
Sundaram
I was just looking back through the thread and noticed some printf calls in your code. Do you have these sprinkled throughout your code, or just a few at startup? FYI, printf will cause MAJOR real-time disruptions; see this wiki page for more details. If that's the case, I recommend at least temporarily removing them, by either commenting them out or perhaps adding an #ifdef around each of them so that you can selectively turn all the printf statements on or off. Since you're using BIOS, a much better option would be to use LOG_printf instead, as detailed in this FAQ.
You seem to have had lots of useful technical help here, but glancing through the thread it seems that your software architecture is putting massive pressure on the timing constraints.
Not wanting to wade into the technical detail here (and assuming I understand your aim correctly), it seems you want to change the source pointer for an EDMA transfer to the McASP after the transfer completes but before the McASP requests a new frame.
This sounds a little crazy...
Surely this is the very purpose for which the EDMA supports linking (or chaining, or ping-pong, or whatever the correct TI terminology is :p).
Assuming you can tolerate a frame's worth of delay through your system, you can relax the time available to service the interrupt to an entire EDMA transfer (625 us, which at 300 MHz gives you almost 200K CPU cycles to update this pointer).
When one transfer completes ("ping"), the EDMA automatically switches to another transfer context ("pong") which points to the second buffer. You then have all the time in the world to update the source pointer for the first transfer ("ping") before the second transfer completes ("pong") and switches back again.
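A rough, untested sketch of that scheme with the LLD calls already used in this thread (reusing the hEdma handle and pcm_isr callback from earlier posts; pingBuf/pongBuf are illustrative names):

unsigned int chId    = EDMA3_DRV_HW_CHANNEL_EVENT_1;  /* McASP TX event channel */
unsigned int chPing  = EDMA3_DRV_LINK_CHANNEL;
unsigned int chPong  = EDMA3_DRV_LINK_CHANNEL;
unsigned int tcc     = EDMA3_DRV_TCC_ANY;
unsigned int tccPing = EDMA3_DRV_TCC_ANY;
unsigned int tccPong = EDMA3_DRV_TCC_ANY;

EDMA3_DRV_requestChannel(hEdma[0], &chId,   &tcc,     (EDMA3_RM_EventQueue)0, &pcm_isr, NULL);
EDMA3_DRV_requestChannel(hEdma[0], &chPing, &tccPing, (EDMA3_RM_EventQueue)0, NULL, NULL);
EDMA3_DRV_requestChannel(hEdma[0], &chPong, &tccPong, (EDMA3_RM_EventQueue)0, NULL, NULL);

/* ...program three otherwise identical PaRAM sets (chId starts out as "ping"),
   differing only in srcAddr: chId and chPing use pingBuf, chPong uses pongBuf... */

EDMA3_DRV_linkChannel(hEdma[0], chId,   chPong);  /* after ping completes, reload the pong set */
EDMA3_DRV_linkChannel(hEdma[0], chPong, chPing);  /* after pong completes, reload the ping set */
EDMA3_DRV_linkChannel(hEdma[0], chPing, chPong);  /* ...and back to pong, indefinitely         */

/* The completion ISR then only refills whichever buffer just finished, with a
   whole 625 us frame to do it in. */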
Again, apologies if I've misunderstood your aim - I hope this helps!