
DSPLink blocks other interrupts

Other Parts Discussed in Thread: OMAP-L138

Hi all,

I have been developing code for the OMAP-L138 on a custom board. I use BIOS 5.41, XDC 3.22, CCS 5.1, DSPLink 1.65, and CGT 7.3.1.

This week I ran into a problem. I had code that reads a device over SPI with strict timing, and I integrated it with DSPLink on the OMAP-L138. I changed the interrupt vectors and task priority that DSPLink uses and recompiled. I use INT4 (highest priority) for the timing requirement; DSPLink uses INT8 and INT9. I transfer almost 40K bytes via the message queue. The code enters the timer2 ISR 150K times per second, and the ISR takes 100 to 200 cycles (I can't measure it precisely).

If I increase the timer frequency over 150K, or if I don't set the line "bios.HWI.instance("HWI_INT4").interruptMask = "all";" in the .tcf file, DSPLink violates the timer interrupt. I believe DSPLink masks all the other interrupts even though the timer interrupt's priority is higher than DSPLink's. I think DSPLink disables GIE. The timer misses some interrupts. I am not sure, but DSPLink may also affect timing.

Does anyone know whether DSPLink disables interrupts?

Thanks

Serdar

  • serdar said:

    If I increase the timer frequency over 150K, or if I don't set the line "bios.HWI.instance("HWI_INT4").interruptMask = "all";" in the .tcf file, DSPLink violates the timer interrupt.

    I wouldn't use the term "violate" here.  Your usage of that term indicates that you may be misinterpreting the dynamics of C6x interrupt priorities.  Interrupt priority applies only when multiple interrupts are ready to be serviced on a given CPU cycle (in other words, their bit in the IFR register becomes 1).  When, on any given cycle, multiple bits in the IFR are set, and all other conditions are present (GIE, etc.), the highest priority interrupt will be the one chosen by the CPU for servicing.  The CPU will disable interrupts and follow the vector.  The ISR pointed to by the vector may enable interrupts again before returning from the ISR (enabling interrupt nesting).  Once interrupts are enabled, *any* subsequent interrupt (including one that may have been present when the high-priority interrupt was serviced) will run.

    This priority mechanism can be referred to as the physical priority, whereas I believe what you're thinking of is the logical priority, which doesn't really exist in the C6x architecture, but can be emulated by appropriate interrupt masking.

    So, for your system with the high-frequency (HF) interrupt occurring, you want to mask all other interrupts during the HF ISR and never mask the HF interrupt during all other ISRs.  This is why you need the "all" setting in your .tcf file interruptMask.
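    In .tcf terms that policy might look like the fragment below (assuming HWI_INT8/HWI_INT9 carry the DSPLink events; DSPLink already configures its ISRs to mask only themselves, so the last two lines just make the default explicit):

```javascript
/* HF timer ISR: nothing may preempt it */
bios.HWI.instance("HWI_INT4").interruptMask = "all";

/* DSPLink ISRs: mask only themselves, so they can never block INT4
 * (this mirrors DSPLink's own behavior; shown here for illustration) */
bios.HWI.instance("HWI_INT8").interruptMask = "self";
bios.HWI.instance("HWI_INT9").interruptMask = "self";
```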

    serdar said:

     I believe DSPLink masks all the other interrupts even though the timer interrupt's priority is higher than DSPLink's.

    As said above, there is no logical priority to interrupts on the C6x; they're all of a "flat" priority, and SW can control the logical priority through masking during ISRs.

    DSPLink never uses the "all" mask setting.  Every DSPLink interrupt masks only itself, and there's really no way to make them mask differently.

    serdar said:

     I think DSPLink disables GIE. The timer misses some interrupts. I am not sure, but DSPLink may also affect timing.

    Does anyone know whether DSPLink disables interrupts?

    It's hard to say for sure, but pretty much any OS-level framework will need to disable interrupts globally at some point.  Keeping that disabled period as small as possible is one of the main design goals of low-level, real-time oriented SW.  There will be many points where interrupts are globally disabled, and any given layer of SW will have what is referred to as an "interrupt latency", which ends up being the longest (cycle-wise) of the SW layer's regions where interrupts are disabled.  This interrupt latency translates into a delay in processing an interrupt, and a system needs to be able to accommodate this latency or else it will be susceptible to missing its real-time deadlines.

    I know that DSP/BIOS publishes its interrupt latency number in either release notes or a datasheet.  DSPLink will also have an interrupt latency, although I don't know what it is and I'm not sure where to look.

    However, I believe it's not really an issue for you since you report that your system works when you use the "all" mask setting on your HF timer ISR, and as I said above, you must use this "all" setting.

    I believe you're missing timer interrupts without the "all" setting because:
        - timer interrupt fires and your timer ISR processing starts
        - DSP/BIOS applies your mask setting to the IER register and enables GIE (global interrupts)
        - DSPLink's interrupt fires and its ISR preempts your timer ISR
        - DSPLink's ISR takes a while to run and eventually exits
        - your timer ISR is returned to and it finishes too, but you've missed one or more timer interrupts during the above sequence.
    The above sequence assumes that your timer ISR has at least masked itself, else it could re-enter which normally is not desirable whatsoever (often difficult to program for).

    Regards,

    - Rob


  • Hi Rob,

    Thanks for the reply. I know how interrupts work and that C6x interrupts have no logical priority. I also understand that if I need priority among interrupts I must choose the appropriate CPU interrupt channel; for example, to make the timer2 interrupt the highest-priority one, it must be connected to INT4. The technical reference manual says the hardware has an interrupt priority feature.

    Regarding the "all" mask setting: clearly I need BIOS to mask all the other interrupts, which means something in my program is somehow preventing the timer2 (INT4) ISR from meeting its deadline. I use only INT4 (my timer2 ISR) and INT8/INT9 (DSPLink). If DSPLink disables global interrupts, how can a developer build a hard real-time system?

    I have another problem with interrupt timing. I found that one of my tasks also affects the timer2 interrupt timing. The task takes 12M cycles and its stack size is 8196. If I disable the task, the timer2 ISR responds at 150K; otherwise the program misses timer2 interrupts. I understood interrupt latency to be a fixed value, unrelated to CPU load or stack size. How can a task affect the interrupt?

    My tcf file is:

    /* --- Global Settings --- */
    utils.importFile("dsplink-omapl138gem-base.tci");

    prog.module("MEM").ARGSSIZE = 50 ; /* It is the argument size which comes from GPP with PROC_load */

    /* Enable MSGQ and POOL Managers */
    bios.MSGQ.ENABLEMSGQ = true;
    bios.POOL.ENABLEPOOL = true;

    /* MEM : IRAM, L1DSRAM */
    var IRAM = prog.module("MEM").instance("IRAM");
    var L1DSRAM = prog.module("MEM").instance("L1DSRAM");

    /* Overwrite to dsplink setup */
    prog.module("GBL").C64PLUSL2CFG = "0k" ;
    IRAM.len = 0x40000 ; /* Use all of the IRAM for BIOS*/

    bios.MEM.NOMEMORYHEAPS = 0;
    bios.MEM.instance("IRAM").createHeap = 1;
    bios.MEM.instance("IRAM").heapSize = 0x00004000;
    bios.MEM.BIOSOBJSEG = prog.get("IRAM");
    bios.MEM.MALLOCSEG = prog.get("IRAM");
    /* --- --- */

    /* --- BIOS Instruments --- */
    /* The following DSP/BIOS Features are enabled. */
    bios.enableRealTimeAnalysis(prog);
    bios.enableRtdx(prog);
    bios.enableTskManager(prog);

    bios.CLK.TIMERSELECT = "Timer 1"; /* Select Timer 1 to drive BIOS CLK */
    bios.CLK.RESETTIMER = 1;

    bios.LOG_system.bufLen = 512;

    var trace = bios.LOG.create("trace");
    trace.bufLen = 1024;
    trace.logType = "circular";
    /* --- --- */

    /* --- DSP Idle --- */
    bios.IDL.create("IDL_dsp");
    bios.IDL.instance("IDL_dsp").order = 1;
    bios.IDL.instance("IDL_dsp").fxn = prog.extern("idle");
    /* --- --- */

    /* --- Communication (DSPLink) Components --- */
    /* Rx task must be defined first */
    bios.TSK.create("linkRxTsk");
    bios.TSK.instance("linkRxTsk").priority = 5;
    bios.TSK.instance("linkRxTsk").fxn = prog.extern("linkReceiveFxn");
    bios.TSK.instance("linkRxTsk").stackSize = 1024;

    bios.TSK.create("linkTxTsk");
    bios.TSK.instance("linkTxTsk").priority = 5;
    bios.TSK.instance("linkTxTsk").fxn = prog.extern("linkTransmitFxn");
    bios.TSK.instance("linkTxTsk").stackSize = 1024;
    bios.SEM.create("semSendToARM");
    /* --- --- */

    /* --- ADC Acquisition Components --- *//*
    bios.TSK.create("adcTsk");
    bios.TSK.instance("adcTsk").priority = 15;
    bios.TSK.instance("adcTsk").fxn = prog.extern("adcFxn");
    bios.TSK.instance("adcTsk").stackSize = 2048;*/

    /* sampling timer HWI configuration */
    bios.HWI.instance("HWI_INT4").interruptSelectNumber = 2; /* CSL_INTC_EVENTID_EVT2 */
    bios.HWI.instance("HWI_INT4").fxn = prog.extern("timer2ISR");
    bios.HWI.instance("HWI_INT4").useDispatcher = 1;
    bios.HWI.instance("HWI_INT4").interruptMask = "all";

    /* --- --- */

    /* --- Measurement Components --- */
    bios.TSK.create("measureTsk");
    bios.TSK.instance("measureTsk").priority = 2; 
    bios.TSK.instance("measureTsk").fxn = prog.extern("measureFxn");
    bios.TSK.instance("measureTsk").stackSize = 8196;
    /* --- --- */

    // !GRAPHICAL_CONFIG_TOOL_SCRIPT_INSERT_POINT!
    if (config.hasReportedError == false) {
    prog.gen();
    }

    Regards,

    Serdar

  • serdar said:
    bios.HWI.instance("HWI_INT4").interruptSelectNumber = 2; /* CSL_INTC_EVENTID_EVT2 */

    The above assignment could be the cause of your missing timer 2 interrupts.  Event 2 is a grouped event covering event numbers 64-95.  The BIOS configuration code sees event 2, and knowing it's a grouped event, it assigns ECM_dispatch as the processing function, and ECM_dispatch would call your ISR.  And ECM_dispatch itself is called by the HWI dispatcher, so you've got a lot of unnecessary overhead.

    What is timer2's event ID?

    You should set ("HWI_INT4").interruptSelectNumber to the timer 2 event ID directly.
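    As a sketch, the direct mapping might look like this in the .tcf (the event number below is a placeholder only; look up the actual Timer64P timer 2 event ID in the OMAP-L138 interrupt-map documentation):

```javascript
/* Map INT4 straight to the timer 2 event instead of combined event 2.
 * TIMER2_EVENT_ID is a placeholder -- verify the real number in the TRM. */
var TIMER2_EVENT_ID = 68; /* placeholder, NOT verified */
bios.HWI.instance("HWI_INT4").interruptSelectNumber = TIMER2_EVENT_ID;
```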

    serdar said:

    When I consider the "all" mask setting, the clear thing is I need to set to BIOS to mask all the other interrupts, this means something in my program is preventing timer2 (INT4) isr to catch it's deadline somehow. I use only INT4 (my timer2 isr) and INT8, INT9 (dsplink).

    I don't understand what you're saying above.

    Without the "all" setting, or more precisely, without masking the DSPLink interrupts INT8 & INT9 during INT4 processing, when INT4 fires and gets processed, just before calling INT4's function DSP/BIOS will enable global interrupts, and as soon as it does that the DSPLink interrupts can get processed (thus preempting your INT4 ISR).

    With the "all" interrupt mask setting on HWI_INT4, interrupts will be disabled for the duration of HWI_INT4 processing, isn't that what you want?

    serdar said:
    If dsplink disables the global interrupt how a developer can build a hard-realtime system?

    Most framework-level SW disables global interrupts for some period of time.  DSP/BIOS does so, and I assume DSPLink would also.  And regardless of whether a SW layer does or does not disable global interrupts explicitly, interrupt processing inherently does so (since the HW disables GIE upon an interrupt, and DSP/BIOS strives to re-enable GIE as soon as possible) and is typically the longest span in which interrupts are disabled for a DSP/BIOS system.

    A user's system needs to be able to accommodate this latency.  Problems crop up when high-frequency (HF) interrupts are happening.  Modern HW generally has facilities to allow for delayed response to HF sources (such as buffering/FIFOs/etc.) and facilities to reduce the interrupt frequency (and therefore the overhead).  150 kHz is fairly high, but still not above what I would expect DSP/BIOS to be able to handle.

    Also, please summarize where you stand today on this issue.  Are you missing real-time when using the "all" setting on HWI_INT4?  If not, do you just not understand why you need "all"?

    From your previous posts I got the impression that your timer 2 ISR was not being missed when you used the "all" interrupt mask setting on HWI_INT4, so my approach to answering your post originally was to explain why the "all" setting was needed.

    Regards,

    - Rob


  • Rob,

    I should have explained my case more clearly; sorry about that.

    I have an ADC with 8 channels. My hardware is not capable of reading the ADC through peripherals alone; I couldn't use the timer output pins, so I have to use an ISR to start each conversion and collect the result. I also want to read all 8 channels in as short a time as possible, and I use timer2 compare interrupts for that. The timer2 period interrupt itself is not active, but the timer2 compare interrupts are. I set the event combiner to connect the 8 timer2 compare interrupt events to INT4. For instance, I set the period of timer2 to 55 usec and read the first channel after 4 usec, then the other channels every 4 usec.

    // ADC and timer timing diagram:
    //
    // |----------------------------- PRD / REL ------------------------------|
    // |------|------|------|------|------|------|------|------|--------------|
    // T2Run  CMP0   CMP1   CMP2   CMP3   CMP4   CMP5   CMP6   CMP7    Timer2 overrun
    //        |                                                |
    //   1st ch sampling                                 8th ch sampling

    I have three tasks and one HWI for reading the ADC. I changed the DSPLink interrupt channels to INT8 and INT9 and recompiled. The ADC reading must be done with strict timing; it then posts a semaphore to start the measurement task. The measurement task posts a semaphore to start the DSPLink Tx task, and the DSPLink Rx task waits until the ARM sends a new message. In this scheme the ADC-reading HWI must preempt every other thread and meet strict timing, since the ADC timing directly affects measurement accuracy.

    I thought INT4 was the highest-priority interrupt channel and could preempt every thread, so why do I need the "all" masking option?

    I realized that my problem is not DSPLink; the problem is the measurement task. I have a library of algorithms that run in the measurement task. If I compile the library with -o1, I have no problem, but if I compile with -o3 it affects the interrupt (or the ISR). I am working on it; any comments would be welcome.

    Thanks for your support

    Serdar

  • serdar said:

    I thought INT4 is the highest priority interrupt channel and it can preempt every thread, so why I need the "all" masking option?

    For the sake of this discussion hardware priority does not matter.  When not masked by SW or GIE, any interrupt can happen at any time and preempt whatever was running (I'm avoiding the use of the term "thread" here, it is not appropriate).  If INT4 is running without the "all" masking, any other interrupt can preempt it, including DSPLink interrupts.  If you don't understand or accept that fact then you need to understand it better (perhaps ask someone else what I mean).

    So, yes, INT4 can preempt every thread and without the "all" masking option INT4 can be preempted by any other interrupt.  If your timer2 interrupt needs to respond as quickly as possible then you don't want it to get preempted before it's had a chance to perform the work with a hard deadline.

    serdar said:
    I realized that my problem is not DSPLink. The problem is measurement task. I have a library which includes algorithms and algorithms run in measurement task. If I compile the library with -o1, I have no problem. But if I compile with -o3, it effects the interrupt (or isr). I am working on it. If you comment I would be glad.

    Higher-level optimizations (as requested by -o3) often achieve their higher performance by disabling interrupts.  You can alleviate that with the option "-mi##", where ## is the maximum number of cycles for which interrupts may be disabled.  Without that option the compiler feels free to disable interrupts for as long as necessary to achieve its optimizations.  Specifying, for example, "-mi10" tells the compiler to never disable interrupts for longer than 10 cycles.  The bigger the "-mi##" number, the better the compiler can optimize, so you might want to experiment to find the largest value that your real-time deadlines allow.  Since DSP/BIOS already disables interrupts for some period (its "interrupt latency" benchmark), you could safely use that value for the compiler too.  For example, if DSP/BIOS has an "interrupt latency" of 100 cycles, you could safely use "-mi100" along with the "-o3" optimization level.
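    As a sketch, the build line might look like the following (cl6x is the C6000 compiler driver; the 100-cycle threshold is only an example, to be replaced by your measured budget):

```shell
# -o3: full optimization; -mi100: never disable interrupts for > 100 cycles
cl6x -o3 -mi100 measure_algo.c
```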

    Regards,

    - Rob


  • Rob,

    Robert Tivy said:

    If you don't understand or accept that fact then you need to understand it better (perhaps ask someone else what I mean).

    Ok, I accept it. I have seen in practice that HWIs can preempt each other. :-)

    Robert Tivy said:

    Event 2 is a grouped event covering event numbers 64-95.  The BIOS configuration code sees event 2, and knowing it's a grouped event, it assigns ECM_dispatch as the processing function, and ECM_dispatch would call your ISR.  And ECM_dispatch itself is called by the HWI dispatcher, so you've got a lot of unnecessary overhead.

    I tested the timer2 interrupt vs. the event combiner with timer2 compare interrupts and couldn't see any difference. Is there a practical way to measure the overhead?

    Thanks,

    Serdar

  • serdar said:

    If you don't understand or accept that fact then you need to understand it better (perhaps ask someone else what I mean).

    Ok, I accept it. I have seen in practice that HWIs can preempt each other. :-)


    If you'd like to understand the "why" of it better, re-read my first paragraph in my initial response to you.

    serdar said:

    I tested the timer2 interrupt vs. the event combiner with timer2 compare interrupts and couldn't see any difference. Is there a practical way to measure the overhead?

    Not that I know of.  "CPU Load" tooling will, in general, "charge" interrupt processing time to the thread that was interrupted (TSK or SWI).

    However, you can be sure that there is more overhead simply by the fact that the HWI dispatcher is calling the ECM dispatcher which is calling your ISR, and then your ISR returns to the ECM dispatcher which does more work to check for further event processing and then returns to the HWI dispatcher.  Cutting out the ECM dispatcher will reduce your CPU load.

    Regards,

    - Rob