
CC2652R: Direct Hardware ISR routine on interrupt hit (before TI-RTOS)

Part Number: CC2652R
Other Parts Discussed in Thread: SYSCONFIG

Hi,

Is it possible to get a function called directly when an interrupt hits, without it being handled by TI-RTOS? We saw in the datasheet that a priority level numerically lower than the disable priority allows for this, but SysConfig doesn't seem to support interrupt priorities less than 1.

We are using the ZStack from CC2652 SDK 4.10.00.0.78, where I have observed that the stack often disables and re-enables interrupts using Hwi_disable() and Hwi_enable(). Does this disable interrupts directly in hardware (all interrupts delayed), or only the interrupts being processed by TI-RTOS (i.e. priority numbers greater than the disable priority)?

My exact requirement is that I want a specific time interval to fire continuously with an accuracy of 1-5 us.
However, when using the Clock/Timer driver I have observed occasional jumps of 30-40 us at random intervals, which I think may be because ZStack temporarily disables interrupts, causing the timer interrupt event to be processed later and introducing these small delays. (I have tried this with the highest priority for the Timer in SysConfig.)

We are currently using the CC2652 SDK 4.10.00.0.78.

Please could you advise on what I can do to get my required consistent timings.

Thanks
Akhilesh

  • Hi Akhilesh,

    Yes, we refer to them as "zero-latency" interrupts in the TI-RTOS documentation:

    "ZERO LATENCY INTERRUPTS

    The M3/M4 Hwi module supports "zero latency" interrupts. Interrupts configured with priority greater (in actual hardware priority, but lower in number) than the configured Hwi.disablePriority are NOT disabled by Hwi_disable(), and they are not managed by the internal interrupt dispatcher.
    Zero Latency interrupts fall into the commonly used category of "Unmanaged Interrupts". However they are somewhat distinct from that definition in that in addition to being unmanaged, they are also almost never disabled by SYS/BIOS code, thus gaining the "Zero Latency" title.
    Zero latency interrupts are distinguished from regular dispatched interrupts at create time solely by their interrupt priority being set greater than the configured Hwi.disablePriority.
    Note that since zero latency interrupts don't use the dispatcher, the arg parameter is not functional. Also note that due to the Cortex-M's native automatic stacking of saved-by-caller C context on the way to an ISR, zero latency interrupt handlers are implemented using regular C functions (ie no 'interrupt' keyword is required)."

     Basically, as you seem to have found out, setting the priority to "zero" makes it bypass the TI-RTOS dispatcher. As you can see, these interrupts are also not affected by Hwi_disable() calls.
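    For reference, a minimal sketch of plugging such an interrupt with the SYS/BIOS Hwi module might look like the following. The interrupt number and function names here are placeholders, and this assumes the standard Hwi API from the kernel:

    ```c
    /* Sketch only: plugging a "zero-latency" ISR via the SYS/BIOS Hwi module.
     * Interrupt number and handler name are placeholders, not from the thread. */
    #include <ti/sysbios/family/arm/m3/Hwi.h>

    /* Plain C function: no 'interrupt' keyword needed on Cortex-M. */
    static void myZeroLatencyIsr(void)
    {
        /* Clear the peripheral's own interrupt flag here, then do the
         * time-critical work. Do NOT call TI-RTOS/driver APIs from here. */
    }

    void plugZeroLatencyIsr(unsigned int intNum)
    {
        Hwi_Params hwiParams;

        Hwi_Params_init(&hwiParams);
        /* Priority 0 is numerically below Hwi.disablePriority, so the ISR is
         * plugged directly into the vector table and is never masked by
         * Hwi_disable(). */
        hwiParams.priority = 0;
        Hwi_create(intNum, (Hwi_FuncPtr)myZeroLatencyIsr, &hwiParams, NULL);
    }
    ```

    Note that because the dispatcher is bypassed, the arg parameter passed to the handler is not functional, as the documentation excerpt above states.
    
    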

    The reason SysConfig does not allow you to configure "zero" is simple: most of our drivers (which you configure there) expect a TI-RTOS/NoRTOS backend and as such also invoke TI-RTOS/NoRTOS APIs. Since it is not safe to call any such API from a "zero-latency" interrupt (it is outside "the scope of the kernel"), the option is not exposed in SysConfig.

    The Clock/Timer jitter is likely not due to the critical sections. The Clock/Timer module is built around the RTC, so its resolution is 31 us (ish) (one tick of the 32.768 kHz RTC is about 30.5 us). This means your timeout will not always align with a tick boundary and you will typically see jitter of +/- one RTC tick.

    As for your requirement, please consider that it typically takes 14 us for the device to go from CPU idle to active in the first place. Achieving an interrupt with such small jitter might be hard even without the TI-RTOS impact.

    Could you maybe elaborate on your need for this interrupt, what purpose it serves, etc.? Maybe I can propose an alternative solution (I would not recommend the zero-latency approach unless everything else is ruled out).

  • Hi M-W,

    I want to build a phase-cut dimmer application which works as follows:

    1. Detect a GPIO interrupt (zero cross of the AC signal).
    2. Start a timer (period already configured; it can be anywhere between 500 us and 9600 us).
    3. On the timer interrupt, set the output pin high.
    - Wait for step 1 again.

    This is used for AC light dimming purposes.
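    The firing delay itself is simple arithmetic; as a hypothetical illustration (the helper name and the percentage scale are my own, not from the application), the timer period after the zero cross for a given dim level could be computed as:

    ```c
    #include <stdint.h>

    /* Hypothetical helper: phase-cut delay after the zero cross for a given
     * dim level. For 50 Hz mains one half-cycle is 10000 us; the later the
     * output fires within the half-cycle, the lower the effective power. */
    static uint32_t phase_delay_us(uint32_t half_cycle_us, uint32_t level_pct)
    {
        /* level_pct = 100 -> fire immediately (full power),
           level_pct = 0   -> fire at the end of the half-cycle (off). */
        return half_cycle_us - (half_cycle_us * level_pct) / 100u;
    }
    ```

    With a 500-9600 us window inside a 10000 us half-cycle, a random 30-40 us shift in the firing point directly modulates the delivered power, which is why the jitter shows up as visible flicker.
    
    
    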

    I don't mind a lag in the period (the 14 us for idle to active is fine, since I can correct for it),
    but the jitter you mentioned is about 0-40 us, which causes the light to flicker because the active duration (effective power) of the AC line keeps changing.

    Avoiding the jitter is the intent behind the requirement for the zero-latency approach.

    Thanks
    Akhilesh

  • Hi Akhilesh,

    Have you looked into using the Sensor Controller to do this instead?

    The reason I suggest this is that there will potentially be wake-up jitter even in the case previously discussed, unless you disable power management altogether (meaning you never put any part of the device in standby).

    Using the Sensor Controller means you can implement the logic to be fully self-contained, not impacted by any part of the ARM core processing, not even interrupts.

  • Hi M-W,

    Thanks for the input on Sensor Controller Studio; we had not considered it but are now trying it out. (Please do share any reference documents.)

    We are fine with disabling power management entirely to meet this requirement.
    Could you please suggest what we would need to do in that case to avoid the jitter on the ARM core?
    We want to know how we can implement the zero-latency interrupts and plug in our interrupt function.

    We would want to implement both and try them.

    Thanks
    Akhilesh

  • Hi,

    There are a number of training modules available for the Sensor Controller:

    https://dev.ti.com/tirex/explore/node?node=AGwGDhhNIUqfFzveQcalCw__pTTHBmu__LATEST

    The tool used to write code for it, Sensor Controller Studio, also features many examples that you can test out.

    As for the ARM approach, there are several steps to getting it up and running, and I really do not recommend it over the Sensor Controller. In short, you would have to:

    * Setup a GPTimer instance in SysConfig

    * Disable SysConfig so that you can change the generated output without having it be overwritten again

    * In the output generated by SysConfig, change the interrupt priority of the GPTimer to 0.

    * Set power constraints on the system: Power_setConstraint(PowerCC26XX_DISALLOW_STANDBY) + Power_setConstraint(PowerCC26XX_DISALLOW_IDLE)

    * Perform some tests.
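    As a sketch of the power-constraint step (assuming the standard Power driver API from the SDK, called once during init):

    ```c
    /* Sketch: keep the CPU out of standby and idle so the timer interrupt
     * is never delayed by a wake-up transition. Note this costs power. */
    #include <ti/drivers/Power.h>
    #include <ti/drivers/power/PowerCC26XX.h>

    void disableLowPowerModes(void)
    {
        Power_setConstraint(PowerCC26XX_DISALLOW_STANDBY);
        Power_setConstraint(PowerCC26XX_DISALLOW_IDLE);
    }
    ```

    The matching Power_releaseConstraint() calls would be needed if you ever want to re-enable the low-power modes.
    
    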

    Now it is worth considering that the GPTimer is used by many other drivers, such as PWM and ADCBuf. If you have any dependencies on these, making it "zero latency" is not a good idea, as those drivers expect to call RTOS-related functions.

    All in all, the Sensor Controller is the more sensible solution, as it allows for better power consumption and is more deterministic (it has no interrupts that can interfere, etc.). It also avoids having to break outside the RTOS constraints (potentially impacting other parts of the application).

  • Hi M-W,

    Thanks for the advice, we have got the Sensor Controller Studio working and it meets our requirements for now.

    However, we could not get the Timer to operate as zero latency; this suggestion
    * Change the interrupt priority of the GPTimer in the output generated by SysConfig prior to disabling it to 0.
    does not work.
    We tried a few things just to understand how to implement zero-latency interrupts. However, even if we plug the interrupt vector directly, the interrupt keeps hitting every 5 us irrespective of the compare value (the original problem in this post).
    Could you please share an example of how to implement a zero-latency interrupt? Even a GPIO pin case would be fine.

    Thanks
    Akhilesh

  • Hi Akhilesh,

    Could you share the code (project) you use to test it so that I can look over what might be going wrong? If you do, I can run it myself and know I am seeing exactly what you see.

  • Hi M-W,

    I was able to get it working and it seems to be generating an interrupt. As you mentioned, I have observed a small delay.
    The issue was that I was clearing only the hardware interrupt (Hwi_clearInterrupt) and not the timer interrupt (TimerIntClear), so the interrupt kept hitting.
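    A minimal sketch of such a handler (GPT0_BASE and TIMER_TIMA_TIMEOUT are example driverlib values for GPTimer0A, not necessarily the ones in the attached project):

    ```c
    /* Sketch: zero-latency timer ISR. Since the driver is bypassed, the
     * peripheral interrupt flag must be cleared here, otherwise the
     * interrupt retriggers immediately on exit. */
    #include <ti/devices/cc13x2_cc26x2/driverlib/timer.h>
    #include <ti/devices/cc13x2_cc26x2/inc/hw_memmap.h>

    void timerZeroLatencyIsr(void)
    {
        /* Clear the flag at the source (GPTimer0A timeout, as an example). */
        TimerIntClear(GPT0_BASE, TIMER_TIMA_TIMEOUT);

        /* Time-critical work here, e.g. set the output pin high. */
    }
    ```
    
    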

    However, the interrupt shows as "Unmanaged" in the vector table rather than "Zero Latency" (I have seen the latter once before while using the M4 Timer library directly). The priority set in Hwi_Params also doesn't seem to show up in the vector table.
    All dispatched interrupts still show as dispatched.

    I have attached the code, please could you confirm if this is the right way to do it.

    Thanks
    Akhilesh

    /cfs-file/__key/communityserver-discussions-components-files/158/5047.timerled_5F00_zerolatency.zip

  • Hi Akhilesh,

    You seem to be on the correct track, but note that when using the interrupt like this, you are responsible for clearing the peripheral interrupt flag. This is normally handled by the driver, but since you bypass it, you need to do it yourself :)

    The "Unmanaged" status in the vector table is correct. As given in the documentation, the "Zero Latency" title in TI-RTOS is given to ISRs that are plugged and not managed by the Hwi dispatcher. As you can see in ROV (I guess that is what you used), the vector table is set up to vector to your ISR function directly.

    Also, the Hwi priority is simply related to the Hwi module. Setting it to "0" is simply a way to tell it "do not manage this, just plug it". I would, however, expect it to set the preemptPriority of the interrupt in question to 0 (that is what I see myself when running your example).