
AM2634: High interrupt latency when using FreeRTOS

Part Number: AM2634


Hello experts,
When using FreeRTOS, the RTOS's SVC exception (triggered by portYIELD for task switching) masks IRQs, so the PWM interrupt is not serviced correctly or in a timely manner. The result is high control-interrupt latency, which is unacceptable for inverter control.
1. Is there any way to configure the Cortex-R5's SVC priority to be lower than IRQ, to guarantee low PWM interrupt latency?
Or can one of the IRQ interrupts be used for task switching instead of SVC?
2. Where can I find details about the Interrupt Control register at 0xE000ED04?
Thank you!

  • The first picture is wrong. It should be this picture

  • I ran into the same problem earlier. This is very important for inverter control.

    I suggest two ways to solve this problem: 1. Enable FIQ so that interrupt nesting is supported and IRQ handlers can be preempted by a high-priority FIQ (a rough sketch of this option follows below). 2. Make the SVC handler nestable so that it can be interrupted by IRQ and FIQ.

    e2e.ti.com/.../am2434-r5f-interrupt-handling-interrupt-nesting
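
    For option 1, a rough sketch of what I mean on the MCU+ SDK is below. The interrupt number and the HwiP_Params field isFIQ are assumptions on my side, so please check them against your SDK version and the device TRM:

    /* Sketch only: route the time-critical ePWM interrupt to FIQ so that it can
     * preempt IRQ handlers. APP_EPWM_INT_NUM is a placeholder -- use the ePWM
     * interrupt number from your device's interrupt map -- and the isFIQ field
     * is assumed to exist in this SDK's R5F HwiP layer. */
    #include <kernel/dpl/HwiP.h>

    #define APP_EPWM_INT_NUM    (123U)   /* placeholder interrupt number */

    static HwiP_Object gEpwmHwiObj;

    static void App_epwmFiqIsr(void *args)
    {
        (void)args;
        /* Keep this short: clear the ePWM event flag and run the control loop. */
    }

    void App_epwmFiqInit(void)
    {
        HwiP_Params hwiPrms;

        HwiP_Params_init(&hwiPrms);
        hwiPrms.intNum   = APP_EPWM_INT_NUM;
        hwiPrms.callback = &App_epwmFiqIsr;
        hwiPrms.isFIQ    = 1U;           /* assumed field: register as FIQ, not IRQ */
        (void)HwiP_construct(&gEpwmHwiObj, &hwiPrms);
    }

    One caveat: if I understand the FreeRTOS Cortex-R port correctly, its critical sections mask only IRQ, so an FIQ handler like this must not call any FreeRTOS APIs.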

  • Hi,

    Let me look into this and get back to you.

    Regards,
    Shaunak

  • Hello experts,
    I tried switching tasks without using SWI, and the tests showed that the latency was still significant. I also tested the PWM latency in the FreeRTOS demo without lwIP, and there the latency was less than 2 us.
    I kept looking for the cause, and I now suspect that the large PWM control latency comes from the CPDMA interrupts in CPSW: global interrupts are disabled in Cpsw_dmaTxIsr, Cpsw_dmaRxIsr, and elsewhere, which may delay the PWM interrupt, as shown in Figure 1. As a result, the motor control loop cannot operate correctly.
    In addition, I measured how often the high latency occurs and found that a high PWM latency appears at a fixed 60 ms interval, as shown in Figure 2. Is there any operation in the TCP demo that runs on a fixed 60 ms period?
    Hope to get your help!
    Thank you!

    Regards,
    yuancheng

  • Hi Yuancheng,
    We have recently found an issue with the CPSW statistics overflow (Misc) interrupt: it executes a large amount of code in the ISR routine and blocks other interrupts. It happens periodically, and the period depends on the data rate.

    Can you try disabling the CPSW statistics and rerun the tests?


    Please make the change below to disable the statistics module of CPSW.


    File: mcu_plus_sdk/source/networking/enet/core/src/mod/cpsw_stats.c:


    int32_t CpswStats_open(EnetMod_Handle hMod,
                           Enet_Type enetType,
                           uint32_t instId,
                           const void *cfg,
                           uint32_t cfgSize)
    {
    ...
    
        /* Enable statistics on all applicable ports */
    -   portStat.p0StatEnable = true;
    +   portStat.p0StatEnable = false;
        if (enetType == ENET_CPSW_9G)
        {
    -        portStat.p1StatEnable = true;
    -        portStat.p2StatEnable = true;
    -        portStat.p3StatEnable = true;
    -        portStat.p4StatEnable = true;
    -        portStat.p5StatEnable = true;
    -        portStat.p6StatEnable = true;
    -        portStat.p7StatEnable = true;
    -        portStat.p8StatEnable = true;
    +        portStat.p1StatEnable = false;
    +        portStat.p2StatEnable = false;
    +        portStat.p3StatEnable = false;
    +        portStat.p4StatEnable = false;
    +        portStat.p5StatEnable = false;
    +        portStat.p6StatEnable = false;
    +        portStat.p7StatEnable = false;
    +        portStat.p8StatEnable = false;
        }
        else if (enetType == ENET_CPSW_5G)
        {
    -        portStat.p1StatEnable = true;
    -        portStat.p2StatEnable = true;
    -        portStat.p3StatEnable = true;
    -        portStat.p4StatEnable = true;
    +        portStat.p1StatEnable = false;
    +        portStat.p2StatEnable = false;
    +        portStat.p3StatEnable = false;
    +        portStat.p4StatEnable = false;
        }
        else if (enetType == ENET_CPSW_3G)
        {
    -        portStat.p1StatEnable = true;
    -        portStat.p2StatEnable = true;
    +        portStat.p1StatEnable = false;
    +        portStat.p2StatEnable = false;
        }
        else
        {
    -        portStat.p1StatEnable = true;
    +        portStat.p1StatEnable = false;
        }
    
        CSL_CPSW_setPortStatsEnableReg(regs, &portStat);
    
        /* Clear all statistics counters */
        CpswStats_resetHostStats(hStats);


    With regards,
    Pradeep

  • Hi Pradeep~

    I changed the code according to the patch and recompiled the Ethernet library, but the test results were basically the same as before.
    My latest tests show that the high latency occurs every 60 ms. I uploaded data at different rates, and when the data rate doubles, the high latency occurs every 30 ms.
    In addition, my latest tests show that the high latency always occurs when calling netconn_write to send TCP data. Capturing packets with Wireshark (as shown in the following figure) confirms that the high latency coincides with TCP data transmission.

    Could it be that EnetOsal_disableAllIntr in Cpsw_dmaTxIsr delays the execution of the PWM interrupt? Or is there some other possible cause?
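
    To make the concern concrete, here is an illustrative sketch only (not the actual driver code); I am using the DPL calls HwiP_disable()/HwiP_restore() just for illustration, and the function name is made up:

    /* Illustrative only -- not the real Cpsw_dmaTxIsr. Any ISR that masks all
     * interrupts for its whole body adds its worst-case run time to the latency
     * of every other interrupt, including the ePWM control interrupt. */
    #include <kernel/dpl/HwiP.h>

    static void Example_dmaTxIsr(void *args)    /* hypothetical name */
    {
        uintptr_t key;

        (void)args;
        key = HwiP_disable();   /* global IRQs masked from here on...              */

        /* ...packet-completion processing: while this runs, a pending ePWM       */
        /* counter-zero interrupt cannot be taken, so its latency grows by the    */
        /* duration of this section...                                            */

        HwiP_restore(key);      /* ...only now can the ePWM interrupt be serviced. */
    }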
    Also, is there any way to communicate with you more directly so that we can resolve this quickly?
    Thank you!

    Regards,
    yuancheng

  • Hello, please have a look at my reply, thank you.

  • Hi,

    A few questions will help me understand the issue better.

    Q1. Can you share your Ethernet data rate (packet rate) for both the Rx and Tx tasks?

    Q2. We already have some tasks running in the lwip-if layer for polling, Rx, Tx, etc. Apart from these, the application will have some tasks for sending the data, and you might have created a task in the application for the PWM as well. Can you share the list of tasks created and their priorities?

    Q3. When you measure the PWM interrupt latency, where exactly is it measured? Is it from the PWM ISR context, or does the ISR trigger another task where you measure it?

    Q4. Have you created any critical sections, mutexes, or busy-wait loops in the application?

    A system block diagram would help me understand the architecture better and root-cause the exact issue that is increasing the PWM interrupt latency.

    At a very high level, this seems to be a problem caused by blocking in a critical section.

    Regards,
    Shaunak

  • Nice to meet you.
    1. The Tx situation can be roughly seen in the figure above; the average rate is 1 MB/s. The Rx rate is about 5 B/s, mainly heartbeat packets.
    2. I also created two tasks, Tx and Rx, for data transmission. Their priority is the same as that of AppTcp_ServerTask in the demo.
    3. The interrupt latency is obtained by reading the ePWM counter register on entry to the ePWM ISR, because I use the ePWM counter-zero event to trigger the interrupt (see the sketch below this list).
    4. I only send 1 MB/s of data in the Ethernet TCP server demo and enable a single PWM counter-zero interrupt; there are no other critical sections, mutexes, or busy-wait loops.
    Please look at my previous reply; I suspect that the Ethernet Tx path disables the CPU's global interrupts, which causes this problem.
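
    Here is a sketch of the measurement from point 3. The counter pointer is a placeholder that must be set to the TBCTR register address of the ePWM instance in use; since the ISR fires on counter zero, the value read at ISR entry is the latency in TBCLK ticks:

    /* Latency measurement sketch: gEpwmTbctrReg is a placeholder and must be
     * initialized to the address of the ePWM time-base counter (TBCTR) register
     * for the instance in use. */
    #include <stdint.h>

    static volatile const uint16_t *gEpwmTbctrReg;   /* set to the TBCTR address at init */
    static volatile uint16_t gMaxLatencyTicks = 0U;

    void App_epwmZeroIsr(void *args)
    {
        uint16_t ticks;

        (void)args;
        ticks = *gEpwmTbctrReg;              /* ticks elapsed since the zero event */
        if (ticks > gMaxLatencyTicks)
        {
            gMaxLatencyTicks = ticks;        /* track the worst-case latency */
        }

        /* ...clear the ePWM event flag and run the control loop... */
    }

    Dividing gMaxLatencyTicks by the TBCLK frequency (in MHz) gives the worst-case latency in microseconds.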

    Regards,
    yuancheng

  • I think the essence of this problem is the poor quality of the Ethernet library/interface, and changing the Ethernet library may require the Ethernet team to do it.
    Another solution I suggest is to follow ThreadX's approach and enable FIQ nesting on the CPU, so that highly time-critical work runs in the FIQ handler.