
CCS/TMS320F28375D: Difference between main and interrupt processing time.

Part Number: TMS320F28375D

Tool/software: Code Composer Studio

What is the difference in processing time between the main routine and interrupt processing?

[environment]
・ TMS320F28375D
・ CCS8.1.0.00011

[Current status]
InitSysCtrl() is unchanged, so the CPU should be running at 200 MHz.
The following code is executed in the PWM interrupt service routine (ISR):

 GpioDataRegs.GPBSET.bit.GPIO58 = 1;
 i++;
 i++;
 i++;
 GpioDataRegs.GPBCLEAR.bit.GPIO58 = 1;

When I measure the above GPIO pulse with an oscilloscope, it takes 90 ns.
If I copy the same code into the main routine, the measured time changes to 70 ns.

[Question]
1. Why does the processing time differ between the main routine and the interrupt handler?
2. Why is the measured time not the expected 15 ns?

Thanks and regards.

  • Technically, there is no difference.  You should see the same timings whether the code is in a background loop or in an ISR.  Most likely the timing difference comes from the surrounding code, or from the code being located in memory that is configured differently (for example, wait-stated flash versus zero-wait-state RAM).
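
    For reference, here is a minimal sketch of placing a function in zero-wait-state RAM with the C2000 compiler. It assumes the standard F2837xD project headers and the RamfuncsLoadStart/RamfuncsLoadSize/RamfuncsRunStart symbols from the usual linker command files; the function names are illustrative only, not your actual code.

        #include <string.h>          // memcpy
        #include "F28x_Project.h"    // standard F2837xD device headers (assumed)

        // Linker-generated symbols for the RAM-function section.
        extern Uint16 RamfuncsLoadStart, RamfuncsLoadSize, RamfuncsRunStart;

        // Place the ISR in the .TI.ramfunc section so it runs from RAM.
        #pragma CODE_SECTION(epwm1_isr, ".TI.ramfunc")
        __interrupt void epwm1_isr(void)
        {
            // ... timing-critical work here ...
            EPwm1Regs.ETCLR.bit.INT = 1;             // clear the ePWM event flag
            PieCtrlRegs.PIEACK.all = PIEACK_GROUP3;  // acknowledge PIE group 3
        }

        void copy_ramfuncs_to_ram(void)
        {
            // Copy the section from its flash load address to its RAM run
            // address before enabling interrupts.
            memcpy(&RamfuncsRunStart, &RamfuncsLoadStart,
                   (size_t)&RamfuncsLoadSize);
        }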

    I can look into this further, but will need the source files so I can see what you are doing.  Feel free to attach them to a reply to this post.

    Regards,

    Richard

  • The confidential parts have been deleted from the attached files.
    Almost no actual processing was removed; most of the deletions are comments.

      Tested with main: Line 210-217
      Tested with PWM: Line 31-38

    The PWM target function is placed in RAM and copied by InitSysCtrl(), which is unchanged.
    The MAP file is also attached.

    Thank you and regards.

    20191001_src.zip

  • To get reliable measurements, I recommend you have compiler optimization set to off and "i" declared as volatile.  Under these conditions I would expect 18 cycles for this stub of code in both cases. The compiler will use direct addressing for the I/O pin register and for "i", so DP will have to be loaded 3 times. Two more cycles for the I/O register writes, then the increments.
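
    For example, the stub could look like this (a sketch; "timing_stub" is an illustrative name, and optimization is assumed off, e.g. --opt_level=off):

        #include "F28x_Project.h"    // standard F2837xD device headers (assumed)

        // volatile forces the compiler to keep every read-modify-write of "i"
        // in data memory instead of optimizing the increments away.
        volatile Uint16 i = 0;

        void timing_stub(void)
        {
            GpioDataRegs.GPBSET.bit.GPIO58 = 1;    // pin high: start of pulse
            i++;
            i++;
            i++;
            GpioDataRegs.GPBCLEAR.bit.GPIO58 = 1;  // pin low: end of pulse
        }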

    The compiler will generate an INC instruction for each "i++". This is an atomic instruction which performs a read-modify-write to the data memory location in a single pipeline cycle. Consecutive updates to the same memory location will induce hardware pipeline stalls so the previous write happens before the next read. Therefore, expect 4 CPU cycles between consecutive writes to "i" - you can check this by adding more "i++" writes and measuring the difference.
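
    One way to check this is to bracket the increments with CPU timer reads. A sketch, assuming CPU Timer0 free-runs from SYSCLK with a divide-by-1 prescaler (e.g. set up with InitCpuTimers()/ConfigCpuTimer()); "count_inc_cycles" is an illustrative name:

        #include "F28x_Project.h"    // standard F2837xD device headers (assumed)

        volatile Uint16 i = 0;

        Uint32 count_inc_cycles(void)
        {
            Uint32 t0, t1;

            t0 = CpuTimer0Regs.TIM.all;   // TIM counts down at SYSCLK
            i++;
            i++;
            i++;
            i++;
            t1 = CpuTimer0Regs.TIM.all;

            // The difference includes the two timer reads themselves; adding
            // one more i++ should grow the result by exactly 4 cycles.
            return t0 - t1;
        }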

    Below is the interleaved C/assembly code I see (I'm using GPIO56 for this test). The numbers in square brackets are the cycle counts from the first instruction:

    GpioDataRegs.GPBSET.bit.GPIO56 = 1;
        MOVW DP, #0x1fc       [1]
        OR   @0xb, #0x0100    [2]

    i++;
        MOVW DP, #0x2a7       [3]
        INC  @0x18            [4]

    i++;
        INC  @0x18            [8]

    i++;
        INC  @0x18            [12]

    i++;
        INC  @0x18            [16]

    GpioDataRegs.GPBCLEAR.bit.GPIO56 = 1;
        MOVW DP, #0x1fc       [17]
        OR   @0xd, #0x0100    [18]

    At 200 MHz one cycle is 5 ns, so 18 cycles corresponds to 90 ns. Why you measure less than that in the main routine I can't say. Check that the optimization setting is the same in both files. If that's not it, can you send me the disassembly for both stubs please?  A screen capture of the disassembly window will be fine.

    Regards,

    Richard

  • Thank you for answering.

    The 90 ns result makes sense; the 70 ns result is likely an environment issue, including the optimization setting (or simply a measurement mistake).
    I will review and recheck the development environment settings.

    Thanks, Richard.