I read this article,
and wondered what the TI CCS Compiler folks thought about this type of technology?
That's neither new nor groundbreaking.
Actually, the 'rules' cited there are already part of most optimizing compilers, especially in the microcontroller sector.
Nevertheless, even the best compiler cannot keep the programmer from acting as if he had a multi-gigahertz, multi-gigabyte system. So if the code you put into your ISRs is big and bloated, even the best compiler cannot do anything about it.
Example: using double variables where an integer would be more than sufficient, or using division (worst of all: double-precision division) where a simple integer shift would do, renders all efforts of the compiler useless (it's not the job of the compiler to decide which data types you use or what you do with them). See the sketch below.
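A minimal sketch of that point in plain C (the scaling task and variable names are hypothetical): both functions halve a raw ADC reading, but only the data types and operators differ, and on a core without an FPU the difference in code size and cycles is enormous.

```c
#include <stdint.h>

volatile uint16_t adc_raw;      /* assumed to be filled elsewhere, e.g. by an ADC ISR */

/* Floating-point version: pulls in the software double-divide support
 * routines on an FPU-less MCU, costing hundreds of cycles and a large
 * chunk of flash for a job that never needed fractions. */
double scale_slow(void)
{
    return (double)adc_raw / 2.0;
}

/* Integer version: compiles to a single shift instruction. The compiler
 * cannot make this choice of data type for you. */
uint16_t scale_fast(void)
{
    return adc_raw >> 1;
}
```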
Or if you call subroutines from within your ISR, especially from a library or a different code module, there's nothing the compiler can do but save all registers to the stack (see the sketch after this paragraph).
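A minimal sketch, assuming an MSP430 target built with CCS (the vector name, msp430.h, and the timer peripheral are device-specific assumptions). It contrasts an ISR that calls out into library code with one that only sets a flag and defers the expensive work to the main loop, which keeps the ISR's register save/restore minimal.

```c
#include <msp430.h>
#include <stdio.h>

volatile unsigned char sample_ready = 0;

#pragma vector = TIMER0_A0_VECTOR
__interrupt void timer_isr(void)
{
    /* Bad: a call into a library module forces the compiler to assume the
     * caller-saved registers are clobbered and to push them on ISR entry:
     * printf("tick\r\n");
     */

    /* Better: just record the event and leave; main() does the heavy lifting. */
    sample_ready = 1;
}

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;          /* stop the watchdog */
    /* ... timer and interrupt setup omitted ... */
    for (;;) {
        if (sample_ready) {
            sample_ready = 0;
            printf("tick\r\n");        /* expensive work done outside the ISR */
        }
    }
}
```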
A look at the code generated by IAR (or mspgcc and, sometimes to a lesser extent, CCS) will reveal that not only is there almost no room left for manual optimization, but the compiler has often done optimizations you'd never have come across yourself.
Anyway, knowing what you're doing (and what better not to do) is often worth more than any fancy compiler tweak. And not knowing it can do far more damage than any compiler can fix (unless someone releases an AI-driven compiler that completely rewrites your program, or even writes it for you).