This "Question of the Week" was originally posted over a year ago on March 24, 2010. The discussion on the topic has been active off and on al the way up until today still and has generated over 170 responses and comments to date across multiple communities, all of which can be read at Embedded Insights. Please join in on the discussion and keep a look out for an upcoming article that will further address the different sides of the issue.

Back when I was deep into building embedded control systems (and the snow was always 20 feet deep and the walk to and from school was uphill both ways), the use of dynamic memory allocation was forbidden. In fact, using compiler library calls was also forbidden in many of the systems I worked on. If we needed to use a library call, we rewrote it so that we knew exactly what it did and how. Those systems were highly dependent on predictable, deterministic real-time behavior that had to run reliably for long periods without a hiccup of any kind. Resetting the system was not an option, and often the system had to keep working correctly in spite of errors and failures for as long as it could; in many cases lives could be on the line. These systems were extremely resource constrained, both in memory and in processing duty-cycle time, and we manually planned out all of the memory usage.
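For readers who never worked under those constraints, the sketch below shows the kind of statically allocated fixed-block pool that often stood in for malloc()/free() when deterministic timing and a bounded worst-case footprint were required. The block size, block count, and function names (msg_pool_init, msg_pool_alloc, msg_pool_free) are illustrative assumptions of mine, not details from the original discussion.

```c
/*
 * Minimal sketch of a statically allocated fixed-block pool.
 * All memory is reserved at build time, allocation and release are O(1),
 * and fragmentation cannot occur. Names and sizes are hypothetical.
 */
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE   32u   /* bytes per block, chosen at design time      */
#define BLOCK_COUNT  16u   /* worst-case number of blocks ever needed     */

static uint8_t  pool[BLOCK_COUNT][BLOCK_SIZE];  /* memory reserved up front */
static uint8_t *free_list[BLOCK_COUNT];         /* stack of free blocks     */
static size_t   free_top;                       /* number of free blocks    */

/* Call once at startup: every block begins on the free list. */
void msg_pool_init(void)
{
    for (size_t i = 0; i < BLOCK_COUNT; ++i) {
        free_list[i] = pool[i];
    }
    free_top = BLOCK_COUNT;
}

/* Pop a block in constant time, or return NULL when the budget is exhausted. */
void *msg_pool_alloc(void)
{
    if (free_top == 0u) {
        return NULL;            /* caller must handle exhaustion explicitly */
    }
    return free_list[--free_top];
}

/* Push a block back onto the free list in constant time. */
void msg_pool_free(void *block)
{
    if (block != NULL && free_top < BLOCK_COUNT) {
        free_list[free_top++] = (uint8_t *)block;
    }
}
```

The point of such a scheme is that the worst case is knowable: every allocation succeeds or fails in bounded time, and the total memory budget is visible in the map file rather than discovered at runtime.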

That was then, this is now. Today's compilers are much better than they were then. Today's processors include enormous amounts of memory and peripherals compared to the processors of that era. Processor clock rates allow far more processing per period, so there is room to spare a few cycles on "inefficient" tasks. Additionally, some of what were application-level functions back then are now low-level, abstracted function calls. Today's tools are more aware of memory leaks and are better at detecting such anomalies. But are they good enough for low-level or deeply embedded tasks?

Do today's compilers generate good enough code, and do today's "resource rich" microcontrollers provide enough headroom, to make the static versus dynamic memory allocation question a non-issue for your application space? I believe there will always be some classes of applications where using dynamic allocation, no matter how rich the resources, is a poor choice. So in addition to answering whether you use or allow dynamic memory allocation in your embedded designs, please share what types of applications your answer applies to.

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.

Anonymous