In my position I hear a great deal of discussion regarding the physical trade-offs between performance and power consumption.  "If you want to accelerate quickly in a car, you need power to overcome inertia."  I agree... but increasing the size of the power plant in a car isn't the only way to get it to accelerate faster.  Inertia is a function of mass (F = ma), so by decreasing the mass you can get faster acceleration from the same power plant.

This is a very common approach to improving either performance or fuel economy in today's sports cars, as well as in jets, boats and other vehicles.  The same principles apply to electronic systems.  Complementary Metal Oxide Semiconductor (CMOS) devices define modern digital and mixed-signal electronics, and power and performance are fundamentally linked in their very design.  For example, DRAM designs have capitalized on the relationship between supply voltage and power in CMOS processes to reduce the power consumed (see the equation below).

P ≈ α · C · V² · f

where α is the switching activity factor, C is the switched capacitive load, V is the supply voltage and f is the clock frequency.

This equation shows that the frequency and capacitive-load terms contribute linearly to the power consumption: cut the frequency in half and the power is also cut in half.  The supply voltage, however, is squared, so reducing it from 1.8 V (DDR2 memory) to 1.5 V (DDR3 memory) cuts the power consumption by roughly 30%, which is a major savings.
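To make that arithmetic concrete, here is a short Python sketch of the dynamic-power relationship.  The capacitance, frequency and activity-factor values are arbitrary placeholders I chose for illustration; only the voltage ratio matters for the DDR2-to-DDR3 comparison.

```python
def dynamic_power(c_load, v_supply, freq, alpha=1.0):
    """Dynamic switching power of a CMOS node: P = alpha * C * V^2 * f."""
    return alpha * c_load * v_supply ** 2 * freq

# Compare DDR2 (1.8 V) with DDR3 (1.5 V) at identical capacitance,
# frequency and activity factor (normalized to 1.0 for illustration).
p_ddr2 = dynamic_power(c_load=1.0, v_supply=1.8, freq=1.0)
p_ddr3 = dynamic_power(c_load=1.0, v_supply=1.5, freq=1.0)

savings = 1 - p_ddr3 / p_ddr2
print(f"Power reduction: {savings:.0%}")  # prints "Power reduction: 31%"
```

Because voltage enters as a square, a modest 0.3 V drop yields a disproportionately large saving, which is exactly why memory vendors chase lower supply rails generation after generation.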

As process geometries continue to shrink, the conduction channel gets shorter (good) and the gate insulator gets thinner (bad).  To reduce leakage (electrons that "tunnel" through the thin insulator), manufacturers have moved to lower supply voltages: reducing the voltage across the transistor also reduces the electric field between the gate and the conduction channel.  New materials such as nitrided hafnium silicates (HfSiON) are being used to replace silicon dioxide in an effort to suppress gate leakage from electron tunneling (Intel is already shipping processors using a hafnium-based high-k dielectric in its 45 nm process).

No matter how you slice the problem, a billion transistors each using a tiny amount of power still add up to a large amount of power being consumed.  Processors and digital systems require large numbers of transistors, and for the foreseeable future transistor density will only increase.  To increase performance, there must be another way...

My impression is that the industry can take several paths to increase performance while minimizing power consumption.  One path (currently the path of choice) is to continue shrinking process geometries to 20 nm or below, which becomes extremely hard to fabricate but allows more transistors on the same size die and sub-one-volt supply voltages.  Another avenue is to migrate away from silicon processes altogether and find another way to make transistors.  There is ongoing research into quantum well transistors made in indium antimonide, which may be the next step for higher-performance digital functions at extremely low power, perhaps one tenth of today's power consumption.  Given the large capital investment in integrated circuit fabrication technology, the least painful next step will need to be similar to silicon-based manufacturing.  Research is also being done on diamond-based semiconductors, as well as carbon nanotube technologies, to reduce power while improving performance.

But what about revolutionary change?  What if we abandon semiconductors altogether and move to computing and analog functions based on optical non-linear crystals?  Is this even possible at the scale at which we currently build processors, analog-to-digital converters, amplifiers and other electronic components?  Maybe our industry needs to take a step back and consider the new horizon in front of us...  a world where energy consumption is as much a factor as how fast we go... something to think about.  Till next time...

Anonymous