The Fundamental Limits of Computation

All transistors, switches, logic devices or memories must obey the laws of physics!

There are a number of fundamental limits that exist for any transistor. These are:

The Landauer switching limit for a reversible, infinite-time process

Energy to switch 1 bit > k_B T ln(2)

This is similar in principle to the Carnot engine in thermodynamics: it sets the absolute limit, but is not in itself a very useful energy. If you have to wait the age of the universe to switch 1 bit, your calculation will take a little too long to be useful to most people! You therefore need to consider the practical limit for a finite-time processor that people would actually buy to do computation, such as a present-day microprocessor in a computer. This limit is much higher, as it is set by the signal-to-noise ratio required to hold a single bit reliably in the 1 or 0 state: the faster the processor runs, the larger the energy required to maintain the bit in its predefined state. You can spend a long time arguing about a sensible value, but something like the following is not unreasonable (a worked numerical example follows the limits below):

The Landauer switching limit at finite (GHz) clock speed:

Energy to switch 1 bit > 100 k_B T ln(2)

The Heisenberg Uncertainty Principle:

Power > h / (switching time)²
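As a rough illustration, the sketch below (in Python) evaluates these three limits numerically. The temperature of 300 K and the 1 ns switching time are assumed, representative values for this illustration, not figures taken from the text above.

```python
# Minimal sketch: evaluate the three limits quoted above at room temperature
# and a GHz-class switching time. T = 300 K and t_switch = 1 ns are assumed,
# illustrative values only.

import math

K_B = 1.380649e-23          # Boltzmann constant, J/K
H = 6.62607015e-34          # Planck constant, J*s
E_CHARGE = 1.602176634e-19  # elementary charge, J per eV

T = 300.0        # assumed operating temperature, K
t_switch = 1e-9  # assumed switching time for a ~1 GHz clock, s

# Landauer limit for a reversible, infinite-time switch
e_landauer = K_B * T * math.log(2)

# Practical finite-time limit (~100x, as argued above)
e_practical = 100 * e_landauer

# Heisenberg-style bound: Power > h / (switching time)^2
p_heisenberg = H / t_switch**2

print(f"Landauer limit:       {e_landauer:.3e} J  ({e_landauer / E_CHARGE * 1e3:.1f} meV)")
print(f"100x practical limit: {e_practical:.3e} J  ({e_practical / E_CHARGE:.2f} eV)")
print(f"Heisenberg power:     {p_heisenberg:.3e} W at t = {t_switch:.0e} s")
```

At room temperature the Landauer limit works out at roughly 18 meV per bit, and the finite-time (100x) limit at roughly 1.8 eV per bit.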

[Figure: power dissipation versus switching delay time per transistor for a range of logic and memory technologies]

The above figure plots the delay time per transistor (using C·V_DD/I_on) versus the power dissipation (I_on·V_DD for the transistors) using a CMOS inverter, an n-MOS inverter or a p-MOS inverter as appropriate for the technology. To provide a fair comparison, the noise margin required to transmit 1 bit of information down a 1 mm long level-7 copper interconnect from a CMOS processor has been assumed for the logic, so that the correct scaling of gate width is accounted for. All devices use EXTRINSIC I-V characteristics from the literature, since for CMOS most of the performance limitations are related to contact resistivity, access resistance, parasitic RC time constants etc. All comparisons are therefore fair for making circuits from the technology base.
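To make the two axes of the figure concrete, here is a minimal sketch of the delay and power expressions defined above. The capacitance, supply voltage and on-current values below are purely illustrative placeholders, not data from the actual plot.

```python
# Hedged sketch of the figure's two axes, using the definitions in the text:
# delay ~ C*V_DD/I_on and power ~ I_on*V_DD. Device values are assumed.

C_load = 1e-15  # assumed load capacitance per gate, F (1 fF)
V_DD = 1.0      # assumed supply voltage, V
I_on = 1e-4     # assumed extrinsic on-current, A (100 uA)

delay = C_load * V_DD / I_on  # switching delay per transistor, s
power = I_on * V_DD           # power dissipation while switching, W

print(f"delay = {delay * 1e12:.1f} ps, power = {power * 1e6:.0f} uW")
```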

There are a number of important results when technologies are plotted in this way. You can always run a particular technology a little faster, but at the cost of higher power dissipation. The reverse is also true, which is why many laptop microprocessors are run at a reduced clock rate when operating on battery power. The HBT technologies of SiGe and InP are significantly faster than CMOS, but at the cost of higher power dissipation. DRAM requires refreshing, while MRAM is non-volatile, which has significant advantages for many applications. Most of the carbon nanotube (CNT) and polymer FET technologies are significantly limited by poor contacts: many groups publish only intrinsic properties, which compare favourably with other technologies, but the extrinsic properties are far poorer.
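The speed/power trade-off described above can be illustrated with the standard CMOS dynamic power relation, P_dyn = α·C·V_DD²·f. This relation and all the numbers below are an added illustration under assumed values, not taken from the figure or text.

```python
# Illustrative sketch of the speed/power trade-off, using the standard CMOS
# dynamic power relation P_dyn = a * C * V_DD^2 * f (not given explicitly in
# the text above). All parameter values are assumed, representative numbers.

def dynamic_power(activity, c_switched, v_dd, f_clock):
    """Dynamic power of a CMOS circuit: activity * C * V_DD^2 * f."""
    return activity * c_switched * v_dd**2 * f_clock

# Mains operation: full clock rate and supply voltage (assumed values)
p_full = dynamic_power(activity=0.1, c_switched=1e-9, v_dd=1.0, f_clock=3e9)

# Battery operation: reduced clock rate, plus the lower V_DD that the slower
# clock permits -- the laptop scenario mentioned above
p_saver = dynamic_power(activity=0.1, c_switched=1e-9, v_dd=0.8, f_clock=1.5e9)

print(f"full speed: {p_full:.2f} W, battery mode: {p_saver:.2f} W")
```

Halving the clock alone halves the dynamic power; the additional supply voltage reduction it permits cuts the power further, since power scales with V_DD squared.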

For a deeper understanding of these fundamental limits to computation, the reader is referred to the excellent text by Richard Feynman:

Feynman Lectures on Computation
R.P. Feynman
Penguin (ISBN 0-14-028451-6) (1996)