Is Computing in Reverse the Next Big Thing?

As Moore’s Law runs out of steam and the fabrication of Boolean circuits on silicon approaches its limits, some computer scientists and physicists are looking beyond current computing to “reversible computing.” Conventional computation runs in one direction only, producing outputs from inputs, with no way to run backward and recover the inputs from the outputs. Reversible computing works both ways: inputs can be obtained from outputs by running the circuits in reverse. For example, a single-bit circuit that computes X AND Y is irreversible, because it is impossible to determine the inputs given an output of zero. Three input pairs produce that output: 0 AND 0, 0 AND 1, and 1 AND 0. Which pair was the actual input? This uncertainty is a form of information loss, or entropy, and the entropy escapes the circuit as heat. Irreversibility means entropy has increased.
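
A small sketch in Python (mine, not the article's) makes the loss concrete: enumerating the AND truth table shows three distinct input pairs collapsing onto the output 0, so no inverse function can exist.

    # Enumerate the truth table of single-bit AND, grouping inputs by output.
    from itertools import product

    preimages = {0: [], 1: []}
    for x, y in product((0, 1), repeat=2):
        preimages[x & y].append((x, y))

    print(preimages[1])   # [(1, 1)]                   -- output 1 pins down its inputs
    print(preimages[0])   # [(0, 0), (0, 1), (1, 0)]   -- three candidates, so AND cannot be inverted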

On the other hand, a reversible circuit designed to compute the same function, X AND Y, might use extra bits to record how to run the computation backward, so that the output plus the extra memory bits can be used to recover the original inputs. A circuit that can be run forward and backward without loss of information is considered reversible.
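
One standard way to arrange the extra bits, not spelled out in the article, is the well-known Toffoli (controlled-controlled-NOT) construction: both inputs pass through unchanged and the AND result is written into a spare ancilla bit. A rough Python sketch:

    # A reversible embedding of AND: the inputs pass through, and an extra
    # "ancilla" bit z records the result without destroying anything.
    def toffoli(x, y, z):
        return x, y, z ^ (x & y)

    x, y = 1, 1
    forward = toffoli(x, y, 0)        # (1, 1, 1); the third output is x AND y
    backward = toffoli(*forward)      # applying the gate again undoes it
    assert backward == (x, y, 0)      # the original inputs (and ancilla) are recovered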

One way to show a circuit is reversible is to test it as follows: submit every output as an input and check that it reproduces the corresponding input. For example, X AND Y produces 0 as an output, but submitting 0 to the X AND Y circuit cannot reproduce the inputs that generated it. For one thing, X AND Y maps two input bits to a single output bit. For another, the uncertainty over which of the three possible input pairs produced the 0 is a form of information loss, i.e., entropy. Reversibility therefore requires an equal number of inputs and outputs plus a one-to-one match between input patterns and output patterns.
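
That requirement suggests a simple brute-force check. The sketch below, which assumes a gate is given as a Python function on bit tuples, declares a gate reversible exactly when its truth table maps the 2^n input patterns one-to-one onto 2^n output patterns:

    from itertools import product

    def is_reversible(gate, n_bits):
        # A gate on n bits passes the test exactly when its 2**n outputs are all
        # distinct, i.e. inputs and outputs match one-to-one.
        outputs = {gate(*bits) for bits in product((0, 1), repeat=n_bits)}
        return len(outputs) == 2 ** n_bits

    cnot = lambda c, t: (c, t ^ c)        # controlled-NOT: 2 bits in, 2 bits out
    print(is_reversible(cnot, 2))         # True  -- every output pattern has one preimage

    padded_and = lambda x, y: (x & y, 0)  # AND padded to 2 outputs, but still lossy
    print(is_reversible(padded_and, 2))   # False -- three inputs collide on (0, 0)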

An information system is said to be “adiabatic” if it is a closed system, sealed off from any inflow or outflow of heat and therefore of thermodynamic information. A thermodynamic system, such as a digital circuit in a computer, is reversible if it is adiabatic and its bit operations do not increase entropy. In theory, an adiabatic machine that preserves information is reversible without violating the second law of thermodynamics.

Do such circuits exist? And if so, what good are they?

Almost No Energy

Such circuits not only exist; the future of computing may depend on them. Reversible computing is actually an old idea, but perhaps its time has come. Heat dissipation rises with processing speed, and the heat a circuit can shed is limited. Sooner or later computing will hit an energy wall unless circuits make better use of energy. To increase processing speeds indefinitely, engineers must decrease the energy consumed per operation indefinitely. Energy dissipation corresponds to thermodynamic entropy, so the goal is to reduce or eliminate thermodynamic entropy. Reversible circuits eliminate thermodynamic entropy and hold promise as the next big thing in computing.

Rolf Landauer argued in 1961 that logically irreversible operations on information in memory must dissipate energy: the lost thermodynamic information (entropy) is converted into heat, a form of energy. Landauer showed that erasing a bit dissipates at least kT ln(2) joules, where k is Boltzmann’s constant, 1.38E-23 joules per kelvin, and T is the absolute temperature in kelvin. In other words, every irreversible bit operation costs at least kT ln(2) joules, an extremely small amount of energy, but nonzero nonetheless. This places a theoretical upper bound on a binary computer of roughly 3E20 irreversible bit operations per joule of energy dissipated at room temperature.
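
The arithmetic behind these figures is easy to reproduce. A short calculation, assuming room temperature (300 K; the article does not fix a temperature), gives the bound quoted above:

    import math

    k = 1.38e-23              # Boltzmann's constant, joules per kelvin
    T = 300.0                 # assumed room temperature, kelvin

    erase_cost = k * T * math.log(2)      # minimum energy to erase one bit
    ops_per_joule = 1.0 / erase_cost      # maximum irreversible bit operations per joule

    print(f"{erase_cost:.2e} J per erased bit")          # about 2.9e-21 J
    print(f"{ops_per_joule:.2e} operations per joule")   # about 3.5e+20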

Contemporary computers consume roughly a million times more energy per bit operation than Landauer’s limit. Today’s computers still have room to improve, but sooner or later they will run up against the limit. Keeping Moore’s Law going will then require getting around Landauer’s principle. Enter reversible computing: no entropy, so no heat loss.

In 1973 Charles Bennett suggested a way around Landauer’s limit: eliminate thermodynamic information loss (entropy) during computation at the most basic level. He reasoned that if entropy generation could be eliminated from each bit operation, energy consumption could in theory be made arbitrarily small. He further noted that entropy increases when information is lost (irreversibility) but does not increase when information is conserved (reversibility). A reversible computer theoretically dissipates no energy because it conserves thermodynamic information.

Consider a simple adiabatic and frictionless pool table with two billiard balls: one at rest and the other traveling straight at the resting ball at 10 feet per second. This system is adiabatic if we prevent energy from entering or leaving, and if there is no heat loss during collisions. When the two balls collide head-on (at zero angle), all of the momentum of the moving ball is transferred to the stationary ball, which moves off at 10 feet per second while the first ball comes to rest.

This system is reversible because there is no change in entropy (no heat is generated) and it can be run backward: ball two strikes ball one and transfers all of its momentum back to it. If the two balls were connected by a thermodynamically perfect (lossless) elastic rope, they would oscillate forever, colliding and rebounding back and forth. With no friction to drain energy and no push to add it, this closed system runs forever.
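
A toy simulation (a hypothetical sketch, not something from the article) captures the point: for equal masses a head-on elastic collision simply exchanges velocities, and negating the velocities and applying the same rule runs the system backward to its starting state.

    def collide(v1, v2):
        # Head-on elastic collision of equal-mass balls: they exchange velocities.
        return v2, v1

    v1, v2 = 10.0, 0.0            # ball 1 at 10 ft/s, ball 2 at rest
    v1, v2 = collide(v1, v2)      # now (0.0, 10.0): ball 2 carries all the momentum

    v1, v2 = collide(-v1, -v2)    # negate velocities and replay: time runs backward
    print(v1, v2)                 # -10.0 -0.0 : the starting state, mirrored in time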

The billiard ball system described here is highly idealized. Even so, it is not a perpetual motion machine, because it neither increases nor decreases entropy; energy-wise, it breaks even. Similarly, the reversible circuits in a reversible computer are not perpetual motion machines, yet they conserve entropy. The trick is to carry out the Boolean operations required to build a digital computer without increasing entropy, i.e., without loss (or gain) of information. One way to do this is to guarantee that every bit-processing step is reversible.

Reversible Gates

Edward Fredkin took this idea a giant step further with the invention of the reversible Fredkin gate, a universal logic gate with three inputs and three outputs. A handful of Fredkin gates can serve as building blocks for all possible Boolean logic circuits. The gate is reversible because feeding its outputs into a second Fredkin gate reproduces the original inputs. Multiple Fredkin gates can be combined to construct reversible AND, OR, XOR, and the other logic gates needed to build an entire computer.

The Fredkin gate is simple. One input is a control bit, which passes through unchanged. When the control is zero, the other two inputs are copied straight to the outputs. When it is one, they are swapped: input one goes to output two, and input two goes to output one. (For more details on Fredkin gates see https://en.wikipedia.org/wiki/Fredkin_gate.)
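
To make the description concrete, here is a small Python sketch of the controlled swap just described, with a check that the gate undoes itself and one standard construction (not detailed in the article) that obtains AND by holding the third input at 0:

    from itertools import product

    def fredkin(c, a, b):
        # Controlled swap: the control bit c passes through; a and b swap when c is 1.
        return (c, b, a) if c else (c, a, b)

    # Reversible: applying the gate to its own output restores every input pattern.
    for bits in product((0, 1), repeat=3):
        assert fredkin(*fredkin(*bits)) == bits

    # AND from a Fredkin gate: with the third input held at 0, the third output
    # is x AND y (the other outputs carry the leftover information).
    for x, y in product((0, 1), repeat=2):
        assert fredkin(x, y, 0)[2] == (x & y)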

Others have studied reversibility as well, notably Feynman, Toffoli, and Fredkin. They agree that reversible gates share the following properties:

  • the number of inputs must be equal to the number of outputs
  • there must be a unique output pattern for each input pattern
  • each output is used only once (no fan-out is allowed)
  • the resulting circuit must be acyclic (no feedback loops).

A Fredkin gate does not increase entropy, which means it consumes no energy! This seems impossible, but in 2016 researchers at Griffith University and the University of Queensland announced they had built a quantum Fredkin gate that uses quantum entanglement of particles of light to swap qubits. Michael Frank of the University of Florida has constructed adiabatic MEMS circuits using traditional silicon (see the readings below). Because entropy is conserved, no energy is consumed, and the computation is adiabatic. Reversible computing is real, and real systems are beginning to appear.

Note that not all computation is reversible with only Fredkin gates. Fredkin showed that Boolean logic circuits can be made reversible, but what about other calculations? Consider floating-point arithmetic with its built-in round-off error, as specified by IEEE Standard 754. The standard defines five different rounding algorithms, and they all discard information. Rounding increases entropy whenever information is lost, so extra state must be kept to account for it. This usually means keeping extra bits that can be used to reconstruct the input from the output. Hence, a reversible IEEE floating-point circuit will look much different from the current one. This problem is left as an exercise for the reader (see Talakal et al.).
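
As a purely illustrative sketch of the extra-bits idea (not the IEEE 754 circuit itself; the function names are hypothetical), one can pair a rounded value with the remainder that rounding would normally throw away, so the original operand remains recoverable:

    def round_with_remainder(x, fraction_bits):
        # Truncate x to a fixed number of fractional bits, but keep the discarded
        # remainder so the operation can be undone.
        scale = 2 ** fraction_bits
        kept = int(x * scale)                  # the "rounded" fixed-point value
        lost = x * scale - kept                # the bits rounding would normally destroy
        return kept / scale, lost / scale

    def unround(rounded, remainder):
        return rounded + remainder

    rounded, remainder = round_with_remainder(3.14159, 8)
    print(rounded)                                    # 3.140625 (nearest 1/256 toward zero)
    print(unround(rounded, remainder) == 3.14159)     # True for this value: the operand is recoverable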

The Next Big Thing

To fulfill Moore’s prophecy of doubling computing speeds every two years, engineers must make better use of energy, especially as we near Landauer’s limit on irreversible computing. That limit applies wherever entropy increases, which happens whenever information is lost. Information loss, in other words, means energy dissipation. But if circuits are constructed to avoid information loss, they can also avoid that energy cost. It follows that reversible circuits might be one way to squeeze more computations per joule out of hardware, rather than pumping ever more energy into irreversible circuits to make them run faster.

At the same time, building a closed system that achieves adiabatic conditions may remain a challenge, because of energy leakage to exterior components or simple energy-sapping resistance. Adiabatic reversibility means heat can neither enter nor leave the closed system. This may be difficult to achieve with electrons, but photons may offer a solution. Quantum circuits stand a better chance of containing thermodynamic information than electronic circuits.

Silicon Valley innovation is sporadic, bursty, and unpredictable, but sooner or later multi-core chips will have to give way to a new technology—a process called “technology jumping.” Jumping onto reversible computing will send hardware fabrication down a new path. When this happens, reversible computing will be the next big thing.

Further Readings

Richard Feynman. Lectures on Computation. Westview Press, 2000 (revised from lectures originally given in 1984–86).

C. H. Bennett. “Logical reversibility of computation.” IBM Journal of Research and Development 17, 6 (1973), 525–532.

Edward Fredkin and Tommaso Toffoli. “Conservative Logic.” International Journal of Theoretical Physics 21, 3–4 (1982), 219–253. doi:10.1007/BF01857727.

Michael P. Frank. “Common mistakes in adiabatic logic design and how to avoid them.” Presented at MLPD ’03, Workshop on Methodologies in Low-Power Design. In H. R. Arabnia and Laurence T. Yang, eds., Proceedings of the International Conference on Embedded Systems and Applications, ESA ’03 (June 23–26, Las Vegas, Nevada). CSREA Press, 2003, 216–222.

Raj B. Patel, Joseph Ho, Franck Ferreyrol, Timothy C. Ralph, and Geoff J. Pryde. “A quantum Fredkin gate.” Science Advances 2, 3 (2016), e1501531. doi:10.1126/sciadv.1501531.

Peter Denning and Ted G. Lewis. “Exponential laws of computing growth.” Communications of the ACM 60, 1 (January 2017).

M. Talakal, C. Jyothi, K. N. Muralidhara, and M. Z. Kurian. “Design and implementation of REA for single precision floating point multiplier using reversible logic.” International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering 3, 3 (March 2014).