

Ubiquity symposium: What is the Right Computational Model for Continuous Scientific Problems?

Ubiquity, Volume 2011 Issue January, January 2011 | BY Joseph Traub 




Volume 2011, Number January (2011), Pages 1-4

What is computation? What is the right computational model for continuous scientific problems?
Joseph Traub
DOI: 10.1145/1922681.1925842

Editor's Introduction
In this tenth piece of the Ubiquity symposium discussing "What is computation?", Joseph Traub shares his views on using the Turing Machine model versus the real number model for solving continuous scientific problems.

Peter J. Denning

I propose to address the question of whether the Turing machine is the right computational model for continuous scientific problems. The Turing Machine model is an abstraction. We know that because the tape is potentially infinite and hence not physically realizable.

The real number model, which is used in scientific computations, is another abstraction. The crux of this model is that we can store real numbers and perform arithmetic operations and comparisons on them exactly and at unit cost. Specifically, the two assumptions of the real number model are:

  1. Operations on real numbers can be performed exactly.
  2. They can all be performed at unit cost.
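That Assumption 1 is an abstraction can be seen in a few lines of code: in the IEEE 754 floating point arithmetic actually used for scientific computation, every operation rounds its result. A minimal Python illustration (my own example, not the author's):

```python
# The real number model assumes operations on reals are exact.
# IEEE 754 double precision, used in practice, rounds every result
# to roughly 16 significant decimal digits.
a, b = 0.1, 0.2
total = a + b

print(total == 0.3)   # False: the literals and the sum are all rounded
print(total - 0.3)    # a tiny but nonzero rounding error
```

The model simply agrees to ignore this per-operation error so that complexity analysis remains tractable.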

I'll discuss these assumptions in turn. Assumption 1 is forced on us because if there were an error associated with each operation, the complexity analysis of the complicated problems of scientific computation would be impossible. The only viable strategy is to identify an optimal algorithm and then follow up with a rounding error analysis, but the follow-up advice is frequently honored in the breach.

The second assumption is not restrictive. Comparisons and the four arithmetic operations all cost about the same in floating point arithmetic on a modern digital computer. The assumption about unit cost is just scaling.

The real number model has a long history. Alexander Ostrowski used it in his work on the computational complexity of polynomial evaluation in 1954. I used it in optimal iteration theory in the early 1960s. Shmuel Winograd and Volker Strassen used the real number model in their work on algebraic complexity in the late 1960s. For these and many more examples, see [2]. Since we cannot store even a single real number on a digital computer, the real number model is clearly an abstraction. For more on the real number model see [2, 4, 5, 6].

So we have two abstractions. Which one should be chosen? That depends on how useful the abstraction is for the purpose at hand. When the purpose is solving continuous scientific problems, the real number model is the better choice. Most of the work on algorithms and computational complexity of continuous scientific problems uses the real number model. Among continuous scientific problems I include those occurring in the physical sciences, engineering, parts of computer science (such as vision, animation, and graphics), mathematical finance, and even parts of biology (Anton is a special-purpose supercomputer that works on biological problems such as protein folding, and computes continuous molecular forces [3]). Examples of the kinds of problems occurring in these fields include partial differential equations, integration in hundreds of variables, continuous optimization, nonlinear equations, and systems of ordinary differential equations. Typically, these problems cannot be solved exactly; they have to be approximated numerically.
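A standard, concrete instance of such numerical approximation is the composite trapezoid rule for a one-dimensional integral. The sketch below (the integrand and interval are my own illustrative choices) shows the typical pattern: the continuous problem is not solved exactly, but the approximation error can be driven down by doing more work.

```python
# Composite trapezoid rule: approximate the integral of f over [a, b]
# using n equal subintervals. For smooth f the error shrinks like
# O(1/n^2), so accuracy is bought with arithmetic operations.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# The integral of x^2 over [0, 1] is exactly 1/3; with n = 1000 the
# approximation agrees to about seven digits.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```

Counting such arithmetic operations at unit cost is exactly what the real number model prescribes.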

Here are three reasons for the use of the real number model for continuous scientific problems:

  1. Most scientific computation uses fixed-precision floating point arithmetic. Analysis using the real number model is predictive of running times for scientific computation. Erich Novak and Henryk Wozniakowski proved this must be the case, modulo a couple of fairly nonrestrictive assumptions; see [2].
  2. The real number model permits the full power of continuous mathematics. The argument for using the power of analysis was already made in 1948 by John von Neumann, one of the leading mathematical physicists of the century, and a father of the digital computer. In his Hixon Symposium lecture, von Neumann argues for a "more specifically analytical theory of automata and information." See, for example, [5].
  3. The mathematical models of science are often continuous, and use real (and complex) numbers. That is, scientists assume the continuum. Therefore it seems natural to use the real numbers in analyzing the numerical solution of continuous models. (I want to emphasize that the goal is to obtain an optimal algorithm and obtain the computational complexity of computing an approximate solution to the continuous mathematical model proposed by the scientist.)

Some of my colleagues in theoretical computer science used to say to me that the Turing Machine model was needed for deep questions such as the complexity hierarchy and NP-completeness theory. Then Lenore Blum, Michael Shub, and Steve Smale [1] formalized the real number model, and showed one can ask the same deep questions as with the Turing Machine model. I don't regard the formalization as very significant; as I said earlier, the real number model has been widely used for decades. Their major achievement was showing the real numbers had the same deep structural questions as were encountered with the Turing machine model. This deprived the Turing Machine model of its unique status.

I've given three reasons in favor of using the real number model for continuous scientific problems. I'll now give two reasons against using the Turing Machine model for scientific computations.

  1. Estimated running times on a Turing Machine are not predictive of the running times of scientific computations on digital computers. As I've said before, scientific computation is usually done with floating point arithmetic, so the cost of an arithmetic operation is essentially independent of the size of the operands. On the other hand, the cost of Turing Machine operations depends on operand size.
  2. It's often claimed that all models of computation are polynomially equivalent to the Turing Machine model. But consider a random access machine in which memory access, addition, and multiplication can all be performed at unit cost. (This is called a UMRAM.) This seems like a reasonable abstraction of a digital computer, since multiplication and addition on floating point numbers cost about the same. But a UMRAM is not polynomially equivalent to a Turing Machine! (See, e.g., [7].)
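The operand-size point above can be made concrete in a few lines of Python, whose built-in integers behave like exact, Turing-machine-style numbers of unbounded length while its floats are fixed-size IEEE 754 doubles (a sketch of my own, not from the article):

```python
import sys

# Exact integer arithmetic: the operands grow without bound, and so
# does the work per multiplication, as on a Turing Machine tape.
x = 3
for _ in range(5):
    x = x * x          # repeated squaring: bit length roughly doubles
print(x.bit_length())  # 3**32 already needs 51 bits, and keeps growing

# Floating point arithmetic: the representation stays 64 bits, so each
# operation costs about the same regardless of operand magnitude.
y = 3.0
for _ in range(5):
    y = y * y
print(sys.getsizeof(y) == sys.getsizeof(3.0))  # True: size is fixed
```

This is why unit-cost analysis in the real number model tracks floating point running times, while bit-level Turing Machine analysis does not.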

I consider the invention of the Turing Machine by Alan Turing to settle a decidability question in logic as one of the greatest intellectual achievements of the twentieth century. Nonetheless, it is the wrong model of computation for the continuous problems of science.


1. L. Blum, M. Shub, and S. Smale. On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions, and universal machines. Bull. Amer. Math. Soc. 21 (1989), 1-46.

2. E. Novak and H. Wozniakowski. Tractability of Multivariate Problems, vol. 1, Linear Information. European Mathematical Society (2008).

3. D. E. Shaw et al. Anton, a special-purpose machine for molecular dynamics simulation. Commun. ACM 51, 7 (2008), 91-97.

4. J. F. Traub and H. Wozniakowski. A General Theory of Optimal Algorithms. Academic Press (1980).

5. J. F. Traub and A. G. Werschulz. Complexity and Information. Cambridge University Press (1998).

6. J. F. Traub, G. Wasilkowski, and H. Wozniakowski. Information-based Complexity. Academic Press (1988).

7. P. van Emde Boas. Machine models and simulations. In J. van Leeuwen, Ed., Handbook of Theoretical Computer Science, vol. A: Algorithms and Complexity. MIT Press (1990), 1-66.


Joseph F. Traub is the Edwin Howard Armstrong Professor of Computer Science at Columbia University and External Professor at the Santa Fe Institute.

©2011 ACM  $10.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2011 ACM, Inc.


