Before we dive into the limits of what a computer can do, we need to have a firm understanding of what a computer is. The most common way to model the concept of a computer is through an abstract device described by Alan Turing [10].
There are several manifestations of this device. What they have in common is that they consist of a finite state machine, a tape that can be read from and written to, and a read/write head located over a position on the tape. In each step, the device reads the symbol at the current position on the tape and then, depending on the state and the symbol read, changes the state of the state machine, writes a symbol to the tape, and moves the tape forward or backward. A more formal definition, similar to the one given by Cohen [5], is as follows:
- \(\varSigma \) is a finite set of states for the Turing machine.
- \(\varGamma \) is a finite set of symbols that can be read from and written to the tape.
- \(\varOmega \) is a function \(\varSigma \times \varGamma \rightarrow \varGamma \) that decides the symbol to be written to the tape in each computation step.
- \(\varDelta \) is a function \(\varSigma \times \varGamma \rightarrow \{-1,0,1\}\) that decides in which direction the tape should move after a symbol has been written to the tape.
- \(\varPi \) is a function \(\varSigma \times \varGamma \rightarrow \varSigma \) that decides the state that the Turing machine enters after a symbol has been written to the tape.
The Turing machine starts with symbols already written on its tape and operates by applying the functions \(\varOmega \), \(\varDelta \), and \(\varPi \) in each execution step. In this definition, the computation halts when a step changes neither the symbol on the tape, the position of the tape, nor the state of the state machine (Fig. 5.1).
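As a sketch, the definition above can be simulated directly. The machine, function names, and example below are illustrative rather than taken from the text: a single table plays the roles of \(\varOmega \), \(\varDelta \), and \(\varPi \), and the loop stops under exactly the no-change halting condition just described.

```python
def run(table, state, tape_symbols, head=0, max_steps=10_000):
    """Simulate the machine until a step changes neither the symbol,
    the head position, nor the state (the halting condition above)."""
    tape = dict(enumerate(tape_symbols))  # sparse tape; blank cells read '_'
    for _ in range(max_steps):
        symbol = tape.get(head, '_')
        # table combines Omega, Delta, Pi: (state, symbol) -> (write, move, next)
        write, move, nxt = table[(state, symbol)]
        if write == symbol and move == 0 and nxt == state:
            return state, tape  # nothing changed: the machine has halted
        tape[head] = write
        head += move
        state = nxt
    raise RuntimeError("step limit reached without halting")

# Illustrative machine: flip each bit, moving right, until a blank is read.
table = {
    ('flip', '0'): ('1', 1, 'flip'),
    ('flip', '1'): ('0', 1, 'flip'),
    ('flip', '_'): ('_', 0, 'flip'),  # no change at a blank: halt
}
state, tape = run(table, 'flip', '0110')
print(''.join(tape[i] for i in sorted(tape)))  # prints 1001
```

Representing the tape as a dictionary keeps it conceptually unbounded, matching the abstract device, while only the visited cells are stored.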
A famous postulate, known as the Church–Turing thesis, is that every function that is computable by any machine is computable by a Turing machine. Although the thesis has not been proven, it is widely believed to be true: in the years since its formulation, no reasonable description of a machine has been given that provably performs computations a Turing machine cannot simulate. Consequently, the strength of programming languages and computational concepts is often measured by their ability to simulate a Turing machine. If they can, then, by the thesis, anything programmable at all can be programmed in the language or with the concept.
The definition of a Turing machine and the postulate that it can simulate any other computing machine are interesting in themselves. Still, the most important property, from our point of view, is that the Turing machine helps us understand the limits of what a computer can do. Turing himself was the first to consider these limits: he proved that no computer can decide whether a given Turing machine ever halts. This is famously known as the halting problem and, to prove it, Turing used the liar's paradox in much the same way as Gödel did.
In a modernized form, the proof runs as follows: assume that we have a programmed function P that, for any program U, can answer whether U halts for all inputs. That is, P(U) returns true if U halts for all inputs and false if there is an input for which U does not halt. We could then write the following program Q:
$$\begin{aligned} Q: \text {if }P(Q) \text { then loop forever; else exit} \end{aligned}$$
This program is a manifestation of the liar's paradox. If P(Q) returns true, then Q loops forever, so Q does not halt and P's answer was wrong. If P(Q) returns false, then Q exits immediately, so Q halts and P's answer was again wrong. Since P fails on Q no matter what it answers, the only possible explanation is that our assumption was wrong: no P can exist that decides the halting problem.
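The construction of Q can be mirrored in code. The names below (make_q, pessimist) are illustrative assumptions, not part of the proof: the point is that for any candidate decider p we are handed, the constructed q contradicts whatever p answers about it.

```python
def make_q(p):
    """Build the program Q from the proof, given a claimed decider p."""
    def q():
        if p(q):           # p claims q halts ...
            while True:    # ... so q loops forever, refuting p
                pass
        return "halted"    # p claims q loops ... so q halts, refuting p
    return q

# A candidate decider that answers "does not halt" for every program:
def pessimist(u):
    return False

q = make_q(pessimist)
print(q())  # prints halted, so pessimist's verdict on q was wrong
```

Only the false branch can be observed in finite time; a decider answering true sends q into an endless loop. That is exactly why the contradiction in the proof is logical rather than empirical: whichever answer P gives about Q, Q behaves so as to make that answer wrong.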