Minds and Machines, Volume 26, Issue 4, pp 359–388

How to Make a Meaningful Comparison of Models: The Church–Turing Thesis Over the Reals


Abstract

It is commonly believed that there is no equivalent of the Church–Turing thesis for computation over the reals. In particular, computational models on this domain do not exhibit the convergence of formalisms that supports this thesis in the case of integer computation. In the light of recent philosophical developments on the different meanings of the Church–Turing thesis, and recent technical results on analog computation, I will show that this current belief confounds two distinct issues, namely the extension of the notion of effective computation to the reals on the one hand, and the simulation of analog computers by Turing machines on the other hand. I will argue that it is possible in both cases to defend an equivalent of the Church–Turing thesis over the reals. Along the way, we will learn some methodological caveats on the comparison of different computational models, and how to make it meaningful.

Keywords

Church–Turing thesis · Type 2 theory of effectivity · Analog computation · Recursive analysis · B.S.S. model · \({\mathbb {R}}\)-recursive functions · G.P.A.C.

1 Introduction

In the case of integers, there is a quasi-universal belief that we have a thoroughly satisfying theory of computability over this domain. This belief is expressed by the almost universal support enjoyed by the Church–Turing thesis, according to which any reasonable model of computation over the integers is extensionally equivalent to the Turing machine model. For the mathematician, a simple argument is crucial in the justification of this belief: all our historical attempts to define reasonable computational models have yielded the same expressive power as the Turing machine model. This argument, sometimes called convergence of formalisms, supports the view that we have achieved a robust understanding of computability over the integers.

But in the case of computability over the reals, by contrast, there is no such convergence of formalisms. Different models, such as Type 2 Turing machines, the B.S.S. model, Shannon’s G.P.A.C. and Moore’s \({\mathbb {R}}\)-recursive functions, yield different sets of computable functions. Hence, so the argument goes, there is no equivalent of the Church–Turing thesis for the real functions, and we don’t have a satisfying theory of computability over that domain.

In this paper, I will argue that this vision fails to distinguish two issues, and leads to a wrong understanding of the state of the art in computability theory. The first issue is the understanding of what a generalization of effective computability to computability over the reals means. The second is the possibility of simulating any physically implementable model of computation over the reals by an effective computation. To understand this distinction, it is worth reminding the reader, as has become common in the philosophical and technical literature of the last three decades or so (see, for instance, Gandy 1980; Deutsch 1985; Earman 1986; Pitowsky 1990; Shor 1997; Copeland 2002b; Shagrir and Pitowsky 2003; Piccinini 2011), that the Church–Turing thesis admits different interpretations, which raise distinct issues (part 1). I will then examine the challenges faced by a generalization of effective computability to computability over the reals, and conclude that recursive analysis successfully meets those challenges (part 2). I will finally examine the different challenges raised by analog models of computability over the reals, and their comparison with computable analysis, taking into consideration very recent results on that topic. It will then become transparent that a Church–Turing thesis for real computation can be defended (part 3).

The paper is self-contained, and no prior knowledge of computability over the reals is assumed. It has two intended audiences. The first audience would include philosophers and logicians, especially those interested in the recent debate on the meaning and scope of the Church–Turing thesis. The second would include computer scientists with a foundational interest in the definition and systematic comparison of computational models.

2 The Many Forms of the Church–Turing Thesis

2.1 A First Approach

Let us begin with a brief presentation of the state of the art in the interpretation of the Church–Turing thesis. I shall not aim at exhaustiveness here, but will only introduce the elements necessary for the understanding of computability over the reals. The Church–Turing thesis is frequently presented in the following form:

1. Church–Turing Thesis. Every computable function over the integers is computable by a Turing machine.

where the "computable" on the left is an intuitive, pretheoretic concept, while the "computable" on the right is a rigorous, mathematically well-defined concept, computable-by-a-Turing-machine. This concept can be shown to be extensionally equivalent to other rigorous notions such as recursivity, \(\lambda\)-definability, computable-by-a-Post-machine, and so on.

The following proposition is yet another, less frequent expression of the same point:

2. Church–Turing Thesis (computational model). Every reasonable computational model over the integers is simulable by the Turing machine model.

But there is a problem with this contemporary presentation of the Church–Turing thesis. Even a brief study of the history shows that the founding fathers of computability had a more specific idea of the intuitive concept they wanted to capture.1 They used the old-fashioned concept of "effective procedure". Even though there is no rigorous definition of what an effective procedure actually is (in that case, the Church–Turing thesis would be useless), there are sufficiently many informal constraints on the notion to avoid vacuousness, and to enable us to recognize that a given procedure is not effective. Here is a tentative list of those constraints:2
  1. Finite formal description. An effective procedure can be completely described by a finite text written in a formal language. In modern parlance, that finite text is called a program, and the formal language a programming language.

  2. Inputs, outputs. An effective procedure admits a well-defined set of inputs, which might be the null set, and a well-defined set of outputs. There is a well-defined mathematical relation between the set of outputs and the set of inputs.3

  3. Definition. An effective procedure does not yield an output when the task it executes is not defined for a given input.

  4. Uniformity. An effective procedure applies uniformly to all instances of the task at hand.

  5. Termination in a finite number of steps. When it is applied to an input for which it is defined, an effective procedure completes its task in a finite number of steps.4

  6. Termination in finite real time. When it is applied to an input for which it is defined, an effective procedure completes its task in a finite amount of real time.

  7. Automatism. The execution of the procedure does not require ingenuity, intuition or guesses.

  8. Step-by-step simulation by pen-and-paper computation. Disregarding contingent limitations in time and memory space, an effective procedure can be executed step by step by a human being equipped with pen and paper.

Let us make a couple more remarks on the nature of effective procedures, which will be useful later on (see Sect. 3.3, last paragraph, and Sect. 4, introduction). As Turing explicitly noticed in his 1936 paper, effective procedures abide by a set of finitary constraints. These constraints can be more easily phrased if it is admitted, as Turing did in his paper, that the memory needed for an effective procedure can be described, without loss of generality, as a linear sequence of cells, and that the evolution of the computation depends on the state of the processor.5 An effective procedure must have:
  • A finite set of states.

  • A finite number of cells read and/or modified during one elementary step.

  • A finite movement between two steps.

Therefore, the computation executed by an effective procedure can be described as symbolic finitary computation. It is symbolic in the sense that
  1. Inputs and outputs are represented by words on a finite alphabet \(\Sigma\).

  2. The execution of the computation consists in the reading and modification of words.

It is finitary in the sense that:
  • The signature and the words are finite.

  • The description of the procedure is finite (finite formal description).

  • The execution of a single elementary step obeys finitary constraints (finite movement between two steps, finite number of cells read and/or modified during one elementary step).

  • The execution of the entire computation obeys finitary constraints (finite time, finite number of steps, finite set of states).
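
To make these finitary constraints concrete, here is a minimal sketch, in Python (my illustration, not from the original article), of a single elementary step of a Turing-style machine; every object it manipulates is finite:

```python
from typing import Dict, Tuple

State, Symbol = str, str
# The transition table is a finite object:
# (state, scanned symbol) -> (new state, symbol to write, head move in {-1, 0, +1})
Delta = Dict[Tuple[State, Symbol], Tuple[State, Symbol, int]]

def step(delta: Delta, state: State, tape: Dict[int, Symbol], head: int) -> Tuple[State, int]:
    """One elementary step of a Turing-style machine: finitely many states
    and symbols (the keys of delta), one cell read, at most one cell
    modified, and a head movement of at most one cell between two steps."""
    new_state, written, move = delta[(state, tape.get(head, "B"))]  # "B" = blank
    tape[head] = written  # the tape is mutated in place
    return new_state, head + move
```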

2.2 The Algorithmic Church–Turing Thesis

Taking into account the preceding remarks, our two initial phrasings can thus be reformulated to better capture the intentions of the founding fathers of computability:

1′. Algorithmic Church–Turing Thesis. Every function over the integers computable by an effective procedure can be computed by a Turing machine.

and

2′. Algorithmic Church–Turing Thesis (computational model). Every reasonable model of effective computation over the integers can be simulated by the Turing machine model.

This modified phrasing of the Church–Turing thesis has two main advantages.6 It reminds us of the original intention behind that proposition. But that is not a purely historical point: it also significantly reduces the scope of the thesis, and leaves room for new questions.

Let us see first how this new formulation reduces the scope of the thesis. Our list of informal constraints has a primary epistemic function: it allows us to see that not every computational procedure is an effective procedure. For instance, before being marginalized by universal digital machines, analog machines were very common (see Marguin 1994, chap. 7). The computations performed by analog machines are not symbolic in the sense we have just defined. Those machines encode inputs and outputs in continuous parameters, not strings of digits, and execute computational procedures by a continuous dynamics, not a step-by-step modification of such strings.7 Consequently, there exist computational procedures that are not effective procedures: "effectively computable" is not equivalent to "computable".

The same point can be made from a different angle, if we look at the usual arguments supporting the Church–Turing thesis. It can readily be seen that those arguments were meant to have effective computation in their scope, not any form of computation. Here is an exhaustive list of those arguments:
  1. Naturality in extension. All the common functions that are intuitively computable by an effective procedure are computable by a Turing machine.

  2. No known sophisticated counterexample. Such a counterexample would play a role similar to Ackermann's function, which shows that the set of primitive recursive functions is a proper subset of the effectively computable functions.8

  3. Failure of counterproof by diagonalization. This argument is rarely mentioned in contemporary presentations, even if it seems to have been historically important, especially for Gödel.9

  4. Convergence of formalisms or Robustness. Different attempts at formalizing the intuitive notion of an effective procedure in different mathematical models have yielded the same set of computable functions, strengthening the belief that we are dealing with a sound concept.

  5. Modelization. As Turing showed in his 1936 paper (Turing 1936), the Turing machine is a natural modelization of a pen-and-paper computation executed by a human being. This argument is frequently forgotten in modern presentations, even if it was historically important.10

Those arguments provide strong support to the forms (1′) and (2′) of the Church–Turing thesis, but not to the broader forms (1) and (2). They show once again that the Church–Turing thesis is about a specific form of computation, or a specific set of computational models, and not any form of computation, or any computational model whatsoever.

2.3 The Distinction Between the Algorithmic and the Physical Church–Turing Theses

Our last remark does not imply that non-effective procedures can compute one or several functions that effective procedures could not compute. Even though these procedures might be different in nature, their input–output behavior might be perfectly identical: they might simulate each other. After distinguishing effective and non-effective computational procedures, we can ask a new question: is every computational procedure simulable by an effective procedure? If we follow the general consensus and admit the (2′) form of the Church–Turing thesis, we can rephrase this question immediately: is every computational procedure simulable by a Turing machine? Or, otherwise put, is every reasonable computational model simulable by the Turing machine model?

A moment’s reflection can lead us to yet another phrasing of this question. It is of course very hard to determine what “reasonable” means in “every reasonable model of computation” (for more on this particular issue, see Pégny 2013, chap. 5 and 6). Many authors11 have argued that a reasonable model must satisfy the following necessary, if not sufficient, intuitive condition: if a model is reasonable, we must be able to harness some physical process to implement the computations it describes. But what are the physical processes counting as computations? Piccinini (2011) argued for an epistemic view, which he called the usability constraint: if a physical process is a computation, it can be used by a finite observer to get the desired values of a function. The constraint is epistemic, since a genuine computation has to produce knowledge about the values of a function for a given finite observer.

A procedure that can be defined but not executed on particular instances produces no computational knowledge whatsoever. Therefore, a reasonable computational model must describe computational procedures that can actually be executed: usability leads to executability.

A customary objection to this point is that even the Turing machine model describes computations that cannot be executed in this world, because of their titanic demands in time and/or space. That's a fair remark, but I think it misconstrues the original point. Indeed, there is room for debate in the interpretation of the modality in the expression "actually executable." But the fact that a philosophical principle leaves room for interpretation and debate is not necessarily a fatal argument against that principle. The original point of the principle was not to define what actual execution of a computation means. It was to set aside computational models that obviously do not define any executable computation. Such models, even if they might have some theoretical interest, do not qualify as genuine computational models. For instance, if the Turing machine model never led to physically executable computations, under any reasonable interpretation of the modality in "executable", then it would not be considered a reasonable model of computation. But on the contrary, we have millions of examples of implementations of Turing-style computation, and that's evidence enough that we are dealing with a serious computational model (for another expression of the same point, and references, see Aaronson 2005, 11–12).

But to be executed, a computation has to be implemented by some concrete system. Therefore, one could replace the long phrasing

Church–Turing Thesis (reasonable models). Every reasonable computational model is simulable by a Turing machine.

by the shorter

Physical Church–Turing Thesis. Every function computable by a physical system is computable by a Turing machine.

This latter proposition has become known in the recent literature as the “physical Church–Turing thesis” or “physical form of the Church–Turing thesis.” In order to make this distinction, the historical Church–Turing thesis has been relabelled as the “mathematical” or “algorithmic”12 Church–Turing thesis. All the reasons we have to believe in the algorithmic form are no reasons to believe in the physical form, which constitutes an autonomous question.

The label "physical" might be misleading. The physical Church–Turing thesis is not about a particular set of models that we would call "physical": it is a thesis about the ultimate limits of computation. If one admits Piccinini's epistemic view of computation, wondering whether the physical Church–Turing thesis is true is tantamount to wondering whether every reasonable computational model is simulable by the Turing machine model.13 The set of functions computable by effective models of computation is clearly included in the set of functions computable by physical models: the real question is whether the inclusion is strict.

Let us draw a moral from our first reflections. Vague formulations like (1) and (2) can mislead us into confounding two issues. The first issue is the robustness of our formal attempts to capture effective computation. The second is the ability of effective computation to simulate every computational procedure. We will now see how this moral carries over to computability over the reals.

3 Recursive Analysis: Or, How to Compute a Real Function Effectively

3.1 The Representation of Domain Elements

We will first examine the generalization of effective computation to real computation. The generalization of the notion of "effectively computable function" to non-denumerable domains raises several conceptual issues. Denumerable sets enjoy a remarkable property: every element of a denumerable set is encodable, through a given surjection, by an element of \(\Sigma ^*\), the set of finite words on a finite alphabet \(\Sigma\). We can thus define an effectively computable function f on a denumerable domain as an application associating to the name of an element \(x \in Dom(f)\) the name of \(y= f(x)\), the value of f at this argument: \(f:\subseteq (\Sigma ^{*})^{n} \longrightarrow \Sigma ^{*}\).

A function on a non-denumerable domain cannot have this property, by a simple cardinality argument. The set of finite sequences of symbols being denumerable, there can be no injection from a non-denumerable set into it. To encode the elements of a non-denumerable set, it is necessary to substitute for \(\Sigma ^*\) the set \(\Sigma ^{\omega }\) of infinite words on a finite alphabet \(\Sigma\). One can think of the decimal expansions of the reals as a possible set of words that can be used not only for \({\mathbb {R}}\), but for any set with the same cardinality.14

This problem of data representation is a fundamental conceptual problem for computability over the reals. It affects any model of computability over a domain of similar cardinality, and in particular models of effective computation over these domains. We will now look deeper into that latter problem, and see how recursive analysis tries to extend effective computation from the integers to the reals. We will then discuss whether this extension is successful.

3.2 A Very Short Introduction to Recursive Analysis

Recursive analysis is the most widely used model of computability over the reals. It is based on effective versions of the concepts of classical mathematical analysis, first and foremost the concepts of real number and real function.15 I will here reproduce the presentation of those concepts by Pour-El and Richards (1989, 13).

A sequence \((r_k)\) of rational numbers is computable if there exist three recursive functions \(a, b, s: {\mathbb {N}} \longrightarrow {\mathbb {N}}\) such that, for all k, \(b(k) \ne 0\) and
$$\begin{aligned} r_k = (-1)^{s(k)} \frac{a(k)}{b(k)}. \end{aligned}$$
(1)
A sequence \((r_k)\) of rational numbers effectively converges towards a real number x if there exists a recursive function \(e: {\mathbb {N}} \longrightarrow {\mathbb {N}}\) such that for all N:
$$\begin{aligned} k \ge e(N) \quad \hbox {implies}\quad |r_k - x| \le 2^{-N}. \end{aligned}$$
(2)
A real number x is computable if there exists a computable sequence \((r_k)\) of rational numbers that effectively converges towards x.

This definition is nothing but an effective version of one of the usual definitions of real numbers as limits of sequences of rational numbers. All other definitions of a computable real number conceived in the same spirit yield the same set of numbers (see Pour-El and Richards 1989 for more details and references).
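
As an illustration (mine, not from Pour-El and Richards), here is a sketch in Python of a computable sequence of rationals that effectively converges towards \(\sqrt{2}\); Newton's iteration converges fast enough that the identity function serves as the recursive modulus e:

```python
from fractions import Fraction

def sqrt2_approx(k: int) -> Fraction:
    """k-th term of a computable sequence of rationals converging to sqrt(2),
    via Newton's iteration on x^2 - 2 starting from 2. One can check that
    |r_k - sqrt(2)| <= 2^{-k}, so e(N) = N is a recursive modulus of
    effective convergence: sqrt(2) is a computable real."""
    r = Fraction(2)
    for _ in range(k):
        r = (r + 2 / r) / 2  # exact rational arithmetic, no rounding
    return r
```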

Since it will play an important role in our discussion of recursive analysis (see Sect. 3.4), it is necessary to expand on the least straightforward condition of this definition, that of effective convergence. Let us consider the following sum (Geroch and Hartle 1986):
$$\begin{aligned} K = \sum _{n=1}^\infty \alpha _n 2^{-n} \end{aligned}$$
(3)
with \(\alpha _n = 1\) iff the nth Turing machine (for a given recursive enumeration of Turing machines) halts, and \(\alpha _n = 0\) otherwise. This sum is obviously convergent, and defines a real number between 0 and 1. This real number cannot be effectively computable, for it would yield an immediate procedure to solve the halting problem: one would just have to read a sufficiently precise rational approximation of K (of the order of \(2^{-n}\)) in order to decide whether n belongs to the halting set. Nevertheless, there exists an algorithm producing a growing sequence converging towards K. Let us consider a universal Turing machine whose tape contains an integer m, and a given recursive enumeration of Turing machines:
  1. Simulate the first m Turing machines for m steps on input \(n=i\), with i the code of each machine according to our recursive enumeration.

  2. If the i-th machine with \(i \le m\) halts in \(m' \le m\) steps, set \(\alpha _i = 1\); otherwise set \(\alpha _i = 0\).

  3. Increment m. Go back to 1.

It is thus easy to produce a sequence of rational approximations of K with growing precision. K remains uncomputable nevertheless, for it is impossible to determine, for any rational \(r_i\) in this sequence, to which precision \(\epsilon\) it approximates K: \(\epsilon\) cannot be written as a computable function of i. If the first machine halts after 1,000,000 steps, our approximation will suddenly grow by a half. Otherwise put, although the sequence of rationals is computable by an effective procedure, the sequence is not effectively convergent: there is no algorithm taking any precision \(\epsilon\) as an input, and giving as an output an element of the sequence such that an error bounded by \(\epsilon\) has been obtained. For a real number to be computable, it is not sufficient that there exists an algorithm producing a sequence of approximations with growing precision: the precision of the approximation itself has to be effectively computable.
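
To see the contrast in code, here is a sketch in Python of the dovetailing procedure just described; it assumes a hypothetical helper halts_within(i, m), which is computable, since step-bounded simulation always terminates:

```python
from fractions import Fraction

def halts_within(i: int, m: int) -> bool:
    """Hypothetical helper: simulate the i-th Turing machine of the fixed
    recursive enumeration for m steps and report whether it has halted."""
    raise NotImplementedError  # stands in for an actual step-bounded simulator

def approximations_of_K():
    """Yield a monotonically increasing computable sequence of rationals
    converging to K. The sequence is NOT effectively convergent: no
    recursive e(N) bounds the error of its terms, so K is merely
    left-computable, not computable."""
    m = 1
    while True:
        yield sum((Fraction(1, 2**i) for i in range(1, m + 1) if halts_within(i, m)),
                  Fraction(0))
        m += 1
```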

The example we just gave is not just a sophisticated exception. It is on the contrary the instantiation of a more general property. Let A be a recursive set: the real defined by \(r = \sum _{i \in A} 2^{-i}\) is effectively computable. If A is a recursively enumerable but non-recursive set, with f a recursive function such that \(A = Dom(f)\), the sum \(\sum _{i \in A} 2^{-i}\) always converges towards a real number \(r'\), but this real number is not effectively computable. \(r'\) is nevertheless the limit of a monotonically increasing sequence whose terms can be engendered by a Turing machine: \(r'\) is said to be left-computable. If a real number is the limit of a computable monotonically decreasing sequence, it is said to be right-computable. It can easily be shown that a real number r that is both left- and right-computable is computable (Brattka et al. 2008).

Let us now give a first, rough definition of what a computable real function is.16 A function \(f : [a, b] \longrightarrow {\mathbb {R}}\) is computable iff:
  • (i) f is sequentially computable: if \(i \longmapsto x_i\) is a computable sequence in Dom(f) converging towards \(x \in Dom(f)\), then the sequence \(i \longmapsto f(x_i)\) is computable and converges towards f(x).

  • (ii) f is effectively continuous: there exists a recursive function \(d : {\mathbb {N}} \longrightarrow {\mathbb {N}}\) such that for all \(x, y \in Dom(f)\), and for all \(n \in {\mathbb {N}}\)
    $$\begin{aligned} |x-y| \le \frac{1}{d(n)} \quad \hbox {implies} \quad |f(x)-f(y)| \le 2^{-n}. \end{aligned}$$
    (4)

In more intuitive terms, this definition states that a real function f is computable iff there exists an algorithm which takes a sequence of approximations of x in Dom(f), and yields as an output a sequence of approximations of f(x), with computable precision.
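
For a concrete instance of condition (ii) (my example, not from the text), take \(f(x) = x^2\) on [0, 1]: since \(|x^2 - y^2| \le 2|x - y|\) there, the recursive function \(d(n) = 2^{n+1}\) is a modulus of effective continuity, and an approximation of f(x) can be computed from a sufficiently precise approximation of x:

```python
from fractions import Fraction

def d(n: int) -> int:
    """Modulus of continuity for f(x) = x^2 on [0, 1]:
    |x - y| <= 1/d(n) implies |x^2 - y^2| <= 2|x - y| <= 2^{-n}."""
    return 2 ** (n + 1)

def square_approx(x_approx: Fraction, n: int) -> Fraction:
    """Given a rational x_approx with |x_approx - x| <= 1/d(n), the value
    x_approx^2 approximates x^2 within 2^{-n}."""
    return x_approx * x_approx
```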

In this paper, we will call the functions computable according to recursive analysis "RA-computable". This usage departs from the usual conventions of the logic and computer science community, where the model of recursive analysis is so dominant that these functions are simply called "computable functions". This departure from the norm seems necessary for our particular purposes. Since we compare different models of computability and refer to pretheoretic notions of computability, especially that of effective computability, calling the functions computable according to a given model "computable functions" would be an undesirable source of confusion. Calling those functions "Turing-computable" would be more accurate, but it does not mark the distinction between integer and real computation, and "real Turing-computable" would be obnoxiously ponderous.

Those definitions of a computable real number and a computable function over the reals have the advantage of being natural for an analyst: they essentially consist in the effective version of standard definitions in analysis. Can we consider that recursive analysis successfully generalizes effective computation to the real domain? This is the question we now address.

3.3 Recursive Analysis as a Natural Extension of Effectivity to Non-denumerable Domains

Before we move on, in the next section, to actual criticism of recursive analysis, let us first present the arguments in favor of seeing recursive analysis as a natural extension of effective computation to non-denumerable domains. Extending effective computation to computation over the reals is not completely obvious: it raises two main issues, which both stem from the impossibility of representing elements of the domain with finite words. We can roughly sum them up with the following two questions:

  1. How do we modify our conception of a correct computation to adapt it to a non-terminating computation? As seen above, one of our conditions of effectivity was the following:

     Termination in a finite number of steps. When it is applied to an input for which it is defined, an effective procedure completes its task in a finite number of steps.

     But that condition cannot be carried over to effective computability over the reals. As a basic computational task for any model, the effective computation of a real number cannot be completed in a finite number of steps, since the representation of that number can be infinitely long. How do we relax that condition, while remaining faithful to the spirit of effective computability?

  2. How can we feed an infinite input to a machine implementing a finitary procedure?

To answer both these questions, it is better to switch to yet another, more low-level presentation of computability over the reals, that of Turing machines with real inputs, or Type 2 Turing machines.17 This presentation will help us understand how recursive analysis successfully extends effective computability to the computation of a sequence of approximations of an infinitary object.
For \(k \ge 0\) and \(Y_0, Y_1,\ldots , Y_k \in \{\Sigma ^*, \Sigma ^{\omega }\}\), let us define RA-computable functions \(f:\subseteq Y_1 \times Y_2 \times \cdots \times Y_k \longrightarrow Y_0\) with a Turing machine equipped with k read-only input tapes, a finite number of work tapes, and a single one-way output tape. The initial configuration of the machine is defined naturally, each read-only tape i holding a finite or infinite word \(y_i \in Y_i\). Every cell not holding a symbol of the name of the input contains the blank symbol B, as does every cell of every other tape.18 Contrary to the case of RA-computable functions over a denumerable domain, it is necessary to distinguish between results representable by finite words and results only representable by infinite words. More rigorously, let \(f_M:\subseteq Y_1 \times Y_2 \times \cdots \times Y_k \longrightarrow Y_0\) be the function computed by the Turing machine M defined above. For all \(y_0 \in Y_0, y_1 \in Y_1,\ldots , y_k \in Y_k\), let us define:
  1. Case \(Y_0 = \Sigma ^*\): \(f_M(y_1,\ldots ,y_k):= y_0 \in \Sigma ^*\) iff M halts on \((y_1,\ldots ,y_k)\) with \(y_0\) on the output tape.

  2. Case \(Y_0 = \Sigma ^{\omega }\): \(f_M(y_1,\ldots ,y_k):= y_0 \in \Sigma ^{\omega }\) iff M computes indefinitely on \((y_1,\ldots ,y_k)\) and prints \(y_0\) on the output tape.

A function \(f:\subseteq Y_1 \times Y_2 \times \cdots \times Y_k \longrightarrow Y_0\) is computable iff there exists a Type 2 machine M that computes it. Let us remark that \(f_M(y_1,\ldots ,y_k)\) is undefined if M computes indefinitely but writes only a finite number of symbols on the output tape.
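
As a toy example of the second case (an illustration of mine, using the binary-expansion representation for simplicity, although recursive analysis in fact prefers more robust representations): for \(x \in [0,1]\) with binary expansion \(0.b_1 b_2\ldots\), the function \(x \mapsto 1 - x\) is computed by a machine that reads the input digits one by one and prints their complements on the one-way output tape, since \(\sum _i (1-b_i) 2^{-i} = 1 - x\):

```python
from typing import Iterator

def one_minus(x_digits: Iterator[int]) -> Iterator[int]:
    """Type-2-style stream transformer: each output digit is determined by a
    finite portion (here, a single digit) of the infinite input, and once
    printed it is never erased (a one-way output)."""
    for b in x_digits:   # runs forever on an infinite input stream
        yield 1 - b
```

Feeding it the binary expansion of 1/3 = 0.0101..., for instance, yields 0.1010... = 2/3, and each printed digit is final, in accordance with the finiteness property discussed below.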

Such a definition raises obvious issues of effectivity. For a procedure to qualify as effective, it is necessary that inputs and outputs can be read, written and modified. In the case of functions over denumerable domains, these properties were trivially satisfied, since the elements of the domain can be represented by finite words. But it is impossible to write or read a word of infinite length.

To define a notion of effectively computable function over non-denumerable domains, it is necessary to relax the condition of writing a finitary exact representation of the input and output, and to substitute for it the idea of approximate representation. Our Type 2 Turing machines should thus read and write a sequence of prefixes of the infinite string representing the considered element of the domain or codomain. Furthermore, it is impossible to demand that an effective procedure halts in a finite number of steps: the computation of an infinite string has to go on indefinitely. As Weihrauch puts it (2000, 15–16):

Clearly, infinite inputs or outputs do not exist and infinite computations cannot be finished in reality. But finite computations on finite initial parts of inputs producing finite initial parts of the outputs can be realized on physical devices as long as enough time and memory are available.(...) Of course, Type-2 machines can be simulated by digital computers. Therefore, infinite computations of Type-2 machines can be approximated by finite physical computations with arbitrary precision. The restriction to one-way output guarantees that any partial output of a finite initial part of a computation cannot be erased in the future and therefore, is final. For this reason, models of computation with two-way output would not be very useful.

Let us expand a little bit on this, for it is a crucial point for extending the very notion of correctness of a computation to non-denumerable domains. In the case of denumerable domains, the correctness of the written output was defined in a natural way. Let us consider, without loss of generality, Turing machines with a one-way output tape. A Turing machine M computes the result of the evaluation of a function f on an argument \((x_1,\ldots , x_k) \in Dom f\) iff there exists an n such that after n steps, the output tape holds the finite name w associated to \(y= f(x_1,\ldots , x_k)\), with the blank symbol B on all other cells, and M enters its halting state H.
In the case of a function with non-denumerable domain, this criterion has to be redefined when \(Y_0 = \Sigma ^{\omega }\). Let \(f:\subseteq Y_1 \times Y_2 \times \cdots \times Y_k \longrightarrow Y_0\) be a RA-computable function, \((y_1,\ldots ,y_k)\) an argument in Dom(f), and \(y_0 \in \Sigma ^{\omega }\) the value of f at this argument. The Turing machine M computes the value \(y_0\) of f at the argument \((y_1,\ldots ,y_k)\) iff:
  1. The machine begins to write the output. There exists a step \(k_0\) at which M writes the first symbol of \(y_0\).

  2. The writing goes on. For all k, if M has written n symbols after k steps, there exists \(k'\ge k\) such that M has written the \((n+1)\)th symbol after \(k'\) steps.

  3. The machine writes prefixes, or One-way output. For all \(k \ge k_0\), the string written on the tape is a prefix of \(y_0 \in \Sigma ^{\omega }\).19

The design choice of a one-way output tape is thus justified by a more fundamental demand: the desire to have, at any computational step, a correct prefix of the name of the output written on the output tape. In more intuitive terms, the first two conditions guarantee that only a finite number of steps is needed to write any finite prefix of the output. The third condition guarantees that the first two are meaningful: it demands that this finite word actually be a prefix of the name of the output.

In Weihrauch's words, the computation of a real function in recursive analysis rests on a finiteness property. Let us consider \(f_{M}\), the real function computed by the Turing machine M. The finiteness property states: every finite portion of the output \(f_M(p)\) is already determined by a finite portion of the input p. This finiteness property is implied by the three conditions mentioned above. In the remainder of this article, I will denote those three specific conditions by the name "finiteness property", and I will consider Weihrauch's proposition as their intuitive and synthetic sum-up.

We have now seen how recursive analysis extends effective computation to the non-terminating computation of values represented by infinite words. There is a remaining problem with our definition of a RA-computable function by a Type 2 Turing machine. This definition supposes that the infinite name of a real is given as an input to the Type 2 Turing machine, so that it might compute the value of the function at the argument represented by this name. But by Weihrauch's finiteness property, an actual computation on a given input never needs an infinite amount of information at any step: there is thus no need for an infinite name written on the input tape at the beginning of our computation. The model uses a physically unimplementable idealization without proper theoretical necessity: finite prefixes of the infinite input can instead be fed to the machine as the computation goes along, in perfect accordance with the finiteness property.

To capture that intuition, it is desirable to add to the model a description of how the input is fed to the machine.20 This is realized in Ker-i Ko’s oracle Turing machine model by a sequence of queries to an oracle.

An oracle Turing machine M is an ordinary Turing machine with read-only input tapes, a one-way output tape and a work tape. A distinguished work tape is added, the query tape, along with a distinguished state, the query state. The oracle is a black box that can provide the value of a function \(\phi : \Sigma ^{*} \longrightarrow \Sigma ^{*}\) at any argument, through a mechanism left unspecified. Equipped with this oracle, the machine M replaces the string w written on the query tape by \(\phi (|w|)\) in one step, every time the query state is reached. Intuitively, the function computed by the oracle is the type 1 function computing a converging sequence of approximations of the required input. The input can thus be provided to the machine with the precision needed as the computation goes along.

This intuition is hardwired into the definition of the computation of a real function by an oracle Turing machine M21 (Ko 1991):

  1. The input x of f is given to M as an oracle.

  2. The required precision on the output, \(2^{-n}\), is given to M as an integer input n.

  3. M executes the computation in two steps:

     • (i) M computes, from the required precision \(2^{-n}\) on the output, the required precision \(2^{-m}\) on the input;

     • (ii) M consults the oracle for \(\phi (m)\), such that \(|\phi (m)- x|\le 2^{-m}\), and computes from \(\phi (m)\) an output d such that \(|d- f(x)|\le 2^{-n}\).

Under a given representation \(\rho\), a function \(f: \subseteq {\mathbb {R}} \longrightarrow {\mathbb {R}}\) is computable if there exists an oracle Turing machine such that for any input \(n \in {\mathbb {N}}\) and any \(\rho\)-name of x in Dom(f) given as an oracle, the machine writes as an output a \(\rho\)-name y such that \(\Vert y - f(x)\Vert \le 2^{-n}\).

The oracle can be implemented by a simple subroutine computing the RA-computable real taken as an argument. It can also be a subroutine computing a sequence of rationals converging towards a non-computable real number: the inputs of a RA-computable real function do not have to be RA-computable real numbers.
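
Here is a minimal sketch, in Python, of this two-step scheme for the function f(x) = 3x (my example, not Ko's); the oracle is modelled as a callable \(\phi\) with \(|\phi (m) - x| \le 2^{-m}\):

```python
from fractions import Fraction
from typing import Callable

Oracle = Callable[[int], Fraction]  # m -> rational phi(m) with |phi(m) - x| <= 2^{-m}

def triple(phi: Oracle, n: int) -> Fraction:
    """Compute f(x) = 3x to output precision 2^{-n}.
    (i)  From the output precision, compute a sufficient input precision:
         with m = n + 2, |3*phi(m) - 3x| <= 3 * 2^{-(n+2)} < 2^{-n}.
    (ii) Query the oracle once at that precision and post-process."""
    m = n + 2
    return 3 * phi(m)

# For a rational x the oracle can simply be a subroutine answering exactly:
def exact_oracle(x: Fraction) -> Oracle:
    return lambda m: x
```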

Once the Type 2 Turing machines are formalized as oracle machines, it can be said that the computation of a RA-computable function \(f : \Sigma ^{\omega } \longrightarrow \Sigma ^{\omega }\) never requires the manipulation of an infinite amount of information, even though its modelization uses infinitary mathematics.

Recursive analysis can thus be seen as an extension of effective computation to the computation of a sequence of approximations of a result named by an infinite word. This interpretation leads to a relaxation of the condition of termination, which is to be replaced by the three conditions called finiteness property.

Two arguments can be presented in favor of this position. The first one is obviously that computations in recursive analysis are executed by a form of Turing machine. As such, they respect Turing's conditions for a model of effective computation: finite cardinality of the signature and of the set of states, finite number of symbols read and moves made per computational step. The second is that the finiteness property is consistent with our pretheoretic notion of effectivity. An effective computation should only require a finite amount of information and a finite number of steps to produce either a finite output, or a finite prefix of (the name of) an output: that is what the finiteness property guarantees. In the list of conditions defining effectivity (see Sect. 1), this finiteness property should replace the termination condition as a more inclusive approach to effectivity.

3.4 Criticism of Recursive Analysis as Unnatural

Recursive analysis has indeed become the most commonly used model of computability over the reals. It is so dominant that "computability over the reals" is often implicitly synonymous with "computability according to recursive analysis." It has nevertheless been criticized. Recursive analysis actually faces two distinct forms of criticism. The first consists in a direct criticism of the model, and of the set of functions it selects as effectively computable. The second, and more common form, comes from a comparison of recursive analysis with other models of computability. I will address this second form when we discuss analog models, because it constitutes a distinct philosophical problem. In this section, I will only consider the first form of criticism.

This critical stance can be found in a philosophical analysis by Earman (1986, 125, emphasis ours):

Church’s thesis, or proposal as I would prefer to call it, says,

(CP1) The class of programmable or algorithmically computable functions of the integers is to be identified with the Turing computable functions.

I have no doubts about the adequacy of (CP1), especially as regards the originally intended application to Hilbert’s decision problem.(...) Church’s initial proposal (CP1) could be extended to functions of the reals by

(CP2) The class of programmable or algorithmically computable functions of the reals is to be identified with the Grzegorczyk22 computable functions.

However, (CP2) does not carry the conviction of (CP1) because Grzegorczyk’s definition, though useful for providing results in analysis, is only one of various possible ways to generalize Turing computability to functions of the reals.

Intuitively, the criticism is the following: the set of real functions computable according to recursive analysis is unnatural, for it excludes certain intuitively computable functions. We have seen above that computability on the integers did not face that kind of criticism: all "intuitively computable functions" over the integers are recursive. If such an attack were vindicated, it would be a specific drawback of recursive analysis. It would then be justified to say that our theory of computability over the reals is less satisfying than that over the integers.

The criticism stems from a strong property of RA-computable real functions: every such computable function is continuous. This property excludes the computability of certain simple real functions, such as the step function \(s(x)= c\) if \(x <x_0\), and d otherwise, with c, d, \(x_0\) RA-computable reals, and the Gauss staircase \(g(x) = \lfloor x \rfloor\) (integer part of x), which are obviously discontinuous.

But the objection itself is disputable. In what sense are these functions “intuitively computable”? Several arguments might be used to support this stance. These functions are well-defined and their graph is easy to draw. Furthermore, the computability of these functions might seem intuitive, because it is easy to compute their values in so many cases: very few functions are as simple to compute as the Gauss staircase.

However, such arguments entirely miss the specific difficulty of effective computation over a non-denumerable domain. Equality to a RA-computable real is trivially decidable in an infinity of cases, but it is not decidable in all cases, because this would require an actual infinity of information. In analogous fashion, the computation of the values of a discontinuous function near a discontinuity point x would require the ability to distinguish a value equal to x from a value \(x'\) arbitrarily close to x, just to determine the first symbol of the name of \(f(x')\). Beyond disputable intuitions, the criticism of unnaturalness in extension might stem from the possibility of applying weaker notions of computability to these functions. As shown by Weihrauch (2000, 7, 121), the integer part function has a remarkable property: there exists an algorithm associating to a sequence of rationals effectively converging towards x a sequence of rationals converging towards f(x): all values of this function are left-computable. As we have seen above, a pretheoretical intuition that a computable number is just the limit of a sequence of computable approximations might lead us to overlook the condition of effective convergence in the definition of a computable real number. That very same intuition might lead us to believe that a function whose values are left-computable should count as effectively computable.
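
A sketch (in Python, my illustration) of why comparisons are only semi-decidable: given oracles \(\phi _x, \phi _y\) for x and y, inequality can be certified in finite time whenever \(x \ne y\), but when \(x = y\) the search below never terminates, which is exactly what blocks the computation of a step function at its discontinuity point:

```python
from fractions import Fraction
from typing import Callable

Oracle = Callable[[int], Fraction]  # m -> rational within 2^{-m} of the real

def certify_distinct(phi_x: Oracle, phi_y: Oracle) -> int:
    """Semi-decide x != y: halts (returning a witnessing precision) iff x != y.
    If |phi_x(m) - phi_y(m)| > 2 * 2^{-m}, then x != y is certified;
    if x = y, the test fails at every precision and the loop runs forever."""
    m = 0
    while abs(phi_x(m) - phi_y(m)) <= 2 * Fraction(1, 2**m):
        m += 1
    return m
```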

But is it more intuitive to speak of effective computability with or without the condition of effective convergence? The question amounts to bickering. As Earman noticed (1986, 127), the definition of an effectively computable real splits our intuitions. It is intuitive to consider a real number effectively computable when an algorithm can produce a convergent sequence of approximations of that number. But it is also intuitive to believe that the precision of the approximation should be effectively computable. The relevance of a computational model such as recursive analysis should not be founded on such conflicting intuitions.

On the other hand, there is a more valuable positive argument in favor of the continuity of RA-computable functions. This property is demonstrated straightforwardly from the finiteness property, as shown by Weihrauch (2000, 6, 30). As we have just seen, this property is essential to generalize some of our intuitions on effectivity to non-denumerable domains. Continuity cannot be seen as a contingent property of recursive analysis, which a variant of the same model could, and maybe should, eliminate. It is a straightforward consequence of our effort to generalize our notion of effective computation to new domains. That this kind of generalization might yield surprising results, disrupting some other intuitive beliefs, is part of the fate of mathematical theories when they formalize intuitive notions.

The argument that recursive analysis is unnatural in extension does not withstand scrutiny. The argument of naturality in extension, even if it seemed to be doing a good job in the realm of integer computation, is no longer relevant in the more paradoxical world of the reals. On the other hand, as we have seen above, recursive analysis has very strong arguments in its favor when it comes to generalizing effective computation to the reals. Henceforth, I will consider that recursive analysis is the right generalization of effective computability to the reals. Consequently, the following generalizations of the algorithmic form of the Church–Turing thesis

1′ (Real) Algorithmic Church–Turing Thesis over the Reals. Every real function computable by an effective procedure is computable by a Type 2 Turing machine.

and

2′ (Real) Algorithmic Church–Turing Thesis over the Reals (computational model). Every reasonable model of effective computability over the reals is simulable by Type 2 Turing machines.

are both perfectly natural.23 The algorithmic Church–Turing thesis extends successfully to computation over non-denumerable domains.

4 Analog Models

Another criticism of computability over the reals in general, and recursive analysis in particular, stems from a different source: the comparison between effective computation and other models of computation. Despite all their respective differences, all those models can be gathered under the same banner: they are analog models of computation.

There is no rigorous definition of what an analog model is. The notion refers generically to a model in which one or several parameters are described by continuous variables, be it time, space, or the state of the processor. One common property of these models is that real numbers are no longer considered as strings, but as quantities in themselves.24

All these models are essentially different from recursive analysis.25 As we have just seen, recursive analysis is an extension of effective computation to the computation of a sequence of approximations of infinitary objects. Analog models of computability do not describe a symbolic finitary computational process. The execution of a computation does not consist in the reading and manipulation of finite strings. They are not executable as such by a human being equipped with pen and paper, even if their input–output behavior might very well be simulated by such a procedure.

Since these models do not aim at formalizing effective computation over the reals, the demonstration of their equivalence with recursive analysis, or lack thereof, does not have the same meaning as the historical demonstrations of equivalence for models of integer computability. In the case of computability over the integers, all the first historical models of computation aimed at capturing the same notion of effective computation, no matter how vague that notion might have been. As demonstrations of equivalence between various formal models of that same notion piled on, that was taken as a sign that effective computability was not a fuzzy informal concept, but a robust one, which had been successfully captured by formal modelization.

Analog models and recursive analysis, on the contrary, constitute two different classes of models, with different theoretical ambitions. If they turned out to be equivalent, it would show the capacity of effective procedures to simulate the computations performed by any analog model, and vice versa. Two very different notions of computational procedures would then yield an identical expressive power.

As we have seen above (Sect. 1), that result would not be a generalization of the historical, algorithmic form of the Church–Turing thesis. It would be a physical form of that same thesis in the special case of computability over the reals:

Physical Church–Turing Thesis over the Reals (computational model). All physically implementable models of computation over the reals can be simulated by Type 2 Turing machines.

and

Physical Church–Turing Thesis over the Reals. Every real function computable by a physical system can be computed by Type 2 Turing machines.

We can now see what is problematic with the "no convergence of formalisms" criticism of real computability, when it is left unqualified. It blurs the distinction between two issues, and leads the reader to believe that real computability is in the same situation as integer computability would have been, had the first models of effective computation not been equivalent. Nothing could be further from the truth. First, there is no problem with the generalization of effective computation to real computability: as we have already seen, recursive analysis has achieved that generalization. And second, even the non-equivalence of several analog models with each other and with effective computation cannot be construed immediately as a problem. To understand the full meaning of that absence of convergence, we have to think about the expressivity of analog models in the light of the following question: what can be said of the truth-value of the physical Church–Turing thesis for analog models?

In order to do so, I will now proceed to a quick review of the main analog models. This review makes no pretense of exhaustiveness; its sole aim is to address our question. We will first have to distinguish between physically implementable and unimplementable analog models, because only implementable models are relevant to a discussion of the physical Church–Turing thesis. I will then comment on recent technical results on the most common realistic model, the G.P.A.C., and how they fundamentally alter our view of the relation between effective and analog computation over the reals.

4.1 Physically Unimplementable Analog Models

4.1.1 B.S.S. and Real-RAM Models

Let us start with discrete-time, continuous-space analog models. In these models, even though there exists a notion of discrete computational step, or discrete time, the information is stored in continuously valued parameters, or continuous space. The Blum–Shub–Smale model, a.k.a. the B.S.S. model, the real-RAM model, and the real Turing machine are members of that family of models, with equivalent computational power.

Beyond their differences, which are irrelevant for our present purposes, those models share some remarkable features.

The B.S.S. model can be seen as a model of flowcharts acting on sequences of integer registers \(N_0, N_1,\ldots\) and sequences of real registers \(R_0, R_1,\ldots\). The reader should note that each real register is not assumed to contain an approximation of a given real, but the real number itself, that is to say an actual infinity of information. The model contains elementary arithmetical operations on those real numbers, usually at unit cost. It also allows branchings conditioned on the comparison of the real numbers contained in two registers \(R_i\) and \(R_j\).

From their very design, it is obvious that those models allow the computation of non-RA-computable functions, since the comparison of two real numbers is only semi-decidable in recursive analysis. However, since this operation is primitive in these models, there is no indication of how it could be executed.

The comparison of expressive powers with recursive analysis is a little tricky, since there is no relation of inclusion between B.S.S.-computable functions and RA-computable functions. The B.S.S. model can compute the Gauss staircase function, but it cannot compute certain ordinary functions such as the square root or the exponential. B.S.S.-computable functions also have different properties. As we have just seen, RA-computable functions are continuous, while most B.S.S.-computable functions are discontinuous: each non-trivial branching \(R_i < R_j\) introduces a discontinuity point. Finally, it is impossible to add a finite number of primitive functions to the B.S.S. model so as to make it able to compute all RA-computable functions (for more details, and references, see Weihrauch 2000, 260–265).
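
A sketch in Python syntax of a B.S.S.-style flowchart for the Gauss staircase (an illustration of mine, restricted to \(x \ge 0\) for brevity): the crucial idealization is that x is an exact real held in a register, and that each comparison is an exact primitive executed at unit cost, the very operation that is only semi-decidable in recursive analysis:

```python
def bss_floor(x: float) -> int:
    """B.S.S.-style program for the Gauss staircase on x >= 0. Pretend that
    x is an exact real number: every branching below compares two reals
    exactly and at unit cost, which no physical machine can implement."""
    n = 0
    while not (n <= x < n + 1):  # exact comparison of real registers
        n += 1
    return n
```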

Those discrete-time, continuous-space models cannot be considered as genuine computational models, because none of them can be implemented. The problems raised by such models are well summed up by the following comment from Weihrauch (2000, 262):

Real-RAMs cannot be realized by physical machines, that is, they are unimplementable, for the following reason: in a finite amount of time every physical information channel can transfer only finitely many bits of information and every physical memory is finite. Since there are uncountably many real numbers, it is impossible to identify an arbitrary real number by a finite amount of information. Therefore, it is impossible to transfer an arbitrary real number to or from a computer in a finite amount of time or to store a real number in a computer.

It is part of our naive understanding of physics that the storage and manipulation of an infinite amount of information is impossible.26 More precisely, Weihrauch's comment is based on two informal, but nevertheless reasonable, empirical assumptions:
  1. It is impossible to store an infinite amount of information in a finite volume.

  2. It is impossible to carry an infinite amount of information through a finitely sized channel.

Further discussion of those hypotheses and references can be found in Pégny (2013), chap. 5 and 6.

Since those models do not describe algorithms that can actually be executed, the lack of convergence with recursive analysis is not really a surprise. It is not even a problem, since it does not challenge the truth of the physical Church–Turing thesis over the reals. For a lack of convergence of models to be significant, all the compared models must be physically implementable. This remark will carry over to a different model, Moore’s \({\mathbb {R}}\)-recursive functions.

4.2 \({\mathbb {R}}\)-Recursive Functions

Moore’s \({\mathbb {R}}\)-recursive functions are an extension of the recursive definition of effectively computable functions to real functions.27

Let us first introduce some notation. \(\overrightarrow{x}\) will denote the n-tuple of real numbers \((x_{1},\ldots , x_{n})\). \(\partial _y h(\overrightarrow{x}, y)\) will denote the partial derivative with respect to y of the many-variable function \(h(\overrightarrow{x}, y)\). The total derivative with respect to t of the one-variable function v(t) will be denoted by \(v'(t)\), sometimes abbreviated as \(v'\). \(\Vert x \Vert\) denotes the absolute value of x.

A function \({\mathbb {R}}^{m}\longrightarrow {\mathbb {R}}^{n}\) is \({\mathbb {R}}\)-recursive if it can be engendered from the constant functions 0 and 1 in the following fashion: if f and g are \({\mathbb {R}}\)-recursive, so is the function h obtained by the following operations:
  1. Composition. \(h(\overrightarrow{x}) = f(g(\overrightarrow{x}))\).

  2. Differential recursion or Integration. \(h(\overrightarrow{x}, 0)= f(\overrightarrow{x}), \partial _y h(\overrightarrow{x}, y)= g(\overrightarrow{x},y,h(\overrightarrow{x}, y))\). Equivalently, one can write
     $$\begin{aligned} h(\overrightarrow{x}, y) = f(\overrightarrow{x}) + \int _0^y g(\overrightarrow{x},y',h(\overrightarrow{x}, y')) {\mathrm {d}}y' \end{aligned}$$
     (5)

  3. \(\mu\)-recursion or Zero detection. \(h(\overrightarrow{x}) = \mu _y f(\overrightarrow{x}, y)= \inf \{y \mid f(\overrightarrow{x}, y)=0\}\), with inf selecting the y of smallest absolute value and, if two such ys have identical absolute value, the negative one by convention.

  4. The vectorial functions are defined iff their components are defined.

Differential recursion is a natural analog of primitive recursion: instead of defining \(h(\overrightarrow{x}, y+1)\) as a function of \(h(\overrightarrow{x}, y)\), y and \(\overrightarrow{x}\), one defines \(\partial h/\partial y\) in terms of \(h(\overrightarrow{x}, y)\), y and \(\overrightarrow{x}\). The solution need not be unique, and it can also diverge. For simplicity's sake, it is assumed that h is defined only when the equation admits a unique, finite solution passing through the point \(h(\overrightarrow{x}, 0)= f(\overrightarrow{x})\).
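As a worked illustration (the derivation is mine, though the functions themselves appear among Moore's examples): taking \(f = 1\) and \(g(y, h) = h\) in the differential recursion scheme yields the exponential,
$$\begin{aligned} h(0) = 1, \qquad \partial _y h(y) = h(y) \quad \Longrightarrow \quad h(y) = e^{y}, \end{aligned}$$
while taking \(h(x, 0) = x\) and \(\partial _y h(x, y) = 1\) yields addition, \(h(x, y) = x + y\).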

As in the partial recursive case, the role of the operator \(\mu\) is to detect the smallest y such that \(f(\overrightarrow{x}, y) = 0\). The existence of negative real numbers forces a modification of the definition: reals are ordered by their absolute value, and, for two values with identical absolute values, the negative value is chosen by convention. Furthermore, if an infinity of zeroes accumulates at a value y, the operator \(\mu\) selects this value, even if it is not itself a zero. Equivalently, if \([a, b]\) is the largest closed interval containing 0 on which \(f(y) \ne 0\), \(\mu\) returns whichever of a and b has the smallest absolute value or, if \(a = -b\), the negative value a.

Moore chose to include vectorial functions among his base cases. This is not necessary for partial recursive functions, since a vector of integers can be encoded into a single integer.

From these definitions, it is easy to demonstrate that the analog of the last base case of partial recursive functions, namely the projections, is definable. Many ordinary functions are also definable, such as the elementary arithmetical operations, the exponential and the logarithm, the usual trigonometric functions, and x mod y. A new notion of computable real is also defined: a real number \(x \in {\mathbb {R}}\) is \({\mathbb {R}}\)-recursive iff there exists an \({\mathbb {R}}\)-recursive function f such that \(x = f(0)\).

The notion of computability used in this definition is very different from effective computability. A computable real is not a number whose infinite representation can be sequentially generated by an algorithm with computable precision, but an exact quantity, generated by an intrinsically analog process. It is thus not very surprising that some \({\mathbb {R}}\)-recursive numbers are not RA-computable.28

The set of \({\mathbb {R}}\)-recursive functions is not equivalent to the set of RA-computable functions either. \({\mathbb {R}}\)-recursive functions include the absolute value function, as well as discontinuous functions such as Kronecker's \(\delta\)-function and the echelon (step) function. The definition of a discontinuous \({\mathbb {R}}\)-recursive function requires the use of the \(\mu\) operator.

As Moore explicitly remarks, several features of the model are not realistic:
  1. Robustness to noise. The \(\mu\) operator can detect any zero of a given function, including the case where such a zero is isolated and creates a discontinuity, as with a function that is constant everywhere except at a single point y, where it takes the value 0. The implementation of such an operator seems empirically impossible, since it would necessitate perfect protection against noise.
  2. Infinitely many calls of a function. Moore (1996, 9) remarks that the computation of an integration \(h = f + \int g\) necessitates the execution of a for loop, and the computation of \(h = \mu _y f\) the execution of a while loop (see the reconstructed sketch after this list). As in the discrete case, if there is no value y such that \(f(\overrightarrow{x}, y) = 0\), the program never exits the while loop and h is undefined. Those for and while loops require an infinite number of calls to the functions g and f. If their execution time is not infinitesimal, as a reasonable definition of time should assume, the execution of those loops will take an infinite time.
  3. Divergent resources. Even if the preceding problems were solvable, the \({\mathbb {R}}\)-recursive model would still face other problems related to the minimization operator \(\mu\), which allows one to define the \(\eta\) operator, detecting the existence of a zero of an arbitrary \({\mathbb {R}}\)-recursive function:
     $$\begin{aligned} \eta _{y}f(\overrightarrow{x}, y) = {\left\{ \begin{array}{ll} 1 \quad \hbox {if} \quad \exists y\, f(\overrightarrow{x}, y)=0,\\ 0 \quad \hbox {if} \quad \forall y\, f(\overrightarrow{x}, y) \ne 0.\\ \end{array}\right. } \end{aligned}$$
     (6)
     Otherwise put, \(\eta _{y}f(\overrightarrow{x}, y)\) is the characteristic function of the set of vectors \(\overrightarrow{x}\) such that \(\mu _{y}f\) is well defined. The \(\eta _{y}\) operator is very powerful, since it allows one to restrict any partial function \(h =\mu _{y}f\) to its domain of definition, turning it into a total function \(h_{tot} = (\mu _{y}f)(\eta _{y}f)\). Readers familiar with computability theory might already anticipate that, with the help of the \(\eta _{y}\) operator, it becomes possible to solve undecidable problems. The key problem is that the computation of \(\eta _{y}f(\overrightarrow{x}, y)\) would involve the execution of a divergent function. If, as would be expected, the values taken by this function are implemented by the values of an observable quantity, this would imply that these observable quantities diverge during a finite-time dynamical evolution. Since an observable quantity is supposed to take finite, measurable values, such a behavior is the sign of an unrealistic model29 (for more details, see Moore 1996, section 8).
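Here is a minimal Python reconstruction of the structure of both loops (my sketch, not Moore's own pseudo-code; the finite step dy is a discretization artifact, where the analog loops would range over a continuum of values, hence the infinitely many calls to g and f):

```python
def integrate(f, g, x, y, dy=1e-3):
    """For loop for h = f + integral of g (differential recursion).

    In the analog model the loop ranges over a continuum of values y',
    hence infinitely many calls to g; dy is a discretization artifact.
    """
    h = f(x)                   # initial condition h(x, 0) = f(x)
    yp = 0.0
    while yp < y:              # "for y' from 0 to y"
        h += g(x, yp, h) * dy  # Euler step for dh/dy = g(x, y, h)
        yp += dy
    return h

def mu(f, x, dy=1e-3):
    """While loop for h = mu_y f (zero detection).

    Searches for a zero of f(x, .); if none exists, the loop never
    terminates and h is undefined, as in discrete mu-recursion.
    Moore's ordering of the reals by absolute value, with the negative
    value preferred, is omitted for brevity.
    """
    y = 0.0
    while f(x, y) != 0:        # infinitely many calls to f
        y += dy
    return y
```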
Like the B.S.S. and real-RAM models, though for different reasons, \({\mathbb {R}}\)-recursive functions face a fundamental implementability issue. As in the previous case, the model does not qualify as a genuine computational model, even if it has a genuine theoretical interest.30

Our list of physically unimplementable analog models makes no claim to exhaustiveness, which seems very difficult to achieve. There are many speculative models of analog computation, and many of them are models violating the physical Church–Turing thesis, or models of hypercomputation (for surveys of hypercomputational models, with an emphasis on analog models, see for instance Stannett 2004; Cotogno 2003; for a recent survey of continuous-time models, see Bournez and Campagnolo 2008). Those models of hypercomputation have well-known problems, such as lack of robustness to noise, and more general difficulties in encoding and decoding an infinite amount of information in a finite system (see Pégny 2013, chap. 6, for more details on this). Consequently, it has become a folk conjecture that analog models that are robust to noise cannot compute a non-recursive function (see Fortnow 2012). Demonstrating such a folk conjecture has proven more difficult than might be expected (see Asarin and Bouajjani 2001), and the first attempt at a general proof is very recent (see Bournez et al. 2013a). It would be worthwhile to discuss the possibility that the physical Church–Turing thesis might thus be demonstrated for analog models, but that would be a topic for another paper.

4.3 The G.P.A.C.

Let us now move on to physically implementable models. The General Purpose Analog Computer (G.P.A.C.) is the oldest theoretical model of real computability.31 Shannon conceived it in the 1930s as a modelization and generalization of existing analog computers, V. Bush's Differential Analyzer in particular (Shannon 1941). Even if it has been amended several times (see Costa and Graça 2003), it is still considered a robust model of existing analog computers. In this respect, it enjoys the privileged status of being the only generic model of existing analog computers. A G.P.A.C. is a circuit made of interconnected black boxes, called analog units. Each of these units computes a primitive function, whose inputs are parametrized by a continuous variable. From an engineering perspective, each of those black boxes should be implemented by a physical device modelized by a continuous formalism, whose inputs are continuous quantities parametrized by time.

The G.P.A.C. consists of four fundamental units executing the following primitive operations: the production of a real constant k, the addition of two values u and v, the multiplication of two values u and v, and the integration of two values u and v.32 Shannon demonstrated that those units allow the computation of many usual functions, such as polynomials, the exponential, and the trigonometric functions and their inverses.

The expressive power of the G.P.A.C., according to Shannon's first results, was nevertheless strictly inferior to that of recursive analysis. Shannon indeed demonstrated that the G.P.A.C. could only compute differentially algebraic functions.33 The function \(\Gamma (x) = \int _0^\infty t^{x-1} e^{-t}\,{\mathrm {d}}t\) is RA-computable, but it is not differentially algebraic: consequently, it is not generable by a G.P.A.C.

Since it bears upon a model that is both physically implementable and generic, this first estimate is the legitimate basis of the common belief that there is no convergence of formalisms in computability over the reals.

But since the functions generable by a G.P.A.C. are a strict subset of the Turing-computable functions, this absence of convergence does not constitute a challenge for the physical Church–Turing thesis over the reals. The physical and algorithmic Church–Turing theses give an upper bound on the expressive power of reasonable computational models; they do not mean that every reasonable model should have the same expressive power. There would be a genuine lack of convergence between formalisms if the G.P.A.C. allowed for the computation of a non-Turing-computable function, but that is not the case.

At this stage of our analysis, it is not clear that analog computation raises any serious problem for the physical Church–Turing thesis. But we might now go even further, and argue that analog computation constitutes a particularly favorable case for that thesis.

Shannon's estimate of his model's expressive power has been criticized in the recent literature. Graça (2004) remarked that in G.P.A.C. computation, it is assumed that a function f(t, x) is computed in real time t, a constraint unknown to recursive analysis. Consequently, he proposed to replace that first definition of computability according to the G.P.A.C. by a notion of approximate computation, which would make the comparison with recursive analysis more natural.

In order to reach that goal, he used an alternative characterization of the G.P.A.C.'s expressive power, due to Costa and Graça (2003): a function f is generable by a G.P.A.C. iff it is a component of the solution \(y = (y_1,\ldots , y_n)\) of the differential equation \(y'=p(y,t)\), where p is a vector of polynomials.
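For illustration (a standard example, not drawn from the paper itself), the sine function is generable in this sense, as a component of the polynomial system
$$\begin{aligned} y_1' = y_2, \quad y_2' = -y_1, \qquad y_1(0) = 0, \; y_2(0) = 1, \end{aligned}$$
whose solution is \(y_1(t) = \sin t\), \(y_2(t) = \cos t\).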

Graça (2004) has shown that the expressivity of the G.P.A.C. varies if one uses a different notion of computability. A function \(f: \subseteq {\mathbb {R}}^{n} \longrightarrow {\mathbb {R}}\) is computable by a G.P.A.C.34 by approximations if there exists an ordinary polynomial differential equation with n components \(y_1,\ldots , y_n\) and initial conditions \(x_1,\ldots ,x_n\) such that, for two particular components g and \(\epsilon\), we have \(\lim \limits _{t \rightarrow \infty } \epsilon (x_1,\ldots , x_n, t) = 0\) and \(\Vert f(x_1,\ldots ,x_n)-g(x_1,\ldots , x_n,t)\Vert \le \epsilon (x_1,\ldots , x_n, t)\).

In more intuitive terms, we consider a dynamical system \(y' = p(y, t)\) with initial condition x. For every x, the component g of the system approximates f(x) with an error whose upper bound is another component \(\epsilon\), which vanishes at infinity. With such a conception of computability, Graça has shown that the \(\Gamma\) function is actually computable by a G.P.A.C. Bournez et al. (2006) have then demonstrated that recursive analysis and the G.P.A.C. actually have equivalent expressive powers.
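To give a feel for this notion, here is a toy sketch of my own (not Bournez et al.'s construction): the polynomial system \(y' = x - y\), \(\epsilon ' = -\epsilon\), with \(y(0) = 0\) and \(\epsilon (0) = \Vert x\Vert\), computes the identity \(f(x) = x\) by approximations, since \(\Vert y(t) - x\Vert = \Vert x\Vert e^{-t} = \epsilon (t) \rightarrow 0\). A crude Euler integration illustrates the convergence:

```python
def gpac_identity(x: float, t_max: float = 20.0, dt: float = 1e-3):
    """Toy 'computation by approximations' of f(x) = x.

    Polynomial ODE system:  y' = x - y,  eps' = -eps,
    with y(0) = 0 and eps(0) = |x|, so that |y(t) - x| <= eps(t) -> 0.
    """
    y, eps, t = 0.0, abs(x), 0.0
    while t < t_max:
        y, eps = y + (x - y) * dt, eps - eps * dt  # Euler step
        t += dt
    return y, eps  # approximation of x and its (vanishing) error bound

approx, bound = gpac_identity(3.14)
print(approx, bound)  # approx close to 3.14, bound close to 3.14 * e**-20
```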

It would be worthwhile to comment on Bournez et al.'s hypotheses and demonstration in great detail, but that would be a topic for another paper. I will just make a couple of basic comments. In Shannon's original model, no constraint was imposed on the constants and initial conditions of the given ordinary differential equation. Those could be described by non-RA-computable reals, and the output of a G.P.A.C. could a priori have an unbounded rate of growth. In this case, it is very easy to show that the G.P.A.C. can compute non-RA-computable functions. Bournez et al. have chosen to restrict their study of the G.P.A.C.'s expressive power to the case where initial conditions and constants are RA-computable, and the rate of growth of solution functions is bounded, as can be seen respectively in conditions 2 and 3 of the definition below.

A function \(f:[a,b] \longrightarrow {\mathbb {R}}\) is G.P.A.C.-computable iff:
  1. There is a \(\phi\) computed by a G.P.A.C. U by approximations, with initial conditions \(\alpha _1,\ldots , \alpha _{n-1}, x\) set at \(t_{0} = 0\), such that \(f(x) = \phi (\alpha _1,\ldots , \alpha _{n-1}, x)\) for all \(x \in [a,b]\).

  2. The initial conditions \(\alpha _1,\ldots , \alpha _{n-1}\) and the coefficients of the vector of polynomials p are computable reals.

  3. If y is the solution of the G.P.A.C. U, then there exist \(c, K > 0\) such that \(y \le cK^{\Vert t\Vert }\) for \(t \ge 0\).
In intuitive terms, Bournez et al. have demonstrated that the G.P.A.C. and recursive analysis are equivalent models under two conditions:
  • The computability according to the G.P.A.C. must be well defined, so as to warrant a meaningful comparison;

  • One must only consider computable initial conditions and constants.

While the first condition is perfectly natural, the second lacks a fundamental theoretical justification for the time being. Why should the initial state of the analog machine, modelized as a dynamical system, be described by RA-computable real values?

The justification for such a postulate is more easily sought in physics than in pure mathematics. As we have seen above (see Sect. 4.1), it is a reasonable, though somewhat imprecise, empirical hypothesis that a finite-size system cannot contain an infinite amount of information. Such a hypothesis would forbid any reasonable implementation of a G.P.A.C. from encoding in its initial state the value of a non-RA-computable number, for a real number with a finite expansion is always computable. The thorough discussion of such a hypothesis, and of the general role of physical hypotheses in computability, is beyond the scope of the present paper, for it is a major topic of the current debate on hypercomputation (see Pégny 2013, chap. 6 for more details and references). In any event, Bournez et al.'s demonstration can be read as an argument in favor of the physical Church–Turing thesis in the case of analog computation.35

5 Conclusion

There is nothing wrong with computability over the reals. Quite the contrary: not only should computability over the reals not be considered a computability theory lacking the desired convergence of formalisms, but there is a sense in which it is a particularly successful branch of computation theory. To understand this point, it is necessary to distinguish two fundamental questions.

The first issue is the extension of effective computability to non-denumerable domains. Once we desacralize the requirement of termination, and replace it by Weihrauch's finiteness property, we have seen that recursive analysis is such a legitimate extension. The continuity of RA-computable functions should not be seen as a problematic property, excluding intuitively computable functions. On the one hand, our intuitions on naturality in extension do not carry over to the continuous world. More precisely, the extension of the notion of effective procedure to infinitary objects provokes a splitting of our intuitions: the integer-part function might seem intuitively computable according to a given set of intuitions, but it is not computable according to the more demanding intuitions at the root of recursive analysis, such as the effectivity of the modulus of convergence. We simply do not have a robust intuition of a set of intuitively computable functions in the real domain, which makes arguments based on such intuitions irrelevant. On the other hand, continuity is a direct consequence of the finiteness property, which is an extremely natural set of constraints on effective computation.

The second fundamental question is the simulation of analog models by effective procedures over the reals. Our first issue must be solved before we can tackle this second problem, for it is impossible to compare the respective expressive powers of analog and effective models if we have no robust definition of effective computation over the reals. We must also set aside physically unimplementable models, since our second question is actually a particular case of the physical Church–Turing thesis. The discrete-time, continuous-space models and Moore's \({\mathbb {R}}\)-recursive functions are thus not relevant for that discussion.

At this point of our analysis, recent technical results radically alter the state of the art. Our most robust model of analog computation, Shannon's G.P.A.C., was thought to have a lesser expressive power than recursive analysis. Even though that does not really constitute a problem for the physical Church–Turing thesis over the reals, it was still remarkable that realistic analog computers could not achieve the same expressive power as recursive analysis. But with a different conception of computability, it turns out that the G.P.A.C. has exactly the same expressive power as recursive analysis. Not only should computability theory over the reals not be conceived as a theory of computability without convergence of formalisms, but it enjoys a remarkable convergence of formalisms between the model of effective computability and the main implementable model of analog computation. This is the main reason why we should see it as a particularly successful branch of computation theory.

There are thus good reasons to conjecture that the physical Church–Turing thesis holds for analog models. But the exact grounds for the truth of that conjecture still remain obscure for the time being. To complete their demonstration, Bournez et al. had to postulate the computability of the initial conditions and constants of the relevant differential equations. Even assuming that the proper ground for these postulates should be sought in physics, not pure mathematics, it is still unclear what precise physical hypotheses could justify those assumptions, and thus the physical Church–Turing thesis in the analog case.

Computability over the reals is also a case study in the pitfalls of comparisons between computational models. Comparing computational models is not as straightforward as it may seem. If a set of models is conveniently regrouped under the banner "computational model over domain D", it does not follow that their expressive powers can be compared two by two without any reflection. It is always worthwhile to wonder what kind of computational procedure a given model attempts to capture; whether a given model is physically implementable; and whether the comparison between two different models is actually legitimate, that is, whether it compares two models aiming to solve the same theoretical problem.

Footnotes

  1. See for instance Turing (1936), Church (1936), Post (1936), Church (1937).

  2. This list is more a modern reconstruction of the concept than a quote of the founding fathers' view. It was heavily inspired by Knuth (1997) and Copeland (2002a).

  3. That condition can be discussed in interactive models of computation, where computation is no longer modelized by functions (see, for instance, Wegner and Eberbach 2004; Goldin and Wegner 2008; Van Leeuwen and Wiedermann 2000). Since the issues within the scope of this paper are not truly affected by those models, I will not discuss them.

  4. This property is meant for the computation of integer functions. Turing (1936), who was concerned with the computation of real numbers and functions, had to analyse the non-terminating computation of infinite sequences. I will come back to this point in Sect. 3.3, 13–14. For more details on Turing's conceptions, see Gherardi (2011).

  5. Of course, I am rephrasing Turing in modern terminology: he was discussing the mental states of a human computer (see Turing 1936, §9).

  6. The terminological choice of the adjective 'algorithmic' will be explained below, see Sect. 2.3.

  7. This is just a preliminary sketch of the discussion to come in Sect. 4.

  8. It is often said that there is no known counterexample to the Church–Turing thesis. The proposition is true, but slightly inaccurate, inasmuch as it leads one to conflate two distinct ideas: naturality in extension and the absence of a sophisticated counterexample.

  9. In his 1946 Remarks before the Princeton Bicentennial Conference on Problems in Mathematics, Gödel stressed the importance of that argument very explicitly (see Davis 1965, 84, emphasis ours):

     Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general recursiveness (or Turing's computability). It seems to me that this importance is largely due to the fact that with this concept one has for the first time succeeded in giving an absolute definition of an interesting epistemological notion, i.e., one not depending on the formalism chosen. [...] By a kind of miracle it is not necessary to distinguish orders, and the diagonal procedure does not lead outside the defined notion.

  10. Gödel would have shown little enthusiasm for Church's original formulation, but was convinced by Turing's work precisely because of that modelization argument (for more historical details, see Davis 1982). Church himself underlined this advantage of Turing's approach in his review (Church 1937, emphasis ours):

     As a matter of fact, there is involved here the equivalence of three different notions: computability by a Turing machine, general recursiveness in the sense of Herbrand–Gödel–Kleene, and \(\lambda\)-definability in the sense of Kleene and the present reviewer. Of these, the first has the advantage of making the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately-i.e. without the necessity of proving preliminary theorems.

  11. See, for instance, Shor (1997) for an early statement by a computer scientist, and Piccinini (2011), Pégny (2013), and references therein, for an extensive philosophical discussion.

  12. It is difficult to choose between those two formulations, since each of them has its own drawbacks. The first one is a little too general: an effective procedure can be said to be mathematical, but this is far from being a complete characterization of it. The second one is ambiguous. In the logic and computer science literature, "algorithm" is sometimes used as a perfect synonym of "effective procedure". But it is also used to denote "any possible computational procedure", as is the case when one discusses "quantum algorithms" or "analog algorithms". Consequently, the word "algorithm" crosses the boundary that we are trying to establish between effective procedures as a specific class of computational procedures, and the more general idea of any possible computational procedure whatsoever. Alas, the simplest choice, "effective Church–Turing thesis", has already been taken for another use in complexity theory (see, for instance, Button 2009; Bournez et al. 2013b). In this paper, I will use the adjective "algorithmic", even though I am fully aware of its shortcomings.

  13. Piccinini's views are restricted to functions with denumerable domains (Piccinini 2011, 7). But there is nothing in his analysis incompatible with an extension to non-denumerable domains.

  14. The representation of the reals by their decimal expansion is actually problematic for recursive analysis: this example is thus purely pedagogical (see Weihrauch 2000, for more details on data representation in recursive analysis).

  15. Such a description can be found in Ko (1991, 1):

      Recursive analysis studies effective computability in classical analysis; that is, it studies which mathematical notions and proofs are computable and which are not computable.

  16. I will have to ask the indulgence of the expert reader for the definition I give here, which is inspired by an old definition in Pour-El and Richards (1989). It is outdated, and used only for pedagogical purposes. My intent is just to give an intuition of the concept of computable real function that is understandable for a reader coming from a philosophical background, and formulated only with notions of recursion theory and basic analysis. I will discuss formulations referring to a machine model in Sect. 3.3. For those reasons, I do not want to get into technicalities such as representations, extensions to many-variable functions, definition over all of \({\mathbb {R}}\), and uniform continuity of computable functions over a compact domain.

  17. A rational is a type 0 object, a rational function is a type 1 object, and a functional that takes rational functions as inputs and yields rational functions as outputs is a type 2 object. A RA-computable real number r can be seen as a recursive function taking a natural number n as input and yielding a rational approximation \(r_n\) of r. A RA-computable real function \(f :\subseteq {\mathbb {R}}^{n} \longrightarrow {\mathbb {R}}\) is a functional associating such functions, and is thus a type 2 object (a minimal sketch follows).
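      A minimal Python sketch of this hierarchy (my illustration, not from the text): a RA-computable real is represented as a function from a precision parameter n to a rational within \(2^{-n}\), and a real function as a functional mapping such representations to such representations.

```python
from fractions import Fraction
from typing import Callable

# Type-1 object: a RA-computable real r, given as n |-> q_n
# with |q_n - r| <= 2^-n.
Real = Callable[[int], Fraction]

def one_third() -> Real:
    # 1/3 is exactly representable, so every approximation is exact.
    return lambda n: Fraction(1, 3)

def double(x: Real) -> Real:
    # Type-2 object: a functional on representations. Requesting
    # precision n + 1 from x guarantees |2 * q_{n+1} - 2r| <= 2^-n.
    return lambda n: 2 * x(n + 1)

print(double(one_third())(10))  # 2/3, within 2^-10 of the real 2/3
```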

  18. For more details, see Weihrauch (2000, 14–15).

  19. This set of correctness conditions is not explicit in Weihrauch's book, but seems implicit in his presentation of the concepts and in the writing of his proofs.

  20. Ker-i Ko defends the same view in Ko (1991, 3):

      Since x is a type-1 function that does not have a finite representation, machine M cannot directly "read" its input x. Instead, we must provide a more complicated mechanism to allow machine M to access the information about the real number x. In our computational model, we use the oracle machine to formalize the communication between the machine M and the input real number x.

  21. A rigorous presentation of Ker-i Ko's definition would require an introduction to the Cauchy-function formalism, which Ker-i Ko uses for reasons related to complexity in recursive analysis. Since the introduction of this formalism would be somewhat lengthy and is unnecessary for our current purposes, an informal presentation will do.

  22. In his own idiosyncratic terminology, J. Earman designates by "Grzegorczyk functions" what we have called "RA-computable functions".

  23. It should be underlined that our formulation of the Church–Turing thesis over the reals is defined up to substitution of any extensionally equivalent model, just as is the case with the thesis for integer computation. Instead of "computable by a Type 2 Turing machine", one could just as well read "computable according to recursive analysis".

  24. For instance, Moore makes that point in (1996, 1, emphasis ours):

      to discuss the physical world (or at least its classical limit) in which the states of things are described by real numbers and processes take place in continuous time, we need a different theory: a theory of analog computation, where states and processes are inherently continuous, and which treats real numbers not as sequences of digits but as quantities in themselves.

      The same point is made by Blum et al. (2000, 3):

      (...) we view a real number not as its decimal (or binary) expansion, but rather a mathematical entity as is generally the practice in numerical analysis.

  25. In the logic and computer science community, the expressions "analog model" and "model of computation over the reals" are often used as perfect synonyms. In the context of our present discussion, this terminological convention would not have been profitable, because it blurs the conceptual distinction that we are trying to highlight between effective computation over the reals and continuous computation over the reals. Therefore, I have opted for a more stringent use of the expression "analog model", which can also be found in the literature.

  26. A similar position on the B.S.S. model is taken by Ko (1991, 5):

      (...) it is apparent that no physical implementation of this model is possible.

  27. The following passage is a summary of Moore (1996, sections 4–5). The reader wishing to know more details should read the illuminating original paper.

  28. For the construction of such a non-RA-computable \({\mathbb {R}}\)-recursive number, see Moore (1996, section 11, 16–17).

  29. Moore wonders in (1996, 8) whether the last two problems are equivalent. Our analysis shows that this is not the case: even if the second problem were solved, the first one would still remain relevant.

  30. For instance, Costa and Graça (2003) have studied the class of \({\mathbb {R}}\)-recursive functions generable by a G.P.A.C.

  31. The reader might be reminded that the oldest computing machine known to historians, the Antikythera mechanism (c. 87 B.C.), is an analog machine.

  32. The integration unit takes u and v, functions of time, as inputs, and yields as output w with \(w'(t) = u(t)v'(t)\) and \(w(t_0) = \alpha\), that is, \(w(t) = \alpha + \int _{t_0}^{t} u(s)v'(s)\,{\mathrm {d}}s\). With \(v(t) = t\), this gives the ordinary integral \(w(t) = \alpha + \int _{t_0}^{t} u(s)\,{\mathrm {d}}s\).

  33. A function f(x) is differentially algebraic iff its derivatives satisfy a polynomial equation with rational coefficients, \(P(x, f(x), f'(x),\ldots , f^{(k)}(x))=0\). The exponential, for instance, satisfies \(f' - f = 0\).

  34. I will use here the distinction made by Bournez et al. (2006) between "function generable by a G.P.A.C.", which denotes the first conception of computability according to this model, and "function computable by a G.P.A.C." or "G.P.A.C.-computable function", which denotes approximate computation.

  35. A similar point was raised by Graça and Costa in their study of the G.P.A.C. The original model does not place any constraint on the continuous functions of real time that can be taken as inputs by the analog units. However, the definition of certain functions demands the continuous differentiability of the functions taken as inputs (Costa and Graça 2003, 8, emphasis ours):

      (...) from now on, we will always assume that the inputs are continuously differentiable functions of the time. And if the outputs of all units are defined for all \(t \in I\), where I is an interval, then we will also assume that they are continuous in that interval. This is needed for the following results and may be seen as physical constraints to which all units are subjected.


Acknowledgements

I wish to thank first and foremost Olivier Bournez, for many fruitful discussions. My former advisors J.B. Joinet and A. Grinbaum were also instrumental in the making of this paper.

References

  1. Aaronson, S. (2005, March). NP-complete problems and physical reality. SIGACT News. arXiv:quant-ph/0502072.
  2. Asarin, E., & Bouajjani, A. (2001). Perturbed Turing machines and hybrid systems. In Proceedings of the 16th annual IEEE symposium on logic in computer science, pp. 269–278.
  3. Blum, L., Shub, M., & Smale, S. (2000). On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines. The Collected Papers of Stephen Smale, 3, 1293.
  4. Bournez, O., Campagnolo, M. L., Graça, D. S., & Hainry, E. (2006). The general purpose analog computer and computable analysis are two equivalent paradigms of analog computation. In Theory and applications of models of computation, pp. 631–643. Springer.
  5. Bournez, O., Graça, D. S., & Hainry, E. (2013a). Computation with perturbed dynamical systems. Journal of Computer and System Sciences, 79, 714–724.
  6. Bournez, O., Graça, D. S., & Pouly, A. (2013b, May). Turing machines can be efficiently simulated by the general purpose analog computer. In Proceedings TAMC 2013, pp. 169–180, Hong Kong, China. Springer.
  7. Brattka, V., Hertling, P., & Weihrauch, K. (2008). A tutorial on computable analysis. In New computational paradigms, pp. 425–491. Springer.
  8. Button, T. (2009). SAD computers and two versions of the Church–Turing thesis. The British Journal for the Philosophy of Science, 60(4), 765–792.
  9. Church, A. (1936). An unsolvable problem of elementary number theory. American Journal of Mathematics, 58(2), 345–363.
  10. Church, A. (1937). Review: A. M. Turing, On computable numbers, with an application to the Entscheidungsproblem. Journal of Symbolic Logic, 2(1), 42–43.
  11. Copeland, J. (2002a). Accelerating Turing machines. Minds and Machines, 12(2), 281–300.
  12. Copeland, J. (2002b). Hypercomputation. Minds and Machines, 12(4), 461–502.
  13. Costa, J. F., & Graça, D. (2003). Analog computers and recursive functions over the reals. Journal of Complexity, 19(5), 644–664.
  14. Cotogno, P. (2003). Hypercomputation and the physical Church–Turing thesis. The British Journal for the Philosophy of Science, 54(2), 181–223.
  15. Davis, M. (Ed.). (1965). The undecidable. New York: Raven Press Books.
  16. Davis, M. (1982). Why Gödel didn't have Church's thesis. Information and Control, 54(1/2), 3–24.
  17. Deutsch, D. (1985). Quantum theory, the Church–Turing principle and the universal quantum computer. Proceedings of the Royal Society of London A: Mathematical and Physical Sciences, 400(1818), 97–117.
  18. Earman, J. (1986). A primer on determinism (Vol. 32). Berlin: Springer.
  19. Fortnow, L. (2012). The enduring legacy of the Turing machine. The Computer Journal, 55(7), 830–831.
  20. Gandy, R. (1980). Church's thesis and principles for mechanisms. Studies in Logic and the Foundations of Mathematics, 101, 123–148.
  21. Geroch, R., & Hartle, J. B. (1986). Computability and physical theories. Foundations of Physics, 16(6), 533–550.
  22. Gherardi, G. (2011). Alan Turing and the foundations of computable analysis. Bulletin of Symbolic Logic, 17(3), 394–430.
  23. Goldin, D., & Wegner, P. (2008). The interactive nature of computing: Refuting the strong Church–Turing thesis. Minds and Machines, 18(1), 17–38.
  24. Graça, D. (2004). Some recent developments on Shannon's general purpose analog computer. Mathematical Logic Quarterly, 50(4–5), 473–485.
  25. Knuth, D. E. (1997). Art of computer programming, volume 1: Fundamental algorithms (3rd ed.). Boston: Addison-Wesley Professional.
  26. Ko, K. I. (1991). Complexity theory of real functions. Boston: Birkhäuser Boston Inc.
  27. Marguin, J. (1994). Histoire des instruments et machines à calculer : trois siècles de mécanique pensante, 1642–1942. Hermann.
  28. Moore, C. (1996). Recursion theory on the reals and continuous-time computation. Theoretical Computer Science, 162(1), 23–44.
  29. Bournez, O., & Campagnolo, M. (2008). A survey on continuous time computation. In New computational paradigms, pp. 383–423. Springer.
  30. Pégny, M. (2013). Sur les limites empiriques du calcul. Calculabilité, complexité et physique. Philosophie, Université de Paris 1.
  31. Piccinini, G. (2011). The physical Church–Turing thesis: Modest or bold? The British Journal for the Philosophy of Science, 62(4), 733–769.
  32. Pitowsky, I. (1990). The physical Church thesis and physical computational complexity. Iyyun, 39, 81–99.
  33. Post, E. L. (1936). Finite combinatory processes-formulation 1. The Journal of Symbolic Logic, 1(3), 103–105.
  34. Pour-El, M. B., & Richards, J. I. (1989). Computability in analysis and physics. Berlin: Springer.
  35. Shagrir, O., & Pitowsky, I. (2003). Physical hypercomputation and the Church–Turing thesis. Minds and Machines, 13(1), 87–101.
  36. Shannon, C. E. (1941). A mathematical theory of the differential analyser. Journal of Mathematics and Physics, 20, 337–354.
  37. Shor, P. W. (1997). Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Journal on Computing, 26, 1484–1509.
  38. Stannett, M. (2004). Hypercomputational models. In Alan Turing: Life and legacy of a great thinker, pp. 135–157. Springer.
  39. Turing, A. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42, 230–265.
  40. Van Leeuwen, J., & Wiedermann, J. (2000). Breaking the Turing barrier: The case of the Internet. Technical report, Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague.
  41. Wegner, P., & Eberbach, E. (2004). New models of computation. The Computer Journal, 47(1), 4–9.
  42. Weihrauch, K. (2000). Computable analysis: An introduction. New York: Springer.

Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  1. CNRS, IHPST, ANR-DFG Project «Beyond Logic», Université de Paris 1 Panthéon-Sorbonne, Paris, France
