1 Introduction

The summary of The Philosophy of Computer Science entry in the Stanford Encyclopedia of Philosophy by Nicola Angius, Giuseppe Primiero, and Raymond Turner (Angius et al. 2021) can help us understand the underlying assumptions we will find in Giuseppe Primiero’s book On the foundations of computing (Primiero 2020): the philosophy of computer science is the study of the fundamental questions and issues arising within the field of computer science, as well as those that arise from the practice of software development and its commercial and industrial applications. This includes the ontology and epistemology of computational systems, and the problems related to their specification, programming, implementation, verification, and testing (Angius et al. 2021).

Primiero is firmly anchored in the present, with a rich knowledge of the history of computing, and he approaches computing with the sensibility of a logician. His reflections on the past answer the question of how we got where we are today, and his outlook on the future concerns what happens with what we have today. All of these are important questions, and they are answered in considerable detail, with a wealth of valuable references.

Primiero identifies three main foundations of the discipline and divides the book accordingly: the mathematical foundation, the engineering foundation, and the experimental foundation.

To an extent, this parallels the classical division of knowledge production between theory, engineering (a constructive approach to knowledge acquisition that typically results in artifacts), and experiment (though here narrowed down to computer models and simulations).

The title of the book, On the foundations of computing, implicitly suggests that it addresses all the most important foundations of computing. However, one very important foundation of computing is missing: natural science. As Peter Denning wrote, computing is a natural science (Denning 2007), and he did not mean a tool for natural science, but a natural science itself. Both Peter Denning and Paul Rosenbloom write about computing as the fourth great scientific domain, alongside the natural sciences, social sciences, and formal sciences (Denning and Rosenbloom 2009; Rosenbloom 2012). As a natural science, computing belongs to the domain of the natural sciences; this is the view of computing as a process on natural informational structures. At the same time, computing is more than that: it constitutes a scientific domain of its own, an approach to science with its own ontology and epistemology. A basic fact about computing is its embodiment, which is often forgotten under the excuse of “substrate independence”.

As Susan Stepney diagnosed, material computation is a neglected pillar that we should consider when discussing the pillars of the computing field (Stepney 2008).

In Primiero’s book, theory is represented by mathematics and logic, which certainly constitute significant foundational layers of computing as a discipline. But are they the only theoretical roots of computing? What about the theoretical foundations in the physical sciences?

In the vast seascape of modern computing, On the foundations of computing looks like a skillful sailing voyage that successfully avoids the temptations of new and emerging theories and implementations of physical computing, such as unconventional computing or natural computing. At the end of the journey, the ship is in perfect condition, as if nothing had happened since the time of Turing and his historic contributions to the theory of computation and computability.

The book focuses on the view of computing as a finished, explained, secured, and verified research field, reminiscent of a museum with its logical structure and carefully organized artifacts. It consistently resists addressing emergent, unfinished, uncertain territory such as unconventional computing or natural/morphological computing. There are still no regular commercial unconventional devices, and yet a substantial amount of theoretical and experimental ground has already been covered by solid research in those fields and is worth noticing. Andrew Adamatzky aptly characterizes the field:

“Unconventional computing is a field without definition. There are two kinds of unconventional computing. The first one is about the implementation of computing systems in non-silicon substrates, e.g., chemical systems, plants, electricity, and slime mold. (…) The second kind deals with dissident thinking, by challenging current stereotypes and dogmas. (…) we can call then this second kind of unconventional computing ‘uncommon thinking about computing.’” (Adamatzky 2018).

The field of unconventional computing is to a large extent about novel hardware and novel concepts of computing, as argued also by Ziegler (2020). It is also about new models of computation. Hava Siegelmann (Siegelmann 2013) explains that there have been numerous efforts to support the Church-Turing thesis, which posits that all physically realizable systems are no more powerful than classical models of computation. However, the analog shift map, a simple yet highly chaotic dynamical system, has been proposed as a counterexample to this thesis. This system has computational power that exceeds the Turing limit (also known as “super-Turing”) and is capable of computing in a way similar to neural networks and analog machines. Siegelmann suggests that this dynamical system may describe natural physical phenomena.
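To give a flavor of the construction, the following is a schematic rendering of the analog shift map, with the notation simplified here; the precise definitions are given in (Siegelmann 1995).

```latex
% Schematic form of the analog shift map (notation simplified;
% see Siegelmann 1995 for the exact definitions).
% A configuration is a dotted bi-infinite sequence over a finite alphabet:
%   s = \ldots a_{-2}\, a_{-1}\, .\, a_0\, a_1\, a_2 \ldots
\[
  \Phi(s) \;=\; \Pi^{F(s)}\bigl(s \oplus G(s)\bigr)
\]
% Here F(s) \in \mathbb{Z} is a shift amount determined by a finite dotted
% subword of s, the operation \oplus substitutes the word G(s) into s, and
% \Pi^{k} moves the dot k places. When the substituted word G(s) is finite,
% this is Moore's generalized shift, which is Turing-equivalent; in the
% analog shift, G(s) may be infinite (e.g., the binary expansion of a real
% parameter), which is what lifts the power beyond the Turing limit.
```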

I must admit from the beginning that I am biased: I approach the book with a specific background. Starting as a physicist, I turned to computer science, then moved into the philosophy of computing, then to (embodied, embedded, enacted) computational cognition, and recently to (computer-based) interaction design. For me, the physical basis of computation is the most important foundation and the sine qua non of every account of the foundations of computing.

Thus, in my view, if the book aims at presenting the foundations of contemporary computing, it should mention the “dark matter” of physical computing, that is, unconventional and natural computing, as well as the models of computation developed after the Turing machine.

From this perspective, a more appropriate title for Primiero’s book in its present form would be: “On the foundations of conventional computing”.

2 Unconventional Computing. Beyond the Turing Machine Model of Computation

A short presentation of selected literature on unconventional computing can illustrate the level of maturity of the field. For an introduction, see (Oltean 2009). The work of Peter Wegner and Dina Goldin on different aspects of unconventional computing includes computation beyond Turing machines in the form of interactive computation as a new paradigm: its principles, its architecture, and the paraconsistency of interactive computation (Goldin and Wegner 2002, 2006; Goldin et al. 2006).

The work of Hava Siegelmann (Siegelmann and Sontag 1994; Siegelmann 1995, 2013; Taylor et al. 2015) also presents significant contributions to the field, addressing computation beyond the Turing limit, Turing on super-Turing and adaptivity, and analog computation via neural networks.

Jack Copeland, Carl Posy, and Oron Shagrir contributed to the conceptual foundations of the notion of computability with their research on the Turing model of computation and its limits in the book Turing, Gödel, Church, and Beyond (Copeland et al. 2015). Interesting insights into the nature of the connection between physics and computing can be found in the book Physical Perspectives on Computation, Computational Perspectives on Physics (Cuffaro and Fletcher 2018).

One of the presuppositions of the Turing machine model of computation is the existence of stable states, with computation proceeding through a succession of such states. However, there are computational processes whose intermediate states are not well-defined. Wolfgang Maass, Thomas Natschläger, and Henry Markram presented real-time computing without stable states, a new framework for neural computation based on perturbations (Maass et al. 2002).
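To make the contrast with state-by-state symbolic computation concrete, here is a minimal sketch in the spirit of such perturbation-based frameworks. It is a discrete-time, echo-state-style toy of my own, not the liquid state machine of Maass, Natschläger, and Markram, and all sizes and parameters are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random recurrent "reservoir": the computation lives in its transient
# responses to input perturbations, never in stable attractor states.
N = 200                                           # reservoir size (illustrative)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1
w_in = rng.normal(0, 1, N)

def run_reservoir(u):
    """Return the perturbed reservoir state at every time step."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)           # a transient, not a fixed point
        states.append(x.copy())
    return np.array(states)

# Toy task: reconstruct, in real time, the input signal delayed by 5 steps.
u = rng.uniform(-1, 1, 1000)
X = run_reservoir(u)
y = np.roll(u, 5)

# Only a linear readout is trained (least squares); the reservoir dynamics
# are fixed and never settle into stable states during the computation.
w_out, *_ = np.linalg.lstsq(X[100:], y[100:], rcond=None)
print("mean squared error:", np.mean((X[100:] @ w_out - y[100:]) ** 2))
```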

Among unconventional computing approaches, natural computing plays the most prominent role. It involves the study of computation in the physical world, which has led to a fundamentally new understanding of computation (Burgin and Dodig-Crnkovic 2015; Dodig-Crnkovic 2011). Research in natural computation involves a bidirectional learning process: the natural sciences are influenced by ideas of information processing, while computing incorporates concepts and approaches from the natural sciences (Rozenberg and Kari 2008). Natural computation includes computing techniques that draw inspiration from nature, the use of computers to simulate natural phenomena, and computing with natural materials (e.g., molecules and atoms).

Fields of research within natural computing include biological/organic computing, artificial neural networks, swarm intelligence, artificial immune systems, computing on continuous data, membrane computing, artificial life, DNA computing, quantum computing, neural computation, evolutionary computation, evolvable hardware, self-organizing systems, emergent behaviors, machine perception, and systems biology. Evolution is a prime example of a natural computational process, specifically morphological computation (Pfeifer and Iida 2005; Hauser et al. 2014), which produces optimized body shapes and materials for a given class of organisms in a particular environment (Dodig-Crnkovic 2017).
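To make the algorithmic skeleton that the evolutionary approaches in this list share explicit, here is a minimal evolutionary computation sketch. It is my own toy illustration, not a model of morphological computation; the genome, the fitness function, and all parameters are invented for the example.

```python
import random

random.seed(1)

# Toy "morphology": a genome of 20 real-valued parameters. Fitness rewards
# genomes close to an arbitrary, purely illustrative environmental optimum.
TARGET = [0.7] * 20

def fitness(genome):
    return -sum((x - t) ** 2 for x, t in zip(genome, TARGET))

population = [[random.random() for _ in range(20)] for _ in range(50)]

for generation in range(100):
    # Selection: keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    # Variation: one-point crossover followed by Gaussian mutation.
    children = []
    while len(children) < 25:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(20)
        child = [x + random.gauss(0, 0.02) for x in a[:cut] + b[cut:]]
        children.append(child)
    population = survivors + children

print("best fitness found:", fitness(max(population, key=fitness)))
```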

Biology (living organisms with their relationships and processes) can be computationally modeled by natural computation, as Stanley Salthe shows (Salthe 2013). Sloman noted the irrelevance of Turing machines to AI (Sloman 2002) and the necessity of other models of computation for modeling intelligence.

The main differences between the Turing model of computation and unconventional (non-Turing) computation are characterized by Bruce MacLennan, whose intention is not to promote models of computation that surpass the capabilities of Turing machines (TMs), but rather to highlight the importance of models with different, orthogonal notions of power. MacLennan recognizes that models are specific to a particular domain of application and are not necessarily applicable beyond that domain. He therefore examines the assumptions and context of the TM model and demonstrates that it is inappropriate for natural computation (computation that occurs in, or is inspired by, nature). In light of this, he argues that a more inclusive definition of computation is necessary, one that takes into account alternative models, particularly analog models, in addition to TMs (MacLennan 2004).

The situation in the domain of computability, with Turing versus non-Turing models of computation, can be compared to that of Euclidean versus non-Euclidean geometries. Mark Hogarth noticed that recent research on non-Turing computability can be compared to modern geometry, in which there is no absolute dividing line between the computable and the uncomputable. If this is true, then any claim about the exact location of this supposed boundary would be misguided; the Church-Turing thesis is an example of such a claim.

Not many textbooks take such a broad view of computation; here I highlight two notable exceptions. The first is John E. Savage’s book Models of Computation: Exploring the Power of Computing (Savage 1997). In it, Savage presents a new perspective on theoretical computer science that emphasizes resource tradeoffs and complexity classifications rather than the structure of machines and their connections to languages. This approach reflects a teaching method motivated by the increasing importance of computational models that are more practical than the abstract models studied in the 1950s, 1960s, and early 1970s. The book assumes that readers have some background in computer organization and uses circuits to simulate machines with memory, illustrating how tradeoffs between parameters of computation, such as space and time, govern all computations performed by machines with memory. Topics such as space-time tradeoffs, memory hierarchies, parallel computation, and circuit complexity are integrated throughout, with a focus on finite problems and concrete computational models.
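To illustrate the kind of tradeoff Savage has in mind, consider a toy example of my own (not taken from the book): answering many membership queries about subset sums either by paying memory once for a precomputed table or by paying time anew on every query.

```python
# A toy space-time tradeoff: the same queries can be answered with more
# space and less time, or with less space and more time.

def reachable_sums(items):
    """Precompute the set of all subset sums once: more space, cheap queries."""
    sums = {0}
    for x in items:
        sums |= {s + x for s in sums}
    return sums

def has_subset_sum(items, target):
    """Recompute from scratch on demand: minimal extra space, repeated work."""
    sums = {0}
    for x in items:
        sums |= {s + x for s in sums}
    return target in sums

items = [3, 7, 11, 19, 23, 31]
table = reachable_sums(items)                          # pay space once...
fast = [q in table for q in range(100)]                # ...then each query is cheap
slow = [has_subset_sum(items, q) for q in range(100)]  # or pay time per query
assert fast == slow
print("both strategies agree on all 100 queries")
```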

A more recent book, Models of Computation: An Introduction to Computability Theory by Maribel Fernández (Fernández 2009), presents another example of an approach to computing informed by recent research developments in the field. The book is divided into two parts. The first covers traditional models of computation used in early studies of computability: automata and Turing machines, recursive functions, the lambda calculus, and logic-based models of computation. The second covers new object-oriented and interaction-based models and includes chapters on concurrency and on emergent models of computation inspired by quantum mechanics and systems biology.

3 Natural Sciences Shaping Computing, and Computing as a Natural Science

The third part of the volume (Primiero 2020) analyzes the methods and principles of experimental sciences founded on computational methods, studying the use of machines to perform scientific tasks, with a focus on computer models and simulations. The topics covered are computational models, computational experiments, computer simulations, and controllability and explanation in experimental computing. I agree with Symons and Abumusab that this last section of the book is the least developed (Symons and Abumusab 2021).

This part answers the question of how computing helps scientific disciplines. It is not about how scientific disciplines (physics, chemistry, biology, cognitive science) have been shaping computing and how they continue to influence it. It is more about the future in terms of the computational foundations of existing technology. However, new approaches have already emerged, especially in the field of physical computing, including quantum computing. “The futures have already happened,” as argued by Peter Drucker, who described such trends not as predictions but as conclusions based on past events. The full impacts of these trends are still to come, while few have probably considered how these futures will affect their own work and organizations (Drucker 1989).

Looking into the natural sciences, one can find strong connections between physics and computing, especially analog computing, but also modern natural computing (Rozenberg, Bäck and Kok 2012). Connections between biology and computing are old, going back to Heinz von Foerster’s Biological Computer Laboratory at the University of Illinois in the 1950s (Asaro 2007). Research in biological computing/information processing is a very lively field, especially when it comes to basal cognition; see the work of Michael Levin (Levin et al. 2021; Lyon et al. 2021), but also neurocomputation (Piccinini and Shagrir 2014).

As Adleman anticipated, “Biology and computer science—life and computation—are related. (…) At their interface, great discoveries await those who seek them.” (Adleman 1998).

It is also important to notice that in the communication between the natural sciences and computing, the learning process goes both ways, as cogently argued in The Many Facets of Natural Computing by Rozenberg and Kari, who find that “natural computing builds a bridge between computer science and natural sciences” (Rozenberg and Kari 2008).

Furthermore, as two examples of the new trends mentioned above, we can name the work of Simeonov et al. (2013), who answer in the positive the question of whether biology can create profoundly new mathematics and computation, and Leslie Valiant’s book Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World (Valiant 2013), which addresses the fundamental question of real-world computing with limited resources (space, time, energy, material), uncertainty, and finite precision.

4 Comparison of Primiero’s Book with Related Earlier Books

The pioneering book on the subject of the philosophy of computing, Floridi’s Philosophy and Computing: An Introduction (Floridi 1999), introduced the important new relationships between computing and philosophy, in particular through the relationship between information and computation: the philosophy of information and the philosophy of computation. The book addresses the following themes: philosophy and the digital environment, the question “what is a computer?”, the Internet, the infosphere, databases and hypertexts, and artificial intelligence.

Similar in scope, Timothy Colburn’s book Philosophy and Computer Science (Colburn 2000) focuses on the following: the philosophical foundations of artificial intelligence (the definition and scope of AI; AI and the history of philosophy; AI and the rise of contemporary science and philosophy); the new encounter of science and philosophy (AI and logic; models of the mind; models of reasoning; the naturalization of epistemology); and the philosophy of computer science (computer science and mathematics; two views of computer science, namely the engineering of program solutions and experimental science; abstraction in computer science; software, abstraction, and ontology).

Unlike Floridi’s and Colburn’s books, which investigate the relationships between computing and philosophy, a book that appeared fifteen years later, and is closer in scope and goals to Primiero’s, with partly overlapping themes, is Matti Tedre’s The Science of Computing: Shaping a Discipline (Tedre 2014). It delves into the debates that formed the discipline of computing, discussing science, engineering, and mathematics; computer scientists and mathematicians (the theoretical roots of modern computing; connections to mathematics; the formal verification debate); engineering (engineering the modern computer; software engineering); and the science of computing (a name; the science of the artificial (experimental computer science and the fundamental question); empirical computer science; experimental computer science; the science of the natural).

Tedre highlights the following three traditions in the discipline of computing: the logico-mathematical tradition, the engineering tradition, and the scientific tradition.

Within the scientific tradition, Tedre analyzes both the science of the artificial and the science of the natural aspects of computing.

In his book, Giuseppe Primiero has somewhat narrowed down the scope of computing. His mathematics- and logic-heavy approach has its merits, but I approach the book from the perspective of a researcher interested in the roots of computing in the natural sciences, especially physics (quantum computing, among others), chemistry, and biology, and nowadays increasingly also cognitive science (cognitive computing and artificial intelligence). Primiero’s book could be viewed solely from a historical perspective, but at the same time it builds an outlook on the future. Sorting out, for example, the naturally embodied view of computation leaves us without connections to important developments in quantum computing, bioinformatics, medicine, and neuroscience. The absence of dedicated chapters on parallel computing leaves the reader with the feeling of a missing connection to the real life of computing today: no Internet, no neural computing, and no biocomputing in general.

Another closely related book is William J. Rapaport’s forthcoming Philosophy of Computer Science: An Introduction to the Issues and the Literature (Rapaport 2023), which addresses, among others, the following questions: What is philosophy? What is computer science, and what is its relationship to science and to engineering? What are computers, computing, algorithms, and programs? How do computers and computation relate to the physical world? What is artificial intelligence, and should we build AIs? Should we trust decisions made by computers?

Earlier versions of Rapaport’s book used to be freely available online, and I used them as readings in my courses, as I did with Tedre’s book. In the present context, Rapaport’s chapter “How do computers and computation relate to the physical world?” is relevant, as it explicitly acknowledges the importance of the physical aspects of computation.

Both Tedre and Rapaport acknowledge the fundamental role of physical computing. By contrast, the physical aspects of computing in Primiero’s book are much less distinct. “Physics” and “physical” are mentioned in the text more than three hundred times, but as labels, without further detail, to refer to a level of abstraction. However, it is exactly in the particulars of the morphology of a specific physical layer that the future of (unconventional) computing is embedded.

As an aside, the division made in the book between experimental computing and physical computing is puzzling. Where would one place experimental physics, given those two possibilities?

5 Future Computing, New Models of Computation, and Limited Resources

Puzzling as it may sound, the book titled “On the foundations of computing” gives no answer to the question “what is computation?” Instead, the question asked is “what is computable?” It is implicitly presupposed that we agree on what it means to compute and that this is a given. Yet a lot of contemporary research is searching for new ways to compute. Computing, like other research fields, is in constant development and evolution.

Perhaps the title of the book should be “On the formal/logical foundations of computing, based on the Turing machine model of computation”, so that leaving out the physical side of computing could be justified.

Indeed, computing is a formal science, but it is also a natural science, and its embodiment and embeddedness are fundamentally important for cognitive computing (embodied, embedded, enactive, and extended) and for artificial intelligence (AI), which are changing our fundamental understanding of computing. According to Will Douglas, artificial intelligence is transforming the field of computing in three major ways: the way computers are constructed, the way they are programmed, and the way they are used; ultimately, it will change their very purpose. Pradeep Dubey, director of the parallel computing lab at Intel, says: “The core of computing is changing from number-crunching to decision-making” (Douglas 2021).

A similar and very significant programmatic claim about the future of computing has been signed by internationally leading scientists, researchers, engineers, practitioners, and academics in the fields of cloud computing, AI, and quantum computing, concerning current research and potential future directions for autonomic computing.

Even though efforts have been made to develop autonomic models for managing computer resources, from individual resources such as web servers to resource ensembles like data centers, the integration of artificial intelligence (AI) and machine learning (ML) to improve resource autonomy and performance at scale remains a significant challenge. The incorporation of AI/ML to achieve autonomy and self-management of systems can be implemented at different levels of granularity, from full to human-assisted automation. Gill et al. (2022) explore the challenges and opportunities of using AI and ML in next-generation computing for emerging paradigms, including cloud, fog, edge, serverless, and quantum computing environments.

A lot of future computing is related to autonomic systems with cognitive and intelligent computing. These are fundamentally embodied (physical). Cognitive computing and AI are fields that learn heavily from natural physical systems.

Samson Abramsky asked fundamental questions such as “Why do we compute? What is computed? What is a process?” He argues:

“We need a theory of the dynamics of informatic processes, of interaction, and information flow, as a basis for answering such fundamental questions (…).

What are the analogues to Turing-completeness and universality when we are concerned with processes and their behaviours, rather than the functions which they compute?” (Abramsky 2008, emphasis added).

6 An Example of a Different Focus, Given the Existing Research Results

As an illustration of the choice of focus in Primiero’s book, we can look at the inclusion of the works of Peter Denning in the context of the foundations of computing.

The topics addressed by Denning that are included in the book, and that describe computing/computer science in general, are: “Exponential laws of computing growth” (Denning and Lewis 2017), “Is computer science science?” (Denning 2005), “Computer science” (Denning 2000), “Computing as a discipline” (Denning et al. 1989), “A discipline in crisis” (Denning et al. 1981), and “Performance evaluation: experimental computer science at its best” (Denning 1981).

However, the following fundamental works of Denning are not mentioned in the book:

“Great Principles of Computing” (Denning 2003), “Computing Is a Natural Science” (Denning 2007), “Computing: The Fourth Great Domain of Science” (Denning 2009), “Computing Science: The Great Principles of Computing” (Denning 2010a), “What Is Computation?: Editor’s Introduction” (Denning 2010b), “Structure and Organization of Computing” (Denning 2014), “Computing’s Paradigm” (Denning and Freeman 2009), “Great Principles of Computing” (Denning and Martell 2015), and “The Fourth Great Domain of Science” (Denning and Rosenbloom 2009).

From the perspective of the foundations of computing, the second list of Peter Denning’s works is more important than the first. It is a question of the view of computing, the issue of the “shifting identities in computing: from a useful tool [the first list] to a new method and theory of science [the second list]”, as addressed by Matti Tedre and Peter Denning in the book Informatics in the Future (Werthner and Harmelen 2015), pp. 1–16.

7 Conclusion

The aim of this review is to point out the importance of physical computation as an indispensable and central foundation of computing. It is typically given a less prominent place in the existing books that introduce computing as a discipline, its foundations, and its philosophical aspects; books that aim to bring clarity to the history, current processes, and possible futures of computing as a profession, a technology, a methodological tool, and a domain of science. I hope that the inevitability of giving a more central place to the physical side of computing will soon become even more obvious with the increasing use of autonomous, intelligent, and cognitive computation, as well as of paradigms based on the physical properties of computing substrates, such as quantum computing and cognitive/intelligent computing. The necessity of novel methods and mechanisms of computing, together with the increasing awareness of the acute need for a careful use of resources, in the form of energy, space, and materials, but also of human cognitive resources, will hopefully trigger new and revised ideas about what constitutes, and what should constitute, the foundations of computing.