In the past, philosophy of science has been dominated by an analysis of physics and of the guiding principles underlying theory- and model-construction therein, such as the pursuit of simplicity. In physics, simplicity indeed appears to occupy a special role: In his Dreams of a Final Theory, the late Steven Weinberg (1993, pp. 148–9; emph. added), for instance, claimed that physicists “demand simplicity […] in [their] principles before [they] are willing to take them seriously,” and no less a figure than Albert Einstein (1934, p. 168) once declared that:

It is essential […] that we can arrive at [theoretical] constructions and the laws relating them one with another by adhering to the principle of searching for the mathematically simplest concepts and their connections.

More recently, particle physicist Gian Giudice (2010, p. 83) has put the relevance of simplicity to physics thus:

By moving towards smaller distances, we discover that the variety and complexity of our world is merely disguising the simplicity of hidden fundamental laws. The apparent chaos of the macroscopic world is magically resolved into a neater order […].

Hence, many physicists seem to believe that, by zooming in on reality both theoretically and experimentally, an inherent, comprehensible order emerges out of the apparent chaos of the world around us: a simplicity emerging out of complexity.

It is quite conceivable that our traditional philosophical views on science have been skewed by this focus on the principles underpinning physics. However, when Kuhn (1977, p. 332) offered his famous list of theoretical virtues—accuracy, consistency, scope, fruitfulness, and, last but not least, simplicity—he was not solely concerned with the physics of Newton and Einstein, but just as much with the chemistry of Lavoisier and with Darwin’s theory of the origin of biological species. Similarly, the two main case studies in Sober’s (2015) recent book on Ockham’s well-known general precaution against unnecessary complexity are (evolutionary) biology and psychology, with modern physics occupying merely a one-page digression on James Clerk Maxwell.

The question thus arises as to what role simplicity actually plays in science overall. Have certain individuals in the history of science been taken in by the practice and success of physics, or can simplicity justifiably be seen as a more general guiding principle of science? If so, what exactly is its role? Is it epistemic in nature, as effectively claimed by Einstein and Giudice, or merely a pragmatic aid to theorizing?

Taking an interdisciplinary view on the matter, the present Topical Collection, which arose out of previous efforts to obtain sensible answers, is devoted to these and similar questions. These previous efforts came in the form of a 2019 conference entitled Simplicities & Complexities: Interdisciplinary Perspectives on Simplicity and Complexity in Scientific Knowledge and Practices at Bonn University (Germany), hosted by the international and interdisciplinary research unit The Epistemology of the Large Hadron Collider. This conference combined the perspectives of scholars from, among other fields, physics, chemistry, ecology, medicine, linguistics, and anthropology with those of scholars from philosophy and from science and technology studies (see Chall and Martens, 2020). Based on the success of this conference, the editors of the present Topical Collection, all of whom were affiliated with the very same research unit at the time, decided to take up the task of editing a follow-up publication.

The papers in this collection do not coincide with the contributions to said conference tout court, but there is at least some overlap. In particular, the paper “Simplicity of what? A case study from generative linguistics” by Giulia Terzian and María I. Corbalán (2021) was the winning essay of the conference’s Simplicities & Complexities Essay Competition. In their paper, Terzian and Corbalán discuss the Minimalist Program in generative linguistics, which aims to explain how the human language faculty works and to describe its properties and evolution.

The authors identify two general sorts of notions of simplicity employed within that program: an ‘external’ notion, which serves as a theoretical value for theories of first language acquisition, and ‘internal’ or ontological notions, which pertain to the structure of the first-language-acquisition process and to the much-debated ‘Universal Grammar’ assumed by the Minimalist Program, i.e., the general, inborn language faculty from which grammatical rules are deduced in accordance with data, rather than induced from them. Throughout the Minimalist literature, Terzian and Corbalán argue, these two distinct sorts of notions are conflated. Underlying this pervasive conflation, they suggest, is the expectation that both sorts of simplicity converge: we can use simpler theories because the language faculty and the process of language acquisition happen to be simple.

However, are we even justified in employing either of these two notions of simplicity in the first place, and do they in fact converge? The main contribution of the paper, besides the desirable clarifications above, is a novel affirmative and naturalistic answer: drawing on various philosophy-of-science discussions of theoretical virtues, the authors offer a justification for simplicity as a theoretical virtue, as long as it plays an epistemic role “indicative of understanding”. Furthermore, based on insights from cognitive science, the authors argue that one can justify the simplicity of the language faculty by arguing that it is inherited “from domain-general features of our cognitive system”. Accordingly, a dedicated theory of Universal Grammar, as presented e.g. by Chomsky (1981), though perhaps false in certain details, can increase our understanding by establishing a theoretically simple account of the workings of these features in the case of language. This, in turn, leads to a revised version of the ontological notions, wherein language is no longer intrinsically simple, but simple in virtue of the workings of the domain-general mechanisms.

Mousa Mohammadian (2021) turns his focus to physics and argues against instrumentalist views of simplicity as defended by Hempel or, more recently, Sober. He argues that if there are theoretical virtues that are constitutive of the aims of science, then simplicity should be counted among them.

As explained above, Mohammadian restricts himself to scientific theorizing in physics and addresses theory choice and theory development therein. Theoretical virtues that constitute the aims of science in this context are those virtues which inform the choice between rival theories or the direction that theory development should take. Mohammadian’s goal in his paper is to show that if simplicity is understood as merely a means to achieving these aims, the rationality of science cannot be preserved. He argues for this by presenting four cases in which an instrumentalist view of simplicity leads to counterintuitive results, which can be avoided by viewing simplicity as an epistemic aim of scientific theorizing, especially in physics.

Thomas Vogt (2021) discusses the role of simplicity in chemistry and its function of providing “vague ideas” therein. To this end, he sketches the history of the periodic system of elements (PSE) and describes how the assumption of simplicity influenced the search for analytical and geometrical structures in the relationships between elements. Vogt employs the philosophical frameworks of Popper, Kuhn, and Lakatos and argues that sticking with vague concepts and temporarily ignoring Kuhnian anomalies was important for the long-term progress of the PSE. He exemplifies this in a selective history of models and concepts used during the exploration of chemical periodicity.

Vogt argues that vague concepts (such as the concept of the basic chemical element) are adaptable to epistemically complex processes. He also argues against a physics-based reductionism and instead frames the crossover from physics to chemistry not as a loss of accuracy and precision but as an expansion of the use of vague concepts, such as similarity, which allows for classification. He offers historical evidence that vague ideas can be productive vehicles of conceptual progress by describing, for example, Prout’s hypothesis and Döbereiner’s triads.

Complexity, much like simplicity, is often invoked without a precise definition. When it is described, this is hardly ever done in a univocal way. It is at best a multifaceted notion with many conflicting and sometimes vague definitions. Authors such as Mitchell (2003) and Bechtel and Richardson (1993), who have made extensive use of the notion of complexity, leave us with a plurality of notions of complexity that, for Fridolin Gross (2021), raise further questions which have not been adequately addressed. In response, Gross proposes a unifying definition of complexity that captures many of its varying uses. His strategy draws on Gell-Mann’s notion of “effective complexity,” which is akin to algorithmic complexity. Here, the measure of complexity is relative to the mechanism that produces the behavior and depends on the choice of the ensemble of which the entity is considered a part, which matches the practice of biology well, where one is often concerned with classes of entities rather than with individuals.
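As a rough schematic (our gloss on Gell-Mann’s proposal, not Gross’s own formalism), the effective complexity \(\varepsilon\) of an entity \(x\) is the algorithmic information content of its regularities, i.e., of the ensemble \(\mathcal{E}_x\) of which \(x\) is judged a typical member, while the entity’s total information also includes the randomness of that ensemble:

\[ \varepsilon(x) \;\approx\; K(\mathcal{E}_x), \qquad \Sigma(x) \;\approx\; K(\mathcal{E}_x) + S(\mathcal{E}_x), \]

with \(K\) denoting algorithmic (Kolmogorov) complexity and \(S\) Shannon entropy. On such a reading, a random bag of cells scores high on \(\Sigma\) but low on \(\varepsilon\), whereas a multicellular organism scores high on \(\varepsilon\)—the contrast Gross appeals to below.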

Gross notes an important feature of complexity: that it is both a matter of complicatedness and of order or structure. “Biologists think of a multicellular organism as more complex than a random bag of cells, even though it seems more difficult to describe the latter at the same level of detail” (p. 13). Something like a clock is complex in a way that is often not captured by other definitions of complexity, but can be captured by effective complexity, since the latter lets one understand the difference between the complexity of mechanisms and that of their behaviors. Gross proposes that one can speak of mechanical complexity in those cases “where the effective complexity of the behavior more or less matches the complexity of the mechanism, whereas emergent complexity describes instances where the effective complexity of the behavior is high compared to the complexity of the mechanism” (p. 16), and of unnecessary complexity where it is lower. Gross then puts this notion of complexity to work in analyzing the pathway, network, and attractor approaches in cellular biology. These approaches carve systems up in different ways and focus on different kinds of complexity. The distinctions Gross draws provide a framework that can help make sense of what is meant by complexity in each case.

Acknowledging that Darwin was influenced by the structure of Newtonian mechanics, Victor J. Luque and Lorenzo Baravalle (2021) present a fairly direct approach to the question of “physics envy” raised above, i.e., to the question of whether the success of certain principles underlying physics has simply taken in certain individuals working in other fields. They answer this question negatively, though, by making the case that the Price equation in evolutionary biology, which expresses the change in the population average of a trait in terms of the covariance between fitness and that trait and the expected fitness-weighted change in the trait during transmission, is comparable to Newton’s second law in classical mechanics.
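In its standard form (rendered here only for orientation; Luque and Baravalle’s own notation may differ), the Price equation reads

\[ \bar{w}\,\Delta\bar{z} \;=\; \operatorname{Cov}(w_i, z_i) \;+\; \operatorname{E}\!\left(w_i\,\Delta z_i\right), \]

where \(z_i\) is the value of the trait in entity \(i\), \(w_i\) its fitness, \(\bar{w}\) the average fitness, and \(\Delta\bar{z}\) the change in the population average of the trait between generations; the covariance term captures selection, while the expectation term captures transmission effects.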

The line of argument pursued by Luque and Baravalle has two main components: first, to construe the relative simplicity of this equation, in comparison to the vast and diverse phenomena of evolutionary biology, as an expression of unity—something quite in line with the earlier quote by Giudice; and, second, to argue that opponents of a unified view of evolutionary biology, comparable to that of physics, are making a false comparison.

This false comparison is established by Luque and Baravalle on the basis of metatheoretical structuralism, which has its roots in the works of Balzer. Metatheoretical structuralism suggests viewing theories as ‘nets’ in which a single theoretical element, such as a fundamental equation like F = ma, occupies a central role: it contains the fundamental concepts while being itself almost empirically vacuous, and it applies to all models of the theory in the sense of constraining their general shape. In contrast, Luque and Baravalle argue, evolutionary biology as a whole should rather be seen as a ‘theoretical holon’, i.e., a family of theory-nets between which systematic relations exist, as they do between classical particle mechanics and, say, fluid dynamics, wherein F = ma is replaced by the structurally similar Cauchy momentum equation.
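For illustration (a standard textbook juxtaposition, not drawn from the paper itself), the structural similarity can be displayed as

\[ F \;=\; m\,a \qquad \longleftrightarrow \qquad \rho\,\frac{D\mathbf{v}}{Dt} \;=\; \nabla\cdot\boldsymbol{\sigma} \;+\; \rho\,\mathbf{f}, \]

where the Cauchy momentum equation on the right likewise equates mass (here, mass density \(\rho\)) times acceleration (the material derivative of the velocity field \(\mathbf{v}\)) with the forces acting per unit volume, stemming from internal stresses \(\boldsymbol{\sigma}\) and body forces \(\mathbf{f}\) per unit mass.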

Thus, a particular version of the Price equation may well be the fundamental equation of population genetics (construed as a theory-net), which is then part of the theoretical holon of evolutionary biology, wherein other, structurally similar forms of the equation can occur. At the present stage of development of evolutionary biology, however, this proposal retains a somewhat tentative character, as Luque and Baravalle readily admit.

Quite the opposite point is made by Mazviita Chirimuuta (2021) in her paper “Reflex theory, cautionary tale: misleading simplicity in early neuroscience.” Chirimuuta takes on the historical case of reflex theory, which was, in its time, hailed for offering a simple, unified approach to neurophysiology, much as Newton’s laws do for physics. In retrospect, however, reflex theory did not turn out to be the success it was expected to become, and thus, as Chirimuuta argues, its case offers reason to be cautious about oversimplification.

The simplification at work in reflex theory expresses a kind of reductionism: phenomena in neuroscience, and even psychology, were supposed to reduce entirely to the interplay of simple reflexes, some unconditioned, i.e., innate in the organism, and some conditioned, i.e., acquired by associative processes, as with Pavlov’s famous dog. However, reflex theory, Chirimuuta argues following commentators as diverse as the philosopher Maurice Merleau-Ponty and the neurologist Kurt Goldstein, was indeed an oversimplification. It could only be upheld by glossing over its failures, either by resorting to vague terminology that effectively amounted to ad hoc modifications, or by outright neglect of the instability of stimulus–response relations. The same holds for Skinner’s behaviorist appraisal of reflex theory, which was divorced from the idea that the cerebral cortex is the substratum of the reflexes; even Skinner’s observations could not, in fact, be carried beyond the laboratory.

The lesson Chirimuuta draws from all this is that one must be cautious about embracing a simplistic ontology in neuroscience merely out of an over-indulgence of the desire for parsimony, and that one needs to be aware of the limits of simple concepts even when they are taken in an instrumental fashion, as exhibited by the failure of Skinner’s approach outside of controlled laboratory conditions.

In his paper “Feynman diagrams – From complexity to simplicity and back”, Robert Harlander (2021) turns to simplicity and complexity within the main subject of study of the Epistemology of the LHC group, i.e., particle physics. This is no accident: Harlander, himself a theoretical physicist, is one of the principal investigators of the group.

In his contribution, Harlander concerns himself with the process of connecting experimental data to the various particle models built within Quantum Field Theory (QFT) via Feynman diagrams. Applying Bunge’s (1962) notions of epistemic and pragmatic (algorithmic) simplicity/complexity, he argues that the employment of Feynman diagrams reduces both kinds of complexity. For Harlander, Feynman diagrams are not merely a tool; rather, they encode the particle model itself—at least in the perturbative regime.

The reduction of pragmatic complexity stems precisely from this fact: the diagrams are not only terms of the perturbation series; they can also be used to construct the series in the first place, by following the so-called Feynman rules. That is, the relevant expressions can be constructed in an algorithmic way, down to, in the author’s own words, “the apparent simplicity of child’s play.” Thus, the pragmatic (algorithmic) complexity of these models is drastically decreased by the use of the diagrams.
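Schematically (a rendering added here for orientation, not Harlander’s own notation), a perturbative amplitude is organized as a power series in the coupling \(g\),

\[ \mathcal{M} \;=\; \sum_{n} g^{\,n} \sum_{D \in \mathcal{D}_n} A_D, \]

where \(\mathcal{D}_n\) is the set of Feynman diagrams contributing at order \(n\) and each term \(A_D\) is read off from the corresponding diagram via the Feynman rules; constructing the series thus reduces to enumerating diagrams and translating them one by one.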

Feynman diagrams also provide, to some extent, a “language”, in a colloquial sense, that relates directly to experimental data; for instance, peaks in measured cross-sections can be identified with virtual-particle exchange, as represented in the relevant diagrams. Such an ostensive reading of the diagrams, without the need to attach any ontological commitment, amounts to a significant epistemic simplification, Harlander contends.

Despite these simplifications, as one goes to higher orders in the perturbation series, an ever larger number of diagrams must be considered, which is nowadays typically done by means of computer algorithms. According to Harlander, this process increases the complexity of practical applications, thus perhaps signaling the need for a new way of comparing theory and experiment, which would, in turn, be accompanied by a further simplification.

Martina Merz and Helene Sorgner (2022), both of whom are also members of the Epistemology of the LHC group, put the focus on the organization of ‘Big Science’ experiments and their strategies for reducing complexity. They argue that in the process of handling complexity, i.e., of creating and maintaining organizational order, new organizational complexities emerge, and thus complexity is not reduced or eliminated, but merely displaced. The authors draw this point from their own empirical study of the ATLAS experiment at the LHC. They take their investigative cue from Law (1994) and Law and Mol (2002), who characterize institutions as permeated by different ‘modes of ordering’ that may overlap, compete, and to varying degrees depend on each other.

Merz and Sorgner identify three strategies the organization takes to mitigate complexity. One strategy is to segment the research infrastructure, keeping epistemic aims and organizational logics partly independent: the collaboration is separated into different administrative units, and there is a focus throughout on self-governance. A second strategy is to introduce bureaucratic governance, increasing formalization, levels of organization, and management positions; the organization creates committees to make formal decisions, for example by applying transparent criteria and fairness guidelines to the selection of conference speakers. Finally, the organization also creates and imposes standards to facilitate collaboration across entities and to support collective accountability. This highlights certain values and measures as more relevant or important than others, and in this sense reduces complexity. Each of these strategies attempts to introduce organizational order so as to reduce the complexity of research practice, but each also seems merely to displace that complexity, for instance by moving informal decisions outside of the very committees that were meant to formalize them.

The final contribution, by Thomas Bonk (2023), investigates the now-traditional approach to simplicity in science championed by Forster and Sober (1994). Famously, Forster and Sober assess the simplicity of mathematical models by attending to the number of free parameters they contain. Here, Bonk suggests a novel approach that deviates from Forster and Sober’s.
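Forster and Sober’s discussion is built around Akaike’s framework, in which, roughly speaking, the expected predictive accuracy of a model family trades off fit against the number \(k\) of adjustable parameters; in one common formulation (reproduced here only for orientation),

\[ \mathrm{AIC} \;=\; -2\,\ln L(\hat{\theta}) \;+\; 2k, \]

where \(L(\hat{\theta})\) is the likelihood of the best-fitting member of the family, so that, at equal fit, the family with fewer free parameters is preferred.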

Bonk’s main objection is that using the number of parameters as a measure of simplicity fails to capture several intuitive features of model simplicity. For instance, parameters usually do not have a unique representation: the sum of two masses could be treated as one total mass, which might make the given model simpler, though less intuitive. Furthermore, intuitive alternative criteria, such as ease of manipulation or familiarity, are not generally covered by the parameter-counting approach. However, these alternative criteria also do not, by themselves, deliver sensible measures of simplicity, as Bonk argues in some detail.
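A toy illustration of the representation problem (our example, not Bonk’s): the same law can be parameterized as

\[ F \;=\; (m_1 + m_2)\,a \qquad \text{or} \qquad F \;=\; M\,a, \quad M := m_1 + m_2, \]

where the second form has one free parameter and the first has two, although nothing physical has changed; a bare parameter count cannot by itself decide over which representation one ought to count.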

Thus, integrating both perspectives, Bonk goes on to suggest his own, axiomatic approach, which delivers a formal measure of simplicity that relies on the number of free parameters as well as on the frequency of success of a given sort of functional expression, i.e., a quantitative model or hypothesis. Hence, there is a more rigorous element in Bonk’s account (the success frequency) that tracks the ‘user-related’ features, such as familiarity and ease of manipulation, alongside the number of parameters. Bonk’s approach is modest: he confines himself to phenomenological models that have a finite sum-expansion, and so effectively to finite-dimensional function spaces as a model class. Nevertheless, the results might plausibly suggest further directions of research, and Bonk also gives some consideration to possible extensions to the infinite-dimensional case.

As can be seen, the topic of this collection invites a variety of approaches, definitions, and conclusions, sometimes with seemingly little overlap. However, as should also have become clear, there are clear-cut connections, and these could stimulate further research and promise some progress: the role and meaning of simplicity can vary across disciplines such as physics, chemistry, biology, and even linguistics, as the contributions by Mohammadian, Harlander, Vogt, and Terzian and Corbalán illustrate. Nevertheless, there can also be distinctive commonalities, as shown by Luque and Baravalle, and, generally speaking, simplicity appears to be of epistemic value in all these disciplines.

Yet, blindly adhering to an ideal of simplicity can also be misleading, as carefully argued by Chirimuuta. Furthermore, the need to manage the complexity of the subject matter can be paralleled by the need to manage organizational complexity, as discussed in the case of particle physics by Merz and Sorgner, and corresponding efforts may be accompanied by obstacles and questions of their own.

Finally, it may be worth investigating in detail the connections between general-level considerations on simplicity that pertain to formal features of scientific models, as in Bonk’s account, and discipline-level considerations of mechanisms and their resulting behavior, as in Gross’s account of cellular biology. As both of these are grounded in complementary intuitions about simplicity and complexity, respectively, perhaps a more encompassing theory of theoretical simplicity is to be sought on this common ground.