1 Introduction

When hearing the terms ‘indeterminacy’ and ‘underdetermination’, philosophers’ minds might jump quickly to a fairly restricted set of issues, such as Quine’s indeterminacy of translation, the Duhem–Quine problem of the underdetermination of theory by data, or the indeterminacy of future contingents like Aristotle’s ‘There will be a sea battle tomorrow’. This collection is designed to showcase the breadth of current philosophical work on indeterminacy and underdetermination. It includes papers in the philosophy of computation, philosophy of language, ethics, metaethics, epistemology, philosophy of science, logic, and metaphysics. The goal is to enhance communication across philosophical sub-disciplines by presenting their writings side-by-side in a single volume. Conceptual tools developed in one area will, hopefully, shed light on debates in other areas and open new and interesting lines of inquiry.

An introduction to a topical collection might normally be expected to define the topic. This introduction will not do so, however, because the relationship between ‘indeterminacy’ and ‘underdetermination’ in different areas of philosophy is itself an open philosophical question. Are indeterminacy and underdetermination the same phenomenon? Is underdetermination in ethics of the same genus as underdetermination in science? These are some of the methodological questions that this collection aims to illuminate; see, for instance, the papers by Baumann and Lee.

This Topical Collection stems from a conference on indeterminacy and underdetermination held at University College Dublin on 24–25 January 2020. That conference, generously funded by the UCD Seed Funding Scheme and the Irish Research Council’s New Foundations scheme, included presentations from several of the authors featured in this collection, including Eli Pitcovski, Rachel Sterken, Anna Drożdżowicz, Annie Bosse, and Joe Dewhurst. Both the conference and the Topical Collection include papers from junior researchers as well as more established academics, indicating continued interest in these topics.

I want to offer thanks to Helena McCann and Gillian Johnston, who greatly assisted with conference organisation; to the Irish Research Council, who funded my Government of Ireland Postdoctoral Fellowship at UCD (grant number GOIPD/2018/605); to my UCD mentor, fellow Guest Editor, and co-organiser of the conference, Maria Baghramian; to UCD and the IRC for funding the original conference; to the Humanities Institute at UCD, who provided the conference venue; to the army of reviewers who kindly reviewed submissions to this collection, including all those who, while unable to review, offered helpful suggestions for alternative reviewers; to Kristie Miller, Shanthakumar Kulasekar and the rest of the editorial team at Synthese; to the authors who read over these summaries; and to all those who submitted their papers for consideration.

The following sections will briefly categorise and summarise the papers that appear in this collection.

2 Computation

2.1 André Curtis-Trudel, The determinacy of computation

A computational system might be described in terms of several different computations. An AND-gate, for example, which emits a high output only when both inputs are high, can be described as computing conjunction if high outputs/inputs are viewed as corresponding to truth, but can be viewed as computing disjunction if high outputs/inputs are viewed as corresponding to falsity. This phenomenon, known as the indeterminacy of computation (Copeland informs me that the term is his; see Sect. 2.3), poses a problem for computational explanations, which often assume that a given computational system implements a unique computation.
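To make the duality concrete, here is a minimal sketch of my own (not drawn from any of the papers in this collection), in which one and the same input–output table counts as conjunction under one labelling of voltages and as disjunction under the dual labelling:

```python
# A single physical gate: output is HIGH exactly when both inputs are HIGH.
def gate(in1: str, in2: str) -> str:
    return "HIGH" if in1 == "HIGH" and in2 == "HIGH" else "LOW"

# Two rival labellings pairing the same voltages with truth-values.
labelling_a = {"HIGH": True, "LOW": False}   # high voltage = true
labelling_b = {"HIGH": False, "LOW": True}   # high voltage = false

def truth_function(labelling: dict) -> dict:
    """The truth function the gate computes, relative to a labelling."""
    to_voltage = {value: voltage for voltage, value in labelling.items()}
    return {(p, q): labelling[gate(to_voltage[p], to_voltage[q])]
            for p in (True, False) for q in (True, False)}

# Under labelling A the gate computes conjunction; under B, disjunction.
assert truth_function(labelling_a) == {(p, q): p and q
                                       for p in (True, False) for q in (True, False)}
assert truth_function(labelling_b) == {(p, q): p or q
                                       for p in (True, False) for q in (True, False)}
```

The physical behaviour never varies; only the pairing of voltages with truth-values does.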

Curtis-Trudel argues that a system performs a determinate computation only relative to a labelling system that pairs the physical components with mathematical values. When high outputs/inputs are labelled as ‘true’, an AND-gate determinately implements conjunction; when labelled as ‘false’, it determinately implements disjunction. Curtis-Trudel argues that this view preserves computational explanation and compares the view to others in the literature.

A particularly interesting and satisfying aspect of Curtis-Trudel’s view is that the problem of computational indeterminacy turns out not to be a metaphysical problem about the nature of computation but an epistemic problem about scientific explanation. The problem is not unique, on this account, but an instance of a more pervasive issue. Any system can be described in many ways and we are always faced with a choice of descriptions when explaining the behaviour of a system.

2.2 Fiona T. Doherty and Joe Dewhurst, Structuralism, indiscernibility, and physical computation

Drawing on Doherty’s prior work (Doherty, 2021), Doherty and Dewhurst argue that computational indeterminacy poses a serious indiscernibility problem for structuralists: the binary digits 1 and 0 are indiscernible in terms of their structural properties and therefore, by structuralist lights, identical. This parallels a problem for mathematical structuralism on which seemingly distinct mathematical objects, such as 1 and -1, are shown to be indiscernible.

Their proposed solution also comes from a defence of mathematical structuralism proposed by Doherty (2019). They suggest that structuralists abandon Leibniz’s principle of the identity of indiscernibles. This rejection is supported by Hilbert’s Principle, on which truth and existence in mathematics are simply a matter of consistency. If consistent axioms specify a structure with two indiscernible but distinct positions, then there exist indiscernible but distinct objects to fill those positions. This principle can be adapted to the computational case, allowing the computational structuralist to avoid identifying indiscernible computational objects, like the binary digits. In addition to providing a solution to the indiscernibility problem, Doherty and Dewhurst take their discussion to show that the relationship between mathematical and computational structuralism is closer than might previously have been recognised.

2.3 Nir Fresco, B. Jack Copeland, and Marty J. Wolf, The indeterminacy of computation

Fresco, Copeland and Wolf provide a detailed introduction to the indeterminacy of computation. They highlight the earliest description of the phenomenon, dating from the 1950s, by engineer Ralph Slutz. Slutz pointed out that one and the same hardware gate can be viewed as both an AND-gate and an OR-gate. Via various formal definitions and theorems, they develop the concept of a labelling scheme—a system for assigning labels to physical quantities, such as transient voltages in hardware gates—and they offer this system as a general framework for describing computational indeterminacy. (The idea of a labelling scheme derives from Copeland (1996).) They then argue that the possibility of computational indeterminacy necessitates an extra step in computational explanations of cognitive systems: if we want to explain a cognitive system in terms of a particular computational function, we need to demonstrate that the cognitive system computes that function determinately. Where this cannot be done by appeal to the nature of the system itself, we can appeal to its interaction with other systems. Though a cognitive system might be computationally indeterminate when viewed by itself, there may be reasons to regard it as performing a determinate computation when embedded in a larger system. This is illustrated through Gabbiani et al.’s (2002) work on the locust.

More tentatively, the authors suggest that computational indeterminacy might afford neural plasticity. If cognitive systems indeterminately compute a number of different functions, they might be leveraged to determinately compute each of those different functions through changes to surrounding systems. A cognitive system with this malleability would be economical and so evolutionarily advantageous. Rather than a theoretical problem to be avoided, computational indeterminacy may be an evolved boon. This is one of several papers in the Collection suggesting that indeterminacy and underdetermination may be resources rather than problems. See also the contributions by Calosi, Drożdżowicz, and Martin.

3 Language

3.1 Annie Bosse, Generics: some (non) specifics

Bosse argues that generics like “Bus drivers are grumpy” are non-specific in that they fail to specify the quantificational force or ‘flavour’ of the connection between the relevant kind and property; in this case, bus drivers and grumpiness. Bosse argues that this non-specificity is not the result of context-sensitivity (Sterken, 2015) or semantic incompleteness (Nguyen, 2019), but that generics are second-order existential generalisations that quantify over more specific generalisations: to say that bus drivers are grumpy is to say that there is some true generalisation (within a restricted domain) linking bus drivers and grumpiness. More specific generalisations can then be conveyed pragmatically.
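As a rough first-pass rendering of the proposal (the notation is mine, not Bosse’s), the analysis can be put as follows:

```latex
% A rough rendering of Bosse's analysis (my notation, not hers):
% a generic 'Ks are F' says that some true, more specific
% generalisation G, drawn from a contextually restricted domain D,
% links the kind K to the property F.
\mathrm{Gen}(K, F) \;\leftrightarrow\; \exists G \,\bigl( G \in D
  \,\wedge\, \mathrm{True}(G) \,\wedge\, \mathrm{Links}(G, K, F) \bigr)
```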

3.2 Anna Drożdżowicz, Making it precise—imprecision and underdetermination in linguistic communication

Drawing on an array of experimental evidence, Drożdżowicz argues that interpreters often form underdetermined, imprecise and ‘shallow’ linguistic representations. Based on the best current interpretations of this data, Drożdżowicz suggests that this is not a flaw in the linguistic system but a functional feature that allows for quick and flexible interpretation. Because this underdetermination often goes unnoticed, Drożdżowicz argues, it is difficult to assess the degree to which it interferes with the success of communication, raising problems for the acquisition of knowledge by testimony. Interpreters may be sensitive to this underdetermination in some ‘clarificatory contexts’, however. Through questioning or post hoc reflection, initially underdetermined representations might be precisified. Drożdżowicz closes by noting questions for further research.

3.3 David Plunkett, Rachel Katharine Sterken, and Tim Sundell, Generics and metalinguistic negotiation

This paper synthesises Plunkett and Sundell’s view about the pragmatics of metalinguistic negotiation with Sterken’s view about the semantics of generics. On Plunkett and Sundell’s view, expressions can be used to negotiate various aspects of meaning. According to Sterken’s view of generics, the generic operator Gen has three aspects: quantificational force, lexical domain restriction, and contextual domain restriction. Putting these theories together, they hypothesise that speakers should be able to use generics to negotiate all three and present several examples they view as demonstrating such negotiation. Plunkett, Sterken and Sundell argue that their preferred pragmatic and semantic pairing provides a better explanation of the phenomena than other theories of generics, notably those of Krifka (2012), Asher and Pelletier (2013), Asher and Morreau (1995), Nickel (2016), Liebesman (2011) and Leslie (2007, 2008).

3.4 Corine Besson and Anandi Hattiangadi, Does truth relativism account for the indeterminacy of future contingents?

Besson and Hattiangadi tackle a classic problem of indeterminacy: future contingents. They argue that MacFarlane’s (2003, 2008, 2014) truth relativism cannot vindicate the intuition that future contingents are neither true nor false at the time they are asserted. Consider Alice, who says ‘There will be a sea battle tomorrow’. According to MacFarlane, the proposition that Alice expresses is assessment sensitive, in that it is neither true nor false when assessed at the time of utterance but will be true or false when assessed after tomorrow. If so, then the further proposition that what Alice said is neither true nor false should likewise vary in truth-value with the context of assessment. Besson and Hattiangadi argue, however, that this latter proposition is false according to MacFarlane’s account. They extend this problem into a reductio against MacFarlane’s account, consider several responses, and conclude that none of them can preserve the key tenets of MacFarlane’s view. In particular, all but one require that we give up the assessment-sensitivity of the ordinary truth predicate.

4 Ethics/metaethics

4.1 Alex Horne, Too many cooks

At this moment, I might reasonably work on this paper, do my shopping, or trim my toenails. At any time, it seems, there is no uniquely most reasonable way for us to act. This is the rational underdetermination problem. Horne argues that subjectivists, who take reasons to be determined by our desires, face a significant underdetermination problem. Horne considers two subjectivist solutions to the problem: a democratic solution, on which the best action is determined by a vote among ideal selves, and a trusteeship solution, on which the best action is determined by the desires of whichever ideal self the agent would choose to be. The democratic solution threatens to sever the link between reasons and motivation. Horne concludes, therefore, that the subjectivist has good reason to prefer the trusteeship alternative.

4.2 Eli Pitcovski and Andrew Peet, Counterfactuals, indeterminacy, and value: a puzzle

Pitcovski and Peet present the following trilemma for the counterfactual comparative account of harm (CCA), the view that an event is harmful/beneficial for a subject to the extent that the subject is overall worse/better off in the actual world than in the nearest possible world in which the event does not occur. We must either (1) reject CCA; (2) accept that it is almost always indeterminate whether an event is extremely harmful, highly beneficial, or somewhere in between; or (3) reject several independently motivated principles about harm and benefit. The trilemma arises because, according to our best physical theories, there will always be nearby possible worlds at which the subject would be wildly better off, worlds in which they would be wildly worse off, and everything in between, due, for example, to weird but possible quantum behaviour.
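Stated schematically (my rendering, not the authors’ notation), CCA measures harm and benefit as a welfare difference across worlds:

```latex
% Schematic statement of CCA (my rendering, not the authors' notation):
% w@ is the actual world, f(~e, w@) the nearest world where event e
% does not occur, and WB_s(w) is subject s's overall wellbeing at w.
% Positive values count as benefits for s, negative values as harms.
\mathrm{value}_s(e) \;=\; \mathrm{WB}_s(w_@) \;-\; \mathrm{WB}_s\bigl(f(\neg e, w_@)\bigr)
```

The trilemma then turns on which world the selection function picks out when radically different continuations are all physically nearby.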

Pitcovski and Peet consider various modifications of CCA that allow us to ignore these atypical scenarios. They argue, however, that each of these modifications faces a different problem: if the actual world is itself atypical, the counterfactual non-occurrence of an actual benefit would turn out to be beneficial, even though the subject would be worse off than in the actual world. These modifications therefore require us to give up on various attractive principles about harm and benefit. They tentatively conclude that the best option is to reject CCA.

4.3 Marius Baumann, Moral underdetermination and a new skeptical challenge

Parfit (2011) has argued that moral realism gains significant support from the fact that different ethical theories agree on their ethical verdicts. On the contrary, according to Baumann, that very result sets the stage for the underdetermination argument against moral realism. From the classic underdetermination argument against scientific realism (Duhem, 1954; Quine, 1951), Baumann develops an analogous argument against moral realism which, he argues, is at least as plausible as the original. Significantly, that argument depends on Parfit’s claims about the moral equivalence of different ethical theories. This equivalence was intended to support moral realism but may in fact lay the groundwork for a new argument against it.

4.4 András Szigeti, Emotions as indeterminate justifiers

Szigeti argues against the sentimentalist view that emotional experience is necessary and can be sufficient for the justification of evaluative belief. The properties of emotional experience, Szigeti argues, are not sufficiently fine-grained to give determinate answers to key questions about evaluative properties. Focusing on the emotional experience of resentment, for example, we can characterise the response-dependent property of being resentment-worthy. There are evaluative properties, however, that are not extensionally equivalent to response-dependent properties. Being worthy of resentment, for example, is neither necessary nor sufficient for blameworthiness. The resentment-worthy is not always blameworthy and vice versa. While we can identify the resentment-worthy by scrutinising patterns of affective responses, we cannot identify the blameworthy in the same way. Typically, these patterns of responses cannot on their own answer key questions about the nature of blame, for example, whether blameworthiness requires the ability to have done otherwise.

Contra the sentimentalist, Szigeti concludes, answering key ethical questions requires attention to non-affective epistemic resources. However, Szigeti defends the view that emotions can provide justifying reasons for evaluative judgments. Though the resentment-worthy and the blameworthy are not coextensional, there is a correlation between them. The emotional experience of resentment can therefore be a prima facie indicator that an action is blameworthy.

4.5 Björn Lundgren, Ethical machine decisions and the input-selection problem

Lundgren’s paper addresses the ethics of machine decision-making, e.g. autonomous vehicles or AI for medical diagnosis. The focus is on the significance of factual uncertainty, such as when an autonomous vehicle lacks some of the facts about a potential collision, or cannot know the precise consequences of slamming on the brakes.

Lundgren first argues against what he calls ‘the standard approach’ to factual uncertainty. On this view, we can answer questions about factual uncertainty by first analysing idealised cases devoid of uncertainty. The gap between the idealisations and actual cases can then be closed by a theory of rational decision-making under conditions of uncertainty. Lundgren prefers ‘the uncertainty approach’. Uncertainty can change the normative features of a situation. In order to know what should be done in some case involving uncertainty, we must analyse that case directly, with all of its contextual features.

Three considerations are offered in favour of the uncertainty approach. First, the admissible level of factual uncertainty is itself a normative question that will vary depending on the situation. Second, while a theory of rational decision-making may be able to tell us how we should act when we know the probabilities associated with our actions, it seems unable to tell us how we should act when these probabilities are unknown. Third, and this is central to what follows, mechanical decision-makers face technological limitations. Idealised cases, in which machines have access to all relevant information, tell us little about how actual machines should act.

Lundgren then argues that machine decision-making raises a trade-off, called the input-selection problem. On the one hand, machines need sufficiently complex inputs to reduce the risk of error to an ethically-acceptable level. On the other, increased complexity raises further ethical problems. For example, decision-making becomes less transparent, the risk of data privacy invasion is increased, and decisions take longer. In considering the ethics of machine decision-making, therefore, it is not sufficient to identify the ethically ideal decision for the machine to make; we have to take account of mechanical limitations and associated trade-offs.

4.6 Benjamin Hale, Indeterminacy and impotence

Can I reduce the effects of climate change by reducing my carbon footprint? If not, then how can I have any obligation to reduce my footprint? This is the causal impotence objection, which can be used to argue against personal responsibility for many problems that seem to require collective action. Hale categorises causal impotence objections into three kinds, the third of which—causal indeterminacy arguments—presents a distinctive challenge to consequentialist moral theory. Causal indeterminacy arises when, due to the complexity of situations and the intervention of other agents, we cannot be certain what effects our actions will have. Unlike other forms of causal impotence objection, causal indeterminacy arguments allow that our individual actions have significant consequences but question whether we can know if those consequences will be good or bad. The problem posed by causal indeterminacy is elaborated through real-world examples and several objections are considered.

5 Epistemology and philosophy of science

5.1 Chanwoo Lee, The structuralist approach to underdetermination

Lee discusses the structuralist response to the underdetermination of theory by evidence, according to which underdetermination can be resolved by identifying a structure common to the rival theories. Lee argues that this structuralist approach has been applied in many different areas of philosophy. The approach is schematised on the model of the response to the underdetermination of theory by evidence, and that schema is applied to Benacerraf’s (1965) argument about the ontology of natural numbers and Quine’s (1960) argument about the indeterminacy of translation.

Lee draws two main conclusions from this discussion. First, that the structuralist approach can be applied to draw very different kinds of conclusions, e.g. ontological conclusions in the case of Benacerraf and semantic conclusions in the case of Quine. Second, that it offers a new way of viewing a metaphysical debate between Dasgupta (2009, 2017), Turner (2011, 2017) and Diehl (2018). In short, Lee argues that Turner is forced into a dialectically difficult position. Turner intends to refute Dasgupta’s ontological conclusion, which is based on the structuralist approach. To respond to Diehl’s counterexamples, however, Turner must appeal to the very structuralist approach they are trying to refute.

5.2 Ivan Hu, Epistemicism and response dependence

Hu defends the epistemicist view that vagueness entails epistemic indeterminacy: If it is vague whether p then it is unknowable that p and unknowable that not p. In a detailed discussion, Hu responds to Barnett’s (2010) argument that vagueness does not entail indeterminacy. Barnett argues that a hypothetical community of speakers cognitively superior to us could have vague knowledge of vague matters. Hu considers several interpretations of Barnett’s argument and concludes that none refutes either the clear truth of the entailment, which would require a proposition that is both clearly vague and vaguely knowable, or the truth simpliciter of the entailment, which would require a proposition that is both vague simpliciter and known simpliciter. Hu diagnoses several problems with Barnett’s argument and presents linguistic evidence that stands against Barnett’s conclusion.
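Writing ∇ for ‘it is vague whether’ (the notation is mine, not necessarily Hu’s), the entailment at issue can be put schematically:

```latex
% The entailment Hu defends, schematically (my notation): if it is
% vague whether p, then it is not possible to know p and not possible
% to know not-p.
\nabla p \;\rightarrow\; \bigl( \neg\Diamond K p \;\wedge\; \neg\Diamond K \neg p \bigr)
```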

5.3 John Brunero, Practical reasons, theoretical reasons, and permissive and prohibitive balancing

When practical reasons equally support two incompatible options A and B, we might have sufficient reason to do either A or B. When epistemic reasons equally support two incompatible propositions P and not-P, we are not permitted to believe P or to believe not-P. Following Berker (2018), call the first permissive balancing and the second prohibitive balancing. Brunero considers Schroeder’s (2012, 2015) proposal, which hinges on the notion of non-evidential epistemic reasons. In short, the evidential reasons in favour of believing that P must be weighed against the evidential reasons for believing that not-P and the non-evidential reasons for withholding belief.

Brunero presents two objections to this proposal. First, that Schroeder provides no explanation of a non-evidential epistemic reason sufficiently general to account for the full range of cases. Second, that we need to explain the following difference between practical and epistemic reasons: where we have a practical case of prohibitive balancing, it can be converted into a case of permissive balancing by the addition of further, equally weighted practical reasons for each alternative. Not so for epistemic reasons, however: so long as the reasons to believe P and to believe not-P are equally weighted, the case remains prohibitive.

Brunero supplements Schroeder’s view in two ways. First, he suggests that non-evidential reasons for withholding belief are given by the risk of being mistaken. This, Brunero argues, is a sufficiently general reason to account for the full range of cases. Second, the difference between practical and epistemic reasons arises because refraining from action can entail opportunity costs that do not arise from withholding belief. Your reasons for going to a party or to the library might be equally weighted but weak enough that doing neither is your best option (prohibitive balancing). Increasing equally the reasons for going to the party and the reasons for going to the library, however, might give you sufficient reason not to stay home (permissive balancing). But there are no opportunity costs to withholding belief, so increasing equally the reasons to believe that P and to believe that not-P can never have an analogous effect.

5.4 Lynn Hankinson Nelson, Underdetermination, holism, and feminist philosophy of science

Hankinson Nelson argues that feminists who argue for the indispensability of values to science should not appeal to Quine’s thesis of global underdetermination but rather to what she calls ‘moderate underdetermination’. According to the global thesis, whatever our empirical evidence, there will always be multiple total theories of the world that are equally supported by that evidence. This thesis is not useful for feminists who want to argue that some, less androcentric, theories and hypotheses are empirically better supported than their competitors. Nor does it apply to partial theories of the world. Moderate underdetermination, in contrast, applies to all theories and hypotheses, not only to complete theories. This moderate underdetermination is motivated by a moderate holism, which views partial, rather than entire, theories of the world as facing the tribunal of experience. It is this moderate underdetermination on which the indispensability of values to science should be based.

6 Logic and metaphysics

6.1 Samuel C. Fletcher and David E. Taylor, Two quantum logics of indeterminacy

Building on their prior work (Fletcher & Taylor, 2021), Fletcher and Taylor develop syntax and semantics for two quantum logics that include determinacy and indeterminacy operators. They are distinguished by the way their indeterminacy operators interact with other logical operators, especially negation. These two logics are then applied to deliver two different responses to Williamson’s (1994) reductio against the coherence of indeterminacy. On either of these logics, Williamson’s argument is invalid, but for distinct reasons.

6.2 James V. Martin, Indeterminacy, coincidence, and “sourcing newness” in mathematical research

Martin argues that indeterminacy in mathematics can be a driving force behind new discoveries. Martin’s primary notion of indeterminacy here is drawn from Dewey (1938) and characterised as applying to situations whose parts don’t “hang together”, are uncertain, cannot be predicted, and tend to evoke discordant responses from those encountering them. Martin utilises this notion of indeterminacy to give an anti-realist account of mathematical coincidence. When a mathematician identifies some fact as non-coincidental, they draw attention to some felt indeterminacy and present it as a worthwhile area of study. Describing a fact as a mere coincidence has the opposite effect, drawing attention away from any felt indeterminacy and presenting it as not a worthwhile topic of study.

The resulting picture is compared to that of Lange (2017). Martin argues that there are examples of mathematical coincidence and non-coincidence that cannot naturally be explained through Lange’s account. Martin’s view, it is argued, is better suited to explaining the connection between coincidence and motivation. It would be bizarre to diagnose a fact as coincidental and then spend years trying to explain it. According to Martin’s account, that is because diagnosing a fact as coincidental is to express a dismissive attitude towards it as a locus of further investigation.

6.3 Martin Pickup, Unsettledness in times of change

When a light bulb is switched off, what happens at the instant of the change? Pickup considers the possibility that, at that very moment, the light bulb is neither on nor off; that it is metaphysically indeterminate whether the light bulb is on or off. Pickup elaborates this view through the situationalist account of Pickup (forthcoming). Situations are parts of possible worlds, composed of entities that have properties and stand in relations. Key to the account is that situations can fundamentally disagree about what is the case. When an object changes, there are two situations that disagree about the state of the object. For any situation that contains such disagreeing situations as parts, the state of the object is indeterminate. Such is the case at the moment of change, which is an atomic time that corresponds to a composite situation composed of parts that disagree.

One of the key advantages that Pickup claims for the situationalist account is its malleability: it is compatible with various views about the metaphysics of time and events. The view is also somewhat conciliatory towards A-theoretic views of time. Though situationalism is a B-theory, it provides a way of identifying something distinctive about moments of change.

6.4 Roberto Loss, Open future, supervaluationism and the growing-block theory: a stage-theoretical account

Loss presents an interpretation of Thomason’s (1970) supervaluationist growing-block theory of time. This interpretation distinguishes between ‘times’ and ‘stages’. A stage represents a phase in the development of the growing block. For every stage, there is a linearly ordered sequence of times, with the last time being the present at that stage. In addition to the stage representing the actual development of the block, terminating in the actual present, there are other possible stages representing the ways the block could develop or could have developed. A maximal sequence of stages is a ‘history’.

Loss argues that the resulting account is intuitive and avoids problems associated with other supervaluationist accounts. Correia and Rosenkranz (2018: 105) argue that supervaluationists cannot endorse a key claim of the growing-block theory of time: that there is no time later than the present. If there were no time later than the present, there would be only one history, with its last time being the present, resulting in an explosion of truths about the future. As Loss defines histories and stages, however, the existence of a final time at any individual stage is compatible with the existence of many alternative histories that vary in how they represent the ways the block may grow. Loss also presents a stage-theoretic interpretation of Briggs and Forbes’s (2012) supervaluationist account and leverages this interpretation to preserve the growing-block commitment that merely future entities do not exist.

6.5 Alessandro Torza, Quantum metametaphysics

Torza assesses the disagreement between classical and quantum logicians about whether there is quantum metaphysical indeterminacy. Under certain assumptions, Torza argues that the disagreement is illusory, amounting to a merely verbal disagreement about the meaning of the negation operator. Torza then argues that the disagreement may not be merely verbal if we assume Sider’s (2011) metaphysics of naturalness. If there is a uniquely natural interpretation of negation, the debate between classical and quantum logicians is substantive. Given some plausible constraints on naturalness, the classicist’s interpretation of negation is the most natural and the substantive debate is decided in the classicist’s favour: there is no quantum metaphysical indeterminacy. Given plausible constraints attributed to Dasgupta (2014), however, the debate is substantive but its resolution remains open.

6.6 Claudio Calosi, Gappy, glutty, glappy

Calosi operates within Wilson’s (2013, 2017a, b) Determinable-Based Account of metaphysical indeterminacy, on which metaphysical indeterminacy arises when some object has a determinable property (e.g., colour) without a unique associated determinate property (e.g., red or blue). This can happen when the object has no associated determinate property, in which case we have gappy indeterminacy, or when the object has more than one associated determinate property, in which case we have glutty indeterminacy. Calosi demonstrates that the space of determinate and determinable properties of a given family can be constructed with the determination-relation as the sole primitive and uses the resulting system to formalise various principles of determination.

Distinctive of Calosi’s system is that it refers to intermediate levels of determination, that is, properties like red that are both determinates of less specific properties like colour and determinables of more specific properties like crimson. As a consequence, there is a third logically possible category of metaphysical indeterminacy: glappy indeterminacy. These are cases in which an object is glutty at one level and gappy at a more specific level. While this result is interesting in itself, Calosi suggests that glappy indeterminacy might be useful in understanding quantum indeterminacy.
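The three categories can be glossed schematically as follows (my gloss, not Calosi’s formalism):

```latex
% My gloss, not Calosi's formalism. Let o instantiate determinable D,
% and let det_o(D) be the set of D's next-level determinates that o
% instantiates. Indeterminacy: o has D but no unique determinate.
\text{gappy:}\;\; |\mathrm{det}_o(D)| = 0
  \qquad\qquad
\text{glutty:}\;\; |\mathrm{det}_o(D)| > 1
% Glappy: glutty at one level but gappy at a more specific one, e.g.
% o has two determinates of colour yet no determinate shade of either.
\text{glappy:}\;\; |\mathrm{det}_o(D)| > 1
  \;\text{ and }\; |\mathrm{det}_o(D')| = 0
  \;\text{ for some } D' \in \mathrm{det}_o(D)
```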

6.7 Ken Akiba, The Boolean many-valued solution to the sorites paradox

Akiba presents a many-valued Boolean solution to the sorites paradox. Classical logic is sound and complete with respect to Boolean algebras, but Akiba points out that it is an oft-made mistake to assume that the relevant Boolean algebra must be two-valued; any Boolean algebra, including a many-valued one, can serve as a semantics for classical logic. Akiba then presents a solution to the sorites on which the values of vague sentences are elements of a many-valued Boolean algebra, identifiable with sets of precisifications. Akiba argues that the Boolean approach is preferable to supervaluationist approaches (which also appeal to precisifications) because the Boolean approach allows us to retain classical logic, and that it is preferable to the introduction of an S4 ‘determinately’ operator for reasons of simplicity.
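The background fact here is standard, though easy to overlook; in my notation (not necessarily Akiba’s):

```latex
% A standard fact (my notation): classical propositional logic is
% sound and complete with respect to valuations v into any Boolean
% algebra B, where v maps negation to complement, conjunction to meet,
% disjunction to join, and truth is taking the top value 1_B. Taking
% B to be the powerset of a set Pi of precisifications, the value of
% a vague sentence is the set of precisifications on which it is true:
v(\varphi) \;=\; \{\, \pi \in \Pi \;:\; \varphi \text{ is true on } \pi \,\} \;\in\; \mathcal{P}(\Pi)
```

Since every powerset algebra is Boolean, classical validity is preserved even though sentences can take many intermediate values between the empty set and the full set of precisifications.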

7 Conclusion

The papers collected in this volume constitute a valuable resource for anyone interested in indeterminacy and underdetermination. This Topical Collection aims to promote the spread of useful concepts and tools by simultaneously showcasing work on indeterminacy and underdetermination from across philosophical sub-disciplines. This introduction closes by noting just a few of the many interesting questions raised by this collection.

This introduction has not offered a definition of indeterminacy and underdetermination. The papers in this collection exhibit many different characterisations, including pragmatist (Martin), metaphysical (Torza, Calosi) and epistemic (Baumann) notions of indeterminacy/underdetermination. It remains to be seen precisely how these different notions are related: are they different ways of describing the same phenomenon, different phenomena entirely, or is there a more complex relation between them? The same question can be posed for ‘indeterminacy’ and ‘underdetermination’ themselves. Are these different names for the same phenomenon and, if not, what is the relationship between these often interchangeable terms?

One suggestion of particular interest to me, arising from previous work on underdetermination (Bowker, 2019a, b, 2022, forthcoming), is that we might need to revise our normative evaluation of indeterminacy and underdetermination. While they are often seen as theoretical problems to be solved, the papers by Drożdżowicz, Calosi, Martin, and by Fresco, Copeland, and Wolf suggest that indeterminacy/underdetermination may have practical benefits. If they are right, indeterminacy/underdetermination might not be a problem to be avoided but a resource to be leveraged.