An Operationalist Perspective on Setting Dependence

A well-known logical loophole in Bell's theorem is that it relies on setting independence: the assumption that the state of a system is independent of the settings of a measurement apparatus probing the system. In this paper the implications of rejecting this assumption are studied from an operationalist perspective. To this end a generalization of the ontic models framework is proposed that allows setting dependence. It is shown that within this framework Bell's theorem reduces to the conclusion that no-signaling requires randomness at the epistemic level even if the underlying ontology is taken to be deterministic. The ideas underlying the framework are further used to defend setting dependence against the charges of being incompatible with free will and scientific methodology. The paper ends, however, with the sketch of a new problem for setting dependence: a necessary gap between the ontic and the epistemic level that may prevent the formulation of a successful setting dependent theory.


Introduction
You know, one of the ways of understanding this business is to say that the world is super-deterministic. That not only is inanimate nature deterministic, but we, the experimenters who imagine we can choose to do one experiment rather than another, are also determined. If so, the difficulty which this experimental result creates disappears. (Bell in [8])
Bell, here, is talking about Aspect's experimental violation of the CHSH inequality. Somewhat naively one may think that he is making a tempting case for superdeterminism. If experimenters are ultimately made of the same stuff as inanimate objects, then the same laws should apply to them. If these laws are deterministic, then superdeterminism must be true. So surely determinism should be the way to explain the violation of the CHSH inequality rather than non-locality or something more complicated.
The case is not so simple of course, and the option of superdeterminism is widely rejected. More generally, what one usually wishes to maintain is setting independence: the idea that the state of a system prior to a measurement is independent of the settings of the measurement apparatus. Superdeterminism is the rejection of this assumption combined with the assumption that the laws of a future theory replacing quantum mechanics should be local and deterministic. A related position is the retro-causal approach: the rejection of setting independence combined with the assumption that the causal explanation of CHSH inequality violations should be local, at the expense of letting measurement settings retro-causally influence system states.
Although the two views are often presented as rival candidates, 1 they are mutually exclusive only under the assumption that (super)determinism prohibits retro-causal explanations. 2 In this paper the focus is on their common ground: setting dependence (with the exception of Sect. 4.1, which pertains to superdeterminism). Necessarily then, the type of dependence considered is not causal dependence, but more akin to probabilistic dependence (full technical details in Sect. 3.2). Consequently, I will have little to say about possible explanations for setting dependence. But the upside is that the presented analysis pertains to both superdeterministic and retro-causal approaches. The focus is on how setting dependence, once considered as a viable option, may change the way experimental predictions are to be extracted from a theory: how is the end user of the theory to cope with setting dependence? An operationalist perspective so to speak.
The possibility of setting dependence has been criticized for several reasons, among which are the incompatibility with free will, incompatibility with scientific method (necessitating a vicious skeptic stance towards science), being conspiratorial, or simply being insane. As argued for example by Lewis [25], these criticisms do not necessarily hold up. But the situation is far from clear cut. One of the reasons for this is that it is quite difficult to get a handle on what exactly is entailed by setting dependence, both on its own and in combination with quantum mechanical constraints.
The purpose of this paper is to make some headway towards understanding setting dependence and the type of explanations of quantum mechanics it does and doesn't allow. To this end I mimic the approach of the ontic models framework [15,36], which has been successful for studying topics in the foundations of quantum mechanics such as contextuality, macroscopic realism and the reality of the quantum state. Within this approach quantum mechanics is viewed as an operational theory and one studies constraints on ontic models that are able to reproduce a fragment of the experimental predictions of quantum mechanics. However, the framework presupposes setting independence. Therefore a generalized framework is proposed, which will then be used to study the impact of setting dependence as well as to evaluate some of the objections against setting dependence.
The paper is outlined as follows. In Sect. 2 the original ontic models framework is rehearsed, as well as a derivation of the CHSH inequality within this framework. In Sect. 3 setting dependent ontic models are introduced and the derivation of the CHSH inequality is reconsidered. It is shown that within the framework the inequality can still be derived from the assumptions of no-signaling and something I call epistemic determinism, thus showing that, even if one allows setting dependence, one of these two assumptions has to fail. In Sect. 4 I use the insights from the new framework to defend setting dependence against two common objections: incompatibility with free will and incompatibility with scientific methodology. It is concluded in Sect. 5 that setting dependence is an option worthy of further formal and philosophical investigation. To add direction to this suggestion, I sketch a new possible problem for setting dependence: the problem of incorporating knowledge of the ontic state of a system, and of how it evolves, into constraints on the epistemic description of the system.

Formalism
To describe experiments in an operational, theory-independent way, I make use of Prepare-Measure (PM) models. 3 A PM model is a pair (P, M) of two sets. Elements of P represent possible preparations of the system and provide an operational state description. The elements of M represent possible measurements. Specifically, with every measurement M ∈ M is associated a measurable space (Ω_M, Σ_M). Here Ω_M is the set of possible outcomes for the measurement M and Σ_M is a σ-algebra of subsets of Ω_M corresponding to measurement events. There is further assumed to be a rule which assigns to every P ∈ P and M ∈ M a probability measure P(·|M, P) over (Ω_M, Σ_M). Thus P(E|M, P) denotes the probability of the measurement event E ∈ Σ_M upon a measurement M after the system has been prepared according to P.
To study a particular type of explanation for some feature of a PM model, one can look at ontic models for the PM model. An ontic model consists of a measurable space (Λ, Σ) (where Λ is the set of ontic states) and a pair (Π, Ξ) which serves as the counterpart of the pair (P, M) in the following way:
- Π is a set of probability measures on (Λ, Σ) such that for every P ∈ P there is a non-empty subset Π_P ⊂ Π of probability measures corresponding to P: whenever the system is prepared according to P, an ontic state is selected according to some probability measure μ_P ∈ Π_P.
- Ξ is a set of Markov kernels 4 such that for every measurement M ∈ M there is a non-empty subset Ξ_M ⊂ Ξ. Every ξ_M ∈ Ξ_M is a Markov kernel from (Λ, Σ) to (Ω_M, Σ_M). For every λ ∈ Λ and E ∈ Σ_M, the probability that a measurement of M yields a result in E when the ontic state is λ is expressed as ξ_M(E|λ).
Elements of Ξ are called response functions as they encode how the system responds to measurement operations. Elements of Π are called epistemic states as they may be taken to encode one's information concerning the ontic state of the system. It is further allowed that Π_P contains multiple distinct epistemic states μ_P, μ′_P, . . . and that Ξ_M contains multiple distinct response functions ξ_M, ξ′_M, . . . (these allowances go by the names of preparation contextuality and measurement contextuality, see also [36]). On average the probabilities at the operational level should be reproduced. That is, for every P ∈ P and μ_P ∈ Π_P and for every M ∈ M and ξ_M ∈ Ξ_M,

P(E|M, P) = ∫_Λ ξ_M(E|λ) dμ_P(λ) for all E ∈ Σ_M.
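To make the reproduction condition concrete, here is a minimal sanity check in Python (my illustration, not part of the paper): a toy model with two ontic states and one two-outcome measurement, where averaging the response function over the epistemic state recovers the operational probability.

```python
# Toy finite ontic model (hypothetical illustration): two ontic states "l1", "l2",
# one preparation with epistemic state mu_P, one two-outcome measurement M.
mu_P = {"l1": 0.5, "l2": 0.5}             # probability measure over ontic states
xi_M = {"l1": {"+1": 1.0, "-1": 0.0},     # response function: deterministic per state
        "l2": {"+1": 0.0, "-1": 1.0}}

def operational_prob(outcome, xi, mu):
    """P(E|M, P) = sum_lambda xi_M(E|lambda) mu_P(lambda)."""
    return sum(mu[l] * xi[l][outcome] for l in mu)

# The ontic model reproduces a maximally uncertain operational prediction.
assert operational_prob("+1", xi_M, mu_P) == 0.5
assert operational_prob("-1", xi_M, mu_P) == 0.5
```

Here the outcome is determined by the ontic state, yet the operational prediction is probabilistic: the randomness lives entirely in the epistemic state.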

Bipartite Systems
To consider locality constraints for ontic models, one has to consider systems that consist of at least two spatially separated subsystems. Famous in this framework is the condition of preparation independence as used in the PBR theorem [30]. This concerns two spatially separated systems that are each being prepared and then brought together.
Here we are considering the more traditional EPRB scenario in which two systems are jointly prepared and then spatially separated, one being sent to Alice (system A) and one to Bob (system B). This separation should of course already be recognizable at the operational level. Accordingly, the set of measurements is divided into three subsets,

M = M_A ∪ M_B ∪ M_AB,

where M_A contains the measurements Alice can perform on her system, M_B those for Bob, and M_AB the joint measurements. For an ontic model for a bipartite system the same separation applies:

Ξ = Ξ_A ∪ Ξ_B ∪ Ξ_joint.

Furthermore, for every M_A ∈ M_A and M_B ∈ M_B and every ξ_A ∈ Ξ_MA and ξ_B ∈ Ξ_MB there is a unique ξ_AB ∈ Ξ_MAB to denote the operational procedure where Alice performs the procedure ξ_A and Bob performs the procedure ξ_B. The definitions of parameter independence and outcome independence can now be formulated in this framework.

Parameter Independence For all measurements M_A ∈ M_A and M_B ∈ M_B,

ξ_AB(E_A × Ω_MB|λ) = ξ_A(E_A|λ) and ξ_AB(Ω_MA × E_B|λ) = ξ_B(E_B|λ)

for all events E_A ∈ Σ_MA, E_B ∈ Σ_MB and every ontic state λ ∈ Λ.
Outcome Independence For all measurements M_A ∈ M_A and M_B ∈ M_B,

ξ_AB(E_A × E_B|λ) = ξ_AB(E_A × Ω_MB|λ) · ξ_AB(Ω_MA × E_B|λ)

for all events E_A ∈ Σ_MA, E_B ∈ Σ_MB and every ontic state λ ∈ Λ.
When taken together, these conditions imply Bell locality, i.e.,

ξ_AB(E_A × E_B|λ) = ξ_A(E_A|λ) · ξ_B(E_B|λ)

for all measurements M_A ∈ M_A and M_B ∈ M_B, all events E_A ∈ Σ_MA, E_B ∈ Σ_MB and ontic states λ ∈ Λ.
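As an informal check of these conditions (an illustration of mine, with arbitrary numbers), the following Python snippet builds a Bell-local response function for a single ontic state and verifies that its marginals do not depend on the distant party and that the joint factorizes into its own marginals:

```python
import random

# Hypothetical single-state illustration: a Bell-local response function
# xi_AB(E_A, E_B | lam) = xi_A(E_A | lam) * xi_B(E_B | lam).
random.seed(0)
pA = random.random()                       # xi_A(+1|lam), arbitrary
pB = random.random()                       # xi_B(+1|lam), arbitrary
xi_A = {+1: pA, -1: 1 - pA}
xi_B = {+1: pB, -1: 1 - pB}
xi_AB = {(a, b): xi_A[a] * xi_B[b] for a in (+1, -1) for b in (+1, -1)}

# Alice's marginal equals xi_A, independently of what happens on Bob's side.
for a in (+1, -1):
    marginal = sum(xi_AB[(a, b)] for b in (+1, -1))
    assert abs(marginal - xi_A[a]) < 1e-12

# Outcome independence: the joint is the product of its own marginals.
for a, b in xi_AB:
    mA = sum(xi_AB[(a, bb)] for bb in (+1, -1))
    mB = sum(xi_AB[(aa, b)] for aa in (+1, -1))
    assert abs(xi_AB[(a, b)] - mA * mB) < 1e-12
```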

Deriving the CHSH Inequality
It is instructive to give a derivation of a familiar result within the ontic models framework: the CHSH inequality. It will be helpful for understanding the setting dependent ontic models of Sect. 3, especially since the proof of Theorem 1 in Sect. 3 is basically a derivation of the CHSH inequality within the new framework. Consider the standard setup with two possible ±1-valued measurements for Alice (M_A1 and M_A2) and two possible ±1-valued measurements for Bob (M_B1 and M_B2). Throughout the analysis the procedures for Alice and Bob for each possible measurement are kept fixed, i.e., the response functions ξ_A1, ξ_A2, ξ_B1, ξ_B2 are fixed. For each of the four possible combinations of settings, ξ_AiBj(·|λ) is just a probability distribution over the set of four possible outcome combinations. The action for any λ of each of the considered response functions can be neatly summarized with a table as in Fig. 1a. In this table the values satisfy 0 ≤ p_ij ≤ 1 and Σ_i p_ij = 1 for each j. E±_A denotes the event where Alice obtains the outcome ±1 and similarly for E±_B, and the p_ij denote the corresponding probabilities. Outcome independence is a constraint that applies to each of the four columns separately: given any pair of measurements, the distribution over the possible outcomes should be a product measure. Concretely, it means that the table from Fig. 1a can be rewritten to the one in Fig. 1b, where each entry is a product of a marginal probability for Alice and one for Bob. Parameter independence further demands that these marginals do not depend on the setting of the other party. Under these circumstances, the four probability distributions in Fig. 1b can be written as marginals coming from a single probability distribution over definite value attributions to all four possible measurements. By Fine's theorem, any such probability distribution satisfies the CHSH inequality. 5 Since quantum mechanics is able to violate this inequality, no ontic model that satisfies parameter independence and outcome independence can reproduce the predictions of quantum mechanics.
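The combinatorial core of Fine's theorem can be verified by brute force. The following Python check (mine, not from the paper) enumerates the 16 deterministic value assignments to (A1, A2, B1, B2), which are the extreme points of the simplex of joint distributions, and confirms that each satisfies |S| ≤ 2; every convex combination then does too.

```python
from itertools import product

def chsh(a1, a2, b1, b2):
    # S = E(A1 B1) + E(A1 B2) + E(A2 B1) - E(A2 B2) for definite values
    return a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2

# All 16 deterministic value assignments with outcomes in {+1, -1}.
values = [chsh(*v) for v in product((+1, -1), repeat=4)]
assert max(abs(s) for s in values) == 2   # |S| <= 2 in every case
```

The bound is tight: writing S = a1(b1 + b2) + a2(b1 − b2), exactly one of the two brackets is nonzero (and equal to ±2) for ±1-valued assignments.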
One of the two assumptions has to be rejected, or the ontic models framework has to be rejected, which can be done by rejecting setting independence. In the next section we will see what remains of this theorem if this final option is chosen.

Incorporating Setting Dependence
Before delving into the details, I start with some considerations that motivate the particular choices made in the definition of setting dependent ontic models. To this end, I consider the model that lies at the heart of Specker's parable of the overprotective seer [35]. 6 Let A, B and C denote three observables for a system, each of which can only assume the values -1 and 1. So there are eight possible ways to assign definite values to all observables. Assuming value definiteness, the state space Λ can then be partitioned into eight corresponding subsets Λ_0, . . . , Λ_7 as in Fig. 2a.

[Fig. 2 caption: Possible state spaces for Specker's parable. a The simple state space encoding all possible definite values for the three observables A, B, C. b A state space that encodes setting dependence in the set of possible states. The Λ_i refer to the definite value assignments in (a); all combinations of definite value assignments and measurement settings that violate the ±-law are excluded. c The minimal value definite state space needed to reproduce the operational model: only measured observables have definite values. Note that in (b) the subsets Λ_ξAB, Λ_ξBC, Λ_ξCA need not be pairwise disjoint while in (c) they are.]
It is now further assumed that only pairwise joint measurements of the observables are possible. So apart from response functions ξ A , ξ B , ξ C one also has ξ AB , ξ BC and ξ C A , but not ξ ABC . Moreover, for any pair, the outcome will always be either (−1, 1) or (1, −1). Let us call this the ±-law. From Fig. 2a one finds that there is no λ that satisfies this law for all pairs AB, BC, C A.
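The parable's central obstruction is small enough to check by brute force. The following Python snippet (my illustration) enumerates the eight value assignments and confirms that none satisfies the ±-law for all three pairs, while for any single pair exactly four assignments remain compatible:

```python
from itertools import product

# The +/- law for a pair demands anticorrelated outcomes, i.e. unequal values.
def pm_law_all_pairs(a, b, c):
    return a != b and b != c and c != a

# All eight definite value assignments to (A, B, C): Lambda_0, ..., Lambda_7.
assignments = list(product((+1, -1), repeat=3))
assert len(assignments) == 8

# No assignment satisfies the +/- law for all three pairs AB, BC, CA:
# three +/-1 values cannot be pairwise unequal.
assert not any(pm_law_all_pairs(*v) for v in assignments)

# For a single pair, say AB, exactly four assignments are compatible.
compatible_AB = [v for v in assignments if v[0] != v[1]]
assert len(compatible_AB) == 4
```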
The law can be salvaged by introducing setting dependence: for any state only the measurements that obey the ±-law are allowed. 7 This can be arranged by letting the probability distribution μ depend on the measurement setting, so that, for example, a joint measurement of A and B comes with the constraint

μ_ξAB(Λ_0 ∪ Λ_1 ∪ Λ_6 ∪ Λ_7) = 0.

The dependence of μ on ξ is the main force behind setting dependence. But it is natural to allow a bit more structure in the framework. Considering the ±-law, it makes sense to dispose of the states Λ_0 and Λ_7 immediately. Furthermore, for a joint measurement of A and B the above constraint implies that the state of the system will not lie in Λ_1 or Λ_6. One may then question whether it makes sense that ξ_AB(·|λ) should be defined at all for λ ∈ Λ_1 ∪ Λ_6. What sense does it make to have response functions for situations in which the system cannot respond? To give substance to this idea, I allow for the possibility that response functions are defined only on a subset of the set of ontic states. In this case, one could have, for example,

Λ_ξAB = Λ \ (Λ_1 ∪ Λ_6)

as the set of ontic states for which ξ_AB is defined. This idea is made explicit in Fig. 2b. In this setting one still has that, for example, a state λ ∈ Λ_1 has responses both for ξ_BC and ξ_CA but not for ξ_AB. It is of course possible to be more eliminative and let maximal measurements partition the state space. 8 This is illustrated in Fig. 2c. Here the state selects a maximal measurement and determines only the values for that measurement. Of course, for non-maximal measurements one can still have overlapping domains, e.g., a measurement of A alone being defined on both Λ_ξAB and Λ_ξCA.
On the operational level the choice between these state spaces is somewhat arbitrary as there is no operational distinction between an empty set of states or a non-empty set of states that has probability zero for all possible preparations. But the choice does matter for the type of explanations that can be given for the PM model. When using the state space of Fig. 2a, the ±-law is merely a contingent fact about observed values that stems from the special set of preparations that is allowed. But it is certainly not a law that is true for all values (observed and unobserved). For the state spaces in Fig.  2b, c, on the other hand, the ±-law is true in the sense that it represents a property of the system that holds for all possible states.
There is a peculiarity though about the state space of Fig. 2b. Even though a system in a state λ ∈ Λ 2 has no response for ξ C A , the state does determine values for both A and C and these values violate the ±-law. This is a peculiarity of the example rather than of the explanatory strategy. Within quantum mechanics such indirect contradictory value assignments can be avoided as evidenced by, for example, existing methods for partial Kochen-Specker colorings. 9

Formal Definition
We are now in a position to define setting dependent ontic models. Let a prepare-measure model (P, M) be given. A setting dependent ontic model for (P, M) consists of a measurable space (Λ, Σ) and a pair (Π, Ξ) where
- Ξ is the set of response functions. For every ξ ∈ Ξ there is a (measurable) set of ontic states Λ_ξ ⊂ Λ for which ξ has a response. Moreover, for every measurement M ∈ M there is a non-empty subset Ξ_M ⊂ Ξ and every ξ ∈ Ξ_M is a Markov kernel from (Λ_ξ, Σ_ξ) to (Ω_M, Σ_M).
8 It is not always immediately obvious what should count as a maximal measurement since to some extent this depends on the choice of the ontic model. But insofar as the notion is used in this paper it should be intuitively clear what is meant by it.
9 See for example [20, pp. 231-234] and references therein.
- Π is the set of epistemic states. Every μ ∈ Π is a map μ : Ξ × Σ → [0, 1] such that for every ξ ∈ Ξ the map μ_ξ : Σ → [0, 1] is a probability measure specifying the probability of (sets of) ontic states conditional on the response function ξ. It thus satisfies μ_ξ(Λ_ξ) = 1. For every P ∈ P there is a non-empty set Π_P ⊂ Π such that the model reproduces the predictions of the operational model. That is, for every P and M, for every μ ∈ Π_P and ξ ∈ Ξ_M,

P(E|M, P) = ∫_Λξ ξ(E|λ) dμ_ξ(λ) for all E ∈ Σ_M.

With this definition the idea that measurement settings and states need not be independent has been successfully incorporated. The old framework is re-obtained by assuming setting independence:

Setting Independence For all response functions ξ, ξ′ ∈ Ξ and every epistemic state μ ∈ Π, μ_ξ = μ_ξ′.

The way the definition is presented, one may get the impression that it favors a retro-causal reading. The measurement setting represented by the response function ξ has an influence on which ontic states are more likely to show up via μ_ξ. In the extreme case the setting may even confine the possible ontic states to a strict subset Λ_ξ ⊊ Λ. However, one should keep in mind that μ represents an epistemic state. That an agent evaluates the possible ontic states differently given one measurement setting rather than another can be motivated both by the idea that the setting influences the state and by the idea that the state influences the setting. The stricter dependency Λ_ξ also suggests a false asymmetry, because one can similarly start from sets of settings Ξ_λ that are allowed by a particular ontic state (suggesting a more superdeterministic approach). One can thus choose whether one should be defined in terms of the other or the other way around:

Λ_ξ = {λ ∈ Λ : ξ ∈ Ξ_λ} or Ξ_λ = {ξ ∈ Ξ : λ ∈ Λ_ξ}.

There is one important asymmetry between settings and ontic states however. Settings may influence the probability distributions over the set of states, but there is no converse.
Probability distributions over settings are not even defined in the model, and in this sense the type of dependence is not really probabilistic dependence. 10 This has everything to do with the operational approach, as explained below.

One price one immediately pays within this new framework is that ontic states no longer unambiguously give rise to epistemic states. In the original framework, every λ ∈ Λ defines an epistemic state μ_λ through

μ_λ(Δ) = 1_Δ(λ) for all Δ ∈ Σ.

In the new framework, every λ determines a set of possible measurements Ξ_λ containing precisely the ξ with λ ∈ Λ_ξ. For ξ ∈ Ξ_λ one can define μ^λ_ξ(Δ) to be 1_Δ(λ), but for ξ ∈ Ξ\Ξ_λ no such definition is available. Holding on to the idea that Π encodes the set of all possible epistemic states, setting dependence can lead to the situation where ontic states are typically not knowable. The rationale behind this is the following. The epistemic states are associated with preparations which, at least on the operational level, give predictions for all measurements M ∈ M. Thereby the idea has been brought in that for any epistemic state all measurements in M are possible. 11 I think this is the appropriate attitude to adopt when considering what it means to be an agent in a world with setting dependence. Of course, one can insist that epistemic states that preclude certain measurements should also be possible. Call them oracle states. An oracle state would be akin to the epistemic state of a time traveler traveling back in time in a grandfather-paradox scenario. The traveler walks around knowing that certain acts with particular outcomes are just not realizable. Although oracle states may be physically possible in such scenarios, they are not the kind of states that are fitting when adopting an operationalist approach. This is also reflected in the fact that already at the level of the PM model the preparation of oracle states is ruled out, since a preparation determines probabilities for all possible measurements.
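As a concrete illustration of the definition (a sketch of mine, using the Specker example with the state space of Fig. 2b), the following Python snippet builds a setting dependent model in which each epistemic state μ_ξ is uniform over the states Λ_ξ compatible with the chosen pairwise measurement. The ±-law then holds with probability one at the operational level:

```python
from itertools import product

# Ontic states: definite value assignments to (A, B, C), values in {+1, -1}.
states = list(product((+1, -1), repeat=3))
PAIRS = {"AB": (0, 1), "BC": (1, 2), "CA": (2, 0)}

def Lambda_xi(pair):
    """States for which the pairwise measurement has a response (cf. Fig. 2b):
    only states anticorrelated on that pair."""
    i, j = PAIRS[pair]
    return [s for s in states if s[i] != s[j]]

def mu_xi(pair):
    """Setting dependent epistemic state: uniform over Lambda_xi."""
    support = Lambda_xi(pair)
    return {s: 1 / len(support) for s in support}

def prob_outcome(pair, outcome):
    """P(outcome | xi) = sum_lambda xi(outcome|lambda) mu_xi(lambda);
    the response deterministically reads off the definite values."""
    i, j = PAIRS[pair]
    return sum(p for s, p in mu_xi(pair).items() if (s[i], s[j]) == outcome)

# The +/- law holds with probability one for every setting.
for pair in PAIRS:
    p_law = prob_outcome(pair, (-1, +1)) + prob_outcome(pair, (+1, -1))
    assert abs(p_law - 1.0) < 1e-12
```

The uniform choice of μ_ξ is of course just one option; any family of measures supported on the sets Λ_ξ would equally respect the ±-law.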

Bipartite Systems Revisited
In a setting dependent ontic model, states and measurements can be correlated. This requires a re-evaluation of how bipartite systems should be treated in this framework. Specifically, the definitions for parameter and outcome independence need to be reconsidered.
At the operational level everything remains the same of course. At the level of the ontic models the partition Ξ = Ξ A ∪ Ξ B ∪ Ξ joint also remains intact. And for every M A ∈ M A and M B ∈ M B and every ξ A ∈ Ξ M A and ξ B ∈ Ξ M B there is a unique ξ AB ∈ Ξ M AB to denote the case where Alice performs ξ A and Bob performs ξ B .
Outcome independence is a criterion that translates easily to the generalized framework since it is formulated for a fixed combination of measurement settings. It is therefore adopted in the generalized framework with the minor alteration that λ now ranges over the set of compatible states instead of all states:

ξ_AB(E_A × E_B|λ) = ξ_AB(E_A × Ω_MB|λ) · ξ_AB(Ω_MA × E_B|λ)

for all events E_A ∈ Σ_MA, E_B ∈ Σ_MB and ontic states λ ∈ Λ_ξAB.
The case of parameter independence is more complicated. The straightforward reformulation would be that

ξ_AB(E_A × Ω_MB|λ) = ξ_A(E_A|λ) for all λ ∈ Λ_ξAB,

and similarly for Bob. However, there are several problems with this reformulation. First, it only makes sense if one further requires that Λ_ξAB ⊂ Λ_ξA ∩ Λ_ξB. This in itself is quite a natural constraint. More problematic is that the reformulation is too weak to capture the spirit of parameter independence. The appeal of parameter independence is that it leads to further constraints like, for example,

ξ_AB(E_A × Ω_MB|λ) = ξ_AB′(E_A × Ω_MB′|λ)

for all possible measurements M_B and M_B′ for Bob. In the new setting, this constraint would have to be reformulated with λ ranging over Λ_ξAB ∩ Λ_ξAB′ instead of over Λ. This would make the constraint vacuous in cases where the intersection Λ_ξAB ∩ Λ_ξAB′ is empty. For setting dependent ontic models the notion of parameter independence loses its meaning because it relies on a type of counterfactual reasoning that is no longer applicable. That is, it expresses the idea that for any ontic state, if Bob had performed a different measurement than he in fact did, the predictions for Alice's system would have been unaffected. But the point of setting dependence is that by shifting to a possible world in which Bob's alternative measurement is the actual measurement, one may be required to change the state of the system.
To keep some of the initial appeal of parameter independence, we need to move up to the operational level and replace it with a no-signaling condition. This is the constraint that any non-local correlations that may be lurking around cannot be employed for signaling: selecting a setting on Bob's side does not, on average, change the statistics for Alice's measurements, and vice versa. This is a condition that can be adopted in the generalized framework in a meaningful way:

No-Signaling For every epistemic state μ ∈ Π, all measurements M_A ∈ M_A and M_B, M_B′ ∈ M_B, and every event E_A ∈ Σ_MA,

∫ ξ_AB(E_A × Ω_MB|λ) dμ_ξAB(λ) = ∫ ξ_AB′(E_A × Ω_MB′|λ) dμ_ξAB′(λ),

and similarly with the roles of Alice and Bob reversed.
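Operationally, no-signaling is just a statement about marginals. The following Python check (an illustration with made-up numbers, not data from the paper) verifies it for two hypothetical outcome tables that share Alice's marginals:

```python
def alice_marginal(p_joint, a):
    """Alice's marginal probability for outcome a, given a joint outcome
    table p_joint mapping (a, b) pairs to probabilities for one setting pair."""
    return sum(p for (aa, b), p in p_joint.items() if aa == a)

# Hypothetical joint distributions for settings (A1, B1) and (A1, B2):
p_A1B1 = {(+1, -1): 0.5, (-1, +1): 0.5, (+1, +1): 0.0, (-1, -1): 0.0}
p_A1B2 = {(+1, +1): 0.4, (+1, -1): 0.1, (-1, +1): 0.1, (-1, -1): 0.4}

# Changing Bob's setting from B1 to B2 leaves Alice's statistics untouched:
# the correlations, however different, cannot be used for signaling.
for a in (+1, -1):
    assert abs(alice_marginal(p_A1B1, a) - alice_marginal(p_A1B2, a)) < 1e-12
```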

Signaling and Randomness in Setting Dependent Ontic Models
When thinking about setting dependence, many may have the intuitive reaction that it allows for too much. Peculiar correlations between states and measurement settings could be used to make just about any theory fit the data. But once one moves from considering setting dependence as a mere logical loophole to an idea that deserves to be taken seriously, one has to introduce conditions for models to make the idea more mature. Setting dependent ontic models are aimed at doing just that. In this section I demonstrate by example that non-trivial theorems about setting dependent models can be proven.
The crux of the analysis is that epistemic states in a setting dependent ontic model (mathematically) behave as ontic states in an ordinary ontic model. Specifically, for any epistemic state μ in the new formalism one can define a new ontic state λ̄_μ that has responses for all possible measurements via μ's expectation values for response functions. It thus behaves as a "traditional" ontic state: every ξ_M ∈ Ξ_M can be extended to Λ ∪ {λ̄_μ} by setting

ξ_M(E|λ̄_μ) := ∫ ξ_M(E|λ) dμ_ξM(λ)

for all E ∈ Σ_M. Thus in a sense, on average, a setting dependent ontic model behaves as a standard ontic model. 12 The idea can be further illustrated using Fig. 1a. In the original framework, this table represents a single ontic state λ. In the generalized framework the same picture can be used to represent a single epistemic state μ. Every column then displays the expectation values for the response function ξ_AiBj as determined by μ_ξAiBj. Consequently, constraints on ontic states in ordinary ontic models can be reformulated into constraints on epistemic states in setting dependent ontic models. The reformulation of parameter independence to no-signaling is a concrete example of this idea.
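The averaging construction can be made concrete in a few lines of Python (my sketch, with arbitrary numbers): the effective state λ̄_μ responds to a measurement with μ's expectation value for the corresponding response function.

```python
# Averaging a setting-dependent ontic model into an ordinary one
# (illustrative sketch): the epistemic state mu defines an effective
# state lambda_mu whose response is mu's expectation value.
mu_for_xi = {"l1": 0.25, "l2": 0.75}       # mu_{xi_M} over Lambda_{xi_M}
xi_M = {"l1": {"+1": 1.0, "-1": 0.0},      # deterministic per ontic state
        "l2": {"+1": 0.0, "-1": 1.0}}

def response_at_lambda_mu(outcome):
    """xi_M(E | lambda_mu) := sum_lambda xi_M(E|lambda) mu_{xi_M}(lambda)."""
    return sum(p * xi_M[l][outcome] for l, p in mu_for_xi.items())

# The effective state responds probabilistically even though every
# underlying ontic state responds deterministically.
assert response_at_lambda_mu("+1") == 0.25
assert response_at_lambda_mu("-1") == 0.75
```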
The upshot is that theorems for ontic models may be reformulated into theorems for setting dependent ontic models. Whether this is possible and meaningful for any particular theorem of course depends on the precise constraints. But for the derivation of the CHSH inequality we do have a meaningful result. The following theorem demonstrates that, given the predictions of quantum mechanics and the impossibility of signaling, even if one gives up on setting independence, the world appears to be random. That is, not every epistemic state can be written as a convex combination of non-signaling epistemically deterministic states, where an epistemically deterministic state is an epistemic state μ ∈ Π that satisfies

∫ ξ(E|λ) dμ_ξ(λ) ∈ {0, 1}

for every response function ξ and event E. There are four non-signaling epistemically deterministic states that satisfy this, which are depicted in Fig. 3. All of these satisfy the CHSH inequality of Theorem 1 below.

Theorem 1 Let (P, M) be a PM model for a bipartite system such that M_A and M_B each contain at least two ±1-valued measurements, M_A1, M_A2 and M_B1, M_B2. Let (Λ, Σ, Π, Ξ) be a setting dependent ontic model for the PM model and let Π^ED_NS be the convex hull of the set of non-signaling epistemically deterministic states. Then every μ ∈ Π^ED_NS satisfies the CHSH inequality

−2 ≤ E_μ(A1, B1) + E_μ(A1, B2) + E_μ(A2, B1) − E_μ(A2, B2) ≤ 2,

where E_μ(Ai, Bj) denotes the expectation value of the product of Alice's and Bob's outcomes as determined by μ_ξAiBj and ξ_AiBj.
In a similar way one can find that the inequality also holds for option two. Because every convex combination of epistemic states that satisfy the CHSH inequality also satisfies the inequality, every μ ∈ Π ED NS satisfies it.
The fact that all non-signaling epistemically deterministic distributions satisfy all CHSH inequalities was already established by Masanes et al. [27]. But as far as I can tell it has not been fully appreciated that the result does not rely on an assumption of setting independence. For example, Seevinck [32, p. 278], building further on the work of Masanes et al., explicitly incorporates an assumption of setting independence in his analysis.
Both the no-signaling assumption and the epistemic determinism assumption are necessary. Maximal violations of the inequality are easily obtained with a signaling distribution or a PR-box distribution as illustrated in Fig. 4. The PR-box configuration (Fig. 4b) is an extreme point in the convex set of non-signaling epistemic states. But it can of course be written as the convex combination of two signaling distributions, one of them being the distribution in Fig. 4a. The proponent of setting dependence thus faces a choice: either no-signaling is a law, in which case there is necessarily randomness on the epistemic level, or signaling is allowed in principle.
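The PR-box half of this claim is easy to verify explicitly. The following Python snippet (my illustration of the standard Popescu-Rohrlich distribution) confirms the maximal CHSH value of 4 and checks that the distribution is nonetheless non-signaling:

```python
# PR-box distribution: outcomes satisfy a*b = -1 exactly when both parties
# use their second setting, and a*b = +1 otherwise, each case uniformly.
def pr_box(i, j, a, b):
    """P(a, b | Alice's setting i, Bob's setting j), with i, j in {1, 2}."""
    target = -1 if (i, j) == (2, 2) else +1
    return 0.5 if a * b == target else 0.0

def corr(i, j):
    """Expectation value E(Ai, Bj) under the PR-box distribution."""
    return sum(a * b * pr_box(i, j, a, b) for a in (+1, -1) for b in (+1, -1))

S = corr(1, 1) + corr(1, 2) + corr(2, 1) - corr(2, 2)
assert S == 4                              # maximal CHSH violation

# No-signaling: each local marginal is 1/2, whatever the distant setting.
for i in (1, 2):
    for j in (1, 2):
        for a in (+1, -1):
            m = sum(pr_box(i, j, a, b) for b in (+1, -1))
            assert abs(m - 0.5) < 1e-12
```

By Theorem 1 this distribution cannot be a convex combination of non-signaling epistemically deterministic states, although it is a convex combination of signaling deterministic ones.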
The upshot of this analysis is not that Theorem 1 presents a problem for setting dependence. A priori, there is no reason to believe that both horns of the dilemma presented by the theorem lead to insurmountable complications. Rather, the message is that despite the radical nature of setting dependence, allowing it does not imply that "anything goes". Quantum mechanics still poses non-trivial constraints. Theorem 1 establishes that even if local determinism is salvaged at the ontic level, it cannot be had at the epistemic level (unless one allows oracle states). As such, it could provide a handle for understanding the probabilistic nature of quantum mechanics from a setting dependent perspective.

Revisiting Some Problems with Setting Dependence
Arguments against setting dependence are commonly quite emotional. Clearly articulating problems for setting dependence and specifying what would count as a solution to such problems is therefore difficult. Here I will not attempt to give an exhaustive list of problems and responses. 13 Instead I will focus on problems for which the formal considerations of the previous section lead to valuable insights. An often heard serious worry about setting dependence, dating back as early as the work of Shimony et al. [34], concerns the (in)compatibility with the scientific method. Esfeld [9, p. 473], for example, writes:

To obtain any experimental evidence whatsoever, one has to presuppose that the questions that the experimenter asks (i.e. the choice of measurement settings) are independent of the past state of the measured system.
It is not further explicated why setting dependence implies that experimental data can no longer serve as evidence for scientific theories. A possible worry is that the data are then no longer guaranteed to reflect the facts of the world: a theory supported by the data could be completely mistaken in its statements concerning unperformed experiments. This is the worry as expressed by Zeilinger [44, p. 266]:

The second important property of the world that we always implicitly assume is the freedom of the experimentalist […] This fundamental assumption is essential to doing science. If this were not true, then, I suggest, it would make no sense at all to ask nature questions in an experiment, since then nature could determine what our questions are, and that could guide our questions such that we arrive at a false picture of nature.
Another point in Zeilinger's quote that deserves attention is the allusion to some form of free will. Such allusions are quite common in this type of discussion, with the free will theorem as the most evident example. Although the relevance of free will is controversial, it is worthwhile to delve into the issue to get a better grip on setting dependence. This will be done in the next section, after which I will return to the more pressing problem of compatibility with scientific methodology.

Compatibility with Free Will
The notion of free will is quite slippery. It is therefore difficult to see what kind of role, if any, it could play in the foundations of quantum mechanics. On the other hand, certain aspects associated with free will, like agency, do play a role when one talks about choices of measurements. One may insist that measurement settings need not be chosen by experimenters for every run of a Bell test; one may also use the outcomes of the Swiss lottery machine [3]. But even then there is still a choice involved in using that machine rather than any other.
That being said, it is not obvious how, or even whether, setting dependence threatens whatever notion of free will is at stake. At any rate, the tension is usually framed as a problem for superdeterminism specifically. Even proponents of retro-causality seem to agree with this sentiment [28]. Therefore, I restrict attention to superdeterminism in this section as well, and will explicate how the tension between free will and superdeterminism can be relieved.
My starting point is the recent paper by Landsman [21], which I think is one of the most serious attempts at analyzing the role of free will in the free will theorem 14 and, by similarity, in Bell's theorem. 15 Theories of free will roughly fall into two camps: libertarianism, which requires indeterminism, and compatibilism, which is compatible with determinism. Our focus is necessarily on the latter, since the free will objection is targeted at superdeterminism in particular rather than just determinism. 16 In a deterministic world the actions of agents are obviously determined: their actions cannot be other than their actual actions. On the other hand, there is also a sense in which the actions could have been other than what they actually are. The distinction is usefully illustrated by Landsman's reformulation of Lewis' separation of two notions of "being able" [24]:
- I am able to do something such that, if I did it, the state of the actual world at some earlier time would have been different.
- I am able to change the state of the actual world at some earlier time.
The second is clearly false, while the first is the kind of statement that can be true in a deterministic world in examples such as "I am able to raise my hand" even if I do not, in fact, raise my hand. Roughly, the idea is that there is a possible world that has virtually the same history as the actual world up until the point where I do in fact raise my hand. 17

14 The formal part of the theorem was first proven by Heywood and Redhead [17] and used as an argument by Stairs [39]. Its (in)famous reformulation with an emphasis on "free will" is due to Conway and Kochen [6,7]. 15 See [5,14,43] for further discussion of the distinction between the philosophical implications of Bell's theorem and the free will theorem. 16 Bohmian mechanics, for example, is usually taken not to suffer from a free will problem, at least not in the same way as superdeterminism.

The idea is then that this notion of agency, as a necessary condition for compatibilist free will, is violated in any superdeterministic model for quantum mechanics. To show this, the idea has to be captured in a formal requirement that can play a role in the free will theorem. To this end Landsman invokes the intuition "that free will involves a separation between the agent, Alice, (who is to exercise it) and the rest of the world, under whose influence she acts" [21, p. 101]. This allows one to talk unambiguously about the state of the agent a, encoding which action the agent will perform, and the state of the system (possibly the rest of the universe) λ. The formal requirement, dubbed freedom, is that these states are independent in the following sense: for any possible agent state a and any possible state of the system λ, there is a possible world in which both are actualized.
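The freedom requirement can be phrased set-theoretically: if the possible worlds are represented as pairs (a, λ), freedom demands that the set of possible worlds contains the full product of its projections. A minimal sketch (the encoding of worlds as pairs is my own, not taken from the paper):

```python
def satisfies_freedom(worlds):
    """Freedom: every combination of a possible agent state and a possible
    system state is jointly realized in some possible world."""
    agent_states = {a for a, _ in worlds}
    system_states = {lam for _, lam in worlds}
    return worlds >= {(a, lam) for a in agent_states for lam in system_states}

# Both measurement choices 'x' and 'z' are co-possible with both system states
assert satisfies_freedom({('x', 1), ('x', 2), ('z', 1), ('z', 2)})

# Setting-dependent toy: the choice is tied to the ontic state, so some
# combinations of possible agent state and possible system state never occur
assert not satisfies_freedom({('x', 1), ('z', 2)})
```

A setting dependent model violates the condition precisely when some possible agent state is never co-actualized with some possible system state.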
It is easy to argue that the freedom assumption is a sufficient condition to restore compatibilist free will. Suppose that in the actual world (λ, a) is the case and a′ is considered a possible (counterfactual) action. Then, according to freedom, there is a possible world in which (λ, a′) is the case. It is reasonable to assume that there are possible histories H and H′ that are very similar up until the point where the first yields (λ, a) and the second (λ, a′).
Setting dependent ontic models are specifically designed to violate the freedom assumption. Each measurement ξ is only possible when the system is in a state in Λ ξ , which may be a proper subset of Λ. And for each possible state λ, the only possible actions are those corresponding to measurements in Ξ λ . The state spaces in Fig. 2b, c illustrate this explicitly. But this only causes a problem for compatibilist free will if freedom is a necessary condition.
Freedom, as a general constraint, is however too strong to capture the kind of agency required for compatibilist free will. Even outside the realm of quantum mechanics the set of possible actions for an agent will depend on the state of the rest of the universe λ. For example, in our current world there is the possible action a = "travel to the moon". But that is clearly not a possible action in any state of the universe in which our planet does not have a moon. Actions at least need to refer to something in the universe to be possible. And even in our actual universe a only became a possible action relatively recently. This further illustrates that the set of possible actions is time dependent, which only seems natural since λ is as well.
Although the example violates freedom, there is no violation of compatibilist free will: the two histories, one leading up to a universe in which our world has a moon and one in which it does not, are in no way similar. The example only shows that Landsman's criterion is too crude for the idea it is supposed to capture. So something more subtle is going on.
The possible actions to which freedom is to be applied are the familiar quantum measurements. Surely they are possible: spin measurements along the x or z axes have been performed many times. But what the superdeterminist may call into question is whether they are possible at any time for any state of the system. Specifically, what may be called into question is counterfactual definiteness. The intersection of the sets Λ σ x and Λ σ z may be empty. And even though this is a much more subtle violation of freedom than the moon example, it still is not enough to violate compatibilist free will. There may still be similar histories H x , H z , with one leading to a state λ x ∈ Λ σ x and the other to a state λ z ∈ Λ σ z . After all, the only change needed in the state of the rest of the universe for the agent to perform a z measurement instead of an x measurement is that ξ σ z should correspond to a property of the system rather than ξ σ x .
What remains peculiar, though, is the special way in which λ and Ξ λ evolve in parallel, in perfect harmony with each other. The trivial explanation is that, at any point in time, only measurements that correspond to properties of systems can be performed. In a superdeterministic world the possible settings Ξ λ are necessarily constrained by the state λ. But a hint of conspiracy remains, and one would want a more solid explanation for why the set of epistemically possible measurements Ξ is typically much larger than the set of ontologically possible measurements Ξ λ . The problem of free will for superdeterminism is thus dissolved and replaced with the problem of understanding why evaluating possible actions explicitly requires one to consider counterfactual states of the system. As I will argue in the next section, this is not just a problem for superdeterminism but a general problem for setting dependence, because the problem of compatibility with scientific methodology may also be seen to reduce to it.

Compatibility with Scientific Methodology
Setting independence is often taken to be a prerequisite for scientific methodology. I think that the main intuition behind this relies on a similar condition in statistics: the criteria for selecting a sample from a population should be independent of the property of the population under investigation. If one is interested in the distribution of age among a certain population, the selection method for the sample should not be "just select the ten youngest subjects". The art of sampling is to construct a sample in such a way that it is as representative of the entire population as possible within the constraints of one's research facilities. Setting dependence now seems to suggest that, despite all our best efforts, we are still forced to always select samples that are not representative of the entire population. If this is the case, then the theories that we come up with to fit the data necessarily fail to fit the population, and thus we arrive at a false picture of nature.
Using these intuitions, here is a naive idea of how setting dependence could be used to explain the experimental violations of the CHSH inequality. We have an ensemble of pairs of particles that are sent to Alice and Bob. For each combination of settings A i , B j the pairs for which the measurement ξ A i B j is performed determine a sub-ensemble. Setting dependence can then be used to argue that, for at least one of the four possible settings, the outcome statistics for the sub-ensemble lead to a biased estimate of the response of the entire ensemble to the ξ A i B j measurement. The violation of the CHSH inequality is merely an artifact of biased sampling and thus does not reflect a property of the total ensemble. Experiments have led us to a false picture of nature, and that picture is quantum mechanics.
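The naive explanation can be simulated in a toy model (entirely my own construction, not from the paper): a local deterministic ensemble whose fairly sampled statistics obey the CHSH bound of 2, but whose conspiratorially selected sub-ensembles reach the algebraic maximum S = 4.

```python
import random

def chsh(pairs, assign):
    """CHSH value S = E(0,0) + E(0,1) + E(1,0) - E(1,1), where the
    (possibly lambda-dependent) rule `assign` picks the settings per pair."""
    sums, counts = {}, {}
    for lam in pairs:
        i, j = assign(lam)
        sums[(i, j)] = sums.get((i, j), 0) + lam['A'][i] * lam['B'][j]
        counts[(i, j)] = counts.get((i, j), 0) + 1
    E = lambda k: sums[k] / counts[k]
    return E((0, 0)) + E((0, 1)) + E((1, 0)) - E((1, 1))

random.seed(0)
# Local deterministic population: each pair carries predetermined +/-1
# responses for both settings on each wing.
ensemble = [{'A': (random.choice([-1, 1]), random.choice([-1, 1])),
             'B': (random.choice([-1, 1]), random.choice([-1, 1]))}
            for _ in range(100_000)]

# Setting-independent sampling: |S| respects the bound of 2 (here it is near 0).
fair = chsh(ensemble, lambda lam: (random.randrange(2), random.randrange(2)))

# Conspiratorial sampling: only choose settings whose predetermined product
# matches the sign pattern (+, +, +, -).  A parity argument shows at least
# one such setting always exists, so `opts` is never empty.
target = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): -1}
def biased(lam):
    opts = [(i, j) for (i, j), t in target.items()
            if lam['A'][i] * lam['B'][j] == t]
    return random.choice(opts)

s_biased = chsh(ensemble, biased)
print(fair, s_biased)  # fair stays near 0; s_biased comes out at exactly 4.0
```

The point is only illustrative: with setting-dependent selection, the sub-ensemble correlators need not estimate anything about the total ensemble at all.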
The naive explanation is problematic. But it is not trivial to point at the culprit and to assess whether it is a necessary feature of any setting dependent explanation. The conclusion that quantum mechanics is false is not in itself problematic. It is common scientific progress that older theories are in some sense false in the light of newer theories. Since it can be part of the aim of setting dependence to surpass quantum mechanics, a rejection of the universal validity of quantum mechanics is acceptable. But preferably, the new theory is also able to explain why the old theory is false. And it is here that the naive explanation runs into complications.
The naive explanation necessitates a mismatch between observed phenomena (for which quantum mechanics holds) and unobserved phenomena (for which quantum mechanics fails). But it is unclear how the observed/unobserved distinction can be incorporated into the setting dependent theory in a meaningful way. In essence, it is a type of measurement problem akin to the one pointed out by Lewis [25, §6]. However, there is one important distinction. Lewis explains the problem completely in terms of hidden mechanisms (à la a superdeterministic theory) that cause the distinction between observed and unobserved behavior. But, as also noted later by Lewis, the problem lurks for retro-causal models as well. Here the measurement retro-causally influences the system in such a way that it behaves as an observed system. If the only way to account for the occurrence of this effect is the stipulation that a measurement happens, then the problem remains.
Can this problem be avoided? Consider again the toy example from Sect. 3.1. Any way of assigning non-contextual definite values to all observables violates the ±-law. A conspiratorial recovery of the law is possible by demanding that measurements necessarily select out a sub-ensemble in which the law does hold: sometimes A and B have the same value, but never when they are measured simultaneously. This is the proposal of Eq. (11). It suffers from the measurement problem because there is nothing in the ontology that justifies or explains the distinction between the epistemic judgments μ ξ AB , μ ξ BC and μ ξ C A .
The problem is slightly alleviated by switching to the state space of Fig. 2b. Here the set of possible measurements is constrained by the ontic state. A system in the ontic state λ ∈ Λ 2 only has responses for ξ AB and ξ BC but not for ξ C A . This is the basis for a new understanding of setting dependence. It is not a mismatch between the selected sample and the population. Rather, it is that the sample upon which the ξ C A measurement is performed is not a sample of the entire population when considering responses to ξ C A , because certain systems in the population do not even have a response for ξ C A .
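This reading can be made concrete in a small sketch. The precise form of the ±-law and the state space are my own reconstruction from the surrounding discussion: I take the law to say that jointly measured observables never agree, which no global non-contextual assignment of ±1 values to A, B, C can satisfy (an odd anticorrelation cycle).

```python
from itertools import product

# Joint measurements and the indices of the observables (A, B, C) they probe.
PAIRS = {'AB': (0, 1), 'BC': (1, 2), 'CA': (2, 0)}

def Xi(lam):
    """Xi_lambda: only those measurements are possible for which the ontic
    state's predetermined outcomes obey the (assumed) +/--law."""
    return {m for m, (i, j) in PAIRS.items() if lam[i] != lam[j]}

states = list(product([-1, 1], repeat=3))  # candidate ontic states (a, b, c)

# No ontic state supports all three measurements, since a, b, c cannot be
# pairwise distinct with only two values: counterfactual responsiveness fails.
assert all(len(Xi(lam)) < 3 for lam in states)

# Yet every measurement is possible at the epistemic level: for each xi
# there are ontic states that do have a response for it.
assert all(any(m in Xi(lam) for lam in states) for m in PAIRS)
```

For instance, the state (1, -1, 1) has Ξ_λ = {AB, BC}, mirroring the Λ 2 states of Fig. 2b: the law holds on every performable measurement, not because measurements select a biased sub-ensemble, but because ξ C A is simply not defined for such states.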
Thus what the defender of setting dependence ought to deny is what may be called counterfactual responsiveness: the idea that systems have responses for all possible measurements at all times. Specifically, what is denied is that, for a system in a state λ, the set of measurements Ξ λ for which it has responses coincides with the set of measurements Ξ that are deemed possible at the epistemic level; instead, Ξ λ may in general be a proper subset of Ξ . Counterfactual responsiveness is thus a generalization of counterfactual definiteness, allowing for the option that responses are non-deterministic.
Returning to the case of the violation of the CHSH inequality, we now find the following response to the naive explanation. If counterfactual responsiveness fails, we can no longer conclude that the sub-ensemble of systems on which the ξ A i B j measurement is performed is biased with respect to how systems in the total ensemble respond to ξ A i B j , for the simple reason that not all systems in the total ensemble have a response for ξ A i B j . The violation of the CHSH inequality then still points to a property of the total ensemble and is not an artifact of some peculiar selection of the sub-ensembles for each of the four measurement settings.
The total ensemble used in the experiment may still be assumed to be a fair sample for the population of all pairs of particles in the singlet state. And experimental investigations of the sample can still be used for inductive inferences about the population in the standard way. Thus in this sense there is no conflict with scientific methodology; setting dependence does not imply that statistical sampling is necessarily biased.
There is of course still the issue of having to explain how measurement settings always line up with well-defined responses of the system. This is akin to the remaining problem of the previous subsection. It seems to me that there is no general strategy for setting dependence to resolve this issue; it will have to be resolved within specific models. But it is a much more subtle problem than an outright incompatibility with scientific methodology.

Discussion and Conclusion
In this paper I have presented an analysis of setting dependence that has mainly served as a defense of the tenability of the idea. Conflicts with free will and scientific methodology are not as dire as many would have us believe (Sect. 4). Therefore setting dependence is an option that deserves to be taken seriously. Moreover, that it can be taken seriously as more than just a logical possibility and be the subject of formal analysis was shown in Sect. 3. I wish to end this paper, however, with some critical considerations.
An important ingredient of the defense of setting dependence in Sect. 4 was the rejection of counterfactual responsiveness. 18 On the other hand, considering counterfactual measurements is common scientific practice. An appropriate operational description yields predictions for all measurements that are deemed possible, not just for those for which the response of the system is well-defined given its current ontic state. This suggests that, whatever information one may gather about the ontic state of a system, details about the compatibility of measurement settings should wash out.
That the ontic state of a system is not epistemically accessible is of course a common trait of ordinary hidden variable theories. And it is a poor criticism, since the goal is not per se to move beyond quantum mechanics, but to have an explanation of its operational success. However, in contrast with ordinary hidden variable theories, in the setting dependent approach the ontological details are not just inaccessible, but also inadmissible. This suggests a principled underdetermination of the ontology for setting dependent approaches, which may obstruct the possibility of a satisfactory explanation of quantum mechanics.
The problem can be made more precise within the framework of setting dependent ontic models. Its roots are already visible in ordinary ontic models. Although ontic models are extremely useful, there are limitations to their ability to model the ontology for a theory. Specifically, they suffer from a type of measurement problem. Response functions encode how systems respond when being measured. But nowhere in the model is it specified what constitutes a measurement.
In a way, it is a good thing that "measurement" is a primitive concept in the ontic models framework. It allows one to study foundational questions in quantum mechanics whilst setting aside the measurement problem. Moreover, the problem may also find a solution within the framework, by adding dynamics. A transformation T for the system, taking one preparation P to another preparation T (P), is encoded by a Markov kernel γ T from (Λ, Σ) to itself. Thus for a system in state λ undergoing the transformation T , γ T (Δ|λ) denotes the probability that the ontic state after the transformation lies in the set Δ. Performing a measurement M may then be taken to also initiate a transformation T M of the system. The response function ξ M then merely is a short-hand for denoting the probabilities with which γ T M evolves the state to one in which M has the appropriate definite value. The measurement problem then reduces to the task of explaining that γ T M encodes a natural physical process just like any other γ T . 19

Could a similar approach be adopted for setting dependent ontic models? I do not have a definite answer at the moment, but there are troublesome complications. It is reasonable to assume that also in this case a transformation T should be encoded by a Markov kernel γ T . How should an agent update their beliefs in the light of such a transformation? 20 In the original framework this is straightforward: if the initial epistemic state was μ P , the epistemic state after the transformation should be given by

μ T (P) (Δ) = ∫ γ T (Δ|λ) dμ P (λ).

Of course, γ T should be such that μ T (P) ∈ Π T (P) and thus satisfies Eq. (1). Translating this directly to a setting dependent ontic model we get

μ ξ,T (P) (Δ) = ∫ γ T (Δ|λ) dμ ξ,P (λ).
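The pushforward update can be illustrated in a discrete toy setting. The four-state kernel below is my own invented example, not from the paper; it also makes the trouble for the setting dependent version visible:

```python
# Discrete sketch of the epistemic update through a Markov kernel.  Ontic
# states are 0..3, and Lambda_xi = {0, 1} is the set of states for which
# the measurement xi is defined.  (The kernel is a made-up illustration.)
gamma_T = [[0.5, 0.0, 0.5, 0.0],   # row = current state, column = next state
           [0.0, 0.5, 0.0, 0.5],
           [1.0, 0.0, 0.0, 0.0],
           [0.0, 1.0, 0.0, 0.0]]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in gamma_T)  # stochastic rows

mu_xi_P = [0.5, 0.5, 0.0, 0.0]     # epistemic state concentrated on Lambda_xi

# Pushforward: mu_{xi,T(P)}(s) = sum over lam of mu_{xi,P}(lam) gamma_T(s|lam)
mu_after = [sum(mu_xi_P[lam] * gamma_T[lam][s] for lam in range(4))
            for s in range(4)]

# The naively updated state leaks out of Lambda_xi:
print(sum(mu_after[:2]))  # 0.5, i.e. < 1, so not an epistemic state for xi
```

Half of the probability mass flows to states in which ξ is no longer a possible measurement, which is exactly the difficulty discussed next.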
But this is nonsensical, because in general we would have μ ξ,T (P) (Λ ξ ) < 1: not all states in Λ ξ evolve again to states in Λ ξ , and similarly, not all states that end up in Λ ξ after T started off in Λ ξ . The problem is thus that, even if the agent is initially completely oblivious of which measurements are compatible with the current ontic state of the system, having complete knowledge of the dynamical laws undermines this obliviousness concerning the future ontic state of the system. Thus not only is the ontic state epistemically inadmissible, but the dynamical laws are as well. But without dynamical laws we have no theory, and principled underdetermination follows.

This is of course in no way a knock-down argument, because it relies on several presuppositions that may be rejected. 21 After all, setting dependent ontic models are merely a tool to get a better grip on setting dependence and are not meant to be the template for setting dependent theories. But they do reveal the difficulties involved in finding a proper setting dependent theory. In an attempt to nevertheless end on a positive note, I propose a tentative idea.
Even if the underdetermination problem turns out to be vicious, setting dependence may still be a resourceful concept for defending neo-Copenhagen type interpretations of quantum mechanics such as QBism [12,16]. The argument would roughly run as follows. Bell's theorem shows that, provided we assume that it is meaningful to ascribe states to systems and measurements yield single definite outcomes, locality demands that we give up setting independence. The underdetermination problem then implies that nevertheless we are stuck with operational descriptions of systems; systems have definite states but we cannot characterize them even in principle.
One would have all the benefits of an epistemic interpretation of quantum states [22,37] whilst avoiding familiar charges such as that of instrumentalism. On this approach, Bell's famous question "information about what?" has a straightforward answer: information about the actual state of the system. It is just that this information is necessarily incomplete. Moreover, because of setting dependence, ψ-ontology theorems like the PBR theorem do not apply.
Whether this proposal is tenable remains to be seen. It is unlikely to be endorsed by any of the proponents of either setting dependence or neo-Copenhagen approaches, although personally I think it may provide a more appealing ontology for QBism than the fundamental lawlessness proposed by Timpson [40] or the creating of experiences proposed by Fuchs [13]. But that is possible future work. For now, I leave it to the reader to judge the merit of this proposal.

21 One could for example suggest that Eq. (26) should be modified to also integrate over all possible values of ξ, using an appropriate probability distribution, such that μ T (P) is again an epistemic state as defined in Sect. 3.2. One may question, however, whether this solution works. If the probability distribution is given an epistemic interpretation, it implies that how the agent evaluates the outcomes for possible measurements after γ T depends on how likely certain measurements were before γ T . But this dependence is spurious, given that the transformation γ T itself is purely ontological. If the probability distribution is given an ontic interpretation, it should be determined by the ontic states, as they determine which sets of possible measurements are compatible. But this type of information is unlikely to be epistemically accessible, for the same reason that we required μ to be determined for all ξ ∈ Ξ in the first place. Allowing oracle states also does not resolve the problem: in the end, quantum states ought to correspond to non-oracle states, so recovering unitary evolution from dynamical laws at the ontic level still yields the problem of understanding Eq. (26).