Randomness? What randomness?

This is a review of the issue of randomness in quantum mechanics, with special emphasis on its ambiguity; for example, randomness has different antipodal relationships to determinism, computability, and compressibility. Following a (Wittgensteinian) philosophical discussion of randomness in general, I argue that deterministic interpretations of quantum mechanics (like Bohmian mechanics or 't Hooft's Cellular Automaton interpretation) are strictly speaking incompatible with the Born rule. I also stress the role of outliers, i.e. measurement outcomes that are not 1-random. Although these occur with low (or even zero) probability, their very existence implies that the no-signaling principle used in proofs of randomness of outcomes of quantum-mechanical measurements (and of the safety of quantum cryptography) should be reinterpreted statistically, like the second law of thermodynamics. In appendices I discuss the Born rule and its status in both single and repeated experiments, and review the notion of 1-randomness introduced by Kolmogorov, Chaitin, Martin-Löf, Schnorr, and others.


Introduction
Quantum mechanics commands much respect. But an inner voice tells me that it is not the real McCoy. The theory delivers a lot, but it hardly brings us closer to God's secret. Anyway, I'm sure he does not play dice. (Einstein to Born, 1926). 1 In our scientific expectations we have grown antipodes. You believe in God playing dice and I in perfect laws in the world of things existing as real objects, which I try to grasp in a wildly speculative way. (Einstein to Born 1944). 2 Einstein's idea of 'perfect laws' that should in particular be deterministic is also central to 't Hooft's view of physics, as exemplified by his intriguing Cellular Automaton Interpretation of Quantum Mechanics ('t Hooft [64]). One aim of this paper is to provide arguments against this view, 3 but even if these turn out to be unsuccessful I hope to contribute to the debate about the issue of determinism versus randomness by providing a broad view of the latter. 4 My point in Sect. 2 is that randomness is a Wittgensteinian family resemblance (Sluga [106]; Baker and Hacker [6], Ch. XI), but a special one that is always defined through its antipode, which may change according to the specific use of the term.
The antipode defining which particular notion of randomness is meant may vary even within quantum mechanics, and here two main candidates arise (Sect. 3): one is determinism, as emphatically meant by Born [16] and most others who claim that randomness is somehow 'fundamental' in quantum theory, but the other is compressibility, or any of the other equivalent notions defining what is called 1-randomness in mathematics, as its antipode (see Appendix B for an explanation of this). The interplay between these different notions of randomness is the topic of Sects. 4-5. In Sect. 5 I argue that one cannot eat one's cake and have it, in the sense of having a deterministic hidden variable theory underneath quantum mechanics that is strictly compatible with the Born rule. I also propose, more wildly, that Einstein's prohibition of superluminal signaling should be demoted from an absolute to a statistical law, much like the second law of thermodynamics. My analysis relies on some mathematical background presented in (independent) Appendices A-C on the Born rule, algorithmic (or 1-) randomness, and the Bell and Free Will Theorems.

Randomness as a Family Resemblance
The idea that in order to get clear about the meaning of a general term one had to find the common element in all its applications has shackled philosophical investigation; for it has not only led to no result, but also made the philosopher dismiss as irrelevant the concrete cases, which alone could have helped him to understand the usage of the general term. (Wittgenstein, Blue Book).
I'm saying that these phenomena have no one thing in common in virtue of which we use the same word for all - but there are many kinds of affinity between them. (...) we see a complicated network of similarities overlapping and criss-crossing: similarities in the large and in the small. I can think of no better expression to characterize these similarities than "family resemblances"; for the various resemblances between members of a family - build, features, colour of eyes, gait, temperament, and so on and so forth - overlap and criss-cross in the same way. (Wittgenstein, Philosophical Investigations). Though he did not mention it himself, randomness seems a prime example of a phenomenon Wittgenstein would call a 'family resemblance'. 5 Independently, as noted by historians Lüthy and Palmerino [83] on the basis of examples from antiquity and medieval thought, 6 the various different meanings of randomness (or chance) can all be identified by their antipode. Combining these ideas, I submit that randomness is not just any old Wittgensteinian family resemblance, but a special one that is always defined negatively:
• To begin with, in Aristotle's famous example of a man who goes to the market and walks into his debtor, the randomness of the encounter derives from the fact that the man did not go to the market in order to meet his debtor (but instead went there to buy food). Similarly for the man who digs a hole in his garden to plant a tree and finds a treasure. Even the birth of female babies (and certain other 'chance substances' for which he literally uses the Greek word for 'monsters') was identified by The Philosopher as a failure of purpose in Nature.
Thus what actually happened in all these examples was accidental because (as we would say) it was not intended, or, in Aristotelian parlance, because there was no final cause. By the same token, Aristotle found the atomistic cosmos of Democritus "random" because it was purposeless, ridiculing him for making the cosmic order a product of chance.
• In contrast, half a century later Epicurus found the atomic world not random at all and introduced randomness through the swerve, immortalized by Lucretius: When the atoms are traveling straight down through empty space by their own weight, at quite indeterminate times and places they swerve ever so little from their course, just so much that you can call it a change of direction. If it were not for this swerve, everything would fall downwards like raindrops through the abyss of space. No collision would take place and no impact of atom on atom would be created. Thus nature would never have created anything. (Lucretius, De Rerum Natura, Book II). 7 This was, so to speak, the first complaint against determinism (the goal of Epicurus/Lucretius was to make room for free will in a world otherwise seen as effectively dead because of the everlasting sequence of cause and effect), and indeed, in the context of our analysis, the key point is that the swerve is random because it is indeterminate, or because the atoms depart from their natural straight course.
• Neither of these classical meanings is at all identical with the dominant usage from medieval times to the early 20th century, which was exemplified by Spinoza, who claimed that not only miracles, but also circumstances that have concurred by chance, are reducible to ignorance of the true causes of phenomena, for which ultimately the will of God ('the sanctuary of ignorance') is invoked as a placeholder. 8 Thus Spinozist randomness lies in the absence of full knowledge of the entire causal chain of events.
• In the Leibniz-Clarke correspondence, 9 the latter, speaking for Newton, took randomness to mean involuntariness. Against Leibniz, Clarke (and Newton) denied that at least God could be limited by involuntariness. Leibniz, on the other hand, in some sense ahead of his time (yet in another following Epicurus/Lucretius), used the word 'random' to designate the absence of a determining cause-a possibility which he (unlike Epicurus/Lucretius) in turn denied on the basis of his principle of sufficient reason. 10 This is clear from an interesting passage which is not widely known and predates Laplace:

One sees then that everything proceeds mathematically - that is, infallibly - in the whole wide world, so that if someone could have sufficient insight into the inner parts of things, and in addition could have remembrance and intelligence enough to consider all the circumstances and to take them into account, then he would be a prophet and would see the future in the present as in a mirror. (Leibniz). 11
• Arbuthnot, a younger contemporary and follower of Newton, may have been one of the first authors to explicitly address the role of randomness in the deterministic setting of Newtonian physics. In the Preface to his translation of Huygens's path-breaking book De Ratiociniis in Ludo Aleae on probability theory, he wrote: It is impossible for a Die, with such determin'd force and direction, not to fall on such determin'd side, only I don't know the force and direction which makes it fall on such determin'd side, and therefore I call it Chance, which is nothing but the want of Art. (Arbuthnot 1692). 12 And similarly, but highlighting the alleged negativity of the concept even more: Chance, in atheistical writings or discourse, is a sound utterly insignificant: It imports no determination to any mode of Existence; nor in deed to Existence itself, more than to non existence; it can neither be defined nor understood; nor can any Proposition concerning it be either affirmed or denied, excepting this one, "That it is a mere word." (De Moivre 1718). 13 So this is entirely in the medieval spirit, where ignorance - this time relative to Newton's physics as the ticket to full knowledge - is seen as the origin of randomness.
• A century later, and like Arbuthnot and De Moivre again in a book on probability theory (Essai philosophique sur les probabilités, from 1814), Laplace portrayed his demon to make the point that randomness arises in the absence of such an intellect: An intelligence which could comprehend all the forces that set nature in motion, and all positions of all items of which nature is composed - an intelligence sufficiently vast to submit these data to analysis - it would embrace in the same formula the movements of the greatest bodies in the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as well as the past, would be present to its eyes. (Laplace 1814). 14 Note that Leibniz' prophet appeals to the logical structure of the universe that makes it deterministic, whereas Laplace's intelligence knows (Newtonian) physics. 15 In any case, it is important to note that Laplacian randomness is defined within a deterministic world, 16 so that its antipode is not indeterminism but full knowledge (and computing power, etc.). Indeed, less well known than the above quotation is the following one, from the very same source: All events, even those which on account of their insignificance do not seem to follow the great laws of nature, are a result of it just as necessarily as the revolutions of the sun. (Laplace 1814). 17
• The ignorance interpretation of randomness and chance still lurks behind the probabilities introduced in the 19th century in statistical mechanics, which in my view were therefore wholeheartedly construed in the medieval and early modern spirit. 18
• Hacking [54] explains how the doctrine of necessity began to erode in the 19th century, largely through the use of statistics in population studies and biology. 19 In this respect, the century to some extent culminated in the following words of Peirce: I believe I have thus subjected to fair examination all the important reasons for adhering to the theory of universal necessity, and shown their nullity. (...) If my argument remains unrefuted, it will be time, I think, to doubt the absolute truth of the principle of universal law. (Peirce [96], p. 337).
11 The undated German original is quoted by Cassirer [26]. 14 The translation is from Laplace [77], p. 4. See van Strien (2014) for history and analysis. 15 However, van Strien (2014) argues that Laplace also falls back on Leibniz (and gets the physics wrong by not mentioning the momenta that the intelligence should know, too, besides the forces and positions).
16 Famously: 'The world W is Laplacian deterministic just in case for any physically possible world W′, if W and W′ agree at any time, then they agree at all times.' (Earman [43], p. 13). 17 Quoted by Hacking [54], p. 11. 18 It is often maintained that these probabilities are objective, which might cast doubt over the idea that they originate in ignorance. I personally find the distinction between "objective" and "subjective" chances quite unhelpful in the context of fundamental physics, a left-over from (by now) irrelevant and outdated attempts to define probabilities "objectively" as relative frequencies (see also Appendix B) as opposed to interpreting them "subjectively" as credences à la Ramsey [99]. Loewer ([80, 81]) claims that the probabilities in statistical mechanics are as objective as those in quantum mechanics (and hence are objective). Indeed, both are predicated on a choice of observables, based on ignoring microscopic and quantum-mechanical degrees of freedom, respectively. Given that choice, the probabilities are objective, but the choice is surely subjective (see also Heisenberg quoted in Sect. 3 below). Instead of a pointless table-tennis game between "objective" and "subjective", a better term would be "perspectival". For example, I agree with Rovelli [100] that time's arrow is perspectival (but neither "objective" nor "subjective"). See also Jaynes [65], pp. 118-120, for similar comments on terminology in the philosophy of probability. 19 Though also inspired by the eventual rise of quantum mechanics, Kern [68] gives a completely different history of causality and uncertainty in the 19th and 20th centuries, tracking the changing explanatory roles of these factors in the study of murder as documented by more than a hundred novels.
This partly paved the way for the claim of irreducible randomness in quantum mechanics, although the influence of population studies and biology on intrinsic developments in physics should perhaps not be overestimated. However, the insight that probability and statistics gave rise to their own laws (as opposed to the fear that randomness is pretty much the same as lawlessness), which partly dated back to the previous two centuries (Hacking [55]), surely made quantum theory possible.
• The randomness of variations in heritable traits that - almost simultaneously with the rise of statistical physics - were introduced in Darwin's theory of evolution by natural selection meant something completely different from Laplace etc., best expressed by the renowned geneticist Theodosius Dobzhansky a century later (cf. Merlin [87]): Mutations are random changes because they occur independently of whether they are beneficial or harmful. (Dobzhansky et al. [36], p. 66).
Indeed, both historically and actually, the antipode to Darwin's randomness of variations is Lamarck's goal-orientedness thereof, 20 intended to strengthen the species (like the proverbial sons of the blacksmith who according to Lamarck inherit his strong muscles). In particular, it does not matter if the variations are of known or unknown origin, or fall under a deterministic or an indeterministic kind of physics.
• Continuing our detour into biology, the well-known Hardy-Weinberg equilibrium law in population genetics, which gives the relative frequencies of alleles and genotypes in a given (infinite) population, is based on the assumption of random mating. This means that mating takes place between pairs of individuals who have not selected each other on the basis of their genetic traits (i.e. there is no sexual selection).
• Eagle ([41], pp. 775-776) proposes that 'randomness is maximal unpredictability' (which agrees with criterion 3 at the end of this section), and argues that this is equivalent to a random event being 'probabilistically independent of the current and past states of the system, given the probabilities supported by the theory.'
• Most people, especially those without scientific training, 21 associate randomness with coincidence, where a coincidence may be defined, following Diaconis and Mosteller [34], as a surprising concurrence of events, perceived as meaningfully related, with no apparent causal connection.

The subjective nature of this definition (i.e. 'surprising', 'perceived', 'meaningful', and 'apparent') may repel the scientist, but ironically, the aim of this definition is to debunk coincidences. Moreover, there is a striking similarity between analyzing coincidences in daily life and coincidences in the physical setting of bipartite correlation experiments à la EPR-Bohm-Bell: to this end, let me briefly recall how most if not all everyday coincidences can be nullified on the basis of the above definition: 22
1. Against first appearances there was a causal connection, either through a common cause or through direct causation. This often works in daily life, and also in Bohmian mechanics, where direct (superluminal) causation is taken to be the "explanation" of the correlations in the experiments just referred to. However, if superluminal causation is banned, then one's hand is empty because one interpretation of Bell's Theorem (cf. Appendix C) excludes common causes (van Fraassen [121]), and hence both kinds of causation are out! See also Sect. 3.

2. The concurrence of events was not at all as surprising as initially thought.
This argument is either based on the inability of most people to estimate probabilities correctly (as in the well-known birthday problem), or, if the events were really unlikely, on what Diaconis and Mosteller call the law of truly large numbers: 23 With a large enough sample, any outrageous thing is likely to happen. (Diaconis and Mosteller [34], p. 859).
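The birthday problem mentioned above can be made quantitative. The following sketch (my own illustration, using only the Python standard library) computes the exact probability that among n people at least two share a birthday, which shows how badly intuition tends to underestimate such probabilities:

```python
from fractions import Fraction

def birthday_collision_probability(n: int, days: int = 365) -> float:
    """Probability that at least two of n people share a birthday,
    assuming birthdays are uniform and independent over `days` days."""
    if n > days:
        return 1.0  # pigeonhole: a collision is certain
    p_distinct = Fraction(1)
    for k in range(n):
        # The k-th person must avoid the k birthdays already taken.
        p_distinct *= Fraction(days - k, days)
    return float(1 - p_distinct)

# With only 23 people the collision probability already exceeds 1/2,
# which most people find surprisingly high.
print(birthday_collision_probability(23))
```

Exact rational arithmetic (via `Fraction`) avoids any worry that the counterintuitive answer is a rounding artifact.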
Neither of these helps in EPR-Bohm-Bell, though, leaving one's hand truly empty.
• The choice sequences introduced by Brouwer in his intuitionistic construction of the real numbers, and then especially the "lawless" ones (which Brouwer never defined precisely), are often associated with notions of "freedom" and "randomness": A choice sequence in Brouwer's sense is a sequence of natural numbers (to keep it simple), which is not a priori given by a law or recipe for the elements, but which is created step by step (...); the process will go on indefinitely, but it is never finished. (...) Informally, we think of a lawless sequence of natural numbers as a process of choosing values in ℕ (...) under the a priori restriction that at any stage of the construction never more than an initial segment has been determined and that no restrictions have been imposed on future choices, [and that] there is a commitment to determine more and more values (so the sequence is infinite). (...) A lawless sequence may be compared to the sequence of casts of a die. There too, at any given stage in the generation of a sequence never more than an initial segment is known. (Troelstra [113], italics added). 24 Indeed, von Mises [128] mentioned choice sequences as an inspiration for his idea of a Kollektiv (van Lambalgen [123]), which in turn paved the way for the theory of algorithmic randomness to be discussed as the next and final example of this section. Nonetheless, a more precise analysis (Moschovakis [88, 89]) concludes as follows: Lawless and random are orthogonal concepts. A random sequence of natural numbers should possess certain definable regularity properties (e.g. the percentage of even numbers in its nth segment should approach 50 as n increases), 25 while a lawless sequence should possess none. Any regularity property definable in L by a restricted formula can be defeated by a suitable lawlike predictor. (Moschovakis [89], pp. 105-106, italics in original, footnote added).
A further conceptual mismatch between lawless choice sequences and random sequences of the kind studied in probability theory is that any randomness of the former seems to lie on the side of what is called process randomness, whereas the latter is concerned with product randomness. 26 In a lawless choice sequence it is its creation process that is lawless, whereas no finished sequence (i.e. the outcome) can be totally lawless by Baudet's Conjecture, proved by van der Waerden [120], which implies that every binary sequence x satisfies certain arithmetic laws.
• Finally, serving our aim to compare physical and mathematical notions of randomness, I preview the three equivalent definitions of 1-randomness (see Appendix B and references therein for details) and confirm that they too fit into our general picture of randomness being defined by some antipode. What will be remarkable is that the three apparently different notions of randomness to be discussed now, which at first sight are as good a family resemblance as any, actually turn out to coincide. The objects whose randomness is going to be discussed are binary strings, and our discussion here is so superficial that I will not even distinguish between finite and infinite ones; see Appendix B for the kind of precision that does enable one to do so.
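One of the equivalent characterizations of 1-randomness is incompressibility (cf. Appendix B). Although 1-randomness itself is uncomputable, the idea can be illustrated at finite size; the sketch below (my own illustration, using Python's zlib as a crude stand-in for a universal compressor) shows that a lawlike, periodic string compresses drastically, while a string drawn from an operating-system entropy source does not:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a rough upper bound on
    Kolmogorov complexity, up to compressor-dependent constants."""
    return len(zlib.compress(data, level=9))

n = 10_000
lawlike = b"01" * (n // 2)   # highly regular: a tiny "law" generates it
random_like = os.urandom(n)  # OS entropy: expected to be incompressible

# The periodic string compresses to a tiny fraction of its length,
# while the entropy-source string stays close to its original size.
print(compressed_size(lawlike), compressed_size(random_like))
```

This is only a heuristic: a real compressor detects some regularities and misses others, whereas 1-randomness quantifies over all effective regularities at once.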

Randomness in Quantum Mechanics
Moving towards the main goal of the paper, I now continue our list of examples (i.e. of the principle that randomness is a family resemblance whose different meanings are always defined negatively through their antipodes) in the context of quantum mechanics, which is rich enough by itself to provide its own family of different meanings of randomness (all duly defined negatively), although these may eventually be traceable to the above cases. 27 Already the very first (scholarly) exposition of the issue of randomness in quantum mechanics by Born made many of the major points that are still relevant today: Thus Schrödinger's quantum mechanics gives a very definite answer to the question of the outcome of a collision; however, this does not involve any causal relationship. One obtains no answer to the question "what is the state after the collision," but only to the question "how probable is a specific outcome of the collision" (in which the quantum-mechanical law of [conservation of] energy must of course be satisfied). This raises the entire problem of determinism. From the standpoint of our quantum mechanics, there is no quantity that could causally establish the outcome of a collision in each individual case; however, so far we are not aware of any experimental clue to the effect that there are internal properties of atoms that enforce some particular outcome. Should we hope to discover such properties that determine individual outcomes later (perhaps phases of the internal atomic motions)? Or should we believe that the agreement between theory and experiment concerning our inability to give conditions for a causal course of events is some pre-established harmony that is based on the non-existence of such conditions? I myself tend to relinquish determinism in the atomic world. But this is a philosophical question, for which physical arguments alone are not decisive. (Born [16], p. 866). 
Given the fact that Born was the first to discuss such things in the open literature, it is remarkable how perceptive his words are: he marks the opposition of randomness to determinism, recognizes the possibility of hidden variables (with negative advice though), and understands that the issue is not just a technical one. Bravo! Having said this, in line with the previous section our aim is, of course, to confront the antipode of determinism with other possible antipodes to randomness as it is featured by quantum mechanics.
The introduction of fundamental probabilities in quantum theory is delicate in many ways, among which is the fact that the Schrödinger equation is even more deterministic than Newton's laws. 29 Hence what is meant is randomness of measurement outcomes; since it is not our aim (here) to solve the measurement problem-for which see Landsman [75], Chapter 11-I simply assume that (i) measurement is a well-defined laboratory practice, and (ii) measurements have outcomes. In all that follows, I also accept the statistical predictions of quantum mechanics for these outcomes (which are based on the Born rule reviewed in Appendix A). Even so, the claim of irreducibility of randomness, which is typical of all versions of the Copenhagen Interpretation (and of any mainstream view held by physicists), is almost incomprehensible, since one of the pillars of this interpretation is Bohr's doctrine of classical concepts, according to which the apparatus must be described classically; randomness of measurement outcomes is then seen as a consequence of the very definition of a measurement. But this suggests that randomness should be reducible to ignorance about the quantum-mechanical degrees of freedom of the apparatus: these uncertainties (...) are simply a consequence of the fact that we describe the experiment in terms of classical physics. (Heisenberg [58], p. 53).
27 It will be clear from the way I discuss quantum mechanics that this paper will just be concerned with process randomness, as opposed to product randomness. There is certainly a distinction between the two, but the former drags us into the quagmire of the measurement problem (cf. Landsman [75], Chapter 11). 28 Translation by the author. The reference is to the German original.
Ironically, Bell's Theorem(s), which arose in opposition to the Copenhagen Interpretation (Cushing [32]), not only proved Einstein (as the leading opponent of this interpretation) wrong on the issue that arguably mattered most to him (namely locality in the sense later defined in a precise way by Bell), but also proved Bohr and Heisenberg right on the irreducibility of randomness, at least if we grant them randomness per se. Indeed, suppose we equate reducibility of randomness with the existence of a "Laplacian" deterministic hidden variable theory (i.e. use the antipode of determinism), and assume, as the Copenhagenists would be pleased to, the conjunction of the following properties:
1. The Born rule and the ensuing statistical predictions of quantum mechanics;
2. (Hidden) Locality, i.e. the impossibility of active superluminal communication or causation if one knows the state of the underlying deterministic theory; 30
3. Freedom (or free choice), that is, the independence of the choice of measurement settings from the state of the system one measures using these settings, in a broad sense of 'state' that includes the prepared state as well as the "hidden" state. 31
Bell's [10] Theorem then implies (robustly) that a deterministic hidden variable theory satisfying these assumptions cannot exist, as does the so-called Free Will Theorem (which relies on a non-probabilistic but non-robust special case of the Born rule implying perfect correlation, and also on the Kochen-Specker Theorem, which restricts its validity to quantum systems with three levels or more). 32 Thus the Laplacian interpretation of randomness does not apply to quantum mechanics (granting assumptions 1-3), which warrants the Copenhagen claim of irreducible or non-Laplacian or Leibnizian randomness. Viable deterministic hidden variable theories compatible with the Born rule therefore have to choose between giving up either Hidden Locality or Freedom (see also the conclusion of Appendix C). Given this choice, we may distinguish between theories that:
• give up Hidden Locality, like Bohmian mechanics;
• give up Freedom, like the cellular automata interpretation of 't Hooft [64].
29 In the sense that the solution is not only uniquely determined by the initial state, but, by Stone's Theorem, even exists for all t ∈ ℝ; in other words, incomplete motion is impossible, cf. Earman [45]. 30 This is often called Parameter Independence, as in e.g. the standard textbook by Bub [21]. 31 This assumption even makes sense in a super-deterministic theory, where the settings are not free. 32 See Cator and Landsman [27], Landsman [75], Chapter 6, or Appendix C below for a unified view of both Bell's Theorem (from 1964) and the Free Will Theorem (named as such by Conway and Kochen).
In both cases the statistical predictions of quantum mechanics are recovered by averaging the hidden variable or state with respect to a probability measure μ_ψ on the space of hidden variables, given some (pure) quantum state ψ. The difference is that in Bohmian mechanics the total state (which consists of the hidden configuration q plus the "pilot wave" ψ) determines the measurement outcomes given the settings, whereas in 't Hooft's theory the hidden state (see below) all by itself determines the outcomes as well as the settings.
• In Bohmian mechanics the hidden variable is position q, and dμ(q) = |ψ(q)|² dq is the Born probability for outcome q with respect to the expansion |ψ⟩ = ∫ dq ψ(q)|q⟩. 33
• In 't Hooft's theory the hidden state is identified with a basis vector |n⟩ in some Hilbert space H (n ∈ ℕ), and once again the measure μ(n) = |c_n|² is given by the Born probability for outcome n with respect to the expansion |ψ⟩ = ∑_n c_n |n⟩.
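The discrete Born measure μ(n) = |c_n|² can be illustrated at finite dimension. The sketch below is my own (the amplitudes are arbitrary, chosen only for the example); it maps expansion coefficients c_n to Born probabilities and simulates repeated measurements in the |n⟩ basis:

```python
import random

def born_probabilities(amplitudes):
    """Map expansion coefficients c_n to Born probabilities |c_n|^2,
    normalizing in case the state vector was not normalized."""
    weights = [abs(c) ** 2 for c in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

def sample_outcomes(amplitudes, shots, rng=None):
    """Simulate `shots` independent measurements in the |n> basis."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    probs = born_probabilities(amplitudes)
    return rng.choices(range(len(probs)), weights=probs, k=shots)

# Hypothetical qutrit state with amplitudes (1/2, 1/2, 1/sqrt(2)):
amps = [0.5, 0.5, 2 ** -0.5]
print(born_probabilities(amps))  # approximately [0.25, 0.25, 0.5]
outcomes = sample_outcomes(amps, shots=10_000)
print(outcomes.count(2) / 10_000)  # close to 0.5
```

Note that sampling presupposes a classical random number generator, so such a simulation reproduces the Born statistics without settling anything about their physical origin.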
In Bohmian mechanics ('t Hooft does not need this!) such averaging also restores Surface Locality, i.e., the impossibility of superluminal communication on the basis of actual measurement outcomes (which is a theorem of quantum theory, though a much more delicate one than is usually thought, as I will argue in Sect. 5); see also Valentini [117, 118].
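Surface Locality can be checked numerically in the EPR-Bohm setting. The sketch below is my own construction (the angles are arbitrary); it uses the standard singlet-state joint probabilities P(x, y | a, b) = ¼(1 − xy cos(θ_a − θ_b)) for outcomes x, y ∈ {−1, +1} and verifies that Alice's marginal statistics are independent of Bob's distant setting:

```python
from math import cos, pi

def singlet_joint(x, y, angle_a, angle_b):
    """Quantum probability of outcomes x, y in {-1, +1} for spin
    measurements along directions angle_a, angle_b on the singlet state."""
    return 0.25 * (1 - x * y * cos(angle_a - angle_b))

def alice_marginal(x, angle_a, angle_b):
    """Probability of Alice's outcome x, summed over Bob's outcomes."""
    return sum(singlet_joint(x, y, angle_a, angle_b) for y in (-1, 1))

# Alice's statistics do not depend on Bob's (distant) setting:
for angle_b in (0.0, pi / 3, pi / 2):
    print(alice_marginal(+1, 0.0, angle_b))  # always 1/2, up to rounding
```

Algebraically the cosine terms cancel in the sum over y, so the marginal is exactly ½ whatever Bob does; this cancellation at the level of averaged statistics is precisely what Surface Locality asserts, and it says nothing about signaling at the level of individual hidden states.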

Probabilistic Preliminaries
O False and treacherous Probability,
Enemy of truth, and friend to wickednesse;
With whose bleare eyes Opinion learnes to see,
Truth's feeble party here, and barrennesse.
When thou hast thus misled Humanity,
And lost obedience in the pride of wit,
With reason dar'st thou judge the Deity,
And in thy flesh make bold to fashion it.
Vaine thoght, the word of Power a riddle is,
And till the vayles be rent, the flesh newborne,
Reveales no wonders of that inward blisse,
Which but where faith is, every where findes scorne;
Who therfore censures God with fleshly sp'rit,
As well in time may wrap up infinite
Philip Sidney (1554-1586), Coelica, Sonnet CIV. 34
My aim is to give a critical assessment of the situation described in the previous section. My analysis is based on the interplay between the single-case probability measure μ on an outcome space X, which for the purpose of this paper will be the Born measure μ = μ_a on the spectrum X = σ(a) of some self-adjoint operator a, and hence is provided by theory (see also Appendix A), and the probabilities defined as long-run frequencies for outcome sequences x = (x_1, x_2, …) of the Bernoulli process defined by (X, μ), which are given by experiment. To obtain clean mathematical results, I assume experiments can be repeated infinitely often. This is clearly an idealization, which is regulated by Earman's Principle: While idealizations are useful and, perhaps, even essential to progress in physics, a sound principle of interpretation would seem to be that no effect can be counted as a genuine physical effect if it disappears when the idealizations are removed. (Earman [44], p. 191).
33 Bohmians call this choice of μ the quantum equilibrium condition. It was first written down by Pauli.
As shown in Appendix B, finite-size effects amply confirm the picture below, comparable to the way that the law(s) of large numbers have finite-size approximants (such as the Chernoff-Hoeffding bound). In particular, zero/unit probability of an infinite sequence comes down to very low/high probability for the corresponding finite strings. 35 Consequently, Earman's Principle (in contrapositive form) holds if we use the canonical probability measure μ^∞ on the infinite product space X^ℕ of all infinite sequences x = (x_1, x_2, …), where x_n ∈ X, and X^ℕ is canonically equipped with the cylindrical σ-algebra S ⊂ P(X^ℕ). 36 To see how one may recover or verify μ from the long-run frequencies governed by the product measure μ^∞, for any function f : X → ℝ define f^(N) : X^ℕ → ℝ by

f^(N)(x) = (1/N) ∑_{n=1}^N f(x_n).

34 The first four lines of this poem are printed on the last page of Keynes [69], without any source. 35 One has to distinguish between outcomes with probability zero and properties that hold with probability zero. Indeed, every single outcome (in the sense of an infinite bitstream produced by a fair quantum coin flip) has probability zero, and hence this property alone cannot distinguish between random outcomes (in whatever sense, e.g. 1-random) and non-random outcomes (such as deterministic outcomes, or, more appropriately in our technical setting, computable ones), or indeed between any kind of different outcomes. On the other hand, not being random is a property that holds with probability zero, in that the set of all outcomes that are not random has probability zero (equivalently, the event consisting of all outcomes that are random happens almost surely, i.e. with probability 1). And yet outcomes with this probability zero property exist and may occur (in the finite case, with very small but positive probability). See also Sect. 5. 36 Here P(Y) denotes the power set of Y.
The σ-algebra S is generated by all subsets of X^ℕ of the form

B = {x ∈ X^ℕ | x_1 ∈ A_1, …, x_N ∈ A_N}, N ∈ ℕ, A_n ⊆ X.

The probability measure μ on X then defines a probability measure μ^∞ on S, whose value on B is defined by μ^∞(B) = ∏_{n=1}^N μ(A_n). See e.g. Dudley [39], especially Theorem 8.2.2, for this and related constructions (due to Kolmogorov).
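The Chernoff-Hoeffding bound just mentioned can be illustrated numerically. The following sketch is my own illustration (all names are hypothetical, not from the text): it compares the two-sided Hoeffding bound P(|S_N/N − 1/2| ≥ ε) ≤ 2 exp(−2Nε²) for a fair coin with a Monte Carlo estimate of the same deviation probability.

```python
import math
import random

def hoeffding_bound(n, eps):
    """Two-sided Hoeffding bound for a fair coin:
    P(|S_n/n - 1/2| >= eps) <= 2 exp(-2 n eps^2)."""
    return 2.0 * math.exp(-2.0 * n * eps * eps)

def empirical_tail(n, eps, trials, seed=0):
    """Monte Carlo estimate of the same deviation probability for n fair flips."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        heads = sum(rng.randint(0, 1) for _ in range(n))
        if abs(heads / n - 0.5) >= eps:
            bad += 1
    return bad / trials

# For n = 500 and eps = 0.05 the bound is roughly 0.57, while the actual
# deviation probability is on the order of a few percent: loose but valid.
```

This makes concrete how "zero probability for infinite sequences" becomes "exponentially small probability for finite strings".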

Then (by the ergodic theorem), for any f ∈ C(X), almost surely with respect to μ^∞,

lim_{N→∞} f^(N) = μ(f) · 1_{X^ℕ}, (4.2)

where 1_{X^ℕ} is the unit function on X^ℕ, so that the limit is the constant μ(f) times the unit function 1_{X^ℕ} on X^ℕ. This is for continuous functions f, but a limit argument extends it to characteristic functions 1_A : X → 2 ≡ {0, 1}, where A ⊂ X, so that

lim_{N→∞} (1/N) ∑_{n=1}^N 1_A(x_n) = μ(A), (4.3)

again μ^∞-almost surely. If we define the probability of A within some infinite sequence (x_1, x_2, …) as the relative frequency of A (i.e. the limit as N → ∞ of the number of x_n within (x_1, …, x_N) that lie in A, divided by N), then (4.3) states that for almost all sequences in X^ℕ with respect to the infinite product measure μ^∞ this probability of A equals its Born probability μ(A). This is useful, since the latter is a purely mathematical quantity, whereas the former is experimentally accessible (at least for large N).
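The finite-N content of (4.3) can be checked in a few lines: sampling a finite outcome space X i.i.d. from a measure μ, the relative frequency of an event A approaches μ(A). A minimal sketch (illustrative only; the biased measure below is a stand-in for a Born measure):

```python
import random

def relative_frequency(outcomes, event):
    """Relative frequency of the event A (a set of outcomes) in a finite
    sample, i.e. #{n <= N : x_n in A} / N, the finite-N version of (4.3)."""
    return sum(1 for x in outcomes if x in event) / len(outcomes)

def sample_bernoulli(mu, n, seed=0):
    """Draw n i.i.d. outcomes from a probability measure mu on a finite
    space, given as a dict {outcome: probability}."""
    rng = random.Random(seed)
    xs, ps = list(mu), [mu[x] for x in mu]
    return rng.choices(xs, weights=ps, k=n)

mu = {0: 0.25, 1: 0.75}   # a biased "Born measure" on X = {0, 1}
xs = sample_bernoulli(mu, 100_000)
freq = relative_frequency(xs, {1})
# freq should be close to mu({1}) = 0.75 for large N
```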
For simplicity (but without real loss of generality, cf. footnote 45), in what follows I specialize this setting to a (theoretically) fair coin flip, that is, X = 2 = {0, 1} and μ(0) = μ(1) = 1/2. Hence 2^ℕ is the space of infinite binary sequences, equipped with the probability measure μ^∞ induced by μ (as I shall argue in Sect. 5 below, despite tradition this situation cannot in fact arise classically, at least not in a deterministic theory). We have:

Theorem 4.1 With respect to μ^∞, almost every sequence x ∈ 2^ℕ is 1-random.

See e.g. Calude [25], Corollary 6.32. Thus the set E of all sequences that are not 1-random has probability zero, i.e. μ^∞(E) = 0, but this by no means implies that E is "small" in any other sense: set-theoretically, it is as large as its complement, i.e. the set of all 1-random sequences, which makes it bizarre that (barring a few exceptions related to Chaitin's number Ω) not a single one of these 1-random sequences can actually be proved to be 1-random, cf. Appendix B. Theorem 4.1 has further amazing consequences:

Corollary 4.2 With respect to μ^∞, almost every infinite outcome sequence x of a fair coin flip is Borel normal, 37 incomputable, 38 and contains any finite string infinitely often. 39

This follows because any 1-random sequence has these properties with certainty; see Calude [25], Sect. 6.4. The relevance of Bernoulli processes for quantum theory comes from Theorem 5.1 in the next section, whose second option almost by definition yields these processes.
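Compressibility, the antipode of 1-randomness invoked above, has a crude finite-size proxy: a regular string compresses well under a universal compressor, while a (pseudo-)random one does not. The sketch below uses zlib as such a proxy (this is only heuristic: Kolmogorov complexity is incomputable, and a pseudo-random string is of course still computable, hence not 1-random).

```python
import zlib
import random

def compressed_size(bits):
    """Length in bytes of the zlib-compressed bitstring: a crude, computable
    upper-bound proxy for Kolmogorov complexity (itself incomputable)."""
    return len(zlib.compress(bits.encode(), 9))

periodic = "01" * 5000   # highly regular: an antipode of randomness
rng = random.Random(42)
pseudo = "".join(rng.choice("01") for _ in range(10_000))  # pseudo-random

# The regular string compresses to a few dozen bytes; the pseudo-random
# one stays near its information content of ~1 bit per symbol.
```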

Critical Analysis and Claims
The relevance of the material in the previous section comes from the following result (Theorem 5.1). The Born rule therefore implies very strong randomness properties of outcome sequences, albeit merely with probability one (i.e. almost surely) with respect to μ_a^∞. Moreover, Chaitin's Incompleteness Theorem (see Appendix B) makes it impossible to prove that some given outcome sequence is 1-random, even if it is! 40 Also in the spirit of the general 'family resemblance' philosophy of randomness, this puzzling situation makes it natural to compare randomness of infinite measurement outcome sequences as defined by: 41

1. 1-randomness (with compressible sequences as its antipode), as suggested by the Born rule and the above analysis of its mathematical implications; 42
2. indeterminism, as suggested by Born himself, and in his wake also by Bell's Theorem and the Free Will Theorem (seen as proofs of indeterminism under assumptions 1-3).
To make this comparison precise, we once again need the case distinction between hidden variable theories giving up Hidden Locality, like Bohmian mechanics, and those giving up Freedom, like 't Hooft's theory. See Appendix C for notation. In the usual EPR-Bohm-Bell setting (Bub [21]), let (α, β) ∈ O × O be Alice's and Bob's outcomes for given settings (a, b) ∈ S × S, so that (α, a) are Alice's and (β, b) are Bob's (outcomes, settings).
• In Bohmian mechanics, the hidden state q ∈ Q just pertains to the correlated particles undergoing measurement, whilst the settings (a, b) are supposed to be "freely chosen" for each measurement (and in particular are independent of q). 43 The outcome is then given by (α, β) = q(a, b). So if we number the consecutive runs of the experiment by n ∈ ℕ = {1, 2, …}, then everything is determined by functions

f_1 : ℕ → Q, n ↦ q_n, (5.1)
f_2 : ℕ → S × S, n ↦ (a_n, b_n), (5.2)

since these also give the outcome sequence by (α_n, β_n) = q_n(a_n, b_n) = f_1(n)(f_2(n)).
• In 't Hooft's theory, the hidden state x ∈ X of "the world" determines the settings as well as the outcomes, i.e. (a, b, α, β) = (a(x), b(x), α(x), β(x)). In this case, the

41 In the literature on quantum cryptography and quantum random number generators (QRNG) one also finds notions of ("free") randomness for single qubits; see e.g. the reviews Brunner et al. (2019), Herrero-Collantes and Garcia-Escartin [61], Ma et al. [84], and Pironio [97]. The two (closely related) definitions used there fall into our general scheme of antipodality, namely randomness as unpredictability, defined as minimizing the probability that an eavesdropper correctly guesses the qubit, and randomness as the absence of correlations between the qubit in question and anything outside its forward lightcone (which is heavily Leibnizian). These are related, since guessing is done through such correlations of the qubit. Certification of both cryptographic protocols and QRNGs is defined in these terms, backed by proofs of indeterminism à la Bell, so that in all versions non-locality plays a crucial role. See also footnote 50. 42 I am by no means the first to relate quantum theory to algorithmic randomness: see, for example, Bendersky et al. [12-14], Calude [24], Svozil [109, 110], Senno [105], Yurtsever [134], and Zurek [135]. The way my work relates to some of these references will become clear in due course. 43 This is true if one knows the hidden positions exactly at the time of measurement.
At earlier times, the pilot wave (= quantum state) ψ is needed to make predictions, since ψ determines the trajectories. See e.g. Barrett [7], Chapter 5, for a discussion of measurement in Bohmian mechanics, including EPR-Bohm.

entire situation including the outcome sequence is therefore determined by a function g : ℕ → X, n ↦ x_n. A key point in the analysis of the functions f_i and g is the requirement that both theories reproduce the statistical predictions of quantum mechanics given through the Born rule relative to some pure state ψ. As already noted, this is achieved by requiring that q is averaged with respect to some probability measure μ on Q, and likewise x is averaged with respect to some probability measure μ′ on X. If the experimental run is to respect this averaging, then in Bohmian mechanics the map f_1 must be "typical" for the Born-like measure μ (cf. Dürr, Goldstein, and Zanghì [40]; Callender [23]; Norsen [93]; Valentini [119]); 44 see below for the special problems posed also by f_2. Analogously to the previous discussion of the Born measure itself and its sampling, any sampling of μ produces a sequence (q_1, q_2, …) that almost surely should have typical properties with respect to μ^∞. In particular, such a sequence should typically be 1-random (in a suitable biased sense fixed by μ). 45 Anything remotely deterministic, like computable samplings, will only contribute sequences that are atypical (i.e. collectively have measure zero) for μ^∞. 46 Likewise for the sampling of X with respect to μ′ in 't Hooft's theory. Thus the requirement that the functions f_1 and g randomly sample μ and μ′ introduces an element of unavoidable randomness into the hidden variable theories in question, which seems wildly at odds with their deterministic character. Indeed, I only see two possibilities:
• This sampling is provided by the hidden variable theory. In that case, the above argument shows that the theory must contain an irreducibly random ingredient.
• The sampling is not provided by the theory. In that case, the theory fails to determine the outcome of any specific experiment and just provides averages of outcomes.
Either way, although at first sight our hidden variable theories are (Laplacian) deterministic (as is quantum mechanics, see footnote 29), in their accounting for measurement outcomes they are not (again, like quantum theory). What is the source of this indeterminism? 44 Appendix B contains a precise definition of typicality; see in particular point 3 below eq. (B.3). 45 We only defined 1-randomness for outcome sequences of fair coin flips, but the definition can be extended to other measures; see Downey and Hirschfeldt [37], Sect. 6.12 and references therein. 46 In this light I draw attention to an important result of Senno [105], Theorem 3.2.7 (see also Bendersky et al. [14]), which is entirely consistent with the above analysis: if the functions f_1 and g are computable (within a computable time bound), then Alice and Bob can signal superluminally. In other words, whereas mathematically speaking averaging the hidden state over the probability measure μ suffices to guarantee Surface Locality even in a theory without Hidden Locality (Valentini, 2002), if this averaging is done by sampling Q in a long run of repeated measurements, then this sampling must at least be incomputable.
• In standard (Copenhagen) quantum mechanics this source lies in the outcomes of experiments given the quantum state, whose associated Born measure is sampled;
• In "deterministic" hidden variable theories it is the assignment of the hidden variable to each measurement no. n, i.e. the sampling of the Born-like measures μ and μ′.
So at best the source of indeterminism has been shifted. Moreover, in Bohmian mechanics and 't Hooft's theory μ and μ′ equal the Born measure, so one wonders what has been gained over Copenhagen quantum mechanics. Therefore, one has to conclude that:

Truly deterministic hidden variable theories (i.e. those in which well-defined experiments have determined outcomes given the initial state, and no appeal has to be made to irreducibly random samplings from this state) compatible with the Born rule do not exist.
In other words, as long as they reproduce all statistical predictions of quantum mechanics, deterministic theories underneath quantum mechanics still need a source of irreducible randomness in each and every experiment. In my view, this defeats their purpose. 47 In classical coin tossing the role of the hidden state is played by the initial conditions (cf. Diaconis and Skyrms [35], Chapter 1, Appendix 2). The 50-50 chances (allegedly) making the coin fair are obtained by averaging over the initial conditions, i.e., by sampling. By the above argument, this sampling cannot be deterministic, for otherwise the outcome sequences appropriate to a fair coin do not obtain: it must be done in a genuinely random way. This is impossible classically, so that fair classical coins do not exist, as confirmed by the experiments of Diaconis et al. reviewed in Diaconis and Skyrms [35], Chapter 1.
In response to this argument, both the Bohmians and 't Hooft go for the second option and blame the randomness in question on the initial conditions, whose specification is indeed usually seen as lying outside the range of a deterministic theory. 48 As explained by both parties (Dürr et al. [40]; 't Hooft [64]), the randomness in the outcomes of measurements on quantum systems, including the Born rule, is a direct consequence of the above randomness in the initial conditions. But in a Laplacian deterministic theory one can either predict or retrodict, and these procedures should be equivalent; so within the context of a deterministic hidden variable theory of the kinds under discussion, Copenhagenists attributing the origin of randomness to the outcomes of measurement and our hidden variable theorists attributing it to the initial conditions for measurement should be equivalent. Once again, this makes it 47 A related point about the double role of ψ in the pilot wave theory of De Broglie was made by Pauli ([95], p. 38): 'The hypothesis of a general probability distribution for the hidden variables that is determined by the single [wave] function is not justified from the point of view of a deterministic scheme: it is borrowed from a theory which is based on the totally different hypothesis that the [wave] function provides a complete description of the system.' I am indebted to Guido Bacciagaluppi for this information. 48 The Bohmians are divided on the origin of the quantum equilibrium distribution, cf. Dürr et al. [40], Callender [23], Norsen [93], and Valentini [119]. The origin of μ is not my concern here; the problem I address is the need to randomly sample it and the justification for doing so.
impossible to regard the hidden variable theories in question as deterministic underpinnings of (Copenhagen) quantum mechanics.
Bohmians (but not 't Hooft!) have another problem, namely the function (5.2) that provides the settings. Although f_2 is outside their theory, Bohmians should either account for both the "freedom" of choosing these settings and their randomness, or stop citing Bell's Theorem (whose proof relies on averaging over random settings) in their favour.
Bell [11] tried to kill these two birds with the same stone by saying that the settings had to be 'at least effectively free for the purpose at hand', and clarifying this as follows:

Suppose that the instruments are set at the whim, not of experimental physicists, but of mechanical random generators. (…) Could the input of such mechanical devices be reasonably regarded as sufficiently free for the purpose at hand? I think so. Consider the extreme case of a "random" generator which is in fact perfectly deterministic in nature and, for simplicity, perfectly isolated. In such a device the complete final state perfectly determines the complete initial state - nothing is forgotten. And yet for many purposes, such a device is precisely a "forgetting machine". (…) To illustrate the point, suppose that the choice between two possible [settings], corresponding to a and a′, depended on the oddness or evenness of the digit in the millionth decimal place of some input variable. Then fixing a or a′ indeed fixes something about the input - i.e., whether the millionth digit is odd or even. But this peculiar piece of information is unlikely to be the vital piece for any distinctly different purpose, i.e., it is otherwise rather useless. (…) In this sense the output of such a device is indeed a sufficiently free variable for the purpose at hand. (Bell [11], p. 105).
This seems to mix up the two issues. Though independence of the settings is defensible in a theory like Bohmian mechanics (Esfeld [47]), concerning their randomness Bell apparently ignored von Neumann's warning against mechanical (i.e. pseudo) random generators: 'Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin' (von Neumann [129]). See also Markowsky [85]. Thus Bell's statement was questionable already at the time of writing, but today we know for sure that mechanical random generators leave a loophole in the EPR-Bohm-Bell experiment: as soon as just one of the functions defining the settings (i.e. either Alice's or Bob's) is computable, 49 there is a model that is local (in the sense of Bell). See Bendersky et al. [13] or Senno [105], Theorem 2.2.1. This implies that Bohmian mechanics (as well as any other deterministic hidden variable theory that leaves the settings free) requires even more randomness than the sampling of the (Born) probability measure μ, which further undermines the claim that it is a deterministic theory. 50

The analysis given so far focused on the necessity of correctly sampling a probability measure μ: if (so far in the context of hidden variable theories, where μ is the Born-like measure) this is not done correctly, quantum-mechanical predictions such as Surface Locality may be threatened. But in general there are measurement outcome sequences that fail to sample μ: atypical events with very low or even zero probability can and do occur. This is even the whole point of the "law of truly large numbers" quoted in Sect. 2! In general, the Hidden Locality (or no-signaling) property of quantum mechanics states that the probability

P(α | a, b) := ∑_β P(α, β | a, b) (5.4)

is independent of b, where P(α, β | a, b) is the Born probability that measurement of the observables determined by the settings a and b gives outcomes α and β. Indeed, P(α | a, b) can be computed from the restricted state ω|A alone, where ω|A is the restriction of the state ω on B(H_A ⊗ H_B) to Alice's part B(H_A).
Similarly, the Born probability P(β | a, b) := ∑_α P(α, β | a, b) should be independent of a, and in fact can be computed from Bob's restricted state ω|B alone. However, I have repeatedly noted that the empirical probability extracted from a long measurement outcome sequence coincides with the corresponding Born probability only almost surely with respect to the induced probability measure on the space of outcome sequences, and hence outliers violating the property (5.4) exist (for finite sequences even with positive probability). If one such run is found, the door to superluminal signaling is open, at least in principle. To see this, recall that the crudest form of determinism is what is called Predictability by Cavalcanti and Wiseman [28], i.e. the property that

P(α, β | a, b) ∈ {0, 1}. (5.6)

It is easy to show that the conjunction of Predictability and Surface Locality implies factorization and hence, for random settings, the Bell (CHSH) inequalities, and therefore for suitable states this conjunction contradicts the statistical predictions of quantum mechanics as expressed by the Born rule. Accepting the latter, Surface Locality therefore implies unpredictability and hence some (very weak) form of randomness. There are many other results in this direction, ranging from e.g. Barrett, 50 In order not to be in a state of sin, even Copenhagen quantum mechanics has yet to prove that it is capable of building "true" Random Number Generators. See e.g. Abbott et al. [1, 3], Acín et al. [4], Ananthaswamy [5], Bendersky et al. [13], Herrero-Collantes and Garcia-Escartin [61], and Kovalsky et al. [72]. See also the end of Appendix B for the (lacking) connection with 1-randomness.

Kent, and Hardy [8] to Wolf (2015), 51 involving varying definitions of randomness, all coming down to the implication

no signaling ⇒ randomness, (5.7)

assuming Freedom and random settings. 52 What I'd like to argue for is the contrapositive

no randomness ⇒ signaling. (5.8)

My argument is only heuristic, since the terms are used differently in (5.7) and (5.8): to prove (5.7) one typically uses Born probabilities and other theoretical entities, whereas for (5.8) I use probabilities obtained from outcome sequences as limiting relative frequencies: (5.8) then comes from low-probability sequences that violate (5.4), which is satisfied with probability one by sufficiently random sequences. Indeed, this difference is the whole point: Surface Locality is a statistical property, like the second law of thermodynamics. 53

51 Note that the aim of Barrett, Kent, and Hardy [8] is to prove security of some quantum key distribution protocol on the basis of Surface Locality even if quantum mechanics turns out to be incorrect, whereas the present paper investigates the role of statistical outliers assuming quantum mechanics is correct. These aims are closely related, of course, since too many (how many?) outliers may make one question the theory. 52 See Hermens [60] for arguments for (5.7) and (5.8) even without assuming Freedom. 53 Similarly, Valentini [117, 118] pointed out that violation of the equilibrium condition in Bohmian mechanics leads to superluminal signaling. Giving an actual signaling protocol for an outcome sequence (in the EPR-Bohm-Bell setting) that violates (5.4) is highly nontrivial. Yurtsever [134] gives a protocol for superluminal signaling if the outcome sequence is not 1-random, but his arguments are tricky and the paper with complete details he announces has never appeared. His approach may be neither viable nor necessary, since such a protocol need only exist for outcome sequences violating (5.4), which is a stronger property than violating 1-randomness - since, contrapositively, 1-randomness implies all averaging properties like (5.4). In fact, further assumptions are necessary in order for Alice to have a chance (sic) to learn Bob's settings b from her part (α_n)_n of the (double) outcome sequence (α_n, β_n)_n; a complete lack of structure would also make any kind of signaling random. I know of only one such protocol (which, however, should suffice as a "proof of concept"): if the unlikely outcome sequence (α_n, β_n)_n comes from computable functions α_n = α(a, b, n) and β_n = β(a, b, n) with O(T²) time bounds, then signaling is possible through a learning algorithm for computable functions; see Senno [105], Theorem 3.2.7 and Bendersky et al. [14].
This is the same result as in footnote 46; by Proposition 3.1.7 in Senno [105], mutatis mutandis it applies both with and without hidden variables. This protocol enables Alice (or Bob) to signal after a finite (but alas unknown and arbitrarily large) number of measurements (this answers a speculation by Carlo Rovelli that finite-size corrections to the lack of randomness might conspire to prevent superluminal signaling).
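The Bell (CHSH) inequality invoked above can be checked against the Born rule numerically. The sketch below (standard textbook settings, not taken from this paper) computes the singlet-state correlations E(a, b) = ⟨ψ| A(a) ⊗ B(b) |ψ⟩ for spin observables in the x-z plane, and the CHSH combination, which reaches the Tsirelson value 2√2 > 2.

```python
import math

def obs(theta):
    """Spin observable cos(theta) Z + sin(theta) X, eigenvalues +/-1."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [s, -c]]

def kron(a, b):
    """Kronecker product of two 2x2 real matrices."""
    return [[a[i // 2][j // 2] * b[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

def correlation(ta, tb):
    """Born expectation <psi| A(ta) x B(tb) |psi> in the singlet state."""
    psi = [0.0, 1.0 / math.sqrt(2), -1.0 / math.sqrt(2), 0.0]
    m = kron(obs(ta), obs(tb))
    mpsi = [sum(m[i][j] * psi[j] for j in range(4)) for i in range(4)]
    return sum(psi[i] * mpsi[i] for i in range(4))

# Standard CHSH settings: |S| = 2*sqrt(2), violating the classical bound 2.
a0, a1, b0, b1 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = (correlation(a0, b0) - correlation(a0, b1)
     + correlation(a1, b0) + correlation(a1, b1))
```

Note that this computes Born probabilities (theoretical entities, as in the proof of (5.7)); the empirical side of the argument instead concerns frequencies in actual outcome sequences.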

Appendix A: The Born Rule Revisited
Every argument on probabilities in quantum mechanics relies on the Born rule, which is so natural that it will probably (!) be with us forever. In order to explain its canonical mathematical origin, I use the C*-algebraic approach to quantum theory, which, because of its ability to simultaneously talk about commutative and non-commutative algebras (and hence about classical and quantum probabilities), is especially useful in this context. 54 Indeed, the Born rule arises by restricting a quantum state on the non-commutative algebra B(H), i.e. the algebra of all bounded operators on some Hilbert space H, to the commutative C*-algebra C*(a) generated by some self-adjoint operator a = a* ∈ B(H) and the unit operator 1_H (so perhaps C*(a, 1_H) would have been better notation). 55 This view has been inspired by the Copenhagen Interpretation, in that the associated probabilities originate in the classical description of a measurement setting, as called for by Bohr. However, precisely because of this, my derivation also seems to undermine the Copenhagen claim of the irreducibility of randomness, which may now clearly be traced back to a voluntary loss of information in passing from an algebra to one of its subalgebras and ignoring the rest (see also Sect. 3). Happily, the Copenhagen Interpretation will be saved from this apparent inconsistency by a mathematical property that has no classical analog.
As I see it, the Born rule is the quantum-mechanical counterpart of the following elementary construction in measure theory. Let X be a compact Hausdorff space, 56 with associated commutative C*-algebra A = C(X) of complex-valued continuous functions on X. Let f ∈ A be real-valued and denote its (automatically closed) range in ℝ by σ(f); the notation is justified by the fact that the spectrum of a self-adjoint operator in quantum theory is analogous to σ(f) in every possible way. Let ω be a state on A, or, equivalently (by the Riesz representation theorem), a probability measure μ on X, related to ω by

ω(f) = ∫_X f dμ.

The state ω then also defines a probability measure μ_f on the "spectrum" σ(f) through

μ_f(Δ) = μ(f⁻¹(Δ)),

where Δ ⊆ σ(f) and f⁻¹(Δ) denotes the subset {x ∈ X | f(x) ∈ Δ} of X. Virtually all of probability theory is based on this construction, which I now rephrase. For any C*-algebra A with unit 1_A, let C*(a) be the smallest C*-algebra in A (under the same operations and norm) that contains a and the unit 1_A, as above for

54 See e.g. Haag [53], Ruetsche [101], or Landsman [75] for the C*-algebraic approach. My discussion of the Born rule is partly taken from the latter, which in turn elaborates on Landsman [74], but the corresponding "classical" construction has been added and may be new; I find it very instructive. 55 This is well defined, since C*(a) is the intersection of all C*-algebras in B(H) that contain a and 1_H. 56 With some modifications the construction easily generalizes to the locally compact case.
A = B(H). If a* = a, then C*(a) is the norm-closure of the set of all (finite) polynomials in a. Take A = C(X) and f ∈ C(X), assumed real-valued (since f* = f). This gives an isomorphism

C(σ(f)) ≅ C*(f), g ↦ g ∘ f,

where f ∈ C(X), g ∈ C(σ(f)), as follows from the Stone-Weierstrass Theorem. 57 If C is a C*-subalgebra of A with the same unit as A, then any state ω on A defines a state ω|C on C. Thus ω|C*(f) is a state on C*(f). Furthermore, if φ : B → C is a unital homomorphism of unital C*-algebras (i.e., a linear map preserving all operations as well as the unit - in our case φ is even an isomorphism), and if ω′ is a state on C, then ω′ ∘ φ is a state on B. We then replay our record: a state ω ∈ S(B(H)) restricts to a state ω|C*(a) on C*(a), which is mapped to a state on C(σ(a)) by the continuous functional calculus (CFC), which state in turn is equivalent to a probability measure on σ(a). The probability measure μ_a on σ(a) obtained by this construction is exactly the Born measure, which is more commonly defined as follows:

Theorem A.1 Let H be a Hilbert space, let a* = a ∈ B(H), and let ω be a state on B(H). There exists a unique probability measure μ_a on the spectrum σ(a) of a such that

ω(f(a)) = ∫_{σ(a)} f dμ_a for each f ∈ C(σ(a)).
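For a qubit observable, Theorem A.1 can be made completely explicit. The sketch below is my own illustration (real states and the observable a = cos θ Z + sin θ X are assumed for simplicity): it computes the Born measure μ_a on σ(a) = {+1, −1} from the eigenvectors of a and checks it against the expectation value ω(a) = ⟨ψ, aψ⟩.

```python
import math

def born_measure(theta, psi):
    """Born measure mu_a on the spectrum sigma(a) = {+1, -1} of the qubit
    observable a = cos(theta) Z + sin(theta) X, in a (real) unit state psi."""
    vp = (math.cos(theta / 2), math.sin(theta / 2))   # eigenvector for +1
    vm = (-math.sin(theta / 2), math.cos(theta / 2))  # eigenvector for -1
    p_plus = (vp[0] * psi[0] + vp[1] * psi[1]) ** 2
    p_minus = (vm[0] * psi[0] + vm[1] * psi[1]) ** 2
    return {+1: p_plus, -1: p_minus}

def expectation(theta, psi):
    """omega(a) = <psi, a psi>, which must equal sum_l l * mu_a({l})."""
    c, s = math.cos(theta), math.sin(theta)
    a_psi = (c * psi[0] + s * psi[1], s * psi[0] - c * psi[1])
    return psi[0] * a_psi[0] + psi[1] * a_psi[1]

psi = (1.0, 0.0)                 # spin-up state
mu = born_measure(math.pi / 3, psi)
# mu sums to 1 and reproduces the expectation value, as in Theorem A.1
```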
We now return to the issue raised in the introduction to this appendix: even short of hidden variables (see Sect. 3), to what extent is the probabilistic structure of quantum mechanics that is already inherent in the Born measure reducible to ignorance? From a naive perspective it is, for in the corresponding classical case the measure μ_f on σ(f) does not determine the measure μ on X, except when C*(f) = C(X) (which is certainly possible; take e.g. X = [0, 1] and f(x) = x). However, in the quantum case it so happens that if a is maximal (i.e. its spectrum is nondegenerate, or, equivalently, C*(a) is a maximal commutative C*-subalgebra of B(H)), then the measure μ_a on σ(a) does determine the state ω, at least if ω is pure and normal, as in (A.11), despite the fact that C*(a) is just a small part of B(H). So the Copenhagen Interpretation is walking a tightrope here, but it doesn't fall. 58

The Born measure is a mathematical construction; what is its relationship to experiment? This relationship must be the source of the (alleged) randomness of quantum mechanics, for the Schrödinger equation is deterministic. We start by postulating, as usual, that μ_a(A) is the (single-case) probability that measurement of the observable a in the state ω (which jointly give rise to the pertinent Born measure μ_a) gives a result λ ∈ A ⊂ σ(a). What needs to be clarified in the above statement is the word probability. Although I do not buy Lewis's [78] "Best Systems Account" (BSA) of laws of nature, 59 I do agree with his identification of single-case probabilities as numbers (consistent with the probability calculus as a whole) that theory assigns to events, upon which long-run frequencies provide empirical evidence for the theory in question, 60 but do not define probabilities. 61 The Born measure is a case in point: these probabilities are theoretically given, but have to be empirically verified by long runs of independent experiments.
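The idea that long runs test the Born rule as a hypothesis can be sketched as a standard significance test (an illustration under the normal approximation, not a procedure from the paper): given N outcomes, one compares the observed frequency with the theoretically given Born probability p.

```python
import math
import random

def z_score(successes, n, p):
    """Normal-approximation test statistic for H0: the Born probability
    is p. Under H0, z is approximately standard normal for large n."""
    return (successes - n * p) / math.sqrt(n * p * (1 - p))

# Simulated run of n independent experiments with true probability p_born:
rng = random.Random(1)
n, p_born = 100_000, 0.75
successes = sum(rng.random() < p_born for _ in range(n))
z = z_score(successes, n, p_born)
# |z| stays small with overwhelming probability if the data really sample
# the Born measure; a large |z| would count as evidence against it.
```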
In other words, by the results reviewed below such experiments provide numbers whose role it is to test the Born rule as a hypothesis. This is justified by (4.3) in §4, which equates probabilities computed from long-run frequencies with the corresponding single-case probabilities that define the clause "with probability one" under which (4.3) holds, and without which clause the limit in (4.3) would be undefined. As explained in Sects. 4-5, the relevance of (4.3) to quantum physics comes from Theorem 5.1, which I now prove. Here ω is the state on B(K) with respect to which the Born probability is defined. If dim(K) < ∞, as I assume, we always have ω(a) = Tr(ρa) for some density operator ρ, and for a general Hilbert space K this is the case iff the state ω is normal on B(K). For (normal) pure states we have ρ = |ψ⟩⟨ψ| for some unit vector ψ ∈ K, in which case ω(a) = ⟨ψ, aψ⟩. The Born rule (A.16) follows from the same reasoning as the single-operator case explained in the run-up to Theorem A.1 (Landsman [75], Sect. 4.1): 63 we have an isomorphism of (commutative) C*-algebras, and under this isomorphism the restriction of the state ω, originally defined on B(K), to C*(a) defines a probability measure μ_a on the joint spectrum σ(a), which is just the Born measure whose probabilities are given by (A.

To describe the limit N → ∞, let B be any C*-algebra with unit 1_B; below I take B = B(H), B = C*(a), or B = C(σ(a)). We now take the N-fold tensor product B^⊗N of B with itself. 64 The special cases above may be rewritten as

B(H)^⊗N ≅ B(H^⊗N), (A.27)
C*(a)^⊗N ≅ C*(a_1, …, a_N), (A.28)
C(σ(a))^⊗N ≅ C(σ(a)^N), (A.29)

with N copies of H and σ(a), respectively; in (A.28) the a_i are given by (A.13)-(A.14).
We may then wonder if these algebras have a limit as N → ∞. They do, but the limit is not unique and depends on the choice of observables, that is, of the infinite sequences b = (b_1, b_2, …), with b_N ∈ A_N, that are supposed to have a limit. One possibility is to take sequences for which there exists M ∈ ℕ and b_M ∈ A_M such that for each N ≥ M,

b_N = b_M ⊗ 1_B ⊗ ⋯ ⊗ 1_B,

with N − M copies of the unit 1_B. On that choice, one obtains the infinite tensor product B^⊗∞; see Landsman [75], Sect. C.14. The limit of (A.27) in this sense is B(H^⊗∞), where H^⊗∞ is von Neumann's 'complete' infinite tensor product of Hilbert spaces, 65 in which C*(a)^⊗∞ is the C*-algebra generated by the operators (a_1, a_2, …). The limit of (A.29) is C(σ(a)^ℕ), where σ(a)^ℕ, which we previously saw as a measure space (as a special case of X^ℕ for general compact Hausdorff spaces X), is now seen as a topological space with the product topology, in which it is compact. 66 As in the finite case, we have an isomorphism

C*(a_1, a_2, …) ≅ C(σ(a)^ℕ), (A.33)

and hence, on the given identifications and with the notation (A.13)-(A.14), the generators on both sides correspond. 64 If B is infinite-dimensional, for technical reasons the so-called projective tensor product should be used. 65 See Landsman [75], §8.4 for this approach. The details are unnecessary here. 66 From Tychonoff's Theorem. The associated Borel structure is the one defined in footnote 36 in §5.
It follows from the definition of the infinite tensor products used here that each state ω_1 on B defines a state ω_1^∞ on B^⊗∞. Take B = B(H) and restrict ω_1^∞, which a priori is a state on B(H^⊗∞), to its commutative C*-subalgebra C*(a_1, a_2, …). The isomorphism (A.33) then gives a probability measure μ_a on the compact space σ(a)^ℕ, where the label a now refers to the infinite set of commuting operators (a_1, a_2, …) on H^⊗∞. To compute this measure, use (A.12) and the fact that by construction functions of the type

f(x) = f^(N)(x_1, …, x_N),

where N < ∞ and f^(N) ∈ C(σ(a)^N), are dense in C(σ(a)^ℕ) with respect to the appropriate supremum-norm, and that in turn finite linear combinations of factorized functions are dense in C(σ(a)^N). It follows from this that the measure μ_a on σ(a)^ℕ is the infinite product of copies of the single-case Born measure. Since this generalizes (A.25) to N = ∞, this finishes the proof of Theorem 5.1. ◻

Appendix B: 1-Randomness
As we have seen, randomness (of measurement outcomes) in quantum mechanics was originally defined by Born as indeterminism. This is what hidden variable theories like 't Hooft's and Bohmian mechanics challenge, to which in turn Bell's Theorem and the Free Will Theorem (read in a specific way, reviewed in Appendix C below) provide obstacles. Indeterminism is a physical definition of randomness, which in mathematics is most closely matched by incomputability. However, there is a much deeper notion of randomness in mathematics, which, at least in my view, is what quantum mechanics should aspire to produce. This notion, now called 1-randomness, has its roots in Hilbert's Sixth Problem, viz. the Mathematical Treatment of the Axioms of Physics,^67 which was taken up independently and very differently by von Mises [128] and by Kolmogorov [70].
Von Mises was a strict frequentist for whom probability was a derived concept, predicated on first having a good notion of a random sequence from which relative frequencies defining probability could be extracted. Kolmogorov, on the other hand, started from an axiomatic a priori notion of probability from which a suitable mathematical concept of randomness was subsequently to be extracted. Despite the resounding and continuing success of the first step, Kolmogorov's initial failure to achieve the follow-up led to his later notion of algorithmic randomness, which (subject to a technical improvement) was to become one of the three equivalent definitions of 1-randomness. In turn, von Mises's failure to adequately define random sequences eventually led to the other two definitions.^68 Kolmogorov's problem, which was noticed already by Laplace and perhaps even earlier probabilists, was that, specializing to a 50-50 Bernoulli process for simplicity,^69 any binary string σ of length N has probability P(σ) = 2^−N and any (infinite) binary sequence x has probability P(x) = 0, although say σ = 0011010101110100 looks much more random than τ = 1111111111111111. In other words, their probabilities say little or nothing about the randomness of individual outcomes. Imposing statistical properties helps but is not enough to guarantee randomness. It is slightly easier to explain this in base 10, to which I therefore switch for a moment. If we call a sequence x Borel normal if each possible string σ in x has (asymptotic) frequency 10^−|σ| (so that each digit 0, …, 9 occurs 10% of the time, each block 00 to 99 occurs 1% of the time, etc.), then Champernowne's number 0.123456789101112… can be shown to be Borel normal. The decimal expansion of π is conjectured to be Borel normal, too (and has been empirically verified to be so in billions of decimals), but these numbers are hardly random: they are computable, which is an antipode to randomness.
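As a numerical illustration (a sketch only; the cut-off at 99999 is an arbitrary choice), one can check that the digit frequencies in an initial segment of Champernowne's number are already close to the 10% that Borel normality demands, even though the number is perfectly computable:

```python
# Digit frequencies in an initial segment of Champernowne's number
# 0.123456789101112..., obtained by concatenating 1, 2, ..., 99999.
# The number is computable, hence non-random, yet looks Borel normal.
from collections import Counter

prefix = "".join(str(k) for k in range(1, 100_000))
freq = Counter(prefix)

for d in "0123456789":
    # Each digit occurs close to 10% of the time (exact equality is only
    # asymptotic; '0' lags slightly since it never occurs as a leading digit).
    assert abs(freq[d] / len(prefix) - 0.1) < 0.03
```

The residual deviation for the digit 0 (about 2%) is the leading-digit effect mentioned in the comment; it vanishes only in the limit.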
Von Mises's problem was that his definition of a random sequence (called a Kollektiv), despite his great insights, simply did not work. Back to binary sequences: his notion of randomness was supposed to guarantee the existence of limiting relative frequencies (which in turn should define the corresponding single-case probabilities), but of course he understood that a sequence like 01010101⋯ is not very random at all. His idea was that limiting relative frequencies should exist not only for the given sequence, but also for all its subsequences picked out by so-called place-selection functions, which he tried (in vain) to characterize by precluding successful gambling strategies on the digits of the sequence.
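To make the failure mode concrete, here is a toy illustration (the helper name is mine) of why 01010101⋯ is no Kollektiv: the full sequence has limiting relative frequency 1/2, but the subsequence picked out by the place selection "take every second digit" does not:

```python
# Toy illustration of von Mises's place-selection requirement: relative
# frequencies must be invariant under admissible place selections, which
# the alternating sequence violates.
def rel_freq_of_ones(bits):
    return sum(bits) / len(bits)

x = [n % 2 for n in range(10_000)]           # 0, 1, 0, 1, ...
selected = x[0::2]                           # place selection: even positions
assert rel_freq_of_ones(x) == 0.5
assert rel_freq_of_ones(selected) == 0.0     # the frequency is not invariant
```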
Of the three equivalent definitions of 1-randomness already mentioned in Sect. 2, namely:

• Incompressibility;
• Patternlessness;
• Unpredictability,

the first may be said to go back to Kolmogorov's struggle, whereas the other two originate in later attempts by other mathematicians to improve the work of von Mises.^70 Although there is no single "correct" mathematical notion of randomness, the notion of 1-randomness featured here stands out (and represents a consensus view) largely because it can be defined in these three equivalent yet fairly different ways, each of which realizes some basic intuition about randomness (of course defined through its obvious antipode!).

^68 See van Lambalgen [122] or, for a lighter account, Diaconis and Skyrms [35], for this history. ^69 A string σ is a finite row of bits, whereas a sequence x is an infinite one. The length of a string σ is |σ|. ^70 As mentioned and referenced in most of the literature as well as in the next footnote, Kolmogorov's work in the 1960s on the first definition was predated by Solomonoff and matched by independent later work of Chaitin. Key players around the other two definitions were Schnorr and Martin-Löf, respectively.
In what follows, these notions will be defined more precisely, followed by some of their consequences. 71 We assume basic familiarity with the notion of a computable function f ∶ ℕ → ℕ , which may technically be defined through recursion theory or equivalently through Turing machines: a function is computable if it can be computed by a computer.

Incompressibility
The idea is that a string or sequence is random iff its shortest description is the sequence itself, but the notion of a description has to be made precise to avoid Berry's paradox: The Berry number is the smallest positive integer that cannot be described in less than eighteen words.
The paradox, then, is that on the one hand this number must exist, since only finitely many integers can be described in less than eighteen words, so that the set of integers that cannot be so described is non-empty and has a smallest element, while on the other hand Berry's number cannot exist by its own definition. This is, of course, one of innumerable paradoxes of natural language, which, like the liar's paradox, will be seen to lead to an incompleteness theorem once the notion of a description has been appropriately formalized in mathematics, as follows.^72

The plain Kolmogorov complexity C(σ) of σ ∈ 2^N is defined as the length (in bits) of the shortest computer program (run on some fixed universal Turing machine U) that computes σ. The choice of U affects this definition only up to a σ-independent constant. For technical reasons (especially for defining the randomness of sequences) it is preferable to work with prefix-free Turing machines T, whose domain D(T) is a prefix-free subset of 2^*, i.e., if σ ∈ D(T) then στ ∉ D(T) for any nonempty τ ∈ 2^*, where στ is the concatenation of σ and τ, as the notation suggests. This is also independent of U up to a constant, and defines the prefix-free Kolmogorov complexity K(σ) as the length of the shortest program (run on some fixed universal prefix-free Turing machine U) that computes σ. For fixed c ∈ ℕ we then say that σ ∈ 2^* is c-compressible if K(σ) < |σ| − c; of course, this depends on U via K(σ), but nonetheless a simple counting argument shows: At least 2^N − 2^(N−c+1) + 1 strings of length |σ| = N are not c-compressible.

^71 In increasing order of technicality, readers interested in more detail are referred to Diaconis and Skyrms ([35], Chapter 8), Terwijn [111], or Volchan [127] at a popular level, then Grünwald and Vitányi [52], Dasgupta [33], Downey et al. [38], or Muchnik et al. [90], then Li and Vitányi [79] or Calude [25], and finally Downey and Hirschfeldt [37]. See also Baumeler et al. [9], Bendersky et al. [12], Calude [24], Eagle [42], Earman [44], Kamminga [67], Senno [105], Svozil [109,110], Wolf (2015), and Zurek [135] for brief introductions to Kolmogorov randomness with applications to physics. ^72 We write 2^N for the set of all binary strings σ of length |σ| = N ∈ ℕ, and 2^* = ∪_N 2^N for the set of all binary strings. Finally, 2^ℕ denotes the set of all binary sequences x (which are infinite by convention).

Clearly, as N grows and c > 0, these will form the overwhelming majority. Finally, σ is Kolmogorov c-random if it is not c-compressible, i.e., if K(σ) ≥ |σ| − c, and Kolmogorov random if it is not c-compressible for any c ∈ ℕ, that is, if one even has K(σ) ≥ |σ|. Since K(σ) can at most be equal to the length of an efficient printing program plus the length of σ, this simply means that σ is Kolmogorov random if K(σ) ≈ |σ|, where ≈ means 'up to a σ-independent constant'. Moving from strings to sequences: if x|_N is the initial segment of length N of a sequence x ∈ 2^ℕ, i.e. |x|_N| = N, then x is Kolmogorov-Chaitin random if lim_{N→∞} (K(x|_N) − N) = ∞. Equivalently,^73 a sequence x is Kolmogorov-Chaitin random if there exists c ∈ ℕ such that each truncation x|_N satisfies K(x|_N) ≥ N − c. I note with satisfaction that randomness of sequences, seen as idealizations of finite strings, as such satisfies Earman's principle (Sect. 4).
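The counting argument invoked above can be sketched numerically. The function below (the name is mine) encodes only the crude count, not the prefix-free machinery: programs of length < N − c can describe at most 2^(N−c) − 1 strings, so the remaining strings of length N cannot be c-compressible:

```python
# Counting sketch: a description scheme maps each program to at most one
# string, so at most (number of programs of length < N - c) strings of
# length N can be c-compressible.
def min_incompressible(N, c):
    """Lower bound on the number of non-c-compressible strings of length N."""
    compressible_at_most = 2 ** (N - c) - 1   # strings of length < N - c
    return 2 ** N - compressible_at_most

for N in range(5, 12):
    for c in range(1, N):
        # The text's (slightly weaker) bound 2^N - 2^(N-c+1) + 1 follows a fortiori.
        assert min_incompressible(N, c) >= 2 ** N - 2 ** (N - c + 1) + 1
```

For N = 10 and c = 2, for instance, at least 769 of the 1024 strings are not 2-compressible; as c grows the compressible strings become exponentially rare.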
This definition of randomness of both finite strings and infinite sequences looks very appealing, but in a way it is self-defeating: although it is defined in terms of computability, the complexity function K is itself not computable, and hence one cannot even determine algorithmically whether a finite string is random (let alone an infinite one).^74 Moreover, Berry's paradox strikes back through Chaitin's incompleteness Theorem: For any consistent, sound, and sufficiently rich formal system F (containing basic arithmetic) there is a constant f such that the statement K(σ) > f cannot be proved in F for any σ ∈ 2^*, although it is true for all (random) strings σ that satisfy K(σ) ≥ |σ| > f (and there are infinitely many such strings, as the above counting argument shows). To get a very rough idea of the proof, let me just say that any proof in F of the sentence K(σ) > f would identify σ and hence give a shorter description of σ than its complexity K(σ) allows.
Chaitin's Theorem gives a new and inexhaustible class of unprovable but true statements (which also lie in a very different class from those provided by Gödel himself): For almost all random strings their randomness cannot be proved.^75 ^73 See Calude [25], Theorem 6.38 (attributed to Chaitin) for this equivalence. ^74 To see this, define a function L : ℕ → ℕ by L(n) = min{m ∈ ℕ | K(m) ≥ n}. Identifying ℕ ≅ 2^* (in a computable way), L(n), seen as an element of 2^*, is the shortest string m whose complexity K(m) is at least n, so that by construction K(L(n)) ≥ n. If K were computable, then so would L be. Suppose T is the shortest prefix-free program that computes L. Since K(L(n)) is the length of the shortest prefix-free program that computes L(n), we have K(L(n)) ≤ |n| + |T|, where |n| is the length of the string corresponding to n under the above bijection, so that |n| ≈ log₂ n. Thus n ≤ K(L(n)) ≤ |n| + |T|, which cannot be true for large n. ^75 See Raatikainen [98] for a detailed presentation of Chaitin's incompleteness Theorem, including a devastating critique of the far-reaching philosophical interpretation Chaitin himself gave of his theorem.
To the extent that there are deep logico-philosophical truths, surely this is one! Compare: 'It may be taken for granted that any attempt at defining disorder in a formal way will lead to a contradiction. This does not mean that the notion of disorder is contradictory. It is so, however, as soon as I try to formalize it.'^76 The satisfactory definition of 1-randomness proves this wrong, but the statement preceding it is a correct version of a similar intuition: randomness is by its very nature elusive.

Patternlessness
This is the most direct attack on the "paradox of probability" noted above, namely that each individual sequence has probability zero despite the huge differences in their (true or apparent) randomness. We use the notation of §4, so that μ^∞ is the probability measure on the set 2^ℕ of all binary sequences induced by the 50-50 measure on the outcome space {0, 1} of a single fair coin flip (the discussion is easily adapted to other cases).
As opposed to single outcomes, a key role will be played by tests, i.e. (measurable) sets T ⊂ 2^ℕ of outcomes for which μ^∞(T) = 0. Such a test defines a property that a sequence x ∈ 2^ℕ may or may not have, namely membership of T. Conversely, and perhaps less circularly, one may start from a given pattern that x might have, which would make it appear less random, and express this pattern as a test T. Conceptually, tests contain outcomes that are supposed to be "atypical", and randomness of x ∈ 2^ℕ will be defined as the property of not belonging to any such test, making x "typical". For example, the property lim_{N→∞} N^{−1} ∑_{n=1}^N x_n = 1/2 is "typical" for a fair coin flip and indeed (by the strong law of large numbers) it holds with probability 1 (in that the set E of all x for which this limit exists and equals 1/2 has μ^∞(E) = 1, so that its complement T = 2^ℕ \ E has μ^∞(T) = 0). One has to proceed very carefully, though, since all singletons T = {x} for x ∈ 2^ℕ are to be excluded; indeed, these all have measure zero and yet, returning to the paradox of probability, some uncontroversially random sequence x (on whatever definition) would fail to be random by the criterion just proposed if all measure-zero sets were included in the list of tests. Kolmogorov's former PhD student Martin-Löf [86] saw his way out of this dilemma, combining techniques from computability/recursion theory with some ideas from the intuitionistic/constructive mathematics of the great L.E.J. Brouwer:

1. One specializes to tests of the form T = ∩_{n∈ℕ} U_n, where U_{n+1} ⊆ U_n and μ^∞(U_n) ≤ 2^{−n}, which guarantees that μ^∞(T) = 0. Perhaps the simplest example is not the law of large numbers, for which see Li and Vitányi [79], §2.4, Example 2.4.2, but the test where U_n consists of all sequences starting with n zeros; clearly, μ^∞(U_n) = 2^{−n}. In this case, the test T = {x_love} does consist of a singleton x_love = 000⋯ (zeros only).
2. Both the sets U_n and the map n ↦ U_n have to be computable in a suitable sense.^77

If x ∉ T, one says that x passes the test T. Note that the computability requirement implies that the set of all tests T satisfying these two criteria is countable, which by itself already shows what a huge cut in the set of all measure-zero sets has been achieved.^78 It should be clear that this definition makes no sense if the sample space is finite, but in order to adhere to Earman's principle one could still check to what extent randomness of sequences is determined by their finite initial segments. This is indeed the case, as follows from the detailed structure of the admissible sets U_n above (see footnote 77).

^76 Statement by the influential German-Dutch mathematician and educator Hans Freudenthal from 1969, quoted in both van Lambalgen [122], p. 8 and Terwijn [111], p. 51.
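The simplest test just described is easy to simulate. The Monte Carlo check below is a sketch (sample sizes and the seed are arbitrary choices); it merely confirms numerically that μ^∞(U_n) = 2^(−n) for n = 5:

```python
# The simplest Martin-Löf test from the text: U_n consists of all sequences
# starting with n zeros, so mu(U_n) = 2^(-n) under the fair-coin measure.
import random

random.seed(0)  # for reproducibility

def in_U(prefix_bits, n):
    """Does a sequence with this initial segment lie in U_n?"""
    return all(b == 0 for b in prefix_bits[:n])

n, trials = 5, 200_000
hits = sum(
    in_U([random.randint(0, 1) for _ in range(n)], n) for _ in range(trials)
)
estimate = hits / trials
assert abs(estimate - 2 ** -n) < 0.005   # mu(U_5) = 1/32 = 0.03125
```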

Unpredictability
Unpredictability is a family resemblance, like randomness: the following definition is just one possibility, cf. e.g. footnote 41 and eq. (5.6). Having said this: unpredictability of a binary sequence x = (x_1, x_2, …) is formalized as the impossibility of a successful betting strategy on the successive digits of x. Suppose a punter with initial capital d(∅) bets on the successive digits of x at fair (double-or-nothing) odds. This motivates the concept of a martingale as a function d : 2^* → [0, ∞) that satisfies d(σ) = ½(d(σ0) + d(σ1)) for each σ ∈ 2^*. Both the betting strategy itself and the payoff (for given x ∈ 2^ℕ) can be reconstructed from d: the bet on the (N+1)'th digit x_{N+1} is determined by the values d(x|_N 0) and d(x|_N 1), and after N bets the punter owns d(x|_N). A martingale d succeeds on A ⊂ 2^ℕ if lim sup_{N→∞} d(x|_N) = ∞ for each x ∈ A, in which case the punter beats the casino.^79 Our first impulse would now be to call x ∈ 2^ℕ random if there exists no martingale that succeeds on A = {x}, but Ville [125] proved:

A (measurable) set A ⊂ 2^ℕ satisfies μ^∞(A) = 0 if and only if there exists a martingale that succeeds on A.

In particular, for any sequence x ∈ 2^ℕ there exists a martingale that succeeds on x, and hence no sequence x would be random on the criterion that no martingale succeeds on it. Fortunately, the previous two definitions of randomness suggest that all that is missing is a suitable notion of computability for martingales. This notion was provided by Schnorr [102]: all that needs to be added is that the class of martingales d that succeed on A be uniformly left computably enumerable, in the sense that firstly each real number d(σ) is the limit of a computable increasing sequence of rational numbers, and secondly that there is a single program that computes d(σ) in that way.

^77 First, each U_n has to be open in 2^ℕ, which means that U_n = ∪_{σ∈V_n} N(σ), where V_n ⊂ 2^* and N(σ) = {σy | y ∈ 2^ℕ} consists of all sequences x = σy that start with the given finite string σ. Second, each V_n must be countable, so that V_n = ∪_m σ^(n,m), where each σ^(n,m) ∈ 2^*. Finally, there must be a single program enumerating the sets (V_n), so that all in all one requires a (partial) computable function σ : ℕ × ℕ → 2^* such that U_n = ∪_m N(σ^(n,m)) and hence T = ∩_n ∪_m N(σ^(n,m)). Such a set T is called effective, and if also (B.3) holds, then T is an effective measure zero set or Martin-Löf test. ^78 There is even a single "universal" Martin-Löf test U such that x ∉ U iff x is Martin-Löf random.
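Ville's point that, absent a computability restriction, every single sequence is "beatable" can be illustrated with a concrete martingale (the function name and the strategy are mine) that satisfies the fairness condition (B.5) and succeeds on the periodic sequence 0101⋯:

```python
# A martingale that succeeds on 0101...: the punter bets all capital that
# the digits alternate, so the capital doubles on every correct guess and
# drops to zero on the first wrong one.
from itertools import product

def d(s):
    """Capital after seeing string s (a tuple of bits)."""
    capital = 1.0
    for i, bit in enumerate(s):
        predicted = i % 2              # predict 0 at even positions, 1 at odd
        capital = 2 * capital if bit == predicted else 0.0
    return capital

# Fairness (B.5): d(s) is the average of d(s0) and d(s1), for all short strings.
for length in range(8):
    for s in product((0, 1), repeat=length):
        assert d(s) == (d(s + (0,)) + d(s + (1,))) / 2

# Success: along x = 0101... the capital doubles at every step.
x = tuple(i % 2 for i in range(30))
assert d(x) == 2.0 ** 30
```

Of course, this d is trivially computable, which is exactly why the periodic sequence fails Schnorr's criterion.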
Defining x ∈ 2 ℕ to be Schnorr random if there exists no uniformly left computably enumerable martingale that succeeds on it, 80 one has the crowning theorem on 1-randomness: 81 Theorem B.1 A sequence x ∈ 2 ℕ is Kolmogorov-Chaitin random (i.e. incompressible) iff it is Martin-Löf random (i.e. patternless) and iff it is Schnorr random (i.e. unpredictable).
These equivalent conditions, then, define 1-randomness of sequences x ∈ 2^ℕ. ^79 In practice someone beating the casino will go home with a finite amount of money. The fact that the right-hand side of (B.8) is infinite is the result of idealizing long strings by sequences: if the punter has a uniform winning strategy and places infinitely many bets, he will earn an infinite amount of money. ^80 In the literature the term 'Schnorr randomness' is often used differently, namely to indicate that the martingales d in the above definition are merely computable, which yields a weaker notion of randomness. Also, note that Schnorr [102] used so-called supermartingales in his definition, for which one has ≤ instead of equality in (B.5), but martingales also work, cf. Downey and Hirschfeldt [37], Theorem 6.3.4. ^81 The equivalence between the criteria of Martin-Löf and of Kolmogorov-Chaitin was proved by Chaitin (cf. Calude [25], Theorem 6.35) and by Schnorr [103]. The equivalence between Martin-Löf and Schnorr is due to Schnorr [102].

Some Reflections on 1-Randomness
Any sound definition of randomness (for binary sequences) has to navigate between Scylla and Charybdis: if the definition is too weak (such as Borel normality), counterexamples will undermine it (such as Champernowne's number), but if it is too strong (such as being lawless in the sense of Brouwer's choice sequences, cf. §2), it will not hold almost surely in a 50-50 Bernoulli process. In this respect 1-randomness does very well: see Theorem 4.1 and the theorem below (which lies at the basis of Corollary 4.2):

Theorem B.2 Any 1-random sequence is Borel normal, incomputable, and contains any finite string infinitely often.^82

In fact, since computability is sometimes used as a mathematical metaphor for determinism, it is worth mentioning that 1-random sequences are incomputable in a very strong sense: each 1-random sequence x is immune, in the sense that neither the set {n ∈ ℕ | x_n = 1} nor its complement {n ∈ ℕ | x_n = 0} is computably enumerable (c.e.), or even contains an infinite c.e. subset. Thus 1-randomness is far stronger than mere incomputability (or perhaps indeterminism). On the other hand, as already noted in §5, Chaitin's Incompleteness Theorem makes it impossible to prove that some given outcome sequence of a quantum-mechanical experiment with binary outcomes and 50-50 Born probabilities is 1-random, even if it is (which is worrying, because one can never be sure, but only almost sure, that outcomes are 1-random). At best, in a bipartite (Alice and Bob) setting of the kind discussed in §5, one may hope for results showing that if an outcome is not 1-random, then some kind of superluminal signaling is possible, but, as reviewed in §5, even that much has not been rigorously established so far.^83 This suggests trying to prove randomness properties of such outcome sequences that are on the one hand weaker than 1-randomness, like incomputability, but on the other hand are stronger, in that they hold surely.
To this end, Abbott, Calude, and Conder [1] take a 3-level quantum system, prepare it in some state ψ ∈ ℂ³, and repeatedly (indeed, infinitely often) measure any projection |φ⟩⟨φ| for which √(5/14) ≤ |⟨ψ, φ⟩| ≤ √(9/14); for an experimental setup realizing these measurements see Kulikov et al. [73]. In that case there is a finite set P of 1-dimensional projections on ℂ³ that contains |ψ⟩⟨ψ| as well as |φ⟩⟨φ| and admits no coloring (Abbott, Calude, and Svozil [2]).^84 Some particular outcome sequence is described by a function f : ℕ → {0, 1}. The crucial assumption Abbott et al. then make is that if f is computable, then it extends to a function f̂ : ℕ × P → {0, 1} for which each f̂_n : P → {0, 1} is a coloring of P, where f̂_n(e) = f̂(n, e). Since no such coloring exists, on this assumption f cannot be computable. This gives the desired result, but moves the ^82 See the footnotes to Corollary 4.2 for the meaning of these terms, and Calude [25], §6.4 for proofs. ^83 See also Kamminga [67] and references therein for a survey of the literature on this topic. ^84 A coloring would be a function C : P → {0, 1} such that for any orthogonal set {e_1, e_2, e_3} in P with e_1 + e_2 + e_3 = 1_3 (where 1_3 is the 3 × 3 unit matrix) there is exactly one e_i for which C(e_i) = 1.

burden of proof to justifying the assumption. Phrased by the authors in terms of "elements of physical reality" à la EPR, their assumption in fact postulates that if f is computable, then it originates in some ("morally" deterministic) non-contextual hidden-variable theory (to which their own sharpened Kochen-Specker Theorem applies).
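The coloring notion of footnote 84 is easy to check mechanically. The sketch below (helper names are mine) uses a toy projection set, namely the standard basis of ℝ³ giving a single orthonormal triple, not the actual no-coloring set of Abbott, Calude, and Svozil, which is far larger:

```python
# Mechanical check of the coloring rule: every orthonormal triple of
# projections summing to the identity must contain exactly one projection
# colored 1.
from itertools import product

def is_coloring(C, triples):
    """C maps projection indices to {0, 1}; triples lists index triples
    that form orthonormal bases (i.e. sum to the identity)."""
    return all(sum(C[i] for i in t) == 1 for t in triples)

triples = [(0, 1, 2)]   # indices of e_1, e_2, e_3 with e_1 + e_2 + e_3 = 1_3
colorings = [C for C in product((0, 1), repeat=3) if is_coloring(C, triples)]
assert len(colorings) == 3   # exactly one of the three projections gets a 1
```

For the genuine set P of Abbott, Calude, and Svozil, the same brute-force search over colorings comes back empty, which is precisely the no-coloring property used above.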

Appendix C: Bell's Theorem and Free Will Theorem
In support of the analysis of hidden variable theories in §3, this appendix reviews Bell's [10] Theorem and the Free Will Theorem, streamlining earlier expositions (Cator and Landsman [27]; Landsman [75], Chapter 6) and leaving out proofs and other adornments.^85 In the specific context of 't Hooft's theory (where the measurement settings are determined by the hidden state) and Bohmian mechanics (where they are not, as in the original formulation of Bell's Theorem and in most hidden variable theories), a major advantage of my approach is that both determined and undetermined settings fall within its scope; the latter case arises from the former by adding an independence assumption.^86 As a warm-up I start with a version of the Kochen-Specker Theorem whose logical form is very similar to Bell's [10] Theorem and the Free Will Theorem, as follows:

Theorem C.1 Determinism, Nature, Non-contextuality, and Freedom are contradictory.
Of course, this slightly unusual formulation hinges on the precise meaning of these terms.
• Determinism is the conjunction of the following two assumptions: ^85 The original reference for Bell's Theorem is Bell [10], with innumerable follow-up papers, of which we especially recommend Werner and Wolf [130]. The Free Will Theorem originates in Heywood and Redhead [62], Stairs [108], Brown and Svetlichny [19], Clifton [29], and Conway and Kochen [30,31]. Both theorems can be and have been presented and (re)interpreted in many different ways, of which we choose the one that is relevant for the general discussion of randomness in the main body of the paper. ^86 This addresses a problem Bell faced even according to some of his most ardent supporters (Norsen [92]; Seevinck and Uffink [104]), namely the tension between the idea that the hidden variables (in the causal past) should on the one hand include all ontological information relevant to the experiment, but on the other hand should leave Alice and Bob free to choose any settings they like. Whatever its ultimate fate, 't Hooft's staunch determinism has drawn attention to issues like this, as has the Free Will Theorem.
1. There is a state space X with associated functions A : X → S and L : X → O, where S is the set of all possible measurement settings Alice can choose from, namely a suitable finite set of orthonormal bases of ℝ³ (11 well-chosen bases suffice to arrive at a contradiction),^87 and O is some set of possible measurement outcomes. Thus some x ∈ X determines both Alice's setting a = A(x) and her outcome L(x).
2. There exists some set Λ and an additional function H : X → Λ such that L = L̂ ∘ (A, H), in the sense that for each x ∈ X one has L(x) = L̂(A(x), H(x)) for a certain function L̂ : S × Λ → O. This self-explanatory assumption just states that each measurement outcome L(x) = L̂(a, λ) is determined by the measurement setting a = A(x) and the "hidden" variable or state λ = H(x) of the particle undergoing measurement.

• Nature fixes O = {(0, 1, 1), (1, 0, 1), (1, 1, 0)}, which is a non-probabilistic fact of quantum mechanics with overwhelming (though indirect) experimental support.
• Non-contextuality stipulates that the function L̂ just introduced take the form L̂((e_1, e_2, e_3), λ) = (L̃(e_1, λ), L̃(e_2, λ), L̃(e_3, λ)) for a single function L̃ : S² × Λ → {0, 1} that also satisfies L̃(−e, λ) = L̃(e, λ).^88
• Freedom, finally, states that the map x ↦ (A(x), H(x)) from X to S × Λ is surjective. In other words, for each (a, λ) ∈ S × Λ there is an x ∈ X for which A(x) = a and H(x) = λ. This makes A and H "independent" (or: a and λ are free variables).

See Landsman [75], §6.2 for a proof of the Kochen-Specker Theorem in this language.^89 Bell's [10] Theorem and the Free Will Theorem both take a similar generic form, namely:

Theorem C.2 Determinism, Nature, (Hidden) Locality, and Freedom are contradictory.
Once again, I have to explain what these terms exactly mean in the given context. ^87 The settings correspond to measurements of the triples (J_1², J_2², J_3²), where J_i = ⟨J, e_i⟩ is the component of the angular momentum operator of a massive spin-1 particle in the direction e_i. ^88 Here S² = {(x, y, z) ∈ ℝ³ | x² + y² + z² = 1} is the 2-sphere, seen as the space of unit vectors in ℝ³. Eq. (C.2) means that the outcome of Alice's measurement of J_i² is independent of the "context" (J_1², J_2², J_3²); she might as well measure J_i² by itself. The last equation is trivial, since (J_{−e})² = (J_e)². ^89 The assumptions imply the existence of a coloring C : P → {0, 1} (cf. footnote 84), where P ⊂ S² consists of all unit vectors contained in all bases in S, and λ "goes along for a free ride". Indeed, one finds C(e) = L̃(e, λ). The point, then, is that on a suitable choice of the set S such a coloring cannot exist.
• Determinism is a straightforward adaptation of the above meaning to the bipartite "Alice and Bob" setting. Thus we have a state space X with associated functions A, B : X → S and L, R : X → O, where the set S of all possible measurement settings Alice and Bob can each choose from differs a bit between the two theorems: for the Free Will Theorem it is the same as for the Kochen-Specker Theorem above, as is the set O of possible measurement outcomes, whereas for Bell's Theorem (in which Alice and Bob each measure a 2-level system), S is some finite set of angles (three is enough), and O = {0, 1}.
– In the Free Will case, these functions and the state x ∈ X determine both the settings a = A(x) and b = B(x) of a measurement and its outcomes L(x) and R(x), for Alice on the Left and for Bob on the Right, respectively.
– Although this is also true in the Bell case, his theorem relies on measurement statistics (as opposed to individual outcomes), so that one in addition assumes a probability measure μ on X (sampled by repeated measurements, see Sects. 4-5).^90

Furthermore, there exists some set Λ and some function H : X → Λ such that L = L̂ ∘ (A, B, H) and R = R̂ ∘ (A, B, H), in the sense that for each x ∈ X one has functional relationships L(x) = L̂(a, b, z) and R(x) = R̂(a, b, z), where a = A(x), b = B(x), and z = H(x).

• (Hidden) Locality stipulates that L̂(a, b, z) is in fact independent of b, and R̂(a, b, z) of a: given the hidden variable z, neither outcome depends on the distant setting.

^90 The existence of μ is of course predicated on X being a measure space with a corresponding σ-algebra of measurable subsets, with respect to which all functions in (C.4) and below are measurable. ^91 In Bell's Theorem quantum theory can be replaced by experimental support (Hensen et al. [59]). ^92 As in Kochen-Specker, this is because Alice and Bob measure squares of (spin-1) angular momenta.

This concludes the joint statement of the Free Will Theorem and Bell's [10] Theorem in the somewhat unusual form we need for the main text. The former is proved by reduction to the Kochen-Specker Theorem, whilst the latter follows by reduction to the usual version of Bell's Theorem via the Freedom assumption; see Landsman [75], Chapter 6 for details.
For our purposes these theorems are equivalent, despite subtle differences in their assumptions. Bell's Theorem is much more robust, in that it does not rely on perfect correlations (which are hard to realize experimentally), and in addition it requires almost no input from quantum theory.^94 On the other hand, Bell's Theorem uses probability theory in a highly nontrivial way: like the hidden variable theories it is supposed to exclude, it relies on the possibility of fair sampling of the probability measure μ. The factorization condition defining probabilistic independence passes this requirement of fair sampling on to both the hidden variable and the settings, which brings us back to Sect. 5.
All trusting Nature, different parties may now be identified by the assumption they drop:

• Following Copenhagen quantum mechanics, most physicists reject Determinism;
• Bohmians reject (Hidden) Locality;
• 't Hooft rejects Freedom.
However, as we argued in Sect. 5, even the latter two camps do not really have a deterministic theory underneath quantum mechanics because of their need to randomly sample the probability measure they must use to recover the predictions of quantum mechanics.