An Intricate Quantum Statistical Effect and the Foundation of Quantum Mechanics

An intricate quantum statistical effect guides us to a deterministic, non-causal quantum universe with a given fixed initial and final state density matrix. A concept is developed on how and where something like macroscopic physics can emerge. However, the concept does not allow philosophically crucial free will decisions. The quantum world and its conjugate evolve independently, and one can replace fixed final states on each side just with a common matching one. This change allows for external manipulations done in the quantum world and its conjugate, which do not otherwise alter the basic quantum dynamics. In a big bang/big crunch universe, the expanding part can be attributed to the quantum world and the contracting one to the conjugate one. The obtained bi-linear picture has several noteworthy consequences.


Introduction
The interrelation of classical and quantum physics is treated in one respect too timidly, and we have advocated a new approach yielding a novel concept [13,15,17]. This paper aims to develop our basic argument considerably more thoroughly than previously done.
It is not meant as an exercise in finely nuanced words. Nevertheless, two definitions will be necessary. In the next section, the basic argumentation will be presented. It will contain no ad hoc assumptions. A straightforward consideration will then, in Sect. 4, lead to an absolutely deterministic concept. To allow for "free will", a suitable modification with a bi-directional universe will be introduced in Sects. 5 and 6. A discussion of essential consequences follows.

Measurements
The traditional bridges between quantum dynamics and the macroscopic world are measurements. They are not simple projection operators at furcation points; essential is an effective decoherence setup. A simple generic setup is shown in Fig. 1. An electron with an "in the board" spin gets split in an inhomogeneous magnetic field. Its "up" resp. "down" component enters a drift chamber where lots of photons of various frequencies are produced. This production is called the decoherence part of the measurement process. A few electrons are kicked off their atoms and collected. Suitable charge-coupled electronics flashes "up" resp. "down" on displays.
Empirically, the (here just effective) macroscopic dynamics allows no coexisting pathways. So there has to be a decision, leading, e.g., to Fig. 2. This decision is called the actual "measurement" M, involving a "jump" and a "collapse".
What does this decision mean? Many authors see a violation of locality. In the framework of relativistic theory, this does not seem right.
Consider the needed part of Bohm's version of the Einstein-Podolsky-Rosen experiment [10]. A spin-less ion emits two electrons to form a spin-less ground state. Both electrons have to have opposite spins. If Bob measures the spin to be in the "up" direction, the electron coming to Alice will have a spin in the "down" direction, and Alice will measure "down", and vice versa. If Bob measures the spin sidewise, then independent of his result, the electron coming to Alice will not know whether Alice will measure "up" or "down". In this way, Bob's decision changes the nature of the electron coming to Alice. It is well known that Bob is a relatively shy one, so he will be at least twice as far from the excited ion as Alice. In some Lorentz system, Alice's measurement will then be in Bob's past, and with his measurement he influences the property of an electron in his past. That means backward causation, and what is violated is causality [7,40]. It is not a trivial distinction: to give up causality is very serious and widely not accepted. A traditional defense is the Copenhagen interpretation. It denies ontological reality to the electron wave going to Alice; in this way, the statement about the electron wave function becomes meaningless. It opens up intensively discussed interpretations. Some physicists find it not appealing: they want to know what is really going on and not just have a law to predict outcomes. Nevertheless, non-causality is hard to accept, and for the considered situations, the Copenhagen interpretation has to be considered the most reasonable choice. It was advocated by most physicists we admire.
However, there are quantum statistical effects [14-17] which, in our opinion, change the conclusion. This observation is our central point, which we have contemplated for many years. These effects are, unfortunately, rarely discussed. Field-theoretical results do not involve von Neumann measurements, and even top field theorists tend to claim ignorance about questions involving jumps. In the quantum optics community, one encounters a feeling that problems with Schrödinger's equation are challenging enough and that it is reasonable to postpone questions involving second quantization.
So it will not be easy to be convincing. There are several versions. Closest to our background [1] is a quantum statistical effect in high energy heavy-ion scattering. It is one of what Glauber called "known crazy" effects [30].
For non-experts, the description of high-energy heavy-ion scattering usually involves a somewhat simple picture mixing coordinate and momentum space. Not knowing the needed Hamiltonian, it assumes that both fast incoming, more or less round nuclei are, in the central Lorentz system, contracted to pancake-shaped objects. The actual scattering is then assumed to occur when the pancakes overlap in the narrow region shown in red in Fig. 3.

(Displaced figure text: backward causality ∪ forward causality ⇒ non-locality; non-locality ⇏ backward-light-cone causality.)
Lots of particles are produced, including two, say, π⁺ mesons with the momenta Q_1 and Q_2. We denote the amplitude of the pictured process as A(1, 2). As they are bosons, the crossed contribution shown as dashed lines in the figure also has to be included, and the probability of the process is therefore

P(Q_1, Q_2) ∝ |A(1, 2) + A(2, 1)|².

Obviously, for Q_1 = Q_2 both amplitudes are equal, yielding the factor two on the right side. Close by, the phase changes rapidly; it eliminates the interference contribution, yielding the factor one.
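The factor-two/factor-one behavior can be made concrete with a standard textbook toy model (our illustration, not the paper's analysis): for a chaotic Gaussian source of radius R, averaging the symmetrized two-boson probability |A(1,2) + A(2,1)|² over emission points gives C(Q) = 1 + exp(−Q²R²). The value R = 5 fm is a hypothetical choice, not taken from the data discussed below.

```python
import numpy as np

# Toy Hanbury Brown-Twiss correlation for a chaotic Gaussian source of
# radius R (an assumed illustrative value, in fm): the Bose symmetrization
# of the two-particle amplitude gives C(Q) = 1 + exp(-Q^2 R^2).
R = 5.0  # fm, assumed source radius

def correlation(q, radius=R):
    """Two-particle correlation C(Q_inv): 2 at Q=0, approaching 1 at large Q."""
    return 1.0 + np.exp(-((q * radius) ** 2))

print(correlation(0.0))   # equal momenta: amplitudes coherent, factor 2
print(correlation(10.0))  # rapid relative phase kills interference, ~1
```

The Gaussian form is only the simplest parametrization; real analyses fit more refined source shapes, but the enhancement at Q_1 = Q_2 and its disappearance at large momentum difference are generic.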
The resulting Q_1 = Q_2 enhancement is experimentally observed, as shown in Fig. 4. The chosen data are from the STAR collaboration [2]. Q_inv is the difference of the momenta in the center-of-mass system of the π⁺ mesons. The normalization of the two-particle spectrum C(Q_inv) uses an estimate obtained by mixing similar events. In the last 50 years, many dozens of large collaborations have seen it. The observation of the statistical enhancement is textbook level and beyond doubt [39].
For the considered central scattering, the emission area's height reflects the uncontracted size of the nuclei. In contrast, the π⁺-emission region is usually associated with individual nucleons determining its size. Therefore, one can select events for which one π⁺ originates in the upper half and one in the lower half. The particle emission is generally assumed to take less than 10 fm∕c [4,28]. The emission process is taken quantum mechanically; after emission, particles are treated macroscopically. The "crazy" observation appears in the following gedanken experiment (see Fig. 5). One considers, at a time of 20 fm∕c, an emission happening in the Bose-enhanced region with a probability ∝ 2. Orders of magnitude later, at a time of 1 m∕c, the situation is suddenly disturbed by a neutron at a suitable position, so that the π⁺ originating in the lower half, independent of its momentum Q_1 or Q_2, will be absorbed. The interference enhancement is gone, and the emission probability is now ∝ 1. Part of the emission probability has to be taken back. It means backward causation for a particular emission probability.
The ontological reality of the emission and its probability cannot be questioned. So in this exceptional situation, there is backward causation for real objects. The purpose of the Copenhagen interpretation was to avoid violations of causality. As it was not successful, one should abolish it. In a trade-off, one can then accept the ontological reality of wave functions, but not of gauge-dependent fields.
A critical ingredient in the argument is the assumption about the position of the transition from the quantum world to the macroscopic one (drawn as the dash-dotted line in the figures). As said, in particle physics, the transition is usually taken as process dependent, and the emission process itself is pictured as a measurement procedure.
One way to escape the argument is to postpone the transition to the end of the process, say to 11 m∕c. The problem is that there is an analogous astronomical Hanbury Brown-Twiss observation [15,32] where the possible change in the setup corresponding to the neutron insertion can be light-years away. The Copenhagen interpretation assumes that such a transition exists in a reasonable range; its exact position is left open. To keep something like the Copenhagen interpretation, one would need to develop a formalism where, at least for a year, most measurements in the star are somehow provisional. Also, one would need to introduce an arbitrary time scale for the final transition.
The presented gedanken experiment is a delayed-choice experiment. However, unlike Wheeler's [42], it involves not the value of a wave function, which might have no ontological reality, but the probability of a physical process. It is a quantum-statistical effect involving two identical particles. If they are always geometrically disconnected, the interference terms in the product of their creation and annihilation operators can be neglected, which allows for the usual description. If contact occurs, interference contributions appear, and the amplitude will be enhanced or reduced. The process considered with the amplitude can start long before the decision about allowing a contact is made. In this way, the probability of a physical process is affected in a backward-causal way.

Scenario with an Extended Final State
Rejecting the repaired Copenhagen interpretation, we argue for a more straightforward way out. Reconsider the situation with measurements. Two central questions arise.

What does the measurement have to do?
-Identify states originating in something like the "up" or "down" choice.
-Randomly select the contributions from one choice.
-Delete the deselected contributions.
-Renormalize the selected one to get a unit probability.
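The four steps can be sketched for a minimal two-state density matrix; this is our toy illustration of the generic projection-and-renormalization structure, not the paper's formalism (the state, projectors, and seed are assumptions):

```python
import numpy as np

# Minimal sketch of the four steps: identify the "up"/"down" choices,
# randomly select one, delete the deselected contributions, renormalize.
rng = np.random.default_rng(0)

# Hypothetical pure two-state system with coherences.
psi = np.array([0.6, 0.8j])
rho = np.outer(psi, psi.conj())

P_up = np.diag([1.0, 0.0])    # projector identifying the "up" choice
P_down = np.diag([0.0, 1.0])  # projector identifying the "down" choice

p_up = np.trace(P_up @ rho).real              # weight of the "up" branch
P = P_up if rng.random() < p_up else P_down   # random selection of a branch

rho_sel = P @ rho @ P                # delete the deselected contributions
rho_sel /= np.trace(rho_sel).real    # renormalize to unit probability

print(np.trace(rho_sel).real)  # -> 1.0
```

After the projection the off-diagonal coherences are gone as well, which is why the selected branch behaves classically.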
As there is backward causation, the time of the actual measurement is not fixed to be the time when the electron passes the furcation point or the setup.

When does the measurement have to act?
-Outside the quantum domain behind the decoherence process.
-Witnesses have to be around encoding the measurement results.
The survival time of witnesses is not fully appreciated. In truly "macroscopic" measurements, some witnesses are around practically forever.
To avoid the definition of limits, we assume a finite lifetime t_final of the universe. This assumption allows us to postpone measurements to this end of the universe. In this way, wave-function collapses are entirely avoided in the "physical" regions, where one just has quantum dynamics.
We write the measurement operator as M = M̃ ⋅ N, where M̃ is the projection part and N the normalization factor. The measurement can then be postponed: instead of applying M̃ at the measurement time, one applies the correspondingly evolved projection at t_final. To illustrate the situation, one can consider Schrödinger's cat (Fig. 6). If the cruel experiment is done in a perfectly enclosed box, all ergodically accessible states will be visited before the end t_final is reached. There is no possibility that specific witnesses can have survived. In this way, the final state cannot select a unique macroscopic pathway. Macroscopic dynamics is an approximation, and in the considered situation coexisting macroscopic states have to be accepted as a given.
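The postponement is, at its core, a conjugation of the projection part with the unitary evolution (our notation, with t the nominal measurement time):

```latex
U(t_{\mathrm{final}}, t)\,\tilde{M}_t\,|\psi(t)\rangle
  \;=\; \tilde{M}_{t_{\mathrm{final}}}\;U(t_{\mathrm{final}}, t)\,|\psi(t)\rangle,
\qquad
\tilde{M}_{t_{\mathrm{final}}} \;\equiv\;
  U(t_{\mathrm{final}}, t)\,\tilde{M}_t\,U^{\dagger}(t_{\mathrm{final}}, t).
```

The evolved projection $\tilde{M}_{t_{\mathrm{final}}}$ acts on the surviving witnesses; if no witnesses survive, as for the perfectly enclosed box, no such projection can single out a branch.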

How Is It in Reality?
Measurable radiofrequency fields emitted from the brain indicate whether the cat is alive. Usually, nobody talks about individual radiofrequency photons. They carry an unmeasurably small energy of something like 10^(−28) J.
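The quoted order of magnitude follows directly from E = hf; the frequency below is an assumed illustrative radiofrequency value, not one taken from the paper:

```python
# Order-of-magnitude check of the quoted single-photon energy, E = h*f.
h = 6.62607015e-34  # Planck constant in J*s (exact SI value)
f = 1.5e5           # assumed radiofrequency, Hz
E = h * f
print(E)  # on the order of 1e-28 J, far below any single-photon rf detector
```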
Some of them will escape the box, the house, and the ionosphere to the dark sky, eventually reaching the final state, at which point the measurement can, backward in time, select the macroscopic path with a living cat (Fig. 7) and deselect the one with a dead cat.
The exact value of the chosen scale t_final is not significant. Around t_final, our universe is thin and rather non-interacting, so the witness evolution between t_final and 1000·t_final is trivial. The scale choice discussed above is not avoided, but its value is now irrelevant.

The resulting effective basic rules:
-There are enough witnesses for every macroscopic decision, so that measurements at t_final can select/deselect it. In this way, the complete, unique macroscopic pathway is determined.
-Coexisting quantum pathways cannot be discerned and selected/deselected by a measurement at t_final, as not enough witnesses were produced.
Our concept of how QM works can be written more symmetrically with the definition below.

Definition of an effective final state density matrix:
With suitable boundary-state density matrices, one obtains an effective description. Each of the zillions of branchings of the macroscopic pathway corresponds to a measurement decision, which can, again and again, be accounted for in this way by a change of the effective final density matrix, finally yielding ρ̃_{f,f*}. The presented "two density matrices interpretation" (see also [33,49,50]) is the simplest way of fulfilling the requirements of the discussed gedanken experiment. Also, its derivation did not involve speculative assumptions. It should be useful to compare it with other interpretations discussed below.

Relationship to Everett's Quantum Mechanics
In Everett's QM, all measurement options stay existing in a multiverse. A random association to observers who have witnessed the same quantum decisions replaces the random physics decisions in measurements. Our universe within the multiverse is defined by this community of observers we associate with. Implicit is the assumption that observed universes can split, as shown in Fig. 8, but they never join. As in the two density matrices interpretation, an abundant existence of witnesses is therefore required.
To have our universe defined up to t_final, our community needs observers until that time. In principle, these observers have access to all quantum decisions. They can therefore determine a final density matrix consistent with all macroscopic decisions. This density matrix then allows a macroscopic description of our universe within the multiverse in a two-density-matrix formalism. The fate of the multiverse outside of our universe, shown in red in the figure, is then irrelevant.

Relationship to Two-State-Vector Quantum Mechanics
Let us begin with the argument for the dominant-state-vector approximation. It is not rigorous, as it requires a reasonably convergent expansion of the density matrix in non-degenerate state-vector products.
Without the normalization factor N, the effective final density matrix gets extremely small, i.e., something like ∼ 2^(−# of all binary decisions). Of course, in a more precise consideration, weights and non-binary branchings will have to be included. Expanding the matrix in such state-vector products,

ρ̃_f = Σ_i c_i |f_i⟩⟨f_i| ,

the tiny coefficients c_i are statistically independent, and it is practically impossible that they are of the same magnitude. Therefore the largest term should suffice, i.e.:

ρ̃_f ≈ c_dominant |f⟩⟨f| .

The simplification is also applied to the initial state. In this way, one obtains the Two-State-Vector description of Aharonov and collaborators [3,6,8]. For simplicity, we often adhere in the following to this Two-State-Vector description. The arguments can be transferred to the two-density-matrix description if the density matrices are constrained appropriately.
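The statistical independence argument can be illustrated numerically (our toy, with assumed sizes far below the "# of all binary decisions" of the text): coefficients built as products of many independent random factors spread over many orders of magnitude, so one term towers over the typical one.

```python
import numpy as np

# Toy version of the dominant-state argument: each coefficient is a
# product of 100 independent random factors in [1/2, 1], so the
# coefficients are spread over several binary orders of magnitude.
rng = np.random.default_rng(7)
n_terms, n_decisions = 1000, 100

# log2 of each coefficient: minus the sum of 100 uniform(0, 1) exponents.
log2_c = -rng.uniform(0.0, 1.0, size=(n_terms, n_decisions)).sum(axis=1)
c = 2.0 ** (log2_c - log2_c.max())   # rescale so the largest term is 1

spread = log2_c.std()                # ~ sqrt(100/12) ~ 2.9 binary digits
ratio = c.max() / np.median(c)       # largest vs. typical coefficient
print(spread, ratio)
```

With realistic numbers of decisions the spread grows like the square root of their count, so the chance of two comparable leading coefficients becomes negligible and the largest term alone suffices.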
To obtain the Aharonov-Bergman-Lebowitz equation [5], one can take all macroscopic measurements in the universe as accounted for in |f⟩, except for an additional measurement M. The Two-State-Vector description was carefully investigated over many decades, and no inconsistencies were found on the quantum side. However, the central question is: how can causal macroscopic dynamics follow from non-causal quantum dynamics?
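For reference, the Aharonov-Bergman-Lebowitz rule in its standard form gives the probability of outcome m of an intermediate projective measurement with projectors P_m between a pre-selected state |i⟩ and a post-selected state |f⟩ (unitary evolution operators between the three times suppressed for brevity):

```latex
P(m) \;=\;
\frac{\bigl|\langle f \,|\, P_m \,|\, i \rangle\bigr|^{2}}
     {\sum_{n} \bigl|\langle f \,|\, P_n \,|\, i \rangle\bigr|^{2}} .
```

When the post-selection is summed over a complete basis, the rule reduces to the usual Born probabilities, which is why the formalism is empirically indistinguishable on the quantum side.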

The Time-Ordered Causal Macroscopic Dynamics
The considered gedanken experiments involved exceptional situations. Typical macroscopic measurements will approximately average out enhancing and depleting phase effects. In [15], this was called "the correspondence transition rule". Hence the net effect of interference terms vanishes, the considered changes in the settings are irrelevant, and direct macroscopic backward causation is disallowed.
However, what happens on a basic level? Causal macroscopic dynamics involves a decision tree, shown in Fig. 9. A decision, e.g., at D_1, determines the future. How can a time-symmetric, non-causal theory underlie such a macroscopic causal decision tree with a time direction? To explain the proposed mechanism, one can start with a definition. The "Macroscopic State" {|q⟩} is defined as the sum/integral over all states macroscopically indistinguishable from the quantum state |q⟩. It includes all possible phases between different components and all unmeasurable individual low-frequency photons. Of course, the Macroscopic State inherits from the quantum state |q_i⟩ uncertainties about its exact position, momentum, etc. For a simpler argumentation, we will assume that the Lyapunov exponents allow us to ignore such uncertainties. In principle, there are no serious problems.
As said, the full initial and final quantum states allow one single macroscopic path, shown in red in Fig. 10. What happens if one replaces the initial and final quantum states with the corresponding Macroscopic States? Quantum decisions are often encoded in relative phases. With the choices available, the underlying QM now allows for many pathways consistent with the "macroscopic" initial and final states, yielding the situation depicted in Fig. 10 with black branching and merging dashed lines.
The Macroscopic States somehow live in macroscopic dynamics. In purely macroscopic dynamics, there would, of course, be one pathway from the initial to some final state. The splitting and joining in the figure is an effect of the underlying quantum dynamics. To avoid a contradiction to what is known in macroscopic dynamics, one has to assume that the splitting and joining in the figure involves cosmologically long time scales. Macroscopic dynamics is only an empirical approximation which can be violated at untested scales.
The central assumption concerns our position in the universe. It is indicated by the dotted line in Fig. 10. The source of the observed macroscopic causal time direction is the asymmetry of our position, i.e.: the past evolution is assumed to be too short to allow multiple pathways. With the known cosmic microwave background, the known distribution of galaxies, and the more or less known astrophysical mechanisms, the backward evolution is pretty much determined, at least up to the freeze-out. The hypothesis is that the macroscopic past could be determined in an essentially unambiguous way if all macroscopic details of the present universe were known. These include the macroscopic properties of all atoms in all the stars in all the galaxies.
The future evolution is assumed to be long enough to allow for multiple pathways. Allow for the anthropocentric picture in which decisions are usually considered. Driving on the highway, one can turn right to Dortmund or left to Frankfurt and then make a mess in Frankfurt, which will have obvious consequences afterward. That the fixed final macroscopic state at the end of the universe limits what can be done is of no practical concern.
In reality, the present and final boundary states are quantum states, which yield a uniquely determined macroscopic pathway. All decisions are encoded in the final state, which obviously cannot contain a time direction. That they happen at the bifurcation points denoted by "D" is an illusion faking the causal direction.
Problems with the Absolutely Deterministic Fixed Final State Model

The argumentation for a final state model is convincing, and there are no intrinsic paradoxes. But some aspects of the fixed final state model are problematic to agree to:

• Willful agents cannot exist! Within the considered framework, a willful agent would have to adjust the fixed final state at the end of the universe in an incalculable way. To avoid recalculating the universe, one has to drop the concept of willful agents, but this is hard to accept [35]. It is not just philosophical. Consider a seminar: without a willful chair, a speaker could go on forever.

The second problem is more on an aesthetic level.

• The fixed randomness within the final state! Not to disturb Born's Rule, the final state cannot bias quantum decisions. It has to be fixed in a random arrangement, which is uglier done within such a detached state than with the usual random decisions during measurement processes. Except for these decisions, the wave functions (or fields) evolve independently of the corresponding evolution of the complex-conjugate wave functions (or fields). The basis for the connecting decisions is that the final states fixing these random decisions are equal on both sides. There is no intrinsic, natural mechanism for the corresponding property of the final state density matrix.

(Fig. 12, future evolution: t_now − t_big bang ≪ t_final − t_now.)

The Matching State
There is a way out. So far, we have mainly considered the evolution of wave functions or fields. Physics depends on them and their conjugates. To allow for external manipulations, one can consider the quantum world and its conjugate separately, with distinct initial values, and then replace both fixed final states with a common, just matching one. An external agent lives in the macroscopic world. He can manipulate the wave functions or fields and their conjugates at a given time. The matching final state will then change by itself accordingly. No incalculable action of the agent is required.
To avoid arbitrary assumptions about the time and nature of the matching, we turn to a simple cosmological big bang/big crunch scenario. It allows for a straightforward implementation of the bidirectional concept. It is, however, not essential for the concept.

The Bidirectional Big Bang/Big Crunch Scenario
There are many exciting new observations in cosmology and astrophysics. Extrapolating observations, it is usually assumed that a rather, but not completely, homogeneous universe undergoes an accelerating expansion. Our argument for macroscopic causality required that the universe's total lifetime be much larger than its present age. In this way, the extrapolations of the present observations are not relevant.
The understanding of dark energy, or whatever drives the cosmos's dynamics, is not yet available [25,37]. The concept that eventually the anti-gravitating dark energy gets exhausted, leading to a big bang/big crunch universe, is at least appealing.
Of course, there are black holes, and the structure of the universe must be topologically intricate. The expectation is that these complications are not relevant for our epoch's basic understanding and that one can consider a simplest configuration where the total age of the considered universe is τ and both the expanding and the contracting phase last for τ∕2. The point of the maximum extension will be called the border.
The initial and final states are not CPT conjugates. (If universes are created in CPT-conjugate pairs [43], non-matching ones have to be chosen.) As above, all quantum decisions are attributed to the initial and final state. Their overlap is extremely tiny; it has to be again something like 2^(−# of all decisions). This relation (11) also holds for the overlap of (1) the state evolved from the initial state to just before the border and (2) the state revolved from the final state to just after the border. On each side, the evolved, resp. the revolved, state contains witnesses for all possible macroscopic pathways.
No "fine-tuning" is involved, as no big number is created dynamically. At the border, the extremely extended universe has only a tiny fraction of truly occupied non-vacuum states. So matching is extremely rare. Both strongly entangled evolved states should miss common entanglements simply for statistical reasons (see also [20]). Coexisting pathways involving the expanding and contracting phases are practically excluded.
For the border state, one can define something like a density matrix connecting the evolved incoming and outgoing states.
As the Hamiltonian describing the evolution is hermitian, the matrix at maximum extension is diagonalizable. With the dominant-state argument, its extreme smallness means that typically only a single component dominates, i.e., one can just approximate it by a single product of a border state and its conjugate. For the total evolution, this leaves two factors, one for the expanding and one for the contracting phase. No time arrow is accepted, so the expanding world is analogous to the contracting one. For both the "expanding" and the "contracting" phases, the border state is an effective final quantum state determining the macroscopic pathways in its neighborhood, as argued in Sect. 3. In an expanding universe, witnesses typically connect to the huge effective final state, and in the contracting universe, the situation is analogous. So the neighborhood can be assumed to cover much of the considered universe, including our epoch.
In this region, the common quantum border state has the consequence: The expanding and contracting worlds are macroscopically identical.
This result allows an obvious interpretation:

Surjection Hypothesis
To avoid strange partnerships, one has to drop the usually implied complex-conjugate part and postulate:

-The quantum states are defined in [0, τ].
-Macroscopic dynamics is taken to extend over [0, τ∕2].

In this way, the Born rule can be written as ρ(t) = ψ(t) ⋅ ψ(τ − t)^CPT. The proposition has several attractive consequences.

A Willful Agent Is Now Possible
At the macroscopic time t, corresponding to the quantum times t and τ − t, a manipulating agent can introduce unitary operators on both sides. In the macroscopic future [t, τ − t], the wave functions change, and a new border component will dominate, automatically reflecting the manipulation. No unusual action of the agent is required.
The manipulation of the agent does not introduce a fundamentally new time direction. The changed matching can, in principle, also affect the contributing wave functions in the macroscopic past. However, as t ≪ τ, the functions ψ(t′ < t) and ψ(t′ > τ − t) stay practically unchanged.

Stern-Gerlach Experiment
An agent can prepare a "Stern-Gerlach experiment", shown in Fig. 13. As the drift chambers create macroscopic traces with a large number of witnesses, mixed "up"/"down" contributions are excluded, leaving the red or yellow contributions. Statistically, one contribution will completely dominate. The choice reflects unknown properties of the available future path. The randomness disliked by Einstein found a fundamentally deterministic explanation.
As is well known, quantum randomness gets lost in the macroscopic world just by statistics, as large numbers (like Avogadro's) are involved. As there are no correlations between the considered ensemble and the future pathways, the effective randomness obtained suffices.
On average, both possibilities are equal, which has the consequence that the "Born rule" holds [47]. The square brackets on the right are no longer chosen because they have the required properties; they are now a direct consequence of the physical process.
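A toy model (our illustration, not the paper's derivation) shows how Born weights can emerge from selecting the branch with the larger random future overlap: if the border-state overlaps are complex-Gaussian, their squared magnitudes are exponentially distributed, and the branch with amplitude weight |a|² wins with probability exactly |a|² (for |a|² + |b|² = 1).

```python
import numpy as np

# Born-rule toy: two branches with assumed weights 0.36 / 0.64 compete via
# independent random future overlaps; exponential |overlap|^2 gives
# P(|a|^2 * X > |b|^2 * Y) = |a|^2 / (|a|^2 + |b|^2).
rng = np.random.default_rng(42)
a2, b2 = 0.36, 0.64          # assumed "up"/"down" branch weights
n = 200_000
X = rng.exponential(size=n)  # squared overlap of the "up" branch (toy)
Y = rng.exponential(size=n)  # squared overlap of the "down" branch (toy)
freq_up = np.mean(a2 * X > b2 * Y)
print(freq_up)  # close to 0.36
```

The identity P(aX > bY) = a/(a+b) for independent unit exponentials X, Y is elementary, so in this toy the selection frequency reproduces the squared-amplitude weights without any postulated randomness.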

Important Cosmological Consequences
In the cosmological development, there can be special situations or early periods where the remoteness of the final state does not allow a macroscopic description, and the needed difference between the initial bang state and the final crunch state will become important.
The possible absence of a macroscopic description demystifies paradoxes. In a closed box, Schrödinger's cat can be dead and alive. The same applies to the grandpa in a general relativity loop [18] used in arguments discrediting backward causation.
It also could affect the view of the early cosmological development. Before the QED freeze-out, the universe is heavily interacting, and it is to be expected that sooner or later there are no longer surviving witnesses to fix a unique macroscopic pathway and to eliminate macroscopic coexistence.
A macroscopic description of the earlier universe could be unacceptable. Even to use a unique macroscopic Hubble parameter H(t), as it is used in the Friedmann equations, might be questionable.

(Displaced relation (17): contributions ∝ 2^(−# decisions on paths I and I′) = 2^(−huge) and 2^(−# decisions on paths II and II′) = 2^(−huge′).)

Homogeneity of the Early Universe
The transition from a period without a macroscopic description to a macroscopic one requires special considerations. There is a simple observation about contributing pathways. Unusual components of the quantum phase will typically be deselected, and only components close to the average will collectively produce a significant contribution entering the macroscopic phase. In this way, a homogeneous contribution at the transition point is strongly favored. The initial big-bang state in our argument for macroscopic causality can, in this framework, be replaced by this initial homogeneous state. The basic initial-state/border-state asymmetry needed for the argument stays.
The universe is more homogeneous than expected from simple estimates [31]. It is usually attributed to a limited horizon caused by a rapid expansion of the universe due to inflation. The concept might offer a way to avoid the complicated requirements of inflation models.
Inflation models have, according to recent work of Chowdhury et al. [19], a serious fundamental problem within Copenhagen quantum mechanics. One needs to come from an initially coherent state to one allowing for temperature fluctuations. Quantum jumps would do the trick, but they are not possible in inflation models, as the universe is taken as a closed system without an external observational macroscopic entity.

Summary
Quantum statistical effects strongly suggest abolishing causality on the quantum side and finding arguments to effectively resurrect it in the macroscopic world.
In a universe with a finite lifetime t_final, sufficiently abundant witnesses make it possible to postpone all measurement decisions to t_final, where they can then be incorporated in an effective final density matrix. The resulting absolutely deterministic concept, with a fixed initial and a fixed final density matrix, is closely related to the Two-State-Vector quantum mechanics of Aharonov and coworkers and to a universe in the Everett multiverse inhabited by a final observer with whom our community in our particular universe associates.
As it stands, the concept is not acceptable. Free macroscopic agents are indispensable. A simple way to incorporate individuals with free will is to turn to a slightly modified model. The fields and their conjugates are taken to evolve independently, and the fixed final state on each side is replaced with a common, just matching one. For simplicity, and to avoid ad hoc assumptions about the matching, a big bang/big crunch cosmology is chosen with an expanding and a contracting quantum phase. A free agent then lives, like all macroscopic objects, with the wave function in the expanding part and the complex-conjugate one in the contracting part. Operators he is allowed to enter at his macroscopic time t on both sides, i.e., at the quantum times t and τ − t, will affect the quantum evolution in between, i.e., in his macroscopic future.
To conclude, we obtained a surjective interpretation that has no intrinsic paradoxes and allows for free agents. Unfortunately, it requires abandoning concepts many people are not willing to question.