1 Introduction

The interrelation of classical and quantum physics is treated in one respect too timidly, and we have advocated a new approach yielding a novel concept [13, 15, 17]. This paper aims to develop our basic argument considerably more thoroughly than previously done.

It is not meant as an exercise in finely nuanced words. Nevertheless, two definitions are necessary:

QUANTUM DYNAMICS \(=\) Quantum mechanics without measurements \(\in\) Relativistic quantum field theory

MACROSCOPIC DYNAMICS \(=\) Classical mechanics \(+\) classical electrodynamics \(+\) most of statistical mechanics \(+\) parts of general relativity

The first definition was coined by Sakurai [44]. Quantum dynamics represents quantum mechanics (QM) without measurements. By measurements, the von Neumann projections, i.e., the jumps, are meant. Decoherence [36] is part of quantum dynamics. Sakurai made the point that all the spectacular achievements of QM lie in the domain of quantum dynamics. Underlying quantum dynamics is, of course, relativistic quantum field theory. For the questions considered here, both are taken as a unit. The second definition is almost trivial; it is given in the second line above.

The two dynamical descriptions of the world differ in a central way. In macroscopic dynamics, there is a unique pathway. Ensembles are often used where a system is specified only in a limited way, but this just reflects ignorance. On a fundamental level, there is at each point in time one valid configuration.

This rule is absent in quantum dynamics. Here many distinct pathways can coexist. What is meant by distinct? Topologically, Feynman paths are distinct if they belong to different homotopy classes. Paths going through the upper and the lower slit of a two-slit experiment are distinct. Essentially, it is assumed that the Feynman paths in a homotopy class can be integrated out to a practical “real” pathway. For a more careful consideration of how real paths arise, we refer to [51].

The hard conclusion is: both world views are incompatible! This was recognized early on [12, 27]. Historically, the basic premise seems to have been that something was missing in the young QM and that one somehow had to repair it by a suitable amendment. An example of such an attempt is the de Broglie–Bohm guiding field theory [11, 24, 26]. Almost a century has passed, and much serious work has been done investigating all aspects (see e.g. [13, 21,22,23, 29, 35, 38, 41, 45, 46, 48, 53,54,55]). There are various proposed interpretations to solve the problem or at least to make the “incompatibility” acceptable. However, it is fair to say that this has not been successful. No interpretation is generally accepted.

Our basic concept to avoid the incompatibility will be to change not quantum dynamics but macroscopic dynamics. In the literature, various observations require changes of this kind. As outlined in a recent review by Wharton and Argaman [52], whatever one does on the quantum-theoretical side, aspects of macroscopic dynamics have to change, as they disagree with Bell-type experiments [9]. We advocate a more radical position and question everything we think we know about macroscopic dynamics. It will be taken to hold only approximately and only in our epoch of the universe.

On the other hand, quantum dynamics will be considered an exact theory of the whole universe. It is the only theory confirmed on a 16-digit level (for the QED anomalous moments [34]), and it is entirely reasonable to take it as a safe base. One task will then be to understand how something like causal macroscopic dynamics comes out of the unamended non-causal quantum dynamics.

In the next section, the basic argument will be presented. It contains no ad hoc assumptions. A straightforward consideration will then, in Sect. 4, lead to an absolutely deterministic concept. To allow for “free will”, a suitable modification with a bi-directional universe will be introduced in Sects. 5 and 6. A discussion of essential consequences follows.

2 Measurements

The traditional bridges between quantum dynamics and the macroscopic world are measurements. They are not simple projection operators:

$$\begin{aligned} \left( \begin{array}{cc} 1 & 0\\ 0 & 0 \end{array}\right) \cdot (\text{not relevant}) \end{aligned}$$

at furcation points. Essential is an effective decoherence setup. A simple generic setup is shown in Fig. 1. An electron with an “in the board” spin gets split in an inhomogeneous magnetic field. Its “up” resp. “down” component enters a drift chamber, where lots of photons of various frequencies are produced. This production is called the decoherence part of the measurement process. A few electrons are kicked off their atoms and collected. Suitable charge-coupled electronics flashes “up” resp. “down” on displays.

Fig. 1: Stern–Gerlach arrangement
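
To make the decoherence part explicit, a minimal textbook-style sketch (our notation, not taken from the figure): the photons radiated in the drift chamber act as an environment whose states for the two branches rapidly become orthogonal, so the electron's reduced density matrix loses its interference terms,

$$\begin{aligned} |\psi \rangle =\alpha \,|{\uparrow }\rangle |E_{\uparrow }\rangle +\beta \,|{\downarrow }\rangle |E_{\downarrow }\rangle ,\qquad \rho_{\text{electron}}=\text{Tr}_{E}\,|\psi \rangle \langle \psi |=|\alpha |^{2}|{\uparrow }\rangle \langle {\uparrow }|+|\beta |^{2}|{\downarrow }\rangle \langle {\downarrow }|+\left( \alpha \beta^{*}\langle E_{\downarrow }|E_{\uparrow }\rangle \,|{\uparrow }\rangle \langle {\downarrow }|+\text{h.c.}\right) \end{aligned}$$

with \(\langle E_{\downarrow }|E_{\uparrow }\rangle \rightarrow 0\) as more and more photons are emitted. Decoherence thus suppresses the interference terms, but both branches survive; selecting one of them is the separate step discussed next.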

Empirically, the (here just effective) macroscopic dynamics allows no coexisting pathways. So there has to be a decision leading, e.g., to Fig. 2. This decision is called the actual “measurement” M, involving a “jump” and a “collapse”.

Fig. 2: Stern–Gerlach measurement

What does this decision mean? Many authors see a violation of locality. In the framework of relativistic theory, this does not seem right.

Consider the needed part of Bohm’s version of the Einstein–Podolsky–Rosen experiment [10]. A spin-less ion emits two electrons and ends up in a spin-less ground state. Both electrons have to have opposite spins. If Bob measures the spin to be in the “up” direction, the electron coming to Alice will have a spin in the “down” direction, and Alice will measure “down”, and vice versa. If Bob measures the spin sideways, then, independent of his result, the electron coming to Alice will not know whether Alice will measure “up” or “down”. In this way, Bob’s decision changes the nature of the electron coming to Alice.

It is well known that Bob is a relatively shy one. So he will be at least twice as far from the excited ion as Alice. In some Lorentz system, Alice’s measurement will be in Bob’s past, and with his measurement, he influences the property of an electron in his past. That means backward causation, and what is violated is causality [7, 40]. It is not a trivial distinction:

$$\begin{aligned} \text{backward causality}\cup \text{forward causality}\Rightarrow \text{non-locality} \end{aligned}$$

but

$$\begin{aligned} \text{non-locality}\not \Rightarrow \text{backward light cone causality}. \end{aligned}$$

To give up causality is very serious and widely not accepted. A traditional defense is the Copenhagen interpretation. It denies ontological reality to the electron wave going to Alice. In this way, the statement about the electron wave function becomes meaningless. This opens up intensively discussed interpretations. Some physicists do not find it appealing. They want to know what is really going on and not just have a law to predict outcomes. Nevertheless, non-causality is hard to accept, and for the considered situations, the Copenhagen interpretation has to be considered the most reasonable choice. It was advocated by most physicists we admire.

However, there are quantum statistical effects [14,15,16,17] which, in our opinion, change the conclusion. This observation is our central point, which we have contemplated for many years. These effects are, unfortunately, rarely discussed. Field theoretical results do not involve von Neumann measurements, and even top field theorists tend to claim ignorance of questions involving jumps. In the quantum optics community, one encounters a feeling that problems with Schrödinger’s equation are challenging enough and that it is reasonable to postpone questions involving second quantization.

So it will not be easy to be convincing. There are several versions of the effect. Closest to our background [1] is a quantum statistical effect in high energy heavy-ion scattering. It is one of what Glauber called “known crazy” effects [30].

For non-experts, the description of high energy heavy-ion scattering usually involves a somewhat simplified picture mixing coordinate and momentum space. It assumes—not knowing the needed \(\pi N\) Hamiltonian—that both fast incoming, more or less round nuclei are contracted in the central Lorentz system to pancake-shaped objects. The actual scattering is then assumed to occur when the pancakes overlap in the narrow region shown in red in Fig. 3.

Fig. 3: Two emitted \(\pi^+\)'s

Lots of particles are produced, including, say, two \(\pi^{+}\) mesons with the momenta \(Q_{1}\) and \(Q_{2}\). We denote the amplitude of the pictured process as A(1, 2). As the mesons are bosons, the crossed contribution shown as dashed lines in the figure also has to be included, and the probability of the process is therefore:

$$\begin{aligned} \text{emission probability} &=\frac{1}{2}\,|A(1,2)+A(2,1)|^{2}\nonumber \\ &={\left\{ \begin{array}{ll} 2\cdot |A(1,2)|^{2} & \text{for }Q_{1}=Q_{2}\\ 1\cdot |A(1,2)|^{2} & \text{for }Q_{1}\ne Q_{2}\ \text{but }Q_{1}\sim Q_{2} \end{array}\right. } \end{aligned}$$
(1)

Obviously, for \(Q_{1}=Q_{2}\) both amplitudes are equal, yielding the factor of two on the right side. Close by, the phase changes rapidly. This eliminates the interference contribution, yielding the shown factor of one.

The resulting \(Q_{1}=Q_{2}\) enhancement is experimentally observed, as shown in Fig. 4. The chosen data are from the STAR collaboration [2]. \(Q_{inv}\) is the difference of the momenta in the center-of-mass system of the \(\pi^{+}\) mesons. The normalization of the two-particle spectrum \(C(Q_{inv})\) uses an estimate obtained by mixing similar events. Over the last 50 years, many dozens of large collaborations have seen the effect. The observation of the statistical enhancement is textbook level and beyond doubt [39].
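
The mechanism behind Eq. (1) can be made concrete in a minimal Monte Carlo sketch (our toy construction, not the actual STAR analysis: a one-dimensional Gaussian source with an assumed radius of \(5\,\text{fm}\) and plane-wave amplitudes):

```python
import numpy as np

rng = np.random.default_rng(0)
R = 5.0           # assumed Gaussian source radius in fm
n_pairs = 20000   # emission-point pairs to average over

def correlation(q):
    """Monte Carlo estimate of C(q) for a pion-pair momentum difference q."""
    x1 = rng.normal(0.0, R, n_pairs)   # emission points of the first pion
    x2 = rng.normal(0.0, R, n_pairs)   # emission points of the second pion
    # with plane-wave amplitudes only the phase of the crossed term A(2,1)
    # relative to the direct term A(1,2) matters:
    crossed = np.exp(1j * q * (x1 - x2))
    # Eq. (1): |A(1,2) + A(2,1)|^2 / 2, normalized to the no-interference case
    return np.mean(np.abs(1.0 + crossed) ** 2) / 2.0

for q in (0.0, 0.05, 0.1, 0.3):        # momentum differences in fm^-1
    print(f"q = {q:4.2f} fm^-1   C(q) = {correlation(q):4.2f}")
```

For this toy source the printed values follow the analytic expectation \(C(q)=1+e^{-q^{2}R^{2}}\): an enhancement of two at \(Q_{1}=Q_{2}\) that disappears once \(qR\gg 1\), the shape seen in Fig. 4.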

Fig. 4: The statistical enhancement

For the considered central scattering, the emission area’s height reflects the uncontracted size of the nuclei. In contrast, the \(\pi^+\)-emission region is usually associated with individual nucleons determining its size. Therefore, one can select events for which one \(\pi^+\) originates in the upper and one in the lower half. The particle emission is generally assumed to take less than \(10\,\text{fm}/c\) [4, 28]. The emission process is taken quantum mechanically; after emission, the particles are treated macroscopically.

The “crazy” observation appears in the following gedanken experiment (see Fig. 5). At a time of \(20\,\text{fm}/c\), one considers an emission happening in the Bose-enhanced region with a probability \(\propto 2\). Orders of magnitude later, at a time of \(1\,\text{m}/c\), the situation is suddenly disturbed by a neutron at a suitable position, so that the \(\pi^+\) originating in the lower half, independent of its momentum \(Q_1\) or \(Q_2\), will be absorbed. The interference enhancement is gone, and the emission probability is now \(\propto 1\). At times the emission has to be taken back. This means backward causation for a particular emission probability.

Fig. 5: Crazy gedanken experiment

The ontological reality of the emission and its probability cannot be questioned. So in this exceptional situation, there is backward causation for real objects. The purpose of the Copenhagen interpretation was to avoid violations of causality. As it was not successful, one should abolish it. In a trade-off, one can then accept the ontological reality of wave functions, though not of gauge-dependent fields.

A critical ingredient in the argument is the assumption about the position of the transition from the quantum world to the macroscopic one (drawn as the dash-dotted line in the figures). As said, in particle physics, the transition is usually taken as process dependent, and the emission process itself is pictured as a measurement procedure.

One way to escape the argument is to postpone the transition to the end of the process, say to \(11\,\text{m}/c\). The problem is that there is an analogous astronomical Hanbury Brown–Twiss observation [15, 32], where the possible change in the setup corresponding to the neutron insertion can be light-years away. The Copenhagen interpretation assumes that such a transition exists in a reasonable range. Its exact position is not specified. However, a year is far outside the expectation of the Copenhagen interpretation, closing this escape.

To keep something like the Copenhagen interpretation, one would need to develop a formalism in which, at least for a year, most measurements in the star are somehow provisional. Also, one would need to introduce an arbitrary time scale for the final transition.

The presented gedanken experiment is a delayed-choice experiment. However, unlike Wheeler’s [42], it involves not the value of a wave function—which might have no ontological reality—but the probability of a physical process. It is a quantum-statistical effect involving two identical particles. If they are always geometrically disconnected, the following approximation holds for the product of their creation and annihilation operators

$$\begin{aligned} a_1 a_2 a_2^+ a_1^+ = (a_1 a_1^+) (a_2 a_2^+)\ . \end{aligned}$$
(2)

It allows for the usual description. If contact occurs, interference contributions appear, and the amplitude will be enhanced or reduced. The process considered with the amplitude can start long before the decision about allowing a contact is made. In this way, the probability of a physical process is affected in a backward-causal way.
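
A minimal check of when the approximation (2) fails (our notation, with \([a_{i},a_{j}^{+}]=\langle i|j\rangle\) for possibly non-orthogonal single-particle modes): the vacuum expectation value picks up an overlap term,

$$\begin{aligned} \langle 0|\,a_{1}a_{2}\,a_{2}^{+}a_{1}^{+}\,|0\rangle =1+|\langle 1|2\rangle |^{2} . \end{aligned}$$

For geometrically disconnected modes \(\langle 1|2\rangle =0\) and the product factorizes as in Eq. (2); for \(Q_{1}=Q_{2}\) the overlap approaches one, reproducing the factor of two in Eq. (1).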

3 Scenario with an Extended Final State

Rejecting the repaired Copenhagen interpretation, we argue for a more straightforward way out. Reconsider the situation with measurements. Two central questions are:

What does the measurement have to do?

  • Identify states originating in something like the “up” or “down” choice.

  • Randomly select the contributions from one choice.

  • Delete the deselected contributions.

  • Renormalize the selected one to get a unit probability (a minimal toy sketch follows this list).
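
In pseudo-algorithmic form, the four steps read as follows (our toy script; the labels and amplitudes are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(3)

# toy state: amplitudes over basis states labeled by their origin
labels = np.array(["up", "up", "down", "down"])
psi = np.array([0.5, 0.5, 0.5, 0.5], dtype=complex)   # illustrative amplitudes

# 1. identify the states originating in one choice
p_up = np.sum(np.abs(psi[labels == "up"]) ** 2)

# 2. randomly select one choice with its Born weight
choice = "up" if rng.random() < p_up else "down"

# 3. delete the deselected contributions
psi[labels != choice] = 0.0

# 4. renormalize the selected contribution to unit probability
psi /= np.linalg.norm(psi)
print(choice, psi)
```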

As there is backward causation, the time of the actual measurement is not fixed to be the time when the electron passes the furcation point or the setup.

When does the measurement have to act?

  • Outside the quantum domain behind the decoherence process.

  • Witnesses have to be around encoding the measurement results.

The survival time of witnesses is not fully appreciated. In truly “macroscopic” measurements, some witnesses are around practically forever.

To avoid the definition of limits, we assume a finite lifetime \(\tau_{\text{final}}\) of the universe. This assumption allows us to postpone measurements to this end of the universe. In this way, wave function collapses are entirely avoided in the “physical” regions where one just has quantum dynamics.

We define just the projection part of the measurement operator \(M=\mathcal{M}\cdot N\), where N is the normalization factor, as:

$$\begin{aligned} \mathcal{M}_{up}(t)=\sum_{q_i\ \text{originating in ``up''}}|q_i\rangle \langle q_i| \end{aligned}$$

The postponement can then be written as:

$$\begin{aligned}\langle i|\, U(t-t_{i})\,\mathcal{M}_{up}(t)\, U(\tau_{\text{final}}-t)\,=\langle i|\, U(\tau_{\text{final}}-t_{i})\,\mathcal{M}_{up-evolved}'(\tau_{\text{final}}) \end{aligned}$$
(3)

where \(\mathcal{M}_{up}(t)\) is effectively replaced by \(\mathcal{M}_{up-evolved}'(\tau_{\text{final}})\).
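
Written out (a one-line check, using \(U(a)U(b)=U(a+b)\) and \(U(-a)=U^{\dagger }(a)\)), the postponed operator is just the Heisenberg-evolved projector,

$$\begin{aligned} \mathcal{M}_{up-evolved}'(\tau_{\text{final}})=U^{\dagger }(\tau_{\text{final}}-t)\,\mathcal{M}_{up}(t)\,U(\tau_{\text{final}}-t) , \end{aligned}$$

which remains a projector, since \(U^{\dagger }\mathcal{M}U\cdot U^{\dagger }\mathcal{M}U=U^{\dagger }\mathcal{M}^{2}U=U^{\dagger }\mathcal{M}U\).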

To illustrate the situation, one can consider Schrödinger’s cat (Fig. 6). If the cruel experiment is done in a perfectly enclosed box, all ergodically accessible states will be visited before the end \(\tau_\text{final}\) is reached. There is no possibility that specific witnesses can have survived. In this way, the final state cannot select a unique macroscopic pathway. Macroscopic dynamics is an approximation, and in the considered situation, coexisting macroscopic states have to be taken as a given.

Fig. 6: Completely enclosed

3.1 How Is It in Reality?

Measurable radiofrequency fields emitted from the brain indicate whether the cat is alive. Usually, nobody talks about individual radiofrequency photons. They carry an unmeasurably small energy of something like \(10^{-28}\,\text{J}\).

Some of them will escape the box, the house, and the ionosphere into the dark sky, eventually reaching the final state, at which point a measurement can, backward in time, select the macroscopic path with a living cat (Fig. 7) and deselect the one with a dead cat.

Fig. 7: Real box

The exact value of the chosen scale \(\tau_{\text{final}}\) is not significant. Around \(\tau_{\text{final}}\), our universe is thin and rather non-interacting, so the witness evolution between \(\tau_{\text{final}}\) and \(1000\,\tau_{\text{final}}\) is trivial. A scale choice, as discussed above, is not avoided, but now its value is irrelevant.


The resulting effective basic rules:

  • There are enough witnesses for every macroscopic decision so that measurements at \(\tau_{\text{final}}\) can select/deselect it. In this way, the complete, unique macroscopic pathway is determined.

  • Coexisting quantum pathways cannot be discerned and selected/deselected by a measurement at \(\tau_{\text{final}}\) as not enough witnesses were produced.

Our concept of how QM works can be written more symmetrically with the definition below.


Definition of an effective final state density matrix:

With suitable boundary-state density matrices, one obtains:

$$\begin{aligned} \text{probability}{}_{\mathcal{M}}=\frac{Tr(\rho_{i^*,i}\, U(\tau_{f}-\tau_{i})\,\mathcal{M}'\,\rho_{f,f^*}\,\mathcal{M}'\, U^{*}(\tau_{f^*}-\tau_{i^*}))}{Tr(\rho_{i^*,i}\, U(\tau_{f}-\tau_{i})\,\rho_{f,f^*}\, U^{*}(\tau_{f^*}-\tau_{i^*}))} \end{aligned}$$
(4)

Defining \(\widetilde{\rho_{f,f^{*}}}=\mathcal{M}'\,\rho_{f,f^{*}}\,\mathcal{M}'\), it simplifies. Each of the zillions of branchings of the macroscopic pathway corresponds to a measurement decision, which can, again and again, be accounted for in this way by a change of the effective final density matrix, finally yielding \(\widetilde{\widetilde{\rho_{f,f^*}}}\):

$$\begin{aligned} \text{probability}{}_{\mathcal{M}_1\cdots \mathcal{M}_{zillion}}=\frac{Tr(\rho_{i^*,i}\, U(\tau_{f}-\tau_{i})\,\widetilde{ \widetilde{\rho_{f,f^*}}}\, U^{*}(\tau_{f^*}-\tau_{i^*}))}{Tr(\rho_{i^*,i}\, U(\tau_{f}-\tau_{i})\,\rho_{f,f^*}\, U^{*}(\tau_{f^*}-\tau_{i^*}))} \end{aligned}$$
(5)
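
A minimal qubit sketch of Eqs. (4) and (5) (all matrices are our illustrative choices; the conjugate evolution \(U^{*}\) is implemented as the adjoint \(U^{\dagger}\)):

```python
import numpy as np

# illustrative qubit example (all numbers are our assumptions)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
theta = 0.7
U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * sx   # U = exp(-i*theta*sigma_x)

rho_i = np.array([[1, 0], [0, 0]], dtype=complex)            # initial density matrix
rho_f = np.array([[0.5, 0.2], [0.2, 0.5]], dtype=complex)    # assumed final density matrix
M = np.array([[1, 0], [0, 0]], dtype=complex)                # projector M' ("up" decision)

Ud = U.conj().T
den = np.trace(rho_i @ U @ rho_f @ Ud)

# Eq. (4): probability of the "up" decision
p_eq4 = np.trace(rho_i @ U @ M @ rho_f @ M @ Ud) / den

# Eq. (5): absorb the measurement into an effective final density matrix
rho_f_tilde = M @ rho_f @ M
p_eq5 = np.trace(rho_i @ U @ rho_f_tilde @ Ud) / den

print(p_eq4.real, p_eq5.real)   # identical values
```

The two printed numbers agree: absorbing \(\mathcal{M}'\) into \(\widetilde{\rho_{f,f^*}}\) changes nothing, which is what allows the zillions of decisions to be collected into a single effective final density matrix.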

The presented “two density matrices interpretation” (see also [33, 49, 50]) is the simplest way of fulfilling the requirements of the discussed gedanken experiment. Also, its derivation did not involve speculative assumptions. It should be useful to compare it with other interpretations discussed below.

3.2 Relationship to Everett’s Quantum Mechanics

In Everett’s QM, all measurement options remain in existence in a multiverse. A random association to observers who have witnessed the same quantum decisions replaces the random physics decisions in measurements. Our universe within the multiverse is defined by this community of observers we associate with.

Implicit is the assumption that observed universes can split, as shown in Fig. 8, but they never join. As in the two density matrices interpretation, an abundant existence of witnesses is therefore required.

Fig. 8: Everett's tree

To have our universe defined up to \(\tau_{\text{final}}\), our community needs observers until that time. In principle, these observers have access to all quantum decisions. They can therefore determine a final density matrix consistent with all macroscopic decisions. This density matrix then allows one to describe our universe in the multiverse macroscopically in a two density matrix formalism. The fate of the multiverse outside of our universe, shown in red in the figure, is then irrelevant.

3.3 Relationship to Two-State-Vector Quantum Mechanics

Let us begin with the argument for the dominant state vector approximation. It is not rigorous as it requires a reasonably convergent expansion of the density matrix in non-degenerate state-vector products.

Without the normalization factor N, the effective final density matrix gets extremely small, i.e., something like \(\sim 2^{-\#\,\text{of all binary decisions}}\). Of course, in a more precise consideration, weights and non-binary branchings will have to be included. Expanding the matrix:

$$\begin{aligned} \widetilde{\widetilde{\rho_{f,f^*}}} = c_{1}\cdot |f_{1}\rangle \langle f_{1}|+c_{2}\cdot |f_{2}\rangle \langle f_{2}|+c_{3}\cdot |f_{3}\rangle \langle f_{3}|\cdots \end{aligned}$$
(6)

The tiny coefficients are statistically independent, and it is practically impossible that they are of the same magnitude. Therefore the largest term should suffice, i.e.:

$$\begin{aligned} \widetilde{\widetilde{\rho_{f,f^*}}}\approx c_{1}\cdot |\, f_{1}\rangle \langle f_{1}\,| . \end{aligned}$$
(7)
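
Why the largest term should dominate can be illustrated numerically (our toy model: each \(\log_{2}c_{k}\) is a sum over many independent decision factors, modeled as a Gaussian with spread \(\sqrt{\#\,\text{decisions}}\)):

```python
import numpy as np

rng = np.random.default_rng(1)
n_terms, n_decisions = 1000, 10**4    # assumed sizes, for intuition only

# log2 of each coefficient: a random walk around -n_decisions
log2_c = -n_decisions + rng.normal(0.0, np.sqrt(n_decisions), n_terms)

top, second = np.sort(log2_c)[::-1][:2]
print(f"largest/second-largest coefficient ~ 2^{top - second:.0f}")
# typically 2^(tens): keeping only the largest term, Eq. (7), is justified
```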

The simplification is also applied to the initial state

$$\begin{aligned} \rho_{i,i^{*}}\,=|i\rangle \langle i| . \end{aligned}$$
(8)

In this way, one obtains the Two-State-Vector description of Aharonov and collaborators [3, 6, 8]. For simplicity, we will in the following often adhere to this Two-State-Vector description. The arguments can be transferred to the two-density-matrix description if the density matrices are constrained appropriately.

To obtain the Aharonov–Bergman–Lebowitz equation [5], one can take all macroscopic measurements in the universe as accounted for in \(|f\rangle\) except for an additional measurement \(\mathcal{M}\):

$$\begin{aligned} \text{probability}{}_{\mathcal{M}}=\frac{|\left[\langle i|U(\tau -\tau_{i})\,\mathcal{M}\, U(\tau_{f}-\tau )|f\rangle \right] |^{2}}{|\left[ \langle i|U(\tau_{f}-\tau_{i})|f\rangle \right] |^{2}}. \end{aligned}$$
(9)
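
A standard worked example of Eq. (9) for a spin 1/2 with trivial evolution \(U=1\) (our illustrative choice of states): pre-select \(|i\rangle =|{\uparrow_{x}}\rangle\), post-select \(|f\rangle =|{\uparrow_{y}}\rangle\), and take \(\mathcal{M}=P_{\uparrow_{z}}\). Then

$$\begin{aligned} \text{probability}_{\uparrow_{z}}=\frac{|\langle i|\,P_{\uparrow_{z}}\,|f\rangle |^{2}}{|\langle i|f\rangle |^{2}}=\frac{1/4}{1/2}=\frac{1}{2} , \end{aligned}$$

while post-selecting \(|f\rangle =|{\uparrow_{z}}\rangle\) instead gives \((1/2)/(1/2)=1\): a suitable final state can completely fix an intermediate outcome.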

The Two-State-Vector description was carefully investigated over many decades, and no inconsistencies were found on the quantum side. However, the central question is, how can causal macroscopic dynamics follow from non-causal quantum dynamics?

4 The Time-Ordered Causal Macroscopic Dynamics

Fig. 9: Decision tree

The considered gedanken experiments involved exceptional situations. Typical macroscopic measurements will approximately average out enhancing and depleting phase effects. In [15], this was called “the correspondence transition rule”. Hence the net effect of interference terms vanishes, the considered changes in the settings are irrelevant, and direct macroscopic backward causation is disallowed.

However, what happens on a basic level? Causal macroscopic dynamics involves a decision tree shown in Fig. 9. A decision, e.g., at \(D_{1}\), determines the future. How can a time-symmetric non-causal theory underlie such a macroscopic causal decision tree with a time direction?

To explain the proposed mechanism, one can start with a definition. The “Macroscopic State” \(\{|q\rangle \}\) is defined as the sum/integral over all states macroscopically indistinguishable from the quantum state \(|q\rangle\):

$$\begin{aligned} \left\{ |q\rangle \right\} =\sum_{\text{all states macroscopically consistent with } |q\rangle }|q_{i}\rangle \end{aligned}$$
(10)

It includes all possible phases between different components and all unmeasurable individual low-frequency photons. Of course, the Macroscopic State inherits from the quantum state \(|q_i\rangle\) uncertainties about its exact position, momentum, etc. For a simpler argumentation, we will assume that the Lyapunov exponents allow us to ignore such uncertainties. In principle, there are no serious problems.

Fig. 10: Macroscopic pathways

As said, the full initial and final quantum states allow one single macroscopic path, shown in red in Fig. 10. What happens if one replaces the initial and final quantum states with the corresponding Macroscopic States? Quantum decisions are often encoded in relative phases. With choices available, the underlying QM now allows for many pathways consistent with the “macroscopic” initial and final states, yielding a situation depicted in Fig. 10 with black branching and merging dashed lines.

The Macroscopic States somehow live in macroscopic dynamics. In purely macroscopic dynamics, there would, of course, be one pathway from the initial to some final state. The splitting and joining in the figure is an effect of the underlying quantum dynamics. To avoid a contradiction with what is known in macroscopic dynamics, one has to assume that the splitting and joining in the figure involve cosmologically long time scales. Macroscopic dynamics is only an empirical approximation, which can be violated at untested scales.

Fig. 11: Past evolution

The central assumption is our position in the universe. It is indicated by the dotted line in Fig. 10. The source of the observed macroscopic causal time direction is the asymmetry of our position, i.e.:

$$\begin{aligned} \left( \tau_{\text{now}}-\tau_{\text{big bang}}\right) \ll \left( \tau_{\text{final}}-\tau_{\text{now}}\right) . \end{aligned}$$

Figures 11 and 12 consider the resulting situation for both directions.

The past evolution is assumed to be too short to allow multiple pathways. With the known cosmic microwave background, the known distribution of galaxies, and the more or less known astrophysical mechanisms, the backward evolution is pretty much determined, at least up to the freeze-out. The hypothesis is that the macroscopic past could be determined in an essentially unambiguous way if all macroscopic details of the present universe were known. These include the macroscopic properties of all atoms in all the stars in all the galaxies.

Fig. 12: Future evolution

The future period is assumed to be long enough to allow for multiple pathways. Allow for the anthropocentric picture in which decisions are usually considered. Driving on the highway, one can turn right to Dortmund or left to Frankfurt and then make a mess in Frankfurt, which will have obvious consequences afterward. That the fixed final macroscopic state at the end of the universe limits what can be done is of no practical concern.

In reality, the present and final boundary states are quantum states which yield a uniquely determined macroscopic pathway. All decisions are encoded in the final state, which obviously cannot contain a time direction. That they happen at the bifurcation points denoted by “D” is an illusion faking the causal direction.

Problems with the absolutely deterministic fixed final state model

The argumentation for a final state model is convincing, and there are no intrinsic paradoxes. But some aspects of the fixed final state model are hard to accept:

  • Willful agents cannot exist! Within the considered framework, a willful agent would have to adjust the fixed final state at the end of the universe in an incalculable way. To avoid recalculating the universe, one has to drop the concept of willful agents, but this is hard to accept [35]. It is not just philosophical. Consider a seminar: without a willful chair, a speaker could go on forever. The second problem is more on an aesthetic level.

  • The fixed randomness within the final state! Not to disturb Born’s rule, the final state cannot bias quantum decisions. It has to be fixed in a random arrangement, which is uglier when done within such a detached state than within the usual random decisions during measurement processes. Except for these decisions, the wave functions (or fields) evolve independently of the corresponding evolution of the complex conjugate wave functions (or fields). The basis for the connecting decisions is that the final states fixing these random decisions are equal on both sides. There is no intrinsic, natural mechanism for the corresponding property of the final state density matrix.

5 The Matching State

There is a way out. So far, we have mainly considered the evolution of wave functions or fields. Physics depends on them and on their conjugates. To allow for external manipulations, one can consider the quantum world and its conjugate separately, with distinct initial values, and then replace both fixed final states with a common, merely matching one.

An external agent lives in the macroscopic world. He can manipulate the wave functions or fields and their conjugates at a given time. The matching final state will then change by itself accordingly. No incalculable action of the agent is required.

To avoid arbitrary assumptions about the time and nature of the matching, we turn to a simple cosmological big bang/big crunch scenario. It allows for a straightforward implementation of the bidirectional concept. It is, however, not essential for the concept.

6 The Bidirectional Big Bang/Big Crunch Scenario

There are many exciting new observations in cosmology and astrophysics. Extrapolating the observations, it is usually assumed that a rather, but not completely, homogeneous universe undergoes an accelerating expansion. Our argument for macroscopic causality required that the universe’s total lifetime be much larger than its present age. In this way, the extrapolations of the present observations are not relevant.

An understanding of dark energy, or whatever drives the cosmos’ dynamics, is not yet available [25, 37]. The concept that eventually the anti-gravitating dark energy gets exhausted, leading to a big bang/big crunch universe, is at least appealing.

Of course, there are black holes, and the structure of the universe must be topologically intricate. The expectation is that these complications are not relevant for a basic understanding of our epoch and that one can consider a simplest configuration where the total age of the considered universe is \(\tau\) and both the expanding and the contracting phase last for \(\tau /2\). The point of maximum extension will be called the border.

The initial and final states are not CPT conjugates. (If universes are created in CPT conjugate pairs [43], non-matching ones have to be chosen). As above, all quantum decisions are attributed to the initial and final state. Their overlap:

$$\begin{aligned} \langle \text{bang}\,|\,\text{crunch}\rangle = \left( \begin{array}{c} \text{extremely}\\ \text{tiny} \end{array}\right) \end{aligned}$$
(11)

has to be again something like \(2^{-\#\,\text{all decisions}}\).

This relation (11) also holds for the overlap of (1) the state evolved from the initial state to just before the border and (2) the state revolved from the final state to just after the border. On each side, the evolved, resp. the revolved, state contains witnesses for all possible macroscopic pathways.

No “fine-tuning” is involved as no big number is created dynamically. At the border, the extremely extended universe has only a tiny fraction of truly occupied non-vacuum states. So matching is extremely rare. Both strongly entangled evolved states should miss common entanglements simply for statistical reasons (see also [20]). Coexisting pathways involving the expanding and contracting phases are practically excluded.

For the border state, one can define something like a density matrix connecting the evolved incoming and outgoing states:

$$\begin{aligned} \rho_{\text{max. extend}}=\sum_{i,j}\rho (i,j)\,|\text{max. extend}(i)\rangle \langle \text{max. extend}(j)| \end{aligned}$$
(12)

As the Hamiltonian describing the evolution is hermitian, the matrix \(\rho_{\text{max. extend}}\) is diagonalizable. With the dominant state argument, its extreme smallness means that typically only a single component dominates, i.e., one can just approximate it as:

$$\begin{aligned} \rho_{\text{max. extend}}\sim |\text{border}\rangle \langle \text{border}|. \end{aligned}$$
(13)

For the total evolution, it leaves two factors:

$$\begin{aligned}\langle \text{bang}|\, U\,|\text{border}\rangle \otimes \langle \text{border}|\, U\,|\text{crunch}\rangle \end{aligned}$$
(14)

No time arrow is built in, so the expanding world is analogous to the contracting one. For both the “expanding” and the “contracting” phase, the border state is an effective final quantum state determining the macroscopic pathways in its neighborhood, as argued in Sect. 3. In an expanding universe, witnesses typically connect to the huge effective final state, and in the contracting universe, the situation is analogous. So the neighborhood can be assumed to cover much of the considered universe, including our epoch.

In this region, the common quantum border state has the consequence:

The expanding and contracting worlds are macroscopically identical.

This result allows an obvious interpretation:

6.1 Surjection Hypothesis

To avoid strange partnerships, one has to drop the usually implied complex conjugate part and postulate:

  • The quantum states are defined in \([0,\tau ]\).

  • Macroscopic dynamics is taken to extend over \([0,\tau /2]\).

Macroscopic objects (like us) then live

  • with their wave function \(\psi (t)\) in the “expanding” phase \([0,\tau /2]\),

  • with their conjugate one \(\psi (\tau -t)^{CPT}\) in the “contracting” phase \([\tau /2,\tau ]\).

In this way, the Born rule can be written as \(\rho (t)=\psi (t)\cdot \psi (\tau -t)^{CPT}\). The proposition has several attractive consequences.

6.2 A Willful Agent is Now Possible

At the macroscopic time t, corresponding to the quantum times t and \(\tau -t\), a manipulating agent can introduce unitary operators:

$$\begin{aligned} \psi (t)&\longmapsto \widetilde{\psi }(t+\epsilon )=\text{Operator}[\psi (t)]\nonumber \\ \psi (\tau -t)&\longmapsto \widetilde{\psi }(\tau -t-\epsilon )= \text{Operator}^\text{CPT}[\psi (\tau -t)] \end{aligned}$$
(15)

In the macroscopic future \([t,\tau -t]\) the wave functions change, and a new border component will dominate:

$$\begin{aligned} \psi (\text{border})\longmapsto \widetilde{\psi }(\text{border}) \end{aligned}$$
(16)

automatically reflecting the manipulation. No unusual action of the agent is required.

The manipulation of the agent does not introduce a fundamentally new time direction. The changed matching can, in principle, affect the contributing wave functions also in the macroscopic past. However, as \(t\ll \tau\) the functions \(\psi (t'<t)\) and \(\psi (t'>\tau -t)\) stay practically unchanged.

6.3 Stern–Gerlach Experiment

An agent can prepare a “Stern–Gerlach experiment” shown in Fig. 13.

Fig. 13: Bidirectional Stern–Gerlach measurement

As the drift chambers create macroscopic traces with a large number of witnesses, mixed “up”/“down” contributions are excluded, leaving the red or yellow contributions.

One can now compare the red and the yellow contribution:

$$\begin{aligned} \text{contributions} \propto {\left\{ \begin{array}{l} 2^{-\,\text{decisions on paths I and I}'} = 2^{-huge} \\ 2^{-\,\text{decisions on paths II and II}'} = 2^{-huge'} \end{array}\right. } \end{aligned}$$
(17)

Statistically, one contribution will completely dominate. The choice reflects unknown properties of the available future path. The randomness disliked by Einstein thus finds a fundamentally deterministic explanation.

As is well known, quantum randomness gets lost in the macroscopic world just by statistics, as large numbers (like Avogadro’s) are involved. As there are no correlations between the considered ensemble and the future pathways, the effective randomness obtained suffices.
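
A schematic toy simulation (our construction, only to illustrate Eq. (17) and the equal-probability statement below) models huge and huge′ as independent sums over many unknown future-path decisions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs, n_decisions = 10**5, 10**4    # toy sizes (our assumption)

# log2 of the two contributions in Eq. (17): independent sums over the
# unknown decisions along the future paths I,I' and II,II'
huge1 = rng.normal(n_decisions, np.sqrt(n_decisions), n_runs)
huge2 = rng.normal(n_decisions, np.sqrt(n_decisions), n_runs)

print("P(huge > huge') =", (huge1 > huge2).mean())                    # ~ 0.5
print("median dominance factor ~ 2^%.0f" % np.median(np.abs(huge1 - huge2)))
# one branch wins by an astronomical factor, yet which one wins is a fair coin
```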

On average, both possibilities are equal, i.e.:

$$\begin{aligned} \text{probability}\left( huge>huge'\right) =\text{probability}\left( huge<huge'\right) \end{aligned}$$
(18)

which has the consequence:

$$\begin{aligned} \text{prob.}\left[ e_{\,\uparrow }\,\right] &= \left( \begin{array}{c} \text{expanding}\\ \text{component} \end{array}\right) \cdot \left( \begin{array}{c} \text{contracting}\\ \text{component} \end{array}\right) = \left|\langle e_{\otimes }\,|\, e_{\,\uparrow }\,\rangle \right|^{2}\\ \text{prob.}\left[ e_{\,\downarrow }\,\right] &= \left( \begin{array}{c} \text{expanding}\\ \text{component} \end{array}\right) \cdot \left( \begin{array}{c} \text{contracting}\\ \text{component} \end{array}\right) = \left| \langle e_{\otimes }\,|\, e_{\,\downarrow }\,\rangle \right|^{2} \end{aligned}$$
(19)

It means the “Born rule” holds [47]. The squared brackets on the right are no longer chosen because they have the required properties; they are now a direct consequence of the physical process.

7 Important Cosmological Consequences

In the cosmological development, there can be special situations or early periods where the remoteness of the final state does not allow a macroscopic description and where the needed difference between the initial bang state and the final crunch state becomes important.

The possible absence of a macroscopic description demystifies paradoxes. In a closed box, Schrödinger’s cat can be dead and alive. The same applies to the grandpa in a general relativity loop [18] used in arguments discrediting backward causation.

It also could affect the view of the early cosmological development. Before the QED freeze-out, the universe is heavily interacting, and it is to be expected that sooner or later no witnesses survive to fix a unique macroscopic pathway and to eliminate macroscopic coexistence.

A macroscopic description of the earlier universe could be unacceptable. Even the use of a unique macroscopic Hubble parameter H(t), as it is used in the Friedmann equations, might be questionable.

7.1 Homogeneity of the Early Universe

The transition from a period without a macroscopic description to a macroscopic one requires special considerations. There is a simple observation about contributing pathways. Unusual components of the quantum phase will typically be deselected, and only components close to the average will collectively produce a significant contribution entering the macroscopic phase. In this way, a homogeneous contribution at the transition point is strongly favored.

The initial big bang state in our argument for macroscopic causality can be replaced in this framework by this initial homogeneous state. The basic initial state/border state asymmetry needed for the argument stays.

The universe is more homogeneous than expected from simple estimates [31]. It is usually attributed to a limited horizon caused by a rapid expansion of the universe due to inflation. The concept might offer a way to avoid the complicated requirements of inflation models.

According to a recent work of Chowdhury et al. [19], inflation models have a serious fundamental problem within Copenhagen quantum mechanics. One needs to come from an initially coherent state to one allowing for temperature fluctuations. Quantum jumps would do the trick, but they are not possible in inflation models, as the universe is taken as a closed system without an external observational macroscopic entity.

8 Summary

Quantum statistical effects strongly suggest abolishing causality on the quantum side and finding arguments to effectively resurrect it in the macroscopic world.

In a universe with a finite lifetime \(\tau_\text{final}\), sufficiently abundant witnesses make it possible to postpone all measurement decisions to \(\tau_\text{final}\), where they can then be incorporated in an effective final density matrix. The resulting absolutely deterministic concept with a fixed initial and a fixed final density matrix is closely related to the Two-State-Vector quantum mechanics of Aharonov and coworkers and to a universe in the Everett multiverse inhabited by a final observer with whom our community in our particular universe associates.

As it stands, the concept is not acceptable. Free macroscopic agents are indispensable. A simple way to incorporate individuals with free will is to turn to a slightly modified model. The fields and their conjugates are taken to evolve independently, and the fixed final state on each side is replaced with a common, merely matching one. For simplicity, and to avoid ad hoc assumptions about the matching, a big bang/big crunch cosmology is chosen with an expanding and a contracting quantum phase. A free agent then lives—like all macroscopic objects—with the wave function in the expanding part and with the complex conjugate one in the contracting part. Operators he is allowed to enter at his macroscopic time t on both sides, i.e., at the quantum times t and \(\tau -t\), will affect the quantum evolution in between, i.e., in his macroscopic future.

To conclude, we obtained a surjective interpretation that has no intrinsic paradoxes and allows for free agents. Unfortunately, it requires abandoning concepts many people are not willing to question.