1 Benveniste’s Experiments and “Memory of Water”

In 1988, an article published in Nature suggested that molecules were not always necessary to convey “biological information” (Davenas et al. 1988). It was as if specific information associated with molecules—including huge macromolecules—could be stored or imprinted in water samples. However, the evidence for storage of information in water—quickly dubbed “memory of water” in the lay press—remained indirect, and the main weakness of the article was the absence of physical measurements or of a clear theory to support these claims. The published data consisted mainly of results from a single experimental model with biologically active solutions (immunoglobulins) that had been highly diluted according to a well-defined protocol (including shaking between each dilution step). As the authors themselves acknowledged, however, the mechanisms of this phenomenon remained unknown. A vague hypothesis was proposed—at the insistence of the Editor—of a possible “hydrogen-bond network or electromagnetic fields that could act as a template for the biological molecules”. This suggestion was, however, more a direction for future research than an adequate explanation. In fact, the evidence for “ghost molecules” in high dilutions rested only on circular reasoning: (1) modifications of water structure induced biological changes; (2) biological changes resulted from modifications of water structure. Nevertheless, the fact that the outcomes differed between “inactive” and “active” samples in blind experiments remained puzzling and was the strong point of the article.

Details of the investigation by Nature’s team and the subsequent controversy, which are not the subject of this article, can be found elsewhere (Benveniste 2005; de Pracontal 1990; Schiff 1998; Maddox 1988; Maddox et al. 1988; Beauvais 2007, 2012). During the following years, new data obtained with the basophil model were published by other teams or by Benveniste’s team itself, with varying conclusions (Benveniste et al. 1991; Ovelgonne et al. 1992; Hirst et al. 1993; Belon et al. 1999; Brown and Ennis 2001; Ennis 2010).

The episode with Nature in 1988 was not the end of the “memory-of-water” story. Indeed, Benveniste developed other biological models and proposed new concepts such as “electronic transmission” and “digital biology”. The isolated rodent heart is a classical model in physiology (Langendorff apparatus) that was routinely used in Benveniste’s laboratory. Early attempts in 1991 showed that changes in coronary flow were recorded after injection of high dilutions of histamine or other molecules into the circuitry of the Langendorff apparatus (Hadji et al. 1991; Benveniste et al. 1992). One advantage of this biological model was that changes in the flow through the coronary arteries were plainly visible in the series of tubes collecting the coronary flow. It was therefore easy to demonstrate an experiment live to an audience.

A series of experiments with the isolated rodent heart showed that high dilutions of histamine became ineffective after exposure to a magnetic field (Hadji et al. 1991). These results prompted Benveniste to use an electronic amplifier to transmit the “biological information” supposedly emitted by a sample containing a solution of biological molecules. A tube containing the solution of the molecule to be “transmitted” was placed in an electric wire connected to the input of the electronic amplifier, and a tube containing a sample of “naive” water (i.e. water that did not contain “information”) was placed in a wire connected to the output. Again the experiment was a success, and the expected changes in coronary flow were observed (Aïssa et al. 1993).

For the next step, Benveniste used the sound card of a personal computer as an electronic amplifier and reported that the “electromagnetic information” emitted by a solution of biological molecules placed in a wire connected to the input could be digitized and stored on the hard disk of a computer. In a second step, this file could be “played back” and the information imprinted into a sample of water placed in a wire connected to the output (Aïssa et al. 1993; Benveniste et al. 1994; Aïssa et al. 1995; Benveniste et al. 1996, 1997, 1998). To describe these methods, Benveniste coined the term “digital biology”. In a further refinement, the wire connected to the output was placed around the column of physiological liquid that perfused the heart. This meant that the “digitized information” could be sent directly from the computer to the apparatus without an intermediary water sample. The contamination argument, which was frequently used to dismiss these experiments, therefore no longer applied.

2 “Jumping” of Biological Activity from Sample to Sample

The question now is—if these results were so obvious—why could Benveniste not convince the scientific community of the importance of his “discoveries”? Indeed, Benveniste often invited colleagues to witness these experiments. “Public demonstrations” with the isolated rodent heart model were regularly organized from 1992 to 1997 for colleagues and other scientists interested in Benveniste’s experiments. The protocol of these experimental demonstrations was designed as a “proof of concept” intended to provide a definitive confirmation of the reality of “electronic transmission” or “digital biology”. Details of these demonstrations have been given elsewhere (Beauvais 2007, 2012, 2013a); the results of one public demonstration have been thoroughly analyzed in a recent article (Beauvais 2013b).

During the public demonstrations, inactive samples (controls) and active samples (supposed to have received specific “biological information”) were prepared; in some experiments, the “samples” were computer files (Beauvais 2007, 2012). Usually, this preparation took place in a laboratory other than Benveniste’s. The preparation of the samples was tightly controlled and their initial labels were replaced with code numbers by participants not belonging to Benveniste’s team (Eve in Fig. 1). Open-label samples were also prepared, but they nevertheless received a blind code in Benveniste’s laboratory from a team member (Bob in Fig. 1) before testing. Over the following days, all samples were tested in Benveniste’s laboratory. The results associated with each sample (change or no change of the biological parameter) were sent to Eve (usually by fax). Eve compared the two lists (initial labels, i.e. expected effects, and observed effects) and assessed the rate of concordant pairs, that is, “inactive” labels associated with the resting state of the biological device and “active” labels associated with the “activated” state of the biological device.

Fig. 1

Definitions of the roles of the different agents in an experimental situation. Eve is outside the laboratory and remotely assesses (or not) the rate of success of the blind experiments performed by Alice. For this purpose, Eve replaces the initial label of each experimental sample with a code number. When all samples have been tested, the observed states of the experimental device are sent to Eve. Eve then assesses the rate of “success” by comparing the two lists: “expected” effects (“inactive” (IN) and “active” (AC) samples under code numbers) and observed effects (resting state or “activated” state). Bob, who is inside the laboratory, observes Alice performing an experiment and can also locally assess the results of blind experiments

Thus, suppose that the “expected” results were ↓↓↓↓↑↑↑↑ (i.e. four “inactive” and four “active” samples in that order). The observed results showed a statistically significant concordance with Bob: ↓↓↓↓↑↑↑↑. In contrast, the concordance was lost with Eve: for example, ↓↑↓↑↓↓↑↑. In other words, blind experiments in which Eve (external assessment) remotely checked the results disturbed them; the concordance of the biological outcomes with the labels was no better than random. Note that the “activated” state (↑) was nevertheless observed, but not at the correct (i.e. “expected”) places. This contrasted with open-label experiments or blind experiments in which Bob locally checked the results; in that case, the “expected” results were observed. Such a series of experiments, with both Bob and Eve assessing the results, has been described in detail in a previous article (Beauvais 2013b). Descriptions of Benveniste’s experiments under different experimental conditions (with or without Eve’s assessment) have been reported elsewhere (Beauvais 2012, 2013a).
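To make the comparison concrete, here is a minimal sketch in Python of how such a concordance rate can be computed (our own illustration, not code from the laboratory; the sequences are the hypothetical examples above):

```python
def concordance_rate(labels: str, states: str) -> float:
    """Fraction of concordant pairs: 'I' (inactive label) with 'r' (resting state)
    or 'A' (active label) with 'a' (activated state)."""
    success = {("I", "r"), ("A", "a")}
    hits = sum((l, s) in success for l, s in zip(labels, states))
    return hits / len(labels)

labels = "IIIIAAAA"      # expected: four "inactive" then four "active" samples
states_bob = "rrrraaaa"  # observed with local assessment (Bob): full concordance
states_eve = "rararraa"  # observed with remote assessment (Eve), as in the example

print(concordance_rate(labels, states_bob))  # 1.0
print(concordance_rate(labels, states_eve))  # 0.5, i.e. no better than chance
```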

Various explanations were proposed for these apparent disturbances of the experiments: water contamination, electromagnetic pollution, or spontaneous “jumps” of the alleged electromagnetic molecular memory from sample to sample. The hypothesis that “active” samples had been “erased” by an external influence was plausible; how an “inactive” sample could become “active” was more challenging. No satisfactory explanation was proposed for results that differed according to a simple change of blinding conditions (local assessment by Bob vs. remote assessment by Eve). Note that the formal distinction between these two types of assessment was not made by Benveniste’s team at that time.

For Benveniste, these weird phenomena did not call into question the validity of his claims, but indicated that further technical improvements were necessary to protect the samples from possible external influences, including potential unknown influences from the experimenters/observers. Indeed, even if one dismissed “memory of water”, the fact that consistent changes of a biological parameter were observed was difficult to explain (Beauvais 2012). Despite technical improvements, the mismatches persisted in blind experiments when Eve assessed the results (Beauvais 2007).

3 A Need for a Theoretical Framework

After the isolated rodent heart, Benveniste’s team developed another promising biological model, which could help provide a clear demonstration of the “digital biology” concepts. Indeed, one of the advantages of this biological model, based on plasma coagulation, was the possibility of automating it. It was hoped that the interaction of the experimenter with an automated device would be minimal, thus possibly avoiding mismatches between the various samples.

At this juncture, the United States Defense Advanced Research Projects Agency (DARPA) became interested in the concept of “digital biology” proposed by Benveniste and decided to commission a team of experts to assess the robot analyzer set up by Benveniste’s team. The results of this assessment were described in a 2006 article (Jonas et al. 2006). The authors concluded that results supporting “digital biology” were obtained when members of Benveniste’s team were present, but that they were unable to reproduce these results in their absence. The possibility of explaining these results as an experimenter effect was put forward by the authors, but they concluded that a theoretical framework was necessary before continuing such research (Jonas et al. 2006).

If a simple difference in how experiments are assessed has such negative effects on their results, it is difficult to continue to talk about water properties, high dilutions, “digital biology” or biological activity without molecules. The possibility that the experimenter played a nontrivial role in these experiments was repeatedly raised, including by Benveniste himself (Jonas et al. 2006). This role was, however, generally considered as an external noise that disturbed the “biological information without molecules”.

In recent articles, I proposed to describe Benveniste’s experiments within a probabilistic quantum-like modeling (Beauvais 2012, 2013a, b, 2014). The price to pay was abandoning the notions of “memory of water”, “digital biology”, and so on. Note, however, that there is to date no convincing direct proof of a modification of water structure capable of storing specific information about complex biological molecules.

In the present paper, using ideas from quantum Bayesianism, or QBism, a recent interpretation of quantum physics (and also of classical physics), we present a description of Benveniste’s experiments without the explicit use of the mathematical tools of quantum logic, such as Hilbert space, state vectors, Born’s rule, probability amplitudes, non-commuting observables, etc. Moreover, we show that this formalism explains the origin of the relationship between “expected” results and observed results if the fluctuations of measurements are taken into account.

4 Violation of the Law of Total Probability in Benveniste’s Experiments

The law of total probability is a basic law of classical probability theory. If we consider two disjoint events, B1 and B2, such that Prob (B1 ∪ B2) = 1 (the probability that either B1 or B2 is realized is equal to 1), then for any event A we can express the law of total probability as:

Prob (A) = Prob (B1) × Prob (A|B1) + Prob (B2) × Prob (A|B2)

In this formula, the conditional probability of one event with respect to another is given by the Bayes formula: Prob (A|B1) = Prob (A ∩ B1)/Prob (B1).
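As a quick numerical illustration of these two formulas, here is a minimal sketch in Python with an arbitrary toy distribution (the values are ours and purely illustrative):

```python
# Toy illustration of the Bayes formula and of the law of total probability.
# The joint probabilities below are arbitrary and sum to 1.
p_joint = {
    ("A", "B1"): 0.125, ("notA", "B1"): 0.375,
    ("A", "B2"): 0.25,  ("notA", "B2"): 0.25,
}

p_B1 = p_joint[("A", "B1")] + p_joint[("notA", "B1")]   # Prob(B1) = 0.5
p_B2 = p_joint[("A", "B2")] + p_joint[("notA", "B2")]   # Prob(B2) = 0.5

p_A_given_B1 = p_joint[("A", "B1")] / p_B1               # Bayes formula
p_A_given_B2 = p_joint[("A", "B2")] / p_B2

total = p_B1 * p_A_given_B1 + p_B2 * p_A_given_B2        # law of total probability
direct = p_joint[("A", "B1")] + p_joint[("A", "B2")]     # Prob(A) computed directly
print(total, direct)                                     # both equal 0.375
```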

In previous articles, we described Benveniste’s experiments and colligated large series of experiments performed with the Langendorff device (Beauvais 2007, 2012, 2013a). These experiments are summarized in Table 1 according to three experimental conditions: (1) Alice assessed the success rate (open-label experiments); (2) Bob assessed the success rate (Alice blinded); (3) Eve assessed the success rate (Alice blinded). According to the law of total probability, the probabilities of success (SUCC) should be identical with or without Eve’s assessment.

Table 1 Violation of total probability law in Benveniste’s experiments

Without Eve’s assessment, \({\text{Prob}}_{\text{A}}(SUCC) = 0.92\) (Alice’s assessment) and \({\text{Prob}}_{\text{B}}(SUCC) = 0.88\) (Bob’s assessment). Eve then tried to confirm these results by assessing blind experiments with the participation of Alice (Fig. 1). After receiving the results corresponding to each label, she assessed the success rate for each label (IN for the “inactive” label and AC for the “active” label) and calculated the overall rate of success:

$$\text{Prob}_{\text{E}}(SUCC) = \text{Prob}(IN) \times \text{Prob}(SUCC \mid IN) + \text{Prob}(AC) \times \text{Prob}(SUCC \mid AC) = 0.5 \times 0.57 + 0.5 \times 0.56 = 0.57$$

The probability of success therefore differed according to the experimental conditions (assessment of the success rate first by Alice/Bob or first by Eve):

$$\text{Prob}_{\text{A}}(SUCC) \approx \text{Prob}_{\text{B}}(SUCC) > \text{Prob}_{\text{E}}(SUCC)$$

Remote assessment of success rate led to a decrease of the rate of success (concordance between “expected” results and observed results). The law of total probability was thus violated in Benveniste’s experiments.
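Using the success rates quoted above, the comparison can be written as a short sketch (the numerical values are those reported in the text; the code itself is only illustrative):

```python
# Success rates reported in the text (Table 1).
prob_A = 0.92   # Alice's assessment (open-label experiments)
prob_B = 0.88   # Bob's assessment (Alice blinded, local assessment)

# Eve's assessment (Alice blinded, remote assessment), obtained from the
# conditional success rates observed for each label.
prob_IN, prob_AC = 0.5, 0.5
prob_succ_given_IN, prob_succ_given_AC = 0.57, 0.56
prob_E = prob_IN * prob_succ_given_IN + prob_AC * prob_succ_given_AC

print(prob_E)                            # 0.565, reported as 0.57 in the text
print(prob_A > prob_E, prob_B > prob_E)  # True True: Prob_A ≈ Prob_B > Prob_E
```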

The law of total probability can be violated in quantum logic. Thus, in the two-slit Young experiment, the probability of observing an impact of the photon at a given place on the screen differs according to whether or not the path of the photon (through slit 1 or slit 2) is identified. The decoherence process, however, is an obstacle to using quantum logic in macroscopic experimental situations. In this article we show how a personalist view of physics provides tools to describe Benveniste’s experiments.

5 Summary of a Personalist Interpretation of Probability

In a recent article, Mermin commented on quantum Bayesianism (or QBism), which is a personalist interpretation of quantum physics (Mermin 2014). The main feature of this interpretation is that the scientist (the observer) is central to the description, not only of quantum phenomena, but of the entire physical world. The proponents of this interpretation (Fuchs, Caves and Schack) proposed to consider the perceptions of the observer as central to any physical description of the world: “The outcome of an experiment is the experience it elicits in an agent” (Fuchs 2010; Fuchs et al. 2013). The outcome is thus displaced from the object to its perception by the observer. In other words, the experience is the outcome. As a consequence, experiments that have not been experienced by an agent have no result for this agent (there are as many descriptions of reality as there are observers). Moreover, a measurement does not reveal a preexisting outcome, but results in the creation of something new. Nevertheless, all observers agree when they compare their observations and measurements; each observer then enters into the other observer’s own experience. Another interesting and important feature of QBism is that there is no split between microscopic and macroscopic, but between the world in which an agent lives and his internal experience of that world.

6 Modeling of Benveniste’s Experiments

6.1 Definitions and “Rules of the Game”

A “success” is defined as the experience by Alice of either an “inactive” (IN) sample with resting state (↓) or an “active” (AC) sample with “activated” state (↑). Success is thus defined as \(A_{IN}\) associated with \(A_{\downarrow}\) or \(A_{AC}\) associated with \(A_{\uparrow}\). The probability that an experimenter/observer experiences the event X is symbolized by \(\text{Prob}(A_{X})\).

A label is like a pointer of a meter device that is moved by Alice (or Bob) toward the words IN (“inactive”) or AC (“active”) (Fig. 2). Which samples are “inactive” or “active” is decided by the experimental protocol. Although these samples are not physically changed during this process, the formalism requires that the experiences elicited by “inactive” versus “active” labels must be different. The “aim of the game” is thus to compare for each sample the perception of the position of “Pointer 1” (IN or AC labels) with the perception of the position of “Pointer 2”, namely the biological device (resting state ↓ or “activated” state ↑).

Fig. 2

Perception by Alice of the different “pointers”. A “label” is like a pointer that is moved by Alice toward IN (“inactive”) or AC (“active”). Despite the apparent subjective characteristics of “labels”, their perception is not of a different nature compared to the perception of the states of the experimental device (resting state or “activated” state). The comparison is also like a pointer moved by the experimenter (toward “success” or “failure”) according to defined rules; the results of comparison that leave traces in the macroscopic world (pointer) are in turn perceived by the experimenter

The comparison of the position of these two pointers is also like another pointer which is moved toward “success” or “failure” by the experimenter (or any automatic device) according to defined rules; the position of this third pointer in turn is perceived by the experimenter. Therefore, despite the apparent subjective characteristics of labels and assessment of success, their perception is not of a different nature compared to the perception of the states of the experimental device.

6.2 Construction of a relationship between “expected” results and observed results

In the context of a personalist view of probability, Alice experiences an outcome of an experiment not only after “direct” observation, but also from other “channels” such as Bob or other observers. Although measurements through these different channels are independent, the resulting experience by Alice must be both unique and consistent. For example, Alice and Bob must agree on the unique result of the experiment—namely, success or failure—and an experimental situation such as success for Alice and failure for Bob is not allowed.

For simplicity, we first consider only two channels, which we name Channel #1 and Channel #2. Each measurement through each channel is independent. We suppose that the probability of success (SUCC) is p and the probability of failure is q for a measurement through Channel #1 or Channel #2 (p + q = 1). Therefore, the probability of success experienced by Alice via the two channels is the joint probability (Fig. 3):

$${\text{Prob}}(A_{SUCC\# 1} \cap A_{SUCC\# 2} ) = \frac{p \times p}{p \times p + q \times q} = \frac{{p^{2} }}{{p^{2} + q^{2} }}$$

“Classically” (i.e. if one considers that the outcome belongs to the object and not to the observer), the joint probability for success (and the joint probability for failure) is equal to the probability through each channel:

$$p = \frac{{p^{2} }}{{p^{2} + q^{2} }}{\text{ and }}q = \frac{{q^{2} }}{{p^{2} + q^{2} }}$$
Fig. 3

Consequences of a personalist view of probability on the joint probabilities of the outcome obtained from different channels. Alice perceives the outcome of an experiment through different channels (e.g. “direct” observation or other observers such as Bob). These different channels must be consistent (same outcome for all channels) because Alice experiences a unique outcome. For simplicity we show only two channels. White areas correspond to experimental situations where the outcomes from the two channels are not consistent (e.g. success via Channel #1 and failure via Channel #2). These white areas thus correspond to impossible experimental situations and are excluded from the probability calculations. The probability of each white area is pq (probability of the “cross-terms”). The probability of observing success from both channels is thus the probability of the central gray area (Alice perceives success through both channels), which is p², divided by the probability of observing consistent outcomes, either success or failure, via both channels (p² + q²)

One can easily calculate that \({\text{Prob}}(A_{SUCC\# 1} \cap A_{SUCC\# 2} ) = 1/2\) (Footnote 1).
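As a side check, the classical consistency conditions above can be solved symbolically; they admit only p = 0, p = 1/2 and p = 1 as solutions (a sketch using sympy, which is our choice of tool and not part of the original work):

```python
# Solving the classical consistency condition p = p^2 / (p^2 + q^2) with q = 1 - p.
import sympy as sp

p = sp.symbols("p", real=True)
q = 1 - p
solutions = sp.solve(sp.Eq(p, p**2 / (p**2 + q**2)), p)
print(solutions)   # the three solutions are 0, 1/2 and 1
```

The non-trivial solution p = 1/2 is the value used in the text as the starting point of the simulation described below.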

However, measurements are not performed “in the sky”, but in the macroscopic world. As a consequence, any measurement is subject to random microscopic fluctuations. From a QBist point of view, the random probability fluctuations ε1 and ε2, associated respectively with Channel #1 and Channel #2, are independent (ε1 and ε2 are tiny positive or negative random real numbers).

After one fluctuation, the probability that Alice experiences “success” through Channel #1 is 1/2 + ε1; for both channels the joint probability is given by the following equation:

$$\text{Prob}(A_{SUCC\#1} \cap A_{SUCC\#2}) = \frac{(1/2 + \varepsilon_{1})(1/2 + \varepsilon_{2})}{\Delta}$$

with \(\Delta = (1/2 + \varepsilon_{1}) \times (1/2 + \varepsilon_{2}) + (1/2 - \varepsilon_{1}) \times (1/2 - \varepsilon_{2})\)

The initial probability of success (equal to 1/2 before the measurement) is thus updated with a new value, which in turn is updated after another random fluctuation, and so on. A computer simulation with very small probability fluctuations readily shows that, after several calculation steps, two stable positions are obtained (Fig. 4). With a greater number of channels, the transition towards one of the two stable positions occurs after fewer calculation steps (not shown).

Fig. 4

Evolution of the probability of success after a series of random microscopic fluctuations. Microscopic random fluctuations of measurement are the source of tiny variations of the probability of success perceived by Alice. As in Fig. 1, we consider that Alice gains information from two channels (e.g. from “direct” observation and from Bob). The probability of success is initially equal to 1/2. The equation \(\text{Prob}(A_{SUCC\#1} \cap A_{SUCC\#2}) = \frac{(1/2 + \varepsilon_{1})(1/2 + \varepsilon_{2})}{\Delta}\) (see text) allows calculating the next probability of success after one random fluctuation. This new value then replaces the initial probability of 1/2, and the probability obtained after each random step is reinjected into the next calculation of the probability of success. A small elementary change of the probability of “success” is drawn at each calculation step of this computer simulation, in the range −0.5 to +0.5 × 10⁻⁶. In this computer simulation, the probability of “success” becomes unstable after a few calculation steps and moves toward one of the two stable positions: either “success” or “failure”. With a smaller change of the probability of “success”, the bifurcation occurs after a greater number of calculation steps (40–50 calculation steps with ε = random [−0.5 to +0.5] × 10⁻¹⁵); with more than two channels, the bifurcation towards the stable positions occurs after fewer calculation steps (not shown). The results of eight computer simulations are shown in this figure
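The following minimal sketch reproduces this type of simulation (our reconstruction from the description above, not the original program; two channels, with fluctuations drawn uniformly between −0.5 × 10⁻⁶ and +0.5 × 10⁻⁶):

```python
# Sketch of the iterative update described in the text: the probability of
# "success" perceived through two independent channels is renormalized after
# each pair of tiny random fluctuations, starting from 1/2.
import random

def simulate(max_steps=1000, eps_scale=1e-6, seed=None):
    rng = random.Random(seed)
    p = 0.5                                      # initial probability of "success"
    for step in range(1, max_steps + 1):
        e1 = (rng.random() - 0.5) * eps_scale    # fluctuation on Channel #1
        e2 = (rng.random() - 0.5) * eps_scale    # fluctuation on Channel #2
        num = (p + e1) * (p + e2)                # both channels report "success"
        den = num + (1 - p - e1) * (1 - p - e2)  # ...or both report "failure"
        p = num / den                            # updated probability, reinjected
        if p < 1e-3 or p > 1 - 1e-3:             # a stable position is reached
            return p, step
    return p, max_steps

for run in range(8):                             # eight runs, as in Fig. 4
    p_final, n = simulate(seed=run)
    print(f"run {run}: p -> {p_final:.3f} after {n} steps")
```

Each run ends close to 0 (“failure”) or close to 1 (“success”); with a smaller eps_scale the bifurcation occurs after more calculation steps, in line with the behaviour described in the figure caption.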

These results show that the initial state with probability of “success” equal to 1/2 is metastable if measurement fluctuations are taken into consideration. Indeed, two stable positions emerge: probability of “success” equal to 1 (\(A_{IN}\)/\(A_{AC}\) associated with \(A_{\downarrow}\)/\(A_{\uparrow}\) for stable position 1) or probability of “success” equal to 0 (\(A_{IN}\)/\(A_{AC}\) associated with \(A_{\uparrow}\)/\(A_{\downarrow}\) for stable position 2, respectively).

6.3 Selection of the “expected” stable position and local assessment of success in blind experiments

Nothing in the modeling allows selecting one of the two stable positions 1 and 2. However, we must note that, by construction, the biological device is asymmetrical: \(A_{\downarrow}\) is implicitly associated with \(A_{IN}\). One could thus consider that a sample with an “inactive” label associated with the resting state (↓) is systematically intercalated between each test sample. As a consequence, only the possible “universes” where \(A_{\downarrow}\) is associated with \(A_{IN}\) are allowed. Therefore, only stable position 1 is possible. In this case, the “expected” results fit the observed results (Table 2).

Table 2 Comparison of classical and personalist views of probability to describe Benveniste’s experiments

Bob can locally assess Alice’s results by performing blind experiments. In this experimental situation, both Alice and Bob are in the same stable position (Bob is a channel for Alice), with a probability of success equal to one. “Success” in blind experiments locally assessed by Bob is thus easily explained.

6.4 Transition of the biological device from resting state to “activated” state

It is important to underscore one consequence of the formalism, namely the transition of the biological device from the resting state to the “activated” state. Indeed, in stable position 1, \(A_{\uparrow}\) is associated with certainty (probability of success equal to 1) with \(A_{AC}\). Introducing the fluctuation ε for the probability of success (Sect. 6.2) implicitly means that the perception of a spontaneous transition from \(A_{\downarrow}\) to \(A_{\uparrow}\) is possible, even though with a low probability. Indeed, the “expected” probability of success is initially equal to 1/2: the initial probabilities of success associated with \(A_{IN}\) and \(A_{AC}\) are equal since only the resting state is observed (\(A_{IN}\) with \(A_{\downarrow}\) is a success and \(A_{AC}\) with \(A_{\downarrow}\) is a failure) (Fig. 4). If the probability of success is slightly modified (1/2 + ε), it means that the probability of \(A_{\uparrow}\) is different from zero (with some “inactive” samples associated with “failure” or some “active” samples associated with “success”).

6.5 Description of the Apparent “Jumping” of the Biological Activity from Sample to Sample

As described in Sect. 2, Eve remotely assesses the experiment by comparing two lists that summarize the results: for each sample, Eve compares the state of the biological device (resting state or “activated” state) observed in blind tests by Alice under a code label with the corresponding initial label (“inactive” or “active” label) (Footnote 2). Eve then calculates the rate of success of the experiment (Fig. 1).

The probability of success (SUCC) in this experimental situation can be calculated using conditional probabilities to take into account information on inactive (IN) and active (AC) labels (in contrast with Bob, Eve is not a channel for Alice):

$$\text{Prob}(A_{SUCC}) = \text{Prob}(A_{IN}) \times \text{Prob}(A_{SUCC} \mid A_{IN}) + \text{Prob}(A_{AC}) \times \text{Prob}(A_{SUCC} \mid A_{AC}) = 1/2 \times 1/2 + 1/2 \times 1/2 = 1/2$$

Consequently, in blind experiments with Eve, the proportion of samples with an “active” label observed in association with an “activated” state decreases from 100 to 50 % and the proportion of samples with an “inactive” label observed in association with an “activated” state increases from 0 to 50 %. Thus, in a series of samples, it is as if the “biological activity” had jumped from some samples with an “active” label to other samples with an “inactive” label; success is observed at a rate no better than random. Failure in blind experiments remotely assessed by Eve is thus easily explained.
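A short simulation illustrates this shift of proportions (an illustrative sketch of ours, not an analysis of the actual data):

```python
# Under Eve's remote assessment, each sample is a "success" with probability 1/2,
# so the observed state is independent of the label: "activated" states are still
# seen in about half of the samples, but no longer at the "expected" places.
import random

rng = random.Random(0)
n = 10_000
labels = ["IN", "AC"] * (n // 2)

# Observed state drawn independently of the label (Eve's condition).
states = [rng.choice(["rest", "activated"]) for _ in labels]

for lab in ("IN", "AC"):
    activated = [s for l, s in zip(labels, states) if l == lab].count("activated")
    print(lab, round(activated / labels.count(lab), 2))   # both close to 0.5
```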

6.6 Limits Imposed by the Macroscopic States of the Experimental Device and of the Experimenter

One could argue that, according to this modeling, a relationship between “expected” results and observed results should be frequently observed in any experimental context as soon as two parameters—supposed to be connected—are assessed. However, in real experimental situations, the application of this formalism could be limited for physical reasons. The fact that Benveniste’s experiments were performed in the context of a laboratory dedicated to the biological sciences is perhaps not without significance. Indeed, biological models have many degrees of freedom, and the Brownian motion of molecules in solution confers a large plasticity on these systems. The mean values of a biological parameter may vary (within some range), thus allowing the perception of an “activated” state as described in Fig. 4. Such a transition from the initial position to a stable position is possible only with experimental systems that are sufficiently “flexible”.

Thus, suppose that we replace the Langendorff apparatus in Benveniste’s experiments with a Schrödinger’s cat. We assume that the radioactive decay of the device gives a probability of 1/100 for a dead cat (“activated” state) and 99/100 for a living cat (resting state). A stable position (Fig. 4) is achieved only if the “activated” state (dead cat) is observed in 50 % of the measurements (supposing that half of the labels should be associated with “dead cat”). This is an example of a “rigid” system, owing to the physical impossibility of changing radioactive decay. Another example is a beam splitter reflecting individual photons with a probability of 1/100 (“activated” state) or transmitting them with a probability of 99/100 (resting state). Despite random microscopic fluctuations of measurement, there is no possibility of reaching a stable position. Indeed, the beam splitter does not allow observing 50 % of transmitted photons and 50 % of reflected photons without changing its “rigid” internal structure. In contrast, in all the biological models used in Benveniste’s experiments, an “activated” state could easily be obtained by adding pharmacological or biological molecules at “classical” concentrations. In other words, the formalism allows connecting the perceptions of two parameters only if the laws of physics are respected.

Another limitation, which could explain why Benveniste’s experiments were not easily reproduced by other teams, is related to the cognitive aspect present in this formalism. Indeed, the successive calculation steps that allow building Fig. 4 assume that Alice permanently adjusts her a priori probability of success after each experiment (or series of experiments) according to the defined rules. If there are defects in these cognitive processes, the stable position with probability equal to unity (i.e. certainty of experiencing “success”) cannot be achieved and maintained.

7 Discussion

For most scientists, Benveniste’s experiments remain an example of poor science—whatever the alleged reasons (artifact, wishful thinking, unseen error)—aimed at supporting an alternative and controversial medicine, namely homeopathy. Among the arguments against “memory of water” were the incompatibility of this hypothesis with our knowledge of the physics of water, the poor reproducibility of the experiments by other teams and the failures of blind experiments (with controllers such as Eve). Other arguments, however, supported the idea that something was at work in these experiments and was of scientific interest. More particularly, there were unexplained variations of biological parameters and numerous consistent results, including in blind experiments (with controllers such as Bob). For these latter reasons, Benveniste’s team considered that these results were not due to trivial errors or artifacts and persisted in this direction.

Even though the “activated” state was not always at the “expected” place, its appearance remained unexplained and puzzling (Beauvais 2007, 2008, 2013a, b, 2014). As a first step, we analyzed large series of experiments obtained by Benveniste’s team in the 1990s with the Langendorff device (Beauvais 2007, 2012, 2013a). In particular, we studied the correlations between the outcomes of two Langendorff devices that worked in parallel in Benveniste’s laboratory from 1992 to 1996 (double measurements allowed Benveniste’s team to be confident in the results of public demonstrations). The relationship between “expected” effects and observed effects was quantified and we defined the experimental conditions required to observe significant relationships. The outcomes of the trials strongly depended on the location of the people who assessed the success rates of the blind experiments (success with Bob vs. no better than random with Eve). We also analyzed in detail a “public demonstration” comprising four series of blind experiments controlled by Eve and Bob (Beauvais 2013b).

It thus appeared that the scientific fact in these data was not elusive water properties, but the difference in “success” according to the experimental conditions (blind experiments with Bob vs. Eve). Note that the early experiments with basophils were also not devoid of “experimenter-dependent” outcomes and effects of blinding (Beauvais 2007). A similarly unusual conclusion about a possible role of the experimenter was also reported by the multidisciplinary team mandated by DARPA to evaluate Benveniste’s “digital biology” (Jonas et al. 2006).

In mathematical terms, the difference in outcomes according to the experimental conditions translated into a violation of the law of total probability. In other words, there was no possible explanation based on classical probability. Under these conditions, seeking a “structural” and “local” explanation within the water samples (similar to a classical pharmacological effect) would be a chimera. According to the present formalism, the experimenters describe outcomes that they contribute to construct.

Therefore, the violation of the law of total probability suggested that these experiments could have been misinterpreted and that the causal relationship between samples and outcomes was only apparent. If we accept abandoning the hypothesis of “memory of water”, we have to reinterpret these experiments within a new framework. In a series of articles, we showed that the logic of Benveniste’s experiments was reminiscent of quantum logic, as observed in the self-interference of a single photon in the two-slit Young experiment or in a Mach–Zehnder apparatus (Beauvais 2013a, b, 2014).

Decoherence is generally thought to be an obstacle to the description of macroscopic events with quantum logic. Nevertheless, Fuchs et al. proposed, in their personalist interpretation of quantum physics (QBism), to displace the outcome of an experiment from the object to its perception by an agent. In the QBist approach, there is no split between microscopic and macroscopic, but between the world in which an agent lives and his internal experience of that world. In the present article, we propose a new framework for Benveniste’s experiments that is inspired by the personalist principles of QBism. Note that we do not use the typical tools of quantum logic such as Hilbert space, probability amplitudes, the Born rule, etc. Even though the mathematical tools used in the present modeling are classical, the initial hypotheses inspired by QBism are nevertheless not classical. The main hypothesis is the displacement of the outcome from the observed object to its perception by the observer. As an example, if one describes a player who flips a coin, one would classically say that heads came up; in a personalist view, one says that the player experienced that heads came up (“The experience is the outcome”).

All characteristics of Benveniste’s experiments, including their paradoxical aspects, are taken into account in the modeling. Thus, the transition from resting state to “activated” state of the biological system, the apparent causal relationship between observables (interpreted as “success”) and the “jumps” of the biological activity from “active” to “inactive” samples (interpreted as “failure”) are easily described. Overall, this modeling fits the corpus of the experimental data gained by Benveniste’s team over the years (Beauvais 2007, 2012, 2013a, b).

If establishing relationships between observables were as easy as suggested by the proposed modeling, why are “expected” results not more frequently observed in everyday scientific measurements? We can suggest that some physical constraints related to the “rigidity” of the macroscopic states corresponding to the experimental device could limit the evolution toward a “successful” stable position as described in Fig. 4. Sufficiently “flexible” experimental models are necessary, and biological models—at least some of them—appear to be appropriate for this purpose. Cognitive processes are also at work in this formalism (with permanent adjustment of the a priori probability according to defined rules), and a deficiency of these processes could be an obstacle to “successful” experiments.

Conversely, we cannot exclude that some “classical” experiments are “polluted” by phenomena similar to those reported by Benveniste under certain “favorable” experimental conditions; this could lead experimenters to describe outcomes that they have in fact “constructed” without their knowledge.

In conclusion, a personalist interpretation of Benveniste’s experiments offers for the first time a logical framework for these experiments, which have remained controversial and paradoxical to date.