
1 Introduction: The Infodemic of COVID-19

The COVID-19 pandemic has been accompanied by an unprecedented ‘infodemic’, as the World Health Organization Director-General Tedros Adhanom Ghebreyesus characterized it in February 2020: an overabundance of information about the new coronavirus and the disease it causes, much of it false or misleading. Information is updated and provided to the public on an ongoing basis, and the public now watches the process of knowledge production in almost real time. The presence of scientific discourse in the public sphere is perhaps stronger than ever. At the same time, however, we face an unprecedented growth of mis- and disinformation, involving conspiracy theories about the origin and transmission of the novel coronavirus, efforts to trivialize the risks related to it, promotion of unproven treatments, as well as false claims about the actions or policies that public authorities are taking to address the problem; such content can prove almost ‘as dangerous as the virus’.

The dissemination of this information is not always intended to distort the truth or to deceive. This is why misinformation is clearly distinguished from disinformation, at least in the context of the European Union’s (EU) disinformation policy and in an attempt to clarify the problem: the former is supposed to be unintentional, while the latter is defined as intentional. Under these circumstances, fact-checking and access to reliable (sources of) health information have been central to protecting the public’s health and safety. The EU’s actions to tackle COVID-19 disinformation aim, in particular, at promoting accurate and reliable science-based information on COVID-19, as well as at raising citizens' awareness of the risks of misinformation (Document—Communication from the European Commission, 2020; Lee, 2020). However, the attempt to separate facts from fiction and control the flow of information, no matter how useful or successful it might be, is hindered by the uncertainties surrounding the scientific understanding of SARS-CoV-2. This ushers in the main question that this chapter focuses on: how to communicate about science in such a way that the adverse effects of mis/disinformation (in, for example, the COVID-19 crisis) are mitigated. The answer offered, from a philosophical perspective, is that we should distinguish between genuinely reasonable scientific disagreements and disagreements triggered by mis- or disinformation.

We illustrate our answer by discussing the recent debate between John P. Ioannidis and Nassim N. Taleb about COVID-19 forecasts and the measures we should take to prevent and/or control SARS-CoV-2 transmission. This debate is interesting, among other things, because it invites two readings: it can be seen as a debate between scientists about a scientific issue, or as a debate between scientists about what to advise policy makers on the basis of scientific research findings. While reconstructing the arguments provided by the two scientists, we show that both readings are plausible, or at least equally well supported, under uncertainty, and in particular that the second reading is related to the issue of how much transparency is needed to ensure the legitimacy of the values involved in decision-making.

2 Managing the COVID-19 Infodemic

The coronavirus crisis falls into the category of the so-called open or ‘wicked’ problems (Rittel & Webber, 1973), which means that it goes beyond the traditional classification systems and taxonomies of medical science. We cannot understand it from the perspective of public health and medical science alone. It is rather a symptom of deeper systemic problems: a complex phenomenon characterized by indeterminateness, systemic complexity and, finally, the absence of ‘right’ or definite solutions, as is argued (ibid.). Or, as Méndez (2020) puts it, it is ‘…a symptom of more profound global problems’, which nevertheless ‘…cannot be tackled now, since resources are scarce and directed towards the resolution of the symptom, as are political action, and media and public attention’. The point is that the values involved in the implications of the actions that need to be taken (or not taken) to combat the spread of the disease exceed the disease itself and its biomedical basis and impact (e.g., they involve apparent restrictions of basic human rights).

Policy decisions are therefore taken with the aim of managing—rather than solving—the problem(s) related to the COVID-19 pandemic and its effects. In the absence of right or ultimate solutions, policy-makers are trying to anticipate and prevent some of these problems from occurring, or to minimize their impacts if they cannot be prevented. However, the scientific basis for decision-making is indeterminate and unstable. Data are open to multiple changes and interpretations, which means that both facts and uncertainties are (and remain) uncertain. There are many unidentified risks, or the so-called known unknowns: things we know we do not yet know about the origin and nature of (the different variants of) SARS-CoV-2 or the potential treatments and long-term health effects of COVID-19.

In light of this, how should the public be properly informed about COVID-19? There are two issues related to science communication to be dealt with:

  1. What language should be used for the communication of scientific information?

  2. What should be communicated?

The first issue is related to the argumentative practices and patterns used in science communication, and, in a certain sense, to the problem of thick concepts and metaphors used in this context (Elliott, 2017).

The language of COVID-19 communication involves definitions and classifications that are based on current epidemiological data and subject to constant changes and updates (cf. Lewiński & Abreu 2022, this volume). But it is not merely descriptive. Any reference to a ‘public health emergency’ or a ‘pandemic’ is legitimized on the basis of evaluative judgments, to give but one example. The WHO’s declaration that the global spread of coronavirus disease was a pandemic was meant to send a powerful signal to countries that urgent action was essential to combat the spread of the disease. It was intended to raise awareness. But it could also instill panic and fear in people. And this is why the appropriateness of this declaration, as well as of the time at which it was made, has been the subject of considerable criticism.

Besides, scientists can use informal argumentative practices and means of persuasion when communicating with the public. They sometimes rely on metaphors (cf. Oswald & Rihs, 2014), for example, to make sense of scientific explanations or abstract scientific concepts. But despite their utility, all these (informal) practices and means of communication can also constrain scientific reasoning, as we shall see below.

The second issue has to do with the risk of error involved in decision-making under conditions of uncertainty, what is now known as ‘inductive risk’ in the philosophy of science literature. Depending on the range of these decisions, we can distinguish different versions of this problem. The assumption is always that there is a gap between data and hypotheses, which allows non-epistemic values to enter scientific reasoning. But while, in a traditional version of this argument (Rudner, 1953), the risk is limited to the final decision that a scientist must make on whether or not to accept a hypothesis (the decision, that is, that the evidence is sufficiently strong to warrant the acceptance of the hypothesis), according to a more recent version (Douglas, 2009), inductive risk is present from the beginning and throughout the scientific process: in the choice of methodology, in the choice of the models used in science, in evidence characterization, as well as in the analysis or interpretation of data. So even a purely methodological decision, such as the choice of a level of statistical significance, involves values according to this argument—an appropriate balance between the two kinds of error (false positives/false negatives) and therefore a decision on which errors we should most avoid.
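
To make this trade-off concrete, here is a minimal simulation sketch (not from the chapter; the effect size, sample size and thresholds are illustrative assumptions): lowering the significance level reduces false positives at the price of more false negatives, so fixing it at 0.05 rather than 0.10 or 0.01 is already a choice about which error matters more.

```python
# A minimal, illustrative sketch: simulate repeated one-sample t-tests under the
# null hypothesis (no effect) and under a modest true effect, and compare error
# rates at different significance levels. All numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, effect, trials = 50, 0.4, 5000

def p_value(sample):
    # two-sided one-sample t-test against a null mean of 0
    return stats.ttest_1samp(sample, popmean=0.0).pvalue

null_p = np.array([p_value(rng.normal(0.0, 1.0, n)) for _ in range(trials)])
alt_p = np.array([p_value(rng.normal(effect, 1.0, n)) for _ in range(trials)])

for alpha in (0.10, 0.05, 0.01):
    false_pos = np.mean(null_p < alpha)    # Type I error rate (false positives)
    false_neg = np.mean(alt_p >= alpha)    # Type II error rate (false negatives)
    print(f"alpha={alpha:.2f}  false positive rate={false_pos:.3f}  "
          f"false negative rate={false_neg:.3f}")
```

A stricter threshold thus shifts the burden of error from false alarms to missed effects; which balance is ‘appropriate’ is precisely the kind of non-epistemic decision the inductive risk argument points to.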

In cases of public health emergency, such as the coronavirus pandemic, where research is conducted under time pressure and the need to move expeditiously is of vital importance, we also need to find an appropriate balance between the need to act and the desire for more reliable findings, which would nevertheless be time-consuming. Hence, the question arises as to which research results can responsibly be communicated to the general public when these results are controversial, are revised or updated almost daily, are often inaccurate or conflicting, and furthermore lead to non-accountable (and/or irresponsible) decisions, in the sense that responsibility for these decisions can be transferred from the political level to healthcare workers and vice versa.

A concern expressed here is that the presence of scientific discourse in the public sphere may create confusion and distrust (in both science and government) or undermine the consistency of the message, if people see research data and recommendations being constantly revised or scientists failing to reach an agreement on them. But it is also argued that, precisely because the scientific basis for decision-making is indeterminate and scientific controversies may reflect ideological or political differences, all information should be made publicly available in the name of full transparency (Elliott, 2017), with the aim of involving in the decision-making process those affected by or interested in these decisions.

Be that as it may, the root of the problem is that the public should understand that there is no such thing as absolute certainty in science, but only degrees of certainty, and hence that uncertainty rules in science. Besides, the public should understand that, precisely because of this uncertainty, scientists often disagree with each other. But therein lies an important challenge. Can we tell the difference between a reasonable disagreement that may arise in the context of a properly functioning science, on the one hand, and misinformation or the dissemination of false news, on the other? In other words, what is the difference between disagreeing about P based, for instance, on different evaluations of the relevance of the evidence and/or the levels and importance of uncertainty, and disagreeing about P on the basis of mishandled or inaccurate information and/or ideological stances?

Now, it’s clear that a full transparency policy requires that research data and results be made available in open access even when they are inconclusive or conflicting. But unless we can distinguish between a legitimate disagreement among scientists and controversies arising from science denial or disinformation, transparency could cause more confusion. So, what is a ‘reasonable’ scientific disagreement and how should we communicate uncertainty?

3 The Debate Over COVID-19 Forecasting: Ioannidis Versus Taleb

The recent debate between Ioannidis and Taleb on COVID-19 forecasts (Ioannidis et al., 2020; Taleb, 2020) was quite revealing of the interplay of science and values in assessing and regulating different (types of) risks under conditions of uncertainty. It started with the question of whether forecasting for COVID-19 failed. But it actually focused on the need to take (or refrain from taking any) strict but costly measures to prevent and control the spread of the disease, which is a rather political issue. It involves trade-off decisions which go beyond the best available science or even scientists’ authority, and consequently it shouldn’t be left to them. It could (and/or should) probably be best handled with informed public contributions—or at least consent.

3.1 Background

On March 17, 2020, John Ioannidis, Professor of Epidemiology at Stanford University and one of the most cited scientists in medical history, published an opinion essay in STAT arguing that the current coronavirus disease, COVID-19, may be ‘a once-in-a-century evidence fiasco’.

The argument behind this—seemingly hyperbolic—statement is quite simple and has two premises:

  A. There is no reliable evidence on how many people have been infected with SARS-CoV-2 (or who continue to become infected) and how the epidemic is evolving:

  • The actual COVID-19 testing capacity is limited in most countries, which makes it likely that some deaths and probably the vast majority of infections due to SARS-CoV-2 are being missed (a toy calculation after this list illustrates how this can inflate fatality estimates).

  • Patients who have been tested for SARS-CoV-2 are disproportionately those with severe symptoms and bad outcomes.

  • The COVID-19 Case Fatality Ratio (CFR) seems to vary from 0.05%—which is ‘lower than seasonal influenza’—to 1% (for the elderly population).

  • Especially for patients with multiple comorbidities (or infections), a positive test for coronavirus does not necessarily mean that the cause of a patient’s death is always this virus (cf. Lewiński & Abreu 2022, this volume).
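
The following toy calculation (all numbers hypothetical, not drawn from Ioannidis’ essay) illustrates the point about limited testing: a fatality ratio computed from confirmed cases alone can greatly overstate the fatality ratio among all infections.

```python
# Hypothetical numbers for illustration only: if testing detects just a fraction
# of infections, the case fatality ratio (deaths / confirmed cases) overstates
# the infection fatality ratio (deaths / all infections).
deaths = 100
confirmed_cases = 10_000
ascertainment = 0.10                      # assume only 10% of infections are detected

true_infections = confirmed_cases / ascertainment
cfr = deaths / confirmed_cases            # based on detected cases only
ifr = deaths / true_infections            # based on all (mostly undetected) infections

print(f"CFR = {cfr:.2%}, IFR = {ifr:.2%}")   # CFR = 1.00%, IFR = 0.10%
```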

So, Ioannidis argues, there is much we do not know about COVID-19: There is no evidence so far that SARS-CoV-2 causes more severe illness than previous versions of the virus or increased risk of death. And yet, he adds:

  B. We are adopting measures of dubious effectiveness and safety:

    • We do not know whether or how effective the measures implemented to prevent and reduce the transmission of SARS-CoV-2 are. Some of these measures may have undesirable consequences and even exacerbate the problem.

    • The already overburdened public health systems are being pushed to their limits. If they collapse, the majority of the extra deaths will be due to other common diseases and conditions, which normally are effectively addressed.

    • Given the uncertainties surrounding our assessments of the development and duration of the pandemic, the coronavirus health crisis may be followed by an unprecedented socioeconomic and/or even mental health crisis, if social distancing measures and lockdowns last for too long.

The dilemma arising from the uncertainties we have to deal with in the face of COVID-19 is therefore the following: should we adopt aggressive (but potentially harmful) measures to manage the COVID-19 crisis, or should we refrain from taking any (tough) measure, at the risk of greatly increasing the number of deaths? Ioannidis estimates that the total number of deaths could reach 40 million globally—which sounds enormous. We can only hope that life will continue, as he says. But should we perhaps take this risk, if it is just ‘the most pessimistic scenario’ or if ‘the vast majority of this hecatomb would be people with limited life expectancies’? Do we really need ‘more or sufficient data’—if we can ever obtain them—to guide decision-making? And finally, how long should we wait for these data, and at what cost?

A reasonable objection to Ioannidis’ demand for more data could be that a decision to postpone decision-making until more data are collected is still a decision. Indeed, as Taleb was quick to point out, this is the so-called delay fallacy: “If we wait, we will know more about X, hence no decision about X should be made now”. But, instead of focusing on the questions and arguments that Ioannidis posed, many took him to be the black sheep of the scientific community. ‘A week ago, Ioannidis’ legacy in medical science seemed unassailable’, as Freedman (2020) says. ‘Today, not so much’. For many of his colleagues, Ioannidis’ views could support conspiracy theories in the middle of the crisis. ‘The prevailing take now is that Ioannidis has fallen prey to the very sorts of biases and distortions that he became revered for exposing in others’, adds Freedman. Ioannidis was accused of cherry-picking data to prove his point. And his failure would be ‘affirmed’ just two months later, when a science reporter for BuzzFeed News, Stephanie Lee, revealed that Ioannidis had failed to disclose financial ties: his study was funded in part by David Neeleman, founder of JetBlue Airways, who would certainly benefit from research indicating that the threat of COVID-19 had been exaggerated. Ioannidis rejected the suspicion of a financial conflict of interest, but this part of the story is beyond the scope of this chapter. The question at issue here is whether heretical scientific voices should be silenced or kept out of the public sphere.

3.2 The Debate

Three months later, in June 2020, John Ioannidis is invited by the International Institute of Forecasters to discuss how to handle the COVID-19 pandemic and its potentially devastating consequences with Nassim Taleb, Professor of Risk Engineering at the New York University (NYU) Tandon School of Engineering and widely known for his black swan theory. The discussion is conducted online, in the form of a debate between the two scientists organized by Pierre Pinson and Spyros Makridakis. The starting question is whether forecasting for COVID-19 failed, and the participants are invited, in a first phase, to simultaneously prepare two blog posts expressing their views, to be posted at exactly the same time. A second round follows: Ioannidis and Taleb are given the opportunity to read and consider each other’s arguments, as stated in the two blog posts, and both are then invited to write an opinion piece to better detail their views and explain why they think the opposite side’s view may not be an adequate response to the COVID-19 outbreak.

The aim of the debate is twofold, according to Pinson and Makridakis (2020): (a) to ‘alert and inform relevant stakeholders, who can then better appraise their recommendations about what needs to be done’ and (b) to ‘give some valuable insight on how to deal with similar situations in the future’. For the organizers of the debate, both these tasks depend on the quality and value of the forecasts produced; they tacitly attribute the opposing views of the two thinkers to different forecast evaluations. But one of the most important lessons we should draw from this debate is probably that the assumption underlying this initiative, regarding the basis of the disagreement and the relevance of forecast assessments, is rather shaky.

The question at issue for Ioannidis and Taleb is not whether forecasting for COVID-19 failed. There is no doubt (or controversy) over forecasting failure or the uncertainty inherent in clinical and epidemiological research. But because of this uncertainty:

  • Ioannidis argues that we may be overestimating the mortality risk of COVID-19, which means that more (reliable) data are needed before proceeding to such strong prevention and protection measures, while:

  • Taleb believes that the magnitude of the risk we run, if the pandemic’s worst-case scenario comes true, makes it imperative to take targeted actions to suppress the virus transmission as a matter of urgency.

So, here is the structure of the debate. Let’s call P the contested issue, viz., how to handle the risk involved in the assessment of a high mortality rate. Ioannidis looks for more and better-quality data to narrow down the risk before restrictive measures are taken, whereas Taleb takes the risk to be high enough to warrant an immediate curbing action plan. A scientific disagreement lies here in the interpretation of fat-tailed distributions, i.e., distributions with high probability of extreme outcomes, and the importance—or implications—of focusing on the extreme values of a distribution for the validity of forecasting, on the one hand, and the management of the (foreseeable) risks, on the other.
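
A small simulation sketch (illustrative only; the distributions and parameters are our assumptions, not taken from the debate) conveys what ‘fat-tailed’ means in practice: in a heavy-tailed sample a handful of extreme observations carry much of the total, whereas in a thin-tailed sample they do not.

```python
# Illustrative comparison of a thin-tailed (Gaussian) and a fat-tailed (Pareto)
# sample: what share of the total do the largest 1% of observations contribute?
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

gaussian = np.abs(rng.normal(0.0, 1.0, n))     # thin-tailed benchmark
pareto = 1.0 + rng.pareto(1.3, n)              # tail index ~1.3: heavy tail, finite mean

for name, sample in (("gaussian", gaussian), ("pareto", pareto)):
    top = np.sort(sample)[-n // 100:]          # the largest 1% of observations
    share = top.sum() / sample.sum()
    print(f"{name:>8}: top 1% of draws contribute {share:.1%} of the total")
```

Under such a distribution, averages computed from the bulk of the data say little about the quantity that matters for risk, which is the crux of Taleb’s position described below.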

Taleb argues that a global disease outbreak is an extreme but highly consequential event, i.e., an event that falls on the tail ends of a statistical distribution (it is not very likely to happen) and, nevertheless, represents a source of existential risk. In such a case ‘much of what takes place in the bulk of the distribution is just noise’, says Taleb. All relevant information lies in the tails themselves, and therefore risk management decisions should be based on them. ‘Sound risk management is concerned with extremes, tails and their full properties, and not with averages, the bulk of a distribution or naïve estimates’, as Taleb puts it, while ‘…more evidence is not necessarily needed. Extra (usually imprecise) observations, especially when coming from the bulk of the distribution, will not guarantee extra knowledge.’

Ioannidis argues that there is nothing special about the tails and that the focus should be on the entire predictive distribution. He claims that ‘when calibration/communication on extremes is adopted, one should also consider similar calibration for the potential harms of adopted measures.’ Moreover, he insists that we should accurately quantify the entire distribution of forecasts, instead of making single point predictions. He sees “selection bias” in choosing tail events, as done by Taleb (2020), whereas for Taleb the standard technique used there is the exact opposite of selection bias: "in Extreme Value Theory, one purposely focuses on extremes, to derive properties that nevertheless influence the rest of the distribution as well, especially from a risk management point of view", as he himself notes.
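
For readers unfamiliar with the technique Taleb invokes, the following sketch (synthetic data and an arbitrary threshold, chosen by us for illustration) shows the peaks-over-threshold flavour of Extreme Value Theory: one deliberately keeps only the extreme observations and fits a tail model to them.

```python
# Peaks-over-threshold sketch: keep only exceedances of a high threshold and fit
# a generalized Pareto distribution (GPD) to them. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.standard_t(df=3, size=20_000)        # heavy-tailed synthetic observations

threshold = np.quantile(data, 0.99)             # keep only the largest 1%
exceedances = data[data > threshold] - threshold

# fit the GPD to the exceedances, with the location parameter fixed at zero
shape, _, scale = stats.genpareto.fit(exceedances, floc=0.0)
print(f"estimated tail (shape) parameter: {shape:.2f}, scale: {scale:.2f}")
# a clearly positive shape parameter signals a heavy tail: extreme outcomes decay
# slowly, so tail properties, not the bulk or the mean, drive the risk estimates
```

Whether such a deliberate focus on extremes counts as sound tail modelling or as selection bias is exactly what the two authors dispute.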

It is obvious that if we focus on the extremes, as Taleb suggests, we will probably overestimate the risk: we will assume that this risk is higher than it actually is, or at least higher than the mean of the distribution suggests. We will probably underestimate the risk, on the other hand, if we focus on the entire distribution of forecasts, as Ioannidis suggests, although it is not clear here whether the study of the entire distribution amounts to the calculation of its mean, which is the target of Taleb’s criticism. In any case, risk assessment varies depending on where the emphasis is put. However, the opposite might also be true. It might be that the assessment and relative weighting of (foreseen) risks determines where the emphasis should be put, in which case we are talking about a (non-epistemic) decision, i.e., a decision that cannot but involve (and/or should involve) human values.

3.3 Argumentation Schemes and Fallacies

It is not clear here which of the two applies—whether, that is, the decision on where the emphasis should be put depends on prior risk assessments or vice versa—and the argumentative practices to which the two scientists resort could create even greater confusion. In Taleb’s paper, for instance, there is a whole section entitled ‘fortune-cookie evidentiary methods’, meant to stress the failure of evidentiary methods to work under both risk management and fat tails. But it should be obvious that the reference to ‘fortune cookies’ can neither raise awareness nor improve public understanding of science. It clearly aims to strengthen Taleb’s argument that we should adopt strong measures for COVID-19 prevention and control. And yet, it could also reinforce the already existing tendency towards public mistrust of science, if taken at face value, which arguably goes far beyond Taleb’s intentions.

Ioannidis (2020) accuses—rightly—Taleb and social media of having misrepresented his positions. As he puts it:

Taleb caricatures the position of a hotly debated mid-March op-ed by one of us, alluring it “made statements to the effect that one should wait for “more evidence” before acting with respect to the pandemic”, a strawman distortion. Anyone who reads the op-ed unbiasedly realizes that it says exactly the opposite. (p. 7)

And a few lines below:

Another strawman distortion propagated in social media is that supposedly the op-ed had predicted only 10,000 deaths in the USA. The key message of the op-ed was that we lack reliable data, i.e., we don’t know. The strawman interpretation as “we don’t know, but actually we do know that 10,000 deaths will happen” is maliciously self-contradicting. (ibid., p. 8)

But he changes, in turn, the subject of discussion, when he ironically notes the progress made in science since “the times of the Antonine plague or even 1890” (ibid., p. 6). The question, for Taleb, is not whether the science we have is capable of identifying the pathogen or elucidating its true prevalence, but how much time we should spend on it. He does not deny, that is, the value of science or the progress it has made. He is concerned with the consequences of a delayed response, whereas Ioannidis argues for evidence-based responses.

In both cases, we see a kind of ignoratio elenchi, i.e., a distortion of—or departure from—the issue in question, which creates a great deal of confusion regarding not only the subject but also the depth and extent of the disagreement. And yet both scientists appeal to science. Ioannidis insists on the need to use science (or intensive testing) and more reliable data, while Taleb resorts to definitions and re-definitions of the nature and role of science and evidence in his attempt to refute Ioannidis’ view, as illustrated in the following:

Apparently, the prevailing idea is that producing a single numerical estimate is how science is done […]. Well, no. That is not how ‘science is done’, at least in this domain, and that is not how informed decision-making should develop. (…) Science is about understanding properties, not forecasting single outcome. (Taleb, 2020, p. 1)

He indirectly but clearly connects Ioannidis’ view to conspiracy theories:

And if people take action boarding up windows, and evacuating, a claim that someone might afterwards make that ‘look it was not so devastating’, such claim should be considered closer to a lunatic conspiracy fringe than scientific discourse. (ibid., p. 2)

That is, he is supposed to be defending science against Ioannidis, although he seems to contradict himself when, at the end, he distinguishes real life from experiments. And he closes his paper with the emphatic wording of one more definition:

By definition, evidence follows – and does not precede! – rare impactful events. (ibid.)

which is based on an appeal to ancestral wisdom and Seneca’s authority:

Ancestral wisdom has numerous versions such as ‘Cineri nunc medicina datur’ (one does not give remedies to the dead), or the famous saying by Seneca ‘Serum est cavendi tempus in mediis malis’ (you don’t wait for peril to run its course to start defending yourself). (ibid.)

3.3.1 The Role of Analogies

A great part of the discussion focuses on the evaluation of the analogical arguments offered in support of one or the other view, although it is Taleb who mainly makes use of this strategy. We have already seen his reference to ‘fortune cookies’ with regard to the methods of evidence-based practice; while discussing evidentiary methods, Taleb uses five more analogies in order to persuade us that uncertainty makes it even more urgent to take tough measures. So, he argues that:

1. If you are uncertain about the skills of the pilot, you get off the plane

where the uncertainty characterizing COVID-19 is compared to the uncertainty we could have about the skills of a pilot.

A second analogy is expressed in the form of a question:

2. If there is an asteroid headed for earth, should we wait for it to arrive to see what the impact will be?

We are asked what we should do if an asteroid were headed for earth. But what follows is not the reply that is obvious for Taleb, namely that we should take action, but the objection that, for Taleb, one could raise if one ignores ‘the power of science to generalize (and classify), and the power of actions to possibly change the outcome of events’ (Taleb, 2020, p. 2).

We did not see this particular asteroid yet

Of course, this is not an anticipated objection. It is probably what we would have to reply, for reasons of coherence, if we were to follow Ioannidis’ argumentation. But the absurdity of such an objection shows how deep the ‘logical fallacy’ runs, according to Taleb.

This schema is repeated one more time, when the situation we are in is compared to a hurricane.

3. Similarly, if we had a hurricane headed for Florida, a statement that “We have not seen this hurricane yet, perhaps it will not be like the other hurricanes!” misses the essential role of risk management: to take preventive actions, not to complain ex post.

And there then follow two more analogies:

4. Waiting for the accident before putting the seat belt on

5. or [waiting for] evidence of fire before buying insurance

where calling for more evidence in the face of a pandemic is compared to waiting for an accident to happen (4) or for evidence of fire (5), and the need for immediate action to prevent the spread of the disease is compared accordingly to the need for the relevant precautionary action.

Do these analogies succeed in the aim they are used for? Do they strengthen Taleb’s argument? The use of analogical reasoning is, undoubtedly, quite common in science. It is supposed to have not only a heuristic but also a justificatory role. It has indeed been argued that analogy is a commonly used strategy for complex problem solving under uncertainty—or that uncertainty is a triggering mechanism for analogy, as Chan et al. (2012) put it—which means that analogical reasoning is rather legitimate—if not unavoidable—in the case of a pandemic.

To focus our attention, let us take a look at Hesse’s (1966) influential account of analogical reasoning in science. As is well known, Hesse spoke of reasoning in terms of the analogies between a source X and a target system Y: some strong positive analogies between X and Y, for otherwise there is no reason to think that X may be useful for its purpose; some negative analogies; and some neutral analogies, i.e., some properties about which we do not yet know whether they are positive analogies, and which may turn out to be either positive or negative. In light of the positive and the neutral analogies, the source system X can play a significant heuristic role; i.e., it can help the discovery of other properties of Y which may be either positive or negative analogies between X and Y. For, by trying to explore the neutral analogies between the source X and the target Y (i.e., by trying to find out whether Y possesses more of the properties of X), we end up with a better knowledge of what Y is and what it is not. The ‘transference’ of properties from X to Y, and hence the justificatory role of analogy, is a function of the strength of the positive and negative analogies.

However, all five cases Taleb uses are such that the negative analogies are too strong and the positive analogies too weak. The cases of hurricanes, car-crashes and fires are too predictable to serve as a model for COVID-19. The only positive analogy is that some risk is involved in all; the strong negative analogy is precisely that in the sources of the analogy the risk is very well-known and so is the cost of not taking preventive action. But in the case of COVID-19, both the risk and the cost of the measures proposed are heavily underdetermined. Hence, the supposed analogies are no more than rhetorical devices.

Ioannidis is quick to point out that the analogy between the need for precautionary and safety measures for COVID-19 and the need to wear a seat belt in a car is unfortunate, to say the least, because ‘seat belts cost next to nothing to produce in cars and have unquestionable benefits’ (Ioannidis, 2020, p. 8). They prevent ~50% of serious injuries and deaths at almost zero cost. They are therefore not equivalent, in terms of benefit–harm profile, to a prolonged draconian lockdown, but to some rather simple interventions, like face mask use and hand hygiene. For similar reasons, the analogy of fire insurance is considered inappropriate too. For ‘fire insurance makes sense only at reasonable price’, says Ioannidis. ‘Draconian prolonged lockdown may be equivalent to paying fire insurance at a price higher than the value of the house.’ (Ibid., p. 9).

But it is obvious that the two scientists do not focus on the same problem. Ioannidis is more concerned about the financial consequences of a prolonged lockdown, while Taleb focuses on the health risks of the pandemic. It is precisely for this reason, that they cannot agree upon what is analogous to what and from what aspect. And the same holds for the case of the Dutch flood risk management policy they discuss.

Taleb (2020, p. 4) says that Extreme Value Theory (EVT) applies to the Dutch policy of building and calibrating their dams and dykes on the extreme sea levels expected on the basis of EVT. They do not build them on the average height of the sea level, but on the extremes, ‘and not only on the historical ones but also on those one can expect by modelling the tail using EVT, mainly via semi-parametric approaches’, Taleb notes (ibid.). So, for Taleb, policy making in this case focuses on tail properties, not on the body of the probability distribution. Ioannidis (2020, p. 9) disagrees. He believes that this analogy is inappropriate too, mainly because, despite its cost, and contrary to lockdown measures, anti-flooding engineering has a favorable decision-analysis profile after considering multiple types of impact. He challenges EVT again and, along with it, the severity of flood control methods in the Netherlands (or the comparison of these methods with a prolonged lockdown, which, for him, would be equivalent only to an emergency evacuation). He argues that ‘the observed flooding maximum to-date does not preclude even higher future values.’ However, it is not clear here if Ioannidis’s criticism is aimed at the assumptions of EVT or at the assumption that the Dutch flood defense system is based on EVT.
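
To see what such a design policy amounts to numerically, here is an illustrative sketch (all data and parameters are hypothetical, not actual Dutch figures): one fits an extreme value model to annual maximum sea levels and reads off the level expected to be exceeded once in 10,000 years, which typically lies well above anything yet observed.

```python
# Illustrative return-level calculation with synthetic 'annual maximum sea levels':
# fit a generalized extreme value (GEV) distribution and compute the level with a
# 1-in-10,000-year exceedance probability. Numbers are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
annual_maxima = stats.gumbel_r.rvs(loc=3.0, scale=0.4, size=150, random_state=rng)

shape, loc, scale = stats.genextreme.fit(annual_maxima)   # fit GEV to the maxima

return_period = 10_000                                     # design standard (years)
design_level = stats.genextreme.ppf(1 - 1 / return_period, shape, loc=loc, scale=scale)

print(f"historical maximum: {annual_maxima.max():.2f} m")
print(f"design level for a {return_period}-year return period: {design_level:.2f} m")
```

With a 10,000-year standard and only 150 years of (synthetic) record, the design level comes out well above the historical maximum, which is the sense in which such a policy is calibrated on modelled extremes rather than on observed averages.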

The likelihood is that the two scientists assess both the prevailing circumstances and the (levels and kinds of) risks we can accept or tolerate under the threat of a pandemic differently. And so, the question remains as to whether this is a genuine scientific disagreement and if (or how) it should be communicated to the public.

4 Reasonable Scientific Disagreement

The preceding discussion suggests that scientists are often influenced by non-epistemic considerations. Both Ioannidis and Taleb need to make trade-off decisions that reflect ethical, economic and political interests and values, and that affect not only the recommendations they make at the level of science policy but also what they take to be true or valid, as we saw. And yet this is a reasonable scientific disagreement that can arise in the decision-making process under conditions of uncertainty, we argue here, where by ‘scientific’ we mean the following:

[Scientific]: a disagreement arising in the context of science or scientific enterprise, as a practice, which largely involves evidence-based reasoning and complies with certain norms or standards and rules (of inference).

while by ‘reasonable’ we mean the following:

[Reasonable]: a disagreement about P such that each side holds mutually inconsistent (or simply different) views about P without flouting any criteria of rationality (e.g., taking all relevant evidence into account, being responsive to reasons and argument, open to criticism etc.)

What is required for there to be reasonable disagreement about P? Either there is no fact of the matter about P, e.g., P is about an issue of taste or aesthetics. Or there is a fact of the matter about P, hence the disagreement is potentially factual, but there are value-related issues, such as the criterion of relevance or the level of acceptable risk. These issues are such that there are no context-independent ways to address them, or there is no value-free framework in which they can be set. But as will be shown in the sequel, contrary to the first case and in spite of the values involved, such a disagreement does not affect or challenge the validity of science as such; it can, furthermore, be settled, and minimized (or resolved) in light of new evidence.

We argue that the disagreement between Ioannidis and Taleb is a reasonable scientific disagreement, since:

  • Confidence in both science and scientists prevails or is at least positively related to the conduct of the debate and the willingness of the two parties to participate in it. Even when the adequacy of epidemiological data or the scope of the models used for their analysis are questioned, the reliability of scientific methodology is not affected. Nor is there any question of extra-scientific solutions.

  • The papers are subjected to peer review. Throughout the submission and peer-review process, the validity and strength of the authors’ reasoning, the research methodology, as well as the evidence provided in support of their claims are independently assessed and approved.

  • The use of certain standards and procedures or inference methods is required. Both the protagonists of the debate and the anonymous reviewers commit themselves to some normative principles or conventions, which are supposed to fill a large part of the gap between evidence and scientific hypotheses. They govern the collection, recording and interpretation of data thus constraining the relevant risks. Or they specify what amount of risk can reasonably be tolerated. They determine, for instance, the maximum acceptable magnitude of error (significance level), which is usually set to 0.05. They guide and/or restrict scientists in their decisions about experimental or research design (Wilholt, 2009, p. 98, 2013, pp. 242–243) and coordinate them; they help them cooperate with each other.

  • The context of the discussion is clearly defined. There is a clearly formulated and defined problem, while conventions determine also, to a large extent, whether or when a problem solving or management process (and solution) is appropriate or effective.

  • The uncertainty is no longer radical. It can be modelled or described through probabilistic reasoning (cf. Méndez, 2020). The risk of error is reduced with further data collection and probability redistribution. And so, dialogue remains open and continuous.

While, for example, one year ago we were still confronting radical uncertainty and high systemic risks (Méndez, 2020), in the period from the emergence of the COVID-19 pandemic to date, research has already led to more (reliable) data and a much better understanding of the nature of the disease, of COVID-19 transmission risk and of its prevention. We now know more about the symptoms of coronavirus and about the adverse effects occurring during (or even after) treatment. The systematic review and meta-analysis of the results of individual studies on droplet size and transmission, and on the distance droplets can travel, have led to safer conclusions regarding the distance that must be kept between individuals to reduce transmission of SARS-CoV-2, and regarding the effectiveness of masks and the specifications they should meet so as to be effective (Goodwin & Bogomoletc 2022, this volume). We have obtained more information about SARS-CoV-2 mutations and variants, the development of antibodies and the effects of environmental conditions on the dispersion of the virus, while similar progress has been accomplished at the methodological level. The models used to track and forecast the spread of COVID-19 are updated regularly to respond to new data, and the data are in turn enriched or revised through the use of more precise models.

So even if we admit that certainty is unattainable, the magnitude of our ignorance decreases. The distinction between what we know we do not know, the so-called ‘known unknowns’, and what we do not know we do not know or ‘unknown unknowns’ becomes more and more clear, while the scope, as well as the extent of the disagreement, may change over time, too.

But it is a reasonable disagreement too, since, far from being infallible or certain—and despite the increasing accumulation of data and information—scientific beliefs are subject to constant revision. Our estimates of existing and foreseen risks, or the solutions proposed to reduce them, may thus vary considerably over time, and the same holds true for policy decisions. As we learn more about the virus and update or improve our models, we obtain a better—albeit not full—understanding of how different policy decisions can impact the trajectory of COVID-19.

What needs to be stressed is that the urgency of the situation warrants immediate action. The risk of COVID-19 transmission requires acting in a situation of undesirable uncertainty, where policy decision-making largely reflects value trade-offs. Scientists need to judge and balance risks and expected benefits or costs, if they are to engage in devising policy responses. And here non-epistemic values become (a lot more) visible: more or less conscious economic, ethical, cultural or political interests and values, which may normally differ from scientist to scientist and result in conflicts. The variety of the values involved causes a substantial heterogeneity in risk assessments. Depending, that is, on the values they hold, scientists attach different weights to the associated risks. And policy responses vary accordingly.

This suggests that the translation of scientific evidence into policy-making and implementation is not a linear path. But as research continues and constantly new data come to light, the extent of the disagreement changes too, as said above. Scientists revise their explanations and models in light of new evidence, and they are forced in some sense to do so. For they submit them to the judgment of their colleagues (cf. Psillos, 2015).

5 Mis/disinformation—Propagation and the Need for Transparency

There is no doubt that the consensus view might be wrong; or that, unless it is legitimately inappropriate, a scientific dissent challenging the prevailing view can play a key role in the advancement of science (cf. de Melo-Martin & Intemann, 2018). But if we now turn to the strategies and tactics often used by those spreading fake news and other forms of misinformation with the intention to deceive the public, we see that these tactics are quite different. When, for example, it is argued that there is no coronavirus or that mRNA vaccines are going to alter the recipient’s DNA, the relevant information is not based on evidence. Those promoting such information are either non-scientists or discredited scientists who use their own standards. And yet the language they use sounds ‘scientific’. Or they stress the uncertainties surrounding science and the differences or disagreements arising synchronically and diachronically.

It should therefore be made clear that the key hallmark of science is not the absence of human values or, still less, the stability of its results, but the fact, instead, that scientists are expected to substantiate the claims they make, submit them to peer review and, in any case, revise them as new data emerge and research continues. Scientific recommendations or suggestions should be recognized as being transient.

But it is equally important to realize that scientists often disagree. Even when they have access to the same data and comply with established rules, scientists may hold different beliefs regarding data characterization and interpretation, or about what counts as relevant evidence. They may disagree, that is, on the levels of statistical significance they use and hence on the tolerable kinds and levels of risk (cf. Douglas, 2009), or on the acceptable ways of forming beliefs and inferences. In such a case we are talking about a deep or substantial disagreement. But this is a normal mode of communication within science and an arguably indispensable condition of its progress. It is well known that science progresses through disagreements, as one theory succeeds another, while De Cruz and De Smedt (2013) have convincingly argued that epistemic peer disagreement can be practically valuable too, that is, in the practice of science and with regard to the generation of new evidence and the (re)evaluation of existing evidence and assumptions.

That being so, there is a need to appropriately assess the legitimacy of the values involved in decision-making and, by extension, of the decisions themselves. For it is of course one thing to build theories, which is, in fact, a never-ending process, and quite another to make science-based policy. So, the public should be kept informed of the risks and potential interests involved in decision-making. It is a matter of justice and, at the same time, a way to restore public trust in science, given the increasing suspicion of the findings of science and of political decision-making related to cutting-edge science and technology. And, independently of their depth or causes, scientific disagreements are arguably essential to this end, when they are appropriately communicated. Since scientists are not always aware of the values guiding their reasoning, so as to make them explicit to stakeholders, open debates among experts holding opposing views can shed light on these influences.

So, to return to the question we posed above regarding which research results should be communicated to the public when evidence is inconclusive or scientists disagree: there is neither a way nor any legitimate reason to hide the uncertainty of science or the values involved in decision-making. It is actually unrealistic to think that scientists can provide policy advice without being influenced by their financial, social, political and personal interests and values, as Elliott and Resnik (2014) have convincingly argued, and imprudent, we would also add, to insist (or pretend) otherwise, when many of these decisions prove wrong and are revised.

6 Conclusion

In this chapter we dealt with a key question that has arisen as a result of the recent health crisis: how to communicate science under uncertainty. We used the recent Ioannidis-Taleb debate concerning the pandemic to achieve two things. First, to show how the quality and strength of argumentation is often clouded by the metaphorical use of language and by the use of rhetorical devices as well as logical fallacies. Second, to show how the substance of their argumentation involves reliance on various non-epistemic values and considerations. However, we argued that though both of the above should be flagged, neither of them renders a scientific dispute part of misinformation or disinformation. In science there is room for reasonable disagreement.

It is a key target of science communication to make the public appreciate these features of science, thereby enhancing the public’s trust in science, while at the same time acknowledging that scientific information is uncertain and revisable. Effective (and responsible) communication of reasonable disagreement as such could improve public understanding of the nature—the strengths and limits—of science.