1 Introduction

Academic economists have recently devoted a huge amount of energy to better understanding the science of pandemic dynamics in the face of the emergence of covid-19. Economists are contributing to the analysis of the covid-19 crisis by integrating economic dimensions into the models, such as the economic cost of social distancing and the statistical value of lives lost. These are key elements for public and private decision-makers interested in shaping strategies and policies that minimize the welfare cost of the crisis. My preferred reading list on this issue as I write this paper is composed of papers by Acemoglu and Chernozhukov (2020), Alvarez et al. (2020), Brotherhood et al. (2020), Favero et al. (2020), Fischer (2020), Greenstone and Nigam (2020), Miclo et al. (2020), and Pindyck (2020). This investment by the profession is impressive and highly policy-relevant. It has raised critical debates about, for example, when and how much to deconfine people, who should remain confined longer, the value of testing and tracing, and whether individual freedom of movement should be limited.

One of the most striking features of the crisis is the deep uncertainty that surrounded most parameters of the models at the initial stage of the pandemic. To illustrate, here is a short list of the sources of covid-19 uncertainties: the mortality rate, the proportion of asymptomatic cases, the rate of prevalence, the duration of immunity, the impact of various policies (lockdown, social distancing, compulsory masks, …) on the reproduction number, the proportion of people who could telework efficiently, and the possibility of cross-immunization from similar viruses. Still, all the models built over such a short period of time by economists assumed no parameter uncertainty, and I am no exception (Gollier 2020). This is astonishing. Large discrepancies between the predictions of these models, and between their associated “optimal” policies, reflect not deep disagreements about the dynamics of the pandemic, but rather deep uncertainties about the true values of its parameters. This parameter uncertainty should be recognized and integrated into the modeling. Economists are well aware that uncertainty is typically a key component in explaining observed behaviors and in shaping efficient policies. Precautionary savings, the option value of waiting before investing, risk premia on financial markets, insurance demand, risk-sharing and solidarity mechanisms, and preventive efforts are obvious examples demonstrating that risk and uncertainty are at the heart of the functioning of our society. But in the cases of climate change and covid-19, we most often assume away uncertainty when making policy recommendations, in spite of the fact that uncertainty is everywhere in these contexts. I see this as a striking failure of our profession to make itself useful in improving the world.

In this paper, I take one step towards including risk in the shaping of efficient pandemic policies. Suppose that a virus has contaminated a small fraction of the population, and that no treatment or vaccine is available. Because of the high lethality of the virus, I suppose that the only feasible strategy is to “crush the (infection) curve” by imposing a partial lockdown. The intensity of the confinement can be adapted in continuous time to the evolution of the pandemic so as to minimize the total cost of the confinement. Following Pollinger (2020), I show that in the absence of uncertainty, the optimal intensity of the lockdown is constant over time until the eradication of the virus from the population. The optimal confinement intensity is the best compromise between the short-term cost of tightening the confinement and the long-term benefit of shortening its duration. Confining people modifies the reproduction number. Under the standard SIR pandemic model (Kermack and McKendrick 1927), there is a quadratic relation between the instantaneous intensity of the confinement and the instantaneous reproduction number.

Consider the situation prevailing in the western world in April 2020, after a partial lockdown was imposed. In this context, suppose that the reproduction number under full lockdown is known, but that the reproduction number under full deconfinement is uncertain. This uncertainty will evaporate within a few weeks through observation of the propagation of the virus under the partial lockdown. How should this uncertainty, combined with learning, affect the initial intensity of the lockdown? Surprisingly, I show that it tends to reduce it. To obtain this result, I assume that the representative agent is risk-neutral. Risk nevertheless plays a role in this model because of two non-linear interactions: the quadratic relation between the cost of confinement and the instantaneous reproduction number, and the hyperbolic relation between the reproduction number and the duration of the pandemic. This double non-linearity makes the analysis quite complex, and I have been able to prove the main result only in the case of small risk. The calibration exercise suggests that my result holds for large risks too.

I use a simplified version of the SIR model introduced by Pollinger (2020). In the SIR model, the rate of change in new infections is equal to the sum of the rates of change in the numbers of infected and susceptible people in the population. When crushing the curve, the rate of change in the number of susceptible people remains almost constant at zero. For example, between early April and mid-July 2020, the number of susceptible people in France was officially estimated to have been reduced by 170,000 persons, out of a population of 66 million. Taking account of unrecorded infections, the number of susceptible persons was reduced by just a few percent. During the same period, the number of infectious people in France went down by a factor larger than 20. Thus, when crushing the curve, the dynamics of the pandemic are almost entirely driven by the rate of change in the number of infectious people. In this paper, I assume that they are entirely driven by changes in the prevalence rate. This approximation is exact when the initial prevalence rate tends to zero, assuming a reproduction number R less than unity. My results hold only under this approximation. An important unsolved question is the impact of uncertainty on the initial prevention effort when the initial rate of prevalence is large, or when the objective is herd immunity (\(R > 1\)).

There is a long tradition in decision theory and finance on optimal choice under uncertainty and learning to which this paper is related. It is closest to the literature on the real option value of waiting introduced by McDonald and Siegel (1984) and popularized by Dixit and Pindyck (1994). An important message from this literature is that risk-neutral agents may optimally reduce their initial effort towards a long-term goal in order to obtain additional information about the welfare impact of this effort. I obtain a similar result in this pandemic model. As in all real option value models, there is a cost and a benefit to reducing the initial lockdown. The benefit is the reduction in the immediate economic, social and psychological costs associated with confining people. The cost is that a weaker initial lockdown lengthens the uncertain duration of the lockdown necessary to eradicate the virus, or requires a more intense lockdown in the future. The uncertainty surrounding the reproduction number affects this expected cost because of the intricate non-linearities in the duration of the pandemic and in the sensitivity of the optimal future lockdown to new information. It turns out that the uncertainty reduces the expected cost of weakening the initial lockdown, so that it is optimal to initially confine people less intensively.

2 The model

My model is based on the classical SIR model developed by Kermack and McKendrick (1927) to describe the dynamics of a pandemic. Each person is either Susceptible, Infected or Recovered, i.e., the health status of a person belongs to \(\{S,I,R\}\). This implies that \(S_{t}+I_{t}+R_{t}=N\) at all dates \(t\ge 0\). A susceptible person can be infected by meeting an infected person. Following the key assumption of all SIR models, the number of new infections is assumed to be proportional to the product of the numbers of infected and susceptible persons in the population, weighted by the intensity of their social interaction. With no further justification, this is quantified as follows:

$$\begin{aligned} \frac{\mathrm{{d}}S_t}{\mathrm{{d}}t}=-\beta _{t}I_{t}S_{t}. \end{aligned}$$
(1)

I will soon describe how \(\beta _{t}\), which measures the intensity of the risk of contagion of a susceptible person by an infected person at date t, is related to the social interactions between these two groups and to the confinement policy. Once infected, a person quits this health state at rate \(\gamma \), so that the dynamics of the infection satisfy the following equations:

$$\begin{aligned} \frac{\mathrm{{d}}I_t}{\mathrm{{d}}t}=\beta _{t}I_{t}S_{t}-\gamma I_{t}. \end{aligned}$$
(2)
$$\begin{aligned} \frac{\mathrm{{d}}R_t}{\mathrm{{d}}t}=\gamma I_{t} \end{aligned}$$
(3)

The pandemic starts at date \(t=0\) with \(I_0\) infected persons and \(N-I_0\) susceptible persons. I assume that the virus is eradicated when the number \(I_t\) of infected persons goes below \(I_{\min }\), in which case an aggressive tracing-and-testing strategy is implemented to eliminate the last clusters of the epidemic.

Because on average an infected person remains contagious for a duration \(1/\gamma \), and because the instantaneous number of susceptible persons infected by a sick person is \(\beta _tS_t\), the reproduction number at date t equals

$$\begin{aligned} r_{t}=\frac{\beta _{t}S_{t}}{\gamma }. \end{aligned}$$
(4)

Herd immunity is obtained when the number of infected persons starts to decrease over time. From Eq. (2), this occurs when the number of susceptible persons goes below the herd immunity threshold \(S^*=\gamma /\beta _t\), i.e., when the reproduction number goes below 1. In this paper, I focus on policies aimed at “crushing the curve”, under which \(r_t\) remains permanently below unity. Other policies, such as the laissez-faire policy or policies aimed at “flattening the curve”, consist in building herd immunity through a rapid or gradual infection of a large fraction of the population, implying a large health cost but a limited economic cost. When crushing the curve, a sufficiently strong confinement is imposed on the population to maintain the reproduction number permanently below 1, so that the virus is eradicated relatively quickly. Under this family of scenarios, the number \(S_t\) of susceptible persons remains almost constant, very far from the herd immunity threshold reached under the laissez-faire policy. This implies that the changes in \(I_tS_t\) in Eq. (2) come mostly from changes in \(I_t\). Following Pollinger (2020), I therefore simplify the SIR dynamics described above into a single differential equation:

$$\begin{aligned} \frac{\mathrm{{d}}I_t}{\mathrm{{d}}t}=(\beta _{t}S-\gamma ) I_{t}, \end{aligned}$$
(5)

where S is the average number of susceptible persons during the pandemic. This approximation of the SIR model is exact when the ratio of infected to susceptible is close to zero.
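The quality of this approximation is easy to check numerically. The following sketch (a minimal illustration, not part of the original calibration; \(\gamma =1/7\), an effective reproduction number of 0.8 under the lockdown, and an initial prevalence of 0.1% are all assumed values) integrates the full dynamics of Eqs. (1) and (2) by forward Euler and compares the resulting prevalence to the exponential solution of Eq. (5):

```python
# Minimal check of the approximation (5) of the SIR model when "crushing
# the curve". All parameter values are illustrative assumptions.
import math

gamma = 1.0 / 7.0        # exit rate from the infected state (1/gamma = 7 days)
N = 1.0                  # population size, normalized to 1
I0 = 0.001               # initial prevalence: 0.1% of the population
S0 = N - I0
r = 0.8                  # effective reproduction number under the lockdown (< 1)
beta = r * gamma / S0    # contagion intensity consistent with r at date 0

# Forward-Euler integration of dS/dt = -beta*I*S, dI/dt = beta*I*S - gamma*I.
dt, T = 0.01, 120.0
S, I = S0, I0
t = 0.0
while t < T:
    dS = -beta * I * S
    dI = beta * I * S - gamma * I
    S += dS * dt
    I += dI * dt
    t += dt

# Approximate dynamics (5): S is frozen at S0, so I decays exponentially.
I_approx = I0 * math.exp((beta * S0 - gamma) * T)

print(f"relative fall in S : {(S0 - S) / S0:.4f}")
print(f"exact I(T)         : {I:.3e}")
print(f"approximate I(T)   : {I_approx:.3e}")
```

With these numbers, the stock of susceptible persons falls by less than one percent over four months while prevalence falls by a factor of about 30, so the exponential approximation tracks the exact prevalence within a few percent, consistent with the claim that the dynamics are driven almost entirely by \(I_t\).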

I examine policies of social distancing and lockdown. Let x denote the intensity of this policy. One can interpret x as the fraction of the population that is confined. For simplicity, I assume that infected people are asymptomatic and that no PCR test is available, so that the intensity of confinement cannot be conditioned on health status. This means that x is the fraction of people, whether infected or susceptible, who are confined. A free infected person has a reproduction number \(r_{\text {f}}=\beta _{\text {f}}S/\gamma \). I assume that there is no herd immunity at the start of the pandemic, i.e., that \(r_{\text {f}}\) is larger than unity, or \(\beta _{\text {f}}S>\gamma \). The confinement reduces this number to \(r_{\text {c}}=\beta _{\text {c}}S/\gamma \), with \(\beta _{\text {c}}< \beta _{\text {f}}\). I assume that the full confinement of the population crushes the curve in the sense that \(r_{\text {c}}<1\), or \(\beta _{\text {c}}S< \gamma \).

As said earlier, a crucial element of the SIR model is that the speed of infection is proportional to the product of the numbers of people infected and susceptible. Confining people reduces both the number of infected people and the number of susceptible persons, implying a quadratic relation between the intensity x of the confinement and propagation of the virus in the population (Acemoglu and Chernozhukov 2020). From this observation, the pandemic parameter \(\beta _t\) takes the following form:

$$\begin{aligned} \beta _t=\beta (x_t)=(\beta _{\text {c}} x_t+\beta _{\text {f}} (1-x_t))(1-x_t). \end{aligned}$$
(6)

The true contagion rate \(\beta _{\text {c}} x_t+\beta _{\text {f}} (1-x_t)\) of infected people is a weighted average of the contagion rates \(\beta _{\text {c}}\) and \(\beta _{\text {f}}\) of infected people who are, respectively, confined and free to live their lives. They meet only a fraction \(1-x_t\) of susceptible people, because the remaining fraction \(x_t\) is locked down. The quadratic nature of this relation plays a crucial role in this paper. The lockdown also has an economic cost. I assume that the instantaneous cost of confining a fraction x of the population at date t is equal to wx, where \(w>0\) can be interpreted as the sum of the wage and psychological costs of confinement. Abstracting from discounting given the short duration of the pandemic when crushing the curve, the objective of the policy is to minimize the total cost of the health crisis. This yields the following value function:

$$\begin{aligned} V(I)=\min _{x(.)}\quad w\int _0^T x(t){\text {d}}t\quad \text {s.t. }I_0=I\text { and }I_T=I_{\min }, \end{aligned}$$
(7)

where I is the current rate of prevalence of the virus in the population. The termination date T corresponds to the time when the rate of prevalence attains the eradication threshold \(I_{\min }\). Observe that this objective ignores the potential lethality of the virus. But even when the virus is lethal, policies aimed at crushing the curve typically yield economic costs that are at least one order of magnitude larger than the value of lives lost (Gollier 2020), thereby justifying this objective of minimizing costs.

3 Optimal suppression under certainty

Pollinger (2020) derives the solution of a more general version of this dynamic problem under certainty. Using standard dynamic programming techniques, problem (7) can be rewritten as follows:

$$\begin{aligned} V(I)= & {} \min _x\quad wx\Delta t+V(I+(\beta (x)S-\gamma )I\Delta t) \\\approx & {} \min _x\quad wx\Delta t+V(I)+(\beta (x)S-\gamma )IV'(I)\Delta t, \end{aligned}$$

or, equivalently,

$$\begin{aligned} 0=\min _{x}\text { }wx+(\beta (x)S-\gamma )IV'(I). \end{aligned}$$
(8)

The first-order condition of this problem is

$$\begin{aligned} w=-\beta _x(x^*)SIV'(I), \end{aligned}$$
(9)

where \(\beta _x\) is the derivative of \(\beta \) with respect to x. Equation (9) expresses the optimal intensity \(x^*(I)\) of confinement as a function of the rate of prevalence of the virus. Let us guess, however, a constant solution \(x^*\) independent of I. From Eq. (9), this would be the case if \(IV'(I)\) is a constant. Under such a constant strategy, the duration T of the pandemic is such that

$$\begin{aligned} I_{\min }=I \exp ((\beta (x^*)S-\gamma )T). \end{aligned}$$
(10)

This equation tells us that there is a hyperbolic relation between the reproduction number and the duration of the pandemic. The total cost under such a constant strategy is

$$\begin{aligned} V(I)=wx^*T=\frac{-wx^*}{\beta (x^*)S-\gamma }\ln \left( \frac{I}{I_{\min }}\right) . \end{aligned}$$
(11)

This implies that \(IV'(I)\) is indeed a constant, thereby confirming the guess that it is optimal to maintain a constant intensity of lockdown until the eradication of the virus. Combining Eqs. (9) and (11) yields the following optimality condition for \(x^*\):

$$\begin{aligned} x^*=\frac{\beta (x^*)S-\gamma }{\beta _x(x^*)S}. \end{aligned}$$
(12)

The optimal intensity of lockdown is the best compromise between the short-term benefit of easing the lockdown and the long-term cost of a longer pandemic. Under the quadratic specification (6) for \(\beta \), Eq. (12) simplifies to

$$\begin{aligned} x^*=\sqrt{\frac{\beta _{\text {f}}S-\gamma }{\beta _{\text {f}}S-\beta _{\text {c}}S}}=\sqrt{\frac{r_{\text {f}}-1}{r_{\text {f}}-r_{\text {c}}}}. \end{aligned}$$
(13)

Because \(r_{\text {c}}<1<r_{\text {f}}\), the optimal intensity of confinement is between 0 and 1. For example, if the reproduction number goes from 2 to 0.5 when moving from laissez-faire to the 100% lockdown, the optimal intensity of confinement is \(\sqrt{2/3}\approx 81.6\%\). I summarize my results under certainty in the following proposition. Its first part is a special case of Pollinger (2020).

Proposition 1

Under certainty, the optimal suppression strategy is to impose a constant intensity of confinement until the virus is eradicated. In the quadratic case (6), the optimal intensity of confinement is \(\sqrt{(r_{\text {f}}-1)/(r_{\text {f}}-r_{\text {c}})}\), where \(r_{\text {f}}\) and \(r_{\text {c}}\) are the reproduction numbers under respectively the laissez-faire and the full lockdown.
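The closed-form results of this section are easy to evaluate. The sketch below (an illustrative computation; \(\gamma =1/7\), \(w=1\) and \(\ln (I/I_{\min })=\ln 1000\) are assumed values, while \(r_{\text {f}}=2\) and \(r_{\text {c}}=0.5\) are the values of the example above) computes the optimal intensity (13), the implied effective reproduction number, the duration (10) and the total cost (11):

```python
# Optimal constant confinement under certainty, Eqs. (10), (11) and (13).
# gamma, w and I/I_min are illustrative assumptions; r_f and r_c follow
# the numerical example in the text.
import math

gamma = 1.0 / 7.0    # exit rate from infection (assumption)
w = 1.0              # daily cost of full confinement (normalization)
rf, rc = 2.0, 0.5    # reproduction numbers: laissez-faire and full lockdown
log_ratio = math.log(1000.0)   # ln(I / I_min), an assumed prevalence ratio

# Eq. (13): optimal constant intensity of confinement.
x_star = math.sqrt((rf - 1.0) / (rf - rc))

# Effective reproduction number under the quadratic specification (6),
# in units of gamma: r(x) = (rc*x + rf*(1-x)) * (1-x).
r_eff = (rc * x_star + rf * (1.0 - x_star)) * (1.0 - x_star)

# Eq. (10): duration of the pandemic, and Eq. (11): total cost w * x* * T.
T = log_ratio / (gamma * (1.0 - r_eff))
cost = w * x_star * T

print(f"x*    = {x_star:.4f}")   # about 0.8165
print(f"r(x*) = {r_eff:.4f}")    # well below 1
print(f"T     = {T:.1f} days")
print(f"cost  = {cost:.1f} (full-lockdown-day equivalents)")
```

With these assumed values, the pandemic is eradicated in about two months of constant partial confinement.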

4 Optimal suppression under uncertainty

Suppose that some parameters of the pandemic are unknown at date 0. Suppose also that the only way to learn the true values of these parameters is to observe the dynamics of the pandemic over time. How should this parameter uncertainty affect the optimal effort to fight the virus? I have not been able to solve the continuous-time version of this dynamic learning problem. I therefore simplify the problem as follows. I assume that the parameter \(\beta _{\text {f}}\) is unknown. At date 0, an intensity \(x_0\) of confinement must be chosen under uncertainty about \(\beta _{\text {f}}\). This intensity of confinement is maintained until date \(\tau \). Between dates 0 and \(\tau \), the observation of the propagation of the virus informs us about \(\beta _{\text {f}}\). At date \(\tau \), therefore, \(\beta _{\text {f}}\) is known and the intensity of confinement is adapted to this information. My objective is to compare the optimal \(x_0\) under uncertainty to the \(x_0\) that would be optimal when ignoring the fact that \(\beta _{\text {f}}\) is uncertain.

This is thus a two-stage optimization problem that I solve by backward induction. From date \(\tau \) on, there is no more uncertainty. As shown in the previous section, it is optimal to revise the confinement policy in light of the information about the true \(\beta _{\text {f}}\), and the optimal contingent policy \(x^*(\beta _{\text {f}})\) is constant until the eradication of the virus. The minimal total cost of this policy is denoted \(V(I_\tau ,\beta _{\text {f}})\). Combining Eqs. (11) and (12), it is equal to

$$\begin{aligned} V(I_\tau ,\beta _{\text {f}})=\frac{-w}{\beta _x(x^*(\beta _{\text {f}}))S}\ln \left( \frac{I_\tau }{I_{\min }}\right) . \end{aligned}$$
(14)

It is a function of the rate of prevalence \(I_\tau \) of the virus observed at date \(\tau \) and of the pandemic parameter \(\beta _{\text {f}}\) observed during the first stage of the pandemic.
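As a sanity check, combining Eqs. (11) and (12) into Eq. (14) requires \(x^*/(\beta (x^*)S-\gamma )=1/(\beta _x(x^*)S)\), which is just Eq. (12) rearranged. A minimal numerical verification under assumed values (\(\gamma =1/7\), S normalized to 1, \(r_{\text {c}}=0.5\), \(r_{\text {f}}=1.5\)):

```python
# Numerical sanity check of Eq. (14): combining Eqs. (11) and (12) requires
# x* / (beta(x*) - gamma) = 1 / beta_x(x*) when S is normalized to 1.
# gamma, r_c and r_f are illustrative assumptions.
import math

gamma = 1.0 / 7.0
rc, rf = 0.5, 1.5    # reproduction numbers: full lockdown and laissez-faire

def beta(x):
    """Quadratic specification (6), with S normalized to 1."""
    return gamma * (rc * x + rf * (1.0 - x)) * (1.0 - x)

def beta_x(x):
    """Derivative of beta with respect to x."""
    return gamma * ((rc - 2.0 * rf) + 2.0 * (rf - rc) * x)

x_star = math.sqrt((rf - 1.0) / (rf - rc))   # Eq. (13)

lhs = x_star / (beta(x_star) - gamma)   # coefficient of ln(I/I_min) in Eq. (11)
rhs = 1.0 / beta_x(x_star)              # coefficient of ln(I/I_min) in Eq. (14)
print(lhs, rhs)   # both negative, and equal
```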

The first stage of the pandemic takes place under uncertainty about \(\beta _{\text {f}}\). I assume risk neutrality, so that the objective is to minimize the expected total cost of the suppression strategy:

$$\begin{aligned} W_0=\min _{x_0}wx_0\tau +EV(I_\tau ,\beta _{\text {f}}), \end{aligned}$$
(15)

where \(I_\tau =I_0\exp ((\beta (x_0,\beta _{\text {f}})S-\gamma )\tau )\) is also a function of random variable \(\beta _{\text {f}}\). The first-order condition of this stage-1 problem can be written as follows:

$$\begin{aligned} E\left[ F(x_0^*,\beta _{\text {f}})\right] =1, \end{aligned}$$
(16)

with

$$\begin{aligned} F(x_0,\beta _{\text {f}})=\frac{\beta _x(x_0,\beta _{\text {f}})}{\beta _x(x^*(\beta _{\text {f}}),\beta _{\text {f}})}. \end{aligned}$$
(17)

In the absence of uncertainty, i.e., when \(\beta _{\text {f}}\) takes value \(\beta _{f0}\) with probability 1, the optimal solution is the solution of Eq. (16) in that particular case, which implies

$$\begin{aligned} x^*_0=x^*(\beta _{f0}). \end{aligned}$$
(18)

How do the uncertainty and learning about \(\beta _{\text {f}}\) affect the optimal effort to mitigate the pandemic? Because \(\beta \) is a convex function of the mitigation effort x, function F is increasing in \(x_0\). By Jensen’s inequality, Eq. (16) implies that the uncertainty affecting \(\beta _{\text {f}}\) reduces the optimal initial mitigation effort if and only if F is convex in its second argument. I have not been able to demonstrate in general that F is convex. I therefore limit my analysis to the case of a small risk surrounding \(\beta _{\text {f}}\). More precisely, suppose that \(\beta _{\text {f}}\) is distributed as \(\beta _{f0}+h\varepsilon \), where \(\beta _{f0}\) is a known constant, \(\varepsilon \) is a zero-mean random variable and h is an uncertainty-intensity parameter. I examine the sensitivity of the optimal confinement \(x^*_0\) to the intensity h in the neighborhood of \(h=0\). In the “Appendix”, I demonstrate that F is locally convex in its second argument, i.e., that \(x^*_0(h)\) is decreasing in h in the neighborhood of \(h=0\). More precisely, I show that \(x^{*'}_0(0)=0\) and \(x^{*''}_0(0)<0\). This yields the following main result of the paper.

Proposition 2

Consider the quadratic case (6). Introducing a small risk about the transmission rate \(\beta _{\text {f}}\) reduces the optimal initial intensity of confinement.

Proof See “Appendix”.
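Proposition 2 can be illustrated numerically by minimizing the stage-1 objective (15) directly. The sketch below is a minimal illustration under assumed values (\(\gamma =1/7\), \(w=1\), \(\tau =30\) days, \(\ln (I_0/I_{\min })=\ln 1000\), S normalized to 1, and \(\varepsilon \) taken as the symmetric binary \(\pm 1\) lottery); it grid-searches the optimal \(x_0\) for increasing risk intensities h:

```python
# Numerical illustration of Proposition 2: grid-search minimization of the
# stage-1 expected cost (15) with r_f = 1.5 + h*eps, eps = +/-1 equally
# likely. gamma, w, tau and I0/I_min are illustrative assumptions.
import math

gamma, w, tau = 1.0 / 7.0, 1.0, 30.0
log_ratio = math.log(1000.0)          # ln(I0 / I_min), assumed
rc, rf0 = 0.5, 1.5                    # reproduction numbers (S normalized to 1)

def beta(x, rf):
    """Quadratic specification (6), with S normalized to 1."""
    return gamma * (rc * x + rf * (1.0 - x)) * (1.0 - x)

def beta_x(x, rf):
    """Derivative of beta with respect to x."""
    return gamma * ((rc - 2.0 * rf) + 2.0 * (rf - rc) * x)

def stage2_value(log_I_tau, rf):
    """Continuation cost (14): V = -w * ln(I_tau/I_min) / beta_x(x*(rf))."""
    x_opt = math.sqrt(max(rf - 1.0, 0.0) / (rf - rc))   # Eq. (13)
    return -w * log_I_tau / beta_x(x_opt, rf)

def expected_cost(x0, h):
    """Stage-1 objective (15), with the two equally likely values of r_f."""
    total = w * x0 * tau
    for rf in (rf0 - h, rf0 + h):
        log_I_tau = log_ratio + (beta(x0, rf) - gamma) * tau
        total += 0.5 * stage2_value(log_I_tau, rf)
    return total

def x0_star(h, step=2e-4):
    grid = [i * step for i in range(int(1.0 / step))]
    return min(grid, key=lambda x0: expected_cost(x0, h))

for h in (0.0, 0.25, 0.5):
    print(f"h = {h:.2f} -> x0* = {x0_star(h):.4f}")
```

With these assumptions, the optimal initial confinement falls from about 70.7% at \(h=0\) to about 66.2% at \(h=0.5\), and the decline is monotone in h, in line with Proposition 2.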

I have used here a very specific strategy to explore this problem. Ideally, one should start from an uncertain \(\beta _{\text {f}}\) and impose on it some Rothschild-Stiglitz increase in risk. I limit the analysis to the special case in which the initial \(\beta _{\text {f}}\) is certain and only a small risk is added. I do not characterize the optimal reaction of a social planner facing an increase in risk in the reproduction number of the virus. For example, I cannot tell whether the intensity of the confinement should be increased if we learn that the virus has undergone a mutation that changes the reproduction number in an uncertain way. My result above only suggests that it could reduce the initial confinement effort, assuming that one pursues an eradication strategy. We should also address situations in which new social distancing measures (facial masks, ventilation of closed public spaces, \(\ldots \)) have an uncertain impact on the reproduction number. In these generalized frameworks, Proposition 2 only suggests that such sources of uncertainty could reduce the optimal instantaneous mitigation effort.

5 Calibration exercise

In this section, I quantify the negative impact of uncertainty on the optimal confinement in the learning stage 1. I solve numerically the optimality condition (16) in the quadratic context. This equation takes the following form in that case:

$$\begin{aligned} E\left[ \frac{(2\beta _{\text {f}}-\beta _{\text {c}})S-2(\beta _{\text {f}}-\beta _{\text {c}})Sx^*_0}{(2\beta _{\text {f}}-\beta _{\text {c}})S-2\sqrt{(\beta _{\text {f}}-\beta _{\text {c}})S(\beta _{\text {f}}S-\gamma )}}\right] =1 \end{aligned}$$
(19)

It yields the following solution:

$$\begin{aligned} x^*_0=\frac{E\left[ \frac{\sqrt{(r_{\text {f}}-r_{\text {c}})(r_{\text {f}}-1)}}{2r_{\text {f}}-r_{\text {c}}-2\sqrt{(r_{\text {f}}-r_{\text {c}})(r_{\text {f}}-1)}}\right] }{E\left[ \frac{r_{\text {f}}-r_{\text {c}}}{2r_{\text {f}}-r_{\text {c}}-2\sqrt{(r_{\text {f}}-r_{\text {c}})(r_{\text {f}}-1)}}\right] }, \end{aligned}$$
(20)

where \(r_{\text {f}}=\beta _{\text {f}}S/\gamma \) and \(r_{\text {c}}=\beta _{\text {c}}S/\gamma \) are the reproduction numbers under laissez-faire and total lockdown, respectively. I first describe a simulation in the spirit of the covid-19 crisis. There has been much debate about the reproduction number under the laissez-faire policy. Ferguson et al. (2020) assumed that it was between 2 and 2.6 at the beginning of the pandemic. However, I focus in this paper on a post-lockdown situation in which people have learned the benefits of washing hands, wearing masks and basic social distancing. The expected reproduction number under laissez-faire in this new situation is therefore probably smaller than 2. I assume an expected value of \(Er_{\text {f}}=1.5\). For France, Santé Publique France has estimated the reproduction number at different stages of the pandemic. It was estimated at 0.8 at the end of the strong confinement period in May. Because the confinement was partial, this observation is compatible with an \(r_{\text {c}}\) equal to 0.5.

Fig. 1

Optimal confinement \(x^*_0\) in stage 1 as a function of the intensity h of the uncertainty. I assume that \(r_{\text {c}}=0.5\) and \(r_{\text {f}}=1.5+h\varepsilon \), with \(\varepsilon \sim (-1,\pi ;\pi /(1-\pi ), 1-\pi )\)

In Fig. 1, I describe the optimal intensity \(x^*_0\) in stage 1 as a function of the intensity h of the uncertainty surrounding \(r_{\text {f}}\), with \(r_{\text {f}}=1.5+h\varepsilon \) and \(E\varepsilon =0\). More specifically, I consider the binary distribution \(\varepsilon \sim (-1,\pi ;\pi /(1-\pi ), 1-\pi )\). In order to keep \(r_{\text {f}}\) above 1 with probability 1, I consider risk intensities h between 0 and 0.5. Under certainty (\(r_{\text {f}}=1.5\) with certainty, or \(h=0\)), the optimal intensity of confinement is a constant \(\sqrt{0.5}\approx 70.7\%\). Suppose alternatively that \(r_{\text {f}}\) is either 1 or 2 with equal probabilities. In that case, the optimal confinement goes down to \(66.2\%\). If our beliefs about the reproduction number \(r_{\text {f}}\) are distributed as 1 with probability 0.9 and 6 with probability 0.1, the optimal initial confinement goes down to \(61.4\%\).
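These numbers can be reproduced directly from the closed-form solution (20). A minimal evaluation, with the beliefs and \(r_{\text {c}}\) described above:

```python
# Evaluation of the closed-form solution (20) for the optimal initial
# confinement under a discrete distribution of beliefs about r_f.
import math

def x0_star(beliefs, rc):
    """Eq. (20): beliefs is a list of (r_f, probability) pairs."""
    num = den = 0.0
    for rf, p in beliefs:
        root = math.sqrt((rf - rc) * (rf - 1.0))
        d = 2.0 * rf - rc - 2.0 * root
        num += p * root / d
        den += p * (rf - rc) / d
    return num / den

rc = 0.5
print(f"{x0_star([(1.5, 1.0)], rc):.4f}")              # certainty: ~ 0.7071
print(f"{x0_star([(1.0, 0.5), (2.0, 0.5)], rc):.4f}")  # ~ 0.662
print(f"{x0_star([(1.0, 0.9), (6.0, 0.1)], rc):.4f}")  # ~ 0.614
```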

In Fig. 2, I describe the percentage reduction in the optimal initial confinement for different values of \(r_{\text {c}}\) and \(r_{\text {f}}\sim (1,1/2;2\overline{r}_{\text {f}}-1,1/2)\). We see that the impact of uncertainty on the optimal confinement is largest when the reproduction numbers with and without confinement are both close to unity. Suppose, for example, that \(r_{\text {c}}=0.9\) and \(\overline{r}_{\text {f}}=1.1\). Under certainty, the optimal confinement is 70.7%. If \(r_{\text {f}}\) is distributed as (1, 1/2; 1.2, 1/2), the optimal initial confinement goes down to 34.7%, a 51% reduction in the initial mitigation effort.

Fig. 2

Percentage reduction in the optimal confinement \(x^*_0\) in stage 1 due to uncertainty for different values of \((r_{\text {c}},\overline{r}_{\text {f}})\). I assume that \(r_{\text {f}}\) is distributed as \((1,1/2;2\overline{r}_{\text {f}}-1,1/2)\)

6 Concluding remarks

The uncertainty surrounding the reproduction number when reducing the strength of the lockdown is an argument in favor of lowering the intensity of this lockdown in the learning phase of the pandemic. This rather surprising result is the outcome of two non-linearities of the model. First, the duration of the pandemic is a hyperbolic function of the reproduction number. Second, the reproduction number is a quadratic function of the cost of confinement. These two non-linearities explain why one should be sensitive to uncertainty when shaping the confinement policy, but I confess that these observations alone do not explain why this uncertainty should reduce the optimal confinement in the first stage of the pandemic. More work should be done to explain this result.

This research opens a new research agenda that I am glad to share with the readers of this paper. For example, shame on me, I assume here risk neutrality, in spite of the large size of the risk and its correlation with aggregate consumption. Could there be a precautionary motive for a larger initial intensity of confinement? My result should no doubt be refined in that direction. Also, I have limited the analysis to suppression policies. This restriction was necessary to simplify the dynamic equations of the generic SIR model, so that the assumption of an almost constant number of susceptible people in the population is a reasonable approximation. This excludes the possibility of comparing the optimal solution within this family of policies to other plausible policies, in particular policies aimed at attaining a high rate of herd immunity. Introducing uncertainty into the generic SIR model and measuring its impact on the optimal policy is another promising and useful road for research. My to-do list also includes the exploration of other sources of uncertainty, such as not knowing the rate of prevalence, the fraction of the population already immunized, or the time of arrival of a vaccine. Finally, because the value of lives lost associated with most suppression strategies is typically one or two orders of magnitude smaller than the direct economic cost of the lockdown, I have assumed that the objective of the social planner is to minimize the economic cost incurred to eradicate the virus. It would be useful, as in Pollinger (2020), to incorporate the value of lives lost into the objective function.