Abstract
We consider the following stochastic heat equation
defined for \((t,x)\in (0,\infty )\times {\mathbb {R}}\), where \({\dot{W}}\) denotes space-time white noise. The function \(\sigma \) is assumed to be positive, bounded, globally Lipschitz, and bounded uniformly away from the origin, and the function b is assumed to be positive, locally Lipschitz and nondecreasing. We prove that the Osgood condition
implies that the solution almost surely blows up everywhere and instantaneously. In other words, the Osgood condition ensures that \(\textrm{P}\{ u(t,x)=\infty \quad \hbox { for all } t>0 \hbox { and } x\in {\mathbb {R}}\}=1.\) The main ingredients of the proof involve a hitting-time bound for a class of differential inequalities (Remark 3.3), and the study of the spatial growth of stochastic convolutions using techniques from the Malliavin calculus and the Poincaré inequalities that were developed in Chen et al. (Electron J Probab 26:1–37, 2021; J Funct Anal 282(2):109290, 2022).
1 Introduction
We consider the following stochastic heat equation
The noise term is space-time white noise; that is, \({\dot{W}}\) is a centered, generalized Gaussian random field with
Throughout, we assume that \(u_0\), \(\sigma \) and b satisfy the following hypotheses:
Assumption 1.1
The initial profile \(u_0\) is a non-random bounded function.
Assumption 1.2
\(\sigma :{\mathbb {R}}\rightarrow (0, \infty )\) is Lipschitz continuous, and satisfies \(0<\inf _{\mathbb {R}}\sigma \le \sup _{\mathbb {R}}\sigma <\infty .\)
Assumption 1.3
\(b:{\mathbb {R}}\rightarrow (0,\,\infty )\) is locally Lipschitz continuous, as well as nondecreasing.
We recall that a random field solution to (1.1) is a predictable random field \(u=\{u(t,x)\}_{t \ge 0, x \in {\mathbb {R}}}\) that satisfies the following integral equation:
where
the symbol \(*\) denotes convolution, and
When b and \(\sigma \) are Lipschitz continuous, general theory ensures that the SPDE (1.2) is well posed; see Dalang [5] and Walsh [20]. However, general theory fails to be applicable when b and/or \(\sigma \) are assumed to be only locally Lipschitz continuous. Here, we can exploit the fact that b is nondecreasing in order to ensure the existence of a “minimal solution” u under Assumptions 1.2 and 1.3. Section 4 contains the details of the construction of the minimal solution. But we can summarize that effort succinctly as follows: Consider (1.1) with b replaced by \(b\wedge n\) and denote the solution by \(u_n\). Because \(b\wedge n\uparrow b\) and \(b\wedge n\) is globally Lipschitz continuous, \(u_n\) is a classical solution and can be shown to increase pointwise to a random field u. Moreover, u is a mild solution to (1.1) whenever the latter makes sense; see §4. The random field u is the minimal solution in the sense that any solution v produced by a solution theory that agrees with general theory when b is Lipschitz and that has a comparison theorem must satisfy \(v\ge u\). We can now turn to the main objective of this paper and prove that, under Assumptions 1.2 and 1.3, the classical Osgood condition (1.3) of ODEs ensures that u, and hence v, blows up everywhere and instantaneously.
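To see the monotone-truncation scheme in action, here is a toy numerical sketch (ours; it treats the noiseless ODE analogue with the hypothetical choice \(b(x)=1+x^2\), not the SPDE itself): the truncated drifts \(b\wedge n\) yield globally well-posed equations whose solutions increase with n, and past the Osgood blowup time their supremum is infinite.

```python
def solve_truncated(b, n, y0, t_end, dt=1e-4):
    """Forward-Euler solution of y'(t) = min(b(y), n), y(0) = y0.  Truncating
    the drift at level n restores global Lipschitz continuity, so each
    truncated equation is globally well posed."""
    y, t = y0, 0.0
    while t < t_end:
        y += dt * min(b(y), n)
        t += dt
    return y

b = lambda y: 1.0 + y * y  # hypothetical Osgood drift; ODE blowup time is pi/2

# The truncated solutions increase pointwise with the truncation level n:
vals = [solve_truncated(b, n, 0.0, 2.0) for n in (1, 10, 100, 1000)]
assert all(vals[k] <= vals[k + 1] for k in range(len(vals) - 1))

# At t = 2.0 > pi/2, the supremum over n of the truncated solutions is +infinity,
# so the monotone limit (the "minimal solution" of this toy problem) has
# already blown up by that time.
print(vals)
```

The same monotonicity is what makes the pointwise limit \(u=\lim _n u_n\) well defined in the SPDE setting.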
There is a large and distinguished literature in PDEs that focuses on these types of questions; see for example Cabré and Martel [2], Peral and Vázquez [17], and Vázquez [19]. To the best of our knowledge, the present paper contains the first instantaneous blowup result for SPDEs of the type given by (1.1). For PDEs, various definitions of instantaneous blowup are in use, but all of them basically mean that the solution blows up for every \(t>0\). We provide a different definition that is particularly well suited for our purposes.
Definition 1.4
Let \(u=\{u(t,x)\}_{t \ge 0, x \in {\mathbb {R}}}\) denote a space-time random field with values in \([-\infty ,\infty ]\). We say that u blows up everywhere and instantaneously when
Our notion of instantaneous, everywhere blowup is sometimes referred to as instantaneous and complete blowup.
We are not aware of any prior results on instantaneous or everywhere blowup in the SPDE literature. However, broader questions of blowup for SPDEs have received recent attention; see for example Refs. [6, 9,10,11,12], where criteria for finite-time blowup, either with positive probability or almost surely, are studied. De Bouard and Debussche [8] investigate blowup in \(H^1({\mathbb {R}}^d)\) for the stochastic nonlinear Schrödinger equation, valid in arbitrarily small time and with positive probability; see also the references in [8].
In order to state our result precisely, we need the well-known Osgood condition from the classical theory of ODEs.
Condition 1.5
A function \(b:{\mathbb {R}}\rightarrow (0,\infty )\) is said to satisfy the Osgood condition if
where \(1/0=\infty \).
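To build intuition for Condition 1.5, the following numerical sketch (ours, not part of the paper) approximates the Osgood integral for the hypothetical drift \(b(y)=1+y^2\); the integral is finite, and it coincides with the blowup time of the ODE \(f'(t)=b(f(t))\).

```python
import math

def osgood_time(b, y0, y_max=1e12, n=200_000):
    """Approximate T = integral_{y0}^{y_max} dy / b(y) via the substitution
    y = y0 + sinh(s) and the midpoint rule.  When b satisfies the Osgood
    condition, T converges as y_max -> infinity, and the limit is the blowup
    time of the ODE f'(t) = b(f(t)), f(0) = y0."""
    S = math.asinh(y_max - y0)
    h = S / n
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        total += math.cosh(s) / b(y0 + math.sinh(s)) * h
    return total

# b(y) = 1 + y^2 satisfies the Osgood condition: integral_0^inf dy/(1+y^2)
# equals pi/2, so the ODE f' = 1 + f^2, f(0) = 0, i.e. f(t) = tan(t),
# blows up at time exactly pi/2.
T = osgood_time(lambda y: 1.0 + y * y, 0.0)
print(T)  # approximately 1.5708 = pi/2
```

By contrast, applying the same routine to the Lipschitz drift \(b(y)=1+y\) produces values that grow like \(\log y_{\max }\) without bound, reflecting the failure of the Osgood condition for that drift.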
It was proved in Foondun and Nualart [11] that, when \(\sigma \) is a positive constant, the Osgood condition implies that the solution to (1.1) blows up almost surely. This fact was proved earlier by Bonder and Groisman [10] for SPDEs on a finite interval. In the reverse direction, and for the same equations on finite intervals, Foondun and Nualart [11] have shown that if \(\sigma \) is locally Lipschitz continuous and bounded, then the Osgood condition is necessary for the solution to blow up somewhere with positive probability.
Recall Assumptions 1.2 and 1.3. The aim of the present paper is to show that the Osgood condition in fact implies that, almost surely, the solution to Eq. (1.1) blows up everywhere and instantaneously.
Theorem 1.6
If b satisfies the Osgood Condition 1.5, then the minimal solution to (1.1) blows up everywhere and instantaneously almost surely.
A few years ago, Professor Alison Etheridge asked one of us a number of questions about the time to blow up and the nature of blowup for stochastic reaction–diffusion equations of the general type studied here. This paper provides an answer to Professor Etheridge’s questions in the case that \(\sigma \) and b satisfy Assumptions 1.2 and 1.3.
Remark 1.7
(On Assumption 1.2) Whereas Assumption 1.2 is likely not necessary for instantaneous everywhere blowup, something like this assumption is clearly needed. In fact, there is good reason to believe that the blowup phenomenon for (1.1) changes completely when \(\sigma \) deviates sharply from Assumption 1.2; see for example Dozzi and López-Mimbela [9] for this phenomenon in the context of a related SPDE in which \(\sigma (u)=u\).
Remark 1.8
(On Assumption 1.3) It is easy to use Theorem 1.6 to improve itself beyond the monotonicity constraint of Assumption 1.3. For example, consider (1.1) when the reaction term is \(b(x)=1+x^2\). Clearly, b fails to verify Assumption 1.3. However, \(b(x)\ge {\tilde{b}}(x) = 1+[\max (x\,,0)]^2\), and the function \({\tilde{b}}\) does satisfy Assumption 1.3. Thus, it is possible to use a comparison argument to show that Theorem 1.6 applies and implies the instantaneous, everywhere blowup of (1.1) when \(b(x)=1+x^2\). We do not know if the strict positivity part of Assumption 1.3 can be replaced with non-negativity.
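The two claims behind the comparison argument of this remark, namely that \({\tilde{b}}\le b\) and that \({\tilde{b}}\) is nondecreasing, are elementary; the following quick check (ours, purely illustrative) confirms them numerically.

```python
b = lambda x: 1.0 + x * x                   # fails Assumption 1.3: not monotone
b_tilde = lambda x: 1.0 + max(x, 0.0) ** 2  # nondecreasing minorant of b

xs = [k / 10.0 for k in range(-100, 101)]   # grid on [-10, 10]
# b dominates its monotone minorant everywhere:
assert all(b(x) >= b_tilde(x) for x in xs)
# b_tilde is nondecreasing:
assert all(b_tilde(xs[k]) <= b_tilde(xs[k + 1]) for k in range(len(xs) - 1))
# b_tilde also satisfies the Osgood condition, since for y >= 1 one has
# b_tilde(y) = 1 + y^2 and the integral of dy/(1+y^2) converges.
```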
Let us now describe the main strategy behind the proof of Theorem 1.6. We may recast (1.2) as
notation being clear. Term A is deterministic, involves the initial condition, and plays no role in the blowup phenomenon because the initial condition is a nice function. In the PDE literature, there are many results about blowup that hold because the initial condition is assumed to be singular. Here, the initial data is a very nice function with no singularities. In our setting, blowup occurs for very different reasons, and is caused by the interplay between the stochastic Term B, which is the highly non-linear term, and the other stochastic Term C, which is regarded as a Walsh stochastic integral. More precisely: (i) A spatial ergodicity argument ensures that at any time \(t>0\) there will be spatial intervals over which Term C reaches an arbitrary (fixed) height; (ii) The explosive drift ensures that the solution rapidly reaches infinity in those spatial intervals; and (iii) The instantaneous propagation of the heat equation will ensure the everywhere blowup of the solution.
As part of our analysis, we prove that, when b is in fact a Lipschitz continuous function that satisfies the Osgood condition (1.3), the process \(x\mapsto u(t,x)\) is almost surely unbounded for every \(t>0\). The proof of this fact makes use of ideas from the Malliavin calculus and Poincaré inequalities developed in a recent paper by Chen et al. [4]. The limiting procedure used to define the solution then allows us to use the growth property of b to show blowup of the solution and thus complete the proof of the main result.
We end this introduction with a plan of the paper. In §2 we study ergodicity and growth properties for a family of stochastic convolutions, and we use some of these results to show that, when b is Lipschitz and the initial condition is a constant, the solution to (1.1) is spatially stationary and ergodic. In §3 we develop a hitting-time estimate for a family of differential inequalities and subsequently use that estimate in order to obtain a lower bound for u. The remaining details of the proof of Theorem 1.6 are gathered in §4, using the earlier results of the paper.
Throughout this paper, we write
For every function \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\), \(\textrm{Lip}(f)\) denotes the optimal Lipschitz constant of f; that is,
In particular, f is Lipschitz continuous iff \(\textrm{Lip}(f)<\infty \).
2 Spatial growth of stochastic convolutions
2.1 Spatial ergodicity via the Malliavin calculus
Following Nualart [16], we introduce some elements of the Malliavin calculus that we will need. Let \({\mathcal {H}}=L^2({\mathbb {R}}_+ \times {\mathbb {R}})\). For every Malliavin-differentiable random variable F, we let DF denote the Malliavin derivative of F, and observe that \(DF=\{ D_{r,z}F\}_{r>0,z\in {\mathbb {R}}}\) is a random field indexed by \((r, z)\in {\mathbb {R}}_+\times {\mathbb {R}}\).
For every \(p \ge 2\), let \({\mathbb {D}}^{1,p}\) denote the usual Gaussian Sobolev space endowed with the semi-norm
We will need the following version of the Poincaré inequality due to Chen et al. [4, (2.1)]:
Next, let us recall some notions from the ergodic theory of multiparameter processes (see for example Chen et al. [3]): We say that a predictable random field \(Z=\{Z(t,x)\}_{(t,x)\in (0,\infty )\times {\mathbb {R}}}\) is spatially mixing when the random field \(x \rightarrow Z(t,x)\) is weakly mixing in the usual sense for every \(t>0\). This property can be stated as follows: For all \(k\in {\mathbb {N}}\), \(t>0\), \(\xi ^1,...,\xi ^k\in {\mathbb {R}}\), and Lipschitz-continuous functions \(g_1,...,g_k:{\mathbb {R}}\rightarrow {\mathbb {R}}\) that satisfy \(g_j(0)=0\) and Lip\((g_j)=1\) for every \(j =1,...,k\),
where
Whenever the process \(x \rightarrow Z(t,x)\) is stationary and weakly mixing for all \(t>0\), it is ergodic.
Finally, we will require the following elementary identity for products of the heat kernel
See Chen et al. [4, below (2.7)].
2.2 Ergodicity of stochastic convolutions
Let \(Z=\{Z(t,x)\}_{(t,x)\in (0,\infty )\times {\mathbb {R}}}\) be a predictable random field that satisfies
for two positive and finite constants \(c_1\) and \(c_2\) that are fixed throughout. Set \(I_Z(0,x)=0\), and consider the associated stochastic convolution
The main aim of this section is to study the growth properties of the random field \(x \rightarrow I_Z(t\,,x)\). Next we develop natural conditions under which the random field \(x \rightarrow I_Z(t\,,x)\) is stationary and ergodic at all times \(t>0\).
Proposition 2.1
Assume that \(x \rightarrow Z(t, x)\) is stationary for all \(t>0\). Assume also that \(Z(t,x) \in {\mathbb {D}}^{1,p}\) for all \(p \ge 2\), \(t>0\) and \(x \in {\mathbb {R}}\), and that its Malliavin derivative DZ(t, x) has the following property: For every \(T>0\) and \(p \ge 2\) there exists a number \(C_{T,p}>0\) such that
for every \(t \in (0\,,T)\) and \(x \in {\mathbb {R}}\) and for almost every \((r,z) \in (0,t) \times {\mathbb {R}}\). Then the process \(x \rightarrow Z(t\,, x)\) is ergodic for every \(t>0\), and \(x \rightarrow I_Z(t\,, x)\) is stationary and ergodic for every \(t>0.\)
Proof
Thanks to the Poincaré inequality (2.1), the proof of ergodicity follows the same pattern as [3, Proof of Theorem 1.3]. Therefore, we describe the argument only briefly, focusing on the places where adjustments are needed.
We start with the process Z and use a similar argument as Chen et al. [3, Proof of Corollary 9.1]; see also Chen et al. [4, Theorem 1.1]. Define \({\mathcal {G}}(x)\) as was done in (2.3). It then follows from (2.7) that there exists a constant \(c_{T,k}>0\) such that
valid for all \(0<r<t\le T\) and \(x,z \in {\mathbb {R}}\) (see Footnote 1).
We can combine the Poincaré inequality (2.1) and the semigroup property of the heat kernel to find that
This yields (2.2), whence follows the ergodicity of \(x \rightarrow Z(t\,,x)\) for every \(t>0\).
Next, we show that the process \(x \rightarrow I_Z(t,x)\) is stationary for all \(t>0\). The proof of this fact follows the proof of Lemma 7.1 in [3] closely. First, let us choose and fix some \(y \in {\mathbb {R}}\) and apply (7.2) in [3] as follows:
where \(\theta _y\) denotes the shift operator (see Chen et al. [3]), and \(W_y\) is the associated shifted Gaussian noise [3, (7.1)]. The spatial stationarity of \(I_Z\) follows from the facts that W and \(W_y\) have the same law and the random field \(Z\circ \theta _y\) has the same finite-dimensional distributions as Z because Z is assumed to be spatially stationary.
We now turn to the spatial ergodicity of the process \(I_Z\). By the properties of the divergence operator [16, Proposition 1.3.8], \(I_Z(t,x) \in {\mathbb {D}}^{1,k}\) for all \(k \ge 2\), \(t>0\), and \(x \in {\mathbb {R}}\). Moreover, the Malliavin derivative \(DI_Z(t,x)\) a.s. satisfies
In principle, the above is valid for a.e. (r, z) but in fact the right-hand side can be used to define the Malliavin derivative everywhere a.s. And that is what we do here. In particular, for any integer \(k \ge 2\), the Burkholder-Davis-Gundy inequality and the estimate (2.7) together imply that
Thanks to (2.4), this yields
Define
using the same \(g^1,\ldots ,g^k\) and \(\xi ^1,\ldots ,\xi ^k\) that were introduced earlier. In this way we can conclude from (2.8) and elementary properties of the Malliavin derivative that
valid for all \(0<r<t\le T\) and \(x,z \in {\mathbb {R}}\).
Now we apply (2.1) together with the semigroup property of the heat kernel to see that
Therefore, \(\lim _{\vert x \vert \rightarrow \infty } \text { Cov } [{\mathcal {J}}(x)\,, {\mathcal {J}}(0)]=0\), and hence follows the ergodicity of \(x \rightarrow I_Z(t,x)\) for every \(t>0\). This concludes the proof. \(\square \)
2.3 Ergodicity of the solution
In this section, we consider Eq. (1.1) with constant initial condition \(\rho \in {\mathbb {R}}\). That is,
where
The aim of this section is to show that when \(\sigma \) and b are Lipschitz continuous the solution to (2.9) is spatially ergodic. This follows from an application of Proposition 2.1. Note that because we are discussing Lipschitz continuous b, there is no need to describe what we mean by solution; that is done already in Walsh [20].
According to Bally and Pardoux [1] (see also Nualart [16, Proposition 1.2.4]), under these conditions \(u(t\,,x) \in {\mathbb {D}}^{1,p}\) for all \(p \ge 2\), \(t>0\), and \(x \in {\mathbb {R}}\), and the Malliavin derivative Du(t, x) satisfies
for a.e. \((r,z)\in (0,t)\times {\mathbb {R}}\) where B and \(\Sigma \) are a.s. bounded random fields. We have the following estimate on the Malliavin derivative.
Lemma 2.2
If \(\sigma \) and b are Lipschitz continuous, then for every \(T>0\) and \(p \ge 2\) there exists \(C_{T,p}>0\) such that
for all \(t \in (0\,,T)\) and \(x \in {\mathbb {R}}\), and for almost every \((r,z) \in (0,t) \times {\mathbb {R}}\).
Proof
The proof follows closely the proof of Lemma 2.1 in Chen et al. [4] but we must account for a few of the changes that are caused by the drift b: By Minkowski’s inequality,
This is the same expression that appears in the right-hand side of (2.6) in [4]. Therefore, the rest of the proof follows the analogous argument in [4, Proof of Lemma 2.1]. \(\square \)
We are now ready to state the main result of this section.
Corollary 2.3
If \(\sigma \) and b are Lipschitz continuous, then the random fields \(x \rightarrow u(t\,,x)\) and \(x \rightarrow {\mathcal {I}}(t\,,x)\) are stationary and ergodic for every \(t>0\).
Proof
Stationarity follows from Chen et al. [3, Lemma 7.1], and ergodicity is a direct consequence of Lemma 2.2 and Proposition 2.1. \(\square \)
2.4 Spatial growth of stochastic convolutions
We are ready to state the main result of this section.
Theorem 2.4
For every predictable random field Z that satisfies the boundedness condition (2.5) and for which \(x\mapsto I_Z(t\,,x)\) is stationary and ergodic for all \(t>0\), there exists \(\eta =\eta (c_1,c_2)>0\) such that
valid for every non-random number \(a>0\).
Remark 2.5
A crucial part of the message of Theorem 2.4 is that \(\eta \) depends only on \(c_1,c_2\) from (2.5) and is, in particular, independent of the choice of Z.
The proof of Theorem 2.4 requires a few prefatory steps that we present as a series of lemmas. Once those lemmas are in place, we are able to prove Theorem 2.4 promptly.
Lemma 2.6
For every \(c_2>c_1>0\) there exist \(C_2,C_1>0\) such that
uniformly for all \(t,\lambda \ge 0\) and \(x\in {\mathbb {R}}\), and for every predictable random field Z that satisfies (2.5).
Proof
Choose and fix \(t>0\) and consider
Because Z is uniformly bounded, the above is a continuous, \(L^2\)-martingale with quadratic variation
Because
the inequalities (2.5) yield
The Dambis–Dubins–Schwarz theorem (see [18]) ensures that \(M_r = B(\langle M\rangle _r)\) for a standard, linear Brownian motion B. Since \(I_Z(t,x)=M_t\) is the terminal point of our martingale M, and because (2.10) implies that \(\langle M\rangle _t\le c_2^2\sqrt{t/\pi }\), we learn from the reflection principle and the scaling property that
A standard estimate yields the upper bound. For the lower bound, we argue much as before and observe that
where \(\varpi = \textrm{P}\{ \sup _{\nu \in [1,(c_2/c_1)^2]} | B(\nu )-B(1)| \le 1\}\in (0\,,1).\) This proves that
where the implied constant depends only on \(c_1\) and \(c_2\). When \(\lambda \in (0,1)\), it suffices to lower bound the integral by a constant. \(\square \)
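For the reader's convenience, the heat-kernel computation behind the bound \(\langle M\rangle _t\le c_2^2\sqrt{t/\pi }\) can be spelled out; here we assume the standard Gaussian kernel \(G_r(y)=(2\pi r)^{-1/2}\textrm{e}^{-y^2/(2r)}\), which is the normalization consistent with that bound:

$$\begin{aligned} \int _0^t\int _{{\mathbb {R}}} G_{t-s}^2(y-x)\,\textrm{d}y\,\textrm{d}s = \int _0^t \frac{\textrm{d}s}{2\sqrt{\pi (t-s)}} = \int _0^t \frac{\textrm{d}r}{2\sqrt{\pi r}} = \sqrt{\frac{t}{\pi }}, \end{aligned}$$

since \(\int _{{\mathbb {R}}} G_r^2(y)\,\textrm{d}y = (4\pi r)^{-1/2}\). Multiplying by the bounds \(c_1^2\le Z^2\le c_2^2\) from (2.5) then yields (2.10).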
Lemma 2.7
Choose and fix a non-random number \(c_0>0\). Then,
for every \(k\in [2,\infty )\) and for all predictable random fields Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}|Z(p)|\le c_0\).
Remark 2.8
We emphasize that Lemma 2.7 assumes only that Z is bounded. This is a much weaker condition than (2.5), as the latter also implies, among other things, that \(\inf _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}Z(p)\) is a.s. bounded from below by a strictly positive, deterministic number. In fact, the next lemmas also require only this weaker boundedness condition.
Proof
Choose and fix \(t\ge 0\) and \(x\ne z\in {\mathbb {R}}\), and let Z be as described. By the Burkholder-Davis-Gundy inequality in the form [7], for every real number \(k\ge 2\),
This proves the lemma. \(\square \)
Lemma 2.9
Choose and fix a non-random number \(c_0>0\). Then,
for every \(k\in [2,\infty )\) and for all predictable random fields Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}|Z(p)|\le c_0\).
Proof
Choose and fix \(t,h>0\) and \(x\in {\mathbb {R}}\), and a predictable random field Z as above, and then write
where
By the Burkholder-Davis-Gundy inequality in the form [7], for every real number \(k\ge 2\),
where we have used the bound \(1-\exp (-y^2)\le y^2\wedge 1\) in order to obtain the last concrete numerical estimate. Similarly, we obtain
We finally obtain
This completes the proof. \(\square \)
Define
and, for convenience, we write \(I_Z(p):=I_Z(p_1,p_2)\).
Lemma 2.10
For every non-random numbers \(c_0,m>0\) and \(\delta \in (0\,,1)\),
where \(\sup _{Z,{\mathbb {I}}}\) denotes the supremum over all predictable random fields Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}} |Z(p)|\le c_0\) and over all intervals \({\mathbb {I}}\subset {\mathbb {R}}\) that have length \(\le m\), and \(\alpha \) is any positive number that satisfies
Proof
Since \((a+b)^k\le 2^k(a^k+b^k)\) for all \(k\ge 1\) and \(a,b\ge 0\), Lemmas 2.7 and 2.9 together and Jensen’s inequality imply that
valid for all real numbers \(k\ge 1\), distinct \(p,q\in {\mathbb {R}}_+\times {\mathbb {R}}\), and predictable Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}|Z(p)|\le c_0\).
We are going to use a suitable form of Garsia’s lemma [14, Appendix C], and will begin by verifying the conditions that can be found in that reference. Note that \(\varrho (0)=0\) and \(\varrho \) is subadditive: \(\varrho (p+q)\le \varrho (p)+\varrho (q)\) for all \(p,q\in {\mathbb {R}}^d\). We use the notation of [14, Appendix C] and let
and for all real numbers \(k\ge 1\),
We know that \({\mathcal {I}}_k<\infty \) a.s. for every \(k\ge 1\). In fact, (2.11) ensures that
for all real numbers \(k\ge 1\), distinct \(p,q\in {\mathbb {R}}_+\times {\mathbb {R}}\), and predictable Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}|Z(p)|\le c_0\). If \((s,y)\in {\mathbb {R}}_+\times {\mathbb {R}}^2\) satisfies \(|y_1|\le (s/2)^4\) and \(|y_2|\le (s/2)^2\) then certainly \(y\in B_\varrho (s)\). Similarly, if \(y\in B_\varrho (s)\), then certainly \(|y_1|\le s^4\) and \(|y_2|\le s^2\). This argument shows that \((s/2)^6\le |B_\varrho (s)|\le s^6\) for all \(s\ge 0\), where \(|\,\cdots |\) denotes the Lebesgue measure on \({\mathbb {R}}^2\). Consequently, \(\int _0^{r_0} |B_\varrho (s)|^{-2/k}\,\textrm{d}s <\infty \) for one, hence all, \(r_0>0\), if and only if \(k>12\) and
Apply Theorem C.4 of [14] with \(\mu (z)=z\) – so that \(C_\mu =2\) there – in order to see that
for every non-random \(k\ge 24\) and \(r_0>0\). In particular, we learn from (2.12) that
for every \(k\ge 24\) and \(r_0>0\), and all intervals \({\mathbb {I}}\) of length m, and all predictable fields Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}|Z(p)|\le c_0\). We freeze all variables and define for every \(\delta \in (0\,,1)\) and \(n\in {\mathbb {Z}}_+\),
It follows that as long as \(k\ge 24\),
Sum the preceding over all \(n\in {\mathbb {Z}}_+\) to see that, as long as \(k\ge (24/\delta )>(12/\delta )\vee 24\),
Replace k by 2k and restrict attention to integral choices of k in order to see that
for every integer \(k \ge 12/\delta \), as well as all \(r>0\), all intervals \({\mathbb {I}}\) of length m, and all predictable fields Z that satisfy \(\sup _{p\in {\mathbb {R}}_+\times {\mathbb {R}}}|Z(p)|\le c_0\), where we have used the inequality \(k^k\le \textrm{e}^k k!\), valid for all positive integers k. An appeal to the Taylor series expansion of the exponential function \(v\mapsto \exp (\alpha v^2)\) yields
for every \(\alpha \) that satisfies \(\alpha <Q^{-1}\). This proves the lemma. \(\square \)
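The exponential-moment step at the end of the proof can be sketched as follows; here X stands for the normalized modulus of continuity being controlled and Q for a constant that absorbs the moment bounds (our notation, for illustration only). Since \(\textrm{E}[X^{2k}]\le k^k Q^k\) and \(k^k\le \textrm{e}^k k!\),

$$\begin{aligned} \textrm{E}\exp \left( \alpha X^2\right) = \sum _{k=0}^{\infty } \frac{\alpha ^k\,\textrm{E}\left[ X^{2k}\right] }{k!} \le \sum _{k=0}^{\infty } \frac{\alpha ^k k^k Q^k}{k!} \le \sum _{k=0}^{\infty } \left( \alpha \textrm{e}Q\right) ^k, \end{aligned}$$

and the geometric series on the right-hand side is finite as soon as \(\alpha \) is sufficiently small relative to \(Q^{-1}\).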
We are ready to conclude this section.
Proof of Theorem 2.4
Lemma 2.6 ensures that
for all \(a>0\), \(c\in {\mathbb {R}}\), and \(M\ge 1\). In particular,
Chebyshev’s inequality yields the following:
for all \(M\ge 1\) and \(a,c,\varepsilon ,\alpha >0\). Choose and fix
and apply Lemma 2.10 [with \(\delta =\frac{1}{2}\) and \(c_0=c_2\)] in order to see that there exists \(K = K(c_1,c_2)>1\) such that
for all \(M\ge 1\) and \(a>0\). In particular, there exists \(M_0=M_0(c_1,c_2)>1\) such that for all \(M\ge 1\) and \(a>0\),
for all \(M\ge M_0\). We recall also that \(\varepsilon =\varepsilon (a,c_1,c_2)\) is defined in (2.13). In any case, this readily yields
for all \(M\ge M_0\). Since we are assuming that the infinite-dimensional process \(x\mapsto I_Z(\cdot ,x)\) is ergodic, we can improve (2.14) to the following without need for additional work:
for all \(M\ge M_0\) and \(a>0\). We now can send \(M\rightarrow \infty \) to deduce the theorem from the particular form of \(\varepsilon \) that is given in (2.13). \(\square \)
3 A lower bound via differential inequalities
In this section, we continue to assume that b is Lipschitz continuous and nondecreasing. Our aim is to prove the following key result.
Theorem 3.1
If \(b:{\mathbb {R}}\rightarrow (0,\infty )\) is Lipschitz continuous and nondecreasing, then for every non-random number \(a>0\) there exists a non-random number \(\varepsilon = \varepsilon (a)>0\), not depending on the choice of b, with \(\lim _{a\rightarrow 0^+} \varepsilon (a)=0\), such that the following holds for every \(M>\Vert u_0\Vert _{L^\infty ({\mathbb {R}})}\): there exists an a.s.-finite random variable \(c = c(a,M)>0\), independent of b, such that
where \(\rho : = \inf _{x\in {\mathbb {R}}} u_0(x)\).
The following result will be useful for the proof of the above theorem.
Lemma 3.2
Fix two numbers \(N>A>0\) and suppose \(B:{\mathbb {R}}_+\rightarrow (0,\infty )\) is Lipschitz continuous and nondecreasing. Let \(T=\int _A^N\textrm{d}s/B(s)\), and suppose that \(F:{\mathbb {R}}_+\rightarrow {\mathbb {R}}_+\) solves
Then \(\inf _{t\in [T,2T]}F(t)\ge N\).
Remark 3.3
Lemma 3.2 can be recast in slightly weaker terms as a statement about the differential inequality,
In this case, \(F(t)\ge N\) for all times t between \(T=\int _A^N\textrm{d}s/B(s)\) and time 2T.
Proof
Choose and fix \(A>0\) and \(N>A\). The ordinary differential equation \(G(t)=A+\int _0^t B(G(s))\,\textrm{d}s\) has a unique, strictly increasing, continuous solution up to its blowup time. Separation of variables applied to \(G'(t)=B(G(t))\) shows that the time \(T = \sup \{ t>0:\, G(t)\le N\}\) equals \(\int _A^N\textrm{d}s/B(s)<\infty \), and \(G(T)=\lim _{s\uparrow T}G(s)=N\). Because G is increasing, \(G(t) \ge N\) for all \(t \in [T, 2T]\). A comparison theorem yields \(F\ge G\) on [0, 2T], and completes the proof. \(\square \)
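The identity between the hitting time and the integral \(\int _A^N\textrm{d}s/B(s)\) is easy to check numerically. The sketch below (ours, with the hypothetical choice \(B(s)=1+s^2\)) integrates \(G'=B(G)\) by forward Euler and compares the resulting hitting time with a midpoint-rule value of the integral.

```python
import math

def hitting_time_ode(B, A, N, dt=1e-5):
    """Forward-Euler integration of G'(t) = B(G(t)), G(0) = A, run until
    G >= N; returns the hitting time sup{t : G(t) <= N} from the proof."""
    y, t = A, 0.0
    while y < N:
        y += dt * B(y)
        t += dt
    return t

def time_integral(B, A, N, n=100_000):
    """Midpoint-rule approximation of T = integral_A^N ds / B(s)."""
    h = (N - A) / n
    return sum(h / B(A + (k + 0.5) * h) for k in range(n))

B = lambda s: 1.0 + s * s          # hypothetical nondecreasing drift
A, N = 1.0, 3.0
T_ode = hitting_time_ode(B, A, N)  # time for G to climb from A to N
T_int = time_integral(B, A, N)     # = arctan(3) - arctan(1), about 0.4636
# Separation of variables: the two quantities agree up to discretization error.
assert abs(T_ode - T_int) < 1e-2
```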
Proof of Theorem 3.1
We first assume that the initial data is equal to a constant \(\rho \in {\mathbb {R}}\). Choose and fix \(a>0\). According to Corollary 2.3 and Theorem 2.4, we can find a non-random number \(\varepsilon =\varepsilon (a)>0\), depending on a, such that
Also choose and fix a number \(M>0\). According to Theorem 2.4, we can find a random number \(c>0\) such that
Because \(b\ge 0\) and b is nondecreasing,
a.s., for every \(t,c>0\) and \(x\in {\mathbb {R}}\). If in addition \(x\in (0\,,\sqrt{\varepsilon })\) and \(t\in (0\,,2\varepsilon )\), then
for all \(s\in (0,t)\). Therefore, (3.2) tells us that, for all \(x\in (0\,,\sqrt{\varepsilon })\) and \(t\in (0\,,2\varepsilon )\),
In other words, we have shown that the function
satisfies
Thanks to (4.2), we can find \(N>M\) such that \(\int _{M+\rho }^{N+\rho } [b(y)]^{-1}\,\textrm{d}y<\varepsilon \), whence \(\int _{M+\rho }^{N+\rho } [\ell b(y)]^{-1}\,\textrm{d}y<\varepsilon /\ell \). Therefore, Lemma 3.2 assures us that \(\inf _{t\in [\varepsilon /\ell ,2\varepsilon /\ell ]}f(t) \ge N\) and hence
Because \(\lim _{a\rightarrow 0+}\varepsilon =0\) [see (3.1)], this yields the theorem in the case that the initial data is constant.
The general case of a bounded initial condition then follows from a standard comparison theorem. \(\square \)
4 Minimal solutions, and proof of Theorem 1.6
We begin by revisiting the well posedness of (1.1) under Assumptions 1.2 and 1.3. After that, we prove Theorem 1.6 and conclude the paper.
4.1 Minimal solutions
Let \({\mathscr {L}}_{ loc }\) denote the collection of all functions \(f:{\mathbb {R}}\rightarrow (0,\infty )\) that are nondecreasing and locally Lipschitz continuous. In particular, Assumption 1.3 is shortened to the assertion that \(b\in {\mathscr {L}}_{ loc }\). We also define \({\mathscr {L}}\) to be the collection of all elements of \({\mathscr {L}}_{ loc }\) that are [globally] Lipschitz continuous.
Throughout this subsection, we write the solution to (1.1) as \(u_b\) provided that (1.1) is well posed for a given \(b\in {\mathscr {L}}_{ loc }.\) As a consequence of the theory of Walsh [20], (1.1) is well posed for example when \(b\in {\mathscr {L}}\); see also Dalang [5]. Moreover, \(u_b\) is the unique solution provided additionally that \(\sup _{t\in (0,T)}\sup _{x\in {\mathbb {R}}}\Vert u(t,x)\Vert _2<\infty \) for all \(T>0\). Finally,
Now suppose that \(b\in {\mathscr {L}}_{ loc }\), as is the case in the Introduction. Let \(b^{(n)}=b\wedge n\) for every \(n\in {\mathbb {N}}\). The monotonicity of b implies that \(b^{(n)}\in {\mathscr {L}}\) for every \(n\in {\mathbb {N}}\), and \(b^{(n)} \le b^{(m)}\) when \(n \le m\). Since \(u_{b^{(n)}} \le u_{b^{(m)}}\) whenever \(n\le m\), it follows that the random field
exists and has lower-semicontinuous sample functions. Note also that if \(c\in {\mathscr {L}}\) satisfies \(c\le b\), then \(u_c\le u\). This proves that
Therefore, we refer to u as the minimal solution to (1.1) when b satisfies Assumption 1.3.
Next we describe why u can justifiably be called the minimal “solution” to (1.1). Minimality is clear from context. However, “solution” deserves some words.
If b is in addition Lipschitz continuous, then u is the solution to (1.1) that the Walsh theory yields and there is nothing to discuss. Now suppose \(b\in {\mathscr {L}}_{ loc }\) and recall \(b^{(n)}\in {\mathscr {L}}\). We may observe that
off a single null set that does not depend on (b, n, m). Since
it follows that
again off a single null set. Therefore, the monotone convergence theorem yields
where \(b(\infty )=\sup b\).
Next, let us consider the \([0,\infty ]\)-valued random variable
where \(\inf \varnothing =0\). Because u is lower semicontinuous, one can show that \(\tau \) is a stopping time with respect to the filtration of the noise, which we assume satisfies the usual conditions of martingale theory, without loss of generality. Of course, \(\tau \) is the first blowup time of u. Since \(\sigma \) is a bounded and continuous function,
where \(\int _\varnothing (\,\cdots )=0\). Taken together, these comments prove that if \(\tau >0\), that is, if the solution to (1.1) does not instantly blow up, then u satisfies (1.2) for all \(x\in {\mathbb {R}}\) and all times \(t<\tau \) (see Footnote 2). In this sense, our extension of the solution theory of Walsh [20] indeed produces solutions for \(b\in {\mathscr {L}}_{ loc }\) whenever there is a chance of non-instantaneous blowup, and the smallest such solution is u.
Theorem 1.6 says that if \(b\in {\mathscr {L}}_{ loc }\) satisfies the Osgood condition (1.3), then the minimal solution satisfies \(u(t)\equiv \infty \) for all \(t>0\).
Now suppose the Osgood condition holds, and consider any solution theory that extends the Walsh theory and has a comparison theorem. The preceding comments prove that if that solution theory produces a solution v, then that solution satisfies \(u\le v\) and hence \(v(t)\equiv \infty \) for all \(t>0\) by Theorem 1.6. This is a precise sense in which Theorem 1.6 says that “the solution” to (1.1) blows up instantaneously and everywhere.
We can now conclude the paper with the following.
4.2 Proof of Theorem 1.6
We now prove the everywhere and instantaneous blowup of u under (1.3), where the symbol u denotes the minimal solution to (1.1). Recall the process \(u^{(n)} = u_{b^{(n)}}\) from the previous subsection. Choose and fix an arbitrary number \(a>0\), as small as we like, and let \(\varepsilon =\varepsilon (a)>0\) be chosen according to Theorem 3.1. Recall, in particular, the following relationship between a and \(\varepsilon =\varepsilon (a)\):
In light of (1.3), we may choose and fix \(M>\Vert u_0\Vert _{L^\infty ({\mathbb {R}})}\) such that
where we recall that \(\rho =\inf _{x\in {\mathbb {R}}}u_0(x).\)
The construction of u and Theorem 3.1 together yield a random constant \(c=c(a, M)>0\) – independent of b – such that the following holds for every \(n\in {\mathbb {N}}\):
Let \(n\uparrow \infty \) to see from the monotone convergence theorem that
This proves that the blowup time is a.s. at most \(a+2\varepsilon \) and that the solution blows up everywhere on a random interval of the form \((c,c+\sqrt{\varepsilon })\). Consequently, for every non-random \(t \ge a+2\varepsilon \) there a.s. exists a random closed interval \(I(t) \subset (0, \infty )\) and a non-random closed interval \({\tilde{I}}(t)=[a+\varepsilon , a+2\varepsilon ]\subset (0,t)\) such that
Since a can be taken arbitrarily small, and because \(\lim _{a\rightarrow 0}\varepsilon =0 \), this proves instantaneous blowup. We now show that the blowup happens everywhere. For every \(n\in {\mathbb {N}}\), the random field \(u^{(n)}\) solves
By the monotone convergence theorem, for \(t \ge a+2\varepsilon \) and \(x\in {\mathbb {R}}\),
as \(n\rightarrow \infty \); see (4.1) and (4.3). At the same time, standard estimates such as those in §2 show that
for every compact set \(K\subset {\mathbb {R}}_+\times {\mathbb {R}}\). Therefore, Fatou’s lemma ensures that a.s.,
It follows that \(\inf _K u=\infty \) a.s. for all compact sets \(K\subset {\mathbb {R}}_+\times {\mathbb {R}}\). This concludes the proof. \(\square \)
Notes
The notation \(c_{T,k}\) may refer to a constant that changes from line to line but in any case depends only on (T, k).
In fact, one can show that the \(\liminf \) of the stochastic integrals in the mild formulation of \(u^{(n)}\) is finite a.s. See the end of the proof of Theorem 1.6. This implies the stronger statement that, for all \(t>0\) and \(x\in {\mathbb {R}}\),
$$\begin{aligned} u(t,x) = ( G_t*u_0)(x) + \int _{(0,t)\times {\mathbb {R}}} G_{t-s}(y-x) b(u(s,y))\,\textrm{d}s\,\textrm{d}y + \text { a finite term}, \end{aligned}$$where \(b(\infty )=\sup b\). Theorem 1.6 implies that both sides of the above identity are infinite when (1.3) holds.
References
Bally, V., Pardoux, É.: Malliavin calculus for white noise driven parabolic SPDEs. Potential Anal. 9(1), 27–64 (1998)
Cabré, X., Martel, Y.: Existence versus instantaneous blowup for linear heat equations with singular potentials. C. R. Acad. Sci. Paris Sér. I Math. 329, 973–978 (1999)
Chen, L., Khoshnevisan, D., Nualart, D., Pu, F.: Spatial ergodicity for SPDEs via Poincaré-type inequalities. Electron. J. Probab. 26, 1–37 (2021)
Chen, L., Khoshnevisan, D., Nualart, D., Pu, F.: Spatial ergodicity and central limit theorems for parabolic Anderson model with delta initial condition. J. Funct. Anal. 282(2), 109290 (2022)
Dalang, R.C.: Extending the martingale measure stochastic integral with applications to spatially homogeneous s.p.d.e.’s. Electron. J. Probab. 4(6), 29 (1999)
Dalang, R.C., Khoshnevisan, D., Zhang, T.: Global solutions to stochastic reaction–diffusion equations with super-linear drift and multiplicative noise. Ann. Probab. 47(1), 519–559 (2019)
Davis, B.: On the \(L^{p}\) norms of stochastic integrals and other martingales. Duke Math. J. 43(4), 697–704 (1976)
De Bouard, A., Debussche, A.: Blow-up for the stochastic nonlinear Schrödinger equation with multiplicative noise. Ann. Probab. 33(3), 1078–1110 (2005)
Dozzi, M., López-Mimbela, J.A.: Finite-time blowup and existence of global positive solutions of a semi-linear SPDE. Stoch. Process. Appl. 120, 767–776 (2010)
Fernández Bonder, J., Groisman, P.: Time-space white noise eliminates global solutions in reaction–diffusion equations. Physica D 238(2), 209–215 (2009)
Foondun, M., Nualart, E.: The Osgood condition for stochastic partial differential equations. Bernoulli 27, 295–311 (2021)
Foondun, M., Nualart, E.: Non-existence results for stochastic wave equations in one dimension. J. Differ. Equ. 318, 557–578 (2022)
Geiß, C., Manthey, R.: Comparison theorems for stochastic differential equations in finite and infinite dimensions. Stoch. Process. Appl. 53(1), 23–35 (1994)
Khoshnevisan, D.: Analysis of stochastic partial differential equations. In: CBMS Regional Conference Series in Mathematics, vol. 119. American Mathematical Society, Providence (2014)
Mueller, C.: On the support of solutions to the heat equation with noise. Stoch. Stoch. Rep. 37(4), 225–245 (1991)
Nualart, D.: The Malliavin Calculus and Related Topics. Probability and Its Applications, 2nd edn. Springer, Berlin (2006)
Peral, I., Vázquez, J.L.: On the stability or instability of the singular solution of the semilinear heat equation with exponential reaction term. Arch. Ration. Mech. Anal. 129, 201–224 (1995)
Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 293. Springer, Berlin (1991)
Vázquez, J.L.: Domain of existence and blowup for the exponential reaction–diffusion equation. Indiana Univ. Math. J. 48(2), 677–709 (1999)
Walsh, J.B.: An introduction to stochastic partial differential equations. In: École d’été de Probabilités de Saint-Flour, XIV–1984, volume 1180 of Lecture Notes in Mathematics, pp. 265–439. Springer, Berlin (1986)
Acknowledgements
We are grateful to Alison Etheridge for her questions that ultimately led to this paper, and for sharing her insights with us. We also thank Raluca Balan for pointing out a mistake in an earlier version of Lemma 2.2.
Contributions
MF, DK and EN wrote the main manuscript text.
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Davar Khoshnevisan: Research supported in part by the United States’ National Science Foundation Grants DMS-1855439 and DMS-2245242.
Eulalia Nualart: Acknowledges support from the Spanish MINECO Grant PGC2018-101643-B-I00 and Ayudas Fundacion BBVA a Proyectos de Investigación Científica 2021.
Cite this article
Foondun, M., Khoshnevisan, D. & Nualart, E. Instantaneous everywhere-blowup of parabolic SPDEs. Probab. Theory Relat. Fields (2024). https://doi.org/10.1007/s00440-024-01263-7