1 Introduction

Let us consider a Nakao-type weakly coupled system with nonlinearities of derivative-type, namely,

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t^2 u-\Delta u+b\partial _t u+m^2 u=\vert \partial _t v\vert ^p, &{} x\in {\mathbb {R}}^n, \, t\in (0,T), \\ \partial _t^2 v-\Delta v=\vert \partial _t u\vert ^q, &{} x\in {\mathbb {R}}^n, \, t\in (0,T), \\ (u, \partial _t u)(0,x)=\varepsilon (u_0, u_1)(x), &{} x\in {\mathbb {R}}^n, \\ (v, \partial _t v)(0,x)=\varepsilon (v_0, v_1)(x), &{} x\in {\mathbb {R}}^n, \end{array}\right. } \end{aligned}$$
(1)

where \(p,q>1\), \(\varepsilon \) is a positive parameter describing the size of the Cauchy data, and \(b>0\), \(m^2\geqslant 0\) are real constants.

In recent years, systems of diffusion and wave equations with coupled nonlinear terms have been studied in the literature (see [3, 6, 13, 14, 18]). By diffusion equations here we mean, in a broad sense, not only parabolic equations but also hyperbolic equations which exhibit diffusion phenomena towards certain parabolic models. Such nonlinear coupled systems have been named Nakao’s problems in the case of a weakly coupled Cauchy system of wave and damped wave equations in [3, 6, 18], after the author of [13, 14], who first proposed and studied these systems in the case of bounded domains.

Let us briefly summarize the known results for Nakao’s problem in the case of the whole space, i.e. for the Cauchy problem. In [6, 18] Nakao’s problem with weakly coupled power nonlinearities, namely,

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t^2 u-\Delta u+\partial _t u=\vert v\vert ^p, &{} x\in {\mathbb {R}}^n, \, t\in (0,T), \\ \partial _t^2 v-\Delta v=\vert u\vert ^q, &{} x\in {\mathbb {R}}^n, \, t\in (0,T), \\ (u, \partial _t u)(0,x)=\varepsilon (u_0, u_1)(x), &{} x\in {\mathbb {R}}^n, \\ (v, \partial _t v)(0,x)=\varepsilon (v_0, v_1)(x), &{} x\in {\mathbb {R}}^n, \end{array}\right. } \end{aligned}$$
(2)

has been investigated from the viewpoint of blow-up in finite time (for suitable \(p,q\) and under suitable sign assumptions on the Cauchy data). While in [18] the so-called test function method is used, in [6] an iteration argument is employed, in which the space averages of the components of a local solution are considered as time-dependent functionals.

On the other hand, in [3] Nakao’s problem with weakly coupled nonlinearities of derivative type, namely (1) for \((b,m^2)=(1,0)\), is studied, again from the blow-up side. In particular, blow-up in finite time is proved for \(p,q>1\) such that

$$\begin{aligned} \frac{1}{pq-1}>\frac{n-1}{2} \end{aligned}$$

provided that the Cauchy data are compactly supported, nonnegative and nontrivial. The approach used to prove this blow-up result is inspired in some sense by [11, Sect. 13.2] and by [10].

In what follows we call (1) a Nakao-type weakly coupled system, since we consider a semilinear wave equation for v and a semilinear damped Klein–Gordon equation for u which are weakly coupled through the nonlinear terms given by powers of the time-derivatives. We shall focus only on the case of the Cauchy problem, and our goal is to establish a blow-up result in finite time when the exponents \(p,q\) of the nonlinear terms belong to a suitable range and under suitable sign assumptions on the Cauchy data.

Our approach is based on the blow-up technique introduced by Zhou in [20] for the treatment of the semilinear wave equation with a nonlinearity of derivative type in all space dimensions, combined with an iteration argument for determining a sequence of lower bound estimates for a suitable time-dependent functional related to a local in time solution to (1). The above-cited technique of Zhou consists in reducing the problem to the one-dimensional case by integrating with respect to the last \((n-1)\) space-variables and, then, in proving the blow-up on a suitable characteristic line. More specifically, when dealing with the wave equation in one space dimension, d’Alembert’s formula is used to describe the solution explicitly. Consequently, before proving the main blow-up result of this paper, we recall an integral representation formula for the linear equation associated with the equation for u in (1) (which is a damped Klein–Gordon equation) in one space dimension. Moreover, since the kernel function appearing in this integral formula contains an exponential factor, we will need to adapt the treatment of an unbounded exponential multiplier in the iteration frame from [4, 5] to our problem by applying a slicing procedure while shrinking the domain of integration in the iteration frame. We anticipate that the other factor appearing in the integral kernel will be the composition of the modified Bessel function of the first kind of order 0 with another function related to the forward light-cone. In the derivation of the iteration frame, we will take advantage of the fact that this special function (denoted \(\mathrm {I}_0\)) is bounded from below by a positive constant. On the contrary, we cannot use the asymptotic behavior of \(\mathrm {I}_0\) for large arguments, due to the simultaneous presence of the aforementioned exponential factor. For a rigorous explanation we refer the reader to Remark 4.

The range of \(p,q\) for which our blow-up result is valid is exactly the same as the one in [3] for the special case \((b,m^2)=(1,0)\) that we recalled above, although the methods employed in our proof and in the proof of the corresponding result in [3] are quite different. Moreover, we extend the blow-up result even to the limit case

$$\begin{aligned} \frac{1}{pq-1}=\frac{n-1}{2}. \end{aligned}$$

Finally, we point out that the blow-up result in the present work is valid only under the further assumption

$$\begin{aligned} b^2\geqslant 4 m^2. \end{aligned}$$
(3)

We refer to Remark 3 for a technical explanation of why our method is not suitable for \(b^2<4m^2\). We may interpret the condition (3) by saying that we consider the case in which the equation for u in (1) has a mass term \(m^2 u\) that is dominated (or balanced, when the equality holds) by the damping term \(b\partial _t u\). Therefore, this equation has some properties which resemble the ones for the damped wave equation rather than the ones for the Klein–Gordon equation. Let us explain the previous heuristic considerations more rigorously. If we consider the linear damped Klein–Gordon equation

$$\begin{aligned} \partial _t^2 \phi -\Delta \phi +b\partial _t \phi +m^2 \phi =0, \end{aligned}$$

then, carrying out the transformation \(\phi (t,x)=\mathrm {e}^{\gamma t}\psi (t,x)\), where \(\gamma \) is a real constant, we find that \(\psi \) solves

$$\begin{aligned} \partial _t^2 \psi -\Delta \psi +(2\gamma +b)\partial _t \psi +(\gamma ^2+b\gamma +m^2) \psi =0. \end{aligned}$$

For \(b^2>4m^2\) we can choose \(\gamma \doteq \frac{1}{2}(-b+\sqrt{b^2-4m^2})\) so that \(\psi \) solves the damped wave equation

$$\begin{aligned} \partial _t^2 \psi -\Delta \psi +(b^2-4m^2)^{\frac{1}{2}} \partial _t \psi =0. \end{aligned}$$

For this reason, we call the case \(b^2>4m^2\) the case with dominant damping. On the contrary, for \(b^2<4m^2\), setting \(\gamma \doteq -\frac{b}{2}\), we get that \(\psi \) solves the Klein–Gordon equation (with positive mass)

$$\begin{aligned} \partial _t^2 \psi -\Delta \psi +\left( m^2-\frac{b^2}{4}\right) \psi =0. \end{aligned}$$

Hence, we call \(b^2<4m^2\) the case with dominant mass. In the limit case \(b^2=4m^2\), the choice \(\gamma \doteq -\frac{b}{2}\) removes both the damping and the mass term, so that \(\psi \) solves the free wave equation; therefore, we call it the balanced case. We stress that this nomenclature is borrowed from the introduction of [8].
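
This elementary computation can also be checked symbolically. The following Python (sympy) sketch is purely illustrative: it substitutes \(\phi =\mathrm {e}^{\gamma t}\psi \) into the one-dimensional damped Klein–Gordon operator, recovers the coefficients \(2\gamma +b\) and \(\gamma ^2+b\gamma +m^2\), and evaluates them for the choices of \(\gamma \) discussed above.

```python
# Sketch (sympy): the shift phi = exp(gamma*t)*psi applied to the damped
# Klein-Gordon operator in one space dimension.
import sympy as sp

t, x, b, m, gamma = sp.symbols('t x b m gamma', real=True)
psi = sp.Function('psi')

phi = sp.exp(gamma*t)*psi(t, x)
op = sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + b*sp.diff(phi, t) + m**2*phi
reduced = sp.expand(op*sp.exp(-gamma*t))

expected = (sp.diff(psi(t, x), t, 2) - sp.diff(psi(t, x), x, 2)
            + (2*gamma + b)*sp.diff(psi(t, x), t)
            + (gamma**2 + b*gamma + m**2)*psi(t, x))
print(sp.simplify(reduced - expected))                     # 0

# gamma = (-b + sqrt(b^2 - 4 m^2))/2: no zero-order term, damping sqrt(b^2 - 4 m^2)
g1 = (-b + sp.sqrt(b**2 - 4*m**2))/2
print(sp.expand(g1**2 + b*g1 + m**2), sp.simplify(2*g1 + b))

# gamma = -b/2: no damping term, zero-order coefficient m^2 - b^2/4
g2 = -b/2
print(sp.simplify(2*g2 + b), sp.expand(g2**2 + b*g2 + m**2))
```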

The paper is organized as follows: in Sect. 2 we state the main blow-up result for (1); in Sect. 3 we recall the integral representation formula for the linear Cauchy problem associated with the damped Klein–Gordon equation when \(n=1\); finally, in Sect. 4 we derive the iteration frame and apply the slicing procedure to carry out the iteration argument.

2 Main result

Theorem 1

Let \(n\geqslant 1\) and let \(b>0,m^2\geqslant 0\) be real constants satisfying (3). We assume that \(u_0,v_0\in {\mathscr {C}}^2_0({\mathbb {R}}^n)\), \(u_1,v_1\in {\mathscr {C}}^1_0({\mathbb {R}}^n)\) are nonnegative and compactly supported functions with supports contained in \(B_R\) for some \(R>0\), and that \(v_1\) is nontrivial. Let us consider exponents for the nonlinear terms \(p,q>1\) satisfying

$$\begin{aligned} \theta (n,p,q)\doteq \frac{1}{pq-1} -\frac{n-1}{2}\geqslant 0. \end{aligned}$$
(4)

Then, there exists a positive constant \(\varepsilon _0=\varepsilon _0(n,p,q,b,R,v_1)\) such that for any \(\varepsilon \in (0,\varepsilon _0]\) if \((u,v)\in \big ({\mathscr {C}}^2([0,T)\times {\mathbb {R}}^n)\big )^2\) is a local in time solution to (1) such that

$$\begin{aligned} \mathrm {supp} \, u(t,\cdot ), \mathrm {supp} \, v(t,\cdot ) \subset B_{R+t} \quad \text{ for } \text{ any } \ t\in [0,T), \end{aligned}$$
(5)

where \(T=T(\varepsilon )\) denotes the lifespan of \((u,v)\), then \((u,v)\) blows up in finite time.

Furthermore, the following upper bound estimate for the lifespan holds

$$\begin{aligned} T(\varepsilon )\leqslant {\left\{ \begin{array}{ll} C \varepsilon ^{-\theta (n,p,q)^{-1}} &{} \text{ if } \ \ \theta (n,p,q)>0, \\ \exp \left( C\varepsilon ^{-(pq-1)}\right) &{} \text{ if } \ \ \theta (n,p,q)=0, \end{array}\right. } \end{aligned}$$
(6)

where the positive constant C is independent of \(\varepsilon \).
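
For the reader’s convenience, the following small Python helper (purely illustrative; the sample triples \((n,p,q)\) are arbitrary) evaluates \(\theta (n,p,q)\) from (4) and the corresponding lifespan exponent in (6).

```python
# Sketch: evaluate theta(n,p,q) from (4) and the corresponding lifespan bound in (6).
# The sample triples (n, p, q) below are arbitrary.
import math

def theta(n: int, p: float, q: float) -> float:
    """theta(n,p,q) = 1/(pq-1) - (n-1)/2; Theorem 1 requires theta >= 0."""
    return 1.0/(p*q - 1.0) - 0.5*(n - 1)

for (n, p, q) in [(1, 2.0, 3.0), (2, 1.5, 2.0), (3, 1.2, 1.3), (3, 2.0, 2.0)]:
    th = theta(n, p, q)
    if th > 1e-12:
        print(f"n={n}, p={p}, q={q}: subcritical, theta={th:.4f}, "
              f"T(eps) <= C * eps^(-{1.0/th:.4f})")
    elif math.isclose(th, 0.0, abs_tol=1e-12):
        print(f"n={n}, p={p}, q={q}: critical, T(eps) <= exp(C * eps^(-{p*q - 1.0:.2f}))")
    else:
        print(f"n={n}, p={p}, q={q}: theta={th:.4f} < 0, outside the range of Theorem 1")
```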

3 Integral representation formula in one space dimension

In the proof of Theorem 1 we are going to use the approach from [20] to prove the blow-up on a certain characteristic line, as described in the introduction.

Since the second order partial differential operator acting on u in (1) is a damped wave operator with a mass term, we first need to derive a representation formula for the corresponding linear Cauchy problem in the one-dimensional case, namely,

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t^2 \phi -\partial _x^2 \phi +b\partial _t \phi +m^2\phi =F(t,x), &{} x\in {\mathbb {R}}, \, t>0, \\ \phi (0,x)=f(x), &{} x\in {\mathbb {R}}, \\ \partial _t \phi (0,x)=g(x), &{} x\in {\mathbb {R}}. \\ \end{array}\right. } \end{aligned}$$
(7)

The integral representation formula for the solution to (7), under suitable regularity assumptions on the data \(f,g,F\), is already known in the literature. However, the proof of this representation formula in the form that we will employ is scattered across different references. For ease of readability we shall provide an elementary proof of it.

In what follows, we collect and adapt the results from [7, Chapter III Sect. 3.5 and Chapter VI Sect. 12.6] and [19, Sect. 1.1]. Hereafter, we denote by \(\mathrm {I}_\nu ,\mathrm {J}_\nu \) the modified Bessel function and the Bessel function of the first kind of order \(\nu \), respectively.

Lemma 2

Let \(b>0\) and \(m^2\geqslant 0\). For any \(h\in {\mathscr {C}}^1({\mathbb {R}})\) and any \(t\geqslant 0, x\in {\mathbb {R}}\) we define the solution operator

$$\begin{aligned} \mathrm {S}\big (t;b,m^2\big ) h(x)\doteq {\left\{ \begin{array}{ll} \displaystyle {\frac{1}{2} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \mathrm {I}_0\left( \mu \sqrt{t^2-\vert x-y\vert ^2} \,\right) h(y) \, \mathrm {d}y} &{} \hbox {for} \ \, 4m^2< b^2, \\ \displaystyle {\frac{1}{2} \, \mathrm {e}^{-\frac{b}{2}t} \int _{x-t}^{x+t} h(y) \, \mathrm {d}y} &{} \hbox {for} \ \, 4m^2= b^2, \\ \displaystyle {\frac{1}{2} \, \mathrm {e}^{-\frac{b}{2}t} \int _{x-t}^{x+t} \mathrm {J}_0\left( \mu \sqrt{t^2-\vert x-y\vert ^2} \,\right) h(y) \, \mathrm {d}y} &{} \hbox {for} \ \, 4m^2> b^2, \end{array}\right. } \end{aligned}$$
(8)

where

$$\begin{aligned} \mu \doteq \sqrt{\left| \frac{b^2}{4}-m^2\right| } \end{aligned}$$

and \(\mathrm {I}_0,\mathrm {J}_0\) denote the modified Bessel function and the Bessel function of the first kind of order 0, respectively, (cf. [15, Sections 10.2 and 10.25]).

Let us consider \(f\in {\mathscr {C}}^2({\mathbb {R}}),g\in {\mathscr {C}}^1({\mathbb {R}})\) and \(F\in {\mathscr {C}}^1([0,\infty )\times {\mathbb {R}})\). Then, the solution to the linear Cauchy problem (7) is given by

$$\begin{aligned} \phi (t,x)&= \mathrm {S}\big (t;b,m^2\big )(g+bf)(x)+\frac{\partial }{\partial t} \mathrm {S}\big (t;b,m^2\big )f(x)\nonumber \\&\qquad +\int _0^t \mathrm {S}\big (t-\tau ;b,m^2\big )(F(\tau ,\cdot ))(x)\, \mathrm {d}\tau . \end{aligned}$$
(9)

Remark 1

In the special case \((b,m^2)=(1,0)\) the representation formula (9) coincides with the one for the classical linear damped wave equation (see [7, Equation (43), page 695] or [17, Proposition 2.1]).

Proof

In the balanced case \(b^2=4m^2\) the function \(\psi (t,x)=\mathrm {e}^{\frac{b}{2}t}\phi (t,x)\) solves the Cauchy problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t^2 \psi -\partial _x^2 \psi =\mathrm {e}^{\frac{b}{2}t}F(t,x), &{} x\in {\mathbb {R}}, \, t>0, \\ \psi (0,x)=f(x), &{} x\in {\mathbb {R}}, \\ \partial _t \psi (0,x)=g(x)+\frac{b}{2}f(x), &{} x\in {\mathbb {R}}. \\ \end{array}\right. } \end{aligned}$$

Combining d’Alembert’s formula with Duhamel’s principle and the inverse transformation \(\phi (t,x)=\mathrm {e}^{-\frac{b}{2}t}\psi (t,x)\), we get immediately (9).

When \(b^2\ne 4m^2\) we begin by proving that \(\mathrm {S}\big (t;b,m^2\big )(g)(x)\) solves the Cauchy problem (7) for \(f=0\) and \(F=0\). We carry out the computation only in the dominant damping case \(b^2>4m^2\), since in the dominant mass case \(b^2<4m^2\) the procedure is completely analogous. Let us check the Cauchy conditions first. Clearly \(\mathrm {S}\big (0;b,m^2\big )(g)(x)=0\). On the other hand, using \(\mathrm {I}_0(0)=1\), we have

$$\begin{aligned} \frac{\partial }{\partial t}\mathrm {S}\big (t;b,m^2\big )(g)(x)&= \frac{1}{2} \, \mathrm {e}^{-\frac{b}{2}t}(g(x+t)+g(x-t)) \nonumber \\&\qquad -\frac{b}{4} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \mathrm {I}_0\left( \mu \sqrt{t^2-\vert x-y\vert ^2} \,\right) g(y) \, \mathrm {d}y \nonumber \\&\qquad +\frac{\mu }{2} \, t \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \frac{\mathrm {I}_0'\left( \mu \sqrt{t^2-\vert x-y\vert ^2} \right) }{\sqrt{t^2-\vert x-y\vert ^2} } g(y) \, \mathrm {d}y. \end{aligned}$$
(10)

Consequently, \(\partial _t\mathrm {S}\big (t;b,m^2\big )(g)(x)\big \vert _{t=0}=g(x)\).

We prove now that \(\mathrm {S}\big (t;b,m^2\big )(g)(x)\) solves the homogeneous differential equation. A further differentiation of (10) with respect to t provides

$$\begin{aligned} \frac{\partial ^2}{\partial t^2}\mathrm {S}&\big (t;b,m^2\big )(g)(x) = \left( \!-\frac{b}{2} +\frac{\mu ^2 t}{4} \!\right) \mathrm {e}^{-\frac{b}{2}t}(g(x+t)+g(x-t))\nonumber \\&\quad +\frac{1}{2} \, \mathrm {e}^{-\frac{b}{2}t}(g'(x+t)-g'(x-t)) \nonumber \\&\quad +\frac{b^2}{8} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \mathrm {I}_0\left( \mu \sqrt{t^2-\vert x-y\vert ^2} \,\right) g(y) \, \mathrm {d}y \nonumber \\&\quad +\!\frac{1}{2} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+\!t} \mathrm {I}_0'\left( \mu \sqrt{t^2-\vert x\!-\!y\vert ^2} \right) \left( \frac{\mu (1-bt)}{\sqrt{t^2-\vert x\!-\!y\vert ^2} } \!-\!\frac{\mu t^2}{(t^2-\vert x-y\vert ^2)^{3/2}}\right) g(y) \, \mathrm {d}y \nonumber \\&\quad +\frac{1}{2} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \mathrm {I}_0''\left( \mu \sqrt{t^2-\vert x-y\vert ^2} \right) \frac{\mu ^2t^2}{t^2-\vert x-y\vert ^2 } \, g(y) \, \mathrm {d}y. \end{aligned}$$
(11)

We point out that, differentiating the second integral in (10), we applied the relation

$$\begin{aligned} \frac{\mathrm {I}_0'(z)}{z}\bigg \vert _{z=0}=\frac{1}{2} \end{aligned}$$
(12)

that follows from the relation \(\mathrm {I}_0'=\mathrm {I}_1\) and from the Maclaurin series expansion for the function \(z^{-1}\mathrm {I}_1(z)\) (cf. [15, Equations (10.29.3) and (10.25.2)]). Using again (12), we find that the second order derivative with respect to x of \(\mathrm {S}\big (t;b,m^2\big )(g)(x)\) is given by

$$\begin{aligned} \frac{\partial ^2}{\partial x^2}&\mathrm {S}\big (t;b,m^2\big )(g)(x) = \frac{\mu ^2 t}{4} \mathrm {e}^{-\frac{b}{2}t}(g(x+t)+g(x-t))\nonumber \\&\quad +\frac{1}{2} \, \mathrm {e}^{-\frac{b}{2}t}(g'(x+t)-g'(x-t)) \nonumber \\&\quad +\!\frac{1}{2} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \mathrm {I}_0'\left( \mu \sqrt{t^2-\vert x\!-\!y\vert ^2} \right) \left( -\!\frac{\mu }{\sqrt{t^2-\vert x\!-\!y\vert ^2} } \!-\!\frac{\mu (x-y)^2}{(t^2-\vert x-y\vert ^2)^{3/2}}\right) g(y) \, \mathrm {d}y \nonumber \\&\quad +\frac{1}{2} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \mathrm {I}_0''\left( \mu \sqrt{t^2-\vert x-y\vert ^2} \right) \frac{\mu ^2(x-y)^2}{t^2-\vert x-y\vert ^2 } \, g(y) \, \mathrm {d}y. \end{aligned}$$
(13)

Combining (10), (11) and (13), we get

$$\begin{aligned}&\left( \frac{\partial ^2}{\partial t^2}-\frac{\partial ^2}{\partial x^2}+b\frac{\partial }{\partial t}+m^2 I\right) \mathrm {S}\big (t;b,m^2\big )(g)(x) \nonumber \\&\quad = \frac{1}{2} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \mu ^2 \mathrm {I}_0''\left( \mu \sqrt{t^2-\vert x-y\vert ^2 }\right) g(y) \, \mathrm {d}y \nonumber \\&\qquad + \frac{1}{2} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \frac{\mu \mathrm {I}_0'\left( \mu \sqrt{t^2-\vert x-y\vert ^2} \right) }{\sqrt{t^2-\vert x-y\vert ^2}}g(y) \, \mathrm {d}y \nonumber \\&\qquad +\frac{1}{2} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \left( m^2-\frac{b^2}{4}\right) \mathrm {I}_0\left( \mu \sqrt{t^2-\vert x-y\vert ^2} \right) g(y) \, \mathrm {d}y \nonumber \\&\quad = \frac{\mu ^2}{2} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \left( \mathrm {I}_0''(z)+\frac{\mathrm {I}'_0(z)}{z}-\mathrm {I}_0(z)\right) \bigg \vert _{z=\mu \sqrt{t^2-(x-y)^2}} \ g(y) \, \mathrm {d}y =0, \end{aligned}$$
(14)

where in the last step we used the fact that \(\mathrm {I}_0\) is a solution of the ODE (see [15, Equation (10.25.1)])

$$\begin{aligned} z^2\mathrm {I}_0''(z)+z \mathrm {I}'_0(z)-z^2\mathrm {I}_0(z)=0. \end{aligned}$$

We emphasize that in the dominant mass case we can repeat the same steps as before. However, since \(\mu ^2=m^2-\frac{b^2}{4}\) in this case, we use the fact that \(\mathrm {J}_0\) is a solution of the ODE (see [15, Equation (10.2.1)])

$$\begin{aligned} z^2\mathrm {J}_0''(z)+z \mathrm {J}'_0(z)+z^2\mathrm {J}_0(z)=0. \end{aligned}$$

So, we proved (9) for \(f=0\) and \(F=0\).
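
The two Bessel ODEs used in the previous step can also be verified numerically. The following scipy-based sketch is only a sanity check; it relies on the library routines iv, jv, ivp, jvp for the Bessel functions and their derivatives.

```python
# Sketch: numerical check that I_0 solves z^2 y'' + z y' - z^2 y = 0 and that
# J_0 solves z^2 y'' + z y' + z^2 y = 0 (cf. [15, (10.25.1) and (10.2.1)]).
import numpy as np
from scipy.special import iv, jv, ivp, jvp   # Bessel functions and their derivatives

z = np.linspace(0.1, 20.0, 400)

res_I = z**2*ivp(0, z, 2) + z*ivp(0, z, 1) - z**2*iv(0, z)
res_J = z**2*jvp(0, z, 2) + z*jvp(0, z, 1) + z**2*jv(0, z)

# Residuals are at the level of rounding errors relative to the size of the terms.
print("I_0 relative residual:", np.max(np.abs(res_I)/(z**2*iv(0, z))))   # ~1e-16
print("J_0 absolute residual:", np.max(np.abs(res_J)))                   # ~1e-13
```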

Now we focus on the case \(g=0\) and \(F=0\). We claim that

$$\begin{aligned} {\widetilde{\phi }}(t,x)\doteq \frac{\partial }{\partial t}\mathrm {S}\big (t;b,m^2\big )(f)(x)+b\mathrm {S}\big (t;b,m^2\big )(f)(x) \end{aligned}$$

is the solution of (7) with vanishing second data and source term.

Clearly, \({\widetilde{\phi }}\) solves the homogeneous differential equation as the differential operators \((\partial _t+bI)\) and \((\partial _t^2-\partial _x^2+b\partial _t+m^2 I)\) commute. We check now the Cauchy conditions. Using the initial conditions derived in the previous case, we see immediately that \({\widetilde{\phi }}(0,x)=f(x)\). On the other hand,

$$\begin{aligned} \frac{\partial }{\partial t}{\widetilde{\phi }}(t,x)= \left( \frac{\partial ^2}{\partial t^2}+b\frac{\partial }{\partial t}\right) \mathrm {S}\big (t;b,m^2\big )(f)(x) = \left( \frac{\partial ^2}{\partial x^2}-m^2 I\right) \mathrm {S}\big (t;b,m^2\big )(f)(x). \end{aligned}$$

Therefore, combining (8) and (13) with the previous relation it follows that \(\partial _t {\widetilde{\phi }}(0,x)\!=\!0\).

It remains to consider the inhomogeneous Cauchy problem (7) with vanishing initial data \(f=g=0\). By using Duhamel’s principle together with the solution operator defined in (8), since the model under consideration is invariant under time translations, we get that the solution in this case is given by

$$\begin{aligned} \int _0^t \mathrm {S}\big (t-\tau ;b,m^2\big )(F(\tau ,\cdot ))(x)\, \mathrm {d}\tau . \end{aligned}$$

Due to the linearity of (7), combining the results from the previous subcases, we conclude the validity of (9). \(\square \)

Remark 2

By using (8) and (10), we can rewrite (9) more explicitly as follows:

$$\begin{aligned} \phi (t,x)&= \frac{1}{2} \, \mathrm {e}^{-\frac{b}{2}t}(f(x+t)+f(x-t))\nonumber \\&\qquad + \frac{\mu }{2} \, t \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \frac{\mathrm {I}_1\left( \mu \sqrt{t^2-\vert x-y\vert ^2} \right) }{\sqrt{t^2-\vert x-y\vert ^2} } f(y) \, \mathrm {d}y \nonumber \\&\qquad +\frac{1}{2} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \mathrm {I}_0\left( \mu \sqrt{t^2-\vert x-y\vert ^2} \,\right) \left( g(y)+\frac{b}{2} f(y)\right) \, \mathrm {d}y \nonumber \\&\qquad + \frac{1}{2} \int _0^t \mathrm {e}^{-\frac{b}{2}(t-\tau )}\int _{x-t+\tau }^{x+t-\tau } \mathrm {I}_0\left( \mu \sqrt{(t-\tau )^2-\vert x-y\vert ^2} \,\right) F(\tau ,y) \, \mathrm {d}y \, \mathrm {d}\tau \end{aligned}$$
(15)

for \(b^2>4m^2\), and

$$\begin{aligned} \phi (t,x)&= \frac{1}{2} \, \mathrm {e}^{-\frac{b}{2}t}(f(x+t)+f(x-t))+\frac{1}{2} \, \mathrm {e}^{-\frac{b}{2} t} \int _{x-t}^{x+t} \left( g(y)+\frac{b}{2} f(y)\right) \, \mathrm {d}y \nonumber \\&\qquad + \frac{1}{2} \int _0^t \mathrm {e}^{-\frac{b}{2}(t-\tau )}\int _{x-t+\tau }^{x+t-\tau } F(\tau ,y) \, \mathrm {d}y \, \mathrm {d}\tau \end{aligned}$$
(16)

for \(b^2=4m^2\). Finally, for \(b^2<4m^2\) the representation formula is analogous to the one in (15), but instead of the modified Bessel functions \(\mathrm {I}_0,\mathrm {I}_1\) we have the Bessel functions \(\mathrm {J}_0,-\mathrm {J}_1\), respectively. In particular, we use the relation \(\mathrm {J}_0'=-\mathrm {J}_1\), see [15, Equation (10.6.2)].
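
As an independent sanity check of (16), in the balanced case \(m^2=b^2/4\) with \(F=0\) one can verify symbolically that the right-hand side of (16) solves the homogeneous problem (7). In the following sympy sketch (purely illustrative) the auxiliary function H denotes a primitive of \(g+\frac{b}{2}f\), so that the integral term in (16) equals \(H(x+t)-H(x-t)\).

```python
# Sketch (sympy): formula (16) in the balanced case m^2 = b^2/4 with F = 0.
# H is a primitive of g + (b/2) f, so the integral term equals H(x+t) - H(x-t).
import sympy as sp

t, x, b = sp.symbols('t x b', real=True)
f = sp.Function('f')
H = sp.Function('H')      # auxiliary primitive, introduced only for this check

phi = sp.exp(-b*t/2)*(f(x + t) + f(x - t) + H(x + t) - H(x - t))/2

# Residual of the damped Klein-Gordon operator with m^2 = b^2/4:
residual = (sp.diff(phi, t, 2) - sp.diff(phi, x, 2)
            + b*sp.diff(phi, t) + (b**2/4)*phi)
print(sp.simplify(residual))                       # 0

# Cauchy conditions: phi(0,x) = f(x), phi_t(0,x) = H'(x) - (b/2) f(x) = g(x)
print(sp.simplify(phi.subs(t, 0)))                 # f(x)
print(sp.simplify(sp.diff(phi, t).subs(t, 0)))     # -b*f(x)/2 + H'(x)
```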

Remark 3

In the statement of Theorem 1 we consider only \(b,m^2\) such that \(b^2\geqslant 4m^2\). This assumption is due to the fact that in the dominant mass case \(b^2<4m^2\) the kernel functions in the representation formula (9) are no longer nonnegative functions. Indeed, in the iteration argument that we will use to prove the blow-up result it is crucial that we work with a nonnegative functional. For \(b^2<4m^2\) the partial differential operator acting on u in (1) is in this sense very close to the Klein–Gordon operator (i.e. the one for \(b=0\)), and the damped oscillations of the Bessel functions of the first kind do not allow us to carry out the iteration procedure. We stress that when the equation for u in (1) is exactly the Klein–Gordon equation (namely, for \(b=0\)) the approach of this paper is unfruitful for a positive mass, but it could be used to deal with a negative mass (that is, for \(m^2<0\) in our notation).

4 Proof of Theorem 1

The proof of Theorem 1 is based on the approach introduced by Zhou in [20], where a blow-up result for the semilinear wave equation with nonlinearity of derivative-type is proved for all space dimensions. Recently, this approach has been applied to study semilinear models with time-dependent coefficients (cf. [9, 12, 16]).

In [20] d’Alembert’s formula is used to prove the blow-up result for the semilinear wave equation with nonlinearity of derivative type. In our case, since we work with the weakly coupled system (1), besides d’Alembert’s formula (coming from the equation for v) we shall also employ the representation formulas (15) and (16) from Sect. 3. Notice that (15) coincides exactly with (16) for \(\mu =0\). Hence, in what follows we always work with (15), which covers both cases.

Let us introduce the following notation: we will write any \(x\in {\mathbb {R}}^n\) as \(x=(z,w)\) with \(z\in {\mathbb {R}}\) and \(w\in {\mathbb {R}}^{n-1}\). Thanks to this notation we may introduce the following functions

$$\begin{aligned}&{\mathscr {U}}(t,z)\doteq \int _{{\mathbb {R}}^{n-1}}u(t,z,w) \, \mathrm {d}w, \ {\mathscr {V}}(t,z)\doteq \int _{{\mathbb {R}}^{n-1}}v(t,z,w) \, \mathrm {d}w \quad \text{ for } \text{ any } \ t\in [0,T), \, z\in {\mathbb {R}}, \\&{\mathscr {U}}_j(z)\doteq \int _{{\mathbb {R}}^{n-1}}u_j(z,w) \, \mathrm {d}w, \ \ \ {\mathscr {V}}_j(z)\doteq \int _{{\mathbb {R}}^{n-1}}v_j(z,w) \, \mathrm {d}w \qquad \, \text{ for } \text{ any } \ z\in {\mathbb {R}}, \, j=0,1. \end{aligned}$$

Clearly, it makes sense to introduce these functions only for \(n\geqslant 2\), while for \(n=1\) we set simply \(({\mathscr {U}},{\mathscr {V}})=(u,v)\) and \(({\mathscr {U}}_0,{\mathscr {U}}_1,{\mathscr {V}}_0,{\mathscr {V}}_1)=(u_0,u_1,v_0,v_1)\).

We remark that due to the assumption \(\mathrm {supp}\, u_j, \mathrm {supp} \, v_j \subset B_R\) for \(j=0,1\) it follows that

$$\begin{aligned} \mathrm {supp}\, {\mathscr {U}}_j , \mathrm {supp}\, {\mathscr {V}}_j \subset (-R,R) \quad j=0,1. \end{aligned}$$
(17)

Analogously, from (5) we have

$$\begin{aligned} \mathrm {supp}\, {\mathscr {U}}(t,\cdot ) , \mathrm {supp}\, {\mathscr {V}}(t,\cdot ) \subset \big (-(R+t),R+t\big ) \quad \text{ for } \text{ any } \ t\in [0,T). \end{aligned}$$
(18)

By a straightforward computation we find that \(({\mathscr {U}},{\mathscr {V}})\) solves for \(n\geqslant 2\) the following system

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _t^2 {\mathscr {U}}-\partial _z^2 {\mathscr {U}}+b \partial _t {\mathscr {U}}+ m^2 {\mathscr {U}}=\displaystyle {\int _{{\mathbb {R}}^{n-1}}\vert \partial _t v(t,z,w)\vert ^p \mathrm {d}w}, &{} t\in (0,T), \, z\in {\mathbb {R}}, \\ \partial _t^2 {\mathscr {V}}-\partial _z^2 {\mathscr {V}}=\displaystyle {\int _{{\mathbb {R}}^{n-1}}\vert \partial _t u(t,z,w)\vert ^q \mathrm {d}w}, &{} t\in (0,T), \, z\in {\mathbb {R}}, \\ ({\mathscr {U}}, \partial _t {\mathscr {U}})(0,z)=\varepsilon ({\mathscr {U}}_0, {\mathscr {U}}_1)(z), &{} z\in {\mathbb {R}}, \\ ({\mathscr {V}}, \partial _t {\mathscr {V}})(0,z)=\varepsilon ({\mathscr {V}}_0, {\mathscr {V}}_1)(z), &{} z\in {\mathbb {R}}. \end{array}\right. } \end{aligned}$$

By using d’Alembert’s formula and the representation formula for the damped wave equation with a mass term from Sect. 3, we obtain the following integral representations

$$\begin{aligned} {\mathscr {U}}(t,z)&= {\mathscr {U}}^{\mathrm {lin}}(t,z)+{\mathscr {U}}^{\mathrm {nlin}}(t,z), \\ {\mathscr {V}}(t,z)&= {\mathscr {V}}^{\mathrm {lin}}(t,z)+{\mathscr {V}}^{\mathrm {nlin}}(t,z), \end{aligned}$$

where

$$\begin{aligned} {\mathscr {U}}^{\mathrm {lin}}(t,z)&\doteq \frac{\varepsilon }{2}\,\mathrm {e}^{-\frac{b}{2}t} \big ({\mathscr {U}}_0(z+t)+{\mathscr {U}}_0(z-t)\big ) \\&\quad + \frac{\varepsilon }{2} \,\mathrm {e}^{-\frac{b}{2}t} \int _{z-t}^{z+t}\mathrm {I}_0\left( \mu \sqrt{t^2-\vert z-y\vert ^2} \,\right) \big ({\mathscr {U}}_1(y)+\tfrac{b}{2} {\mathscr {U}}_0(y)\big ) \, \mathrm {d}y \\&\quad +\frac{\mu \, \varepsilon }{2} \, t\, \mathrm {e}^{-\frac{b}{2}t} \int _{z-t}^{z+t} \frac{\mathrm {I}_1\left( \mu \sqrt{t^2-\vert z-y\vert ^2} \,\right) }{\sqrt{t^2-\vert z-y\vert ^2}} \, {\mathscr {U}}_0(y) \, \mathrm {d}y, \\ {\mathscr {U}}^{\mathrm {nlin}}(t,z)&\!\doteq \! \frac{1}{2} \!\int _0^t \! \mathrm {e}^{\frac{b}{2}(\tau -t)}\!\!\! \int _{z-t\!+\!\tau }^{z\!+t\!\!-\tau } \!\mathrm {I}_0\!\! \left( \!\mu \sqrt{(t-\tau )^2\!-\!\vert z\!-\!y\vert ^2} \right) \!\! \int _{{\mathbb {R}}^{n-1}} \!\!\vert \partial _t v(\tau ,y,w)\vert ^p \mathrm {d}w \mathrm {d}y \mathrm {d}\tau , \\ {\mathscr {V}}^{\mathrm {lin}}(t,z)&\doteq \frac{\varepsilon }{2} \big ({\mathscr {V}}_0(z+t)+{\mathscr {V}}_0(z-t)\big )+ \frac{\varepsilon }{2} \int _{z-t}^{z+t}{\mathscr {V}}_1(y) \, \mathrm {d}y, \\ {\mathscr {V}}^{\mathrm {nlin}}(t,z)&\doteq \frac{1}{2} \int _0^t \int _{z-t+\tau }^{z+t-\tau } \int _{{\mathbb {R}}^{n-1}}\vert \partial _t u(\tau ,y,w)\vert ^q \mathrm {d}w \, \mathrm {d}y \, \mathrm {d}\tau . \end{aligned}$$

Now that we have obtained the explicit integral representation formulas for \(({\mathscr {U}},{\mathscr {V}})\), we need to determine the functional related to \((u,v)\) that blows up in finite time. We anticipate that this functional will be \({\mathscr {V}}\) evaluated on a certain characteristic line. In order to prove the blow-up result we will establish a sequence of lower bound estimates for this functional, which we will determine by means of a suitable iteration frame.

The next step is to determine the iteration frame. For this purpose we proceed with lower bound estimates for the functions \({\mathscr {U}}^{\mathrm {nlin}},{\mathscr {V}}^{\mathrm {nlin}}\). Hereafter we focus on the case \(n\geqslant 2\); nevertheless, our computations can be repeated with simple modifications in the case \(n=1\).

By the support condition (5) we get

$$\begin{aligned} \mathrm {supp} \, \partial _t u(t,\cdot ), \, \mathrm {supp} \, \partial _t v(t,\cdot ) \subset B_{R+t} \quad \text{ for } \text{ any } \ t\in [0,T), \end{aligned}$$

which in turn implies

$$\begin{aligned} \mathrm {supp} \, \partial _t u(t,z,\cdot ), \, \mathrm {supp} \, \partial _t v(t,z,\cdot ) \subset \left\{ w\in {\mathbb {R}}^{n-1}: \vert w\vert \leqslant \left( (R+t)^2-z^2\right) ^{1/2}\right\} \end{aligned}$$
(19)

for any \(t\in [0,T)\) and any \(z\in {\mathbb {R}}\) such that \(\vert z\vert \leqslant R+t\). Combining Hölder’s inequality and (19), we arrive at

$$\begin{aligned}&\int _{{\mathbb {R}}^{n-1}}\vert \partial _t v(\tau ,y,w)\vert ^p \mathrm {d}w \gtrsim \big ((R+\tau )^2-y^2\big )^{-\frac{n-1}{2}(p-1)} \vert \partial _t {\mathscr {V}}(\tau ,y)\vert ^p, \\&\int _{{\mathbb {R}}^{n-1}}\vert \partial _t u(\tau ,y,w)\vert ^q \mathrm {d}w \gtrsim \big ((R+\tau )^2-y^2\big )^{-\frac{n-1}{2}(q-1)} \vert \partial _t {\mathscr {U}}(\tau ,y)\vert ^q, \end{aligned}$$

for any \(\tau \in [0,t]\) and any \(y\in [z-t+\tau ,z+t-\tau ]\). Thus, we obtain

$$\begin{aligned}&{\mathscr {U}}^{\mathrm {nlin}}(t,z) \gtrsim \int _0^t \mathrm {e}^{-\frac{b}{2}(t-\tau )}\int _{z-t+\tau }^{z+t-\tau } \frac{\mathrm {I}_0\left( \mu \sqrt{(t-\tau )^2-\vert z-y\vert ^2} \,\right) }{\big ((R+\tau )^2-y^2\big )^{\frac{n-1}{2}(p-1)} } \vert \partial _t {\mathscr {V}}(\tau ,y)\vert ^p \,\mathrm {d}y \,\mathrm {d}\tau , \\&{\mathscr {V}}^{\mathrm {nlin}}(t,z) \gtrsim \int _0^t \int _{z-t+\tau }^{z+t-\tau } \big ((R+\tau )^2-y^2\big )^{-\frac{n-1}{2}(q-1)} \vert \partial _t {\mathscr {U}}(\tau ,y)\vert ^q\, \mathrm {d}y \, \mathrm {d}\tau . \end{aligned}$$

Applying Fubini’s theorem, we have

$$\begin{aligned}&{\mathscr {U}}^{\mathrm {nlin}}(t,z) \gtrsim \int _{z-t}^{z+t} \int _0^{t-\vert z-y\vert } \mathrm {e}^{-\frac{b}{2}(t-\tau )} \frac{\mathrm {I}_0\left( \mu \sqrt{(t-\tau )^2-\vert z-y\vert ^2} \,\right) }{\big ((R+\tau )^2-y^2\big )^{\frac{n-1}{2}(p-1)}} \vert \partial _t {\mathscr {V}}(\tau ,y)\vert ^p \, \mathrm {d}\tau \, \mathrm {d}y, \\&{\mathscr {V}}^{\mathrm {nlin}}(t,z) \gtrsim \int _{z-t}^{z+t} \int _0^{t-\vert z-y\vert } \big ((R+\tau )^2-y^2\big )^{-\frac{n-1}{2}(q-1)} \vert \partial _t {\mathscr {U}}(\tau ,y)\vert ^q \, \mathrm {d}\tau \, \mathrm {d}y. \end{aligned}$$

From here on we will work on the characteristic line \(t-z=R\) for \(z\geqslant R\). Also, shrinking the domain of integration in the previous estimate for \({\mathscr {U}}^{\mathrm {nlin}}\), we find

$$\begin{aligned} {\mathscr {U}}^{\mathrm {nlin}}&(R+z,z) \gtrsim \int _{R}^{z} \int _{y-R}^{y+R} \mathrm {e}^{-\frac{b}{2}(t-\tau )} \frac{\mathrm {I}_0\left( \mu \sqrt{(t-\tau )^2-\vert z-y\vert ^2} \,\right) }{ \big ((R+\tau )^2-y^2\big )^{\frac{n-1}{2}(p-1)}} \vert \partial _t {\mathscr {V}}(\tau ,y)\vert ^p \, \mathrm {d}\tau \, \mathrm {d}y \\&\gtrsim \int _{R}^{z} \frac{\mathrm {e}^{-\frac{b}{2}(z-y)}}{(R+y)^{\frac{n-1}{2}(p-1)}}\int _{y-R}^{y+R} \mathrm {I}_0\left( \mu \sqrt{(t-\tau )^2-\vert z-y\vert ^2} \,\right) \vert \partial _t {\mathscr {V}}(\tau ,y)\vert ^p \, \mathrm {d}\tau \, \mathrm {d}y \\&\gtrsim \int _{R}^{z} \mathrm {e}^{-\frac{b}{2}(z-y)} (R+y)^{-\frac{n-1}{2}(p-1)}\int _{y-R}^{y+R} \vert \partial _t {\mathscr {V}}(\tau ,y)\vert ^p \, \mathrm {d}\tau \, \mathrm {d}y, \end{aligned}$$

where in the last step we used the inequality \(\mathrm {I}_0(s)\geqslant 1\) for any \(s\geqslant 0\) (due to the identity \(\mathrm {I}_0'(s)=\mathrm {I}_1(s)\geqslant 0\) for any \(s\geqslant 0\) and \(\mathrm {I}_0(0)=1\)).

Remark 4

As we pointed out in the introduction, we cannot use the asymptotic estimate

$$\begin{aligned} \mathrm {I}_0(s)\sim \frac{1}{\sqrt{2\pi s}} \mathrm {e}^s \quad \text{ for } \ s\rightarrow \infty \end{aligned}$$

while deriving the previous inequality. Indeed, on the domain of integration (namely, for \(y\in [R,z]\) and \(\tau \in [y-R,y+R]\)) the argument of the modified Bessel function of the first kind of order 0 satisfies

$$\begin{aligned} \mu \sqrt{(t-\tau )^2-\vert z-y\vert ^2}\approx \mu \sqrt{z-y}, \end{aligned}$$

so it can be large only for y away from a neighborhood of z. However, if we further shrink the domain of integration by removing a neighborhood of z, then we are not able to compensate the exponentially decaying term \(\mathrm {e}^{-\frac{b}{2}z}\) through the factor \(\mathrm {e}^{\frac{b}{2}y}\) in the integral. This explains why earlier we had to use the lower bound estimate \(\mathrm {I}_0\geqslant 1\) rather than the asymptotic estimate for \(\mathrm {I}_0\).
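
The following numerical illustration (with the sample values \(b=R=1\), \(\mu =1/2\), chosen only for the example) makes this concrete: even when \(\mathrm {I}_0\) is evaluated at the largest admissible argument, the weight \(\mathrm {e}^{-\frac{b}{2}(z-y)}\) forces the product to decay in \(z-y\), while the crude bound \(\mathrm {I}_0\geqslant 1\) is always available.

```python
# Sketch: why the lower bound I_0 >= 1 is used instead of the exponential
# asymptotics of I_0.  The values of b, R, mu are illustrative
# (recall that mu = sqrt(b^2/4 - m^2) <= b/2 in the dominant damping case).
import numpy as np
from scipy.special import i0

b, R, mu = 1.0, 1.0, 0.5
s = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0])   # s plays the role of z - y

# For tau in [y-R, y+R] the argument of I_0 is at most mu*sqrt(4R(s+R)):
arg_max = mu*np.sqrt(4.0*R*(s + R))

print("I_0 >= 1 on the whole range:", bool(np.all(i0(arg_max) >= 1.0)))
# Even at this most favorable argument the exponential weight dominates:
print(np.exp(-0.5*b*s)*i0(arg_max))                # decreases to 0 as s grows
```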

Then, by Jensen’s inequality and the fundamental theorem of calculus, we get

$$\begin{aligned} {\mathscr {U}}^{\mathrm {nlin}}(R+z,z)&\gtrsim \int _{R}^{z} \mathrm {e}^{-\frac{b}{2}(z-y)} (R+y)^{-\frac{n-1}{2}(p-1)}\bigg \vert \int _{y-R}^{y+R} \partial _t {\mathscr {V}}(\tau ,y) \, \mathrm {d}\tau \, \bigg \vert ^p \mathrm {d}y \nonumber \\&\gtrsim \int _{R}^{z} \mathrm {e}^{-\frac{b}{2}(z-y)} (R+y)^{-\frac{n-1}{2}(p-1)} \vert {\mathscr {V}}(y+R,y)\vert ^p \, \mathrm {d}y \end{aligned}$$
(20)

for \(z\geqslant R\), where we employed \( {\mathscr {V}}(y-R,y)=0\) that follows from the support condition (18). For \({\mathscr {V}}^{\mathrm {nlin}}\) the estimate from below on the characteristic line \(t-z=R\) can be obtained in a similar way. For \(z\geqslant R\) it holds

$$\begin{aligned} {\mathscr {V}}^{\mathrm {nlin}}(R+z,z)&\gtrsim \int _{R}^{z} (R+y)^{-\frac{n-1}{2}(q-1)} \vert {\mathscr {U}}(y+R,y)\vert ^q \, \mathrm {d}y. \end{aligned}$$
(21)
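
Both the Hölder step above and the Jensen step in (20) rest on the elementary bound \(\int _\Omega \vert w\vert ^p \geqslant \vert \Omega \vert ^{1-p}\big \vert \int _\Omega w\big \vert ^p\) on a set \(\Omega \) of finite measure. The following short numpy snippet, with an arbitrary sample integrand, illustrates it.

```python
# Sketch: the elementary bound  int_Omega |w|^p  >=  |Omega|^(1-p) |int_Omega w|^p,
# used for the integrals in w (Hoelder) and in tau (Jensen).  The integrand below
# is an arbitrary sample.
import numpy as np

p = 2.3
a, c = -1.0, 3.0                                  # Omega = [a, c], |Omega| = c - a
y = np.linspace(a, c, 20001)
dy = y[1] - y[0]
w = np.exp(-y**2)*(1.0 + 0.5*np.sin(5.0*y))       # sample integrand

lhs = np.sum(np.abs(w)**p)*dy                     # int_Omega |w|^p
rhs = (c - a)**(1 - p)*abs(np.sum(w)*dy)**p       # |Omega|^(1-p) |int_Omega w|^p
print(lhs, ">=", rhs, ":", bool(lhs >= rhs))      # True
```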

Therefore, since \(u_0,u_1,v_0,v_1\) and the kernel functions in the definitions of \({\mathscr {U}}^{\mathrm {lin}},{\mathscr {V}}^{\mathrm {lin}}\) are nonnegative, for suitable positive constants \(C,K\) depending on \(n,p,q,R\), from (20) and (21) we have the iteration frame

$$\begin{aligned} {\mathscr {U}}(R+z,z)&\geqslant C \int _{R}^{z} \mathrm {e}^{-\frac{b}{2}(z-y)} (R+y)^{-\frac{n-1}{2}(p-1)} \vert {\mathscr {V}}(y+R,y)\vert ^p \, \mathrm {d}y \, \quad \text{ for } \ z\geqslant R, \end{aligned}$$
(22)
$$\begin{aligned} {\mathscr {V}}(R+z,z)&\geqslant K \int _{R}^{z} (R+y)^{-\frac{n-1}{2}(q-1)} \vert {\mathscr {U}}(y+R,y)\vert ^q \, \mathrm {d}y \qquad \qquad \quad \, \text{ for } \ z\geqslant R. \end{aligned}$$
(23)

In order to start the iteration procedure, we need a first lower bound estimate for \({\mathscr {V}}(R+z,z)\). Since \(v_0\) is nonnegative (and so is \({\mathscr {V}}_0\)), from the definition of \({\mathscr {V}}^{\mathrm {lin}}\) we get immediately

$$\begin{aligned} {\mathscr {V}}^{\mathrm {lin}}(t,z)\geqslant \frac{\varepsilon }{2} \int _{z-t}^{z+t}{\mathscr {V}}_1(y) \, \mathrm {d}y. \end{aligned}$$

On the characteristic line \(t-z=R\) for \(z\geqslant R\) we have \([-R,R]\subset [z-t,z+t]\); thus, from the support condition (17) we obtain

$$\begin{aligned} {\mathscr {V}}^{\mathrm {lin}}(R+z,z)\geqslant \frac{\varepsilon }{2} \int _{{\mathbb {R}}} {\mathscr {V}}_1(y) \, \mathrm {d}y = \frac{\varepsilon }{2} \int _{{\mathbb {R}}} \int _{{\mathbb {R}}^{n-1}} v_1(y,w) \, \mathrm {d}w \, \mathrm {d}y = \frac{1}{2} \Vert v_1\Vert _{L^1({\mathbb {R}}^n)} \, \varepsilon , \end{aligned}$$
(24)

where we used Fubini’s theorem and the nonnegativity of \(v_1\).

Remark 5

Let us point out that for \({\mathscr {U}}^{\mathrm {lin}}\) we may derive only lower bounds that decay exponentially. Namely, since \(\mathrm {I}_0(s)\geqslant 1\) and \(\mathrm {I}_1(s)\geqslant \frac{s}{2}\) for \(s\geqslant 0\) (the estimate from below for \(\mathrm {I}_1\) is a straightforward consequence of the Maclaurin series expansion), and we assumed \(u_0,u_1\geqslant 0\), from the definition of \({\mathscr {U}}^{\mathrm {lin}}\) for \(z\geqslant R\) we have

$$\begin{aligned} {\mathscr {U}}^{\mathrm {lin}}(R+z,z)\geqslant \frac{\varepsilon }{2} \big \Vert u_1+\tfrac{b}{2} u_0\big \Vert _{L^1({\mathbb {R}}^n)} \, \mathrm {e}^{-\frac{b}{2}t} + \frac{\mu ^2 \varepsilon }{4} \Vert u_0\Vert _{L^1({\mathbb {R}}^n)} \, t \, \mathrm {e}^{-\frac{b}{2}t}. \end{aligned}$$

Unfortunately, combining the previous exponential lower bound for \({\mathscr {U}}\) with the iteration frame (22)–(23), we are not able to get a sequence of lower bound estimates for \({\mathscr {U}}(R+z,z)\) whose right-hand sides diverge as \(j\rightarrow \infty \) for z above a certain \(\varepsilon \)-dependent threshold (here j denotes the index in the sequence of lower bounds). In other words, an exponentially decaying lower bound for \({\mathscr {U}}\) does not allow us to derive a blow-up result for (1).

Since the nonlinear term in the second equation in (1) is nonnegative, from (24) it follows

$$\begin{aligned} {\mathscr {V}}(R+z,z)\geqslant M\varepsilon \end{aligned}$$
(25)

for \(z\geqslant R\), where \(M\doteq \frac{1}{2} \Vert v_1\Vert _{L^1({\mathbb {R}}^n)}\).
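
Before carrying out the rigorous iteration, it may be instructive to see the mechanism at work numerically. The following sketch is purely illustrative (the values \(n=1\), \(p=q=2\), \(b=R=1\), \(C=K=1\), \(M\varepsilon =1/2\) are arbitrary and the Riemann sums only approximate the integrals): it iterates the frame (22)–(23) starting from (25) and shows the resulting bounds for \({\mathscr {V}}\) growing without control on a fixed interval of z.

```python
# Sketch: discrete iteration of the frame (22)-(23) starting from (25),
# i.e. V(R+z,z) >= M*eps.  All parameter values below are illustrative choices.
import numpy as np

n, p, q = 1, 2.0, 2.0            # for n = 1 the weights (R+y)^(...) are identically 1
b, R = 1.0, 1.0
C, K, M, eps = 1.0, 1.0, 1.0, 0.5

z = np.linspace(R, 30.0, 3001)
dz = z[1] - z[0]
a_p = 0.5*(n - 1)*(p - 1)        # exponents of the weights in (22) and (23)
a_q = 0.5*(n - 1)*(q - 1)

v_low = np.full_like(z, M*eps)   # the starting lower bound (25)
for j in range(1, 5):
    # (22): U(R+z,z) >= C * int_R^z exp(-b(z-y)/2) (R+y)^(-a_p) V(R+y,y)^p dy
    u_low = np.array([C*np.sum(np.exp(-0.5*b*(zi - z[:i+1]))
                               *(R + z[:i+1])**(-a_p)*v_low[:i+1]**p)*dz
                      for i, zi in enumerate(z)])
    # (23): V(R+z,z) >= K * int_R^z (R+y)^(-a_q) U(R+y,y)^q dy
    v_low = K*np.cumsum((R + z)**(-a_q)*u_low**q)*dz
    print(f"step {j}: lower bound for V at z = {z[-1]:.0f} is about {v_low[-1]:.2e}")
```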

We can now start the iteration argument to get a sequence of lower bound estimates for \({\mathscr {V}}(R+z,z)\). Since an exponential factor is present in (22), we need to use a slicing procedure when shrinking the domain of integration. The idea of shrinking the domain of integration and cutting off smaller and smaller intervals at each step (i.e. the slicing procedure) was introduced for the first time in [1]. Then, in the series of papers [4, 5] a slicing procedure associated with an increasing exponential function was developed. Later, this method was applied to study the blow-up dynamics of several semilinear weakly coupled systems (cf. [2, 3, 6]).

We shall treat the subcritical case \(\theta (n,p,q)>0\) and the critical case \(\theta (n,p,q)=0\) separately.

4.1 Subcritical case

In this section we focus on the subcritical case \(\theta (n,p,q)>0\). Let us introduce the parameters that characterize the slicing procedure, namely, the sequences of positive reals \(\{\ell _j\}_{j\in {\mathbb {N}}}\), \(\{L_j\}_{j\in {\mathbb {N}}}\) defined as follows:

$$\begin{aligned} \ell _0&\doteq \max \left\{ \frac{2}{bR} ,1 \right\} \quad \text{ and } \quad \ell _j \doteq 1+(pq)^{-j} \quad \text{ for } \text{ any } \ j\in {\mathbb {N}} {\setminus } \{0\}, \end{aligned}$$
(26)
$$\begin{aligned} L_j&\doteq \prod _{k=0}^j \ell _k \quad \text{ for } \text{ any } \ j\in {\mathbb {N}}. \end{aligned}$$
(27)

We emphasize that

$$\begin{aligned} L\doteq \lim _{j\rightarrow \infty } L_j= \prod _{k=0}^\infty \ell _k \in {\mathbb {R}} \end{aligned}$$
(28)

and, moreover, since \(\ell _j>1\) for any \(j\in {\mathbb {N}}{\setminus } \{0\}\), we have \(L_j\uparrow L\) as \(j\rightarrow \infty \).

Remark 6

We emphasize that the convergence of \(\prod _{k=0}^\infty \ell _k \) is a consequence of the following elementary property: given a sequence of positive real numbers \(\{a_k\}_{k\in {\mathbb {N}}}\), the infinite product \(\prod _{k=0}^\infty a_k\) is convergent if and only if the series \(\sum _{k=0}^\infty \ln (a_k)\) is convergent.
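
For instance, for the sample values \(pq=4\) and \(bR=1\) (chosen only for illustration) the partial products \(L_j\) stabilize after a few factors:

```python
# Sketch: the partial products L_j = ell_0 * ... * ell_j for sample values of pq and bR.
pq, bR = 4.0, 1.0
L = max(2.0/bR, 1.0)          # ell_0
for j in range(1, 9):
    L *= 1.0 + pq**(-j)       # ell_j = 1 + (pq)^(-j)
    print(f"L_{j} = {L:.10f}")
# The partial products increase and converge, since sum_j log(1 + (pq)^(-j)) < infinity.
```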

Our next goal is to prove

$$\begin{aligned} {\mathscr {V}}(R+z,z)\geqslant C_j (R+z)^{-\alpha _j}(z-L_j R)^{\beta _j} \qquad \text{ for } \ z\geqslant L_j R \ \ \text{ and } \text{ any } \ j\in {\mathbb {N}}, \end{aligned}$$
(29)

where \(\{C_j\}_{j\in {\mathbb {N}}}\), \(\{\alpha _j\}_{j\in {\mathbb {N}}}\) and \(\{\beta _j\}_{j\in {\mathbb {N}}}\) are sequences of nonnegative real numbers that we shall determine iteratively. Clearly, due to (25), (29) for \(j=0\) holds true by setting \(C_0\doteq M\varepsilon \) and \(\alpha _0\doteq 0,\beta _0\doteq 0\). Next we prove the inductive step. We assume that (29) is satisfied for some \(j\geqslant 0\) and we will prove it for \(j+1\). Plugging (29) in (22), for \(z\geqslant L_jR\) we get

$$\begin{aligned} {\mathscr {U}}(R+z,z)&\geqslant C \int _{L_j R}^z \mathrm {e}^{-\frac{b}{2}(z-y)} (R+y)^{-\frac{n-1}{2}(p-1)} \vert {\mathscr {V}}(y+R,y)\vert ^p \, \mathrm {d}y \\&\geqslant CC_j^p \int _{L_j R}^z \mathrm {e}^{-\frac{b}{2}(z-y)} (R+y)^{-\frac{n-1}{2}(p-1)-\alpha _j p} (y-L_j R)^{\beta _j p} \, \mathrm {d}y \\&\geqslant CC_j^p (R+z)^{-\frac{n-1}{2}(p-1)-\alpha _j p} \int _{L_j R}^z \mathrm {e}^{-\frac{b}{2}(z-y)} (y-L_j R)^{\beta _j p} \, \mathrm {d}y. \end{aligned}$$

Thus, if we consider \(z\geqslant L_{j+1}R\) then \([z/\ell _{j+1},z]\subset [L_j R,z]\). Therefore, shrinking the domain of integration in the previous inequality, for \(z\geqslant L_{j+1}R\) we have

$$\begin{aligned} {\mathscr {U}}(R+z,z)&\geqslant CC_j^p (R+z)^{-\frac{n-1}{2}(p-1)-\alpha _j p} \int _{z/\ell _{j+1}}^z \mathrm {e}^{-\frac{b}{2}(z-y)} (y-L_j R)^{\beta _j p} \, \mathrm {d}y \nonumber \\&\geqslant CC_j^p (R+z)^{-\frac{n-1}{2}(p-1)-\alpha _j p} \left( \tfrac{z}{\ell _{j+1}}-L_j R\right) ^{\beta _j p} \int _{z/\ell _{j+1}}^z \mathrm {e}^{-\frac{b}{2}(z-y)} \, \mathrm {d}y \nonumber \\&= \frac{2 CC_j^p}{b \ell _{j+1}^{\beta _j p}} (R+z)^{-\frac{n-1}{2}(p-1)-\alpha _j p} (z-L_j \ell _{j+1} R)^{\beta _j p} \left( 1-\mathrm {e}^{-\frac{b}{2}(1-1/\ell _{j+1})z}\right) . \end{aligned}$$
(30)

Let us estimate from below the factor on the right-hand side of the previous chain of inequalities that contains the exponential term. Then, for \(z\geqslant L_{j+1}R\) it holds

$$\begin{aligned} 1-\mathrm {e}^{-\frac{b}{2}(1-1/\ell _{j+1})z}&\geqslant 1-\mathrm {e}^{-\frac{b}{2}RL_{j+1}(1-1/\ell _{j+1})} = 1-\mathrm {e}^{-\frac{b}{2}RL_{j}(\ell _{j+1}-1)} \nonumber \\&\geqslant 1-\mathrm {e}^{-\frac{b}{2}R\ell _{0}(\ell _{j+1}-1)} \geqslant 1-\mathrm {e}^{-(\ell _{j+1}-1)} \nonumber \\&\geqslant 1-\left( 1-(\ell _{j+1}-1)+\tfrac{1}{2} (\ell _{j+1}-1)^2\right) \nonumber \\&= (\ell _{j+1}-1)\left( 1-\tfrac{1}{2} (\ell _{j+1}-1)\right) = (pq)^{-2(j+1)}\left( (pq)^{j+1}-\tfrac{1}{2}\right) \nonumber \\&\geqslant (pq)^{-2(j+1)}\left( (pq)-\tfrac{1}{2}\right) . \end{aligned}$$
(31)

Combining (30) and (31), for \(z\geqslant L_{j+1} R\) we arrive at

$$\begin{aligned}&{\mathscr {U}}(R+z,z)\\&\geqslant \frac{2pq-1}{b}CC_j^p \ell _{j+1}^{-\beta _j p} (pq)^{-2(j+1)} (R+z)^{-\frac{n-1}{2}(p-1)-\alpha _j p} (z-L_{j+1} R)^{\beta _j p}. \end{aligned}$$

Plugging the previous lower bound for \({\mathscr {U}}(R+z,z)\) in (23), for \(z\geqslant L_{j+1} R\) we get

$$\begin{aligned} {\mathscr {V}}(R+z,z)&\geqslant K \int _{ L_{j+1} R}^{z} (R+y)^{-\frac{n-1}{2}(q-1)} \vert {\mathscr {U}}(y+R,y)\vert ^q \, \mathrm {d}y \\&\geqslant \frac{KC^q (2pq-1)^q b^{-q} C_j^{pq} }{\ell _{j+1}^{\beta _j pq} (pq)^{2q(j+1)} }\int _{ L_{j+1} R}^{z} (R+y)^{-\frac{n-1}{2}(pq-1)-\alpha _j pq} (y-L_{j+1} R)^{\beta _j pq} \, \mathrm {d}y \\&\geqslant \frac{KC^q (2pq-1)^q b^{-q} C_j^{pq} }{\ell _{j+1}^{\beta _j pq} (pq)^{2q(j+1)} } (R+z)^{-\frac{n-1}{2}(pq-1)-\alpha _j pq}\int _{ L_{j+1} R}^{z} (y-L_{j+1} R)^{\beta _j pq} \, \mathrm {d}y \\&= \frac{KC^q (2pq-1)^q b^{-q} C_j^{pq} }{\ell _{j+1}^{\beta _j pq} (pq)^{2q(j+1)} (\beta _j pq+1)} (R+z)^{-\frac{n-1}{2}(pq-1)-\alpha _j pq} (z-L_{j+1} R)^{\beta _j pq+1}. \end{aligned}$$

Thus, we proved (29) for \(j+1\) with

$$\begin{aligned} C_{j+1}&\doteq \frac{KC^q (2pq-1)^q b^{-q} C_j^{pq} }{\ell _{j+1}^{\beta _j pq} (pq)^{2q(j+1)} (\beta _j pq+1)}, \end{aligned}$$
(32)
$$\begin{aligned} \alpha _{j+1}&\doteq \frac{n-1}{2}(pq-1)+pq \alpha _j, \qquad \beta _{j+1}\doteq 1+pq \beta _j. \end{aligned}$$
(33)

The next step is to determine a suitable lower bound for \(C_j\) that is easier to handle. First we derive an explicit representation for \(\alpha _j\) and \(\beta _j\). By using (33) recursively, we have

$$\begin{aligned} \alpha _j&= \tfrac{n-1}{2}(pq-1)+pq \alpha _{j-1}= \cdots = (pq)^j\alpha _0+\tfrac{n-1}{2}(pq-1)\sum _{k=0}^{j-1}(pq)^k \nonumber \\&=\tfrac{n-1}{2}((pq)^j-1), \end{aligned}$$
(34)
$$\begin{aligned} \beta _j&= 1+pq\beta _{j-1}= \cdots = (pq)^j\beta _0+\sum _{k=0}^{j-1}(pq)^k = \tfrac{(pq)^j-1}{pq-1}. \end{aligned}$$
(35)

Therefore,

$$\begin{aligned} (\beta _{j-1}pq+1)^{-1}=\beta _j^{-1}\geqslant (pq-1) (pq)^{-j}. \end{aligned}$$
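
The closed forms (34)–(35) and the bound above can be cross-checked with a short sympy script (purely a verification; the numerical sample uses \(pq=6\)):

```python
# Sketch (sympy): the closed forms (34)-(35) satisfy the recursion (33), and
# beta_j^{-1} >= (pq-1)(pq)^{-j} holds since (pq)^j - 1 <= (pq)^j.
import sympy as sp

j = sp.symbols('j', integer=True, nonnegative=True)
r, n = sp.symbols('r n', positive=True)      # r stands for the product pq > 1

alpha = sp.Rational(1, 2)*(n - 1)*(r**j - 1)
beta = (r**j - 1)/(r - 1)

# Recursion (33): alpha_{j+1} = (n-1)(r-1)/2 + r*alpha_j and beta_{j+1} = 1 + r*beta_j
print(sp.simplify(alpha.subs(j, j + 1) - (sp.Rational(1, 2)*(n - 1)*(r - 1) + r*alpha)))  # 0
print(sp.simplify(beta.subs(j, j + 1) - (1 + r*beta)))                                    # 0

# Numerical check of beta_j^{-1} >= (pq-1)(pq)^{-j} for pq = 6 and j = 1,...,5
checks = [1/beta.subs({r: 6, j: k}) >= sp.Rational(5, 6**k) for k in range(1, 6)]
print(all(bool(c) for c in checks))   # True
```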

Moreover, since

$$\begin{aligned} \lim _{j\rightarrow \infty } \ell ^{\beta _{j-1}pq}_j=\lim _{j\rightarrow \infty } \exp \left( \frac{(pq)^j-pq}{pq-1}\ln \big (1+(pq)^{-j}\big )\right) =\mathrm {e}^{1/(pq-1)}, \end{aligned}$$

there exists \(N=N(p,q,b,R)>0\) such that \(\ell _j^{-\beta _{j-1}pq}>N\) for any \(j\in {\mathbb {N}}{\setminus } \{0\}\). Consequently,

$$\begin{aligned} C_j=\frac{KC^q (2pq-1)^q b^{-q} C_{j-1}^{pq} }{\ell _{j}^{\beta _{j-1} pq} (pq)^{2qj} (\beta _{j-1} pq+1)}\geqslant D (pq)^{-(2q+1)j}C_{j-1}^{pq} \end{aligned}$$
(36)

for any \(j\in {\mathbb {N}}\), where \(D\doteq KC^q N (2pq-1)^q(pq-1) b^{-q}\). Applying the logarithmic function to both sides of (36) and using iteratively the resulting inequality, we have

$$\begin{aligned} \ln C_j&\geqslant pq\ln C_{j-1} -(2q+1)j\ln (pq)+\ln D \\&\geqslant (pq)^2\ln C_{j-2} -(2q+1)\ln (pq)(j+(j-1)pq) +(1+pq)\ln D \\&\geqslant \cdots \geqslant (pq)^j\ln C_{0} -(2q+1)\ln (pq)\sum _{k=0}^{j-1} (j-k)(pq)^k +\ln D \sum _{k=0}^{j-1} (pq)^k. \end{aligned}$$

Using the identities

$$\begin{aligned} \sum _{k=0}^{j-1} (j-k)(pq)^k =\frac{1}{pq-1}\left( \frac{(pq)^{j+1}-pq}{pq-1}-j\right) , \qquad \sum _{k=0}^{j-1} (pq)^k= \frac{(pq)^j-1}{pq-1}, \end{aligned}$$
(37)

we obtain

$$\begin{aligned} \ln C_j&\geqslant (pq)^j\left( \ln C_{0} -\frac{(2q+1)pq\ln (pq)}{(pq-1)^2}+\frac{\ln D}{pq-1}\right) +\frac{(2q+1)pq\ln (pq)}{(pq-1)^2}\\&\qquad +\frac{(2q+1)\ln (pq)}{pq-1}j-\frac{\ln D}{pq-1}. \end{aligned}$$

Let us denote by \(j_0=j_0(n,b,p,q,R)\) the smallest nonnegative integer such that

$$\begin{aligned} j_0\geqslant \frac{\ln D}{(2q+1)\ln (pq)}-\frac{pq}{pq-1}. \end{aligned}$$

Then, for any \(j\geqslant j_0\)

$$\begin{aligned} \ln C_j&\geqslant (pq)^j\left( \ln C_{0} -\frac{(2q+1)pq\ln (pq)}{(pq-1)^2}+\frac{\ln D}{pq-1}\right) =(pq)^j\ln (E\varepsilon ), \end{aligned}$$
(38)

where \(E\doteq M(pq)^{-(2q+1)(pq)/(pq-1)^2}D^{1/(pq-1)}\). Combining (29), (34), (35) and (38), for \(j\geqslant j_0\) and \(z\geqslant LR\) we find

$$\begin{aligned} {\mathscr {V}}(R+z,z)&\geqslant \exp \left( (pq)^j\ln (E\varepsilon )\right) (R+z)^{-\frac{n-1}{2}((pq)^j-1)}(z-L R)^{\frac{(pq)^j-1}{pq-1}} \\&= \exp \left( (pq)^j\left( \ln (E\varepsilon )-\tfrac{n-1}{2}\ln (R+z)+\tfrac{1}{pq-1}\ln (z-LR)\right) \right) \\&\qquad \times (R+z)^{\frac{n-1}{2}}(z-L R)^{-\frac{1}{pq-1}}. \end{aligned}$$

Equivalently, for \(t\geqslant (L+1)R\) and \(j\geqslant j_0\) it holds

$$\begin{aligned} {\mathscr {V}}(t,t-R)&\geqslant \exp \left( (pq)^j\left( \ln (E\varepsilon )-\tfrac{n-1}{2}\ln t +\tfrac{1}{pq-1}\ln (t-(L+1)R)\right) \right) \\&\qquad \times t^{\frac{n-1}{2}}(t-(L+1) R)^{-\frac{1}{pq-1}}. \end{aligned}$$

For \(t\geqslant 2(L+1)R\) we can estimate \(\ln (t-(L+1)R)\geqslant \ln t -\ln 2\). Consequently, for \(t\geqslant 2(L+1)R\) and \(j\geqslant j_0\) we have

$$\begin{aligned} {\mathscr {V}}(t,t-R)&\geqslant \exp \left( (pq)^j\left( \ln (E\varepsilon )+(\tfrac{1}{pq-1}-\tfrac{n-1}{2})\ln t -\tfrac{1}{pq-1} \ln 2 \right) \right) \end{aligned}$$
(39)
$$\begin{aligned}&\qquad \times t^{\frac{n-1}{2}}(t-(L+1) R)^{-\frac{1}{pq-1}} \nonumber \\&=\exp \left( (pq)^j\left( \ln \left( E_1\varepsilon t^{\theta (n,p,q)}\right) \right) \right) t^{\frac{n-1}{2}}(t-(L+1) R)^{-\frac{1}{pq-1}}, \end{aligned}$$
(40)

where \(E_1\doteq 2^{-1/(pq-1)}E\) and \(\theta \) is defined in (4).
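
The passage from (39) to (40) is plain logarithm algebra; as a cross-check, the following sympy sketch verifies that the exponent appearing in (39) coincides with \(\ln \big (E_1\varepsilon t^{\theta (n,p,q)}\big )\).

```python
# Sketch (sympy): the exponent in (39) equals log(E_1 * eps * t^theta)
# with E_1 = 2^(-1/(pq-1)) * E and theta = 1/(pq-1) - (n-1)/2.
import sympy as sp

t, eps, E, n, r = sp.symbols('t eps E n r', positive=True)   # r stands for pq
theta = 1/(r - 1) - sp.Rational(1, 2)*(n - 1)
E1 = 2**(-1/(r - 1))*E

exponent_39 = sp.log(E*eps) + theta*sp.log(t) - sp.log(2)/(r - 1)
exponent_40 = sp.log(E1*eps*t**theta)
difference = (sp.expand_log(exponent_39, force=True)
              - sp.expand_log(exponent_40, force=True))
print(sp.simplify(difference))   # 0
```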

Let us fix \(\varepsilon _0=\varepsilon _0(n,p,q,b,R,v_1)>0\) sufficiently small so that

$$\begin{aligned} \varepsilon _0\leqslant E_1^{-1} (2(L+1)R)^{-\theta (n,p,q)}. \end{aligned}$$

Then, for any \(\varepsilon \in (0,\varepsilon _0]\) and any \(t> (E_1 \varepsilon )^{-\theta (n,p,q)^{-1}}\) we have

$$\begin{aligned} t\geqslant 2(L+1)R \quad \text{ and } \quad \ln \left( E_1\varepsilon t^{\theta (n,p,q)}\right) >0, \end{aligned}$$

so letting \(j\rightarrow \infty \) in (40) we see that \({\mathscr {V}}(t,t-R)\) is not finite.

Summarizing, we proved that \((u,v)\) blows up in finite time and we established the upper bound estimate in (6) for the subcritical case.

4.2 Critical case

In this section we study the blow-up result in the critical case \(\theta (n,p,q)=0\). Here it is more convenient to rewrite the iteration frame as follows

$$\begin{aligned} {\mathscr {U}}(R+z,z)&\geqslant C \int _{R}^{z} \mathrm {e}^{-\frac{b}{2}(z-y)} y^{-\frac{n-1}{2}(p-1)} \vert {\mathscr {V}}(R+y,y)\vert ^p \, \mathrm {d}y \, \quad \text{ for } \ z\geqslant R, \end{aligned}$$
(41)
$$\begin{aligned} {\mathscr {V}}(R+z,z)&\geqslant K \int _{R}^{z} y^{-\frac{n-1}{2}(q-1)} \vert {\mathscr {U}}(R+y,y)\vert ^q \, \mathrm {d}y \qquad \qquad \quad \, \text{ for } \ z\geqslant R. \end{aligned}$$
(42)

Note that, for the sake of simplicity, we kept the same notations for the multiplicative constants as in Sect. 4.1.

The main difference in comparison to the subcritical case consists in the choice of the parameters characterizing the slicing procedure. We introduce the sequence \(\{\Lambda _j\}_{j\in {\mathbb {N}}}\), where

$$\begin{aligned} \Lambda _j\doteq 1+\frac{4}{bR}\big (2-2^{-j}\big ) \quad \text{ for } \text{ any } \ j\in {\mathbb {N}}. \end{aligned}$$

The sequence \(\{\Lambda _j\}_{j\in {\mathbb {N}}}\) is strictly increasing and bounded and \(\Lambda _j \uparrow \Lambda \doteq 1+8/(bR)\) as \(j\rightarrow \infty \). We shall employ this sequence when applying the slicing procedure.
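
A quick numerical look at this sequence (with the illustrative values \(b=R=1\)): the gaps satisfy \(\frac{bR}{2}(\Lambda _{j+1}-\Lambda _j)=2^{-j}\), which is precisely what produces the factor \(2^{-(2j+1)}\) in the estimate carried out below.

```python
# Sketch: the slicing sequence Lambda_j = 1 + (4/(bR))(2 - 2^(-j)) for sample b, R.
import math

b, R = 1.0, 1.0                                     # illustrative values
Lam = [1.0 + (4.0/(b*R))*(2.0 - 2.0**(-j)) for j in range(9)]
print("Lambda_j:", [round(x, 4) for x in Lam], "-> limit", 1.0 + 8.0/(b*R))

for j in range(8):
    gap = 0.5*b*R*(Lam[j + 1] - Lam[j])             # equals 2^(-j)
    lower = 2.0**(-(2*j + 1))*(2**(j + 1) - 1)      # the quantity derived below
    print(j, gap, 1.0 - math.exp(-gap) >= lower)    # gap = 2^(-j), comparison True
```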

The next step is to prove the sequence of lower bound estimates

$$\begin{aligned} {\mathscr {V}}(R+z,z)\geqslant K_j \left( \ln \left( \frac{z}{\Lambda _j R}\right) \right) ^{\gamma _j} \qquad \text{ for } \ z\geqslant \Lambda _j R \ \ \text{ and } \text{ any } \ j\in {\mathbb {N}}, \end{aligned}$$
(43)

where \(\{K_j\}_{j\in {\mathbb {N}}}\) and \(\{\gamma _j\}_{j\in {\mathbb {N}}}\) are sequences of nonnegative real numbers to be determined iteratively throughout the proof.

We remark that (25) implies the validity of (43) for \(j=0\), provided that \(K_0=M\varepsilon \) and \(\gamma _0=0\). In order to establish (43) for any \(j\in {\mathbb {N}}\) it remains to demonstrate the inductive step. Let us assume that (43) is fulfilled for some \(j\in {\mathbb {N}}\), then, we have to prove that (43) is satisfied also for \(j+1\). Plugging (43) in (41), for \(z\geqslant \Lambda _{j+1} R\) we find

$$\begin{aligned} {\mathscr {U}}(R+z,z)&\geqslant C \int _{\Lambda _j R}^{z} \mathrm {e}^{-\frac{b}{2}(z-y)} y^{-\frac{n-1}{2}(p-1)} \vert {\mathscr {V}}(R+y,y)\vert ^p \, \mathrm {d}y \\&\geqslant C K_j^p \int _{\Lambda _j R}^{z} \mathrm {e}^{-\frac{b}{2}(z-y)} y^{-\frac{n-1}{2}(p-1)} \left( \ln \left( \frac{y}{\Lambda _j R}\right) \right) ^{\gamma _j p} \, \mathrm {d}y \\&\geqslant C K_j^p z^{-\frac{n-1}{2}(p-1)} \int _{\tfrac{\Lambda _j z}{\Lambda _{j+1}} }^{z} \mathrm {e}^{-\frac{b}{2}(z-y)} \left( \ln \left( \frac{y}{\Lambda _j R}\right) \right) ^{\gamma _j p} \, \mathrm {d}y \\&\geqslant C K_j^p z^{-\frac{n-1}{2}(p-1)} \left( \ln \left( \frac{z}{\Lambda _{j+1} R}\right) \right) ^{\gamma _j p} \int _{\tfrac{\Lambda _j z}{\Lambda _{j+1}} }^{z} \mathrm {e}^{-\frac{b}{2}(z-y)} \, \mathrm {d}y \\&=2 b^{-1}C K_j^p z^{-\frac{n-1}{2}(p-1)} \left( \ln \left( \frac{z}{\Lambda _{j+1} R}\right) \right) ^{\gamma _j p} \left( 1- \mathrm {e}^{-\frac{b}{2}(\Lambda _{j+1}-\Lambda _j)\tfrac{z}{\Lambda _{j+1}}}\right) , \end{aligned}$$

where in the third step we used \([\Lambda _j R,z]\supset \big [\frac{\Lambda _j z}{\Lambda _{j+1}},z\big ]\). Since \(\{\Lambda _j\}_{j\in {\mathbb {N}}}\) is an increasing sequence, for \(z\geqslant \Lambda _{j+1} R\) we may estimate

$$\begin{aligned} 1- \mathrm {e}^{-\frac{b}{2}(\Lambda _{j+1}-\Lambda _j)\tfrac{z}{\Lambda _{j+1}}}&\geqslant 1- \mathrm {e}^{-\frac{bR}{2}(\Lambda _{j+1}-\Lambda _j)} \\&\geqslant \frac{bR}{2}(\Lambda _{j+1}-\Lambda _j)\left( 1-\frac{bR}{4}(\Lambda _{j+1}-\Lambda _j)\right) \\&= 2^{-(2j+1)}(2^{j+1}-1)\geqslant 2^{-(2j+1)}, \end{aligned}$$

where in the second inequality we used the elementary inequality \(\mathrm {e}^{-s}\leqslant 1-s+\frac{s^2}{2}\) for any \(s\geqslant 0\). Thus, for \(z\geqslant \Lambda _{j+1} R\) we showed that

$$\begin{aligned} {\mathscr {U}}(R+z,z)&\geqslant b^{-1}C \, 2^{-2j} K_j^p z^{-\frac{n-1}{2}(p-1)} \left( \ln \left( \frac{z}{\Lambda _{j+1} R}\right) \right) ^{\gamma _j p} . \end{aligned}$$

Using the last lower bound estimate for \({\mathscr {U}}(R+z,z)\) in (42) and the critical condition \(\theta (n,p,q)=0\), for \(z\geqslant \Lambda _{j+1} R\) we get

$$\begin{aligned} {\mathscr {V}}(R+z,z)&\geqslant K \int _{\Lambda _{j+1} R}^{z} y^{-\frac{n-1}{2}(q-1)} \vert {\mathscr {U}}(R+y,y)\vert ^q \, \mathrm {d}y \\&\geqslant K b^{-q} C^q \, 2^{-2qj} K_j^{pq} \int _{\Lambda _{j+1} R}^{z} y^{-\frac{n-1}{2}(pq-1)} \left( \ln \left( \frac{y}{\Lambda _{j+1} R}\right) \right) ^{\gamma _j pq} \, \mathrm {d}y \\&= K b^{-q} C^q \, 2^{-2qj} K_j^{pq} \int _{\Lambda _{j+1} R}^{z} y^{-1} \left( \ln \left( \frac{y}{\Lambda _{j+1} R}\right) \right) ^{\gamma _j pq} \, \mathrm {d}y \\&= K b^{-q} C^q \, 2^{-2qj} K_j^{pq} (\gamma _j pq+1)^{-1} \left( \ln \left( \frac{z}{\Lambda _{j+1} R}\right) \right) ^{\gamma _j pq+1}, \end{aligned}$$

which is exactly (43) for \(j+1\), provided that we set

$$\begin{aligned} K_{j+1}&\doteq K b^{-q} C^q \, 2^{-2qj} K_j^{pq} (\gamma _j pq+1)^{-1}, \end{aligned}$$
(44)
$$\begin{aligned} \gamma _{j+1}&\doteq \gamma _j pq+1. \end{aligned}$$
(45)

By applying iteratively (45) and \(\gamma _0=0\), we obtain

$$\begin{aligned} \gamma _j= 1+pq\gamma _{j-1}= \cdots = (pq)^j\gamma _0+\sum _{k=0}^{j-1}(pq)^k = \tfrac{(pq)^j-1}{pq-1}. \end{aligned}$$
(46)

Next we determine a lower bound estimate for the constant \(K_j\). From the previous representation for \(\gamma _j\) it follows

$$\begin{aligned} K_{j}&= K b^{-q} C^q \, 2^{-2q(j-1)} K_{j-1}^{pq} (\gamma _{j-1} pq+1)^{-1} = K b^{-q} C^q \, 2^{-2q(j-1)} K_{j-1}^{pq} \gamma _{j}^{-1} \\&\geqslant 2^{2q} K b^{-q} C^q (pq-1) \, 2^{-2qj} (pq)^{-j} K_{j-1}^{pq} = {\tilde{D}} (2^{2q}pq)^{-j} K_{j-1}^{pq}, \end{aligned}$$

where \({\tilde{D}}\doteq 2^{2q} K b^{-q} C^q (pq-1)\). Applying the logarithmic function to both sides of the inequality \(K_{j}\geqslant {\tilde{D}} (2^{2q}pq)^{-j} K_{j-1}^{pq}\) and iterating the resulting inequality, we obtain

$$\begin{aligned} \ln K_j&\geqslant pq\ln K_{j-1} -j\ln (2^{2q}pq)+\ln {\tilde{D}} \\&\geqslant (pq)^2\ln K_{j-2} -\ln (2^{2q}pq)(j+(j-1)pq) +(1+pq)\ln {\tilde{D}} \\&\geqslant \cdots \geqslant (pq)^j\ln K_{0} -\ln (2^{2q}pq)\sum _{k=0}^{j-1} (j-k)(pq)^k +\ln {\tilde{D}} \sum _{k=0}^{j-1} (pq)^k\\&=(pq)^j \left( \ln (M\varepsilon ) -\frac{(pq)\ln (2^{2q}pq) }{(pq-1)^2}+\frac{\ln {\tilde{D}}}{pq-1}\right) +\frac{\ln (2^{2q} pq)}{pq-1}j \\&\qquad +\frac{(pq)\ln (2^{2q} pq)}{(pq-1)^2}-\frac{\ln {\tilde{D}}}{pq-1}, \end{aligned}$$

where in the last step we applied the identities in (37).

If we denote by \(j_1=j_1(n,b,p,q,R)\) the smallest nonnegative integer such that

$$\begin{aligned} j_1\geqslant \frac{\ln {\tilde{D}}}{\ln (2^{2q} pq)}-\frac{pq}{pq-1}, \end{aligned}$$

then, for any \(j\geqslant j_1\), we have

$$\begin{aligned} \ln K_j&\geqslant (pq)^j \left( \ln (M\varepsilon ) -\frac{(pq)\ln (2^{2q}pq) }{(pq-1)^2}+\frac{\ln {\tilde{D}}}{pq-1}\right) = (pq)^j \ln ({\tilde{E}}\varepsilon ), \end{aligned}$$
(47)

where \({\tilde{E}}\doteq M (2^{2q} pq)^{-(pq)/(pq-1)^2}{\tilde{D}}^{1/(pq-1)}\).

Combining (43), (46) and (47), for \(j\geqslant j_1\) and \(z\geqslant \Lambda R\) we find

$$\begin{aligned} {\mathscr {V}}(R+z,z)&\geqslant \exp \left( (pq)^j\ln ({\tilde{E}}\varepsilon )\right) \left( \ln \left( \frac{z}{\Lambda R}\right) \right) ^{\tfrac{(pq)^j-1}{pq-1}} \\&= \exp \left( (pq)^j\left( \ln \left( {\tilde{E}}\varepsilon \ln \left( \frac{z}{\Lambda R}\right) ^{\frac{1}{pq-1}}\right) \right) \right) \left( \ln \left( \frac{z}{\Lambda R}\right) \right) ^{-\tfrac{1}{pq-1}} \end{aligned}$$

Therefore, for \(t\geqslant (\Lambda +1)R\) and for any \(j\geqslant j_1\) we have

$$\begin{aligned} {\mathscr {V}}(t,t-R)&\geqslant \exp \left( (pq)^j\left( \ln \left( {\tilde{E}}\varepsilon \ln \left( \frac{t-R}{\Lambda R}\right) ^{\frac{1}{pq-1}}\right) \right) \right) \left( \ln \left( \frac{t-R}{\Lambda R}\right) \right) ^{-\tfrac{1}{pq-1}} \nonumber \\&\geqslant \exp \left( (pq)^j\left( \ln \left( {\tilde{E}}\varepsilon \ln \left( \frac{t}{2\Lambda R}\right) ^{\frac{1}{pq-1}}\right) \right) \right) \left( \ln \left( \frac{t-R}{\Lambda R}\right) \right) ^{-\tfrac{1}{pq-1}} . \end{aligned}$$
(48)

Let us fix \(\varepsilon _0=\varepsilon _0(n,b,p,q,R,v_1)>0\) sufficiently small so that \(\varepsilon _0^{-(pq-1)}\geqslant {\tilde{E}}^{pq-1}\ln \frac{\Lambda +1}{2\Lambda }\). Then, for any \(\varepsilon \in (0,\varepsilon _0]\) and any \(t> (2\Lambda R)\exp \big (({\tilde{E}}\varepsilon )^{-(pq-1)}\big )\) the following inequalities are satisfied

$$\begin{aligned} t\geqslant (\Lambda +1)R \quad \text{ and } \quad \ln \left( {\tilde{E}}\varepsilon \ln \left( \frac{t}{2\Lambda R}\right) ^{\frac{1}{pq-1}}\right) >0, \end{aligned}$$

thus, letting \(j\rightarrow \infty \) in (48), the lower bound for \({\mathscr {V}}(t,t-R)\) diverges. Consequently, \({\mathscr {V}}(t,t-R)\) cannot be finite. So, we proved that v blows up in finite time and we found, as a byproduct of the iteration procedure, the upper bound estimate

$$\begin{aligned} T(\varepsilon )\leqslant \exp \left( {\tilde{E}}_1 \varepsilon ^{-(pq-1)}\right) \end{aligned}$$

for the lifespan of the solution \((u,v)\) for any \(\varepsilon \in (0,\varepsilon _0]\), where \({\tilde{E}}_1>0\) is a suitable constant depending on \(n,b,p,q,R,v_1\).