1 Introduction

We consider the nonlinear differential system of Emden–Fowler type

$$\begin{aligned} \left\{ \begin{array}{l} (p(t){\varPhi }_\alpha (x^{\prime }))^{\prime }=\varphi (t){\varPhi }_\lambda (y),\\ (q(t){\varPhi }_\beta (y^{\prime }))^{\prime }=\psi (t){\varPhi }_\mu (x), \end{array}\right. \end{aligned}$$
(1)

where \({\varPhi }_\zeta (u)=|u|^\zeta \text{ sgn} \,\,u\) with \(\zeta >0\), the exponents \(\alpha ,\beta ,\lambda ,\mu \) are positive constants, and \(p, q,\varphi ,\psi \) are positive continuous functions defined on \([a,\infty ),\,a \ge 0\). Unless stated otherwise, we assume

$$\begin{aligned} p\in \mathcal{RV }(\gamma ),\ q\in \mathcal{RV }(\delta ),\ \varphi \in \mathcal{RV }(\sigma ),\ \psi \in \mathcal{RV }(\varrho ), \end{aligned}$$
(2)

that is, the coefficients \(p,q,\varphi ,\psi \) are regularly varying (at infinity) of indices \(\gamma ,\,\delta ,\,\sigma ,\,\varrho \), respectively, with \(\gamma , \delta , \sigma , \varrho \in \mathbb{R }\). The definition of regular variation will be recalled in the next section, together with the relevant properties of \(\mathcal{RV }\)-functions. Further, we suppose that system (1) is subhomogeneous (at \(\infty \)), that is,

$$\begin{aligned} \alpha \beta >\lambda \mu . \end{aligned}$$

In contrast to most related works, we do not impose, in general, any condition on the convergence or divergence of the integrals

$$\begin{aligned} P=\int \limits _a^\infty p^{-\frac{1}{\alpha }}(s)\,ds,\quad Q=\int \limits _a^\infty q^{-\frac{1}{\beta }}(s)\,ds, \end{aligned}$$
(3)

and we do not explicitly distinguish among particular cases. In fact, all possible cases (including the mixed ones) are covered by our results.

By a solution of (1), we mean a continuously differentiable pair \((x,y)\) on \((T,\infty ),\,T\ge a\), such that the quasiderivatives

$$\begin{aligned} x^{[1]}=p\, \varPhi _\alpha (x^{\prime }), \quad y^{[1]}=q \, \varPhi _\beta (y^{\prime }) \end{aligned}$$

are continuously differentiable in \((T,\infty )\) and satisfy (1). We study the exact asymptotic behavior of solutions to (1) whose components are eventually positive decreasing and tend to zero together with their quasiderivatives as the independent variable tends to infinity; such solutions are usually called strongly decreasing. In contrast to the case where at least one of the components or quasiderivatives tends to a nonzero real number, classical existence results do not provide any asymptotic formula for strongly decreasing solutions. The statement of our main result is given in Sect. 3; roughly speaking, it asserts the following:

If the coefficients of (1) are regularly varying functions and suitable conditions on \(\alpha , \beta , \lambda , \mu , \gamma , \delta , \sigma \) and \(\varrho \) are assumed, then (1) possesses strongly decreasing solutions. Further, each strongly decreasing solution \((x,y)\) is regularly varying of known index and satisfies the asymptotic formulae

$$\begin{aligned}&x(t)\sim C_1 \left(\frac{t^{\alpha +1}\varphi (t)}{p(t)}\right)^{\beta /(\alpha \beta -\lambda \mu )}\left(\frac{t^{\beta +1}\psi (t)}{q(t)} \right)^{\lambda /(\alpha \beta -\lambda \mu )},\nonumber \\&y(t)\sim C_2 \left(\frac{t^{\beta +1}\psi (t)}{q(t)}\right)^{\alpha /(\alpha \beta -\lambda \mu )} \left(\frac{t^{\alpha +1}\varphi (t)}{p(t)}\right)^{\mu /(\alpha \beta -\lambda \mu )} \end{aligned}$$
(4)

as \(t\rightarrow \infty ,\) where \(C_1,C_2\) are explicitly given positive constants, depending only on the exponents and indices.

The application of the theory of regularly varying functions to the study of the asymptotic behavior of nonoscillatory solutions plays a key role in deriving our main result. After the pioneering works by Marić and Tomić (see the monograph [20]), several authors worked in this field, providing important contributions to the understanding of the asymptotic expansion of special classes of nonoscillatory solutions of linear and nonlinear equations, especially of order two. We refer, for instance, to the recent works [9, 11–13, 17, 24] and the references therein. Our paper continues the study and extends the results of previous papers on nonoscillatory solutions of systems of the type (1). For instance, Tanigawa in [23] presented some existence results for decreasing solutions. Some of those results are generalized here, and additional information about the behavior of strongly decreasing solutions is provided. Further, we extend the asymptotic results of [21], since more general coefficients are considered here and more general existence conditions are established. Our results provide new observations even for the case of second-order scalar equations of Emden–Fowler type (cf. [10] and the very recent paper [17]). More detailed comparisons and further related works are mentioned in the third section. We remark that, if \((x,y)\) is a solution of (1), then \(x\) satisfies the fourth-order equation

$$\begin{aligned} \left\{ q(t){\varPhi }_\beta \left[\left(\frac{1}{\varphi ^{\frac{1}{\lambda }}(t)}{\varPhi }_{\frac{1}{\lambda }}\bigg (\big ( p(t){\varPhi }_\alpha (x^{\prime })\big )^{\prime }\bigg )\right)^{\prime }\right]\right\} ^{\prime }= \psi (t) {\varPhi }_\mu (x), \end{aligned}$$
(5)

and, vice versa, if \(x\) is a solution of (5), then \(\left(x,\varphi ^{-\frac{1}{\lambda }}(t){\varPhi }_{\frac{1}{\lambda }}\left(\left(p(t){\varPhi }_\alpha (x^{\prime })\right)^{\prime }\right)\right)\) is a solution of (1). A similar remark can be made for the component \(y\) of any solution \((x,y)\) of (1). Therefore, system (1) covers various nonlinear fourth-order equations, like, for example, \((\tilde{\varphi }(t)x^{\prime \prime })^{\prime \prime }=\psi (t){\varPhi }_\delta (x)\), which have been studied previously; see, for instance, [6, 14, 15]; see also [19] where the structure of positive solutions is analyzed under the opposite sign assumption on the potential. In particular, the recent paper [14] is closely related to our setting, and in Sect. 3, we will provide a precise comparison with those results. Of course, since the coefficients are of one sign, Eq. (5) (or system (1)) can be equivalently written as a suitable first-order nonlinear system of four equations, and the results can be formulated in such terms. But, in some situations, fourth-order equations viewed as coupled systems enable a better understanding of the structure of the solution space; in particular, we can use the similarity with some computations and results for scalar second-order equations, which have been widely studied.

Our results can also be applied to obtain nontrivial information about the behavior of positive radial solutions of systems of quasilinear partial differential equations of the form

$$\begin{aligned} \left\{ \begin{array}{l} \text{ div}\left(\Vert \nabla u\Vert ^{\alpha -1}\nabla u\right)={\bar{\varphi }}(\Vert z\Vert )|v|^{\lambda -1} v,\\ \text{ div}\left(\Vert \nabla v\Vert ^{\beta -1}\nabla v\right)={\bar{\psi }}(\Vert z\Vert )|u|^{\mu -1} u \end{array}\right. \end{aligned}$$

in exterior domains in \(\mathbb{R }^N,\,N\ge 2\); see Sect. 3.

The paper is organized as follows: in Sect. 2, we recall the main properties of \(\mathcal{RV }\) functions, and we state the existence theorem for strongly decreasing solutions of (1). The main result about existence of regularly varying solutions of (1) and about their asymptotic form is given in Sect. 3; there we also compare our results with the existing literature, and we show some possible applications. The proofs of all statements presented are given in the last section.

2 Preliminaries

Basic properties of regularly varying functions are here recalled. Further, we derive conditions guaranteeing the existence of strongly decreasing solutions and show important relations between system (1) and its reciprocal counterpart.

We make use of the following notation: \(f(t)\sim g(t)\) as \(t\rightarrow \infty \) means \(f(t)/g(t) \rightarrow 1\) as \(t \rightarrow \infty \).

A measurable function \(f:[a,\infty )\rightarrow (0,\infty )\) is called regularly varying (at infinity) of index \(\vartheta \) if

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{f(\kappa t)}{f(t)}=\kappa ^\vartheta \quad \text{ for} \text{ every}\,\kappa >0; \end{aligned}$$

we write \(f\in \mathcal{RV }(\vartheta )\). If \(\vartheta =0\), then \(f\) is called slowly varying; we write \(f\in \mathcal{SV }\). We recommend the monographs [1, 8] as very good sources of information on the theory of regular variation. The following properties of regularly varying functions find applications in our considerations.

Proposition 1

([1, 8])

  1. 1.

    \(f\in \mathcal{RV }(\vartheta )\) if and only if \(f(t)=t^\vartheta L(t)\), where \(L\in \mathcal{SV }\).

  2. 2.

    If \(L_1,\dots ,L_n\in \mathcal{SV }\), \(n\in \mathbb{N }\), and \(R(x_1,\dots ,x_n)\) is a rational function with positive coefficients, then \(R(L_1,\dots ,L_n)\in \mathcal{SV }\).

  3. 3.

    If \(L\in \mathcal{SV }\) and \(\vartheta >0\), then \(t^\vartheta L(t)\rightarrow \infty ,\) \(t^{-\vartheta }L(t)\rightarrow 0\) as \(t\rightarrow \infty \).

  4. 4.

    (The Karamata theorem; direct half). If \(L\in \mathcal{SV }\), then

    $$\begin{aligned} \int \limits _t^\infty s^\zeta L(s)\,ds\sim \frac{1}{-\zeta -1}t^{\zeta +1}L(t)\quad \text{ as}\,t\rightarrow \infty \end{aligned}$$
    (6)

    provided \(\zeta <-1\), and

    $$\begin{aligned} \int \limits _a^t s^\zeta L(s)\,ds\sim \frac{1}{\zeta +1}t^{\zeta +1}L(t)\quad \text{ as}\,t\rightarrow \infty \end{aligned}$$

    provided \(\zeta >-1.\) The integral \(\int _a^\infty L(s)/s\,ds\) may or may not converge; if it diverges, then \(\tilde{L}(t)=\int _a^t L(s)/s\,ds\) is a new slowly varying function such that \(L(t)/\tilde{L}(t)\rightarrow 0\) as \(t\rightarrow \infty \). (A numerical illustration of (6) is given right after this proposition.)

  5. 5.

    If \(f\in \mathcal{RV }(\vartheta )\) with \(\vartheta \le 0\) and \(f(t)=\int _t^\infty g(s)\,ds\) with \(g\) nonincreasing, then

    $$\begin{aligned} \frac{-tf^{\prime }(t)}{f(t)}= \frac{tg(t)}{f(t)}\rightarrow -\vartheta \quad \text{ as}\ t\rightarrow \infty . \end{aligned}$$
    (7)
  6. 6.

    (The representation theorem) It holds \(L\in \mathcal{SV }\) if and only if

    $$\begin{aligned} L(t)=c(t)\exp \left\{ \int \limits _a^t h(s)/s\, ds\right\} , \end{aligned}$$
    (8)

    \(t\ge a\), for some \(a>0\), where \(c,h\) are measurable functions and \(c(t)\rightarrow c\in (0,\infty )\), \(h(t)\rightarrow 0\) as \(t\rightarrow \infty \).
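As a quick numerical illustration of the direct half of the Karamata theorem, the following Python sketch compares both sides of (6) for the sample choice \(L(s)=\log s\) and \(\zeta =-3\) (these choices, as well as the sampled points, are ours and serve only as an example).

```python
import numpy as np
from scipy.integrate import quad

# Check (6): int_t^inf s^zeta L(s) ds  ~  t^(zeta+1) L(t) / (-zeta-1), here with L = log, zeta = -3.
zeta = -3.0
for t in [1e2, 1e4, 1e6]:
    exact, _ = quad(lambda s: s**zeta * np.log(s), t, np.inf)
    asym = t**(zeta + 1) * np.log(t) / (-zeta - 1)
    print(f"t = {t:.0e}:  ratio = {exact / asym:.4f}")
# The ratio tends to 1 (slowly, with an error of order 1/log t).
```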

If \(c(t)\equiv c\) in (8), then \(L\) is called normalized slowly varying; we write \(L\in \mathcal{NSV }\). A regularly varying function \(f(t)=t^\vartheta L(t)\) with \(L\in \mathcal{NSV }\) is called normalized regularly varying of index \(\vartheta \); we write \(f\in \mathcal{NRV }(\vartheta )\). From the above results, we can see that a regularly varying function of index \(\vartheta \) can be seen as a product of the power function \(t^\vartheta \) and a factor which varies “more slowly” (a slowly varying function \(L\)). The coefficients in (1) can therefore be regarded as power functions multiplied by any slowly varying function.

The class of regularly varying functions substantially extends the class of functions which are asymptotically equivalent to (a multiple of) a power function. Here are some examples of slowly varying functions: \(\prod _{i=1}^n(\log _i t)^{\mu _i}\), where \(\log _1 t=\log t\), \(\log _i t=\log \log _{i-1}t\) and \(\mu _i\in \mathbb{R }\); \(\exp \left\{ \prod _{i=1}^n(\log _i t)^{\nu _i}\right\} \), where \(0<\nu _i<1\); \(2+\sin (\log _2 t)\); \((\log \varGamma (t))/t\), where \(\varGamma \) is the classical gamma function. Note that \(2+\sin t,2+\sin (\log t)\not \in \mathcal{SV }\). For \(L\in \mathcal{SV }\), it may happen \(\liminf _{t\rightarrow \infty }L(t)=0\), \(\limsup _{t\rightarrow \infty }L(t)=\infty \) (i.e., \(L\) may exhibit “infinite oscillation”), an example being \(L(t)=\exp \left\{ (\log t)^{1/3}\cos ((\log t)^{1/3})\right\} \).
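Slow variation can also be observed directly from the defining limit. A minimal Python sketch (the test points and the value of \(\kappa \) are our choice) contrasts the slowly varying example \(2+\sin (\log _2 t)\) above with the non-example \(2+\sin (\log t)\):

```python
import math

def L_sv(t):       # 2 + sin(log_2 t) = 2 + sin(log log t), slowly varying
    return 2.0 + math.sin(math.log(math.log(t)))

def L_not_sv(t):   # 2 + sin(log t), not slowly varying
    return 2.0 + math.sin(math.log(t))

kappa = 5.0
for t in [1e4, 1e8, 1e16, 1e32]:
    print(f"t = {t:.0e}:  SV ratio = {L_sv(kappa*t)/L_sv(t):.5f},"
          f"  non-SV ratio = {L_not_sv(kappa*t)/L_not_sv(t):.5f}")
# The first ratio approaches 1 (index 0); the second keeps oscillating, since
# log(kappa*t) - log(t) = log(kappa) does not vanish as t grows.
```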

As we will show, in the framework of our theory, suitable transformations can enable us to consider even some systems with coefficients that are not regularly varying.

In the remaining part of this section, we analyze positive decreasing solutions of (1) and establish existence results. Here, we do not need to assume in general that the coefficients of (1) are regularly varying, that is, (2) is dropped for now.

Let \(\mathcal DS \) denote the set of all positive decreasing solutions of (1), that is, all solutions whose components are both eventually positive and decreasing. Note that, due to the sign conditions on the coefficients, any solution of (1) has necessarily both components eventually of one sign and monotone. If \(P=Q=\infty \), then \(\lim _{t\rightarrow \infty }x^{[1]}(t)=\lim _{t\rightarrow \infty }y^{[1]}(t)=0\) for any \((x,y)\in \mathcal DS \). Indeed, \(-x^{[1]}\) is eventually positive decreasing. If \(\lim _{t\rightarrow \infty }-x^{[1]}(t)=c>0\), then \(p(t)(-x^{\prime }(t))^\alpha \ge c\) or \(-x^{\prime }(t)\ge c^{\frac{1}{\alpha }}p^{-\frac{1}{\alpha }}(t)\), \(t\ge t_0\) with some \(t_0\ge a\). Integrating the latter inequality, we get \(x(t)\le x(t_0)-c^{\frac{1}{\alpha }}\int _{t_0}^t p^{-\frac{1}{\alpha }}(s)\,ds\rightarrow -\infty \) as \(t\rightarrow \infty \), a contradiction. Similarly, we prove \(\lim _{t\rightarrow \infty }y^{[1]}(t)=0\). However, in general, for positive decreasing solutions, the limits of the quasiderivatives need not be zero. Observe that, for any positive decreasing solution, both the components and their quasiderivatives tend to finite limits, which are nonnegative and nonpositive, respectively. Among all these solutions, we are interested in the so-called strongly decreasing solutions, whose set we denote by

$$\begin{aligned} \mathcal{SDS }=\left\{ (x,y)\in \mathcal{DS }:\lim _{t\rightarrow \infty }x(t)=\lim _{t\rightarrow \infty }y(t) =\lim _{t\rightarrow \infty }x^{[1]}(t)=\lim _{t\rightarrow \infty }y^{[1]}(t)=0\right\} . \end{aligned}$$

The integral expressions below play important roles in existence results for strongly decreasing solutions:

$$\begin{aligned} I_1(t)&:= \int \limits _t^\infty \left(\frac{1}{p(u)}\int \limits _u^\infty \varphi (s)\left(\int \limits _s^\infty \left(\frac{1}{q(r)}\int \limits _r^\infty \psi (\tau )\,d\tau \right)^{\frac{1}{\beta }}\!\!\! dr\right)^\lambda \!\!\! ds\right)^{\frac{1}{\alpha }}\!\!\! du,\\ I_2(t)&:= \int \limits _t^\infty \left(\frac{1}{q(u)}\int \limits _u^\infty \psi (s)\left(\int \limits _s^\infty \left(\frac{1}{p(r)}\int \limits _r^\infty \varphi (\tau )\,d\tau \right)^{\frac{1}{\alpha }} \!\!\! dr\right)^\mu \!\!\! ds\right)^{\frac{1}{\beta }}\!\!\! du,\\ I_3(t)&:= \int \limits _t^\infty \varphi (u)\left(\int \limits _u^\infty \left(\frac{1}{q(s)}\int \limits _s^\infty \psi (r) \left(\int \limits _r^\infty \frac{1}{p^{\frac{1}{\alpha }}(\tau )}\,d\tau \right)^\mu \!\!\! dr\right)^{\frac{1}{\beta }} \!\!\! ds\right)^\lambda \!\!\! du,\\ I_4(t)&:= \int \limits _t^\infty \psi (u)\left(\int \limits _u^\infty \left(\frac{1}{p(s)}\int \limits _s^\infty \varphi (r) \left(\int \limits _r^\infty \frac{1}{q^{\frac{1}{\beta }}(\tau )}\,d\tau \right)^\lambda \!\!\! dr\right)^{\frac{1}{\alpha }}\!\!\! ds\right)^\mu \!\!\! du. \end{aligned}$$

The following result holds. Its proof is given in the last section.

Theorem 1

If \(I_i(a)<\infty \) for at least one index \(i \in \{1,2,3,4\}\), then \(\mathcal{SDS } \ne \emptyset \).

This theorem improves some results in [23]. Indeed, Tanigawa in [23] proved, under the condition \(P=Q=\infty \), that \(\mathcal{SDS }\ne \emptyset \) provided

$$\begin{aligned} \int \limits _a^\infty \left(\frac{1}{p(t)}\int \limits _t^\infty \varphi (s)\,ds\right)^{\frac{1}{\alpha }}dt<\infty , \quad \int \limits _a^\infty \left(\frac{1}{q(t)}\int \limits _t^\infty \psi (s)\,ds\right)^{\frac{1}{\beta }}dt<\infty . \end{aligned}$$
(9)

It is easy to see that (9) implies \(I_1(a)<\infty \). Similarly, (9) implies \(I_2(a)<\infty \), but none of the opposite implications holds. Note that, as shown in [23], (9) is sufficient and necessary for the existence of \((x,y)\in \mathcal DS \) with \(\lim _{t\rightarrow \infty }x(t)\in (0,\infty )\), \(\lim _{t\rightarrow \infty }y(t)\in (0,\infty )\) provided \(P=Q=\infty \). Further, in [23], it was proved that \(\mathcal{SDS }\) is nonempty provided

$$\begin{aligned}&\int \limits _a^\infty \!\!\!\varphi (t)\left(\int \limits _t^\infty \!\!\!\left(\frac{1}{q(s)}\right)^{\frac{1}{\beta }} \!\!\!ds \right)^\lambda \!\!\! dt<\infty ,\nonumber \\&\int \limits _a^\infty \!\!\! \psi (t)\left(\int \limits _t^\infty \!\!\! \left(\frac{1}{p(s)}\right)^{\frac{1}{\alpha }} \!\!\!ds \right)^\mu \!\!\! dt<\infty , \end{aligned}$$
(10)

where, of course, \(P<\infty ,Q<\infty \) need to be assumed. Again, it is easy to see that (10) implies \(I_3(a)<\infty \), and, similarly, (10) implies \(I_4(a)<\infty \), but the opposite implications do not hold. As shown in [23], (10) is sufficient and necessary for the existence of \((x,y)\in \mathcal DS \) with \(\lim _{t\rightarrow \infty }x(t)/\int _t^\infty p^{-\frac{1}{\alpha }}(s)\,ds\in (0,\infty )\), \(\lim _{t\rightarrow \infty }y(t)/\int _t^\infty q^{-\frac{1}{\beta }}(s)\,ds\in (0,\infty )\) provided \(P<\infty ,Q<\infty \). Observe that for \(I_3(a)<\infty \) or \(I_4(a)<\infty \) to be fulfilled, we do not necessarily need both \(P\) and \(Q\) to converge. A closer examination of the proof which is given in the last section shows how the existence results involving \(I_3\) and \(I_4\) simply follow from the ones involving \(I_1\) and \(I_2\), respectively, by using the reciprocity principle for (1); this principle will also enable us to extend existence and asymptotic results to new situations.

Here is a brief description of the reciprocity principle. Let \((x,y)\in \mathcal DS \). Set \(u=-x^{[1]}\), \(v=-y^{[1]}\). Then, \((u,v)\) is an eventually positive decreasing solution of the reciprocal system

$$\begin{aligned} \left\{ \begin{array}{l} \left(\frac{1}{\varphi ^{\frac{1}{\lambda }}(t)}{\varPhi }_{\frac{1}{\lambda }}(u^{\prime })\right)^{\prime } =\frac{1}{q^{\frac{1}{\beta }}(t)}{\varPhi }_{\frac{1}{\beta }}(v),\\ \left(\frac{1}{\psi ^{\frac{1}{\mu }}(t)}{\varPhi }_{\frac{1}{\mu }}(v^{\prime })\right)^{\prime } =\frac{1}{p^{\frac{1}{\alpha }}(t)}{\varPhi }_{\frac{1}{\alpha }}(u). \end{array}\right. \end{aligned}$$
(11)

Observe that (11) has the same structure as (1) and is subhomogeneous. Indeed, \(1/(\lambda \mu )>1/(\alpha \beta )\). Moreover, the quasiderivatives of \(u\) and \(v\), that is, \(\varphi ^{-\frac{1}{\lambda }}{\varPhi }_{\frac{1}{\lambda }}(u^{\prime })\) and \(\psi ^{-\frac{1}{\mu }}{\varPhi }_{\frac{1}{\mu }}(v^{\prime })\), are equal to \(-y\) and \(-x\), respectively. Conversely, if \((u,v)\) is an eventually positive decreasing solution of (11) and we set \(x=-\psi ^{-\frac{1}{\mu }}{\varPhi }_{\frac{1}{\mu }}(v^{\prime })\) and \(y=-\varphi ^{-\frac{1}{\lambda }}{\varPhi }_{\frac{1}{\lambda }}(u^{\prime })\), then \((x,y)\in \mathcal DS \) and \(x^{[1]}=-u,y^{[1]}=-v\). Hence, with the use of these relations, it holds

$$\begin{aligned} \begin{array}{c} (x,y)\ \text{ is a strongly decreasing solution of}\ (1) \\ \Updownarrow \\ (u,v)\ \text{ is a strongly decreasing solution of}\ (11). \end{array} \end{aligned}$$
(12)

It is easy to see that the roles which are played by the integrals \(I_1\) and \(I_2\) for system (1) are played by the integrals \(I_3\) and \(I_4\), respectively, for system (11).
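For the reader's convenience, we indicate the computation behind the first equation in (11): if \((x,y)\in \mathcal DS \), then \(u^{\prime }=-\big (p{\varPhi }_\alpha (x^{\prime })\big )^{\prime }=-\varphi {\varPhi }_\lambda (y)=-\varphi y^\lambda <0\), hence

$$\begin{aligned} \frac{1}{\varphi ^{\frac{1}{\lambda }}}{\varPhi }_{\frac{1}{\lambda }}(u^{\prime })=-\varphi ^{-\frac{1}{\lambda }}\big (\varphi y^\lambda \big )^{\frac{1}{\lambda }}=-y \quad \text{ and}\quad \left(\frac{1}{\varphi ^{\frac{1}{\lambda }}}{\varPhi }_{\frac{1}{\lambda }}(u^{\prime })\right)^{\prime }=-y^{\prime } =\frac{1}{q^{\frac{1}{\beta }}}{\varPhi }_{\frac{1}{\beta }}(v), \end{aligned}$$

where the last equality uses \(v=-y^{[1]}=q(-y^{\prime })^\beta \); the second equation in (11) is verified analogously.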

3 Main results

In this central section, we show that strongly decreasing solutions of (1) are regularly varying provided the coefficients are regularly varying and the indices and powers in nonlinearities satisfy certain assumptions. We also derive an asymptotic formula and give several comments, applications, and comparisons with existing results. From now on, we always assume (2).

For a function \(f\in \mathcal{RV }(\vartheta )\) we denote

$$\begin{aligned} L_f(t)=f(t)/t^\vartheta . \end{aligned}$$

We set

$$\begin{aligned} \varLambda =\frac{1}{\alpha \beta -\lambda \mu } \end{aligned}$$

and

$$\begin{aligned} \nu&= \varLambda \big (\beta (\alpha -\gamma +1+\sigma )+\lambda (\beta -\delta +1+\varrho )\big ),\nonumber \\ \omega&= \varLambda \big (\alpha (\beta -\delta +1+\varrho )+\mu (\alpha -\gamma +1+\sigma )\big ). \end{aligned}$$
(13)

Further, we denote

$$\begin{aligned} \nu ^{[1]}=(\nu -1)\alpha +\gamma ,\quad \quad \omega ^{[1]}=(\omega -1)\beta +\delta . \end{aligned}$$
(14)

Theorem 2

Assume

$$\begin{aligned} \nu <0,\,\omega <0,\,\nu ^{[1]}<0,\,\omega ^{[1]}<0. \end{aligned}$$
(15)

Then \(\mathcal{SDS }\ne \emptyset \). Further, for every \((x,y)\in \mathcal{SDS }\), there hold \((x,y)\in \mathcal{RV }(\nu )\times \mathcal{RV }(\omega )\) and

$$\begin{aligned} x(t)\sim \left(K_1K_2^{\frac{\lambda }{\alpha }}\right)^{\alpha \beta \varLambda }t^\nu L_1(t),\quad y(t)\sim \left(K_2K_1^{\frac{\mu }{\beta }}\right)^{\alpha \beta \varLambda }t^\omega L_2(t) \end{aligned}$$
(16)

as \(t\rightarrow \infty \), where

$$\begin{aligned} K_1=\frac{-1}{\nu \left(-\nu ^{[1]}\right)^{\frac{1}{\alpha }}},\quad K_2=\frac{-1}{\omega \left(-\omega ^{[1]}\right)^{\frac{1}{\beta }}}, \end{aligned}$$
(17)

and

$$\begin{aligned} L_1=\left(\frac{L_\varphi ^\beta L_\psi ^\lambda }{L_p^\beta L_q^\lambda }\right)^\varLambda \in \mathcal{SV },\quad L_2=\left(\frac{L_\varphi ^\mu L_\psi ^\alpha }{L_p^\mu L_q^\alpha }\right)^\varLambda \in \mathcal{SV }. \end{aligned}$$

Remark 1

  1. (i)

    Theorem 2 has a partial converse, which reads as follows. If \((x,y)\in \mathcal{SDS }\cap (\mathcal{RV }(\tau )\times \mathcal{RV }(\zeta )),\) then \(\tau =\nu ,\zeta =\omega ,\,\nu \le 0,\omega \le 0,\nu ^{[1]}\le 0,\omega ^{[1]}\le 0\). If the assumption (2) on regular variation of the coefficients is strengthened to the asymptotic equivalences \(p(t)\sim a_1t^\gamma \), \(q(t)\sim a_2t^\delta \), \(\varphi (t)\sim a_3t^\sigma \), \(\psi (t)\sim a_4t^\varrho \) as \(t\rightarrow \infty \), \(a_1,a_2,a_3,a_4\in (0,\infty )\), then the inequalities for \(\nu ,\omega ,\nu ^{[1]},\omega ^{[1]}\) are strict.

  2. (ii)

    Under the assumptions of Theorem 2, the quasiderivatives of a strongly decreasing solution \((x,y)\) of (1) satisfy

    $$\begin{aligned} -x^{[1]}(t)&\sim&\frac{\left(K_2K_1^{\frac{\mu }{\beta }}\right)^{\alpha \beta \varLambda \lambda }}{-\nu ^{[1]}} t^{\nu ^{[1]}}L_\varphi (t)L_2^\lambda (t)\in \mathcal{RV }\left(\nu ^{[1]}\right)\\ -y^{[1]}(t)&\sim&\frac{\left(K_1K_2^{\frac{\lambda }{\alpha }}\right)^{\alpha \beta \varLambda \mu }}{-\omega ^{[1]}} t^{\omega ^{[1]}}L_\psi (t)L_1^\mu (t)\in \mathcal{RV }\left(\omega ^{[1]}\right). \end{aligned}$$
  3. (iii)

    The asymptotic formula (16) can be alternatively written in terms of the coefficients, see (4), where

    $$\begin{aligned} C_1=(K_1K_2^{\lambda /\alpha })^{\alpha \beta /(\alpha \beta -\lambda \mu )}, \quad C_2=(K_2K_1^{\mu /\beta })^{\alpha \beta /(\alpha \beta -\lambda \mu )}. \end{aligned}$$
  4. (iv)

    If the components of a (decreasing) regularly varying solution to (1) are eventually convex, then they are both normalized regularly varying. Note that the eventual convexity of decreasing solutions of (1) can be guaranteed, for example, by \(p(t)=q(t)=1\).

  5. (v)

    The assumptions of Theorem 2 can be replaced by any of the following inequalities

    $$\begin{aligned} \varrho +1<\min \left\{ 0,\delta -\beta ,\delta -\beta -\frac{\beta }{\lambda }(\sigma +1),\delta -\beta -\frac{\beta }{\lambda }(\sigma +1+\alpha -\gamma )\right\} \end{aligned}$$
    (18)

    or

    $$\begin{aligned} \sigma +1<\min \left\{ 0,\gamma -\alpha ,\gamma -\alpha -\frac{\alpha }{\mu }(\varrho +1),\gamma -\alpha -\frac{\alpha }{\mu }(\varrho +1+\beta -\delta )\right\} \end{aligned}$$
    (19)

    or

    $$\begin{aligned} \alpha -\gamma <\min \left\{ 0,-\frac{\alpha }{\mu }(\varrho +1),-\frac{\alpha }{\mu }(\varrho +1+\beta -\delta ),-\frac{\alpha }{\mu }(\varrho +1+\beta -\delta )- \frac{\alpha \beta }{\lambda \mu }(\sigma +1)\right\} \end{aligned}$$
    (20)

    or

    $$\begin{aligned} \beta -\delta <\min \left\{ 0,-\frac{\beta }{\lambda }(\sigma +1), -\frac{\beta }{\lambda }(\sigma +1+\alpha -\gamma ), -\frac{\beta }{\lambda }(\sigma +1+\alpha -\gamma ) - \frac{\alpha \beta }{\lambda \mu }(\varrho +1)\right\} , \end{aligned}$$
    (21)

    and the statement remains valid. Moreover, the following equivalence among the sufficient conditions holds:

    $$\begin{aligned} (15)\,\,\Leftrightarrow \,\,(18)\,\,\text{ or}\,\,(19) \,\,\text{ or}\,\,(20)\,\,\text{ or}\,\,(21). \end{aligned}$$
    (22)

As an example of a typical setting, assume \(P=Q=\infty \). Then, necessarily \(\alpha \ge \gamma \) and \(\beta \ge \delta \) (indeed, \(p^{-\frac{1}{\alpha }}\in \mathcal{RV }(-\gamma /\alpha )\) and \(q^{-\frac{1}{\beta }}\in \mathcal{RV }(-\delta /\beta )\), so the integrals in (3) can diverge only if \(-\gamma /\alpha \ge -1\) and \(-\delta /\beta \ge -1\)); the sufficient condition (15) reduces to

$$\begin{aligned} \nu <0,\,\omega <0. \end{aligned}$$
(23)

Indeed, \(\nu ^{[1]}=\nu \alpha - \alpha +\gamma \le \nu \alpha <0\). Similarly, we obtain \(\omega ^{[1]}<0\).

Another example of a typical situation is when \(\int _a^\infty \varphi (t)\,dt=\int _a^\infty \psi (t)\,dt=\infty \) is assumed. This condition implies \(\sigma +1 \ge 0, \, \varrho +1 \ge 0\), and (15) reduces to \(\nu ^{[1]}<0, \omega ^{[1]}<0\); indeed, by the identities (44) below, \(\nu ^{[1]}=\lambda \omega +\sigma +1\) and \(\omega ^{[1]}=\mu \nu +\varrho +1\), so \(\nu ^{[1]}<0\) together with \(\sigma +1\ge 0\) forces \(\omega <0\), while \(\omega ^{[1]}<0\) together with \(\varrho +1\ge 0\) forces \(\nu <0\). We can also quite easily observe how the “mixed” cases, for instance, \(\int _a^\infty p^{-\frac{1}{\alpha }}(s)\,ds=\infty , \int _a^\infty q^{-\frac{1}{\beta }}(s)\,ds<\infty ,\) or \(\int _a^\infty \varphi (t)\,dt=\infty , \int _a^\infty \psi (t)\,dt<\infty \), can be covered by our results as well.

Our results can also be applied in some special situations where the coefficients of (1) are not regularly varying, by means of a change of the independent variable. Let \(s=\zeta (t)\), where \(\zeta \) is a differentiable function such that \(\zeta ^{\prime }(t)\ne 0\) on \([a,\infty )\). Set \((w,z)(s)=(x,y)(\zeta ^{-1}(s)),\,\zeta ^{-1}\) being the inverse of \(\zeta \). Since \(d/dt=\zeta ^{\prime }(t)d/ds\), system (1) is transformed into the system

$$\begin{aligned} \left\{ \begin{array}{l} \frac{d}{ds}\left({\hat{p}}(s){\varPhi }_\alpha \left(\frac{dw}{ds}\right)\right) ={\hat{\varphi }}(s){\varPhi }_\lambda (z),\\ [2mm] \frac{d}{ds}\left({\hat{q}}(s){\varPhi }_\beta \left(\frac{dz}{ds}\right)\right) ={\hat{\psi }}(s){\varPhi }_\mu (w), \end{array}\right. \end{aligned}$$
(24)

where

$$\begin{aligned} {\hat{p}}&:= (p\circ \zeta ^{-1})\cdot {\varPhi }_\alpha (\zeta ^{\prime }\circ \zeta ^{-1}),\quad {\hat{q}}:=(q\circ \zeta ^{-1})\cdot {\varPhi }_\beta (\zeta ^{\prime }\circ \zeta ^{-1}),\\ {\hat{\varphi }}&:= \frac{\varphi \circ \zeta ^{-1}}{\zeta ^{\prime }\circ \zeta ^{-1}},\ \ {\hat{\psi }}:=\frac{\psi \circ \zeta ^{-1}}{\zeta ^{\prime }\circ \zeta ^{-1}}. \end{aligned}$$

Clearly, (24) is of the form (1) and is subhomogeneous. If \(\zeta \) is unbounded with \(\zeta ^{\prime }>0\) and such that \({\hat{p}},{\hat{q}},{\hat{\varphi }},{\hat{\psi }}\in \bigcup _{\vartheta \in \mathbb{R }}\mathcal{RV }(\vartheta )\), then our results can be applied to (24). To illustrate a possible application, take, for example,

$$\begin{aligned} p(t)=e^{\gamma t}h_1(t),\,q(t)=e^{\delta t}h_2(t),\,\varphi (t)=e^{\sigma t}h_3(t),\ \psi (t)=e^{\varrho t}h_4(t), \end{aligned}$$

where \(\gamma ,\delta ,\sigma ,\varrho \in \mathbb{R }\) and \(h_i\in \bigcup _{\vartheta \in \mathbb{R }}\mathcal{RV }(\vartheta ),\,i=1,2,3,4\). In such a case, we can set \(\zeta (t)=e^t\). Thus, \(t=\ln s\) and \([a,\infty )\) is transformed into another right half-line. We then get

$$\begin{aligned} {\hat{p}}(s)=s^{\gamma +\alpha }H_1(s)\in \mathcal{RV }(\gamma +\alpha ), \quad \text{ where}\,\,H_1:=h_1\circ \ln , \end{aligned}$$

since \({\hat{p}}(s)=p(\ln s)\,{\varPhi }_\alpha \big (\zeta ^{\prime }(\ln s)\big )=s^{\gamma }h_1(\ln s)\cdot s^{\alpha }\) and \(H_1\in \mathcal{RV }(\gamma _1\cdot 0)=\mathcal{RV }(0)=\mathcal{SV },\,\gamma _1\) being the index of regular variation of \(h_1\). Similarly,

$$\begin{aligned} {\hat{q}}\in \mathcal{RV }(\delta +\beta ),\,\, {\hat{\varphi }}\in \mathcal{RV }(\sigma -1),\,\, {\hat{\psi }}\in \mathcal{RV }(\varrho -1). \end{aligned}$$

Consequently, Theorem 2 can be applied directly to system (24) and thereby to the original system through the transformation.

Our results also give useful information about the asymptotic form of radial solutions to the partial differential system

$$\begin{aligned} \left\{ \begin{array}{l} \text{ div}\left(\Vert \nabla u\Vert ^{\alpha -1}\nabla u\right)={\bar{\varphi }}(\Vert z\Vert )|v|^{\lambda -1} v,\\ \text{ div}\left(\Vert \nabla v\Vert ^{\beta -1}\nabla v\right)={\bar{\psi }}(\Vert z\Vert )|u|^{\mu -1} u \end{array}\right. \end{aligned}$$
(25)

in an exterior domain in \(\mathbb{R }^N,\,N\ge 2\), where \({\bar{\varphi }}(t)=t^{\bar{\sigma }}L_{\bar{\varphi }}(t) \in \mathcal{RV }(\bar{\sigma })\), \(\bar{\psi }(t)=t^{\bar{\varrho }}L_{\bar{\psi }}(t)\in \mathcal{RV }(\bar{\varrho })\). If we assume

$$\begin{aligned} \bar{\nu }&:= \varLambda (\beta (\alpha +1+\bar{\sigma })+\lambda (\beta +1+\bar{\varrho }))<0,\\ \bar{\omega }&:= \varLambda (\alpha (\beta +1+\bar{\varrho })+\mu (\alpha +1+\bar{\sigma }))<0,\\ N&< \min \{1-(\bar{\nu }-1)\alpha , 1-(\bar{\omega }-1)\beta \}, \end{aligned}$$

then the existence of a positive (strongly) decreasing radial solution of (25) is guaranteed, and any such solution \((u,v)\) satisfies

$$\begin{aligned}&\lim _{\Vert z\Vert \rightarrow \infty }\frac{u(z)}{\Vert z\Vert ^{\bar{\nu }} L_{\bar{\varphi }}^{\beta {\varLambda }}(\Vert z\Vert )L_{\bar{\psi }}^{\lambda {\varLambda }}(\Vert z\Vert )} =\left(\bar{K}_1\bar{K}_2^{\frac{\lambda }{\alpha }}\right)^{\alpha \beta \varLambda },\\&\lim _{\Vert z\Vert \rightarrow \infty }\frac{v(z)}{\Vert z\Vert ^{\bar{\omega }} L_{\bar{\varphi }}^{\mu \varLambda }(\Vert z\Vert )L_{\bar{\psi }}^{\alpha \varLambda }(\Vert z\Vert )} =\left(\bar{K}_2\bar{K}_1^{\frac{\mu }{\beta }}\right)^{\alpha \beta \varLambda }, \end{aligned}$$

where

$$\begin{aligned} \bar{K}_1 =-\left({\bar{\nu }\left((1-\bar{\nu })\alpha -N+1\right)^{\frac{1}{\alpha }}}\right)^{-1}\!\!\!\!, \quad \bar{K}_2=-\left({\bar{\omega }\left((1-\bar{\omega })\beta -N+1\right)^{\frac{1}{\beta }}}\right)^{-1}\!\!\!\!. \end{aligned}$$

Indeed, a radial function \((u(z),v(z))\) is a solution of (25) in \(\varSigma _a=\{z\in \mathbb{R }^N:\Vert z\Vert \ge a\}\) if and only if \((x(t), y(t)),\,t=\Vert z\Vert \), given by \((x(\Vert z\Vert ),y(\Vert z\Vert ))=(u(z),v(z))\), satisfies the ordinary differential system

$$\begin{aligned} \left\{ \begin{array}{l} \left(t^{N-1}{\varPhi }_\alpha (x^{\prime })\right)^{\prime }=t^{N-1}\bar{\varphi }(t){\varPhi }_\lambda (y),\\ \left(t^{N-1}{\varPhi }_\beta (y^{\prime })\right)^{\prime }=t^{N-1}\bar{\psi }(t){\varPhi }_\mu (x), \end{array}\right. \end{aligned}$$
(26)

\(t\ge a\). System (26) has the same structure as (1), with \(p(t)=q(t)=t^{N-1},\,\varphi (t)=t^{N-1}\bar{\varphi }(t),\,\psi (t)=t^{N-1}\bar{\psi }(t)\). Thus \(\gamma =\delta =N-1,\,L_p(t)=L_q(t)=1,\,\sigma =N-1+\bar{\sigma }, \varrho =N-1+\bar{\varrho },\,L_\varphi (t)=L_{\bar{\varphi }}(t),\,L_\psi (t)=L_{\bar{\psi }}(t)\). Applying Theorem 2 to system (26) and going back to the original variables, we get the result.
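The reduction rests on the standard identity for radial functions: writing \(u(z)=x(t)\) with \(t=\Vert z\Vert \), one has \(\nabla u=x^{\prime }(t)\,z/t\) and \(\Vert \nabla u\Vert =|x^{\prime }(t)|\), so that

$$\begin{aligned} \text{ div}\left(\Vert \nabla u\Vert ^{\alpha -1}\nabla u\right) =t^{1-N}\left(t^{N-1}{\varPhi }_\alpha (x^{\prime }(t))\right)^{\prime }, \end{aligned}$$

and multiplying both equations of (25) by \(t^{N-1}\) yields (26).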

Note that if Theorem 2 were available only for system (1) with \(p(t)=q(t)=1\), then, after a suitable transformation of the independent variable, only partial differential systems of the form (25) satisfying the conditions \(\alpha =\beta \) and \(\alpha +1>N\) could be treated. The arbitrariness of \(p,q\) and of the convergence or divergence of the integrals in (3) enables us to drop these restrictions. Moreover, our general setting allows us to consider even more general partial differential systems where the leading coefficients are formed by elliptic matrices of certain special forms.

For some other information concerning related partial differential systems, see, for example, [4, 5] and the references in [5, 23]. Recall that systems similar to (25) are sometimes called systems of Lane–Emden type in the literature.

In the remaining part of this section, we apply Theorem 2 to some particular cases of (1) and compare our results with existing literature. Consider first the widely studied fourth-order equation

$$\begin{aligned} x^{\prime \prime \prime \prime }=\psi (t){\varPhi }_\mu (x). \end{aligned}$$
(27)

Equations of this form have been studied, in a somewhat related manner, for instance, in [2, 6]. Practically under the same setting as in this paper, Eq. (27) was analyzed in [14]; the case with the opposite sign assumption on \(\psi \) is discussed in [15]. The observations mentioned below can easily be extended to more general fourth-order equations (like (5)), but for comparison purposes, we take the form (27), which appears quite frequently in the literature. We assume \(\psi \in \mathcal{RV }(\varrho )\) and \(\mu >0\). This equation is equivalent to system (1), where we set \(\alpha =\beta =\lambda =1\) and \(p(t)=q(t)=\varphi (t)=1\). Then, \(\gamma =\delta =\sigma =0,\,L_p(t)=L_q(t)=L_\varphi (t)=1\), and \(\varLambda =1/(1-\mu )\). The subhomogeneity assumption reads as \(\mu <1\).

$$\begin{aligned} \nu =\frac{\varrho +4}{1-\mu },\quad \omega =\frac{\varrho +2+2\mu }{1-\mu },\quad \nu ^{[1]}=\frac{\varrho +3+\mu }{1-\mu },\quad \omega ^{[1]}=\frac{\varrho +1+3\mu }{1-\mu }. \end{aligned}$$

A strongly decreasing solution \(x\) of (27) is such that

$$\begin{aligned} \lim _{t\rightarrow \infty }x(t)=\lim _{t\rightarrow \infty }x^{\prime }(t)=\lim _{t\rightarrow \infty }x^{\prime \prime }(t)=\lim _{t\rightarrow \infty }x^{\prime \prime \prime }(t)=0. \end{aligned}$$

It is easy to see that, in order for (15) to be fulfilled, it is sufficient to take \(\varrho <-4\). Alternatively, we can check how (18) is verified (this is the only one among conditions (18), (19), (20), (21) that can be satisfied in this situation). Thus, under the assumptions \(0<\mu <1\) and \(\varrho <-4\), Theorem 2 assures that (27) possesses a strongly decreasing solution, and for any such solution \(x\), it holds

$$\begin{aligned} x(t)\sim \left(\frac{(1-\mu )^4t^{\varrho +4}L_\psi (t)}{\prod _{i=1}^4[\varrho +i+(4-i)\mu ]}\right)^{\frac{1}{1-\mu }} \end{aligned}$$

as \(t\rightarrow \infty \). This result extends [14, Theorem 8], since it gives a precise asymptotic formula for \(\mathcal{SDS }\) solutions, and it holds for any \(\mathcal{SDS }\) solution of (27). Indeed, under the same assumptions, it is proved in [14, Theorem 8] that (27) possesses a solution \(x\) satisfying

$$\begin{aligned} m\left(\frac{(1-\mu )^4t^{\varrho +4}L_\psi (t)}{\prod _{i=1}^4[\varrho +i+(4-i)\mu ]}\right)^{\frac{1}{1-\mu }} \le x(t)\le M \left(\frac{(1-\mu )^4t^{\varrho +4}L_\psi (t)}{\prod _{i=1}^4[\varrho +i+(4-i)\mu ]}\right)^{\frac{1}{1-\mu }} \end{aligned}$$

for some positive constants \(m,M\). We point out that a quite different approach from ours is used in [14].
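For completeness, let us indicate how the above asymptotic formula for strongly decreasing solutions of (27) follows from (16): with \(\alpha =\beta =\lambda =1\), \(\varLambda =\frac{1}{1-\mu }\) and \(L_p=L_q=L_\varphi =1\), we have \(L_1=L_\psi ^{\frac{1}{1-\mu }}\), \(t^\nu =t^{\frac{\varrho +4}{1-\mu }}\), and

$$\begin{aligned} \left(K_1K_2^{\frac{\lambda }{\alpha }}\right)^{\alpha \beta \varLambda }=(K_1K_2)^{\frac{1}{1-\mu }} =\left(\frac{1}{(-\nu )(-\nu ^{[1]})(-\omega )(-\omega ^{[1]})}\right)^{\frac{1}{1-\mu }} =\left(\frac{(1-\mu )^4}{\prod _{i=1}^4\big [\varrho +i+(4-i)\mu \big ]}\right)^{\frac{1}{1-\mu }}. \end{aligned}$$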

In [21], we considered system (1), where we assumed \(p(t)=1=q(t),\,\varphi (t)\sim c_1t^\sigma \) and \(\psi (t)\sim c_2 t^\varrho \) with \(c_1,c_2\in (0,\infty )\). Under the condition

$$\begin{aligned} \sigma +1+\alpha <0,\quad \varrho +1+\beta <0, \end{aligned}$$
(28)

we showed that \(\mathcal{SDS }\ne \emptyset \) and that any \((x,y)\in \mathcal{SDS }\) satisfies

$$\begin{aligned} (x,y)(t)\sim (k_1t^\nu ,k_2t^\omega )\quad \text{ as}\,t\rightarrow \infty \end{aligned}$$

where

$$\begin{aligned}&k_1^{\lambda \mu -\alpha \beta }=\alpha ^\beta \beta ^\lambda |\nu |^{\alpha \beta }|\nu -1|^\beta |\omega |^{\beta \lambda } |\omega -1|^\lambda c_1^{-\beta } c_2^{-\lambda },\\&k_2^{\lambda \mu -\alpha \beta }=\alpha ^\mu \beta ^\alpha |\nu |^{\alpha \mu } |\nu -1|^\mu |\omega |^{\alpha \beta } |\omega -1|^\alpha c_1^{-\mu } c_2^{-\alpha }. \end{aligned}$$

Since every function which is asymptotically equivalent to a power function is trivially regularly varying, it is clear that this result is a special case of Theorem 2 (with \(P=Q=\infty \)). Moreover, the sufficient condition (28) is stronger than (23), which, in this case, reduces to \(\beta (\alpha +1+\sigma )+\lambda (\beta +1+\varrho )<0\). Observe how condition (28) corresponds to Tanigawa’s integral existence condition (9) stated in [23]. Indeed, for \(f(t)\) such that \(f(t)\sim t^\vartheta \), it holds

$$\begin{aligned} \int \limits ^\infty f(s)\,ds<\infty \,\text{ if} \text{ and} \text{ only} \text{ if}\,\vartheta <0, \end{aligned}$$
(29)

while for a more general \(f\in \mathcal{RV }(\vartheta )\), the convergence of \(\int ^\infty f(s)\,ds\) is possible also with \(\vartheta =0\); see also Proposition 1–4. The fact that the coefficients of the system in [21] are trivially regularly varying enabled us to apply the so-called asymptotic equivalence theorem, and we obtained the result by making a comparison with an (explicitly solvable) system which has precise power coefficients. That approach does not work in the present more general setting.

From our results for system (1) we can also obtain the asymptotic formula for strongly decreasing solutions to scalar second-order equations of the form

$$\begin{aligned} (p(t){\varPhi }_\alpha (x^{\prime }))^{\prime }=\varphi (t){\varPhi }_\lambda (x), \end{aligned}$$
(30)

where \(p\in \mathcal{RV }(\gamma )\) and \(\varphi \in \mathcal{RV }(\sigma )\), under the conditions \(\alpha >\lambda \) and

$$\begin{aligned} \sigma +1<\min \left\{ \gamma -\alpha ,\frac{\lambda }{\alpha }(\gamma -\alpha )\right\} . \end{aligned}$$
(31)

Indeed, if \(\alpha > \lambda \), it is known that (30) has strongly decreasing solutions if

$$\begin{aligned} \int \limits _a^\infty \left( \frac{1}{p(t)} \int \limits _t^\infty \varphi (s) \, ds\right)^{\frac{1}{\alpha }} dt <\infty , \end{aligned}$$
(32)

or

$$\begin{aligned} \int \limits _a^\infty \varphi (t) \left( \int \limits _t^\infty \frac{1}{p^{\frac{1}{\alpha }}(s)} \, ds\right)^{\lambda } dt <\infty , \end{aligned}$$
(33)

holds, see, for instance, [3, Proposition 2]. Notice that (31) implies (32) if \(\gamma \le \alpha \), while it implies (33) if \(\gamma > \alpha \). If \(x\) is a strongly decreasing solution of (30), then \((x,x)\in \mathcal{SDS }\) for system (1), in which we take \(p(t)=q(t),\,\varphi (t)=\psi (t),\,\alpha =\beta \), and \(\lambda =\mu \). The subhomogeneity condition is implied by \(\alpha >\lambda \). Further,

$$\begin{aligned} \nu =\omega =\frac{\alpha -\gamma +\sigma +1}{\alpha -\lambda },\quad \text{ and}\quad \nu ^{[1]}=\omega ^{[1]}=\frac{\lambda (\alpha -\gamma )+\alpha (\sigma +1)}{\alpha -\lambda }, \end{aligned}$$

and condition (15) (or equivalently condition (19)) reduces to (31). Then, in view of Theorem 2, under (31), there hold \(x\in \mathcal{RV }(\nu )\) and

$$\begin{aligned} x(t)\sim \left(\frac{t^{\alpha -\gamma +\sigma +1}L_\varphi (t)}{(-\nu )^{\alpha }(-\nu ^{[1]})L_p(t)}\right)^{\frac{1}{\alpha -\lambda }} \end{aligned}$$
(34)

as \(t\rightarrow \infty \). If, in Eq. (30), we additionally assume \(p(t)=1\) and \(\varphi (t)\sim t^\sigma \) (thus \(\varphi \) is trivially regularly varying), then the just described result reduces to the relevant part of [10, Theorem 4.4] by Kamo and Usami; notice that their proof is based on the asymptotic equivalence theorem. Our results turn out to be new in the second-order equation setting even when compared with [17]. Indeed, it is proved in [17, Theorem 2.2] that the equation \(({\varPhi }_\alpha (x^{\prime }))^{\prime }= \varphi (t){\varPhi }_\lambda (x)\) possesses a positive decreasing solution satisfying

$$\begin{aligned} m\left(\frac{t^{\alpha +\sigma +1}L_\varphi (t)}{(-\nu )^{\alpha }(-\nu ^{[1]})}\right)^{\frac{1}{\alpha -\lambda }} \le x(t)\le M\left(\frac{t^{\alpha +\sigma +1}L_\varphi (t)}{(-\nu )^{\alpha }(-\nu ^{[1]})}\right)^{\frac{1}{\alpha -\lambda }} \end{aligned}$$

for some positive constants \(m,M\), where in \(\nu ,\nu ^{[1]}\) one has \(\gamma =0\), and \(\varphi \in \mathcal{RV }(\sigma )\), \(\alpha >\lambda \), \(\sigma <-\alpha -1\) are assumed.
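Formula (34) can also be checked symbolically: for pure power coefficients \(p(t)=t^\gamma \), \(\varphi (t)=t^\sigma \) (so that \(L_p=L_\varphi =1\)), the function \(x(t)=Ct^\nu \) with the constant taken from (34) solves (30) exactly. A small sympy sketch of this check, for one concrete admissible choice of the parameters (the choice is ours), is the following.

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# concrete admissible data (our choice): alpha > lambda and (31) holds
alpha, lam, gamma, sigma = 2, 1, 1, -5

nu  = sp.Rational(alpha - gamma + sigma + 1, alpha - lam)        # nu = -3
nu1 = (nu - 1) * alpha + gamma                                   # nu^[1] = -7
C   = ((-nu) ** (-alpha) * (-nu1) ** (-1)) ** sp.Rational(1, alpha - lam)  # constant from (34)

x   = C * t ** nu                                                # candidate solution
xp  = sp.diff(x, t)                                              # x' < 0
lhs = sp.diff(t ** gamma * (-(-xp) ** alpha), t)                 # (p(t) Phi_alpha(x'))'
rhs = t ** sigma * x ** lam                                      # phi(t) Phi_lambda(x)

print(sp.simplify(lhs - rhs))    # prints 0: x solves (30) exactly
```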

Further related results for scalar second-order equations were obtained by Marić [20], who considered the equation \(x^{\prime \prime }=\varphi (t)F(x)\), where \(\varphi \) is regularly varying (at infinity) and \(F\) is regularly varying at zero of index \(\lambda \); the superlinearity condition \(1<\lambda \) is assumed there. Other related results have been obtained by Kusano and Manojlović in [11, 12] for the equation \(x^{\prime \prime }+r(t)F(x)=0\), where \(r\) is regularly varying (at infinity) and \(F={\varPhi }_\lambda \) or \(F\) is regularly varying at zero of index \(\lambda \), under the sublinearity condition \(1>\lambda \), and in [13], where the existence of slowly varying solutions and regularly varying solutions of index 1 is studied both in the superlinear and in the sublinear case. Note that the potential in the last mentioned works is of the opposite sign to that in the formerly mentioned works.

Finally, for completeness, we mention several other works which are related to those presented above. Increasing solutions have been studied more deeply for singular Emden–Fowler-type systems than for regular ones; see, for example, [18] and the references therein. An asymptotic theory of such systems in the framework of regular variation can be a topic for future research. There are many works devoted to the study of half-linear differential equations (ordinary as well as functional ones) in the framework of regular variation. However, those works can usually employ methods different from the ones needed in the more general quasilinear case; see, e.g., [9, 24]. A higher-order quasilinear two-term differential equation is studied, for example, in [22], and a generalization of some results from [2] is obtained. The theory of regular variation was applied to the study of odd-order equations in [16]. Somewhat related asymptotic formulas for solutions to \(n\)-th-order equations were derived in [7]. The two-term equation can be written as a system of the form

$$\begin{aligned} x_i^{\prime }=-a_i(t){\varPhi }_{\alpha _i}(x_{i+1}),\quad i=1,\dots ,n, \end{aligned}$$
(35)

where \(x_{n+1}\) means \(x_1\). We stress that nonlinear two-term differential equations which are included in (35) can be of arbitrary order (even as well as odd). In addition, (35) covers systems analogous to (1), where, instead of two equations, we can have \(k\) second-order equations, \(k\in \mathbb{N }\). Similar systems of two or more second-order equations are considered, for example, in [5]. We believe that our methods can be modified and extended to general systems of the form (35).

4 Proofs

 

Proof of Theorem 1

Case I. Assume \(I_1(a)< \infty \), and let \(t_0 \ge a\) be sufficiently large, such that

$$\begin{aligned} I_1(t_0)< \frac{1}{2}. \end{aligned}$$
(36)

For every \(n \in \mathbb{N }\), let \(\Omega _n \subseteq C[t_0, \infty ) \times C_0[t_0, \infty )\) denote the set of functions \((x, y)\) satisfying

$$\begin{aligned} \frac{1}{n} \le x(t) \le 2, \quad \quad 0 \le y(t) \le 2^{\mu /\beta } \int \limits _t^\infty \left(\frac{1}{q(r)}\int \limits _r^\infty \psi (\tau )\,d\tau \right)^{\frac{1}{\beta }} dr, \end{aligned}$$
(37)

for every \(t \ge t_0\). Let \(\mathcal T _n : \, \Omega _n \rightarrow C[t_0, \infty ) \times C_0[t_0, \infty )\) be the map defined by \(\mathcal T _n(x,y)=(\mathcal F _n y, \mathcal G _n x)\), where

$$\begin{aligned} (\mathcal F _n y)(t)&= \frac{1}{n}+ \int \limits _t^\infty \left(\frac{1}{p(r)}\int \limits _r^\infty \varphi (\tau ) y^\lambda (\tau ) \,d\tau \right)^{\frac{1}{\alpha }} dr,\\ (\mathcal G _n x)(t)&= \int \limits _t^\infty \left(\frac{1}{q(r)}\int \limits _r^\infty \psi (\tau ) x^\mu (\tau ) \,d\tau \right)^{\frac{1}{\beta }} dr. \end{aligned}$$
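To give a concrete feel for the construction, the following Python sketch discretizes the map \(\mathcal T _n\) for one particular admissible data set (all numerical choices below are ours, and the improper integrals are truncated at a finite endpoint, which is a further approximation). We stress that the existence proof itself relies on the Schauder–Tychonoff theorem and not on the iteration of \(\mathcal T _n\); iterating from the largest element of \(\Omega _n\) is used here only to visualize an approximate fixed point.

```python
import numpy as np

# Example data (our choice): alpha = beta = lambda = 1, mu = 1/2, p = q = 1,
# phi(t) = psi(t) = t**(-6), t0 = 1, n = 4; improper integrals are truncated at T.
t0, T, N, n = 1.0, 200.0, 20000, 4
t = np.linspace(t0, T, N)
phi = psi = t ** (-6.0)

def tail_integral(f):
    """Approximate int_t^T f(s) ds on the grid (the tail beyond T is neglected)."""
    dt = t[1] - t[0]
    pieces = (f[:0:-1] + f[-2::-1]) / 2 * dt            # trapezoids, taken from the right end
    return np.concatenate((np.cumsum(pieces)[::-1], [0.0]))

def T_map(x, y):                                        # discrete version of T_n = (F_n, G_n)
    Fy = 1.0 / n + tail_integral(tail_integral(phi * y))    # alpha = lambda = 1
    Gx = tail_integral(tail_integral(psi * np.sqrt(x)))     # beta = 1, mu = 1/2
    return Fy, Gx

x = np.full(N, 2.0)                                     # largest element of Omega_n
y = T_map(x, x)[1]                                      # upper bound for y in (37)
for _ in range(30):                                     # the iterates decrease monotonically
    x, y = T_map(x, y)
print(x[0], y[0])                                       # positive decreasing approximate fixed point
print(x[-1], y[-1])                                     # near T: close to 1/n and 0, respectively
```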

Notice that the assumption \(I_1(a)<\infty \) assures that \(\mathcal T _n\) is well defined on \(\Omega _n\) for every \(n \in \mathbb{N }\); further, the following estimates hold, taking into account (36), (37), and the subhomogeneity condition \(\alpha \beta > \lambda \mu \):

$$\begin{aligned} \frac{1}{n}&\le \mathcal F _n y(t) \le \frac{1}{n} + 2^{\frac{\mu \lambda }{\alpha \beta }} I_1(t_0)< 1 + 2^{\frac{\mu \lambda }{\alpha \beta }} \frac{1}{2} < 2,\\ 0&\le \mathcal G _n x(t) \le 2^{\mu /\beta } \int \limits _t^\infty \left(\frac{1}{q(r)}\int \limits _r^\infty \psi (\tau )\,d\tau \right)^{\frac{1}{\beta }} dr, \end{aligned}$$

for every \(t \ge t_0\). Hence, \(\mathcal T _n\) maps \(\Omega _n\) into itself, for every \(n \in \mathbb{N }\). The Schauder–Tychonoff fixed-point theorem can be applied to obtain the existence of a fixed point \((x_n, y_n)\) of \(\mathcal T _n\) in \(\Omega _n\), \(n \in \mathbb{N }\). Notice that \((x_n, y_n)\) is a positive moderately decreasing solution of (1), satisfying the asymptotic conditions \(\lim _{t\rightarrow \infty }x_n(t)=1/n, \, \lim _{t\rightarrow \infty }y_n(t)=0, \, \lim _{t\rightarrow \infty }x^{[1]}_n(t)=0, \, \lim _{t\rightarrow \infty }y^{[1]}_n(t)=0\). Therefore, we have constructed a sequence \(\{(x_n, y_n)\}\) of positive decreasing solutions of (1), defined on \([t_0, \infty )\), which is uniformly bounded, since for every \(n \in \mathbb{N }\) it holds

$$\begin{aligned} 0 \le x_n(t) \le 2, \quad 0 \le y_n(t) \le 2^{\mu /\beta } \int \limits _t^\infty \left(\frac{1}{q(r)}\int \limits _r^\infty \psi (\tau )\,d\tau \right)^{\frac{1}{\beta }} dr, \quad \quad t \ge t_0. \end{aligned}$$

The estimates

$$\begin{aligned} 0 \le -x_n^{\prime }(t)&\le 2^{\frac{\mu \lambda }{\alpha \beta }} \left(\frac{1}{p(t)}\int \limits _t^\infty \varphi (s)\left(\int \limits _s^\infty \!\! \left(\frac{1}{q(r)}\int \limits _r^\infty \psi (\tau )\,d\tau \right)^{\frac{1}{\beta }}dr\right)^\lambda ds\right)^{\frac{1}{\alpha }}\\ 0 \le -y_n^{\prime }(t)&\le 2^{\mu /\beta } \left(\frac{1}{q(t)}\int \limits _t^\infty \psi (\tau )\,d\tau \right)^{\frac{1}{\beta }} \end{aligned}$$

assure that also the sequence \(\{(x_n^{\prime }, y_n^{\prime })\}\) is uniformly bounded, and therefore, \(\{(x_n, y_n)\}\) is locally equicontinuous on \([t_0, \infty )\). By the Ascoli-Arzelà theorem, there exists a subsequence of \(\{(x_n, y_n)\}\) which converges to a limit function \((x_0, y_0)\), uniformly on compact subsets of \([t_0, \infty )\). Simple calculations show that \((x_0, y_0)\) satisfies

$$\begin{aligned} x_0(t)&= \int \limits _t^\infty \left(\frac{1}{p(r)}\int \limits _r^\infty \varphi (\tau ) y_0^\lambda (\tau ) \,d\tau \right)^{\frac{1}{\alpha }} dr,\\ y_0(t)&= \int \limits _t^\infty \left(\frac{1}{q(r)}\int \limits _r^\infty \psi (\tau ) x_0^\mu (\tau ) \,d\tau \right)^{\frac{1}{\beta }} dr, \end{aligned}$$

that is, \((x_0, y_0)\) is a solution of (1), with

$$\begin{aligned} \lim _{t\rightarrow \infty }x_0(t)=\lim _{t\rightarrow \infty }y_0(t)=\lim _{t\rightarrow \infty }x^{[1]}_0(t)= \lim _{t\rightarrow \infty }y^{[1]}_0(t)=0. \end{aligned}$$

To prove that \((x_0, y_0) \in \mathcal{SDS }\), it is sufficient to show that \(x_0(t)>0, y_0(t)>0\) on \([t_0, \infty )\) and that it can be extended to the whole interval \([a, \infty )\). This part of the proof is analogous to the corresponding one in the proof of [23, Theorem 2.2], and it is omitted.

Case II. Assume \(I_2(a)< \infty \), and let \(t_0 \ge a\) be sufficiently large, such that \(I_2(t_0)<1/2\). Analogously to the proof of Case I, for every \(n \in \mathbb{N }\), we can define the set \({\hat{\varOmega }}_n \subseteq C[t_0, \infty ) \times C_0[t_0, \infty )\) as the set of functions \((x, y)\) satisfying

$$\begin{aligned} 0 \le x(t) \le 2^{\lambda /\alpha } \int \limits _t^\infty \left(\frac{1}{p(r)}\int \limits _r^\infty \varphi (\tau )\,d\tau \right)^{\frac{1}{\alpha }} dr, \quad \quad \frac{1}{n} \le y(t) \le 2, \end{aligned}$$

for every \(t \ge t_0\), and the map \({\hat{\mathcal{T }}}_n:\,{\hat{\varOmega }}_n \rightarrow C[t_0, \infty ) \times C_0[t_0, \infty )\), \({\hat{\mathcal{T }}}_n(x,y)=({\hat{\mathcal{F }}}_n y, {\hat{\mathcal{G }}}_n x)\), where

$$\begin{aligned} ({\hat{\mathcal{F }}}_n y)(t)&= \int \limits _t^\infty \left(\frac{1}{p(r)}\int \limits _r^\infty \varphi (\tau ) y^\lambda (\tau ) \,d\tau \right)^{\frac{1}{\alpha }} dr,\\ ({\hat{\mathcal{G }}}_n x)(t)&= \frac{1}{n} + \int \limits _t^\infty \left(\frac{1}{q(r)}\int \limits _r^\infty \psi (\tau ) x^\mu (\tau ) \,d\tau \right)^{\frac{1}{\beta }} dr. \end{aligned}$$

We easily get \({\hat{\mathcal{T }}}_n ({\hat{\varOmega }}_n) \subseteq {\hat{\varOmega }}_n\), and the other parts of the proof follow arguments similar to those in the proof of Case I.

Case III. Assume \(I_3(a)< \infty \). By means of the reciprocity principle, this case corresponds to Case I for the reciprocal system (11), and therefore, there exist strongly decreasing solutions of (11). In view of (12), the assertion follows.

Case IV. Assume \(I_4(a)< \infty \). Similarly to the previous case, this assumption corresponds to Case II for the reciprocal system (11), and therefore, there exist strongly decreasing solutions of (11) and thus of (1), in view of (12). \(\square \)

Proof of Theorem 2

First, we show that

$$\begin{aligned} (15)\quad \text{ implies}\quad (18)\quad \text{ or} \quad (19) \quad \text{ or} \quad (20) \quad \text{ or} \quad (21). \end{aligned}$$
(38)

Since in the subsequent part of this proof the statement of Theorem 2 is shown to be guaranteed by any of the conditions (18) or (19) or (20) or (21), the result follows. Observe

$$\begin{aligned} \text{ sgn}\nu&= \text{ sgn}\big [\beta (\alpha -\gamma )+\lambda (\beta -\delta )+\beta (\sigma +1)+\lambda (\varrho +1)\big ],\\ \text{ sgn}\omega&= \text{ sgn}\big [\mu (\alpha -\gamma )+\alpha (\beta -\delta )+\mu (\sigma +1)+\alpha (\varrho +1)\big ],\\ \text{ sgn}\nu ^{[1]}&= \text{ sgn}\big [\lambda \mu (\alpha -\gamma )+\lambda \alpha (\beta -\delta )+\alpha \beta (\sigma +1)+\lambda \alpha (\varrho +1)\big ],\\ \text{ sgn}\omega ^{[1]}&= \text{ sgn}\big [\mu \beta (\alpha -\gamma )+\mu \lambda (\beta -\delta )+\mu \beta (\sigma +1)+\alpha \beta (\varrho +1)\big ]. \end{aligned}$$

Thus, any of the conditions \(\nu <0, \omega <0, \nu ^{[1]}<0, \omega ^{[1]}<0\) implies that at least one of the inequalities

$$\begin{aligned} \alpha -\gamma <0\quad \text{ or}\quad \beta -\delta <0\quad \text{ or}\quad \sigma +1<0\quad \text{ or}\quad \varrho +1<0 \end{aligned}$$
(39)

holds. One can proceed in such a way that we distinguish the cases where just one or just two or just three or all the inequalities in (39) hold:

I. Only one inequality in (39) holds. Assume, for example, \(\varrho +1<0\), \(\sigma +1 \ge 0, \, \beta - \delta \ge 0\), and \(\alpha -\gamma \ge 0\). From \(\nu <0\), we have

$$\begin{aligned} \varrho +1 < \delta - \beta - \frac{\beta }{\lambda }(\sigma +1+\alpha -\gamma )\le \delta - \beta - \frac{\beta }{\lambda }(\sigma +1) \le \delta -\beta . \end{aligned}$$

Thus, we get (18). Similarly, \(\sigma +1<0\) with \(\omega <0\) implies (19), \(\alpha -\gamma <0\) with \(\nu ^{[1]}<0\) implies (20), and \(\beta -\delta <0\) with \(\omega ^{[1]}<0\) implies (21).

II. Two inequalities in (39) hold. Assume, for instance, \(\sigma +1<0,\,\varrho +1<0,\,\alpha -\gamma \ge 0\) and \(\beta -\delta \ge 0\). Either of conditions \(\nu <0,\omega <0\) implies \(\alpha -\gamma +\sigma +1<0\) or \(\beta -\delta +\varrho +1<0\), see (13). Assume, for instance, that the former inequality holds. From \(\omega <0\) we have,

$$\begin{aligned} \sigma +1<\gamma -\alpha -\frac{\alpha }{\mu }(\varrho +1)+\frac{\alpha }{\mu }(\delta -\beta ) \le \gamma -\alpha -\frac{\alpha }{\mu }(\varrho +1). \end{aligned}$$

Thus (19) follows. Similarly, if \(\beta -\delta +\varrho +1<0\), condition \(\nu <0\) leads to (18). The case where \(\sigma +1\ge 0,\,\varrho +1\ge 0,\,\alpha -\gamma <0\) and \(\beta -\delta <0\) simply follows from the “reciprocity argument,” which is based on the relations among the coefficients of (1) and (11), and yields (20) or (21). The remaining four cases where two inequalities in (39) hold can be shown similarly; in particular, if \(\beta -\delta <0\) and \(\varrho +1<0\) hold, then (18) follows, if \(\alpha -\gamma <0\) and \(\sigma +1<0\) then (19) is satisfied, \(\alpha -\gamma <0\), \(\varrho +1<0\) imply (20), and \(\beta -\delta <0\), \(\sigma +1<0\) imply (21).

The cases where three or four inequalities in (39) hold are trivial.

Now we show that \(\mathcal{SDS }\ne \emptyset \). In view of (38), we may assume that, for instance, (18) happens. Inequality (18) implies that \(I_1\) is well defined and converges, and thus \(\mathcal{SDS }\ne \emptyset \) by Theorem 1. Indeed, \(\varrho +1<0\) implies the convergence of \(\int _r^\infty \psi (\tau )\,d\tau \) (see the fourth item in Proposition 1) and by (6) there is \(c>0\) such that \(\int _r^\infty \psi (\tau )\,d\tau \le c r^{\varrho +1}L_\psi (r)\), \(r\ge a\). Using the same arguments and the just obtained estimate, the inequalities in (18) imply that all other integrals in \(I_1\), including \(I_1\) itself, are well defined and converge. Similarly, we get that (19) implies \(I_2(a)<\infty \), (20) implies \(I_3 (a)<\infty \), and (21) implies \(I_4(a)<\infty \).

For the remaining part of the proof, we will repeatedly make use of the integral form of (1) for strongly decreasing solutions. Let \((x,y)\in \mathcal{SDS }\); integrating both equations in (1) and taking into account that \(x^{[1]}(\infty )=y^{[1]}(\infty )=0\), we get

$$\begin{aligned} -x^{[1]}(t)=\int \limits _t^\infty \varphi (s)y^\lambda (s)\,ds,\quad -y^{[1]}(t)=\int \limits _t^\infty \psi (s)x^\mu (s)\,ds. \end{aligned}$$
(40)

Integrating again, we have

$$\begin{aligned} \left\{ \begin{array}{l} x(t)=\displaystyle \int \nolimits _t^\infty \left(\frac{1}{p(s)}\int \limits _s^\infty \varphi (u){\varPhi }_\lambda (y(u))\,du\right)^{1/\alpha }ds,\\ y(t)=\displaystyle \int \nolimits _t^\infty \left(\frac{1}{q(s)}\int \limits _s^\infty \psi (u){\varPhi }_\mu (x(u))\,du\right)^{1/\beta }ds, \end{array} \right. \end{aligned}$$
(41)

where we have used that \(x(\infty )=y(\infty )=0\).

In the next step, we show that for any \((x,y)\in \mathcal{SDS }\) there exist \(c_1,c_2,d_1,d_2\in (0,\infty )\) such that

$$\begin{aligned} c_1t^\nu L_1(t)\le x(t)\le c_2 t^\nu L_1(t),\quad d_1t^\omega L_2(t)\le y(t)\le d_2 t^\omega L_2(t) \end{aligned}$$
(42)

for large \(t\). Assume first (18), and take any \((x,y)\in \mathcal{SDS }\). For brevity, we will not mention the phrase “for large \(t\)” repeatedly. From (41), since \(x\) is decreasing, we get

$$\begin{aligned} x(t)&\le x^{\frac{\lambda \mu }{\alpha \beta }}(t) \int \limits _t^\infty \left(\frac{1}{s^\gamma L_p(s)}\int \limits _s^\infty u^\sigma L_\varphi (u) \right.\nonumber \\&\times \left.\left(\int \limits _u^\infty \left(\frac{1}{\tau ^\delta L_q(\tau )}\int \limits _\tau ^\infty r^\varrho L_\psi (r)\,dr\right)^{\frac{1}{\beta }}d\tau \right)^\lambda du\right)^{\frac{1}{\alpha }}ds, \end{aligned}$$
(43)

where the improper integrals converge due to (18). Applying (6) repeatedly (which is possible since the respective indices are always less than \(-1\)), starting with the most inner integral, we find \(k_1,k_2\in (0,\infty )\) such that

$$\begin{aligned} x(t)&\le k_1 x^{\frac{\lambda \mu }{\alpha \beta }}(t) L_1^{\frac{1}{\alpha \beta \varLambda }}(t)\int \limits _t^\infty \!\!\left(s^{-\gamma }\int \limits _s^\infty \!\!\! u^\sigma \left(\int \limits _u^\infty \!\!\! \left(\tau ^{-\delta }\int \limits _\tau ^\infty \!\!\! r^\varrho dr\right)^{\frac{1}{\beta }}\!\!\! d\tau \right)^\lambda \!\!\!du\right)^{\frac{1}{\alpha }}\!\!\! ds\\&= k_2 x^{\frac{\lambda \mu }{\alpha \beta }}(t) L_1^{\frac{1}{\alpha \beta \varLambda }}(t) t^{\nu /(\alpha \beta \varLambda )}, \end{aligned}$$

and so \(x(t)\le c_2 t^\nu L_1(t)\), for some \(c_2\in (0,\infty )\), follows. To find an upper estimate for \(y\), we use the just obtained estimate for \(x\), which plugged into the second equation of (41) yields

$$\begin{aligned} y(t)&\le \int \limits _t^\infty \left(\frac{1}{q(s)}\int \limits _s^\infty \psi (u)c_2^\mu u^{\nu \mu } L_1^\mu (u)\,du \right)^{\frac{1}{\beta }}ds\\&= \int \limits _t^\infty \left(\frac{1}{s^\delta L_q(s)}\int \limits _s^\infty u^{\varrho +\mu \nu }c_2^\mu L_\psi (u) L_1^\mu (u)\,du\right)^{\frac{1}{\beta }}ds\\&\le d_2 t^{\frac{\varrho +\mu \nu +1-\delta }{\beta }+1}\left(\frac{L_\psi (t)L_1^\mu (t)}{L_q(t)} \right)^{\frac{1}{\beta }}\\&= d_2 t^\omega L_2(t) \end{aligned}$$

for some \(d_2\in (0,\infty )\). Note that the convergence of all integrals and the applicability of (6) are assured by \(\varrho +\mu \nu <-1\) and \((\varrho +\mu \nu +1-\delta )/\beta =\omega -1<-1\). If, instead of (18), we assume (19), then we proceed in an analogous way, where first we establish an upper estimate for \(y\), and then we utilize it in estimating \(x\). We will also need estimates for \(-x^{[1]},-y^{[1]}\). Notice that the following identities hold:

$$\begin{aligned} \lambda \omega +\sigma +1&= (\nu -1)\alpha + \gamma = \nu ^{[1]},\nonumber \\ \mu \nu + \varrho +1&= (\omega -1) \beta +\delta = \omega ^{[1]}. \end{aligned}$$
(44)
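
Note that, since \(\alpha \beta \ne \lambda \mu \), the identities (44), viewed as a linear system for \(\nu \) and \(\omega \), yield the explicit expressions

$$\begin{aligned} \nu =\frac{\beta (\alpha +1+\sigma -\gamma )+\lambda (\beta +1+\varrho -\delta )}{\alpha \beta -\lambda \mu },\qquad \omega =\frac{\alpha (\beta +1+\varrho -\delta )+\mu (\alpha +1+\sigma -\gamma )}{\alpha \beta -\lambda \mu }, \end{aligned}$$

which will be used below.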

From (40), applying the estimates for \(x,y\) just obtained together with the relations (44), we find \(k_3,k_4\in (0,\infty )\) such that

$$\begin{aligned} -x^{[1]}(t)\le k_3 t^{\nu ^{[1]}} L_\varphi (t)L_2^\lambda (t),\quad -y^{[1]}(t)\le k_4 t^{\omega ^{[1]}} L_\psi (t)L_1^\mu (t) \end{aligned}$$
(45)

for large \(t\). Clearly, all improper integrals in auxiliary computations converge, and, moreover, (6) can be applied.
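
For instance, the first estimate in (45) is obtained as follows: by (40), the upper bound for \(y\) in (42), (6), and the first identity in (44),

$$\begin{aligned} -x^{[1]}(t)=\int \limits _t^\infty \varphi (s)y^\lambda (s)\,ds\le d_2^\lambda \int \limits _t^\infty s^{\sigma +\lambda \omega }L_\varphi (s)L_2^\lambda (s)\,ds\le k_3 t^{\sigma +\lambda \omega +1}L_\varphi (t)L_2^\lambda (t)=k_3 t^{\nu ^{[1]}}L_\varphi (t)L_2^\lambda (t); \end{aligned}$$

the second estimate in (45) is obtained analogously.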

Recall that all the above estimates were made just for the case when (18) or (19) holds. The upper estimates in the case when (20) or (21) holds follow by the reciprocity principle. Indeed, (20) and (21) play the same role for (11) as (18) and (19) do for (1). Let \(u=-x^{[1]}\) and \(v=-y^{[1]}\). Define \(\tilde{\nu },\tilde{\omega },\tilde{L}_1,\tilde{L}_2\) by means of (11) as \(\nu ,\omega ,L_1,L_2\), respectively, are defined by means of (1). In view of (2), arguing as in the previous case, we find \(\tilde{c}_2,\tilde{d}_2\in (0,\infty )\) such that

$$\begin{aligned} u(t)\le \tilde{c}_2 t^{\tilde{\nu }}\tilde{L}_1(t),\quad v(t)\le \tilde{d}_2 t^{\tilde{\omega }}\tilde{L}_2(t) \end{aligned}$$
(46)

for large \(t\). Since \(\tilde{\nu }=\nu ^{[1]},\tilde{\omega }=\omega ^{[1]},\tilde{L}_1=L_\varphi L_2^\lambda ,\tilde{L}_2=L_\psi L_1^\mu \), (46) yields (45), from which we easily obtain the upper estimates for \(x\) and \(y\) displayed in (42).

Now we prove lower estimates for \(x,y\); here the argument works in the same way no matter which of the conditions (18)–(21) holds. For brevity, we sometimes omit the argument \(t\); again, all estimates hold for sufficiently large values of the argument. Set \(z=-xx^{[1]}\) and \(w=-yy^{[1]}\). Then, from (1),

$$\begin{aligned} -z^{\prime }&= x^{\prime }p{\varPhi }_\alpha (x^{\prime })+x\varphi y^\lambda =z\left(-\frac{x^{\prime }}{x}+\frac{\varphi y^\lambda }{p(-x^{\prime })^\alpha } \right),\\ -w^{\prime }&= y^{\prime }q{\varPhi }_\beta (y^{\prime })+y\psi x^\mu =w\left(-\frac{y^{\prime }}{y}+\frac{\psi x^\mu }{q(-y^{\prime })^\beta }\right). \end{aligned}$$

Here we have used that \(-x^{[1]}=p(-x^{\prime })^\alpha \) and \(-y^{[1]}=q(-y^{\prime })^\beta \), since \(x^{\prime }<0\) and \(y^{\prime }<0\).

Denote

$$\begin{aligned} \xi _1&= 2\alpha \beta +\alpha \beta \mu +\alpha \lambda \mu +\alpha \mu +\beta \mu +\beta +\mu ,\\ \xi _2&= 2\alpha \beta +\alpha \beta \lambda +\beta \lambda \mu +\alpha \lambda +\beta \lambda +\alpha +\lambda . \end{aligned}$$

Consider the three pairs of conjugate exponents \(p_i,q_i\) (that is, \(1/p_i+1/q_i=1\)), \(i=1,2,3\), given by

$$\begin{aligned} p_1&= \xi _1/(\alpha (\beta +\beta \mu +\lambda \mu +\mu )),\ \ q_1=\xi _1/(\alpha \beta +\beta +\mu +\beta \mu ),\\ p_2&= \xi _2/(\beta (\alpha \lambda +\lambda +\lambda \mu +\alpha )),\ \ q_2=\xi _2/(\alpha \beta +\alpha \lambda +\alpha +\lambda ),\\ p_3&= (\xi _1+\xi _2)/\xi _1,\ \ q_3=(\xi _1+\xi _2)/\xi _2. \end{aligned}$$

It is not difficult to verify that the numbers \(p_i,q_i\) (\(i=1,2,3\)) satisfy the equalities

$$\begin{aligned} \frac{\mu }{q_2q_3}-\frac{1}{p_1p_3}= \frac{1}{\alpha p_1p_3}-\frac{1}{q_1 p_3}= \frac{\lambda }{q_1p_3}-\frac{1}{p_2q_3}= \frac{1}{\beta p_2 q_3}-\frac{1}{q_2q_3}. \end{aligned}$$
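
Both the conjugacy relations \(1/p_i+1/q_i=1\) and the above chain of equalities can be checked by a direct (if tedious) computation; a short symbolic verification, sketched here in Python/SymPy with variable names chosen only for this check, reads:

```python
import sympy as sp

a, b, lam, mu = sp.symbols('alpha beta lambda mu', positive=True)

# xi_1 and xi_2 as defined above
xi1 = 2*a*b + a*b*mu + a*lam*mu + a*mu + b*mu + b + mu
xi2 = 2*a*b + a*b*lam + b*lam*mu + a*lam + b*lam + a + lam

# the three pairs p_i, q_i
p1, q1 = xi1/(a*(b + b*mu + lam*mu + mu)), xi1/(a*b + b + mu + b*mu)
p2, q2 = xi2/(b*(a*lam + lam + lam*mu + a)), xi2/(a*b + a*lam + a + lam)
p3, q3 = (xi1 + xi2)/xi1, (xi1 + xi2)/xi2

# each pair is conjugate: 1/p_i + 1/q_i = 1
assert all(sp.simplify(1/p + 1/q - 1) == 0 for p, q in ((p1, q1), (p2, q2), (p3, q3)))

# the four expressions in the chain of equalities share the value (lam*mu - a*b)/(xi1 + xi2)
common = (lam*mu - a*b)/(xi1 + xi2)
exprs = [mu/(q2*q3) - 1/(p1*p3), 1/(a*p1*p3) - 1/(q1*p3),
         lam/(q1*p3) - 1/(p2*q3), 1/(b*p2*q3) - 1/(q2*q3)]
assert all(sp.simplify(e - common) == 0 for e in exprs)
```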

Denote

$$\begin{aligned} \xi = \frac{\mu }{q_2q_3}-\frac{1}{p_1p_3}+1. \end{aligned}$$
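
A direct computation (cf. the verification above) shows that the common value of the four differences in the chain of equalities is \((\lambda \mu -\alpha \beta )/(\xi _1+\xi _2)\); hence

$$\begin{aligned} \xi -1=\frac{\lambda \mu -\alpha \beta }{\xi _1+\xi _2}, \end{aligned}$$

so that the subhomogeneity assumption \(\alpha \beta >\lambda \mu \) is equivalent to \(1-\xi >0\).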

Let \(k_5,k_6\in (0,\infty )\) be such that \(k_5\le \min \{p_1,q_1,p_2,q_2\}\) and \(k_6\le k_5\min \{p_3,q_3\}\). Then, in view of the above identities for \(z^{\prime }\) and \(w^{\prime }\), applying the Young inequality at three places, we get

$$\begin{aligned} (zw)^{\prime }&= -z^{\prime }w-zw^{\prime }=zw\left(\frac{-x^{\prime }}{x}+\frac{\varphi y^\lambda }{-x^{[1]}} +\frac{-y^{\prime }}{y}+\frac{\psi x^\mu }{-y^{[1]}}\right) \nonumber \\&\ge k_5 zw\left(\left(\frac{-x^{\prime }}{x}\right)^{\frac{1}{p_1}}\left( \frac{\varphi y^\lambda }{-x^{[1]}}\right)^{\frac{1}{q_1}}+ \left(\frac{-y^{\prime }}{y}\right)^{\frac{1}{p_2}}\left(\frac{\psi x^{\mu }}{-y^{[1]}}\right)^{\frac{1}{q_2}} \right) \nonumber \\&\ge k_6 zw \left(\frac{-x^{\prime }}{x}\right)^{\frac{1}{p_1p_3}} \left(\frac{\varphi y^\lambda }{-x^{[1]}}\right)^{\frac{1}{q_1p_3}} \left(\frac{-y^{\prime }}{y}\right)^{\frac{1}{p_2q_3}} \left(\frac{\psi x^{\mu }}{-y^{[1]}}\right)^{\frac{1}{q_2q_3}} \nonumber \\&= k_6 zwx^{\xi -1}y^{\xi -1}\left(-x^{[1]}\right)^{\xi -1}\left(-y^{[1]}\right)^{\xi -1} p^{-\frac{1}{\alpha p_1p_3}}q^{-\frac{1}{\beta p_2q_3}} \varphi ^{\frac{1}{q_1p_3}}\psi ^{\frac{1}{q_2q_3}} \nonumber \\&= k_6 (zw)^\xi p^{-\frac{1}{\alpha p_1p_3}}q^{-\frac{1}{\beta p_2q_3}} \varphi ^{\frac{1}{q_1p_3}}\psi ^{\frac{1}{q_2q_3}}. \end{aligned}$$
(47)

Recall from the above computation that the subhomogeneity assumption is equivalent to \(1-\xi >0\). Integrating inequality (47) from \(t\) to \(\infty \), taking into account that \(z(t)\rightarrow 0\) and \(w(t)\rightarrow 0\) as \(t\rightarrow \infty \), and applying (6), we find \(k_7,k_8\in (0,\infty )\) such that

$$\begin{aligned} (z(t)w(t))^{1-\xi }&\ge k_7\int \limits _t^\infty s^{\frac{\sigma }{q_1p_3}+\frac{\varrho }{q_2q_3}-\frac{\gamma }{\alpha p_1p_3} -\frac{\delta }{\beta p_2q_3}} \nonumber \\&\times L_\varphi ^{\frac{1}{q_1p_3}}(s)L_\psi ^{\frac{1}{q_2q_3}}(s) L_p^{-\frac{1}{\alpha p_1p_3}}(s)L_q^{-\frac{1}{\beta p_2 q_3}}(s)\,ds \nonumber \\&\ge k_8 \, t^{\frac{\sigma }{q_1p_3}+\frac{\varrho }{q_2q_3}-\frac{\gamma }{\alpha p_1p_3} -\frac{\delta }{\beta p_2q_3}+1} L_\varphi ^{\frac{1}{q_1p_3}}(t)L_\psi ^{\frac{1}{q_2q_3}}(t)\nonumber \\&\times L_p^{-\frac{1}{\alpha p_1p_3}}(t)L_q^{-\frac{1}{\beta p_2 q_3}}(t). \end{aligned}$$
(48)

From the upper estimates for \(x,y,x^{[1]},y^{[1]}\) obtained in the previous part of this proof, we get

$$\begin{aligned} (z(t)w(t))^{1-\xi }&= x^{1-\xi }(t)\left(-x^{[1]}(t)\right)^{1-\xi }y^{1-\xi }(t)\left(-y^{[1]}(t)\right)^{1-\xi }\nonumber \\&\le k_9 x^{1-\xi }(t)t^{(1-\xi )(\nu ^{[1]}+\omega +\omega ^{[1]})} \left(L_\varphi (t)L_2^{\lambda +1}(t)L_\psi (t) L_1^{\mu }(t)\right)^{1-\xi } \end{aligned}$$
(49)

for some \(k_9\in (0,\infty )\). Combining now (48) with (49) and solving for \(x\), after a series of straightforward but tedious computations, we find \(c_1\in (0,\infty )\) such that \(x(t)\ge c_1t^\nu L_1(t)\). Similar arguments lead to the existence of \(d_1\in (0,\infty )\) such that \(y(t)\ge d_1 t^\omega L_2(t)\).

Now, we show that strongly decreasing solutions have regularly varying components with particular indices. Take any \((x,y)\in \mathcal{SDS }\) and any \(\vartheta \in (0,\infty )\). Denote \(M_*=\liminf _{t\rightarrow \infty }x(\vartheta t)/x(t)\), \(M^*=\limsup _{t\rightarrow \infty }x(\vartheta t)/x(t)\). In view of (42) and the fact that \(L_1\in \mathcal{SV }\), we find \(M_1,M_2\in (0,\infty )\) such that

$$\begin{aligned} M_1\le \frac{c_1 L_1(\vartheta t)\vartheta ^\nu }{c_2 L_1(t)}\le \frac{x(\vartheta t)}{x(t)}\le \frac{c_2 L_1(\vartheta t)\vartheta ^\nu }{c_1 L_1(t)}\le M_2 \end{aligned}$$

for all large \(t\). Hence, \(M_*,M^*\in (0,\infty )\). In view of (41), the L’Hospital rule yields

$$\begin{aligned} M_*&= \liminf _{t\rightarrow \infty }\frac{\int _{\vartheta t}^\infty \left(\frac{1}{p(s)}\int _s^\infty \varphi (u)y^\lambda (u)\,du\right)^{1/\alpha }ds}{\int _t^\infty \left(\frac{1}{p(s)}\int _s^\infty \varphi (u)y^\lambda (u)\,du\right)^{1/\alpha }ds}\nonumber \\&\ge \vartheta \liminf _{t\rightarrow \infty }\left(\frac{p(t)\int _{\vartheta t}^\infty \varphi (s)y^\lambda (s)\,ds}{p(\vartheta t)\int _{t}^\infty \varphi (s)y^\lambda (s)\,ds}\right)^\frac{1}{\alpha }\nonumber \\&\ge \vartheta ^{1-\frac{\gamma }{\alpha }}\liminf _{t\rightarrow \infty } \left(\frac{\vartheta \varphi (\vartheta t)y^\lambda (\vartheta t)}{\varphi (t)y^\lambda (t)}\right)^{\frac{1}{\alpha }}\nonumber \\&\ge \vartheta ^{1-\frac{\gamma }{\alpha }+\frac{1}{\alpha }+\frac{\sigma }{\alpha }} \liminf _{t\rightarrow \infty }\left(\frac{\int _{\vartheta t}^\infty \left(\frac{1}{q(s)}\int _s^\infty \psi (u)x^\mu (u)\,du\right)^{1/\beta }ds}{\int _t^\infty \left(\frac{1}{q(s)}\int _s^\infty \psi (u)x^\mu (u)\,du\right)^{1/\beta }ds} \right)^{\frac{\lambda }{\alpha }}\nonumber \\&\ge \vartheta ^{1-\frac{\gamma }{\alpha }+\frac{1}{\alpha }+\frac{\sigma }{\alpha }+\frac{\lambda }{\alpha }} \liminf _{t\rightarrow \infty }\left(\frac{q(t)\int _{\vartheta t}^\infty \psi (s)x^\mu (s)\,ds}{q(\vartheta t)\int _{t}^\infty \psi (s)x^\mu (s)\,ds}\right)^{\frac{\lambda }{\alpha \beta }}\nonumber \\&\ge \vartheta ^{1-\frac{\gamma }{\alpha }+\frac{1}{\alpha }+\frac{\sigma }{\alpha }+ \frac{\lambda }{\alpha }-\frac{\delta \lambda }{\alpha \beta }} \liminf _{t\rightarrow \infty }\left(\frac{\vartheta \psi (\vartheta t)x^\mu (\vartheta t)}{\psi (t) x^\mu (t)}\right)^{\frac{\lambda }{\alpha \beta }}\nonumber \\&\ge \vartheta ^{1-\frac{\gamma }{\alpha }+\frac{1}{\alpha }+\frac{\sigma }{\alpha }+ \frac{\lambda }{\alpha }-\frac{\delta \lambda }{\alpha \beta } +\frac{\lambda }{\alpha \beta }+\frac{\varrho \lambda }{\alpha \beta }}M_*^\frac{\lambda \mu }{\alpha \beta }, \end{aligned}$$
(50)
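
The chain of inequalities (50) gives \(M_*\ge \vartheta ^{E}M_*^{\lambda \mu /(\alpha \beta )}\), where, by the explicit expression for \(\nu \) following from (44),

$$\begin{aligned} E=1-\frac{\gamma }{\alpha }+\frac{1+\sigma +\lambda }{\alpha }+\frac{\lambda (1+\varrho -\delta )}{\alpha \beta } =\frac{\beta (\alpha +1+\sigma -\gamma )+\lambda (\beta +1+\varrho -\delta )}{\alpha \beta } =\nu \,\frac{\alpha \beta -\lambda \mu }{\alpha \beta }. \end{aligned}$$

Since \(M_*\in (0,\infty )\) and \(\lambda \mu <\alpha \beta \), we may divide both sides by \(M_*^{\lambda \mu /(\alpha \beta )}\) and raise the resulting inequality to the power \(\alpha \beta /(\alpha \beta -\lambda \mu )\),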

and so \(M_*\ge \vartheta ^\nu \). Similarly, we obtain \(M^*\le \vartheta ^\nu \). This implies \(\lim _{t\rightarrow \infty }x(\vartheta t)/x(t)=\vartheta ^\nu \), and since \(\vartheta >0\) was arbitrary, we get \(x\in \mathcal{RV }(\nu )\). Using the same ideas, we get \(y\in \mathcal{RV }(\omega )\).

Now, we derive the asymptotic formula (16). We have \(x(t)=t^\nu L_x(t),y(t)=t^\omega L_y(t)\), where \(L_x,L_y\in \mathcal{SV }\). From the first equation in (41), with the use of (6) and (44) we obtain

$$\begin{aligned} t^\nu L_x(t)&= \int \limits _t^\infty \left(\frac{1}{s^\gamma L_p(s)}\int \limits _s^\infty u^\sigma L_\varphi (u) u^{\omega \lambda }L_y^\lambda (u)\,du\right)^{\frac{1}{\alpha }}ds\nonumber \\&\sim \int \limits _t^\infty \left(\frac{L_\varphi (s)L_y^\lambda (s)}{L_p(s)}\cdot \frac{s^{\sigma +\omega \lambda +1-\gamma }}{-\sigma -\omega \lambda -1}\right)^{\frac{1}{\alpha }}ds\nonumber \\&\sim \left(\frac{L_\varphi (t)L_y^\lambda (t)}{L_p(t)}\right)^\frac{1}{\alpha } t^\nu K_1 \end{aligned}$$
(51)

as \(t\rightarrow \infty \). Similarly, from the second equation in (41),

$$\begin{aligned} L_y(t)\sim \left(\frac{L_\psi (t)L_x^\mu (t)}{L_q(t)}\right)^{\frac{1}{\beta }}K_2 \end{aligned}$$
(52)

as \(t\rightarrow \infty \). Substituting (52) into (51), we get
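
$$\begin{aligned} L_x(t)\sim K_1K_2^{\frac{\lambda }{\alpha }}\left(\frac{L_\varphi (t)}{L_p(t)}\right)^{\frac{1}{\alpha }} \left(\frac{L_\psi (t)}{L_q(t)}\right)^{\frac{\lambda }{\alpha \beta }}L_x^{\frac{\lambda \mu }{\alpha \beta }}(t) \end{aligned}$$

as \(t\rightarrow \infty \). Since \(\lambda \mu <\alpha \beta \), we may solve this relation for \(L_x\); proceeding analogously for \(L_y\), we obtain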

$$\begin{aligned} L_x(t)\sim \left(K_1K_2^{\frac{\lambda }{\alpha }}\right)^{\alpha \beta \varLambda }L_1(t),\quad L_y(t)\sim \left(K_2K_1^{\frac{\mu }{\beta }}\right)^{\alpha \beta \varLambda } L_2(t) \end{aligned}$$

as \(t\rightarrow \infty \), that is, (16). \(\square \)

Proof of Remark 1

(i) The proof is based on an application of the L’Hospital rule similar to that in (50):

$$\begin{aligned} \vartheta ^\tau =\lim _{t\rightarrow \infty }\frac{x(\vartheta t)}{x(t)}=\dots =\vartheta ^{1-\frac{\gamma }{\alpha }+\frac{1}{\alpha }+\frac{\sigma }{\alpha }+ \frac{\lambda }{\alpha }-\frac{\delta \lambda }{\alpha \beta } +\frac{\lambda }{\alpha \beta }+\frac{\varrho \lambda }{\alpha \beta }+\frac{\tau \lambda \mu }{\alpha \beta }}, \end{aligned}$$
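
Equating the exponents of \(\vartheta \) and recalling the explicit expression for \(\nu \) following from (44), we arrive at

$$\begin{aligned} \tau \left(1-\frac{\lambda \mu }{\alpha \beta }\right)=\nu \left(1-\frac{\lambda \mu }{\alpha \beta }\right), \end{aligned}$$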

from which we simply obtain \(\tau =\nu \). The equality \(\zeta =\omega \) follows in the same way. The inequalities \(\nu \le 0,\omega \le 0,\nu ^{[1]}\le 0,\omega ^{[1]}\le 0\) are consequences of the properties of regularly varying functions. Indeed, if at least one of these inequalities failed to hold, then the corresponding solution component (or quasiderivative), which is regularly varying of just this index, could not tend to zero, by the third item of Proposition 1, a contradiction. The strictness of the inequalities in the case when the coefficients are asymptotically equivalent to power functions is clear from the behavior of power functions as integrands in the improper integrals; see also the discussion in Sect. 3.

(ii) The formulae follow from the integral form (40) by applying (6).

(iv) Let \((x,y)\in \mathcal{SDS }\cap (\mathcal{RV }(\nu )\times \mathcal{RV }(\omega ))\). Then \(x(t)=t^\nu L_x(t)\) with \(L_x\in \mathcal{SV }\) and

$$\begin{aligned} \frac{tx^{\prime }(t)}{x(t)}=\nu +\frac{tL_x^{\prime }(t)}{L_x(t)}. \end{aligned}$$

Because of (7), where we set \(f=x\) and \(g=-x^{\prime }\) (note that \(g\) decreases due to the convexity of \(x\)), we get \(tx^{\prime }(t)/x(t)\rightarrow \nu \) as \(t\rightarrow \infty \). Thus \(tL_x^{\prime }(t)/L_x(t)\rightarrow 0\) as \(t\rightarrow \infty \). By the representation theorem, \(L_x\in \mathcal{NSV }\), i.e., \(x\in \mathcal{NRV }(\nu )\). Similarly, we obtain \(y\in \mathcal{NRV }(\omega )\).

(v) The first claim of this item follows by a closer examination of the proof of Theorem 2, and it is based on the implication (38). We thus get one direction of the equivalence (22). Concerning the opposite direction, we show only that (18) implies (15); the other implications follow similarly. Thus assume (18). Then we immediately get \(\nu <0\). Observing that \(\omega ^{[1]}=(\varrho +1)+\mu \nu \), and that both summands are negative by (18), we get \(\omega ^{[1]}<0\). To prove \(\nu ^{[1]}<0\), we use the identities (44) and the inequality \(\varrho +1<\delta -\beta -\beta (\sigma +1)/\lambda \), which is implied by (18):

$$\begin{aligned} \nu ^{[1]}&= \sigma +1+\lambda +\frac{\lambda }{\beta }(\omega ^{[1]}-\delta )\\&< \sigma +1+\lambda +\frac{\lambda }{\beta }\left(\delta -\beta -\frac{\beta }{\lambda }(\sigma +1) +\mu \nu -\delta \right)\\&= \sigma +1+\lambda -\lambda -(\sigma +1)+\frac{\lambda \mu }{\beta }\nu =\frac{\lambda \mu }{\beta }\nu <0. \end{aligned}$$

Similarly, with the use of \(\varrho +1<\delta -\beta \) and (44), we get

$$\begin{aligned} \lambda \omega &= \lambda +\frac{\lambda }{\beta } (\omega ^{[1]}-\delta )=\lambda +\frac{\lambda }{\beta }(\varrho +1+\mu \nu -\delta )\\&< \lambda +\frac{\lambda }{\beta }(\delta -\beta +\mu \nu -\delta ) =\frac{\lambda \mu }{\beta }\nu <0, \end{aligned}$$

which implies \(\omega <0\). \(\square \)