1 Introduction

In this paper, we prove a fixed point theorem for classes of functions taking values in a random normed space (RN-space) and present several applications of it to problems connected with Ulam-type stability.

The study of such stability was initiated by a question posed by Ulam in 1940 (cf., e.g., [48, 96]), asking whether an “approximate” solution of the functional equation of group homomorphisms must be “close” to an exact solution of the equation. The first answer was provided by Hyers [48], who considered the question for the Cauchy functional equation in Banach spaces and used what subsequently became known as the direct method: he defined the solution of the equation explicitly as a pointwise limit of a sequence of mappings constructed from the given approximate solution.

Later, Hyers’ result was generalized by Aoki [10], Rassias [84], Forti [37], Gajda [39], Gǎvruta [40] and others, using similar methods. We refer to the monographs [25, 49, 53] for more information on the history of and recent research directions related to the subject.

Further, in 2003, Radu [82] proposed a new method of retrieving the main result of Rassias [84], based on the fixed point alternative in [34]. The same fixed point method, together with the Banach Contraction Principle, has subsequently been used by many other authors to study the stability of a large variety of functional equations (see, for example, [21, 27, 33, 69, 74] and the references therein). A modification of it was proposed in [74, 75], where the author associated with the given approximate solution of a functional equation a suitable set of functions, made this set a complete metric space, and then applied the Banach theorem. Many new fixed point theorems have been established in the literature to investigate Ulam stability in spaces endowed with various generalized metrics, such as fuzzy metrics, quasi-metrics, partial metrics, G-metrics, D-metrics, b-metrics, 2-metrics, ultrametrics, modular metrics, and dislocated metrics; see, for instance, [5, 9, 46, 56, 64].

Some authors have also used a somewhat different approach, proposed for the first time in [18, 19] (see [21] for further references), which applies the fixed point result for function spaces proved in [20]. For instance, Bahyrycz and Olko [13] applied that approach in their study on stability of the general functional equation

$$\begin{aligned} \sum _{i=1}^m A_i f\left( \sum _{j=1}^n a_{ij} x_j\right) + A = 0 \end{aligned}$$
(1.1)

for functions f mapping a linear space X over a field \({\mathbb {K}}\) into a Banach space Y, where \(A \in Y\) and, for every \(i =1, 2, \ldots , m\), \(j = 1, 2, \ldots , n\), \(A_i \in {\mathbb {K}}^* := {\mathbb {K}} {\setminus } \{0\}\), and \(a_{ij} \in {\mathbb {K}}\). Let us mention that numerous functional equations that are well known in the literature are particular cases of (1.1) (see Sect. 6 for more details).

Bahyrycz and Olko [14] and Zhang [98] (see also [75]) published hyperstability results for Eq. (1.1), obtained by means of the same theorem from [20]. Related results can also be found in [16, 17].

The theory of probabilistic metric (or random normed) spaces was proposed by Menger [66] as a probabilistic extension of metric space theory (see also [87]). The theory was later developed by Šerstnev [89,90,91] (we also refer to the book [44]). It seems that Alsina [7] was the first to consider Ulam-type stability of functional equations in probabilistic normed spaces. Next, in 2008, Mihet and Radu [68] used the fixed point method to prove stability results for the Cauchy and Jensen functional equations in random normed spaces.

The stability of many other functional equations has also been investigated in random spaces. For example, Kim et al. [57] investigated the stability of the general cubic functional equation, Abdou et al. [1] studied the stability of quintic functional equations, Alshybani et al. [6] used the direct and fixed point methods to prove stability results for the additive–quadratic functional equation, and Pinelas et al. [76] used the direct and fixed point methods to show the stability of a new type of n-dimensional cubic functional equation. We also refer to the book of Cho et al. [28] for more details on that type of stability in random normed spaces.

In this paper, we first prove a general fixed point theorem for classes of functions taking values in a random normed space. This is the random normed space version of the fixed point theorems in [20, 22] (see also [24]), which turned out to be very useful in investigations of the stability of various functional equations. Next, we show how to use the theorem to study the Ulam stability of various functional equations in a single variable, and to investigate approximate eigenvalues and eigenvectors in spaces of functions taking values in RN-spaces.

Finally, using this fixed point theorem, we prove very general results on the stability of the functional equation

$$\begin{aligned} \sum _{i=1}^m A_i f\left( \sum _{j=1}^n a_{ij} x_j\right) = D(x_1,\ldots ,x_n) \end{aligned}$$
(1.2)

for functions f mapping a linear space X into a random normed space Y, with a given function \(D : X^n \rightarrow Y\). As special cases of this result, we obtain stability criteria for numerous functional equations in several variables in the framework of random normed spaces.

2 Preliminaries

In the sequel, we use the definitions and properties of the random normed space (RN-space) as in [7, 28, 44, 45, 64, 68, 87, 89,90,91]. However, for the convenience of the reader, we recall some of them.

Definition 2.1

A mapping \(g : {\mathbb {R}} \rightarrow [0, 1] \) is called a distribution function if it is left continuous, non-decreasing and

$$\begin{aligned} \sup _{t \in {\mathbb {R}}} g(t) = 1, \quad \inf _{t \in {\mathbb {R}}} g(t) = 0. \end{aligned}$$

The class of all distribution functions g with \(g(0) = 0\) is denoted by \({\mathcal {D}}_+\).

For any real number \(a \ge 0\), \(H_a\) is the element of \({\mathcal {D}}_+\) defined by

$$\begin{aligned} H_a(t) := \left\{ \begin{array}{ll} 0 &{} \quad \text{ if } t \le a ; \\ 1 &{} \quad \text{ if } t > a . \end{array} \right. \end{aligned}$$

Definition 2.2

[28] A mapping \(T : [0, 1] \times [0, 1] \rightarrow [0, 1] \) is a triangular norm (briefly a t-norm) if T satisfies the following conditions:

  (a) T is commutative and associative;

  (b) \(T(a, 1) = a\) for all \(a \in [0, 1];\)

  (c) \(T(a,b) \le T(c,d)\), whenever \(a \le c\) and \(b \le d.\)

Remark 2.3

Clearly, in general, a t-norm does not need to be continuous. Typical examples of continuous t-norms are as follows:

$$\begin{aligned} T_p(a,b) = ab, \quad T_M(a,b) = \min (a,b), \quad T_L(a,b) = \max (a+b-1,0). \end{aligned}$$

Moreover, in view of (b) and (c), for each t-norm T and \(x \in [0, 1]\), we have:

$$\begin{aligned} T(x, 1) = T(1, x) = x, \quad T(x, 0) = T(0, x) = 0. \end{aligned}$$
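The three standard t-norms above and the properties (a)–(c), together with the boundary identities just noted, can be checked numerically; the following Python sketch is purely an illustration (not part of the formal development) and verifies them on a few sample points:

```python
# Numerical illustration of the three standard continuous t-norms:
# T_p (product), T_M (minimum) and T_L (Lukasiewicz).

def t_p(a, b):
    return a * b

def t_m(a, b):
    return min(a, b)

def t_l(a, b):
    return max(a + b - 1.0, 0.0)

for T in (t_p, t_m, t_l):
    # (a) commutativity and associativity on sample points
    assert T(0.75, 0.5) == T(0.5, 0.75)
    assert T(T(0.75, 0.5), 0.25) == T(0.75, T(0.5, 0.25))
    # (b) 1 is the neutral element
    assert T(0.75, 1.0) == 0.75
    # (c) monotonicity: 0.5 <= 0.75 and 0.25 <= 0.5
    assert T(0.5, 0.25) <= T(0.75, 0.5)
    # boundary property T(x, 0) = 0 noted in the remark
    assert T(0.75, 0.0) == 0.0

print("all t-norm checks passed")
```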

Remark 2.4

(Cf. [28]) If T is a t-norm, \(m \in {\mathbb {N}}_0\) and \(a_i \in [0, 1]\) for \(i \in {\mathbb {N}}_0\), then we write

$$\begin{aligned} T_{i=m}^{m} a_i := a_m,\qquad T_{i=m}^{m+n} a_i := T(a_{m+n}, T_{i=m}^{m+n-1} a_i),\quad n \in {\mathbb {N}}. \end{aligned}$$

Since T is commutative and associative, it is easy to show by induction that

$$\begin{aligned} T_{i=m}^{m+n+l} a_i = T\big (T_{i=m}^{m+n} a_i,T_{i=m+n+1}^{m+n+l} a_i\big ), \quad m,n,l\in {\mathbb {N}}_0, l>0. \end{aligned}$$
(2.1)

Note also that, by (c), the sequence \((T_{i=m}^{m+n} a_i)_{n\in {\mathbb {N}}}\) is non-increasing for every \(m \in {\mathbb {N}}\) and therefore always convergent. So, for each \(m \in {\mathbb {N}}\), we may introduce the following notation:

$$\begin{aligned} T_{i=m}^{\infty } a_i := \lim _{n\rightarrow \infty } T_{i=m}^{m+n} a_i=\inf _{n\in {\mathbb {N}}}T_{i=m}^{n+m} a_i. \end{aligned}$$

A t-norm T can be extended in a unique way to an n-ary operation by setting:

$$\begin{aligned} T(a_1, \ldots , a_n) := T_{i=1}^n a_i. \end{aligned}$$

To shorten some long formulas, we will write

$$\begin{aligned} {\widehat{T}}(a) := T(a,a), \quad a\in [0, 1]. \end{aligned}$$

It is easy to show by induction on k (using the associativity and commutativity of T) that

$$\begin{aligned} T_{j =1}^k {\widehat{T}}\Big (T_{i=m}^{m+n} a_{ij} \Big ) = {\widehat{T}}\Big (T_{i=m}^{m+n} T_{j =1}^k a_{ij} \Big ) \end{aligned}$$
(2.2)

for every \(k,n,m \in {\mathbb {N}}_0\), \(k\ge 1\), and \(a_{ij}\in [0,1]\) with \(j=1,\ldots ,k\) and \(i=m,\ldots ,m+n\). We will need this property later.
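The recursion of Remark 2.4 defining \(T_{i=m}^{m+n} a_i\), the splitting identity (2.1), and the monotone limit \(T_{i=m}^{\infty } a_i\) can be illustrated with a short numerical sketch; here the product t-norm \(T_p\) is used as a concrete instance, and the sample values \(a_i\) are chosen arbitrarily:

```python
def T(a, b):
    return a * b  # product t-norm T_p

def iter_T(a, m, n):
    """Compute T_{i=m}^{m+n} a[i] via the recursion of Remark 2.4."""
    if n == 0:
        return a[m]
    return T(a[m + n], iter_T(a, m, n - 1))

# Sample values in (0, 1), chosen arbitrarily.
a = {i: 0.5 + 0.05 * i for i in range(10)}

# Splitting identity (2.1):
# T_{i=m}^{m+n+l} a_i = T(T_{i=m}^{m+n} a_i, T_{i=m+n+1}^{m+n+l} a_i).
m, n, l = 1, 3, 2
lhs = iter_T(a, m, n + l)
rhs = T(iter_T(a, m, n), iter_T(a, m + n + 1, l - 1))
assert abs(lhs - rhs) < 1e-12

# The sequence n -> T_{i=m}^{m+n} a_i is non-increasing (values lie in
# [0, 1]), so T_{i=m}^{infinity} a_i exists and equals the infimum.
seq = [iter_T(a, 0, n) for n in range(10)]
assert all(s >= t for s, t in zip(seq, seq[1:]))

print("iterated t-norm checks passed")
```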

Definition 2.5

Let Y be a real vector space, \(F : x \mapsto F_x\) a mapping from Y into \({\mathcal {D}}_+\), and T a continuous t-norm. We say that \((Y, F, T)\) is a random normed space (briefly RN-space) if the following conditions are satisfied:

  (1) \( F_x = H_0 \) if and only if \(x = 0\) (the null vector);

  (2) \( F_{\alpha x }(t) = F_x\left( \frac{t}{| \alpha |}\right) \) for all \( x \in Y, \ t > 0 \) and \( \alpha \not = 0;\)

  (3) \( F_{x+y}(t+s) \ge T(F_x(t), F_y(s)) \) for all \( x, y \in Y \) and \( t, s \ge 0.\)

For more information on RN-spaces, we refer to [41, 45, 65, 87, 89].

Example

Let \((Y, \Vert \ \Vert )\) be a normed space. Then both \((Y, F, T_M)\) and \((Y, F, T_p)\) are random normed spaces, where for every \(x \in Y\)

$$\begin{aligned} F_x(t) := \left\{ \begin{array}{ll} 0 &{} \quad \text{ if } t \le 0, \\ \frac{t}{t + \Vert x\Vert } &{} \quad \text{ if } t> 0. \end{array} \right. \end{aligned}$$

The same remains true if

$$\begin{aligned} F_x(t) := \left\{ \begin{array}{ll} 0 &{} \quad \text{ if } t \le 0, \\ e^{-\Vert x\Vert /t} &{} \quad \text{ if } t > 0. \end{array} \right. \end{aligned}$$
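A quick numerical spot-check of the axioms of Definition 2.5 for the first of these spaces, taking \(Y={\mathbb {R}}\) with the absolute value as the norm (an illustrative assumption made only for this sketch), may look as follows:

```python
# Spot-check of the RN-space axioms for Y = R (with |.| as the norm),
# F_x(t) = t / (t + |x|) and the minimum t-norm T_M.
import random

def F(x, t):
    return 0.0 if t <= 0 else t / (t + abs(x))

random.seed(0)
for _ in range(1000):
    x = random.uniform(-5, 5)
    y = random.uniform(-5, 5)
    t = random.uniform(0.01, 5)
    s = random.uniform(0.01, 5)
    alpha = random.uniform(0.1, 5)
    # axiom (2): F_{alpha x}(t) = F_x(t / |alpha|)
    assert abs(F(alpha * x, t) - F(x, t / alpha)) < 1e-12
    # axiom (3): F_{x+y}(t+s) >= T_M(F_x(t), F_y(s)),
    # which follows from the triangle inequality for |.|
    assert F(x + y, t + s) >= min(F(x, t), F(y, s)) - 1e-12

print("RN-space axioms verified on random samples")
```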

Definition 2.6

(Cf., e.g., [41, 65]) Let \((Y, F, T)\) be an RN-space.

  (1) A sequence \((x_n)_{n\in {\mathbb {N}}}\) in Y is said to converge (or to be convergent) to \(x \in Y\) (which we denote by: \( \lim _{n \rightarrow +\infty } x_n = x)\) if

    $$\begin{aligned} \lim _{n \rightarrow +\infty } F_{x_n - x}(t) = 1,\quad t>0, \end{aligned}$$

    i.e., for each \(\epsilon > 0\) and each \(t > 0\), there exists an \( N_{\epsilon , t} \in {\mathbb {N}}\) such that \(F_{x_n - x}(t) > 1 - \epsilon \), for all \(n \ge N_{\epsilon , t}\).

  (2) A sequence \((x_n)_{n\in {\mathbb {N}}}\) in Y is said to be an M-Cauchy sequence if

    $$\begin{aligned} \lim _{n,m \rightarrow +\infty } F_{x_n - x_m}(t)= 1,\quad t > 0, \end{aligned}$$

    i.e., for each \( \epsilon > 0\), and each \(t > 0\), there exists \(N_{\epsilon , t} \in {\mathbb {N}}\) such that \(F_{x_n - x_m}(t) > 1 - \epsilon \), for all \(N_{\epsilon ,t} \le n < m\).

  (3) A sequence \((x_n)_{n\in {\mathbb {N}}}\) in Y is said to be a G-Cauchy sequence if

    $$\begin{aligned} \lim _{n \rightarrow +\infty } F_{x_n - x_{n+k}}(t)= 1,\quad t > 0,\ k\in {\mathbb {N}}, \end{aligned}$$

    i.e., for every \(\epsilon > 0\), \(k\in {\mathbb {N}}\) and \(t > 0\), there exists an \(N_{\epsilon , t,k} \in {\mathbb {N}}\) such that \(F_{x_{n} - x_{n+k}}(t) > 1 - \epsilon \) for all \(n \ge N_{\epsilon ,t,k}\).

  (4) \((Y, F, T)\) is said to be G-complete (M-complete, respectively) if every G-Cauchy (M-Cauchy, resp.) sequence in Y is convergent to some point in Y.

Remark 2.7

Since every M-Cauchy sequence is also G-Cauchy, it is easily seen that each G-complete RN-space is M-complete.

3 A general fixed point theorem in RN-spaces

Our first main result is a very general RN-space version of a fixed point theorem in [20]; actually, we follow the approach from [22] (see also [24]). We provide some applications of it in the next sections.

In what follows, X is a non-empty set, \((Y, F, T)\) is an RN-space, \({\mathbb {N}}_0:={\mathbb {N}}\cup \{0\}\) and \({\mathbb {R}}_+ := [0, +\infty )\) (the set of non-negative real numbers). If U and V are nonempty sets, then as usual \(U^V\) denotes the family of all mappings from V to U. If \(F \in U^U\), then \(F^n\) stands for the n-th iterate of F, i.e., \(F^0(x)= x\) and \(F^{n+1}(x)= F(F^n(x))\) for \(x \in U\) and \(n \in {\mathbb {N}}_0\). The space \(Y^X\) is endowed with the coordinatewise operations, so that it is a linear space.

To simplify some expressions, for given \(\phi \in {\mathcal {D}}_+^X\) and \(x\in X\), we write \(\phi _x\) to mean \(\phi (x)\), i.e.,

$$\begin{aligned} \phi _x(t) :=\phi (x)(t),\quad x \in X,\ t \in {\mathbb {R}}. \end{aligned}$$

For every \(\varphi , \psi \in {\mathcal {D}}_+\) the inequality \(\varphi \le \psi \) means that \(\varphi (t) \le \psi (t)\) for each \(t > 0\). We use this abbreviation to simplify formulas whenever explicit mention of the variable t is unnecessary.

Definition 3.1

Let \(\Lambda : {\mathcal {D}}_+^X \rightarrow {\mathcal {D}}_+^X\) and \(J : Y^X \rightarrow Y^X\) be given. We say that the operator J is \(\Lambda \)-contractive if, for every \(\xi , \eta \in Y^X\) and every \(\phi \in {\mathcal {D}}_+^X\),

$$\begin{aligned} \bigg (\forall _{x\in X}\ F_{(\xi -\eta )(x)}\ge \phi _x \bigg ) \Longrightarrow \bigg (\forall _{x\in X}\ F_{(J\xi - J\eta )(x)} \ge (\Lambda \phi )_x \bigg ). \end{aligned}$$

The convergence in \({\mathcal {D}}_+\) will mean the pointwise convergence. Therefore, we say that a sequence \((\psi _n)_{n\in {\mathbb {N}}}\) in \({\mathcal {D}}_+\) converges to some \(\psi \in {\mathcal {D}}_+\) if

$$\begin{aligned} \lim _{n \rightarrow \infty } \psi _n(t)= \psi (t),\quad t >0. \end{aligned}$$

Hence, the convergence of \((\psi _n)_{n\in {\mathbb {N}}}\) to \(H_0\) means that

$$\begin{aligned} \lim _{n \rightarrow \infty } \psi _n(t)=1,\quad t > 0. \end{aligned}$$

We also need the following hypothesis on \(\Lambda : {\mathcal {D}}_+^X \rightarrow {\mathcal {D}}_+^X\).

\(({\mathcal {C}}_0)\):

If \((g_n)_{n\in {\mathbb {N}}}\) is a sequence in \(Y^X\) such that the sequence \(\big (F_{g_n(x)}\big )_{n\in {\mathbb {N}}}\) converges to \(H_0\) for every \(x\in X\), then the sequence \(\big ((\Lambda F_{g_n})_x\big )_{n\in {\mathbb {N}}}\) converges to \(H_0\) for every \(x\in X\), where \(F_{g_n} \in {\mathcal {D}}_+^X\) is given by \(F_{g_n}(x) := F_{g_n(x)}\) for \(x \in X\).

Remark 3.2

Let \(\chi _0\in {\mathcal {D}}_+^X\) be given by: \(\chi _0(x)=H_0\) for \(x\in X\). Then \(({\mathcal {C}}_0)\) actually means the continuity of \(\Lambda \) at the point \(\chi _0\) (with respect to the pointwise convergence topologies in \({\mathcal {D}}_+^X\) and \({\mathcal {D}}_+\)) and the property: \(\Lambda \chi _0=\chi _0\).

Let \(\nu \in {\mathbb {N}}\), \(\xi _1,\ldots ,\xi _{\nu }:X\rightarrow X\), and \(L_1,\ldots ,L_{\nu }:X\rightarrow (0,\infty )\) be fixed. A natural example of an operator \(\Lambda \) fulfilling hypothesis \(({\mathcal {C}}_0)\) can be defined by

$$\begin{aligned} (\Lambda \delta )_x(t) :=T_{i=1}^{\nu } \delta _{\xi _i(x)}\left( \frac{t}{\nu L_i(x)}\right) , \quad \delta \in {\mathcal {D}}_+^X, \ x \in X, \ t>0. \end{aligned}$$
(3.1)

We refer to Remark 3.10 for further comments on this situation.
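The following sketch illustrates the operator defined by (3.1) for \(\nu =2\) on \(X={\mathbb {R}}\), with the minimum t-norm; the mappings \(\xi _i\) and the functions \(L_i\) below are hypothetical choices made only for this illustration:

```python
# Sketch of the operator Lambda from (3.1) for nu = 2 on X = R, with the
# minimum t-norm T_M and illustrative (hypothetical) choices
# xi_1(x) = 2x, xi_2(x) = -x and L_1(x) = L_2(x) = 1/2.

NU = 2
xi = [lambda x: 2 * x, lambda x: -x]
L = [lambda x: 0.5, lambda x: 0.5]

def T(a, b):
    return min(a, b)  # T_M

def Lam(delta):
    """(Lambda delta)_x(t) = T_{i=1}^{nu} delta_{xi_i(x)}(t / (nu * L_i(x)))."""
    def new_delta(x):
        def at(t):
            vals = [delta(xi[i](x))(t / (NU * L[i](x))) for i in range(NU)]
            out = vals[0]
            for v in vals[1:]:
                out = T(out, v)
            return out
        return at
    return new_delta

# A sample element of D_+^X: delta_x(t) = t / (t + |x|) for t > 0.
def delta(x):
    return lambda t: 0.0 if t <= 0 else t / (t + abs(x))

# With nu * L_i(x) = 1 the time variable is unchanged here:
val = Lam(delta)(3.0)(1.0)  # = min(delta_{6}(1), delta_{-3}(1))
assert abs(val - min(1 / 7, 1 / 4)) < 1e-12
```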

In what follows, \(\Omega \) stands for the family of all real sequences \((\omega _n)_{n \in {\mathbb {N}}_0}\) with \(\omega _n \in (0,1)\) for each \(n\in {\mathbb {N}}_0\) (for instance, \(\omega _n = 2^{-n-1}\)) and

$$\begin{aligned} \sum _{i=0}^{\infty }\omega _i=1. \end{aligned}$$

Let us first state the following lemma, which will be used in the sequel.

Lemma 3.3

Let \(\Lambda : {\mathcal {D}}_+^X \rightarrow {\mathcal {D}}_+^X\) and \(\epsilon : X \rightarrow {\mathcal {D}}_+\) be arbitrary. Then,  for every \(x \in X,\) \(k \in {\mathbb {N}}_0,\) \(\omega \in \Omega ,\) and \(t > 0,\) the limits

$$\begin{aligned} \sigma _x^k(t)&:= \lim _{j \rightarrow \infty }T_{i=k}^{k+j-1} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{j}\right) , \end{aligned}$$
(3.2)
$$\begin{aligned} ^{\omega }\!\sigma _x^k(t)&:= \lim _{j \rightarrow \infty }T_{i=k}^{k+j-1} (\Lambda ^i \epsilon )_x \big (\omega _{i-k}t\big ) \end{aligned}$$
(3.3)

exist in \({\mathbb {R}}\) and

$$\begin{aligned} \sigma _x^k(t)&= \inf _{j \in {\mathbb {N}}}T_{i=k}^{k+j-1} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{j}\right) , \end{aligned}$$
(3.4)
$$\begin{aligned} ^{\omega }\!\sigma _x^k(t)&= \inf _{j \in {\mathbb {N}}}T_{i=k}^{k+j-1} (\Lambda ^i \epsilon )_x \big (\omega _{i-k}t\big ) . \end{aligned}$$
(3.5)

Proof

Fix \(k \in {\mathbb {N}}_0\), \(x \in X\) and \(t > 0\) and write

$$\begin{aligned} \tau _m(x,t,k):=T_{i=k}^{k+m-1} (\Lambda ^i \epsilon )_x\left( \frac{t}{m}\right) ,\quad m \in {\mathbb {N}}. \end{aligned}$$

Since \((\Lambda ^i \epsilon )_x\in {\mathcal {D}}_+\), it is a non-decreasing function for each \(i \in {\mathbb {N}}_0\). Hence,

$$\begin{aligned} (\Lambda ^i \epsilon )_x\left( \frac{t}{m}\right) \ge (\Lambda ^i \epsilon )_x\left( \frac{t}{m+1}\right) . \end{aligned}$$

Consequently,

$$\begin{aligned} \tau _m(x, t, k)&=T_{i=k}^{k+m-1} (\Lambda ^i \epsilon )_x\left( \frac{t}{m}\right) \ge T_{i=k}^{k+m-1} (\Lambda ^i \epsilon )_x\left( \frac{t}{m+1}\right) \\&=T\Bigg (1,T_{i=k}^{k+m-1} (\Lambda ^i \epsilon )_x\left( \frac{t}{m+1}\right) \Bigg )\\&\ge T\Bigg ((\Lambda ^{k+m} \epsilon )_x\left( \frac{t}{m+1}\right) ,T_{i=k}^{k+m-1} (\Lambda ^i \epsilon )_x\left( \frac{t}{m+1}\right) \Bigg )\\&=T_{i=k}^{k+m} (\Lambda ^i \epsilon )_x\left( \frac{t}{m+1}\right) =\tau _{m+1}(x,t,k), \end{aligned}$$

whence the sequence \((\tau _m(x,t,k))_{m \in {\mathbb {N}}}\) is non-increasing and, therefore, for every \(k \in {\mathbb {N}}_0\), \(x \in X\) and \(t > 0\), the following limit exists

$$\begin{aligned} \sigma ^k_x(t)=\lim _{m \rightarrow \infty }\;\tau _m(x,t,k) = \inf _{m \in {\mathbb {N}}}\;\tau _m(x,t,k). \end{aligned}$$
(3.6)

Next, fix \(\omega \in \Omega \), \(k \in {\mathbb {N}}_0\), \(x \in X\) and \( t >0\), and write

$$\begin{aligned} \rho _m(x, t, k) :=T_{i=k}^{k+m-1} (\Lambda ^i \epsilon )_x\big (\omega _{i - k} t\big ),\quad m \in {\mathbb {N}}. \end{aligned}$$

Then,

$$\begin{aligned} \rho _m(x, t, k)&= T_{i=k}^{k+m-1} (\Lambda ^i \epsilon )_x\big (\omega _{i - k} t\big ) \\&=T\Big (1,T_{i=k}^{k+m-1} (\Lambda ^i \epsilon )_x\big (\omega _{i - k} t\big )\Big ) \\&\ge T\Big ((\Lambda ^{k+m} \epsilon )_x\big (\omega _{m} t\big ),T_{i=k}^{k+m-1} (\Lambda ^i \epsilon )_x\big (\omega _{i - k} t\big )\Big ) \\&=T_{i=k}^{k+m} (\Lambda ^i \epsilon )_x\big (\omega _{i - k} t\big )=\rho _{m+1}(x,t,k). \end{aligned}$$

This means that the sequence \((\rho _m(x,t,k))_{m \in {\mathbb {N}}}\) is non-increasing and, therefore, the following limit exists:

$$\begin{aligned} ^\omega \!\sigma _x^k (t) := \lim _{m\rightarrow \infty }\;\rho _m(x,t,k) = \inf _{m \in {\mathbb {N}}}\;\rho _m(x,t,k). \end{aligned}$$

\(\square \)

Remark 3.4

Fix \(x \in X\) and \(k \in {\mathbb {N}}_0\). If \(T=T_M\) in Lemma 3.3, then

$$\begin{aligned} \sigma _x^k(t)&:= \lim _{m\rightarrow \infty }T_{i=k}^{k+m-1} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{m}\right) \\&= \lim _{m\rightarrow \infty }\inf _{i=1,\ldots ,m} (\Lambda ^{k+i-1} \epsilon )_x \left( \frac{t}{m}\right) \\&= \inf _{m\in {\mathbb {N}}} (\Lambda ^{k+m-1} \epsilon )_x \left( \frac{t}{m}\right) . \end{aligned}$$

If \(T=T_p\), then (3.4) implies that

$$\begin{aligned} \sigma _x^k(t) =\inf _{m\in {\mathbb {N}}} \prod _{i=1}^m (\Lambda ^{k+i-1} \epsilon )_x \left( \frac{t}{m}\right) . \end{aligned}$$

Analogous equalities are valid for \(\,^{\omega }\!\sigma _x^k\) with any \(\omega \in \Omega \).

In the sequel, given \(\Lambda : {\mathcal {D}}_+^X \rightarrow {\mathcal {D}}_+^X\) and \(\epsilon : X \rightarrow {\mathcal {D}}_+\), we write

$$\begin{aligned} {\underline{\sigma }}_x^k (t) := \sup _{\omega \in \Omega } \,^{\omega }\!\sigma _x^{\,k}(t),\quad {\widehat{\sigma }}_x^k(t) :=\max \{\sigma _x^k(t),{\underline{\sigma }}_x^k(t)\} \end{aligned}$$
(3.7)

for every \(x \in X\), \(k \in {\mathbb {N}}_0\) and \(t > 0\), where \(\sigma _x^k(t)\) and \(^{\omega }\!\sigma _x^k(t)\) are defined by (3.2) and (3.3).

Theorem 3.5

Let \(\Lambda : {\mathcal {D}}_+^X \rightarrow {\mathcal {D}}_+^X,\) \(\epsilon : X \rightarrow {\mathcal {D}}_+,\) \(J : Y^X \rightarrow Y^X\) and \(f : X \rightarrow Y\) be given. Assume that \(\Lambda \) satisfies hypothesis \(({\mathcal {C}}_0),\) J is \(\Lambda \)-contractive, 

$$\begin{aligned} F_{(Jf-f)(x)} \ge \epsilon _x, \quad x \in X, \end{aligned}$$
(3.8)

and one of the following three conditions holds.

  (i) \((Y, F, T)\) is M-complete and

    $$\begin{aligned} \lim _{k \rightarrow +\infty } \;\inf _{j\in {\mathbb {N}}_0}T_{i=k}^{k+j} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{j+1}\right) =1, \quad x \in X,\ t>0. \end{aligned}$$
    (3.9)

  (ii) \((Y, F, T)\) is M-complete and for each \(k\in {\mathbb {N}}\) there is a sequence \((\omega ^k_n)_{n\in {\mathbb {N}}_0}\in \Omega \) with

    $$\begin{aligned} \lim _{k \rightarrow +\infty } \;\inf _{j\in {\mathbb {N}}_0}T_{i=k}^{k+j} (\Lambda ^{i} \epsilon )_x \big (\omega ^k_{i-k}t\big ) =1, \quad x \in X,\ t>0. \end{aligned}$$
    (3.10)

  (iii) \((Y, F, T)\) is G-complete and \(\lim _{n \rightarrow +\infty }(\Lambda ^{n} \epsilon )_x = H_0\) for \(x\in X,\) i.e.,

    $$\begin{aligned} \lim _{n \rightarrow +\infty } \,(\Lambda ^{n} \epsilon )_x (t) =1, \quad x \in X,\ t>0. \end{aligned}$$
    (3.11)

Then,  for every \(x \in X,\) the limit

$$\begin{aligned} \psi (x) := \lim _{n \rightarrow +\infty } (J^n f)(x) \end{aligned}$$
(3.12)

exists in Y and \(\psi \in Y^X\) thus defined is a fixed point of J with

$$\begin{aligned} F_{(\psi - J^k f)(x)}(t) \ge \sup _{\alpha \in (0,1)}\; {\widehat{\sigma }}_x^k(\alpha t), \quad k \in {\mathbb {N}}_0, \ x \in X,\ t > 0. \end{aligned}$$
(3.13)

Moreover,  in case (i) or (ii) holds,  \(\psi \) is the unique fixed point of J such that there exists \(\alpha \in (0,1)\) with

$$\begin{aligned} F_{(\psi - J^k f)(x)}(t) \ge {\widehat{\sigma }}_x^k(\alpha t),\quad k\in {\mathbb {N}}_0,\ x \in X,\ t>0. \end{aligned}$$
(3.14)

Proof

First we show by induction that, for every \(n \in {\mathbb {N}}_0\),

$$\begin{aligned} F_{(J^{n+1}f - J^nf)(x)} \ge (\Lambda ^n \epsilon )_x, \quad x \in X. \end{aligned}$$
(3.15)

The case \(n = 0\) is just (3.8). So, fix \(n \in {\mathbb {N}}_0\) satisfying (3.15). Then, using the \(\Lambda \)-contractivity of J and the inductive assumption, we obtain

$$\begin{aligned} F_{(J^{n+2}f - J^{n+1}f)(x)}\ge \big (\Lambda (\Lambda ^n \epsilon )\big )_x = (\Lambda ^{n+1}\epsilon )_x, \quad x \in X. \end{aligned}$$

Thus, we have proved that (3.15) holds for every \(n \in {\mathbb {N}}_0\). Consequently, for every \(n \in {\mathbb {N}}_0\), \(m \in {\mathbb {N}}\), \(x \in X\) and \(t > 0\) we have

$$\begin{aligned} F_{(J^{n+m} f - J^n f)(x)}(t)&= F_{\sum _{i=0}^{m-1} (J^{n+i+1} f - J^{n+i}f)(x)}(t) \nonumber \\&\ge T_{i=0}^{m-1} F_{(J^{n+i+1}f - J^{n+i}f)(x)}\left( \frac{t}{m}\right) \nonumber \\&\ge T_{i=0}^{m-1} (\Lambda ^{n+i} \epsilon )_x \left( \frac{t}{m}\right) =T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{m}\right) , \end{aligned}$$
(3.16)

and analogously, as \(\omega _{m-1}t< \sum _{i=m-1}^{\infty }\omega _{i}t\) for every \((\omega _n)_{n\in {\mathbb {N}}_0}\in \Omega \),

$$\begin{aligned} F_{(J^{n+m} f - J^n f)(x)}(t)&\ge T_{i=1}^{m} F_{(J^{n+i}f - J^{n+i-1}f)(x)}\big (\omega _{i-1}t\big ) \nonumber \\&\ge T_{i=1}^{m} (\Lambda ^{n+i-1} \epsilon )_x \big (\omega _{i-1}t\big )\nonumber \\&=T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_x \big (\omega _{i-n}t\big ),\quad (\omega _n)_{n \in {\mathbb {N}}_0}\in \Omega . \end{aligned}$$
(3.17)

Now, we show that the limit (3.12) exists in Y for every \(x\in X\). First consider the case of (i). Then, by (3.16), for all \(k, m\in {\mathbb {N}}\), \(n \in {\mathbb {N}}_0\), \(x\in X\) and \(t>0\),

$$\begin{aligned} F_{(J^{n+k} f - J^{n+m}f)(x)}(2t)&\ge T\Big (F_{(J^{n+k} f - J^{n} f)(x)}(t),F_{(J^n f - J^{n+m}f)(x)}(t)\Big )\\&\ge T\Bigg (T_{i=n}^{n+k-1} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{k}\right) ,T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{m}\right) \Bigg ). \end{aligned}$$

Consequently, by (c),

$$\begin{aligned}&\inf _{k,m\in {\mathbb {N}}} F_{(J^{n+k} f - J^{n+m}f)(x)}(2t) \\&\quad \ge \inf _{k,m\in {\mathbb {N}}}T\Bigg (T_{i=n}^{n+k-1} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{k}\right) ,T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{m}\right) \Bigg ) \\&\quad \ge T\Bigg (\inf _{k\in {\mathbb {N}}_0}T_{i=n}^{n+k} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{k+1}\right) ,\inf _{m\in {\mathbb {N}}_0}T_{i=n}^{n+m} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{m+1}\right) \Bigg ). \end{aligned}$$

Hence, (3.9), (b) and the continuity of T at (1, 1) yield

$$\begin{aligned} \lim _{n \rightarrow \infty } \;\inf _{k,m\in {\mathbb {N}}}F_{(J^{n+k} f - J^{n+m}f)(x)}(t) = 1, \quad x \in X,\ t > 0. \end{aligned}$$

If (ii) is valid, then (3.17) implies that, for all \(k, m\in {\mathbb {N}}\), \(n \in {\mathbb {N}}_0\), \(x\in X\) and \(t>0\),

$$\begin{aligned} F_{(J^{n+k} f - J^{n+m}f)(x)}(2t)&\ge \;T\Big (F_{(J^{n+k} f - J^{n} f)(x)}(t),F_{(J^n f - J^{n+m}f) (x)}(t)\Big )\\&\ge T\Big (T_{i=n}^{n+k-1} (\Lambda ^{i} \epsilon )_x \big (\omega ^k_{i-n}t\big ),T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_x \big (\omega ^m_{i-n}t\big )\Big ), \end{aligned}$$

and consequently, by (c),

$$\begin{aligned}&\inf _{k,m\in {\mathbb {N}}}F_{(J^{n+k} f - J^{n+m}f) (x)}(2t) \\&\quad \ge \inf _{k,m\in {\mathbb {N}}}T\Big (T_{i=n}^{n+k-1} (\Lambda ^{i} \epsilon )_x \big (\omega ^k_{i-n} t\big ),T_{i=n}^{n+ m-1} (\Lambda ^{i} \epsilon )_x \big (\omega ^m_{i-n}t\big )\Big )\nonumber \\&\quad \ge T \Big ( \inf _{k \in {\mathbb {N}}_0}T_{i=n}^{k+n} (\Lambda ^{i} \epsilon )_x \big (\omega ^k_{i-n}t\big ),\inf _{m\in {\mathbb {N}}_0}T_{i=n}^{m+n} (\Lambda ^{i} \epsilon )_x \big (\omega ^m_{i-n}t\big )\Big ). \end{aligned}$$

Hence, (3.10), (b) and the continuity of T at (1, 1) yield

$$\begin{aligned} \lim _{n \rightarrow \infty } \;\inf _{k,m\in {\mathbb {N}}}F_{(J^{n+k} f - J^{n+m}f)(x)}(t) = 1, \quad x \in X,\ t > 0. \end{aligned}$$

Thus we have proved that, for every \(x \in X\), \(((J^n f)(x))_{n \in {\mathbb {N}}}\) is an M-Cauchy sequence and, as \((Y, F, T)\) is M-complete, the limit (3.12) exists.

In the case of (iii), in view of (3.16),

$$\begin{aligned} F_{(J^{n+m} f - J^n f)(x)}(t) \ge T_{i=0}^{m-1} (\Lambda ^{n+i} \epsilon )_x \left( \frac{t}{m}\right) , \quad x \in X, \ t>0, \ n, m \in {\mathbb {N}}, \end{aligned}$$

whence (3.11) and the continuity of T at (1, 1) imply that

$$\begin{aligned} \lim _{n \rightarrow +\infty } F_{J^n f(x) - J^{n+m}f(x)}(t) = 1, \quad x \in X, \ t>0, \ m \in {\mathbb {N}}. \end{aligned}$$

Thus, for every \(x \in X\), \(((J^n f)(x))_{n \in {\mathbb {N}}}\) is a G-Cauchy sequence. As \((Y, F, T)\) is G-complete, the limit (3.12) exists.

Now, we prove (3.13). Note that, in view of Lemma 3.3, \({\widehat{\sigma }}_x^k( t)\) is well defined by (3.7) for every \(k\in {\mathbb {N}}_0\), \(x \in X\) and \(t>0\).

Fix \(t>0\), \(x\in X\), \(\alpha \in (0,1)\) and \(n \in {\mathbb {N}}_0\). First, we show that

$$\begin{aligned} F_{(\psi - J^n f)(x)}(t) \ge \sigma _x^n(\alpha t). \end{aligned}$$
(3.18)

To this end, observe that (3.16) implies

$$\begin{aligned} F_{(\psi - J^n f)(x)} (t)&\ge T\Big (F_{(\psi - J^{n+m}f)(x)}\big ((1-\alpha ) t\big ), F_{(J^{n+m}f - J^{n}f)(x)}\big (\alpha t\big )\Big )\nonumber \\&\ge T\bigg (F_{(\psi - J^{n+m}f)(x)}\big ((1-\alpha ) t\big ), T_{i=n}^{n+m-1} (\Lambda ^i \epsilon )_x\left( \frac{\alpha t}{m}\right) \bigg ) \end{aligned}$$
(3.19)

for every \(m \in {\mathbb {N}}\). Hence, letting \(m\rightarrow +\infty \) and using (3.12) and the continuity of T at the point \(\big (1,\sigma _x^n(\alpha t)\big )\), we obtain (3.18).

Next, we show that

$$\begin{aligned} F_{(\psi - J^n f)(x)}(t) \ge {\underline{\sigma }}_x^n(\alpha t). \end{aligned}$$
(3.20)

So, fix \(\omega \in \Omega \) and note that (3.17) implies

$$\begin{aligned} F_{(\psi - J^n f)(x)} (t)&\ge T\Big (F_{(\psi - J^{n+m}f)(x)}\big ((1-\alpha ) t\big ), F_{(J^{n+m}f - J^{n}f)(x)}\big (\alpha t\big )\Big )\nonumber \\&\ge T\Big (F_{(\psi - J^{n+m}f)(x)}\big ((1-\alpha ) t\big ), T_{i=n}^{n+m-1} (\Lambda ^i \epsilon )_x\big (\omega _{i-n}\alpha t\big )\Big ) \end{aligned}$$
(3.21)

for every \(m \in {\mathbb {N}}\). Hence, letting \(m\rightarrow +\infty \) and using (3.12) and the continuity of T at the point \(\big (1,\,^{\omega }\sigma ^n_x(\alpha t)\big )\), we obtain

$$\begin{aligned} F_{(\psi - J^n f)(x)}(t) \ge \,^{\omega }\sigma _x^ n(\alpha t). \end{aligned}$$
(3.22)

Clearly, (3.22) implies (3.20), which with (3.18) yields (3.13).

Furthermore, by the \(\Lambda \)-contractivity of J,

$$\begin{aligned} F_{(J\psi -J^{n+1}f )(x)}(t)&\ge \big (\Lambda F_{\psi - J^n f}\big )_x(t),\quad t > 0, \ x \in X. \end{aligned}$$
(3.23)

Since (3.12) means that

$$\begin{aligned} \lim _{n \rightarrow +\infty } F_{(\psi -J^{n}f )(x)}= H_0, \quad x \in X, \end{aligned}$$

by \(({\mathcal {C}}_0)\) we have

$$\begin{aligned} \lim _{n \rightarrow +\infty } \big (\Lambda F_{\psi - J^{n} f}\big )_x=H_0, \quad x \in X. \end{aligned}$$

Hence, on account of (3.23),

$$\begin{aligned} \lim _{n \rightarrow +\infty } F_{(J\psi - J^{n+1} f)(x)}=H_0, \quad x \in X, \end{aligned}$$

and consequently

$$\begin{aligned} J\psi (x) = \lim _{n \rightarrow +\infty } (J^{n+1} f)(x)= \psi (x), \quad x \in X. \end{aligned}$$

Thus, we have shown that \(\psi \) is a fixed point of J.

It remains to prove the statements on the uniqueness of \(\psi \). So, assume that (i) or (ii) holds and \(\psi _1, \psi _2 \in Y^X\) are two fixed points of J such that

$$\begin{aligned} F_{(\psi _j - J^kf)(x)}(t) \ge {\widehat{\sigma }}_x^k(\alpha _j t), \quad k \in {\mathbb {N}}_0, \ x \in X,\ t > 0, j=1,2, \end{aligned}$$

with some \(\alpha _1, \alpha _2\in (0,1)\). Then, for all \(x \in X\), \(t > 0\) and \(k \in {\mathbb {N}}_0\), we get

$$\begin{aligned} F_{(\psi _1 - \psi _2)(x)}(2t)\ge&\; T\big (F_{(\psi _1- J^kf)(x)}(t), F_{(J^kf - \psi _2)(x)}(t)\big )\nonumber \\ \ge&\; T\big ({\widehat{\sigma }}_x^k(\alpha _1\,t), {\widehat{\sigma }}_x^k(\alpha _2\,t)\big ). \end{aligned}$$
(3.24)

Note also that, in view of (3.4) and (3.5), each of the conditions (3.9) and (3.10) implies

$$\begin{aligned} \lim _{k \rightarrow +\infty } \;{\widehat{\sigma }}_x^k(t) =1, \quad x \in X,\ t>0. \end{aligned}$$
(3.25)

Hence, letting \(k\rightarrow \infty \) in (3.24) and using the continuity of T at the point (1, 1), we finally obtain that

$$\begin{aligned} F_{(\psi _1 - \psi _2)(x)}=H_0, \quad x \in X, \end{aligned}$$
(3.26)

which means that \(\psi _1 = \psi _2\). \(\square \)

Remark 3.6

If, for a given \(k\in {\mathbb {N}}_0\) and \(x\in X\), the function \({\widehat{\sigma }}_x^k\) is left continuous (which is not necessarily the case, because this depends on the forms of \(\epsilon \) and T), then it is easily seen that (3.13) can be replaced by

$$\begin{aligned} F_{(\psi - J^k f)(x)}(t) \ge {\widehat{\sigma }}_x^k( t),\quad t>0. \end{aligned}$$

Otherwise, for every fixed \(x\in X\) and \(k\in {\mathbb {N}}_0\), the inequality in (3.13) can of course be replaced by

$$\begin{aligned} F_{(\psi - J^k f)(x)}(t) \ge {\widehat{\sigma }}_x^k(\alpha _{x,k}\; t), \quad t > 0, \end{aligned}$$

with any fixed \(\alpha _{x,k}\in (0,1)\).

Remark 3.7

The assumptions (i) and (ii) in Theorem 3.5 look nearly the same and (i) is a bit simpler than (ii). However, as we will see below, in some situations (3.10) (with some sequence \((\omega ^k_n)_{n\in {\mathbb {N}}_0}\in \Omega \)) and (3.11) are fulfilled, while (3.9) is not.

Namely, let \(T=T_M\) and \(\Lambda \) have the following simple form:

$$\begin{aligned} (\Lambda \delta )_x (t)=\delta _{ax}(bt),\quad x\in X,\ t>0,\ \delta \in {\mathcal {D}}_+^X, \end{aligned}$$

with some \(a,b\in (0,\infty )\) (cf. the proof of Corollary 6.4). Then,

$$\begin{aligned} (\Lambda ^n \delta )_x (t)=\delta _{a^nx}(b^nt),\quad x\in X,\ t>0,\ \delta \in {\mathcal {D}}_+^X,\ n\in {\mathbb {N}}. \end{aligned}$$

Further, assume that X is a normed space, \(p\in [0,\infty )\), \(v\in {\mathbb {R}}_+\) and

$$\begin{aligned} \epsilon _x (t)=\frac{t}{t+v\Vert x\Vert ^p},\quad x\in X,\ t>0. \end{aligned}$$

Write \(e_0:=ba^{-p}\). Clearly, for every \(n\in {\mathbb {N}}\), \(x\in X\) and \(t>0\),

$$\begin{aligned} (\Lambda ^n \epsilon )_x (t)=\epsilon _{a^nx}(b^nt)=\frac{b^nt}{b^nt+v\Vert a^nx\Vert ^p}=\epsilon _{x}\left( \frac{b^nt}{a^{np}}\right) =\epsilon _{x}\big (e_0^nt\big ), \end{aligned}$$
(3.27)

and therefore

$$\begin{aligned} T_{i=k}^{k+j-1} (\Lambda ^{i} \epsilon )_x (t)=\inf _{i\in \{0,\ldots ,j-1\}} \epsilon _{x}\big (e_0^{k+i} t\big ),\quad j\in {\mathbb {N}}. \end{aligned}$$
(3.28)

Assume that \(e_0> 1\). Then, by (3.27),

$$\begin{aligned} \lim _{n \rightarrow +\infty } \,(\Lambda ^{n} \epsilon )_x (t)= \lim _{n \rightarrow +\infty } \,\epsilon _{x}\big (e_0^nt\big )=1, \quad x \in X,\ t>0, \end{aligned}$$
(3.29)

which means that (3.11) holds. Further, for every \(k\in {\mathbb {N}}_0\), (3.28) yields

$$\begin{aligned} T_{i=k}^{k+j-1} (\Lambda ^{i} \epsilon )_x (t)=\epsilon _{x}\big (e_0^{k} t\big ),\quad j\in {\mathbb {N}},\ x \in X,\ t>0, \end{aligned}$$

whence

$$\begin{aligned} \inf _{j\in {\mathbb {N}}_0}T_{i=k}^{k+j} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{j+1}\right) =\inf _{j\in {\mathbb {N}}_0}\epsilon _{x}\left( \frac{e_0^{k} t}{j+1}\right) =0, \quad x \in X,\ t>0. \end{aligned}$$

Consequently, for every \(x \in X\) and \(t > 0\),

$$\begin{aligned} \sigma _x^k(t)= \lim _{j\rightarrow \infty }T_{i=k}^{k+j-1} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{j}\right) = \lim _{j\rightarrow \infty }\; \epsilon _{x} \left( \frac{e_0^{k}t}{j}\right) =0,\quad k\in {\mathbb {N}}_0, \end{aligned}$$

and

$$\begin{aligned} \lim _{k \rightarrow +\infty } \;\inf _{j\in {\mathbb {N}}_0}T_{i=k}^{k+j} (\Lambda ^{i} \epsilon )_x \left( \frac{t}{j+1}\right) =0. \end{aligned}$$

Hence, (3.9) is not valid and \(\sigma _x^k\) makes no contribution to the estimate in (3.13).

On the other hand, for every \(x\in X\), \(t>0\) and \(\omega =(\omega _n)_{n\in {\mathbb {N}}_0}\in \Omega \), we have

$$\begin{aligned} T_{i=k}^{k+j} (\Lambda ^{i} \epsilon )_x \big (\omega _{i-k}t\big ) =\inf _{i\in \{0,\ldots ,j\}} \epsilon _{x} \big (e_0^{k+i}\omega _{i}t\big ). \end{aligned}$$

So, for \({\widehat{\omega }}=({\widehat{\omega }}_n)_{n\in {\mathbb {N}}_0}\in \Omega \), with

$$\begin{aligned} {\widehat{\omega }}_i := r^i(1 - r), \quad i \in {\mathbb {N}}_0, \qquad r := \frac{1}{e_0}, \end{aligned}$$

we have

$$\begin{aligned} e_0^{k+i}{\widehat{\omega }}_{i}=e_0^{k}(1-r)=e_0^{k-1}(e_0-1),\quad i\in {\mathbb {N}}_0. \end{aligned}$$

Therefore, for every \(x\in X\) and \(t > 0\),

$$\begin{aligned} T_{i=k}^{k+j} (\Lambda ^{i} \epsilon )_x \big ({\widehat{\omega }}_{i-k} t \big ) =\epsilon _{x} \big (e_0^{k-1}(e_0-1)t\big ), \end{aligned}$$

whence

$$\begin{aligned}&\lim _{m \rightarrow +\infty } \;\inf _{j\in {\mathbb {N}}_0}T_{i=m}^{m+j} (\Lambda ^{i} \epsilon )_x \big ({\widehat{\omega }}^m_{i-m}t\big ) =\lim _{m \rightarrow +\infty } \;\inf _{j\in {\mathbb {N}}_0}\;\epsilon _{x} \big (e_0^{m-1}(e_0-1)t\big )\\&=\lim _{m \rightarrow +\infty } \;\epsilon _{x} \big (e_0^{m-1}(e_0-1)t\big ) = 1,\\&\,^{{\widehat{\omega }}}\!\sigma _x^k(t)= \lim _{j\rightarrow \infty }\;T_{i=k}^{k+j-1} (\Lambda ^{i} \epsilon )_x \big ({\widehat{\omega }}_{i-k} t \big )=\epsilon _{x} \big (e_0^{k-1}(e_0-1)t\big ). \end{aligned}$$

This means that (3.10) holds with \(\omega ^k={\widehat{\omega }}\) for \(k\in {\mathbb {N}}_0\) and

$$\begin{aligned} {\widehat{\sigma }}_x^k(t)\ge \,^{{\widehat{\omega }}}\!\sigma _x^k(t)=\epsilon _{x} \big (e_0^{k-1}(e_0-1)t\big ),\quad x\in X,\ t>0. \end{aligned}$$

For the situation where (3.10) is valid with sequences \(\omega ^k\in \Omega \) that are not the same for all \(k\in {\mathbb {N}}\), we refer to Remark 5.3.
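The dichotomy described in this remark is easy to check numerically. The sketch below uses \(T=T_M\) and ad hoc values of \(e_0\), v, \(\Vert x\Vert ^p\), k and t, all chosen purely for illustration:

```python
# Numerical illustration of Remark 3.7 (not part of the proof): with
# T = T_M (the minimum t-norm), eps_x(t) = t/(t + v*||x||^p) and e0 > 1,
# the equal-split bound behind sigma_x^k collapses to 0, while the
# geometric weights omega_i = r^i*(1-r), r = 1/e0, yield the constant
# positive bound eps_x(e0^(k-1)*(e0-1)*t). All numbers are ad hoc.

e0, v, norm_x_p = 2.0, 1.0, 3.0    # e0 = b*a^(-p) > 1; ||x||^p = 3 (hypothetical)
k, t = 2, 1.0
r = 1.0 / e0

def eps(u):
    return u / (u + v * norm_x_p)

# equal split: min_i eps(e0^(k+i)*t/m) = eps(e0^k*t/m), which -> 0 as m grows
equal_split = [min(eps(e0 ** (k + i) * t / m) for i in range(m))
               for m in (1, 10, 1000)]

# geometric weights: e0^(k+i)*omega_i = e0^(k-1)*(e0-1), independent of i
geometric = [min(eps(e0 ** (k + i) * r ** i * (1 - r) * t) for i in range(m))
             for m in (1, 10, 1000)]

print(equal_split)   # decreasing toward 0
print(geometric)     # constant
```

The constant value of `geometric` agrees with the bound \(\epsilon _{x}\big (e_0^{k-1}(e_0-1)t\big )\) obtained above.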

Remark 3.8

Note that in the proof of Theorem 3.5, we have only used the continuity of T at the points of the form \((1,\xi )\) for \(\xi \in (0,1]\). Actually, even that assumption can be weakened. Namely, it is enough to assume that T is continuous only at the point (1, 1), but then we have to modify the inequality in (3.13), basing it only on (3.19) and (3.21), without taking the limits.

Remark 3.9

Observe that the properties of the t-norm yield

$$\begin{aligned} \inf _{k\in {\mathbb {N}}_0}T_{i=n}^{k+n} (\Lambda ^{i} \epsilon )_x \Bigg (\frac{t}{k+1}\Bigg )\le (\Lambda ^{n} \epsilon )_x (t), \quad x \in X,\ t>0,\ n\in {\mathbb {N}}_0, \end{aligned}$$

whence (3.9) implies (3.11). However, since every G-complete RN-space is M-complete (see Remark 2.7) and not necessarily conversely, assumption (iii) is not weaker than (i).

Remark 3.10

Let \(\nu \in {\mathbb {N}}\), \(\xi _1,\ldots ,\xi _{\nu }:X\rightarrow X\), and \(L_1,\ldots ,L_{\nu }:X\rightarrow (0,\infty )\) be fixed. If the operator J has the form

$$\begin{aligned} J \eta (x) :=H\big (x,\eta (\xi _1(x)),\ldots ,\eta (\xi _{\nu }(x))\big ), \quad \eta \in Y^X, \ x \in X, \end{aligned}$$
(3.30)

with a function \(H:X\times Y^{\nu }\rightarrow Y\) satisfying the following Lipschitz-type condition:

$$\begin{aligned} F_{H(x,y_1,\ldots ,y_{\nu })-H(x,z_1,\ldots ,z_{\nu })}(t) \ge T_{i=1}^{\nu } F_{y_i-z_i}\left( \frac{t}{\nu L_i(x)}\right) ,\quad t>0, \end{aligned}$$
(3.31)

for all \(x\in X\) and \(y_1,\ldots ,y_{\nu },z_1,\ldots ,z_{\nu }\in Y\), then such J is \(\Lambda \)-contractive with \(\Lambda \) defined by (3.1), and this \(\Lambda \) fulfills hypothesis \(({\mathcal {C}}_0)\) (see Remark 3.2).

Clearly, (3.31) holds if H has the following simple form:

$$\begin{aligned} H(x,y_1,\ldots ,y_{\nu })=\sum _{i=1}^{\nu } L_i(x)y_i+h(x),\quad x\in X,\ y_1,\ldots ,y_{\nu }\in Y, \end{aligned}$$
(3.32)

with a fixed function \(h\in Y^X\). Then, (3.30) becomes

$$\begin{aligned} J f(x) :=\sum _{i=1}^{\nu } L_i(x)f(\xi _i(x))+h(x),\quad f \in Y^X,\ x\in X. \end{aligned}$$
(3.33)

In particular, such J satisfies the following Lipschitz-type condition:

$$\begin{aligned} F_{(J\mu - J\eta )(x)}(t) \ge T_{i=1}^{\nu } F_{(\mu -\eta )(\xi _i(x))}\left( \frac{t}{\nu L_i(x)}\right) , \quad&\mu , \eta \in Y^X, \nonumber \\&x\in X,\ t>0. \end{aligned}$$
(3.34)

If we want to admit functions \(L_i\) taking values in \({\mathbb {R}}\) (i.e., in particular taking the value zero), then we can rewrite that condition in the following form:

$$\begin{aligned} F_{(J\mu - J\eta )(x)}(t) \ge T_{i=1}^{\nu } F_{L_i(x)(\mu (\xi _i(x))-\eta (\xi _i(x)))}\Bigg (\frac{t}{\nu }\Bigg ), \quad&\mu , \eta \in Y^X, \nonumber \\&x\in X,\ t>0. \end{aligned}$$
(3.35)

Note that if, in such a situation, \(L_i(x)\ne 0\) for some \(i\in \{1,\ldots ,\nu \}\) and some \(x\in X\), then

$$\begin{aligned} F_{L_i(x)(\mu (\xi _i(x))-\eta (\xi _i(x)))}(t)=F_{\mu (\xi _i(x))-\eta (\xi _i(x))}\left( \frac{t}{|L_i(x)|}\right) ,\quad t>0; \end{aligned}$$

but if \(L_i(x)=0\), then

$$\begin{aligned} F_{L_i(x)(\mu (\xi _i(x))-\eta (\xi _i(x)))}(t)=F_{0}(t)=1,\quad t>0. \end{aligned}$$
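For a concrete feel for the operator (3.33), it can be iterated directly in the deterministic scalar case \(Y={\mathbb {R}}\). The data below (\(\nu \), the \(L_i\), the \(\xi _i\), h and the starting f) are hypothetical, chosen only so that the iteration contracts:

```python
# Deterministic toy illustration (Y = R, X = {0,...,9}) of iterating the
# operator J from (3.33). All data here are ad hoc; the constant factors
# L_1 + L_2 = 0.45 < 1 make the iteration a sup-norm contraction.

X = range(10)
nu = 2
L = [lambda x: 0.2, lambda x: 0.25]
xi = [lambda x: (x + 1) % 10, lambda x: (3 * x) % 10]
h = lambda x: float(x)

def J(f):
    # (J f)(x) = sum_i L_i(x)*f(xi_i(x)) + h(x), cf. (3.33)
    return {x: sum(L[i](x) * f[xi[i](x)] for i in range(nu)) + h(x) for x in X}

f = {x: 0.0 for x in X}      # initial approximate solution
for _ in range(200):         # f := J^n f
    f = J(f)

residual = max(abs(J(f)[x] - f[x]) for x in X)
print(residual)              # essentially 0: f is (numerically) a fixed point
```

The geometric decay of the residual mirrors the role of the iterates \(J^n f\) in the limit (3.12).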

In view of Remark 3.10, for operators \(J : Y^X \rightarrow Y^X\) fulfilling condition (3.34), we have the following particular case of Theorem 3.5, with a stronger statement on the uniqueness of the fixed point (since it requires only that (3.14) holds for \(k=0\)).

Theorem 3.11

Let \({\nu } \in {\mathbb {N}},\) \(\epsilon \in {\mathcal {D}}_+^X,\) \(\xi _1,\ldots , \xi _{\nu } \in X^X,\) \(L_1, \ldots , L_{\nu } : X \rightarrow (0, \infty ),\) \(\Lambda : {\mathcal {D}}_+^X \rightarrow {\mathcal {D}}_+^X\) be defined by (3.1), \(J : Y^X \rightarrow Y^X\) satisfy condition (3.34), and \(f : X \rightarrow Y\) fulfil (3.8). Assume that one of the conditions (i)–(iii) of Theorem 3.5 holds. Then,  for every \(x \in X,\) the limit (3.12) exists and the function \(\psi \in Y^X,\) defined in this way,  is a fixed point of J satisfying (3.14).

Moreover,  if (i) or (ii) holds,  then \(\psi \) is the unique fixed point of J such that there is \(\alpha \in (0,1)\) with

$$\begin{aligned} F_{(\psi - f)(x)}(t) \ge {\widehat{\sigma }}_x^0(\alpha t),\quad t>0,\ x\in X. \end{aligned}$$
(3.36)

Proof

First, fix \(\mu , \eta \in Y^X\) and \(\phi \in {\mathcal {D}}_+^X\) with

$$\begin{aligned} F_{(\mu - \eta )(x)} \ge \phi _x, \quad x \in X. \end{aligned}$$

Then, by (3.34),

$$\begin{aligned} F_{(J\mu - J\eta )(x)} (t)&\ge T_{i=1}^{\nu } F_{(\mu -\eta )(\xi _i(x))}\left( \frac{t}{\nu L_i(x)}\right) \\&\ge T_{i=1}^{\nu } \phi _{\xi _i(x)}\left( \frac{t}{\nu L_i(x)}\right) = (\Lambda \phi )_x(t), \quad x \in X, \ t > 0. \end{aligned}$$

Hence, J is \(\Lambda \)-contractive. Moreover, as we have noticed in Remark 3.10, \(\Lambda \) satisfies hypothesis \(({\mathcal {C}}_0)\). Therefore, by Theorem 3.5, the limit (3.12) exists for every \(x \in X\) and the function \(\psi \) so defined is a fixed point of J satisfying (3.14).

It remains to show the statement on uniqueness of \(\psi \). So, let \(\tau \in Y^X\) be a fixed point of J such that, for some \(\alpha \in (0, 1)\),

$$\begin{aligned} F_{(\tau - f)(x)}(t) \ge {\widehat{\sigma }}_x^0(\alpha t), \quad t > 0,\ x \in X. \end{aligned}$$
(3.37)

Fix \(x \in X\), \(\omega =(\omega _n)_{n \in {\mathbb {N}}_0} \in \Omega \) and \(t > 0\). We show that, for every \(n \in {\mathbb {N}}_0\), we have

$$\begin{aligned} F_{(J^n \psi - J^n\tau )(x)}(t)&\ge \lim _{m \rightarrow +\infty } {\widehat{T}}\bigg (T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_x \left( \frac{\alpha t}{2m}\right) \bigg ), \end{aligned}$$
(3.38)
$$\begin{aligned} F_{(J^n \psi - J^n\tau )(x)}(t)&\ge \lim _{m \rightarrow +\infty } {\widehat{T}}\bigg (T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_x \left( \frac{\omega _{i-n}\alpha t}{2}\right) \bigg ). \end{aligned}$$
(3.39)

This is the case for \(n = 0\), because by the continuity of T, for every \(x \in X\) and \(t > 0\), we have

$$\begin{aligned} F_{(\psi - \tau )(x)}(t)&\ge T\bigg (F_{(\psi - f)(x)}\left( \frac{t}{2}\right) ,F_{(\tau - f)(x)}\left( \frac{t}{2}\right) \bigg ) \\&\ge {\widehat{T}}\bigg (\sigma _x^0\left( \frac{\alpha t}{2}\right) \bigg )={\widehat{T}}\bigg (\lim _{m \rightarrow + \infty } T_{i=0}^{m-1} (\Lambda ^i\epsilon )_x\left( \frac{\alpha t}{2m}\right) \bigg ) \\&= \lim _{m \rightarrow + \infty } {\widehat{T}}\bigg ( T_{i=0}^{m-1} (\Lambda ^i\epsilon )_x\left( \frac{\alpha t}{2m}\right) \bigg ). \end{aligned}$$

Analogously,

$$\begin{aligned} F_{(\psi - \tau )(x)}(t) \ge {\widehat{T}}\bigg ({\underline{\sigma }}_x^0\left( \frac{\alpha t}{2}\right) \bigg ) \ge {\widehat{T}}\bigg (\,^{\omega }\sigma _x^0\left( \frac{\alpha t}{2}\right) \bigg ),\quad \omega \in \Omega . \end{aligned}$$

Since T is continuous, we finally get

$$\begin{aligned} F_{(\psi - \tau )(x)}(t)&\ge \lim _{m \rightarrow + \infty } {\widehat{T}}\bigg ( T_{i=0}^{m-1}(\Lambda ^i\epsilon )_x \left( \frac{ \omega _{i} \alpha t}{2} \right) \bigg ). \end{aligned}$$

Now assume that (3.38) is valid for some \(n \in {\mathbb {N}}_0\). Then, by (2.2) and the continuity of T, for every \(x \in X\) and \( t > 0\),

$$\begin{aligned} F_{(J^{n+1} \psi - J^{n+1}\tau )(x)}( t)&\ge T_{j=1}^{\nu } F_{(J^{n} \psi - J^{n}\tau )(\xi _j(x))}\left( \frac{t}{\nu L_j(x)}\right) \\&\ge T_{j =1}^{\nu } \lim _{m \rightarrow +\infty } {\widehat{T}}\bigg (T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_{\xi _j(x)} \left( \frac{\alpha t}{2 m \nu L_j(x)}\right) \bigg ) \\&= \lim _{m \rightarrow +\infty } T_{j =1}^{\nu } {\widehat{T}}\bigg (T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_{\xi _j(x)} \left( \frac{\alpha t}{2m\nu L_j(x)}\right) \bigg ) \\&= \lim _{m \rightarrow +\infty } {\widehat{T}}\bigg (T_{i=n}^{n+ m-1} T_{j =1}^{\nu } (\Lambda ^{i} \epsilon )_{\xi _j(x)} \left( \frac{\alpha t}{2m\nu L_j(x)}\right) \bigg ) \\&= \lim _{m \rightarrow +\infty } {\widehat{T}}\bigg (T_{i=n}^{n+m-1} (\Lambda (\Lambda ^{i} \epsilon ))_x \left( \frac{\alpha t}{2m}\right) \bigg ) \\&= \lim _{m \rightarrow +\infty } {\widehat{T}}\bigg (T_{i=n+1}^{n+m} (\Lambda ^{i} \epsilon )_{x} \left( \frac{\alpha t}{2m}\right) \bigg ). \end{aligned}$$

Next, assume that (3.39) is valid for some \(n \in {\mathbb {N}}_0\). Then in the same way, by (2.2) and the continuity of T, for every \(x \in X\), \( t > 0\), and \(\omega \in \Omega \),

$$\begin{aligned} F_{(J^{n+1} \psi - J^{n+1}\tau )(x)}(t)&\ge T_{j=1}^{\nu } F_{(J^{n} \psi - J^{n}\tau )(\xi _j(x))}\left( \frac{t}{\nu L_j(x)}\right) \\&\ge T_{j =1}^{\nu } \lim _{m \rightarrow +\infty } {\widehat{T}}\bigg (T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_{\xi _j(x)} \left( \frac{\omega _{i-n}\alpha t}{2\nu L_j(x)}\right) \bigg ) \\&= \lim _{m \rightarrow +\infty } T_{j =1}^{\nu } {\widehat{T}}\bigg (T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_{\xi _j(x)} \left( \frac{\omega _{i-n}\alpha t}{2\nu L_j(x)}\right) \bigg ) \\&= \lim _{m \rightarrow +\infty } {\widehat{T}}\bigg (T_{i=n}^{n+m-1} T_{j =1}^{\nu } (\Lambda ^{i} \epsilon )_{\xi _j(x)} \left( \frac{\omega _{i-n}\alpha t}{2\nu L_j(x)}\right) \bigg ) \\&= \lim _{m \rightarrow +\infty } {\widehat{T}}\bigg (T_{i=n}^{n+m-1} (\Lambda ^{i+1} \epsilon )_{x} \left( \frac{ \omega _{i-n}\alpha t}{2}\right) \bigg ) \\&= \lim _{m \rightarrow +\infty } {\widehat{T}}\bigg (T_{i=n+1}^{n+m} (\Lambda ^{i} \epsilon )_{x} \left( \frac{\omega _{i-n-1}\alpha t}{2}\right) \bigg ). \end{aligned}$$

Thus, we have proved (3.38) and (3.39) for every \(n \in {\mathbb {N}}_0\), \(x\in X\), \(\omega \in \Omega \), and \(t>0\). Hence,

$$\begin{aligned} F_{(\tau - \psi )(x)}(t)&= F_{(J^n \tau - J^n \psi )(x)}(t)\nonumber \\&\ge \lim _{m \rightarrow +\infty } {\widehat{T}}\bigg (T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_x \left( \frac{\alpha t}{2m}\right) \bigg )\nonumber \\&= {\widehat{T}}\bigg (\lim _{m \rightarrow +\infty } T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_x \left( \frac{\alpha t}{2m}\right) \bigg ), \end{aligned}$$
(3.40)
$$\begin{aligned} F_{(\tau - \psi )(x)}(t)&= F_{(J^n \tau - J^n \psi )(x)}(t)\nonumber \\&\ge \lim _{m \rightarrow +\infty } {\widehat{T}}\bigg (T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_x \left( \frac{\omega _{i-n}\alpha t}{2}\right) \bigg )\nonumber \\&= {\widehat{T}}\bigg (\lim _{m \rightarrow +\infty } T_{i=n}^{n+m-1} (\Lambda ^{i} \epsilon )_x \left( \frac{\omega _{i-n}\alpha t}{2}\right) \bigg ). \end{aligned}$$
(3.41)

Now, if (3.9) holds, then letting \(n \rightarrow +\infty \) in (3.40) and using the continuity of \({\widehat{T}}\), we get \(F_{(\tau - \psi )(x)}(t)=1\) for every \(x\in X\) and \(t>0\), which means that \(\tau = \psi \). Similarly, if (3.10) holds, then we argue analogously by letting \(n \rightarrow +\infty \) in (3.41). \(\square \)

As for the uniqueness of the fixed points of J in Theorem 3.5, we also have the following proposition.

Proposition 3.12

Let \(\Lambda : {\mathcal {D}}_+^X \rightarrow {\mathcal {D}}_+^X,\) \(J : Y^X \rightarrow Y^X\) be \(\Lambda \)-contractive,  \(k \in {\mathbb {N}}_0\) and \(\sigma \in \{\sigma ^k\} \cup \{\,^{\omega }\!\sigma ^k : \omega \in \Omega \}\) satisfy

$$\begin{aligned} \lim _{n \rightarrow +\infty } \,(\Lambda ^{n} \sigma )_x (t) =1, \quad x \in X,\ t > 0, \end{aligned}$$
(3.42)

where \(\sigma ^k,\,^{\omega }\!\sigma ^k \in {\mathcal {D}}^X_+\) are given by \(\sigma ^k(x)=\sigma _x^k\) and \(\,^{\omega }\!\sigma ^k(x)=\,^{\omega }\!\sigma _x^k\) for \(x \in X\).

Then,  for every \(f : X \rightarrow Y,\) J has at most one fixed point \(\psi _0\) with

$$\begin{aligned} F_{(\psi _0 - J^k f)(x)} \ge \sigma (x), \quad x \in X. \end{aligned}$$

Proof

Fix \(f : X \rightarrow Y\) and assume that \(\psi _1, \psi _2 \in Y^X\) are fixed points of J satisfying

$$\begin{aligned} F_{(\psi _j - J^k f)(x)} \ge \sigma _x, \quad x \in X, \ j=1,2. \end{aligned}$$

Then, by the \(\Lambda \)-contractivity of J,

$$\begin{aligned} F_{(J^m\psi _j - J^{k+m} f)(x)} \ge (\Lambda ^m \sigma )_x, \quad x \in X,\ j=1,2,\ m\in {\mathbb {N}}_0, \end{aligned}$$

and consequently,

$$\begin{aligned} F_{(\psi _1 - \psi _2)(x)}(t)&= F_{(J^m \psi _1 - J^m \psi _2)(x)}(t) \\&\ge T\bigg (\;F_{(J^m \psi _1- J^{k+m}f)(x)}\left( \frac{t}{2}\right) , F_{(J^{k+m}f-J^m \psi _2)(x)}\left( \frac{t}{2}\right) \bigg ) \\&\ge T\bigg ((\Lambda ^m \sigma )_x\Bigg (\,\frac{t}{2}\Bigg ), (\Lambda ^m \sigma )_x\Bigg (\,\frac{t}{2}\Bigg )\bigg ) \end{aligned}$$

for every \(m \in {\mathbb {N}}_0\), \(x \in X\) and \(t > 0\). Hence, letting m tend to \(\infty \) and using (3.42) and the continuity of T at the point (1, 1), we get \(F_{(\psi _1 - \psi _2)(x)} = H_0\) for \(x \in X\). Consequently, \(\psi _1 = \psi _2\). \(\square \)

If X has only one element, then \(Y^X\) can actually be identified with Y and Theorem 3.5 becomes an analog of the classical Banach Contraction Principle (somewhat generalized), given in Corollary 3.14 below. To present it, we need the following hypothesis, concerning mappings \(\lambda : {\mathcal {D}}_+ \rightarrow {\mathcal {D}}_+\), which is a special case of hypothesis \(({\mathcal {C}}_0)\).

  1. (C)

    The sequence \(\big (\lambda (F_{z_n})\big )_{n \in {\mathbb {N}}}\) converges pointwise to \(H_0\) for each sequence \((z_n)_{n \in {\mathbb {N}}}\) in Y that converges to 0.

To avoid any ambiguity, let us give one more definition, which is a special case of an earlier one, namely Definition 3.1.

Definition 3.13

Let \(\lambda : {\mathcal {D}}_+ \rightarrow {\mathcal {D}}_+\) be given. We say that a mapping \(h : Y \rightarrow Y\) is \(\lambda \)-contractive provided

$$\begin{aligned} F_{h(z) - h(w)} \ge \lambda \phi := \lambda (\phi ) \end{aligned}$$

for every \(z,w \in Y\) and \(\phi \in {\mathcal {D}}_+\) with \(F_{z-w} \ge \phi \).

Corollary 3.14

Let \(\lambda : {\mathcal {D}}_+ \rightarrow {\mathcal {D}}_+\) satisfy hypothesis (C) and \(h : Y \rightarrow Y\) be \(\lambda \)-contractive. Let \(\epsilon \in {\mathcal {D}}_+\) be such that

$$\begin{aligned} F_{h(z)-z} \ge \epsilon , \quad z \in Y, \end{aligned}$$
(3.43)

and assume that one of the following three conditions holds.

\((\alpha )\):

\((Y,F,T)\) is M-complete and

$$\begin{aligned} \lim _{k \rightarrow +\infty }\; \inf _{j \in {\mathbb {N}}_0}\, T_{i=k}^{k+j} (\lambda ^i \epsilon ) \Bigg (\frac{t}{j+1} \Bigg ) =1,\quad t > 0. \end{aligned}$$
\((\beta )\):

\((Y,F,T)\) is M-complete and,  for each \(k \in {\mathbb {N}},\) there is a sequence \((\omega ^k_n)_{n \in {\mathbb {N}}_0} \in \Omega \) with

$$\begin{aligned} \lim _{k \rightarrow +\infty }\; \inf _{j \in {\mathbb {N}}_0}\, T_{i=k}^{k+j} (\lambda ^i \epsilon ) \big ( \omega ^k_{i-k} t \big ) =1,\quad t > 0. \end{aligned}$$
\((\gamma )\):

\((Y,F,T)\) is G-complete and

$$\begin{aligned} \lim _{n \rightarrow +\infty } \,(\lambda ^n \epsilon ) =H_0. \end{aligned}$$
(3.44)

Then,  for every \(\omega = (\omega _n)_{n \in {\mathbb {N}}_0} \in \Omega \) and \( t > 0,\) the limits

$$\begin{aligned} z_0&:= \lim _{n \rightarrow +\infty } h^n (z),\\ l_k(t)&:= \lim _{m \rightarrow +\infty } T_{i=k}^{m+k-1} (\lambda ^i \epsilon )\Bigg ( \frac{t}{m} \Bigg ),\\ \,^{\omega }l_k(t)&:= \lim _{m \rightarrow +\infty } T_{i=k}^{m+k-1} (\lambda ^i \epsilon )\big (\omega _{i-k} t\big ) \end{aligned}$$

exist (in Y and \({\mathbb {R}},\) respectively) and \(z_0\) is a fixed point of h such that

$$\begin{aligned} F_{z_0 - h^k( z )}(t) \ge \sup _{\alpha \in (0,1)}\; {\widehat{l}}_k(\alpha t), \quad t > 0,\ k \in {\mathbb {N}}_0, \end{aligned}$$

where

$$\begin{aligned} {\widehat{l}}_k( t) := \max \{l_k(t), {\underline{l}}_{\,k}(t)\},\qquad {\underline{l}}_{\,k}(t) := \sup _{\omega \in \Omega } \,^{\omega }l_k(t), \quad t > 0,\ k \in {\mathbb {N}}_0. \end{aligned}$$

Moreover,  in case \((\alpha )\) or \((\beta )\) holds,  \(z_0\) is the unique fixed point of h for which there exists \(\alpha \in (0,1)\) with

$$\begin{aligned} F_{z_0 - h^k (z) }(t) \ge {\widehat{l}}_k(\alpha t),\quad t > 0,\ k \in {\mathbb {N}}_0. \end{aligned}$$

Remark 3.15

Let \(g : {\mathbb {R}} \rightarrow {\mathbb {R}}\) and \(G : [0,1] \rightarrow [0,1]\) be non-decreasing, left continuous and such that \(g(0) = G(0) = 0\), \(G(1) = 1\),

$$\begin{aligned} G(t) \ge t,\quad \lim _{n \rightarrow \infty } g^n(t) = \infty ,\qquad t > 0. \end{aligned}$$

Let \(\lambda : {\mathcal {D}}_+ \rightarrow {\mathcal {D}}_+\) have the form

$$\begin{aligned} (\lambda \xi )(t) = G(\xi (g(t))), \quad \xi \in {\mathcal {D}}_+,\ t \in {\mathbb {R}}. \end{aligned}$$

Then,

$$\begin{aligned} \lim _{n \rightarrow +\infty } \,(\lambda ^{n} \xi ) (t) = \lim _{n \rightarrow +\infty } \, G^n(\xi (g^n(t))) = 1,\quad t > 0,\ \xi \in {\mathcal {D}}_+, \end{aligned}$$

which means that (3.44) holds for every \(\epsilon \in {\mathcal {D}}_+\).

A very simple example of such \(\lambda \) is obtained when G is the identity map of [0, 1] (i.e., \(G(t) \equiv t\)) and

$$\begin{aligned} g(t)= at,\quad t\in {\mathbb {R}}, \end{aligned}$$
(3.45)

with a fixed \(a > 1\). Clearly, then \((\lambda \xi )(t) = \xi (at)\) for \(\xi \in {\mathcal {D}}_+\) and \(t > 0\) and, in this case, the \(\lambda \)-contractive mappings are known as B-contractions or Sehgal contractions (see [67, 88]).
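A quick numerical check of (3.44) in this Sehgal setting; the choices a = 2 and \(\xi (t)=t/(t+1)\) below are arbitrary and purely illustrative:

```python
# Check that, for the Sehgal/B-contraction operator (lambda xi)(t) = xi(a*t)
# with a > 1, one has (lambda^n xi)(t) -> 1 for t > 0, i.e., (3.44) holds.
# Here xi(t) = t/(t+1) is an arbitrary element of D_+ used for illustration.

a = 2.0
xi = lambda t: t / (t + 1.0)

def lam_n(n, t):
    return xi(a ** n * t)    # (lambda^n xi)(t) = xi(a^n * t)

values = [lam_n(n, 0.5) for n in (0, 5, 10, 30)]
print(values)                # strictly increasing toward 1
```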

If g is the identity map on \({\mathbb {R}}\) and

$$\begin{aligned} G(s)=\frac{s}{s+\kappa (1-s)},\quad s\in [0,1], \end{aligned}$$
(3.46)

with some \(\kappa \in (0,1)\), then \(\lambda \)-contractive mappings are fuzzy contractive (see [43, 67, 83]).

If both (3.45) and (3.46) hold, then \(\lambda \)-contractive mappings are called strict B-contractions (see [67, 85]).

4 Approximate eigenvalues

In this section, we show an application of Theorem 3.5 to the investigation of approximate eigenvalues and eigenvectors, which corresponds to the results in [36, 47].

It is well known that \(Y^X\) is a real linear space with the operations defined pointwise in the usual way:

$$\begin{aligned} (\xi + \eta )(x) := \xi (x) + \eta (x),\quad (\alpha \xi )( x) := \alpha \xi (x), \quad \xi ,\eta \in Y^X,\, x \in X,\, \alpha \in {\mathbb {R}}. \end{aligned}$$

The next corollary is an example of a result concerning approximate eigenvalues of some linear operators on \(Y^X\). Actually, the assumption of linearity of the operators is not necessary in the proof, but the notion of eigenvalue might be ambiguous without it (see, e.g., [86]) and therefore we confine ourselves to the linear case.

Corollary 4.1

Let \(\gamma \in {\mathbb {R}}{\setminus }\{0\},\) \(\Lambda _0:{\mathcal {D}}_+^{X}\rightarrow {\mathcal {D}}_+^{X}\) satisfy \(({\mathcal {C}}_0),\) and \(J_0 : Y^X \rightarrow Y^X\) be linear and \(\Lambda _0\)-contractive. Assume \(h \in Y^X\) and \(\varepsilon \in {\mathcal {D}}_+^{X}\) satisfy the condition

$$\begin{aligned} F_{(J_0 h-\gamma h)(x)} \ge \varepsilon _x, \quad x\in X. \end{aligned}$$
(4.1)

If one of the conditions (i), (ii) and (iii) of Theorem 3.5 is valid with

$$\begin{aligned} (\Lambda \delta )_x(t) := (\Lambda _0 \delta )_x(|\gamma | t),\quad \delta \in {\mathcal {D}}_+^X,\ x \in X,\ t > 0, \end{aligned}$$
(4.2)

then \(\gamma \) is an eigenvalue of \(J_0,\) the limits

$$\begin{aligned} \psi (x)&:= \lim _{n \rightarrow \infty } \big (J_0^n(\gamma ^{-n+1}h)\big )(x), \end{aligned}$$
(4.3)
$$\begin{aligned} \sigma _x^0(t)&:= \lim _{m\rightarrow \infty }T_{i=0}^{m-1} (\Lambda ^{i} \varepsilon )_x \left( \frac{t}{m}\right) , \end{aligned}$$
(4.4)
$$\begin{aligned} ^{\omega }\sigma _x^0(t)&:= \lim _{m\rightarrow \infty }T_{i=0}^{m-1} (\Lambda ^{i} \varepsilon )_x \big (\omega _i t\big ) \end{aligned}$$
(4.5)

exist for every \(x \in X,\) \(\omega =(\omega _n)_{n\in {\mathbb {N}}_0}\in \Omega \) and \(t>0,\) and the function \(\psi _0 \in Y^X,\) given by

$$\begin{aligned} \psi _0(x) := \gamma ^{-1} \psi (x), \quad x \in X, \end{aligned}$$

is an eigenvector of \(J_0,\) with the eigenvalue \(\gamma ,\) such that

$$\begin{aligned} F_{(\psi _0 - h)(x)} (t)\ge \sup _{\alpha \in (0,1)}\; {\widehat{\sigma }}_x^0(\alpha |\gamma |\, t), \quad x \in X,\ t>0. \end{aligned}$$
(4.6)

Proof

Let \(\varphi := \gamma h\) and \(J : Y^X \rightarrow Y^X\) be given by:

$$\begin{aligned} (J \eta )(x) = \big (J_0(\gamma ^{-1} \eta )\big )(x), \quad \eta \in Y^X, \ x \in X. \end{aligned}$$

Then, in view of the \(\Lambda _0\)-contractivity and linearity of \(J_0\), for every \(\mu ,\xi \in Y^X\) and \(\delta \in {\mathcal {D}}_+^X\) with \(F_{(\mu -\xi )(x)}\ge \delta _x\), we have

$$\begin{aligned} F_{(J\mu -J\xi )(x)}(t)=F_{(J_0(\gamma ^{-1}\mu )-J_0(\gamma ^{-1}\xi ))(x)}(t)\ge F_{(J_0\mu -J_0\xi )(x)}(|\gamma | t),\quad t>0, \end{aligned}$$

whence

$$\begin{aligned} F_{(J\mu -J\xi )(x)}(t)\ge (\Lambda _0 \delta )_x(|\gamma | t)=(\Lambda \delta )_x( t),\quad t>0, \end{aligned}$$

which means that J is \(\Lambda \)-contractive. Next, we can write (4.1) in the form:

$$\begin{aligned} F_{ (J \varphi -\varphi )(x)} \ge \varepsilon _x, \quad x \in X. \end{aligned}$$
(4.7)

Hence, by Theorem 3.5 and Lemma 3.3, the limits (4.3), (4.4) and (4.5) exist for every \(x\in X\), \(\omega = (\omega _n)_{n \in {\mathbb {N}}_0} \in \Omega \) and \(t > 0\). Moreover, the function \(\psi : X \rightarrow Y\), defined by (3.12), is a fixed point of J with

$$\begin{aligned} F_{(\psi -\varphi )(x)}(t) \ge \sup _{\alpha \in (0,1)}\; {\widehat{\sigma }}_x^0(\alpha t), \quad x \in X,\ t>0. \end{aligned}$$
(4.8)

Write \(\psi _0 := \gamma ^{-1}\psi \). Now, it is easily seen that \(J_0 \psi _0 = J\psi = \psi = \gamma \psi _0\), (4.6) is equivalent to (4.8), and (3.12) yields (4.3). \(\square \)

Clearly, under suitable additional assumptions in Corollary 4.1, we can deduce from Theorem 3.5 some statements on the uniqueness of \(\psi \), and consequently on the uniqueness of \(\psi _0\).
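The construction behind Corollary 4.1 can be illustrated in a deterministic finite-dimensional toy case, replacing \(Y^X\) by \({\mathbb {R}}^2\) and \(J_0\) by a matrix; the matrix A, the value \(\gamma \) and the approximate eigenvector h below are hypothetical illustration data:

```python
# Toy illustration of Corollary 4.1: J_0 is the linear map given by A,
# gamma = 2 is its dominant eigenvalue, and h is only an approximate
# eigenvector. The limit (4.3) reduces here to iterating psi0 <- A psi0/gamma,
# which filters out the non-eigenvector component of h.

A = [[2.0, 1.0], [0.0, 0.5]]
gamma = 2.0
h = [1.0, 0.1]                 # closeness of A h to gamma*h plays the role of (4.1)

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

psi0 = h[:]                    # psi_0 = gamma^{-1} psi = lim (A/gamma)^n h
for _ in range(100):
    psi0 = [c / gamma for c in mat_vec(A, psi0)]

residual = max(abs(p - gamma * q) for p, q in zip(mat_vec(A, psi0), psi0))
print(psi0, residual)          # an exact eigenvector for gamma, residual ~ 0
```

This is essentially a power-iteration view of (4.3): the approximate eigenvector is refined into an exact one with the same \(\gamma \).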

Given \(\varepsilon \in {\mathcal {D}}_+^{X}\), let us introduce the following definition: \(\gamma \in {\mathbb {R}} {\setminus } \{0\}\) is an \(\varepsilon \)-eigenvalue of a linear operator \(J_0 : Y^X \rightarrow Y^X\) provided there exists \(h \in Y^X\) such that \(F_{(J_0 h-\gamma h)(x)} \ge \varepsilon _x\) for \(x\in X\).

It is easily seen that Corollary 4.1 yields the following simple result.

Corollary 4.2

Let \(\Lambda _0:{\mathcal {D}}_+^{X}\rightarrow {\mathcal {D}}_+^{X},\) \(J_0 : Y^X \rightarrow Y^X\) be \(\Lambda _0\)-contractive and linear,  and \(\varepsilon \in {\mathcal {D}}_+^{X}\). If \(\gamma \in {\mathbb {R}} {\setminus } \{0\}\) is an \(\varepsilon \)-eigenvalue of  \(J_0\) and one of the conditions (i)–(iii) of Theorem 3.5 is valid with \(\Lambda \) given by (4.2), then \(\gamma \) is an eigenvalue of  \(J_0\).

5 Ulam stability of functional equations in a single variable

In this section, as before, X is a nonempty set and \((Y,F,T)\) is an RN-space.

As we have mentioned in the Introduction, the main issue of Ulam stability can be very briefly expressed in the following way: when must a function satisfying an equation approximately (in some sense) be near an exact solution to the equation?

The next definition (cf. [25, p. 119, Ch. 5, Definition 8]) makes that notion a bit more precise for the RN-spaces.

Definition 5.1

Let \({\mathcal {E}}\) and \({\mathcal {C}}\) be nonempty subsets of \({\mathcal {D}}_+^{X}\) with \({\mathcal {E}} \subset {\mathcal {C}}\). Let \({\mathcal {T}}\) be an operator mapping \({\mathcal {C}}\) into \({\mathcal {D}}_+^{X}\), \({\mathcal {G}}\) be an operator mapping a nonempty set \({\mathcal {K}} \subset Y^X\) into \(Y^{X}\), and \(\chi _0 \in Y^X\). We say that the equation

$$\begin{aligned} {\mathcal {G}}\phi (x)=\chi _0(x) ,\quad x\in X, \end{aligned}$$
(5.1)

is \(({\mathcal {E}},{\mathcal {T}})\)-stable provided for any \(\varepsilon \in {\mathcal {E}}\) and \(\phi _0\in {\mathcal {K}}\) with

$$\begin{aligned} F_{({\mathcal {G}}\phi _0 - \chi _0)(x)}\ge \varepsilon _x,\quad x\in X, \end{aligned}$$
(5.2)

there exists a solution \(\phi \in {\mathcal {K}}\) of (5.1) such that

$$\begin{aligned} F_{(\phi -\phi _0)(x)}\ge ({\mathcal {T}}\varepsilon )_x,\quad x\in X. \end{aligned}$$
(5.3)

Roughly speaking, \(({\mathcal {E}},{\mathcal {T}})\)-stability of (5.1) means that every approximate (in the sense of (5.2)) solution \(\phi _0\in {\mathcal {K}}\) of (5.1) is always close (in the sense of (5.3)) to an exact solution \(\phi \in {\mathcal {K}}\) of (5.1).

Now, we present a simple Ulam stability outcome that can be derived from the results of the previous sections. To this end, we need the following hypothesis.

  1. (H1)

    \(\nu \in {\mathbb {N}}\), \(H : X \times Y^{\nu } \rightarrow Y\), \(L_1,\ldots ,L_\nu : X \rightarrow (0,\infty )\) and

    $$\begin{aligned} F_{H(x,w_1,\ldots ,w_{\nu })-H(x,z_1,\ldots ,z_{\nu })}(t) \ge T_{i=1}^{\nu } F_{w_i-z_i}\Bigg (\frac{t}{\nu L_i(x)}\Bigg ),\nonumber \\ x \in X,\ (w_1,\ldots ,w_\nu ),\;(z_1,\ldots ,z_\nu ) \in Y^\nu ,\ t > 0. \end{aligned}$$
    (5.4)

The subsequent corollary can be easily deduced from Theorem 3.11.

Corollary 5.2

Let hypothesis (H1) be valid,  \(\xi _1,\dots ,\xi _{\nu }\in X^X,\) \(f\in Y^X,\) \(\varepsilon \in {\mathcal {D}}_+^{X}\) and

$$\begin{aligned} F_{H(x,f(\xi _1(x)), \ldots , f(\xi _{\nu }(x)))-f(x)}\ge \varepsilon _x, \quad x\in X. \end{aligned}$$
(5.5)

Assume that one of the assumptions (i)–(iii) of Theorem 3.5 is fulfilled with \(\Lambda :{\mathcal {D}}_+^{X}\rightarrow {\mathcal {D}}_+^{X}\) given by

$$\begin{aligned} (\Lambda \delta )_x(t)=T_{i=1}^{\nu }\delta _{\xi _i(x)}\Bigg (\frac{t}{\nu L_i(x)}\Bigg ),\quad \delta \in {\mathcal {D}}_+^{X},\ x\in X,\ t>0. \end{aligned}$$
(5.6)

Then,  for each \(x \in X,\) \(\omega = (\omega _n)_{n\in {\mathbb {N}}_0}\in \Omega \) and \(t>0,\) the limits

$$\begin{aligned} \psi (x)&:=\lim _{n\rightarrow \infty }(J^n f)(x), \nonumber \\ \sigma _x^0(t)&:= \lim _{m\rightarrow \infty }T_{i=0}^{m-1} (\Lambda ^{i} \varepsilon )_x \left( \frac{t}{m}\right) , \nonumber \\ ^{\omega }\sigma _x^0(t)&:= \lim _{m\rightarrow \infty }T_{i=0}^{m-1} (\Lambda ^{i} \varepsilon )_x \big (\omega _i t\big ) \end{aligned}$$
(5.7)

exist (in Y and \({\mathbb {R}},\) respectively),  with \(J : Y^X \rightarrow Y^X\) given by:

$$\begin{aligned} (J \eta )(x):=H(x,\eta (\xi _1(x)),\ldots ,\eta (\xi _{\nu }(x))), \quad \eta \in Y^X,\ x\in X, \end{aligned}$$
(5.8)

and the mapping \(\psi \in Y^X\), defined by (5.7), fulfills

$$\begin{aligned} H(x,\psi (\xi _1(x)),\ldots ,\psi (\xi _{\nu }(x)))=\psi (x), \quad x\in X, \end{aligned}$$
(5.9)
$$\begin{aligned} F_{f(x)-\psi (x)} (t)\ge \sup _{\alpha \in (0,1)}\; {\widehat{\sigma }}_x^0(\alpha t),\quad t>0,\ x\in X. \end{aligned}$$
(5.10)

Moreover,  if one of the conditions (i) and (ii) holds,  then \(\psi \) is the unique solution of (5.9) such that there is \(\alpha \in (0,1)\) with

$$\begin{aligned} F_{(f - \psi )(x)}(t) \ge {\widehat{\sigma }}_x^0(\alpha t),\quad t>0,\ x\in X. \end{aligned}$$
(5.11)

Proof

Clearly, inequality (5.5) implies (3.8). Next, hypothesis \(({\mathcal {C}}_0)\) holds (see Remarks 3.2 and 3.10) and (5.4) means that (3.34) is valid. Consequently, by Theorem 3.11, the function \(\psi \) defined by (5.7) is a fixed point of J (that is, a solution of (5.9)) satisfying (5.10) (take \(k=0\) in (3.14)).

The statement on the uniqueness of \(\psi \) also follows from Theorem 3.11. \(\square \)

The stability of functional equations of the form (1.2) (or related to it) has already been studied by several authors. For further information, we refer to [4, 23, 25]. A very particular case of (5.9), with H given by (3.32), is the linear functional equation of the form:

$$\begin{aligned} \phi (x)=\sum _{i=1}^{\nu } {\widetilde{L}}_i(x)\phi (\xi _i(x))+h(x), \end{aligned}$$
(5.12)

with fixed functions \(h\in Y^X\) and \({\widetilde{L}}_1, \ldots , {\widetilde{L}}_{\nu } \in {\mathbb {R}}^X\). That equation is called a linear equation of higher order when \(\xi _i = \xi ^i\) for \(i=1,\ldots ,\nu \), with some \(\xi \in X^X\), i.e., when (5.12) has the form:

$$\begin{aligned} \phi (x)=\sum _{i=1}^{\nu } {\widetilde{L}}_i(x)\phi (\xi ^i(x))+h(x). \end{aligned}$$
(5.13)

Some recent results concerning the stability of less general cases of it can be found in [25, 26, 51, 52, 70, 97].

The simplest case of Eq. (5.13), when \(\nu =1\) and \(0\not \in {\widetilde{L}}_1(X)\), can be rewritten in the form:

$$\begin{aligned} \phi (\xi (x))=\frac{1}{{\widetilde{L}}_1(x)}\phi (x)-\frac{h(x)}{{\widetilde{L}}_1(x)}, \end{aligned}$$
(5.14)

which is also called the linear equation. Special cases of (5.14) are the gamma functional equation

$$\begin{aligned} \phi (x + 1) = x\phi (x) \end{aligned}$$

for \(X=Y={\mathbb {R}}\), the Schröder functional equation

$$\begin{aligned} \phi (\xi (x)) = s\phi (x) \end{aligned}$$
(5.15)

with fixed \(s\in {\mathbb {R}}{\setminus } \{0\}\), and the Abel functional equation

$$\begin{aligned} \phi (\xi (x)) = \phi (x)+1. \end{aligned}$$

For more details on Eq. (5.14) and its various particular versions, we refer to [58, 60].

Remark 5.3

Let us consider a situation analogous to that in Remark 3.7, with \(T=T_M\), for the Schröder functional equation (5.15) rewritten as

$$\begin{aligned} \frac{1}{s}\phi (\xi (x)) = \phi (x). \end{aligned}$$
(5.16)

Clearly, Eq. (5.16) is (5.9) with \(\nu =1\), \(\xi _1=\xi \) and \(H(x,y)=\frac{1}{s}y\) for \(x\in X\) and \(y\in Y\). So we have the case as in Corollary 5.2 with \(\Lambda :{\mathcal {D}}_+^{X}\rightarrow {\mathcal {D}}_+^{X}\) given by

$$\begin{aligned} (\Lambda \delta )_x (t)=\delta _{\xi (x)}(|s|t),\quad x\in X,\ \delta \in {\mathcal {D}}_+^{X},\ t>0. \end{aligned}$$

Further, let E be a normed space, \(X:=E{\setminus } \{0\}\), \(p\in {\mathbb {R}}\), \(L\in (0,\infty )\) and

$$\begin{aligned} \epsilon _x (t)=\frac{t}{t+L\Vert x\Vert ^p},\quad x\in X,\ t>0. \end{aligned}$$

Assume that \(\Vert \xi ^n(x)\Vert ^p\le a_n\Vert x\Vert ^p\) for \(x\in X\) and \(n \in {\mathbb {N}}_0\), with some sequence \((a_n)_{n\in {\mathbb {N}}_0}\) of positive reals such that \(\lim _{n \rightarrow \infty } a_n^{-1} |s|^n = \infty \).

Write \(e_n := a_n^{-1}|s|^n\). Clearly, for every \(n \in {\mathbb {N}}\), \(x \in X\) and \(t > 0\),

$$\begin{aligned} (\Lambda ^n \epsilon )_x (t)=\epsilon _{\xi ^n(x)}(|s|^nt)=\frac{|s|^nt}{|s|^nt+L\Vert \xi ^n(x)\Vert ^p}\ge \epsilon _{x}\big (e_nt\big ), \end{aligned}$$
(5.17)

whence

$$\begin{aligned} \lim _{n \rightarrow +\infty } \,(\Lambda ^{n} \epsilon )_x (t)\ge \lim _{n \rightarrow +\infty } \,\epsilon _{x}\big (e_nt\big )=1, \quad x \in X,\ t>0, \end{aligned}$$

which means that (3.11) holds.
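The bound (5.17) and the resulting limit can be spot-checked numerically. The sketch below is a minimal illustration under assumed concrete data (not from the paper): \(E={\mathbb {R}}\), \(\xi (x)=x/2\), \(s=3\), \(p=1\), \(L=1\), so that \(a_n=2^{-n}\) and \(e_n=a_n^{-1}|s|^n=6^n\).

```python
# Numerical sanity check of (5.17), under assumed concrete data (not from the
# paper): E = R, X = R \ {0}, xi(x) = x/2, s = 3, p = 1, L = 1, so that
# ||xi^n(x)||^p = 2**(-n) * |x|, i.e. a_n = 2**(-n) and e_n = 6**n.

def eps(x, t, L=1.0, p=1.0):
    """epsilon_x(t) = t / (t + L * |x|**p)."""
    return t / (t + L * abs(x) ** p)

def Lambda_n_eps(x, t, n, s=3.0):
    """(Lambda^n eps)_x(t) = eps_{xi^n(x)}(|s|**n * t), with xi(x) = x/2."""
    return eps(x / 2 ** n, abs(s) ** n * t)

x, t = 1.5, 0.25
for n in [0, 5, 10, 20]:
    # the lower bound eps_x(e_n * t) from (5.17); here it is attained exactly
    assert Lambda_n_eps(x, t, n) >= eps(x, 6.0 ** n * t) - 1e-12
# (Lambda^n eps)_x(t) -> 1 as n -> infinity, i.e. condition (3.11)
assert Lambda_n_eps(x, t, 40) > 1 - 1e-9
```

With these choices the inequality in (5.17) is in fact an equality, which makes the check particularly transparent.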

Further, assume additionally that

$$\begin{aligned} \rho :=\sum _{i=0}^\infty \frac{1}{e_i}<\infty \end{aligned}$$

and write

$$\begin{aligned} \rho _k:=\sum _{i=k}^\infty \frac{1}{e_i},\quad \omega ^k_i:=\frac{1}{\rho _k e_{k+i}},\qquad k,i\in {\mathbb {N}}_0. \end{aligned}$$

Then,

$$\begin{aligned} \sum _{i=0}^\infty \omega ^k_i=1,\quad e_{k+j}\omega ^k_{j} = \frac{1}{\rho _k},\qquad k, j \in {\mathbb {N}}_0. \end{aligned}$$

Now, using (5.17), we get

$$\begin{aligned} T_{i=k}^{k+j} (\Lambda ^{i} \epsilon )_x \big (\omega ^k_{i-k}t\big ) \ge \inf _{i\in \{0,\ldots ,j\}} \epsilon _{x} \big (e_{k+i}\omega ^k_{i}t\big )=\epsilon _{x} \big (\rho _k^{-1}t\big ),\quad x\in X,\ t>0. \end{aligned}$$

Consequently, for every \(x\in X\) and \(t>0\),

$$\begin{aligned} \lim _{m \rightarrow +\infty } \;\inf _{j\in {\mathbb {N}}_0}T_{i=m}^{m+j} (\Lambda ^{i} \epsilon )_x \big (\omega ^m_{i-m}t\big ) \ge \lim _{m \rightarrow +\infty } \;\inf _{j\in {\mathbb {N}}_0}\;\epsilon _{x} \big (\rho _m^{-1}t\big )\\=\lim _{m \rightarrow +\infty } \; \epsilon _{x} \big (\rho _m^{-1}t\big )=1, \end{aligned}$$

which means that (3.10) holds with \(\omega ^k:=(\omega _n^k)_{n\in {\mathbb {N}}_0}\in \Omega \). Moreover, for \(\omega :=\omega ^0\) we have

$$\begin{aligned} \,^{\omega }\sigma _x^0(t)\ge \lim _{j\rightarrow \infty }\;\inf _{i\in \{0,\ldots ,j-1\}} \epsilon _{x} \big (e_{i}\omega ^0_{i}t\big )=\epsilon _{x} \big (\rho ^{-1}t\big ), \end{aligned}$$

whence

$$\begin{aligned} {\widehat{\sigma }}_x^0(t)\ge \,^{\omega }\sigma _x^0(t)\ge \epsilon _{x} \big (\rho ^{-1}t\big ),\quad x\in X,\ t>0, \end{aligned}$$

where \(\,^{\omega }\sigma _x^0\) and \({\widehat{\sigma }}_x^0\) have the same meaning as in Corollary 5.2.
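The identities \(\sum _{i=0}^\infty \omega ^k_i=1\) and \(e_{k+j}\omega ^k_{j}=1/\rho _k\) can be confirmed numerically for a concrete admissible sequence. The sketch below assumes \(e_n=6^n\) (an illustration only; then \(\rho =\sum _i 6^{-i}=6/5<\infty \)).

```python
# Checking the identities for the weights omega^k_i with the assumed concrete
# sequence e_n = 6**n (so rho = sum_i 1/e_i = 6/5 < infinity).

def rho_k(k, terms=200):
    """rho_k = sum_{i >= k} 1/e_i, truncated (the tail is negligible)."""
    return sum(1.0 / 6 ** i for i in range(k, k + terms))

def omega(k, i):
    """omega^k_i = 1 / (rho_k * e_{k+i})."""
    return 1.0 / (rho_k(k) * 6 ** (k + i))

assert abs(rho_k(0) - 6.0 / 5.0) < 1e-12           # rho = 6/5
for k in range(4):
    # sum_i omega^k_i = 1
    assert abs(sum(omega(k, i) for i in range(200)) - 1.0) < 1e-12
    for j in range(5):
        # e_{k+j} * omega^k_j = 1 / rho_k
        assert abs(6 ** (k + j) * omega(k, j) - 1.0 / rho_k(k)) < 1e-12
```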

6 Stability of Eq. (1.2)

In this section, we are concerned with the stability of the functional equation (1.2) for \(m > 1\). So we assume that X is a linear space over a field \({\mathbb {K}} \in \{ {\mathbb {R}}, {\mathbb {C}}\}\), \(A_i, a_{ij} \in {\mathbb {K}}\) for \(i = 1, \ldots , m\) and \(j = 1, \ldots , n\), and that \(D : X^n \rightarrow Y\) is a fixed function.

It is easily seen that particular cases of the homogeneous version of (1.2), namely of the equation

$$\begin{aligned} \sum _{i=1}^m A_i f\left( \sum _{j=1}^n a_{ij} x_j\right) =0, \end{aligned}$$
(6.1)

are the Cauchy functional equation

$$\begin{aligned} f(x+y)=f(x)+f(y), \end{aligned}$$
(6.2)

the Jensen functional equation

$$\begin{aligned} f(x+y)=\frac{1}{2}\big (f(2x)+f(2y)\big ), \end{aligned}$$

the particular version (with \(c = C = 0\)) of the linear equation in two variables

$$\begin{aligned} f(ax+by+c)=Af(x)+Bf(y)+C, \end{aligned}$$

the Jordan–von Neumann (quadratic) functional equation

$$\begin{aligned} f(x+y)+f(x-y)=2f(x)+2f(y), \end{aligned}$$
(6.3)

the Drygas equation

$$\begin{aligned} f(x+y)+f(x-y)=2f(x)+f(y)+f(-y), \end{aligned}$$
(6.4)

and the Fréchet functional equation

$$\begin{aligned} f(x+y+z)+f(x)+f(y)+f(z)=f(x+y)+f(x+z)+f(y+z). \end{aligned}$$
(6.5)

Various information on the Cauchy, Jensen and linear equations can be found in [2, 3, 59]. Equation (6.3) (the parallelogram law) was used by Jordan and von Neumann [50] in a characterization of the inner product spaces and Eqs. (6.4) and (6.5) were applied for the analogous purposes (cf. [8, 38, 55]); we refer to [12, 15, 35, 49, 53, 54, 71, 72, 77,78,79,80, 85] for further related information and stability results for those equations.

Let \(\Delta :Y^X \rightarrow Y^X\) denote the Fréchet difference operator given by

$$\begin{aligned} \Delta _y f(x) = \Delta _y^1 f(x) := f(x+y)-f(x) , \quad x,y\in X. \end{aligned}$$

Write

$$\begin{aligned} \Delta _{t,z}:=\Delta _t\circ \Delta _z,\quad \Delta ^{2}_t:=\Delta _{t,\,t}, \qquad t,z\in X, \end{aligned}$$

and

$$\begin{aligned} \Delta _{t,u,z} := \Delta _t\circ \Delta _u\circ \Delta _z, \quad \Delta ^{3}_{t} := \Delta _{t,\,t,\,t}, \qquad t,u,z\in X, \end{aligned}$$

for functions \(f\in Y^X\). Recurrently, we define

$$\begin{aligned}&\Delta ^{n+1}_{z}:=\Delta _z \circ \Delta _{z}^n, \quad z\in X,\ n\in {\mathbb {N}}, \\&\Delta _{x_{n+1},x_n,\ldots ,x_1}:=\Delta _{x_{n+1}}\circ \Delta _{x_n,\ldots ,x_1}, \quad x_1,\ldots ,x_{n+1}\in X,\;n\in {\mathbb {N}}. \end{aligned}$$

It is easily seen that the equations

$$\begin{aligned}&\Delta ^n_{z}f(x)=0,\quad x,z\in X,\end{aligned}$$
(6.6)
$$\begin{aligned}&\Delta _{x_n,\ldots ,x_1}f(x)=0,\quad x,x_1,\ldots ,x_n\in X,\nonumber \\&\Delta ^n_{z}f(x)=n!f(z),\quad x,z\in X \end{aligned}$$
(6.7)

are particular cases of (6.1). Functions \(f:X\rightarrow Y\) satisfying (6.6) and (6.7) are called polynomial functions of order \(n-1\) and monomial functions of order n, respectively (see, e.g., [42, 49, 59, 61, 93] for information on their solutions and stability).
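These difference-operator equations are easy to test numerically. The sketch below (an assumed illustration with \(X=Y={\mathbb {R}}\)) checks that a polynomial of degree \(n-1\) satisfies (6.6), and that \(f(x)=x^n\) satisfies the monomial equation (6.7), for which \(\Delta ^n_z f(x)=n!\,z^n=n!f(z)\).

```python
# Numerical check (assumed setting X = Y = R): a polynomial of degree n-1
# satisfies (6.6), and f(x) = x**n satisfies the monomial equation (6.7).
from math import factorial

def delta(f, z):
    """Delta_z f : x |-> f(x + z) - f(x)."""
    return lambda x: f(x + z) - f(x)

def delta_n(f, z, n):
    """The iterated difference Delta_z^n f."""
    for _ in range(n):
        f = delta(f, z)
    return f

n = 4
poly = lambda x: 7 * x ** 3 - 2 * x + 5   # degree n-1 = 3
mono = lambda x: x ** n                   # monomial of order n

for x in (-1.0, 0.5, 2.0):
    for z in (0.25, 3.0):
        assert abs(delta_n(poly, z, n)(x)) < 1e-8                           # (6.6)
        assert abs(delta_n(mono, z, n)(x) - factorial(n) * mono(z)) < 1e-6  # (6.7)
```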

Let us mention yet that (6.5) can be written as

$$\begin{aligned} C^2f(x, y, z)=0,\quad x,y,z\in X, \end{aligned}$$

where

$$\begin{aligned} C^2f(x, y, z) = Cf(x, y + z) - Cf(x, y) - Cf(x, z),\quad x,y,z\in X, \end{aligned}$$

and

$$\begin{aligned} Cf(x, y)=f(x+y)-f(x)-f(y),\quad x,y\in X, \end{aligned}$$

i.e., \(C^2f\) is the Cauchy difference of f of the second order. Recurrently,

$$\begin{aligned} C^{n+1} f(x_1, \ldots ,x_{n},u,w)&= C^{n}f(x_1, \ldots ,x_{n},u+w) - C^{n}f(x_1, \ldots ,x_{n},u)\\&\quad - C^{n}f(x_1, \ldots ,x_{n},w) \end{aligned}$$

for \(x_1, \ldots ,x_{n},u,w\in X,\) and \(n\in {\mathbb {N}}\). It is easily seen that the equation

$$\begin{aligned} C^{n+1} f(x_1, \ldots ,x_{n+2}) = 0,\quad x_1, \ldots ,x_{n+2}\in X, \end{aligned}$$

also is of the form (6.1) for every \(n\in {\mathbb {N}}\).
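As an assumed concrete check (with \(X=Y={\mathbb {R}}\)), the sketch below builds the Cauchy differences \(Cf\) and \(C^2f\) and verifies that a quadratic function, i.e., a solution of (6.3), satisfies the Fréchet equation (6.5).

```python
# Assumed concrete check (X = Y = R): the Cauchy differences Cf and C^2 f,
# and verification that a quadratic function -- a solution of (6.3) -- also
# satisfies the Frechet equation (6.5), i.e. C^2 f = 0.

def C(f):
    """Cf(x, y) = f(x + y) - f(x) - f(y)."""
    return lambda x, y: f(x + y) - f(x) - f(y)

def C2(f):
    """C^2 f(x, y, z) = Cf(x, y + z) - Cf(x, y) - Cf(x, z)."""
    g = C(f)
    return lambda x, y, z: g(x, y + z) - g(x, y) - g(x, z)

quad = lambda x: 3 * x * x   # a quadratic function
for (x, y, z) in [(1.0, 2.0, -0.5), (0.3, 0.7, 4.0)]:
    # quad solves (6.3): f(x+y) + f(x-y) = 2f(x) + 2f(y)
    assert abs(quad(x + y) + quad(x - y) - 2 * quad(x) - 2 * quad(y)) < 1e-9
    # ... and hence also (6.5): C^2 f = 0
    assert abs(C2(quad)(x, y, z)) < 1e-9
```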

The functional equation

$$\begin{aligned}&M\bigg [f\left( \frac{x+y+z}{m}\right) +f(x)+f(y)+f(z)\bigg ] \nonumber \\&\quad = N \left[ f\left( \frac{x + y}{n} \right) + f\left( \frac{x + z}{n} \right) + f \left( \frac{y + z}{n}\right) \right] , \end{aligned}$$
(6.8)

(M, N, m, n being non-zero integers) is another particular case of (6.1). It has been studied in [29,30,31,32]. Equation (6.8) with \(M=m=3\) and \(N=n=2\) was considered for the first time by Popoviciu [81] in connection with some inequalities for convex functions; for results on its solutions and stability, we refer to [92, 94]. Solutions and stability of this case of (6.8) have also been investigated by Lee [62]. The more general case \(N=n^2\) and \(M=m^2\) of (6.8) has been studied in [63]. For results on a generalization of (6.8) we refer to [95].

Finally, let us recall here the equation of p-Wright affine functions (called also the p-Wright functional equation)

$$\begin{aligned} f(px+(1-p)y)+f((1-p)x+py)=f(x)+f(y), \end{aligned}$$
(6.9)

where \(p \in {\mathbb {R}}\) is fixed, which is also of the form (6.1). For more information on (6.9) and recent results on its stability we refer to [11, 18].

Our main theorem in this section concerns the Ulam-type stability of Eq. (1.2) in RN-spaces. The following two hypotheses are needed to formulate it.

\(({\mathcal {M}})\):

There exist \(\mu \in \{1, \ldots , m-1\}\) and \(c_1, \ldots , c_n\in {\mathbb {K}}\) such that

$$\begin{aligned} A_0 := \Bigg |\sum _{i =\mu +1 }^{m} A_i \Bigg | > 0,\quad \beta _{i} = 1, \qquad i=\mu +1,\ldots ,m, \end{aligned}$$

where \(\beta _{i} := \sum _{j=1}^n a_{ij} c_j\) for \(i=1,\ldots ,m\).

\(({\mathfrak {D}})\):

For every \(x_1,\ldots ,x_n\in X,\)

$$\begin{aligned} \sum _{i=1}^mA_i d\Bigg (\sum _{j=1}^n a_{ij} x_j\Bigg ) =\sum _{i=1}^m A_iD\Bigg (\beta _i x_1,\ldots ,\beta _i x_n\Bigg ), \end{aligned}$$
(6.10)

where \(\beta _i\) is defined as in hypothesis \(({\mathcal {M}})\) and \(d(x)=D(c_1x, \ldots , c_n x)\) for \(x\in X\).

The next two remarks provide some comments on those hypotheses.

Remark 6.1

If \(\sum _{j=1}^n |a_{mj}| \ne 0\), then there exist \(c_1, \ldots , c_n\in {\mathbb {K}}\) such that \(\sum _{j=1}^n a_{mj} c_j = 1\). Therefore, hypothesis \(({\mathcal {M}})\) is fulfilled with \(\mu = m - 1\). However, because of the forms of the conditions (i)–(iii) of Theorem 3.5 and (6.13), it makes sense to consider (for some cases of the Eq. (1.2) and some functions \(\theta \)) also the situations with \(\mu < m - 1\).

For instance, for the Cauchy equation (6.2) and its inhomogeneous form

$$\begin{aligned} f(x+y)=f(x)+f(y)+D(x,y), \end{aligned}$$
(6.11)

we can consider the following two situations (we refer to Corollary 6.4 and its proof for consequences in both of them).

  1. (a1)

    If (6.11) is written in the form (1.2) as \(f(x_1+x_2)-f(x_1)-f(x_2)=D(x_1,x_2)\), then \(m = 3\), \(n = 2\), \(A_1 = 1\), \(A_2 = A_3 = -1\), \(a_{11} = a_{12} = 1\), \(a_{21} = 1\), \(a_{22} = 0\), \(a_{31} = 0\), \(a_{32} = 1\). In the matrix form, we can write \(a_{ij}\) as

    $$\begin{aligned} (a_{ij})_{1 \le i \le 3,\atop 1 \le j \le 2} = \begin{pmatrix} 1 &{} 1 \\ 1 &{} 0 \\ 0 &{} 1 \\ \end{pmatrix}. \end{aligned}$$

    Clearly, \(({\mathcal {M}})\) is valid with \(\mu =1\), \(A_0=2\), and \(c_1=c_2=1\).

  2. (a2)

    If (6.11) is written in the form (1.2) as \(-f(x_1)-f(x_2)+f(x_1+x_2)=D(x_1,x_2)\), then \(m=3\), \(n=2\), \(A_1=A_2=-1\), \(A_3=1\), \(a_{11}=1\), \(a_{12}=0\), \(a_{21}=0\), \(a_{22}=1\), \(a_{31}=a_{32}=1\). In the matrix form, we can write \(a_{ij}\) as

    $$\begin{aligned} (a_{ij})_{1 \le i \le 3,\atop 1 \le j \le 2} = \begin{pmatrix} 1 &{} 0 \\ 0 &{} 1 \\ 1 &{} 1 \\ \end{pmatrix}, \end{aligned}$$

    and \(({\mathcal {M}})\) is valid with \(\mu = 2\), \(A_0 = 1\), and \(c_1 = c_2 = 1/2\).

Remark 6.2

(b0) Clearly, if D is a constant function, then hypothesis \(({\mathfrak {D}})\) is valid (this case includes Eq. (1.1)). Moreover, if functions \(D_1,D_2:X^n\rightarrow Y\) satisfy the hypothesis, then so does the function \(\alpha _1 D_1+ \alpha _2 D_2\) with any fixed scalars \(\alpha _1,\alpha _2\). Below, we provide more examples of nontrivial functions D satisfying the hypothesis.

(b1) Consider the situation (a1) depicted in the previous remark, with \(m = 3\), \(n = 2\), \(A_1 = 1\), \(A_2 = A_3 = -1\), \(a_{11} = a_{12} = 1\), \(a_{21} = 1\), \(a_{22} = 0\), \(a_{31} = 0\), \(a_{32} = 1\), \(\mu =1\), and \(c_1=c_2=1\). Then \(\beta _{1}=2\), \(\beta _{2}=\beta _{3}=1\) and condition (6.10) takes the form

$$\begin{aligned}&D(x_1+x_2,x_1+x_2)-D(x_1,x_1)-D(x_2,x_2)\nonumber \\&\quad =D(2x_1,2x_2)-2D(x_1,x_2),\quad x_1,x_2\in X. \end{aligned}$$
(6.12)

Observe that condition (6.12) holds in each of the following three cases:

  • D is a symmetric biadditive function (i.e., \(D(x_1,x_2)= D(x_2,x_1)\) and \(D(x_1,x_2+x_3)= D(x_1,x_2)+D(x_1,x_3)\) for \(x_1,x_2,x_3\in X\));

  • there exist additive \(h_1,h_2:X\rightarrow Y\) such that \(D(x_1,x_2)= h_1(x_1)+h_2(x_2)\) for \(x_1,x_2\in X\);

  • there exists \(\rho :X\rightarrow Y\) such that \(D(x_1,x_2)= \rho (x_1+x_2)-\rho (x_1)-\rho (x_2)\) for \(x_1,x_2\in X\).
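Each of the three cases can be spot-checked. The sketch below verifies (6.12) for assumed concrete instances over \(X=Y={\mathbb {R}}\): a symmetric biadditive function, a sum of additive functions, and a Cauchy difference.

```python
# Spot-checking condition (6.12) for assumed concrete instances (X = Y = R)
# of the three cases listed above.
lhs = lambda D, x1, x2: D(x1 + x2, x1 + x2) - D(x1, x1) - D(x2, x2)
rhs = lambda D, x1, x2: D(2 * x1, 2 * x2) - 2 * D(x1, x2)

D_biadd = lambda u, v: 5 * u * v            # symmetric biadditive
D_add = lambda u, v: 2 * u - 3 * v          # h1(u) + h2(v) with h1, h2 additive
rho = lambda x: x ** 3
D_cd = lambda u, v: rho(u + v) - rho(u) - rho(v)   # a Cauchy difference of rho

for D in (D_biadd, D_add, D_cd):
    for (x1, x2) in [(1.0, 2.0), (-0.5, 3.5), (0.0, 7.0)]:
        assert abs(lhs(D, x1, x2) - rhs(D, x1, x2)) < 1e-9   # (6.12)
```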

(b2) In the situation (a2) depicted in the previous remark (with the signs of all \(A_i\) reversed, which does not affect (6.10)), i.e., with \(m=3\), \(n=2\), \(A_1=A_2=1\), \(A_3=-1\), \(a_{11}=1\), \(a_{12}=0\), \(a_{21}=0\), \(a_{22}=1\), \(a_{31}=a_{32}=1\), \(\mu = 2\), \(c_1 = c_2 = 1/2\), \(\beta _{1}=\beta _{2}=1/2\) and \(\beta _{3}=1\), condition (6.10) takes the form

$$\begin{aligned}&D\Bigg (\frac{1}{2} x_1,\frac{1}{2}x_1\Bigg )+D\Bigg ( \frac{1}{2} x_2,\frac{1}{2} x_2\Bigg )-D\Bigg (\frac{1}{2}(x_1+x_2),\frac{1}{2}(x_1+x_2)\Bigg )\\&\quad =2D\Bigg (\frac{1}{2}x_1,\frac{1}{2}x_2\Bigg )-D\big (x_1,x_2\big ),\quad x_1,x_2\in X, \end{aligned}$$

which is actually (6.12) (it is enough to replace \(x_i\) by \(2x_i\) and multiply both sides by \(-1\)).

(b3) More generally, if \(h_1,\ldots ,h_n:X\rightarrow Y\) are solutions to equation (6.1), then the function \(D:X^n\rightarrow Y\), given by

$$\begin{aligned} D(x_1, \ldots , x_n)=\sum _{k=1}^nh_k(x_k),\quad x_1, \ldots , x_n\in X, \end{aligned}$$

fulfills hypothesis \(({\mathfrak {D}})\). In fact, fix \(x_1, \ldots , x_n\in X\). Then, according to the definition of d,

$$\begin{aligned}&\sum _{i=1}^mA_i \,d\Bigg (\sum _{j=1}^n a_{ij} x_j\Bigg ) =\sum _{i=1}^mA_i D\Bigg (c_1 \sum _{j=1}^n a_{ij} x_j,\ldots ,c_n\sum _{j=1}^n a_{ij} x_j\Bigg )\\&\quad =\sum _{i=1}^mA_i\sum _{k=1}^nh_k\Bigg (c_k \sum _{j=1}^n a_{ij} x_j\Bigg )=\sum _{k=1}^n \sum _{i=1}^mA_ih_k\Bigg ( \sum _{j=1}^n a_{ij} c_kx_j\Bigg )=0,\\&\sum _{i=1}^m A_i D\Bigg (\beta _i x_1,\ldots ,\beta _i x_n\Bigg )=\sum _{i=1}^mA_i\sum _{k=1}^nh_k\Bigg (\beta _i x_k\Bigg )\\&\quad =\sum _{k=1}^n \sum _{i=1}^mA_ih_k\Bigg ( \sum _{j=1}^n a_{ij} c_j x_k\Bigg )=0, \end{aligned}$$

whence we get (6.10).

Theorem 6.3

Let hypotheses \(({\mathcal {M}})\) and \(({\mathfrak {D}})\) be valid and \(\theta : X^n \rightarrow {\mathcal {D}}_+\) satisfy

$$\begin{aligned} \lim _{k\rightarrow \infty } \big ({\mathcal {T}}^k\theta \big )(x_1, \ldots , x_n)(t) =1, \quad t > 0,\ x_1, \ldots , x_n \in X, \end{aligned}$$
(6.13)

where \({\mathcal {T}} : {\mathcal {D}}_+^{X^n} \rightarrow {\mathcal {D}}_+^{X^n}\) is given by

$$\begin{aligned} {\mathcal {T}}\chi (x_1, \ldots , x_n)(t)=&\,T_{i=1}^{\mu }\chi (\beta _i x_1, \ldots , \beta _i x_n) \left( \frac{A_0 t}{\mu |A_i|}\right) ,\nonumber \\&\chi \in {\mathcal {D}}_+^{X^n},\ t>0,\ x_1, \ldots , x_n \in X. \end{aligned}$$
(6.14)

Further,  assume that one of the conditions (i)–(iii) of Theorem 3.5 holds with \(\epsilon \in {\mathcal {D}}_+^X\) and \(\Lambda : {\mathcal {D}}_+^X \rightarrow {\mathcal {D}}_+^X\) defined by

$$\begin{aligned}&\epsilon _x(t) := \theta (c_1 x, \ldots , c_n x)(t A_0), \quad x \in X,\ t>0, \end{aligned}$$
(6.15)
$$\begin{aligned}&(\Lambda \delta )_x(t) :=T_{i=1}^{\mu } \delta _{\beta _i x}\left( \frac{A_0 t}{\mu |A_i|}\right) , \quad \delta \in {\mathcal {D}}_+^X, \ x \in X, \ t > 0. \end{aligned}$$
(6.16)

If \(f : X \rightarrow Y\) fulfills

$$\begin{aligned} F_{\sum _{i=1}^m A_i f(\sum _{j=1}^n a_{ij} x_j)-D(x_1,\ldots ,x_n)}&\ge \theta (x_1, \ldots , x_n),\quad x_1, \ldots , x_n \in X, \end{aligned}$$
(6.17)

then there is a solution \(\psi : X \rightarrow Y\) of Eq. (1.2) such that

$$\begin{aligned} F_{(\psi - f)(x)}(t) \ge \sup _{\alpha \in (0,1)}{\widehat{\sigma }}_x^0(\alpha t),\quad t>0,\ x\in X, \end{aligned}$$
(6.18)

with \({\widehat{\sigma }}_x^0\) defined by (3.7) (see also (3.2) and (3.3)).

Moreover,  in case where (i) or (ii) holds,  there is exactly one solution \(\psi \in Y^X\) of (1.2) such that there exists \(\alpha \in (0,1)\) with

$$\begin{aligned} F_{(\psi - f)(x)}(t) \ge {\widehat{\sigma }}_x^0(\alpha t),\quad t>0,\ x \in X. \end{aligned}$$
(6.19)

Proof

Write \(\alpha := \big (\sum _{i=\mu +1}^{m} A_i\big )^{-1}\), so that \(|\alpha |=1/A_0\), and fix \(x \in X\). Putting \(x_j := c_j x\) for \(j \in \{1, \ldots , n\}\) in (6.17), we get

$$\begin{aligned} F_{\sum _{i=1}^{m} A_i f(\beta _i x)-d( x)} \ge \theta (c_1x, \ldots , c_n x). \end{aligned}$$

Moreover,

$$\begin{aligned} \alpha \sum _{i=1}^{m} A_i f\big (\beta _i x\big )= f(x) + \sum _{i=1}^{\mu } A_i \alpha f\big (\beta _i x\big ). \end{aligned}$$

Therefore,

$$\begin{aligned}&F_{f(x) + \alpha \sum _{i=1}^{\mu } A_i f\left( \beta _i x\right) -\alpha d(x)}(t)=F_{\alpha \sum _{i=1}^{m} A_i f\left( \beta _i x\right) -\alpha d(x)} (t)\\&\quad = F_{\sum _{i=1}^{m} A_i f\left( \sum _{j=1}^n a_{ij} c_j x\right) -d(x)} (A_0t) \ge \theta (c_1 x, \ldots , c_n x)(A_0t),\quad t>0. \end{aligned}$$

Consequently,

$$\begin{aligned} F_{f(x) - Jf\left( x\right) }(t) \ge \theta (c_1 x, \ldots , c_n x) (A_0t),\quad t > 0, \end{aligned}$$
(6.20)

with the operator \(J : Y^X \rightarrow Y^X\) defined by

$$\begin{aligned} J\xi (x) := -\alpha \Bigg (\sum _{i=1}^{\mu } A_i \xi (\beta _i x)-d(x)\Bigg ), \quad \xi \in Y^X,\ x \in X. \end{aligned}$$

Note that the assumptions of Theorem 3.11 are satisfied for such J, because, for every \(\xi ,\eta \in Y^X\) and \(x \in X\),

$$\begin{aligned} (J\xi - J\eta )(x)=-\alpha \sum _{i=1}^{\mu } A_i (\xi - \eta )\big (\beta _{i} x\big ) , \end{aligned}$$

whence

$$\begin{aligned} F_{(J\xi - J\eta )(x)}( t)&= F_{-\alpha \sum _{i=1}^{\mu }A_i(\xi - \eta )\left( \beta _{i} x\right) }(t) \\&\ge T_{i=1}^{\mu } F_{A_i (\xi - \eta )\left( \beta _{i} x\right) }\left( \frac{A_0t}{\mu }\right) \\&= T_{i=1}^{\mu } F_{ (\xi - \eta )\left( \beta _{i} x\right) }\left( \frac{A_0t}{\mu |A_i|}\right) ,\quad t > 0. \end{aligned}$$

This means that the condition (3.34) is fulfilled with \(\xi _i(x)=\beta _{i} x\) and \(L_i(x)=|A_i|/A_0\). Consequently, by Theorem 3.11, for every \(k\in {\mathbb {N}}_0\), \(x \in X\) and \(t>0\), the limit (3.12) exists and the function \(\psi \in Y^X\) is a fixed point of J fulfilling (6.18).

Moreover, if one of the conditions (i), (ii) holds, then \(\psi \) is the unique fixed point of J such that there is \(\alpha \in (0,1)\) with

$$\begin{aligned} F_{(\psi - f)(x)}(t) \ge {\widehat{\sigma }}_x^0(\alpha t),\quad t > 0,\ x \in X. \end{aligned}$$
(6.21)

Now, we show that \(\psi \) is a solution to (1.2). To this end, observe that

$$\begin{aligned}&\sum _{i=1}^{m} A_i \psi \Bigg (\sum _{j=1}^n a_{ij} x_j\Bigg )= \sum _{i=1}^{m} A_i \lim _{k\rightarrow \infty }(J^k f)\Bigg (\sum _{j=1}^n a_{ij} x_j\Bigg )\nonumber \\&\quad = \lim _{k\rightarrow \infty }\;\Bigg (\sum _{i=1}^{m} A_i (J^k f)\Bigg (\sum _{j=1}^n a_{ij} x_j\Bigg )\Bigg ), \quad x_1,\ldots ,x_n \in X. \end{aligned}$$
(6.22)

First, we prove by induction that, for each \(k\in {\mathbb {N}}_0\) and \(x_1,\ldots ,x_n \in X\),

$$\begin{aligned} F_{\sum _{i=1}^{m} A_i (J^k f)\left( \sum _{j=1}^n a_{ij} x_j\right) - D(x_1,\ldots ,x_n)}\ge ({\mathcal {T}}^k\theta )(x_1,\ldots ,x_n). \end{aligned}$$
(6.23)

The case \(k=0\) is (6.17). So fix \(k\in {\mathbb {N}}_0\) and assume that (6.23) holds. Then, by hypothesis \(({\mathfrak {D}})\), for every \(x_1, \ldots , x_n \in X\),

$$\begin{aligned}&\sum _{i=1}^{m} A_i (J^{k+1} f)\Bigg (\sum _{j=1}^n a_{ij} x_j\Bigg )= \sum _{i=1}^{m} A_i J(J^{k} f)\Bigg (\sum _{j=1}^n a_{ij} x_j\Bigg )\nonumber \\&\quad = -\alpha \sum _{i=1}^{m} A_i \Bigg [\sum _{l=1}^{\mu } A_l(J^{k} f)\Bigg (\beta _l \sum _{j=1}^n a_{ij} x_j\Bigg )-d\Bigg (\sum _{j=1}^n a_{ij} x_j\Bigg )\Bigg ]\nonumber \\&\quad = -\alpha \sum _{l=1}^{\mu } A_l \sum _{i=1}^{m} A_i(J^{k} f)\Bigg (\sum _{j=1}^n a_{ij} \beta _l x_j\Bigg )+\alpha \sum _{i=1}^{m} A_id\Bigg (\sum _{j=1}^n a_{ij} x_j\Bigg )\nonumber \\&\quad = -\alpha \sum _{l=1}^{\mu } A_l \sum _{i=1}^{m} A_i(J^{k} f)\Bigg (\sum _{j=1}^n a_{ij} \beta _l x_j\Bigg ) + \alpha \sum _{l=1}^{m} A_l D\Bigg (\beta _l x_1, \ldots , \beta _l x_n\Bigg )\nonumber \\&\quad = -\alpha \sum _{l=1}^{\mu } A_l \Bigg [\sum _{i=1}^{m} A_i(J^{k} f)\Bigg (\sum _{j=1}^n a_{ij} \beta _l x_j\Bigg )-D\Bigg (\beta _l x_1,\ldots ,\beta _l x_n\Bigg )\Bigg ] \nonumber \\&\qquad + D(x_1, x_2 \dots , x_n). \end{aligned}$$
(6.24)

Hence, by (6.14) and the assumed inequality (6.23),

$$\begin{aligned}&F_{\sum _{i=1}^{m} A_i (J^{k+1} f)\left( \sum _{j=1}^n a_{ij} x_j\right) - D(x_1,\ldots ,x_n)}(t) \\&\quad = F_{-\alpha \sum _{l=1}^{\mu } A_l\left[ \sum _{i=1}^{m} A_i(J^{k} f)\left( \sum _{j=1}^n a_{ij} \beta _l x_j\right) -D(\beta _l x_1,\ldots ,\beta _l x_n)\right] }(t) \\&\quad \ge T_{l=1}^{\mu }F_{\sum _{i=1}^{m} A_i(J^{k} f)\left( \sum _{j=1}^n a_{ij} \beta _l x_j\right) -D(\beta _l x_1,\ldots ,\beta _l x_n)}\Bigg (\frac{A_0t}{\mu |A_l|}\Bigg ) \\&\quad \ge T_{l=1}^{\mu } ({\mathcal {T}}^k\theta )(\beta _l x_1,\ldots ,\beta _l x_n)\Bigg (\frac{A_0t}{\mu |A_l|}\Bigg ) \\&\quad =({\mathcal {T}}^{k+1}\theta )(x_1,\ldots ,x_n)(t), \quad x_1,\ldots ,x_n \in X,\ t>0. \end{aligned}$$

Thus we have proved that (6.23) holds for each \(k\in {\mathbb {N}}_0\). Now, by letting \(k\rightarrow \infty \) in (6.23), in view of (6.13), for every \(x_1,\ldots ,x_n \in X\), we get

$$\begin{aligned} \lim _{k\rightarrow \infty }F_{\sum _{i=1}^{m} A_i (J^k f)\left( \sum _{j=1}^n a_{ij} x_j\right) - D(x_1,\ldots ,x_n)}(t)=1,\quad t>0, \end{aligned}$$
(6.25)

which means that

$$\begin{aligned} \lim _{k\rightarrow \infty }\sum _{i=1}^{m} A_i (J^k f)\Bigg (\sum _{j=1}^n a_{ij} x_j\Bigg )= D(x_1,\ldots ,x_n), \quad x_1,\ldots ,x_n \in X. \end{aligned}$$

Consequently, by (6.22),

$$\begin{aligned} \sum _{i=1}^{m} A_i \psi \Bigg (\sum _{j=1}^n a_{ij} x_j\Bigg )= D(x_1,\ldots ,x_n), \quad x_1,\ldots ,x_n \in X. \end{aligned}$$
(6.26)

To complete the proof, observe that every solution of (1.2) is a fixed point of J and therefore the statement on uniqueness follows directly from the uniqueness property of \(\psi \) as a fixed point of J satisfying (6.21). \(\square \)

Using Theorem 6.3, we can obtain various stability results for numerous equations. For instance, for the Cauchy inhomogeneous equation (6.11) we can argue as in the following corollary.

Corollary 6.4

Assume that \(T=T_M,\) Y is M-complete,  \(\Vert \ \Vert \) is a norm on X,  \(D:X^2\rightarrow Y\) satisfies condition (6.12), \(p,v_1, v_2 \in [0,\infty ),\) \(p\ne 1,\) \(v_1 + v_2 \ne 0,\) and \(f : X \rightarrow Y\) satisfies

$$\begin{aligned} F_{f(x+y) - f(x) -f(y)-D(x,y)}(t) \ge \frac{t}{t + v_1 \Vert x\Vert ^p + v_2\Vert y\Vert ^p}, \nonumber \\ x, y \in X,\ t > 0. \end{aligned}$$
(6.27)

Then there exists a unique solution \(\psi : X \rightarrow Y\) to the Cauchy inhomogeneous equation (6.11) such that

$$\begin{aligned} F_{f(x) - \psi (x)}(t) \ge {\widehat{\sigma }}_x^0(t)\ge \frac{t}{t + v_0\Vert x\Vert ^p},\quad x \in X,\ t > 0, \end{aligned}$$
(6.28)

where \({\widehat{\sigma }}_x^0(t)\) is defined by (3.7) and

$$\begin{aligned} v_0=\frac{v_1 + v_2}{|2 - 2^{p}|}. \end{aligned}$$

Proof

Equation (6.11) is (1.2) with \(m = 3\) and \(n = 2\). Next, Remark 6.2 shows that condition (6.12) means that D fulfills hypothesis \(({\mathfrak {D}})\). So, we use Theorem 6.3 with

$$\begin{aligned} \theta (x_1,x_2)(t)=\frac{t}{t + v_1\Vert x_1\Vert ^p + v_2\Vert x_2\Vert ^p}, \quad x_1, x_2 \in X,\ t > 0, \end{aligned}$$

and consider two separate cases: \(p<1\) and \(p>1\).

The first case (\(p<1\)) coincides with the situation (a1) of Remark 6.1, with \(A_1 = - A_2 = - A_3 = 1\) and

$$\begin{aligned} (a_{ij})_{1 \le i \le 3,\atop 1 \le j \le 2} = \begin{pmatrix} 1 &{} 1 \\ 1 &{} 0 \\ 0 &{} 1 \\ \end{pmatrix}, \end{aligned}$$

in which case hypothesis \(({\mathcal {M}})\) is valid with \(\mu = 1\) and \(c_1 = c_2 = 1\). Then, \(A_0 = 2\), \(\beta _1 = a_{11} c_1 + a_{12}c_2 = 2\), and consequently (see (6.14)–(6.16))

$$\begin{aligned}&({\mathcal {T}}\chi )(x_1,x_2)(t) = \chi (\beta _1 x_1, \beta _1 x_2)\left( \frac{A_0 t}{\mu |A_1|}\right) = \chi ( 2x_1, 2x_2)(2t), \nonumber \\&\quad t > 0,\ x_1,x_2\in X, \chi \in {\mathcal {D}}_+^{X^2}, \end{aligned}$$
(6.29)
$$\begin{aligned}&(\Lambda \delta )_x(t) := \delta _{\beta _1x}\left( \frac{A_0 t}{\mu |A_1|}\right) = \delta _{2x}(2t), \quad \delta \in {\mathcal {D}}_+^X, \ x \in X,\ t> 0,\nonumber \\&\epsilon _x(t) = \theta (c_1 x, c_2 x)(t A_0)=\frac{t}{t + v\Vert x\Vert ^p}, \quad x \in X,\ t>0, \end{aligned}$$
(6.30)

with \(v:=(v_1 + v_2)/2\).

Arguing as in Remark 3.7 (with \(a = b = 2\) and \(e_0 =ba^{-p}= 2^{1-p} >1\)), we obtain that \(\Lambda \) satisfies condition (3.10) (with some sequence \((\omega ^k_n)_{n\in {\mathbb {N}}_0}\in \Omega \)) and

$$\begin{aligned} {\widehat{\sigma }}_x^0(t)\ge \epsilon _{x} \big (e_0^{-1}(e_0-1)t\big )=\frac{t}{t + v_0\Vert x\Vert ^p},\quad x\in X,\ t>0, \end{aligned}$$

with

$$\begin{aligned} v_0 = \frac{v}{1-2^{p-1}}=\frac{v_1 + v_2}{2 - 2^{p}}. \end{aligned}$$

Moreover, by (6.29),

$$\begin{aligned}&\lim _{k\rightarrow \infty } \big ({\mathcal {T}}^k\theta \big )(x_1,x_2)(t)=\lim _{k\rightarrow \infty } \theta (2^kx_1, 2^kx_2)(2^k t)\\&\quad =\lim _{k\rightarrow \infty } \,\frac{t}{t + 2^{k(p-1)}(v_1 \Vert x_1\Vert ^p + v_2\Vert x_2\Vert ^p)}=1,\quad x_1,x_2\in X,\ t>0, \end{aligned}$$

which means that (6.13) is fulfilled. Therefore our statement for \(p<1\) results from Theorem 6.3.

In the case \(p > 1,\) we need situation (a2) of Remark 6.1, with \(-1=A_1 = A_2 = - A_3\) and

$$\begin{aligned} (a_{ij})_{1 \le i \le 3, \atop 1 \le j \le 2} = \begin{pmatrix} 1 &{} 0 \\ 0 &{} 1 \\ 1 &{} 1 \\ \end{pmatrix}, \end{aligned}$$

in which case hypothesis \(({\mathcal {M}})\) holds with \(\mu = 2\), \(A_0 = 1\) and \(c_1 = c_2 = 1/2\). The reasoning is analogous to the case \(p<1\), but for the convenience of the reader, we present it in some detail. Namely, now \(\beta _1 =\beta _2 = 1/2\), \(\epsilon _x(t)\) is given by (6.30), but with \(v=2^{-p}(v_1+v_2)\), and for every \(x, x_1, x_2 \in X\), \(t>0\) and \(\delta \in {\mathcal {D}}_+^X\),

$$\begin{aligned} (\Lambda \delta )_x(t)&=T_{i=1}^{2} \delta _{\beta _i x}\left( \frac{A_0t}{\mu |A_i|}\right) = \min _{i=1,2} \left\{ \delta _{\beta _i x}\left( \frac{A_0t}{\mu |A_i|}\right) \right\} =\delta _{\frac{1}{2} x}\Bigg (\frac{t}{2}\Bigg ),\\ {\mathcal {T}}\theta (x_1, x_2)(t)&= T_{i=1}^2 \theta (\beta _i x_1, \beta _i x_2)\left( \frac{t}{2 }\right) = \theta \Bigg (\frac{1}{2} x_1, \frac{1}{2} x_2\Bigg )\Bigg (\frac{1}{2} t\Bigg ). \end{aligned}$$

According to Remark 3.7 (with \(a = b = 1/2\) and consequently \(e_0 := ba^{-p} = 2^{p-1} > 1\)), condition (3.10) is satisfied and

$$\begin{aligned} {\widehat{\sigma }}_x^0(t)\ge \epsilon _{x} \big (e_0^{-1}(e_0-1)t\big )=\frac{t}{t + v_0\Vert x\Vert ^p},\quad x\in X,\ t>0, \end{aligned}$$

with

$$\begin{aligned} v_0=\frac{v}{1-2^{1-p}}=\frac{v_1+v_2}{2^p-2}. \end{aligned}$$

Note yet that, for \(x_1, x_2 \in X\) and \(t > 0\), we have

$$\begin{aligned} \lim _{k\rightarrow \infty } \big ({\mathcal {T}}^k\theta \big )(x_1,x_2)(t)&= \lim _{k\rightarrow \infty } \theta (2^{-k} x_1, 2^{-k}x_2)(2^{-k} t)\\&=\lim _{k\rightarrow \infty } \frac{t}{t + 2^{k(1-p)}(v_1 \Vert x_1\Vert ^p + v_2 \Vert x_2\Vert ^p)} = 1. \end{aligned}$$

This means that (6.13) is fulfilled. Therefore, also our statement for \(p>1\) results from Theorem 6.3. \(\square \)
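The iteration used in this proof has a classical deterministic counterpart, which can be sketched numerically. The sketch below is an assumed illustration (ordinary norms in place of distribution functions), with \(D=0\), \(p=1/2<1\), \(v_1=v_2=c\) and the perturbed additive function \(f(x)=x+c|x|^p\): iterating \(Jf(x)=f(2x)/2\) recovers the additive solution within the bound \(v_0\Vert x\Vert ^p\) of (6.28).

```python
# Deterministic analogue of the corollary (assumed illustration): D = 0,
# p = 1/2 < 1, v1 = v2 = c, f(x) = x + c * |x|**p. Iterating Jf(x) = f(2x)/2
# recovers the additive solution psi(x) = x within the bound v0 * |x|**p.
p, c = 0.5, 1.0
f = lambda x: x + c * abs(x) ** p

def J_iter(x, k):
    """(J^k f)(x) = f(2**k * x) / 2**k."""
    return f(2 ** k * x) / 2 ** k

x = 3.0
psi = J_iter(x, 60)                  # approximates the additive solution x
v0 = (c + c) / (2 - 2 ** p)          # v0 = (v1 + v2) / |2 - 2**p|
assert abs(psi - x) < 1e-6           # the iterates converge to psi(x) = x
assert abs(f(x) - psi) <= v0 * abs(x) ** p + 1e-9   # the bound of (6.28)
```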

Remark 6.5

According to Remark 6.2 ((b0) and (b1)), the function D in Corollary 6.4 can be of the following form:

$$\begin{aligned} D(x,y)=z_0+u_1(x)+u_2(y)+u_3(x,y)+g(x+y)-g(x)-g(y),\quad x,y\in X, \end{aligned}$$

with any fixed: \(g:X\rightarrow Y\), additive \(u_1,u_2:X\rightarrow Y\), biadditive symmetric \(u_3:X^2\rightarrow Y\), and \(z_0\in Y\).