1 Introduction

The invariance property is an important aspect of the theory of means. There are two classical studies, by Lagrange [17] and Gauss [12], which can be considered the beginning of this field. It has been studied extensively by many authors since then. For example, Borwein and Borwein [5] extended some earlier ideas [11, 18, 29] and generalized the original iteration to a vector of continuous, strict means of arbitrary length. For several recent results about Gaussian products of means, see the papers by Baják–Páles [1,2,3,4], by Daróczy–Páles [7,8,9], by Głazowska [14, 15], by Jarczyk–Jarczyk [16], by Matkowski [19,20,21,22], by Matkowski–Páles [25], and by the author [26]. In the vast majority of these studies, there are assumptions which guarantee that the invariant mean is uniquely determined. There are also a few results where this is not the case; see Deręgowska–Pasteczka [10], Matkowski–Pasteczka [23, 24], and Pasteczka [27, 28]. The main outcome of these papers is that the uniqueness of invariant means is deeply related to strictness and continuity.

1.1 Basic Definition and Notions

Before we proceed further, recall that for a given \(p\in {\mathbb {N}}\) and an interval \(I \subset {\mathbb {R}}\), a p-variable mean on I is an arbitrary function \(M :I^p \rightarrow I\) satisfying the inequality

$$\begin{aligned} \begin{aligned} { \min (x)\le M(x)\le \max (x)\text { for all }x \in I^p. } \end{aligned} \end{aligned}$$
(1.1)

Property (1.1) is referred to as the mean property. If the inequalities in (1.1) are strict for every nonconstant vector x, then we say that the mean M is strict. Moreover, for such objects, we define natural properties like continuity, symmetry (the value of the mean does not depend on the order of its arguments), monotonicity (M is nondecreasing in each of its variables), etc.

A mean-type mapping is a selfmapping of \(I^p\) each of whose coordinates is a p-variable mean. More precisely, \({\textbf{M}}:I^p \rightarrow I^p\) is called a mean-type mapping if \({\textbf{M}}=(M_1,\dots ,M_p)\) for some p-variable means \(M_1,\dots ,M_p\) on I. In this framework, a function \(K :I^p\rightarrow {\mathbb {R}}\) is called \({\textbf{M}}\)-invariant if it solves the functional equation \(K \circ {\textbf{M}}=K\). Usually we restrict the solutions of this equation to the family of means and speak of \({\textbf{M}}\)-invariant means.
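To fix ideas, this framework can be illustrated numerically. In the sketch below (the helper names are ours and purely illustrative), the mean-type mapping built from the bivariate arithmetic and geometric means is iterated; its invariant mean is the classical arithmetic–geometric mean of Gauss, and the invariance equation \(K \circ {\textbf{M}}=K\) can be verified up to rounding errors.

```python
import math

def M(x, y):
    """Mean-type mapping built from the arithmetic and geometric means."""
    return ((x + y) / 2, math.sqrt(x * y))

def agm(x, y, tol=1e-14):
    """Invariant mean of M: the arithmetic-geometric mean of Gauss,
    computed as the common limit of the iterates of M."""
    while abs(x - y) > tol * max(x, y):
        x, y = M(x, y)
    return (x + y) / 2

x, y = 1.0, 9.0
# Mean property (1.1): min <= value <= max.
assert min(x, y) <= agm(x, y) <= max(x, y)
# Invariance: K o M = K, up to floating-point error.
assert abs(agm(*M(x, y)) - agm(x, y)) < 1e-12
```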

1.2 Posing the Problem

A natural problem is to give a condition on \({\textbf{M}}\) that guarantees the uniqueness of the \({\textbf{M}}\)-invariant mean. It turns out that this is equivalent to a certain convergence of the sequence of iterates of the selfmapping \({\textbf{M}}\) (which hereafter will be denoted by \({\textbf{M}}^n\)); cf. [24]. Three natural conditions have been proposed in the literature. Namely, if a continuous mean-type mapping \({\textbf{M}}=(M_1,\dots ,M_p) :I^p\rightarrow I^p\) satisfies one of the following three conditions:

  • each \(M_i\) is a strict mean;

  • \({\textbf{M}}\) is contractive, that is, \(\max {\textbf{M}}(x)-\min {\textbf{M}}(x) < \max (x)-\min (x)\) for every nonconstant vector \(x \in I^p\);

  • \({\textbf{M}}\) is weakly contractive, which means that for every nonconstant vector \(x \in I^p\) there exists a natural number n(x) such that

$$\begin{aligned} \begin{aligned} { \max {\textbf{M}}^{n(x)}(x)-\min {\textbf{M}}^{n(x)}(x) < \max (x)-\min (x),} \end{aligned} \end{aligned}$$
(1.2)

then there exists exactly one \({\textbf{M}}\)-invariant mean (cf. [5, 22], and [24], respectively). Obviously, the last condition is the most general; however, it is also the most difficult to verify.
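These conditions can also be probed numerically. The following sketch (the function names and the sampling strategy are ours) tests inequality (1.2) with a fixed value of n(x) at randomly sampled points; a failed assertion would disprove the property for the chosen n, while success is of course only a sanity check, not a proof.

```python
import math
import random

def spread(v):
    return max(v) - min(v)

def check_weak_contractivity(M, n, dim, lo=1.0, hi=10.0, trials=1000):
    """Sample nonconstant vectors and test inequality (1.2) with n(x) = n."""
    random.seed(0)
    for _ in range(trials):
        x = tuple(random.uniform(lo, hi) for _ in range(dim))
        if spread(x) == 0:
            continue  # skip (almost impossible) constant samples
        y = x
        for _ in range(n):
            y = M(y)
        if not spread(y) < spread(x):
            return False
    return True

# Example: the arithmetic/geometric pair is already contractive (n = 1).
M2 = lambda v: ((v[0] + v[1]) / 2, math.sqrt(v[0] * v[1]))
assert check_weak_contractivity(M2, 1, 2)
```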

Let us verify the last condition in an example.

Example 1

Take a quadruple of four-variable power means defined on the set \({\mathbb {R}}_+:=(0,+\infty )\). Namely, let \({\textbf{M}}:{\mathbb {R}}_+^4 \rightarrow {\mathbb {R}}_+^4\) be the mean-type mapping given by

$$\begin{aligned} \begin{aligned} {{\textbf{M}}(x,y,z,t):=\bigg (\frac{2xy}{x+y},\sqrt{yz},\frac{z+t}{2}, \sqrt{\frac{t^2+x^2}{2}}\bigg ). } \end{aligned} \end{aligned}$$
(1.3)

Then each coordinate of \({\textbf{M}}\) is a (four-variable) mean which depends on two variables only. Furthermore, \({\textbf{M}}\) is not contractive, since the contractivity condition fails for all vectors of the form \((a,a,b,b)\). On the other hand, one can prove that each coordinate of

$$\begin{aligned}\begin{aligned} {\textbf{M}}^2(x,y,z,t)&=\Bigg ( \frac{4 \, \sqrt{y z} x y}{ {2 x y + (x+y)\sqrt{y z}}}, \root 4 \of {\tfrac{1}{4} y z (t + z)^2},\\&\quad \qquad \frac{t+z+\sqrt{2 t^{2} + 2 x^{2}}}{4}, \frac{1}{2} \, \sqrt{t^{2} + x^{2} + \frac{8 \, x^{2} y^{2}}{{\left( x + y\right) }^{2}}}\,\Bigg ) \end{aligned} \end{aligned}$$

is a trivariate, strict mean on \({\mathbb {R}}_+\) (each coordinate depends on exactly three of the four variables). Thus \({\textbf{M}}^2\) is a contractive mean-type mapping. Consequently, (1.2) holds with \(n(x):=2\), \({\textbf{M}}\) is weakly contractive, and there exists a unique \({\textbf{M}}\)-invariant mean.
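The claims of this example admit a direct numerical sanity check (an illustrative sketch; the helper names are ours): one application of (1.3) does not decrease the spread of a vector of the form \((a,a,b,b)\), while two applications do, in accordance with \(n(x):=2\).

```python
import math

def M(v):
    """The mapping (1.3)."""
    x, y, z, t = v
    return (2*x*y/(x + y),             # harmonic mean of (x, y)
            math.sqrt(y*z),            # geometric mean of (y, z)
            (z + t)/2,                 # arithmetic mean of (z, t)
            math.sqrt((t*t + x*x)/2))  # quadratic mean of (t, x)

def spread(v):
    return max(v) - min(v)

v = (1.0, 1.0, 4.0, 4.0)
w = M(v)
# One step is not enough: on (a, a, b, b) the spread is unchanged.
assert spread(w) == spread(v)
# Two steps strictly decrease the spread, as (1.2) with n(x) = 2 predicts.
assert spread(M(w)) < spread(v)
```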

Observe that \({\textbf{M}}\) has quite an interesting structure. Namely, in each coordinate we take one of the classical means (harmonic, geometric, arithmetic, and quadratic), but we omit some arguments. The aim of this paper is to deliver a robust framework and to prove some natural properties for mean-type mappings of this sort.

1.3 Extended Means

Now we introduce the definition which is essential from the point of view of this manuscript. Namely, a p-variable mean \(M :I^p \rightarrow I\) is called an extended mean if it satisfies the mean property (1.1) and is independent of some variable. More precisely, there exists \(k \in \{1,\dots ,p\}\) such that \(M(x)=M(x')\) for all \(x,x' \in I^p\) satisfying \(x_i=x'_i\) for all \(i \in \{1,\dots ,p\}{\setminus } \{k\}\).

For a given \(d,p \in {\mathbb {N}}\), a sequence \(\alpha :=(\alpha _1,\dots ,\alpha _d) \in \{1,\dots ,p\}^d\), and a d-variable mean \(M :I^d\rightarrow I\) we define the mean \(M^{(p;\alpha )}:I^p\rightarrow I\) by

$$\begin{aligned} \begin{aligned} {M^{(p;\alpha )}(x_1,\dots ,x_p):=M(x_{\alpha _1},\dots ,x_{\alpha _d}) \text { for all }(x_1,\dots ,x_p)\in I^p. } \end{aligned} \end{aligned}$$
(1.4)

In the case \(d<p\), mean \(M^{(p;\alpha )}\) is a p-variable extended mean on I.

For example, if \({\mathcal {A}}:{\mathbb {R}}^2\rightarrow {\mathbb {R}}\) is the bivariate arithmetic mean, \(p \ge 3\), and \(\alpha =(2,3)\), then \({\mathcal {A}}^{(p;\alpha )}:I^p \rightarrow I\) is given by

$$\begin{aligned}\begin{aligned} {\mathcal {A}}^{(p;\alpha )}(x_1,\dots ,x_p)={\mathcal {A}}^{(p;2,3)}(x_1,\dots ,x_p)=\tfrac{x_2+x_3}{2}\text { for all }(x_1,\dots ,x_p)\in I^p. \end{aligned} \end{aligned}$$

The converse is also valid in some sense. Indeed, for every extended mean \(M :I^p \rightarrow I\), there exist \(d<p\), a sequence \(\alpha \in \{1,\dots ,p\}^d\), and a d-variable mean \(M_* :I^d\rightarrow I\) such that \(M=M_*^{(p;\alpha )}\). We are going to study the invariance of mean-type mappings which contain extended means.
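Definition (1.4) translates directly into code. The sketch below (the helper name `extend` is ours) produces the p-variable mean \(M^{(p;\alpha )}\) from a d-variable mean M and an index sequence \(\alpha \), and reproduces the arithmetic-mean example above.

```python
def extend(M, p, alpha):
    """Return the p-variable mean M^(p; alpha) of (1.4).

    alpha is a sequence of 1-based indices in {1, ..., p}."""
    def M_ext(*x):
        assert len(x) == p
        return M(*(x[a - 1] for a in alpha))
    return M_ext

# The example above: the bivariate arithmetic mean with alpha = (2, 3).
A = lambda u, v: (u + v) / 2
A23 = extend(A, 5, (2, 3))
assert A23(10, 4, 8, 1, 7) == 6  # (x2 + x3)/2 = (4 + 8)/2
```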

2 Uniformly Weak Contractive Mappings

It turns out that in this setup it is natural to define a property which lies between contractivity and weak contractivity. We say that a mean-type mapping \({\textbf{M}}:I^p \rightarrow I^p\) is uniformly weak contractive if there exists a natural number \(n_0 \in {\mathbb {N}}\) (which does not depend on x) such that

$$\begin{aligned}\begin{aligned} { \max {\textbf{M}}^{n_0}(x)-\min {\textbf{M}}^{n_0}(x) < \max (x)-\min (x),} \end{aligned} \end{aligned}$$

for every nonconstant vector \(x \in I^p\). Obviously, every contractive mean-type mapping is uniformly weak contractive, and every uniformly weak contractive mean-type mapping is weakly contractive. Moreover, due to Matkowski-Pasteczka [23, 24], it is known that:

  (1) for \(p=2\) every weakly contractive mean-type mapping is uniformly weak contractive with \(n_0=2\);

  (2) for every \(p>2\) there exists a weakly contractive mean-type mapping which is not uniformly weak contractive.

2.1 Invariance Property

In this section we show a counterpart of the result contained in [22, Theorem 1]. The main difference is that we generalize the original setting to the family of uniformly weakly contractive mean-type mappings.

Theorem 1

Let \(I \subset {\mathbb {R}}\) be an interval, \(p \in {\mathbb {N}}\), and let \({\textbf{M}}:I^p \rightarrow I^p\) be a uniformly weak contractive, continuous mean-type mapping. Then

  (i) for every \(n\in {\mathbb {N}}\), the mapping \({\textbf{M}}^n\) is a mean-type mapping;

  (ii) there is a continuous mean \(K:I^p \rightarrow I\) such that the sequence of iterates \(({\textbf{M}}^n)_{n=0}^\infty \) converges, uniformly on compact subsets of \(I^p\), to the mean-type mapping \({\textbf{K}}:I^p \rightarrow I^p\), \({\textbf{K}}=(K_1,\dots ,K_p)\), where

    $$\begin{aligned}\begin{aligned} {K_1=\dots =K_p=K;} \end{aligned} \end{aligned}$$

  (iii) \({\textbf{K}}:I^p \rightarrow I^p\) is \({\textbf{M}}\)-invariant, that is, \({\textbf{K}}={\textbf{K}}\circ {\textbf{M}}\) or, equivalently, the mean K is \({\textbf{M}}\)-invariant;

  (iv) the \({\textbf{M}}\)-invariant mean (mean-type mapping) is unique;

  (v) if \({\textbf{M}}=(M_1,\dots ,M_p)\) and all \(M_i\)-s are strict means, then so is K;

  (vi) if \({\textbf{M}}=(M_1,\dots ,M_p)\) and all \(M_i\)-s are nondecreasing with respect to each variable, then so is K;

  (vii) if \(I=(0,+\infty )\) and \({\textbf{M}}\) is positively homogeneous, then every iterate of \({\textbf{M}}\) and K are positively homogeneous.

Proof

The case when \({\textbf{M}}\) is contractive is due to [22, Theorem 1]. Moreover, parts (i) and (iv) are due to [24, Theorem 2], where they were proved for all continuous, weakly contractive mean-type mappings.

Now assume that \({\textbf{M}}:I^p \rightarrow I^p\) is uniformly weak contractive, and take \(k \in {\mathbb {N}}\) such that \({\textbf{M}}^k\) is contractive.

To show (ii), let \(\Lambda \) be the family of all compact subintervals of I. Observe that for every compact subset X of \(I^p\) there exists \(J \in \Lambda \) such that \(X \subset J^p\). Moreover,

$$\begin{aligned}\begin{aligned} {I^p=\bigcup _{J \in \Lambda } J^p,} \end{aligned} \end{aligned}$$

and \({\textbf{M}}(J^p)\subset J^p\) for every \(J \in \Lambda \). Therefore, it is sufficient to show that the assertion

$$\begin{aligned} \begin{aligned} {\textbf{M}}^n|_{J^p}\text { is uniformly convergent to }{\textbf{K}}|_{J^p}\\ \text {and }{\textbf{K}}|_{J^p}=(K,\dots ,K)|_{J^p} \end{aligned} \end{aligned}$$
(2.1)

holds for all \(J \in \Lambda \). Consequently, one may assume that I is compact.

Then, by the contractive case, assertion (ii) holds for the subsequence \(({\textbf{M}}^{kn})_{n=0}^\infty \). More precisely, there exists \({\textbf{K}}:I^p \rightarrow I^p\) with the required properties such that for every \(\varepsilon >0\) there exists \(n_\varepsilon \) such that

$$\begin{aligned}\begin{aligned} { \left\| {\textbf{M}}^{kn}(x)-{\textbf{K}}(x) \right\| _\infty <\varepsilon \quad \text {for all }x \in I^p \text { and }n\ge n_\varepsilon , } \end{aligned} \end{aligned}$$

where \(\left\| (v_1,\dots ,v_p) \right\| _\infty :=\max \{\left| v_1 \right| ,\dots ,\left| v_p \right| \}\). Equivalently, the property

$$\begin{aligned} \begin{aligned} { K(x)-\varepsilon< \min {\textbf{M}}^{s}(x) \le \max {\textbf{M}}^{s}(x) < K(x)+\varepsilon } \end{aligned} \end{aligned}$$
(2.2)

holds for \(s=kn_\varepsilon \). On the other hand, by the mean-value property (1.1), we know that \(M_i(y) \ge \min (y)\) for all \(i \in \{1,\dots ,p\}\) and \(y \in I^p\). Thus

$$\begin{aligned}\begin{aligned} { \min {\textbf{M}}(y)=\min (M_1(y),\dots ,M_p(y)) \ge \min (y) \quad \text {for all }y \in I^p. } \end{aligned} \end{aligned}$$

Thus, by a simple induction, the mapping \(s \mapsto \min {\textbf{M}}^{s}(x)\) is nondecreasing. Analogously, the mapping \(s \mapsto \max {\textbf{M}}^{s}(x)\) is nonincreasing. Hence, by the trivial inequality \(\min {\textbf{M}}^{s}(x)\le \max {\textbf{M}}^{s}(x)\), we obtain that (2.2) holds for all \(s\ge kn_\varepsilon \), which completes the proof of (ii).

Having proved this, for all \(x\in I^p\) we obtain

$$\begin{aligned}\begin{aligned} { {\textbf{K}}({\textbf{M}}(x))=\lim _{n \rightarrow \infty } {\textbf{M}}^n ({\textbf{M}}(x))=\lim _{n \rightarrow \infty } {\textbf{M}}^n (x)={\textbf{K}}(x), } \end{aligned} \end{aligned}$$

and therefore \({\textbf{K}}\circ {\textbf{M}}={\textbf{K}}\), which is (iii).

To prove (v) note that, under the assumption that all \(M_i\)-s are strict means, for an arbitrary nonconstant vector \(x \in I^p\) we have

$$\begin{aligned}\begin{aligned} { K(x)=K\circ {\textbf{M}}(x) \le \max \big ({\textbf{M}}(x)\big )<\max (x). } \end{aligned} \end{aligned}$$

Similarly we can show the inequality \(K(x)>\min (x)\).

Now we proceed to the proof of (vi). First, let us define the partial ordering \(\preceq \) on \(I^p\) by

$$\begin{aligned}\begin{aligned} { (x_1,\dots ,x_p) \preceq (y_1,\dots ,y_p)\text { if and only if }x_i\le y_i\text { for all }i \in \{1,\dots ,p\}. } \end{aligned} \end{aligned}$$

Then, since each \(M_i\) is nondecreasing, for every \(x,y\in I^p\) with \(x \preceq y\) we get \({\textbf{M}}(x)\preceq {\textbf{M}}(y)\). Thus, by a simple induction, \({\textbf{M}}^n(x)\preceq {\textbf{M}}^n(y)\) for all \(n \in {\mathbb {N}}\). Letting \(n \rightarrow \infty \), in view of (ii), we obtain that \({\textbf{K}}\) is monotone with respect to \(\preceq \), which implies (vi).

To show the last assertion, take \(c>0\) and \(x \in (0,+\infty )^p\) arbitrarily. Then, since \({\textbf{M}}\) is positively homogeneous, by (ii), we have

$$\begin{aligned}\begin{aligned} { {\textbf{K}}(c x)=\lim _{n \rightarrow \infty } {\textbf{M}}^n(c x)= \lim _{n \rightarrow \infty } c {\textbf{M}}^n(x)=c {\textbf{K}}(x), } \end{aligned} \end{aligned}$$

and thus \(K(c x)=c K(x)\). \(\square \)

3 \({\textbf{d}}\)-Averaging Mappings

In this section, we introduce an important subfamily of mean-type mappings and study their properties. Our aim is to reduce the properties of the subclass of mean-type mappings which appeared in the previous section to certain properties of directed graphs.

For the sake of completeness, let us formally set \({\mathbb {N}}:=\{1,2,\dots \}\) and \({\mathbb {N}}_p:=\{1,\dots ,p\}\) (where \(p\in {\mathbb {N}}\)). Then, for \(p \in {\mathbb {N}}\) and a vector \({\textbf{d}}=(d_1,\dots ,d_p)\in {\mathbb {N}}^p\), let \({\mathbb {N}}_p^{\textbf{d}}:={\mathbb {N}}_p^{d_1}\times \dots \times {\mathbb {N}}_p^{d_p}\). Using this notation, a sequence of means \({\textbf{M}}=(M_1,\dots ,M_p)\) is called a \({\textbf{d}}\)-averaging mapping on I if each \(M_i\) is a \(d_i\)-variable mean on I.

For a \({\textbf{d}}\)-averaging mapping \({\textbf{M}}\) and a vector of indices \(\alpha =(\alpha _1,\dots ,\alpha _p)\in {\mathbb {N}}_p^{d_1}\times \dots \times {\mathbb {N}}_p^{d_p}={\mathbb {N}}_p^{\textbf{d}}\), define the mean-type mapping \({\textbf{M}}_\alpha :I^p \rightarrow I^p\) by

$$\begin{aligned}\begin{aligned} {{\textbf{M}}_\alpha :=\Big (M_1^{(p;\alpha _1)},\dots ,M_p^{(p;\alpha _p)}\Big ); } \end{aligned} \end{aligned}$$

recall that the means \(M_i^{(p;\alpha _i)}\) were defined in (1.4). In a more explicit form we have

$$\begin{aligned} \begin{aligned} {[}{\textbf{M}}_\alpha ](x_1,\dots ,x_p)&=\Big (M_i^{(p;\alpha _i)}(x_1,\dots ,x_p)\Big )_{i=1}^p\\&=\Big (M_i\big (x_{\alpha _{i,1}},\dots ,x_{\alpha _{i,d_i}}\big )\Big )_{i=1}^p\\&=\Big (M_1\big (x_{\alpha _{1,1}},\dots ,x_{\alpha _{1,d_1}}\big ),\dots ,M_p\big (x_{\alpha _{p,1}},\dots ,x_{\alpha _{p,d_p}}\big )\Big ). \end{aligned} \end{aligned}$$
(3.1)

A few examples of this sort of mean-type mapping are presented in the last section. We aim to study the family of means which are \({\textbf{M}}_\alpha \)-invariant. An important part of our considerations will use some facts from graph theory.

Remark 1

Observe that from each element \(\alpha \in {\mathbb {N}}_p^{\textbf{d}}\) we can recover the value of p (based on the vector \({\textbf{d}}\)), but we cannot do the same from a single component \(\alpha _i \in {\mathbb {N}}_p^{d_i}\) (here \(i \in \{1,\dots ,p\}\)). Therefore, it is natural to use the notations \({\textbf{M}}_\alpha \) and \(M_i^{(p;\alpha _i)}\).

3.1 General Properties of Directed Graphs

Now we recall some elementary facts concerning graphs. For details, we refer the reader to the classical book [13].

A digraph is a pair \(G=(V,E)\), where V is a finite set of vertices and \(E\subset V \times V\) is a set of edges. For each \(v \in V\) we denote by \(N_G^-(v)\) and \(N_G^+(v)\) the sets of in-neighbors and out-neighbors of v, respectively. More precisely, \(N_G^-(v)=\{w \in V :(w,v)\in E\}\) and \(N_G^+(v)=\{w \in V :(v,w)\in E\}\). Edges of the form \((v,v)\) for \(v \in V\) are called loops.

A sequence \((v_0,\dots ,v_n)\) of elements of V such that \((v_{i-1},v_{i})\in E\) for all \(i \in \{1,\dots ,n\}\) is called a walk from \(v_0\) to \(v_n\); the number n is the length of the walk. If for all \(v, w \in V\) there exists a walk from v to w, then G is called irreducible.

A cycle in a digraph is a nonempty walk in which only the first and the last vertices are equal. A digraph is said to be aperiodic if there is no integer \(k > 1\) that divides the length of every cycle of the graph. A digraph which is simultaneously irreducible and aperiodic is called ergodic. We also need two lemmas which will be useful in the remaining part of this paper.

Lemma 1

Let \(G=(V,E)\) be an ergodic digraph. Then there exists \(q_0\) such that for all \(q \ge q_0\), and \(v,w \in V\) there exists a walk from v to w of length exactly q.

Proof

First, since G is ergodic, there exist \(k>1\) and cycles \((C_i)_{i=1}^k\) in G of lengths \((m_i)_{i=1}^k\), respectively, such that \(\gcd (m_1,\dots ,m_k)=1\). Then there exists \(n_0\in {\mathbb {N}}\) such that every number \(n \ge n_0\) can be expressed as

$$\begin{aligned}\begin{aligned} {n=m_1\alpha _1+\dots +m_k\alpha _k } \end{aligned} \end{aligned}$$

for some nonnegative integers \(\alpha _1,\dots ,\alpha _k\) (see for example [6]). Next, for \(v,w \in V\), denote

$$\begin{aligned}\begin{aligned} P_{v,w}:=\bigg \{s \in {\mathbb {N}}:\begin{array}{l} \text { there exists a walk from }v\text { to }w\text { of length }s\\ \text { which contains every vertex in }G \end{array} \bigg \} \end{aligned} \end{aligned}$$

Since G is irreducible, for all \(v,w \in V\) there exists a walk \(A_{vw}\) from v to w which contains all vertices of G (vertices may appear many times); we denote its length by \(l_{v,w}\). Then \(A_{vw}\) has a nonempty intersection with every cycle \((C_i)_{i=1}^k\) (since it contains all vertices). Consequently, we may extend \(A_{vw}\) by “taking a detour” through any cycle \(C_i\), possibly many times. Thus

$$\begin{aligned}\begin{aligned} {\big \{l_{v,w}+m_1\alpha _1+\dots +m_k\alpha _k \ :\ \alpha _1,\dots ,\alpha _k \ge 0\big \} \subseteq P_{v,w}. } \end{aligned} \end{aligned}$$

Therefore \(q \in P_{v,w}\) for all \(q \ge l_{v,w}+n_0\). Consequently, \(q \in \bigcap \nolimits _{v,w \in V} P_{v,w}\) for every \(q \ge \max \big \{l_{v,w}:v,w\in V\big \}+n_0=:q_0\), which completes the proof. \(\square \)
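On small digraphs, Lemma 1 can be verified exhaustively: the (v,w) entry of the q-th Boolean power of the adjacency matrix records whether a walk of length exactly q from v to w exists. A sketch (0-based vertices; the helper names are ours):

```python
def walks_exist(p, edges, q):
    """Boolean matrix A^q: entry (v, w) says a walk of length q exists."""
    A = [[(i, j) in edges for j in range(p)] for i in range(p)]
    P = [[i == j for j in range(p)] for i in range(p)]  # identity = A^0
    for _ in range(q):
        P = [[any(P[i][k] and A[k][j] for k in range(p))
              for j in range(p)] for i in range(p)]
    return P

# Ergodic example: a directed 3-cycle plus a loop at vertex 0
# (cycle lengths 3 and 1, so the gcd condition of the proof holds).
edges = {(0, 0), (0, 1), (1, 2), (2, 0)}
# Here q0 = 4 suffices: walks of every length q >= 4 exist between
# every ordered pair of vertices.
for q in range(4, 12):
    P = walks_exist(3, edges, q)
    assert all(all(row) for row in P)
```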

Lemma 2

For an ergodic digraph \(G=(V,E)\) define \(T_G :\{-1,0,1\}^V \rightarrow \{-1,0,1\}^V\) as follows: for an arbitrary \(c :V \rightarrow \{-1,0,1\}\) and \(v \in V\) we set

$$\begin{aligned} \begin{aligned} T_G(c)(v):={\left\{ \begin{array}{ll} 1 &{} \text { if }c(w)=1 \text { for all }w \in N_G^-(v);\\ -1 &{} \text { if }c(w)=-1 \text { for all }w \in N_G^-(v);\\ 0 &{} \text { otherwise}. \end{array}\right. } \end{aligned} \end{aligned}$$
(3.2)

Then for every function \(c_0 :V \rightarrow \{-1,0,1\}\) there exists a number \({\bar{c}} \in \{-1,0,1\}\) such that \(T_G^{n}(c_0) \equiv \bar{c}\) for all \(n \ge 3^{|V|}\).

Moreover \({\bar{c}}=0\) unless \(c_0 \equiv 1\) or \(c_0 \equiv -1\).

Proof

For the sake of brevity, for all \(n\in {\mathbb {N}}\) we write \(c_n:=T_G^n(c_0)\). First observe that if \(c_{n_0}\) is constant for some \(n_0 \in {\mathbb {N}}\) then, since constant functions are fixed points of \(T_G\), we obtain \(c_n = c_{n_0}\) for all \(n \ge n_0\). Thus, in order to prove the main part of the statement, it is sufficient to show that \(c_{n_0}\) is constant for some \(n_0 \in \{0,\dots ,3^{|V|}\}\).

Since \(c_{n+1}=T_G(c_n)\) and the domain of \(T_G\) has \(3^{|V|}\) elements, the sequence \((c_n)\) is eventually periodic; that is, there exist \(\alpha ,n_0 \in \{1,\dots ,3^{|V|}\}\) such that

$$\begin{aligned} \begin{aligned} { c_{n+\alpha }=c_n\quad \text {for all }n\ge n_0. } \end{aligned} \end{aligned}$$
(3.3)

Next, we show that \(c_{n_0}\) is a constant function. First, since \(c_{n_0}(V)\) is a nonempty subset of \(\{-1,0,1\}\), we know that at least one of the following conditions is valid:

$$\begin{aligned}\begin{aligned} \textsc {A.}\ c_{n_0}(V)=\{0\}, \quad \textsc {B.}\,1\in c_{n_0}(V), \quad \textsc {C.}\,{-1}\in c_{n_0}(V). \end{aligned} \end{aligned}$$

This splits our proof into three (possibly overlapping) cases.

Case A. If \(c_{n_0}(V)=\{0\}\), that is \(c_{n_0}\equiv 0\), then \(T_G^n(c_0)=T_G^{n-n_0}(c_{n_0})\equiv 0\) for all \(n \ge n_0\), and we are done.

Case B. If \(1 \in c_{n_0}(V)\) then take \(v_0\in V\) such that \(c_{n_0}(v_0)=1\). By (3.3) we have

$$\begin{aligned} \begin{aligned}{ c_{n_0+k\alpha }(v_0)=1\text { for all }k \in {\mathbb {N}}. } \end{aligned} \end{aligned}$$
(3.4)

Next, define a sequence \((V_n)_{n=0}^\infty \) of sets of vertices (that is, subsets of V) by \(V_0:=\{v_0\}\) and

$$\begin{aligned}\begin{aligned} { V_p:=\big \{v \in V :\text {there exists a walk from }v\text { to }v_0\text { of length }p\big \},\ p\ge 1. } \end{aligned} \end{aligned}$$

Observe that \(w \in V_{p+1}\) if and only if there exists an edge from w to some vertex in \(V_p\), that is, \(V_{p+1}=N_G^-(V_p)\) (\(p\ge 0\)). Thus, if \(c_n(v)=1\) for all \(v \in V_p\), then \(c_{n-1}(v)=1\) for all \(v \in V_{p+1}\). By a simple induction, in view of (3.4), one gets

$$\begin{aligned} \begin{aligned} {c_{0}(v)=1\quad \text {for all }k \in {\mathbb {N}}\text { and }v \in V_{n_0+k\alpha }.} \end{aligned} \end{aligned}$$
(3.5)

By Lemma 1, since G is ergodic, there exists \(q_0\) such that \(V_q=V\) for all \(q\ge q_0\). Take \(k_0\) such that \(n_0+k_0\alpha \ge q_0\). Then, by (3.5), we have \(c_0(v)=1\) for all \(v\in V\).

Case C. If \(-1 \in c_{n_0}(V)\), then, analogously to the previous case, one gets \(c_0\equiv -1\).

Combining the cases above, we have proved that \(c_{n_0}\) is a constant function, that is, \(c_{n_0} \equiv {\bar{c}}\) for some \({\bar{c}} \in \{-1,0,1\}\). Since \(n_0 \le 3^{|V|}\) and \(T_G(c_{n_0})=c_{n_0}\), we obtain

$$\begin{aligned}\begin{aligned} {T_G^n(c_0)=T_G^{n-n_0}\big (T_G^{n_0}(c_0)\big )=T_G^{n-n_0}(c_{n_0})=c_{n_0}\equiv {\bar{c}}\quad \text {for all }n\ge 3^{|V|}. } \end{aligned} \end{aligned}$$

Moreover, if \({\bar{c}} \ne 0\), then \(1 \in c_{n_0}(V)\) or \(-1 \in c_{n_0}(V)\), and therefore \(c_0 \equiv 1\) or \(c_0 \equiv -1\), as was proved in cases B and C, respectively. \(\square \)
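The operator (3.2) is straightforward to implement, and Lemma 2 can then be tested exhaustively on a small ergodic digraph. A sketch (0-based vertices; the helper names are ours):

```python
from itertools import product

def T(edges, p, c):
    """One application of the operator T_G from (3.2)."""
    out = []
    for v in range(p):
        preds = [c[w] for w in range(p) if (w, v) in edges]  # in-neighbors
        if all(x == 1 for x in preds):
            out.append(1)
        elif all(x == -1 for x in preds):
            out.append(-1)
        else:
            out.append(0)
    return tuple(out)

# Ergodic digraph: a directed 3-cycle with a loop at vertex 0.
edges, p = {(0, 0), (0, 1), (1, 2), (2, 0)}, 3
for c0 in product((-1, 0, 1), repeat=p):
    c = c0
    for _ in range(3 ** p):  # Lemma 2: constant after at most 3^|V| steps
        c = T(edges, p, c)
    assert len(set(c)) == 1          # c is constant
    if c0 not in {(1, 1, 1), (-1, -1, -1)}:
        assert c == (0, 0, 0)        # and zero unless c0 was constant +-1
```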

After this extensive introduction, for a given \(p\in {\mathbb {N}}\), \({\textbf{d}}=(d_1,\dots ,d_p)\in {\mathbb {N}}^p\), and \(\alpha \in {\mathbb {N}}_p^{\textbf{d}}\), we define the \(\alpha \)-incidence graph \(G_\alpha =(V_\alpha ,E_\alpha )\) as follows: \(V_\alpha :={\mathbb {N}}_p\) and \(E_\alpha :=\{(\alpha _{i,j},i) :i \in {\mathbb {N}}_p \text { and }j \in {\mathbb {N}}_{d_i}\}\).

In view of (3.1), \(x_k\) appears as an argument in \([{\textbf{M}}_\alpha ]_i\) if and only if \(G_\alpha \) contains an edge from k to i. We are going to prove that the natural assumption guaranteeing that the \({\textbf{M}}_\alpha \)-invariant mean is uniquely determined is that \(G_\alpha \) is ergodic. Therefore, for \(p \in {\mathbb {N}}\) and \({\textbf{d}}\in {\mathbb {N}}^p\), we set

$$\begin{aligned}\begin{aligned} {{\,\textrm{Erg}\,}}({\textbf{d}}):=\{\alpha \in {\mathbb {N}}_p^{\textbf{d}}:G_\alpha \text { is ergodic}\}. \end{aligned} \end{aligned}$$
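The construction of \(G_\alpha \) and the membership test for \({{\,\textrm{Erg}\,}}({\textbf{d}})\) are easy to mechanize. In the sketch below (the helper names are ours), ergodicity is checked via the standard equivalent condition that some Boolean power of the adjacency matrix is all-positive; as an illustration, the index vector of the mapping (1.3) from Example 1 is classified.

```python
def incidence_graph(alpha):
    """E_alpha = {(alpha[i][j], i)} with 1-based vertices, as in the text."""
    return {(a, i) for i, block in enumerate(alpha, start=1) for a in block}

def is_ergodic(p, edges):
    """Primitivity test: G is ergodic iff some Boolean power of its
    adjacency matrix is all-positive (Wielandt's bound (p-1)^2 + 1
    on the exponent suffices)."""
    A = [[(i + 1, j + 1) in edges for j in range(p)] for i in range(p)]
    P = A
    for _ in range((p - 1) ** 2 + 1):
        if all(all(row) for row in P):
            return True
        P = [[any(P[i][k] and A[k][j] for k in range(p))
              for j in range(p)] for i in range(p)]
    return False

# Index vector of the mapping (1.3): coordinates use the argument pairs
# (x, y), (y, z), (z, t), (t, x), i.e. alpha = ((1,2), (2,3), (3,4), (4,1)).
alpha = ((1, 2), (2, 3), (3, 4), (4, 1))
assert is_ergodic(4, incidence_graph(alpha))  # alpha lies in Erg(d)
```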

3.2 Graphs of Averaging Mappings

Now we show the first nontrivial result referring directly to \({\textbf{d}}\)-averaging mappings.

Proposition 1

Let \(I \subset {\mathbb {R}}\) be an interval, \(p \in {\mathbb {N}}\), \({\textbf{d}}\in {\mathbb {N}}^p\), \(\alpha \in {{\,\textrm{Erg}\,}}({\textbf{d}})\), and \({\textbf{M}}=(M_1,\dots ,M_p)\) be a \({\textbf{d}}\)-averaging mapping. If all \(M_i\)-s are strict means, then \({\textbf{M}}_\alpha \) is uniformly weak contractive.

Moreover, for all \(x \in I^p\), either \({\textbf{M}}_\alpha ^{3^p}(x)\) is a constant vector or

$$\begin{aligned} \begin{aligned}{ \min (x)<\min {\textbf{M}}_\alpha ^{3^p}(x)<\max {\textbf{M}}_\alpha ^{3^p}(x)<\max (x). } \end{aligned} \end{aligned}$$
(3.6)

Proof

Let \(\Gamma \) be the family of all nonconstant vectors in \(I^p\). For \(n \in \{0,1,\dots \}\) and \(x=(x_1,\dots ,x_p)\in \Gamma \), define \(c_{x,n} :{\mathbb {N}}_p \rightarrow \{-1,0,1\}\) by

$$\begin{aligned}\begin{aligned} c_{x,0}(i)&:={\left\{ \begin{array}{ll} 1 &{} \text { if }x_i=\max (x);\\ -1 &{} \text { if }x_i=\min (x);\\ 0 &{} \text { otherwise}; \end{array}\right. }\\ c_{x,n}(i)&:={\left\{ \begin{array}{ll} 1 &{} \text { if }[{\textbf{M}}_\alpha ^n(x)]_i=\max (x);\\ -1 &{} \text { if }[{\textbf{M}}_\alpha ^n(x)]_i=\min (x);\\ 0 &{} \text { otherwise} \end{array}\right. }&\text { for }n\ge 1. \end{aligned} \end{aligned}$$

If \(c_{x,n}(i)=1\) for some \(x \in \Gamma \) and \(n \ge 1\), then \([{\textbf{M}}_\alpha ^n(x)]_i=\max (x)\); thus

$$\begin{aligned} M_i\big ([{\textbf{M}}_\alpha ^{n-1}(x)]_{\alpha _{i,1}},\dots ,[{\textbf{M}}_\alpha ^{n-1}(x)]_{\alpha _{i,d_i}}\big )=\max (x). \end{aligned}$$

Since \(M_i\) is a strict mean, one has

$$\begin{aligned}\begin{aligned} {[{\textbf{M}}_\alpha ^{n-1}(x)]_{\alpha _{i,j}}=\max (x)\quad \text {for all }j \in {\mathbb {N}}_{d_i}. } \end{aligned} \end{aligned}$$

In other words, \(c_{x,n-1}(\alpha _{i,j})=1\) for all \(j \in {\mathbb {N}}_{d_i}\). Therefore, if \(c_{x,n}(i)=1\) for some \(x \in \Gamma \) and \(n \in {\mathbb {N}}\), then \(c_{x,n-1}(v)=1\) whenever \(v \in N_{G_\alpha }^-(i)\).

The converse implication is also valid. Indeed, if \(c_{x,n-1}(v)=1\) for all \(v \in N_{G_\alpha }^-(i)\) then \([{\textbf{M}}_\alpha ^{n-1}(x)]_k=\max (x)\) for all \(k \in \{\alpha _{i,1},\dots ,\alpha _{i,d_i}\}\) which yields \([{\textbf{M}}_\alpha ^{n}(x)]_i=\max (x)\). Similarly \(c_{x,n}(i)=-1\) if and only if \(c_{x,n-1}(v)=-1\) for all \(v \in N_{G_\alpha }^-(i)\).

Consequently, for every \(x \in \Gamma \) and \(n \ge 0\) we have \(c_{x,n+1}=T_{G_\alpha }(c_{x,n})\), where \(T_{G_\alpha }\) is defined by (3.2). Thus \(c_{x,n}=T_{G_\alpha }^n(c_{x,0})\). By Lemma 2, \(c_x:=c_{x,3^p}\) is a constant function for every \(x \in \Gamma \). Now we have three cases.

First, if \(c_x \equiv -1\) or \(c_x \equiv 1\), then \({\textbf{M}}_\alpha ^{3^p}(x)\) is a constant vector and, since x is nonconstant, one has

$$\begin{aligned} \begin{aligned} { \max {\textbf{M}}_\alpha ^{3^p}(x)-\min {\textbf{M}}_\alpha ^{3^p}(x)<\max (x) -\min (x). } \end{aligned} \end{aligned}$$
(3.7)

Next, if \(c_x \equiv 0\) then \(\min (x)< [{\textbf{M}}_\alpha ^{3^p}(x)]_i < \max (x)\) for all \(i \in {\mathbb {N}}_p\), which also yields (3.7). Thus, by the mean-value property, one gets

$$\begin{aligned} \max {\textbf{M}}_\alpha ^{n}(x)-\min {\textbf{M}}_\alpha ^{n}(x)<\max (x) -\min (x) \quad \text {for all }x \in \Gamma \text { and }n\ge 3^p, \end{aligned}$$

which implies the uniform weak contractivity of \({\textbf{M}}_\alpha \). Finally, note that the moreover part follows easily from the proof above. \(\square \)
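Proposition 1 can be illustrated numerically. The sketch below (an ad hoc example of ours, not taken from the text) builds a 3-variable mapping \({\textbf{M}}_\alpha \) from strict means with an ergodic incidence graph and checks the contractivity estimate with \(n_0=3^p\) on random data.

```python
import math
import random

def M_alpha(v):
    """A d-averaging mapping with d = (2, 2, 2) built from strict means
    (arithmetic, geometric, harmonic) and alpha = ((1,2), (2,3), (3,1));
    its incidence graph has a loop at every vertex plus a 3-cycle,
    hence is ergodic."""
    x1, x2, x3 = v
    return ((x1 + x2) / 2,
            math.sqrt(x2 * x3),
            2 * x3 * x1 / (x3 + x1))

def spread(v):
    return max(v) - min(v)

random.seed(1)
p = 3
for _ in range(100):
    x = tuple(random.uniform(1.0, 10.0) for _ in range(p))
    y = x
    for _ in range(3 ** p):
        y = M_alpha(y)
    # Uniform weak contractivity with n0 = 3^p, as in Proposition 1.
    assert spread(y) < spread(x)
```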

3.3 Invariance Problem

Following the convention used by several authors (for example in [22]) and in Theorem 1 above, we are going to bind several results into a single theorem. The idea behind this result (and simultaneously the sketch of its proof) is to combine Proposition 1 (which states that, under certain conditions, \({\textbf{M}}_\alpha \) is uniformly weak contractive) with Theorem 1 (which provides a number of properties of such mappings). Before we formulate this result, which should be considered the most important outcome of this paper, let us underline two issues. First, we shuffle the order of items compared with these results (in our opinion, the new order is more natural). Second, some parts require additional effort in the proof, since we cannot apply Theorem 1 directly.

Theorem 2

Let \(I \subset {\mathbb {R}}\) be an interval, \(p \in {\mathbb {N}}\), \({\textbf{d}}\in {\mathbb {N}}^p\), and \({\textbf{M}}=(M_1,\dots ,M_p)\) be a \({\textbf{d}}\)-averaging mapping on I such that all \(M_i\)-s are continuous and strict. Then, for all \(\alpha \in {{\,\textrm{Erg}\,}}({\textbf{d}})\),

  (a) there exists a unique \({\textbf{M}}_\alpha \)-invariant mean \(K_\alpha :I^p \rightarrow I\);

  (b) \(K_\alpha \) is continuous;

  (c) \(K_\alpha \) is strict;

  (d) \(({\textbf{M}}_\alpha ^n)_{n=1}^\infty \) converges, uniformly on compact subsets of \(I^p\), to the mean-type mapping \({\textbf{K}}_\alpha :I^p \rightarrow I^p\), \({\textbf{K}}_\alpha =(K_\alpha ,\dots ,K_\alpha )\);

  (e) \({\textbf{K}}_\alpha :I^p \rightarrow I^p\) is \({\textbf{M}}_\alpha \)-invariant, that is, \({\textbf{K}}_\alpha ={\textbf{K}}_\alpha \circ {\textbf{M}}_\alpha \);

  (f) if \(M_1,\dots ,M_p\) are nondecreasing with respect to each variable, then so is \(K_\alpha \);

  (g) if \(I=(0,+\infty )\) and \(M_1,\dots ,M_p\) are positively homogeneous, then every iterate of \({\textbf{M}}_\alpha \) and \(K_\alpha \) are positively homogeneous.

Proof

By Proposition 1 we know that \({\textbf{M}}_\alpha \) is uniformly weakly contractive. Then the vast majority of the proof is based on Theorem 1; more precisely: (iv) yields (a), (ii) implies (b) and (d), and (iii) implies (e).

Moreover, for all \(i \in {\mathbb {N}}_p\), if \(M_i\) is nondecreasing then the mapping

$$\begin{aligned}\begin{aligned} {I^p \ni (x_1,\dots ,x_p) \mapsto M_i(x_{\alpha _{i,1}},\dots ,x_{\alpha _{i,d_i}})} \end{aligned} \end{aligned}$$

is also nondecreasing (in each of its variables), and therefore \([{\textbf{M}}_\alpha ]_i\) is nondecreasing. Since \(i \in {\mathbb {N}}_p\) was arbitrary, in view of Theorem 1 part (vi), we get (f). Analogously, using part (vii) of the same result, one can prove (g).

At this stage, (c) is the only part remaining to be proved; that is, we need to show that \(K_\alpha \) is a strict mean on I. To this end, take a nonconstant vector \(x \in I^p\). We show that \(K_\alpha (x)<\max (x)\) (the proof of the second inequality is analogous).

By the moreover part of Proposition 1, either (3.6) holds or \({\textbf{M}}_\alpha ^{3^p}(x)\) is a constant vector. In the first case, since \(K_\alpha \) is \({\textbf{M}}_\alpha \)-invariant, using the identity \(K_\alpha =K_\alpha \circ {\textbf{M}}_\alpha ^{3^p}\) and the mean-value property of \(K_\alpha \), we obtain

$$\begin{aligned}\begin{aligned} { K_\alpha (x)=K_\alpha \circ {\textbf{M}}_\alpha ^{3^p} (x)\le \max \big ({\textbf{M}}_\alpha ^{3^p} (x)\big ) < \max (x). } \end{aligned} \end{aligned}$$

If \({\textbf{M}}_\alpha ^{3^p}(x)\) is a constant vector (that is \({\textbf{M}}_\alpha ^{3^p}(x)={\textbf{K}}_\alpha (x)\)) then let \(n_0\in \{0,\dots ,3^p\}\) be the smallest number such that \({\textbf{M}}_\alpha ^{n_0}(x)={\textbf{K}}_\alpha (x)\).

Obviously \(n_0>0\) since x is nonconstant. Moreover, \(y:={\textbf{M}}_\alpha ^{n_0-1}(x)\) is a nonconstant vector with \({\textbf{K}}_\alpha (x)={\textbf{M}}_\alpha (y)\).

Then \(y_k=\min (y)\) for some \(k \in {\mathbb {N}}_p\). Since \(G_\alpha \) is irreducible, we have \((k,i) \in E_\alpha \) for some \(i \in {\mathbb {N}}_p\). Therefore, by the definition of \(G_\alpha \), one gets \(\alpha _{i,j}=k\) for some \(j \in {\mathbb {N}}_{d_i}\). Then \(\max (y_{\alpha _{i,1}},\dots ,y_{\alpha _{i,d_i}}) \le \max (y)\) and \(y_{\alpha _{i,j}}=y_k=\min (y)<\max (y)\). Therefore, since \(M_i\) is strict,

$$\begin{aligned}\begin{aligned} {K_\alpha (x)=[{\textbf{K}}_\alpha (x)]_i=[{\textbf{M}}_\alpha (y)]_i=M_i(y_{\alpha _{i,1}},\dots ,y_{\alpha _{i,d_i}})<\max (y). } \end{aligned} \end{aligned}$$

However, since \({\textbf{M}}_\alpha \) is a mean-type mapping, we obtain

$$\begin{aligned}\begin{aligned} { \max (y)=\max ({\textbf{M}}_\alpha ^{n_0-1}(x))\le \max (x),} \end{aligned} \end{aligned}$$

and thus we get \(K_\alpha (x)<\max (x)\).

Similarly, one can prove the inequality \(K_\alpha (x)>\min (x)\). Since x is an arbitrary nonconstant vector in \(I^p\), we obtain that \(K_\alpha \) is a strict mean, which was the last unproved part of this statement. \(\square \)

4 Applications and Examples

4.1 An Application to Functional Equations

As a first application, we solve the functional equation \(F \circ {\textbf{M}}_\alpha =F\). Obviously, under standard conditions, it has a unique solution in the family of means. We show that if we extend the considered family to all functions which are continuous on the diagonal, then every solution can still be described in terms of the invariant mean.

Theorem 3

Let \(I \subset {\mathbb {R}}\) be an interval, \(p \in {\mathbb {N}}\), \({\textbf{d}}\in {\mathbb {N}}^p\), \(\alpha \in {{\,\textrm{Erg}\,}}({\textbf{d}})\), and \({\textbf{M}}=(M_1,\dots ,M_p)\) be a \({\textbf{d}}\)-averaging mapping on I such that all \(M_i\)-s are strict.

A function \(F:I^p \rightarrow {\mathbb {R}}\) that is continuous on the diagonal \(\Delta (I^p):=\{(u_1,\dots ,u_p)\in I^p :u_1=\dots =u_p\}\) is invariant with respect to the mean-type mapping \({\textbf{M}}_\alpha \), i.e. F satisfies the functional equation

$$\begin{aligned}\begin{aligned} { F\circ {\textbf{M}}_\alpha =F } \end{aligned} \end{aligned}$$

if, and only if, there is a continuous function \(\varphi :I \rightarrow {\mathbb {R}}\) such that \(F=\varphi \circ K_\alpha \), where \(K_\alpha :I^p \rightarrow I\) is the unique \({\textbf{M}}_\alpha \)-invariant mean.

Proof

Take an \({\textbf{M}}_\alpha \)-invariant function \(F :I^p \rightarrow {\mathbb {R}}\) that is continuous on the diagonal. Then, for all \(n \in {\mathbb {N}}\), we have \(F \circ {\textbf{M}}_\alpha ^n=F\). In the limit case, by Theorem 2 part (d), since F is continuous on the diagonal, we get \(F=F \circ {\textbf{K}}_\alpha \). Thus \(F=\varphi \circ K_\alpha \) for \(\varphi (x):=F(x,\dots ,x)\).

Conversely, if \(F=\varphi \circ K_\alpha \) then, since \(K_\alpha \) is \({\textbf{M}}_\alpha \)-invariant, for all \(x \in I^p\) we get

$$\begin{aligned}\begin{aligned} { F \circ {\textbf{M}}_\alpha (x)=\varphi \circ K_\alpha \circ {\textbf{M}}_\alpha (x)=\varphi \circ K_\alpha (x)=F(x),} \end{aligned} \end{aligned}$$

which completes the proof. \(\square \)
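Theorem 3 can be illustrated numerically. The following Python sketch (our illustration, not part of the argument) uses the bivariate arithmetic-harmonic mean-type mapping, whose unique invariant mean is the geometric mean (the classical fact recalled in Example 3 below), and checks \(F\circ {\textbf{M}}=F\) for \(F=\varphi \circ K\) with the arbitrary continuous choice \(\varphi =\log \):

```python
import math

# Bivariate arithmetic-harmonic mean-type mapping on (0, inf)^2.
# Its unique invariant mean is the geometric mean K(x, y) = sqrt(x*y).
def M(x, y):
    return 2 * x * y / (x + y), (x + y) / 2

def K(x, y):
    return math.sqrt(x * y)

# Every diagonal-continuous invariant F has the form F = phi o K;
# here phi = log is one continuous choice.
def F(x, y):
    return math.log(K(x, y))

x, y = 3.0, 7.0
assert abs(F(*M(x, y)) - F(x, y)) < 1e-12  # F o M = F
```

Note that here the invariance holds exactly, since \(K\circ {\textbf{M}}=K\) is an algebraic identity for this particular pair of means.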

4.2 Classical Application of Theorem 2

In this section we apply Theorem 2 to show that the mean-type mapping given by (1.3) has a unique invariant mean. This was one of the motivations to write this paper. In this and the subsequent section, all mean-type mappings contain only power means (the particular means that build the mean-type mapping \({\textbf{M}}\) do not affect the convergence of the sequence of iterates \(({\textbf{M}}_\alpha ^n)\) provided that they are all strict). Recall that the n-variable power mean of order s is defined by

$$\begin{aligned}\begin{aligned} {\mathcal {P}}_s(x_1,\dots ,x_n)={\left\{ \begin{array}{ll} \Big (\dfrac{x_1^s+\cdots +x_n^s}{n}\Big )^{1/s} &{}\quad \text { if } s\in {\mathbb {R}}\setminus \{0\}, \\ \root n \of {x_1\cdots x_n} &{}\quad \text { if } s = 0, \end{array}\right. } \end{aligned} \end{aligned}$$

where \(n \in {\mathbb {N}}\) and \(x_1,\dots ,x_n \in {\mathbb {R}}_+\). For simplicity, let us assume that all means in this section are on \({\mathbb {R}}_+\).
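The definition above translates directly into code; a short Python sketch (the function name is ours):

```python
import math

def power_mean(s, xs):
    """n-variable power mean P_s of positive numbers xs."""
    n = len(xs)
    if s == 0:
        # geometric mean, computed via logarithms
        return math.exp(sum(math.log(x) for x in xs) / n)
    return (sum(x ** s for x in xs) / n) ** (1 / s)
```

For instance, `power_mean(1, [1, 3])` is the arithmetic mean `2.0`, and `power_mean(0, [2, 8])` is the geometric mean `4.0`.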

Example 2

Let \({\textbf{M}}:{\mathbb {R}}_+^4 \rightarrow {\mathbb {R}}_+^4\) be given by (1.3). We show that there exists a unique \({\textbf{M}}\)-invariant mean \(K :{\mathbb {R}}_+^4 \rightarrow {\mathbb {R}}_+\). Additionally, K is continuous and strict.

Fig. 1 Graph \(G_\alpha \) related to Example 2

Indeed, in the framework of \({\textbf{d}}\)-averaging mappings, we express \({\textbf{M}}\) defined in (1.3) as \({\bar{{\textbf{M}}}}_\alpha \), where \({\bar{{\textbf{M}}}}\) consists of bivariate power means, that is

$$\begin{aligned}\begin{aligned} {{\bar{{\textbf{M}}}}=({\mathcal {P}}_{-1},{\mathcal {P}}_0,{\mathcal {P}}_1,{\mathcal {P}}_2),\text { and }\alpha =\big ((1,2),(2,3),(3,4),(4,1)\big ). } \end{aligned} \end{aligned}$$

The vector \({\textbf{d}}\) contains the lengths of the elements in \(\alpha \) (since \(\alpha \in {\mathbb {N}}_4^{\textbf{d}}\)), thus \({\textbf{d}}=(2,2,2,2)\). Obviously all means in \({\bar{{\textbf{M}}}}\), being power means, are continuous and strict. Moreover, the \(\alpha \)-incidence graph (see Figure 1) is aperiodic (since every vertex has a loop) and irreducible (since (4321) is its Hamiltonian cycle). Consequently, the \(\alpha \)-incidence graph is ergodic.

Thus, in view of Theorem 2, there exists exactly one \({\textbf{M}}\)-invariant mean \(K :{\mathbb {R}}_+^4 \rightarrow {\mathbb {R}}_+\). Moreover, by the same theorem, we know that it is continuous and strict.
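The convergence asserted here can be checked numerically. In the following Python sketch (our illustration; \(\alpha \) is rewritten with 0-based indices), the iterates of \({\bar{{\textbf{M}}}}_\alpha \) collapse to a constant vector whose common value approximates \(K\):

```python
import math

def power_mean(s, xs):
    n = len(xs)
    if s == 0:
        return math.exp(sum(math.log(x) for x in xs) / n)
    return (sum(x ** s for x in xs) / n) ** (1 / s)

# \bar M = (P_{-1}, P_0, P_1, P_2) with
# alpha = ((1,2), (2,3), (3,4), (4,1)), written 0-based below.
ORDERS = (-1, 0, 1, 2)
ALPHA = ((0, 1), (1, 2), (2, 3), (3, 0))

def M_alpha(v):
    return tuple(power_mean(s, [v[j] for j in idx])
                 for s, idx in zip(ORDERS, ALPHA))

v = (1.0, 2.0, 5.0, 10.0)
for _ in range(200):
    v = M_alpha(v)
# the iterates collapse to the constant vector K(1,2,5,10)*(1,...,1)
assert max(v) - min(v) < 1e-9
assert 1.0 <= v[0] <= 10.0  # mean property of the invariant mean
```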

4.3 Mean-Type Mappings Without Ergodic Incidence Graph

In this last section we present a few difficulties that arise in this setting. In each example we include the incidence graph, which helps to illustrate the problems that appear when dealing with the invariance problem.

In the first example we show what happens if the incidence graph is disconnected.

Example 3

(Disconnected incidence graph). Let \(p=4\),

$$\begin{aligned}\begin{aligned} {\textbf{d}}&=(2,2,2,2), \\ \alpha&=\big ((1,2),(1,2),(3,4),(3,4)\big ) \in {\mathbb {N}}_4^{\textbf{d}},\\ {\textbf{M}}&=({\mathcal {P}}_{-1},{\mathcal {P}}_1,{\mathcal {P}}_{-1},{\mathcal {P}}_1). \end{aligned} \end{aligned}$$

Then the mean-type mapping \({\textbf{M}}_\alpha :{\mathbb {R}}_+^4\rightarrow {\mathbb {R}}_+^4\) is of the form

$$\begin{aligned}\begin{aligned} {\textbf{M}}_\alpha (x,y,z,t)&=\bigg (\frac{2xy}{x+y},\frac{x+y}{2},\frac{2zt}{z+t},\frac{z+t}{2}\,\bigg ). \end{aligned} \end{aligned}$$
Fig. 2 Graph \(G_\alpha \) related to Example 3

Observe that, in this case, \({\textbf{M}}_\alpha \) is not weakly contractive; for example,

$$\begin{aligned}\begin{aligned} { {\textbf{M}}_\alpha ^n(1,1,2,2)=(1,1,2,2)\quad \text {for all }n\in {\mathbb {N}}. } \end{aligned} \end{aligned}$$

As a matter of fact, we can split the mapping \({\textbf{M}}_\alpha \) into two bivariate mappings. Then, using the classical result that the arithmetic-harmonic mean coincides with the geometric mean, we obtain

$$\begin{aligned} \begin{aligned} \lim _{n\rightarrow \infty }{\textbf{M}}^n_\alpha (x,y,z,t)&=(\sqrt{xy},\sqrt{xy},\sqrt{zt},\sqrt{zt}). \end{aligned} \end{aligned}$$

As a result we have two natural \({\textbf{M}}_\alpha \)-invariant means. Namely

$$\begin{aligned}\begin{aligned} { K_1(x,y,z,t)=\sqrt{xy}\text { and }K_2(x,y,z,t)=\sqrt{zt}. } \end{aligned} \end{aligned}$$

Therefore \(K(x,y,z,t)=S(\sqrt{xy},\sqrt{zt})\) is \({\textbf{M}}_\alpha \)-invariant for every mean \(S :{\mathbb {R}}_+^2 \rightarrow {\mathbb {R}}_+\). Note that we cannot exclude that there are other \({\textbf{M}}_\alpha \)-invariant means; however, all continuous solutions are of this form.
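Both observations in this example (the nonconstant fixed point and the block-wise limit) can be verified numerically; a short Python sketch (ours):

```python
# Mean-type mapping from Example 3: two decoupled
# arithmetic-harmonic blocks.
def M_alpha(v):
    x, y, z, t = v
    return (2*x*y/(x + y), (x + y)/2, 2*z*t/(z + t), (z + t)/2)

# the nonconstant vector (1, 1, 2, 2) is a fixed point,
# so M_alpha is not weakly contractive
assert M_alpha((1.0, 1.0, 2.0, 2.0)) == (1.0, 1.0, 2.0, 2.0)

v = (1.0, 4.0, 9.0, 25.0)
for _ in range(60):
    v = M_alpha(v)
# block-wise limits: sqrt(1*4) = 2 and sqrt(9*25) = 15
for c, limit in zip(v, (2.0, 2.0, 15.0, 15.0)):
    assert abs(c - limit) < 1e-9
```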

In the next two examples we deal with weakly connected graphs which are not irreducible. We believe that these examples can be generalized to a result covering all weakly connected graphs; at this stage, however, this remains a conjecture.

Example 4

(Weakly connected incidence graph I). Let \(p=4\),

$$\begin{aligned}\begin{aligned} {\textbf{d}}&=(3,3,3,3),\\ \alpha&=\big ((1,2,3),(1,2,3),(1,2,3),(1,2,3)\big )\in {\mathbb {N}}_4^{\textbf{d}}, \\ {\textbf{M}}&=({\mathcal {P}}_{-1},{\mathcal {P}}_0,{\mathcal {P}}_1,{\mathcal {P}}_2). \end{aligned} \end{aligned}$$
Fig. 3 Graph \(G_\alpha \) related to Example 4

Then the mean-type mapping \({\textbf{M}}_\alpha :{\mathbb {R}}_+^4\rightarrow {\mathbb {R}}_+^4\) is of the form

$$\begin{aligned}\begin{aligned}{ {\textbf{M}}_\alpha (x,y,z,t)=\bigg (\frac{3xyz}{xy+yz+zx},\root 3 \of {xyz}, \frac{x+y+z}{3},\sqrt{\frac{x^2+y^2+z^2}{3}}\,\bigg ). } \end{aligned} \end{aligned}$$

Thus \({\textbf{M}}_\alpha \) does not depend on the last coordinate. As a result, we cannot claim that the \({\textbf{M}}_\alpha \)-invariant mean is, for example, monotone or strict. In this case, however, the \({\textbf{M}}_\alpha \)-invariant mean is uniquely determined, since \({\textbf{M}}_\alpha \) restricted to the first three variables (in both domain and values) admits a unique invariant mean.
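The independence from the last coordinate can be observed directly; in the Python sketch below (ours), iterating from two starting vectors that differ only in \(t\) yields the same invariant-mean value:

```python
import math

def power_mean(s, xs):
    n = len(xs)
    if s == 0:
        return math.exp(sum(math.log(x) for x in xs) / n)
    return (sum(x ** s for x in xs) / n) ** (1 / s)

# Mean-type mapping from Example 4: every coordinate is a power mean
# of (x, y, z); the last input coordinate is never read.
def M_alpha(v):
    x, y, z, _ = v
    return tuple(power_mean(s, [x, y, z]) for s in (-1, 0, 1, 2))

def K(v, n=100):
    # approximate the invariant mean by iterating to the diagonal
    for _ in range(n):
        v = M_alpha(v)
    return v[0]

# the invariant mean ignores t entirely
assert abs(K((1.0, 2.0, 3.0, 100.0)) - K((1.0, 2.0, 3.0, 0.001))) < 1e-12
```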

Example 5

(Weakly connected incidence graph II). Let \(p=4\),

$$\begin{aligned} \begin{aligned} {\textbf{d}}&=(2,2,2,2),\\ \alpha&=\big ((1,2),(1,2),(2,4),(3,4)\big )\in {\mathbb {N}}_4^{\textbf{d}}, \\ {\textbf{M}}&=({\mathcal {P}}_{-1},{\mathcal {P}}_1,{\mathcal {P}}_{-1},{\mathcal {P}}_1). \end{aligned} \end{aligned}$$

Then \({\textbf{M}}_\alpha \) is of the form

$$\begin{aligned}\begin{aligned} {\textbf{M}}_\alpha (x,y,z,t)&=\bigg (\frac{2xy}{x+y},\frac{x+y}{2},\frac{2yt}{y+t},\frac{z+t}{2}\,\bigg ). \end{aligned} \end{aligned}$$
Fig. 4 Graph \(G_\alpha \) related to Example 5

Analogously to Example 3, we have

$$\begin{aligned}\begin{aligned}{ \lim _{n \rightarrow \infty } {\textbf{M}}_\alpha ^n(x,y,z,t)=(\sqrt{xy},\sqrt{xy},\sqrt{xy},\sqrt{xy}). } \end{aligned} \end{aligned}$$

Therefore \(K(x,y,z,t)=\sqrt{xy}\) is the only \({\textbf{M}}_\alpha \)-invariant mean (see [24, Theorem 1] for the detailed proof). Remarkably, it depends on neither z nor t.
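Numerically, all four coordinates of the iterates indeed approach \(\sqrt{xy}\); a Python sketch (ours):

```python
# Mean-type mapping from Example 5; the third coordinate
# couples the (z, t)-block to y.
def M_alpha(v):
    x, y, z, t = v
    return (2*x*y/(x + y), (x + y)/2, 2*y*t/(y + t), (z + t)/2)

v = (1.0, 9.0, 5.0, 7.0)
for _ in range(300):
    v = M_alpha(v)
# every coordinate tends to K(x, y, z, t) = sqrt(x*y) = sqrt(1*9) = 3
for c in v:
    assert abs(c - 3.0) < 1e-9
```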

Finally, we show an example with a periodic incidence graph, with a conjecture analogous to the one for weakly connected graphs.

Example 6

(Periodic incidence graph). Let \(p=4\),

$$\begin{aligned}\begin{aligned} {\textbf{d}}&=(2,2,2,2),\\ \alpha&=\big ((3,4),(3,4),(1,2),(1,2)\big )\in {\mathbb {N}}_4^{\textbf{d}}, \\ {\textbf{M}}&=({\mathcal {P}}_{-1},{\mathcal {P}}_1,{\mathcal {P}}_{-1},{\mathcal {P}}_1). \end{aligned} \end{aligned}$$

Then \({\textbf{M}}_\alpha \) is of the form

$$\begin{aligned}\begin{aligned} {\textbf{M}}_\alpha (x,y,z,t)&=\bigg (\frac{2zt}{z+t},\frac{z+t}{2},\frac{2xy}{x+y},\frac{x+y}{2}\,\bigg ). \end{aligned} \end{aligned}$$
Fig. 5 Graph \(G_\alpha \) related to Example 6

Then, after some computations, we have

$$\begin{aligned}\begin{aligned} {\textbf{M}}_\alpha ^2(x,y,z,t)&=\bigg (\frac{4xy(x+y)}{x^2+6xy+y^2},\frac{x^2+6xy+y^2}{4(x+y)},\\&\qquad \qquad \frac{4zt(z+t)}{z^2+6zt+t^2},\frac{z^2+6zt+t^2}{4(z+t)}\,\bigg ). \end{aligned} \end{aligned}$$

Analogously to Example 3 and the previous example, we have

$$\begin{aligned}\begin{aligned} \lim _{n\rightarrow \infty } {\textbf{M}}_\alpha ^{2n}(x,y,z,t)=(\sqrt{xy},\sqrt{xy},\sqrt{zt},\sqrt{zt}), \end{aligned} \end{aligned}$$

consequently

$$\begin{aligned}\begin{aligned} \lim _{n\rightarrow \infty } {\textbf{M}}_\alpha ^{2n+1}(x,y,z,t)&={\textbf{M}}_\alpha \Big (\lim _{n\rightarrow \infty } {\textbf{M}}_\alpha ^{2n}(x,y,z,t)\Big )\\&= (\sqrt{zt},\sqrt{zt},\sqrt{xy},\sqrt{xy}). \end{aligned} \end{aligned}$$

Therefore \(K(x,y,z,t)=S(\sqrt{xy},\sqrt{zt})\) is \({\textbf{M}}_\alpha \)-invariant for every symmetric mean \(S :{\mathbb {R}}_+^2 \rightarrow {\mathbb {R}}_+\). Moreover, using ideas from Example 3, one can show that all continuous \({\textbf{M}}_\alpha \)-invariant means are of this form.
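The two subsequential limits of this periodic example can be confirmed numerically; in the Python sketch below (ours), the even iterates converge to one constant-by-blocks vector and a single further application swaps the blocks:

```python
# Mean-type mapping from Example 6: the periodic (block-swapping) case.
def M_alpha(v):
    x, y, z, t = v
    return (2*z*t/(z + t), (z + t)/2, 2*x*y/(x + y), (x + y)/2)

v = (1.0, 4.0, 9.0, 25.0)
for _ in range(60):
    v = M_alpha(M_alpha(v))  # even iterates only
# even iterates converge to (sqrt(xy), sqrt(xy), sqrt(zt), sqrt(zt))
for c, limit in zip(v, (2.0, 2.0, 15.0, 15.0)):
    assert abs(c - limit) < 1e-9

# one more application swaps the blocks (odd subsequential limit)
w = M_alpha(v)
for c, limit in zip(w, (15.0, 15.0, 2.0, 2.0)):
    assert abs(c - limit) < 1e-9
```

In particular, any \(K(x,y,z,t)=S(\sqrt{xy},\sqrt{zt})\) with S symmetric takes the same value on both subsequential limits, consistent with the invariance claimed above.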