1 Introduction

Let us begin by setting up some basic terminology and notation.

Definition 1

Let S be a Banach space and let T be a closed linear subspace of S. A linear operator \(P: S \rightarrow T\) is called a projection if \(P|_T = id|_T\). We denote by \({\mathcal {P}}(S; T)\) the set of all continuous linear projections, endowed with the operator norm.

Definition 2

A projection \(P_0\in {\mathcal {P}}(S; T)\) is called minimal if

$$\begin{aligned} \Vert P_0\Vert = \inf \{\Vert P\Vert : P \in {\mathcal {P}}(S; T)\} =: \lambda (T; S). \end{aligned}$$

In the theory of minimal projections three main problems are considered: existence and uniqueness of minimal projections [15,16,17, 19,20,21,22,23,24,25,26,27,28,29], finding estimates of the constant \(\lambda (T; S)\) [2,3,4,5, 7,8,9,10,11,12,13] and finding concrete formulas for minimal projections [6, 9, 18, 24]. As one can see, this theory has been widely studied by many authors, also recently [1, 11, 12, 14, 18, 23].

Let \(X=\{1, 2, 3, \ldots , n \}\), \(Y=\{1, 2, 3, \ldots , m \}\), \(Z=\{1, 2, 3, \ldots , r \}\), where \(3\le n, m, r <+\infty \) are fixed. Define \(S=M(n,m,r)\) as the set of all functions from \(X\times Y \times Z\) into \({\mathbb {R}}\) (or \({\mathbb {C}}\)). Let T be the subspace of S consisting of all sums of functions which each depend on one variable only, i.e.

$$\begin{aligned} T=\{f\in S : f(x,y,z)=g(x)+h(y)+i(z); g: X \rightarrow {\mathbb {R}}, h: Y \rightarrow {\mathbb {R}}, i: Z \rightarrow {\mathbb {R}}\} (\text{ or } {\mathbb {C}}). \end{aligned}$$

It is convenient to consider these spaces as spaces of “three-dimensional” matrices with real (or complex) entries. Let M(1, 1, r) be the subspace of S consisting of those matrices \((a_{ijk})\) for which \(a_{i_1j_1k}=a_{i_2j_2k}\) for any \(i_1,i_2\in \{1, 2, \ldots , n\}\), \(j_1,j_2\in \{1, 2, \ldots , m\}\) and \(k\in \{1, 2, \ldots , r\}\). Analogously we define M(1, m, 1) and M(n, 1, 1). Then we can write \(T=M(n,1,1)+M(1,m,1)+M(1,1,r)\).
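Since all spaces here are finite dimensional, the decomposition \(T=M(n,1,1)+M(1,m,1)+M(1,1,r)\) lends itself to a numerical sketch. The following Python fragment (using numpy; the sizes, seed, and variable names are our own illustrative choices, not part of the text) builds the spanning family of one-variable indicator tensors and checks that a sum \(g(x)+h(y)+i(z)\) lies in its span, while a generic tensor does not; the rank n+m+r-2 reflects the fact that the three families share only the constant direction.

```python
import numpy as np

n, m, r = 3, 4, 5  # arbitrary sizes with 3 <= n, m, r

# Spanning family of T: indicator tensors depending on one index only.
vecs = []
for a in range(n):
    u = np.zeros((n, m, r)); u[a, :, :] = 1.0; vecs.append(u.ravel())
for b in range(m):
    v = np.zeros((n, m, r)); v[:, b, :] = 1.0; vecs.append(v.ravel())
for c in range(r):
    w = np.zeros((n, m, r)); w[:, :, c] = 1.0; vecs.append(w.ravel())
B = np.stack(vecs, axis=1)              # columns span T

dim_T = np.linalg.matrix_rank(B)        # the three constant directions overlap

proj = B @ np.linalg.pinv(B)            # orthogonal projection onto span(B) = T

rng = np.random.default_rng(0)
g, h, i_ = rng.standard_normal(n), rng.standard_normal(m), rng.standard_normal(r)
f = (g[:, None, None] + h[None, :, None] + i_[None, None, :]).ravel()
in_T = np.allclose(proj @ f, f)         # f(x,y,z) = g(x) + h(y) + i(z) lies in T

f_generic = rng.standard_normal(n * m * r)
generic_in_T = np.allclose(proj @ f_generic, f_generic)
```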

Definition 3

Let \(\Pi _n\) be a set of all permutations of \(\{1,2,\ldots ,n\}\). Define

$$\begin{aligned} \Pi _n\times \Pi _m\times \Pi _r=\{\pi =\alpha \times \beta \times \gamma , \text{ where }: \alpha \in \Pi _n, \beta \in \Pi _m, \gamma \in \Pi _r\}. \end{aligned}$$

Then \(G=\Pi _n\times \Pi _m\times \Pi _r\) is a group with componentwise composition of permutations as the natural operation. Let \(A_{\alpha \times \beta \times \gamma }\) be the transformation of S associated with the permutation \(\alpha \times \beta \times \gamma \). It means that

$$\begin{aligned}A_{\alpha \times \beta \times \gamma }(x)(i,j,k)=x(\alpha (i),\beta (j),\gamma (k)).\end{aligned}$$

Every element of a group G can be identified with a composition of permutations of matrix planes: parallel to plane XY, parallel to plane XZ and parallel to plane YZ. For more details about that interpretation see [18].
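Since X, Y, Z are finite, the action of \(A_{\alpha \times \beta \times \gamma }\) is simply a permutation of entries. The following sketch (numpy; sizes and seed are our own illustrative choices) realizes the action and confirms two facts used later: \(A_\pi ^{-1}=A_{\pi ^{-1}}\), and that permuting entries preserves every p-norm, so each \(A_\pi \) is an isometry for such norms.

```python
import numpy as np

n, m, r = 3, 4, 5

def act(pi, x):
    """A_{alpha x beta x gamma}(x)(i,j,k) = x(alpha(i), beta(j), gamma(k))."""
    alpha, beta, gamma = pi
    return x[np.ix_(alpha, beta, gamma)]

def inverse(p):
    """Inverse of a permutation given as an index array."""
    q = np.empty_like(p)
    q[p] = np.arange(len(p))
    return q

rng = np.random.default_rng(1)
x = rng.standard_normal((n, m, r))
pi = (rng.permutation(n), rng.permutation(m), rng.permutation(r))
pi_inv = tuple(inverse(p) for p in pi)

# A_pi is invertible with A_pi^{-1} = A_{pi^{-1}} ...
roundtrip = np.allclose(act(pi_inv, act(pi, x)), x)

# ... and entry permutation preserves every p-norm (sample: p = 3).
same_norm = np.isclose(np.linalg.norm(act(pi, x).ravel(), 3),
                       np.linalg.norm(x.ravel(), 3))
```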

Let us recall

Definition 4

An element x of a Banach space X is called a smooth point if there exists a unique supporting functional \(f_x\) (i.e., a norm-one functional with \(f_x(x)=\Vert x\Vert \)).

If every x from the unit sphere of X is smooth, then X is called a smooth space.

From now on we assume that for any permutation \({\alpha \times \beta \times \gamma }\) the operator \(A_{\alpha \times \beta \times \gamma }\) is an isometry and that the space S is smooth.

Definition 5

Let X be a Banach space and G be a topological group such that for every \(g\in G\) there is a continuous linear operator \(A_g: X \rightarrow X\) for which:

$$\begin{aligned}A_e=I, \qquad A_{g_1g_2}=A_{g_1}A_{g_2} \text{ for } \text{ every } g_1, g_2\in G.\end{aligned}$$

Then we say that G acts as a group of linear operators on X.

Definition 6

We say that \(L: X \rightarrow X\) commutes with G if \(A_gLA_{g^{-1}}=L\) for every \(g\in G\).

The aim of this paper is to generalize a result of Skrzypek [27], who proved the uniqueness of the minimal projection in standard smooth matrix spaces. In particular, we prove that there is a unique minimal projection from S onto T. Our approach is based on Skrzypek’s method, which relies on two main theorems: Rudin’s theorem [26] and the Chalmers–Metcalf theorem [6]. In this paper, we also use a theorem proved by Lewicki and Skrzypek in [22].

Theorem 1

(Rudin) Let X be a Banach space and let W be its complemented subspace (\({\mathcal {P}}(X,W)\ne \emptyset \)). Assume that W is a G-invariant subspace, where G is a compact topological group acting by isomorphisms on X such that

  • for every \(x\in X\) the map \(g \mapsto A_g(x)\) is continuous,

  • \(A_g(W)\subset W\) for every \(g\in G\).

If there exists a bounded linear projection \(P: X \rightarrow W\), then there exists a bounded linear projection \(Q_P\) from X onto W which commutes with G and is of the form:

$$\begin{aligned} Q_Px=\int _G A_{g^{-1}}PA_gx \text {d}\mu ' (g), \end{aligned}$$
(1)

where \(\mu '\) is the normalized Haar measure on G and \(\int _G f(g) \text {d}\mu ' (g)\) is the Pettis integral of f.
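For a finite group such as \(G=\Pi _n\times \Pi _m\times \Pi _r\), the Pettis integral in (1) reduces to the average \(Q_P=\frac{1}{|G|}\sum _{g\in G} A_{g^{-1}}PA_g\). The sketch below (our own numerical illustration, with \(n=m=r=3\) to keep \(|G|=216\) small; the oblique projection P and the seed are arbitrary choices) averages a non-symmetric projection onto T and checks that the result is again a projection onto T which commutes with G.

```python
import numpy as np
from itertools import permutations

n = m = r = 3
N = n * m * r

# Basis of T: u_1..u_{n-1}, v_1..v_{m-1}, w_1..w_{r-1}, t.
cols = []
for a in range(n - 1):
    u = np.zeros((n, m, r)); u[a, :, :] = 1.0; cols.append(u.ravel())
for b in range(m - 1):
    v = np.zeros((n, m, r)); v[:, b, :] = 1.0; cols.append(v.ravel())
for c in range(r - 1):
    w = np.zeros((n, m, r)); w[:, :, c] = 1.0; cols.append(w.ravel())
cols.append(np.ones(N))
Bt = np.stack(cols, axis=1)

def A_matrix(pi):
    """Permutation matrix of A_pi acting on vectorized tensors."""
    alpha, beta, gamma = pi
    idx = np.ravel_multi_index(
        tuple(np.meshgrid(alpha, beta, gamma, indexing="ij")), (n, m, r)).ravel()
    M = np.zeros((N, N)); M[np.arange(N), idx] = 1.0
    return M

# An arbitrary oblique projection P with range T and P|_T = id.
rng = np.random.default_rng(2)
W = rng.standard_normal((Bt.shape[1], N))
P = Bt @ np.linalg.inv(W @ Bt) @ W

G = [(np.array(a), np.array(b), np.array(c))
     for a in permutations(range(n))
     for b in permutations(range(m))
     for c in permutations(range(r))]

# Q_P = (1/|G|) sum_g A_{g^{-1}} P A_g; for permutation matrices A_g^{-1} = A_g^T.
Q = sum(A_matrix(pi).T @ P @ A_matrix(pi) for pi in G) / len(G)

is_projection = np.allclose(Q @ Q, Q)
fixes_T = np.allclose(Q @ Bt, Bt)      # Q|_T = id
Ad = A_matrix(G[17])                   # a sample group element
commutes = np.allclose(Ad @ Q, Q @ Ad)
```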

Moreover, the following theorem holds true.

Theorem 2

Let the assumptions of Rudin’s Theorem be satisfied. Assume furthermore that for every \( g \in G\) the operator \(A_g\) is a linear surjective isometry of X. If there is a unique projection \(Q\in {\mathcal {P}}(X,W)\) commuting with G, then Q is a minimal projection of X onto W.

For the proof and more details see [18, Theorem 2].

These theorems are very useful for finding, in some cases, explicit formulas for minimal projections [18] but, in general, they do not imply uniqueness, because there may exist a minimal projection which does not commute with G. To prove uniqueness we use the following theorems; but first let us recall a definition.

Definition 7

A pair \((x,y)\in S(X^{**})\times S(X^*)\) is called an extreme pair for \(P\in {\mathcal {P}}(X,W)\) if \(y(P^{**}x)=\Vert P\Vert \), where \(P^{**}: X^{**} \rightarrow W\) and S(X) denotes the unit sphere of X. Let \({\mathcal {E}}(P)\) be the set of all extreme pairs of P.

The spaces S, T are finite dimensional, so the set \({\mathcal {E}}(P)\) is nonempty. Furthermore, \(X^{**}\) can be identified with X. It is also known that for such spaces \({\mathcal {P}}(S; T)\ne \emptyset \) (see [10]).

Theorem 3

(Chalmers, Metcalf) A projection \(P\in {\mathcal {P}}(X,W)\) is minimal if and only if the closed convex hull of \(\{y\otimes x\}_{(x,y)\in {\mathcal {E}}(P)}\) contains an operator \(E_P\) for which W is an invariant subspace.

The operator \(E_P\) (called the Chalmers–Metcalf operator) is given by the formula:

$$\begin{aligned}E_P=\int _{{\mathcal {E}}(P)} y\otimes x d\mu '' (x,y): X \rightarrow X^{**},\end{aligned}$$

where \(\mu ''\) is a Borel probability measure on \({\mathcal {E}}(P)\).

Theorem 4

(Lewicki, Skrzypek) Let X be a Banach space and let W be a finite-dimensional subspace of X. Assume that \(X^{**}\) is a smooth space. Assume furthermore that for a minimal projection P there exists a Chalmers–Metcalf operator \(E_P\) such that \(E_P|_W\) is invertible. Then P is the unique minimal projection.

2 Preliminary results

First let us prove some technical lemmas which will be used in the main proof. Lemma 1 and Theorems 5 and 6 are easy generalizations of their analogs from [27]. For completeness, we present their proofs.

Lemma 1

(Compare with [27] Lemma 1.4) For any \(y\in S^*\) and \(\pi \in G\) we have

$$\begin{aligned}y(A_\pi ^{-1}(s))=(A_\pi y)(s), \qquad s\in S.\end{aligned}$$

Proof

Since \(\dim S<+\infty \), any \(y\in S^*\) can be written as

$$\begin{aligned}y(x)=\sum _{i,j,k}y_{i,j,k}\cdot x_{i,j,k},\end{aligned}$$

where \(\displaystyle x=\sum _{i,j,k}x_{i,j,k} \cdot e_{i,j,k}\) and elements \(y_{i,j,k}\in {\mathbb {K}}\) do not depend on x. Since \(A_{\alpha \times \beta \times \gamma } ^{-1}=A_{\alpha ^{-1}\times \beta ^{-1}\times \gamma ^{-1}}\):

$$\begin{aligned} y(A_\pi ^{-1}(s))= & {} \sum _{i,j,k}y_{i,j,k}\cdot (A_{\alpha \times \beta \times \gamma } ^{-1}(s))_{i,j,k}= \sum _{i,j,k}y_{i,j,k}\cdot (A_{\alpha ^{-1}\times \beta ^{-1}\times \gamma ^{-1}}(s))_{i,j,k} \\= & {} \sum _{i,j,k}y_{i,j,k}\cdot s_{\alpha ^{-1}(i),\beta ^{-1}(j),\gamma ^{-1}(k)}= \sum _{i,j,k}y_{\alpha (i),\beta (j),\gamma (k)}\cdot s_{i,j,k} \\= & {} \sum _{i,j,k}\left( A_{\alpha \times \beta \times \gamma }(y)\right) _{i,j,k}\cdot s_{i,j,k}=A_\pi y (s). \end{aligned}$$

\(\square \)
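The identity of Lemma 1 is easy to confirm numerically; the following sketch (numpy; sizes and seed are our own illustrative choices) checks it for a random functional, tensor and permutation, identifying \(y\in S^*\) with its coefficient tensor.

```python
import numpy as np

n, m, r = 3, 4, 5

def act(pi, z):
    """A_pi(z)(i,j,k) = z(alpha(i), beta(j), gamma(k))."""
    alpha, beta, gamma = pi
    return z[np.ix_(alpha, beta, gamma)]

def inverse(p):
    q = np.empty_like(p)
    q[p] = np.arange(len(p))
    return q

rng = np.random.default_rng(3)
y = rng.standard_normal((n, m, r))   # coefficients of y in y(x) = sum y_ijk x_ijk
s = rng.standard_normal((n, m, r))
pi = (rng.permutation(n), rng.permutation(m), rng.permutation(r))
pi_inv = tuple(inverse(p) for p in pi)

lhs = np.sum(y * act(pi_inv, s))     # y(A_pi^{-1}(s))
rhs = np.sum(act(pi, y) * s)         # (A_pi y)(s)
```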

Theorem 5

(Compare with [27] Theorem 1.5) Let \(Q\in {\mathcal {P}}(S,T)\) commute with G. If \((x,y)\in {\mathcal {E}}(Q)\), then \((A_\pi x, A_\pi y)\in {\mathcal {E}}(Q)\) for any permutation \(\pi \in \Pi _n\times \Pi _m\times \Pi _r\).

Proof

If Q commutes with \(\Pi _n\times \Pi _m\times \Pi _r\) then from Lemma 1 we get

$$\begin{aligned} \Vert Q\Vert =y(Qx)=y((A_\pi )^{-1}QA_\pi (x))=y((A_\pi )^{-1}(QA_\pi (x)))=(A_\pi y)(Q(A_\pi x)). \end{aligned}$$

\(\square \)

For our further considerations let us introduce the Chalmers–Metcalf operator

$$\begin{aligned}E_Q=\frac{1}{|G|}\sum _{\pi \in G} (A_\pi y) \otimes (A_\pi x) : S \rightarrow S,\end{aligned}$$

where \((x,y)\) is a fixed extreme pair (\((x,y)\in {\mathcal {E}}(Q)\)).

Theorem 6

(Compare [27] Theorem 1.7) \(E_Q\) commutes with G.

Proof

Fix \(\delta \in G\). From Lemma 1 we get that for every \(s\in S\)

$$\begin{aligned} |G|\cdot E_Q \circ A_\delta (s)= & {} \sum _\pi (A_\pi y) \otimes (A_\pi x)(A_\delta s)=\sum _\pi (A_\pi y)(A_\delta s) \cdot (A_\pi x) \\= & {} \sum _\pi (A_\delta ^{-1} A_\pi y)(s) \cdot A_\pi (x)=\sum _\pi (A_{\delta ^{-1}\circ \pi } y)(s) \cdot (A_{\pi } x)\\= & {} \sum _{\pi '} (A_{\pi '} y)(s) \cdot A_{\delta \circ \pi '} (x) \\= & {} \sum _{\pi '} (A_{\pi '} y)(s) \cdot A_\delta (A_{\pi '}) (x)=A_\delta \left( \sum _{\pi '} (A_{\pi '} y)(s) \cdot (A_{\pi '}) (x)\right) \\= & {} A_\delta (|G|\cdot E_Q (s))=|G|\cdot A_\delta \circ E_Q(s). \end{aligned}$$

\(\square \)
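Note that the averaging argument above uses only Lemma 1, so the commutation of \(E_Q\) with G can be checked numerically for an arbitrary fixed pair \((x,y)\), not necessarily an extreme one. A sketch (numpy; \(n=m=r=3\), seed and sample group element are our own choices):

```python
import numpy as np
from itertools import permutations

n = m = r = 3
N = n * m * r

def act(pi, z):
    alpha, beta, gamma = pi
    return z[np.ix_(alpha, beta, gamma)]

def A_matrix(pi):
    """Permutation matrix of A_pi on vectorized tensors."""
    alpha, beta, gamma = pi
    idx = np.ravel_multi_index(
        tuple(np.meshgrid(alpha, beta, gamma, indexing="ij")), (n, m, r)).ravel()
    M = np.zeros((N, N)); M[np.arange(N), idx] = 1.0
    return M

rng = np.random.default_rng(4)
x = rng.standard_normal((n, m, r))   # plays the role of x
y = rng.standard_normal((n, m, r))   # coefficient tensor of the functional y

G = [(np.array(a), np.array(b), np.array(c))
     for a in permutations(range(n))
     for b in permutations(range(m))
     for c in permutations(range(r))]

# E_Q = (1/|G|) sum_pi (A_pi y) tensor (A_pi x); the rank-one map
# s -> y(s) * x corresponds to the matrix outer(x, y).
E = sum(np.outer(act(pi, x).ravel(), act(pi, y).ravel()) for pi in G) / len(G)

Ad = A_matrix(G[100])                # a sample delta in G
commutes = np.allclose(Ad @ E, E @ Ad)
```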

One of the main results of this paper is Theorem 9, concerning the form of an operator from T into itself. Let us recall that the space T is generated by the elements

$$\begin{aligned}u_{a}(i,j,k)={\left\{ \begin{array}{ll} 1 \quad \text{ if } i=a \\ 0 \quad \text{ if } i\ne a \end{array}\right. }\in M(n,1,1),\\v_{b}(i,j,k)={\left\{ \begin{array}{ll} 1 \quad \text{ if } j=b \\ 0 \quad \text{ if } j\ne b \end{array}\right. }\in M(1,m,1),\\w_{c}(i,j,k)={\left\{ \begin{array}{ll} 1 \quad \text{ if } k=c \\ 0 \quad \text{ if } k \ne c \end{array}\right. }\in M(1,1,r),\\t(i,j,k)=1,\\ \text{ for } \text{ any } i\in \{1,\ldots ,n\}, j\in \{1,\ldots ,m\}, k\in \{1,\ldots ,r\},\end{aligned}$$

where \(a\in \{1,\ldots ,n\}, b\in \{1,\ldots ,m\}, c\in \{1,\ldots ,r\}\). Furthermore, we can choose a basis as

$$\begin{aligned}\left\{ u_{a}, v_{b}, w_{c}, t, \text{ where } a\in \{1,\ldots ,n-1 \}, b\in \{1,\ldots ,m-1 \}, c\in \{1,\ldots ,r-1 \}\right\} .\end{aligned}$$

Consequently, \(\dim T=n-1+m-1+r-1+1=n+m+r-2\). Now we can prove two useful theorems.

Theorem 7

Let \(E_Q\), S, T be as above. Then:

$$\begin{aligned}E_Q(T)\subset T.\end{aligned}$$

Proof

Fix \(a\in \{1,\ldots , n\}\). We show that \(E_Q(u_a)\in T\). Analogously, it can be shown that \(E_Q(v_b)\in T\) and \(E_Q(w_c)\in T\), which will complete the proof. Proceeding in the same way as in the proof of Theorem 1.6 (1) in [27], we get from Lemma 1 that

$$\begin{aligned} |G|E_Q(u_a) = \sum _{\pi \in G} y(A_\pi (u_a)) \cdot (A_\pi )^{-1} (x) . \end{aligned}$$
(2)

Let \(\pi (a,z)=\{\pi =\alpha \times \beta \times \gamma : \alpha (a)=z\}\). Then

$$\begin{aligned} \sum _{\pi \in G} y(A_\pi (u_a)) \cdot (A_\pi )^{-1} (x)= & {} \sum _{z=1}^{n}\left( \sum _{\pi \in \pi (a,z)} y(A_\pi (u_a)) \cdot A_\pi ^{-1} (x)\right) \nonumber \\= & {} \sum _{z=1}^{n}\left( \sum _{\pi \in \pi (a,z)} y(u_z) \cdot A_\pi ^{-1} (x)\right) \nonumber \\= & {} \sum _{z=1}^{n} y(u_z) \cdot \left( \sum _{\pi \in \pi (a,z)} A_{\pi ^{-1}} (x)\right) \nonumber \\= & {} \sum _{z=1}^{n} y(u_z) \cdot \left( \sum _{\pi '\in \pi (z,a)} A_{\pi '} (x)\right) . \end{aligned}$$
(3)

In the last equality we changed the summation using the fact that \(\pi \in \pi (a,z) \Leftrightarrow \pi ^{-1}\in \pi (z,a)\). Let us now focus on the expression in the last brackets.

$$\begin{aligned} \left( \sum _{\pi '\in \pi (z,a)} A_{\pi '} (x)\right) (i,j,k)= & {} \sum _{\alpha \times \beta \times \gamma \in \pi (z,a)} x(\alpha (i),\beta (j),\gamma (k)) \nonumber \\= & {} \sum _{\alpha : \alpha (z)=a} \left( \sum _{\beta \times \gamma } x(\alpha (i),\beta (j),\gamma (k))\right) \nonumber \\= & {} \sum _{\alpha : \alpha (z)=a} (m-1)!\left( \sum _{b=1}^m (r-1)! \left( \sum _{c=1}^r x(\alpha (i),b,c)\right) \right) \nonumber \\ \end{aligned}$$
(4)

As one can see, the last expression in (4) does not depend on j or k, so \(\left( \sum _{\pi '\in \pi (z,a)} A_{\pi '} (x)\right) \in M(n,1,1)\subset T\). Combining (2) and (3) we get that \(E_Q(u_a)\in T\), which completes the proof. \(\square \)
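As with the previous theorem, the computation never uses extremality of \((x,y)\), so the inclusion \(E_Q(T)\subset T\) can be observed numerically for an arbitrary pair. A sketch (numpy; sizes, seed and the tested basis element are our own choices):

```python
import numpy as np
from itertools import permutations

n = m = r = 3
N = n * m * r

def act(pi, z):
    alpha, beta, gamma = pi
    return z[np.ix_(alpha, beta, gamma)]

rng = np.random.default_rng(5)
x = rng.standard_normal((n, m, r))
y = rng.standard_normal((n, m, r))

G = [(np.array(a), np.array(b), np.array(c))
     for a in permutations(range(n))
     for b in permutations(range(m))
     for c in permutations(range(r))]

# Matrix of E_Q on vectorized tensors (rank-one maps as outer products).
E = sum(np.outer(act(pi, x).ravel(), act(pi, y).ravel()) for pi in G) / len(G)

# Spanning family of T and the orthogonal projection onto T.
cols = []
for a in range(n):
    u = np.zeros((n, m, r)); u[a, :, :] = 1.0; cols.append(u.ravel())
for b in range(m):
    v = np.zeros((n, m, r)); v[:, b, :] = 1.0; cols.append(v.ravel())
for c in range(r):
    w = np.zeros((n, m, r)); w[:, :, c] = 1.0; cols.append(w.ravel())
Bt = np.stack(cols, axis=1)
proj_T = Bt @ np.linalg.pinv(Bt)

u1 = np.zeros((n, m, r)); u1[0, :, :] = 1.0      # the element u_1
Eu = E @ u1.ravel()
in_T = np.allclose(proj_T @ Eu, Eu)              # E_Q(u_1) lies in T
```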

Theorem 8

Let \(E_Q\), S, T, t be as defined above. Then there exists a constant c such that

$$\begin{aligned}E_Q^*(t)=c\cdot t.\end{aligned}$$

Proof

Notice that for any \(y\in M(n,m,r)\) we have:

$$\begin{aligned} \sum _{\pi \in G}A_{\pi } (y)= & {} \left( \sum _{\pi \in G}A_\pi \right) \left( \sum _{i,j,k}y(i,j,k) e_{ijk} \right) = \sum _{i,j,k}y(i,j,k)\left( \sum _{\pi \in G}A_\pi \right) \left( e_{i,j,k} \right) \nonumber \\= & {} \sum _{i,j,k}y(i,j,k) \left( (n-1)!(m-1)!(r-1)! \cdot \sum _{{\tilde{i}}, {\tilde{j}}, {\tilde{k}}} e_{{\tilde{i}}, {\tilde{j}}, {\tilde{k}}} \right) \nonumber \\= & {} \sum _{i,j,k}y(i,j,k) \big ( (n-1)!(m-1)!(r-1)! \cdot t \big ) \nonumber \\= & {} (n-1)!(m-1)!(r-1)! \sum _{i,j,k}y(i,j,k) \cdot t \end{aligned}$$
(5)

By the formula for \(E_Q^*\), Lemma 1 and the above equality we get

$$\begin{aligned} |G|E_Q^*(t)= & {} \sum _{\pi \in G}(A_\pi x) \otimes (A_\pi y) (t)=\sum _{\pi \in G}(A_\pi x)(t)\cdot A_\pi (y) \\= & {} \sum _{\pi \in G}x(A_\pi ^{-1} (t)) \cdot A_\pi (y) =\sum _{\pi \in G}x(t)\cdot A_\pi (y)=x(t) \sum _{\pi \in G}A_\pi (y) \\= & {} x(t) (n-1)!(m-1)!(r-1)! \sum _{i,j,k}y(i,j,k) \cdot t \end{aligned}$$

Since \(|G|=n!m!r!\), these equalities give our claim with the constant

$$\begin{aligned}c=\frac{x(t) (n-1)!(m-1)!(r-1)! \sum _{i,j,k}y(i,j,k)}{n!m!r!}=\frac{x(t)\sum _{i,j,k}y(i,j,k)}{nmr}.\end{aligned}$$

\(\square \)
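Theorem 8 can likewise be confirmed numerically; the computation above does not use extremality of \((x,y)\), so an arbitrary pair suffices. A sketch (numpy; \(n=m=r=3\) and the seed are our own choices):

```python
import numpy as np
from itertools import permutations

n = m = r = 3

def act(pi, z):
    alpha, beta, gamma = pi
    return z[np.ix_(alpha, beta, gamma)]

rng = np.random.default_rng(6)
x = rng.standard_normal((n, m, r))
y = rng.standard_normal((n, m, r))
t = np.ones((n, m, r))

G = [(np.array(a), np.array(b), np.array(c))
     for a in permutations(range(n))
     for b in permutations(range(m))
     for c in permutations(range(r))]

# E_Q^*(t) = (1/|G|) sum_pi (A_pi x)(t) * A_pi(y)
Et = sum(np.sum(act(pi, x) * t) * act(pi, y) for pi in G) / len(G)

c = np.sum(x) * np.sum(y) / (n * m * r)   # c = x(t) * (sum of y_ijk) / (nmr)
is_multiple_of_t = np.allclose(Et, c * t)
```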

3 Main results

Finally, we can present the previously mentioned theorem on the form of an operator from T into T, which is crucial for proving the main theorem of this paper.

Theorem 9

If an operator \(L: T\rightarrow T\) commutes with the group \(G=\Pi _n\times \Pi _m\times \Pi _r\) (\(A_\pi L=LA_\pi \) for every \(\pi \in G\)), then there exist constants d, e, f, g such that:

$$\begin{aligned} L(u_a)= & {} du_a+\frac{g-d}{n}t, \text{ for } a\in \{1,\ldots ,n-1 \}; \nonumber \\ L(v_b)= & {} ev_b+\frac{g-e}{m}t, \text{ for } b\in \{1,\ldots ,m-1 \}; \\ L(w_c)= & {} fw_c+\frac{g-f}{r}t, \text{ for } c\in \{1,\ldots ,r-1 \}; \nonumber \\ L(t)= & {} gt.\nonumber \end{aligned}$$
(6)

Proof

Notice that elements

$$\begin{aligned}\{ u_1, \ldots , u_{n-1}, v_1, \ldots , v_{m-1}, w_1, \ldots , w_{r-1}, t \}\end{aligned}$$

form a basis of T. Every linear operator \(L: T \rightarrow T\) is determined by the images of the basis elements, so

$$\begin{aligned} L(u_a)= & {} \sum _{i=1}^{n-1} d_{ai}^u u_i+\sum _{j=1}^{m-1} e_{aj}^u v_j+\sum _{k=1}^{r-1} f_{ak}^u w_k+g_a^u t,\\ L(v_b)= & {} \sum _{i=1}^{n-1} d_{bi}^v u_i+\sum _{j=1}^{m-1} e_{bj}^v v_j+\sum _{k=1}^{r-1} f_{bk}^v w_k+g_b^v t,\\ L(w_c)= & {} \sum _{i=1}^{n-1} d_{ci}^w u_i+\sum _{j=1}^{m-1} e_{cj}^w v_j+\sum _{k=1}^{r-1} f_{ck}^w w_k+g_c^w t,\\ L(t)= & {} \sum _{i=1}^{n-1} d_{i}^t u_i+\sum _{j=1}^{m-1} e_{j}^t v_j+\sum _{k=1}^{r-1} f_{k}^t w_k+g^t t. \end{aligned}$$

Due to the complexity of the proof, we divide it into several steps.

  1.

    Fix \(a_1, a_2 \in \{1,\ldots , n-1\}\), \(b_1, b_2 \in \{1,\ldots , m-1\}\), \(c_1, c_2 \in \{1,\ldots , r-1\}\) and consider \(A\in G\), which interchanges: \(u_{a_1}\) with \(u_{a_2}\), \(v_{b_1}\) with \(v_{b_2}\) and \(w_{c_1}\) with \(w_{c_2}\), i.e.

    $$\begin{aligned} A(u_{a_1})= & {} u_{a_2}, \quad A(u_{a_2})=u_{a_1}, \quad A(u_{a})=u_{a}, \quad a\ne a_1, a_2;\\ A(v_{b_1})= & {} v_{b_2}, \quad A(v_{b_2})=v_{b_1}, \quad A(v_{b})=v_{b}, \quad b\ne b_1, b_2;\\ A(w_{c_1})= & {} w_{c_2}, \quad A(w_{c_2})=w_{c_1}, \quad A(w_{c})=w_{c}, \quad c\ne c_1, c_2;\\ A(t)= & {} t. \end{aligned}$$

    Since L commutes with G, in particular \(L\circ A(u_{a_1})=A\circ L(u_{a_1})\), which means that

    $$\begin{aligned} L(u_{a_2})=A\left( \sum _{i=1}^{n-1} d_{a_1i}^u u_i+\sum _{j=1}^{m-1} e_{a_1j}^u v_j+\sum _{k=1}^{r-1} f_{a_1k}^u w_k+g_{a_1}^u t\right) \end{aligned}$$

    Therefore

    $$\begin{aligned}&\sum _{i=1}^{n-1} d_{a_2i}^u u_i+\sum _{j=1}^{m-1} e_{a_2j}^u v_j+\sum _{k=1}^{r-1} f_{a_2k}^u w_k+g_{a_2}^u t=\sum _{i=1, i\ne a_1, a_2}^{n-1} d_{a_1i}^u u_i+\sum _{j=1, j\ne b_1, b_2}^{m-1} e_{a_1j}^u v_j\\&\quad +\sum _{k=1, k\ne c_1, c_2}^{r-1} f_{a_1k}^u w_k+g_{a_1}^u t+d_{a_1a_1}^u u_{a_2}+d_{a_1a_2}^u u_{a_1}+e_{a_1b_1}^u v_{b_2} +e_{a_1b_2}^u v_{b_1}+f_{a_1c_1}^u w_{c_2}+f_{a_1c_2}^u w_{c_1}. \end{aligned}$$

    Hence, after comparing the coefficients corresponding to the basis elements, we get the equations

    (a)

      \(d_{a_1i}^u=d_{a_2i}^u\), \(e_{a_1j}^u=e_{a_2j}^u\), \(f_{a_1k}^u=f_{a_2k}^u\) for all \(i\in \{ 1,\ldots , n-1 \}\backslash \{a_1, a_2\}\), \(j\in \{ 1,\ldots , m-1 \}\backslash \{b_1, b_2\}\), \(k\in \{ 1,\ldots , r-1 \}\backslash \{c_1, c_2\}\);

    (b)

      \(d_{a_1a_1}^u=d_{a_2a_2}^u\), \(e_{a_1b_1}^u=e_{a_2b_2}^u\), \(f_{a_1c_1}^u=f_{a_2c_2}^u\);

    (c)

      \(d_{a_1a_2}^u=d_{a_2a_1}^u\), \(e_{a_1b_2}^u=e_{a_2b_1}^u\), \(f_{a_1c_2}^u=f_{a_2c_1}^u\);

    (d)

      \(g_{a_1}^u=g_{a_2}^u\).

  2.

    Let us consider a matrix of coefficients \(d_{ai}^u\), where \(a,i\in \{1,\ldots , n-1\}\) given by

    $$\begin{aligned}D^u= \left[ \begin{array}{ccc} d_{1 \ 1}^u &{} \quad \ldots &{} \quad d_{1 \ n-1}^u\\ \vdots &{} \quad \ddots &{} \quad \vdots \\ d_{n-1 \ 1}^u &{}\quad \ldots &{} \quad d_{n-1 \ n-1}^u \end{array} \right] \end{aligned}$$

    Elements \(a_1, a_2\) were chosen arbitrarily, hence by (b) the elements on the main diagonal of \(D^u\) are equal.

    By (c) matrix \(D^u\) is symmetric.

    If \(a_1=1\), \(a_2=2\) then by (a) we get \(d_{1i}^u=d_{2i}^u\) for all \(i\ne 1, 2\). In particular \(d_{13}^u=d_{23}^u\). Furthermore by (c) \(d_{12}^u=d_{21}^u\).

    Analogously, if \(a_1=1\), \(a_2=3\) then by (a) \(d_{12}^u=d_{32}^u\) and by (c) \(d_{13}^u=d_{31}^u\) and if \(a_1=2\), \(a_2=3\) then by (a) \(d_{21}^u=d_{31}^u\) and by (c) \(d_{23}^u=d_{32}^u\).

    Hence

    $$\begin{aligned} d_{12}^u=d_{21}^u=d_{31}^u=d_{13}^u=d_{23}^u=d_{32}^u=:d^u_2. \end{aligned}$$

    Proceeding similarly, for any three numbers from the set \(\{1, \ldots , n-1\}\), we get

    $$\begin{aligned}D^u= \left[ \begin{array}{cccc} d_{1}^u &{} \quad d_{2}^u &{} \quad \ldots &{} \quad d_{2}^u\\ d_{2}^u &{} \quad \ddots &{} \quad &{} \quad \vdots \\ \vdots &{} \quad &{} \quad \ddots &{} \quad d_{2}^u\\ d_{2}^u &{} \quad \ldots &{} \quad d_{2}^u &{} \quad d_{1}^u \end{array} \right] .\end{aligned}$$
  3.

    Consider now a matrix of coefficients \(e_{aj}^u\), where \(a\in \{1,\ldots ,n-1\}\), \(j\in \{1,\ldots ,m-1\}\)

    $$\begin{aligned}E^u= \left[ \begin{array}{ccc} e_{1 \ 1}^u &{}\quad \ldots &{} \quad e_{1 \ m-1}^u\\ \vdots &{} \quad \ddots &{}\quad \vdots \\ e_{n-1 \ 1}^u &{}\quad \ldots &{} \quad e_{n-1 \ m-1}^u \end{array} \right] .\end{aligned}$$

    If \(b_1=1, b_2=2\) then by (b) we get \(e_{a_11}^u=e_{a_22}^u\) and by (c) \(e_{a_12}^u=e_{a_21}^u\).

    If \(b_1=1, b_2=3\) then by (a) we get \(e_{a_12}^u=e_{a_22}^u\).

    Hence \(e_{a_11}^u=e_{a_22}^u=e_{a_12}^u=e_{a_21}^u\). Proceeding analogously for any \(b_1,b_2\in \{1,\ldots ,m-1\}\) and by arbitrariness of choice of \(a_1,a_2\), we get the equality of all elements of the matrix \(E^u\), i.e.

    $$\begin{aligned}E^u= \left[ \begin{array}{ccc} e^u &{} \quad \ldots &{}\quad e^u\\ \vdots &{}\quad \ddots &{}\quad \vdots \\ e^u &{} \quad \ldots &{}\quad \quad e^u \end{array} \right] .\end{aligned}$$

    Analogously

    $$\begin{aligned}F^u= \left[ \begin{array}{ccc} f^u &{} \quad \ldots &{}\quad f^u\\ \vdots &{}\quad \ddots &{} \quad \quad \vdots \\ f^u &{} \quad \ldots &{}\quad f^u \end{array} \right] .\end{aligned}$$

    Furthermore, by (d) and by arbitrariness of choice of \(a_1,a_2\) we get \(g_{a_1}^u=g_{a_2}^u=:g^u\) for all \(a_1,a_2\in \{1,\ldots ,n-1\}\). Applying the above formulas from steps 2 and 3, we get a new form of \(L(u_a)\)

    $$\begin{aligned} L(u_a)=d_1^u u_a+d_2^u\sum _{i=1, i\ne a}^{n-1} u_i+e^u\sum _{j=1}^{m-1} v_j+f^u\sum _{k=1}^{r-1} w_k+g^u t. \end{aligned}$$

    Analogously

    $$\begin{aligned} L(v_b)= & {} d^v\sum _{i=1}^{n-1}u_i+e_1^v v_b+e_2^v\sum _{j=1, j\ne b}^{m-1} v_j+f^v\sum _{k=1}^{r-1} w_k+g^v t,\\ L(w_c)= & {} d^w\sum _{i=1}^{n-1} u_i+e^w\sum _{j=1}^{m-1} v_j+f_1^w w_c+f_2^w\sum _{k=1, k\ne c}^{r-1} w_k+g^w t. \end{aligned}$$
  4.

    L commutes with G, so \(L\circ A(t)=A\circ L(t)\) and therefore

    $$\begin{aligned} L(t)=A\left( \sum _{i=1}^{n-1} d_{i}^t u_i+\sum _{j=1}^{m-1} e_{j}^t v_j+\sum _{k=1}^{r-1} f_{k}^t w_k+g^t t\right) , \end{aligned}$$

    which means that

    $$\begin{aligned} \sum _{i=1}^{n-1} d_{i}^t u_i+\sum _{j=1}^{m-1} e_{j}^t v_j+\sum _{k=1}^{r-1} f_{k}^t w_k+g^t t=A\left( \sum _{i=1}^{n-1} d_{i}^t u_i+\sum _{j=1}^{m-1} e_{j}^t v_j+\sum _{k=1}^{r-1} f_{k}^t w_k+g^t t\right) . \end{aligned}$$

    After subtracting identical elements from both sides of the equation, we get

    $$\begin{aligned}&d_{a_1}^t u_{a_1}+d_{a_2}^t u_{a_2}+e_{b_1}^t v_{b_1}+e_{b_2}^t v_{b_2}+f_{c_1}^t w_{c_1}+f_{c_2}^t w_{c_2}\\&\quad =d_{a_1}^t u_{a_2}+d_{a_2}^t u_{a_1}+e_{b_1}^t v_{b_2}+e_{b_2}^t v_{b_1}+f_{c_1}^t w_{c_2}+f_{c_2}^t w_{c_1}. \end{aligned}$$

    Hence

    $$\begin{aligned} d_{a_1}^t=d_{a_2}^t=:d^t, \quad e_{b_1}^t =e_{b_2}^t=:e^t, \quad f_{c_1}^t=f_{c_2}^t=:f^t. \end{aligned}$$

    By arbitrariness of choice of \(a_1, a_2, b_1, b_2, c_1, c_2\) we obtain a new formula for L(t)

    $$\begin{aligned} L(t)=d^t\sum _{i=1}^{n-1} u_i+e^t\sum _{j=1}^{m-1} v_j+f^t\sum _{k=1}^{r-1} w_k+g^t t. \end{aligned}$$
  5.

    Fix \(a_3\in \{1,\ldots ,n-1\}, b_3\in \{1,\ldots ,m-1\}, c_3\in \{1,\ldots ,r-1\}\) and consider \(B\in G\), which interchanges: \(u_{a_3}\) with \(u_{n}\), \(v_{b_3}\) with \(v_{m}\) and \(w_{c_3}\) with \(w_{r}\). Therefore, since \(u_n=t-\sum _{i=1}^{n-1} u_i\), B fulfills the conditions

    $$\begin{aligned} B(u_{a_3})= & {} t-\sum _{i=1}^{n-1} u_i, \qquad B(u_{a})=u_{a}, \quad a\in \{1,\ldots ,n-1\}\backslash \{a_3\};\\ B(v_{b_3})= & {} t-\sum _{j=1}^{m-1} v_j, \qquad B(v_{b})=v_{b}, \quad b\in \{1,\ldots ,m-1\}\backslash \{b_3\};\\ B(w_{c_3})= & {} t-\sum _{k=1}^{r-1} w_k, \qquad B(w_{c})=w_{c}, \quad c\in \{1,\ldots ,r-1\}\backslash \{c_3\};\\ B(t)= & {} t. \end{aligned}$$

    Since \(L\circ B(u_a)=B\circ L(u_a)\) for all \(a\ne a_3\),

    $$\begin{aligned} L(u_a)=B\Bigg (d_1^u u_a+d_2^u\sum _{i=1, i\ne a}^{n-1} u_i+e^u\sum _{j=1}^{m-1} v_j+f^u\sum _{k=1}^{r-1} w_k+g^u t\Bigg ). \end{aligned}$$

    Hence:

    $$\begin{aligned}&d_1^u u_a+d_2^u\sum _{i=1, i\ne a}^{n-1} u_i+e^u\sum _{j=1}^{m-1} v_j+f^u\sum _{k=1}^{r-1} w_k+g^u t=d_1^u u_a\\&\quad +d_2^u\sum _{i=1, i\ne a,a_3}^{n-1} u_i+d_2^u\left( t-\sum _{i=1}^{n-1} u_i \right) \\&\quad +e^u\sum _{j=1, j\ne b_3}^{m-1} v_j+e^u\left( t-\sum _{j=1}^{m-1} v_j \right) +f^u\sum _{k=1, k\ne c_3}^{r-1} w_k+f^u\left( t-\sum _{k=1}^{r-1} w_k \right) +g^u t \end{aligned}$$

    Therefore, after reducing identical elements, we get

    $$\begin{aligned} d_2^u u_{a_3}+e^u v_{b_3}+f^u w_{c_3}=d_2^u t-d_2^u\sum _{i=1}^{n-1}u_i+e^ut-e^u\sum _{j=1}^{m-1}v_j+f^ut-f^u\sum _{k=1}^{r-1}w_k. \end{aligned}$$

    Consequently,

    $$\begin{aligned}&d_2^u\sum _{i=1,i\ne a_3}^{n-1} u_i+e^u\sum _{j=1,j\ne b_3}^{m-1} v_j+f^u\sum _{k=1,k\ne c_3}^{r-1} w_k\\&\quad +2d_2^u u_{a_3}+2e^u v_{b_3}+2f^u w_{c_3}=(d_2^u+e^u+f^u)t. \end{aligned}$$

    Hence \(d_2^u+e^u+f^u=0, d_2^u=0, e^u=0, f^u=0\). Analogously \(d^v=0, e_2^v=0, f^v=0\) and \(d^w=0, e^w=0, f_2^w=0\).

  6.

    Furthermore, we know that \(L\circ B(t)=B\circ L(t)\) which gives

    $$\begin{aligned} L(t)=B\left( d^t\sum _{i=1}^{n-1}u_i+e^t\sum _{j=1}^{m-1}v_j+f^t\sum _{k=1}^{r-1} w_k+g^t t \right) \end{aligned}$$

    and

    $$\begin{aligned}&d^t\sum _{i=1}^{n-1}u_i+e^t\sum _{j=1}^{m-1}v_j+f^t\sum _{k=1}^{r-1} w_k+g^t t=d^t\sum _{i=1, i\ne a_3}^{n-1}u_i+d^t\left( t-\sum _{i=1}^{n-1} u_i \right) \\&\quad +e^t\sum _{j=1, j\ne b_3}^{m-1}v_j+e^t\left( t-\sum _{j=1}^{m-1} v_j \right) +f^t\sum _{k=1, k\ne c_3}^{r-1}w_k+f^t\left( t-\sum _{k=1}^{r-1} w_k \right) +g^tt. \end{aligned}$$

    After reduction, we get

    $$\begin{aligned} d^tu_{a_3}+e^tv_{b_3}+f^tw_{c_3}+d^t\sum _{i=1}^{n-1}u_i+e^t\sum _{j=1}^{m-1}v_j+f^t\sum _{k=1}^{r-1}w_k=(d^t+e^t+f^t)t.\end{aligned}$$

    Therefore \(d^t+e^t+f^t=0, d^t=0, e^t=0, f^t=0\) and so we obtain a new formula for L

    $$\begin{aligned} L(u_a)= & {} d_1^u u_a+g^u t,\\ L(v_b)= & {} e_1^v v_b+g^v t,\\ L(w_c)= & {} f_1^w w_c+g^w t,\\ L(t)= & {} g^t t. \end{aligned}$$
  7.

    To end the proof, it remains to find the relationships between the constants \(g^u, g^v, g^w, g^t\). To this end, we note that \(L\circ B(u_{a_3})=B\circ L(u_{a_3})\), which implies

    $$\begin{aligned} L\left( t -\sum _{i=1}^{n-1}u_i\right)= & {} B(d_1^u u_{a_3}+g^u t)\\ g^tt-\sum _{i=1}^{n-1}(d_1^uu_i+g^ut)= & {} d_1^u\left( t-\sum _{i=1}^{n-1}u_i \right) +g^ut\\ g^tt-d_1^u\sum _{i=1}^{n-1}u_i-(n-1)g^ut= & {} d_1^ut-d_1^u\sum _{i=1}^{n-1}u_i +g^ut. \end{aligned}$$

    Hence \((d_1^u+ng^u-g^t)t=0\) so \(d_1^u+ng^u-g^t=0\), which gives us: \(g^u=\frac{g^t-d_1^u}{n}\). Analogously for the rest of two constants, we obtain:

    \(g^v=\frac{g^t-d_1^v}{m}\), \(g^w=\frac{g^t-d_1^w}{r}\), and the proof is complete.

\(\square \)
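Theorem 9 admits a direct numerical cross-check: solving the commutation equations for the generators of G (a transposition and a full cycle in each factor, which generate each symmetric group) yields a 4-dimensional space of commuting operators, matching the four parameters d, e, f, g, and any L built from (6) indeed commutes. The sketch below (numpy; \(n=m=r=3\) and the sample constants are our own choices, and it is an illustration, not part of the proof) verifies both facts in the basis from Sect. 2.

```python
import numpy as np

n = m = r = 3
N = n * m * r

# Basis of T: u_1..u_{n-1}, v_1..v_{m-1}, w_1..w_{r-1}, t.
cols = []
for a in range(n - 1):
    u = np.zeros((n, m, r)); u[a, :, :] = 1.0; cols.append(u.ravel())
for b in range(m - 1):
    v = np.zeros((n, m, r)); v[:, b, :] = 1.0; cols.append(v.ravel())
for c in range(r - 1):
    w = np.zeros((n, m, r)); w[:, :, c] = 1.0; cols.append(w.ravel())
cols.append(np.ones(N))
Bt = np.stack(cols, axis=1)
dT = Bt.shape[1]                          # = n + m + r - 2

def A_on_T(pi):
    """Matrix of A_pi restricted to T, in the basis above."""
    alpha, beta, gamma = pi
    idx = np.ravel_multi_index(
        tuple(np.meshgrid(alpha, beta, gamma, indexing="ij")), (n, m, r)).ravel()
    return np.linalg.lstsq(Bt, Bt[idx, :], rcond=None)[0]

def transposition(k):
    p = np.arange(k); p[[0, 1]] = p[[1, 0]]; return p

def cycle(k):
    return np.roll(np.arange(k), 1)

e_n, e_m, e_r = np.arange(n), np.arange(m), np.arange(r)
gens = [(transposition(n), e_m, e_r), (cycle(n), e_m, e_r),
        (e_n, transposition(m), e_r), (e_n, cycle(m), e_r),
        (e_n, e_m, transposition(r)), (e_n, e_m, cycle(r))]

# A L = L A as a linear system in the entries of L (row-major vectorization).
I = np.eye(dT)
K = np.vstack([np.kron(A_on_T(pi), I) - np.kron(I, A_on_T(pi).T) for pi in gens])
null_dim = dT * dT - np.linalg.matrix_rank(K)   # dim of commuting operators

# An operator of the form (6) with sample constants d, e, f, g ...
d, e, f, g = 1.3, -0.7, 2.0, 0.5
L = np.zeros((dT, dT))
for a in range(n - 1):
    L[a, a] = d; L[dT - 1, a] = (g - d) / n
for b in range(m - 1):
    j = n - 1 + b; L[j, j] = e; L[dT - 1, j] = (g - e) / m
for c in range(r - 1):
    k = n - 1 + m - 1 + c; L[k, k] = f; L[dT - 1, k] = (g - f) / r
L[dT - 1, dT - 1] = g

# ... indeed commutes with all generators.
commutes = all(np.allclose(A_on_T(pi) @ L, L @ A_on_T(pi)) for pi in gens)
```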

Now we can prove the main theorem of this paper.

Theorem 10

Let \(S=\left( M(n,m,r), \Vert \cdot \Vert \right) \) be a smooth space. Assume that for any permutation \({\alpha \times \beta \times \gamma }\) the operator \(A_{\alpha \times \beta \times \gamma }\) is an isometry. Consider \(T=M(n,1,1)+M(1,m,1)+M(1,1,r)\) and assume that Q is a minimal projection which commutes with G. Then Q is the unique minimal projection from S onto T.

Proof

By Theorems 6 and 7, the operator \(E_Q|_T\) fulfills the assumptions of Theorem 9. Therefore, there exist constants d, e, f, g such that \(E_Q|_T\) is of the form (6). Consider now the adjoint operator \((E_Q|_T)^*\). It is represented by the transpose of the matrix corresponding to the operator \(E_Q|_T\). It means that

$$\begin{aligned} (E_Q|_T)^*(t)=\left( \sum _{i=1}^{n-1} \frac{g-d}{n}u_i \right) +\left( \sum _{j=1}^{m-1} \frac{g-e}{m}v_j \right) +\left( \sum _{k=1}^{r-1} \frac{g-f}{r}w_k \right) +gt. \end{aligned}$$

By Theorem 8 we know that \((E_Q|_T)^*(t)=c\cdot t\). Hence \(\frac{g-d}{n}=\frac{g-e}{m}=\frac{g-f}{r}=0\), which means that \(g=d=e=f\). Finally we get

$$\begin{aligned}E_Q|_T=g\cdot Id_T.\end{aligned}$$

Since \(E_Q|_T\not \equiv 0\), \(g\ne 0\). Therefore, the operator \(E_Q|_T\) is invertible. By Theorem 4 we obtain the uniqueness of Q, which ends the proof. \(\square \)

Remark 1

Since \(\dim S< +\infty \), \({\mathcal {P}}(S,T)\ne \emptyset \) and a minimal projection exists. For more details see [10].

Example 1

For every \(1<p<+\infty \) the space \(L_p(M(n,m,r))\) is smooth and every operator \(A_{\alpha \times \beta \times \gamma }\) is an isometry. Therefore the assumptions of Theorem 10 are fulfilled and there exists a unique minimal projection from S onto T.

The above considerations also work for Orlicz spaces equipped with a smooth Orlicz or Luxemburg norm.
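In the special case \(p=2\) the minimal projection from Example 1 can be exhibited concretely: the orthogonal projection onto T has norm one, hence is minimal, and it commutes with G, so by Theorem 10 it is the unique minimal projection. The following numerical sketch (numpy; sizes and seed are our own illustrative choices) verifies these properties.

```python
import numpy as np

n, m, r = 3, 4, 3
N = n * m * r

# Spanning family of T.
cols = []
for a in range(n):
    u = np.zeros((n, m, r)); u[a, :, :] = 1.0; cols.append(u.ravel())
for b in range(m):
    v = np.zeros((n, m, r)); v[:, b, :] = 1.0; cols.append(v.ravel())
for c in range(r):
    w = np.zeros((n, m, r)); w[:, :, c] = 1.0; cols.append(w.ravel())
Bt = np.stack(cols, axis=1)

P = Bt @ np.linalg.pinv(Bt)          # orthogonal projection of l_2(M(n,m,r)) onto T

def A_matrix(pi):
    """Permutation matrix of A_pi on vectorized tensors."""
    alpha, beta, gamma = pi
    idx = np.ravel_multi_index(
        tuple(np.meshgrid(alpha, beta, gamma, indexing="ij")), (n, m, r)).ravel()
    M = np.zeros((N, N)); M[np.arange(N), idx] = 1.0
    return M

rng = np.random.default_rng(7)
Ag = A_matrix((rng.permutation(n), rng.permutation(m), rng.permutation(r)))

is_projection = np.allclose(P @ P, P)
fixes_T = np.allclose(P @ Bt, Bt)
norm_one = np.isclose(np.linalg.norm(P, 2), 1.0)   # spectral = l_2 operator norm
commutes = np.allclose(Ag @ P, P @ Ag)
```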