Abstract
Consider the space M(n, m) of all real or complex matrices with n rows and m columns. In 2000 Lesław Skrzypek proved the uniqueness of the minimal projection of this space onto its subspace \(M(n,1)+M(1,m)\), which consists of all sums of matrices with constant rows and matrices with constant columns. We generalize this result using methods developed by Lewicki and Skrzypek (J Approx Theory 148:71–91, 2007). Let S be the space of all functions from \(X\times Y \times Z\) into \({\mathbb {R}}\) or \({\mathbb {C}}\), where X, Y, Z are finite sets. It can be interpreted as a space M(n, m, r) of three-dimensional matrices. Let T be the subspace of S consisting of all sums of functions which depend on one variable, and let S be equipped with a smooth norm \(\Vert \cdot \Vert \). We show that there exists a unique minimal projection of S onto T.
1 Introduction
At the beginning let us set up some basic terminology and notation.
Definition 1
Let S be a Banach space and let T be a closed linear subspace of S. An operator \(P: S \rightarrow T\) is called a projection if \(P|_T = id|_T\). We denote by \({\mathcal {P}}(S; T)\) the set of all bounded linear projections of S onto T, endowed with the operator norm.
Definition 2
A projection \(P_0\in {\mathcal {P}}(S; T)\) is called minimal if

$$\begin{aligned} \Vert P_0\Vert =\inf \{\Vert P\Vert : P\in {\mathcal {P}}(S;T)\}=:\lambda (T;S). \end{aligned}$$
In the theory of minimal projections three main problems are considered: the existence and uniqueness of minimal projections [15,16,17, 19,20,21,22,23,24,25,26,27,28,29], estimates of the constant \(\lambda (T; S)\) [2,3,4,5, 7,8,9,10,11,12,13] and concrete formulas for minimal projections [6, 9, 18, 24]. As one can see, this theory is widely studied by many authors, also recently [1, 11, 12, 14, 18, 23].
Let \(X=\{1, 2, 3, \ldots , n \}\), \(Y=\{1, 2, 3, \ldots , m \}\), \(Z=\{1, 2, 3, \ldots , r \}\), where \(3\le n, m, r <+\infty \) are fixed. Define \(S=M(n,m,r)\) as the space of all functions from \(X\times Y \times Z\) into \({\mathbb {R}}\) (or \({\mathbb {C}}\)). Let T be the subspace of S consisting of all sums of functions which depend on one variable, i.e.

$$\begin{aligned} T=\{f\in S: f(x,y,z)=f_1(x)+f_2(y)+f_3(z) \text { for some functions } f_1, f_2, f_3\}. \end{aligned}$$
It is convenient to consider these spaces as spaces of “three-dimensional” matrices with real (or complex) entries. Let M(1, 1, r) be the subspace of the three-dimensional matrix space S consisting of elements \(a_{ijk}\) such that \(a_{i_1j_1k}=a_{i_2j_2k}\) for any \(i_1,i_2\in \{1, 2, \ldots , n\}\), \(j_1,j_2\in \{1, 2, \ldots , m\}\) and \(k\in \{1, 2, \ldots , r\}\). Analogously we define M(1, m, 1) and M(n, 1, 1). Then we can write \(T=M(n,1,1)+M(1,m,1)+M(1,1,r)\).
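For illustration, an element of T can be produced numerically by broadcasting three one-variable functions over a three-dimensional array (a Python sketch with arbitrary sizes; numpy is the only assumption):

```python
import numpy as np

n, m, r = 3, 4, 5

# Three "one-variable" functions on X, Y, Z respectively.
f1 = np.arange(n, dtype=float)       # depends only on x
f2 = 10.0 * np.arange(m)             # depends only on y
f3 = 100.0 * np.arange(r)            # depends only on z

# Their sum, broadcast over the remaining coordinates, lies in
# T = M(n,1,1) + M(1,m,1) + M(1,1,r).
f = f1[:, None, None] + f2[None, :, None] + f3[None, None, :]

assert f.shape == (n, m, r)
assert f[1, 2, 3] == f1[1] + f2[2] + f3[3]
```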
Definition 3
Let \(\Pi _n\) be the set of all permutations of \(\{1,2,\ldots ,n\}\). Then \(G=\Pi _n\times \Pi _m\times \Pi _r\) is a group with componentwise composition of permutations as the natural operation. For \(\alpha \times \beta \times \gamma \in G\) let \(A_{\alpha \times \beta \times \gamma }\) be the transformation of S associated with the permutation \(\alpha \times \beta \times \gamma \). It means that

$$\begin{aligned} \left( A_{\alpha \times \beta \times \gamma }x\right) _{i,j,k}=x_{\alpha (i),\beta (j),\gamma (k)}. \end{aligned}$$
Every element of the group G can be identified with a composition of permutations of matrix planes: parallel to the plane XY, parallel to XZ and parallel to YZ. For more details about this interpretation see [18].
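Numerically, the action of \(A_{\alpha \times \beta \times \gamma }\) permutes the three index axes independently, which amounts to plain fancy indexing (a Python sketch; the coordinate convention \((A_{\alpha \times \beta \times \gamma }x)_{i,j,k}=x_{\alpha (i),\beta (j),\gamma (k)}\) used in the comments is an assumption about the paper's definition):

```python
import numpy as np

n, m, r = 3, 3, 3
x = np.arange(n * m * r, dtype=float).reshape(n, m, r)

# Permutations given as index arrays: alpha[i] is the image of i.
alpha = np.array([1, 2, 0])
beta  = np.array([2, 0, 1])
gamma = np.array([0, 2, 1])

# Assumed convention: (A x)_{i,j,k} = x_{alpha(i), beta(j), gamma(k)}.
Ax = x[np.ix_(alpha, beta, gamma)]

# The action only permutes entries, hence it is an isometry for any p-norm.
assert Ax[0, 1, 2] == x[alpha[0], beta[1], gamma[2]]
assert np.allclose(np.sort(Ax.ravel()), np.sort(x.ravel()))
```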
Let us recall
Definition 4
An element x of a Banach space X is called a smooth point if there exists a unique supporting functional \(f_x\), i.e. a unique norm-one functional with \(f_x(x)=\Vert x\Vert \).
If every x from the unit sphere of X is smooth, then X is called a smooth space.
From now on we assume that for any permutation \({\alpha \times \beta \times \gamma }\) the operator \(A_{\alpha \times \beta \times \gamma }\) is an isometry and that the space S is smooth.
Definition 5
Let X be a Banach space and G be a topological group such that for every \(g\in G\) there is a continuous linear operator \(A_g: X \rightarrow X\) for which:
Then we say that G acts as a group of linear operators on X.
Definition 6
We say that \(L: X \rightarrow X\) commutes with G if \(A_gLA_{g^{-1}}=L\) for every \(g\in G\).
The aim of this paper is to generalize a result of Skrzypek [27], who proved the uniqueness of the minimal projection in standard smooth matrix spaces. In particular, we prove that there is a unique minimal projection from S onto T. Our approach is based on Skrzypek’s method, which relies on two main theorems: Rudin’s theorem [26] and the Chalmers–Metcalf theorem [6]. In this paper, we also use a theorem proved by Lewicki and Skrzypek in [22].
Theorem 1
(Rudin) Let X be a Banach space and let W be a complemented subspace of X (\({\mathcal {P}}(X,W)\ne \emptyset \)). Assume that W is a G-invariant subspace, where G is a compact topological group acting by isomorphisms on X such that
-
for every \(x\in X\) the map \(g\mapsto A_g(x)\) is continuous,
-
\(A_g(W)\subset W\) for every \(g\in G\).
If there exists a bounded linear projection \(P: X \rightarrow W\), then there exists a bounded linear projection \(Q_P\) from X onto W which commutes with G and is of the form:

$$\begin{aligned} Q_P(x)=\int _G A_{g^{-1}}PA_g(x) \,\text {d}\mu ' (g), \end{aligned}$$
where \(\mu '\) is the normalized Haar measure on G and the integral is understood in the Pettis sense.
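For a finite group the Haar integral is just the group average, so Rudin's construction reads \(Q_P=\frac{1}{|G|}\sum _{g\in G}A_{g^{-1}}PA_g\). A toy illustration (not the spaces of this paper: \(X={\mathbb {R}}^3\) with coordinate permutations and W the constant vectors, numpy assumed):

```python
import numpy as np
from itertools import permutations

d = 3
ones = np.ones((d, 1))

# A (non-symmetric) projection onto W = span{(1,1,1)}: P x = x_1 * (1,1,1).
P = ones @ np.eye(d)[[0]]

# The group G: all coordinate-permutation matrices; A^{-1} = A^T.
G = [np.eye(d)[list(p)] for p in permutations(range(d))]

# Rudin's average over the normalized (counting) Haar measure.
Q = sum(A.T @ P @ A for A in G) / len(G)

assert np.allclose(Q @ Q, Q)                      # Q is a projection onto W
assert all(np.allclose(A @ Q, Q @ A) for A in G)  # Q commutes with G
assert np.allclose(Q, np.ones((d, d)) / d)        # here Q is the mean map
```

In this toy case the averaged projection is the coordinate-mean map, which is in fact the minimal (norm-one, orthogonal) projection onto W.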
Moreover, the following theorem holds true.
Theorem 2
Let the assumptions of Rudin’s Theorem be satisfied. Assume furthermore that for every \( g \in G\) the operator \(A_g\) is a linear surjective isometry of X. If there is a unique projection \(Q\in {\mathcal {P}}(X,W)\) commuting with G, then Q is a minimal projection of X onto W.
For the proof and more details see [18, Theorem 2].
These theorems are very useful for finding, in some cases, explicit formulas for minimal projections [18] but, in general, they do not imply uniqueness, because there can exist a minimal projection which does not commute with G. To prove uniqueness we use the following theorems, but first let us recall a definition.
Definition 7
A pair \((x,y)\in S(X^{**})\times S(X^*)\) is called an extreme pair for \(P\in {\mathcal {P}}(X,W)\) if \(y(P^{**}x)=\Vert P\Vert \), where \(P^{**}: X^{**} \rightarrow W\) and S(X) denotes the unit sphere of X. Let \({\mathcal {E}}(P)\) be the set of all extreme pairs of P.
The spaces S and T are finite-dimensional, so the set \({\mathcal {E}}(P)\) is nonempty and \(X^{**}\) can be identified with X. It is also known that for such spaces \({\mathcal {P}}(S; T)\ne \emptyset \) (see [10]).
Theorem 3
(Chalmers, Metcalf) A projection \(P\in {\mathcal {P}}(X,W)\) is minimal if and only if the closed convex hull of \(\{y\otimes x\}_{(x,y)\in {\mathcal {E}}(P)}\) contains an operator \(E_P\) for which W is an invariant subspace.
The operator \(E_P\) (called the Chalmers–Metcalf operator) is given by the formula:

$$\begin{aligned} E_P=\int _{{\mathcal {E}}(P)} y\otimes x \,\text {d}\mu '' (x,y), \end{aligned}$$
where \(\mu ''\) is a probabilistic Borel measure on \({\mathcal {E}}(P)\).
Theorem 4
(Lewicki, Skrzypek) Let X be a Banach space and let W be a finite-dimensional subspace of X. Assume that \(X^{**}\) is a smooth space. Assume furthermore that for a minimal projection P there exists a Chalmers–Metcalf operator \(E_P\) such that \(E_P|_W\) is invertible. Then P is the unique minimal projection.
2 Preliminary results
First let us prove some technical lemmas which will be used in the main proof. Lemma 1 and Theorems 5 and 6 are easy generalizations of their analogues from [27]. For completeness, we present their proofs.
Lemma 1
(Compare with [27] Lemma 1.4) For any \(y\in S^*\) and \(\pi \in G\) we have

$$\begin{aligned} y\circ A_\pi =A_{\pi ^{-1}}(y), \end{aligned}$$

where y is identified with its coefficient matrix \((y_{i,j,k})\).
Proof
Since \(\dim S<+\infty \), any \(y\in S^*\) can be written as

$$\begin{aligned} y(x)=\sum _{i,j,k}y_{i,j,k}\, x_{i,j,k}, \end{aligned}$$
where \(\displaystyle x=\sum _{i,j,k}x_{i,j,k} \cdot e_{i,j,k}\) and the elements \(y_{i,j,k}\in {\mathbb {K}}\) do not depend on x. Since \(A_{\alpha \times \beta \times \gamma } ^{-1}=A_{\alpha ^{-1}\times \beta ^{-1}\times \gamma ^{-1}}\):

$$\begin{aligned} (y\circ A_\pi )(x)=\sum _{i,j,k}y_{i,j,k}\, x_{\alpha (i),\beta (j),\gamma (k)}=\sum _{i,j,k}y_{\alpha ^{-1}(i),\beta ^{-1}(j),\gamma ^{-1}(k)}\, x_{i,j,k}=\left( A_{\pi ^{-1}}(y)\right) (x). \end{aligned}$$
\(\square \)
Theorem 5
(Compare with [27] Theorem 1.5) Let \(Q\in {\mathcal {P}}(S,T)\) commute with G. If \((x,y)\in {\mathcal {E}}(Q)\), then \((A_\pi x, A_\pi y)\in {\mathcal {E}}(Q)\) for any permutation \(\pi \in \Pi _n\times \Pi _m\times \Pi _r\).
Proof
If Q commutes with \(\Pi _n\times \Pi _m\times \Pi _r\) then from Lemma 1 we get
\(\square \)
For our further considerations let us introduce the Chalmers–Metcalf operator

$$\begin{aligned} E_Q=\frac{1}{|G|}\sum _{\pi \in G} A_\pi y\otimes A_\pi x, \end{aligned}$$
where (x, y) is a fixed extreme pair (\((x,y)\in {\mathcal {E}}(Q)\)).
Theorem 6
(Compare with [27] Theorem 1.7) \(E_Q\) commutes with G.
Proof
Fix \(\delta \in G\). From Lemma 1 we get that for every \(s\in S\)
\(\square \)
One of the main results of this paper is Theorem 9, concerning the form of an operator from T into itself. Let us recall that the space T is generated by the elements

$$\begin{aligned} u_a(i,j,k)=\delta _{a,i},\qquad v_b(i,j,k)=\delta _{b,j},\qquad w_c(i,j,k)=\delta _{c,k},\qquad t\equiv 1, \end{aligned}$$

where \(a\in \{1,\ldots ,n\}, b\in \{1,\ldots ,m\}, c\in \{1,\ldots ,r\}\). Furthermore, we can choose a basis as

$$\begin{aligned} \{u_1,\ldots ,u_{n-1},\ v_1,\ldots ,v_{m-1},\ w_1,\ldots ,w_{r-1},\ t\}. \end{aligned}$$
Consequently, \(\dim T=n-1+m-1+r-1+1=n+m+r-2\). Now we can prove two useful theorems.
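This dimension count is easy to confirm numerically: among the n+m+r indicator-type generators of T there are exactly two independent linear relations, since the three "axis sums" all equal the constant function (a Python sketch, numpy assumed):

```python
import numpy as np

n, m, r = 3, 4, 5
gens = []

# Indicator-type generators u_a, v_b, w_c of T, flattened to vectors in S.
for a in range(n):
    x = np.zeros((n, m, r)); x[a, :, :] = 1.0; gens.append(x.ravel())
for b in range(m):
    x = np.zeros((n, m, r)); x[:, b, :] = 1.0; gens.append(x.ravel())
for c in range(r):
    x = np.zeros((n, m, r)); x[:, :, c] = 1.0; gens.append(x.ravel())

# sum_a u_a = sum_b v_b = sum_c w_c = t gives two independent relations,
# hence dim T = n + m + r - 2.
assert np.linalg.matrix_rank(np.array(gens)) == n + m + r - 2
```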
Theorem 7
Let \(E_Q\), S, T be as above. Then \(E_Q(T)\subseteq T\).
Proof
Fix \(a\in \{1,\ldots , n\}\). We show that \(E_Q(u_a)\in T\). Analogously, it can be shown that \(E_Q(v_b)\in T\) and \(E_Q(w_c)\in T\) which will end the proof. Proceeding in the same way as in the proof of Theorem 1.6 (1) in [27] we get from Lemma 1 that
Let \(\pi (a,z)=\{\pi =\alpha \times \beta \times \gamma : \alpha (a)=z\}\). Then
In the last equality we changed the order of summation, using the fact that \(\pi \in \pi (a,z) \Leftrightarrow \pi ^{-1}\in \pi (z,a)\). Let us now focus on the expression in the last brackets.
As one can see, the last expression in (4) depends neither on j nor on k, so \(\left( \sum _{\pi '\in \pi (z,a)} A_{\pi '} (x)\right) \in M(n,1,1)\subset T\). Combining (2) and (3), we get that \(E_Q(u_a)\in T\), which ends the proof. \(\square \)
Theorem 8
Let \(E_Q\), S, T, t be as defined above. Then there exists a constant c such that

$$\begin{aligned} E_Q^*(t)=c\cdot t. \end{aligned}$$
Proof
Notice that for any \(y\in M(n,m,r)\) we have:
By the formula for \(E_Q^*\), Lemma 1 and the above equality we get
Since \(|G|=n!m!r!\), these equalities give our claim with the constant
\(\square \)
3 Main results
Finally, we can present the previously mentioned theorem on the form of an operator from T into T, which is crucial for the proof of the main theorem of this paper.
Theorem 9
If an operator \(L: T\rightarrow T\) commutes with the group \(G=\Pi _n\times \Pi _m\times \Pi _r\) (\(A_\pi L=LA_\pi \)), then there exist constants d, e, f, g such that:

$$\begin{aligned} L(u_a)= & {} d\, u_a+\frac{g-d}{n}\, t,\\ L(v_b)= & {} e\, v_b+\frac{g-e}{m}\, t,\\ L(w_c)= & {} f\, w_c+\frac{g-f}{r}\, t,\\ L(t)= & {} g\, t. \end{aligned}$$
Proof
Notice that the elements \(u_1,\ldots ,u_{n-1},\ v_1,\ldots ,v_{m-1},\ w_1,\ldots ,w_{r-1},\ t\) form a basis of T. Every linear operator \(L: T \rightarrow T\) is determined by the images of the basis elements, so

$$\begin{aligned} L(u_a)= & {} \sum _{i=1}^{n-1} d_{ai}^u u_i+\sum _{j=1}^{m-1} e_{aj}^u v_j+\sum _{k=1}^{r-1} f_{ak}^u w_k+g_{a}^u t,\\ L(v_b)= & {} \sum _{i=1}^{n-1} d_{bi}^v u_i+\sum _{j=1}^{m-1} e_{bj}^v v_j+\sum _{k=1}^{r-1} f_{bk}^v w_k+g_{b}^v t,\\ L(w_c)= & {} \sum _{i=1}^{n-1} d_{ci}^w u_i+\sum _{j=1}^{m-1} e_{cj}^w v_j+\sum _{k=1}^{r-1} f_{ck}^w w_k+g_{c}^w t,\\ L(t)= & {} \sum _{i=1}^{n-1} d_{i}^t u_i+\sum _{j=1}^{m-1} e_{j}^t v_j+\sum _{k=1}^{r-1} f_{k}^t w_k+g^t t. \end{aligned}$$
Due to the complexity of the proof, we will conduct it in a few steps.
1.
Fix \(a_1, a_2 \in \{1,\ldots , n-1\}\), \(b_1, b_2 \in \{1,\ldots , m-1\}\), \(c_1, c_2 \in \{1,\ldots , r-1\}\) and consider \(A\in G\), which interchanges: \(u_{a_1}\) with \(u_{a_2}\), \(v_{b_1}\) with \(v_{b_2}\) and \(w_{c_1}\) with \(w_{c_2}\), i.e.
$$\begin{aligned} A(u_{a_1})= & {} u_{a_2}, \quad A(u_{a_2})=u_{a_1}, \quad A(u_{a})=u_{a}, \quad a\ne a_1, a_2;\\ A(v_{b_1})= & {} v_{b_2}, \quad A(v_{b_2})=v_{b_1}, \quad A(v_{b})=v_{b}, \quad b\ne b_1, b_2;\\ A(w_{c_1})= & {} w_{c_2}, \quad A(w_{c_2})=w_{c_1}, \quad A(w_{c})=w_{c}, \quad c\ne c_1, c_2;\\ A(t)= & {} t. \end{aligned}$$Since L commutes with G, in particular \(L\circ A(u_{a_1})=A\circ L(u_{a_1})\), which means that
$$\begin{aligned} L(u_{a_2})=A\left( \sum _{i=1}^{n-1} d_{a_1i}^u u_i+\sum _{j=1}^{m-1} e_{a_1j}^u v_j+\sum _{k=1}^{r-1} f_{a_1k}^u w_k+g_{a_1}^u t\right) \end{aligned}$$Therefore
$$\begin{aligned}&\sum _{i=1}^{n-1} d_{a_2i}^u u_i+\sum _{j=1}^{m-1} e_{a_2j}^u v_j+\sum _{k=1}^{r-1} f_{a_2k}^u w_k+g_{a_2}^u t=\sum _{i=1, i\ne a_1, a_2}^{n-1} d_{a_1i}^u u_i+\sum _{j=1, j\ne b_1, b_2}^{m-1} e_{a_1j}^u v_j\\&\quad +\sum _{k=1, k\ne c_1, c_2}^{r-1} f_{a_1k}^u w_k+g_{a_1}^u t+d_{a_1a_1}^u u_{a_2}+d_{a_1a_2}^u u_{a_1}+e_{a_1b_1}^u v_{b_2} +e_{a_1b_2}^u v_{b_1}+f_{a_1c_1}^u w_{c_2}+f_{a_1c_2}^u w_{c_1}. \end{aligned}$$Hence, after comparing the coefficients corresponding to the base elements, we get the equations
(a)
\(d_{a_1i}^u=d_{a_2i}^u\), \(e_{a_1j}^u=e_{a_2j}^u\), \(f_{a_1k}^u=f_{a_2k}^u\) for all \(i\in \{ 1,\ldots , n-1 \}\backslash \{a_1, a_2\}\), \(j\in \{ 1,\ldots , m-1 \}\backslash \{b_1, b_2\}\), \(k\in \{ 1,\ldots , r-1 \}\backslash \{c_1, c_2\}\);
(b)
\(d_{a_1a_1}^u=d_{a_2a_2}^u\), \(e_{a_1b_1}^u=e_{a_2b_2}^u\), \(f_{a_1c_1}^u=f_{a_2c_2}^u\);
(c)
\(d_{a_1a_2}^u=d_{a_2a_1}^u\), \(e_{a_1b_2}^u=e_{a_2b_1}^u\), \(f_{a_1c_2}^u=f_{a_2c_1}^u\);
(d)
\(g_{a_1}^u=g_{a_2}^u\).
2.
Let us consider a matrix of coefficients \(d_{ai}^u\), where \(a,i\in \{1,\ldots , n-1\}\) given by
$$\begin{aligned}D^u= \left[ \begin{array}{ccc} d_{1 \ 1}^u &{} \quad \ldots &{} \quad d_{1 \ n-1}^u\\ \vdots &{} \quad \ddots &{} \quad \vdots \\ d_{n-1 \ 1}^u &{}\quad \ldots &{} \quad d_{n-1 \ n-1}^u \end{array} \right] . \end{aligned}$$The elements \(a_1, a_2\) were chosen arbitrarily, hence by (b) all elements on the main diagonal of \(D^u\) are equal.
By (c) matrix \(D^u\) is symmetric.
If \(a_1=1\), \(a_2=2\), then by (a) we get \(d_{1i}^u=d_{2i}^u\) for all \(i\ne 1, 2\). In particular \(d_{13}^u=d_{23}^u\). Furthermore, by (c), \(d_{12}^u=d_{21}^u\).
Analogously, if \(a_1=1\), \(a_2=3\) then by (a) \(d_{12}^u=d_{32}^u\) and by (c) \(d_{13}^u=d_{31}^u\) and if \(a_1=2\), \(a_2=3\) then by (a) \(d_{21}^u=d_{31}^u\) and by (c) \(d_{23}^u=d_{32}^u\).
Hence
$$\begin{aligned} d_{12}^u=d_{21}^u=d_{31}^u=d_{13}^u=d_{23}^u=d_{32}^u=:d^u_2. \end{aligned}$$Proceeding similarly, for any three numbers from the set \(\{1, \ldots , n-1\}\), we get
$$\begin{aligned}D^u= \left[ \begin{array}{cccc} d_{1}^u &{} \quad d_{2}^u &{} \quad \ldots &{} \quad d_{2}^u\\ d_{2}^u &{} \quad \ddots &{} \quad &{} \quad \vdots \\ \vdots &{} \quad &{} \quad \ddots &{} \quad d_{2}^u\\ d_{2}^u &{} \quad \ldots &{} \quad d_{2}^u &{} \quad d_{1}^u \end{array} \right] .\end{aligned}$$
3.
Consider now a matrix of coefficients \(e_{aj}^u\), where \(a\in \{1,\ldots ,n-1\}\), \(j\in \{1,\ldots ,m-1\}\)
$$\begin{aligned}E^u= \left[ \begin{array}{ccc} e_{1 \ 1}^u &{}\quad \ldots &{} \quad e_{1 \ m-1}^u\\ \vdots &{} \quad \ddots &{}\quad \vdots \\ e_{n-1 \ 1}^u &{}\quad \ldots &{} \quad e_{n-1 \ m-1}^u \end{array} \right] .\end{aligned}$$If \(b_1=1, b_2=2\) then by (b) we get \(e_{a_11}^u=e_{a_22}^u\) and by (c) \(e_{a_12}^u=e_{a_21}^u\).
If \(b_1=1, b_2=3\) then by (a) we get \(e_{a_12}^u=e_{a_22}^u\).
Hence \(e_{a_11}^u=e_{a_22}^u=e_{a_12}^u=e_{a_21}^u\). Proceeding analogously for any \(b_1,b_2\in \{1,\ldots ,m-1\}\) and by arbitrariness of choice of \(a_1,a_2\), we get the equality of all elements of the matrix \(E^u\), i.e.
$$\begin{aligned}E^u= \left[ \begin{array}{ccc} e^u &{} \quad \ldots &{}\quad e^u\\ \vdots &{}\quad \ddots &{}\quad \vdots \\ e^u &{} \quad \ldots &{}\quad \quad e^u \end{array} \right] .\end{aligned}$$Analogously
$$\begin{aligned}F^u= \left[ \begin{array}{ccc} f^u &{} \quad \ldots &{}\quad f^u\\ \vdots &{}\quad \ddots &{} \quad \quad \vdots \\ f^u &{} \quad \ldots &{}\quad f^u \end{array} \right] .\end{aligned}$$Furthermore, by (d) and by arbitrariness of choice of \(a_1,a_2\) we get \(g_{a_1}^u=g_{a_2}^u=:g^u\) for all \(a_1,a_2\in \{1,\ldots ,n-1\}\). Applying the above formulas and (2), (3) we get a new form of \(L(u_a)\)
$$\begin{aligned} L(u_a)=d_1^u u_a+d_2^u\sum _{i=1, i\ne a}^{n-1} u_i+e^u\sum _{j=1}^{m-1} v_j+f^u\sum _{k=1}^{r-1} w_k+g^u t. \end{aligned}$$Analogously
$$\begin{aligned} L(v_b)= & {} d^v\sum _{i=1}^{n-1}u_i+e_1^v v_b+e_2^v\sum _{j=1, j\ne b}^{m-1} v_j+f^v\sum _{k=1}^{r-1} w_k+g^v t,\\ L(w_c)= & {} d^w\sum _{i=1}^{n-1} u_i+e^w\sum _{j=1}^{m-1} v_j+f_1^w w_c+f_2^w\sum _{k=1, k\ne c}^{r-1} w_k+g^w t. \end{aligned}$$
4.
L commutes with G, so \(L\circ A(t)=A\circ L(t)\) and therefore
$$\begin{aligned} L(t)=A\left( \sum _{i=1}^{n-1} d_{i}^t u_i+\sum _{j=1}^{m-1} e_{j}^t v_j+\sum _{k=1}^{r-1} f_{k}^t w_k+g^t t\right) , \end{aligned}$$which means that
$$\begin{aligned} \sum _{i=1}^{n-1} d_{i}^t u_i+\sum _{j=1}^{m-1} e_{j}^t v_j+\sum _{k=1}^{r-1} f_{k}^t w_k+g^t t=A\left( \sum _{i=1}^{n-1} d_{i}^t u_i+\sum _{j=1}^{m-1} e_{j}^t v_j+\sum _{k=1}^{r-1} f_{k}^t w_k+g^t t\right) . \end{aligned}$$After subtracting the same elements from both sides of the equation, we get
$$\begin{aligned}&d_{a_1}^t u_{a_1}+d_{a_2}^t u_{a_2}+e_{b_1}^t v_{b_1}+e_{b_2}^t v_{b_2}+f_{c_1}^t w_{c_1}+f_{c_2}^t w_{c_2}\\&\quad =d_{a_1}^t u_{a_2}+d_{a_2}^t u_{a_1}+e_{b_1}^t v_{b_2}+e_{b_2}^t v_{b_1}+f_{c_1}^t w_{c_2}+f_{c_2}^t w_{c_1}. \end{aligned}$$Hence
$$\begin{aligned} d_{a_1}^t=d_{a_2}^t=:d^t, \quad e_{b_1}^t =e_{b_2}^t=:e^t, \quad f_{c_1}^t=f_{c_2}^t=:f^t. \end{aligned}$$By the arbitrariness of the choice of \(a_1, a_2, b_1, b_2, c_1, c_2\) we obtain a new formula for L(t)
$$\begin{aligned} L(t)=d^t\sum _{i=1}^{n-1} u_i+e^t\sum _{j=1}^{m-1} v_j+f^t\sum _{k=1}^{r-1} w_k+g^t t. \end{aligned}$$
5.
Fix \(a_3\in \{1,\ldots ,n-1\}, b_3\in \{1,\ldots ,m-1\}, c_3\in \{1,\ldots ,r-1\}\) and consider \(B\in G\), which interchanges \(u_{a_3}\) with \(u_{n}\), \(v_{b_3}\) with \(v_{m}\) and \(w_{c_3}\) with \(w_{r}\). Therefore, since \(u_n=t-\sum _{i=1}^{n-1} u_i\), B fulfills the conditions
$$\begin{aligned} B(u_{a_3})= & {} t-\sum _{i=1}^{n-1} u_i, \qquad B(u_{a})=u_{a}, \quad a\in \{1,\ldots ,n-1\}\backslash \{a_3\};\\ B(v_{b_3})= & {} t-\sum _{j=1}^{m-1} v_j, \qquad B(v_{b})=v_{b}, \quad b\in \{1,\ldots ,m-1\}\backslash \{b_3\};\\ B(w_{c_3})= & {} t-\sum _{k=1}^{r-1} w_k, \qquad B(w_{c})=w_{c}, \quad c\in \{1,\ldots ,r-1\}\backslash \{c_3\};\\ B(t)= & {} t. \end{aligned}$$Since \(L\circ B(u_a)=B\circ L(u_a)\) for all \(a\ne a_3\),
$$\begin{aligned} L(u_a)=B\Bigg (d_1^u u_a+d_2^u\sum _{i=1, i\ne a}^{n-1} u_i+e^u\sum _{j=1}^{m-1} v_j+f^u\sum _{k=1}^{r-1} w_k+g^u t\Bigg ). \end{aligned}$$Hence:
$$\begin{aligned}&d_1^u u_a+d_2^u\sum _{i=1, i\ne a}^{n-1} u_i+e^u\sum _{j=1}^{m-1} v_j+f^u\sum _{k=1}^{r-1} w_k+g^u t=d_1^u u_a\\&\quad +d_2^u\sum _{i=1, i\ne a,a_3}^{n-1} u_i+d_2^u\left( t-\sum _{i=1}^{n-1} u_i \right) \\&\quad +e^u\sum _{j=1, j\ne b_3}^{m-1} v_j+e^u\left( t-\sum _{j=1}^{m-1} v_j \right) +f^u\sum _{k=1, k\ne c_3}^{r-1} w_k+f^u\left( t-\sum _{k=1}^{r-1} w_k \right) +g^u t \end{aligned}$$Therefore, after reducing identical elements, we get
$$\begin{aligned} d_2^u u_{a_3}+e^u v_{b_3}+f^u w_{c_3}=d_2^u t-d_2^u\sum _{i=1}^{n-1}u_i+e^ut-e^u\sum _{j=1}^{m-1}v_j+f^ut-f^u\sum _{k=1}^{r-1}w_k. \end{aligned}$$Consequently,
$$\begin{aligned}&d_2^u\sum _{i=1,i\ne a_3}^{n-1} u_i+e^u\sum _{j=1,j\ne b_3}^{m-1} v_j+f^u\sum _{k=1,k\ne c_3}^{r-1} w_k\\&\quad +2d_2^u u_{a_3}+2e^u v_{b_3}+2f^u w_{c_3}=(d_2^u+e^u+f^u)t. \end{aligned}$$Hence, by the linear independence of the basis elements, \(d_2^u=0, e^u=0, f^u=0\). Analogously \(d^v=0, e_2^v=0, f^v=0\) and \(d^w=0, e^w=0, f_2^w=0\).
6.
Furthermore, we know that \(L\circ B(t)=B\circ L(t)\) which gives
$$\begin{aligned} L(t)=B\left( d^t\sum _{i=1}^{n-1}u_i+e^t\sum _{j=1}^{m-1}v_j+f^t\sum _{k=1}^{r-1} w_k+g^t t \right) \end{aligned}$$and
$$\begin{aligned}&d^t\sum _{i=1}^{n-1}u_i+e^t\sum _{j=1}^{m-1}v_j+f^t\sum _{k=1}^{r-1} w_k+g^t t=d^t\sum _{i=1, i\ne a_3}^{n-1}u_i+d^t\left( t-\sum _{i=1}^{n-1} u_i \right) \\&\quad +e^t\sum _{j=1, j\ne b_3}^{m-1}v_j+e^t\left( t-\sum _{j=1}^{m-1} v_j \right) +f^t\sum _{k=1, k\ne c_3}^{r-1}w_k+f^t\left( t-\sum _{k=1}^{r-1} w_k \right) +g^tt. \end{aligned}$$After reduction, we get
$$\begin{aligned} d^tu_{a_3}+e^tv_{b_3}+f^tw_{c_3}+d^t\sum _{i=1}^{n-1}u_i+e^t\sum _{j=1}^{m-1}v_j+f^t\sum _{k=1}^{r-1}w_k=(d^t+e^t+f^t)t.\end{aligned}$$Therefore \(d^t+e^t+f^t=0, d^t=0, e^t=0, f^t=0\) and so we obtain a new formula for L
$$\begin{aligned} L(u_a)= & {} d_1^u u_a+g^u t,\\ L(v_b)= & {} e_1^v v_b+g^v t,\\ L(w_c)= & {} f_1^w w_c+g^w t,\\ L(t)= & {} g^t t. \end{aligned}$$
7.
To end the proof, it remains to find the relationships between the constants \(g^u, g^v, g^w, g^t\). To this end, we note that \(L\circ B(u_{a_3})=B\circ L(u_{a_3})\), which implies
$$\begin{aligned} L\left( t -\sum _{i=1}^{n-1}u_i\right)= & {} B(d_1^u u_{a_3}+g^u t)\\ g^tt-\sum _{i=1}^{n-1}(d_1^uu_i+g^ut)= & {} d_1^u\left( t-\sum _{i=1}^{n-1}u_i \right) +g^ut\\ g^tt-d_1^u\sum _{i=1}^{n-1}u_i-(n-1)g^ut= & {} d_1^ut-d_1^u\sum _{i=1}^{n-1}u_i +g^ut. \end{aligned}$$Hence \((d_1^u+ng^u-g^t)t=0\), so \(d_1^u+ng^u-g^t=0\), which gives us \(g^u=\frac{g^t-d_1^u}{n}\). Analogously, for the remaining two constants we obtain
\(g^v=\frac{g^t-e_1^v}{m}\), \(g^w=\frac{g^t-f_1^w}{r}\), and the proof is completed.
\(\square \)
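Theorem 9 can be sanity-checked numerically: averaging an arbitrary operator on T over the finite group G produces an operator commuting with G, whose matrix then has the predicted diagonal-plus-t form (a Python sketch for n=m=r=3; the indicator-type basis of T and the coordinate action of G used here are assumptions spelled out in the comments, numpy assumed):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
n = m = r = 3
N = n * m * r

def A_mat(alpha, beta, gamma):
    # Assumed coordinate action: (A x)_{i,j,k} = x_{alpha(i),beta(j),gamma(k)}.
    M = np.zeros((N, N))
    for i in range(n):
        for j in range(m):
            for k in range(r):
                M[(i * m + j) * r + k, (alpha[i] * m + beta[j]) * r + gamma[k]] = 1.0
    return M

G = [A_mat(a, b, c) for a in permutations(range(n))
                    for b in permutations(range(m))
                    for c in permutations(range(r))]

# Indicator-type spanning elements of T and the constant tensor t.
def u(a): x = np.zeros((n, m, r)); x[a, :, :] = 1.0; return x.ravel()
def v(b): x = np.zeros((n, m, r)); x[:, b, :] = 1.0; return x.ravel()
def w(c): x = np.zeros((n, m, r)); x[:, :, c] = 1.0; return x.ravel()
t = np.ones(N)

B = np.array([u(0), u(1), v(0), v(1), w(0), w(1), t]).T   # basis, dim T = 7
Q = B @ np.linalg.pinv(B)             # orthogonal projection of S onto T

L0 = Q @ rng.standard_normal((N, N)) @ Q      # an arbitrary operator T -> T
L = sum(A.T @ L0 @ A for A in G) / len(G)     # averaged: commutes with G

def in_span(y, cols):
    C = np.array(cols).T
    coef, *_ = np.linalg.lstsq(C, y, rcond=None)
    return np.allclose(C @ coef, y)

# The form predicted by Theorem 9.
assert in_span(L @ u(0), [u(0), t])   # L(u_a) = d*u_a + (const)*t
assert in_span(L @ t, [t])            # L(t) = g*t
```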
Now we can prove the main theorem of this paper.
Theorem 10
Let \(S=\left( M(n,m,r), \Vert \cdot \Vert \right) \) be a smooth space. Assume that for any permutation \({\alpha \times \beta \times \gamma }\) the operator \(A_{\alpha \times \beta \times \gamma }\) is an isometry. Consider \(T=M(n,1,1)+M(1,m,1)+M(1,1,r)\) and assume that Q is a minimal projection which commutes with G. Then Q is the unique minimal projection from S onto T.
Proof
By Theorems 6 and 7, the operator \(E_Q|_T\) fulfills the assumptions of Theorem 9. Therefore, there exist constants d, e, f, g such that \(E_Q|_T\) is of the form (6). Consider now the adjoint operator \((E_Q|_T)^*\). It is represented by the adjoint of the matrix of \(E_Q|_T\), which means that

$$\begin{aligned} (E_Q|_T)^*(t)=\frac{g-d}{n}\sum _{i=1}^{n-1} u_i+\frac{g-e}{m}\sum _{j=1}^{m-1} v_j+\frac{g-f}{r}\sum _{k=1}^{r-1} w_k+g\, t. \end{aligned}$$
By Theorem 8 we know that \((E_Q|_T)^*(t)=c\cdot t\). Hence \(\frac{g-d}{n}=\frac{g-e}{m}=\frac{g-f}{r}=0\), which means that \(g=d=e=f\). Finally we get

$$\begin{aligned} E_Q|_T=g\cdot \text {Id}_T. \end{aligned}$$
Since \(E_Q|_T\not \equiv 0\), we have \(g\ne 0\). Therefore the operator \(E_Q|_T\) is invertible. By Theorem 4 we obtain the uniqueness of Q, which ends the proof. \(\square \)
Remark 1
Since \(\dim S< +\infty \), \({\mathcal {P}}(S,T)\ne \emptyset \) and a minimal projection exists. For more details see [10].
Example 1
For every \(1<p<+\infty \) the space \(L_p(M(n,m,r))\) is smooth and every permutation operator \(A_{\alpha \times \beta \times \gamma }\) is an isometry. Therefore the assumptions of Theorem 10 are fulfilled and there exists a unique minimal projection from S onto T.
The above considerations also work for Orlicz spaces equipped with a smooth Orlicz or Luxemburg norm.
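In the Hilbert case p=2 the unique minimal projection is simply the orthogonal projection onto T, which can be written down explicitly from any basis of T (a Python sketch, numpy assumed; the indicator-type basis below is an assumption consistent with the setup of Sect. 2):

```python
import numpy as np

n, m, r = 3, 4, 5
N = n * m * r
cols = []

# An indicator-type basis of T: u_1..u_{n-1}, v_1..v_{m-1}, w_1..w_{r-1}, t.
for a in range(n - 1):
    x = np.zeros((n, m, r)); x[a, :, :] = 1.0; cols.append(x.ravel())
for b in range(m - 1):
    x = np.zeros((n, m, r)); x[:, b, :] = 1.0; cols.append(x.ravel())
for c in range(r - 1):
    x = np.zeros((n, m, r)); x[:, :, c] = 1.0; cols.append(x.ravel())
cols.append(np.ones(N))
B = np.array(cols).T

# For p = 2 the minimal projection onto T is the orthogonal one (norm 1).
Q = B @ np.linalg.pinv(B)

assert np.allclose(Q @ Q, Q)           # a projection ...
assert np.allclose(Q, Q.T)             # ... an orthogonal one
assert np.allclose(Q @ B, B)           # which fixes T pointwise
```

Since this Q already commutes with every permutation isometry, Rudin's averaging leaves it unchanged, in line with Theorem 10.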
References
Aksoy, A., Lewicki, G.: Minimal projections with respect to various norms. Studia. Math. 210(1), 1–16 (2012)
Chalmers, B.L., Lewicki, G.: Symmetric subspaces of \(\ell _1\) with large projection constants. Studia Math. 134(2), 119–133 (1999)
Chalmers, B.L., Lewicki, G.: Symmetric spaces with maximal projection constant. J. Funct. Anal. 200, 1–22 (2003)
Chalmers, B.L., Lewicki, G.: Three-dimensional subspace of \(l_{\infty }^{(5)}\) with maximal projection constant. J. Funct. Anal. 257, 553–592 (2009)
Chalmers, B.L., Lewicki, G.: A proof of the Grünbaum conjecture. Studia Math. 200(2), 103–129 (2010)
Chalmers, B.L., Metcalf, F.T.: The determination of minimal projections and extensions in \(L^1\). Trans. Am. Math. Soc. 329, 289–305 (1992)
Cheney, E.W., Franchetti, C.: Minimal projections in \(L_1\)-space. Duke Math J. 43, 501–510 (1976)
Cheney, E.W., Hobby, C.R., Morris, P.D., Schurer, F., Wulbert, D.E.: On the minimal property of the Fourier projection. Trans. Am. Math. Soc. 143, 249–258 (1969)
Cheney, E.W., Light, W.A.: Approximation Theory in Tensor Product Spaces, Lecture Notes in Mathematics. Springer-Verlag, Berlin (1985)
Cheney, E.W., Morris, P.D.: On the existence and characterization of minimal projections. J. Reine Angew. Math. 270, 61–76 (1974)
Deregowska, B., Lewandowska, B.: Minimal projections onto hyperplanes in vector-valued sequence spaces. J. Approx. Theory 194, 1–13 (2015)
Deregowska, B., Lewandowska, B.: On the minimal property of de la Vallée Poussin’s operator. Bull. Aust. Math. Soc. 91(1), 129–133 (2015)
Fisher, S.D., Morris, P.D., Wulbert, D.E.: Unique minimality of Fourier projections. Trans. Am. Math. Soc. 265, 235–246 (1981)
Foucart, S., Skrzypek, L.: On maximal relative projection constants. J. Math. Anal. Appl. 447(1), 309–328 (2017)
Isbell, J.R., Semadeni, Z.: Projection constants and spaces of continuous functions. Trans. Am. Math. Soc. 107(1), 38–48 (1963)
König, H.: Spaces with large projection constants. Israel J. Math. 50, 181–188 (1985)
König, H., Schütt, C., Tomczak-Jaegermann, N.: Projection constants of symmetric spaces and variants of Khintchine’s inequality. J. Reine Angew. Math. 511, 1–42 (1999)
Kozdęba, M.: Minimal projection onto certain subspace of \(L_p(X\times Y\times Z)\). Numer. Funct. Anal. Optim. 39(13), 1407–1422 (2018)
Lambert, P.V.: Minimum norm property of the Fourier projection in spaces of continuous functions. Bull. Soc. Math. Belg. 21, 359–369 (1969)
Lewicki, G.: Minimal extensions in tensor product spaces. J. Approx. Theory 97, 366–383 (1999)
Lewicki, G., Prophet, M.: Minimal multi-convex projections. Studia Math. 178(2), 71–91 (2007)
Lewicki, G., Skrzypek, L.: Chalmers-Metcalf operator and uniqueness of minimal projections. J. Approx. Theory 148, 71–91 (2007)
Lewicki, G., Skrzypek, L.: Minimal projections onto hyperplanes in \(\ell _p^n\). J. Approx. Theory 202, 42–63 (2016)
Light, W.A.: Minimal projections in tensor-product spaces. Math. Z. 191(4), 633–643 (1986)
Odyniec, W., Lewicki, G.: Minimal Projections in Banach Spaces, Lecture Notes in Mathematics, vol. 1449. Springer-Verlag, Berlin (1990)
Rudin, W.: Functional Analysis, TMH Edition edn. McGraw Hill, New York (1974)
Skrzypek, L.: The uniqueness of minimal projections in smooth matrix spaces. J. Approx. Theory 107, 315–336 (2000)
Skrzypek, L.: Minimal projections in spaces of functions of N variables. J. Approx. Theory 123, 214–231 (2003)
Żwak, A.: Minimal projections in Orlicz spaces. Univ. Iagel. Acta Math. No. 32, 137–147 (1995)
Communicated by Adrian Constantin.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Kozdęba, M. Uniqueness of minimal projections in smooth expanded matrix spaces. Monatsh Math 194, 275–289 (2021). https://doi.org/10.1007/s00605-020-01471-y