Non-optimality of the Greedy Algorithm for Subspace Orderings in the Method of Alternating Projections

The method of alternating projections involves projecting an element of a Hilbert space cyclically onto a collection of closed subspaces. It is known that the resulting sequence always converges in norm and that one can obtain estimates for the rate of convergence in terms of quantities describing the geometric relationship between the subspaces in question, namely their pairwise Friedrichs numbers. We consider the question of how best to order a given collection of subspaces so as to obtain the best estimate on the rate of convergence. We prove, by relating the ordering problem to a variant of the famous Travelling Salesman Problem, that correctness of a natural form of the Greedy Algorithm would imply that P = NP, before presenting a simple example which shows that, contrary to a claim made in the influential paper of Kayalar and Weinert (Math Control Signals Syst 1(1):43–59, 1988), the result of the Greedy Algorithm is not in general optimal. We go on to establish sharp estimates on the degree to which the result of the Greedy Algorithm can differ from the optimal result. Underlying all of these results is a construction which shows that for any matrix whose entries satisfy certain natural assumptions it is possible to construct a Hilbert space and a collection of closed subspaces such that the pairwise Friedrichs numbers between the subspaces are given precisely by the entries of that matrix.


Introduction
Let X be a real or complex Hilbert space, N ≥ 2 an integer, and suppose that M 1 , . . . , M N are closed subspaces of X. Furthermore let P k denote the orthogonal projection onto M k , 1 ≤ k ≤ N , and let P M denote the orthogonal projection onto the intersection M = M 1 ∩ . . . ∩ M N . If we let T = P N · · · P 1 then it follows from a classical theorem due to Halperin [8] that

‖T^n x − P M x‖ → 0 as n → ∞ (1.1)

for all x ∈ X. It follows easily that, for any x ∈ X, the sequence in X obtained by starting at x and then projecting cyclically onto the N subspaces M 1 , . . . , M N must converge to the point P M x, which is the point in M closest to the starting vector x. This procedure is known as the method of alternating projections and has many applications, for instance to the iterative solution of large linear systems but also in the theory of partial differential equations and in image restoration; see [3] for a survey.
In view of these applications it is important to understand the rate at which the convergence in (1.1) takes place; see for instance [1,2,6,7] for in-depth investigations. Recall that the Friedrichs number c(L 1 , L 2 ) between two subspaces L 1 , L 2 of X is defined as

c(L 1 , L 2 ) = sup{ |⟨x, y⟩| : x ∈ L 1 ∩ L⊥, y ∈ L 2 ∩ L⊥, ‖x‖ ≤ 1, ‖y‖ ≤ 1 },

where L = L 1 ∩ L 2 . The Friedrichs number lies in the interval [0, 1] and may be thought of as the cosine of the 'angle' between the subspaces L 1 and L 2 . It is shown in [9, Theorem 2] that for N = 2 in the method of alternating projections we have

‖T^n − P M ‖ = c(M 1 , M 2 )^{2n−1} , n ≥ 1. (1.2)

When N ≥ 3 no sharp upper bound of this form is known, but it is shown in [5, Corollary 2.10] that, for independent subspaces,

‖T^n − P M ‖ ≤ ( c(M 1 , M 2 ) c(M 2 , M 3 ) · · · c(M N −1 , M N ) c(M N , M 1 ) )^{n−1/2} , n ≥ 1. (1.3)

Moreover, the assumption on the subspaces cannot be omitted. The same bound was obtained earlier in [9] in a special case. Examples in [5, Section 3] show both that the bound in (1.4) fails to be sharp in some special cases, thus disproving a conjecture made in [9], and more generally that it is not possible for N ≥ 3 to obtain a sharp upper bound for ‖T^n − P M ‖, n ≥ 1, which depends only on the pairwise Friedrichs numbers between the subspaces M 1 , . . . , M N . Nevertheless, the estimate in (1.3) recovers the sharp bound in (1.2) when N = 2 and holds with equality in a number of other cases, for instance if all of the spaces M 1 , . . . , M N are one-dimensional.
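The Friedrichs number of a concrete pair of subspaces can be computed numerically via principal angles; the following sketch (an illustration of our own, with names of our choosing, not code from the paper) takes matrices whose columns form orthonormal bases of the two subspaces:

```python
import numpy as np

def friedrichs_number(Q1, Q2, tol=1e-9):
    """Friedrichs number c(L1, L2) for subspaces given by matrices whose
    columns form orthonormal bases.  The singular values of Q1^T Q2 are
    the cosines of the principal angles between L1 and L2; singular
    values equal to 1 correspond to directions lying in the intersection
    L = L1 cap L2 and are discarded, so the Friedrichs number is the
    largest remaining cosine (or 0 if none remains)."""
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)
    s = s[s < 1.0 - tol]
    return float(s[0]) if s.size else 0.0
```

For instance, two planes in R^3 which share a common line and otherwise meet at angle θ have Friedrichs number cos θ, and two orthogonal lines have Friedrichs number 0.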
We also see from (1.3) that if the Friedrichs number between a pair of consecutive subspaces is zero then we have convergence in the method of alternating projections after at most two steps. Since our interest here is primarily in the asymptotic rate of convergence as n → ∞, there is no significant loss of generality in assuming that c(M k , M ℓ ) > 0 for 1 ≤ k, ℓ ≤ N with k ≠ ℓ. In this case (1.3) may be recast as

‖T^n − P M ‖ ≤ r^{n−1/2} , n ≥ 1, where r = ∏_{k=1}^{N} c(M k , M k+1 ), (1.4)

indices henceforth being considered modulo N . Since the asymptotic rate of convergence is determined by the value of r ∈ (0, 1], it is natural to seek the reordering of the subspaces M 1 , . . . , M N which leads to the smallest possible value of r. More formally, given N ≥ 2 we let S N denote the symmetric group on N letters and for each σ ∈ S N we let r σ = ∏_{k=1}^{N} c(M σ(k) , M σ(k+1) ), so that for the reordered product T σ = P σ(N) · · · P σ(1) the estimate (1.4) holds with r σ in place of r. The objective therefore is to find a permutation σ ∈ S N such that r σ = r * , where r * = min{r σ : σ ∈ S N }, and to find such a permutation a version of the following 'greedy' algorithm was proposed in [9, Section 9].
Greedy Algorithm: Given N ≥ 2 independent closed subspaces M 1 , . . . , M N of a Hilbert space X whose mutual Friedrichs numbers are known, we obtain permutations σ k ∈ S N , 1 ≤ k ≤ N , as follows. Let σ k (1) = k and for j = 2, . . . , N consider as possible values for σ k (j) any previously unused index ℓ which minimises c(M σ k (j−1) , M ℓ ). If at any stage there is more than one choice of such an index then proceed by considering all possible choices of this index and take σ k to be that permutation which, among those leading to the least value of r σ k , comes first in the lexicographical ordering. Return the permutation σ G = σ ℓ where ℓ ∈ {1, . . . , N } is the smallest index such that r σ ℓ = min{r σ k : 1 ≤ k ≤ N }. If we let r G = r σ G , N ≥ 2, then the Greedy Algorithm is correct if and only if r G = r * for all constellations of subspaces. By definition of r * it is clear that r * ≤ r G , N ≥ 2. In Sect. 3 we show that if the Greedy Algorithm were correct then it would follow that P = NP. We then exhibit a simple example with N = 4 in which r * < r G . Both results are obtained as a consequence of a construction, presented in Sect. 2, which shows that any suitable collection of numbers in [0, 1] arises as the set of pairwise Friedrichs numbers between subspaces of some Hilbert space. This result is of independent interest and in particular implies that the problem of finding an optimal ordering is at least as hard as solving a multiplicative form of the Travelling Salesman Problem (TSP). In Sect. 4 we give sharp estimates for the maximal discrepancies between r * and r G . In particular, we show that generically r G < r * ^{1/2} , and that

Results Math
the estimate is optimal in the sense that for every ε ∈ (0, 1) there exists some N ≥ 2 and a suitable collection of N subspaces of some Hilbert space such that r G > (1 − ε) r * ^{1/2} . The last step once again requires the construction from Sect. 2.
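The Greedy Algorithm described above can be sketched in code; the following simplified Python version (function names of our choosing) assumes all pairwise Friedrichs numbers are distinct, so that the tie-breaking rule is never invoked, and includes a brute-force computation of r * for comparison:

```python
from itertools import permutations

def cycle_cost(C, order):
    """r_sigma: product of Friedrichs numbers along the cyclic ordering."""
    N = len(order)
    r = 1.0
    for j in range(N):
        r *= C[order[j]][order[(j + 1) % N]]
    return r

def greedy_order(C):
    """Run the Greedy Algorithm from every starting index and return the
    best pair (r_G, ordering) found.  Ties never arise because the
    off-diagonal entries of C are assumed pairwise distinct."""
    N = len(C)
    best = None
    for start in range(N):
        order, unused = [start], set(range(N)) - {start}
        while unused:
            # Append the unused index with the smallest Friedrichs
            # number to the current subspace.
            nxt = min(unused, key=lambda j: C[order[-1]][j])
            order.append(nxt)
            unused.remove(nxt)
        r = cycle_cost(C, order)
        if best is None or r < best[0]:
            best = (r, order)
    return best

def optimal_order(C):
    """Brute-force r_* by minimising over all N! orderings."""
    return min((cycle_cost(C, list(p)), list(p))
               for p in permutations(range(len(C))))
```

Here C is the Friedrichs matrix of the collection, and r * ≤ r G holds by construction since the greedy orderings form a subset of all orderings.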

Friedrichs Matrices
Given N ≥ 2 closed subspaces M 1 , . . . , M N of a Hilbert space, we may consider the N × N matrix (c(M k , M ℓ )) 1≤k,ℓ≤N whose entries are the pairwise Friedrichs numbers between the various subspaces. We call the matrix arising in this way the Friedrichs matrix corresponding to the collection of subspaces. It is clear that any Friedrichs matrix must be symmetric, have zeros along its main diagonal and elsewhere must have entries lying in the interval [0, 1]. Is every square matrix which has these three properties a Friedrichs matrix for some collection of closed subspaces? The following result answers this question in the affirmative.
Here and in what follows we use the same notation as in Sect. 1.

Theorem 2.1. Let N ≥ 2 and let C be a symmetric N × N matrix with zeros along its main diagonal and all other entries lying in the interval [0, 1]. Then there exist a Hilbert space X and closed subspaces M 1 , . . . , M N of X whose Friedrichs matrix is precisely C. Moreover, if all off-diagonal entries of C are strictly less than 1 then the subspaces may be taken to be independent.

Proof. Let C = (c k,ℓ ) and suppose first that 0 ≤ c k,ℓ < 1 for 1 ≤ k, ℓ ≤ N . Let {e k,ℓ : 1 ≤ k, ℓ ≤ N, k ≠ ℓ} be an orthonormal basis for the space X = F^{N(N−1)} endowed with the Euclidean norm, define for 1 ≤ k ≤ N an orthonormal set B k in terms of the basis vectors e k,ℓ and the entries of C, and consider the closed subspaces of X given by M k = span B k . By our assumption that the entries of C be strictly smaller than 1 we see that

Incorrectness of the Greedy Algorithm
In this section we turn to the Greedy Algorithm presented in Sect. 1, and in particular we ask whether the algorithm is correct in the sense that the ordering it produces leads to the optimal value of r ∈ [0, 1] in (1.4). We first consider the connection between our problem of finding an optimal ordering and the classical TSP, and we show in Corollary 3.3 below that correctness of the Greedy Algorithm for a sufficiently large class of cases would imply that P = NP. We then exhibit a simple example in which the Greedy Algorithm gives a suboptimal ordering.
Recall that in the graph-theoretical formulation of the TSP we are given, for some N ≥ 2, a complete graph K N with vertices V N = {1, 2, . . . , N } and a weight function w on pairs of distinct vertices such that w(k, ℓ) = w(ℓ, k) for 1 ≤ k, ℓ ≤ N with k ≠ ℓ, and the objective is to find a permutation σ * ∈ S N such that Σ σ * = min{Σ σ : σ ∈ S N }, where for a permutation σ ∈ S N we let

Σ σ = ∑_{k=1}^{N} w(σ(k), σ(k + 1)),

with indices, as usual, considered modulo N . We will be interested primarily in the multiplicative form of the TSP, denoted by MTSP, in which the objective is to minimise not the additive cost Σ σ but instead to find σ * ∈ S N such that Π σ * = min{Π σ : σ ∈ S N }, where for a permutation σ ∈ S N we let

Π σ = ∏_{k=1}^{N} w(σ(k), σ(k + 1)).

It is clear that corresponding instances of TSP and MTSP have the same solution, and indeed one may pass from one form of the problem to the other simply by replacing the weight function by its logarithm or its exponential, as appropriate. Furthermore, the solution of TSP is unaffected by shifting the values of the weight function by a constant amount, which implies in particular that there is no loss of generality in considering the MTSP only for weight functions taking values in the range [0, 1]. It is well known that the TSP, and hence also MTSP, is NP-complete. This means that it lies in the complexity class NP and is NP-hard, which is to say that any other problem in NP can be transformed into an instance of the TSP in polynomial time. Furthermore, by considering the corresponding decision problems it can be seen that TSP and hence MTSP remain NP-complete if the weight function is assumed to take distinct values on distinct pairs. Our first result is an application of Theorem 2.1 showing that the subspace ordering problem is NP-hard.
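The passage from the additive to the multiplicative form can be made concrete; a minimal sketch (the helper name is ours), which shifts the weights and then exponentiates so that every off-diagonal entry lands in (0, 1]:

```python
import math

def tsp_to_mtsp(w):
    """Convert an additive TSP weight matrix into a multiplicative one
    with off-diagonal entries in (0, 1].  Every tour has exactly N edges,
    so minimising the product of exp(w(k, l) - w_max) over a tour is
    equivalent to minimising the sum of the original weights."""
    w_max = max(max(row) for row in w)
    N = len(w)
    return [[math.exp(w[k][l] - w_max) if k != l else 0.0
             for l in range(N)] for k in range(N)]
```

Since x ↦ exp(x − w_max) is strictly increasing, any two tours compare the same way under the additive and the transformed multiplicative cost.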

Proposition 3.1. The problem of finding an optimal ordering for collections of independent closed subspaces with pairwise distinct Friedrichs numbers is NP-hard.
Proof. It suffices to show that every instance of TSP with distinct costs can be transformed in polynomial time into a subspace ordering problem with pairwise distinct Friedrichs numbers. However, this follows straightforwardly from Theorem 2.1. Indeed, given a TSP problem on N ≥ 2 vertices we may transform it to an instance of MTSP with weight function w taking values in the range [0, 1] in O(N^2) steps. Let C = (c k,ℓ ) 1≤k,ℓ≤N be the symmetric matrix with zeros along its main diagonal and entries c k,ℓ = w(k, ℓ) for 1 ≤ k, ℓ ≤ N with k ≠ ℓ. By Theorem 2.1 there exist a Hilbert space X and independent closed subspaces M 1 , . . . , M N of X such that C is the associated Friedrichs matrix. Moreover, it is clear from the proof of Theorem 2.1 that it is possible to obtain these subspaces in polynomial time. If we find a permutation σ * ∈ S N such that r σ * = r * , then since r σ = Π σ for all σ ∈ S N the permutation σ * also solves our instance of MTSP, and hence the original TSP problem. Since TSP is known to be NP-hard, our problem is too.

The result shows that the existence of any polynomial-time algorithm which solves the subspace ordering problem in a sufficiently large number of cases implies that P = NP. In particular, we obtain the following consequence for the Greedy Algorithm.

Corollary 3.3. Correctness of the Greedy Algorithm for independent subspaces with pairwise distinct Friedrichs numbers implies that P = NP.
Proof. It is straightforward to see that if all the pairwise Friedrichs numbers are distinct then the Greedy Algorithm terminates after O(N^3) steps, where N ≥ 2 is the number of subspaces we are required to order optimally. Hence, if the Greedy Algorithm were correct for independent subspaces with pairwise distinct Friedrichs numbers, it would constitute a polynomial-time algorithm for the NP-hard problem of Proposition 3.1, and it would follow that P = NP.
Remark 3.4. The version of the Greedy Algorithm formulated in [9, Section 9] differs from ours in that it does not consider all possible greedy paths and hence runs in polynomial time even if the pairwise Friedrichs numbers are not assumed to be distinct. Note also that, as in the case of Proposition 3.1, the assumption of independence on the subspaces can be relaxed to pairwise quasi-disjointness.
Given that the question whether P = NP is a long-standing open problem, one may view Proposition 3.1 as evidence suggesting that the Greedy Algorithm does not in general lead to an optimal ordering of the subspaces in question. This is indeed the case, as the following example illustrates.
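The phenomenon can also be checked numerically. The following 4 × 4 matrix is an illustration of our own devising (not necessarily the matrix used in Example 3.5): it is symmetric with zero diagonal and pairwise distinct off-diagonal entries in (0, 1), so by Theorem 2.1 it is the Friedrichs matrix of four independent subspaces, and yet every greedy ordering is suboptimal:

```python
from itertools import permutations

# Candidate Friedrichs matrix: entry (k, l) is c(M_k, M_l).  The small
# entry c(M_0, M_2) = 0.50 lures every greedy tour into using that edge,
# while the optimal cycle 0-1-2-3 avoids both 0.50 and the large 0.99.
C = [[0.00, 0.52, 0.50, 0.55],
     [0.52, 0.00, 0.53, 0.99],
     [0.50, 0.53, 0.00, 0.54],
     [0.55, 0.99, 0.54, 0.00]]
N = len(C)

def r_value(order):
    # Product of Friedrichs numbers along the closed tour `order`.
    r = 1.0
    for j in range(N):
        r *= C[order[j]][order[(j + 1) % N]]
    return r

def greedy_tour(start):
    # Always append the unused index with the smallest Friedrichs
    # number to the current subspace (no ties: entries are distinct).
    order, unused = [start], set(range(N)) - {start}
    while unused:
        nxt = min(unused, key=lambda j: C[order[-1]][j])
        order.append(nxt)
        unused.remove(nxt)
    return order

r_G = min(r_value(greedy_tour(k)) for k in range(N))
r_star = min(r_value(list(p)) for p in permutations(range(N)))
print(r_G, r_star)  # r_G ≈ 0.1390 > r_* ≈ 0.0819
```

Note that r G ≈ 0.139 still satisfies r G < r * ^{1/2} ≈ 0.286, in line with the estimates of Sect. 4.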
Remark 3.6. Example 3.5 disproves a claim made in [9, Section 9], namely that the Greedy Algorithm always leads to an optimal ordering in the case of independent subspaces. The examples considered in [9, Section 9] involve only N = 3 subspaces, a special case in which the Greedy Algorithm performs an exhaustive search of all possible orderings (up to the direction in which they are traversed) and in particular is correct. Thus Example 3.5 is minimal in terms of the number of subspaces involved.

Sharp Estimates for the Degree of Suboptimality
Having shown in Sect. 3 that the Greedy Algorithm does not in general lead to an optimal ordering of the subspaces in the method of alternating projections, we now seek to quantify how much the result reached by the Greedy Algorithm can disagree with the optimal result. Given a collection of closed subspaces of a Hilbert space such that at least one of the pairwise Friedrichs numbers is zero, we see that for suitable orderings of the subspaces we obtain convergence after at most two steps in the method of alternating projections. Another essentially uninteresting case for asymptotic analysis is when all of the pairwise Friedrichs numbers equal 1, so that no ordering leads to a useful estimate in (1.3). If either of these two cases holds we shall say that the collection of subspaces involved is non-generic, and otherwise we call it generic.

Theorem 4.1. For any generic collection of closed subspaces M 1 , . . . , M N , N ≥ 2, of a Hilbert space we have r * ≤ r G < r * ^{1/2} .

Proof. For 1 ≤ k ≤ N let σ k ∈ S N be the permutation produced by running the Greedy Algorithm with the starting vertex σ k (1) = k and let r k = r σ k . Then certainly r * ≤ r k for 1 ≤ k ≤ N , and hence also r * ≤ r G . For 1 ≤ k, ℓ ≤ N let s k (ℓ) denote the index of the successor to M ℓ in the ordering of the subspaces determined by σ k , noting that s k (ℓ) = σ k (1) if σ −1 k (ℓ) = N . Let σ ∈ S N and for 1 ≤ k, ℓ ≤ N with k ≠ ℓ let w(k, ℓ) = c(M k , M ℓ ). Let 1 ≤ k, ℓ ≤ N . If σ −1 k (σ(ℓ)) < σ −1 k (σ(ℓ + 1)), which is to say that in the ordering determined by σ k the subspace M σ(ℓ) comes before M σ(ℓ+1) , then by definition of the Greedy Algorithm we must have w(σ(ℓ), s k (σ(ℓ))) ≤ w(σ(ℓ), σ(ℓ + 1)), while if σ −1 k (σ(ℓ)) > σ −1 k (σ(ℓ + 1)) then w(σ(ℓ + 1), s k (σ(ℓ + 1))) ≤ w(σ(ℓ), σ(ℓ + 1)).