Regular cell complexes in total positivity

This paper proves a conjecture of Fomin and Shapiro that their combinatorial model for any Bruhat interval is a regular CW complex which is homeomorphic to a ball. The model consists of a stratified space which may be regarded as the link of an open cell intersected with a larger closed cell, all within the totally nonnegative part of the unipotent radical of an algebraic group. A parametrization due to Lusztig turns out to have all the requisite features to provide the attaching maps. A key ingredient is a new, readily verifiable criterion for which finite CW complexes are regular involving an interplay of topology with combinatorics.


Introduction and terminology
This paper gives a new characterization (Theorem 1.1) of which finite CW complexes are regular, followed by the proof of a conjecture of Sergey Fomin and Michael Shapiro [FS] that stratified, totally nonnegative spaces modeling Bruhat intervals are homeomorphic to balls. Condition 2 implies that the closure poset is graded by cell dimension. Section 2 gives examples demonstrating that each of conditions 2, 3, 4, and 5 is not redundant, then proves Theorem 1.1. The fairly technical conditions of Theorem 1.1 seem to capture how the combinatorics (encoded in condition 3) substantially reduces what one must check topologically. Notably absent is the requirement that f α be bijective between the entire boundary of B dim α and a union of open cells.

1991 Mathematics Subject Classification: 05E25, 14M15, 57N60, 20F55. The author was supported by NSF grant 0500638.
Björner proved in [Bj] that any finite poset which has a unique minimal element and is thin and shellable (i.e. stronger conditions than condition 3 above) is the closure poset of a finite regular CW complex. However, this by no means guarantees that any particular CW complex with this closure poset will be regular. One goal of this paper is to explore how the combinatorial data of the closure poset may be used in conjunction with limited topological information (namely information about the codimension one cell incidences) to prove that a CW complex is regular; this in turn enables determination of its homeomorphism type directly from the combinatorics of its closure poset.
Björner asked in [Bj] for a naturally arising family of regular CW complexes whose closure posets are the intervals of Bruhat order. To this end, Fomin and Shapiro introduced stratifications of links of open cells within bigger closed cells, all within the totally nonnegative part of the unipotent radical of a semisimple, simply connected algebraic group. In [FS], they showed these had the Bruhat intervals as their closure posets and proved quite a bit about their topological structure (especially in type A). They also conjectured that these were regular CW complexes, which would imply that the spaces themselves are homeomorphic to balls. In Section 3, we prove this conjecture: Theorem 1.2. These combinatorial decompositions from [FS] are regular CW decompositions, implying the spaces are homeomorphic to balls.
Our plan is to construct a regular CW complex rather explicitly, using Theorem 1.1 to prove that it is indeed regular, then show its equivalence, at least up to homeomorphism, to the complexes of Fomin and Shapiro. It was previously open whether the decompositions of Fomin and Shapiro were CW decompositions, so we also prove that along the way. A simple consequence of the exchange axiom for Coxeter groups will allow us to confirm condition 4 of Theorem 1.1, using an argument that cannot possibly generalize to higher codimension cell incidences (see Section 3), seemingly making this a good example of the efficacy of Theorem 1.1. Now let us review terminology and a few basic facts from topology and combinatorics. See e.g. [Mu] or [St] for further background.
Definition 1.3. A CW complex is a space X together with a collection of disjoint open cells e α whose union is X such that:
(1) X is Hausdorff.
(2) For each open m-cell e α of the collection, there exists a continuous map f α : B m → X that maps the interior of B m homeomorphically onto e α and carries the boundary of B m into a finite union of open cells, each of dimension less than m.
An open cell is any space which is homeomorphic to the interior of a ball. Note that 0-cells are treated as open cells whose boundary is the empty set in the above definition. We refer to the restriction of a characteristic map f α to the boundary of B m as an attaching map. Denote the closure of a cell α by α̅. A finite CW complex is a CW complex with finitely many open cells.

Definition 1.4. A finite CW complex is regular if its characteristic maps f α may be chosen so that each is a homeomorphism from B dim α onto the closure of e α , i.e. so that each attaching map is a homeomorphism onto its image.

Definition 1.5. The closure poset of a finite CW complex is the partially ordered set (or poset) of open cells with σ ≤ τ iff σ̅ ⊆ τ̅ . By convention, we adjoin a unique minimal element 0̂ which is covered by all the 0-cells, and which may be regarded as representing the empty set.
Definition 1.6. The order complex of a finite partially ordered set is the simplicial complex whose i-dimensional faces are the chains u 0 < · · · < u i of i + 1 pairwise comparable poset elements.
Remark 1.7. The order complex of the closure poset of a finite regular CW complex K (with 0̂ removed) is the first barycentric subdivision of K, hence is homeomorphic to K. In particular, this implies that the order complex of any open interval (u, v) in the closure poset of K will be homeomorphic to a sphere S rk(v)−rk(u)−2 .
Recall that a finite, graded poset with unique minimal and maximal elements is Eulerian if each interval [u, v] has equal numbers of elements at even and odd ranks. This is equivalent to its Möbius function satisfying µ(u, v) = (−1) rk(v)−rk(u) for each pair u < v, or in other words to the order complex of each open interval (u, v) having the same Euler characteristic as that of a sphere S rk(v)−rk(u)−2 . A finite, graded poset is thin if each rank two closed interval [u, v] has exactly four elements, in other words if each such interval is Eulerian.
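The Eulerian and thinness conditions are mechanical to check on small posets. The following sketch (a toy computation of ours, not taken from the paper; the helper names mobius and interval are ours) verifies both properties for the face poset of a 2-simplex, i.e. all subsets of {1, 2, 3} ordered by inclusion, with the empty set playing the role of 0̂:

```python
from itertools import combinations

# All 8 faces of a 2-simplex, as frozensets; rank = cardinality.
elements = [frozenset(c) for r in range(4) for c in combinations([1, 2, 3], r)]
rank = {e: len(e) for e in elements}
leq = lambda u, v: u <= v  # inclusion order

def mobius(u, v):
    """Recursive Möbius function: mu(u,u) = 1, mu(u,v) = -sum_{u<=z<v} mu(u,z)."""
    if u == v:
        return 1
    return -sum(mobius(u, z) for z in elements if leq(u, z) and leq(z, v) and z != v)

# Eulerian: mu(u,v) = (-1)^{rk(v)-rk(u)} on every closed interval.
eulerian = all(mobius(u, v) == (-1) ** (rank[v] - rank[u])
               for u in elements for v in elements if leq(u, v))

def interval(u, v):
    return [z for z in elements if leq(u, z) and leq(z, v)]

# Thin: every closed interval of rank two has exactly four elements.
thin = all(len(interval(u, v)) == 4
           for u in elements for v in elements
           if leq(u, v) and rank[v] - rank[u] == 2)

print(eulerian, thin)  # prints: True True
```

The same recursive Möbius computation applies verbatim to any other small closure poset.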
For a regular cell complex, the closure poset interval (u, v) is homeomorphic to the link of u within the boundary of v, i.e. to S dim(v)−dim(u)−2 .
Remark 1.8. If each closed interval [u, v] of a finite poset is Eulerian and shellable, then each open interval has order complex homeomorphic to a sphere S rk(v)−rk(u)−2 , implying condition 3 of Theorem 1.1. Conversely, if each open interval (u, v) has order complex homeomorphic to a sphere S rk(v)−rk(u)−2 , then the poset is Eulerian, but not necessarily shellable.
In the application developed in the second half of the paper, the closure posets will consist of the intervals in Bruhat order. These were proven to be shellable and Eulerian by Björner and Wachs in [BW], hence meet condition 3 of Theorem 1.1.
Remark 1.9. Lusztig and Rietsch have also introduced a combinatorial decomposition for the totally nonnegative part of a flag variety (cf. [Lu] and [Ri]). Lauren Williams conjectured in [Wi] that this is a regular CW complex. It seems quite plausible that Theorem 1.1 could also be a useful ingredient for proving that conjecture.
Rietsch determined the closure poset of this decomposition in [Ri]. Williams proved in [Wi] that this poset is shellable and thin, hence meets condition 3 of Theorem 1.1. Recently, Postnikov, Speyer and Williams proved in [PSW] for the special case of the Grassmannian that its decomposition is a CW decomposition; Rietsch and Williams subsequently generalized this to all flag varieties in [RW]. In each case, it remains open whether these CW complexes are regular. Also still open is the question of whether the spaces themselves are homeomorphic to balls, though these papers show that the Euler characteristic is what one would expect in order for these CW complexes to be regular, providing further evidence for Williams' conjecture.

A criterion for determining whether a finite CW complex is regular
Before proving Theorem 1.1, we first give a few examples demonstrating the need for its various hypotheses. The CW complex consisting of an open 2-cell whose entire boundary is attached to a single 0-cell does not have closure poset graded by dimension, so it violates condition 2 of Theorem 1.1. Condition 2 is also designed to preclude examples such as the following: take a CW complex whose 1-skeleton is the simplicial complex comprised of the faces {v 1 , v 2 , v 3 , e 1,2 , e 1,3 , e 2,3 }, together with a 2-cell whose attaching map sends a closed arc of its boundary (not just a point) to v 2 and maps the remainder of its boundary homeomorphically onto the rest of the 1-skeleton.
Remark 2.1. In this case, one may choose a different characteristic map which is a homeomorphism even at the boundary. Whether or not this can always be done for finite CW complexes with characteristic maps satisfying conditions 1, 3, 4, and 5 seems subtle at best, in light of examples such as the Alexander horned ball: a 3-ball which cannot be contracted to a point without changing the homeomorphism type of the complement.
The next example is a non-regular CW complex satisfying conditions 1, 2, 4, and 5 of Theorem 1.1, but violating condition 3.
One might ask whether the connectedness part of requirement 3 is redundant, at least if one requires that the closure poset be Eulerian. Closure posets do have the property that open intervals (0̂, u) with rk(u) > 2 are connected, by virtue of the fact that the image of a continuous map from a sphere S d with d > 0 is connected. However, there are closure posets of CW complexes which are Eulerian and have disconnected intervals (u, v) with rk(v) − rk(u) > 2 [Th]. Nonetheless, it is still plausible that condition 3 in Theorem 1.1 could be replaced by the requirement that the closure poset be Eulerian.
Next is a non-regular CW decomposition of RP 2 satisfying conditions 1, 2, 3, and 5 of Theorem 1.1, but failing condition 4.

Example 2.3. Let K be the CW complex having as its 1-skeleton the simplicial complex with maximal faces e 1,2 , e 2,3 , e 1,3 . Additionally, K has a single 2-cell whose boundary is attached by going twice around the 1-cycle (v 1 , v 2 , v 3 ). Notice that this CW decomposition of RP 2 has the same closure poset as a 2-simplex, but the attaching map for the 2-cell is a 2-to-1 map onto the lower dimensional cells.
Finally, we give an example (due to David Speyer) of a CW complex with characteristic maps meeting conditions 1, 2, 3 and 4, but failing condition 5, though this CW complex is regular with respect to a different choice of characteristic maps. David Speyer also helped with the formulation of condition 5.
Example 2.4. Let the 2-skeleton be the boundary of a pyramid. Now attach a 3-cell which is a triangular prism by sending an entire edge of one of the rectangular faces to the unique vertex of degree 4 in the pyramid, otherwise mapping the boundary of the prism homeomorphically to the boundary of the pyramid.
We implicitly use condition 1 in the next proposition, in that the notion of closure poset does not even really make sense without it.
Proposition 2.5. Conditions 1 and 2 of Theorem 1.1 imply that the closure poset is graded by cell dimension.
Proof. Consider any e ρ ⊆ e̅ σ with dim(e σ ) − dim(e ρ ) > 1. Choose a point p in e ρ expressible as f σ (x) for some x ∈ S dim(e σ )−1 . If we take an infinite sequence of smaller and smaller open sets about x, then by condition 2 each must include a point sent by f σ to an open cell of higher dimension than e ρ ; finiteness of the CW complex then implies some such open cell e τ is mapped into infinitely often, implying p lies in the closure of e τ . Thus, e ρ < e σ with dim(e σ ) − dim(e ρ ) > 1 implies there exists e τ with e ρ < e τ < e σ .

Now to the proof of Theorem 1.1.
Proof. It is clear that conditions 1, 2, and 4 are each necessary. The necessity of 3 follows easily from the fact that a regular CW complex is homeomorphic to the order complex of its closure poset. To see that 5 is also necessary, note that if K is regular with respect to the characteristic maps {f α }, then e σ ⊆ e̅ τ implies that f σ factors as f σ = f τ ◦ ι, where ι = f τ −1 ◦ f σ is the desired continuous inclusion map. Now to the sufficiency of these five conditions. We must prove that each attaching map f σ is a homeomorphism from ∂(B dim σ ) to the set of open cells comprising e̅ σ \ e σ . Since K is a CW complex in which the closure of each cell is a union of cells, f σ must be continuous and surjective onto a union of lower dimensional cells, leaving us to prove injectivity of f σ and continuity of f σ −1 . However, once we prove injectivity, we may use the fact that any bijective, continuous map from a compact set to a Hausdorff space is a homeomorphism to conclude continuity of the inverse, so it suffices to prove injectivity.
If the attaching maps for K were not all injective, then we could choose open cells e ρ , e σ with dim(e σ ) − dim(e ρ ) as small as possible such that e ρ ⊆ e̅ σ and f σ restricted to the preimage of e ρ is not 1-1. Then we could choose a point z ∈ e ρ with |f σ −1 (z)| = k for some k > 1. By condition 4, dim(e σ ) − dim(e ρ ) must be at least 2. We will now show that the open interval (e ρ , e σ ) in the closure poset has at least k connected components, which by condition 3 forces [e ρ , e σ ] to have rank exactly two. The point is to show for each point p i ∈ f σ −1 (z) that there is an open cell e τ i ⊆ e̅ σ such that p i ∈ ι(B dim e τ i ), and then to show for distinct points p i , p j ∈ f σ −1 (z) that the corresponding open cells e τ i , e τ j are incomparable in the closure poset. To prove the first part, take an infinite sequence of smaller and smaller balls about p i , each of which by condition 2 must intersect f σ −1 (e τ ) for some e τ < e σ with dim e σ − dim e τ = 1; by finiteness of K, the preimage of some such e τ i is hit infinitely often, implying p i ∈ f σ −1 (e̅ τ i ), hence e ρ ⊆ e̅ τ i . We prove next that the cells whose closures contain the various points in f σ −1 (z) must belong to distinct components of (e ρ , e σ ), yielding the desired k components in the open poset interval. Suppose two such cells were comparable, say e τ i ≤ e τ j with p i ≠ p j ; then e̅ τ i ⊆ e̅ τ j , and hence p i , p j ∈ f σ −1 (e̅ τ j ), contradicting the fact that f τ j restricted to the preimage of e ρ is a homeomorphism (which follows from condition 5 together with the minimality of dim(e σ ) − dim(e ρ )). Thus, (e ρ , e σ ) has no comparabilities between cells whose preimages under f σ have closures containing distinct points of f σ −1 (z); in particular, (e ρ , e σ ) has at least k connected components, hence must be rank two.
Finally, we show that (e ρ , e σ ) has at least 2k elements, forcing k to be 1 by the thinness requirement in condition 3. This will contradict our assumption that k was strictly larger than 1. Lemma 2.6 provides the desired 2k elements by showing that for each of the k preimages of z, there are at least two open cells e τ in (e ρ , e σ ) with f σ −1 (e̅ τ ) containing that particular preimage of z.
Lemma 2.6. Suppose a CW complex K meets the conditions of Theorem 1.1. Then for each open cell e τ and each x ∈ ∂(B dim e τ ) with f τ (x) in an open cell e ρ ⊆ e̅ τ satisfying dim e τ − dim e ρ = 2, there exist distinct open cells e σ 1 , e σ 2 with dim e σ i = 1 + dim e ρ and x ∈ f τ −1 (e̅ σ i ) for i = 1, 2.
Proof. Condition 2 ensures that the boundary of B dim e τ does not include any open (dim e τ − 1)-ball all of whose points are mapped by f τ into e ρ . In particular, each such ball containing x includes points not sent by f τ to e ρ . Since K is finite, there must be some particular cell e σ 1 such that points arbitrarily close to x within the boundary of B dim e τ map into e σ 1 , implying x ∈ f τ −1 (e̅ σ 1 ), with dim e ρ < dim e σ 1 < dim e τ .
Thus, e ρ ⊆ e σ 1 and dim e σ 1 = dim e ρ + 1, just as needed. Now let us find a suitable e σ 2 . Here we use the fact that removing the boundary of e σ 1 from a sufficiently small ball B dim eτ −1 about x yields a disconnected region, only one of whose components may include points from e σ 1 . This forces the existence of the requisite open cell e σ 2 which includes points of the other component and has x in its closure.
The following will enable us to build CW complexes inductively.
Theorem 2.7 (Mu, Theorem 38.2). Let Y be a CW complex of dimension at most p − 1, let B α be a topological sum of closed p-balls, and let g : Bd B α → Y be a continuous map. Then the adjunction space X formed from Y and B α by means of g is a CW complex, and Y is its (p − 1)-skeleton.
The next result, which is easy to prove, explains the general manner in which Theorem 1.1 will be used later in the paper. In particular, it singles out conditions 3 and 4 to be checked separately once a suitable regular CW complex K and continuous function f have been obtained.

Proof. The restrictions of f to a collection of closures of cells of the (p − 1)-skeleton give the characteristic maps needed to prove that the (p − 1)-skeleton of f (K) is a finite CW complex. Now we use Theorem 2.7 to attach the p-cells and deduce that f (K) is a finite CW complex with characteristic maps given by the various restrictions of f .
Conditions 1 and 2 are immediate from our assumptions on f . If there are two open cells σ 1 , σ 2 in K (of dimension at most p − 1) with identical image under f , then the fact that σ 1 and σ 2 are both regular with isomorphic closure posets gives a homeomorphism from σ 1 to σ 2 preserving cell structure, namely the map sending each x to the unique y with f (y) = f (x). This allows us to use the embedding of either σ 1 or σ 2 in the closure of any higher cell of K to deduce condition 5.
3. An application: proof of Theorem 1.2

We now verify the hypotheses of Theorem 1.1 for the stratified space introduced by Fomin and Shapiro in [FS], so as to prove their conjecture that this is a regular CW complex. Our strategy will be to construct a regular CW complex K upon which Lusztig's map will act in a manner that meets all the requirements of Corollary 2.8, and then to show that conditions 3 and 4 of Theorem 1.1 also hold, so as to deduce that f (K) is a regular CW complex with the restrictions of f to the various closed cells of K giving the characteristic maps. Condition 3 is well-known to hold for Bruhat order, and condition 4 will follow easily from the following basic property of Coxeter groups:

Lemma 3.1. Given a reduced word s i 1 s i 2 · · · s i r for a Coxeter group element w, any two distinct subwords of length r − 1 which are both themselves reduced must give rise to distinct Coxeter group elements.
We include a short proof of this vital fact for completeness' sake.

Proof. Suppose that deleting s i j yields the same Coxeter group element as deleting s i k , for some pair 1 ≤ j < k ≤ r. This implies s i j s i j+1 · · · s i k−1 = s i j+1 · · · s i k−1 s i k . Multiplying on the right by s i k yields s i j s i j+1 · · · s i k−1 s i k = s i j+1 · · · s i k−1 , so that the segment s i j · · · s i k admits a strictly shorter expression, contradicting the fact that the original expression was reduced.
Notice that the statement of the above lemma no longer holds if we replace r − 1 by r − i for i > 1, as indicated by the example of the reduced word s 1 s 2 s 1 in the symmetric group on 3 letters, where s i denotes the adjacent transposition (i, i + 1) swapping the letters i and i + 1 (or more generally the i-th simple reflection of a Coxeter group W ). For this reason, it really seems to be quite essential to our proof of the conjecture of Fomin and Shapiro that Theorem 1.1 will enable us to focus on codimension one cell incidences.
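Both Lemma 3.1 and the failure of its analogue for deleting more than one letter can be checked by direct computation in a symmetric group. The following Python sketch is our own illustration (the helpers apply_word, is_reduced, and delete are ours); it verifies the lemma for a reduced word of the longest element of S 4 and reproduces the s 1 s 2 s 1 counterexample:

```python
from itertools import combinations

def apply_word(word, n):
    """Multiply out simple transpositions: s_i swaps positions i, i+1."""
    w = list(range(1, n + 1))
    for i in word:
        w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

def inversions(w):
    return sum(1 for a, b in combinations(range(len(w)), 2) if w[a] > w[b])

def is_reduced(word, n):
    # A word is reduced iff its length equals the inversion number of its product.
    return inversions(apply_word(word, n)) == len(word)

def delete(word, positions):
    return tuple(x for j, x in enumerate(word) if j not in positions)

# A reduced word for the longest element of S_4.
word = (1, 2, 3, 1, 2, 1)

# Lemma 3.1: the reduced subwords obtained by deleting a single letter
# yield pairwise distinct group elements.
one_deleted = [delete(word, {j}) for j in range(len(word))]
images = [apply_word(sw, 4) for sw in one_deleted if is_reduced(sw, 4)]
distinct = len(images) == len(set(images))

# The analogue fails for deleting two letters: in s1 s2 s1, two different
# length-one reduced subwords give the same element s1.
two_deleted = [delete((1, 2, 1), set(p)) for p in combinations(range(3), 2)]
els = [apply_word(sw, 3) for sw in two_deleted if is_reduced(sw, 3)]
print(distinct, len(els), len(set(els)))  # prints: True 3 2
```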
Recall from [BB] the following terminology and basic properties of Coxeter groups. An expression for a Coxeter group element w is a way of writing it as a product of simple reflections s i 1 · · · s ir ; an expression is reduced when it minimizes r among all expressions for w, in which case r is called the length of w. Breaking now from standard terminology, we sometimes speak of the wordlength of a (not necessarily reduced) expression s i 1 · · · s ir , by which we again mean r.
The Bruhat order is the partial order on the elements of a Coxeter group W having u ≤ v iff there are reduced expressions r(u), r(v) for u, v with r(u) a subexpression of r(v). Bruhat order is also the closure order on the cells B w = B − wB − of the Bruhat stratification of the reductive algebraic group having W as its Weyl group.
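For a small Coxeter group, the subexpression definition can be computed exhaustively. The sketch below (illustrative Python of ours, with a hand-chosen reduced word for each element of S 3 ; it relies on the standard subword property, by which testing subwords of a single fixed reduced word of v suffices) builds Bruhat order on S 3 :

```python
from itertools import combinations

def apply_word(word, n=3):
    """Multiply out simple transpositions s_i (swap positions i, i+1)."""
    w = list(range(1, n + 1))
    for i in word:
        w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

def inversions(w):
    return sum(1 for a, b in combinations(range(len(w)), 2) if w[a] > w[b])

# A hand-chosen reduced word for each element of S_3.
reduced_word = {(1, 2, 3): (), (2, 1, 3): (1,), (1, 3, 2): (2,),
                (2, 3, 1): (1, 2), (3, 1, 2): (2, 1), (3, 2, 1): (1, 2, 1)}
for w, rw in reduced_word.items():
    assert apply_word(rw) == w and inversions(w) == len(rw)

def bruhat_leq(u, v):
    """Subword property: u <= v iff u arises as the product of some subword
    of any one fixed reduced word for v."""
    rw = reduced_word[v]
    return any(apply_word(tuple(rw[j] for j in S)) == u
               for r in range(len(rw) + 1)
               for S in combinations(range(len(rw)), r))

w0 = (3, 2, 1)
assert all(bruhat_leq(u, w0) for u in reduced_word)

# Covering relations: comparable pairs one rank (inversion count) apart.
covers = [(u, v) for u in reduced_word for v in reduced_word
          if bruhat_leq(u, v) and inversions(v) - inversions(u) == 1]
print(len(covers))  # prints: 8
```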
Given a (not necessarily reduced) expression s i 1 · · · s i d for a Coxeter group element w, define a braid-move to be the replacement of a string of consecutive simple reflections s i s j · · · by s j s i · · · yielding a new expression for w by virtue of a braid relation (s i s j ) m(i,j) = 1 with i ≠ j. Define a nil-move to be the replacement of a substring s i s i appearing in consecutive positions by 1. We call braid relations with m(i, j) = 2 commutation relations and those with m(i, j) > 2 long braid relations.
Theorem 3.2 (BB, Theorem 3.3.1). Let (W, S) be a Coxeter system, and consider w ∈ W .
(1) Any expression s i 1 s i 2 · · · s i d for w can be transformed into a reduced expression for w by a sequence of nil-moves and braid-moves.
(2) Every two reduced expressions for w can be connected via a sequence of braid-moves.
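Part (2) of Theorem 3.2 can be visualized as the connectivity of a graph on reduced words. The following Python sketch (our own toy computation; reduced_words and braid_neighbors are names of ours) enumerates all reduced words of the longest element of S 4 via right descents and checks that commutation moves and long braid moves connect them:

```python
def reduced_words(w):
    """All reduced words for a permutation w (one-line notation), built
    recursively from right descents: positions i with w(i) > w(i+1)."""
    if w == tuple(sorted(w)):
        return [()]
    out = []
    for i in range(1, len(w)):
        if w[i - 1] > w[i]:
            shorter = list(w)
            shorter[i - 1], shorter[i] = shorter[i], shorter[i - 1]
            out.extend(rw + (i,) for rw in reduced_words(tuple(shorter)))
    return out

words = set(reduced_words((4, 3, 2, 1)))  # longest element of S_4
print(len(words))  # prints: 16

def braid_neighbors(word):
    """Words reachable by one commutation move or one long braid move."""
    nbrs = []
    for j in range(len(word) - 1):
        a, b = word[j], word[j + 1]
        if abs(a - b) >= 2:  # commutation relation: s_a s_b = s_b s_a
            nbrs.append(word[:j] + (b, a) + word[j + 2:])
    for j in range(len(word) - 2):
        a, b, c = word[j:j + 3]
        if a == c and abs(a - b) == 1:  # long braid: s_a s_b s_a = s_b s_a s_b
            nbrs.append(word[:j] + (b, a, b) + word[j + 3:])
    return nbrs

# Theorem 3.2(2): the braid-move graph on reduced words is connected.
seen, stack = set(), [next(iter(words))]
while stack:
    u = stack.pop()
    if u not in seen:
        seen.add(u)
        stack.extend(v for v in braid_neighbors(u) if v in words)
print(seen == words)  # prints: True
```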
Recall that any expression s i 1 · · · s i d may be represented more compactly by its word, namely by (i 1 , . . . , i d ). Let us say that an expression is stuttering if it admits a nil-move and call it non-stuttering otherwise. An expression is commutation equivalent to a stuttering expression if a series of commutation relations may be applied to it to obtain a stuttering expression. We sometimes call simple reflections in an expression letters. See [Hu] or [BB] for further background on Coxeter groups.
Associated to any Coxeter system (W, S) is a 0-Hecke algebra, with generators {x i | i ∈ S} and the following relations: for each braid relation s i s j · · · = s j s i · · · in W , there is an analogous relation x i x j · · · = x j x i · · · , again of degree m(i, j); there are also relations x i 2 = −x i for each i ∈ S. We will instead need the relations x i 2 = x i , but this sign change is inconsequential to all of our proofs, so we abuse language and call the algebra with relations x i 2 = x i the 0-Hecke algebra of W . This variation on the usual 0-Hecke algebra has previously arisen in work on Schubert polynomials (see e.g. [FSt] or [Ma]). We refer to the substitution x i 2 → x i as a modified nil-move. It still makes sense to speak of reduced and non-reduced expressions, and many properties (such as Lemma 3.1 and Theorem 3.2) carry over to the 0-Hecke algebra by virtue of its having the same braid moves; there are subtle differences too, though, resulting e.g. from the fact that cancellation is no longer available.
It is natural (and will be helpful) to associate a Coxeter group element w(x i 1 · · · x i d ) to any 0-Hecke algebra expression x i 1 · · · x i d . This is done by applying braid moves and modified nil-moves to obtain a new expression x j 1 · · · x js such that (j 1 , . . . , j s ) is a reduced word, then letting w(x i 1 · · · x i d ) = s j 1 · · · s js . The fact that this does not depend on the choice of moves is immediate from the upcoming intrinsic description for w(x i 1 · · · x i d ) in Proposition 3.3 and Corollary 3.4.
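One concrete model for w(x i 1 · · · x i d ) uses the standard action of the 0-Hecke generators on the group itself: x i sends w to ws i when this increases length and fixes w otherwise. The Python sketch below (our own illustration; hecke_element and the other helper names are ours) checks the relations x i 2 = x i and x 1 x 2 x 1 = x 2 x 1 x 2 as operators on S 3 , and that a non-reduced expression yields the same Coxeter group element as its reduction:

```python
from itertools import permutations
from functools import reduce

def length(w):
    return sum(1 for a in range(len(w)) for b in range(a + 1, len(w)) if w[a] > w[b])

def x(i, w):
    """0-Hecke generator x_i acting on one-line notation: right-multiply
    by s_i if that increases length, otherwise do nothing."""
    v = list(w)
    v[i - 1], v[i] = v[i], v[i - 1]
    v = tuple(v)
    return v if length(v) > length(w) else w

S3 = list(permutations((1, 2, 3)))

# x_i^2 = x_i and the braid relation x1 x2 x1 = x2 x1 x2, as operators.
assert all(x(i, x(i, w)) == x(i, w) for i in (1, 2) for w in S3)
assert all(x(1, x(2, x(1, w))) == x(2, x(1, x(2, w))) for w in S3)

def hecke_element(word):
    """w(x_{i_1} ... x_{i_d}): act with the letters, in order, on the identity."""
    return reduce(lambda w, i: x(i, w), word, (1, 2, 3))

# The non-reduced expression x1 x1 x2 x1 and the reduced x1 x2 x1 agree.
print(hecke_element((1, 1, 2, 1)), hecke_element((1, 2, 1)))  # prints: (3, 2, 1) (3, 2, 1)
```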
3.1. Totally nonnegative spaces modeling Bruhat intervals. Recall that the totally nonnegative part of SL n (R) consists of the matrices in SL n (R) whose minors are all nonnegative. Motivated by connections to canonical bases, Lusztig generalized this dramatically in [Lu] as follows. The totally nonnegative part of a reductive algebraic group G over C which is split over R is the semigroup generated by the sets {x i (t) | t ∈ R >0 , i ∈ I}, {y i (t) | t ∈ R >0 , i ∈ I}, and {t ∈ T | χ(t) > 0 for all χ ∈ X * (T )}, for I indexing the simple roots. In type A, x i (t) = I n + tE i,i+1 , namely the n by n identity matrix modified to have the value t in position (i, i + 1), and y i (t) = I n + tE i+1,i . In any type, x i (t) = exp(te i ) and y i (t) = exp(tf i ) for {e i , f i | i ∈ I} the Chevalley generators; in other words, if we let φ i be the homomorphism of SL 2 into G associated to the i-th simple root, then x i (t) is the image under φ i of the upper unitriangular matrix with off-diagonal entry t, and y i (t) is the image of its transpose. Let B + , B − be opposite Borel subgroups with N + (or simply N) and N − their unipotent radicals. In type A, we may choose B + , B − to consist of the upper triangular and lower triangular matrices in GL(n), respectively. In this case, N + , N − are the matrices in B + , B − with all diagonal entries equal to one. The totally nonnegative part of N + , denoted Y , is the submonoid generated by {x i (t i ) | i ∈ I, t i ∈ R >0 }. Let W be the Weyl group of G. One obtains a combinatorial decomposition of Y by taking the usual Bruhat decomposition of G and intersecting each open Bruhat cell with Y . We follow [Lu] in using the standard topology on R throughout this paper. See e.g. [Hu2] for further background on algebraic groups.
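In type A these definitions are concrete enough to test numerically. The sketch below (an illustration of ours, using pure-Python helpers x_gen, matmul, and det, all hypothetical names) forms a point of Y in SL 3 as a product x 1 (t 1 )x 2 (t 2 )x 1 (t 3 ) with positive parameters and confirms that all of its minors are nonnegative:

```python
from itertools import combinations

def x_gen(i, t, n=3):
    """x_i(t) = I + t*E_{i,i+1}, the type A generator, as a list-of-lists matrix."""
    m = [[float(r == c) for c in range(n)] for r in range(n)]
    m[i - 1][i] = t
    return m

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(len(b))) for c in range(len(b[0]))]
            for r in range(len(a))]

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

# A sample point of Y: x_1(0.7) x_2(1.3) x_1(2.1) in SL_3.
g = matmul(matmul(x_gen(1, 0.7), x_gen(2, 1.3)), x_gen(1, 2.1))

# Total nonnegativity: every minor of g is >= 0 (up to rounding).
minors = [det([[g[r][c] for c in cols] for r in rows])
          for k in (1, 2, 3)
          for rows in combinations(range(3), k)
          for cols in combinations(range(3), k)]
all_nonneg = all(d >= -1e-12 for d in minors)
print(all_nonneg)  # prints: True
```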
Lusztig proved, for (i 1 , . . . , i d ) any reduced word for w, that Y o w consists exactly of the elements x i 1 (t 1 ) · · · x i d (t d ) with each t i ∈ R >0 , and that the closure Y w of Y o w is the union of the cells Y o u for u ≤ w in Bruhat order on W . Fomin and Shapiro suggested for each u < w in Bruhat order that the link of the open cell Y o u within Y w should serve as a good geometric model for the Bruhat interval (u, w], namely as a naturally arising regular CW complex with (u, w] as its closure poset. This required introducing a suitable notion of link, i.e. of lk(u, w), before they could even begin to analyze it.
To this end, Fomin and Shapiro introduced a projection map π u : Y ≥u → Y o u which may be defined as follows. Letting N(u) = u −1 Bu ∩ N and N u = B − uB − ∩ N, Fomin and Shapiro show that each x ∈ Y ≥u has a unique expression as x = x ′ x ′′ with x ′ ∈ N u and x ′′ ∈ N(u). In light of results in [FS], π u (x) may be defined as equalling this first factor x ′ (see [FS, p. 11]). Thus, points of lk(u, w) belong to cells Y u ′ for u < u ′ ≤ w, and closure relations are inherited from Y . Fomin and Shapiro proved that each cell in lk(u, w) is indeed homeomorphic to R n for some n, i.e. is an open cell. We will use the same notation for cells in Y and in lk(u, w), letting context dictate which is meant.
We work mainly with a more geometric description of lk(u, w), whose equivalence to the notion of Fomin and Shapiro will be justified later. Specifically, we will prove that (R d ≥0 ∩ S d−1 1 )/ ∼ will serve as lk(1, w), where R d ≥0 ∩ S d−1 1 may be regarded as a unit sphere about the origin and ∼ is an equivalence relation which we will prove identifies exactly those points having the same image under Lusztig's map; this will enable us to define lk(u, w) as the link of the cell indexed by u within lk(1, w).

Regularity of f ((R d ≥0 ∩ S d−1 1 )/ ∼)

The restrictions of Lusztig's map f will serve, after suitably modifying the preimage, to provide the characteristic maps. One of the biggest challenges will be proving sphericity of the preimages of the attaching maps, i.e. of (R d ≥0 ∩ S d−1 1 )/ ∼ and its various cell closures. This is done in the next section, where ∼ is also defined carefully. First we sketch the idea of how ∼ will be constructed and prove that the image of Lusztig's map is a regular CW complex homeomorphic to a ball, using key properties of ∼ while deferring numerous technical details regarding the exact construction of ∼ to the next section.
By regions or faces in R d ≥0 ∩ S d−1 1 we mean the subsets R S , for S ⊆ {1, . . . , d}, consisting of the points whose coordinates indexed by S are strictly positive and whose remaining coordinates equal 0. In upcoming combinatorial arguments all that will matter is which parameters are nonzero, so we often suppress parameters and associate the expression x i j 1 · · · x i j k to the region R {j 1 ,...,j k } . To keep track of the positions of the nonzero parameters, we sometimes also include 1's as placeholders. The relations x i (t 1 )x i (t 2 ) = x i (t 1 + t 2 ), which hold for all t 1 , t 2 ∈ R ≥0 and any x i , yield the modified nil-moves x i x i → x i . The braid moves of the Coxeter group W also hold, as explained shortly, leading us to regard these x-expressions as expressions in the 0-Hecke algebra associated to W .
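The matrix identity behind the modified nil-move is a one-line computation, since E i,i+1 squares to zero. A quick numerical check (our own illustration; x_gen and matmul are hypothetical helper names):

```python
def x_gen(i, t, n=3):
    """x_i(t) = I + t*E_{i,i+1} in type A, as a list-of-lists matrix."""
    m = [[float(r == c) for c in range(n)] for r in range(n)]
    m[i - 1][i] = t
    return m

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(len(b))) for c in range(len(b[0]))]
            for r in range(len(a))]

# x_i(t1) x_i(t2) = x_i(t1 + t2): the identity behind the modified nil-move
# x_i x_i -> x_i (checked here for i = 2 in SL_3).
t1, t2 = 0.4, 2.5
lhs = matmul(x_gen(2, t1), x_gen(2, t2))
rhs = x_gen(2, t1 + t2)
nil_move_holds = all(abs(lhs[r][c] - rhs[r][c]) < 1e-12
                     for r in range(3) for c in range(3))
print(nil_move_holds)  # prints: True
```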
Proposition 3.3. Lusztig's map f sends each region R {j 1 ,...,j k } into the cell Y o w for w = w(x i j 1 · · · x i j k ), for w as defined just prior to Section 3.1.
Proof. This follows from Theorem 3.2, which ensures the existence of a series of braid moves and modified nil-moves which may be applied to x i j 1 · · · x i j k mapping the points of R S onto the points of some cell R T indexed by a reduced expression, sending each x ∈ R S to some y ∈ R T with the property that f (x) = f (y).
Corollary 3.4. The Coxeter group element w(x i j 1 · · · x i j k ) does not depend on the series of braid moves and modified nil-moves used.
Proof. Any series of braid moves and modified nil-moves leading to a reduced expression may be used in the above proof, but the end result is determined by the map f .
Corollary 3.5. If an x-expression A is a subexpression of an x-expression B, then w(A) ≤ w(B) in Bruhat order.

Proof. Notice that A is obtained from B by setting some parameters to 0, hence the open cell to which A maps is contained in the closure of the open cell to which B maps. But Bruhat order is the closure order on cells.
To strike a balance between convenience for our argument and consistency of notation with [FS], we make the non-standard convention of letting R d ≥0 ∩ S d−1 1 denote the intersection of R d ≥0 with the hyperplane in which the d coordinates sum to 1.
Recall from [Lu], [FZ] the relations

(1) x i (t 1 )x j (t 2 )x i (t 3 ) = x j (t 2 t 3 /(t 1 + t 3 ))x i (t 1 + t 3 )x j (t 1 t 2 /(t 1 + t 3 ))

for any s i , s j with (s i s j ) 3 = 1 and any t 1 + t 3 ≠ 0. These are not difficult to verify directly. In [Lu], it is proven that there are more general relations of a similar nature for each braid relation (s i s j ) m(i,j) = 1 of W . These relations will hold whenever the parameters involved are all nonzero, since the subword upon which we apply the relation will be reduced. Additionally, notice for any braid relation (s i s j ) m(i,j) = 1 and any t 2 , . . . , t m(i,j) that x i (0)x j (t 2 )x i (t 3 ) · · · = x j (t ′ 1 )x i (t ′ 2 ) · · · with t ′ 1 = t 2 , t ′ 2 = t 3 , . . . , and t ′ m(i,j) = 0. Lemmas 3.26 and 3.37 will enable us to carry much farther this idea of defining braid relations even when some parameters are 0.
Lemma 3.6. The new parameters after applying a braid relation will have the same sum as the old ones; moreover, this preservation of sum refines to the subset of parameters for any fixed x i .
Proof. This follows from the description of x i (t) as exp(te i ), simply by comparing the linear terms in the products of exponentials appearing on the two sides of a braid relation.
Remark 3.7. Lemma 3.6 justifies that our description of R d ≥0 ∩ S d−1 1 may be used even after a change of coordinates due to a braid relation.
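Both relation (1) and the sum preservation of Lemma 3.6 can be confirmed numerically in SL 3 . In the sketch below (our illustration; the explicit form of relation (1) used here is the one commonly stated in the literature, so we verify it numerically rather than assume the transcription), the new x 1 parameter is t 1 + t 3 and the new x 2 parameters sum to t 2 :

```python
def x_gen(i, t, n=3):
    """x_i(t) = I + t*E_{i,i+1} in type A, as a list-of-lists matrix."""
    m = [[float(r == c) for c in range(n)] for r in range(n)]
    m[i - 1][i] = t
    return m

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(len(b))) for c in range(len(b[0]))]
            for r in range(len(a))]

def prod(ms):
    out = ms[0]
    for m in ms[1:]:
        out = matmul(out, m)
    return out

# Relation (1) for (s_i s_j)^3 = 1, with t1 + t3 != 0:
# x_i(t1) x_j(t2) x_i(t3) = x_j(t2*t3/(t1+t3)) x_i(t1+t3) x_j(t1*t2/(t1+t3))
t1, t2, t3 = 0.6, 1.1, 0.8
p1, p2, p3 = t2 * t3 / (t1 + t3), t1 + t3, t1 * t2 / (t1 + t3)
lhs = prod([x_gen(1, t1), x_gen(2, t2), x_gen(1, t3)])
rhs = prod([x_gen(2, p1), x_gen(1, p2), x_gen(2, p3)])
relation_holds = all(abs(lhs[r][c] - rhs[r][c]) < 1e-12
                     for r in range(3) for c in range(3))

# Lemma 3.6: parameter sums are preserved, refined by letter.
sums_preserved = abs(p2 - (t1 + t3)) < 1e-12 and abs((p1 + p3) - t2) < 1e-12
print(relation_holds, sums_preserved)  # prints: True True
```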
Given a point (t 1 , . . . , t d ) ∈ R d ≥0 ∩ S d−1 1 , its associated word is (i j 1 , . . . , i j k ), where j 1 < · · · < j k are the indices of the nonzero entries in (t 1 , . . . , t d ). We will soon define an equivalence relation ∼ on R d ≥0 ∩ S d−1 1 using the following idea: if the word associated to (t 1 , . . . , t d ) is not reduced, then we may apply commutation moves and braid moves to it, causing a coordinate change to new coordinates (u 1 , . . . , u d ) and enabling a substitution (u r , u s ) → (u r + u s , 0). In some cases, we will then say (u 1 , . . . , u d ) ∼ (u ′ 1 , . . . , u ′ d ) with u ′ i = u i for all i ≠ r, s; however, for each non-reduced word, we only choose one such way of identifying points of that cell with ones having strictly fewer nonzero parameters. Remark 3.35 will specify which of the possible such equivalences are actually imposed by ∼.
Remark 3.8. Additional equivalences will hold by transitivity, but it will be quite important to the collapsing argument that we only specify as many identifications as are justified by the collapses we perform.
Once a sufficient series of braid relations has been applied to the word associated to (t 1 , . . . , t d ) to cause two copies of the same simple reflection s i to appear in consecutive positions j and j + 1, then the point expressed in the new coordinates as (u 1 , . . . , u d ) will be identified with the points having (u j , u j+1 ) replaced by (u j + u j+1 , 0) and (0, u j + u j+1 ). When we collapse an illegal region requiring long braid relations, this will necessitate a change of coordinates within the closure of that region. We apply the same series of braid relations to all points in (the closure of) an illegal region being collapsed, enabling a modified nil-move which allows us to collapse entire level curves to pairs of boundary points which are thereby identified. Thus, ∼ will be described by a series of region collapses on a sphere, each of which is shown to preserve sphericity and regularity.

Now we verify condition 4 of Theorem 1.1. The points in a cell boundary (i.e. the preimage of one of the attaching maps) are obtained by letting parameters go to 0. As mentioned already, this preimage takes the form ∂(R d ≥0 ∩ S d−1 1 )/ ∼ where ∼ is an equivalence relation that will be defined carefully later. The proof of the next lemma will use the following properties of ∼, which will follow easily from how ∼ is defined later: (1) Each point p ∈ ∂(R d ≥0 ∩ S d−1 1 ) whose x-expression x(p) is not reduced is identified by ∼ with a point having more parameters set to 0, i.e. whose x-expression is a subexpression of x(p).
(2) If two points p, q satisfy p ∼ q, then w(x(p)) = w(x(q)), i.e. p and q have the same associated Coxeter group element.
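The identification of (u_j, u_{j+1}) with (u_j + u_{j+1}, 0) and (0, u_j + u_{j+1}) rests on the one-parameter subgroup identity x_i(a)x_i(b) = x_i(a + b). As a quick sanity check (not part of the paper's argument), one may verify this in the type A realization x_i(t) = I + tE_{i,i+1}, using exact rational arithmetic:

```python
from fractions import Fraction

def x(i, t, n=3):
    """x_i(t) = I + t*E_{i,i+1} (type A realization; an assumption for this check)."""
    m = [[Fraction(int(r == c)) for c in range(n)] for r in range(n)]
    m[i - 1][i] = Fraction(t)
    return m

def mul(a, b):
    n = len(a)
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

# x_i(a) x_i(b) = x_i(a + b): the parameter pairs (a, b), (a + b, 0) and
# (0, a + b) all represent the same group element, justifying the identification.
a, b = Fraction(3, 10), Fraction(17, 10)
assert mul(x(1, a), x(1, b)) == x(1, a + b)
assert mul(x(1, a + b), x(1, 0)) == x(1, a + b) == mul(x(1, 0), x(1, a + b))
```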
Lemma 3.9. Lusztig's map f restricted to the preimages of the codimension one cells in the closure of a cell is injective.

Proof.
Consider an open cell indexed by a reduced expression x_{j_1} · · · x_{j_s}, which we may regard as being comprised of the points x_{j_1}(t_1) · · · x_{j_s}(t_s) with each t_i > 0, since f is a homeomorphism under these conditions. The boundary point in which some t_r = 0 has x_{j_r}(t_r) replaced by the identity matrix. If the resulting expression x_{j_1} · · · x̂_{j_r} · · · x_{j_s} is reduced, then we obtain a point in the preimage under f of a codimension one cell. If x_{j_1} · · · x̂_{j_r} · · · x_{j_s} is not reduced, then there is a series of braid moves enabling a modified nil-move. In this case, or if more than one parameter is set to 0, the resulting point must be in the preimage of a cell of dimension less than s − 1, i.e. one of higher codimension than one. By Lemma 3.1, w(x_{j_1} · · · x̂_{j_r} · · · x_{j_s}) ≠ w(x_{j_1} · · · x̂_{j_{r′}} · · · x_{j_s}) for r ≠ r′, provided both expressions are reduced, since then the map w just transforms each x_i to the corresponding simple reflection s_i. Consequently, boundary points obtained by sending distinct single parameters to 0 to obtain reduced expressions of length one shorter must have images in distinct cells, so in particular must have distinct images. On the other hand, changing values of the nonzero parameters when a fixed parameter is set to 0 and the resulting expression is reduced must also yield points with distinct images under f, by the result of [Lu] that the map f given by a reduced expression of length d is a homeomorphism on R^d_{>0}. Thus, f restricted to the preimage of the codimension one cells is injective. Property (2) of ∼ listed above ensures that injectivity holds even after the point identifications due to ∼.
The next theorem is phrased so as to enable a proof by induction on the length d of a Coxeter group element, in spite of the fact that we do not know yet that f((R^d_{≥0} ∩ S^{d−1}_1)/∼) is even a CW complex. In particular, we will want the preimages of the various characteristic maps to be closed cells in (R^d_{≥0} ∩ S^{d−1}_1)/∼, since this will give condition 5 of our regularity criterion. It is not clear a priori that taking the closure in (R^d_{≥0} ∩ S^{d−1}_1)/∼ interacts well with applying f, which is why the statement is arranged as it is.

Proof. We induct on d, so we may assume the result for all finite Coxeter group elements of length strictly less than d (and all choices of reduced word for them). Notice that f restricts to any region obtained by setting some t_i's to 0, since x_i(0) is the identity matrix. We show that there is a CW complex structure on f((R^d_{≥0} ∩ S^{d−1}_1)/∼) with the restrictions of f to the closed cells of (R^d_{≥0} ∩ S^{d−1}_1)/∼ giving the characteristic maps, and that this CW complex structure satisfies conditions 1, 2 and 5 of Theorem 1.1. Lemma 3.9 confirmed condition 4 of Theorem 1.1, while results of [BW] that Bruhat order is thin and shellable give condition 3. Thus, by Theorem 1.1, f((R^d_{≥0} ∩ S^{d−1}_1)/∼) is a regular CW complex with characteristic maps given by the restrictions of f to the various cell closures.
In the next section, we will construct such a collapsing process, allowing us to deduce that f applied to (R^d_{≥0} ∩ S^{d−1}_1)/∼ and its various closed cells provides a regular CW complex structure for lk(1, w).
Since f is continuous on R^d_{≥0} ∩ S^{d−1}_1 and is a homeomorphism on R^d_{>0} ∩ S^{d−1}_1 (see [Lu, Section 4]), it follows that f is also continuous on K. Therefore, K meets all the requirements of Theorem 3.10, yielding the result.
It now makes sense to define lk(u, w) to be the link of u in the regular CW complex lk(1, w). We will verify at the very end of the paper that this is equivalent to the definition of lk(u, w) given in [FS]; the key step in passing from (1, w) to intervals (u, w) will be a new, geometric description for a certain algebraically defined projection map from [FS].
Corollary 3.13. The space lk(u, w) is homeomorphic to a ball.

Construction and regularity of (R^d_{≥0} ∩ S^{d−1}_1)/∼

Let us begin by collapsing the faces of R^d_{≥0} ∩ S^{d−1}_1 whose words are commutation equivalent to stuttering words, yielding identifications which we denote by ∼_C. We then prove that (R^d_{≥0} ∩ S^{d−1}_1)/∼_C is a regular CW complex. We will start afresh for the more difficult task of constructing and proving regularity of the complex (R^d_{≥0} ∩ S^{d−1}_1)/∼ obtained by collapsing all faces whose words are nonreduced. A separate proof for ∼_C is given first for two reasons: (1) it illustrates the strategy of the general case in a much simpler setting, and (2) it will be used to prove Lemmas 3.26 and 3.37, two key ingredients to the general case.
For any face F_i whose associated expression x(F_i) is commutation equivalent to a stuttering expression, choose r as small as possible so that there are indices l, r within (i_1, . . . , i_d) with l < r, i_l = i_r and x(F_i) commutation equivalent to an expression in which x_{i_l}, x_{i_r} have been moved into neighboring positions. Denote this chosen r by r(F_i) or simply r_i. Next choose l among such pairs (l, r_i) so as to minimize r_i − l. Denote this chosen l by l(F_i) or simply l_i. We will refer to this as the chosen omittable pair for F_i, so named because w(x(F_i)) equals the Coxeter group element obtained by deleting s_{i_l}, s_{i_r}. One possible way to collapse F_i is by identifying each point x_{i_1}(t_1) · · · x_{i_d}(t_d) ∈ F_i with the pair of points in which the parameters t_{l_i}, t_{r_i} have been replaced by 0, t_{l_i} + t_{r_i} and by t_{l_i} + t_{r_i}, 0, leaving all other parameters unchanged.

Now let us describe the process by which we collapse the faces whose expressions are commutation equivalent to stuttering expressions. The idea is to put a prioritization ordering on the faces to be collapsed, and repeatedly choose the highest priority face not yet collapsed; in the course of collapsing a face, some faces in its closure may also get collapsed, as described shortly. The highest priority faces are those with r(F_i) as small as possible, breaking ties by choosing r(F_i) − l(F_i) as small as possible, and breaking further ties by choosing F_i to be as high dimensional as possible. Collapse the first face chosen, which we now call F_1, by identifying all points (t_1, . . . , t_d) ∈ F_1 with the points in which t_{l_1}, t_{r_1} are replaced by (0, t_{l_1} + t_{r_1}) and (t_{l_1} + t_{r_1}, 0) and all other parameters are left unchanged. Notice that this will also collapse any faces in its closure also having t_{l_1} > 0 and t_{r_1} > 0. Continue in this manner, i.e. at the i-th collapsing step collapse a face we denote by F_i by identifying all points in F_i having t_{l_i}, t_{r_i} > 0 with the points where (t_{l_i}, t_{r_i}) is replaced by (t_{l_i} + t_{r_i}, 0) and by (0, t_{l_i} + t_{r_i}). Let ∼_C be the set of identifications resulting from this series of collapses.
Remark 3.14. Commutation relations permute the parameters t_1, . . . , t_d, since x_i(t_r)x_j(t_s) = x_j(t_s)x_i(t_r) whenever s_i, s_j commute. Any face F_i whose word is commutation equivalent to a stuttering word is collapsed above by identifying the collections of points with t_{l_i} + t_{r_i} = k for a constant k and all other parameters fixed. This identifies the faces in F̄_i having t_{l_i} = 0, t_{r_i} > 0 with ones instead having t_{r_i} = 0, t_{l_i} > 0.
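The first sentence of Remark 3.14 can likewise be checked concretely in the type A realization x_i(t) = I + tE_{i,i+1} (an illustrative computation, not part of the paper's proof). In A_3 the reflections s_1 and s_3 commute, so a commutation move literally permutes the parameters:

```python
from fractions import Fraction

def x(i, t, n=4):
    """x_i(t) = I + t*E_{i,i+1} in type A_3 (an assumption for this illustration)."""
    m = [[Fraction(int(r == c)) for c in range(n)] for r in range(n)]
    m[i - 1][i] = Fraction(t)
    return m

def mul(a, b):
    n = len(a)
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

a, b = Fraction(1, 2), Fraction(2)
# s_1 and s_3 commute, so the commutation move simply permutes the parameters:
assert mul(x(1, a), x(3, b)) == mul(x(3, b), x(1, a))
# By contrast, s_1 and s_2 do not commute, and the products genuinely differ:
assert mul(x(1, a), x(2, b)) != mul(x(2, b), x(1, a))
```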
Call the collections of points which are identified in the collapse of F_i level curves. Define a slide-move, or simply a slide, to be the replacement of S = {j_1, . . . , j_s} by S′ = {k_1, . . . , k_s} for j_1 < · · · < j_s and k_1 < · · · < k_s with j_i = k_i for all i ≠ r for some fixed r and with i_{j_r} = i_{k_r}. A type A example for (i_1, . . . , i_d) = (1, 2, 3, 1, 2): the set S = {1} may be replaced by S′ = {4}, since i_1 = i_4.

Lemma 3.15. Each collapse preserves injectivity of the attaching maps for the faces not yet collapsed. More precisely, if F′ is any minimal face with G_1, G_2 ⊆ F̄′ just prior to the collapse of a face F, where G_1, G_2 are a pair of faces that are identified during the collapse of F, then F′ is also collapsed during the collapse of F.

Proof. Let t_{l_j}, t_{r_j} be the parameters of the chosen deletion pair for the step which collapses F, and say x_{i_{l_j}} ∈ G_1, x_{i_{r_j}} ∈ G_2. F′ must contain faces equivalent to G_1 and G_2, which we call G′_1 and G′_2. Consider the x-expression for F′ which would cause F′ to be collapsed earliest, and consider the highest priority subexpressions which are x-expressions for G′_1, G′_2. We first consider the case that x_{i_{r_j}} is not in this x-expression for F′, which means that x_{i_{r_j}} must have been swapped for a letter x_{i_r} to its left. Since we are only allowed commutation moves, we must have i_r = i_{r_j}. Likewise, F′ either must include x_{i_{l_j}} or else this must have been shifted to some x_{i_l} with i_l = i_{l_j} using only commutation moves. But since i_{l_j} = i_{r_j}, F′ will have a deletion pair causing its collapse strictly before the collapse of F. Now suppose x_{i_{r_j}} is in the optimal x-expression for F′. Then F′ will again be collapsed during or prior to the collapse of F unless x_{i_{l_j}} has been exchanged for a letter to its left in F′.
But then by maximality of our choices of faces to collapse, this exchange would extend to faces including x_{i_{r_j}}, thereby identifying faces including x_{i_l} and x_{i_{r_j}} with ones instead including x_{i_{l_j}} and x_{i_{r_j}} at this earlier step. In particular, F must be identified with F′ prior to the collapse of F.

Example 3.16. A region with associated expression x_1x_1x_1 is collapsed based on the deletion pair comprised of its leftmost two x_1's. The region with expression 1 · x_1 · x_1 is collapsed later based on its deletion pair at positions two and three. Composing face identifications based on these two steps causes the face x_1 · 1 · 1 to be identified with the face 1 · 1 · x_1, potentially causing the attaching map for the face x_1 · 1 · x_1 no longer to be injective; however, this face will itself have been collapsed by this time, by virtue of having already been identified with the face 1 · x_1 · x_1 which has already been collapsed.
Lemma 3.17. The endpoints of the level curves across which we collapse a face F i live in distinct faces just prior to the collapse.
Proof. Suppose a face G_1 ⊆ F̄_i with t_{l_i} > 0 and t_{r_i} = 0 had already been identified with the face G_2 ⊆ F̄_i instead having t_{r_i} > 0 and t_{l_i} = 0. This would have required a series of earlier slides l_{i′} → r_{i′}, one of which must have r_{i′} = r_i and therefore by our collapsing order would have r_{i′} − l_{i′} < r_i − l_i, since i′ < i. But the last of these slide moves would have occurred in a step which would also have collapsed F_i, by virtue of identifying it with a face already collapsed, a contradiction.
Remark 3.18. Let x(F_i) be commutation equivalent to a stuttering expression. Then an open cell H ⊆ F̄_i not collapsed earlier is collapsed along with F_i iff t_{l_i}, t_{r_i} > 0 at the points of H.
The collapses accomplishing ∼ C rely on the following basic facts.
Lemma 3.19. If K and L are topological spaces, f is a homeomorphism from K to L, and K/∼ is a quotient space of K, then K/∼ is homeomorphic to L/∼′, where ∼′ is the equivalence relation given by f(x) ∼′ f(y) whenever x ∼ y.

Proof. This is immediate e.g. from Proposition 13.5 in [Br].
We will use Lemma 3.19 in situations where K, L, K/∼ and L/∼′ are regular CW complexes and the quotient topologies on K/∼ and L/∼′ coincide with the usual topologies on these CW complexes, by virtue of the fibers consisting of parallel line segments or their images under a continuous function g of the type allowed in Lemma 3.22.
Lemma 3.20. Let K be the boundary of a simplex, realized as a PL sphere. Let L be a closed cell of K, with G 1 , G 2 closed, codimension one faces in L. Let C be a collection of parallel line segments covering L, each having one endpoint in G 1 and the other in G 2 . Let x ∼ y whenever x, y ∈ c for any c ∈ C. Then K/ ∼ is a regular CW complex homeomorphic to a sphere.
Proof. Denote by ∆^i a closed i-dimensional simplex. It is easy to construct a homeomorphism h from S^d − ∆^d to S^d − ∆^{d−1}, regarding both as subsets of the one point compactification S^d of R^d. Moreover, we may choose h so as to act as the identity outside of a bounded region; h then extends to a homeomorphism from (S^d − ∆^d)/∼ to S^d − ∆^{d−1}, i.e. from K/∼ to S^d. See Example 3.21, which can easily be made general, though the notation would be cumbersome. It is similarly easy to construct a homeomorphism h from S^d − ∆^m to S^d − ∆^{m−1} for any m ≤ d in such a way that h extends to a homeomorphism from (S^d − ∆^m)/∼ (i.e. from K/∼) to the sphere S^d.
Our PL set-up ensures that distinct endpoints of each of the parallel line segments live in distinct faces, that there is a unique minimal face containing any two faces being identified by ∼, and that these minimal faces are all collapsed. Thus, all attaching maps for cells not yet collapsed remain injective, implying that K/ ∼ is a regular CW complex in addition to being homeomorphic to a sphere.
Example 3.21. Consider the case in which d = 2, letting ∆^d be the convex hull of (1, 0), (0, 1/2), (0, −1/2) ∈ R^2 and ∆^{d−1} the convex hull of (0, 0) and (1, 0) in R^2. Then the collapsing homeomorphism h may be chosen to act as the identity outside R = {(x, y) : |x|, |y| ≤ 1} and to send each (x, y) ∈ R − ∆^2 to some (x, y′) ∈ R − ∆^1 by appropriately stretching and shortening vertical line segments (breaking the vertical segments in which x is negative into three pieces of lengths 1/2, 1, 1/2, the middle of which is shrunk by h and the other two of which are stretched by h; as x approaches 0 from below, the length to which the corresponding middle segment is shrunk also tends to 0, whereas h is chosen to approach the identity map as |x| tends to 1).

Now we turn to more general face collapses. Let K, L, C, ∼ be as in Lemma 3.20, and let g be a continuous, surjective function from K to a finite, regular CW complex K′ homeomorphic to a sphere, with g sending L to a closed cell L′. Suppose for each open cell σ of K that either g acts homeomorphically on σ or that σ is covered by parallel line segments which comprise exactly the fibers of g|_σ. Suppose also that g acts homeomorphically on the interior of K and that K is covered by a collection C of parallel line segments such that for each c ∈ C, g either sends c to a point or acts homeomorphically on c. We regard g as describing how the earlier collapses transform K to a regular CW complex K′ upon which we wish to apply the next collapse.
Lemma 3.22. Let K′, L′, g, C, ∼ be as just described above. Then collapsing L′ across the images under g of the line segments in C yields a quotient space of K′ which is again homeomorphic to a sphere.

Proof. The point is to consider the homeomorphism ḡ from K/(ker g) to K′ induced by g, use the fact that we can do a collapse h on K (by Lemma 3.20), and deduce (by Lemma 3.19) that we can also do a corresponding collapse on K/(ker g). The fact that ḡ is indeed a homeomorphism follows from our requirements on g, particularly the fact that g acts homeomorphically on open cells of K not yet collapsed and on elements of C not sent by g to single points.

Proposition 3.23. The collapses of faces of R^d_{≥0} ∩ S^{d−1}_1 carried out by ∼_C each preserve sphericity and regularity. Therefore, (R^d_{≥0} ∩ S^{d−1}_1)/∼_C is a regular CW complex homeomorphic to a sphere.

Proof. The first collapse is exactly as in Lemma 3.20, since the sets of points with some pair of coordinates t_r, t_s satisfying t_r + t_s = k and all other coordinates fixed comprise exactly the parallel line segments in C. Lemma 3.22 is used to prove that the later collapses preserve sphericity, since initially each closed cell is a simplex in a PL sphere, and each collapse gives a continuous function of exactly the format required for g in Lemma 3.22, implying the composition of these functions is of the desired format for g as well. To see that g indeed acts on each level curve either homeomorphically or by sending it to a point, notice that the interior of any nontrivial level curve lives entirely in one open cell F ⊆ F̄_i that has not yet been collapsed, with the endpoints in two others, say H_1 and H_2; moreover, the level curves of F are defined in terms of an x-expression for F that was never associated to a higher dimensional cell (i.e. one already collapsed). By Lemma 3.17, g cannot identify the two endpoints of a level curve without collapsing F in the process. Finally, Lemma 3.15 proves that the attaching maps for the faces not yet collapsed are still injective just after our current collapsing step, completing the proof that regularity and sphericity are preserved.
Proposition 3.24. R_S ∼_C R_T for x(R_S), x(R_T) not commutation equivalent to stuttering expressions iff S, T differ from each other by a series of commutation relations and slides.
Proof. Let S = {j_1, . . . , j_s} and T = {k_1, . . . , k_s}. We begin with pairs of words x(R_S), x(R_T) differing by a single slide, so S ∩ T = S \ {j_r} = T \ {k_r} for some r with i_{j_r} = i_{k_r}. But then the word of R_{S∪T} is stuttering, implying R_{S∪T} was collapsed by ∼_C. The fact that x(R_S), x(R_T) are not commutation equivalent to stuttering expressions makes it impossible to apply commutation relations to x(R_{S∪T}) to obtain any other stuttering pair, so that R_{S∪T} could have only been collapsed by identifying R_S with R_T. By transitivity of ∼_C, S and T differing by a series of slide moves give rise to R_S, R_T with R_S ∼_C R_T. Applying commutation relations to w_S to produce w_{σ(S)} which is slide equivalent to x(R_T) again ensures that x(R_{S∪T}) admits the same commutation relations leading to a stuttering word, and again x(R_{S∪T}) does not admit any other stuttering pairs, so R_S ∼_C R_T. Transitivity of ∼_C yields the result.

Now to the general case, i.e. the collapsing of all faces whose words are nonreduced. The next lemma will help us do changes of coordinates needed for long braid moves in a well-defined manner on closed cells. It relies heavily on the fact that the maps f_{(i,j,...)} and f_{(j,i,...)} given by the two sides of a long braid relation are homeomorphisms onto a common image.

Lemma 3.26. Let ∆ and ∆′ denote the closures of the cells whose associated words (i, j, i, . . . ) and (j, i, j, . . . ) are the two sides of a long braid relation. Then the change of coordinates f^{−1}_{(j,i,...)} ∘ f_{(i,j,...)} extends to a homeomorphism from ∆/∼_C to ∆′/∼_{C′}.

Proof. For the interior of the big cell, we may use the fact that both f_{(i,j,...)} and f_{(j,i,...)} are homeomorphisms here. Each point x in ∆ which does not belong to the open big cell must instead belong to a region whose associated Coxeter group element has a unique reduced word, namely one with the appropriate alternation of s_i's and s_j's. Thus, we must send x to a point in ∆′ having this same reduced word, so that by Proposition 3.24 the only choices to be made are equivalent to each other under ∼_{C′}. This map from ∆ to ∆′ is a homeomorphism because it is a composition of two homeomorphisms, namely f_{(i,j,...)} and f^{−1}_{(j,i,...)}.

Example 3.27.
The type A relation s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1} gives the map (t_1, t_2, t_3) → (t_2t_3/(t_1 + t_3), t_1 + t_3, t_1t_2/(t_1 + t_3)) on the interior, and the above proposition shows that by virtue of ∼_C, this map extends to the boundary, e.g. sending (t_1, t_2, 0) to (0, t_1, t_2) and sending (0, t_2, 0) to the ∼_{C′}-equivalence class consisting of the points (t_2, 0, 0) and (0, 0, t_2).

When collapsing a face, not only is there a chosen omittable pair, but also lurking in the background is a deletion pair. It will be helpful to focus on these in our remaining collapses, necessitating that we first develop some properties of the 0-Hecke algebra. Recall e.g. from [Hu] that a deletion pair in a Coxeter group expression s_{i_1} · · · s_{i_d} is a pair of letters s_{i_j}, s_{i_k} which may be deleted to obtain a new expression for the same Coxeter group element. The relations x_ix_i → x_i yield the following 0-Hecke algebra variation on the deletion exchange property:

Lemma 3.28. If w(x_{i_1} · · · x̂_{i_r}) = w(x_{i_1} · · · x_{i_r}), then w(x_{i_1} · · · x̂_{i_r} x_{i_r}) = w(x_{i_1} · · · x_{i_r}).

Proof. Since w(x_{i_1} · · · x̂_{i_r}) = w(x_{i_1} · · · x_{i_r}), we right multiply both expressions by x_{i_r} to obtain w(x_{i_1} · · · x̂_{i_r} x_{i_r}) = w(x_{i_1} · · · x_{i_r} x_{i_r}). Thus, w(x_{i_1} · · · x̂_{i_r} x_{i_r}) = w(x_{i_1} · · · x_{i_r}), since x_{i_r}x_{i_r} and x_{i_r} have the same image under w.

Define a deletion pair in a 0-Hecke algebra expression x_{i_1} · · · x_{i_d} to be a pair x_{i_r}, x_{i_s} such that the subexpression x_{i_r} · · · x_{i_s} is not reduced but x̂_{i_r} · · · x_{i_s} and x_{i_r} · · · x̂_{i_s} are each reduced. For example, in type A the first x_1 and the last x_2 in x_1x_2x_1x_2 comprise a deletion pair.
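The coordinate change of Example 3.27 can also be verified numerically. The sketch below (an illustration, not part of the paper's argument) checks the braid relation x_1(t_1)x_2(t_2)x_1(t_3) = x_2(u_1)x_1(u_2)x_2(u_3) in the type A realization x_i(t) = I + tE_{i,i+1}, using exact rational arithmetic:

```python
from fractions import Fraction
from functools import reduce

def x(i, t, n=3):
    """x_i(t) = I + t*E_{i,i+1} (type A realization; an assumption for this check)."""
    m = [[Fraction(int(r == c)) for c in range(n)] for r in range(n)]
    m[i - 1][i] = Fraction(t)
    return m

def mul(*ms):
    n = len(ms[0])
    pair = lambda a, b: [[sum(a[r][k] * b[k][c] for k in range(n))
                          for c in range(n)] for r in range(n)]
    return reduce(pair, ms)

def braid(t1, t2, t3):
    """Coordinate change of Example 3.27: x_1(t1)x_2(t2)x_1(t3) = x_2(u1)x_1(u2)x_2(u3)."""
    s = t1 + t3
    return (t2 * t3 / s, s, t1 * t2 / s)

t1, t2, t3 = Fraction(7, 10), Fraction(6, 5), Fraction(2, 5)
u1, u2, u3 = braid(t1, t2, t3)
assert mul(x(1, t1), x(2, t2), x(1, t3)) == mul(x(2, u1), x(1, u2), x(2, u3))
# Boundary behavior: (t1, t2, 0) is sent to (0, t1, t2), as in the example.
assert braid(t1, t2, Fraction(0)) == (0, t1, t2)
```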
Lemma 3.29. If x_{i_r}, x_{i_s} are a deletion pair, then w(x_{i_r} · · · x_{i_s}) = w(x̂_{i_r} · · · x_{i_s}) = w(x_{i_r} · · · x̂_{i_s}), but these do not equal w(x̂_{i_r} · · · x̂_{i_s}).
Proof. w(x̂_{i_r} · · · x_{i_s}) ≤ w(x_{i_r} · · · x_{i_s}) and w(x_{i_r} · · · x̂_{i_s}) ≤ w(x_{i_r} · · · x_{i_s}) in Bruhat order, while all three of these Coxeter group elements have the same length, so the equalities follow. The inequality is immediate from the fact that x̂_{i_r} · · · x_{i_s} is reduced, so that w(x̂_{i_r} · · · x̂_{i_s}) has strictly smaller length.
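The 0-Hecke product w(·) and the deletion-pair phenomenon of Lemma 3.29 are easy to experiment with. The following sketch (illustrative only, computed in the symmetric group) realizes w via the Demazure product on permutations, applying s_i only when it increases length, and checks the pair from x_1x_2x_1x_2:

```python
from itertools import combinations

def inv(p):
    """Coxeter length of a permutation (0-indexed tuple) = number of inversions."""
    return sum(1 for a, b in combinations(range(len(p)), 2) if p[a] > p[b])

def demazure(word, n=3):
    """The paper's w(.): the 0-Hecke (Demazure) product, where x_i acts by the
    simple transposition s_i only when that increases length."""
    p = tuple(range(n))
    for i in word:
        q = list(p)
        q[i - 1], q[i] = q[i], q[i - 1]
        q = tuple(q)
        if inv(q) > inv(p):
            p = q
    return p

# The deletion pair of Lemma 3.29 in the word x_1 x_2 x_1 x_2 (type A_2):
full = (1, 2, 1, 2)
assert demazure(full) == demazure((2, 1, 2)) == demazure((1, 2, 1))
assert demazure(full) != demazure((2, 1))   # deleting both letters changes w
```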
Lemma 3.30. If an expression x_{i_1} · · · x_{i_d} has deletion pairs x_{i_r}, x_{i_s} and x_{i_s}, x_{i_t} for r < s < t, then x_{i_r} · · · x̂_{i_s} · · · x_{i_t} is not reduced.
Proof. Since x_{i_r} · · · x̂_{i_s} and x̂_{i_r} · · · x_{i_s} are reduced expressions for the same Coxeter group element, we may apply braid relations to the former to obtain the latter. Likewise, we may apply braid relations to x̂_{i_s} · · · x_{i_t} to obtain x_{i_s} · · · x̂_{i_t}. Applying these same braid relations to the first and second parts of x_{i_r} · · · x̂_{i_s} · · · x_{i_t} yields x̂_{i_r} · · · x_{i_s} x_{i_s} · · · x̂_{i_t}, implying x_{i_r} · · · x̂_{i_s} · · · x_{i_t} is not reduced.
See [FG] for a faithful representation of the 0-Hecke algebra in which the simple reflections which do not increase length act by doing nothing. In some sense, this idea translates to our set-up as follows.
Lemma 3.31. If x_{i_r} · · · x̂_{i_u} · · · x_{i_s} is reduced but x_{i_r} · · · x_{i_s} is not, then x_{i_u} belongs to a deletion pair within x_{i_r} · · · x_{i_s}.

Proof. Without loss of generality, assume x_{i_r}, x_{i_s} are a deletion pair in x_{i_r} · · · x_{i_s}, since otherwise we could restrict to a subexpression beginning and ending with the elements of a deletion pair; we are assured there is no deletion pair strictly to the right or strictly to the left of x_{i_u} by our assumption that x_{i_r} · · · x̂_{i_u} · · · x_{i_s} is reduced. There must be a series of braid moves transforming x_{i_r} · · · x̂_{i_s} into x̂_{i_r} · · · x_{i_s}. We may apply this same series of moves to x_{i_r} · · · x̂_{i_u} · · · x̂_{i_s}, provided that x̂_{i_u} never appears in an interior position within a subword to which we apply a braid move; when x̂_{i_u} appears at an end of such a subword, the braid move sends a word beginning with x̂_{i_u} to one instead ending with some x̂_j, or vice versa. Our assumption that x_{i_r} · · · x̂_{i_u} · · · x_{i_s} is reduced ensures that x̂_{i_u} will never appear at an interior position of a braid move, because the first instance of this would imply the existence of a deletion pair in x_{i_r} · · · x̂_{i_u} · · · x_{i_s}. At the end of the series of braid moves, we will have a stuttering pair which either exhibits how x_{i_u} was part of a deletion pair or else demonstrates that x_{i_r} · · · x̂_{i_u} · · · x_{i_s} could not have been reduced in the first place.
Corollary 3.32. If x_{i_r}, x_{i_s} is a deletion pair in x_{i_1} · · · x_{i_r} · · · x_{i_s} · · · x_{i_d}, then any expression x_{i_r} · · · x̂_{i_u} · · · x_{i_s} obtained by deleting one letter from x_{i_r} · · · x_{i_s} for u ≠ r, s is not reduced.
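Corollary 3.32 can be tested on the running example x_1x_2x_1x_2 using the same Demazure-product computation of w (an illustration in the symmetric group, not part of the paper's proof); a word is reduced iff the length of its 0-Hecke image equals the word length:

```python
from itertools import combinations

def inv(p):
    return sum(1 for a, b in combinations(range(len(p)), 2) if p[a] > p[b])

def demazure(word, n=3):
    """The paper's w(.) computed as a Demazure product on permutations."""
    p = tuple(range(n))
    for i in word:
        q = list(p)
        q[i - 1], q[i] = q[i], q[i - 1]
        q = tuple(q)
        if inv(q) > inv(p):
            p = q
    return p

def reduced(word, n=3):
    """A word is reduced iff its 0-Hecke image has length equal to the word's length."""
    return inv(demazure(word, n)) == len(word)

# x_1 (position 1) and x_2 (position 4) form a deletion pair in x_1 x_2 x_1 x_2:
assert not reduced((1, 2, 1, 2))
assert reduced((2, 1, 2)) and reduced((1, 2, 1))
# Corollary 3.32: deleting a single interior letter leaves a non-reduced word.
assert not reduced((1, 1, 2)) and not reduced((1, 2, 2))
```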
Remark 3.33. If x iu , x iv comprise a deletion pair in a word x i 1 · · · x i d and we apply a braid relation in which x iu is the farthest letter in the braid relation from x iv , then the resulting expression will have x iv together with the nearest letter involved in the braid relation as a deletion pair. We regard this as a braided version of the same deletion pair. For example, applying a braid relation to x 1 x 2 x 1 x 2 yields x 2 x 1 x 2 x 2 ; we regard the third and fourth letter in the new expression as a braided version of the deletion pair in the original expression.
Lemma 3.34. Let A = x j 1 · · · x jr and B = x k 1 · · · x ks . If w(A) = w(B), then any word A ∨ B obtained by shuffling the letters in A with the letters in B satisfies w(A ∨ B) = w(A).

Proof. A ∨ B is a subexpression of the expression A · (A ∨ B), which satisfies w(A · (A ∨ B)) = w(A), because the leftmost letter in the A ∨ B portion forms a deletion pair with a letter to its left from A, hence may be deleted, and this argument may be repeated until all letters of the A ∨ B portion have been eliminated. Corollary 3.5 yields w(A) ≤_{Bruhat} w(A ∨ B) ≤_{Bruhat} w(A · (A ∨ B)) = w(A), implying equality.
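Lemma 3.34 is easy to check exhaustively on a small case. The sketch below (illustrative, not from the paper) verifies that every shuffle of A = x_1x_2x_1 with B = x_2x_1x_2 has the same 0-Hecke image as A:

```python
from itertools import combinations

def inv(p):
    return sum(1 for a, b in combinations(range(len(p)), 2) if p[a] > p[b])

def demazure(word, n=3):
    """The paper's w(.) as a Demazure product on permutations of {0,...,n-1}."""
    p = tuple(range(n))
    for i in word:
        q = list(p)
        q[i - 1], q[i] = q[i], q[i - 1]
        q = tuple(q)
        if inv(q) > inv(p):
            p = q
    return p

def shuffles(A, B):
    """All interleavings of A and B preserving the internal order of each word."""
    for pos in combinations(range(len(A) + len(B)), len(A)):
        ai, bi, chosen = iter(A), iter(B), set(pos)
        yield tuple(next(ai) if k in chosen else next(bi)
                    for k in range(len(A) + len(B)))

A, B = (1, 2, 1), (2, 1, 2)
assert demazure(A) == demazure(B)
# Lemma 3.34: every shuffle A v B satisfies w(A v B) = w(A).
assert all(demazure(s) == demazure(A) for s in shuffles(A, B))
```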
For any deletion pair x_{i_r}, x_{i_s} in any expression x_F, let c({x_{i_r}, x_{i_s}}; x_F) be the smallest number of long braid moves which, together with any number of commutation moves, enables application of a modified nil-move to the pair x_{i_r}, x_{i_s} while at each step only acting on the subexpression beginning with the left element of the deletion pair and ending to the immediate left of the right end of the deletion pair. For example, in type A, c({x_{i_1}, x_{i_4}}; x_1x_2x_1x_2) = 1. Such a sequence of moves is guaranteed to exist, by Lemma 3.29 combined with Theorem 3.2.
At each step in our collapsing process, each face is labeled by a collection of expressions that have been made equivalent to each other by earlier collapses; we will describe the collapse of a face F_j in terms of one of its expressions giving it the highest collapsing priority.
We use the following word prioritization to order the faces to be collapsed, using the deletion pair within the word which gives the face its highest possible priority, and letting the statistics listed earliest take highest priority, only using later ones to break ties: (1) linear order on the right endpoint indices r_j in the deletion pairs, (2) linear order on the differences r_j − l_j in indices for the deletion pairs, (3) linear order on c({x_{i_{l_j}}, x_{i_{r_j}}}; x_F), and (4) reverse linear order on face dimension (i.e. prioritizing faces of higher dimension). Note that there may be more than one choice of word for a face giving it the same priority, but all such choices will use the same deletion pair and admit the same series of long braid moves; we may choose arbitrarily which of these optimal representations to use for a face. Collapse the face F_1 selected first, i.e. the face of highest priority above, by choosing an allowable series of exactly c({x_{i_{l_1}}, x_{i_{r_1}}}; x(F_1)) long braids to transform its optimal deletion pair x_{i_{l_1}}, x_{i_{r_1}} into a stuttering pair. Some proper faces may be collapsed in the process. Keep repeating until all faces to be collapsed have been collapsed.
Remark 3.35. When collapsing a face F_i, we choose a series of braid moves enabling a modified nil-move. If t_r, t_s are the parameters of the chosen deletion pair for F_i, a face σ of F̄_i having t_r = 0 is identified with the face σ′ instead having t_s = 0 iff the long braid moves apply to the subexpression of x(F_i) involving exactly the letters in σ and σ′ and move into the neighboring positions of the stuttering pair a nonzero and a zero parameter.
Example 3.36. Applying allowable braid moves to x_1x_2x_1x_3x_2x_3 yields x_2x_1x_3x_2x_3x_3. Collapsing based on the resulting stuttering pair will cause the proper face 1 · x_2x_1x_3x_2x_3 to be identified with the face x_1x_2x_1x_3x_2 · 1. On the other hand, x_1x_2x_1 · 1 · x_2x_3 will have been collapsed at an earlier step, since it has a higher priority deletion pair.
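The braid moves in Example 3.36 can be double-checked with the same Demazure-product computation of w (an illustrative aside, not part of the paper's proof): all three expressions represent the same 0-Hecke element of S_4, and that element has length 5 while the words have six letters, confirming they are non-reduced:

```python
from itertools import combinations

def inv(p):
    return sum(1 for a, b in combinations(range(len(p)), 2) if p[a] > p[b])

def demazure(word, n=4):
    """The paper's w(.) as a Demazure product on permutations of {0,...,n-1}."""
    p = tuple(range(n))
    for i in word:
        q = list(p)
        q[i - 1], q[i] = q[i], q[i - 1]
        q = tuple(q)
        if inv(q) > inv(p):
            p = q
    return p

elt = demazure((1, 2, 1, 3, 2, 3))
assert demazure((2, 1, 2, 3, 2, 3)) == elt   # after braiding positions 1-3
assert demazure((2, 1, 3, 2, 3, 3)) == elt   # after braiding positions 3-5
assert inv(elt) == 5                          # length 5 < 6 letters: non-reduced
```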
Denote by ∼_i the equivalence relation comprised of the identifications from the first i − 1 collapsing steps. Call two pairs {x_{i_u}, x_{i_v}} and {x_{i_r}, x_{i_s}} appearing in a word x_{i_1} · · · x_{i_d} crossing if, listing the four indices r, s, u, v in increasing order, the first and third give one of the pairs while the second and fourth give the other, provided additionally that it is not possible to apply commutation relations to make the pairs non-crossing; two pairs are nesting if the first and fourth indices comprise one pair while the second and third comprise the other.
Lemma 3.37. Each braid relation used in the k-th collapsing step is given by a homeomorphism ch from the closure of the region F_k being collapsed within (R^d_{≥0} ∩ S^{d−1}_1)/∼_k onto its image.

Proof. Consider the first long braid move needed for F_k. Lemma 3.26 allows us to define the change of coordinates map ch on the closed region where all parameters other than those involved in this braid relation are zero, because our collapsing order ensures that the identifications due to ∼_C will have already been applied to this closed face; here we also use the fact that ∼_C and ∼_k agree on this closed face, by virtue of the fact that ∼_C already identifies all points with the same image under Lusztig's map. We extend this definition for the map ch to all of F̄_k by leaving all other coordinates unchanged; in other words, we let ch = (id ⊗ f_{(i,j,...)} ⊗ id) ∘ (id ⊗ f^{−1}_{(j,i,...)} ⊗ id), and we impose the identifications ch(x) ∼_k ch(y) whenever we have x ∼_k y prior to the application of the map ch. Our collapsing order ensures id ⊗ f^{−1}_{(j,i,...)} ⊗ id is well-defined, since ch is only applied to faces not yet collapsed.
We check now that after applying a series of long braid moves, thereby obtaining new coordinates, the requisite identifications have been done that will be needed for the coordinate change ch for the next long braid move, namely x_ix_j · · · → x_jx_i · · · . Consider a proper face G which omits letters causing stuttering within x_ix_j · · · . Then G either uses the same deletion pair x_{i_r}, x_{i_s} as F_k but with c({x_{i_r}, x_{i_s}}; G) < c({x_{i_r}, x_{i_s}}; F_k), or G has a preferable deletion pair. Either way, G is collapsed prior to F_k. The desired identification would have already been done when G was collapsed (identifying the face omitting x_{i_r} from G with the one omitting x_{i_s}) unless G was collapsed based on a preferable deletion pair. If G has such a preferable deletion pair, suppose first that it is noncrossing and nonnesting with x_{i_r}, x_{i_s}. Then the earlier collapse of G would have extended to collapse F_k earlier too, because our collapse ordering chooses G to be as high dimensional as possible, hence coinciding exactly with F_k on the subexpression beginning and ending with the preferable deletion pair. If, on the other hand, the preferable deletion pair is crossing or nesting with x_{i_r}, x_{i_s}, then the collapse of G would have also collapsed the face obtained from G by deleting x_{i_s} (for r < s), again because we maximize dimension in our collapses. Thus G \ {x_{i_s}} is not in the domain of ch. By Corollary 3.32, the face obtained from G by deleting x_{i_r} would also have been collapsed earlier, hence also is not in the domain of ch. Thus, we have shown that all requisite identifications have been done so as to apply ch in a well-defined (and bijective) manner to F̄_k.
To see that ch is actually a homeomorphism on F̄_k, define H_k to be the space obtained from F̄_k by doing all collapses whose deletion pairs are strictly to the left of the segment to be braided by ch, and then doing all collapses whose deletion pairs are stuttering pairs inside the segment to be braided. We may assume by induction that the first series of collapses applied to F̄_k preserves regularity as well as its being homeomorphic to a ball; then we may apply our results about ∼_C, working throughout in the coordinates we have just prior to the application of ch, to deduce that in fact H_k is a regular CW complex homeomorphic to a ball, i.e. that the collapses based on stuttering pairs within the segment to be braided preserve this property. Since ch acts as the identity on the coordinates outside the braided segment and as f_{(i,j,...)} ∘ f^{−1}_{(j,i,...)} on the braided coordinates, each of which itself is a homeomorphism, we deduce that ch acts homeomorphically on H_k. In Lemma 3.38, we prove that F̄_k is a quotient space of H_k. By definition, ch(F̄_k) is a quotient space of ch(H_k) as well. Thus, we may apply Lemma 3.19 to deduce that ch is a homeomorphism on F̄_k.
To see that the braid moves for F_k leading to the aforementioned stuttering in G and the braid moves actually used to collapse G will give consistent point identifications (i.e. no monodromy), notice that in each case we are composing homeomorphisms where at each step each point x has a unique image.

Lemma 3.38. F̄_k is a quotient space of H_k.

Proof. Consider the map f_{(i_1,...,i_{s−1})} ⊗ id, where s is the index of the right element of the chosen deletion pair for F_k, id is the identity map, and ∼′_k consists of those identifications of ∼_k given by deletion pairs only involving the first s − 1 positions. This will imply that the identifications of ∼_k include all identifications based on deletion pairs using only the first s − 1 positions. We may assume by induction on word length that (R^{s−1}_{≥0} ∩ S^{s−2}_1)/∼′_k is a regular CW complex. Suppose the expression associated to a region has the property that its subexpression using only the first s − 1 positions is not reduced; then the expression has a deletion pair whose right endpoint has index at most s − 1, implying the region will have been collapsed by ∼′_k. Thus, we apply Theorem 3.10 to deduce that f_{(i_1,...,i_{s−1})} ⊗ id is a homeomorphism, hence that ∼′_k identifies any pair of points having the same image under f_{(i_1,...,i_{s−1})} ⊗ id, so in particular accomplishes all identifications present in H_k, regarding this as a quotient of F̄_k.

We still must prove that injectivity persists after the collapse of one cell for the attaching maps of the other cells not yet collapsed. The next lemma will imply this. For any two subexpressions A, B of x_{i_1} · · · x_{i_d}, let A ∨ B be the expression involving the union of the indexing positions from A and B. Elsewhere in the paper, we use the notation G_1, G_2 for the closures of the cells whose x-expressions are denoted by A, B in the next lemma. For expediency, we sometimes speak of cells and their x-expressions interchangeably in the next lemma.
When a collapse identifies an open cell A with some A', we denote this exchange by A → A'; if this is done in a step collapsing via a deletion pair x_{i_u}, x_{i_v}, we say that x_{i_u} is exchanged for x_{i_v}. If A, B are exchanged for A', B' in the same collapsing step (for some larger cell having both A and B in its closure), we say that A and B experience the same exchange.
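The notion of a deletion pair can be made concrete in the symmetric group. The following sketch (illustrative only; the function names and the restriction to type A are our own, not from the paper) searches a word in adjacent transpositions for pairs of positions whose simultaneous deletion leaves the Weyl group element unchanged:

```python
from itertools import combinations

def perm_of_word(word, n):
    """Multiply the adjacent transpositions s_i = (i, i+1) (1-indexed),
    read left to right, acting by right multiplication on one-line notation."""
    p = list(range(n))
    for i in word:
        p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def deletion_pairs(word, n):
    """Position pairs (u, v) whose simultaneous deletion from the word
    leaves the same group element; nonempty iff the word is non-reduced."""
    target = perm_of_word(word, n)
    return [(u, v) for u, v in combinations(range(len(word)), 2)
            if perm_of_word(word[:u] + word[u + 1:v] + word[v + 1:], n) == target]
```

For instance, the stuttering word (1, 1) in S_2 has the single deletion pair (0, 1); the non-reduced word (1, 2, 1, 2) in S_3 has the deletion pair (0, 3); and a reduced word such as (1, 2, 1) has none.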
Lemma 3.39. If a face A ∨ B is collapsed across level curves each having one endpoint in A and the other endpoint in B, then each element F which is a least upper bound for A and B in the closure poset just prior to the collapse of A ∨ B is itself collapsed when A ∨ B is.
Proof. Let F_k be the maximal face collapsed in the step which collapses A ∨ B. Let x_{i_r}, x_{i_s} be the deletion pair accomplishing the collapse. Let A ∨ B = x_{i_{j_1}} · · · x_{i_{j_m}}, so that A is obtained from A ∨ B by omitting x_{i_r} and B is obtained by omitting x_{i_s}. Since A and B are faces of F, there are some letters which may be deleted from x(F) to obtain x(A') for some A' ∼_k A, and there are letters which may be deleted from x(F) to obtain x(B') such that B' ∼_k B. A must have been identified with A' (and likewise B with B') by a series of earlier deletion pair exchanges, i.e., steps each replacing one letter by another such that the two letters comprise the chosen deletion pair at that collapsing step.
Consider first the case in which x_{i_s} ∈ A' ∨ B' and B' = B. Suppose A is transformed to A' by a series of exchanges A → A_1 → A_2 → · · · → A_k = A'. Denote by A|_{x_{i_1}···x_{i_t}} the restriction of an x-expression A to its subexpression using only letters from the first t available indexing positions. Since w(A) = w(B) = w(A_i) for each i, and in fact w(A|_{x_{i_1}···x_{i_{s−1}}}) = w(B|_{x_{i_1}···x_{i_{s−1}}}) = w(A_i|_{x_{i_1}···x_{i_{s−1}}}), Lemma 3.34 yields w(A_i ∨ B|_{x_{i_1}···x_{i_{s−1}}}) = w(B|_{x_{i_1}···x_{i_{s−1}}}) for all i. In particular, A' ∨ B|_{x_{i_1}···x_{i_{s−1}}} must therefore be non-reduced, since A' cannot yet have been identified with B, which means A' ∨ B has strictly more letters than B in this segment. Thus, A' ∨ B must have a deletion pair whose right index is to the left of x_{i_s}, implying A' ∨ B is collapsed before A ∨ B is. Now suppose instead that B' ≠ B. Again, we use the fact that A, B are not identified prior to the collapse of A ∨ B to deduce that A' ∨ B' has strictly more letters than B, and in particular A' ∨ B'|_{x_{i_1}···x_{i_{s−1}}} is not reduced. Thus, A' ∨ B' has a deletion pair strictly to the left of x_{i_s}, ensuring this face will be collapsed before A ∨ B is collapsed.
Next, note that exactly the same exchanges based on deletion pairs strictly to the left of x_{i_s} will apply to A and to A ∨ B. Those exchanges resulting from collapses with right endpoint for the deletion pair to the left of x_{i_r} will also apply to B, allowing us to disregard these exchanges. If A has any exchanges x_{i_u} → x_{i_v} based on deletion pairs x_{i_u}, x_{i_v} having x_{i_r} at an intermediate position, then Lemma 3.32 ensures that the exchange x_{i_u} → x_{i_v} is also applicable to B, in that inserting x_{i_v} into B creates a deletion pair x_{i_u}, x_{i_v}; the fact that x_{i_u}, x_{i_v} both appear strictly to the left of x_{i_s} allows us to assume by induction on word length that this identification B → B' by exchanging x_{i_u} for x_{i_v} indeed occurs prior to the collapse of A ∨ B. (The idea is similar, and explained in more detail, within the proof of Lemma 3.38.) If A could exchange a letter to the left of x_{i_r} for x_{i_r}, this would contradict the fact that B|_{x_{i_1}···x_{i_{s−1}}} is reduced. Any exchanges which apply to A based on deletion pairs x_{i_u}, x_{i_v} appearing strictly between x_{i_r} and x_{i_s} also apply to B at some point prior to the collapse of A ∨ B, again using induction based on length, as in the proof of Lemma 3.38; B admits this same exchange since the deletion pair is entirely to the right of x_{i_r}, i.e. is on a segment where A and B are identical. Thus we have proven that all exchanges performed on A prior to the collapse of A ∨ B not only extend to include x_{i_s} but also will occur on B; moreover, any exchanges applicable to B must also apply to A ∨ B; if the exchanges on B also all apply to A, then A ∨ B = F.
Otherwise, there is some exchange on B that does not apply to A. The resulting face F = A' ∨ B' will strictly contain A ∨ B, contradicting its being a least upper bound for A, B, unless there is some exchange not applicable to A which specifically deletes x_{i_r} from B and A ∨ B. But exchanging x_{i_r} to the right would identify A ∨ B with a face collapsed strictly earlier than it, whereas exchanging x_{i_r} to the left for some x_{i_u} would identify A ∨ B with a face F' which is collapsed strictly before A ∨ B is (by Lemma 3.30) unless the chosen deletion pair for F' is precisely x_{i_u}, x_{i_s}; but then A ∨ B would be identified with F = A' ∨ B' prior to the collapse of A ∨ B, so in every case we are done.

Proof (of Lemma 3.40). Let t_{l_i}, t_{r_i} be the parameters of the deletion pair, with u_r, u_s the new parameters comprising the stuttering pair obtained from t_{l_i}, t_{r_i} by braid and modified nil-moves. Any two faces G_1, G_2 containing opposite endpoints of a level curve u_r + u_s = k have the property that one has t_{l_i} = 0, the other has t_{r_i} = 0, with the faces otherwise having the same zero and nonzero parameters. We show that G_1, G_2 are not identified in an earlier collapse.
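As a minimal rank-one illustration of such a level-curve collapse (our own sketch, using only the standard relation for Chevalley generators, not an excerpt from the paper):

```latex
x_i(u_r)\, x_i(u_s) \;=\; x_i(u_r + u_s),
```

so all points of the level curve $u_r + u_s = k$ in $\mathbb{R}^2_{\ge 0}$ share one image; collapsing each such curve to a point identifies the curve's endpoints $(k, 0)$ and $(0, k)$, which lie in the two faces $u_s = 0$ and $u_r = 0$ playing the roles of $G_1$ and $G_2$.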
There are two possibilities to rule out, namely (a) that the collapse of some face not in F caused G_1 and G_2 to be identified earlier, and (b) that G_1 and G_2 were each collapsed onto the same face within the boundary of F prior to the collapse of F. We may assume that the complex is regular just prior to collapsing F in order to show that regularity is preserved at the end of the collapse. This regularity assumption precludes (a). In situation (b), the x-expressions for G_1, G_2 differ only in the deletion pair x_{i_{l_i}}, x_{i_{r_i}}; but each would have already been collapsed onto the same face H prior to the collapse of F by identifying x_{i_{l_i}} (resp. x_{i_{r_i}}) with some other x_{i_v} (resp. x_{i_w}).
Note that x_{i_{l_i}}, x_{i_v} comprise a deletion pair in x(G_1), and that x_{i_{r_i}}, x_{i_w} form a deletion pair in x(G_2). Our collapsing order implies that x_{i_v} is to the left of x_{i_{r_i}} but to the right of x_{i_{l_i}}. If x_{i_u} is to the left of x_{i_{l_i}}, then the collapse of G_1 extends to collapse F, unless it is collapsed even earlier. If x_{i_u} is to the right of x_{i_{l_i}}, it still must be to the left of x_{i_{r_i}}, and again the collapse extends to F unless F were collapsed earlier. In each case we are done.

Proof. Each collapsing step is specified by a deletion pair x_{i_r}, x_{i_s}, and collapses a closed face F_i across level curves which each have one endpoint in the closed face G_1 in which t_{i_r} = 0 and the other endpoint in the closed face G_2 in which t_{i_s} = 0. By Lemma 3.40, G_1 ≠ G_2. First notice by virtue of our collapsing order that G_2 cannot have been collapsed yet, though G_1 may have been.
First suppose F_i does not require any long braid moves. Then just as in Lemma 3.23, the earlier collapses give homeomorphisms on the complements of the regions being collapsed at these earlier steps, and the composition of these maps takes exactly the format required for the function g appearing in Lemma 3.22. Lemma 3.40 guarantees that the map g needed for Lemma 3.22 indeed acts on each level curve c either homeomorphically or by sending it to a point, using a similar argument to the ∼_C case.
Using the fact that ch is a homeomorphism on F_i which carries the boundary to itself (as shown in Lemma 3.37), we show below that we may extend ch to a small neighborhood N of F_i to give a change of coordinates on N, transforming the level curves into the format handled by Lemma 3.22, enabling us to apply Lemma 3.22 to collapse F_i after a suitable change of coordinates, choosing the collapsing homeomorphism so as to act as the identity map outside of N. This allows us to encode the collapsing homeomorphism as the composition of three maps: first apply the map which acts as ch^{-1} on N and the identity map outside of N, then apply the collapsing homeomorphism which exists in the new coordinates (and which is chosen so as to act as the identity outside of N), then apply the map which acts as ch on N and the identity map outside of N. While the first and third maps are not homeomorphisms, the composition of these three maps is nonetheless a homeomorphism, by virtue of the collapse acting as the identity at the boundary of N, so that the composition of the three maps also acts as the identity both at the boundary of N and outside of N.
It is important for points x ∈ F_i that f_{(i_1,...,i_d)}(x) = f_{(j_1,...,j_d)}(ch(x)), where (j_1, . . . , j_d) is the word obtained from (i_1, . . . , i_d) by the braid relation causing the change of coordinates ch; however, this property is not needed for points in N outside F_i, since none of these will be identified with each other during the collapse. Thus, we may extend ch to all of N simply by thickening the boundary of F_i to ∂(F_i) × [0, ε) and letting ch(x, t) = (ch(x), t) for each t ∈ [0, ε). Define ch to act as the identity map outside of N. Thus, ch is a homeomorphism on N (though certainly not on the entire regular CW complex to which we are about to apply a collapse). Moreover, the composition of ch with a collapsing homeomorphism h in the new coordinates, composed with ch^{-1}, gives the needed homeomorphism to a sphere, showing sphericity is preserved under the collapse. Lemma 3.39 shows that the attaching maps remain injective after the collapse, confirming that regularity is also preserved under the collapse.
Because ∂(R^d_{≥0} ∩ S^{d−1}_1) is the boundary of a simplex, it is regular and homeomorphic to a sphere. Lemma 3.39 together with the argument above ensures that regularity and sphericity are preserved through each collapse, hence that ∂((R^d_{≥0} ∩ S^{d−1}_1)/∼) is also a regular CW complex homeomorphic to a sphere.

Relationship to lk(u, w) as defined by Fomin and Shapiro
Finally, we relate our construction for lk(u, w) to the construction of Fomin and Shapiro in [FS]. Earlier sections allow us to define lk(1, w) as the regular cell complex (R^d_{≥0} ∩ S^{d−1}_1)/∼ or as its image under Lusztig's map, which is a homeomorphism on this domain. This enables lk(u, w) to be defined as the link of u within lk(1, w) for any u ≤ w.
Lemma 4.1. The map π_u sends any x = x_{i_1}(t_1) · · · x_{i_r}(t_r) ∈ Y_{≥u} to the expression x' = x_{i_{j_1}}(t_{j_1}) · · · x_{i_{j_s}}(t_{j_s}) ∈ Y^o_u obtained by reading x from left to right, including every possible term such that the subword x_{i_{j_1}} · · · x_{i_{j_l}} chosen so far satisfies w(x_{i_{j_1}} · · · x_{i_{j_l}}) ≤_{weak} u. This is trivial when u is the identity, i.e. the case of lk(1, w), but it will require some work to give a careful proof for all u.
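In type A, the left-to-right reading rule can be sketched as follows. This is a hypothetical formalization of our own, not taken from the paper: we interpret ≤_{weak} as right weak order, tested via ℓ(v) + ℓ(v^{−1}u) = ℓ(u), with permutations in 0-indexed one-line notation, and we record only which positions of the word are selected.

```python
def ell(p):
    """Coxeter length of a permutation = number of inversions."""
    n = len(p)
    return sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])

def mult_s(p, i):
    """Right multiplication by the adjacent transposition s_i (1-indexed)."""
    q = list(p)
    q[i - 1], q[i] = q[i], q[i - 1]
    return tuple(q)

def inverse(p):
    q = [0] * len(p)
    for pos, val in enumerate(p):
        q[val] = pos
    return tuple(q)

def leq_weak(v, u):
    """v <= u in right weak order iff ell(v) + ell(v^{-1} u) == ell(u)."""
    vinv = inverse(v)
    vinv_u = tuple(vinv[u[x]] for x in range(len(u)))
    return ell(v) + ell(vinv_u) == ell(u)

def greedy_subword(word, u):
    """Read the word left to right, keeping a letter whenever the product
    so far lengthens and stays <= u in right weak order; return positions."""
    v = tuple(range(len(u)))
    chosen = []
    for pos, i in enumerate(word):
        v2 = mult_s(v, i)
        if ell(v2) > ell(v) and leq_weak(v2, u):
            v, chosen = v2, chosen + [pos]
    return chosen
```

For u the identity, no letter is ever selected, matching the triviality of the lk(1, w) case noted above.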
Proof. For any x ∈ Y_{≥u}, we will show that x may be written as x'y for some y ∈ N(u) = u^{−1}Bu ∩ N, with x' as above. We begin with type A. Regard x' as an operator acting on matrices on the right. Thus, each x_{i_j}(t_j) in turn (read from left to right) adds t_j copies of column i_j to column i_j + 1. From this viewpoint, it is not hard to see that for any element of Y^o_u, the entries at the positions (i, j) indexed by the inversions of u give enough information to determine x_u.
On the other hand, we show for any matrix M_{out} in N(u) and any inversion pair (i, j) of u with i < j that M_{out} must have a 0 in position (i, j), so that acting on the right by M_{out} does not add any copies of column i to column j. Let red(u) = s_{i_1} · · · s_{i_j} be a reduced expression for u and red(u)^{rev} its reversal, namely a reduced expression for u^{−1}. Since M_{out} ∈ u^{−1}Bu, it is obtained by taking some M_{in} ∈ B and successively letting s_{i_1}, s_{i_2}, . . . act simultaneously on the left and the right, thereby simultaneously swapping rows and columns (i_j, i_j + 1). Thus, for each inversion (i, j) in u, M_{out} has the entries (i, j) and (j, i) of M_{in} swapped, forcing both to be 0. This implies that right multiplying x' by an element of N(u) does not impact the data which was shown to determine the element x_u ∈ N_u; to see this claim, use an alternative description of the operator by letting the piece of data indexed by the pair (i, j) record how many copies of the vector comprising the sum of columns i through j − 1 are added to column j, so that an element of N_u determines exactly the data for those (i, j) which are inversions. Since x' consists of exactly those factors of x impacting this data which determines x_u, and since x' is in N_u, x' must equal π_u(x).
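The column-operation viewpoint in type A can be checked directly. The sketch below (our own illustration, with n = 3; the function names are not from the paper) realizes x_i(t) = I + tE_{i,i+1} and exhibits both the column operation and the additivity x_i(a)x_i(b) = x_i(a + b):

```python
def chevalley_x(i, t, n=3):
    """x_i(t) = I + t*E_{i,i+1} (1-indexed i), as nested lists."""
    m = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    m[i - 1][i] = t
    return m

def matmul(a, b):
    """Naive matrix product of square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]
```

Right multiplying any M by chevalley_x(i, t) replaces column i + 1 of M by (column i + 1) + t · (column i), which is exactly the operator description used in the proof; and since E_{i,i+1}^2 = 0, the product chevalley_x(1, a) · chevalley_x(1, b) equals chevalley_x(1, a + b).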
To generalize this to other types, we must work in terms of the appropriate homomorphisms φ_i of SL_2 into our group G. For example, in type B, the simple reflection s_i simultaneously swapping 1, 2 and −1, −2 yields x_i(t) = I_{2n} + tE_{1,2} + tE_{n+1,n+2}, and the simple reflection s_j swapping 1, −1 yields x_j(t) = I_{2n} + tE_{1,n+1}. For any finite Coxeter group, define the inversions of an element u to be the hyperplanes separating the associated region from the base chamber. Now the proof is quite similar to type A; again, the point is that an element of Y^o_u is determined by data that is held fixed under right multiplication by an element of N(u), that x' consists of exactly the factors impacting this data, and that x' ∈ N_u.
Choose any point x_u in the region of (R^d_{≥0} ∩ S^{d−1}_1)/∼ indexed by u. Lemma 4.1 lets us regard π_u^{−1}(x_u), within a neighborhood of x_u, as the subspace satisfying the same equations t_i + t_j = k which the parameters in x_u satisfy, with all unconstrained parameters set equal to their values in x_u. Thus, a tiny sphere about x_u restricted to this subspace, intersected with R^d_{≥0}, gives a region homeomorphic to lk(u, w).

Corollary 4.2. The stratified space lk(u, w) defined in [FS] is equivalent, up to homeomorphism, to the link of u in lk(1, w) = (R^{l(w)}_{≥0} ∩ S^{l(w)−1}_1)/∼. Thus, lk(u, w) as defined in [FS] is a regular CW complex homeomorphic to a ball, as conjectured in [FS].
It would be interesting to better understand how lk(u, w) relates both to shellability of subword complexes (cf. [KM]) and also to the synthetic CW complexes for Bruhat intervals studied in [Re].

Acknowledgments
The author is grateful to Sara Billey, James Davis, Nets Katz, Charles Livingston and Nathan Reading for very helpful discussions. She also thanks Sergey Fomin, David Speyer, and Lauren Williams for insightful questions and comments on earlier versions of the paper which led to substantial improvements in the paper.