A.1 Examples of Morphisms

All examples below are for type \(A_{n}\) with the simple roots \(\alpha_{1},\ldots,\alpha_{n-1}\) ordered as in the following Dynkin diagram. The simple reflections are \(\omega_{i}=s_{\alpha_{i}}\).

In the following examples, the maps between sets of fixed points are given only for one gallery, as the remaining values can be reconstructed by condition (3) of Section 3.3.

Note that the morphisms in Examples 3–6 are not topological either because of the wrong sign or because the triples \((p,e,\phi)\) are not morphisms of \(\widetilde{\mathbf{Seq}}\) (if \(w\ne e\)). The following example shows that a morphism of \(\widetilde{\mathbf{Seq}}\) can fail to be topological because it does not preserve \(T\)-curves.

Example 8 Let \(s=(\omega_{1},\omega_{4},\omega_{3})\), \(s^{\prime}=(\omega_{1},\omega_{4},\omega_{4},\omega_{1},\omega_{3},\omega_{4})\), \(p(1)=1\), \(p(2)=3\), \(p(3)=5\) and \(\phi((e,e,e))=(e,e,e,\omega_{1},e,\omega_{4})\). Then \((p,e,\phi)\) is a morphism of \(\widetilde{\mathbf{Seq}}\) of sign \((1,1,1)\), which is not topological. Indeed, this morphism is not weakly curve preserving: the points \(\gamma\) and \(\delta=f_{i}\gamma\) are connected by a \(T\)-curve on \(\mathrm{BS}(s)\), but the points \(\phi(\gamma)\) and \(\phi(\delta)\) are not connected by a \(T\)-curve on \(\mathrm{BS}(s^{\prime})\), in the following six cases: \(\gamma=(e,e,e)\), \(\delta=(\omega_{1},e,e)\); \(\gamma=(e,e,e)\), \(\delta=(e,\omega_{4},e)\); \(\gamma=(e,e,\omega_{3})\), \(\delta=(\omega_{1},e,\omega_{3})\); \(\gamma=(e,\omega_{4},\omega_{3})\), \(\delta=(\omega_{1},\omega_{4},\omega_{3})\); \(\gamma=(\omega_{1},e,e)\), \(\delta=(\omega_{1},\omega_{4},e)\); \(\gamma=(e,\omega_{4},e)\), \(\delta=(\omega_{1},\omega_{4},e)\).

A.2 Example of a Basis

Let \(s^{\prime}=(\omega_{2},\omega_{1},\omega_{2},\omega_{3},\omega_{2},\omega_{1},\omega_{2},\omega_{1})\) and \(x^{\prime}=\omega_{3}\omega_{2}\omega_{1}\). We have \({\Gamma}_{s^{\prime},x^{\prime}}=\{\delta_{1},\ldots,\delta_{8}\}\), where

$$\begin{array}{@{}rcl@{}} \delta_{1}\!\!&=&\!(e, e, e, \omega_{3}, e, e, \omega_{2}, \omega_{1}),\quad \delta_{2}=(\omega_{2}, e, \omega_{2}, \omega_{3}, e, e, \omega_{2}, \omega_{1}),\!\quad \delta_{3}=(e, \omega_{1}, e, \omega_{3}, e, \omega_{1}, \omega_{2}, \omega_{1}),\\ \delta_{4}\!&=&\!(e, e, e, \omega_{3}, \omega_{2}, e, e, \omega_{1}),\quad \delta_{5}=(\omega_{2}, e, \omega_{2}, \omega_{3}, \omega_{2}, e, e, \omega_{1}),\quad \delta_{6}=(e, e, e, \omega_{3}, \omega_{2}, \omega_{1}, e, e),\\ \delta_{7}\!&=&\!(\omega_{2}, e, \omega_{2}, \omega_{3}, \omega_{2}, \omega_{1}, e, e),\quad \delta_{8}=(e, \omega_{1}, e, \omega_{3}, \omega_{2}, \omega_{1}, \omega_{2}, e). \end{array} $$
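If, as the notation suggests, \({\Gamma}_{s^{\prime},x^{\prime}}\) consists of the galleries of type \(s^{\prime}\) (subwords of \(s^{\prime}\)) whose product is \(x^{\prime}\), this list can be confirmed by brute force. The sketch below is illustrative only: it realizes the \(\omega_{i}\) as adjacent transpositions in the symmetric group \(S_{4}\), which suffices here since only \(\omega_{1},\omega_{2},\omega_{3}\) occur.

```python
from itertools import product

# Brute-force check (illustrative; assumes Gamma_{s',x'} is the set of
# subwords of s' whose ordered product equals x'). We realize the
# reflections in S_4: omega_i is the adjacent transposition (i, i+1),
# permutations written in one-line notation.

def gen(k, n=4):
    p = list(range(1, n + 1))
    p[k - 1], p[k] = p[k], p[k - 1]
    return tuple(p)

def mul(g, h):
    # (g*h)(i) = g(h(i)): h is applied first.
    return tuple(g[h[i] - 1] for i in range(len(g)))

e = (1, 2, 3, 4)
word = [2, 1, 2, 3, 2, 1, 2, 1]            # s' = (w2,w1,w2,w3,w2,w1,w2,w1)
target = mul(mul(gen(3), gen(2)), gen(1))  # x' = w3 w2 w1

galleries = []
for choice in product([0, 1], repeat=len(word)):
    g = e
    for c, k in zip(choice, word):
        if c:
            g = mul(g, gen(k))
    if g == target:
        galleries.append(choice)

print(len(galleries))  # 8, matching Gamma_{s',x'} = {delta_1,...,delta_8}
```

For instance, \(\delta_{1}=(e,e,e,\omega_{3},e,e,\omega_{2},\omega_{1})\) corresponds to the choice tuple \((0,0,0,1,0,0,1,1)\), and the search finds exactly the eight tuples corresponding to \(\delta_{1},\ldots,\delta_{8}\).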

The columns of the following matrix give an \(S\)-basis of \(D{\mathcal X}(s^{\prime},x^{\prime})\):

$$\left( \begin{array}{cccccccc} 1&-\frac1{\alpha_{2}}&-\frac1{\alpha_{1}}&-\frac1{\alpha_{2}+\alpha_{3}}&{\frac {1}{ \alpha_{{2}} \left( \alpha_{{2}}+\alpha_{{3}} \right) }}&0&0&{\frac {1 }{\alpha_{{1}} \left( \alpha_{{2}}+\alpha_{{3}} \right) }} \\\noalign{}0&\frac1{\alpha_{2}}&0&0&-{\frac {1}{\alpha_{{2}} \left( \alpha_{{2}}+\alpha_{{3}} \right) }}&0&0&0\\\noalign{}0 &0&\frac1{\alpha_{1}}&0&0&0&0&-{\frac {1}{ \left( \alpha_{{1}}+\alpha_{{2}}+\alpha_{{3}} \right) \alpha_{{1}}}}\\\noalign{}0&0&0& \frac1{\alpha_{2}+\alpha_{3}}&-{\frac {1}{\alpha_{{2 }} \left( \alpha_{{2}}+\alpha_{{3}} \right) }}& -\frac1{\alpha_{1}+\alpha_{2}+\alpha_{3}}&{\frac {1}{\alpha_{{2}} \left( \alpha_{{1}}+\alpha_{{2}}+\alpha_{{3}} \right) }}&-{\frac {1}{ \left( \alpha_{{2}}+\alpha_{{3}} \right) \left( \alpha_{{1}}+\alpha_ {{2}}+\alpha_{{3}} \right) }}\\\noalign{}0&0&0&0&{\frac {1}{ \alpha_{{2}} \left( \alpha_{{2}}+\alpha_{{3}} \right) }}&0&-{\frac {1} {\alpha_{{2}} \left( \alpha_{{1}}+\alpha_{{2}}+\alpha_{{3}} \right) }} &0\\\noalign{}0&0&0&0&0&\frac1{\alpha_{1}+\alpha_{2}+\alpha_{3}}&-{\frac {1}{\alpha_{{2}} \left( \alpha_{{1} }+\alpha_{{2}}+\alpha_{{3}} \right) }}&-{\frac {1}{ \left( \alpha_{{1} }+\alpha_{{2}}+\alpha_{{3}} \right) \alpha_{{1}}}}\\\noalign{}0 &0&0&0&0&0&{\frac {1}{\alpha_{{2}} \left( \alpha_{{1}}+\alpha_{{2}}+ \alpha_{{3}} \right) }}&0\\\noalign{}0&0&0&0&0&0&0&{\frac {1}{ \left( \alpha_{{1}}+\alpha_{{2}}+\alpha_{{3}} \right) \alpha_{{1}}}} \end {array} \right) $$

Here the columns and rows are labeled by the galleries \(\delta_{1},\ldots,\delta_{8}\), and we skip tensoring the denominators by \(1_{k}\). To see that the columns really give a basis of \(D{\mathcal X}(s^{\prime},x^{\prime})\), we can compute the matrix \(H\) by [5, (4.20)], then invert it, and finally perform some elementary transformations of columns.

We can explain this form of the matrix with the help of the category \(\widetilde{\mathbf{Seq}}_{\mathbf{f}}\) as follows. Consider the following morphisms \((p,w,\phi)\colon(s,x)\to(s^{\prime},x^{\prime})\) of this category:

(1) \(s=(\omega_{2})\), \(x=e\), \(p(1)=1\), \(\phi((e))=\delta_{1}\), \(w=e\).

(2) \(s=(\omega_{2},\omega_{2})\), \(x=e\), \(p(1)=1\), \(p(2)=3\), \(\phi((e,e))=\delta_{1}\), \(w=e\).

(3) \(s=(\omega_{1},\omega_{1})\), \(x=e\), \(p(1)=2\), \(p(2)=6\), \(\phi((e,e))=\delta_{1}\), \(w=e\).

(4) \(s=(\omega_{2},\omega_{2})\), \(x=e\), \(p(1)=5\), \(p(2)=7\), \(\phi((e,e))=\delta_{1}\), \(w=\omega_{3}\).

(5) \(s=(\omega_{2},\omega_{2},\omega_{3},\omega_{3})\), \(x=\omega_{2}\), \(p(1)=1\), \(p(2)=3\), \(p(3)=5\), \(p(4)=7\), \(\phi((e,\omega_{2},e,e))=\delta_{1}\), \(w=e\).

(6) \(s=(\omega_{1},\omega_{1})\), \(x=e\), \(p(1)=6\), \(p(2)=8\), \(\phi((e,e))=\delta_{4}\), \(w=\omega_{2}\omega_{3}\omega_{2}\).

(7) \(s=(\omega_{3},\omega_{3},\omega_{1},\omega_{1})\), \(x=e\), \(p(1)=1\), \(p(2)=3\), \(p(3)=6\), \(p(4)=8\), \(\phi((e,e,e,e))=\delta_{4}\), \(w=\omega_{2}\omega_{3}\omega_{2}\).

(8) \(s=(\omega_{1},\omega_{2},\omega_{1},\omega_{2},\omega_{1})\), \(x=\omega_{2}\), \(p(1)=2\), \(p(2)=5\), \(p(3)=6\), \(p(4)=7\), \(p(5)=8\), \(\phi((e,e,e,\omega_{2},e))=\delta_{1}\), \(w=\omega_{3}\).

By Corollary 5, the space \(\mathrm{BS}(s,x)\) is smooth in each of these cases. Thus we know at least one element of \(D{\mathcal X}(s,x)\): the one given by the inverse Euler classes of the \(T\)-fixed points. Applying Corollary 6, we “inject” these elements into \(D{\mathcal X}(s^{\prime},x^{\prime})\). It is easy to check that we thus obtain exactly the above basis of \(D{\mathcal X}(s^{\prime},x^{\prime})\).

A.3 Stabilization Phenomenon

We are going to prove here that for any two \((p,w)\)-pairs \((\gamma,\delta)\) and \((\rho,\delta)\) with \(\gamma\in{\Gamma}_{s}\) and \(\rho\in{\Gamma}_{t}\), the sequences \(s\) and \(t\) coincide. To this end, we need the following result. In its proof, \(\widehat{x}\) means the omission of the factor \(x\) in a product.

Proof. Note that we actually have

$$ \gamma^{i-1}s_{i}(\gamma^{i-1})^{-1}=\rho^{i-1}t_{i}(\rho^{i-1})^{-1}. $$

(42)

We proceed by induction on \(n\), the case \(n=1\) being trivial.

Let \(n>1\) and suppose that the lemma holds for smaller values of this parameter. We write \(s_{i}=s_{\alpha_{i}}\) and \(t_{i}=s_{\tau_{i}}\) for the corresponding simple roots \(\alpha_{i}\) and \(\tau_{i}\).

We choose the maximal \(k=1,\ldots,n\) such that the following conditions hold:

(i) \(s_{i}=t_{i}\) for \(1\le i\le k\);

(ii) \(\gamma_{i}\ne\rho_{i}\) for \(1\le i<k\);

(iii) \(s_{i}\) and \(s_{j}\) commute for \(1\le i<j\le k\).

As we have already seen that \(s_{1}=t_{1}\), the value \(k=1\) satisfies all three conditions, so our \(k\) is well-defined. Moreover, we may assume that \(k<n\), as otherwise we would be done.

First suppose that \(\gamma_{k}=\rho_{k}\). For \(i=k+1,\ldots,n\), we get

$$\gamma_{1}\cdots\widehat{\gamma}_{k}\cdots\gamma_{i-1}s_{i}\gamma_{i-1}\cdots\widehat{\gamma}_{k}\cdots\gamma_{1}= \rho_{1}\cdots\widehat{\rho}_{k}\cdots\rho_{i-1}t_{i}\rho_{i-1}\cdots\widehat{\rho}_{k}\cdots\rho_{1}. $$

by (42) and (iii). These equalities, together with (42) for \(i=1,\ldots,k-1\), imply \(s_{i}=t_{i}\) for \(i\in[1,n]\setminus\{k\}\) by the inductive hypothesis. Moreover, \(s_{k}=t_{k}\) by (i).

Now suppose that \(\gamma_{k}\ne\rho_{k}\). By (ii) and this assumption, we have \(\gamma_{i}\rho_{i}=s_{i}\) for any \(i=1,\ldots,k\). Thus (42) for \(i=k+1\) can be written as follows:

$$ s_{1}s_{2}{\cdots} s_{k}s_{k + 1}s_{k}{\cdots} s_{2}s_{1}=t_{k + 1}. $$

(43)

By (iii), we can write this equality in terms of roots as follows:

$$ \alpha_{k + 1}-\langle\alpha_{k + 1},\alpha_{1}\rangle\alpha_{1}\pm\cdots\pm\langle\alpha_{k + 1},\alpha_{k}\rangle\alpha_{k}=\pm\tau_{k + 1}. $$

(44)
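The identity behind (44) is that conjugating \(s_{k+1}\) by the commuting product \(s_{1}{\cdots} s_{k}\) gives the reflection in the root \(s_{1}{\cdots} s_{k}(\alpha_{k+1})\). As a sanity check (not part of the proof), one concrete instance can be verified in \(S_{5}\), where \(\omega_{i}\) is the transposition \((i,i+1)\) and the reflection in \(\alpha_{i}+\cdots+\alpha_{j}\) is the transposition \((i,j+1)\); the choice of rank and of the commuting pair \(\omega_{1},\omega_{3}\) below is purely illustrative.

```python
# One concrete instance of the conjugation identity behind (44),
# realized in S_5: omega_i is the transposition (i, i+1), and the
# reflection in alpha_i + ... + alpha_j is the transposition (i, j+1).
# Illustration only; the choice of rank is an assumption.

def transposition(i, j, n=5):
    p = list(range(1, n + 1))
    p[i - 1], p[j - 1] = p[j - 1], p[i - 1]
    return tuple(p)

def mul(g, h):
    # (g*h)(x) = g(h(x)): h is applied first.
    return tuple(g[h[x] - 1] for x in range(len(g)))

w1 = transposition(1, 2)  # s_{alpha_1}
w3 = transposition(3, 4)  # s_{alpha_3}
w4 = transposition(4, 5)  # s_{alpha_4}

# omega_1 and omega_3 commute, as in condition (iii).
assert mul(w1, w3) == mul(w3, w1)

# Conjugating omega_4 by omega_1*omega_3 reflects in
# (s_1 s_3)(alpha_4) = alpha_3 + alpha_4, i.e. the transposition (3, 5):
conj = mul(mul(mul(mul(w1, w3), w4), w3), w1)
assert conj == transposition(3, 5)
print("conjugation identity verified in this instance")
```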

If \(\alpha_{k+1}=\alpha_{j}\) for some \(j=1,\ldots,k\), then \(s_{k+1}=s_{j}\). This element commutes with all the elements \(s_{1},\ldots,s_{k}\). By (43), we get \(s_{k+1}=t_{k+1}\). Hence conditions (i)–(iii) hold for \(k+1\) instead of \(k\), which contradicts the maximality of \(k\).

We can suppose now that \(\alpha_{k+1}\ne\alpha_{j}\) for \(j=1,\ldots,k\). It follows from (44) that \(\alpha_{k+1}=\tau_{k+1}\) and \(s_{k+1}=t_{k+1}\). As \(k\) is maximal satisfying conditions (i)–(iii), the reflection \(s_{k+1}\) does not commute with some \(s_{j}\), where \(j\le k\). Hence \(\langle\alpha_{k+1},\alpha_{j}\rangle\ne 0\). For (44) to hold, there must exist one more occurrence of \(\alpha_{j}\) among \(\alpha_{1},\ldots,\alpha_{k}\), that is, \(\alpha_{j}=\alpha_{q}\), where without loss of generality \(1\le j<q\le k\).

Let \(i=q+1,\ldots,n\). By (42) and (iii), we get

$$s_{1}{\cdots} s_{q}\gamma_{q + 1}{\cdots} \gamma_{i-1} s_{i} \gamma_{i-1}\cdots\gamma_{q + 1}s_{q}{\cdots} s_{1} =\rho_{q + 1}\cdots\rho_{i-1} t_{i} \rho_{i-1}\cdots\rho_{q + 1}. $$

As \(s_{q}=s_{j}\), we get again by (iii) that

$$s_{1}\cdots\widehat{s}_{j}{\cdots} s_{q-1}\gamma_{q + 1}{\cdots} \gamma_{i-1} s_{i} \gamma_{i-1}\cdots\gamma_{q + 1}s_{q-1}\cdots\widehat{s}_{j}{\cdots} s_{1} =\rho_{q + 1}\cdots\rho_{i-1} t_{i} \rho_{i-1}\cdots\rho_{q + 1}. $$

Applying (iii) once more, we get

$$ \gamma_{1}\cdots\widehat{\gamma}_{j}\cdots\widehat{\gamma}_{q}\cdots\gamma_{i-1} s_{i}\gamma_{i-1}\cdots\widehat{\gamma}_{q}\cdots\widehat{\gamma}_{j}\cdots\gamma_{1}= \rho_{1}\cdots\widehat{\rho}_{j}\cdots\widehat{\rho}_{q}\cdots\rho_{i-1} t_{i}\rho_{i-1}\cdots\widehat{\rho}_{q}\cdots\widehat{\rho}_{j}\cdots\rho_{1}. $$

(45)

Now let \(i=j+1,\ldots,q-1\). By (iii) and (i), we get

$$\gamma_{1}\cdots\widehat{\gamma}_{j}\cdots\gamma_{i-1} s_{i}\gamma_{i-1}\cdots\widehat{\gamma}_{j}\cdots\gamma_{1}= \rho_{1}\cdots\widehat{\rho}_{j}\cdots\rho_{i-1} t_{i}\rho_{i-1}\cdots\widehat{\rho}_{j}\cdots\rho_{1}. $$

These equalities, together with equalities (45) for \(i=q+1,\ldots,n\) and equalities (42) for \(i=1,\ldots,j-1\), imply \(s_{i}=t_{i}\) for \(i\in[1,n]\setminus\{j,q\}\) by the inductive hypothesis. Moreover, \(s_{j}=t_{j}\) and \(s_{q}=t_{q}\) by (i). □