Homogeneous Hypercomplex Structures I - the compact Lie groups

We introduce a remarkable subset, the "stem", of the set of positive roots of a reduced root system. The stem determines several interesting decompositions of the corresponding reductive Lie algebra. It also gives a nice simple three dimensional subalgebra and a "Cayley transform". In the present paper we apply the above devices to give a complete classification of invariant hypercomplex structures on compact Lie groups.


Introduction
This paper is the first in a series of two, whose purpose is to give a description of compact hypercomplex homogeneous manifolds with a transitive action of a compact group. The classification and proofs are entirely based on the structure theory of reductive Lie groups; it turns out that in the language of roots we get surprisingly clear answers to the natural questions.
So we start with a complex manifold (M, I) with a transitive compact group of biholomorphic automorphisms and look for another invariant complex structure J on M, such that IJ = −JI (we say shortly that J matches I). We call the complex structure I admissible if there exists a matching J.
Our classification problem splits into two:
Problem A: In the class of compact complex homogeneous manifolds (M, I), discern those which are admissible.
Problem B: Given an admissible complex structure I on M, describe the class of all homogeneous hypercomplex structures on M (up to equivalence) of which I is one of the complex structures.
In the present first paper (Section 2) we introduce and discuss a remarkable invariant of reduced root systems, which we call the "stem" of ∆ + . The stem is a certain maximal strongly orthogonal set of roots, which is determined by ∆ up to the action of the Weyl group (see Theorem 2.12).
Also in this paper we use the stem combinatorics to solve Problem A (see Theorem 4.27 and corollaries) and Problem B (see Theorem 4.32) when our homogeneous space is the underlying manifold of a compact Lie group.
The idea to use a highest root to construct homogeneous "quaternionic" spaces goes back to Wolf [21]. A wide class of examples of such structures was given by Spindel et al [15] and Joyce [8] where many ideas of the present paper may be traced in implicit form.
It is well known ( [13], [19]) that each compact even dimensional Lie group carries a homogeneous complex structure. A comprehensive description of the regular homogeneous complex structures on reductive Lie groups (not necessarily compact) in terms of structure theory may be found in Snow [14].
For a compact Lie group our problems are easily reduced to determining the hypercomplex structures on the Lie algebra, which are integrable in the sense that the Nijenhuis tensor vanishes. We show in particular that a compact simple Lie group U admits a left invariant hypercomplex structure if and only if U = SU(2k + 1), k ≥ 1.
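The arithmetic behind the SU(2k + 1) condition is easy to check directly. The following sketch (ours, not from the paper's proof) only verifies the obvious necessary divisibility constraint; the theorem itself of course says much more:

```python
# Arithmetic check: dim SU(n) = n^2 - 1 is divisible by 4 exactly when n is odd.
# (Divisibility by 4 is the obvious necessary condition for a hypercomplex
# structure on the Lie algebra; the paper shows it is not sufficient here.)
def dim_su(n):
    return n * n - 1

for k in range(1, 6):
    n = 2 * k + 1
    assert dim_su(n) % 4 == 0      # n odd: (2k+1)^2 - 1 = 4k(k+1)
for n in range(2, 12, 2):
    assert dim_su(n) % 4 != 0      # n even: n^2 - 1 is odd
print("divisibility check passed")
```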
When our compact Lie algebra u is "nearest to semisimple", a homogeneous complex structure may participate in at most one hypercomplex structure. More precisely: we use the stem to define a Cayley transform of the Lie algebra, which determines completely the hypercomplex structure.
1.1. Conventions and notations. Here we fix notations and recall well known facts, to be used throughout the paper.
We shall denote by u a compact Lie algebra. Then the complexification u C = g = g s ⊕ c is a reductive complex Lie algebra, whose semisimple ideal is g s and whose center is c ∼ = C r . We denote by τ the conjugation of g w.r. to the real form u, so τ is an antilinear involution of g such that u = g τ = u s ⊕ c u . We denote by U s and G s the corresponding simply connected Lie groups, and by U = U s × C u , G = G s × C the corresponding reductive Lie groups (C u is a compact torus).
For X, Y ∈ g, we denote by ⟨X, Y⟩ an ad-invariant symmetric bilinear form whose restriction to the compact real form u is negative definite. We assume that ⟨., .⟩ coincides with the Killing form on the semisimple part g s .
Let h be a τ -stable Cartan subalgebra of g; then h = h s ⊕ c, where h s is a Cartan subalgebra of g s . Let H be the corresponding Cartan subgroup of G. We denote by ∆ the root system of g s w.r. to h s . For α ∈ ∆ we denote by h α the element of h determined by ⟨H, h α ⟩ = α(H) for all H ∈ h, and we denote H α = (2/⟨α, α⟩) h α , g(α) = {X ∈ g | adH(X) = α(H)X for all H ∈ h}.
By Aut(∆) we denote the group of all the elements in GL(h * R ), which leave the set ∆ ⊂ h * s invariant and the center c pointwise fixed.
We have u = g τ = {X ∈ g | τ (X) = X}. As h is τ -invariant, it decomposes accordingly. We now fix a basis Π = {α 1 , . . . , α l } of ∆, which gives us a system of positive roots ∆ + ; we denote by n + , n − the corresponding nilpotent subalgebras and by b + the corresponding Borel subalgebra. The Borel subalgebra b + is a maximal solvable subalgebra of g.
Definition 1.1. A subalgebra a ⊂ g is called h-regular if its normalizer n(a) contains a Cartan subalgebra h of g. A subalgebra a is called regular if it is h-regular for some Cartan subalgebra h.
It is well known that if a is an h-regular subalgebra of g, then we have a corresponding decomposition.

1.2. Complex structures on a compact Lie group. Any left invariant almost complex structure on the manifold U determines (and is determined by) a complex structure I : u → u. The obvious necessary condition for the existence of a complex structure on u is even dimension, and this is the same as even rank.
It is clear that an invariant (hyper)complex structure on U determines, and is determined by, an invariant (hyper)complex structure on the universal covering group Ũ ∼ = U s × R r . We have U = Ũ/Λ, where Λ is some central lattice in Ũ. It is well known that equivalent complex structures on Ũ may project to inequivalent complex structures on U. In this paper we concentrate rather on classifying up to equivalence the invariant hypercomplex structures on Ũ, which is done in terms of data on the Lie algebra u. The dependence on Λ is well understood in the literature.
Let I be any complex structure on u. We extend I to g (and go on to denote the extension by I) by setting I(iX) = iIX; thus on g we have I • τ = τ • I.

Definition 1.2. Let I be a complex structure on u. We denote by m + I , m − I respectively the (1, 0) and (0, 1) components (w.r. to the left invariant almost complex structure I) of the complexified tangent space to U at the unit element.

If I is a complex structure on u we define its Nijenhuis tensor in the usual way. It is often convenient to "complexify" the Nijenhuis tensor by allowing X, Y in the defining formula to vary in g = u C . We denote the complexified Nijenhuis tensor by the same letter.
The following proposition is well known (see e.g. Snow [14]).

Proposition 1.3. The left invariant almost complex structure induced by I on U is a complex structure if and only if any one of the following conditions is satisfied: a) m + I is a subalgebra of g; b) N I ≡ 0.

In this paper, a complex structure on the compact Lie algebra u will be called integrable if it satisfies the conditions from Proposition 1.3.

Definition 1.4. Two complex structures I, I ′ on u will be called equivalent if there exists an automorphism ξ of u such that ξ • I = I ′ • ξ.

Definition 1.5. We shall say that a complex structure I on a Lie algebra u is regular if m + I is a regular subalgebra w.r. to some τ -stable Cartan subalgebra h of u C .
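Proposition 1.3 can be checked numerically on the smallest even-rank example u = u(2). The following sketch is our construction, not from the paper: we take m + spanned by E and our choice T = diag(1, i) (a Borel-type subalgebra of gl(2, C)), and use the standard convention N I (X, Y) = [IX, IY] − [X, Y] − I[IX, Y] − I[X, IY] (sign conventions vary in the literature):

```python
import numpy as np

E = np.array([[0, 1], [0, 0]], complex)
F = np.array([[0, 0], [1, 0]], complex)
T = np.diag([1, 1j])                     # our choice, so that m+ + tau(m+) = gl(2,C)

def tau(X):                              # conjugation w.r. to the compact form u(2)
    return -X.conj().T

m_plus = [E, T]                          # a subalgebra: [E, T] = (i-1)E
m_minus = [tau(E), tau(T)]               # tau(m_plus)

basis = m_plus + m_minus
M = np.array([B.flatten() for B in basis]).T   # change-of-basis matrix for gl(2,C)

def I(X):
    """I = +i on m_plus, -i on m_minus, extended C-linearly to g."""
    coeff = np.linalg.solve(M, X.flatten())
    out = sum(c * 1j * B for c, B in zip(coeff[:2], m_plus))
    out = out + sum(c * (-1j) * B for c, B in zip(coeff[2:], m_minus))
    return out

def brk(X, Y):
    return X @ Y - Y @ X

def N(X, Y):                             # Nijenhuis tensor, standard convention
    return brk(I(X), I(Y)) - brk(X, Y) - I(brk(I(X), Y)) - I(brk(X, I(Y)))

# real basis of u(2); N vanishes because m_plus is a subalgebra (Prop. 1.3 a)
u_basis = [np.diag([1j, 0]), np.diag([0, 1j]), E - F, 1j * (E + F)]
assert all(np.allclose(N(X, Y), 0) for X in u_basis for Y in u_basis)
print("N == 0: the almost complex structure is integrable")
```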
Since U is compact, we may assume that I is a regular complex structure (see Snow [14]). Throughout the paper h will denote a τ -stable Cartan subalgebra in the normalizer of m + I . Let ∆ ⊂ h * be the root system of g w.r. to h. We have

Proposition 1.6. An integrable complex structure I on u determines a system of positive roots ∆ + and a subspace h + = m + I ∩ h ⊂ h, such that m + I = h + ⊕ Σ α∈∆ + g(α).

Proof. From regularity of I we have the decomposition (6).
From this we conclude that Θ contains exactly one of the roots in each couple {α, −α} ⊂ ∆. But m + I is also a subalgebra, so Θ = ∆ + for some basis of ∆ (see e.g. [2], Ch. VIII, Sect. 3, Prop. 7). The lemma is proved.

Remark 1.7. If I is a regular complex structure on a noncompact reductive Lie algebra g 0 , then the subalgebra m + I may have a nontrivial Levi component (see e.g. Snow [14]).
Throughout this paper, given an integrable complex structure I on u = g τ we shall denote the corresponding τ -invariant Cartan subalgebra by h = h I , the subspace by h + = h + I , and so on. When (we believe that) no confusion may arise, we shall omit the subscript I. When we have to refer to this connection between I and the structural data, we shall say briefly that I is a b + complex structure. In other words, a complex structure I on u will be called a b + complex structure iff b + is the normalizer of m + I . It is well known that Adu acts transitively on the set of all Borel subalgebras of g; thus if we fix a Borel subalgebra b + , then any integrable complex structure on u is equivalent to a b + complex structure.
Remark 1.8. It is well known that a compact group U may have a left invariant complex structure I such that the simple factors are not complex submanifolds. Perhaps the best known semisimple example is a Calabi-Eckmann invariant complex structure on SU(2) × SU(2). For examples with even dimensional factors s.

1.3. Left invariant almost hypercomplex structures.
Definition 1.9. A left invariant almost hypercomplex structure on U is a couple of complex structures I, J : u −→ u, which anti-commute i.e. I • J = −J • I. An almost hypercomplex structure will be called a hypercomplex structure if both I, J are integrable.
Two hypercomplex structures (I, J), (I ′ , J ′ ) on u will be called equivalent if there exists an automorphism ξ of u such that ξ • I = I ′ • ξ and ξ • J = J ′ • ξ. We use the same letters to denote the complexifications of the operators I, J, so we have two linear maps I, J : g −→ g.

Definition 1.11. Let u be a compact Lie algebra. Let I be a b + complex structure as described in Subsection 1.2. We shall say that a complex structure J on u matches I if J is integrable and IJ = −JI. We call I admissible if there exists some J which matches I.

Now we introduce more notation, which will be used throughout the paper. We are interested in hypercomplex structures, so from this moment we assume that we have fixed a b + complex structure I on u and use freely the notations from Subsection 1.2 and Proposition 1.6. Further we assume that J is a complex structure on u such that JI = −IJ.

Definition 1.12. We fix a basis U 1 , . . . , U m of h + . For α ∈ ∆ + , q = 1, . . . , m we decompose the elements JE α , JU q and introduce matrices with coefficients a α,β , b t,q , ξ t,α , η α,q respectively: a ∈ M(n×n); b ∈ M(m×m); ξ ∈ M(m×n); η ∈ M(n×m).
Conversely, for any choice of a, b, ξ, η as in (11), the operator given by the matrix J commutes with τ and defines a complex structure J on u, such that J • I = −I • J.
Obviously, many invariant almost hypercomplex structures on U exist iff dim(u) is divisible by 4.
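The model case behind this definition is left multiplication by the quaternions i and j on H ∼ = R 4 , the prototypical pair of anticommuting complex structures; a quick numeric check (our illustration):

```python
import numpy as np

# Left multiplication by the quaternions i and j on H = R^4 (basis 1, i, j, k).
I = np.array([[0, -1, 0,  0],
              [1,  0, 0,  0],
              [0,  0, 0, -1],
              [0,  0, 1,  0]], float)
J = np.array([[0,  0, -1, 0],
              [0,  0,  0, 1],
              [1,  0,  0, 0],
              [0, -1,  0, 0]], float)

E4 = np.eye(4)
assert np.allclose(I @ I, -E4) and np.allclose(J @ J, -E4)
assert np.allclose(I @ J, -J @ I)        # the matching condition IJ = -JI
K = I @ J                                # K = IJ is a third complex structure
assert np.allclose(K @ K, -E4)
print("quaternionic relations hold on R^4")
```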

Stems
Throughout this section ∆ is a reduced root system, Π is a basis of ∆ and ∆ + is the corresponding subset of positive roots.
A subset Γ ⊂ ∆ + will be called a stem of ∆ + iff it satisfies the conditions (12). If Γ is a stem of ∆ + and γ ∈ Γ, we shall call Φ + γ the branch at γ. In the present section we shall prove existence and uniqueness of a stem for a reduced root system ∆ with a fixed basis Π (hence fixed ∆ + ). We also derive the properties of stems needed for applications to the existence and properties of hypercomplex structures. Next we give a list of notations related to a stem.
We shall say that γ ∈ ∆ is a maximal root if γ is the highest root in some ∆ j .
b) The number of distinct roots in the γ-series of α is at most 2. c) ⟨α, γ⟩ = 0 if and only if γ is strongly orthogonal to α.
Proposition 2.7. Let γ ∈ ∆ + be a maximal root and let Π be our fixed basis of ∆. The set Φ + γ ∩ Π has at most two elements.

Without loss of generality we may assume that ∆ is irreducible and γ is the highest root.
To make the induction step we choose a maximal root γ k ∈ ∆ + k to get the sequence γ 1 , . . . , γ k−1 , γ k . Now we define ∆ k+1 ; by (19) and Lemma 2.8 a) it follows that ∆ k+1 is a root system and ∆ + k+1 = ∆ k+1 ∩ ∆ + . By (19) and Lemma 2.8 b) applied to ∆ k and ∆ k+1 , and on the other hand by the induction assumption (18) with i = k − 1, the required relations hold for any α ∈ ∆ + k+1 . In particular we have proved (17). The induction is complete, so we have constructed a stem Γ = {γ 1 , . . . , γ d }.
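The cascade construction above can be sketched in code. This is our illustration for type A n only, where picking a root of maximal height at each step is safe and where, the system being simply laced, ordinary orthogonality already implies strong orthogonality; the function names are ours:

```python
from itertools import combinations

def positive_roots_An(n):
    """Positive roots of A_n, realized as e_i - e_j (i < j) in R^{n+1}."""
    roots = []
    for i, j in combinations(range(n + 1), 2):
        v = [0] * (n + 1)
        v[i], v[j] = 1, -1
        roots.append(tuple(v))
    return roots

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def height(v):                 # height of e_i - e_j is j - i
    return v.index(-1) - v.index(1)

def stem_An(n):
    """Cascade: take the highest root, pass to its (strongly) orthogonal
    complement, repeat.  For A_n the complement stays irreducible, so a
    root of maximal height is always a maximal root."""
    remaining = positive_roots_An(n)
    stem = []
    while remaining:
        gamma = max(remaining, key=height)
        stem.append(gamma)
        remaining = [a for a in remaining if dot(a, gamma) == 0]
    return stem

# For su(2k+1) = A_{2k} the cascade yields d = k stem roots, so 2d = rank,
# matching the existence criterion proved later in the paper.
for n in range(2, 9):
    assert len(stem_An(n)) == (n + 1) // 2
    assert all(dot(a, b) == 0 for a, b in combinations(stem_An(n), 2))
print(stem_An(4))   # two strongly orthogonal roots: e1 - e5 and e2 - e4
```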
From the proof of the last proposition we get some improvements of Proposition 2.6 and Lemma 2.8.

Proof. c) Easy induction using Lemma 2.8 b). a) Because of c), we may apply (17) and Proposition 2.6 a) to ∆ + k . b) Now we apply (17) and Proposition 2.6 b) to ∆ + k .

Now we can prove

Theorem 2.12 (Existence and uniqueness). Let ∆ be a reduced root system, let Π be a basis and let ∆ + be the corresponding set of positive roots. There exists exactly one stem of ∆ + .
Let Γ ′ be any stem of ∆ + . We have to prove that Γ = Γ ′ . It is obviously sufficient to prove Γ ⊂ Γ ′ .
Example 2.13. The root system ∆ = D 4 is irreducible, and fixing ∆ + we determine a highest root γ 1 , while the last three roots of the stem are all maximal and may come in any order.
From the construction in Proposition 2.9 we obtain a natural ordering of the stem Γ: there is a sequence ∆ 1 ⊃ ∆ 2 ⊃ · · · ⊃ ∆ d , which gives the indexation Γ = {γ 1 , . . . , γ d }. The ordering is substantially partial. As Example 2.13 shows, each time when ∆ k is not irreducible we have to choose γ k+1 among the maximal roots of ∆ + k . We shall give now the formal definition. First we have

Proposition 2.15. Let ∆ be a reduced root system, let Π be a basis and let ∆ + be the corresponding set of positive roots. Let Γ be the stem of ∆ + . For each γ ∈ Γ, there exists a unique irreducible closed subsystem Θ γ with the properties a), b), c) below.

Proof. We look at the proof of Proposition 2.9. If γ = γ k in the construction there, then γ is a maximal root in the reduced root system ∆ k , which means exactly that γ is the highest root of exactly one irreducible component of ∆ k , which we denote by Θ γ . The check of the properties a), b) and c) is immediate.

Definition 2.16. Let ∆ be a reduced root system, let Π be a basis and let ∆ + be the corresponding set of positive roots. Let Γ be the stem of ∆ + and let γ, δ ∈ Γ.
In the following text, each time when we use an indexation of Γ we shall assume that it is compatible with the order ≺; that is, when we write Γ = {γ 1 , . . . , γ d } we assume that the indices respect ≺. We illustrate the importance of the order in Γ by the following useful statement.

Proof. All statements are direct consequences of the construction in Proposition 2.9, the properties in Corollary 2.11 and Theorem 2.12.
The following corollary and remark are easy to verify and will be used freely in the rest of the paper.
Also from the construction in Proposition 2.11 and Proposition 2.4, c) we get Corollary 2.20. The stem Γ is a maximal strongly orthogonal subset of ∆ + .
Proof. Let γ p , γ q ∈ Γ, p < q; then by construction γ q ∈ ∆ p+1 ⊂ γ ⊥ p , and γ p is long in ∆ p , whence we may apply Proposition 2.4 c) to obtain strong orthogonality. From the definition of stem (formula (12)) it follows that no root may be strongly orthogonal to all γ ∈ Γ.
Remark 2.21. A stem Γ is a strongly orthogonal subset of ∆ + with the maximal number of elements, that is, the number of elements of any strongly orthogonal subset Θ ⊂ ∆ is less than or equal to the number of elements of Γ. This fact is easy to prove and also easy to check by comparing the list of stems of irreducible root systems in Section 3.3 with the list of maximal strongly orthogonal subsets of irreducible root systems in [1]. We shall not use it in this paper.
It makes sense to notice that the converse is not true in general. For example, when ∆ = A n (see Example 3.21 below), there are many different maximal strongly orthogonal subsets of ∆ + , one of them is the stem. Each of the others is the stem for some other choice of Weyl chamber.
On the other hand, if ∆ = C n (see Example 3.24) then the stem is the set of all long roots in ∆ + . It is the unique strongly orthogonal subset of ∆ + with maximal number of elements. In this case one and the same set Γ is the stem of ∆ + for n! different choices of the positive Weyl chamber. However the stem Γ and the partial order ≺ in it (see Definition 2.16) determine ∆ + completely. The same holds in general.
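A quick computational check of the C n statement (our sketch; names are ours): the long roots 2e i are pairwise strongly orthogonal, since neither their sums nor their differences are roots.

```python
from itertools import combinations

def positive_roots_Cn(n):
    """Positive roots of C_n in R^n: e_i +/- e_j (i < j) and the long roots 2e_i."""
    roots = []
    for i, j in combinations(range(n), 2):
        for sj in (1, -1):
            v = [0] * n
            v[i], v[j] = 1, sj
            roots.append(tuple(v))
    for i in range(n):
        v = [0] * n
        v[i] = 2
        roots.append(tuple(v))
    return roots

def strongly_orthogonal(a, b, root_set):
    s = tuple(x + y for x, y in zip(a, b))
    d = tuple(x - y for x, y in zip(a, b))
    return (s not in root_set and d not in root_set
            and tuple(-x for x in d) not in root_set)

n = 4
pos = positive_roots_Cn(n)
roots = set(pos) | {tuple(-x for x in r) for r in pos}
long_roots = [r for r in pos if sum(x * x for x in r) == 4]
# the stem of C_n is exactly the set of n long roots 2e_1, ..., 2e_n:
assert len(long_roots) == n
assert all(strongly_orthogonal(a, b, roots)
           for a, b in combinations(long_roots, 2))
print("C_%d: stem = %d long roots, pairwise strongly orthogonal" % (n, n))
```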
Theorem 2.22. Let ∆ be a reduced root system, let ∆ + be a system of positive roots, let Γ be the stem of ∆ + and let ≺ be the order in Γ (see Definition 2.16). Then the couple (Γ, ≺) determines ∆ + . Hence the Weyl group W acts simply transitively on the set of couples (Γ, ≺).
2.2. The stem subalgebra. We introduce notation for the Lie algebra entities which correspond to the root system combinatorics of the preceding subsection. So now u is a compact Lie algebra, g = u C is a reductive Lie algebra, h is a τ -invariant Cartan subalgebra of g and ∆ is the root system of g w. r. to h. We fix a basis Π of ∆ so we have a fixed ∆ + , a corresponding Borel subalgebra b + etc. We shall always denote by Γ = {γ 1 , . . . , γ d } the stem of ∆ + .
Definition 2.24. Let Γ be the stem of ∆ + . We denote We shall call the subalgebra f defined above the stem subalgebra.
The corresponding subgroup of G s will be denoted by F and will be called the stem subgroup.
If Γ = {γ 1 , . . . , γ d }, in order to simplify notation we shall sometimes use abbreviated notation. In the language of reductive Lie algebras, the stem decomposition (12) gives a decomposition of n + into two-step nilpotent subalgebras.
Proof. By Corollary 2.11, a), all brackets in heis γ vanish except The direct sum decomposition (of vector spaces of course) follows readily from (12).
Proof. Proposition 2.5, c) implies that under our assumptions one has p = 0 in formula (2), whence the first equality.
The second equality follows from the fact that α = s γ (α) = β and the formula proved in ( [6], ch III). Now we return to the stem subalgebra. Because Γ is strongly orthogonal, we have a decomposition f u = su 1 (2) ⊕ · · · ⊕ su d (2) into commuting subalgebras. Now we shall introduce convenient bases for f u .
It is clear that the three dimensional simple subalgebra sl Γ (2, C) ⊂ g is generated by the semisimple element H Γ = −2iW Γ ∈ h and the nilpotent elements E Γ , E −Γ .
Remark 2.29. In the formulas of this section we shall sometimes suppress the dependence on the torus parameter ρ, i.e. we shall write X γ instead of X γ (ρ) or x instead of x(ρ) etc. We hope that no confusion for the reader comes from this. In any case we remark that the subalgebras sl γ (2, C) and hence the subalgebras su γ (2) = sl γ (2, C) ∩ u do not depend on ρ.
For the interpretation of the results of Section 4 it will be convenient to have our computations and theorems carried out in the presence of ρ (see also Remark 2.37).
Next we interpret the important Corollary 2.17 in Lie algebra language.
Proposition 2.30. If γ ∈ Γ, then the subspace V γ is a representation of the stem subalgebra f under ad. We denote it by r γ : f −→ sl(V γ ). We denote by the same letter the corresponding representation r γ : F u −→ SU(V u γ ). a) If γ, δ ∈ Γ , then the restriction of r γ to sl δ (2) may be nontrivial only if γ δ. Moreover b) If γ = δ, then V + γ and V − γ are invariant under the ad representation of sl δ (2); c) The action of sl γ (2) on V γ decomposes into 2-dimensional irreducible components: We shall need several explicit formulas, describing the action of one-parameter subgroups of the stem subgroup F in the Ad representation.
Strong orthogonality of Γ immediately implies that if γ, δ ∈ Γ and s, t ∈ R, then the corresponding one-parameter subgroups commute.

Proof. The three formulas follow by induction from the following computation. Because for γ ∈ Γ, H ∈ o we have γ(H) = 0, there is an obvious extension to o.

We note also:

Proof. Follows from the trivial exp(tadX γ )(X γ ) = X γ and the formula for Y γ in Proposition 2.32.
If we stay in the root system ∆, the point of this subsection is the fact that for any choice of ∆ + , hence of Γ, the product s γ 1 • · · · • s γ d is the opposition element in the Weyl group of ∆. However for our purposes we need to make explicit choice of a representative of the coset s γ 1 • · · · • s γ d in the exact sequence (1).
We recall that we denote by the same letter an automorphism ψ ∈ N(h) ⊂ Aut(g), its action on h as an element of the Weyl group, and the conjugate action on h * given by ψ(α)(H) = α(ψ −1 (H)).
In particular from the third formula of Proposition 2.32 and the last remark we see that for each γ we have φ γ [ρ] ∈ N u (h) .
Proof. The properties of Γ from Corollaries 2.17 and 2.20 give even more precise formulas, valid for each k, j = 1, . . . , d, k ≠ j. The proposition is proved.

From Proposition 2.35 with t = π we obtain

Definition 2.41. Let θ be the contragredience automorphism of g w.r. to h (see (4)), and let φ ∈ W be the opposition automorphism of g w.r. to h (see Definition 2.36). We denote the resulting involution by ⋆, and we denote by the same symbol the adjoint involution ⋆ ∈ Aut(h * ).
The trichotomy b) is Proposition 2.7 in the case when γ is a maximal root of ∆ + . To prove it for any γ ∈ Γ we just have to apply Proposition 2.7 to the closed root subsystem Θ γ (see Proposition 2.15), where γ is the highest root.
The proposition will be proved if we show that for any ζ ∈ Φ k and j > k we have n α (ζ) = n α (s γ j ζ). The last equation follows obviously from the fact that for j > k we have n α (γ j ) = 0. Indeed, by definition (see Proposition 2.9) of γ j as maximal root of ∆ j we know that n λ (γ j ) = 0 only for λ ∈ Π ∩ ∆ j (see (29)). By the definition of stem α ∈ ∆ j .
2.5. The Cayley transform. We define an automorphism which is a square root of the opposition involution φ from the previous subsection; we use freely all notation introduced there.

Definition 2.47. For p = 1, . . . , d we denote X p = X γp (ρ), and we let c p , c be the corresponding automorphisms.

Remark 2.48. By Remark 2.31 we conclude that for p = 1, . . . , d we have c p • τ = τ • c p , so all c p and c are automorphisms of u. Also from Remark 2.31 it follows that for i, j = 1, . . . , d we have c i • c j = c j • c i , whence the definition of the automorphism c does not depend on the order of the factors, and that is why we may define it as the exponential of one element, namely (π/2) adX Γ . Obviously for each k = 1, . . . , d we have c 2 k = φ k (see Definition 2.36), whence c 2 = φ, i.e. c is a square root of the opposition involution.
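A toy model of this Cayley transform can be computed in sl(2). The normalization below, X = (E − F)/2, is our choice for illustration (the paper's X γ (ρ) carries a torus parameter ρ); with it, c = exp((π/2) ad X) moves the Cartan direction into the stem sl(2), and c squared acts as −1 on the Cartan subalgebra, the opposition involution of A 1 :

```python
import numpy as np

E = np.array([[0.0, 1.0], [0.0, 0.0]])
F = np.array([[0.0, 0.0], [1.0, 0.0]])
H = np.diag([1.0, -1.0])

# c = exp((pi/2) ad X) = Ad(g) with g = exp((pi/4)(E - F)), a rotation matrix,
# since exp(t(E - F)) = [[cos t, sin t], [-sin t, cos t]].
t = np.pi / 4
g = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])

def c(A):
    return g @ A @ np.linalg.inv(g)

# c moves the Cartan direction H out of h, into the direction E + F:
assert np.allclose(c(H), -(E + F))
# and c^2 acts as -1 on the Cartan subalgebra: the opposition involution of
# A_1 (compare c^2 = phi in Remark 2.48).
assert np.allclose(c(c(H)), -H)
print("c^2 = opposition involution on h, checked in sl(2)")
```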
We shall need an explicit description of the action of c on f ⊕ o.
Proof. The first equality obviously follows from strong orthogonality of Γ. The second and third formulas follow directly from Proposition 2.32.
Writing for the sake of symmetry c x for the Cayley transform defined at the beginning of this subsection, we have (see Proposition 2.49) that the elements c 2 x and c 2 y represent the opposition involution w.r. to the Cartan subalgebra h I , and c 2 w and c 2 y represent the opposition involution w.r. to the Cartan subalgebra x C ⊕ o, etc.
In order to prove the statement in this remark there is no need for new computations: actually we know that putting iρ in the place of ρ changes X γ to Y γ and takes Y γ to −X γ in all formulas of this section. The corresponding statements about c w are very easy to check.

Existence of a hypercomplex structure
Now we use the root combinatorics of the stem to find sufficient conditions for admissibility of a b + complex structure on u. We present our candidate for a match to I. By definition J is equivalent to I, so J is an integrable c(b + ) complex structure on u. We obviously have m + J = c(m + I ). Proposition 2.49 and formula (31) imply h J = c(h I ) = y C ⊕ o.
In this section we give a necessary and sufficient condition for IJ = −JI.

Remark 3.2. From the decomposition u = V u ⊕ f u ⊕ o u we see that dim(u) is divisible by 4 if and only if 3d + dim(o u ) is divisible by 4. So in the following we shall always assume (sometimes implicitly) that dim(o u ) = d + 2p, where p is some even integer.
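For u = su(2k + 1) (root system A 2k ) this bookkeeping can be verified directly. The dimension counts below are our reconstruction, assuming dim V u = 2(|∆ + | − d), dim f u = 3d and dim o u = rank − d (i.e. p = 0 in the notation above):

```python
# Consistency check of the decomposition u = V_u + f_u + o_u for u = su(2k+1),
# root system A_{2k}: |Delta+| = k(2k+1), d = k stem roots, rank = 2k.
for k in range(1, 8):
    n_pos = k * (2 * k + 1)        # number of positive roots of A_{2k}
    d = k                          # stem size
    rank = 2 * k
    dim_V = 2 * (n_pos - d)        # two real dimensions per branch root
    dim_f = 3 * d                  # d commuting copies of su(2)
    dim_o = rank - d               # assumed: complement of w in the Cartan part
    assert dim_V + dim_f + dim_o == (2 * k + 1) ** 2 - 1   # = dim su(2k+1)
    assert (dim_f + dim_o) % 4 == 0                        # 3d + dim(o_u) = 4k
print("decomposition dimensions consistent for su(2k+1), k = 1..7")
```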
From Proposition 2.30 it follows trivially that V γ is c-stable.
3.1. The structure J on V. First we are going to prove that J(V + γ ) = V − γ , whence IJ = −JI holds on V without any further conditions. We begin with

Proof. If α ∈ Φ + γ , then by Corollary 2.14 we have the required relations. The proposition is proved.

From Proposition 3.4 it follows that
Denoting as usual K = IJ, we obtain for each V ∈ V γ the corresponding action of K.

3.2. The complex structure J on f ⊕ o. As explained in Subsection 1.2 we have some freedom in defining a b + complex structure I on the Cartan subalgebra h. While IX γ = Y γ is fixed by the convention that IX = iX (IX = −iX) on n + (n − ), we have substantial freedom in choosing the elements IW γ ∈ h u . In the end, it turns out that the necessary and sufficient condition for admissibility of I is a condition on I(w).
Definition 3.8. Let I be a b + complex structure on u. For each γ ∈ Γ we denote: We call the subalgebra e = e I = f u + z I the extended stem subalgebra.
First we compute the operator J = I c on w.

Proof. By Proposition 2.49 we compute
Proof. a) =⇒ b). By (31) and Proposition 2.49 we have the needed formulas. We use the definition of I and Proposition 3.9 to compute further, so by (31) we have Z γ ∈ o.
We collect the above results in the following.

Proof. The first three formulas obviously follow from the definition of I, Z γ and J = I c (see Proposition 3.9). The last formula above was proved in Proposition 3.10 to be equivalent to z ⊂ o.
We also have, obviously, the analogous relations. We are now ready to prove Proposition 3.13.

Remark 3.14. It is easy to see that the condition I(w) = o u is equivalent to 2d = rank(u), which implies that dim(u) is divisible by 4 (see Remark 3.2). When 2d = rank(g), any complex structure I on h u with I(w) = o u extends in an obvious way to a b + complex structure on u. Thus by Proposition 3.13, if U is a compact Lie group such that 2d = rank(u), then U carries a left invariant hypercomplex structure.
In order to state the sufficient condition in the general case we need some more notation.
Definition 3.15. Let Γ be the stem of ∆ + and let z ⊂ o. We denote We have Proof. If H ∈ j , then H = A + B, A ∈ j + , B ∈ j − . Thus IH = iA − iB ∈ j. For the second equality, note that u is also invariant under I. So a) is proved.
Because j ⊂ o, for H ∈ j (31) implies c −1 H = H, then by a) of this proposition we have Ic −1 H = IH ∈ j and again by (31) we have cIH = IH. Thus, item b) is proved.
We have the following important

Remark 3.18. Note that the extended stem subalgebra e (see Definition 3.8) is closed under the action of I, I c . The corresponding subgroup E u may not be a closed subgroup of U. If E u is a closed subgroup, which is an arithmetic condition on the Z γ (vacuously fulfilled when u is nearest to semisimple), then E u is a hypercomplex submanifold of U.
Obviously also e C = f + ⊕ f − ⊕ v is always a subalgebra of g invariant under the action of I, J ( the complexified extended stem subalgebra ).
The subspace V ⊕ v = n + ⊕ n − ⊕ v (and V u ⊕ v u ) is also invariant under the action of I, J, but is not obliged to be a subalgebra.

Theorem 3.19. Let u be a compact Lie algebra whose dimension is divisible by 4, and let I be a b + complex structure on u. If z = I(w) ⊂ o u , then I is admissible.
Proof. We have a decomposition (of real vector spaces). Let S 1 , . . . , S p be a basis of j + . Define T k = τ(S k ), k = 1, . . . , p; then T 1 , . . . , T p is a basis of j − . Let b be any p × p complex matrix such that b b̄ = −1. Then we may define a complex structure B on j u . Obviously (I, B) define a quaternionic structure on the vector space j u , whence we may decompose j = j + B ⊕ j − B into the i and −i eigenspaces of B respectively. Now we may define a matching complex structure J on u. Obviously we have IJ + JI = 0. To show that J is integrable we note that J is a regular complex structure w.r. to the Cartan subalgebra y C ⊕ o (see (31)). More explicitly, from (31), (36) and Corollary 3.11 we have the explicit formulas, and define K = IB on j. Obviously K is regular w.r. to the Cartan subalgebra x C ⊕ o.
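The matrix b with b b̄ = −1 used in the proof can be exhibited explicitly; here is a sketch (our construction, using the standard symplectic block matrix; such a b exists precisely when p is even, in line with Remark 3.2):

```python
import numpy as np

def make_b(p):
    """A p x p matrix b with b * conj(b) = -1; possible only for even p,
    since |det(b)|^2 = (-1)^p forces p to be even."""
    assert p % 2 == 0, "b with b*conj(b) = -1 exists only for even p"
    half = p // 2
    J = np.zeros((p, p))
    J[:half, half:] = np.eye(half)
    J[half:, :half] = -np.eye(half)
    return J          # real, so b * conj(b) = b^2 = -identity

for p in (2, 4, 6):
    b = make_b(p)
    assert np.allclose(b @ b.conj(), -np.eye(p))
print("b with b * conj(b) = -1 constructed for even p")
```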
If we perceive a hypercomplex structure on U as a representation of SU(2) on u, which splits into real 4-dimensional irreducible components, then the hypercomplex structures constructed in this section do not depend on ρ.

3.3. Examples: the simple groups. In this subsection we present the stems of the irreducible reduced root systems with some comments.
Example 3.24. The root system C d , u s = sp(d).

Now we move to

Case 2. The root system ∆ 2 = E 7 . We have d = 7.
The highest root is γ 2 = f = e 8 − e 7 . Further we have the roots with ǫ(i) odd. To go on, move to Example 3.22, Case 1.

The hypercomplex structures
In this section we prove that, up to equivalence, the hypercomplex structures described in the preceding section are all the hypercomplex structures on u. So we assume that I is any admissible b + complex structure on u and J is an integrable complex structure on u matching I.
In this section we use freely the conventions and notations of Subsections 1.2 and 1.3. In particular we use the direct decompositions introduced there. When a is a direct summand in one of these decompositions and we write pr a : g −→ a, we always mean the projection along the complementary components in the corresponding formula. Obviously the basis of Definition 1.12 is well adapted to such practices.
We work with the "complexified" Nijenhuis tensor, i.e. we extend N(X, Y ) to g by complex linearity.
Proof. We decompose the element JN J (U q , E α ) ∈ g in the basis of Definition 1.12 using formula (11). Our purpose is to compute the coefficient at E −β . From integrability of J we have the displayed expansion; for β ∈ ∆ + the coefficient at E −β must vanish, whence the proposition.
Proof. By the assumption, the RHS in formula (41) is 0. On the other hand, the functional α + β is real on h R and nonzero, so we may obviously choose q such that (α + β)(U q ) ≠ 0.

Proof. By Corollary 4.2, for all ν ∈ ∆ + we have a ν,γ = 0.
Proof. Under the given condition obviously N γ,−β = 0 = N γ,−α . We choose q so that γ(U q ) ≠ 0 and apply the formula in Corollary 4.4 twice.
Before going on with the Nijenhuis tensor we introduce some convenient notation. Let γ ∈ ∆ + , JE γ ∈ h. From the definition below and α(τ(H)) = −α(H) we get the corresponding reality relations. Now we may compute

Proposition 4.7. Let I be an admissible complex structure and let J match I. Let γ ∈ ∆ + , JE γ ∈ h. Then |γ(U γ )| = 1, γ(IH γ ) = 0; (44)

Proof. Integrability gives the first equation of the system (46). For the present computation we denote a = γ(U γ ). Now we apply I to the last expression to get the second equation of the system. First we use (46) to compute; because H γ , IH γ are linearly independent, integrability implies (44). Now using (44) we solve the system (46) to get (45).
Remark 4.8. At first glance formula (45) contains something like a vicious circle: we determine V γ = JE γ using a circle parameter γ(V γ ) on the RHS.
As we shall prove further, JE γ ∈ h iff γ ∈ Γ (see Theorem 4.13). Actually, given an admissible b + complex structure I and a matching J, we have proved the corresponding formula (see Definition 3.15 for P γ , Q γ ). The important point here is that any matching complex structure J sends the stem nilpotent E γ ∈ f + to Q γ ∈ h + multiplied by a complex number of norm 1; thus we recover the parameters ρ γ from Section 2. We use this further to identify the Cayley transform which produces J (see Definition 4.20 and further).

Proposition 4.9. Let α, β, γ ∈ ∆ + , JE γ ∈ h. Then the stated relation holds.

Proof. By integrability of J we have the displayed expression, where A ∈ b + . The statement of the proposition comes from equating to zero the coefficient at E −β in the last expression.
Corollary 4.14. The matrix a is antisymmetric.
Proof. Follows from Corollary 4.15, the fact that J is bijective, and 2dim(h − ) = rank(g).
From here on, we assume (often implicitly) that rank(g) ≥ 2d.
Proof. The only simple group with rank(g) = 2d is SL(2n + 1, C) (see Subsection 3.3). On the other hand, the existence of a hypercomplex structure for our U follows from Remark 3.14.
Now we are ready to determine the complex structure J on V (see Definition 2.24).
Proposition 4.18. Let I be an admissible complex structure and let J match I. If γ ∈ Γ, α ∈ Φ^+_γ, then (51) holds.

Proof. We denote β = µ(α) = −s_γ(α). From Theorem 4.13 we have the relations used below. In the following computation we keep explicit only those terms which have a nontrivial projection to h.

Formula (51) obviously implies:

Corollary 4.19. Let I be an admissible b^+ complex structure and let J_1, J_2 be two complex structures matching I.
Given an admissible b^+ complex structure I, Proposition 4.18 determines the action of a matching J on the invariant subspace V. The result is so clean that it gives a more precise description of the matching Cayley structure I_c than we achieved in subsection 3.

4.3. The action of J on the extended stem subalgebra.

Now we return to the notation of Section 2. We show that if I is any admissible complex structure on u and if J matches I, then J = I_c (as in Section 2) for a certain value of the torus parameter ρ, namely:

Definition 4.20. Let γ ∈ Γ. We denote:

The first equality in formula (44) gives |ρ_γ| = 1, whence we may use all the entities from Definitions 2.28 and 3.8.
In particular, for any γ ∈ Γ, from (45) we get:

Proposition 4.21. Let I be an admissible complex structure and let J match I. Then for any γ ∈ Γ we have (54).

Proof. The first and second equalities in (54) come from the definition of a b^+ complex structure I. The third and fourth come by solving the system (53) for W_γ, Z_γ.

Corollary 3.11 was proved under the assumption that z ⊂ o_u, which is also the sufficient condition of our general existence Theorem 3.19. We prove next that the condition z ⊂ o_u is also necessary for admissibility of I. We begin with:

Proposition 4.23. Let I be an admissible complex structure and let J match I. If γ, δ ∈ Γ, γ ≠ δ, then γ(JE_δ) = 0.
We have a useful consequence of Proposition 4.24.

We have chosen to express the necessary and sufficient condition for admissibility of I in the most classical terms, using only the notion of the stem.

4.4. The nearest to semisimple.

In the previous subsection we solved Problem A from the introduction of this paper. Now we proceed to Problem B; that is, we assume that I is an admissible b^+ complex structure on u and describe all complex structures J matching I.
By Proposition 4.21 we know that on the extended stem subalgebra f_u ⊕ z, any J matching I coincides with the structure I_c for some ρ (see Definition 3.1).
So we go on to determine the remaining coefficients of the matrix of J (see Definition 1.12). Now that we assume z ⊂ o u , we shall use the notations of Definition 3.15.
We assume rank(g) = 2d + 2p, where p is a nonnegative even integer. In the first place we have the vectors P_γ = W_γ − iZ_γ, Q_γ = W_γ + iZ_γ, which form bases of the subspaces v^+ and v^− respectively.
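The role of P_γ, Q_γ as bases of v^+ and v^− can be seen in a toy model. The action J(W) = Z, J(Z) = −W below is an assumption of this sketch (a stand-in complex structure on the real span of W_γ, Z_γ), not a formula from the text:

```python
import numpy as np

W = np.array([1.0, 0.0])    # stand-in for W_gamma
Z = np.array([0.0, 1.0])    # stand-in for Z_gamma
# Assumed model complex structure: J W = Z, J Z = -W (matrix in the (W, Z) basis).
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

P = W - 1j * Z              # analogue of P_gamma
Q = W + 1j * Z              # analogue of Q_gamma

assert np.allclose(J @ P, 1j * P)     # P spans the +i eigenspace (v^+)
assert np.allclose(J @ Q, -1j * Q)    # Q spans the -i eigenspace (v^-)
assert np.allclose(np.conj(P), Q)     # conjugation swaps v^+ and v^-
```

So P and Q diagonalize the model J with eigenvalues ±i, and complex conjugation interchanges the two eigenspaces, mirroring the roles of v^+ and v^− here.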
The following theorem improves the constructive Proposition 3.13 in particular.
Theorem 4.29. Let 2d = rank(g) and let I be an admissible b^+ complex structure. Then there is exactly one (up to the choice of ρ) matching complex structure J. For γ ∈ Γ and α ∈ Φ^+_γ, the operator J (with ρ_γ = 1 for each γ ∈ Γ) is given by:

Proof. The first and the second equalities come from Proposition 4.21. The third is the result of Proposition 4.18.
We give several equivalent forms of the admissibility condition. Recall that in Subsection 2.3 we introduced and studied a representative φ = exp(π ad X_Γ) of the opposition involution.

Proof. In any case φ(n^+) = n^−, so a) is equivalent to b).
It is trivial that c) is equivalent to b) (imitate the proof of Proposition 1.10). Now assume that I is admissible. Then for any W ∈ w we have IW ∈ o, whence by Corollary 2.40, φ(W − iIW) = −W − iIW = −τ(W − iIW), and condition b) holds.
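The behavior of φ = exp(π ad X_Γ) used in this proof can be checked numerically in the smallest case g = sl(2, C). The normalization X_Γ = (E − F)/2 is an assumption of this illustration:

```python
import numpy as np
from scipy.linalg import expm

# ad X in the ordered basis (E, H, F) of sl(2, C), for X = (E - F)/2:
# [X, E] = H/2,  [X, H] = -(E + F),  [X, F] = H/2.
adX = np.array([[0.0, -1.0, 0.0],
                [0.5,  0.0, 0.5],
                [0.0, -1.0, 0.0]])

phi = expm(np.pi * adX)

E, H, F = np.eye(3)                         # coordinate vectors for E, H, F
assert np.allclose(phi @ E, -F)             # phi maps n^+ to n^-
assert np.allclose(phi @ F, -E)
assert np.allclose(phi @ H, -H)             # phi = -1 on the Cartan direction
assert np.allclose(phi @ phi, np.eye(3))    # phi is an involution
```

The check confirms the two facts the proof relies on: φ(n^+) = n^− and φ² = id, with φ acting as −1 on the Cartan direction in this rank-one model.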
Conversely, let condition c) hold. Then for W ∈ w we have φ(IW) = −Iφ(W) = IW, and by Corollary 2.40 we have I(w) ⊂ o, whence I is admissible.

4.5. The classification.

When 2d < rank(g) and I is an admissible b^+ complex structure, we have to determine the action of a matching complex structure J on the subspace j ⊂ o (see Definition 3.15).
We are ready to present our solution of Problem B from the introduction.
Theorem 4.32. Let u be a compact Lie algebra with rank(u) = 2d + 2p, where d is the number of roots in the stem Γ of ∆^+ and p is a nonnegative even integer. Let I be an admissible b^+ complex structure on u. Then any hypercomplex structure extending I may be determined by a complex structure J matching I, so that there exists a p × p complex matrix b, with b b̄ = −1, such that for γ ∈ Γ, α ∈ Φ^+_γ we have the displayed relations, where S_1, ..., S_p is a basis of j^+ and T_k = τ(S_k), k = 1, ..., p.
Proof. We choose ρ_γ = 1 for all γ ∈ Γ. The first equality is in (53). The second is in Proposition 4.18. The third follows from Proposition 4.31.

Theorems 4.27 and 4.32 allow one to study the parameter spaces for the classes of equivalent hypercomplex structures on a compact connected Lie group U. This will be done in a subsequent paper; here we give only two characteristic examples in the nearest to semisimple case, where the equivalence classes of hypercomplex structures are in bijective correspondence with equivalence classes of admissible complex structures. The parameter space of equivalence classes of hypercomplex structures on SU(2d + 1) is Z_2\GL(d, R).
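The constraint b b̄ = −1 in Theorem 4.32 already forces p to be even: taking determinants gives |det b|² = (−1)^p, which is positive only for even p. A hedged sketch of a standard solution (the real symplectic block matrix; the specific choice is an illustration, not the paper's parametrization):

```python
import numpy as np

def standard_b(p):
    """Block matrix [[0, I], [-I, 0]] of size p = 2m; it is real,
    so b * conj(b) = b @ b = -I."""
    assert p % 2 == 0, "b with b*conj(b) = -1 exists only in even size"
    m = p // 2
    I = np.eye(m)
    O = np.zeros((m, m))
    return np.block([[O, I], [-I, O]])

b = standard_b(4)
assert np.allclose(b @ np.conj(b), -np.eye(4))
```

Any other solution differs from this one by the usual change-of-basis freedom, which is the kind of freedom the equivalence classes above quotient out.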