Gradual transitivity in orthogonality spaces of finite rank

An orthogonality space is a set together with a symmetric and irreflexive binary relation. Any linear space equipped with a reflexive and anisotropic inner product provides an example: the set of one-dimensional subspaces together with the usual orthogonality relation is an orthogonality space. We present simple conditions to characterise the orthogonality spaces that arise in this way from finite-dimensional Hermitian spaces. Moreover, we investigate the consequences of the hypothesis that an orthogonality space allows gradual transitions between any pair of its elements. More precisely, given elements e and f, we require a homomorphism from a divisible subgroup of the circle group to the automorphism group of the orthogonality space to exist such that one of the automorphisms maps e to f, and any of the automorphisms leaves the elements orthogonal to e and f fixed. We show that our hypothesis leads us to positive definite quadratic spaces. By adding a certain simplicity condition, we furthermore find that the field of scalars is Archimedean and hence a subfield of the reals.


Introduction
An orthogonality space is a set endowed with a binary relation that is supposed to be symmetric and irreflexive. The notion was proposed in the 1960s by David Foulis and his collaborators [2,28]. Their motivation may be seen as part of the efforts to characterise the basic model used in quantum physics: the Hilbert space. The strategy consists in reducing the structure of this model to the necessary minimum. Compared to numerous further approaches that have been proposed with a similar motivation [3,4], we may say that Foulis' concept tries to exhaust the limits of abstraction, focusing solely on the relation of orthogonality. The prototypical example of an orthogonality space is the projective Hilbert space together with the usual orthogonality relation. Just one aspect of physical modelling is taken into account in this way: the distinguishability of observation results.
We have dealt with the problem of characterising complex Hilbert spaces as orthogonality spaces in our recent work [25,26]. The idea was to make hypotheses on the existence of certain symmetries. In the infinite-dimensional case, just a few simple assumptions led to success [26], whereas in the finite-dimensional case, the procedure was considerably more involved [25].
In the present paper, we first of all point out a straightforward way of limiting the discussion to inner-product spaces. We deal here with the finite-dimensional case, that is, we assume all orthogonality spaces to have a finite rank. We introduce the notion of linearity and establish that any linear orthogonality space of a finite rank ≥ 4 arises from an (anisotropic) Hermitian space over some skew field.
On this basis, we are furthermore interested in finding conditions implying that the skew field is among the classical ones. However, to determine within our framework the characteristic properties of, say, the field of complex numbers is difficult and we are easily led to the choice of technical, physically poorly motivated hypotheses. Rather than tailoring conditions to the aim of characterising a particular field of scalars, we focus in this work on an aspect whose physical significance is not questionable: we elaborate on the principle of smooth transitions between states. A postulate referring to this aspect might actually be typical for any approach to interpret the quantum physical formalism; cf., e.g., [9]. A first attempt to apply the idea to orthogonality spaces is moreover contained in our note [27]. In the present work, we propose the following hypothesis. Let e and f be distinct elements of an irredundant orthogonality space. Then we suppose that an injective homomorphism from a subgroup of the abelian group of unit complex numbers to the group of automorphisms exists, the action being transitive on the closure of e and f and fixing elements orthogonal to e and f .
The complex Hilbert space does not give rise to an example of the orthogonality spaces considered here, but the real Hilbert space does. The natural means of visualising matters is an n-sphere, which nicely reflects the possibility of getting continuously from any point to any other one by means of a rotation, in a way that anything orthogonal to both is left at rest. As the main result of this contribution, we establish that any linear orthogonality space of finite rank that fulfils the aforementioned hypothesis regarding the existence of automorphisms arises from a positive definite quadratic space. We furthermore subject the orthogonality space to a simplicity condition, according to which there are no non-trivial quotients compatible with the automorphisms in question. We show that the field of scalars is then embeddable into the reals.
The paper is organised as follows. In Sect. 2, we recall the basic notions used in this work and we compile some basic facts on inner-product spaces and the orthogonality spaces arising from them. In Sect. 3, we introduce linear orthogonality spaces; we show that the two simple defining conditions imply that an orthogonality space arises from a Hermitian space over some skew field. In Sect. 4, we formulate the central hypothesis with which we are concerned in this paper, the condition that expresses, in the sense outlined above, the gradual transitivity of the space. We show that, as a consequence, the skew field is commutative, its involution is the identity, and it admits an order. The subsequent Sect. 5 is devoted to the group generated by those automorphisms that occur in our main postulate. In Sect. 6, we finally show that the exclusion of certain quotients of the orthogonal space implies that the ordered field actually embeds into R.

Orthogonality spaces
We investigate in this paper relational structures of the following kind.

Definition 2.1. An orthogonality space is a non-empty set X equipped with a symmetric, irreflexive binary relation ⊥, called the orthogonality relation.
We call n ∈ N the rank of (X, ⊥) if X contains n but not n + 1 mutually orthogonal elements. If X contains n mutually orthogonal elements for any n ∈ N, then we say that X has infinite rank. This definition was proposed by David Foulis; see, e.g., [2,28]. The idea of an abstract orthogonality relation has been taken up by several further authors [1,6,10,14,20,21], although definitions sometimes differ from the one that we use here. It should be noted that the notion of an orthogonality space is very general; in fact, orthogonality spaces are essentially the same as undirected graphs.
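Since an orthogonality space is essentially an undirected graph, small examples can be checked mechanically; the rank is then the clique number of the graph. A minimal sketch (the 4-cycle example is ours, not taken from the paper):

```python
from itertools import combinations

def is_orthogonality_relation(X, perp):
    """perp must be symmetric and irreflexive on X."""
    return all((x, x) not in perp for x in X) and \
           all((y, x) in perp for (x, y) in perp)

def rank(X, perp):
    """The largest number of mutually orthogonal elements, i.e. the
    clique number of the associated graph (brute force, small X only)."""
    for n in range(len(X), 0, -1):
        if any(all((x, y) in perp for x, y in combinations(D, 2))
               for D in combinations(X, n)):
            return n
    return 0

# The 4-cycle 0 - 1 - 2 - 3 - 0, read as an orthogonality relation.
X = [0, 1, 2, 3]
perp = {(0, 1), (1, 2), (2, 3), (3, 0)}
perp |= {(y, x) for (x, y) in perp}

assert is_orthogonality_relation(X, perp)
assert rank(X, perp) == 2  # no triangle: at most two mutually orthogonal elements
```

The brute-force search is exponential, which is harmless for illustrations of this size.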
Orthogonality spaces naturally arise from inner-product spaces. We shall compile the necessary background material; for further information, we may refer, e.g., to [8,18,22].
By a ∗-sfield, we mean a skew field (division ring) K together with an involutorial antiautomorphism ∗: K → K. We denote the centre of K by Z(K) and we let U(K) = {ε ∈ K : εε∗ = 1} be the set of unit elements of K.
Let H be a (left) linear space over the ∗-sfield K. Then a Hermitian form on H is a map (·, ·): H × H → K such that, for any u, v, w ∈ H and α, β ∈ K, we have (αu + βv, w) = α(u, w) + β(v, w) and (u, v) = (v, u)∗. The form is called anisotropic if (u, u) = 0 holds only if u = 0.
By a Hermitian space, we mean a linear space H endowed with an anisotropic Hermitian form. If the ∗-sfield K is commutative and the involution is the identity, then we refer to H as a quadratic space. We moreover recall that a field K is ordered if K is equipped with a linear order such that (i) α ≤ β implies α + γ ≤ β + γ and (ii) α, β ≥ 0 implies αβ ≥ 0. If K can be made into an ordered field, K is called formally real. If K is an ordered field and we have that (u, u) > 0 for any u ∈ H \ {0}, then H is called positive definite.
As usual, we write u ⊥ v for (u, v) = 0, where u, v ∈ H. Applied to subsets of H, the relation ⊥ is understood to hold elementwise. Moreover, we write [u₁, . . . , uₖ] for the subspace spanned by non-zero vectors u₁, . . . , uₖ ∈ H, and we let P(H) = {[u] : u ∈ H \ {0}}. That is, P(H) is the (base set of the) projective space associated with H.
We may now indicate our primary example of orthogonality spaces: for any Hermitian space H, (P(H), ⊥) is an orthogonality space. We call an orthogonality space (X, ⊥) irredundant if there are no distinct elements e, f ∈ X such that {e}⊥ = {f}⊥. We conclude that, given an orthogonality space that is not irredundant, we can easily switch to an irredundant one whose structure can be considered as essentially the same.
Both orthogonality spaces and Hermitian spaces can be dealt with by lattice-theoretic means.
For a subset A of an orthogonality space (X, ⊥), we let A⊥ = {x ∈ X : x ⊥ A}, where it is again understood that the orthogonality relation is applied to subsets of X elementwise. The map P(X) → P(X), A ↦ A⊥⊥ is a closure operator [5]. If A⊥⊥ = A, we say that A is orthoclosed and we denote the set of all orthoclosed subsets of X by C(X, ⊥). We partially order C(X, ⊥) by set-theoretical inclusion and equip C(X, ⊥) with the operation ⊥. In this way, we are led to an ortholattice, from which (X, ⊥) can in certain cases be recovered. Following Roddy [21], we call an orthogonality space point-closed if, for any e ∈ X, {e} is orthoclosed. Note that X is in this case irredundant. We recall moreover that a lattice with 0 is called atomistic if each element is the join of atoms.

Proposition 2.4. C(X, ⊥) is a complete ortholattice. If (X, ⊥) is point-closed, then C(X, ⊥) is moreover atomistic, the atoms being the singleton subsets.

Proof. The collection of closed subsets of a closure space forms a complete lattice and this fact applies to C(X, ⊥). Moreover, A⊥ is clearly a complement of an A ∈ C(X, ⊥) and ⊥: C(X, ⊥) → C(X, ⊥) is order-reversing as well as involutive. This shows the first part.
For any e, f ∈ X, we have {e}⊥⊥ ⊥ {f}⊥⊥ if and only if e ⊥ f. It follows that ({{e}⊥⊥ : e ∈ X}, ⊥) is an orthogonality space and the assignment e ↦ {e}⊥⊥ is orthogonality-preserving. Moreover, if {e}⊥⊥ = {e} holds for any e ∈ X, then C(X, ⊥) is atomistic, the atoms being the singleton subsets. The second part follows as well.
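For a finite orthogonality space, the maps A ↦ A⊥ and A ↦ A⊥⊥ can be computed directly. The following sketch (a toy 4-cycle of our own choosing) also shows that a space need not be point-closed: the singleton {0} is not orthoclosed.

```python
def ortho(A, X, perp):
    """A⊥: the elements of X orthogonal to every element of A."""
    return frozenset(x for x in X if all((x, a) in perp for a in A))

def closure(A, X, perp):
    """The double orthocomplement A⊥⊥."""
    return ortho(ortho(A, X, perp), X, perp)

# The 4-cycle 0 - 1 - 2 - 3 - 0 as an orthogonality space.
X = [0, 1, 2, 3]
perp = {(0, 1), (1, 2), (2, 3), (3, 0)}
perp |= {(y, x) for (x, y) in perp}

A = frozenset({0})
assert ortho(A, X, perp) == frozenset({1, 3})
assert closure(A, X, perp) == frozenset({0, 2})   # {0} is not orthoclosed
# closure-operator properties: extensive and idempotent
assert A <= closure(A, X, perp)
assert closure(closure(A, X, perp), X, perp) == closure(A, X, perp)
```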
An automorphism of (X, ⊥) is a bijection ϕ of X such that, for any x, y ∈ X, x ⊥ y if and only if ϕ(x) ⊥ ϕ(y). We denote the automorphism group of (X, ⊥) by Aut(X, ⊥). Moreover, the group of automorphisms of the ortholattice C(X, ⊥) is denoted by Aut(C(X, ⊥)). The correspondence between an orthogonality space (X, ⊥) and its associated ortholattice C(X, ⊥) extends as follows to automorphisms.

Proposition 2.5. Let ϕ be an automorphism of the orthogonality space (X, ⊥). Then the map C(X, ⊥) → C(X, ⊥), A ↦ ϕ(A), is an automorphism of the ortholattice C(X, ⊥). If (X, ⊥) is point-closed, then, conversely, any automorphism of C(X, ⊥) arises in this way from a unique automorphism of (X, ⊥).
Proof. The first part is clear. If the singleton subsets are orthoclosed, then C(X, ⊥) is atomistic and consequently, every automorphism is induced by a unique orthogonality-preserving permutation of the atoms. The second part follows as well.
For a subset E of a Hermitian space H, we let E⊥ = {v ∈ H : v ⊥ u for all u ∈ E}. We partially order the set L(H) of subspaces of H w.r.t. the set-theoretic inclusion and we endow L(H) with the complementation function ⊥. Then L(H) is a complete ortholattice.
We call an ortholattice irreducible if it is not isomorphic to the direct product of two non-trivial ortholattices. Here, an ortholattice is considered trivial if it consists of a single element.

Theorem 2.6. Let H be a Hermitian space of finite dimension m. Then L(H)
is an irreducible, atomistic, modular ortholattice of length m.
Conversely, let L be an irreducible, atomistic, modular ortholattice of finite length m ≥ 4. Then there is a ∗-sfield K and an m-dimensional Hermitian space H over K such that L is isomorphic to L(H).

A linear operator U of a Hermitian space H is called unitary if U is bijective and preserves the Hermitian form. The group of unitary operators will be denoted by U(H) and its identity by I.
We denote the group of automorphisms of the ortholattice L(H) by Aut(L(H)). A description of Aut(L(H)) is the content of Piron's version of Wigner's Theorem [17,Theorem 3.28]; see also [16]. Note that the subspaces of H are in a natural one-to-one correspondence with the orthoclosed subsets of P (H); we may in fact identify the ortholattices L(H) and C(P (H), ⊥). Furthermore, by Proposition 2.5, we may identify the automorphisms of C(P (H), ⊥) with those of (P (H), ⊥). Based on these facts, we obtain the subsequent reformulation of Piron's theorem, which in the case of a complex Hilbert space is actually a consequence of Uhlhorn's Theorem [24]. We restrict here to a case in which automorphisms of (P (H), ⊥) are guaranteed to be induced by linear operators.

Theorem 2.7. Let H be a Hermitian space of finite dimension ≥ 3. For any unitary operator U, the map

ϕ_U : P(H) → P(H), [u] ↦ [U(u)]    (2)

is an automorphism of (P(H), ⊥). Conversely, let ϕ be an automorphism of (P(H), ⊥) and assume that there …

For a unitary operator U of a Hermitian space H, ϕ_U will denote in the sequel the automorphism of (P(H), ⊥) induced by U according to (2).

The representation by Hermitian spaces
Our first aim is to identify finite-dimensional Hermitian spaces with special orthogonality spaces. In contrast to the procedure in [25], we do not deal already at this stage with symmetries. We rather derive the structure of a Hermitian space on the basis of two first-order conditions.
Throughout the remainder of this paper, (X, ⊥) will always be an irredundant orthogonality space of finite rank. We will call (X, ⊥) linear if the following two conditions are fulfilled:

(L1) For any two distinct elements e, f ∈ X, there is a g ∈ X orthogonal to e such that {e, f}⊥ = {e, g}⊥.

(L2) For any two orthogonal elements e, g ∈ X, there is an f ∈ X distinct from e and g such that {e, g}⊥ = {e, f}⊥.

Condition (L1) says that the collection of elements orthogonal to distinct elements e and f can be specified in such a way that f is replaced with an element orthogonal to e. (L1) can be seen as a version of orthomodularity; indeed, this property is among its consequences. But more is true: atomisticity follows as well, and thus (L1) can be regarded as the key property for the representability of X as a linear space. Condition (L2) can be regarded as a statement complementary to (L1). Indeed, (L2) says that the collection of elements orthogonal to orthogonal elements e and g can be specified in such a way that g is replaced with a third element. We will actually need only the following immediate consequence of (L2): {e, g}⊥⊥, where e ⊥ g, is never a two-element set. As we will see below, a closely related property of (X, ⊥) is its irreducibility.
Moreover, the assignment X → C(X, ⊥), e ↦ {e}, defines an isomorphism between (X, ⊥) and the set of atoms of C(X, ⊥) endowed with the inherited orthogonality relation.

The first part follows; the second part holds by Proposition 2.4.

We call a subset D of X orthogonal if D consists of pairwise orthogonal elements.
Proof. The assertion is trivial if D is empty; let us assume that D is non-empty. As we have assumed X to have finite rank, D is finite.
We observe that f = e_k fulfils the requirement.
The following useful criterion for C(X, ⊥) to be orthomodular is due to Dacey [2]; see also [28,Theorem 35].

Lemma 3.4. C(X, ⊥) is orthomodular if and only if, for any A ∈ C(X, ⊥) and any maximal orthogonal subset D of A, we have A = D⊥⊥.
It follows that, by virtue of condition (L1), we may describe C(X, ⊥) as follows.

Proposition 3.5. Let (X, ⊥) fulfil (L1) and let m be the rank of (X, ⊥). Then C(X, ⊥) is an atomistic, modular ortholattice of length m.

Proof. By Proposition 2.4 and Lemma 3.2, C(X, ⊥) is an atomistic ortholattice. From Lemmas 3.3 and 3.4, it follows that C(X, ⊥) is orthomodular.
As we have assumed X to be of finite rank m, the top element X of C(X, ⊥) is the join of m mutually orthogonal atoms. It follows that C(X, ⊥) has length m.
We claim that C(X, ⊥) fulfils the covering property. Let A ∈ C(X, ⊥) and let e ∈ X be such that e ∉ A. By Lemma 3.4, there is an orthogonal set D such that … Finally, an atomistic ortholattice of finite length fulfilling the covering property is modular [15, Lemma 30.3].
We now turn to the consequences of condition (L 2 ). In the presence of (L 1 ), there are a couple of alternative formulations.
We call (X, ⊥) reducible if X is the disjoint union of non-empty sets A and B such that e ⊥ f for any e ∈ A and f ∈ B; otherwise we call (X, ⊥) irreducible.

Lemma 3.6. Let (X, ⊥) fulfil (L1). Then the following are equivalent:
(1) X fulfils (L2).
(2) For any two orthogonal elements e, f ∈ X, the set {e, f}⊥⊥ contains at least three elements.
(3) X is irreducible.
(4) C(X, ⊥) is irreducible.

Proof. (2) ⇒ (3): Assume that X is reducible. Then X = A ∪ B, where A and B are disjoint non-empty sets such that e ⊥ f for any e ∈ A and f ∈ B. Pick e ∈ A and f ∈ B and let g ∈ {e, f}⊥⊥. We have that either g ∈ A or g ∈ B. In the former case, g ⊥ f and hence g = e. Similarly, in the latter case, we have g = f. We conclude that {e, f}⊥⊥ contains two elements only.
(3) ⇒ (4): Assume that C(X, ⊥) is not irreducible. Then C(X, ⊥) is the direct product of non-trivial ortholattices L₁ and L₂. The atoms of L₁ × L₂ are of the form (p, 0) or (0, q), for an atom p of L₁ or an atom q of L₂, respectively. Furthermore, (a, 0) ⊥ (0, b) for any a ∈ L₁ and b ∈ L₂. We conclude that the set of atoms of C(X, ⊥) can be partitioned into two non-empty subsets such that any element of one set is orthogonal to any of the other one. In view of Lemma 3.2, we conclude that (X, ⊥) is reducible.
We summarise:

Theorem 3.8. Let (X, ⊥) be a linear orthogonality space of finite rank m ≥ 4. Then there is a ∗-sfield K and an m-dimensional Hermitian space H over K such that (X, ⊥) is isomorphic to (P(H), ⊥).

The representation by quadratic spaces
Provided that the rank is finite and at least 4, we have seen that a linear orthogonality space arises from a Hermitian space over some ∗-sfield. Our objective is to investigate the consequences of an additional condition. It will turn out that we can specify the ∗-sfield considerably more precisely, namely, as a (commutative) formally real field.
We shall now make precise the idea to which we refer as the gradual transitivity of the orthogonality space. Given distinct elements e and f, we will require a divisible group of automorphisms to exist such that the group orbit of e is exactly {e, f}⊥⊥ and {e, f}⊥ is kept pointwise fixed.
It seems natural to assume that the group is, at least locally, linearly parametrisable. By the following lemma, the automorphism that maps e to some f ⊥ e actually interchanges e and f . Accordingly, we will postulate that the group is cyclically ordered. In what follows, we write R/2πZ for the additive group of reals modulo {2kπ : k ∈ Z}, which can be identified with the circle group, that is, with the multiplicative group of complex numbers of modulus 1. Moreover, let G be a group of bijections of some set W , and let S ⊆ W . Then we say that G acts on S transitively if S is invariant under G and the action of G restricted to S is transitive. Moreover, we say that G acts on S trivially if, for all g ∈ G, g is the identity on S.
We call an orthoclosed subset of the form {e, f}⊥⊥, where e and f are distinct elements of X, a line. We define the following condition on (X, ⊥).

(R1) For any line L ⊆ X, there is a divisible subgroup C of R/2πZ and an injective homomorphism κ: C → Aut(X, ⊥), t ↦ κ_t, such that
(α) the group {κ_t : t ∈ C} acts on L transitively;
(β) the group {κ_t : t ∈ C} acts on L⊥ trivially.
Our discussion will focus to a large extent on the symmetries of (X, ⊥) that are described in condition (R 1 ). We will use the following terminology. For a line L, let κ : C → Aut(X, ⊥) be as specified in condition (R 1 ). Then we call an automorphism κ t , t ∈ C, a basic circulation in L and we call the subgroup {κ t : t ∈ C} of Aut(X, ⊥) a basic circulation group of L. Note that, by the injectivity requirement in condition (R 1 ), this group is isomorphic to C.
Moreover, we denote by Circ(X, ⊥) the subgroup of Aut(X, ⊥) that is generated by all basic circulations. The automorphisms belonging to Circ(X, ⊥) are called circulations and Circ(X, ⊥) itself is the circulation group.

Example 4.2. Let Rⁿ, for a finite n ≥ 1, be endowed with the usual Euclidean inner product. Then (P(Rⁿ), ⊥) is a linear orthogonality space fulfilling (R1). Indeed, let u, v be an orthonormal basis of a 2-dimensional subspace of Rⁿ. Let C = R/2πZ and let κ_t, t ∈ C, be the rotation in the (oriented) u-v-plane by the angle t and the identity on [u, v]⊥. Then conditions (α) and (β) are obviously fulfilled.
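Example 4.2 can be checked numerically. Below, a rotation by the angle t in the oriented u-v-plane of R⁴ (a sketch using the standard basis; floating-point, so comparisons are approximate) fixes [u, v]⊥ pointwise, moves [u] along the line, and t ↦ κ_t is a homomorphism:

```python
import math

def kappa(t, u, v):
    """Rotation by the angle t in the oriented u-v plane (u, v orthonormal),
    the identity on the orthogonal complement [u, v]⊥."""
    def apply(x):
        cu = sum(a * b for a, b in zip(x, u))   # component along u
        cv = sum(a * b for a, b in zip(x, v))   # component along v
        return [xi + (math.cos(t) - 1) * (cu * ui + cv * vi)
                   + math.sin(t) * (cu * vi - cv * ui)
                for xi, ui, vi in zip(x, u, v)]
    return apply

def close(x, y):
    return all(abs(a - b) < 1e-12 for a, b in zip(x, y))

u, v = [1.0, 0, 0, 0], [0, 1.0, 0, 0]
t, s = 0.8, 1.9

# (β): everything orthogonal to the u-v plane is left fixed
assert close(kappa(t, u, v)([0, 0, 2.0, 3.0]), [0, 0, 2.0, 3.0])
# (α): κ_t moves [u] to [cos t · u + sin t · v] on the line
assert close(kappa(t, u, v)(u), [math.cos(t), math.sin(t), 0, 0])
# homomorphism property: κ_s ∘ κ_t = κ_{s+t}
w = [0.3, -1.2, 0.5, 2.0]
assert close(kappa(s, u, v)(kappa(t, u, v)(w)), kappa(s + t, u, v)(w))
```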
For the general case, the intended effect of condition (R1) is described in the following lemma. For ϕ ∈ Aut(X, ⊥) and n ≥ 1, we let ϕⁿ = ϕ ∘ · · · ∘ ϕ (n factors).

Lemma 4.3. Let e and f be distinct elements of X. Then, for any n ≥ 1, there is a ϕ ∈ Aut(X, ⊥) such that ϕⁿ maps e to f and ϕ is the identity on {e, f}⊥. If moreover e ⊥ f, then ϕⁿ interchanges e and f.

Proof. By (R1), applied to {e, f}⊥⊥, there is a divisible subgroup C of Aut(X, ⊥) that acts transitively on {e, f}⊥⊥ and is the identity on {e, f}⊥. In particular, there is a ψ ∈ C such that ψ(e) = f and, by the divisibility of C, there is for any n ≥ 1 a ϕ ∈ C such that ϕⁿ = ψ. The first part is clear; the additional assertion follows from Lemma 4.1.
Our aim is to investigate the consequences of condition (R1) for a linear orthogonality space. We first mention that (L2), as part of the conditions of linearity, becomes redundant.

Lemma 4.4. Let (X, ⊥) fulfil (L1) and (R1). Then (X, ⊥) also fulfils (L2).
Proof. Let e, f ∈ X be orthogonal. We will show that {e, f } ⊥⊥ contains a third element. The assertion will then follow from Lemma 3.6.
Assume to the contrary that {e, f}⊥⊥ is a two-element set, that is, {e, f}⊥⊥ = {e, f}. Let {κ_t : t ∈ C} be a basic circulation group of the line {e, f}. As the group acts transitively on {e, f}, there is a t ∈ C \ {0} such that κ_t(e) = f. But {e, f} is invariant also under κ_{t/2} and we have κ_{t/2}² = κ_t, an impossible situation: κ_{t/2} either fixes or interchanges e and f, so that its square cannot map e to f.

The transitivity of a linear orthogonality space, which by Lemma 4.3 is a consequence of condition (R 1 ), allows us to subject the representing Hermitian space to an additional useful condition.

Lemma 4.5. Let (X, ⊥) be linear, of rank ≥ 4, and fulfil (R1). Then there is a Hermitian space H such that (X, ⊥) is isomorphic to (P(H), ⊥) and such that each one-dimensional subspace of H contains a unit vector.
Proof. By Theorem 3.8, there is a Hermitian space H such that (X, ⊥) is isomorphic to (P (H), ⊥).
Let u be a non-zero vector of H. We can define a new Hermitian form on H inducing the same orthogonality relation and such that u becomes a unit vector; see, e.g., [13]. By Lemma 4.3 and Theorem 2.7, there is for any non-zero v ∈ H a unitary operator U such that U(u) ∈ [v]. The assertion follows.
For the rest of this section, let H be a Hermitian space over the ∗-sfield K such that H is of finite dimension ≥ 4, each one-dimensional subspace contains a unit vector, and (P(H), ⊥) fulfils (R1). Our aim is to be as specific as possible about the ∗-sfield K.

Lemma 4.6. Let T be a 2-dimensional subspace of H and let {κ_t : t ∈ C} be a basic circulation group of P(T). Then, for each t ∈ C, there is a uniquely determined unitary operator U_t inducing κ_t and being the identity on T⊥. Moreover, the assignment C → U(H), t ↦ U_t, is an injective homomorphism.
Proof. By Theorem 2.7, κ_t is, for each t ∈ C, induced by a unique unitary operator U_t such that U_t|T⊥ is the identity. In particular, κ_0 is the identity on P(H), hence U_0 must be the identity on H. Furthermore, for any s, t ∈ C, U_s U_t induces κ_{s+t} = κ_s κ_t and is the identity on T⊥. The same applies to U_{s+t} and it follows that U_{s+t} = U_s U_t. Finally, the injectivity assertion follows from the fact that, according to (R1), the assignment t ↦ κ_t is already injective.

Lemma 4.7. K is commutative and the involution is the identity. In particular, H is a quadratic space.
Proof. Let T be a two-dimensional subspace of H. Let {κ t : t ∈ C} be a basic circulation group of P (T ) and, in accordance with Lemma 4.6, let the unitary operator U t , for each t ∈ C, induce κ t .
We will identify the operators U t , t ∈ C, with their restriction to T and represent them, w.r.t. a fixed orthonormal basis b 1 , b 2 of T , by 2 × 2-matrices.
Let t ∈ C. Then U_t = (α γ; β δ), where αα∗ + ββ∗ = γγ∗ + δδ∗ = 1 and αγ∗ + βδ∗ = 0. As κ(C) acts transitively on P(T), there is a p ∈ C such that … Because …, we have … We next claim that, for any ξ ∈ K, there is a t ∈ C such that ξ = β⁻¹α, where (α, β) is the first column vector of U_t. Indeed, by the transitivity of κ(C), …; thus the assertion follows.

The orthogonality of the column vectors of the first matrix in (3) implies … and hence (β⁻¹α)∗ = −ε₁β⁻¹αε₂, provided that β ≠ 0. By the previous remark, we conclude ξ∗ = −ε₁ξε₂ for any ξ ∈ K. From the case ξ = 1 we see that … and we conclude that for each t ∈ C there are α, β ∈ K such that … (5)

Let now s ∈ C be such that U_s maps [e₁] to [e₁ + e₂]. Then there is a γ ∈ K such that U_s = (γ −γ; γ γ). Note that 2γγ∗ = 1; in particular, K does not have characteristic 2. Moreover, given any U_t according to (5), we have … This means …
We continue by showing that K can be endowed with an ordering to the effect that the quadratic space H becomes positive definite. We refer to [19, §1] for further information on the topic of fields and orderings.

Proposition 4.8. K is formally real. Moreover, K can be made into an ordered field such that H is positive definite.

Proof. Let S_K = {α₁² + · · · + αₖ² : α₁, . . . , αₖ ∈ K, k ≥ 0} and note that, if K admits an order, then all elements of S_K will be positive. We shall show that S_K ∩ −S_K = {0}; it then follows that S_K can be extended to a positive cone determining an order that makes K into an ordered field; see, e.g., [19, Theorem (1.8)].
Assume to the contrary that S_K ∩ −S_K contains a non-zero element. Then there are α₁, . . . , αₖ ∈ K \ {0}, k ≥ 1, such that α₁² + · · · + αₖ² = 0. We show that there are non-zero vectors v₁, . . . , vₖ such that (vᵢ, vᵢ) = α₁² + · · · + αᵢ², i = 1, . . . , k. Indeed, let u be any unit vector. Then v₁ = α₁u is non-zero and of length α₁². Moreover, let 1 ≤ i < k and assume that vᵢ is non-zero and of length α₁² + · · · + αᵢ². Let u be a unit vector orthogonal to vᵢ. Then v_{i+1} = vᵢ + α_{i+1}u is again non-zero and has length α₁² + · · · + α_{i+1}². We conclude that, in particular, there is a non-zero vector vₖ that has length α₁² + · · · + αₖ² = 0. But this contradicts the anisotropy of the form.

To show also the second assertion, let us fix an order of K and let v ∈ H• = H \ {0}. Then there is a unit vector u ∈ H and a non-zero α ∈ K such that v = αu. It follows that (v, v) = α² > 0.

We summarise what we have shown.

Theorem 4.9. Let (X, ⊥) be a linear orthogonality space of finite rank ≥ 4 that fulfils (R1). Then there is an ordered field K and a positive-definite quadratic space H over K, possessing unit vectors in each one-dimensional subspace, such that (X, ⊥) is isomorphic to (P(H), ⊥).
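The inductive construction in the proof above, producing vectors vᵢ of length α₁² + · · · + αᵢ², can be replayed exactly in the positive definite quadratic space Q⁴ with the standard dot product (the particular scalars αᵢ below are arbitrary sample values):

```python
from fractions import Fraction as F

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# Sample scalars; u_1, ..., u_4 are the standard (orthonormal) basis vectors.
alphas = [F(3), F(1, 2), F(-2), F(5, 3)]

v = [F(0)] * len(alphas)    # v_i is built up coordinate by coordinate:
partial = F(0)              # v_{i+1} = v_i + α_{i+1} u_{i+1}, with u_{i+1} ⊥ v_i
for i, a in enumerate(alphas):
    v[i] = a
    partial += a * a
    # the length (v_i, v_i) equals the partial sum of squares α_1² + ... + α_i²
    assert dot(v, v) == partial

assert dot(v, v) == sum(a * a for a in alphas)
```

Over an ordered field such as Q, the partial sums can of course never vanish; the proof exploits exactly this in reverse.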
We conclude the section with a comment on the formulation of our condition (R 1 ).

Remark 4.10.
For the proof of Theorem 4.9, we have not made use of the divisibility condition in (R 1 ), which hence could be dropped. So far, only Lemma 4.4, which we did not use in the sequel, depended on divisibility.
We think, however, that it is natural to include this property as it well reflects the idea of gradual transitions between pairs of elements of an orthogonality space. Furthermore, omitting divisibility would be especially interesting if C could possibly be finite. But this is not the case. Indeed, the field of scalars K of the representing linear space has characteristic 0 and hence each two-dimensional subspace contains infinitely many one-dimensional subspaces. Hence C is necessarily infinite and thus anyhow "dense" in R/2πZ.

The circulation group
We have established that linear orthogonality spaces of rank at least 4 arise from positive definite quadratic spaces in case condition (R 1 ) is fulfilled. We insert a short discussion of the symmetries that are required to exist as part of (R 1 ).
In this section, H will be a positive definite quadratic space over an ordered field K such that H is of finite dimension ≥ 4, each one-dimensional subspace contains a unit vector, and (P(H), ⊥) fulfils (R1). For further information on quadratic spaces, we may refer, e.g., to [22].
In accordance with the common practice, we call the unitary operators of H from now on orthogonal and we denote the group of orthogonal operators by O(H). Furthermore, with any endomorphism A of H we may associate its determinant det A. For an orthogonal operator U , we have det U ∈ {1, −1} and we call U a rotation if det U = 1. The group of rotations is denoted by SO(H). For a two-dimensional subspace T of H, we call U ∈ SO(H) a basic rotation in T if U | T ⊥ is the identity, and we denote the group of basic rotations in T by SO(T, H).
As should be expected, the basic circulations correspond to the basic rotations.

Proposition 5.1. Let T be a 2-dimensional subspace of H. Then the basic circulation groups of P(T) coincide with {ϕ_U : U ∈ SO(T, H)}. In particular, there is a unique basic circulation group of P(T). Moreover, any two basic circulation groups are isomorphic.
Proof. In accordance with Lemma 4.6, let {U_t : t ∈ C} be the subgroup of O(H) such that C = {ϕ_{U_t} : t ∈ C}. We have to show that {U_t : t ∈ C} coincides with SO(T, H).
As for any t ∈ C we have U_t = (U_{t/2})², it is clear that U_t ∈ SO(T, H). Conversely, let U ∈ SO(T, H). We again fix an orthonormal basis of T and identify the operators in question with the matrix representation of their restriction to T. Then we have U = (α −β; β α) for some α, β ∈ K such that α² + β² = 1.
As C acts transitively on P(T), there is a t ∈ C such that U_t maps [b₁] to [αb₁ + βb₂]. This means that U_t equals either (α −β; β α) or −(α −β; β α). Furthermore, we have U_0 = (1 0; 0 1) and from U_π² = U_0 it follows that U_π = U_0 or U_π = −U_0. Since by the injectivity requirement in (R1) the first possibility cannot apply, we have U_π = −U_0 and hence U_{t+π} = −U_t. The assertion follows and we conclude that C = {ϕ_U : U ∈ SO(T, H)}. By Lemma 4.6, we thus have the isomorphism C → SO(T, H), t ↦ U_t. Moreover, C → C, t ↦ κ_t is an isomorphism, and κ_t = ϕ_{U_t} for any t ∈ C. We conclude that SO(T, H) → C, U ↦ ϕ_U is an isomorphism.
The first part as well as the uniqueness assertion are shown. Finally, any two groups SO(T, H) and SO(T′, H), where T and T′ are 2-dimensional subspaces of H, are isomorphic; hence the final assertion follows as well.
Given a line L in (P (H), ⊥), we can speak, in view of Proposition 5.1, of the basic circulation group of L. We should note however that, in contrast to the statements on uniqueness and isomorphy in Proposition 5.1, the homomorphism from a subgroup C of R/2πZ to a basic circulation group is not uniquely determined. Indeed, the group C may possess an abundance of automorphisms, as is the case, e.g., for C = R/2πZ.
In Proposition 5.1, we have characterised the basic circulation groups as subgroups of SO(H). We may do so also with respect to the orthogonality space itself.

Proposition 5.2. Let L be a line of (P(H), ⊥) and let C be the basic circulation group of L. Then C consists exactly of the automorphisms ϕ of (P(H), ⊥) such that ϕ|L⊥ is the identity and ϕ|L is either the identity or has no fixed point.

Proof. Let C be the basic circulation group of L, and let T be the 2-dimensional subspace of H such that L = P(T).
Let ϕ ∈ C. By Proposition 5.1, ϕ is induced by some U ∈ SO(T, H). Then U|T⊥ is the identity and, w.r.t. an orthonormal basis of T, we have U|T = (α −β; β α), where α, β ∈ K are such that α² + β² = 1. If β = 0, then α = 1 or α = −1 and hence U|T induces the identity on P(T). If β ≠ 0, U|T does not possess any eigenvector and hence U|T induces on P(T) a map without fixed points. Conversely, let ϕ be an automorphism of P(H) such that ϕ|L⊥ is the identity and ϕ|L is either the identity or does not have any fixed point. By Theorem 2.7, ϕ is induced by an orthogonal operator U such that U|T⊥ is the identity.
Then, w.r.t. an orthonormal basis of T, U|T is of the form (α −β; β α) or (α β; β −α), where α² + β² = 1. In the latter case, U|T has the distinct eigenvalues 1 and −1, hence ϕ|L has exactly two fixed points. We conclude that U|T is of the form of the first matrix and hence U ∈ SO(T, H). By Proposition 5.1, ϕ = ϕ_U belongs to C.
It seems finally natural to ask how Circ(P (H), ⊥) is related to SO(H). By Proposition 5.1, we know that Circ(P (H), ⊥) ⊆ {ϕ U : U ∈ SO(H)}: any circulation is induced by a rotation. Under an additional assumption, we can make a more precise statement. We call a field Pythagorean if any sum of two squares is itself a square.
Proposition 5.3. Assume that K is Pythagorean. Then Circ(P(H), ⊥) = {ϕ_U : U ∈ SO(H)}.

Proof. We shall show that SO(H) is generated by the basic rotations. Since Circ(P(H), ⊥) is by definition generated by the basic circulations, the assertion will then follow.
Note first that, for any elements γ, δ ∈ K that are not both 0, there are α, β, ϱ ∈ K such that α² + β² = 1, ϱ ≠ 0, and the matrix (α −β; β α) maps the column vector (γ, δ) to (ϱ, 0). Indeed, as K is Pythagorean, we may choose ϱ ≠ 0 such that ϱ² = γ² + δ², and put α = γ/ϱ and β = −δ/ϱ. We recall that a basic rotation in the plane spanned by two coordinate axes is called a Givens rotation; see, e.g., [23]. We conclude that any matrix in K^{n×n} can be transformed by left multiplication with Givens rotations into row echelon form. When doing so with a matrix representing a rotation, the resulting matrix must be diagonal, an even number of the diagonal entries being −1 and the remaining ones being 1. We conclude that each rotation is the product of basic rotations in 2-dimensional subspaces spanned by the elements of any given basis.
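The reduction described above can be carried out numerically: left-multiplying a rotation matrix by Givens rotations brings it to row echelon and hence diagonal form, with diagonal entries ±1 and an even number of −1. A float-based sketch (the helper names and sample angles are ours):

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def givens(n, i, j, c, s):
    """Basic rotation in the coordinate plane (i, j), identity elsewhere."""
    G = [[float(p == q) for q in range(n)] for p in range(n)]
    G[i][i], G[i][j], G[j][i], G[j][j] = c, -s, s, c
    return G

def reduce_rotation(M):
    """Left-multiply M by Givens rotations until it is upper triangular;
    for a rotation matrix the result is diagonal with entries ±1."""
    n = len(M)
    for k in range(n):
        for j in range(k + 1, n):
            a, b = M[k][k], M[j][k]
            r = math.hypot(a, b)
            if r < 1e-12:
                continue
            M = matmul(givens(n, k, j, a / r, -b / r), M)
    return M

# a 3-dimensional rotation: product of rotations about the z- and x-axes
t, s = 0.7, -1.2
Rz = [[math.cos(t), -math.sin(t), 0], [math.sin(t), math.cos(t), 0], [0, 0, 1]]
Rx = [[1, 0, 0], [0, math.cos(s), -math.sin(s)], [0, math.sin(s), math.cos(s)]]
D = reduce_rotation(matmul(Rz, Rx))

assert all(abs(D[i][j]) < 1e-9 for i in range(3) for j in range(3) if i != j)
diag = [round(D[i][i]) for i in range(3)]
assert all(d in (1, -1) for d in diag) and diag.count(-1) % 2 == 0
```

Inverting the Givens factors then expresses the rotation as a product of basic rotations, as the proof asserts.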

Embedding into Rⁿ
Our final aim is to present a condition with the effect that our orthogonality space arises from a quadratic space over an Archimedean field. In order to exclude the existence of non-zero infinitesimal elements, we shall require that our orthogonality space is, in a certain sense, simple.
An equivalence relation θ on an orthogonality space (X, ⊥) is called a congruence if any two orthogonal elements belong to distinct θ-classes. Obviously, X possesses at least one congruence, the identity relation, which we call trivial. For a congruence θ on X, we can make X/θ into an orthogonality space, called the quotient orthogonality space: for e, f ∈ X, we let e/θ ⊥ f/θ if there are e′ θ e and f′ θ f such that e′ ⊥ f′.
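The definitions can be tried out on a three-element toy space (names invented for the illustration): a and a2 are both orthogonal to b but not to each other, and the congruence gluing a and a2 yields a two-element quotient.

```python
def is_congruence(classes, perp):
    """No θ-class may contain two orthogonal elements."""
    return all((x, y) not in perp for C in classes for x in C for y in C)

def quotient(classes, perp):
    """e/θ ⊥ f/θ iff some representatives are orthogonal."""
    return {(C, D) for C in classes for D in classes
            if any((x, y) in perp for x in C for y in D)}

# a ⊥ b and a2 ⊥ b, but a and a2 are not orthogonal.
perp = {("a", "b"), ("b", "a"), ("a2", "b"), ("b", "a2")}
A, B = frozenset({"a", "a2"}), frozenset({"b"})

assert is_congruence([A, B], perp)
Q = quotient([A, B], perp)
assert (A, B) in Q and (B, A) in Q          # the quotient relation is symmetric
assert (A, A) not in Q and (B, B) not in Q  # ... and irreflexive
```

Note that the congruence condition is exactly what makes the quotient relation irreflexive again.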
Given an automorphism ϕ of (X, ⊥), we call a congruence θ ϕ-invariant if, for any e, f ∈ X, we have that e θ f is equivalent to ϕ(e) θ ϕ(f). If θ is ϕ-invariant for every member ϕ of a subgroup G of Aut(X, ⊥), we say that θ is G-invariant. We define the following condition on (X, ⊥).

(R2) The only Circ(X, ⊥)-invariant congruence on (X, ⊥) is the identity relation.
Example 6.1. Let again Rⁿ, n ≥ 4, be endowed with the usual inner product. By Proposition 5.3, Circ(P(Rⁿ), ⊥) consists exactly of those automorphisms of (P(Rⁿ), ⊥) that are induced by some U ∈ SO(n). Moreover, SO(n) acts primitively on P(Rⁿ), that is, no non-trivial partition of P(Rⁿ) is invariant under SO(n). This means that no non-trivial partition of P(Rⁿ) is invariant under Circ(P(Rⁿ), ⊥). In particular, the only Circ(P(Rⁿ), ⊥)-invariant congruence is the identity relation. We conclude that (P(Rⁿ), ⊥) fulfils (R2).
Let H be a positive definite quadratic space over the ordered field K as in Section 5, that is, we assume that H is of finite dimension ≥ 4, each one-dimensional subspace of H contains a unit vector, and (P(H), ⊥) fulfils (R1).
Following Holland [12], we define
I_K = {α ∈ K : |α| < 1/n for all n ∈ N \ {0}},
M_K = {α ∈ K : 1/n < |α| < n for some n ∈ N \ {0}}
to be the sets of infinitesimal and medial elements of K, respectively. Then I_K is an additive subgroup of K closed under multiplication; M_K is a multiplicative subgroup of K• = K \ {0}; and we have I_K · M_K = I_K and M_K + I_K = M_K.
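Holland's dichotomy can be illustrated in the non-Archimedean ordered field Q(x), ordered so that x is a positive infinitesimal. For simplicity, the sketch below (the coefficient-list representation is ours) only manipulates polynomials, for which infinitesimal means vanishing at 0 and medial means a non-zero constant term:

```python
from fractions import Fraction as F

def padd(p, q):
    n = max(len(p), len(q))
    p, q = p + [F(0)] * (n - len(p)), q + [F(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pmul(p, q):
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def ord0(p):
    """Order of vanishing at 0: index of the first non-zero coefficient."""
    return next((i for i, a in enumerate(p) if a != 0), None)

def infinitesimal(p):   # |p| < 1/n near 0 for every n
    return ord0(p) is None or ord0(p) > 0

def medial(p):          # 1/n < |p| < n near 0 for some n
    return ord0(p) == 0

x = [F(0), F(1)]        # the positive infinitesimal x
m = [F(2), F(1)]        # 2 + x, a medial element

assert infinitesimal(x) and medial(m)
assert infinitesimal(pmul(x, m))              # I_K · M_K ⊆ I_K
assert medial(padd(m, pmul(x, x)))            # M_K + I_K ⊆ M_K
assert infinitesimal(padd(x, pmul(x, x)))     # I_K is closed under addition
```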
We call K Archimedean if the only infinitesimal element is 0. We have that K is Archimedean exactly if all non-zero elements are medial. For the following result, see, e.g., [7,11].

Theorem 6.2. An Archimedean ordered field is isomorphic to a subfield of the ordered field R.
Proposition 6.3. Assume that (P(H), ⊥) fulfils (R2). Then K is Archimedean.

Proof. Let us call a vector v ∈ H medial if (v, v) ∈ M_K and infinitesimal if (v, v) ∈ I_K. For x, y ∈ H•, put [x] ≈ [y] if there are medial vectors v ∈ [x] and w ∈ [y] such that v − w is infinitesimal.

We claim that ≈ is a congruence. Let x, y ∈ H• be such that … We have thus shown that ≈ is a Circ(P(H), ⊥)-invariant congruence on P(H). By condition (R2), ≈ is trivial.
Assume finally that K contains the non-zero infinitesimal element δ. For orthogonal unit vectors u and v, we then have [u] ≈ [u + δv], because u and u + δv are medial vectors whose difference is infinitesimal. It follows that ≈ is non-trivial, a contradiction. We conclude that K must be Archimedean.
Again, we summarise our results.

Theorem 6.4. Let (X, ⊥) be a linear orthogonality space of finite rank ≥ 4 fulfilling (R1) and (R2). Then there is a subfield K of R and a positive definite quadratic space H over K, possessing unit vectors in each one-dimensional subspace, such that (X, ⊥) is isomorphic to (P(H), ⊥).