Gaussoids are two-antecedental approximations of Gaussian conditional independence structures

The gaussoid axioms are conditional independence inference rules which characterize regular Gaussian CI structures over a three-element ground set. It is known that no finite set of inference rules completely describes regular Gaussian CI as the ground set grows. In this article we show that the gaussoid axioms logically imply every inference rule of at most two antecedents which is valid for regular Gaussians over any ground set. The proof is accomplished by exhibiting, for each inclusion-minimal gaussoid extension of at most two CI statements, a regular Gaussian realization. Moreover, we prove that all those gaussoids have rational positive-definite realizations inside every $\varepsilon$-ball around the identity matrix. For the proof we introduce the concept of algebraic Gaussians over arbitrary fields and of positive Gaussians over ordered fields and obtain the same two-antecedental completeness of the gaussoid axioms for algebraic and positive Gaussians over all fields of characteristic zero as a byproduct.


Introduction
Conditional independence (CI) is a basic notion in the probabilistic approach to reasoning under uncertainty. CI constraints prescribe statements of the form "given that the value of the factor C is known, the value of A is irrelevant to the value of B", or "having observed C, the outcome of A gives no further information on the outcome of B" on a joint probability distribution of random variables A, B, C, . . . . Such statements allow the incorporation of expert knowledge on independences of the real-world phenomena into a statistical model. The importance of conditional independence in statistical theory has been described in a seminal paper by Dawid [Daw79]. Pearl [Pea88, Chapter 3] further emphasizes the usefulness of CI as a qualitative, as opposed to numeric, measure of independence in artificial intelligence, in that reasoning on CI can be performed logically instead of by summing over and dividing given probabilities. The advantage of using CI inference in the processing and deduction of further (in)dependence statements is that every conclusion which is drawn from a given set of CI assumptions is sound and that their derivation happens in discrete steps, each of which is verifiable by humans based on agreed-upon deduction rules.
The fundamental laws of conditional independence identified by Dawid became the definition of semigraphoids. The semigraphoid properties are universal deduction rules which are valid for all probability distributions [Stu05, Lemma 2.1]. Thus they may immediately be applied to any set of statements about the conditional independences among random variables to derive additional knowledge about the independence structure of the stochastic system. The historical reference on the topic, after Dawid, is [PP85], where the special case of graphoids is introduced and first named in the context of graphical models. See also the historical overview in [Stu19]. By deduction or inference rules, we always mean CI inference formulas (or inference forms) demanding that if all of the antecedent statements a_1, …, a_s are satisfied, then at least one of the consequent statements c_1, …, c_t is satisfied. Which inference formulas are true, i.e., are inference rules, depends on assumptions about the type of probability distribution. The remainder of this article is organized as follows: Section 4 discusses an algebraic relaxation of Gaussian distributions together with their symmetries. Section 5 shows how Gaussian realizability results can be recovered from this relaxation. The proof of Theorem 3.3 is presented in a series of lemmas in Section 6. Section 7 contains further examples and discussion of future work.

Preliminaries
We fix a finite ground set N of size n which labels a vector ξ = (ξ_i)_{i∈N} of random variables. Suppose that this vector follows a multivariate Gaussian distribution with mean vector µ and positive-definite covariance matrix Σ. Its density with respect to the Lebesgue measure on R^N is given by

f(x) = (2π)^{-n/2} (det Σ)^{-1/2} exp( -(x - µ)^T Σ^{-1} (x - µ)/2 ).

For distinct i, j ∈ N and a disjoint subset K ⊆ N, the conditional independence statement ξ_i ⊥⊥ ξ_j | ξ_K asserts, informally, that whenever the outcome of the subvector ξ_K = (ξ_k)_{k∈K} is known, the outcomes of ξ_i and ξ_j are stochastically independent: learning one value provides no additional information on the other. The CI statement above is abbreviated by the symbol (ij|K). In dealing with elements and subsets of the ground set N, we adopt the following notational conventions: (1) an element i ∈ N may be written in place of the singleton subset {i} ⊆ N, and (2) juxtaposition KL of subsets of N abbreviates set union. In particular ijK stands for {i} ∪ {j} ∪ K ⊆ N and ij = ji holds. This does not make the CI statement (ij|K) ambiguous because conditional independence of random vectors is symmetric in i and j. Let A_N := {(ij|K) : ij ∈ $\binom{N}{2}$, K ⊆ N \ ij} denote the set of all CI statements on N. For a Gaussian vector ξ with positive-definite covariance matrix Σ, conditional independence has an algebraic characterization (see [Sul18, Proposition 4.1.9]):

(⊥⊥)  (ij|K) holds for ξ ⇔ det Σ_{iK,jK} = 0,

where Σ_{iK,jK} is the submatrix of Σ with rows iK and columns jK. A submatrix of the form Σ_{K,K} with equal row and column sets is principal. A symmetric matrix Σ is positive-definite if and only if all principal minors det Σ_{K,K}, K ⊆ N, are positive. In the submatrix Σ_{iK,jK}, which is decisive for the CI statement (ij|K), the row and column sets iK and jK differ by only one element each. Such submatrices are not principal but almost-principal. Hence, the true conditional independence statements of a given Gaussian distribution can be determined by evaluating the almost-principal minors of the covariance matrix. The mean µ plays no role and may be assumed to be zero. The CI structure of Σ is the set of all valid CI statements: ⟦Σ⟧ := {(ij|K) ∈ A_N : det Σ_{iK,jK} = 0}.

Example 2.1. The positive-definite matrix

Σ = ( 1    0    1/2
      0    1    1/2
      1/2  1/2  1   )

has Σ_{1,2} = 0 as its only vanishing almost-principal minor. This example shows that the CI structure {(12|)} over ground set N = 123 is realizable by a Gaussian distribution. Determining the CI structure of a given matrix is easy. The reverse "synthesis problem" of deciding whether a given CI structure has a realization by a Gaussian distribution involves intricate algebraic relations between the principal and almost-principal minors of a generic positive-definite matrix. An algebraic proof that a CI structure A is non-realizable always comes with a valid inference rule ϕ for Gaussians which is not satisfied by A, such as in the next example.
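Evaluating ⟦Σ⟧ is a finite computation over the almost-principal minors. The following sketch (illustrative, not from the paper; exact rational arithmetic, ground set 0-indexed so that (01|) stands for (12|)) computes the CI structure of a positive-definite matrix realizing {(12|)}.

```python
from fractions import Fraction
from itertools import chain, combinations

def det(M):
    # Laplace expansion along the first row; exact and fine for tiny matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def ci_structure(S):
    # [[S]] = {(ij|K) : det S_{iK,jK} = 0}, ranging over all almost-principal minors.
    N = range(len(S))
    struct = set()
    for i, j in combinations(N, 2):
        rest = [x for x in N if x not in (i, j)]
        for K in chain.from_iterable(combinations(rest, r) for r in range(len(rest) + 1)):
            sub = [[S[r][c] for c in (j,) + K] for r in (i,) + K]
            if det(sub) == 0:
                struct.add((frozenset({i, j}), frozenset(K)))
    return struct

h = Fraction(1, 2)
Sigma = [[1, 0, h], [0, 1, h], [h, h, 1]]   # positive-definite, Sigma_01 = 0
print(ci_structure(Sigma))                  # only (01|) holds
```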
Example 2.2. Let again N = 123 be a three-element ground set. To check whether the CI structure A = {(12|), (12|3)} is realizable by a Gaussian distribution, write a generic covariance matrix

Σ = ( p  a  b
      a  q  c
      b  c  r )

and consider the equations imposed on Σ by A: det Σ_{1,2} = a = 0 and det Σ_{13,23} = ar − bc = 0. Together these equations imply bc = 0 and therefore b = 0 or c = 0. This proves that the inference rule

(12|) ∧ (12|3) ⇒ (13|) ∨ (23|)

is valid for all Gaussian distributions. Since A does not satisfy this inference rule, it is not realizable by a Gaussian distribution. It is easy to see that the two inclusion-minimal CI structures which extend A and which are realizable by a positive-definite matrix are {(12|), (12|3), (13|), (13|2)} and {(12|), (12|3), (23|), (23|1)}, corresponding to the two alternative conclusions of the inference rule.
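The two minimal extensions named at the end of Example 2.2 can be certified by explicit rational positive-definite matrices. Below is an illustrative check (not from the paper; matrix indices are 0-based, statement labels 1-based as in the example) of the first candidate; exchanging the roles of 2 and 3 certifies the second.

```python
from fractions import Fraction

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

h = Fraction(1, 2)
# Candidate realization of {(12|), (12|3), (13|), (13|2)}.
S = [[1, 0, 0], [0, 1, h], [0, h, 1]]

# All six almost-principal minors of S, keyed by the CI statement they decide.
minors = {
    '(12|)':  S[0][1],
    '(12|3)': det2([[S[0][1], S[0][2]], [S[2][1], S[2][2]]]),
    '(13|)':  S[0][2],
    '(13|2)': det2([[S[0][2], S[0][1]], [S[1][2], S[1][1]]]),
    '(23|)':  S[1][2],
    '(23|1)': det2([[S[1][2], S[1][0]], [S[0][2], S[0][0]]]),
}
# Leading principal minors are 1, 1 and d3, so S is positive-definite when d3 > 0.
d3 = S[1][1] * S[2][2] - S[1][2] * S[2][1]  # det S, since the first row is (1, 0, 0)
print({k: (v == 0) for k, v in minors.items()})
```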
A gaussoid is a subset of A_N which is closed under the gaussoid axioms:

(G.i)   (ij|L) ∧ (ik|jL) ⇔ (ik|L) ∧ (ij|kL),
(G.ii)  (ij|kL) ∧ (ik|jL) ⇔ (ij|L) ∧ (ik|L),
(G.iii) (ij|L) ∧ (ij|kL) ⇒ (ik|L) ∨ (jk|L),

for all distinct i, j, k ∈ N and L ⊆ N \ ijk. The first two formulas decompose into 2 · 2 · 2 = 8 inference forms in total (some of which are redundant due to symmetries in the axioms); the third formula is a single inference form. The axiom (G.i) defines semigraphoids; the "⇒" direction of (G.ii) is known as the intersection axiom. These six inference rules together form the definition of a graphoid. The "⇐" direction of (G.ii), the converse of intersection, is the composition axiom. The final axiom (G.iii) is weak transitivity, a special case of which was proved in Example 2.2. Thus gaussoids are the weakly transitive, compositional graphoids, or the weakly transitive semigaussoids, as coined in [DX10].
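Closure under (G.i)-(G.iii) is a finite check. The sketch below (illustrative, not from the paper) tests a candidate CI structure against every instantiation of the axioms; it confirms that the structure appearing in Example 2.3 is a gaussoid while the antecedent set {(12|), (12|3)} of Example 2.2 is not.

```python
from itertools import chain, combinations, permutations

def ci(i, j, K=()):
    # A CI statement (ij|K), symmetric in i and j.
    return (frozenset({i, j}), frozenset(K))

def is_gaussoid(G, N):
    for i, j, k in permutations(N, 3):
        rest = [x for x in N if x not in (i, j, k)]
        for L in chain.from_iterable(combinations(rest, r) for r in range(len(rest) + 1)):
            a, b = ci(i, j, L), ci(i, k, L)              # (ij|L), (ik|L)
            c, d = ci(i, j, {k, *L}), ci(i, k, {j, *L})  # (ij|kL), (ik|jL)
            if ({a, d} <= G) != ({b, c} <= G):           # (G.i), both directions
                return False
            if ({c, d} <= G) != ({a, b} <= G):           # (G.ii), both directions
                return False
            if {a, c} <= G and not ({b, ci(j, k, L)} & G):  # (G.iii)
                return False
    return True

A = {ci(1, 2), ci(1, 2, (3, 4, 5)), ci(3, 4), ci(3, 4, (2, 5))}
print(is_gaussoid(A, (1, 2, 3, 4, 5)))                     # True
print(is_gaussoid({ci(1, 2), ci(1, 2, (3,))}, (1, 2, 3)))  # False, violates (G.iii)
```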
The gaussoid axioms are valid inference rules for Gaussian distributions on every ground set N by [Mat05, Corollary 1]. They are the "simplest" inference rules for Gaussians in the sense that they are not only necessary but also sufficient for Gaussian realizability over the three-element ground set, which is the smallest non-trivial size. They are not sufficient for more than three variables. The main result of this paper (Corollary 3.4) shows that every valid inference rule which has at most two antecedents is implied by the gaussoid axioms -without restrictions on the finite ground set N .
The symmetries of Gaussian CI structures play a prominent role in the proof of the main result in Section 6. The symmetric group S_N of permutations of N acts on the random vector ξ = (ξ_i)_{i∈N} by relabeling the entries. This immediately translates to an action on CI statements as (ij|K)^π := (π(ij)|π(K)), for π ∈ S_N. This action is extended element-wise to CI structures. The equivalence relation induced by this group action on CI structures is isomorphy. On covariance matrices, the group acts accordingly by simultaneous permutation of the rows and columns or, equivalently, a permutation of the axes of R^N. This is an orthogonal coordinate change and thus preserves positive-definiteness. Another important symmetry is the one induced by duality of CI statements, (ij|K)^* := (ij|N \ ijK), which swaps a conditioning set K with its complement in N \ ij. On covariance matrices, duality corresponds to matrix inversion Σ → Σ^{-1}, so that ⟦Σ^{-1}⟧ = ⟦Σ⟧^*; cf. [LM07, Lemma 1]. Both isomorphy and duality are special cases of a larger symmetry group, the hyperoctahedral group B_N, which is generated by the reflection symmetries of the N-dimensional cube. The combinatorial motivation for considering this group is explained in [BK20], but it is not important for the present work. As an abstract group, B_N equals the semidirect product (Z/2)^N ⋊ S_N, i.e., each of its elements can be uniquely written as a composition of a swap from (Z/2)^N and a permutation from S_N. Each vector in the group of swaps (Z/2)^N is the indicator vector of a subset Z ⊆ N and acts on a CI statement via (ij|K)^Z := (ij | K ⊕ (Z \ ij)), where ⊕ denotes the symmetric difference. Duality is a special case of this action by swapping everything, i.e., Z = N. The second constituent of B_N = (Z/2)^N ⋊ S_N are the permutations S_N, which simply act by isomorphy.
Example 2.3. Consider the CI structure A = {(12|), (12|345), (34|), (34|25)}. This structure satisfies the gaussoid axioms. Let (1 3) be the transposition on N = 12345 which exchanges 1 with 3 and leaves every other element fixed. The images of A under (1 3), under duality and under the swap by Z = 123 are, respectively,

{(23|), (23|145), (14|), (14|25)},
{(12|345), (12|), (34|125), (34|1)},
{(12|3), (12|45), (34|12), (34|15)}.

It is easy to see that the gaussoid axioms (G.i)-(G.iii) are invariant under the hyperoctahedral group. Thus, every CI structure obtained above from A by one of the group actions must be a gaussoid as well.
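The group actions on CI statements are easy to implement and the images of a structure under them can be recomputed mechanically. In the sketch below (illustrative, not from the paper) the swap by Z acts on conditioning sets by symmetric difference with Z \ ij, and duality is the swap with Z = N.

```python
def ci(i, j, K=()):
    return (frozenset({i, j}), frozenset(K))

def act_perm(A, pi):   # isomorphy: relabel every index by the permutation pi (a dict)
    return {(frozenset(pi[x] for x in ij), frozenset(pi[x] for x in K)) for ij, K in A}

def act_swap(A, Z):    # swap by Z: (ij|K) -> (ij | K xor (Z \ ij))
    Z = frozenset(Z)
    return {(ij, K ^ (Z - ij)) for ij, K in A}

def act_dual(A, N):    # duality is the swap with Z = N
    return act_swap(A, N)

N = (1, 2, 3, 4, 5)
A = {ci(1, 2), ci(1, 2, (3, 4, 5)), ci(3, 4), ci(3, 4, (2, 5))}
pi = {1: 3, 2: 2, 3: 1, 4: 4, 5: 5}   # the transposition (1 3)
print(act_perm(A, pi))
print(act_dual(A, N))
print(act_swap(A, (1, 2, 3)))
```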
In contrast to gaussoids, realizable Gaussian CI structures are invariant only under isomorphy and duality, but not under the full hyperoctahedral group. The B_N action can be defined on symmetric matrices but it does not preserve positive-definiteness. This is explained in detail in Section 4. That section also extends the definition of realizability via (⊥⊥) to matrices over ordered fields other than R and, if the field is not ordered, to matrices whose principal minors are non-zero instead of positive, giving rise to, respectively, positively and algebraically realizable gaussoids over specific fields. Algebraically realizable CI structures turn out to be invariant under the hyperoctahedral group, which is a key ingredient in the proof of our main result.

Statement of the main result
On a fixed ground set N, each subset of A_N corresponds to an assignment of truth values to Boolean variables indexed by A_N. Every set of CI structures (a set of truth assignments to the elements of A_N) is the set of satisfying assignments of a Boolean formula in conjunctive normal form (CNF) whose variables are in A_N. We restrict attention to the clauses of these CNFs. Each clause ϕ is a disjunction of negated and non-negated statements from A_N and can be written in inference form

ϕ : a_1 ∧ ⋯ ∧ a_s ⇒ c_1 ∨ ⋯ ∨ c_t,

with antecedents a_1, …, a_s and consequents c_1, …, c_t taken from A_N. This is the general form of the CI inference axioms which are the subject of this article.
Definition 3.1. Let A ⊇ A* be sets of CI structures over a fixed ground set N. A is a k-antecedental approximation of A* if every inference form ϕ with at most k antecedents and variables in A_N which is valid for A* is also valid for A.
We imagine A to be a simpler set approximating A* from above. Because of the inclusion A* ⊆ A, every inference rule which is valid for A also holds for A*. The definition above concerns a degree k to which the converse holds. In this article, the role of A is played by gaussoids and that of A* by realizable gaussoids, for different notions of realizability. From an axiomatic point of view, one may also say that the axioms for A are k-antecedentally complete for the chosen notion of realizability A*.
Our proof of the two-antecedental approximation property of gaussoids relies on a general principle which was also used in Studený's proof for discrete CI. A minimal A-extension of a CI structure A is a CI structure Ā which is inclusion-minimal with the properties that Ā ⊇ A and Ā ∈ A. In the world of discrete CI and semigraphoids, this minimal extension is unique, because both discretely realizable CI structures and semigraphoids are closed under intersection; for Gaussians and gaussoids uniqueness fails, as seen in Example 2.2.
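For |N| = 3 the failure of uniqueness can be seen by exhaustive search: A_N has six statements, so there are only 64 candidate structures. This illustrative sketch (not from the paper) recovers the two minimal gaussoid extensions of {(12|), (12|3)} from Example 2.2.

```python
from itertools import combinations, permutations

def ci(i, j, K=()):
    return (frozenset({i, j}), frozenset(K))

N = (1, 2, 3)
ALL = [ci(i, j, K) for i, j in combinations(N, 2)
       for K in ((), tuple(x for x in N if x not in (i, j)))]

def is_gaussoid(G):
    for i, j, k in permutations(N, 3):   # L is always empty when |N| = 3
        a, b = ci(i, j), ci(i, k)
        c, d = ci(i, j, (k,)), ci(i, k, (j,))
        if ({a, d} <= G) != ({b, c} <= G):            # (G.i)
            return False
        if ({c, d} <= G) != ({a, b} <= G):            # (G.ii)
            return False
        if {a, c} <= G and not ({b, ci(j, k)} & G):   # (G.iii)
            return False
    return True

L = {ci(1, 2), ci(1, 2, (3,))}
gaussoids = [set(S) for r in range(len(ALL) + 1) for S in combinations(ALL, r)
             if L <= set(S) and is_gaussoid(set(S))]
minimal = [G for G in gaussoids if not any(H < G for H in gaussoids)]
for G in minimal:
    print(sorted((tuple(sorted(ij)), tuple(sorted(K))) for ij, K in G))
```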
Lemma 3.2. Let A ⊇ A * be sets of CI structures over N . Then A is a k-antecedental approximation of A * if every minimal A-extension of every subset of A N of cardinality at most k belongs to A * .
Proof. Let ϕ : L ⇒ M be a valid inference rule for A * with |L| ≤ k. We have to show that ϕ is valid for A. Equivalently, letting A(ϕ) denote the subset of A which satisfies ϕ, we show that A ⊆ A(ϕ).
Consider any A ∈ A. If the antecedents L of ϕ are not all contained in A, then A is vacuously contained in A(ϕ). On the other hand, if A contains L, then it also contains a minimal A-extension L̄ of L. Since |L| ≤ k, the structure L̄ belongs to A* by assumption. Hence L̄ satisfies ϕ, which means that L̄ ∩ M ≠ ∅. Then A, containing L̄, also satisfies ϕ.
We can now state the main result: Theorem 3.3. Over every ground set, every minimal gaussoid extension of at most two CI statements is realizable by a positive-definite matrix with rational entries, which can be picked arbitrarily close to the identity matrix.
There is a notion of algebraic realizability over every field and of positive realizability over every ordered field. In each case, testing realizability of a fixed gaussoid means deciding if a system of polynomial equations, inequations and (for positivity) inequalities with integer coefficients has a solution. Since the rational numbers are the prime field of characteristic zero, finding a positive and rational solution is the hardest problem among all realizabilities over fields of characteristic zero.
The fact that the approximation is valid uniformly over all ground sets is significant. Generally, the realizabilities considered here do not have a finite axiomatization. This means that, as the ground set grows, ever more logically independent inference rules become necessary to cut out the realizable gaussoids. Theorem 3.3 implies that among these new inference rules for larger and larger ground sets, the easiest ones, with up to two antecedents, are all logical consequences of the well-known gaussoid axioms. This is the titular two-antecedental approximation.

The realizability proofs for Theorem 3.3 are composed of two ideas, to be presented in the next two sections, respectively. The first idea is to relax the positive-definiteness requirement and study instead the aforementioned algebraic Gaussians over general fields. The resulting CI structures are still gaussoids and they are closed under the action of the hyperoctahedral group, which allows passing to an easier orbit representative in realizability proofs. The second idea, compensating the previous relaxation, is to study more special realizations, namely rational parametrizations of spaces of matrices. Formally, such a space is represented by a matrix whose entries are elements of the rational function field Q(ε_1, …, ε_p) with infinitesimal variables ε_k. In this space, the algebraically realized gaussoid over Q is determined uniquely by the rational functions for all sufficiently small values of the ε_k. If the parametrized matrices converge to a positive-definite matrix as the ε_k tend to zero, then matrices in the interior of this space are positive-definite for small enough ε_k. In this way, positive realizability of a gaussoid is recovered from an algebraic construction and inspection of a limit. In the proof of the main theorem in Section 6, both techniques are applied to minimal gaussoid extensions of at most two CI statements.
We turn the problem of finding positive-definite realizations around: for one easy hyperoctahedral representative of each gaussoid orbit, we find spaces of realizing matrices which converge to every hyperoctahedral image of the identity matrix. Then, given any gaussoid in the representative's orbit, we apply the inverse group action to the right space of matrices, so that the transformed space approaches the positive-definite identity matrix and realizes the given gaussoid. This yields the desired rational regular Gaussian realizations.

Algebraic Gaussians and the hyperoctahedral group
The gaussoid axioms were derived in [Mat05, Corollary 1] as consequences of an identity among minors of a complex symmetric matrix, where the proof of each instance of an axiom requires certain principal minors of the matrix not to vanish. It follows that the gaussoid axioms hold for the set ⟦Γ⟧ = {(ij|K) ∈ A_N : det Γ_{iK,jK} = 0} of vanishing almost-principal minors of any symmetric matrix Γ, provided that all of its principal minors are non-zero. Such matrices are called principally regular. In this article, positive-definite and principally regular matrices are implicitly symmetric. Moreover, Γ can have entries from any field K because, while Matúš's result is stated for complex matrices only, the special structure of the complex numbers is only invoked in a continuity argument which is circumvented by the assumption of principal regularity.
Principal regularity is not only an obvious substitute for positive-definiteness over arbitrary fields. It is the technical condition which allows the formation of all Schur complements of the matrix; these correspond to conditional distributions in the positive-definite setting. The property is inherited by the inverse matrix, by principal submatrices and by Schur complements, and it is therefore precisely what is required to salvage the theory of minors of regular Gaussians; see [BK20].
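A small exact-arithmetic sketch (illustrative, not from the paper) of these closure properties: it checks principal regularity of a rational matrix and verifies that a one-index Schur complement, out of which all Schur complements can be iterated, is again principally regular.

```python
from fractions import Fraction
from itertools import chain, combinations

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def principally_regular(M):
    # All principal minors det M_{K,K}, for non-empty K, are non-zero.
    N = range(len(M))
    return all(det([[M[r][c] for c in K] for r in K]) != 0
               for K in chain.from_iterable(combinations(N, r) for r in range(1, len(M) + 1)))

def schur(M, a):
    # Schur complement with respect to the single diagonal entry at index a.
    rest = [r for r in range(len(M)) if r != a]
    return [[Fraction(M[r][c]) - Fraction(M[r][a]) * M[a][c] / M[a][a] for c in rest]
            for r in rest]

G = [[4, 0, 1], [0, 1, 1], [1, 1, 9]]    # principally regular over Q
print(principally_regular(G))            # True
print(principally_regular(schur(G, 0)))  # True: the complement is again principally regular
```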
Definition 4.1. A gaussoid G is algebraically realizable (over K) if there is a principally regular matrix Γ over K such that G = ⟦Γ⟧. If in addition K is an ordered field and Γ has only positive principal minors, then G is positively realizable (over K). By slight abuse of language we refer to algebraically realizable gaussoids as well as their realizing matrices as algebraic Gaussians, and similarly for positive Gaussians.
In this terminology, the familiar multivariate Gaussian distributions are positive Gaussians over the real-closed field R. Every algebraic Gaussian is, by Matúš's corollary, a gaussoid. Conversely, given any gaussoid G, its algebraic realization space over a field K is the set of matrices with entries in K that algebraically realize G. It is specified by the vanishing of the almost-principal minors in G, the non-vanishing of the almost-principal minors not in G, and the non-vanishing of all principal minors of a symmetric matrix. The space of these matrices is a constructible set. The positive realization space over an ordered field refines the non-vanishing of principal minors into positivity constraints, resulting in a semialgebraic set.
Remark 4.2. With a fixed notion of realizability, the CI implication problem asks to decide whether a given inference form is valid for the class of realizable CI structures. There is much previous work on CI implication for graphical models and for approximations or special cases of discrete CI; see for instance [VP92, GP93b, BHLS10, NGSV13] as well as [AKKNS20] from the point of view of information theory, and the references therein. For Gaussians over algebraically closed fields, there is an algebraic characterization of CI implication which was observed by Matúš. His result [Mat05, Proposition 1] is stated for the complex numbers, but again the proof works over arbitrary algebraically closed fields, the crucial ingredient being Hilbert's Nullstellensatz. The geometric summary of this characterization is that L ⇒ M holds if and only if, in the affine space of symmetric matrices Γ, the polynomial f_M = ∏_{(ij|K)∈M} det Γ_{iK,jK} vanishes on the constructible set defined by the vanishing of the almost-principal minors indexed by L and the non-vanishing of the principal minors of Γ; on every point of this constructible set defined by L at least one of the minors in M then vanishes, proving the validity of the inference. This set of matrices is obtained as a projection of a variety in a higher-dimensional affine space by a standard construction, and hence the CI implication problem amounts to a radical ideal membership test. The same idea paired with the Real Nullstellensatz yields a characterization of algebraic realizability over real-closed fields, employing the real variety and hence the real radical of the analogously defined ideal. Finally, the Positivstellensatz can be used similarly to characterize the CI implication problem for positive Gaussians over a real-closed field in terms of an ideal membership query on the radical ideal of the preorder associated to the positive realization space of L.
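On Example 2.2 this recipe is small enough to run in a computer algebra system. In the illustrative SymPy sketch below, the consequent product b·c already lies in the ideal generated by the antecedent minors, so not even the radical is needed.

```python
from sympy import groebner, symbols

a, b, c, p, q, r = symbols('a b c p q r')
# Antecedent minors of Example 2.2 for the generic symmetric matrix
# ((p, a, b), (a, q, c), (b, c, r)): det S_{1,2} = a and det S_{13,23} = a*r - b*c.
G = groebner([a, a*r - b*c], a, b, c, p, q, r, order='lex')
# f_M = b*c vanishes wherever both antecedents do; here even by plain ideal
# membership, witnessed by b*c = r*a - (a*r - b*c).
print(G.contains(b*c))   # True
print(G.contains(b))     # False: neither single consequent is forced on its own
```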
The reader is referred to [CLO15] and [Mar08] for background information on geometry over algebraically and, respectively, real-closed fields. These characterizations of CI implication directly yield a procedure for realizability testing. Namely, a CI structure L is realizable if and only if it does not appear as the antecedent set of a non-trivial valid inference rule, the weakest of which, and hence the only one that needs to be tested, is L ⇒ (A N \ L).
Algebraic realizations have two advantages over positive realizations. The first advantage is practical: as outlined in the previous remark, algebraic realizability over algebraically closed fields can be decided with Hilbert's Nullstellensatz and Gröbner bases, whereas refuting positive realizability over a real-closed field requires a Positivstellensatz certificate. Over an ordered field, a positively realizable gaussoid is always algebraically realizable.
The second advantage, paramount to the proof of our main theorem, is that algebraic realizability is preserved under the hyperoctahedral group introduced in Section 2. The hyperoctahedral group action on realizing matrices is a quotient of the group (Z/4)^N ⋊ S_N. This group is, in turn, a discrete subgroup of the SL_2(R)^N action on the Lagrangian Grassmannian; cf. [HS07, BDKS19]. In the semidirect product, every group element can be written as the composition of an element of S_N and one of (Z/4)^N. The permutation part is just an orthogonal coordinate change, permuting rows and columns of the matrix, corresponding to the S_N action on CI structures (almost-principal minors) and merely permuting the set of principal minors. This action changes neither principal regularity nor positive-definiteness, and therefore we focus on the (Z/4)^N part in the remainder of this section.
Our account of the (Z/4)^N symmetry is based on [BDKS19, Section 3], but we give more details about the action on matrices. Each Z/4 factor in the N-fold product making up the group is represented by four 2 × 2 matrices. To each tuple (X_1, …, X_n) ∈ (Z/4)^N we associate four N × N diagonal matrices A, B, C, D such that X_i = ( A_ii  B_ii ; C_ii  D_ii ). The transformed matrix is again principally regular by [BDKS19, Lemma 13], and the following Proposition 4.3 describes its principal and almost-principal minors. To facilitate this description we use a parametrization of this group action: for any subset Z ⊆ N and a tuple of signs δ ∈ {±1}^N, choose the corresponding group element (X_1, …, X_n) and write the action as

S^δ_Z(Γ) := (A + ΓC)^{-1}(B + ΓD).

In expressing minors of S^δ_Z(Γ) in terms of Z, δ and Γ, it becomes necessary to recombine the involved subsets of N. Using the abbreviations AB = A ∪ B and [AB] = A ∩ B as well as A^c for the complement of A in N, any combination of interest can be efficiently written down in "disjunctive normal form"; for example, the symmetric difference is Z ⊕ K = [ZK^c][Z^cK]. One subtlety in the following statement and its proof concerns the signs of minors. In order to have a well-defined sign of the determinant, we employ the following sign convention about the ordering of rows and columns in submatrices. Because the sign is invariant under simultaneous permutations of the rows and columns, the absolute ordering is not important; what matters is which row is matched with which column in any ordering of the row and column sets. Instead of imposing an absolute order on the set N, it will be convenient to pair every k ∈ K with itself, in principal minors with respect to K, and in almost-principal minors (ij|K) to additionally pair i with j.
Proposition 4.3. Let Γ be principally regular over K, and let Z ⊆ N and δ ∈ {±1}^N be arbitrary. Then S^δ_Z(Γ) is again principally regular.
More precisely, we have the following formulas for the principal and almost-principal minors of S^δ_Z(Γ).

Proof. The matrix Γ' := S^δ_Z(Γ) satisfies the matrix equation (A + ΓC)Γ' = B + ΓD, and hence its minors can be computed with a generalized Cramer's rule [GAE02]:

det Γ'_{I,J} = det (A + ΓC)(I, J : B + ΓD) / det(A + ΓC)

for sets I, J ⊆ N of the same size, where X(I, J : Y) denotes the matrix X in which the columns indexed by I are replaced by the columns of Y indexed by J. In this notation, we omit J if it equals I. In addition we use δX to denote the matrix X in which the ith column is scaled by δ_i. By definition of A and C we have A + ΓC = δΓ(Z^c : I_N) and, by Laplace expansion on the unit columns in Z^c, det(A + ΓC) = ∏_{i∈N} δ_i · det Γ_{Z,Z}, which is non-zero by principal regularity. To compute the principal minor for K ⊆ N, the columns indexed by K are replaced according to Cramer's rule; the resulting determinant is, up to the sign ∏_i δ_i, a principal minor of Γ whose index set is computed by the identity [ZK^c][Z^cK] = Z ⊕ K. This proves the principal minor formula. Notice that the Laplace expansions deleted the same rows and columns, so the sign convention is preserved.
For the almost-principal minor given by (ij|K), the same procedure replaces, in A + ΓC, the columns indexed by K and, additionally, the column i by the column j of B + ΓD. Pulling the δ signs out of the determinant, we get the sign ∏_{k≠i,j} δ_k · δ_j² = δ_i δ_j ∏_k δ_k, because the ith column's δ_i was replaced by a jth column's δ_j. After performing again Laplace expansion on the unit vector columns, the content of column i of the remaining matrix depends on whether j ∈ Z or not. If j ∈ Z, then the ith column is just the jth column of Γ, and the remaining minor is revealed to be an almost-principal minor of Γ. It is easy to see that the replacement of column i by column j leaves the rows and columns correctly paired, and it remains to compute the conditioning set as (Z \ ij) ⊕ K. If j ∉ Z, then the ith column contains the negative jth unit vector. Laplace expansion with respect to this column results in the column labeled i and the row labeled j being removed and incurs a sign change which depends on the distance between these columns. By simultaneously reordering rows and columns, we can assume that rows and columns i and j are next to each other. In this case, the sign change is −1, which is compensated by the entry −1 in the eliminated column. The reordering ensures that rows and columns are properly paired after Laplace expansion. The remaining minor of Γ is thus again the almost-principal minor (ij|(Z \ ij) ⊕ K) of Γ.
These formulas describe in particular all the entries of S^δ_Z(Γ) in terms of Γ, Z and δ. Remarkably, the choice of δ has no influence at all on the principal minors and only changes the sign of almost-principal ones. Hence, by identifying in each Z/4 factor the two matrices with opposite signs, we obtain a quotient group isomorphic to (Z/2)^N ⋊ S_N which faithfully implements the hyperoctahedral group on algebraic gaussoids over any field. The realizing matrix is not well-defined in the quotient, but the quotient is conclusive about its positivity over ordered fields, and thus can be used to certify positive realizability of hyperoctahedral images of gaussoids.
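For the special case Z = N (duality), the action reduces to matrix inversion and the predicted behavior of the CI structure, ⟦Γ^{-1}⟧ = ⟦Γ⟧*, can be checked directly; the sketch below (illustrative, not from the paper; 0-indexed ground set) does so with exact rational arithmetic.

```python
from fractions import Fraction
from itertools import chain, combinations

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def ci_structure(S):
    N = range(len(S))
    struct = set()
    for i, j in combinations(N, 2):
        rest = [x for x in N if x not in (i, j)]
        for K in chain.from_iterable(combinations(rest, r) for r in range(len(rest) + 1)):
            if det([[S[r][c] for c in (j,) + K] for r in (i,) + K]) == 0:
                struct.add((frozenset({i, j}), frozenset(K)))
    return struct

def inverse(M):
    # Inverse via the adjugate: inv[i][j] = (-1)^(i+j) * minor(row j, col i) / det M.
    n, d = len(M), det(M)
    return [[(-1) ** (i + j) *
             det([r[:i] + r[i+1:] for k, r in enumerate(M) if k != j]) / d
             for j in range(n)] for i in range(n)]

h = Fraction(1, 2)
Sigma = [[1, 0, h], [0, 1, h], [h, h, 1]]
dual = {(ij, frozenset(range(3)) - ij - K) for ij, K in ci_structure(Sigma)}
print(ci_structure(inverse(Sigma)) == dual)   # True
```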
Remark 4.4. If Γ is a principally regular matrix over a field K and the diagonal entries Γ_ii are squares in K, then the diagonal matrix D with entries 1/√Γ_ii can be formed over K. It is easy to check that DΓD is a principally regular matrix which realizes the same gaussoid as Γ and has a 1-diagonal. This argument allows one to remove degrees of freedom from the generic candidate matrix in realizability tests.
For example, since the complex numbers are closed under square roots, every algebraic Gaussian over C has a realization with unit diagonal. The real numbers are missing √−1, so algebraic realizations of gaussoids over R can only be assumed to have ±1 entries on the diagonal. The restriction to positive realizations restores the expectation of a 1-diagonal. Finally, the rational numbers are missing many square roots, but over Q(√−1) one can at least assume positive squarefree integers on the diagonal of an algebraic realization.
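The normalization of Remark 4.4 is a two-sided diagonal scaling. This illustrative sketch (not from the paper; the example is chosen so that the square roots are rational) confirms the unit diagonal and the unchanged CI structure.

```python
from fractions import Fraction
from itertools import chain, combinations

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def ci_structure(S):
    N = range(len(S))
    struct = set()
    for i, j in combinations(N, 2):
        rest = [x for x in N if x not in (i, j)]
        for K in chain.from_iterable(combinations(rest, r) for r in range(len(rest) + 1)):
            if det([[S[r][c] for c in (j,) + K] for r in (i,) + K]) == 0:
                struct.add((frozenset({i, j}), frozenset(K)))
    return struct

G = [[4, 0, 1], [0, 1, 1], [1, 1, 9]]              # diagonal entries 4, 1, 9 are squares in Q
d = [Fraction(1, 2), Fraction(1), Fraction(1, 3)]  # d[i] = 1/sqrt(G[i][i])
H = [[d[i] * G[i][j] * d[j] for j in range(3)] for i in range(3)]
print([H[i][i] for i in range(3)])         # [1, 1, 1]
print(ci_structure(H) == ci_structure(G))  # True: the gaussoid is unchanged
```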

Infinitesimally perturbed realizations
The theme of this section is perturbing principally regular matrices to find related but more generic CI structures in their neighborhood. By "more generic" we mean that fewer almost-principal minors vanish, which implies smaller gaussoids. Our targets, the minimal gaussoid extensions of at most two CI statements, are evidently among the smallest possible gaussoids. The focus lies on perturbing the least generic matrices from this point of view (modulo the algebraic torus action described in Remark 4.4), which lie in the hyperoctahedral orbit of the identity matrix. Formally, our main tool consists of passing to Gaussians over the (ordered) field of rational functions K(ε_1, …, ε_p), extending K with finitely many (infinitesimal) variables. This technique already appears in [LM07, Theorem 1] as well as [Šim06a, Theorem 3.3] and is on vivid display in [LM07, Table 1], but always without the formalization of the concept of positive or algebraic Gaussians over function fields. By Tarski's Transfer principle [Mar08, Theorem 1.4.2] and the fact that Gaussian realization spaces are described by polynomial constraints with integer coefficients, every gaussoid which is algebraically or positively realizable over an ordered field extension of R is also algebraically or, respectively, positively realizable over R. This principle applies to the constructions in this section, but the field of rational functions is special: if a realization over Q(ε_1, …, ε_p) is found, then plugging in sufficiently small rational numbers for the ε_k yields even a rational realization, which is not promised by the Transfer principle. Therefore this section circumvents the Transfer principle via Lemma 5.1 and emphasizes rational constructions.
Lemma 5.1. Consider the following two situations: (1) K is an infinite field and L = K(x_1, …, x_p) is the field of rational functions in variables x_1, …, x_p over K.
(2) K is an ordered field and L = K(ε_1, …, ε_p) is the ordered field of rational functions in infinitesimals 0 < ε_1 < ⋯ < ε_p over K. In both situations, a gaussoid is realizable over L (algebraically in situation (1), positively in situation (2)) if and only if it is already realizable over K.
Proof. One implication is obvious from the inclusion of fields K ⊆ L. In the other direction, it suffices to show how to adjoin one variable x or one infinitesimal ε, so the proof proceeds by induction on p.
(1) Let K be an infinite field and Γ a principally regular matrix over L = K(x). The CI structure ⟦Γ⟧ is defined by vanishing and non-vanishing constraints on principal and almost-principal minors of Γ. These minors are polynomials in the entries of Γ and therefore rational functions over K. If a rational function f ∈ L is zero in L, then every evaluation f(a) for a ∈ K is zero. Otherwise f has non-zero numerator and denominator, which are univariate polynomials over K. The zeros of the denominator are poles of the evaluation of f and the zeros of the numerator are the zeros of f. Both zero sets are finite. Since K is infinite, one can find a point a ∈ K avoiding simultaneously all the undesirable poles and zeros of all principal minors and of all non-zero almost-principal minors of Γ. At such a point a, the evaluation Γ(a) is a principally regular matrix over K and ⟦Γ⟧ = ⟦Γ(a)⟧.
(2) Suppose K is ordered. This implies that its characteristic is zero and in particular that it is infinite. Let Γ positively realize a CI structure over L = K(ε). Again we seek a positive realization of ⟦Γ⟧ over K by plugging in elements of K for ε. By the previous part of the proof, the "algebraic part" of positive realizability, i.e., the vanishing and non-vanishing conditions on almost-principal minors, is satisfied on all but finitely many points. It remains to find infinitely many points of K on which all principal minors of Γ evaluate to positive elements of K. By hypothesis, the principal minors of Γ are positive in the ordering of L. We can assume that their numerators and denominators are both positive. By the ordering of L, a polynomial f ∈ K[ε] is positive if and only if its lowest-degree non-zero coefficient is positive in K. It is then an easy exercise in ordered fields to construct an element a_f > 0 in K, depending on the coefficients and degree of f, such that every evaluation of f on the open interval (0, a_f) is positive in K. Let a* be the minimum of the finitely many a_f constructed for all the (numerators and denominators of) principal minors of Γ. Since a* > 0, the interval (0, a*) is infinite, and all but finitely many evaluations of Γ on this interval yield a positive realization of ⟦Γ⟧ over K.
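The substitution argument of the lemma can be traced on a concrete instance. The following sketch (a toy curve of matrices chosen purely for illustration, not taken from the text) plugs a small rational number into a matrix over Q(ε) and checks, with exact arithmetic, that the evaluation is positive definite and that no almost-principal minor vanishes:

```python
from fractions import Fraction as F

def det(M):
    # Laplace expansion along the first row; fine for small matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def principal_minor(M, S):
    return det([[M[a][b] for b in S] for a in S])

def almost_principal_minor(M, i, j, K):
    # determinant of the submatrix with rows iK and columns jK
    return det([[M[a][b] for b in [j] + K] for a in [i] + K])

def gamma(eps):
    # hypothetical curve of symmetric matrices through the identity
    e = F(eps)
    return [[1, e, e], [e, 1, e], [e, e, 1]]

# A sufficiently small rational evaluation point keeps all leading
# principal minors positive and all almost-principal minors non-zero,
# so the empty gaussoid is realized by a rational matrix.
G = gamma(F(1, 10))
leading = [principal_minor(G, list(range(k))) for k in range(1, 4)]
apms = [almost_principal_minor(G, i, j, [k])
        for i, j, k in [(0, 1, 2), (0, 2, 1), (1, 2, 0)]]
```

The same checks, run over a grid of rational evaluation points, would locate the finitely many "bad" points that the proof avoids.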
Remark 5.2. The proof of the first part works, moreover, for all finite fields of sufficient size. A lower bound can be given based on the number of roots that the univariate polynomials in question have, which can be bounded by the size of a concrete realizing matrix and the maximal degree of its entries.
For geometric intuition, suppose that K = R for the moment. Inspection of the previous proof then paints the following picture of our main construction technique: we define a space of real matrices parametrized by rational functions in variables ε_1, . . . , ε_p. In fact, we can replace all infinitesimals by sufficiently large and far-apart powers of a single infinitesimal and imagine just a curve segment of matrices parametrized by ε. By the defining rational functions, we control the algebraically realized CI structure on this curve, and as ε tends to zero, the matrices may approach a limit matrix whose principal regularity or positive-definiteness carries over to them by continuity. In this way a certificate for algebraic or positive realizability of the CI structure on the curve over the base field K is obtained.
The appeal to continuity and limits can be avoided by an easy sufficient condition which is the subject of the next definition and which holds in all our later applications. Let L be a rational function field over K as in Lemma 5.1 and Γ a matrix over L. Suppose that the denominators of all of its entries have a non-zero constant term. Then the evaluation Γ• := Γ(0, . . . , 0) is a matrix over K. Notice that each minor of Γ• is the constant term of the corresponding minor of Γ, and thus principal regularity and positive-definiteness of Γ• over K imply that of Γ over L, allowing application of Lemma 5.1.
Definition 5.3. Let K be a field and Γ_0 a principally regular matrix over K. A gaussoid G is realizable near Γ_0 if there exists a matrix Γ over K(ε_1, . . . , ε_p) such that G = ⟦Γ⟧ and Γ• = Γ_0. If Γ can be chosen over the prime field K = Q, we add the adverb rationally for additional emphasis.
Among all (ordered) fields of characteristic zero, rational (positive) realizability is the most difficult to achieve, so whenever this is possible it is highlighted in the results below. Realizability near a positive-definite matrix immediately implies positive realizability by Lemma 5.1. We now assume that K is infinite or ordered and apply this lemma to prove general construction methods for Gaussians which reduce the case work in Section 6.
Lemma 5.4. Let G = ⟦Σ⟧ and H = ⟦Γ⟧ be algebraic Gaussians over an infinite field K on disjoint ground sets N and M, respectively. Then G ∪ H is an algebraic Gaussian over K on the ground set N ∪ M. It is realizable near the block-diagonal matrix containing Σ and Γ as blocks.
Proof. Define the matrix

    Φ = ( Σ    ε  )
        ( ε^T  Γ ),

where Σ sits in the N × N block, Γ in the M × M block and ε = (ε_ij)_{ij ∈ N × M} consists of independent variables. Obviously Φ• is the block-diagonal matrix from the statement of the lemma and Φ has coefficients in K. The matrix Φ defines a realizable gaussoid ⟦Φ⟧ which restricts to G on N and to H on M. It remains to see that ⟦Φ⟧ contains no other CI statement (ij|K), where we decompose K = N'M' with N' ⊆ N, M' ⊆ M. We apply Lemma 5.1 and show that det Φ_{ij|K} vanishes in K(ε_ij) only if G or H mandates it. First assume ij ⊆ N and M' ≠ ∅ (as M' = ∅ yields det Φ_{ij|K} = det Σ_{ij|N'}, whose vanishing depends only on G). The almost-principal submatrix Φ_{ij|K}, written with rows labeled in order by iN'M' and columns by jN'M', factors by a Schur complement with respect to the M' × M' block:

    det Φ_{ij|K} = det Γ_{M'} · det( Φ_{iN',jN'} − Φ_{iN',M'} Γ_{M'}^{-1} Φ_{M',jN'} ).

The first determinant is a principal minor of Γ and hence non-zero. It suffices to show that the determinant of the Schur complement expression is not the zero polynomial. For row a ∈ iN' and column b ∈ jN' the corresponding entry of the Schur complement is

    Φ_ab − Σ_{m,m' ∈ M'} ε_am (Γ_{M'}^{-1})_{mm'} ε_bm'.

By assumption there exists m ∈ M'. In this multivariate polynomial the coefficient of ε_im ε_jm, evaluated with all other variables ε set to zero, is ±(Γ_{M'}^{-1})_{mm} det Σ_{N'} ≠ 0, which shows that (ij|K) ∉ ⟦Φ⟧ by Lemma 5.1 in case ij ⊆ N and M' ≠ ∅. The same proof applies to the symmetric case with N and M exchanged. The remaining case has i ∈ N and j ∈ M. Then, by Laplace expansion,

    det Φ_{ij|K} = ±ε_ij det Φ_K + . . . ,

where det Φ_K is non-zero and all other terms do not involve the variable ε_ij.
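The dependent sum can be checked on the smallest interesting instance. In this sketch (with illustrative small rational values substituted for the variables ε_ij, as Lemma 5.1 licenses) both summands are 2 × 2 identity blocks, so the statement (01|) of the first summand must survive while the "other" statement (01|23) must be destroyed by the coupling:

```python
from fractions import Fraction as F

def det(M):
    # Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def apm(M, i, j, K):
    # almost-principal minor: rows iK, columns jK
    return det([[M[a][b] for b in [j] + K] for a in [i] + K])

# Dependent sum of two identity blocks on N = {0,1} and M = {2,3},
# coupled by generic small rationals standing in for the ε_ij.
e = [[F(1, 10), F(1, 11)], [F(1, 12), F(1, 13)]]
Phi = [[1,       0,       e[0][0], e[0][1]],
       [0,       1,       e[1][0], e[1][1]],
       [e[0][0], e[1][0], 1,       0      ],
       [e[0][1], e[1][1], 0,       1      ]]

inherited = Phi[0][1]               # statement (01|) of the Σ-block
destroyed = apm(Phi, 0, 1, [2, 3])  # = -ε02·ε12 - ε03·ε13, non-zero
```

The non-vanishing of `destroyed` is exactly the coefficient argument in the proof, specialized to N' = ∅ and M' = M.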
Lemma 5.5. Let K be infinite and G = ⟦Γ⟧ an algebraic Gaussian over K on N. If L ∩ N = ∅, then {(ij|K) ∈ A_NL : (ij|K) ∈ G} and {(ij|KL) ∈ A_NL : (ij|K) ∈ G} are both algebraic Gaussians over K on NL. They are realizable near every block-diagonal matrix whose N-block is Γ and whose L-block is principally regular.
Proof. The first assertion follows directly from Lemma 5.4 using that the empty gaussoid on L is realizable near every principally regular matrix, by introducing independent variables for each entry. To show the second assertion, we make use of duality: if G = ⟦Γ⟧ for Γ principally regular, then Γ^{-1} is principally regular and realizes the dual gaussoid G* = {(ij|N ∖ ijK) : (ij|K) ∈ G}. Thus we may take G* over N, embed it into NL by the first part of the proof, denoting the result by G̃*, and take its dual over NL. All these operations preserve algebraic realizability near a chosen block-diagonal matrix and we get

    (G̃*)* = {(ij|KL) ∈ A_NL : (ij|K) ∈ G}.

No other CI statements over NL arise because duality and embedding also preserve cardinality.
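The duality used here can be verified on a toy matrix. This sketch (illustrative rational values, exact adjugate-based inversion) realizes (01|) on a three-element ground set and confirms that the inverse matrix realizes the complementary statement (01|2), in line with the standard duality between ⟦Γ⟧ and ⟦Γ^{-1}⟧:

```python
from fractions import Fraction as F

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def apm(M, i, j, K):
    # almost-principal minor: rows iK, columns jK
    return det([[M[a][b] for b in [j] + K] for a in [i] + K])

def inverse(M):
    # adjugate formula, exact over the rationals
    n, d = len(M), det(M)
    def minor(a, b):
        return det([[M[r][c] for c in range(n) if c != b]
                    for r in range(n) if r != a])
    return [[(-1) ** (a + b) * minor(b, a) / d for b in range(n)]
            for a in range(n)]

# Γ realizes (01|) on N = {0,1,2}; its inverse realizes the dual
# statement (01|2) = (01|N \ 01) and no longer satisfies (01|).
G = [[1, 0, F(1, 2)],
     [0, 1, F(1, 3)],
     [F(1, 2), F(1, 3), 1]]
Ginv = inverse(G)
```

The vanishing of the (01|2) minor of Γ^{-1} is an instance of Jacobi's identity for minors of the inverse.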
Remark 5.6. The "dependent sum" construction from Lemma 5.4 can be seen as a perturbation of the direct sum of semigraphoids [Mat94,Mat04] which is recovered in the limit ε ij = 0. Both constructions preserve the properties of being a gaussoid and being a realizable gaussoid for all the various notions of "realizable" covered in the above statements. The dependent sum yields the most generic gaussoid on N M which restricts to its summands on N and M , in that it satisfies no additional relations. For realizable gaussoids, this corresponds to the "most dependent" joint Gaussian. Similarly, Lemma 5.5 shows that marginalization and conditioning of realizable gaussoids from N L to N can be inverted generically, inducing no further independencies over N L.
It easily follows from Proposition 4.3 that the quotient action S_Z on the (Z/4)^N-orbit of the identity matrix, where components X_i with different signs δ_i are identified, produces well-defined matrices, independent of the choice δ_i of representatives. This orbit consists of all 2^n diagonal matrices with entries (±1, . . . , ±1). For any matrix J in this orbit the action S_Z(J) flips the signs of the diagonal entries of J indicated by Z. The action of S_N does not leave this set of matrices either, so it constitutes an orbit under the hyperoctahedral group B_N.
Realizability near the identity matrix or its hyperoctahedral images is a well-behaved notion in the theory of CI structures: take a principally regular matrix Γ over K(ε_1, . . . , ε_p) with Γ• in this orbit. By Proposition 4.3, the hyperoctahedral action produces a principally regular matrix ∆ over the same field such that ∆• belongs to the hyperoctahedral orbit of the identity as well. The dependent sum of Lemma 5.4 as well as duality, marginalization and conditioning and their reversals from Lemma 5.5 preserve, for algebraic Gaussians, realizability near a hyperoctahedral image of the identity over their respective ground sets. These facts and the following proposition are the main realizability tools for the most complicated cases in the next section.
Proposition 5.7. If a gaussoid G is (rationally) realizable near every one of the 2^n hyperoctahedral images of the identity matrix, then every hyperoctahedral image of G is (rationally) realizable near the identity, in particular positively realizable.
Proof. Let H be in the hyperoctahedral orbit of G, arising from G by a swap and a permutation. Then H is realizable near the identity if and only if G is realizable near the matrix which is obtained from the identity by permuting and swapping in reverse. These operations result in a hyperoctahedral image of the identity, near which G is realizable by assumption. The hyperoctahedral action which transports this curve of realizations back to realize H near the identity does not change the field, so rationality is preserved.

Proof of the main theorem
In this section a proof of our main result Theorem 3.3 is delivered in the form of a series of lemmas which each settle one type of minimal gaussoid extension of at most two CI statements. In each case we construct a rational near-identity realization.
Lemma 6.1. All CI structures with at most one element are rationally realizable near the identity.
Proof. The empty structure is realized by a symmetric matrix with 1-diagonal and independent variables in the off-diagonal entries. Clearly, none of the almost-principal minors of this matrix vanish as polynomials. Every singleton subset of A_N is vacuously a gaussoid. The singleton gaussoids form a single orbit under the action of the hyperoctahedral group. While this action does not in general preserve positive realizability, we can emulate it using Lemma 5.5 in a way that shows that it is preserved in this case. Given any singleton (ij|K), first permute it to (12|K'), marginalize to 12K' and then contract K' to arrive at the singleton (12|) over the ground set N = 12. These transformations preserve rational positive realizability and so do their inverses. Thus we can transform every singleton into every other singleton while preserving realizability, and it remains to see that {(12|)} is rationally positively realizable, which is trivial.
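The chain of transformations in this proof can be traced on a small example. The following sketch (with illustrative rational values) realizes the singleton {(01|2)} near the identity and then contracts the conditioning element via a Schur complement, recovering a realization of {(01|)} on the smaller ground set:

```python
from fractions import Fraction as F

b, c = F(1, 7), F(1, 9)
# Realization of the singleton {(01|2)} on ground set {0,1,2}:
# the 01-entry is chosen as b*c so that exactly the almost-principal
# minor of (01|2) vanishes, while (01|) itself does not hold.
Phi = [[1, b * c, b],
       [b * c, 1, c],
       [b, c, 1]]
minor_01_2 = Phi[0][1] * Phi[2][2] - Phi[0][2] * Phi[2][1]  # det Φ_{01|2}

# Contracting the element 2 = Schur complement with respect to the
# 2-block; it turns the statement (01|2) into (01|) over {0,1}.
schur_01 = Phi[0][1] - Phi[0][2] * Phi[1][2] / Phi[2][2]
```

Running the construction backwards, from {(01|)} to {(01|2)}, is the generic inversion of conditioning provided by Lemma 5.5.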
From now on we consider two-element sets {(ij|N ), (kl|M )} and their minimal gaussoid extensions. Using the fact that marginalizations and conditionings of Gaussians are Gaussians (over the same field) and that we can undo these operations generically via Lemma 5.5, we can assume that we work over the ground set ijklN M and that N ∩ M = ∅.
The gaussoid axioms have two antecedents. Every antecedent set of a gaussoid axiom is therefore not a gaussoid. The following lemma deals with this type:

Lemma 6.2. If B = {(ij|N), (kl|M)} is not a gaussoid, then each of its minimal gaussoid extensions has cardinality four and is rationally realizable near the identity.
Proof. B not being a gaussoid requires that (ij|N ) and (kl|M ) are distinct and form the antecedent set of a gaussoid axiom. Thus the two CI statements lie in a 3-face of the ambient ijklN M -cube; see [BK20]. We can therefore reduce the study of gaussoid extensions of B to this 3-face and hence, after conditioning, to a 3-element ground set. Every gaussoid closure of B is thus a 3-gaussoid which is placed in a 3-face of the ijklN M -cube. With two generators, each closure has exactly four elements. The 3-gaussoids are all realizable as undirected graphical models or their duals. Rational near-identity realizations have been constructed in [LM07, Theorem 1] and those are embedded back into the ijklN M -cube via Lemma 5.5.
The remaining type of gaussoids is comprised of pairs of so-called inferenceless generators with respect to the gaussoid axioms: two-element subsets of A N which are vacuously gaussoids. We expect this type to be the hardest to realize. The gaussoid axioms, as the previous proof shows, govern inferences of two CI statements in a common 3-face of the hypercube. The realizabilities of inferenceless pairs prove that there are no valid two-antecedental inference rules for Gaussian CI whose antecedents lie further apart in the hypercube than in a common 3-face.
We continue to assume that the ground set is ijklN M and that N ∩ M = ∅. In addition, the assumption of inferenceless generators can be expressed by explicit conditions on the sets ij, kl, N and M, to which we refer as (†) below. This type splits into a number of cases depending on how "entangled" ij, kl, N and M are, as these entanglements influence the form of a potential realizing matrix. Up to the group Z/2 × S_N of duality and isomorphy, which preserves rational positive realizability, and symmetries in the roles of ij and kl, there are seven cases. The hyperoctahedral group has a considerably wider reach. Every gaussoid {(ij|N), (kl|M)} can be transformed into {(ij|), (kl|M)}, where ij ∩ M = ∅, by swapping out N ∪ (M ∩ ij). This action reduces the seven cases to only three, with representatives {(ij|), (kl|M)} over disjoint ground sets ij and klM, {(ij|), (ik|M)} and {(ij|), (ij|M)}. Not only does B_N decrease the number of cases, it also allows to pick simpler representatives of each case. The three representatives all contain the statement (ij|), which only mandates that a specific entry of the realizing matrix be zero. This reduction comes at the cost of not preserving positive realizability. Using Proposition 5.7, to obtain rational positive realizability of the entire orbit, we realize the three case representatives rationally near all the matrices which are equivalent to the identity under the hyperoctahedral action.
The first case is the union of gaussoids {(ij|)} and {(kl|M )} over disjoint ground sets ij and klM and is already settled as an instance of Lemma 5.4 with the rational near-identity realizations constructed for singleton gaussoids in Lemma 6.1. The remaining two cases are settled by Lemmas 6.3 and 6.4 below.
Lemma 6.3. The gaussoid {(ij|), (ik|M )} on the ground set ijkM is rationally realizable near all hyperoctahedral images of the identity.
Proof. We introduce the following notation: the candidate realizing matrix, with rows and columns indexed by i, j, k and M, is

    Φ = ( ±1   0    ξ   u^T )
        (  0  ±1    η   v^T )
        (  ξ   η   ±1   w^T )
        (  u   v    w   Σ   ),

where (ij|) is already fulfilled by imposing the zero entry. The statement (ik|M) is equivalent to the following relation, after a Schur complement with respect to Σ:

    ξ = u^T Σ^{-1} w = u^T adj(Σ) w / det Σ.

Thus we impose this relation on ξ. All other appearing symbols are supposed to be generic, i.e., η is a variable, the vectors have independent variable entries u_m, v_m, w_m, for m ∈ M, and Σ is a generic symmetric matrix with ±1-diagonal and independent ε_mn off-diagonals. The signs of the diagonal elements of Φ are arbitrary but fixed. Φ is a matrix over Q(η, u_m, v_m, w_m, ε_mn) whose off-diagonal entries tend to zero with the infinitesimal variables and thus it approaches any hyperoctahedral image of the identity matrix. The only denominator appears in ξ and is the principal minor det Σ with constant term ±1, which is infinitesimally non-zero. By construction, (ij|) and (ik|M) hold for Φ. It is clear that the only interesting almost-principal minors are those involving ξ. For any N ⊊ M, the statement (ik|N) surely does not hold because it is equivalent to ξ = u_N^T Σ_N^{-1} w_N, while ξ involves the variables u_m, w_m for m ∈ M ∖ N. The almost-principal minor (ik|jN) is rewritten using a Schur complement with respect to jN to

    det Φ_{ik|jN} = det Φ_{jN} · ( ξ − Φ_{i,jN} Φ_{jN}^{-1} Φ_{jN,k} ),

which vanishes if and only if the parenthesized factor vanishes as a rational function. Numerator and denominator of ξ do not involve the variable η, so it suffices to show that there is a monomial divisible by η with non-zero coefficient in the "bilinear term" inside the parentheses. All terms involving η are

    η · Σ_{n ∈ N} u_n (Φ_{jN}^{-1})_{nj}.

Each of these summands has a unique variable u_n which does not appear in Φ_{jN}. If N ≠ ∅, this ensures that the η terms do not cancel and that the almost-principal minor does not vanish. In case N = ∅, it is sufficient to remark that Φ_{ik|} = ξ ≠ 0 because M ≠ ∅ due to the assumption of inferenceless generators (†). For the case (il|kN) first assume that l ≠ j. Laplace expansion on the first row of the almost-principal minor gives a sum

    det Φ_{il|kN} = ±u_l det Φ_{kN} + . . . ,

of which the omitted terms are not divisible by u_l.
Since the constant term of det Φ_{kN} is ±1, the monomial u_l arises in the sum and cannot be canceled by other terms. If l = j, then (ij|kN) is equivalent to

    Φ_{i,kN} Φ_{kN}^{-1} Φ_{kN,j} = 0.

Again we investigate the terms divisible by η; they are

    η · ( ξ (Φ_{kN}^{-1})_{kk} + Σ_{n ∈ N} u_n (Φ_{kN}^{-1})_{nk} ).

Since ξ ≠ 0 and (Φ_{kN}^{-1})_{kk} has constant term ±1, we find the monomial η u_m w_m for some m ∈ M in the numerator of this almost-principal minor.
The case (kl|iN ) for l = j is completely analogous to the previous (il|kN ) one. In fact, the involved matrices are identical up to exchanging the places of the generic vectors u and w, which already play symmetric roles in the definition of ξ. The matrix for (kj|iN ) is and again Laplace expansion can be used to see that η survives as a monomial of degree one. The last case is (lm|ikN ). If j ∈ lm, the almost-principal minor of has a monomial ε lm via Laplace expansion in the first row. The numerators of other summands in this expansion are not divisible by ε lm , making it impossible to cancel this degree-1 monomial. Otherwise, without loss of generality, m = j and the matrix  is susceptible to the same Laplace expansion proof yielding a monomial v l .
Lemma 6.4. The gaussoid {(ij|), (ij|M )} on the ground set ijM is rationally realizable near all hyperoctahedral images of the identity.
Proof. We use the matrix pattern

    Φ = ( ±1   0   u^T )
        (  0  ±1   v^T )
        (  u   v   Σ   ),

with rows and columns indexed by i, j and M, with column vectors u and v and a generic symmetric matrix Σ with ±1-diagonal and independent ε_mn off-diagonals. Again, (ij|) is imposed already by a zero entry. Unlike the situation of Lemma 6.3, we cannot make (ij|M) hold by imposing a relation on the ij-entry of Φ, which is already set to zero. The equation for (ij|M) is equivalent to

    u^T adj(Σ) v = 0,

that is, u and v are orthogonal with respect to the (infinitesimally principally regular) adjoint of Σ. Equivalently we could have used Σ^{-1} but prefer not to introduce denominators into Φ needlessly. To enforce this relation, we define u and v via the Gram-Schmidt process on vectors x and y of mutually independent variables indexed by M, as follows:

    u = x,   v = y − (β_M / α_M) x,

with the bilinear forms α_L = x_L^T adj(Σ_L) x_L and β_L = x_L^T adj(Σ_L) y_L for any L ⊆ M. This completes the definition of Φ, which is a matrix over Q(x_m, y_m, ε_mn) whose off-diagonal entries tend to zero with the infinitesimal variables, and clearly ⟦Φ⟧ contains (ij|) and (ij|M). Evidently (kl|N) ∉ ⟦Φ⟧ whenever j ∉ klN because the almost-principal submatrix is generic in this case. The remainder of the proof treats CI statements of the forms (ij|N), (jk|N) and (kl|jN), each for all suitable k, l and N.
If N is any non-empty subset of M, the almost-principal minor (ij|N) becomes, up to non-vanishing factors,

    α_M β_N − α_N β_M.

If N ≠ M, it suffices to find a monomial in α_M β_N which does not appear in α_N β_M. Given k ∈ N and m ∈ M ∖ N, and using that x_m^2 only appears in α_M via the constant term ±1 in the cofactor (adj Σ)_mm, such a monomial is x_m^2 x_k y_k. Next consider type (jk|N) with

    det Φ_{jk|N} = ±( v_k det Φ_N − Φ_{j,N} adj(Φ_N) Φ_{N,k} ).

By the assumption of inferenceless generators (†), |M| ≥ 2, so there exists m ∈ M ∖ k. The expansion of α_M y_k produces the monomial x_m^2 y_k which does not appear in β_M x_k. Thus this monomial arises from the product term v_k det Φ_N, whose second factor has constant term ±1. The remaining term is a bilinear form with respect to adj(Φ_N). Expanding the Φ_{j,N} vector, pretending that x_i = y_i = 0 in case i ∈ N, we find that each of its entries is v_n = (α_M y_n − β_M x_n)/α_M or zero. Each monomial in α_M or β_M has total degree at least 2; y_n, x_n and Φ_kn are variables, or zero if n = i ∈ N. Under no circumstance does any monomial of total degree 3 arise from the bilinear form. This proves that x_m^2 y_k is a monomial with non-zero coefficient in the expansion of det Φ_{jk|N}, hence (jk|N) ∉ ⟦Φ⟧. The last type (kl|jN) splits into two cases, depending on whether i ∈ kl or not. The proofs are similar, so suppose first that i ∉ kl. Then

    det Φ_{kl|jN} = ±( ε_kl det Φ_{jN} − Φ_{k,jN} adj(Φ_{jN}) Φ_{jN,l} ).

Because det Φ_{jN} has constant term ±1, the monomial ε_kl appears in the above expansion of the almost-principal minor. It suffices to show that the bilinear form term does not produce this monomial. Indeed, it is a sum over products of three polynomials of which at most the entry of the adjoint may have a non-zero constant term. Analogously to above, this sum has no monomial of total degree 1, so the ε_kl monomial cannot be canceled. Finally, if l = i, the ε_kl term in the calculation above becomes x_k instead and one of the coordinates in the bilinear form term is the 0 in Φ_{ij|}. However, this does not interfere with the argument.
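The Gram-Schmidt trick can be verified on the minimal instance |M| = 2 (the smallest size allowed by (†)). This sketch (illustrative rational values for x, y and the off-diagonal of Σ) checks that u and v are orthogonal for adj(Σ), so (ij|) and (ij|M) hold, while an intermediate statement (ij|N) with ∅ ≠ N ⊊ M fails:

```python
from fractions import Fraction as F

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def apm(M, i, j, K):
    # almost-principal minor: rows iK, columns jK
    return det([[M[a][b] for b in [j] + K] for a in [i] + K])

# Lemma 6.4 pattern with M = {2,3}: u = x and v = y - (β/α)x
# are orthogonal with respect to adj(Σ).
s = F(1, 2)                      # off-diagonal of Σ
x = [F(1, 3), F(1, 5)]
y = [F(1, 7), F(1, 11)]
adjS = [[1, -s], [-s, 1]]        # adjugate of Σ = [[1, s], [s, 1]]
bil = lambda p, q: sum(p[a] * adjS[a][b] * q[b]
                       for a in range(2) for b in range(2))
alpha, beta = bil(x, x), bil(x, y)
u = x
v = [y[a] - beta / alpha * x[a] for a in range(2)]
#       i     j     2     3
Phi = [[1,    0,    u[0], u[1]],
       [0,    1,    v[0], v[1]],
       [u[0], v[0], 1,    s   ],
       [u[1], v[1], s,    1   ]]
```

The minor of (ij|{2}) equals −u_2 v_2 up to sign, a specialization of the polynomial α_M β_N − α_N β_M from the proof.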

Remarks and examples
We have shown that on every ground set N, a Boolean formula in inference form ϕ : L ⇒ M in variables A_N and with |L| ≤ 2 is valid for all regular Gaussian distributions if and only if the gaussoid axioms on N logically imply ϕ. The main work of the proof consists of realizing all gaussoids with two elements, as their realizability implies that no inference forms with two antecedents, beyond the deductive closure of the gaussoid axioms, are valid for all regular Gaussians. Since we constructed positive-definite rational realizations (in every neighborhood of the identity matrix), which is the most restrictive constellation among algebraic and positive realizations over characteristic zero, we obtain Corollary 3.4, which in character reaches slightly beyond the probabilistic origin of gaussoids and into the field of synthetic geometry.
Corollary 3.4 does not hold in general in positive characteristic. For example, it is easy to see that the only principally regular matrix over GF(2) is the identity matrix, so the GF(2)-algebraic Gaussians satisfy many inference rules which are not implied by the gaussoid axioms. The proof strategy of this paper begins to fail over finite fields in Section 5. For example, Lemma 5.4 does not hold over GF(2).
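The claim about GF(2) is easily verified by exhaustive search. The following sketch (for the 3 × 3 case; the same pattern works for any fixed size) enumerates all symmetric matrices over GF(2) and keeps the principally regular ones:

```python
from itertools import combinations, product

def det_mod2(M):
    # over GF(2) the determinant equals the permanent, so signs
    # can be dropped in the Laplace expansion
    if len(M) == 1:
        return M[0][0] % 2
    return sum(M[0][j] * det_mod2([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M))) % 2

def principally_regular_mod2(M):
    n = len(M)
    return all(det_mod2([[M[a][b] for b in S] for a in S]) == 1
               for k in range(1, n + 1)
               for S in map(list, combinations(range(n), k)))

# Enumerate all symmetric 3x3 matrices over GF(2).
regular = []
for d in product(range(2), repeat=3):       # diagonal entries
    for o in product(range(2), repeat=3):   # off-diagonals b01, b02, b12
        M = [[d[0], o[0], o[1]],
             [o[0], d[1], o[2]],
             [o[1], o[2], d[2]]]
        if principally_regular_mod2(M):
            regular.append(M)
```

The 1 × 1 minors force a unit diagonal and the 2 × 2 minors 1 − b^2 ≡ 1 + b force every off-diagonal entry b to vanish, so only the identity survives.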
The result does not generalize to more antecedents either: a valid three-antecedental inference rule for Gaussians which is not implied by the gaussoid axioms was found by Lněnička and Matúš [LM07].

In private correspondence, Milan Studený kindly pointed out that Corollary 3.4 is not a complete analogue to [Stu94] because it concerns only CI inference forms over elementary or local CI statements (ij|K) ∈ A_N where i and j are singletons. By contrast, statements of the form (I, J|K) with pairwise disjoint sets I, J, K ⊆ N are global CI statements. It is an easy exercise to prove that for compositional graphoids the following equivalence holds:

    (I, J|K) holds if and only if (ij|K) holds for all i ∈ I and j ∈ J.

Therefore, a general theory of gaussoids can forgo global symbols. Since local and global semigraphoids are in bijection, a global semigraphoid can be recovered exactly from its intersection with A_N [Mat97, Section 2]. However, for inference rules with a bound on the number of antecedents, there is a major difference in allowing global statements. For example, the single global CI statement (12, 34|5) corresponds to the local statements {(13|5), (14|5), (23|5), (24|5)}, which have a unique minimal gaussoid extension with 16 elements. This gaussoid is realizable, hence (12, 34|5) is not the antecedent set of a non-trivial valid global inference rule for Gaussians, but this is not covered by our proof. Hence we have

Conjecture 7.1. All minimal gaussoid extensions of at most two global CI statements are realizable.
In 2004, in an effort to classify semigraphoids by closures of families of "easy semigraphoids" under certain operations, Matúš [Mat04] proved a significant refinement of Studený's two-antecedental completeness: his Theorem 2 states that all minimal semigraphoid extensions of two global CI statements are multilinear over every field, which implies discrete realizability and hence, by Lemma 3.2, two-antecedental completeness of the semigraphoid axioms for discrete CI and also for multilinearity over every field. Following [Mat94], a semigraphoid is a semimatroid if it is the CI structure of a polymatroid rank function. It is multilinear (depending on context this is also called just linear) if the rank function is given by a subspace arrangement, cf. [Mat97, Section 6.1]. Multilinearity over every field of all semigraphoids with at most two generators is a strong result which may be helpful in approaching Conjecture 7.1. However, there is no general relation between multilinear semimatroids which happen to be gaussoids and algebraic or positive realizability, as the following example shows.
Example 7.2 (A multilinear but non-complex gaussoid). To find a 5-gaussoid which is not algebraic over the complex numbers, one iterates through the B_5-orbit representatives of 5-gaussoids computed in [BDKS19]. Remark 4.2 describes a complex realizability test for gaussoids in geometric terms, which is readily translated into the language of commutative algebra, to be executed by a computer algebra system like Macaulay2 [GS]. We refer to [CLO15, Section 4.4] for further clarification of the machinery of computer algebra. However, the exact test from Remark 4.2 requires computing a quotient of the radical ideal defined by the gaussoid-under-test M by the product of all principal minors and of all almost-principal minors outside of M, where Γ is a generic symmetric 5 × 5 matrix over C. This product is a very large polynomial and computing the quotient is often infeasible due to the amount of memory and CPU time required. Instead of taking the ideal quotient by a product of minors, one can successively take the quotient by each factor. This produces a potentially larger variety than the realization space of M described in Remark 4.2, but the computation is feasible and if the larger variety turns out to be empty, the gaussoid is certain to be non-realizable. For some gaussoids, this method proves non-realizability within a few seconds.
The multilinearity test is based on the characterization of the cone spanned by multilinear polymatroids on five variables achieved by Dougherty, Freiling and Zeger in [DFZ10]. We thank Kenneth Zeger for pointing us to the new location of the data mentioned in their paper, http://code.ucsd.edu/zeger/linrank. Using the list of extreme rays of the multilinear polymatroid cone, one can compute their CI structures and check for every incoming non-algebraic gaussoid from the preceding test whether it is equal to the intersection of all extreme structures above it.
One gaussoid M passing both tests was found. To confirm that this gaussoid is non-realizable over C, it suffices to write down the ideal defined by the almost-principal minors in M, compute its radical and saturate it at the product of the non-vanishing entries of the matrix as specified by M, which is a fast operation. The variety defined by the resulting ideal is a superset of the algebraic realization space of M and one can check that the almost-principal minor of (34|125) ∉ M vanishes on this variety. This proves the nine-antecedental inference rule M ⇒ (34|125) for algebraic Gaussians over C. Since M does not satisfy this rule, it is not realizable over C.
To prove multilinearity, one has to exhibit the face of the multilinear polymatroid cone defined by M and verify that a relatively interior point realizes M. This face is spanned by 37 extreme rays, each written as an array of 31 single-digit integers in "binary counter" notation.

Substituting the first into the second equality and canceling the non-zero factor b in every term, we find 1 + ace = c^2 + e^2. This equation cannot be satisfied if a, c and e all tend to zero. It can be shown that the realization space of G decomposes into eight reorientation classes which are identical up to an orthogonal transformation; for an explanation of reorientation, see [BDKS19, Section 5]. This transformation preserves Euclidean distances and it fixes the center of the elliptope, thus all of these components have the same distance to the identity matrix. Focusing on one of them, we can assume that a, c and e are all positive, and then the Euclidean distance of a realization of G to the identity is

    √(2 (a^2 + b^2 + c^2 + e^2 + f^2)) = √(2 (1 + ace + a^2 + b^2 + f^2)) ≥ √2.
By allowing e to converge to one while a, b and c converge to zero, one can find positive-definite realizations of G which approach this lower bound. Thus, the elliptope slice of the realization space of G has distance √ 2 to the identity matrix.
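The lower bound and its attainment can be checked numerically. This sketch (floating-point, with all free parameters set to a common value t purely for illustration) solves the relation 1 + ace = c^2 + e^2 for e and evaluates both forms of the distance:

```python
import math

def distance_to_identity(a, b, c, f):
    # solve c^2 + e^2 = 1 + a*c*e for the positive root e and return
    # the Euclidean distance of the corresponding matrix, with unit
    # diagonal and off-diagonal entries a, b, c, 0, e, f, to the identity
    e = (a * c + math.sqrt(a * a * c * c - 4 * (c * c - 1))) / 2
    dist = math.sqrt(2 * (a * a + b * b + c * c + e * e + f * f))
    # sanity check: both ways of writing the squared distance agree
    alt = math.sqrt(2 * (1 + a * c * e + a * a + b * b + f * f))
    assert abs(dist - alt) < 1e-9
    return dist

# The distance exceeds sqrt(2) and approaches it as the free
# parameters tend to zero (so that e tends to one).
dists = [distance_to_identity(t, t, t, t) for t in (0.1, 0.01, 0.001)]
```

The decreasing sequence of distances illustrates that √2 is the infimum over the elliptope slice, attained only in the limit.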
It was remarked in [BDKS19, Corollary 1] that, based on the hyperoctahedral action and [LM07, Table 1], every 4-gaussoid is algebraically realizable over C. Since all realizations in the table are even rational, our Lemma 5.1 and Proposition 4.3 furthermore imply that every 4-gaussoid is algebraically realizable over Q, while not all of them are positively realizable even over R.
Example 7.5 (A non-algebraic gaussoid). In [BDKS19, Example 13] a 5-gaussoid was found which is not algebraically realizable over C. This gaussoid had a redundant element (25|34) which contributed neither to the property of being a gaussoid nor to being non-realizable. In this example, we consider the gaussoid V of [BDKS19, Example 13], with (25|34) removed, from the broader perspective of algebraic realizability over general fields. The claim is that V is not algebraically realizable over any field. Using suitable variable substitutions, the longer equations, corresponding to CI statements with bigger conditioning sets, shrink. Dividing by the non-zero variables d, f and h yields a system in even powers of the variables. Replacing d^2 = D and so on leads to a contradiction to (25|) ∉ V. Notice that this contradiction was derived using only the non-vanishing of principal minors and division by variables known to be non-zero by the definition of V. The argument is independent of the field chosen, so a nine-antecedental inference rule (‡), with antecedent set V and conclusions (15|), (24|), (25|) and (34|), is valid for algebraic Gaussians over every field. This proves that V is not algebraically realizable over any field. We note that, unlike M of Example 7.2, this CI structure is not realizable by discrete random variables either because it is not a semimatroid. The closure of V in the set of 5-semimatroids and in the set of multilinear 5-semimatroids can be computed using polyhedral geometry, as outlined in Example 7.2, because the extreme rays of the relevant cones are known. The semimatroid closure of V contains 16 CI statements but none of the conclusions (15|), (24|), (25|) and (34|) of (‡), so this inference rule is not valid for semimatroids. The multilinear closure is larger at 22 elements and, of the possible conclusions of (‡), it contains precisely (15|). Hence, the special case V ⇒ (15|) of (‡) is valid for multilinear semimatroids.
The closure of V among the CI structures realizable by discrete random variables lies in between the semimatroid and multilinear closures. It remains open whether V ⇒ (15|) is valid even in the discrete case.
As a final direction for future work we would like to mention the influence of the field K on the realizability of gaussoids. There are numerous questions to be asked, inspired by related theorems in matroid theory, cf. [Oxl11, Chapter 6]. To give one example: recall from Remark 4.2 that the realizability of a gaussoid G is a disproof of the existence of a non-trivial valid inference rule with antecedent set G. Therefore, realizing matrices represent invalidity proofs for inference rules in a very compact way. In the proof of Theorem 3.3, we exhibited rational matrices as invalidity proofs for all two-antecedental inference formulas which are not implied by the gaussoid axioms. These matrices can be stored on a computer and verified using exact arbitrary-precision rational arithmetic. This leads to the following two variants of a question which was asked before by Petr Šimeček [Šim06b, Section 4] in the broader context of semidefinite Gaussian CI models. He asked whether there exists a realizable Gaussian CI structure for which a rational realizing covariance matrix does not exist, because his computer search for invalid inference rules was conducted on rational matrices only. Both questions are answered affirmatively in [Boe21].
Question 7.6. Is there a gaussoid which is positively realizable over R but not over Q?
Question 7.7. Are there gaussoids which separate the notions of algebraic realizability over C, R and Q?