The expected codimension of a matroid variety
Abstract
Matroid varieties are the closures in the Grassmannian of sets of points defined by specifying which Plücker coordinates vanish and which do not. In general these varieties are very ill-behaved, but in many cases one can estimate their codimension by keeping careful track of the conditions imposed by the vanishing of each Plücker coordinate on the columns of the matrix representing a point of the Grassmannian. This paper presents a way to make this procedure precise, producing a number for each matroid variety called its expected codimension that can be computed combinatorially solely from the list of Plücker coordinates that are prescribed to vanish. We prove that for a special, well-studied class of matroid varieties called positroid varieties, the expected codimension coincides with the actual codimension.
Keywords
Rank Function · Rank Condition · Schubert Variety · Tutte Polynomial · Careful Track
1 Introduction
Consider a point \(x\) on the Grassmannian \(G(k,n)\) of \(k\)-planes in \(\mathbb {C}^n\). The matroid of \(x\) is defined to be the set of Plücker coordinates that are nonzero at \(x\), and a matroid variety is the closure of the set of points on \(G(k,n)\) with a particular matroid. If \(M\) is such a set of Plücker coordinates, we write \(X(M)\) for the corresponding matroid variety. Many enumerative problems on the Grassmannian can be described in terms of matroid varieties; the Schubert varieties that form the usual basis for the cohomology ring of \(G(k,n)\) are an especially well-behaved special case.
Unfortunately, matroid varieties can be very ugly in full generality. A good start toward understanding the behavior of a matroid variety would be to find some way to compute its dimension directly from the matroid that defines it, but even this has very little hope of succeeding.
Still, one can come up with an estimate of the codimension of a matroid variety inside its Grassmannian by keeping careful track of the conditions imposed by the vanishing of Plücker coordinates on the columns of the \(k\times n\) matrix defining a point on \(G(k,n)\). This paper is about a way to make this idea precise, producing a number called the expected codimension for each matroid. While it does not always produce the actual codimension of the matroid variety, we will prove that it always does for positroids, a particularly well-studied class of matroids which includes both Schubert and Richardson matroids.
We write \([n]\) for the set \(\{1,2,\ldots ,n\}\), and for any set \(S\) we write \(\left( {\begin{array}{c}S\\ k\end{array}}\right) \) for the set of all \(k\)element subsets of \(S\). \(G(k,n)\) will always stand for the Grassmannian of \(k\)planes in \(\mathbb {C}^n\). For \(S\in \left( {\begin{array}{c}[n]\\ k\end{array}}\right) \), we write \(p_S\) for the corresponding Plücker coordinate on \(G(k,n)\); that is, thinking of elements of \(G(k,n)\) as being represented by \(k\times n\) matrices, \(p_S\) is the determinant of the minor whose columns are the elements of \(S\). All varieties in this paper are over \(\mathbb {C}\).
We estimate the codimension of \(X(S)\) in \(G(3,8)\) as follows. To build a projective model of \(S\) like the one in the figure, we are free to place the odd-numbered points wherever we want. Once we have done this, each even-numbered point is forced to live in the codimension-1 subspace spanned by two of the points we have already placed. So, we guess that the codimension of \(X(S)\) is \(1+1+1+1=4\).
This turns out to be the correct answer for \({{\mathrm{codim}}}X(S)\), and we will see later that the reasoning given is more or less why. One immediate question is whether the result of this procedure depends on the order in which we “place” the points. Once we have nailed down exactly what the procedure is, we will see that the answer to this question is no. For now, let us just try a couple more orderings. If they are placed in order starting from the beginning, points 1, 2, 4, 5, and 7 can be put anywhere without restriction. As before, points 6 and 8 are now forced onto codimension-1 subspaces. But point 3 is now forced onto a codimension-2 subspace: it needs to be on the intersection of \({{\mathrm{span}}}\{1,2\}\) and \({{\mathrm{span}}}\{4,5\}\). So, adding all the restrictions up, we get \(1+1+2=4\). Similarly, we could get “\(2+2\)” by placing the points 1, 2, 4, 5, 6, and 8 freely, and then placing 3 and 7 last.
We will show that our definition is independent of the order by recasting it in terms of something manifestly order-independent. In \(G(k,n)\), specifying exactly which Plücker coordinates vanish is the same as describing, for each subset of the set of columns, the dimension of its span in \(\mathbb {C}^k\); in matroid language, this is called the rank of the corresponding subset of \([n]\). Our current procedure is to ask, for each element, what constraints are put on that element when it is added in. Instead let us ask, for each subset of the base set of the matroid, what constraints it puts on its elements. For example, for the set \(\{1,2,3\}\) in \(S\), the third element added in will be forced onto a codimension-1 subspace no matter what the order is; the only thing that matters is that the number of elements of this set is 1 more than its rank.
So, it seems like we should add up the numbers \((\#F-{{\mathrm{rk}}}F)(k-{{\mathrm{rk}}}F)\) for each subset \(F\); the first factor is the number of elements which are constrained by \(F\) and the second is the codimension of the subspace those elements are constrained to. But this is not quite right: whenever an element belongs to two different such \(F\)’s, it is going to be counted twice. Sometimes this is desirable, as we saw with point 3 two paragraphs up, but often it will be redundant, as it is for the sets \(\{1,2,3\}\) and \(\{1,2,3,4\}\) in \(S\). We ought to subtract 1 from the number of constrained elements for the larger set to account for the fact that it was already taken care of by the smaller one.
This, finally, takes us to the definition that we will be using:
Definition 1.1
Allowing \(\mathfrak {S}\) to be something other than \(\mathcal {P}([n])\) itself might seem strange, but it will turn out to be very helpful. We will show that in many cases \({{\mathrm{ec}}}_{\mathfrak {S}}\) will be the same for many different choices of \(\mathfrak {S}\) but easier to compute for some choices than for others, and we will be happy to have the flexibility, for both theoretical and practical reasons.
In Definition 4.2, we describe an important class of matroids called positroids. We will show in Theorem 4.7 that positroids have expected codimension. In Sect. 5 we also discuss valuativity, a well-studied property of some numerical invariants of matroids, and show that expected codimension is valuative.
2 Matroids and matroid varieties
2.1 Matroids
We will need to have access to some theoretical results about abstract matroids. There are many equivalent definitions of matroids, all useful in different contexts, and we are only going to mention two of them here. A good place to learn more about matroids from a combinatorial perspective is [13].
Given a collection of vectors in a vector space, its matroid combinatorially captures all the information about the linear relations among these vectors. We will consider two equivalent ways to do this. Details of these and other axiomatizations of matroids can be found in [13, pp. 298–312].
Definition 2.1
A matroid \(M\) consists of a finite set \(E\) together with a collection \(\fancyscript{B}\) of subsets of \(E\), called the bases of \(M\), satisfying the following axioms:
\(\fancyscript{B}\) is not empty.

No element of \(\fancyscript{B}\) contains another.

For \(B,B'\in \fancyscript{B}\) and \(x\in B\), there is some \(y\in B'\) so that \((B\setminus \{x\})\cup \{y\}\in \fancyscript{B}\).
Definition 2.2
Suppose we have a finite-dimensional vector space \(V\), a finite set \(E\), and a function \(e:E\rightarrow V\) whose image spans \(V\). We can put a matroid structure on \(E\) by taking \(\fancyscript{B}\) to be the collection of all subsets of \(E\) which map injectively to a basis of \(V\). (The reason for this funny definition is that we would like to be able to take the same element of \(V\) more than once; otherwise \(E\) could just be a subset of \(V\). We will hardly ever be careful about the difference between an element \(x\in E\) and its image \(e(x)\in V\).) It is an easy exercise to show that this definition satisfies the axioms above. Matroids which arise in this way are called realizable.
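To make Definition 2.2 concrete, here is a short Python sketch (ours; the helper names and the example configuration are hypothetical) that computes the bases of a realizable matroid from a vector configuration by testing which subsets of the right size have full rank:

```python
from itertools import combinations
from fractions import Fraction

def rank_of_vectors(vectors):
    """Rank of a list of vectors over the rationals, via Gaussian elimination."""
    rows = [list(map(Fraction, v)) for v in vectors]
    rank = 0
    ncols = len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col] / rows[rank][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def realizable_bases(e):
    """Bases of the matroid of e: E -> V: subsets mapping injectively to a basis of V."""
    E = sorted(e)
    k = rank_of_vectors([e[x] for x in E])
    return {frozenset(B) for B in combinations(E, k)
            if rank_of_vectors([e[x] for x in B]) == k}

# Hypothetical configuration in Q^3: vectors 1, 2, 3 span only a plane.
e = {1: (1, 0, 0), 2: (0, 1, 0), 3: (1, 1, 0), 4: (0, 0, 1)}
bases = realizable_bases(e)
```

Since the vectors labelled 1, 2, 3 span only a plane, \(\{1,2,3\}\) is the unique 3-element subset that fails to be a basis.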
The following terminology will be useful. Most of these definitions mirror the corresponding ones from linear algebra in the realizable case.
Definition 2.3
 (1)
A subset of \(E\) which is contained in a basis is called independent. Any other set is dependent.
 (2)
For \(F\subseteq E\), the rank of \(F\), written \({{\mathrm{rk}}}F\), is the size of the largest independent set contained in \(F\). Note that \({{\mathrm{rk}}}E\) is the same as the size of any basis. We define \({{\mathrm{rk}}}M\) to be \({{\mathrm{rk}}}E\).
 (3)
For a subset \(F\subseteq E\) and an element \(x\in E\), we say that \(x\) is in the closure of \(F\), written \(x\in \overline{F}\), if \({{\mathrm{rk}}}(F\cup \{x\})={{\mathrm{rk}}}F\). Note that, as the name suggests, closure is idempotent and inclusionpreserving. Sets which are their own closures are called flats. In the realizable case, the flats are the intersections of subspaces of \(V\) with \(E\).
 (4)
A set \(F\) which contains a basis is called a spanning set. Equivalently, \(F\) spans if \({{\mathrm{rk}}}F={{\mathrm{rk}}}E\), or if \(\overline{F}=E\).
 (5)
If \({{\mathrm{rk}}}\{x\}=0\), we say \(x\) is a loop. Equivalently, \(x\) is not in any basis, or \(x\in \overline{\varnothing }\), or \(x\) is in every flat. In the realizable case, loops are elements of \(E\) which map to the zero vector in \(V\).
 (6)
If \({{\mathrm{rk}}}(E\setminus \{x\})={{\mathrm{rk}}}E-1\), we say \(x\) is a coloop. Equivalently, \(x\) is in every basis.
 (7)
If \({{\mathrm{rk}}}\{x,y\}=1\), we say that \(x\) and \(y\) are parallel. Equivalently, any flat which contains one of \(x\) or \(y\) also contains the other.
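All of the notions in Definition 2.3 can be computed mechanically from the list of bases, using the standard fact that \({{\mathrm{rk}}}F=\max _B\#(B\cap F)\) over bases \(B\). The sketch below is ours (brute force, hypothetical names); the example is a rank-3 matroid on \(\{1,2,3,4\}\) in which 1, 2, 3 are dependent.

```python
def rk(bases, F):
    """Rank of F: max |B ∩ F| over bases B (size of the largest independent subset of F)."""
    F = set(F)
    return max(len(B & F) for B in bases)

def closure(bases, E, F):
    """Closure of F: the elements whose addition does not raise the rank."""
    return {x for x in E if rk(bases, set(F) | {x}) == rk(bases, F)}

def is_loop(bases, x):
    return rk(bases, {x}) == 0          # equivalently: x is in no basis

def is_coloop(bases, x):
    return all(x in B for B in bases)   # equivalently: x is in every basis

# Hypothetical example: rank-3 matroid on {1,2,3,4} with 1, 2, 3 dependent.
bases = {frozenset({1, 2, 4}), frozenset({1, 3, 4}), frozenset({2, 3, 4})}
E = {1, 2, 3, 4}
```

For instance, the closure of \(\{1,2\}\) is \(\{1,2,3\}\), the flat corresponding to the line through the first three points, and 4 is a coloop.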
It will also be convenient to note that matroids can be defined just by listing the axioms that have to be satisfied by the rank function defined above:
Definition 2.4
A matroid on a finite set \(E\) can be specified by a rank function \({{\mathrm{rk}}}:\mathcal {P}(E)\rightarrow \mathbb {Z}_{\ge 0}\) satisfying the following axioms for all \(F\subseteq E\) and \(x,y\in E\):
\({{\mathrm{rk}}}\varnothing =0\).

\({{\mathrm{rk}}}(F\cup \{x\})\) is either \({{\mathrm{rk}}}F\) or \({{\mathrm{rk}}}F+1\).

If \({{\mathrm{rk}}}F={{\mathrm{rk}}}(F\cup \{x\})={{\mathrm{rk}}}(F\cup \{y\})\), then \({{\mathrm{rk}}}(F\cup \{x,y\})={{\mathrm{rk}}}F\).
Given the same data we used to define a realizable matroid before—a set \(E\) with a function \(e\) to a vector space \(V\)—we can get a rank function on \(E\) by setting \({{\mathrm{rk}}}(F)=\dim ({{\mathrm{span}}}(e(F)))\).
We have already mentioned how to turn a collection of bases into a rank function. To go the other way, we can say \(B\) is a basis if it is minimal among sets of maximal rank. One can check that these two correspondences make the two definitions given here equivalent. We will not distinguish between them as we go forward.
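The passage from a rank function back to a collection of bases can be sketched in code (ours; brute force, so suitable only for tiny ground sets): a basis is a minimal set among those of maximal rank.

```python
from itertools import combinations

def bases_from_rank(E, rk):
    """Bases: minimal sets among those of maximal rank."""
    E = sorted(E)
    k = rk(set(E))
    spanning = [frozenset(S) for r in range(len(E) + 1)
                for S in combinations(E, r) if rk(set(S)) == k]
    # keep only the spanning sets with no proper spanning subset
    return {S for S in spanning if not any(T < S for T in spanning)}

# Uniform matroid U_{2,3}: every set has rank min(#S, 2).
bases = bases_from_rank({1, 2, 3}, lambda S: min(len(S), 2))
```

On the uniform matroid \(U_{2,3}\) this recovers the three 2-element subsets as bases.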
2.2 Matroid varieties
As mentioned above, the main objects of study in this paper are certain subvarieties of Grassmannians which can be described in terms of matroids.
Construction 2.5
Consider the Grassmannian \(G(k,n)\), which we will think of as the set of \(k\times n\) matrices of full rank modulo the obvious left action of \(GL_k\). When one builds the Grassmannian in this way, one ordinarily considers the \(k\) rows of the matrix as elements of \(\mathbb {C}^n\), and the action of \(GL_k\) corresponds to automorphisms of the span of those elements, so that we are left with a variety that parametrizes the \(k\)planes in \(\mathbb {C}^n\).
We will think about our matrices the other way. Given a \(k\times n\) matrix of full rank, consider the function \(e:[n]\rightarrow \mathbb {C}^k\) which takes \(i\) to the \(i\)’th column of our matrix. We can then use Definition 2.2 to put a matroid structure on \([n]\). Since the action of \(GL_k\) clearly does not change which matroid we get, we have assigned a matroid in a consistent way to every point of the Grassmannian. The Plücker coordinate \(p_S\) corresponding to some \(S\in \left( {\begin{array}{c}[n]\\ k\end{array}}\right) \) is given by the determinant of the submatrix defined by taking the columns in \(S\). So \(p_S\) vanishes precisely when these \(k\) columns fail to span \(\mathbb {C}^k\), that is, precisely when \(S\) fails to be a basis of our matroid.
Given a matroid \(M\) of rank \(k\) on \([n]\), the open matroid variety \(X^\circ (M)\) is the subset of \(G(k,n)\) consisting of all points whose matroid is \(M\). This is a locally closed subvariety of \(G(k,n)\): it is defined by taking all Plücker coordinates corresponding to bases of \(M\) to be nonzero and all the other Plücker coordinates to be zero. The closure of \(X^\circ (M)\) is called the matroid variety \(X(M)\). Similarly, we can define a matroid variety inside \(\mathrm {Mat}_{k\times n}\) in the same way. The open matroid variety in \(\mathrm {Mat}_{k\times n}\) does not intersect the subvariety of matrices of less than full rank, but its closure will in general.
The reader who is familiar with the definition of Schubert varieties may be tempted to ignore the definition above and take \(X(M)\) to be the subvariety of \(G(k,n)\) defined by setting all the Plücker coordinates corresponding to nonbases of \(M\) to zero. Sadly, this is not the same:
Counterexample 2.6
Consider the rank-3 matroid \(A\) on \([7]\) generated by the conditions that \(\{1,2,7\}\), \(\{3,4,7\}\), and \(\{5,6,7\}\) have rank 2. The variety \(X(A)\) is not cut out by the ideal \((p_{127},p_{347},p_{567})\). That ideal cuts out two components: \(X(A)\) and the variety of the matroid in which 7 is a loop. The ideal of \(X(A)\) is actually \((p_{127},p_{347},p_{567},p_{124}p_{356}-p_{123}p_{456})\).
2.3 Operations on matroids and matroid varieties
In general, as mentioned in the introduction, matroid varieties are under no obligation to be geometrically well-behaved. They do not have to be irreducible, equidimensional, normal, or even generically reduced (if given the appropriate scheme structure), and even the problem of determining whether \(X(M)\) is empty or not is NP-hard ([10]). (See [12] for a discussion of how bad these varieties can get.) Still, our goal in this paper is to find some way to control the codimension of a matroid variety, at least in some nice cases. With this in mind, we establish some results which describe the effects of some simple matroid operations on the corresponding matroid varieties.
Definition 2.7
 (1)Let \(M\) be a matroid on \(E\) and \(N\) a matroid on \(F\). The direct sum of \(M\) and \(N\) is the matroid \(M\oplus N\) on \(E\sqcup F\) defined by$$\begin{aligned} {{\mathrm{rk}}}_{M\oplus N}(S)={{\mathrm{rk}}}_M(S\cap E)+{{\mathrm{rk}}}_N(S\cap F). \end{aligned}$$
 (2)
If \(M\) is a matroid, the loop extension of \(M\) is the matroid formed by taking the direct sum of \(M\) with the unique matroid of rank 0 on the one-element set \(\{x\}\), so that the new element \(x\) is a loop.
 (3)
The coloop extension of \(M\) is the matroid formed by taking the direct sum of \(M\) with the unique matroid of rank 1 on \(\{x\}\), so that \(x\) is a coloop.
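A minimal sketch (ours, with hypothetical names) of Definition 2.7 in terms of rank functions; the loop and coloop extensions are simply the two possible direct sums with a matroid on a one-element set.

```python
def direct_sum_rank(rk_M, E, rk_N, F):
    """Rank function of M ⊕ N on E ⊔ F: rk(S) = rk_M(S ∩ E) + rk_N(S ∩ F)."""
    E, F = set(E), set(F)
    return lambda S: rk_M(set(S) & E) + rk_N(set(S) & F)

def loop_extension_rank(rk_M, E, x):
    """Direct sum with the unique rank-0 matroid on {x}."""
    return direct_sum_rank(rk_M, E, lambda S: 0, {x})

def coloop_extension_rank(rk_M, E, x):
    """Direct sum with the unique rank-1 matroid on {x}."""
    return direct_sum_rank(rk_M, E, lambda S: len(S), {x})

# U_{1,2} on {1, 2}, extended by a loop and by a coloop (new element labelled 3).
rk_M = lambda S: min(len(S), 1)
rk_loop = loop_extension_rank(rk_M, {1, 2}, 3)
rk_coloop = coloop_extension_rank(rk_M, {1, 2}, 3)
```

The loop extension leaves the total rank at 1, while the coloop extension raises it to 2, as expected.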
It is straightforward to compute the codimension of \(X(M\oplus N)\) given the codimensions of \(X(M)\) and \(X(N)\).
Proposition 2.8
If \(X(M)\subseteq G(k,n)\) and \(X(N)\subseteq G(k',n')\) are matroid varieties, then \({{\mathrm{codim}}}X(M\oplus N)={{\mathrm{codim}}}X(M)+{{\mathrm{codim}}}X(N)\).
Corollary 2.9
Let \(X(M)\subseteq G(k,n)\) be a matroid variety. Then if \(M\oplus x_0\) is the loop extension of \(M\) and \(M\oplus x_1\) is the coloop extension of \(M\), we have \(\dim X(M\oplus x_0)=\dim X(M\oplus x_1)=\dim X(M)\).
Definition 2.10
If \(M\) cannot be written as a direct sum in a nontrivial way, we say that \(M\) is connected. If we write \(M=\bigoplus _iA_i\) with each \(A_i\) connected, then the \(A_i\)’s are uniquely determined, and we call them the connected components of \(M\).

\(M\) is connected if there is no proper, nonempty subset \(S\subseteq E\) for which \({{\mathrm{rk}}}S+{{\mathrm{rk}}}(E\setminus S)={{\mathrm{rk}}}E\).

A circuit of \(M\) is a minimal dependent set, that is, a dependent set \(C\) for which every proper subset is independent. We can define an equivalence relation on \(E\) by saying \(x\) is equivalent to \(y\) if either \(x=y\) or there is a circuit of \(M\) containing both \(x\) and \(y\). The connected components of \(M\) are the equivalence classes under this relation.
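Both characterizations are easy to test by brute force on small matroids. The sketch below (ours, with hypothetical names) computes circuits as minimal dependent sets and then merges elements that share a circuit:

```python
from itertools import combinations

def circuits(bases, E):
    """Circuits: dependent sets all of whose proper subsets are independent."""
    E = sorted(E)
    def independent(S):
        return any(set(S) <= B for B in bases)
    dependent = [set(S) for r in range(1, len(E) + 1)
                 for S in combinations(E, r) if not independent(S)]
    return [C for C in dependent if all(independent(C - {x}) for x in C)]

def connected_components(bases, E):
    """Merge elements that share a circuit; the resulting classes are the components."""
    comps = [{x} for x in E]
    for C in circuits(bases, E):
        touched = [c for c in comps if c & C]
        comps = [c for c in comps if not (c & C)] + [set().union(C, *touched)]
    return comps

# U_{1,2} ⊕ U_{1,2} on {1,2} ⊔ {3,4}: a basis picks one element from each pair.
bases = {frozenset({1, 3}), frozenset({1, 4}), frozenset({2, 3}), frozenset({2, 4})}
comps = connected_components(bases, {1, 2, 3, 4})
```

Here the circuits are \(\{1,2\}\) and \(\{3,4\}\), recovering the two direct summands as components.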
Definition 2.11
Let \(M\) be a matroid of rank \(k\) on a set \(E\). The dual of \(M\) is the matroid \(M^*\) on \(E\) whose bases are exactly the complements of bases of \(M\).
The rank of a set \(S\) in \(M^*\) works out to be \(\#S-k+{{\mathrm{rk}}}(E\setminus S)\). In particular \(M^*\) has rank \(\#E-k\). The following result follows directly from Definition 2.11 and Construction 2.5.
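A quick brute-force check (our own sketch, not the paper's) of Definition 2.11 and the rank formula \({{\mathrm{rk}}}_{M^*}(S)=\#S-k+{{\mathrm{rk}}}(E\setminus S)\), on the uniform matroid \(U_{2,3}\), whose dual is \(U_{1,3}\):

```python
def rk(bases, F):
    """Rank via bases: max |B ∩ F|."""
    F = set(F)
    return max(len(B & F) for B in bases)

def dual_bases(bases, E):
    """Bases of M* are exactly the complements of bases of M."""
    E = frozenset(E)
    return {E - B for B in bases}

# U_{2,3}: k = 2, E = {1,2,3}, bases are the 2-element subsets.
E = {1, 2, 3}
bases = {frozenset({1, 2}), frozenset({1, 3}), frozenset({2, 3})}
k = 2
star = dual_bases(bases, E)   # should be U_{1,3}: the singletons
```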
Proposition 2.12
There is an isomorphism \(\omega :G(k,n)\rightarrow G(n-k,n)\) which takes \(p_S\) to \(p_{[n]\setminus S}\) for \(S\in \left( {\begin{array}{c}[n]\\ k\end{array}}\right) \). For a rank-\(k\) matroid \(M\) on \([n]\) this restricts to an isomorphism \(X(M)\cong X(M^*)\).
It is straightforward to check that \(M\) is connected if and only if \(M^*\) is. In fact, \((A\oplus B)^*=A^*\oplus B^*\).
Finally, for a matroid \(M\) on a set \(E\), we define two different ways to put the structure of a matroid on a subset of \(E\). One of them corresponds to restricting to a subspace of a vector space, and the other corresponds to taking a quotient of vector spaces.
Definition 2.13
Suppose \(S\subseteq E\). The restriction of \(M\) to \(S\) is the matroid \(M_S\) on \(S\) in which the rank of any subset of \(S\) is just its rank in \(M\). In particular, \({{\mathrm{rk}}}(M_S)={{\mathrm{rk}}}S\). We will sometimes also refer to this matroid as the result of deleting \(E\setminus S\). In this context, when we refer to \(S\) itself as a matroid, we will always mean the restriction to \(S\).
The contraction of \(S\) is the matroid \(M/S\) on \(E\setminus S\) in which the rank of any set \(T\) is \({{\mathrm{rk}}}_M(T\cup S)-{{\mathrm{rk}}}_M(S)\). In particular, \({{\mathrm{rk}}}(M/S)={{\mathrm{rk}}}_ME-{{\mathrm{rk}}}_MS\).
It is important to note that these two constructions are dual to each other. That is, \((M_S)^*=M^*/(E\setminus S)\), and \((M/S)^*=M^*_{E\setminus S}\).
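The duality \((M/S)^*=M^*_{E\setminus S}\) can be verified numerically on small examples. Below is our own sketch using rank functions, with \(M\) the uniform matroid \(U_{2,4}\) and \(S=\{1\}\); since restriction is rank-preserving, restricting \(M^*\) to \(E\setminus S\) just means evaluating its rank function on subsets of \(E\setminus S\), and both sides should give the rank function of \(U_{2,3}\) on \(\{2,3,4\}\).

```python
def contraction_rank(rk, S):
    """Rank function of M/S on E∖S: rk_{M/S}(T) = rk(T ∪ S) - rk(S)."""
    S = set(S)
    return lambda T: rk(set(T) | S) - rk(S)

def dual_rank(rk, E):
    """Rank function of M*: rk*(T) = #T - rk(E) + rk(E∖T)."""
    E = set(E)
    return lambda T: len(set(T)) - rk(E) + rk(E - set(T))

# M = U_{2,4} on {1,2,3,4}; contract S = {1}.
rk_M = lambda T: min(len(set(T)), 2)
E, S = {1, 2, 3, 4}, {1}
lhs = dual_rank(contraction_rank(rk_M, S), E - S)   # rank function of (M/S)*
rhs = dual_rank(rk_M, E)                            # rank function of M*, on subsets of E∖S
```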
3 Properties of the expected codimension
We now study how \({{\mathrm{ec}}}_\mathfrak {S}\) changes for different choices of \(\mathfrak {S}\). Throughout this section, \(M\) is a matroid of rank \(k\) on a set \(E\).
Proposition 3.1
 (1)
\({{\mathrm{ec}}}_{\mathfrak {S}}(M)={{\mathrm{ec}}}_{\mathfrak {S}'}(M^*)\)
 (2)
For \(S\in \mathfrak {S}\), \(a_{\mathfrak {S}}(S)=b_{\mathfrak {S}'}(E\setminus S)\), where the latter is computed in \(M^*\).
Proof
What is the point of going through this? Our immediate goal is to determine the conditions under which the expected codimension can be computed with respect to some set other than \(\mathcal {P}(E)\) and still give the same answer. To figure this out, it would be enough to establish a condition for when \({{\mathrm{ec}}}_{\mathfrak {S}}(M)={{\mathrm{ec}}}_{\mathfrak {S}\setminus \{Z\}}(M)\) for some set \(Z\). In fact, we can do a little better:
Proposition 3.2
 (1)
\({{\mathrm{ec}}}_{\mathfrak {S}}(M)-{{\mathrm{ec}}}_{\mathfrak {S}\setminus \{Z\}}(M)=a_{\mathfrak {S}}(Z)b_{\mathfrak {S}}(Z)\).
 (2)
For \(S\in \mathfrak {S}\setminus \{Z\}\), \(a_{\mathfrak {S}}(S)-a_{\mathfrak {S}\setminus \{Z\}}(S)=a_{\mathfrak {S}}(Z)\mu _{\mathfrak {S}}(Z,S)\).
 (3)
For \(S\in \mathfrak {S}\setminus \{Z\}\), \(b_{\mathfrak {S}}(S)-b_{\mathfrak {S}\setminus \{Z\}}(S)=\mu _{\mathfrak {S}}(S,Z)b_{\mathfrak {S}}(Z)\).
Proof
Corollary 3.3
Given \(\mathfrak {A}\subseteq \mathfrak {S}\), if \(a_{\mathfrak {S}}(A)=0\) for each \(A\in \mathfrak {A}\), then \({{\mathrm{ec}}}_{\mathfrak {S}\setminus \mathfrak {A}}(M)={{\mathrm{ec}}}_{\mathfrak {S}}(M)\), and similarly with \(a\) replaced with \(b\).
Proof
Remove the elements of \(\mathfrak {A}\) from \(\mathfrak {S}\) one by one. By part 1 of the proposition, removing something for which \(a=0\) does not change \({{\mathrm{ec}}}\), and by part 2, the remaining elements of \(\mathfrak {A}\) will still have \(a=0\) after some have been removed. The argument is exactly analogous for \(b\). \(\square \)
This result will be a lot more useful if we can find a lot of sets for which \(a\) and \(b\) are zero. Luckily, we can:
Proposition 3.4
Suppose that \(S\in \mathfrak {S}\) is disconnected, say \(S=\bigoplus _iS_i\). Suppose further that, for each \(T\subseteq S\) for which \(T\in \mathfrak {S}\), we also have that each connected component of \(T\) is in \(\mathfrak {S}\). Then \(a_\mathfrak {S}(S)=0\).
Proof
Simply by dualizing everything, we get a version of this statement about \(b\). Suppose that \(S\in \mathfrak {S}\) is such that \(M/S\) is disconnected, and that whenever \(T\supseteq S\) is in \(\mathfrak {S}\), say \(M/T=\bigoplus A_i\), we have each \(T\cup A_i\in \mathfrak {S}\). Then \(b_\mathfrak {S}(S)=0\).
In particular, Proposition 3.4 and Corollary 3.3 together imply that, starting with all of \(\mathcal {P}(E)\), we can remove any number of disconnected sets, or any number of sets \(S\) for which \(M/S\) is disconnected, and end up with the same expected codimension, because the extra condition in Proposition 3.4 will be trivially satisfied. Note that it does not say that we can remove sets of both kinds at the same time: Proposition 3.2 tells us that removing sets for which \(b=0\) does not change values of \(b\) for other sets, but values of \(a\) can and will change.
First we need a lemma:
Lemma 3.5
Suppose that \(M\) is connected and that \(S\subseteq M\) is connected. Say \(M/S=\bigoplus _iA_i\) where each \(A_i\) is connected in \(M/S\). Then each \(A_i\cup S\) is connected in \(M\).
Proof
Suppose \(A_i\cup S\) is disconnected. (Recall that when we refer to a subset of \(M\) as a matroid we mean the restriction of \(M\) to that subset.) Write \(A=A_i\cup S\) and \(B=\bigcup _{j\ne i}A_j\cup S\), so that \(M/S=(A\setminus S)\oplus (B\setminus S)\). Because \(S\) is connected, it must be contained in a connected component of \(A\). Dually, since \(A/S\cong A_i\) is connected, \(S\) must contain all but one connected component of \(A\). So in fact \(S\) is a connected component of \(A\), say \(A=S\oplus C\).
We know that \({{\mathrm{rk}}}M-{{\mathrm{rk}}}S=({{\mathrm{rk}}}A-{{\mathrm{rk}}}S)+({{\mathrm{rk}}}B-{{\mathrm{rk}}}S)\), but our decomposition of \(A\) gives us that the right-hand side is \({{\mathrm{rk}}}C+{{\mathrm{rk}}}B-{{\mathrm{rk}}}S\), so \({{\mathrm{rk}}}M={{\mathrm{rk}}}B+{{\mathrm{rk}}}C\). So in fact \(M=B\oplus C\), contradicting the connectedness of \(M\). \(\square \)
Note that by applying the lemma inductively to the \(A_i\) themselves, we get that any \(S\cup \bigcup _{i\in I}A_i\) is also connected. Again we can extract a dual version of this statement: if \(M\) and \(M/S\) are connected but \(S=\bigoplus B_i\) with each \(B_i\) connected, then each \(M/B_i\) is connected.
Theorem 3.6
Suppose that \(M\) is connected, that \(\mathfrak {S}\) contains every set \(S\) for which both \(S\) and \(M/S\) are connected, and that whenever \(S\in \mathfrak {S}\), all of the connected components of \(S\) are also in \(\mathfrak {S}\). Then \({{\mathrm{ec}}}_{\mathfrak {S}}(M)={{\mathrm{ec}}}(M)\).
Proof
Starting with all of \(\mathcal {P}(E)\), using Corollary 3.3 we may remove every set \(S\) for which \(M/S\) is disconnected and \(S\notin \mathfrak {S}\). Call the resulting collection \(\mathfrak {T}\). If \(T\in \mathfrak {T}\setminus \mathfrak {S}\), we know that \(M/T\) is connected, or we would have removed it already. So \(T\) must be disconnected, or else it would be in \(\mathfrak {S}\). Write \(T=\bigoplus _i T_i\) with each \(T_i\) connected.
We know that the \(T_i\) themselves are in \(\mathfrak {T}\): each \(M/T_i\) is connected by the dual version of Lemma 3.5, so in fact each \(T_i\in \mathfrak {S}\). We would like to use Proposition 3.4 to show that \(a_\mathfrak {T}(T)=0\). Suppose \(U\subsetneq T\) and \(U\in \mathfrak {T}\). If \(U\in \mathfrak {T}\setminus \mathfrak {S}\), then \(a_\mathfrak {T}(U)=0\) by induction, so such \(U\) do not contribute to \(a_\mathfrak {T}(T)\), that is, \(a_\mathfrak {T}(T)=a_\mathfrak {S}(T)\). But we may now apply Proposition 3.4: any \(U\subseteq T\) with \(U\in \mathfrak {S}\) has all its connected components in \(\mathfrak {S}\) by hypothesis. So \(a_\mathfrak {S}(T)=0\).
So by applying Corollary 3.3 once more, we may remove every set in \(\mathfrak {T}\setminus \mathfrak {S}\), which gives the result. \(\square \)
Note that, in particular, Lemma 3.5 implies that taking \(\mathfrak {S}\) to be the collection of all sets \(S\) for which both \(S\) and \(M/S\) are connected will satisfy the hypotheses of Theorem 3.6. (These sets are called flacets, and come up in the study of matroid polytopes. See [4, 2.6].)
Expected codimension turns out to be wellbehaved under direct sums:
Proposition 3.7
Proof
In particular, using Proposition 2.8, we see that if \(M\) and \(N\) have expected codimension, so does \(M\oplus N\). Since it is trivial to check that both matroids on a one-element set have expected codimension, this also applies to loop and coloop extensions.
We conclude this section with an example of a matroid that does not have expected codimension:
Counterexample 3.8
Consider the Pappus matroid \(P\), the rank-3 matroid on \([9]\) generated by the collinearities in Fig. 2. The only sets \(S\subseteq [9]\) for which both \(S\) and \(P/S\) are connected are the nine sets of points which lie on lines in the picture. (That is, 123, 456, 789, 157, 168, 247, 269, 348, and 359.) From this, we can easily compute that \({{\mathrm{ec}}}(P)=9\). However, the actual codimension of \(X(P)\) in \(G(3,9)\) is 8. This can be (and was) computed directly with a computer algebra system like Macaulay2; it also follows from computations performed in [3]. Either way, \(P\) does not have expected codimension.
This should not be especially surprising: the whole point of the Pappus matroid is that it demonstrates Pappus’s theorem, that is, the fact that given any eight of the collinearities in Fig. 2, the ninth comes for free. Our definition of expected codimension is unable to keep track of “global” constraints like this one, so it treats all nine rank conditions as independent.
4 Positroid varieties
It is, as we have already observed, hopeless to expect to be able to say anything especially nice about the expected codimension of a general matroid variety, and we have already seen an example where our definition fails to produce the actual codimension of a matroid variety. There is, however, a much nicer class of matroids for which we will be able to say a lot more.
Proposition 4.1
 (1)
For some collection of cyclic intervals (that is, cyclic permutations of intervals) \(I_i\) and some corresponding set of ranks \(r_i\), \(M\) is the largest matroid on \([n]\) in which the rank of \(I_i\) is \(r_i\).
 (2)
\(M\) is the matroid of a collection of vectors \(v_1,\ldots ,v_n\) in \(\mathbb {R}^k\) for which all \(k\times k\) minors are nonnegative.
 (3)
\(X(M)\) is the image of a Richardson variety in the flag variety \(Fl(n)\) under the natural projection map \(Fl(n)\rightarrow G(k,n)\).
Note that these definitions all depend on the cyclic ordering of the base set; it is possible to change whether some matroid satisfies these conditions just by permuting the elements of \([n]\).
Definition 4.2
A matroid satisfying any of the equivalent conditions just listed is called a positroid, and its matroid variety is called a positroid variety. (Positroids were first introduced and studied in [9].)
Note that, in particular, Schubert and Richardson matroids are positroids. Positroid varieties have many nice geometric properties [6]. In particular, they are always reduced, irreducible, and Cohen–Macaulay, and unlike general matroid varieties (see Counterexample 2.6) they are always cut out by Plücker coordinates. Positroids are very well-studied already, and there are several different combinatorial gadgets that have been invented to describe them, some of which are described in [7].
Because a positroid is generated by rank conditions on cyclic intervals, we can describe it completely by listing the rank of each cyclic interval.
Definition 4.3
([7]) Take a positroid \(P\) on \([n]\). We will think of elements \([n]\) as representatives of equivalence classes of integers mod \(n\) with the obvious cyclic order (that is, 1 comes right after \(n\)), and we will use interval notation with this in mind; for example, if \(n=6\), then we will write \([5,8]=[5,2]=\{5,6,1,2\}\). In particular, \([5,5]=\{5\}\), whereas \([5,11]=[5,10]=[5,4]=\{5,6,1,2,3,4\}\). We can form a cyclic rank matrix by setting \(r_{ij}={{\mathrm{rk}}}_P([i,j])\) for any integers \(i,j\) with \(0\le ji\le n\).

Each \(r_{ii}\) is 0 or 1.

For any \(i,j\), either \(r_{i-1,j}=r_{ij}\) or \(r_{i-1,j}=r_{ij}+1\), and similarly for \(r_{i,j+1}\).

If \(r_{ij}=r_{i-1,j}=r_{i,j+1}\), then \(r_{i-1,j+1}=r_{ij}\).
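Definition 4.3 is easy to implement directly. The sketch below is our own (hypothetical helper names); the rank function of the uniform positroid \(U_{2,4}\) is used as an example.

```python
def cyclic_interval(i, j, n):
    """The cyclic interval [i, j] in [n], read mod n (so [5, 8] = {5, 6, 1, 2} when n = 6)."""
    return {((i + t - 1) % n) + 1 for t in range(j - i + 1)}

def cyclic_rank_matrix(rk, n):
    """r_{ij} = rk([i, j]) for 1 <= i <= n and 0 <= j - i <= n."""
    return {(i, j): rk(cyclic_interval(i, j, n))
            for i in range(1, n + 1) for j in range(i, i + n + 1)}

# U_{2,4}: the rank of any set is min(its size, 2).
r = cyclic_rank_matrix(lambda S: min(len(S), 2), 4)
```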
Definition 4.4
([7]) Given a cyclic rank matrix corresponding to a positroid on \([n]\), we can form an affine permutation. This will be a bijection \(\pi :\mathbb {Z}\rightarrow \mathbb {Z}\) such that \(i\le \pi (i)\le i+n\) and \(\pi (i+n)=\pi (i)+n\) for all \(i\). We define \(\pi \) as a matrix by putting a 1 in position \((i,j)\) if \(r_{ij}=r_{i,j-1}=r_{i+1,j}\ne r_{i+1,j-1}\) and putting a 0 there otherwise. One can check that each row and each column will have exactly one 1. Note that to describe \(\pi \) it is enough to describe the images of the elements of \([n]\).
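Following the rule in Definition 4.4, one can read \(\pi \) off from the cyclic rank matrix. The sketch below is ours; it adopts the convention that an empty interval has rank 0 (an assumption on our part, adequate for loopless positroids). For the uniform positroid \(U_{2,4}\) this gives \(\pi (i)=i+2\).

```python
def affine_permutation(rk, n):
    """Read pi off the cyclic rank matrix: pi(i) = j when
    r_{ij} = r_{i,j-1} = r_{i+1,j} != r_{i+1,j-1}.
    Convention (ours): empty intervals have rank 0."""
    def cyclic_interval(i, j):
        return {((i + t - 1) % n) + 1 for t in range(j - i + 1)}
    def r(i, j):
        return rk(cyclic_interval(i, j)) if j >= i else 0
    pi = {}
    for i in range(1, n + 1):
        for j in range(i, i + n + 1):
            if r(i, j) == r(i, j - 1) == r(i + 1, j) != r(i + 1, j - 1):
                pi[i] = j
                break
    return pi

# Uniform positroid U_{2,4}: expect pi(i) = i + 2.
pi = affine_permutation(lambda S: min(len(S), 2), 4)
```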
The affine permutation gives a simple way to determine which rank conditions are necessary to define \(X(P)\):
Definition 4.5
The essential set of an affine permutation is defined by the following procedure: cross out all the positions strictly below or to the left of a 1 in the affine permutation matrix, and take the positions which are at the upper-right corners of their connected components. (This definition follows Fulton’s description in [5], though he did not refer to positroid varieties.)
By convention, we do not take positions on the upper-right edge of the matrix (that is, ones where \(j-i=n\)) to be essential. Imposing the rank conditions corresponding to the essential intervals is enough to define a positroid variety in \(G(k,n)\) as a scheme.
Example 4.6
We have already seen in Counterexample 3.8 a case in which the expected codimension of a matroid variety fails to line up with its actual codimension in the Grassmannian. The main result of this section is that this does not happen for positroids:
Theorem 4.7
Positroids have expected codimension.
In order to prove this, we are going to need to understand a little bit more about the matroid structure of a positroid. We refer repeatedly to restrictions and contractions of positroids; note that it follows directly from the second definition in Proposition 4.1 that these are again positroids.
Lemma 4.8
If \(P\) is a positroid on \([n]\), \(X\subseteq [n]\), and both \(P_X\) and \(P/X\) are connected, then \(X\) is an interval.
Proof
Suppose \(X\) is not an interval. Take \(c_1,c_2\in [n]\setminus X\) to lie in two different cyclic intervals of \([n]\setminus X\). Since \(P/X\) is connected, there is a circuit \(C\) of \(P/X\) which contains both \(c_1\) and \(c_2\). By restricting to \(X\cup C\), we may assume that \(P/X\) is a circuit. Similarly, for \(b_1,b_2\in X\) lying on different sides of \(c_1\) and \(c_2\) (so the named elements appear in the cyclic order \(b_1,c_1,b_2,c_2\)), there is a circuit \(B\) of \((P_X)^*=P/([n]\setminus X)\) containing both, and we may contract the elements of \(X\setminus B\) and assume that \((P_X)^*\) is a circuit, that is, that everything in \(X\) is parallel.
Now, delete all elements of \(X\) other than \(b_1\) and \(b_2\). This does not change the rank of any set in \(P/X\): everything in \(X\) was parallel to \(b_1\) and \(b_2\). Dually, contract all elements of \([n]\setminus X\) except \(c_1\) and \(c_2\). Now we have \(n=4\), and the sets \(\{1,3\}\) and \(\{2,4\}\) each have rank 1. This matroid is not a positroid, which can easily be checked, so we have a contradiction. \(\square \)
Lemma 4.9
The connected components of a positroid form a noncrossing partition. (This was also proved independently in [1].)
Proof
Suppose first that \(P\) has just two connected components, say \(P=A\oplus B\). Then \(P/A=B\) and \(P/B=A\) are also both connected, so Lemma 4.8 implies that they are both intervals. If there are more than two connected components, they no longer have to both be intervals, but for any two components \(C\) and \(D\), each of \(C\) and \(D\) must be an interval inside \(C\cup D\), which means in particular that they cannot cross. \(\square \)
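Since Lemma 4.9 is purely combinatorial, it is easy to test on examples. The following sketch (names and implementation ours) checks whether a partition of \([n]\) is noncrossing with respect to the cyclic order: two blocks cross exactly when, reading around the circle, their elements alternate between the blocks at least four times.

```python
from itertools import combinations

def crosses(A, B):
    """True if blocks A and B cross in the cyclic order on [n]: their
    elements, read in order, alternate between the two blocks at least
    four times (the pattern a, b, a, b up to rotation)."""
    labels = [x in A for x in sorted(set(A) | set(B))]
    # Collapse consecutive repeats; crossing <=> at least 4 runs.
    runs = [labels[0]]
    for t in labels[1:]:
        if t != runs[-1]:
            runs.append(t)
    return len(runs) >= 4

def is_noncrossing(blocks):
    """True if no two blocks of the partition cross."""
    return not any(crosses(A, B) for A, B in combinations(blocks, 2))

# {{1,2,5},{3,4},{6}} is noncrossing; {{1,3},{2,4}} is the basic crossing.
assert is_noncrossing([{1, 2, 5}, {3, 4}, {6}])
assert not is_noncrossing([{1, 3}, {2, 4}])
```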
Lemma 4.10
If \(P\) is a connected positroid on \([n]\) and \(I\subseteq [n]\) is an interval, then each connected component of \(I\) is an interval.
Proof
Say \(I=X\oplus \bigoplus _iY_i\) with \(X\) and each \(Y_i\) connected, and suppose \(X\) is not an interval, say \(X=\bigcup _kJ_k\) and \(I\setminus X=\bigcup _lJ'_l\) where each \(J_k\) and \(J'_l\) is an interval. Since the components of \(I\) have to form a noncrossing partition by Lemma 4.9, none of the \(Y_i\) can meet more than one of the \(J'_l\). So we may assume that the left and right endpoints of \(X\) coincide with those of \(I\) by removing the \(Y_i\) that lie to the left of \(X\)’s left endpoint or to the right of its right endpoint. We now know that all the \(J'_l\) lie in between two \(J_k\).
Just as in the proof of Lemma 4.8, the connectedness of \(X\) lets us conclude that there is a circuit of \(X^*=P/([n]\setminus X)\) that contains points in two different \(J_k\). Suppose there were a circuit of \(P/X\) containing a point of \(I\setminus X\) and a point of \(P\setminus I\). Then, because we forced all the points of \(I\setminus X\) to lie between intervals of \(X\), we would be in exactly the situation that gave us a contradiction in the proof of Lemma 4.8.
Lemma 4.11
For a positroid \(P\), let \(\mathfrak {I}\subseteq \mathcal {P}([n])\) be the collection of all cyclic intervals. For any interval \([i,j]\ne [n]\), \(a_\mathfrak {I}([i,j])\) is equal to the entry (either 0 or 1) at \((i,j)\) in the affine permutation matrix.
Proof
Proof of Theorem 4.7
First, it is enough to prove this for connected positroids: we can decompose a general positroid as a direct sum and apply Proposition 3.7.
Next, we claim that for a connected positroid \(P\), if \(\mathfrak {I}\subseteq \mathcal {P}([n])\) is the collection of all cyclic intervals of \([n]\), \({{\mathrm{ec}}}(P)={{\mathrm{ec}}}_\mathfrak {I}(P)\). This follows immediately from Theorem 3.6: Lemmas 4.8 and 4.10 say that taking \(\mathfrak {S}=\mathfrak {I}\) satisfies the hypotheses of the theorem.
5 Valuativity
Let \(M\) be a matroid on a set \([n]\). For each basis \(B\) of \(M\), consider the vectors in \(\mathbb {R}^n\) whose entries are 1 if the corresponding element of \([n]\) is in \(B\) and 0 otherwise. The convex hull of these vectors is called the matroid polytope of \(M\), written \(P(M)\). There are many examples of combinatorial properties of matroids that are encoded in the geometry of the matroid polytope.
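As a quick illustration (the helper below is ours, not from the text), the vertex set of \(P(M)\) can be written down directly from the bases; for the uniform matroid \(U_{2,4}\), whose bases are all 2-subsets of \(\{1,2,3,4\}\), one obtains the hypersimplex \(\Delta(2,4)\).

```python
from itertools import combinations

def basis_indicator_vectors(n, bases):
    """Vertices of the matroid polytope P(M): the 0/1 indicator vector
    in R^n of each basis B, with ground set [n] = {1, ..., n}."""
    return [tuple(1 if i in B else 0 for i in range(1, n + 1))
            for B in bases]

# Example: the uniform matroid U_{2,4}.
bases = [set(B) for B in combinations(range(1, 5), 2)]
vertices = basis_indicator_vectors(4, bases)
# P(U_{2,4}) is the hypersimplex Delta(2,4): six 0/1 vertices,
# each with coordinate sum 2.
assert len(vertices) == 6
assert all(sum(v) == 2 for v in vertices)
```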
Definition 5.1
Valuative matroid invariants are studied in detail in [2]. We single out the following result, which appears as [2, 5.4] in slightly different language:
Theorem 5.2
The set of Schubert matroids forms a basis for \(\mathrm {Mat}(n)\) modulo matroidal subdivisions.
We will show:
Theorem 5.3
Expected codimension is a valuative matroid invariant.
Since Schubert matroids, being positroids, have expected codimension, Theorem 5.2 gives us another way to think about expected codimension: one could have defined it by assigning each Schubert matroid its codimension and extending to all matroids by subdividing the matroid polytope and insisting that the invariant be valuative.
We will prove Theorem 5.3 by proving something stronger first:
Lemma 5.4
Lemma 5.5
Proof
This is just \((-1)^{\dim P}(\chi (P)-\chi (\partial P))\) where \(\chi \) is the Euler characteristic. So the result follows, because \(P\) is contractible and \(\partial P\) is homeomorphic to a \((\dim P-1)\)-sphere. \(\square \)
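Spelling out the arithmetic in the last step (assuming \(\dim P\ge 1\)): contractibility gives \(\chi(P)=1\), and the sphere \(S^{d-1}\) has \(\chi(S^{d-1})=1+(-1)^{d-1}\), so the quantity in question is

```latex
% With d = \dim P, d >= 1:
(-1)^{d}\bigl(\chi(P)-\chi(\partial P)\bigr)
  = (-1)^{d}\left(1-\bigl(1+(-1)^{d-1}\bigr)\right)
  = (-1)^{d}\,(-1)^{d}
  = 1.
```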
Proof of Lemma 5.4
Proof of Theorem 5.3
Acknowledgments
I am incredibly grateful to my advisor, David Speyer, for many fruitful conversations and ideas, some of which appear in this paper, and to Allen Knutson for the same. I also want to thank Jordan Watkins both for talking to me about the content of this paper and for his meticulous proofreading. This work was partially supported by NSF Grant DMS 0943832.
References
1. Ardila, F., Rincón, F., Williams, L.: Positroids and non-crossing partitions. arXiv preprint http://arxiv.org/abs/1308.2698 (2013)
2. Derksen, H., Fink, A.: Valuative invariants for polymatroids. Adv. Math. 225(4), 1840–1892 (2010)
3. Fehér, L.M., Némethi, A., Rimányi, R.: Equivariant classes of matrix matroid varieties. Comment. Math. Helv. 87(4), 861–889 (2012)
4. Feichtner, E.M., Sturmfels, B.: Matroid polytopes, nested sets and Bergman fans. Port. Math. (N.S.) 62(4), 437–468 (2005)
5. Fulton, W.: Flags, Schubert polynomials, degeneracy loci, and determinantal formulas. Duke Math. J. 65(3), 381–420 (1992)
6. Knutson, A., Lam, T., Speyer, D.E.: Projections of Richardson varieties. arXiv e-prints (2010)
7. Knutson, A., Lam, T., Speyer, D.E.: Positroid varieties: juggling and geometry. http://arxiv.org/abs/1111.3660v1 (2011)
8. Oh, S.: Positroids and Schubert matroids. J. Comb. Theory Ser. A 118(8), 2426–2435 (2011)
9. Postnikov, A.: Total positivity, Grassmannians, and networks. arXiv preprint http://arxiv.org/abs/math/0609764 (2006)
10. Shor, P.W.: Stretchability of pseudolines is NP-hard. Applied Geometry and Discrete Mathematics, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 4, pp. 531–554. American Mathematical Society, Providence (1991)
11. Speyer, D.E.: Tropical linear spaces. SIAM J. Discrete Math. 22(4), 1527–1558 (2008)
12. Vakil, R.: Murphy’s law in algebraic geometry: badly-behaved deformation spaces. Invent. Math. 164(3), 569–590 (2006)
13. White, N. (ed.): Theory of Matroids. Encyclopedia of Mathematics and its Applications, vol. 26. Cambridge University Press, Cambridge (1986)