Abstract
We identify the asymptotic distribution of the p-rank of the sandpile group of random directed bipartite graphs which are not too imbalanced. We show this matches exactly with that of the Erdős–Rényi random directed graph model, suggesting that the Sylow p-subgroups of this model may also be Cohen–Lenstra distributed. Our work builds on the results of Koplewitz, who studied p-rank distributions for unbalanced random bipartite graphs and showed that for sufficiently unbalanced graphs, the distribution of the p-rank differs from the Cohen–Lenstra distribution. Koplewitz (Sandpile groups of random bipartite graphs, https://arxiv.org/abs/1705.07519, 2017) conjectured that for random balanced bipartite graphs, the expected value of the p-rank is O(1) for any p. This work proves his conjecture and gives the exact distribution for the subclass of directed bipartite graphs.
1 Introduction
Given a directed graph G on n vertices, we can associate with it an Abelian group \(\Gamma (G)\) defined as
\(\Gamma (G) = {\mathbb {Z}}_0^n/M^T{\mathbb {Z}}^n,\)
where M is the Laplacian matrix of G and \({\mathbb {Z}}_0^n\) is the subspace of \({\mathbb {Z}}^n\) orthogonal to the all 1’s vector, i.e., those vectors with zero sum. This group is known as the sandpile group, as well as the Jacobian or critical group of G.
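For intuition, the p-rank of \(\Gamma (G)\) can be read off the Laplacian: since the rows of M sum to zero, \(M\mathbbm {1}=0\) and the corank of M over \({\mathbb {F}}_p\) is always at least 1; under the quotient definition above, the p-rank of the sandpile group is this corank minus 1. A minimal sketch in Python (our own illustration, not code from the paper):

```python
def rank_mod_p(rows, p):
    """Rank of an integer matrix over F_p via Gaussian elimination."""
    rows = [[x % p for x in r] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)  # inverse mod p, p prime
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                c = rows[i][col]
                rows[i] = [(a - c * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def p_rank_sandpile(laplacian, p):
    """F_p-rank of Gamma(G) for an n-vertex graph: (n - 1) - rank_p(M)."""
    return len(laplacian) - 1 - rank_mod_p(laplacian, p)

# Complete graph K_4: Gamma(K_4) = Z_4 x Z_4, whose 2-rank is 2.
K4 = [[3 if i == j else -1 for j in range(4)] for i in range(4)]
```

For example, `p_rank_sandpile(K4, 2)` returns 2, matching \(\Gamma (K_4)\cong {\mathbb {Z}}_4\oplus {\mathbb {Z}}_4\).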
A well-studied question about the sandpile group is its asymptotic structure for random graphs. The sandpile group of the Erdős–Rényi random graph is well understood. In [11], Wood showed for \(G \in G(n,q)\) an Erdős–Rényi random graph, the p-parts of \(\Gamma (G)\) are independent for a finite set of p and are isomorphic to a given p-group H with probability proportional to
\(\frac{\#\{\text {symmetric, bilinear, perfect }\phi :H\times H\rightarrow {\mathbb {C}}^*\}}{|H|\cdot |{\text {Aut}}(H)|}.\qquad \mathrm {(1)}\)
This distribution is related to the Cohen–Lenstra distribution, which samples a given p-group with probability inversely proportional to the size of its automorphism group. Explicitly, a random p-group G is Cohen–Lenstra distributed if its probability of being equal to a fixed p-group H is given by
\({\mathbb {P}}\left( G\cong H\right) = \frac{1}{|{\text {Aut}}(H)|}\prod _{k=1}^{\infty }\left( 1-p^{-k}\right) .\)
The distribution was first observed by Cohen and Lenstra as the empirical distribution of the class groups of real quadratic number fields. It also appears as the distribution of the cokernel of random matrices with independent entries. This was shown first for Haar measure by Friedman and Washington [1] and later for a more general class of distributions by Maples [6] and Wood [12]. We will refer to the distribution given by Eq. 1 as the symmetric Cohen–Lenstra distribution.
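As a quick numerical illustration of this cokernel phenomenon (our own experiment; the sample sizes and tolerance are arbitrary choices of ours), the probability that an n×n matrix with iid uniform entries over \({\mathbb {F}}_p\) is nonsingular approaches \(\prod _{i\ge 1}(1-p^{-i})\), about 0.5601 for p = 3:

```python
import random

def rank_mod_p(rows, p):
    """Rank over F_p via Gaussian elimination."""
    rows = [[x % p for x in r] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                c = rows[i][col]
                rows[i] = [(a - c * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def full_rank_frequency(n, p, trials, seed):
    """Fraction of n x n iid-uniform matrices over F_p that are nonsingular."""
    rng = random.Random(seed)
    hits = sum(
        rank_mod_p([[rng.randrange(p) for _ in range(n)] for _ in range(n)], p) == n
        for _ in range(trials)
    )
    return hits / trials

# Limiting prediction prod_{i >= 1}(1 - p^{-i}), about 0.5601 for p = 3.
prediction = 1.0
for i in range(1, 80):
    prediction *= 1 - 3 ** -i
```

With a few hundred samples of modest size, the empirical frequency should land within a few percent of the prediction.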
Another important class of random graphs is those with vertices of a specified fixed degree. Mészáros [7] showed that random d-regular directed graphs with \(d\ge 3\) have sandpile groups with the Cohen–Lenstra distribution. The sandpile groups of undirected d-regular graphs have the same distribution as in Eq. 1 for d odd. The distribution for even d is different, but related. See Mészáros’ paper for details.
Though ubiquitous, not all random groups are Cohen–Lenstra distributed. In [3], Koplewitz shows that the distribution of the p-rank of sandpile groups of random bipartite graphs on vertex sets \(V_1,V_2\) depends on the ratio of the vertex set sizes \(\alpha = \frac{|V_1|}{|V_2|}\le 1\). He showed that for \(\alpha < 1/p,\) the expected p-rank is proportional to n, and at the critical threshold \(\alpha = 1/p\) the p-rank is proportional to \(\sqrt{n}\). So in these regimes, instead of the Cohen–Lenstra distribution, one does not get a limiting distribution at all.
For more balanced bipartite graphs where \(1>\alpha >1/p\), Koplewitz showed the expected p-rank is O(1), where the constant depends on p. Koplewitz’s method does not extend to the balanced case \(\alpha =1,\) but one can infer from his result that the expected p-rank is o(n) in that case. Koplewitz conjectured that in the balanced case the expected p-rank is O(1). While the work of Koplewitz applies to the more challenging case of undirected graphs, his methods generalize easily to the directed case as well. In this work, we achieve Koplewitz’s conjectured bound and find the exact limiting distribution of the p-rank in the case of directed bipartite graphs with \(\alpha >1/p\).
The result of this paper improves Koplewitz’s result in two ways: it applies to the balanced case (\(\alpha = 1\)) and gives the limiting p-rank distribution, not only its expectation.
Our method and result are similar in spirit to other studies of matrices with fully iid entries over finite fields ([5, 6, 8, 12]). For instance, it is known (see [6, 12]) that under mild min-entropy conditions, for a random \(n\times (n+u)\) matrix A with iid entries over \({\mathbb {F}}_p\), we have
\({\mathbb {P}}\left( {{\,\textrm{corank}\,}}(A) = k\right) = (1+o(1))\,p^{-k(k+u)}\frac{\prod _{i=k+u+1}^{\infty }\left( 1-p^{-i}\right) }{\prod _{i=1}^{k}\left( 1-p^{-i}\right) }.\)
We achieve essentially the same result for the matrix model of the Laplacian of a not too imbalanced random directed bipartite graph, which we describe below.
Let \(1\ge \alpha >1/p\). Consider a random directed bipartite graph on sets of n and \(\lfloor \alpha n\rfloor \) vertices (in the future, we will omit the floors when they make no essential difference) given by including each crossing edge independently with probability \(0<q<1\), a constant independent of n. Denote the Laplacian of this graph by M. Note the rows of this matrix are independent (in the probabilistic sense).
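For concreteness, the matrix model can be sketched as follows (our own illustrative code; the function name and parameters are ours): each vertex gets iid Bernoulli(q) out-edges to the opposite part, the crossing entries of the row are \(-1\) per out-edge, and the diagonal entry is the out-degree, so every row sums to zero.

```python
import random

def bipartite_laplacian(n, alpha, q, seed=None):
    """Laplacian of a random directed bipartite graph on parts of sizes
    n and floor(alpha * n): each crossing edge is included independently
    with probability q; row i carries -1 for each out-edge of vertex i
    and the out-degree on the diagonal, so every row sums to zero."""
    rng = random.Random(seed)
    m = n + int(alpha * n)
    M = [[0] * m for _ in range(m)]
    for i in range(m):
        # out-edges of vertex i go only to the opposite part
        targets = range(n, m) if i < n else range(n)
        for j in targets:
            if rng.random() < q:
                M[i][j] = -1
                M[i][i] += 1
    return M

M = bipartite_laplacian(6, 0.5, 0.4, seed=2)
```

Note that distinct rows use disjoint sets of edge variables, which is the probabilistic independence of rows used throughout.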
Theorem 1.1
Let p be a prime such that \(p\ll \sqrt{n}\). Then for any \(k\le (1+\alpha )n,\) we have
\({\mathbb {P}}\left( {{\,\textrm{rank}\,}}_p(\Gamma (G)) = k\right) = \left( 1+O\left( \exp \left( -Cn/p^2\right) \right) \right) p^{-k(k+1)}\frac{\prod _{i=k+2}^{\infty }\left( 1-p^{-i}\right) }{\prod _{i=1}^{k}\left( 1-p^{-i}\right) }.\)
Based on computations and intuition, we conjecture that the p-rank of the sandpile group of undirected bipartite graphs should have the distribution predicted by Eq. 1 for \(p>2\). However, our method depends on the independence between the rows. We further conjecture that the cokernels in the directed and undirected models are actually Cohen–Lenstra and symmetric Cohen–Lenstra distributed, respectively.
1.1 Notation
For J, a set of indices, we denote by \(M^J\) the restriction of the matrix to columns in J, and by \(M_J\) the restriction of the matrix to rows in J. When \(J = [k],\) we abuse the notation by writing \(M^k\) or \(M_k\). In particular, \(M_n\) is the top half of our matrix and \(M_{(1+\alpha )n}\) is M.
Generally, W will denote the row space of M with the same subscripting convention and \(\beta = \min (q, 1-q)\). But we will define them in place before using them.
We use \(\mathbbm {1}\) to denote the all 1’s vector.
We fix a prime p and work mod p for all of our results. We treat p as a constant, but our arguments work for \(p=o(\sqrt{n})\). We note our results do not necessarily hold when p grows faster with n. We use C to denote constants, not necessarily the same, which may depend on p, q and \(\alpha \).
1.2 Structure of the Paper
Our method is a row exposure process similar to the method by Maples [6] and Nguyen and Paquette [8]. Note that we use a row exposure process instead of a column exposure process, because, as the Laplacian of a directed graph is traditionally defined, the rows are independent but not the columns.
In Sect. 2, we show that \(M_{(1+\alpha )n-r}\) is full rank with high probability with respect to r. We make heavy use of a variant of Odlyzko’s lemma, Lemma 2.3.
In Sect. 3, we show that the remaining r rows increase the rank of the matrix with probability similar to what one would predict with the uniform model. The argument will make use of a notion of structure and results about it due to Luh, Meehan and Nguyen [4]. The technique is modified for the Laplacian setting.
In Sect. 4, we combine these two facts to show that the distribution of the rank of M is exponentially close to the distribution of the rank of a uniform random matrix.
2 Full Rank Until the Final Rows
In this section, we prove that \(M_{(1+\alpha )n-r}\) is full rank with probability at least \(1-\exp (-C r)\).
We reintroduce a concept from Koplewitz [3] called min-entropy, which is itself a variant of a definition from Maples [6]. It combines a nonconcentration property with a weak notion of independence.
Definition 2.1
Let A be a random matrix over \({\mathbb {Z}}/p{\mathbb {Z}}\), \(\beta >0\) and I a set of entries in A. We say that \(a_{ij}\) has min-entropy \(\beta \) with respect to I if, for any conditioning on a possible set of values for the entries in I, the probability that \(a_{ij}\) takes any given value in \({\mathbb {Z}}/p{\mathbb {Z}}\) is bounded above by \(1-\beta \).
We say the matrix or vector has min-entropy \(\beta \) if every entry of the matrix or vector has min-entropy \(\beta \) with respect to the set of other entries. We say a subset of entries has min-entropy \(\beta \) if every entry in the subset has min-entropy \(\beta \) with respect to the other entries in the subset.
The Laplacian has min-entropy 0, because the entries in a row off the diagonal determine the value of the diagonal. Instead, we have the following.
Fact 2.2
The diagonal entries of \(M_n\) together with the entries of \(M_n\) in columns \(n+1\) to \((1+\alpha )n-p\) have min-entropy \(\min (q,1-q)^p\).
Many of the lemmas in this section make use of the following variant of a lemma of Odlyzko [10] and its corollary. Corollary 2.4 is Theorem 10 in [3] and Lemma 2.3 is a statement in its proof.
Lemma 2.3
(Odlyzko) Let V be a subspace of \({\mathbb {F}}_p^n\) of codimension k. Then, for a random vector \(X = (x_i)_{i=1}^n\in {\mathbb {F}}_p^n\) with min-entropy \(\beta \), we have
\({\mathbb {P}}\left( X\in V\right) \le (1-\beta )^{k}.\)
Corollary 2.4
An \(n\times m\), \(n>m\) matrix of min-entropy \(\beta \) is full rank with probability at least
\(1-\frac{1}{\beta ^2}(1-\beta )^{n-m}.\)
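The shape of such full-rank guarantees is easy to probe empirically (our own simulation; the bound \(1-\frac{1}{\beta ^2}(1-\beta )^{n-m}\) is taken here as an assumption, and all parameters are arbitrary): tall Bernoulli matrices mod p are essentially always of full column rank.

```python
import random

def rank_mod_p(rows, p):
    """Rank over F_p via Gaussian elimination."""
    rows = [[x % p for x in r] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                c = rows[i][col]
                rows[i] = [(a - c * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def full_column_rank_rate(n, m, p, q, trials, seed):
    """Empirical P(full column rank) for n x m (n > m) matrices with
    iid Bernoulli(q) entries, viewed mod p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        A = [[1 if rng.random() < q else 0 for _ in range(m)] for _ in range(n)]
        hits += rank_mod_p(A, p) == m
    return hits / trials

n, m, p, q = 40, 12, 2, 0.4
beta = min(q, 1 - q)
bound = 1 - (1 - beta) ** (n - m) / beta ** 2  # assumed Corollary-2.4-style bound
```

For these parameters the assumed bound is already above 0.9999, and the empirical rate should match it.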
These facts are not directly applicable in the Laplacian model, as these models inherently yield matrices/vectors with 0 min-entropy. But they will be useful when applied to trimmed submatrices/subvectors which have nontrivial min-entropy.
Lemma 2.5
There exists \(C>0\) such that with probability at least \(1-O(\exp (-Cn))\), the matrix \(M_n\) has full rank.
Proof
Write
where \(\delta _j =\sum _{i=1}^{\alpha n}x_{j,i}\). Notice that the set of rows with \(\delta _j\ne 0\) is linearly independent and a row with \(\delta _j=0\) is not in its span (unless the row is 0). So it suffices to show that with high probability, the rows with \(\delta _j=0\) are linearly independent.
Toward that, we show there are not many rows with \(\delta _j=0\). Let \(I\subseteq [n]\) be the set of all j with \(\delta _j = 0\). Lemma 2.6 implies that
\({\mathbb {P}}\left( \delta _j = 0\right) \le \frac{1}{p}+\varepsilon ',\)
where \(\varepsilon ' = \exp (-\alpha n/2p^2)\). Noting that the rows are probabilistically independent and applying a Chernoff bound, we have for any \(\varepsilon >0\),
\({\mathbb {P}}\left( |I|\ge \left( 1/p+\varepsilon '+\varepsilon \right) n\right) \le \exp \left( -2\varepsilon ^2n\right) .\)
Next, we show that \(M_I\), the submatrix generated by the rows in I, is full rank with high probability. To do this, we consider only the columns in \(J = \{n+1,\dots ,(1+\alpha )n\}\), and show \(M_I^J\) is also full rank with high probability. Observe that \(M_I^J\) is \(|I|\times \alpha n\), but the sum of each row is already conditioned to be 0. So, truncate the final p columns to obtain an \(|I|\times (\alpha n-p)\) matrix which by Fact 2.2 has min-entropy \(\beta = \min (q,1-q)^p\). Therefore by Corollary 2.4, this matrix is full rank with probability at least \(1-\frac{1}{\beta ^2}(1-\beta )^{\alpha n-p-|I|}\).
Conditioning on \(|I|\le (1/p + \varepsilon ' + \varepsilon ) n\), the rows in such a matrix are linearly independent with probability at least \(1 - O(\exp (-C(\alpha -1/p-\varepsilon ' - \varepsilon )n))\). Combining this with the above and taking \(\varepsilon = (\alpha - 1/p - \varepsilon ')/2\), which is positive for sufficiently large n, we conclude that \(M_n\) is full rank with probability at least \(1-O(\exp (-Cn))\).
Note it is crucial that \(\alpha >1/p\) for this result. In the case \(\alpha <1/p\), \(M_I^J\) is not full rank with high probability and the lemma is false. \(\square \)
Lemma 2.6
For any n and p with \(p\ll \sqrt{n}\), we have \(\left| {\mathbb {P}}\left( \sum _{i=1}^nx_i = 0\right) -\frac{1}{p}\right| \le \exp (-n/2p^2)\).
Proof
Note that \(\sum _{i=1}^nx_i = X\cdot \mathbbm {1}\) for \(X = (x_1,\dots ,x_n)\), so the quantity in question is controlled by \(\rho (\mathbbm {1})\) as defined in [4]. Therefore, the non-Laplacian version of Proposition 3.3, which is Corollary 4.6 from that paper, gives the result. \(\square \)
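Lemma 2.6 can be sanity-checked numerically (our own script, not part of the proof): the distribution of \(\sum _i x_i \bmod p\) is computed exactly by convolving one Bernoulli summand at a time.

```python
import math

def sum_mod_p_dist(n, p, q):
    """Exact distribution of (x_1 + ... + x_n) mod p for iid Bernoulli(q)
    summands, computed by convolving one summand at a time."""
    dist = [1.0] + [0.0] * (p - 1)
    for _ in range(n):
        dist = [(1 - q) * dist[r] + q * dist[(r - 1) % p] for r in range(p)]
    return dist

# The bound of Lemma 2.6 holds comfortably in a few sample regimes.
for n, p, q in [(100, 3, 0.3), (200, 5, 0.5), (150, 7, 0.8)]:
    d = sum_mod_p_dist(n, p, q)
    assert abs(d[0] - 1 / p) <= math.exp(-n / (2 * p * p))
```

In practice the true deviation is far smaller than the stated \(\exp (-n/2p^2)\); the lemma only needs this crude rate.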
Proposition 2.7
There exists \(C>0\) such that for any \(r<n,\) the matrix \(M_{(1+\alpha )n-r}\) generated by rows \(X_1,\dots ,X_n,Y_{1},\dots ,Y_{\alpha n-r}\) is full rank with probability \(1-O(\exp (-Cr))\).
Proof
Letting \(J=\{n+1,\dots , n+\alpha n\}\), we see \(M_n^J\) has iid entries and thus by Corollary 2.4 has rank at least \(\alpha n-r'\) with probability at least \(1-\exp (-Cr')\) for any \(r'<\alpha n\). Condition on \(M_n^J\) having rank at least \(\alpha n - r'\), where \(r'\) will be chosen later as a function of r.
We proceed inductively. By Lemma 2.5, \(M_n\) is full rank with probability at least \(1-\exp (-Cn)\). Conditioning on \({{\text {rank}}}(M_{n+i-1}) = n+i-1\) for some \(1\le i\le \alpha n - r\), we will show \({{\text {rank}}}(M_{n+i}) = n+i\) with probability \(1-O(\exp (-Cr))\).
Let \(I = J{\setminus }\{n+i\}\) and let \(P_{I}\) be the projection onto the coordinates in I. As \(Y_i\) vanishes on I, \(Y_i\in W_{n+i-1}\) if and only if it is in the kernel of \(P_I\) restricted to \(W_{n+i-1}\), i.e., \(Y_i\in \ker (P_I|_{W_{n+i-1}})\), where \(W_{n+i-1}\) is the row space of \(M_{n+i-1}\). Since \(M_n^J\) has rank at least \(\alpha n - r'\) by assumption, \(M_n^I\) has rank at least \(\alpha n - r' - 1\). Using our inductive hypothesis \(\dim (W_{n+i-1}) = n+i-1\) and the rank-nullity theorem, we have
Therefore, the codimension of \(P_{[n]}(\ker P_I|_{W_{n+i-1}})\) as a subspace of \({\mathbb {F}}_p^n\) is at least
Because \(Y_i\) has iid entries in the first n indices, \(Y^n_i\) has min-entropy \(\beta = \min (q, 1-q)\). Applying Lemma 2.3,
Summing the loss in the probability that \(\dim (W_{n+i}) = n+i\) at each step,
Taking \(r' = r/2,\) we achieve the desired result where
\(\square \)
3 Rank Evolution
Our goal for this section is to show the rank evolution as the final r rows are exposed is what one would predict from the uniform model.
Let \(W_{(1+\alpha )n-k}\) be the subspace generated by the first \((1+\alpha )n-k\) rows of the matrix \(M_{(1+\alpha )n}\).
Proposition 3.1
(Rank evolution). Let \(\delta >0\), and \(k\le \delta n\). There exists an event \({\mathcal E}_{(1+\alpha )n-k}\) of probability at least \(1 -O(\exp (-Cn))\) such that the following holds.
This proposition is established by showing the normal vectors to the row space of the first \((1+\alpha -\delta )n\) rows are not sparse or structured with high probability, that is, Propositions 3.2 and 3.3, respectively. Along with a fact about subspaces with unstructured normal vectors, Lemma 3.4, we can bound the transition probabilities.
In the following proposition, we show the row space of \(M_{(1+\alpha - \delta )n}\) does not have sparse normal vectors with high probability. Even stronger, we show that their projection to the first n coordinates is not sparse. This is necessary, as the final rows have min-entropy only in the first n coordinates.
Proposition 3.2
(Non-sparseness of normal vectors). Let \(\alpha>\delta >0\) be given. There exist \(\delta '>0\), \(C>0\) such that with probability at least \(1-O(\exp (-Cn))\), for any vector \({\textbf{v}}\in {\mathbb {F}}_p^{(1+\alpha )n}\setminus {\mathbb {F}}_p\cdot \mathbbm {1}\) that is orthogonal to \(X_1,\dots , X_n,Y_1,\dots , Y_{\alpha n-\delta n}\) and any \(a\in {\mathbb {F}}_p\), we have
\(\#\left\{ i\in [n]: ({\textbf{v}}-a\cdot \mathbbm {1})_i\ne 0\right\} \ge \delta ' n.\qquad \mathrm {(2)}\)
Proof
Let W be the rowspace of \(M_{(1+\alpha - \delta )n}\). Because M is a Laplacian, for any \(a\in {\mathbb {F}}_p\), we have that \(a\cdot \mathbbm {1}\in W^\perp \). Thus, \(W^\perp -a\cdot \mathbbm {1} = W^\perp \) which allows us to reduce to the case of \(a=0\).
Let \(J = \{n+1,\dots , (1+\alpha )n\}\) and consider an arbitrary subset \(I\subseteq [(1+\alpha )n]\) with \(J\subseteq I\) and \(|I| = \alpha n+m\). Then, we consider the matrix
where \(A_1,\) \(A_2\) have iid entries and are of dimensions \(n\times \alpha n\) and \((\alpha -\delta )n \times m,\) respectively. Note that \(D_1\), \(D_2\) are not necessarily diagonal, but trimmed versions of the diagonal matrices consisting of the corresponding row sums from \(A_1\), \(A_2\).
We prove \(M_{(1+\alpha - \delta )n}^I\) is full rank with probability \(1-O(\exp (-Cn))\). To do this, we argue that there is a full rank subset of rows consisting primarily of rows among the first n (i.e., rows from \(A_1,\) \(D_1\)).
By Lemma 2.3, \(A_1\) has rank at least \(\alpha n-\varepsilon n\) with probability at least \(1-O(\exp (-Cn))\) for any constant \(\varepsilon >0\). Further trimming the leftmost m columns, consider
By the same argument as Lemma 2.5, this matrix is full rank with probability at least \(1-O(\exp (-Cn))\). The only difference from Lemma 2.5 is that the entries in \(D_2\) are independent of the entries in \(A_1\).
By the union bound, \(M_{(1+\alpha -\delta )n}^{J}\) will have full rank and \(A_1\) will have rank at least \(\alpha n-\varepsilon n\) simultaneously with probability at least \(1-O(\exp (-Cn))\). We then take the \(\alpha n-\varepsilon n\) linearly independent rows from [n] and supplement them with some \(\varepsilon n\) rows from \(\{n+1,\dots ,(1+\alpha -\delta )n\}\) to get a rank \(\alpha n\) minor of \(M_{(1+\alpha - \delta )n}^{J}\). Let \(L\subseteq [(1+\alpha )n]\) denote the indices of the rows comprising the full rank minor, \(M_{L}^{J}\).
Now, we expose the leftmost m columns of \(M_{(1+\alpha - \delta )n}^{I}\) (this is similar to Proposition 2.7). Applying Lemma 2.3 to the subvector of the last \(n-\delta n\) entries of the \(i\)th exposed vector, it is in the column span of \(M_L^J\) and the previously exposed columns with probability at most \((1-\beta )^{(\alpha -\delta -\varepsilon )n-i}\), where \(\beta =\min (q, 1-q)\). We use
Exposing all m columns, we see that
If \(M_{(1+\alpha - \delta )n}^{I}\) is full rank, then any nonzero orthogonal vector to the rows of \(M_{(1+\alpha - \delta )n}\) must have support outside the columns of \(M_{(1+\alpha - \delta )n}^{I}\). When we look at all possible choices of I, if all choices lead to \(M_{(1+\alpha - \delta )n}^{I}\) being full rank, then there are at least \(m+1\) nonzero entries in a normal vector to \(M_{(1+\alpha - \delta )n}\). So, by union bound, we have that
Then, setting parameters, we choose \(m = \delta 'n\) for some \(\delta '=\delta '(q)>0\) so that we have
for some constant \(C > 0\), where the final equality holds as
for \(d=d(\delta '),\) where we can choose \(\delta '\) so that d is arbitrarily close to 1 independent of n. \(\square \)
We introduce the following concentration probability, designed for our purpose:
\(\rho _L^j({\textbf{w}}) = \max _{r\in {\mathbb {F}}_p}\left| {\mathbb {P}}\left( X\cdot {\textbf{w}}= r\right) -\frac{1}{p}\right| ,\)
where \(X = (x_1,\dots ,x_n,0,\dots ,-\sum _ix_i,0,\dots ,0)\), with the sum at index \(n+j\), and the \(x_i\) are iid copies of a random variable which is 1 with probability q, otherwise 0. It is a Laplacian version of \(\rho \) from [4], wherein X has exclusively iid entries. As the rows we add in the final \(\delta n\) steps have only one nonzero entry outside the first n coordinates, our notion of structure must be sensitive to structure specifically in those coordinates.
Conceptually, we think of vectors with small \(\rho ^j_L\) as being unstructured.
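Reading \(\rho _L^j({\textbf{w}})\) as the worst-case deviation of \({\mathbb {P}}(X\cdot {\textbf{w}}=r)\) from 1/p (an interpretive assumption on our part), it can be computed exactly: \(X\cdot {\textbf{w}}=\sum _i x_i(w_i - w_{n+j})\), so only the first n coordinates of \({\textbf{w}}\), shifted by \(w_{n+j}\), matter, and a residue convolution gives the distribution. A sketch (ours):

```python
def rho_L(w_first_n, w_at_nj, p, q):
    """rho_L^j(w), read as max_r |P(X . w = r) - 1/p| (our interpretation),
    where X = (x_1,...,x_n,0,...,-sum_i x_i,0,...,0) has the sum at index
    n+j and the x_i are iid Bernoulli(q).  Since X . w reduces to
    sum_i x_i * (w_i - w_{n+j}) mod p, an exact residue convolution over
    the shifted weights gives the distribution of X . w."""
    dist = [1.0] + [0.0] * (p - 1)
    for wi in w_first_n:
        shift = (wi - w_at_nj) % p
        dist = [(1 - q) * dist[r] + q * dist[(r - shift) % p] for r in range(p)]
    return max(abs(pr - 1 / p) for pr in dist)

# A multiple of the all 1's vector is maximally structured: X . w = 0
# always, so rho_L takes its largest possible value, 1 - 1/p.
```

A vector whose first n coordinates run through many distinct residues instead has exponentially small \(\rho _L^j\), in line with Proposition 3.3.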
The following proposition is the Laplacian version of a structure property from [4], Corollary 4.6 in their paper. The argument is similar to the one in that paper and included here for completeness.
Proposition 3.3
Suppose for all \(a\in {\mathbb {F}}_p\), \({\textbf{w}}-a\cdot \mathbbm {1}\) has at least m non-zero coordinates among the first n coordinates where \(p<\sqrt{m}\), then
\(\rho _L^j({\textbf{w}})\le \exp \left( -m/2p^2\right) \)
for all j.
Proof
Observe that \(X\cdot {\textbf{w}}= X\cdot ({\textbf{w}}-a\cdot \mathbbm {1})\) for any \({\textbf{w}}\) and \(a\in {\mathbb {F}}_p\).
Thus, if \(X = (x_1,\dots ,x_n,0,\dots , -\sum _ix_i,0, \dots ,0)\) where \(-\sum _ix_i\) is at index \(n+j\), it suffices to consider the case \({\textbf{w}}_{n+j} = 0,\) since otherwise we may replace \({\textbf{w}}\) by \({\textbf{w}}-{\textbf{w}}_{n+j}\cdot \mathbbm {1}\).
Note that we view \({\textbf{w}}\) as a member of \({\mathbb {F}}_p^{(1+\alpha )n}\) with the representation
Since \(\frac{t{\textbf{w}}}{p}\in {\mathbb {R}}^{(1+\alpha )n}\) has at least m non-zero coordinates among the first n coordinates for any \(t\in {\mathbb {F}}_p\), restricting to the first n coordinates, we must have that
Let \(e_p(x) = e^{2\pi ix/p}\). Now, we proceed as in [4]. For any \(r\in {\mathbb {F}}_p\), we have
\({\mathbb {P}}\left( X\cdot {\textbf{w}}= r\right) = \frac{1}{p}\sum _{s\in {\mathbb {F}}_p}{\mathbb {E}}\,e_p\left( s(X\cdot {\textbf{w}}-r)\right) .\)
Thus, we have
Now, we observe the fact that
where \(\Vert \cdot \Vert _{{\mathbb {R}}/{\mathbb {Z}}}\) represents the distance to the closest integer. Then, we have that
\(\square \)
The following is a variant of Lemma 7.1 from [4] adapted for the different notion of structure in the Laplacian setting.
Lemma 3.4
Let H be a subspace of \({\mathbb {F}}_p^{(1+\alpha )n}\) of codimension d such that \({\textbf{1}}\in H^\perp ,\) and for any \({\textbf{w}}\in H^\perp \setminus {\mathbb {F}}_p{\textbf{1}}\), we have \(\rho ^j_L({\textbf{w}})\le \delta \). Then, if \(X = (x_1,\dots ,x_n, 0,\dots ,-\sum _ix_i,0,\dots ,0)\) with the sum at index \(n+j\) and the \(x_i\) are iid variables equal to 1 with probability q and 0 otherwise,
Proof
Let \({\textbf{v}}_1,\dots ,{\textbf{v}}_{d-1},{\textbf{1}}\) be a basis of \(H^\perp \). Then we have that for all \(t_1,\dots , t_{d-1}\) not all zero and a,
Using the identity
\(\mathbbm {1}_{\{x = 0\}} = \frac{1}{p}\sum _{t\in {\mathbb {F}}_p}e_p(tx),\)
we see that
Because \(X\cdot {\textbf{1}} = 0\), we have
Observe that
By our assumption on \(\rho _L^j\),
So, we obtain
\(\square \)
Proof of Proposition 3.1
We have seen from Proposition 3.2 that there exists an event \({\mathcal E}\) with \({\mathbb {P}}({\mathcal E})\ge 1-O(\exp (-Cn)),\) on which, for all vectors \({\textbf{v}}\in W_{(1+\alpha )n-k}^\perp \setminus {\mathbb {F}}_p\cdot \mathbbm {1}\) and any \(a\in {\mathbb {F}}_p\),
\(\#\left\{ i\in [n]: ({\textbf{v}}-a\cdot \mathbbm {1})_i\ne 0\right\} \ge \delta 'n.\)
Proposition 3.3 states that \({\textbf{v}},\) which satisfy Eq. 2 for all a, have
\(\rho _L^j({\textbf{v}})\le \exp \left( -\delta 'n/2p^2\right) \)
for all j.
Conditioning on \({\mathcal E}\), and applying Lemma 3.4 to \(W_{(1+\alpha )n-k}\) and \(Y_{(1+\alpha )n-k + 1}\) with \(\delta = \exp (-\delta 'n/2p^2),\) we obtain
Note that \(\frac{p}{p-1}\le 2,\) so the big-O does not depend on p. \(\square \)
4 Proof of Theorem 1.1
Proof
It is known [2] that for a random \((n+1)\times n\) matrix \(M_{n}'\) with iid entries which are 1 with probability q and 0 otherwise, the limiting corank distribution over \({\mathbb {F}}_p\) is given by
\(\lim _{n\rightarrow \infty }{\mathbb {P}}\left( {{\,\textrm{corank}\,}}(M_{n}') = k\right) = p^{-k(k+1)}\frac{\prod _{i=k+2}^{\infty }\left( 1-p^{-i}\right) }{\prod _{i=1}^{k}\left( 1-p^{-i}\right) }.\)
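This law is the u = 1 case of the standard limiting corank distribution for random \(n\times (n+u)\) matrices over \({\mathbb {F}}_p\). As a numerical sanity check (ours, not part of the proof), its probabilities sum to 1:

```python
def corank_limit_prob(k, p, u=1, depth=400):
    """Limiting P(corank = k) for a random n x (n + u) matrix over F_p:
    p^{-k(k+u)} * prod_{i=k+u+1}^inf (1 - p^{-i}) / prod_{i=1}^k (1 - p^{-i}).
    The infinite product is truncated at `depth`."""
    num = 1.0
    for i in range(k + u + 1, depth):
        num *= 1 - p ** -i
    den = 1.0
    for i in range(1, k + 1):
        den *= 1 - p ** -i
    return p ** (-k * (k + u)) * num / den

# k = 0 mass for p = 2, u = 1 is prod_{i >= 2}(1 - 2^{-i}), about 0.578.
total = sum(corank_limit_prob(k, 2) for k in range(30))
```

The mass concentrates heavily on small coranks; for p = 2 the first three terms already account for more than 0.999 of the total.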
Let \(r = \delta n/p^2\) for some constant \(\delta >0\) to be fixed later. Applying Proposition 2.7, we have that the matrix \(M_{(1+\alpha )n-r}\) is full rank with probability at least \(1-O(\exp (-Cr))\). Similarly, \((M_n')_{n+1-r}\) will be full rank with probability \(1-O(\exp (-Cr))\).
With r rows remaining, the ranks of both models evolve similarly, with a (high probability) initial rank of \(n-r+1\) for the \((n+1)\times n\) model and \(n-r\) for the Laplacian model. In the uniform model, if the rank of \((M_n')_\ell \) is k, then the rank increases to \(k+1\) when we add a row with probability \(1-1/p^{n-k}\). In the Laplacian model, if the rank of \(M_\ell \) for \((1+\alpha -\delta )n\le \ell < (1+\alpha )n\) is k, then the rank increases to \(k+1\) with probability \(1-1/p^{n-k-1}+O(\exp (-Cn/p^2))\).
In other words, the two models start in the same place and evolve similarly with high probability. This is illustrated by Fig. 1. Summing over the errors for the change of rank over the addition of the final r rows, we achieve a total error
Now, we take \(0<\delta < C\) small enough so that
for some constant \(C>0\). This yields the final result
\(\square \)
Data Availability Statement
Data sharing is not applicable to this article, as no datasets were generated or analyzed during the current study.
References
[1] E. Friedman and L. C. Washington, On the distribution of divisor class groups of curves over a finite field, in: Théorie des Nombres (1989), pp. 227–239.
[2] J. Fulman and L. Goldstein, Stein's method and the rank distribution of random matrices over finite fields, Ann. Probab. 43(3) (2015), 1274–1314.
[3] S. Koplewitz, Sandpile groups of random bipartite graphs, https://arxiv.org/abs/1705.07519, 2017.
[4] K. Luh, S. Meehan and H. Nguyen, Some new results in random matrices over finite fields, submitted.
[5] K. Maples, Singularity of random matrices over finite fields, https://arxiv.org/abs/1012.2372.
[6] K. Maples, Cokernels of random matrices satisfy the Cohen–Lenstra heuristics, https://arxiv.org/abs/1301.1239.
[7] A. Mészáros, The distribution of sandpile groups of random regular graphs, Trans. Amer. Math. Soc. 373 (2020), 6529–6594.
[8] H. Nguyen and E. Paquette, Surjectivity of near square random matrices, to appear in Combin. Probab. Comput.
[9] H. Nguyen and M. M. Wood, Random integral matrices: universality of surjectivity and the cokernel, submitted.
[10] A. M. Odlyzko, On subspaces spanned by random selections of \(\pm \)1 vectors, J. Combin. Theory Ser. A 47(1) (1988), 124–133.
[11] M. M. Wood, The distribution of sandpile groups of random graphs, J. Amer. Math. Soc. 30 (2017), 915–958.
[12] M. M. Wood, Random integral matrices and the Cohen–Lenstra heuristics, Amer. J. Math. 141 (2019), 383–398.
Acknowledgements
The authors thank Hoi Nguyen for leading them to a summer reading group, which led to this problem. The authors thank Nathan Kaplan for pointing out some important omissions, errors and ambiguities in an earlier draft and, finally, the anonymous reviewer for corrections and insightful comments.
Funding
Open access funding provided by SCELC, Statewide California Electronic Library Consortium
Ethics declarations
Conflict of Interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Communicated by Kolja Knauer.
Bhargava, A., DePascale, J. & Koenig, J. The Rank of the Sandpile Group of Random Directed Bipartite Graphs. Ann. Comb. 27, 979–992 (2023). https://doi.org/10.1007/s00026-023-00637-3