Abstract
We consider three models of sparse random graphs: undirected and directed Erdős–Rényi graphs, and a random bipartite graph with two equal parts. For such graphs, we show that if the edge connectivity probability p satisfies \(np\ge \log n+k(n)\) with \(k(n)\rightarrow \infty \) as \(n\rightarrow \infty \), then the adjacency matrix is invertible with probability approaching one (here n is the number of vertices in the two former cases and the number of vertices in each part in the latter case). For \(np\le \log n-k(n)\) these matrices are invertible with probability approaching zero, as \(n\rightarrow \infty \). In the intermediate region, when \(np=\log n+k(n)\) for a bounded sequence \(k(n)\in \mathbb {R}\), the event \(\Omega _0\) that the adjacency matrix has a zero row or column, as well as its complement, both have non-vanishing probability. For such choices of p our results show that, conditioned on the event \(\Omega _0^c\), the matrices are again invertible with probability tending to one. This shows that the primary reason for the non-invertibility of such matrices is the existence of a zero row or column. We further derive a bound on the (modified) condition number of these matrices on \(\Omega _0^c\), holding with large probability, establishing von Neumann's prediction about the condition number up to a factor of \(n^{o(1)}\).
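As an illustration of the dichotomy described in the abstract, the following minimal Python sketch (our own illustration, not part of the paper; the threshold constants are taken from the abstract) samples directed Erdős–Rényi adjacency matrices with i.i.d. \(\mathrm{Ber}(p)\) entries on both sides of the \(np=\log n\) threshold and compares numerical singularity with the presence of a zero row or column:

```python
import numpy as np

def sample_adjacency(n, p, rng):
    """Adjacency matrix of a directed Erdos-Renyi graph: i.i.d. Ber(p) entries."""
    return (rng.random((n, n)) < p).astype(float)

def has_zero_row_or_col(A):
    """The event Omega_0: some row or column of A is identically zero."""
    return bool((A.sum(axis=0) == 0).any() or (A.sum(axis=1) == 0).any())

def is_singular(A):
    """Numerical singularity check via the (SVD-based) matrix rank."""
    return np.linalg.matrix_rank(A) < A.shape[0]

if __name__ == "__main__":
    n, k, trials = 200, 3.0, 50
    rng = np.random.default_rng(0)
    for label, p in [("np = log n + k", (np.log(n) + k) / n),
                     ("np = log n - k", (np.log(n) - k) / n)]:
        samples = [sample_adjacency(n, p, rng) for _ in range(trials)]
        frac_sing = np.mean([is_singular(A) for A in samples])
        frac_zero = np.mean([has_zero_row_or_col(A) for A in samples])
        print(f"{label}: frac singular = {frac_sing:.2f}, "
              f"frac with zero row/col = {frac_zero:.2f}")
```

For moderate n such as this, one expects most singular samples to also lie in \(\Omega_0\), consistent with the main theorem, though n = 200 is of course far from the asymptotic regime.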
Notes
When \(\ell =1\), by a slight abuse of notation, we take \(\hat{z}_1=x_{[1: \sqrt{np}]}\).
References
Addario-Berry, L., Eslava, L.: Hitting time theorems for random matrices. Comb. Probab. Comput. 23(5), 635–669 (2014)
Bai, Z.D., Silverstein, J.W.: Spectral Analysis of Large Dimensional Random Matrices. Springer Series in Statistics, 2nd edn. Springer, Dordrecht (2010)
Bandeira, A.S., van Handel, R.: Sharp nonasymptotic bounds on the norm of random matrices with independent entries. Ann. Probab. 44(4), 2479–2506 (2016)
Basak, A., Cook, N., Zeitouni, O.: Circular law for the sum of random permutation matrices. Electron. J. Probab. 23, article 33 (2018)
Basak, A., Dembo, A.: Limiting spectral distribution of sum of unitary and orthogonal matrices. Electron. Commun. Probab. 18, article 69 (2013)
Basak, A., Rudelson, M.: Invertibility of sparse non-Hermitian matrices. Adv. Math. 310, 426–483 (2017)
Basak, A., Rudelson, M.: The circular law for sparse non-Hermitian matrices. Ann. Probab. 47(4), 2359–2416 (2019)
Bordenave, C., Caputo, P., Chafaï, D.: Circular law theorem for random Markov matrices. Probab. Theory Relat. Fields 152(3–4), 751–779 (2012)
Bordenave, C., Chafaï, D.: Around the circular law. Probab. Surv. 9, 1–89 (2012)
Boucheron, S., Lugosi, G., Massart, P.: Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, Oxford (2013)
Bourgain, J., Vu, V., Wood, P.M.: On the singularity probability of discrete random matrices. J. Funct. Anal. 258(2), 559–603 (2010)
Cook, N.: On the singularity of adjacency matrices for random regular digraphs. Probab. Theory Relat. Fields 167(1–2), 143–200 (2017)
Cook, N.: The circular law for random regular digraphs. Ann. Inst. H. Poincaré Probab. Stat. 55(4), 2111–2167 (2019)
Costello, K.P.: Bilinear and quadratic variants on the Littlewood–Offord problem. Isr. J. Math. 194(1), 359–394 (2013)
Costello, K.P., Vu, V.H.: The rank of random graphs. Random Struct. Algorithms 33(3), 269–285 (2008)
Costello, K.P., Vu, V.: On the rank of random sparse matrices. Comb. Probab. Comput. 19(3), 321–342 (2010)
Edelman, A.: Eigenvalues and condition numbers of random matrices. SIAM J. Matrix Anal. Appl. 9, 543–560 (1988)
Frieze, A.: Random structures and algorithms. In: Proceedings of the International Congress of Mathematicians—Seoul 2014, vol. 1. Kyung Moon Sa, Seoul, pp. 311–340 (2014)
Götze, F., Tikhomirov, A.: The circular law for random matrices. Ann. Probab. 38(4), 1444–1491 (2010)
Huang, J.: Invertibility of adjacency matrices for random \(d\)-regular directed graphs. ArXiv preprint arXiv:1806.01382v2 (2018)
Huang, J.: Invertibility of adjacency matrices for random \(d\)-regular graphs. ArXiv preprint arXiv:1807.06465v1 (2018)
Huang, H.: Singularity of Bernoulli matrices in the sparse regime \(pn = O(\log n)\). ArXiv preprint arXiv:2009.13726v1 (2020)
Jain, V., Sah, A., Sawhney, M.: Singularity of discrete random matrices I. ArXiv preprint arXiv:2010.06553v1 (2020)
Jain, V., Sah, A., Sawhney, M.: Singularity of discrete random matrices II. ArXiv preprint arXiv:2010.06554v1 (2020)
Knowles, A., Yin, J.: Anisotropic local laws for random matrices. Probab. Theory Relat. Fields 169, 257–352 (2017)
Kahn, J., Komlós, J., Szemerédi, E.: On the probability that a random \(\pm 1\) matrix is singular. J. Am. Math. Soc. 8(1), 223–240 (1995)
Komlós, J.: On the determinant of \((0,1)\) matrices. Stud. Sci. Math. Hungar. 2, 7–22 (1967)
Komlós, J.: On the determinant of random matrices. Stud. Sci. Math. Hungar. 3, 387–399 (1968)
Komlós, J.: Circulated manuscript. Edited version available online at http://sites.math.rutgers.edu/~komlos/01short.pdf (1977)
Landon, B., Sosoe, P., Yau, H.-T.: Fixed energy universality of Dyson Brownian motion. Adv. Math. 346, 1137–1332 (2019)
Latała, R.: Some estimates of norms of random matrices. Proc. Am. Math. Soc. 133(5), 1273–1282 (2004)
Litvak, A.E., Lytova, A., Tikhomirov, K., Tomczak-Jaegermann, N., Youssef, P.: Adjacency matrices of random digraphs: singularity and anti-concentration. J. Math. Anal. Appl. 445(2), 1447–1491 (2017)
Litvak, A.E., Lytova, A., Tikhomirov, K., Tomczak-Jaegermann, N., Youssef, P.: The smallest singular value of a shifted \(d\)-regular random square matrix. Probab. Theory Relat. Fields 173, 1301–1347 (2019)
Litvak, A.E., Lytova, A., Tikhomirov, K., Tomczak-Jaegermann, N., Youssef, P.: The rank of random regular digraphs of constant degree. J. Complex. 48, 103–110 (2018)
Litvak, A.E., Lytova, A., Tikhomirov, K., Tomczak-Jaegermann, N., Youssef, P.: Circular law for sparse random regular digraphs. J. Eur. Math. Soc. to appear (2021)
Litvak, A.E., Pajor, A., Rudelson, M., Tomczak-Jaegermann, N.: Smallest singular value of random matrices and geometry of random polytopes. Adv. Math. 195, 491–523 (2005)
Litvak, A.E., Tikhomirov, K.E.: Singularity of sparse Bernoulli matrices. ArXiv preprint arXiv:2004.03131v1 (2020)
Mészáros, A.: The distribution of sandpile groups of random regular graphs. Trans. Am. Math. Soc. 373, 6529–6594 (2020)
Nguyen, H.H., Wood, M.M.: Cokernels of adjacency matrices of random \(r\)-regular graphs. ArXiv preprint arXiv:1806.10068v2 (2018)
Odlyzko, A.M.: On subspaces spanned by random selections of \(\pm 1\) vectors. J. Comb. Theory Ser. A 47, 124–133 (1988)
Rebrova, E., Tikhomirov, K.: Coverings of random ellipsoids, and invertibility of matrices with i.i.d. heavy-tailed entries. Isr. J. Math. 227(2), 507–544 (2018)
Rudelson, M.: Invertibility of random matrices: norm of the inverse. Ann. Math. 168, 575–600 (2008)
Rudelson, M., Tikhomirov, K.: The sparse circular law under minimal assumptions. Geom. Funct. Anal. 29, 561–637 (2019)
Rudelson, M., Vershynin, R.: The Littlewood–Offord Problem and invertibility of random matrices. Adv. Math. 218(2), 600–633 (2008)
Rudelson, M., Vershynin, R.: Smallest singular value of a random rectangular matrix. Commun. Pure Appl. Math. 62, 1707–1739 (2009)
Rudelson, M., Vershynin, R.: Invertibility of random matrices: unitary and orthogonal perturbations. J. Am. Math. Soc. 27(2), 293–338 (2014)
Sankar, A., Spielman, D.A., Teng, S.-H.: Smoothed analysis of the condition numbers and growth factors of matrices. SIAM J. Matrix Anal. Appl. 28(2), 446–476 (2006)
Smale, S.: On the efficiency of algorithms of analysis. Bull. Am. Math. Soc. (N.S.) 13, 87–121 (1985)
Tao, T.: Least singular value, circular law, and Lindeberg exchange. Preprint http://helper.ipam.ucla.edu/publications/qlatut/qlatut_15156.pdf (2017)
Tao, T., Vu, V.: On random \(\pm 1\) matrices: singularity and determinant. Random Struct. Algorithms 28, 1–23 (2006)
Tao, T., Vu, V.: On the singularity probability of random Bernoulli matrices. J. Am. Math. Soc. 20, 603–628 (2007)
Tao, T., Vu, V.: Random matrices: the circular law. Commun. Contemp. Math. 10(2), 261–307 (2008)
Tao, T., Vu, V.: Random matrices: universality of the ESDs and the circular law. Ann. Probab. 38, 2023–2065 (2010) (with an appendix by M. Krishnapur)
Tikhomirov, K.: Singularity of random Bernoulli matrices. Ann. Math. 191, 593–634 (2020)
Vershynin, R.: Introduction to the Non-asymptotic Analysis of Random Matrices. Compressed Sensing, pp. 210–268. Cambridge University Press, Cambridge (2012)
Vershynin, R.: Invertibility of symmetric random matrices. Random Struct. Algorithms 44(2), 135–182 (2014)
von Neumann, J.: Collected Works. Vol. V: Design of Computers, Theory of Automata and Numerical Analysis. General editor: A. H. Taub. A Pergamon Press Book. The Macmillan Co., New York (1963)
von Neumann, J., Goldstine, H.H.: Numerical inverting of matrices of high order. Bull. Am. Math. Soc. 53, 1021–1099 (1947)
Vu, V.: Random discrete matrices. In: Horizons of Combinatorics, Vol. 17 of Bolyai Soc. Math. Stud.. Springer, Berlin, pp. 257–280 (2008)
Vu, V.: Combinatorial problems in random matrix theory. In: Proceedings of the International Congress of Mathematicians—Seoul 2014, vol. 4. Kyung Moon Sa, Seoul, pp. 489–508 (2014)
Wei, F.: Investigate invertibility of sparse symmetric matrices. ArXiv preprint arXiv:1712.04341v2 (2017)
Wood, P.M.: Universality and the circular law for sparse random matrices. Ann. Appl. Probab. 22(3), 1266–1300 (2012)
Acknowledgements
We thank the anonymous referees for their suggestions that led to an improvement of the presentation of this paper. AB acknowledges support of the Department of Atomic Energy, Government of India (GOI), under project no. RTI4001. Research of AB was partially supported by Grant 147/15 from the Israel Science Foundation, funding from the European Research Council under the European Union's Horizon 2020 research and innovation program (Grant Agreement Number 692452), an Infosys–ICTS Excellence Grant, and a Start-up Research Grant (SRG/2019/001376) and a MATRICS Grant (MTR/2019/001105) from the Science and Engineering Research Board of GOI. Research of AB was carried out in part as a member of the Infosys–Chandrasekharan virtual center for Random Geometry, supported by a grant from the Infosys Foundation. Part of this research was performed while MR visited the Weizmann Institute of Science in Rehovot, Israel, where he held the Rosy and Max Varon Professorship. He is grateful to the Weizmann Institute for its hospitality and for creating an excellent work environment. The research of MR was supported in part by the NSF Grant DMS 1464514 and by a fellowship from the Simons Foundation.
Appendices
Appendix A: Structural properties of the adjacency matrices of sparse graphs
In this section we prove that certain structural properties of \(A_n\), as listed in Lemma 3.7, hold with high probability when \(A_n\) satisfies Assumption 3.1 with p such that \(np \ge \log (1/\bar{C} p)\), for some \(\bar{C} \ge 1\). We also show that under the same assumption we have bounds on the number of light columns of \(A_n\), namely we prove Lemma 3.15.
First let us provide the proof of Lemma 3.15.
Proof of Lemma 3.15
The proof is a simple application of the Chernoff bound and Markov's inequality.
Since the entries of \(A_n\) satisfy Assumption 3.1, using Stirling's approximation we note that
where in the second inequality we have used the fact that \(p \le 1/4\). Therefore, for \(np \ge C \log n\), with C large, using the union bound we find \(\mathbb {E}[ |\mathcal {L}(A_n)| ]<1/n\). Hence by Markov’s inequality we deduce that
To prove the upper bound on the cardinality of \(\mathcal {L}(A_n)\) we note that the assumption \(np \ge \log (1/\bar{C} p)\) implies that \(np \ge (1-\delta ) \log n\), for any \(\delta >0\), for all large n. Therefore, using (A.1) and Markov’s inequality, setting \(\delta =\frac{1}{9}\), we find that for \(np \le 2 \log n\),
for all large n, whenever \(\delta _0\) is chosen sufficiently small. For p such that \(2 \log n \le np \le C_{3.15} \log n\) we note from (A.1) that
Therefore, a union bound followed by Markov's inequality yields the desired result. \(\square \)
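The first-moment computation behind Lemma 3.15 can be summarized as follows; this is a hedged sketch using the standard multiplicative Chernoff lower-tail bound, under the convention that a column is light when its support has size at most \(\delta_0\, np\) (the constant \(C\) below is illustrative):

```latex
% Standard Chernoff lower tail for a single column:
\mathbb{P}\bigl(\mathrm{Bin}(n,p)\le \delta_0\, np\bigr)
  \le \exp\bigl(-c_{\delta_0}\, np\bigr),
\qquad c_{\delta_0} := 1-\delta_0+\delta_0\log\delta_0 \in (0,1).
% Summing over the n columns and using np \ge C\log n with C > 2/c_{\delta_0}:
\mathbb{E}\bigl[|\mathcal{L}(A_n)|\bigr]
  \le n\, e^{-c_{\delta_0}\, np}
  \le n^{\,1-c_{\delta_0} C} < \frac1n .
% Markov's inequality then gives
\mathbb{P}\bigl(\mathcal{L}(A_n)\neq\varnothing\bigr)
  \le \mathbb{E}\bigl[|\mathcal{L}(A_n)|\bigr] < \frac1n .
```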
Proof of Lemma 3.7
We will show that each of the six properties of the event \(\Omega _{3.7}\) holds with probability at least \(1 - Cn^{-2\bar{c}_{3.7}}\), for some constant \(C >0\). The desired conclusion then follows from a union bound.
First let us start with the proof of (3.7). Since the inequality \(np \ge \log (1/\bar{C} p)\) implies that \(np \ge \log n/2\) for all large n, it follows from the Chernoff bound that property (3.7) of the event \(\Omega _{3.7}\) holds with probability at least \(1-1/n\), for all large n. We omit the details.
Next let us prove that property (3.7) of \(\Omega _{3.7}\) holds with high probability. For \((i,j) \in \left( {\begin{array}{c}[n]\\ 2\end{array}}\right) \) and \(k \in [n]\) denote by \(\Omega _{(i,j),k}\) the event that the columns \({{\,\mathrm{col}\,}}_i(A_n), {{\,\mathrm{col}\,}}_j(A_n)\) are light and \(a_{k,i}, a_{k,j} \ne 0\). Note that the event that two light columns intersect is contained in the event \(\cup _{i,j,k} \Omega _{(i,j),k}\). Therefore, we need to find bounds on \(\mathbb {P}(\Omega _{(i,j),k})\). Since the entry \(a_{i,j}\) may depend on \(a_{j,i}\), we need to consider the cases \(k \in [n]\backslash \{i,j\}\) and \(k \in \{i,j\}\) separately.
First let us fix \(k \in [n]\backslash \{i,j\}\). We note that
Therefore, recalling that under Assumption 3.1 the entries of the sub-matrix of \(A_n\) indexed by \(([n]\backslash \{i,j\}) \times \{i,j\}\) are i.i.d. \({{\,\mathrm{Ber}\,}}(p)\) we obtain that
for all large n, where we have proceeded similarly as in (A.1) to bound the probability of the event
Since \(np \ge \log (1/\bar{C} p)\) an application of the union bound shows that
for some absolute constant c and all large n, where we use that \(np \ge \log n/2\), which, as already seen, is a consequence of the assumption \(np \ge \log (1/\bar{C} p)\).
Next let us consider the case \(k \in \{i,j\}\). Without loss of generality, let us assume that \(k=i\). We see that
Hence, proceeding as above, we deduce
So combining the bounds of (A.2) and (A.3) we conclude that property (3.7) of \(\Omega _{3.7}\) holds with probability at least \(1- n^{-2 \bar{c}_{3.7}}\).
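Schematically, the union-bound arithmetic behind this step is as follows; this is a heuristic sketch with the constants suppressed (\(c_{\delta_0}:=1-\delta_0+\delta_0\log\delta_0\) is the Chernoff lower-tail rate of the lightness event, we use \(np\ge(1-\delta)\log n\), and by Lemma 3.15 one may assume \(p\le C_{3.15}\log n/n\)):

```latex
\mathbb{P}\Bigl(\bigcup_{\{i,j\},\,k}\Omega_{(i,j),k}\Bigr)
  \le \sum_{\{i,j\}}\sum_{k}\mathbb{P}\bigl(\Omega_{(i,j),k}\bigr)
  \lesssim n^{3}\,p^{2}\,e^{-2c_{\delta_0}\,np}
  \le n\,(C_{3.15}\log n)^{2}\,n^{-2c_{\delta_0}(1-\delta)}
  = n^{-2c_{\delta_0}(1-\delta)+1+o(1)} .
```

For \(\delta_0,\delta\) small the exponent is close to \(-1\), consistent with a bound of order \(n^{-2\bar c_{3.7}}\) for \(\bar c_{3.7}\) small.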
Now let us prove that (3.7) holds with high probability. We let \(j \in [n]\), \(I =(i_1 ,\ldots ,i_{r_0}) \in \left( {\begin{array}{c}[n]\backslash \{j \}\\ r_0\end{array}}\right) \), and \(k_1 ,\ldots ,k_{r_0} \in [n]\), for some absolute constant \(r_0\) to be determined during the course of the proof. Denote by \(\Omega _{j,I,(k_1 ,\ldots ,k_{r_0})}\) the event that all the columns indexed by I are light, and for any \(i_\ell \in I\), \(k_\ell \in {{\,\mathrm{supp}\,}}({{\,\mathrm{col}\,}}_{i_\ell }(A_n)) \cap {{\,\mathrm{supp}\,}}({{\,\mathrm{col}\,}}_j(A_n))\). Equipped with this notation we see that the event that there exists a column such that its support intersects with the supports of at least \(r_0\) light columns is contained in the event \(\cup _{j; I; k_\ell , \ell \in [r_0]} \Omega _{j,I, (k_1,k_2,\ldots ,k_{r_0})}\).
Since all the columns indexed by I are light, applying property (3.7) it follows that \(\{k_\ell \}_{\ell =1}^{r_0}\) are distinct. Therefore, for matrices with independent entries (3.7) follows upon bounding the probability of the events
and
followed by a union bound. Recall that under Assumption 3.1 the entry \(a_{i,j}\) may only depend on \(a_{j,i}\) for \(i,j \in [n]\). Therefore, to carry out this scheme for matrices satisfying Assumption 3.1 we additionally need to show that the support of \({{\,\mathrm{col}\,}}_j(A_n)\) is almost disjoint from the set of light columns with high probability, so that we can omit the relevant diagonal block to extract a sub-matrix with jointly independent entries.
To this end, we claim that
for some \(c' >0\). To establish (A.4) we fix \(j \in [n]\) and note that
For ease of writing, let us denote
By Assumption 3.1 the entries \(\{a_{i',j'}\}\) for \((i',j') \in \{i_\ell \}_{\ell =1}^k \times ([n]\backslash (\{i_\ell \}_{\ell =1}^k \cup \{j\}))\) are jointly independent \({{\,\mathrm{Ber}\,}}(p)\) random variables. Therefore applying Stirling's approximation once more, and proceeding similarly as in (A.1), we find that
By Lemma 3.15, \(\mathcal {L}(A_n) =\varnothing \) with high probability when \(p \ge \frac{C_{3.15}\log n}{n}\). Without loss of generality, we may therefore assume that \(p \le \frac{C_{3.15}\log n}{n}\). So, by a union bound over j, using the fact that \(np \ge \log (1/\bar{C} p)\) and property (3.7) of the event \(\Omega _{3.7}\), we have that, for all large n,
for some \(\delta >0\). This establishes the claim (A.4).
Equipped with (A.4) we turn to proving (3.7). Using (A.4) we see that excluding a set of probability at most \(n^{-c'}\), for any \(j,I,(k_1 ,\ldots ,k_{r_0})\) such that \(\Omega _{j,I,(k_1 ,\ldots ,k_{r_0})}\) occurs, we can find \(\ell _{1},\ldots , \ell _{{r_0-3}}\) with \(k_{\ell _s} \in [n]\backslash (\mathcal {L}(A_n) \cup \{j\}) \subset [n] \backslash (I \cup \{j\})\) for all \(s=1,2,\ldots ,r_0-3\). For such \(k_{\ell _s}\), all events \(|{{\,\mathrm{supp}\,}}({{\,\mathrm{col}\,}}_{i_{\ell _s}}(A_n))\backslash (I \cup \{j\})| \le \delta _0 np\) and \(a_{k_{\ell _s}, j} = a_{k_{\ell _s}, i_{\ell _s}}=1\) with \(s=1,2,\ldots ,r_0-3\) are independent. Denote for brevity
Note that under the assumption \(np \ge \log (1/\bar{C} p)\) we have \(\bar{q} \le \exp ( - \log n/2)\) for all large n. Hence, recalling Assumption 3.1, using (A.4) and property (3.7) of \(\Omega _{3.7}\), and proceeding similarly as in (A.1) once again we see that
for some \(\bar{c}, c_0 >0\), where the last step follows upon choosing \(r_0\) such that \(r_0 - 3 > 15\). This completes the proof of property (3.7).
Next let us show that (3.7) holds with high probability. First we will prove that for any \(j \in [n]\) such that \({{\,\mathrm{col}\,}}_j(A_n)\) is normal we have
with high probability. Note that the difference between (A.6) and property (3.7) of \(\Omega _{3.7}\) is that (A.6) claims that, for any \(j \in [n]\) such that \({{\,\mathrm{col}\,}}_j(A_n)\) is normal, its support does not have a large intersection with those of the light columns. To establish property (3.7) we need to strengthen the above and deduce that one can replace the matrix \(A_n\) by its folded version on the lhs of (A.6) with the loss of a factor of four on its rhs.
Turning to the proof of (A.6), we see that if (3.7) holds then for any \(j \in [n]\) there exist at most \(r_0\) light columns \({{\,\mathrm{col}\,}}_{i_1}(A_n),\ldots ,{{\,\mathrm{col}\,}}_{i_{r_0}}(A_n)\) such that their supports intersect that of \({{\,\mathrm{col}\,}}_j(A_n)\). Hence,

Since by (3.7) we have that \(|{{\,\mathrm{supp}\,}}({{\,\mathrm{col}\,}}_j(A_n))| \le C_{3.7} np\), using Stirling’s approximation and a union bound we show that the event on the rhs of (A.7) holds with small probability.
Indeed, for \(i \ne j \in [n]\), denoting
and using the fact that property (3.7) holds with high probability we deduce that
for all large n. Thus combining (A.7) and (A.8) and applying property (3.7) of the event \(\Omega _{3.7}\) we establish that (A.6) holds with probability at least \(1- n^{-\widetilde{c}}\) for some \(\widetilde{c} >0\).
As mentioned above, to show that property (3.7) holds with high probability we need to strengthen (A.6). To this end, recalling the definition of the folded matrix (see Definition 3.5), we note that \(k \in {{\,\mathrm{supp}\,}}({{\,\mathrm{col}\,}}_i({{\,\mathrm{fold}\,}}(A_n))) \cap {{\,\mathrm{supp}\,}}({{\,\mathrm{col}\,}}_j({{\,\mathrm{fold}\,}}(A_n)))\) implies that
for some \(\mathfrak {u}, \mathfrak {v}\in \{1,2\}\), where, for any \(\ell \in [n]\),
and for any set \(S \subset [n]\) and \(k \in \mathbb {Z}\) we denote \(S+k :=\{x+k: x \in S\}\). Using this observation, we see that it suffices to show that
with high probability, for all \(\mathfrak {u}, \mathfrak {v}\in \{1,2\}\). If \(\mathfrak {u}=\mathfrak {v}\) then (A.9) is an immediate consequence of (A.6). It remains to prove (A.9) for \(\mathfrak {u}\ne \mathfrak {v}\). Let us consider the case \(\mathfrak {u}=1\) and \(\mathfrak {v}=2\). From (A.4) we have
Therefore, proceeding similarly as in the steps leading to (A.5) we deduce that, with the desired high probability, for any \(j \in [n]\), such that \({{\,\mathrm{col}\,}}_j(A_n)\) is a normal column, there are at most \(r_0\) light columns \(\{{{\,\mathrm{col}\,}}_{i_\ell }(A_n)\}_{\ell =1}^{r_0}\) so that \({{\,\mathrm{supp}\,}}_1({{\,\mathrm{col}\,}}_j(A_n)) \cap {{\,\mathrm{supp}\,}}_2({{\,\mathrm{col}\,}}_{i_\ell }(A_n)) \ne \varnothing \). Now arguing similarly as in the proof of (A.6) we derive (A.9) for \(\mathfrak {u}=1\) and \(\mathfrak {v}=2\). The proof of the other case is similar and hence is omitted.
Next we show that (3.7) holds with high probability. We first fix an \(I \subset [n]\) with \(2 \le |I| \le c_{3.7} p^{-1}\), derive that (3.7) holds with the required probability for each such choice of I, and then take a union bound over I.
Since the entry \(a_{i,j}\) may depend on \(a_{j,i}\), for \(i \ne j\), to derive that (3.7) holds with the desired probability we need to split it into two cases. Namely, the off-diagonal and the diagonal blocks require separate arguments. First we consider the off-diagonal block.
To this end, define the random variables
where we recall \(\bar{I}:=\bar{I}(I):=\{j \in [\mathfrak {n}]: j \in I \text { or } j +\mathfrak {n}\in I\} \subset [\mathfrak {n}]\), \(\mathfrak {n}:=\lfloor n/2 \rfloor \), and \(\mathfrak {a}_{i,j}\) denotes the (i, j)th entry of \({{\,\mathrm{fold}\,}}(A_n)\). Observe that
To prove (3.7) we need to show that, with large probability, \(\sum \eta _i\) is not too large. To show the latter we use the standard Laplace transform method.
Note that
where \(\{\xi _{i,j}\}\) are i.i.d. Rademacher random variables, \(\delta _{i,j}\) are i.i.d. \({{\,\mathrm{Ber}\,}}(\mathfrak {p})\) random variables, and \(\mathfrak {p}:=2p(1-p)\). Therefore,
Thus, for any \(\lambda >0\) such that \(e^\lambda \mathfrak {p}|I|\le 1\), we have
and hence
In particular, taking \(t:=\frac{\delta _0}{32} n p|I|\) and \(\lambda :=\log \frac{1}{\mathfrak {p}|I|}\), we get
where the second and third inequalities follow from the fact that \( p |I| \le c_{3.7}\) for some sufficiently small constant \(c_{3.7}\), depending only on \(\delta _0\), and the last inequality follows from our assumption that \(np \ge \log n/2\), after shrinking \(c_{3.7}\) even further, if necessary.
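In schematic form, the Laplace transform method used above consists of the Markov/exponential-moment step below; this is only the generic skeleton, assuming (as in the off-diagonal block treated here) that the relevant \(\eta_i\) are jointly independent, with the model-specific moment bounds left abstract:

```latex
% Markov's inequality applied to e^{\lambda\sum_i \eta_i}, for any \lambda, t>0,
% together with independence of the \eta_i:
\mathbb{P}\Bigl(\sum_{i}\eta_i \ge t\Bigr)
  \le e^{-\lambda t}\,\mathbb{E}\Bigl[e^{\lambda\sum_i\eta_i}\Bigr]
  = e^{-\lambda t}\prod_{i}\mathbb{E}\bigl[e^{\lambda\eta_i}\bigr].
% The choice \lambda=\log\frac{1}{\mathfrak{p}|I|} is the largest for which
% e^{\lambda}\mathfrak{p}|I|\le 1, so each factor stays bounded while the
% prefactor e^{-\lambda t}=(\mathfrak{p}|I|)^{t} drives the bound.
```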
To complete the proof of the fact that (3.7) holds with high probability, we show that
The proof now finishes by combining (A.10) and (A.11), first taking a union bound over \(I \in \left( {\begin{array}{c}[n]\\ k\end{array}}\right) \) and then over \(k=2,3,\ldots , c_{3.7} p^{-1}\). We omit the details.
Turning to prove (A.11), we denote \( \hat{I}(I):=\hat{I}:= \cup _{i \in \bar{I}} \{ i , \mathfrak {n}+i\} \). As the entries of \(A_n\) are \(\{0,1\}\)-valued, we see that
Moreover, \(I \subset \hat{I}\). Therefore, it is enough to show that
Since \(A_n\) satisfies Assumption 3.1 we have that the upper triangular part of the sub-matrix of \(A_n\) induced by the rows and columns indexed by \(\hat{I}\) consists of independent \(\{0,1\}\)-valued random variables stochastically dominated by i.i.d. \({{\,\mathrm{Ber}\,}}(p)\) variables. So does the lower triangular part of that sub-matrix.
For ease of writing let us write
and note that \(\mathscr {X}_U\) and \(\mathscr {X}_L\) have the same law. Thus, to establish (A.12) it suffices to show that
The above is obtained by using the Laplace transform method as above. Indeed, we note that
and therefore
where \(\lambda = \log \frac{1}{p |I|}\) and we have used the fact that \(|\hat{I}| \le 2 |I|\). Hence, upon using Markov's inequality and proceeding similarly as in (A.10), we deduce (A.13). This completes the proof of (A.12).
Now it remains to prove that property (3.7) holds with high probability. Recalling the definition of the folded matrix again, we note that \(|{{\,\mathrm{supp}\,}}({{\,\mathrm{col}\,}}_j({{\,\mathrm{fold}\,}}(A_n)))| \le |{{\,\mathrm{supp}\,}}({{\,\mathrm{col}\,}}_j(A_n))|\). To show that the cardinality of the support of \({{\,\mathrm{col}\,}}_j({{\,\mathrm{fold}\,}}(A_n))\) is not too small compared to that of its unfolded version, we observe that if \(k \in {{\,\mathrm{supp}\,}}({{\,\mathrm{col}\,}}_j(A_n))\) but \(k \notin {{\,\mathrm{supp}\,}}({{\,\mathrm{col}\,}}_j({{\,\mathrm{fold}\,}}(A_n)))\) then we must have \(a_{k,j}=a_{k,\mathfrak {n}+j}=1\). Using estimates on binomial probabilities and the Chernoff bound we show that the number of such k is small.
To carry out the above heuristic, we fix \(j \in [n]\) and, since the entries of \(A_n\) are \(\{0,1\}\)-valued, note that
Further observe that
and
Therefore,
Denoting
we see that \(\Delta _j\) is stochastically dominated by \({{\,\mathrm{Bin}\,}}(\mathfrak {n}, p^2)\). To finish the proof we need to find bounds on \(\Delta _j\).
First let us consider the case \(p \le n^{-5/12}\). For any sufficiently large \(k_0 \in \mathbb {N}\), we see that
For \( n^{-5/12} \le p \le c\), for some small \(c >0\) depending on \(\delta _0\), we use the Chernoff bound to deduce that
Combining (A.14) and (A.15) and taking a union bound over \(j \in [n]\), we show that property (3.7) holds with high probability. This completes the proof of the lemma. \(\square \)
Appendix B: Proof of invertibility over sparse vectors with a large spread component
In this section we prove Proposition 3.21. As already mentioned in Sect. 3.3, the proof is similar to that of Proposition 3.18. There are two key differences. Since our goal is to find a uniform bound on \(\Vert A_n x \Vert _2\) for x's with a large spread component, unlike in the proof of Proposition 3.18, we use Lemma 3.22 to estimate the small ball probability. Moreover, as noted earlier, Assumption 3.1 allows some dependencies among the entries of \(A_n\). Therefore, to tensorize the small ball probability we need to extract a sub-matrix of \(A_n\) with jointly independent entries such that the coordinates of x corresponding to the columns of this chosen sub-matrix form a vector with a large spread component and a sufficiently large norm. Below we make this idea precise.
Proof of Proposition 3.21
First, let us show that (3.41) implies (3.43). To this end, we begin by noting that if \(c_{3.21} < \frac{1}{2}\) then for any \(x \in \mathrm{Dom}(c_0^*n, c_{3.21}K^{-1})\) we have that \(\Vert x_{[M_0+1: c_0^*n]}\Vert _2 \ge \Vert x_{[c_0^*n+1:n]}\Vert _2\) (see also (3.34)). Hence, for \(x \notin V_{M_0}\) we obtain that \(\Vert x_{[M_0+1: c_0^*n]}\Vert _2 \ge \rho /\sqrt{2}\). Therefore, (3.41) implies that
where we recall the definition of \(\Omega _K^0\) from (3.29). Hence, proceeding as in the steps leading to (3.31) we deduce (3.43) upon assuming (3.41).
So, to complete the proof of the proposition it remains to establish (3.41). To prove it, we fix \(x \notin V_{M_0}\). Then
Fixing \(\bar{c}_0 \in (c_0^*,1)\), as \(M_0 \le \frac{1-\bar{c}_0}{2}n\) for all large n, and recalling that the non-zero entries of \(x_{[m_1: m_2]}\), for \(m_1 <m_2\), are the coordinates of x that occupy positions \(m_1\) through \(m_2\) in the non-increasing rearrangement according to their absolute values, we note that
Therefore
Note that this shows \(x_{[M_0+1:(1-\bar{c}_0)n]}\) has a large spread part and a large norm. Denoting \(\mathcal {I}:=\mathcal {I}(x):= {{\,\mathrm{supp}\,}}(x_{[M_0+1:(1-\bar{c}_0)n]})\) we note that Assumption 3.1 implies that the entries \(\{a_{i,j}\}_{j \in \mathcal {I}, i \notin \mathcal {I}}\) are i.i.d. \({{\,\mathrm{Ber}\,}}(p)\). So, now we can carry out the scheme that was outlined above by using the joint independence of \(\{a_{i,j}\}_{j \in \mathcal {I}, i \notin \mathcal {I}}\).
Indeed, using Lemma 3.22 we find that for any \(i \notin \mathcal {I}\), \(y \in \mathbb {R}^n\), and \(\varepsilon _0>0\) we have
for all sufficiently large n (depending only on \(\varepsilon _0\)), where \({\varvec{a}}_i\) is the ith row of \(A_n\) and we have used the fact that \(np \ge c_1 \log n\) for some \(c_1 >0\). We will choose \(\varepsilon _0\) as a small constant during the course of the proof.
Since the entries \(\{a_{i,j}\}_{j \in \mathcal {I}, i \notin \mathcal {I}}\) are i.i.d. \({{\,\mathrm{Ber}\,}}(p)\), we apply a standard tensorization argument, for example [42, Lemma 5.4], to deduce from (B.2) that for any \(x \notin V_{M_0}\)
for some constant \(C_0\), depending only on \(\bar{c}_0\), where the last two steps follow from the fact that \(|\mathcal {I}^c| \ge \bar{c}_0 n\) and upon choosing \(\varepsilon _0\) such that \( C_0 \cdot \varepsilon _0 \le \frac{1}{2}\).
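The tensorization lemma invoked here has, in one standard formulation (a hedged paraphrase of Rudelson–Vershynin, Adv. Math. 218 (2008), Lemma 2.2, not necessarily the exact statement of [42, Lemma 5.4]), the following shape:

```latex
% Let \zeta_1,\dots,\zeta_m be independent nonnegative random variables such
% that, for some K,\varepsilon_0>0,
\mathbb{P}(\zeta_k<\varepsilon)\le K\varepsilon
  \qquad\text{for all }\varepsilon\ge\varepsilon_0\text{ and }k\in[m].
% Then there is an absolute constant C>0 such that
\mathbb{P}\Bigl(\sum_{k=1}^{m}\zeta_k^2<\varepsilon^2 m\Bigr)
  \le (CK\varepsilon)^m
  \qquad\text{for all }\varepsilon\ge\varepsilon_0 .
```

Applied to the coordinates indexed by \(\mathcal I^c\), with \(m = |\mathcal I^c| \ge \bar c_0 n\), this converts the single-row small ball estimate (B.2) into the product bound displayed above.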
To complete the proof we use an \(\varepsilon \)-net similar to the proof of Proposition 3.18. First, setting
and using Fact 3.20 we obtain a net \(\widetilde{\mathcal {M}}\) in \(V_{c_0^*,{c}_{3.21}} \backslash V_{M_0}\) with
for some \(\bar{C}\), depending only on \(\bar{c}_0\) and \(c_0^*\). Recalling that \(M_0 = \frac{n \sqrt{\log \log n}}{\log n}\) and the definition of \(\rho \) we observe that
for \(p \in (0,1/2]\) satisfying \(np \ge c_1 \log n\). Therefore, we further have that
for some other constant \(C_\star \), depending only on \(c_0^*\) and \(\bar{c}_0\). Next proceeding as in the steps leading to (3.37) we obtain that for any \(x \in V_{c_0^*,{c}_{3.21}} \backslash V_{M_0}\) there exists \(\bar{x} \in \widetilde{\mathcal {M}}\) such that for any \(y \in \mathbb {R}^n\)
Since \(\Vert \bar{x}_{[M_0+1:c_0^*n]}\Vert _2 = \Vert v_{\bar{x}}\Vert _2 \ge \rho /\sqrt{2}\), using (B.4) and setting
we deduce from the above that for any \(x \in V_{c_0^*,{c}_{3.21}} \backslash V_{M_0}\) there exists \(\bar{x} \in \widetilde{\mathcal {M}}\) such that for any \(y \in \mathbb {R}^n\)
Furthermore, by our construction of the net \(\widetilde{\mathcal {M}}\),
Therefore, upon assuming \(p \le \frac{1}{4}\) and recalling (B.1), this further yields that
where the penultimate step follows from (B.5) and the last step follows upon using the fact that \(\bar{c}_0 > c_0^*\) and choosing \(\varepsilon _0\) sufficiently small. This yields (3.41), and hence the proof of the proposition is complete. \(\square \)
Basak, A., Rudelson, M. Sharp transition of the invertibility of the adjacency matrices of sparse random graphs. Probab. Theory Relat. Fields 180, 233–308 (2021). https://doi.org/10.1007/s00440-021-01038-4
Keywords
- Random matrices
- Sparse matrices
- Erdős–Rényi graph
- Invertibility
- Smallest singular value
- Condition number
Mathematics Subject Classification
- 46B09
- 60B20