
Tracy-Widom Distributions for the Gaussian Orthogonal and Symplectic Ensembles Revisited: A Skew-Orthogonal Polynomials Approach

Published in Journal of Statistical Physics.

Abstract

We study the distribution of the largest eigenvalue in the “Pfaffian” classical ensembles of random matrix theory, namely in the Gaussian orthogonal (GOE) and Gaussian symplectic (GSE) ensembles, using semi-classical skew-orthogonal polynomials, in analogy with the approach of Nadal and Majumdar (NM) for the Gaussian unitary ensemble (GUE). Generalizing the techniques of Adler, Forrester, Nagao and van Moerbeke, and using “overlapping Pfaffian” or “compound Pfaffian” identities, we explicitly construct these semi-classical skew-orthogonal polynomials in terms of the semi-classical orthogonal polynomials studied by NM in the case of the GUE. With these polynomials we obtain expressions for the cumulative distribution functions of the largest eigenvalue in the GOE and the GSE. Further, by performing asymptotic analysis of these skew-orthogonal polynomials in the limit of large matrix size, we obtain an alternative derivation of the Tracy-Widom distributions for GOE and GSE. This asymptotic analysis relies on a certain Pfaffian identity, the proof of which employs the characterization of Pfaffians in terms of perfect matchings and link diagrams.


Notes

  1. Note that, for \(\beta = 4\), the \(\mathrm {Tr}\,\) function needs to be interpreted as a quaternion trace, which can be understood as one half of the regular trace (for more information on the quaternionic definitions in RMT see, for example, [20, 29, 30, 53, 55]).

  2. We will provide some details of the N odd case in Sect. 3.4; however, we do not have an expression directly analogous to (17), in which the \(\beta =1\) CDF would be specified without any reference to the \(\beta =1\) skew-orthogonal polynomials \(R_j\). Our expression for \(F_{1,N_{\mathrm {odd}}}\) in (51) still contains explicit reference to the highest degree \(\beta =1\) polynomial.

  3. Note that the unusual indexing on the \(X_{j,k}\) and \(\varPhi _{j,k}\) entries of \({\mathbf {V}}_m\) follows from the fact that we used the orthogonality of the polynomials \(P_k\) to fill the lower triangle and the first upper diagonal of the matrix, and then we used the anti-symmetry of \({\mathbf {V}}_m\) to fill the rest of the upper triangle. See Appendix F.1 for the details.

  4. The error function and complementary error function are defined by

    $$\begin{aligned} \mathrm {erf}(x)&:= \frac{2}{\sqrt{\pi }} \int _0^x e^{-t^2} dt&\mathrm {erfc}(x)&:= 1- \mathrm {erf}(x)= \frac{2}{\sqrt{\pi }} \int _x^{\infty } e^{-t^2} dt, \end{aligned}$$

    and they are related by the expressions \(\mathrm {erfc}(-x) = 2- \mathrm {erfc}(x) = 1+ \mathrm {erf}(x)\).

  5. This modification of the skew-inner product allows us to define a single operator A which can be used to construct both the \(\beta =1\) and \(\beta =4\) cases [2]. See Sect. 4.1 for the details.

  6. Note that this invariance follows from the anti-symmetry of skew-inner products. Note also that the invariance does not hold for the even degree polynomials; that is, the skew-inner product is not invariant under the mapping \(\eta _{2j} \mapsto \eta _{2j} + d \; \eta _{2j-1}\). We leave the confirmation of this as a short exercise for the reader.

  7. Such relations on compound Pfaffians have been obtained in the physics literature as a consequence of a general Wick theorem for fermions [65], and they turned out to be useful in the study of the 2d Ising model; see, e.g., [8].

  8. Note that the factor \(2^{-7/6}\) differs by a factor \(2^{-1/6}\) from the result obtained in the original paper [82]. This mistake was actually noticed in [60, p.47] — see also [19]. There, this factor was corrected by matching with known asymptotic results for large (positive and negative) arguments. Here, we obtain this correct factor \(2^{-7/6}\) by a direct computation.

  9. The term skew-diagonal is used here in analogy with the term diagonal, that is, the (non-trivial) skew-symmetric (\({\mathbf {M}}=-{\mathbf {M}}^T\)) analogue of a diagonal matrix.
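The error-function identities quoted in Note 4 are easy to sanity-check numerically; the following short sketch (ours, not from the paper) uses only the Python standard library:

```python
import math

# Check erfc(-x) = 2 - erfc(x) = 1 + erf(x), and erf(x) + erfc(x) = 1,
# at a few arbitrary test points.
for x in (0.0, 0.37, 1.5, -2.2):
    assert math.isclose(math.erfc(-x), 2 - math.erfc(x))
    assert math.isclose(math.erfc(-x), 1 + math.erf(x))
    assert math.isclose(math.erf(x) + math.erfc(x), 1.0)
```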

References

  1. Abramowitz, M., Stegun, I.A. (eds.): Handbook of mathematical functions, 10th edn. United States Department of Commerce, Washington D.C. (1972)

  2. Adler, M., Forrester, P., Nagao, T., van Moerbeke, P.: Classical skew orthogonal polynomials and random matrices. J. Stat. Phys. 99(1–2), 141–170 (2000)

  3. Adler, M., van Moerbeke, P.: Toda versus Pfaff lattice and related polynomials. Duke Math. J. 112(1), 1–58 (2002)

  4. Akemann, G., Atkin, M.R.: Higher order analogues of Tracy-Widom distributions via the Lax method. J. Phys. A 46(1), 015202 (2012)

  5. Akemann, G., Kanzieper, E.: Integrable structure of Ginibre’s ensemble of real random matrices and a Pfaffian integration theorem. J. Stat. Phys. 129, 1159–1231 (2007)

  6. Amir, G., Corwin, I., Quastel, J.: Probability distribution of the free energy of the continuum directed random polymer in 1+1 dimensions. Commun. Pure Appl. Math. 64(4), 466–537 (2011)

  7. Atkin, M.R., Zohren, S.: Instantons and extreme value statistics of random matrices. J. High Energy Phys. 2014(4), 118 (2014)

  8. Au-Yang, H., Perk, J.H.H.: Toda lattice equation and Wronskians in the 2d Ising model. Physica D 18(1–3), 365–366 (1986)

  9. Baik, J., Barraquand, G., Corwin, I., Suidan, T.: Pfaffian Schur processes and last passage percolation in a half-quadrant. Ann. Probab. 46(6), 3015–3089 (2018)

  10. Baik, J., Deift, P., Johansson, K.: On the distribution of the length of the longest increasing subsequence of random permutations. J. Am. Math. Soc. 12(4), 1119–1178 (1999)

  11. Baik, J., Jenkins, R.: Limiting distribution of maximal crossing and nesting of Poissonized random matchings. Ann. Probab. 41(6), 4359–4406 (2013)

  12. Baik, J., Rains, E.M.: Limiting distributions for a polynuclear growth model with external sources. J. Stat. Phys. 100(3–4), 523–541 (2000)

  13. Barraquand, G., Krajenbrink, A., Le Doussal, P.: Half-space stationary Kardar-Parisi-Zhang equation. J. Stat. Phys. 181(4), 1149–1203 (2020)

  14. Basor, E., Chen, Y.: Painlevé V and the distribution function of a discontinuous linear statistic in the Laguerre unitary ensembles. J. Phys. A 42(3), 035203 (2008)

  15. Biroli, G., Bouchaud, J.P., Potters, M.: On the top eigenvalue of heavy-tailed random matrices. Europhys. Lett. EPL 78(1), 10001 (2007)

  16. Bloemendal, A., Virág, B.: Limits of spiked random matrices I. Probab. Theory Relat. Fields 156(3), 795–825 (2013)

  17. Bornemann, F., Forrester, P.J.: Singular values and evenness symmetry in random matrix theory. Forum Math. 28, 873–891 (2015)

  18. Borodin, A., Soshnikov, A.: Janossy densities. I. Determinantal ensembles. J. Stat. Phys. 113(3), 595–610 (2003)

  19. Borot, G., Nadal, C.: Right tail asymptotic expansion of Tracy-Widom beta laws. Random Matrices Theory Appl. 1(03), 1250006 (2012)

  20. Bufetov, A.I., Cunden, F.D., Qiu, Y.: Conditional measures for Pfaffian point processes: conditioning on a bounded domain. arXiv preprint arXiv:1912.10743 (2019)

  21. Calabrese, P., Le Doussal, P., Rosso, A.: Free-energy distribution of the directed polymer at high temperature. Europhys. Lett. EPL 90(2), 20002 (2010)

  22. Cao, M., Chen, Y., Griffin, J.: Continuous and discrete Painlevé equations arising from the gap probability distribution of the finite \(n\) Gaussian unitary ensembles. J. Stat. Phys. 157(2), 363–375 (2014)

  23. Chiani, M.: Distribution of the largest eigenvalue for real Wishart and Gaussian random matrices and a simple approximation for the Tracy-Widom distribution. J. Multivar. Anal. 129, 69–81 (2014)

  24. de Bruijn, N.: On some multiple integrals involving determinants. J. Indian Math. Soc. New Ser. 19, 133–151 (1955)

  25. Dean, D.S., Le Doussal, P., Majumdar, S.N., Schehr, G.: Finite-temperature free fermions and the Kardar-Parisi-Zhang equation at finite time. Phys. Rev. Lett. 114(11), 110402 (2015)

  26. Dean, D.S., Le Doussal, P., Majumdar, S.N., Schehr, G.: Noninteracting fermions at finite temperature in a \(d\)-dimensional trap: Universal correlations. Phys. Rev. A 94(6), 063622 (2016)

  27. Dean, D.S., Le Doussal, P., Majumdar, S.N., Schehr, G.: Noninteracting fermions in a trap and random matrix theory. J. Phys. A 52(14), 144006 (2019)

  28. Dotsenko, V.: Bethe ansatz derivation of the Tracy-Widom distribution for one-dimensional directed polymers. Europhys. Lett. EPL 90(2), 20003 (2010)

  29. Dyson, F.J.: Statistical theory of the energy levels of complex systems. I. J. Math. Phys. 3(1), 140–156 (1962)

  30. Dyson, F.J.: Correlations between eigenvalues of a random matrix. Commun. Math. Phys. 19, 235–250 (1970)

  31. Forrester, P., Mays, A.: A method to calculate correlation functions for \(\beta =1\) random matrices of odd size. J. Stat. Phys. 134(3), 443–462 (2009)

  32. Forrester, P.J.: The spectrum edge of random matrix ensembles. Nucl. Phys. B 402(3), 709–728 (1993)

  33. Forrester, P.J.: Log-gases and random matrices, London Mathematical Society Monographs, vol. 34. Princeton University Press, Princeton (2010)

  34. Forrester, P.J., Majumdar, S.N., Schehr, G.: Non-intersecting Brownian walkers and Yang-Mills theory on the sphere. Nucl. Phys. B 844(3), 500–526 (2011)

  35. Fyodorov, Y.V.: Level curvature distribution: From bulk to the soft edge of random Hermitian matrices. Acta Phys. Pol. A 120(6A) (2011)

  36. Fyodorov, Y.V., Perret, A., Schehr, G.: Large time zero temperature dynamics of the spherical \(p= 2\)-spin glass model of finite size. J. Stat. Mech. Theory Exp. 2015(11), P11017 (2015)

  37. Green, H., Hurst, C.: Order-disorder phenomena. Interscience Publishers, London (1964)

  38. Gueudré, T., Le Doussal, P.: Directed polymer near a hard wall and KPZ equation in the half-space. Europhys. Lett. EPL 100(2), 26006 (2012)

  39. Harnad, J.: Janossy densities, multimatrix spacing distributions and Fredholm resolvents. Int. Math. Res. Not. 2004(48), 2599–2609 (2004)

  40. Imamura, T., Sasamoto, T.: Fluctuations of the one-dimensional polynuclear growth model with external sources. Nucl. Phys. B 699(3), 503–544 (2004)

  41. Janossy, L.: On the absorption of a nucleon cascade. Proc. R. Irish Acad. Sect. A 53, 181–188 (1950)

  42. Johnstone, I.M., Ma, Z.: Fast approach to the Tracy-Widom law at the edge of GOE and GUE. Ann. Appl. Probab. 22(5), 1962–1988 (2012)

  43. Knuth, D.E.: Overlapping Pfaffians. Electron. J. Comb. 3 (1996)

  44. Le Doussal, P., Calabrese, P.: The KPZ equation with flat initial condition and the directed polymer with one free end. J. Stat. Mech. Theory Exp. 2012(06), P06001 (2012)

  45. Liechty, K.: Nonintersecting Brownian motions on the half-line and discrete Gaussian orthogonal polynomials. J. Stat. Phys. 147(3), 582–622 (2012)

  46. Majumdar, S.N.: Course 4 random matrices, the Ulam problem, directed polymers & growth models, and sequence matching. Les Houches 85, 179–216 (2007)

  47. Majumdar, S.N., Nechaev, S.: Anisotropic ballistic deposition model with links to the Ulam problem and the Tracy-Widom distribution. Phys. Rev. E 69(1), 011103 (2004)

  48. Majumdar, S.N., Nechaev, S.: Exact asymptotic results for the Bernoulli matching model of sequence alignment. Phys. Rev. E 72(2), 020901 (2005)

  49. Majumdar, S.N., Pal, A., Schehr, G.: Extreme value statistics of correlated random variables: A pedagogical review. Phys. Rep. 840, 1–32 (2020)

  50. Majumdar, S.N., Schehr, G.: Top eigenvalue of a random matrix: Large deviations and third order phase transition. J. Stat. Mech. Theory Exp. 2014(1), P01012 (2014)

  51. Makey, G., Galioglu, S., Ghaffari, R., Engin, E.D., Yıldırım, G., Yavuz, Ö., Bektaş, O., Nizam, Ü.S., Akbulut, Ö., Şahin, Ö., et al.: Universality of dissipative self-assembly from quantum dots to human cells. Nat. Phys. 16(7), 795–801 (2020)

  52. Mays, A.: A geometrical triumvirate of real random matrices. Ph.D. thesis, The University of Melbourne, Parkville (2011)

  53. Mays, A.: A real quaternion spherical ensemble of random matrices. J. Stat. Phys. 153, 48–69 (2013)

  54. Mays, A., Ponsaing, A., Schehr, G.: In preparation (2020)

  55. Mehta, M.L.: Random matrices, vol. 142, 3rd edn. Academic Press, Boston (2004)

  56. Min, C., Chen, Y.: Linear statistics of matrix ensembles in classical background. Math. Methods Appl. Sci. 39(13), 3758–3790 (2016)

  57. Min, C., Chen, Y.: Linear statistics of random matrix ensembles at the spectrum edge associated with the Airy kernel. Nucl. Phys. B 950, 114836 (2020)

  58. Monthus, C., Garel, T.: Typical versus averaged overlap distribution in spin glasses: Evidence for droplet scaling theory. Phys. Rev. B 88(13), 134204 (2013)

  59. Muir, T.: A treatise on the theory of determinants. Macmillan and Co., London (1882)

  60. Nadal, C.: Matrices aléatoires et leurs applications à la physique statistique et quantique. Ph.D. thesis, Paris 11 (2011)

  61. Nadal, C., Majumdar, S.N.: Nonintersecting Brownian interfaces and Wishart random matrices. Phys. Rev. E 79(6), 061117 (2009)

  62. Nadal, C., Majumdar, S.N.: A simple derivation of the Tracy-Widom distribution of the maximal eigenvalue of a Gaussian unitary random matrix. J. Stat. Mech. Theory Exp. 2011(04), P04001 (2011)

  63. Nagao, T., Wadati, M.: Correlation functions of random matrix ensembles related to classical orthogonal polynomials. J. Phys. Soc. Jpn. 60(10), 3298–3322 (1991)

  64. Nguyen, G.B., Remenik, D.: Non-intersecting Brownian bridges and the Laguerre orthogonal ensemble. Ann. Inst. Henri Poincaré Probab. Stat. 53(4), 2005–2029 (2017)

  65. Perk, J.H.H., Capel, H.W., Quispel, G.R.W., Nijhoff, F.: Finite-temperature correlations for the Ising chain in a transverse field. Physica A 123, 1–49 (1984)

  66. Perret, A., Schehr, G.: Near-extreme eigenvalues and the first gap of Hermitian random matrices. J. Stat. Phys. 156(5), 843–876 (2014)

  67. Perret, A., Schehr, G.: The density of eigenvalues seen from the soft edge of random matrices in the Gaussian \(\beta \)-ensembles. Acta Phys. Pol. B 46(9), 1693 (2015)

  68. Perret, A., Schehr, G.: Finite \(N\) corrections to the limiting distribution of the smallest eigenvalue of Wishart complex matrices. Random Matrices Theory Appl. 5(01), 1650001 (2016)

  69. Prähofer, M., Spohn, H.: Universal distributions for growth processes in 1+1 dimensions and random matrices. Phys. Rev. Lett. 84(21), 4882 (2000)

  70. Rote, G.: Division-free algorithms for the determinant and the Pfaffian: Algebraic and combinatorial approaches. In: Alt, H. (ed.) Computational Discrete Mathematics: Advanced Lectures, pp. 119–135. Springer, Berlin, Heidelberg (2001)

  71. Sasamoto, T., Spohn, H.: One-dimensional Kardar-Parisi-Zhang equation: An exact solution and its universality. Phys. Rev. Lett. 104(23), 230602 (2010)

  72. Sinclair, C.D.: Correlation functions for \(\beta =1\) ensembles of matrices of odd size. J. Stat. Phys. 136, 17–33 (2009)

  73. Sinclair, C.D.: Ensemble averages when \(\beta \) is a square integer. Monatshefte für Mathematik 166, 121–144 (2012)

  74. Sommers, H.J., Wieczorek, W.: General eigenvalue correlations for the real Ginibre ensemble. J. Phys. A Math. Theor. 41(40), 405003 (2008)

  75. Soshnikov, A.: Janossy densities. II. Pfaffian ensembles. J. Stat. Phys. 113(3), 611–622 (2003)

  76. Soshnikov, A.: Janossy densities of coupled random matrices. Commun. Math. Phys. 251(3), 447–471 (2004)

  77. Stembridge, J.R.: Nonintersecting paths, Pfaffians, and plane partitions. Adv. Math. 83, 96–131 (1990)

  78. Stéphan, J.M.: Free fermions at the edge of interacting systems. SciPost Phys. 6, 057 (2019)

  79. Takeuchi, K.A., Sano, M.: Universal fluctuations of growing interfaces: evidence in turbulent liquid crystals. Phys. Rev. Lett. 104(23), 230601 (2010)

  80. Takeuchi, K.A., Sano, M., Sasamoto, T., Spohn, H.: Growing interfaces uncover universal fluctuations behind scale invariance. Sci. Rep. 1, 34 (2011)

  81. Tracy, C.A., Widom, H.: Level-spacing distributions and the Airy kernel. Commun. Math. Phys. 159(1), 151–174 (1994)

  82. Tracy, C.A., Widom, H.: On orthogonal and symplectic matrix ensembles. Commun. Math. Phys. 177(3), 727–754 (1996)

  83. Tracy, C.A., Widom, H.: Correlation functions, cluster functions, and spacing distributions for random matrices. J. Stat. Phys. 92(5), 809–835 (1998)

  84. Witte, N., Bornemann, F., Forrester, P.: Joint distribution of the first and second eigenvalues at the soft edge of unitary ensembles. Nonlinearity 26(6), 1799 (2013)

  85. Witte, N., Forrester, P.: On the variance of the index for the Gaussian unitary ensemble. Random Matrices Theory Appl. 1(04), 1250010 (2012)


Acknowledgements

A.M. would like to thank Michael Wheeler, Peter Forrester and Shi-Hao Li for helpful discussions. We are grateful to the anonymous reviewers for their careful reading of the manuscript and the many suggested improvements; in particular, we thank one of the reviewers for pointing us to the paper [77] and for suggesting a simplification of our proof of Lemma 1. We would also like to thank Jacques H. H. Perk for an interesting correspondence on the history of the “Overlapping Pfaffians”. A.M. and A.P. are supported by the Australian Research Council (ARC) Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS), ARC Grant No. CE140100049. A.M. thanks LPTMS for their hospitality during a visit supported by CNRS.

Author information

Correspondence to Anthony Mays.

Additional information

Communicated by Pierpaolo Vivo.


Appendices

Reminder on the Classical Ensembles of RMT: GOE, GUE and GSE

To keep this work self-contained, we recall here the definitions of the classical ensembles of RMT studied in this paper:

  • The Gaussian Orthogonal Ensemble (GOE) is the set of \(N\times N\) real symmetric matrices

    $$\begin{aligned} {\mathbf {M}}= \frac{{\mathbf {Y}}+ {\mathbf {Y}}^T}{2}, \end{aligned}$$
    (A.1)

    where \({\mathbf {Y}}\) has independent standard normally distributed elements \(y_{j,k} \sim {\mathcal {N}}[0,1]\), resulting in a matrix PDF proportional to \(e^{- (\mathrm {Tr}\,{\mathbf {M}}^2)/2}\), which is invariant under orthogonal conjugation \({\mathbf {M}}\mapsto {\mathbf {O}}^T {\mathbf {M}}{\mathbf {O}}\).

  • The Gaussian Unitary Ensemble (GUE) is the set of complex Hermitian matrices

    $$\begin{aligned} {\mathbf {M}}= \frac{{\mathbf {Y}}+ {\mathbf {Y}}^{\dagger }}{2} \end{aligned}$$
    (A.2)

    whose entries have independent Gaussian real and imaginary parts, \(y_{j,k} \sim {\mathcal {N}}[0,\frac{1}{\sqrt{2}}] + i {\mathcal {N}}[0,\frac{1}{\sqrt{2}}]\), giving a matrix PDF proportional to \(e^{- \mathrm {Tr}\,{\mathbf {M}}^2}\), which is invariant under unitary conjugation \({\mathbf {M}}\mapsto {\mathbf {U}}^{\dagger } {\mathbf {M}}{\mathbf {U}}\).

  • The Gaussian Symplectic Ensemble (GSE) is defined similarly for normally distributed quaternionic entries. Detailed treatments of quaternions in random matrix theory can be found in several places (see, for example, [29, 53, 55]); however, they will not be required for understanding the current work, as we use the equivalent \(2\times 2\) representation of quaternions

    $$\begin{aligned} \begin{bmatrix} a_1+ ib_1 &{}a_2 +i b_2\\ -a_2+ ib_2 &{}a_1 -i b_1 \end{bmatrix} \qquad (a_1, a_2, b_1, b_2 \in {\mathbb {R}}). \end{aligned}$$
    (A.3)

    The ensemble is then the set of \(2N\times 2N\) matrices,

    $$\begin{aligned} {\mathbf {M}}= \frac{{\mathbf {Y}}+ {\mathbf {Y}}^{\dagger }}{2}, \end{aligned}$$
    (A.4)

    where each \(2\times 2\) block of \({\mathbf {Y}}\) is of the form (A.3), with each independent real component normally distributed, \(a_1, b_1, a_2, b_2 \sim {\mathcal {N}}[0, \frac{1}{2} ]\). The matrix PDF is then proportional to \(e^{- \mathrm {Tr}\,{\mathbf {M}}^2}\), which is invariant under symplectic conjugation, that is, conjugation by a unitary matrix, \({\mathbf {M}}\mapsto {\mathbf {U}}^{\dagger } {\mathbf {M}}{\mathbf {U}}\), with the restriction that

    $$\begin{aligned} {\mathbf {U}}{\mathbf {Z}}_{N} {\mathbf {U}}^T = \pm {\mathbf {Z}}_{N} \;, \end{aligned}$$
    (A.5)

    where

    $$\begin{aligned} {\mathbf {Z}}_N:= \begin{bmatrix} 0&{} 1\\ -1&{} 0 \end{bmatrix} \otimes {\mathbf {I}}_N = \begin{bmatrix} 0&{} 1&{} 0 &{} 0&{}0&{} &{}\\ -1&{} 0&{} 0 &{} 0&{}0&{} &{}\dots \\ 0&{}0&{}0&{}1&{} 0 &{}&{}\\ 0&{}0&{}-1&{}0&{} 0&{} &{}\\ &{}\vdots &{}&{}&{}&{}&{}\ddots \end{bmatrix}, \end{aligned}$$
    (A.6)

    and \({\mathbf {I}}_{N}\) is the \(N\times N\) identity matrix.
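As a concrete illustration (our sketch, not from the paper), the GOE and GUE constructions (A.1) and (A.2) can be sampled directly, and the claimed symmetry properties are immediate to verify; the function names here are our own:

```python
import random

def sample_goe(n, rng):
    # (A.1): symmetrize an n x n matrix Y of independent standard normals
    Y = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    return [[(Y[j][k] + Y[k][j]) / 2 for k in range(n)] for j in range(n)]

def sample_gue(n, rng):
    # (A.2): Hermitian part of Y with N[0, 1/sqrt(2)] real and imaginary parts
    s = 2 ** -0.5
    Y = [[complex(rng.gauss(0.0, s), rng.gauss(0.0, s)) for _ in range(n)]
         for _ in range(n)]
    return [[(Y[j][k] + Y[k][j].conjugate()) / 2 for k in range(n)]
            for j in range(n)]

rng = random.Random(2024)
M = sample_goe(4, rng)                  # real symmetric
H = sample_gue(4, rng)                  # complex Hermitian
assert all(M[j][k] == M[k][j] for j in range(4) for k in range(4))
assert all(H[j][k] == H[k][j].conjugate() for j in range(4) for k in range(4))
```

A full GSE sampler would additionally assemble \({\mathbf {Y}}\) from the \(2\times 2\) quaternion blocks (A.3); we omit it here for brevity.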

Pfaffians

Pfaffians are a key tool in our work here, so for the non-specialist we provide some background information. The statements in this section are classical results; references include [24, 59, 77]. A brief historical survey of the topic is provided in [43, §6].

Definition B.1

(Pfaffian) Let \({\mathbf {M}}=[m_{j, k}]_{j,k =1,... ,2N}\), where \(m_{j, k}=-m_{k, j}\), so that \({\mathbf {M}}\) is an anti-symmetric matrix of even size. Then the Pfaffian of \({\mathbf {M}}\) is defined by

$$\begin{aligned} \mathrm {Pf}\; {\mathbf {M}}&=\sum ^*_{\begin{array}{c} \pi \in S_{2N} \\ \pi (2j)>\pi (2j -1) \end{array}} \varepsilon (\pi ) m_{\pi (1),\pi (2)} m_{\pi (3), \pi (4)}\cdots m_{\pi (2N-1), \pi (2N)}\nonumber \\&=\frac{1}{N!} \sum _{\begin{array}{c} \pi \in S_{2N} \\ \pi (2j) >\pi (2j -1) \end{array}} \varepsilon (\pi ) m_{\pi (1), \pi (2)} m_{\pi (3),\pi (4)}\cdots m_{\pi (2N-1), \pi (2N)}\nonumber \\&=\frac{1}{2^N N!}\sum _{\pi \in S_{2N}} \varepsilon (\pi ) m_{\pi (1), \pi (2)} m_{\pi (3), \pi (4)}\cdots m_{\pi (2N-1),\pi (2N)}, \end{aligned}$$
(B.1)

where \(S_{2N}\) is the group of permutations of 2N letters and \(\varepsilon (\pi )\) is the signature of the permutation \(\pi \). The * above the first sum indicates that the sum is over distinct terms only (that is, all permutations of the pairs of indices are regarded as identical).

Note that in the second equality of (B.1) the factors of 2 are associated with the restriction \(\pi (2j)> \pi (2j -1)\), while the factorial is associated with counting only distinct terms [N! is the number of ways of arranging the N pairs of indices \(\pi (2l-1), \pi (2l)\)]. Pfaffians can be calculated via a version of Laplace expansion; however, the Pfaffian minors \({\mathbf {M}}^{(j,k)}\) that one needs to calculate are obtained by striking out both the jth and kth rows and the jth and kth columns.
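The Laplace expansion just mentioned is easy to implement: expanding along the first row gives \(\mathrm {Pf}\,{\mathbf {M}} = \sum _{k=2}^{2N} (-1)^{k} m_{1,k}\, \mathrm {Pf}\,{\mathbf {M}}^{(1,k)}\). A minimal pure-Python sketch (ours, not from the paper; function names are our own):

```python
from itertools import permutations

def pf(M):
    # Pfaffian by Laplace expansion along the first row:
    # Pf M = sum_k (-1)^k m_{1,k} Pf M^{(1,k)},  k = 2, ..., 2N (1-indexed)
    n = len(M)
    if n == 0:
        return 1
    total = 0
    for k in range(1, n):  # 0-indexed column k pairs with row 0
        keep = [r for r in range(n) if r not in (0, k)]
        minor = [[M[r][c] for c in keep] for r in keep]
        total += (-1) ** (k - 1) * M[0][k] * pf(minor)
    return total

def det(M):
    # Leibniz formula; fine for the small matrices used here
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        term = 1
        for i in range(n):
            term *= M[i][p[i]]
        total += (-1) ** inv * term
    return total

# Antisymmetric 4x4 test matrix: Pf = m01*m23 - m02*m13 + m03*m12
M4 = [[0, 1, 2, 3],
      [-1, 0, 4, 5],
      [-2, -4, 0, 6],
      [-3, -5, -6, 0]]
assert pf(M4) == 1 * 6 - 2 * 5 + 3 * 4  # = 8
assert pf(M4) ** 2 == det(M4)           # identity (B.2)
```

The same helpers also let one spot-check \(\mathrm {Pf}\,({\mathbf {B}}{\mathbf {M}}{\mathbf {B}}^T)= \det ({\mathbf {B}})\, \mathrm {Pf}\,({\mathbf {M}})\) on small examples.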

The definition of a Pfaffian is very close to that of a determinant, and for the matrix \({\mathbf {M}}\) (antisymmetric of size \(2N \times 2N\)), they are related by

$$\begin{aligned} (\mathrm {Pf}\,{\mathbf {M}})^2 = \det {\mathbf {M}}. \end{aligned}$$
(B.2)

We will also need the identity [24]

$$\begin{aligned} \mathrm {Pf}\,({\mathbf {B}}{\mathbf {M}}{\mathbf {B}}^T)= \det ({\mathbf {B}}) \mathrm {Pf}\,({\mathbf {M}}), \end{aligned}$$
(B.3)

where \({\mathbf {B}}\) is a general \(2N \times 2N\) matrix.

1.1 Pfaffians and Perfect Matchings

In several places in this work (e.g. Lemma 1 and Appendix F.2) we use an expression equivalent to (B.1) in terms of perfect matchings and link patterns. Expressions for Pfaffians in terms of perfect matchings have long been known and are discussed in many places; we refer to [43, 70, 77].

A perfect matching \(\mu \) is a set of links between 2N sites, where each site is connected to exactly one other site. Diagrammatically, this is expressed as a link diagram, which is most easily seen via an example: let

$$\begin{aligned} \mu = \{(2,3), (5,1), (4, 6)\} \end{aligned}$$
(B.4)

and the link diagram is given in Fig. 3. The signature \(\varepsilon (\mu )\) of the perfect matching is given by \((-1)^{\# \chi }\), where \(\# \chi \) is the number of crossings in the link pattern; for the example in (B.4) we have \(\varepsilon (\mu ) = (-1)^1\). We denote the set of all perfect matchings on 2N sites by \(M_{2N}\), and the number of perfect matchings is

$$\begin{aligned} |M_{2N}| = (2N -1)!! = (2N -1) \cdot (2N-3) \cdots (3) \cdot (1), \end{aligned}$$
(B.5)

since there are \(2N-1\) sites for the first site to pair with, then \(2N-3\) sites for the second site to pair with, etc. (Note that usually a perfect matching is defined as a set of edges on a graph such that every vertex is included exactly once. However this characterization will not be useful for us, and for a complete graph it is equivalent to the definition we use in terms of link patterns.)
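The counting argument above is easy to check by brute force; the following sketch (our illustration, not from the paper) enumerates perfect matchings recursively, always pairing the smallest remaining site first:

```python
def perfect_matchings(sites):
    # Pair the first (smallest) remaining site with every possible partner
    if not sites:
        yield []
        return
    first, rest = sites[0], sites[1:]
    for i, partner in enumerate(rest):
        for m in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + m

def double_factorial(n):
    # (2N-1)!! = (2N-1)(2N-3)...3.1, as in Eq. (B.5)
    return 1 if n <= 1 else n * double_factorial(n - 2)

# |M_{2N}| = (2N-1)!! for the first few N: 1, 3, 15, 105
for N in range(1, 5):
    count = sum(1 for _ in perfect_matchings(list(range(1, 2 * N + 1))))
    assert count == double_factorial(2 * N - 1)
```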

Fig. 3: The link pattern for the perfect matching \(\mu = \{(2,3), (5,1), (4, 6)\} = \{ (1, 5), (2,3), (4, 6)\}\) on sites \(\{ 1,2,3, 4, 5, 6\}\) from (B.4)

The connection to Pfaffians comes from the fact that there is a bijection from \(M_{2N}\) to a subset of \(S_{2N}\), the set of permutations of \(\{1, \dots , 2N \}\). The bijection is found by taking a perfect matching \(\mu \) and ordering the components of each pair such that \(\mu = \{ (\mu _{j,L}, \mu _{j,R}) \}_{j= 1, \dots , N}\), where \(\mu _{j,L}\) and \(\mu _{j,R}\) are respectively the left and right terminals of link j. (In graph parlance, this creates a directed link pattern, where all links point from, say, left to right.) Then we define an ordering between the pairs according to some scheme (say, that \(\mu _{j,L}< \mu _{j+1 ,L}\)), which results in a unique representative ordered set of pairs for each perfect matching. Then, by removing the pairing, we obtain a unique \(s\in S_{2N}\). For the example in (B.4) we find

$$\begin{aligned} M_{6} \ni \{(2,3), (5,1), (4, 6)\} = \{ (1, 5), (2,3), (4, 6)\} \mapsto (1,5,2,3,4,6) \in S_6. \end{aligned}$$
(B.6)

The reverse mapping \(S_{2N} \supset {\hat{S}}_{2N} \rightarrow M_{2N}\) is clear: \(s\in {\hat{S}}_{2N}\) is a permutation of \(1, \dots , 2N\) such that \(s(2j-1)< s(2j)\) and \(s(2j-1)< s(2k-1)\) for \(j<k\).

In order for this mapping to make sense, we need \(\varepsilon (\mu )= \varepsilon (s)\); that is, the sign \((-1)^{\# \chi }\) determined by the number of crossings in the perfect matching \(\mu \) must equal the signature of the permutation s, given by \((-1)^\tau \), where \(\tau \) is the number of transpositions required to return s to the identity permutation. This can be shown by first noting that the identity permutation gives a link pattern with no crossings, and then that a crossing can always be removed by a single transposition, while a link pattern with no crossings can be transformed to the identity by an even number of transpositions.

The conditions defining \({\hat{S}}_{2N}\) are the same restrictions on \(S_{2N}\) as those implied by the first line of (B.1), and so we have the following equivalent expression for the Pfaffian:

$$\begin{aligned} \mathrm {Pf}\,{\mathbf {M}}= \sum _{\mu \in M_{2N}} \varepsilon (\mu )\; m_{i_1, j_1} m_{i_2, j_2}\cdots m_{i_N, j_{N}}, \end{aligned}$$
(B.7)

where \(M_{2N}\) is the set of all perfect matchings \(\mu = \{ (i_1, j_1), \dots , (i_N, j_N) \}\) on 2N sites, and \(\varepsilon (\mu )\) is the signature of the perfect matching, or equivalently, the signature of the corresponding permutation. Note that in this representation, since we are in the upper half-triangle of the matrix (where the row index is less than the column index) the left vertex \(\mu _{j,L}\) of each edge corresponds to the row index of the matrix, while the right vertex \(\mu _{j,R}\) corresponds to the column.
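Expression (B.7) translates directly into code. The sketch below (ours, not from the paper) enumerates matchings with pairs ordered by left terminal, counts crossings to obtain the signature, and reproduces the familiar \(2N=4\) expansion \(\mathrm {Pf}\,{\mathbf {M}} = m_{12}m_{34} - m_{13}m_{24} + m_{14}m_{23}\):

```python
from itertools import combinations
from math import prod

def perfect_matchings(sites):
    # Matchings with each pair (left, right) ordered, pairs sorted by left terminal
    if not sites:
        yield []
        return
    first, rest = sites[0], sites[1:]
    for i, partner in enumerate(rest):
        for m in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + m

def signature(mu):
    # (-1)^(# crossings); links (a,b), (c,d) with a < c < b < d cross
    crossings = sum(1 for (a, b), (c, d) in combinations(mu, 2) if c < b < d)
    return (-1) ** crossings

def pf_matchings(M):
    # Eq. (B.7): signed sum over perfect matchings of products of entries,
    # with the left terminal as row index and the right terminal as column index
    n = len(M)
    return sum(signature(mu) * prod(M[i][j] for i, j in mu)
               for mu in perfect_matchings(list(range(n))))

M4 = [[0, 1, 2, 3],
      [-1, 0, 4, 5],
      [-2, -4, 0, 6],
      [-3, -5, -6, 0]]
# m01*m23 - m02*m13 + m03*m12 = 1*6 - 2*5 + 3*4 = 8
assert pf_matchings(M4) == 8
```

For the example in (B.4), `signature([(1, 5), (2, 3), (4, 6)])` returns \(-1\), the single crossing visible in Fig. 3.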

Proofs of Restricted Partition Functions

Although the statements in Propositions 1 and 2 are straightforward variations on the \(y= \infty \) results (see [33, 55, 83]), we provide the proofs for our specific case here to keep this work self-contained for the non-specialist.

1.1 Proof of Proposition 1 (\(\beta =4\))

We start with the identity [55]

$$\begin{aligned} \prod _{1\le j< k\le N} (\lambda _j - \lambda _k)^4 = \det \left[ \begin{array}{c} \lambda _j^{k-1}\\ (k-1) \lambda _j^{k-2} \end{array}\right] _{\begin{array}{c} j=1, \dots , N \\ k= 1, \dots , 2N \end{array}}, \end{aligned}$$
(C.1)

and note that each even row is the derivative of the odd row immediately above it. Working column by column from the left, we can then add linear combinations of the columns to the left of a given column to place an arbitrary monic polynomial in that column, while preserving the derivative relationship between the even and odd rows. For our purpose, we choose the polynomials to be the \(Q_j\), which are skew-orthogonal with respect to the skew-inner product (7), giving

$$\begin{aligned}&{\hat{Z}}_{4, N} [a, y] \nonumber \\&\quad = \frac{1}{Z_{4, N}} \int _{-\infty }^y d\lambda _1 \cdots \int _{-\infty }^y d\lambda _N \prod _{j=1}^N a(\lambda _j) e^{- 2 \lambda _j^2 } \det \left[ \begin{array}{cc} Q_{2k-2} (\lambda _j) &{} Q_{2k-1} (\lambda _j)\\ Q_{2k-2}' (\lambda _j) &{} Q_{2k-1}' (\lambda _j) \end{array}\right] _{j,k= 1, \dots , N}\nonumber \\&\quad = \frac{1}{Z_{4, N}} \sum _{\pi \in S_{2N}} \varepsilon (\pi ) \prod _{j=1}^N \int _{-\infty }^y d\lambda \; a(\lambda ) e^{-2\lambda ^2} Q_{\pi (2j-1) -1} (\lambda ) Q_{\pi (2j) -1}' (\lambda ), \end{aligned}$$
(C.2)

where the second line follows from Laplace expansion of the determinant, and we apply the integrals to each matched pair of Q and \(Q'\). (Note that we suppress the dependence on y for brevity.)

For each pair of indices on the Q and \(Q'\) in (C.2), we then match up each permutation with the corresponding permutation where that index pair is interchanged, hence picking up a \((-1)\), giving

$$\begin{aligned} {\hat{Z}}_{4, N} [a, y]&=\frac{1}{Z_{4, N}} \nonumber \\&\quad \times \sum _{\begin{array}{c} \pi \in S_{2N} \\ \pi (2j)> \pi (2j-1) \end{array}} \varepsilon (\pi ) \prod _{j=1}^N \int _{-\infty }^y d\lambda \; a(\lambda ) e^{-2\lambda ^2} \Big ( Q_{\pi (2j-1) -1} (\lambda ) Q_{\pi (2j) -1}' (\lambda ) \nonumber \\&\quad - Q_{\pi (2j) -1} (\lambda ) Q_{\pi (2j-1) -1}' (\lambda ) \Big ), \end{aligned}$$
(C.3)

where we need to restrict the sum to just those permutations obeying the rule \(\pi (2j)> \pi (2j-1)\) for all j. Introducing a factor of \(\frac{1}{2}\) for each integral (incurring a pre-factor of \(2^N\)), then using the definition of the Pfaffian recalled in (B.1) we obtain

$$\begin{aligned} {\hat{Z}}_{4, N} [a, y] =\frac{2^N N!}{Z_{4, N}} \mathrm {Pf}\,\left[ \frac{1}{2} \int _{-\infty }^y d\lambda \; a(\lambda ) e^{-2\lambda ^2} \Big ( Q_{j} (\lambda ) Q_{k}' (\lambda ) - Q_{k} (\lambda ) Q_{j}' (\lambda ) \Big ) \right] _{j,k=0, \dots , 2N-1}. \end{aligned}$$
(C.4)

The equality between the first and second lines in (7) gives the result in (40).

1.2 Proof of Proposition 2 (\(\beta =1\), N even)

We start by ordering the eigenvalues \(-\infty< \lambda _1<\cdot \cdot \cdot< \lambda _N < y\) (incurring a factor of N!) in (33) so that we can remove the absolute value from the product of differences. Then we use the Vandermonde determinant expression (suppressing the polynomial dependence on y)

$$\begin{aligned} {\hat{Z}}_{1,N} [a ,y]&= \frac{N!}{Z_{1, N}} \int _{-\infty }^{y}d\lambda _N \int _{-\infty }^{\lambda _N}d\lambda _{N-1} \nonumber \\&\quad \cdot \cdot \cdot \int _{-\infty }^{\lambda _2} d\lambda _1 \prod _{j=1}^N e^{-\lambda _j^2/2}\; a(\lambda _j)\prod _{1\le j < k \le N}(\lambda _k -\lambda _j)\nonumber \\&=\frac{N!}{Z_{1, N}} \int _{-\infty }^{y}d\lambda _N \int _{-\infty }^{\lambda _N} d\lambda _{N-1} \nonumber \\&\quad \cdot \cdot \cdot \int _{-\infty }^{\lambda _2} d\lambda _1 \det \left[ e^{-\lambda _j^2/2} a(\lambda _j) \lambda _j^{k-1} \right] _{j,k=1,...,N}\nonumber \\&=\frac{N!}{Z_{1, N}} \int _{-\infty }^{y} d\lambda _N \int _{-\infty }^{\lambda _N} d\lambda _{N-1} \nonumber \\&\quad \cdot \cdot \cdot \int _{-\infty }^{\lambda _2} d\lambda _1 \det \left[ e^{-\lambda _j^2/2} a(\lambda _j) R_{k-1}(\lambda _j) \right] _{j,k=1,...,N}, \end{aligned}$$
(C.5)

where the third equality follows from elementary column operations. This is the same procedure that was applied to (C.1) in the \(\beta =4\) case above, and it allows us to obtain any set of monic polynomials in the columns; for our purposes we specify the polynomials to be the \(\{R_j\}\), which are skew-orthogonal with respect to the skew-inner product (8).

Now we wish to apply the method of integration over alternate variables (mentioned above), and to prepare for that we change the order of the integrals, with even integrals on the left and odd integrals on the right

$$\begin{aligned} {\hat{Z}}_{1,N} [a ,y]&=\frac{N!}{Z_{1, N}}\nonumber \\&\quad \times \int _{-\infty }^{y}d\lambda _N \int _{-\infty }^{\lambda _N} d\lambda _{N-2} \cdot \cdot \cdot \int _{-\infty }^{\lambda _4} d\lambda _2 \int _{\lambda _{N-2}}^{\lambda _N} d\lambda _{N-1} \nonumber \\&\quad \cdots \int _{\lambda _{2}}^{\lambda _{4}} d\lambda _{3} \int _{-\infty }^{\lambda _2} d\lambda _1 \det \left[ e^{-\lambda _j^2/2} a(\lambda _j) R_{k-1} (\lambda _j) \right] _{j,k=1,...,N}. \end{aligned}$$
(C.6)

The purpose of this manipulation is that each odd variable \(\lambda _{2n-1}\) now appears only in the \((2n-1)\)st row of the determinant, so the odd integrals can be applied to their respective rows:

$$\begin{aligned} {\hat{Z}}_{1,N} [a ,y]&=\frac{N!}{Z_{1, N}} \int _{-\infty }^{y} d\lambda _N \int _{-\infty }^{\lambda _N}d\lambda _{N-2} \cdot \cdot \cdot \int _{-\infty }^{\lambda _4} d\lambda _2 \det \nonumber \\&\qquad \times \left[ \begin{array}{c} \int _{-\infty }^{\lambda _{2j}} e^{-\lambda ^2/2} a(\lambda ) R_{k-1}(\lambda ) d\lambda \\ e^{-\lambda _{2j}^2/2} a(\lambda _{2j}) R_{k-1}(\lambda _{2j}) \end{array}\right] _{\begin{array}{c} j=1,...,N/2 \\ k=1,...,N \end{array}}, \end{aligned}$$
(C.7)

where we have added the first row to the third row, and the first and third rows to the fifth row, and so on, so all the integrals have lower terminal \(-\infty \). (This sequence of steps is the method of integration over alternate variables.)

We see that the determinant in (C.7) is now symmetric in the variables \(\lambda _2, \lambda _4,..., \lambda _N\), and so we can remove the ordering \(\lambda _2< \lambda _4< ... < \lambda _N\) at the cost of dividing by (N/2)!. Taking the Laplace expansion of the determinant we find

$$\begin{aligned} {\hat{Z}}_{1,N} [a ,y]= \frac{1}{Z_{1,N}} \frac{N!}{(N/2)!} \sum _{\pi \in S_N} \varepsilon (\pi ) \prod _{j=1}^{N/2} \mu _{\pi (2j -1), \pi (2j)}, \end{aligned}$$
(C.8)

where

$$\begin{aligned} \mu _{j,k}:=\int _{-\infty }^{y} dx\, e^{-x^2/2}\, a(x) \,R_{k-1}(x) \int _{-\infty }^x dz\, e^{-z^2/2}\, a(z)\, R_{j-1}(z), \end{aligned}$$
(C.9)

and \(\varepsilon (\pi )\) is the signature of the permutation \(\pi \). By defining

$$\begin{aligned} \gamma _{j,k}^{(1)} :=\frac{1}{2}(\mu _{j,k} -\mu _{k,j}), \end{aligned}$$
(C.10)

which incurs a factor of \(2^{N/2}\), we can then restrict the sum to terms with \(\pi (2j)> \pi (2j -1)\) for all j, giving

$$\begin{aligned} {\hat{Z}}_{1,N} [a ,y]=\frac{1}{Z_{1,N}} 2^{N/2} \frac{N!}{(N/2)!} \sum _{\begin{array}{c} \pi \in S_N \\ \pi (2j) > \pi (2j-1) \end{array}} \varepsilon (\pi ) \prod _{j=1}^{N/2} \gamma _{\pi (2j-1), \pi (2j)}^{(1)}. \end{aligned}$$
(C.11)

Now using (B.1) we have the result in (44)–(45) [where we cancel the factor of (N/2)! to account for summing over distinct terms only].
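For \(N=2\) the chain of manipulations above can be checked numerically: by (C.5)–(C.11), with \(a \equiv 1\), the ordered integral of the determinant should equal \(2^{N/2}\,\mathrm {Pf}\,[\gamma ^{(1)}_{j,k}] = 2 \gamma ^{(1)}_{1,2}\). A sketch assuming numpy, with simple grid quadrature in place of exact integration:

```python
import numpy as np

y = 0.7
xs = np.linspace(-8.0, y, 4001)          # fine grid for the iterated integrals
dx = xs[1] - xs[0]
w = np.exp(-xs**2 / 2)                   # weight e^{-x^2/2}, with a(x) = 1

def cumtrapz(f):
    """Cumulative trapezoidal integral from the left end of the grid."""
    return np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) * dx / 2)])

# mu_{j,k} of (C.9) for N = 2, with R_0(x) = 1 and R_1(x) = x
mu12 = cumtrapz(w * xs * cumtrapz(w))[-1]        # outer R_1, inner R_0
mu21 = cumtrapz(w * cumtrapz(w * xs))[-1]        # outer R_0, inner R_1
gamma12 = (mu12 - mu21) / 2                      # gamma^{(1)}_{1,2} of (C.10)

# brute-force ordered integral of the determinant over -inf < z < x < y:
#   int int_{z < x} e^{-(x^2+z^2)/2} (x - z) dz dx
xc = np.linspace(-6.0, y, 1201)
dc = xc[1] - xc[0]
zz, xx = np.meshgrid(xc, xc)
ordered = np.sum(np.exp(-(xx**2 + zz**2) / 2) * (xx - zz) * (zz < xx)) * dc**2
```

The two-dimensional Riemann sum is deliberately independent of the iterated integrals, so the agreement (to quadrature accuracy) checks the reduction to a Pfaffian rather than just rewriting the same discretization.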

1.3 Proof of Proposition 3 (\(\beta =1\), N odd)

As in the N even case, we follow the same steps that led to (C.5) and (C.6), but now with an odd number of \(\lambda _j\), so (C.6) becomes

$$\begin{aligned} {\hat{Z}}_{1,N_{\mathrm {odd}}} [a ,y]&=\frac{N!}{Z_{1, N}}\nonumber \\&\quad \times \int _{-\infty }^{y}d\lambda _{N-1} \cdot \cdot \cdot \int _{-\infty }^{\lambda _4} d\lambda _2 \int _{\lambda _{N-1}}^{y} d\lambda _{N} \int _{\lambda _{N-3}}^{\lambda _{N-1}} d\lambda _{N-2}\nonumber \\&\quad \cdots \int _{\lambda _{2}}^{\lambda _{4}} d\lambda _{3} \int _{-\infty }^{\lambda _2} d\lambda _1 \det \left[ e^{-\lambda _j^2/2} a(\lambda _j) R_{k-1} (\lambda _j) \right] _{j,k=1,...,N}. \end{aligned}$$
(C.12)

We depart from the N even case when we pair up the even and odd rows (“integration over alternate variables”), and we must have one unpaired row. So we replace (C.7) with

$$\begin{aligned} {\hat{Z}}_{1,N_{\mathrm {odd}}} [a ,y]&=\frac{N!}{Z_{1, N}} \int _{-\infty }^{y} d\lambda _{N-1} \int _{-\infty }^{\lambda _{N-1}} d\lambda _{N-3} \nonumber \\&\quad \cdot \cdot \cdot \int _{-\infty }^{\lambda _{4}} d\lambda _{2} \det \left[ \begin{array}{c} \left[ \begin{array}{c} \int _{-\infty }^{\lambda _{2j}} e^{-\lambda ^2/2} a(\lambda ) R_{k-1}(\lambda ) d\lambda \\ e^{-\lambda _{2j}^2/2} a(\lambda _{2j}) R_{k-1}(\lambda _{2j}) \end{array}\right] \\ \int _{-\infty }^{y }e^{-\lambda ^2/2} a(\lambda ) R_{k-1} (\lambda ) d\lambda \end{array}\right] _{\begin{array}{c} j=1,...,(N-1)/2 \\ k=1,...,N \end{array}} \nonumber \\&=\frac{1}{Z_{1, N}} \frac{N!}{((N-1)/2)!} \int _{-\infty }^{y} d\lambda _{N-1} \int _{-\infty }^{y} d\lambda _{N-3} \nonumber \\&\quad \cdot \cdot \cdot \int _{-\infty }^{y} d\lambda _{2} \det \left[ \begin{array}{c} \left[ \begin{array}{c} \int _{-\infty }^{\lambda _{2j}} e^{-\lambda ^2/2} a(\lambda ) R_{k-1}(\lambda ) d\lambda \\ e^{-\lambda _{2j}^2/2} a(\lambda _{2j}) R_{k-1}(\lambda _{2j}) \end{array}\right] \\ \int _{-\infty }^{y }e^{-\lambda ^2/2} a(\lambda ) R_{k-1} (\lambda ) d\lambda \end{array}\right] _{\begin{array}{c} j=1,...,(N-1)/2 \\ k=1,...,N \end{array}}, \end{aligned}$$
(C.13)

where, in the second equality we have removed the ordering on the \((N-1)/2\) even variables.

So now with \(\mu _{j,k}\) from (C.9) and \(\nu _j\) from (49) we expand this determinant to obtain

$$\begin{aligned} {\hat{Z}}_{1,N_{\mathrm {odd}}} [a ,y]= \frac{1}{Z_{1,N}} \frac{N!}{((N-1)/2)!} \sum _{\pi \in S_N} \varepsilon (\pi ) \nu _{\pi (N) -1} [a, y] \prod _{j=1}^{(N-1)/2} \mu _{\pi (2j -1), \pi (2j)}. \end{aligned}$$
(C.14)

Introducing \(\gamma _{j,k}^{(1)}\) via (C.10) and restricting the sum to those terms with \(\pi (2j) > \pi (2j-1)\) we have

$$\begin{aligned}&{\hat{Z}}_{1,N_{\mathrm {odd}}} [a ,y]= \frac{2^{(N-1)/2} }{Z_{1,N}} \frac{N!}{((N-1)/2)!} \nonumber \\&\quad \sum _{\begin{array}{c} \pi \in S_N \\ \pi (2j) > \pi (2j-1) \end{array}} \varepsilon (\pi ) \nu _{\pi (N) -1} [a, y] \prod _{j=1}^{(N-1)/2} \gamma _{\pi (2j -1), \pi (2j)}^{(1)}. \end{aligned}$$
(C.15)

Further restricting the sum to distinct terms only [which gives a factor of \(((N-1)/2)!\)] and identifying \(\nu _{\pi (N) -1, N}:= \nu _{\pi (N) -1}\), we apply (B.1) with the permutations \(\pi \in S_{N-1}\) acting on just the first \(N-1\) elements, and we have the result in Proposition 3.

Iterative Construction of the First Few Skew-Orthogonal Polynomials

In this Appendix, we iteratively construct the first few skew-orthogonal polynomials defined in Eqs. (10) and (11).

First for \(\beta =4\), by monicity, we must have \(Q_0 (\lambda ) = 1\) and by (29) we can assume that \(Q_1 (\lambda ) = \lambda \), then we use the skew-inner product relations (7) to iteratively solve for the higher degree polynomials, so the first four skew-orthogonal polynomials are

$$\begin{aligned} Q_0 (\lambda , y)&= 1, \qquad Q_1 (\lambda , y) = \lambda , \qquad Q_2 (\lambda , y)= \lambda ^2 +b \lambda + \frac{1 -2yb}{4}, \end{aligned}$$
(D.1)
$$\begin{aligned} Q_3 (\lambda , y)&= \lambda ^3 - 3 \frac{1- 2y b}{4} \lambda -b \frac{1+2 y^2}{2} \end{aligned}$$
(D.2)

[where we used (29) for \(Q_3 (\lambda , y)\)] with normalizations

$$\begin{aligned} q_0 (y)&:= \langle Q_0 , Q_1 \rangle _{4}^y = \frac{\sqrt{\pi }}{4 \sqrt{2}} \;\mathrm {erfc}(-\sqrt{2} y) = \frac{e^{-2y^2}}{4 b}, \end{aligned}$$
(D.3)
$$\begin{aligned} q_1 (y)&:= \langle Q_2 , Q_3 \rangle _{4}^y = \frac{1}{64} \left( 3 \sqrt{2 \pi } \mathrm {erfc}(- \sqrt{2} y) -2 e^{-2 y^2} y (9 +4 y^2) - 4 e^{-2 y^2} ( 2 +y^2) b \right) , \end{aligned}$$
(D.4)

where

$$\begin{aligned} b= \frac{\sqrt{2} e^{-2y^2}}{\sqrt{\pi } (1+\mathrm {erf}(\sqrt{2} y))}= \frac{\sqrt{2} e^{-2y^2}}{\sqrt{\pi } \mathrm {erfc}(- \sqrt{2} y)}. \end{aligned}$$
(D.5)
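These formulas can be checked numerically. The following sketch (assuming numpy and the standard-library erfc) evaluates the \(\beta =4\) skew-inner product in the integral form appearing in (C.4), \(\langle f, g \rangle _4^y = \frac{1}{2} \int _{-\infty }^y e^{-2\lambda ^2} (f g' - g f')\, d\lambda \) with \(a \equiv 1\), and confirms (D.3) together with two of the skew-orthogonality relations:

```python
import numpy as np
from math import erfc, exp, pi, sqrt

y = 0.5
lam = np.linspace(-8.0, y, 20001)       # (-inf, y] truncated; the tail is negligible
dl = lam[1] - lam[0]

def trapz(f):
    return float(np.sum((f[1:] + f[:-1]) * dl / 2))

def skew4(f, fp, g, gp):
    """<f, g>_4^y = (1/2) int_{-inf}^y e^{-2 l^2} (f g' - g f') dl, as in (C.4)."""
    return 0.5 * trapz(np.exp(-2 * lam**2) * (f * gp - g * fp))

b = sqrt(2) * exp(-2 * y**2) / (sqrt(pi) * erfc(-sqrt(2) * y))          # (D.5)

Q0, dQ0 = np.ones_like(lam), np.zeros_like(lam)
Q1, dQ1 = lam, np.ones_like(lam)
Q2 = lam**2 + b * lam + (1 - 2 * y * b) / 4                             # (D.1)
dQ2 = 2 * lam + b
Q3 = lam**3 - 3 * (1 - 2 * y * b) / 4 * lam - b * (1 + 2 * y**2) / 2    # (D.2)
dQ3 = 3 * lam**2 - 3 * (1 - 2 * y * b) / 4

q0_numeric = skew4(Q0, dQ0, Q1, dQ1)
q0_closed = sqrt(pi) / (4 * sqrt(2)) * erfc(-sqrt(2) * y)               # (D.3)
```

The quadrature errors here are well below the tolerances used, so the vanishing of \(\langle Q_0, Q_2 \rangle _4^y\) and \(\langle Q_1, Q_3 \rangle _4^y\) is a genuine check of the coefficients, not an artifact of the grid.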

For \(\beta =1\), again by monicity and (29) we have, \(R_0(\lambda ) =1, R_1(\lambda ) = \lambda \) and using the relations (8) we can obtain the first four polynomials

$$\begin{aligned} R_0(\lambda , y)&=1, \qquad R_1(\lambda ) = \lambda , \end{aligned}$$
(D.6)
$$\begin{aligned} R_2 (\lambda , y)&= \lambda ^2 + \lambda \frac{c}{\sqrt{\pi }} \left( 2 e^{-y^2 /2} + \sqrt{2 \pi } y \, \mathrm {erfc}(-y/ \sqrt{2}) \right) + c e^{y^2/2} \mathrm {erfc}(-y) -1, \end{aligned}$$
(D.7)
$$\begin{aligned} R_3 (\lambda , y)&= \lambda ^3 +\lambda c \left( \frac{2 ye^{-y^2/2}}{\sqrt{\pi }} - \frac{2}{c} - e^{y^2/2} \, \mathrm {erfc}(-y) + y^2 \sqrt{2 }\, \mathrm {erfc}(-y/ \sqrt{2} ) \right) \nonumber \\&\quad - \frac{2 c e^{-y^2/2}}{\sqrt{\pi }} \end{aligned}$$
(D.8)

with

$$\begin{aligned} c= \left( 2e^{y^2/2} \mathrm {erfc}(-y) - \sqrt{2} \mathrm {erfc}(-y/ \sqrt{2}) \right) ^{-1} \end{aligned}$$
(D.9)

and

$$\begin{aligned} r_0(y)&=\frac{\sqrt{\pi }}{2} \left( \mathrm {erfc}(-y) - \frac{e^{-y^2/2}}{\sqrt{2}} \mathrm {erfc}(- y/ \sqrt{2}) \right) = \frac{\sqrt{\pi } e^{-y^2/2}}{4 c} \end{aligned}$$
(D.10)
$$\begin{aligned} r_1 (y)&= \frac{\sqrt{\pi }}{8} \text {erfc}(-y) -\frac{y e^{-y^2}}{4} - c \left( \frac{e^{-\frac{3y^2}{2}}}{\sqrt{\pi }} + \frac{y^2 \sqrt{\pi }}{2 \sqrt{2}} \mathrm {erfc}(-y) \mathrm {erfc}\left( -\frac{y}{\sqrt{2}}\right) \right. \nonumber \\&\quad \left. +\frac{y e^{-\frac{y^2}{2}}}{2} \mathrm {erfc}(-y) +\frac{y e^{-y^2} \mathrm {erfc}\left( -\frac{y}{\sqrt{2}}\right) }{\sqrt{2}}- \frac{\sqrt{\pi } e^{\frac{y^2}{2}} }{4} \mathrm {erfc}(-y)^2 \right) . \end{aligned}$$
(D.11)
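The closed forms for \(r_0\) can likewise be checked against the defining integrals. The sketch below (assuming numpy and the standard-library erfc) computes \(\gamma ^{(1)}_{1,2} = \frac{1}{2}(\mu _{1,2} - \mu _{2,1})\) from (C.9)–(C.10) with \(a \equiv 1\), and compares it with both expressions in (D.10):

```python
import numpy as np
from math import erfc, exp, pi, sqrt

y = 0.4
xs = np.linspace(-8.0, y, 20001)
dx = xs[1] - xs[0]
w = np.exp(-xs**2 / 2)

def cumtrapz(f):
    """Cumulative trapezoidal integral from the left end of the grid."""
    return np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) * dx / 2)])

# mu_{1,2} and mu_{2,1} from (C.9) with a = 1, R_0(x) = 1, R_1(x) = x
mu12 = cumtrapz(w * xs * cumtrapz(w))[-1]
mu21 = cumtrapz(w * cumtrapz(w * xs))[-1]
r0_numeric = (mu12 - mu21) / 2                 # gamma^{(1)}_{1,2} of (C.10)

c = 1 / (2 * exp(y**2 / 2) * erfc(-y) - sqrt(2) * erfc(-y / sqrt(2)))   # (D.9)
r0_first = sqrt(pi) / 2 * (erfc(-y) - exp(-y**2 / 2) / sqrt(2) * erfc(-y / sqrt(2)))
r0_second = sqrt(pi) * exp(-y**2 / 2) / (4 * c)
```

The equality of the two closed forms is exact (it is the algebraic content of (D.9)), while the match with the iterated integrals is limited only by quadrature accuracy.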

Skew-Orthogonal Polynomials for \(\beta =4\)

For ease of reading, we use the same notation as in [33, §6.2 & §6.4], where the case \(y= \infty \) is discussed in detail. Note also that all the quantities in this section depend on y; we will suppress this dependence to lighten the notation.

The goal is to write the \(\beta =4\) skew-orthogonal polynomials \(\{ Q_j \}\), defined by (7) and (10), in terms of the polynomials orthogonal with respect to the inner product (6), the NM polynomials \(P_j\) which obey the relations (24)–(26). However, as discussed in Sect. 4.1 we will instead use the modified skew-inner product (56), and look for polynomials \(\{{\tilde{Q}}_j \}\) that obey the relations (57) and (58), up to the invariance (29). Since the orthogonal polynomials form a complete set we can find coefficients \({\tilde{\alpha }}_{j,k}\) such that

$$\begin{aligned} {\tilde{Q}}_j = {\tilde{\alpha }}_{j,j} P_j + {\tilde{\alpha }}_{j,j-1}P_{j-1} + \dots + {\tilde{\alpha }}_{j,1}P_1 + {\tilde{\alpha }}_{j,0} P_0, \qquad {\tilde{\alpha }}_{j,k}\in {\mathbb {C}}. \end{aligned}$$
(E.1)

Recall that the tilde \(\tilde{~}\) means that the quantity is associated with this modified skew-inner product. From monicity and (29) we have

$$\begin{aligned} {\tilde{\alpha }}_{j,j}=1, \qquad {\tilde{\alpha }}_{2j+1, 2j}=0. \end{aligned}$$
(E.2)

We can write (E.1) in the matrix form

$$\begin{aligned} {\tilde{\mathbf {Q}}}= & {} {\tilde{\mathbf {X}}}{\mathbf {P}}\end{aligned}$$
(E.3)

where

$$\begin{aligned} {\tilde{\mathbf {Q}}}= & {} \left[ \begin{array}{c} {\tilde{Q}}_0 \\ {\tilde{Q}}_1\\ \vdots \end{array} \right] , \qquad {\mathbf {P}}= \left[ \begin{array}{c} P_0 \\ P_1\\ \vdots \end{array} \right] \end{aligned}$$
(E.4)
$$\begin{aligned} {\tilde{\mathbf {X}}}= & {} \left[ \begin{array}{cccccc} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} \cdots \\ 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} \cdots \\ {\tilde{\alpha }}_{2, 0} &{} {\tilde{\alpha }}_{2, 1} &{} 1 &{} 0 &{} 0 &{} \cdots \\ {\tilde{\alpha }}_{3, 0} &{} {\tilde{\alpha }}_{3, 1} &{} 0 &{} 1 &{} 0 &{} \cdots \\ {\tilde{\alpha }}_{4, 0} &{} {\tilde{\alpha }}_{4, 1} &{} {\tilde{\alpha }}_{4, 2} &{} {\tilde{\alpha }}_{4, 3} &{} 1 &{} \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} &{} \ddots \end{array} \right] . \end{aligned}$$
(E.5)

For the calculation, we will find it more convenient to work with the equation

$$\begin{aligned} {\mathbf {P}}= {\tilde{\mathbf {X}}}^{-1} {\tilde{\mathbf {Q}}}. \end{aligned}$$
(E.6)

Since the skew-orthogonal polynomials will also form a complete set, we know that \({\tilde{\mathbf {X}}}\) is invertible and we denote

$$\begin{aligned} {\tilde{\mathbf {X}}}^{-1}:= \Big [ {\tilde{\beta }}_{s,t} \Big ]_{s,t=0, 1, \dots }= \left[ \begin{array}{cccccc} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} \cdots \\ 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} \cdots \\ {\tilde{\beta }}_{2, 0} &{} {\tilde{\beta }}_{2, 1} &{} 1 &{} 0 &{} 0 &{} \cdots \\ {\tilde{\beta }}_{3, 0} &{} {\tilde{\beta }}_{3, 1} &{} 0 &{} 1 &{} 0 &{} \cdots \\ {\tilde{\beta }}_{4, 0} &{} {\tilde{\beta }}_{4, 1} &{} {\tilde{\beta }}_{4, 2} &{} {\tilde{\beta }}_{4, 3} &{} 1 &{} \\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} &{} \ddots \end{array} \right] \end{aligned}$$
(E.7)

where we have used the assumptions analogous to (E.2)

$$\begin{aligned} {\tilde{\beta }}_{j,j}=1, \qquad {\tilde{\beta }}_{2j+1, 2j}=0 \end{aligned}$$
(E.8)

(recalling that the first equality is the definition of monicity). So instead of looking for the coefficients in (E.1) we will solve for the coefficients \({\tilde{\beta }}_{j,k}\) in

$$\begin{aligned} P_j = {\tilde{Q}}_j + {\tilde{\beta }}_{j,j-1} {\tilde{Q}}_{j-1} + \dots + {\tilde{\beta }}_{j,1} {\tilde{Q}}_1 + {\tilde{\beta }}_{j,0} {\tilde{Q}}_0, \qquad {\tilde{\beta }}_{j,k}\in {\mathbb {C}}, \end{aligned}$$
(E.9)

and then hope to invert the relations to recover the \({\tilde{\alpha }}_{j,k}\). We also define the matrix of modified skew-inner products

$$\begin{aligned} {\tilde{\mathbf {q}}}:= \Big [ \langle {\tilde{Q}}_j , {\tilde{Q}}_k \rangle \Big ]_{j,k= 0, 1, \dots }, \end{aligned}$$
(E.10)

which, by (57) and (58), is skew-diagonal of the form (52), with entries given by the normalizations \({\tilde{q}}_j\).

Using (69) we can write the modified \(\beta =4\) skew-inner product in terms of the \(\beta =2\) inner product, with the inclusion of the operator A defined in (54). To make use of this we first note that if \(f_k\) is any monic polynomial of degree k then we have

$$\begin{aligned} A f_k (x)&= -\left( xf_k (x) - f_k' (x) \right) = -\left( P_{k+1} (x) + \sum _{j=0}^{k} c_j P_{j} (x) \right) , \end{aligned}$$
(E.11)

where we have decomposed \(xf_k (x) - f_k' (x)\) into a sum over the (monic) orthogonal polynomials \(P_j\), with coefficients \(c_j\). Combining this fact with (66), (67) and the normalization of the \(P_j\) from (12) we have the matrix

$$\begin{aligned} {\mathbf {A}}&:= \left[ (P_j, A P_k )_{2}^y \right] _{j, k =0, ..., N -1}\nonumber \\&= \left[ \begin{array}{ccccccc} \frac{\varOmega _{0, 0}}{2} &{} p_1+ \varOmega _{0, 1} &{} \varOmega _{0, 2} &{} \varOmega _{0, 3} &{} \varOmega _{0, 4} &{} \varOmega _{0, 5} &{}\cdots \\ -p_1 &{} \frac{\varOmega _{1, 1}}{2} &{} p_2+ \varOmega _{1, 2} &{} \varOmega _{1, 3} &{} \varOmega _{1, 4} &{} \varOmega _{1, 5} &{}\cdots \\ 0 &{} -p_2 &{} \frac{\varOmega _{2, 2}}{2} &{} p_3+ \varOmega _{2, 3} &{} \varOmega _{2, 4} &{} \varOmega _{2, 5} &{}\cdots \\ 0 &{} 0 &{} -p_3 &{} \frac{\varOmega _{3, 3}}{2} &{} p_4+ \varOmega _{3, 4} &{} \varOmega _{3, 5} &{}\cdots \\ 0 &{} 0 &{} 0 &{} -p_4 &{} \frac{\varOmega _{4, 4}}{2} &{} p_5+ \varOmega _{4, 5} &{}\\ \vdots &{} \vdots &{} \vdots &{} \vdots &{} \ddots &{} \ddots &{} \ddots \end{array}\right] . \end{aligned}$$
(E.12)

So now we can write

$$\begin{aligned} {\tilde{\mathbf {q}}}= {\tilde{\mathbf {X}}}{\mathbf {W}}{\tilde{\mathbf {X}}}^{T}, \end{aligned}$$
(E.13)

where \({\mathbf {W}}= {\mathbf {A}}- \frac{1}{2} \left[ \varOmega _{j,k} \right] \) is the anti-symmetric matrix in (22). Rearranging (E.13)

$$\begin{aligned} {\tilde{\mathbf {X}}}^{-1} {\tilde{\mathbf {q}}}({\tilde{\mathbf {X}}}^{-1} )^T = {\mathbf {W}}, \end{aligned}$$
(E.14)

and expanding out the left hand side we get

$$\begin{aligned} \Big [ {\tilde{\mathbf {X}}}^{-1} {\tilde{\mathbf {q}}}({\tilde{\mathbf {X}}}^{-1})^T \Big ]_{j,k}&= \sum _{\begin{array}{c} m = 0, 1, \dots , j \\ n = 0, 1, \dots , k \end{array}} {\tilde{\beta }}_{j,m} {\tilde{{{\mathbf {q}}}}}_{m,n} {\tilde{\beta }}_{k,n}\nonumber \\&=\sum _{m\, \mathrm {even}} {\tilde{\beta }}_{j, m} {\tilde{{{\mathbf {q}}}}}_{m,m+1} {\tilde{\beta }}_{k,m+1}+ \sum _{m\, \mathrm {odd}} {\tilde{\beta }}_{j, m} {\tilde{{{\mathbf {q}}}}}_{m,m-1} {\tilde{\beta }}_{k,m-1}\nonumber \\&=\sum _{m\, \mathrm {even}} {\tilde{\beta }}_{j, m} {\tilde{q}}_{m/2} {\tilde{\beta }}_{k,m+1} -\sum _{m\, \mathrm {odd}} {\tilde{\beta }}_{j, m} {\tilde{q}}_{(m-1)/2} {\tilde{\beta }}_{k,m-1}, \end{aligned}$$
(E.15)

noting that this is a finite sum since all \({\tilde{\beta }}_{\mu , \nu }\) are zero when \(\nu > \mu \). So we have the set of equations

$$\begin{aligned} 0&= \sum _{m \, \mathrm {even}} {\tilde{q}}_{m/2} \left( {\tilde{\beta }}_{j,m} {\tilde{\beta }}_{k, m+1}- {\tilde{\beta }}_{j,m+1} {\tilde{\beta }}_{k, m} \right) - w_{j,k} \end{aligned}$$
(E.16)

(where we denote the elements of \({\mathbf {W}}\) by \(w_{j,k}\)) and we are now in a position to solve for the normalizations \({\tilde{q}}_j\) and the coefficients \({\tilde{\beta }}_{j,k}\).

1.1 Expressions for \({\tilde{q}}_j\)

Let the matrices in (E.14) be of size \(2n \times 2n\). Then, taking the Pfaffian we get

$$\begin{aligned} \mathrm {Pf}\,{\mathbf {W}}= \mathrm {Pf}\,({\tilde{\mathbf {X}}}^{-1} {\tilde{\mathbf {q}}}({\tilde{\mathbf {X}}}^T)^{-1})= \det ({\tilde{\mathbf {X}}}^{-1}) \mathrm {Pf}\,{\tilde{\mathbf {q}}}= \mathrm {Pf}\,{\tilde{\mathbf {q}}}\;, \end{aligned}$$
(E.17)

where we used the Pfaffian identity (B.3) for the second equality, and the fact that \({\tilde{\mathbf {X}}}^{-1}\) is a triangular matrix with 1s on the diagonal for the third equality.

Because \({\tilde{\mathbf {q}}}\) is a skew-diagonal matrix, as in (52), we have

$$\begin{aligned} \mathrm {Pf}\,{\tilde{\mathbf {q}}}= \prod _{j=0}^{n-1} {\tilde{q}}_j = \mathrm {Pf}\,{\mathbf {W}}_{2n-1}. \end{aligned}$$
(E.18)

Beginning with \(n=1\) and iterating, we obtain (73), with the convention (74).

1.2 Expressions for \({\tilde{\beta }}_{j,k}\)

Let k be even, then the last term in the sum of (E.16) is \(-{\tilde{\beta }}_{j,k+1} {\tilde{q}}_{k/2}\) (when \(m= k\)), and so solving for this \({\tilde{\beta }}\) we obtain

$$\begin{aligned} {\tilde{\beta }}_{j,k+1} = \frac{1}{{\tilde{q}}_{k/2}} \left[ \sum _{\begin{array}{c} m=0, \\ m\, \mathrm {even} \end{array}}^{k-2} {\tilde{q}}_{m/2} \left( {\tilde{\beta }}_{j,m} {\tilde{\beta }}_{k, m+1} - {\tilde{\beta }}_{j,m+1} {\tilde{\beta }}_{k, m}\right) -w_{j,k} \right] , \qquad k \text{ even }. \end{aligned}$$
(E.19)

For k odd the last term (when \(m=k-1\)) is \({\tilde{q}}_{(k-1)/2}( {\tilde{\beta }}_{j, k-1} - {\tilde{\beta }}_{j,k} {\tilde{\beta }}_{k, k-1})\), but recall from (E.8) that (when k is odd) we have set \({\tilde{\beta }}_{k, k-1} =0\) [using (29)], so we obtain

$$\begin{aligned} {\tilde{\beta }}_{j, k-1}= & {} -\frac{1}{{\tilde{q}}_{(k-1)/2}} \left[ \sum _{\begin{array}{c} m=0, \\ m\, \mathrm {even} \end{array}}^{k-3} {\tilde{q}}_{m/2} \left( {\tilde{\beta }}_{j,m} {\tilde{\beta }}_{k, m+1} - {\tilde{\beta }}_{j,m+1} {\tilde{\beta }}_{k, m}\right) -w_{j,k} \right] , \qquad k \text{ odd }.\nonumber \\ \end{aligned}$$
(E.20)

From these two expressions we see that each \({\tilde{\beta }}_{j,2k}\) and \({\tilde{\beta }}_{j, 2k+1}\) depends only on the \({\tilde{\beta }}\)'s in the same row, in columns \(0,1,\dots , 2k-1\). This allows us to solve for the \({\tilde{\beta }}\)'s inductively: first we solve for \({\tilde{\beta }}_{j,0}, {\tilde{\beta }}_{j,1}\), then \({\tilde{\beta }}_{j,2}, {\tilde{\beta }}_{j,3}\), etc.

It is this decoupling of the \({\tilde{\beta }}\) equations that is the reason for working with \({\tilde{\mathbf {X}}}^{-1}\) instead of \({\tilde{\mathbf {X}}}\).

Proposition E.1

 

$$\begin{aligned} {\tilde{\beta }}_{j,k}&= \left\{ \begin{array}{cl} \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k+1}^{(k\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k+1}}},&{}\quad k \text{ even },\\ \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k}^{(k\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k}}},&{}\quad k \text{ odd }, \end{array} \right. \end{aligned}$$
(E.21)

where \({\mathbf {W}}_{\mu }^{(\eta \mapsto \nu )}\) is the matrix \({\mathbf {W}}_{\mu }\) from (22) with all occurrences of the index \(\eta \) replaced by the index \(\nu \).

Proof

As mentioned above, we will employ an inductive proof. We need both even and odd base cases. Expanding out (E.16) with \(k=0\) we have

$$\begin{aligned}&0= -{\tilde{\beta }}_{j,1} {\tilde{q}}_0 {\tilde{\beta }}_{0,0} - w_{j,0} \end{aligned}$$
(E.22)
$$\begin{aligned}&\Rightarrow {\tilde{\beta }}_{j,1}= -\frac{w_{j,0}}{{\tilde{q}}_0}= \frac{w_{0,j}}{w_{0,1}} = \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{1}^{(1\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{1}}}. \end{aligned}$$
(E.23)

Similarly, [recalling that \({\tilde{\beta }}_{1,0}=0\) from (E.8)] with \(k=1\), we get

$$\begin{aligned}&0= {\tilde{\beta }}_{j,0}{\tilde{q}}_0 {\tilde{\beta }}_{1,1}- {\tilde{\beta }}_{j,1} {\tilde{q}}_0 {\tilde{\beta }}_{1,0} - w_{j,1} \end{aligned}$$
(E.24)
$$\begin{aligned}&\Rightarrow {\tilde{\beta }}_{j,0}= \frac{w_{j,1}}{{\tilde{q}}_0} = \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{1}^{(0\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{1}}}. \end{aligned}$$
(E.25)

Now we move to the inductive step. For convenience, here we restrict to k even. Assume that we have (E.21) for all \({\tilde{\beta }}_{j,0}, {\tilde{\beta }}_{j,1}, \dots , {\tilde{\beta }}_{j,k-1}\) and we substitute (E.21) and (73) into (E.19) to get

$$\begin{aligned} {\tilde{\beta }}_{j,k+1} = \frac{\mathrm {Pf}\,{\mathbf {W}}_{k-1}}{\mathrm {Pf}\,{\mathbf {W}}_{k+1}} \left[ \sum _{\begin{array}{c} m=0, \\ m\, \mathrm {even} \end{array}}^{k-2} \left( \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m\mapsto j)}}}{\mathrm {Pf}\,{\mathbf {W}}_{m-1}} \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m+1 \mapsto k)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}}} - \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m+1 \mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}}} \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m\mapsto k)}}}{\mathrm {Pf}\,{\mathbf {W}}_{m-1}} \right) -w_{j,k} \right] . \end{aligned}$$
(E.26)

Using [43, (1.1)] we obtain

$$\begin{aligned}&\mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m\mapsto j)} \mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m+1\mapsto k)}- \mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m+1\mapsto j)} \mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m\mapsto k)}\nonumber \\&= \mathrm {Pf}\,{\mathbf {W}}_{m-1} \mathrm {Pf}\,{\mathbf {W}}_{m+3}^{(m+2\mapsto k, m+3\mapsto j)}- \mathrm {Pf}\,{\mathbf {W}}_{m+1}\mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m\mapsto k,m+1\mapsto j)}. \end{aligned}$$
(E.27)

We note that these "overlapping Pfaffians" appeared earlier than [43] (e.g. [65], and even as far back as [37]); however, we use Knuth's formulation simply because of our familiarity with it. The notation in [43] is quite different from that used here, so we briefly outline how (E.27) follows from [43, (1.1)], which we quote here, rearranged for convenience

$$\begin{aligned} - f[\alpha x z] f[\alpha w y] + f[\alpha w z] f[\alpha x y]&= f[\alpha ] f[\alpha w x y z]- f[\alpha w x] f[\alpha y z] \end{aligned}$$
(E.28)

where \(w,x,y,z \in {\mathbb {Z}}\) are matrix indices and \(\alpha \in {\mathbb {Z}}^p\) is an ordered set of indices. For index sets \(\alpha _1\in {\mathbb {Z}}^p, \alpha _2 \in {\mathbb {Z}}^q\) the product \(\alpha _1 \alpha _2 \in {\mathbb {Z}}^{p+q}\) is the concatenation of the index sets. The function \(f[\alpha ]\) is then the Pfaffian of the matrix \(\big [ f[jk] \big ]\) with index set \(\alpha \), i.e.

$$\begin{aligned} f[\alpha ] = \mathrm {Pf}\,\Big [ f[jk] \Big ]_{j,k \in \alpha } \end{aligned}$$
(E.29)

defined recursively, where for a pair of indices f[jk] is the matrix element, and

$$\begin{aligned} f[jk]=-f[kj] \end{aligned}$$
(E.30)

since Pfaffian matrices are anti-symmetric. So then to match (E.28) with (E.27) we take

$$\begin{aligned} \alpha =\{ 0, 1, \dots , m-1\}, \quad w=\{m\}, \quad x= \{m+1\}, \quad y=\{k\}, \quad z= \{j\}, \end{aligned}$$
(E.31)

and apply (E.30) to rearrange the indices as needed.
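The quoted identity can be verified numerically for a generic anti-symmetric matrix. In the sketch below (assuming numpy; the helper `f` is ours), `f` evaluates the Pfaffian over an ordered index list by row expansion, so list concatenation plays the role of the index-set product:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
F = A - A.T                            # f[jk] = F[j, k], anti-symmetric

def f(idx):
    """Pfaffian over the ordered index list idx; f([]) = 1."""
    if not idx:
        return 1.0
    a, rest = idx[0], idx[1:]
    return sum((-1) ** p * F[a, b] * f(rest[:p] + rest[p + 1:])
               for p, b in enumerate(rest))

alpha = [0, 1]                         # the common index set of (E.28)
w, x, yv, z = 2, 3, 4, 5               # the four extra indices
lhs = -f(alpha + [x, z]) * f(alpha + [w, yv]) + f(alpha + [w, z]) * f(alpha + [x, yv])
rhs = f(alpha) * f(alpha + [w, x, yv, z]) - f(alpha + [w, x]) * f(alpha + [yv, z])
```

With \(\alpha = \varnothing \) the identity reduces to the familiar three-term expansion of a \(4\times 4\) Pfaffian, which is a useful sanity check on the index conventions.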

Substituting (E.27) into (E.26) we obtain

$$\begin{aligned} {\tilde{\beta }}_{j,k+1}&= \frac{\mathrm {Pf}\,{\mathbf {W}}_{k-1}}{\mathrm {Pf}\,{\mathbf {W}}_{k+1}} \left[ \sum _{\begin{array}{c} m=0, \\ m\, \mathrm {even} \end{array}}^{k-2} \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+3}^{(m+2 \mapsto k, m+3\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}}} - \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m\mapsto k, m+1\mapsto j)}}}{\mathrm {Pf}\,{\mathbf {W}}_{m-1}} -w_{j,k} \right] , \end{aligned}$$
(E.32)

which is a telescoping sum, leaving

$$\begin{aligned} {\tilde{\beta }}_{j,k+1}&= \frac{\mathrm {Pf}\,{\mathbf {W}}_{k-1}}{\mathrm {Pf}\,{\mathbf {W}}_{k+1}} \left[ \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k+ 1}^{(k+1 \mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k-1}}} - \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{1}^{(0\mapsto k, 1\mapsto j)}}}{\mathrm {Pf}\,{\mathbf {W}}_{-1}} -w_{j,k} \right] \end{aligned}$$
(E.33)
$$\begin{aligned}&= \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k+ 1}^{(k+1 \mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k+1}}} \end{aligned}$$
(E.34)

since \({\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{1}^{(0\mapsto k, 1\mapsto j)}} = w_{k,j}= -w_{j,k}\), and we also used the convention (74).

For the k odd case, one proceeds from (E.20) in a similar fashion. \(\square \)

Note from (E.21) that

$$\begin{aligned} {\tilde{\beta }}_{2n+1, 2n} = \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{2n+1}^{(2n\mapsto 2n+1)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{2n+1}}}, \end{aligned}$$
(E.35)

and since \(w_{n,n}=0\) for all n, the matrix in the numerator is still anti-symmetric, with its two right-most columns identical and its two bottom-most rows identical, which implies

$$\begin{aligned} {\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{2n+1}^{(2n\mapsto 2n+1)}}=0 \qquad \Rightarrow \qquad {\tilde{\beta }}_{2n+1, 2n} =0. \end{aligned}$$
(E.36)

Also, we clearly have

$$\begin{aligned} {\tilde{\beta }}_{j,j}&= \left\{ \begin{array}{cl} \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{j+1}^{(j\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{j+1}}},&{}\quad j \text{ even },\\ \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{j}^{(j\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{j}}},&{}\quad j \text{ odd }, \end{array}\right\} \quad = 1 \end{aligned}$$
(E.37)

so, even though we assumed \({\tilde{\beta }}_{j,j}=1\) in (E.8), the result (E.21) naturally extends to cover this case.
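The construction of this subsection can be tested numerically on a generic anti-symmetric matrix: building the \({\tilde{\beta }}_{j,k}\) from the closed form (E.21), and the \({\tilde{q}}_m\) from the Pfaffian ratios implied by (E.18), should reproduce \({\mathbf W}\) through (E.14). A sketch assuming numpy, with the Pfaffians of the index-substituted submatrices computed by row expansion (`pf` is a helper of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                   # work with 2n x 2n matrices
A = rng.standard_normal((2 * n, 2 * n))
W = A - A.T                             # generic anti-symmetric matrix, entries w_{j,k}

def pf(idx):
    """Pfaffian of W restricted to the ordered index list idx (empty list gives 1),
    by expansion along the first row; Pf W_{-1} := 1 is the convention (74)."""
    if not idx:
        return 1.0
    a, rest = idx[0], idx[1:]
    return sum((-1) ** p * W[a, b] * pf(rest[:p] + rest[p + 1:])
               for p, b in enumerate(rest))

# normalizations: q_m = Pf W_{2m+1} / Pf W_{2m-1}, iterating (E.18)
q = [pf(list(range(2 * m + 2))) / pf(list(range(2 * m))) for m in range(n)]

# coefficients beta_{j,k} from the closed form (E.21);
# W_mu^{(k -> j)} keeps the index order, with k replaced by j
B = np.eye(2 * n)
for j in range(2 * n):
    for k in range(j):
        if k % 2 == 0:
            num = list(range(k)) + [j, k + 1]    # W_{k+1} with k -> j
            den = list(range(k + 2))             # W_{k+1}
        else:
            num = list(range(k)) + [j]           # W_k with k -> j
            den = list(range(k + 1))             # W_k
        B[j, k] = pf(num) / pf(den)

Qm = np.zeros((2 * n, 2 * n))            # skew-diagonal matrix of the q_m, as in (52)
for m in range(n):
    Qm[2 * m, 2 * m + 1], Qm[2 * m + 1, 2 * m] = q[m], -q[m]

recovered = B @ Qm @ B.T                 # should reproduce W, by (E.14)
```

Note that the vanishing of \({\tilde{\beta }}_{2n+1, 2n}\) comes out automatically here: the numerator Pfaffian then has a repeated index, exactly as in (E.36).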

1.3 Expressions for \({\tilde{\alpha }}_{j,k}\) in Proposition 4

From the matrix product

$$\begin{aligned} {\tilde{\mathbf {X}}}^{-1} {\tilde{\mathbf {X}}}={\mathbf {I}}\end{aligned}$$
(E.38)

we have

$$\begin{aligned} {\tilde{\alpha }}_{j,k}= - \sum _{m=k}^{j-1} {\tilde{\beta }}_{j,m} {\tilde{\alpha }}_{m,k} \end{aligned}$$
(E.39)

for \(j>k\). Using this and the expressions for the \({\tilde{\beta }}_{j,k}\) in (E.21) we can find expressions for the \({\tilde{\alpha }}_{j,k}\).
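Relation (E.39) is simply forward substitution for the inverse of a lower uni-triangular matrix; a minimal numerical sketch (assuming numpy, with a random uni-triangular matrix standing in for \({\tilde{\mathbf {X}}}^{-1}\)):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
Binv = np.tril(rng.standard_normal((n, n)), -1) + np.eye(n)   # stands in for X^{-1}

# alpha_{j,k} = -sum_{m=k}^{j-1} beta_{j,m} alpha_{m,k}, with alpha_{j,j} = 1, cf. (E.39)
X = np.eye(n)
for k in range(n):
    for j in range(k + 1, n):
        X[j, k] = -sum(Binv[j, m] * X[m, k] for m in range(k, j))
```

Each column of X is filled top to bottom, mirroring the inductive solution described below; the result agrees with a direct matrix inversion.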

Proof of Proposition 4:

From (E.9) we have

$$\begin{aligned} {\tilde{Q}}_j&= P_j - \sum _{k=0}^{j-1} {\tilde{\beta }}_{j, k} {\tilde{Q}}_k\nonumber \\&=P_j - {\tilde{\beta }}_{j,j-1} {\tilde{Q}}_{j-1} - \sum _{k=0}^{j-2} {\tilde{\beta }}_{j, k} {\tilde{Q}}_k, \end{aligned}$$
(E.40)

so, with (70), this implies

$$\begin{aligned} {\tilde{\beta }}_{j,j-1} {\tilde{Q}}_{j-1} = {\tilde{\beta }}_{j,j-1} \left( P_{j-1} + {\tilde{\alpha }}_{j-1, j-2} P_{j-2}+ \dots \right) , \end{aligned}$$
(E.41)

and thus

$$\begin{aligned} {\tilde{Q}}_j&= P_j -{\tilde{\beta }}_{j,j-1} P_{j-1} - \text{ lower } \text{ degree } \text{ polynomials }. \end{aligned}$$
(E.42)

So we have that

$$\begin{aligned} {\tilde{\alpha }}_{j,j-1}= -{\tilde{\beta }}_{j, j-1}, \end{aligned}$$
(E.43)

which is equal to zero when j is odd by (E.8), and we have consistency with (71).

For (72) we will use an inductive proof similar to that used in Proposition E.1. We see from (E.39) that each \({\tilde{\alpha }}_{j,k}\) only depends on the \({\tilde{\beta }}\)’s (which are known) and the \({\tilde{\alpha }}\)’s above it in the same column of the matrix \({\tilde{\mathbf {X}}}\) [in (E.5)]. From (E.43) we have

$$\begin{aligned} {\tilde{\alpha }}_{j,j-1} = - {\tilde{\beta }}_{j, j-1}= \left\{ \begin{array}{cl} - \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{j-1}^{(j-1\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{j-1}}}, &{}\quad j \text{ even },\\ 0, &{}\quad j \text{ odd }, \end{array}\right. \end{aligned}$$
(E.44)

and from (E.39)

$$\begin{aligned} {\tilde{\alpha }}_{j,j-2} = -{\tilde{\beta }}_{j,j-2}{\tilde{\alpha }}_{j-2, j-2} - {\tilde{\beta }}_{j, j-1} {\tilde{\alpha }}_{j-1, j-2}= -{\tilde{\beta }}_{j,j-2} = \left\{ \begin{array}{ll} - \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{j-1}^{(j-1\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{j-1}}}, &{}\quad j \text{ even },\\ - \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{j-2}^{(j-2\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{j-2}}}, &{}\quad j \text{ odd }, \end{array}\right. \end{aligned}$$
(E.45)

since one of \({\tilde{\beta }}_{j, j-1}\) or \({\tilde{\alpha }}_{j-1, j-2}\) must be zero by (E.8) or (E.44). The equations (E.44) and (E.45) give us expressions for all \({\tilde{\alpha }}\)’s on the first and second lower diagonals of \({\tilde{\mathbf {X}}}\), so for any column k there is a row j for which all the \({\tilde{\alpha }}_{j- m, k}\) above it are known; these give our base cases.

Now for the inductive step, we expand (E.39) to obtain

$$\begin{aligned} {\tilde{\alpha }}_{j,k}&= \left\{ \begin{array}{ll} -{\tilde{\beta }}_{j,k} -{\displaystyle \sum _{m=k+2}^{j-1} {\tilde{\beta }}_{j,m} {\tilde{\alpha }}_{m,k}}, &{} \qquad j \text{ even, } k\le j-1, k \text{ even },\\ -{\tilde{\beta }}_{j,k} -{\displaystyle \sum _{m=k+1}^{j-1} {\tilde{\beta }}_{j,m} {\tilde{\alpha }}_{m,k}}, &{} \qquad j \text{ even, } k\le j-1, k \text{ odd },\\ -{\tilde{\beta }}_{j,k} -{\displaystyle \sum _{m=k+2}^{j-2} {\tilde{\beta }}_{j,m} {\tilde{\alpha }}_{m,k}}, &{} \qquad j \text{ odd, } k\le j-1, k \text{ even },\\ -{\tilde{\beta }}_{j,k} -{\displaystyle \sum _{m=k+1}^{j-2} {\tilde{\beta }}_{j,m} {\tilde{\alpha }}_{m,k}}, &{} \qquad j \text{ odd, } k\le j-1, k \text{ odd }.\\ \end{array}\right. \end{aligned}$$
(E.46)

We assume that \({\tilde{\alpha }}_{m,k}\) is given by (72) for all \(m\le j-1\) (j even) or \(m\le j-2\) (j odd), while all \({\tilde{\beta }}\)’s are given by (E.21). Taking j, k both even (the other cases follow similarly), we substitute these known \({\tilde{\alpha }}\)’s and \({\tilde{\beta }}\)’s into the first row of (E.46) to give

$$\begin{aligned} {\tilde{\alpha }}_{j,k}&= - \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k+1}^{(k\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k+1}}} + \sum _{\begin{array}{c} m=k+2 \\ m\, \mathrm {even} \end{array}}^{j-2} \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}}} \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m-1}^{(k\mapsto m)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m-1}}}+ \sum _{\begin{array}{c} m=k+3 \\ m\, \mathrm {odd} \end{array}}^{j-1} \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m}^{(m\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m}}} \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m-2}^{(k\mapsto m)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m-2}}}\nonumber \\&=- \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k+1}^{(k\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k+1}}} + \sum _{\begin{array}{c} m=k+2 \\ m\, \mathrm {even} \end{array}}^{j-2} \left[ \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}}} \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m-1}^{(k\mapsto m)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m-1}}}+ \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m+1 \mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}}} \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m-1}^{(k\mapsto m+1)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m-1}}} \right] , \end{aligned}$$
(E.47)

where in the second equality the odd sum has been reindexed (\(m \mapsto m+1\)) and combined with the even sum, keeping in mind that we have the convention \(\mathrm {Pf}\,{\mathbf {W}}_{-1} =1\).

We now use [43, (5.1)] (again quoted here and rearranged for convenience)

$$\begin{aligned} f[\alpha xuw] f[\alpha vyz] -f[\alpha xuv] f[\alpha wyz]&= -f[\alpha u v w] f[\alpha x y z] + f[\alpha uyz] f[\alpha xvw]\nonumber \\&\quad +f[\alpha z] f[ \alpha u v w x y] - f[\alpha y] f[\alpha uvwxz] \end{aligned}$$
(E.48)

with

$$\begin{aligned} \alpha = \{0, 1, \dots , k-1, k+1, \dots , m-1 \}, \quad&x= \{ j\}, \quad u= \{ k\}, \quad v= \{ m\}, \quad w= \{ m+1\} \end{aligned}$$
(E.49)
$$\begin{aligned}&y=z=\emptyset \qquad \text{(the } \text{ empty } \text{ set) }. \end{aligned}$$
(E.50)

Rearranging indices according to (E.30), the equality (E.48) gives

$$\begin{aligned}&\mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m\mapsto j)} \mathrm {Pf}\,{\mathbf {W}}_{m-1}^{(k\mapsto m)}+ \mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(m+1\mapsto j)} \mathrm {Pf}\,{\mathbf {W}}_{m-1}^{(k\mapsto m+1)} \end{aligned}$$
(E.51)
$$\begin{aligned}&\quad = \mathrm {Pf}\,{\mathbf {W}}_{m+1} \mathrm {Pf}\,{\mathbf {W}}_{m-1}^{(k\mapsto j)}- \mathrm {Pf}\,{\mathbf {W}}_{m-1}\mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(k\mapsto j)}, \end{aligned}$$
(E.52)

and substituting into (E.47) we get

$$\begin{aligned} {\tilde{\alpha }}_{j,k}&=- \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k+1}^{(k\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{k+1}}} + \sum _{\begin{array}{c} m=k+2 \\ m\, \mathrm {even} \end{array}}^{j-2} \left[ \frac{\mathrm {Pf}\,{\mathbf {W}}_{m-1}^{(k\mapsto j)}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m-1}}} - \frac{\mathrm {Pf}\,{\mathbf {W}}_{m+1}^{(k\mapsto j)}}{{\displaystyle \mathrm {Pf}\,{\mathbf {W}}_{m+1}}} \right] . \end{aligned}$$
(E.53)

This is a telescoping sum, which reduces to (72). The other cases in (E.46) are calculated similarly. \(\square \)
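
Since the identity (E.51)-(E.52) holds for an arbitrary antisymmetric matrix, it can be spot-checked numerically by brute force. Below is a minimal sketch (the helper names `pfaffian` and `pf_repl` are ours, not from the paper): the Pfaffian is computed by the recursive first-row expansion, i.e. the signed sum over perfect matchings, and the index replacement is performed in place, matching the convention for \({\mathbf {W}}_{\mu }^{(\eta \mapsto \nu )}\).

```python
import numpy as np

def pfaffian(A):
    # Recursive expansion along the first row: the signed sum over
    # perfect matchings (link diagrams); only practical for small matrices.
    n = len(A)
    if n == 0:
        return 1.0  # convention: Pf of the empty matrix is 1
    total = 0.0
    for pos in range(1, n):
        rest = [i for i in range(1, n) if i != pos]
        total += (-1) ** (pos - 1) * A[0][pos] * pfaffian(A[np.ix_(rest, rest)])
    return total

def pf_repl(W, mu, eta=None, nu=None):
    # Pf of the submatrix of W on indices 0..mu, with eta replaced
    # in place by nu, i.e. the matrix W_mu^{(eta -> nu)}.
    idx = list(range(mu + 1))
    if eta is not None:
        idx[eta] = nu
    return pfaffian(W[np.ix_(idx, idx)])

rng = np.random.default_rng(0)
M = rng.standard_normal((10, 10))
W = M - M.T                                # generic antisymmetric matrix
for k, m, j in [(0, 2, 5), (2, 4, 9)]:     # k, m even, as in the sum in (E.47)
    lhs = (pf_repl(W, m + 1, m, j) * pf_repl(W, m - 1, k, m)
           + pf_repl(W, m + 1, m + 1, j) * pf_repl(W, m - 1, k, m + 1))
    rhs = (pf_repl(W, m + 1) * pf_repl(W, m - 1, k, j)
           - pf_repl(W, m - 1) * pf_repl(W, m + 1, k, j))
    assert abs(lhs - rhs) < 1e-8           # (E.51) = (E.52)
```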

1.4 \(\beta =4\) Polynomials in the Classical Limit

In the classical limit (\(y\rightarrow \infty \)) the skew inner product (7) becomes

$$\begin{aligned} \langle f, g \rangle _4:= \frac{1}{2} \int _{- \infty }^{\infty } dx \; e^{-2 x^2} \left[ f(x) g'(x)- g(x) f'(x) \right] , \end{aligned}$$
(E.54)

and the associated skew-orthogonal polynomials obeying

$$\begin{aligned} \langle Q_{2j} , Q_{2k} \rangle _{4}&= \langle Q_{2j+1} , Q_{2k+1} \rangle _{4}= 0\nonumber \\ \langle Q_{2j} , Q_{2k+1} \rangle _{4}&= -\langle Q_{2k+1} , Q_{2j} \rangle _{4} = q_j \delta _{j,k} \end{aligned}$$
(E.55)

are given by [2, 63]

$$\begin{aligned} Q_{2j+1}(x) = P_{2j+1} (\sqrt{2} x), \qquad \qquad \qquad Q_{2j}(x)&= \sum _{t=0}^{j} \left( \prod _{s=t+1}^{j} \frac{p_{2s}}{p_{2s-1}} \right) P_{2t} (\sqrt{2} x)\nonumber \\&= \sum _{t=0}^{j} \frac{j!}{t!}\; P_{2t} (\sqrt{2} x) \end{aligned}$$
(E.56)

[up to the invariance (29)], where the polynomials

$$\begin{aligned} P_{j} (x) = \frac{1}{2^j} H_{j} (x) \end{aligned}$$
(E.57)

are the (monic, “physicist’s”) Hermite polynomials in (27) and \(p_j = p_{j} (\infty )\) from (28). The corresponding normalizations \(q_j = q_j(\infty )\) are also from (28).

As mentioned after Proposition 4, it can be seen that the results of that Proposition reduce to the classical polynomials (E.56), since in the limit \(y\rightarrow \infty \) the matrix \([\varOmega _{j,k}]=0\) in (E.13), and we then follow exactly the steps in [2] to obtain (E.56).
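
The skew-orthogonality relations (E.55) for the classical polynomials (E.56) are also easy to verify numerically. A small sketch (function names ours), using `scipy.integrate.quad` to evaluate the classical skew inner product (E.54):

```python
import math

import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite  # physicist's Hermite H_n

def P(j, x):
    # monic Hermite polynomial P_j(x) = H_j(x) / 2^j, cf. (E.57)
    return eval_hermite(j, x) / 2**j

def dP(j, x):
    # P_j'(x) = j P_{j-1}(x), from H_j' = 2j H_{j-1}
    return j * P(j - 1, x) if j > 0 else 0.0

def Q(j, x):
    # classical beta=4 skew-orthogonal polynomials of (E.56)
    if j % 2 == 1:
        return P(j, math.sqrt(2) * x)
    m = j // 2
    return sum(math.factorial(m) / math.factorial(t) * P(2 * t, math.sqrt(2) * x)
               for t in range(m + 1))

def dQ(j, x):
    # derivative of Q_j, with the chain-rule factor sqrt(2)
    if j % 2 == 1:
        return math.sqrt(2) * dP(j, math.sqrt(2) * x)
    m = j // 2
    return sum(math.factorial(m) / math.factorial(t)
               * math.sqrt(2) * dP(2 * t, math.sqrt(2) * x)
               for t in range(m + 1))

def skew4(j, k):
    # the classical skew inner product (E.54)
    f = lambda x: 0.5 * math.exp(-2 * x * x) * (Q(j, x) * dQ(k, x) - Q(k, x) * dQ(j, x))
    return quad(f, -np.inf, np.inf)[0]

# (E.55): only the <Q_{2j}, Q_{2j+1}> pairings survive
assert abs(skew4(0, 2)) < 1e-10
assert abs(skew4(1, 3)) < 1e-10
assert abs(skew4(2, 1)) < 1e-10
assert abs(skew4(0, 3)) < 1e-10
assert abs(skew4(0, 1)) > 1e-3   # q_0 = <Q_0, Q_1> is non-zero
```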

Skew-Orthogonal Polynomials for \(\beta =1\)

We again suppress the explicit dependence on y to save space, although all the quantities here depend on y.

We can follow the same steps as for the \(\beta =4\) case in Appendix E to find the coefficients \(\alpha _{j,k}\) in (77). With \({{\mathbf {p}}}\) from (E.4) we first rewrite equation (77) as

$$\begin{aligned} {\mathbf {R}}= {\mathbf {X}}{\mathbf {P}}\qquad \Rightarrow \qquad {\mathbf {P}}= {\mathbf {X}}^{-1} {\mathbf {R}}, \end{aligned}$$
(F.1)

where

$$\begin{aligned} {\mathbf {R}}= \left[ \begin{array}{c} R_0 \\ R_1\\ \vdots \end{array} \right] , \end{aligned}$$
(F.2)

and \({\mathbf {X}}\) and \({\mathbf {X}}^{-1}\) are the same as in (E.5) and (E.7), but without the tildes. Also define the matrices

$$\begin{aligned} {{\mathbf {r}}}&:= \big [ \langle R_j, R_k \rangle _1^y \big ]_{j,k = 0, 1, \dots , N-1}, \end{aligned}$$
(F.3)
$$\begin{aligned} {\mathbf {B}}&:= [(P_j, A^{-1} P_k)_2^y]_{j,k = 0, 1, \dots , N-1}, \end{aligned}$$
(F.4)
$$\begin{aligned} \varvec{\varPhi }&:= [\varPhi _{j,k}]_{j,k = 0, 1, \dots , N-1}, \end{aligned}$$
(F.5)

where \({{\mathbf {r}}}\) is of skew-diagonal form (52). Then

$$\begin{aligned} {{\mathbf {r}}}&= \Big [ \langle R_j, R_k \rangle _{1}^{y} \Big ]= \left\langle {\mathbf {R}}{\mathbf {R}}^T \right\rangle _{1}^{y}= \left\langle {\mathbf {X}}{{\mathbf {p}}}{{\mathbf {p}}}^T {\mathbf {X}}^T \right\rangle _{1}^{y}\nonumber \\&={\mathbf {X}}\left\langle {\mathbf {P}}{\mathbf {P}}^T \right\rangle _{1}^{y} {\mathbf {X}}^T\nonumber \\&= -{\mathbf {X}}\Big ( {\mathbf {B}}+ \varvec{\varPhi } \Big ) {\mathbf {X}}^T\nonumber \\&= {\mathbf {X}}{\mathbf {V}}{\mathbf {X}}^T, \end{aligned}$$
(F.6)

where the anti-symmetric matrix \({\mathbf {V}}\) is defined in (18); we discuss the derivation of the specific structure of the elements of \({\mathbf {V}}\) in Appendix F.1 below. (As above, the skew product applied to matrix arguments is understood to act elementwise.)

We now follow the same steps as in (E.14)–(E.16) to get

$$\begin{aligned}&{\mathbf {X}}^{-1} {{\mathbf {r}}}\left( {\mathbf {X}}^{-1} \right) ^T = {\mathbf {V}}\end{aligned}$$
(F.7)
$$\begin{aligned} \qquad \Rightarrow \qquad&\sum _{m \, \mathrm {even}} r_{m/2} \left( \beta _{j,m} \beta _{k, m+1}- \beta _{j,m+1} \beta _{k, m} \right) - v_{j,k} =0 \end{aligned}$$
(F.8)

with \({\mathbf {V}}= {\mathbf {V}}_m= [v_{j,k}]_{j,k=0,\dots , m}\) from (18). Assuming the matrices in (F.7) are of even size \(N=2n\), taking the Pfaffian of (F.7) we have

$$\begin{aligned} \mathrm {Pf}\,{{\mathbf {r}}}= \prod _{j=0}^{n-1} r_j = \mathrm {Pf}\,{\mathbf {V}}_{2n-1} \end{aligned}$$
(F.9)

and we obtain (84), with the convention (85).

Then, since the equations in (F.8) are of the same form as (E.16), we apply the same reasoning as that in Proposition E.1 to obtain solutions for the \(\beta _{j,k}\)

$$\begin{aligned} \beta _{j,k}&= \left\{ \begin{array}{cl} \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {V}}_{k+1}^{(k\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {V}}_{k+1}}},&{}\quad k \text{ even },\\ \frac{{\displaystyle \mathrm {Pf}\,{\mathbf {V}}_{k}^{(k\mapsto j)}}}{{\displaystyle \mathrm {Pf}\,{\mathbf {V}}_{k}}},&{}\quad k \text{ odd }, \end{array} \right. \end{aligned}$$
(F.10)

where again, \({\mathbf {V}}_{\mu }^{(\eta \mapsto \nu )}\) is the matrix \({\mathbf {V}}_{\mu }\) with all occurrences of the index \(\eta \) replaced by the index \(\nu \). Now using the equations

$$\begin{aligned} {\mathbf {X}}^{-1} {\mathbf {X}}= {\mathbf {I}}\qquad \Rightarrow \qquad \alpha _{j,k} = -\sum _{m=k}^{j-1} \beta _{j,m} \alpha _{m,k}, \end{aligned}$$
(F.11)

and by following the same steps as in Appendix E.3 we establish the remaining statements in Proposition 5.

1.1 Entries of the Matrix \({\mathbf {V}}_m\)

For a general polynomial

$$\begin{aligned} g_{j} (x) = c_{j,j} x^j + c_{j, j-1} x^{j-1} + \dots + c_{j,1} x + c_{j,0} \end{aligned}$$
(F.12)

we use the identities (calculated via repeated integration by parts)

$$\begin{aligned} \int _{a}^{b} e^{-u^2/2} u^{2k+1} du&= (2k)!! \left( \sum _{m=0}^k \frac{e^{-a^2/2} a^{2m} - e^{-b^2/2} b^{2m}}{(2m)!!} \right) \end{aligned}$$
(F.13)
$$\begin{aligned} \int _{a}^{b} e^{-u^2/2} u^{2k} du&= (2k-1)!! \left( \sum _{m=1}^k \frac{e^{-a^2/2} a^{2m-1} - e^{-b^2/2} b^{2m-1}}{(2m-1)!!} \right) \nonumber \\&\quad + (2k-1)!! \sqrt{\frac{\pi }{2}} \left( \mathrm {erf}\left( \frac{b}{\sqrt{2}} \right) - \mathrm {erf}\left( \frac{a}{\sqrt{2}} \right) \right) \end{aligned}$$
(F.14)

to obtain

$$\begin{aligned} \int _{-\infty }^{\infty } e^{-z^2/2} g_k (z) dz = \sqrt{2 \pi } \sum _{t=0}^{\lfloor k/2 \rfloor } c_{k, 2t} (2t -1)!! \end{aligned}$$
(F.15)

and

$$\begin{aligned} A^{-1}g_{k} [z]&= \left( \frac{e^{x^2/2}}{2}\; \mathrm {erf}\left( \frac{x}{\sqrt{2}} \right) \int _{-\infty }^{\infty } e^{-z^2/2} g_{k} (z) dz \right) - g_{k-1} (x)\nonumber \\&\quad - (\text{ lower } \text{ order } \text{ polynomials}). \end{aligned}$$
(F.16)

So then, with \({\mathbf {B}}\) defined in (F.4), and using the orthogonality of the NM polynomials \(P_j\) we have

$$\begin{aligned} {\mathbf {B}}&= \begin{bmatrix} -\varPhi _{0,0} &{} -p_0 + X_{0,1} &{} b_{0,2} &{} b_{0,3} &{}\\ X_{1,0} &{} -\varPhi _{1,1} &{} -p_1 + X_{1,2} &{} b_{1,3}&{} \cdots \\ X_{2,0} &{} X_{2,1} &{} -\varPhi _{2,2} &{} b_{2,3} \\ &{}\vdots &{}&{}\ddots \end{bmatrix}\nonumber \\&= \begin{bmatrix} -\varPhi _{0,0} &{} -p_0 + X_{0,1} &{} b_{0,2} &{} b_{0,3}&{}\\ p_0 -X_{0,1} -\varPhi _{0,1} -\varPhi _{1,0} &{} -\varPhi _{1,1} &{} -p_1 + X_{1,2} &{} b_{1,3}&{} \cdots \\ X_{2,0} &{} p_1 -X_{1,2} -\varPhi _{1,2} -\varPhi _{2,1} &{} -\varPhi _{2,2} &{} b_{2,3} &{}\\ &{}\vdots &{}&{}\ddots \end{bmatrix}, \end{aligned}$$
(F.17)

where the \(b_{j,k}\) represent the currently unknown elements of \({\mathbf {B}}\), and the second equality comes from the use of (78). Adding the matrix \(\varvec{\varPhi }\) from (F.5) gives the (negative of the) anti-symmetric matrix \({\mathbf {V}}\) from (18), allowing us to specify the remaining elements of \({\mathbf {B}}\) as follows

$$\begin{aligned} {\mathbf {B}}+\varvec{\varPhi }&= -{\mathbf {V}}\nonumber \\&=\begin{bmatrix} 0 &{} -p_0 + X_{0,1} +\varPhi _{0,1} &{} -X_{2,0} - \varPhi _{2,0} &{} -X_{3,0} - \varPhi _{3,0}&{} \\ p_0 -X_{0,1} -\varPhi _{0,1} &{} 0 &{} -p_1 + X_{1,2} +\varPhi _{1,2} &{} -X_{3,1} - \varPhi _{3,1}&{} \cdots \\ X_{2,0} + \varPhi _{2,0} &{} p_1 -X_{1,2} -\varPhi _{1,2} &{} 0 &{} -p_2 + X_{2,3} +\varPhi _{2,3}&{}\\ X_{3,0} + \varPhi _{3,0} &{} X_{3,1}+ \varPhi _{3,1} &{} p_2 -X_{2,3} -\varPhi _{2,3} &{} 0&{}\\ &{}\vdots &{}&{}&{}\ddots \end{bmatrix}. \end{aligned}$$
(F.18)

1.2 \(\beta =1\) Polynomials in the Classical Limit

Similarly to Appendix E.4 above, the \(y \rightarrow \infty \) limit of the skew inner product (8) is

$$\begin{aligned} \langle f, g \rangle _1 = \frac{1}{2} \int _{-\infty }^{\infty } dx \; e^{-x^2/2} f(x) \int _{-\infty }^{\infty } d z \; e^{-z^2/2} g (z) \mathrm {sgn}(z- x), \end{aligned}$$
(F.19)

with the associated skew-orthogonal polynomials obeying the equations

$$\begin{aligned} \langle R_{2j} , R_{2k} \rangle _{1}&= \langle R_{2j+1} , R_{2k+1} \rangle _{1}= 0\nonumber \\ \langle R_{2j} , R_{2k+1} \rangle _{1}&=-\langle R_{2k+1} , R_{2j} \rangle _{1} = r_j \delta _{j,k}. \end{aligned}$$
(F.20)

These polynomials are given explicitly [up to the invariance (29)] by [2, 63]

$$\begin{aligned} R_{2j} (x) = P_{2j} (x), \qquad \qquad R_{2j+1} (x)&= P_{2j+1} (x) - \frac{p_{2j}}{p_{2j-1}} P_{2j-1} (x)\nonumber \\&= P_{2j+1} (x) - j\, P_{2j-1} (x), \end{aligned}$$
(F.21)

where the polynomials \(P_j (x)\) are the Hermite polynomials in (E.57) and \(p_j = p_j (\infty )\). The normalizations \(r_j = r_j (\infty )\) are from (28).

To check consistency between (83) and (F.21) we can use integration by parts, the identities (154) and

$$\begin{aligned} \frac{d}{dx} \mathrm {erf}\left( \frac{x}{\sqrt{2}} \right)&= \sqrt{\frac{2}{\pi }}\, e^{-x^2/2} \end{aligned}$$
(F.22)

to give us

$$\begin{aligned} \int _{-\infty }^{\infty } e^{-x^2/2} H_j (x) \mathrm {erf}\left( \frac{x}{\sqrt{2}} \right) dx = \left\{ \begin{array}{cl} 2^{(j+2)/2} (j-1)!! , \quad &{} j \text{ odd },\\ 0,&{} j \text{ even }. \end{array}\right. \end{aligned}$$
(F.23)
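
Both branches of (F.23) are easy to check numerically; a sketch with scipy (the helper names are ours):

```python
import math

import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite  # physicist's Hermite H_n

def dfact(n):
    # double factorial, with 0!! = 1
    return 1 if n <= 0 else n * dfact(n - 2)

def I(j):
    # left-hand side of (F.23)
    return quad(lambda x: math.exp(-x * x / 2) * eval_hermite(j, x)
                * math.erf(x / math.sqrt(2)), -np.inf, np.inf)[0]

for j in (1, 3, 5, 7):                          # odd j
    expected = 2 ** ((j + 2) / 2) * dfact(j - 1)
    assert abs(I(j) - expected) < 1e-6 * expected
for j in (0, 2, 4):                             # even j: the integrand is odd
    assert abs(I(j)) < 1e-6
```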

Substitution into (20) yields

$$\begin{aligned} X_{j,k} \Big |_{y \rightarrow \infty }&= \frac{1}{2^{j+k+1}} \left( \int _{-\infty }^{\infty } H_j (x) e^{-x^2/2}\; \mathrm {erf}\left( \frac{x}{\sqrt{2}} \right) dx \right) \int _{-\infty }^{\infty } e^{-z^2/2} H_k (z) dz\nonumber \\&= \left\{ \begin{array}{cl} \varGamma \left( \frac{j+1}{2} \right) \varGamma \left( \frac{k+1}{2} \right) = p_k (\infty ) \frac{\varGamma \left( \frac{j+1}{2} \right) }{\varGamma \left( \frac{k+2}{2} \right) }, \quad &{} j \text{ odd } \wedge k \text{ even },\\ 0,&{} \text{ otherwise }, \end{array}\right. \end{aligned}$$
(F.24)

where we used (155) for the integral over \(H_k\). The vanishing in the second case follows from the fact that the error function is odd while \(H_j(x)\) is even or odd according to the parity of j. We will also make use of the formula

$$\begin{aligned} p_j (\infty ) = \varGamma \left( \frac{j+1}{2} \right) \varGamma \left( \frac{j+2}{2} \right) , \end{aligned}$$
(F.25)

which can be shown via Legendre’s duplication formula for gamma functions.
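
Indeed, with \(z=(j+1)/2\) the duplication formula \(\varGamma (z)\varGamma (z+1/2) = 2^{1-2z}\sqrt{\pi }\,\varGamma (2z)\) gives \(p_j(\infty ) = \sqrt{\pi }\, j!/2^j\), which is the squared \(e^{-x^2}\)-norm of the monic Hermite polynomial \(P_j\); a quick numerical check (the norm interpretation of \(p_j(\infty )\) in (28) is our working assumption here):

```python
import math

import numpy as np
from scipy.integrate import quad
from scipy.special import eval_hermite, gamma

for j in range(8):
    lhs = gamma((j + 1) / 2) * gamma((j + 2) / 2)         # (F.25)
    rhs = math.sqrt(math.pi) * math.factorial(j) / 2**j   # via the duplication formula
    assert abs(lhs - rhs) < 1e-10 * rhs
    # assumed interpretation: p_j(infinity) is the squared norm of the
    # monic Hermite polynomial P_j(x) = H_j(x) / 2^j
    norm = quad(lambda x: math.exp(-x * x) * (eval_hermite(j, x) / 2**j) ** 2,
                -np.inf, np.inf)[0]
    assert abs(norm - rhs) < 1e-6 * rhs
```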

When \(y= \infty \), the function \(\varPhi _{j,k}\) in (19) vanishes, and using (F.24) we find that the matrix \({\mathbf {V}}_m\) in (18) has entries

$$\begin{aligned} {\mathbf {V}}_m= \begin{bmatrix} 0&{}p_0 &{}0 &{}X_{3,0}&{}0&{}X_{5,0}&{}\\ -p_0&{} 0&{} 0&{} 0&{} 0&{}0&{} \\ 0&{}0&{}0&{}p_2 &{} 0&{} X_{5,2}&{} \cdots \\ -X_{3,0}&{}0 &{}-p_2&{} 0&{} 0&{}0&{}\\ 0&{}0&{}0&{}0&{}0&{} p_4\\ -X_{5,0}&{} 0&{} -X_{5,2}&{} 0&{} -p_4&{} 0\\ &{}&{}\vdots &{}&{}&{}&{}\ddots \end{bmatrix} \end{aligned}$$
(F.26)

meaning

$$\begin{aligned} v_{j,k} = \left\{ \begin{array}{cl} p_j, &{}\quad j \text{ even } \wedge \,k=j+1,\\ 0,&{}\quad j \text{ odd } \vee \,k \text{ even },\\ X_{k,j}, &{}\quad j \text{ even } \wedge \,k \text{ odd } \wedge \, j<k-1, \end{array} \right. \end{aligned}$$
(F.27)

with the anti-symmetry condition

$$\begin{aligned} v_{j,k}= -v_{k,j}. \end{aligned}$$
(F.28)

So \({\mathbf {V}}_m\) is a sparse chequerboard matrix and it is known that Pfaffians of chequerboard matrices are equivalent to determinants of a condensed matrix (see, for example, [5] and [17]). Explicitly, for a matrix \({\hat{{\mathbf {A}}}}_1=[\alpha _{i,j}]_{i,j=1,...,2N}\) we have

$$\begin{aligned} \mathrm {Pf}\,{\hat{{\mathbf {A}}}}_1=\det [\alpha _{2i-1,2j}]_{i,j=1,...,N}, \end{aligned}$$
(F.29)

and applying this to the matrix in (F.26) we obtain a determinant with zeroes everywhere below the diagonal, giving

$$\begin{aligned} \mathrm {Pf}\,{\mathbf {V}}_{2j-1} = p_0 p_2 \cdots p_{2j-2}. \end{aligned}$$
(F.30)
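
The condensation identity (F.29) can be verified directly on a random chequerboard matrix. A brute-force sketch (the `pfaffian` helper is ours and expands along the first row, so it is only practical for small sizes):

```python
import numpy as np

def pfaffian(A):
    # recursive first-row expansion:
    # Pf A = sum_pos (-1)^(pos-1) A[0,pos] Pf(A minus rows/cols 0, pos)
    n = len(A)
    if n == 0:
        return 1.0
    total = 0.0
    for pos in range(1, n):
        rest = [i for i in range(1, n) if i != pos]
        total += (-1) ** (pos - 1) * A[0][pos] * pfaffian(A[np.ix_(rest, rest)])
    return total

rng = np.random.default_rng(1)
N = 3
A = np.zeros((2 * N, 2 * N))
for j in range(0, 2 * N, 2):          # even row index (0-based) ...
    for k in range(1, 2 * N, 2):      # ... odd column index: chequerboard support
        A[j, k] = rng.standard_normal()
        A[k, j] = -A[j, k]

# (F.29) in 0-based indexing: Pf A = det of the (even-row, odd-column) condensation
assert np.isclose(pfaffian(A), np.linalg.det(A[0::2, 1::2]))
```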

For the numerator of \(\alpha _{j,k}\) we have four cases to consider, given by the parities of j and k.

\(\underline{\alpha _{2j, 2k}}:\)

In the 2k-th column we have the matrix entries

$$\begin{aligned} v_{s, 2k} \mapsto v_{s, 2j} = X_{2j, s} =0 \qquad (s< 2k) \end{aligned}$$
(F.31)

while in the 2k-th row we have

$$\begin{aligned} v_{2k, t} \mapsto v_{2j, t} = -X_{2j, t} =0 \qquad (2k<t) \end{aligned}$$
(F.32)

so we have zeros above and to the right of the (2k, 2k) entry (in the same column and row), which gives us

$$\begin{aligned} \mathrm {Pf}\,{\mathbf {V}}_{2j-1}^{(2k\mapsto 2j)} =0, \end{aligned}$$
(F.33)

since at least one of these zero factors must appear in each term of the Pfaffian.

\(\underline{\alpha _{2j, 2k+1}}:\)

Similar to the above, we have

$$\begin{aligned} v_{s, 2k+1} \mapsto v_{s, 2j} = X_{2j, s} =0 \qquad (s< 2k+1) \end{aligned}$$
(F.34)

and

$$\begin{aligned} v_{2k+1, t} \mapsto v_{2j, t} = -X_{2j, t} =0 \qquad (2k+1 < t). \end{aligned}$$
(F.35)

So now we have zeros above and to the right of the \((2k+1, 2k+1)\) entry, which gives us

$$\begin{aligned} \mathrm {Pf}\,{\mathbf {V}}_{2j-1}^{(2k+1 \mapsto 2j)} =0. \end{aligned}$$
(F.36)

\(\underline{\alpha _{2j+1, 2k}}:\)

Now we have

$$\begin{aligned} v_{s, 2k} \mapsto v_{s, 2j+1} = X_{2j+1, s} =0 \qquad (s< 2k \wedge s \text{ odd}) \end{aligned}$$
(F.37)

so we still have every odd row containing only zeros (in the upper triangle). Thus, as in (F.30), the only term in the Laplace expansion that could be non-zero is the identity matching \(p_0 p_2 \cdots p_{2j-2}\), with the factor \(p_{2k}\) now replaced. However,

$$\begin{aligned} p_{2k} = v_{2k, 2k+1} \mapsto v_{2j+1, 2k+1} = -X_{2j+1, 2k+1} =0, \end{aligned}$$
(F.38)

and so

$$\begin{aligned} \mathrm {Pf}\,{\mathbf {V}}_{2j-1}^{(2k \mapsto 2j+1)} =0. \end{aligned}$$
(F.39)

\(\underline{\alpha _{2j+1, 2k+1}}:\)

Using the expressions (F.24) and (F.25) we have the identity

$$\begin{aligned} X_{2m+1, 2t} X_{2t+1, 2n} = p_{2t} X_{2m+1, 2n}, \qquad (m> t > n), \end{aligned}$$
(F.40)

which we will make use of below. First we recall from (F.26) that in the upper triangle \(v_{j,k} \ne 0\) only when j is even and when k is odd, which implies that all the even sites in the corresponding diagram connect to the right, and all the odd sites connect to the left. However, we will have an exception to this when we make the replacement \(2k+1 \mapsto 2j+1\). Specifically, in terms of link diagrams there are two possibilities for the links involving site \(2j+1\): either \((2s, 2j+1)\) or \((2j+1, 2t)\) (so \(2j+1\) is either the right or left vertex of the link). We note that the other vertex must be even, since any odd-odd or even-even link results in \(X_{\mathrm {odd}, \mathrm {odd}} = 0 = X_{\mathrm {even}, \mathrm {even}}\). It is easiest to consider the two cases separately:

  1. (i)

    Assume \(2j+1\) connects to the left, that is we have a link \((2s, 2j+1)\). Since all other odd sites connect left and all other even sites connect right, this must be the identity link diagram.

  2. (ii)

    Assume \(2j+1\) connects to the right, that is we have a link \((2j+1, 2t)\), then we must have identity links at sites to the left of 2k and to the right of \(2t+1\), as depicted in Fig. 4. [The left-pointing arrow on the edge \((2j+1, 2t)\) indicates that the left vertex is greater than the right vertex, which is the opposite convention to all the other links, and this introduces a negative sign from (F.28).] In this case, we see from the diagram that there are 2 possible connections for \(2t-2\), and then another 2 possible connections for \(2t-4\), and so on. Thus there are \(2^{t-k-1}\) link diagrams corresponding to Fig. 4.

Fig. 4
figure 4

A general link diagram in the case that the vertex \(2j+1\) connects to the right. The left-pointing arrow on this link indicates that the corresponding matrix entry has row index larger than the column index (which is different to the convention on all other links). From (F.28) we see that this left-pointing arrow will introduce a negative sign

Summing over the possible values of \(t=k+1, \dots , j-1\) in (ii), and adding the identity link pattern from (i), we have the number of valid link patterns on N sites L(N) given by

$$\begin{aligned} L(N)= 1+ \sum _{t=k+1}^{j-1} 2^{t-k-1} = 2^{j-k-1}. \end{aligned}$$
(F.41)

So for \(2k+1< 2j-1\) we have an even number of terms in the Pfaffian, and it turns out that they all cancel.

Fig. 5
figure 5

An example of the type of link diagrams possible with the restrictions in Fig. 4. The ellipses “\(\dots \)” denote identity links. The labels “e” and “o” denote generic even and odd vertices respectively

Fig. 6
figure 6

Two possible link diagrams corresponding to equation (F.40) when \(2k+1 \mapsto 2j+1\). A left-pointing arrow on a link indicates that the left vertex of the edge is greater than the right vertex, and so from (F.28) we pick up a negative sign (since this corresponds to an element in the lower triangle of the anti-symmetric matrix). Diagram (a) introduces/removes a left-pointing arrow hence the factor of \((-1)\); diagram (b) contains left-pointing arrows on both sides of the equation, but it does introduce/remove an odd number of crossings so it also has a factor of \((-1)\)

Fig. 7
figure 7

Constructing the link diagram in Fig. 5 using the diagram equalities in Fig. 6. The labels to the left of each link diagram refer to which of the equalities in Fig. 6 was applied, and the sign of the corresponding term in the Pfaffian

To show this, note that the restriction that all odd vertices connect to the left and all even vertices connect to the right (except for \(2j+1\) and 2t) means that a general link diagram must look like that in Fig. 5. That is, big interconnected links, with a large rainbow link \((2j+1, \mathrm {even})\), and interspersed with little links. The big links must interconnect at neighbouring sites, since otherwise we would have two neighbouring vertices pointing in the same direction, violating the even/right–odd/left rule. We can construct every diagram of the type in Fig. 5 by application of the equality (F.40), by recasting that equation into the link diagram equalities in Fig. 6, for the particular case when \(m=j\). In Fig. 6a note the link diagram on the right has a left-pointing arrow (implying that the row index is larger than the column index), and so from (F.28) we introduce a negative sign on the corresponding matrix entry. In Fig. 6b we have left-pointing arrows on both sides of the equality, but we have an additional sign introduced since the diagrams differ by an odd number of crossings.

In Fig. 7 we give the example of constructing the link diagram in Fig. 5 from the identity diagram by repeated application of equalities in Fig. 6—starting from the left at the link \((2k, 2j+1)\) we first apply equality (a), and then, moving to the right, we repeatedly apply (b) until we have the final diagram. Each application of the equalities (a) and (b) introduces a negative sign.

In the identity diagram there are \(j-k-1\) little links to the right of site \(2k+1\), so there are \({j-k-1 \atopwithdelims ()p}\) link diagrams obtained from p uses of the equalities in Fig. 6, which gives us that

$$\begin{aligned} \mathrm {Pf}\,{\mathbf {V}}_{2j-1}^{(2k+1 \mapsto 2j+1)}&= (p_0 p_2 \cdots p_{2k-2}) v_{2k,2j+1} (p_{2k+2} \cdots p_{2j-2}) \sum _{p=0}^{j-k-1} (-1)^p {j-k-1 \atopwithdelims ()p}\nonumber \\&= 0 \qquad \text{(for } k< j-1), \end{aligned}$$
(F.42)

where \((p_0 p_2 \cdots p_{2k-2}) v_{2k,2j+1} (p_{2k+2} \cdots p_{2j-2})\) is the term from the identity link diagram (i.e. the top diagram in Fig. 7). The second equality follows since the sum of alternating binomial coefficients is equal to zero, which can be seen from the binomial expansion of \((x-y)^{j-k-1}\), with \(x, y\rightarrow 1\). Thus \(\alpha _{2j+1, 2k+1}=0\) when \(k<j-1\).

From (F.41) we see the only scenario where we do not have an even number of cancelling link diagrams is when \(k = j-1\), and we have only the identity link pattern. In this case, equation (F.42) becomes

$$\begin{aligned} \mathrm {Pf}\,{\mathbf {V}}_{2j-1}^{(2j-1 \mapsto 2j+1)} = p_0 p_2 \cdots p_{2j-4} X_{2j+1, 2j-2} \end{aligned}$$
(F.43)

since \(v_{2j-2, 2j-1}\mapsto v_{2j-2, 2j+1}= X_{2j+1, 2j-2}\). Substituting (F.43) and (F.30) (with \(m=2j-1\)) into (83) we have

$$\begin{aligned} \alpha _{2j+1, 2j-1} = - \frac{X_{2j+1, 2j-2}}{p_{2j-2}} = - \frac{p_{2j}}{p_{2j-1}} = -\frac{\varGamma (j+1)}{\varGamma (j)} = -j, \end{aligned}$$
(F.44)

where we used (F.24) for the second equality. Combining this result with (F.33), (F.36), (F.39) and (F.42) we recover (F.21).
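
As a cross-check of this subsection, the claims (F.30), (F.33), (F.36), (F.39), (F.42) and the surviving case (F.43)-(F.44) can all be verified numerically by building the classical-limit matrix from (F.24)-(F.27). A brute-force sketch (helper names ours):

```python
import numpy as np
from scipy.special import gamma

def pfaffian(A):
    # recursive first-row expansion (signed sum over perfect matchings)
    n = len(A)
    if n == 0:
        return 1.0
    total = 0.0
    for pos in range(1, n):
        rest = [i for i in range(1, n) if i != pos]
        total += (-1) ** (pos - 1) * A[0][pos] * pfaffian(A[np.ix_(rest, rest)])
    return total

def pf_repl(V, mu, eta, nu):
    # Pf of V on indices 0..mu, with eta replaced in place by nu
    idx = list(range(mu + 1))
    idx[eta] = nu
    return pfaffian(V[np.ix_(idx, idx)])

def p_inf(j):
    return gamma((j + 1) / 2) * gamma((j + 2) / 2)          # (F.25)

def X_inf(j, k):
    # (F.24): non-zero only for j odd and k even
    return gamma((j + 1) / 2) * gamma((k + 1) / 2) if (j % 2, k % 2) == (1, 0) else 0.0

def V_classical(m):
    # entries (F.27) of the classical-limit matrix V_m in (F.26)
    V = np.zeros((m + 1, m + 1))
    for j in range(m + 1):
        for k in range(j + 1, m + 1):
            if j % 2 == 0:
                V[j, k] = p_inf(j) if k == j + 1 else X_inf(k, j)
            V[k, j] = -V[j, k]
    return V

j = 3
V = V_classical(2 * j + 1)
pf = pfaffian(V[:2 * j, :2 * j])
assert abs(pf - p_inf(0) * p_inf(2) * p_inf(4)) < 1e-8      # (F.30)
for k in (0, 1):
    assert abs(pf_repl(V, 2 * j - 1, 2 * k, 2 * j)) < 1e-12        # (F.33)
    assert abs(pf_repl(V, 2 * j - 1, 2 * k + 1, 2 * j)) < 1e-12    # (F.36)
    assert abs(pf_repl(V, 2 * j - 1, 2 * k, 2 * j + 1)) < 1e-12    # (F.39)
assert abs(pf_repl(V, 2 * j - 1, 1, 2 * j + 1)) < 1e-8             # (F.42), k=0 < j-1
# (F.43)/(F.44): the one surviving case gives alpha_{2j+1, 2j-1} = -j
assert abs(-pf_repl(V, 2 * j - 1, 2 * j - 1, 2 * j + 1) / pf + j) < 1e-8
```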

Mays, A., Ponsaing, A. & Schehr, G. Tracy-Widom Distributions for the Gaussian Orthogonal and Symplectic Ensembles Revisited: A Skew-Orthogonal Polynomials Approach. J Stat Phys 182, 28 (2021). https://doi.org/10.1007/s10955-020-02695-w
