
Integrability of Boundary Liouville Conformal Field Theory

Published in: Communications in Mathematical Physics

Abstract

Liouville conformal field theory (LCFT) is considered on a simply connected domain with boundary, specializing to the case where the Liouville potential is integrated only over the boundary of the domain. We work in the probabilistic framework of boundary LCFT introduced by Huang et al. (Ann Inst H Poincare Probab Statist 54(3):1694–1730, 2018. https://doi.org/10.1214/17-AIHP852). Building upon the known proof of the bulk one-point function by the first author, exact formulas are rigorously derived for the remaining basic correlation functions of the theory, i.e., the bulk-boundary correlator, the boundary two-point function, and the boundary three-point function. These four correlation functions should be seen as the fundamental building blocks of boundary Liouville theory, playing a role analogous to that of the DOZZ formula in the case of the Riemann sphere. Our study of boundary LCFT also provides the general framework to understand the integrability of one-dimensional Gaussian multiplicative chaos measures as well as their tail expansions. Finally, these results have applications to studying the conformal blocks of CFT and set the stage for the more general case of boundary LCFT with both bulk and boundary Liouville potentials.


Notes

  1. In [14] this parameter \(\beta \) is called \(\alpha \), but we use here the notation \(\beta \) in order to keep the convention of this paper for insertions on the boundary.

  2. It is also possible to consider degenerate insertions in the bulk but they will not be used in the present paper. See the discussion in Sect. 1.4.1 for more details.

  3. The values excluded here are recovered by an easy continuity argument.

  4. Again the values excluded here are recovered by a continuity argument.

  5. In [22] one needs to multiply further by \(\eta \) and the limit is \(-4\). This difference comes from the fact that [22] states the result for the sphere correlation function, which is related to the GMC moment by an explicit prefactor.

References

  1. Ang M., Holden N., Sun X.: Integrability of SLE via conformal welding of random surfaces. arXiv:2104.09477

  2. Ang M., Remy G., Sun X.: FZZ formula of boundary Liouville CFT via conformal welding. arXiv:2104.09478

  3. Baverez G., Wong M. D.: Fusion asymptotics for Liouville correlation functions. arXiv:1807.10207

  4. Belavin, A.A., Polyakov, A.M., Zamolodchikov, A.B.: Infinite conformal symmetry in two-dimensional quantum field theory. Nucl. Phys. B241, 333–380 (1984)


  5. Berestycki N.: An elementary approach to Gaussian multiplicative chaos, Electron. Commun. Probab., 22 (2017), paper no. 27. https://doi.org/10.1214/17-ECP58, https://projecteuclid.org/euclid.ecp/1494554429

  6. Berestycki N., Powell E.: Introduction to the Gaussian Free Field and Liouville Quantum Gravity. Available at https://homepage.univie.ac.at/nathanael.berestycki/articles.html

  7. David, F., Kupiainen, A., Rhodes, R., Vargas, V.: Liouville quantum gravity on the Riemann sphere. Commun. Math. Phys. 342, 869–907 (2016)


  8. David, F., Rhodes, R., Vargas, V.: Liouville quantum gravity on complex tori. J. Math. Phys. 57, 022302 (2016)


  9. Dorn, H., Otto, H.-J.: Two and three point functions in Liouville theory. Nucl. Phys. B 429(2), 375–388 (1994)


  10. Duplantier B., Miller J., Sheffield S.: Liouville quantum gravity as a mating of trees. arXiv:1409.7055

  11. Fateev V., Zamolodchikov A., Zamolodchikov Al.: Boundary Liouville Field Theory I. Boundary State and Boundary Two-point Function. arXiv:hep-th/0001012

  12. Fyodorov, Y.V., Bouchaud, J.P.: Freezing and extreme value statistics in a Random Energy Model with logarithmically correlated potential. J. Phys. A Math. Theor. 41(37), 372001 (2008)


  13. Fyodorov Y.V., Le Doussal P., Rosso A.: Statistical Mechanics of Logarithmic REM: duality, Freezing and Extreme Value Statistics of \(1/f\) Noises generated by Gaussian Free Fields, J. Stat. Mech. P10005 (2009)

  14. Ghosal P., Remy G., Sun X., Sun Y.: Probabilistic conformal blocks for Liouville CFT on the torus. arXiv:2003.03802 (2020)

  15. Guillarmou C., Kupiainen A., Rhodes R., Vargas V.: Conformal bootstrap in Liouville Theory. arXiv:2005.11530 (2020)

  16. Guillarmou C., Kupiainen A., Rhodes R., Vargas V.: Segal’s axioms and bootstrap for Liouville Theory. arXiv:2112.14859 (2021)

  17. Guillarmou, C., Rhodes, R., Vargas, V.: Polyakov’s formulation of 2d bosonic string theory. Publ. Math. IHES 130, 111 (2019). https://doi.org/10.1007/s10240-019-00109-6


  18. Hosomichi, K.: Bulk-boundary propagator in Liouville theory on a disc. J. High Energy Phys. 2001, JHEP11 (2001)


  19. Huang, Y., Rhodes, R., Vargas, V.: Liouville quantum gravity on the unit disk. Ann. Inst. H. Poincaré Probab. Stat. 54(3), 1694–1730 (2018). https://doi.org/10.1214/17-AIHP852


  20. Kahane, J.-P.: Sur le chaos multiplicatif. Ann. Sci. Math. Québec 9(2), 105–150 (1985)


  21. Kupiainen, A., Rhodes, R., Vargas, V.: Local conformal structure of Liouville quantum gravity. Commun. Math. Phys. 371, 1005 (2019). https://doi.org/10.1007/s00220-018-3260-3


  22. Kupiainen, A., Rhodes, R., Vargas, V.: Integrability of Liouville theory: proof of the DOZZ formula. Ann. Math. 191(1), 81–166 (2020)


  23. Lacoin, H., Rhodes, R., Vargas, V.: Path integral for quantum Mabuchi K-energy. arXiv:1807.01758

  24. Martinec E. J.: The annular report on non-critical string theory. arXiv:hep-th/0305148

  25. Ostrovsky, D.: Mellin transform of the limit lognormal distribution. Commun. Math. Phys. 288, 287–310 (2009)


  26. Ostrovsky, D.: On Barnes beta distributions and applications to the maximum distribution of the 2D Gaussian free field. J. Stat. Phys. 164, 1292–1317 (2016)


  27. Ostrovsky D.: A Review of Conjectured Laws of Total Mass of Bacry-Muzy GMC Measures on the Interval and Circle and Their Applications, Reviews in Mathematical Physics, Vol 30. arXiv:1803.06677 (2018)

  28. Polyakov, A.M.: Quantum geometry of bosonic strings. Phys. Lett. 103B, 207–210 (1981)


  29. Ponsot, B., Teschner, J.: Clebsch-Gordan and Racah-Wigner coefficients for a continuous series of representations of U_q(sl(2, R)). Commun. Math. Phys. 224, 613 (2001)


  30. Ponsot, B., Teschner, J.: Boundary Liouville field theory: boundary three point function. Nucl. Phys. B 622(1–2), 309–327 (2002)


  31. Remy, G.: The Fyodorov-Bouchaud formula and Liouville conformal field theory. Duke Math. J. 169(1), 177–211 (2020). https://doi.org/10.1215/00127094-2019-0045


  32. Remy, G.: Liouville quantum gravity on the annulus. J. Math. Phys. 59, 082303 (2018). https://doi.org/10.1063/1.5030409


  33. Remy, G., Zhu, T.: The distribution of Gaussian multiplicative chaos on the unit interval. Ann. Probab. 48(2), 872–915 (2020)


  34. Rhodes, R., Vargas, V.: Gaussian multiplicative chaos and applications: a review. Probab. Surv. 11, 315–392 (2014)


  35. Rhodes, R., Vargas, V.: The tail expansion of Gaussian multiplicative chaos and the Liouville reflection coefficient. Ann. Probab. 47(5), 3082–3107 (2019). https://doi.org/10.1214/18-AOP1333


  36. Williams D.: Path decomposition and continuity of local time for one-dimensional diffusions, I, Proceedings of the London Mathematical Society s3–28 (4), 738–768 (1974)

  37. Wong, M.D.: Universal tail profile of Gaussian multiplicative chaos. Probab. Theory Relat. Fields (2020). https://doi.org/10.1007/s00440-020-00960-3


  38. Wu B.: Conformal bootstrap on the annulus in Liouville CFT. arXiv:2203.11830 (2022)

  39. Zamolodchikov, A.B., Zamolodchikov, A.B.: Structure constants and conformal bootstrap in Liouville field theory. Nucl. Phys. B 477(2), 577–605 (1996)



Acknowledgements

The authors would like to thank Rémi Rhodes and Vincent Vargas for introducing us to Liouville CFT. We would also like to thank Morris Ang, Guillaume Baverez, and Xin Sun for many helpful discussions. Finally, we would like to thank the two anonymous referees for their careful reading of this manuscript and for their numerous comments that helped improve this paper. G.R. was supported by an NSF mathematical sciences postdoctoral research fellowship, NSF Grant DMS-1902804. T.Z. was supported by a grant from Région Ile-de-France.

Author information


Corresponding author

Correspondence to Guillaume Remy.

Additional information

Communicated by J. Ding.


Appendix

1.1 Useful facts in probability

We start by explaining how to construct our GFF X from the standard Neumann boundary GFF \(X_{\mathbb {D}}\) on \({\mathbb {D}}\), also called the free boundary GFF. Its covariance is given, for \(x,y \in {\mathbb {D}}\), by

$$\begin{aligned} \mathbb {E}[X_{\mathbb {D}}(x)X_{\mathbb {D}}(y)] = \ln \frac{1}{|x-y||1- x{\bar{y}}|}. \end{aligned}$$
(5.1)

The field \(X_{\mathbb {D}}\) has zero average on the unit circle. Notice also that if x or y is on the unit circle, the covariance (5.1) reduces to \(-2 \ln |x-y|\). One can then conformally map the disk \({\mathbb {D}}\) equipped with the Euclidean metric to the upper-half plane \({\mathbb {H}}\) equipped with the metric \({\hat{g}}(x) = \frac{4}{|x+i|^4}\). Under this map the field \(X_{\mathbb {D}}\) is sent to a field \(X_{{\hat{g}}}\) defined on \({\mathbb {H}}\) which has covariance

$$\begin{aligned} \mathbb {E}[X_{{\hat{g}}}(x)X_{{\hat{g}}}(y)] = \ln \frac{1}{|x-y||x-{\bar{y}}|} -\frac{1}{2}\ln {\hat{g}}(x) - \frac{1}{2} \ln {\hat{g}}(y), \end{aligned}$$
(5.2)

and zero average on \({\mathbb {R}}\) in the metric \({\hat{g}}\). Finally the field X can be obtained from the field \(X_{{\hat{g}}}\) by simply setting:

$$\begin{aligned} X(x) = X_{{\hat{g}}}(x) - \frac{1}{\pi } \int _0^{\pi } X_{{\hat{g}}}(e^{i \theta }) d \theta . \end{aligned}$$
(5.3)
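Let us record the short computation behind the observation above that the covariance (5.1) reduces to \(-2 \ln |x-y|\) on the unit circle: if \(|y|=1\) then \(|1- x{\bar{y}}| = |{\bar{y}}(y-x)| = |y-x|\), so that

$$\begin{aligned} \mathbb {E}[X_{\mathbb {D}}(x)X_{\mathbb {D}}(y)] = \ln \frac{1}{|x-y||1- x{\bar{y}}|} = \ln \frac{1}{|x-y|^2} = -2 \ln |x-y|. \end{aligned}$$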

Next we state a result on the finiteness of GMC moments covering all the situations encountered in the main text. It follows from [19, Corollary 3.10], except that we need to allow complex \(\mu _i\).

Proposition 5.1

(Finiteness of moments of GMC). Fix \(\gamma \in (0, 2)\). The following claims hold.

  • For \(\beta < Q \) and \(\frac{\gamma }{2} - \alpha< \frac{\beta }{2} < \alpha \) we have

    $$\begin{aligned} {\mathbb {E}} \left[ \left( \int _{{\mathbb {R}}} \frac{g(x)^{\frac{\gamma }{4}(\frac{2}{\gamma }-\alpha -\frac{\beta }{2})}}{|x-i|^{\gamma \alpha } } e^{\frac{\gamma }{2} X(x) } d x \right) ^{\frac{2}{\gamma }(Q-\alpha -\frac{\beta }{2})} \right] < \infty . \end{aligned}$$
    (5.4)
  • For \((\mu _i)_{i=1,2,3}\) satisfying Definition 1.3, \(\beta _i < Q\), \( {\frac{1}{\gamma }(2 Q - \sum _{i=1}^3 \beta _i)} < \frac{4}{\gamma ^2} \wedge \min _i \frac{2}{\gamma }(Q - \beta _i)\) we have

    $$\begin{aligned} {\mathbb {E}} \left[ \left| \int _{\mathbb {R}}\frac{g(x)^{\frac{\gamma }{8}(\frac{4}{\gamma }- \sum _{i=1}^3 \beta _i )} }{|x|^{\frac{\gamma \beta _1 }{2}}|x-1|^{\frac{\gamma \beta _2 }{2}}} e^{\frac{\gamma }{2} X(x)} d \mu (x) \right| ^{\frac{1}{\gamma }(2 Q - \sum _{i=1}^3 \beta _i)} \right] < +\infty . \end{aligned}$$
    (5.5)
  • For \(\chi \in \{\frac{\gamma }{2}, \frac{2}{\gamma }\}\), \(p = \frac{2}{\gamma }(Q-\alpha -\frac{\beta }{2}+\frac{\chi }{2})\), \(\beta < Q\), \( p < \frac{4}{\gamma ^2} \wedge \frac{2}{\gamma }(Q - \beta )\), \(t \in {\mathbb {H}}\) we have

    $$\begin{aligned} {\mathbb {E}} \left[ \left| \int _{{\mathbb {R}}} \frac{(t-x)^{\frac{\gamma \chi }{2}}}{|x-i|^{\gamma \alpha } } g(x)^{\frac{\gamma ^2}{8}(p-1)} e^{\frac{\gamma }{2} X(x) } d x \right| ^{p} \right] <\infty . \end{aligned}$$
    (5.6)
  • For \(\chi \in \{\frac{\gamma }{2}, \frac{2}{\gamma }\}\), \(q = \frac{1}{\gamma }(2 Q - \beta _1 - \beta _2 - \beta _3 + \chi )\), \(\beta _i < Q\), \( \mu _1 \in (0,\infty )\), \(\mu _2, \mu _3 \in - {\overline{\mathbb {H}}}\), \( q < \frac{4}{\gamma ^2} \wedge \min _i \frac{2}{\gamma }(Q - \beta _i)\), \(t \in {\mathbb {H}}\) we have

    $$\begin{aligned} {\mathbb {E}} \left[ \left| \int _{\mathbb {R}}\frac{(t-x)^{\frac{\gamma \chi }{2}}}{|x|^{\frac{\gamma \beta _1 }{2}} |x-1|^{\frac{\gamma \beta _2 }{2}}} g(x)^{\frac{\gamma ^2}{8}(q-1)} e^{\frac{\gamma }{2} X(x)} d \mu (x) \right| ^{q} \right] <\infty . \end{aligned}$$
    (5.7)

Proof

For the first claim, since a positive function is integrated against the GMC measure, we are in the classical case of the existence of moments of GMC with an insertion of weight \(\beta \) (here the insertion is placed at infinity, but this does not affect the result). Following [7, Lemma 3.10], adapted to the case of one-dimensional GMC, the condition is thus \( \beta <Q\) and \(\frac{2}{\gamma }(Q-\alpha -\frac{\beta }{2}) < \frac{4}{\gamma ^2} \wedge \frac{2}{\gamma } (Q -\beta )\). One can check that this last condition translates into \(\frac{\gamma }{2} - \alpha< \frac{\beta }{2} < \alpha \). See also [19, Corollary 3.10].
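For the reader's convenience, here is the short computation behind this equivalence, using \(Q = \frac{\gamma }{2} + \frac{2}{\gamma }\):

$$\begin{aligned} \frac{2}{\gamma }\Big (Q-\alpha -\frac{\beta }{2}\Big )< \frac{2}{\gamma }(Q-\beta ) \;\Longleftrightarrow \; \frac{\beta }{2}< \alpha , \qquad \frac{2}{\gamma }\Big (Q-\alpha -\frac{\beta }{2}\Big )< \frac{4}{\gamma ^2} \;\Longleftrightarrow \; \frac{\gamma }{2} - \alpha < \frac{\beta }{2}. \end{aligned}$$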

The second claim does not fit exactly into the framework of [7] since the \(\mu _i\) can be complex and therefore we have a complex valued quantity integrated against the GMC. Let \(q_0 := \frac{1}{\gamma }(2 Q - \sum _{i=1}^3 \beta _i)\). For the case of positive moments \(q_0 \ge 0\) one can simply use the bound

$$\begin{aligned}&\mathbb {E}\left[ \left| \int _{\mathbb {R}}\frac{g(x)^{\frac{\gamma }{8}(\frac{4}{\gamma }- \sum _{i=1}^3 \beta _i )} }{|x|^{\frac{\gamma \beta _1 }{2}}|x-1|^{\frac{\gamma \beta _2 }{2}}} e^{\frac{\gamma }{2} X(x)} d \mu (x) \right| ^{q_0}\right] \nonumber \\&\quad \le M \mathbb {E}\left[ \left( \int _{\mathbb {R}}\frac{g(x)^{\frac{\gamma }{8}(\frac{4}{\gamma }- \sum _{i=1}^3 \beta _i )} }{|x|^{\frac{\gamma \beta _1 }{2}}|x-1|^{\frac{\gamma \beta _2 }{2}}} e^{\frac{\gamma }{2} X(x)} dx \right) ^{q_0}\right] , \end{aligned}$$
(5.8)

which is valid for \(M = \max _i |\mu _i| > 0\). The claim then reduces to the first case.

Now for negative moments, corresponding to \(q_0< 0\), this is precisely where we use the half-space condition of Definition 1.3 on the \(\mu _i\) parameters. The condition implies that \(\mu _1, \mu _2, \mu _3\) are contained in a half-space \({\mathcal {H}}\). Let \(v_1 \in {\mathbb {C}}\) be a normal vector contained in the half-space, and let \(v_2 \in {\mathbb {C}}\) be perpendicular to \(v_1\). For each \(i =1,2,3\), write \(\mu _i = \lambda _i v_1+ \lambda _i' v_2\) where \(\lambda _i \ge 0\) and \(\lambda _i' \in {\mathbb {R}}\). By Definition 1.3 at least one \(\lambda _i\) is nonzero. From this one can deduce the upper bound

$$\begin{aligned}&\mathbb {E}\left[ \left| \int _{\mathbb {R}}\frac{g(x)^{\frac{\gamma }{8}(\frac{4}{\gamma }- \sum _{i=1}^3 \beta _i )} }{|x|^{\frac{\gamma \beta _1 }{2}}|x-1|^{\frac{\gamma \beta _2 }{2}}} e^{\frac{\gamma }{2} X(x)} d \mu (x) \right| ^{q_0}\right] \nonumber \\&\quad \le M' \mathbb {E}\left[ \left( \int _{\mathbb {R}}\frac{g(x)^{\frac{\gamma }{8}(\frac{4}{\gamma }- \sum _{i=1}^3 \beta _i )} }{|x|^{\frac{\gamma \beta _1 }{2}}|x-1|^{\frac{\gamma \beta _2 }{2}}} e^{\frac{\gamma }{2} X(x)} d \lambda (x) \right) ^{q_0}\right] \end{aligned}$$
(5.9)

for some \(M' >0\), where \(d \lambda (x)\) is defined in the same way as \(d \mu (x)\) but with \(\mu _i\) replaced by \(\lambda _i\). We can now apply the first case again to show finiteness. Lastly, the third and fourth cases are treated similarly to the second one. \(\square \)

Remark 5.2

Let us discuss what happens in the above proposition if we are in the limiting case of the half-space condition of Definition 1.3, meaning that the \(\mu _i\) lie on the boundary of the half-space. For instance assume \(\mu _1, \mu _2 \) are real and positive and \(\mu _3\) is \(e^{i \pi }\) times a positive number. In this case the second claim of the above proposition remains true if \(q_0 := \frac{1}{\gamma }(2 Q - \sum _{i=1}^3 \beta _i) \ge 0\), the proof being exactly the same. On the other hand, if \(q_0 <0\) then cancellations appear and the claim is no longer true. It appears that the condition for finiteness then becomes \(q_0 >-1\), but this is a little technical to show and we will not need it in the present paper.

Finally we recall some theorems in probability that we will use without further justification. In the following D is a compact subset of \({\mathbb {R}}^d\).

Theorem 5.3

(Girsanov theorem). Let \((Z(x))_{x\in D}\) be a continuous centered Gaussian process and Z a Gaussian variable which belongs to the \(L^2\) closure of the vector space spanned by \((Z(x))_{x\in D}\). Let F be a real continuous bounded function from \({\mathcal {C}}(D,{\mathbb {R}})\) to \({\mathbb {R}}\). Then we have the following identity:

$$\begin{aligned} \mathbb {E}\left[ e^{Z-\frac{\mathbb {E}[Z^2]}{2}}F((Z(x))_{x\in D})\right] = \mathbb {E}[F((Z(x)+\mathbb {E}[Z(x)Z])_{x\in D})]. \end{aligned}$$
(5.10)
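As an illustration of how (5.10) is used in this paper (a heuristic sketch only, with the regularization discussed below left implicit and the bounded terms dropped), take \(Z = \frac{\beta }{2} X(s)\) for a boundary point \(s \in {\mathbb {R}}\), a weight \(\beta \) and a bounded interval \(I \subseteq {\mathbb {R}}\); since \(\mathbb {E}[X(x)X(s)] = -2 \ln |x-s| + O(1)\) for boundary points in a compact set, the identity produces exactly the insertions \(|x-s|^{-\frac{\gamma \beta }{2}}\) appearing in the GMC moments of Proposition 5.1:

$$\begin{aligned} \mathbb {E}\left[ e^{\frac{\beta }{2} X(s)-\frac{\beta ^2}{8}\mathbb {E}[X(s)^2]} \int _{I} e^{\frac{\gamma }{2} X(x)} d x \right] = \mathbb {E}\left[ \int _{I} e^{\frac{\gamma }{2} \left( X(x)+\frac{\beta }{2}\mathbb {E}[X(x)X(s)] \right) } d x \right] \approx \mathbb {E}\left[ \int _{I} \frac{e^{\frac{\gamma }{2} X(x) }}{|x-s|^{\frac{\gamma \beta }{2}}} d x \right] . \end{aligned}$$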

Although the log-correlated field X is not a continuous Gaussian process, when applying this theorem in our case we can still make the arguments rigorous by using a regularization procedure (see [33, Appendix A.1] for a more detailed explanation). Next we recall Kahane's inequality:

Theorem 5.4

(Kahane’s inequality). Let \((Z_0(x))_{x\in D}\), \((Z_1(x))_{x\in D}\) be two continuous centered Gaussian processes such that for all \(x,y\in D\):

$$\begin{aligned} \left| \mathbb {E}[Z_0(x)Z_0(y)] -\mathbb {E}[Z_1(x)Z_1(y)]\right| \le C. \end{aligned}$$
(5.11)

Let \(\sigma \) be a complex Radon measure over D and define for \(u\in [0,1]\):

$$\begin{aligned} Z_u = \sqrt{1-u} Z_0 + \sqrt{u} Z_1, \quad W_u = \int _D e^{Z_u(x)-\frac{1}{2}\mathbb {E}[Z_u(x)^2]} \sigma (dx). \end{aligned}$$
(5.12)

Then for every smooth function F with at most polynomial growth at infinity,

$$\begin{aligned}&\left| \mathbb {E}\left[ F\left( \int _D e^{Z_0(x)-\frac{1}{2}\mathbb {E}[Z_0(x)^2]} \sigma (dx)\right) \right] -\mathbb {E}\left[ F\left( \int _D e^{Z_1(x)-\frac{1}{2}\mathbb {E}[Z_1(x)^2]} \sigma (dx)\right) \right] \right| \nonumber \\&\quad \le \sup _{u \in [0,1]}\frac{C}{2}\mathbb {E}[|W_u|^2 |F''(W_u)|]. \end{aligned}$$
(5.13)
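In the applications below, this inequality is applied (after regularization) with \(F(w) = w^{q}\) for some \(q<1\), understood via the principal branch; in that case \(|F''(W_u)| = |q(q-1)|\, |W_u|^{q-2}\) and the error term takes the form

$$\begin{aligned} \sup _{u \in [0,1]}\frac{C}{2}\mathbb {E}[|W_u|^2 |F''(W_u)|] = \frac{C}{2}|q(q-1)| \sup _{u \in [0,1]}\mathbb {E}[|W_u|^{q}], \end{aligned}$$

which is the form used for instance in (5.39).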

The same remark as the one following Theorem 5.3 applies to justify that one can use this inequality in the case where \(Z_0\) and \(Z_1\) are log-correlated fields. Finally, we state the Williams decomposition theorem; see for instance [36] for a reference.

Theorem 5.5

Let \((B_s-vs)_{s\ge 0}\) be a Brownian motion with negative drift, i.e. \(v>0\), and let \(M = \sup _{s\ge 0}(B_s-vs)\). Then, conditionally on M, the law of the path \((B_s-vs)_{s\ge 0}\) is given by the joining of two independent paths:

  1. (1)

    A Brownian motion \((B^1_s+vs)_{0\le s \le \tau _M}\) with positive drift v run until its hitting time \(\tau _M\) of M.

  2. (2)

    \((M+B^2_t-vt)_{t\ge 0}\) where \((B^2_t-vt)_{t\ge 0}\) is a Brownian motion with negative drift conditioned to stay negative.

    Moreover, one has the following time reversal property for all \(C>0\) (where \(\tau _C\) denotes the hitting time of C),

    $$\begin{aligned} (B^1_{\tau _C-s}+v(\tau _C-s)-C)_{0\le s\le \tau _C} \overset{\text {law}}{=} ({\widetilde{B}}_s-vs)_{0\le s \le L_{-C}}, \end{aligned}$$
    (5.14)

    where \(({\widetilde{B}}_s-vs)_{s\ge 0}\) is a Brownian motion with drift \(-v\) conditioned to stay negative and \(L_{-C}\) is the last time \(({\widetilde{B}}_s-vs)_{s\ge 0}\) hits \(-C\).
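In the applications below the drift is \(v = \frac{Q-\beta _1}{2}\). Let us also record the standard fact, used in (5.46) below, that the supremum of a Brownian motion with negative drift is exponentially distributed: \({\mathbb {P}}(M > a) = e^{-2va}\) for \(a \ge 0\), so that for \(v = \frac{Q-\beta _1}{2}\) and \(u \ge 1\),

$$\begin{aligned} {\mathbb {P}}\left( e^{\frac{\gamma }{2} M} > u \right) = {\mathbb {P}}\left( M > \frac{2}{\gamma } \ln u \right) = e^{-(Q-\beta _1) \frac{2}{\gamma } \ln u} = \frac{1}{u^{\frac{2}{\gamma }(Q-\beta _1)}}. \end{aligned}$$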

1.2 Technical estimates on GMC

We repeat here several proofs found in [22, 33] that must be adapted because our objects are complex valued.

1.2.1 OPE with reflection

We want to compute the asymptotic expansion of the functions \({\widetilde{G}}_{\chi }\) and \(H_{\chi }\) in the case where there is reflection. This was performed in the previous works [22, 33], but it is not straightforward to adapt the proofs: since we are now working with complex valued quantities, many of the inequalities need to be adapted. We treat separately the cases \(\chi = \frac{\gamma }{2}\) and \( \chi = \frac{2}{\gamma }\), starting with the case \(\chi = \frac{2}{\gamma }\):

Lemma 5.6

(OPE with reflection for \(\chi = \frac{2}{\gamma }\)). Recall \(p = \frac{2}{\gamma }(Q-\alpha -\frac{\beta }{2}+\frac{1}{\gamma })\) and consider \(s \in (-1,0)\). Recall also the functions \({\widetilde{G}}_{\frac{2}{\gamma }}\) and \(H_{\frac{2}{\gamma }}\) given by (2.9) and (3.2) for \(\chi = \frac{2}{\gamma }\). There exists a small parameter \(\beta _0>0\) such that for \(\beta \in (Q -\beta _0, Q)\) and \(\alpha \) such that \( p < \frac{4}{\gamma ^2} \wedge \frac{2}{\gamma }(Q - \beta )\), the following asymptotic expansion holds:

$$\begin{aligned}&{\widetilde{G}}_{\frac{2}{\gamma }}(s) - {\widetilde{G}}_{\frac{2}{\gamma }}(0) = -s^{\frac{1}{2} + \frac{2}{\gamma ^2} - \frac{ \beta }{\gamma }}\frac{\Gamma (1-\frac{2}{\gamma }(Q-\beta )) \Gamma (\frac{2}{\gamma }(Q-\beta ) -p)}{\Gamma (-p)}\nonumber \\&\quad {\overline{R}}(\beta , 1, -1) {\overline{G}}(\alpha , 2Q-\beta -\frac{2}{\gamma }) + o(|s|^{\frac{1}{2} + \frac{2}{\gamma ^2} - \frac{ \beta }{\gamma }}). \end{aligned}$$
(5.15)

Similarly, recall \(q = \frac{1}{\gamma }(2 Q - \beta _1 - \beta _2 - \beta _3 + \frac{2}{\gamma })\) and consider \(t \in (0,1)\). Then in the following parameter range,

$$\begin{aligned} \beta _1 \in (Q - \beta _0,Q), \quad q < \frac{4}{\gamma ^2} \wedge \min _i \frac{2}{\gamma }(Q - \beta _i),\quad \mu _1 \in (0,+\infty ),\quad \mu _2, \mu _3 \in (-\infty ,0), \end{aligned}$$
(5.16)

the following asymptotic also holds:

$$\begin{aligned} H_{\frac{2}{\gamma }}(it) - H_{\frac{2}{\gamma }}(0)&= - (it)^{1 - \frac{ 2 \beta _1}{\gamma } + \frac{4}{\gamma ^2}} \frac{\Gamma (1 - \frac{2}{\gamma }(Q - \beta _1)) \Gamma (\frac{2}{\gamma }(Q -\beta _1) -q ) }{\Gamma (-q)} \nonumber \\&\qquad {\overline{R}}(\beta _1, \mu _1, \mu _2) {\overline{H}}^{(2 Q - \beta _1 - \frac{2}{\gamma } , \beta _2 , \beta _3)}_{( \mu _1, -\mu _2, -\mu _3)} \nonumber \\&\quad + o(|t|^{1 - \frac{ 2 \beta _1}{\gamma } + \frac{4}{\gamma ^2}}). \end{aligned}$$
(5.17)

Proof

We will prove only the case of \(H_{\frac{2}{\gamma }}\); the case of \({\widetilde{G}}_{\frac{2}{\gamma }}\) can be treated in a similar fashion. For a Borel set \(I \subseteq {\mathbb {R}}\), we introduce the notation,

$$\begin{aligned} K_{I}(it) : = \int _I \frac{it-x}{|x|^{\frac{\gamma \beta _1 }{2}} |x-1|^{\frac{\gamma \beta _2 }{2}}} g(x)^{\frac{\gamma ^2}{8}(q-1)} e^{\frac{\gamma }{2} X(x)} d \mu (x), \end{aligned}$$
(5.18)

where as always \(d\mu (x) = \mu _1{\mathbf {1}}_{(-\infty ,0)}(x)dx + \mu _2 {\mathbf {1}}_{(0,1)}(x)dx + \mu _3 {\mathbf {1}}_{(1,\infty )}(x)dx\). In the following it is convenient to use \(d|\mu |(x)\) to denote the measure \(\mu _1{\mathbf {1}}_{(-\infty ,0)}(x)dx - \mu _2 {\mathbf {1}}_{(0,1)}(x)dx- \mu _3 {\mathbf {1}}_{(1,\infty )}(x)dx\), which is a positive measure thanks to our choice \(\mu _1 \in (0,+\infty )\), \(\mu _2, \mu _3 \in (-\infty ,0)\). The signs of the parameters \(\mu _i\) allow us to separate \(K_{I}(it)\) into a positive real part, equal to \(K_{I}(0)\), and an imaginary part. This observation is used to bound \(|K_{I}(it)|^{q-1}\) by \(K_{I}(0)^{q-1}\), and in several other similar situations (note that necessarily \(q-1<0\)).
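Indeed, since the \(\mu _i\) are real in the present range, one can write

$$\begin{aligned} K_{I}(it) = K_{I}(0) + i t \int _I \frac{g(x)^{\frac{\gamma ^2}{8}(q-1)}}{|x|^{\frac{\gamma \beta _1 }{2}} |x-1|^{\frac{\gamma \beta _2 }{2}}} e^{\frac{\gamma }{2} X(x)} d \mu (x), \end{aligned}$$

where both integrals are real and \(K_{I}(0) \ge 0\) thanks to the signs of the \(\mu _i\); hence \(|K_{I}(it)| \ge K_{I}(0)\) and therefore \(|K_{I}(it)|^{q-1} \le K_{I}(0)^{q-1}\). With this observation in hand, we now turn to the asymptotics of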

$$\begin{aligned} \mathbb {E}[K_{{\mathbb {R}}}(it)^{q}]-\mathbb {E}[K_{{\mathbb {R}}}(0)^{q}] =: T_1+T_2, \end{aligned}$$
(5.19)

where we defined:

$$\begin{aligned} T_1: = \mathbb {E}[K_{(-t, t)^c}(it)^{q}]-\mathbb {E}[K_{{\mathbb {R}}}(0)^{q}], \quad T_2:= \mathbb {E}[K_{{\mathbb {R}}}(it)^{q}]- \mathbb {E}[K_{(-t, t)^c}(it)^{q}]. \nonumber \\ \end{aligned}$$
(5.20)

\(\Diamond \) First we consider \(T_1\). The goal is to show that \(T_1 =o(|t|^{1 - \frac{ 2 \beta _1}{\gamma } + \frac{4}{\gamma ^2}}) = o(|t|^{\frac{2}{\gamma }(Q-\beta _1)})\). By interpolation,

$$\begin{aligned} |T_1| \le&|q| \int _0^1 du \mathbb {E}\left[ |K_{(-t, t)^c}(it)-K_{{\mathbb {R}}}(0)||uK_{(-t, t)^c}(it)+(1-u)K_{{\mathbb {R}}}(0)|^{q-1}\right] \\ \nonumber \le&|q| \mathbb {E}\left[ \left( |K_{(-t, t)^c}(it)-K_{(-t,t)^c}(0)| + |K_{(-t,t)^c}(0) - K_{{\mathbb {R}}}(0)| \right) |K_{(-t,t)^c}(0)|^{q-1}\right] \\ \nonumber =&|q| ( A_1 + A_2), \end{aligned}$$
(5.21)

with:

$$\begin{aligned}&A_1 := \mathbb {E}\left[ |K_{(-t, t)^c}(it)-K_{(-t,t)^c}(0)| |K_{(-t,t)^c}(0)|^{q-1}\right] , \quad \nonumber \\&A_2:= \mathbb {E}\left[ |K_{(-t,t)^c}(0) - K_{{\mathbb {R}}}(0)||K_{(-t,t)^c}(0)|^{q-1}\right] . \end{aligned}$$
(5.22)

We have

$$\begin{aligned}&A_1 \le t\int _{(-t,t)^c}d|\mu |(x_1)\, \frac{1}{|x_1|^{\frac{\gamma \beta _1 }{2}} |x_1-1|^{\frac{\gamma \beta _2 }{2}}} \nonumber \\&\qquad \mathbb {E}\left[ \left( \int _{(-t,t)^c}\frac{g(x)^{\frac{\gamma ^2}{8}(q-2)}e^{\frac{\gamma }{2} X(x)} d |\mu |(x)}{|x|^{\frac{\gamma \beta _1 }{2}-1} |x-1|^{\frac{\gamma \beta _2 }{2}}|x-x_1|^{\frac{\gamma ^2}{2}}} \right) ^{q-1}\right] \nonumber \\&\quad \le t\int _{{\mathbb {R}}}d|\mu |(x_1)\, \frac{2|x_1|{\mathbf {1}}_{(-\frac{1}{2},\frac{1}{2})^c} + {\mathbf {1}}_{(-\frac{1}{2},-t)\cup (t,\frac{1}{2})}}{|x_1|^{\frac{\gamma \beta _1 }{2}} |x_1-1|^{\frac{\gamma \beta _2 }{2}}}\nonumber \\&\qquad \mathbb {E}\left[ \left( \int _{(-t,t)^c}\frac{g(x)^{\frac{\gamma ^2}{8}(q-2)}e^{\frac{\gamma }{2} X(x)} d |\mu |(x)}{|x|^{\frac{\gamma \beta _1 }{2}-1} |x-1|^{\frac{\gamma \beta _2 }{2}}|x-x_1|^{\frac{\gamma ^2}{2}}} \right) ^{q-1}\right] \nonumber \\&\quad \le 2t \mathbb {E}\left[ \left( \int _{(-t,t)^c}\frac{g(x)^{\frac{\gamma ^2}{8}(q-1)}e^{\frac{\gamma }{2} X(x)} d |\mu |(x)}{|x|^{\frac{\gamma \beta _1 }{2}-1} |x-1|^{\frac{\gamma \beta _2 }{2}}} \right) ^{q}\right] + O(t^{2-\frac{\gamma \beta _1}{2}}) = O(t^{2-\frac{\gamma \beta _1}{2}}). \end{aligned}$$
(5.23)

In the last equality we have ignored the first term since it is \(O(t)\) and we will take \(\beta _1>\frac{2}{\gamma }\). On the other hand,

$$\begin{aligned} A_2 \le c_1 \int _{(-t,t)}d|\mu |(x_1)\, \frac{1}{|x_1|^{\frac{\gamma \beta _1 }{2}-1} |x_1-1|^{\frac{\gamma \beta _2 }{2}}} = O(t^{2-\frac{\gamma \beta _1}{2}}), \end{aligned}$$
(5.24)

for some constant \(c_1 > 0\). When \(\beta _1 > \frac{\frac{4}{\gamma ^2}-1}{\frac{2}{\gamma } - \frac{\gamma }{2}} = \frac{2}{\gamma }\) is satisfied, which holds as soon as \(\beta _0 < \frac{1-\frac{\gamma ^2}{4}}{\frac{2}{\gamma }-\frac{\gamma }{2}} = \frac{\gamma }{2}\), we have \(O(t^{2-\frac{\gamma \beta _1}{2}}) = o(t^{\frac{2}{\gamma }(Q-\beta _1)})\). This proves that

$$\begin{aligned} T_1 = o(t^{\frac{2}{\gamma }(Q-\beta _1)}). \end{aligned}$$
(5.25)

\(\Diamond \) Now we focus on \(T_2\). The goal is to restrict K to \((-\infty ,-t)\cup (-t^{1+h}, t^{1+h})\cup (t,\infty )\), with \(h>0\) a small constant to be fixed later; the GMC measures on the three disjoint parts are then only weakly correlated. By interpolation and by dropping the imaginary part, we have

$$\begin{aligned}&\left| \mathbb {E}[K_{{\mathbb {R}}}(it)^{q}]- \mathbb {E}[K_{(-\infty ,-t)\cup (-t^{1+h}, t^{1+h})\cup (t,\infty )}(it)^{q}] \right| \nonumber \\&\le |q| \int _0^1 du {\mathbb {E}} \left[ |K_{ (-t, -t^{1 + h} ) \cup (t^{1 + h},t )}(it)| | u K_{{\mathbb {R}}}(0) + (1 - u) K_{(-\infty ,-t)\cup (-t^{1+h}, t^{1+h})\cup (t,\infty )}(0) |^{q-1} \right] \nonumber \\&\le c_2|q| \int _{(-t, -t^{1 + h} ) \cup (t^{1 + h},t )}d|\mu |(x_1)\, \frac{t+|x_1|}{|x_1|^{\frac{\gamma \beta _1 }{2}} |x_1-1|^{\frac{\gamma \beta _2 }{2}}} \nonumber \\&= O(t^{1+(1+h)(1-\frac{\gamma \beta _1}{2})}), \end{aligned}$$
(5.26)

for some constant \(c_2>0\). By taking h satisfying the condition,

$$\begin{aligned} h < \frac{1+(\frac{2}{\gamma }-\frac{\gamma }{2})\beta _1 -\frac{4}{\gamma ^2}}{\frac{\gamma \beta _1}{2}-1}, \end{aligned}$$
(5.27)

we have:

$$\begin{aligned} \mathbb {E}[K_{{\mathbb {R}}}(it)^{q}]- \mathbb {E}[K_{(-\infty ,-t)\cup (-t^{1+h}, t^{1+h})\cup (t,\infty )}(it)^{q}] = o(t^{\frac{2}{\gamma }(Q-\beta _1)}). \end{aligned}$$
(5.28)

It remains to evaluate \(\mathbb {E}[K_{(-\infty ,-t)\cup (-t^{1+h}, t^{1+h})\cup (t,\infty )}(it)^{q}]-\mathbb {E}[K_{(-t,t)^c}(it)^{q}]\). We now introduce the radial decomposition of the field X,

$$\begin{aligned} X(x) = B_{-2\ln |x|} + Y(x), \end{aligned}$$
(5.29)

where B, Y are independent Gaussian processes with \((B_{s})_{s \in {\mathbb {R}}}\) a Brownian motion starting from 0 for \(s \ge 0\), \(B_s= 0\) when \(s<0\), and Y is a centered Gaussian process with covariance,

$$\begin{aligned} \mathbb {E}[Y(x)Y(y)] = {\left\{ \begin{array}{ll} 2\ln \frac{|x|\vee |y|}{|x-y|}, \quad &{} |x|,|y| \le 1,\\ 2\ln \frac{1}{|x-y|} -\frac{1}{2}\ln g(x) -\frac{1}{2}\ln g(y), \quad &{}\text {else.} \end{array}\right. } \end{aligned}$$
(5.30)

One may wonder why the process Y with the above covariance is well-defined. To construct Y starting from X, set:

$$\begin{aligned} Y(x)= {\left\{ \begin{array}{ll} X(x) - \frac{1}{\pi } \int _0^{\pi } X(|x| e^{i \theta }) d \theta , \quad &{} |x| \le 1,\\ X(x), \quad &{} |x| \ge 1. \end{array}\right. } \end{aligned}$$
(5.31)

Now with this decomposition one can write:

$$\begin{aligned} K_{I}(it) = \int _{I} \frac{it-x}{|x|^{\frac{\gamma \beta _1 }{2}-\frac{\gamma ^2}{4}} |x-1|^{\frac{\gamma \beta _2 }{2}}} g(x)^{\frac{\gamma ^2}{8}(q-1)} e^{\frac{\gamma }{2} B_{-2\ln |x|}} e^{\frac{\gamma }{2}Y(x)} d\mu (x). \end{aligned}$$
(5.32)

From (5.30), we deduce that for \(|x'| \le t^{1+h}\) and \(|x| \ge t\),

$$\begin{aligned} \left| \mathbb {E}[Y(x)Y(x')]\right| = \left| 2\ln \left| 1-\frac{x'}{x}\right| \right| \le 4t^h, \end{aligned}$$
(5.33)

where we used the inequality \(|\ln |1-x|| \le 2|x|\) for \(x\in [-\frac{1}{2},\frac{1}{2}]\). Define the processes,

$$\begin{aligned} P(x)&:= Y(x){\mathbf {1}}_{|x| \le t^{1+h}} + Y(x){\mathbf {1}}_{|x| \ge t}, \end{aligned}$$
(5.34)
$$\begin{aligned} {\widetilde{P}}(x)&:= {\widetilde{Y}}(x){\mathbf {1}}_{|x| \le t^{1+h}} + Y(x){\mathbf {1}}_{|x| \ge t}, \end{aligned}$$
(5.35)

where \({\widetilde{Y}}\) is an independent copy of Y. Then we have the inequality over the covariance:

$$\begin{aligned} \left| \mathbb {E}[P(x)P(y)] - \mathbb {E}[{\widetilde{P}}(x) {\widetilde{P}}(y)]\right| \le 4t^h. \end{aligned}$$
(5.36)

Consider now for \(u\in [0,1]\):

$$\begin{aligned} P_u(x)&= \sqrt{1-u} P(x) + \sqrt{u} {\widetilde{P}}(x), \end{aligned}$$
(5.37)
$$\begin{aligned} K_{I}(it,u)&= \int _{I} \frac{it-x}{|x|^{\frac{\gamma \beta _1 }{2}-\frac{\gamma ^2}{4}} |x-1|^{\frac{\gamma \beta _2 }{2}}} g(x)^{\frac{\gamma ^2}{8}(q-1)} e^{\frac{\gamma }{2} B_{-2\ln |x|}} e^{\frac{\gamma }{2}P_u(x)} d\mu (x). \end{aligned}$$
(5.38)

By applying Kahane’s inequality of Theorem 5.4,

$$\begin{aligned}&\left| \mathbb {E}\left[ K_{(-\infty ,-t)\cup (-t^{1+h}, t^{1+h})\cup (t,\infty )}(it)^{q}\right] - \mathbb {E}\left[ \left( K_{(-\infty ,-t)\cup (t,\infty )}(it)+{\widetilde{K}}_{(-t^{1+h}, t^{1+h})}(it)\right) ^{q}\right] \right| \nonumber \\&\le 2|q(q-1)|t^h \sup _{u\in [0,1]}\mathbb {E}\left[ \left| K_{I}(it,u) \right| ^{q}\right] \nonumber \\&\le c_3\, t^h, \end{aligned}$$
(5.39)

for some constant \(c_3 >0\), and where in \({\widetilde{K}}_{(-t^{1+h}, t^{1+h})}(it)\) we simply use the field \({\widetilde{Y}}\) instead of Y. When \(h > \frac{2}{\gamma }(Q-\beta _1)\), we can bound the previous term by \(o(t^{\frac{2}{\gamma }(Q-\beta _1)})\).

Consider now the change of variable \(x = t^{1+h}e^{-s/2}\) for the field \({\widetilde{K}}_{(-t^{1+h}, t^{1+h})}(it)\). By the Markov property of the Brownian motion and stationarity of

$$\begin{aligned} d\mu _{{\widetilde{Y}}}(s): = \mu _1 e^{ \frac{\gamma }{2} {\widetilde{Y}}(-e^{-s/2}) }ds + \mu _2 e^{\frac{\gamma }{2} {\widetilde{Y}}(e^{-s/2}) }ds, \end{aligned}$$
(5.40)

we have

$$\begin{aligned}&{\widetilde{K}}_{(-t^{1+h}, t^{1+h})}(it) = \frac{1}{2}i t^{1+(1+h)(1-\frac{\gamma \beta _1}{2}+\frac{\gamma ^2}{4})}e^{\frac{\gamma }{2}B_{2(1+h)\ln (1/t)}}\nonumber \\&\quad \int _{0}^{\infty } \frac{(1 + i t^h e^{-s/2})}{|t^{1+h}e^{-s/2}-1|^{\frac{\gamma \beta _2}{2}}} e^{ \frac{\gamma }{2}({\widetilde{B}}_s - \frac{s}{2}(Q-\beta _1) ) } d\mu _{{\widetilde{Y}}}(s), \end{aligned}$$
(5.41)

with \({\widetilde{B}}\) an independent Brownian motion. We denote

$$\begin{aligned} \sigma _t:= t^{1+(1+h)(1-\frac{\gamma \beta _1}{2}+\frac{\gamma ^2}{4})}e^{\frac{\gamma }{2}B_{2(1+h)\ln (1/t)}},\quad V: =\frac{1}{2}\int _{0}^{\infty } e^{ \frac{\gamma }{2}({\widetilde{B}}_s - \frac{s}{2}(Q-\beta _1) ) } d\mu _{{\widetilde{Y}}}(s). \nonumber \\ \end{aligned}$$
(5.42)

By interpolation, we can prove that for some constant \(c_4>0\):

$$\begin{aligned}&\left| \mathbb {E}[(K_{(-t,t)^c}(it) + {\widetilde{K}}_{(-t^{1+h}, t^{1+h})}(it))^q] - \mathbb {E}\left[ \left( K_{(-t,t)^c}(it) + i\sigma _t V\right) ^q\right] \right| \nonumber \\&\quad \le c_4|q|t^{1+h+(1+h)(1-\frac{\gamma \beta _1}{2}+\frac{\gamma ^2}{4})}\mathbb {E}\left[ e^{\frac{\gamma }{2}B_{2(1+h)\ln (1/t)}}\int _{0}^{\infty } e^{ \frac{\gamma }{2}({\widetilde{B}}_s - \frac{s}{2}(Q-\beta _1) ) } d\mu _{{\widetilde{Y}}}(s)|K_{(1,2)}(0)|^{q-1}\right] .\nonumber \\ \end{aligned}$$
(5.43)

Since \(B_{2(1+h)\ln (1/t)}\), \(({\widetilde{B}}_s)_{s\ge 0}\), \(({\widetilde{Y}}(x))_{|x|\le 1}\), and \(K_{(1,2)}(0)\) are independent, we can easily bound the last term by, for some \(c_5>0\),

$$\begin{aligned} c_5\,t^{(1+h)(2-\frac{\gamma \beta _1}{2})} = o(t^{\frac{2}{\gamma }(Q-\beta _1)}). \end{aligned}$$
(5.44)

By the Williams path decomposition of Theorem 5.5 we can write,

$$\begin{aligned} V = e^{\frac{\gamma }{2}M}\frac{1}{2}\int _{-L_M}^{\infty } e^{ \frac{\gamma }{2}{\mathcal {B}}^{\frac{Q-\beta _1}{2}}_s} \mu _{{\widetilde{Y}}}(ds), \end{aligned}$$
(5.45)

where \( M = \sup _{s \geqslant 0} ({\widetilde{B}}_s -\frac{Q-\beta _1}{2} s) \) and \(L_M\) is the last time \(\left( {\mathcal {B}}^{\frac{Q-\beta _1}{2}}_{-s} \right) _{s\ge 0} \) hits \(-M\). Recall that the law of M is known, for \(v\ge 1\),

$$\begin{aligned} {\mathbb {P}}( e^{\frac{\gamma }{2} M} >v ) = \frac{1}{v^{ \frac{2}{\gamma }(Q-\beta _1)}}. \end{aligned}$$
(5.46)

For simplicity, we introduce the notation:

$$\begin{aligned} \rho (\beta _1) :=\frac{1}{2}\int _{-\infty }^{\infty } e^{ \frac{\gamma }{2}{\mathcal {B}}^{\frac{Q-\beta _1}{2}}_s} \mu _{{\widetilde{Y}}}(ds). \end{aligned}$$
(5.47)

Again by interpolation and then independence we can show that

$$\begin{aligned}&\left| \mathbb {E}\left[ \left( K_{(-t,t)^c}(it) + i\sigma _t V\right) ^q\right] - \mathbb {E}\left[ \left( K_{(-t,t)^c}(it) + i\sigma _t e^{\frac{\gamma }{2}M} \rho (\beta _1)\right) ^q\right] \right| \nonumber \\&\quad \le \frac{1}{2}|q|t^{1+(1+h)(1-\frac{\gamma \beta _1}{2}+\frac{\gamma ^2}{4})}\mathbb {E}\left[ e^{\frac{\gamma }{2}B_{2(1+h)\ln (1/t)}}\!\int _{-\infty }^{0} e^{ \frac{\gamma }{2}{\mathcal {B}}^{\frac{Q-\beta _1}{2}}_s} \mu _{{\widetilde{Y}}}(ds)\left| K_{(1,2)}(0)\right| ^{q-1}\right] \nonumber \\&\quad = O(t^{1+(1+h)(1-\frac{\gamma \beta _1}{2})}) = o(t^{\frac{2}{\gamma }(Q-\beta _1)}). \end{aligned}$$
(5.48)

In summary,

$$\begin{aligned} T_2 = \mathbb {E}[(K_{(-t,t)^c}(it) + i\sigma _t e^{\frac{\gamma }{2}M} \rho (\beta _1))^q] - \mathbb {E}[K_{(-t,t)^c}(it)^q] + o(t^{\frac{2}{\gamma }(Q-\beta _1)}). \end{aligned}$$
(5.49)

Finally, we evaluate the above difference at first order explicitly, using the fact that the density of \(e^{\frac{\gamma }{2}M}\) is known:

$$\begin{aligned}&\quad \mathbb {E}[(K_{(-t,t)^c}(it) + i\sigma _t e^{\frac{\gamma }{2}M} \rho (\beta _1))^q] - \mathbb {E}[K_{(-t,t)^c}(it)^q]\nonumber \\&\quad = \frac{2}{\gamma }(Q-\beta _1) \mathbb {E}\left[ \int _1^{\infty } \frac{dv}{v^{\frac{2}{\gamma }(Q-\beta _1)+1}}\left( \left( K_{(-t,t)^c}(it) + i\sigma _t \rho (\beta _1) v\right) ^q - K_{(-t,t)^c}(it)^q \right) \right] \nonumber \\&\quad = t^{\frac{2}{\gamma }(Q-\beta _1)} \frac{2}{\gamma }(Q-\beta _1)\nonumber \\&\qquad \mathbb {E}\left[ \int _{\frac{\hat{\sigma }_t \rho (\beta _1)}{{\hat{K}}_{(-t,t)^c}(it)}}^{\infty } \frac{du}{u^{\frac{2}{\gamma }(Q-\beta _1)+1}}((iu+1)^q-1) \rho (\beta _1)^{\frac{2}{\gamma }(Q-\beta _1)}{\hat{K}}_{(-t,t)^c}(it)^{q-\frac{2}{\gamma }(Q-\beta _1)} \right] . \end{aligned}$$
(5.50)

In the last equality we have applied Theorem 5.3. Next,

$$\begin{aligned} {\hat{K}}_{(-t,t)^c}(it) = \int _{(-t,t)^c} \frac{it-x}{|x|^{\frac{\gamma }{2}(2Q-\beta _1-\frac{2}{\gamma })} |x-1|^{\frac{\gamma \beta _2 }{2}}} g(x)^{\frac{\gamma ^2}{8}(q-1)} e^{\frac{\gamma }{2} X(x)} d \mu (x) \underset{\text {a.s.}}{\overset{t\rightarrow 0_+}{\longrightarrow }} {\hat{K}}_{{\mathbb {R}}}(0), \end{aligned}$$
(5.51)

and for \(h < \frac{2}{\gamma (Q-\beta _1)}-1\),

$$\begin{aligned} {\hat{\sigma }}_t = t^{1-(1+h)(1-\frac{\gamma \beta _1}{2}+\frac{\gamma ^2}{4})}e^{\frac{\gamma }{2}B_{2(1+h)\ln (1/t)}} \underset{\text {a.s.}}{\overset{t\rightarrow 0_+}{\longrightarrow }} 0. \end{aligned}$$
(5.52)

With some simple arguments of uniform integrability, we conclude that:

$$\begin{aligned}&\quad \mathbb {E}\left[ \left( K_{(-t,t)^c}(it) + i\sigma _t e^{\frac{\gamma }{2}M} \rho (\beta _1)\right) ^q\right] - \mathbb {E}[K_{(-t,t)^c}(it)^q] \nonumber \\&\quad \overset{t \rightarrow 0_+}{\sim }t^{\frac{2}{\gamma }(Q-\beta _1)} \frac{2}{\gamma }(Q-\beta _1) \left( \int _{0}^{\infty } \frac{du}{u^{\frac{2}{\gamma }(Q-\beta _1)+1}}((iu+1)^q-1) \right) \nonumber \\&\qquad \mathbb {E}\left[ \rho (\beta _1)^{\frac{2}{\gamma }(Q-\beta _1)}\right] \mathbb {E}\left[ {\hat{K}}_{{\mathbb {R}}}(0)^{q-\frac{2}{\gamma }(Q-\beta _1)} \right] \nonumber \\&\quad = (it)^{\frac{2}{\gamma }(Q-\beta _1)}\frac{2}{\gamma }(Q - \beta _1) \frac{\Gamma (\frac{2}{\gamma }(\beta _1 -Q)) \Gamma (\frac{2}{\gamma }(Q -\beta _1) -q ) }{\Gamma (-q)} {\overline{R}}(\beta _1, \mu _1, \mu _2)\nonumber \\&\qquad {\overline{H}}^{(2 Q - \beta _1 - \frac{2}{\gamma }, \beta _2 , \beta _3)}_{( \mu _1, -\mu _2, -\mu _3)}. \end{aligned}$$
(5.53)

The power of i comes from the evaluation of the integral; the identity behind this evaluation is recorded right after the proof. Inspecting the proof, we see that the conditions on \(\beta _0\) and h indeed allow us to find small values of these parameters that make the arguments work. Therefore we have proved the claim. \(\square \)
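For the reader's convenience, we record the identity behind this last evaluation: for \(0< s < 1\) and \(q < s\) (here \(s = \frac{2}{\gamma }(Q-\beta _1)\)), an integration by parts followed by the beta integral gives

$$\begin{aligned} \int _0^{\infty } \frac{du}{u^{s+1}} \left( (1+u)^q - 1 \right) = \frac{q}{s} \frac{\Gamma (1-s)\Gamma (s-q)}{\Gamma (1-q)} = \frac{\Gamma (-s)\Gamma (s-q)}{\Gamma (-q)}, \end{aligned}$$

and rotating the contour of integration onto the imaginary axis (justified by the analyticity and the decay of the integrand) shows that replacing \(u\) by \(iu\) in the integrand produces the extra factor \(i^{s} = e^{\frac{i \pi s}{2}}\); this is precisely the power of i appearing in (5.53).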

We now state the analogous result for \(\chi = \frac{\gamma }{2}\):

Lemma 5.7

(OPE with reflection for \(\chi = \frac{\gamma }{2}\)). Recall \(p = \frac{2}{\gamma }(Q-\alpha -\frac{\beta }{2}+\frac{\gamma }{4})\) and consider \(s \in (0,1)\). Recall also the functions \({\widetilde{G}}_{\frac{\gamma }{2}}\) and \(H_{\frac{\gamma }{2}}\) given by (2.9) and (3.2) for \(\chi = \frac{\gamma }{2}\). There exists a small parameter \(\beta _0>0\) such that for \(\beta \in (Q -\beta _0, Q)\) and \(\alpha \) such that \( p < \frac{4}{\gamma ^2} \wedge \frac{2}{\gamma }(Q - \beta )\), the following asymptotic expansion holds:

$$\begin{aligned} {\widetilde{G}}_{\frac{\gamma }{2}}(s) - {\widetilde{G}}_{\frac{\gamma }{2}}(0)&= -s^{\frac{1}{2} + \frac{\gamma ^2}{8} - \frac{\gamma \beta }{4}}\frac{\Gamma (1-\frac{2(Q-\beta )}{\gamma }) \Gamma (-p+\frac{2}{\gamma }(Q-\beta ))}{\Gamma (-p)}\nonumber \\&\qquad {\overline{R}}(\beta , 1, e^{i\pi \frac{\gamma ^2}{4}}) {\overline{G}}(\alpha , 2Q-\beta -\frac{\gamma }{2}) \nonumber \\&\quad + o(|s|^{{\frac{1}{2} + \frac{\gamma ^2}{8} - \frac{\gamma \beta }{4}}}). \end{aligned}$$
(5.54)

Similarly, recall \(q = \frac{1}{\gamma }(2 Q - \beta _1 - \beta _2 - \beta _3 + \frac{\gamma }{2})\) and consider \(t \in (0,1)\). Then for \(\mu _1, \mu _2, \mu _3 \in (0,+\infty )\), \(\beta _1 \in (Q - \beta _0,Q)\) and \(\beta _2, \beta _3\) chosen so that \( q < \frac{4}{\gamma ^2} \wedge \min _i \frac{2}{\gamma }(Q - \beta _i) \), the following asymptotic also holds:

$$\begin{aligned} H_{\frac{\gamma }{2}}(t) - H_{\frac{\gamma }{2}}(0)&= t^{1 -\frac{\gamma \beta _1 }{2} + \frac{\gamma ^2}{4}} \frac{2(Q - \beta _1)}{\gamma } \frac{\Gamma (\frac{2}{\gamma }(\beta _1 -Q)) \Gamma (\frac{2}{\gamma }(Q -\beta _1) -q ) }{\Gamma (-q)} \nonumber \\&\qquad {\overline{R}}(\beta _1, \mu _1, \mu _2) {\overline{H}}^{(2 Q - \beta _1 - \frac{\gamma }{2} , \beta _2 , \beta _3)}_{( \mu _1, e^{i\pi \frac{\gamma ^2}{4}} \mu _2, e^{i\pi \frac{\gamma ^2}{4}} \mu _3)} \nonumber \\&\quad + o(|t|^{1 -\frac{\gamma \beta _1 }{2} + \frac{\gamma ^2}{4}}). \end{aligned}$$
(5.55)

Proof

We keep the notation of the proof of Lemma 5.6, although there are some slight differences. This time K is defined with the \(\chi = \frac{\gamma }{2}\) insertion:

$$\begin{aligned} K_{I}(t) : = \int _I \frac{(t-x)^{\frac{\gamma ^2}{4}}}{|x|^{\frac{\gamma \beta _1 }{2}} |x-1|^{\frac{\gamma \beta _2 }{2}}} g(x)^{\frac{\gamma ^2}{8}(q-1)} e^{\frac{\gamma }{2} X(x)} d \mu (x). \end{aligned}$$
(5.56)

To deal with the complex phases we will simply use the following inequality. For a fixed \(p<1\) and \(\varphi \in [0, \pi )\), there exists a constant \(c>0\) such that for all \(x_1, x_2, y_1, y_2 \in (0, +\infty )\):

$$\begin{aligned} |(x_1+e^{i\varphi }y_1)^p - (x_2 + e^{i\varphi }y_2)^p| \le c( |x_1^p - x_2^p|+ |y_1^p - y_2^p|). \end{aligned}$$
(5.57)
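This inequality can be proved by studying the partial derivatives of the function \(\psi :(u,w) \mapsto (u^{1/p}+e^{i\varphi }w^{1/p})^p\); here is a brief sketch of the argument. Writing \(x = u^{1/p}\), \(y = w^{1/p}\), one computes

$$\begin{aligned} \left| \frac{\partial \psi }{\partial u}(u,w) \right| = \left| \left( 1+e^{i\varphi }\frac{y}{x}\right) ^{p-1} \right| \le \left( \inf _{z \ge 0} |1+e^{i\varphi }z| \right) ^{p-1} < \infty , \end{aligned}$$

since \(p-1<0\) and \(\inf _{z \ge 0} |1+e^{i\varphi }z|\) equals 1 if \(\varphi \in [0,\frac{\pi }{2}]\) and \(\sin \varphi >0\) if \(\varphi \in (\frac{\pi }{2},\pi )\). The same bound holds for \(\partial _w \psi \), and (5.57) then follows from the mean value inequality applied to \(\psi \) on the segment joining \((x_1^p, y_1^p)\) and \((x_2^p, y_2^p)\).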

With the help of this inequality we will be able to carry out the same proof as in the previous lemma. Following the same steps as in [33], we have:

$$\begin{aligned} \mathbb {E}[ |K_{(-\infty ,-t)}(t)^q - K_{(-\infty ,0)}(0)^q|]&= o(t^{\frac{\gamma }{2}(Q-\beta _1)}), \end{aligned}$$
(5.58)
$$\begin{aligned} \mathbb {E}[ ||K_{(t, \infty )}(t)|^q - |K_{(0, \infty )}(0)|^q|]&= o(t^{\frac{\gamma }{2}(Q-\beta _1)}). \end{aligned}$$
(5.59)

Applying (5.57) implies that:

$$\begin{aligned} \mathbb {E}[ |K_{(-t,t)^c}(t)^q - K_{{\mathbb {R}}}(0)^q|]&= \mathbb {E}[ |(K_{(-\infty ,-t)}(t)+e^{i\pi \frac{\gamma ^2}{4}}|K_{(t,\infty )}(t)|)^q - (K_{(-\infty , 0)}(0)\nonumber \\&\quad +e^{i\pi \frac{\gamma ^2}{4}}|K_{(0,\infty )}(0)|)^q|] \nonumber \\ \nonumber&\le c \mathbb {E}[ |K_{(-\infty ,-t)}(t)^q - K_{(-\infty ,0)}(0)^q|] + c \mathbb {E}[||K_{(t,\infty )}(t)|^q \nonumber \\&\qquad - |K_{(t,\infty )}(0)|^q|]\nonumber \\&\le o(t^{\frac{\gamma }{2}(Q-\beta _1)}). \end{aligned}$$
(5.60)

Next we repeat the step where we introduce a small \(h>0\) and want to compare \(K_{{\mathbb {R}}}(t)\) and \(K_{(-\infty ,-t)\cup (-t^{1+h},t^{1+h})\cup (t,\infty )}(t)\). Following again the steps of [33], under the constraint on h,

$$\begin{aligned} h<\frac{\frac{\gamma \beta _1}{2}-1}{1-\frac{\gamma \beta _1}{2}+\gamma ^2}, \end{aligned}$$
(5.61)

one can show that:

$$\begin{aligned} \mathbb {E}[ |K_{(-\infty ,t)}(t)^q - K_{(-\infty ,-t)\cup (-t^{1+h},t^{1+h})}(t)^q|]&= o(t^{\frac{\gamma }{2}(Q-\beta _1)}). \end{aligned}$$
(5.62)

By applying again (5.57) one obtains,

$$\begin{aligned}&\mathbb {E}[ |K_{{\mathbb {R}}}(t)^q - K_{(-\infty ,-t)\cup (-t^{1+h},t^{1+h})\cup (t,\infty )}(t)^q|] \le c\mathbb {E}[ |K_{(-\infty ,t)}(t)^q \nonumber \\&\quad - K_{(-\infty ,-t)\cup (-t^{1+h},t^{1+h})}(t)^q|] = o(t^{\frac{\gamma }{2}(Q-\beta _1)}). \end{aligned}$$
(5.63)

Therefore as in the previous lemma we have now reduced the problem to studying the difference:

$$\begin{aligned} \mathbb {E}[K_{(-\infty ,-t)\cup (-t^{1+h},t^{1+h})\cup (t,\infty )}(t)^q] - \mathbb {E}[K_{(-t,t)^c}(t)^q]. \end{aligned}$$
(5.64)

We proceed exactly in the same way as the case \(\chi =\frac{2}{\gamma }\), using Kahane’s inequality of Theorem 5.4 to obtain:

$$\begin{aligned}&\mathbb {E}[K_{(-\infty ,-t)\cup (-t^{1+h}, t^{1+h})\cup (t,\infty )}(t)^{q}] - \mathbb {E}[(K_{(-t,t)^c}(t) + \sigma _t V)^q] = O(t^{h})\nonumber \\&\quad +O(t^{(1+h)(1-\frac{\gamma \beta _1}{2}+\frac{\gamma ^2}{4})}). \end{aligned}$$
(5.65)

When \(h > \frac{\gamma }{2}(Q-\beta _1)\) this term is also \(o(t^{\frac{\gamma }{2}(Q-\beta _1)})\). Here the expression of \(\sigma _t\) is slightly different:

$$\begin{aligned} \sigma _t = t^{\frac{\gamma ^2}{4}+(1+h)(1-\frac{\gamma \beta _1}{2}+\frac{\gamma ^2}{4})}e^{\frac{\gamma }{2}B_{2(1+h)\ln (1/t)}}. \end{aligned}$$
(5.66)

As in our previous work [33], we can show that

$$\begin{aligned}&\mathbb {E}[(K_{(-\infty ,-t)}(t) + \sigma _t V)^q] - \mathbb {E}[K_{(-\infty ,-t)}(t)^q] \\ \nonumber&= \mathbb {E}[(K_{(-\infty ,-t)}(t) + \sigma _t e^{\frac{\gamma }{2}M} \rho (\beta _1))^q] - \mathbb {E}[K_{(-\infty ,-t)}(t)^q] + o(t^{\frac{\gamma }{2}(Q-\beta _1)}). \end{aligned}$$
(5.67)

This result is proved using two-sided inequalities: the lower and upper bounds are both equivalent to a term of order \(t^{\frac{\gamma }{2}(Q-\beta _1)}\). As a consequence,

$$\begin{aligned} \mathbb {E}[(K_{(-\infty ,-t)}(t) + \sigma _t V)^q] - \mathbb {E}[(K_{(-\infty ,-t)}(t) + \sigma _t e^{\frac{\gamma }{2}M} \rho (\beta _1))^q] = o(t^{\frac{\gamma }{2}(Q-\beta _1)}). \end{aligned}$$
(5.68)

Furthermore, \(\sigma _t V\) can be written and bounded as

$$\begin{aligned} \sigma _t V = \sigma _t e^{\frac{\gamma }{2}M}\frac{1}{2}\int _{-L_M}^{\infty } e^{ \frac{\gamma }{2}{\mathcal {B}}^{\frac{Q-\beta _1}{2}}_s} \mu _{{\widetilde{Y}}}(ds) \le \sigma _t e^{\frac{\gamma }{2}M} \rho (\beta _1). \end{aligned}$$
(5.69)

This allows us to put an absolute value inside the expectation:

$$\begin{aligned} \mathbb {E}[|(K_{(-\infty ,-t)}(t) + \sigma _t V)^q - (K_{(-\infty ,-t)}(t) + \sigma _t e^{\frac{\gamma }{2}M} \rho (\beta _1))^q|] = o(t^{\frac{\gamma }{2}(Q-\beta _1)}). \end{aligned}$$
(5.70)

We can conclude by using (5.57) that:

$$\begin{aligned} \mathbb {E}[(K_{(-t,t)^c}(t) + \sigma _t V)^q] - \mathbb {E}[(K_{(-t,t)^c}(t) + \sigma _t e^{\frac{\gamma }{2}M} \rho (\beta _1))^q] = o(t^{\frac{\gamma }{2}(Q-\beta _1)}). \end{aligned}$$
(5.71)

We estimate as in the case \(\chi = \frac{2}{\gamma }\):

$$\begin{aligned}&\mathbb {E}[(K_{(-t,t)^c}(t) + \sigma _t e^{\frac{\gamma }{2}M} \rho (\beta _1))^q] - \mathbb {E}[K_{(-t,t)^c}(t)^q] \nonumber \\&= t^{\frac{\gamma }{2}(Q-\beta _1)}\frac{2(Q - \beta _1)}{\gamma } \frac{\Gamma (\frac{2}{\gamma }(\beta _1 -Q)) \Gamma (\frac{2}{\gamma }(Q -\beta _1) -q ) }{\Gamma (-q)} {\overline{R}}(\beta _1, \mu _1, \mu _2 )\nonumber \\&\qquad {\overline{H}}^{(2 Q - \beta _1 - \frac{\gamma }{2}, \beta _2 , \beta _3)}_{( \mu _1, \mu _2e^{i\pi \frac{\gamma ^2}{4}}, \mu _3e^{i\pi \frac{\gamma ^2}{4}})} + o(t^{\frac{\gamma }{2}(Q-\beta _1)}). \end{aligned}$$
(5.72)

Finally, it is again possible to choose suitably small \(h>0 \) and \(\beta _0>0\) which make the argument work. This concludes the proof of the lemma. \(\square \)

1.2.2 Analytic continuation

In this section we prove the lemma on analyticity of the moments of GMC that we have used repeatedly throughout the paper. This fact was first shown in [22, Theorem 6.1] in the case of the correlation functions on the sphere. The main idea is that, starting from the range of real parameters \(\beta _i\) or \(\alpha \) for which a given GMC expression is defined, one can find a small neighborhood in \({\mathbb {C}}\) of the parameter range on which the quantity is still well-defined and complex analytic. We also use in Sect. 3 the fact that the three-point function is complex analytic in the \(\mu _i\). This fact is obtained directly by differentiating with respect to \(\mu _i\).

Lemma 5.8

(Analyticity in insertions weights and in \(\mu _i\) of moments of GMC). Consider the following functions defined in the given parameter range:

  • \((\alpha , \beta ) \mapsto {\overline{G}}(\alpha , \beta )\) for \(\beta < Q \), \( \frac{\gamma }{2} - \alpha< \frac{\beta }{2} < \alpha \).

  • \((\alpha , \beta ) \mapsto G_{\chi }(t) \) for \(t \in \mathbb {H}\), \( \beta < Q\), \( \frac{2}{\gamma } \left( Q -\alpha - \frac{\beta }{2} + \frac{\chi }{2} \right) < \frac{4}{\gamma ^2} \wedge \frac{2}{\gamma }(Q - \beta )\).

  • \((\beta _1, \beta _2, \beta _3) \mapsto {\overline{H}}^{(\beta _1 , \beta _2 , \beta _3)}_{( \mu _1, \mu _2, \mu _3 )}\) for:

    $$\begin{aligned}&(\mu _i)_{i=1,2,3} \, \, \, \text { satisfies Definition } 1.3, \quad \beta _i< Q, \quad {\frac{1}{\gamma }(2 Q - \sum _{i=1}^3 \beta _i)} \\&\quad < \frac{4}{\gamma ^2} \wedge \min _i \frac{2}{\gamma }(Q - \beta _i). \end{aligned}$$
  • \((\beta _1, \beta _2, \beta _3) \mapsto H_{\chi }(t)\) for:

    $$\begin{aligned} \beta _i< Q, \quad \mu _1 \in (0, \infty ), \, \, \mu _2, \mu _3 \in - {\overline{\mathbb {H}}}, \quad q < \frac{4}{\gamma ^2} \wedge \min _i \frac{2}{\gamma }(Q - \beta _i), \quad t \in {\mathbb {H}}. \end{aligned}$$

Then each function above is complex analytic in each of its variables in a small complex neighborhood of any compact set K contained in the domain of definition of the function for real parameters. Furthermore, the function \({\overline{H}}\), now viewed as a function of \(\mu _1, \mu _2, \mu _3\), is complex analytic on a neighborhood of any compact \({\widetilde{K}}\) contained in the range of parameters written above.

Proof

We briefly adapt the proof of [22, Theorem 6.1] for the function \(H^{(\beta _1, \beta _2, \beta _3)}_{(\mu _1, \mu _2, \mu _3)}\), as the other cases can be treated in a similar manner. The first step performed in [22] is to apply the Girsanov Theorem 5.3 to pull the insertions out of the GMC expectation. It will be convenient to assume the three insertions are not located at 0, 1 and \(\infty \) but rather at three points \(s_1\), \(s_2\), \(s_3\), all in \({\mathbb {R}}\), obeying the extra constraints \(|s_i|>2\) for all \(i \in \{1,2,3 \}\) and \(|s_i-s_{i'}|>2\) for all \(i \ne i'\). The reason it is possible to assume this is that the Liouville correlations are conformally invariant in the sense of [19, Theorem 3.5]. It will be convenient to use the notations \({\varvec{\beta }} = (\beta _1, \beta _2, \beta _3)\) and \(\mathbf {s}= (s_1, s_2, s_3)\). Our starting point is thus that it is possible to write,

$$\begin{aligned} {\overline{H}}^{(\beta _1, \beta _2, \beta _3)}_{(\mu _1, \mu _2, \mu _3)} = C {\overline{H}}^{(\beta _1, \beta _2, \beta _3)}_{(\mu _1, \mu _2, \mu _3)}(\mathbf {s}), \end{aligned}$$
(5.73)

where C is an explicit prefactor that is analytic in the \(\beta _i\), and hence can be ignored, and where we have introduced:

$$\begin{aligned} {\overline{H}}^{(\beta _1, \beta _2, \beta _3)}_{(\mu _1, \mu _2, \mu _3)}(\mathbf {s}) = {\mathbb {E}} \left[ \left( \int _{\mathbb {R}}\frac{g(x)^{\frac{\gamma }{8}(\frac{4}{\gamma }- \sum _{i=1}^3 \beta _i )} }{\prod _{i=1}^3 |x-s_i|^{\frac{\gamma \beta _i }{2}}} e^{\frac{\gamma }{2} X(x)} d \mu (x) \right) ^{\frac{1}{\gamma }(2 Q - \sum _{i=1}^3 \beta _i)} \right] . \end{aligned}$$
(5.74)

Now by applying Theorem 5.3 we can obtain \({\overline{H}}^{(\beta _1, \beta _2, \beta _3)}_{(\mu _1, \mu _2, \mu _3)}(\mathbf {s})\) from the following limit,

$$\begin{aligned} H^{(\beta _1, \beta _2, \beta _3)}_{(\mu _1, \mu _2, \mu _3)}(\mathbf {s}) = \lim _{r \rightarrow \infty } {\mathcal {F}}_r({\varvec{\beta }}), \end{aligned}$$
(5.75)

where we have introduced,

$$\begin{aligned} {\mathcal {F}}_r({\varvec{\beta }}) = {\mathbb {E}} \left[ \prod _{i=1}^3 e^{\beta _i X_{r}(s_i) - \frac{\beta _i^2}{2} {\mathbb {E}}[X_r(s_i)^2] } \left( \int _{{\mathbb {R}}_r} g(x)^{\frac{1}{2}} e^{\frac{\gamma }{2} X(x)} d \mu (x) \right) ^{p_0} \right] , \end{aligned}$$
(5.76)

\(p_0 = \frac{1}{\gamma }(2 Q - \sum _{i=1}^3 \beta _i)\) and:

$$\begin{aligned} {\mathbb {R}}_r := {\mathbb {R}} \backslash \cup _{i=1}^3 (s_i-e^{-r/2}, s_i+e^{-r/2}). \end{aligned}$$
(5.77)

The fields \(X_r(s_i)\) are the radial parts of X at \(s_i\), obtained by taking the mean of X over the upper half-circles \(\partial B(s_i,e^{-r/2})_+\) of radius \(e^{-r/2}\).
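Note that, since the covariance of X between boundary points in a compact set equals \(-2 \ln |x-y|\) plus bounded terms, one has \(\mathbb {E}[X_r(s_i)^2] = r + O(1)\). This is what produces the factor \(e^{\frac{r+1}{2}\sum _{i=1}^3 b_i^2}\) in (5.78) below: indeed,

$$\begin{aligned} \left| e^{i b_i X_{r+1}(s_i) + \frac{b_i^2}{2} {\mathbb {E}}[X_{r+1}(s_i)^2] } \right| = e^{ \frac{b_i^2}{2} {\mathbb {E}}[X_{r+1}(s_i)^2] } \le c \, e^{\frac{b_i^2}{2}(r+1)} \end{aligned}$$

for some constant \(c>0\).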

Now when the \(\beta _i\) are complex numbers, we write \(\beta _i = a_i + i b_i\). We want to prove that there exists a complex neighborhood V in \({\mathbb {C}}^3\), containing the domain of definition for real \(\beta _i\), such that \({\mathcal {F}}_r({\varvec{\beta }})\) converges uniformly as \(r \rightarrow + \infty \) on every compact set contained in V. It is known that the processes \(t \mapsto X_{r+t}(s_i)-X_{r}(s_i)\) are independent Brownian motions for different \(s_i\). Hence,

$$\begin{aligned}&|{\mathcal {F}}_{r+1}({\varvec{\beta }}) - {\mathcal {F}}_{r}({\varvec{\beta }})|\nonumber \\&= \left| {\mathbb {E}} \left[ \prod _{i=1}^3 e^{i b_i X_{r+1}(s_i) + \frac{b_i^2}{2} {\mathbb {E}}[X_{r+1}(s_i)^2] } \left( \left( \int _{{\mathbb {R}}_{r+1}} e^{\frac{\gamma }{2} X(x)} f(x) d \mu (x) \right) ^{p_0} \right. \right. \right. \nonumber \\&\quad \left. \left. \left. - \left( \int _{{\mathbb {R}}_r} e^{\frac{\gamma }{2} X(x)} f(x) d \mu (x) \right) ^{p_0}\right) \right] \right| \nonumber \\&\le c \,e^{\frac{r+1}{2}\sum _{i=1}^3 b_i^2}{\mathbb {E}} \left[ \left| \left( \int _{{\mathbb {R}}_{r+1}} e^{\frac{\gamma }{2} X(x)} f(x) d \mu (x) \right) ^{p_0} - \left( \int _{{\mathbb {R}}_r} e^{\frac{\gamma }{2} X(x)} f(x) d \mu (x) \right) ^{p_0}\right| \right] , \end{aligned}$$
(5.78)

where we denote \(f(x) = \frac{g(x)^{\frac{\gamma ^2}{8}(p_0-1)}}{\prod _{i=1}^3 |x-s_i|^{\frac{\gamma a_i}{2}}} \). Set \(Z_r := \int _{{\mathbb {R}}_{r}} e^{\frac{\gamma }{2} X(x)} f(x) d \mu (x)\) and \(Y_r := Z_{r+1} - Z_{r}\). We want to estimate

$$\begin{aligned} \mathbb {E}[|(Z_r + Y_r)^{p_0} - Z_r^{p_0}|] \le \mathbb {E}[{\mathbf {1}}_{|Y_r| < \epsilon }|(Z_r + Y_r)^{p_0} - Z_r^{p_0}|] + \mathbb {E}[{\mathbf {1}}_{|Y_r| \ge \epsilon }|(Z_r + Y_r)^{p_0} - Z_r^{p_0}|], \end{aligned}$$
(5.79)

where \(\epsilon >0\) will be fixed later. By interpolation,

$$\begin{aligned} \mathbb {E}[{\mathbf {1}}_{|Y_r| < \epsilon }|(Z_r + Y_r)^{p_0} - Z_r^{p_0}|] \le |p_0|\epsilon \sup _{u\in [0,1]} \mathbb {E}[|(1-u)Z_r + uY_r|^{\text {Re}(p_0)-1}] \le c \, \epsilon . \end{aligned}$$
(5.80)

For the other term, we use the Hölder inequality with \(\lambda > 1\) such that \(\frac{\lambda }{\lambda -1} \text {Re}(p_0) < \min _{i=1}^3 \frac{2}{\gamma }(Q-a_i) \wedge \frac{4}{\gamma ^2}\), and \(0< m < \frac{4}{\gamma ^2}\),

$$\begin{aligned}&\mathbb {E}[{\mathbf {1}}_{|Y_r| \ge \epsilon }|(Z_r + Y_r)^{p_0} - Z_r^{p_0}|] \le c\, \mathbb {P}(|Y_r| \ge \epsilon )^{\frac{1}{\lambda }} \le c \epsilon ^{-\frac{m}{\lambda }}\mathbb {E}[|Y_r|^m]^{\frac{1}{\lambda }}\\ \nonumber&\qquad \le c\epsilon ^{-\frac{m}{\lambda }} \mathbb {E}\left[ \left| \sum _{i=1}^3 \int _{(s_i - e^{-r/2}, s_i + e^{r/2})} e^{\frac{\gamma }{2}X(x)} f(x) d\mu (x) \right| ^m \right] ^{\frac{1}{\lambda }}\\ \nonumber&\qquad \le c' \epsilon ^{-\frac{m}{\lambda }} \left( \max _i e^{-\frac{r}{2}((1+\frac{\gamma ^2}{2}-\frac{\gamma a_i}{2})m - \frac{\gamma ^2 m^2}{2})}\right) ^{\frac{1}{\lambda }} =: c'\epsilon ^{-\frac{m}{\lambda }} e^{-\frac{\theta }{\lambda }r}, \end{aligned}$$
(5.81)

where in the last step \(\theta \in {\mathbb {R}} \) is defined by the last equality and we have used the multifractal scaling property of the GMC, see e.g. [6, Sect. 3.6] or [34, Sect. 4]. We can choose a suitable m such that \(\theta >0\). Now take \(\epsilon = e^{-\eta r}\) with \(\eta = \frac{\theta }{\lambda + m}\), then:

$$\begin{aligned} \mathbb {E}[|(Z_r + Y_r)^{p_0} - Z_r^{p_0}|] \le c\,e^{\frac{r+1}{2}\sum _{i=1}^3 b_i^2}(\epsilon + \epsilon ^{-\frac{m}{\lambda }} e^{-\frac{\theta }{\lambda }r}) \le c'\, e^{-(\eta - \frac{1}{2}\sum _{i=1}^3 b_i^2)r}. \end{aligned}$$
(5.82)
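For completeness, note that the choice \(\eta = \frac{\theta }{\lambda + m}\) exactly balances the two error terms: with \(\epsilon = e^{-\eta r}\),

$$\begin{aligned} \epsilon ^{-\frac{m}{\lambda }} e^{-\frac{\theta }{\lambda }r} = e^{\left( \frac{m \eta }{\lambda } - \frac{\theta }{\lambda }\right) r} = e^{-\frac{\theta }{\lambda + m} r} = \epsilon . \end{aligned}$$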

Hence if one chooses the open set V in such a way that \(\frac{1}{2}\sum _{i=1}^3 b_i^2 < \eta \) always holds, all the previous inequalities remain valid and thus \({\mathcal {F}}_r({\varvec{\beta }})\) converges locally uniformly. This proves the analyticity result.

Lastly we briefly justify the remaining cases. The analyticity of \({\overline{G}}(\alpha , \beta )\) can be proved in exactly the same way as done above for \({\overline{H}}^{(\beta _1 , \beta _2 , \beta _3)}_{( \mu _1, \mu _2, \mu _3 )}\). Furthermore, adding the dependence on t to obtain the functions \(G_{\chi }(t)\) and \(H_{\chi }(t)\) does not affect the above argument, so the same claim also holds in this case. Finally, for the analyticity in the \(\mu _i\) of \({\overline{H}}^{(\beta _1 , \beta _2 , \beta _3)}_{( \mu _1, \mu _2, \mu _3 )}\), one simply needs to check that the complex derivatives are well-defined. For instance for \(\mu _1\) one can write,

$$\begin{aligned} \partial _{\mu _1}&{\overline{H}}^{(\beta _1, \beta _2, \beta _3)}_{(\mu _1, \mu _2, \mu _3)} = \partial _{\mu _1}{\mathbb {E}} \left[ \left( \int _{\mathbb {R}}\frac{g(x)^{\frac{\gamma }{8}(\frac{4}{\gamma }- \sum _{i=1}^3 \beta _i )} }{|x|^{\frac{\gamma \beta _1 }{2}}|x-1|^{\frac{\gamma \beta _2 }{2}}} e^{\frac{\gamma }{2} X(x)} d \mu (x) \right) ^{\frac{1}{\gamma }(2 Q - \sum _{i=1}^3 \beta _i)} \right] \nonumber \\&= \frac{1}{\gamma }(2 Q - \sum _{i=1}^3 \beta _i) \int _{-\infty }^0 dx_1 \frac{g(x_1)^{\frac{\gamma }{8}(\frac{4}{\gamma }- \sum _{i=1}^3 \beta _i )} }{|x_1|^{\frac{\gamma \beta _1 }{2}}|x_1-1|^{\frac{\gamma \beta _2 }{2}}} \nonumber \\&\qquad \mathbb {E}\left[ e^{\frac{\gamma }{2} X(x_1)} \left( \int _{\mathbb {R}}\frac{g(x)^{\frac{\gamma }{8}(\frac{4}{\gamma }- \sum _{i=1}^3 \beta _i )} }{|x|^{\frac{\gamma \beta _1 }{2}}|x-1|^{\frac{\gamma \beta _2 }{2}}} e^{\frac{\gamma }{2} X(x)} d \mu (x) \right) ^{\frac{1}{\gamma }(2 Q - \sum _{i=1}^3 \beta _i)-1} \right] , \end{aligned}$$
(5.83)

where the last expression is clearly well-defined. Furthermore one can check that \(\partial _{{\overline{\mu }}_1} {\overline{H}}^{(\beta _1, \beta _2, \beta _3)}_{(\mu _1, \mu _2, \mu _3)} = 0\). Therefore \(\mu _1 \mapsto {\overline{H}}^{(\beta _1 , \beta _2 , \beta _3)}_{( \mu _1, \mu _2, \mu _3 )}\) is complex analytic in the claimed domain. \(\square \)

1.2.3 The limit of \({\overline{H}}\) recovers \({\overline{R}}\)

Here we will prove Lemma 1.9. With our choice of \(\mu _i \) satisfying Definition 1.3 this is an easy adaptation of the positive case as performed in [22, 33].

Proof

We prove the lemma in the first case where \(\beta _2 < \beta _1\). Let us denote \(\epsilon = \frac{\beta _3 - (\beta _1 - \beta _2)}{\gamma }\), \(p_1 = \frac{2}{\gamma }(Q-\beta _1)\). For \(I \subseteq {\mathbb {R}}\) a Borel set, we introduce the notation:

$$\begin{aligned} K_{I} = \int _I \frac{1}{|x|^{\frac{\gamma \beta _1 }{2}} |x-1|^{\frac{\gamma \beta _2 }{2}}} g(x)^{\frac{\gamma ^2}{8}(p_1-1-\epsilon )} e^{\frac{\gamma }{2} X(x)} d x. \end{aligned}$$
(5.84)

In our previous paper [33] it is proved that:

$$\begin{aligned} \epsilon \mathbb {E}[K_{[0,1]}^{p_1-\epsilon }] \overset{\epsilon \rightarrow 0}{\longrightarrow } p_1 {\overline{R}}(\beta _1, 0, 1). \end{aligned}$$
(5.85)

Using the density of \(e^{\frac{\gamma }{2}M}\), we have by definition of the reflection coefficient,

$$\begin{aligned} \epsilon \mathbb {E}\left[ \left( e^{\frac{\gamma }{2}M} \rho _+(\beta _1) \right) ^{p_1-\epsilon } \right] \overset{\epsilon \rightarrow 0}{\longrightarrow } p_1 {\overline{R}}(\beta _1, 0, 1), \end{aligned}$$
(5.86)

where:

$$\begin{aligned} \rho _{\pm }(\beta _1) := \frac{1}{2} \int _{-\infty }^{\infty } e^{ \frac{\gamma }{2}{\mathcal {B}}^{\frac{Q-\beta _1}{2}}_s} e^{\frac{\gamma }{2}Y(\pm e^{-s/2})} ds. \end{aligned}$$
(5.87)

On the other hand, by Williams' path decomposition of Theorem 5.5 we can write:

$$\begin{aligned} K_{[0,1]}&= e^{\frac{\gamma }{2}M} \frac{1}{2} \int _{-L_M}^{\infty } e^{ \frac{\gamma }{2}{\mathcal {B}}^{\frac{Q-\beta _1}{2}}_s} e^{\frac{\gamma }{2}Y(e^{-s/2})} ds \le e^{\frac{\gamma }{2}M}\rho _+(\beta _1). \end{aligned}$$
(5.88)

Therefore, the result from [33] implies that:

$$\begin{aligned} \mathbb {E}\left[ \left| K_{[0,1]}^{p_1-\epsilon } - (e^{\frac{\gamma }{2}M}\rho _+(\beta _1))^{p_1-\epsilon }\right| \right] = o(\epsilon ^{-1}). \end{aligned}$$
(5.89)

Similarly we also have

$$\begin{aligned} \mathbb {E}\left[ \left| K_{[-1,0)}^{p_1-\epsilon } - \left( e^{\frac{\gamma }{2}M}\rho _-(\beta _1)\right) ^{p_1-\epsilon }\right| \right] = o(\epsilon ^{-1}). \end{aligned}$$
(5.90)

We will use these results to prove the complex \(\mu _i\) case. Consider first the case \(p_1>1\). Using interpolation and Hölder’s inequality, for \(\lambda > 1\),

$$\begin{aligned}&\mathbb {E}\left[ \left| (\mu _1 K_{(-\infty , 0)} + \mu _2 K_{[0,1]} + \mu _3 K_{(1,\infty )} )^{p_1-\epsilon } - (\mu _1 K_{[-1, 0)} + \mu _2 K_{[0,1]} )^{p_1-\epsilon } \right| \right] \nonumber \\&\le \mathbb {E}\left[ |\mu _1 K_{(-\infty , -1)} + \mu _3 K_{(1,\infty )} |^{\lambda }\right] ^{\frac{1}{\lambda }} \nonumber \\&\qquad \times \sup _{u\in [0,1]}\mathbb {E}\left[ \left| (1-u)(\mu _1 K_{(-\infty , 0)} + \mu _2 K_{[0,1]} + \mu _3 K_{(1,\infty )} ) + u(\mu _1 K_{[-1, 0)} + \mu _2 K_{[0,1]}) \right| ^{(p_1-1-\epsilon )\frac{\lambda }{\lambda -1}} \right] ^{\frac{\lambda -1}{\lambda }}. \end{aligned}$$
(5.91)

Take \(p_1< \lambda < \min \{\frac{4}{\gamma ^2},\frac{2}{\gamma }(Q-\beta _2\vee \beta _3)\}\); then both expectations are bounded by a constant independent of \(\epsilon \). By the same technique, now with \(\lambda = p_1-\epsilon \), we prove:

$$\begin{aligned}&\mathbb {E}\left[ \left| \left( \mu _1 K_{[-1, 0)} + \mu _2 K_{[0,1]}\right) ^{p_1-\epsilon } - \left( \mu _1 e^{\frac{\gamma }{2}M}\rho _-(\beta _1)+ \mu _2 e^{\frac{\gamma }{2}M}\rho _+(\beta _1) \right) ^{p_1-\epsilon } \right| \right] \nonumber \\ \nonumber&\le \mathbb {E}\left[ \left( e^{\frac{\gamma }{2}M} \frac{1}{2} \int _{-\infty }^{-L_M} e^{ \frac{\gamma }{2}{\mathcal {B}}^{\frac{Q-\beta _1}{2}}_s} \left( |\mu _1| e^{\frac{\gamma }{2}Y(-e^{-s/2})} + |\mu _2| e^{\frac{\gamma }{2}Y(e^{-s/2})} \right) ds\right) ^{p_1-\epsilon }\right] ^{\frac{1}{p_1-\epsilon }}\\&\quad \times \mathbb {E}\left[ \left( |\mu _1| e^{\frac{\gamma }{2}M}\rho _-(\beta _1)+ |\mu _2| e^{\frac{\gamma }{2}M}\rho _+(\beta _1) \right) ^{p_1-\epsilon }\right] ^{\frac{p_1-1-\epsilon }{p_1-\epsilon }}. \end{aligned}$$
(5.92)

The second expectation is \(O(\epsilon ^{-1})\). For the first expectation, we use the inequality \(x^{p_1-\epsilon } + y^{p_1-\epsilon } < (x+y)^{p_1-\epsilon }\), valid for \(x,y>0\) since \(p_1-\epsilon >1\). This shows that:

$$\begin{aligned}&\mathbb {E}\left[ \left( e^{\frac{\gamma }{2}M} \frac{1}{2} \int _{-\infty }^{-L_M} e^{ \frac{\gamma }{2}{\mathcal {B}}^{\frac{Q-\beta _1}{2}}_s} \left( |\mu _1| e^{\frac{\gamma }{2}Y(-e^{-s/2})} + |\mu _2| e^{\frac{\gamma }{2}Y(e^{-s/2})} \right) ds\right) ^{p_1-\epsilon }\right] \nonumber \\&\quad \le \! \mathbb {E}\left[ \left( |\mu _1| e^{\frac{\gamma }{2}M}\rho _-(\beta _1)\!+\! |\mu _2| e^{\frac{\gamma }{2}M}\rho _+(\beta _1) \right) ^{p_1-\epsilon }\right] \!-\! \mathbb {E}\left[ \left( |\mu _1| K_{[-1, 0)} \!+\! |\mu _2| K_{[0,1]}\right) ^{p_1-\epsilon }\right] \nonumber \\&\quad = o(\epsilon ^{-1}). \end{aligned}$$
(5.93)

The last equality comes from the fact that the two expectations are equivalent as \(\epsilon \rightarrow 0\), both being of order \(\epsilon ^{-1}\), so that their difference is \(o(\epsilon ^{-1})\). Therefore:

$$\begin{aligned} \mathbb {E}\left[ \left| \left( \mu _1 K_{[-1, 0)} + \mu _2 K_{[0,1]}\right) ^{p_1-\epsilon } - \left( \mu _1 e^{\frac{\gamma }{2}M}\rho _-(\beta _1)+ \mu _2 e^{\frac{\gamma }{2}M}\rho _+(\beta _1) \right) ^{p_1-\epsilon } \right| \right] = o(\epsilon ^{-1}). \end{aligned}$$
(5.94)

Now consider the case \(p_1 \le 1\). Since \(p_1 = \frac{2}{\gamma }(Q-\beta _1)>0\), we are in the case \(0 < p_1 \le 1\). By studying the first order derivatives of the function,

$$\begin{aligned} ({\mathbb {R}}_+^*)^3 \ni (x_1,x_2,x_3) \mapsto \left( \mu _1 x_1^{\frac{1}{p_1}} + \mu _2 x_2^{\frac{1}{p_1}} + \mu _3 x_3^{\frac{1}{p_1}} \right) ^{p_1}, \end{aligned}$$
(5.95)

we can prove the following inequality with a constant \(c>0\) depending only on the \(\mu _i\). For \(x_i, x_i' >0\),

$$\begin{aligned} \left| (\sum _{i=1}^3 \mu _i x_i)^{p_1} - (\sum _{i=1}^3\mu _i x_i')^{p_1}\right| \le c \sum _{i=1}^3 |x_i^{p_1}-x_i'^{p_1}|. \end{aligned}$$
(5.96)

Applying the inequality,

$$\begin{aligned}&\mathbb {E}\left[ \left| \left( \mu _1 K_{(-\infty , 0)} + \mu _2 K_{[0,1]} + \mu _3 K_{(1,\infty )}\right) ^{p_1-\epsilon } - \left( \mu _1 e^{\frac{\gamma }{2}M}\rho _-(\beta _1)+ \mu _2 e^{\frac{\gamma }{2}M}\rho _+(\beta _1) \right) ^{p_1-\epsilon } \right| \right] \\ \nonumber&\le c \mathbb {E}\left[ \left| K_{(-\infty ,0)}^{p_1-\epsilon } - (e^{\frac{\gamma }{2}M} \rho _-(\beta _1))^{p_1-\epsilon }\right| \right] + c \mathbb {E}\left[ \left| K_{[0,1]}^{p_1-\epsilon } - (e^{\frac{\gamma }{2}M} \rho _+(\beta _1))^{p_1-\epsilon }\right| \right] + O(1)\\ \nonumber&\le c \mathbb {E}\left[ \left| K_{(-\infty ,0)}^{p_1-\epsilon } - (e^{\frac{\gamma }{2}M} \rho _-(\beta _1))^{p_1-\epsilon }\right| \right] + o(\epsilon ^{-1}). \end{aligned}$$
(5.97)

Moreover, by sub-additivity,

$$\begin{aligned} \mathbb {E}[|K_{(-\infty ,0)}^{p_1-\epsilon }- K_{(-1,0)}^{p_1-\epsilon }|] =\mathbb {E}[K_{(-\infty ,0)}^{p_1-\epsilon }- K_{(-1,0)}^{p_1-\epsilon }] \le \mathbb {E}[K_{(-\infty ,-1)}^{p_1-\epsilon }] = O(1). \end{aligned}$$
(5.98)

Therefore we can bound

$$\begin{aligned} \mathbb {E}\left[ \left| K_{(-\infty ,0)}^{p_1-\epsilon } - (e^{\frac{\gamma }{2}M} \rho _-(\beta _1))^{p_1-\epsilon }\right| \right] \le&\mathbb {E}\left[ \left| K_{(-1,0)}^{p_1-\epsilon } - (e^{\frac{\gamma }{2}M} \rho _-(\beta _1))^{p_1-\epsilon }\right| \right] \nonumber \\&+ o(\epsilon ^{-1}) = o(\epsilon ^{-1}). \end{aligned}$$
(5.99)

In conclusion,

$$\begin{aligned}&\lim _{\epsilon \rightarrow 0} \epsilon \mathbb {E}\left[ \left( \mu _1 K_{(-\infty , 0)} + \mu _2 K_{[0,1]} + \mu _3 K_{(1,\infty )} \right) ^{p_1-\epsilon } \right] \nonumber \\&\qquad = \lim _{\epsilon \rightarrow 0} \epsilon \mathbb {E}\left[ \left( \mu _1 e^{\frac{\gamma }{2}M}\rho _-(\beta _1)+ \mu _2 e^{\frac{\gamma }{2}M}\rho _+(\beta _1) \right) ^{p_1-\epsilon }\right] \nonumber \\&\quad =p_1 {\overline{R}}(\beta _1,\mu _1,\mu _2). \end{aligned}$$
(5.100)

This finishes the proof of the lemma. \(\square \)

1.3 Mapping GMC moments from \(\partial {\mathbb {D}}\) to \({\mathbb {R}}\)

We prove here a lemma providing a concrete computation linking moments of GMC on \(\partial {\mathbb {D}}\) to moments on \({\mathbb {R}}\). This will be used to relate the moment formula for GMC on the circle of [31] to the \({\overline{U}}(\alpha )\) defined in our paper. This lemma can be deduced from [19, Proposition 3.7], but doing so requires several straightforward yet tedious computational steps. Therefore for clarity we include a self-contained proof.

Lemma 5.9

Consider \(\beta < Q\) and \(\frac{\gamma }{2} - \alpha< \frac{\beta }{2} < \alpha \) and let X and \(X_{{\mathbb {D}}}\) be the GFF respectively on \({\mathbb {H}}\) and \({\mathbb {D}}\) with covariance given by Eqs. (1.10) and (5.1). Then the following equality holds,

$$\begin{aligned}&\mathbb {E}\left[ \left( \int _0^{2\pi } \frac{1}{|e^{i \theta } -1|^{\frac{\gamma \beta }{2}}} e^{\frac{\gamma }{2} X_{{\mathbb {D}}}(e^{i\theta })} d\theta \right) ^{\frac{2Q-2\alpha - \beta }{\gamma }}\right] \nonumber \\&\quad = 2^{(\alpha - \frac{\beta }{2})(Q-\alpha - \frac{\beta }{2})} \mathbb {E}\left[ \left( \int _{{\mathbb {R}}} \frac{e^{\frac{\gamma }{2} X(x) }}{|x - i|^{\gamma \alpha }} g(x)^{\frac{1}{2} - \frac{\alpha \gamma }{4}} dx \right) ^{\frac{2Q -2 \alpha - \beta }{\gamma }} \right] , \end{aligned}$$
(5.101)

where both GMC measures are defined by a renormalization according to variance as performed in Definition 1.2. Setting \(\beta = 0\), we obtain for \( \alpha > \frac{\gamma }{2}\) the equation

$$\begin{aligned} \mathbb {E}\left[ \left( \int _0^{2\pi } e^{\frac{\gamma }{2} X_{{\mathbb {D}}}(e^{i\theta })} d\theta \right) ^{\frac{2Q-2\alpha }{\gamma }}\right] = 2^{\alpha (Q-\alpha )} \mathbb {E}\left[ \left( \int _{{\mathbb {R}}} \frac{e^{\frac{\gamma }{2} X(x) }}{|x - i|^{\gamma \alpha }} g(x)^{\frac{1}{2} - \frac{\alpha \gamma }{4}} dx \right) ^{\frac{2Q -2 \alpha }{\gamma }} \right] . \nonumber \\ \end{aligned}$$
(5.102)

Proof

Take \(\psi : z\mapsto i \frac{1 +z}{1 - z}\) the conformal map that maps the unit disk \({\mathbb {D}}\) equipped with the Euclidean metric to the upper-half plane \({\mathbb {H}}\) equipped with the metric \(\hat{g}(x) = \frac{4}{|x+i|^4}\). This also maps the field \(X_{{\mathbb {D}}}\) to the field \(X_{\hat{g}}\) with covariance given by (5.2). Record the following:

$$\begin{aligned} {\hat{g}}(x)^{- \frac{\gamma \alpha }{4}} = 2^{- \frac{\gamma \alpha }{2}} |x-i|^{\gamma \alpha }, \quad \frac{1}{|e^{i \theta } -1|^{\frac{\gamma \beta }{2}}} = 2^{- \frac{\gamma \beta }{4}} {\hat{g}}(x)^{- \frac{\gamma \beta }{8}}, \quad d \theta = {\hat{g}}(x)^{1/2} dx. \nonumber \\ \end{aligned}$$
(5.103)
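As a quick numerical sanity check (independent of the proof), the three identities in (5.103) can be verified directly from the definition of \(\psi \). The following Python sketch does so for a few boundary points; the values of \(\gamma , \alpha , \beta \) and the helper name g_hat are ours, chosen purely for illustration.

```python
import numpy as np

gam, alpha, beta = 1.3, 0.7, 0.4           # arbitrary illustrative values

def g_hat(x):
    # hat g(x) = 4 / |x + i|^4, the metric on the upper-half plane used in the proof
    return 4.0 / np.abs(x + 1j) ** 4

for theta in [0.3, 1.1, 2.5, -2.0]:
    z = np.exp(1j * theta)
    x = (1j * (1 + z) / (1 - z)).real      # psi maps the boundary of the disk to the real line
    # first identity of (5.103)
    lhs1 = g_hat(x) ** (-gam * alpha / 4)
    rhs1 = 2 ** (-gam * alpha / 2) * np.abs(x - 1j) ** (gam * alpha)
    # second identity of (5.103)
    lhs2 = np.abs(z - 1) ** (-gam * beta / 2)
    rhs2 = 2 ** (-gam * beta / 4) * g_hat(x) ** (-gam * beta / 8)
    # third identity of (5.103): d theta = hat g(x)^{1/2} dx, i.e. |dx/dtheta| = hat g(x)^{-1/2}
    h = 1e-6
    zp = np.exp(1j * (theta + h))
    xp = (1j * (1 + zp) / (1 - zp)).real
    lhs3, rhs3 = abs((xp - x) / h), g_hat(x) ** (-0.5)
    assert np.allclose([lhs1, lhs2, lhs3], [rhs1, rhs2, rhs3], rtol=1e-4)
print("identities (5.103) verified numerically")
```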

This change of coordinates applied to the GMC implies the following relation:

$$\begin{aligned}&\mathbb {E}\left[ \left( \int _0^{2\pi } \frac{1}{|e^{i \theta } -1 |^{\frac{\gamma \beta }{2}}} e^{\frac{\gamma }{2} X_{{\mathbb {D}}}(e^{i\theta }) - \frac{\gamma ^2}{8} \mathbb {E}[X_{{\mathbb {D}}}(e^{i\theta })^2]} d\theta \right) ^{\frac{2Q-2\alpha - \beta }{\gamma }}\right] \\&= 2^{(\alpha - \frac{\beta }{2})(Q-\alpha - \frac{\beta }{2})} \mathbb {E}\left[ \left( \int _{\mathbb {R}}\frac{e^{\frac{\gamma }{2} X_{\hat{g}}(x) -\frac{\gamma ^2}{8} \mathbb {E}[X_{\hat{g}}(x)^2]}}{|x-i|^{\gamma \alpha }} \hat{g}(x)^{\frac{\gamma }{4}(\frac{2}{\gamma }-\alpha -\frac{\beta }{2})}d x \right) ^{\frac{2Q-2\alpha - \beta }{\gamma }}\right] . \nonumber \end{aligned}$$
(5.104)

Notice that in the above expression we explicitly wrote the renormalization of the GMC to emphasize that the formula holds when the GMC is renormalized by variance. Now let us momentarily assume \(\alpha +\frac{\beta }{2}>Q\) and introduce the following integral over c:

$$\begin{aligned}&\mathbb {E}\left[ \left( \int _{\mathbb {R}}\frac{e^{\frac{\gamma }{2} X_{\hat{g}}(x)}}{|x-i|^{\gamma \alpha }} \hat{g}(x)^{\frac{\gamma }{4}(\frac{2}{\gamma }-\alpha - \frac{\beta }{2})}d x \right) ^{\frac{2Q-2\alpha -\beta }{\gamma }}\right] \nonumber \\&= \frac{\gamma }{2} \frac{e^{\frac{\alpha }{2}(Q-\alpha - \frac{\beta }{2}) \ln \hat{g}(i)}}{ \Gamma (\frac{2}{\gamma }(\alpha + \frac{\beta }{2} - Q))} \int _{{\mathbb {R}}} dc e^{(\alpha + \frac{\beta }{2} - Q)c} \mathbb {E}\left[ e^{\alpha X_{\hat{g}}(i) - \frac{\alpha ^2}{2} \mathbb {E}[ X_{\hat{g}}(i)^2] }e^{- e^{\frac{\gamma c}{2}} \int _{{\mathbb {R}}} e^{\frac{\gamma }{2} X_{\hat{g}}(x) - \frac{\gamma ^2}{8} \mathbb {E}[ X_{\hat{g}}(x)^2] } {\hat{g}}(x)^{\frac{1}{2} - \frac{\gamma \beta }{8}} dx } \right] . \end{aligned}$$
(5.105)

To go from the field \( X_{\hat{g}}\) to the field X we must perform the change of variable \(X = X_{\hat{g}} - Y \) with \( Y = \frac{1}{\pi } \int _0^{\pi } X_{\hat{g}}(e^{i \theta }) d \theta \). We perform this replacement and at the same time shift the integration over c by \(-Y\) to obtain:

$$\begin{aligned}&\int _{{\mathbb {R}}} dc e^{(\alpha + \frac{\beta }{2} - Q)c} \mathbb {E}\left[ e^{(Q - \frac{\beta }{2})Y} e^{\alpha X(i) - \frac{\alpha ^2}{2} \mathbb {E}[ X_{\hat{g}}(i)^2] }e^{- e^{\frac{\gamma c}{2}} \int _{{\mathbb {R}}} e^{\frac{\gamma }{2} X(x) - \frac{\gamma ^2}{8} \mathbb {E}[ X_{\hat{g}}(x)^2] } {\hat{g}}(x)^{\frac{1}{2} - \frac{\gamma \beta }{8}} dx } \right] \\ \nonumber&= \int _{{\mathbb {R}}} dc e^{(\alpha + \frac{\beta }{2} - Q)c} \mathbb {E}\Bigg [ e^{\frac{(Q - \frac{\beta }{2})^2}{2} \mathbb {E}[Y^2]} e^{\alpha X(i) + \alpha (Q - \frac{\beta }{2}) \mathbb {E}[X(i)Y] - \frac{\alpha ^2}{2} \mathbb {E}[ X_{\hat{g}}(i)^2] } \\&\qquad \qquad \qquad \qquad \qquad \times e^{- e^{\frac{\gamma c}{2}} \int _{{\mathbb {R}}} e^{\frac{\gamma }{2} X(x) + \frac{\gamma }{2}(Q -\frac{\beta }{2}) \mathbb {E}[X(x)Y] - \frac{\gamma ^2}{8} \mathbb {E}[ X_{\hat{g}}(x)^2] } {\hat{g}}(x)^{\frac{1}{2} - \frac{\gamma \beta }{8}} dx } \Bigg ]. \nonumber \end{aligned}$$
(5.106)

In the last line we have applied the Girsanov Theorem 5.3 to \(e^{(Q - \frac{\beta }{2})Y}\). Record the following easy computations:

$$\begin{aligned}&\mathbb {E}[Y^2] = - \frac{1}{\pi } \int _0^{\pi } \ln \hat{g}(e^{i \theta }) d \theta , \quad \quad \mathbb {E}[Y X_{\hat{g}}(x)] = \frac{1}{2} \ln \frac{g(x)}{\hat{g}(x)} + \frac{1}{2}\mathbb {E}[Y^2], \\ \nonumber&\mathbb {E}[Y X(x)] = \frac{1}{2} \ln \frac{g(x)}{\hat{g}(x)} - \frac{1}{2}\mathbb {E}[Y^2], \quad \quad \mathbb {E}[ X_{\hat{g}}(x)^2] = \mathbb {E}[ X(x)^2] + \ln \frac{g(x)}{\hat{g}(x)}. \end{aligned}$$
(5.107)

Then we get:

$$\begin{aligned}&\int _{{\mathbb {R}}} dc e^{(\alpha + \frac{\beta }{2} - Q)c} \mathbb {E}\Bigg [ e^{ \frac{1}{2} (Q- \frac{\beta }{2})(Q-\alpha - \frac{\beta }{2}) \mathbb {E}[Y^2]} e^{\frac{\alpha }{2}(Q -\alpha - \frac{\beta }{2}) \ln \frac{g(i)}{\hat{g}(i)}} e^{\alpha X(i) - \frac{\alpha ^2}{2} \mathbb {E}[ X(i)^2] } \nonumber \\&\qquad \qquad \qquad \qquad \qquad \times e^{- e^{\frac{\gamma c}{2} - \frac{\gamma }{4} (Q -\frac{\beta }{2}) \mathbb {E}[Y^2]} \int _{{\mathbb {R}}} e^{\frac{\gamma }{2} X(x) - \frac{\gamma ^2}{8} \mathbb {E}[ X(x)^2] } g(x)^{\frac{1}{2} - \frac{\gamma \beta }{8}} dx } \Bigg ] \nonumber \\&=e^{\frac{\alpha }{2}(Q -\alpha - \frac{\beta }{2}) \ln \frac{g(i)}{\hat{g}(i)}} \int _{{\mathbb {R}}} dc e^{(\alpha + \frac{\beta }{2} - Q)c} \mathbb {E}\left[ e^{\alpha X(i) - \frac{\alpha ^2}{2} \mathbb {E}[ X(i)^2] }e^{- e^{\frac{\gamma c}{2} } \int _{{\mathbb {R}}} e^{\frac{\gamma }{2} X(x) - \frac{\gamma ^2}{8} \mathbb {E}[ X(x)^2] } g(x)^{\frac{1}{2} - \frac{\gamma \beta }{8}} dx } \right] \nonumber \\&= \frac{2}{\gamma } \Gamma \left( \frac{2}{\gamma }(\alpha + \frac{\beta }{2} -Q) \right) e^{\frac{\alpha }{2}(\alpha + \frac{\beta }{2} -Q) \ln \hat{g}(i) } \mathbb {E}\left[ \left( \int _{{\mathbb {R}}} \frac{e^{\frac{\gamma }{2} X(x) - \frac{\gamma ^2}{8} \mathbb {E}[ X(x)^2] }}{|x - i|^{\gamma \alpha }} g(x)^{\frac{1}{2} - \frac{\alpha \gamma }{4} - \frac{\gamma \beta }{8}} dx \right) ^{\frac{2Q -2 \alpha - \beta }{\gamma }} \right] . \end{aligned}$$
(5.108)

To obtain the second line we have shifted the integral over c by \(\frac{1}{2}(Q - \frac{\beta }{2}) \mathbb {E}[Y^2]\) and to obtain the last one we have computed the integral over c. The conclusion of the above is thus that:

$$\begin{aligned}&\mathbb {E}\left[ \left( \int _{\mathbb {R}}\frac{e^{\frac{\gamma }{2} X_{\hat{g}}(x)}}{|x-i|^{\gamma \alpha }} \hat{g}(x)^{\frac{\gamma }{4}(\frac{2}{\gamma }-\alpha - \frac{\beta }{2})}d x \right) ^{\frac{2Q-2\alpha - \beta }{\gamma }}\right] \nonumber \\&\quad = \mathbb {E}\left[ \left( \int _{{\mathbb {R}}} \frac{e^{\frac{\gamma }{2} X(x) - \frac{\gamma ^2}{8} \mathbb {E}[ X(x)^2] }}{|x - i|^{\gamma \alpha }} g(x)^{\frac{1}{2} - \frac{\alpha \gamma }{4} - \frac{\beta \gamma }{8}} dx \right) ^{\frac{2Q -2 \alpha - \beta }{\gamma }} \right] . \end{aligned}$$
(5.109)

To lift the constraint \(\alpha + \frac{\beta }{2} > Q \) that we introduced in order to write the integral over c, we can simply use the analyticity in \(\alpha \) of both sides of the above equation, given by Lemma 5.8. Combining this equation with (5.104) then implies the claim of the lemma. \(\square \)

1.4 Special functions

1.4.1 Hypergeometric equations

Here we recall some facts we have used about the hypergeometric equation and its solution space. We always use \({\mathbb {N}}\) for the set of non-negative integers. For \(A>0\) let \(\Gamma (A) = \int _0^{\infty } t^{A-1} e^{-t} dt \) denote the standard Gamma function, which can then be analytically extended to \({\mathbb {C}} \setminus (- {\mathbb {N}}) \). We recall the following useful properties:

$$\begin{aligned} \Gamma (A+1) = A \Gamma (A), \quad \Gamma (A) \Gamma (1-A) = \frac{\pi }{\sin (\pi A)}, \quad \Gamma (A) \Gamma (A + \frac{1}{2}) = \sqrt{\pi } 2^{1 - 2A} \Gamma (2A). \end{aligned}$$
(5.110)
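As a quick sanity check, the three identities in (5.110) can be verified numerically; the following Python sketch does so for a few arbitrary non-integer test values of A.

```python
import numpy as np
from scipy.special import gamma

for A in [0.37, 1.2, 2.9]:                  # arbitrary non-integer test values
    assert np.isclose(gamma(A + 1), A * gamma(A))                                  # recursion
    assert np.isclose(gamma(A) * gamma(1 - A), np.pi / np.sin(np.pi * A))          # reflection
    assert np.isclose(gamma(A) * gamma(A + 0.5),
                      np.sqrt(np.pi) * 2 ** (1 - 2 * A) * gamma(2 * A))            # duplication
print("identities (5.110) verified numerically")
```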

Let \((A)_n : = \frac{\Gamma (A +n)}{\Gamma (A)}\). For A, B, C, and t real numbers we define the hypergeometric function F by:

$$\begin{aligned} F(A,B,C,t) := \sum _{n=0}^{\infty } \frac{(A)_n (B)_n}{n! (C)_n} t^n. \end{aligned}$$
(5.111)

This function can be used to solve the following hypergeometric equation:

$$\begin{aligned} \left( t (1-t) \frac{d^2}{d t^2} + ( C - (A +B +1)t) \frac{d}{dt} - AB\right) f(t) =0. \end{aligned}$$
(5.112)

We can give the following three bases of solutions corresponding respectively to a power series expansion around \(t=0\), \(t=1\), and \(t = \infty \). Under the assumption that C is not an integer:

$$\begin{aligned} f(t)&= C_1 F(A,B,C,t) + C_2 t^{1 -C} F(1 + A-C, 1 +B - C, 2 -C, t). \end{aligned}$$
(5.113)

Under the assumption that \( C - A - B \) is not an integer:

$$\begin{aligned} f(t)&= B_1 F(A,B,1+A+B- C, 1 -t)\\ \nonumber&\quad + B_2 (1-t)^{C- A - B} F(C- A, C- B, 1 + C - A - B , 1 -t). \end{aligned}$$
(5.114)

Under the assumption that \( A - B \) is not an integer:

$$\begin{aligned} f(t)&= D_1 t^{-A}F(A,1+A-C,1+A-B,t^{-1})\\ \nonumber&\quad + D_2 t^{-B} F(B, 1 +B - C, 1 +B - A, t^{-1}). \end{aligned}$$
(5.115)

For each basis, two real constants parametrize the solution space: \(C_1, C_2\), \(B_1, B_2\), and \(D_1, D_2\) respectively. We thus expect explicit change-of-basis formulas linking \(C_1, C_2\), \(B_1, B_2\), and \(D_1, D_2\). This is precisely what the so-called connection formulas provide,

$$\begin{aligned}&\begin{pmatrix} C_1 \\ C_2 \end{pmatrix} = \begin{pmatrix} \frac{\Gamma (1 -C ) \Gamma ( A - B +1 ) }{\Gamma (A - C +1 )\Gamma (1- B ) } &{} \frac{\Gamma (1 -C) \Gamma ( B- A +1 ) }{\Gamma (B - C +1 )\Gamma ( 1- A ) } \\ \frac{\Gamma (C-1 ) \Gamma ( A - B +1 ) }{\Gamma ( A ) \Gamma ( C - B )} &{} \frac{\Gamma (C-1 ) \Gamma (B - A +1 ) }{ \Gamma ( B ) \Gamma ( C - A ) } \end{pmatrix} \begin{pmatrix} D_1 \\ D_2 \end{pmatrix}, \end{aligned}$$
(5.116)
$$\begin{aligned}&\begin{pmatrix} B_1 \\ B_2 \end{pmatrix} = \begin{pmatrix} \frac{\Gamma (C)\Gamma (C-A-B)}{\Gamma (C-A)\Gamma (C-B)} &{} \frac{\Gamma (2-C)\Gamma (C-A-B)}{\Gamma (1-A)\Gamma (1-B)} \\ \frac{\Gamma (C)\Gamma (A+B-C)}{\Gamma (A)\Gamma (B)} &{} \frac{\Gamma (2-C)\Gamma (A+B-C)}{\Gamma (A-C+1)\Gamma (B-C+1)} \end{pmatrix} \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}. \end{aligned}$$
(5.117)

These relations come from the theory of hypergeometric equations and we will extensively use them in Sects. 2 and 3 to deduce our shift equations.
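The connection formula (5.117) is also easy to check numerically. The following Python sketch picks generic non-degenerate values of A, B, C, a point \(t \in (0,1)\) and arbitrary coefficients \(C_1, C_2\) (all chosen purely for illustration), and verifies that a solution written in the basis (5.113) around \(t=0\) agrees with its expression in the basis (5.114) around \(t=1\) obtained from (5.117).

```python
import numpy as np
from scipy.special import gamma as G, hyp2f1

A, B, C, t = 0.31, 0.47, 0.63, 0.4      # generic non-degenerate test values (illustrative only)
C1, C2 = 1.7, -0.9                      # arbitrary coefficients in the basis around t = 0

# the solution written in the basis around t = 0, Eq. (5.113)
f0 = C1 * hyp2f1(A, B, C, t) + C2 * t ** (1 - C) * hyp2f1(1 + A - C, 1 + B - C, 2 - C, t)

# coefficients in the basis around t = 1, computed from the connection matrix (5.117)
M = np.array([
    [G(C) * G(C - A - B) / (G(C - A) * G(C - B)),  G(2 - C) * G(C - A - B) / (G(1 - A) * G(1 - B))],
    [G(C) * G(A + B - C) / (G(A) * G(B)),          G(2 - C) * G(A + B - C) / (G(A - C + 1) * G(B - C + 1))],
])
B1, B2 = M @ np.array([C1, C2])

# the same solution written in the basis around t = 1, Eq. (5.114)
f1 = (B1 * hyp2f1(A, B, 1 + A + B - C, 1 - t)
      + B2 * (1 - t) ** (C - A - B) * hyp2f1(C - A, C - B, 1 + C - A - B, 1 - t))

assert np.isclose(f0, f1)
print("connection formula (5.117) verified numerically:", f0, f1)
```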

1.4.2 The double gamma function

We will now provide some explanations on the functions \(\Gamma _{\frac{\gamma }{2}}(x)\) and \(S_{\frac{\gamma }{2}}(x)\) that we have introduced. For all \(\gamma \in (0,2) \) and for \(\mathrm {Re}(x) >0\), \(\Gamma _{\frac{\gamma }{2}}(x)\) is defined by the integral formula,

$$\begin{aligned} \ln \Gamma _{\frac{\gamma }{2}}(x) = \int _0^{\infty } \frac{dt}{t} \left[ \frac{ e^{-xt} -e^{- \frac{Qt}{2}} }{(1 - e^{- \frac{\gamma t}{2}})(1 - e^{- \frac{2t}{\gamma }})} - \frac{( \frac{Q}{2} -x)^2 }{2}e^{-t} + \frac{ x -\frac{Q}{2} }{t} \right] , \nonumber \\ \end{aligned}$$
(5.118)

where we have \(Q = \frac{\gamma }{2} +\frac{2}{\gamma }\). Since the function \(\Gamma _{\frac{\gamma }{2}}(x)\) is continuous it is completely determined by the following two shift equations

$$\begin{aligned} \frac{\Gamma _{\frac{\gamma }{2}}(x)}{\Gamma _{\frac{\gamma }{2}}(x + \frac{\gamma }{2}) }&= \frac{1}{\sqrt{2 \pi }} \Gamma (\frac{\gamma x}{2}) ( \frac{\gamma }{2} )^{ -\frac{\gamma x}{2} + \frac{1}{2} }, \end{aligned}$$
(5.119)
$$\begin{aligned} \frac{\Gamma _{\frac{\gamma }{2}}(x)}{\Gamma _{\frac{\gamma }{2}}(x + \frac{2}{\gamma }) }&= \frac{1}{\sqrt{2 \pi }} \Gamma (\frac{2 x}{\gamma }) ( \frac{\gamma }{2} )^{ \frac{2 x}{\gamma } - \frac{1}{2} }, \end{aligned}$$
(5.120)

and by its value at \(\frac{Q}{2}\), \(\Gamma _{\frac{\gamma }{2}}(\frac{Q}{2} ) =1\). Furthermore \(x \mapsto \Gamma _{\frac{\gamma }{2}}(x)\) admits a meromorphic extension to all of \({\mathbb {C}}\) with simple poles at \(x = -n\frac{\gamma }{2}-m\frac{2}{\gamma }\) for any \(n,m \in {\mathbb {N}}\), and \(\Gamma _{\frac{\gamma }{2}}(x)\) never vanishes. We have also used the double sine function defined by:

$$\begin{aligned} S_{\frac{\gamma }{2}}(x) = \frac{\Gamma _{\frac{\gamma }{2}}(x)}{\Gamma _{\frac{\gamma }{2}}(Q -x)}. \end{aligned}$$
(5.121)

It obeys the following two shift equations:

$$\begin{aligned} \frac{S_{\frac{\gamma }{2}}(x+\frac{\gamma }{2})}{S_{\frac{\gamma }{2}}(x)} = 2\sin (\frac{\gamma \pi }{2}x), \quad \frac{S_{\frac{\gamma }{2}}(x+\frac{2}{\gamma })}{S_{\frac{\gamma }{2}}(x)} = 2\sin (\frac{2\pi }{\gamma }x). \end{aligned}$$
(5.122)

The double sine function admits a meromorphic extension to \({\mathbb {C}}\) with poles at \(x = -n\frac{\gamma }{2}-m\frac{2}{\gamma }\) and with zeros at \(x = Q+n\frac{\gamma }{2}+m\frac{2}{\gamma }\) for any \(n,m \in {\mathbb {N}}\). We also record the following asymptotic for \(S_{\frac{\gamma }{2}}(x) \):

$$\begin{aligned} S_{\frac{\gamma }{2}}(x) \sim {\left\{ \begin{array}{ll} e^{-i\frac{\pi }{2}x(x-Q)} &{} \text {as} \quad \text {Im}(x) \rightarrow \infty ,\\ e^{i\frac{\pi }{2}x(x-Q)} &{}\text {as} \quad \text {Im}(x) \rightarrow -\infty . \end{array}\right. } \end{aligned}$$
(5.123)
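As an illustration (not used anywhere in the proofs), the normalization \(\Gamma _{\frac{\gamma }{2}}(\frac{Q}{2})=1\) and the shift equations (5.119) and (5.122) can be checked numerically from the integral representation (5.118). The following Python sketch uses a crude quadrature with a small-t cutoff and a first-order endpoint correction, which limits the accuracy to a few digits; the value of \(\gamma \), the test point x and the helper names are ours, chosen for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

gam = 1.4                                  # illustrative value of gamma in (0, 2)
Q = gam / 2 + 2 / gam

def ln_Gamma_b(x):
    # crude quadrature of the integral representation (5.118), valid for x > 0;
    # the cutoff at eps avoids catastrophic cancellation near t = 0
    def integrand(t):
        bracket = ((np.exp(-x * t) - np.exp(-Q * t / 2))
                   / ((1 - np.exp(-gam * t / 2)) * (1 - np.exp(-2 * t / gam)))
                   - (Q / 2 - x) ** 2 / 2 * np.exp(-t)
                   + (x - Q / 2) / t)
        return bracket / t
    eps = 1e-3
    val = quad(integrand, eps, np.inf, limit=200)[0]
    return val + eps * integrand(eps)      # rectangle-rule correction for the truncated piece [0, eps]

def Gamma_b(x):
    return np.exp(ln_Gamma_b(x))

def S_b(x):
    return Gamma_b(x) / Gamma_b(Q - x)

x = 0.83                                   # arbitrary test point with 0 < x < Q
assert np.isclose(Gamma_b(Q / 2), 1.0, rtol=1e-3)                      # normalization at Q/2
lhs = Gamma_b(x) / Gamma_b(x + gam / 2)                                # shift equation (5.119)
rhs = Gamma(gam * x / 2) * (gam / 2) ** (-gam * x / 2 + 0.5) / np.sqrt(2 * np.pi)
assert np.isclose(lhs, rhs, rtol=1e-3)
assert np.isclose(S_b(x + gam / 2) / S_b(x),
                  2 * np.sin(np.pi * gam * x / 2), rtol=1e-3)          # first shift equation in (5.122)
print("shift equations (5.119) and (5.122) verified numerically")
```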

Finally, in Sect. 1.4.2, in order to state Corollaries 1.12 and 1.13 on the laws of the GMC measures, we have used the following random variable \(\beta _{2,2}\) defined in [27]. Its moments involve the function \(\Gamma _{\frac{\gamma }{2}}\).

Definition 5.10

(Existence theorem). The distribution \(-\ln \beta _{2,2}(a_1,a_2;b_0,b_1,b_2)\) is infinitely divisible on \([0,\infty )\) and has the Lévy-Khintchine decomposition for \({\text {Re}} (p) >-b_0\):

$$\begin{aligned}&\mathbb {E}[\exp (p \ln \beta _{2,2}(a_1, a_2;b_0,b_1,b_2))] \nonumber \\&\quad = \exp \Big (\int _0^{\infty } (e^{-pt}-1)e^{-b_0t} \frac{(1-e^{-b_1t})(1-e^{-b_2t})}{(1-e^{-a_1t})(1-e^{-a_2t})} \frac{dt}{t}\Big ). \end{aligned}$$
(5.124)

Furthermore, the distribution \(\ln \beta _{2,2}(a_1,a_2;b_0,b_1,b_2)\) is absolutely continuous with respect to the Lebesgue measure.

We only work with the case \((a_1,a_2)= (1,\frac{4}{\gamma ^2})\). Then \(\beta _{2,2}(1,\frac{4}{\gamma ^2};b_0,b_1,b_2)\) depends on the four parameters \(\gamma , b_0, b_1, b_2\) and its moments of real order \(p>-b_0\) are given by the formula:

$$\begin{aligned}&{\mathbb {E}}[ \beta _{2,2}(1, \frac{4}{\gamma ^2}; b_0, b_1, b_2)^p] \nonumber \\&\quad = \frac{\Gamma _{\frac{\gamma }{2}}(\frac{\gamma }{2}(p + b_0)) \Gamma _{\frac{\gamma }{2}}( \frac{\gamma }{2}(b_0 + b_1)) \Gamma _{\frac{\gamma }{2}}(\frac{\gamma }{2}(b_0 + b_2)) \Gamma _{\frac{\gamma }{2}}(\frac{\gamma }{2}(p + b_0 + b_1 +b_2))}{\Gamma _{\frac{\gamma }{2}}(\frac{\gamma }{2}b_0)\Gamma _{\frac{\gamma }{2}}(\frac{\gamma }{2}(p + b_0 + b_1))\Gamma _{\frac{\gamma }{2}}(\frac{\gamma }{2}(p + b_0 +b_2)) \Gamma _{\frac{\gamma }{2}}( \frac{\gamma }{2}(b_0 +b_1 +b_2))}. \end{aligned}$$
(5.125)

Of course we have \(\gamma \in (0,2)\) and the real numbers \(p, b_0, b_1, b_2\) must be chosen so that the arguments of all the \(\Gamma _{\frac{\gamma }{2}}\) are positive.
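As a further illustrative consistency check, one can verify numerically that the Lévy–Khintchine representation (5.124) with \((a_1,a_2)=(1,\frac{4}{\gamma ^2})\) reproduces the moment formula (5.125). The following sketch reuses the same crude quadrature of (5.118) for \(\ln \Gamma _{\frac{\gamma }{2}}\) as above; the parameter values are arbitrary admissible choices.

```python
import numpy as np
from scipy.integrate import quad

gam, b0, b1, b2, p = 1.4, 0.5, 0.3, 0.6, 0.8   # arbitrary values with p > -b0 and all arguments below positive
Q = gam / 2 + 2 / gam
a1, a2 = 1.0, 4.0 / gam ** 2

def ln_Gamma_b(x):
    # same crude quadrature of (5.118) as in the sketch of the previous subsection
    def integrand(t):
        bracket = ((np.exp(-x * t) - np.exp(-Q * t / 2))
                   / ((1 - np.exp(-gam * t / 2)) * (1 - np.exp(-2 * t / gam)))
                   - (Q / 2 - x) ** 2 / 2 * np.exp(-t)
                   + (x - Q / 2) / t)
        return bracket / t
    eps = 1e-3
    val = quad(integrand, eps, np.inf, limit=200)[0]
    return val + eps * integrand(eps)

# left-hand side: the Levy-Khintchine representation (5.124) with (a1, a2) = (1, 4/gamma^2)
def lk_integrand(t):
    return ((np.exp(-p * t) - 1) * np.exp(-b0 * t)
            * (1 - np.exp(-b1 * t)) * (1 - np.exp(-b2 * t))
            / ((1 - np.exp(-a1 * t)) * (1 - np.exp(-a2 * t))) / t)
lhs = np.exp(quad(lk_integrand, 0, np.inf, limit=200)[0])

# right-hand side: the moment formula (5.125), arguments of Gamma_{gamma/2} in the numerator and denominator
num = [p + b0, b0 + b1, b0 + b2, p + b0 + b1 + b2]
den = [b0, p + b0 + b1, p + b0 + b2, b0 + b1 + b2]
rhs = np.exp(sum(ln_Gamma_b(gam / 2 * y) for y in num)
             - sum(ln_Gamma_b(gam / 2 * y) for y in den))

assert np.isclose(lhs, rhs, rtol=1e-2)
print("moment formula (5.125) consistent with (5.124):", lhs, rhs)
```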

1.4.3 The exact formula \({\mathcal {I}}\)

We provide here an analysis of the formula \({\mathcal {I}}\) we have written to give the expression for \({\overline{H}}\). This formula comes from taking the limit \(\mu \rightarrow 0\) in the formula for the boundary three-point function proposed in [30]. We denote the integral it contains by \({\mathcal {J}}\) and first give a convergence condition for \({\mathcal {J}}\).

Lemma 5.11

Consider parameters \(\beta _1, \beta _2, \beta _3 \in {\mathbb {C}}\) and \(\sigma _1, \sigma _2, \sigma _3 \in {\mathbb {C}}\) such that the inequality

$$\begin{aligned} Q > \mathrm {Re}\left( \sigma _3 - \sigma _2 + \frac{\beta _2}{2} \right) \end{aligned}$$
(5.126)

holds. Then the following integral is well-defined as a meromorphic function of all its parameters

$$\begin{aligned} {\mathcal {J}}:=&\int _{{\mathcal {C}}} \frac{S_{\frac{\gamma }{2}}(Q-\frac{\beta _2}{2}+\sigma _3-\sigma _2+r) S_{\frac{\gamma }{2}}(\frac{\beta _3}{2}+\sigma _3-\sigma _1+r) S_{\frac{\gamma }{2}}(Q-\frac{\beta _3}{2}+\sigma _3-\sigma _1+r)}{S_{\frac{\gamma }{2}}(Q+\frac{\beta _1}{2}-\frac{\beta _2}{2}+\sigma _3-\sigma _1+r) S_{\frac{\gamma }{2}}(2Q-\frac{\beta _1}{2}-\frac{\beta _2}{2}+\sigma _3-\sigma _1+r) S_{\frac{\gamma }{2}}(Q+r)}\nonumber \\&e^{i\pi (-\frac{\beta _2}{2}+\sigma _2-\sigma _3)r} \frac{dr}{i}, \end{aligned}$$
(5.127)

where the contour \({\mathcal {C}}\) of the integral goes from \(-i \infty \) to \( i \infty \) passing to the right of the poles at \(r = -(Q-\frac{\beta _2}{2}+\sigma _3-\sigma _2) -n\frac{\gamma }{2}-m\frac{2}{\gamma }\), \(r = -(\frac{\beta _3}{2}+\sigma _3-\sigma _1)-n\frac{\gamma }{2}-m\frac{2}{\gamma }\), \(r= -(Q-\frac{\beta _3}{2}+\sigma _3-\sigma _1)-n\frac{\gamma }{2}-m\frac{2}{\gamma }\) and to the left of the poles at \(r=-(\frac{\beta _1}{2}-\frac{\beta _2}{2}+\sigma _3-\sigma _1)+n\frac{\gamma }{2}+m\frac{2}{\gamma }\), \(r = -(Q-\frac{\beta _1}{2}-\frac{\beta _2}{2}+\sigma _3-\sigma _1)+n\frac{\gamma }{2}+m\frac{2}{\gamma } \), \(r=n\frac{\gamma }{2}+m\frac{2}{\gamma } \) with \(m,n \in {\mathbb {N}}\). Furthermore the poles of the function \({\mathcal {J}}\) occur when \(\zeta = n \frac{\gamma }{2} + m \frac{2}{\gamma }\) where \(n, m \in {\mathbb {N}}\) and \(\zeta \) is equal to any of the following

$$\begin{aligned} \begin{array}{lll} - Q + \sigma _2 + \frac{\beta _1}{2} - \sigma _1, &{} \quad \sigma _2 - \sigma _1 - \frac{\beta _1}{2}, &{} \quad -Q + \frac{\beta _2}{2} - \sigma _3 + \sigma _2, \\ \frac{\beta _1}{2} - \frac{\beta _2}{2} - \frac{\beta _3}{2}, &{} \quad Q - \frac{\beta _1}{2} - \frac{\beta _2}{2} - \frac{\beta _3}{2}, &{} \quad - \frac{\beta _3}{2} - \sigma _3 + \sigma _1, \\ - Q + \frac{\beta _1}{2} - \frac{\beta _2}{2} + \frac{\beta _3}{2}, &{} \quad \frac{\beta _3}{2} - \frac{\beta _1}{2} - \frac{\beta _2}{2}, &{} \quad - Q + \frac{\beta _3}{2} - \sigma _3 + \sigma _1. \end{array} \end{aligned}$$

Proof

In the process of proving the above claims, we will also explain in detail how the contour \({\mathcal {C}}\) is chosen. Notice first how the poles of the integrand of \({\mathcal {J}}\) are located. There are three lattices of poles starting from \(-(Q-\frac{\beta _2}{2}+\sigma _3-\sigma _2)\), \(-(\frac{\beta _3}{2}+\sigma _3-\sigma _1)\), \(-(Q-\frac{\beta _3}{2}+\sigma _3-\sigma _1)\) and extending to \(-\infty \) by increments of \(\frac{\gamma }{2}\) and \(\frac{2}{\gamma }\); we call these the left lattices. Similarly we have three lattices of poles starting from 0, \(-(\frac{\beta _1}{2}-\frac{\beta _2}{2}+\sigma _3-\sigma _1)\), \(-(Q-\frac{\beta _1}{2}-\frac{\beta _2}{2}+\sigma _3-\sigma _1) \) and extending to \(+\infty \) by similar increments; we call these the right lattices. The situation where it is easiest to draw the correct contour \({\mathcal {C}}\) is when the parameters are chosen such that the poles of the six different lattices all have different imaginary parts. Then it is clear how to draw a line starting from \(-i \infty \), passing to the right of the left lattices of poles, to the left of the right lattices of poles, and finally continuing to \(+ i \infty \).

Let us check why the condition (5.126) implies the convergence of such a contour integral at \(-i \infty \) and \(+ i \infty \). Using the asymptotics of \(S_{\frac{\gamma }{2}}\) given by (5.123), one finds that the integrand of \({\mathcal {J}}\), as \(r \rightarrow i \infty \), is equivalent to \(c_1 e^{2 i \pi Q r} e^{-i \pi r(2 \sigma _3 - 2 \sigma _2 +\beta _2)}\) for \(c_1 \in {\mathbb {C}}\) a constant independent of r. In the other direction, as \(r \rightarrow -i \infty \), one finds similarly that the integrand is equivalent to \(c_2 e^{-2 i \pi Q r}\), which always gives a convergent integral. The asymptotics as \(r \rightarrow i \infty \) thus tell us that the integral converges if \( Q > \mathrm {Re}(\sigma _3 - \sigma _2 + \frac{\beta _2}{2})\).

Lastly let us discuss the poles of \({\mathcal {J}}\) as a function of its parameters. This problem is related to being able to choose the contour \({\mathcal {C}}\) correctly in every situation. It turns out the poles of \({\mathcal {J}}\) occur when the parameters \(\beta _i, \sigma _i\) are chosen such that a pole from one of the left lattices coincides with a pole from one of the right lattices. In such a situation it is clearly not possible to choose the contour \({\mathcal {C}}\) as required. To solve this issue one can for instance deform the contour \({\mathcal {C}}\) in a small neighborhood of the collapsing poles so that it crosses one of the two poles before they collapse. By the residue theorem this adds a meromorphic function alongside the contour integral in the expression of \({\mathcal {J}}\). This meromorphic function then has a pole precisely in the situation we described, where the parameters are such that poles from the left and right lattices collapse. Lastly one can see by drawing a picture of all the poles that in any other situation one can always draw the contour \({\mathcal {C}}\). There is one tricky situation where a pole from a left lattice lies to the right of a pole from a right lattice, with both poles having the same imaginary part. But in this setup the condition on \({\mathcal {C}}\) can be satisfied by choosing a figure-eight shaped contour around the two poles.

The conclusion is thus that it is possible to choose consistently the contour \({\mathcal {C}}\) passing to the left and right of the right and left lattices of poles, except when the parameters are such that two poles of a right and a left lattice coincide. These special points then correspond to poles of the meromorphic function \({\mathcal {J}}\) and are given by the values taken by \(\zeta \) in the statement of the lemma. \(\square \)

Building upon the previous lemma we can easily deduce the following result about \({\mathcal {I}}\).

Lemma 5.12

Recall the expression

$$\begin{aligned}&{\mathcal {I}} \begin{pmatrix} \beta _1 , \beta _2, \beta _3 \\ \sigma _1, \sigma _2, \sigma _3 \end{pmatrix}\\&= {\mathcal {J}} \times \frac{(2\pi )^{\frac{2Q-{\overline{\beta }}}{\gamma }+1}(\frac{2}{\gamma })^{(\frac{\gamma }{2}-\frac{2}{\gamma })(Q-\frac{{\overline{\beta }}}{2})-1}}{\Gamma (1-\frac{\gamma ^2}{4})^{\frac{2Q-{\overline{\beta }}}{\gamma }}\Gamma (\frac{{\overline{\beta }}-2Q}{\gamma })} \frac{\Gamma _{\frac{\gamma }{2}}(2Q-\frac{{\overline{\beta }}}{2})\Gamma _{\frac{\gamma }{2}}(\frac{\beta _1+\beta _3-\beta _2}{2})\Gamma _{\frac{\gamma }{2}}(Q-\frac{\beta _1+\beta _2-\beta _3}{2})\Gamma _{\frac{\gamma }{2}}(Q-\frac{\beta _2+\beta _3-\beta _1}{2})}{\Gamma _{\frac{\gamma }{2}}(Q) \Gamma _{\frac{\gamma }{2}}(Q-\beta _1) \Gamma _{\frac{\gamma }{2}}(Q-\beta _2) \Gamma _{\frac{\gamma }{2}}(Q-\beta _3)} \nonumber \\&\quad \times \frac{e^{i\frac{\pi }{2}(-(2Q-\frac{\beta _1}{2}-\sigma _1-\sigma _2)(Q-\frac{\beta _1}{2}-\sigma _1-\sigma _2) + (Q+\frac{\beta _2}{2}-\sigma _2-\sigma _3)(\frac{\beta _2}{2}-\sigma _2-\sigma _3)+(Q+\frac{\beta _3}{2}-\sigma _1-\sigma _3)(\frac{\beta _3}{2}-\sigma _1-\sigma _3) -2\sigma _3(2\sigma _3-Q) )}}{S_{\frac{\gamma }{2}}(\frac{\beta _1}{2}+\sigma _1-\sigma _2) S_{\frac{\gamma }{2}}(\frac{\beta _3}{2}+\sigma _3-\sigma _1) }. \nonumber \end{aligned}$$
(5.128)

\({\mathcal {I}}\) is defined as a meromorphic function of all its parameters on the domain given by \( Q > \mathrm {Re} \left( \sigma _3 - \sigma _2 + \frac{\beta _2}{2} \right) \). The poles are a subset of the values of parameters for which there exists \(n, m \in {\mathbb {N}}\) such that \(\zeta = n \frac{\gamma }{2} + m \frac{2}{\gamma }\) is equal to one of the following:

$$\begin{aligned} \begin{array}{lll} - Q + \sigma _2 + \frac{\beta _1}{2} - \sigma _1, &{} \quad \sigma _2 - \sigma _1 - \frac{\beta _1}{2}, &{} \quad -Q + \frac{\beta _2}{2} - \sigma _3 + \sigma _2, \\ \frac{\beta _1}{2} - \frac{\beta _2}{2} - \frac{\beta _3}{2}, &{} \quad Q - \frac{\beta _1}{2} - \frac{\beta _2}{2} - \frac{\beta _3}{2}, &{} \quad - \frac{\beta _3}{2} - \sigma _3 + \sigma _1, \\ - Q + \frac{\beta _1}{2} - \frac{\beta _2}{2} + \frac{\beta _3}{2}, &{} \quad \frac{\beta _3}{2} - \frac{\beta _1}{2} - \frac{\beta _2}{2}, &{} \quad - Q + \frac{\beta _3}{2} - \sigma _3 + \sigma _1, \\ \frac{\beta _1}{2} + \frac{\beta _2}{2}+ \frac{\beta _3}{2} - 2Q, &{} \quad \frac{\beta _2}{2} - \frac{\beta _1}{2} - \frac{\beta _3}{2}, &{} \quad - Q + \frac{\beta _1}{2} + \frac{\beta _2}{2} - \frac{\beta _3}{2}, \\ - Q + \frac{\beta _2}{2} + \frac{\beta _3}{2} - \frac{\beta _1}{2}, &{} \quad - Q + \frac{\beta _1}{2} + \sigma _1 - \sigma _2, &{} \quad -Q + \frac{\beta _3}{2} + \sigma _3 -\sigma _1. \end{array} \end{aligned}$$

Proof

The proof of this claim is a direct consequence of Lemma 5.11, since \({\mathcal {I}}\) is obtained from \({\mathcal {J}}\) by multiplying by an explicit meromorphic function with a known pole structure. One simply adds in the list of poles of \({\mathcal {J}}\) the poles coming from this function. \(\square \)

The obvious drawback of the expression of \({\mathcal {I}}\) is that the integral \({\mathcal {J}}\) it contains forces one to work under the condition \( Q > \mathrm {Re} \left( \sigma _3 - \sigma _2 + \frac{\beta _2}{2} \right) \). Luckily, thanks to the results of the main text, we can propose a meromorphic extension of \({\mathcal {I}}\) to all of \({\mathbb {C}}^6\). The logic is as follows. First, by the result of Lemma 3.6, we know the function \({\overline{H}}\) defined using GMC admits a meromorphic extension to a subset of \({\mathbb {C}}^6\), where the \(\beta _i\) lie in a complex neighborhood of \({\mathbb {R}}\) and the \(\sigma _i\) in \({\mathbb {C}}\). Then, thanks to the results of Sect. 3.3, we know that the function \({\mathcal {I}}\) matches on its domain of definition with the meromorphic extension of \({\overline{H}}\). Therefore \({\mathcal {I}}\) admits a meromorphic extension to the same subset of \({\mathbb {C}}^6\). Furthermore, by using simple symmetries of the function \({\overline{H}}\) proved in the main text it is actually possible to deduce an analytic continuation of \({\mathcal {I}}\) to \({\mathbb {C}}^6\). The two symmetries we will use correspond to performing a cyclic permutation of the parameters and to using the reflection principle linking \(\beta _i\) and \(2Q - \beta _i\). For this purpose consider the following list of domains with the associated function \({\mathcal {I}}\).

$$\begin{aligned} \begin{array}{ll} {\mathcal {I}} \begin{pmatrix} \beta _1 , \beta _2, \beta _3 \\ \sigma _1, \sigma _2, \sigma _3 \end{pmatrix}, &{} \quad {\mathcal {D}}_1 = \left\{ (\beta _i, \sigma _i)_{i = 1,2,3} \in {\mathbb {C}}^6 \, \vert \, \mathrm {Re} \left( Q - \sigma _3 + \sigma _2 - \frac{\beta _2}{2} \right)>0 \right\} , \\ {\mathcal {I}} \begin{pmatrix} \beta _1 , 2Q - \beta _2, \beta _3 \\ \sigma _1, \sigma _2, \sigma _3 \end{pmatrix}, &{} \quad {\mathcal {D}}_2 = \left\{ (\beta _i, \sigma _i)_{i = 1,2,3} \in {\mathbb {C}}^6 \, \vert \, \mathrm {Re} \left( - \sigma _3 + \sigma _2 + \frac{\beta _2}{2} \right)>0 \right\} , \\ {\mathcal {I}} \begin{pmatrix} \beta _2 , \beta _3, \beta _1 \\ \sigma _2, \sigma _3, \sigma _1 \end{pmatrix}, &{} \quad {\mathcal {D}}_3 = \left\{ (\beta _i, \sigma _i)_{i = 1,2,3} \in {\mathbb {C}}^6 \, \vert \, \mathrm {Re} \left( Q - \sigma _1 + \sigma _3 - \frac{\beta _3}{2} \right)>0 \right\} , \\ {\mathcal {I}} \begin{pmatrix} \beta _2 , 2Q - \beta _3, \beta _1 \\ \sigma _2, \sigma _3, \sigma _1 \end{pmatrix}, &{} \quad {\mathcal {D}}_4 = \left\{ (\beta _i, \sigma _i)_{i = 1,2,3} \in {\mathbb {C}}^6 \, \vert \, \mathrm {Re} \left( - \sigma _1 + \sigma _3 + \frac{\beta _3}{2} \right)>0 \right\} , \\ {\mathcal {I}} \begin{pmatrix} \beta _3 , \beta _1, \beta _2 \\ \sigma _3, \sigma _1, \sigma _2 \end{pmatrix}, &{} \quad {\mathcal {D}}_5 = \left\{ (\beta _i, \sigma _i)_{i = 1,2,3} \in {\mathbb {C}}^6 \, \vert \, \mathrm {Re} \left( Q - \sigma _2 + \sigma _1 - \frac{\beta _1}{2} \right)>0 \right\} , \\ {\mathcal {I}} \begin{pmatrix} \beta _3 , 2Q - \beta _1, \beta _2 \\ \sigma _3, \sigma _1, \sigma _2 \end{pmatrix},&\quad {\mathcal {D}}_6 = \left\{ (\beta _i, \sigma _i)_{i = 1,2,3} \in {\mathbb {C}}^6 \, \vert \, \mathrm {Re} \left( - \sigma _2 + \sigma _1 + \frac{\beta _1}{2} \right) >0 \right\} . \end{array} \end{aligned}$$

We first give the following lemma about the domains \({\mathcal {D}}_i\).

Lemma 5.13

Consider the six domains \({\mathcal {D}}_i\), \(i=1, \dots , 6\), defined above. Then one has \(\cup _{i=1}^6 {\mathcal {D}}_i = {\mathbb {C}}^6\). Furthermore for any \(i, j\) the set \({\mathcal {D}}_i \cap {\mathcal {D}}_j \) is non-empty and contains an open ball of \({\mathbb {C}}^6\).

Proof

To see why the first claim is true, assume a point \((\beta _i, \sigma _i)_{i = 1,2,3} \in {\mathbb {C}}^6\) is contained in none of the domains \({\mathcal {D}}_i\). Then it satisfies the reverse inequalities of those defining the domains \({\mathcal {D}}_i\). Summing these six inequalities yields \(3Q \le 0\), a contradiction since \(Q>0\). The second claim of the lemma is straightforward to check from the inequalities. \(\square \)

Using the above lemma it is now possible to express the analytic continuation of \({\mathcal {I}}\) to \({\mathbb {C}}^6\) simply by patching the different domains \({\mathcal {D}}_i\), since their overlaps always contain an open set.

Lemma 5.14

The function \({\mathcal {I}}\) of the six parameters \(\beta _1, \beta _2, \beta _3, \sigma _1, \sigma _2, \sigma _3\) originally defined on the domain \({\mathcal {D}}_1\) admits a meromorphic extension to \({\mathbb {C}}^6\). Furthermore its expression on any of the domains \({\mathcal {D}}_i\) can be given up to an explicit prefactor in terms of the function \({\mathcal {I}}\) written to the left of the definition of the corresponding \({\mathcal {D}}_i\) in the above table.

Proof

From the results of the main text we know \({\mathcal {I}}\) matches with \({\overline{H}}\) on the intersection of \({\mathcal {D}}_1\) and of the subset of \({\mathbb {C}}^6\) where \({\overline{H}}\) has been analytically extended. Next, using the probabilistic definition of \({\overline{H}}\) one can check that this function is invariant under a cyclic permutation of the indices \(i =1,2,3\), applied simultaneously to both sets of variables \(\beta _i\) and \(\sigma _i\). Similarly, under the reflection \(\beta _i \rightarrow 2Q -\beta _i\) the function \({\overline{H}}\) gets multiplied by an explicit meromorphic function given in Lemma 3.4. These same properties must then hold for the function \({\mathcal {I}}\). But by performing the cyclic permutation or the reflection, the domain of validity of \({\mathcal {I}}\) changes from \({\mathcal {D}}_1\) to one of the other \({\mathcal {D}}_i\). By the result of Lemma 5.13 the different expressions are meromorphic extensions of one another, and jointly they cover all of \({\mathbb {C}}^6\). \(\square \)

Lastly we include here a sanity check on the formula of \({\mathcal {I}}\). We check that it obeys the scaling property verified by the probabilistic formula for \({\overline{H}}\) that was used in the proof of Lemma 3.6.

Lemma 5.15

Let \(A \in {\mathbb {C}}\). Then the following holds as an equality of meromorphic functions of \({\mathbb {C}}^6\)

$$\begin{aligned} {\mathcal {I}} \begin{pmatrix} \beta _1 , \beta _2, \beta _3 \\ \sigma _1 +A , \sigma _2 + A, \sigma _3 +A \end{pmatrix} = e^{i \pi A(2Q - \beta _1 - \beta _2 - \beta _3)}{\mathcal {I}} \begin{pmatrix} \beta _1 , \beta _2, \beta _3 \\ \sigma _1, \sigma _2, \sigma _3 \end{pmatrix}. \end{aligned}$$
(5.129)

Proof

Notice that when replacing all the \(\sigma _i\) by \(\sigma _i +A\), the function \({\mathcal {J}}\) does not change, since it depends on the \(\sigma _i\) only through their differences. The only factor of \({\mathcal {I}}\) that changes is

$$\begin{aligned} \exp \Big ( i\frac{\pi }{2}\big (&-(2Q-\frac{\beta _1}{2}-\sigma _1-\sigma _2)(Q-\frac{\beta _1}{2}-\sigma _1-\sigma _2) \\&+ (Q+\frac{\beta _2}{2}-\sigma _2-\sigma _3)(\frac{\beta _2}{2}-\sigma _2-\sigma _3)\\&+(Q+\frac{\beta _3}{2}-\sigma _1-\sigma _3)(\frac{\beta _3}{2}-\sigma _1-\sigma _3) -2\sigma _3(2\sigma _3-Q)\big )\Big ). \end{aligned}$$

A direct computation then shows the change gives precisely a factor \(e^{i \pi A(2Q - \beta _1 - \beta _2 - \beta _3)}\).

\(\square \)
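The direct computation mentioned in the proof can also be carried out symbolically. The following sketch (using sympy, purely as a check) verifies that shifting all the \(\sigma _i\) by A changes the argument of the exponential prefactor of (5.128) by exactly \(2A(2Q-\beta _1-\beta _2-\beta _3)\), i.e. produces the factor \(e^{i \pi A(2Q - \beta _1 - \beta _2 - \beta _3)}\).

```python
import sympy as sp

Q, A = sp.symbols('Q A')
b1, b2, b3 = sp.symbols('beta1 beta2 beta3')
s1, s2, s3 = sp.symbols('sigma1 sigma2 sigma3')

def exponent(t1, t2, t3):
    # argument of i*pi/2*( ... ) in the exponential prefactor of (5.128), as a function of the sigma's
    return (-(2*Q - b1/2 - t1 - t2)*(Q - b1/2 - t1 - t2)
            + (Q + b2/2 - t2 - t3)*(b2/2 - t2 - t3)
            + (Q + b3/2 - t1 - t3)*(b3/2 - t1 - t3)
            - 2*t3*(2*t3 - Q))

# J and the two double sine factors depend on the sigma's only through differences,
# so shifting sigma_i -> sigma_i + A only changes this exponent
diff = sp.expand(exponent(s1 + A, s2 + A, s3 + A) - exponent(s1, s2, s3))
assert sp.expand(diff - 2*A*(2*Q - b1 - b2 - b3)) == 0   # i.e. the factor e^{i*pi*A*(2Q - beta1 - beta2 - beta3)}
print("scaling identity of Lemma 5.15 verified symbolically")
```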

1.4.4 Some useful integrals

Lemma 5.16

For \(\theta _0 \in [-\pi ,\pi ]\), \(-1<g< 1\) and \(1 \vee (1+g)<b<2\) we have the identity:

$$\begin{aligned} \int _{{\mathbb {R}}_+ e^{i\theta _0}} \frac{ (1+u)^{ g } - 1}{ u^b} du=\frac{\Gamma (1-b)\Gamma (-1+b-g)}{\Gamma (-g)}. \end{aligned}$$
(5.130)

By \({\mathbb {R}}_+ e^{i\theta _0}\) we mean the complex contour obtained by rotating the half-line \((0,+\infty )\) by the angle \(\theta _0\). In particular for \(\theta _0 = \pi \) it passes above \(-1\) and for \(\theta _0 = - \pi \) it passes below.

Proof

Denote \((x)_n:= x(x+1)\dots (x+n-1)\). We start with the case \(\theta _0 = 0\):

$$\begin{aligned}&\int _0^{\infty } \frac{ (1+u)^{ g } - 1}{ u^b} du = \sum _{n=0}^{\infty } \frac{(-1)^n}{n!} (-g)_n \frac{1}{n +1-b } - \sum _{n=0}^{\infty } \frac{(-1)^n}{n!} (-g)_n \frac{1}{ 1-b +g-n }\nonumber \\ \nonumber&\quad =\frac{1}{1-b}\sum _{n=0}^{\infty } \frac{(-1)^n}{n!} \frac{(-g)_n (1-b)_n}{(2-b)_n} - \frac{1}{1-b + g}\sum _{n=0}^{\infty } \frac{(-1)^n}{n!} \frac{(-g)_n(-1+b - g)_n}{ (b - g )_n } \\&\quad =\frac{1}{1-b}\,F(-g,1-b,2-b,-1)-\frac{1}{1-b+g}\,F(-g,-1+b-g,b-g,-1)\\ \nonumber&\quad =\frac{\Gamma (1-b)\Gamma (-1+b-g)}{\Gamma (-g)}, \end{aligned}$$
(5.131)

where in the last line we used the formula, for suitable \({\overline{a}}, {\overline{b}} \in {\mathbb {R}}\),

$$\begin{aligned} {\bar{b}}\,F({\bar{a}}+{\bar{b}},{\bar{a}},{\bar{a}}+1,-1)+{\bar{a}}\,F({\bar{a}}+{\bar{b}},{\bar{b}},{\bar{b}}+1,-1)=\frac{\Gamma ({\bar{a}}+1)\Gamma ({\bar{b}}+1)}{\Gamma ({\bar{a}}+{\bar{b}})}. \end{aligned}$$
(5.132)

Then by rotating the contour, it is easy to observe that the value of the integral is the same for all \(\theta _0 \in [-\pi ,\pi ]\), which finishes the proof.\(\square \)

A direct consequence by a change of variable is the following identity:

Lemma 5.17

For \(\theta _0 \in [-\pi ,\pi ]\), \(-1<g< 1\) and \(g<b<1 \wedge (1+g)\) we have the identity:

$$\begin{aligned} \int _{{\mathbb {R}}_+ e^{i\theta _0}} \frac{ (1+u)^{ g } - u^g}{ u^b} du=\frac{\Gamma (1-b)\Gamma (-1+b-g)}{\Gamma (-g)}. \end{aligned}$$
(5.133)
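For \(\theta _0 = 0\) the identities (5.130) and (5.133) are straightforward to check numerically; the following Python sketch does so for a few arbitrary admissible values of g and b.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as G

def target(g, b):
    return G(1 - b) * G(-1 + b - g) / G(-g)

def check_5_130(g, b):
    # theta_0 = 0; the integral is split at u = 1 to help the quadrature
    f = lambda u: ((1 + u) ** g - 1) / u ** b
    val = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
    assert np.isclose(val, target(g, b), rtol=1e-6), (val, target(g, b))

def check_5_133(g, b):
    f = lambda u: ((1 + u) ** g - u ** g) / u ** b
    val = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
    assert np.isclose(val, target(g, b), rtol=1e-6), (val, target(g, b))

check_5_130(g=0.3, b=1.6)      # requires 1 v (1 + g) < b < 2
check_5_130(g=-0.4, b=1.2)
check_5_133(g=0.3, b=0.7)      # requires g < b < 1 ^ (1 + g)
check_5_133(g=-0.4, b=0.4)
print("identities (5.130) and (5.133) verified numerically for theta_0 = 0")
```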

About this article

Cite this article

Remy, G., Zhu, T. Integrability of Boundary Liouville Conformal Field Theory. Commun. Math. Phys. 395, 179–268 (2022). https://doi.org/10.1007/s00220-022-04455-1
