Carving out OPE space and precise $O(2)$ model critical exponents

We develop new tools for isolating CFTs using the numerical bootstrap. A "cutting surface" algorithm for scanning OPE coefficients makes it possible to find islands in high-dimensional spaces. Together with recent progress in large-scale semidefinite programming, this enables bootstrap studies of much larger systems of correlation functions than was previously practical. We apply these methods to correlation functions of charge-0, 1, and 2 scalars in the 3d $O(2)$ model, computing new precise values for scaling dimensions and OPE coefficients in this theory. Our new determinations of scaling dimensions are consistent with and improve upon existing Monte Carlo simulations, sharpening the decades-old $8\sigma$ discrepancy between theory and experiment.


Large-scale bootstrap problems
Numerical bootstrap methods [1,2] (see [3,4] for recent reviews) can help achieve two important goals: (1) make general statements about the space of all CFTs, and (2) isolate specific theories and compute their observables to high precision. In this work, we introduce new tools for isolating theories, and apply them to the 3d critical O(2) model.
To isolate a theory with the numerical bootstrap, one must choose a set of crossing symmetry equations and make reasonable assumptions about the spectrum of the theory. By analyzing the crossing equations using convex optimization, one obtains exclusion plots in the space of CFT data. In favorable circumstances, such exclusion plots contain small islands around the theory of interest; we then say that we have "isolated" the theory [5][6][7][8][9]. It is unknown in general which crossing equations and assumptions are needed to isolate a given theory. However, it is clearly important to incorporate as much information about the target theory as possible. In practice, this means we would like to study large systems of correlation functions involving multiple scalars [10][11][12][13][14][15], fermions [16][17][18], currents [19,20], stress tensors [21], various global symmetry representations, etc. There are many indications that such large-scale bootstrap problems could help isolate myriad interesting theories. Until recently, our ability to study large systems of correlation functions has been limited. One tool that will facilitate going beyond previous studies is a new version of the semidefinite program solver SDPB [82], which can now run on hundreds of cores across multiple machines [83].
Besides solving big semidefinite programs, another issue that arises in large-scale bootstrap studies is the difficulty of searching high-dimensional spaces. More crossing equations are parametrized by more input data, including scaling dimensions and OPE coefficients. If some input data is unknown, then we must scan over it to make an exclusion plot. For example, to study correlation functions of the scalars σ and ε in the 3d Ising model, we must scan over their scaling dimensions ∆_σ and ∆_ε. It was shown in [7] that it is also beneficial to scan over the OPE coefficient ratio λ_σσε/λ_εεε. Specifically, the island in the space of scaling dimensions and OPE coefficient ratios is smaller than the island in the space of scaling dimensions alone. To study an even larger system of correlation functions, one must scan over an even larger set of scaling dimensions and OPE coefficients.
One of the main contributions of this work is an efficient "cutting surface" algorithm for scanning over OPE coefficients. Because OPE coefficients enter quadratically in the crossing equations, our algorithm can scan a region of volume V in OPE coefficient space in time logarithmic in V. We also explain how to use our algorithm in conjunction with hot-starting [84], and introduce efficient methods for scanning over scaling dimensions. We apply our methods to study correlation functions of the lowest-dimension charge-0, charge-1, and charge-2 scalars in the three-dimensional critical O(2) model. The 3d O(2) model is one of the most studied renormalization group (RG) fixed points, both theoretically and experimentally. It describes phase transitions in numerous physical systems, including ferromagnets and antiferromagnets with easy-plane anisotropy, from which it also inherits the name of the XY universality class. Unfortunately, experimental results and Monte Carlo results for the critical exponents of the O(2) model have been in 8σ tension for two decades. We have computed the critical exponents to high precision (with rigorous error bars). We find excellent agreement with Monte Carlo results, and a clear discrepancy with experiment. In addition, we compute numerous other scaling dimensions and OPE coefficients in the O(2) model. Our results, together with comparisons to other methods, are summarized in table 1.

Experimental and theoretical approaches to the 3d O(2) model
In the remainder of this introduction, we provide an account of past approaches to the 3d O(2) model, including a history of the discrepancy between experiment and Monte Carlo. We also describe past bootstrap studies of the O(2) model and motivate the calculation in this work.
The simplest continuum field theory in the O(2) universality class is the theory of a scalar field φ_i transforming in the fundamental representation of O(2), with Lagrangian

$$\mathcal{L} = \frac{1}{2}(\partial_\mu \phi_i)(\partial^\mu \phi_i) + \frac{1}{2} m^2\, \phi_i \phi_i + \frac{\lambda}{4} (\phi_i \phi_i)^2.$$

A large negative mass-squared for the scalar induces spontaneous symmetry breaking, leading to the ordered phase, while a large positive mass-squared leads to the disordered phase. The critical point is achieved by tuning the UV mass so that the IR correlation length diverges. Critical exponents are linked to operator dimensions at the fixed point by the simple relations

$$\eta = 2\Delta_\phi - 1, \qquad \nu = \frac{1}{3 - \Delta_s}.$$

Here, s ∼ |φ|² denotes the lowest-dimension charge-0 scalar.
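As a quick numerical check of this dictionary, one can translate back and forth between exponents and dimensions. The snippet below is a minimal sketch (the function names are ours, not from this work); the Monte Carlo value ν ≈ 0.67169 quoted below corresponds to ∆_s ≈ 1.5112.

```python
def nu_from_delta_s(delta_s, d=3):
    """Correlation-length exponent from the leading charge-0 scalar: nu = 1/(d - Delta_s)."""
    return 1.0 / (d - delta_s)

def eta_from_delta_phi(delta_phi, d=3):
    """Anomalous dimension of the order parameter: eta = 2*Delta_phi - d + 2."""
    return 2.0 * delta_phi - d + 2

# The Monte Carlo value nu = 0.67169 corresponds to Delta_s = 3 - 1/nu:
delta_s_mc = 3.0 - 1.0 / 0.67169
```

Inverting either relation is immediate, which is how bootstrap determinations of (∆_φ, ∆_s) are compared with exponent measurements.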

The λ-point experiment
Perhaps the most intriguing experimental representative of the O(2) universality class is the superfluid transition in ⁴He along the so-called λ-line, see Fig. 1. Several features make this system ideal for experimental tests of critical phenomena. Firstly, the transition is second-order along the entire λ-line. This should be compared, for instance, with the liquid-vapor transition in water, where the critical point occurs at a single point in the temperature-pressure plane. Secondly, the steep slope of the λ-line makes the critical temperature only weakly dependent on the pressure. Thirdly, the facts that the compressibility is only weakly divergent at the critical point and that one side of the phase transition is a superfluid state (thus free of temperature gradients) render the system less subject to gravitational effects, which still represent the major limitation for Earth-bound experiments.
The Earth's gravitational field creates a challenge for precise measurements of critical points in fluids. Gravity has two main effects [89]: (1) it induces a density gradient in the fluid, making the system inhomogeneous; (2) more dramatically, it prevents fluctuations from growing indefinitely, making the correlation length effectively finite (gravitational rounding). Because of gravitational effects, most Earth-bound critical systems can only be tuned to |t| ≳ 10⁻⁴, where t is the reduced temperature t = 1 − T/T_c. Due to its favorable properties, the superfluid transition of ⁴He can instead reach |t| ≳ 10⁻⁷. To get even closer to the critical regime, the λ-point experiment was conducted on the Space Shuttle Columbia in 1992 [90]. The micro-gravity environment allowed the experiment to reach |t| ≳ 5 × 10⁻⁹. In [91,85], by fitting measurements from the λ-point experiment, the following value of the critical exponent ν was obtained:

$$\nu_\text{exp} = 0.6709(1). \quad (3)$$

Monte Carlo results
Over the past few decades, Monte Carlo (MC) simulations of lattice models in the O(2) universality class have provided the most precise theoretical predictions for critical exponents in the O(2) model. The most recent determination using purely MC techniques [86] gives

$$\nu_\text{MC} = 0.67169(7). \quad (4)$$
We refer to [92] and references therein for older results.⁴ The above value is in full agreement with the second most precise theoretical determination of ν: in [96], MC simulations were combined with (uncontrolled) high-temperature (HT) expansion⁵ computations to obtain

$$\nu_\text{MC+HT} = 0.6717(1). \quad (6)$$

Unfortunately, comparison of the MC results (4) and (6) with the experimental determination (3) reveals a large discrepancy of approximately 8σ. The obvious question is: which one is correct?

⁴ A determination that post-dates the review [92] is ν_MC = 0.6717(3) in [93]. A more recent computation using pseudo-ε-expansion methods was performed in [94], giving ν_p = 0.6706(12), which is closer to the experimental result. Another determination was recently obtained in [95] using only MC techniques, ν_MC = 0.67183(18). The latter determination and the value in (4) do not entirely overlap.

⁵ In the HT expansion, the generating functional is expanded in powers of the inverse temperature β. Each term in the expansion can then be interpreted as a graphical sum. Each graph consists of vertices (lattice sites) connected by bonds, each of which is associated with a factor of β. The enumeration of graphs becomes a combinatorial problem and can be automated (see [92] for a list of available HT series). Once the series is known to sufficiently large order, it can be Borel resummed and extended down to the critical temperature. The computation of [96] used a 22nd-order expansion.

The conformal bootstrap
The numerical conformal bootstrap offers a rigorous and independent method to resolve this controversy. Three-dimensional O(N) models were first studied with bootstrap methods in [25] by considering the correlation function ⟨φ_i φ_j φ_k φ_l⟩, where φ_i is the lowest-dimension scalar transforming in the vector representation of O(N). That work showed that the O(N) models occupy special places in the space of three-dimensional CFTs: they saturate bounds on scalar operator dimensions, and their presence is signaled by a "kink" in those bounds: a change of slope along an otherwise smooth boundary.
A rigorous determination of the critical exponents ν and η was later obtained in [6] by studying all nontrivial four-point functions containing φ_i and the lowest-dimension singlet scalar s, furthermore imposing that φ_i and s are the only relevant scalars in their respective O(N) representations. The resulting bounds on (∆_φ, ∆_s) carve out an isolated island where the O(N) model lives, together with a detached region where all other O(N)-symmetric CFTs satisfying these relevance assumptions must live.
The computation of [6] was further improved for the cases N = 1, 2, 3 in [7]. The latter work used essentially the same setup as [6], but additionally explored the power of scanning over OPE coefficients. Specifically, the authors asked the following question: in the space (∆_φ, ∆_s, θ), where θ parametrizes the ratio between two three-point function coefficients, tan(θ) ≡ λ_sss/λ_φφs, what is the region consistent with crossing symmetry? It turned out that this apparently simple upgrade has a significant effect, but still not enough to make a conclusive statement about the MC/experiment discrepancy.
A complementary approach for the case N = 2 was initiated in [20], which studied the system of correlators involving the field φ_i and the conserved current associated to the global O(2) symmetry. Although the determination of critical exponents was not competitive with previous bootstrap analyses, this framework gives access to new CFT data, in particular quantities related to transport properties near the quantum critical point. In this work, we study a larger system of correlation functions using numerical bootstrap techniques: in addition to φ_i and s, we incorporate the lowest-dimension charge-2 scalar t_ij ∼ φ_(i φ_j). A motivation for this choice is the idea that there exist strong constraints among the low-twist data of a CFT. For example, in [97,98], it was shown using the lightcone bootstrap that crossing symmetry for the operators σ, ε in the 3d Ising model can be approximately recast as a set of constraints on a small amount of low-twist data, namely ∆_σ, ∆_ε, λ_σσε, λ_εεε, and c_T. This immediately points to a deficiency in previous bootstrap studies of the O(2) model: the operator t_ij is expected to have lower dimension than s (∆_t ≈ 1.2, while ∆_s ≈ 1.5). Thus, it makes sense to include it in the set of crossing equations we study.
As mentioned in section 1.1, studying the larger set of crossing equations involving {φ, s, t} requires searching over more input data: the operator dimensions {∆_φ, ∆_s, ∆_t} and the OPE coefficients {λ_sss, λ_φφs, λ_tts, λ_φφt} (more precisely, their ratios). Our new search algorithms are crucial for scanning this space efficiently. In figure 2, we show the resulting island in the space of scaling dimensions ∆_φ, ∆_s, ∆_t, and compare to Monte Carlo and experimental determinations. Our determination is consistent with Monte Carlo simulations and inconsistent with the results of the λ-point experiment.

Figure 2: 3d region corresponding to our new O(2) island using the {φ_i, s, t_ij} system and OPE scans at Λ = 43 (blue). The result is compared with the best-fit values of ∆_s from ⁴He data [85] (brown planes) and the region for {∆_φ, ∆_s, ∆_t} reported by the Monte Carlo studies [87,99] (green box).

Structure of this work
This work is structured as follows. In section 2, we describe the system of correlation functions in the O(2) model that we study, together with previously known information about its spectrum. In section 3, we introduce new search methods: a "cutting surface" algorithm for scanning over OPE coefficients, tricks for hot-starting, and Delaunay-triangulation methods for searching in dimension space. In section 4, we present results for scaling dimensions and OPE coefficients in the O(2) model. Appendix A provides links to the code used in this work, and appendix B contains technical details of our software and hardware setup. Other appendices provide details about the crossing equations of the O(2) model and specific points that we have tested.

The irreducible representations of O(2) = U(1) ⋊ Z_2 are:
• The trivial representation 0⁺.
• The sign representation 0⁻, in which U(1) acts trivially and the nontrivial element of Z_2 acts by −1.
• For each q ∈ Z_{>0}, a unique two-dimensional irreducible representation q. The states of q have U(1) charges ±q and are exchanged by Z_2.
Tensor products of these irreps are given by

$$q_1 \otimes q_2 = (q_1 + q_2) \oplus |q_1 - q_2| \quad (q_1 \neq q_2), \qquad q \otimes q = (2q)_s \oplus 0^+_s \oplus 0^-_a,$$
$$0^- \otimes 0^- = 0^+, \qquad 0^- \otimes q = q,$$

where s/a denotes the symmetric/antisymmetric part of the tensor product, in the case of identical irreps. For any irrep R of O(2), we define q as the highest U(1) charge in the representation.
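The decomposition q ⊗ q = (2q) ⊕ 0⁺ ⊕ 0⁻ can be verified at the level of characters: for a rotation by θ, χ_q(θ) = 2cos(qθ) and χ_{0±}(θ) = 1, while on a reflection χ_q = 0, χ_{0⁺} = 1, χ_{0⁻} = −1. A small check (our own illustration, not part of this work):

```python
import math

def chi_q(q, theta=0.0, reflection=False):
    """Character of the 2d irrep q of O(2): 2 cos(q*theta) on rotations, 0 on reflections."""
    return 0.0 if reflection else 2.0 * math.cos(q * theta)

def chi_singlet(sign, theta=0.0, reflection=False):
    """Characters of 0^+ (sign=+1) and 0^- (sign=-1)."""
    return float(sign) if reflection else 1.0

# chi_q^2 = chi_{2q} + chi_{0+} + chi_{0-} on rotations and reflections:
for q in (1, 2, 3):
    for theta in (0.3, 1.1, 2.7):
        lhs = chi_q(q, theta) ** 2
        rhs = chi_q(2 * q, theta) + chi_singlet(+1, theta) + chi_singlet(-1, theta)
        assert abs(lhs - rhs) < 1e-12
    # On reflections: 0^2 = 0 + 1 - 1.
    assert chi_q(q, reflection=True) ** 2 == (
        chi_singlet(+1, reflection=True) + chi_singlet(-1, reflection=True)
    )
```

The rotation identity is just 4cos²(qθ) = 2cos(2qθ) + 2, and the reflection identity reflects the fact that 0⁺ and 0⁻ appear with opposite reflection parity.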
Operators O_q(x) in irrep q can be written in terms of O(2) fundamental indices i = 1, 2 as rank-q symmetric traceless tensors O_{i_1...i_q}(x). It is convenient to contract these with auxiliary polarization vectors y^i that are defined to be null, y · y = 0, so that

$$O(x, y) \equiv O_{i_1 \ldots i_q}(x)\, y^{i_1} \cdots y^{i_q}.$$

The singlet operator O_{0⁺}(x) has no indices or y's. The Z_2-odd operator O_{0⁻}(x) could be written with antisymmetric indices O_{[i_1 i_2]}(x). Alternatively, we can take into account the O(2) dependence of correlation functions that include O_{0⁻}(x) in an index-free manner by requiring that all pairs of distinct polarizations y_1, y_2 be contracted as either

$$y_1 \cdot y_2 \qquad \text{or} \qquad y_1 \wedge y_2 \equiv \epsilon_{ij}\, y_1^i\, y_2^j,$$

where the number of ∧'s must be zero/one if an even/odd number of O_{0⁻}(x)'s appear.
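Concretely, in two dimensions a null polarization can be taken proportional to y = (1, ±i); this is a standard choice, used here purely for illustration. Then y·y = 0 automatically, and contracting a rank-q tensor with q copies of y picks out the charge-±q components:

```python
# Null polarization vectors in 2d: y = (1, ±i) satisfy y.y = 0.
y_plus = (1.0, 1j)
y_minus = (1.0, -1j)

def dot(a, b):
    # O(2)-invariant contraction (no complex conjugation).
    return a[0] * b[0] + a[1] * b[1]

print(dot(y_plus, y_plus))   # prints 0j
print(dot(y_plus, y_minus))  # prints (2+0j)
```

Because y_± have definite U(1) charge ±1, the flavor structure (y_1 · y_2)^q in the two-point functions below is nonzero only when the two operators carry opposite charges, as it must be.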
Tensor structures for correlation functions of charged operators can be factorized into "flavor" tensor structures for the O(2) polarization vectors y^i and "kinematic" tensor structures that encode spacetime dependence. For two-point functions, the flavor structure is (y_1 · y_2)^q, multiplying the standard kinematic two-point structure of an operator with dimension ∆ and spin J, where q is the maximal U(1) charge of the O(2) representation of O. For example, for a scalar,

$$\langle O(x_1, y_1)\, O(x_2, y_2) \rangle = c_O\, \frac{(y_1 \cdot y_2)^q}{x_{12}^{2\Delta}}.$$

Here, c_O is a constant that we usually set to 1.
In general, four-point functions of scalar operators ϕ_i(x_i, y_i), where i labels each operator and ϕ_i transforms in the O(2) irrep R_i, can be expanded in the s-channel in terms of conformal blocks as⁷

$$\langle \phi_1 \phi_2 \phi_3 \phi_4 \rangle = \frac{1}{x_{12}^{\Delta_1+\Delta_2}\, x_{34}^{\Delta_3+\Delta_4}} \left(\frac{x_{24}}{x_{14}}\right)^{\Delta_{12}} \left(\frac{x_{14}}{x_{13}}\right)^{\Delta_{34}} \sum_R T^R_{R_1 R_2 R_3 R_4}(y_i) \sum_{O \in R} \lambda_{\phi_1 \phi_2 O}\, \lambda_{\phi_3 \phi_4 O}\, g^{\Delta_{12},\Delta_{34}}_{\Delta,\ell}(u, v), \quad (14)$$

where ∆_ij ≡ ∆_i − ∆_j, the conformal cross ratios u, v are

$$u = \frac{x_{12}^2\, x_{34}^2}{x_{13}^2\, x_{24}^2}, \qquad v = \frac{x_{14}^2\, x_{23}^2}{x_{13}^2\, x_{24}^2},$$

and the operators O that appear in both OPEs ϕ_1 × ϕ_2 and ϕ_3 × ϕ_4 have scaling dimension ∆, spin ℓ, and transform in an irrep R that appears in both tensor products R_1 ⊗ R_2 and R_3 ⊗ R_4. For each R, the O(2) structure T^R_{R_1 R_2 R_3 R_4}(y_i) is a polynomial in the y_i for i = 1, 2, 3, 4 that can be derived by contracting appropriate 3-point functions, as described in appendix C. If ϕ_1 = ϕ_2 (or ϕ_3 = ϕ_4), then Bose symmetry requires that O have only even/odd ℓ for R in the symmetric/antisymmetric product of R_1 ⊗ R_2 (or R_3 ⊗ R_4).

⁷ Our conformal blocks are normalized as in the second line of table 1 in [3].
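The cross ratios can be checked numerically: under the exchange x_1 ↔ x_3, which relates the s- and t-channels, u and v swap. A small sketch with randomly chosen points (our own illustration):

```python
import numpy as np

def cross_ratios(x1, x2, x3, x4):
    """u = x12^2 x34^2 / (x13^2 x24^2),  v = x14^2 x23^2 / (x13^2 x24^2)."""
    d2 = lambda a, b: float(np.sum((a - b) ** 2))
    denom = d2(x1, x3) * d2(x2, x4)
    u = d2(x1, x2) * d2(x3, x4) / denom
    v = d2(x1, x4) * d2(x2, x3) / denom
    return u, v

rng = np.random.default_rng(0)
x1, x2, x3, x4 = rng.normal(size=(4, 3))  # four generic points in 3d
u, v = cross_ratios(x1, x2, x3, x4)
u_t, v_t = cross_ratios(x3, x2, x1, x4)   # exchange x1 <-> x3
assert abs(u - v_t) < 1e-12 and abs(v - u_t) < 1e-12
```

This u ↔ v exchange is precisely what produces the crossing equations when the same four-point function is expanded in two different channels.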
We are interested in four-point functions of the lowest-dimension scalar operators transforming in the 0⁺, 1, and 2 representations, which we denote, following [25,6,7], as s, φ, and t, respectively.⁸ These operators are normalized via their two-point functions as

$$\langle s(x_1)\, s(x_2) \rangle = \frac{1}{x_{12}^{2\Delta_s}}, \qquad \langle \phi(x_1,y_1)\, \phi(x_2,y_2) \rangle = \frac{y_1 \cdot y_2}{x_{12}^{2\Delta_\phi}}, \qquad \langle t(x_1,y_1)\, t(x_2,y_2) \rangle = \frac{(y_1 \cdot y_2)^2}{x_{12}^{2\Delta_t}},$$

where x_12 ≡ |x_1 − x_2|. In table 2 we list the 4-point functions of s, φ, and t that are allowed by O(2) symmetry⁹ and whose s- and t-channel configurations lead to independent crossing equations, along with the irreps and spins of the operators that appear in the OPE, and the number of crossing equations that they yield. These 4-point functions can be written explicitly as in (14), where the explicit O(2) structures T^R_{R_1 R_2 R_3 R_4}(y_i) are computed in appendix C. Equating each of these s-channel 4-point functions with its respective t-channel yields crossing equations of the schematic form

$$\sum_{R,\, \Delta,\, \ell^\pm} \lambda^T\, \vec{V}^{\pm}_{R, \Delta, \ell}\, \lambda = 0, \quad (17)$$

where ± denotes which spins appear, and the V's are 22-dimensional vectors of matrix or scalar crossing equations that are ordered as in table 2 and written in terms of

$$F^{\mp,\, \Delta_{ij},\, \Delta_{kl}}_{\Delta, \ell}(u, v) \equiv v^{\frac{\Delta_j + \Delta_k}{2}}\, g^{\Delta_{ij}, \Delta_{kl}}_{\Delta, \ell}(u, v) \mp u^{\frac{\Delta_j + \Delta_k}{2}}\, g^{\Delta_{ij}, \Delta_{kl}}_{\Delta, \ell}(v, u).$$

The explicit form of the V's is given in appendix D. The same crossing equations were derived and studied independently in [84].¹⁰

⁸ The singlet S, traceless symmetric T, vector V, and antisymmetric A irreps considered in previous O(N) bootstrap papers [25,6,7] correspond for O(2) to the 0⁺, 2, 1, and 0⁻ irreps, respectively.

⁹ These 4-point functions, and the resulting crossing equations, are identical for a theory with just SO(2) symmetry. The only difference is that for SO(2) one has 0⁺ ≅ 0⁻ and ε_ij is an invariant tensor, so one would need to consider correlators involving 0⁻ operators to distinguish O(2) from SO(2).

¹⁰ Furthermore, [84] includes a software package autoboot that can automatically derive equation (17).

Table 2: Four-point function configurations that give independent crossing equations upon equating their s- and t-channels, along with the even/odd spins that appear for each irrep in each channel, and the number of crossing equations that each configuration yields. (The columns list each 4-point function, its s-channel and t-channel irrep/spin content, and the number of equations.)

Assumptions about the spectrum
To obtain precise results for the O(2) model, we must input some information about its spectrum and OPE coefficients. Firstly, we impose that φ, s, t are the only relevant scalars in their respective charge sectors. This assumption is well-supported by other techniques, including Monte Carlo simulation and the ε-expansion. The dimension of the second singlet s′ is related to the critical exponent ω = ∆_{s′} − 3, which has been determined using field theory and numerical techniques [99][100][101] (see also [92] for a list of less precise estimates). To our knowledge, the dimension of the second charge-2 operator t′ has only been determined in the ε-expansion [102]: ∆_{t′} ≈ 3.624(10). The charge-1 case is more subtle: in the ε-expansion one can show that the second charge-1 operator, schematically (φ_k φ_k) φ_a, becomes a descendant of φ_a [103]. The next charge-1 operators are strongly irrelevant close to 4 dimensions; the common lore is that continuation to ε = 1 does not change this property. Also, Monte Carlo simulations do not show any evidence of a second relevant charge-1 perturbation. However, we are not aware of any direct determination.
Concerning higher-charge sectors, there is strong evidence from the ε-expansion [104,105] and MC [87] that there are no relevant charge-4 scalars in the O(2) model. In most of this work, we impose this assumption as well: ∆_{4,ℓ=0} ≥ 3.
Finally, the lowest-dimension charge-3 scalar in the O(2) model is expected to have dimension ≈ 2.1 [106,87].¹¹ This value is actually very close to the upper bound imposed by a bootstrap analysis [11].¹² We impose a conservative bound ∆_{3,ℓ=0} ≥ 1.
For reasons discussed in section 3.6, it is useful to impose a small gap δτ in twist τ = ∆ − ℓ above the unitarity bound for the non-scalar operators in the theory. (The unitarity bound for non-scalars is τ ≥ 1.) Of course the spectrum must include the O(2) current J_µ and the stress tensor T_µν, so we impose the twist gap only for operators with dimensions above the current and stress tensor in their respective sectors. (We impose slightly different gaps in these sectors when computing upper bounds on C_T and C_J, as discussed in section 4.3.) The presence of a small twist gap is expected to be valid in the O(2) model. In the charge-0 sector, Nachtmann's theorem [107][108][109], together with the existence of double-twist operators [108,110], implies that the leading twists τ_ℓ for each even ℓ ≥ 4 satisfy τ_ℓ ≤ 2∆_φ. Numerous methods, including the ε-expansion, the lightcone bootstrap, and the extremal functional method, suggest that τ_4 ≈ 1.02. A result from [111] shows that the minimal twists in the charge-2 and charge-4 sectors are equal to or larger than the minimal twist in the charge-0 sector, for each spin. For charges 1 and 3, and for odd spins in the 0⁻ representation, we can appeal to the ε-expansion, which shows there are no higher-spin operators with twist near the unitarity bound. Thus, the assumption of a twist gap δτ < 0.02 is well-justified. In most of this work, we choose δτ = 10⁻⁶. Overall, our assumptions about the spectrum of the O(2) model are listed in table 3.
The OPE coefficients of J_µ and T_µν are constrained by Ward identities in terms of the two-point coefficients C_J and C_T: schematically, λ_{OOJ} ∝ q_O/√C_J and λ_{OOT} ∝ ∆_O/√C_T for an external scalar O, with normalizations involving C_J^free and C_T^free, the two-point coefficients of J and T in the free O(2) model. Thus, the contribution of these operators to the crossing equation can be parametrized purely in terms of C_T and C_J, together with the dimensions and charges of the external scalars φ, s, t.

¹¹ We find that this is consistent with estimates based on the extremal functional method [78].

¹² More precisely, the bound requires that given a charge-1 and a charge-2 operator of dimensions (∆_φ, ∆_t) = (0.51905, 1.234), the OPE φ × t must contain a charge-3 operator with dimension smaller than 2.118. Strictly speaking, this bound does not apply to the O(2) model, since this choice of dimensions turns out to be excluded. Nevertheless, by continuity, we expect the correct bound to be very close.

Numerical bootstrap bounds
Given the crossing equations (17), we compute bounds on CFT quantities in the standard way described in [1,5]. Suppose we would like to demonstrate that a hypothetical spectrum is inconsistent. We search for a linear functional α such that

$$\alpha(\vec{V}^{\pm}_{R, \Delta, \ell}) \succeq 0 \quad (21)$$

for all combinations of representations R, dimensions ∆, and even or odd spins ℓ^± in the hypothetical spectrum. Here, "M ⪰ 0" means "M is positive-semidefinite." It is conventional to normalize the contribution of the unit operator in the crossing equation to 1:

$$\alpha(\vec{V}_\text{unit}) = 1.$$

If a functional exists satisfying these conditions, then the hypothetical spectrum is ruled out. We search for such a functional using SDPB [83].

Positivity conditions involving the external scalars s, φ, t
The external operators s, φ, t appearing in the crossing equations require special treatment when computing bootstrap bounds. There are four nonvanishing OPE coefficients involving just s, φ, t. They can be grouped into a vector

$$\lambda_\text{ext} \equiv (\lambda_{sss},\, \lambda_{\phi\phi s},\, \lambda_{tts},\, \lambda_{\phi\phi t}).$$

We define the 4 × 4 symmetric matrices V_ext as the bilinear forms paired with λ_ext in the crossing equations, so that these contributions enter as λ_ext^T V_ext λ_ext. When computing bounds, we can treat the term λ_ext^T V_ext λ_ext in different ways, depending on our knowledge of λ_ext. If we know nothing about λ_ext, then we can search for a functional α such that

$$\alpha(\vec{V}_\text{ext}) \succeq 0, \quad (25)$$

where "⪰ 0" means "is positive semidefinite." In this way, we ensure that the contribution of external scalar OPE coefficients to the crossing equation has a definite sign after applying α, independent of the values of those coefficients. Imposing the condition (25), we can compute an allowed region D for other quantities like operator dimensions.
However, the condition (25) is stronger than necessary because it allows the matrix M_ext ≡ λ_ext λ_ext^T to have rank larger than 1. Specifically, it ensures that Tr(M_ext α(V_ext)) ≥ 0 for M_ext of any rank. We would like a procedure that only imposes positivity when M_ext is a rank-1 matrix. Such a procedure was described in [7,112], and it results in stronger bounds. Suppose first that we know the direction of λ_ext. More precisely, suppose we know the equivalence class [λ_ext] ∈ RP³ of λ_ext under rescaling by a real number. In this case, the condition (25) is too strong, and it suffices to impose the weaker condition¹⁵

$$\lambda_\text{ext}^T\, \alpha(\vec{V}_\text{ext})\, \lambda_\text{ext} \geq 0. \quad (26)$$

(Note that α(V_ext) is a 4 × 4 matrix, so that λ_ext^T α(V_ext) λ_ext is a number.) This ensures that the contribution of external scalars to the crossing equation is positive, independent of the magnitude or sign of λ_ext. If we use the weaker condition (26) to compute bounds on other quantities, we obtain an allowed region D_[λext] that is smaller than D, but depends on the equivalence class [λ_ext] ∈ RP³.
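The gap between conditions (25) and (26) is easy to see in a toy example: a matrix can fail to be positive semidefinite while still being nonnegative along a particular direction. A minimal numerical illustration (our own, with an arbitrary 2×2 matrix standing in for α(V_ext)):

```python
import numpy as np

# A stand-in for alpha(V_ext): indefinite, so condition (25) fails...
M = np.array([[1.0, 0.0],
              [0.0, -0.5]])
eigenvalues = np.linalg.eigvalsh(M)
assert eigenvalues.min() < 0   # M is not positive semidefinite

# ...but the rank-1 condition (26) holds for directions near (1, 0):
lam = np.array([1.0, 0.5])
assert lam @ M @ lam >= 0      # 1 - 0.5*0.25 = 0.875 >= 0
```

Imposing only (26) for a known direction therefore admits functionals that (25) would reject, which is why scanning over OPE directions can shrink the allowed region.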
If we don't know [λ_ext] a priori, we can scan over its value and compute the regions D_[λext] as a function of [λ_ext] ∈ RP³. The union of the resulting allowed regions,

$$D' \equiv \bigcup_{[\lambda_\text{ext}] \in \mathbb{RP}^3} D_{[\lambda_\text{ext}]},$$

must be contained inside the original allowed region D.

¹⁵ Here, λ_ext can be any representative of the equivalence class [λ_ext].
A key observation of [7,112] is that this inclusion can be strict: by scanning over different directions [λ_ext] in OPE space and taking the union of the resulting allowed regions, we can obtain a smaller allowed region than if we impose the naïve condition (25). Scanning over OPE coefficient directions [λ_ext] allows us to use the fact that λ_ext λ_ext^T is rank-1, and thereby get better results. A disadvantage is that we must solve multiple semidefinite programs to compute the new allowed region D′.

An algorithm for scanning over OPE coefficients
Suppose we would like to determine whether some putative scaling dimensions (∆_s, ∆_φ, ∆_t) are allowed or not. According to the previous section, we should scan over directions in OPE coefficient space [λ_ext] ∈ RP³. For each direction, we should compute whether a functional α exists satisfying (26) and (21). If α does not exist for some [λ_ext], then the point (∆_s, ∆_φ, ∆_t) is allowed. In this section, we describe an algorithm that makes the scan over [λ_ext] ∈ RP³ very efficient.
Let us choose some initial direction [λ_1] ∈ RP³. Suppose that a functional α_1 exists obeying the condition

$$\lambda_1^T\, \alpha_1(\vec{V}_\text{ext})\, \lambda_1 \geq 0,$$

and additionally obeying all other positivity conditions (21) necessary for computing feasibility of the given point (∆_s, ∆_φ, ∆_t). The key observation is that Q_1 = α_1(V_ext) defines a bilinear form that is positive not only for λ_1, but also for some neighborhood U_1 ⊂ RP³ containing [λ_1]. That is, α_1 rules out an entire neighborhood U_1 ⊂ RP³. We can now focus on scanning over the complement RP³ \ U_1.
This suggests Algorithm 1 for ruling out a point (∆_s, ∆_φ, ∆_t) in dimension space. Algorithm 1 is similar to so-called "cutting plane" methods. We have a region A_n of allowed OPE directions. We choose a point [λ_{n+1}] ∈ A_n and consult an "oracle" (the semidefinite program solver) to get a quadratic form Q_{n+1} that rules out that point. This quadratic form cuts away a neighborhood U_{n+1} from A_n, giving a smaller allowed region A_{n+1} = A_n \ U_{n+1}.
In traditional cutting plane methods, an oracle provides linear forms instead of quadratic forms. If the U_i were half-spaces defined by linear forms, then the above algorithm would exhibit some nice properties. Firstly, the allowed regions A_n would be convex. Secondly, if we choose [λ_{n+1}] ∈ A_n to be the center of volume of A_n (in some affine coordinates), then the neighborhood U_{n+1} would be guaranteed to cut away half of A_n. Thus, the volume of A_n would decrease exponentially in the number of cuts, and the algorithm would take time logarithmic in the volume of A_n.

begin
    Given a list of functionals {α_1, ..., α_n}, together with the quadratic forms Q_i = α_i(V_ext) and the regions ruled out by those quadratic forms, U_i = {[λ] ∈ RP³ : λ^T Q_i λ ≥ 0} (29), the allowed region of OPE space is A_n = RP³ \ (U_1 ∪ ... ∪ U_n).
    if A_n is empty then
        All directions in OPE space are ruled out.
        return Disallowed
    else
        Choose some [λ_{n+1}] ∈ A_n. Impose the positivity condition λ_{n+1}^T α(V_ext) λ_{n+1} ≥ 0, and solve the resulting semidefinite program to find a functional α_{n+1}.
        if α_{n+1} exists then
            Append α_{n+1} to the list {α_1, ..., α_n} and go to begin.
        else
            We have failed to rule out all directions in OPE space.
            return Allowed
        end
    end
end
Algorithm 1: Cutting surface algorithm for scanning over OPE coefficients.

Fortunately, in many examples, we have found that once the allowed region A_n becomes sufficiently small, the sets U_{n+1} become very close to half-spaces near the allowed region, see figure 3. Recall that U_{n+1} is defined by a quadratic inequality (29), and thus generically has curved edges. However, as the algorithm proceeds, the radius of curvature of these edges becomes large relative to the size of the region A_n (in some generic affine coordinates on RP³). Thus, our algorithm approximately inherits many of the nice properties of traditional cutting plane methods. We call our method a "cutting surface" algorithm.
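As an illustration, the loop of Algorithm 1 can be mimicked with a toy oracle standing in for SDPB. The sketch below is entirely our own simplification: it represents points of RP^{m−1} as unit vectors with nonnegative first coordinate and tracks the allowed region by rejection sampling rather than by a mesh.

```python
import numpy as np

def is_allowed(lam, forms):
    """A direction survives if every quadratic form so far is negative on it."""
    return all(lam @ Q @ lam < 0 for Q in forms)

def cutting_surface(oracle, dim, n_samples=20000, max_iters=50, seed=0):
    """Toy cutting-surface loop. `oracle(lam)` plays the role of the SDP solver:
    it returns a quadratic form Q with lam^T Q lam >= 0 (a cut), or None if no
    functional exists (lam is an allowed OPE direction)."""
    rng = np.random.default_rng(seed)
    forms = []
    for _ in range(max_iters):
        # Sample representatives of RP^{dim-1}: unit vectors, first coord >= 0.
        pts = rng.normal(size=(n_samples, dim))
        pts /= np.linalg.norm(pts, axis=1, keepdims=True)
        pts *= np.where(pts[:, :1] >= 0, 1.0, -1.0)
        survivors = pts[[is_allowed(p, forms) for p in pts]]
        if len(survivors) == 0:
            return "disallowed"
        # Test a point near the center of the surviving region.
        center = survivors.mean(axis=0)
        if np.linalg.norm(center) > 1e-9 and is_allowed(center / np.linalg.norm(center), forms):
            candidate = center / np.linalg.norm(center)
        else:
            candidate = survivors[0]
        Q = oracle(candidate)
        if Q is None:
            return "allowed"
        forms.append(Q)
    return "undecided"
```

With an oracle that always returns a cut, such as `lambda c: np.outer(c, c) - 0.4 * np.eye(2)` on RP¹, the loop terminates with "disallowed" after a handful of cuts; an oracle that gives up near a chosen direction instead yields "allowed".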

Finding a point [λ n+1 ]
The most difficult step in the cutting surface algorithm is determining whether A_n is nonempty and, if it is nonempty, choosing a point [λ_{n+1}] ∈ A_n. For this step, we are given a list of quadratic forms Q_1, ..., Q_n ∈ R^{m×m}, and we wish to find x = λ_{n+1} ∈ R^m that is negative with respect to all of those quadratic forms, i.e. x^T Q_i x < 0 for each i. (For the computations in this work, m = 4.) This type of problem is called a quadratically constrained quadratic program (QCQP), see e.g. [113].
Unfortunately, QCQPs are NP-hard in general. However, we have found several heuristic approaches that work well for the case at hand. Furthermore, these heuristics can be stacked: if one method fails to find a solution, we can try another method. For this work, we applied multiple heuristics, using one to verify the results of another when possible. In the next few subsections, we describe these heuristics.

Figure 3: OPE space after applying the affine transformation described in figure 4, which turns the initial bounding ellipsoid into the unit sphere. For each allowed region A_n, the point [λ_n] most recently ruled out by SDPB is shown in red. This point is typically very close to the boundary of the allowed region. The next point to be tested, [λ_{n+1}], is shown in blue; we choose the blue point close to the center of A_n. In the final frame, SDPB gives primal feasible for the blue point.
Because we solve the QCQP using heuristics, our implementation of the cutting surface algorithm is non-rigorous (except when m = 2).¹⁸ It would be interesting to investigate whether there exists a deterministic algorithm for QCQPs in low dimensions that could be useful in bootstrap calculations.

¹⁸ This case is relevant, for example, in the 3d Ising model problem studied in [7], which involves the two OPE coefficients λ_σσε and λ_εεε.

Implementation in Mathematica
For low-dimensional cases, where Q_i ∈ R^{m×m} with m = 3, 4, we have implemented the cutting surface algorithm in Mathematica using standard functions. For example, in order to plot the region A_n, we pass the inequalities λ^T Q_1 λ < 0, ..., λ^T Q_n λ < 0 to the functions RegionPlot or RegionPlot3D. We then use the DiscretizeGraphics function to convert the resulting plot into a MeshRegion corresponding to the allowed region.
If the resulting MeshRegion returns as EmptyRegion[m-1] then the algorithm terminates. If it is instead nonempty, then there are various approaches one can use to select a point in its interior. One simple and fast option is to take the RegionCentroid. This approach works most of the time, but occasionally fails when the allowed region is nonconvex.
Another simple approach is to select the point which NMaximizes the RegionDistance to the RegionBoundary, subject to the constraint of being inside the allowed region. We found that this approach leads to a working algorithm a majority of the time, but is often slow and sometimes picks suboptimal points. In the next subsection we describe a more robust procedure that we have developed for selecting an optimal point in the interior.
Another important point is that as the allowed region gets smaller, it is helpful to apply an AffineTransform at each iteration of the algorithm to make the allowed region roughly spherical. This for example helps to avoid the problem of missing a very small allowed region. We do this by computing a BoundingRegion of the allowed MeshRegion (we had good success with the form "FastOrientedCuboid"), and then constructing an AffineTransform which maps it to the unit cube. This transformation then gets applied to all coordinates before iterating.

Minimizing Q n
We now describe some heuristics that do not depend on specialized Mathematica features and can in principle be used in general dimensions m. One important heuristic takes advantage of allowed regions A i typically becoming close to convex as the cutting surface algorithm proceeds. Recall that A n−1 is the region on which all quadratic forms Q 1 , . . . , Q n−1 are negative. Suppose this region is nonempty. Now let us add an additional quadratic form Q n . We would like to know whether Q n is positive on A n−1 (in which case A n is empty). If it is not positive, we would like to find a point λ n+1 ∈ A n−1 such that Q n is negative on λ n+1 .
To do so, consider the function f (x) = x T Q n x/x T x, where x ∈ R m . Because f is homogeneous of degree zero, f defines a function on RP m−1 . We would like to minimize f over A n−1 . If the minimum is negative, then the solution [x] gives a point in A n .
One possible minimization procedure is gradient descent starting from a point in A n−1 . To ensure that we stay inside A n−1 , we introduce a "barrier" function B A n−1 (x) and minimize the combination

f (x) + γ B A n−1 (x),    (32)

where γ > 0 is a parameter that we choose. The barrier function is defined so that it is finite inside A n−1 and diverges to +∞ as one approaches the boundaries of A n−1 from the interior. In the limit γ → 0, the minimum of (32) converges to the minimum of f (x) over A n−1 .
Following standard practice in interior point optimization, we combine gradient descent with decreasing the parameter γ. In each iteration, we compute a search direction using Newton's method for the combined function (32). We then move along this direction and simultaneously decrease γ by a constant factor.
If the region A n−1 were convex and the function f were convex, then the above algorithm would be guaranteed to find the minimum of f . We have found that in practice, convexity holds approximately for both the region A n−1 and the function f (x). Thus, typically this algorithm finds a suitable minimum after a single run. To increase its likelihood of success, we attempt the descent algorithm from many different randomly chosen starting points inside A n−1 . We sample random starting points using the hessian line search method detailed in section 3.4.4.
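The procedure above can be sketched as follows, replacing the Newton steps with a generic off-the-shelf optimizer. This is a simplified illustration of the barrier method under a logarithmic-barrier assumption, not the implementation used in the paper; all names are ours:

```python
import numpy as np
from scipy.optimize import minimize

def f(x, Q):
    """The degree-zero function f(x) = x^T Q x / x^T x."""
    return (x @ Q @ x) / (x @ x)

def barrier(x, Qs):
    """Finite where all forms are negative; +inf outside that region."""
    vals = np.array([f(x, Q) for Q in Qs])
    if np.any(vals >= 0.0):
        return np.inf
    return -np.sum(np.log(-vals))

def minimize_over_region(Qn, Qs, x0, gamma=1.0, shrink=0.5, rounds=15):
    """Minimize f(x) = x^T Qn x / x^T x over the region where all quadratic
    forms in Qs are negative, decreasing the barrier weight gamma each round."""
    x = np.asarray(x0, float)
    for _ in range(rounds):
        res = minimize(lambda y: f(y, Qn) + gamma * barrier(y, Qs),
                       x, method="Nelder-Mead")
        x = res.x / np.linalg.norm(res.x)  # f is scale-invariant
        gamma *= shrink
    return x, f(x, Qn)
```

If the returned minimum value is negative, the corresponding point lies in A n ; if the (approximate) minimum is positive, the heuristic suggests A n is empty.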
We can take some shortcuts relative to the standard interior point method. First, we observe that λ T n Q n λ n is usually very small, meaning λ n is in fact already quite close to the Q n = 0 surface. One shortcut is to draw a line starting from λ n along the gradient of the function defined by Q n , and test whether there is a feasible point on this line. Another shortcut is to simply sample some random points around λ n . Both shortcuts have a very good chance of succeeding, and are very cheap compared to the interior point method described above. Therefore we perform the shortcuts before the standard interior point method.
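A sketch of the two shortcuts, under the assumption that λ n sits just outside the Q n < 0 region (we walk against the gradient of x T Q n x in order to decrease it; step counts, radius, and names are illustrative):

```python
import numpy as np

def try_shortcuts(lam_n, Qs, rng, n_line=50, n_rand=200, radius=0.05):
    """Cheap attempts to find a feasible point before running the full
    interior-point minimization. Qs[-1] plays the role of Q_n."""
    Qn = Qs[-1]
    def feasible(x):
        return all(x @ Q @ x < 0 for Q in Qs)
    # Shortcut 1: walk from lam_n against the gradient of x^T Q_n x.
    grad = 2.0 * Qn @ lam_n
    for t in np.linspace(0.0, 1.0, n_line)[1:]:
        x = lam_n - t * grad
        if feasible(x):
            return x
    # Shortcut 2: sample random points in a small ball around lam_n.
    for _ in range(n_rand):
        x = lam_n + radius * rng.standard_normal(lam_n.shape)
        if feasible(x):
            return x
    return None
```

If both shortcuts fail, one falls back to the barrier minimization described above.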
For the computations in this work, the simple method of minimizing Q n over A n−1 works most of the time. It will be interesting to explore its applicability to higher-dimensional spaces of OPE coefficients and other bootstrap problems.

Semidefinite relaxation and rank minimization
Another heuristic uses the method of semidefinite relaxation, which is standard in the literature on QCQPs [113]. Recall that we would like to solve the QCQP: find x such that x T Q i x ≤ 0 for all i = 1, . . . , n (which is equivalent to [x] ∈ A n ). This can also be written as:

Find X ⪰ 0 such that Tr(XQ i ) ≤ 0 for all i = 1, . . . , n, and rank(X) = 1.    (33)

Here, X is an m × m matrix and "⪰ 0" means "is positive semidefinite". If such an X exists, then it can be written X = xx T , and x provides the required solution to the QCQP.
Equation (33) almost defines a semidefinite program. The only difference is the condition rank(X) = 1. Removing the rank-1 condition, we obtain the semidefinite relaxation of the original QCQP. Solving the semidefinite relaxation gives two possible outcomes: • The semidefinite relaxation is infeasible (i.e. X does not exist satisfying the conditions Tr(XQ i ) ≤ 0 and X 0). In this case, the original QCQP is necessarily infeasible. Thus, we can rigorously conclude that A n is empty.
• The semidefinite relaxation is feasible. Typically, the resulting matrix X is not particularly close to rank 1, so we must perform some additional work to find whether a solution of the QCQP exists.
In the case where the semidefinite relaxation is feasible, we use the method described in [114] for finding low-rank solutions of semidefinite programs. This method involves solving a sequence of semidefinite programs with objective functions designed to successively decrease the m−1 smallest eigenvalues of X. We solve the semidefinite relaxation and the subsequent rank-minimization SDPs using SDPB.
If rank minimization succeeds, we are left with a positive semidefinite matrix X with one large eigenvalue and several small eigenvalues. To find a rank-1 solution xx T , we apply the random sampling method described in [113]. We take random samples x ∈ R m from a Gaussian distribution with covariance matrix X (so that E[xx T ] = X). By construction, each inequality in the QCQP is true in expectation: E[x T Q i x] = Tr(XQ i ) ≤ 0. Thus, there is a reasonable probability of finding a sample x for which all inequalities in the QCQP are true. If such a sample exists, we have solved the QCQP. If we do not find such a sample, then we cannot conclude anything about the QCQP.
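The sampling step can be sketched as follows, assuming the relaxed solution X is available as a positive semidefinite numpy matrix (the jitter term and sample count are our illustrative choices):

```python
import numpy as np

def sample_rank_one_solution(X, Qs, rng, n_samples=1000):
    """Given a PSD matrix X from the semidefinite relaxation, draw
    Gaussian samples x with covariance X and test the QCQP inequalities."""
    m = X.shape[0]
    L = np.linalg.cholesky(X + 1e-12 * np.eye(m))  # jitter for numerical safety
    for _ in range(n_samples):
        x = L @ rng.standard_normal(m)
        if all(x @ Q @ x <= 0 for Q in Qs):
            return x  # a genuine solution of the QCQP
    return None  # inconclusive
```

When X is close to rank 1, the samples concentrate along its dominant eigenvector, so a valid x is typically found quickly.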
An implementation of the algorithm described in this section is available online. 20 In our testing, it worked consistently in cases where OPE space is relatively low-dimensional m ≤ 4. Indeed, this algorithm is capable of finding solutions to the QCQP in cases where the Q n -minimization of section 3.4.2 fails (for example because A n−1 has a complicated or elongated shape). Although it takes only a few minutes to run, SDP relaxation methods are more computationally intensive than the Q n -minimization. Thus, we use them as a final heuristic, which we run only when other heuristics have failed to solve the QCQP.

Choosing [λ n+1 ]
When A n is non-empty, the heuristics in sections 3.4.1, 3.4.2, and 3.4.3 will usually find a point [x] ∈ A n . However, to make the cutting surface algorithm as efficient as possible, we would like to choose [λ n+1 ] roughly in the "center" of A n . In the approach using standard Mathematica functions, one possibility is to choose [λ n+1 ] to be the RegionCentroid of the allowed OPE region. However, for the other approaches it is important to have methods that don't require detailed knowledge of the shape of A n (which can be expensive to compute).
One simple approach is to minimize the barrier function B An (x) over A n (using [x] as an initial point). However, for very elongated regions A n , the minimum of the barrier function is sometimes not particularly close to the center of volume.
Note that in the case m = 2, where OPE space RP m−1 is 1-dimensional, it is trivial to find a suitable [λ n+1 ]. The allowed region is a union of line segments that we can solve for analytically. We can then choose the midpoint of the longest line segment (in some affine coordinates).
We can use this observation in higher dimensions. Let us start with a point [x 0 ] ∈ RP m−1 and choose a random line ℓ 0 ⊂ RP m−1 containing [x 0 ]. The intersection of the line ℓ 0 with the region A n is a union of line segments (typically a single segment), and we can choose [x 1 ] to be the midpoint of one of these segments. Repeating in this way, we obtain a sequence of points [x k ] that are at the midpoints of random lines intersecting A n . This sequence does not typically converge to a single point. However, later points in the sequence are good candidates for [λ n+1 ]. 21

To randomly sample the line ℓ i , we choose coordinates around [x i ] in which the Hessian of the barrier function B A n (x) at [x i ] becomes a diagonal matrix with entries ±1. In these coordinates, the region A n typically looks roughly spherical around [x i ]. We then use a uniform distribution on an infinitesimal sphere around x i in these coordinates. We call this method a "hessian line search." The hessian line search can be modified to randomly sample points inside A n , with applications to the Q n -minimization method of section 3.4.2. Instead of choosing x i+1 to be the midpoint of a line segment in ℓ i ∩ A n , we can choose it randomly along a segment.
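A simplified sketch of the midpoint-of-random-lines iteration, working in an affine chart and using plain bisection to locate the boundary along each line. For brevity we use uniformly random directions rather than the Hessian-adapted coordinates described above, and we assume the region fits inside a ball of radius t_max around the current point:

```python
import numpy as np

def segment_midpoint(x0, direction, inside, t_max=10.0, tol=1e-6):
    """Bisect along x0 + t*direction to find the region boundary on each
    side of x0 (assumed inside), then return the segment midpoint."""
    def boundary(sign):
        lo, hi = 0.0, t_max * sign  # lo inside, hi assumed outside
        while abs(hi - lo) > tol:
            mid = 0.5 * (lo + hi)
            if inside(x0 + mid * direction):
                lo = mid
            else:
                hi = mid
        return lo
    t_plus, t_minus = boundary(+1.0), boundary(-1.0)
    return x0 + 0.5 * (t_plus + t_minus) * direction

def random_line_midpoints(x0, inside, rng, n_iter=30):
    """Iterate midpoints of random lines through the current point."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        d = rng.standard_normal(x.shape)
        d /= np.linalg.norm(d)
        x = segment_midpoint(x, d, inside)
    return x
```

For a convex region such as a disk, this iteration drifts toward a well-centered interior point, which is the behavior exploited when choosing [λ n+1 ].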

Bounding ellipsoids
The cutting surface method becomes most efficient when the radius of curvature of the surface defined by the quadratic form Q n is small compared to the size of the region A n−1 . If we start with the allowed region A 0 = RP m−1 , then it might take several iterations of the algorithm before this happens. Indeed, in our testing, the cutting surface algorithm often spent significant time cutting away parts of RP m−1 that are known to be far from the correct values of OPE coefficients. To avoid this problem, it is useful to impose a "bounding box" in OPE space. An efficient way to do this is to pick a bounding ellipsoid, and choose Q 1 to be the quadratic form that rules out the exterior of the ellipsoid.

Imposing a bounding ellipsoid is a non-rigorous optimization and should be done with care. As we worked our way up in the number of derivatives of the crossing equations, we used the following strategy. At an initial derivative order Λ, we keep track of the OPE coefficients of all allowed points. We choose an ellipsoid E that contains these values and is also enlarged by an O(1) factor. We then increase Λ → Λ′ and use E as a bounding ellipsoid for the cutting surface algorithm. As a check on this method, we can inspect the set of allowed OPE coefficients found at derivative order Λ′ and see if any of them are close to the boundary of E. In practice, they never are, see figure 4. (In fact, they are almost never outside the cloud of points computed at derivative order Λ, so the enlargement by an O(1) factor is unnecessary.) We can now find a new ellipsoid E′ and continue.

Footnote 21: In practice, we take the last 10 points in a long sequence and average them in some affine coordinates.
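In an affine chart x = (y, 1), the quadratic form that rules out the exterior of an ellipsoid (y − c) T M (y − c) < 1 can be written down explicitly: one homogenizes the ellipsoid inequality into a quadratic form that is negative exactly in the interior. A small sketch of this construction (our own illustration, with illustrative names):

```python
import numpy as np

def ellipsoid_quadratic_form(center, M):
    """Homogeneous quadratic form Q on R^{k+1} (affine chart x = (y, 1))
    with x^T Q x = (y - c)^T M (y - c) - 1, negative inside the ellipsoid."""
    c = np.asarray(center, float)
    M = np.asarray(M, float)
    k = len(c)
    Q = np.zeros((k + 1, k + 1))
    Q[:k, :k] = M
    Q[:k, k] = Q[k, :k] = -M @ c
    Q[k, k] = c @ M @ c - 1.0
    return Q
```

The resulting Q can then be supplied as the first quadratic form Q 1 of the cutting surface algorithm.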

Hot-starting
The cutting surface algorithm requires solving multiple SDPs to rule out a single point (∆ φ , ∆ s , ∆ t ) in dimension space. For example, for the computation described in section 4.2, each point in dimension space required solving an average of ∼ 35 SDPs (not including the tiny SDPs encountered in the semidefinite relaxation method of section 3.4.3). Fortunately, many of these SDPs can be solved extremely quickly using hot-starting [84]: we reuse the final state of the semidefinite program solver from a previous calculation as the initial state in a new calculation. In practice, hot-starting means passing an old checkpoint file as an argument to SDPB.
Hot-starting is particularly advantageous in the cutting surface algorithm because SDPs only change by a small amount with each new run. Specifically, the only difference between subsequent SDPs is the replacement of the positivity condition λ T n α(V ext )λ n ≥ 0 by the new condition λ T n+1 α(V ext )λ n+1 ≥ 0. Thus, the previous checkpoint contains a functional that already satisfies all other positivity conditions in the semidefinite program. In practice, the new condition λ T n+1 α(V ext )λ n+1 ≥ 0 is satisfied after a small number of iterations of SDPB. Furthermore, the number of iterations typically decreases over the course of the cutting surface algorithm, see figure 5.
Hot-starting is useful also for different points in dimension space. In practice, we keep a list of checkpoint files from all runs of SDPB over the course of a computation. For each new point in dimension space, we find the newest checkpoint file corresponding to the closest point in dimension space, and use it to initiate the cutting surface algorithm.
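A sketch of the checkpoint lookup, assuming the history is kept as a simple list of (point in dimension space, checkpoint path) pairs in chronological order (this data layout and the names are our illustrative choices):

```python
import numpy as np

def nearest_checkpoint(new_point, history):
    """history: list of (point, checkpoint_path) pairs, newest last.
    Return the checkpoint of the closest previous point; on distance
    ties, prefer the newest entry."""
    best, best_d = None, np.inf
    for point, ckpt in history:  # later (newer) entries win ties via <=
        d = np.linalg.norm(np.asarray(new_point) - np.asarray(point))
        if d <= best_d:
            best_d, best = d, ckpt
    return best
```

The returned path would then be passed to SDPB as the initial checkpoint for the new point's cutting surface run.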
To demonstrate the effectiveness of hot-starting in dimension space, we study the 3d Ising model σ, mixed correlator bootstrap described in [5]. We choose a fixed point P 0 in dimension space and hot-start P 0 with several checkpoints from nearby points P i , see figure 6(a). We observed that in general when P i is close to P 0 , the number of iterations is smaller. In figure 6(b), we show the effectiveness of hot-starting in a transformed space, where the Ising island is roughly a spherical shape. We see that the concept of "nearest" is better behaved in this transformed space.
Let us mention one additional practical optimization. In each step of the cutting surface algorithm for scanning OPE coefficients, we must solve semidefinite programs that are nearly identical: they differ only in the positivity conditions associated to the external scalars φ, s, t. Consequently, we can avoid re-generating the entire SDP and only re-generate the conditions for the external scalars.

Primal/dual jumps
When testing feasibility of an SDP (as opposed to optimizing an objective function), SDPB includes some features that allow the solver to terminate more quickly. Internally, SDPB uses a modified Newton's method to simultaneously solve three types of equations: primal feasibility equations, dual feasibility equations, and an equation relating the two. A functional exists if and only if the dual feasibility equations are satisfied. If SDPB detects that it is possible to solve either the primal or dual feasibility equations during an iteration, then it does so immediately. We call such events primal/dual jumps.

Figure 6: We fix P 0 (indicated by a cross) to be various points and hot-start the P 0 computation with checkpoints taken from nearby points P i around P 0 . The color of P i indicates how many SDPB iterations are needed for the hot-started computation. Red corresponds to 8 iterations, while green corresponds to 1 iteration. Without hot-starting, the typical number of iterations is about 80. On the left: we fix P 0 to be (0.518123, 1.412409), (0.518237, 1.413352), (0.518218, 1.412800). On the right: the (∆ σ , ∆ ε ) space is transformed such that the island is roughly spherical. We fix P 0 to be (0.518217, 1.413221).
When testing feasibility, a dual jump means that a functional α has been found (and the solver can terminate). In practice, a primal jump means a functional will not be found (so the solver can terminate in this case as well). The observation that we can stop after a primal jump was made in [82]. As far as we are aware, it has not been rigorously established. However, this does not affect the validity of the resulting bootstrap bounds, which depends only on the existence of functionals.
To make SDPB terminate in the event of primal/dual jumps, we supply the options --detectPrimalFeasibleJump and --detectDualFeasibleJump. We have found that it is important to disallow SDPB from terminating for other reasons. For example, over the course of the cutting surface algorithm, the primal error can get quite small, and often goes below reasonable values of primalErrorThreshold. However, in practice only primal/dual jumps are good reasons to terminate. Thus, we recommend turning off the options --findPrimalFeasible and --findDualFeasible, and setting primalErrorThreshold and dualErrorThreshold extremely small (e.g. 10 −200 ). Our precise parameters are listed in appendix B.
The existence of dual feasible jumps is sensitive to the precise bootstrap problem being solved. In our initial bootstrap implementation for correlators of φ, s, t, we did not observe any dual feasible jumps. In these cases, SDPB would run for many iterations, with the dualError (which indicates failure of the dual feasibility equations to be satisfied) steadily decreasing but never jumping to zero. We observed that during these iterations, SDPB was working hard to find functionals that were positive when acting on operators close to the unitarity bound. We alleviate this problem by imposing a small gap in twist τ = ∆ − J. Specifically, we impose τ ≥ τ unitarity + δτ (where τ unitarity is the unitarity bound) in all spin/symmetry sectors not containing conserved currents. (This condition is in addition to other gaps.) The extremely conservative choice δτ = 10 −6 is sufficient to restore dual feasible jumps. Imposing this small twist gap dramatically increases the efficiency of our methods.

Delaunay triangulation in dimension space
Given the above methods for determining whether a point (∆ φ , ∆ s , ∆ t ) in dimension space is allowed, we would like to search for the full allowed region. For simplicity, first consider the one-dimensional case, where we have a single parameter ∆. We can map ∆-space efficiently using binary search between known points. Suppose we have a list of values ∆ 1 < ∆ 2 < · · · < ∆ n that are known to be either allowed or disallowed. We define

p i = 1 if ∆ i is allowed, and p i = 0 if ∆ i is disallowed.    (36)

For each case where p i ≠ p i+1 , we perform a binary search between ∆ i and ∆ i+1 to find the precise threshold between allowed and disallowed.
We can reinterpret this method as follows. We can define a "probability" p(∆) that a given point is allowed. Our eventual goal will be to make p(∆) as close to 0 or 1 as possible for all ∆. A reasonable approximation for p(∆) is via linear interpolation between the values p(∆ i ) = p i . To improve our knowledge of the allowed region as quickly as possible, the next test point ∆ test should have probability p(∆ test ) = 1/2. If there are multiple such points, we should choose the one with the smallest slope |p′(∆ test )|. 22 We then test whether ∆ test is allowed, add it to the list of known values, and repeat the algorithm.
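The 1d selection rule can be sketched as follows. Since p is piecewise linear with values 0 and 1 at the known points, it crosses 1/2 exactly at the midpoint of each mixed interval (names are illustrative):

```python
import numpy as np

def next_test_point(deltas, ps):
    """Choose the next Delta to test: a point where the linearly
    interpolated p crosses 1/2, preferring the smallest slope |p'|.
    Returns None if all known points agree."""
    deltas = np.asarray(deltas, float)
    ps = np.asarray(ps, float)
    best, best_slope = None, np.inf
    for i in range(len(deltas) - 1):
        if ps[i] != ps[i + 1]:
            slope = abs((ps[i + 1] - ps[i]) / (deltas[i + 1] - deltas[i]))
            if slope < best_slope:
                best_slope = slope
                # linear interpolation crosses 1/2 at the midpoint
                best = 0.5 * (deltas[i] + deltas[i + 1])
    return best
```

Preferring the smallest slope means we bisect the widest unresolved interval first.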
The above method generalizes to higher dimensions. Consider a vector of dimensions ∆ ∈ R k . Suppose that we have a list of points ∆ 1 , ∆ 2 , . . . , ∆ n ∈ R k and values p i defined as in (36). To define a probability function p( ∆), we perform a Delaunay triangulation of the set of known points. 23 Within each simplex of the triangulation, we define p( ∆) via linear interpolation between its values p i at the vertices. Within each simplex, the points satisfying p( ∆) = 1/2 are either empty or form a codimension-1 polyhedron. For every nonempty polyhedron, we define a candidate point as the mean of the vertices of the polyhedron. We choose ∆ test as the candidate point inside the simplex with the largest "crossing distance", which is defined as the minimum distance between two vertices of the simplex with different values of p i . After testing ∆ test , we add it to the list of known points and repeat the algorithm.
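A sketch of the higher-dimensional selection using scipy's Delaunay triangulation. Since p is linear on each simplex with vertex values 0 or 1, the vertices of the p = 1/2 polyhedron are the midpoints of edges whose endpoints disagree. This is our simplified reading of the procedure, with illustrative names:

```python
import itertools
import numpy as np
from scipy.spatial import Delaunay

def next_test_point_nd(points, ps):
    """Pick the next point to test: in each simplex with both allowed
    (p=1) and disallowed (p=0) vertices, form the candidate as the mean
    of the midpoints of mixed edges; choose the candidate in the simplex
    with the largest crossing distance."""
    points = np.asarray(points, float)
    ps = np.asarray(ps)
    tri = Delaunay(points)
    best, best_dist = None, -np.inf
    for simplex in tri.simplices:
        vals = ps[simplex]
        if vals.min() == vals.max():
            continue  # all vertices agree; no boundary crosses this simplex
        mids, cross = [], np.inf
        for i, j in itertools.combinations(simplex, 2):
            if ps[i] != ps[j]:
                mids.append(0.5 * (points[i] + points[j]))
                cross = min(cross, np.linalg.norm(points[i] - points[j]))
        candidate = np.mean(mids, axis=0)
        if cross > best_dist:
            best_dist, best = cross, candidate
    return best
```

After testing the returned point, one appends it (with its p value) to the known points and repeats.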
We illustrate this algorithm in 2 dimensions in figure 7.
To work properly, Delaunay triangulation search requires sufficiently good initial conditions. For example, in the 1-dimensional case (binary search), we only obtain a correct picture of the allowed region if each connected allowed component and each connected disallowed component contains at least one initial point. Similarly, in higher dimensions, we only find an allowed region if we start with at least one point inside that region.
For this work, we found suitable initial conditions by first studying low derivative order Λ, and then working our way up in Λ. Our typical workflow is as follows. Based on computations at Λ = 15, 19, 23, we found that the allowed region is a nearly convex island, and it can be made approximately spherical by a particular affine transformation. For each subsequent computation, we applied an affine transformation determined by the previous computation before performing the Delaunay search. This increases the efficiency of the search and makes it easier to correctly resolve sharp corners and other features in the boundary of the island.
Because the shape of the island is so simple, Delaunay triangulation works properly given a single allowed point, together with enough disallowed points that the island does not extend outside the convex hull of the disallowed points. When increasing Λ, we can reuse all disallowed points from lower values of Λ. What remains is to determine an allowed point at the new value of Λ. We guess the allowed point in dimension space by extrapolating the way that the island shrinks with Λ, and choosing a point in the center of the extrapolated island. We test this point, and if the result is primal feasible, we can initiate a Delaunay search for the island. If the point is ruled out, we must make a different guess.

Figure 7: We transformed the (∆ σ , ∆ ε ) space such that the 3d Ising island is roughly spherical. The red points are disallowed, while black points are allowed. The blue region is the convex hull of the black points. The N in each plot is the total number of sampled points.

Figure 8 shows bounds with different gap assumptions and different values of Λ, all computed without scanning over OPE coefficients. The light orange region shows a bound with Λ = 19 and the conservative assumption that the lowest dimension charge-4 scalar operator has dimension ∆ 4 ≥ 1. Evidence from other techniques supports the hypothesis that in fact ∆ 4 ≥ 3. The light blue region shows the resulting bound after imposing this stronger gap assumption. Finally, the dark blue region shows the result of imposing the stronger gap assumption and increasing the derivative order to Λ = 27.
We see that the stronger gap assumption reduces the size of the island by approximately 30% in both dimensions. Furthermore, imposing the gap assumption causes the island to shrink relatively quickly with Λ. Here, we see that increasing Λ from 19 to 27 causes the island to shrink by an additional factor of 2. Because the stronger gap is well-motivated and significantly improves the results, we include it in our computations. For comparison, in figure 8 we show the Monte Carlo and high temperature expansion result from [96] and the more recent Monte Carlo result from [95]. Without scanning over OPE coefficients, the bootstrap results are less precise.

Figure 8 (caption fragment): ... recent Monte Carlo simulations [95,99] and an earlier study combining Monte Carlo simulations with high temperature expansion calculations in [96]. These bounds were computed at relatively low resolution, so the edges of the island show some artifacts.

Dimension bounds with OPE scans
Now we show our results obtained from scanning over OPE coefficients using the cutting surface algorithm described above. The plots in this section compute the allowed values of {∆ φ , ∆ s , ∆ t } assuming irrelevance of the second charge-0, 1, 2 operators and the first charge-4 operator. The stress tensor and conserved current are assumed to be in the spectrum, with coefficients constrained by Ward identities. All other operators are allowed to exist at any scaling dimension above the unitarity bound shifted by the twist gap δτ = 10 −6 . Figure 11 also shows the projection to the {∆ φ , ∆ t } plane, and figure 2 in the introduction shows a view of the 3d region at Λ = 43. The improvement relative to figure 8 is readily apparent. In particular, the conformal bootstrap results exclude the values of ∆ s extracted from 4 He measurements [85], and appear compatible with, while improving upon, both earlier [96] and recent [95,99] results from Monte Carlo simulations.
The plotted regions are obtained by constructing the Delaunay triangulation of our tested points, selecting the triangles that contain both allowed and disallowed points, and plotting the convex hull of the points in the interior of these triangles that are midway between the allowed and disallowed vertices. This represents our best determination of the allowed region at a given Λ, but has a small error associated with the distance between the boundary and the nearest disallowed point. This "best-fit" region gives our central determinations of the scaling dimensions. More conservatively, we can consider the convex hull of the disallowed points in the Delaunay triangles straddling the boundary of the allowed region. We believe that every point outside of this more conservative region is excluded by the conformal bootstrap, giving the rigorous error bars, e.g.

∆ t = 1.23629(11).    (42)

Each allowed point in dimension space comes paired with an allowed point in the space of OPE coefficient ratios. At Λ = 43 these allowed OPE coefficient ratios live in ranges such as

λ sss /λ φφs = 1.20926(46*).    (43)

The full allowed region in OPE coefficient space may be slightly larger. 24 The full set of computed points at Λ = 43 is shown in figure 12 and listed in appendix E.

Figure 9: Superposition of the new O(2) islands using the {φ i , s, t ij } system and OPE scans over the earlier bootstrap results from [7] which used the {φ i , s} system.

Central charges and λ φφs
As stated in section 2.2, the two-point coefficient C T for stress tensors and the two-point coefficient C J for the O(2) current appear in the crossing equations. These coefficients are interesting for several reasons. For example, they are related to transport in quantum critical systems, giving the leading term in the high frequency expansion at finite temperature [117,118]. In particular, the zero temperature conductivity 2πσ ∞ of the O(2) model is determined by C J [117].

It should be possible to produce an island in the combined space of scaling dimensions ∆ φ , ∆ s , ∆ t , OPE coefficient λ φφs , and coefficients C T , C J . In particular, this would give a determination of C T and C J with rigorous error bars. Due to limits on computational resources, we have not yet attempted this computation. Instead, we will content ourselves with non-rigorous estimates of C T , C J and λ φφs . We chose 7 allowed points (shown in table 9) at which to compute these estimates. In order to compute upper bounds on C J (C T ), we must assume a gap between the conserved current (stress tensor) and other operators in the same spin and global symmetry sector. When computing upper bounds on C J , we assume all other spin-1 charge-0 operators have dimension ∆ ≥ 3. When computing upper bounds on C T , we assume all other spin-2 charge-0 operators have dimension ∆ ≥ 4. These assumptions are well-supported by estimates from the ε-expansion and from the extremal functional method.
We find estimates for C T and C J , where in both cases the error bars are non-rigorous. Our result for C J gives a new determination of the zero-temperature conductivity 2πσ ∞ = 0.355155(11 * ).
Combining this result with the OPE ratios (43) then gives estimates for the individual OPE coefficients.

Table caption: Our results for ∆ s are compared with recent Monte Carlo simulations [95,99] and an earlier study combining Monte Carlo simulations with high temperature expansion calculations [96], while the results for ∆ t are compared with the Monte Carlo study [87]. The latter is also compatible with the earlier pseudo-expansion estimate ∆ t = 1.237(8) [116].

Estimates from the extremal functional method
The extremal functional method [52,78] is a non-rigorous method for estimating a large amount of CFT data from a small number of computations. We hope to present a more detailed analysis of our extremal functionals for the O(2) model in future work. For now, we give estimates of the dimensions of a few important low-lying scalars in table 4. To obtain extremal functionals, we chose 20 allowed points in the Λ = 43 island and computed lower and upper bounds on the norm of the external OPE vector |λ ext | with derivative order Λ = 27 (shown in table 10 of appendix E). Comparing the zeros of the resulting functionals, we identified stable zeros whose positions did not vary significantly as we changed the point in the island [97]. Thus, for 20 points, we have 40 different values of ∆ s , ∆ t , ∆ charge 3 , and ∆ charge 4 (half of them from the lower bound computations, and the other half from the upper bound computations). The gaps we impose are the same as in the OPE scan discussed before, except that we set the twist gap δτ to 10 −4 . In table 4, we mark the errors with a * to emphasize that they are non-rigorous. Here, "charge-3" and "charge-4" refer to the lowest-dimension scalars with the given charges, which in field theory language are φ (i φ j φ k) and φ (i φ j φ k φ l) .

Figure 12: Allowed points in the space of OPE coefficient ratios computed using the {φ i , s, t ij } system at Λ = 43. The convex hull of these points (red) gives an estimate for the allowed values of these coefficients. The projection of the full 6d allowed region will be slightly larger, so the shown region is non-rigorous.
AV is also supported by the Swiss National Science Foundation (SNSF) under grant no. PP00P2-163670. SMC is supported by a Zuckerman STEM Leadership Fellowship.
This work used the Extreme Science and Engineering Discovery Environment (XSEDE) Comet Cluster at the San Diego Supercomputing Center (SDSC) through allocation PHY190023, which is supported by National Science Foundation grant number ACI-1548562. This work also used the EPFL SCITAS cluster, which is supported by the SNSF grant PP00P2-163670, the Caltech High Performance Cluster, partially supported by a grant from the Gordon and Betty Moore Foundation, and the Grace computing cluster, supported by the facilities and staff of the Yale University Faculty of Sciences High Performance Computing Center.

A Code availability
All code used in this work is available online.

The s-channel decomposition of a four-point function is derived by inserting a complete set of states, where α runs over an orthogonal basis of operators O (and descendants) in irrep R that appear in the OPEs ϕ 1 × ϕ 2 and ϕ 3 × ϕ 4 . The 4-point structure T R R 1 R 2 R 3 R 4 (y i ) can then be written in terms of the O(2) structures T R R i R j (y i , y j , y) of each pair of 3-point functions, where (f (y), g(y)) denotes the contraction over y in index-free notation. When R is 0 ± , the contraction is just multiplication of the three-point structures. When R has nonzero charge n, this contraction can be derived by expanding each rank-n O(2) tensor in a null basis {e, ē}, and similarly for g(y). This basis has the convenient properties e · e = ē · ē = 0 and e · ē = 1, so that the contraction of the tensors in index-free notation is (f (y), g(y)) = f (e)g(ē) + f (ē)g(e).

The result of these contractions can then be written in terms of the quantities w i and w̄ i , whose properties imply that w i = 0 or w̄ i = 0 (since y i 2 = 0 by definition). The utility of this derivation is that each 3-point structure establishes a convention for the OPE coefficient λ ϕ i ϕ j O of the associated 3-point function, so computing the 4-point structures in terms of these 3-point structures ensures that the coefficients λ ϕ 1 ϕ 2 O λ ϕ 3 ϕ 4 O that appear in (14) can be consistently identified with these OPE coefficients. For each s- and t-channel configuration in table 2 with an independent O(2) structure, the resulting four-point structures are: