autoboot: A generator of bootstrap equations with global symmetry

We introduce autoboot, a Mathematica program which automatically generates mixed-correlator bootstrap equations for an arbitrary number of scalar external operators, given the global symmetry group and the representations of the operators. The output is a Python program which uses Ohtsuki's cboot, which in turn uses Simmons-Duffin's sdpb. In an appendix we also discuss a simple technique, which we call hot-starting, to significantly reduce the time needed to run sdpb.


Introduction and summary
A four-point function ⟨φ_1 φ_2 φ_3 φ_4⟩ in a conformal field theory (CFT) can be constructed from three-point functions, but in more than one way, depending on how one groups the four operators for the operator product expansions (OPEs): (φ_1 φ_2)(φ_3 φ_4), (φ_1 φ_3)(φ_2 φ_4), or (φ_1 φ_4)(φ_2 φ_3). The bootstrap equation expresses the equality of the four-point function computed in these different decompositions, and is one of the fundamental consistency conditions on a conformal field theory.
The bootstrap equation has been known for almost a third of a century, and was already successfully applied to the study of 2d CFTs in 1984 in the paper [1]. Its application to CFTs in dimensions higher than two had to wait until 2008, when the seminal paper [2] showed that a clever rewriting into a form amenable to linear programming allowed us to extract detailed numerical information from the bootstrap equation. The technique was rapidly developed by many groups and has been applied to many systems; a sample of references includes [...].1 Some of the highlights in these developments for the purposes of this paper are: the consideration of constraints from global symmetry in [7], the introduction of semi-definite programming in [9], and the extension of the analysis to mixed correlators in [24]. These techniques can now constrain the scaling dimensions of operators of the 3d Ising and O(N) models to precise islands.
By now, we have a plethora of good introductory review articles on this approach, see e.g. [82-87], which make the entry into this fascinating and rapidly growing subject easier. There are also various computing tools developed to perform the numerical bootstrap more easily and more efficiently. For example, we now have a dedicated semi-definite programming solver sdpb [30], and two Python interfaces to sdpb, namely PyCFTBoot [42] and cboot [43]. There is also a Julia implementation JuliBootS [88], as well as a Mathematica package to generate 4d bootstrap equations for operators of arbitrary spin [59].
Given that the numerical bootstrap yielding the precision islands of the 3d Ising and O(N) models [24] was done in 2014, it would not have been strange if there had been many papers studying CFTs with other global symmetries. But this has not been the case, with a couple of exceptions, e.g. [69-71,77]. We believe that this dearth of works concerning CFTs with global symmetry is due to the inherent complexity of writing down the mixed-correlator bootstrap equations together with the constraints coming from the symmetry. To solve this problem we need to automate it: human beings should never do things which can be done by machines.
The aim of this paper is to present autoboot, a proof-of-concept implementation of an automatic generator of mixed-correlator bootstrap equations of scalar operators with global symmetry. Let us illustrate its use with an example. Suppose we would like to perform the numerical bootstrap of a CFT invariant under D_8, the dihedral group with eight elements. Let us assume the existence of two scalar operators, one in the singlet and the other in the doublet of D_8. The mixed-correlator bootstrap equations can be generated by a few lines of Mathematica code, after loading our package. All that remains is to make a small edit of the resulting file, to set the dimensions and gaps of the operators. The Python code then generates the XML input file for sdpb.
The rest of the paper is organized as follows. In Sec. 2, we first explain our notations for the group theory constants and then describe how the bootstrap equations can be obtained, given the set of external scalar primary operators φ i in the representation r i of the symmetry group G. In Sec. 3, we discuss how our autoboot implements the procedure given in Sec. 2. In Sec. 4, we describe two examples of using autoboot. The first is to perform the mixed-correlator bootstrap of the 3d Ising model. The second is to study the O(2) model with three types of external scalar operators. Without autoboot, it is a formidable task to write down the set of bootstrap equations, but with autoboot, it is immediate.
We also have an Appendix A, where we discuss a simple technique, which we call hot-starting, to reduce the running time of the semi-definite program solver significantly, by reusing parts of the computation for a given set of scaling dimensions of external operators in the computation for another, nearby set of scaling dimensions. Our experience shows that it often speeds up the computation by a factor of about 10 to 20.
The authors hope that our autoboot will be of use to the bootstrap community. The code is freely available at https://github.com/selpoG/autoboot/.

Group theory notations
Let us first set up our notation for the group theory data we need. Let G be the symmetry group we are interested in, and Irr(G) the set containing one explicit representation for each isomorphism class of unitary irreducible representations of G.
In particular, r ∈ Irr(G) is a vector space together with explicit unitary matrices U(g)_a{}^b, where a, b = 1, ..., dim(r), representing the G action. The complex conjugate representation r^* has the G action given by the complex conjugate matrices U(g)^*. For r ∈ Irr(G), we denote by r̄ the irreducible representation in Irr(G) isomorphic to r^*, i.e. r^* ≅ r̄ ∈ Irr(G). When r is strictly real or complex, we can and do require that r̄ = r^*. When r is pseudoreal, we have r = r̄, but r̄ ≠ r^* as explicit representations. This subtle distinction between r^* and r̄ is unfortunately necessary, since we will carry out the computations using explicit representation matrices.
We denote the G-invariant subspace of the tensor product of r_1, ..., r_n by inv(r_1, ..., r_n). We then define inv(r_1, ..., r_n | s_1, ..., s_m) to be inv(r_1^*, ..., r_n^*, s_1, ..., s_m). In particular, we are interested in inv(t | r, s), whose orthonormal basis times dim(t) we denote by the generalized Clebsch-Gordan (CG) coefficients c(t, a | r, b; s, c)_n, with n = 1, ..., dim inv(t | r, s). When r = s, we can choose signs σ_n(t | r, r) = ±1 so that

    c(t, a | r, b; r, c)_n = σ_n(t | r, r) c(t, a | r, c; r, b)_n.    (2.6)

We note that dim inv(r | r, id) = 1 and

    c(r, a' | r, a; id, 1)_1 = δ_a^{a'}    (2.7)

for general r. We define the invariant tensor {a, b}_r as the essentially unique invariant tensor in inv(r, r̄), suitably normalized. When r is strictly real, we can choose {a, b}_r to be δ_{ab}. When r is pseudoreal, {a, b}_r is antisymmetric and can be taken to be the direct sum of 2×2 blocks ((0, -1), (1, 0)).
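To make the pseudoreal case concrete, the following short numpy check (our own illustration, not part of autoboot) verifies that for the doublet of SU(2), a pseudoreal representation, the antisymmetric pairing given by the 2×2 block above is indeed G-invariant:

```python
import numpy as np

# The SU(2) doublet is pseudoreal: the invariant pairing {a, b}_r is the
# antisymmetric epsilon tensor, the 2x2 block ((0, -1), (1, 0)) above.
eps = np.array([[0.0, -1.0], [1.0, 0.0]])

def su2(alpha: complex, beta: complex) -> np.ndarray:
    """An SU(2) matrix, parameterized by alpha, beta with |alpha|^2 + |beta|^2 = 1."""
    n = np.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
    a, b = alpha / n, beta / n
    return np.array([[a, -np.conj(b)], [b, np.conj(a)]])

# For any U in SU(2), eps_{ab} U^a_c U^b_d = det(U) eps_{cd} = eps_{cd},
# so the antisymmetric pairing is G-invariant.
rng = np.random.default_rng(0)
for _ in range(5):
    z = rng.normal(size=4)
    U = su2(z[0] + 1j * z[1], z[2] + 1j * z[3])
    assert np.allclose(U.T @ eps @ U, eps)
```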
After these preparations, we can finally write down the orthonormal bases of inv(r, s, t) and inv(r_1, r_2, r_3, r_4), which we denote by ⟨a, b, c⟩^{r,s,t}_n and ⟨a_1, a_2, a_3, a_4⟩^{r_1,r_2,r_3,r_4}_{s;nm}, respectively. We note that

    ⟨a, b, c⟩^{r,s,t}_n = σ_n(r, s, t) ⟨b, a, c⟩^{s,r,t}_n,    (2.11)

where σ_n(r, s, t) := σ_n(t | r, s). Other permutations are more complicated. We define τ for the cyclic permutation, which can be further written as a sum of products of four generalized CG coefficients c(t, a | r, b; s, c)_n using (2.10). This explains our use of the tetrahedron symbol for the coefficients in this relation. For G = SU(2), this tetrahedral object is known as the 6j symbol.

Operator product expansions
For two scalar primary fields φ_{1,2}, we denote their operator product expansion (OPE) schematically by

    φ_1(x) φ_2(y) ∼ Σ_{O:t} Σ_k C_{φ_1 φ_2 O,k}(x − y, ∂_y) O^k(y).

Here, the subscript r[a] means that an operator belongs to the representation r with the index a = 1, ..., dim(r); O : t means that the intermediate primary operator O is in the representation t; the superscript k is an index of a spin representation of the rotation group SO(d); and C_{φ_1 φ_2 O,k}(x − y, ∂_y) captures the contribution of the descendants.
For an operator O transforming in r ∈ Irr(G), we denote its complex conjugate by Ō, which transforms in the representation r̄. We normalize the operators so that their two-point function is given in terms of the invariant tensor {a, b}_r and a certain invariant tensor I_{ij}(x). We also introduce signs σ(O) = ±1 to compensate the antisymmetry of {a, b}_r when r is pseudoreal. Namely, if O is in a complex or in a strictly real representation, σ(O) = +1, and for a pair O, Ō of operators in a pseudoreal representation, we choose the signs so that σ(O)σ(Ō) = −1.
We can now proceed to three-point functions. Using the OPE, we find the three-point functions in terms of a certain invariant tensor Z_i(x) and the OPE coefficients α^n_{123}, which we introduce here. Under the exchange of the first two operators, under cyclic permutations, and under complex conjugation, the OPE coefficients satisfy the relations (2.24), (2.25), (2.26). In particular, when O is an unknown intermediate operator, we have various relations among the coefficients α^n_{12O} and their conjugates, for n = 1, ..., dim inv(r, s, t). When O is one of the known external scalar operators, say φ_3, there are various relations among

    α^n_{123}, α^n_{231}, α^n_{312}, α^n_{213}, α^n_{321}, α^n_{132}, ᾱ^n_{123}, ᾱ^n_{231}, ᾱ^n_{312}, ᾱ^n_{213}, ᾱ^n_{321}, ᾱ^n_{132}.    (2.28)

These relations are all R-linear. Therefore, the solutions to these relations can be parameterized by mutually independent real numbers which we call β^m_{12O}, so that all the OPE coefficients listed above are linear combinations thereof.
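The parameterization by the independent β's amounts to computing the nullspace of the R-linear relations; here is a minimal numpy sketch of this step, with made-up relations standing in for (2.24)-(2.26):

```python
import numpy as np

def nullspace(M: np.ndarray, tol: float = 1e-10) -> np.ndarray:
    """Orthonormal basis of the real nullspace of M, computed via SVD."""
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T  # columns span {x : M x = 0}

# Toy example with made-up relations among three OPE coefficients
# (a1, a2, a3): a1 = a2 and a3 = -2 a1.
M = np.array([
    [1.0, -1.0, 0.0],   # a1 - a2 = 0
    [2.0,  0.0, 1.0],   # 2 a1 + a3 = 0
])
B = nullspace(M)          # each column corresponds to an independent beta
assert B.shape == (3, 1)  # here one independent real parameter survives
# every solution of the relations is a linear combination of the columns of B
beta = 1.7
a = B @ np.array([beta])
assert np.allclose(M @ a, 0.0)
```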

Bootstrap equations
We can finally study the four-point function. From the definitions we have given so far, we can expand the four-point function in the conformal blocks g_{Δ,l}(u, v) in the notation of [24] (2.31). Symmetrizing and anti-symmetrizing in u and v, we obtain the bootstrap equations. We note that the resulting functions satisfy certain relations 2 under permutations of the external labels i, j, k, l, which follow from the properties of g_{Δ,l}; see e.g. Eq. (59) of [87]. We automatically reorder i, j, k, l using these symmetries during the calculation.
We take the inner product with the invariant tensor ⟨a_1, a_2, a_3, a_4⟩^{r_1,r_2,r_3,r_4}_{s;nm} to obtain the bootstrap equations in components. We now denote the space of functions of (u, v) by F, and consider a vector of functionals f_C : F → R, indexed by the choice C. We can exclude the existence of such a CFT if we can find functionals such that the resulting matrices are positive definite, where A^{IJ} ≻ 0 means that A is a positive-definite matrix.
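As a toy illustration of this exclusion criterion (our own sketch, not autoboot code), one can test whether a candidate functional has produced positive-definite matrices by checking their eigenvalues; the matrices below are hypothetical:

```python
import numpy as np

def is_positive_definite(A: np.ndarray, tol: float = 1e-12) -> bool:
    """A is positive definite iff all its eigenvalues are strictly positive."""
    return bool(np.min(np.linalg.eigvalsh(A)) > tol)

# Hypothetical 2x2 matrices standing in for the A^{IJ} obtained by applying
# a candidate functional f_C to the vectors of matrices in the equations.
A_good = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite: excludes the CFT
A_bad = np.array([[1.0, 2.0], [2.0, 1.0]])    # indefinite: no exclusion
assert is_positive_definite(A_good)
assert not is_positive_definite(A_bad)
```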
In practice, we classify the intermediate operators into sectors, specified by either the identity 1, the known external operators φ_i, or unknown operators specified by r ∈ Irr(G) and the spin l. We then demand the positivity conditions (2.40) in the identity sector, for each external operator φ_i with an assumed dimension Δ_i, and for the unknown operators in each sector.

2 The authors thank Shai Chester for pointing out the importance of removing redundant equations using these relations, in particular (2.35).


Implementation

Group theory data
In autoboot we provide a proof-of-concept implementation of the strategy described in the previous section. For each compact group G to be supported in autoboot, one needs to provide the following information:
• Labels r of irreducible representations, together with their dimensions.
• The complex conjugation map r → r̄.
• Abstract tensor product decompositions of r_i ⊗ r_j into irreducible representations.
• Explicit unitary representation matrices of the generators of G for each irreducible representation r.
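To make the list above concrete, here is a hypothetical Python sketch of such a data set for the simplest case G = Z_2; the container and its field names are our own illustration, not autoboot's internal format:

```python
from dataclasses import dataclass

import numpy as np

# A hypothetical container for the group-theory data listed above;
# the field names are ours, not autoboot's.
@dataclass
class GroupData:
    irreps: dict      # label -> dimension
    conjugate: dict   # label -> label of the conjugate irrep
    tensor: dict      # (label, label) -> labels appearing in r_i (x) r_j
    matrices: dict    # label -> representation matrices of the generators

# The data for Z2 = {1, g}: two one-dimensional irreps, both self-conjugate.
z2 = GroupData(
    irreps={"triv": 1, "sign": 1},
    conjugate={"triv": "triv", "sign": "sign"},
    tensor={("triv", "triv"): ["triv"], ("triv", "sign"): ["sign"],
            ("sign", "triv"): ["sign"], ("sign", "sign"): ["triv"]},
    matrices={"triv": [np.array([[1.0]])], "sign": [np.array([[-1.0]])]},
)
assert z2.tensor[("sign", "sign")] == ["triv"]
```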
Currently we support the small finite groups G in the SmallGrp library [89] of the computer algebra system GAP [90], and small classical groups such as G = SO(2) and O(2). For classical groups, these data can in principle be generated automatically, but at present we have implemented by hand only the few representations we actually support.
For small finite groups, we use a separate script to extract these data from GAP and convert them using a C# program into a form easily usable from autoboot. Currently the script uses IrreducibleRepresentationsDixon in the GAP library ctbllib, which is based on the algorithm described in [92]. Due to the slowness of this algorithm, the distribution of autoboot as of March 2019 does not contain the converted data for all the small groups in the SmallGrp library. If any reader needs to generate the data for a small group not contained in the distribution of autoboot, please ask the authors for assistance. A faster function to generate irreducible representations is available in the GAP library repsn [93] based on the paper [94], but unfortunately it does not give unitary matrices at present.
We have in fact implemented two variants: one where the matrix elements are computed as algebraic numbers, and another where they are evaluated numerically. The line <<"group.m" or <<"ngroup.m" loads the algebraic or the numerical version, respectively. The generalized CG coefficients are determined by linear invariance conditions, for the discrete part g ∈ G and for the infinitesimal generators x ∈ 𝔤, where we use r_a{}^{a'} for the representation matrices of a representation r, etc. Our autoboot enumerates these equations from the given explicit representation matrices, and solves them using NullSpace and Orthogonalize of Mathematica. We also make sure that for r = s these coefficients are either even or odd under a ↔ b. The notations in this paper and in the code are mapped as follows: inv[r, s, t] = dim inv(r, s, t), etc. These (except σ) can be computed using the inner products of the invariant tensors as explained already. Since the matrix elements of the invariant tensors are often very sparse, and the dimension of the space of invariant tensors is often simply 1, our autoboot uses a quicker method to compute them, using only the first few nonzero entries of the invariant tensors and solving the resulting linear equations.
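The NullSpace-and-Orthogonalize step can be mimicked in a few lines of numpy; the following sketch (our own, using S_3 and its two-dimensional irreducible representation as the example) finds the invariant tensors of r ⊗ r from the invariance conditions:

```python
import numpy as np

# Generators of the 2d irrep of S3: a 120-degree rotation and a reflection.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
R = np.array([[c, -s], [s, c]])
F = np.array([[1.0, 0.0], [0.0, -1.0]])

# An invariant tensor v in r (x) r satisfies (U(g) (x) U(g)) v = v for each
# generator g; we stack these linear conditions and take the nullspace,
# mimicking Mathematica's NullSpace followed by Orthogonalize.
M = np.vstack([np.kron(R, R) - np.eye(4), np.kron(F, F) - np.eye(4)])
_, sv, vt = np.linalg.svd(M)
null = vt[int(np.sum(sv > 1e-10)):].T   # columns span the invariant subspace
Q, _ = np.linalg.qr(null)               # orthonormalize the basis
assert null.shape[1] == 1               # dim inv(E, E) = 1 for the 2d irrep E
# the invariant tensor is proportional to delta_{ab}, i.e. (1, 0, 0, 1)
assert np.allclose(Q[:, 0] / Q[0, 0], np.array([1.0, 0.0, 0.0, 1.0]))
```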

CFT data
The generated equations are stored as sums of two kinds of terms: single[...] for contributions of the known external operators, and sum[...] for contributions of the intermediate operators O. The function f inside single is a product of two β's and Fp or Hp, and the function f inside sum is a product of two β's and F or H. The function format gives a more readable representation of the equations.

Bootstrap equations
We convert the bootstrap equations into a semi-definite program following the standard method. To do this, makeSDP[...] first finds all the sectors in the intermediate channel, and for each sector lists all the OPE coefficients β_{IO} involved in that sector. We then extract the vector of matrices F^{IJ,C} as described in (2.38). In practice, this matrix F^{IJ,C} is very sparse and automatically block-diagonal; autoboot splits it up accordingly.
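The splitting of a sparse, secretly block-diagonal constraint matrix can be sketched as follows; this is a generic union-find illustration of the idea, not autoboot's actual routine:

```python
import numpy as np

def blocks(F: np.ndarray) -> list:
    """Group the indices of a symmetric matrix into independent blocks, so
    that a block-diagonal constraint can be split into smaller ones."""
    n = F.shape[0]
    parent = list(range(n))
    def find(i):
        # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(n):
            if F[i, j] != 0:
                parent[find(i)] = find(j)  # indices i, j talk to each other
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# A hypothetical 4x4 constraint matrix which is secretly block-diagonal
# in the index sets {0, 2} and {1, 3}.
F = np.array([[1.0, 0.0, 2.0, 0.0],
              [0.0, 3.0, 0.0, 1.0],
              [2.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 2.0]])
assert blocks(F) == [[0, 2], [1, 3]]
```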
At this point, it is straightforward to convert it into a form understandable by the semi-definite program solver sdpb. We implemented a function toCboot[...] which constructs a Python program that uses cboot [43]. After saving the Python program to a file, some minor edits are necessary to set up the gaps in the assumed spectrum, etc. Then the Python program outputs a file which can be fed to sdpb.

3d Ising model with ε and σ
Let us now reproduce the ground-breaking result of [24], where the mixed-correlator bootstrap of the 3d Ising model with the energy operator ε and the spin operator σ was first performed. We use the following code. Here, we set z2 to the first group with two elements, namely Z_2. We then introduce a Z_2-even operator ε = e and a Z_2-odd operator σ = a. We use the symbol a to represent the operator σ, since the symbol given to op will also be used in the generated Python code. Here we also illustrate another small feature of autoboot: the trivial representation of a group g can be obtained as g[id]. We save the Python code into Ising.py.
The resulting Python code uses cboot. We need to make a few manual modifications to the first few lines in the file:

context = cb.context_for_scalar(epsilon=0.5, Lambda=11)
spins = list(range(22))
nu_max = 8
mygap = {}
Let us explain it line by line:
• epsilon = (d − 2)/2 is given by the spacetime dimension d.
• Lambda = Λ specifies the cutoff m + n ≤ Λ in the derivative expansion ∂^m_u ∂^n_v F(u, v) of the conformal blocks.
• spins controls the spins in the intermediate channel to consider.
• nu_max = ν_max is the number of poles which will be used in the numerical computation of the conformal block, as explained in [24].
• mygap specifies the assumed gaps in the spectrum; we can set it, e.g., to assume that all unnamed scalar operators are irrelevant.
To actually create an input to sdpb, we call write_SDP({"e": 1.4127, "a": 0.5181}), which generates the semi-definite program for the given Δ_ε and Δ_σ. We can run the resulting program and check that the output is consistent with the results of [24,30]. We can further enhance the program to search a certain region in the (Δ_ε, Δ_σ) space, and/or to run sdpb from within the program, and so on. autoboot also has the ability to output the bootstrap equations in LaTeX, to be enclosed in the \begin{align}...\end{align} environment. To use this facility, we set up the mapping between the Mathematica names of the operators and representations and their LaTeX counterparts, and call toTeX, which generates the full set of bootstrap equations.
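A search over a region of the (Δ_ε, Δ_σ) space, mentioned above, can be organized as in the following sketch. Here write_SDP stands for the function in the generated Ising.py; it is replaced by a placeholder so that the snippet is self-contained, and the scan ranges are chosen arbitrarily for illustration:

```python
import numpy as np

def write_SDP(deltas):
    """Placeholder for the write_SDP function of the generated Ising.py."""
    return deltas

# Hypothetical scan ranges around the 3d Ising point.
grid = [
    {"e": de, "a": da}
    for de in np.linspace(1.40, 1.42, 5)    # assumed range for Delta_eps
    for da in np.linspace(0.517, 0.519, 5)  # assumed range for Delta_sigma
]
for point in grid:
    write_SDP(point)   # one sdpb input per grid point; then run sdpb on each
assert len(grid) == 25
```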

3d O(2) model with three external scalar operators
As the next example, we consider the 3d O(2) model with three primary scalar external operators: a singlet s, a vector φ, and a traceless symmetric tensor t. In [32], the same model was analyzed using s and φ as external operators, and the information on Δ_t was obtained by imposing conditions in the intermediate channel. Here o2 = getO[2] creates the group O(2) within autoboot, and v[n] stands for the n-th traceless symmetric representation. We use the symbol v to represent the operator φ. In the bootstrap equations generated by toTeX, we use S_n to denote the n-th symmetric traceless tensor representation, which is v[n] in Mathematica. We also introduce V = S_1 and T = S_2 as abbreviations. I_+ is the trivial representation and I_− is the sign representation.
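As a quick consistency check of the representation theory entering these equations (our own illustration, not autoboot output), the O(2) tensor product V ⊗ V = I_+ ⊕ I_− ⊕ T can be verified numerically via characters:

```python
import numpy as np

# Verify V (x) V = I+ (+) I- (+) T for O(2), with V = S_1 and T = S_2 as in
# the text. On a rotation by theta, the character of S_n is 2 cos(n*theta);
# on a reflection, it is 0.
def rot(n, theta):
    """The rotation by theta in the representation S_n of O(2)."""
    c, s = np.cos(n * theta), np.sin(n * theta)
    return np.array([[c, -s], [s, c]])

refl = np.array([[1.0, 0.0, ], [0.0, -1.0]])  # a reflection, in any S_n

# On rotations: (2 cos theta)^2 = 1 + 1 + 2 cos(2 theta).
for theta in np.linspace(0.0, np.pi, 7):
    lhs = np.trace(np.kron(rot(1, theta), rot(1, theta)))
    rhs = 1 + 1 + np.trace(rot(2, theta))   # chi_{I+} + chi_{I-} + chi_T
    assert np.isclose(lhs, rhs)

# On reflections: 0^2 = 1 + (-1) + 0, since chi_{I-}(reflection) = -1.
assert np.isclose(np.trace(np.kron(refl, refl)), 1 - 1 + 0)
```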
This example shows the power of autoboot. It is almost trivial to add another external operator using autoboot, whereas it is quite tedious to work out the form of the bootstrap equations by hand.
We used Λ = 25 and obtained the island in the (Δ_s, Δ_φ, Δ_t) space shown in Fig. 1 and Fig. 2, where the results from Appendix B of [32] are also presented.3 In the figures showing the O(2) island, the blue region is the one obtained in Appendix B of [32], and the red region is our result.

Hot-starting

For the details of converting the numerical bootstrap into a semi-definite program, we refer the reader to [30]; our discussion here will be brief.
Recall that in semi-definite programming we consider maximizing b^a y_a under the condition

    Tr(A_u Y) + B_u{}^a y_a = c_u,    (A.1)

and we vary y_a, Y_{ij}.
Here, the indices run over

    a = 1, ..., N;  u = 1, ..., P;  i, j = 1, ..., K,    (A.4)

and the matrices are symmetric in the indices i and j. Y_{ij} ⪰ 0 means that the matrix Y is positive semi-definite. This is the dual form of the problem, while the primal form is that we minimize c^u x_u under the condition

    X_{ij} = A^u{}_{ij} x_u,  b_a = B^u{}_a x_u,  X_{ij} ⪰ 0,    (A.5)

where we have the same input data as above and we vary x_u, X_{ij}. When (x, X) or (y, Y) satisfies the respective equality condition (A.5) or (A.1), it is called primal or dual feasible. The duality gap, defined as c^u x_u − b^a y_a, is guaranteed to be non-negative for a primal feasible (x, X) and a dual feasible (y, Y). When the duality gap vanishes, (x, X) and (y, Y) solve the respective optimization problems, and XY = 0. A semi-definite program solver starts from an initial point (x, X, y, Y), which need not satisfy the equality constraints in (A.1) and (A.5), and updates the values of (x, X, y, Y) via a generalized Newton search so that they become feasible up to an allowed numerical error which we specify.
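The primal/dual structure and the duality gap can be illustrated with a one-dimensional toy instance of (A.1) and (A.5), with made-up data A, B, c, b:

```python
# A one-dimensional instance (N = P = K = 1) with made-up data.
A, B, c, b = 1.0, 1.0, 2.0, 1.0

# Primal (A.5): minimize c*x subject to X = A*x >= 0 and b = B*x.
x = b / B                  # the equality constraint fixes x = 1
X = A * x
assert X >= 0              # so (x, X) is primal feasible

# Dual (A.1): maximize b*y subject to Tr(A*Y) + B*y = c and Y >= 0.
y = 1.5                    # a dual feasible but non-optimal point
Y = c - B * y              # Y = 0.5 >= 0
assert Y >= 0

gap = c * x - b * y        # duality gap, non-negative for feasible pairs
assert gap >= 0
# at the dual optimum y = 2 we have Y = 0, the gap closes, and X*Y = 0
assert c * x - b * 2.0 == 0.0 and X * (c - B * 2.0) == 0.0
```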
In the application to the numerical bootstrap, the bootstrap constraints are turned into a maximization problem of the dual form discussed above. The aim is to construct an exclusion plot of the scaling dimensions Δ_{1,...,n} of the external operators φ_{1,...,n}. Depending on the precision we want, we pick fixed values of K, N, P, and we construct c_u, B^u_a, A^u_{ij} as functions of Δ_{1,...,n}. We often simply set b = 0 and look for a dual feasible solution. If one is found, the chosen set of values Δ_{1,...,n} is excluded. To construct an exclusion plot, we repeat this operation for many sets of values Δ_{1,...,n}.
In the existing literature, and in the sample implementations available in the community, the semi-definite program solver is repeatedly run with the initial value (x, X, y, Y) = (0, Ω_P I_{K×K}, 0, Ω_D I_{K×K}), where I_{K×K} is the unit matrix and Ω_{P,D} are real constants. Our improvement is simple and straightforward: for two nearby sets of values Δ_{1,...,n} and Δ'_{1,...,n}, we reuse the final value (x_*, X_*, y_*, Y_*) of the previous run as the initial value of the next run. For nearby values of Δ_{1,...,n}, the updates of (x, X, y, Y) via the generalized Newton search are expected to follow a similar path; therefore, reusing the values of (x, X, y, Y) can speed up the computation, possibly significantly. We call this simple technique the hot-starting of the semi-definite program solver. For this purpose, we implemented a new option --initialCheckpointFile for sdpb, so that the initial value of (x, X, y, Y) can be specified at the launch of sdpb. The code has been merged into the master branch of https://github.com/davidsd/sdpb.
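The underlying idea of hot-starting can be illustrated with plain Newton iteration on a toy root-finding problem (not a semi-definite program): restarting from the solution of a nearby problem takes fewer steps than restarting from the generic initial point:

```python
# Hot-starting illustrated with Newton's method: when the problem parameters
# move slightly, restarting from the previous solution converges in fewer
# steps than restarting from the generic initial point.
def newton_sqrt(a, x0, tol=1e-12):
    """Solve x^2 = a by Newton iteration; return (root, number of steps)."""
    x, n = x0, 0
    while abs(x * x - a) > tol:
        x = 0.5 * (x + a / x)
        n += 1
    return x, n

root, cold = newton_sqrt(2.00, 1.0)   # cold start from the generic x0 = 1
_, warm = newton_sqrt(2.01, root)     # hot start from the nearby solution
assert warm < cold                    # the hot-started run needs fewer steps
```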
We have not performed any extensive, scientific measurement of the actual speedup from this technique, but in our experience, sdpb finds dual feasible solutions about 10 to 20 times faster than when starting from the default initial value.
There are a couple of points to watch out for in using this technique:
• In the original description of sdpb in [30], it is written that, in practice, if sdpb finds a primal feasible solution (x, X) after some number of iterations, then it will never eventually find a dual feasible one. Thus one additionally includes the option --findPrimalFeasible, and finding a primal feasible solution is taken to mean that the chosen set of values Δ_{1,...,n} is allowed. This observation no longer holds, however, once the hot-start technique is applied. We indeed found that often a primal feasible solution is quickly found, and then a dual feasible solution is found later. Therefore, finding a primal feasible solution should not be taken as a substitute for never finding a dual feasible solution. Instead, we need to turn on the options --findDualFeasible and --detectPrimalFeasibleJump and turn off --findPrimalFeasible. 4
• From our experience, it is useful to prepare the tuple (x, X, y, Y) by running sdpb for two values of Δ_{1,...,n}, one known to belong to the rejected region and the other known to belong to the accepted region, so that the tuple (x, X, y, Y) experiences both the finding of a dual feasible solution and the detection of a primal feasible jump. Somehow this significantly speeds up the subsequent runs.
• When one reuses the tuple (x, X, y, Y) too many times, the control value µ, which is supposed to decrease, sometimes mysteriously starts to increase. At the same time, one observes that the primal and dual step lengths α_P and α_D (in the notation of [30]) become very small. This effectively stops the updating of the tuple (x, X, y, Y). When this happens, it is better to start afresh, or to reuse a tuple (x, X, y, Y) from some earlier run which did not show this pathological behavior.