Quantum stabilizer codes, lattices, and CFTs

There is a rich connection between classical error-correcting codes, Euclidean lattices, and chiral conformal field theories. Here we show that quantum error-correcting codes, those of the stabilizer type, are related to Lorentzian lattices and non-chiral CFTs. More specifically, real self-dual stabilizer codes can be associated with even self-dual Lorentzian lattices, and thus define Narain CFTs. We dub the resulting theories code CFTs and study their properties. T-duality transformations of a code CFT, at the level of the underlying code, reduce to code equivalences. By means of such equivalences, any stabilizer code can be reduced to a graph code. We can therefore represent code CFTs by graphs. We study code CFTs with small central charge $c=n\leq 12$, and find many interesting examples. Among them is a non-chiral $E_8$ theory, which is based on the root lattice of $E_8$ understood as an even self-dual Lorentzian lattice. By analyzing all graphs with $n\leq 8$ nodes we find many pairs and triples of physically distinct isospectral theories. We also construct numerous modular invariant functions satisfying all the basic properties expected of the CFT partition function, yet which are not partition functions of any known CFTs. We consider the ensemble average over all code theories, calculate the corresponding partition function, and discuss its possible holographic interpretation. The paper is written in a self-contained manner, and includes an extensive pedagogical introduction and many explicit examples.


Introduction
It has been recognized for many years that codes, lattices, and conformal field theories (CFTs) are deeply intertwined. Perhaps the best known example of this relation is the construction of the Leech lattice from the extended Golay code. The Leech lattice subsequently played a central role in the discovery of the monster group, which appears naturally as the symmetry of the Monster CFT, a particular orbifold of the chiral CFT associated with the Leech lattice [1,2]. More generally, classical self-dual binary linear codes are naturally associated with Euclidean even self-dual lattices, which in turn give rise to chiral bosonic CFTs [3]. This relation is not exclusive: there are other known ways in which classical codes are related to chiral theories [4,5].
In view of the fruitful connections between classical codes, Euclidean lattices, and chiral CFTs, one may wonder if there is a corresponding hierarchy based on quantum codes. After all, conformal field theories are fundamentally quantum in nature, and so it is natural to expect that their relation to codes extends to include quantum codes. In this paper we will develop this idea, and show that there is indeed a natural and compelling correspondence between an important class of quantum codes, real self-dual binary stabilizer codes, and even self-dual Lorentzian lattices. These lattices define a class of non-chiral CFTs that arise from toroidal compactifications of strings with quantized B-flux, a subset of the family of Narain CFTs. In other words, real self-dual stabilizer codes are in one-to-one correspondence with a family of CFTs of a particular kind, which we call code CFTs.
The connection between CFTs and quantum codes becomes most explicit at the level of the partition function, or, in the case of the underlying code, at the level of the code's enumerator polynomial. Analogously to the classical case, the (refined) enumerator polynomial of a real self-dual quantum code lifts to the Siegel theta function of the associated Lorentzian lattice, which becomes the CFT partition function upon multiplication by the appropriate power of $|\eta(\tau)|^2$ required for modular invariance. The constraints of modular invariance of the partition function reduce to a set of simple algebraic relations that must be satisfied by the enumerator polynomial, making it possible to implement a "baby" analogue of the CFT modular bootstrap for quantum codes. Thus, the maximization of the spectral gap over CFTs of given c becomes, with some nuances, a modification of the problem of maximizing error-correcting capacity, as measured by the number of qubits whose decoherence a real self-dual quantum code of given length can detect. The number $d_b$ "controlling" the spectral gap can be bounded from above via a linear programming optimization problem, which we solve numerically for codes of length n ≤ 32; we verify explicitly that the bound is tight for n ≤ 8. It is worth noting that the problem of finding codes with maximum error-correcting capacity (which maximize the Hamming distance for a given code length) is closely related to the problem of finding the maximum possible density of a lattice sphere packing in a given number of dimensions [6,7]; essentially, it is a version of the sphere-packing problem with respect to the distance measure appropriate for quantum codes rather than the Euclidean metric. The sphere-packing problem has recently been recast in terms of the CFT modular bootstrap [8][9][10], and has been analyzed numerically, leading to improved bounds at finite n.
Our work complements these studies of sphere packing by introducing a new relation between the modular bootstrap and codes.
Another question on which the connection to codes sheds new light is that of the space of solutions of the modular bootstrap constraints, namely modular invariant functions Z(τ, τ̄) which are sums of characters with positive integer coefficients, but which are not partition functions of any known CFTs. A family of chiral Z(τ) of this sort with central charge c ≥ 24 has been previously discussed in [11]. Furthermore, there are simple examples of Z(τ) discussed later in the text which do not correspond to any CFT at all. The connection to codes leads to many such examples, both chiral and non-chiral, with small central charge in the latter case, for which the CFT is not known or may not exist. At the level of codes, the question of finding such Z(τ, τ̄) reduces to constructing multivariate polynomials obeying all symmetry and positivity constraints that must be satisfied by enumerator polynomials, yet which are not enumerators of any code. Solving a simple linear programming problem yields many thousands of examples of "fake" enumerators, already for small central charge c = c̄ ≤ 8.
Code CFTs form a discrete subset of the continuous moduli space of Narain CFTs. We show that this subset, and hence the space of codes itself, can be described as a coset of discrete groups. Acting on this coset are symmetries relating equivalent codes; code CFTs that are T-dual to each other correspond to equivalent codes. By making use of code equivalences, we are able to reduce a general code to an equivalent code of canonical form. Each canonical representative is associated with an undirected graph, and equivalent codes (i.e. T-dual code CFTs) map to graphs related by a particular graph transformation, known as edge local complementation. The representation by graphs provides a convenient way to classify the equivalence classes of codes of a given length. We do this for n ≤ 8, and in the process find many interesting examples.
One striking finding is the multitude of inequivalent codes sharing the same enumerator polynomial, which implies the existence of many examples of isospectral lattices and inequivalent isospectral code CFTs. The first such example appears for n = 7; it corresponds to a pair of isospectral even self-dual Lorentzian lattices in R^{7,7}. This is the lowest-dimensional example among lattices associated with the stabilizer codes, and in many ways is analogous to Milnor's example of the isospectral pair of even self-dual lattices in R^{16}. But unlike the Euclidean case, where the next example occurs in 24 dimensions, in the Lorentzian case we find many dozens of pairs and even triplets of isospectral lattices in R^{8,8}, and correspondingly many isospectral CFTs with c = c̄ = n = 8.
One of our original motivations for the present work came from quantum gravity. In the context of the AdS/CFT correspondence, information is understood to be stored at the boundary of spacetime, in a highly nonlocal and redundant form strongly reminiscent of error-correcting codes [12,13]. This observation raises the question of exactly how information is stored in the dual CFT, and in particular, how the form of error correction seen in the bulk gravitational theory is implemented in the CFT. While we do not claim to have a complete answer to this question at present, we have at least identified error-correcting codes within an important class of CFTs. This same class of CFTs has recently been studied as a toy model of holography. In papers by two sets of authors [10,14], it has been shown that the average over the moduli space of Narain CFTs can be reinterpreted as a sum over three-dimensional topologies. The authors conjecture that the moduli-averaged CFT is dual to a three-dimensional gravitational theory with U(1)^n × U(1)^n gauge symmetry. This suggests that if we are looking for a holographic version of our code construction, we should consider what happens when we average over codes. We perform the average over a class of codes to obtain the averaged partition function of the corresponding CFTs, and note the possibility of reinterpreting the partition function as a sum over handlebodies.
Quantum code CFTs are a small subset of the space of all Narain CFTs, but our results indicate that they might be representative in a certain sense. As evidence for this claim, we cite the fact that our numerical bound on $d_b/n$ is comparable to numerical bounds on the spectral gap Δ [10], and also the observation that the average over a class of code CFTs appears to have a holographic interpretation. Future explorations may uncover further indications that code CFTs can provide a useful, stripped-down setting for studying holographic phenomena. This paper is organized as follows. It includes an extensive pedagogical introduction. Section 2 discusses classical codes, both binary codes (subsection 2.1) and codes over GF(4) (subsection 2.2), as well as their relation to lattices, MacWilliams identities, Hamming and Gilbert-Varshamov bounds, and other related questions. With some exceptions, most of the material presented in Section 2 is not new, and can be skipped by a reader with sufficient background. Section 3 contains both pedagogical and original material. Subsection 3.1 introduces quantum error-correcting codes of the stabilizer type, their relation to self-orthogonal classical codes over GF(4), and the quantum version of the MacWilliams identities. Most of it can be skipped by the knowledgeable reader. Subsection 3.2 introduces a crucial new ingredient: the relation between quantum codes and Lorentzian lattices. A reader with background in both classical and quantum codes can start reading the paper from this subsection. Section 4 is similarly mixed. Subsection 4.1 introduces Narain CFTs, and is intended for readers with a background in classical or quantum codes, but no prior exposure to String Theory. It can be skipped by anyone familiar with toroidal compactifications. Subsection 4.2 is again crucial: it introduces the basic elements of our construction relating quantum codes to CFTs. All subsequent sections contain original material.

Classical error-correcting codes
We start by reviewing classical error-correcting codes, focusing on aspects important for understanding quantum codes and their relation to CFTs. For a more in-depth treatment we recommend Elkies's comprehensive yet concise reviews [6,7].

Binary codes
A binary code C is a collection of binary "codewords," vectors of length n consisting of zeros and ones, C ⊂ Z_2^n. Components of codewords c ∈ C are called bits. Each codeword encodes a particular message. When sent over a noisy channel, a codeword may be corrupted, i.e., certain bits may be changed to their opposite values. The encoding procedure is designed to make it possible to restore the original form of the codeword and thus recover the message. This is done by replacing the corrupted codeword c' ∉ C with the closest proper codeword, defined with respect to some appropriate norm. The most widely used norm is known as the Hamming distance. Given two vectors c_1, c_2 ∈ Z_2^n, the Hamming distance between them, d(c_1, c_2), is the number of corresponding bits in c_1 and c_2 that differ. The Hamming distance of a code d is the smallest Hamming distance between any two distinct codewords,

d(C) = min_{c_1 ≠ c_2 ∈ C} d(c_1, c_2).

Colloquially, an optimal code for given n and number of codewords K is one with the maximum possible d, i.e. one which can correct errors involving the maximum possible number of bits. When n goes to infinity, with log_2(K/2^n) approaching a finite limit, the maximum possible ratio d/n controls the amount of information which can be sent over a noisy channel. There are numerous bounds on d/n, but the exact limiting value is not known.
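The definitions above can be checked by brute force in a few lines of Python (an illustrative sketch; the helper names and the small example codes are our choices):

```python
from itertools import combinations

def hamming_distance(c1, c2):
    """Number of positions at which two binary words differ."""
    return sum(b1 != b2 for b1, b2 in zip(c1, c2))

def code_distance(code):
    """d(C): smallest Hamming distance between distinct codewords."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

# Even-weight code of length 3: all words with an even number of ones.
C = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
assert code_distance(C) == 2   # any two distinct codewords differ in 2 bits

# The length-3 repetition code {000, 111} has d = 3.
assert code_distance([(0, 0, 0), (1, 1, 1)]) == 3
```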
Codewords can be visualized as vertices of a unit cube in n dimensions. To design a good code, one should place as many points at the cube's vertices as possible, while making sure they are located far away from each other. The distance d between two vertices is calculated either with the Manhattan norm (the minimum distance an ant would need to travel along the edges to get from one vertex to the other) or, equivalently, as the Euclidean distance squared, d(c_1, c_2) = |c_1 − c_2|^2.
A code is linear if the sum of any two codewords c_1, c_2 ∈ C, obtained by adding the components modulo 2, is also a codeword c_1 + c_2 ∈ C. In other words, a classical linear code C is a vector space over the field F = Z_2 consisting of two elements {0, 1}. There are necessarily K = 2^k distinct codewords for some nonnegative integer k, which counts the number of "logical" bits. All codewords are specified by a binary n × k generator matrix G, where matrix multiplication is performed over the field F. We use the notation [n, k, d] to describe linear codes with Hamming distance d that encode k logical bits into n physical bits. A linear code can equivalently be specified by a "parity check" matrix H defined such that Ker(H) = Im(G), so that HG = 0, and Hc = 0 if and only if c is a proper codeword (all algebra is mod 2). The parity check matrix is an (n − k) × n binary matrix of maximal rank.
Linear codes always include the zero vector, i.e. the vector consisting of n zeros. We introduce the Hamming weight w(c) as the sum of all elements of a code vector (with the sum taken using conventional algebra, not mod 2). Then the Hamming distance is the minimal Hamming weight among all non-trivial codewords,

d(C) = min_{c ∈ C, c ≠ 0} w(c).

If a codeword c has been corrupted by some error e, c → c' = c + e, the error can be detected by applying the parity check matrix,

y(c') = Hc' = He. (2.4)

If y ≠ 0, an error has occurred. However, the converse is not necessarily true. A vanishing result y = 0 could mean that an undetectable error has occurred, one for which the error vector e is a proper nonzero codeword, e ∈ C. Clearly, such undetectable errors must simultaneously corrupt at least d(C) bits. Therefore, increasing d improves the quality of the code by making it less likely for undetectable errors to occur. The name of the parity check matrix comes from its role in detecting errors. Typical architectures for semiconductor computer memory supplement each byte (8 bits) of memory with an additional physical bit that has no effect on logical operations, and which automatically takes the value that makes the sum of all nine bits even [15]. A violation of that condition indicates that a hardware error has occurred.
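Syndrome-based detection is easy to demonstrate explicitly. The sketch below (the choice of the length-3 repetition code and all variable names are ours) shows that a proper codeword passes the parity check while a single-bit error produces a nonzero syndrome:

```python
import numpy as np

# Parity check matrix of the length-3 repetition code {000, 111}:
# each row checks that two adjacent bits agree.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def syndrome(word):
    """y = H c' (mod 2); a nonzero y signals an error."""
    return tuple(H @ np.array(word) % 2)

codeword = (1, 1, 1)
corrupted = (1, 0, 1)                 # single-bit error on the middle bit
assert syndrome(codeword) == (0, 0)   # proper codeword passes the check
assert syndrome(corrupted) != (0, 0)  # error detected
```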

Example: repetition code
The repetition code is, perhaps, the simplest example of a code. It encodes k = 1 logical bit by repeating it n times: the only two codewords are (0, …, 0) and (1, …, 1), i.e. the generator matrix is

G^T = ( 1 1 ⋯ 1 ). (2.5)

The parity check matrix can be chosen to consist of the n − 1 rows e_i + e_{i+1},

H = ( 1 1 0 ⋯ 0 0
      0 1 1 ⋯ 0 0
      ⋮
      0 0 0 ⋯ 1 1 ). (2.6)

If an error occurs that corrupts [(n − 1)/2] bits or fewer, one can restore the original message by rounding w(c')/n to the closest integer. This code has a small ratio of logical bits to physical bits, k/n = 1/n, and is therefore not very efficient.
Example: Hamming [7,4,3] code

A more interesting example is the Hamming [7,4,3] code, defined by the following parity check matrix,

H = ( 0 0 0 1 1 1 1
      0 1 1 0 0 1 1
      1 0 1 0 1 0 1 ). (2.7)

The i-th column of H is simply a binary 3-vector whose components are equal to the digits of the number i written in base 2. Thus the value of y(c') unambiguously indicates which bit should be flipped to restore the original message. All of the algebra above (except where explicitly noted) is to be understood mod 2.
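The decoding rule just described can be sketched directly in code (a minimal illustration; the function names and the particular test codeword are ours): the syndrome, read as a base-2 number, is the position of the flipped bit.

```python
import numpy as np

# Parity check matrix of the Hamming [7,4,3] code: the i-th column
# (i = 1..7) holds the binary digits of i, most significant bit first.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def correct(word):
    """Correct at most one flipped bit; the syndrome encodes its position."""
    y = H @ word % 2
    pos = 4 * y[0] + 2 * y[1] + y[2]   # read the syndrome as a base-2 number
    fixed = word.copy()
    if pos:                            # pos == 0 means no error detected
        fixed[pos - 1] ^= 1
    return fixed

# A codeword of the Hamming code (H c = 0 mod 2) ...
c = np.array([1, 1, 1, 0, 0, 0, 0])
assert not (H @ c % 2).any()
# ... is recovered from any single-bit error.
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    assert (correct((c + e) % 2) == c).all()
```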
Let us assume that a code can correct any error affecting t = [(d − 1)/2] bits or fewer. There are C_n^l = n!/(l!(n − l)!) errors affecting exactly l bits, and therefore Σ_{l=1}^t C_n^l such errors overall. This number should not exceed the total number 2^{n−k} − 1 of all possible non-trivial values of y. Otherwise different errors would yield the same y, making them indistinguishable (and their sum, which would affect 2t < d bits, would be annihilated by H, leading to a contradiction). We therefore find the following bound on t = [(d − 1)/2],

V(t, n) ≡ Σ_{l=0}^t C_n^l ≤ 2^{n−k}. (2.9)

This is known as the Hamming bound. It constrains d in terms of n and k. A code saturating the Hamming bound is called perfect. The Hamming [7,4,3] code is a perfect code; the repetition code with even n is not.

The Hamming bound has a simple geometric interpretation. We can define a ball of radius t in the space of words, centered at the codeword c, to be the set of all words c' with d(c, c') ≤ t. Then V(t, n) is the volume of this ball, i.e. the total number of words it contains. The bound (2.9) simply states that since the balls of radius t = [(d − 1)/2] centered at each of the codewords of a given code should not overlap, the total volume of all 2^k balls cannot exceed the total volume 2^n of the space of words.

It is useful to think of the elements of Z_2 = {0, 1} as the equivalence classes of even and odd integers. We can further view the set of all integers Z as a lattice in R^1, with the lattice 2Z of even integers being a sublattice. Then Z_2 is the lattice quotient Z/(2Z), i.e. equivalence classes of lattice vectors in Z modulo shifts by elements of the sublattice 2Z. For the sake of mathematical elegance (and for reasons explained below) we will rescale both of these lattices by 1/√2. Then if Γ = Z/√2, Γ* = √2 Z is its lattice dual, and Z_2 can be thought of as the quotient Γ/Γ*.
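The sphere-counting form of the bound (2.9) is easy to check numerically (a small sketch; the helper name is ours). For the Hamming [7,4,3] code the 2^4 balls of radius t = 1 exactly tile the space of 2^7 words, which is the statement that the code is perfect:

```python
from math import comb

def hamming_bound_volume(t, n):
    """V(t, n): number of words within Hamming distance t of a codeword."""
    return sum(comb(n, l) for l in range(t + 1))

# Hamming [7,4,3]: t = (3 - 1) // 2 = 1, and V(1, 7) = 1 + 7 = 8.
# The 2^4 balls of volume 8 exactly fill the 2^7 words: a perfect code.
assert 2**4 * hamming_bound_volume(1, 7) == 2**7

# Repetition code [4,1,4]: t = 1 and 2 * V(1, 4) = 10 < 16,
# so the bound is not saturated.
assert 2 * hamming_bound_volume(1, 4) < 2**4
```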
This identification is the basis for a construction of Leech and Sloane [16,17], known as Construction A, which associates a lattice to any binary linear code. A codeword is a vector c ∈ (Z_2)^n and therefore can be thought of as an equivalence class of lattice points in Γ = (Z/√2)^n modulo shifts by vectors in Γ* = (√2 Z)^n. All codewords of a given code give rise to the following set of points in Γ:

Λ(C) = { (c + 2a)/√2 : c ∈ C, a ∈ Z^n }.

Provided C is a linear code, Λ(C) is a lattice. It is easy to see that Λ(C) uniquely characterizes C. In other words, for given n, linear binary codes are in one-to-one correspondence with lattices Λ satisfying Γ* ⊂ Λ ⊂ Γ.

For a given linear code of type [n, k, d], one can define its dual C⊥, which is an [n, n − k, d'] code consisting of all vectors orthogonal to C mod 2,

C⊥ = { c' ∈ Z_2^n : c' · c = 0 mod 2 for all c ∈ C }.

The generator matrix of the dual code C⊥ is the parity check matrix of C and vice versa. The code dual to C⊥ is the original code C. The rescaling by the factor 1/√2 introduced above is necessary for the following fundamental property: the lattice of the dual code Λ(C⊥) is dual (in the lattice sense) to Λ(C),

Λ(C⊥) = Λ(C)*. (2.12)

A linear code is called self-orthogonal if, as a linear space, it is a subcode of its dual, C ⊂ C⊥. At the level of lattices, Λ(C) of a self-orthogonal code is an integral lattice. A code is called self-dual if it is equal to its dual; its corresponding lattice is then self-dual (unimodular). Self-orthogonality requires k ≤ n/2, and self-duality implies k = n/2. Therefore self-dual codes can exist only for even values of n.
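Code duality can be verified by brute force for the Hamming [7,4,3] code (a sketch under our conventions; H is the standard Hamming parity check matrix used above): its dual is a [7,3,4] code, and that dual is self-orthogonal, C⊥ ⊂ C.

```python
import itertools
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# The Hamming [7,4,3] code: the kernel of H over Z_2.
words = [np.array(w) for w in itertools.product((0, 1), repeat=7)]
C = [w for w in words if not (H @ w % 2).any()]
assert len(C) == 2**4

# The dual code: all words orthogonal (mod 2) to every codeword of C.
C_dual = [w for w in words if all((w @ c) % 2 == 0 for c in C)]
assert len(C_dual) == 2**3                                 # a [7,3,d'] code
assert min(int(w.sum()) for w in C_dual if w.any()) == 4   # d' = 4

# Every dual codeword lies in C: the [7,3,4] code is self-orthogonal.
assert all(any((w == c).all() for c in C) for w in C_dual)
```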
A binary code is called even if the Hamming weight w(c) of all of its 2 k codewords is even. Since all codewords of a self-dual code are self-orthogonal (mod 2), self-dual codes are necessarily even. At the level of lattices, when the code is even, the norm-squared of any lattice vector is integer.
A binary code is called doubly-even if the Hamming weights of all codewords are divisible by four. The corresponding lattice is then even. It is an elementary consequence, both for codes and lattices, that any doubly-even code is self-orthogonal, just as any even lattice is integral, and vice versa. We therefore arrive at the following conclusion: doubly-even self-dual codes are in one-to-one correspondence with even self-dual lattices that are sublattices of Γ ⊂ R^n.
Binary doubly-even self-dual codes, which correspond to even self-dual lattices, are said to be of type II; the class of type II codes is denoted 2_II. Even but not doubly-even self-dual codes, which correspond to odd lattices, are of type I and are in the class 2_I. In some treatments, the class 2_I is defined to include doubly-even codes as well, in which case it corresponds to the set of all integral unimodular lattices.
The all-ones vector 1 = (1, …, 1) has the following special property. For any c ∈ Z_2^n, its scalar product with 1 (taken using conventional algebra) equals the Hamming weight of c, 1 · c = w(c). For any even code, 1 is orthogonal to all codewords (with algebra mod 2), and therefore 1 belongs to the dual code. If a code is doubly-even and self-dual, 1 belongs to the code, and hence n must be divisible by four. In fact, doubly-even self-dual codes can exist only for n divisible by eight.
Two codes [n_1, k_1, d_1] and [n_2, k_2, d_2] can be combined into a new [n_1 + n_2, k_1 + k_2, min(d_1, d_2)] code, whose codewords are concatenations of codewords of the two component codes. A code which is not such a composition is called indecomposable. The Construction A lattice of a decomposable code is a direct sum of two lattices.

Example: repetition code and checkerboard lattice
We apply Construction A to the repetition code i_n (2.5). The corresponding lattice Λ includes the vector 1/√2 and the n vectors 2e_i/√2, where e_i is a basis vector in R^n. One of these vectors, say 2e_n/√2, is linearly dependent on the others and can be dropped. Thus Λ is the linear span of the following n vectors: 1/√2 and 2e_i/√2, 1 ≤ i ≤ n − 1. This is the checkerboard lattice, isomorphic to the root lattice of the B_n series rescaled by 1/√2. The lattice of the dual code includes the vectors (e_i + e_{i+1})/√2, 1 ≤ i ≤ n − 1, coming from the rows of (2.6), together with 2e_n/√2 (all other vectors 2e_i/√2 are linearly dependent). This is the root lattice of the C_n series rescaled by 1/√2. In the special case n = 2, the lattices B_2/√2 and C_2/√2 coincide, reflecting the fact that the repetition code i_2 is self-dual.
Example: Hamming [7,3,4] code and E_7 lattice

The code dual to the Hamming [7,4,3] code is known as the Hamming [7,3,4] code. Its generator matrix is given by the transpose of (2.7). Its parity check matrix, besides the rows of (2.7), includes an additional row, which can be chosen in more than one way (2.13). Construction A applied to the Hamming [7,3,4] code yields the root lattice of the Lie algebra E_7. The conventional basis of the E_7 root lattice involves integer and half-integer coordinates, which does not match the factors 1/√2 appearing via Construction A. The two lattices are nevertheless isomorphic, i.e. equivalent up to a rotation. We establish this isomorphism explicitly in Appendix A.2.
The generator matrix of a code is not unique. It can be multiplied from the right by any non-degenerate k × k binary matrix (all algebra mod 2) without changing the code. Usually the particular order of the bits (the components of the codewords) does not matter. Therefore two codes C and C' are called equivalent if they are related by a permutation of bits. At the level of generator matrices,

G' = O G Q, (2.14)

where the permutation matrix O is a non-degenerate binary n × n matrix with only one non-zero element, equal to 1, in each row, and Q is an arbitrary non-degenerate k × k binary matrix. The full equivalence group has n! elements, but some of them may act trivially. The automorphism group Aut(C) of a particular code is the subgroup of the permutation group which leaves the given code C invariant. By using an equivalence transformation of the form (2.14), the generator matrix of any code can be brought to the canonical form

G^T = ( I  B ), (2.15)

where I is the k × k identity matrix and B is some k × (n − k) binary matrix. The representation (2.15) is not unique; one can still simultaneously permute the rows and columns of B. The equivalence transformations (2.14) which permute the first k and last n − k bits act in a more complicated way, see Appendix B. Given a code with generator matrix of the canonical form (2.15), the binary matrix B can be used to define an unoriented bipartite graph. At the level of graphs, the code equivalences (2.14) are mapped to equivalences of graphs under the operation of edge local complementation [18]. The relation between codes and graphs provides a useful way to analyze and design new codes [19,20], and has been used to classify all inequivalent codes for n ≤ 24. A generalization of the relation between codes and graphs to the quantum case is an important part of our discussion in Section 4.2.
When n = 2k, the matrix B is square. In this case the parity check matrix is given by

H = ( B^T  I ).

When the code is self-dual, G and H must generate the same code, and therefore B B^T = I, understood mod 2. The same conclusion follows from the explicit form of the generator matrix of the Construction A lattice,

λ = (1/√2) ( I   B
             0  2I ).

Example: Extended Hamming [8,4,4] code and E_8 lattice

The extended Hamming [8,4,4] code is obtained from the Hamming [7,4,3] code by extending each row of its generator matrix G^T by one bit, assigned the value that makes the Hamming weight of the row even. Starting from (2.7) and (2.13) we obtain

G^T = ( 1 0 0 0 0 1 1 1
        0 1 0 0 1 0 1 1
        0 0 1 0 1 1 0 1
        0 0 0 1 1 1 1 0 ),

where we used the equivalence condition (2.14) to bring G^T to the canonical form (2.15); here B is the complement of the identity matrix. It can be easily checked that B B^T = I. The extended Hamming [8,4,4] code is denoted e_8. It is the unique doubly-even self-dual code with n = 8. Via Construction A, it gives rise to the unique even self-dual lattice in eight dimensions, the root lattice of the Lie algebra E_8.
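The stated properties of the extended Hamming code can be confirmed by enumerating it directly (a sketch in our notation; H is the standard Hamming [7,4,3] parity check matrix): extending every Hamming codeword by a parity bit yields sixteen codewords, all of weight divisible by four, that are pairwise orthogonal mod 2, i.e. a doubly-even self-dual code with k = n/2 = 4.

```python
import itertools
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# Extend each Hamming [7,4,3] codeword by a parity bit -> the e8 code.
C8 = []
for w in itertools.product((0, 1), repeat=7):
    w = np.array(w)
    if not (H @ w % 2).any():
        C8.append(np.append(w, w.sum() % 2))
assert len(C8) == 16                                  # k = 4 logical bits

# Doubly-even: every Hamming weight is divisible by four.
assert all(int(c.sum()) % 4 == 0 for c in C8)

# Self-orthogonal with k = n/2, hence self-dual.
assert all((a @ b) % 2 == 0 for a in C8 for b in C8)
```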
A linear [n, k, d] code has 2^k codewords. To summarize information about its spectrum of Hamming weights it is convenient to define the enumerator polynomial

W_C(x, y) = Σ_{c ∈ C} x^{n−w(c)} y^{w(c)}.

W_C is a homogeneous polynomial of degree n with positive integer coefficients, W_C(1, 0) = 1, and W_C(1, 1) = 2^k. Under the operation of code duality, the enumerator polynomial transforms according to the MacWilliams identity,

W_{C⊥}(x, y) = 2^{−k} W_C(x + y, x − y).

In other words, the enumerator polynomial of a self-dual code must be invariant under

x → (x + y)/√2,   y → (x − y)/√2. (2.21)

When the dual code C⊥ is not equal but is equivalent, in the sense of (2.14), to the original code, its enumerator polynomial must also be invariant under (2.21). Such a code is said to be isodual. At the level of lattices, Λ(C) of an isodual code is isomorphic to its dual, i.e. related to its dual by a rotation. Such lattices are called isodual, in contrast to self-dual lattices. Finally, there are codes C which are not equivalent to C⊥ in any conventional way, yet W_C is invariant under (2.21). Such codes are called formally self-dual. All formally self-dual codes are [n, k = n/2, d], i.e. they exist only when n is even. Schematically,

self-dual ⊂ isodual ⊂ formally self-dual.

The simplest example of an isodual but not self-dual code is the [2, 1, 1] code, which includes one trivial codeword and one non-trivial codeword c = (1, 0). This code is not even and therefore not self-dual. The corresponding lattice is a non-integral lattice with a rectangular unit cell, with sides of length √2 and 1/√2. Its dual lattice coincides with the original lattice after a rotation by π/2. The enumerator polynomial of this code, W = x^2 + xy, is invariant under (2.21).

Any enumerator polynomial of a self-dual code must be invariant under (2.21), and also under y → −y since the code is even. All polynomials invariant under these symmetries are in the polynomial ring P(W_{i_2}, W_{e_8}) generated by

W_{i_2} = x^2 + y^2,   W_{e_8} = x^8 + 14 x^4 y^4 + y^8. (2.22)

This is known as Gleason's theorem.
The generator polynomials W_{i_2} and W_{e_8} are themselves the invariant enumerator polynomials of the self-dual repetition code i_2 and the extended Hamming [8,4,4] code, respectively. Any doubly-even formally self-dual code, provided it is a linear code, is automatically self-dual. This follows immediately from the fact that an even lattice is necessarily integral, and hence is contained in its dual, Λ(C) ⊂ Λ(C⊥). The same argument applies to the dual code and its lattice, yielding Λ(C) = Λ(C⊥) and C = C⊥. The enumerator polynomial of a doubly-even code is invariant under (2.21) and y → iy. All such polynomials lie in the polynomial ring P(W_{e_8}, W_{g_24}) generated by W_{e_8} and

W_{g_24} = x^24 + 759 x^16 y^8 + 2576 x^12 y^12 + 759 x^8 y^16 + y^24. (2.23)

Here W_{g_24} is the enumerator polynomial of the extended [24,12,8] Golay code, introduced below. Instead of W_{e_8} and W_{g_24} it is sometimes convenient to use W_{e_8} and (xy(x^4 − y^4))^4 as generators. Not all polynomials invariant under the appropriate symmetries are enumerator polynomials of self-dual codes. The coefficients of bona fide enumerator polynomials are positive integers, and they additionally must satisfy W_C(1, 0) = 1. (The condition W_C(1, 1) = 2^{n/2} follows from W_C(1, 0) = 1 when W_C(x, y) is a polynomial in P(W_{i_2}, W_{e_8}) or P(W_{e_8}, W_{g_24}).) In what follows we refer to polynomials that satisfy these additional conditions as invariant polynomials.
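The invariance under (2.21) is easy to test numerically (an illustrative sketch; the function names are ours). For a homogeneous polynomial of degree n, invariance under x → (x+y)/√2, y → (x−y)/√2 is equivalent to the integer identity W(x+y, x−y) = 2^{n/2} W(x, y), which can be checked on a grid of points:

```python
def W_i2(x, y):
    """Enumerator of the repetition code i2: codewords 00 and 11."""
    return x**2 + y**2

def W_e8(x, y):
    """Enumerator of the extended Hamming code e8 (weights 0, 4, 8)."""
    return x**8 + 14 * x**4 * y**4 + y**8

# For a degree-n homogeneous W, invariance under (2.21) is equivalent to
# W(x + y, x - y) = 2^(n/2) W(x, y); check it at a grid of integer points.
for x in range(5):
    for y in range(5):
        assert W_i2(x + y, x - y) == 2 * W_i2(x, y)
        assert W_e8(x + y, x - y) == 2**4 * W_e8(x, y)
```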
An arbitrary lattice Λ is characterized by its theta function,

Θ_Λ(τ) = Σ_{v ∈ Λ} q^{v²/2},   q = e^{2πiτ}, (2.24)

which is a holomorphic function of q. Using the Poisson resummation formula, and for simplicity assuming the lattice is unimodular, one can express the theta function of the dual lattice in terms of Θ_Λ:

Θ_{Λ*}(τ) = (−iτ)^{−n/2} Θ_Λ(−1/τ).

When the lattice is even, Θ_Λ(τ) is trivially invariant under τ → τ + 1. For an even self-dual lattice, Θ_Λ(τ) transforms covariantly under the two generators of the modular group PSL(2, Z), and therefore Θ_Λ(τ) is a modular form of weight n/2.

For a lattice obtained via Construction A, the theta function can be evaluated as follows. We split the sum in (2.24) into a sum over codewords, and for each codeword c ∈ C we sum over the vectors (c + 2a)/√2, a ∈ Z^n. The sum over each component a_i ∈ Z can be performed independently in terms of the Jacobi theta functions

θ_3(q) = Σ_{m ∈ Z} q^{m²/2},   θ_2(q) = Σ_{m ∈ Z} q^{(m+1/2)²/2}. (2.28)

Conventionally, Jacobi theta functions are understood as functions of τ. We define them as functions of q to emphasize that their algebraic combinations can be expanded as power series in q. We find that

Θ_{Λ(C)}(τ) = W_C(θ_3(q²), θ_2(q²)).

Standard identities for the Jacobi theta functions imply that under τ → τ + 1 the function θ_3(q²) remains invariant while θ_2(q²) → i θ_2(q²), and under τ → −1/τ they change as follows:

θ_3(q²) → √(−iτ) (θ_3(q²) + θ_2(q²))/√2,   θ_2(q²) → √(−iτ) (θ_3(q²) − θ_2(q²))/√2.

These transformations match y → iy and (2.21), confirming the modular properties of the theta function associated with the lattice Λ(C) of a doubly-even self-dual code C.
Example: theta function of the E_8 root lattice

The root lattice of E_8 is also the Construction A lattice of the e_8 code. Therefore, its theta function is given by

Θ_{E_8}(τ) = W_{e_8}(θ_3(q²), θ_2(q²)).

The corresponding code is doubly-even and self-dual (so the lattice is even self-dual), and therefore Θ_{E_8}(τ) is a modular form of weight 4. There is a unique modular form of weight 4, the Eisenstein series E_4(τ), and therefore Θ_{E_8}(τ) ∝ E_4. The overall coefficient can be fixed by noting that both Θ_{E_8}(τ) and E_4 behave as 1 + O(q) for small q. In fact

Θ_{E_8}(τ) = E_4(τ) = 1 + 240 q + 2160 q² + …,

indicating that the E_8 lattice has 240 roots. There are many ways Θ_{E_8}(τ) = E_4 can be expressed in terms of Jacobi theta functions. It is customary to introduce θ_4(q) = Σ_{m ∈ Z} (−1)^m q^{m²/2} and

a = θ_2(q),   b = θ_3(q),   c = θ_4(q),

which satisfy a^4 + c^4 = b^4. Then

Θ_{E_8}(τ) = (a^8 + b^8 + c^8)/2. (2.34)

The analog of Gleason's theorem for theta functions of even unimodular lattices is the following: all theta functions of even self-dual lattices are polynomials in the theta functions of the E_8 lattice and of the Construction A lattice of the Golay code. This formulation is the direct analog of (2.22), but it is not completely conventional in the choice of generators. Upon the substitution x → θ_3(q²), y → θ_2(q²), the combination (xy(x^4 − y^4))^4 becomes 16 η(τ)^24, where η is the Dedekind eta function. Correspondingly, the theta series of any even self-dual lattice can be written as a polynomial in E_4 and η^24. This is of course a well-known result in the theory of modular forms. Since 1728 η^24 = E_4^3 − E_6^2, it is simply a consequence of the statement that all modular forms of weight n/2, for n divisible by 8, are polynomials in E_4 and E_6^2. Another conventional choice of generators is provided by E_4 and the theta series of the Leech lattice, introduced below.
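The q-expansion can be reproduced numerically from the Construction A formula Θ_{Λ(C)} = W_C(θ_3(q²), θ_2(q²)) applied to W_{e_8} = x^8 + 14x^4y^4 + y^8. The sketch below (series representation and names are ours) works with truncated power series in u = q^{1/4}, since θ_2(q²) involves quarter-integer powers of q:

```python
# Truncated power series in u = q^(1/4), stored as coefficient lists.
N = 16   # keep terms up to u^15, i.e. through q^3

def mul(a, b):
    """Product of two truncated series."""
    out = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    out[i + j] += ai * bj
    return out

def power(a, k):
    out = [1] + [0] * (N - 1)
    for _ in range(k):
        out = mul(out, a)
    return out

# theta3(q^2) = sum_n q^(n^2), theta2(q^2) = sum_n q^((n+1/2)^2)
theta3 = [0] * N
theta2 = [0] * N
for n in range(-8, 9):
    e3 = 4 * n * n           # exponent of u for q^(n^2)
    e2 = (2 * n + 1) ** 2    # exponent of u for q^((n+1/2)^2)
    if e3 < N:
        theta3[e3] += 1
    if e2 < N:
        theta2[e2] += 1

# Theta_E8 = W_e8(theta3(q^2), theta2(q^2)) = x^8 + 14 x^4 y^4 + y^8
x8, y8 = power(theta3, 8), power(theta2, 8)
x4y4 = mul(power(theta3, 4), power(theta2, 4))
series = [p + 14 * r + s for p, r, s in zip(x8, x4y4, y8)]

# Matches the Eisenstein series E4 = 1 + 240 q + 2160 q^2 + 6720 q^3 + ...
assert [series[4 * k] for k in range(4)] == [1, 240, 2160, 6720]
```

The coefficient 240 of q confirms the count of E_8 roots quoted above.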
There is a close relation between codes and sphere packing. An optimal lattice sphere packing requires a lattice with a fundamental cell of unit volume and a shortest vector of maximum possible length. Codes of maximal Hamming distance for a given n naturally lead to such lattices. Thinking of codewords as points on the unit cube, such codes maximize the distance from the origin to all codewords. Via Construction A this should lead to a good lattice sphere packing, and indeed the E_8 lattice is the optimal sphere packing in eight dimensions [21]. The discussion above is intuitive but it has a serious flaw: all lattices obtained via Construction A include vectors of the form √2 a with arbitrary a ∈ Z^n. Thus, no matter how good a code might be, the corresponding lattice Λ(C) would necessarily have vectors of squared length ℓ^2 = 2. It so happens that ℓ^2 = 2 is the largest possible squared length of the shortest vector in eight dimensions, but this observation renders Construction A an unsuitable approach for finding good lattice sphere packings in higher dimensions. To design good lattice packings starting from a good code with n > 8, it is desirable to leave lattice vectors of the form c/√2, c ∈ C, intact, because they have sufficient length, but to remove short vectors of the form √2(±1, 0, . . . , 0), √2(0, ±1, . . . , 0), . . . . There are several different ways (constructions) to achieve that result. Here we focus on a construction of particular physical relevance.
Let us start with an even self-dual lattice Λ and consider a vector δ such that 2δ ∈ Λ. We demand that δ^2 be an integer. The lattice Λ can be represented as the disjoint union of two sets, Λ_0 = {v ∈ Λ | 2δ·v even} and Λ_1 = {v ∈ Λ | 2δ·v odd}. (2.36) Since the original lattice Λ is integral, 2δ·v is an integer and therefore Λ = Λ_0 ∪ Λ_1. It is easy to see that Λ_0 is closed under addition and therefore it is a lattice, while Λ_1 is not. We now shift all vectors in Λ_1 by δ, and define a new lattice via Λ' = Λ_0 ∪ (Λ_1 + δ). It is easy to check that Λ' is a lattice: the sum of two vectors in Λ' belongs to Λ'. Furthermore, if δ^2 is odd, the lattice is even and self-dual. This procedure can also be applied to odd self-dual lattices, yielding a new odd self-dual lattice, in which case the condition that δ^2 be odd is not necessary and is replaced by δ ∉ Λ. We will call the above construction of a new lattice Λ' a "twist" (by a half-lattice vector), following the nomenclature adopted in the context of 2d conformal theories [22]. The twist can be used to construct new lattices with longer shortest vectors than the original ones. Since any self-dual code includes the codeword 1, any Construction A lattice includes the vector 2δ = 1/√2. Choosing this δ removes the vectors √2(±1, 0, . . . , 0) from the lattice; they are instead replaced by (−3/2, 1/2, . . . , 1/2)/√2 and (5/2, 1/2, . . . , 1/2)/√2, which have squared length ℓ^2 = 1 + n/8. The "codeword" vectors c/√2, c ∈ C, still belong to the lattice provided w(c) is divisible by four. The theta function of the new lattice, obtained from the Construction A lattice by the twist with δ = 1/(2√2), can be calculated in full generality. Because of permutation symmetry, the contribution of all vectors associated with a given codeword depends only on w(c). There are several cases to consider: w(c) = 0, w(c)/2 odd, and w(c)/2 positive and even. We spare the reader the details and simply present the answer, Θ_{Λ'(C)} = (1/2)[Θ_{Λ(C)} + (ab)^{n/2} + (bc)^{n/2} − 2^{−n/2} W_C(1, i)(ac)^{n/2}]. If the code is doubly even, 2^{−n/2} W_C(1, i) = 1.
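The normalization 2^{−n/2} W_C(1, i) = 1 for doubly-even self-dual codes is easy to check directly from the known enumerator polynomials of e_8 and g_24 (a minimal sanity check of our own):

```python
# 2^{-n/2} W_C(1, i) = 1 for the doubly-even self-dual codes e8 (n = 8) and g24 (n = 24)

def W_e8(x, y):
    return x**8 + 14 * x**4 * y**4 + y**8

def W_g24(x, y):
    return (x**24 + 759 * x**16 * y**8 + 2576 * x**12 * y**12
            + 759 * x**8 * y**16 + y**24)

val_e8 = W_e8(1, 1j) / 2**4     # n = 8
val_g24 = W_g24(1, 1j) / 2**12  # n = 24
```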
Under the modular transformation τ → τ + 1, the functions a, b, c change as follows: a → i^{1/2} a, b ↔ c. Under τ → −1/τ they change as a → √(−iτ) c, c → √(−iτ) a, and b → √(−iτ) b. Therefore Θ_{Λ'(C)} always changes covariantly under τ → −1/τ, reflecting that the twist does not affect self-duality; however, modular invariance under τ → τ + 1 requires the code to be doubly-even and n/8 to be odd, to ensure that Λ(C) is even, δ^2 is odd, and therefore that Λ'(C) is even as well.
Example: twist of the E_8 lattice. The Construction A lattice of e_8 is invariant under the twist by δ = 1/(2√2), in the sense that the new lattice is isomorphic to the old one. As a consistency check one can verify, using the identity a^4 + c^4 = b^4, that the theta functions of the original and new lattices are equal: (ab)^4 + (bc)^4 − (ac)^4 = (a^8 + b^8 + c^8)/2 = Θ_{Λ(e8)}. Example: extended Golay [24, 12, 8] code and Leech lattice. The extended Golay [24, 12, 8] code is the unique self-dual n = 24, d = 8 code (up to equivalences). It is denoted g_24. In the canonical form (2.15) it is specified by the matrix B given in (B.1). It is a matter of a few minutes of computer algebra to verify that B B^T = I, confirming that the code is self-dual, and to evaluate its enumerator polynomial (2.23), W_{g24} = x^24 + 759 x^16 y^8 + 2576 x^12 y^12 + 759 x^8 y^16 + y^24. The explicit form of W_{g24} confirms that g_24 is doubly-even. The theta function of Λ(g_24) is given by Θ_{Λ(g24)} = E_4^3 − 672 η^24, which follows from the explicit form of W_{g24} and (2.35). Applying the twist δ = 1/(2√2) to Λ(g_24) produces the Leech lattice, with theta function Θ_Leech = (1/2)[Θ_{Λ(g24)} + (ab)^12 + (bc)^12 − (ac)^12]. (2.43) Its small q expansion reads Θ_Leech(q) = 1 + 196560 q^4 + O(q^6), (2.44) which indicates that the Leech lattice (famously) has no roots, i.e. no vectors with 0 < ℓ^2 ≤ 2, and its shortest vector has ℓ^2 = 4. The analog of Gleason's theorem for modular forms from above guarantees that Θ_Leech can be expressed as an "enumerator polynomial," i.e. a polynomial in x, y invariant under (2.21) and y → iy, W_Leech = x^24 − 3 x^20 y^4 + 771 x^16 y^8 + 2558 x^12 y^12 + 771 x^8 y^16 − 3 x^4 y^20 + y^24, (2.45) such that Θ_Leech = W_Leech(θ_3(q^2), θ_2(q^2)). We emphasize that while some coefficients of W_Leech(x, y) are negative, all coefficients in the q-expansion of W_Leech(θ_3(q^2), θ_2(q^2)) are positive integers.
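Eq. (2.43) can be verified directly on truncated series (our own computation, same quarter-unit encoding as before): the combination (Θ_{Λ(g24)} + (ab)^12 + (bc)^12 − (ac)^12)/2 starts as 1 + 196560 q^4, with no q^2 term, exactly as expected for the Leech lattice.

```python
CUT = 16  # keep terms up to q^4

def mul(s1, s2):
    out = {}
    for e1, c1 in s1.items():
        for e2, c2 in s2.items():
            if e1 + e2 <= CUT:
                out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c}

def power(s, k):
    out = {0: 1}
    for _ in range(k):
        out = mul(out, s)
    return out

def combine(terms):
    out = {}
    for coeff, s in terms:
        for e, c in s.items():
            out[e] = out.get(e, 0) + coeff * c
    return {e: c for e, c in out.items() if c}

def theta(kind, scale=1):
    out = {}
    for m in range(-10, 11):
        e = scale * ((2 * m + 1) ** 2 if kind == 2 else 4 * m ** 2)
        c = (-1) ** m if kind == 4 else 1
        if e <= CUT:
            out[e] = out.get(e, 0) + c
    return out

a, b, c = theta(2), theta(3), theta(4)
x, y = theta(3, 2), theta(2, 2)

theta_g24 = combine([(1, power(x, 24)),
                     (759, mul(power(x, 16), power(y, 8))),
                     (2576, mul(power(x, 12), power(y, 12))),
                     (759, mul(power(x, 8), power(y, 16))),
                     (1, power(y, 24))])

numerator = combine([(1, theta_g24),
                     (1, power(mul(a, b), 12)),
                     (1, power(mul(b, c), 12)),
                     (-1, power(mul(a, c), 12))])
theta_leech = {e: v // 2 for e, v in numerator.items()}
```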
To summarize, there is a close relation between codes and their enumerator polynomials, and between lattices and their theta functions. A natural question to ask is whether this relation is exclusive. The answer is no. Enumerator polynomials characterize codes, but not in a unique way: inequivalent codes may share the same polynomial. Accordingly, different non-isomorphic lattices can be isospectral, i.e. have the same theta series. There are also invariant polynomials which are not enumerator polynomials of any code. Likewise, there are self-dual lattices not related to any code via Construction A, and so on. To see how this works we discuss the most restrictive case of doubly-even self-dual codes for n = 8, 16, 24. For even but not doubly-even self-dual codes the situation is even more complex.
For n = 8, e_8 is the unique self-dual doubly-even code, and there is a unique invariant polynomial W_{e8}. There is also a unique even self-dual lattice in eight dimensions, the root lattice of E_8, which is related to e_8 via Construction A. The theta series of that lattice, E_4, is the unique modular form of weight 4. Thus for n = 8 the story is simple: there is a perfect correspondence between self-dual doubly-even codes, even self-dual lattices, and invariant polynomials. For n = 16, there is still a unique invariant polynomial, W_{e8}^2, but there are two inequivalent self-dual doubly-even codes, a decomposable code e_8 ⊕ e_8 and an indecomposable code d^+_16 [23,24]. Construction A applied to the latter yields the even self-dual lattice D^+_16 = D_16 ∪ (D_16 + s), s = (1/2, . . . , 1/2), where D^+_16 is the lattice of Spin(32)/Z_2. The Construction A lattice of the former code is E_8 ⊕ E_8, which is not isomorphic to D^+_16. Both codes have the same enumerator polynomial, and thus both lattices have the same theta function, the unique modular form of weight eight, E_4^2. We conclude that these two non-isomorphic lattices are isospectral, since their theta series coincide. This is Milnor's famous example of distinct compact spaces (the tori defined by these lattices) with equivalent Laplacian spectra. An excellent nontechnical discussion of this point can be found in J. Conway's book [25]. The isospectral lattices E_8 ⊕ E_8 and D^+_16 define two different heterotic string theories, related by duality [26-28]. The situation is even more nuanced for n = 24. In this case there are nine doubly-even self-dual codes, and overall 24 non-isomorphic even self-dual lattices. There is no simple way to assign each lattice to a particular code besides the nine obtained via Construction A. In our discussion, and historically, the Leech lattice is associated with the g_24 code, but other lattices are related to each other in a similar manner [17].
For n = 24, all invariant polynomials can be written as W_{e8}^3 + r (xy(x^4 − y^4))^4 with r an integer, −42 ≤ r ≤ 147, to ensure all coefficients are positive. (The Golay code g_24 corresponds to the smallest allowed value, r = −42.) Most of these polynomials are not enumerator polynomials of any code. We refer to such code-less invariant polynomials as "fake." The relations between codes and lattices can be extended to CFTs and their vertex operator algebras [3,4,29-33]. (We refer the reader interested in a quick summary of these relations to Table 1 of [3].) In particular, Euclidean even self-dual lattices can be used to define chiral CFTs, which play a prominent role in string theory and mathematical physics. The connection to codes provides a new angle to probe various aspects of 2d chiral theories. In particular, the fake enumerator polynomials mentioned above give rise to "would-be" CFT partition functions: modular invariant functions satisfying positivity conditions, at least some of which are not partition functions of any theory. Speaking colloquially, our paper extends the relations between codes, lattices, and CFTs to include quantum codes and non-chiral CFTs.
One of the central questions of code theory is to understand the maximum possible value of d/n for fixed k/n when n goes to infinity. Analogous questions can be asked about lattices and sphere packings. While the optimal value of d/n is not known, there are various upper and lower bounds. For self-dual codes (n = 2k) the Hamming bound (2.9) readily provides an upper bound, H(d/2n) ≤ 1/2, where H(x) = −x log_2(x) − (1 − x) log_2(1 − x) is the Shannon entropy. This bound is suboptimal. One can derive stronger bounds using linear programming techniques. For instance, for even self-dual codes the space of invariant polynomials is the linear space of all polynomials in W_{i2}, W_{e8} subject to linear constraints and inequalities. If we additionally require that the hypothetical enumerator polynomial describe a code of Hamming distance d, that imposes additional linear constraints ∂^k_y W(x, y)|_{y=0} = 0 for 1 ≤ k < d. If the corresponding discrete linear programming problem is infeasible, there is no such code and d must be reduced. For small n, but not in general, the bounds obtained this way are tight: the largest d for which the problem of finding invariant polynomials is feasible is also achievable as the Hamming distance of a code. Codes for which d saturates the linear programming bound are called extremal. The linear programming bounds are not constructive: they may yield invariant polynomials, but most of these are fake, and reconstructing a code from a polynomial is algorithmically hard. Nevertheless one can establish asymptotic bounds on d/n in this way, which for type I and type II self-dual codes read d/n ≲ 1/5 and d/n ≲ 1/6, respectively [34-36]. These bounds can be further improved [37,38] and it is expected that additional systematic improvements are possible. The linear programming bounds for codes are parallel to linear programming bounds on the length of the shortest vector of a unimodular lattice [35,36].
They can be thought of as simpler, more restricted versions of linear programming bounds on sphere packing [21,[39][40][41][42], which, remarkably, are related to modular bootstrap bounds [8][9][10].
Besides upper bounds, there is a lower bound on d/n known as the Gilbert-Varshamov bound, which is closely related to the Hamming bound (2.9). The idea is to fix d and n and put a bound on the number of codewords K. Since d is the minimal distance between any two codewords, the ball of radius d − 1 centered at a given codeword does not include any other codeword. We consider balls of radius d − 1 centered at all K codewords and ask if they cover the whole space. If they do not, K can be increased. Thus for a linear code we find the maximal d for which the following inequality is satisfied: Σ_{l=0}^{d−1} C(n, l) ≥ 2^{n−k}. (2.47) This bound can be improved by noticing that for even codes the sum over l in (2.9) should go only over even values. Furthermore, the full space of even codewords has volume 2^{n−1}. Similar improvements are possible also for doubly-even codes. In the considerations above we disregarded self-duality, but a generalization to self-dual codes is possible [43]. There is a conceptually different way to obtain a Gilbert-Varshamov bound, suitable for self-dual codes, which leads to essentially the same results. For even n, the enumerator polynomials averaged over all self-dual codes of type I and type II are given by [24,44,45] W_I(x, y) = [2^{n/2}(x^n + y^n) + (x + y)^n + (x − y)^n] / [2(2^{n/2−1} + 1)], (2.50) W_II(x, y) = [2^{n/2}(x^n + y^n) + (x + y)^n + (x − y)^n + (x + iy)^n + (x − iy)^n] / [4(2^{n/2−2} + 1)]. (2.51) So if the sum of the coefficients of x^{n−k} y^k, for 1 ≤ k < d, is smaller than one, then there is a code with Hamming distance d.
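The averaged enumerators can be evaluated exactly; the following sketch (our reading of eq. (2.51), with `avg_type2_enumerator` a name of our choosing) returns the coefficients of x^{n−k} y^k. For n = 8 the type II ensemble contains only e_8, so the average must reproduce W_{e8} = x^8 + 14 x^4 y^4 + y^8, and for n = 16 it reproduces the unique invariant polynomial W_{e8}^2.

```python
from fractions import Fraction
from math import comb

def avg_type2_enumerator(n):
    # coefficient of x^{n-k} y^k in
    # [2^{n/2}(x^n+y^n) + (x+y)^n + (x-y)^n + (x+iy)^n + (x-iy)^n] / [4(2^{n/2-2}+1)]
    denom = 4 * (2 ** (n // 2 - 2) + 1)
    coeffs = []
    for k in range(n + 1):
        re_ik = {0: 2, 1: 0, 2: -2, 3: 0}[k % 4]   # i^k + (-i)^k
        num = comb(n, k) * (1 + (-1) ** k + re_ik)
        if k in (0, n):
            num += 2 ** (n // 2)                   # the 2^{n/2}(x^n + y^n) term
        coeffs.append(Fraction(num, denom))
    return coeffs
```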
Asymptotically, the lower bound on d/n is given by the value of d for which the coefficient of x^{n−d} y^d becomes of order one when n → ∞, yielding d/n ≥ p* ≈ 0.11. In Section 5 we will interpret (2.51) as the averaged partition function of certain chiral CFTs. The value of the Gilbert-Varshamov bound p* ≈ 0.11 would then define the spectral gap of a random CFT from that class.
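The quoted value solves H(p*) = 1/2 for the binary Shannon entropy H; a short numerical sketch (our own) recovers it by bisection:

```python
from math import log2

def H(p):
    # binary Shannon entropy
    return -p * log2(p) - (1 - p) * log2(1 - p)

# H is monotonically increasing on (0, 1/2); bisect H(p) = 1/2
lo, hi = 1e-9, 0.5
for _ in range(100):
    mid = (lo + hi) / 2
    if H(mid) < 0.5:
        lo = mid
    else:
        hi = mid
p_star = (lo + hi) / 2
```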

Codes over GF(4)
Analogously to binary codes, one can define codes over any field F. We are specifically interested in codes over F = GF(4), the unique field with four elements, because of their relevance to quantum codes. The Galois field GF(4) consists of four elements 0, 1, ω, ω̄ subject to the relations x + x = 0, ω^2 = ω̄, ω ω̄ = 1, and 1 + ω + ω̄ = 0. There is a conjugation operation which leaves 0, 1 invariant and exchanges ω ↔ ω̄. With the exception of 2x = 0, all other relations are automatically satisfied if we take ω = e^{2πi/3} and ω̄ = e^{−2πi/3}. For example 1 + ω̄ = −ω = ω − 2ω → ω.
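A minimal computational model of this arithmetic (our own illustration, with an encoding of our choosing): elements 0, 1, ω, ω̄ are stored as integers 0, 1, 2, 3, addition is bitwise XOR (so x + x = 0 automatically), and multiplication follows from ω^2 = ω̄, ω ω̄ = 1.

```python
import itertools

W, WB = 2, 3  # our integer encoding of omega and omega-bar
MUL = {(0, 0): 0, (0, 1): 0, (0, 2): 0, (0, 3): 0,
       (1, 1): 1, (1, 2): 2, (1, 3): 3,
       (2, 2): 3, (2, 3): 1,
       (3, 3): 2}

def gf4_add(x, y):
    return x ^ y

def gf4_mul(x, y):
    return MUL[(x, y)] if (x, y) in MUL else MUL[(y, x)]

def gf4_conj(x):
    # conjugation fixes 0, 1 and exchanges w <-> wb
    return {0: 0, 1: 1, 2: 3, 3: 2}[x]

distributive = all(
    gf4_mul(u, gf4_add(v, t)) == gf4_add(gf4_mul(u, v), gf4_mul(u, t))
    for u, v, t in itertools.product(range(4), repeat=3))
conj_automorphism = all(
    gf4_conj(gf4_mul(u, v)) == gf4_mul(gf4_conj(u), gf4_conj(v))
    for u, v in itertools.product(range(4), repeat=2))
```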
To impose the condition 2x = 0 we first consider the triangular lattice in the complex plane, Γ_E = {a + bω | a, b ∈ Z}, ω = e^{2πi/3}. This is the root lattice A_2 rescaled by 1/√2, the lattice of the so-called Eisenstein integers. If we define a new lattice 2Γ_E by requiring that both a and b be even, then GF(4) = Γ_E/(2Γ_E). In contrast to the binary case, 2Γ_E is not dual to Γ_E, a fact which will have consequences later. Now we are ready to define codes over F = GF(4). A code C ⊆ F^n is called additive if C is closed under addition (it is a vector space over Z_2), meaning that the sum of two codewords is a codeword. Additive codes always include the trivial codeword consisting of n zeros. The Hamming weight w(c) for c ∈ F^n is the number of non-zero elements of c. As before, the Hamming distance of a code is the minimal Hamming weight of all nontrivial codewords, and a code with Hamming distance d and size K = 2^k is said to be of type [n, k, d].
A code is called linear if it is a vector space over F, which requires that for any codeword c ∈ C, the rescaled codeword ωc must also belong to C. All linear codes are additive, but not vice versa. For binary codes the two notions coincide, but not for other fields. The size of a linear code is always K = 4^k for some k ≤ n. An additive code with K = 2^k codewords can be specified by an n × k generator matrix G, with all codewords given by c = G x, x ∈ Z_2^k. For each additive code we can define a lattice in C^n = R^{2n} as the pre-image of the code under the map (Γ_E)^n → (Γ_E)^n/(2Γ_E)^n, or explicitly Λ(C) = {v ∈ (Γ_E)^n | v mod (2Γ_E)^n ∈ C}. This is the analog of Construction A for codes over GF(4).
To define duality on the space of codes, we need to introduce a scalar product. There are several natural choices. First, the so-called Euclidean scalar product of x, y ∈ F^n is (x, y) = Σ_i x_i y_i. There is also a Hermitian version, (x, y)_H = Σ_i x_i ȳ_i. These two versions are homogeneous and can be used to define duality on the space of linear codes. There is a third, physically relevant scalar product, the trace version of the Hermitian product, ⟨x, y⟩ = Σ_i (x_i ȳ_i + x̄_i y_i). (2.56) In all cases the algebra is over F. The dual code C^⊥ consists of all codewords orthogonal (with respect to a given inner product) to all codewords of C. The code is called self-orthogonal if C ⊂ C^⊥ and self-dual if C = C^⊥. Linear self-dual codes under the Euclidean ( , ) and Hermitian ( , )_H products form the code families known as 4^E and 4^H. Additive codes self-dual under the trace-Hermitian product (2.56) make up the family 4^{H+}. It is easy to see that 4^H ⊂ 4^{H+}. The Construction A lattice of an additive self-dual code from 4^{H+} is an integral lattice in R^{2n}. It is not self-dual because Γ_E is not self-dual: (Γ_E)* includes points outside of Γ_E, although 2Γ_E ⊂ (Γ_E)*. If the Hamming weight w(c) of all codewords c ∈ C is even, the code is called even. The Construction A lattice Λ(C) of an even code is even. Even self-dual codes are said to be of type II, and belong to the family 4^{H+}_II. Otherwise, if some of the weights w(c) are odd, the codes are referred to as odd, or type I, and are in the family 4^{H+}_I.
For codes over GF(4) one can introduce an enumerator polynomial exactly as in the binary case via (2.19), with the Hamming weight w(c) defined above. The MacWilliams identity relates the weight enumerators of the original and dual additive codes; it takes the same form for 4^E, 4^H, and 4^{H+} [44-46]: W_{C^⊥}(x, y) = |C|^{−1} W_C(x + 3y, x − y). Enumerator polynomials of self-dual codes are invariant under (x, y) → ((x + 3y)/2, (x − y)/2) (2.58) and therefore are in the polynomial ring P(W_1, W_2) generated by [47] W_1 = x + y, W_2 = x^2 + 3y^2. Here W_1 is easy to recognize as the enumerator polynomial of the simplest [1, 1, 1] self-dual additive code with only one non-trivial codeword c = (1), while W_2 is the enumerator polynomial of a linear "repetition" code over F with the generator matrix G^T = (1, 1). If the self-dual code is even, the enumerator polynomial is additionally invariant under y → −y, in which case it is a polynomial in W_2 and W_{h6} = x^6 + 45 x^2 y^4 + 18 y^6. (2.60) Here W_{h6} is the weight enumerator of the hexacode, introduced below. Additive codes over GF(4) are defined to be equivalent if they are related to each other by a permutation of their "letters" (components of the codewords), conjugation of some "letters" ω ↔ ω̄, and multiplication of some "letters" by ω or ω̄. The same operation should be applied to all codewords of the code. In terms of the corresponding lattice Λ(C) these are isomorphisms which permute the n copies of C inside R^{2n}, and within each plane permute 1, ω = e^{2πi/3}, ω̄ = e^{−2πi/3} in an arbitrary order. There are a total of (3!)^n n! elements in the equivalence group.
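The stated invariance is easy to spot-check numerically (our own sketch) for the generators W_1, W_2 and for the hexacode enumerator W_{h6}:

```python
# Invariance of self-dual GF(4) enumerators under (x, y) -> ((x+3y)/2, (x-y)/2)

def W1(x, y):
    return x + y

def W2(x, y):
    return x * x + 3 * y * y

def W_h6(x, y):
    return x**6 + 45 * x**2 * y**4 + 18 * y**6

def macwilliams(W, x, y):
    return W((x + 3 * y) / 2, (x - y) / 2)

points = [(1.0, 0.5), (0.3, -0.7), (2.0, 1.25), (1.0, 1.0)]
invariant = all(abs(W(x, y) - macwilliams(W, x, y)) < 1e-9
                for W in (W1, W2, W_h6) for x, y in points)
even_code = all(abs(W_h6(x, y) - W_h6(x, -y)) < 1e-12 for x, y in points)
```

Note also that W_h6(1, 1) = 64 = 2^6 counts the codewords of the hexacode.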
To calculate the theta series for Λ(C), it is sufficient to consider each C plane inside R^{2n} individually and sum either over the triangular lattice {2(a + bω) | a, b ∈ Z} or over the triangular lattice shifted by the corresponding nonzero component of the codeword. This yields two functions φ_0 and φ_1, in terms of which Θ_{Λ(C)} = W_C(φ_0, φ_1); the resulting theta function is invariant under τ → τ + 1 if and only if the code (and corresponding lattice) is even. The functions φ_0, φ_1 also change covariantly under the modular transformation τ → −1/τ. These transformations coincide with (2.58) after rescaling τ → τ/√3. Upon rescaling the argument to t = √3 τ, the theta function for a self-dual C would be covariant under t → −1/t, which reflects that the rescaled lattice Λ(C)/3^{1/4} is isodual, i.e. it is equal to its dual (Λ(C)/3^{1/4})* after a rotation by π/2 in each C plane. Alternatively, one can characterize Λ(C) of a self-dual code C as a 3-modular lattice [36].

Hexacode and Coxeter-Todd lattice
The hexacode h_6 is the unique linear even self-dual [6, 3, 4] code of type 4^H. As an additive self-dual code from 4^{H+} it would be denoted [6, 6, 4]. Its enumerator polynomial is given by (2.60). Since it is a linear code, all coefficients of W_{h6} except for the first one are divisible by three. The Construction A lattice Λ(h_6) ⊂ R^{12} is the Coxeter-Todd lattice K_12, an even lattice with no roots (vectors with 0 < ℓ^2 ≤ 2) in twelve dimensions. This follows from the theta series, whose expansion starts Θ_{K12}(q) = 1 + 756 q^4 + O(q^6). There are many other results concerning codes over GF(4), analogous to results about binary codes, including a series of linear programming bounds. We will present these bounds later in the text, after making the connection between classical codes over GF(4) and binary quantum stabilizer codes.

Quantum error-correcting codes
In this section we introduce quantum stabilizer codes and establish their relation to Lorentzian integer lattices. Subsection 3.1 is mostly pedagogical; it introduces quantum stabilizer codes and explains their relation to classical codes over GF (4). Only the very last part of this section, where we discuss real self-dual stabilizer codes and their refined enumerator polynomials, is original. Subsection 3.2 explains the relation of stabilizer codes to Lorentzian lattices, which is the central ingredient in our construction.

Quantum additive codes
Let us consider a system consisting of n quantum spins, or qubits. Initially the system is in some state ψ ∈ H. Because of unwanted interactions with the environment the system changes its quantum state ψ → ψ' in some unpredictable way. This is a quantum error. We would like to devise a protocol to return the system to its original state. That would be quantum error correction. Clearly this cannot be done in full generality, so we must restrict to quantum errors of a particular type. For a system consisting of n distinct physical qubits, one usually assumes a random interaction with the environment that affects at most t qubits at once. Furthermore, the correction of quantum errors is possible only for certain states, those belonging to a special code subspace ψ ∈ H_C ⊂ H.
Interactions with the environment can be described as linear operations acting on ψ. More accurately, one should speak of a quantum channel acting on a density matrix, but for simplicity we will assume the system always remains in a pure state. Operators describing interactions with the environment form a linear space. We can choose a basis E_i for this space, a basis of quantum errors. Crucially, to ensure reversibility of quantum errors due to an arbitrary linear combination of the E_i, it is necessary and sufficient that each E_i, restricted to H_C, be nondegenerate (reversible) and that the images of H_C do not overlap, E_i H_C ∩ E_j H_C = {0} for i ≠ j. (3.1) This is the Knill-Laflamme condition [48]. The reduction of all possible errors to a handful of linear operators E_i is called a discretization of quantum errors [49]. The Knill-Laflamme condition has a classical counterpart: correctable errors of a linear classical code must produce different results. Consider two codewords of a binary classical code c_1, c_2 ∈ C, and assume they are subject to errors, c'_1 = c_1 + e_i, c'_2 = c_2 + e_j. For both errors e_i, e_j to be correctable, c'_1 and c'_2 must always be distinct, which is the classical analog of (3.1). Indeed, if c'_1 = c'_2 (all algebra is mod 2), then e_i + e_j = c_1 + c_2 ∈ C. In this case the error e_i + e_j is annihilated by the parity check matrix, meaning that both e_i and e_j will yield the same error correction protocol, which will fail to undo at least one of the errors. The similarity with the quantum case comes from the linearity of quantum mechanics, i.e. the possibility to represent any quantum error as a linear combination of the E_i. There is one important exception when (3.1) does not have to apply: when two distinct errors act identically on H_C. In the classical case that would mean that the errors are the same, but in the quantum case the errors could act differently on H \ H_C. A code for which all correctable errors satisfy (3.1) is called non-degenerate.
The condition c'_1 ≠ c'_2 from Section 2.1 leads to the classical Hamming bound (2.9). There is a quantum version of the Hamming bound, which is as follows. The linear space of quantum errors which affect exactly l given qubits is spanned by 3^l tensor products of Pauli matrices (the identity being excluded, as we want all l qubits to be affected). Hence the total number of errors E_i, including the trivial one E_1 = I, affecting up to t qubits is V_q(t, n) = Σ_{l=0}^{t} C(n, l) 3^l. Assuming the code is nondegenerate, the images E_i H_C must not overlap. The total dimension of all images cannot exceed the dimension of the full Hilbert space, yielding dim(H_C) V_q(t, n) ≤ 2^n. (3.4) This is the quantum Hamming bound for codes correcting arbitrary quantum errors affecting up to t qubits [50]. A code saturating this bound is called perfect. Often the code subspace H_C will describe k logical qubits, in which case dim(H_C) = 2^k. From (3.4) it follows that to encode k = 1 logical qubit and to be able to recover the state after any quantum error affecting t = 1 physical qubit, one needs at least n = 5 physical qubits. For example, the 5-qubit protocol of Laflamme et al. [51] introduced below is a perfect quantum error-correcting code. The quantum Hamming distance d is the minimal number of physical qubits which need to be affected to map a state from H_C into another state in H_C. A quantum error-correcting code characterized by n, k, and d is denoted [[n, k, d]]. Such a code can correct any error affecting up to t = ⌊(d − 1)/2⌋ qubits.
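The counting in (3.4) takes two lines of code (our own sketch); it shows directly that n = 5 is the smallest length encoding one logical qubit against single-qubit errors, and that the [[5, 1, 3]] code saturates the bound:

```python
from math import comb

def vq(t, n):
    # number of linearly independent Pauli errors acting on at most t of n qubits
    return sum(comb(n, l) * 3 ** l for l in range(t + 1))

def satisfies_bound(n, k, t):
    # quantum Hamming bound: 2^k * V_q(t, n) <= 2^n
    return 2 ** k * vq(t, n) <= 2 ** n

smallest_n = min(n for n in range(1, 10) if satisfies_bound(n, 1, 1))
```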
To illustrate how quantum error-correcting codes work, we consider an oversimplified situation in which k logical qubits are implemented as k physical qubits, which are isolated in a lab as part of a perfect noiseless quantum computer. We additionally consider n − k auxiliary qubits located in a different lab in an imperfect environment. States of all k + (n − k) = n qubits can be represented in the conventional binary basis (0 = spin up, 1 = spin down), |a_1 . . . a_n⟩ = |a_1 . . . a_k⟩ ⊗ |a_{k+1} . . . a_n⟩ ∈ H. The auxiliary qubits will be initialized in the state |0^{n−k}⟩ and left intact, while the quantum computer performs unitary evolution of the first k "logical" qubits, producing the state ψ_l ⊗ |0^{n−k}⟩. (3.6) Because the auxiliary lab is imperfect, after some time the state will evolve to ψ_l ⊗ ψ_a, (3.7) where ψ_l is the desired result of unitary evolution produced by the quantum computer, while ψ_a is some unknown random state resulting from interactions with the environment. This example may appear unrealistic: we have physically isolated the logical qubits from the environment, the two systems are not entangled, and therefore the state of the auxiliary qubits does not matter. All measurements performed in the first lab will be insensitive to ψ_a, and our insistence on including the auxiliary qubits in our considerations is inconsequential. Nevertheless, it is instructive to ask whether one can devise a protocol to bring the corrupted state (3.7) back to the desired form (3.6). This is easy to do: one simply needs to re-initialize the auxiliary system. This can be done by first measuring the state of the auxiliary qubits in the computational up-down basis, which will project the total state onto ψ_l ⊗ |a_{k+1} . . . a_n⟩, and then applying the recovery operator R = 1 ⊗ |0^{n−k}⟩⟨a_{k+1} . . . a_n|. Measuring ψ_a in the computational basis, called syndrome measurement, and then applying R, is analogous to evaluating (2.4) in the classical case and using it to reconstruct the original codeword.
In our example, the code subspace H_C includes all states of the form ψ_l ⊗ |0^{n−k}⟩, and has dimension 2^k. It can be defined as the subspace invariant under the action of σ_z on any of the auxiliary spins, g_i = σ_z^{(k+i)}, i = 1, . . . , n − k. We additionally notice that the g_i are unitary, hermitian, traceless, and commute with each other: g_i^2 = I, [g_i, g_j] = 0. They form an abelian group, which acts trivially on H_C. The group generated by the g_i is called the stabilizer of H_C. Crucially, the generators g_i define the basis |a_{k+1} . . . a_n⟩ of the auxiliary factor as their mutual eigenbasis, with eigenvalues 1 − 2a_{k+i}. The code described above is degenerate: different nontrivial combinations of Pauli matrices acting on the n − k auxiliary qubits may act trivially on |0^{n−k}⟩. This can be seen differently. The dimension of H_C is 2^k while the total dimension is 2^n. Thus naively only 2^{n−k} errors are correctable, while in fact all V_q(n − k, n − k) = Σ_{l=0}^{n−k} C(n − k, l) 3^l = 4^{n−k} operators acting on the auxiliary qubits are correctable.
The discussion above applies to the trivial case when the logical qubits are isolated from the environment. Now we want to consider the situation when all n physical qubits are subject to noise, and we want to use them to encode k logical qubits with the possibility to recover from at least some errors. To that end we perform a unitary transformation U on H, and define a new stabilizer group via g_i → U g_i U†. Our code subspace is the image of ψ_l ⊗ |0^{n−k}⟩ under U. If U is nontrivial, all states in the code subspace will be highly entangled. To correct an error, we can perform projective measurements of the g_i, identify the corresponding eigenvalues λ_i = 1 − 2a_{k+i}, and then act on the projected state with the conjugated recovery operator U R U†. As a result of these operations we are guaranteed to obtain a state from the code subspace. We will discuss later which errors can be corrected in this way.
The class of quantum error-correcting codes known as additive or stabilizer codes exploits the idea outlined above with the following restriction. The generators of the stabilizer group g_i are chosen to be tensor products of Pauli operators and identity operators acting on individual spins, g = ε σ^{(1)} ⊗ · · · ⊗ σ^{(n)}. (3.13) Here σ_0 is the identity matrix, σ_{1,2,3} are the Pauli matrices, and ε = ±1 or ε = ±i to ensure g^2 = I. With this definition all properties are automatically satisfied except for commutativity. The form of (3.13) can be understood as a restriction on the unitary transformation U. It is customary to rewrite (3.13) in a slightly different form, using two binary vectors α, β of length n, g(α, β) = ε ⊗_{i=1}^{n} (σ_x)^{α_i} (σ_z)^{β_i}. (3.14) The coefficient ε = ±1 can be chosen at will. To describe k logical qubits we would need n − k generators of the stabilizer group, g_i = g(α_i, β_i). Commutativity of a pair of generators requires α_i · β_j − α_j · β_i = 0 mod 2. (3.15) Because we are working mod 2, the minus sign in front of the second term can be flipped. It is convenient to combine the vectors (α_i, β_i) into an (n − k) × 2n binary "parity check" matrix H = (α, β) and to introduce a 2n × 2n matrix g = ((0, I), (I, 0)). Then the commutativity condition (3.15) is H g H^T = 0. Multiplying H by an invertible binary matrix from the left would not change the stabilizer group, but only the choice of the generators g_i. All operations with H are to be understood mod 2.
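The symplectic criterion (3.15) can be verified against explicit matrix multiplication; the following brute-force sketch (our own, with overall phases dropping out of the test) checks all pairs of operators of the form (3.14) on n = 2 qubits:

```python
import itertools

X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    # Kronecker product of two matrices given as nested lists
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def g(alpha, beta):
    # tensor product of X^a Z^b factors (phase convention irrelevant here)
    out = [[1]]
    for ai, bi in zip(alpha, beta):
        factor = matmul(X, Z) if ai and bi else X if ai else Z if bi else I2
        out = kron(out, factor)
    return out

n = 2
agree = True
for v1, v2 in itertools.product(itertools.product([0, 1], repeat=2 * n), repeat=2):
    a1, b1, a2, b2 = v1[:n], v1[n:], v2[:n], v2[n:]
    symplectic = sum(p * q for p, q in zip(a1, b2)) + sum(p * q for p, q in zip(a2, b1))
    P, Q = g(a1, b1), g(a2, b2)
    agree = agree and ((matmul(P, Q) == matmul(Q, P)) == (symplectic % 2 == 0))
```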
The operators on H commuting with the full stabilizer group are operators acting on ψ l in our example above. These are "logical operations" -they change states from the code subspace into other states in the code subspace. Considering operators of the form (3.14), there are exactly 2k generators of such transformations corresponding to 2k linearly independent vectors (α, β). In the example above those would be operators σ z and σ x acting on individual logical qubits.
We introduce the binary "generator matrix" G as a matrix of maximal rank satisfying H g G = 0. Its transpose G^T will have n + k rows, n − k of which span the same space as the rows of H, while the remaining 2k rows are generators of logical operations on H_C. The similarity with classical codes is striking at this point. We can identify the rows of G^T with codewords c ∈ Z_2^{2n}. Assuming algebra over Z_2, acting on G from the right by any invertible (n + k) × (n + k) binary matrix would not change the code, exactly as in the classical case. That is why we can always assume that the first n − k rows of G^T coincide with H.
At this point we would like to introduce the quantum Hamming weight w(c) = w(α, β) as the number of qubits affected by g(α, β). For binary vectors (α, β) this can be written as w(α, β) = Σ_{i=1}^{n} max(α_i, β_i). (3.20) In the classical case we would define the Hamming distance of the code as the minimal weight of all 2^{n+k} − 1 nontrivial linear combinations of the rows of G^T, understood mod 2. In the quantum case the situation is more nuanced. The rows of H and their linear combinations, understood in the sense of (3.14), are elements of the stabilizer group. They do not introduce errors, as they do not affect states from the code subspace. Therefore the Hamming distance of a quantum stabilizer code is defined as the minimal weight of all linear combinations of the rows of G^T which are not linear combinations of the rows of H. A quantum stabilizer code of Hamming distance d is said to be of type [[n, k, d]]. It will protect against any quantum error affecting at most t = ⌊(d − 1)/2⌋ qubits. Indeed, for any two such errors E_i, E_j the product E_i E_j would affect strictly fewer than d qubits, and therefore either (3.1) will be satisfied or E_i and E_j will act identically on H_C. Since the stabilizer generators square to the identity, g_i^2 = I, summing the elements of the stabilizer group yields a projector onto H_C, P = Π_{i=1}^{n−k} (1 + g_i)/2. In practice, to find states from H_C in terms of the computational basis, it suffices to act by P on |0^n⟩. In the literature, stabilizer codes are often specified by writing down the stabilizer generators as products of Pauli matrices, denoted simply as X, Y, Z, and the identity I.
The [[5,1,3]] code was introduced by Laflamme, Miquel, Paz and Zurek [51]. We use an equivalent representation from [52], which specifies the code through four stabilizer generators. For the generator matrix G^T, we only write explicitly the two additional rows linearly independent from H. Linear combinations of the rows of G^T include many vectors of minimal Hamming weight 3, e.g.
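The commutativity of the stabilizer generators can be checked mechanically with the binary symplectic inner product. The sketch below uses the cyclic presentation XZZXI of the [[5,1,3]] code, which is one standard choice and may differ from the representation of [52] used in the text:

```python
def symplectic(c1, c2, n):
    """Binary symplectic inner product: g(a, b) and g(a', b') commute
    iff a.b' + a'.b = 0 (mod 2), where c = (a, b) has length 2n."""
    a1, b1 = c1[:n], c1[n:]
    a2, b2 = c2[:n], c2[n:]
    return (sum(x * y for x, y in zip(a1, b2))
            + sum(x * y for x, y in zip(a2, b1))) % 2

def cyclic_shift(v, k):
    """Cyclically shift a list to the right by k positions."""
    return v[-k:] + v[:-k] if k else v[:]

# One standard (cyclic) presentation of the [[5,1,3]] stabilizers:
# XZZXI and its first three cyclic shifts, encoded as (alpha | beta).
x0, z0 = [1, 0, 0, 1, 0], [0, 1, 1, 0, 0]   # XZZXI
gens = [cyclic_shift(x0, k) + cyclic_shift(z0, k) for k in range(4)]
```

All pairwise symplectic products vanish, confirming that the four generators mutually commute and generate an abelian stabilizer group.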
The code subspace is spanned by two vectors |0_l⟩ and |1_l⟩, with two algebraically independent logical operators X_l, Z_l represented by X⊗X⊗X⊗X⊗X and Z⊗Z⊗Z⊗Z⊗Z. In terms of the physical basis, |0_l⟩ takes the form (3.25). It can be checked that all four generators g_i leave |0_l⟩ invariant. The state |1_l⟩ can be obtained by flipping all spins in (3.25). Many other details, including the circuit representation of the recovery protocol, can be found in the pedagogical review [52].
The formulation of stabilizer codes given above was developed in [53] and is reviewed in the textbook by Nielsen and Chuang [49]. It suggests a close relation between quantum stabilizer codes and classical linear codes. This relation was further developed in a seminal paper by Calderbank, Rains, Shor and Sloane [47], who reformulated quantum binary stabilizer codes as classical additive self-orthogonal codes over GF(4). There is an isomorphism under addition between GF(4) and Z_2^2, called the Gray map. By combining the i-th components of α ∈ Z_2^n and β ∈ Z_2^n we can rewrite (α, β) as a vector with n components, c ∈ GF(4)^n. A straightforward check then confirms that addition and orthogonality are compatible on the two sides, where the first property is understood in terms of algebra over GF(2) = Z_2 while the second is over GF(4). Any vector c ∈ GF(4)^n is orthogonal to itself. Therefore stabilizer codes of the form (3.14), (3.16) are in one-to-one correspondence with self-orthogonal additive codes over GF(4). Self-dual codes over GF(4) are a special case. They correspond to stabilizer codes with k = 0, which means that the code subspace is one-dimensional. In this case the quantum state ψ_C ∈ H_C contains no information and one cannot speak of quantum error correction. Rather, k = 0 stabilizer codes should be interpreted as quantum error detection protocols: they can detect any error acting on up to d − 1 qubits, where d is the smallest quantum Hamming weight among all nontrivial linear combinations of the rows of H. In this case the classical and quantum Hamming distances coincide, and self-dual stabilizer [[n, 0, d]] codes are non-degenerate. They are in one-to-one correspondence with classical self-dual (n, 2^n, d) codes of type 4^H+. The equivalence group of classical codes over GF(4) can be understood quantum mechanically as the group of unitary transformations acting on individual qubits in such a way that the stabilizer generators remain of the form (3.14).
This group of transformations is called the Clifford group, or the local Clifford (LC) group. Its generators include permutations of qubits, cyclic permutations of the Pauli operators (multiplication by ω in GF(4) language), and the exchange of σ_x and σ_z generated by the Hadamard matrix (conjugation ω ↔ ω̄). Assuming that the physical qubits are subject to uncorrelated noise, these are natural symmetries, and they define the group of equivalences of stabilizer codes.
The connection to classical codes over GF(4) enables many results developed for classical codes to be applied to the quantum case, and vice versa. In what follows we mostly focus on self-dual stabilizer codes, or equivalently on self-dual classical codes of type 4^H+ over GF(4). In addition to the (total) Hamming weight w introduced in Section 2.2, we can introduce weights that count the number of individual "letters" in the codeword. Instead of the GF(4) "alphabet" 1, ω, ω̄ we use the labels of the Pauli matrices σ_{x,y,z} of the corresponding stabilizer element g(α, β). Using the Gray map c = (α, β) (3.26) we can write down explicit formulas for the weights w_x, w_y, w_z, with w(c) = w_x(c) + w_y(c) + w_z(c). Then, in addition to the enumerator polynomial (2.19), we can define the refined enumerator polynomial (REP). One can also define the full enumerator polynomial W_C(t, x, y, z) = Σ_{c∈C} t^{n−w(c)} x^{w_x(c)} y^{w_y(c)} z^{w_z(c)}, but it will not play an important role in what follows. Under a duality transformation, the refined enumerator polynomial of an [n, k, d] code changes as in (3.31). Thus for self-dual codes the refined enumerator polynomial is invariant under (3.32). Setting z = y reduces (3.31) and (3.32) to (2.57) and (2.58). Enumerator polynomials and the MacWilliams identity (3.31) can also be defined at the level of quantum stabilizer codes without any reference to codes over GF(4) [54-56]. Focusing on self-dual codes of type 4^H+, their total number is [24, 47]
∏_{j=1}^{n} (2^j + 1). (3.33)
In Section 2.2 we introduced even codes 4^H+_II, those whose codewords all have even Hamming weight. They exist only when n is even, and their total number is
∏_{j=0}^{n−1} (2^j + 1), n ≡ 0 (mod 2). (3.34)
Real codes make up another class of codes, one which is central to our considerations. A stabilizer code is called real if all elements g(c), c ∈ C, of the stabilizer group are real.
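The letter weights and the counting formula (3.33) are easy to implement. The sketch below is ours; the function names are illustrative:

```python
def refined_weights(alpha, beta):
    """Letter weights of g(alpha, beta) under the Gray map:
    position i counts toward w_x if (alpha_i, beta_i) = (1, 0),
    toward w_z if (0, 1), and toward w_y if (1, 1)."""
    wx = sum(1 for a, b in zip(alpha, beta) if (a, b) == (1, 0))
    wy = sum(1 for a, b in zip(alpha, beta) if (a, b) == (1, 1))
    wz = sum(1 for a, b in zip(alpha, beta) if (a, b) == (0, 1))
    return wx, wy, wz

def num_selfdual(n):
    """Total number of self-dual additive GF(4) codes of length n,
    prod_{j=1}^{n} (2^j + 1), cf. (3.33)."""
    total = 1
    for j in range(1, n + 1):
        total *= 2 ** j + 1
    return total
```

For instance, the element XYZ has (w_x, w_y, w_z) = (1, 1, 1) and total weight 3, while for n = 2 there are (2 + 1)(4 + 1) = 15 self-dual codes.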
This nomenclature was introduced in the context of quantum codes in [56], where it was shown that any stabilizer code has an equivalent real code. Equivalence here is defined with respect to the Clifford group introduced above. This result is crucial, because it shows that, modulo equivalences, real codes encompass all codes. Since the Pauli matrices σ_x, σ_z are real and σ_y is purely imaginary, a code is real if and only if w_y(c) is even for all codewords. We denote the space of real self-dual codes over GF(4) by 4^H+_R. Their total number is
∏_{j=0}^{n−1} (2^j + 1).
While this formula is the same as (3.34), the spaces 4^H+_II and 4^H+_R are not isomorphic. The former is defined only for even n, while the latter exists for any n. Refined enumerator polynomials of real self-dual codes, besides being invariant under (3.32), must also be invariant under y → −y. They are polynomials in the ring P(W_1, W_2, W_3) generated by three particular polynomials W_1, W_2, W_3, which satisfy W(1, 0, 0) = 1 and have positive integer coefficients. The polynomials W_1, W_2, W_3 are the refined enumerator polynomials of three particular codes introduced below in Section 6. In practice, instead of W_3 it is convenient to use R = (x − z)(y^2 − z^2). The rings of invariant refined enumerator polynomials for 4^H+ and 4^H+_II can similarly be described explicitly.

New Construction A: Lorentzian lattices
The connection to classical codes over GF(4) provides a way to associate with a stabilizer [[n, k, d]] code an integral Euclidean lattice in R^{2n}, as described in Section 2.2. This lattice is not in general self-dual, even when the underlying code is self-dual, and for this reason it is not connected in any obvious way to a CFT. To obtain a self-dual lattice from a self-dual code over GF(4), we introduce a new version of Construction A for 4^H+ codes.
Starting from a code of type 4^H+ and rewriting its codewords as vectors c = (α, β) ∈ C ⊂ Z_2^{2n} using the Gray map, we define a corresponding lattice using Construction A for binary codes,
Λ(C) = {v/√2 | v ∈ Z^{2n}, v ≡ c (mod 2) for some c ∈ C}.
The lattice Λ(C) should be understood as a lattice in Lorentzian space R^{n,n} with the metric (3.17). The following crucial results then follow. The lattice of the dual code is the dual lattice, Λ(C^⊥) = Λ(C)^*, and there is a one-to-one correspondence between lattices Λ(C) ⊂ (Z/√2)^{2n} ⊂ R^{n,n} and codes C of type 4^H+. Stabilizer codes are self-orthogonal codes of type 4^H+ and therefore correspond to integral lattices. This correspondence provides a geometric way to interpret various aspects of stabilizer codes. Suppose C is an abelian stabilizer group generated by the rows of (3.16) (or, equivalently, C is a self-orthogonal code from 4^H+). The corresponding lattice Λ(C) ⊂ (Z/√2)^{2n} ⊂ R^{n,n} is then integral. The number of logical qubits k is equal to half the number of generators of the abelian group Λ(C)^*/Λ(C) (i.e. the number of dimensions of the torus Λ(C)^*/Λ(C)). We define the quantum Hamming norm on this quotient, where the norm on the quotient is understood to be the minimal value of the norm over all elements of the preimage. A particular class of stabilizer codes, the Calderbank-Shor-Steane (CSS) codes, can easily be understood geometrically. In this case the Lorentzian lattice Λ(C) is the direct sum of two Euclidean lattices, Λ(C) = Λ_2 ⊕ (Λ_1)^*, which additionally satisfy Λ_2 ⊂ Λ_1 ⊂ (Z/√2)^n, such that Λ_{1,2} are the Construction A lattices of some classical binary codes [n, k_{1,2}, d_{1,2}]. The number of logical qubits k = k_1 − k_2 is equal to the dimension of the torus Λ_1^*/Λ_2, and the quantum Hamming distance of the CSS stabilizer code is the smallest length of any nontrivial vector in Λ_1/Λ_2 or (Λ_2)^*/(Λ_1)^*, calculated with the Euclidean metric.
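The evenness criterion for Λ(C) can be checked directly by brute force. In the off-diagonal metric a vector v = (c + 2m)/√2 has norm |v|^2 = α·β + 2·(integer), so the lattice is even precisely when α·β is even for every codeword, i.e. when the code is real. A minimal numerical sketch (our function names, small shifts m only):

```python
from itertools import product

def construction_a_even(code, n, m_range=(-1, 0, 1)):
    """Numerically check that every Construction A vector (c + 2m)/sqrt(2),
    for codewords c and small integer shifts m, has even Lorentzian norm.
    In the off-diagonal metric g = [[0, I], [I, 0]], the norm of the
    integer vector w = c + 2m is |w/sqrt(2)|^2 = sum_i w_i w_{n+i}."""
    for c in code:
        for m in product(m_range, repeat=2 * n):
            w = [ci + 2 * mi for ci, mi in zip(c, m)]
            if sum(w[i] * w[n + i] for i in range(n)) % 2:
                return False
    return True
```

The n = 1 code {00, 10} (stabilizer generated by X, a real code) passes the check, while {00, 11} (generated by Y, with odd w_y) fails, in line with the reality criterion.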
It is in any case not smaller. The Steane [[7,1,3]] quantum stabilizer code is a CSS code with C_1 the Hamming [7,4,3] code and C_2 = C_1^⊥ its dual, the [7,3,4] code. In this case k_1 = 4, k_2 = 3, d = 3, and the quantum code protects k = 1 logical qubit against arbitrary one-qubit errors. Geometrically, Λ_1/Λ_2 = (Λ_2)^*/(Λ_1)^* is the quotient of the weight lattice of E_7 by the root lattice of E_7. As a group, the quotient is Z_2, which corresponds to the Pauli operators σ_{x,z} acting on the single logical qubit.
A Lorentzian lattice Λ ⊂ R^{n,n} with metric |v|^2 = p_L^2 − p_R^2 for v = (p_L, p_R) ∈ R^{n,n} can be characterized by its Siegel theta-function, which is a holomorphic function of two independent complex variables q, q̄, or, equivalently, τ, τ̄. The implicit assumption in (3.37), and in the rest of this paper, is that the Lorentzian lattice is simultaneously equipped with a Euclidean metric. In (3.37) we assumed a diagonal Lorentzian metric, while previously it was given by (3.17). The two metrics are related by the transformation (3.38), which preserves the Euclidean metric on v = (α, β) = (k_L, k_R); we use different letters (Greek or Latin) and the presence or absence of a vector arrow to indicate the form of the metric tensor. When the lattice is even, the theta-function is trivially invariant under the simultaneous shift τ → τ + 1, τ̄ → τ̄ + 1. When the lattice is also self-dual (unimodular), the theta-function is covariant under τ → −1/τ, τ̄ → −1/τ̄, transforming as in (3.39). Thus, the Siegel theta-function of an even self-dual Lorentzian lattice transforms covariantly under the full PSL(2, Z) group acting simultaneously on τ and τ̄. Similarly to the Euclidean lattices associated with binary and GF(4) codes discussed in Sections 2.1 and 2.2, the Siegel theta-function of the lattice Λ(C) associated with a stabilizer code C is determined by its refined enumerator polynomial, as in (3.40). The theta-functions a, b, c are defined in the text after (2.33). The invariance of the refined enumerator polynomial under (3.32) then guarantees the modular properties described above. Comparing Construction A from Section 2.2 with the new Lorentzian Construction A defined above, we find that a classical self-orthogonal (self-dual) code C over GF(4) can be associated with both a Euclidean integral (integral) lattice and a Lorentzian integral (self-dual) lattice. These Euclidean and Lorentzian lattices are related to each other.
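The change of basis relating the two metrics can be made concrete. A standard choice (the precise signs and normalization in (3.38) may differ) is p_L = (α + β)/√2, p_R = (α − β)/√2, which preserves the Euclidean norm and maps the off-diagonal Lorentzian norm 2 α·β to p_L^2 − p_R^2:

```python
import math

def to_lightcone(alpha, beta):
    """Map (alpha, beta) coordinates (off-diagonal metric) to (pL, pR)
    (diagonal metric):  pL = (alpha + beta)/sqrt(2),
                        pR = (alpha - beta)/sqrt(2).
    This is an orthogonal rotation: it preserves the Euclidean norm
    and sends 2*alpha.beta to pL^2 - pR^2."""
    pL = [(a + b) / math.sqrt(2) for a, b in zip(alpha, beta)]
    pR = [(a - b) / math.sqrt(2) for a, b in zip(alpha, beta)]
    return pL, pR
```

A quick numerical check confirms both invariants for a sample vector.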
In terms of the Euclidean vector v = (α, β) = (x, y), the GF(4) Hamming weight, which is the same as the quantum Hamming weight, is naturally associated with the Euclidean metric x^2 + 3y^2. Therefore the Siegel theta-function becomes the theta-function of the Euclidean lattice (2.55) upon the substitution τ̄ = −3τ, or q̄ = q^3. Indeed, it is straightforward to check using Jacobi theta-function identities that the two theta-functions agree in this case, where φ_{0,1} were given in (2.62). We postpone giving explicit examples until Sections 6.2 and 6.6. Besides the Euclidean metric x^2 + 3y^2 associated with the Hamming weight of a GF(4) code, the conventional Euclidean metric x^2 + y^2 may also be considered. It is associated with the binary Hamming distance d_b(c) = w_x(c) + 2w_y(c) + w_z(c), with respect to which the original self-dual GF(4) code C, via the Gray map, is interpreted as an isodual binary code. Given that the generator matrix of the Lorentzian lattice Λ(C) satisfies the isoduality condition, and that the metric g (3.17) can be interpreted as an orthogonal rotation of R^{2n}, we immediately conclude that Λ(C), understood as a Euclidean lattice with metric x^2 + y^2, is isodual. The theta-function of this Euclidean lattice follows from Θ_C(τ, τ̄) upon the substitution τ̄ = −τ. Looking ahead, the theta-function Θ_C(τ, −τ) will turn out to count dimensions of the CFT operators. In full analogy with the case of classical binary codes, for which we defined a lattice and sought to maximize the norm of its shortest vector, in the quantum case we also want to maximize the Euclidean norm of the shortest nontrivial vector of Λ(C), which defines the spectral gap of the theory. The same problem we encountered in the case of binary codes also appears here: Construction A Lorentzian lattices Λ(C) necessarily include vectors of the form (α, β) = (2, 0^{2n−1})/√2, resulting in operators of conformal dimension ∆ = (p_L^2 + p_R^2)/2 = 1.
To partially resolve this problem we employ the procedure of twisting by a half-vector, which yields a new even self-dual Lorentzian lattice Λ̃ starting from the original lattice Λ, given a vector δ such that 2δ ∈ Λ and the norm δ^2 is odd. The procedure is identical to the one described in Section 2.1, the only change being that the scalar product is now defined by the Lorentzian metric.

Narain CFTs
We start with a brief review of toroidal compactifications of string theory and the moduli space of Narain CFTs. This topic is discussed in many textbooks, including [57, 58]. As a warm-up we consider a particle of unit mass on a circle of radius R. The classical EOM is easily solved, x(t) = x(0) + p t, where p = ẋ is the momentum. At the quantum-mechanical level, the wavefunction ψ(x) must be periodic, ψ(x) = ψ(x + 2πR), which results in the momentum operator p̂ being quantized, p = n/R, n ∈ Z, where p now stands for an eigenvalue of p̂. The solution for x(t) should now be understood as the solution of the EOM in the Heisenberg picture, x̂(t) = x̂(0) + p̂(0) t, with the caveat that the operator x̂ is not well-defined.
Since the points x and x + 2πR are equivalent, the algebra of operators consists of p̂(0), with eigenvalues n/R, and the operators e^{ik x̂(0)} for k = m/R, m ∈ Z. Next let us consider a two-dimensional classical worldsheet theory describing the motion of n bosons X^I on a torus R^n/(2πΓ), where Γ ⊂ R^n is a lattice and 2πΓ stands for that lattice rescaled by 2π. As in the previous example, X(t, σ) and X(t, σ) + 2π e must be physically equivalent for any e ∈ Γ. The worldsheet spatial variable σ is periodic, and therefore X(t, σ + 2π) = X(t, σ) + 2π e. The antisymmetric B-field enters neither the EOM nor its solution, but it affects the relation between the center-of-mass velocity and the total momentum. Going back to (4.3), we can represent X(t, σ) as a sum of left- and right-moving components, with oscillator modes (a_n/n) e^{−in(t+σ)} (4.6). At the classical level, the vector α′(p_L − p_R)/2 = e ∈ Γ, while α′(p_L + p_R)/2 = V. Quantum mechanically, since X(t, σ) is only defined up to an arbitrary shift by 2π e ∈ Γ, the total momentum (4.4) must be a vector from the dual lattice, P ∈ Γ^*, such that V = α′ P + B e. The set of vectors v = (p_L, p_R), for all possible e ∈ Γ and P ∈ Γ^*, forms a lattice Λ in R^{n,n}. To render p_L, p_R dimensionless, we use the conventional choice α′ = 2, in which case Λ becomes even and self-dual. To verify the first property we calculate |v|^2 = p_L^2 − p_R^2 = e·(2P + B e) = 2 e·P, an even integer (the B-term drops out by antisymmetry). To verify self-duality it is convenient to perform a linear change of variables and represent v in the coordinates (4.10). In (α, β)-coordinates the metric is given by (3.17). In this representation, and taking α′ = 2, the generator matrix of Λ is given in terms of γ and γ^* = (γ^{−1})^T, the generator matrices of Γ and Γ^* respectively. It is then straightforward to check that the Gram matrix is integral with unit determinant, and therefore Λ is self-dual.
The primary vertex operators of the U(1)^d × U(1)^d CFT are indexed by elements of the lattice Λ. The partition function of this theory on the Euclidean worldsheet torus τ is (4.14). If Λ is an even self-dual lattice, Z(τ, τ̄) is modular invariant and the CFT is well-defined. We emphasize that in contrast to (3.37), where τ and τ̄ were two independent holomorphic variables, in (4.14) τ and τ̄ are related by complex conjugation.

Example: twist of a compact boson on a circle
The simplest example of a theory of the type described above is a single boson X compactified on a circle of radius R. The lattice Γ consists of the points e = mR ∈ R^1 for integer m, and the vectors of the dual lattice are P = n/R ∈ R^1. Since there is only one boson X, the B-field is trivial. We immediately find that the set of vectors v = (p_L, p_R), for all possible n, m ∈ Z, defines an even self-dual lattice Λ in R^{1,1}. It is much simpler to represent Λ in the coordinates (4.10). We would like to apply to this lattice the twist procedure (2.39). We start with a vector δ = (2a/R, bR)/(2√2) with some integers a, b, which automatically satisfies 2δ ∈ Λ. For δ^2 = ab/2 to be odd, we must require ab/2 mod 2 = 1. Therefore either a = 2(2k + 1) is even but not doubly-even while b = 2l + 1 is odd, k, l ∈ Z, or the other way around. In the first scenario we represent Λ = {v(n, m) | n, m ∈ Z} as the union of two components Λ_0 and Λ_1, where 2δ · v(2n, m) is even and 2δ · v(2n + 1, m + 1) is odd. The disjoint union of Λ_0 and Λ_1 + δ then gives the new lattice Λ̃. In other words, Λ̃ is the lattice of a single boson compactified on a circle of radius R̃ = R/2. In the second scenario Λ̃ is given by all vectors of the form v(n/2, 2m), n, m ∈ Z, i.e. R̃ = 2R. Thus, starting from some radius R and applying the twist procedure repeatedly, we arrive at the lattice of a boson compactified on a circle of radius R̃ = 2^k R for any integer k.
In the example above, it was obvious that starting from a toroidal CFT (4.14) and twisting by a vector δ yields another CFT of the same type. In fact, this construction can be applied to general CFTs of this type, and can be extended to CFTs with fermionic degrees of freedom as well [22,59].
The CFT partition function (4.14) is manifestly invariant under orthogonal transformations O(d) acting independently on p_L ∈ R^d and p_R ∈ R^d. These form a group O(d)_L × O(d)_R of symmetries of the CFT, which is a subgroup of the T-duality group. Thus, from the CFT point of view, two even self-dual lattices Λ and Λ′ related by an O(d) × O(d) transformation are equivalent.
As we will see shortly, any even self-dual lattice Λ ⊂ R^{d,d} defines a CFT, with the partition function given by (4.14). CFTs of this kind are called Narain CFTs. A central mathematical result, which provides a description of the moduli space of all Narain theories, is that all even self-dual lattices Λ ⊂ R^{d,d} are related to each other by boost transformations in O(d, d). At the level of the generator matrix, working in coordinates (4.10) such that the metric is given by (3.17), λ = O I, where O ∈ O(d, d) and the identity matrix I is the generator matrix of a particular even self-dual lattice in R^{d,d}. The lattice generated by I has an obvious symmetry: any element of O(d, d, Z) maps it into itself. Therefore the full Narain moduli space is given by
O(d, d) / (O(d) × O(d) × O(d, d, Z)),
where the denominator represents the group of T-dualities, symmetries of the two-dimensional CFT. The first two factors in the denominator act from the left and relate physically equivalent lattices to each other. The last factor acts from the right. It is a symmetry of a particular lattice, which maps different lattice points into each other. Thus, CFTs with momentum lattices of the form (4.8) cover all possible Narain CFTs. In other words, any Lorentzian even self-dual lattice with generator matrix (4.21) can be brought into the form (4.8) by means of an appropriate O(d) × O(d) transformation. We demonstrate this explicitly in Appendix C.

Code CFTs
In Section 3.2 we established that real self-dual codes C are in one-to-one correspondence with even self-dual lattices (√2 Z)^{2n} ⊂ Λ(C) ⊂ R^{n,n}. In the previous section we saw that any even self-dual lattice in R^{n,n} defines a Narain CFT. We therefore arrive at the main point of this paper: real self-dual quantum stabilizer codes (or alternatively classical self-dual codes of type 4^H+_R) define a family of Narain CFTs, which we will call code theories. The partition function of a code theory is given by the Siegel theta-function Θ_C of Λ(C) divided by |η(τ)|^{2n}, where Θ_C is given in terms of the refined enumerator polynomial of C via (3.40). Code equivalences which correspond to T-duality transformations of the underlying CFT will, in what follows, simply be referred to as T-equivalences. It should be immediately noted that the full equivalence group also includes the cyclic permutations σ_x^i → σ_y^i → σ_z^i → σ_x^i, which are not T-equivalences. Therefore, two code CFTs associated with equivalent codes are not necessarily equivalent as CFTs and may have different physical properties. We will see many such examples below.
It is important to ask whether any other T-duality transformations from O(n) × O(n), besides those mentioned above, can map a code theory into a theory based on an inequivalent code. We show in Appendix D that this is not the case, and therefore any pair of T-dual code theories is also equivalent in the code sense.
In the c = (α, β) representation of codewords, T-equivalences are generated by simultaneous permutations of the components of α and β and by exchanges of the i-th component of α with the i-th component of β.
Using T-equivalences we can bring the code generator matrix (3.19) to the following simple form. (We also need to perform linear operations mod 2 on the codewords; these change the generator matrix but not the code or the lattice.) First, by using linear operations and permutations, we can bring the n × n matrix formed by the α_i to the form (4.24). The remaining rows (4.25) are orthogonal to (α_i, β_i) with α_i given by (4.24) and arbitrary β_i, and therefore they belong to this code. In other words, by an appropriate linear transformation in the algebra mod 2, the last n − m rows of G^T can be brought to the form (4.25). After exchanging the last n − m components of α_i with those of β_i, and using the last n − m rows to eliminate the last n − m components of the first m rows, we finally transform (4.24) into an identity matrix, yielding a generator matrix of the form G^T = (I | B). This is the "canonical" form of the generator matrix, analogous to (2.15). For a real self-dual code the binary matrix B is symmetric with zeros on the diagonal; otherwise it is arbitrary. For notational convenience, we prefer to exchange all components of α and β, bringing the generator matrix to the form (4.26). We will call codes whose generator matrix is of the form (4.26) B-form codes. There are 2^{n(n−1)/2} distinct B-form codes in total, and any real code has at least one T-equivalent B-form code.
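A B-form code is easy to enumerate explicitly. In the sketch below we take the convention c = (α, β) = (a, B a mod 2) for a ∈ Z_2^n (the ordering of the two blocks in (4.26) may differ from this); the self-orthogonality and reality checks are then one-liners:

```python
from itertools import product

def bform_code(B):
    """Codewords (alpha, beta) = (a, B a mod 2) of a B-form (graph) code,
    for all a in Z_2^n; B symmetric with zero diagonal."""
    n = len(B)
    words = []
    for a in product((0, 1), repeat=n):
        beta = tuple(sum(B[i][j] * a[j] for j in range(n)) % 2
                     for i in range(n))
        words.append(a + beta)
    return words

def symplectic(c1, c2, n):
    """Stabilizers commute iff alpha.beta' + alpha'.beta = 0 (mod 2)."""
    return sum(c1[i] * c2[n + i] + c2[i] * c1[n + i] for i in range(n)) % 2
```

For B the adjacency matrix of a triangle, all 8 codewords are mutually symplectic-orthogonal (self-duality) and all have even w_y (reality), as expected for a symmetric B with zero diagonal.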
Since any self-dual code is equivalent to a real code, and the generator matrix of any real code can be brought to the form (4.26) using additional equivalence transformations, we conclude that any code of type 4 H+ is equivalent to a B-form code. This result has been established in a different way in [60].
The generator matrix of Λ(C) associated with the B-form code (4.26) is given by (4.28), where now B_ij ∈ {0, ±1}, reducing mod 2 to the binary matrix of (4.26). There is an ambiguity in choosing the signs of B_ij, but any choice results in the same lattice. Let us choose B_ij to be antisymmetric, reducing the sign ambiguity to the simultaneous flips B_ij → −B_ij, B_ji → −B_ji. All such generator matrices are related by (4.30), where X_ij ∈ {0, ±1}, X = −X^T. All generator matrices λ related in this way generate the same lattice.
Comparing (4.28) with (4.11), we find that the code theories are toroidal compactifications on the cube of "unit" size 2π with quantized B-field flux, as well as their T-duals. Different B-fields corresponding to the same B are related by T-duality transformations in O(n, n, Z) (from the denominator of (4.22)) which preserve the lattice.
The T-duality transformations which map a code to another code, permutations O_p and sign flips O_i, are the following elements of O(n, n, Z): (4.31), where 1_ii is a diagonal matrix with all elements equal to zero except for the ii-th element, which is 1. Finally, the generator matrix (4.28) can be obtained from the matrix with B = 0 by a transformation from O(n, n, Z). The coset description of the real self-dual codes (4.36) is the analog for code theories of the full Narain moduli space (4.22). There is a similar coset construction for all self-dual codes, i.e. including non-real codes (odd self-dual lattices), given by (4.37). Here the subgroup O_2 includes all binary orthogonal matrices with zero B. Equivalence classes (4.36) are mapped into (4.37) by reducing mod 2. We illustrate the coset construction in the case of n = 1 and n = 2 codes in Sections 6.1 and 6.2.
Most T-duality transformations (we only discuss those which map code theories to code theories) do not preserve the B-form of Λ (4.28), but there is a particular set of transformations which does. It includes permutations of B and the "genuine" T-duality transformations (4.40), where all algebra is mod 2 and D is the diagonal matrix with ones in the diagonal entries associated with b_{11} in (4.39) and zeros elsewhere. We note that the composition of two transformations (4.40), parameterized by D_1 and D_2, is again a transformation of the form (4.40) with D = D_1 + D_2, consistent with the consecutive action of O_i (4.31) with different i.

Consistency check
The T-duality transformation (4.39) of a code theory does not change the CFT partition function, and therefore it should leave the refined enumerator polynomial invariant. For the code C associated with B, the refined enumerator polynomial can be written as (4.41), where the sum goes over all values of the binary variables α_i ∈ {0, 1} and we have introduced auxiliary binary variables β_i via β = B α. The binary symmetric matrix B with zeros on the diagonal can be interpreted as the adjacency matrix of a graph on n nodes. In this way all B-form codes (and code theories) correspond uniquely to graphs. Exchanging all σ_z and σ_x maps B-form codes into canonical form, and the stabilizer generators in this case are g_i = σ_x^{(i)} ∏_j (σ_z^{(j)})^{B_ij}. Stabilizer codes with generators of the form (4.43) are called graph codes [61, 62]. That any stabilizer code can be brought, by means of equivalence transformations (unitary transformations from the Clifford group), to the canonical form with stabilizer generators (4.43) is in a nutshell the statement that any stabilizer code is equivalent to a graph code [63]. We should also mention that non-self-dual codes, i.e. [[n, k, d]] codes with k > 0, can also be represented as graphs with labeled nodes [61]. Returning to self-dual codes, the one-dimensional code subspace H_C, defined as the state ψ_C invariant under the action of the g_i, i.e. g_i ψ_C = ψ_C for all i (see (3.22)), is the so-called graph state [64, 65]. Many aspects of code theory, including the action of the equivalence group (Clifford transformations), have been discussed in the literature in the context of graph states [60, 66]. An alternative language, also used in the literature, is that of boolean functions f(α_i) [67]. In terms of graphs, the permutation (4.38) is simply the graph isomorphism which relabels the nodes, while (4.40) describes all possible compositions of edge local complementation [68].
Local complementation of a graph B (we identify the graph with its adjacency matrix) with respect to the node i, denoted B ∗ i, is a new graph defined as follows. We define the "neighborhood" of i as the subgraph consisting of all nodes j connected to i, i.e. such that B_ij = 1, together with the edges between them. The complementation procedure, applied to a (sub)graph, removes all existing links and connects all pairs of nodes which were not previously connected; at the level of the adjacency matrix this is simply B_kl → B_kl + 1 mod 2. Local complementation B ∗ i is the new graph defined by applying complementation to the neighborhood of i. In terms of B it can be written as B_kl → B_kl + B_ki B_li mod 2 for k ≠ l, while B_ii and B_ij remain unchanged. Edge local complementation is defined with respect to an edge, a pair of connected vertices (i, j), as the repeated application of local complementation, B ∗ i ∗ j ∗ i = B ∗ j ∗ i ∗ j. Two graphs related to each other by a sequence of isomorphisms and edge local complementations are said to be edge local equivalent. The edge local complementation (ELC) equivalence classes of graphs are therefore in one-to-one correspondence with the classes of physically equivalent B-form code theories, i.e. those related to each other by T-duality. ELC equivalence classes (which are defined to include isomorphic graphs) have been studied in [18] as a means to classify equivalence classes of classical binary codes. Their connection with stabilizer codes and Clifford transformations has also been discussed in [67, 68]. The number of classes of ELC-equivalent graphs t^ELC_n on n nodes is known as the OEIS integer sequence A156801, see Table 1. As the number of inequivalent graphs grows rapidly, it is more convenient to keep track of indecomposable graphs/codes. The number of classes of edge local equivalent indecomposable graphs i^ELC_n is related to the full number of equivalence classes t^ELC_n via the Euler transform.
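The two graph operations above can be sketched directly on adjacency matrices (our function names; ELC is taken as B ∗ i ∗ j ∗ i, with B ∗ j ∗ i ∗ j giving the same result):

```python
def local_complement(B, i):
    """B * i: complement the subgraph induced on the neighborhood of i,
    i.e. toggle every edge between two neighbors of i."""
    n = len(B)
    G = [row[:] for row in B]
    nbrs = [k for k in range(n) if B[i][k]]
    for a in range(len(nbrs)):
        for b in range(a + 1, len(nbrs)):
            j, k = nbrs[a], nbrs[b]
            G[j][k] ^= 1
            G[k][j] ^= 1
    return G

def edge_local_complement(B, i, j):
    """ELC along the edge (i, j): B * i * j * i."""
    assert B[i][j] == 1, "ELC requires an edge between i and j"
    return local_complement(local_complement(local_complement(B, i), j), i)
```

For the path graph 0-1-2, local complementation at the middle node produces the triangle, and ELC along the edge (0, 1) gives the same graph in either order of application.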
(We note that the code CFT for a decomposable code is the tensor product of the CFTs associated with the indecomposable codes into which the original code factors.) To summarize, B-form codes (graph codes) are in one-to-one correspondence with graphs, and we will use the two languages interchangeably. The T-dualities that transform a code theory into another code theory are necessarily code equivalences; we call them T-equivalences. Restricting their action to the space of B-form codes, at the level of graphs T-dualities are generated by permutations of nodes (graph isomorphisms) and edge local complementations. In what follows we will simply say that graphs (or B-form codes) are T-equivalent if they belong to the same ELC equivalence class.
As was mentioned above, T-duality leaves the refined enumerator polynomial invariant; W_C is the same for all codes associated with edge local equivalent graphs. There is another homogeneous polynomial with the same property, i.e. one which is the same for all graphs belonging to the same ELC equivalence class. This is the interlace polynomial [68, 69], defined as a sum over subsets w of the vertex set. Here B[w] denotes the submatrix of B_ij for i, j ∈ w, and the kernel Ker(X) of a binary matrix is understood with respect to mod 2 algebra. Since any code theory can be brought to the B-form using T-duality, and the interlace polynomial is the same no matter which B-form representative we choose, the interlace polynomial is a genuine characteristic of the code CFT. Explicit examples of interlace polynomials will be given in Section 6.
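In the single-variable form due to Arratia, Bollobás and Sorkin (which may differ in conventions from the variant used in the text), the interlace polynomial is q(G; x) = Σ_w (x − 1)^{dim Ker B[w]}, with the kernel computed over GF(2). A minimal sketch:

```python
from itertools import combinations

def gf2_nullity(M):
    """dim Ker of a binary matrix over GF(2): number of columns minus
    the rank, computed by Gaussian elimination mod 2."""
    m = [row[:] for row in M]
    rows = len(m)
    cols = len(m[0]) if rows else 0
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(rows):
            if i != r and m[i][c]:
                m[i] = [x ^ y for x, y in zip(m[i], m[r])]
        r += 1
    return cols - r

def interlace(B, x):
    """Interlace polynomial q(G; x) evaluated at x: sum over all vertex
    subsets w of (x - 1)^{dim Ker B[w]}."""
    n = len(B)
    return sum((x - 1) ** gf2_nullity([[B[i][j] for j in w] for i in w])
               for k in range(n + 1) for w in combinations(range(n), k))
```

For the edgeless graph on n nodes this gives x^n, and for the single edge K_2 it gives 2x, matching the standard small-graph values.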
At the beginning of this section we mentioned that cyclic permutations of σ_{x,y,z} (multiplication by ω in the language of GF(4) codes) are not T-dualities. For simplicity we consider n = 1 and multiplication by ω. The action of ω on (α, β) is a linear map, where the algebra is mod 2. This action automatically extends to the code lattice Λ(C). Provided that we start with an even self-dual Lorentzian lattice Λ(C), the new lattice will be self-dual but may not be even. Thus cyclic permutations, in general, do not preserve the property of codes being real. Combining cyclic permutations of different components with exchanges of the i-th components of α and β generated by O_i (4.31), one can occasionally find transformations which map a B-form code into another B-form code. The orbit of all B-form codes related to a given one via cyclic permutations of σ_{x,y,z} and exchanges σ_x ↔ σ_z is equivalent, at the level of graphs, to the orbit under consecutive actions of local complementation (LC) (4.46) [60, 66, 70]. Two graphs related to each other by a sequence of isomorphisms and local complementations are called LC equivalent. Local complementation equivalence classes of graphs are therefore in one-to-one correspondence with classes of equivalent codes, i.e. those related to each other by the Clifford group (also called the local Clifford group in the literature). LC equivalence classes (which are defined to include isomorphic graphs) have been studied in [31, 66, 70-72], in particular to classify equivalence classes of self-dual quantum stabilizer codes (or, equivalently, graph states). The number of classes of LC-equivalent graphs t^LC_n on n nodes is known as the OEIS integer sequence A094927, see Table 2, [72].
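The ω-action is simple enough to write out. With the Gray-map basis c = α + ω β (this basis choice is our assumption; conventions differ), ω c = β + ω(α + β), so multiplication by ω acts componentwise as (α, β) → (β, α + β) mod 2, an order-three map since ω^3 = 1:

```python
def omega(alpha, beta):
    """Multiplication by omega in GF(4), in the (assumed) basis
    c = alpha + omega*beta:  omega*c = beta + omega*(alpha + beta),
    acting componentwise mod 2.  Applying it three times is the identity."""
    return tuple(beta), tuple((a + b) % 2 for a, b in zip(alpha, beta))
```

One checks that the map cyclically permutes the three nonzero letters of GF(4), i.e. the Pauli labels X, Y, Z, and returns to the identity after three applications.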
If two codes are equivalent in the code equivalence sense, but not related by T-equivalence (T-duality transformations), we call them "C-equivalent," where C stands for cyclic permutations of σ^{x,y,z}. C-equivalent codes necessarily share the same enumerator polynomial but usually have different refined enumerators. The code CFTs associated with C-equivalent codes are generally physically distinct. At the level of graphs, C-equivalent B-form codes correspond to graphs related by LC, but not by ELC.
The role played by ELC graph equivalence in determining the physical equivalence of the corresponding code theories motivates us to classify ELC classes within the LC classes of graphs that correspond to equivalent codes. To our knowledge such a classification has not previously been performed. We provide a full classification for graphs with n ≤ 8 nodes, obtained with the help of computer algebra, in Appendix E.
The relation between quantum stabilizer codes and 2d CFTs outlined in this section is only one particular aspect of what is likely a much richer story. Given the role classical codes play in the context of chiral CFTs, we can essentially take for granted that quantum codes can be used to define non-chiral vertex operator algebras, a subject we leave for future investigation. Here we only briefly comment on the recent work [73], which establishes a relation between the hexacode, understood as a quantum stabilizer code, and a particular SCFT. The SCFT in question, the GTVW theory [74], has chiral vertex operators of dimension 3/2 parametrized by vectors k ∈ R^6 with all components half-integer, k_i = ±1/2. These vertex operators can be associated with the ket vectors of the Hilbert space of the hexacode, k → |(k_1 + 1/2) … (k_6 + 1/2)⟩, such that any linear combination in the Hilbert space is mapped to a linear combination of vertex operators. Harvey and Moore show that the code subspace ψ_C, defined via an analog of (4.44) ([73] uses a code equivalent to the hexacode (2.66), and therefore the analog of (4.44) includes imaginary coefficients), is mapped to a special vertex operator, the N = 1 supercurrent. They conjecture that other N = 1 SCFTs are related to other stabilizer codes.
There is a particular technical aspect emphasized in [73]. The expression for the code state ψ_C = P|0⟩ with P given by (3.22) exists for any stabilizer code, but it depends on the choice of the n generators g_i. Choosing different combinations of the g_i as generators may result in a different ψ_C. This is because g(c) (3.14), understood as a map from codewords c = (α, β) ∈ GF(4)^n to stabilizer group elements, is not a representation but a projective representation,

g(c_1) g(c_2) = ε(c_1, c_2) g(c_1 + c_2).

The cocycle ε(c_1, c_2) = ±1 is in general nontrivial, but in the example considered in [73] it is trivial, ε(c_1, c_2) = 1. Here we point out that this is not a unique situation: other codes also have a trivial cocycle, with an appropriate choice of the map from GF(4) to the group of Pauli matrices. Let us choose the phases in (3.14) such that the coefficient ε(c) is equal to

ε(c) = i^{p w_x(c) + q w_z(c) + r w_y(c)},

where p, q, r are integers between 0 and 3. It should be real, which is a consistency condition on the code and p, q, r. For the cocycle to be trivial, g(c_1) g(c_2) = g(c_1 + c_2), where the sum c_1 + c_2 is understood mod 2. This condition is symmetric under c_1 ↔ c_2 because of (3.15), and should hold for any two codewords c_1, c_2 ∈ C. When it holds, the stabilizer group is a genuine representation of the code C ⊂ GF(4)^n, understood as an abelian group under addition. In general this condition is not invariant under code equivalence transformations. Focusing on real self-dual codes, we have verified that all codes with n = 1, 2 satisfy this condition for some p, q, r. For n = 3, 24 out of 30 codes, and for n = 4, 103 out of 270 codes satisfy (4.52).
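The triviality condition can be tested mechanically. The sketch below (our own check, with the bare convention g(c) ∝ X^α Z^β per site and the phase assignment described above) searches for p, q, r making the cocycle trivial for the real self-dual n = 2 code with stabilizer group {II, XZ, ZX, YY}.

```python
import numpy as np
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def g_bare(alpha, beta):
    """Tensor product of X^a Z^b factors, with no phase convention."""
    m = np.array([[1.0 + 0j]])
    for a, b in zip(alpha, beta):
        f = (X if a else I2) @ (Z if b else I2)
        m = np.kron(m, f)
    return m

def weights(alpha, beta):
    wx = sum(1 for a, b in zip(alpha, beta) if a and not b)
    wz = sum(1 for a, b in zip(alpha, beta) if b and not a)
    wy = sum(1 for a, b in zip(alpha, beta) if a and b)
    return wx, wz, wy

# codewords (alpha | beta) of the n = 2 code {II, XZ, ZX, YY}
code = [((0, 0), (0, 0)), ((1, 0), (0, 1)), ((0, 1), (1, 0)), ((1, 1), (1, 1))]

def solutions():
    """All (p, q, r) with real eps(c) and a trivial cocycle on this code."""
    sols = []
    for p, q, r in product(range(4), repeat=3):
        eps = {}
        for c in code:
            wx, wz, wy = weights(*c)
            eps[c] = 1j ** (p * wx + q * wz + r * wy)
        if any(abs(e.imag) > 1e-12 for e in eps.values()):
            continue  # eps(c) must be real on every codeword
        def g(c):
            return eps[c] * g_bare(*c)
        trivial = True
        for c1, c2 in product(code, repeat=2):
            c12 = (tuple((a + b) % 2 for a, b in zip(c1[0], c2[0])),
                   tuple((a + b) % 2 for a, b in zip(c1[1], c2[1])))
            if not np.allclose(g(c1) @ g(c2), g(c12)):
                trivial = False
                break
        if trivial:
            sols.append((p, q, r))
    return sols
```

For this code the choice p = q = r = 1 works: it assigns ε = -1 to XZ, ZX, and YY, turning the projective action into a genuine representation.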

Bounds, averaging over codes, and holography
One of the central questions of coding theory is how well one can protect (quantum) information when the number of qubits n goes to infinity. In the case of self-dual stabilizer codes this is the question of determining the largest possible ratio d/n in the limit n → ∞, where d is the maximal achievable Hamming distance.

The quantum Hamming bound (3.4) readily provides an upper limit (5.1), but it is known to be conservative. A stronger upper bound (5.2) was found by Rains [56] by analytically treating the linear programming constraints. Further improvements in the asymptotic bound for d/n are possible [38]. Our first task in this section is to obtain linear programming bounds on d numerically, for n ≤ 32. To illustrate the main idea of our approach we consider the following problem: find a homogeneous polynomial W(x, y, z) of degree n, invariant under the duality transformation (3.32), with all coefficients integer and nonnegative, and satisfying W(1, 0, 0) = 1. We additionally want to maximize d over the set of such polynomials, which can be formulated as a linear programming optimization (or feasibility) problem (5.3). This is a slight modification of the linear programming bound considered in [47], where the feasibility of enumerator polynomials W(x, y) was considered. We find that considering the feasibility of refined enumerators somewhat strengthens the bound, which mostly follows (5.2) except for certain values of n with n ≡ 1 mod 6. For n ≤ 32 the results are shown in Fig. 5. Comparing with known results for n ≤ 30 [44,45], we find that the linear programming bound is mostly tight, meaning that the maximal d for which the linear programming problem is feasible is also achievable by a code (or potentially many codes) with that value of d, with at least one known exception at n = 19. In the latter case our linear programming bound gives d ≤ 8, while no self-dual codes with d = 8 exist. Even when extremal codes, i.e. codes with d saturating the bound, exist, there are usually many other "fake" refined enumerator polynomials which satisfy the linear programming constraints and have the same d.
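The invariance condition entering the feasibility problem is a linear constraint on the coefficients. In the conventions assumed here (our own normalization, consistent with the enumerators quoted in Section 6), the duality transformation acts as (x, y, z) → ((x+y+2z)/2, (x+y-2z)/2, (x-y)/2). The sketch below checks numerically that this map is an involution and that the basic refined enumerators W_1 = x + z and W_2 = x^2 + y^2 + 2z^2 are invariant.

```python
def dual(x, y, z):
    """Duality (MacWilliams-type) transformation on the REP variables (assumed conventions)."""
    return ((x + y + 2 * z) / 2, (x + y - 2 * z) / 2, (x - y) / 2)

def W1(x, y, z):
    # refined enumerator of the real n = 1 codes
    return x + z

def W2(x, y, z):
    # refined enumerator of the indecomposable real n = 2 codes
    return x**2 + y**2 + 2 * z**2
```

Any candidate refined enumerator must satisfy W(dual(x, y, z)) = W(x, y, z) together with W(x, -y, z) = W(x, y, z), which is the linear system imposed in the feasibility problem.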

The linear programming bound discussed above can be thought of as a toy version of the conformal modular bootstrap [8-10, 75-89], which aims to establish universal bounds on the spectral gap and other similar properties of 2d theories. Here we essentially restrict the analysis to the subset of code theories defined in the previous section. The partition function is then fully specified by the refined enumerator, which reduces the nontrivial modular bootstrap analysis to a simple linear programming problem in the space of invariant polynomials.
Linear programming bounds are not constructive. In practice, they may be used to produce invariant polynomials with the desired properties, but verifying whether there is an actual code associated with such a polynomial is a difficult task. While various shortcuts are possible, the only universal way to make sure the polynomial is not "fake" is to construct the code, which is exponentially difficult.
There is also a nonconstructive Gilbert-Varshamov bound for quantum codes, which bounds the maximal d from below. Similarly to the classical case, to obtain the bound we calculate the refined enumerator polynomial averaged over all B-form codes, i.e. codes specified by the generator matrix (4.26) with all 2^{n(n-1)/2} possible matrices B. The averaged polynomial W̄ is manifestly invariant under the duality transformation (3.32), as well as under y → -y. Taking z = y reduces it to the averaged enumerator polynomial, from which we immediately conclude that for n → ∞ the maximal d is bounded from below, with the asymptotic lower bound on d/n being half the upper bound (5.1), as expected. For general n we find the maximal d to be equal to or larger than the maximal value d_GV for which the corresponding feasibility constraint is satisfied. This is a somewhat stronger bound than the conventional Gilbert-Varshamov bound which would naively follow from (3.3). Similar lower bounds can in principle be obtained by averaging over any class of codes for which the averaging is feasible; averaging over B-form codes proves to be a convenient choice. We plot the numerical values d_GV(n) for n ≤ 32 in Fig. 5. Even though the Gilbert-Varshamov bound is asymptotically weaker than the linear programming upper bound (and presumably the actual maximal value of d), there is no known systematic construction producing codes with d ≥ d_GV for arbitrarily large n.

The averaged refined enumerator polynomial, via (4.23), can be interpreted as the averaged partition function of all B-form code theories. In light of recent results relating the average over Narain CFTs to U(1)^n × U(1)^n Chern-Simons theory in AdS_3 [10,14], it is natural to ask if the averaged code theory may have a holographic interpretation. To see if a weakly coupled bulk description is possible, we would like to calculate the spectral gap of U(1)^n primaries.
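The average over B-form codes can be carried out by brute force for small n. The sketch below (our own illustration, with the weight and duality conventions assumed throughout) enumerates all 2^{n(n-1)/2} symmetric B matrices for n = 3, builds the refined enumerator of each code from its codewords (v, Bv), and averages.

```python
from itertools import product, combinations

def bform_rep(B):
    """Refined enumerator {(#I, #Y, #X-or-Z): count} of the B-form code
    with codewords (v, Bv), v in GF(2)^n."""
    n = len(B)
    rep = {}
    for v in product((0, 1), repeat=n):
        beta = [sum(B[i][j] * v[j] for j in range(n)) % 2 for i in range(n)]
        wy = sum(1 for a, b in zip(v, beta) if a and b)
        wxz = sum(1 for a, b in zip(v, beta) if a != b)
        key = (n - wy - wxz, wy, wxz)
        rep[key] = rep.get(key, 0) + 1
    return rep

def all_bform_reps(n=3):
    """Refined enumerators of all B-form codes for given n."""
    reps = []
    pairs = list(combinations(range(n), 2))
    for bits in product((0, 1), repeat=len(pairs)):
        B = [[0] * n for _ in range(n)]
        for (i, j), b in zip(pairs, bits):
            B[i][j] = B[j][i] = b
        reps.append(bform_rep(B))
    return reps

def evaluate(rep, x, y, z):
    return sum(c * x**a * y**b * z**w for (a, b, w), c in rep.items())

def dual(x, y, z):
    # duality transformation in the conventions assumed here
    return ((x + y + 2 * z) / 2, (x + y - 2 * z) / 2, (x - y) / 2)
```

For n = 3 the eight B matrices produce exactly four distinct refined enumerators, matching the four classes of T-equivalent codes discussed in Section 6, and the average is duality invariant because each individual enumerator is.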
Following [80] we define the spectral gap as the value of ∆ at which the density of primary states ρ(∆) assumes its asymptotic form. This might differ from the dimension of the lightest nontrivial primary. Primaries of Narain CFTs correspond to vectors ℓ in the Lorentzian lattice, and their dimension is proportional to the Euclidean norm-squared, ∆ = ℓ²/2. For vectors associated with codewords, ℓ² = d_b/2, where the binary Hamming weight is d_b(c) := w_x(c) + w_z(c) + 2 w_y(c). The binary Hamming distance of a code (the minimal weight over all non-trivial codewords) is the conventional Hamming distance of the classical binary [2n, n, d_b] isodual code defined from C via the Gray map. For large n and sufficiently large ∆, the density of vectors of a unimodular lattice Λ(C) ⊂ R^{2n} is given by the volume of a (2n-1)-dimensional sphere, yielding (5.6). For a lattice Λ(C) associated with a stabilizer code, all points of the form √2(a, b), a, b ∈ Z^n, belong to the lattice, and for sufficiently large ∆ their contribution to the density is given by (5.6) divided by 2^n. As ∆ increases and the sphere of radius ℓ = √(2∆) includes new codewords, the overall coefficient grows, with each codeword eventually contributing 1/2^n, until the coefficient saturates at one for ∆ of order n. The spectral gap can be defined as the value of ∆ at which the coefficient in front of (5.6) becomes of order one. This is the value of d_b at which the coefficients of the averaged enumerator polynomial W̄(x², y², xy) become of order one, (5.7). This is exactly the Gilbert-Varshamov bound for binary self-dual codes, see Section 2.1. This suggests the following interpretation: the class of binary self-dual codes obtained through the Gray map from B-form codes is a good representation, in the statistical sense, of all self-dual codes.
At the same time, isodual codes obtained from B-form codes either share a similar distribution of Hamming distances with self-dual codes, or their overall number is much smaller than the number of self-dual codes.
Since the spectral gap (5.7) scales linearly with the central charge n, we expect the corresponding averaged theory to be holographic. We leave the task of understanding the gravity dual theory for the future; here we consider the simpler case of chiral theories and speculate about their possible gravity dual description. The chiral analogue of the average over Narain lattices is the average over even self-dual Euclidean lattices. The averaged theta-function is known to be given by the Eisenstein series E_{n/2}(τ) [45], such that the averaged partition function of the corresponding chiral CFTs is

Z̄(τ) = E_{n/2}(τ)/η^n(τ).   (5.8)

For n divisible by 24, (5.8) is modular invariant. Otherwise, since n is divisible by 8, it is invariant under the subgroup of PSL(2, Z) generated by τ → τ + 3 and τ → -1/τ.
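The modular invariance of (5.8) for n divisible by 24 can be probed numerically. The sketch below (our own check, using truncated q-expansions with the standard normalizations η(τ) = q^{1/24} Π(1 - q^m) and E_12 = 1 + (65520/691) Σ σ_11(m) q^m) verifies invariance under τ → τ + 1 and τ → -1/τ at a sample point in the upper half plane.

```python
import cmath

def eta(tau, terms=200):
    """Dedekind eta function via its product formula, truncated."""
    q = cmath.exp(2j * cmath.pi * tau)
    prod = 1.0 + 0j
    for m in range(1, terms):
        prod *= 1 - q**m
    return cmath.exp(2j * cmath.pi * tau / 24) * prod

def sigma11(m):
    """Divisor sum sigma_11(m)."""
    return sum(d**11 for d in range(1, m + 1) if m % d == 0)

def E12(tau, terms=60):
    """Weight-12 Eisenstein series, normalized to constant term 1."""
    q = cmath.exp(2j * cmath.pi * tau)
    return 1 + (65520 / 691) * sum(sigma11(m) * q**m for m in range(1, terms))

def Zbar(tau):
    """Averaged chiral partition function (5.8) for n = 24."""
    return E12(tau) / eta(tau)**24
```

Both transformations should leave Zbar unchanged, since E_12 and η^24 carry the same weight-12 modular factor.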
Using the conventional representation for the Eisenstein series we can rewrite (5.8) as

Z̄ = Σ_{γ ∈ Γ_∞\SL(2,Z)} 1/η^n(γτ),   (5.9)

for n divisible by 24, and interpret this sum as a sum over handlebodies, with 1/η^n(τ) being the partition function of U(1)^n Chern-Simons theory on the thermal AdS_3 geometry parametrized by the modular parameter τ of the boundary torus. This holographic interpretation is schematic and, similarly to the non-chiral case [10,14], requires further checks and clarifications. Furthermore, if n is not divisible by 24, additional degrees of freedom in the bulk, possibly in the form of a Z_3 gauge field, would be necessary to make the sum in (5.9) well defined. We set aside these important nuances, as our goal here is to understand how averaging specifically over code CFTs changes the story. In the chiral case we would consider even self-dual lattices Λ(C) associated with doubly-even self-dual binary codes. Their averaged theta-function is given by (2.51, 2.29), which differs from E_{n/2}(τ) by an appropriate modular form. For simplicity we consider the case n = 24, for which

Z̄_codes = (a_{12} E_{12}(τ) + a_{6²} E_6²(τ))/η^{24}(τ),   a_{12} + a_{6²} = 1,   (5.10)

with the precise values of a_{12} and a_{6²} unimportant. This expression can be represented in a way similar to (5.9),

Z̄_codes = Σ_{γ ∈ Γ_∞\SL(2,Z)} (a_{12} + a_{6²} E_6(γτ))/η^{24}(γτ).   (5.11)
Again, the term 1/η^{24}(τ) can be interpreted as the holographic contribution of U(1)^n gauge fields, while E_6 in the numerator suggests the presence of non-abelian gauge fields in the bulk, which are known to produce certain combinations of Jacobi theta-functions [14]. Our considerations do not prove that the averaged code theories have weakly coupled holographic duals, but they indicate that such an interpretation may be possible. We end this discussion with a few concrete questions. First, the representation of a modular form as a polynomial in E_4, E_6 is not unique. Instead of a_{12} E_{12} + a_{6²} E_6² in (5.10) we can write the most general expression a_{12} E_{12} + a_{6²} E_6² + a_{4³} E_4³, and correspondingly the numerator in (5.11) becomes a_{12} + a_{6²} E_6 + b_4 E_4 + b_{4²} E_4², with arbitrary b_4, b_{4²} satisfying b_4 + b_{4²} = a_{4³}. Thus the holographic partition function for n = 24 can be written in many different ways, suggesting that there are different microscopic descriptions in the bulk, related by dualities. For larger n there would be more representations and potentially a larger duality web in the bulk. Even more intriguing is the chiral case of n = 8, for which there is a unique even self-dual lattice, E_8. Modulo the important subtlety that additional degrees of freedom in the bulk would be needed to make the sum over SL(2, Z) well-defined, the partition function E_4(τ)/η^8(τ) of the E_8 theory similarly admits a holographic interpretation. It therefore provides a setting to study, from the CFT side, the interpretation of Euclidean wormhole geometries when the boundary is not connected.
To conclude this section, we return to the question of the spectral gap, which we now define strictly as the dimension of the lightest non-trivial primary. It can be equivalently characterized via the length-squared of the shortest non-trivial vector of the Narain lattice, and in this way the maximal value of the spectral gap is related to the efficiency of lattice sphere packing in a given dimension. This question has been recently studied numerically in [8-10]. Similarly to classical binary codes, which give rise to optimal sphere packings in 8 and 24 dimensions and provide valuable insight into scaling at large n, one may expect quantum codes to occasionally saturate the spectral gap bounds and inform the large-n behavior. In terms of real stabilizer codes, maximizing the spectral gap is equivalent to maximizing the binary Hamming distance for a given n. The question of finding a quantum code with maximal d_b for a given value of n is very similar to the conventional question of finding the "best" code with largest d, but to our knowledge it has not previously been discussed in the literature. In Fig. 2 we plot the linear programming bound on d_b obtained by imposing constraints similar to (5.3). The bound is tight at least for n ≤ 8. Superimposing the approximate theoretical fit of the numerical spectral gap bound [10], translated into units of d_b = 4∆ ≈ n/2 + 2 (dashed line in Fig. 2), we find that both exhibit approximately linear growth with a similar slope. We leave it as an open problem to find the analytic analogue of (5.2) for the binary Hamming distance, or at least its asymptotic behavior at large n.
By comparison with the case of classical codes and Euclidean lattices, we may expect quantum codes to yield Narain CFTs with the maximal spectral gap for certain special values of n, in particular for n = 4 and n = 12. This is indeed the case for n = 4, as is evident from Fig. 2 and discussed in Section 6.4. For n = 12 code theories fall short of saturating the spectral gap bound, which is discussed in more detail in Section 6.10.
In the discussion above we identified the spectral gap (the length of the shortest vector) with the binary Hamming distance of the code. This is correct for d_b ≤ 4, but for larger d_b there are always lattice vectors of the form √2(±1, 0^{2n-1}) which are shorter. This is the same problem, discussed in Section 2.1, which precludes classical binary codes from yielding efficient sphere packings in large dimensions, at least directly via Construction A. For small n this problem can be partially solved by applying the shift procedure to the Construction A lattice, which removes the unwanted short vectors √2(±1, 0^{2n-1}). A similar strategy can be employed in the quantum case, making it possible to relate codes with larger d_b to CFTs with larger spectral gap. We consider an explicit example of a lattice with shortest vector controlled by d_b > 4 in Section 6.10.

Enumeration of self-dual codes with small n
In this section we discuss many explicit examples of self-dual stabilizer codes for n ≤ 12. As the number of codes grows rapidly with n, we emphasize different points for different n. For n = 1 we discuss all codes in detail. For n = 2 we focus on real codes, discuss them in detail, and then illustrate the coset construction. Starting from n = 3 we restrict our attention to B-form codes, and for n = 3, 4 go over all classes of T-equivalent codes. For n = 3, 4 we also explicitly write down all "fake" enumerators. Starting from n = 5 we only consider codes with maximal values of d and/or d_b. For n = 7, 8 we give explicit examples of non-equivalent codes with the same refined enumerator polynomials, giving rise to groups of physically distinct isospectral code CFTs. For n = 4, 6, 12 we give examples of codes related to special lattices.

n = 1
For n = 1 there are three codes, see (3.33), specified by the unique stabilizer generator g = X, or Y, or Z.
Obviously all three codes are equivalent (in the sense of the code equivalence group). Two of them, see (3.35), the first and third, are real and correspond to code CFTs. The first code, g = X, is a B-form code (4.28) with B = 0; this is the only B-form code for n = 1. The corresponding graph consists of a single vertex. The corresponding CFT is a boson on a circle of radius R = 1. The third code, g = Z, is T-dual to the first one; it describes a boson compactified on a circle of radius R = 2, with lattice generator matrix Λ = diag(1, 2)/√2. The refined enumerator polynomial of these two codes is, cf. (3.36), W = W_1 = x + z. To describe the coset construction (4.36) for n = 1, we note that two of the four elements of O(1, 1, Z) form the subgroup O_2(1, 1, Z), so that the coset (4.36) includes two elements. The resulting lattice generator matrices (4.33) correspond to the two n = 1 codes, with stabilizer generators g = X and g = Z respectively.

n = 2
There are fifteen codes with n = 2. Six of them, see (3.35), are real. We list only the real codes, which are specified by a pair of stabilizer generators,

(g_1, g_2) = (XI, IX), (XI, IZ), (ZI, IX), (ZI, IZ),   (6.7)
(g_1, g_2) = (XZ, ZX), (XX, ZZ).   (6.8)

The codes split into two groups, with 4 and 2 elements; all codes within each group are T-dual to each other. The first group consists of decomposable codes. The corresponding code CFTs are tensor products of two bosons compactified on circles of radius R = 1 or 2. The refined enumerator polynomial of these codes is W = W_1² and the (binary) Hamming distance is d = d_b = 1. Only the first code in (6.7) is of B-form, with zero 2 × 2 matrix B. The corresponding graph is shown in Fig. 3, left. The interlace polynomial of this graph is Q = Q_1² = (x + y)². Codes in the second group are indecomposable. They have W = W_2 = x² + y² + 2z², cf. (3.36), and (binary) Hamming distance d = d_b = 2. The first code in (6.8) is of B-form. The corresponding CFT consists of two compact bosons on circles of radius R = 1 with one unit of B-flux, B_ij = ε_ij. The corresponding graph is shown in Fig. 3, right, and the interlace polynomial is Q_2 = 2x(x + y). The second code in (6.8) is T-dual to the first one. The generating matrix of its lattice Λ(C) has the toroidal compactification form (4.11) with vanishing B-field and 2γ* = γ, where γ is the 2 × 2 matrix with rows (1, 1) and (1, -1) (6.9). The lattice generated by γ is the square lattice with minimal length √2. Therefore the corresponding CFT is the tensor product of two theories, each a boson compactified on a circle of self-dual radius R = √2. This is confirmed by the partition function (6.10), whose right-hand side follows from (4.23). This is a curious situation: the code CFT is a tensor product of two theories, while the code itself is indecomposable. The spectral gap of this theory is ∆ = d_b/4 = 1/2. This code has other interesting properties.
It is in fact the repetition code i_2, the linear self-dual code of type 4^H mentioned in Section 2.2. Comparing with (2.59), we find W_{i_2}(x, y) = W_2(x, y, y). The Lorentzian lattice in R^{2,2} generated by (4.11) with γ as in (6.9), and the Euclidean lattice in C² = R⁴ associated with i_2, are related to each other by a linear transformation in each R² = C plane. Upon setting τ̄ = -3τ, the Siegel theta-function (6.10) reduces to the theta-function of the Euclidean lattice, φ_0² + 3φ_1², as expected. Using the Gray map, we can also interpret this code as a binary repetition code, such that the enumerator polynomial (2.22) is W_{i_2}(x, y) = W_2(x², y², xy). The theta-function following from (6.10) then reduces to b⁴ after setting τ̄ = -τ, which is the correct theta-function of a cubic lattice of the appropriate size.

To conclude the case of n = 2, we describe the coset construction (4.36) of real self-dual codes. The group SO(2, 2, R) is the product of two SL(2, R) factors mod Z_2. Elements of O(2, 2, Z) can be described in a similar way: they include products S_1 × S_2 with S_{1,2} ∈ SL(2, Z) (matrices of determinant 1) and t(S_1 × S_2) (matrices of determinant -1), with t defined in (6.14). The subgroup O_2(2, 2, Z) includes only matrices whose upper-right 2 × 2 submatrix has all elements even. This leaves S_1 × S_2 with c_1 mod 2 = 0 and S_2 arbitrary; in other words, the S_1 × S_2 elements form Γ_0(2) × SL(2, Z), where Γ_0(2) is the Hecke congruence subgroup of level 2. The quotient SL(2, Z)/Γ_0(2) includes three matrices, so the coset (4.36) contains six elements. The six real self-dual codes, arranged in the same way as in (6.7, 6.8), are listed in (6.18).
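The refined enumerators quoted above can be generated directly from the stabilizer generators. The following sketch (our own illustration, taking Pauli strings as input) enumerates the 2^n elements of the additive code and tallies their weights.

```python
from itertools import product

def rep_from_generators(gens):
    """Refined enumerator {(#I, #Y, #X-or-Z): count} of the additive code
    spanned (mod 2) by the given Pauli-string generators, e.g. ["XZ", "ZX"]."""
    n = len(gens[0])
    to_bits = {"I": (0, 0), "X": (1, 0), "Y": (1, 1), "Z": (0, 1)}
    vecs = [tuple(to_bits[p] for p in g) for g in gens]
    rep = {}
    for coeffs in product((0, 1), repeat=len(gens)):
        word = [(sum(c * v[i][0] for c, v in zip(coeffs, vecs)) % 2,
                 sum(c * v[i][1] for c, v in zip(coeffs, vecs)) % 2)
                for i in range(n)]
        wy = sum(1 for a, b in word if a and b)
        wxz = sum(1 for a, b in word if a != b)
        key = (n - wy - wxz, wy, wxz)
        rep[key] = rep.get(key, 0) + 1
    return rep
```

With this bookkeeping, the generators (XZ, ZX) give x² + y² + 2z² = W_2, while the decomposable code (XI, IX) gives (x + z)² = W_1², matching the two groups of real n = 2 codes above.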

n = 3
There are 30 real codes for n = 3, which split into t_3^{ELC} = 4 orbits under T-duality equivalences (T-equivalences). In terms of the B-field representation, these four orbits correspond to the four inequivalent graphs with three vertices, labeled by the number of edges (links) 0 ≤ l ≤ 3. The equivalence of codes within each orbit is obvious, as graphs with the same number of links l are isomorphic for n = 3. When l = 0, 1 the graphs, and hence the codes, are decomposable; we do not discuss these cases in detail, as their properties follow from the discussion above. B-form codes with l = 2 are T-dual to the code with lattice Λ(C) of the form (4.11), with vanishing B-field and an appropriate γ. In the remaining case l = 3 there is no T-dual theory with a lattice of the form (4.11) with some γ and vanishing B-field. This is the general situation: it is always possible to use T-duality to bring (4.11) to the form (4.28) with γ = I and some non-trivial B-field, but it is almost never possible to get rid of B by changing γ.
There are t_3^{LC} = 3 inequivalent classes of codes for n = 3. The classes of T-equivalent codes with l = 2 and l = 3 are related to each other via C-equivalence. Accordingly, their refined enumerator polynomials yield the same enumerator polynomial, W_3(x, y, y) = W̃_3(x, y, y) = x³ + 3xy² + 4y³. Both classes of codes have the same Hamming distance d = 2 but different binary Hamming distances, d_b = 2 and d_b = 3. The corresponding CFTs have different partition functions, as well as different spectral gaps, ∆ = 1/2 and ∆ = 3/4 respectively. For n = 1 and n = 2, all polynomials W(x, y, z) invariant under (3.32) and y → -y, and satisfying the additional conditions W(1, 0, 0) = 1 with all coefficients integer and nonnegative, are actual refined enumerator polynomials of additive codes. For n = 3, besides the four polynomials associated with the four classes of T-equivalent codes, see Fig. 4, there are another six "fake" polynomials, including

W = x³ + 2x²z + xy² + 2xz² + 2z³,   (6.24)
W = x³ + xy² + 2xz² + 2y²z + 2z³,   (6.25)
W = x³ + 2xy² + xz² + y²z + 3z³.   (6.26)

There are no additive self-dual codes for which these polynomials are refined enumerator polynomials, yet they satisfy all the necessary properties, and the "partition function" defined via (4.23) is modular invariant and satisfies the other basic properties expected of a CFT partition function. This poses a question important in light of the modular bootstrap program: do these would-be CFT partition functions correspond to actual theories?

Figure 4. Graphs, their adjacency matrices, and refined and interlace polynomials, associated with the four T-dual classes of n = 3 codes. The polynomials W_1, W_2, W_3 are defined in (3.36).
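One can verify directly that the three "fake" polynomials above pass all the stated tests. The following numerical sketch (our own check, with the duality transformation written in the conventions assumed here) confirms W(1, 0, 0) = 1 and invariance under both the duality map and y → -y.

```python
fakes = [
    lambda x, y, z: x**3 + 2*x**2*z + x*y**2 + 2*x*z**2 + 2*z**3,  # (6.24)
    lambda x, y, z: x**3 + x*y**2 + 2*x*z**2 + 2*y**2*z + 2*z**3,  # (6.25)
    lambda x, y, z: x**3 + 2*x*y**2 + x*z**2 + y**2*z + 3*z**3,    # (6.26)
]

def dual(x, y, z):
    # duality transformation (3.32) in the conventions assumed here
    return ((x + y + 2*z) / 2, (x + y - 2*z) / 2, (x - y) / 2)
```

All coefficients are nonnegative integers by inspection, so these polynomials satisfy every linear-programming constraint despite not corresponding to any additive code.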
Given that the number of "fake" polynomials increases rapidly with n, unless they correspond to actual CFTs, "bootstrapping" 2d theories must yield a growing number of consistency regions (occasionally taking the form of "islands") in the exclusion plots which are in fact empty, contradicting our experience so far. Assuming the opposite, that some of these "fake" polynomials correspond to actual CFTs, they can likely be identified as refined enumerator polynomials of non-additive codes. In this case the scope of what we call code theories should be extended to include CFTs based on a wider class of codes. Continuing this logic further, the CFT partition function is a much richer object than the code enumerator polynomial, and may satisfy additional non-trivial conditions [90,91]. An interesting scenario would be if these additional conditions could be used to distinguish "fake" code enumerators from actual ones, thus introducing a new string-theoretic tool into code theory.

n = 4
There are t_4^{ELC} = 9 classes of T-equivalent codes for n = 4, of which i_4^{ELC} = 4 are indecomposable. The first indecomposable equivalence class includes B-form codes specified by two different B matrices (graphs), and those isomorphic (permutation equivalent) to them. We explicitly write down two different B's for the same class of T-equivalent codes to emphasize that edge local complementation can change the number of links, the topology, etc. The REP for this class is W = x⁴ + 4xy²z + 2x²z² + 4y²z² + 4xz³ + z⁴ = 2W_1²W_2 - 2W_1R - W_1⁴, and the interlace polynomial is Q = 5x⁴ + 8x³y + 3x²y².
The second equivalence class includes B-form codes specified by two further B matrices (graphs), and those isomorphic to them. The ELC equivalence of these two graphs is discussed as an example in [18]. The REP for this class is W = x⁴ + x²y² + x²z² + 4xy²z + 4xz³ + 3y²z² + 2z⁴ = 2W_1²W_2 - W_1R - W_1⁴, and the interlace polynomial is Q = 6x⁴ + 8x³y + 2x²y².
The two classes of T-equivalent codes described above are related to each other via C-equivalence. One can easily check that the enumerator polynomial in both cases is the same by taking R → 0. Both classes of code theories have the same value of d = d b = 2 and spectral gap ∆ = 1/2.
There are two more classes of T-equivalent codes for n = 4. B-form codes from the third class are specified by two further B matrices, and those isomorphic to them. The REP of this class is W = x⁴ + 6x²z² + y⁴ + 6y²z² + 2z⁴ = W_2² - 2W_1R, and the interlace polynomial is Q = 4x⁴ + 7x³y + 4x²y² + xy³. Finally, the fourth class has a unique B-form representative, with B_ij = 1 - δ_ij, associated with the complete graph on four vertices (6.31). The REP of this class is W_{qe_8} = x⁴ + 6x²y² + y⁴ + 8z⁴ = W_2² + 4W_1R, and the interlace polynomial is Q = 8x³(x + y).
The third and fourth classes are related to each other via C-equivalence (thus there are i_4^{LC} = 2 classes of indecomposable equivalent codes). Accordingly, the enumerator polynomials of the third and fourth classes are the same, W(x, y) = W_2(x, y, y)², and also coincide with the enumerator polynomial of the decomposable code consisting of two n = 2 codes. This is an example of a generic situation: enumerator polynomials are not unique, and different codes may share the same enumerator polynomial. The same is also true for refined enumerator polynomials, see Section 6.7. Codes from the third and fourth classes have different d_b, 2 and 4 respectively. This is similar to the case of n = 3 and reflects the general situation: C-equivalent codes must have the same d but usually have different d_b.
Comparing B from (6.31) with (2.18), we immediately recognize that the binary code obtained from this stabilizer code via the Gray map is not merely isodual but in fact self-dual. It is the extended Hamming [8,4,4] binary code, and the Lorentzian lattice Λ(C), understood as a lattice in Euclidean space, is the even self-dual lattice E_8. In other words, E_8 is an even self-dual lattice in both Lorentzian and Euclidean signatures! As a consistency check, one can easily verify that the enumerator polynomial of the resulting binary code is W_{qe_8}(x², y², xy) = W_{e_8}(x, y). We can now recognize d_b/2 = 2 as the length-squared ℓ² = 2 of the shortest root of the E_8 lattice. The corresponding Narain CFT, which we will refer to as the non-chiral E_8 theory, has spectral gap ∆ = 1.
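The Gray-map statement is easy to confirm by direct enumeration. Taking the binary generator matrix (I | B) with B_ij = 1 - δ_ij, the sketch below (our own check) verifies that the resulting length-8 code is self-dual with weight enumerator x⁸ + 14x⁴y⁴ + y⁸, i.e. the extended Hamming [8,4,4] code.

```python
from itertools import product

n = 4
B = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
# binary generator matrix (I | B), four rows of length 2n = 8
G = [[1 if j == i else 0 for j in range(n)] + B[i] for i in range(n)]

# all 2^4 codewords of the binary code
codewords = []
for coeffs in product((0, 1), repeat=n):
    w = [sum(c * row[k] for c, row in zip(coeffs, G)) % 2 for k in range(2 * n)]
    codewords.append(tuple(w))

# weight distribution
weights = {}
for w in codewords:
    weights[sum(w)] = weights.get(sum(w), 0) + 1

# self-duality: a 16-element length-8 code that is self-orthogonal is self-dual
self_dual = all(sum(a * b for a, b in zip(u, v)) % 2 == 0
                for u in codewords for v in codewords)
```

The minimal nonzero weight is 4, consistent with d_b = 4 and with the shortest E_8 root having ℓ² = d_b/2 = 2.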
Modular bootstrap studies of the maximal spectral gap of U(1)⁴ × U(1)⁴ theories reveal with astonishing precision that ∆ = 1 is in fact the optimal (maximal possible) value [10]. The theory of eight free Majorana fermions with diagonal GSO projection was identified in [86] as the CFT saturating the bound. Here we have found that the non-chiral E_8 Narain CFT also saturates the bound. Using the explicit form of W_{qe_8} and (4.23), we readily find the partition function of this theory, which coincides with the partition function of the eight fermions mentioned above [86]. In fact these are the same theory, which has other descriptions, including as the SO(8)_1 WZW model. The theory of eight fermions exhibits a rich group of symmetries, known as triality, which has recently been discussed in [92]. While the description of this theory as a Narain CFT has been discussed previously, to our knowledge the connection with the E_8 lattice has not been pointed out. We establish an explicit relation between the theory of eight Majorana fermions and the non-chiral Narain E_8 theory in Appendix A.

There are 11 "fake" REPs for n = 4. The interlace polynomial Q(x, y) is a characteristic of the graph equivalence class under edge local complementation (and isomorphisms), but different classes may share the same Q(x, y). This happens for the first time (i.e. for the smallest n) at n = 4, for the decomposable graphs shown in Fig. 5. Edge local complementation acts on each disconnected subgraph individually, and therefore these graphs, which have the different decompositions 1 + 3 and 2 + 2, cannot be equivalent.

n = 5
For n = 5 there are too many classes of codes (t_5^{ELC} = 21, i_5^{ELC} = 10) to describe all of them in detail. From now on we only focus on codes maximizing the Hamming distance d (the conventional measure of quality for quantum codes) or the binary Hamming distance d_b (which determines, up to some nuances, the spectral gap of the code CFT). For all n ≤ 4 there was a unique class of T-equivalent codes with maximal d_b, which also had maximal d (note there were other codes with the same d but smaller d_b). For n = 5 there is a unique class of codes with the maximal d = 3 (and d_b = 3), the so-called shorter hexacode, related via Construction A of Section 2.2 to the shorter Coxeter-Todd lattice [36], and there is another unique class of codes with the maximal d_b = 4 (and d = 2).

Figure 5. Two non-equivalent graphs under edge local complementation which have the same interlace polynomial Q = 4x²(x + y)².

Figure 6. ELC-equivalent (T-dual equivalent) graphs corresponding to the n = 5 code with the largest d = 3, d_b = 3, W = x⁵ + 5x²y²z + 5x²z³ + 10xy²z² + 5xz⁴ + 5y²z³ + z⁵ and Q = x³(11x² + 16xy + 5y²).
There are three distinct graphs (up to isomorphism) which correspond to the shorter hexacode class, see Fig. 6, and a unique graph (up to isomorphism) which corresponds to the second class with d_b = 4, see Fig. 7.
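The quoted enumerator polynomials are easy to reproduce by brute force. Below is a minimal sketch (not from the paper; it assumes the convention that a codeword (α, Bα mod 2) contributes x for each position with (α_i, β_i) = (0, 0), y for each (1, 1), and z for each (1, 0) or (0, 1)). Running it on the 5-cycle reproduces the REP quoted in the caption of Fig. 6, consistent with the pentagon representing the shorter-hexacode class.

```python
from itertools import product
from collections import Counter

def refined_enumerator(B):
    """REP of the graph code with adjacency matrix B (generator (I | B)):
    returns a dict mapping exponent triples (nx, ny, nz) of x^nx y^ny z^nz
    to their coefficients."""
    n = len(B)
    W = Counter()
    for alpha in product((0, 1), repeat=n):
        beta = [sum(B[i][j] * alpha[j] for j in range(n)) % 2 for i in range(n)]
        nx = sum((a, b) == (0, 0) for a, b in zip(alpha, beta))  # identity positions
        ny = sum((a, b) == (1, 1) for a, b in zip(alpha, beta))  # "Y"-type positions
        W[(nx, ny, n - nx - ny)] += 1                            # X/Z-type positions
    return dict(W)

n = 5
# adjacency matrix of the 5-cycle (pentagon)
B = [[1 if (i - j) % n in (1, n - 1) else 0 for j in range(n)] for i in range(n)]
W = refined_enumerator(B)
# W encodes x^5 + 5x^2y^2z + 5x^2z^3 + 10xy^2z^2 + 5xz^4 + 5y^2z^3 + z^5
```

The same function, run over all graphs of a given size, is what underlies the counts of distinct REPs quoted in the text.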
For n = 3 and n = 4 we saw examples where C-equivalence relates two classes of T-equivalent codes. For n = 5 there are already groups of 2, 3, and 4 classes of codes related to each other by C-equivalences.
There are 128 "fake" REPs for n = 5. The number of "fake" REPs increases rapidly with n: 2835 for n = 6, 71164 for n = 7, 4012529 for n = 8, and so on.

n = 6
There is a unique class of codes which achieves both the maximal d = 4 and the maximal d_b = 4. This is the hexacode h_6, introduced in Section 2.2. As can easily be seen from (2.66), the hexacode is a real code, and by using T-duality transformations it can be brought to the B-form. (We should note that there are other codes, C-equivalent to the hexacode (2.66), which are also called by this name in the literature, see [17,73]. Those codes are not real.) There are two graphs, shown in Fig. 8, associated with the class of T-equivalent codes that includes the hexacode. The refined enumerator polynomial of the hexacode is W_h6(x, y, z) = x^6 + 30x^2y^2z^2 + 15x^2z^4 + y^6 + 15y^2z^4 + 2z^6, which reduces to (2.60) upon substituting z → y. The Lorentzian lattice of the hexacode, Λ(h_6), is related to the Euclidean Coxeter-Todd lattice K_12 by the linear transformation (6.11) applied in each complex plane. Upon setting τ̄ = −3τ, the Siegel theta-function of Λ(h_6) reduces to the theta-function of K_12 (2.67). We should note that the Lorentzian lattice Λ(h_6), although related to K_12, is not the same as the Coxeter-Todd lattice understood as a Lorentzian even self-dual lattice. The latter interpretation and the related Narain CFT were recently introduced in [10]. That construction is analogous to our construction of E_8 as a Lorentzian even self-dual lattice, discussed in Section 6.4.
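As a consistency check on the quoted polynomial, the REP of a self-dual code should be invariant under the refined MacWilliams transformation. A minimal numerical sketch (not from the paper; it assumes the normalization x → (x + y + 2z)/2, y → (x + y − 2z)/2, z → (x − y)/2, and checks the identity at rational sample points rather than symbolically):

```python
from fractions import Fraction as F

def W_h6(x, y, z):
    # refined enumerator polynomial of the hexacode h6
    return (x**6 + 30*x**2*y**2*z**2 + 15*x**2*z**4 + y**6
            + 15*y**2*z**4 + 2*z**6)

def macwilliams(x, y, z):
    # refined MacWilliams transformation (an involution)
    return (x + y + 2*z) / 2, (x + y - 2*z) / 2, (x - y) / 2

for p in [(F(2), F(1), F(0)), (F(1), F(0), F(0)),
          (F(1), F(1), F(1)), (F(0), F(1), F(0))]:
    # invariance of W_h6 and the involution property of the map
    assert W_h6(*p) == W_h6(*macwilliams(*p))
    assert macwilliams(*macwilliams(*p)) == p
```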
There are two other classes of codes with maximal d b = 4 and d = 2, which we do not discuss here.
It turns out that there are exactly two distinct classes of T-equivalent codes for n = 7 which share the same REP,

W_isospectral = x^7 + x^5y^2 + 5x^4y^2z + 5x^2y^4z + x^5z^2 + 12x^3y^2z^2 + 9xy^4z^2 + 4x^4z^3 + 22x^2y^2z^3 + 4y^4z^3 + 5x^3z^4 + 25xy^2z^4 + 11x^2z^5 + 11y^2z^5 + 10xz^6 + 2z^7. (6.38)

Two representative graphs (there are many others) associated with these two classes are shown in Fig. 10. (We note that for n ≤ 6 the REP is unique for each class of T-equivalent codes.) The graphs shown are related by LC, which means that the corresponding classes of codes are C-equivalent. While in general C-equivalence can change the REP, in this case it does not. Since these two classes of codes are not T-equivalent, the corresponding code theories are not T-dual to each other, see Appendix D. Because they share the same REP, the corresponding code CFTs have the same partition function. In other words, we have obtained an explicit example of two Narain CFTs, not related by T-duality, with the same spectrum. At the level of lattices, this is a pair of isospectral but non-isomorphic even self-dual Lorentzian lattices in R^{7,7}. It is interesting to compare this example with the examples of isospectral but non-isomorphic Euclidean lattices associated with inequivalent classical codes, in particular Milnor's example of the E_8 ⊕ E_8 and D_16^+ even self-dual lattices in R^16. In string theory this example famously corresponds to the two possible isospectral compactifications of the 16 left-moving modes of the heterotic string.

Figure 9. Representatives from three distinct classes of T-equivalent codes which maximize both d = 3 and d_b = 4. The interlace polynomials for these classes are Q = 40x^7 + 62x^6y + 24x^5y^2 + 2x^4y^3, Q = 41x^7 + 63x^6y + 23x^5y^2 + x^4y^3, and Q = 43x^7 + 64x^6y + 21x^5y^2.

Figure 10. "Fish" graphs: representatives from two classes of T-equivalent codes which share the same refined enumerator polynomial (6.38); they are C-equivalent but not T-dual to each other. The first class includes 10 distinct non-isomorphic graphs; the second class includes 9. We choose among them two representatives based on simplicity and aesthetics. These two classes of graphs also share the interlace polynomial Q = 30x^7 + 58x^6y + 34x^5y^2 + 6x^4y^3.
These two isospectral theories are related by T-duality upon compactification [26-28]. Our construction gives an example of isospectral even self-dual lattices in the smallest number of dimensions, and in that dimension it is unique, much like Milnor's example. (We note that our analysis covers only code-related lattices. It is an open question whether there are other even self-dual isospectral lattices in R^{n,n} for n ≤ 7.) But it also differs from Milnor's example in several ways. First, Milnor's example relates a decomposable code to an indecomposable one, while here both codes are indecomposable. Second, at the level of CFTs it is an example of two isospectral non-chiral CFTs, not related in any simple way to chiral CFTs. Furthermore, there is no obvious symmetry which would make this example unique or special, which casts doubt on the possibility that these isospectral theories are related by some duality.
As we will see shortly, there are many more examples of isospectral Narain CFTs with n ≥ 8. Our findings highlight a limitation of the modular bootstrap approach, which is incapable of differentiating between isospectral theories.

n = 8
There are t_8^ELC = 1068 classes of T-equivalent codes with n = 8. Fourteen classes achieve the maximal allowed values d = 4 and d_b = 4; they form 5 groups of C-equivalent codes [72]. There are other classes with maximal d_b = 4, but they have smaller d.
Among the fourteen classes with maximal d = d_b = 4 is the code with lattice Λ(C) = E_8 ⊕ E_8, understood with the metric (3.17). It can be brought to B-form, with a B-matrix built from the 4 × 4 matrix B_4 given by (6.31). As can be seen from its graph, this code is indecomposable; we denote its REP by W_{(qe_8)^2}. We should note right away that there is another, decomposable code, which is a product of two n = 4 codes (6.31). Its B-matrix is block-diagonal, with each block equal to B_4. The lattice Λ(C) of that code is also E_8 ⊕ E_8, but this time each E_8 is understood as a Lorentzian lattice, as in Section 6.4. The REP of this code is W^2_{qe_8}, and it has d = 2, d_b = 4. Both codes are equivalent as binary codes, and in particular W_{(qe_8)^2}(x^2, y^2, xy) = W^2_{qe_8}(x^2, y^2, xy) = W_{e_8}(x, y)^2. As we have mentioned several times already, the binary e_8 ⊕ e_8 code is isospectral with d_16^+ (denoted E_16 in [24]). The latter can be brought to canonical form with a symmetric B-matrix satisfying B_ii = 0. This means the binary self-dual code d_16^+ can be uplifted to a real self-dual stabilizer code, with an appropriate B-matrix (one of many representatives of the T-equivalence class) and associated graph. This code has d = 2, d_b = 4 and REP

W_{d_16^+} = x^8 + 4x^6y^2 + 22x^4y^4 + 4x^2y^6 + y^8 + 24x^4z^4 + 144x^2y^2z^4 + 24y^4z^4 + 32z^8.

Of course W_{d_16^+}(x^2, y^2, xy) = W_{e_8}(x, y)^2, which means that all three code CFTs (the one associated with (6.39), the tensor product of two (6.31) theories, and the one associated with (6.41)) have partition functions which coincide along the diagonal τ̄ = −τ (purely imaginary τ), but differ otherwise.
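The identity W_{d_16^+}(x^2, y^2, xy) = W_{e_8}(x, y)^2 quoted above can be verified mechanically. A minimal sketch (not the paper's code), representing polynomials as dictionaries from exponent tuples to coefficients:

```python
from collections import defaultdict

def pmul(p, q):
    # product of two polynomials in (x, y), dict representation
    r = defaultdict(int)
    for (a, b), c in p.items():
        for (d, e), f in q.items():
            r[(a + d, b + e)] += c * f
    return dict(r)

# REP of d16+ as {(pow_x, pow_y, pow_z): coeff}
W_d16p = {(8, 0, 0): 1, (6, 2, 0): 4, (4, 4, 0): 22, (2, 6, 0): 4,
          (0, 8, 0): 1, (4, 0, 4): 24, (2, 2, 4): 144, (0, 4, 4): 24,
          (0, 0, 8): 32}

# substitute x -> x^2, y -> y^2, z -> x*y
sub = defaultdict(int)
for (ex, ey, ez), c in W_d16p.items():
    sub[(2 * ex + ez, 2 * ey + ez)] += c
sub = dict(sub)

W_e8 = {(8, 0): 1, (4, 4): 14, (0, 8): 1}  # binary enumerator of e8
assert sub == pmul(W_e8, W_e8)
```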
We just saw that Milnor's example of isospectral even self-dual lattices in Euclidean R^16 does not lead to isospectral Lorentzian lattices. This does not mean there is any lack of isospectral even self-dual lattices in R^{8,8}. Among n = 8 real self-dual stabilizer codes there are 60 isospectral pairs (excluding products of the n = 1 code with the isospectral n = 7 codes shown in Fig. 10). Among these 60 pairs, two relate a decomposable code with an indecomposable one, while the other 58 relate two indecomposable codes. One of the first two cases involves the hexacode, see Fig. 8, combined with the n = 2 code shown in Fig. 3 (right); the product is isospectral with an indecomposable n = 8 code associated with a certain graph (one of many representatives of the T-equivalence class). One can easily find two codewords with binary weight d_b = 2 that are not orthogonal, in the Euclidean metric, to other codewords. This means the corresponding lattice does not decompose into a sum of two lattices, yet it is isospectral with a decomposable one. In this sense this example is similar to Milnor's. We will not discuss other examples of isospectral pairs in detail, but only mention that in 36 instances the isospectral codes are C-equivalent, while in 24 instances they are not. Besides the 60 isospectral pairs, there are 5 isospectral triples, in which three different code CFTs are isospectral. In four of the triples two of the codes are C-equivalent and the third is not, while in the fifth triple no two codes are C-equivalent. Representative graphs from the fifth triple are shown in Fig. 11.

n = 9 − 11
We have classified all graphs with n ≤ 8 vertices, see Appendix E, and one can easily generate all the corresponding refined enumerator polynomials and identify equivalent ones using computer algebra. We leave the task of classifying ELC classes of graphs (classes of T-equivalent codes) for larger n for the future. There is a full classification of LC classes (classes of equivalent codes) for n ≤ 12, obtained in [72], with the corresponding database available online. Going through the 675 n = 9, 3990 n = 10, and 45144 n = 11 codes available there, we confirm that there are further examples of isospectral codes. There are instances of pairs, triples, and quadruples of isospectral n = 10 codes, and of k-tuples of isospectral n = 11 codes for all k ≤ 11.

Figure 11. Representatives from three distinct, not C-equivalent classes of T-equivalent codes, which share the same REP W = (x + z)^2 (x^6 − 2x^5z + 3x^4z^2 + 4x^3y^2z + x^2y^4 + 2x^2y^2z^2 + 4x^2z^4 + 2xy^4z + 24xy^2z^3 + 4xz^5 + 9y^4z^2 + 10y^2z^4 + 2z^6).
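The brute-force strategy mentioned above can be illustrated at tiny n: enumerate all graphs, compute each REP, and collect the distinct ones. A sketch (not from the paper) for n = 3, where the eight graphs fall into four classes (empty graph, single edge, path, triangle), which indeed yield four distinct REPs:

```python
from itertools import combinations, product

def rep(B, n):
    # refined enumerator polynomial of the graph code with adjacency B,
    # recorded as counts of (nx, ny) = (#identity, #Y-type) positions
    W = {}
    for alpha in product((0, 1), repeat=n):
        beta = tuple(sum(B[i][j] * alpha[j] for j in range(n)) % 2
                     for i in range(n))
        key = (sum((a, b) == (0, 0) for a, b in zip(alpha, beta)),
               sum((a, b) == (1, 1) for a, b in zip(alpha, beta)))
        W[key] = W.get(key, 0) + 1
    return tuple(sorted(W.items()))

n = 3
edges = list(combinations(range(n), 2))
reps = set()
for choice in product((0, 1), repeat=len(edges)):
    B = [[0] * n for _ in range(n)]
    for (i, j), on in zip(edges, choice):
        B[i][j] = B[j][i] = on
    reps.add(rep(B, n))
# the 8 graphs on 3 vertices produce 4 distinct REPs
assert len(reps) == 4
```

At n = 7 the same enumeration first produces a collision between inequivalent classes, the pair (6.38) discussed above.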

n = 12
The theoretical fit of the numerical bootstrap constraint on the spectral gap, ∆ ≤ (n + 4)/8, depicted as a dashed line in Fig. 2, seems to suggest that the celebrated Leech lattice, the unique self-dual lattice in d = 24 with no vectors of length-squared ℓ² < 4, will make an appearance at n = 12, saturating the bound. But this is not the case. First, the numerical bound on the spectral gap is close to, but strictly smaller than, ∆ = 2 [10]. Second, the Leech lattice understood as a self-dual Lorentzian lattice is odd, see Appendix B. This leaves the possibility that the Leech lattice defines some special non-chiral fermionic CFT with a large spectral gap, a question we leave for the future.
The largest achievable binary Hamming distance for real self-dual codes with n = 12 is d_b = 6. It corresponds to a spectral gap ∆ = d_b/4 = 3/2. As we have already mentioned, the Construction A lattice Λ(C) of any stabilizer code necessarily has vectors of length-squared ℓ² = 2, which limits the spectral gap to ∆ ≤ 1. Nevertheless, in certain cases one can apply a twist by a half lattice vector δ to attain a larger spectral gap. For the twisted lattice to remain even, δ² should be odd. Assuming the all-ones vector 1 is one of the codewords, when n/4 is odd, e.g. for n = 12, a twist by δ = 1/(2√2) yields a new even self-dual Lorentzian lattice, whose corresponding Narain CFT has spectral gap ∆ = d_b/4. The Siegel theta-function, and hence the partition function of the corresponding CFT, is given by (3.45). This procedure is universal and can be applied to any code whose REP includes the term y^12.
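The parity condition on δ² can be seen in one line. A short derivation (assuming the twist construction of Section 2.1, in which the twisted lattice keeps λ ∈ Λ with λ · 2δ even and adds λ + δ for λ · 2δ odd):

```latex
(\lambda + \delta)^2 \;=\; \lambda^2 + 2\,\lambda\cdot\delta + \delta^2 .
```

In the shifted sector 2λ·δ is odd, and λ² is even because Λ is even; hence (λ + δ)² is even precisely when δ² is odd.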
The odd Leech lattice can be understood as an odd self-dual Lorentzian lattice, which means that the generator of the h_24^+ code can be brought to the canonical form (2.15) with a symmetric B in which not all B_ii are zero (see footnote 8). Going back to the stabilizer codes from the third class (6.49), see Fig. 12: they correspond to an even self-dual Lorentzian lattice which, understood as a Euclidean lattice, is isodual and isospectral with the Construction A lattice of h_24^+. If we apply a twist with δ = 1/(2√2), we obtain an even self-dual Lorentzian lattice which, as a Euclidean lattice, is isodual and isospectral with the odd Leech lattice. Its Siegel theta-function is given by (3.45) and reduces to (6.51) along the diagonal τ̄ = −τ. The Narain CFT defined by this lattice has spectral gap ∆ = d_b/4 = 3/2. It should also be noted that codes from the two other classes, via the same twist procedure, also lead to Narain theories with ∆ = d_b/4 = 3/2.
Finally, returning to the Golay code: its matrix B in the canonical form (B.1) is symmetric, but B_ii ≠ 0. This means that the Golay code can be interpreted as a self-dual stabilizer code which is not real. We can use code equivalences to find C-equivalent real self-dual codes. There are three classes of T-equivalent codes (strictly speaking, at least three, see footnote 7): one has d = 4 and d_b = 6 (and can be used to construct a Narain CFT with ∆ = 3/2 via the twist construction), while the other two have a more modest d = d_b = 4. A curious observation here is that the matrix B of (B.1) with all diagonal elements set to zero gives rise to a stabilizer code which shares the same REP with one of the d = d_b = 4 classes mentioned above. This seems to indicate that for this matrix B, C-equivalence can simply remove all non-zero diagonal matrix elements while leaving everything else intact. This is an unusual situation, and it would be interesting to describe the class of matrices B for which this is possible.

Footnote 7: Strictly speaking, we should say at least three, as potentially there could be isospectral classes of T-equivalent codes which are C-equivalent but not T-equivalent to each other.

Footnote 8: If one defines h_24^+ using the generator matrix given in Fig. 12.1 of [45], the permutation {1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 20, 24, 15, 14, 13, 16, 19, 18, 17, 23, 22, 21, 8, 12} brings it to a form that is self-dual with respect to (3.17).

Conclusions
In this paper we have discussed a relation between quantum stabilizer codes, a particular class of quantum error-correcting codes, and a class of 2d conformal field theories.
The key ingredient in our construction is the relation between stabilizer codes and Lorentzian lattices, which is the subject of Section 3.2. Self-dual quantum stabilizer codes correspond to self-dual lattices, and real codes to even lattices. In this way, real self-dual codes define CFTs based on even self-dual lattices, which we call code theories. Basic properties of code CFTs are captured by the corresponding codes; in particular, the CFT partition function is given by the code's refined enumerator polynomial (4.23). Qualitatively, classical codes are related to Euclidean lattices and chiral CFTs. In this paper we have shown that quantum codes correspond to Lorentzian lattices and non-chiral CFTs.
Our main focus has been on self-dual codes and lattices. The space of stabilizer codes is discrete, but in our construction it is embedded in the continuous space of Lorentzian self-dual lattices. This is an essential difference from the case of classical codes, for which the space of Euclidean self-dual lattices is discrete. Within the Narain moduli space of all even self-dual Lorentzian lattices, we describe the set of real self-dual stabilizer codes as a group coset (4.36). There are other spin-off results which can be formulated without reference to CFTs. We have derived the Gilbert-Varshamov bound by averaging over all codes in canonical form, and calculated linear programming bounds on the largest binary Hamming distance, see Section 5. At the level of graphs, informed by the CFT interpretation, we outlined the importance of edge local complementation (ELC) equivalence classes, and classified all graphs on n ≤ 8 nodes, see Appendix E. Finally, we constructed an isodual non-integral lattice isospectral to the odd Leech lattice in Section 6.10.
Code theories form a subsector of Narain CFTs. T-duality transformations can map a code theory into another code theory, in which case the corresponding codes are necessarily equivalent in the code sense, as proved in Appendix D. Using T-duality transformations, one can always bring any code CFT to the form of a compactification on an n-dimensional cube of "unit" size with quantized B-flux, such that it is fully specified by a binary symmetric matrix B = B mod 2. The matrix B can be interpreted as a graph adjacency matrix. Thus code theories can be labeled by graphs, with the graphs of T-dual theories related by edge local complementation. By classifying all ELC equivalence classes of graphs on n ≤ 8 nodes, we have found all physically distinct code CFTs with central charge c = n ≤ 8.
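The ELC operation and the invariance of the code CFT spectrum are easy to check explicitly at small n. A minimal sketch (not from the paper; it assumes the standard pivot: for an edge (u, v), toggle all pairs between the three sets N(u)\(N(v)∪{v}), N(v)\(N(u)∪{u}), and N(u)∩N(v), then swap u and v); the REP, and hence the code CFT partition function, is unchanged:

```python
from itertools import product

def rep(B):
    # refined enumerator polynomial of the graph code with adjacency B:
    # codeword (alpha, B.alpha) contributes x^{#(0,0)} y^{#(1,1)} z^{rest}
    n = len(B)
    W = {}
    for alpha in product((0, 1), repeat=n):
        beta = [sum(B[i][j] * alpha[j] for j in range(n)) % 2 for i in range(n)]
        nx = sum((a, b) == (0, 0) for a, b in zip(alpha, beta))
        ny = sum((a, b) == (1, 1) for a, b in zip(alpha, beta))
        W[(nx, ny, n - nx - ny)] = W.get((nx, ny, n - nx - ny), 0) + 1
    return W

def pivot(B, u, v):
    # edge local complementation (pivot) on the edge (u, v)
    n = len(B)
    assert B[u][v] == 1
    G = [row[:] for row in B]
    Nu = {w for w in range(n) if B[u][w]} - {v}
    Nv = {w for w in range(n) if B[v][w]} - {u}
    A, Bs, C = Nu - Nv, Nv - Nu, Nu & Nv
    for X, Y in [(A, Bs), (A, C), (Bs, C)]:
        for a in X:
            for b in Y:
                G[a][b] ^= 1
                G[b][a] ^= 1
    perm = list(range(n))       # finally, swap the labels of u and v
    perm[u], perm[v] = v, u
    return [[G[perm[i]][perm[j]] for j in range(n)] for i in range(n)]

n = 5
cycle = [[1 if (i - j) % n in (1, n - 1) else 0 for j in range(n)] for i in range(n)]
assert rep(pivot(cycle, 0, 1)) == rep(cycle)
```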
Schematically, one can think of code theories as a particular "ansatz" which reduces modular invariance of the CFT partition function to a simple algebraic condition satisfied by a multivariate polynomial. In this way code theories provide a playground to probe several questions central to the conformal modular bootstrap program. Just as classical binary codes give rise to optimal sphere packings in certain dimensions, a particular code theory, which we dub non-chiral E_8 and which is the SO(8)_1 WZW theory in disguise, attains the maximal value of the spectral gap among Narain theories with central charge c = n = 4. This theory is based on the root lattice of E_8, understood as a Lorentzian even self-dual lattice, see Section 6.4. Other special lattices, in particular the odd Leech lattice, also make an appearance, see Section 6.10. A drastic reduction of the modular invariance constraints at the level of code theories gives us a multitude of examples of "fake" CFT partition functions: modular invariant non-chiral functions Z(τ, τ̄) which admit expansions in terms of U(1)^n × U(1)^n characters with positive integer coefficients (the first being 1), yet do not correspond to any known theory. The number of fake Z(τ, τ̄)'s quickly grows with c = n, which suggests one of two possibilities. It could be that these are not partition functions of any actual CFT, which would mean that persistent allowed regions in modular bootstrap exclusion plots might in fact be empty. Alternatively, these Z(τ, τ̄)'s might correspond to actual CFTs from some new sector, most likely related to a family of (non-additive) codes. This would mean that the notion of a code CFT could be extended to include these and perhaps other sets of theories. (We also mention that a completely analogous construction exists for classical binary codes, leading to examples of "fake" chiral CFT partition functions for c ≥ 24 divisible by 8, see Section 2.1.)
Finally, our analysis of stabilizer codes with small n ≤ 12 reveals a growing number of isospectral but physically inequivalent Narain CFTs. From the mathematical point of view these are examples of isospectral but non-isomorphic Narain lattices. The first such example appears for n = 7; it corresponds to a pair of isospectral even self-dual Lorentzian lattices in R^{7,7}, see Fig. 10. For chiral CFTs based on Euclidean lattices, the lowest-dimensional pair of isospectral theories are the E_8 × E_8 and Spin(32)/Z_2 lattice CFTs, corresponding to Milnor's example of isospectral even self-dual lattices in 16 dimensions. In contrast to the Euclidean case, where the next example occurs in 24 dimensions, there are many dozens of examples of isospectral c = n = 8 theories, with the number presumably growing rapidly for larger c = n.
Code CFTs may provide a useful framework for addressing the following two questions. The first is to understand the asymptotic behavior of the maximal spectral gap for Narain theories with c = n ≫ 1 [9,10]. At the level of code theories, the analog of the spectral gap is the binary Hamming distance d_b, which can be effectively studied using linear programming methods. It remains an open question, though, how to relate quantum codes with large d_b to Lorentzian lattices with a large shortest vector. To that end one needs to go beyond the Construction A lattices discussed in this paper, and introduce analogs of constructions B, C, etc. developed for classical codes [17]. The second question is the recently proposed holographic duality between averaged Narain theories and certain Chern-Simons theories in the bulk [10,14]. We have argued in Section 5 that the ensemble average over all code theories exhibits the same basic features as the average over the full Narain moduli space, suggesting a holographic interpretation. Thus code theories may provide an additional testbed to verify and study this duality.
There are several different ways in which classical codes may be associated with various chiral CFTs, both supersymmetric and not [3,5]. We expect the construction outlined in this paper to be perhaps the simplest, but not the only, scheme relating quantum codes to non-chiral CFTs. We have already mentioned a possible connection between self-dual albeit non-real stabilizer codes, associated with odd self-dual Lorentzian lattices, and fermionic CFTs. But we expect that many other constructions are possible. Perhaps the most important aspect of the relation between Euclidean lattices and chiral CFTs is that the former can be used to define consistent vertex operator algebras (VOAs). Thus the VOA associated with the Leech lattice, and its Monster orbifold, exhibit symmetries which go beyond the purely geometric symmetries of the lattice [2,93]. In light of our work, one of the immediate questions would be to study the symmetries of the non-chiral VOAs associated with code CFTs, possibly leading to a non-chiral moonshine theory.
Let us conclude with one more fundamental question: to what extent does the physical Hilbert space of a code theory exhibit quantum error-correcting properties related to, or inherited from, the associated codes? Here we have in mind various properties, including the "quantum error correction" necessitated by the emergence of locality in the bulk [12] or related to the large N limit [94], quantum information properties of CFT ground states [95], and probably many others.
Acknowledgments
This work was supported in part by the European Union's Horizon 2020 research and innovation program (QUASIFT grant agreement 677368).

A E 7 and E 8 lattices and codes
In this appendix we show that the root lattices of the Lie algebras E_7 and E_8 are isomorphic to the Construction A lattices of the Hamming [7,3,4] code and the extended [8,4,4] code e_8. We start with the case of E_8, as it is more symmetric and simpler. For E_8 we also discuss the equivalence of different Lorentz-signature metrics and the relation of the non-chiral E_8 theory to the theory of eight free Majorana fermions.

A.1 E 8
The root lattice of the D_n series is the "checkerboard" lattice of integer vectors (x_1, ..., x_n) ∈ Z^n whose coordinates sum to an even number, Σ_i x_i mod 2 = 0. We denote it by D_n. The vector δ = 1/2 (half the all-ones vector) does not belong to the lattice, but when n is even, 2δ does. In a procedure similar to the twist described in Section 2.1, we can define a new lattice D_n ∪ (D_n + δ), where D_n + δ is defined as in (2.38). For n = 8 this lattice is the root lattice of the algebra E_8. It includes vectors (x_1, ..., x_n) whose components x_i are either all integer or all half-integer, with an integer and even sum. One can choose a generator matrix Λ_E8 such that the Gram matrix is the Cartan matrix of E_8. The generator matrix Λ_E8 is of course very different from the generator of the Construction A lattice Λ(e_8) associated with the Hamming [8,4,4] code; the latter is given by (2.17) with the matrix B given by (2.18), and we denote it by Λ_e8. The lattices generated by Λ_E8 and Λ_e8 are not identical but isomorphic, which means there is a rotation O ∈ O(8) and a matrix Z ∈ GL(8, Z) such that (A.4) holds. Finding O and Z directly from (A.4) is difficult; indeed, Wikipedia calls the task of finding the explicit isomorphism "not entirely trivial." The following trick saves the day. There are 240 roots, i.e. vectors of length-squared ℓ² = 2, which can be written explicitly in both representations; in particular, all columns of Λ_e8 are roots. We consider their Gram matrix (A.5). Our goal is to choose 8 roots from the list of 240 roots of Λ_E8 such that their matrix of scalar products is given by (A.5). The procedure is iterative. Using computer algebra we calculate the 240 × 240 matrix of scalar products. The first root is chosen at will. The second root is chosen at will from the set of roots which have the desired scalar product with the first one. The third is chosen from those which have the desired scalar products with the first two, and so on.
The procedure is not guaranteed to succeed (at some step there may be no vector with the desired properties), but since the lattice has many symmetries it works well in practice. Once roots with the scalar products (A.5) are found, one can choose them as generators of the lattice, which will be related to Λ_E8 by an appropriate GL(8, Z) transformation; that transformation is the desired matrix Z. Once Z is known, O follows from (A.4). Alternatively, one can take another route and "guess" (A.6). Once O is known explicitly, it can be checked straightforwardly that O is orthogonal and solves (A.4) with some appropriate Z.
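The iterative procedure is easy to automate. Below is a sketch (not the paper's code) in which the greedy search is replaced by a small backtracking search and, since (A.5) is not reproduced here, the target Gram matrix is taken to be the E_8 Cartan matrix, with one assumed choice of node ordering:

```python
from itertools import combinations, product

def e8_roots():
    """All 240 roots of E8, in coordinates scaled by 2 (so entries are ints)."""
    roots = []
    # 112 integer roots: permutations of (+-1, +-1, 0^6), scaled to (+-2, +-2, 0^6)
    for i, j in combinations(range(8), 2):
        for si, sj in product((2, -2), repeat=2):
            v = [0] * 8
            v[i], v[j] = si, sj
            roots.append(tuple(v))
    # 128 half-integer roots: (+-1/2)^8 with an even number of minus signs
    for signs in product((1, -1), repeat=8):
        if signs.count(-1) % 2 == 0:
            roots.append(signs)
    return roots

def dot(u, v):
    # inner product in the original (unscaled) normalization
    return sum(a * b for a, b in zip(u, v)) // 4

# E8 Cartan matrix for one node ordering: A7 chain 1..7, node 8 attached to node 5
C = [[2 if i == j else 0 for j in range(8)] for i in range(8)]
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (4, 7)]:
    C[a][b] = C[b][a] = -1

def find_basis(roots, chosen=()):
    # backtracking search for 8 roots whose Gram matrix equals C
    k = len(chosen)
    if k == 8:
        return list(chosen)
    for r in roots:
        if all(dot(r, chosen[i]) == C[i][k] for i in range(k)):
            res = find_basis(roots, chosen + (r,))
            if res:
                return res
    return None

basis = find_basis(e8_roots())
gram = [[dot(u, v) for v in basis] for u in basis]
assert gram == C
```

The same search, with (A.5) as the target instead of the Cartan matrix, produces the matrix Z described in the text.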
The lattice E_8 is even and self-dual, which follows from all diagonal matrix elements of (A.5) being even, while the matrix is integer with determinant 1. Curiously, E_8 is also even and self-dual with respect to the Lorentz-signature metric (3.17). Indeed, Λ_e8^T g Λ_e8 is an integer matrix of unit determinant, from which it follows that the lattice is self-dual; it is also even because all diagonal elements of this Gram matrix are even. (Alternatively, one can flip signs in B to make it antisymmetric. The lattice would remain the same, but now Λ_e8 would be an orthogonal matrix from O(4, 4, R), which guarantees that the lattice is even and self-dual.) In Section 6.4 we used E_8, understood as a Lorentzian lattice, to define the "non-chiral E_8" Narain CFT. An immediate check reveals that the lattice generated by Λ_E8 is also even and self-dual with respect to the same metric g; we leave the exercise of calculating Λ_E8^T g Λ_E8 to the reader. This is curious, because it means the lattice generated by Λ_e8 is even and self-dual with respect to both metrics, g and a second metric η. One can immediately ask: what is the Narain CFT defined with the help of η? It turns out this is the same theory, because of a lattice symmetry. We consider an orthogonal transformation T of the form (A.9), where O_L, O_R ∈ O(4, R) and the 8 × 8 block matrix performs the transformation (4.10). Then T is a symmetry of g, T^T g T = g. (In physics terms, the transformation O_L × O_R is the part of the T-duality group which rotates p_L and p_R.) Accordingly, the orthogonal matrix S = T O relates the two metrics, and it turns out that for a particular choice of T, S is a symmetry of the lattice. Therefore the Narain CFTs defined with the lattice Λ(e_8), understood as a Lorentzian lattice with metric g or η, are T-dual to each other. More generally, the E_8 lattice has a rich group of symmetries, most of which do not respect the Lorentzian metric, "rotating" it into a new one. Narain CFTs defined with any choice of the Lorentzian metric are physically equivalent to each other.
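The reader's exercise can be automated. A sketch (not from the paper), using one standard choice of E_8 basis (simple-root-type vectors plus the half-vector, an assumption; any GL(8, Z)-equivalent basis works) and assuming for (3.17) the off-diagonal Lorentzian metric g = ((0, I_4), (I_4, 0)):

```python
from fractions import Fraction as F

half = F(1, 2)
# a standard basis of the E8 root lattice (each row is a lattice vector)
basis = [
    [1, -1, 0, 0, 0, 0, 0, 0],
    [0, 1, -1, 0, 0, 0, 0, 0],
    [0, 0, 1, -1, 0, 0, 0, 0],
    [0, 0, 0, 1, -1, 0, 0, 0],
    [0, 0, 0, 0, 1, -1, 0, 0],
    [0, 0, 0, 0, 0, 1, -1, 0],
    [0, 0, 0, 0, 0, 1, 1, 0],
    [-half] * 8,
]

def lorentz_dot(u, v):
    # pairing by the off-diagonal metric g coupling coordinates k and k+4
    return sum(u[k] * v[k + 4] + u[k + 4] * v[k] for k in range(4))

M = [[lorentz_dot(u, v) for v in basis] for u in basis]

def det(m):
    # Laplace expansion along the first row; exact with Fractions
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# integrality, even diagonal, and unimodularity imply even self-duality
assert all(x == int(x) for row in M for x in row)
assert all(M[i][i] % 2 == 0 for i in range(8))
assert det(M) in (1, -1)
```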
Finally, we discuss the equivalence of the non-chiral E_8 theory with the theory of eight free fermions with the diagonal GSO projection. The fermions can be bosonized, leading to a toroidal compactification on the D_4 root lattice of SO(8) [96]. The generator matrix γ_D4 is chosen such that the upper-triangular parts of γ_D4^T γ_D4 and γ_D4^T B γ_D4 coincide, leading to SO(8) × SO(8) global symmetry [97-99]. The resulting Lorentzian lattice with generator (A.16) describes the SO(8)_1 WZW theory as a Narain CFT. It is related to the lattice generated by Λ_e8 by a T-duality transformation (A.9) with either O_L or O_R flipping the sign of one arbitrary coordinate.

A.2 E 7
The root lattice E_7 can be defined via a generator matrix Λ_E7 such that the Gram matrix is the Cartan matrix of the E_7 Lie algebra, (A.18). The generator matrix of the Hamming [7,3,4] code is the transpose of (2.7), and the generator matrix Λ_e7 of the Construction A lattice of this code can be chosen accordingly. To match the lattice generated by Λ_e7 with the one generated by Λ_E7, we employ a procedure analogous to that of the previous section. We construct the 126 roots of the code lattice, which include 14 vectors of the form (±2, 0^6)/√2 (and permutations) and 2^4 × 7 vectors obtained from the 7 codewords of Hamming weight 4. We then calculate the 126 × 126 matrix of scalar products and choose roots one by one such that their scalar products equal (A.18). The process is not guaranteed to succeed, and in practice we had to experiment with a few different candidates for the fifth vector before the process could be completed. Once those roots are identified, we solve a system of linear equations to find a matrix Z^{-1} ∈ GL(7, Z) which expresses those roots in terms of Λ_e7. After that, an orthogonal matrix O satisfying the analog of (A.4) is readily found.

B Golay code and Leech lattice
The binary extended [24,12,8] Golay code g_24 can be defined using a generator matrix in the canonical form (2.15) with B given by (B.1). That this is a self-dual code follows from B B^T = I, understood over GF(2). Alternatively, one can define the generator matrix of the Construction A lattice Λ(g_24) and check that Λ^T Λ is integer, unimodular, and with even numbers on the diagonal. The Leech lattice can be obtained from Λ(g_24) by applying the twist (2.39) with the vector δ = 1/(2√2), with 1 the all-ones vector. We have seen in Section A.1 that the E_8 lattice can be understood as a Lorentzian even self-dual lattice. It can be used to define a non-chiral CFT with the largest spectral gap ∆ = 1 for the given value of the central charge (and U(1)^4 × U(1)^4 symmetry). This extremal property can be traced to the lattice E_8 being the optimal sphere packing in 8 Euclidean dimensions, with the spectral gap specified by the maximal possible length of the shortest lattice vector, 2∆ = ℓ² = 2. Given that the Leech lattice yields the optimal sphere packing in 24 dimensions, with shortest vector of length-squared ℓ² = 4, if it could be interpreted as a Lorentzian lattice it would lead to a non-chiral theory with spectral gap ∆ = 2. It has recently been shown using the numerical modular bootstrap that the spectral gap of all theories with n = 12 (and U(1)^12 × U(1)^12 symmetry) is strictly smaller than 2 [10]. This indirectly proves that the Leech lattice is not an even self-dual lattice for any Lorentzian metric of R^{n,n} signature. Here we provide an independent and more explicit consideration, underscoring the difference between the Leech lattice and E_8.
Our starting point is the Golay code g_24. If we could interpret it, via the Gray map, as a real self-dual stabilizer code, that would immediately show that Λ(g_24) is an even self-dual Lorentzian lattice. Applying the twist with the same δ = 1/(2√2) would then immediately yield the Leech lattice, now as a Lorentzian even self-dual lattice. In other words, we would like to interpret the generator matrix G^T = (I | B) of the binary Golay code as the generator matrix of a real stabilizer code. For that we need B = B^T, which is satisfied, but also B_ii = 0, which is not. In other words, understood as a stabilizer code, the Golay code is self-dual but not real. Therefore the corresponding lattice Λ(g_24), understood as a Lorentzian lattice, is self-dual but odd (one can check that Λ_{g_24}^T g Λ_{g_24} is an integer unimodular matrix with odd numbers on the diagonal). Proceeding to define the Leech lattice via the δ-twist would yield an odd self-dual lattice.
One may wonder if one can use code equivalences to define a new code with B symmetric and B_ii = 0. The transformations of B include permutations B → B O_p, as well as the transformation (B.3) (compare with (4.39)), where all algebra is over GF(2). It is assumed in (B.3) that the sub-matrix b_11 is non-degenerate. The transformed matrix is not necessarily symmetric and may have non-zero diagonal elements, but if B = B^T and B_ii = 0, then (B.3) respects this property. Therefore, if we hope to bring (B.1) to the form B = B^T, B_ii = 0, we must do so solely using permutations B → B O_p. It is easy to see that this is not possible.
To summarize, the Leech lattice, as a Lorentzian lattice, is self-dual and odd.

C Any Narain CFT is a toroidal compactification
We want to show that, using symmetries of the physical theory, namely $O(d) \times O(d)$ transformations, any even self-dual Lorentzian lattice (the so-called Narain lattice) can be brought to the form (4.11). Our starting point is equation (4.21), which states that any Narain lattice can be obtained from the cubic lattice with generator matrix $\mathrm{I}$ by an appropriate transformation from $O(d,d)$. Let us denote the first $d$ vectors (columns) of the generator matrix $\mathrm{I}$ by $u_i$ and the last $d$ vectors (columns) by $\tilde u_i$. They satisfy $$u_i \cdot u_j = 0, \qquad \tilde u_i \cdot \tilde u_j = 0, \qquad u_i \cdot \tilde u_j = \delta_{ij}. \qquad (C.1)$$ Since transformations from $O(d,d)$ leave the metric invariant, we can say that an arbitrary Narain lattice $\Lambda$ is generated by $2d$ vectors $u_i, \tilde u_j$ satisfying (C.1). Let us start with $u_1$. It is a null vector, $|u_1|^2 = 0$, and therefore if we represent it in coordinates as $u_1 = (\vec k^1_L, \vec k^1_R)$, the vectors $\vec k^1_L$ and $\vec k^1_R$ have the same length. Using a transformation from $O(d)$ we can bring $\vec k^1_R$ to equal $\vec k^1_L$ (both will be denoted simply by $\vec k^1$). Next we consider the vector $u_2 = (\vec k^2_L, \vec k^2_R)$. For the same reason $|\vec k^2_L| = |\vec k^2_R|$, and moreover $u_1 \cdot u_2 = 0$ implies $\vec k^1 \cdot \vec k^2_L = \vec k^1 \cdot \vec k^2_R$. By an orthogonal transformation in the directions orthogonal to $\vec k^1$ we can make $\vec k^2_L = \vec k^2_R = \vec k^2$. Continuing this logic, we find $u_i = (\vec k^i, \vec k^i)$ for all $i$. We can repeat the same procedure for the vectors $\tilde u_i$, but in this case the remaining orthogonal transformations must preserve the form of the $u_i$; since $\tilde u_i \cdot \tilde u_j = 0$ implies that the left and right components have the same inner products, they are related by an isometry, bringing the $\tilde u_i$ to the form $\tilde u_i = (\vec{\tilde k}^i, O\, \vec{\tilde k}^i)$, where $O \in O(d)$. We can find an orthogonal matrix $Q$ satisfying $Q^2 O = -I$; after the diagonal transformation $Q \times Q \in O(d) \times O(d)$ and a trivial redefinition of $\vec k^i, \vec{\tilde k}^i$ we obtain $u_i = (\vec k^i, \vec k^i)$ and $\tilde u_i = (Q\, \vec{\tilde k}^i, -Q^T \vec{\tilde k}^i)$. The last step is to impose $u_i \cdot \tilde u_j = \delta_{ij}$. The vectors $\vec k^i$ define a lattice, which we can take to be $\Gamma^*$.
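The starting point can be illustrated numerically. The sketch below (added for illustration) builds a generic element of $O(d,d)$ from two standard generators, a $GL(d)$ block and an antisymmetric shift, applies it to the cubic lattice generator $\mathrm{I}$, and verifies that the resulting basis vectors satisfy (C.1):

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)
I, Z = np.eye(d), np.zeros((d, d))
eta = np.block([[Z, I], [I, Z]])  # O(d,d) metric in the u, u~ basis

# Two standard O(d,d) elements: a GL(d) block and an antisymmetric shift
P = rng.normal(size=(d, d)) + 3 * I              # generic invertible matrix
gl = np.block([[P, Z], [Z, np.linalg.inv(P).T]])  # preserves eta
A = rng.normal(size=(d, d))
A = A - A.T                                       # antisymmetric
shift = np.block([[I, Z], [A, I]])                # preserves eta iff A = -A^T

M = gl @ shift        # generator matrix of a Narain lattice, M = O(d,d) element times I
G = M.T @ eta @ M     # Gram matrix of the columns u_i, u~_j

# Columns satisfy (C.1): u_i.u_j = 0, u~_i.u~_j = 0, u_i.u~_j = delta_ij
print(np.allclose(G, eta))  # True
```

The invariance $M^T \eta M = \eta$ is exactly the statement that the basis vectors of any such lattice obey the same inner products (C.1) as those of the cubic lattice.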
Imposing $u_i \cdot \tilde u_j = \delta_{ij}$, the vectors $\vec{\tilde k}^i$ satisfy $\vec k^i \cdot (Q + Q^T)\, \vec{\tilde k}^j = \delta_{ij}$. Therefore the vectors $\vec e^i = (Q + Q^T)\, \vec{\tilde k}^i$ form the lattice $\Gamma$, which is dual to $\Gamma^*$, and the antisymmetric matrix $B$ from (4.8) is read off from the antisymmetric part $(Q - Q^T)/2$.

D T-duality as code equivalence

We also remind the reader that $\alpha^i, \beta^i$ are equivalent (they define the same code and the same lattice) upon shifting components by an even number, $G \sim G + 2\tilde G$, $\tilde G \in \mathrm{Mat}(2n, n, \mathbb{Z})$.

Another way to represent (D.2) is (D.4). The diagonal part has already been discussed in Section 4, and we only need to analyze $(O_p \times I)$. For the vectors $\vec p_L, \vec p_R$ to correspond to a code lattice, the $i$-th components of $\vec p_L$ and $\vec p_R$ must be simultaneously integer or half-integer. Since the transformation $(O_p \times I)$ leaves $\vec p_R$ invariant, the permutation $O_p$ may only reshuffle integer or half-integer components of $\vec p_L$ with each other. In other words, provided there is a subset $w \subseteq \{1, \dots, n\}$ such that for all $n$ codewords $(\alpha^i, \beta^i)$ all components $p^k_{L,i}$, $k \in w$, are simultaneously integer or half-integer, $2 p^k_{L,i} = 2 p^l_{L,i} \mod 2$ for $k, l \in w$, then $O_p$ can be an arbitrary permutation of indexes within $w$. For simplicity we can assume $w$ includes the first $k$ indexes, in which case all generators $g$ are of the form (D.7), where all $\nu^i_l$ for $1 \le l \le k$ are even or odd. If, for the given $i$, all $\nu^i_l$ are odd, the vector $\vec p_{L,i} = (\underbrace{1/2, \dots, 1/2}_{k}, \dots)$ and $O_p$ acts on it trivially. Otherwise, when all $\nu^i_l$ are even, the first $k$ components of $\vec p_{L,i}$ are either zeros or ones, and they are reshuffled by $O_p$. Going back to the generator (D.7): in the first case the generator remains invariant, while in the second case the first $k$ matrices are either $I$ or $\sigma^y$, and they are reshuffled by $O_p$.
If we now take a particular $g_i$ such that the first $k$ matrices are either $I$ or $\sigma^y$ and reshuffle them, the new vector will trivially commute with all $g_j$, provided $g_i$ did. Hence the reshuffled $g_i$ belongs to the code, since the code is self-dual. We therefore conclude that any transformation of the form $O_p \times I$ which maps a code (lattice) into another code (lattice) is in fact a symmetry of that code (lattice).
For a transformation to map a given code into another code, the two codes must be equivalent in the code-equivalence sense. So far we have only considered transformations of the form (D.8), which is too restrictive. Going back to (D.4), (D.5) and taking into account that $\alpha, \beta$ can be shifted by arbitrary even-valued vectors, $\alpha \to \alpha + 2\vec a$, $\vec a \in \mathbb{Z}^n$, while $\vec p_L, \vec p_R$ must always be integer or half-integer, we immediately conclude that all matrix elements of $O_L$ and $O_R$ are integer or half-integer. Since $O_{L,R}$ are orthogonal, if $(O_{L,R})_{kl}$ is a non-zero integer it must be equal to $\pm 1$, and then all other components of the $k$-th row and the $l$-th column must be zero. Because of the symmetry (D.3), all components of $2Q$ must be integer. Therefore, if $O_L$ is integer, so must be $O_R$, and if $O_L$ is half-integer, so must be $O_R$. Finally, if $(O_{L,R})_{kl}$ is half-integer, it is equal to $\pm 1/2$, and there are three other components in the $k$-th row and three in the $l$-th column of $O_{L,R}$ which must also be equal to $\pm 1/2$. Combining all this together, and using diagonal transformations $O \times O$, $O \in O(n, \mathbb{Z})$, which map codes into equivalent codes, we can always bring $O_{L,R}$ to a block-diagonal form with each block being either: i) a $4 \times 4$ matrix with all elements equal to $\pm 1/2$, or ii) an orthogonal matrix from $O(k, \mathbb{Z})$, $k \le n$. Both $O_L$ and $O_R$ must have the same block structure.
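The existence of the half-integer blocks is easily exhibited: a rescaled $4 \times 4$ Hadamard matrix (our example; any such block works) is orthogonal with all entries $\pm 1/2$. The counting behind "three other components" is also visible here: a unit-norm row with entries in $\{0, \pm 1/2, \pm 1\}$ that contains one half-integer entry must contain exactly four entries $\pm 1/2$, since $4 \times 1/4 = 1$.

```python
import numpy as np

# A 4x4 orthogonal matrix with all entries +-1/2 (a rescaled Hadamard matrix)
H = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]])

print(np.allclose(H @ H.T, np.eye(4)))  # True: rows are orthonormal
print(np.all(np.abs(H) == 0.5))         # True: every entry is +-1/2
```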
If $O_{L,R}$ have no half-integer blocks, this is the case of (D.8) considered above. In what follows we assume $O_{L,R}$ have at least one half-integer $4 \times 4$ block which, without loss of generality, can be taken to be located in the upper-left corner.

E Graphs on $n \le 8$ nodes

Our results are summarized as Wolfram Mathematica lists in the file graphs8 available here. It contains one variable, ELiELCiI, which is a nested list of lists. It has 8 components, containing information about LC equivalence classes split into ELC equivalence classes, which in turn split into graph isomorphism equivalence classes, for graphs on $1 \le n \le 8$ nodes. ELiELCiI[[n]] is a list of $t^{LC}_n$ elements; the first $i^{LC}_n$ correspond to decomposable graphs, the last $t^{LC}_n - i^{LC}_n$ to indecomposable graphs, see Table 2. For each $1 \le i \le t^{LC}_n$, ELiELCiI[[n, i]] is a list, with each entry corresponding to a particular ELC equivalence class within the given LC equivalence class. Each element ELiELCiI[[n, i, j]] is in turn a list, with each entry corresponding to a graph isomorphism equivalence class within the given ELC equivalence class. Each graph isomorphism equivalence class is labeled by the maximal number $k$ (E.1) among all numbers associated with graphs within this class.

Table 3. The number of inequivalent graphs on $n$ nodes, $t^I_n$ (the number of graph isomorphism equivalence classes), for $n \le 12$, and the number of inequivalent indecomposable graphs, $i^I_n$; these are the integer sequences A000088 and A001349, respectively.
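For readers who prefer to process the data outside Mathematica, the following hypothetical sketch mirrors the nesting of ELiELCiI[[n]] on made-up toy data (the actual contents live in the graphs8 file); the three nesting depths count LC classes, ELC classes, and graph isomorphism classes:

```python
# Hypothetical structure mirroring ELiELCiI[[n]] after converting the
# Mathematica lists to Python (e.g. replacing {...} with [...]).
# The numbers below are made up for illustration; innermost entries play
# the role of the labels k of (E.1).
toy_ELiELCiI_n = [            # one LC equivalence class per entry
    [[3, 7], [11]],           # LC class 1: two ELC classes, with 2 + 1 isomorphism classes
    [[5]],                    # LC class 2: one ELC class with 1 isomorphism class
]

t_LC = len(toy_ELiELCiI_n)                                   # number of LC classes
t_ELC = sum(len(lc) for lc in toy_ELiELCiI_n)                # number of ELC classes
t_I = sum(len(elc) for lc in toy_ELiELCiI_n for elc in lc)   # number of isomorphism classes
print(t_LC, t_ELC, t_I)  # 2 3 4
```

Summing these counts over the real data should reproduce the totals quoted in Tables 1 and 3.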
A simple consistency check confirms the correct numbers of ELC classes, $t^{ELC}_n$ and $i^{ELC}_n$, see Table 1, and the correct number of graph isomorphism classes, see Table 3.