Subsystem complexity and holography

As a probe of circuit complexity in holographic field theories, we study subsystem analogues, based on the entanglement wedge, of the bulk quantities appearing in the “complexity = volume” and “complexity = action” conjectures. We calculate these quantities for one exterior region of an eternal static neutral or charged black hole in general dimensions, dual to a thermal state on one boundary without or with chemical potential respectively, as well as for a shock wave geometry. We then define several analogues of circuit complexity for mixed states, and use tensor networks to gain intuition about them. In the action approach, we find two possible cases depending on an ambiguity, associated with a counterterm, in the definition of the action. In one case, there is a promising qualitative match between the holographic action and what we call the purification complexity, the minimum number of gates required to prepare an arbitrary purification of the given mixed state. In the other case, the match is to what we call the basis complexity, the minimum number of gates required to prepare the given mixed state starting from a minimal-complexity state with the same eigenvalue spectrum. One way to fix this ambiguity is to choose a definition of the action such that the UV divergent part is positive, in which case the best match to the action result is the basis complexity. In contrast, the holographic volume does not appear to match any of our definitions of mixed-state complexity.

There has been much recent progress in understanding how spacetime emerges from field theory degrees of freedom within the AdS/CFT correspondence. Considerations involving entanglement [1][2][3][4][5][6], quantum error correction [7], and other ideas from quantum information science have provided new clues concerning the emergence of the classical bulk geometry as well as the reconstruction of approximately local quantum fields in the bulk [8][9][10][11].
Tensor networks provide one set of toy models that instantiate many of the features of AdS/CFT [12] and that can also describe in detail the physics of more conventional systems [13,14]. Motivated by these tensor network models and by considerations involving the dynamics of black hole interiors, it was proposed that the quantum computational complexity of the boundary field theory state would also be encoded geometrically in the dual gravitational spacetime [15][16][17][18].
To be more specific, in the context of the eternal AdS-Schwarzschild black hole it was observed that the wormhole which connects the two sides grows linearly with time, say as measured by the length of a geodesic stretching through the wormhole [15,19]. One can then ask what the CFT dual of this linear growth is. The conjecture is that the growth of the wormhole is dual to the growth of the complexity of the dual CFT state. Roughly speaking, the complexity of the CFT state is the minimum number of simple unitaries or "gates" needed to prepare the CFT state from a fixed reference state.
The physical picture is that the complexity of a state can increase due to Hamiltonian time evolution, and it is this increase of complexity that is dual to the late-time growth of the interior. Tensor network models again provide a concrete instantiation of complexity on the CFT side, with the complexity being defined as the number of tensors in the minimal network that describes the state. On the field theory side, one of the key open questions is how to define complexity more precisely. On the gravity side, one of the key issues is how to differentiate between different bulk proposals, including "complexity equals volume" (CV) [15,16], "complexity equals action" (CA) [17,20], and others [21]. There are by now a large number of papers developing and extending these ideas [22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37].
In this paper, we consider the problem of defining and evaluating complexity for subsystems of the CFT. To define subsystem complexity on the CFT side, we face the problem of defining complexity for mixed states, since subsystems will generically not be in pure states. On the gravity side, we must find suitable geometric measures which combine the global complexity measures with the physics of subsystem duality in AdS/CFT. Holographic subsystem complexity has recently been studied in a variety of works, with several proposals analogous to CV and CA being advanced and studied [22, 38-42].
This study was motivated by a desire to better understand the relationships between CA and CV duality and to subject the basic idea of a complexity/geometry duality to a new set of tests. We also wanted to gain insight into the way different subregions of the bulk might be represented in tensor network models.
Our main contributions are as follows. First, we study the analogues of CA and CV duality for simple subregions of eternal black holes. This study includes calculations of actions and volumes for a variety of black holes as a function of dimension, temperature, and charge. Second, we define a variety of measures of mixed-state complexity and compare our definitions to the CA and CV calculations. This analysis complements and extends various discussions given in the holographic literature. Amongst the definitions of mixed-state complexity that we consider, we find that CA duality reasonably accords with one of two different definitions, depending on an apparently arbitrary choice in the definition of the action, but that CV duality is harder to consistently reconcile with our notions of subsystem complexity. To define the holographic quantities more precisely, recall that there are reasons to believe that the reduced density matrix for a spatial region A in a holographic theory is encoded in the corresponding entanglement wedge E_A [7, 43-46]. Combining this observation with the CV and CA proposals for holographic complexity leads one to consider two bulk quantities that can be defined for a given region A: the volume C_V(A) of a maximal Cauchy slice of E_A anchored on A, and the action C_A(A) of the Wheeler-de Witt (WdW) patch W_A of E_A associated to A.¹ W_A is defined as the set of points in E_A that are spacelike- or null-related to A (i.e. not in I^+(A) ∪ I^−(A)), or equivalently as the intersection of E_A with the WdW patch W of any complete boundary slice containing A.² There are many possible situations in which we could study the above volume and action quantities. For definiteness, in this paper we focus on two-sided (neutral and charged) static AdS black holes, with the subsystem L consisting of a constant-time slice of one boundary. The state of L is thus a Gibbs state (with or without a chemical potential).
The entanglement wedge E_L of one boundary for a static black hole consists of the corresponding exterior region. Figure 1 illustrates E_L, the WdW patch W for a complete boundary slice, and their intersection W_L. In section 2, we calculate C_A(L) and C_V(L) for neutral black holes in D ≥ 3 and charged black holes in D ≥ 4. Note that the corresponding Gibbs states are static, reflecting the fact that the bulk isometry generated by the timelike Killing vector of the exterior region relates different constant-time slices of the boundary. Therefore the quantities C_A(L) and C_V(L) are time-independent. This is in contrast to the corresponding quantities for the full system, whose state, the thermofield double, is not static. We also compute C_A(L) for a thermalizing system dual to a shock wave geometry, finding that at late times it grows linearly at a rate 2M, just as for the thermofield double.

¹ These prescriptions for holographic subsystem complexity were first suggested (to our knowledge) in [38] and [22] for C_V(A) and C_A(A) respectively.
² The equivalence between the two definitions of the subsystem WdW patch can be shown as follows. The first definition is W¹_A := E_A \ (I^+(A) ∪ I^−(A)). Let σ be a boundary Cauchy slice containing A. Then W is the complement of I^+(σ) ∪ I^−(σ), so the second definition is W²_A := E_A \ (I^+(σ) ∪ I^−(σ)). Clearly I^±(A) ⊆ I^±(σ), so W¹_A ⊇ W²_A. For the other direction, let x be a point in W¹_A. Since x ∈ E_A, it is not timelike-related to E_{σ\A} [45], and in particular is not timelike-related to σ \ A. Since x is also not timelike-related to A, it is not timelike-related to σ, and is therefore in W²_A.

JHEP02(2019)145
In section 3, we define various measures of complexity for subsystems as well as for general mixed states (i.e. mixed states considered independently of any particular purification). In this discussion, we assume that some notion of pure-state complexity has been previously defined. This notion, however, can be used in different ways to define the complexity of a mixed state ρ. For example, we can consider a purification of ρ and evaluate its complexity, or we can decompose ρ into an ensemble of pure states and average their complexities. Other choices in the definition include whether to include ancilla degrees of freedom. By estimating the value of each measure on Gibbs states, and in particular its relation to the entropy, and comparing to the results obtained in section 2, we are able to rule out some of the proposed definitions as being related to either C_A or C_V. On the other hand, depending on how the action is defined, we find a promising qualitative match between C_A and either the purification complexity or the basis complexity. Roughly speaking, these are the minimum complexity of any purification of the given mixed state and the minimum complexity needed to prepare the basis of the given mixed state, respectively.
The appendices contain certain details of the calculations, including a careful treatment of the corner terms that arise in the action calculations.

Holographic calculations
In this section, we consider static two-sided asymptotically AdS black holes, and take the region L to be a constant-time slice of one boundary. Its entanglement wedge E_L is the corresponding exterior region in the bulk. We compute the volume C_V(L) of a maximal slice and the action C_A(L) of the Wheeler-de Witt (WdW) patch for this exterior region. These quantities are relevant for the subsystem analogues of "complexity equals volume" (CV) and "complexity equals action" (CA) dualities, respectively. We comment on the differences between the two quantities and the possibility that both dualities hold, as they could potentially provide information about different notions of subsystem complexity. We treat the neutral case in subsection 2.2 and the charged one in subsection 2.3. We summarize the results at the top of each subsection before entering into the details of the calculations. First, however, in subsection 2.1, we make a qualitative observation concerning C_V(L) and C_A(L) that will play an important role when we compare these quantities to candidate complexity measures in section 3.

Finally, in subsection 2.4 we compute C_A(L) for a shock wave geometry, dual to a system undergoing thermalization after an injection of energy. We find exactly the same late-time behavior as for the two-sided black holes, namely linear growth at a rate 2M.

Relation between subsystem and full-system measures
We begin with a general observation about the volume measure C_V. Let σ be a boundary Cauchy slice, A a region of σ, and A^c := σ \ A its complement. We assume that the full system on σ is in a pure state. Then we claim that

C_V(A) + C_V(A^c) ≤ C_V(σ), (2.1)

in other words, that C_V is superadditive. The reason is that the left-hand side equals the maximum volume of a complete Cauchy slice bounded by σ which is constrained to pass through the HRT surface m(A) (which is the same as m(A^c), since the full system is pure), while the right-hand side is the maximum volume of a Cauchy slice that is bounded by σ but not constrained to pass through m(A).
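The mechanism behind this inequality is simply that a constrained maximization can never exceed an unconstrained one. A toy numerical sketch (the "slice volumes" here are made up and carry no holographic meaning):

```python
import numpy as np

# Toy version of the argument: C_V(sigma) maximizes the volume over all
# bulk Cauchy slices anchored on sigma, while C_V(A) + C_V(A^c) maximizes
# only over the subset of slices forced to pass through the HRT surface m(A).
rng = np.random.default_rng(0)
volumes = rng.uniform(1.0, 10.0, size=1000)   # hypothetical slice volumes
through_m_A = rng.random(1000) < 0.3          # which slices contain m(A)

cv_sigma = volumes.max()                      # unconstrained maximum
cv_A_plus_Ac = volumes[through_m_A].max()     # constrained maximum

# Constrained <= unconstrained: C_V(A) + C_V(A^c) <= C_V(sigma).
assert cv_A_plus_Ac <= cv_sigma
```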
In the case we deal with in this paper, where A = L is a constant-time slice of one boundary of a static two-sided black hole and A^c = R is a constant-time slice of the other boundary, we have symmetries that allow us to say more. First, C_V(L) and C_V(R) are independent of time, and there is an isometry that exchanges the left and right sides, so

C_V(L) = C_V(R). (2.2)

The full system is not static, so C_V(σ) depends on the times chosen for R and L. However, if these are both chosen at t = 0, then the time-reflection isometry of the black hole is respected, so by symmetry the maximal-volume slice for σ must pass through the bifurcation surface, which is also m(L) and m(R). The inequality (2.1) is thus saturated:

C_V(L) + C_V(R) = C_V(σ).

On the other hand, we find that the additivity property of C_A depends on the definition of the action. As detailed below, there is a counterterm associated with the null surfaces bounding the WdW patch which is necessary to render the action reparametrization invariant, and this counterterm comes with an arbitrary length scale in its definition. As this length scale is varied, the zero-time C_A result goes from being subadditive to being superadditive. Unlike C_V, it will not generically be the case that the action is exactly additive, due to the regions behind the horizon. One way to fix this ambiguity in C_A is to demand that the UV divergent part of the complexity be positive; in this case, the action is superadditive at time zero, eq. (2.5).

Figure 2. Separation of the WdW patch in terms of its intersection with the entanglement wedges E_L ∩ W and E_R ∩ W, and with the regions behind the past and future horizons W^±_int for an eternal black hole.

In more detail, while in the C_V computation one deals with positive-definite quantities, in the action calculation we have different contributions whose sign depends on the gravitational Lagrangian as well as the boundary and corner terms. For the cases we consider, the Lagrangian is negative, and even including boundary and corner contributions the different regions into which one can decompose the action calculation all give negative results. More precisely, the calculation of the pure-state action C_A(σ) can be decomposed into the contributions C_A(L), C_A(R), and A^±_int, where A^±_int corresponds to the total action associated to the spacetime region W^±_int, defined as the intersection of the future/past interior of the black hole with the WdW patch, as shown in figure 2. For example, for the neutral black hole at t = 0, one has a relation among these pieces, where S is the entropy, D is the bulk spacetime dimension, and g_D is a D-dependent constant,

in which ψ0(z) denotes the digamma function, ψ0(z) = Γ′(z)/Γ(z). Here l_c/ℓ is an undetermined parameter appearing in the definition of the action, as explained in section 2.2.1. For l_c/ℓ > 1, g_D is positive, but one can also find values of l_c/ℓ < 1 such that g_D is negative. Depending on this choice, the subsystem complexity can be either superadditive or subadditive. This result can be found in (B.23) of appendix B.1. The analogous result for charged black holes can be read from (B.37).
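As a quick numerical sanity check of the digamma definition used here (a standard special-function identity, not specific to this paper), one can compare SciPy's ψ0 against a central difference of log Γ:

```python
from scipy.special import digamma, gammaln

# psi_0(z) = Gamma'(z)/Gamma(z) = d/dz log Gamma(z); check by central difference.
z, h = 2.5, 1e-6
numeric = (gammaln(z + h) - gammaln(z - h)) / (2 * h)
assert abs(numeric - digamma(z)) < 1e-8
```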

Neutral black hole
Consider an eternal AdS black hole in D spacetime dimensions. In the planar limit, its metric can be written as

ds² = (ℓ²/z²)(−f(z) dt² + dx_μ dx^μ + dz²/f(z)),  f(z) = 1 − (z/z_h)^{D−1},  (2.10)

and the coordinates (t, x^μ, z) cover one exterior region (left or right) of the full double-sided geometry, as schematically depicted in figure 1. As argued originally by Maldacena [49], this geometry is dual to the thermofield double state, where the parameter β corresponds to the inverse temperature associated to the eternal black hole; in this case β^{−1} = T_BH = (D − 1)/(4π z_h). If one considers expectation values of operators on a single CFT, then the effective state of the left/right CFT is described by a thermal density matrix with inverse temperature β. Its entropy and mass are both proportional to V_⊥, the dimensionless transverse volume parametrized by the coordinates x^i, and equal the entropy and energy of the dual quantum field theory, respectively. We start by computing the WdW action C_A(L) in subsection 2.2.1. This is done by evaluating the action associated to the region defined as the intersection of the entanglement wedge and the WdW patch, including surface and corner contributions as specified by Myers et al. [50]. The result for the thermal-state complexity, eq. (2.13), turns out to be remarkably simple; in it, δ is a UV cutoff, and the logarithmic term depends on a particular choice of an undetermined parameter in the computation of the corner contributions, as explained in that section. The D-dependent constant g_0 is defined in (2.39). The upper index A refers to the fact that this expression is obtained using the CA duality. We can see from (2.13) that, apart from the logarithmic term, the thermal-state complexity has a simple relation to the entropy. More precisely, in terms of the boundary quantities, (2.13) becomes (2.14), where both a(D) and b(D) are positive coefficients given in (2.15). In (2.14), V = V_⊥ ℓ^{D−2} is the dimensionful volume of the boundary theory and c_eff = ℓ^{D−2}/(16π G_N) characterizes the effective number of degrees of freedom of the dual CFT. In subsection 2.2.2 we evaluate the maximal volume C_V(L), again obtaining a very simple answer in terms of the black hole horizon, the UV cutoff δ, and the number of spacetime dimensions, with ξ a length scale required to make the complexity dimensionless. In terms of the boundary quantities, the result takes the form (2.17), where ã(D) and b̃(D) are also positive and are given in (2.18).

An interesting feature of this result is its entropy independence for D = 3.
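The thermodynamic inputs above can be checked numerically. This is a minimal sketch assuming the standard planar AdS-Schwarzschild expressions for the entropy (horizon area over 4 G_N) and the ADM mass; only T = (D − 1)/(4π z_h) is taken directly from the text, and the first-law ratio M = (D − 2)/(D − 1) T S is a standard planar-brane fact, not a result of this paper:

```python
import math

def planar_bh_thermo(D, z_h, l=1.0, V_perp=1.0, G_N=1.0):
    """Planar AdS-Schwarzschild thermodynamics (assumed standard forms):
    T as quoted in the text, S = horizon area / (4 G_N), M the ADM mass."""
    T = (D - 1) / (4 * math.pi * z_h)
    S = V_perp * (l / z_h) ** (D - 2) / (4 * G_N)
    M = (D - 2) * V_perp * l ** (D - 2) / (16 * math.pi * G_N * z_h ** (D - 1))
    return T, S, M

# First-law consistency for a planar brane: M = (D-2)/(D-1) * T * S.
for D in (3, 4, 5):
    T, S, M = planar_bh_thermo(D, z_h=2.0)
    assert math.isclose(M, (D - 2) / (D - 1) * T * S)
```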

C_A(L)
According to the CA duality, the complexity associated to the thermal state describing the left (right) system is given by the action evaluated on the spacetime region W_{L/R} = E_{L/R} ∩ W, as illustrated in figure 1. In figure 3 we show the intersection of E_L and W with the null boundaries labelled: W^± correspond to boundaries of W, while H^± correspond to the boundaries of E_L and coincide with the black hole horizon. The action of W_L receives three kinds of contributions: one from the bulk, one from the boundaries, and one from the corners [17,22,50]. We evaluate these following the rules laid out in ref. [50], including the extra counterterms on null boundaries recently discussed in [50,51] to guarantee the diffeomorphism invariance of the contributions of the null boundary terms. The action diverges unless a cutoff is placed near z = 0. The regulated W is defined by starting the null lines that bound W from the cutoff surface z = δ.

A convenient set of coordinates that naturally covers the region in question is obtained by changing z → z_*, where z_* is the tortoise coordinate defined in (2.19), and then defining the light-cone coordinates u = t − z_*(z) and v = t + z_*(z), which can be used to construct the Penrose diagram of figure 1. In these coordinates the metric takes the form ds² = (ℓ²/z²)(−f(z) du dv + dx_μ dx^μ). For this family of spacetimes, the form of the function f(z) allows an explicit evaluation of the tortoise coordinate in terms of the incomplete beta function, where, in the second line of that expression, x ≡ z/z_h, δ_h ≡ δ/z_h, and B(x; a, b) is the incomplete beta function,

B(x; a, b) = ∫_0^x t^{a−1} (1 − t)^{b−1} dt.

We will see in the next section that, for the computation we have in mind, it is not necessary to have an explicit expression for this function. First consider the bulk contribution, which arises from the Einstein-Hilbert action with a negative cosmological constant. The vacuum Einstein equations fix the on-shell Lagrangian density to the negative constant R − 2Λ = −2(D − 1)/ℓ². The bulk action is then proportional to the spacetime volume |W_L|. The spacetime volume is computed as follows. We need the region between the null lines t(z) = ±(z_*(z) − z_*(δ)) from z = δ (a UV regulator) to z = z_h, whose volume is given in (2.27).

After the change of variable u → x^{D−1}, the integral in (2.27) can be put in a form involving the incomplete beta function. The remaining integral of the incomplete beta function was computed in appendix C, equation (C.11). Using that result, we obtain the bulk action, and hence the bulk contribution to the complexity, (2.30). Now consider the contribution coming from the light sheets which bound the WdW patch. As shown in [50,51], such contributions have two pieces for each null hypersurface N ∈ {W^±, H^±}; this is the content of (2.31), where sgn(N) = +1 (−1) if N lies to the future (past) of the spacetime region W_L, κ is the function that appears in the parallel transport of the null generators k^a, via k^a ∇_a k^b = κ k^b, γ is the transverse metric on the null sheet, and Θ is its expansion. Notice the undetermined constant l_c in the second term of (2.31). Due to the parametrization invariance of (2.31), one can evaluate it for any choice of null generators k^a. We chose to use affinely parametrized ones for simplicity, which means κ = 0. In this case the first term in (2.31) gives zero contribution, and we are left with the so-called counterterm. On the horizons H^±, the expansion vanishes, so the counterterm is zero. Meanwhile, the hypersurfaces W^± are described in Poincaré coordinates by t(z) = ±(z_*(z) − z_*(δ)). We choose to parametrize the generators by λ = −ℓ/z. The transverse metric is simply γ_ij = ℓ² δ_ij/z², and then Θ = −(D − 2) z/ℓ. Plugging these into (2.31) for both W^+ and W^− results in (2.33), where V_⊥ ℓ^{D−2} comes from the volume integral ∫ d^{D−2}x. Interestingly, part of the above expression, namely the action contribution from the null boundaries, exactly cancels the full bulk contribution (2.30). Notice that in the derivation we did not need to specify the exact form of f(z), and therefore this expression is the same for the charged black hole geometry which we study in the next section.
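The quoted expansion Θ = −(D − 2) z/ℓ follows from the stated transverse metric and affine parameter, and can be verified symbolically. A short sketch (the symbol names are ours; Θ is computed as the λ derivative of ln √γ):

```python
import sympy as sp

z, ell = sp.symbols('z ell', positive=True)
D = sp.symbols('D', positive=True)

# Transverse metric on W^pm: gamma_ij = (ell/z)^2 delta_ij, so
# sqrt(gamma) = (ell/z)^(D-2).  With affine parameter lam = -ell/z,
# d/dlam = (z^2/ell) d/dz, and the expansion is Theta = d(ln sqrt(gamma))/dlam.
ln_sqrt_gamma = (D - 2) * sp.log(ell / z)
Theta = (z**2 / ell) * sp.diff(ln_sqrt_gamma, z)

assert sp.simplify(Theta + (D - 2) * z / ell) == 0   # Theta = -(D-2) z/ell
```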
Now we consider the corner terms, which arise from a codimension-two contribution to the action supported on the corner locus Σ, with corner integrand a built from the outward-directed null normal one-forms k, k̄ through the inner product k · k̄ = g^{ab} k_a k̄_b; the overall sign is determined by the particular corner [22,50]. The inner product is easily computed, and the corner contribution at corner i then depends only on the z coordinate z_i there. Since f(z) is zero at the horizon, one has to be careful when computing the corner contributions there. We do this carefully in appendix A,

where we find that the sum of the four corner terms (one at the boundary, W^− ∩ W^+, and three on the horizon, W^+ ∩ H^+, H^+ ∩ H^−, and W^− ∩ H^−, as illustrated in figure 3) involves the D-dependent constant g_0 of (2.39), in which ψ0(z) = Γ′(z)/Γ(z) is the digamma function. For D > 2, g_0 is negative (for example, for D = 3, g_0 = −log 2). The full action is the sum of the bulk, boundary, and corner pieces. Notice that the arbitrary constants appearing in the boundary and joint contributions combine into a single dimensionless parameter l_c/ℓ in the resulting formula. This suggests the natural D-independent choice l_c/ℓ ≥ 1, since that is enough to guarantee a positive volume-divergent contribution to the subsystem complexity. However, this also implies that the first correction to the positive divergent term in C^A_L(T) is negative. As we will see, this is an important qualitative feature of this quantity. On the other hand, we might instead impose the condition that the finite subleading contribution be positive, in which case one would need ℓ/l_c > c(D − 2) for some constant c; the condition would then be dimension-dependent and less natural.

C_V(L)
In the spirit of the complexity equals volume conjecture, we would like to compute, for the family of geometries considered in the previous section, the volume of the maximal slice bounded by Σ and the HRT surface m(Σ). In the case at hand, Σ is the boundary region t = 0, z = δ, and m(Σ) is the horizon z = z_h (where the coordinate t is undefined). Due to the static nature of the metric in the exterior region, it is clear that the maximal slice is just the t = 0 hypersurface.

The extremal volume V is computed by direct integration; the resulting integral can be expressed in terms of a hypergeometric function,

so the corresponding volume complexity is given by the volume in units of G_N ξ, where ξ is a length scale required to make the complexity dimensionless.
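The D = 3 entropy independence noted above can be checked numerically. This is a sketch under the assumption of the standard planar blackening factor f(z) = 1 − (z/z_h)^{D−1}, for which the radial part of the t = 0 slice volume is ∫_δ^{z_h} dz z^{1−D}/√f(z); for D = 3 this evaluates in closed form to √(z_h² − δ²)/(δ z_h), a pure UV divergence with no finite z_h-dependent piece:

```python
import math
from scipy.integrate import quad

def slice_volume_integral(D, z_h, delta):
    """Radial integral for the t=0 maximal-slice volume (per V_perp l^(D-1)),
    assuming f(z) = 1 - (z/z_h)^(D-1): int_delta^z_h dz / (z^(D-1) sqrt(f))."""
    f = lambda zz: 1.0 - (zz / z_h) ** (D - 1)
    val, _ = quad(lambda zz: zz ** (1 - D) / math.sqrt(f(zz)), delta, z_h, limit=200)
    return val

# D = 3: the closed form is sqrt(z_h^2 - delta^2)/(delta z_h) = 1/delta + O(delta),
# consistent with the "entropy independence for D = 3" remark.
z_h, delta = 1.0, 1e-3
num = slice_volume_integral(3, z_h, delta)
exact = math.sqrt(z_h**2 - delta**2) / (delta * z_h)
assert abs(num - exact) / exact < 1e-5
```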

Charged black hole
In this section we repeat the analysis of the eternal black hole of subsection 2.2 for the more general family of charged black holes, characterized by mass and charge parameters m, q and described by the spacetime metric (2.44) in Poincaré-like coordinates. The spacetimes in this family are dual to a charged version of the thermofield double state. We are interested in studying the subsystem complexity associated to the left (right) subsystem obtained after tracing out the degrees of freedom of the right (left) part of the Hilbert space. In this case the resulting reduced state is given by the density matrix describing a grand canonical ensemble, with temperature T = β^{−1} and chemical potential μ.
The metric (2.44) is the solution to the classical equations of motion derived from the Einstein-Hilbert action in the presence of an electromagnetic field F_μν. The energy-momentum tensor is sourced by the field strength, and Λ = −(D − 1)(D − 2)/(2ℓ²) is the cosmological constant. The solutions to the classical equations of motion for the metric (under the assumption of flat boundary conditions) and the gauge field are given in [52], with m̃ and q̃ related to the ADM mass and charge. The metric (2.44) is obtained by changing the radial coordinate r to z = ℓ²/r, with the parameters m, q² in (2.45) related to m̃, q̃² by rescalings. The function f(z) has two positive zeros z_±, where z_− is the smaller one. Since the asymptotic boundary of this metric corresponds to the z → 0 region, the region outside the horizon corresponds to z < z_−, which implies that z_− is the horizon radius of this metric. The existence of a horizon at z = z_h = z_−, f(z_h) = 0, establishes the useful relation (2.54) between z_h, q², and m. In subsections 2.3.1 and 2.3.2, we compute the mixed-state complexity associated to the finite temperature and finite chemical potential density matrix describing the grand canonical ensemble, ρ = Z^{−1} e^{−β(H+μQ)}, via the CA and CV dualities respectively. The boundary temperature T and chemical potential μ are given in (2.55): the expression for T is derived from T = −f′(z_h)/(4π), and the one for μ can be deduced from (2.51) by taking the r → ∞ limit of A_0. For the action calculation performed in section 2.3.1 we obtain a relatively simple answer, in which g(z_h) is defined through a finite limit, as explained in appendix A.
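The relation between horizon data and temperature can be sketched numerically. This assumes the standard planar Reissner-Nordström-AdS blackening factor f(z) = 1 − m z^{D−1} + q² z^{2(D−2)} (the paper's explicit (2.45) is not reproduced here), together with the quoted T = −f′(z_h)/(4π); imposing f(z_h) = 0 plays the role of the relation (2.54):

```python
import math

def rn_temperature(D, z_h, q2):
    """Temperature of the planar charged black hole, assuming
    f(z) = 1 - m z^(D-1) + q2 z^(2(D-2)).  f(z_h) = 0 fixes
    m = (1 + q2 z_h^(2(D-2))) / z_h^(D-1), and T = -f'(z_h)/(4 pi)."""
    m = (1 + q2 * z_h ** (2 * (D - 2))) / z_h ** (D - 1)
    fp = -(D - 1) * m * z_h ** (D - 2) + 2 * (D - 2) * q2 * z_h ** (2 * D - 5)
    return -fp / (4 * math.pi)

# Neutral limit reproduces T = (D-1)/(4 pi z_h); extremality (T -> 0) is
# reached at q2 z_h^(2(D-2)) = (D-1)/(D-3) for D > 3 (with this assumed f).
D, z_h = 4, 1.0
assert math.isclose(rn_temperature(D, z_h, 0.0), (D - 1) / (4 * math.pi * z_h))
q2_ext = (D - 1) / (D - 3)
assert abs(rn_temperature(D, z_h, q2_ext)) < 1e-12
```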

In fact this result is very similar to the one for the neutral black hole (2.13), the only differences being the term g(z_h) and the explicit dependence of the black hole horizon z_h on the mass and charge parameters m, q².
In terms of the field theory quantities, the subsystem complexity for charged black holes takes a form analogous to (2.14), where the coefficients a(D) and b(D) are given in (2.15), and z_h is a complicated function of the boundary quantities T and μ which can in principle be derived from (2.55).
In section 2.3.2 we study the analogous quantity C_V(L), as given by the CV duality, obtaining explicit expressions in two different regimes, corresponding to near-extremal and weakly charged black holes. The action evaluation required to compute the subsystem complexity C_A follows closely the steps laid out in subsection 2.2.1; the procedure is essentially the same, the only difference being the functions we are integrating. For the bulk evaluation, taking the trace of the Einstein equation in the presence of the electromagnetic field leads to a relation determining the on-shell Ricci scalar. The term F_μν F^{μν} can be evaluated from the gauge field solution (2.51); this result together with (2.60) leads to a simple expression for the bulk on-shell action. As in the uncharged case, it is convenient to use light-cone coordinates, which can be used to construct the Penrose diagram of figure 4 and naturally cover the region of interest W_L. In these coordinates the metric takes a simpler form, and the light rays which bound the causal region W_L are given by t_±(z) = ±(z_*(z) − z_*(δ)), where δ is a UV cutoff. Once the integration region W_L is delimited, one can explicitly integrate over the perpendicular directions, since the integrand is independent of them. Doing so leads to a dimensionless volume factor V_⊥ and a remaining two-dimensional integral, (2.66). Notice that the ξ integral in (2.66) is highly non-trivial while the z integral is much simpler. To use this fact to our advantage, consider the integration region in the (z, ξ) plane and invert the order in which we perform the integrations. Let us then consider the z integral separately; in evaluating it, we use the relation (2.54) between m, z_h and q².
The exact cancellation of the function f(ξ) for arbitrary values of m and q² is remarkable, and leads to a simple answer for the bulk action, (2.69). As explained in the previous section, the gravitational boundary contribution⁷ in this case is exactly the same as in the uncharged case, and is therefore given by (2.33). Similarly, the calculation of the corner terms goes exactly as in section 2.2.1, with the extra details from appendix A. This leads to an expression in which g(z_h) is defined from a limit that, as explained in appendix A, is finite for generic values of m and q² but has an IR divergence in the extremal case (appendix A.3). The full action is the sum of these contributions.

⁷ We assume that the boundary conditions of the electromagnetic field F_μν are such that they do not give rise to boundary terms on the null surfaces.
To evaluate C_V(L) we need to compute the extremal-volume surface which asymptotes to the boundary t = 0 surface and the bulk minimal surface. The extremal surface is given by the t = 0 hypersurface in Poincaré coordinates, and the extremal volume is therefore given by a radial integral involving 1/√f(z). We would like to evaluate this integral for arbitrary values of m and q², but were unfortunately unable to do so. Nevertheless, one can explore the finiteness of the volume answer. Apart from the obvious UV divergence, the zeros of f(z) indicate potential divergences in the volume integral. This is easy to analyze, since we know the only positive zeros of f(z) are at z = z_±, and in the integration region we only encounter z = z_− = z_h, except when z_± collide with each other; that is the case of extremal black holes, for which, as we will see, there is an IR divergence in the volume integral. Let us study the contribution of the integral in the region close to z ≈ z_h. On the horizon we have f(z_h) = 0; therefore, if f′(z_h) ≠ 0, the integrand has a square-root divergence close to z = z_h, but a square-root divergence integrates to a finite value. That means that for generic values of m, q² the integral is finite. On the other hand, for specific values of m, q² for which −f′(z_h) ≪ 1 but finite, we have a large contribution from the above integral, which scales as 1/√(−f′(z_h)). This means that C_V(L) diverges as 1/√T as the black hole approaches extremality, since T = −f′(z_h)/(4π). For m, q² such that f′(z_h) = 0, we need to go to the next order in the expansion of f around the horizon, and the resulting integral is logarithmically divergent; this is precisely the case of extremal black holes. One can go one step further and compute the coefficient of the logarithmic IR divergence in V by cutting the integral off at z_h − ε.
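The IR enhancement of the volume integral near extremality can be seen numerically. This sketch again assumes the standard blackening factor f(z) = 1 − m z^{D−1} + q² z^{2(D−2)} with f(z_h) = 0 imposed; it only checks that the integral grows monotonically as the charge approaches its extremal value, consistent with the divergences discussed above:

```python
import math
from scipy.integrate import quad

def volume_integral(D, z_h, q2, delta=1e-3):
    """Radial t=0 slice volume integral, int_delta^z_h dz z^(1-D)/sqrt(f),
    with m fixed by f(z_h) = 0 (assumed standard charged blackening factor)."""
    m = (1 + q2 * z_h ** (2 * (D - 2))) / z_h ** (D - 1)
    f = lambda z: 1 - m * z ** (D - 1) + q2 * z ** (2 * (D - 2))
    # max(...) guards against roundoff right at the horizon endpoint
    val, _ = quad(lambda z: z ** (1 - D) / math.sqrt(max(f(z), 0.0)),
                  delta, z_h, limit=200)
    return val

# D = 4, z_h = 1: extremality sits at q2 = 3; the integral grows as q2 -> 3.
vals = [volume_integral(4, 1.0, q2) for q2 in (0.0, 2.9, 2.999)]
assert vals[0] < vals[1] < vals[2]
```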
For extremal black holes, we were unable to obtain a closed expression for the volume, although its dependence on z_h can be extracted, since both the m and q² parameters in dimensionless units depend only on the number of dimensions D. Therefore, the volume of the maximal-volume surface in units of G_N ξ (where, again, ξ is a length scale that makes C_V dimensionless) is given by (2.81). The leading divergent term, coming from the lower integration point δ/z_h, can be extracted by integrating the region close to x = 0. This result, plus the integration in the region near x = 1, leads to an answer consisting of a divergent piece plus F(D), where F(D) is an undetermined function that depends only on the number of spacetime dimensions. Indeed, one can evaluate the integral on a case-by-case basis; we do so for D = 4 and D = 5, factoring out the z_h dependence up to the divergent UV piece and a dimensionless charge parameter. One obvious concern is whether or not the perturbative expansion in q² z_h^{2(D−2)} breaks down in the region where x ≈ 1. This is not the case: the behavior of the denominator of (2.85) in the x ≈ 1 regime is such that the term proportional to q² remains parametrically smaller for all x. At first order in q² z_h^{2(D−2)}, the integral can be evaluated, and the complexity of the weakly charged black hole follows. Notice that this answer is given in a mixed expansion, in which we used the exact z_h and expanded the integral in powers of q²; a more consistent expansion would also expand z_h about its neutral value z_h(q² = 0). It is interesting to note that in such an expansion the expression simplifies to (2.88).

JHEP02(2019)145
In this form we see explicitly that there is a genuine q² correction to the zero-charge complexity which cannot be absorbed into the q dependence of the new z_h, as observed from (2.87).

Shock wave
An important motivation for the complexity-equals-action and complexity-equals-volume proposals was their linear growth behavior at late times [16, 17, 53]. This observation was stated more precisely in the CA duality by showing that for neutral black hole geometries one indeed has

dA/dt_L = 2M , (2.89)

at large t_L. In (2.89), the l.h.s. is obtained by fixing the time slice on the right side of the boundary geometry, t_R = 0, and varying the left boundary time t_L. The complexification growth of the pure state is associated to the growth of the part of the action that lies behind the horizon. Since in the evaluation of subsystem complexity one only considers the region outside the horizon, one does not expect a similar statement to hold in that case. In other words, the action associated to the region W_L ∩ E is invariant under t_L → t_L + δt. Nevertheless, one can consider a small deviation of the black hole geometry by perturbing the system slightly in the past. This is equivalent to injecting energy into the geometry in the form of a shock wave. In this case the state is time dependent and its complexification rate is expected to have the same late-time behavior; what is more, it should present the expected time delay due to the injection of the shock wave [17]. In this section we evaluate the subsystem complexity in this dynamical situation for a neutral black hole with arbitrary asymptotic geometry. (To obtain the result for the flat boundary geometry we simply set k to 0.)
The metric we consider is a static black hole with blackening factor f(r), where dΣ²_{k,D−2} is the line element of the boundary geometry, which can be flat (k = 0), spherical (k = 1) or hyperbolic (k = −1).
To describe the geometry in the presence of a shock wave it is convenient to move to null coordinates (u, v), where the − sign corresponds to the region r > r_h, the + sign to r < r_h, and the tortoise coordinate r*(r) is defined by dr* = dr/f(r). The parameter β = 4π/f′(r_h) is the inverse temperature. Notice that r*(r) decreases as r decreases. If we fix r*(∞) = 0 then r* is negative everywhere else (in these coordinates the boundary is located at r → ∞). In particular, at the horizon r*(r_h) = −∞, leading to uv = 0.
The back-reacted solution in the presence of a shock wave is given, in null coordinates, by the original metric with a shift. Here h ∼ e^{(2π/β)(|t_w|−t_*)} is the shift produced by the shock wave, where −|t_w| is the boundary time at which the operator dual to the shock wave is inserted. The back-reacted energy created by the shock wave scales with |t_w| as E ∼ e^{(2π/β)|t_w|}, and t_* = (β/2π) log( ^{D−2}/G_N ) is the scrambling time. Therefore, for |t_w| < t_* the inserted energy has no appreciable effect on the geometry.
The net effect of the shock wave is to separate the Penrose diagram along the u = 0 region, where the shock wave has an important effect; away from it the metric looks the same as the original black hole geometry. The essential difference is in the way we glue the two sides together: continuity of the v coordinate implies that one has to shift the two spacetimes along the v coordinate by an amount h.
The horizon of the new metric is located at v = h as described by the original metric, and therefore lies behind the original horizon. The subsystem complexity of the perturbed metric can then be computed using the unperturbed black hole metric, but the region of interest now includes both exterior and interior regions. One can split that region in two, one part outside the black hole horizon and one behind the horizon, using the additivity property of the action. Since we are interested in the time-dependent term in the action, we will focus only on the behind-the-horizon region which, as we mentioned, gives the full time dependence.
The full action (2.95), associated to the region E_L ∩ W of figure 5, contains bulk, boundary, and corner contributions. The region (∂M)_N represents the parts of the boundary of E_L ∩ W which are null, and their respective integral represents the counterterms that render the contributions of the null surfaces parametrization invariant.
We would like to start with the evaluation of the first term, the bulk spacetime integral. To perform this integral it is convenient to make a further change of coordinates (ξ, χ), given by ξ = uv and χ = u/v, in which the unperturbed metric takes a simple form and the bulk integral can be written explicitly. The bulk region is delimited by the surfaces ξ = 1 (the singularity), ξ = ε (the horizon when ε → 0), v = h, or equivalently χ = ξ/v² = ξ/h², and χ = u_0²/ξ. Both A and r are functions of ξ only, and therefore the χ integral can be performed directly; moreover, from the definitions one has dξ/ξ = (4π/β) dr/f(r), so the ξ integral turns into a trivial integral over r, with end points running from r = r_h to r = 0. We are interested in the terms proportional to the initial conditions and shock wave parameters, and therefore we can ignore the log(ξ) term. The second term in (2.95) leads to three null boundary surface terms, which give zero contribution to the action for affinely parametrized null generators, and one timelike boundary surface surrounding the singularity, which we compute as follows. First,
the timelike boundary surface ∂M is given by ξ = 1. Here K = (1/2) n^α ∂_α log|h|, where h is the induced metric on this surface, and the χ integral is carried out from χ = ξ/h² to χ = u_0²/ξ, as in the bulk case. Here ξ = 1 cancels the term proportional to log ξ which we ignored in the bulk contribution. Multiplying by 1/8πG_N and evaluating the previous result at r = 0 gives us the boundary contribution to the complexity. The calculation of the third term in (2.95) goes exactly as in section 2.2: the null surfaces at the black hole horizons do not contribute, and the null surfaces that hit the future singularity do contribute, but in such a way that their full contribution is time independent. The reason is that one can parametrize such integrals with the affine parameter λ = − /z, which runs from z_h to ∞ and is independent of the enlargement of the black hole interior due to the shock wave. Finally, we focus on evaluating the corner contributions arising at the intersections of light-like surfaces, such as the ones appearing on the horizon. The corners appearing on the singularity surface do not contribute, as the volume factor goes to zero as r → 0. The calculation goes in exactly the same way as the evaluation of the corner contributions to the subsystem complexity of neutral black holes in section 2.2. The only difference here is that the regularized corners lie behind the horizon and the individual contributions have a slightly different form in the (t, r) coordinates (2.105). Here, as described in appendix A, we add and subtract the corner term appearing at the intersection H⁺ ∩ H⁻ and rewrite the differences of the corners in terms of logarithms of uv products, leading to
where r is evaluated at the regularized intersection of the light sheets. In the limit in which the regulators are removed, the last two terms give a finite contribution which is independent of the parameters u_0 and h, and we therefore ignore them here. Notice, however, that these contributions could lead to divergent terms like the ones we found in previous calculations, but those terms would not be time dependent. The time-dependent piece comes from the remaining terms, where we have used the relations u_0 = e^{(2π/β) t_L} and h ∼ e^{(2π/β)(|t_w|−t_*)}. Adding up all the pieces, one finds that at large t_L we reproduce the late-time complexification rate for subsystem complexity, with the proper time shift due to the switchback effect [16, 17]. Indeed, this effect provided important evidence for the CA and CV conjectures, and in the context of pure state complexity it was tested even in the presence of multiple shock waves [17, 54]. The result for the complexification rate at large t_L then takes the expected form.

Measures of mixed-state complexity
In the previous section, we calculated the volume and action quantities C_V and C_A for thermal states of holographic systems. In the spirit of the CV and CA conjectures, we would like to relate these to some notion of complexity for mixed states. Therefore, our first task is to come up with measures of complexity for mixed states. We will find that there are many ways to define such measures, and it is not straightforward to determine the relations among them. This is perhaps not surprising, as a similar situation obtains for entanglement in bipartite mixed states: many different measures have been defined (entanglement of purification, entanglement of formation, entanglement of distillation, logarithmic negativity, and so on), and determining how they are related to each other is far from straightforward. In subsection 3.1, after reminding ourselves of the relevant notion of complexity for pure states, we define several measures of complexity for mixed states. In subsection 3.2, we estimate the values of these measures in thermal states, in particular their dependence on temperature, using intuition from tensor networks. Then in subsection 3.3 we compare these estimates to the values of C_V and C_A obtained in section 2. We find that one of our proposed definitions matches well (to within the precision of our estimates) the behavior of C_A. We thus arrive at a concrete and well-motivated subsystem CA conjecture. On the other hand, we do not find a match between C_V and any of our proposed complexity definitions. In subsection 3.4, we briefly explore other possible approaches to defining mixed-state complexity, but again fail to find a plausible match to C_V.
It is worth reiterating that almost all of the mixed states we consider in this paper are thermal (i.e. Gibbs or generalized Gibbs) states. This is both an advantage, as it gives us a handle on estimating their complexities that we would not necessarily have for general states, and a limitation. In particular, these states are static, eliminating the whole issue of time dependence, which was central to the development of the original CV and CA conjectures [15,17]. To further test and explore our subsystem CA conjecture, it will be important to study other types of subsystems, in particular those in time-dependent states. We took a small step in this direction in subsection 2.4 where we studied subsystem complexity for a time-dependent shockwave geometry.

Proposed definitions
We begin with the simplest notion of pure-state complexity. This definition has three ingredients: a reference state, a set of allowed gates, and a tolerance. The complexity of a target pure state is defined as the minimum number of gates from the allowed set needed to take the reference state to the target state, up to the specified tolerance. There is considerable freedom in the notion of tolerance: we could require that the target state and the evolved reference state be close in trace norm, or we could demand that they have approximately equal expectation values for some operators, or any of a myriad of other measures. Let us denote this measure of pure-state complexity, for some fixed set of choices, by C. We note that some pure-state schemes particularly adapted to the problem of field theory complexity have been explored recently [55][56][57].
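As a toy illustration of this definition (our own construction, not the paper's), one can take a single qubit, reference state |0⟩, the allowed gate set G = {H, T}, and an overlap tolerance, and find the minimal gate count by breadth-first search over gate words:

```python
import numpy as np

# Toy model of pure-state complexity: reference |0>, gates G = {H, T},
# tolerance eps on the overlap with the target state.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
gates = {"H": H, "T": T}

def complexity(target, eps=1e-6, max_depth=8):
    # Breadth-first search: all gate words of length `depth` at once,
    # so the first hit is a minimal circuit.
    ref = np.array([1.0, 0.0], dtype=complex)
    frontier = [(ref, "")]
    for depth in range(max_depth + 1):
        for psi, word in frontier:
            if 1 - abs(np.vdot(target, psi)) ** 2 <= eps:
                return depth, word
        frontier = [(g @ psi, word + name)
                    for psi, word in frontier
                    for name, g in gates.items()]
    return None  # not reachable within max_depth

print(complexity(np.array([1, 0], dtype=complex)))               # depth 0
print(complexity(np.array([1, 1], dtype=complex) / np.sqrt(2)))  # |+>: depth 1 via H
```

The overlap criterion here plays the role of the tolerance; swapping it for a trace-norm condition would give a different (but closely related) measure.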
To approach the problem of mixed state complexity, we begin by making some preliminary remarks. First, note that the definition of C made no reference to ancilla, meaning that we implicitly fixed the number of qubits and only allowed gates to act on those qubits. However, one could also consider notions of complexity with ancilla included. We could either allow no ancilla, allow ancilla but require them to return approximately to their initial state, or allow ancilla with arbitrary final states so long as the target state is approximately obtained. These definitions are not all equivalent, although it is not clear under what conditions they differ substantially. We will assume, as above, that the definition of pure-state complexity does not allow ancilla even at intermediate stages.
Second, observe that there is a potential distinction between mixed states and subsystem states (which may of course still be mixed). A complexity measure for mixed states must be applicable to any mixed state without reference to any other system. However, a complexity measure for subsystem states could depend on the state of the whole system as well. It is not obvious which notion is relevant for holography, but we proceed by thinking about mixed states without reference to a fixed purification.
Third, we will demand that our notion of mixed state complexity reduce to the pure state definition when the state is pure. This seems trivial, but it turns out to restrict the kinds of operations we can consider, e.g., we cannot allow ancilla in the mixed case unless we also allow them in the pure case.

With the above issues in mind, we now present two approaches to defining mixed-state complexity. Our analysis is complementary to some discussions in the quantum information literature [58].
Purification approach: the simplest definition of mixed-state complexity is phrased in terms of minimal purifications. Given a mixed state ρ on n qubits, an initial state |0 . . . 0⟩, a set of allowed unitary transformations G, and a tolerance ε, the purification complexity C_P of ρ is defined as the minimum number of gates from G needed to transform the initial pure state, plus an arbitrary number of ancilla qubits initialized in the state |0⟩, into a purification of ρ up to tolerance ε. Ancilla may only be used if they are entangled with the n-qubit system at the end of the process. This is an important restriction if we are to recover the ancilla-less definition of pure-state complexity (to recover a pure target state, all ancilla must be unentangled with the system up to the tolerance). Roughly speaking, this definition may be summarized as the pure-state complexity of the minimum-complexity purification of ρ, where we use only essential ancilla.
A few additional comments are in order to clarify our hypothesis about the nature of the essential ancilla. If the goal is to minimize the use of ancilla, the first step would be to restrict the system plus purifier to no more than double the number of degrees of freedom of the original system. Moreover, when the system state is pure, no ancilla should be used. Since no restriction is placed on the basis of the purifier state, it is natural to suppose that the number of ancilla qubits is proportional to the entropy of the system state. For example, there is no need to embed the order e^S states of the purifier that are entangled with the system into the larger UV Hilbert space of the purifier.
Spectrum approach: another way to think about complexity for a mixed state is to break the problem of creating the state into two parts: creating its spectrum and creating its basis of eigenstates. Given a mixed state ρ, an initial state |0 . . . 0⟩, a set of allowed unitary transformations G, and a tolerance ε, we define the spectrum complexity C_S of ρ as the minimum number of unitaries from G needed to transform the initial state plus ancilla into a state whose partial trace has the same spectrum as ρ, and such that all ancilla are entangled with the original system. Since ρ has the same spectrum as itself, in general C_S ≤ C_P.
Defining the complexity to construct the basis of eigenstates is harder. We could try to define it as the minimum number of unitaries needed to transform the initial state plus ancilla into a state whose partial trace has the same basis as ρ. However, since the maximally mixed state has the same basis as any state ρ, it would follow that the complexity to construct the basis of any state ρ is upper bounded by a fixed number, of order the number of qubits, independent of ρ.
We will therefore suggest two other definitions of basis complexity. First, since C_S ≤ C_P, we could define the basis complexity as their difference, C_B ≡ C_P − C_S, i.e. roughly the extra work needed to get the basis right. Note that it is not really clear whether the effort is exactly additive in this fashion; e.g., it might be roughly as hard to prepare just the spectrum as to prepare the whole state.

Figure 6. Illustration of the measures of complexity defined in the main text. Roughly speaking, C_P is the minimum number of gates required to go from the reference state to the target state ρ. Among the states with the same spectrum as ρ (blue region), the one that can be obtained with the fewest gates starting from the reference state is called ρ_spec, and the minimum number of gates is C_S. C̃_B is the minimum number of gates required to take ρ_spec to ρ. C_B (not shown) is C_P − C_S, and by the triangle inequality this cannot be more than C̃_B. (More precisely, to go from the reference state to some mixed state such as ρ or ρ_spec, we first add an arbitrary number of ancilla qubits to the reference state and then act with the gates to obtain a purification of the mixed state, in which all ancilla are required to be entangled with the original system.)
Alternatively, we could define the basis complexity by starting with the minimal-complexity state ρ_spec with the same spectrum as ρ, and then finding the minimum number of gates needed to change ρ_spec into ρ. This is always possible precisely because ρ and ρ_spec share the same spectrum. We denote this notion of basis complexity by C̃_B.
As usual, it is not clear how C_B and C̃_B are related in general, but since C_P ≤ C_S + C̃_B (because reaching ρ via ρ_spec is one possible circuit) it follows that C_B ≤ C̃_B. For a pure state of complexity C, it is easy to see that C_S = 0 while C_B = C̃_B = C. Thus, in some sense, the basis complexity (with either definition) is the analogue of pure-state complexity, while the spectrum complexity is a new feature of mixed states. These various definitions are illustrated in figure 6.
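The premise behind C̃_B, that two density matrices with the same spectrum are always connected by some unitary, is easy to verify numerically. The sketch below (our own illustration; ρ_spec here is just a random stand-in, not an actual minimal-complexity state) builds the connecting unitary from the two eigenbases:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(dim, eigs):
    # Random-basis density matrix with the prescribed spectrum `eigs`
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, _ = np.linalg.qr(a)
    return q @ np.diag(eigs) @ q.conj().T

eigs = np.array([0.5, 0.3, 0.15, 0.05])    # a common spectrum
rho = random_density_matrix(4, eigs)
rho_spec = random_density_matrix(4, eigs)  # stand-in for the minimal state

def connecting_unitary(r1, r2):
    # Diagonalize both (eigh sorts eigenvalues the same way for both)
    # and compose the eigenbases: U maps r1's eigenbasis onto r2's.
    w1, v1 = np.linalg.eigh(r1)
    w2, v2 = np.linalg.eigh(r2)
    assert np.allclose(np.sort(w1), np.sort(w2))
    return v2 @ v1.conj().T

U = connecting_unitary(rho_spec, rho)
print(np.allclose(U @ rho_spec @ U.conj().T, rho))  # True
```

The number of gates needed to compile such a U in a fixed gate set is then an upper bound on the circuit distance from ρ_spec to ρ.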

Expectations from tensor networks
To give a sense of these definitions and how they behave in a field-theory context, let us imagine applying them to a chaotic spin chain whose low-energy physics is described by a strongly interacting conformal field theory with central charge larger than one. Below, these expectations will be compared with the results of holographic calculations. To fix notation, suppose that the model consists of n qubits with Hamiltonian H. The Hamiltonian has energies E_i and eigenvectors |i⟩. Throughout we consider the thermal state, ρ ∝ e^{−H/T}. We focus on the two extremes: T = 0 and T = ∞.
At zero temperature, the ground state has approximate conformal invariance. Assuming there is a renormalization group circuit which prepares the ground state, e.g., a MERA-like circuit, the pure-state complexity of the ground state is of order n, say C(T = 0) = k_1 n. Since the state is pure, it has trivial spectrum, and we find

T = 0 :  C_P ≈ C_B ≈ C̃_B ≈ k_1 n ,  C_S ≈ 0 .

At infinite temperature, the thermal state is maximally mixed. In this case one finds

T = ∞ :  C_P ≈ C_S ∼ n ,  C_B ≈ C̃_B ≈ 0 .

These statements follow because any state with the right (uniform) spectrum is automatically the right state, and because the maximally mixed state can be prepared with order n gates using n ancilla. Based on these two limits, we can make a minimal guess for the temperature dependence of the various complexity measures. We observe that C_P need not depend strongly on temperature, although of course it could have some temperature dependence. Meanwhile, C_S should increase while C_B and C̃_B should decrease as a function of temperature, although again we obviously cannot rule out non-monotonicity. Furthermore, it seems reasonable to suppose that C_B and C̃_B are of the same order for all temperatures.
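The two extremes can be illustrated with a trivial stand-in model (ours, not the paper's): n non-interacting spins in random fields, whose thermal entropy interpolates between S = 0 at T = 0 and the maximally mixed value S = n log 2 at T = ∞:

```python
import numpy as np

# Thermal entropy of n non-interacting spins (a crude stand-in for the
# chaotic chain): S vanishes at T = 0 and saturates at n log 2 at T = inf,
# where the thermal state is maximally mixed.
rng = np.random.default_rng(1)
n = 6
fields = rng.uniform(0.5, 1.5, size=n)  # single-spin energy gaps

def thermal_entropy(beta):
    # Sum of two-level von Neumann entropies
    S = 0.0
    for h in fields:
        p = 1.0 / (1.0 + np.exp(-beta * h))  # ground-state probability
        for x in (p, 1.0 - p):
            if x > 0:
                S -= x * np.log(x)
    return S

print(thermal_entropy(100.0))  # ~ 0       (T -> 0)
print(thermal_entropy(0.0))    # n log 2   (T -> infinity)
print(n * np.log(2))
```

In this caricature the entropy S(T) is the quantity that the guesses C_S ∼ S and C_B ∼ k_1 n − S below are tracking.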
We can use intuition from tensor networks to be a bit more specific about the form of these complexities at intermediate temperatures. If we imagine that the minimal circuit which prepares a purification of the thermal state has two pieces, one which prepares the spectrum and one which prepares the basis, then it is natural to guess that C_S ≈ α S and C_B ≈ k_1 n − β S, where S is the thermal entropy and α and β are positive coefficients. In a MERA-like circuit, these behaviors come from two distinct effects: (1) the spectrum must be prepared, and if the spectrum may in some sense be approximated by Bell pairs, then the complexity should be roughly proportional to the entropy; (2) the basis must be prepared, but whereas the ground state has long-range correlations, the mixed state has shorter-range correlations, so less of the renormalization group part of the circuit is needed. Counting gates shows that the reduction is also roughly proportional to the entropy. However, it is not clear how the coefficients α and β compare or how they depend on temperature, e.g., due to logarithmic factors. Hence it is not clear at this level how C_P = C_S + C_B ∼ k_1 n + (α − β)S depends on temperature.⁸

⁸ It is worth noting that we have good reason to believe the TFD state is not the minimal purification. Specifically, in the basis and spectrum language, the TFD has many "useless" gates that adjust the basis of the purifying system. This basis change does not affect the state of the original system, hence it is non-minimal.

Comparison to holographic calculations
We now compare our various definitions and expectations to the holographic computations described above. For simplicity, we focus on the uncharged eternal black hole in any dimension. For the thermofield double state we found that CA generically predicts either sign for the coefficient of the entropy term, depending on the counterterm ambiguity (see (2.6)). The relation between these quantities and the entropy was, schematically, C_A(ρ) ∼ n ± S (see (2.13)) and C_V(ρ) ∼ n + S (see (2.43)). Here n stands for V_⊥/δ^{D−2} (the volume of the CFT in cutoff units), we have dropped logarithmic divergences, and we only care about the sign of the coefficient in front of the S terms (in D = 3 the coefficient of S in the volume is exactly zero). Are any of the quantities C_P, C_S, C_B, and C̃_B consistent with CA? Suppose first that C_A increases with temperature and is subadditive. Then the temperature dependence rules out interpreting it as C_B or C̃_B, because we expect the latter to decrease with temperature. Under the plausible assumption that the spectrum can be prepared without preparing the whole UV of the field theory, it follows that C_A also cannot be interpreted as C_S, since the former is UV sensitive. Moreover, C_S does not reduce to the pure-state definition of complexity. On the other hand, C_P does appear consistent with our expectations and the CA results. In particular, if we think of C_P as roughly the cost of the spectrum plus the cost of the basis, then because we must prepare the spectrum twice when preparing two copies of ρ but only once when preparing |TFD⟩, it follows that 2C_P(ρ) > C(|TFD⟩), which corresponds to a subadditive C_A.
Now suppose that C A decreases with temperature and is superadditive. Again because of the UV divergence, the spectrum complexity is not a good match. The purification complexity is also no longer a good match since it should be subadditive. However, the basis complexity is now a better match. In particular, the basis complexity should decrease with temperature and be superadditive. This is because the TFD complexity should be roughly the spectrum complexity plus twice the basis complexity of a single side (for the left and right sides), hence it should be greater than twice the basis complexity of a single side.
What about CV? For reasons similar to the superadditive CA case, C_S, C_B, and C̃_B are ruled out. Interestingly, C_P is also ruled out, since we have 2C_V(ρ) = C_V(|TFD⟩). Equality here is inconsistent with our previous story about basis plus spectrum unless the cost of the spectrum is zero.

A similar analysis can be applied to the case of charged black holes. For weakly or moderately charged black holes, we find that, within the precision of our analysis, C A can be qualitatively matched to the purification complexity or the basis complexity. The extremal limit is an interesting further probe of complexity/geometry duality, since we find that both measures diverge there. It will be interesting to explore the possible physical significance of this divergence in the future, since it seems unexpected from the point of view of boundary complexity.

Other definitions
The conclusion of the preceding analysis is that CA duality appears consistent with the C_P definition of mixed-state complexity. By contrast, CV duality apparently cannot be consistently interpreted in terms of C_P, unless our analysis in terms of spectrum and basis is badly misguided. However, this analysis is corroborated in its broad outlines by a tensor network picture of the thermal state. Confronted with these conclusions, we now expand the discussion to include other possible definitions of complexity.
Open system approach: given a mixed state ρ, an initial state ρ_0 = |0 . . . 0⟩⟨0 . . . 0|, a set of allowed quantum operations⁹ G, and a tolerance ε, the open-system complexity C_O of ρ is defined as the minimum number of operations from G needed to transform the initial state into ρ up to the tolerance, say in trace norm. Formally, the open-system complexity C_O is the minimum k such that ‖(Φ_k ∘ · · · ∘ Φ_1)(ρ_0) − ρ‖_1 ≤ ε, where Φ_i ∈ G. We could obviously modify this definition by weighting elements of G differently, by adjusting how the tolerance is defined, or by changing the initial state. Note that even if ρ is a pure state, it is possible that by allowing general quantum operations, as opposed to just unitary transformations, some states could be reached more quickly.
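The gap between quantum operations and unitaries is easy to see in a small example (our own sketch): a single fully depolarizing channel maps the reference state directly to the maximally mixed state, whereas unitaries preserve the spectrum and can never leave the set of pure states:

```python
import numpy as np

# A single fully depolarizing channel takes |0..0><0..0| straight to the
# maximally mixed state; no sequence of unitaries can do that, since
# unitary conjugation preserves the spectrum of the density matrix.
def depolarize(rho, p):
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

n = 3
d = 2 ** n
rho0 = np.zeros((d, d))
rho0[0, 0] = 1.0  # |0...0><0...0|

rho1 = depolarize(rho0, 1.0)
print(np.allclose(rho1, np.eye(d) / d))  # True: one operation suffices

# By contrast, a (random orthogonal) unitary leaves the spectrum fixed:
U, _ = np.linalg.qr(np.random.default_rng(2).normal(size=(d, d)))
print(np.allclose(np.linalg.eigvalsh(U @ rho0 @ U.T),
                  np.linalg.eigvalsh(rho0)))  # True
```

This is the precise sense in which C_O can differ from the unitary definitions above: the allowed operations can change the spectrum in a single step.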
Since allowing more general quantum operations, i.e., unitaries acting also on ancilla, does not reduce to the ancilla-less definition of pure state complexity, it follows that C O can give different results than C when applied to pure states. It is not clear to us if the results can be vastly different, but we do know of cases where there is some difference. For example, in the context of quantum many-body physics, it is known that a Chern insulator ground state cannot be prepared by a finite depth circuit without ancilla, but two copies of a Chern insulator ground state (really a copy and a conjugate copy) can be prepared by a finite depth circuit. Here the inclusion or not of ancilla makes a substantial difference.
From the perspective of holography and tensor networks, it seems to us that ancilla have generally not played a role in the discussion. In other words, the general point of view has been that complexity should be defined with respect to the intrinsic resources of the system and should not make reference to auxiliary degrees of freedom. From this point of view, it makes more sense to think of the purifying system as physical, i.e., as instantiated in the rest of the geometry, instead of merely as ancilla used to apply more general quantum operations to a state. One concrete difference between the two points of view is whether or not we can act repeatedly on the ancilla.
Ensemble approach: an alternative point of view on mixed-state complexity arises from the fact that a mixed state can be written as a convex combination (i.e. ensemble) of pure states, ρ = Σ_i p_i |φ_i⟩⟨φ_i|. We can thus define the ensemble complexity of ρ as the corresponding convex combination of the complexities of the elements |φ_i⟩, minimized over ways of writing ρ as an ensemble:

C_E(ρ) = min_{{p_i, |φ_i⟩}} Σ_i p_i C(|φ_i⟩) . (3.14)

Note that the eigenbasis of ρ is only one possible ensemble, and may be far from the minimal one. Furthermore, the states in a given ensemble need not be orthonormal, e.g., they could be overcomplete. This ensemble-based definition seems qualitatively different from the other definitions given above, although we can relate it to them in some cases. It does have the virtue of reducing to the pure-state complexity when the state ρ is pure. One reason for considering this notion of complexity is that none of the other options we considered seemed to be a very good match for C_V. As we will explain below, however, while C_E is roughly consistent with C_A, it also does not seem very well suited to C_V.
What are our expectations for C_E in the spin chain model considered above? At zero temperature it should agree with C_P, which is just the pure-state complexity of the ground state. At infinite temperature the minimal-complexity ensemble is simply the ensemble of product states. Hence we have C_E(T = 0) ≈ k_1 n and C_E(T = ∞) ∼ n. If we tried to match these expectations to CV duality, we would be faced with the unusual conclusion that the ensemble complexity of the thermal state is always exactly equal to the complexity of the thermofield double state for all temperatures. While we are not aware of anything ruling this out, it seems unlikely to be true. For example, we can definitely find models, e.g., models with a trivial tensor product ground state, in which C_E is strongly dependent on temperature.

Bounds on subsystem complexity
The ensemble approach seems to have a certain advantage over the other definitions, as it seems more amenable to explicit evaluation. To illustrate this point we compute a bound on the ensemble complexity (relative to the ground state) following the work of [59]. For some work towards defining complexity in quantum field theory see [56, 57, 60, 61]. In [59] the authors argued that the relative pure-state complexity (the minimum number of gates required to take the vacuum state to another pure state) associated to the coherent state |re^{iθ}⟩ = e^{−r²/2} e^{re^{iθ} a†} |0⟩ is given by

C(|re^{iθ}⟩, |0⟩) = r(|cos θ| + |θ||sin θ|) , (3.17)

where a† is the creation operator of a single simple harmonic oscillator, and θ is an angular coordinate with range [−π, π).
We would like to make a slightly weaker assumption and propose the formula C(|re^{iθ}⟩, |0⟩) = r f(θ), where f(θ) is left undetermined, and use this simple result to illustrate a simple way of finding useful bounds on the ensemble complexity. This requires knowledge of a formula for the pure-state complexity, as well as a good candidate low-complexity ensemble. We illustrate this in the simplest example of a single harmonic oscillator, as well as its generalization to free scalar quantum field theory.

Single oscillator
Consider a single oscillator mode of a quantum mechanical system with Hamiltonian H = ω_0 a†a, where [a, a†] = 1. We would like to bound the ensemble complexity associated to the thermal state ρ_β ∝ e^{−βH}, where β = 1/T is the inverse temperature. The thermal state, which one would normally write in the Hamiltonian eigenbasis {|n⟩} as

ρ_β = (1/Z) Σ_n e^{−βE_n} |n⟩⟨n| , (3.20)

with Z = 1/(1 − e^{−βω_0}), has an equivalent decomposition in terms of the normalized coherent states |re^{iθ}⟩, which are obtained from the vacuum by local unitary transformations and are therefore of relatively low complexity. Indeed, using the expansion of the coherent states in the number basis, it is easy to check that ρ_β is also given by

ρ_β = (A/π) ∫_0^∞ r dr ∫_{−π}^{π} dθ e^{−A r²} |re^{iθ}⟩⟨re^{iθ}| , (3.21)

with A = e^{βω_0} − 1. This then represents a relatively low-complexity ensemble whose average complexity, weighted by the Gaussian measure in (3.21), gives an upper bound C_E^b on the ensemble complexity, C_E ≤ C_E^b. We would like to estimate the value of this upper bound using the formula (3.18) for the pure-state complexity of coherent states. However, we do not have a formula for C(|re^{iθ}⟩) but rather for C(|re^{iθ}⟩, |0⟩); we therefore obtain a bound on the relative quantity ΔC_E,¹⁰ which we denote ΔC_E^b:

ΔC_E^b = (A/π) ∫_0^∞ r dr ∫_{−π}^{π} dθ e^{−A r²} r f(θ) = (1/(4√(πA))) ∫_{−π}^{π} dθ f(θ) .

One can try to relate this answer to the thermodynamic quantities of the single oscillator. This is most natural in the two extreme temperature regimes: at low temperature ΔC_E^b ∼ e^{−βω_0/2} while S ∼ (βω_0) e^{−βω_0}, and at high temperature ΔC_E^b ∼ 1/(βω_0)^{1/2} while S ∼ −log(βω_0), so that ΔC_E^b ∼ e^{S/2}.
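The coherent-state decomposition of the thermal state can be verified numerically by comparing the number-basis diagonal of the Gaussian coherent-state ensemble with the Bose-Einstein occupation probabilities (a check we ran ourselves; the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from math import factorial

# Check that the coherent-state ensemble with weight (A/pi) exp(-A r^2),
# A = exp(beta*w0) - 1, reproduces the thermal occupations <n|rho_beta|n>.
beta, w0 = 0.7, 1.3
A = np.exp(beta * w0) - 1.0

def pn_ensemble(n):
    # |<n|r e^{i theta}>|^2 = e^{-r^2} r^{2n} / n!  (theta integral -> 2*pi)
    integrand = lambda r: (A / np.pi) * 2 * np.pi * r \
        * np.exp(-A * r**2) * np.exp(-r**2) * r**(2 * n) / factorial(n)
    val, _ = quad(integrand, 0, np.inf)
    return val

def pn_thermal(n):
    # <n|rho_beta|n> for H = w0 a^dag a
    return (1 - np.exp(-beta * w0)) * np.exp(-n * beta * w0)

for n in range(4):
    print(n, pn_ensemble(n), pn_thermal(n))
```

The Gaussian radial integral gives A/(A+1)^{n+1}, which is exactly the thermal probability (1 − e^{−βω_0}) e^{−nβω_0}, confirming the decomposition mode by mode.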

Free scalar QFT
The field theory estimate proceeds from the oscillator discussion by considering many oscillators: one mode a_k for each spatial momentum k. In this simple case one simply adds the contribution of each k.

^10 A similar quantity, called the complexity of formation, was central in the discussion of [18].

JHEP02(2019)145
We see that this is just a product of the coherent states of the previous case, one for each momentum k_i. If one writes ρ_β = ⊗_{k_i} ρ^{k_i}_β, then (3.21) holds for each ρ^{k_i}_β with r → r_{k_i} and θ → θ_{k_i}, and therefore also holds for the product, leading to the full thermal density matrix.
Its complexity, which is linear in the parameter that appears in the exponent, is given by the sum of the individual complexities, since there does not seem to be a shortcut even in this case: (3.32). Therefore an upper bound on the mixed state complexity is given by (3.33). Consider a system of relativistic particles, so ω_k ∼ |k| (in this limit the theory becomes conformal). Going to the continuum, the mode sum becomes an integral, where V is the spatial volume and Ω_{D−2} = 2π^{(D−1)/2}/Γ((D−1)/2) is the volume of the (D−2)-dimensional sphere. In the last integral we made the change of variables x → βk/2 and noticed that the integral is finite for all values of D > 2, so we removed the cut-off, taking Λ → ∞. The numerical value of the integral can be evaluated case by case; for example, for D = 3 it takes the value π log(2)/√2.
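The per-mode bound is not reproduced in this extraction, but the low- and high-temperature behaviors quoted above are consistent with a per-mode contribution proportional to 1/√(e^{βω_k} − 1). Under that assumption, with a √2 normalization chosen here only to match the quoted D = 3 value, the dimensionless integral can be checked numerically:

```python
import numpy as np

def trap(y, x):
    """Simple trapezoid rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

D = 3
x = np.linspace(1e-9, 40.0, 1_000_000)
# Assumed integrand after x = beta k / 2: sqrt(2) * x^(D-2) / sqrt(e^(2x) - 1);
# the sqrt(2) prefactor is a guess fixed by the value quoted for D = 3.
integrand = np.sqrt(2.0) * x**(D - 2) / np.sqrt(np.expm1(2.0 * x))
I = trap(integrand, x)

target = np.pi * np.log(2.0) / np.sqrt(2.0)   # value quoted in the text for D = 3
assert abs(I - target) < 1e-3
```

Analytically, substituting t = e^{−x} reduces the D = 3 integral to ∫_0^1 (−log t)/√(1 − t²) dt = (π/2) log 2, so the √2-normalized integral is indeed π log(2)/√2.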

Discussion
This work analyzed various holographic proposals for subsystem complexity and compared the results for eternal black holes to various qubit-based proposals for subsystem complexity. Simple tensor network models were used to develop intuition for the behavior of these measures in a strongly interacting system of qubits, which might be expected to reasonably model the gross features of a holographic conformal field theory. While CA duality could be reasonably matched to the purification complexity or the basis complexity, we found that CV duality was somewhat in tension with the various proposals we considered. This tension arose in part because CV duality requires that subsystem complexities be superadditive with respect to the total system complexity. If the action is defined so that the UV divergent terms in CA are positive, then we found that the basis complexity is the best match for CA. One interesting direction for future work is to search for other measures of complexity that might be better matched to CV duality; alternatively, one could try to modify CV duality, e.g., by including new contributions localized at the RT surface in the bulk. Another interesting direction is to try combining the notions of mixed state complexity studied here with more field-theoretic notions of pure state complexity. While we were able to draw some conclusions about the way different spacetime regions contribute to complexity (for example, the interior contribution in CA duality being associated with the spectrum preparation), there is still much more to learn about the way subregions influence the state complexity. It would also be very interesting to further explore holographic subsystem complexity in time-dependent situations, especially its covariant aspects.

Acknowledgments
We would like to thank Mohsen Alishahiha, Josiah Couch, Stefan Eccles, Hugo Marrochio, Reza Mozaffar, Rob Myers, and Phuc Nguyen for very useful discussions. We would also like to thank the anonymous referee for her or his many valuable suggestions on improving the presentation.

A Corner terms in subsystem complexity
In the complexity equals action prescription, it was argued that spacetime regions bounded by intersecting null sheets give rise to additional corner (joint) terms whose contribution to the action has the form given in [50], where Σ is the (codimension-two) corner locus and a is the corner integrand. The sign depends on the causal relation between the region of interest (for the purpose of the action evaluation) and the null sheets defining Σ. The normals k, k̄ are the tangent vectors to the null sheets. We parametrize the sheets by λ = −l/z for a future boundary of the region and λ = l/z for a past boundary. Given the metric, the inner product between the normals follows, and therefore for the corner terms we have
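The explicit expressions did not survive extraction; for orientation, the joint contribution of [50] is usually written in the following standard form (a reconstruction, up to sign and normalization conventions):

```latex
S_{\Sigma} \;=\; \pm\,\frac{1}{8\pi G_N}\oint_{\Sigma} d^{D-2}x\,\sqrt{\gamma}\; a \,,
\qquad
a \;=\; \log\left|\frac{k\cdot\bar{k}}{2}\right| ,
```

where γ is the induced metric on Σ and k, k̄ are the null normals to the two intersecting sheets.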

where z i is the z coordinate at the corner i.
In this appendix we study these corner terms as they appear in the subsystem complexity evaluation for the charged and uncharged eternal black holes studied in sections 2.3 and 2.2, which have precisely the form (A.3) with the appropriate blackening function in the charged and uncharged cases respectively.
In that evaluation we have four corners: one on the boundary, W− ∩ W+, and three on the horizon, W+ ∩ H+, H+ ∩ H− and W− ∩ H−, as illustrated in figure 7. A challenge posed by (A.5) is how to evaluate those expressions at the horizon, since they diverge at that exact location. Our strategy is to approximate the contribution from the surfaces lying on the horizon by that of nearby null surfaces approaching it. In that case the surface terms give vanishing contributions and the corners can be evaluated using (A.5). Once a regularized answer is obtained, we expect a finite final result in the limit in which the surfaces approach the horizon. We now explain this evaluation in detail.

First, let us consider a coordinate system in which the null surfaces are easily described. This can be achieved with the null coordinates u, v, defined for z < z_h (the left exterior region) and for z > z_h (which covers the future interior region). Similar coordinate patches can be defined for the other two regions to cover the full geometry of the eternal black hole. The tortoise coordinate z*(z) approaches a constant at the boundary and grows arbitrarily large towards the horizon; we fix its boundary value to zero, that is z*(δ) = 0. We assume that f′(z) is smooth and different from zero in a neighborhood of z = z_h; we will comment on the extremal charged case later on.
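The explicit coordinate definitions were lost in extraction; a Kruskal-type choice consistent with the description above (a reconstruction, with signs and the constant κ to be fixed per quadrant) is:

```latex
u \;\propto\; e^{\,\kappa\,(t - z_*(z))}, \qquad
v \;\propto\; e^{-\kappa\,(t + z_*(z))}, \qquad
z_*(z) \equiv \int_{\delta}^{z} \frac{dz'}{f(z')}\,,\quad z_*(\delta)=0,
```

with κ ∼ |f′(z_h)|/2, so that the product uv = e^{−2κ z_*(z)} depends on z alone (and equals one at the boundary), and the future and past horizons sit at v = 0 and u = 0 respectively, as used below.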
Since the left future and past horizons H+ and H− correspond to the surfaces v = 0 and u = 0 respectively, we consider instead the nearby surfaces v = ε_v and u = ε_u as the surfaces that approach them.^11 Assuming that the boundary corner W+ ∩ W− corresponds to the coordinates (u_0, v_0), the three near-horizon corners will be at (u_0, ε_v), (ε_u, v_0) and (ε_u, ε_v), which are associated to three different radial coordinate values, denoted z_{u_0,ε_v}, z_{ε_u,v_0} and z_{ε_u,ε_v} respectively. All these points have been regularized and depend on the regularization parameters (ε_u, ε_v) as well as on the physical information contained in u_0 and v_0.
Even though we do not know the explicit function z*(z) for general f(z), and hence the relation between the z coordinate and the parameters u_0 and v_0, one can find useful relations between them by using the equation equivalent to (A.10). Notice that the corner contributions were derived in [50] precisely so as to guarantee the additivity property of the action. It is therefore interesting to check what that implies for a regularization of the horizon surfaces such as the one proposed above. An obvious requirement is the following.
Consider a spacetime region that crosses the horizon at v = 0, divided in two along the horizon with corners at u_1 and u_2. The additivity property of the action tells us that the regularization should be such that the corner terms on opposite sides of the division cancel appropriately.

The above limit can be evaluated by using the series expansion of B(z/z_h; 1/(D − 1), 0) for z < z_h, which results in an expression involving the digamma function ψ_0(z) = Γ′(z)/Γ(z), with ψ_0(1) = −γ, where γ ≈ 0.577216 . . . is the Euler-Mascheroni constant. The full contribution of the corner terms to the subsystem complexity of neutral black holes can then be written compactly, where we have introduced a coefficient for ease of notation.
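The series expansion referred to above is presumably B(x; a, 0) = Σ_{n≥0} x^{a+n}/(a + n), whose behavior as x → 1 is B(x; a, 0) = −log(1 − x) − ψ_0(a) − γ + o(1), which is where the digamma function enters. A numerical sketch of this limit for a = 1/(D − 1) with D = 4 (the target value uses Gauss's digamma theorem for ψ_0(1/3)):

```python
import numpy as np

# Incomplete beta with vanishing second argument, via its series:
#   B(x; a, 0) = sum_{n>=0} x^(a+n) / (a+n)
# Claim to check:  B(x; a, 0) + log(1-x)  ->  -psi_0(a) - gamma  as x -> 1.
a = 1.0 / 3.0                 # a = 1/(D-1) for D = 4
x = 1.0 - 1e-3
n = np.arange(0, 200_000)
# Combine term by term with the series of -log(1-x) for convergence:
combined = x**(a + n) / (a + n) - x**(n + 1) / (n + 1)
limit_est = float(combined.sum())

# Gauss digamma theorem: psi_0(1/3) = -gamma - (3/2) log 3 - pi/(2 sqrt 3),
# hence  -psi_0(1/3) - gamma = (3/2) log 3 + pi/(2 sqrt 3).
target = 1.5 * np.log(3.0) + np.pi / (2.0 * np.sqrt(3.0))
assert abs(limit_est - target) < 5e-3
```

The residual difference is of order (1 − a)(1 − x), consistent with the o(1) remainder of the expansion.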

A.2 Charged black hole
Unlike for neutral black holes, for charged black holes we do not have an explicit expression for z*(z). Nevertheless, we know that the function F(z_h), which in this case can depend on z_h, q² and D, is well defined and finite. Manipulations similar to the ones performed in appendix A.1 remain useful:

A.3 Extremal black hole
The previous analysis only works when f′(z_h) ≠ 0; it is well known that for extremal black holes this is not the case, and so one would need to adapt the previous procedure to account for those particular cases. It is easy to modify the null coordinates (A.8), (A.9): the simple replacement f′(z_h) → −f″(z_h) z_h makes them a consistent coordinate chart. The resulting integral, with u = x^{D−1}, is another member of the family of integrals computed in appendix C. Using equation (C.12), we get the action and the bulk contribution to the "complexity". In the interior region we also have a York-Gibbons-Hawking boundary term, which gives a non-zero contribution on the space-like surface that covers the singularity, and zero on the light-like surfaces. This contribution has the form ∫_{∂M} √|h| K (B.17)

where K is the trace of the extrinsic curvature of ∂M and h is its induced metric, K = n^μ ∂_μ|h| / (2|h|), (B.18) with n^μ the normal to the surface in question. At z = ∞ we have |h| = −l^{3D−4} f(z) z^{2−2D} and n = (z/l)√(−f(z)) ∂_z. We also have the counterterm boundary contribution coming from the null surfaces that go from the left and right black hole horizons to the singularity. Those terms can be evaluated as in section 2.2.1. The corner calculation is a bit more involved, although it follows directly from the procedure outlined in appendix A; one has to be careful with the sign of each corner as well as with the sign change in f(z). Notice that for this range of boundary times t_L, t_R the first term in the resulting expression is equal to the full term coming from the boundary contribution (B.19). Adding all the contributions, we get the full interior action; in particular, for t_L = t_R = 0 we have

Notice that for c greater than or equal to the bound motivated in section 2.2.1, based on requiring positivity of the divergent contribution in C_A^L(T), each term in (B.24) is positive definite (since both g_0 and t_c are negative) and therefore A^+_int > 0. For comparison with the pure state complexity of the thermofield double state, we would also like the past interior action in the same regime, that is −t_c > t_L, t_R > t_c. In that case the answer can be obtained from the future interior result by simply taking t_{L,R} → −t_{L,R}.
Recalling the additivity property of the action, one can write the pure state complexity in the interval −t_c > t_L, t_R > t_c in terms of these contributions, where σ = R ∪ L denotes the full system. During this time interval the complexity is time independent, as noted in [39].

The particular set of integrals we are interested in has the general form ∫ x^β B(x; α, 0) dx (C.4), which we aim to study here for positive and negative integer values of β.
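For reference, B(x; α, 0) = ∫_0^x t^{α−1}/(1 − t) dt admits the series Σ_{n≥0} x^{α+n}/(α + n); a quick numerical consistency check of the two representations (the substitution u = t^α tames the endpoint singularity; the sample values α = 0.4, x = 0.5 are arbitrary):

```python
import numpy as np

def trap(y, x):
    """Simple trapezoid rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

alpha, x = 0.4, 0.5

# Series representation:  B(x; a, 0) = sum_{n>=0} x^(a+n) / (a+n)
n = np.arange(0, 500)
B_series = float(np.sum(x**(alpha + n) / (alpha + n)))

# Direct quadrature of int_0^x t^(a-1)/(1-t) dt with u = t^a,
# so that t^(a-1) dt = du / a and the integrand is regular at u = 0:
u = np.linspace(0.0, x**alpha, 200_000)
B_quad = trap(1.0 / (alpha * (1.0 - u**(1.0 / alpha))), u)

assert abs(B_series - B_quad) < 1e-4
```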