Subsystem Complexity and Holography

We study circuit complexity for spatial regions in holographic field theories, focusing on analogues, based on the entanglement wedge, of the bulk quantities appearing in the "complexity = volume" and "complexity = action" conjectures. We calculate these quantities for one exterior region of an eternal static neutral or charged black hole in general dimensions, dual to a thermal state on one boundary (with or without chemical potential, respectively), as well as for a shock wave geometry. We then define several analogues of circuit complexity for mixed states, and use tensor networks to gain intuition about them. We find a promising qualitative match between the holographic action and what we call the purification complexity, the minimum number of gates required to prepare an arbitrary purification of the given mixed state. On the other hand, the holographic volume does not appear to match any of our definitions of mixed-state complexity.


Introduction
There has been much recent progress in understanding how spacetime emerges from field theory degrees of freedom within the AdS/CFT correspondence. Considerations involving entanglement [1][2][3][4][5][6], quantum error correction [7], and other ideas from quantum information science have provided new clues concerning the emergence of the classical bulk geometry as well as the reconstruction of approximately local quantum fields in the bulk [8][9][10][11].
Tensor networks provide one set of toy models that instantiate many of the features of AdS/CFT [12] and that can also describe in detail the physics of more conventional systems [13,14]. Motivated by these tensor network models and by considerations involving the dynamics of black hole interiors, it was proposed that the quantum computational complexity of the boundary field theory state would also be encoded geometrically in the dual gravitational spacetime [15][16][17][18].
To be more specific, in the context of the eternal AdS-Schwarzschild black hole it was observed that the wormhole which connects the two sides grows linearly with time, as measured for example by the length of a geodesic stretching through the wormhole [15,19]. One can then ask what the CFT dual of this linear growth is. The conjecture is that the growth of the wormhole is dual to the growth of complexity of the dual CFT state. Roughly speaking, the complexity of the CFT state is the minimum number of simple unitaries or "gates" needed to prepare the CFT state from a fixed reference state.
The physical picture is that the complexity of a state can increase due to Hamiltonian time evolution, and it is this increase of complexity that is dual to the late-time growth of the interior. Tensor network models again provide a concrete instantiation of complexity on the CFT side, with the complexity being defined as the number of tensors in the minimal network that describes the state. On the field theory side, one of the key open questions is how to define complexity more precisely. On the gravity side, one of the key issues is how to differentiate between different bulk proposals, including "complexity equals volume" (CV) [15,16], "complexity equals action" (CA) [20,21], and others [22]. There are by now a large number of papers developing and extending these ideas [23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38].
In this paper, we consider the problem of defining and evaluating complexity for subsystems of the CFT. To define subsystem complexity on the CFT side, we face the problem of defining complexity for mixed states, since subsystems will generically not be in pure states. On the gravity side, we must find suitable geometric measures which combine the global complexity measures with the physics of subsystem duality in AdS/CFT. Holographic subsystem complexity has recently been studied in a variety of works, with several proposals analogous to CV and CA being advanced and studied [39][40][41][42][43][44].
This study was motivated by a desire to better understand the relationships between CA and CV duality and to subject the basic idea of a complexity/geometry duality to a new set of tests. We also wanted to gain insight into the way different subregions of the bulk might be represented in tensor network models.
Our main contributions are as follows. First, we study the analogues of CA and CV duality for subsystems, for simple subregions of eternal black holes. This study includes calculations of actions and volumes for a variety of black holes as a function of dimension, temperature, and charge. Second, we define a variety of measures of mixed-state complexity and compare our definitions to the CA and CV calculations. This analysis complements and extends various discussions given in the holographic literature. Among the definitions of mixed-state complexity that we consider, we find that CA duality accords reasonably well with one definition in particular, but that CV duality is harder to reconcile consistently with our notions of subsystem complexity.
To more precisely define the holographic quantities we consider, recall that there are reasons to believe that the reduced density matrix for a spatial region A in a holographic theory is encoded in the corresponding entanglement wedge E_A [7,45]. Combining this observation with the CV and CA proposals for holographic complexity leads one to consider two bulk quantities that can be defined for a given region A: the volume C_V(A) of a maximal Cauchy slice for E_A anchored on A, and the action C_A(A) of the Wheeler-de Witt (WdW) patch W_A of E_A associated to A. W_A is defined as the set of points in E_A that are spacelike- or null-related to A (i.e. not in I^+(A) ∪ I^-(A)), or equivalently as the intersection of E_A with the WdW patch W of any complete boundary slice containing A.

There are many possible situations in which we could study the above volume and action quantities. For definiteness, in this paper we focus on two-sided (neutral or charged) static AdS black holes, with the subsystem L consisting of a constant-time slice of one boundary. The state of L is thus a Gibbs state (with or without a chemical potential). The entanglement wedge E_L of one boundary for a static black hole consists of the corresponding exterior region. Figure 1 illustrates E_L, the WdW patch W for a complete boundary slice, and their intersection W_L.

1 The equivalence between these two definitions of the subsystem WdW patch can be shown as follows. The first definition is W^1_A := E_A \ (I^+(A) ∪ I^-(A)). Let σ be a boundary Cauchy slice containing A. Then W is the complement of I^+(σ) ∪ I^-(σ), so the second definition is W^2_A := E_A \ (I^+(σ) ∪ I^-(σ)). Clearly I^±(A) ⊆ I^±(σ), so W^1_A ⊇ W^2_A. For the other direction, let x be a point in W^1_A. Since x ∈ E_A, it is not timelike-related to E_{σ\A} [46], and in particular is not timelike-related to σ \ A. Since x is also not timelike-related to A, it is not timelike-related to σ, and is therefore in W^2_A.
In section 2, we calculate C A (L) and C V (L) for neutral black holes in D ≥ 3 and charged black holes in D ≥ 4. Note that the corresponding Gibbs states are static, reflecting the fact that the bulk isometry generated by the timelike Killing vector of the exterior region relates different constant-time slices of the boundary. Therefore the quantities C A (L) and C V (L) are time-independent. This is in contrast to the corresponding quantities for the full system, whose state, the thermofield double, is not static. We also compute C A (L) for a thermalizing system dual to a shock wave geometry, finding that at late times it grows linearly at a rate 2M , just as for the thermofield double.
In section 3, we define various measures of complexity for subsystems as well as for general mixed states (i.e. mixed states considered independently of any particular purification). In this discussion, we assume that some notion of pure-state complexity has been previously defined. This notion, however, can be used in different ways to define the complexity of a mixed state ρ. For example, we can consider a purification of ρ and evaluate its complexity, or we can decompose ρ into an ensemble of pure states and average their complexities. Other choices in the definition include whether to include ancilla degrees of freedom. By estimating the value of each measure on Gibbs states, and in particular its relation to the entropy, and comparing to the results obtained in section 2, we are able to rule out some of the proposed definitions as being related to either C A or C V . On the other hand, we find a promising qualitative match between C A and the purification complexity, roughly speaking the minimum complexity of any purification of the given mixed state.
The appendices contain certain details of the calculations, including a careful treatment of the corner terms that arise in the action calculations.

Holographic calculations
In this section, we consider static two-sided asymptotically AdS black holes, and take the region L to be a constant-time slice of one boundary. Its entanglement wedge E_L is the corresponding exterior region in the bulk. We compute the volume C_V(L) of a maximal slice and the action C_A(L) of the Wheeler-de Witt (WdW) patch for this exterior region. These quantities are relevant for the subsystem analogues of CV and CA dualities, respectively. We comment on the differences between the two quantities and on the possibility that both dualities hold, as they could potentially provide information about different notions of subsystem complexity. We treat the neutral case in subsection 2.2 and the charged one in subsection 2.3. We summarize the results at the top of each subsection before entering into the details of the calculations. First, however, in subsection 2.1, we make a qualitative observation concerning C_V(L) and C_A(L) that will play an important role when we compare these quantities to candidate complexity measures in Section 3.
Finally, in subsection 2.4 we compute C A (L) for a shockwave geometry, dual to a system undergoing thermalization after an injection of energy. We find exactly the same late-time behavior as for the two-sided black holes, namely a linear growth at a rate 2M .

Relation between subsystem and full-system measures
We begin with a general observation about the volume measure C_V. Let σ be a boundary Cauchy slice, A a region of σ, and A^c := σ \ A its complement. We assume that the full system on σ is in a pure state. Then we claim that

    C_V(A) + C_V(A^c) ≤ C_V(σ) ; (2.1)

in other words, C_V is superadditive. The reason is that the left-hand side equals the maximum volume of a complete Cauchy slice bounded by σ which is constrained to pass through the HRT surface m(A) (which is the same as m(A^c), since the full system is pure), while the right-hand side is the maximum volume of a Cauchy slice that is bounded by σ but not constrained to pass through m(A).
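The logic behind this inequality is simply that a constrained maximum never exceeds the unconstrained one. A minimal numerical illustration of that logic (with a hypothetical one-parameter family of "slices" standing in for the actual bulk variational problem):

```python
# Toy "volume functional" over a one-parameter family of slices; the
# quadratic profile and the constraint window are purely illustrative.
def volume(s):
    return 10.0 - (s - 0.3) ** 2

slices = [i / 1000.0 - 1.0 for i in range(2001)]

# unconstrained maximum: analogue of C_V(sigma)
unconstrained_max = max(volume(s) for s in slices)

# slices forced through a fixed locus (analogue of passing through m(A));
# their best volume is the analogue of C_V(A) + C_V(A^c)
constrained = [s for s in slices if abs(s - 0.7) < 0.05]
constrained_max = max(volume(s) for s in constrained)

assert constrained_max <= unconstrained_max
```

The same comparison holds for any functional and any constraint set, which is all that superadditivity of C_V uses.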
In the case we deal with in this paper, where A = L is a constant-time slice of one boundary of a static two-sided black hole and A^c = R is a constant-time slice of the other boundary, we have symmetries that allow us to say more. First, C_V(L) and C_V(R) are independent of time, and there is an isometry that exchanges the left and right sides, so

    C_V(L) = C_V(R) . (2.2)

The full system is not static, so C_V(σ) depends on the times chosen for R and L. However, if these are both chosen at t = 0, then the time-reflection isometry of the black hole is respected, so by symmetry the maximal-volume slice for σ must pass through the bifurcation surface, which is also m(L) and m(R). The inequality (2.1) is thus saturated:

    C_V(L) + C_V(R) = C_V(σ) .

On the other hand, we find that C_A is subadditive:

    C_A(L) + C_A(R) ≥ C_A(σ) .

While in the C_V computation one deals with positive-definite quantities, in the action calculation we have different contributions whose sign depends on the gravitational Lagrangian as well as the boundary and corner terms. For the cases we consider, the Lagrangian is negative, and even including boundary and corner contributions, the different regions into which one can decompose the action calculation all give negative results. More precisely, the calculation of the pure-state action C_A(σ) can be decomposed as

    C_A(σ) = C_A(L) + C_A(R) + A^+_int + A^-_int ,

where A^±_int corresponds to the total action associated to the spacetime region W^±_int, defined as the intersection of the future/past interior of the black hole with the WdW patch, as shown in Figure 2. A^±_int turns out to be negative and has a logarithmic ultraviolet divergence. For example, for the neutral black hole at t = 0, one has, schematically, the relation

    A^+_int + A^-_int = -(2S/π) [ g_D + log(z_h/δ) ] ,

where S is the entropy, g_D is a D-dependent constant, z_h is the horizon radius, and δ is an ultraviolet cutoff.

Figure 2. Separation of the WdW patch into its intersections with the entanglement wedges, E_L ∩ W and E_R ∩ W, and with the regions W^±_int behind the future and past horizons, for an eternal black hole.
2 This result can be found in (B.22) of appendix B.1. The analogous one for charged black holes can be read from (B.35).

Neutral black hole
Consider an eternal AdS black hole in D spacetime dimensions. In the planar limit, its metric can be written as

    ds² = (l²/z²) [ -f(z) dt² + dz²/f(z) + dx_i dx^i ] ,

where

    f(z) = 1 - (z/z_h)^{D-1} ,

and the coordinates (t, x^i, z) cover one exterior region (left or right) of the full double-sided geometry, as schematically depicted in Figure 1. As argued originally by Maldacena [47], this geometry is dual to the thermofield double state

    |TFD⟩ = Z^{-1/2} Σ_n e^{-βE_n/2} |n⟩_L |n⟩_R ,

where the parameter β corresponds to the inverse temperature associated to the eternal black hole; in this case β^{-1} = T_BH = (D-1)/(4π z_h). If one considers expectation values of operators on a single CFT, then the effective state with respect to the CFT L/R is described by a thermal density matrix with inverse temperature β. Its entropy is

    S = (V_⊥/4G_N) (l/z_h)^{D-2} . (2.10)

We start by computing the WdW action C_A(L) in subsection 2.2.1. This is done by evaluating the action associated to the region defined as the intersection of the entanglement wedge and WdW patch, including surface and corner contributions as specified by Myers et al. [48]. The result for the thermal-state complexity is remarkably simple; it turns out to be given by

    C_A(L) = (S/π) [ g_D + log(z_h/δ) ] , (2.11)

where δ is a UV cutoff and the logarithmic term depends on a particular choice of an undetermined parameter in the computation of the corner contributions, as explained in that section. V_⊥ is the dimensionless transverse volume parametrized by the coordinates x^i. The upper index A refers to the fact that this expression is obtained using the CA duality.

(To be completely precise, the sign of the interior action depends somewhat on the choice of normalization in the corner terms. We made a choice related to the UV cutoff which is furthermore independent of temperature. For a fixed z_h, it is possible to make a normalization choice so that the interior action is positive. However, the limit z_h → ∞ always results in a negative interior action provided the cutoff choice is independent of z_h.)
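As a quick numerical sanity check, assuming the planar emblackening factor f(z) = 1 - (z/z_h)^{D-1}, the Hawking temperature obtained from the surface gravity, T = |f'(z_h)|/4π, indeed reduces to (D-1)/(4π z_h) in every dimension:

```python
import math

def f(z, zh, D):
    # assumed planar AdS black hole emblackening factor
    return 1.0 - (z / zh) ** (D - 1)

def hawking_T(zh, D, eps=1e-7):
    # T = |f'(z_h)| / (4 pi), with f' from a central difference
    fp = (f(zh + eps, zh, D) - f(zh - eps, zh, D)) / (2 * eps)
    return abs(fp) / (4 * math.pi)

zh = 0.7
for D in (3, 4, 5):
    assert abs(hawking_T(zh, D) - (D - 1) / (4 * math.pi * zh)) < 1e-6
```

Here `hawking_T` and the numerical-derivative step `eps` are of course just scaffolding for the check, not part of the holographic calculation.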
We can see from (2.11) that, apart from the logarithmic term, the thermal-state complexity has a simple relation to the entropy. In subsection 2.2.2 we evaluate the maximal volume C_V(L), again obtaining a very simple answer in terms of the black hole horizon, the UV cutoff δ, and the number of spacetime dimensions:

    C_V(L) = (V_⊥ l^{D-1})/(G_N ξ) [ 1/((D-2) δ^{D-2}) + c_D / z_h^{D-2} ] ,  c_D = (√π/(D-1)) Γ((2-D)/(D-1)) / Γ((2-D)/(D-1) + 1/2) , (2.12)

where ξ is a length scale required to make the complexity dimensionless. An interesting feature of this result is its entropy independence for D = 3, where the finite coefficient c_D vanishes.
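The entropy independence at D = 3 can be checked in closed form: assuming the planar f(z) = 1 - (z/z_h)² and the t = 0 slice-volume integrand z^{1-D} f(z)^{-1/2} of subsection 2.2.2, the integral has the elementary antiderivative -√(1 - z²/z_h²)/z, so the finite (δ-independent) part carries no z_h-dependence. A short check:

```python
from math import sqrt

def F(z, zh):
    # candidate antiderivative of z^{-2} (1 - (z/zh)^2)^{-1/2} for D = 3
    return -sqrt(1.0 - (z / zh) ** 2) / z

def integrand(z, zh):
    return z ** (-2) / sqrt(1.0 - (z / zh) ** 2)

# verify F' = integrand at sample interior points
zh = 0.8
for z in (0.1, 0.3, 0.5, 0.7):
    eps = 1e-6
    deriv = (F(z + eps, zh) - F(z - eps, zh)) / (2 * eps)
    assert abs(deriv - integrand(z, zh)) < 1e-5

# integral from delta to zh is F(zh) - F(delta) = sqrt(zh^2 - delta^2)/(delta*zh);
# its deviation from the universal 1/delta divergence is O(delta) for every zh
delta = 1e-3
for zh in (1.0, 1.5, 2.0):
    total = F(zh, zh) - F(delta, zh)
    assert abs(total - 1.0 / delta) < delta
```

So for D = 3 the maximal-slice volume is pure UV divergence plus O(δ), with no horizon-dependent finite piece.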

C A (L)
According to the CA duality, the complexity associated to the thermal state describing the left (right) system is given by the action evaluated on the spacetime region W_{L/R} = E_{L/R} ∩ W, as illustrated in Figure 1. In Figure 3 we show the intersection of E_L and W with the null boundaries labelled: W^± correspond to boundaries of W, while H^± correspond to the boundaries of E and coincide with the black hole horizon. The action of W_L receives three kinds of contributions: one from the bulk, one from the boundaries, and one from the corners [20,40,48]. Following the rules laid out in Ref. [48], the null boundaries may be chosen to give zero contribution to the action. The bulk and the corners, however, do contribute. The action diverges unless a cutoff is placed near z = 0. The regulated W is defined by starting the null lines that bound W from the cutoff surface z = δ.
A convenient set of coordinates that naturally covers the region in question is obtained by trading z for the tortoise coordinate z*, defined by dz*/dz = 1/f(z), and from it defining the light-cone coordinates u = t - z*(z) and v = t + z*(z), which can be used to construct the Penrose diagram of Figure 1.
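For D = 3, where f(z) = 1 - (z/z_h)², the tortoise coordinate has the elementary closed form z*(z) = z_h artanh(z/z_h), which gives a convenient check of the defining relation dz* = dz/f (a sketch; the quadrature rule and grid size are arbitrary):

```python
from math import atanh

def zstar_numeric(z, zh, n=20000):
    # z*(z) = integral_0^z dz'/f(z') with f(z') = 1 - (z'/zh)^2, midpoint rule
    h = z / n
    total = 0.0
    for i in range(n):
        zp = (i + 0.5) * h
        total += h / (1.0 - (zp / zh) ** 2)
    return total

zh = 1.0
for z in (0.1, 0.4, 0.8):
    assert abs(zstar_numeric(z, zh) - zh * atanh(z / zh)) < 1e-4
```

Note that z* diverges as z → z_h, which is the usual statement that the horizon sits at infinite tortoise coordinate.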
In these coordinates the metric is

    ds² = (l²/z²) [ -f(z) du dv + dx_i dx^i ] . (2.14)

For this family of spacetimes, the form of the function f(z) allows an explicit evaluation of the tortoise coordinate.
Integrating dz* = dz/f(z) from the cutoff surface gives

    z*(z) - z*(δ) = (z_h/(D-1)) [ B((z/z_h)^{D-1}; 1/(D-1), 0) - B(δ_h^{D-1}; 1/(D-1), 0) ] ,

where δ_h ≡ δ/z_h and B(z; a, b) is the incomplete beta function, B(z; a, b) = ∫_0^z dt t^{a-1}(1-t)^{b-1}. We will see in the next section that for the computation we have in mind it is not necessary to have an explicit expression for this function.

First consider the bulk contribution, which arises from the Einstein-Hilbert bulk action. The vacuum Einstein equations, R_{µν} - (1/2)R g_{µν} + Λ g_{µν} = 0 with Λ = -(D-1)(D-2)/(2l²), imply that on shell

    R - 2Λ = -2(D-1)/l² .

The bulk action is then proportional to the spacetime volume |W_L|,

    A_bulk = -((D-1)/(8πG_N l²)) |W_L| .

The spacetime volume is computed as follows. We need the region between the null lines t(z) = ±(z*(z) - z*(δ)) from z = δ (a UV regulator) to z = z_h; this is

    |W_L| = 2 V_⊥ l^D ∫_δ^{z_h} dz z^{-D} (z*(z) - z*(δ)) . (2.20)

After the change of variable u → x^{D-1}, the integral in (2.20) can be put in a form involving the incomplete beta function. The remaining integral of the incomplete beta function was computed in Appendix C, equation (C.11). Using that result, we obtain the on-shell bulk action, and from it the bulk contribution to the complexity, (2.22), up to corrections of order δ/z_h.

Now consider the corner terms, which arise from the action

    A_corner = (1/8πG_N) ∮_Σ d^{D-2}x √γ a ,

where Σ is the corner locus (codimension two) and a is the corner integrand given by

    a = sgn × log |k·k̄/2| ,

where k, k̄ are outward-directed null normal 1-forms, k·k̄ = g^{ab} k_a k̄_b, and sgn is a sign determined by the particular corner [40,48]. The normals k, k̄ are normalized such that their inner products with the Killing vectors implementing the time translations on the left and right boundaries are constant; that is, k·∂_{t_L} = -c and k̄·∂_{t_R} = -c̄. This fixes the inner product to be

    k·k̄ = -2cc̄ z²/(l² f(z)) ,

and therefore for the corner terms we have

    a_i = sgn_i log( cc̄ z_i²/(l² f(z_i)) ) ,

where z_i is the z coordinate at the corner i. Since f(z) is zero at the horizon, one has to be careful when computing the corner contributions there. We do this carefully in Appendix A, where for the uncharged black hole we find that the sum of the four corner terms (one at the boundary W^- ∩ W^+ and three on the horizon, W^+ ∩ H^+, H^+ ∩ H^- and W^- ∩ H^-, as illustrated in Figure 3) is given by (2.27), with

    g_0 = (1/2) [ ψ_0(1/(D-1)) - ψ_0(1) ] ,

where ψ_0(z) is the digamma function, ψ_0(z) = Γ'(z)/Γ(z).
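The constant g_0 = (1/2)[ψ_0(1/(D-1)) - ψ_0(1)] is easy to evaluate; for D = 3 it reduces to (1/2)[ψ_0(1/2) - ψ_0(1)] = -ln 2. A quick check, using a digamma built from the stdlib `lgamma` by central differencing (an approximation introduced here purely for the check):

```python
from math import lgamma, log

def digamma(x, eps=1e-6):
    # psi_0(x) = d/dx ln Gamma(x), via central difference on lgamma
    return (lgamma(x + eps) - lgamma(x - eps)) / (2 * eps)

def g0(D):
    return 0.5 * (digamma(1.0 / (D - 1)) - digamma(1.0))

# D = 3: psi_0(1/2) - psi_0(1) = -2 ln 2, so g0 = -ln 2
assert abs(g0(3) + log(2)) < 1e-5

# psi_0 is increasing, so g0 is negative for all D >= 3
assert all(g0(D) < 0 for D in range(3, 10))
```

The negativity of g_0 for all D is consistent with the sign discussion of the interior action in subsection 2.1.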
The full action is thus given by the sum of the bulk contribution (2.22) and the corner contribution (2.27), which combine into the result (2.11) quoted above.

C V (L)

In the spirit of the complexity equals volume conjecture, we would like to compute, for the family of geometries considered in the previous section, the volume of the maximal slice bounded by Σ and the HRT surface m(Σ). In the case at hand, Σ is the boundary region t = 0, z = δ, and m(Σ) is the horizon z = z_h (where the coordinate t is undefined). Since the metric is static in the exterior region, it is clear that the maximal slice is just the t = 0 hypersurface. The extremal volume V is computed by direct integration:

    V = V_⊥ l^{D-1} ∫_δ^{z_h} dz z^{1-D} f(z)^{-1/2} ,

whose leading-order value in the δ/z_h expansion is 3

    V = V_⊥ l^{D-1} [ 1/((D-2) δ^{D-2}) + c_D / z_h^{D-2} ] ,  c_D = (√π/(D-1)) Γ((2-D)/(D-1)) / Γ((2-D)/(D-1) + 1/2) ,

and the corresponding volume complexity is

    C_V(L) = V/(G_N ξ) ,

where ξ is a length scale required to make the complexity dimensionless.

Charged black hole
In this section we repeat the analysis of the eternal black hole of subsection 2.2 for the more general family of charged black holes, characterized by mass and charge parameters m, q and described by the spacetime metric

    ds² = (l²/z²) [ -f(z) dt² + dz²/f(z) + dx_i dx^i ] . (2.32)

The spacetimes in this family are dual to the charged thermofield double state

    |TFD⟩ = Z^{-1/2} Σ_n e^{-β(E_n + µQ_n)/2} |n⟩_L |n⟩_R .

We are interested in studying the subsystem complexity associated to the left (right) subsystem, obtained after tracing out the degrees of freedom of the right (left) part of the Hilbert space.

3 The following result is obtained after expanding the resulting integral in Poincaré-like coordinates.
In this case the resulting reduced state is given by the density matrix describing a grand canonical ensemble, with temperature T = β^{-1} and chemical potential µ.
The metric (2.32) solves the classical equations of motion derived from the Einstein-Hilbert action in the presence of an electromagnetic field F_{µν}, that is, the Einstein-Maxwell action with cosmological constant

    Λ = -(D-1)(D-2)/(2l²) ,

with the energy-momentum tensor sourced by the field strength. The solutions to the classical equations of motion for the metric (under the assumption of flat boundary conditions) and for the gauge field were given in [49] in terms of parameters m̃ and q̃ related to the ADM mass and charge. The metric (2.32) is obtained by changing the radial coordinate r to z = l²/r, under which the parameters m, q² in the metric function

    f(z) = 1 - m z^{D-1} + q² z^{2(D-2)} (2.33)

are related to m̃, q̃². The function f(z) has two positive zeros {z_-, z_+}, where z_- is the smaller root. Since the asymptotic boundary of this metric corresponds to the z → 0 region, the region outside the horizon corresponds to z < z_-, which implies that z_- is the horizon radius of this metric. The existence of a horizon at z = z_h = z_-, f(z_h) = 0, establishes a useful relation between z_h, q², and m:

    m = z_h^{-(D-1)} + q² z_h^{D-3} . (2.42)

In subsections 2.3.1 and 2.3.2, we compute the mixed-state complexity associated to the finite temperature and finite chemical potential density matrix describing the grand canonical ensemble, ρ = Z^{-1} e^{-β(H+µQ)}, via the CA and CV dualities respectively. For the action calculation performed in 2.3.1 we obtain a relatively simple answer,

    C_A(L) = (S/π) [ g(z_h) + log(z_h/δ) ] , (2.43)

where g(z_h) is defined from a finite limit given in (2.44). In fact this result is very similar to the one for the neutral black hole, (2.11), the only differences being the term g(z_h) and the explicit dependence of the black hole horizon z_h on the mass and charge parameters m, q².
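The horizon relation and the root structure of f(z) can be explored numerically. The sketch below assumes the planar form f(z) = 1 - m z^{D-1} + q² z^{2(D-2)} (normalization conventions for m, q² vary between references); given z_h and q², the horizon condition f(z_h) = 0 fixes m, and the outer root z_+ is found by bisection:

```python
def f(z, m, q2, D):
    # assumed planar charged emblackening factor; conventions may differ
    return 1.0 - m * z ** (D - 1) + q2 * z ** (2 * (D - 2))

def bisect(g, a, b, tol=1e-12):
    # simple sign-change bisection for the outer root
    fa = g(a)
    for _ in range(200):
        mid = 0.5 * (a + b)
        if b - a < tol:
            return mid
        if (g(mid) > 0) == (fa > 0):
            a, fa = mid, g(mid)
        else:
            b = mid
    return 0.5 * (a + b)

D, zh, q2 = 4, 1.0, 0.5
m = zh ** (-(D - 1)) + q2 * zh ** (D - 3)   # horizon condition f(zh) = 0

assert abs(f(zh, m, q2, D)) < 1e-12          # z_- = z_h is a root
z_plus = bisect(lambda z: f(z, m, q2, D), 2.0, 4.0)
assert abs(f(z_plus, m, q2, D)) < 1e-8       # second positive root
assert z_plus > zh                           # inner root z_- is the horizon
```

The sample values D = 4, z_h = 1, q² = 0.5 are arbitrary; moving q² toward the extremal value makes z_+ approach z_-, the degeneration discussed below.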
In subsection 2.3.2 we compute the corresponding volume C_V(L); there we are only able to evaluate it in certain limits, namely for weakly charged and extremal black holes.

C A (L)
The action evaluation required to compute the subsystem complexity C_A follows closely the steps laid out in subsection 2.2.1, although the Penrose diagram differs from the uncharged case, as illustrated in Figure 4. The integration region of interest, W_L = E_L ∩ W, is essentially the same; the only difference is the functions we are integrating. For the bulk evaluation, taking the trace of Einstein's equations in the presence of the electromagnetic field leads to the relation (2.46), and the term F_{µν}F^{µν} can be evaluated from the gauge field solution (2.39). Together these lead to a simple expression for the bulk on-shell action.

As in the uncharged case it is convenient to use light-cone coordinates

    u = t - z*(z) , v = t + z*(z) .

These coordinates can be used to construct the Penrose diagram of Figure 4, and so they naturally cover the region of interest W_L. In particular, the metric takes the simpler form

    ds² = (l²/z²) [ -f(z) du dv + dx_i dx^i ] , (2.50)

and the light rays which bound the causal region W_L are given by t_±(z) = ±(z*(z) - z*(δ)), where δ is a UV cutoff and z*(z) is the tortoise coordinate, (2.51). Once the integration region W_L is delimited, one can explicitly integrate over the perpendicular directions, since the integrand is independent of them. Doing so leads to a dimensionless volume factor V_⊥ and a remaining two-dimensional integral, (2.52). Notice that the ξ integral in (2.52) is highly non-trivial, while the z integral is much simpler. To use this fact to our advantage, consider the integration region in the (z, ξ) plane and invert the order in which we perform the integrations. Considering the z integral separately, and using the relation (2.42) between m, z_h and q², one finds that it evaluates to a simple expression.
The exact cancellation of the function f(ξ) for arbitrary values of m and q² is remarkable, and leads to a simple answer for the bulk action, (2.55). The calculation of the corner terms goes exactly as in subsection 2.2.1, with the extra details given in Appendix A; therefore, making the same choice for the parameter cc̄, namely cc̄ = l²/δ², one obtains the result (2.43) quoted above, where g(z_h) is defined from a limit which, as explained in Appendix A, is finite for generic values of m, q² but has an IR divergence in the extremal case (A.3). The full action is thus the sum of these bulk and corner contributions.

C V (L)

To evaluate C_V(L) we need to compute the extremal-volume surface anchored on the boundary t = 0 slice and on the bulk HRT surface. By staticity, it is given by the t = 0 hypersurface in Poincaré coordinates, and the extremal volume is therefore

    V = V_⊥ l^{D-1} ∫_δ^{z_h} dz z^{1-D} f(z)^{-1/2} .

We would like to evaluate this integral for arbitrary values of m and q²; unfortunately we were unable to do so. Nevertheless, one can explore the finiteness of the volume integral. Apart from the obvious UV divergence, the zeros of f(z) indicate potential divergences of the integrand. This is easy to analyze, since the only real zeros of f(z) are at z = z_±, and in the integration region we only encounter the z = z_- = z_h zero, except when z_± collide with each other; that is the case of the extremal black holes, for which, as we will see, there is an IR divergence in the volume integral. Let us study the contribution of the integral in the region close to z ≈ z_h. Expanding f(z) ≈ f'(z_h)(z - z_h), if f'(z_h) ≠ 0 the integrand has a square-root singularity close to z = z_h, but a square-root singularity integrates to a finite value:

    ∫^{z_h} dz / √(|f'(z_h)| (z_h - z)) < ∞ .

That means that for generic values of m, q² the integral is finite.
Extremal black holes: However, for m, q² such that f'(z_h) = 0, we need to go to the next order in the expansion, f(z) ≈ (1/2) f''(z_h)(z - z_h)², so that the integrand behaves as 1/|z - z_h|, which is logarithmically divergent. This is in fact the case for extremal black holes. One can go one step further and compute the coefficient of the logarithmic IR divergence in V by using the facts that f(z_h) = 0 and f'(z_h) = 0. For extremal black holes we were unable to obtain a closed expression for the volume, although its dependence on z_h can be extracted, since in dimensionless units both the m and q² parameters depend only on the number of dimensions D:

    m z_h^{D-1} = 2(D-2)/(D-3) ,  q² z_h^{2(D-2)} = (D-1)/(D-3) .

Therefore, the volume of the maximal-volume surface in units of G_N ξ is given by z_h^{2-D} times a dimensionless integral. The leading divergent term coming from the lower integration point δ/z_h can be extracted by integrating the region close to x = 0. This result, plus the integration in the region near x = 1, leads to an answer whose structure involves an undetermined function F(D) that depends only on the number of spacetime dimensions. Indeed, one can evaluate the integral on a case-by-case basis; for example, we obtain explicit expressions for D = 4 and D = 5. The undetermined function F(D) seems to have the structure log(G(D))/[2(D-1)(D-2)] for some function G(D).
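The logarithmic IR divergence can be seen numerically. For D = 4 the extremal values in units z_h = 1 are m = 4, q² = 3, so that f(z) = 1 - 4z³ + 3z⁴ = (1-z)²(3z² + 2z + 1) has the expected double zero, and the volume integrand z^{-3} f^{-1/2} behaves as 1/[√6 (1-z)] near the horizon. A sketch (the substitution t = -ln(1-z) is introduced just to tame the integrable region numerically):

```python
from math import log, sqrt, exp

def vol_integral(eps, n=40000):
    # integral_{1/2}^{1-eps} dz z^{-3} f(z)^{-1/2},
    # with extremal D=4 factor f(z) = (1-z)^2 (3z^2 + 2z + 1);
    # substituting t = -ln(1-z) makes the integrand bounded
    t0, t1 = log(2.0), -log(eps)
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        z = 1.0 - exp(-t)
        total += h / (z ** 3 * sqrt(3 * z * z + 2 * z + 1))
    return total

a, b, c = vol_integral(1e-3), vol_integral(1e-4), vol_integral(1e-5)
d1, d2 = b - a, c - b
# each decade of the cutoff adds the same constant, ln(10)/sqrt(6)
assert abs(d1 - d2) < 5e-3
assert abs(d2 - log(10) / sqrt(6)) < 2e-3
```

The constant increment per decade of the cutoff is exactly the signature of a log divergence, with coefficient 1/√6 fixed by f''(z_h), in line with the discussion above.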
Weakly charged black holes: Another regime in which one can maintain analytic control is that of weakly charged black holes, q² z_h^{2(D-2)} ≪ 1. The integral of interest is the volume integral above, where we have factored out the z_h dependence, up to the divergent UV piece, leaving the dimensionless expansion parameter q² z_h^{2(D-2)}. Notice that one obvious concern is whether or not the perturbative expansion in q² z_h^{2(D-2)} breaks down in the region where x ≈ 1. This is not the case: near x = 1, both the neutral part of f and its O(q²) correction vanish linearly in (1 - x), so their ratio stays bounded, and therefore the term proportional to q² is parametrically smaller for all x.
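The uniformity of the small-charge expansion can be checked directly. Writing x = z/z_h, q̄² ≡ q² z_h^{2(D-2)}, and using the horizon condition m z_h^{D-1} = 1 + q̄² (under the same assumed planar f), one has f = f_0 + q̄² f_1 with f_0 = 1 - x^{D-1} and f_1 = x^{2(D-2)} - x^{D-1}; both vanish at x = 1, and l'Hopital gives f_1/f_0 → -(D-3)/(D-1) there:

```python
D = 4
q2bar = 0.01  # dimensionless charge parameter, assumed small

def f0(x):
    # neutral part of f at fixed z_h
    return 1.0 - x ** (D - 1)

def f1(x):
    # O(q2bar) correction, with m eliminated via the horizon condition
    return x ** (2 * (D - 2)) - x ** (D - 1)

# ratio of the correction to the neutral part stays bounded as x -> 1
xs = [1 - 10 ** (-k) for k in range(1, 8)]
ratios = [q2bar * f1(x) / f0(x) for x in xs]

limit = -(D - 3) / (D - 1)   # l'Hopital value of f1/f0 at x = 1
assert all(abs(r) < 2 * q2bar for r in ratios)
assert abs(ratios[-1] - q2bar * limit) < 1e-4
```

Since the O(q̄²) term never competes with the leading one anywhere on the integration range, the term-by-term expansion of the volume integral is legitimate.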
At first order in q² z_h^{2(D-2)} one can expand the integrand and perform the integral, obtaining the complexity of the weakly charged black hole, (2.71). Notice that this answer is given in a mixed expansion, in which we use the exact z_h but expand the integral in powers of q² z_h^{2(D-2)}; a more consistent expansion would also expand z_h itself in powers of q² about its neutral value. It is interesting to note that in such an expansion the expression simplifies, and one sees explicitly that there is a legitimate q² correction to the zero-charge complexity which cannot be absorbed into the q-dependence of the new z_h, as observed from (2.71).

Shock wave
An important motivation for the complexity equals action and complexity equals volume proposals was their linear growth behavior at late times [16,20,50]. This observation was stated more precisely in the CA duality by showing that for neutral black hole geometries one indeed has

    dC_A/dt_L = 2M (2.73)

at large t_L. In (2.73), the left-hand side is obtained by fixing the time slice on the right side of the boundary geometry, t_R = 0, and varying the left boundary time t_L. The complexification growth of the pure state is associated to the growth of the part of the action that lies behind the horizon. Since in the evaluation of subsystem complexity one only considers the region outside the horizon, one does not expect a similar statement to hold in that case. In other words, the action associated to the region W_L = E_L ∩ W is invariant under t_L → t_L + δt. Nevertheless, one can consider a small deviation from the black hole geometry by perturbing the system slightly in the past. This is equivalent to injecting energy into the geometry in the form of a shock wave. In this case the state is time dependent, and its complexification rate is expected to have the same late-time behavior; moreover, it should exhibit the expected time delay due to the injection of the shock wave [20]. In this section we evaluate the subsystem complexity in this dynamical situation for a neutral black hole with arbitrary asymptotic geometry. (To obtain the result for the flat boundary geometry we simply set k to 0.) The metric we consider is

    ds² = -f(r) dt² + dr²/f(r) + r² dΣ²_{k,D-2} ,

where dΣ²_{k,D-2} is the line element of the boundary geometry, which can be flat (k = 0), spherical (k = 1) or hyperbolic (k = -1).
To describe the geometry in the presence of a shock wave it is convenient to pass to null coordinates defined by

    u = ∓ e^{-(2π/β)(t - r*(r))} ,  v = e^{(2π/β)(t + r*(r))} ,

where the - sign corresponds to the region r > r_h and the + sign to r < r_h, and r*(r) is defined by dr* = dr/f(r). The parameter β = 4π/f'(r_h). Notice that r*(r) decreases as r decreases. If we fix r*(∞) = 0 then r* is negative everywhere else (in these coordinates the boundary is located at r → ∞). In particular, at the horizon r*(r_h) = -∞, leading to uv = 0.
The back-reacted solution in the presence of a shock wave sent in from the left boundary at early time t_w is, in null coordinates, the original metric with the v coordinate shifted across u = 0 by

    h ∼ e^{(2π/β)(|t_w| - t_*)} ,

the shift produced by the shock wave, with t_* the scrambling time. The net effect of the shock wave is to separate the Penrose diagram along the u = 0 region, at which the shock wave has an important effect; away from it the metric looks the same as the original black hole geometry. The essential difference is in the way we glue the two sides together: continuity of the v coordinate implies that one has to shift the two spacetimes along the v coordinate by an amount h.
The horizon of the perturbed metric is located at v = h in the coordinates of the original metric, and part of the entanglement wedge therefore lies behind the original horizon. The subsystem complexity of the perturbed metric can then be computed using the unperturbed black hole metric, but with a region of interest that includes both exterior and interior portions. One can split that region in two, one outside the black hole horizon and one behind it, using the additivity property of the action. Since we are interested in the time-dependent term in the action, we will focus only on the behind-the-horizon region, which, as mentioned, gives the full time dependence.

Figure 5. Representation of the left and right entanglement wedges E_L and E_R, in red and green respectively, for an eternal black hole in the presence of a shock wave inserted on the left boundary. The associated WdW patch is represented in blue.
The full action associated to the region E_L ∩ W of Figure 5 is given by a bulk term, boundary terms, and corner terms. We start with the evaluation of the first term, the bulk spacetime integral. To perform this integral it is convenient to make a further change of coordinates, (ξ, χ), given by ξ = uv and χ = u/v, in which the unperturbed metric depends only on ξ. The bulk region is then delimited by the surfaces ξ = 1 (the singularity), ξ = ε (the horizon, when ε → 0), v = h (or equivalently χ = ξ/v² = ξ/h²) and χ = u_0²/ξ. Both metric functions A and r depend only on ξ, so the χ integral can be performed directly; moreover, from the definitions one has dξ/ξ = (4π/β) dr/f(r), so the ξ integral turns into a trivial integral over r, with end points running from r = r_h to r = 0. We are interested in the terms that depend on the initial condition u_0 and the shock-wave parameter h, so we can ignore the log(ξ) term.

The second set of terms corresponds to three null boundary surfaces, which give zero contribution to the action, and one timelike boundary surface surrounding the singularity, which we compute as follows. The timelike boundary surface ∂M is given by ξ = 1. Here K = (1/2) n^α ∂_α log|h̃|, where h̃ denotes the determinant of the induced metric (not to be confused with the shift h). Simplifying the expressions, the χ integral is carried out from χ = ξ/h² to χ = u_0²/ξ, as in the bulk case; at ξ = 1 the term proportional to log ξ, which we ignored in the bulk contribution, cancels. Multiplying by 1/8πG_N and evaluating the result at r = 0 gives the boundary contribution to the complexity.

Finally, we evaluate the terms coming from the corner contributions at the intersections of lightlike surfaces, such as the ones appearing on the horizon. The corners appearing on the singularity surface do not contribute, as the volume factor goes to zero as r → 0.
The calculation proceeds exactly as in the evaluation of the corner contributions to the subsystem complexity for neutral black holes in section 2.2. The only difference is that the regularized corners now lie behind the horizon, and the individual contributions take a slightly different form in the (t, r) coordinates, given in (2.89). As described in Appendix A, we add and subtract the corner term appearing at the intersection H⁺ ∩ H⁻ and rewrite the differences of corners in terms of logarithms of uv products. This leads to an expression in which r_{u_ε v_ε} is the regularized radial coordinate at the intersection of the light sheets u = u_ε and v = v_ε. In the u_ε, v_ε → 0 limit, the last two terms give a finite contribution independent of the parameters u₀ and h, and we therefore ignore them here. The time-dependent piece follows upon using the relations u₀ = e^{(2π/β) t_L} and h ∼ e^{(2π/β)(|t_w| − t_*)}. Adding up all the pieces, at large t L we reproduce the late-time complexification rate for subsystem complexity, with the proper time shift due to the switchback effect [16,20]. Indeed, this effect provided important evidence for the CA and CV conjectures, which in the context of pure-state complexity were tested even in the presence of multiple shock waves [20,51]. The resulting complexification rate at large t L then follows.

Measures of mixed-state complexity
In the previous section, we calculated the volume and action quantities C V and C A for thermal states of holographic systems. In the spirit of the CV and CA conjectures, we would like to relate these to some notion of complexity for mixed states. Therefore, our first task is to come up with measures of complexity for mixed states. We will find that there are many ways to define such measures, and it is not straightforward to determine the relations among them. This is perhaps not surprising, as a similar situation obtains for entanglement in bipartite mixed states; many different measures have been defined (entanglement of purification, entanglement of formation, entanglement of distillation, logarithmic negativity, and so on), and determining how they are related to each other is far from straightforward. In subsection 3.1, after reminding ourselves of the relevant notion of complexity for pure states, we define several measures of complexity for mixed states. In subsection 3.2, we estimate the values of these measures in thermal states, in particular their dependence on temperature, using intuition from tensor networks. Then in subsection 3.3 we compare these estimates to the values of C V and C A obtained in section 2. We find that one of our proposed definitions matches well (to within the precision of our estimates) the behavior of C A . We thus arrive at a concrete and well-motivated subsystem CA conjecture. On the other hand, we do not find a match between C V and any of our proposed complexity definitions. In subsection 3.4, we briefly explore other possible approaches to defining mixed-state complexity, but again fail to find a plausible match to C V .
It is worth reiterating that almost all of the mixed states we consider in this paper are thermal (i.e. Gibbs or generalized Gibbs) states. This is both an advantage, as it gives us a handle on estimating their complexities that we would not necessarily have for general states, and a limitation. In particular, these states are static, eliminating the whole issue of time dependence, which was central to the development of the original CV and CA conjectures [15,20]. To further test and explore our subsystem CA conjecture, it will be important to study other types of subsystems, in particular those in time-dependent states. We took a small step in this direction in subsection 2.4 where we studied subsystem complexity for a time-dependent shockwave geometry.

Proposed definitions
We begin with the simplest notion of pure state complexity. This definition has three ingredients: a reference state, a set of allowed gates, and a tolerance. The complexity of a target pure state is defined as the minimum number of gates from the allowed set needed to take the reference state to the target state up to the specified tolerance. There is considerable freedom in the notion of tolerance: we could require that the target state and the evolved reference state be close in trace norm, or demand that they have approximately equal expectation values for some set of operators, or use any of a myriad of other measures. Let us denote this measure of pure state complexity, for some fixed set of choices, by C. We note that some pure state schemes particularly adapted to the problem of field theory complexity have been explored recently [52][53][54].
To approach the problem of mixed state complexity, we begin by making some preliminary remarks. First, note that the definition of C made no reference to ancilla, meaning that we implicitly fixed the number of qubits and only allowed gates to act on those qubits. However, one could also consider notions of complexity with ancilla included. We could either allow no ancilla, allow ancilla but require them to return approximately to their initial state, or allow ancilla with arbitrary final states so long as the target state is approximately obtained. These definitions are not all equivalent, although it is not clear under what conditions they differ substantially. We will assume, as above, that the definition of pure-state complexity does not allow ancilla even at intermediate stages.
Second, observe that there is a potential distinction between mixed states and subsystem states (which may of course still be mixed). A complexity measure for mixed states must be applicable to any mixed state without reference to any other system. However, a complexity measure for subsystem states could depend on the state of the whole system as well. It is not obvious which notion is relevant for holography, but we proceed by thinking about mixed states without reference to a fixed purification.
Third, we will demand that our notion of mixed state complexity reduce to the pure state definition when the state is pure. This seems trivial, but it turns out to restrict the kinds of operations we can consider, e.g., we cannot allow ancilla in the mixed case unless we also allow them in the pure case.
With the above issues in mind, we now present two approaches to defining mixed-state complexity. Our analysis is complementary to some discussions in the quantum information literature [55].
Purification approach: The simplest definition of mixed state complexity is phrased in terms of minimal purifications. Given a mixed state ρ on n qubits, an initial state |0 . . . 0⟩, a set of allowed unitary transformations G, and a tolerance ε, the purification complexity C P of ρ is defined as the minimum number of gates from G needed to transform the initial pure state, plus an arbitrary number of ancilla qubits initialized in the state |0⟩, into a purification of ρ up to tolerance ε. Ancilla may only be used if they are entangled with the n-qubit system at the end of the process. This is an important restriction if we are to recover the ancilla-less definition of pure state complexity (to recover a pure target state, all ancilla must be unentangled with the system up to the tolerance). Roughly speaking, this definition may be summarized as the pure state complexity of the minimum-complexity purification of ρ, using only essential ancilla.
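To make the "essential ancilla" count concrete, here is an illustrative sketch (mine, not from the paper): any mixed state ρ admits a purification whose ancilla dimension equals rank(ρ), built from the eigendecomposition ρ = Σ_i p_i |i⟩⟨i| via |ψ⟩ = Σ_i √p_i |i⟩|i′⟩. This is the minimal ancilla space entering the definition of C P.

```python
# Minimal purification of a random 2-qubit mixed state: the ancilla dimension
# needed equals the number of nonzero eigenvalues of rho.
import numpy as np

rng = np.random.default_rng(0)

# Random 2-qubit mixed state (positive, unit trace)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# Build |psi> = sum_i sqrt(p_i) |i>|i'>, keeping only nonzero eigenvalues
p, U = np.linalg.eigh(rho)
keep = p > 1e-12
rank = int(keep.sum())
psi = np.zeros((4, rank), dtype=complex)   # rows: system, columns: ancilla
for a, (pi, vec) in enumerate(zip(p[keep], U[:, keep].T)):
    psi[:, a] = np.sqrt(pi) * vec

# Tracing out the ancilla recovers rho exactly
rho_back = psi @ psi.conj().T
assert np.allclose(rho_back, rho)
print(rank)  # generically 4: a full-rank 2-qubit state needs 2 ancilla qubits
```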
Spectrum approach: Another way to think about complexity for a mixed state is to break the problem of creating the state into two parts: creating its spectrum and creating its basis of eigenstates. Given a mixed state ρ, an initial state |0 . . . 0⟩, a set of allowed unitary transformations G, and a tolerance ε, we define the spectrum complexity C S of ρ as the minimum number of unitaries from G needed to transform the initial state plus ancilla into a state whose partial trace has the same spectrum as ρ, with all ancilla entangled with the original system. Since ρ has the same spectrum as itself, in general C S ≤ C P.
Defining the complexity to construct the basis of eigenstates is harder. We could try to define it as the minimum number of unitaries needed to transform the initial state plus ancilla into a state whose partial trace has the same basis as ρ. However, since the maximally mixed state has the same basis as any state ρ, it would follow that the complexity to construct the basis of any state ρ is upper bounded by a fixed number independent of ρ of order the number of qubits.
We will therefore suggest two other definitions of basis complexity. First, since C S ≤ C P, we could define the basis complexity as their difference, C B ≡ C P − C S: roughly, the extra work needed to get the basis right. Note that it is not really clear whether the effort is exactly additive in this fashion; e.g., it might be roughly as hard to prepare just the spectrum as to prepare the whole state. Alternatively, we could define the basis complexity by starting with the minimal-complexity state ρ_spec with the same spectrum as ρ, and then finding the minimum number of gates needed to change ρ_spec into ρ. This is always possible precisely because ρ and ρ_spec share the same spectrum. We denote this notion of basis complexity by C̃ B.
As usual, it is not clear how C B and C̃ B are related in general, but since C P ≤ C S + C̃ B (because reaching ρ via ρ_spec is one possible circuit) it follows that C S ≤ C P ≤ C S + C̃ B. For a pure state of complexity C, it is easy to see that C S = 0 while C B = C̃ B = C. Thus, in some sense, the basis complexity (with either definition) is the analogue of pure-state complexity, while the spectrum complexity is a new feature of mixed states. These various definitions are illustrated in figure 6.

Expectations from tensor networks
To give a sense of these definitions and how they behave in a field-theory context, let us imagine applying them to a chaotic spin chain whose low-energy physics is described by a strongly interacting conformal field theory with central charge larger than one. Below, these expectations will be compared with the results of holographic calculations. To fix notation, suppose that the model consists of n qubits with Hamiltonian H. The Hamiltonian has energies E_i and eigenvectors |i⟩. Throughout we consider the thermal state, ρ ∝ e^{−H/T}. We focus on the two extremes: T = 0 and T = ∞.
At zero temperature, the ground state has approximate conformal invariance. Assuming it has a renormalization group circuit which prepares it, e.g., a MERA-like circuit, the pure-state complexity of the ground state is of order n, say C(T = 0) = k₁ n. Since the state is pure, it has trivial spectrum, and we find

T = 0 : C S = 0 , C P = C B = C̃ B = k₁ n .

Figure 6. Illustration of the measures of complexity defined in the main text. Roughly speaking, C P is the minimum number of gates required to go from the reference state to the target state ρ. Among the states with the same spectrum as ρ (blue region), the one that can be obtained with the fewest gates starting from the reference state is called ρ_spec, and the minimum number of gates is C S. C̃ B is the minimum number of gates required to take ρ_spec to ρ. C B (not shown) is C P − C S, and by the triangle inequality this cannot be more than C̃ B. (More precisely, to go from the reference state to some mixed state such as ρ or ρ_spec, we first add an arbitrary number of ancilla qubits to the reference state and then act with the gates to obtain a purification of the mixed state, in which all ancilla are required to be entangled with the original system.)

At infinite temperature, the thermal state is maximally mixed. In this case one finds

T = ∞ : C S = C P ∼ n , C B = C̃ B ≈ 0 .

These statements follow because any state with the right (uniform) spectrum is automatically the right state, and because the maximally mixed state can be prepared with order n gates using n ancilla. Based on these two limits, we can make a minimal guess for the temperature dependence of the various complexity measures. We observe that C P need not depend strongly on temperature, although of course it could have some temperature dependence. Meanwhile, C S should increase while C B and C̃ B should decrease as a function of temperature, although again we obviously cannot rule out non-monotonicity.
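The order-n preparation of the maximally mixed state can be checked explicitly (a sketch of mine, not from the paper): one Hadamard plus one CNOT per system/ancilla pair creates a Bell pair, and tracing out the n ancilla leaves I/2ⁿ on the system, for 2n gates total.

```python
# Prepare the maximally mixed state on n qubits via n Bell pairs (2n gates).
import numpy as np

def bell_pair():
    """|00> -> (|00> + |11>)/sqrt(2): Hadamard on qubit 0, then CNOT."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]])
    psi0 = np.array([1, 0, 0, 0], dtype=float)
    return CNOT @ np.kron(H, np.eye(2)) @ psi0

n = 3
psi = bell_pair().reshape(2, 2)       # amplitudes indexed by (system, ancilla)
rho_1 = psi @ psi.conj().T            # trace out the ancilla -> I/2
assert np.allclose(rho_1, np.eye(2) / 2)

# n independent pairs: the system marginal is the n-fold tensor product
rho_n = rho_1
for _ in range(n - 1):
    rho_n = np.kron(rho_n, rho_1)
assert np.allclose(rho_n, np.eye(2**n) / 2**n)
print(2 * n)  # gate count: one H and one CNOT per pair, linear in n
```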
Furthermore, it seems reasonable to suppose that C B and C̃ B are of the same order for all temperatures.
We can use intuition from tensor networks to be a bit more specific about the form of these complexities at intermediate temperatures. If we imagine that the minimal circuit which prepares a purification of the thermal state has two pieces, one which prepares the spectrum and one which prepares the basis, then it is natural to guess that

C S ≈ α S and C B ≈ k₁ n − β S ,

where S is the thermal entropy and α, β are order-one coefficients. In a MERA-like circuit, these behaviors come from two distinct effects: (1) The spectrum must be prepared, and if the spectrum may in some sense be approximated as Bell pairs, then the complexity should be roughly proportional to the entropy. (2) The basis must be prepared, but whereas the ground state has long-range correlations, the mixed state has short-range correlations, so less of the renormalization group part of the circuit is needed. Counting gates shows that the reduction is also roughly proportional to the entropy. However, it is not clear how the coefficients α and β compare or how they depend on temperature, e.g., due to logarithmic factors. Hence it is not clear at this level how C P = C S + C B ∼ k₁ n + (α − β)S depends on temperature.
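The gate-counting argument in point (2) can be made concrete with a toy count (my construction, not from the paper): in a binary MERA-like circuit on n sites, layer ℓ contains roughly n/2^ℓ gates, and a state with correlation length ξ needs only the layers finer than ξ. For a critical chain at temperature T, ξ ∼ 1/T and S ∼ nT (up to order-one factors), so the saving relative to the full ground-state circuit is proportional to S.

```python
# Toy gate count in a binary MERA: layers coarser than the correlation
# length xi are not needed, saving ~ n/xi ~ n*T ~ S gates.
def mera_gates(n, xi):
    """Count gates in the layers of a binary MERA on n sites that are
    finer-grained than the length scale xi (layer l has scale 2**l)."""
    total, layer = 0, 0
    while n // 2**layer >= 2:
        if 2**layer < xi:
            total += n // 2**layer      # ~ n / 2**l gates in layer l
        layer += 1
    return total

n = 2**20
full = mera_gates(n, xi=float('inf'))    # full ground-state circuit, ~2n gates
ratios = []
for T in (1/256, 1/64, 1/16):
    xi = 1 / T                           # thermal correlation length
    saved = full - mera_gates(n, xi)     # gates saved by dropping IR layers
    S = n * T                            # critical-chain entropy, up to O(1) factors
    ratios.append(saved / S)
print(ratios)  # each ratio is ~2: the saving is proportional to the entropy
```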

Comparison to holographic calculations
We now compare our various definitions and expectations to the holographic computations described above. For simplicity, we focus on the uncharged eternal black hole in any dimension.
In the thermofield double state we found the CA prediction (see (2.5)); the relation between these quantities and the entropy was given in (2.11), and C V(ρ) ∼ n + S (see (2.31)). Here n is a stand-in for V⊥/δ^{D−2} (the volume of the CFT in cutoff units), and we only care about the sign of the coefficient in front of the S terms (in D = 3 the coefficient of S in the volume is exactly zero). Are any of the quantities C P, C S, C B, and C̃ B consistent with CA? The fact that C A increases with temperature rules out interpreting it as C B or C̃ B, because we expect the latter to decrease with temperature. Under the plausible assumption that the spectrum can be prepared without preparing the whole UV of the field theory, C A can also not be interpreted as C S, since the former is UV sensitive. Moreover, C S does not reduce to the pure state definition of complexity. On the other hand, C P does appear consistent with our expectations and the CA results. In particular, if we think of C P as roughly the cost of the spectrum plus the cost of the basis, then because we must prepare the spectrum twice when preparing two copies of ρ but only once when preparing |TFD⟩, it follows that 2C P(ρ) > C(|TFD⟩), exactly as we found for C A.
What about CV? Just as for CA, the measures C S, C B, and C̃ B are ruled out. Interestingly, C P is also ruled out, since we have 2C V(ρ) = C V(|TFD⟩). Equality here is inconsistent with our previous story about basis plus spectrum unless the cost of the spectrum is zero.
A similar analysis can be applied to the case of charged black holes. For weakly or moderately charged black holes, we find that, within the precision of our analysis, C A can be qualitatively matched to the purification complexity. The extremal limit is an interesting further probe of complexity/geometry duality, since we find that both measures diverge there. It will be interesting to explore the possible physical significance of this divergence in the future, since it seems unexpected from the point of view of boundary complexity.

Other definitions
The conclusion of the preceding analysis is that CA duality appears consistent with the C P definition of mixed state complexity. By contrast, CV duality cannot apparently be consistently interpreted in terms of C P unless our analysis in terms of spectrum and basis is very misguided. However, this analysis is corroborated in its broad outlines by a tensor network picture of the thermal state. Confronted with these conclusions, we now expand the discussion to include other possible definitions of complexity.
Open system approach: Given a mixed state ρ, an initial state ρ₀ = |0...0⟩⟨0...0|, a set of allowed quantum operations G 0, and a tolerance ε, the open system complexity C O of ρ is defined as the minimum number of operations from G 0 needed to transform the initial state into ρ up to the tolerance, say in trace norm. Formally, C O is the minimum k such that

‖(Φ_k ∘ · · · ∘ Φ₁)(ρ₀) − ρ‖₁ ≤ ε ,

where the Φ_i ∈ G 0 are completely positive trace-preserving maps. We could obviously modify this definition by weighting elements of G 0 differently, by adjusting how the tolerance is defined, or by changing the initial state. Note that even if ρ is a pure state, it is possible that by allowing general quantum operations, as opposed to just unitary transformations, some states could be reached more quickly.
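A single CPTP operation can indeed do what no unitary on the system alone can. As a sketch (my example, not from the paper), the fully depolarizing channel with Kraus operators {I, X, Y, Z}/2 maps |0⟩⟨0| to the maximally mixed state in one step, while unitaries preserve purity:

```python
# One application of the fully depolarizing channel prepares I/2 from |0><0|.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [K / 2 for K in (I, X, Y, Z)]   # sum_i K_i^dag K_i = I  (CPTP)

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)     # |0><0|
rho1 = sum(K @ rho0 @ K.conj().T for K in kraus)     # one channel application

assert np.allclose(rho1, I / 2)                      # maximally mixed in one op
purity = np.trace(rho1 @ rho1).real
print(purity)  # 0.5; unitaries alone would keep the purity at 1
```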
Since allowing more general quantum operations, i.e., unitaries acting also on ancilla, does not reduce to the ancilla-less definition of pure state complexity, it follows that C O can give different results than C when applied to pure states. It is not clear to us if the results can be vastly different, but we do know of cases where there is some difference. For example, in the context of quantum many-body physics, it is known that a Chern insulator ground state cannot be prepared by a finite depth circuit without ancilla, but two copies of a Chern insulator ground state (really a copy and a conjugate copy) can be prepared by a finite depth circuit. Here the inclusion or not of ancilla makes a substantial difference.
From the perspective of holography and tensor networks, it seems to us that ancilla have generally not played a role in the discussion. In other words, the general point of view has been that complexity should be defined with respect to the intrinsic resources of the system and should not make reference to auxiliary degrees of freedom. From this point of view, it makes more sense to think of the purifying system as physical, i.e., as instantiated in the rest of the geometry, instead of merely as ancilla used to apply more general quantum operations to a state. One concrete difference between the two points of view is in terms of whether or not we can act repeatedly on the ancilla.
Ensemble approach: An alternative point of view on mixed-state complexity arises from the fact that a mixed state can be written as a convex combination (i.e. ensemble) of pure states, ρ = Σ_i p_i |φ_i⟩⟨φ_i|. We can thus define the ensemble complexity of ρ as the corresponding convex combination of the complexities of the elements |φ_i⟩, minimized over ways of writing ρ as an ensemble:

C E(ρ) = min over {p_i, |φ_i⟩} of Σ_i p_i C(|φ_i⟩) .

Note that the eigenbasis of ρ is only one possible ensemble, and may be far from the minimal one. Furthermore, the states in a given ensemble need not be orthonormal; e.g., they could be overcomplete. This ensemble-based definition seems qualitatively different from the other definitions given above, although we can relate it to them in some cases. It does have the virtue of reducing to the pure state complexity when ρ is pure. One reason for considering this notion of complexity is that none of the other options we considered seemed to be a very good match for C V. As we will explain below, however, C E turns out to be roughly consistent with C A, but not very well suited to C V either.
What are our expectations for C E in the spin chain model considered above? At zero temperature it should agree with C P, which is just the pure state complexity of the ground state. At infinite temperature the minimal-complexity ensemble is simply the ensemble of product states. Hence we have T = 0 : C E = k₁ n (3.14), while at T = ∞ it is likewise of order n (3.15). If we tried to match these expectations to CV duality, we would be faced with the unusual conclusion that the ensemble complexity of the thermal state is always exactly equal to the complexity of the thermofield double state for all temperatures. While we are not aware of anything ruling this out, it seems unlikely to be true. For example, we can certainly find models, e.g., models with a trivial tensor product ground state, in which C E is strongly dependent on temperature.

Bounds on subsystem complexity
The ensemble approach seems to have a certain advantage over the other definitions, as it appears more tractable for explicit evaluation. To illustrate this point we compute a bound on the ensemble complexity (relative to the ground state) following the work of [56]. For some work towards defining complexity in quantum field theory see [57][58][59][60]. In [56] the authors argued that the relative pure state complexity (the minimum number of gates required to take the vacuum state to a given pure state) associated to the coherent state |re^{iθ}⟩ = e^{−r²/2} e^{re^{iθ} a†} |0⟩ is given by

C(|re^{iθ}⟩, |0⟩) = r(|cos θ| + |θ||sin θ|) , (3.16)

where a† is the creation operator of a single simple harmonic oscillator, and θ is an angular coordinate with range [−π, π).
We would like to make a slightly weaker assumption and propose the formula C(|re^{iθ}⟩, |0⟩) = r f(θ) (3.17), where f(θ) is left undetermined, and use this simple result to illustrate a simple way of finding useful bounds on the ensemble complexity. This requires knowledge of a formula for the pure state complexity as well as a good candidate low-complexity ensemble. We illustrate this in the simplest example of a single harmonic oscillator, as well as its generalization to free scalar quantum field theory.

Single oscillator
Consider a single oscillator mode of a quantum mechanical system with Hamiltonian H = ω₀ a†a, where [a, a†] = 1. We would like to bound the ensemble complexity associated to the thermal state ρ_β ≡ e^{−βH}/Z, where β = 1/T is the inverse temperature. The thermal state, which one would normally write in the Hamiltonian basis {|n⟩} as

ρ_β = (1/Z) Σ_n e^{−βE_n} |n⟩⟨n| (3.19)

with Z = 1/(1 − e^{−βω₀}), has an equivalent decomposition in terms of the normalized coherent states |re^{iθ}⟩, which are obtained from the vacuum by local unitary transformations and are therefore of relatively low complexity. Indeed, it is easy to check that ρ_β is also given by

ρ_β = (A/π) ∫ d²α e^{−A|α|²} |α⟩⟨α| , α = re^{iθ} , (3.20)

with A = (e^{βω₀} − 1), using the relation ∫ d²α e^{−(A+1)|α|²} α^n ᾱ^m = π δ_{nm} n!/(A + 1)^{n+1}. This represents a relatively low-complexity ensemble whose complexity can be used to bound the ensemble complexity C E; that is, C E(ρ_β) ≤ C_E^b ≡ (A/π) ∫ d²α e^{−A|α|²} C₀(|re^{iθ}⟩). We would like to estimate the value of the upper bound C_E^b using the formula for the pure state complexity of coherent states given by (3.17). However, we don't have a formula for C₀(|re^{iθ}⟩) but only for C(|re^{iθ}⟩, |0⟩); therefore we obtain a bound on ∆C E, the ensemble complexity measured relative to the ground state,4 which we denote ∆C_E^b. One can then try to relate this answer to the thermodynamic quantities of the single oscillator. At low temperature ∆C_E^b is exponentially suppressed, while at high temperature ∆C_E^b ∼ 1/(βω₀)^{1/2}, whereas S ∼ −log(βω₀), and therefore ∆C_E^b ∼ e^{S/2}.
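Both ingredients of this bound can be checked numerically (my check, not from the paper): the Gaussian coherent-state decomposition (3.20) reproduces the thermal oscillator state, and the Gaussian average of r controlling ∆C_E^b is ⟨r⟩ = √(π/A)/2.

```python
# Numerical check of rho_beta = (A/pi) int d^2a e^{-A|a|^2} |a><a| with
# A = e^{beta omega_0} - 1, in a truncated Fock space, plus the mean radius.
import numpy as np
from math import factorial, pi, exp, sqrt

N = 25                        # Fock-space truncation
beta_omega = 1.0
A = exp(beta_omega) - 1.0     # A = e^{beta omega_0} - 1 as in the text

n_arr = np.arange(N)
sqrt_fact = np.sqrt(np.array([float(factorial(k)) for k in range(N)]))

def coherent(alpha):
    """Fock amplitudes of the normalized coherent state |alpha>."""
    return exp(-abs(alpha)**2 / 2) * alpha**n_arr / sqrt_fact

# Midpoint grid for the measure d^2 alpha = r dr dtheta
Nr, Nt, R = 800, 128, 6.0
rs = (np.arange(Nr) + 0.5) * (R / Nr)
ths = np.arange(Nt) * 2 * pi / Nt

rho = np.zeros((N, N), dtype=complex)
for ri in rs:
    w = (A / pi) * exp(-A * ri**2) * ri
    for th in ths:
        v = coherent(ri * np.exp(1j * th))
        rho += w * np.outer(v, v.conj())
rho *= (R / Nr) * (2 * pi / Nt)

# Thermal occupations p_n = (1 - e^{-beta omega}) e^{-beta omega n}
p = (1 - exp(-beta_omega)) * np.exp(-beta_omega * n_arr)
assert np.max(np.abs(rho - np.diag(p))) < 1e-3

# Gaussian mean of r entering the bound: <r> = sqrt(pi/A)/2
r_mean = np.sum((A / pi) * np.exp(-A * rs**2) * rs**2) * (R / Nr) * 2 * pi
assert abs(r_mean - 0.5 * sqrt(pi / A)) < 1e-3
```

At high temperature A ≈ βω₀, so ⟨r⟩ ∼ (βω₀)^{−1/2}, reproducing the scaling quoted above.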

Free scalar QFT
The field theory estimate proceeds from the oscillator discussion by considering many a's: an a_k for each spatial momentum k, so that H = Σ_k ω_k a_k† a_k. In this simple case one simply adds the contribution of each k.

4 A similar quantity, called the complexity of formation, was central in the discussion of [61].
We see that this is just a product of the coherent-state ensembles of the previous case, one for each momentum k_i. If one writes ρ_β = ⊗_{k_i} ρ^{k_i}_β, then (3.20) holds for each ρ^{k_i}_β with r → r_{k_i} and θ → θ_{k_i}, and therefore also for their product, leading to the full thermal density matrix.
The complexity of such a product state, which is linear in the parameter appearing in the exponent, is given by the sum of the individual complexities, since there does not seem to be a shortcut even in this case:

C(⊗_k |r_k e^{iθ_k}⟩, |0⟩) = Σ_k r_k f(θ_k) . (3.31)

Therefore an upper bound on the mixed state complexity is given by the thermal average of this mode sum. Consider a system of relativistic particles, so that ω_k ∼ |k| (in this limit the theory becomes conformal); the temperature dependence of the bound then follows from evaluating the sum at low and at large T.
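In the continuum the scaling of the mode sum can be checked directly (my check, not from the paper): with ω_k ∼ |k| in d spatial dimensions, each mode contributes ⟨r_k⟩ ∼ (e^{βω_k} − 1)^{−1/2}, and since T is the only scale, the sum scales as T^d.

```python
# The mode sum int d^d k (e^{beta k} - 1)^{-1/2} scales as T^d = beta^{-d}.
from math import exp
from scipy.integrate import quad

def mode_integral(beta, d=2):
    """Radial integral int_0^inf dk k^{d-1} (e^{beta k} - 1)^{-1/2}."""
    f = lambda k: k**(d - 1) / (exp(beta * k) - 1.0)**0.5
    val, _ = quad(f, 0.0, 60.0 / beta, limit=200)
    return val

d = 2
ratio = mode_integral(0.5, d) / mode_integral(1.0, d)
assert abs(ratio - 2**d) < 0.05
print(ratio)  # ~4: halving beta (doubling T) rescales the sum by 2**d
```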

Discussion
This work analyzed various holographic proposals for subsystem complexity and compared results for eternal black holes to various qubit-based proposals for subsystem complexity. Simple tensor network models were used to develop intuition for the behavior of these measures in a strongly interacting system of qubits which might be expected to reasonably model the gross features of a holographic conformal field theory. While CA duality could be reasonably matched to the purification complexity, we found that CV duality was somewhat in tension with the various proposals we considered. This tension arose in part because CV duality requires that subsystem complexities be superadditive with respect to the total system complexity.
One interesting direction for future work is to search for other measures of complexity that might be better matched to CV duality; alternatively, one could try to modify CV duality, e.g., by including new contributions localized at the RT surface in the bulk. Another interesting direction is to try combining the notions of mixed state complexity studied here with more field-theoretic notions of pure state complexity. While we were able to draw some conclusions about the way different spacetime regions contributed to complexity, for example, the negative interior contribution in CA duality being associated with savings coming from the spectrum preparation, there is still much more to learn about the way subregions influence the state complexity. It would also be very interesting to further explore holographic subsystem complexity in time-dependent situations, especially its covariant aspects.

A Corner terms in subsystem complexity
In the complexity equals action prescription, it was argued that spacetime regions bounded by null sheets give rise to additional boundary terms, whose contribution to the action takes the form of an integral over the corner locus Σ (a codimension-two surface) of a corner integrand a [48]. The factor sgn is a + or − sign that depends on the causal relation between the region of interest (for the purpose of the action evaluation) and the null sheets defining Σ. The normals k, k̄ are normalized such that their inner products with the Killing vectors implementing time translations on the left and right boundaries are constant, that is, k · ∂_{t_L} = −c and k̄ · ∂_{t_R} = −c̄. For a metric of the given form, the inner product between the normals is then fixed, and the corner terms take a form depending on z_i, the z coordinate at corner i.
In this appendix we study these corner terms as they appear in the subsystem complexity evaluation for the charged and uncharged eternal black holes studied in the sections above.
In that evaluation we have four corners: one on the boundary, W⁻ ∩ W⁺, and three on the horizon, W⁺ ∩ H⁺, H⁺ ∩ H⁻ and W⁻ ∩ H⁻, as illustrated in figure 7. A challenge posed by (A.5) is how to evaluate those expressions at the horizon, since they diverge at that exact location. Our strategy is to approximate the contribution from the surfaces lying on the horizon by nearby null surfaces. In that case the surface terms give zero contribution and the corners can be evaluated using (A.5). Once a regularized answer is obtained, we expect a finite final result in the limit in which the surfaces approach the horizon. We now explain this evaluation in detail. First, consider a coordinate system in which the null surfaces are easily described. This can be achieved with the null coordinates u, v, defined on the left exterior region z < z_h and, with appropriate signs, on the future interior region z > z_h. Similar coordinate patches can be defined for the other two regions to cover the full geometry of the eternal black hole. The coordinate z*(z) approaches a constant at the boundary and grows arbitrarily towards the horizon; we fix its boundary value to zero, z*(δ) = 0. We assume that f(z) is smooth and different from zero in a neighborhood of z = z_h; we will comment on the extremal charged case later on.
Since the left future and past horizons H⁺ and H⁻ correspond to the surfaces v = 0 and u = 0 respectively, we consider instead the surfaces v = v_ε and u = u_ε approaching them. Assuming that the boundary corner W⁺ ∩ W⁻ sits at coordinates (u₀, v₀), the three near-horizon corners will be at (u₀, v_ε), (u_ε, v₀) and (u_ε, v_ε), associated to three different radial coordinates which we denote z_{u₀,v_ε}, z_{u_ε,v₀} and z_{u_ε,v_ε} respectively. All these points are regularized and depend on the regularization parameters (u_ε, v_ε) as well as on the physical information contained in u₀ and v₀. Even though we do not know the explicit function z*(z) for general f(z), and hence the relation between the z coordinate and the parameters u₀ and v₀, one can find useful relations between them using the equation equivalent to (A.10). Notice that the corner contributions were derived in [48] so as to guarantee the additivity property of the action formula; it is therefore interesting to check what that implies for the regularization of the horizon surfaces proposed above. An obvious requirement is the following: consider a spacetime region that crosses the horizon at v = 0, divided in two along the horizon with corners at u₁ and u₂. Additivity then requires that the regularization be such that the corner terms on opposite sides of the division combine to reproduce the undivided answer, where as before we choose c c̄ = l²/δ². To simplify the analysis we take the ε → 0 limit as follows. We choose v_ε = −u_ε/u₀², such that u₀ v_ε = v₀ u_ε = −u_ε/u₀, which also implies u_ε v_ε = −u_ε²/u₀². In this case we have z_{u_ε,v₀} = z_{u₀,v_ε}. Since z_{u,v} is actually a function of the product uv, to make this manifest we write z_{u,v} = z(uv).
In the u_ε → 0, z → z_h limit, the first integral is trivial and the second can be written in terms of the variable x ≡ z_h/z. An extra change of variables, u = x^{D−1}, reduces the remaining integral to another member of the family computed in appendix C; using equation (C.12), we obtain the action and hence the bulk contribution to the "complexity". In the interior region we also have a York-Gibbons-Hawking boundary term, which gives a non-zero contribution on the space-like surface covering the singularity, and zero on the light-like surfaces. The corner calculation is a bit more involved, although it follows directly from the procedure outlined in Appendix A. One has to be careful with the sign of each corner as well as the sign change in f(z). The result of that analysis, with c c̄ = l²/δ², involves the same g₀ as computed in Appendix A, (A.28), since the continuity imposed on z*(z) at z_h guarantees it. The coordinates u_L, v_R correspond to the bounded light sheets, u_L = exp[−f′(z_h)(t_L − t_c)/2] and v_R = exp[−f′(z_h)(t_R − t_c)/2]; since they are computed in the region behind the horizon, −f′(z_h) = (D − 1)/z_h there. Notice that for this range of boundary times t_L, t_R, the first term in the resulting expression equals the full term coming from the boundary contribution (B.19).
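As a quick consistency check (mine, and assuming the planar black-brane blackening factor f(z) = 1 − (z/z_h)^{D−1}, which is consistent with the near-horizon relation used above), the identity −f′(z_h) = (D − 1)/z_h can be verified symbolically:

```python
# Check -f'(z_h) = (D-1)/z_h for f(z) = 1 - (z/z_h)^{D-1}.
import sympy as sp

z, zh = sp.symbols('z z_h', positive=True)
D = sp.symbols('D', positive=True)
f = 1 - (z / zh)**(D - 1)
fp = sp.diff(f, z).subs(z, zh)       # f'(z) evaluated at the horizon
assert sp.simplify(-fp - (D - 1) / zh) == 0
print(sp.simplify(-fp))  # (D - 1)/z_h
```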
Adding all the contributions, we get the full interior action; in particular, for t_L = t_R = 0 we have (B.23). Here we expect the log term to be the dominant contribution, and therefore A⁺_int to be always negative. Notice, however, that in this particular case one can be more precise about this condition: one finds that A⁺_int < 0 provided log(z_h/δ) > (3D − 5)/2, which holds for finite D and reasonably small cutoff δ. This constraint is derived from the bounds on g₀ and t_c, namely 0.6931... < −g₀ < (D − 1)/2 and 0 < −t_c < 1 for D ≥ 3.
For comparison with the pure state complexity of the thermofield double state, we would also like the past interior action in the same regime, that is −t_c > t_L, t_R > t_c. The answer for that contribution can be obtained from the future interior answer by simply taking t_{L,R} → −t_{L,R}.
Recalling the additivity property of the action, one can write the pure state complexity in the interval −t_c > t_L, t_R > t_c, where σ = R ∪ L is the full system. During this time interval the complexity is time independent, as noted in [41].
together with the fact that at t_L = t_R = 0 we simply have A⁺_int = A⁻_int. First, notice that the terms with explicit dependence on z_h in A^±_int cancel against the analogous terms in C_A^{L/R}.

C Integrals of incomplete beta functions
In the evaluation of the action associated to some spacetime subregions we encountered integrals involving the incomplete beta function B(z; a, b), with antiderivatives built from terms of the form x^{α+β+1+n}/[(α + n)(α + β + 1 + n)] and x^{α+β+1+n}/(α + β + 1 + n) (C.5), up to an unimportant constant. We now consider two different cases of interest. Case I: β is a non-negative integer, {β ∈ Z, β ≥ 0}, and we assume α + β + 1 > 0. Case II: β is a negative integer of magnitude at least two, {β ∈ Z, −β ≥ 2}, and α is restricted to be a non-integer, α ∉ Z. The integral appearing in the interior evaluation belongs to case II, and its value is obtained from (C.9) with β = −2 and α = 1/(D − 1), while the integral in equation (B.14) belongs to case I, and its value is obtained from (C.7) with β = 0 and α = (D − 2)/(D − 1).
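A generic identity useful when integrating incomplete beta functions, which follows by integration by parts from d/dx B(x; a, b) = x^{a−1}(1−x)^{b−1}, is ∫₀^y B(x; a, b) dx = y B(y; a, b) − B(y; a+1, b). Here is a quick numerical verification (mine, not from the paper), using a non-integer a mimicking α = 1/(D − 1):

```python
# Check int_0^y B(x; a, b) dx = y B(y; a, b) - B(y; a+1, b) numerically.
from scipy.special import betainc, beta
from scipy.integrate import quad

def B(x, a, b):
    """Unnormalized incomplete beta function."""
    return betainc(a, b, x) * beta(a, b)   # betainc is the regularized version

a, b, y = 1.0 / 3.0, 2.0, 0.7
lhs, _ = quad(lambda x: B(x, a, b), 0, y)
rhs = y * B(y, a, b) - B(y, a + 1, b)
assert abs(lhs - rhs) < 1e-10
```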