Circuit complexity for free fermions

We study circuit complexity for free fermionic field theories and Gaussian states. Our definition of circuit complexity is based on the notion of geodesic distance on the Lie group of special orthogonal transformations equipped with a right-invariant metric. After analyzing the differences and similarities to bosonic circuit complexity, we develop a comprehensive mathematical framework to compute circuit complexity between arbitrary fermionic Gaussian states. We apply this framework to the free Dirac field in four dimensions where we compute the circuit complexity of the Dirac ground state with respect to several classes of spatially unentangled reference states. Moreover, we show that our methods can also be applied to compute the complexity of excited states. Finally, we discuss the relation of our results to alternative approaches based on the Fubini-Study metric, the relevance to holography and possible extensions.


Introduction
Recently, holographic complexity has been suggested as a new tool with which to gain insight into the role of entanglement in the emergence of spacetime geometry in quantum gravity [1][2][3][4][5][6]. In particular, it has drawn attention to new gravitational observables to probe the bulk spacetime in the holographic theories. The complexity=volume (CV) conjecture suggests that complexity is dual to the volume of an extremal (codimension-one) bulk surface anchored to a certain time slice in the boundary [1,4]. Alternatively, the complexity=action (CA) conjecture identifies the complexity with the gravitational action evaluated on a particular bulk region, known as the Wheeler-DeWitt (WDW) patch, which is again anchored on a boundary time slice [5,6]. 1 Both of the holographic complexity conjectures point out new classes of interesting gravitational observables and there has been a growing interest in studying these new observables and the corresponding conjectures, e.g., [7][8][9][10][11][12][13][14][15][16][17][18][19]. At present, both conjectures appear to provide viable candidates for holographic complexity, but this research program is still at a preliminary stage. While understanding the properties of the new gravitational observables certainly deserves further study, providing concrete, even qualitative, tests of the two conjectures is hampered because we lack a good understanding of what complexity actually means in the boundary CFT, or in quantum field theory more generally. Certainly, this lack of understanding stands as an obstacle to constructing a precise translation between the new bulk observables and specific quantities in the boundary theory, e.g., in an analogous way that the translation of the replica construction in the boundary yielded a derivation of holographic entanglement entropy [20][21][22].
Beyond gaining new insights into holographic complexity, developing an understanding of complexity in quantum field theory is an interesting research program in its own right. For example, it may lead to progress in quantum simulations of field theories, e.g., [23][24][25][26], or in our understanding of Hamiltonian complexity, e.g., [27,28] and the description of many-body wave functions, e.g., [29,30].
Recently, some preliminary steps were taken to provide a precise definition of circuit complexity in quantum field theories, e.g., [31][32][33][34][35][36][37][38]. The present paper extends these investigations by examining complexity in a free fermionic quantum field theory. Our current investigation is closely related to the discussions in refs. [32,33], which studied the ground state complexity of a free scalar field theory. In particular, ref. [32] adapted a geometric approach, which was developed by Nielsen and collaborators [39][40][41], to evaluate circuit complexity in a scalar field theory, and here we apply Nielsen's approach to defining the complexity of states in a fermionic field theory. We might note that a possible connection between Nielsen's approach and holographic complexity had been advocated by Susskind [2,42,43], but further, the complexity for the free scalar [32] was found to show some surprising similarities to holographic complexity, despite the enormous differences between the quantum field theories appearing in these two settings. We should also point out that ref. [33] developed an alternative approach to defining complexity for the free scalar field theory using the Fubini-Study metric, which matched many results found using Nielsen's approach. 2 Even though we will focus on Nielsen's approach for the fermionic theory, we will also comment on this alternative approach, as well as point out differences and similarities with the scalar theory.
The remainder of the paper is organized as follows: In section 2, we provide a brief review of Nielsen's geometric approach to evaluating circuit complexity and we introduce a group theoretic perspective that naturally arises in applying this technique to evaluate the complexity of quantum field theory states. In section 3, we continue to develop this group theoretic approach by first reviewing its application to Gaussian states in free scalar field theories [32,38]. This review then sets the stage to extend this technique to examine the complexity of Gaussian states in free fermionic theories, which is discussed in section 3.2. As well as discussing the salient features of the application to a general theory of N fermionic degrees of freedom, we present some explicit calculations for the simple case of two fermions. We then apply the previous analysis to evaluate the complexity of the ground state of a free Dirac field in section 4. In section 4.2, we also evaluate the complexity of certain excited states. Section 5 presents a general framework which allows one to evaluate the circuit complexity of arbitrary Gaussian states in any fermionic theory. In section 6, we apply this general method to further examine complexity of the free Dirac field. In particular, we investigate how the complexity of the ground state is affected by alternative choices for the reference state, and also the complexity of more general excited states. We close with a brief discussion of our results and of possible future directions in section 7. We leave some additional technical details for appendices. In appendix A, we discuss a particular class of simple geodesics on general Lie groups, which are relevant in our application of Nielsen's approach to quantum field theories. Appendix B provides a general construction of the minimal geodesics connecting an arbitrary reference state to any desired target state in a fermionic theory.
Note: While the present paper was in preparation, refs. [36,37] appeared, which also address the question of circuit complexity of free fermionic theories. In particular, there is a strong overlap with our study of the ground state complexity in section 4. However, we would like to note that our approach adopts a more abstract group theoretic formalism, which allows us to prove, e.g., that our unitary circuits in fact correspond to minimal geodesics, which is lacking in [36,37]. Further, we evaluate the complexity of the ground state for a variety of different reference states, and we also consider the complexity of various excited states.

Complexity, Nielsen and group theory
The concept of complexity stems from the notion of computational complexity in computer science [45,46]. The question of interest is to ask how much of certain computational resources are required to solve a given task. For a digital computer, we can ask what is the minimal number of computational gates required to implement a specific algorithm, i.e., a specific map between a certain set of input bits and output bits. This question readily extends to quantum information science, where the question becomes what is the minimal number of gates chosen from some set of elementary unitaries {V_I} to implement a unitary transformation U, which produces a desired map from some n-qubit inputs to the corresponding n-qubit outputs [47,48]. An implementation of U becomes a string of elementary unitaries, i.e., U = ∏_{k=1}^{D} V_{I_k}, where D defines the circuit depth of this particular implementation. The circuit complexity then corresponds to the depth of the optimal construction, i.e., the minimal number of gates needed to build U. To be even more precise, it is rarely possible to write a given U exactly as a finite string of discrete gates V_I, but rather only up to an error ε. Hence the circuit complexity of a unitary transformation U is usually defined with respect to some gate set {V_I} and a given tolerance ε as the minimal number of V_I required to implement U, up to an error of ε.
In the context of holography, or in applying these concepts to quantum field theory, we are interested in quantifying the effort required to prepare a certain target state |ψ_T⟩ from a specific reference state |ψ_R⟩ by applying a sequence of unitary gates. Here, |ψ_R⟩ will be chosen with some notion of simplicity in mind, e.g., the degrees of freedom are completely unentangled. Hence the complexity of a family of target states is defined with respect to the reference state |ψ_R⟩, as well as the gate set {V_I} and the tolerance ε. Again, we wish to construct the optimal unitary or shortest circuit which implements |ψ_T⟩ = U |ψ_R⟩ , (2.1) and the complexity of the state |ψ_T⟩ is simply defined as the number of elementary gates comprising this optimal U. Of course, generally there will exist infinitely many different sequences of gates which produce the same target state from a given reference state. Hence, our challenge is to identify the optimal circuit from amongst the infinite number of possibilities. Nielsen and collaborators [39][40][41] introduced a geometric approach to identify the optimal circuit, which was adapted in [32] to evaluate the complexity of the ground state of a free scalar field. In contrast to the previous discussion, where U is constructed as a string of discrete gates, this new approach begins with a continuous description of the unitary, U = P exp(−i ∫_0^1 ds Y^I(s) O_I) , (2.2) where the O_I form a basis of Hermitian generators and P denotes path ordering. In this space, the circuits of interest are the trajectories U(s) = P exp(−i ∫_0^s ds' Y^I(s') O_I) satisfying the boundary conditions U(s = 0) = 1 and U(s = 1) = U. In this framework, Y(s) = (Y^1(s), Y^2(s), · · · ) can also be interpreted as the tangent vector of the corresponding trajectory, Y^I(s) O_I = i (∂_s U(s)) U^{-1}(s). Now Nielsen's approach is to optimize the circuit (2.2) by minimizing a particular cost defined by D(U) = ∫_0^1 ds F(U(s), Y(s)) , (2.5) where the cost function F(U, Y) is a local functional along the trajectory of the position U(s) and the tangent vector Y(s).
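The path-ordered construction of the circuit can be sketched numerically by slicing it into many small gates. The following is a minimal illustration of our own (not taken from the analysis above), using so(3) as a stand-in gate algebra; with real antisymmetric generators the factor of −i is absorbed into the definition of the generators:

```python
import numpy as np
from scipy.linalg import expm

# A small stand-in gate algebra: real antisymmetric so(3) generators.
# (The field-theory discussion uses much larger algebras such as so(2N).)
L1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
L2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])

def path_ordered_exp(Y, steps=2000):
    """Discretize the path-ordered exponential as a product of small gates,
    with later gates acting from the left (path ordering)."""
    U = np.eye(3)
    ds = 1.0 / steps
    for k in range(steps):
        s = (k + 0.5) * ds                  # midpoint of the k-th slice
        A = Y(s)[0] * L1 + Y(s)[1] * L2     # tangent vector Y^I(s) O_I
        U = expm(ds * A) @ U
    return U

# Sanity check: for an s-independent tangent vector, the path-ordered
# product reduces to the ordinary matrix exponential.
U_num = path_ordered_exp(lambda s: (0.3, -0.7))
U_exact = expm(0.3 * L1 - 0.7 * L2)
print(np.max(np.abs(U_num - U_exact)))   # tiny discretization/rounding error
```

For s-dependent control functions Y^I(s), the slices no longer commute and the ordering of the product implements the path ordering P.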
Some simple examples would include
F_1(U, Y) = Σ_I |Y^I| ,   F_2(U, Y) = [ Σ_I (Y^I)² ]^{1/2} ,
F_{1p}(U, Y) = Σ_I p_I |Y^I| ,   F_κ(U, Y) = Σ_I |Y^I|^κ .
Given the interpretation of the Y^I as indicating when certain gates appear in the circuit, the F_1 measure is the closest to the original definition of simply counting the number of gates in the circuit. In F_{1p}, penalty factors p_I are introduced to favour certain directions in the circuit space over others, i.e., to give a higher cost to certain classes of gates. Of course, the F_2 measure can be recognized as the proper distance in a Riemannian geometry on the space of unitaries. This choice will be the focus of much of our discussion in the following. The κ measures F_κ were introduced in [32] because the resulting complexity compared well with results for holographic complexity. Of course, with κ = 2, the F_κ measure yields the same optimal trajectories as F_2, since it corresponds to a test-particle action in the same geometry, while with κ = 1, it reverts back to the F_1 measure. We return to discussing the relative merits of these measures in more detail in section 7. In applying the above approach to a free scalar field theory in [32], a group theoretic structure was found to naturally appear. To produce a tractable problem, only a limited basis of operators O_I was used in constructing the unitary circuit (2.2), and these operators naturally formed a closed algebra, i.e., a Lie algebra g with [O_I, O_J] = i f_{IJ}^K O_K. In [32], a GL(N, R) algebra appeared in the construction of the free scalar ground state using a lattice of N bosonic degrees of freedom. 6 In the following discussion of free fermions, we will be making use of the analogous group structure, which turns out to be O(2N). One advantage of this group theoretic perspective is that the physical details of the operators O_I become less important. Rather, we can simply think of the generators in eq.
(2.2) as the elements of the Lie algebra g and the circuits are then trajectories in the corresponding group manifold G, without making reference to a specific representation, or rather we can choose whichever representation is most convenient for our calculations.
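For concreteness, the cost measures discussed above can be evaluated for a given tangent vector as in the following sketch (ours, for illustration; the sample values of Y are arbitrary), using the standard definitions F_1 = Σ_I |Y^I|, F_{1p} = Σ_I p_I |Y^I|, F_2 = (Σ_I (Y^I)²)^{1/2} and F_κ = Σ_I |Y^I|^κ from ref. [32]:

```python
import numpy as np

def F1(Y):
    """Gate-counting measure: total 'activity' of the control functions."""
    return np.sum(np.abs(Y))

def F1p(Y, p):
    """F1 with penalty factors p_I weighting certain directions more heavily."""
    return np.sum(p * np.abs(Y))

def F2(Y):
    """Riemannian measure: proper-distance element on the space of unitaries."""
    return np.sqrt(np.sum(np.asarray(Y)**2))

def Fkappa(Y, kappa):
    """kappa-measures of ref. [32]; kappa = 1 reduces to F1."""
    return np.sum(np.abs(Y)**kappa)

Y = np.array([0.5, -1.0, 0.25])   # illustrative tangent-vector components
print(F1(Y), F2(Y))
# F_{kappa=2} is the square of F_2, so both are extremized by the same
# trajectories (a test-particle action in the same geometry):
print(Fkappa(Y, 2), F2(Y)**2)
```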
Let us phrase the preceding description of Nielsen's approach in the corresponding group theoretic language - see appendix A for further discussion. In particular, the circuits (2.2) of interest become continuous trajectories γ : [0, 1] → G which connect the identity 1 with the desired unitary transformation U. In identifying the elementary generators with a basis of the Lie algebra g, we are presented with a natural cost function which is inherited from the geometry of the underlying group structure. That is, we restrict ourselves to a cost function that is induced by a positive definite metric ⟨·, ·⟩_1 : g × g → R on the Lie algebra. 7 If we extend a circuit U → e^{−εA} U by applying the gate exp[−εA] from the right, then δU ≈ −εA U and we expect that the length of the circuit should increase by a step ε‖A‖, irrespective of the precise form of U, or equivalently that the tangent vector A U ∈ T_U G has the same length as A ∈ T_1 G. We can therefore extend the metric ⟨·, ·⟩_1 to arbitrary tangent spaces via right-translation, leading to the right-invariant metric ⟨A U, B U⟩_U ≡ ⟨A, B⟩_1. Using the F_2 cost function, the circuit complexity of a given U ∈ G is then defined as the minimal path length, which is nothing else than the geodesic distance between 1 and U on G, which was turned into a Riemannian manifold by the metric ⟨·, ·⟩_U. If instead we wished to consider the F_{κ=2} measure, the circuit complexity becomes C_{κ=2}(U) = min ∫_0^1 ds ⟨Y(s), Y(s)⟩, i.e., for the minimal geodesic, simply the square of the geodesic distance. The group theoretic perspective proves to be quite powerful in evaluating the circuit complexity of simple states in quantum field theory, as was already implicitly seen with the analysis in [32]. In the following, we will apply the tools of Lie theory and the study of symmetric spaces to examine fermionic Gaussian states. In this case, we can restrict our attention to the group G = O(2N) for N fermionic degrees of freedom. Taking N → ∞ then leads to the continuum limit of a fermionic field theory.
We will be able to solve for the minimal geodesic analytically using the metric ⟨·, ·⟩_1, which is compatible with the group structure. In closing this section, let us add the following aside: Recall that evaluating the complexity of a given target state amounts to finding the optimal circuit U which produces the desired transformation in eq. (2.1). However, this prescription typically does not actually fix the boundary condition U(s = 1). That is, one will find that there are simple transformations u which leave the reference state invariant, i.e., |ψ_R⟩ = u|ψ_R⟩, and then given any unitary U_0 satisfying eq. (2.1), U = U_0 u will produce the desired transformation as well. This ambiguity is elegantly characterized in our group theoretic approach if we define the stabilizer subgroup Sta = {u ∈ G : u|ψ_R⟩ = |ψ_R⟩} that preserves |ψ_R⟩. We can then define the equivalence relation U ∼ V iff U = V u with u ∈ Sta, i.e., iff U|ψ_R⟩ = V|ψ_R⟩. Hence the problem of finding the minimal circuit now involves a double extremization. First, we must find the family of geodesics running from 1 to all unitaries in the equivalence class [U] ∈ G/Sta. Secondly, we must find the shortest geodesic amongst this family. Note that the equivalence class [U] is just given by U Sta, where we displace the stabilizer by multiplying with an arbitrary representative U from the left. We illustrate the involved geometry in figure 1.
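For the F_2 measure, the length of the one-parameter circuit U(s) = exp(sA) ending on U = exp(A) can be computed from a matrix logarithm. The sketch below is our own illustration (the normalization ⟨A, B⟩ = tr(AᵀB)/2 is a convention); minimizing over the stabilizer equivalence class is the separate extremization described above:

```python
import numpy as np
from scipy.linalg import logm

def one_parameter_length(U):
    """F_2 length of the circuit U(s) = expm(s*A) with A = logm(U),
    using the inner product <A, B> = tr(A^T B)/2 (a normalization choice)."""
    A = np.real(logm(U))   # principal logarithm, real for a generic SO(n) matrix
    return np.sqrt(np.trace(A.T @ A) / 2.0)

# Rotation by angle 0.8 in SO(2): the circuit length should be 0.8.
theta = 0.8
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(one_parameter_length(U))   # ≈ 0.8
```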
In the setting of bosonic and fermionic Gaussian states, we have Sta = U(N) and the quotient manifolds G/∼ turn out to be given by symmetric spaces [49], namely type CI corresponding to Sp(2N, R)/U(N) for bosons and type DIII corresponding to SO(2N)/U(N) for fermionic systems.

Prelude: fermions versus bosons
In developing our complexity model for free fermions, we are interested in describing fermionic Gaussian states and the unitary transformations that map Gaussian states onto Gaussian states. As we discuss below, this problem naturally involves the group O(2N ) for N fermionic degrees of freedom. Nielsen's approach to defining circuit complexity was recently applied for free scalars [32], which required understanding the analogous unitary transformations mapping bosonic Gaussian states amongst themselves. A GL(N, R) structure arose in this analysis but this is only a subgroup of the full Sp(2N, R) family of transformations, as we review below [38,50]. Hence it is useful to begin here by comparing and contrasting the bosonic and fermionic Gaussian states.
As emphasized above, the precise representation of the unitary circuits becomes unimportant with our group theoretic perspective. We use this freedom here to focus on the simple description of the group of transformations mapping Gaussian states amongst themselves given in terms of their action on the covariance matrix, e.g., [50]. In particular, we will parametrize Gaussian states in terms of their covariance matrix, ⟨ψ| ξ^a ξ^b |ψ⟩ = (1/2) (G^{ab} + i Ω^{ab}) , (3.1) where ξ^a ≡ (q_1, · · · , q_N, p_1, · · · , p_N) describes N degrees of freedom, which may be either bosonic or fermionic. On the right-hand side, G^{ab} = G^{(ab)} is the symmetric part of the correlation matrix on the left, while Ω^{ab} = Ω^{[ab]} denotes the antisymmetric part.
Bosons: For a system of bosonic degrees of freedom, Ω^{ab} is trivial in that it simply encodes the canonical commutation relations of the q_i's and p_i's. On the other hand, the symmetric two-point function G^{ab} completely characterizes the corresponding Gaussian state |ψ⟩ - we are assuming that ⟨ψ|ξ^a|ψ⟩ = 0 here and in the rest of this paper. Hence, as described in [50] and below, a simple description of the group of transformations mapping bosonic Gaussian states amongst themselves is then given in terms of their action on the symmetric covariance matrix.
Fermions: When we consider eq. (3.1) for a fermionic system instead, the symmetric part G^{ab} is fixed by the anti-commutation relations amongst the fermionic degrees of freedom, while the antisymmetric part Ω^{ab} completely characterizes the fermionic Gaussian state |ψ⟩. Hence the covariance matrix (3.1) again provides a simple framework to discuss the corresponding group of unitary transformations for fermionic Gaussian states, as we discuss in the following.
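The decomposition (3.1) of the two-point function into a symmetric part G and an antisymmetric part Ω can be made concrete with a small numpy sketch (ours, for illustration), here for a single bosonic mode in its ground state:

```python
import numpy as np

def covariance_split(C):
    """Split C^{ab} = <psi| xi^a xi^b |psi> = (G^{ab} + i Omega^{ab}) / 2
    into its symmetric part G and antisymmetric part Omega."""
    G = np.real(C + C.T)                # symmetric two-point function
    Omega = np.real(-1j * (C - C.T))    # antisymmetric two-point function
    return G, Omega

# Single bosonic mode in the ground state of a = (q + i p)/sqrt(2):
# <q^2> = <p^2> = 1/2 and <q p> = i/2 = -<p q>*.
C = np.array([[0.5, 0.5j],
              [-0.5j, 0.5]])
G, Om = covariance_split(C)
print(G)    # identity: the symmetric part characterizing this bosonic state
print(Om)   # [[0, 1], [-1, 0]]: the canonical commutator, state-independent
```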

Single boson
It is well known that the group of transformations preserving Gaussian states for N bosonic degrees of freedom is Sp(2N, R), as explained in [38,50,51]. As above, we assemble the conjugate position and momentum operators as ξ^a ≡ (q_1, · · · , q_N, p_1, · · · , p_N). Then the antisymmetric component of the covariance matrix (3.1) becomes Ω^{ab} ≡ [[0, 1], [−1, 0]] , (3.2) where 1 and 0 are N×N identity and zero matrices, respectively. This result holds for any Gaussian state (or in fact, any state), since the canonical commutation relations can be written as [ξ^a, ξ^b] = i Ω^{ab}. The nontrivial component of eq. (3.1) is then the symmetric two-point function G^{ab} = ⟨ψ| {ξ^a, ξ^b} |ψ⟩, which gives a complete characterization of the Gaussian state |ψ⟩. As a simple example, we consider one bosonic degree of freedom, so the group of interest is simply Sp(2, R) and we have ξ^a ≡ (q, p). Now, another way to characterize the Gaussian states is in terms of annihilation and creation operators, e.g., a = (q + i p)/√2 . (3.4) That is, given these operators, there is a corresponding Gaussian state satisfying a|ψ⟩ = 0. However, there is some freedom in the precise definition of the annihilation operator, namely the Bogoliubov transformations, ã = α a + β a† . (3.5) In order to preserve the commutation relations [ã, ã†] = [a, a†] = 1, the coefficients α and β need to satisfy |α|² − |β|² = 1. To correctly account for the dimensions of q and p, these expressions should include a specific scale, e.g., a = (ω_1 q + i p/ω_1)/√2 yields a properly dimensionless annihilation operator. One effect of the Bogoliubov transformations (3.5) is then to rescale this scale, e.g., ω_1 → e^r ω_1 with ϕ = ϑ = 0 in eq. (3.7). See [38] for further discussion. Note that we can change a to ã = e^{iϕ} a without changing the vacuum, which corresponds to a U(1) subgroup of Bogoliubov transformations that do not change the vacuum.
For N bosonic degrees of freedom, there is the freedom of unitarily mixing all N annihilation operators (and creation operators respectively) among themselves, leading to a U(N ) subgroup of different choices of a i that all define the same vacuum.
From this, we can conclude that the most general Bogoliubov transformation (for a single degree of freedom) is given by α = e^{iϕ} cosh r , β = e^{iϑ} sinh r . (3.7) Now given two pairs of creation and annihilation operators, namely (a, a†) and (ã, ã†), they define two distinct Gaussian states satisfying a|ψ⟩ = 0 and ã|ψ̃⟩ = 0. Hence the Bogoliubov transformations (3.5) describe the desired group of transformations mapping the Gaussian states amongst themselves. We can invert eq. (3.4) to define ξ̃^a ≡ (q̃, p̃) for the pair (ã, ã†). Then, the Bogoliubov transformation (3.5) from (a, a†) to (ã, ã†) induces a linear transformation M^a_b on the space V* spanned by ξ^a and ξ̃^a, i.e., ξ^a = M^a_b ξ̃^b. Note that we define M to be the inverse transformation that maps ξ̃ into ξ. The condition of preserving the commutation relations then translates into M Ω M^T = Ω , (3.8) where Ω is the symplectic form on V*. This expression (3.8) extends trivially to the case of N degrees of freedom (by simply extending the range of the indices) and then reveals the Sp(2N, R) group structure noted at the beginning of this section. Of course, we are also interested in the transformation of the symmetric two-point correlator which encodes the transformation of the state, namely G̃^{ab} = ⟨ψ̃| {ξ^a, ξ^b} |ψ̃⟩ = M^a_c G^{cd} M^b_d , (3.9) i.e., the expectation value of the original operators ξ^a in the transformed state. In particular, in a discussion of the circuit complexity of these states, we can represent the gates and unitary circuits with the appropriate symplectic transformations, and describe their action on the state in terms of the above transformation, e.g., see [50]. In our example with N = 1, the Bogoliubov transformation (3.5) gives the symplectic matrix
M ≡ [[cos ϕ cosh r + cos ϑ sinh r , sin ϑ sinh r − sin ϕ cosh r], [sin ϕ cosh r + sin ϑ sinh r , cos ϕ cosh r − cos ϑ sinh r]] . (3.10)
If we start with an initial state |ψ⟩, whose covariance matrix is G ≡ 1, then using eq.
(3.9), the transformed state |ψ̃⟩ is described by 10
G̃^{ab} ≡ [[cosh 2r + cos(ϑ + ϕ) sinh 2r , sin(ϑ + ϕ) sinh 2r], [sin(ϑ + ϕ) sinh 2r , cosh 2r − cos(ϑ + ϕ) sinh 2r]] . (3.11)
We notice that the final state |ψ̃⟩ is independent of (ϑ − ϕ), which corresponds to the U(1) subgroup where we just multiply creation and annihilation operators with opposite complex phases. As a manifold, we have Sp(2, R) = R² × U(1), where (r, ϑ + ϕ) provide polar coordinates on the plane and (ϑ − ϕ) the remaining coordinate on the circle U(1). Since this overall phase is trivial, the space of states M_{b,1} is properly described by the quotient M_{b,1} = R² = Sp(2, R)/U(1). In the general case of N degrees of freedom, this expression for the space of states would become M_{b,N} = Sp(2N, R)/U(N), where the U(N) group mixes the various annihilation operators amongst themselves leaving the corresponding Gaussian state unchanged. M_{b,N} is also known as the symmetric space of type CI [49]. For a detailed discussion of the resulting geometry and geodesics, we refer the interested reader to [38]. However, we add the following comments to conclude our review here: For every Gaussian state |G⟩, we can choose a canonical basis ξ^a ≡ (q_i, p_i), such that G ≡ 1. This means the bilinear form G does not contain information that is invariant under changing the canonical basis, or put simply: "All Gaussian states look the same if we can choose the right basis for each individual state." This changes, of course, if we have two Gaussian states |G⟩ and |G̃⟩ in the same system 11 and force ourselves to represent the two-point functions G and G̃ with respect to the same canonical basis. Again, we can choose a basis such that G ≡ 1, but we will not be able to accomplish the same for G̃. The remaining freedom of choosing a canonical basis is described by the group U(N) = Sp(2N, R) ∩ SO(2N), consisting of canonical transformations (i.e., M Ω M^T = Ω) that are simultaneously orthogonal with respect to G (i.e., M G M^T = G).
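The statements above are easy to verify numerically. The following sketch (our own check) builds the symplectic matrix of eq. (3.10), confirms the symplectic condition of eq. (3.8), and reproduces the transformed covariance matrix (3.11), which indeed depends only on the combination ϑ + ϕ:

```python
import numpy as np

def M_bogoliubov(r, phi, theta):
    """Symplectic matrix of eq. (3.10) for a single bosonic mode."""
    c, s = np.cosh(r), np.sinh(r)
    return np.array([
        [np.cos(phi)*c + np.cos(theta)*s, np.sin(theta)*s - np.sin(phi)*c],
        [np.sin(phi)*c + np.sin(theta)*s, np.cos(phi)*c - np.cos(theta)*s]])

r, phi, theta = 0.4, 0.3, 0.9     # illustrative parameter values
M = M_bogoliubov(r, phi, theta)
Omega = np.array([[0., 1.], [-1., 0.]])

# Symplectic condition, eq. (3.8): M Omega M^T = Omega
print(np.allclose(M @ Omega @ M.T, Omega))     # True

# Acting on the initial covariance matrix G = 1, compare with eq. (3.11):
Gt = M @ np.eye(2) @ M.T
expected = np.array([
    [np.cosh(2*r) + np.cos(theta + phi)*np.sinh(2*r),
     np.sin(theta + phi)*np.sinh(2*r)],
    [np.sin(theta + phi)*np.sinh(2*r),
     np.cosh(2*r) - np.cos(theta + phi)*np.sinh(2*r)]])
print(np.allclose(Gt, expected))               # True
```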
The invariant information about the relation between the original state |ψ⟩ and the transformed state |ψ̃⟩ is completely captured by the eigenvalues of the relative covariance matrix ∆^a_b = G̃^{ac} g_{cb} , (3.12) where g_{cb} is the inverse of G^{ab}, i.e., G^{ac} g_{cb} = δ^a_b. In particular, any quantities that depend on the two states in a Sp(2N, R)-invariant way, e.g., their inner product, can be computed purely from ∆. This will apply to the complexity provided that we choose a geometry that is Sp(2N, R)-invariant, e.g., we do not introduce penalty factors which conflict with the group structure. For our Bogoliubov transformation (3.5), we have spec(∆) = (e^{2r}, e^{−2r}). 11 Of course, this is the situation where we are examining circuit complexity of states, since we have both the target state and the reference state. 12 Note that one could have just as easily defined ∆̃ = G g̃ with g̃ = G̃^{−1}. However, one then has ∆̃ = ∆^{−1} and, due to the fact that ∆ is symplectic, the two have the same spectrum. This is discussed in more detail in section 5. 13 For bosonic states, we find the simple formula |⟨G|G̃⟩|² = det(· · ·) [50]. We
say that |ψ̃⟩ arises from a one-mode squeezing of |ψ⟩ with squeezing parameter r. For bosonic Gaussian states, understanding one-mode squeezing is the key to relating any two states. That is, for any two bosonic Gaussian states |ψ⟩ and |ψ̃⟩ with N degrees of freedom, there exists a normal mode basis (q_1, · · · , q_N, p_1, · · · , p_N), such that |ψ̃⟩ is the result of N independent one-mode squeezing operations in the N different normal modes [38,50]. This is related to the Iwasawa (or KAN) decomposition of Sp(2N, R), e.g., see [52,53].
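As a quick numerical check of the statements above (our own sketch), the relative covariance matrix ∆ = G̃ g with g = G⁻¹ has spectrum (e^{2r}, e^{−2r}) independently of the angular parameter:

```python
import numpy as np

def relative_covariance(Gt, G):
    """Relative covariance matrix Delta^a_b = Gt^{ac} g_{cb}, with g = G^{-1}."""
    return Gt @ np.linalg.inv(G)

r, angle = 0.4, 1.234   # 'angle' plays the role of the combination theta + phi

# Transformed covariance matrix of a one-mode squeezed state (cf. eq. (3.11)):
Gt = np.array([
    [np.cosh(2*r) + np.cos(angle)*np.sinh(2*r), np.sin(angle)*np.sinh(2*r)],
    [np.sin(angle)*np.sinh(2*r), np.cosh(2*r) - np.cos(angle)*np.sinh(2*r)]])

Delta = relative_covariance(Gt, np.eye(2))
# The angle drops out of the spectrum; only the squeezing parameter r survives.
print(np.sort(np.linalg.eigvals(Delta).real))   # [e^{-2r}, e^{2r}]
```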

Two fermions
We now turn to the case of fermionic Gaussian states. In this case, the space of Gaussian states for N fermionic degrees of freedom is given by the quotient M_{f,N} = O(2N)/U(N), which has dimension N(N − 1), e.g., [50]. Of course, this space is a small submanifold within the full 2^N-dimensional Hilbert space H of the fermionic system. Further, it is not preserved by general unitary transformations U(2^N) acting on H, but only by the subgroup O(2N) corresponding to Bogoliubov transformations. That is, the most straightforward way to think of characterizing the fermionic Gaussian states is in terms of the annihilation and creation operators. With N fermionic pairs (a_i, a†_i) satisfying {a_i, a†_j} = δ_ij, the corresponding Gaussian state is again defined by a_i|ψ⟩ = 0, and the Bogoliubov transformations mixing these fermionic operators map Gaussian states to Gaussian states.
In analogy to eq. (3.4) for the bosons, we begin by defining a set of Hermitian fermionic operators given by q_i = (a_i + a†_i)/√2 , p_i = −i (a_i − a†_i)/√2 , (3.13) which are commonly referred to as Majorana modes. In contrast to the analogous bosonic operators, they do not consist of conjugate pairs (q_i, p_i), but rather they are governed by the anti-commutation relations: {q_i, q_j} = δ_ij = {p_i, p_j} and {q_i, p_j} = 0. Turning to the covariance matrix (3.1), if we choose the Majorana basis ξ^a ≡ (q_1, · · · , q_N, p_1, · · · , p_N), the symmetric component becomes simply G^{ab} ≡ δ^{ab}. This result holds for any Gaussian state since G simply encodes the canonical anticommutation relations G^{ab} = {ξ^a, ξ^b} ≡ δ^{ab} (which are preserved by the Bogoliubov transformations). Further, as we will see below, this matrix G^{ab} provides a useful positive definite metric. Hence, in the fermionic case, the nontrivial component of eq. (3.1) is the antisymmetric two-point correlator, which characterizes the corresponding Gaussian state |ψ⟩. Given eq. (3.13) above, we may evaluate this matrix for the state |ψ⟩ annihilated by the a_i as Ω^{ab} ≡ [[0, 1], [−1, 0]] , where 1 and 0 are N×N identity and zero matrices, respectively. We note that this Ω coincides with the form of the symplectic form (3.2) for bosons. Now in analogy with our discussion of bosons, a pair (a_i, a†_i) and (ã_i, ã†_i) defines two distinct Gaussian states satisfying a_i|ψ⟩ = 0 and ã_i|ψ̃⟩ = 0. Hence understanding the group of transformations mapping fermionic Gaussian states amongst themselves is again understanding the Bogoliubov transformations acting on the fermionic annihilation and creation operators. It is simplest to work with the Majorana bases, i.e., ξ̃^a ≡ (q̃_i, p̃_i) and ξ^a ≡ (q_i, p_i), where the Bogoliubov transformations act as a linear transformation. Again, we define the inverse transformation M, such that ξ^a = M^a_b ξ̃^b.
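These anticommutation relations are easy to verify in the smallest representation, with a single fermionic mode realized on a two-dimensional Hilbert space (a sketch of our own, using the standard 2×2 representation a = |0⟩⟨1|):

```python
import numpy as np

a = np.array([[0., 1.],
              [0., 0.]])      # annihilation operator: a|1> = |0>, a|0> = 0
ad = a.T                      # creation operator a†

def anti(A, B):
    """Anticommutator {A, B}."""
    return A @ B + B @ A

# Majorana modes built from a and a†:
q = (a + ad) / np.sqrt(2)
p = -1j * (a - ad) / np.sqrt(2)

print(np.allclose(anti(q, q), np.eye(2)))   # {q, q} = 1
print(np.allclose(anti(p, p), np.eye(2)))   # {p, p} = 1
print(np.allclose(anti(q, p), 0))           # {q, p} = 0
```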
The condition of preserving the anti-commutation relations translates into M G M^T = G . (3.17) Recalling that G^{ab} ≡ δ^{ab} in the Majorana basis, eq. (3.17) makes evident the O(2N) group structure which we referred to above. Of course, the transformation of the states is now encoded in the transformation of the antisymmetric two-point correlator, Ω̃^{ab} = M^a_c Ω^{cd} M^b_d . Hence in a discussion of the circuit complexity of fermionic Gaussian states, we can represent the unitary circuits and gates with the appropriate orthogonal transformations and their generators, and describe their action on the states in terms of the above transformation.
To make this discussion more concrete, let us consider a simple example. However (as we now show), the simplest case of a single pair, i.e., N = 1, turns out to be trivial. In this case, the most general Bogoliubov transformation is ã = α a + β a† . Demanding that the anti-commutation relation is preserved, i.e., {ã, ã†} = 1, yields |α|² + |β|² = 1 . However, fermionic creation and annihilation operators also need to satisfy ã² = (ã†)² = 0. Computing this explicitly for the above transformation leads to a second requirement, α β = 0 . This means that, up to an overall phase, the only possible transformations are α = 1, β = 0 or α = 0, β = 1. That is, ã = a, or we swap the role of creation and annihilation operators with ã = a†. With N = 1, the space of Gaussian states is M = O(2)/U(1), where the U(1) corresponds to the overall complex phase, but this space simply consists of two points. 14 This means that - in contrast to a single bosonic degree of freedom - the squeezing of a single fermionic degree of freedom is trivial. The first non-trivial system consists of two fermionic degrees of freedom, often interpreted as two qubits. With N = 2, the state manifold will be M_{f,2} = O(4)/U(2) , (3.22) which is two-dimensional. In this case, we consider two pairs of fermionic creation and annihilation operators, (a_1, a†_1) and (a_2, a†_2). For this example, let us consider the fermionic Bogoliubov transformation ã_1 = α a_1 + β a†_2 , ã_2 = α a_2 − β a†_1 . (3.23) This is not the most general transformation, but the natural choice if we want to mix a_1 with a†_2. In fact, one can show that one can bring any Bogoliubov transformation into this form by mixing a_1 with a_2, and ã_1 with ã_2, via U(2), which does not change the corresponding Gaussian states |ψ⟩ and |ψ̃⟩.
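The triviality of the N = 1 case can be checked directly in the 2×2 representation: one finds ã² = αβ · 1, so nilpotency of ã forces αβ = 0 (our own sketch, with an illustrative choice of α and β):

```python
import numpy as np

a = np.array([[0., 1.],
              [0., 0.]])     # fermionic annihilation operator
ad = a.T                     # creation operator a†

alpha, beta = 0.6, 0.8       # satisfies |alpha|^2 + |beta|^2 = 1, but alpha*beta != 0
at = alpha * a + beta * ad   # candidate Bogoliubov-transformed operator

# The anticommutation relation {at, at†} = 1 is preserved ...
print(np.allclose(at @ at.T + at.T @ at, np.eye(2)))     # True
# ... but nilpotency fails: at^2 = alpha*beta * identity, not zero.
print(np.allclose(at @ at, alpha * beta * np.eye(2)))    # True
```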
Further, for eq. (3.23), we may choose α to be real so that the following parametrization works well: α = cos ϑ , β = e^{iϕ} sin ϑ . (3.24) The induced transformation M that maps ξ̃^a into ξ^a can then be decomposed as a series of rotations (eq. (3.25)), so it is clear that M ∈ O(4), or rather M ∈ SO(4) because we can continuously reach 1, and that it satisfies M G M^T = G.
As mentioned in eq. (3.22), the space of states is given by the quotient M_{f,2} = O(4)/U(2) = S² ∪ S², because we need to divide by the subgroup U(2) associated with mixing the creation and annihilation operators among themselves, respectively. In particular, we see that the manifold of fermionic Gaussian states again consists of two disconnected components - see footnote 14. We can only continuously deform one state into the other if they lie in the same component - unless we are willing to leave the space of Gaussian states. Our choice of Bogoliubov transformations parametrized by ϑ and ϕ corresponds to the S² connected to the identity. Similar to the bosonic example, we can ask how to encode the invariant relative information between two fermionic Gaussian states |Ω⟩ and |Ω̃⟩. As a preliminary step towards answering this question, let us note that with an appropriate choice of an orthonormal basis ξ^a ≡ (q_1, q_2, p_1, p_2) of Majorana modes, G ≡ 1 and the covariance matrix Ω takes the standard form Ω ≡ [[0, 1], [−1, 0]] . While preserving these forms, we would also like to bring Ω̃ into a standard form. The allowed transformations are given by the subgroup U(2) = O(4) ∩ Sp(4, R), just like for the bosonic case. One can show that the covariance matrix Ω̃ can be brought into a standard form 16 depending only on ϑ, provided that |Ω⟩ and |Ω̃⟩ belong to the same connected component. This indicates that the invariant relative information is encoded in ϑ alone, i.e., the second angle ϕ in eq. (3.24) is irrelevant. Following the discussion of the bosonic theories (e.g., compare to eq. (3.12)), we can describe this invariant information about the relation between the two states in terms of the relative (fermionic) covariance matrix 17 ∆^a_b = Ω̃^{ac} ω_{cb} , where ω_{cb} is the inverse of Ω^{ab}, i.e., Ω^{ac} ω_{cb} = δ^a_b. 18 The invariant information is then captured in the eigenvalues of this matrix. For our choice of Bogoliubov transformation in eqs. (3.23) and (3.24), we have spec(∆) = (e^{2iϑ}, e^{2iϑ}, e^{−2iϑ}, e^{−2iϑ}) and, as expected, ϕ does not appear here.
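The spectrum quoted above can be verified numerically. For ϕ = 0, a short calculation (our own, not spelled out in the text) shows that the transformation of eqs. (3.23) and (3.24) acts on the Majorana basis ξ = (q₁, q₂, p₁, p₂) as a rotation by ϑ in the q-plane and by −ϑ in the p-plane; from this block form one can build ∆ and check its eigenvalues:

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

theta = 0.37                  # illustrative squeezing angle
Z = np.zeros((2, 2))

# Action on the Majorana basis (q1, q2, p1, p2) for phi = 0:
# rotation by theta in the q-plane and by -theta in the p-plane
# (this block form is our own small derivation).
M = np.block([[rot(theta), Z],
              [Z, rot(-theta)]])
Omega = np.block([[Z, np.eye(2)],
                  [-np.eye(2), Z]])        # standard fermionic covariance matrix

print(np.allclose(M @ M.T, np.eye(4)))     # M G M^T = G with G = 1

Omega_t = M @ Omega @ M.T                  # transformed covariance matrix
Delta = Omega_t @ np.linalg.inv(Omega)     # relative covariance matrix

# Eigenvalues e^{±2i theta}, each doubly degenerate:
eig = np.linalg.eigvals(Delta)
print(np.allclose(eig.real, np.cos(2*theta)))   # True
print(np.sort(eig.imag))                        # ±sin(2*theta), each twice
```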
We will later show that for a natural choice of invariant metric on the group, our Bogoliubov transformation that changes ϑ continuously from zero to its final value along a path of fixed ϕ is the minimal geodesic connecting a reference state |ψ to a target state |ψ̃ . 16 Examining the transformation in eq. (3.25), one finds that the final rotation can be eliminated with the phase rotation (ã 1 , ã 2 ) → (ã 1 , e −iϕ ã 2 ), which of course leaves |ψ̃ unchanged. Further, applying the latter transformation takes Ω̃ from eq. (3.26) to the canonical form (3.28). 17 For fermionic states, we find a determinant formula for the squared overlap | Ω̃|Ω | 2 [50], which is strikingly similar to the one for bosons from footnote 13. 18 We will see in section 5 that the bosonic and fermionic relative covariance matrices ∆ arise in the same way when one labels states by their linear complex structure J.
In particular, the geodesic length will be given by |2ϑ| ∈ [0, π]. These paths are just the great circles passing through the pole (at ϑ = 0) on the corresponding two-sphere. This means that in each linearly independent direction, the maximal path length is π/2. 19 However, with a large number of degrees of freedom, the geodesic will move along several such paths in orthogonal directions at the same time. In particular, the overall path can become arbitrarily long in the field theory limit, where we consider an infinite number of degrees of freedom.
For bosons, we reviewed that any two Gaussian states define a set of normal modes, such that there is a natural transformation built from linearly independent one-mode squeezing operations in these modes. In the case of fermions, we observed that: (a) there are two disconnected components on the manifold of states (separating states with even and odd fermion number); and (b) one-mode-squeezing is trivial and we need to perform two-mode squeezing operations. Therefore, we can only find normal modes if two Gaussian states lie in the same connected component and these normal modes always come in pairs, so that the two states are related by a collection of independent two-mode squeezing operations. In particular, if we have an odd number of fermionic degrees of freedom, there will always be a single normal mode left that is not squeezed when moving from one state to the other.

Gates, circuits and complexity
So far, our discussion of fermionic Gaussian states has been at a fairly abstract level. We have used the covariance matrix Ω as a convenient parametrization of the manifold of fermionic Gaussian states and of the action of Bogoliubov transformations on this space. In particular, much of the discussion focused on the case of two fermionic degrees of freedom. Here, we would like to bring the discussion more closely in line with the continuous description of unitary circuits in eqs. (2.2) and (2.3). In particular, these unitaries will be constructed using some basis of Hermitian operators O I , which act on the states in the Hilbert space of our fermionic system. Since we are focusing our attention on circuits which map Gaussian states to Gaussian states, i.e., which implement Bogoliubov transformations, we will only consider generators that are quadratic operators, in analogy with the study of bosonic Gaussian states in [32,33,38]. 19 One may be surprised to find π/2 rather than π here. The reason is that at ϑ = π, we would reach the group element M = −1, as shown by eq. (3.25), which is as far away from 1 as possible. However, the transformed two-point function becomes Ω̃ = M Ω Mᵀ = Ω, i.e., eq. (3.26) reduces to the initial covariance matrix in eq. (3.27) at ϑ = π, and so the final state is identical to the initial one at ϑ = π. This means the group elements which take our state as far away as possible from the initial one are those sitting on the circle at ϑ = π/2, i.e., the equator of the connected S 2 component. Recall that at ϑ = 2π, M returns to the identity, but when we measure the length of the circle covered by ϑ running from 0 to 2π (with fixed ϕ) using our metric ⟨·, ·⟩ 1 (see eq. (3.37) below), its length is actually 4π. Therefore the resulting distance to the maximally distant states is π.
One may describe these quadratic generators in terms of the annihilation and creation operators, but we find it more convenient to work with the Majorana modes (3.13), i.e., ξ a = (q i , p i ). That is, we choose our basis of generators to be the antisymmetric quadratic combinations of the Majorana modes. 20 The antisymmetric form of the indices for these basis generators hints at an SO(2N ) group structure, which is readily confirmed by examining the commutation algebra of the generators.
Let K̂ be a general real linear combination of these Hermitian quadratic operators. Such an operator is completely characterized by an antisymmetric matrix k ab = k [ab] . As a Hermitian operator, K̂ gives rise to the unitary operator U (K̂) = e −iK̂ , which acts on our Gaussian states, i.e., |ψ → |ψ̃ = U (K̂)|ψ . However, we wish to understand this transformation through the action of U (K̂) on the covariance matrix. Hence we consider the corresponding action on the operators ξ a themselves, where we have defined [iK̂, ξ a ] (n+1) = [iK̂, [iK̂, ξ a ] (n) ] and [iK̂, ξ a ] (0) = ξ a , and we have used the Baker-Campbell-Hausdorff formula to simplify this expression. With some algebra, we find the first commutator, where we used the anti-commutation relations {ξ a , ξ b } = G ab . By defining the matrix K a b = G ac k cb , we can write successive commutators as [iK̂, ξ a ] (n) = (K n ) a b ξ b . The action of U (K̂) in eq. (3.31) can therefore be simply expressed as in eq. (3.34), or alternatively in terms of the matrix M (K), following the notation introduced in the preceding discussion. Again, from the antisymmetry of k ab , it is obvious that the generator K will be given by an antisymmetric matrix with respect to a basis where G ab ≡ δ ab , which was implicitly chosen in using the Majorana modes above. Hence we recognize M (K) as a group element in SO(2N ) with the generator K a b = G ac k [cb] ∈ so(2N ). Recall that in discussing the complexity, we must choose a metric ⟨·, ·⟩ 1 in eq. (2.7) on the Lie algebra, i.e., so(2N ) for N fermionic degrees of freedom. This Lie algebra is (2N − 1)N -dimensional, so the possible metrics correspond to the space of positive definite linear forms described by symmetric (2N − 1)N × (2N − 1)N matrices, which has some very large dimension. 21 However, there is one particularly natural choice that is induced by the anticommutation relations, given in eq. (3.36). This inner product is clearly positive definite because in a basis with G ab ≡ δ ab , we have ⟨A, A⟩ 1 = Tr(AAᵀ) ≥ 0.
This inner product can be recognized as a canonical Lie algebra structure by noting that for A ∈ so(2N ), we have G Aᵀ g = −A, so that in a basis with G ≡ 1 we can rewrite ⟨A, B⟩ 1 = −Tr(AB). The last expression is well known to be proportional to the negative Killing form, which is a positive definite inner product for semi-simple compact Lie groups [54]. Recall that this metric is then extended to the entire group by right translation, as in eq. (2.8). For this choice of metric, the computation of geodesics becomes relatively simple. In fact, in appendix A, we prove that every geodesic beginning at the identity is given by e sA for some fixed A ∈ so(2N ). For the rest of this paper, we will always refer to this metric, if not indicated otherwise. Given this key result from appendix A, we can easily compute the complexity associated to the state produced by such a geodesic, which connects some reference state |ψ R at s = 0 to the target state at s = 1. The key simplification is that the magnitude of the tangent vector along these geodesics is fixed. We can compute explicitly γ̇(s) = A e sA , leading to the constant norm ⟨γ̇(s), γ̇(s)⟩ = ⟨A, A⟩ 1 using eq. (2.8). The result is not very surprising because the trajectory is moving continuously in the direction generated by a single Lie algebra element A.
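To make the constant-speed property concrete, here is a small numerical sketch (our own illustration, in a basis with G ≡ 1, where so(2N) is simply the set of antisymmetric matrices): the metric reduces to Tr(A Aᵀ) = −Tr(AA), and the right-translated tangent vector of γ(s) = e^{sA} is the constant A.

```python
import numpy as np
from scipy.linalg import expm

# Our own sketch (G = 1 basis): the inner product Tr(A A^T) equals -Tr(A A),
# i.e., minus the Killing-form-like trace, and is positive; and along
# gamma(s) = e^{sA} the right-translated tangent vector is the constant A.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 6))
A = X - X.T                                   # generic element of so(6)

assert np.isclose(np.trace(A @ A.T), -np.trace(A @ A))
assert np.trace(A @ A.T) > 0                  # positive definiteness on so(2N)

# Finite-difference check that gamma'(s) gamma(s)^{-1} = A, so the speed is fixed.
s, eps = 0.37, 1e-6
gdot = (expm((s + eps) * A) - expm((s - eps) * A)) / (2 * eps)
assert np.allclose(gdot @ np.linalg.inv(expm(s * A)), A, atol=1e-4)
```

Since the speed is constant, the geodesic length from 1 to e^{A} is simply the norm of A.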
The geodesic trajectory arises naturally in evaluating the complexity using the F 2 measure as in eq. (2.9), in which case the complexity is given by the Riemannian length of the geodesic, as in eq. (3.40). However, the same geodesic appears using the κ measure with κ = 2 as in eq. (2.10), and then the complexity is given by eq. (3.41). Note that these two results are very simply related, i.e., C κ=2 (e A ) = C 2 (e A ) 2 . We can make this discussion more explicit by turning to the N = 2 case considered in the previous subsection. In particular, given the two-mode squeezing transformation M (ϑ, ϕ) in eq. (3.25), we find the corresponding generator A(ϑ, ϕ) in eq. (3.42). That is, we can write M (ϑ, ϕ) = e A(ϑ,ϕ) . To gain some intuition for these transformations, we might imagine that ϕ is fixed but ϑ is allowed to vary. Recall that these angular coordinates cover the S 2 connected to the identity in eq. (3.22). The identity corresponds to, say, the north pole (i.e., ϑ = 0). Fixing the angle ϕ corresponds to selecting a direction from amongst the lines of longitude, which describe the different state-changing directions at the identity. Finally, varying ϑ from zero to, say, π/2 describes a trajectory along this line of longitude from the north pole to the equator. Of course, as described above, if we continue along the same great circle, we arrive at the south pole at ϑ = π and return to the north pole at ϑ = 2π. We can use the above expressions to build a geodesic path from the identity to the group element e A(ϑ,ϕ) , given by eq. (3.43). Further, for the generator A(ϑ, ϕ) in eq. (3.42), we can compute the magnitude of the tangent vector using eq. (3.37), which can then be substituted into either eq. (3.40) or (3.41) to evaluate the complexity. In particular, the geodesic length of eq. (3.43) is simply given by 2ϑ. At this point, we have not proven that eq. (3.43) is the minimal geodesic (i.e., recall the discussion around eq. (2.11)), but based on the results of appendix A, we have shown that the geodesic distance between 1 and e A(ϑ,ϕ) is given by 2ϑ.
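The N = 2 geodesic length can be verified numerically. In the sketch below, the explicit matrix form of the generator A(ϑ, 0) is our assumption (derived from the Bogoliubov transformation in eq. (3.23) in the Majorana basis (q1, q2, p1, p2)); with it, e^{A} reproduces a rotation by ϑ and the length works out to 2ϑ.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the N = 2 geodesic at phi = 0. The matrix form of A(theta, 0) below
# is our assumption: it rotates the (q1, q2) plane by +theta and the (p1, p2)
# plane by -theta, matching a1 -> cos(theta) a1 - sin(theta) a2^dagger.
J = np.array([[0.0, -1.0], [1.0, 0.0]])

def A_of(theta):
    A = np.zeros((4, 4))
    A[:2, :2] = theta * J
    A[2:, 2:] = -theta * J
    return A

theta = 0.4
M = expm(A_of(theta))                         # group element e^{A(theta, 0)}
assert np.allclose(M @ M.T, np.eye(4))        # lies in SO(4)
assert np.allclose(M[:2, :2], [[np.cos(theta), -np.sin(theta)],
                               [np.sin(theta),  np.cos(theta)]])

# Geodesic length: sqrt(<A, A>_1) = sqrt(Tr(A A^T)) = 2 theta for this generator.
assert np.isclose(np.sqrt(np.trace(A_of(theta) @ A_of(theta).T)), 2 * theta)
```

Note that running ϑ from 0 to 2π then gives a closed circle of total length 4π, as stated in footnote 19.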

Complexity for the Dirac field
Before developing systematic methods to compute the circuit complexity of arbitrary fermionic Gaussian states, we can already apply the previous results from section 3, discussing two fermionic degrees of freedom, to find the complexity of the ground state of a free Dirac fermion. We are applying Nielsen's approach to build the optimal unitary circuit U which accomplishes the transformation |ψ T = U |ψ R . The target state will be the ground state of the Dirac field, |ψ T = |0 . As reference state, we will choose a state where the local fermionic degrees of freedom (at each spatial point on a given time slice) are unentangled, |ψ R = |0̄ . We consider a free Dirac field in four-dimensional Minkowski space. 22 We introduce the basis of four-component rest-frame spinors in eq. (4.1). Boosted spinors can then be found by acting with the boost matrix, where the four-vector products are p · σ = E p 1 − p · σ and p · σ̄ = E p 1 + p · σ, with E p = √(m 2 + p 2 ). Of course, the analogous formula applies for v s (p). We can now write the Dirac spinor field (on a fixed time slice, e.g., t = 0) as in eq. (4.3). Clearly, we have four fermionic degrees of freedom per (spatial) momentum p. Recall that the annihilation and creation operators satisfy the standard anticommutation relations. Here, we closely follow the conventions of [55]. However, note that we have changed the normalization of the basis spinors by a factor of √ m, e.g., u s (p) [55] = √ m u s (p) here .
The ground state is the fermionic Gaussian state |0 , defined by a s p |0 = 0 = b s p |0 , and this will be the target state for which we are evaluating the circuit complexity.
As we indicated above, our desired reference state, |ψ R = |0̄ , will be a Gaussian state where the local fermionic degrees of freedom at each spatial point on a given time slice are unentangled. Therefore, let us now introduce local creation and annihilation operators. These operators are not completely defined until we make a specific choice of how to express the Dirac field (4.3) in terms of these local operators. Our unentangled reference state is then defined by ā s x |0̄ = 0 = b̄ s x |0̄ . Note that in this expression, we intentionally chose the rest-frame basis spinors (4.1) with p = 0 to find a rotationally invariant reference state |0̄ , but we will discuss alternative choices of our reference state in section 6.1.
As described in the previous section, the unitary transformation from the reference state to the target state, i.e., |0̄ → |0 = U |0̄ , can be understood in terms of the Bogoliubov transformation relating the annihilation and creation operators with which we define these states. Hence, to simplify the latter, we first find the Fourier transformed version of the local operators introduced above. The Dirac field is then expressed in terms of these momentum-space operators. We note that the Fourier transform performs a 'trivial' Bogoliubov transformation, in that it mixes only the annihilation operators ā s x amongst themselves, and the same for the b̄ s x . As a result, the Gaussian state defined by these new operators is still the unentangled reference state |0̄ , i.e., ā s p |0̄ = 0 = b̄ s p |0̄ . Now comparing eqs. (4.3) and (4.8), we can immediately identify the Bogoliubov transformation which yields the barred operators in terms of the unbarred ones. In particular, computing the product with the conjugate basis spinors u r † (p) and v r † (−p) from the left, 23 we find eqs. (4.9) and (4.10). The spinor products appearing there are most easily evaluated by assuming that p points in, e.g., the third spatial direction, p = (0, 0, p z ), and then rotating to a general frame with spinor labels r̄ and s̄. 24 The resulting products are given in eq. (4.11). Note that in the third line, we have introduced the notation r̄ ′ ≡ r̄ + 1 (mod 2). Substituting these into eqs. (4.9) and (4.10), we find a simple Bogoliubov transformation for pairs of operators, given in eq. (4.12), with the coefficients α s̄ p and β s̄ p defined in eq. (4.13). Note that there is no sum over s̄ in eq. (4.12). Further, it is easy to verify that |α s̄ p | 2 + |β s̄ p | 2 = 1, which ensures that we indeed have a proper fermionic Bogoliubov transformation. 24 We must point out that this rotation acts on both the momentum and spin at the same time. As a result, the spin labels in eq. (4.11) and throughout the rest of this section are implicitly oriented along the momentum direction, and we have introduced the 'barred' spin labels to denote this orientation.
To be precise, a s̄† p (or b s̄† p ) with s̄ = 1 creates a particle (an antiparticle) with its spin aligned with the momentum p, while with s̄ = 2, the spin is oriented in the −p direction. Further, we note that the reference state is rotationally invariant, and so these rotations leave |0̄ unchanged.
Hence the annihilation and creation operators are paired according to their momentum and spin (i.e., s̄ ∈ {1, 2}), but for each of these pairs, the Bogoliubov transformation takes the simple form given in eq. (3.23). In particular, comparing to eq. (3.24), we may set cos ϑ = α s̄ p and ϕ = 0 for s̄ = 1 (or ϕ = π for s̄ = 2).

Dirac ground state
For the complexity to transform the unentangled reference state |0̄ into the fermionic vacuum |0 , we recall that the geodesic distance was given by 2ϑ in the parametrization of the fermionic two-mode squeezing operations in eq. (3.24) (see the discussion around eqs. (3.43) and (3.44)). In particular, there is a generator analogous to that in eq. (3.42) for each pair of modes, and the magnitude of this generator, Y (m, p, s̄), is given in eq. (4.14). Above, the sign is not fixed by the cos −1 , but with the choices of ϕ made above, we ensure that sin ϑ > 0 and so Y (m, p, s̄) > 0 in the final expression. Note that for each momentum, the two spins (i.e., s̄ = 1, 2) give two identical contributions. Figure 2 shows this expression as a function of |p| for various values of the mass m. We note that for large |p|, the complexity per mode rapidly approaches π/2. A special case is m = 0, for which the complexity per mode is a fixed constant, i.e., Y (m = 0, p, s̄) = π/2. That is, Y takes the maximal value for all modes in the theory of the massless free fermion.
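The limiting behaviour of the complexity per mode can be illustrated numerically. Eq. (4.13) is not reproduced above, so the sketch below assumes the standard Dirac Bogoliubov coefficient α p = √((E p + m)/(2E p )) (this explicit form is our assumption, not quoted from the text) and sets Y = 2ϑ with cos ϑ = α p .

```python
import numpy as np

# Hedged sketch: the coefficient alpha_p = sqrt((E_p + m)/(2 E_p)) below is an
# assumption (eq. (4.13) is not reproduced in the text); with it, Y = 2 theta
# with cos(theta) = alpha_p reproduces the stated limits.
def Y(m, p):
    E = np.sqrt(m**2 + p**2)
    return 2 * np.arccos(np.sqrt((E + m) / (2 * E)))

# Massless modes sit at the maximal value pi/2 for every |p| ...
assert np.allclose(Y(0.0, np.array([0.1, 1.0, 10.0])), np.pi / 2)
# ... while for m > 0, Y grows from 0 at small |p| towards pi/2 at large |p|.
assert Y(1.0, 1e-6) < 1e-3
assert abs(Y(1.0, 1e4) - np.pi / 2) < 1e-3
```

Under this assumption, the mode at rest (p = 0) needs no squeezing at all, while highly relativistic modes are maximally squeezed, matching figure 2.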
To generate the vacuum state |0 from our unentangled reference state |0̄ , we are squeezing all of the modes, and we should think of Y (m, p, s̄) for the various values of p and s̄ as the components of the tangent vector Y to the geodesic trajectory in the full geometry. The total complexity is then found by integrating over all momenta and summing over the spins, with either the F 2 or κ = 2 measures in eq. (2.6). 25 With the F 2 measure, we find eq. (4.16), where the spatial volume V appears to normalize the momentum integral. Similarly, for the κ measure with κ = 2, we find eq. (4.17). Note that the latter integral is over the squares of the individual complexities per mode. The gate generating the minimal circuit corresponds to the sum of the individual gates for each mode. As Lie algebra generators, these are orthogonal with respect to our right-invariant metric, such that the total norm of their sum is given by a Pythagorean sum, or rather integral. Of course, we also have the expected relation C κ=2 = C 2 2 . Because Y (m, p, s̄) tends to the constant π/2 at large momenta (as shown in eq. (4.15)), this total complexity is UV divergent. Choosing a hard cutoff Λ for the momentum integral allows us to compute the integral exactly, leading to the rather long expression in eq. (4.18). This expression can be simplified by expanding for large Λ/m, which yields eq. (4.19). The leading term becomes more and more precise in the massless limit, as we can infer from figure 2, where the complexity per mode approaches the constant π/2. That is, for the massless theory, we have simply C κ=2 |0̄ → |0 = V Λ 3 /12. Of course, as noted above, the results for the F 2 measure are simply given by taking the square root of the above complexities, i.e., C 2 = √ C κ=2 . However, the divergence structure produced with the κ = 2 measure matches more closely that found in holographic complexity, i.e., we expect that C holo ∼ V Λ 3 [12].
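The massless result quoted above can be checked directly: with Y = π/2 for every mode and two spin pairs per momentum, the momentum integral is elementary. The following sketch performs it numerically (our own check of the arithmetic, with hypothetical values of V and Λ).

```python
import numpy as np

# Numerical check of the massless limit: every mode contributes Y = pi/2 and
# there are two spin pairs per momentum, so with the kappa = 2 measure
#   C = (V/(2 pi)^3) * int_{|p|<Lambda} d^3p * 2 * (pi/2)^2 = V Lambda^3 / 12 .
V, Lam, n = 1.0, 2.0, 20000
p = (np.arange(n) + 0.5) * (Lam / n)                  # midpoint grid for |p|
integrand = 4 * np.pi * p**2 * 2 * (np.pi / 2) ** 2   # angular integral done analytically
C = V / (2 * np.pi) ** 3 * np.sum(integrand) * (Lam / n)
assert np.isclose(C, V * Lam**3 / 12, rtol=1e-6)
```

The factors of π cancel between the measure (2π)⁻³ and the squared mode complexity, leaving the purely extensive result V Λ³/12.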
At this point, we should add that our results in eqs. (4.18) and (4.19) agree with those presented in [37], up to an overall normalization constant (i.e., if we multiply our results by 2π 2 , the expressions agree). This discrepancy simply arises due to a slight difference in the choice of conventions. Further, let us emphasize that our methods presented in section 5 and appendix B prove that our path is the minimal geodesic in the full SO(2N ) group of the fermionic theory, which was left as an open question in [37].
We might also consider the κ = 1 measure (or equivalently, the F 1 measure), for which the complexity is given in eq. (4.20). If we again introduce the cutoff Λ, we can do this integral explicitly and find the relatively simple expression in eq. (4.21). This expression then yields the large Λ/m expansion for the complexity in eq. (4.22). However, we should note that this measure (as well as the general κ measures with κ ≠ 2) is basis dependent [32], and so implicitly we are choosing the normal-mode basis in eq. (4.20).

Simple excited states
We can also evaluate the complexity of a number of excited states. First, we observe that the state |ψ = a r̄† q |0 with a single particle excitation (in a fixed spin state) remains a Gaussian state, since it is annihilated by a r̄† q , i.e., a r̄† q |ψ = (a r̄† q ) 2 |0 = 0, as well as by the usual annihilation operators for all of the other spins and momenta. However, this particular state has odd fermion number and is on the disconnected component of the space of Gaussian states (see footnote 14). While here we can only evaluate the complexity of certain Gaussian states with even fermion number, we will develop our formalism further in the next section to describe the complexity of general states with even fermion number (see the discussion in section 6.2). One simple set of states which we can consider here, given the Bogoliubov transformations in eq. (4.12), are the excited states of the form given in eq. (4.23). 26 This state is annihilated by a r̄† q and b r̄′† −q (and again by the usual annihilation operators for all of the other modes). Hence eq. (4.14) still applies for most of the pairs of modes, but we must reconsider the contribution for the pair labeled by p = q and s̄ = r̄. For this pair of modes, we can simply relabel the annihilation operators as (ã, b̃) = ((−) r̄ b r̄′† −q , (−) r̄ a r̄† q ). With this choice, eq. (4.12) can be rewritten as eq. (4.26). 26 Recall our notation is r̄ ′ = r̄ + 1 (mod 2). There is no sum over r̄ here, but the two creation operators carry opposite spin labels. Note that this state has vanishing particle number since it involves one particle and one antiparticle. Similarly, it has zero net momentum, but there is a net spin because, e.g., with r̄ = 1, the particle's spin is oriented in the +q direction and the antiparticle with r̄ ′ = 2 also has its spin pointing in the +q direction (see footnote 24). Hence the Bogoliubov transformation still takes the simple form given in eq. (3.23).
In particular, comparing to eq. (3.24), we may set cos ϑ̃ = α̃ and ϕ = π for r̄ = 1 (or ϕ = 0 for r̄ = 2). Now the analog of eq. (4.14) for the Bogoliubov transformation (4.25) for these particular modes is given by Ỹ (m, q, r̄) in eq. (4.27). Again, this result is independent of the spin label r̄ appearing in the state (4.23). Comparing eqs. (4.14) and (4.27), we see that Y (m, q, r̄) + Ỹ (m, q, r̄) = π. 27 Therefore, while Y ∈ [0, π/2], we have Ỹ ∈ [π/2, π]. Figure 3 shows this expression as a function of |q| for various values of the mass m. We note that for large |q|, the complexity per mode rapidly approaches π/2, which is now the minimal value (and also coincides with the contribution of these modes to the vacuum complexity). Again, m = 0 is a special case, where Ỹ (m = 0, q, r̄) = π/2. As before, when evaluating the total complexity of our excited state (4.23), we must integrate Y (m, p, s̄) over all momenta p, as well as sum over the spin labels s̄. However, in this integration only a single contribution, i.e., p = q and s̄ = r̄, differs from that in the vacuum complexity. Hence, for example, with the κ = 2 measure, we have eq. (4.28). Let us note an important subtlety in arriving at the above expression: at first sight, one may think that since Ỹ only differs for a single momentum mode, this should correspond to a set of measure zero in the integration, and hence the complexity should remain unchanged. However, if we are working with a finite volume V , the momentum integral becomes a discrete sum. Alternatively, each momentum mode occupies a cell of size (2π) 3 /V in the continuous integration, i.e., one can think that exciting a single discrete (physical) mode q is properly approximated by exciting all momenta in a cell of size (2π) 3 /V around q. These two perspectives are then reconciled by noting that in eq. (4.28), the additional terms do not scale with the volume, i.e., their contribution is vanishingly small in the limit V → ∞.
Now, just as with the vacuum complexity, the complexity of these excited states is UV divergent, as shown in eq. (4.19). However, eq. (4.28) shows that exciting the particle-antiparticle pair in eq. (4.23) only makes a finite perturbation of the vacuum complexity. Thus an interesting quantity to consider is the difference between the complexity of our excited state and that of the vacuum state, which yields the UV finite quantity in eq. (4.29), i.e., the UV divergences in the complexity of the excited state are precisely canceled by those in the vacuum complexity. Note that we used Y (m, q, r̄) + Ỹ (m, q, r̄) = π in the final expression. We can construct a similar difference using the κ = 1 measure, which yields eq. (4.30). Interestingly, both of these differences are equal to one another up to an overall factor of π. In general, one finds that ∆C κ |0̄ → |ψ ∝ π − 2Y (m, q, r̄), but the full expressions are more complicated for general κ. Therefore, since Y (m, q, r̄) tends to π/2 for large momenta, ∆C κ |0̄ → |ψ → 0 in the limit of large q.
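The relation between the two complexity differences follows from simple algebra, since only the single excited mode changes between the two states. A small numerical check (our own illustration):

```python
import numpy as np

# Check of the stated relations: with Ytilde = pi - Y, the kappa = 2 and
# kappa = 1 complexity differences for a single excited mode are
#   dC2 = Ytilde^2 - Y^2 = pi * (pi - 2 Y)   and   dC1 = Ytilde - Y = pi - 2 Y,
# i.e., equal up to an overall factor of pi.
for Yv in np.linspace(0.0, np.pi / 2, 7):
    Yt = np.pi - Yv                  # excited-mode contribution replaces Y
    dC2 = Yt**2 - Yv**2              # only the p = q, sbar = rbar mode changes
    dC1 = Yt - Yv
    assert np.isclose(dC2, np.pi * (np.pi - 2 * Yv))
    assert np.isclose(dC1, np.pi - 2 * Yv)
    assert np.isclose(dC2, np.pi * dC1)
```

Both differences vanish as Y → π/2, i.e., for highly relativistic excitations, consistent with the large-q statement above.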
One could attempt similar calculations with the F 2 measure (4.16). However, because of the square root appearing in this expression, one finds that the difference in the complexities is vanishingly small in the large volume limit, as in eq. (4.31). Hence this analysis is less interesting for the F 2 measure. A simple extension of the above discussion would be to excite a finite number of particle-antiparticle pairs, in a state of the form (4.32). Again, these simple states are characterized by having vanishing particle number and vanishing net momentum. The above calculations extend in a straightforward manner, and one would find, e.g., the analog of eq. (4.29) with a sum over the excited pairs. With the methods developed so far, we can also examine the complexity of some other families of simple excited states. For example, we next consider states where we excite two particles or two antiparticles with the same momentum but opposite spins, as in eq. (4.34). We will focus on the (B) states with two particle excitations in the following, but of course, the discussion for the (C) states would be the same after exchanging a ↔ b (as well as q ↔ −q).
In the new state a 1† q a 2† q |0 , the sector describing the momentum mode q has annihilation operators (ã r̄ q , b̃ r̄ −q ) = (a r̄† q , b r̄ −q ) for r̄ = 1, 2. Similarly, the creation operators are (ã r̄† q , b̃ r̄† −q ) = (a r̄ q , b r̄† −q ), and so for this sector, the Bogoliubov expression (4.12) becomes eq. (4.35), where α r̄ q and β r̄ q are given by eq. (4.13). While this transformation does not take our standard form (4.12), we see that the creation operators are both given by some (real) linear combination of ā r̄ q and b̄ r̄† −q . Analogously, the annihilation operators are linear combinations of ā r̄† q and b̄ r̄ −q , and hence both the target state and the reference state are annihilated by b̄ r̄ −q ! This contrasts with the standard situation for all of the other momentum modes: examining eq. (4.12), it is straightforward to show that the target state is not annihilated by ā s̄ p , b̄ s̄ −p or any linear combination of these operators. Hence the essential feature of the transformation (4.35) is that it implicitly swaps ā r̄ q to ā r̄† q . Since the above transformation does not take our usual form, we instead pair the (annihilation) operators of the reference state as (ā 1 q , ā 2 q ) and (b̄ 1 −q , b̄ 2 −q ). We can then produce the desired target state (4.34) with two transformations of the form in eqs. (3.23) and (3.24). The first performs the desired swap on the ā r̄ q with ϑ 2 = π/2, and the second leaves the b̄ r̄ −q unchanged with an angle ϑ 1 = 0, i.e., we have the angles given in eq. (4.36). Hence, as in eq. (4.28) with the κ = 2 measure, we arrive at eq. (4.37). However, as in eq. (4.29), we may also consider the difference between the complexities of our excited state and the vacuum state, which yields eq. (4.38), where we used the fact that Y (m, q, r̄) in eq. (4.14) is actually independent of the spin. We must note that the generators implied by the transformation described above in eq. (4.36) are not the same as the standard two-mode squeezing operators producing eq. (4.12).
Hence, we have implicitly made use here of the fact that the κ = 2 measure is independent of the basis of generators. While the same is not true of the κ = 1 measure, we may write the corresponding difference as in eq. (4.39), as long as we align the basis for the excited modes with the generators which produce the above transformation. We note that this difference is identical to that found for the previous excited states with the κ = 1 measure, in eq. (4.30).
To close the discussion here, we examine the complexity of a fourth class of simple excited states, given in eq. (4.40). In these states, we excite one particle and one antiparticle with opposite momenta, as in eq. (4.23), but here their spins are anti-aligned with each other. For example, setting r̄ = 1, a 1† q creates a particle with its spin aligned with the +q direction. However, for the antiparticle, we have b 1† −q , which creates an antiparticle whose spin points in the same direction as its momentum −q. Thus the antiparticle's spin is oriented in the opposite direction to the spin of the particle, and the state (4.40) has zero net spin. Now, using reasoning analogous to that in the previous case, one concludes that both the reference state and the new excited state are annihilated by ā r̄ q and b̄ r̄ −q . Further, the desired transformation must swap ā r̄ q to ā r̄† q and b̄ r̄ −q to b̄ r̄† −q . Hence, following the previous reasoning, we pair the annihilation operators as (ā r̄ q , b̄ r̄ −q ), together with the corresponding pair carrying the opposite spin label. We can then produce the desired target state (4.40) with two standard Bogoliubov transformations (as in eqs. (3.23) and (3.24)), where the transformation acting on the first pair produces the desired swap with ϑ 1 = π/2 and the other leaves the second pair unchanged with ϑ 2 = 0. Hence, we arrive at essentially the same result as in eq. (4.36), Ỹ (m, q, 1) = 2ϑ 1 = π , Ỹ (m, q, 2) = 2ϑ 2 = 0 . (4.41) Further, the results for the complexity are identical to those above for the states in eq. (4.34). In particular, if we evaluate the difference between the complexities of this excited state and the vacuum state with the κ = 2 and 1 measures, we find precisely the results in eqs. (4.38) and (4.39), respectively. At this point, we would like to emphasize that it was essential in our derivation of the complexity of the (A) and (D) families of excited states, in eqs. (4.23) and (4.40), that the spin axis of all of the excitations was aligned (or anti-aligned) with the momentum of the given mode.
After developing systematic analytical tools in section 5, we will be able to generalize these classes in section 6.2 by allowing a spin axis independent of the momentum direction. In contrast, in the (B) and (C) families in eq. (4.34), the two particles (or antiparticles) combine to form a spin singlet, and therefore the result for the complexity should not rely on the alignment of the spin and momentum axes. As a final note, let us add that it is straightforward to extend the discussion of the complexity of the states in eqs. (4.34) and (4.40) to states where we excite a finite number of pairs of particles and antiparticles (in analogy to eq. (4.32)).

Complexity of general fermionic Gaussian states
We study the circuit complexity of an arbitrary fermionic Gaussian state |Ω̃ with respect to an arbitrary Gaussian reference state |Ω . In accord with our previous discussions, the notation here indicates that the state is characterized by Ω ab , the antisymmetric part of the covariance matrix (3.1). As discussed in section 3, when we apply Nielsen's method by geometrizing the problem of finding the circuit complexity for fermionic systems, we restrict our study to Gaussian states. On the level of Lie groups, this means we are restricting ourselves to an SO(2N ) subgroup of the full U(2 N ) group of unitary transformations which could act on the states of our system of N fermionic degrees of freedom.

Gaussian states from Kähler methods
Recently, it has become apparent that bosonic and fermionic Gaussian states can be characterized in a unified framework [50] based on a triangle of structures (Kähler methods), consisting of a positive definite metric G, a symplectic form Ω and a linear complex structure J. We will review the relevant ingredients of these methods and fix conventions. For the most part, this is a straightforward generalization of our initial warm-up exercise with two fermionic degrees of freedom, and much of this analysis was anticipated in the discussion in section 3.
A system with N fermionic degrees of freedom is defined on a Hilbert space H = (C 2 ) ⊗N . Linear observables can equivalently be described by N pairs of creation and annihilation operators (a i , a † i ) or their Hermitian counterparts, the Majorana modes (q i , p i ) in eq. (3.13). The latter provide a basis ξ a = (q 1 , · · · , q N , p 1 , · · · , p N ) for linear fermionic observables. These form a vector space Γ * , which we refer to as the dual phase space, 28 equipped with the positive definite metric G ab that fixes the anticommutation relations as {ξ a , ξ b } = G ab . Recall that the Gaussian states are completely characterized by the antisymmetric covariance matrix iΩ ab = ψ|ξ a ξ b − ξ b ξ a |ψ , and we will label these states accordingly, i.e., |ψ = |Ω , in the following. Hence, for our fermionic Gaussian states, the general expression in eq. (3.1) reduces to this antisymmetric covariance matrix. Again, the same form also applies for bosonic Gaussian states; however, the roles of G and Ω are interchanged: for bosons, G labels the state and Ω fixes the bosonic commutation relations, while, as indicated above, Ω labels the fermionic states and G determines the anticommutation relations for the fermionic degrees of freedom. We also introduce the inverse matrices g and ω, defined by the conditions G ac g cb = δ a b and Ω ac ω cb = δ a b .
Mathematically speaking, G represents a positive definite metric and Ω a symplectic form on the classical phase space isomorphic to ℝ^{2N}. Together they define a third object, the linear complex structure J = Ωg, completing a triangle of structures that we call Kähler structures due to their common use in the context of Kähler manifolds. The beauty of parameterizing Gaussian states with these structures lies in the fact that this provides a unifying framework for both bosonic and fermionic Gaussian states [50]. The linear complex structure J can be used to label both types of states and characterizes them uniquely (up to a complex phase) via the condition ½(δ^a_b + iJ^a_b) ξ^b |J⟩ = 0. The relative covariance matrix ∆ between a state |J̃⟩ and |J⟩ can be directly computed as ∆ = −J̃J. Again, this is the same formula for bosons and for fermions. However, as we will exclusively focus on fermions for the rest of this paper, we will continue to use the antisymmetric covariance matrix Ω to label the Gaussian state |Ω⟩.

Geometry of SO(2N )
We now explore the differential geometry of the group SO(2N), which corresponds to all fermionic squeezing operations connected to the identity. The Lie algebra so(2N) consists of generators K satisfying KG + GKᵀ = 0, which is equivalent to saying that K is antisymmetric with respect to G. As discussed in section 3.2, a group element M = e^K transforms a Gaussian state as |Ω⟩ → |MΩMᵀ⟩, in accord with eq. (3.18).
Recall that if we choose a target state |Ω_T⟩ and a reference state |Ω_R⟩, eq. (2.1) still leaves an ambiguity in the desired transformation, because there are transformations which leave the reference state unchanged; see the discussion around eq. (2.11). Hence we must identify the stabilizer subgroup that preserves |Ω_R⟩, i.e., the set of M ∈ SO(2N) with MΩ_R Mᵀ = Ω_R. Due to the fact that Ω_R is a symplectic form, the stabilizer subgroup of the state |Ω_R⟩ is given by the intersection of the symplectic and the special orthogonal group, which is well known to be U(N), i.e., Sta = U(N) = SO(2N) ∩ Sp(2N, ℝ). Similar to what we saw in section 3.2, this U(N) subgroup corresponds to Bogoliubov transformations which mix creation operators only among themselves and annihilation operators only among themselves, and which therefore do not change the state being annihilated. The corresponding Lie subalgebra u(N) ⊂ so(2N) is generated by algebra elements K which are symmetric with respect to Ω_R, i.e., (KΩ_R)^{ab} = (KΩ_R)^{(ab)}. Before we can compute geodesics on SO(2N), we need to equip it with a geometric structure, namely a right-invariant metric, following Nielsen's approach. At this point, we need to make a choice: for a general right-invariant metric, we would not be able to continue with analytical methods, because computing the corresponding geodesics would be very hard. However, for the group SO(2N), there is a canonical choice that is compatible with the group structure and built from the metric G that determines the anticommutation relations, namely the metric ⟨·,·⟩ introduced in eq. (3.36). For bosons, such a choice depends on the reference state, but for fermions it is the completely canonical choice that is already induced by the group structure. Not surprisingly, it is proportional to minus the Killing form, which provides a negative definite bilinear form for compact groups.
Such a canonical choice does not exist for bosons, because the symplectic group is non-compact and therefore its Killing form is not definite [54]; rather, it would give rise to a Lorentzian geometry. In order to find the minimal geodesic from the identity to some final group element M that prepares a target state |Ω_T⟩, we need to identify the equivalence classes of all group elements preparing the same state. The equivalence relation is M ∼ M̃ iff MΩ_R Mᵀ = M̃Ω_R M̃ᵀ, which can be reformulated as M̃ = Mu with u ∈ U(N). From each equivalence class, we can select the distinguished representative T = √(MΩ_R Mᵀ ω_R). To see that T is well defined, we first confirm that T² = MΩ_R Mᵀ ω_R satisfies T²Ω_R = Ω_R (T²)ᵀ, which implies that its square root satisfies TΩ_R = Ω_R Tᵀ; this will be the distinguishing feature that identifies T. Using this feature, we can verify that TΩ_R Tᵀ = T²Ω_R = MΩ_R Mᵀ, which is what we wanted: T prepares the same target state as M. Now using eq. (5.5), we write the target state as |Ω_T⟩ = |MΩ_R Mᵀ⟩, which implies T² = ∆, where we recall from eq. (3.29) that ∆^a{}_b = Ω_T^{ac}(ω_R)_{cb} is the relative covariance matrix between the states |Ω_T⟩ and |Ω_R⟩. ∆ will have eigenvalues e^{2iϑ} with modulus 1, and the square root T = √∆ then has eigenvalues e^{iϑ} with ϑ ∈ [−π/2, π/2]. At this point, we have reached a fairly good geometric understanding of the group SO(2N) as a fiber bundle over its quotient SO(2N)/U(N). In particular, we can use the polar decomposition to select a unique point from each fiber, namely the group element T which satisfies TΩ_R = Ω_R Tᵀ, as in eq. (5.11). In contrast to the symplectic group, there are many Lie algebra elements A ∈ so(2N) that satisfy e^A = T. However, if we use the principal branch of the logarithm, which maps the eigenvalue e^{2iϑ} to 2iϑ with ϑ ∈ (−π/2, π/2], the Lie algebra element A = log T = ½ log ∆ becomes unique.
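These manipulations are easy to check numerically. The following sketch is our own illustration, assuming G = 𝟙 and the standard symplectic form for Ω_R: it draws a random M ∈ SO(2N), builds T = √(MΩ_R Mᵀ ω_R), and verifies that T lies in SO(2N), satisfies TΩ_R = Ω_R Tᵀ, and prepares the same target state as M:

```python
import numpy as np
from scipy.linalg import expm, sqrtm, logm

rng = np.random.default_rng(0)
N = 3                                  # N fermionic degrees of freedom, 2N Majorana modes
Z = np.zeros((N, N))
Om_R = np.block([[Z, np.eye(N)], [-np.eye(N), Z]])   # standard symplectic form
om_R = -Om_R                           # inverse, since Om_R @ Om_R = -identity

K = rng.standard_normal((2 * N, 2 * N))
K = 0.3 * (K - K.T)                    # generic (small) element of so(2N)
M = expm(K)                            # group element in SO(2N)

Delta = M @ Om_R @ M.T @ om_R          # relative covariance matrix, Delta = T^2
T = np.real(sqrtm(Delta))              # distinguished representative on the fiber
A = 0.5 * np.real(logm(Delta))         # unique generator with e^A = T
```

The assertions below confirm the three properties claimed in the text; the factor 0.3 merely keeps the eigenvalue angles of ∆ away from the branch cut of the principal logarithm.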

Normal modes and two-mode squeezing
We now show that for any two fermionic Gaussian states, we can find a set of normal modes such that one state results from the other by applying independent two-mode squeezing operations to pairs of normal modes.
Recall that the group-invariant information about the relation between reference and target state is captured in the relative covariance matrix (3.29), ∆^a{}_b = Ω_T^{ac}(ω_R)_{cb}. In particular, for ∆ = 𝟙, reference and target state are the same. Note that ∆ satisfies ∆Ω_R = Ω_R ∆ᵀ, which is a similar, but different, condition than being symplectic. Being an element of SO(2N), its eigenvalues are complex numbers with unit modulus e^{2iϑ_I}. They can either appear in quadruples (e^{2iϑ_I}, e^{2iϑ_I}, e^{−2iϑ_I}, e^{−2iϑ_I}) or in pairs (1, 1) or (−1, −1). There are in general two classes of spectra, which correspond to:
• Reference and target state cannot be connected. If there is an odd number of pairs (−1, −1) in the spectrum of ∆, the reference and target states are located on disconnected components of the space of fermionic Gaussian states M_{f,N} = O(2N)/U(N) and they cannot be joined by a geodesic through SO(2N).
• Reference and target state can be connected. In all other instances, we can find a geodesic that connects the reference state to the target state.
We could assign an infinite complexity to the former class of states, because reference and target state cannot be connected by applying quadratic operators as gates. Instead, one could try to extend the group or analytically continue the formula for the geodesic distance that we will find. However, in the following, we will only consider the complexity for the latter class of states, which lie on the same connected component.
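The obstruction is easy to see in the smallest example. For a single fermionic mode (assuming the convention in which the empty mode has covariance matrix Ω = ((0, 1), (−1, 0))), the empty and occupied states have covariance matrices Ω and −Ω, so ∆ = −𝟙₂ has a single pair of eigenvalues (−1, −1), and indeed no element of SO(2) maps one state to the other:

```python
import numpy as np

Om = np.array([[0.0, 1.0], [-1.0, 0.0]])  # covariance of the empty mode (assumed convention)
Om_occ = -Om                               # covariance of the occupied mode a^dagger|0>

Delta = Om_occ @ (-Om)                     # relative covariance; omega = -Om since Om @ Om = -1
evals = np.sort(np.linalg.eigvals(Delta).real)  # single pair (-1, -1): odd, hence disconnected

# every element of SO(2) is a rotation and commutes with Om, so it preserves the state:
preserved = all(
    np.allclose(R @ Om @ R.T, Om)
    for R in (np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
              for t in np.linspace(0.0, 2 * np.pi, 25))
)
```

Since every SO(2) element leaves Ω fixed, the target −Ω can never be reached by a circuit of quadratic gates, matching the topological argument above.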
After computing the eigenvalues of ∆, we can combine the complex eigenvectors associated to a quadruple of eigenvalues (e^{2iϑ_I}, e^{2iϑ_I}, e^{−2iϑ_I}, e^{−2iϑ_I}) into a quadruple of real eigenvectors which form an orthonormal basis (in this I'th sector, with respect to G): (ξ^a)_I = (q, Q, p, P)_I. In particular, the latter can be chosen such that the associated block of ∆ takes the form of a rotation through the angle 2ϑ_I. Eigenvalue pairs (1, 1) yield identity blocks, associated to degrees of freedom that were not squeezed. For eigenvalue pairs (−1, −1), we need to count how many such pairs exist. If we have an even number of such pairs, we can group them to form quadruples (e^{2iϑ_I}, e^{2iϑ_I}, e^{−2iϑ_I}, e^{−2iϑ_I}) with ϑ_I = π/2. However, if we have an odd number of pairs, i.e., a number of −1 eigenvalues that is even but not divisible by four, we are left with one degree of freedom with basis (q, p)_I, such that the associated block is ∆_I = −𝟙₂. If such a block stands alone and cannot be combined with another one, it implies that reference and target states belong to topologically disconnected components of M_{f,N}, because there is no one-mode squeezing operation connected to the identity that connects reference to target state: in a suitable basis, this transformation could be implemented by a group element M_I with Ω_T = M_I Ω_R M_Iᵀ, but this group element is not connected to the identity in O(2) and log M_I will not give a proper Lie algebra element in so(2).²⁹ We prove in appendix B that the shortest geodesic between reference and target states that can be connected is γ(σ) = e^{σA} with A = ½ log ∆. Right-invariance of the metric implies that the length of this path is just given by the norm of A; using the F₂ cost function (2.6), this implies that we can compute the complexity as C = ‖A‖ = √⟨A, A⟩. In terms of our eigenvalue quadruples (e^{2iϑ_I}, e^{2iϑ_I}, e^{−2iϑ_I}, e^{−2iϑ_I}), each quadruple contributes an angle ϑ_I ∈ [0, π/2]. If we choose generators O_I in eq. (2.2) that coincide with the two-mode squeezing operations that generate the ϑ_I, we can normalize them to satisfy Y_I = 2ϑ_I. Such a choice is always adapted to the pair consisting of reference and target state, because ∆ = Ω_T ω_R. The complexity with the κ = 2 measure is then given by C_{κ=2} = Σ_I (2ϑ_I)².
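As an illustration, the following sketch (our own construction, with G = 𝟙 and the standard symplectic form for Ω_R as assumed conventions) applies a single hand-picked two-mode squeezing generator with parameter ϑ = 0.3 and confirms that ∆ acquires one eigenvalue quadruple (e^{2iϑ}, e^{2iϑ}, e^{−2iϑ}, e^{−2iϑ}), so that Y = 2ϑ:

```python
import numpy as np
from scipy.linalg import expm, logm

# Basis (q1, q2, p1, p2) for N = 2; standard symplectic form:
Om_R = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]], dtype=float)

# A hand-picked antisymmetric generator with K @ Om_R antisymmetric, i.e. a pure
# two-mode squeezing direction orthogonal to the u(2) stabilizer algebra.
K = np.array([[0, 0, 0, 1], [0, 0, -1, 0], [0, 1, 0, 0], [-1, 0, 0, 0]], dtype=float)

theta = 0.3
M = expm(theta * K)                    # squeezing by theta
Om_T = M @ Om_R @ M.T                  # covariance of the target state
Delta = Om_T @ (-Om_R)                 # relative covariance matrix (omega_R = -Om_R)

angles = np.sort(np.angle(np.linalg.eigvals(Delta)))  # (-2t, -2t, 2t, 2t)
A = 0.5 * np.real(logm(Delta))         # geodesic generator
Y = angles.max()                       # per-mode contribution Y = 2*theta
```

One can also check that e^A prepares the same target state as M, i.e., that the geodesic stays inside the two-mode squeezing submanifold.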

Applications
At this point, we have developed general methods to compute the circuit complexity of arbitrary pairs of fermionic Gaussian states as reference and target states. Hence in the present section, we will return to considering the free Dirac field in four dimensions.
In section 4, we already considered the circuit complexity of the ground state and of certain special excited states. Here, we extend these results for the free Dirac field by applying the general method developed in section 5. For example, our calculations in section 4 implicitly involved choosing a particularly simple reference state. In section 6.1, we examine how other choices for the reference state modify the complexity of the ground state. Further in section 6.2, we discuss the complexity of more general excited states. However, we begin below by describing how the general construction of section 5 can be adapted to the continuum quantum field theory of a free Dirac fermion.
In the previous section, we computed the complexity for arbitrary fermionic Gaussian states of systems with a finite number N of degrees of freedom, and the key lesson was that the computations simplify for two pure Gaussian states |Ω_R⟩ and |Ω_T⟩ for which Ω_R and Ω_T are block-diagonal with respect to the same basis. This structure immediately implies that the states themselves are tensor products, |Ω_{R,T}⟩ = ⊗_I |(Ω_{R,T})_I⟩. In this case, the relative covariance matrix is block-diagonal as well, with blocks ∆_I = (Ω_T)_I (ω_R)_I. For Gaussian states on the same connected component of the state space, each of the ∆_I has a quadruple of eigenvalues (e^{2iϑ_I}, e^{2iϑ_I}, e^{−2iϑ_I}, e^{−2iϑ_I}) and the overall complexity is then given by, e.g., C_{κ=2} = Σ_I (2ϑ_I)². If we are dealing with a continuum quantum field theory, the label I can be continuous, referring for instance to the momentum p or the position x. In this case, the states |(Ω_R)_I⟩ and |(Ω_T)_I⟩ will describe the degrees of freedom for each mode at a given momentum p or position x, which we expect to be finite in number, e.g., the spin s and particle number as in section 4. Then, as in eq. (5.23), we can recover the complexity per mode and apply our previous results without any alteration. At this point, we have full control over the individual contributions to the complexity and can study how the overall complexity behaves with various cost functions. We will see that the complexity is in most cases both IR and UV divergent, but by understanding the individual pieces, we can meaningfully regularize these divergences. For the IR divergence, we put the whole system into a box with volume V, and for the UV divergence, we introduce a momentum cutoff Λ. For the rest of this section, we will focus on translationally invariant states.
These states necessarily have a tensor product structure over momentum modes.³¹ For our reference state |Ω_R⟩, we must choose a state that is not just translationally invariant, but that also has zero spatial correlation, which implies that it should be a tensor product state in position space. This requirement enforces that the covariance matrix characterizing each momentum mode |Ω_R(p)⟩_p must be the same, i.e., Ω_R(p) = Ω_0 for all p. Therefore the reference state |Ω_R⟩ is completely characterized by the finite-dimensional covariance matrix Ω_0, which describes the correlations within the degrees of freedom associated with each mode. The fact that these correlations look the same when studied in either momentum or position space is a consequence of |Ω_R⟩ being translationally invariant without spatial correlations.
³¹ Translational invariance still allows for the possibility of correlating the mode p with −p, in which case it would be more precise to speak of a tensor product over mode pairs (p, −p), but we will focus on states that are actually tensor product states over all modes p.
Examining translationally invariant states of the Dirac field in more detail, we begin by noting that for each momentum mode p, the Dirac field has the four components ψ(p) = (ψ_1(p), ψ_2(p), ψ_3(p), ψ_4(p)) with ψ_i(p) = ∫d³x e^{−ip·x} ψ_i(x). The anticommutation relations are {ψ_i(p), ψ_j†(q)} = (2π)³ δ_{ij} δ⁽³⁾(p − q). Thus, we can associate the Hilbert space (ℂ²)^{⊗4} to each momentum mode p, with the expansion in creation and annihilation operators given in eq. (4.3). Now following eq. (3.13), for each momentum p we can define four pairs of Majorana modes (Q_i(p), P_i(p)) (6.10). In the notation of section 3, we assemble these modes as ξ^a(p) = (Q_i(p), P_i(p)), which then satisfy {ξ^a(p), ξ^b(q)} = (2π)³ δ^{ab} δ⁽³⁾(p − q). (6.11) The ground state |0⟩ of the Dirac field has a covariance matrix of the form (5.1). With respect to our basis ξ^a above, the symmetric component is simply G^{ab}(p) ≡ δ^{ab}. What remains is to compute the antisymmetric component Ω^{ab}(p), a real antisymmetric form. For a pure state, Ω^{ab}(p) is a symplectic form compatible with G^{ab}(p), which is equivalent to saying that, with respect to the above basis, the matrix Ω^{ab}(p) has eigenvalues ±i.
Recall that Ω^{ab} is the component of the covariance matrix that characterizes the fermionic Gaussian states. As in eq. (6.6), the Dirac vacuum |0⟩ can be written as a tensor product over sectors for each momentum p, i.e., |0⟩ = ⊗_p |Ω(m, p)⟩_p, where we made the dependence on the mass m explicit. We can then evaluate the covariance matrix Ω^{ab}(m, p), which encodes the relevant properties of the complexity in a given mode p.
Computing Ω^{ab}(m, p) explicitly takes some work: we begin by explicitly evaluating the spinors u_s(p) and v_s(−p), e.g., see eq. (4.2).³² We then substitute these expressions into eq. (6.9), write out the left-hand side of eq. (6.12) in terms of the creation and annihilation operators of |0⟩, and use their algebra to simplify the right-hand side and extract Ω^{ab}(m, p). What we find is rather simple and can be expressed using p = (p_x, p_y, p_z) and, as usual, E_p = √(p² + m²). Note that here Ω^{ab}(m, p) is an eight-by-eight matrix because, for each momentum mode, it describes correlations over both spin and particle number.³³ With eq. (6.14) in hand, it is easy to take various limits. For example, we can consider the rest frame (p = 0) and the massless limit (m = 0, or equivalently |p| → ∞). In the next section, we will use these expressions to define the covariance matrix Ω_0 appearing in the reference state (6.7). For example, in this newly developed language, the reference state |0̄⟩ appearing in section 4 can be described as |0̄⟩ = ⊗_p |Ω(M, 0)⟩_p; that is, in this state, we put every single momentum mode p into the same state, corresponding to the zero-momentum mode of the Dirac ground state with covariance matrix Ω(M, 0). Note that we have introduced a new mass scale M here, but in fact it turns out that Ω(M, 0) is a special case which is independent of M; see further discussion in section 6.1.
³² Recall from footnote 24 that we evaluate u_s(p) and v_s(−p) by boosting the spinors in eq. (4.1) along the z-axis with p̂_z = |p| and then rotating the spinor to align the momentum (and spin) with the direction of p.
³³ Hence in the following discussion, the corresponding relative covariance matrix (see eq. (6.36) or (6.19)) will have eight eigenvalues e^{±2iϑ_I}. This contrasts with the previous discussion in section 4, where implicitly the spin was treated as a separate quantum number and we had a single quadruple of eigenvalues, as in section 5.

Alternative reference states
Above, we discussed that requiring our reference state |Ω_R⟩ to be translationally invariant and to have no spatial correlations enforces that |Ω_R⟩ take the simple form given in eq. (6.7). Therefore, the reference state is completely determined by a single (finite-dimensional) covariance matrix Ω_0, which fixes the correlations for each momentum mode. We can then study the contribution to the complexity from each momentum mode p by considering the geodesic length from some reference state |Ω_0⟩_p to |Ω(m, p)⟩_p ∈ (ℂ²)^{⊗4}. The minimal geodesic for the full theory moves in this normal-mode submanifold, and the full complexity of the vacuum state combines all of these contributions for each momentum sector. As alluded to above, we specify the reference covariance matrix Ω_0 with eq. (6.14) and a specific choice of a reference momentum q and a reference mass M, i.e., we set Ω_0^{ab} = Ω^{ab}(M, q).³⁴ We emphasize that we are using the same fixed momentum q for all of the momentum modes. In the notation of eq. (6.13), the reference state can be written as |Ω_R⟩ = ⊗_p |Ω(M, q)⟩_p. In comparison to section 4, we are replacing eq. (4.6) with ψ(x) = (1/√2) Σ_s (ā^s_x ũ_s(q) + b̄^{s†}_x ṽ_s(q)), where the basis spinors are defined as above, except that the tilde indicates the replacement m → M. Now in principle, for each momentum mode, we would compute the geodesic between |Ω(M, q)⟩ and |Ω(m, p)⟩. However, given our analysis in the previous section, we know that we must replace eq. (4.14) with Y(m, p, s̄) = 2ϑ, where ∆ = Ω(m, p) ω(M, q) using eq. (6.5). The normalization factor is different here because in this construction ∆ has eight (rather than four) eigenvalues e^{±2iϑ}. That is, just as in eq. (4.14), the complexity per mode Y(m, p, s̄) is independent of the spin s̄, and the trace in eq. (6.18) effectively sums over the spins as well.

Rotational invariant reference state
We begin here with the simple choice q = 0, which produces a rotationally invariant reference state. In this particular case, the mass scale M of the reference state does not enter in any way, i.e., the only nonvanishing entries in Ω^{ab}(M, q = 0) reduce to ±Ẽ_{q=0}/M = ±1.³⁵ This means that Y(m, p, s̄) can only depend on the mass and momentum of the mode that we are considering. Constructing ∆ = Ω(m, p) ω(M, 0), we find that its eigenvalues appear with a multiplicity of four and are explicitly given by e^{±2iϑ} = (m ± i|p|)/E_p. Note that this corresponds to two quadruples of (e^{2iϑ}, e^{2iϑ}, e^{−2iϑ}, e^{−2iϑ}) associated to the two spin degrees of freedom s̄ = 1, 2. We can recall from eq. (5.22) that the contribution to the complexity of each spin is given by Y(m, p, s̄) = 2ϑ = tan⁻¹(|p|/m), (6.21) which completely agrees with the result found in eq. (4.14). Of course, this is not surprising, because q = 0 corresponds to choosing the same reference state as the one we considered before, i.e., eq. (6.17) completely agrees with eq. (4.6), since the mass does not play a role at q = 0 and the basis spinors reduce to those given in eq. (4.1). Figure 4 shows a three-dimensional plot of the function Y given by eq. (6.21); see also figure 2. The complexities for the κ = 2 and κ = 1 measures are given in eqs. (4.17) and (4.20). Since Y approaches the constant π/2 in the limit of large momentum, both of these complexities are UV divergent, as shown in eqs. (4.19) and (4.21). More specifically, the leading divergences are as given in eqs. (4.19) and (4.22). In both cases, the next correction is a divergence of order log(Λ/m).
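To make the UV divergence explicit, one can integrate the per-mode contribution numerically. The sketch below assumes the κ = 2 complexity density takes the form V ∫ d³p/(2π)³ Y(p)², with Y = tan⁻¹(|p|/m) as in eq. (6.21); the overall normalization is our assumption, so only the scaling with the cutoff should be read off. The leading large-Λ behavior is then (π/2)² Λ³/(6π²) = Λ³/24 per unit volume:

```python
import numpy as np
from scipy.integrate import quad

m = 1.0  # mode mass; we work per unit volume (V = 1)

def C2(Lam):
    # int d^3p/(2 pi)^3 Y(p)^2 over |p| < Lam, with Y = arctan(|p|/m)
    integrand = lambda p: 4 * np.pi * p**2 / (2 * np.pi)**3 * np.arctan(p / m)**2
    val, _ = quad(integrand, 0.0, Lam, limit=200)
    return val

leading = lambda Lam: (np.pi / 2)**2 * Lam**3 / (6 * np.pi**2)   # = Lam^3 / 24

ratios = [C2(L) / leading(L) for L in (10.0, 100.0, 1000.0)]
# the ratio approaches 1 from below as the cutoff grows; the deficit ~ 6m/(pi*Lam)
# reflects the subleading O(m Lam^2) correction discussed in the text.
```

The monotone approach of the ratio to unity confirms the Λ³ leading divergence, with the slow convergence signalling the subleading cutoff-dependent corrections.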

Massless reference state
We can also choose a reference state that corresponds to spinors associated to a massless state with momentum q in a given direction. Without loss of generality, we choose the positive z-direction, namely q = (0, 0, q). In this case, the complexity of the momentum mode (p_x, p_y, p_z) should also depend on the angle that p makes with the z-axis. This reference state is therefore not rotationally invariant, but only invariant under the little group of a massless particle. Constructing the explicit form of ∆, we again find two quadruples of equal eigenvalues, and using eq. (5.22), we can extract the contribution to the complexity, of which we give a three-dimensional plot in figure 5. We expect that the complexity will again be UV divergent. In order to compute the leading-order contribution, we need to take the limit |p| → ∞. This time, there is no universal limit given by a single constant; rather, the limit depends on the angle θ between p and q,³⁶ and can be compared with eq. (4.15) for the rest-frame reference state. The leading-order contribution to the complexity can be computed by simply substituting this limit for Y in the desired integral, using eq. (4.17) for the κ = 2 measure and eq. (4.20) for the κ = 1 measure. Note that the leading divergence in C_{κ=1} is identical to that found for the reference state with q = 0, while the leading divergence in C_{κ=2} is about 20% larger, e.g., compare with eq. (6.22). Moreover, for the rest-frame reference state, the leading corrections were O(V mΛ²), but here we see that the corrections vanish at this order for the massless reference state. This comparison can be understood from the limiting function for a general reference state |M, q⟩, which we discuss next.
³⁶ That is, cos θ = p_z/|p| in the present case where q is aligned with the (positive) z-axis.

Massive reference state
We now consider our most general reference state (6.16), corresponding to a Gaussian state given by a tensor product over identical states |Ω(M, q)⟩, with a fixed reference momentum q and mass M.³⁷ The calculation of the complexity can be simplified by again choosing the momentum q to lie along the z-direction, such that we have q = (0, 0, q).
The explicit form of the relative covariance matrix ∆ is given in eq. (6.29). After some extended calculation, we find two quadruples of identical eigenvalues. Let us note that, as expected from the covariance matrix (6.29), these eigenvalues are only functions of the dimensionless ratio q/M (rather than of q and M independently). Hence, to simplify the following expressions, we introduce q̃ = q/M. From here, we can use the same steps as above to find the general function for the complexity, eq. (6.31). We can verify that in the limit q/M → 0 (rest frame) or q/M → ∞ (massless reference state), we find the expected results in eqs. (6.21) and (6.25), respectively. We illustrate Y(m, p, s̄) with a three-dimensional plot in figure 6, for q̃ = q/M = 1. A convenient choice is to study the complexity as a function of the dimensionless vectors p̂ = p/m and q̂ = q/M. In particular, given eq. (6.31), we can write the complexity per mode as Y(p̂, q̂); as in the two previous examples, Y is independent of the spin s̄. Note that we allow these vectors to become infinitely large for m → 0 or M → 0, corresponding to points on the two-sphere at infinity. Clearly, Y takes its minimum value (equal to zero) at the point where p̂ = q̂. We can always choose a plane such that both points q̂ and p̂ lie on it. If we now use the (inverse) stereographic projection to map the plane onto a half-sphere (of unit radius) touching the origin, then Y becomes just the geodesic distance between the projected points on the sphere. We illustrate this geometry in figure 7. In order to compute the leading-order UV contribution to the complexity, we need to take the limit |p| → ∞. Again, this limit will depend on the angle θ between p and q, i.e., cos θ = p_z/|p| in the case where q is aligned with the (positive) z-axis. Again, the other relevant quantity will be q̃ = q/M. The asymptotics for |p| → ∞ can then be used to identify the leading UV divergences in the complexity.
Figure 7. By using the inverse stereographic projection of the plane onto a half-sphere of unit radius, the complexity can be identified with the geodesic distance on the sphere between the two projected points. We also indicate the angle θ between q̂ and p̂.
Considering the κ = 2 measure, we substitute the above asymptotics into eq. (4.17); the resulting function neatly interpolates between the two previous results, namely between eq. (6.22) for the rest-frame reference state with q̃ = q/M → 0 and eq. (6.27) for the massless reference state with q/M → ∞. Alternatively, using eq. (4.20) for the κ = 1 measure, it is straightforward to show that in the limit q/M → 0, the result reduces to the corresponding expression in eq. (6.22). Similarly, in the limit q/M → ∞, one finds that the subleading correction vanishes, in agreement with the result in eq. (6.28).
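The geometric picture can be checked in the simplest case. Reading the half-sphere map as the central projection, under which a plane point at radius r is sent to polar angle tan⁻¹ r on a unit sphere touching the plane at the origin (this reading is our assumption), the geodesic distance from the projection of q̂ = 0 to that of p̂ reproduces Y = tan⁻¹(|p|/m) of eq. (6.21):

```python
import numpy as np

def project(v):
    # central projection of a plane point v = (v1, v2) onto the unit half-sphere
    # touching the plane at the origin
    w = np.array([v[0], v[1], 1.0])
    return w / np.linalg.norm(w)

def sphere_distance(v1, v2):
    # geodesic (great-circle) distance between the projected points
    return np.arccos(np.clip(project(v1) @ project(v2), -1.0, 1.0))

q_hat = np.zeros(2)                      # rest-frame reference, q/M = 0
radii = [0.1, 1.0, 7.5]                  # |p|/m for a few sample modes
dists = [sphere_distance(np.array([r, 0.0]), q_hat) for r in radii]
# each distance equals arctan(|p|/m), the per-mode complexity of eq. (6.21)
```

This checks only the rotationally invariant case q̂ = 0; the general statement for arbitrary q̂ is the geometric claim made in the text.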

More excited states
The formalism developed in section 5 allows us to compute the complexity between any two fermionic Gaussian states that belong to the same connected component of the state space, i.e., they must both have even or odd fermion number (see footnote 14). A key feature which distinguishes fermions from bosons is that states with individual particle excitations, e.g., ∏_i a^{r_i†}_{p_i}|0⟩, are still Gaussian states. That is, as a result of the fermionic anticommutation relations, these states are annihilated by the operators a^{r_i†}_{p_i}. Hence we can apply our techniques for computing the complexity of fermionic Gaussian states to such excited states (provided that the number of excitations is even, so that they lie on the same connected component as the reference state). We already considered several simple examples of such excited states in section 4.2, but with the new methods at our disposal, we can approach this question systematically here. Throughout the following analysis, we will use |0̄⟩ as the reference state, with Ω_0 = Ω(M, q = 0).

Excitations with a single momentum
We will begin by analyzing excitations in a single mode, for which we will consider two and four excitations. In section 4.2, we already considered the complexity of simple examples of these states, as given in eqs. (4.23), (4.34) and (4.40). The elaboration here has to do with the spin. So far, we only considered spins of the individual (anti)particles aligned along the axis defined by the corresponding momentum; see footnote 24. Previously, we denoted spins with this orientation with the spin labels r̄, s̄. However, we now introduce the labels r, s ∈ {1, 2} to denote spins aligned along an arbitrary axis that is not related to the momentum axis, but rather is fixed in the rest frame of the fermions. Without loss of generality, we will choose this axis to be the z-axis in the following.³⁸ As in section 6.1, the strategy for evaluating the complexity is: given our excited state |ψ⟩, we first compute the corresponding covariance matrix. Since the excitations involve a single momentum q, we need only focus on that sector, and the corresponding Ω̃(m, q) is computed with respect to the basis ξ^a(q) introduced in eq. (6.10). We then evaluate the relative covariance matrix ∆(q) = Ω̃(m, q) ω(M, 0), where ω(M, 0) is the inverse of Ω_0 = Ω(M, 0), which means we continue to use the rotationally invariant reference state as in section 4. We then evaluate the spectrum of ∆(q), which reveals the change in the complexity of the excited state. The tedious part of this computation lies in evaluating Ω̃(m, q), which is best accomplished by expanding ξ^a(q) in terms of creation and annihilation operators and keeping in mind that, for the excited state, some of the original creation operators now annihilate |ψ⟩. The first class of excited states which we consider here takes the form a^{r†}_q b^{r†}_{−q}|0⟩ with arbitrary q. These states are similar to those from eqs. (4.23) and (4.40), but as described above, in the present states the spins are aligned with the z-axis in the rest frame. In fact, it is straightforward to see that the (A) and (E) families coincide for the special case where the momentum q is oriented along the z-axis. The eigenvalues of the relative covariance matrix ∆(q) appear in two quadruples (e^{2iθ̃_i}, e^{2iθ̃_i}, e^{−2iθ̃_i}, e^{−2iθ̃_i}), and we plot the corresponding angles 2θ̃_i in figure 8.³⁹ In particular, we notice that with q_x = q_y = 0, the spin axis and the momentum axis are aligned and our results agree with eq. (4.27) for excited states in the (A) class, i.e., in eq. (4.23). Following the analysis in section 4.2, we then evaluate the difference between the complexity of the excited state and that of the vacuum with the κ = 1, 2 measures, expressed in terms of Y(m, q, s̄) given in eq. (4.14), where we have used 2θ̃_1 + 2θ̃_2 = π in the second expression. It is interesting to observe that our result here for ∆C_{κ=1} is precisely the same as that found in eq. (4.30) for the excited states in eq. (4.23). Of course, we must reiterate that the κ = 1 measure is basis dependent and implicitly, for ∆C_{κ=1}, we are aligning the basis in the q sector with the generators which produce the above transformations.
Figure 8. This plot shows the angles 2θ̃_i of the two eigenvalue quadruples (e^{2iθ̃_i}, e^{2iθ̃_i}, e^{−2iθ̃_i}, e^{−2iθ̃_i}) of ∆(q).
Figure 9. This plot shows the angles 2θ̃_i of the two eigenvalue quadruples (e^{2iθ̃_i}, e^{2iθ̃_i}, e^{−2iθ̃_i}, e^{−2iθ̃_i}) of ∆(q) in the state a^{r†}_q b^{r′†}_{−q}|0⟩ for q = (q_x, 0, q_z). Note that the result is identical to the plots shown in figure 8 with q_x and q_z swapped.
Next, as a generalization of the (D) states in eq. (4.40), we consider states of the form a^{r†}_q b^{r′†}_{−q}|0⟩, where, as before, we use the convention that r′ refers to the opposite spin label to r, i.e., r′ ≡ r + 1 (mod 2). Again, we are considering the spins to be oriented along the z-axis, independent of the orientation of the momentum q. We can assume q_y = 0 without loss of generality, i.e., we orient the spatial axes so that the momentum lies in the xz-plane; this choice simplifies the computation of the relative covariance matrix. The eigenvalues of ∆(q) again appear in two quadruples (e^{2iθ̃_i}, e^{2iθ̃_i}, e^{−2iθ̃_i}, e^{−2iθ̃_i}), where we can reinstate q_y via the substitution q_x → √(q_x² + q_y²), and we plot the corresponding angles 2θ̃_i in figure 9. Let us observe that with q_x = 0 = q_y, the spin axis is aligned with the momentum axis and the above angles coincide with those in eq. (4.36) for the excited states in class (D) (since in this case, the two classes coincide). Furthermore, we can compare these results with those for the (E) excited states: the eigenvalues in eq. (6.37) match those in eq. (6.42) if we swap q_z ↔ √(q_x² + q_y²). Again, we can evaluate the difference between the complexities of the excited state and the Dirac vacuum with the κ = 1, 2 measures to find the same result as in eq. (6.39), where the θ̃_i appearing in ∆C_{κ=2} are given by eq. (6.43). We note once more that ∆C_{κ=1} is precisely the same as found in eq. (4.30) for the (A) states in eq. (4.23); again, this result relies on the fact that 2θ̃_1 + 2θ̃_2 = π for the new excited states. We reiterate that the κ = 1 measure is basis dependent and implicitly, for ∆C_{κ=1}, we are aligning the basis in the q sector with the generators which produce the above transformations.
Another interesting class of excited states is given by where we excite every degree of freedom in q sector. Physically, this means we have two particles with momentum q but opposite spins, and two antiparticles with with momentum −q, also with both spins. These states are similar to a special case with two pairs excited for the same momentum q in eq. (4.32), i.e., ar † q ar † q br † −q br † −q |0 . Again, as our notation indicates, the difference is in the orientation of the spins, but we return to this point below. In evaluating the corresponding covariance matrixΩ(m, q), for this state (6.44) we are swapping all creation operators with annihilation operators (for this momentum mode q) and this swap will just reverse the overall sign of the covariance matrix from that in eq. (6.14), i.e.,Ω(m, q) = −Ω(m, q). The reason for the sign change can be understood best by recalling that the eigenspaces of the matrix J = Ωg correspond to the spaces spanned by creation operators (corresponding to eigenvalues +i) and annihilation operators (eigenvalues −i). Hence if we swap the role of the operators, we need to go from J →J = −J, which implies to Ω →Ω = −Ω. We find that the eigenvalues of ∆ =Ω(m, q) ω(M, 0) have the opposite sign, i.e., there are two four-fold degenerate eigenvalues, 45) Table 1. In this table, we summarize the results for the complexity of various excited states in a single mode q. The first four cases were considered in section 4.2. The relative covariance matrix ∆(q) will have two eigenvalue quadruples (e 2iθ i , e 2iθ i , e −2iθ i , e −2iθ i ) which encode the angles needed to evaluate the complexity or the difference of the complexity from that of the ground state. Recall barred spin labelsr,s denote spins aligned along the momentum axis, while unbarred labels r, s denote spins aligned along a fixed axis in the rest frame. 
Given such a fixed axis, we specify the momentum q by the magnitude of the component q_∥ along the axis and of the orthogonal component q_⊥. Recall that cases (A) and (D) coincide with cases (E) and (F), respectively, when q_⊥ = 0. Further, since the states in (B), (C) and (G) form spin singlets, the spins can actually be oriented along any axis.
These eigenvalues, of course, are the same as in eq. (6.20) up to an overall sign change. Accordingly, we have Ỹ(m, q, s) = 2θ̃ = π − tan⁻¹(|q|/m), (6.46) where as usual, we chose the angle to lie in the range 0 ≤ 2θ̃ ≤ π. Clearly, in eq. (6.46), we have Ỹ(m, q, s) = π − Y(m, q, s̄), where Y(m, q, s̄) is the complexity of the corresponding modes in the Dirac ground state given in eq. (4.14). We already plotted this function in figure 3, since the same expression appeared in evaluating the complexity of the state a^{r̄†}_q b^{r̄†}_{−q} |0⟩; see eq. (4.23). Hence, we will find the same complexity here for eq. (6.44) as for the excited state a^{r̄†}_q a^{s̄†}_q b^{r̄†}_{−q} b^{s̄†}_{−q} |0⟩, which is the special case of eq. (4.32) noted above. As mentioned above, the difference between the two states is the orientation of the spin axes of the individual particles (and antiparticles); however, we find that the complexity does not depend on these details. The reason is that in both cases, the full state is a spin singlet (i.e., the net spin is zero) and so it is invariant under rotations of the spin axis. In fact, the two states are then identical, and it is a confirmation of our methods that we find the same complexity in either case, irrespective of the details of the construction of the state. The above discussion also provides us with an alternative perspective on how to arrive at this same result.
Further, the same reasoning can be applied to the (B) and (C) families of states in eq. (4.34). In either case, the pair of excited particles (or antiparticles) forms a spin singlet. Hence the states would actually be identical with the spins oriented along any axis, i.e., they need not be along the momentum axis as in the discussion in section 4.2. Of course then, with an alternate choice of spin axis, the complexity would remain the same as in eqs. (4.38) and (4.39).
We summarize our results in table 1 for the complexity of states with excitations in a single mode q. The table includes all the states discussed in section 4.2 and in the present section. This covers all states with an even number of excitations in a single mode q where the spin axis can differ from the momentum orientation. This could be generalized further by allowing differently oriented spin axes for the different particle and antiparticle excitations. Our methods readily apply to this scenario, but it will be hard to find analytical expressions, as one needs closed expressions for the eigenvalues of ∆. Instead, it is easy to evaluate the eigenvalues numerically for any specific choice of spin orientations.
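As a concrete illustration of this numerical strategy, the following sketch (a toy example of our own; the function names and overall normalization of the measures are not taken from the main text) extracts the angles θ_i from the eigenvalue quadruples of a relative covariance matrix ∆ and assembles κ-type measures from them:

```python
import numpy as np

def rot(two_theta):
    """2x2 rotation block with eigenvalues e^{+2i theta}, e^{-2i theta}."""
    c, s = np.cos(two_theta), np.sin(two_theta)
    return np.array([[c, -s], [s, c]])

def angles_from_delta(delta):
    """Extract the angles theta_i (0 <= 2 theta_i <= pi) from the eigenvalues
    of the relative covariance matrix, which come in quadruples
    (e^{2i theta}, e^{2i theta}, e^{-2i theta}, e^{-2i theta})."""
    two_thetas = np.angle(np.linalg.eigvals(delta))
    # keep one representative per conjugate pair; trivial unit eigenvalues
    # (angle zero) contribute nothing and are dropped
    return np.sort(two_thetas[two_thetas > 1e-12]) / 2.0

# toy Delta with a single quadruple at 2 theta = 0.6
delta = np.kron(np.eye(2), rot(0.6))
thetas = angles_from_delta(delta)           # -> [0.3, 0.3]

# kappa-type measures built from the angles (overall normalization is
# convention dependent; here we simply sum |2 theta_i|^kappa)
dC_kappa1 = np.sum(np.abs(2 * thetas))      # 1.2
dC_kappa2 = np.sum((2 * thetas) ** 2)       # 0.72
```

For states exciting several momentum sectors, one would simply concatenate the angle lists obtained sector by sector before forming the sums.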

Excitations in many modes
At this stage, we are essentially prepared to consider general excited states of the form (6.47), where the only constraint is that the total number of excitations must be even, to ensure that these states lie on the connected component of the space of fermionic Gaussian states, i.e., i_max + j_max = 2n. (6.48) Implicitly, we also assume for simplicity that in all of the different momentum sectors, the spins are oriented along the momentum axis of the respective modes. As in the previous examples of excited states, the above states will still be a tensor product over all modes p, where only the sectors with p = q_i differ from the Dirac ground state. 40 This means it will be sufficient to compute the covariance matrix Ω̃(m, q_i) for each excited sector q_i to find the change in complexity for this class of excited states. In particular, we would first examine the state to determine the number of excitations in each of these sectors. In many sectors, there will be an even number of excitations, and for these sectors, we can use the results in table 1 to evaluate their contribution to the complexity. Hence only the sectors with an odd number of excitations need further consideration.
An important observation is that in each sector with an odd number of excitations, the state will lie in the component disconnected from the reference state |Ω(M, 0)⟩_{q_i} for this momentum q_i. This implies that any geodesic transforming the full reference state |0⟩ to the excited target state (6.47) will actually not preserve the tensor product structure over modes. In fact, this is why the construction only works for an even number of excitations: the generator of the geodesic will always mix two excitations at intermediate points along the path, and only when the endpoint is reached is the tensor product over momentum modes restored. Clearly, the geodesic accomplishing this transformation is not unique, and arbitrary pairings among the even number of excited modes are allowed. However, all of these geodesics have the same length, so the choice does not matter for the purpose of computing complexity. Further, our approach of evaluating the length of the geodesic(s) using the relative covariance matrix can still be applied; clearly it does not depend on these details, since it only refers to the endpoints of the geodesic, where the tensor product structure holds.
Let us begin with the sectors with a single excitation. We can compute the relative covariance matrix ∆(q_i) = Ω̃(m, q_i) ω(M, 0) for each of these excited modes q_i and find its eigenvalues. The calculations are a little tedious, but the result is rather simple: spec(∆(q_i)) = (1, 1, −1, −1, e^{2iϑ}, e^{2iϑ}, e^{−2iϑ}, e^{−2iϑ}) with ϑ = tan⁻¹(|q_i|/m). Hence rather than finding two identical quadruples, we find a single quadruple with the familiar form (e^{2iϑ}, e^{2iϑ}, e^{−2iϑ}, e^{−2iϑ}) and two pairs (1, 1) and (−1, −1). Note that this result is independent of the type of excitation (particle or antiparticle) and of the orientation of the spin (r_i = 1, 2). Based on our discussion in section 5, the eigenvalue pair (−1, −1) implies that the reference and target states in the q_i sector lie on disconnected components. However, if we combine two sectors (q_i, q_j) with an odd number of excitations, these additional pairs combine into two quadruples, given by (−1, −1, −1, −1) with 2ϑ = π and (1, 1, 1, 1) with 2ϑ = 0. For example, combining two sectors with a single excitation, the complexity contributions are described by four angles. Our discussion above implied that an unmatched eigenvalue pair (−1, −1) also appears for the sectors with three excitations. In such a situation, there must be a particle-antiparticle pair with the same spin label, i.e., a pair of creation operators as appear in the (A) states in eq. (4.23). This pair can be dealt with separately as in section 4.2, and a quadruple of eigenvalues appears, specified by the angle 2ϑ = π − tan⁻¹(|q_i|/m), as in eq. (4.27). Similarly, as in the previous case, the unpaired creation operator then yields the eigenvalues (1, 1, −1, −1). Hence we see the unmatched pair (−1, −1) appearing here as well and, as discussed above, if the reference and target state are connected within the set of Gaussian states, this pair can be matched with another such pair from one of the other sectors with an odd number of excitations.
While we have outlined the steps needed for the evaluation of the complexity of a general state (6.47), writing out the final complexity would require a rather elaborate expression, since there are many different possibilities for the different sectors. Instead, let us focus on the special case where each excited sector contains only a single excitation. Then, following the analysis in section 4.2, it is straightforward to evaluate the difference between the complexity of the excited state and that of the vacuum. With the κ = 1, 2 measures, we find the result of eq. (6.51), where n is defined in eq. (6.48), i.e., n is half of the total number of excitations. We repeat the usual caveat that implicitly, for ∆C_{κ=1}, we are aligning the basis in each q_i sector with the generators which produce the desired unitary transformation. In comparing to our previous results, we see that these results match ∆C_{κ=2} and ∆C_{κ=1} in eqs. (4.38) and (4.39), respectively, for the states in eq. (4.34) (or the corresponding differences for the more general states in eq. (6.44)).

Discussion and outlook
As was reviewed in section 2, Nielsen's perspective [39][40][41] allows one to bring the power of differential geometry to bear on the problem of constructing optimal quantum circuits, and also provides an objective manner in which to measure the complexity as the 'length' of extremal paths in this geometry. The present paper extends the study of [32], which applied Nielsen's geometric approach to investigate the ground state complexity of a free scalar field theory. In particular, we examined the complexity of Gaussian states in a free fermionic quantum field theory. Let us reiterate that some aspects of our work overlap with the recent studies in [36,37]. As discussed in section 3, we can think of the Gaussian states for N fermionic degrees of freedom as those annihilated by a family of N destruction operators a_i, and hence the transformations carrying us from one state to another can be identified with the Bogoliubov transformations amongst the annihilation and creation operators. This perspective readily reveals a group structure for the family of transformations of interest, namely O(2N) for free fermions. This group structure can be connected to Nielsen's construction of the unitary circuits which prepare these states by observing that the operators O_I in eq. (2.2) form a representation of the corresponding Lie algebra so(2N), and the circuits are then trajectories in the corresponding group manifold. In contrast with, e.g., [32,36,37], our approach was to emphasize this group structure, and in doing so, the precise representation appearing in the construction of the circuits becomes less important. Rather, we focused on the action of the group transformation on the covariance matrix (3.1), which can be used to completely characterize the Gaussian states for either fermionic or bosonic degrees of freedom.
With this representation, we were able to equip the group with a natural positive definite right-invariant metric, which allowed us to find all geodesics and their path lengths analytically by exploiting the underlying U(N) symmetry of this metric, as explained in appendix B. We found that this more abstract group-theoretic perspective yields an extremely powerful approach to applying Nielsen's construction to this problem. In particular, we were able to prove that our unitary circuits in fact correspond to minimal geodesics in the corresponding geometry on the space of states. 41 Further, we evaluated the complexity of the ground state for a variety of different disentangled reference states, and we also evaluated the complexity of various families of excited states.
In evaluating the complexity, one must choose a cost function (2.5), and in our analysis, we focused on three choices: the F_2 measure and the κ = 1 and κ = 2 measures. Of course, the F_2 measure can be recognized as the proper distance in the Riemannian geometry which we defined for the SO(2N) group manifold. With a physicist's perspective, we can regard the κ = 2 measure as a 'standard' test particle action on the same geometry and, of course, it yields the same optimal trajectories as the F_2 measure. Given the interpretation of the Y^I functions in eq. (2.2) as indicating when particular gates appear in the circuit, the κ = 1 measure comes the closest amongst the examples in eq. (2.5) to the original definition of simply counting the number of gates in the circuit. Unfortunately, in the practical situation where the relevant trajectories are constructed from many orthogonal generators, this measure defines a 'Manhattan metric' on the relevant submanifold and does not provide the most useful measure to distinguish different circuits, e.g., [32,39]. Furthermore, as discussed in [32], the value of the complexity depends on the precise choice of the generators O_I appearing in the construction. An advantage of the previous two measures is that they do not suffer from this basis dependence, 42 and of course, our analysis of the geodesics, e.g., in appendices A and B, made reference to the F_2 measure (but as noted above, the κ = 2 measure yields identical geodesics). Implicitly, our results for the κ = 1 measure assume that the basis of generators is aligned with the generators producing the desired transformation. For higher values of κ, i.e., κ > 2, one finds a similar basis dependence [32], which is unfortunate because the κ cost functions were introduced because of the close parallels between the resulting complexity for free fields and the results found for holographic complexity [32,33].
However, at least for κ = 1, this situation can be remedied by making use of the Schatten norm (e.g., see [56,57]). This norm actually provides a family of measures based on the singular value decomposition of the desired transformation, and in the present case, it reduces to (Σ_I |2ϑ_I|^p)^{1/p}, where p is a positive integer and the ϑ_I are precisely the eigenvalues of the generator producing the desired transformation. With p = 2, this reduces to the standard Frobenius-Hilbert-Schmidt norm, and we recover the F_2 measure which we were studying in the main text. More importantly, with p = 1, we recover our results for the κ = 1 measure; however, they are now basis independent when framed in terms of the Schatten norm. We return to discuss this issue in more detail in [58,59].
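The basis independence of such Schatten-type measures can be illustrated numerically (a sketch of our own; the pairing and factor-of-two conventions for the angles only follow the (Σ_I |2ϑ_I|^p)^{1/p} form up to normalization):

```python
import numpy as np

def schatten_cost(O, p=1):
    """Schatten-type cost of a special orthogonal transformation O.
    The generator A = log O is antisymmetric, so its singular values are the
    rotation angles |phi_k| of O; we read them off the eigenvalues of O
    instead of computing the matrix logarithm explicitly."""
    phis = np.angle(np.linalg.eigvals(O))        # angles in (-pi, pi]
    return np.sum(np.abs(phis) ** p) ** (1.0 / p)

# a rotation by 0.5 in one two-dimensional plane of R^4
c, s = np.cos(0.5), np.sin(0.5)
O = np.block([[np.array([[c, -s], [s, c]]), np.zeros((2, 2))],
              [np.zeros((2, 2)), np.eye(2)]])

# basis independence: conjugating by any orthogonal Q leaves the cost invariant
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
cost_1 = schatten_cost(O, p=1)               # 1.0 (angles +0.5 and -0.5)
cost_2 = schatten_cost(Q @ O @ Q.T, p=1)     # same cost in a rotated basis
```

Because the cost is built from the spectrum of the generator alone, it manifestly does not depend on which orthonormal basis of generators is chosen.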
As discussed in section 3.2, we chose a natural metric (3.37) defined by the group structure of the present problem, namely one proportional to the Killing form on the O(2N) group manifold. This choice simplified the calculation of the geodesics and their lengths, as explained in appendix B. Alternatively, one may wish to use a different right-invariant metric which involves "penalty factors", e.g., as in the F_{1p} measure in (2.5) or in an elaboration of the F_2 cost function of the form F̃_2(U, Y) = √(Σ_{I,J} g_{IJ} Y^I Y^J). With such a penalized metric, one favours certain directions in the circuit space over others, i.e., a higher cost is given to certain classes of gates (i.e., Lie algebra generators). For example, in [32], it was suggested that such an approach could be used to restore a notion of locality for circuit complexity in quantum field theories by increasing the cost of gates that correlate degrees of freedom separated by larger distances. 43 However, in the initial exploration of the latter for the scalar field theory, the shortest paths were dramatically changed and, in general, evaluating the geodesics relied on numerical computations. We expect a similar situation will arise for the present fermionic systems, especially if the penalty factors break the U(N) symmetry of the present metric. We leave the study of such penalized metrics for a future project.

42 Strictly speaking, this statement applies for any orthonormal basis of generators; however, a completely basis-independent framework is produced by extending these measures to F̃_2(U, Y) = √(Σ_{I,J} g_{IJ} Y^I Y^J), where g_{IJ} is a frame metric on the space of generators; see the discussion of penalty factors below.
An alternative approach was introduced in [33] to compute circuit complexity by assigning a geometry to the space of states rather than the space of unitaries. In particular, this approach is based on the fact that the normalized vectors in a Hilbert space 44 form a Riemannian manifold whose metric is inherited from the positive definite inner product of the Hilbert space, i.e., the Fubini-Study metric [61,62]. As it turns out, the minimal geodesic between two states |ψ_R⟩ and |ψ_T⟩ is just given by the arc of a circle that connects the two states in the two-dimensional plane spanned by them. The geodesic length is therefore trivially given by the angle ϑ between the two state vectors, which can be computed from cos ϑ = ⟨ψ_R|ψ_T⟩. However, as stressed in [2,42,43], this geometry alone is inappropriate for defining a notion of circuit complexity for states in quantum field theory, since the maximum separation of any two states is only π. Instead, it was proposed in [33] to restrict the manifold of states to a subset, in particular to the set of Gaussian states in the context of free field theories. In this case, the geodesics are forced to lie on a submanifold with a more intricate geometry. In the case of a free scalar field, the complexity determined with this alternative approach was actually found to agree with that determined with the Nielsen approach in [32]. Similarly, for a free fermionic field, one can show that the length of the minimal geodesics found in the present paper agrees with the lengths found with the Fubini-Study metric restricted to the submanifold of Gaussian states. We believe that this is a general feature which relates minimal Lie group geodesics with respect to a "natural metric" to the Fubini-Study geodesics on the quotient manifold where we divide the Lie group Sp(2N, R) or O(2N) by their subgroup U(N) [58,59].

43 Let us note, however, that ref. [60] recently argued that the microscopic model underlying holographic complexity must be nonlocal.
44 More precisely, we also divide by the complex phase to get a representation of the "space of rays" in the Hilbert space.
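In code, the Fubini-Study distance between two rays can be sketched as follows (a toy illustration of our own for finite-dimensional state vectors; taking the magnitude of the overlap is the convention chosen here to remove the overall phase):

```python
import numpy as np

def fubini_study_distance(psi_R, psi_T):
    """Angle between two normalized state vectors, treated as rays,
    i.e., insensitive to overall complex phases."""
    overlap = abs(np.vdot(psi_R, psi_T))
    # clip guards against round-off pushing the overlap slightly above 1
    return np.arccos(np.clip(overlap, 0.0, 1.0))

psi_R = np.array([1.0, 0.0], dtype=complex)
psi_T = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2.0)
d = fubini_study_distance(psi_R, psi_T)   # pi/4
```

The boundedness of this distance is exactly the feature that makes the unrestricted Fubini-Study geometry unsuitable as a complexity measure.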
A primary motivation to develop techniques to evaluate complexity in simple quantum field theories is to better understand holographic complexity. Of course, there is no a priori reason to expect the results for free field theories to agree with those in holography, which necessarily describes strongly coupled quantum field theories with a large number of degrees of freedom. Nevertheless, it was found that if the cost function is chosen appropriately, the scalar field complexity exhibits some remarkable similarities with holographic complexity [32,33]. In particular, the leading divergences in both the CV and CA proposals are extensive, i.e., they are proportional to the volume of the time slice on which the boundary state is evaluated, as shown in [12]. Just as for the scalar field in [32], we found here that the complexity of a fermionic field yields an analogous leading divergence with the κ cost functions, in particular, with κ = 2 and κ = 1, as shown in eqs. (4.22) and (4.19), respectively. In contrast, the F_2 cost function gives a result proportional to V^{1/2}, which does not match the holographic results. 45 Hence, our fermionic results reinforce the previous insights provided by the complexity calculations for a free scalar field with regards to the form of the cost functions that implicitly underlie holographic complexity.
Another interesting feature of the complexity results for the free scalar [32] was that the leading contribution with the κ measures contained an extra logarithmic factor proportional to log^κ(Λ/ω_0), where ω_0 was the frequency specifying the unentangled reference state. Surprisingly, a similar logarithmic factor was found in the leading divergence of the holographic complexity [12] for the complexity=action proposal. In the latter case, the logarithmic factor came from joint terms [9] in the gravitational action, and the argument of the logarithm was ambiguous because of the freedom in the normalization of the null normals on the boundary of the WDW patch. Whereas this ambiguity had originally been seen as problematic for the CA conjecture, the scalar field results indicated that it is a perfectly natural feature in the complexity of QFT states.
However, we find that no such logarithmic factors appear in the leading divergences of the complexities evaluated here for a fermionic field, e.g., see eqs. (4.22) and (4.19). This motivated our study of alternative reference states in section 6.1. However, we found that for any reference state which is translationally invariant and spatially unentangled, the leading singularity in the ground state complexity takes the same form. That is, with the κ = 1 measure, the leading divergence is precisely the same for all such reference states, i.e., it is independent of q̃ = q/M, as shown in eq. (6.34). On the other hand, eq. (6.33) shows that while the numerical coefficient varies with the reference state, i.e., with q̃, the leading singularity is still proportional to V Λ³ in all cases. Of course, this is not in contradiction with holographic complexity. Typically, holography involves a supersymmetric boundary theory, and so there will be both bosonic and fermionic degrees of freedom. Further, as explained in [32], if one were to choose the reference scale to be proportional to the cutoff, i.e., ω_0 = e^{−σ} Λ, then the logarithmic factor would simply appear as a numerical factor, i.e., σ^κ, multiplying the usual V Λ^{d−1} divergence. 46 However, with regards to supersymmetry, it may be interesting to investigate whether the relative strength of the logarithmic factor in the leading divergence found with complexity=action changes as the amount of supersymmetry in the boundary theory changes.
As noted above, while there was no reference frequency that appeared in the ground state complexity of a fermionic field, we were able to parametrize a whole family of reference states (all of which were spatially unentangled and translationally invariant). These states were characterized by a momentum q and a mass scale M, although we found that the complexity only depended on the dimensionless vector q̃ = q/M. We can make a parallel with these reference states in the free scalar field theory as follows: in notation analogous to eq. (6.13), we introduce states |M, q⟩_p, where |M, q⟩_p indicates the ground state of a Hamiltonian H_p(M, q) for a single degree of freedom (in the mode p). Now we can choose as our reference state the state (7.3), where we put every mode p into the ground state of H_p(M, q) with the same M and q. The resulting reference state is both spatially unentangled and translationally invariant, as desired. Now in the scalar theory, it just so happens that the only relevant quantity is the frequency ω_0 = ω(M, q) = √(M² + |q|²), and eq. (7.3) is precisely the family of reference states considered in [32]. Here, we are constructing them in a way that parallels our construction of the fermionic reference states in section 6.1.
However, let us note that it is straightforward to extend this family of reference states for the scalar field as follows: we can choose any fixed Gaussian state |G⟩_p to construct a reference state analogous to that in eq. (7.3). Hence for a scalar field, with a single bosonic degree of freedom at each spatial point (or in each momentum mode), we can form a two-dimensional family of reference states corresponding to M_{b,1} = Sp(2, R)/U(1) by performing Bogoliubov transformations on a fixed |G_0⟩_p (e.g., with G_0 = 1), as discussed in section 3.1. In terms of the language used in the previous paragraph, we can say that the most general |G⟩_p can be labeled by a reference frequency ω_0 and an angle θ_0. The corresponding state |ω_0, θ_0⟩_p is the ground state of a corresponding Hamiltonian. This general family of reference states extends the coefficient matrix A_{ab} of [32] to complex values. Further, with this extension, it would not suffice to construct the unitary circuit with only entangling gates. That is, one would have to extend the analysis of geodesics on the group GL(N, R) in [32] to geodesics on the full Sp(2N, R) group of Bogoliubov transformations discussed in section 3.1. 47 It would be interesting to study the effect of this generalization on the complexity of the ground state. Further, it is amusing to note that in considering a theory of n scalar fields, the general family of reference states would be n(n + 1)-dimensional, i.e., the full family corresponds to M_{b,n} = Sp(2n, R)/U(n).
Of course, the above generalization can also be applied to a fermionic field. In particular, given a system with n fermionic degrees of freedom at each spatial point or in each momentum mode, the corresponding space of Gaussian states is n(n − 1)-dimensional, i.e., recall M_{f,n} = O(2n)/U(n) as discussed in section 3.2. Again, we can choose any of the corresponding Gaussian states to define Ω_0 in constructing the reference state |Ω_R⟩ = ⊗_p |Ω_0⟩_p, as described in section 6.1. For example, with the four-dimensional Dirac field studied in the main text, we have n = 4 degrees of freedom at each spatial point and, in principle, we can construct a twelve-dimensional family of spatially unentangled and translationally invariant reference states. In section 6.1, in fact, we only considered the three-dimensional subspace labeled by the vector q̃ = q/M. It would be interesting to study the effect of choosing reference states throughout this full family on the ground state complexity of the Dirac field.
In sections 4.2 and 6.2, we were also able to study the complexity of a broad variety of excited states. This study was facilitated by the fact that in a free fermionic theory, acting with products of creation operators a^{r†}_q transforms the vacuum to another Gaussian state, which allowed us to apply the general framework developed in section 5. While the complexity depended on the details of which modes were excited (e.g., see table 1), a general feature was that when using the κ cost functions, the difference between the complexities of the excited state and the vacuum state was finite, 48 e.g., see eqs. (4.33) or (6.51). This result is perhaps not unexpected, but we note that it is in keeping with our expectations for the holographic conjectures. That is, low energy excitations in the bulk will not affect the structure of the UV divergences, which is determined by contributions coming from the asymptotic regions of the bulk spacetime. One explicit example of this behaviour is provided by the complexity of formation in [11,13]. However, our new results for excited states in the free fermionic field theory provide some additional motivation to study the complexity of excited states in a holographic framework more carefully. In any event, the finite difference in the complexities of the excited states and the vacuum reinforces the suggestion that the cost functions which are implicit in the microscopic rules governing holographic complexity are similar to the κ cost functions used in our free field studies.
We should note that while we considered a broad family of excited states in our complexity calculations, these only represent discrete points in the full N(N − 1)-dimensional space of Gaussian states. 49 While our excited states form a physically interesting case, where one considers states with an arbitrary (but even) number of particles (and antiparticles) that have well-defined momentum, Gaussian states are far more complicated in general, e.g., (1 + α a^{r†}_q b^{r†}_{−q})|0⟩ is a Gaussian state without a definite particle number. However, we want to emphasize that our framework also applies to these more complicated, coherent excitations. For instance, we can consider a set of coherent creation operators built with smearing functions f_i and g_i, such that {A_i, A†_j} = δ_{ij}. In this case, each excited state will still be Gaussian (since, as before, they are annihilated by the A†_i themselves). As long as the number of such excitations is even, |ψ⟩ will live on the same connected component as the Dirac vacuum (and appropriate choices of the reference state), such that the complexity can be computed. In general, |ψ⟩ will not be a tensor product over momentum modes, implying that it is not translationally invariant and that it has not just spatial, but also momentum entanglement. Computing the complexity of such states will thus require more work in the continuum than for the energy eigenstates considered in this paper, because one needs to find a generalized normal mode basis such that both |ψ⟩ and the reference state are tensor products with respect to this basis. A simple solution for such examples in practice will be to put the field theory on a finite lattice and compute the circuit complexity from the eigenvalues of ∆, which will be a large but finite matrix.

48 Assuming that we are only exciting a finite number of momentum modes.
49 Hence in terms of the UV cutoff, the dimension of this space is of the order (V Λ³)².
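The lattice strategy just mentioned can be sketched in a few lines (a toy example of our own with a single pair of fermionic modes; the conventions used here, an antisymmetric Ω with Ω² = −1 for pure states and ∆ = Ω_T Ω_R^{−1}, are assumptions of this sketch rather than the precise conventions of the main text):

```python
import numpy as np

# reference covariance matrix for two fermionic modes
eps = np.array([[0., 1.], [-1., 0.]])
Omega_R = np.block([[eps, np.zeros((2, 2))], [np.zeros((2, 2)), eps]])

# an antisymmetric generator that does NOT commute with Omega_R, i.e., a
# genuine (two-mode squeezing type) Bogoliubov direction
K = np.zeros((4, 4))
K[0, 3] = K[1, 2] = 1.0
K[3, 0] = K[2, 1] = -1.0

theta = 0.4
# since K^2 = -1, the matrix exponential is exp(t K) = cos(t) 1 + sin(t) K
M = np.cos(theta) * np.eye(4) + np.sin(theta) * K
Omega_T = M @ Omega_R @ M.T

# relative covariance matrix and its eigenvalue angles; the eigenvalues come
# in the quadruple (e^{2i theta}, e^{2i theta}, e^{-2i theta}, e^{-2i theta})
Delta = Omega_T @ np.linalg.inv(Omega_R)
angles = np.angle(np.linalg.eigvals(Delta))
```

On a finite lattice, ∆ is simply a larger matrix of the same kind, and the complexity follows from its eigenvalue angles exactly as in the single-mode example above.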
The techniques introduced in this paper are quite general. One could easily extend the present discussion to Dirac fields in higher (or lower) spacetime dimensions. It may also be interesting to study circuit complexity for states in a theory of chiral fermions. A more challenging extension of the present work would be to evaluate the complexity of Gaussian states with odd fermion number. As noted before, the corresponding geodesics could not remain within the space of Gaussian states because they must reach the disconnected component of M_{f,N} = O(2N)/U(N). Hence, it may be just as simple to consider the complexity of more general states, i.e., non-Gaussian states. Another possibility would be to analytically continue our formula for the length of the minimal geodesics by simply defining the circuit complexity to be computed in the same way from the relative covariance matrix ∆. This would be an analytic continuation where we allow paths in the complexification of O(2N), which is a connected Lie group. However, it is not clear how such a path could be related to a sequence of intermediate quantum states between the reference and target states.
Another interesting direction would be to connect the present model for the complexity of fermionic states (as well as those in [36,37]) to previous discussions of simulating fermionic systems on a universal quantum computer, e.g., [63][64][65]. In particular, ref. [65] bounded the complexity of quantum algorithms computing scattering amplitudes in interacting fermionic field theories. Specifically, they studied the two-dimensional Gross-Neveu model [66] describing N species of fermions with quartic interactions. The first step in this process is to prepare the vacuum of the free theory (using adiabatic state preparation), and the upper bound on the number of gates required for this step is proportional to V³Λ⁶/m³. Of course, the complexity of preparing the vacuum of a free fermion is a central question in the present paper, but our result for a two-dimensional theory would be of order V Λ, e.g., see explicit calculations for d = 2 in [37]. The latter is dramatically smaller than the bound found by [65], and hence it would be interesting to investigate whether the present construction based on Nielsen's geometric approach [39][40][41] can be adapted to provide practical quantum algorithms.

A Geodesics on Lie groups
We show under which conditions the one-parameter subgroup e^{sA} is a geodesic on a Lie group G. In particular, this allows us to prove that any such subgroup is a geodesic if we equip SO(2N) with the unpenalized right-invariant metric that we use in the body of this paper. These results are standard material from Lie group geometry and the study of symmetric spaces [49], but we review them here for completeness and to match our conventions.

A.1 Geodesics for right-invariant metric
We consider the setting of a general Lie group G with Lie algebra g and positive definite metric ⟨·, ·⟩_1 : g × g → R. We can extend this metric to all tangent spaces of the Lie group by requiring right-invariance, which means that at the point M ∈ G (we think of M as a matrix in the fundamental representation of the group), we have the inner product ⟨X, Y⟩_M = ⟨X M^{−1}, Y M^{−1}⟩_1. In this context, we can ask for which A ∈ g the trajectory e^{sA} is a geodesic. Given a Lie group G with a right-invariant metric ⟨·, ·⟩_M, each generator B ∈ g gives rise to a Killing vector field X_B, and along any geodesic γ(s) the associated charge Q_B(s) is conserved, i.e., d/ds Q_B(s) = 0. Due to the fact that we have as many linearly independent Killing vector fields X_B as we have generators B ∈ g, the combination of conserved quantities and initial point γ(0) characterizes the geodesic uniquely. Put differently, if we know γ(0) and γ̇(0), we can use the dim g linearly independent charges Q_{B_i} to rewrite the geodesic equation as a first-order equation with a unique solution. In particular, if we find a trajectory γ(s) that preserves all charges Q_B(s), we can be certain that we have found a geodesic.

Critical points on the adjoint orbits generate geodesics
Based on our previous considerations, we can check what the conditions on A ∈ g are so that γ(s) = e^{sA} is a geodesic. For γ(s) = e^{sA}, we can compute the charges

Q_B(s) = ⟨A, e^{sA} B e^{-sA}⟩_1 ,

where we used right-invariance of the metric to compute the inner product at the identity. From this equation we can prove that ⟨A, [A, B]⟩_1 = 0 for all B ∈ g is a necessary and sufficient condition for all charges to be conserved:
• Necessary. We just evaluate d/ds Q_B(s)|_{s=0} = ⟨A, [A, B]⟩_1. All charges can only be conserved if this expression vanishes for all B ∈ g.
• Sufficient. At finite s, we find d/ds Q_B(s) = ⟨A, [A, e^{sA} B e^{-sA}]⟩_1, which vanishes for all B because e^{sA} B e^{-sA} is again a Lie algebra element.
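The charge conservation along e^{sA} is easy to check numerically. The following sketch (our own illustration, not from the paper) works with so(2N) represented by real antisymmetric matrices and the unpenalized inner product ⟨A,B⟩_1 = -Tr(AB); it verifies that Q_B(s) = ⟨A, e^{sA} B e^{-sA}⟩_1 is constant and that the obstruction ⟨A,[A,B]⟩_1 vanishes identically by cyclicity of the trace.

```python
# Numerical check: the charge Q_B(s) along gamma(s) = e^{sA} is conserved
# because <A,[A,B]>_1 = 0 for the trace inner product <A,B>_1 = -Tr(AB).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 3  # work in so(2N) with 2N = 6

def random_antisymmetric(n, rng):
    X = rng.standard_normal((n, n))
    return X - X.T  # generic Lie algebra element of so(n)

A = random_antisymmetric(2 * N, rng)
B = random_antisymmetric(2 * N, rng)

def Q(s):
    # charge associated with the Killing field X_B(M) = M B,
    # evaluated along gamma(s) = e^{sA} using right-invariance
    adjoint_B = expm(s * A) @ B @ expm(-s * A)
    return -np.trace(A @ adjoint_B)

charges = [Q(s) for s in np.linspace(0.0, 1.0, 5)]
spread = max(charges) - min(charges)          # should be ~0
obstruction = -np.trace(A @ (A @ B - B @ A))  # <A,[A,B]>_1, exactly 0
```

The obstruction vanishes for any antisymmetric A, B, which is the "sufficient" direction of the argument above specialized to our bi-invariant metric.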
Here the inner product is ⟨A, B⟩_1 = Tr(A G B^⊺ g), where G and g = G^{-1} refer to the metric governing the anticommutation relations {ξ^a, ξ^b} = G^{ab}. This metric is positive definite, but it is also invariant under the adjoint action, which we can check by computing

⟨M A M^{-1}, M B M^{-1}⟩_1 = Tr(M A M^{-1} G (M B M^{-1})^⊺ g) = ⟨A, B⟩_1 ,

where we used M G M^⊺ = G and cyclicity of the trace to move M from the front to the end. With this in hand, we know that extending ⟨·,·⟩_1 to a right-invariant metric actually gives rise to a bi-invariant metric. In particular, all geodesics departing from the identity are given by one-parameter subgroups γ(s) = e^{sA}.
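The adjoint invariance and positive definiteness can also be verified numerically. The sketch below (our own conventions: a random positive definite G, generators K with K G antisymmetric) checks that ⟨A,B⟩_1 = Tr(A G B^⊺ g) is unchanged under conjugation by a group element preserving G, and that it reduces to -Tr(AB).

```python
# Check adjoint invariance of <A,B>_1 = Tr(A G B^T g) for M with M G M^T = G.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 6
S = rng.standard_normal((n, n))
G = S @ S.T + n * np.eye(n)     # positive definite metric {xi^a, xi^b} = G^{ab}
g = np.linalg.inv(G)

def random_generator(rng):
    # K G antisymmetric  <=>  K G + G K^T = 0, i.e. K lies in so(G)
    T = rng.standard_normal((n, n))
    return (T - T.T) @ g

A = random_generator(rng)
B = random_generator(rng)
K = random_generator(rng)
M = expm(K)                     # group element preserving G
Minv = np.linalg.inv(M)

def inner(X, Y):
    return np.trace(X @ G @ Y.T @ g)

before = inner(A, B)
after = inner(M @ A @ Minv, M @ B @ Minv)   # adjoint action
reduced = -np.trace(A @ B)                  # using G B^T g = -B
```

The identity G B^⊺ g = -B used in the last line follows from K G being antisymmetric for every generator, so the natural inner product is indeed minus the trace form.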

B Minimal geodesics in SO(2N )
We will give a general proof of which geodesics are the minimal ones connecting the identity with an equivalence class of unitaries that prepare the same state. In particular, we show that any such geodesic is nothing but a collection of fermionic two-mode squeezing operations in fermionic normal modes. These results are the fermionic analogues of the derivation for bosons presented in the appendix of [38]. The geometry discussed in the following for the Lie algebra so(2N) and the Lie group SO(2N) is illustrated in figure 10.

B.1 Lie group geometry
In the Nielsen approach to complexity, we equip the Lie group SO(2N) with a right-invariant and positive definite metric. Such a metric is completely characterized by its value at the identity, where we identify the tangent space T_1 SO(2N) with the Lie algebra so(2N). We represent a generator A ∈ so(2N) as a matrix, namely a linear map A^a_b. A natural choice for the invariant metric on so(2N) is given by

⟨A, B⟩_1 = Tr(A G B^⊺ g) = -Tr(AB) ,

where we used G B^⊺ g = -B to find the right-hand side, which makes explicit that our natural inner product is just minus the Killing form on so(2N) (up to normalization). In particular, extending this metric to a right-invariant metric over the whole group will give rise to a bi-invariant metric. Given two tangent vectors X, Y ∈ T_M SO(2N) represented as matrices at the point M ∈ SO(2N), we can compute their inner product by multiplying with M^{-1} from the right, leading to

⟨X, Y⟩_M = ⟨X M^{-1}, Y M^{-1}⟩_1 .

The quotient SO(2N)/U(N) of the group by the stabilizer of the reference state plays a central role in what follows. For general N, this space has some non-trivial topology and is generally referred to as the symmetric space of type DIII. We will refer to it as M, the space of pure Gaussian states, and identify a point [M] ∈ M with the Gaussian state |M Ω_R M^⊺⟩ up to an overall complex phase.
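As a quick sanity check (our own illustration), one can confirm that the inner product -Tr(AB) is positive definite on so(2N): for antisymmetric A, the norm squared is just the sum of squared matrix entries, i.e., the Frobenius norm squared.

```python
# With G = identity, <A,A>_1 = -Tr(A^2) equals the Frobenius norm squared
# of the antisymmetric matrix A, so the metric is positive definite.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((6, 6))
A = X - X.T                       # generic element of so(6)
norm_sq = -np.trace(A @ A)        # <A,A>_1 for the unpenalized metric
frob_sq = np.sum(A * A)           # sum of squared entries
```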

B.3 Cartan decomposition
Identifying the Lie algebra so(2N) with the tangent space at the identity, we have a natural "vertical" subalgebra u(N) ⊂ so(2N) that is tangential to the fiber [1] = U(N). A priori, there is no natural "horizontal" complement with which to write the Lie algebra as a direct sum of a vertical and a horizontal part. However, by equipping the Lie algebra with the inner product ⟨·,·⟩_1, we can choose the orthogonal complement

asym(N) := { A ∈ so(2N) | ⟨A, B⟩_1 = 0 ∀ B ∈ u(N) } .   (B.5)

In contrast to u(N), asym(N) is not a subalgebra. Its name stems from the fact that the decomposition

so(2N) = asym(N) ⊕ u(N)   (B.6)

is equivalent to splitting the set of generators into symmetric and antisymmetric matrices with respect to the symplectic form Ω_R of the reference state:
• Vertical subspace u(N). A generator B in the subspace u(N) must generate transformations that preserve the reference state |Ω_R⟩. It must therefore be both orthogonal (i.e., B G = -G B^⊺) and symplectic with respect to Ω_R, i.e.,

B Ω_R = -Ω_R B^⊺ ,   (B.7)

which is equivalent to B Ω_R being a symmetric matrix in a basis where Ω_R takes the standard form from eq. (3.2).
• Horizontal subspace asym(N) = u(N)^⊥. A generator A that is orthogonal to all elements B ∈ u(N) satisfies

0 = ⟨A, B⟩_1 = Tr(A G B^⊺ g) .   (B.8)

Using G B^⊺ g = -B, we can rewrite this expression as -Tr(AB). We go a step further by inserting 1 = Ω_R ω_R, leading to

-Tr(AB) = -(A Ω_R)^{ab} (ω_R B)_{ba} .

In this expression, (ω_R B)_{ba} is symmetric as a consequence of B Ω_R = -Ω_R B^⊺. The fact that the inner product vanishes for all B ∈ u(N) is therefore equivalent to (A Ω_R) being antisymmetric, namely

(A Ω_R)^⊺ = -A Ω_R .

The resulting Cartan decomposition M = e^A u is unique and provides locally (around the identity) a diffeomorphism between the special orthogonal group and the Cartesian product Asym(N) × U(N). In particular, it provides a local trivialization of the fiber bundle SO(2N), where the base manifold is identified with the surface Asym(N), from which we can move up and down along the fiber by multiplying with group elements u ∈ U(N). Due to the fact that Asym(N) is locally diffeomorphic to asym(N), we can use the pair (A, u) as generalized coordinates for group elements M(A, u) in a neighborhood of the identity:

M(A, u) = e^A u  with  A ∈ asym(N) and u ∈ U(N) .   (B.13)
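The splitting so(2N) = asym(N) ⊕ u(N) can be made concrete numerically. The sketch below (our own conventions: G = identity, Ω_R in the standard block form, so ω_R = -Ω_R and Ω_R² = -1) projects a generator K onto the part for which K Ω_R is symmetric (vertical) and the part for which it is antisymmetric (horizontal), and checks that the two parts are orthogonal with respect to -Tr(AB).

```python
# Split a generator K in so(2N) into its u(N) and asym(N) parts.
import numpy as np

rng = np.random.default_rng(3)
N = 3
I = np.eye(N)
Omega = np.block([[np.zeros((N, N)), I], [-I, np.zeros((N, N))]])  # Omega_R

X = rng.standard_normal((2 * N, 2 * N))
K = X - X.T                                  # generic element of so(2N)

B_vert = 0.5 * (K - Omega @ K @ Omega)       # vertical: (B Omega_R) symmetric
A_horz = 0.5 * (K + Omega @ K @ Omega)       # horizontal: (A Omega_R) antisymmetric

sym_check = B_vert @ Omega - (B_vert @ Omega).T    # should vanish
asym_check = A_horz @ Omega + (A_horz @ Omega).T   # should vanish
overlap = -np.trace(A_horz @ B_vert)               # <A,B>_1, should vanish
```

The orthogonality of the two projections follows from cyclicity of the trace together with Ω_R² = -1, mirroring the index computation in (B.8).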

B.4 Cylindrical foliation
We can foliate the special orthogonal group by generalized cylinders defined as

C_σ = { e^A u | A ∈ asym(N), ‖A‖ = σ, u ∈ U(N) }   (B.14)

with the topology S^{N(N-1)-1} × U(N). Moreover, we define the radial vector field

R(M(A, u)) = (A/‖A‖) e^A u ,

whose integral curves are the paths e^{sA} u. We will prove that this vector field points radially outwards and is everywhere orthogonal to the cylindrical surfaces C_σ. Therefore, we need to show that R is indeed orthogonal to the surfaces C_σ, which we will prove individually for the different directions. Note that the normalization 1/‖A‖ is irrelevant here.
• Orthogonality to the U(N) fiber: We show that R is orthogonal to any vector pointing along the U(N) fiber. Let X ∈ u(N), so that e^A u X points in the direction of the U(N) fiber at the point M(A, u). We can compute the inner product

⟨R, e^A u X⟩_{M(A,u)} = (1/‖A‖) ⟨A, u X u^{-1}⟩_1 = 0 ,

where we used right-invariance and the fact that the trace is cyclic, and that B = u X u^{-1} lies in u(N) and is therefore orthogonal to A ∈ asym(N).

[Figure 10: The Lie group SO(2N) can be represented as a fiber bundle over its quotient, the symmetric space SO(2N)/U(N). This base manifold can be interpreted as the space of Gaussian quantum states. The fiber over the reference state |G_R⟩ is given by the subgroup U(N) ⊂ SO(2N), while the fiber e^A U(N) over any target state |G_T⟩ is not a subgroup. We consider a path γ in the group that connects 1 to some other group element M = e^A u. Such a point lies on the cylinder C_σ for σ = ‖A‖. Every curve γ in the group can be projected down to a curve πγ in the manifold of Gaussian states. The horizontal submanifold Asym(N) = exp asym(N) is generated by exponentiating asym(N), and it plays an important role because it contains the minimal geodesics. Note that Asym(N) has a complicated topology and intersects the fibers several times, but we only sketched a single layer corresponding to the region of Asym(N) near the identity. In particular, the straight line e^{sA} connecting 1 with e^A will turn out to be the minimal geodesic between 1 and the fiber e^A U(N). Note that we do not show the vector field R consisting of radially outwards pointing unit vectors on the cylindrical surfaces C_σ, such that the curves e^{sA} u are its integral curves.]
This proves that we have indeed a vector field R that is everywhere orthogonal to the cylindrical surfaces C_σ. Furthermore, we can quickly confirm that this vector field has constant length equal to 1 by computing ⟨R, R⟩_{M(A,u)} = ⟨A/‖A‖, A/‖A‖⟩_1 = 1, where we again used right-invariance.
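Both properties of R can be checked numerically. The sketch below (our own illustration, with G = identity and the standard Ω_R) builds a random horizontal generator A, a vertical direction X ∈ u(N), and a fiber element u ∈ U(N), then evaluates the right-invariant metric at M(A, u) = e^A u.

```python
# Verify that R(M(A,u)) = (A/||A||) e^A u has unit norm and is orthogonal
# to the fiber direction e^A u X in the right-invariant metric.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
N = 3
I = np.eye(N)
Omega = np.block([[np.zeros((N, N)), I], [-I, np.zeros((N, N))]])

def so_element(rng):
    X = rng.standard_normal((2 * N, 2 * N))
    return X - X.T

def split(K):
    # vertical (u(N)) and horizontal (asym(N)) projections, cf. eq. (B.6)
    return 0.5 * (K - Omega @ K @ Omega), 0.5 * (K + Omega @ K @ Omega)

Xv, _ = split(so_element(rng))       # a u(N) direction along the fiber
_, A = split(so_element(rng))        # a horizontal generator in asym(N)
u = expm(split(so_element(rng))[0])  # a group element in U(N)

normA = np.sqrt(-np.trace(A @ A))
M = expm(A) @ u
Minv = np.linalg.inv(M)

def metric(X, Y):                    # right-invariant extension of <.,.>_1
    return -np.trace(X @ Minv @ Y @ Minv)

R = (A / normA) @ expm(A) @ u        # radial vector at M(A, u)
fiber_dir = expm(A) @ u @ Xv         # tangent to the fiber e^A U(N)

unit_norm = metric(R, R)             # should equal 1
overlap = metric(R, fiber_dir)       # should vanish
```

The orthogonality boils down to ⟨A, u Xv u^{-1}⟩_1 = 0, since conjugation by u ∈ U(N) keeps Xv inside the vertical subalgebra.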

B.5 Inequality for the geodesic length
We will now use the cylindrical structure to bound the geodesic length from below. Given an arbitrary point M(A, u) = e^A u on the cylinder C_σ, let us assume that we have already found the shortest path connecting the identity 1 with M(A, u). Let this path be given by γ(s) with γ(0) = 1 and γ(1) = M(A, u). We can compute the change dσ as the inner product

dσ(s) = ds ⟨γ̇(s), R(γ(s))⟩_{γ(s)} .   (B.26)

Clearly, if we integrate this inner product, we find how far we move in the σ-direction. This follows directly from the fact that moving in the direction of R increases σ at a constant rate, while moving along any orthogonal direction does not change σ. Therefore, we have

σ = ∫_0^1 ds ⟨γ̇(s), R(γ(s))⟩_{γ(s)} .

At this point, we should note that ⟨γ̇(s), R(γ(s))⟩_{γ(s)} ≤ ‖γ̇(s)‖ for all s. This follows from the fact that we are projecting onto the unit vector R, so this projection is at most the length of γ̇(s). We can combine these two equations to find the important inequality

σ ≤ ℓ(γ) ,   (B.29)

stating compactly that any path connecting 1 with M ∈ C_σ must have a length ℓ(γ) of σ or more. At this point, we have not proven that for every M ∈ C_σ ⊂ SO(2N) there exists a path of length σ connecting 1 with M, and there certainly are points M where we cannot find such a shortest path. However, we are interested in the minimal geodesic that connects the identity 1 with an arbitrary point in the fiber [M]. This means that if we find a single path that does this with length σ, we have proven that it is indeed the optimal path and there is no shorter one.
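The inequality σ ≤ ℓ(γ) can be illustrated numerically by comparing the straight path e^{sA} with a "detour" e^{sA} e^{sV} that ends on the same fiber e^A U(N). The sketch below (our own conventions, as in the previous checks) discretizes the right-invariant length functional.

```python
# Compare the length of the straight path e^{sA} (which equals sigma = ||A||)
# with a longer path e^{sA} e^{sV} reaching the same fiber e^A U(N).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
N = 3
I = np.eye(N)
Omega = np.block([[np.zeros((N, N)), I], [-I, np.zeros((N, N))]])

X1 = rng.standard_normal((2 * N, 2 * N)); K1 = X1 - X1.T
X2 = rng.standard_normal((2 * N, 2 * N)); K2 = X2 - X2.T
A = 0.5 * (K1 + Omega @ K1 @ Omega)   # horizontal generator in asym(N)
V = 0.5 * (K2 - Omega @ K2 @ Omega)   # vertical generator in u(N)

sigma = np.sqrt(-np.trace(A @ A))

def path_length(velocity, steps=400):
    # trapezoidal integration of ||gamma_dot gamma^{-1}|| over s in [0,1]
    s_vals = np.linspace(0.0, 1.0, steps)
    speeds = np.array([np.sqrt(-np.trace(velocity(s) @ velocity(s)))
                       for s in s_vals])
    ds = s_vals[1] - s_vals[0]
    return float(np.sum(0.5 * (speeds[1:] + speeds[:-1])) * ds)

# straight path: gamma_dot gamma^{-1} = A, constant speed ||A||
straight = path_length(lambda s: A)
# detour: gamma_dot gamma^{-1} = A + e^{sA} V e^{-sA}
detour = path_length(lambda s: A + expm(s * A) @ V @ expm(-s * A))
```

Since the conjugated vertical piece stays orthogonal to A, the detour's speed is √(σ² + ‖V‖²) at every s, so its length strictly exceeds σ whenever V ≠ 0.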
B.6 Shortest path to a fiber e^A U(N)

We will now show explicitly that for every fiber e^A U(N) with σ = ‖A‖, there exists a path of length σ that connects the identity with the point e^A on this fiber. This path is given by

γ(s) = e^{sA}   (B.30)

and reaches the representative e^A at s = 1. This path has length ℓ(γ) = ‖A‖ = σ. At this point, we have proven that for our chosen inner product ⟨A, B⟩_1 = -Tr(AB), the shortest path is indeed always given by e^{sA} with A ∈ asym(N).
We can now ask how A is related to the target state |Ω_T⟩. We must have

Ω_T = e^A Ω_R e^{A^⊺} .   (B.31)

Now requiring that A ∈ asym(N) implies that M Ω_R = e^A Ω_R is antisymmetric. In an invariant language, we have equivalently M Ω_R = Ω_R M^⊺. With this in hand, we can claim that the linear map M = √(Ω_T ω_R) will do the job. Importantly, M satisfies M Ω_R = Ω_R M^⊺, and we can check explicitly

M Ω_R M^⊺ = M M Ω_R = Ω_T ω_R Ω_R = Ω_T .   (B.33)

The algebra element that generates M is given by A = log M = (log Ω_T ω_R)/2, so that σ = ‖A‖ = ‖log Ω_T ω_R‖/2. Let us note at this point that all expressions, such as log Ω_T ω_R and √(Ω_T ω_R), are well defined, because Ω_T ω_R is an orthogonal matrix. This fact implies that Ω_T ω_R is (a) diagonalizable and (b) has complex conjugate eigenvalue pairs e^{±iϕ}. 50 The linear map

∆ = Ω_T ω_R   (B.34)

encodes the invariant information about the relation between the reference state |Ω_R⟩ and the target state |Ω_T⟩. The eigenvalues of ∆ come in conjugate pairs (e^{ir_i}, e^{-ir_i}). We can compute the geodesic distance, which is equal to the norm ‖A‖, directly from ∆:

‖A‖ = √(Tr[(i log ∆)²]) / 2 .
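The final formula can be tested end to end. The following sketch (our own illustration) builds a target state Ω_T = e^A Ω_R e^{A^⊺} from a known horizontal generator A, forms ∆ = Ω_T ω_R, and checks that ‖A‖ recovered from ∆ agrees with -Tr(A²); A is scaled down so that log ∆ stays on the principal branch (all r_i < π).

```python
# Recover ||A|| = sqrt(Tr[(i log Delta)^2]) / 2 from Delta = Omega_T omega_R.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(6)
N = 3
I = np.eye(N)
Omega_R = np.block([[np.zeros((N, N)), I], [-I, np.zeros((N, N))]])
omega_R = np.linalg.inv(Omega_R)

X = rng.standard_normal((2 * N, 2 * N)); K = X - X.T
A = 0.05 * (K + Omega_R @ K @ Omega_R)    # small horizontal generator, so
                                          # logm stays on the principal branch
Omega_T = expm(A) @ Omega_R @ expm(A).T   # target state, Omega_T = M Omega_R M^T

Delta = Omega_T @ omega_R                 # relative linear map, eq. (B.34)
L = logm(Delta)
A_rec = 0.5 * L                           # A = (log Delta)/2
norm_direct = np.sqrt(-np.trace(A @ A))
norm_delta = np.sqrt(np.trace((1j * L) @ (1j * L)).real) / 2
```

For A ∈ asym(N) one has Ω_T = e^{2A} Ω_R, so ∆ = e^{2A} and the logarithm simply returns 2A.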
(B.35)

50 Note that the definition of the square root involves a small subtlety: if we describe the conjugate eigenvalues (e^{ir_i}, e^{-ir_i}) using r_i ∈ [0, π], the square root √(Ω_T ω_R) has the same eigenvectors, but eigenvalues (e^{ir_i/2}, e^{-ir_i/2}), which defines √(Ω_T ω_R) uniquely for r_i ≠ π. For r_i = π, the square root √(Ω_T ω_R) still has eigenvalues (e^{ir_i/2}, e^{-ir_i/2}), but the assignment to the eigenvectors can be interchanged. However, either choice will lead to a valid definition of the square root √(Ω_T ω_R) for the purpose here.
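The subtlety in footnote 50 is already visible for a single 2×2 rotation (our own minimal illustration): for r < π the principal square root is the rotation by r/2, while at r = π the rotations by ±π/2 both square to the same matrix, so the root is no longer unique, though either choice is valid.

```python
# Square roots of rotation matrices: unique for r < pi, ambiguous at r = pi.
import numpy as np
from scipy.linalg import sqrtm

def rot(r):
    return np.array([[np.cos(r), -np.sin(r)],
                     [np.sin(r),  np.cos(r)]])

r = 2.0                                   # some angle with 0 < r < pi
principal = sqrtm(rot(r))                 # principal branch: rotation by r/2
ok_principal = np.allclose(principal, rot(r / 2), atol=1e-8)

# at r = pi, the two candidate roots differ but both square to rot(pi)
amb1 = np.allclose(rot(np.pi / 2) @ rot(np.pi / 2), rot(np.pi))
amb2 = np.allclose(rot(-np.pi / 2) @ rot(-np.pi / 2), rot(np.pi))
```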