1.1 Numerical Renormalization Group in One Dimension

Numerical simulation is one of the most important approaches in science, in particular for complicated problems beyond the reach of analytical solutions. One distinguished example of such algorithms in physics as well as in chemistry is the ab initio calculation based on density functional theory (DFT) [1,2,3]. It provides a reliable way to simulate a wide range of materials that can be described by mean-field theories and/or single-particle approximations. The Monte Carlo method [4], named after the city in Monaco famous for gambling, is another example that appears in almost every corner of science. In contemporary physics, however, there are still many “hard nuts to crack.” In quantum physics specifically, numerical simulation faces unresolved issues for systems with strong correlations, which may give rise to exciting and exotic phenomena like high-temperature superconductivity [5, 6] and fractional excitations [7].

Tensor network (TN) methods in the context of many-body quantum systems have been developed recently. One can, however, identify precursors of them in the seminal works of Kramers and Wannier [8, 9], Baxter [10, 11], Kelland [12], Tsang [13], Nightingale and Blöte [11], and Derrida [14, 15], as found by Nishino [16,17,18,19,20,21,22]. Here we start their history from Wilson’s numerical renormalization group (NRG) [23]. The NRG aims at finding the ground state of a spin system. Its idea is to start from a small system whose Hamiltonian can be easily diagonalized. The system is then projected onto a few low-energy states of the Hamiltonian. A new system is constructed by adding several spins, and a new low-energy effective Hamiltonian is obtained by working only in the subspace spanned by the low-energy states of the previous step and the full Hilbert space of the new spins. The effective Hamiltonian can then be diagonalized again, and its low-energy states can be used to construct a new restricted Hilbert space. The procedure is then iterated. The original NRG has been improved, for example, by combining it with the expansion theory [24,25,26]. As already shown in [23], the NRG successfully tackles the Kondo problem in one dimension [27]; however, its accuracy is limited when applied to generic strongly correlated systems such as Heisenberg chains.

In the nineties, White and Noack were able to relate the poor NRG accuracy to the fact that it fails to properly account for the boundary conditions [28]. In 1992, White proposed the famous density matrix renormalization group (DMRG), which is as of today the most efficient and accurate algorithm for one-dimensional (1D) models [29, 30]. White used the eigenvectors with the largest eigenvalues of the reduced density matrix of a block as the states spanning the relevant low-energy part of the Hilbert space. The reduced density matrix is obtained by explicitly constructing the ground state of the system on a larger region. In other words, the space of one block is renormalized by taking the rest of the system as an environment.

The simple idea of environment had revolutionary consequences for RG-based algorithms. Important generalizations of DMRG were then developed, including the finite-temperature variants of the density matrix renormalization group [31,32,33,34], dynamic DMRG algorithms [35,36,37,38], and the corner transfer matrix renormalization group by Nishino and Okunishi [16].

About 10 years later, the TN was re-introduced in its simplest form of matrix product states (MPS) [14, 15, 39,40,41] in the context of the theory of entanglement in quantum many-body systems; see, e.g., [42,43,44,45]. In this context, the MPS encodes the coefficients of the wave-function in a product of matrices, and is thus defined as the contraction of a one-dimensional TN. Each elementary tensor has three indexes: one physical index acting on the physical Hilbert space of the constituent, and two auxiliary indexes that are contracted. The MPS structure is chosen because it represents states whose entanglement scales only with the boundary of a region rather than with its volume, a property called the “area law” of entanglement. Furthermore, an MPS exhibits only finite-range correlations, and is thus well suited to represent the ground states of gapped short-range Hamiltonians. The relation between these two facts was evinced in seminal contributions [50,51,52,53,54,55,56,57,58,59] and led Verstraete and Cirac to prove that MPS can provide faithful representations of the ground states of 1D gapped local Hamiltonians [60].
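To make the structure concrete, here is a minimal sketch of how an MPS stores a state: each site carries a three-index tensor, and contracting the auxiliary indices recovers the full wave-function coefficients. The dimensions and variable names below are our own illustrative choices, not taken from the cited works.

```python
import numpy as np

# Minimal MPS sketch with made-up dimensions: N sites of physical
# dimension d = 2 and auxiliary (bond) dimension D = 3, open boundaries.
# Each tensor has indices (left bond, physical, right bond).
np.random.seed(0)
N, d, D = 10, 2, 3
dims = [1] + [D] * (N - 1) + [1]
tensors = [np.random.rand(dims[n], d, dims[n + 1]) for n in range(N)]

# Contract the auxiliary indices to recover all d**N coefficients
# (feasible here only because N is tiny).
psi = tensors[0]
for A in tensors[1:]:
    psi = np.tensordot(psi, A, axes=([-1], [0]))  # contract the shared bond
psi = psi.reshape(d ** N)

# The MPS stores O(N * d * D**2) parameters instead of d**N coefficients.
n_params = sum(A.size for A in tensors)
print(psi.size, n_params)  # 1024 coefficients from 156 parameters
```

The linear (rather than exponential) parameter count is the point: the bond dimension D controls how much entanglement the ansatz can carry.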

These results, together with previous works that identified the outcome of converged DMRG simulations with an MPS description of the ground state [61], made it possible to understand the impressive performance of DMRG in terms of the correct entanglement scaling of its underlying TN ansatz. The connection between DMRG and MPS lies in the fact that the projector onto the effective Hilbert space built along the DMRG iterations can be seen as an MPS. Thus, the MPS in DMRG can be understood not only as a 1D state ansatz, but also as a TN representation of the RG flows ([40, 61,62,63,64,65], as recently reviewed in [66]).

These results from the quantum information community fueled the search for better algorithms to variationally optimize the MPS tensors in order to target specific states [67]. In this broader scenario, DMRG can be seen as an alternating-least-squares optimization method. Alternative methods include imaginary-time evolution from an initial state encoded as an MPS, which is the basis of the time-evolving block decimation (TEBD) [68,69,70,71], and the time-dependent variational principle for MPS [72]. Note that these two schemes can also be generalized to simulate the short out-of-equilibrium evolution of a slightly entangled state. MPS has been used beyond ground states, for example, in the context of finite-temperature and low-energy excitations based on the MPS or its transfer matrix [61, 73,74,75,76,77].

MPS has further been used to characterize states violating the area law of entanglement, such as ground states of critical systems and ground states of Hamiltonians with long-range interactions [56, 78,79,80,81,82,83,84,85,86].

The relevance of MPS goes far beyond their use as a numerical ansatz. Numerous analytical studies have led to exact MPS solutions, such as the Affleck–Kennedy–Lieb–Tasaki (AKLT) state [87, 88] and its higher-spin/higher-dimensional generalizations [39, 44, 89,90,91,92]. MPS has also been crucial in understanding the classification of topological phases in 1D [93]. Here we will not discuss these important results; we will focus on numerical applications, even though the theory of MPS is still in full development and new fields constantly emerge, such as the application of MPS to 1D quantum field theories [94].

1.2 Tensor Network States in Two Dimensions

The simulation of two-dimensional (2D) systems, where analytical solutions are extremely rare and mean-field approximations often fail to capture the long-range fluctuations, is much more complicated and tricky. For numerical simulations, exact diagonalization can only access small systems, and quantum Monte Carlo (QMC) approaches are hindered by the notorious “negative sign” problem for frustrated spin models and fermionic models away from half-filling, which causes an exponential increase of the computing time with the number of particles [95, 96].

While very elegant and extremely powerful for 1D models, the 2D version of DMRG [97,98,99,100] suffers several severe restrictions. The ground state obtained by DMRG is an MPS, which is essentially a 1D state representation satisfying the 1D area law of entanglement entropy [52, 53, 55, 101]. However, due to the lack of alternative approaches, 2D DMRG is still one of the most important 2D algorithms, and it has produced a large number of astonishing works, including numerical evidence of a quantum spin liquid [102,103,104] on the kagomé lattice (see, e.g., [105,106,107,108,109,110]).

Besides directly using DMRG in 2D, another natural way is to extend the MPS representation, leading to the tensor product state [111], or projected entangled pair state (PEPS) [112, 113]. While an MPS is made up of tensors aligned in a 1D chain, a PEPS is formed by tensors located on a 2D lattice, forming a 2D TN. Thus, PEPS can be regarded as one type of 2D tensor network state (TNS). Note that the work of Affleck et al. [114] can be considered a prototype of PEPS.

The network structure of the PEPS allows us to construct 2D states that strictly fulfill the area law of entanglement entropy [115]. This indicates that PEPS can efficiently represent 2D gapped states, and even critical and topological states, with only finite bond dimensions. Examples include the resonating valence bond states [115,116,117,118,119] originally proposed by Anderson et al. for superconductivity [120,121,122,123,124], the string-net states [125,126,127] proposed by Wen et al. for gapped topological orders [128,129,130,131,132,133,134], and so on.

The network structure makes PEPS so powerful that it can encode difficult computational problems, including non-deterministic polynomial (NP) hard ones [115, 135, 136]. What is even more important for physics is that PEPS provides an efficient representation as a variational ansatz for calculating the ground states of 2D models. However, obeying the area law comes at a cost: the computational complexity rises [115, 135, 137]. For instance, after having determined the ground state (either by construction or variation), one usually wants to extract the physical information by computing, e.g., energies, order parameters, or entanglement. For an MPS, most of these tasks are matrix manipulations and products, which can easily be done on classical computers. For a PEPS, one needs to contract a TN stretching over a 2D plane, which in general cannot be done exactly or even efficiently. The reason for this complexity is precisely what gives PEPS its physical advantage: the network structure. Thus, algorithms to compute TN contractions need to be developed.

Beyond dealing with PEPS, TN provides a general approach to problems whose cost functions are written as the contraction of a TN. A cost function is usually a scalar function whose maximum or minimum gives the solution of the targeted optimization problem. For example, the cost function of a ground-state simulation can be the energy (e.g., [138, 139]); for finite-temperature simulations, it can be the partition function or free energy (e.g., [140, 141]); for dimension-reduction problems, it can be the truncation error or the distance between the states before and after the reduction (e.g., [69, 142, 143]); for supervised machine learning problems, it can be the accuracy (e.g., [144]). TN can then be generally considered as a specific mathematical structure for the parameters of the cost function.
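As a toy illustration of a cost function that takes the form of a TN contraction (our own example, not drawn from the cited works), the partition function of a periodic 1D classical Ising chain is the trace of a ring of identical 2×2 transfer-matrix tensors:

```python
import numpy as np

# Partition function Z of a periodic 1D Ising chain with coupling J at
# inverse temperature beta, as the trace of a product of transfer matrices.
beta, J, N = 0.7, 1.0, 12
T = np.array([[np.exp(beta * J), np.exp(-beta * J)],
              [np.exp(-beta * J), np.exp(beta * J)]])
Z_tn = np.trace(np.linalg.matrix_power(T, N))  # contract the ring of tensors

# Cross-check against the eigenvalue form Z = lam1**N + lam2**N,
# with lam1 = 2*cosh(beta*J) and lam2 = 2*sinh(beta*J).
Z_exact = (2 * np.cosh(beta * J)) ** N + (2 * np.sinh(beta * J)) ** N
print(np.isclose(Z_tn, Z_exact))  # True
```

In 2D the same construction yields a genuinely two-dimensional TN for the partition function, which is exactly the kind of contraction the algorithms below are designed to approximate.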

Before reaching the TN algorithms, a few more things are worth mentioning. MPS and PEPS are not the only TN representations in one or higher dimensions. As a generalization of PEPS, the projected entangled simplex state was proposed, where certain redundancy in the local entanglement is integrated out to reach a better efficiency [145, 146]. Besides chains and 2D lattices, TNs can be defined on other geometries, such as trees or fractals. The tree TNS is one example with non-trivial properties and applications [39, 89, 147,148,149,150,151,152,153,154,155,156,157,158]. Another example is the multi-scale entanglement renormalization ansatz (MERA) proposed by Vidal [159,160,161,162,163,164,165,166,167], which is a powerful tool especially for studying critical systems [168,169,170,171,172,173] and AdS/CFT theories ([174,175,176,177,178,179,180]; see [181] for a general introduction to CFT). TN has also been applied to compute exotic properties of physical models on fractal lattices [182, 183].

The second thing concerns the fact that some TNs can indeed be contracted exactly. The tree TN is one example, since a tree graph contains no loops. This may be the reason that a tree TNS can only sustain a finite correlation length [151] and thus cannot efficiently access criticality in two dimensions. MERA modifies the tree in a brilliant way, so that criticality can be accessed without giving up the exactly contractible structure [164]. Other exactly contractible examples have also been found, where the exact contractibility is due not to the geometry but to algebraic properties of the local tensors [184, 185].

Thirdly, TN can represent operators, usually dubbed TN operators. Generally speaking, a TN state can be considered as a linear map from the physical Hilbert space to a scalar given by the contraction of tensors. A TN operator is correspondingly a map from the bra to the ket Hilbert space. Many algorithms explicitly employ the TN operator form, including the matrix product operator (MPO) for representing 1D many-body operators and mixed states and for simulating 1D systems in and out of equilibrium [186,187,188,189,190,191,192,193,194,195,196], the tensor product operator (also called projected entangled pair operator) for higher-dimensional systems [140, 141, 143, 197,198,199,200,201,202,203,204,205,206], and the multi-scale entanglement renormalization ansatz [207,208,209].
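As a hedged illustration of the MPO idea (a standard textbook-style construction; the tiny system size and all variable names are our own choices), the 1D transverse-field Ising Hamiltonian H = -J Σ σᶻσᶻ - h Σ σˣ can be written as a product of identical 3×3 “matrices of operators,” one per site:

```python
import numpy as np

# MPO sketch for H = -J * sum sz.sz - h * sum sx on N sites (open chain).
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2, Z2 = np.eye(2), np.zeros((2, 2))
J, h, N = 1.0, 0.5, 4

# One 3x3 matrix of 2x2 operators per site; the boundary vectors select
# the bottom row on the left end and the left column on the right end.
W = np.array([[I2,      Z2,      Z2],
              [sz,      Z2,      Z2],
              [-h * sx, -J * sz, I2]])          # shape (3, 3, 2, 2)
lvec, rvec = np.array([0., 0., 1.]), np.array([1., 0., 0.])

# Contract the MPO chain into the full 2**N x 2**N matrix (tiny N only).
op = np.einsum('a,abij->bij', lvec, W)          # site 1
for _ in range(N - 1):                          # sites 2..N
    m = op.shape[1]
    op = np.einsum('aij,abkl->bikjl', op, W).reshape(3, 2 * m, 2 * m)
H_mpo = np.einsum('a,aij->ij', rvec, op)

# Cross-check against the direct sum of Kronecker products.
H_ref = np.zeros((2 ** N, 2 ** N))
for n in range(N):
    ops = [I2] * N
    ops[n] = -h * sx
    term = ops[0]
    for o in ops[1:]:
        term = np.kron(term, o)
    H_ref += term
for n in range(N - 1):
    ops = [I2] * N
    ops[n], ops[n + 1] = sz, sz
    term = ops[0]
    for o in ops[1:]:
        term = np.kron(term, o)
    H_ref += -J * term
print(np.allclose(H_mpo, H_ref))  # True
```

The same Hamiltonian thus needs only one small tensor per site in MPO form, while the explicit matrix grows as 2^N × 2^N.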

1.3 Tensor Renormalization Group and Tensor Network Algorithms

Since most TNs cannot be contracted exactly (the computational complexity being #P-complete [136]), efficient algorithms are strongly desired. In 2007, Levin and Nave generalized the NRG idea to TN and proposed the tensor renormalization group (TRG) approach [142]. TRG consists of two main steps in each RG iteration: contraction and truncation. In the contraction step, the TN is deformed by singular value decomposition (SVD) of matrices in such a way that certain adjacent tensors can be contracted without changing the geometry of the TN graph. This procedure reduces the number of tensors from N to N/ν, with ν > 1 an integer that depends on the way of contracting. After reaching the fixed point, one tensor represents in effect the contraction of an infinite number of original tensors, which can be seen as an approximation of the whole TN.

After each contraction, the dimensions of the local tensors increase exponentially, so truncations are needed. To truncate in an optimized way, one should consider the “environment,” a concept which first appears in DMRG and is crucially important in TRG-based schemes for determining how optimal the truncations are. In the truncation step of Levin’s TRG, one only keeps the basis states corresponding to the χ largest singular values from the SVD in the contraction step, with χ called the dimension cut-off. In other words, the environment of this truncation is just the tensor that is decomposed by SVD. Such a local environment only permits local optimization of the truncations, which hinders the accuracy of Levin’s TRG for systems with long-range fluctuations. Nevertheless, TRG is still one of the most important and computationally cheap approaches for both classical (e.g., Ising and Potts models) and quantum (e.g., Heisenberg models) simulations in two and higher dimensions [184, 210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227]. It is worth mentioning that for 3D classical models, the accuracy of the TRG algorithms has surpassed that of other methods [221, 225], such as QMC. Following the contraction-and-truncation idea, further developments of the TN contraction algorithms concern mainly two aspects: more reasonable ways of contracting and more optimized ways of truncating.
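The truncation step above can be sketched in a few lines (a generic toy example with made-up dimensions, not the full TRG deformation): a four-index tensor is reshaped into a matrix, split by SVD, and only the χ largest singular values are kept.

```python
import numpy as np

# Hedged sketch of the SVD truncation described above, on a random
# four-index tensor with made-up dimensions (not a full TRG iteration).
np.random.seed(1)
D, chi = 4, 8
T = np.random.rand(D, D, D, D)
M = T.reshape(D * D, D * D)          # group the indices pairwise

U, S, Vh = np.linalg.svd(M, full_matrices=False)
M_trunc = U[:, :chi] @ np.diag(S[:chi]) @ Vh[:chi, :]   # keep chi largest

# By the Eckart-Young theorem, the Frobenius-norm truncation error equals
# the root of the sum of the discarded squared singular values.
err = np.linalg.norm(M - M_trunc)
bound = np.sqrt(np.sum(S[chi:] ** 2))
print(np.isclose(err, bound))  # True
```

The discarded singular values therefore quantify exactly the local truncation error; what they do not capture is the effect of the truncation on the rest of the network, which is why non-local environments improve the accuracy.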

While Levin’s TRG “coarse-grains” the TN in an exponential way (the number of tensors decreases exponentially with the renormalization steps), Vidal’s TEBD scheme [68,69,70,71] implements the TN contraction with the help of an MPS in a linearized way [189]. Then, instead of using the singular values of local tensors, one uses the entanglement of the MPS to find the optimal truncation, meaning the environment is a (non-local) MPS, which leads to a better precision than Levin’s TRG. In this case, the MPS at the fixed point is the dominant eigenstate of the transfer matrix of the TN. Another group of TRG algorithms, called the corner transfer matrix renormalization group (CTMRG) [228], is based on the corner transfer matrix idea originally proposed by Baxter in 1978 [229] and developed by Nishino and Okunishi in 1996 [16]. In CTMRG, the contraction reduces the number of tensors in a polynomial way, and the environment can be considered as a finite MPS defined on the boundary. CTMRG has an accuracy comparable to that of TEBD.
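The fixed-point statement above is, at its core, a power-iteration argument. A toy version, with a plain entrywise-positive matrix standing in for the (MPS-compressed) transfer matrix, might look like:

```python
import numpy as np

# Power iteration on a random entrywise-positive matrix E (a stand-in for
# the TN transfer matrix): repeated application of E converges to its
# dominant eigenvector, the analogue of the boundary-MPS fixed point.
np.random.seed(2)
E = np.random.rand(9, 9)
v = np.random.rand(9)
for _ in range(200):
    v = E @ v
    v /= np.linalg.norm(v)           # normalize at every step

# Compare with the dominant eigenvector from full diagonalization.
w, V = np.linalg.eig(E)
v_exact = np.real(V[:, np.argmax(np.abs(w))])
v_exact /= np.linalg.norm(v_exact)
print(np.allclose(np.abs(v), np.abs(v_exact), atol=1e-8))  # True
```

In TEBD the iterated object is an MPS rather than a dense vector, so each application of the transfer matrix must be followed by a truncation back to the working bond dimension.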

Within a given way of contracting, there is still high flexibility in choosing the environment, i.e., the reference used to optimize the truncations. For example, in Levin’s TRG and its variants [142, 210,211,212, 214, 221], the truncations are optimized with local environments. The second renormalization group proposed by Xie et al. [221, 230] employs TRG to take the whole TN as the environment.

Beyond the contraction of TNs, the concept of environment becomes even more important in the TNS update algorithms, where the central task is to optimize the tensors to minimize the cost function. According to the environment, the TNS update algorithms are categorized as simple [141, 143, 210, 221, 231, 232], cluster [141, 231, 233, 234], and full update [221, 228, 230, 235,236,237,238,239,240]. The simple update uses local environments and hence has the highest efficiency but limited accuracy. The full update considers the whole TN as the environment and thus has a high accuracy. Despite this better treatment of the environment, one drawback of the full update schemes is their expensive computational cost, which strongly limits the dimensions of the tensors one can keep. The cluster update is a compromise between the simple and full updates, where one considers a reasonable subsystem as the environment to balance efficiency and precision.

It is worth mentioning that TN encoding schemes are found to bear close relations to techniques in multi-linear algebra (MLA), also known as tensor decompositions or tensor algebra; see the review [241]. MLA was originally targeted at developing higher-order generalizations of linear algebra (e.g., the higher-order versions of the singular value or eigenvalue decomposition [242,243,244,245]), and has now been successfully used in a large number of fields, including data mining (e.g., [246,247,248,249,250]), image processing (e.g., [251,252,253,254]), machine learning (e.g., [255]), and so on. The interesting connections between the fields of TN and MLA (for example, between the tensor-train decomposition [256] and the matrix product state representation) open a new paradigm for interdisciplinary research covering a huge range of sciences.
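To make the MLA connection tangible, here is a hedged sketch of the tensor-train decomposition [256] with made-up tensor dimensions: sequential SVDs split a tensor into a “train” of three-index cores, exactly the structure of an MPS. No singular values are dropped here, so the reconstruction is exact; truncating them would give compression.

```python
import numpy as np

# Tensor-train (TT) decomposition of a random tensor by sequential SVDs.
# The shape is made up for illustration; since nothing is truncated,
# the factorization reproduces the tensor exactly.
np.random.seed(3)
shape = (2, 3, 4, 3)
X = np.random.rand(*shape)

cores, rank, M = [], 1, X
for d in shape[:-1]:
    M = M.reshape(rank * d, -1)              # split off one physical index
    U, S, Vh = np.linalg.svd(M, full_matrices=False)
    cores.append(U.reshape(rank, d, -1))     # three-index TT core
    rank = S.size
    M = np.diag(S) @ Vh                      # carry the remainder rightward
cores.append(M.reshape(rank, shape[-1], 1))

# Contract the cores back together and verify the reconstruction.
Y = cores[0]
for C in cores[1:]:
    Y = np.tensordot(Y, C, axes=([-1], [0]))
Y = Y.reshape(shape)                          # boundary bond dims are 1
print(np.allclose(X, Y))  # True
```

Up to naming conventions, the list of cores is precisely an open-boundary MPS for the tensor X.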

1.4 Organization of Lecture Notes

Our lecture notes are organized as follows. In Chap. 2, we will introduce the basic concepts and definitions of tensors and TN states/operators, as well as their graphical representations. Several frequently used architectures of TN states will be introduced, including the matrix product state, the tree TN state, and the PEPS. Then the general form of TN, the gauge degrees of freedom, and the relations to quantum entanglement will be discussed. Three special types of TNs that can be contracted exactly will be exemplified at the end of this chapter.

In Chap. 3, the contraction algorithms for 2D TNs will be reviewed. We will start with several physical problems that can be transformed into 2D TN contractions, including the statistics of classical models, the evaluation of observables of TN states, and the ground-state/finite-temperature simulations of 1D quantum models. Three paradigmatic algorithms, namely TRG, TEBD, and CTMRG, will be presented. These algorithms will be further discussed from the perspective of exactly contractible TNs.

In Chap. 4, we will concentrate on the PEPS algorithms for simulating the ground states of 2D quantum lattice models. Two general schemes will be explained: the variational approaches and the imaginary-time evolution. According to the choice of environment for updating the tensors, we will explain the simple, cluster, and full update algorithms. Particularly in the full update, the contraction algorithms for 2D TNs presented in Chap. 3 play a key role in computing the non-local environments.

In Chap. 5, a special topic on the underlying relations between the TN methods and MLA will be given. We will start from the canonicalization of MPS in one dimension and then generalize to the super-orthogonalization of PEPS in higher dimensions. The super-orthogonalization, which gives the optimal approximation of a tree PEPS, in fact extends the Tucker decomposition from a single tensor to a tree TN. Then the relation between the contraction of tree TNs and the rank-1 decomposition will be discussed, which further leads to the “zero-loop” approximation of PEPS on regular lattices. Finally, we will revisit the infinite DMRG (iDMRG), infinite TEBD (iTEBD), and infinite CTMRG in a unified picture indicated by the tensor ring decomposition, a higher-rank extension of the rank-1 decomposition.

In Chap. 6, we will revisit the TN simulation of quantum lattice models from the ideas explained in Chap. 5. Such a perspective, dubbed quantum entanglement simulation (QES), provides a unified picture for simulating one- and higher-dimensional quantum models at both zero [234, 257] and finite [258] temperatures. The QES implies an efficient way of investigating infinite-size many-body systems by simulating few-body models on classical computers or artificial quantum platforms. In Chap. 7, a brief summary is given.

As TN provides a fundamental language and an efficient tool for a huge range of subjects, and the field is advancing at an extremely fast pace, we cannot cover all the related progress in this review. We will concentrate on the algorithms for TN contractions and the closely related applications. The topics that are not discussed or are only briefly mentioned in this review include: the hybridization of TN with other methods, such as the density functional theories and ab initio calculations in quantum chemistry [259,260,261,262,263,264,265,266,267,268], the dynamical mean-field theory [269,270,271,272,273,274,275,276,277,278], and the expansion/perturbation theories [274, 279,280,281,282,283,284]; the TN algorithms that are less related to the contraction problems, such as the time-dependent variational principle [72, 285], the variational TN state methods [76, 240, 286,287,288,289,290,291], and so on; the TN methods for interacting fermions [167, 266, 292,293,294,295,296,297,298,299,300,301,302,303,304,305,306], quantum field theories [307,308,309,310,311,312,313], topological states and exotic phenomena in many-body systems (e.g., [105, 106, 108, 110, 116,117,118,119, 125, 126, 306, 314,315,316,317,318,319,320,321,322,323,324,325,326,327,328,329]), open/dissipative systems [186, 190,191,192, 194, 330,331,332,333,334], quantum information and quantum computation [44, 335,336,337,338,339,340,341,342,343,344], machine learning [144, 345,346,347,348,349,350,351,352,353,354,355,356,357,358,359,360], and other classical computational problems [361,362,363,364,365,366]; the TN theories/algorithms with non-trivial statistics and symmetries [125,126,127, 303, 309, 314, 319, 367,368,369,370,371,372,373,374,375,376,377,378,379,380]; and several latest improvements of the TN algorithms for higher efficiency and accuracy [236, 239, 381,382,383,384,385].