# Tensor Spaces

Wolfgang Hackbusch

Chapter. Part of the Springer Series in Computational Mathematics book series (SSCM, volume 49).

## Abstract

As with fully populated matrices, tensors are mathematical objects involving such a large amount of data that naive representations must fail. Sparse representations of tensors are an active field of research (see Hackbusch [132]). Here, we restrict the discussion to tensor techniques which make use of hierarchical matrices. General vector spaces $$V_j\ (1 \le j \le d)$$ can be used to form the tensor space $$V = \bigotimes_{j=1}^{d} V_j$$. Concerning the vector spaces $$V_j$$, we discuss two cases in Section 16.1: finite-dimensional model vector spaces $$V_j = \mathbb{R}^{I_j}$$ (cf. §16.1.2), and matrix spaces $$V_j = \mathbb{R}^{I_j \times J_j}$$ (cf. §16.1.3). In the latter case, the tensor product is also called a Kronecker matrix. The crucial problem is the sparse representation of tensors and their approximation; see Section 16.2. In §16.2.1, we discuss the r-term representation. For Kronecker products, the r-term representation can be combined with the hierarchical matrix format, resulting in the “HKT representation” (cf. §16.2.5). In Section 16.3 we present two applications. The first concerns an integral operator giving rise to a representation by (Kronecker) tensors of order d = 2. The second application shows that the tensor approach can be used to solve differential equations in a high number of spatial variables, $$d \gg 2$$. The latter application is based on a stable r-term approximation constructed using exponential sums for the function 1/x. In general, the r-term approximation is not easy to apply to tensors of order d ≥ 3. Better tensor representations are described in Hackbusch [132, 133].
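The exponential-sum idea mentioned above can be sketched numerically. Writing $$1/x = \int_0^\infty e^{-xt}\,dt$$, substituting $$t = e^s$$, and applying the trapezoidal rule with step h yields an exponential sum $$1/x \approx \sum_k h\,e^{kh}\,e^{-x e^{kh}}$$. The following minimal Python sketch uses this simple quadrature-based construction, not the optimised exponential sums referred to in the chapter; the parameters h and K are illustrative choices:

```python
import math

def exp_sum_inverse(x, h=0.2, K=100):
    """Approximate 1/x by an exponential sum.

    Based on 1/x = integral_0^inf exp(-x*t) dt; substituting t = e^s
    and applying the trapezoidal rule with step h gives
        1/x ~ sum_k h * e^{kh} * exp(-x * e^{kh}),
    truncated here to k = -K..K (quadrature-based sketch only).
    """
    total = 0.0
    for k in range(-K, K + 1):
        t = math.exp(k * h)          # quadrature node t_k = e^{kh}
        total += h * t * math.exp(-x * t)  # weight w_k = h * e^{kh}
    return total

for x in (1.0, 5.0, 10.0):
    approx = exp_sum_inverse(x)
    rel_err = abs(approx - 1.0 / x) * x
    print(f"x={x:5.1f}  approx={approx:.12f}  rel.err={rel_err:.2e}")
```

Because the integrand decays exponentially after the substitution, the quadrature error decreases exponentially in 1/h, which is why a stable r-term approximation with moderate r is possible at all.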

The tensor applications in this chapter concern matrices, since this is the subject of the monograph. In general, tensor approximations are directed to “vectors” represented by tensors. In the context of hierarchical matrices, it is assumed that vectors in $$\mathbb{R}^n$$ can be treated in full format. However, considering a boundary value problem in a cube $$[0,1]^3$$ discretised by $$N \times N \times N$$ grid points with $$N = 10^6$$, the size of the grid function (“vector”) is $$n = N^3 = 10^{18}$$ and a direct approach is difficult. Regarding such a vector as a tensor of order d = 3, there may be good tensor approximations reducing the data size to $$\mathcal{O}(\log n)$$ (cf. [132, §3.2], [133, §12]).
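The storage gain behind this remark can be made concrete by a quick count: an r-term tensor of order d over N-point grids stores r·d·N numbers instead of N^d. A small sketch with the grid size from the text; the rank r = 10 is a hypothetical value chosen purely for illustration:

```python
# Storage count for a grid function on an N x N x N grid, stored either
# as a full vector of length n = N^3 or as an r-term tensor of order
# d = 3 (r summands, each a product of d vectors of length N).
N = 10**6          # grid points per direction, as in the text
d = 3              # tensor order
r = 10             # hypothetical representation rank (illustrative)

full = N**d        # 10^18 entries: infeasible to store directly
r_term = r * d * N # entries in the r-term representation

print(f"full vector : {full:.1e} entries")
print(f"r-term      : {r_term:.1e} entries")
print(f"compression : {full / r_term:.1e}x")
```

Even this crude count already shows why the "vector as tensor" viewpoint pays off: the r-term storage grows linearly in N and d rather than exponentially in d.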

The tensor approximations are likewise based on low-rank approximations (involving different notions of rank!), but these are global approximations, in contrast to the local low-rank approximations used for hierarchical matrices.
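For d = 2 this global rank structure is easy to see: a Kronecker r-term matrix $$M = \sum_{k=1}^{r} A_k \otimes B_k$$ is stored and applied via the factors alone, without ever forming M. A NumPy sketch with random factors (sizes and rank are purely illustrative; the helper `apply_kron_rterm` is not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 4, 3, 2   # illustrative sizes; r is the Kronecker rank

# r-term Kronecker format: store only the factors A_k (n x n), B_k (m x m)
A = [rng.standard_normal((n, n)) for _ in range(r)]
B = [rng.standard_normal((m, m)) for _ in range(r)]

def apply_kron_rterm(A, B, x):
    """Apply M = sum_k A_k (x) B_k to x without forming the (nm)x(nm) matrix.

    Uses the identity (A (x) B) vec(X) = vec(A X B^T), where vec is the
    row-major flattening matching NumPy's default reshape order.
    """
    X = x.reshape(A[0].shape[0], B[0].shape[0])
    return sum(Ak @ X @ Bk.T for Ak, Bk in zip(A, B)).reshape(-1)

x = rng.standard_normal(n * m)

# Reference: build the full matrix explicitly and compare
M = sum(np.kron(Ak, Bk) for Ak, Bk in zip(A, B))
assert np.allclose(M @ x, apply_kron_rterm(A, B, x))

# Storage: 2r factor matrices versus one (nm)^2 matrix
print("factors:", r * (n * n + m * m), "entries;  full:", (n * m) ** 2)
```

The HKT format mentioned in the abstract goes one step further and represents the factors themselves as hierarchical matrices, compressing both levels at once.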


## Copyright information

© Springer-Verlag Berlin Heidelberg 2015

## Authors and Affiliations

• Wolfgang Hackbusch, MPI für Mathematik in den Naturwissenschaften, Leipzig, Germany