
Part of the book series: Springer Series in Computational Mathematics (SSCM, volume 49)

Abstract

Similar to fully populated matrices, tensors are mathematical objects with such a large amount of data that naive representations must fail. Sparse representations of tensors are an active field of research (see Hackbusch [132]). Here, we restrict the discussion to tensor techniques which make use of hierarchical matrices. General vector spaces \( V_j \) \( (1 \le j \le d) \) can be used to form the tensor space \( V = \bigotimes_{j=1}^{d} V_j \). Concerning the vector spaces \( V_j \), we discuss two cases in Section 16.1: finite-dimensional model vector spaces \( V_j = \mathbb{R}^{I_j} \) (cf. §16.1.2), and matrix spaces \( V_j = \mathbb{R}^{I_j \times J_j} \) (cf. §16.1.3). In the latter case, the tensor product is also called a Kronecker matrix. The crucial problem is the sparse representation of tensors and their approximation, see Section 16.2. In §16.2.1, we discuss the r-term representation. For Kronecker products, the r-term representation can be combined with the hierarchical matrix format, resulting in the “HKT representation” (cf. §16.2.5). In Section 16.3 we present two applications. The first concerns an integral operator giving rise to a representation by (Kronecker) tensors of order \( d = 2 \). The second application shows that the tensor approach can be used to solve differential equations in a high number of spatial variables \( d \gg 2 \). The latter application is based on a stable r-term approximation constructed using exponential sums for the function \( 1/x \). In general, the r-term approximation is not easy to apply to tensors of order \( d \ge 3 \). Better tensor representations are described in Hackbusch [132, 133].
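As a minimal illustration (not taken from the book; Python/NumPy is assumed and the function names are purely illustrative), the following sketch forms a tensor from its r-term representation \( v = \sum_{k=1}^{r} v_1^{(k)} \otimes \cdots \otimes v_d^{(k)} \) and the analogous Kronecker-matrix sum \( M = \sum_{k=1}^{r} A_1^{(k)} \otimes \cdots \otimes A_d^{(k)} \). The point is that the r-term data consist of only r·d·n numbers, while the full tensor has n^d entries.

import numpy as np
from functools import reduce

def r_term_tensor(factors):
    """factors[k][j] is v_j^(k); returns sum_k v_1^(k) x ... x v_d^(k) as a full array."""
    return sum(reduce(np.multiply.outer, vecs) for vecs in factors)

def r_term_kronecker(matrix_factors):
    """matrix_factors[k][j] is A_j^(k); returns sum_k A_1^(k) kron ... kron A_d^(k)."""
    return sum(reduce(np.kron, mats) for mats in matrix_factors)

rng = np.random.default_rng(0)
d, r, n = 3, 2, 4
vec_factors = [[rng.standard_normal(n) for _ in range(d)] for _ in range(r)]
v = r_term_tensor(vec_factors)
print(v.shape)   # (4, 4, 4): the full tensor has n**d = 64 entries
# The r-term data comprise only r*d*n = 24 numbers.

mat_factors = [[rng.standard_normal((n, n)) for _ in range(d)] for _ in range(r)]
M = r_term_kronecker(mat_factors)
print(M.shape)   # (64, 64): a Kronecker matrix of order d = 3

In an HKT representation the factors A_j^(k) would themselves be stored as hierarchical matrices; dense blocks are used here only to keep the sketch self-contained.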

The tensor applications in this chapter concern matrices, since this is the subject of the monograph. In general, tensor approximation is aimed at “vectors” represented by tensors. In the context of hierarchical matrices, it is assumed that vectors in \( \mathbb{R}^n \) can be treated in full format. However, considering a boundary value problem in a cube \( [0,1]^3 \) discretised by \( N \times N \times N \) grid points with \( N = 10^6 \), the size of the grid function (“vector”) is \( n = N^3 = 10^{18} \), and a direct approach is difficult. Regarding such a vector as a tensor of order \( d = 3 \), there may be good tensor approximations reducing the data size to \( \mathcal{O}(\log n) \) (cf. [132, §3.2], [133, §12]).
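To make the size reduction concrete, here is a hedged sketch (again not from the book) of a grid function on an \( N \times N \times N \) grid stored as an r-term tensor: only r·3·N numbers are kept, and individual entries of the length-\( N^3 \) “vector” are evaluated on demand. A small N is used so that the demo runs; for \( N = 10^6 \) as above, the stored data would still amount to only r·3·N ≈ 1.5·10^7 numbers instead of \( N^3 = 10^{18} \).

import numpy as np

N, r = 1000, 5
rng = np.random.default_rng(1)
# factors[k] = (u_k, v_k, w_k), the k-th rank-1 term u_k x v_k x w_k
factors = [tuple(rng.standard_normal(N) for _ in range(3)) for _ in range(r)]

def entry(i, j, l):
    """Entry (i, j, l) of the grid function sum_k u_k[i] * v_k[j] * w_k[l]."""
    return sum(u[i] * v[j] * w[l] for (u, v, w) in factors)

print(entry(0, 1, 2))   # the length-N**3 vector is never formed explicitly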

Tensor approximations are also based on low-rank approximations (involving different notions of rank!), but these are global approximations, in contrast to the local low-rank approximations used for hierarchical matrices.
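The distinction can be illustrated by a crude two-level toy example (an assumption-laden sketch, not the hierarchical-matrix algorithm of this book): a global approximation applies one truncated SVD to the whole matrix, whereas a local approximation truncates each block of a partition separately. An actual hierarchical matrix would use an admissible block partition and keep near-diagonal blocks in full format.

import numpy as np

def truncate(block, k):
    """Best rank-k approximation of a block via its singular value decomposition."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

A = 1.0 / (np.abs(np.subtract.outer(np.arange(64), np.arange(64))) + 1.0)

A_global = truncate(A, k=8)                               # one global rank-8 factorisation
A_local = np.block([[truncate(A[i:i+32, j:j+32], k=4)     # rank 4 per 32 x 32 block
                     for j in (0, 32)] for i in (0, 32)])

print(np.linalg.norm(A - A_global), np.linalg.norm(A - A_local))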



Copyright information

© 2015 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Hackbusch, W. (2015). Tensor Spaces. In: Hierarchical Matrices: Algorithms and Analysis. Springer Series in Computational Mathematics, vol 49. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-47324-5_16
