Abstract
This work proposes the extended functional tensor train (EFTT) format for compressing and working with multivariate functions on tensor product domains. Our compression algorithm combines tensorized Chebyshev interpolation with a low-rank approximation algorithm that is based entirely on function evaluations. Compared to existing methods based on the functional tensor train format, the adaptivity of our approach often reduces the required storage, sometimes considerably, while achieving the same accuracy. In particular, we reduce the number of function evaluations required to achieve a prescribed accuracy by over \(96\%\) compared to the algorithm from Gorodetsky et al. (Comput. Methods Appl. Mech. Eng. 347, 59–84, 2019).
References
Ali, M., Nouy, A.: Approximation with tensor networks. Part I: Approximation spaces. arXiv:2007.00118 (2020)
Ali, M., Nouy, A.: Approximation with tensor networks. Part II: Approximation rates for smoothness classes. arXiv:2007.00128 (2020)
Ali, M., Nouy, A.: Approximation with tensor networks. Part III: Multivariate approximation. arXiv:2101.11932 (2021)
An, J., Owen, A.: Quasi-regression. J. Complexity 17, 588–607 (2001)
Aurentz, J.L., Trefethen, L.N.: Chopping a Chebyshev series. ACM Trans. Math. Software 43, 1–21 (2017)
Bachmayr, M., Cohen, A.: Kolmogorov widths and low-rank approximations of parametric elliptic PDEs. Math. Comp. 86, 701–724 (2017)
Bachmayr, M., Nouy, A., Schneider, R.: Approximation by tree tensor networks in high dimensions: Sobolev and compositional functions. arXiv:2112.01474 (2021)
Ballani, J., Grasedyck, L., Kluge, M.: Black box approximation of tensors in hierarchical Tucker format. Linear Algebra Appl. 438, 639–657 (2013)
Ballester-Ripoll, R., Paredes, E.G., Pajarola, R.: Sobol tensor trains for global sensitivity analysis. Reliab. Eng. Syst. Saf. 183, 311–322 (2019)
Bebendorf, M.: Approximation of boundary element matrices. Numer. Math. 86, 565–589 (2000)
Bebendorf, M., Rjasanow, S.: Adaptive low-rank approximation of collocation matrices. Computing 70, 1–24 (2003)
Beylkin, G., Mohlenkamp, M.J.: Numerical operator calculus in higher dimensions. Proc. Natl. Acad. Sci. USA 99, 10246–10251 (2002)
Bigoni, D., Engsig-Karup, A.P., Marzouk, Y.M.: Spectral tensor-train decomposition. SIAM J. Sci. Comput. 38, A2405–A2439 (2016)
Boyd, J.P., Petschek, R.: The relationships between Chebyshev, Legendre and Jacobi polynomials: the generic superiority of Chebyshev polynomials and three important exceptions. J. Sci. Comput. 59, 1–27 (2014)
Bungartz, H.-J., Griebel, M.: Sparse grids. Acta Numer. 13, 147–269 (2004)
Chaturantabut, S., Sorensen, D.C.: Nonlinear model reduction via discrete empirical interpolation. SIAM J. Sci. Comput. 32, 2737–2764 (2010)
Chertkov, A., Ryzhakov, G., Oseledets, I.: Black box approximation in the tensor train format initialized by ANOVA decomposition. arXiv:2208.03380 (2022)
Clenshaw, C.W., Curtis, A.R.: A method for numerical integration on an automatic computer. Numer. Math. 2, 197–205 (1960)
Cortinovis, A., Kressner, D.: Low-rank approximation in the Frobenius norm by column and row subset selection. SIAM J. Matrix Anal. Appl. 41, 1651–1673 (2020)
Cortinovis, A., Kressner, D., Massei, S.: On maximum volume submatrices and cross approximation for symmetric semidefinite and diagonally dominant matrices. Linear Algebra Appl. 593, 251–268 (2020)
De Lathauwer, L., De Moor, B., Vandewalle, J.: A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 21, 1253–1278 (2000)
Dektor, A., Venturi, D.: Dynamically orthogonal tensor methods for high-dimensional nonlinear PDEs. J. Comput. Phys. 404, 103501 (2020)
Dektor, A., Venturi, D.: Tensor rank reduction via coordinate flows. arXiv:2207.11955 (2022)
Deshpande, A., Rademacher, L.: Efficient volume sampling for row/column subset selection. In: 51st Annu. IEEE Symp. Found. Comput. Sci. FOCS, pp. 329–338 (2010)
Dette, H., Pepelyshev, A.: Generalized Latin hypercube design for computer experiments. Technometrics 52, 421–429 (2010)
Dieterich, J., Hartke, B.: Empirical review of standard benchmark functions using evolutionary global optimization. Applied Math. 3 (2012)
Dolgov, S., Khoromskij, B.: Two-level QTT-Tucker format for optimized tensor calculus. SIAM J. Matrix Anal. Appl. 34, 593–623 (2013)
Dolgov, S., Kressner, D., Strössner, C.: Functional Tucker approximation using Chebyshev interpolation. SIAM J. Sci. Comput. 43, A2190–A2210 (2021)
Driscoll, T.A., Hale, N., Trefethen, L.N.: Chebfun Guide. Pafnuty Publications (2014)
Eigel, M., Gruhlke, R., Marschall, M.: Low-rank tensor reconstruction of concentrated densities with application to Bayesian inversion. Stat. Comput. 32, Paper No. 27 (2022)
Forrester, A.I.J., Sóbester, A., Keane, A.J.: Engineering design via surrogate modelling: a practical guide. John Wiley & Sons (2008)
Friedman, J.H.: Multivariate adaptive regression splines. Ann. Statist. 19, 1–141 (1991)
Gentleman, W.M.: Algorithm 424: Clenshaw-Curtis quadrature [d1]. Commun. ACM 15, 353–355 (1972)
Genz, A.: A package for testing multiple integration subroutines. In: Keast, P., Fairweather, G. (eds.) Numerical Integration, pp. 337–340. Springer, NATO ASI Series (1987)
Goreinov, S.A., Tyrtyshnikov, E.E., Zamarashkin, N.L.: A theory of pseudoskeleton approximations. Linear Algebra Appl. 261, 1–21 (1997)
Gorodetsky, A.: Continuous low-rank tensor decompositions, with applications to stochastic optimal control and data assimilation. PhD thesis, MIT, Cambridge, MA (2017)
Gorodetsky, A., Karaman, S., Marzouk, Y.: High-dimensional stochastic optimal control using continuous tensor decompositions. Int. J. Robot. Res. 37, 340–377 (2018)
Gorodetsky, A., Karaman, S., Marzouk, Y.: A continuous analogue of the tensor-train decomposition. Comput. Methods Appl. Mech. Eng. 347, 59–84 (2019)
Gramacy, R.B., Lee, H.K.H.: Adaptive design and analysis of supercomputer experiments. Technometrics 51, 130–145 (2009)
Grasedyck, L., Kressner, D., Tobler, C.: A literature survey of low-rank tensor approximation techniques. GAMM-Mitt. 36, 53–78 (2013)
Grelier, E., Nouy, A., Chevreuil, M.: Learning with tree-based tensor formats. arXiv:1811.04455 (2018)
Griebel, M., Harbrecht, H.: Analysis of tensor approximation schemes for continuous functions. Found. Comput. Math. 1–22 (2021)
Griebel, M., Harbrecht, H., Schneider, R.: Low-rank approximation of continuous functions in Sobolev spaces with dominating mixed smoothness. Math. Comp. 92, 1729–1746 (2023)
Haberstich, C.: Adaptive approximation of high-dimensional functions with tree tensor networks for uncertainty quantification. PhD thesis, École centrale de Nantes (2020)
Haberstich, C., Nouy, A., Perrin, G.: Active learning of tree tensor networks using optimal least squares. SIAM/ASA J. Uncertain. Quantif. 11, 848–876 (2023)
Hackbusch, W.: Tensor spaces and numerical tensor calculus. Springer Ser. Comput. Math., vol. 42. Springer (2012)
Hackbusch, W., Khoromskij, B.N.: Tensor-product approximation to operators and functions in high dimensions. J. Complexity 23, 697–714 (2007)
Hackbusch, W., Kühn, S.: A new scheme for the tensor representation. J. Fourier Anal. Appl. 15, 706–722 (2009)
Hashemi, B., Trefethen, L.N.: Chebfun in three dimensions. SIAM J. Sci. Comput. 39, C341–C363 (2017)
Jamil, M., Yang, X.-S.: A literature survey of benchmark functions for global optimisation problems. Int. J. Math. Model. Numer. Optim. 4, 150–194 (2013)
Khoromskaia, V., Khoromskij, B.N.: Tensor Numerical Methods in Quantum Chemistry. De Gruyter, Berlin (2018)
Khoromskij, B.N.: Structured rank-\((R_1,\dots, R_D)\) decomposition of function-related tensors in \(\mathbb{R} ^D\). Comput. Methods. Appl. Math. 6, 194–220 (2006)
Khoromskij, B.N.: Tensor numerical methods in scientific computing. Radon Ser. Comput. Appl. Math., vol. 19. De Gruyter, Berlin (2018)
Koepf, W.: Hypergeometric Summation. Adv. Lect. Math. Friedr. Vieweg & Sohn, Braunschweig (1998)
Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51, 455–500 (2009)
Konakli, K., Sudret, B.: Polynomial meta-models with canonical low-rank approximations: numerical insights and comparison to sparse polynomial chaos expansions. J. Comput. Phys. 321, 1144–1169 (2016)
Kressner, D., Tobler, C.: Krylov subspace methods for linear systems with tensor product structure. SIAM J. Matrix Anal. Appl. 31, 1688–1714 (2009)
Kressner, D., Tobler, C.: Low-rank tensor Krylov subspace methods for parametrized linear systems. SIAM J. Matrix Anal. Appl. 32, 1288–1316 (2011)
Martinsson, P.-G., Tropp, J.A.: Randomized numerical linear algebra: foundations and algorithms. Acta Numer. 29, 403–572 (2020)
Mason, J.C.: Near-best multivariate approximation by Fourier series, Chebyshev series and Chebyshev interpolation. J. Approx. Theory 28, 349–358 (1980)
Mason, J.C., Handscomb, D.C.: Chebyshev polynomials. Chapman and Hall/CRC (2002)
Michel, B., Nouy, A.: Learning with tree tensor networks: complexity estimates and model selection. Bernoulli 28, 910–936 (2022)
Minster, R., Saibaba, A.K., Kilmer, M.E.: Randomized algorithms for low-rank tensor decompositions in the Tucker format. SIAM J. Math. Data Sci. 2, 189–215 (2020)
Moon, H., Dean, A.M., Santner, T.J.: Two-stage sensitivity-based group screening in computer experiments. Technometrics 54, 376–387 (2012)
Olver, F.W.J., Lozier, D.W., Boisvert, R.F., Clark, C.W. (eds.): NIST handbook of mathematical functions. Cambridge University Press (2010)
Orús, R.: A practical introduction to tensor networks: matrix product states and projected entangled pair states. Ann. Phys. 349, 117–158 (2014)
Oseledets, I., Tyrtyshnikov, E.: TT-cross approximation for multidimensional arrays. Linear Algebra Appl. 432, 70–88 (2010)
Oseledets, I.V.: Tensor-train decomposition. SIAM J. Sci. Comput. 33, 2295–2317 (2011)
Osinsky, A.I.: Tensor trains approximation estimates in the Chebyshev norm. Comput. Math. Math. Phys. 59, 201–206 (2019)
Psenka, M., Boumal, N.: Second-order optimization for tensors with fixed tensor-train rank. arXiv:2011.13395 (2020)
Qin, Z., Lidiak, A., Gong, Z., Tang, G., Wakin, M.B., Zhu, Z.: Error analysis of tensor-train cross approximation. Adv. Neural Inf. Process. Syst. 35, 14236–14249 (2022)
Qing, A.: Dynamic differential evolution strategy and applications in electromagnetic inverse scattering problems. IEEE Trans. Geosci. Remote Sens. 44, 116–125 (2006)
Rahnamayan, S., Tizhoosh, H., Salama, M.: Opposition-based differential evolution (ODE) with variable jumping rate. In: IEEE Symp. Found. Comput. Intell. pp. 81–88 (2007)
Rahnamayan, S., Tizhoosh, H.R., Salama, M.M.A.: A novel population initialization method for accelerating evolutionary algorithms. Comput. Math. Appl. 53, 1605–1614 (2007)
Saibaba, A.K., Minster, R., Kilmer, M.E.: Efficient randomized tensor-based algorithms for function approximation and low-rank kernel interactions. Adv. Comput. Math. 48 (2022)
Sauter, S.A., Schwab, C.: Boundary element methods. Springer Ser. Comput. Math., vol. 39. Springer (2011)
Savostyanov, D., Oseledets, I.: Fast adaptive interpolation of multi-dimensional arrays in tensor train format. In: 7th Int. Workshop Multidimens. (nD) Syst. pp. 1–8 (2011)
Savostyanov, D.V.: Quasioptimality of maximum-volume cross interpolation of tensors. Linear Algebra Appl. 458, 217–244 (2014)
Schneider, R., Uschmajew, A.: Approximation rates for the hierarchical tensor format in periodic Sobolev spaces. J. Complexity 30, 56–71 (2014)
Shi, T., Townsend, A.: On the compressibility of tensors. SIAM J. Matrix Anal. Appl. 42, 275–298 (2021)
Soley, M.B., Bergold, P., Gorodetsky, A., Batista, V.S.: Functional Tensor-Train Chebyshev method for multidimensional quantum dynamics simulations. J. Chem. Theory Comput. 18, 25–36 (2022)
Sorensen, D.C., Embree, M.: A DEIM induced CUR factorization. SIAM J. Sci. Comput. 38, A1454–A1482 (2016)
Strössner, C., Kressner, D.: Fast global spectral methods for three-dimensional partial differential equations. IMA J. Numer. Anal. pp. 1–24 (2022)
Sudret, B., Marelli, S., Wiart, J.: Surrogate models for uncertainty quantification: an overview. In: 17th Eur. Conf. Antennas Propag. pp. 793–797 (2017)
Surjanovic, S., Bingham, D.: Virtual library of simulation experiments: test functions and datasets. https://www.sfu.ca/~ssurjano/ (2013). Retrieved November 14, 2022
Townsend, A.: Computing with functions in two dimensions. PhD thesis, University of Oxford (2014)
Townsend, A., Olver, S.: The automatic solution of partial differential equations using a global spectral method. J. Comput. Phys. 299, 106–123 (2015)
Townsend, A., Trefethen, L.N.: An extension of Chebfun to two dimensions. SIAM J. Sci. Comput. 35, C495–C518 (2013)
Trefethen, L.N.: Computing numerically with functions instead of numbers. Math. Comput. Sci. 1, 9–19 (2007)
Trefethen, L.N.: Approximation theory and approximation practice. SIAM (2013)
Trefethen, L.N.: Cubature, approximation, and isotropy in the hypercube. SIAM Rev. 59, 469–491 (2017)
Trefethen, L.N.: Multivariate polynomial approximation in the hypercube. Proc. Amer. Math. Soc. 145, 4837–4844 (2017)
Trunschke, P., Nouy, A., Eigel, M.: Weighted sparsity and sparse tensor networks for least squares approximation. arXiv:2310.08942 (2023)
Tucker, L.R.: Some mathematical notes on three-mode factor analysis. Psychometrika 31, 279–311 (1966)
Vanaret, C., Gotteland, J.-B., Durand, N., Alliot, J.-M.: Certified global minima for a benchmark of difficult optimization problems. arXiv:2003.09867 (2020)
Waldvogel, J.: Fast construction of the Fejér and Clenshaw-Curtis quadrature rules. BIT 46, 195–202 (2006)
Woodruff, D.P.: Sketching as a tool for numerical linear algebra. Found. Trends Theor. Comput. Sci. 10, 157 (2014)
Xiu, D.: Stochastic collocation methods: a survey. In: Handbook of uncertainty quantification. Springer, pp. 699–716 (2017)
Zankin, V.P., Ryzhakov, G.V., Oseledets, I.V.: Gradient descent-based D-optimal design for the least-squares polynomial approximation. arXiv e-prints arXiv:1806.06631 (2018)
Acknowledgements
The authors would like to thank Sergey Dolgov and both reviewers for their very helpful comments.
Funding
Open access funding provided by EPFL Lausanne.
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Additional information
Communicated by: Anthony Nouy
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A. Test functions
In Tables 2 and 3, we define the test functions for our numerical experiments. Note that the functions are defined on different tensor product domains. In our experiments, we map the domain of these functions onto \([-1,1]^d\) using an affine linear transformation.
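The affine transformation onto \([-1,1]^d\) can be sketched as follows; this is a minimal illustration (the function names are ours, not part of the paper's code):

```python
import numpy as np

def to_reference_domain(x, lower, upper):
    """Affinely map a point from the box [lower_i, upper_i]^d onto [-1, 1]^d."""
    x = np.asarray(x, dtype=float)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return 2.0 * (x - lower) / (upper - lower) - 1.0

def from_reference_domain(t, lower, upper):
    """Inverse map from [-1, 1]^d back onto [lower_i, upper_i]^d."""
    t = np.asarray(t, dtype=float)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return lower + (t + 1.0) * (upper - lower) / 2.0
```

Applying `from_reference_domain` before each function evaluation lets the approximation scheme operate on \([-1,1]^d\) while the test functions keep their native domains.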
Some of these functions do not fit into the format of the table. These are defined in the following:
where
with \(M \in [30,60],\ S \in [0.005,0.02],\ V_0 \in [0.002,0.01],\ k \in [1000,5000],\ P_0 \in [90000, 110000],\ T_a \in [290,296],\ T_0 \in [340,360]\).
with \(r_w \in [0.05,0.15],\ r \in [100,50000],\ T_u \in [63070,115600],\ H_u \in [990,1110],\ T_l \in [63.1,116],\ H_l \in [700,820],\ L \in [1120,1680],\ K_w \in [9855,12045]\).
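The parameter ranges above are those of the well-known borehole benchmark; the closed-form expression below follows the standard formulation from the virtual library of Surjanovic and Bingham (a sketch, since the table entry itself is not reproduced here):

```python
import numpy as np

def borehole(r_w, r, T_u, H_u, T_l, H_l, L, K_w):
    """Water flow rate through a borehole (standard benchmark form)."""
    log_ratio = np.log(r / r_w)
    return (2.0 * np.pi * T_u * (H_u - H_l)) / (
        log_ratio
        * (1.0 + 2.0 * L * T_u / (log_ratio * r_w**2 * K_w) + T_u / T_l)
    )
```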
with \(b_1 \in [50,150],\ b_2 \in [25,70],\ f \in [0.5,3],\ c_1 \in [1.2,2.5],\ c_2 \in [0.25,1.2],\) \( \beta \in [50,300]\).
where \(u = \sum _{i=1}^4 L_i \cos \big (\sum _{j=1}^{i} \theta _j\big )\), \(v = \sum _{i=1}^4 L_i \sin \big (\sum _{j=1}^{i} \theta _j\big )\) and \(\theta _i \in [0,2\pi ],\ L_i \in [0,1]\) for \(i = 1,\dots ,4\).
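The robot arm response above can be sketched as follows, assuming the standard cumulative-angle convention of the benchmark, in which segment \(i\) is rotated by the partial sum \(\sum_{j=1}^{i}\theta_j\):

```python
import numpy as np

def robot_arm(theta, L):
    """Distance from the origin to the tip of a 4-segment planar robot arm.

    theta: joint angles theta_1..theta_4 in [0, 2*pi];
    L:     segment lengths L_1..L_4 in [0, 1].
    """
    theta = np.asarray(theta, dtype=float)
    L = np.asarray(L, dtype=float)
    cum = np.cumsum(theta)          # partial sums sum_{j<=i} theta_j
    u = np.sum(L * np.cos(cum))
    v = np.sum(L * np.sin(cum))
    return np.hypot(u, v)           # sqrt(u^2 + v^2)
```

With all angles zero the arm is fully stretched, so four unit segments give distance 4.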
with \(S_w \in [150,200], W_f \in [220,300],\ A \in [6,10],\ \Delta \in [-10,10],\ q \in [16,45],\ \lambda \in [0.5,1],\ t_c \in [0.08,0.18],\ N_z \in [2.5,6],\ W_d \in [1700,2500],\ W_p \in [0.025,0.08].\)
Appendix B. Genz functions
In the following, we define the Genz functions [34], which are frequently used to evaluate function approximation and integration schemes. On the domain \([-1,1]^d\), we consider
The parameters \(w_i\) and \(c_i\) are drawn uniformly from \([0, 1]\); \(w_i\) acts as a shift for the functions, while \(c_i\) determines their approximation difficulty. We normalize the \(c_i\) such that
where the scaling constants h and b are defined for each function in Table 4. Note that \(f_1\) can be represented in FTT format (2) with \(\max R_\ell = 2\). The function \(f_3\) is separable. For \(f_2\), we are not aware of any analytic FTT representation.
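As an illustration, the oscillatory Genz function \(f_1\) in its standard form \(f_1(x) = \cos (2\pi w_1 + \sum _i c_i x_i)\) can be set up as follows; the normalization target `b` below is a placeholder standing in for the constants of Table 4, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def genz_oscillatory(x, w, c):
    """Genz 'oscillatory' test function f_1, evaluated on [-1, 1]^d."""
    x = np.asarray(x, dtype=float)
    return np.cos(2.0 * np.pi * w[0] + np.sum(c * x))

d = 5
w = rng.uniform(0.0, 1.0, d)      # shift parameters
c = rng.uniform(0.0, 1.0, d)      # difficulty parameters
b = 1.0                           # placeholder scaling target (see Table 4)
c = b * c / np.sum(c)             # rescale c to the prescribed sum

value = genz_oscillatory(np.zeros(d), w, c)
```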
Appendix C. Parametric PDE problem
In the following, we recall the example from [58, Section 4]. Assume \(\sqrt{d} \in \mathbb {N}\). Let \(\Omega = [0,1]^2\). We consider the parametric elliptic PDE
with homogeneous Dirichlet boundary conditions and parameter \(p \in [-1,1]^{d}\). We define the piecewise constant coefficient \(a(x,p): \Omega \times [-1,1]^{d} \rightarrow \mathbb {R}\) as
where we denote the disk with radius \(\rho = 1/(4\sqrt{d}+2)\) centered around \((\rho (4s-1),\rho (4t-1))\) by \(\Omega _{s,t}\) for \(s,t = 1,\dots ,\sqrt{d}\). In our numerical experiments, we approximate the quantity of interest \(Q:[-1,1]^{d} \rightarrow \mathbb {R}\) defined as
where u(x, p) denotes the solution of the PDE (16) for the given parameter \(p \in [-1,1]^{d}\). For each value of p, we solve the resulting PDE using a discretization based on linear finite elements.
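The geometry of the piecewise constant coefficient can be sketched as follows. The disk radius and centers match the description above; the value \(1 + p_k\) inside the \(k\)-th disk (and \(1\) elsewhere) is a hypothetical choice for illustration only, as the exact constants are given in [58, Section 4]:

```python
import numpy as np

def coefficient(x, p):
    """Evaluate a piecewise constant coefficient a(x, p) on Omega = [0, 1]^2.

    Disks Omega_{s,t} have radius rho = 1/(4*sqrt(d)+2) and centers
    (rho*(4s-1), rho*(4t-1)) for s, t = 1, ..., sqrt(d).  The value
    1 + p_k inside disk k is an assumed placeholder, not the paper's choice.
    """
    d = len(p)
    m = int(round(np.sqrt(d)))            # requires sqrt(d) to be an integer
    rho = 1.0 / (4.0 * m + 2.0)
    x = np.asarray(x, dtype=float)
    for s in range(1, m + 1):
        for t in range(1, m + 1):
            center = np.array([rho * (4 * s - 1), rho * (4 * t - 1)])
            if np.linalg.norm(x - center) <= rho:
                k = (s - 1) * m + (t - 1)  # assumed linear ordering of disks
                return 1.0 + p[k]
    return 1.0                             # background value outside all disks
```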
Appendix D. Experimental result tables
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Strössner, C., Sun, B. & Kressner, D. Approximation in the extended functional tensor train format. Adv Comput Math 50, 54 (2024). https://doi.org/10.1007/s10444-024-10140-9