High-dimensional scalar function visualization using principal parameterizations


Abstract

Insightful visualization of multidimensional scalar fields, in particular parameter spaces, is key to many computational science and engineering disciplines. We propose a principal component-based approach to visualize such fields that accurately reflects their sensitivity to their input parameters. The method performs dimensionality reduction on the space formed by all possible partial functions (i.e., those defined by fixing one or more input parameters to specific values), which are projected to low-dimensional parameterized manifolds such as 3D curves, surfaces, and ensembles thereof. Our mapping provides a direct geometrical and visual interpretation in terms of Sobol’s celebrated method for variance-based sensitivity analysis. We furthermore contribute a practical realization of the proposed method by means of tensor decomposition, which enables accurate yet interactive integration and multilinear principal component analysis of high-dimensional models.


Data Availability Statement

The datasets generated and analyzed during the current study are available in the ttrecipes repository: https://github.com/rballester/ttrecipes/tree/master/models. The code for the method is available at https://github.com/ghalter/pca-webapp-paper, and a deployed application can be found at http://pcatestwebapp.westeurope.cloudapp.azure.com/.

References

  1. ttpy: a tensor train toolbox implemented in Python. http://github.com/oseledets/ttpy

  2. ttrecipes: a high-level Python library of tensor train numerical utilities. https://github.com/rballester/ttrecipes

  3. An, J., Owen, A.B.: Quasi-regression. J. Complex. 17(4), 588–607 (2001)

  4. Ballester-Ripoll, R., Paredes, E.G., Pajarola, R.: A surrogate visualization model using the tensor train format. In: SIGGRAPH ASIA 2016 Symposium on Visualization, pp. 13:1–13:8 (2016)

  5. Ballester-Ripoll, R., Paredes, E.G., Pajarola, R.: Sobol tensor trains for global sensitivity analysis. ArXiv e-print arXiv:1712.00233 (2017)

  6. Ballester-Ripoll, R., Paredes, E.G., Pajarola, R.: Tensor algorithms for advanced sensitivity metrics. SIAM/ASA J. Uncertain. Quant. 6(3), 1172–1197 (2018)

  7. Banzhaf, J.F.: Weighted voting doesn’t work: a mathematical analysis. Rutgers Law Rev. 19, 317–343 (1965)

  8. Bigoni, D., Engsig-Karup, A., Marzouk, Y.: Spectral tensor-train decomposition. SIAM J. Sci. Comput. 38(4), A2405–A2439 (2016)

  9. Bolado-Lavin, R., Castaings, W., Tarantola, S.: Contribution to the sample mean plot for graphical and numerical sensitivity analysis. Reliab. Eng. Syst. Saf. 94(6), 1041–1049 (2009). https://doi.org/10.1016/j.ress.2008.11.012

  10. Bruckner, S., Moeller, T.: Result-driven exploration of simulation parameter spaces for visual effects design. IEEE Trans. Vis. Comput. Graph. 16(6), 1467–1475 (2010)

  11. Castura, J.C., Baker, A.K., Ross, C.F.: Using contrails and animated sequences to visualize uncertainty in dynamic sensory profiles obtained from temporal check-all-that-apply (TCATA) data. Food Qual. Prefer. 54, 90–100 (2016)

  12. Chan, Y.H., Correa, C.D., Ma, K.L.: Flow-based scatterplots for sensitivity analysis. In: IEEE Symposium on Visual Analytics Science and Technology, pp. 43–50 (2010)

  13. Constantine, P.G.: Active Subspaces: Emerging Ideas for Dimension Reduction in Parameter Studies. Society for Industrial and Applied Mathematics, Philadelphia (2015)

  14. Correa, C., Lindstrom, P., Bremer, P.T.: Topological spines: a structure-preserving visual representation of scalar fields. IEEE Trans. Vis. Comput. Graph. 17(12), 1842–1851 (2011)

  15. Diaz, P., Constantine, P.G., Kalmbach, K., Jones, E., Pankavich, S.: A modified SEIR model for the spread of Ebola in western Africa and metrics for resource allocation. Appl. Math. Comput. 324, 141–155 (2018)

  16. Dubourg, V., Sudret, B., Deheeger, F.: Metamodel-based importance sampling for structural reliability analysis. Probab. Eng. Mech. 33, 47–57 (2013)

  17. Fruth, J., Roustant, O., Muehlenstaedt, T.: The fanovaGraph package: visualization of interaction structures and construction of block-additive kriging models. HAL preprint 00795229 (2013). https://hal.archives-ouvertes.fr/hal-00795229

  18. Gallagher, M., Downs, T.: Visualization of learning in multilayer perceptron networks using principal component analysis. IEEE Trans. Syst., Man, Cybern., Part B 33(1), 28–34 (2003)

  19. Gardner, T.S., Dolnik, M., Collins, J.J.: A theory for controlling cell cycle dynamics using a reversibly binding inhibitor. Proc. Natl. Acad. Sci. USA 95(24), 14190–14195 (1998)

  20. Gerber, S., Bremer, P.T., Pascucci, V., Whitaker, R.: Visual exploration of high dimensional scalar functions. IEEE Trans. Vis. Comput. Graph. 16(6), 1271–1280 (2010)

  21. Goldbeter, A.: A minimal cascade model for the mitotic oscillator involving cyclin and CDC2 kinase. Proc. Natl. Acad. Sci. USA 88(20), 9107–9111 (1991)

  22. Gorodetsky, A.A., Jakeman, J.D.: Gradient-based optimization for regression in the functional tensor-train format. ArXiv e-print arXiv:1801.00885 (2018)

  23. Grasedyck, L., Kressner, D., Tobler, C.: A literature survey of low-rank tensor approximation techniques. GAMM-Mitteilungen 36(1), 53–78 (2013)

  24. Heinrich, J., Weiskopf, D.: State of the art of parallel coordinates. In: Proceedings Eurographics (State of the Art Reports), pp. 95–116 (2013)

  25. Insuasty, E., Van den Hof, P.M.J., Weiland, S., Jansen, J.D.: Flow-based dissimilarity measures for reservoir models: a spatial-temporal tensor approach. Comput. Geosci. 21(4), 645–663 (2017)

  26. Iooss, B., Lemaître, P.: A Review on Global Sensitivity Analysis Methods, pp. 101–122. Springer, Boston (2015)

  27. Knutsson, H.: Representing local structure using tensors. Tech. Rep. LiTH-ISY-I, 8765-4321, Computer Vision Laboratory, Linköping University (1989)

  28. Kolda, T.G., Bader, B.W.: Tensor decompositions and applications. SIAM Rev. 51(3), 455–500 (2009)

  29. Liu, S., Maljovec, D., Wang, B., Bremer, P.T., Pascucci, V.: Visualizing high-dimensional data: advances in the past decade. IEEE Trans. Vis. Comput. Graph. 23(3), 1249–1268 (2017)

  30. Oseledets, I.V.: Tensor-train decomposition. SIAM J. Sci. Comput. 33(5), 2295–2317 (2011)

  31. Oseledets, I.V., Tyrtyshnikov, E.: TT-cross approximation for multidimensional arrays. Linear Algebra Appl. 432(1), 70–88 (2010)

  32. Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., Tarantola, S.: Global Sensitivity Analysis: The Primer. Wiley (2008)

  33. Sedlmair, M., Brehmer, M., Ingram, S., Munzner, T.: Dimensionality reduction in the wild: gaps and guidance. Tech. Rep. TR-2012-03, University of British Columbia, Department of Computer Science (2012)

  34. Sedlmair, M., Heinzl, C., Bruckner, S., Piringer, H., Möller, T.: Visual parameter space analysis: a conceptual framework. IEEE Trans. Vis. Comput. Graph. 20(12), 2161–2170 (2014)

  35. Shao, L., Mahajan, A., Schreck, T., Lehmann, D.J.: Interactive regression lens for exploring scatter plots. Comput. Graph. Forum 36(3), 157–166 (2017)

  36. Sobol’, I.M.: Sensitivity estimates for nonlinear mathematical models (in Russian). Math. Models 2, 112–118 (1990)

  37. Torsney-Weir, T., Saad, A., Möller, T., Hege, H.C., Weber, B., Verbavatz, J.M.: Tuner: principled parameter finding for image segmentation algorithms using visual response surface exploration. IEEE Trans. Vis. Comput. Graph. 17(12), 1892–1901 (2011)

  38. Torsney-Weir, T., Sedlmair, M., Möller, T.: Sliceplorer: 1D slices for multi-dimensional continuous functions. Comput. Graph. Forum 36(3), 167–177 (2017)

  39. Turk, M., Pentland, A.: Eigenfaces for recognition. J. Cogn. Neurosci. 3(1), 71–86 (1991)

  40. Vasilescu, M.A.O., Terzopoulos, D.: Multilinear analysis of image ensembles: Tensorfaces. In: European Conference on Computer Vision, pp. 447–460 (2002)

  41. Vervliet, N., Debals, O., Sorber, L., Lathauwer, L.D.: Breaking the curse of dimensionality using decompositions of incomplete tensors: tensor-based scientific computing in big data analysis. IEEE Signal Process. Mag. 31(5), 71–79 (2014)

  42. Ward, M.O., LeBlanc, J.T., Tipnis, R.: N-land: a graphical tool for exploring N-dimensional data. In: Proceedings Computer Graphics International, pp. 95–116 (1994)

  43. van Wijk, J.J., van Liere, R.: HyperSlice: visualization of scalar functions of many variables. In: Proceedings IEEE VIS, pp. 119–125 (1993)

  44. Wu, Q., Xia, T., Chen, C., Lin, H.Y.S., Wang, H., Yu, Y.: Hierarchical tensor approximation of multidimensional visual data. IEEE Trans. Vis. Comput. Graph. 14(1), 186–199 (2008)

Acknowledgements

This project was partially supported by a Swiss National Science Foundation (SNSF) SPIRIT Grant (project no. IZSTZ0_208541).

Author information

Corresponding author

Correspondence to Rafael Ballester-Ripoll.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest. This work involved no human participants or animals.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Details on cross-approximation

Cross-approximation is a surrogate modeling technique that incrementally builds a compressed TT tensor approximating a target black-box function, evaluating the function on one batch of samples per iteration. These samples are chosen adaptively so as to minimize the model's relative error with the smallest possible number of evaluations. The error is defined as \(\Vert \tilde{\textbf{X}} - \textbf{X}\Vert / \Vert \textbf{X}\Vert \), where \(\textbf{X}\) are the ground-truth evaluations and \(\tilde{\textbf{X}}\) is the model's prediction. The samples also serve as a validation set, and the process stops as soon as the error falls below a user-defined threshold \(\epsilon \). This is a standard way to approximate the generalization error when building compressed tensors, and the estimate is known [31] to be reliable in practice.

We used \(\epsilon := 10^{-4}\) in all cases, which required between \(10^5\) and \(10^7\) samples and a few seconds to build each TT metamodel. Although not needed for our work, note that there are alternatives to cross-approximation for the case when function evaluation is expensive [23].

In this paper, we used the cross-approximation implementation released in the ttpy Python toolbox [1].
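
To make the stopping rule concrete, here is a minimal NumPy sketch of the relative-error estimate computed over a batch of validation samples. This is our own illustration, not part of ttpy; the names f and f_tilde are placeholders for the black-box function and the surrogate's evaluation routine.

import numpy as np

def relative_error(f, f_tilde, samples):
    """Relative error ||X_tilde - X|| / ||X|| over a validation batch."""
    X = np.array([f(s) for s in samples])              # ground-truth evaluations
    X_tilde = np.array([f_tilde(s) for s in samples])  # surrogate predictions
    return np.linalg.norm(X_tilde - X) / np.linalg.norm(X)

# Cross-approximation draws a fresh batch of adaptively chosen samples per
# iteration and stops once this estimate falls below epsilon = 1e-4.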

Step-by-step illustration of the algorithm

In Fig. 16, we show a flowchart of the proposed algorithm, including an intermediate visualization at each of its steps. To this end, we take a simple 3D example function that is easy to understand and can be displayed directly both as a 3D rendering and as a sequence of 2D slices.

Fig. 16

Flowchart of the proposed method. For illustrative purposes, we consider a 3D function f representing a grayscale video: f(x, y, t) is the intensity of pixel (x, y) at time \(0 \le t \le 1\). The variable of interest is t. The video captures a static ball with a fixed scene and camera position, while the light source moves in a circular fashion. In addition, the light is suddenly dimmed in the middle of the video, which manifests as a jump in t's curve along the \(\pi _x\) axis. The three red boxes correspond to the coordinates \((\pi _x(t), \pi _y(t), \pi _z(t))\) at time \(t=0.5\), right before the light source is dimmed. The original video is a tensor of shape \(256 \times 256 \times 256\); for the sake of presentation, only 10 timesteps are shown in this diagram

Operations in the TT compressed domain

The TT format allows for efficient computation of several operations without having to explicitly decompress the tensor it represents. This appendix covers the three main operations that are leveraged in the paper.

1.1 Multidimensional integrals

Suppose our original function is expressed as a TT with cores \({\mathcal {T}}^{(1)}, \dots , {\mathcal {T}}^{(N)}\). To marginalize the n-th variable away (i.e., compute the expected value along \(x_n\)), one simply replaces the core \({\mathcal {T}}^{(n)}\) by

$$\begin{aligned} \widehat{{\mathcal {T}}}^{(n)} := \frac{1}{I_n} \sum _{i=1}^{I_n} {\mathcal {T}}^{(n)}[i], \end{aligned}$$

where \(I_n\) is the number of grid points along dimension n. In other words, we simply average the n-th core along its second dimension.
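
As a sanity check of this identity, the following self-contained NumPy sketch (our illustration, not the paper's ttpy-based implementation) stores TT cores as arrays of shape \((r_{n-1}, I_n, r_n)\) and verifies that averaging a core reproduces the marginal of the decompressed tensor:

import numpy as np

def tt_full(cores):
    """Decompress a TT tensor into a dense array (for validation only)."""
    result = cores[0]                              # shape (1, I_1, r_1)
    for core in cores[1:]:
        result = np.tensordot(result, core, axes=(-1, 0))
    return result[0, ..., 0]                       # drop the boundary ranks

def marginalize(cores, n):
    """Average the n-th core along its second (spatial) dimension."""
    new = list(cores)
    new[n] = cores[n].mean(axis=1, keepdims=True)  # shape (r_{n-1}, 1, r_n)
    return new

rng = np.random.default_rng(0)
shapes, ranks = [4, 5, 6], [1, 2, 3, 1]
cores = [rng.standard_normal((ranks[k], shapes[k], ranks[k + 1]))
         for k in range(len(shapes))]
marginal = tt_full(marginalize(cores, 1)).squeeze(axis=1)
assert np.allclose(marginal, tt_full(cores).mean(axis=1))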

1.2 Element-wise arithmetics

If two tensors are in the TT format with cores \({\mathcal {T}}_1^{(1)}, \dots , {\mathcal {T}}_1^{(N)}\) and \({\mathcal {T}}_2^{(1)}, \dots , {\mathcal {T}}_2^{(N)}\), then their element-wise sum \({\mathcal {T}}_3 = {\mathcal {T}}_1 + {\mathcal {T}}_2\) is given by the following cores:

$$\begin{aligned} {\mathcal {T}}_3^{(n)}[i_n] := \begin{cases} \begin{pmatrix} {\mathcal {T}}_1^{(1)}[i_1] & {\mathcal {T}}_2^{(1)}[i_1] \end{pmatrix} & \text {if } n = 1; \\ \begin{pmatrix} {\mathcal {T}}_1^{(n)}[i_n] & 0 \\ 0 & {\mathcal {T}}_2^{(n)}[i_n] \end{pmatrix} & \text {if } n = 2, \dots , N-1; \\ \begin{pmatrix} {\mathcal {T}}_1^{(N)}[i_N] \\ {\mathcal {T}}_2^{(N)}[i_N] \end{pmatrix} & \text {if } n = N. \end{cases} \end{aligned}$$

To subtract two tensors, it suffices to negate the second one (by flipping the sign of its first core) and then sum them as above.
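
The block construction above translates directly into code. The following NumPy sketch (again ours, with cores stored as arrays of shape \((r_{n-1}, I_n, r_n)\)) builds the cores of the sum; it can be validated against dense addition with the tt_full helper from the previous sketch:

import numpy as np

def tt_add(cores1, cores2):
    """Element-wise sum of two TT tensors via block-diagonal core stacking."""
    N = len(cores1)
    out = []
    for n, (a, b) in enumerate(zip(cores1, cores2)):
        if n == 0:                        # first cores: concatenate right ranks
            out.append(np.concatenate([a, b], axis=2))
        elif n == N - 1:                  # last cores: concatenate left ranks
            out.append(np.concatenate([a, b], axis=0))
        else:                             # middle cores: block-diagonal stacking
            r1, I, r2 = a.shape
            s1, _, s2 = b.shape
            core = np.zeros((r1 + s1, I, r2 + s2))
            core[:r1, :, :r2] = a
            core[r1:, :, r2:] = b
            out.append(core)
    return out

# Subtraction: negate the first core of the second operand, then add:
# tt_sub = lambda c1, c2: tt_add(c1, [-c2[0]] + c2[1:])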

The operations just described allow us to compute \({\mathcal {M}}\) (line 8 of Algorithm 1) entirely in the TT format.

1.3 PCA projection

In practice, once we have \({\mathcal {M}}\), we find it more efficient to compute its PCA projection directly in the compressed domain via the SVD. Let \({\mathcal {M}}\), which represents a matrix of size \(I^K \times I^{N-K}\), be given by TT cores \({\mathcal {M}}^{(1)}, \dots , {\mathcal {M}}^{(N)}\). We proceed in three steps:

  1. We left-orthogonalize [31] the TT. This is equivalent to finding the RQ decomposition of \({\mathcal {M}}\), with the first K cores representing the \({\textbf{R}}\) part (a matrix of shape \(I^K \times R\), where R is the K-th TT rank of \({\mathcal {M}}\)) and the remaining cores the \({\textbf{Q}}\) part (of shape \(R \times I^{N-K}\)).

  2. We discard \({\textbf{Q}}\) by keeping the first K cores only.

  3. We perform rank truncation [31] on the last rank in order to decrease it to 3. This involves an SVD of the last core and is equivalent to keeping the three leading left singular vectors of \({\textbf{R}}\).

The resulting tensor has shape \(I^K \times 3\), as desired, and represents a mapping from the original \(I^K\) grid to \({\mathbb {R}}^3\), i.e., a discretized K-dimensional manifold that is as close as possible to \({\mathcal {M}}\) in the \(L^2\) sense.
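
For concreteness, here is a NumPy sketch of the three steps under the same core convention as the previous sketches. It is an illustrative reconstruction of the procedure, not the authors' released code, and the helper name principal_projection is ours.

import numpy as np

def principal_projection(cores, K, target_rank=3):
    """Map a TT tensor, seen as an I^K x I^(N-K) matrix, to the span of
    its leading left singular vectors (steps 1-3 above)."""
    cores = [c.copy() for c in cores]
    N = len(cores)
    # Step 1a: left-orthogonalize cores 0..K-2, so that the SVD in step 3
    # acts on the leading singular directions of the R factor.
    for n in range(K - 1):
        r1, I, r2 = cores[n].shape
        Q, R = np.linalg.qr(cores[n].reshape(r1 * I, r2))
        cores[n] = Q.reshape(r1, I, -1)
        cores[n + 1] = np.tensordot(R, cores[n + 1], axes=(1, 0))
    # Step 1b: right-orthogonalize cores N-1..K, so that they form the
    # orthonormal Q factor; the rest is absorbed into the first K cores.
    for n in range(N - 1, K - 1, -1):
        r1, I, r2 = cores[n].shape
        Qt, Lt = np.linalg.qr(cores[n].reshape(r1, I * r2).T)  # LQ via QR
        cores[n] = Qt.T.reshape(-1, I, r2)
        cores[n - 1] = np.tensordot(cores[n - 1], Lt.T, axes=(2, 0))
    # Step 2: discard Q by keeping the first K cores only.
    head = cores[:K]
    # Step 3: truncate the last TT rank to 3 via an SVD of the last core.
    r1, I, r2 = head[-1].shape
    U, s, _ = np.linalg.svd(head[-1].reshape(r1 * I, r2), full_matrices=False)
    rank = min(target_rank, len(s))
    head[-1] = (U[:, :rank] * s[:rank]).reshape(r1, I, rank)
    return head  # TT cores of a mapping from the I^K grid to R^3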

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Ballester-Ripoll, R., Halter, G. & Pajarola, R. High-dimensional scalar function visualization using principal parameterizations. Vis Comput 40, 2571–2588 (2024). https://doi.org/10.1007/s00371-023-02937-4
