
Computational graph completion

  • Research
  • Published in: Research in the Mathematical Sciences

Abstract

We introduce a framework for generating, organizing, and reasoning with computational knowledge. It is motivated by the observation that most problems in Computational Sciences and Engineering (CSE) can be described as that of completing (from data) a computational graph (or hypergraph) representing dependencies between functions and variables. In that setting, nodes represent variables and edges (or hyperedges) represent functions (or functionals). Functions and variables may be known, unknown, or random. Data come in the form of observations of distinct values of a finite number of subsets of the variables of the graph (satisfying its functional dependencies). The underlying problem combines a regression problem (approximating unknown functions) with a matrix completion problem (recovering unobserved variables in the data). Replacing unknown functions by Gaussian processes and conditioning on observed data provides a simple but efficient approach to completing such graphs. Since the proposed framework is highly expressive, it has a vast potential application scope. Since the completion process can be automated, one could, with the proposed framework, solve a complex CSE problem by drawing a diagram, just as one computes \(\sqrt{\sqrt{2}+\sqrt{3}}\) on a pocket calculator without thinking about the underlying algorithms. Compared to traditional regression/kriging, the proposed framework can recover unknown functions from much scarcer data by exploiting interdependencies between multiple functions and variables. The computational graph completion (CGC) problem addressed by the proposed framework can therefore also be interpreted as generalizing the problem of solving linear systems of equations to that of approximating unknown variables and functions with noisy, incomplete, and nonlinear dependencies.
Numerous examples illustrate the flexibility, scope, efficacy, and robustness of the CGC framework and show how it can be used as a pathway to identifying simple solutions to classical CSE problems. These examples include the seamless CGC representation of known methods (for solving/learning PDEs, surrogate/multiscale modeling, mode decomposition, deep learning) and the discovery of new ones (digital twin modeling, dimension reduction).
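To make the core mechanism concrete, the following is a minimal sketch of the simplest instance of graph completion: a graph with a single unknown edge \(f\) between a fully observed input node and a fully observed output node, where \(f\) is replaced by a Gaussian process and conditioned on the data (i.e., ordinary GP/kriging regression). The kernel choice (squared-exponential), its lengthscale, and the nugget are illustrative assumptions, not the paper's specific settings.

```python
import numpy as np

def rbf(X, Y, ell=0.2):
    # Squared-exponential kernel matrix: K[i, j] = exp(-(X[i]-Y[j])^2 / (2 ell^2))
    d = X[:, None] - Y[None, :]
    return np.exp(-d**2 / (2 * ell**2))

def gp_complete(x_obs, y_obs, x_new, ell=0.2, nugget=1e-10):
    """Condition a zero-mean GP prior on observations of the unknown edge
    and return the posterior mean at new inputs (the completed function)."""
    K = rbf(x_obs, x_obs, ell) + nugget * np.eye(len(x_obs))  # nugget for stability
    k = rbf(x_new, x_obs, ell)
    return k @ np.linalg.solve(K, y_obs)

# Unknown edge f of the one-edge graph x -> y, observed at a few input values.
f = lambda x: np.sin(2 * np.pi * x)
x_obs = np.linspace(0.0, 1.0, 12)
y_obs = f(x_obs)
x_new = np.linspace(0.0, 1.0, 50)
y_hat = gp_complete(x_obs, y_obs, x_new)
print(np.max(np.abs(y_hat - f(x_new))))  # small interpolation error
```

In the general CGC setting described above, the same conditioning step is applied jointly across several unknown functions and partially observed variables, which is what allows the framework to operate with much scarcer data than edge-by-edge regression.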


[Figures 1–11 are available in the full article.]


Data Availability

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Notes

  1. Write \(\mathcal {L}({\mathcal {Y}})\) for the set of bounded linear operators mapping \({\mathcal {Y}}\) to \({\mathcal {Y}}\).

  2. Evidently, the independence assumption can be relaxed.

  3. This type of regularization was introduced in [29] (as a principled and rigorous alternative to Dropout [41]) for ANNs/ResNets and their kernelized variants where it was shown to be necessary and sufficient to avoid regression instabilities (the lack of stability/continuity of regressors with respect to data and input variables).

References

  1. Baldi, P.: Autoencoders, unsupervised learning, and deep architectures. In: Proceedings of ICML Workshop on Unsupervised and Transfer Learning, pp. 37–49. JMLR Workshop and Conference Proceedings (2012)

  2. Belkin, M.: Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation. (2021). arXiv preprint arXiv:2105.14368

  3. Brown, L.G.: A survey of image registration techniques. ACM Comput. Surv. (CSUR) 24(4), 325–376 (1992)

  4. Chen, Y., Hosseini, B., Owhadi, H., Stuart, A.M.: Solving and learning nonlinear PDEs with Gaussian processes. J. Comput. Phys. (2021). arXiv preprint arXiv:2103.12959

  5. Chen, Y., Owhadi, H., Stuart, A.: Consistency of empirical Bayes and kernel flow for hierarchical parameter estimation. Math. Comput. 90(332), 2527–2578 (2021)

  6. Cockayne, J., Oates, C.J., Sullivan, T.J., Girolami, M.: Bayesian probabilistic numerical methods. SIAM Rev. 61(4), 756–789 (2019)

  7. Constantine, P.G., Dow, E., Wang, Q.: Active subspace methods in theory and practice: applications to kriging surfaces. SIAM J. Sci. Comput. 36(4), A1500–A1524 (2014)

  8. Cressie, N.: Spatial prediction and ordinary kriging. Math. Geol. 20(4), 405–421 (1988)

  9. Darcy, M.D., Hamzi, B., Livieri, G., Owhadi, H., Tavallali, P.: One-shot learning of stochastic differential equations with computational graph completion. (2022). https://doi.org/10.2139/ssrn.4046014

  10. Darcy, M., Hamzi, B., Susiluoto, J., Braverman, A., Owhadi, H.: Learning dynamical systems from data: a simple cross-validation perspective, part II: nonparametric kernel flows. (2021). https://doi.org/10.13140/RG.2.2.16391.32164

  11. Fensel, D., Simsek, U., Angele, K., Huaman, E., Kärle, E., Panasiuk, O., Toma, I., Umbrich, J., Wahler, A.: Knowledge Graphs. Springer, New York (2020)

  12. Golub, G.H., Reinsch, C.: Singular value decomposition and least squares solutions. In: Linear Algebra, pp. 134–151. Springer, New York (1971)

  13. Grenander, U., Miller, M.I.: Computational anatomy: an emerging discipline. Q. Appl. Math. 56(4), 617–694 (1998)

  14. Hamzi, B., Maulik, R., Owhadi, H.: Simple, low-cost and accurate data-driven geophysical forecasting with learned kernels. Proc. R. Soc. A 477(2252), 20210326 (2021)

  15. Hamzi, B., Owhadi, H.: Learning dynamical systems from data: a simple cross-validation perspective, part I: parametric kernel flows. Phys. D Nonlinear Phenom. 421, 132817 (2021)

  16. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  17. Hennig, P., Osborne, M.A., Girolami, M.: Probabilistic numerics and uncertainty in computations. Proc. R. Soc. A 471(2179), 20150142 (2015)

  18. Huang, N.E., Shen, Z., Long, S.R., Wu, M.C., Shih, H.H., Zheng, Q., Yen, N.-C., Tung, C.C., Liu, H.H.: The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 454(1971), 903–995 (1998)

  19. Jordan, M.I.: Learning in Graphical Models, vol. 89. Springer Science & Business Media, New York (1998)

  20. Lee, J., De Brouwer, E., Hamzi, B., Owhadi, H.: Learning dynamical systems from data: a simple cross-validation perspective, part III: irregularly-sampled time series. (2021). arXiv preprint arXiv:2111.13037

  21. Lin, Y., Liu, Z., Sun, M., Liu, Y., Zhu, X.: Learning entity and relation embeddings for knowledge graph completion. In: Twenty-Ninth AAAI Conference on Artificial Intelligence (2015)

  22. Micchelli, C.A., Rivlin, T.J.: A survey of optimal recovery. In: Optimal Estimation in Approximation Theory, pp. 1–54. Springer, New York (1977)

  23. Mika, S., Schölkopf, B., Smola, A.J., Müller, K.-R., Scholz, M., Rätsch, G.: Kernel PCA and de-noising in feature spaces. In: NIPS, vol. 11, pp. 536–542 (1998)

  24. Noy, N., Gao, Y., Jain, A., Narayanan, A., Patterson, A., Taylor, J.: Industry-scale knowledge graphs: lessons and challenges. Commun. ACM 62(8), 36–43 (2019)

  25. Owhadi, H., Scovel, C.: Operator-Adapted Wavelets, Fast Solvers, and Numerical Homogenization. Cambridge Monographs on Applied and Computational Mathematics, vol. 35. Cambridge University Press, Cambridge (2019)

  26. Owhadi, H., Scovel, C., Schäfer, F.: Statistical numerical approximation. Notices Amer. Math. Soc. 66(10), 1608–1617 (2019)

  27. Owhadi, H.: Bayesian numerical homogenization. Multiscale Model. Simul. 13(3), 812–828 (2015)

  28. Owhadi, H.: Multigrid with rough coefficients and multiresolution operator decomposition from hierarchical information games. SIAM Rev. 59(1), 99–149 (2017)

  29. Owhadi, H.: Do ideas have shape? Plato's theory of forms as the continuous limit of artificial neural networks. (2020). arXiv preprint arXiv:2008.03920

  30. Owhadi, H.: Notes on operator valued kernels, feature maps and Gaussian processes. (2021). http://users.cms.caltech.edu/~owhadi/index_htm_files/OperatorValuedGPs.pdf

  31. Owhadi, H., Scovel, C.: Operator-Adapted Wavelets, Fast Solvers, and Numerical Homogenization: From a Game Theoretic Approach to Numerical Approximation and Algorithm Design, vol. 35. Cambridge University Press, Cambridge (2019)

  32. Owhadi, H., Scovel, C., Yoo, G.R.: Kernel Mode Decomposition and the Programming of Kernels. Springer (2021). arXiv preprint arXiv:1907.08592 (early version)

  33. Owhadi, H., Yoo, G.R.: Kernel flows: from learning kernels from data into the abyss. J. Comput. Phys. 389, 22–47 (2019)

  34. Prasanth, S., Haddad, Z.S., Susiluoto, J., Braverman, A.J., Owhadi, H., Hamzi, B., Hristova-Veleva, S.M., Turk, J.: Kernel flows to infer the structure of convective storms from satellite passive microwave observations. In: AGU Fall Meeting 2021. AGU (2021)

  35. Raissi, M., Perdikaris, P., Karniadakis, G.E.: Inferring solutions of differential equations using noisy multi-fidelity data. J. Comput. Phys. 335, 736–746 (2017)

  36. Raissi, M., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378, 686–707 (2019)

  37. Reisert, M., Burkhardt, H.: Learning equivariant functions with matrix valued kernels. J. Mach. Learn. Res. 8, 385–408 (2007)

  38. Rusnak, L.J.: Oriented hypergraphs: introduction and balance. Electron. J. Comb. 20(3), 48 (2013)

  39. Schäfer, F., Katzfuss, M., Owhadi, H.: Sparse Cholesky factorization by Kullback-Leibler minimization. SIAM J. Sci. Comput. 43(3), A2019–A2046 (2021)

  40. Schölkopf, B., Smola, A.J.: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge (2018)

  41. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)

  42. Tinhofer, G., Albrecht, R., Mayr, E., Noltemeier, H., Syslo, M.M.: Computational Graph Theory, vol. 7. Springer Science & Business Media, New York (2012)

  43. Wendland, H.: Scattered Data Approximation. Cambridge Monographs on Applied and Computational Mathematics, vol. 17. Cambridge University Press, Cambridge (2005)

  44. Williams, C.K., Rasmussen, C.E.: Gaussian Processes for Machine Learning, vol. 2. MIT Press, Cambridge, MA (2006)

  45. Yoo, G.R., Owhadi, H.: Deep regularization and direct training of the inner layers of neural networks with kernel flows. (2020). arXiv preprint arXiv:2002.08335

  46. Younes, L.: Shapes and Diffeomorphisms, vol. 171. Springer, New York (2010)


Acknowledgements

The author gratefully acknowledges partial support by the Air Force Office of Scientific Research under MURI award number FA9550-20-1-0358 (Machine Learning and Physics-Based Modeling and Simulation). Thanks to Amy Braverman, Jouni Susiluoto, and Otto Lamminpaeae for stimulating discussions. Thanks to an anonymous referee and to Jean-Luc Cambier for helpful comments and feedback.

Author information

Correspondence to Houman Owhadi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Owhadi, H. Computational graph completion. Res Math Sci 9, 27 (2022). https://doi.org/10.1007/s40687-022-00320-8



  • DOI: https://doi.org/10.1007/s40687-022-00320-8
