
Effective approximation of high-dimensional space using neural networks

The Journal of Supercomputing

Abstract

Because of the curse of dimensionality, data in high-dimensional space rarely provide sufficient information for training neural networks, which makes approximating a high-dimensional space with neural networks a difficult task. To address this, we propose having a neural network approximate a high-dimensional function that itself effectively approaches the high-dimensional space, rather than using the network to approximate the space directly. Accordingly, two bounds are derived from the Lipschitz condition: one for the neural network approximating the high-dimensional function, and the other for the high-dimensional function approaching the high-dimensional space. Experimental results on synthetic and real-world datasets show that the proposed method is effective and outperforms competing methods in approximating the high-dimensional space. We find that this strategy of approximating a high-dimensional function that effectively approaches the space is more resistant to the curse of dimensionality. In addition, the ability of the proposed method to approximate the high-dimensional space depends on both the number of hidden layers and the choice of the high-dimensional function, but relies more on the latter. Our findings further indicate that there is no obvious dependency between the number of hidden layers and the choice of the high-dimensional function.
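To make the general strategy concrete, the following is a minimal, hypothetical sketch (not the authors' code): a small multilayer perceptron is trained to approximate a chosen smooth high-dimensional function g, which in turn serves as a surrogate for the high-dimensional space. Read through a triangle inequality, the overall error splits as ||network - space|| <= ||network - g|| + ||g - space||, where the first term is controlled by training and the second by the choice of g; the paper's actual bounds, derived from the Lipschitz condition, are not reproduced here. The target function g, the hyperparameters, and the use of PyTorch below are all assumptions made for illustration.

# Hypothetical illustration only: fit an MLP to a chosen smooth
# high-dimensional function g instead of modelling the raw
# high-dimensional data directly. PyTorch is assumed to be available.
import torch
import torch.nn as nn

d = 50  # input dimensionality

def g(x):
    # Illustrative smooth target: a Gaussian bump on [-1, 1]^d;
    # the paper's choice of high-dimensional function may differ.
    return torch.exp(-x.pow(2).sum(dim=1, keepdim=True) / d)

# Sample training points from the region of interest and label them with g.
torch.manual_seed(0)
X = torch.rand(4096, d) * 2 - 1        # uniform samples on [-1, 1]^d
Y = g(X)

net = nn.Sequential(
    nn.Linear(d, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(net(X), Y)   # the ||network - g|| term, measured as MSE
    loss.backward()
    opt.step()

print(f"final training MSE against g: {loss.item():.3e}")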


Data availability

All real-world datasets used in this work are available from the UCI Machine Learning Repository at http://archive.ics.uci.edu/ml/.


Acknowledgements

This work was supported by the Science and Technology Research Program of Chongqing Municipal Education Commission of China under Grants KJQN201903003 and KJQN202003001, by the Chongqing Municipal Education Commission of China under Grant 192072, and by the Higher Education of Chongqing Municipal Education Commission of China under Grant CQGJ20ZX021.

Author information

Corresponding author

Correspondence to Jian Zheng.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Zheng, J., Wang, J., Chen, Y. et al. Effective approximation of high-dimensional space using neural networks. J Supercomput 78, 4377–4397 (2022). https://doi.org/10.1007/s11227-021-04038-2

