
Cognitive Computation, Volume 10, Issue 3, pp 464–477

Extreme Learning Machines for VISualization+R: Mastering Visualization with Target Variables

  • Andrey Gritsenko
  • Anton Akusok
  • Stephen Baek
  • Yoan Miche
  • Amaury Lendasse

Abstract

This paper presents an improvement of the Extreme Learning Machines for VISualization (ELMVIS+) nonlinear dimensionality reduction method. The improved method, called ELMVIS+R, applies the originally unsupervised ELMVIS+ method to regression problems, using target values to improve visualization results. Previous work has shown that adding a supervised component for classification problems indeed yields better visualization results. To verify this for regression problems, a set of experiments on several different datasets was performed. The newly proposed method was compared to ELMVIS+ and, in most cases, outperformed the original algorithm. The results presented in this article support the general idea that combining a supervised component (target values) with a nonlinear dimensionality reduction method such as ELMVIS+ can improve both visual properties and overall accuracy.
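To make the combined criterion concrete, below is a minimal, illustrative sketch in Python/NumPy. It is a simplification under stated assumptions, not the authors' implementation: the names (elm_features, combined_error, the weighting factor alpha) are hypothetical, mean-squared error stands in for the cosine-similarity criterion of the published method, and the least-squares output layer is recomputed from scratch on every swap instead of using the fast incremental updates of ELMVIS+. The core idea carries over: an ELM maps fixed visualization coordinates back to the data (and, in ELMVIS+R, also to the targets), and random swaps of the sample-to-coordinate assignment are kept whenever they lower the combined error.

# Sketch of the ELMVIS+R idea (hypothetical names, not the authors' code).
import numpy as np

rng = np.random.default_rng(0)

def elm_features(V, W, b):
    """Random-hidden-layer ELM feature map for visualization coordinates V."""
    return np.tanh(V @ W + b)

def combined_error(H, X, Y, alpha=0.5):
    """Weighted sum of data reconstruction error and target regression error,
    both via least-squares ELM output layers on the hidden features H."""
    beta_x, *_ = np.linalg.lstsq(H, X, rcond=None)
    beta_y, *_ = np.linalg.lstsq(H, Y, rcond=None)
    err_x = np.mean((H @ beta_x - X) ** 2)  # unsupervised (ELMVIS+) term
    err_y = np.mean((H @ beta_y - Y) ** 2)  # supervised (ELMVIS+R) term
    return (1 - alpha) * err_x + alpha * err_y

# Toy data: n samples with d features and a scalar regression target.
n, d, n_hidden = 100, 5, 20
X = rng.standard_normal((n, d))
Y = rng.standard_normal((n, 1))
V = rng.standard_normal((n, 2))          # fixed 2-D visualization coordinates
W = rng.standard_normal((2, n_hidden))   # random ELM input weights
b = rng.standard_normal(n_hidden)
H = elm_features(V, W, b)                # hidden layer depends only on V

perm = rng.permutation(n)                # assignment of samples to coordinates
best = combined_error(H, X[perm], Y[perm])
for _ in range(2000):                    # random-swap search, as in ELMVIS+
    i, j = rng.integers(n, size=2)
    perm[i], perm[j] = perm[j], perm[i]
    err = combined_error(H, X[perm], Y[perm])
    if err < best:
        best = err                       # keep the improving swap
    else:
        perm[i], perm[j] = perm[j], perm[i]  # revert
print("final combined error:", best)

In this sketch, alpha balances the unsupervised reconstruction term against the supervised regression term; setting alpha to zero recovers a plain ELMVIS+-style objective, while larger values push the embedding to respect the target variable.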

Keywords

Nonlinear regression · Machine learning · Artificial neural networks · Extreme learning machines · Visualization · Nonlinear dimensionality reduction · Cosine similarity

Notes

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2017

Authors and Affiliations

  1. Department of Mechanical and Industrial Engineering, The University of Iowa, Iowa City, USA
  2. The Iowa Informatics Initiative, The University of Iowa, Iowa City, USA
  3. Risklab at Arcada University of Applied Sciences, Helsinki, Finland
  4. Nokia Bell Labs, Helsinki, Finland
