Abstract
Deep neural maps (DNMs) are unsupervised learning and visualization methods that combine autoencoders with self-organizing maps. An autoencoder is a deep artificial neural network widely used for dimension reduction and feature extraction in machine learning tasks. The self-organizing map is a neural network for unsupervised learning often used for clustering and for representing high-dimensional data on a 2D grid. Deep neural maps have shown improved performance compared to standalone self-organizing maps on clustering tasks. The key idea is that a deep neural map outperforms a standalone self-organizing map in two respects: (1) better convergence behavior, because the autoencoder removes noisy or superfluous dimensions from the input data, and (2) faster training, because the cluster detection part of the DNM operates in a lower-dimensional latent space. Traditionally, only the basic autoencoder has been considered for use in deep neural maps. However, many other kinds of autoencoders exist, such as the convolutional and the denoising autoencoder, and here we examine the effects of various autoencoders on the performance of the resulting deep neural maps. We investigate five types of autoencoders as part of our deep neural maps using three different data sets. Overall, we show that deep neural maps perform better than standalone self-organizing maps both in terms of improved convergence behavior and faster training. Additionally, we show that deep neural maps using the basic autoencoder outperform deep neural maps based on other autoencoders on non-image data. To our surprise, we found that deep neural maps based on contractive autoencoders outperformed deep neural maps based on convolutional autoencoders on image data.
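To make the two-stage idea concrete, the following is a minimal sketch of a DNM-style pipeline: an autoencoder is trained first, and a self-organizing map is then trained on the latent codes rather than the raw inputs. It assumes TensorFlow/Keras for the autoencoder and the third-party MiniSom package for the SOM; the layer sizes, grid size, and training schedule are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal DNM-style sketch (illustrative assumptions, not the paper's setup):
# 1) train a basic autoencoder, 2) train a SOM on the latent representations.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from minisom import MiniSom

# Toy data: 1000 samples with 64 (possibly noisy) input dimensions.
X = np.random.rand(1000, 64).astype("float32")

# Basic fully connected autoencoder with an 8-dimensional latent space.
latent_dim = 8
inputs = keras.Input(shape=(64,))
encoded = layers.Dense(32, activation="relu")(inputs)
encoded = layers.Dense(latent_dim, activation="relu")(encoded)
decoded = layers.Dense(32, activation="relu")(encoded)
decoded = layers.Dense(64, activation="sigmoid")(decoded)

autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, batch_size=32, verbose=0)

# Train the SOM on the lower-dimensional latent codes instead of the raw
# inputs; the SOM sees fewer, less noisy dimensions, which is the source of
# the convergence and speed gains described in the abstract.
Z = encoder.predict(X, verbose=0)
som = MiniSom(10, 10, latent_dim, sigma=1.0, learning_rate=0.5)
som.random_weights_init(Z)
som.train_random(Z, num_iteration=5000)

# Map each sample to its best-matching unit on the 10x10 grid.
bmus = np.array([som.winner(z) for z in Z])
print(bmus[:5])
```

Swapping the basic autoencoder for a denoising, contractive, or convolutional variant only changes the first stage; the SOM is always trained on whatever latent space the chosen autoencoder produces.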