Abstract
In this chapter, we discuss the implementations of the incremental clustering algorithms described in the previous chapters and provide recommendations on the choice of their parameters. In particular, we discuss the choice of parameters in the algorithm for finding starting cluster centers, which is an important component of all nonsmooth optimization based clustering algorithms. Finally, the data sets used in the numerical experiments are described and grouped into subclasses according to their size.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this chapter
Bagirov, A.M., Karmitsa, N., Taheri, S. (2020). Implementations and Data Sets. In: Partitional Clustering via Nonsmooth Optimization. Unsupervised and Semi-Supervised Learning. Springer, Cham. https://doi.org/10.1007/978-3-030-37826-4_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-37825-7
Online ISBN: 978-3-030-37826-4