Quality-driven early stopping for explorative cluster analysis for big data

  • Manuel Fritz
  • Michael Behringer
  • Holger Schwarz
Special Issue Paper

Abstract

Data analysis has become a critical success factor for companies in all areas. Hence, it is necessary to quickly gain knowledge from available datasets, which becomes especially challenging in times of big data. Typical data mining tasks such as cluster analysis are very time-consuming, even when they run in highly parallel environments such as Spark clusters. To support data scientists in explorative data analysis processes, we need techniques that make data mining tasks even more efficient. To this end, we introduce a novel approach that stops clustering algorithms as early as possible while still achieving an adequate quality of the detected clusters. Our approach exploits the iterative nature of many clustering algorithms and uses a metric to decide after which iteration the mining task should stop. We present experimental results based on a Spark cluster using multiple huge datasets. The experiments reveal that our approach is able to speed up clustering by a factor of more than 800 by skipping iterations that provide only little gain in quality. This way, we find a good balance between the time required for data analysis and the quality of the analysis results.
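To illustrate the general pattern described here, the following minimal sketch applies a quality-driven early stop to Lloyd's k-means: after each iteration it evaluates a quality metric and terminates as soon as the relative improvement falls below a threshold. This is an illustrative sketch, not the authors' implementation; the function name kmeans_early_stop, the parameter min_rel_gain, and the choice of the within-cluster sum of squares as the quality metric are assumptions made for demonstration.

    import numpy as np

    def kmeans_early_stop(X, k, max_iter=300, min_rel_gain=1e-3, seed=0):
        """Lloyd's k-means with a quality-driven early stop (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
        prev_wcss = np.inf
        for it in range(1, max_iter + 1):
            # Assignment step: distance of every point to every center.
            dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Quality metric after this iteration: within-cluster sum of squares.
            wcss = (dists[np.arange(len(X)), labels] ** 2).sum()
            # Early stop: the relative quality gain of this iteration is negligible,
            # so further iterations are unlikely to be worth their cost.
            if np.isfinite(prev_wcss) and (prev_wcss - wcss) / prev_wcss < min_rel_gain:
                return centers, labels, it
            prev_wcss = wcss
            # Update step: move each center to the mean of its assigned points.
            for j in range(k):
                if (labels == j).any():
                    centers[j] = X[labels == j].mean(axis=0)
        return centers, labels, max_iter

In the distributed setting targeted by the paper, the same check would sit on top of an implementation such as Spark MLlib's k-means, where every skipped iteration avoids a full pass over the data. Note that the keyword list below mentions regression, which suggests the authors' metric is more elaborate than the fixed relative-improvement threshold used in this sketch.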

Keywords

Clustering · Big data · Early stop · Convergence · Regression

Notes

Acknowledgements

This research was partially funded by the Ministry of Science of Baden-Württemberg, Germany, for the Doctoral Program ‘Services Computing’. Some work presented in this paper was performed within the project ‘INTERACT’ as part of the Software Campus program. This project is funded by the German Federal Ministry of Education and Research (BMBF), Grant No. 01IS17051. Finally, we thank Dennis Tschechlov for his implementation work.

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  • Manuel Fritz (1), corresponding author
  • Michael Behringer (1)
  • Holger Schwarz (1)
  1. Institute for Parallel and Distributed Systems, University of Stuttgart, Stuttgart, Germany
