
Parallelization of Algorithms for Mining Data from Distributed Sources

  • Ivan Kholod
  • Andrey Shorov
  • Maria Efimova
  • Sergei Gorlatch
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11657)

Abstract

We suggest an approach to optimizing data mining in modern applications that work on distributed data. We formally transform a high-level functional representation of a data-mining algorithm into a parallel implementation that performs as much computation as possible locally at the data sources, rather than accumulating all data for processing at a central location as in the traditional MapReduce approach. Our approach avoids the main disadvantages of state-of-the-art MapReduce frameworks in the context of distributed data: increased run time, high network traffic, and unauthorized access to data. We use the popular Naive Bayes data-mining algorithm to illustrate our approach and evaluate it experimentally. Our experiments confirm that the Naive Bayes implementation developed with our approach significantly outperforms the traditional MapReduce-based implementation in terms of run time and network traffic.
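To illustrate the general idea, the following minimal Python sketch (not taken from the paper; the names local_counts, merge_counts and the record format are illustrative assumptions) shows why Naive Bayes lends itself to local processing: training reduces to sufficient statistics, so each data source can compute its own counts and only these small summaries, merged by an associative operation, need to travel over the network.

    # Minimal sketch, not the authors' implementation: Naive Bayes training
    # as per-source sufficient statistics plus an associative merge.
    from collections import Counter, defaultdict

    def local_counts(records):
        """Compute class and (class, attribute)->value counts on one data source.
        records: list of (label, {attribute: value, ...}) pairs."""
        class_counts = Counter()
        feature_counts = defaultdict(Counter)
        for label, features in records:
            class_counts[label] += 1
            for attr, value in features.items():
                feature_counts[(label, attr)][value] += 1
        return class_counts, feature_counts

    def merge_counts(a, b):
        """Associative merge of two partial results; only these summaries
        (not the raw records) leave the data sources."""
        class_a, feat_a = a
        class_b, feat_b = b
        merged_class = class_a + class_b
        merged_feat = defaultdict(Counter, {k: Counter(v) for k, v in feat_a.items()})
        for key, counter in feat_b.items():
            merged_feat[key] = merged_feat[key] + counter
        return merged_class, merged_feat

    # Usage: each source runs local_counts on its own data; a coordinator folds
    # the partial summaries with merge_counts and derives probabilities from totals.
    source1 = [("spam", {"word": "buy"}), ("ham", {"word": "hi"})]
    source2 = [("spam", {"word": "buy"})]
    class_totals, feature_totals = merge_counts(local_counts(source1), local_counts(source2))

Because merge_counts is associative, the partial results can be combined in any grouping and order, which is the homomorphism property that the paper's transformation exploits.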

Keywords

Parallel algorithms · Distributed algorithms · Data mining · Distributed data mining · MapReduce · Homomorphisms


Acknowledgments

We thank the anonymous referees for their very helpful remarks on a preliminary version of this paper. This work was supported by the Ministry of Education and Science of the Russian Federation in the framework of the state order “Organization of Scientific Research”, task #2.6113.2017/BУ, and by the German Ministry of Education and Research (BMBF) in the framework of project HPC2SE at the University of Muenster.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Ivan Kholod (1)
  • Andrey Shorov (1)
  • Maria Efimova (1)
  • Sergei Gorlatch (2)
  1. Saint Petersburg Electrotechnical University “LETI”, Saint Petersburg, Russia
  2. University of Muenster, Muenster, Germany
