Generic Methodology for the Design of Parallel Algorithms Based on Pattern Languages

  • A. Alejandra Serrano-Rubio
  • Amilcar Meneses-Viveros
  • Guillermo B. Morales-Luna
  • Mireya Paredes-López
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 948)

Abstract

A parallel system for solving complex computational problems involves multiple simultaneous instruction flows, communication structures, synchronisation and competition conditions between processes, as well as the mapping and balancing of workload across processing units. Both the algorithm design and the capabilities of the processing units affect the cost-performance ratio of any algorithm. We propose a generic methodology that captures the main characteristics of parallel algorithm design methodologies and incorporates the experience of expert programmers through pattern languages. A robust design that considers the relations between architectures and programs is crucial for implementing high-quality parallel algorithms. We aim for a methodology that exploits algorithmic concurrency and establishes optimal process allocation onto processing units, down to the lowest implementation details. Some basic examples, such as the k-means algorithm, are described to illustrate the effectiveness of our methodology. Our proposal identifies essential design patterns for finding models of Data Mining algorithms with strong self-adaptive mechanisms for homogeneous and heterogeneous parallel architectures.

Keywords

Pattern language · Methodology · Parallel algorithm design · Data Mining algorithms

Notes

Acknowledgment

The authors thank the Mexican CONACyT for its financial support, as well as ABACUS: Laboratory of Applied Mathematics and High-Performance Computing of the Mathematics Department of CINVESTAV-IPN. Our institution provided the facilities to accomplish this work.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • A. Alejandra Serrano-Rubio (1)
  • Amilcar Meneses-Viveros (1)
  • Guillermo B. Morales-Luna (1)
  • Mireya Paredes-López (2)
  1. Computer Science Department, Cinvestav-IPN, Mexico City, Mexico
  2. Mathematics Department, Cinvestav-IPN, Mexico City, Mexico
