
Performance Evaluation of Parallel Inference of Large Phylogenetic Trees in Santos Dumont Supercomputer: A Practical Approach

  • Kary Ocaña
  • Carla Osthoff
  • Micaella Coelho
  • Marcelo Galheigo
  • Isabela Canuto
  • Douglas de Oliveira
  • Daniel de Oliveira
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1087)

Abstract

Modern high-throughput techniques in analytical chemistry and molecular biology produce massive amounts of data. The omics sciences cover complex areas such as next-generation sequencing for genomics, systems biology studies of biochemical pathways, and the discovery of novel bioactive compounds, and all of them can be fostered by high-performance computing. The effective use of supercomputers plays an important role in phyloinformatics, since most of its applications are memory- or compute-bound and consist of a large number of simple, regular computations that exhibit potentially massive parallelism. Phyloinformatics analyses cover phylogenomic and computational evolutionary studies of the genomes of organisms. RAxML is a popular phylogenomics package based on maximum likelihood algorithms for inferring phylogenetic trees, a task that demands substantial computational power to process large amounts of data. RAxML implements several phylogenetic likelihood function kernel variants (SSE3, AVX, AVX2) and offers coarse-grain and fine-grain parallelism via its MPI, PThreads, and hybrid MPI/PThreads versions. This paper explores the performance and scalability of RAxML on the Santos Dumont supercomputer. Machine learning analyses were applied to support the choice of features that lead to an efficient allocation of resources in Santos Dumont: features such as the type of cluster, the number of cores, the input data size, and historical RAxML performance results were used to generate the predictive models that guide the allocation of computational resources. In the experiments, the hybrid version of RAxML improves the speedup significantly while maintaining efficiency above 75%.
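As a rough illustration of the recommendation step described in the abstract, the sketch below trains a CART-style regression tree to predict RAxML wall-clock time from allocation features and then picks the cheapest candidate allocation, followed by the speedup and efficiency metrics referenced in the results. It uses scikit-learn's DecisionTreeRegressor as a stand-in for the paper's predictive models; the feature set (cluster type, core count, input size), the toy training data, and the candidate allocations are hypothetical placeholders, not the paper's actual pipeline or measurements.

    # Hypothetical sketch: a CART-style regressor that predicts RAxML wall-clock
    # time from allocation features, plus the classic speedup/efficiency metrics.
    # Feature names and the tiny in-line data set are illustrative only.
    from sklearn.tree import DecisionTreeRegressor

    # Each row: [cluster_type (0 or 1), cores, alignment_size_mb]
    X = [
        [0, 24, 50],
        [0, 48, 50],
        [0, 96, 200],
        [1, 24, 200],
        [1, 48, 400],
    ]
    # Observed wall-clock times in minutes for the runs above (made-up values).
    y = [120.0, 70.0, 160.0, 150.0, 110.0]

    model = DecisionTreeRegressor(max_depth=3, random_state=0)
    model.fit(X, y)

    # Recommend the cheapest configuration among a few candidate allocations.
    candidates = [[0, 48, 200], [0, 96, 200], [1, 48, 200]]
    predicted = model.predict(candidates)
    best = min(zip(candidates, predicted), key=lambda pair: pair[1])
    print("recommended allocation:", best[0], "predicted time (min):", round(best[1], 1))

    def speedup(t_serial: float, t_parallel: float) -> float:
        """Speedup S(p) = T(1) / T(p)."""
        return t_serial / t_parallel

    def efficiency(t_serial: float, t_parallel: float, p: int) -> float:
        """Parallel efficiency E(p) = S(p) / p."""
        return speedup(t_serial, t_parallel) / p

    # Example: a run on 48 cores that is 38x faster than the sequential run
    # has efficiency of about 0.79, i.e. above the 75% threshold cited above.
    print("efficiency:", round(efficiency(380.0, 10.0, 48), 2))

In this sketch the regression tree plays the role of the recommender: given a candidate allocation (cluster type, core count, input size), it returns an estimated runtime, and the smallest estimate drives the resource choice.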


Acknowledgements

Funding for this research was provided by the Brazilian sponsored projects CNPq/Universal (Grant no. 429328/2016-8) and FAPERJ/JCNE (Grant no. 232985/2017-03). We are also grateful for the comments made by the anonymous referees.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. National Laboratory of Scientific Computing, Petrópolis, Brazil
  2. Fluminense Federal University (UFF), Niterói, Brazil
