Enhancing Energy Production with Exascale HPC Methods

  • Rafael Mayo-García
  • José J. Camata
  • José M. Cela
  • Danilo Costa
  • Alvaro L. G. A. Coutinho
  • Daniel Fernández-Galisteo
  • Carmen Jiménez
  • Vadim Kourdioumov
  • Marta Mattoso
  • Thomas Miras
  • José A. Moríñigo
  • Jorge Navarro
  • Philippe O. A. Navaux
  • Daniel de Oliveira
  • Manuel Rodríguez-Pascual
  • Vítor Silva
  • Renan Souza
  • Patrick Valduriez
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 697)

Abstract

High Performance Computing (HPC) resources have become a key enabler for tackling ever more ambitious challenges in many disciplines. This step forward relies on an explosion in available parallelism and on the use of special-purpose processors. With that goal, the HPC4E project applies new exascale HPC techniques to energy industry simulations, customizing them where necessary, and advances the state of the art in the exascale HPC simulations required for different energy sources. This paper presents a general overview of these methods together with some specific preliminary results.

Acknowledgments

The research leading to these results has received funding from the European Union’s Horizon 2020 Programme (2014-2020) under the HPC4E Project (www.hpc4e.eu), grant agreement no. 689772, from the Spanish Ministry of Economy and Competitiveness under the CODEC2 project (TIN2015-63562-R), and from the Brazilian Ministry of Science, Technology and Innovation through Rede Nacional de Pesquisa (RNP). Computer time on the Endeavour cluster was provided by Intel Corporation, which enabled us to obtain the experimental results presented here on uncertainty quantification in seismic imaging.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Rafael Mayo-García (1)
  • José J. Camata (2)
  • José M. Cela (3)
  • Danilo Costa (2)
  • Alvaro L. G. A. Coutinho (2)
  • Daniel Fernández-Galisteo (1)
  • Carmen Jiménez (1)
  • Vadim Kourdioumov (1)
  • Marta Mattoso (2)
  • Thomas Miras (2)
  • José A. Moríñigo (1)
  • Jorge Navarro (1)
  • Philippe O. A. Navaux (4)
  • Daniel de Oliveira (5)
  • Manuel Rodríguez-Pascual (1)
  • Vítor Silva (2)
  • Renan Souza (2)
  • Patrick Valduriez (6)
  1. Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, Madrid, Spain
  2. COPPE/Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
  3. Barcelona Supercomputing Center - Centro Nacional de Supercomputación, Barcelona, Spain
  4. Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, Brazil
  5. Fluminense Federal University, Niterói, Brazil
  6. Zenith Team, Inria and LIRMM, Montpellier, France