
Hybrid Distributed Computing Service Based on the DIRAC Interware

  • Victor Gergel
  • Vladimir Korenkov
  • Igor Pelevanyuk
  • Matvey Sapunov
  • Andrei Tsaregorodtsev
  • Petr Zrelov
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 706)

Abstract

Scientific data-intensive applications requiring the simultaneous use of large amounts of computing resources are becoming quite common. The properties of applications coming from different scientific domains, as well as their requirements on computing resources, vary widely. Many scientific communities have access to different types of computing resources. Often their workflows can benefit from a combination of High Throughput Computing (HTC) and High Performance Computing (HPC) centers, cloud or volunteer computing power. However, all these resources have different user interfaces and access mechanisms, which makes their combined usage difficult for users. This problem is addressed by projects developing software for the integration of various computing centers into a single coherent infrastructure, the so-called interware. One such software toolkit is the DIRAC interware. This product was very successful in solving the problems of large High Energy Physics experiments and was reworked to offer a general-purpose solution suitable for other scientific domains. Services based on the DIRAC interware are now offered to users of several distributed computing infrastructures. One of these services is deployed at the Joint Institute for Nuclear Research (JINR), Dubna. It aims at the integration of the computing resources of several grid and supercomputer centers as well as cloud providers. An overview of the DIRAC interware and its use for creating and operating a hybrid distributed computing system at JINR is presented in this article.
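
To illustrate the uniform access that the interware provides, below is a minimal sketch of how a user could submit a job through the DIRAC Python API. It assumes an installed and configured DIRAC client with a valid proxy; the job name, executable and CPU-time value are arbitrary examples, not details taken from the paper.

# Minimal job submission sketch (assumes a configured DIRAC client and a valid
# proxy; all names and parameter values below are illustrative only).
from DIRAC.Core.Base import Script
Script.parseCommandLine()  # initialize the DIRAC client environment

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("hello_dirac")                                   # hypothetical job name
job.setExecutable("/bin/echo", arguments="Hello from DIRAC")
job.setCPUTime(3600)                                         # requested CPU time in seconds

dirac = Dirac()
result = dirac.submitJob(job)  # returns an S_OK/S_ERROR style dictionary
if result["OK"]:
    print("Submitted job with ID:", result["Value"])
else:
    print("Submission failed:", result["Message"])

The same submission interface is used regardless of whether the job is eventually dispatched to a grid site, a cloud, or a supercomputer, which is the uniform-access property described above.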

Keywords

Grid computing · Hybrid distributed computing systems · Supercomputers · DIRAC


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Victor Gergel (1)
  • Vladimir Korenkov (2, 3)
  • Igor Pelevanyuk (2)
  • Matvey Sapunov (4)
  • Andrei Tsaregorodtsev (3, 5)
  • Petr Zrelov (2, 3)

  1. Lobachevsky State University, Nizhni Novgorod, Russia
  2. Joint Institute for Nuclear Research, Dubna, Russia
  3. Plekhanov Russian Economics University, Moscow, Russia
  4. Aix-Marseille University, Marseille, France
  5. CPPM, Aix-Marseille University, CNRS/IN2P3, Marseille, France
