
Computing Services for LHC: From Clusters to Grids

Chapter
Part of The Frontiers Collection book series (FRONTCOLL)

Abstract

This chapter traces the development of the computing service for Large Hadron Collider (LHC) data analysis at CERN over the 10 years prior to the start-up of the accelerator. It explores the main factors that influenced the choice of technology, a data-intensive computational Grid, provides a brief explanation of the fundamentals of Grid computing, and records some of the technical and organisational challenges that had to be overcome to meet the capacity, performance, and usability requirements of the LHC experiments.

Keywords

Large Hadron Collider · High Energy Physics · Computing Service · Grid Service · Grid Infrastructure

References

  1. Baud, J.P., et al.: SHIFT, the Scalable Heterogeneous Integrated Facility for HEP Computing. In: Proceedings of the International Conference on Computing in High Energy Physics, Tsukuba, Japan, March 1991. Universal Academy Press, Tokyo (1991)
  2. Bethke, S. (Chair), Calvetti, M., Hoffmann, H.F., Jacobs, D., Kasemann, M., Linglin, D.: Report of the Steering Group of the LHC Computing Review. CERN/LHCC/2001-004, 22 February 2001
  3. Bird, I. (ed.): Baseline Services Working Group Report. CERN-LCG-PEB-2005-09. http://lcg.web.cern.ch/LCG/peb/bs/BSReport-v1.0.pdf
  4. Carminati, F. (ed.): Common Use Cases for a HEP Common Application Layer. CERN-LHC-SC2-20-2002, May 2002
  5. Defanti, T.A., Foster, I., Papka, M.E., Stevens, R., Kuhfuss, T.: Overview of the I-WAY: Wide-Area Visual Supercomputing. Int. J. Supercomput. Appl. High Perform. Comput. 10(2/3), 123–131 (1996)
  6. Delfino, M., Robertson, L. (eds.): Solving the LHC Computing Challenge: A Leading Application of High Throughput Computing Fabrics combined with Computational Grids. Technical Proposal. CERN-IT-DLO-2001-03. http://lcg.web.cern.ch/lcg/peb/Documents/CERN-IT-DLO-2001-003.doc (version 1.1)
  7. Elmsheuser, J., et al.: Distributed analysis using GANGA on the EGEE/LCG infrastructure. J. Phys.: Conf. Ser. 119, 072014 (2008)
  8. Enabling Grids for E-Science. Information Society Project INFSO-RI-222667. http://www.eu-egee.org/fileadmin/documents/publications/EGEEIII_Publishable_summary.pdf
  9. Ernst, M., Fuhrmann, P., Gasthuber, M., Mkrtchyan, T., Waldmann, C.: dCache – a distributed storage data caching system. In: Proceedings of Computing in High Energy and Nuclear Physics 2001, Beijing, China. Science Press, New York (2001)
  10. Foster, D.G. (ed.): LHC Tier-0 to Tier-1 High-Level Network Architecture. CERN, 2005. https://www.cern.ch/twiki/bin/view/LHCOPN/LHCopnArchitecture/LHCnetworkingv2.dgf.doc
  11. Foster, I., Kesselman, C.: Globus: A Metacomputing Infrastructure Toolkit. Int. J. Supercomput. Appl. 11(2), 115–128 (1997). http://www.globus.org/alliance/publications/papers.php#globus
  12. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Francisco (1999). ISBN 1-55860-475-8
  13. Grid Physics Network – GriPhyN Project. http://www.mcs.anl.gov/research/project_detail.php?id=11
  14.
  15. Knobloch, J. (ed.): The LHC Computing Grid Technical Design Report. CERN-LHCC-2005-024, CERN, June 2005. ISBN 92-9083-253-3
  16. Laure, E., et al.: Programming the Grid with gLite. EGEE Technical Report EGEE-TR-2006-001. http://cdsweb.cern.ch/search.py?p=EGEE-TR-2006-001
  17. LHC Computing Grid goes Online. CERN Press Release, 29 September 2003. http://press.web.cern.ch/press/PressReleases/Releases2003/PR13.03ELCG-1.html
  18. Baud, J.P.: Light weight Disk Pool Manager status and plans. EGEE 3 Conference, Athens, April 2005. https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm
  19. Lo Presti, G., Barring, O., Earl, A., Garcia Rioja, R.M., Ponce, S., Taurelli, G., Waldron, D., Coelho Dos Santos, M.: CASTOR: A Distributed Storage Resource Facility for High Performance Data Processing at CERN. In: Proceedings of the 24th IEEE Conference on Mass Storage Systems and Technologies, pp. 275–280. IEEE Computer Society (2007)
  20. Newman, H. (ed.): Models of Networked Analysis at Regional Centres for LHC Experiments (MONARC) Phase 2 Report. CERN/LCB 2000-001, CERN, 24 March 2000
  21. Proposal for Building the LHC Computing Environment at CERN. CERN/2379/Rev., 5 September 2001. http://cdsweb.cern.ch/record/35736/files/CM-P00083735-e.pdf
  22. The Condor Project. http://www.cs.wisc.edu/condor
  23. The European Data Grid Project. European Commission Information Society Project IST-2000-25182. http://eu-datagrid.web.cern.ch/eu-dataGrid/Intranet_Home.htm
  24. The European Grid Infrastructure. http://www.egi.eu/
  25. The Globus Alliance. http://www.globus.org/
  26. The Open Science Grid. http://www.opensciencegrid.org/
  27. The Particle Physics Data Grid. http://ppdg.net/
  28. Shoshani, A., et al.: Storage Resource Managers: Recent International Experience on Requirements and Multiple Co-Operating Implementations. In: Proceedings of the 24th IEEE Conference on Mass Storage Systems and Technologies (MSST 2007), pp. 47–59 (2007)
  29. The Virtual Data Toolkit. http://vdt.cs.wisc.edu/
  30. Scalla: Scalable Cluster Architecture for Low Latency Access Using xrootd and olbd Servers. http://xrootd.slac.stanford.edu/papers/Scalla-Intro.pdf

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  1. CERN, Geneva, Switzerland
