From the CMS Computing Experience in the WLCG STEP’09 Challenge to the First Data Taking of the LHC Era

  • D. Bonacorsi
  • O. Gutsche
Conference paper

Abstract

The Worldwide LHC Computing Grid (WLCG) project decided in March 2009 to perform scale tests of parts of its overall Grid infrastructure before the start of LHC data taking. The “Scale Test for the Experiment Program” (STEP’09) was carried out mainly in June 2009, with selected follow-up tests in September–October 2009, and emphasized the simultaneous testing of the computing systems of all four LHC experiments. CMS tested its Tier-0 tape-writing and processing capabilities. The Tier-1 tape systems were stress-tested using the complete range of Tier-1 workflows: transfer from the Tier-0 and custody of data on tape, processing and subsequent archival, redistribution of datasets amongst all Tier-1 sites, as well as burst transfers of datasets to Tier-2 sites. The Tier-2 analysis capacity was tested using bulk analysis job submissions to backfill normal user activity. In this paper, we report on the tests performed and present their post-mortem analysis.

Keywords

Disk Cache · Tape System · Distributed Computing Infrastructure · Total Data Volume · Lumi Section



Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • D. Bonacorsi (1)
  • O. Gutsche (2)
  1. Physics Department, University of Bologna, Bologna, Italy
  2. Fermilab, Chicago, USA
