CMS Data Transfer Tests Towards LHC Data Taking

  • Daniele Bonacorsi
Conference paper

Abstract

The CMS experiment has developed a Computing Model designed as a distributed system of computing resources and services relying on Grid technologies. The Data Management part of this model has been established and is being constantly exercised and improved through several kinds of computing challenges, among them CMS-specific exercises. Once the LHC starts, CMS will need to manage tens of petabytes of data, thousands of datasets and continuous transfers among 170 CMS institutions. To be prepared for this, in early 2007 the CMS experiment deployed a traffic load generator infrastructure, the LoadTest, aimed at providing the CMS Computing Centres (Tiers in the Worldwide LHC Computing Grid) with a means for debugging, load-testing and commissioning the data transfer routes among them. In addition, a Debugging Data Transfers (DDT) Task Force is being created to coordinate the debugging of data transfer links during the preparation period and throughout the Computing Software and Analysis challenge in 2007 (CSA07). The task force aims to commission the most crucial transfer routes among CMS Tiers by designing and enforcing a clear procedure to debug problematic links. Experiences with the CMS LoadTest, both standalone and in the context of the first outcomes of the DDT program, are reviewed and discussed.
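As an illustration of the link-commissioning idea sketched above, the toy Python script below repeatedly pushes dummy payloads over a few Tier-to-Tier routes and flags a route as commissioned once its average rate exceeds a target. All site names, payload sizes, thresholds and the random transfer model are assumptions chosen purely for illustration; the actual LoadTest and DDT exercises run on the CMS data transfer system and Grid storage and transfer tools, not on code like this.

"""
Illustrative sketch only: a toy "link load generator" mimicking the idea of
the CMS LoadTest / DDT exercises -- repeatedly injecting dummy payloads over
Tier-to-Tier links and marking a link as commissioned once it sustains a
target rate. Names, thresholds and the transfer model are assumptions.
"""

import random
from dataclasses import dataclass, field

# Hypothetical commissioning threshold (MB/s), test cycles and payload size.
TARGET_RATE_MBS = 20.0
CYCLES = 10
PAYLOAD_MB = 2000  # size of each dummy dataset injected per cycle


@dataclass
class Link:
    source: str
    destination: str
    rates: list = field(default_factory=list)

    def run_cycle(self) -> None:
        # Toy model: pretend the transfer took a random amount of time.
        duration_s = random.uniform(50, 200)
        self.rates.append(PAYLOAD_MB / duration_s)

    @property
    def average_rate(self) -> float:
        return sum(self.rates) / len(self.rates) if self.rates else 0.0

    @property
    def commissioned(self) -> bool:
        return self.average_rate >= TARGET_RATE_MBS


def main() -> None:
    # A few example routes between CMS-like Tier centres (illustrative names).
    links = [
        Link("T0_CERN", "T1_CNAF"),
        Link("T1_CNAF", "T2_Bologna"),
        Link("T1_FNAL", "T2_Wisconsin"),
    ]
    for _ in range(CYCLES):
        for link in links:
            link.run_cycle()
    for link in links:
        status = "COMMISSIONED" if link.commissioned else "needs debugging"
        print(f"{link.source} -> {link.destination}: "
              f"{link.average_rate:.1f} MB/s ({status})")


if __name__ == "__main__":
    main()

In the real exercises the equivalent of run_cycle is a continuous stream of transfer requests among all participating Tiers, and the pass/fail decision is taken on sustained rates observed over days rather than on a handful of cycles.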

Keywords

Data Transfer, Grid Technology, Continuous Transfer, Transfer Route, Monte Carlo Simulated Data

Copyright information

© Springer-Verlag US 2010

Authors and Affiliations

  • Daniele Bonacorsi, University of Bologna, Bologna, Italy