
Reducing the Human-in-the-Loop Component of the Scheduling of Large HTC Workloads

  • Frédéric Azevedo
  • Luc Gombert
  • Frédéric Suter
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11332)

Abstract

A characteristic common to major physics experiments is an ever-increasing need for computing resources to process experimental data and generate simulated data. The IN2P3 Computing Center provides its 2,500 users with about 35,000 cores and processes millions of jobs every month. This workload consists for the vast majority of sequential jobs corresponding to Monte Carlo simulations and related analyses of data produced by the Large Hadron Collider at CERN.

To schedule such a workload under specific constraints, the CC-IN2P3 relied for 20 years on an in-house job and resource management system, complemented by an operations team that can directly act on the decisions made by the job scheduler and modify them. This system was replaced in 2011, but legacy rules of thumb remained. Combined with other rules motivated by production constraints, they may work against the job scheduler's optimizations and force the operators to apply more corrective actions than they should.

In this experience report from a production system, we describe the decisions made since the end of 2016 to either transfer some of the actions performed by operators to the job scheduler or make these actions unnecessary. The physical partitioning of resources into distinct pools has been replaced by a logical partitioning that leverages scheduling queues. Then, some historical constraints, such as quotas, have been relaxed. For instance, the number of concurrent jobs from a given user group allowed to access a specific resource, e.g., a storage subsystem, has been progressively increased. Finally, the computation of the fair-share by the job scheduler has been modified to be less detrimental to small groups whose jobs have a low priority. The preliminary but promising results of these modifications mark the beginning of a long-term effort to change the operation procedures applied to the computing infrastructure of the IN2P3 Computing Center. A generic sketch of how such quotas and fair-share priorities can interact is given after the abstract.
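The abstract does not detail the scheduler's internals, so the following is only a minimal, illustrative sketch and not the CC-IN2P3 configuration. It assumes hypothetical names (STORAGE_QUOTA, fair_share_priority, pick_next) to show, in generic terms, how a per-group cap on concurrent jobs accessing a constrained resource can be combined with a fair-share priority when choosing the next job to start.

```python
# Illustrative toy model only: NOT the CC-IN2P3 scheduler or its configuration.
from collections import Counter

# Hypothetical per-group caps on jobs allowed to access a storage subsystem.
STORAGE_QUOTA = {"atlas": 4000, "cms": 4000, "small_group": 400}


def fair_share_priority(group, usage, share):
    """Generic fair-share idea: a group's priority decreases as its recent
    usage grows relative to its allotted share of the machine."""
    return share[group] / (usage[group] + 1.0)


def eligible(job, running):
    """Quota gate: refuse to start a job if its group already holds its full
    allowance of concurrent slots on the constrained resource."""
    per_group = Counter(j["group"] for j in running)
    return per_group[job["group"]] < STORAGE_QUOTA.get(job["group"], 0)


def pick_next(pending, running, usage, share):
    """Among jobs that pass the quota gate, start the one whose group
    currently has the highest fair-share priority."""
    candidates = [j for j in pending if eligible(j, running)]
    if not candidates:
        return None
    return max(candidates,
               key=lambda j: fair_share_priority(j["group"], usage, share))
```

In this toy model, raising a group's entry in STORAGE_QUOTA mimics the progressive relaxation of concurrency quotas described above, while adjusting the share values changes how strongly small, low-priority groups are penalized by the fair-share computation.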


Acknowledgements

The authors would like to thank the members of the Operation and Applications teams of the CC-IN2P3 for their help in the preparation of this experience report.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Frédéric Azevedo (1)
  • Luc Gombert (1)
  • Frédéric Suter (1)
  1. IN2P3 Computing Center, CNRS, Lyon-Villeurbanne, France
