Journal of Grid Computing, Volume 8, Issue 3, pp 473–492

Multi-objective Reinforcement Learning for Responsive Grids

Authors

  • Julien Perez
    • Laboratoire de Recherche en Informatique (LRI)/CNRS, Université Paris-Sud
  • Cécile Germain-Renaud
    • Laboratoire de Recherche en Informatique (LRI)/CNRS, Université Paris-Sud
  • Balázs Kégl
    • Laboratoire de l’Accélérateur Linéaire (LAL)/CNRS, Université Paris-Sud
  • Charles Loomis
    • Laboratoire de l’Accélérateur Linéaire (LAL)/CNRS, Université Paris-Sud
Article

DOI: 10.1007/s10723-010-9161-0

Cite this article as:
Perez, J., Germain-Renaud, C., Kégl, B. et al. J Grid Computing (2010) 8: 473. doi:10.1007/s10723-010-9161-0

Abstract

Grids organize resource sharing, a fundamental requirement of large scientific collaborations. Seamless integration of Grids into everyday use requires responsiveness, which elastic Clouds can provide in the Infrastructure as a Service (IaaS) paradigm. This paper proposes a model-free resource provisioning strategy that supports both requirements. Provisioning is modeled as a multi-objective reinforcement learning (RL) problem over a continuous state-action space, under realistic hypotheses; simple utility functions capture the high-level goals of users, administrators, and shareholders. The model-free approach falls under the general program of autonomic computing, where the incremental learning of the value function associated with the RL model provides the so-called feedback loop. The RL model approximates the value function with an Echo State Network. Experimental validation on a real dataset from the EGEE Grid shows that introducing a moderate level of elasticity is critical to ensuring a high level of user satisfaction.
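
For readers who want a concrete picture of the technique named in the abstract, below is a minimal sketch of an Echo State Network (ESN) used as a value-function approximator inside a temporal-difference loop. It is written in plain NumPy; the reservoir size, constants, state features, and reward signal are all illustrative assumptions, not the paper's actual architecture or workload model.

```python
import numpy as np

rng = np.random.default_rng(0)

class ESNValueFunction:
    """Echo State Network as a value-function approximator (sketch).

    A fixed random reservoir maps the continuous state vector to a
    high-dimensional "echo state"; only the linear readout weights
    are trained, here by TD(0).
    """

    def __init__(self, n_inputs, n_reservoir=100, spectral_radius=0.9, leak=0.3):
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        # Rescale the recurrent weights so their spectral radius is
        # below 1, the usual sufficient condition for the echo state
        # property (the reservoir forgets its initial condition).
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W = W
        self.leak = leak
        self.x = np.zeros(n_reservoir)      # reservoir state
        self.w_out = np.zeros(n_reservoir)  # trainable linear readout

    def update(self, state):
        """Advance the reservoir with a leaky-integrator update."""
        pre = self.W_in @ state + self.W @ self.x
        self.x = (1 - self.leak) * self.x + self.leak * np.tanh(pre)
        return self.x

# TD(0) over a toy stream of transitions. The state features stand in
# for observed grid/queue load and the reward for a utility signal;
# both are hypothetical placeholders.
esn = ESNValueFunction(n_inputs=4)
gamma, alpha = 0.95, 0.01
prev_x, prev_value = None, 0.0
for t in range(200):
    state = rng.normal(size=4)   # hypothetical load features
    reward = -abs(state[0])      # hypothetical utility signal
    x = esn.update(state).copy()
    value = esn.w_out @ x
    if prev_x is not None:
        # TD(0): move the previous value estimate toward the
        # one-step bootstrapped target.
        td_error = reward + gamma * value - prev_value
        esn.w_out += alpha * td_error * prev_x
    prev_x, prev_value = x, value
```

Because only the readout weights are trained, each update is a single vector operation, one reason ESNs suit the incremental, online learning that an autonomic feedback loop requires.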

Keywords

Grid scheduling · Performance of systems · Machine learning · Reinforcement learning

Copyright information

© Springer Science+Business Media B.V. 2010