Volume 8, Issue 3, pp 473-492
Date: 08 Jun 2010

Multi-objective Reinforcement Learning for Responsive Grids


Abstract

Grids organize resource sharing, a fundamental requirement of large scientific collaborations. Seamless integration of Grids into everyday use requires responsiveness, which can be provided by elastic Clouds, in the Infrastructure as a Service (IaaS) paradigm. This paper proposes a model-free resource provisioning strategy supporting both requirements. Provisioning is modeled as a continuous action-state space, multi-objective reinforcement learning (RL) problem, under realistic hypotheses; simple utility functions capture the high-level goals of users, administrators, and shareholders. The model-free approach falls under the general program of autonomic computing, where the incremental learning of the value function associated with the RL model provides the so-called feedback loop. The RL model includes an approximation of the value function through an Echo State Network. Experimental validation on a real dataset from the EGEE Grid shows that introducing a moderate level of elasticity is critical to ensure a high level of user satisfaction.
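To make the abstract's ingredients concrete, the sketch below shows one plausible way an Echo State Network can serve as a value-function approximator trained incrementally (here with a semi-gradient TD(0) update on the readout weights only). Everything specific in it is an assumption for illustration, not the authors' formulation: the reservoir size and hyperparameters, the 4-dimensional state-action feature vector, the scalarized "utility" used as the reward, and the TD(0) rule itself.

```python
import numpy as np

# Illustrative sketch only: an ESN whose fixed random reservoir provides
# nonlinear features of the (state, action) input stream, while a linear
# readout is learned online from a temporal-difference error. Dimensions,
# reward, and hyperparameters are hypothetical, not taken from the paper.

class ESNValueFunction:
    def __init__(self, n_inputs, n_reservoir=200, spectral_radius=0.9,
                 leak_rate=0.3, learning_rate=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random input and recurrent weights (never trained).
        self.W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
        W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
        # Rescale so the spectral radius stays below 1 (echo state property).
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W = W
        # Only this linear readout is learned.
        self.w_out = np.zeros(n_reservoir)
        self.x = np.zeros(n_reservoir)
        self.leak = leak_rate
        self.lr = learning_rate

    def update_state(self, u):
        """Advance the leaky reservoir with input u (state-action features)."""
        pre = self.W_in @ u + self.W @ self.x
        self.x = (1 - self.leak) * self.x + self.leak * np.tanh(pre)
        return self.x

    def td_update(self, phi_prev, reward, gamma=0.95):
        """Semi-gradient TD(0) step on the readout.

        phi_prev is the reservoir state when the previous action was taken;
        the current reservoir state provides the bootstrap target.
        """
        td_error = reward + gamma * (self.w_out @ self.x) - (self.w_out @ phi_prev)
        self.w_out += self.lr * td_error * phi_prev
        return td_error


if __name__ == "__main__":
    # Toy usage with made-up features, e.g. (queue length, deadline slack,
    # active VMs, proposed allocation), and a hypothetical scalarized
    # multi-objective utility as the reward signal.
    esn = ESNValueFunction(n_inputs=4)
    rng = np.random.default_rng(1)
    phi_prev = esn.update_state(rng.uniform(0, 1, 4)).copy()
    for step in range(1000):
        features = rng.uniform(0, 1, 4)
        reward = 0.7 * features[1] - 0.3 * features[2]
        esn.update_state(features)        # move to the next reservoir state
        esn.td_update(phi_prev, reward)   # learn the readout online
        phi_prev = esn.x.copy()
```

In this reading, the "feedback loop" of autonomic computing is simply the online TD update: each observed reward nudges the readout, so the provisioning policy's value estimates track the workload without an explicit model of the Grid.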