Journal of Grid Computing, Volume 8, Issue 3, pp 473–492

Multi-objective Reinforcement Learning for Responsive Grids

  • Julien Perez
  • Cécile Germain-Renaud
  • Balazs Kégl
  • Charles Loomis

Abstract

Grids organize resource sharing, a fundamental requirement of large scientific collaborations. Seamless integration of Grids into everyday use requires responsiveness, which can be provided by elastic Clouds, in the Infrastructure as a Service (IaaS) paradigm. This paper proposes a model-free resource provisioning strategy supporting both requirements. Provisioning is modeled as a continuous action-state space, multi-objective reinforcement learning (RL) problem, under realistic hypotheses; simple utility functions capture the high level goals of users, administrators, and shareholders. The model-free approach falls under the general program of autonomic computing, where the incremental learning of the value function associated with the RL model provides the so-called feedback loop. The RL model includes an approximation of the value function through an Echo State Network. Experimental validation on a real data-set from the EGEE Grid shows that introducing a moderate level of elasticity is critical to ensure a high level of user satisfaction.

Keywords

Grid scheduling · Performance of systems · Machine learning · Reinforcement learning



Copyright information

© Springer Science+Business Media B.V. 2010

Authors and Affiliations

  • Julien Perez (1)
  • Cécile Germain-Renaud (1)
  • Balazs Kégl (2)
  • Charles Loomis (2)

  1. Laboratoire de Recherche en Informatique (LRI)/CNRS, Université Paris-Sud, Orsay Cedex, France
  2. Laboratoire de l’Accélérateur Linéaire (LAL)/CNRS, Université Paris-Sud, Orsay, France
