Mean-Field Analysis of Markov Models with Reward Feedback

  • Anton Stefanek
  • Richard A. Hayden
  • Mark Mac Gonagle
  • Jeremy T. Bradley
Conference paper

DOI: 10.1007/978-3-642-30782-9_14

Part of the Lecture Notes in Computer Science book series (LNCS, volume 7314)
Cite this paper as:
Stefanek A., Hayden R.A., Mac Gonagle M., Bradley J.T. (2012) Mean-Field Analysis of Markov Models with Reward Feedback. In: Al-Begain K., Fiems D., Vincent JM. (eds) Analytical and Stochastic Modeling Techniques and Applications. ASMTA 2012. Lecture Notes in Computer Science, vol 7314. Springer, Berlin, Heidelberg

Abstract

We extend the population continuous time Markov chain formalism so that the state space is augmented with continuous variables accumulated over time as functions of component populations. System feedback can be expressed using accumulations that in turn can influence the Markov chain behaviour via functional transition rates. We show how to obtain mean-field differential equations capturing means and higher-order moments of the discrete populations and continuous accumulation variables. We also provide first- and second-order convergence results and suggest a novel normal moment closure that can greatly improve the accuracy of means and higher moments.

We demonstrate how such a framework is suitable for modelling feedback from globally-accumulated quantities such as energy consumption, cost or temperature. Finally, we present a worked example modelling a hypothetical heterogeneous computing cluster and its interaction with air conditioning units.
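The feedback mechanism described in the abstract can be illustrated with a small numerical sketch. Below is a hypothetical toy model (the populations, rates, and feedback law are invented for illustration and are not taken from the paper): a fixed pool of N servers switches between on and off, an accumulation variable E grows with the running population (e.g. energy consumed), and the switch-off rate is a functional rate k(E) that depends on the accumulation, closing the feedback loop. The first-order mean-field ODEs are integrated with forward Euler.

```python
# Toy mean-field model with reward feedback (hypothetical illustration;
# the server counts, rate constants and feedback law k(E) are assumptions,
# not taken from the paper).
#
# Populations: n_on running servers, N - n_on idle servers.
# Accumulation: E(t) = integral of c * n_on dt  (e.g. energy consumed).
# Feedback: switch-off rate k(E) = k0 * (1 + alpha * E), so the
# accumulated quantity influences the chain via a functional rate.

def simulate(N=100, c=0.5, r=1.0, k0=0.1, alpha=0.01,
             t_end=10.0, dt=0.001):
    """Forward-Euler integration of the first-order mean-field ODEs:

        d n_on / dt = r * (N - n_on) - k(E) * n_on
        d E    / dt = c * n_on
    """
    n_on, E, t = float(N), 0.0, 0.0
    while t < t_end:
        k = k0 * (1.0 + alpha * E)       # functional rate driven by E
        dn = r * (N - n_on) - k * n_on   # on/off population balance
        dE = c * n_on                    # reward accumulates with usage
        n_on += dt * dn
        E += dt * dE
        t += dt
    return n_on, E

if __name__ == "__main__":
    n_on, E = simulate()
    print(f"n_on(10) = {n_on:.2f}, E(10) = {E:.2f}")
```

As E accumulates, k(E) grows, pushing the equilibrium on-population down over time; this is the qualitative effect of, say, rising temperature throttling a cluster, which the paper's heterogeneous-cluster example models in full.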

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Anton Stefanek (1)
  • Richard A. Hayden (1)
  • Mark Mac Gonagle (1)
  • Jeremy T. Bradley (1)
  1. Department of Computing, Imperial College London, London, UK
