Markov Chain Aggregation for Agent-Based Models

  • Book
  • © 2016

Overview

  • Introduces a new approach to modelling certain types of complex dynamical systems
  • Self-contained presentation at an introductory level
  • Useful as an advanced text and as a self-study guide
  • Includes supplementary material: sn.pub/extras

Part of the book series: Understanding Complex Systems (UCS)

Table of contents (10 chapters)

About this book

This self-contained text develops a Markov chain approach that enables the rigorous analysis of a class of microscopic models which specify the dynamics of complex systems at the individual level. It presents a general framework of aggregation in agent-based and related computational models, one that uses lumpability and information theory to link the micro and macro levels of observation. The starting point is a microscopic Markov chain description of the dynamical process, in complete correspondence with the dynamical behavior of the agent-based model (ABM); it is obtained by taking the set of all possible agent configurations as the state space of a huge Markov chain. For a class of models, an explicit formal representation of the resulting “micro-chain”, including microscopic transition rates, is derived using the random mapping representation of a Markov process. The probability distribution used to implement the stochastic part of the model, which defines the updating rule and governs the dynamics at the Markovian level, plays a crucial part in the analysis of “voter-like” models used in population genetics, evolutionary game theory and social dynamics. The book demonstrates that the problem of aggregation in ABMs - and the lumpability conditions in particular - can be embedded into a more general framework that employs information theory to identify different levels and relevant scales in complex dynamical systems.
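To make the aggregation idea concrete, the following minimal sketch (not taken from the book; the update rule and all names are chosen purely for illustration) builds the full micro-chain of a three-agent binary voter model on a complete graph, checks the strong lumpability condition for the partition of configurations by the number of agents in state 1, and prints the resulting macro chain.

# A minimal sketch, assuming an asynchronous voter update on a complete graph.
import itertools
import numpy as np

N = 3                                                 # number of agents
states = list(itertools.product([0, 1], repeat=N))    # all 2^N configurations
index = {s: i for i, s in enumerate(states)}

# Micro transition matrix: pick an updating agent uniformly at random,
# then let it copy the state of another agent chosen uniformly at random.
P = np.zeros((len(states), len(states)))
for s in states:
    for i in range(N):                  # agent that updates
        for j in range(N):              # agent it imitates
            if i == j:
                continue
            t = list(s)
            t[i] = s[j]
            P[index[s], index[tuple(t)]] += 1.0 / (N * (N - 1))

# Macro projection: lump configurations by their number of agents in state 1.
lumps = [[i for i, s in enumerate(states) if sum(s) == k] for k in range(N + 1)]

# Strong lumpability check (Kemeny-Snell condition): within each lump, every
# micro state must have the same total probability of moving into each lump.
for L in lumps:
    for M in lumps:
        col = P[np.ix_(L, M)].sum(axis=1)
        assert np.allclose(col, col[0]), "not lumpable under this partition"

# The aggregated macro chain on {0, 1, 2, 3} agents in state 1.
Q = np.array([[P[np.ix_(L, M)].sum(axis=1)[0] for M in lumps] for L in lumps])
print(Q)

Under this homogeneous-mixing rule the assertion passes: because the agents are exchangeable, the number of agents in state 1 already determines the transition probabilities between lumps, which is why the macro-level description remains Markovian in this simple case.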

Authors and Affiliations

  • Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany

    Sven Banisch
