
Empirical Approximation-Estimation Algorithms in Markov Games

Chapter in: Zero-Sum Discrete-Time Markov Games with Unknown Disturbance Distribution

Abstract

This chapter proposes an empirical approximation-estimation algorithm for difference-equation game models (see Sect. 1.1.1) whose evolution is given by

$$ x_{t+1}=F(x_{t},a_{t},b_{t},\xi_{t}),\quad t\in\mathbb{N}_{0}, $$

where \(\{\xi_{t}\}\) is a sequence of observable i.i.d. random variables defined on a probability space \((\varOmega,\mathscr{F},P)\), taking values in an arbitrary Borel space \(S\), with common unknown distribution \(\theta\in\mathbb{P}(S)\).
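The estimation step behind such an algorithm uses the empirical distribution of the observed disturbances as a surrogate for the unknown \(\theta\). The Python sketch below is a minimal illustration under assumed placeholders: a hypothetical dynamics F, uniformly drawn actions standing in for the players' strategies, and a standard normal law standing in for the unknown \(\theta\). It only shows how an empirical measure \(\hat{\theta}_t\) is formed from the observed \(\xi_0,\dots,\xi_{t-1}\) and used to approximate expectations; it is not the book's actual algorithm.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dynamics F; the chapter keeps F abstract, this is only a placeholder.
def F(x, a, b, xi):
    return x + a - b + xi

# Simulate the game evolution, recording the observable disturbances xi_0, ..., xi_{T-1}.
T = 1000
x = 0.0
observed = []
for t in range(T):
    a = rng.uniform(-1.0, 1.0)   # player 1 action (placeholder policy)
    b = rng.uniform(-1.0, 1.0)   # player 2 action (placeholder policy)
    xi = rng.normal(0.0, 1.0)    # draws from the true but unknown distribution theta
    observed.append(xi)
    x = F(x, a, b, xi)

# Empirical estimate theta_hat: the measure placing mass 1/T on each observed xi_k.
theta_hat = np.asarray(observed)

# Any expectation E_theta[g(xi)] is approximated by the sample mean of g under theta_hat,
# e.g. the expected disturbance and the probability that xi exceeds zero.
print("estimated E[xi]    :", theta_hat.mean())
print("estimated P(xi > 0):", (theta_hat > 0).mean())

Under the i.i.d. assumption, the empirical measure converges weakly to \(\theta\) almost surely, which is what justifies plugging \(\hat{\theta}_t\) into the approximation step in place of the unknown distribution.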


Copyright information

© 2020 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Minjárez-Sosa, J.A. (2020). Empirical Approximation-Estimation Algorithms in Markov Games. In: Zero-Sum Discrete-Time Markov Games with Unknown Disturbance Distribution. SpringerBriefs in Probability and Mathematical Statistics. Springer, Cham. https://doi.org/10.1007/978-3-030-35720-7_4
