Jarzynski-Type Equalities in Gambling: Role of Information in Capital Growth


Abstract

We study capital growth in gambling with (and without) side information and memory effects. We derive several equalities for gambling that are similar in form to the Jarzynski equality and its extension to systems under feedback control. These relations provide new measures for quantifying the effect of information on the statistics of capital growth in gambling. We discuss the implications of the equalities and show that they reproduce the known upper bounds on the average capital growth rate.


Notes

  1. The analogy between work extraction in a feedback-controlled system and gambling was recently discussed in Ref. [25].

  2. Card counting is a method of improving a gambler’s return by utilizing information about the cards already dealt.

  3. Throughout this paper, a superscript denotes a collection of variables.

  4. In the context of non-equilibrium physics, \(s_y\) is called the trajectory (stochastic) entropy [12].

  5. In the discussion of non-equilibrium equalities, \(i_{xy}\) was introduced in Ref. [16]. Equation (25) is a counterpart of the generalized Jarzynski equality under feedback control.

  6. As another extension, we can also formulate the Jarzynski-type equalities in gambling with more complex information structures using Bayesian networks. The proof of the equality is almost the same; we just have to replace \(P(x^n||y^n)\) with \(P_c(x^n||y^n) \equiv \prod _i^n P(x_i|\mathrm{pa}(x_i))\), and similarly for \(f(y^n||x^n)\) and \(o(y^n||x^n)\). See Ref. [38].

  7. The quantities f and o in Eq. (81) correspond to \((1+y f)Q(y)\) and \(1/Q(y)\), respectively, in the case of binary betting discussed in the previous subsections.

  8. Mathematical aspects of blackjack are comprehensively analyzed in a recent book by Werthamer [42].

References

  1. Andrieux, D., Gaspard, P.: Fluctuation theorem for currents and Schnakenberg network theory. J. Stat. Phys. 127(1), 107–131 (2007)

  2. Campisi, M., Talkner, P., Hänggi, P.: Fluctuation theorem for arbitrary open quantum systems. Phys. Rev. Lett. 102, 210401 (2009)

  3. Crooks, G.E.: Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Phys. Rev. E 60(3), 2721 (1999)

  4. Evans, D.J., Cohen, E.G.D., Morriss, G.P.: Probability of second law violations in shearing steady states. Phys. Rev. Lett. 71(15), 2401 (1993)

  5. Evans, D.J., Searles, D.J.: Equilibrium microstates which generate second law violating steady states. Phys. Rev. E 50(2), 1645 (1994)

  6. Harada, T., Sasa, S.: Equality connecting energy dissipation with a violation of the fluctuation-response relation. Phys. Rev. Lett. 95(13), 130602 (2005)

  7. Hatano, T., Sasa, S.: Steady-state thermodynamics of Langevin systems. Phys. Rev. Lett. 86(16), 3463 (2001)

  8. Jarzynski, C.: Nonequilibrium equality for free energy differences. Phys. Rev. Lett. 78(14), 2690 (1997)

  9. Jarzynski, C., Wójcik, D.K.: Classical and quantum fluctuation theorems for heat exchange. Phys. Rev. Lett. 92, 230602 (2004)

  10. Kurchan, J.: A quantum fluctuation theorem. Preprint, arXiv:cond-mat/0007360 (2000)

  11. Lebowitz, J.L., Spohn, H.: A Gallavotti–Cohen-type symmetry in the large deviation functional for stochastic dynamics. J. Stat. Phys. 95(1–2), 333–365 (1999)

  12. Seifert, U.: Entropy production along a stochastic trajectory and an integral fluctuation theorem. Phys. Rev. Lett. 95, 040602 (2005)

  13. Tasaki, H.: Jarzynski relations for quantum systems and some applications. Preprint, arXiv:cond-mat/0009244 (2000)

  14. Sagawa, T., Ueda, M.: Second law of thermodynamics with discrete quantum feedback control. Phys. Rev. Lett. 100(8), 080403 (2008)

  15. Sagawa, T., Ueda, M.: Minimal energy cost for thermodynamic information processing: measurement and information erasure. Phys. Rev. Lett. 102(25), 250602 (2009)

  16. Sagawa, T., Ueda, M.: Generalized Jarzynski equality under nonequilibrium feedback control. Phys. Rev. Lett. 104(9), 090602 (2010)

  17. Barato, A.C., Seifert, U.: Unifying three perspectives on information processing in stochastic thermodynamics. Phys. Rev. Lett. 112(9), 090601 (2014)

  18. Barato, A.C., Hartich, D., Seifert, U.: Efficiency of cellular information processing. New J. Phys. 16(10), 103024 (2014)

  19. Barato, A.C., Seifert, U.: An autonomous and reversible Maxwell’s demon. EPL (Europhys. Lett.) 101(6), 60001 (2013)

  20. Deffner, S., Jarzynski, C.: Information processing and the second law of thermodynamics: an inclusive, Hamiltonian approach. Phys. Rev. X 3(4), 041003 (2013)

  21. Horowitz, J.M., Sandberg, H.: Second-law-like inequalities with information and their interpretations. New J. Phys. 16(12), 125007 (2014)

  22. Mandal, D., Jarzynski, C.: Work and information processing in a solvable model of Maxwell’s demon. Proc. Natl. Acad. Sci. 109(29), 11641–11645 (2012)

  23. Mandal, D., Quan, H.T., Jarzynski, C.: Maxwell’s refrigerator: an exactly solvable model. Phys. Rev. Lett. 111(3), 030602 (2013)

  24. Parrondo, J.M.R., Horowitz, J.M., Sagawa, T.: Thermodynamics of information. Nat. Phys. 11(2), 131–139 (2015)

  25. Vinkler, D.A., Permuter, H.H., Merhav, N.: Analogy between gambling and measurement-based work extraction. In: 2014 IEEE International Symposium on Information Theory (ISIT), pp. 1111–1115. IEEE (2014)

  26. Kelly, J.L.: A new interpretation of information rate. IRE Trans. Inform. Theory 2(3), 185–189 (1956)

  27. Shannon, C.E.: Communication theory of secrecy systems. Bell Syst. Tech. J. 28(4), 656–715 (1949)

  28. Thorp, E.O.: Optimal gambling systems for favorable games. Rev. Inst. Int. Stat. 37, 273–293 (1969)

  29. Thorp, E.O.: Portfolio choice and the Kelly criterion. In: Ziemba, W.T., Vickson, R.G. (eds.) Stochastic Models in Finance, pp. 599–619. Academic Press, New York (1971)

  30. Thorp, E.O.: Beat the Dealer: A Winning Strategy for the Game of Twenty-One. Vintage, New York (1966)

  31. Poundstone, W.: Fortune’s Formula: The Untold Story of the Scientific Betting System that Beat the Casinos and Wall Street. Macmillan, London (2010)

  32. Kramer, G.: Directed information for channels with feedback. Ph.D. thesis, University of Manitoba, Canada (1998)

  33. Massey, J.: Causality, feedback and directed information. In: Proceedings of the International Symposium on Information Theory and Its Applications (ISITA-90), pp. 303–305 (1990)

  34. Bell, R.M., Cover, T.M.: Competitive optimality of logarithmic investment. Math. Oper. Res. 5(2), 161–166 (1980)

  35. Breiman, L.: Optimal gambling systems for favorable games. In: Proceedings of the Berkeley Symposium on Mathematical Statistics and Probability. University of California Press, Berkeley (1961)

  36. Horowitz, J.M., Vaikuntanathan, S.: Nonequilibrium detailed fluctuation theorem for repeated discrete feedback. Phys. Rev. E 82(6), 061120 (2010)

  37. Massey, J.L., Massey, P.C.: Conservation of mutual and directed information. In: Proceedings of the International Symposium on Information Theory (ISIT’05), pp. 157–158. IEEE (2005)

  38. Ito, S., Sagawa, T.: Information thermodynamics on causal networks. Phys. Rev. Lett. 111(18), 180603 (2013)

  39. Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley, New York (2012)

  40. Permuter, H.H., Kim, Y.-H., Weissman, T.: On directed information and gambling. In: 2008 IEEE International Symposium on Information Theory (ISIT’08), pp. 1403–1407. IEEE (2008)

  41. Permuter, H.H., Kim, Y.-H., Weissman, T.: Interpretations of directed information in portfolio theory, data compression, and hypothesis testing. IEEE Trans. Inform. Theory 57(6), 3248–3259 (2011)

  42. Werthamer, N.R.: Risk and Reward. Springer, Berlin (2009)


Acknowledgments

Y. Hirono is grateful to S. Nakayama for useful discussions and careful reading of the manuscript. Y. Hirono is supported by JSPS Research Fellowships for Young Scientists. This work is partially supported by the RIKEN iTHES Project. This work is also supported by JSPS Strategic Young Researcher Overseas Visits Program for Accelerating Brain Circulation (No. R2411).

Author information

Corresponding author

Correspondence to Yuji Hirono.

Appendices

Appendix 1: Notations and Definitions

Here we summarize the notation and the definitions of the information-theoretic quantities used in the text.

A realization of a stochastic variable X is denoted by the corresponding lowercase letter, x in this case.

Let \(\{ x_i\}\) and \(\{y_i\}\) be time sequences of stochastic variables. A variable with superscript n denotes the collection of variables from 1 to n,

$$\begin{aligned} x^n \equiv \{ x_1, \ldots , x_n \}. \end{aligned}$$
(97)

We assume that the dependences among the variables \(x^n\) are causal, by which we mean that the probability distribution of \(x_i\) can depend on \(x_j\) only if \(j<i\). The joint probability \(P(x^n)\) then decomposes as

$$\begin{aligned} P(x^n) = \prod _i P(x_i | x^{i-1}) , \end{aligned}$$
(98)

where P(x|y) is the conditional probability.

The average over variables \(\{x,y,\ldots \}\) is expressed by \(\langle \cdots \rangle _{x,y,\ldots }\). The subscript may be omitted, in which case the average is taken over all the stochastic variables.

The Shannon entropy of \(X^n = \{X_1, \ldots , X_n \}\) is

$$\begin{aligned} S(X^n) \equiv - \langle \ln P(x^n) \rangle _{x^n} = - \sum _{x^n} P(x^n) \ln P(x^n) . \end{aligned}$$
(99)

The Kullback–Leibler divergence between two distributions P(y) and Q(y) is defined by

$$\begin{aligned} D_\mathrm{KL}\left( P(y)||Q(y) \right) \equiv \sum _y P(y) \ln \frac{P(y)}{Q(y)}. \end{aligned}$$
(100)

The mutual information between the stochastic variables X and Y is defined by

$$\begin{aligned} I(X:Y) \equiv \left\langle \ln \frac{P(x,y)}{P(x)P(y)} \right\rangle _{x,y} = \sum _{x,y} P(x,y) \ln \frac{P(x,y)}{P(x)P(y)}. \end{aligned}$$
(101)
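
As a concrete illustration of these definitions (our addition, not part of the original paper), the following minimal Python/NumPy sketch evaluates the Shannon entropy, the Kullback–Leibler divergence, and the mutual information for discrete distributions. All logarithms are natural, so the results are in nats, matching Eqs. (99)–(101).

```python
import numpy as np

def shannon_entropy(p):
    """S(X) = -sum_x P(x) ln P(x), in nats; cf. Eq. (99)."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_y P(y) ln [P(y)/Q(y)]; cf. Eq. (100)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / q[nz]))

def mutual_information(pxy):
    """I(X:Y) for a joint-probability matrix pxy[x, y]; cf. Eq. (101)."""
    pxy = np.asarray(pxy, dtype=float)
    px = pxy.sum(axis=1, keepdims=True)  # marginal P(x)
    py = pxy.sum(axis=0, keepdims=True)  # marginal P(y)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log((pxy / (px * py))[nz]))

# Example: a correlated binary pair
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
print(shannon_entropy(pxy.sum(axis=1)))            # ln 2: X is a fair coin
print(kl_divergence(pxy.sum(axis=1), [0.9, 0.1]))  # > 0: the distributions differ
print(mutual_information(pxy))                     # ~0.193 nats of shared information
```

Since \(I(X:Y) = D_\mathrm{KL}\left( P(x,y)||P(x)P(y) \right) \), its non-negativity follows from that of the Kullback–Leibler divergence.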

We use the causal conditioning notation developed by Kramer [32]. The probability distribution of \(x^n\) causally conditioned on \(y^{n-d}\) is denoted by

$$\begin{aligned} P(x^n || y^{n-d}) \equiv \prod _{i=1}^n P(x_i | x^{i-1}, y^{i-d}). \end{aligned}$$
(102)

We adopt the convention that \(y^{i-d}\) is null if \(i-d \le 0\). We mostly use the cases \(d=0\) and \(d=1\):

$$\begin{aligned} P\big (x^n || y^{n}\big ) = \prod _{i=1}^n P\big (x_i | x^{i-1}, y^{i}\big ), \end{aligned}$$
(103)
$$\begin{aligned} P\big (x^n || y^{n-1}\big ) = \prod _{i=1}^n P\big (x_i | x^{i-1}, y^{i-1}\big ). \end{aligned}$$
(104)
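
For instance, for \(n=2\) the two conventions differ in whether \(x_i\) may be conditioned on the simultaneous outcome \(y_i\):

$$\begin{aligned} P\big (x^2 || y^{2}\big ) = P(x_1 | y_1) \, P\big (x_2 | x_1, y^{2}\big ), \qquad P\big (x^2 || y^{1}\big ) = P(x_1) \, P(x_2 | x_1, y_1) , \end{aligned}$$

where \(y^0\) is null by the convention above.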

The joint probability of \(x^n\) and \(y^n\) is decomposed as

$$\begin{aligned} P\big (x^n , y^n\big ) = P\big (x^n || y^n\big ) P\big (y^n || x^{n-1}\big ) . \end{aligned}$$
(105)
$$\begin{aligned} \begin{array}{ll} \because P(x^n , y^n) &{}= \prod \limits _i P(x_i, y_i | x^{i-1}, y^{i-1}) \\ &{}= \prod \limits _i P\big (x_i | x^{i-1}, y^i\big ) P\big (y_i | x^{i-1}, y^{i-1}\big ) \\ &{}= P\big (x^n || y^n\big ) P\big (y^n || x^{n-1}\big ). \end{array} \end{aligned}$$
(106)

The causally conditional entropy is defined as

$$\begin{aligned} S(X^n || Y^n) \equiv - \langle \ln P(x^n || y^n)\rangle = \sum _{i=1}^n S\big (X_i | X^{i-1}, Y^i\big ) . \end{aligned}$$
(107)

The directed information, introduced by Massey [33], is defined as

$$\begin{aligned} I_\mathrm{dr}(Y^n \rightarrow X^n) \equiv S(X^n) - S(X^n || Y^n) . \end{aligned}$$
(108)

It can be explicitly written as

$$\begin{aligned} I_\mathrm{dr}(Y^n \rightarrow X^n) = \left\langle \ln \frac{P(x^n || y^n)}{P(x^n)} \right\rangle = \sum _i \left\langle \ln \frac{P\big (x_{i+1} | x^{i}, y^{i+1}\big )}{P\big (x_{i+1}|x^{i}\big )} \right\rangle . \end{aligned}$$
(109)
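
To make the causal conditioning concrete, here is a brute-force Python sketch (our illustration; the memoryless toy model and the values of n and eps are our assumptions) that computes \(P(x^n || y^n)\) and the directed information \(I_\mathrm{dr}(Y^n \rightarrow X^n)\) by enumerating binary sequences. In the toy model each \(y_i\) is a fair coin and \(x_i\) is a noisy copy of \(y_i\), so information flows from Y to X.

```python
import itertools
import math

n = 3       # sequence length
eps = 0.1   # channel noise: P(x_i != y_i | y_i) = eps

def joint(xs, ys):
    """Joint P(x^n, y^n) = prod_i P(y_i) P(x_i | y_i) for the toy model."""
    p = 1.0
    for x, y in zip(xs, ys):
        p *= 0.5 * ((1.0 - eps) if x == y else eps)
    return p

seqs = list(itertools.product([0, 1], repeat=n))

def marg(x_pre, y_pre):
    """Marginal P(x^i, y^j) of the first i x's and the first j y's."""
    i, j = len(x_pre), len(y_pre)
    return sum(joint(xs, ys) for xs in seqs for ys in seqs
               if xs[:i] == tuple(x_pre) and ys[:j] == tuple(y_pre))

def causal_cond(xs, ys):
    """P(x^n || y^n) = prod_i P(x_i | x^{i-1}, y^i); Eq. (102) with d = 0."""
    p = 1.0
    for i in range(1, n + 1):
        p *= marg(xs[:i], ys[:i]) / marg(xs[:i - 1], ys[:i])
    return p

# I_dr(Y^n -> X^n) = < ln[ P(x^n || y^n) / P(x^n) ] >; Eq. (109)
I_dr = sum(joint(xs, ys) * math.log(causal_cond(xs, ys) / marg(xs, ()))
           for xs in seqs for ys in seqs)

# For this memoryless channel, I_dr reduces to n times the single-letter
# mutual information: n * (ln 2 + eps*ln(eps) + (1-eps)*ln(1-eps)).
expected = n * (math.log(2) + eps * math.log(eps) + (1 - eps) * math.log(1 - eps))
print(I_dr, expected)  # the two values agree
```

Only the function `joint` encodes the model; any causal joint distribution can be substituted, and the enumeration of Eqs. (102) and (109) goes through unchanged.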

Appendix 2: Markovian Coin Tossing and 1D Ising Model

Here we show the equivalence of the Markovian coin tossing discussed in Sect. 4.1 to the 1D Ising model. Without loss of generality, we can parametrize the conditional probability \(P(y_{i+1}|y_i)\) as

$$\begin{aligned} P(y_{i+1}|y_i) = \frac{\exp \left[ \beta J \ y_{i+1} y_i \right] }{2 \cosh \beta J} . \end{aligned}$$
(110)

One can check that \(0 < P(y_{i+1}|y_i) < 1\) and that the normalization condition \(\sum _{y_{i+1}}P(y_{i+1}|y_i)=1\) is satisfied. The new parameter J is related to the flipping rate \(\epsilon \) (writing \(\bar{\epsilon } \equiv 1 - \epsilon \)) as

$$\begin{aligned} \beta J = \frac{1}{2} \ln \frac{\bar{\epsilon }}{\epsilon } . \end{aligned}$$
(111)

Rewriting the normalization condition of \(P(y^n)\) in the following way makes the correspondence to the Ising model evident:

$$\begin{aligned} \begin{array}{ll} 1 &{}= \sum \limits _{y^n} P(y^n) \\ &{}= \sum \limits _{y^n} \exp \left[ \sum \limits _i \ln P(y_{i+1}|y_i) \right] \\ &{}= \frac{1}{\left( 2 \cosh \beta J\right) ^n} \sum \limits _{y^n} \exp \left[ \sum \limits _i \beta J \ y_{i+1}y_i \right] \\ &{}\equiv \frac{\mathrm{tr}\left[ e^{- \beta H} \right] }{Z} . \end{array} \end{aligned}$$
(112)

Thus, the numerator is precisely the partition function of the Ising model without an external field. The average of the exponential of g is written as

$$\begin{aligned} \begin{array}{ll} \langle \exp \left[ n g_n \right] \rangle _{y^n} &{}= \sum \limits _{y^n} P(y^n) \prod \limits _i (1 + f_i y_i) \\ &{}= \frac{1}{\left( 2 \cosh \beta J\right) ^n} \sum \limits _{y^n} \exp \left[ \sum \limits _i \beta J \ y_{i+1}y_i + \sum \limits _i \ln (1 + f_i y_i) \right] . \end{array} \end{aligned}$$
(113)

When \(f_i(y_i|y^{i-1})\) is independent of \(y^{i-1}\), the sum on the RHS is the partition function of the Ising model in an unusual form of magnetic field. In one dimension, spontaneous symmetry breaking never occurs in the Ising model at finite temperature. In the context of the Markovian coin tossing, the absence of symmetry breaking corresponds to the fact that, for finite values of \(\epsilon \), the coin flips after a finite number of trials, so the “magnetization” always vanishes,

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n}\sum _i \langle y_i \rangle = 0 . \end{aligned}$$
(114)
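
The vanishing of the magnetization is easy to confirm numerically. Below is a minimal Monte Carlo sketch (our illustration, assuming \(\bar{\epsilon } \equiv 1 - \epsilon \)) that simulates the Markovian coin through the Ising parametrization of Eq. (110) and checks both the flip-rate relation (111) and the vanishing average (114).

```python
import numpy as np

rng = np.random.default_rng(0)

eps = 0.2                                  # flipping rate of the Markovian coin
beta_J = 0.5 * np.log((1.0 - eps) / eps)   # Eq. (111) with eps_bar = 1 - eps

def transition_prob(y_next, y):
    """P(y_{i+1} | y_i) in the Ising parametrization of Eq. (110)."""
    return np.exp(beta_J * y_next * y) / (2.0 * np.cosh(beta_J))

# Consistency check: the Ising form reproduces the flip and stay rates
assert np.isclose(transition_prob(-1, +1), eps)
assert np.isclose(transition_prob(+1, +1), 1.0 - eps)

# Simulate a long Markovian coin sequence y_i in {-1, +1}
n = 10**6
y = np.empty(n, dtype=int)
y[0] = +1
for i in range(n - 1):
    y[i + 1] = y[i] if rng.random() < 1.0 - eps else -y[i]

# "Magnetization", Eq. (114): vanishes for any eps > 0 as n -> infinity,
# mirroring the absence of symmetry breaking in the 1D Ising model.
print(np.mean(y))  # close to 0, with O(1/sqrt(n)) fluctuations
```

Taking \(\epsilon \rightarrow 0\) makes the coin “ferromagnetic”: runs of equal outcomes grow longer, and an ever larger n is needed before the average decays, which is the coin-tossing counterpart of the diverging correlation length of the 1D Ising model at zero temperature.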


Cite this article

Hirono, Y., Hidaka, Y. Jarzynski-Type Equalities in Gambling: Role of Information in Capital Growth. J Stat Phys 161, 721–742 (2015). https://doi.org/10.1007/s10955-015-1348-0
