Maximizing Entropy over Markov Processes

  • Fabrizio Biondi
  • Axel Legay
  • Bo Friis Nielsen
  • Andrzej Wąsowski
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7810)


The channel capacity of a deterministic system with confidential data is an upper bound on the number of bits an attacker can learn from the system. We encode all possible attacks on a system as a probabilistic specification, an Interval Markov Chain. The channel capacity computation then reduces to finding a model of the specification with highest entropy.
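As an illustrative sketch (not the paper's construction), an Interval Markov Chain can be represented as a map from state pairs to probability intervals; a concrete Markov chain is a model of the specification when every transition probability lies inside its interval and each row is stochastic. All names here are hypothetical:

```python
def implements(imc, mc, tol=1e-9):
    """Check that Markov chain `mc` is an implementation of IMC `imc`.

    imc[s][t] = (low, high) interval for the transition s -> t
    mc[s][t]  = concrete transition probability
    """
    for s, row in mc.items():
        if abs(sum(row.values()) - 1.0) > tol:       # rows must be stochastic
            return False
        for t, p in row.items():
            low, high = imc[s].get(t, (0.0, 0.0))
            if not (low - tol <= p <= high + tol):   # interval membership
                return False
    return True

# Two-state example: from state "a" the probability of staying is specified
# only as the interval [0.2, 0.6]; state "b" is absorbing.
imc = {"a": {"a": (0.2, 0.6), "b": (0.4, 0.8)}, "b": {"b": (1.0, 1.0)}}
mc  = {"a": {"a": 0.5, "b": 0.5}, "b": {"b": 1.0}}
print(implements(imc, mc))  # -> True
```

Each attacker strategy then corresponds to one concrete chain satisfying the interval constraints.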

Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of the global entropy of a process as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for maximizing reward functions over Interval Markov Chains, and its application to synthesizing an implementation that maximizes entropy.
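The reward characterization can be illustrated on a toy example: assign each state a local reward equal to the Shannon entropy of its outgoing distribution; the global entropy of the chain is then the expected accumulated reward until absorption. The following is a minimal numerical sketch under that reading (hypothetical names, iterative rather than closed-form computation):

```python
import math

def local_entropy(row):
    # Shannon entropy (in bits) of one state's outgoing distribution.
    return -sum(p * math.log2(p) for p in row.values() if p > 0)

def global_entropy(mc, init, n_iter=10_000):
    """Expected accumulated local-entropy reward of an absorbing chain.

    mc[s][t] : transition probabilities, init[s] : initial distribution.
    Iterates the state distribution forward, summing the reward
    collected at each step; converges when absorption is certain.
    """
    h = {s: local_entropy(row) for s, row in mc.items()}
    dist, total = dict(init), 0.0
    for _ in range(n_iter):
        total += sum(dist.get(s, 0.0) * h[s] for s in mc)
        nxt = {}
        for s, mass in dist.items():
            for t, p in mc[s].items():
                nxt[t] = nxt.get(t, 0.0) + mass * p
        dist = nxt
    return total

# State "a" loops with probability 0.5 and otherwise moves to absorbing "b":
# a fair coin is flipped once per visit, and "a" is visited twice on average.
mc = {"a": {"a": 0.5, "b": 0.5}, "b": {"b": 1.0}}
print(global_entropy(mc, {"a": 1.0}))  # ≈ 2.0 bits
```

The absorbing state contributes zero reward, so only the randomized transitions add to the global entropy.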

We show how to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code.
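In the simplest one-parameter case the idea reduces to maximizing entropy over a single interval constraint. The sketch below is a brute-force illustration, not the paper's procedure: a state branches to two absorbing outcomes with probability p restricted to a hypothetical interval [0.1, 0.3], and the channel-capacity bound is the largest achievable branching entropy:

```python
import math

def binary_entropy(p):
    # Shannon entropy (in bits) of a Bernoulli(p) choice.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def max_entropy_in_interval(low, high, steps=10_000):
    # Grid search over the interval; the paper's reward-maximization
    # procedure replaces this with an exact computation.
    best_p, best_h = low, binary_entropy(low)
    for i in range(steps + 1):
        p = low + (high - low) * i / steps
        h = binary_entropy(p)
        if h > best_h:
            best_p, best_h = p, h
    return best_p, best_h

p_star, capacity = max_entropy_in_interval(0.1, 0.3)
# Binary entropy increases on [0, 0.5], so the maximum sits at the
# right endpoint p* = 0.3 of the interval.
print(p_star, capacity)
```

The max-entropy implementation of the interval specification is the one an unrestricted attacker would induce, which is why its entropy bounds the leakage of every concrete attack.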


Keywords: Markov chain, Markov process, channel capacity, authentication protocol, reward function





Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Fabrizio Biondi¹
  • Axel Legay²
  • Bo Friis Nielsen³
  • Andrzej Wąsowski¹

  1. IT University of Copenhagen, Denmark
  2. INRIA, Rennes, France
  3. Technical University of Denmark, Lyngby, Denmark
