An Ising Model for Road Traffic Inference

Chapter
Part of the Nonlinear Systems and Complexity book series (NSCH, volume 5)

Abstract

We review some properties of the “belief propagation” algorithm, a distributed iterative map used to perform Bayesian inference, and present some recent work in which this algorithm serves as a starting point for encoding observation data into a probabilistic model and for processing large-scale information in real time. A natural approach is based on linear response theory, and various recent instantiations are presented. We focus on the particular situation where the data have many different statistical components, representing a variety of independent patterns. As an application, the problem of reconstructing and predicting traffic states from floating car data is then discussed.
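To fix ideas, the sketch below shows a sum-product belief propagation update on a pairwise binary (Ising-like) model, the kind of distributed iterative map the abstract refers to. It is not taken from the chapter: the function name, the symmetric coupling matrix `J`, and the local fields `h` are illustrative assumptions.

```python
import numpy as np

def belief_propagation(J, h, n_iter=100, tol=1e-8):
    """Sum-product BP on a pairwise Ising model (sketch).

    J : (n, n) symmetric numpy array of couplings (J[i, j] != 0 defines an edge).
    h : length-n array of local fields.
    Variables take values in {-1, +1}; each message i -> j is stored as the
    probability that x_j = +1.  Returns the approximate marginals P(x_i = +1).
    """
    n = len(h)
    edges = [(i, j) for i in range(n) for j in range(n)
             if i != j and J[i, j] != 0.0]
    msg = {e: 0.5 for e in edges}  # uniform initialization

    for _ in range(n_iter):
        new_msg = {}
        for (i, j) in edges:
            # Product of the local field term and all messages into i except the one from j.
            prod = {s: np.exp(h[i] * s) for s in (-1, +1)}
            for (k, l) in edges:
                if l == i and k != j:
                    for s in (-1, +1):
                        prod[s] *= msg[(k, i)] if s == +1 else 1.0 - msg[(k, i)]
            # Sum over x_i for each value of x_j, then normalize.
            out = {t: sum(np.exp(J[i, j] * s * t) * prod[s] for s in (-1, +1))
                   for t in (-1, +1)}
            new_msg[(i, j)] = out[+1] / (out[+1] + out[-1])
        diff = max(abs(new_msg[e] - msg[e]) for e in edges)
        msg = new_msg
        if diff < tol:
            break

    # Beliefs: local field term times the product of all incoming messages.
    beliefs = np.empty(n)
    for i in range(n):
        b = {s: np.exp(h[i] * s) for s in (-1, +1)}
        for (k, l) in edges:
            if l == i:
                for s in (-1, +1):
                    b[s] *= msg[(k, i)] if s == +1 else 1.0 - msg[(k, i)]
        beliefs[i] = b[+1] / (b[+1] + b[-1])
    return beliefs
```

On a tree-structured coupling graph the fixed point yields exact marginals; on graphs with cycles it yields the Bethe-approximation marginals that the chapter builds on.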

Keywords

Entropy · Covariance

Notes

Acknowledgment

This gives me the occasion to express my warm thanks to my colleagues Victorin Martin and Jean-Marc Lasgouttes, with whom it is a pleasure to collaborate on the main subjects discussed in this review. I am also grateful to Anne Auger, Yufei Han, Fabrice Marchal, and Fabien Moutarde for many aspects of the ongoing projects mentioned in this work. This work was supported by grant ANR-08-SYSC-017 from the French National Research Agency.


Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

INRIA Saclay – LRI, Bat. 490, Université Paris-Sud, Orsay, France
