
An Information Theoretic Approach to Learning Generative Graph Prototypes

  • Lin Han
  • Edwin R. Hancock
  • Richard C. Wilson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7005)

Abstract

We present a method for constructing a generative model for sets of graphs by adopting a minimum description length approach. The method is posed as learning a generative supergraph model from which new samples can be obtained by an appropriate sampling mechanism. We commence by constructing a probability distribution for the occurrence of nodes and edges over the supergraph, and we encode the complexity of the supergraph using the von Neumann entropy. A variant of the EM algorithm is developed to minimize the description length criterion, in which the node correspondences between the sample graphs and the supergraph are treated as missing data. The maximization step updates both the node correspondence information and the structure of the supergraph using graduated assignment. In the experimental part, we demonstrate the practical utility of the proposed algorithm and show that our generative model gives good graph classification results. We also show how to perform graph clustering with the Jensen-Shannon kernel and how to generate new sample graphs.
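Two of the quantities mentioned above, the von Neumann entropy used to encode supergraph complexity and the Jensen-Shannon kernel used for clustering, can both be computed from the Laplacian spectrum of a graph. The sketch below is a minimal NumPy illustration under our own assumptions (the function names and the spectrum-padding construction of the kernel are ours, not the paper's exact formulation):

```python
import numpy as np

def laplacian_spectrum_distribution(adj):
    """Eigenvalues of the density matrix rho = L / tr(L), where L is the
    combinatorial Laplacian. Since L is positive semi-definite and rho has
    unit trace, the spectrum is a valid probability distribution."""
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigvalsh(laplacian / np.trace(laplacian))

def entropy(p):
    """Shannon entropy in nats; on the spectrum of rho this equals the
    von Neumann entropy -tr(rho ln rho), with the convention 0 ln 0 = 0."""
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def jensen_shannon_kernel(adj_a, adj_b):
    """k(A, B) = ln 2 - JSD(p_A, p_B), with the divergence taken between
    the two density-matrix spectra padded with zeros to equal length
    (one possible instantiation, assumed here for illustration)."""
    pa = laplacian_spectrum_distribution(adj_a)
    pb = laplacian_spectrum_distribution(adj_b)
    n = max(len(pa), len(pb))
    pa = np.pad(pa, (0, n - len(pa)))
    pb = np.pad(pb, (0, n - len(pb)))
    jsd = entropy(0.5 * (pa + pb)) - 0.5 * (entropy(pa) + entropy(pb))
    return np.log(2.0) - jsd
```

For the complete graph on three nodes, the scaled Laplacian spectrum is {0, 1/2, 1/2}, so the von Neumann entropy is ln 2, and the kernel attains its maximum value ln 2 when a graph is compared with itself.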

Keywords

Minimum description length · Posteriori probability · Sample graph · Median graph · Laplacian spectrum
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Friedman, N., Koller, D.: Being Bayesian about network structure. A Bayesian approach to structure discovery in Bayesian networks. Machine Learning, 95–125 (2003)
  2. Christmas, W.J., Kittler, J., Petrou, M.: Probabilistic feature labeling schemes: modeling compatibility coefficient distribution. Image and Vision Computing 14, 617–625 (1996)
  3. Bagdanov, A.D., Worring, M.: First order Gaussian graphs for efficient structure classification. Pattern Recognition 36, 1311–1324 (2003)
  4. Cootes, T.F., Edwards, G.J., Taylor, C.J.: Active Appearance Models. IEEE PAMI 23, 681–685 (2001)
  5. Luo, B., Hancock, E.R.: A spectral approach to learning structural variations in graphs. Pattern Recognition 39, 1188–1198 (2006)
  6. Torsello, A., Hancock, E.R.: Learning shape-classes using a mixture of tree-unions. IEEE PAMI 28, 954–967 (2006)
  7. Luo, B., Hancock, E.R.: Structural graph matching using the EM algorithm and singular value decomposition. IEEE PAMI 23, 1120–1136 (2001)
  8. White, D., Wilson, R.C.: Parts based generative models for graphs. In: ICPR, pp. 1–4 (2008)
  9. Rissanen, J.: Modelling by shortest data description. Automatica, 465–471 (1978)
  10. Rissanen, J.: Stochastic Complexity in Statistical Inquiry. World Scientific, Singapore (1989)
  11. Passerini, F., Severini, S.: The von Neumann entropy of networks. arXiv:0812.2597 (2008)
  12. Cover, T., Thomas, J.: Elements of Information Theory. John Wiley & Sons, New York (1991)
  13. Grünwald, P.: Minimum description length tutorial. In: Advances in Minimum Description Length: Theory and Applications (2005)
  14. Bridle, J.S.: Training stochastic model recognition algorithms as networks can lead to maximum mutual information estimation of parameters. In: Advances in Neural Information Processing Systems, vol. 2, pp. 211–217 (1990)
  15. Gold, S., Rangarajan, A.: A graduated assignment algorithm for graph matching. IEEE PAMI 18, 377–388 (1996)
  16. Figueiredo, M.A.T., Jain, A.K.: Unsupervised learning of finite mixture models. IEEE PAMI 24, 381–396 (2002)
  17. Wilson, R.C., Zhu, P.: A study of graph spectra for comparing graphs and trees. Pattern Recognition 41, 2833–2841 (2008)
  18. Shi, J., Malik, J.: Normalized cuts and image segmentation. In: CVPR, pp. 731–737 (1997)
  19. Robles-Kelly, A., Hancock, E.R.: A Riemannian approach to graph embedding. Pattern Recognition 40, 1042–1056 (2007)
  20. Nene, S.A., Nayar, S.K., Murase, H.: Columbia Object Image Library (COIL-100). Technical Report CUCS-006-96, Department of Computer Science, Columbia University (1996)
  21. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. IJCV 60, 91–110 (2004)
  22. Wilson, R.C., Hancock, E.R.: Structural matching by discrete relaxation. IEEE PAMI 19, 634–648 (1997)
  23. Han, L., Wilson, R.C., Hancock, E.R.: A supergraph-based generative model. In: ICPR, pp. 1566–1569 (2010)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Lin Han (1)
  • Edwin R. Hancock (1)
  • Richard C. Wilson (1)
  1. Department of Computer Science, University of York, UK
