Abstract

Chapters 4 and 5 have discussed the use of decision forests in supervised tasks, i.e., when labeled training data are available. In contrast, this chapter discusses the use of forests in unlabeled scenarios. For instance, one important task is that of discovering the intrinsic nature and structure of large sets of unlabeled data. This task can be tackled via another probabilistic model, the density forest. Density forests are explained here as an instantiation of our abstract decision forest model described in Chap. 3.
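To make the idea concrete, below is a minimal Python sketch in the spirit of the chapter's density forests: each randomized tree partitions the data with axis-aligned splits, each leaf stores a multivariate Gaussian fitted to the points that reach it, and the forest density is the average of the per-tree leaf densities. The function names and parameters (max_depth, min_leaf, the number of trees) are illustrative assumptions, and the simple point-mass weighting omits the per-tree partition-function normalization that the full model in the chapter uses.

```python
import numpy as np
from scipy.stats import multivariate_normal


def _leaf(X):
    """Leaf node: point mass plus a Gaussian fitted to the points reaching it."""
    d = X.shape[1]
    cov = np.cov(X.T) + 1e-6 * np.eye(d)  # small ridge keeps cov invertible
    return {"leaf": True, "weight": len(X),
            "pdf": multivariate_normal(X.mean(axis=0), cov)}


def build_tree(X, depth, max_depth, min_leaf, rng):
    """Grow one randomized tree with axis-aligned splits."""
    if depth >= max_depth or len(X) < 2 * min_leaf:
        return _leaf(X)
    f = rng.integers(X.shape[1])                   # random split feature
    t = rng.uniform(X[:, f].min(), X[:, f].max())  # random threshold
    left = X[:, f] < t
    if left.sum() < min_leaf or (~left).sum() < min_leaf:
        return _leaf(X)                            # degenerate split: stop here
    return {"leaf": False, "feature": f, "threshold": t,
            "left": build_tree(X[left], depth + 1, max_depth, min_leaf, rng),
            "right": build_tree(X[~left], depth + 1, max_depth, min_leaf, rng)}


def tree_density(node, x, n_total):
    """Route x to its leaf; density = leaf point mass times the leaf Gaussian pdf."""
    while not node["leaf"]:
        node = node["left"] if x[node["feature"]] < node["threshold"] else node["right"]
    return (node["weight"] / n_total) * node["pdf"].pdf(x)


def forest_density(trees, x, n_total):
    """Forest estimate: average of the individual tree densities."""
    return np.mean([tree_density(t, x, n_total) for t in trees])


# Toy unlabeled data: two Gaussian clusters in 2-D.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))])
trees = [build_tree(X, 0, max_depth=4, min_leaf=20, rng=rng) for _ in range(10)]
print(forest_density(trees, np.array([0.0, 0.0]), len(X)))
```

Averaging over several randomized trees smooths out the blocky partition boundaries of any single tree, which is the same intuition that motivates forests in the supervised chapters.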

Keywords

Expectation Maximization · Gaussian Mixture Model · Density Forest · Unlabeled Data · Multivariate Gaussian Distribution


Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

  • A. Criminisi, Microsoft Research Ltd., Cambridge, UK
  • J. Shotton, Microsoft Research Ltd., Cambridge, UK
