From Eigenspots to Fisherspots — Latent Spaces in the Nonlinear Detection of Spot Patterns in a Highly Varying Background

  • Bjoern H. Menze
  • B. Michael Kelm
  • Fred A. Hamprecht
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)


We present a scheme for developing a spot detection procedure based on learning latent linear features from a training data set. Adapting ideas from face recognition to this low-level feature extraction task, we propose learning from representative data a collection of filters that span a subspace in which a spot can be reliably distinguished from the heterogeneous background, and using a nonlinear classifier for the actual decision. Comparing different subspace projections — in particular principal component analysis, partial least squares, and linear discriminant analysis — in conjunction with subsequent classification by random forests on a data set from archaeological remote sensing, we observe a superior performance of the subspace approaches compared with both a standard template matching and a direct classification of local image patches.
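The pipeline the abstract describes — learn a linear subspace from local image patches, then hand the projected features to a nonlinear classifier — can be sketched as follows. This is a minimal illustration using PCA and scikit-learn; the synthetic patches, patch size, and model settings are assumptions for demonstration, not the authors' actual data or configuration.

```python
# Minimal sketch of the subspace + nonlinear-classifier scheme from the
# abstract: PCA learns linear "eigenspot" filters from patches, a random
# forest makes the nonlinear spot-vs-background decision.
# All data here is synthetic; settings are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in for local image patches: 9x9 windows flattened to 81-dim
# vectors; "spot" patches add a Gaussian bump to a noisy background.
n, side = 400, 9
y = rng.integers(0, 2, n)
xx, yy = np.meshgrid(np.arange(side), np.arange(side))
bump = np.exp(-((xx - 4) ** 2 + (yy - 4) ** 2) / 4.0).ravel()
X = rng.normal(size=(n, side * side)) + np.outer(y, 3.0 * bump)

# Subspace projection followed by nonlinear classification.
model = make_pipeline(
    PCA(n_components=10),
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(X[:300], y[:300])
acc = model.score(X[300:], y[300:])
print(f"hold-out accuracy: {acc:.2f}")
```

Swapping the PCA step for a supervised projection such as PLS or LDA — the comparison the chapter actually carries out — only changes the first stage of the pipeline.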


Keywords: Principal Component Analysis, Ordinary Least Squares, Random Forest, Face Recognition, Linear Discriminant Analysis




  1. BELHUMEUR, P.N. et al. (1996): Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projections. Proc. ECCV.
  2. BJÖRKSTRÖM, A. (1999): A Generalized View on Continuum Regression. Scandinavian Journal of Statistics, 26, 17–30.
  3. BOETTICHER, G.D. et al. (2005): A SVM for Protein Spot Detection in 2-dimensional Gel Electrophoresis. Journal of Computer Science, 1, 355–362.
  4. BORGA, M. (1997): A Unified Approach to PCA, PLS, MLR and CCA. Technical Report, University of Linköping, Sweden.
  5. BREIMAN, L. (2001): Random Forests. Machine Learning, 45, 5–32.
  6. FRANK, I.E. and FRIEDMAN, J.H. (1993): A Statistical View of Some Chemometric Regression Tools. Technometrics, 35, 109–148.
  7. HASTIE, T. et al. (2001): The Elements of Statistical Learning. Springer, New York.
  8. MENZE, B.H., UR, J.A. and SHERRATT, A.G. (2006): Detection of Ancient Settlement Mounds. Photogrammetric Engineering & Remote Sensing, 72, 321–327.
  9. MOON, T.K. and STIRLING, W.C. (2000): Mathematical Methods and Algorithms for Signal Processing. Prentice Hall, New York.
  10. RAHNENFÜHRER, J. and BOZINOV, D. (2004): Hybrid Clustering for Microarray Image Analysis. BMC Bioinformatics, 5, online.
  11. SHERRATT, A. (2004): Spotting Tells from Space. Antiquity, 77, online.

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Bjoern H. Menze (1)
  • B. Michael Kelm (1)
  • Fred A. Hamprecht (1)

  1. Interdisciplinary Center for Scientific Computing (IWR), University of Heidelberg, Heidelberg, Germany
