
Two-Stage Augmented Kernel Matrix for Object Recognition

  • Muhammad Awais
  • Fei Yan
  • Krystian Mikolajczyk
  • Josef Kittler
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6713)

Abstract

Multiple Kernel Learning (MKL) has become a preferred choice for information fusion in image recognition problems. The aim of MKL is to learn an optimal combination of kernels formed from different features, and thus the importance of the different feature spaces for classification. The Augmented Kernel Matrix (AKM) has recently been proposed to accommodate the fact that a single training example may have different importance in different feature spaces, in contrast to MKL, which assigns the same weight to all examples in one feature space. However, the AKM approach is limited to small datasets due to its memory requirements.
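The contrast between the two schemes can be sketched numerically. This is an illustrative toy only (the sizes, names, and random Gram matrices are our own, not the authors'): MKL combines M base kernels with one scalar weight per kernel, whereas AKM stacks the kernels block-diagonally, so a classifier trained on the augmented matrix can effectively weight each (example, feature space) pair separately.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 5, 3  # 5 training examples, 3 feature channels (toy sizes)

def random_psd(n):
    """Random positive semi-definite Gram matrix, standing in for a real kernel."""
    X = rng.normal(size=(n, n))
    return X @ X.T

kernels = [random_psd(n) for _ in range(M)]

# MKL: a single scalar weight per kernel, shared by all training examples.
beta = np.array([0.5, 0.3, 0.2])
K_mkl = sum(b * K for b, K in zip(beta, kernels))  # shape (n, n)

# AKM: block-diagonal augmented kernel of size (n*M, n*M); each example
# appears once per feature space and can carry its own weight in each.
K_akm = np.zeros((n * M, n * M))
for m, K in enumerate(kernels):
    K_akm[m * n:(m + 1) * n, m * n:(m + 1) * n] = K

print(K_mkl.shape, K_akm.shape)  # (5, 5) (15, 15)
```

The (n·M)×(n·M) storage of the augmented matrix is exactly the memory bottleneck the paper sets out to address.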

We propose a novel two-stage technique to make AKM applicable to large data problems. In the first stage, the various kernels are automatically combined into groups using kernel alignment. In the second stage, the most influential training examples are identified within each group and used to construct an AKM of significantly reduced size. This reduced-size AKM yields the same results as the original AKM. We demonstrate that the proposed two-stage approach is memory efficient, outperforms the original AKM, and is robust to noise. The results are compared with other state-of-the-art MKL techniques and show improvement on challenging object recognition benchmarks.
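The two stages described above can be sketched roughly as follows. This is our own simplified rendering, not the authors' implementation: stage 1 groups kernels greedily by pairwise kernel alignment (the normalised Frobenius inner product of Gram matrices), and stage 2 restricts each group's summed kernel to a subset of influential example indices, which we simply take as given here (in practice they would come from, e.g., support vector selection), before assembling the reduced block-diagonal AKM.

```python
import numpy as np

def alignment(K1, K2):
    """Empirical kernel alignment: normalised Frobenius inner product."""
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

def group_kernels(kernels, threshold=0.9):
    """Stage 1 (greedy sketch): a kernel joins the first group whose
    representative it aligns with above the threshold."""
    groups = []
    for K in kernels:
        for g in groups:
            if alignment(K, g[0]) >= threshold:
                g.append(K)
                break
        else:
            groups.append([K])
    return groups

def reduced_akm(groups, keep):
    """Stage 2 (sketch): block-diagonal AKM over the group-summed kernels,
    restricted to the `keep` influential example indices (assumed given)."""
    blocks = [sum(g)[np.ix_(keep, keep)] for g in groups]
    n = len(keep)
    K = np.zeros((n * len(groups), n * len(groups)))
    for i, B in enumerate(blocks):
        K[i * n:(i + 1) * n, i * n:(i + 1) * n] = B
    return K

rng = np.random.default_rng(1)
n = 6
base = [X @ X.T for X in (rng.normal(size=(n, n)) for _ in range(4))]
groups = group_kernels(base)
K_red = reduced_akm(groups, keep=[0, 2, 4])  # e.g. 3 retained examples
print(len(groups), K_red.shape)
```

The point of the sketch is the size reduction: the final matrix grows with the number of groups and retained examples rather than with all kernels and all training examples.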

Keywords

Feature Space · Object Recognition · Mean Average Precision · Feature Channel · Multiple Kernel



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Muhammad Awais (1)
  • Fei Yan (1)
  • Krystian Mikolajczyk (1)
  • Josef Kittler (1)
  1. Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, UK
