
Data Driven Constraints for the SVM

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7588)

Abstract

We propose a generalized data-driven constraint for support vector machines, exemplified by the classification of paired observations in general and of human ear-canal surfaces in particular. This is particularly interesting in dynamic cases such as tissue movement or pathologies developing over time. Assuming that two observations of the same subject in different states span a vector, we hypothesise that this structure of the data contains implicit information which can aid the classification; hence the name data-driven constraints. We derive a constraint based on the data which allows for the use of the ℓ1-norm on the constraint while still permitting the application of kernels. We specialize the proposed constraint to orthogonality of the vectors between paired observations and the estimated hyperplane. We show that imposing this orthogonality constraint on the paired data yields a more robust classifier solution than the standard SVM, i.e. it reduces variance and improves classification rates. We present a quantitative measure of the information level contained in the pairing and test the method on simulated data as well as on a high-dimensional paired data set of ear-canal surfaces.
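The orthogonality idea from the abstract can be sketched in code. This is an illustrative reconstruction only, not the authors' kernelized ℓ1-norm formulation: here the specialized constraint (the vector d between two paired observations should be orthogonal to the hyperplane, i.e. parallel to its normal w) is relaxed into a soft penalty added to a linear soft-margin SVM trained by subgradient descent. The toy data, function names, and the exact penalty form are all assumptions.

```python
# Illustrative sketch, NOT the paper's method: a linear soft-margin SVM
# with an added pairing penalty. The penalty ||d||^2*||w||^2 - (d.w)^2
# is zero exactly when d is parallel to w (Cauchy-Schwarz), so it softly
# enforces orthogonality of each pairing vector d to the hyperplane.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def train(points, labels, pair_diffs, lam=0.5, c=1.0, lr=0.01, epochs=500):
    """Subgradient descent on 0.5*||w||^2 + C*hinge + lam*pairing penalty."""
    dim = len(points[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        gw = list(w)  # gradient of the 0.5*||w||^2 regularizer
        gb = 0.0
        for x, y in zip(points, labels):
            if y * (dot(w, x) + b) < 1.0:  # hinge-loss subgradient
                gw = [gi - c * y * xi for gi, xi in zip(gw, x)]
                gb -= c * y
        for d in pair_diffs:  # gradient of ||d||^2*||w||^2 - (d.w)^2
            dd, dw = dot(d, d), dot(d, w)
            gw = [gi + lam * (2 * dd * wi - 2 * dw * di)
                  for gi, wi, di in zip(gw, w, d)]
        w = [wi - lr * gi for wi, gi in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    return 1 if dot(w, x) + b >= 0 else -1

# Hypothetical paired data: two states of each "subject" differ mainly
# along x, so the pairing vectors point roughly along the class normal.
pos = [(2.0, 0.5), (2.5, -0.5), (3.0, 1.0), (2.0, -1.0)]
neg = [(-2.0, 0.5), (-2.5, -0.5), (-3.0, 1.0), (-2.0, -1.0)]
X = pos + neg
y = [1] * 4 + [-1] * 4
diffs = [(1.0, 0.1), (0.9, -0.1)]  # x_after - x_before per subject
w, b = train(X, y, diffs)
```

In this sketch the penalty tilts the estimated normal w toward the observed pairing directions, which is one plausible reading of how the pairing structure "aids the classification"; the paper's actual formulation imposes the constraint exactly and remains compatible with kernels.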




Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Darkner, S., Clemmensen, L.H. (2012). Data Driven Constraints for the SVM. In: Wang, F., Shen, D., Yan, P., Suzuki, K. (eds) Machine Learning in Medical Imaging. MLMI 2012. Lecture Notes in Computer Science, vol 7588. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35428-1_9

  • DOI: https://doi.org/10.1007/978-3-642-35428-1_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-35427-4

  • Online ISBN: 978-3-642-35428-1

  • eBook Packages: Computer Science (R0)
