
Extension II: A Regression Model from M4

Chapter in: Machine Learning

Part of the book series: Advanced Topics in Science and Technology in China (ATSTC)


Abstract

In this chapter, we present a novel regression model directly motivated by the Maxi-Min Margin Machine (M4) model described in Chapter 4. Regression is one of the fundamental problems in supervised learning. The objective is to learn a model from a given dataset, (x1, y1), ..., (xN, yN), and then, based on the learned model, to make accurate predictions of y for future values of x. Support Vector Regression (SVR), a successful method for this problem, enjoys good generalization ability [20, 17, 8, 6]. The standard SVR adopts the ℓ2-norm to control the functional complexity and chooses an ε-insensitive loss function with a fixed tube (margin) to measure the empirical risk. By introducing the ℓ2-norm, the optimization problem in SVR can be transformed into a quadratic programming problem. On the other hand, the ε-tube is able to tolerate noise in the data, and fixing the tube enjoys the advantage of simplicity. These settings are global in nature and are effective in common applications, but they lack the flexibility to capture the local trend of the data in some applications. For example, in stock markets the data are highly volatile and the variance of the associated noise varies over time. In such cases, a fixed tube can neither capture the local trend of the data nor tolerate the noise adaptively.
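For concreteness, the fixed-tube baseline that this chapter sets out to improve can be sketched as follows. The snippet below is a minimal illustration, not code from the chapter: it fits a standard ε-SVR (using scikit-learn's SVR rather than the LIBSVM package of [1]) to a synthetic heteroscedastic series of the kind described above; the data-generating function and all parameter values are assumptions chosen purely for illustration.

```python
# Minimal sketch (not from the chapter): standard epsilon-SVR with a fixed
# tube, applied to a synthetic series whose noise variance grows over time.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic "volatile" data: the noise standard deviation increases with x,
# mimicking the time-varying noise of financial series described above.
N = 200
x = np.linspace(0.0, 4.0, N).reshape(-1, 1)
noise_sd = 0.05 + 0.20 * x.ravel()          # local noise level grows over time
y = np.sin(2.0 * x.ravel()) + rng.normal(0.0, noise_sd)

# Standard SVR: the l2-norm of the weights controls complexity (traded off
# via C), and the epsilon-insensitive loss uses one fixed tube width for
# every point. C and epsilon here are illustrative settings, not tuned.
model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(x, y)

y_hat = model.predict(x)
print("training RMSE:", float(np.sqrt(np.mean((y - y_hat) ** 2))))
```

Because ε is a single global constant, the tube is equally wide in the low-noise and high-noise regions of such data; this is precisely the inflexibility, noted in the abstract, that motivates a tube which adapts to the local behavior of the data.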


References

  1. Chang CC, Lin CJ (2001) LIBSVM: a library for support vector machines

  2. Chen S (1995) Basis pursuit. PhD thesis, Department of Statistics, Stanford University

  3. Chen S, Donoho D, Saunders M (1995) Atomic decomposition by basis pursuit. Technical Report 479, Department of Statistics, Stanford University

  4. Coifman RR, Wickerhauser MV (1992) Entropy-based algorithms for best-basis selection. IEEE Transactions on Information Theory 38(2):713–718

  5. Daubechies I (1992) Ten lectures on wavelets. CBMS-NSF Regional Conference Series in Applied Mathematics. Philadelphia, PA: SIAM

  6. Drucker H, Burges C, Kaufman L, Smola A, Vapnik VN (1997) Support vector regression machines. In: Mozer MC, Jordan MI, Petsche T (eds) Advances in Neural Information Processing Systems 9. Cambridge, MA: The MIT Press, 155–161

  7. Girosi F (1998) An equivalence between sparse approximation and support vector machines. Neural Computation 10(6):1455–1480

  8. Gunn S (1998) Support vector machines for classification and regression. Technical Report NC2-TR-1998-030, Faculty of Engineering and Applied Science, Department of Electronics and Computer Science, University of Southampton

  9. Harpur GF, Prager RW (1996) Development of low entropy coding in a recurrent network. Network: Computation in Neural Systems 7:277–284

  10. Huang K, Yang H, King I, Lyu MR (2004) Learning large margin classifiers locally and globally. In: The 21st International Conference on Machine Learning (ICML-2004)

  11. Lobo M, Vandenberghe L, Boyd S, Lebret H (1998) Applications of second-order cone programming. Linear Algebra and Its Applications 284:193–228

  12. Mallat S, Zhang Z (1993) Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing 41(12):3397–3415

  13. Montgomery DC, Runger GC (1999) Applied Statistics and Probability for Engineers, 2nd edition. New York, NY: John Wiley & Sons

  14. Olshausen BA, Field DJ (1996) Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381:607–609

  15. Pompe B (2002) Mutual information and relevant variables for predictions. In: Soofi AS, Cao L (eds) Modelling and Forecasting Financial Data: Techniques of Nonlinear Dynamics. Boston, MA: Kluwer Academic Publishers, 61–92

  16. Schölkopf B, Bartlett P, Smola A, Williamson R (1999) Shrinking the tube: a new support vector regression algorithm. In: Kearns MJ, Solla SA, Cohn DA (eds) Advances in Neural Information Processing Systems 11. Cambridge, MA: The MIT Press, 330–336

  17. Smola A, Schölkopf B (1998) A tutorial on support vector regression. Technical Report NC2-TR-1998-030, NeuroCOLT2

  18. Sturm JF (1999) Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software 11:625–653

  19. Sturm JF (2000) Central region method. In: Frenk JBG, Roos C, Terlaky T, Zhang S (eds) High Performance Optimization. Kluwer Academic Publishers, 157–194

  20. Vapnik VN (1999) The Nature of Statistical Learning Theory, 2nd edition. New York, NY: Springer-Verlag

  21. Yang H, King I, Chan L, Huang K (2004) Financial time series prediction using non-fixed and asymmetrical margin setting with momentum in support vector regression. In: Rajapakse JC, Wang L (eds) Neural Information Processing: Research and Development, Studies in Fuzziness and Soft Computing 152. New York, NY: Springer-Verlag, 334–350


Copyright information

© 2008 Zhejiang University Press, Hangzhou and Springer-Verlag GmbH Berlin Heidelberg

About this chapter

Cite this chapter

(2008). Extension II: A Regression Model from M4. In: Machine Learning. Advanced Topics in Science and Technology in China. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-79452-3_6


  • DOI: https://doi.org/10.1007/978-3-540-79452-3_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-79451-6

  • Online ISBN: 978-3-540-79452-3

  • eBook Packages: Computer Science (R0)
