Generalized \(\varepsilon \)-Loss Function-Based Regression

  • Pritam Anand
  • Reshma Rastogi (nee Khemchandani)
  • Suresh Chandra
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 748)

Abstract

In this paper, we propose a new loss function, termed the "generalized \(\varepsilon \)-loss function," to study the regression problem. Unlike the standard \(\varepsilon \)-insensitive loss function, the generalized \(\varepsilon \)-loss function penalizes even those data points which lie inside the \(\varepsilon \)-tube, so as to minimize the scatter within the tube. Moreover, data points lying outside the \(\varepsilon \)-tube are penalized at a much higher rate than those lying inside it. Based on the proposed generalized \(\varepsilon \)-loss function, a new support vector regression model is formulated, termed "Penalizing \(\varepsilon \)-generalized SVR (Pen-\(\varepsilon \)-SVR)." Further, extensive numerical experiments are carried out to verify the validity and efficacy of the proposed Pen-\(\varepsilon \)-SVR.
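The qualitative shape described above (a small penalty inside the \(\varepsilon \)-tube, a much steeper one outside it) can be sketched as a piecewise-linear function. This is a minimal illustrative sketch, not the paper's exact formulation: the specific rates (`inside_rate`, `outside_rate`) and the piecewise-linear form are assumptions chosen only to exhibit the stated behaviour.

```python
import numpy as np

def generalized_eps_loss(residual, eps=0.5, inside_rate=0.1, outside_rate=1.0):
    """Illustrative sketch of a generalized epsilon-loss.

    Unlike the standard epsilon-insensitive loss, which is exactly zero
    inside the tube, this sketch applies a small linear penalty inside
    the tube (to shrink scatter) and a steeper linear penalty outside it.
    The functional form and rates here are assumptions for illustration;
    the paper's actual loss may differ.
    """
    u = np.abs(residual)
    inside = inside_rate * u                              # mild penalty within the tube
    outside = inside_rate * eps + outside_rate * (u - eps)  # steeper, continuous continuation
    return np.where(u <= eps, inside, outside)
```

With `outside_rate > inside_rate`, the loss is continuous at the tube boundary \(|u| = \varepsilon \) and its slope jumps there, so tube violations are penalized much more heavily than in-tube scatter, while in-tube residuals still contribute a nonzero cost.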

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Pritam Anand (1)
  • Reshma Rastogi (nee Khemchandani) (1)
  • Suresh Chandra (2)
  1. Faculty of Mathematics and Computer Science, South Asian University, New Delhi, India
  2. Department of Mathematics, Indian Institute of Technology Delhi, New Delhi, India