An Eager Regression Method Based on Best Feature Projections

  • Tolga Aydin
  • H. Altay Güvenir
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2070)

Abstract

This paper describes a machine learning method called Regression by Selecting Best Feature Projections (RSBFP). In the training phase, RSBFP projects the training data onto each feature dimension and estimates the predictive power of each feature by constructing simple linear regression lines: one per continuous feature, and one per distinct category of each categorical feature. This distinction is necessary because the predictive power of a continuous feature is constant across its range, whereas that of a categorical feature varies with each distinct value. The simple linear regression lines are then sorted according to their predictive power. In the querying phase, the best linear regression line, and thus the best feature projection, is selected to make predictions.
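As an illustration, the following is a minimal Python sketch of the training and querying phases described above. It is not the authors' implementation: the class name RSBFPSketch is hypothetical, the R²-style score used to rank projections is an assumption (the abstract does not state the paper's exact measure of predictive power), and each categorical projection is treated as a constant category-mean prediction, which is one plausible reading of a "regression line" on a single category.

    import numpy as np

    class RSBFPSketch:
        """Toy RSBFP-style regressor: one simple linear regression line per
        continuous feature and one per category of each categorical feature,
        ranked by an R^2-style score (an assumption; the abstract does not
        give the paper's exact measure of predictive power)."""

        def fit(self, X, y, categorical=()):
            X = np.asarray(X, dtype=object)
            y = np.asarray(y, dtype=float)
            self.lines = []  # (score, feature, category, slope, intercept)
            for f in range(X.shape[1]):
                col = X[:, f]
                if f in categorical:
                    # One projection per distinct category; its "line" is the
                    # category mean, so predictive power varies per category.
                    for cat in set(col):
                        yc = y[col == cat]
                        score = 1.0 - yc.var() / y.var()
                        self.lines.append((score, f, cat, 0.0, yc.mean()))
                else:
                    # Simple linear regression of the target on this feature.
                    xf = col.astype(float)
                    slope, intercept = np.polyfit(xf, y, 1)
                    r2 = 1.0 - np.var(y - (slope * xf + intercept)) / y.var()
                    self.lines.append((r2, f, None, slope, intercept))
            # Training ends by sorting projections by predictive power.
            self.lines.sort(key=lambda t: t[0], reverse=True)
            return self

        def predict(self, x):
            # Querying: walk the ranked list and use the best projection that
            # applies to this instance (a categorical projection applies only
            # when the query holds that exact category value).
            for score, f, cat, slope, intercept in self.lines:
                if cat is None:
                    return slope * float(x[f]) + intercept
                if x[f] == cat:
                    return intercept
            raise ValueError("no applicable projection")

Under this sketch, a query simply falls through to the highest-ranked projection whose feature value it can match; a fuller implementation would also have to handle unseen categories and missing values.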

Keywords

Prediction · Feature projection · Regression

Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Tolga Aydin¹
  • H. Altay Güvenir¹
  1. Department of Computer Engineering, Bilkent University, Ankara, Turkey
