
Feature Selection in an Electric Billing Database Considering Attribute Inter-dependencies

  • Manuel Mejía-Lavalle
  • Eduardo F. Morales
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4065)

Abstract

With the increasing size of databases, feature selection has become a relevant and challenging problem for the area of knowledge discovery in databases. An effective feature selection strategy can significantly reduce the data mining processing time, improve the predictive accuracy, and help to understand the induced models, as they tend to be smaller and make more sense to the user. Many feature selection algorithms assume that the attributes are independent of each other given the class, which can produce models with redundant attributes and/or exclude sets of attributes that are relevant only when considered together. In this paper, an effective best-first search algorithm for feature selection, called buBF, is described. buBF uses a novel heuristic function based on n-way entropy to capture inter-dependencies among variables. It is shown that buBF produces more accurate models than other state-of-the-art feature selection algorithms when compared on several real and synthetic datasets. Specifically, we apply buBF to a Mexican electric billing database and obtain satisfactory results.
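The abstract describes buBF only at a high level, so the sketch below is not the authors' implementation; it is a hypothetical illustration (the names `joint_merit` and `best_first_select` are ours) of the two ideas the abstract names: a best-first search over feature subsets, and an n-way entropy score that evaluates attributes jointly, so inter-dependent attributes are not discarded just because each looks useless in isolation.

```python
from collections import Counter
from math import log2

def entropy(items):
    """Shannon entropy (in bits) of a sequence of hashable items."""
    n = len(items)
    return -sum((c / n) * log2(c / n) for c in Counter(items).values())

def joint_merit(data, labels, subset):
    """Information gain of the joint (n-way) attribute tuple about the class:
    H(C) - H(C | X_subset). Because the attributes are evaluated together,
    inter-dependent attributes that score zero individually can score high."""
    n = len(labels)
    groups = {}
    for row, c in zip(data, labels):
        key = tuple(row[i] for i in subset)
        groups.setdefault(key, []).append(c)
    h_cond = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - h_cond

def best_first_select(data, labels, max_size=3):
    """Best-first search over feature subsets: always expand the
    highest-scoring open subset, preferring smaller subsets on ties."""
    n_attrs = len(data[0])
    open_list = [(0.0, frozenset())]
    seen = {frozenset()}
    best, best_score = frozenset(), 0.0
    while open_list:
        open_list.sort(key=lambda t: t[0])  # ascending; pop() takes the best
        _, current = open_list.pop()
        for a in range(n_attrs):
            child = current | {a}
            if child in seen or len(child) > max_size:
                continue
            seen.add(child)
            score = joint_merit(data, labels, sorted(child))
            if score > best_score or (score == best_score
                                      and len(child) < len(best)):
                best, best_score = child, score
            open_list.append((score, child))
    return sorted(best), best_score
```

On an XOR-style problem (class = x0 XOR x1, plus an irrelevant x2), the per-attribute gain of both relevant attributes is zero, yet the joint score of the pair {x0, x1} recovers the full one bit of class information, which is exactly the attribute inter-dependency the abstract is concerned with.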

Keywords

Feature Selection, Feature Subset, Feature Selection Method, Synthetic Dataset, Feature Selection Algorithm



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Manuel Mejía-Lavalle (1)
  • Eduardo F. Morales (2)
  1. Instituto de Investigaciones Eléctricas, Cuernavaca, México
  2. INAOE, Sta. Ma. Tonantzintla, México
