Journal of Intelligent Information Systems, Volume 40, Issue 3, pp 501–527

Unsupervised feature construction for improving data representation and semantics

  • Marian-Andrei Rizoiu
  • Julien Velcin
  • Stéphane Lallich

Abstract

The attribute-based format is the main data representation used by machine learning algorithms. When the attributes do not properly describe the initial data, performance degrades. Some algorithms address this problem by internally changing the representation space, but the newly constructed features rarely have any meaning. We seek to construct, in an unsupervised way, new attributes that are better suited to describing a given dataset and, at the same time, comprehensible to a human user. We propose two algorithms that construct the new attributes as conjunctions of the initial primitive attributes or their negations. The generated feature sets have reduced correlations between features and succeed in capturing some of the hidden relations between the individuals in a dataset. For example, a feature such as \(sky \wedge \neg building \wedge panorama\) would be true for non-urban images and is more informative than simple features expressing the presence or the absence of a single object. The notion of Pareto optimality is used to evaluate feature sets and to strike a balance between total correlation and the complexity of the resulting feature set. Statistical hypothesis testing is employed to automatically determine the parameter values used to construct a data-dependent feature set. We show experimentally that our approaches construct informative feature sets for multiple datasets.
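To make the two ingredients of the abstract concrete, here is a minimal Python sketch (not the authors' implementation): evaluating a conjunctive feature built from boolean primitive attributes or their negations, and filtering candidate feature sets by Pareto dominance. The helper names (`eval_conjunction`, `total_correlation`, `pareto_front`), the toy data, and the use of summed absolute pairwise correlation and literal count as stand-ins for the paper's total-correlation and complexity measures are all illustrative assumptions.

```python
import numpy as np

def eval_conjunction(X, literals):
    """Evaluate a conjunctive feature on a boolean data matrix X.

    X        : (n_samples, n_primitives) boolean array
    literals : list of (index, polarity) pairs; polarity False means
               the primitive appears negated in the conjunction.
    Returns one boolean column: the new feature's value per sample.
    """
    result = np.ones(X.shape[0], dtype=bool)
    for idx, polarity in literals:
        result &= X[:, idx] if polarity else ~X[:, idx]
    return result

def total_correlation(F):
    """Sum of absolute pairwise correlations between feature columns."""
    corr = np.corrcoef(F.astype(float), rowvar=False)
    iu = np.triu_indices_from(corr, k=1)
    return np.nansum(np.abs(corr[iu]))

def pareto_front(candidates):
    """Keep candidates not dominated on (correlation, complexity).

    candidates: list of (corr, complexity, label) triples; a candidate
    is dominated if another is no worse on both criteria and strictly
    better on at least one.
    """
    return [c for c in candidates
            if not any(o[0] <= c[0] and o[1] <= c[1]
                       and (o[0] < c[0] or o[1] < c[1])
                       for o in candidates)]

# Toy data: primitives sky, building, panorama for four images.
X = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0],
              [1, 0, 1]], dtype=bool)

# The feature sky AND NOT building AND panorama from the abstract.
f = eval_conjunction(X, [(0, True), (1, False), (2, True)])
print(f)  # [ True False False  True] -> true for the non-urban images

# Compare two candidate feature sets on (total correlation, complexity),
# taking complexity here to be the number of literals used.
F1 = np.column_stack([f, X[:, 1]])                 # 2 features, 4 literals
F2 = np.column_stack([X[:, 0], X[:, 1], X[:, 2]])  # 3 primitives, 3 literals
candidates = [(total_correlation(F1), 4, "constructed"),
              (total_correlation(F2), 3, "primitives")]
print(pareto_front(candidates))  # neither dominates: both stay on the front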

Keywords

Unsupervised feature construction · Feature evaluation · Nonparametric statistics · Data mining · Clustering · Representations · Algorithms for data and knowledge management · Heuristic methods · Pattern analysis

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Marian-Andrei Rizoiu¹
  • Julien Velcin¹
  • Stéphane Lallich¹

  1. ERIC Laboratory, University Lumière Lyon 2, Bron Cedex, France
