Feature Selection for Analogy-Based Learning to Rank

  • Mohsen Ahmadi Fahandar
  • Eyke Hüllermeier
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11828)


Abstract

Learning to rank based on principles of analogical reasoning has recently been proposed as a novel method in the realm of preference learning. Roughly speaking, the method proceeds from a regularity assumption of the following kind: given objects A, B, C, D, if A relates to B as C relates to D, and A is preferred to B, then C is presumably preferred to D. This assumption is formalized in terms of so-called analogical proportions, which operate on a feature representation of the objects. Consequently, a suitable feature representation is an important prerequisite for the success of analogy-based learning to rank. In this paper, we therefore address the problem of feature selection and adapt common feature selection techniques, including forward selection, correlation-based filter techniques, and Relief-based methods, to the case of analogical learning. The usefulness of these approaches is shown in experiments with synthetic and benchmark data.
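To make the regularity assumption concrete, the following is a minimal sketch of checking an analogical proportion on real-valued feature vectors. It assumes the arithmetic formulation (a − b = c − d, componentwise); the paper's actual formalization may differ, and other formulations exist for Boolean or fuzzy features. The function name `analogical_proportion` and the tolerance parameter are illustrative choices, not part of the original method.

```python
import numpy as np

def analogical_proportion(a, b, c, d, tol=1e-6):
    """Check whether 'a is to b as c is to d' holds componentwise,
    using the arithmetic formulation a - b = c - d (an assumption;
    Boolean and fuzzy variants are formalized differently)."""
    a, b, c, d = map(np.asarray, (a, b, c, d))
    return bool(np.all(np.abs((a - b) - (c - d)) <= tol))

# Toy illustration: if A is preferred to B and (A, B, C, D) are in
# proportion, analogy-based ranking would transfer the preference to C > D.
A, B = np.array([2.0, 5.0]), np.array([1.0, 3.0])
C, D = np.array([4.0, 6.0]), np.array([3.0, 4.0])
print(analogical_proportion(A, B, C, D))  # True: A - B == C - D == [1, 2]
```

Because the proportion is evaluated componentwise on the feature representation, irrelevant or noisy features can break proportions that hold on the relevant dimensions, which is precisely why feature selection matters for this approach.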


Keywords: Feature selection · Learning to rank · Analogical reasoning



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Heinz Nixdorf Institute and Department of Computer Science, Intelligent Systems and Machine Learning Group, Paderborn University, Paderborn, Germany
