Learning Choice Functions via Pareto-Embeddings

  • Karlson Pfannschmidt
  • Eyke Hüllermeier
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12325)

Abstract

We consider the problem of learning to choose from a given set of objects, where each object is represented by a feature vector. Traditional approaches in choice modelling are mainly based on learning a latent, real-valued utility function, thereby inducing a linear order on choice alternatives. While this approach is suitable for discrete (top-1) choices, it does not extend straightforwardly to subset choices. Instead of mapping choice alternatives to the real number line, we propose to embed them into a higher-dimensional utility space, in which we identify choice sets with Pareto-optimal points. To this end, we propose a learning algorithm that minimizes a differentiable loss function suitable for this task. We demonstrate the feasibility of learning a Pareto-embedding on a suite of benchmark datasets.
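To illustrate the core idea, here is a minimal sketch of the decision rule the abstract describes: once alternatives are embedded as points in a d-dimensional utility space, the predicted choice set is the set of Pareto-optimal (non-dominated) points. The function below is an illustrative implementation of that check under a maximization convention; the paper's learned embedding and differentiable loss are not reproduced here.

```python
import numpy as np

def pareto_optimal(points):
    """Return a boolean mask marking the Pareto-optimal rows of `points`.

    A point p is dominated if some other point q is >= p in every
    coordinate and > p in at least one; the Pareto-optimal points are
    those dominated by no other point (maximization in all dimensions).
    """
    points = np.asarray(points, dtype=float)
    n = points.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(points, i, axis=0)
        dominated = np.any(
            np.all(others >= points[i], axis=1)
            & np.any(others > points[i], axis=1)
        )
        mask[i] = not dominated
    return mask

# Example: in a 2-D utility space, (1, 3) and (3, 1) are incomparable
# and hence both Pareto-optimal, while (1, 1) is dominated by both.
print(pareto_optimal([[1, 3], [3, 1], [1, 1]]))  # [ True  True False]
```

With a learned embedding, this mask over the embedded feature vectors of a query set would directly yield the predicted subset choice.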

Keywords

Choice function · Pareto-embedding · Generalized utility


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Paderborn University, Paderborn, Germany
