
Learning to Aggregate Using Uninorms

  • Vitalik Melnikov
  • Eyke Hüllermeier
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9852)

Abstract

In this paper, we propose a framework for a class of learning problems that we refer to as “learning to aggregate”. Roughly, learning-to-aggregate problems are supervised machine learning problems in which instances are represented as compositions of a (variable) number of constituents; each composition is associated with an evaluation, score, or label, which is the target of the prediction task and which can presumably be modeled as a suitable aggregation of the properties of the constituents. Our learning-to-aggregate framework establishes a close connection between machine learning and the branch of mathematics devoted to the systematic study of aggregation functions. We specifically focus on a class of functions called uninorms, which combine conjunctive and disjunctive modes of aggregation. Experimental results for a corresponding model are presented for a review data set, in which the aggregation problem consists of combining different reviewer opinions about a paper into an overall decision of acceptance or rejection.
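To make the notion of a uninorm concrete, the following minimal Python sketch (illustrative only, not the authors' learned model) uses a well-known example, the cross-ratio ("3-Pi") uninorm with neutral element 0.5, to aggregate a variable number of reviewer scores mapped to [0, 1]; the function names and toy scores are assumptions for illustration.

```python
def uninorm_3pi(x, y, eps=1e-12):
    """Cross-ratio uninorm on [0, 1]: U(x, y) = xy / (xy + (1-x)(1-y)).

    Neutral element is 0.5: scores below 0.5 act conjunctively (pull the
    result down), scores above 0.5 act disjunctively (push it up).
    """
    num = x * y
    den = x * y + (1.0 - x) * (1.0 - y)
    return num / max(den, eps)  # eps guards the undefined case x=1, y=0


def aggregate(scores):
    """Combine a variable number of constituent scores into one value.

    Associativity of the uninorm makes the result independent of the
    order in which the scores are folded in.
    """
    result = 0.5  # start from the neutral element ("no opinion")
    for s in scores:
        result = uninorm_3pi(result, s)
    return result


# Toy example: reviewer opinions normalized to [0, 1].
print(aggregate([0.8, 0.6, 0.3]))  # mostly positive -> clearly above 0.5
print(aggregate([0.2, 0.3]))       # two negatives reinforce -> well below 0.5
```

Note how the reinforcement behavior differs from a simple average: two scores above the neutral element yield an aggregate higher than either, and two scores below it yield one lower than either, which is the "full reinforcement" property that makes uninorms attractive for combining reviewer opinions.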

Keywords

Feature Vector · Learning Problem · Aggregation Function · Neutral Element · Preference Learning

Notes

Acknowledgments

We thank Pritha Gupta and Karlson Pfannschmidt for their helpful suggestions. This work is part of the Collaborative Research Center “On-the-Fly Computing”, which is supported by the German Research Foundation (DFG).

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Department of Computer Science, Paderborn University, Paderborn, Germany
