Interactive Data Analytics for the Humanities

  • Iryna Gurevych
  • Christian M. Meyer
  • Carsten Binnig
  • Johannes Fürnkranz
  • Kristian Kersting
  • Stefan Roth
  • Edwin Simpson
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10761)

Abstract

In this vision paper, we argue that current solutions to data analytics are not suitable for complex tasks from the humanities, as they are agnostic of the user and focused on static, predefined tasks with large-scale benchmarks. Instead, we believe that the human must be put into the loop to address small-data scenarios that require expert domain knowledge and fluid, incrementally defined tasks, which are common in many humanities use cases. Besides the main challenges, we discuss existing and urgently required solutions to interactive data acquisition, model development, model interpretation, and system support for interactive data analytics. In the envisioned interactive systems, human users not only provide annotations to a machine learner, but also train a model by using the system and demonstrating the task. The learning system will actively query the user for feedback, refine its model in real time, and explain its decisions. Our vision links natural language processing research with recent advances in machine learning, computer vision, and data management systems, as realizing this vision relies on combining expertise from all of these scientific fields.
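
To make the interaction loop described above concrete, the following is a minimal sketch of one of its ingredients: pool-based active learning with uncertainty sampling, where the learner repeatedly queries a human expert for the label it is least certain about and refits immediately. This is an illustration under our own assumptions (scikit-learn, a toy corpus, and a simulated expert oracle), not the system envisioned in the paper.

    # Minimal sketch (not the envisioned system): pool-based active learning
    # with uncertainty sampling. Assumes scikit-learn; the toy corpus and the
    # human "oracle" are hypothetical placeholders for expert interaction.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = ["an ode to spring", "minutes of the council", "a sonnet on love",
             "tax ledger entry", "elegy for a friend", "court proceedings record"]
    labels = {0: "poetry", 1: "administrative"}
    seed_idx, seed_y = [0, 1], [0, 1]   # two expert-labelled seed examples
    pool_idx = [2, 3, 4, 5]             # unlabelled pool

    X = TfidfVectorizer().fit_transform(texts)
    model = LogisticRegression()

    for _ in range(3):                  # a few interaction rounds
        model.fit(X[seed_idx], seed_y)
        # Query the pool example the current model is least certain about.
        probs = model.predict_proba(X[pool_idx])
        query = pool_idx[int(np.argmax(1.0 - probs.max(axis=1)))]
        # In a real system the expert answers here; we simulate the oracle.
        answer = 0 if any(w in texts[query] for w in ("ode", "sonnet", "elegy")) else 1
        print(f"Query: {texts[query]!r} -> expert label: {labels[answer]}")
        seed_idx.append(query); seed_y.append(answer)
        pool_idx.remove(query)

In a real interactive system, the simulated oracle would be replaced by the expert's response in the user interface, and the refitting step by an incremental model update so that feedback takes effect in real time.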

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Iryna Gurevych (1)
  • Christian M. Meyer (1)
  • Carsten Binnig (1)
  • Johannes Fürnkranz (1)
  • Kristian Kersting (1)
  • Stefan Roth (1)
  • Edwin Simpson (1)
  1. Department of Computer Science, Technische Universität Darmstadt, Darmstadt, Germany
