Using Semantic Web for Generating Questions: Do Different Populations Perceive Questions Differently?

  • Nguyen-Thinh Le
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9240)

Abstract

In this paper, I propose an approach that uses semantic web data to generate questions intended to help people develop arguments in a discussion session. Applying this approach, I developed a question generation system that exploits WordNet to generate questions for argumentation. The paper describes a study investigating whether different populations perceive questions (whether generated by a system or by human experts) differently. To conduct this study, I asked eight human experts from the argumentation and question generation communities to construct questions for three discussion topics, and used the question generation system to generate questions for the same topics. I then invited three groups of raters to evaluate the resulting mix of questions: (1) computer scientists, (2) researchers from the argumentation and question generation communities, and (3) student teachers of Computer Science. The evaluation study showed that human-generated questions were perceived differently by the three populations across all three quality criteria (understandability, relevance, and usefulness). For system-generated questions, the hypothesis could be confirmed only for the relevance and usefulness criteria. This finding motivates question generation researchers to deploy techniques that generate questions adaptively for different target groups.
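The implementation itself is not reproduced on this page, but the core idea of mining a semantic web resource for question content can be sketched briefly. The following Python fragment is a minimal, hypothetical illustration (not the system evaluated in the paper): it uses NLTK's WordNet interface to fill hand-written argumentation-question templates with concepts that WordNet relates to a topic term. The templates and the generate_questions helper are invented for this sketch.

    # Minimal sketch: template-based question generation from WordNet relations.
    # Hypothetical illustration only; not the system evaluated in this paper.
    # Requires NLTK with the WordNet corpus installed:
    #   pip install nltk
    #   python -c "import nltk; nltk.download('wordnet')"
    from nltk.corpus import wordnet as wn

    def generate_questions(topic):
        """Fill generic argumentation templates with WordNet neighbours of a topic."""
        questions = []
        for synset in wn.synsets(topic, pos=wn.NOUN)[:2]:  # two most common senses
            # Hypernyms (broader concepts) yield questions about classification.
            for hypernym in synset.hypernyms():
                concept = hypernym.lemma_names()[0].replace("_", " ")
                questions.append(f"In what sense is {topic} a kind of {concept}?")
            # Hyponyms (narrower concepts) yield questions asking for examples.
            for hyponym in synset.hyponyms()[:3]:
                example = hyponym.lemma_names()[0].replace("_", " ")
                questions.append(f"What role does {example} play in a debate about {topic}?")
        return questions

    if __name__ == "__main__":
        for question in generate_questions("energy"):
            print(question)

For a topic term such as "energy", this sketch prints one question per hypernym and hyponym found in WordNet; the keyword list suggests that the full system additionally draws on a question taxonomy, which the sketch omits.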

Keywords

Semantic web · Linked open data · Question generation · Question taxonomy · Adaptivity

Acknowledgements

The author would like to thank the researchers of the argumentation and problem/question generation communities (Prof. Kevin Ashley, Prof. Kazuhisa Seta, Prof. Tsukasa Hirashima, Prof. Matthew Easterday, Prof. Reuma De Groot, Prof. Fu-Yun Yu, Dr. Bruce McLaren, Dr. Silvia De Ascaniis) for generating questions, as well as the computer scientists (Prof. Ngoc-Thanh Nguyen, Prof. Viet-Tien Do, Dr. Thanh-Binh Nguyen, Zhilin Zheng, Madiah Ahmad, Sebastian Groß, Sven Strickroth) and the student teachers at the Humboldt-Universität zu Berlin for their contributions to this evaluation study. In particular, the author expresses his gratitude to Prof. Pinkwart for introducing him to experts of the argumentation community.

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  1. Department of Informatics, Research Group "Computer Science Education / Computer Science and Society", Humboldt-Universität zu Berlin, Berlin, Germany
