
Crowdsourcing Real-World Feedback for Human–Computer Interaction Education

  • Fernando Loizides
  • Kathryn Jones
  • Carina Girvan
  • Helene de Ribaupierre
  • Liam Turner
  • Ceri Bailey
  • Andy Lloyd
Chapter
Part of the Human–Computer Interaction Series (HCIS) book series

Abstract

In this chapter we investigate the use of real-world feedback to complement academic feedback during a course on mobile development using HCI methods. Students used crowdsourcing and macrotasking to recruit suitable end-users and generate feedback. During the study we uncovered benefits and drawbacks for both staff and student stakeholders, as well as motivations and blockers that readers planning to apply this method should be aware of. We report on the practical matters of scalability and the legal and governance issues that arise. Overall, the process proposed in this chapter produces a greatly enhanced experience for students and improves both the richness of the feedback and the authenticity of the end-user testing experience. Challenges include incorporating this process within an academic environment, such as the university's liability when externalising student work. We were also surprised to find that harsh criticism was not taken negatively by students but was instead a source of motivation to improve.

Keywords

Human–computer interaction · Crowdsourcing · Feedback · Assessment · Education · Mobile applications


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Fernando Loizides (1), email author
  • Kathryn Jones (1)
  • Carina Girvan (2)
  • Helene de Ribaupierre (1)
  • Liam Turner (1)
  • Ceri Bailey (2)
  • Andy Lloyd (3)
  1. School of Computer Science and Informatics, Cardiff University, Cardiff, UK
  2. School of Social Sciences, Cardiff University, Cardiff, UK
  3. Centre for Education Support and Innovation, Cardiff University, Cardiff, UK
