Supporting the discoverability of open educational resources

  • Renato Cortinovis
  • Alexander Mikroyannidis
  • John Domingue
  • Paul Mulholland
  • Robert Farrow

Abstract

Open Educational Resources (OERs), now available in large numbers, have considerable potential to improve many aspects of society, yet one of the factors limiting this positive impact is the difficulty of discovering them. This study investigates and proposes strategies to better support educators in discovering OERs, focusing mainly on secondary education. The literature suggests that the effectiveness of existing search systems could be improved by supporting high-level and domain-oriented tasks. Hence, a preliminary taxonomy of discovery-related tasks was developed from an analysis of the literature, interpreted through Information Foraging Theory. This taxonomy was empirically evaluated with a small group of experienced educators, leading to the preliminary identification of an interesting class of Query By Example (QBE) expansion-by-similarity tasks, which avoid the need to decompose natural high-level tasks into a complex sequence of sub-tasks. Following the Design Science Research methodology, three prototypes to support and refine those tasks were iteratively designed, implemented, and evaluated, involving an increasing number of educators in usability-oriented studies. The resulting high-level, domain-oriented blended search/recommendation strategy transparently replicates Google searches in specialized networks and identifies similar resources with a QBE strategy. It uses a domain-oriented similarity metric based on shared schema.org/LRMI alignments to educational frameworks, and clusters results into expandable classes of comparable degrees of similarity. The summative evaluation shows that educators appreciate this exploratory strategy because, by balancing similarity and diversity, it supports their high-level tasks, such as lesson planning and the personalization of education.
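
The similarity metric and clustering are described above only at a high level. The following minimal Python sketch (not the authors' implementation) illustrates one plausible reading: similarity is taken as the Jaccard overlap of the schema.org/LRMI educationalAlignment targets two resources share, and candidates are then grouped into expandable classes of comparable similarity. All names, thresholds, identifiers, and the Jaccard choice are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Resource:
    """A learning resource with its educational-framework alignments."""
    url: str
    # Identifiers of schema.org/LRMI AlignmentObject targets, e.g. nodes
    # of a framework such as the Common Core (identifiers illustrative).
    alignments: frozenset = field(default_factory=frozenset)

def similarity(a: Resource, b: Resource) -> float:
    """Jaccard overlap of shared framework alignments (an assumed reading:
    the paper states only that the metric is based on shared alignments)."""
    if not a.alignments or not b.alignments:
        return 0.0
    return len(a.alignments & b.alignments) / len(a.alignments | b.alignments)

def cluster_by_similarity(seed, candidates, thresholds=(0.75, 0.5, 0.25)):
    """Group candidates into classes of comparable, decreasing similarity;
    each class could back one expandable group in the user interface."""
    classes = {t: [] for t in thresholds}
    for c in candidates:
        s = similarity(seed, c)
        for t in thresholds:          # thresholds are in decreasing order
            if s >= t:
                classes[t].append((round(s, 2), c.url))
                break                 # assign to the most similar class only
    return classes

if __name__ == "__main__":
    seed = Resource("https://example.org/oer/ratios",
                    frozenset({"Math.6.RP.A.1", "Math.6.RP.A.2"}))
    pool = [
        Resource("https://example.org/oer/a",
                 frozenset({"Math.6.RP.A.1", "Math.6.RP.A.2"})),
        Resource("https://example.org/oer/b", frozenset({"Math.6.RP.A.1"})),
        Resource("https://example.org/oer/c", frozenset({"Math.7.G.A.1"})),
    ]
    for t, members in cluster_by_similarity(seed, pool).items():
        print(f">= {t}: {members}")

In the QBE reading of the abstract, the seed would come from a resource the educator has already found (for instance via a replicated Google search) rather than from a typed query.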

Keywords

Discoverability · Exploratory search · OER · schema.org · Educational alignments

Notes

Acknowledgments

We would like to thank Prof. Marian Petre for her advice and feedback.

Compliance with ethical standards

Ethical approval

The research was reviewed by, and received a favourable opinion from, the Open University Human Research Ethics Committee.

Conflict of interest

The authors declare that they have no conflict of interest.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. The Open University, Milton Keynes, UK
