A semi-automatic metadata extraction model and method for video-based e-learning contents
Video-based learning offers a learner a self-paced, lucid, memorable, and flexible way of learning. The abundance of educational video material on the web has certainly broadened an individual's learning options. However, the lack of descriptive information about these videos makes it difficult for a learner to search for and select the video that best matches his/her requirements, in terms of learning capability and the material's relevance, difficulty level, etc. Educational video recommendation systems suffer from a similar problem. Extracting the required metadata from learning videos, by various means, is a plausible solution. Despite credible research efforts on video metadata extraction in general, the problem of educational video metadata extraction has been overlooked. This paper proposes a comprehensive approach to extracting educational metadata from a learning video. A semi-automatic mechanism combining manual and computational approaches is introduced to extract the metadata and evaluate their values. Along with identifying a set of specific metadata attributes from IEEE LOM, a few additional attributes are suggested that are imperative for assessing the suitability of a video-based learning object with respect to a learner's personalized preferences. The test results are validated against metadata manually extracted by experts from the same videos. The outcome establishes the promising effectiveness of the approach.
Keywords: Video metadata extraction · Video-based learning · Metadata · IEEE LOM · Speech-to-text conversion · Educational recommendation system
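The pipeline the abstract outlines — a speech-to-text transcript feeding keyword extraction, whose output populates IEEE LOM-style metadata fields — can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the field names, the stop-word list, and the frequency-based keyword ranking are all assumptions standing in for the paper's actual method.

```python
from collections import Counter
import re

# Illustrative stop-word list; a real system would use a fuller one.
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in",
              "this", "that", "we", "it", "for", "on", "as", "with", "so"}

def extract_keywords(transcript: str, top_n: int = 5) -> list[str]:
    """Rank non-stop-words by frequency in a speech-to-text transcript.

    A stand-in for the paper's keyword-extraction step; simple term
    frequency is used here purely for illustration.
    """
    words = re.findall(r"[a-z]+", transcript.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [w for w, _ in counts.most_common(top_n)]

def build_lom_record(title: str, transcript: str,
                     duration_sec: int) -> dict:
    """Populate a small, hypothetical subset of IEEE LOM fields."""
    return {
        "general.title": title,
        "general.keyword": extract_keywords(transcript),
        "general.language": "en",                    # assumed, or detected upstream
        "technical.duration": f"PT{duration_sec}S",  # ISO 8601 duration, per LOM
    }

# Toy transcript standing in for the output of a speech-to-text engine.
transcript = ("In this lecture we introduce binary search trees. "
              "A binary search tree stores keys so that search, "
              "insertion and deletion take logarithmic time on average. "
              "We compare binary search trees with sorted arrays.")
record = build_lom_record("Binary Search Trees", transcript, 600)
print(record["general.keyword"])
```

In the semi-automatic setting the abstract describes, fields such as the title would be supplied manually, while keyword-like attributes come from the computational side; a human reviewer would then validate the generated values.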