Developing the Petal E-Learning Platform for Facial Analytics and Personalized Learning

  • Vincent Tam
  • Edmund Y. Lam
  • Y. Huang
  • Kelly Liu
  • Victoria Tam
  • Phoebe Tse
Living reference work entry

Abstract

Learning analytics aims to better understand each learner's interests and characteristics in order to build a personalized and smart learning environment. However, many learning analytics techniques are computationally intensive and therefore ill-suited to mobile applications. In this chapter, a mobile and smart e-learning platform, the Personalized Teaching and Learning (PETAL) system, is proposed. It is driven by an efficient facial analytics algorithm that runs on any mobile device and quickly estimates the learner's responses by continuously monitoring the individual's attention span, facial orientation, and eye movements while the learner views online course materials such as educational videos. To protect each individual's data privacy, the learner profile is stored under a password-protected account on a cloud server, and all intermediate data are erased once a learning task is completed. This work represents the first attempt to build an intelligent and personalized learning environment supported by a facial analytics algorithm efficient enough to run on any mobile device. To demonstrate its feasibility, a prototype of the PETAL e-learning system is built with the open-source computer vision library OpenCV to detect the learner's responses to educational videos. By notifying learners of their, possibly unconscious, reactions to such videos, the platform aims to promote a truly personalized and smart learning environment through learning analytics techniques. Many promising directions, both pedagogical and technological, remain for enhancing the mobile PETAL platform toward next-generation e-learning systems.
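The abstract describes continuously monitoring per-frame face and eye detections to estimate attention span on a mobile device. The chapter does not specify how those per-frame signals are aggregated, so the following is only a minimal sketch of one plausible aggregation scheme: a sliding window over per-frame "face/eyes detected" flags, flagging inattention when the attentive fraction drops below a threshold. The class name `AttentionMonitor` and the `window` and `threshold` parameters are illustrative assumptions, not the authors' design; in a full system each flag would come from a detector such as OpenCV's Haar cascades, which is stubbed out here by passing the flag directly.

```python
from collections import deque


class AttentionMonitor:
    """Aggregates per-frame attention flags over a sliding window.

    In a real PETAL-style pipeline, each flag would be produced by a
    face/eye detector (e.g. OpenCV's CascadeClassifier) run on a video
    frame; here the detector output is supplied directly as a boolean.
    """

    def __init__(self, window: int = 30, threshold: float = 0.5):
        self.window = deque(maxlen=window)  # most recent per-frame flags
        self.threshold = threshold          # min attentive fraction required

    def update(self, face_found: bool) -> None:
        """Record one frame's detection result."""
        self.window.append(face_found)

    def is_attentive(self) -> bool:
        """True while the attentive fraction in the window stays above threshold."""
        if not self.window:
            return True  # no evidence yet; assume attentive
        return sum(self.window) / len(self.window) >= self.threshold


# Simulated detector output: learner visible in 7 of the last 10 frames.
monitor = AttentionMonitor(window=10, threshold=0.6)
for frame_flag in [True] * 7 + [False] * 3:
    monitor.update(frame_flag)
print(monitor.is_attentive())  # 0.7 >= 0.6 -> True
```

Because the window is a fixed-length `deque`, the monitor uses constant memory per learner, which is consistent with the chapter's emphasis on keeping the analytics lightweight enough for mobile devices.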

Keywords

Facial analytics · Learning analytics · Personalized learning · Mobile devices · Smart learning environments

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Vincent Tam (1), Email author
  • Edmund Y. Lam (1)
  • Y. Huang (1)
  • Kelly Liu (2)
  • Victoria Tam (3)
  • Phoebe Tse (2)
  1. Department of Electrical & Electronic Engineering, The University of Hong Kong, Hong Kong, Hong Kong
  2. Department of Electrical Engineering & Computer Science, Massachusetts Institute of Technology, Cambridge, USA
  3. Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, USA
