
Synergistic integration between internet of things and augmented reality technologies for deaf persons in e-learning platform

  • Published in: The Journal of Supercomputing

Abstract

The development of the Internet of Things (IoT) accentuates the interweaving of the digital and the physical, raising numerous issues, including for learning in general and for deaf learners in particular. While emerging technologies have given rise to various learning methods and modes, the implications of the IoT for deaf learning are not yet well understood. This study proposes a possible integration of the IoT and augmented reality (AR) to support deaf learning along three dimensions of analysis: data, interfaces and pervasiveness. The identified application axes suggest that the IoT promotes learning characterized by experimentation, adaptation (to the context and to the learner), the manipulation of objects and exploration without constraints of time or space. To inform this integration, a preliminary survey was conducted with deaf learners. Based on the results, a deaf intelligent learning model is proposed and discussed.
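
As a purely illustrative sketch of how such an integration could be wired together (the class names, event types and field names below are hypothetical and not taken from the article), the following Python fragment models the three dimensions of analysis: IoT data events, an AR interface layer that renders sign-language or caption overlays, and a pervasive learner context that adapts the rendering to the learner's device and preferences.

# Hypothetical sketch only: route a sensed classroom event through a
# context-aware layer and render it as an AR cue for a deaf learner.
# No real IoT or AR SDK is used; all names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class IoTEvent:                  # "data" dimension: a sensed classroom event
    source: str                  # e.g. "lecture-mic", "door-sensor", "fire-alarm"
    payload: str                 # raw reading or transcribed speech

@dataclass
class LearnerContext:            # "pervasiveness" dimension: where and how the learner studies
    device: str                  # e.g. "smart-glasses", "tablet"
    prefers_sign_video: bool     # preference reported by the learner

def to_ar_overlay(event: IoTEvent, ctx: LearnerContext) -> dict:
    """'Interfaces' dimension: turn an IoT event into an AR cue the deaf learner can see."""
    modality = "sign_language_avatar" if ctx.prefers_sign_video else "caption_text"
    return {
        "anchor": event.source,                        # where the overlay is pinned in the AR view
        "modality": modality,
        "content": event.payload,
        "haptic_alert": event.source == "fire-alarm",  # safety events also trigger a vibration cue
    }

if __name__ == "__main__":
    ctx = LearnerContext(device="smart-glasses", prefers_sign_video=True)
    event = IoTEvent(source="lecture-mic", payload="Today we cover Chapter 3.")
    print(to_ar_overlay(event, ctx))

In an actual deployment, IoTEvent instances would arrive over a transport such as MQTT and the resulting overlay description would be handed to an AR rendering engine; the sketch only shows the mapping logic between the three dimensions.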

Author information

Corresponding author

Correspondence to Malek Alrashidi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix I

Questionnaire about the use of modern communication and e-learning technologies by deaf persons.

Section (I): General information.

Q1: What is your age?

۝ 18–25.

۝ 25–30.

۝ 30–35.

۝ 35–40.

۝ More than 40.

Section (II): Use of modern communication technologies.

Q2: What is your degree of use of the following services?

                                        Low   Medium   High
  Forums and chat                        ۝       ۝       ۝
  Social networks                        ۝       ۝       ۝
  E-mail                                 ۝       ۝       ۝
  Search engines                         ۝       ۝       ۝
  E-learning                             ۝       ۝       ۝
  E-government services                  ۝       ۝       ۝
  Online banking                         ۝       ۝       ۝
  Online shopping                        ۝       ۝       ۝
  Use of technology in daily tasks       ۝       ۝       ۝

Q3: What is your degree of use of the following devices?

                                        Low   Medium   High
  Smartphones                            ۝       ۝       ۝
  Tablets                                ۝       ۝       ۝
  Computers                              ۝       ۝       ۝
  Smart watches                          ۝       ۝       ۝
  IoT-connected devices                  ۝       ۝       ۝

Q4: For in-person courses, what is the best method to communicate with the teacher?

۝ Face to face.

۝ Through e-learning systems (e.g., Blackboard).

۝ Via e-mail.

۝ Through social networking applications (WhatsApp, Twitter, Facebook).

۝ Through wearable smart IoT devices (e.g., smart watches).

Section (III): E-learning experience

Q5: What is your preferred way of learning about the activities and announcements of the department, college and university?

۝ By visiting the college or university web page directly.

۝ By following announcements on e-learning systems (e.g., Blackboard).

۝ By following the university's social media accounts (WhatsApp, Twitter, Facebook).

۝ By following university e-mails.

۝ Via direct communication with a sign language interpreter.

۝ Other: …………………………………………………………

Q6: How would you rate your e-learning experience (if any)?

۝ Excellent.

۝ Very Good.

۝ Average.

۝ Acceptable.

۝ Weak.

Q7: If you have attended e-learning courses, you recommend that the course should:

                                        Strongly disagree   Disagree   Neutral   Agree   Strongly agree
  Be short and clear                            ۝               ۝          ۝        ۝           ۝
  Contain graphs and animations                 ۝               ۝          ۝        ۝           ۝
  Contain videos                                ۝               ۝          ۝        ۝           ۝
  Contain examples                              ۝               ۝          ۝        ۝           ۝
  Contain videos in sign language               ۝               ۝          ۝        ۝           ۝
  Contain a summary at the end                  ۝               ۝          ۝        ۝           ۝
  Contain an evaluation at the end              ۝               ۝          ۝        ۝           ۝

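If the Q7 ratings above were exported as a spreadsheet, a short script along the following lines could compute the mean agreement per item as input to the proposed learning model; the file name q7_responses.csv and its layout (one column per item, values 1 to 5) are assumptions made for illustration only and are not part of the paper's protocol.

# Hypothetical sketch: summarize 1-5 Likert answers to the Q7 items.
# The file "q7_responses.csv" and its column layout are assumed for illustration only.
import csv
from statistics import mean

Q7_ITEMS = [
    "Be short and clear",
    "Contain graphs and animations",
    "Contain videos",
    "Contain examples",
    "Contain videos in sign language",
    "Contain a summary at the end",
    "Contain an evaluation at the end",
]

def summarize(path: str = "q7_responses.csv") -> dict:
    """Mean score per item (1 = strongly disagree ... 5 = strongly agree)."""
    scores = {item: [] for item in Q7_ITEMS}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for item in Q7_ITEMS:
                if row.get(item):                 # skip blank cells
                    scores[item].append(int(row[item]))
    return {item: round(mean(vals), 2) for item, vals in scores.items() if vals}

if __name__ == "__main__":
    for item, avg in summarize().items():
        print(f"{item}: {avg}")
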
Appendix II

Summary of participant information (see Tables 7, 8, 9 and 10).

Table 7 Gender of participants
Table 8 Time of hearing loss
Table 9 Functional status
Table 10 Educational status

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Alrashidi, M. Synergistic integration between internet of things and augmented reality technologies for deaf persons in e-learning platform. J Supercomput 79, 10747–10773 (2023). https://doi.org/10.1007/s11227-022-04952-z
