1 Introduction

Patients with speech and verbal motor disorders have difficulty communicating with the world around them. Since communication is essential for humans to express their feelings and needs, develop relationships, and simply live a normal life, users with speech impairments lose their connection to family and friends and become increasingly dependent on them to accomplish basic life requirements. Speech impairments have several causes, such as damage to brain cells, hearing loss, drug abuse, and mental and neurological disorders [1, 2]. Patients with motor neuron diseases such as Amyotrophic Lateral Sclerosis (ALS) and Primary Lateral Sclerosis (PLS) also develop speech impairment symptoms and gradually lose the ability to control their muscles, leading to a completely paralyzed stage in which the eye's pupil is the only organ whose movement remains available for initiating communication with others [3, 4]. Many communication strategies for ALS patients have been reported, such as gestures, letter boards, eye blinks, hand squeezes, pre-recorded message sounds, and flash cards [5].

ALS patients need augmentative and alternative communication devices that assist them during verbal and non-verbal communication. These augmented computer devices track the patient's eyes or brain signals to initiate actions according to their specific needs. The term Augmentative and Alternative Communication (AAC) describes the set of technologies or devices that support people with different paralysis conditions to interact personally with their world, reduce their dependency on others, and be more productive and involved in different activities without limitations or obstacles [6, 7]. AAC technologies can be classified into two categories: one relies on software programs that track facial expressions, hand gestures, or eye blinks and translate these features into generated speech or written text; the other relies on hardware devices that track head or limb movements and uses smart tablets or touch pads to aid communication [8, 9].

The acceptance rate of these supportive devices has increased quickly among ALS patients with different developing symptoms; the only reported rejections came from patients who suffer from additional health issues such as cancer or functional impairments such as dementia [10,11,12]. A patient's acceptance is also affected by the acceptance of the family members and caregivers involved in the communication process.

The revolution in the internet and communication protocols and the introduction of Internet of Things (IoT) technology have enabled data generation, processing, and storage on a large scale. They have also allowed real-time services and controlling actions based on the instant processing of data items. The IoT field has recently evolved to improve our daily quality of life by enabling communication among the different objects/things that share our physical world over the internet. Home devices have microcontrollers that can receive or transmit signals over the internet to other connected devices and hence can be utilized to support the communication of users with speech impairments [13,14,15].

Most of the devices invented for users with speech impairments are expensive and cannot be afforded by users in low-income countries. Moreover, the implementation complexity of such devices and their limited usage scenarios make them customized for specific patient needs, initiating only a limited set of actions. Furthermore, a standard communication language should be followed by all such devices in order to develop a unified, flexible environment that is suitable for all users with speech impairments and assists them in their different situations.

From these perspectives, we designed a simple, flexible, cost-efficient eyeglass that captures eye blinks and translates them, using the universal standard Morse code, into a set of alphabets/sentences. These alphabets/sentences are displayed on a mobile application installed on a phone or any Android-supported device. The sound corresponding to these sentences can also be played through our application, so the patient's family and friends can communicate with him efficiently. Thus, our system is not customized for specific smart screens or keyboards, nor does it need specific equipment or configurations to be installed. The patient only needs to wear a Morse glass, and the person communicating with him needs only to hold his phone; no additional costs are needed to buy extra smart devices such as screens or keyboards.

Our paper is organized as follows: Sect. 2 reviews related work; Sect. 3 provides basic background on the IoT system infrastructure and the standard Morse code as a communication protocol; Sect. 4 explains the Morse Glasses system from an IoT perspective, along with our modified Morse code chart based on the patient's eye blinks; Sect. 5 discusses different aspects related to our initial prototype, including the expected cost and challenges; and Sect. 6 concludes our paper and provides future insights about updates to the Morse Glasses system.

2 Related work

According to the studied literature, ALS patients' communication approaches can be classified into five basic categories: auditory Brain Computer Interfaces (BCIs), Electrooculography (EOG), eye tracking, eye blinking, and other aided devices based on facial expressions, gestures, signs, head and hand movements, etc. Each approach utilizes signals received from a functional organ of the patient, and the usage of each approach depends mainly on the developing ALS symptoms (see Fig. 1). Other factors are considered when choosing a communication approach, such as its speed, cost, flexibility, hardware and software support, and whether the communication needs assistance from a caregiver. EOG, eye tracking, and eye blinking can be grouped together under the term eye movement communication approaches, which rely on the patient's eyes to communicate with others [16,17,18].

As ALS symptoms progress, the only organ available for patients to communicate with is their eyes. This has led to the widespread usage of AAC devices that translate eye language into generated speech. Some studies reported that ALS patients usually prefer using their eyes to communicate compared to the other methods [17, 19, 20]. Other studies reported that the eye movement approach is the least fatiguing communication approach for ALS patients, since moving the eye muscles usually requires little effort compared to the other communication approaches [21, 22].

Since our paper follows the eye movement communication approach, we provide a comparison among the different approaches based on some reported studies (see Table 1) [1, 5, 6, 16, 17, 23].

Fig. 1 Different communication approaches for ALS patients

The auditory BCI based approach tracks brain signals and converts them into generated speech. It is a muscle-independent communication method and can be used in cases where patients have lost control of their eyes, but it is a slow, expensive, and complex communication approach and cannot be afforded by patients in low-income countries [17, 24].

Eye-gaze tracking systems are one AAC application supporting ALS patients' communication; they track eye movement, detect the pupils, and translate this into understandable messages or corresponding actions [3, 25]. The action allowed by early eye-movement tracking systems was cursor movement on a connected screen according to the movement of the pupils. This is considered a simple tracking technique and might be the only one available if the ALS patient's symptoms develop quickly and the eyes provide the only movement left to track. Systems that rely on eye movement to generate understandable speech use a camera with an infrared sensor to detect eye movements: the amount of light reflected from the infrared sensor on the eye's retina provides a clear, bright image of the eye's pupil and creates its corresponding glint vector. Each point in the pupil's glint vector is translated into a cursor movement on the connected screen, and consequently each eye movement over the on-screen keyboard, which is designed to be gaze sensitive, generates communicative speech or initiates other controlling actions [26].

Table 1 Comparing attributes of eye movement based approaches for ALS patients’ communications

The electromyography system is another solution proposed for ALS patients to control various desktop applications; nearly five electrodes are placed around the eyes to detect eye blinks/movements using electromyography waves. This system is developed to be attached to the patient's arm or forehead and can maintain a keyboard that translates eye movements into recognizable words to make communication easier and more efficient [17, 27].

One IoT-based solution to help ALS patients is ALS Specs, which translates eye and head movements into understandable, well-defined actions so that home devices can be controlled easily by ALS patients without any dependency on others [28]. ALS Specs has a glass frame with attached colored LEDs to track head movements, along with a camera to detect eye movements correctly. The ALS Specs device is connected to a tablet with a pre-installed application specific to ALS patients. The application can be navigated through head movements and has functionality to form words/sentences based on detecting eye/head movements through the colored LEDs.

EyeLive [29] is another IoT eye-gaze tracking system that relies on measuring eye reflections using infrared sensors. The analog signals are transmitted to a microcontroller that performs analog-to-digital conversion and sends the data to computer software for further analysis and processing. The software relies on algorithms such as principal component analysis to detect eye movements and initiate the corresponding controlling actions. EyeLive has a user interface with a keyboard that enables patients to look at the intended letters with their eyes and make a selection by blinking for one second.

Mukherjee and Chatterjee [16] propose an IR eyeglass communication system that translates eye blinks into a sequence of alphabets based on Morse code. The alphabets are displayed on an LCD screen connected by wire to the Arduino circuit as a proof of concept of the system functionality. The wired connection among the different system components complicates the system design and limits the device's usability, accessibility, and the extendibility of its functions. Morse Glasses overcomes the limitations of Mukherjee and Chatterjee's device by improving the hardware architecture to allow wireless connection over an IoT-based environment, utilizing a mobile application, encoding frequently used patient phrases, and generating their corresponding sounds. These introduced features increase the device's usability, accessibility, and functionality for ALS patients' communication.

3 Background

3.1 IoT system architecture

IoT has revolutionized our lives by introducing the idea of objects communicating over interconnected wireless sensor networks. These communicating things, devices, or objects have the ability to sense and perceive their environment through a set of integrated sensors and to act through a set of designed actuators. The sensors gather data from the device's working environment and send it to a microcontroller, which processes and analyzes the data and generates controlling actions accordingly [30, 31].

Technology costs have rapidly decreased along with the introduction of new smart sensing devices, including wearable ones. These devices are connected to the internet via machine-to-machine and human-to-machine communication paradigms and exchange their collected data items in a timely manner, which enables real-time analysis and processing and, consequently, the performance of appropriate actions [32,33,34]. Along with recent developments in communication technologies such as radio-frequency identification (RFID), IoT plays a key role in enabling communication among the different objects that share our physical world over the internet [35].

IoT can be viewed from several perspectives depending on its scope. IoT can be explained as a set of services provided by devices communicating with a computer or with each other to enrich human life. IoT can also be described as a world with no limitations or boundaries, where every device, everywhere, at any time, can be accessed and communicated with effectively in a timely manner. In traditional networking systems, where computers are connected over the internet, the data is produced and used by humans; in the IoT, the data is collected by sensors and utilized by actuators that initiate actions in their working environment [36,37,38,39,40].

IoT overcomes the communication, inclusion, and productivity challenges currently facing people with disabilities by introducing wearable devices that are able to sense their environment and perform actions accordingly. This increases their engagement in different activities and makes them more productive and social, improving their overall quality of life. IoT technology can assist people with hearing, visual, or physical impairments to accomplish their daily life activities independently.

The IoT architecture has three layers: the perception layer, the network/communication layer, and the application/actions layer [13]. The perception layer is responsible for gathering data from the environment in which a wearable device operates, through a set of sensors, and for performing actions through actuators based on the resulting decisions. The network layer is a communication layer responsible for transmitting the collected data over wired/wireless communication networks. The application layer is responsible for analyzing and processing the data and executing intelligent algorithms and applications/services to make rational, smart decisions in a timely manner. From an IoT research perspective, many challenges currently face people with disabilities, such as making personalized actions based on each user's particular disability and context and acting accordingly. Managing the actions initiated by IoT devices without human intervention is another key challenge. Furthermore, the security of personal data and the requirement of a persistent internet connection represent additional challenges [40].

3.2 Morse code

Morse code is a character encoding system designed for radio telegraphy communication. It encodes the alphabet characters using short and long electronic pulses: the short pulse is represented by a dot character, while the long one is represented by a dash character. The Morse coding system also assigns short coding sequences to the most frequently used letters [41, 42]. Using the same analogy, we can use a short eye blink to encode the dot and a long eye blink to encode the dash, and the Morse code map is constructed accordingly. Morse code has previously been used as a communication protocol for ALS patients via face recognition software that extracts and detects facial features and translates eye blinks into words/sentences, enabling ALS patients to communicate with others in their environment [43, 44]. Mukherjee and Chatterjee also used Morse code in their AAC device to translate eye blinks into a set of communicated alphabets [16].
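To make the encoding concrete, the short listing below (standalone C++, an illustration rather than the device firmware) shows an excerpt of the international Morse table; note how the most frequent letters, E and T, receive the shortest codes, which is exactly what makes the scheme fast for blink-based input.

#include <iostream>
#include <map>
#include <string>

int main() {
    // Excerpt of the international Morse table:
    // '.' = short pulse (short blink), '-' = long pulse (long blink).
    std::map<char, std::string> morse = {
        {'E', "."},   {'T', "-"},              // most frequent letters, shortest codes
        {'A', ".-"},  {'I', ".."},  {'N', "-."},
        {'S', "..."}, {'O', "---"}, {'H', "...."}
    };
    for (const auto& entry : morse)
        std::cout << entry.first << " -> " << entry.second << '\n';
    return 0;
}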

Blink to Speak is another eye speaking language adapted for ALS patients' communication with the assistance of their doctors and caregivers. The language's alphabets correspond to different eye movement states such as up, down, left, right, blink, wink, roll, and shut. A set of the 50 most basic daily life commands is encoded using these alphabets, and paralyzed patients can be trained to use it as a daily communication language [45]. This proposed language is not utilized in any AAC device for ALS patients' communication, and some of the encoded commands are based on combinations of different eye movement states, which may complicate and slow the communication process.

Morse code is considered an adaptive, low-cost encoding scheme for people with disabilities. It is an international, universal encoding system with advantages such as simplicity and speed in encoding/decoding communicated messages [41]. In the following section, we discuss our modified Morse code map and how it is utilized by the Morse Glasses system for users with speech impairments.

4 Morse glasses for users with speech impairments

Morse Glasses is a smart wearable device based on IoT technology that assists users with speech impairments, especially those with motor neuron diseases such as ALS and PLS, in their daily life communications. These patients develop speech impairment symptoms and gradually lose their ability to control their muscles, leading to a completely paralyzed stage in which the eye's pupil is the only organ whose movement remains available for communication. Accordingly, the universal Morse code communication language is modified to treat the patient's eye blinks as series of dots and dashes based on a specific threshold. Once the dots and dashes are identified, the sequence of communicated alphabets/sentences is generated and displayed on an Android mobile application. The patients can also initiate a speak command for their generated sentences to communicate efficiently with others.

4.1 System architecture

The Morse Glasses system implements a three-layered IoT architecture (see Fig. 2), where the perception layer includes our wearable Morse Glasses device with an IR sensor interfaced with one side of the eyeglass. The active IR sensor is responsible for gathering eye blink information and sending it to a microcontroller on the Arduino Uno board, which processes the eye blinks and converts them into a series of dashes/dots and, consequently, into a series of communicated alphabets. The active IR sensor is fixed in a position facing the patient's eye; it emits infrared radiation, detects the reflected light, and measures its intensity in order to decide the state of the patient's eye (open, closed, or blinking). The intensity of the reflected light differs for each of the three eye states, so by measuring the amount of reflected light over specific intervals, the state of the patient's eye can be determined.

Fig. 2 A three-layered IoT system architecture for Morse Glasses

The IR sensor sends its analog signal to the microcontroller on the Arduino Uno board, which is fixed on one of the eyeglass temples. The microcontroller's memory stores a set of instructions to convert the analog signals to digital ones in order to start processing the data received from the IR sensor. The signals are classified into one of the three eye states based on the intensity of light reflected from the IR sensor over specific time intervals. The blinking state is further classified into short blinking and long blinking based on the blink duration. We tested different threshold values and chose a fixed value of 0.6 s to classify blinks as short or long: if the patient's eye blinks for less than 0.6 s, the blink is classified as short; otherwise, it is long. A short blink is translated into a dot, while a long one is translated into a dash.
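The Arduino-style listing below illustrates this classification step. It is a minimal sketch, not the production firmware: the analog pin and the ADC level that signals a closed eyelid are our assumptions, while the 0.6 s dot/dash boundary and the 0.1 s natural-blink filter come from the description in this section.

// Sketch of the dot/dash classification step, assuming the IR sensor
// output is wired to analog pin A0 and that readings above
// EYE_CLOSED_LEVEL mean the eyelid is closed (pin and level are
// illustrative, not taken from the paper's schematic).
const int IR_PIN = A0;
const int EYE_CLOSED_LEVEL = 600;      // ADC threshold for "eye closed" (assumed)
const unsigned long DOT_MAX_MS = 600;  // 0.6 s boundary between dot and dash
const unsigned long NOISE_MS = 100;    // blinks under 0.1 s are natural blinks

unsigned long closedSince = 0;
bool eyeClosed = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  bool closedNow = analogRead(IR_PIN) > EYE_CLOSED_LEVEL;
  if (closedNow && !eyeClosed) {         // eyelid just closed: start timing
    closedSince = millis();
    eyeClosed = true;
  } else if (!closedNow && eyeClosed) {  // eyelid reopened: classify duration
    unsigned long blinkMs = millis() - closedSince;
    eyeClosed = false;
    if (blinkMs < NOISE_MS) return;      // ignore natural blinks (false signals)
    Serial.print(blinkMs < DOT_MAX_MS ? '.' : '-');
  }
}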

Figure 3 shows some recorded signals and their classification as dot/dash speech according to the specified threshold. The signals classified as dots always have a peak shorter than 0.6 s (e.g., the alphabet E), and the dash signals always have a peak longer than 0.6 s (e.g., the alphabet T).

Fig. 3 Classification of eye blinking signals as dot/dash speech. a Blinking signals of the alphabets E and T, b blinking signals for the sentence “I’m hungry”

Using our modified Morse code conversion table (see Fig. 4), a sequence of alphabets is generated by reading a series of eye blinks and classifying them as short or long. Once the sequence of alphabets is generated, a sequence of sentences can be constructed. Furthermore, we encoded some of the sentences most used by patients following the same analogy, assigning each a unique eye blinking code.
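As an illustration of the decoding step, the sketch below matches an accumulated dot/dash string against a (partial) code table once a letter is complete. The table excerpt is standard Morse, but the helper itself is ours and is not taken from the paper; a pause of open eye longer than some chosen letter gap would trigger the lookup, and the paper does not state the exact gap it uses.

#include <string.h>

// Partial dot/dash lookup table (standard Morse letters).
const char* CODES[]   = {".-", "-...", "-.-.", "-..", ".", "...", "-"};
const char  LETTERS[] = {'A',  'B',    'C',    'D',   'E', 'S',   'T'};
const int   N_CODES   = sizeof(LETTERS) / sizeof(LETTERS[0]);

// Map a finished blink pattern (e.g. "...") to its letter ('S').
char decodeBlinkPattern(const char* pattern) {
  for (int i = 0; i < N_CODES; i++)
    if (strcmp(pattern, CODES[i]) == 0) return LETTERS[i];
  return '?';  // unrecognized pattern
}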

Fig. 4 A modified Morse code map for the Morse Glasses system based on the eye blinking scheme

We can extend this list to include additional sentences and construct a help manual to speed up the communication process (see Fig. 5). The patients can be trained very quickly to use the different encoding styles for eye blinks and their corresponding generated speech. A buzzer is attached to the Arduino board to alert others that the patient is about to start a conversation. The buzzer also enables the patient to know how his eye blinks are received by the IR sensor and interpreted by the Arduino circuit (i.e., classified as short or long blinks) according to the corresponding alert sound, so the patient can adjust his blinks accordingly.
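A few lines suffice for this audible feedback; in the sketch below we assume the buzzer sits on digital pin 8 and use two tone lengths to echo the dot/dash decision back to the wearer (the pin, frequency, and durations are illustrative assumptions).

const int BUZZER_PIN = 8;  // piezo buzzer pin (assumed)

void setup() { pinMode(BUZZER_PIN, OUTPUT); }
void loop()  {}            // the classification code would call beepFor()

// Echo the classified symbol back to the wearer: a short beep for a
// dot, a longer beep for a dash, both at 1 kHz.
void beepFor(char symbol) {
  tone(BUZZER_PIN, 1000, symbol == '.' ? 100 : 300);
}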

Fig. 5 Eye blinking encoding scheme for some phrases frequently used by users with speech impairments

The second IoT layer of Morse Glasses is the communication layer, which includes the Bluetooth and Wi-Fi modules that transmit the sequence of dots and dashes to the application layer. The patient can choose either module depending on the available communication medium. The Bluetooth module can also be used to overcome the requirement of a persistent internet connection.
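On the firmware side, handing a classified symbol to the communication layer can be as simple as the sketch below, which assumes the HC-05 is driven through the Arduino SoftwareSerial library on pins 10/11; the pin choice and the one-character framing are our assumptions, not the paper's schematic.

#include <SoftwareSerial.h>

SoftwareSerial bt(10, 11);  // HC-05: RX on 10, TX on 11 (assumed wiring)

void setup() {
  bt.begin(9600);           // HC-05 factory-default baud rate
}

void loop() {}              // the classification code would call sendSymbol()

void sendSymbol(char symbol) {
  bt.write(symbol);         // forward each '.' or '-' to the mobile app
}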

The third IoT layer of Morse Glasses is the application layer, which includes an Android mobile application running on the phones of the people the patient wants to communicate with, such as friends and family. It is a simple application that displays the translated eye blink sentences/alphabets and speaks them aloud if the patient ends his message with the eye blink code corresponding to a dot character. The Morse Glasses mobile application can be installed on any Android device, from small phones to large tablets, which removes the dependency of previously suggested systems for users with speech impairments on specific hardware components with pre-installed configurations.

Figure 6 shows the communication protocol between the two basic components of the Morse Glasses system. A patient wearing Morse Glasses initiates a communication by blinking the code corresponding to the alphabet S to indicate the start of a communication process. The mobile application receives the signal, interprets it as a communication start alert, and waits for the next eye blinks to decode their corresponding dot/dash speech. The patient can end the communication process by blinking the code corresponding to the dot character (see Fig. 4); the mobile application turns off after this signal is received and translated. The application will translate the alphabet S as part of the speech if it is already on and accepting speech from the patient; if the signal corresponding to the alphabet S is the first signal the application receives, it is interpreted as a start signal and no other action is taken. Thus, each patient communication is initiated by the eye blinks corresponding to the alphabet S, which turn the mobile application on, and ended by the eye blinks corresponding to the dot character, which turn it off. During the communication, any natural blink with a duration of less than 0.1 s is considered a false signal, and no encoded speech is generated (see Fig. 7).
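The session protocol just described reduces to a small state machine. The sketch below captures it in standalone C++; the two hooks into the rest of the system are hypothetical stubs, named here only for illustration.

// Hypothetical hooks into the rest of the firmware/app (stubs here).
void speakBufferedSentence() { /* hand the buffered text to the app's TTS */ }
void appendToSentence(char c) { /* append c to the on-screen sentence */ }

enum SessionState { IDLE, ACTIVE };
SessionState state = IDLE;

// Apply the session protocol of Fig. 6 to each decoded character.
void handleDecodedChar(char c) {
  if (state == IDLE) {
    if (c == 'S') state = ACTIVE;  // the first 'S' opens the session
    return;                        // anything else is ignored while idle
  }
  if (c == '.') {                  // end-of-message code: speak, then close
    speakBufferedSentence();
    state = IDLE;
  } else {
    appendToSentence(c);           // an 'S' mid-session is ordinary speech
  }
}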

Fig. 6 A communication protocol of the Morse Glasses system

Fig. 7 Signal interpretation by the Morse Glasses system

4.2 Hardware architecture

We designed an affordable eyeglass to assist patients with disabilities in their daily life communication activities. Our Morse Glasses utilizes a modified Morse code based on eye blinks and uses the IoT infrastructure to process the incoming signals and display the final generated speech, with its corresponding sound, on any Android-supported device such as a smartphone or tablet.

The IoT infrastructure used in Morse Glasses is shown in Fig. 8: the TCRT5000 IR sensor is attached to one side of the eyeglass, and the Arduino Uno board with the ATmega328 microcontroller is fixed to one of the eyeglass temples. Jumper wires connect the IR sensor to the Arduino board. We used an HC-05 Bluetooth module to provide a wireless communication link between the Morse Glasses device hardware and its corresponding mobile application. To allow communication over larger distances and over the internet, the Bluetooth module can be replaced with an ESP8266 ESP-01 serial wireless Wi-Fi module. The details of the circuit connectivity are shown in Fig. 9.
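Putting the pieces together, a bring-up sketch for the components named above might look like the following; every pin assignment here is an assumption for illustration, since the authoritative wiring is the Fig. 9 schematic.

#include <SoftwareSerial.h>

const int IR_PIN     = A0;   // TCRT5000 analog output (assumed)
const int BUZZER_PIN = 8;    // feedback buzzer (assumed)
SoftwareSerial bt(10, 11);   // HC-05 Bluetooth: RX, TX (assumed)

void setup() {
  pinMode(BUZZER_PIN, OUTPUT);
  bt.begin(9600);            // HC-05 default baud
  Serial.begin(9600);        // USB serial, useful while debugging
}

void loop() {
  // Read the TCRT5000, classify blinks, beep, and forward symbols
  // over Bluetooth, as sketched in Sect. 4.1.
}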

Fig. 8 Basic components of the Morse Glasses system. The designed system has three basic components: a wearable glass, a controlling circuit, and a mobile application

Fig. 9 The Arduino Uno board connectivity details of the Morse Glasses system

5 Discussion

Morse Glasses is a simple, flexible, cost-efficient wearable device for users with speech impairments in developing countries like Egypt. Most eye-gaze tracking systems are complex and require specific equipment such as smart screens, keyboards, and cameras, along with custom configurations. They are also very expensive, with costs ranging from $5,000 to $20,000, which is not affordable for poor patients in low-income countries. The Morse Glasses device costs on average $30 and requires no additional spending on extra equipment for the patient to communicate effectively (see Fig. 10). In a simple scenario, the patient only needs to wear the Morse Glasses, and his family/friends need only hold their phones and install the Android application associated with the device.

Morse Glasses utilizes a modified version of the standard universal Morse code based on eye blinks, which makes the device suitable as a standard communication medium for people with disabilities. The simplicity and flexibility of Morse code, and of its modified version, make it easy for patients to learn and to customize by adding new codes for their most used phrases according to their specific needs and disabilities. The sentences/alphabets most repeated by patients can also be assigned short codes so that patients learn them quickly, and the list of coded phrases can easily be extended according to their personal preferences. Furthermore, a patient can allow his family/friends to receive his messages as voice speech, which can increase their communication, inclusion, and productivity.

Fig. 10 The initial prototype of Morse Glasses

As mentioned previously, there are many software programs that rely on a camera connected to a computer to capture a sequence of images, which are processed using computer vision libraries such as OpenCV to detect eye blinks and translate them into a sequence of alphabets/sentences using Morse code. The Morse Glasses device falls under the category of hardware-supported, IoT-based devices for ALS patients' communication. The ALS patient simply wears the Morse Glasses device without any restrictions, previously set up communication environment, or particular lighting conditions. ALS patients will find it more flexible to wear an eyeglass and communicate through it than to rely on a caregiver to adjust a camera connected to a computer application that tracks their eye blinks and translates them into communicated speech. The Morse Glasses mobile application can be installed and configured on any Android-supported device without any technical background or software dependencies/libraries (i.e., like any application downloaded from the Google Play store). The proposed IoT Morse Glasses design will reduce the patient's dependency on caregivers, since patients wearing Morse Glasses like normal eyeglasses can do daily life activities, control their home appliances, and communicate wirelessly over the internet with friends and family. Controlling home hardware devices by ALS patients cannot be achieved by the previously proposed software systems that rely on a camera connected to a computer application, since their designs do not integrate any microcontroller hardware.

Due to the COVID-19 global pandemic, our appointments with ALS patients and medical centers to test our device were canceled. Therefore, we tested our device with the members of our team, their families, and friends to measure the performance of Morse Glasses and report the results accordingly (see Fig. 11).

Fig. 11 Example of testing the Morse Glasses device

We selected a group of 10 cases representing different sectors of society (i.e., different ages, cultures, and education levels). Since they are not ALS patients, each case had a maximum of three days of training in using the Morse Glasses device based on the encoded alphabets/sentences presented in Figs. 4 and 5. After the three-day training period, we asked each case to wear our device and communicate through it by blinking their eyes according to the previously trained alphabet/sentence encoding schemes. Since the blinks corresponding to complete sentences are more challenging, we focused our performance analysis on those trials. Each case had three test trials to blink the five sentences shown in Fig. 5, and the results were recorded accordingly.

Table 2 displays the recorded experimental results. The first column gives the code assigned to each case in order to study the effect of educational level, technology awareness, and age on the person's ability to blink the five requested sentences correctly. The second column gives the average number of trials required to blink the requested five sentences; we limited the number of trials to three and reported the trial as a failure if the person required more than three trials to blink the requested five sentences (cases 07 and 08). The third column gives the number of sentences recognized by the Morse Glasses device (i.e., blinked correctly by a case) out of the whole set of five sentences.

Cases 01 to 06 have ages ranging from 20 to 40, so they required fewer trials compared to cases 07–10, whose ages are above 55. The level of education also affects the required number of trials: the more educated cases used our device correctly and efficiently. The educational level of cases 07 and 08 is low, so they required more training time; we reported their trials as failures since they required more training time and more than three trials to blink the five requested sentences. Furthermore, people's awareness and acceptance of the technology affect the successful usage of our device. The younger people (cases 01–06) are more aware of the technology and learn very fast, so they were able to blink most of the requested sentences correctly compared to the other cases. Cases 01, 05, and 06 have an intermediate educational level, so they required more trials (i.e., 2 and 3 trials) compared to those with a high educational level (cases 02, 03, and 04). Cases 09 and 10 are above 55 but their educational level is high, so they were able to use our device within the limit of three test trials.

Table 2 Experimental results of the Morse Glasses device

We expect that ALS patients will need more training time than reported in our testing experiment, and that age, educational level, and the patient's awareness and acceptance of the technology will play a key role in the successful usage of Morse Glasses. Patients will need on average 10 days to learn our modified Morse codes for the different alphabets and sentences. We can also provide patients with a code manual to learn new codes and to remind them of existing ones, speeding up the communication process and making it smoother and easier. In order to increase the device's accessibility and usability for ALS patients, we designed a simple Android application to display and play the speech translated by our Morse Glasses device. The application has no specific configuration and can run on any Android-supported device, which removes any extra costs for smart screens or keyboards. The application can be further improved to include extra functionality such as initiating phone calls or sending messages/emails over the internet. The IoT infrastructure can also be extended to allow full automation of smart homes, cars, etc., based on our modified eye blink Morse code.

We compared the Morse Glasses device to the system previously introduced by Mukherjee and Chatterjee [16] in terms of hardware design, patient and family communication devices, communication language and technologies, patient blinking alerts, patient training, and system cost (see Table 3). The Morse Glasses device represents a real, practical implementation of the system proposed by Mukherjee and Chatterjee based on an IoT wireless communication protocol. Morse Glasses introduces many innovative ideas to the previously proposed system in order to increase its usability and accessibility and to extend its functionality. Morse Glasses utilizes a mobile application installed on any Android-supported device to display the translated alphabets/sentences; the mobile application communicates with Morse Glasses via a wireless connection (Bluetooth/Wi-Fi). This wireless communication opens the opportunity to extend the system's functionality to control various smart home appliances and removes the dependency of previous systems on specific fixed hardware devices (i.e., screens, cameras, or keyboards). Morse Glasses also introduces the idea of encoding patients' frequently used phrases in order to extend communication to the sentence level instead of the alphabet level. This helps customize our device to meet the requirements of each patient according to his profile (i.e., patient needs, disability, environment, etc.), and the list of encoded phrases can be adjusted and updated according to different user profiles. Furthermore, the Morse Glasses mobile application utilizes a sound module to facilitate the communication process by generating speech corresponding to the encoded sentences/alphabets.

Table 3 A comparison of Morse Glasses to the AAC device introduced in [16]

6 Conclusions

Morse Glasses is a cost-efficient wearable device for users with speech impairments based on IoT infrastructure and Morse code. The designed system has three basic components: a wearable glass, a controlling circuit, and a mobile application. These three components interact to support the communication of people with disabilities such as ALS patients. The patient's eye blinks are translated, using our modified Morse code chart, into alphabets/sentences whose corresponding sound can be heard through our mobile application. Morse Glasses is a simple, flexible system that has no specific hardware requirements or customized configurations: the patient needs only to wear a Morse glass, and his partner in the communication process needs only to hold a phone or any Android-supported device. We designed a Morse code for some phrases frequently used by users with speech impairments, and the list can be extended to include additional phrases depending on their custom needs and disabilities. A future improvement of our system is to include a machine learning module to predict the patient's speech, provide suggestions based on initial phrases, and auto-complete the generated sentences. The Morse Glasses controlling circuit can also be upgraded with more sensors and actuators to extend the system's functionality, supporting the communication of patients with different types of disabilities and allowing them to control various home appliances, cars, etc. Furthermore, we are planning to extend the functionality of our mobile application to allow different types of device control over Bluetooth and Wi-Fi connections. This will help people with disabilities to live a normal life, reduce their dependency on others, and increase their inclusion and productivity in their working environments.