
1 Introduction

Computer technology plays an increasing role in allowing people with severe disabilities to live fuller lives. For over 20 years our lab at Boston College has been developing assistive technologies that allow people with no voluntary control of their hands or legs and no ability to speak to communicate through eye movements or head movements. These technologies initially required a desktop computer. Assistive technologies based on desktop computers were a major boon for people with disabilities, but they required the person to use the technology wherever the computer was located. With the development of powerful notebook computers and then tablet computers the technologies became mobile: technologies that run on notebooks and tablets can accompany a person with disabilities, for example on a wheelchair. Now, with the advent of Google Glass and other wearables, assistive technologies can be with the person always; the person can literally wear the assistive technology. Google Glass may or may not continue as a viable product, but some head-mounted wearable products will succeed. We hope that the systems described here will be forerunners of assistive technologies for head-mounted wearables in the future.

2 Past Work

At Boston College we developed two assistive technologies, EagleEyes and Camera Mouse, before the work on Google Glass.

2.1 EagleEyes

EagleEyes [1–3] allows people with the most severe physical disabilities, people who cannot move their arms, legs, or heads and cannot speak, to control the computer just through eye movements. Almost all of the people using EagleEyes are children or young adults. Most were completely locked in, with no way to communicate before using EagleEyes. Many were thought to have no mental life, no intelligence [4, 5].

Some of the people using EagleEyes have genetic disorders such as Rett Syndrome, Trisomy 18, or Spinal Muscular Atrophy Type 1. Some have disabilities present from birth, such as Cerebral Palsy. Some suffered drowning accidents or strokes. For most, EagleEyes provides their first means to communicate and to show their intelligence.

EagleEyes works through five electrodes placed on the head, around the eyes. EagleEyes measures the vertical and horizontal electrooculogram (EOG), an electrical signal that is proportional to the angle of the eyes in the head. The EOG signals are amplified, filtered, digitized, and used to control the mouse pointer on the screen. Selection is done using dwell time; holding the mouse pointer above a spot on the screen for half a second causes EagleEyes to issue a mouse click. EagleEyes is a general mouse replacement system that allows a person to control the mouse pointer on the screen just with eye movements.
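
The dwell-time mechanism is simple enough to capture in a few lines. The sketch below is our own illustration of the idea, not EagleEyes source code; the 15-pixel tolerance radius is an assumed value, while the half-second threshold comes from the description above.

```java
// Minimal dwell-time selection sketch (illustrative, not EagleEyes source code).
// A click fires once the pointer has stayed within a small radius for 500 ms.
public class DwellClicker {
    private static final long DWELL_MS = 500;   // half-second dwell, per the text
    private static final double RADIUS = 15.0;  // assumed tolerance, in pixels

    private double anchorX, anchorY;  // where the current dwell started
    private long anchorTime = -1;     // ms timestamp of dwell start
    private boolean clicked = false;  // ensures one click per dwell

    /** Call with every new pointer position; returns true when a click fires. */
    public boolean update(double x, double y, long nowMs) {
        if (anchorTime < 0 || Math.hypot(x - anchorX, y - anchorY) > RADIUS) {
            anchorX = x;              // pointer moved away: restart the dwell timer
            anchorY = y;
            anchorTime = nowMs;
            clicked = false;
            return false;
        }
        if (!clicked && nowMs - anchorTime >= DWELL_MS) {
            clicked = true;
            return true;              // issue exactly one click for this dwell
        }
        return false;
    }
}
```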

Since the EOG indicates the angle of the eye in the head, people with voluntary head movement can control the mouse pointer with head movements alone or with a coordinated combination of head movements and eye movements.

EagleEyes is manufactured and distributed by the Opportunity Foundation of America (OFOA) under license from Boston College. Currently the OFOA distributes about two EagleEyes systems per week to families and schools.

2.2 Camera Mouse

Camera Mouse [6, 7] allows people with disabilities to control the computer just with head movements. Camera Mouse uses the built-in camera in a Windows computer or any standard USB webcam to track a location on the head, for example the corner of an eyebrow, specified by the user or selected by the program. As the person moves the head, the mouse pointer moves accordingly: move your head to the right and the mouse pointer moves to the right, and so on. As with EagleEyes, selection is done using dwell time. Camera Mouse is a general mouse replacement system, so it works with any standard Windows software.
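
To make the translational mapping concrete, here is a minimal sketch in the spirit of Camera Mouse (our illustration; the class and gain parameters are hypothetical, and the visual tracking algorithm itself is not shown). The pointer is displaced in proportion to how far the tracked feature has moved from its starting position in the camera image.

```java
// Sketch of mapping a tracked head feature to the mouse pointer (illustrative;
// not Camera Mouse source code). A separate visual tracker supplies the
// (featureX, featureY) position of, say, the corner of an eyebrow each frame.
public class HeadPointerMapper {
    private final double gainX, gainY;  // pointer pixels per pixel of feature motion
    private double originX, originY;    // feature position when tracking started

    public HeadPointerMapper(double gainX, double gainY) {
        this.gainX = gainX;
        this.gainY = gainY;
    }

    /** Record the feature's starting position to anchor the mapping. */
    public void reset(double featureX, double featureY) {
        originX = featureX;
        originY = featureY;
    }

    /** Map the current feature position to a pointer position on screen. */
    public double[] toPointer(double featureX, double featureY,
                              double screenCenterX, double screenCenterY) {
        double dx = featureX - originX;  // translational head movement in the image
        double dy = featureY - originY;
        return new double[] { screenCenterX + gainX * dx,
                              screenCenterY + gainY * dy };
    }
}
```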

Camera Mouse is used by many adults with disabilities as well as children. Camera Mouse is available for free download at cameramouse.org. Currently there are over 1,000 downloads per day. Since the program was first made available on the website in 2007 there have been over 2,700,000 downloads.

3 Google Glass

Glass is a lightweight, head-mounted system first made available to the public in the Explorer Edition in April 2013. See Fig. 1. Glass is a complete wearable computer system that features a 640 × 360 display, voice control, a touchpad, a 5-megapixel camera, a speaker, Wi-Fi, Bluetooth, a 3-axis gyroscope, and a 3-axis accelerometer. Our software makes use of the display, the gyroscope, and the speaker. The display is the equivalent of a 25-inch high-definition screen viewed from eight feet away.
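
Because Glass runs Android, an app can subscribe to the gyroscope through the standard Android sensor API. The following is a minimal sketch of that subscription (our illustration, not Noggin code); note that which gyroscope axis corresponds to head yaw or pitch depends on how the device is worn.

```java
// Minimal sketch of reading the gyroscope on an Android device such as Glass.
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class GyroActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
    }

    @Override
    protected void onResume() {
        super.onResume();
        Sensor gyro = sensorManager.getDefaultSensor(Sensor.TYPE_GYROSCOPE);
        sensorManager.registerListener(this, gyro, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this);  // stop sensing when not in use
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Angular velocities in rad/s around the device's x, y, and z axes.
        // Which axis maps to head yaw or pitch depends on how Glass is worn.
        float rateX = event.values[0];
        float rateY = event.values[1];
        float rateZ = event.values[2];
        // A head-pointer app would feed these rates into an integrator
        // like the one sketched in the Noggin section below.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```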

Fig. 1. Google Glass, Explorer Edition

On January 15, 2015 Google halted sales of Glass to the general public, while promising that improved versions would be available in the future. Several companies have head-mounted display systems available or in development. There are two types of head-mounted display systems: closed systems, in which the user sees only computer-generated images, and see-through systems like Glass, in which the user sees the computer-generated image in front of the real-world image. The techniques developed in our software are applicable to both types of head-mounted systems.

4 Noggin

Noggin is a program for Glass developed by Muhan Zhang in the summer of 2014. Noggin allows the user to move a yellow circle “mouse pointer” on the screen with head movements. The screen displays Yes, No, and Enter buttons, and the user answers yes-or-no questions by selecting an answer, which Noggin then speaks aloud. See Fig. 2. Noggin provided a proof of concept that the gyroscope can be used to sense head movements and move a mouse pointer with reasonable accuracy.
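
Our understanding of the underlying technique is that the gyroscope's angular velocities are integrated over time and scaled by a gain to produce pointer displacements. The sketch below illustrates that idea; the class, the gain, and the axis conventions are our assumptions, not Noggin's actual implementation.

```java
// Illustrative sketch: integrating gyroscope rates into pointer movement.
// Not Noggin source code; axis signs may need flipping for a worn device.
public class GyroPointer {
    private static final float NS_TO_S = 1.0e-9f;  // sensor timestamps are in ns
    private final float gain;       // screen pixels per radian of head rotation
    private long lastTimestampNs = -1;
    private float pointerX, pointerY;

    public GyroPointer(float gain, float startX, float startY) {
        this.gain = gain;
        this.pointerX = startX;
        this.pointerY = startY;
    }

    /** Feed each gyroscope event: rates in rad/s, timestamp in nanoseconds. */
    public void onGyro(float pitchRate, float yawRate, long timestampNs) {
        if (lastTimestampNs > 0) {
            float dt = (timestampNs - lastTimestampNs) * NS_TO_S;
            pointerX += gain * yawRate * dt;    // turn head left/right -> horizontal
            pointerY += gain * pitchRate * dt;  // nod up/down -> vertical
        }
        lastTimestampNs = timestampNs;
    }

    public float getX() { return pointerX; }
    public float getY() { return pointerY; }
}
```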

Fig. 2. Noggin screen. The yellow circle is the “mouse pointer” (color figure online)

Noggin for Google Glass is developed in Java using the Glass Development Kit (GDK), an additional layer of abstraction on top of the standard Android SDK. The app's graphics rely heavily on the AndEngine game engine library. The advantages of this engine are that it is powerful, lightweight, actively developed by a Zynga employee, and widely used in the Android mobile gaming world. The major disadvantage is its lack of documentation.

5 Glass Gab

Glass Gab is a program for Glass developed by Deirdre Anderson in the fall of 2014 and winter of 2015. Glass Gab allows the user to spell out a message using an onscreen keyboard. See Fig. 3. Glass Gab makes use of the code base developed for Noggin.

Fig. 3. Glass Gab screen. The user has spelled out BOSTON COLLEGE

As the user rotates his or her head, the program senses the rotation through changes in the gyroscope readings and moves the yellow circle “mouse pointer” accordingly. If the circle is kept above a letter for one second, the letter is selected and added to the end of the message. In addition to the letter keys, there are space, delete, clear, and speak keys.
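
The special keys amount to simple edits of a message buffer, with the speak key handing the message to a speech synthesizer. A hedged sketch follows, assuming Android's standard TextToSpeech API is used for speech output; the class and key names here are ours, not Glass Gab's.

```java
// Illustrative sketch of the Glass Gab key actions (names are ours).
import android.speech.tts.TextToSpeech;

public class MessageKeys {
    private final StringBuilder message = new StringBuilder();
    private final TextToSpeech tts;  // constructed elsewhere with a Context

    public MessageKeys(TextToSpeech tts) {
        this.tts = tts;
    }

    /** Called after the pointer has dwelled on a key for one second. */
    public void onKeySelected(String key) {
        if ("SPACE".equals(key)) {
            message.append(' ');
        } else if ("DELETE".equals(key)) {
            if (message.length() > 0) {
                message.deleteCharAt(message.length() - 1);
            }
        } else if ("CLEAR".equals(key)) {
            message.setLength(0);
        } else if ("SPEAK".equals(key)) {
            // QUEUE_FLUSH interrupts anything currently being spoken.
            tts.speak(message.toString(), TextToSpeech.QUEUE_FLUSH, null);
        } else {
            message.append(key);     // an ordinary letter key
        }
    }

    public String getMessage() { return message.toString(); }
}
```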

Initially we were concerned that the user might not be able to point accurately enough to select among 30 targets, but after some practice it works well. It took Deirdre 38 s to spell out the 14-character message BOSTON COLLEGE; of course, 14 of the 38 s were required by the dwell times to select the letters. There is more experimentation we could do to improve the mapping between gyroscope changes and pointer movement, as sketched below.
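
One natural direction for that experimentation is a nonlinear transfer function, similar in spirit to the pointer acceleration used by desktop mice: low gain at slow head speeds for precise dwelling, higher gain at fast speeds so large pointer moves need less neck motion. The sketch below is purely our illustration of such a curve; none of its parameters come from Noggin or Glass Gab.

```java
// Illustrative nonlinear gain curve for gyro-to-pointer mapping (our idea,
// not taken from Noggin or Glass Gab). The curve rises smoothly from
// baseGain at slow speeds toward baseGain * (1 + boost) at fast speeds.
public final class PointerGain {
    private PointerGain() { }

    /**
     * @param rate     angular velocity in rad/s from the gyroscope
     * @param baseGain pixels per radian at slow speeds
     * @param boost    extra gain factor approached at high speeds
     * @param knee     speed (rad/s) at which half the boost is applied
     */
    public static float scaledRate(float rate, float baseGain, float boost, float knee) {
        float speed = Math.abs(rate);
        float accel = 1.0f + boost * speed / (speed + knee);  // smooth, saturating
        return baseGain * accel * rate;
    }
}
```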

6 Discussion

Three generations of devices allow a person to control the mouse pointer on the screen with head movements. EagleEyes senses head movements, actually changes in the angle of the eye in the head, through electrodes. Camera Mouse senses head movements with a visual tracking algorithm using a webcam. Noggin and Glass Gab sense head movements through changes in the readings of a gyroscope. Camera Mouse uses translational movements of the head, while Noggin and Glass Gab use rotational movements.

Noggin and Glass Gab point the way toward assistive technologies for future versions of Google Glass and other head-mounted displays.