Neural Engineering pp 87-151
Brain–computer interfaces are a new technology that could help to restore useful function to people severely disabled by a wide variety of devastating neuromuscular disorders and to enhance function in healthy individuals. The first demonstrations of brain–computer interface (BCI) technology occurred in the 1960s, when Grey Walter used the scalp-recorded electroencephalogram (EEG) to control a slide projector in 1964, and when Eberhard Fetz taught monkeys to control a meter needle (and thereby earn food rewards) by changing the firing rate of a single cortical neuron [2, 3]. In the 1970s, Jacques Vidal developed a system that used the scalp-recorded visual evoked potential (VEP) over the visual cortex to determine the eye-gaze direction (i.e., the visual fixation point) in humans, and thus to determine the direction in which a person wanted to move a computer cursor [4, 5]. At that time, Vidal coined the term "brain–computer interface."

From then into the early 1990s, BCI research studies continued to appear only every few years. In 1980, Elbert et al. showed that people could learn to control slow cortical potentials (SCPs) in scalp-recorded EEG activity and could use that control to adjust the vertical position of a rocket image moving across a TV screen. In 1988, Farwell and Donchin reported that people could use scalp-recorded P300 event-related potentials (ERPs) to spell words on a computer screen. Wolpaw and his colleagues trained people to control the amplitude of mu and beta rhythms (i.e., sensorimotor rhythms) in the EEG recorded over the sensorimotor cortex, and showed that subjects could use this control to move a computer cursor rapidly and accurately in one or two dimensions [8, 9].