Brain-computer interfaces (BCIs) enable one to control peripheral ICT and robotic devices by processing brain activity on-line. The potential usefulness of BCI systems, initially demonstrated in rehabilitation medicine, is now being explored in education, entertainment, intensive workflow monitoring, security, and training. Ethical issues arising in connection with these investigations are triaged taking into account the technological imminence and pervasiveness of BCI technologies. By focusing on imminent technological developments, ethical reflection is informatively grounded in realistic protocols of brain-to-computer communication. In particular, it is argued that human-machine adaptation and shared control distinctively shape autonomy and responsibility issues in current BCI interaction environments. Novel personhood issues are identified and analyzed as well. These notably concern (i) the "sub-personal" use of human beings in BCI-enabled cooperative problem solving, and (ii) the pro-active protection of personal identity which BCI rehabilitation therapies may afford, in the light of so-called motor theories of thinking, for the benefit of patients affected by severe motor disabilities.
Imminence is regarded as a chief dimension for neuro-ethical triage in .
Accordingly, the broad philosophical context of transhumanism, in the framework of which issues of cyborg identity, rights, and responsibilities are often examined, is hardly relevant here.
The distinction between personal and sub-personal levels of explanation in psychology was introduced in . "It is only on the personal level that explanations proceed in terms of the needs, desires, intentions and beliefs of an actor in the environment," p. 164. In connection with the explanation of pain states, Dennett remarks: "Since the introduction of unanalysable mental qualities leads to a premature end to explanation, we may decide that such introduction is wrong, and look for alternative modes of explanation. If we do this we must abandon the explanatory level of people and their sensations and activities and turn to the sub-personal level of brains and events in the nervous system," p. 93, emphasis mine. For a more recent analysis of this distinction, see .
So-called input BCIs, which are chiefly based on invasive transduction technologies, fall outside the scope of this paper. Input BCIs establish computer-to-brain communication by collecting, processing, and transmitting to the brain signals that are produced by a source external to the human body.
See , for a more detailed description of these functional components.
The possibility of deploying on-line learning methods to deal with BCI learning problems is analyzed in .
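To illustrate what an on-line learning method amounts to in this setting, here is a minimal, generic sketch (not the specific method of the cited work): a logistic-regression classifier whose weights are updated after each labeled trial, as a BCI system might adapt its decoder during use. The feature vectors and labels below are synthetic stand-ins for EEG features and intended commands.

```python
import math
import random

def sgd_logistic_update(w, x, y, lr=0.1):
    """One on-line (stochastic) gradient step for logistic regression.
    w: weight vector, x: feature vector, y: label in {0, 1}."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    p = 1.0 / (1.0 + math.exp(-z))  # predicted probability of class 1
    # Move weights toward reducing the log-loss on this single trial.
    return [wi + lr * (y - p) * xi for wi, xi in zip(w, x)]

# Toy trial stream: two hypothetical 2-D "EEG feature" clusters,
# class 1 centered at (1, -1) and class 0 at (-1, 1).
random.seed(0)
w = [0.0, 0.0]
for _ in range(500):
    y = random.randint(0, 1)
    x = [random.gauss(2 * y - 1, 0.5), random.gauss(1 - 2 * y, 0.5)]
    w = sgd_logistic_update(w, x, y)
```

Because each update uses only the current trial, the classifier can track slow drifts in the user's brain signals, which is the practical motivation for on-line learning in BCI contexts.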
For an introduction to individualistic and relational conceptions of autonomy, see . The promotion of autonomy of locked-in patients afforded by BCI systems is appropriately emphasized in , pp. 127-129. On more general grounds, however, one should carefully note, as Hansson does, that “subordination to technology will probably become an increasingly serious problem as enabling technology is developed that exhibits more and more intelligent behavior” [38, 264].
The Charter of Fundamental Rights of the European Union, art. 26, states: “The Union recognizes and respects the right of persons with disabilities to benefit from measures designed to ensure their independence, social and occupational integration and participation in the life of the community”.
It is not clear, however, that the most appropriate liability ascription policies for brain-actuated robots in the near future will be those based on economically oriented criteria, in view of the free exchange of technological resources which has become standard practice within the BCI research community: "The non-invasive BCI community overcame commercial temptations with the BCI 2000 website allowing laboratories worldwide access to the necessary hard- and software." [12, p. 482].
This issue is examined in the light of the distinction between negative and positive rights in . Moreover, Fenton and Alpert discuss possible enhancement effects in LIS subjects deriving from the use of BCI systems in the light of extended mind theories in the philosophy of mind, according to which BCI-controlled peripheral devices may enable one to augment neural structures for cognitive processing [11, p. 127].
http://www.darpa.mil/dso/thrusts/trainhu/nia/index.htm (site visited on February 12, 2009).
The authors of this study go so far as to claim that "...the presence of reproducible and task-dependent responses to command without the need for any practice or training suggests a method by which some non-communicative patients, including those diagnosed as vegetative,... may be able to use their residual cognitive capabilities to communicate their thoughts to those around them by modulating their own neural activity," p. 1402.
Rather than as a paralyzing maxim which uniformly blocks the use of technologies if one cannot exclude undesirable consequences with absolute scientific certainty. This construal of the precautionary principle is arguably incoherent, insofar as it is oblivious to the fact that scientific theories and models are inherently fallible.
Early ethical reflection on affective computing is found in .
This scenario reminds one of the psychoanalytic variation of the know thyself maxim that Freud set out as a main goal for psychoanalytic interactions, that is, "…to strengthen the ego, to make it more independent of the superego, to widen its field of perception and enlarge its organization, so that it can appropriate fresh portions of the id. Where id was, there ego shall be. It is a work of culture, not unlike the draining of the Zuider Zee." [42, p. 80 of the English translation].
Birbaumer, N., N. Ghanayim, T. Hinterberger, B. Kotchoubey, A. Kuebler, J. Perelmouter, E. Taub, and H. Flor. 1999. A spelling device for the paralyzed. Nature 398:297–298.
Hochberg, L.R., M.D. Serruya, G.M. Friehs, J.A. Mukand, M. Saleh, A.H. Caplan, A. Branner, D. Chen, R.D. Penn, and J.P. Donoghue. 2006. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442:164–171.
Birbaumer, N. 2006a. Breaking the silence: Brain-computer interfaces for communication and motor control. Psychophysiology 43:517–532.
Wolpaw, J.R., N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan. 2002. Brain-computer interfaces for communication and control. Clinical Neurophysiology 113:767–791.
Millán, J. del R., F. Renkens, J. Mouriño, and W. Gerstner. 2004. Brain-actuated interaction. Artificial Intelligence 159:241–259.
Galán, F., M. Nuttin, E. Lew, P.W. Ferrez, G. Vanacker, J. Philips, and J. del. R. Millán. 2008. A brain-actuated wheelchair: Asynchronous and non-invasive brain-computer interfaces for continuous control of robots. Clinical Neurophysiology 119:2159–2169.
Friedman, D., R. Leeb, L. Dikovsky, M. Reiner, G. Pfurtscheller, and M. Slater. 2007. Controlling a virtual body by thought in a highly immersive virtual environment. In Proceedings of GRAPP 2007, Barcelona, Spain, 83–90.
Nijholt, A., D. Tan, B. Allison, J. del R. Millán, and B. Graimann. 2008. Brain-computer interfaces for HCI and games. In Proceedings of CHI’08, ACM, 3225–3228.
Gerson, A.D., L.C. Parra, and P. Sajda. 2006. Cortically coupled computer vision for rapid image search. IEEE Transactions on Neural Systems and Rehabilitation Engineering 14(2):174–179.
Yahud, S., and N.A. Abu Osman. 2007. Prosthetic hand for the brain-computer interface system. IFMBE Proceedings 15:643–646. Springer, Berlin.
Fenton, A., and S. Alpert. 2008. Extending our view on using BCIs for locked-in syndrome. Neuroethics 1:119–132.
Birbaumer, N. 2006b. Brain-computer interface research: Coming of age. Clinical Neurophysiology 117:479–483.
Buxton, R.B. 2002. An introduction to functional magnetic resonance imaging: Principles and techniques. Cambridge: Cambridge University Press.
Linderman, M.D., G. Santhanam, C.T. Kemere, V. Gilja, S. O’Driscoll, B.M. Yu, A. Afshar, S.I. Ryu, K.V. Shenoy, and T.H. Meng. 2008. Signal processing challenges for neural prostheses: A review of state-of-the-art systems. IEEE Signal Processing Magazine 25(1):18–28.
Millán, J. del R. 2004. On the need for on-line learning in brain-computer interfaces. In Proceedings of the International Joint Conference on Neural Networks.
Vapnik, V. 2000. The nature of statistical learning theory. 2nd ed. New York: Springer.
Reath, A. 1999. Autonomy, ethical. In Routledge encyclopedia of philosophy, ed. E. Craig. London: Routledge.
MacKay, D. 2003. Information theory, inference, and learning algorithms. Cambridge: Cambridge University Press.
Arkin, R. 1998. Behavior-based robotics. Cambridge: MIT Press.
Nehmzow, U. 2006. Scientific methods in mobile robotics. London: Springer.
Matthias, A. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6:175–183.
Miall, R.C., and D.M. Wolpert. 1996. Forward models for physiological motor control. Neural Networks 9:1265–1279.
Kawato, M. 1999. Internal models for motor control and trajectory planning. Current Opinion in Neurobiology 9:718–727.
Bufalari, S., F. Cincotti, F. Babiloni, L. Giuliani, M.G. Marciani, and D. Mattia. 2007. EEG patterns during motor imagery based volitional control of a brain computer interface. International Journal of Electromagnetism 9:214–219.
Dennett, D. 1969. Content and consciousness. London: Routledge & Kegan Paul.
Hornsby, J. 2000. Personal and sub-personal: A defence of Dennett’s original distinction. In New essays on psychological explanation, special issue of Philosophical Explorations, eds. M. Elton and J. Bermúdez, 6–24.
Kanizsa, G. 1955. Margini quasi-percettivi in campi con stimolazione omogenea. Rivista di Psicologia 49:7–30.
Philiastides, M.G., and P. Sajda. 2006. Temporal characterization of the neural correlates of perceptual decision making in the human brain. Cerebral Cortex 16:509–518.
Owen, A.M., M.R. Coleman, M. Boly, M.H. Davis, S. Laureys, and J.D. Pickard. 2006. Detecting awareness in the vegetative state. Science 313:1402.
Kant, I. 1983. Grounding for the Metaphysics of Morals. In Kant’s Ethical Philosophy, ed. J.W. Ellington. Indianapolis: Hackett.
Millán, J. del R. 2007. Tapping the mind or resonating minds? In European visions for the knowledge age, a quest for new horizon in the information society, ed. P.T. Kidd, 125–132. Macclesfield: Cheshire Henbury.
Farah, M.J. 2002. Emerging ethical issues in neuroscience. Nature Neuroscience 5:1123–1129.
Nordmann, A. 2007. If and then: A critique of speculative nanoethics. Nanoethics 1:31–46.
Warwick, K. 2003. Cyborg morals, cyborg values, cyborg ethics. Ethics and Information Technology 5:131–137.
Tamburrini, G. 2006. Artificial intelligence and Popper’s solution to the problem of induction. In Karl Popper: A centenary assessment. Metaphysics and epistemology, vol. 2, eds. I. Jarvie, K. Milford, and D. Miller, 265–284. London: Ashgate.
Santoro, M., D. Marino, and G. Tamburrini. 2008. Robots interacting with humans. From epistemic risk to responsibility. Artificial Intelligence and Society 22:301–314.
Christman, J. 2003. Autonomy in moral and political philosophy. Stanford encyclopedia of philosophy, http://plato.stanford.edu/entries/autonomy-moral/
Hansson, S.O. 2007. The ethics of enabling technology. Cambridge Quarterly of Healthcare Ethics 16:257–267.
Merkel, R., G. Boer, J. Fegert, T. Galert, D. Hartmann, B. Nuttin, and S. Rosahl. 2007. Intervening in the Brain. Changing psyche and society. Berlin: Springer.
Lucivero, F., and G. Tamburrini. 2008. Ethical monitoring of brain-machine interfaces, A note on personal identity and autonomy. AI and Society 22:449–460.
Reynolds, C., and R.W. Picard. 2004. Affective sensors, privacy, and ethical contracts. In Proceedings of CHI’04, ACM, 1103–1106.
Freud, S. 1933. New introductory lectures on psycho-analysis. The standard edition of the complete psychological works of Sigmund Freud, vol. 22, 1–182. London: Hogarth.
I wish to thank an anonymous reviewer, Giuseppe Trautteur, Federica Lucivero, and Giovanni Boniolo for helpful and stimulating comments. I benefited from discussions on BCI systems and ethics with Febo Cincotti, Edoardo Datteri, José del R. Millán, Donatella Mattia, Stefano Rodotà, and Matteo Santoro.
Tamburrini, G. Brain to Computer Communication: Ethical Perspectives on Interaction Models. Neuroethics 2, 137–149 (2009). https://doi.org/10.1007/s12152-009-9040-1
- Brain-computer interfaces
- BCI communication protocol
- Personal identity persistence
- Human-machine cooperative problem solving
- Sub-personal psychology