Social and ethical concerns raised by neurorobotics overlap with those raised by traditional robots and, more generally, by other novel technologies that affect people’s everyday activities. However, neurorobotics can also generate novel concerns, whose novelty originates in bio-inspired robotics and biomechatronics. That is the case, for instance, with the bi-directional translation that characterises neurorobotics: it links brain research to information technology, designing computational structures for robots inspired by the study of the nervous systems of humans and animals, and using the study of bio-inspired robots to improve our understanding of the human brain. As a result, the ethical and social concerns raised by traditional robotics can acquire a new meaning and philosophical depth. Neurorobotics aims to endow machines with greater awareness, capacity for abstraction, and reasoning; in short, with a greater degree of intelligence - a quality we have come to associate with humanity. It does so by attempting to emulate the mechanisms that underpin intelligence in humans, for which the nervous system provides a physical manifestation. What does this say about the relative condition of man and machine? Do they exist along a common spectrum? Is this a rational pursuit? Alternatively, is the notion that we may analyse, understand and reproduce what makes us human rooted in something other than reason?
In addition, as neurorobotics aims to improve the capabilities of existing robotic systems and, as a consequence, to extend the uses of robotics and automation technology, some existing concerns may be exacerbated. This is particularly the case in application areas that stand to benefit directly from advances brought about by neurorobotics, such as those discussed earlier.
Responsible Research and Innovation in the HBP
This section provides a brief introduction to the way in which broader ethical and social questions are being addressed in the HBP. This overview helps to contextualise this article, which arose from such work in the project. The HBP brings together a large number of people from a broad range of disciplines to work on several potentially contentious and sensitive topics, ranging from animal experimentation to the possibility of conscious machines (Rose 2014; Lim 2014). It therefore incorporated an Ethics and Society sub-project from its inception. This Ethics and Society Programme adopts the principles and practices of Responsible Research and Innovation (RRI). RRI is a key concept in research and innovation governance which, in the European framework programme Horizon 2020, concentrated on ethics, governance, open access, science education, public engagement and gender equality (European Commission 2013). It is based on the attempt to render research and innovation acceptable, socially desirable and sustainable (von Schomberg 2011). The UK Engineering and Physical Sciences Research Council has established the so-called AREA framework for RRI, an acronym for anticipation, reflection, engagement, and action. This AREA framework is implemented in the HBP (Stahl et al. 2019).
In practice, the RRI programme of the HBP is implemented by four work packages that focus on technology foresight (Aicardi et al. 2018), philosophical and neuroethical reflection (Salles et al. 2019), public engagement, and ethics support (Stahl et al. 2016). Ethics Support uses an approach of ethics dialogues to implement RRI (Stahl et al. 2019). As part of the ethics-related work of the RRI programme, the scholars in the programme collaborate with colleagues from the scientific and technical disciplines to identify and address ethical challenges. This article is the result of an engagement that brought together researchers in neurorobotics with members of the RRI group.
The following discussion draws on the work of the RRI group, with particular emphasis on several of the topics identified above. The subsections focus on issues that are well researched and widely discussed in general, but which take on new urgency in neurorobotics: dual use, collaboration between academia and industry, and data governance.
Dual Use
As the discussion above suggests, misuse or harmful use is one of the ethical concerns affecting neurorobotics in particular and neurotechnology in general (see also Ienca et al. 2018). General dual-use and misuse issues in neurorobotics include, amongst others, potential future military applications of mobile robotics. More specifically within the HBP, concerns have been raised about potential dual use and misuse of neurorobotics platform tools. The HBP Ethics and Society division’s approach to addressing these issues is set out in its “Opinion on ‘Responsible Dual Use’: Political, Security, Intelligence and Military Research of Concern in Neuroscience and Neurotechnology” (Ethics & Society 2018). This approach to dual-use issues is broader than the one set out in the EU Horizon 2020 policies (European Commission 2019). In line with the overall principles and practices of RRI in the HBP described above, the responsible dual-use approach applies the AREA framework to facilitate anticipation, reflection, engagement and action, and is implemented through ethics dialogues (Ulnicane 2020). Moreover, it draws on the concept of ‘dual-use research of concern’ (DURC) that the US government applies to prevent the malicious application of life science research.
As with all research projects funded under EU Horizon 2020, the HBP must have an exclusive focus on civil applications. According to EU policy, “dual-use items are normally used for civilian purposes but may have military applications, or may contribute to the proliferation of weapons of mass destruction” (European Commission 2019, p. 33). The Horizon 2020 policy does not rule out dual-use research, i.e. ‘the development of generic technologies, products or knowledge that may meet the needs of both civil and military end-users (known as ‘dual-use’ goods or technologies), provided that the research itself has a clear focus on civil applications’ (European Commission 2019, p. 35).
The work of the HBP Ethics and Society division (Ethics & Society 2018) has identified several limitations and ambiguities in the EU approach. First, it is based on a civil–military use dichotomy, which fits an outdated, historical understanding of the dual-use concept (see e.g. Molas-Gallart 1997) that over time has been expanded to refer to research and technology that can be used for both beneficial and harmful purposes in a broader sense (e.g. Ienca et al. 2018; Oltmann 2015). To highlight this broader range of potentially beneficial and harmful uses of neuroscience, HBP partners have explored other types of applications, for example political, security and intelligence applications (Mahfoud et al. 2018). Second, the EU definitions of dual use quoted above (based on the EU export control regulations) focus on items, goods, and technologies. However, in a science project such as the HBP, much research and development is still at an early stage (which is also the case for neurorobotics), i.e. well before potential uses have crystallised and the export of goods, items and technologies can be considered. The RRI approach pursued in the HBP facilitates early anticipation of and reflection on potential uses, and helps to develop ways to support responsible use and avoid misuse.
Thus, the responsible dual-use approach developed in the HBP has a strong focus on involving scientists and engineers early on in anticipation and reflection on potential uses of their research as well as on engaging stakeholders and policy-makers. Responsibility in this context refers to
processes and practices within research and development systems, and the extent to which they encourage or constrain capacity of all those involved in the management and operation of research to reflect upon, anticipate and consider the potential social and ethical implications of their research, to encourage open discussion of these, with a view to ensuring that their research and development does indeed contribute to the health and well-being of citizens, and to peace and security. (Ethics & Society 2018, p. 9)
Moreover, according to the Ethics and Society division’s Opinion, the introduction of RRI can enable clarification of what might count as Dual Use Research of Concern.
Academic Research–Industry Partnerships
As outlined earlier, neurorobotics is emerging at the confluence of neuroscience and robotics, with the potential for a wide array of industrial, medical and healthcare applications. It therefore attracts interest in prospective partnerships between public research and industry. In the European Union and European countries, cooperation and the transfer of knowledge and technology between public research and industry are seen as key to delivering research and innovation for economic growth and societal impact. For all their expected benefits, these partnerships face recurring barriers that can have problematic consequences and that raise significant ethical and social concerns, reflecting broader issues common to research areas with high application potential such as neurorobotics.
Orientation-related barriers to relations between academic research and industry are well identified (divergences in goals, time horizons, working practices, expectations and incentives), yet transaction-related barriers are also important and appear much more difficult to mitigate. The latter relate particularly to intellectual property (IP) management and administrative procedures (e.g. overselling of research and unrealistic expectations by Technology Transfer Offices, financial or regulatory conflicts over patents and other IP rights) (Bruneel et al. 2010). Rather counter-intuitively, policies forcefully pushing universities to commercialise research may actually have increased transaction-related barriers (Bruneel et al. 2010; Hall et al. 2014). Questions surrounding IP may be particularly pertinent for neurorobotics: for instance, should it be possible to patent a control structure generic to the human brain, or even specific to one human brain? Such questions tie into essential ethical discussions such as gene patenting. These questions, in particular the potential increase of transaction-related barriers induced by forceful commercialisation strategies, are moreover very pertinent for the HBP itself, and thus for neurorobotics as a research programme requiring the kind of large interdisciplinary collaborative organisation exemplified by the HBP.
Bruneel et al. (2010) identify the following factors as beneficial to cooperation between academic research and industry: inter-organisational trust, experience of collaboration, and the breadth of interaction channels. An important way to encourage these is to think about partnerships in much broader terms than strict technology transfer: collaborative research (e.g., informal interactions, institutional agreements, research consultancy), knowledge transfer (e.g., hiring of researchers and recent graduates, cooperative education, secondments), and research support (e.g., endowments, shared facilities) (Bruneel et al. 2010; Mascarenhas et al. 2018; Perkmann et al. 2013).
Finally, collaboration between academic research and industry raises significant ethical and social concerns in itself. Research independence and integrity are high priorities, as is avoiding conflicts of interest, professional misconduct, and double binds (Evans and Packham 2003). There is also a concern regarding the privatisation of common goods: valuable research outcomes may benefit only the interests of a few, while support for the research process as a whole, including its inevitable failures, is mutualised across European citizens through their taxes (Rose et al. 2015). Another issue is the little-investigated impact that academic researchers’ external engagements can have on their academic missions of research and teaching (Perkmann et al. 2013). Neurorobotics’ overarching goal of bridging neuroscience and robotics carries the real risk that the field may be pulled in different directions by the interests of neuroscientific research and robotics applications. Compounded by the disproportionately large investment capacity of big industrial players compared to public research funding bodies, this raises the concern that foreseen applicative benefits may start to weigh on the research agenda of neurorobotics to the detriment of neuroscience. The pull of promissory innovation and socio-economic benefits could even lead neuroscientific research to privilege its translational potential for neurorobotics applications over other objectives.
Data Governance
A final set of concerns relates to data governance. Data governance is essential to responsible innovation in the increasingly global context of neuroscience and ICT research, particularly given the importance of big data analytics for the general advancement of neuroscience. Neurorobotics is no exception, and the innate complexity of these endeavours presents specific challenges. There are potentially competing calls for openness and transparency (Salerno et al. 2017) on the one hand, and concerns regarding data security (Siddiqa et al. 2016), privacy and data protection on the other. In addition to negotiating this tension, data governance processes must attempt to prepare for unpredictable futures and address issues of intellectual property, authorship, and contributor credit, since ownership of data is not always straightforward and may be shared or contested. Neurorobotics also raises specific data governance challenges. The approaches characteristic of the field (linking simulated “brains” with physical sensors, working toward novel control technologies, and combining experimental design with simulation) generate a complex and overlapping data lifecycle. This matters for developing appropriate data governance strategies, because such strategies are contingent on the phases of the data lifecycle, and distinct ethical issues arise at different stages (Fothergill et al. 2019). In the longer term, this complexity could create unforeseen vulnerabilities or security lapses and may increase the risk of a loss of data fidelity or misallocation of metadata, which could result in a data breach or threaten replicability. Such concerns, coupled with the disciplinary diversity within neurorobotics, necessitate a data governance approach that is fully grounded in a theoretical awareness of the need for integrated ethical frameworks within innovation processes.