1 Introduction

The introduction of new technologies always requires us to re-evaluate our moral concepts, meanings and values [1]. For example, if we consider the increasingly autonomous technologies already in development, such as social robots or service robots used in domains as diverse as education, health, quality of life, entertainment, and communication [2], we can see how they effectively challenge moral concepts like responsibility and autonomy. In the ethics of technology, there is now a branch of studies focusing on ‘value dynamism’ or ‘value change’, which holds that technologies often change or shape the value frameworks people use to evaluate them, and introduce new values that were not considered relevant, or were unforeseen, at the design stage [3]. For example, the birth control pill has changed our frameworks for sexuality, and technologies like Google Glass have had an impact on the meaning of privacy [4].

Moreover, the ethics and the “normative criticality” of robotics have been the object of lively and divisive contemporary debates, the latter term referring to the intrinsic features of these systems that make them capable of raising public issues and of challenging the democratic quality of decision-making processes [5]. These debates have ranged from the fields of philosophy and international legal theory, to studies of Ethical, Legal and Social Aspects (ELSA) and, more recently, Responsible Research and Innovation (RRI) related to new science and technology. RRI is an approach to science and innovation governance developed in recent years in the European Union policy environment, which aims to align research and innovation activities with beneficial societal goals and needs by introducing participatory methods and tools into the work of science and innovation actors [6].

Theorists of RRI view responsibility as a positive opportunity to innovate for the common good, to promote positive moral obligations, and to embed ethical contributions in the development of new technologies [7]. However, over the last few years, the ethical dimension in the academic debate surrounding RRI has remained vague and has lacked methods to put its theoretical insights into practice [8]. Many have pointed out that RRI has failed to engage citizens in decision-making regarding technology, although they also suggest that the moral notion of responsible innovation advanced by this paradigm aims to be both normatively critical and practically relevant and, thus, deserves to be further articulated [9]. In like manner, social robotics as an emerging field urgently requires reflection on the incorporation of ethical evaluations into its vision, and on novel responsible frameworks that can contribute to social benefits and reduce or eliminate social robots’ undesired and risky impacts on society.

The contribution of the paper is twofold: to bridge the gap between the theoretical assumptions of responsible innovation and their realisation in practice, and to explicitly integrate social robotics innovation with an ethical dimension that can be operationalised in empirical tools and methods. To do so, I critically discuss the Collingridge Dilemma, also known as the dilemma of control, and recent efforts in Science and Technology Studies to address it. I show how these efforts have neglected some wider implications of the dilemma, implications that are crucial for advancing the understanding of ethical theory and practice in social robotics.

The paper is structured as follows. In the first section, I briefly describe the main points of the Collingridge Dilemma, which has been considered a dilemma about the uncertainty of technological systems and innovation. In the second section, I explore the advantages and drawbacks of different strategies that have been advanced in Science and Technology Studies over recent years to address the dilemma, namely: techno-moral scenarios, socio-technical experimentation, and mediation theory. I argue that these solutions to the dilemma have failed to provide an overarching normative and empirical strategy for the ethical and responsible development of emerging technologies. In particular, I concentrate on research gaps related to inclusiveness, value pluralism, full life-cycle monitoring of technologies, the introduction of ethical considerations at the early stages of technological development, and the meta-governance of innovation through the creation of new standards and responsible frameworks. I show how these dimensions are crucial for advancing the research, design and development of social robotics as an emerging field, and for dealing with the problem of control posed by the Collingridge Dilemma. To address these challenges, in the last section of the paper I investigate two guiding principles that have been identified in the responsible innovation literature, i.e., inclusion and responsiveness. I demonstrate how these two principles can be operationalised to ground a more comprehensive approach to the issue of control in social robotics, one that allows a critical examination of how social robotic platforms can be embedded in human social practices and aligned with societal values and standards.

2 The Collingridge Dilemma

The ‘dilemma of control’ is a term first coined by Collingridge with regard to technology assessment and policy strategies, and described as follows:

The social consequences of a technology cannot be predicted early in the life of the technology. By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economic and social fabric that its control is extremely difficult. This is the dilemma of control. When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming. [10, p.11]

The so-called ‘Collingridge Dilemma’ has two distinct aspects: the first concerns the lack of available information at the earlier stages of technological development, and the impossibility of exactly predicting technologies’ possible future impacts; the second refers to a problem of power. This latter aspect states that changing or controlling technologies becomes a complex and difficult matter at the later stages of their development, since by that time technologies have become societally embedded, and influencing their innovation trajectory and regulation might come into conflict with other social, economic and political factors.

To put it differently, when the effects of technologies are still invisible and unknown, it is difficult to intervene on them socially because of a lack of information. Likewise, when the effects of technologies are visible and known, it is equally difficult to intervene on them socially, because their processes are no longer easily manageable, due to power relations that make them socially entrenched [10].

Different approaches have been developed in Science and Technology Studies over recent years to address the implications of the Collingridge Dilemma. The common ground between these approaches lies in their attempt to grasp the possible future impacts of technology and to provide means and modalities to influence and guide them within a responsible and informed framework. In the following section I discuss these different solutions to the Collingridge Dilemma, showing how their strategies, although not exempt from shortcomings, have had the merit of emphasising the need for frameworks that can anticipate, regulate, and understand the uncertainty and direction of technological innovation. I also examine how this point is especially relevant to social robotics innovation, given the urgency of developing and invigorating effective strategies for dealing with the ethical and societal dimensions of this emerging field.

3 Addressing the Dilemma. Strategies and Shortcomings

Among the solutions advanced in Science and Technology Studies to resolve the dilemma of control, some have proposed the use of ‘techno-moral scenarios’ (hereafter, TMSs). TMSs draw on empirical qualitative methods and aim to anticipate the impact of technologies by looking at possible and problematic practices that can become the object of societal discussion and modification [11]. Promoters of TMSs have had the merit of stressing the importance of an empirically informed investigation, provided by scenario descriptions and narrations about patterns or recurring tropes and types regarding technologies [12]. However, they have failed to adopt a truly inclusive approach: even if the discussion of scenarios is open to different stakeholders, their creation does not amount to a participatory process of co-creation and remains mainly a task performed by researchers, without the presence of the public or other institutional stakeholders, who often enter the picture only later, as recognised in the TMS literature itself [13]. Moreover, even if TMS promoters have recognised a plurality of moral standards that continuously evolve over time, in their work ‘moral regimes’ and ‘accountability regimes’ – i.e., the set of values and norms that a specific community considers very important, along with the institutionalised rules that hold accountable all the social actors involved in science and innovation – are often conceived as contingent on the situations under analysis, and lead to different paradigms of resolution (e.g., risk, fault, and so on) [13,14,15]. Compatibility, combination, and possible trade-offs between these different paradigms are not sufficiently addressed in this framework, and the task of redefining and reconstructing new standards is undertheorised, since a predominant role is given to an explanatory goal that seeks to predict future scenarios based on knowledge of the past and the present [13].

The question of value pluralism and the creation of new standards is especially relevant in the case of social robotics. For example, the introduction of social robotic platforms into the workplace, and how this affects work-related processes, including how knowledge, status hierarchies, and roles evolve over time, is a highly debated topic that requires investigating social robots in relation to individual, team and organisational practices and norms [16]. Several studies have shown how engagement with a social robot application in hospitals, such as a pharmaceutical-dispensing robot, telepresence robots, or surgical robots, differs on the basis of role configurations, and can lead to diverse experiences on the part of technicians, assistants, trainees, nurses, and residents, whose perspectives on well-being and organisational performance must all be taken into account to enhance rather than inhibit new organisational needs and structures [17,18,19].

Other solutions advanced to resolve the dilemma of control have promoted the idea of regulation instead of anticipation and speculative scenarios, and have implemented the strategy of ‘socio-technical experimentation’ (hereafter, STE) [20, 21]. All theoretical strands involved in STE share the idea of ‘society as a laboratory to experiment with technologies’ [22, p. 69]. For STE promoters, the gradual and experimental introduction of technology into society can be assessed through constant monitoring and learning from such socio-experiments, relying on Collingridge’s emphasis on trial-and-error learning, incremental decision-making and flexibility [23].

However, this excessive emphasis on experimentalism in social life introduces some risks. STE promoters consider ethics to be experimental, i.e., a hypothesis to be tested à la Dewey [24, p. 239; 21], and the social experiment paradigm for technologies is primarily discussed in terms of safety and risk assessment [25]. Even if theorists of STE have advanced a set of conditions under which a social experiment with technologies could be called responsible [22], such a set remains explorative and useful for testing current practices in the light of new circumstances, but does not provide justificatory or explanatory criteria that may orient and ensure future responsible forms of experimentation. STE promoters consider responsibility as forward-looking, and often relegate it to a matter of ‘democratisation of science’ [26] and, as such, as involving primarily or exclusively scientific and engineering fields, individual researchers [27], owners of technologies [28], and scientists and engineers [29], without involving a wider network of social actors. On the contrary, pointing out normative criteria for inclusive and responsible co-design techniques involving end-users and experts together can be an important dimension of social robotics innovation, as highlighted in recent studies that are trying to understand not merely the interactions with the robots already available, but what makes them ‘social’ [30, 31].

Finally, another strategy recently advanced to address the dilemma of control is mediation theory, whose promoters claim that it can be a valid alternative to the lack of empirical analysis in TMSs and the lack of anticipation in STE [4, p. 294]. From a post-phenomenological perspective, mediation theorists aim to study innovation trajectories as “interactions from within”: from the perspective of the experience, practices, and interpretations of human agents that are co-shaped in the act of mediation with technologies [32]. Although mediation theorists are right in affirming that normative frameworks co-evolve in interaction and complex interplay with technological introductions, they seem to reduce the analysis of such dynamics to the actuality of experiences and practices around technologies, and to ‘contingent’ normative evaluations on the part of users [4, p. 301]. In this sense, their strategies lack an overarching ameliorative and normative framework that can identify and explain why certain configurations of social reality may be defective or socially unjust. For example, mediation theorists are not able to justify why and to what extent legal or corporate formulations of the conception of privacy surrounding new technologies might be dominant or just [4, p. 310]. Moreover, mediation analysis is often based ‘at the threshold of society’ [4], that is, at the earlier stages of technological development. However, an explicit focus on explorer and test versions of technologies is often infeasible. Current research has recognised that the early stages of technology development are the stages with the greatest uncertainty, the least available data, and a relative scarcity or lack of methods and tools to identify the characteristics of emerging technologies and the dynamics of the market context in which they are introduced [33]. As Collingridge put it, at this stage there is a greater potential to influence outcomes and direction, but it is difficult to identify which questions about emerging technologies we can reasonably answer and which criteria need critical assessment. More research is needed here, aiming to foster interdisciplinary inquiry into the level of technological maturity, the level of maturity of the market (and, more generally, of society), and the full life-cycle monitoring of technologies [34]. Besides this, an overarching analysis that aims to address the issues posed by the Collingridge Dilemma cannot neglect the analysis of technologies already in place, which have become entangled in society and embedded in social dynamics in modalities that might be hidden or implicit, i.e., not easy to detect and question.

4 Responsible Social Robotics. Operationalising Inclusion and Responsiveness

Predictions indicate that robots will become an integral and essential part of our social environments, and they are already taking over tasks in many social domains [35]. The development of robotic skills that make robots able to understand the social context, respect social norms and recognise social structures is fundamental for ensuring control in human–robot interaction practices, but to date the question of how social robots can comply with social reality in meaningful and societally desirable ways is still highly debated [36]. In particular, social robotics exacerbates the Collingridge Dilemma and, according to scholars, encounters a tripartite problem: a description problem, related to the lack of long-term studies and theories on the effects, peculiarity, and novelty of social robots for individuals and communities; and evaluation and regulation problems, since the lack, limited scope, or incompleteness of research results from the descriptive phase also hampers the research-based evaluation of the potential harms and benefits of social robotics, as well as the potential regulatory strategies and recommendations of policy makers and legislators [37]. Recent surveys of social robotics work have also reported that the current state of the art has not yet established a set of benchmarks to define and develop models for social robots with high levels of acceptability and interaction, and that future research requires multidisciplinary approaches to meet this challenge [38].

As explored in the previous section of the paper, the implications neglected by recent solutions to the Collingridge Dilemma concern a number of questions related to pluralism, inclusiveness, the introduction of societal and ethical considerations at the early stages of development of technological products, and the necessity of considering not merely the products and processes of innovation, but the meta-governance of innovation, namely the values, norms and principles that shape or underpin policy action and the governance of technologies. One important point to note is that, by considering these neglected implications of the dilemma, important considerations can also be drawn for the theory and practice of responsible innovation, from the need to develop and invigorate qualities of openness and diversity of methods, to the desideratum of improving flexibility and societal accountability in decision-making processes surrounding innovation [39].

This discourse is especially relevant in the case of social robots, which are designed to interact with individuals and groups in collaborative settings. In this last section, I reflect on the relevance of the neglected implications of the Collingridge Dilemma for research, design and development in social robotics. I explore two guiding principles that have been identified in the responsible innovation literature, i.e., inclusion and responsiveness, and potential methods to put these principles into practice and operationalise them in the field of social robotics. My aim is to suggest a more comprehensive perspective that widens the inclusiveness, transdisciplinarity, and social and ethical sustainability of social robotics as an emerging field.

4.1 Inclusion

One relevant research question is how users and stakeholders are defined, and by whom. This is not a new concern, since feminist and social construction of technology approaches have already engaged with the question of the co-construction of technology and users, the diversity and often recalcitrant nature of users, and the types of power relations between them [40]. Bruno Latour and studies in Actor-Network Theory have also explained that the power of design has implications that go beyond the immediate use and function of products, reaching socio-political consequences for society [41]. However, these theories are often highly general and theoretical, as recognised by scholars engaging in social robotics research [42]. To properly address this issue and the effective inclusion of users and stakeholders in innovation processes surrounding social robotics, there is a need to shift from a user-centred approach to a “society-centred approach” [43], one able not only to reflect explicitly on the complex question of users’ experience, but also to reposition robot design within larger ecological systems, and to incorporate social and institutional dimensions and other actors such as direct or indirect stakeholders, experts, philosophers, sociologists, policy makers, and laypeople.

In this scenario, one prominent principle advanced in the responsible innovation literature may be useful: inclusion. Inclusion has been defined as the practice of opening up questions and decisions concerning innovation to inclusive dialogue, engaging relevant stakeholders at the early stages of innovation processes [6], but many definitions have been advanced in the literature [44], of which ‘engage stakeholders’ and ‘collaborate interdisciplinarily’ are arguably the most recurrent in responsible innovation practices [45]. Recently, studies on inclusion as a pivotal dimension of responsible innovation have claimed that the meaning of this principle must not be conflated merely with the direct inclusion of social actors, but must also include the balancing role of framing and selecting co-evolution processes, by stimulating innovation actors to reflect on the ethics of setting diverse and possibly contrasting values and new priorities [46].

Many user-centred design (UCD) approaches have been proposed and applied in socially assistive robotics, and common methods used by robotics researchers and teams have included qualitative methods such as questionnaires, focus groups, and tests in labs and other settings, through which users can evaluate the acceptability and usability of robotic platforms [47]. However, to properly assess robot design and users’ experience in a highly transdisciplinary and holistic way, there is still a need to implement iterative methods that advance the active and participatory role of users and other formal or informal societal actors. These can include not merely ex post feedback, as in the case of questionnaires in UCD methodologies, but, for example, the collaborative discussion and co-creation of solutions for robot design in the early stages of development, as some recent projects on assistive robots have demonstrated (e.g., the case of ASTRO, a robot supporting the elderly in walking and other physical activities, whose design has been the result of iterative co-creation sessions with end-users and formal and informal caregivers [48]). Methods for the participatory design of autonomous social robots contribute to updating robot behaviours in response to the evolution of real-world social dynamics as they emerge, with in-situ and interactive approaches involving domain experts, non-roboticists, and robotic platforms [49]. The results of studies on educational social robots have also suggested that co-design provides an accessible context through which key stakeholders (students, parents, teachers) can inform and ensure design requirements and preferences and, overall, contribute to the implementation of a culturally responsive robot [50]. Research in social robotics can also be enriched by the study of how the characteristics of users (age, gender, education, robot familiarity, mood) and the inclusion of family and friends, in combination with the attributes of the robot, can affect and influence social robot acceptance, since these are key variables to monitor in situations where the robotic platform deals with vulnerable people over long and continued periods of time [51].
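
To make the contrast between ex post feedback and iterative co-creation more concrete, the sketch below outlines, in schematic Python, how a design team might record co-design rounds and carry stakeholder input forward into revised design requirements. It is a purely illustrative sketch under my own assumptions: the class and field names (CoDesignSession, DesignRequirement, and so on) are hypothetical and do not correspond to any existing toolkit or to the actual methodology of the ASTRO project or the studies cited above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Stakeholder:
    role: str          # e.g., "end-user", "informal caregiver", "nurse", "teacher"
    context: str       # setting in which the robot will be used

@dataclass
class DesignRequirement:
    description: str
    raised_by: List[str] = field(default_factory=list)   # roles that motivated it
    status: str = "proposed"                              # proposed / accepted / revised / dropped

@dataclass
class CoDesignSession:
    iteration: int
    participants: List[Stakeholder]
    feedback: List[str]

def update_requirements(reqs: List[DesignRequirement],
                        session: CoDesignSession) -> List[DesignRequirement]:
    """Fold one co-design session back into the requirement list.

    In a real process this step is deliberative, not automatic; here each
    piece of feedback simply becomes a candidate requirement attributed to
    the roles present in the session.
    """
    roles = [p.role for p in session.participants]
    for item in session.feedback:
        reqs.append(DesignRequirement(description=item, raised_by=roles))
    return reqs

# Illustrative use: two early iterations with mixed stakeholder groups.
requirements: List[DesignRequirement] = []
sessions = [
    CoDesignSession(1, [Stakeholder("end-user", "home"),
                        Stakeholder("informal caregiver", "home")],
                    ["robot should signal clearly before moving"]),
    CoDesignSession(2, [Stakeholder("nurse", "care facility")],
                    ["walking-support speed must be adjustable per user"]),
]
for s in sessions:
    requirements = update_requirements(requirements, s)
```

The point of the sketch is simply that requirements remain traceable to the stakeholder roles that raised them across iterations, rather than being collected once through an ex post questionnaire.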

The re-conceptualisation of UCD methodologies that I am proposing here also implies finding methods to clarify who all the stakeholders are, including stakeholders in larger societal contexts beyond users [52]. One interesting example of this theoretical move is the proposal for Integrative Social Robotics (ISR), a novel paradigm that goes beyond a user-centred perspective and suggests five principles to be implemented in research, design and development processes in social robotics: to target social interactions, and not robots per se, in design processes, since this opens up a research space for looking at wider socio-cultural changes; to promote the interdisciplinary composition of research teams; to include in the stakeholder group not only the persons and institutions of the immediate application context but also society at large, with its normative and evaluative practices; to include continuous and comprehensive short-term regulatory feedback loops and contextual (spatial, temporal, institutional) factors; and to promote value-driven analysis (empirical and conceptual) throughout social robotics processes [42, 53]. In this paradigm, the transition towards a real transdisciplinary integration in social robotics requires an interdependence between mixed methods from Social Sciences and Humanities and from Science and Engineering research, since this can lead to a greater understanding of the different types of experienced sociality with robots [54]. A transition towards transdisciplinary integration was also at the centre of the Science for Robotics and Robotics for Science vision, according to which robotics has in recent years evolved from a mostly technological to an increasingly scientific field, which includes and cross-fertilises many different research applications and domains, from the Social Sciences and Humanities to many others [55]. Along those lines, recent discussions on robotics initiatives have explored how the logics and capabilities necessary to perform research in such a field require a ‘structural preparedness for interdisciplinarity’, a co-construction and negotiation with a diverse network of societal partners, such as practitioners, users, researchers and collaborators from other disciplines, governmental and organisational institutions and structures, and funding agencies [56].

The methods and paradigms discussed above can be considered modalities for putting into practice and pursuing ‘inclusion’ in social robotics in both of these senses, since they seek to integrate a broader spectrum of stakeholders as active participants in robot design and development, and to foster transdisciplinary collaborations. They are certainly not exhaustive, and various other methods that incorporate the diversification of expertise and perspectives in the practices of innovation actors may be applicable in the case of social robotics. However, the opening up to diverse voices is not exhausted by the principle of inclusion, but extends to the shifting of priorities in response to social change, which is the core aspect of another responsible innovation principle, i.e., responsiveness.

4.2 Responsiveness

Inclusion is not the only principle worth emphasising in the responsible innovation literature. Overall, RRI is directed at orienting science and innovation towards socially desirable and acceptable outcomes, via dynamic and inclusive processes that intervene before the technological lock-in Collingridge was concerned with [57]. One of RRI’s distinctive principles is responsiveness, and the language of responsibility has permeated the discourse of responsible innovation scholars and of European policies in recent years [58]. Beyond the inclusion principle, which is associated with more efficient mechanisms for integrating different views and perspectives, responsiveness has a fundamental role in ‘alignment work’ [59], i.e., societal actors are aligned and arranged in such a way that they are dependent on one another, and make decisions and conduct debates in more deliberate ways about the shared values, norms and principles that shape policy action on technology and innovation [6]. Responsiveness, in particular, demands that such alignment be embedded in particular institutional contexts and always adjusted to the possible multiple dimensions and tensions of RRI [6]. Responsiveness therefore deals with emerging societal perspectives, visions and norms inside innovation processes [60]. In a nutshell, the responsiveness principle holds that technological development should be responsive to the values and needs of society, and should adjust and change the direction of innovation based on new reconfigurations of the latter along the way [6]. However, despite the work of scholars and of scientific and engineering communities, RRI activities have often remained a separate and self-referential activity, without appropriate processes for citizen and stakeholder engagement, and without appropriate social responsibility mechanisms [61]. In a similar way, it is crucial to understand and implement governance and regulatory mechanisms in modalities that support and include societal values, and that further extend the RRI framework to adequately address grand societal challenges, such as climate change, social justice and many others, and to place an emphasis on the social justification and foundation of policies on technologies [62].

Finding novel modalities for enhancing the principle of inclusion, by dealing with the complex questions of users’ experience and transdisciplinary integration, is not the only strategy for implementing responsibility in innovation processes. I claim that theory and practice in social robotics must also direct attention to the responsiveness principle and, in particular, to value-driven analysis, which has been one of the tentative solutions advanced to counteract the uncertainty of innovation processes, as laid out in the Collingridge Dilemma. The issue of whether technologies can bear and possess values has sparked much controversy, and one can generally distinguish between opponents of value embedding, who defend value neutrality, and promoters of value embedding [63]. The value-neutrality thesis is widely criticised and rejected by philosophers of technology with different backgrounds, since most of them hold that technologies are not merely neutral instruments or media, but value-laden artefacts. In this sense, a famous example is Winner’s: the low overpasses designed over the Long Island parkways prevented and restricted the access of African Americans, who largely depended on buses, and thereby embodied a precise political significance and a racist value, and can be the object of societal and political evaluation [64]. A similar example can be made for the design of socially assistive robots which, although they can promote more effective care through the use of machine learning techniques for tailored recommendations, can also raise several concerns about privacy, fairness, and the social practice of storing, archiving, collecting, and monitoring data concerning vulnerable groups like the elderly [65].

Approaches that sustain a value-driven analysis of technologies have two fundamental merits. Firstly, by exploring the relationship between technical artefacts and moral values, they continue and build on the work of ethicists of technology such as Winner and on the Value Sensitive Design (VSD) or Design for Values (DfV) paradigm of recent years, which has become paramount in the philosophy of technology and is often advocated in the RRI literature as an approach to emerging technologies comprising different theoretical and empirical methodologies and applications [66, 67]. The VSD paradigm claims that, to have a real and meaningful impact on society, ethical reflections on technological systems should come before their development and shape systems in the design phase, where design requirements for systems can be advanced, in order to proactively address the issue of embedding robots and other technologies in society [68]. The methodology adopted by VSD involves a tripartite process: a conceptual investigation, which identifies direct and indirect stakeholders and the values at stake in the design of a technology; an empirical investigation, using qualitative or quantitative methods, which examines stakeholders’ ‘understandings, contexts, and experiences’; and finally, a technical investigation, which is concerned with the specific features and architecture of new or existing systems [66]. VSD illustrates how responsible innovation can and could operate in action, concentrating on two aspects that can be useful for innovation actors: collecting as much knowledge as possible on the consequences of innovation processes in iterative and long-term modalities, and evaluating the societal and ethical values related to those processes [69, 70]. These two aspects may arguably seek to overcome the Collingridge Dilemma and, specifically, the description and evaluation problems in social robotics already mentioned, by reducing the uncertainty and unpredictability in the early stages of social robotic platform development, and by proactively informing types of robot design to attain ethically desirable ends.
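
As a schematic illustration of how the three VSD investigations could be kept explicit and traceable within a social robotics project, the sketch below records which values surface in the conceptual investigation, which stakeholders they concern, and which technical design requirements are meant to address them. This is a simplified reading of the tripartite process under my own assumptions; the structures and names shown are hypothetical and are not part of any standard VSD tooling.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ConceptualInvestigation:
    direct_stakeholders: List[str]
    indirect_stakeholders: List[str]
    values_at_stake: List[str]          # e.g., privacy, autonomy, trust

@dataclass
class EmpiricalFinding:
    stakeholder: str
    value: str
    observation: str                    # from interviews, questionnaires, field studies

@dataclass
class TechnicalRequirement:
    value: str                          # the value this requirement operationalises
    requirement: str                    # feature or architectural constraint

def coverage(conceptual: ConceptualInvestigation,
             technical: List[TechnicalRequirement]) -> Dict[str, bool]:
    """Check which conceptually identified values are addressed by at least
    one technical requirement -- a crude traceability check, nothing more."""
    addressed = {t.value for t in technical}
    return {v: v in addressed for v in conceptual.values_at_stake}

# Illustrative use for an assistive robot in elderly care.
conceptual = ConceptualInvestigation(
    direct_stakeholders=["older adults", "caregivers"],
    indirect_stakeholders=["family members", "care institutions"],
    values_at_stake=["privacy", "autonomy", "trust"],
)
technical = [
    TechnicalRequirement("privacy", "store interaction data locally, not in the cloud"),
    TechnicalRequirement("autonomy", "user can override or pause robot suggestions"),
]
print(coverage(conceptual, technical))   # 'trust' remains unaddressed in this toy example
```

Such a traceability check is deliberately minimal: it cannot settle conflicts between values or distinguish values from preferences, which is precisely the kind of limitation discussed next.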

However, as these approaches to values rely primarily on theoretical and conceptual frameworks, empirical theories and methodologies that implement, assess and verify in practice the idea that robotic systems respect ‘certain values’ in their design and deployment are still scarce. There is an ongoing debate on what counts as an empirically informed study in the ethics of technology [71] and, in particular, value-sensitive fieldwork is considered complex due to the implicit, abstract, and conflicting nature of values, which are often affected by broader institutional and sociological forces [72]. In paradigms like VSD, it often remains unspecified which theory of values should be adopted, and what distinguishes values from mere preferences [73]. To address this situation, many approaches in recent years have tried to integrate an explicit normative foundation into VSD, referring to values specific to care ethics in the design of assistive robotics [74], or to higher-order sources of values that can be respected or promoted, such as the United Nations Sustainable Development Goals (SDGs) [75]. Recently, other criticisms have arisen concerning the incompleteness of VSD approaches, including the lack of clear methods for power distribution and collaboration between individual VSD practitioners and VSD teams so as to include regulatory authorities or oversight institutions [76], or for the continuous monitoring and iteration of the three phases [77]. In sum, despite its success, one of the most pressing challenges for VSD projects is the development of best practices and measures to discover, frame, and define values both in theory and in practice [78].

Notwithstanding these criticisms related to the lack of normative criteria and of empirical methods for collaboration and monitoring, the approaches that promote value-driven analysis in technology design have another important merit. They initiate a research programme that conceives technologies as an integral part of human normative institutions and norms, in which many different actors and groups have to deal with the responsibility and accountability dimensions inherent in the deployment, use and regulation of socio-technical systems. The term ‘sociotechnical systems’ refers to sociotechnical systems design, conceived by Trist, Emery and others to denote the performance of work systems in which the behaviours of human actors co-evolve and relate interactionally with the operations of technology in order to deal with technological uncertainty [79]. An interactional understanding implies that the impact of technological systems on users is essentially shaped by the features of their design, the social context in which they are used and embedded, and the people and the multiple, different social institutions involved in their use [67]. An essential task in the development and regulation of technologies in socio-technical systems is the process of ‘re-design’, i.e., an ongoing activity that is sustained not only by designers themselves, but also by users and institutions at large, who can continuously reconfigure and monitor the way technologies are used and the values or properties attached to them [70]. I argue that this feature of ‘re-design’ is especially crucial in the case of social robotic platforms with autonomous and learning capabilities, which may evolve and acquire emergent properties and behaviours well beyond the designers’ intentions or motivations in the design phase. As robots become prominent actors in social settings, experimental studies need to unpack the potential they have to act as advisers in home environments or other occupations and sectors in the future, and to impact factors such as people’s trust, perceptions of competence, and intentions in real-life contexts [80].

Over and above a mere consideration of what is technologically feasible, sociotechnical systems that include social robotic platforms must be oriented towards the identification of what is socially and ethically sustainable, and towards the operationalisation of potential design requirements, organisational practices, and policy actions accordingly. In Science and Technology Studies scholarship, a key method that has been proposed is to implement practices of “making and doing”, in which STS scholars simultaneously look at the engagement of social actors and at ecological dimensions, and reflexively learn from them to formulate and develop novel strategies for the field [81]. Other attempts move exactly in that direction, such as meaningful human control (MHC) perspectives, whose goal is to identify, evaluate and proactively operationalise more responsible design requirements or other potential regulative strategies for socio-technical systems [82]. These and other strategies dealing with the responsiveness dimension can contribute to giving meaning to public values in human–robot interaction, and suggest modalities to safeguard them.

Ultimately, responsiveness as a principle has emerged mainly in terms of public acceptability, acknowledging the importance of taking up more responsible forms of research and innovation [45]. But the issue of responsiveness extends far beyond the dilemma put forward by David Collingridge on the challenge of foreseeing innovation and avoiding the irreversibility of technological lock-in, and includes a more relevant and contemporary dilemma, the “dilemma of societal alignment”, i.e., how to shape science, technology and innovation to ensure that their development processes are aligned with the values and needs of different publics [83, p. 318]. To address such a dilemma, one of the most effective strategies would be to adopt a wider perspective on responsibility in innovation processes surrounding social robotics, one that implies the implementation of strategies and methods for exercising control over important domains of life (health, education, work, and many others), a pluralism about values, and freedom from domination or oppression in the socio-relational sphere related to societal or political structures. The meaning of public values like transparency, accountability, and responsibility is often not clear, or even contested, in multi-stakeholder partnerships dealing with innovation and technologies [84]. In like manner, the term ‘social sustainability’ remains vague, since social issues are difficult to comprehend, and their consequences result from competing interests and values and intersecting social facts [85]. But notwithstanding these uncertainties, social sustainability and the maintenance of public values require a fully integrated process within communities that embraces plurality, cohesion, quality of life and work, equity of access to key infrastructures and services, and, last but not least, better democratic governance [86].

If we want to translate this discourse into the context of the dilemma for social robotic platforms, the use of public participation and engagement tools may help to ameliorate the complexity of science–society relationships, by tracking and integrating the values and needs of lay citizens, policy makers, businesses, associations, and other individuals or groups. Among the tools and strategies that strengthen science–society relationships, different proposals may be discussed and taken into consideration: the proposal for human rights, democracy and rule of law impact assessments by organisations, to provide societal feedback prior to the design and development of new AI and robotics systems [87]; strategies for implementing responsible innovation in business frameworks and start-ups, for example through the creation of codes of conduct or sustainability-oriented practices [88]; multi-level analyses of the acceptability of robots in social services, based on both individual and contextual variables, which can provide information for policy makers and organisations [89]; the proposal to extend the Technology Readiness Level (TRL) scale with three further scales of Legal, Organisational and Societal Readiness Levels, which may ensure cross-border and cross-domain interoperability of innovation projects [90]; methods to further define the situatedness and changing configurations of socio-technical systems, which explore micro- and macro-level understandings of values for designing social robots in a variety of social spheres [91]; and many others. The common ground between all these proposals can be summarised as the adoption of a social sustainability perspective, which serves to provide, change, and negotiate criteria during the phase of prototyping and experimentation of social robotic platforms. The challenge of responsiveness requires an ‘experiment and adapt’ approach that may involve real-world settings and, first and foremost, a monitoring aim, through the establishment of dedicated governance structures that address novel directions in innovation in a more deliberative way [92, pp. 68–69]. In sum, with the analysis of responsiveness as a principle, I wanted to initiate not merely a substantial reflection on the evident criticalities raised by the introduction of social robotics into the daily-life, professional, and social spheres of individuals and groups, but a discussion on how effective theoretical frameworks and empirical tools can translate this introduction into an opportunity for creating and sustaining public values.
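
To make one of the proposals listed above more concrete: the extension of the Technology Readiness Level scale with Legal, Organisational and Societal Readiness Levels [90] lends itself to a simple illustration, namely that an innovation project is only as ‘ready’ as its weakest dimension. The sketch below encodes that idea under my own assumptions; the nine-point range follows the TRL convention, but the field names and the gating rule are illustrative and not part of the cited proposal.

```python
from dataclasses import dataclass

@dataclass
class ReadinessAssessment:
    technology: int      # TRL, 1-9
    legal: int           # Legal Readiness Level, 1-9 (assumed scale)
    organisational: int  # Organisational Readiness Level, 1-9 (assumed scale)
    societal: int        # Societal Readiness Level, 1-9 (assumed scale)

    def overall(self) -> int:
        """Gate deployment decisions on the weakest dimension, so that a
        technically mature robot is not rolled out before its legal,
        organisational and societal conditions have caught up."""
        return min(self.technology, self.legal, self.organisational, self.societal)

# Illustrative use: a technically advanced platform with low societal readiness.
assessment = ReadinessAssessment(technology=8, legal=5, organisational=6, societal=3)
print(assessment.overall())   # 3 -> societal embedding is the limiting factor
```

Read in this way, the multi-scale proposal operationalises the monitoring aim of responsiveness: progress along the technological dimension alone does not move the project forward until the societal and institutional dimensions are re-assessed.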

5 Conclusions

Social robotics is a relatively young field, and its innovation processes have to meet the challenge of foreseeing and guiding the development of robotic platforms in socially desirable ways. In this paper I have shed light on the wider implications of the Collingridge Dilemma in Science and Technology Studies, a dilemma about the social control of innovation trajectories. I have demonstrated that a wider reflection on questions related to inclusiveness, flexibility, and accountability is still missing in the contemporary debates that aim to overcome the dilemma. Building on this premise, in the last section of the paper I have shown how social robotics deals with these neglected questions and exacerbates the dilemma. To address this point, I have suggested adopting and operationalising the two guiding principles of inclusion and responsiveness in social robotics research, design, and development, which may arguably allow for greater inclusiveness, transdisciplinarity, and social and ethical sustainability in this emerging field.