Engineering ethics in I-CPHS is a challenging issue. Although ethics considerations are discussed in the literature, when these intersect with industrial systems, especially considering safety aspects, several issues still need to be addressed if such systems are to operate successfully in society. In I-CPHS, humans take multiple roles, e.g., as operators, supervisors, or mere participants in such processes, and as such, they are significantly affected, as is the way they interact with, use, or are considered by such industrial systems. As I-CPHS will need to operate within society, enable human-to-human as well as human-to-machine interactions, and even collaborate with humans towards common goals, ethics is an emerging concern. It is, therefore, imperative to consider how ethics can be engineered in industrial systems and how this can be realized during their lifecycle, i.e., from design to development, operation, and even maintenance and disposal.
Despite some futuristic aspects considered in the presented use cases, both are to a high degree technologically feasible today and can be realized with commercial off-the-shelf (COTS) solutions, e.g., indoor geo-localization, augmented reality, intelligent and cooperative digital assets, exoskeletons, intelligent wearable systems and garments, ad-hoc IoT sensors (humidity, temperature, noise, gases, pollutants), video monitoring, etc. However, the integration of such technologies into I-CPHS needs to be done in a consistent way, to allow for certification of the I-CPHS, including its fuzzy parts that integrate ethical decision-making. Guaranteeing deterministic behavior of an I-CPHS is seen as challenging, as is the certification of its behaviors, which need to comply with the regulatory framework of the operational environment and adhere to the engineered ethical and societal constraints.
The two case studies presented here exemplify how ethical aspects interrelate with traditional control decisions for safety, and why the two must be addressed in combination in the context of intelligent autonomous I-CPHS. On the one hand, safety is of paramount importance; on the other hand, how safety is achieved, and according to which criteria, must also be in line with ethical and societal norms. Such efforts should, for instance, strive towards saving the maximum number of human lives irrespective of material and infrastructure destruction. The case studies clearly illustrate that it is worth paying attention to the justification of integrating ethical behaviors in I-CPHS during the design phase, so that they are included throughout the lifecycle, including testing, certification, operation, and decommissioning.
These case studies can be easily generalized, and demonstrate that if the designer does not take the opportunity to integrate some basic ethical behavioral mechanisms into the I-CPHS, the resulting consequences may be worse than if s/he does (84 people saved vs. 63 in the first case study, and 32 vs. 28 in the second). Such integration would also facilitate social acceptance (cf. the third factor in the introduction), since the I-CPHS will do “its best” to save as many lives as possible.
A challenging issue relates to the utility function that is assessed to identify the “most” ethical consequentialist-based behavior. In both example scenarios in this paper, a well-defined metric was used, i.e., the lives of people saved and the casualties, which constitute the utility function. As such, calculating the consequences via this utility function is trivial, as societal norms universally dictate that loss of life should be minimized in such hazardous scenarios and should prevail over material costs. However, in more complex situations, defining a utility function is difficult, and different cultures may not share a similar view on the same aspect. Even if they do, other questions arise, e.g., how far into the future such consequences should be calculated. Further studies are required, either on its calculation (e.g., the time horizon over which it is estimated, fine-grained modeling of classes of injuries, etc.) or on decision mechanisms (e.g., possible compensations), opening the debate on other well-known kinds of ethical dilemmas.
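To make this concrete, consider a minimal sketch (illustrative only, not the actual case-study implementation) in which the utility function is expressed as a lexicographic ordering: lives saved dominate casualties, which in turn dominate material losses. All names and figures below are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    lives_saved: int
    casualties: int
    material_loss: float  # monetary damage, a secondary criterion

def utility(o: Outcome) -> tuple:
    # Lexicographic ordering: tuples compare element by element, so lives
    # saved always dominate casualties, which dominate material losses.
    return (o.lives_saved, -o.casualties, -o.material_loss)

def most_ethical(alternatives: dict) -> str:
    # Pick the alternative whose simulated outcome maximizes the utility.
    return max(alternatives, key=lambda name: utility(alternatives[name]))

# Hypothetical outcomes, e.g., as produced by digital-twin simulations
alternatives = {
    "evacuation_plan_A": Outcome(lives_saved=84, casualties=3, material_loss=2.0e6),
    "evacuation_plan_B": Outcome(lives_saved=63, casualties=24, material_loss=0.5e6),
}
print(most_ethical(alternatives))  # -> evacuation_plan_A
```

Even this toy encoding exposes the open questions above: the ordering of criteria, the collapsing of injury classes into a single casualty count, and the time horizon over which outcomes are estimated are all design choices that different cultures and regulators may weigh differently.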
Another challenging issue raised is whether all potential alternatives can indeed be calculated, especially considering that I-CPHS need to operate in highly complex environments under uncertainty. This approach assumes that the designer cannot anticipate everything but does his/her best to imagine, design, and implement an ethical controller. Thus, sufficient flexibility must be available to handle unconsidered situations and states. The initial default ethical behaviors have been integrated to avoid spending an infinite amount of time trying to consider all the possibilities, and because each I-CPHS should have a basis upon which decisions can be made and evolve from that starting point. Finally, the coupling of deontological and consequentialist strategies may help designers manage, in the mid-term and in a realistic way (i.e., facing the unexpected, as stated by Valckenaers et al (2011)), the ethical risks of I-CPHS evolving jointly with humans.
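A minimal sketch of such a coupling, under the assumption that deontological rules act as hard constraints pruning inadmissible alternatives, a consequentialist utility ranks the remainder, and a default ethical behavior serves as fallback when no admissible alternative remains (all plan fields, rules, and figures are hypothetical):

```python
from typing import Callable

Plan = dict  # illustrative: a candidate action plan with its predicted outcome

def deontological_filter(plans: list, rules: list) -> list:
    # Hard constraints: prune any plan that violates a rule outright.
    return [p for p in plans if all(rule(p) for rule in rules)]

def decide(plans, rules, utility: Callable, default_plan: Plan) -> Plan:
    admissible = deontological_filter(plans, rules)
    if not admissible:
        # Unanticipated situation: revert to the default ethical behavior.
        return default_plan
    # Consequentialist step: rank the remaining plans by utility.
    return max(admissible, key=utility)

# Illustrative rules and utility
rules = [lambda p: not p["endangers_bystanders"]]
utility = lambda p: p["expected_lives_saved"]
plans = [
    {"name": "halt", "endangers_bystanders": False, "expected_lives_saved": 28},
    {"name": "reroute", "endangers_bystanders": False, "expected_lives_saved": 32},
    {"name": "speed_through", "endangers_bystanders": True, "expected_lives_saved": 35},
]
default_plan = {"name": "safe_stop", "expected_lives_saved": 0}
print(decide(plans, rules, utility, default_plan)["name"])  # -> reroute
```

The fallback branch is what keeps the controller operational when the designer's anticipation runs out: rather than searching an unbounded space of alternatives, the system reverts to a conservative default behavior.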
This approach has exemplified the different aspects of engineering ethics in I-CPHS, based on the usage of explicit rules that can be followed. This line of thinking stems from traditional control decisions in industrial systems when considering the tasks that such systems have to carry out. However, because of the issues already discussed, e.g., uncertainty and the infeasibility of calculating all possible alternatives in complex scenarios, we need to move beyond this paradigm. AI-fueled, and more specifically machine-learning-fueled, I-CPHS will not need explicit rules but rather goals defining what is acceptable or not, and they will attempt to maximize compliance with such goals via their own reasoning and exposure to operating environments. Therefore, investigating ethics in AI-fueled I-CPHS is seen as paramount, especially when it comes to complex industrial cases where, e.g., the safety of humans is affected.
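In such a goal-oriented paradigm, the designer specifies a scalar objective rather than explicit rules; a purely illustrative sketch of a reward signal for a learning-based I-CPHS (all field names and weights are assumptions) could look as follows:

```python
def ethical_reward(next_state: dict) -> float:
    # Goal-oriented specification: no explicit if/then rules, only a scalar
    # signal that a learning agent tries to maximize through its own
    # reasoning and exposure to the operating environment.
    reward = 0.0
    reward += 10.0 * next_state["people_evacuated"]   # saving lives dominates
    reward -= 100.0 * next_state["new_casualties"]    # casualties heavily penalized
    reward -= 0.001 * next_state["material_damage"]   # material cost is secondary
    return reward

print(ethical_reward({"people_evacuated": 5, "new_casualties": 0,
                      "material_damage": 1000.0}))  # -> 49.0
```

How the agent trades these terms off in unforeseen states is then learned rather than enumerated, which is precisely why verifying and certifying such behavior becomes paramount.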
Realizing proper digital twins that sufficiently and accurately capture the real world is another challenging issue. Digital twins can help to a degree, but an appropriate simulator of the environment is also needed, so that the possible consequences of actions can be realistically assessed; in an interacting complex infrastructure, this goes well beyond what digital twins can do. The creation of a realistic digital twin requires the integration of various behavioral and multi-physics models, including those of humans (and crowds), which may be complicated to realize. For instance, the two presented case studies rely on very simple digital twins. The simulations carried out should not be treated as literally accurate to real-world conditions, and realistic fire-propagation models were not the focus; the main goal was to show the need to address machine ethics in the context of I-CPHS, and the potential benefits of adopting a digital twin approach. For example, in the NetLogo simulation, humans are modeled as reactive agents: their behaviors are simple, purely reactive, and programmed using basic NetLogo instructions. Also, the fire-propagation model used is simple and lacks realism; however, there are significant scientific efforts on modeling fire propagation, and some fine-tuned discrete event simulators are now available, ready to be integrated into digital twins (Freire and DaCamara 2019). Easy integration of such disparate models and frameworks in digital twin simulations can enhance the quality of results and reassure designers about the feasibility of the application proposed in this approach, at least in the context of fire management.
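For illustration, a Python analogue of such a purely reactive agent (a sketch, not the actual NetLogo code; the grid encoding and cell labels are assumptions) shows how little deliberation is involved: the agent only perceives adjacent cells and reacts immediately:

```python
import random

def step(agent: dict, grid: dict) -> None:
    # Purely reactive behavior, analogous to the NetLogo agents in the case
    # study: no planning or memory, just local perception -> immediate action.
    x, y = agent["pos"]
    neighbors = [(x + dx, y + dy)
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0) and (x + dx, y + dy) in grid]
    safe = [p for p in neighbors if grid[p] != "fire"]
    if safe:
        # Prefer an exit among safe cells; otherwise move to a random safe cell.
        exits = [p for p in safe if grid[p] == "exit"]
        agent["pos"] = exits[0] if exits else random.choice(safe)
    # If no safe neighbor exists, the agent stays put (trapped).

# Minimal 1-D corridor: fire at (0,0), agent at (1,0), exit at (2,0)
grid = {(0, 0): "fire", (1, 0): "floor", (2, 0): "exit"}
agent = {"pos": (1, 0)}
step(agent, grid)
print(agent["pos"])  # -> (2, 0)
```

Replacing such toy behaviors with validated crowd and fire-propagation models is exactly the model-integration effort described above.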
Another challenging aspect for ethical I-CPHS involved in critical situations is that decisions need to be made in real time and continuously as the situation evolves. As a result, the simulation of all these models and behaviors must be done in short times to enable an accurate, fast, and up-to-date reaction of the I-CPHS. Training in advance on a vast number of potential situations, and utilizing transfer learning, may reduce this time, as the simulations do not start from scratch. However, as discussed, the complexity may vary, and this complexity may constitute a strong limitation when addressing machine ethics (Brundage 2014).
This work has made it evident that future intelligent autonomous I-CPHS will also heavily depend on the collaboration of humans and machines within the context of I-CPHS. As such, asymmetric solutions, e.g., handing over control to humans in critical situations, are not seen as efficient (except perhaps as a transitional measure), as they can be affected by the inefficiencies of human reaction in critical situations, the limited time to react, or even a false assessment of the situation. Similarly, computer-only solutions without proper consideration of the human element will probably lead to inefficiencies. The issue needs to be addressed in a holistic manner, and emphasis needs to be put on the cooperation of humans and CPS within the context of I-CPHS, so that potentially optimal results may be achieved. However, how this can be done is expected to be situation-specific, and is seen as future work.
Quantity and availability of appropriate data, especially when it pertains to humans, is another challenging issue. The collection of detailed data may infringe upon the privacy of humans. While in some cases this might be acceptable, e.g., in critical industrial environments, this might not be the general case, e.g., within a smart city. For instance, collecting the data needed to locate humans would also entail monitoring their interactions, which can be seen as a paradoxical situation: to be ethical, the approach requires detailed monitoring, putting at risk other ethical aspects (e.g., surveillance of worker location). In addition, compliance with legal frameworks such as the European General Data Protection Regulation (GDPR) will also need to be considered, as I-CPHS will operate within society. As such, privacy-preserving approaches need to be developed and considered, so that ethical decisions can be made even in the presence of such seemingly contradictory concerns.
Quality of data is paramount for informed decisions, and as such, to apply such approaches, major conceptual and technical issues need to be solved, even if some COTS technical solutions exist nowadays (e.g., geo-localization of people). Data of insufficient quality, or biased data, may lead to erroneous or biased decisions on the side of the ethical controller. For instance, in the discussed scenarios, the ethical controller may suggest wrong decisions because of a faulty sensor used by the digital twin, which could make the situation worse, e.g., limiting safety options via false guidelines to the personnel. Training and validating I-CPHS behaviors automatically on numerous ethical dilemmas (Benabbou et al 2020) is also needed, to investigate whether the I-CPHS exhibits consistent behavior.
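As one illustrative mitigation (a sketch under assumed thresholds, not part of the presented case studies), sensor readings can be gated by simple plausibility checks before the digital twin consumes them, so that a single faulty sensor is less likely to mislead the ethical controller:

```python
def plausible(reading: float, history: list,
              max_rate: float = 5.0,
              valid_range: tuple = (-20.0, 120.0)) -> bool:
    # Simple plausibility gate for sensor data feeding the digital twin:
    # reject out-of-range values and physically implausible jumps between
    # consecutive readings (thresholds here are assumed, e.g., for a
    # temperature sensor in degrees Celsius).
    lo, hi = valid_range
    if not lo <= reading <= hi:
        return False
    if history and abs(reading - history[-1]) > max_rate:
        return False
    return True

history = [21.0, 21.5, 22.0]
print(plausible(22.4, history))   # True: consistent temperature drift
print(plausible(250.0, history))  # False: out of range (likely sensor fault)
print(plausible(80.0, history))   # False: implausible jump from 22.0
```

More elaborate schemes (sensor fusion, redundancy, anomaly detection) follow the same principle: never let a single unvalidated data point drive a safety-critical ethical decision.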
In this work, we have examined I-CPHS from a standalone point of view, where a single system needs to make decisions. Such decisions are carried out in a centralized manner and assume all conditions are met so that a decision can be made. However, we need to generalize and expand this line of thought towards systems of I-CPHS, as even specific domains, e.g., manufacturing (Colombo et al 2013), are moving towards them. In a system of I-CPHS, complexity increases, as several challenges arise. For instance, data is not owned by a single I-CPHS, but is federated and needs to be made available to the specific I-CPHS taking a decision in its local context. Also, the issue of local vs. global optimal ethical decisions is raised. In complex scenarios, decisions taken by one I-CPHS may influence the parameters used by another I-CPHS to decide on its actions, and as such, an interplay of such aspects emerges. Often, I-CPHS will also need to coordinate among themselves, especially when human and robot collaboration and interaction arise, e.g., in safety scenarios (Wagner 2020). In systems of I-CPHS, the individual I-CPHS need to negotiate among themselves and also assess how the key aspects they consider in their decision-making processes are subject to external influences from other stakeholders. Such considerations could lead to better global decisions that include both the ethical considerations raised and, e.g., expanded utility functions (beyond the local context).
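The tension between local and global optima can be made concrete with a toy sketch (system names and figures are purely illustrative): a decision that is optimal for one I-CPHS may be dominated once an expanded utility aggregates outcomes across the federation:

```python
def local_utility(decision: dict, system: str) -> float:
    # Utility as seen by a single I-CPHS in its local context.
    return decision["lives_saved"][system]

def global_utility(decision: dict, systems: list, weights: dict = None) -> float:
    # Expanded utility beyond the local context: aggregate outcomes across
    # all I-CPHS in the federation (equal weights assumed here).
    weights = weights or {s: 1.0 for s in systems}
    return sum(weights[s] * decision["lives_saved"][s] for s in systems)

systems = ["plant_A", "plant_B"]
decisions = [
    {"name": "d1", "lives_saved": {"plant_A": 10, "plant_B": 0}},
    {"name": "d2", "lives_saved": {"plant_A": 8, "plant_B": 6}},
]
best_local = max(decisions, key=lambda d: local_utility(d, "plant_A"))
best_global = max(decisions, key=lambda d: global_utility(d, systems))
print(best_local["name"], best_global["name"])  # -> d1 d2
```

Here plant_A's locally optimal choice (d1) saves fewer lives overall than d2, illustrating why negotiation and expanded utility functions matter in systems of I-CPHS.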
Concluding, we can consider that, given the technological evolution, it is no longer possible to avoid paying attention to ethical aspects in Industry 4.0, especially when it comes to I-CPHS design, development, and operation. The increasing prevalence of autonomous systems in various sectors, not only in Industry 4.0 or logistics but also in services (e.g., hospitals facing strain situations), will raise more ethical considerations and challenges. Unfortunately, it is clear that although ethical issues are sometimes acknowledged, industrialists do not fully know how to handle them effectively, covering engineering as well as operational aspects. For example, the autonomous train use case stemmed from discussions with stakeholders, and it is evident that at this stage, the industry focuses more on technology, e.g., on image detection, train power control, energy management, etc., rather than on autonomous train decisions, their evaluation, and their impacts. As such, we are still at an early stage where responsibility is still delegated to humans, while the technical means aim to provide somewhat better clarity on the situation. However, due to the complexity and uncertainty issues discussed, we need to investigate more sophisticated systems, potentially heavily relying on AI, that can take better and more rapid decisions than humans do. Such solutions may be best realized considering human–machine collaboration within I-CPHS, and of course they need to satisfy the different constraints put forward by society, law, and ethics.