A Human-Centred Design to Break the Myth of the “Magic Human” in Intelligent Manufacturing Systems
The techno-centred design approach currently used in industrial engineering, and especially when designing Intelligent Manufacturing Systems (IMS), deliberately ignores the human operator when the system operates correctly, but assumes the human is endowed with “magic capabilities” to fix difficult situations. Yet this so-called magic human lacks the elements needed to make relevant decisions. This paper claims that the human operator’s role must be defined in the early design phase of the IMS. Using examples from manufacturing as well as from energy and transportation, we show that Human-Centred Design approaches explicitly place the “human in the loop” of the system to be automated. We first show the limits of techno-centred design methods. We then propose the principles of a balanced function allocation between human and machine, and even of real cooperation between them. The approach is based on decomposing the system into an abstraction hierarchy (strategic, tactical, operational). Relevant knowledge of human capabilities and limits leads to the choice of an adequate Level of Automation (LoA) according to the system situation.
Keywords: Techno-centred design · Human-centred design · Human in the loop · Levels of automation · Human-machine cooperation · Intelligent manufacturing systems
This paper is relevant to industrial engineering, energy and services in general, but is focused on Intelligent Manufacturing Systems (IMS). It deals with the way the human operator is considered, from a control point of view, when designing IMS that integrate human beings.
The complexity of industrial systems, and of the human organizations that control them, is increasing with time, as are their required safety levels. These requirements evolve in accordance with past negative experiences and industrial disasters (Seveso, Bhopal, AZF, Chernobyl…). In France, the Ministry for Ecology, Sustainable Development and Energy (Ministère de l’Écologie, du Développement durable et de l’Énergie) conducted a study of the technological accidents that occurred in France in 2013 (“inventaire 2014 des accidents technologiques”). It showed that the three domains with the highest numbers of accidents are manufacturing, water treatment and waste treatment. This study also highlighted that even if “only” 11 % of root causes stem from a “counter-productive human intervention”, human operators are often involved in accidents at different levels: organizational issues; faults in control, monitoring and supervision; poor equipment choices; and failures to capitalize knowledge from past experience.
Historically, industrial engineering has taken the human operator into account at two levels:
- At the physical level: industrial ergonomics studies, norms and methods (MTM, MOST…) are a clear illustration of this;
- At the informational and decisional levels: industrial lean and kaizen techniques aim to provide the operator with the informational and decisional capabilities to react and to improve manufacturing processes, for predefined operating modes of the manufacturing system.
Meanwhile, these industry-oriented technical solutions lack human-oriented considerations when dealing with higher, more global decisional and informational levels (scheduling, supervision, etc.), as well as when abnormal and unforeseen situations and modes occur. The same holds for the related scientific research, and even more so for less mature, more recent research topics such as the design of control in IMS architectures. In addition, and specifically in IMS, where it is known that emergent (unexpected) control behaviours can occur during manufacturing, the use of human-unaware control systems increases the risk of accidents or of unexpected, possibly hazardous situations.
The objective of this paper is thus to encourage researchers working on the design of control systems in IMS to question the way they consider the real capabilities and limitations of human beings. It is important to note that, at this stage of development, the paper remains highly prospective and contains only a set of human-oriented specifications that we think researchers must be aware of when designing control in IMS. Before providing these specifications, the following part describes the consequences of designing human-unaware control systems in IMS, which corresponds to what we call a “techno-centred” approach.
2 The Classical Control Design Approach in IMS: A Techno-Centred Approach
As introduced, we consider in this paper the way the human operator is integrated within control architectures in IMS. Such “human-in-the-loop” Intelligent Manufacturing Control Systems are denoted, for simplicity, HIMCoS in this paper. These systems consider the intervention of the human (typically, providing information, making decisions or acting directly on physical components) during the intelligent control of any function relevant to the operational level of manufacturing: for example scheduling, maintenance, monitoring, inventory management or supply. Intelligence in manufacturing control refers to the ability to react, learn, adapt, reconfigure, evolve, etc. over time using computational and artificial intelligence techniques, the control architecture typically being structured using Cyber-Physical Systems (CPS) and modelled using multi-agent or holonic principles, in a static or dynamic way (i.e., embedding self-organizational capabilities). In this paper, the intervention of the human is limited to decisional and informational aspects (we do not consider, for example, direct physical action on the controlled system).
2.1 An Illustration of the Techno-Centred Approach in IMS
To illustrate what we call the techno-centred design approach in this context, let us consider a widely studied IMS domain: distributed scheduling in manufacturing control. Research in this domain fosters a paradigm that aims to provide more autonomy and adaptation capabilities to the manufacturing control system by distributing the informational and decisional capabilities, functionally or geographically, among artificial entities (typically agents or holons). This paradigm creates “bottom-up” emergent behavioural mechanisms, complementary to possible “top-down” ones generated by a centralized and predictive system to limit this emergent behaviour or to force it to evolve within pre-fixed bounds. It encourages designers to provide these entities with cooperation or negotiation skills so that they can react and adapt more easily to the growing level of uncertainty in the manufacturing environment, while controlling the complexity of the manufacturing system by splitting the global control problem into several local ones.
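As a purely illustrative sketch of such local negotiation (all names and numbers below are hypothetical, not taken from any cited architecture), the following Python fragment shows a contract-net style allocation in which resource agents bid their earliest completion time for a job and a job agent awards the task to the best offer:

```python
# Hypothetical contract-net style sketch: resource agents answer a
# call-for-proposals with an estimated completion time; the job agent
# awards the task to the fastest bidder.

class ResourceAgent:
    def __init__(self, name, busy_until, speed):
        self.name = name
        self.busy_until = busy_until  # time at which the resource becomes free
        self.speed = speed            # processing-time multiplier

    def bid(self, job_duration):
        """Answer a call-for-proposals with an estimated completion time."""
        return self.busy_until + job_duration * self.speed

    def award(self, job_duration):
        """Book the job and return the committed completion time."""
        self.busy_until = self.bid(job_duration)
        return self.busy_until


def allocate(job_duration, resources):
    """Job agent: collect bids and award the job to the best offer."""
    best = min(resources, key=lambda r: r.bid(job_duration))
    return best.name, best.award(job_duration)


machines = [ResourceAgent("M1", busy_until=4.0, speed=1.0),
            ResourceAgent("M2", busy_until=0.0, speed=1.5)]
winner, completion = allocate(10.0, machines)  # M1 offers 14.0, M2 offers 15.0
```

Real HIMCoS layer re-negotiation, holonic aggregation and the “top-down” predictive bounds mentioned above on top of such a local protocol; the sketch only shows how the global scheduling problem is split into local bids.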
This is a usual design approach in manufacturing and industrial engineering. Paradoxically, it can also be identified even in the widespread and historical field of Decision Support Systems (DSS), see for example [9, 10]. This design approach can be characterized as “techno-centred”, meaning that priority is given to solving technical issues to the detriment of human aspects. A techno-centred approach consists in automating a maximum number of functions (in nominal or degraded mode) in a pre-defined context, and in assuming that the human operator will supervise and handle all the situations that were not foreseen.
2.2 The Hidden Assumption of the “Magic Human”
In such an approach, the human operator is implicitly expected to:
- Solve all the problems for which there is no anticipated solving process;
- Provide the right information to the control system in due time;
- Decide among alternatives in real time (i.e., as fast as possible or by a due date), whenever necessary (cf. the concept of DSS);
- Ensure with full reliability the switching between control configurations, and especially the recovery towards normal operating conditions after an unforeseen perturbation or degradation.

With derision, we call him the magic human (Fig. 2).
As a synthesis of our point of view, Fig. 2 sums up the techno-centred design approach in HIMCoS. In this figure, the dotted lines represent the fact that the human operator is not really considered during the design of the HIMCoS.
2.3 The Risks of Keeping This Assumption Hidden
Assuming the human operator is a magic human is obviously not realistic, but it is common in industrial engineering research. In light of the French ministry study mentioned in the introduction, a techno-centred design pattern in HIMCoS is risky, since it leads to overestimating the abilities of the human operator, who must behave perfectly when required, within due response times, and who is also assumed to be perfectly able to react to unexpected situations. How can we be sure that he is able to do everything he is expected to do, in the best possible way? Do his human reaction times comply with the high-speed ones of computerized artificial entities? What if he takes too much time to react? What if he makes wrong or risky decisions? What if he simply does not know what to do?
Moreover, one specificity of HIMCoS renders the techno-centred approach even more risky. As explained before, “bottom-up” emergent behaviours will occur in HIMCoS. Emergent behaviours are never faced (nor sought) in classical hierarchical/centralized control approaches in manufacturing. Analysed with regard to the need to maintain and guarantee safety levels in manufacturing systems, this novelty makes the issue more crucial. Typically, is the human operator ready to face the unexpected in front of complex self-organizing systems? This critical issue has seldom been addressed. And, from the opposite point of view, what should be done in case of unexpected events for which no foreseen technical solution is available, whereas the human is the only entity really able to invent one?
2.4 Why Does Such an Obvious Assumption Remain Hidden?
From our point of view, three main reasons explain why this assumption remains hidden and is seldom explicitly pointed out.
The first reason is that researchers in industrial engineering are often not expert in, or even aware of, ergonomics, human factors or human-machine systems. A second reason is that integrating the human operator requires introducing undesired qualitative and blurry elements, coupled with hardly reproducible and hardly evaluable behaviours, including complex experimental protocols potentially involving several humans as “guinea pigs” for test purposes. Last, the technological evolution of CPS, infotronics, and information and communication technologies facilitates the automation of control functions (raising their level of automation, denoted LoA), which makes it easier for researchers to automate as many of the control functions they consider as possible.
For all these reasons, researchers, consciously or not, “kick into touch” or sidestep the integration of the human dimension, when designing their HIMCoS or their industrial control system.
3 Towards a More Human-Centred Control Design in IMS
In HIMCoS, and in industrial engineering in general, it is nowadays crucial to revise the basic research design patterns and adopt a more human-centred approach to limit the hidden assumption of the magic human. Obviously, this challenge is huge and complex, but a growing number of researchers, especially in ergonomics and human engineering, now address this objective, particularly in the domain of industrial engineering. They typically work on the aforementioned LoA and on Situation Awareness as a prerequisite [13, 14]. Some EU projects have also been launched, illustrating this recent evolution (e.g., SO-PC-PRO “Subject Orientation For People Centred Production” and MAN-MADE “MANufacturing through ergonoMic and safe Anthropocentric aDaptive workplacEs for context aware factories in Europe”).
This paper does not intend to provide absolute solutions but rather aims to set the alarm bell ringing and to provide some guidelines and insights for researchers to manage this hidden assumption of the “magic human” when designing the HIMCoS.
The main principle of a human-centred approach is for the designer to anticipate the functions that the human will operate, and thus to determine the information he will need to understand the state of the system, to formulate a decision and, last, to act. This anticipation must be guided by several main principles. Below is a list that focuses on the decisional and informational aspects (ergonomic aspects, for example, are not studied here, although they could yield several additional items).
The human can be the devil and, unfortunately, some design engineers consider him as such: his rationality is bounded (cf. Simon’s principle); he may forget, make mistakes, over-react, be absent or even be the root cause of a disaster, for example because of a poor understanding of the behaviour of a system that decides to switch to a secure mode (Three Mile Island), or by acting badly because of a lack of trust in the automated system or a lack of knowledge about the industrial system (Chernobyl). The controlled system must be designed to manage this risk. A typical example of such a design is the “dead-man’s vigilance device” in rail transportation, where the driver must frequently trigger a device so that the control system knows he is really in command. This is a first level of mutual control between the system and the human, each assuming that the other has limited reliability. More elaborate levels would address the issue of wrong or abnormal command signals, whether coming from the human or from the control system. This principle links our discussion to safety (RAM: reliability, availability and maintainability) and FDI (fault detection and isolation) studies, not only from a technical point of view but also from a human one.
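The dead-man’s vigilance principle can be reduced to a timeout check in each direction. The sketch below is a minimal, hypothetical illustration (function names and timeout values are assumptions; real devices use redundant hardware channels):

```python
# Hypothetical sketch of mutual vigilance monitoring: each party assumes
# the other has limited reliability.

SAFE_MODE = "safe_stop"
NORMAL_MODE = "normal"

def vigilance_check(now, last_ack, timeout=10.0):
    """Controller side: if the operator has not acknowledged within the
    timeout window, fall back to a safe mode instead of assuming he is
    present and attentive."""
    return NORMAL_MODE if (now - last_ack) <= timeout else SAFE_MODE

def controller_alive(now, last_heartbeat, timeout=2.0):
    """Human side: the operator's display monitors the controller's
    heartbeat, the symmetric direction of mutual control."""
    return (now - last_heartbeat) <= timeout
```

More elaborate levels of mutual control would check not only the presence of signals, as here, but also their plausibility (wrong or abnormal commands).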
The human can be the hero, and more often than we believe: he may save lives through innovative, unexpected behaviours (e.g., Apollo 13). The whole human-machine system must be designed to allow the human, as much as possible, to introduce unforeseen processes and mechanisms.
The human can be the powerless witness: he may be unable to act despite being sure that he is right and the automated system is wrong, being for example the spectator of a disaster due to the design of an automated but badly sized plant (Fukushima). The system must be designed to ensure its controllability by the human whenever desired.
The human is accountable, legally and socially speaking: the allocation of authority and responsibility between humans and machines is not easy to solve (e.g., automatic cruise control, air traffic control, automated purchasing processes for supply, automatic scheduling of processes, etc.). The designer must consider this aspect when designing and allocating decisional abilities among entities. In other words, if the human is accountable, he must be allowed to fully control the system.
The human must always be aware of the situation: according to Endsley, Situation Awareness (SA) is composed of three levels: SA1 (perception of the elements), SA2 (comprehension of the situation) and SA3 (projection of future states). Each of these SA levels must be considered to ensure that humans can take decisions and continuously update their mental models of the system (e.g., to take over control, or simply to know what the situation is).
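Endsley’s three levels can be pictured as successive transformations of the available data. The sketch below is only an illustration of the decomposition (function names, variables and thresholds are hypothetical, not a standard API):

```python
# Hypothetical sketch of the three SA levels as successive transformations.

def sa1_perceive(raw_sensors):
    """SA1: perception of the elements -- keep only readings that exist."""
    return {k: v for k, v in raw_sensors.items() if v is not None}

def sa2_comprehend(elements, nominal_ranges):
    """SA2: comprehension of the situation -- which elements are outside
    their nominal range?"""
    return {k: v for k, v in elements.items()
            if not (nominal_ranges[k][0] <= v <= nominal_ranges[k][1])}

def sa3_project(elements, rates, horizon):
    """SA3: projection of future states over a time horizon, here by a
    simple linear extrapolation."""
    return {k: v + rates.get(k, 0.0) * horizon for k, v in elements.items()}
```

The design point is that an interface supporting only SA1 (raw values) without SA2 and SA3 support leaves comprehension and projection entirely to the magic human.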
The LoA must be adaptive: some tasks must be automated and others cannot be. But the corresponding LoA must not be predefined and fixed forever. It must evolve according to situations and events, sometimes easing the work of the human (for example, in normal conditions) and at other times handing back to him the control of critical tasks (for example, when abnormal situations occur). As a consequence, the control system must cooperate differently with the human according to the situation: task allocation must be dynamic and handled in an adaptive way.
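An adaptive LoA can be sketched as a situation-dependent allocation rule. The thresholds, labels and inputs below are purely illustrative assumptions, not a validated policy:

```python
# Hypothetical adaptive LoA policy: in nominal conditions the machine
# handles the task; as abnormality and criticality grow, authority is
# handed back to the human.

def select_loa(anomaly_score, task_critical):
    """Return a (LoA, acting party) pair for the current situation.
    anomaly_score in [0, 1] is an assumed measure of situation abnormality."""
    if anomaly_score < 0.2:
        return ("full_automation", "machine")
    if anomaly_score < 0.6 and not task_critical:
        return ("machine_proposes", "machine_with_human_veto")
    return ("manual_control", "human")
```

Note that handing back control presupposes the previous principle: the human can only take over a critical task if his Situation Awareness has been maintained beforehand.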
The diversity and repeatability of decisions must be considered, typically to avoid boring, repetitive actions/decisions. This also requires making explicit, as much as possible, all the rare decisions for which the human was not prepared. For that, a time-based hierarchy (e.g., strategic, tactical and operational levels) and a typology of decisions (e.g., according to skill-, rule- or knowledge-based behaviour) can be defined.
Therefore, the human mental workload must be carefully addressed: related to some of the previous principles, there exists an “optimal workload” between having nothing to do, potentially inducing a lack of interest, and having too much to do, inducing stress and fatigue. A typical consequence is that the designer must carefully define different time horizons (from real time to long term) and balance the reaction times of the human with those of the controlled industrial system. This is one of the historical issues dealt with by researchers in human engineering.
4 Proposal of a Human-Centred Design Framework
In this framework, the human cooperates with assistants, each endowed with:
- A Know-How (KH): knowledge and processing capabilities, as well as capabilities of communication with other assistants and with the environment (sensors, actuators); and
- A Know-How to Cooperate (KHC), allowing the assistant to cooperate with others (e.g., gathering coordination abilities and capabilities that facilitate the achievement of the goals of the other assistants).
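The KH/KHC separation can be sketched as two distinct groups of methods on an assistant. The class, method names and capacity model below are hypothetical illustrations of the decomposition, not an implementation of the cited framework:

```python
# Hypothetical sketch separating Know-How (KH: local task processing)
# from Know-How-to-Cooperate (KHC: helping peers achieve their goals).

class Assistant:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # KH: how many tasks it can process
        self.load = 0

    # --- Know-How (KH): local processing capability ---
    def process(self, task):
        """Accept a task if capacity allows; return success."""
        if self.load >= self.capacity:
            return False
        self.load += 1
        return True

    # --- Know-How-to-Cooperate (KHC): coordinate with peers ---
    def delegate(self, task, peers):
        """Offload a task to the least loaded peer able to take it."""
        for peer in sorted(peers, key=lambda p: p.load):
            if peer.process(task):
                return peer.name
        return None


a = Assistant("A", capacity=1)
b = Assistant("B", capacity=2)
a.process("t1")                  # A is now at full capacity
helper = a.delegate("t2", [b])   # KHC: A hands the extra task to B
```

The design choice illustrated here is that KHC is defined relative to the goals of the other assistants, not merely as a communication channel.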
Recent works have shown that team Situation Awareness can be increased when humans cooperate with assistant machines equipped with such cooperative abilities. Examples have been shown in several application fields: air traffic control, fighter aircraft cockpits and human-robot cooperative rescue actions.
The aim of this paper was to raise awareness of the risk of keeping the “magic human” assumption hidden and taken for granted when designing HIMCoS and, at a more general level, industrial control systems with the human in the loop as a decision maker.
The suggested human-centred design aims to reconcile two apparently antagonistic figures: the imperfect human, who can correct and learn from his errors, and the attentive and inventive human, capable of detecting problems and bringing solutions even when they are difficult and new. With a human-centred design approach in IMS, human resources can be amplified by recent ICT tools that support decision and action. The integration of such tools raises the question of the level of automation, since these tools could become real decision partners and even real collaborators for humans.
- 1. Cardin, O., Trentesaux, D., Thomas, A., Castagna, P., Berger, T., El-Haouzi, H.B.: Coupling predictive scheduling and reactive control in manufacturing hybrid control architectures: state of the art and future challenges. J. Intell. Manuf. (2016). doi:10.1007/s10845-015-1139-0
- 6. Lee, J., Bagheri, B., Kao, H.-A.: A cyber-physical systems architecture for industry 4.0-based manufacturing systems. Manuf. Lett. 3, 18–23 (2015)
- 7. Gaham, M., Bouzouia, B., Achour, N.: Human-in-the-loop cyber-physical production systems control (HiLCP2sC): a multi-objective interactive framework proposal. In: Service Orientation in Holonic and Multi-Agent Manufacturing, pp. 315–325. Springer (2015)
- 8. Zambrano Rey, G., Carvalho, M., Trentesaux, D.: Cooperation models between humans and artificial self-organizing systems: motivations, issues and perspectives. In: 6th International Symposium on Resilient Control Systems (ISRCS), pp. 156–161 (2013)
- 10. Mac Carthy, B.: Organizational, systems and human issues in production planning, scheduling and control. In: Handbook of Production Scheduling, pp. 59–90. Springer, US (2006)
- 12. Valckenaers, P., Van Brussel, H., Bruyninckx, H., Saint Germain, B., Van Belle, J., Philips, J.: Predicting the unexpected. Comput. Ind. 62, 623–637 (2011)
- 14. Pacaux-Lemoine, M.-P., Debernard, S., Godin, A., Rajaonah, B., Anceaux, F., Vanderhaegen, F.: Levels of automation and human-machine cooperation: application to human-robot interaction. In: IFAC World Congress, pp. 6484–6492 (2011)
- 15. Schmitt, K.: Automation's influence on nuclear power plants: a look at three accidents and how automation played a role. In: Int. Ergon. Assoc. World Conf., Recife, Brazil (2012)
- 18. Sheridan, T.B.: Telerobotics, Automation, and Human Supervisory Control. MIT Press (1992)
- 19. Sentouh, C., Popieul, J.C.: Human-machine interaction in automated vehicles: the ABV project. In: Risk Management in Life-Critical Systems, pp. 335–350. ISTE-Wiley (2014)
- 20. Millot, P.: Cooperative organization for enhancing situation awareness. In: Risk Management in Life-Critical Systems, pp. 279–300. ISTE-Wiley, London (2014)
- 21. Millot, P., Boy, G.A.: Human-machine cooperation: a solution for life-critical systems? Work 41 (2012)