
A Human-Centred Design to Break the Myth of the “Magic Human” in Intelligent Manufacturing Systems

  • Damien Trentesaux
  • Patrick Millot
Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 640)

Abstract

The techno-centred design approach currently used in industrial engineering, and especially when designing Intelligent Manufacturing Systems (IMS), deliberately ignores the human operator as long as the system operates correctly, but assumes the human is endowed with "magic capabilities" to fix difficult situations. Yet this so-called magic human often lacks the elements needed to make the relevant decisions. This paper claims that the human operator's role must be defined in the early design phase of the IMS. Using examples of systems from manufacturing as well as from energy and transportation, we show that Human-Centred Design approaches explicitly place the "human in the loop" of the system to be automated. We first show the limits of techno-centred design methods. Second, we propose the principles of a balanced function allocation between human and machine, and even a real cooperation between them. The approach is based on decomposing the system into an abstraction hierarchy (strategic, tactical, operational). Relevant knowledge of human capabilities and limits then leads to the choice of an adequate Level of Automation (LoA) according to the system situation.

Keywords

Techno-centred design · Human-centred design · Human in the loop · Levels of automation · Human-machine cooperation · Intelligent manufacturing systems

1 Introduction

This paper is relevant to industrial engineering, energy and services in general, but focuses on Intelligent Manufacturing Systems (IMS). It deals with the way the human operator is considered, from a control point of view, when designing IMS that integrate human beings.

The complexity of industrial systems and of the human organizations that control them is increasing with time, as are their required safety levels. These requirements evolve according to past negative experiences and industrial disasters (Seveso, Bhopal, AZF, Chernobyl…). In France, the Ministry for Ecology, Sustainable Development and Energy (Ministère de l'Écologie, du Développement durable et de l'Énergie) conducted a study of the technological accidents that occurred in France in 2013 ("inventaire 2014 des accidents technologiques"). It showed that the three domains with the highest numbers of accidents are manufacturing, water and waste treatment. This study also highlighted that, even if "only" 11% of the root causes come from a "counter-productive human intervention", human operators are often involved in accidents at different levels: organizational issues; failures in control, monitoring and supervision; poor equipment choices; and insufficient capitalization of knowledge from past experience.

Obviously, the capabilities and limits of the human operator during manufacturing have been widely considered for many years, and very intensively by industry. This attention has mainly been paid at the operational level:
  • At the physical level: industrial ergonomic studies, norms and methods (MTM, MOST…) are a clear illustration of this;

  • At the informational and decisional levels: industrial lean and kaizen techniques aim to provide the operator with the informational and decisional capabilities to react and to improve manufacturing processes within predefined operating modes of the manufacturing system.

However, these industry-oriented technical solutions lack human-oriented considerations when dealing with higher, more global decisional and informational levels such as scheduling and supervision, as well as when abnormal and unforeseen situations and modes occur. This also holds true for the related scientific research activity, and it is even truer for less mature and more recent research topics such as the design of control in IMS architectures. In addition, and specifically for IMS, where it is known that emerging (unexpected) control behaviours can occur during manufacturing, the risk of accidents or unexpected, possibly hazardous situations increases when control systems are not human-aware.

The objective of this paper is thus to encourage researchers working on the design of control systems in IMS to question the way they consider the real capabilities and limitations of human beings. It is important to note that, at this stage of development, the paper remains highly prospective and contains only a set of human-oriented specifications that we think researchers must be aware of when designing control in IMS. Before providing these specifications, the following section describes the consequences of designing un-human-aware control systems in IMS, which corresponds to what we call a "techno-centred" approach.

2 The Classical Control Design Approach in IMS: A Techno-Centred Approach

As introduced, we consider in this paper the way the human operator is integrated within control architectures in IMS. Such "human-in-the-loop" Intelligent Manufacturing Control Systems are denoted HIMCoS in this paper, for simplification purposes. These systems consider the intervention of humans (typically, providing information, making decisions or acting directly on physical components) during the intelligent control of any function relevant to the operational level of manufacturing operations, for example scheduling, maintenance, monitoring, inventory management, supply, etc. Intelligence in manufacturing control refers to the ability to react, learn, adapt, reconfigure, evolve, etc. over time using computational and artificial intelligence techniques, the control architecture being typically structured using Cyber-Physical Systems (CPS) and modelled using multi-agent or holonic principles, in a static or dynamic way (i.e., embedding self-organizational capabilities). In this paper, human intervention is limited to the decisional and informational aspects (we do not consider, for example, direct physical action on the controlled system).

2.1 An Illustration of the Techno-Centred Approach in IMS

To illustrate what we call the techno-centred design approach in this context, let us consider a widely studied IMS domain: distributed scheduling in manufacturing control. Research activities in this domain foster a paradigm that aims to provide more autonomy and adaptation capabilities to the manufacturing control system by distributing, functionally or geographically, the informational and decisional capabilities among artificial entities (typically agents or holons). This paradigm creates "bottom-up" emerging behavioural mechanisms that complement possible "top-down" ones generated by a centralized, predictive system to limit or force this emerging behaviour to evolve within pre-fixed bounds [1]. The paradigm encourages designers to provide these entities with cooperation or negotiation skills so that they can react and adapt more easily to the growing level of uncertainty in the manufacturing environment, while controlling the complexity of the manufacturing system by splitting the global control problem into several local ones.
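To make this distribution of decisional capabilities concrete, the following minimal Python sketch illustrates a simplified announcement/bid/award exchange between an order entity and resource entities, in the spirit of the contract-net-style interactions often used in such architectures. The class names, the bid criterion (earliest completion time) and the data are illustrative assumptions, not part of any cited architecture.

```python
from dataclasses import dataclass

@dataclass
class ResourceAgent:
    """A resource (e.g., machine) holon with purely local knowledge."""
    name: str
    queue_time: float       # time already committed to other operations
    processing_time: float  # time needed for the announced operation

    def bid(self) -> float:
        # Local estimate of completion time; no global schedule is consulted.
        return self.queue_time + self.processing_time

def allocate(order: str, resources: list[ResourceAgent]) -> ResourceAgent:
    """Order entity announces an operation, collects bids, awards the best one.
    The global schedule emerges bottom-up from such local decisions."""
    bids = {r.name: r.bid() for r in resources}
    winner = min(resources, key=lambda r: bids[r.name])
    winner.queue_time += winner.processing_time  # commit the chosen resource
    print(f"{order} -> {winner.name} (bids: {bids})")
    return winner

if __name__ == "__main__":
    machines = [ResourceAgent("M1", 5.0, 3.0),
                ResourceAgent("M2", 2.0, 4.0),
                ResourceAgent("M3", 0.0, 6.0)]
    for job in ["order-A", "order-B", "order-C"]:
        allocate(job, machines)
```

Note that no human appears anywhere in this loop: in a techno-centred design, the operator is implicitly expected to step in only when such local negotiations fail.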

The most emblematic proposals relevant to this paradigm are PROSA [2], ADACOR [3] and, more recently, ADACOR2 [4]. Nowadays, the initial paradigm has evolved but the basics remain the same: up-to-date proposals are typically linked to the concepts of "intelligent product" [5] and "cyber-physical systems" [6]. Researchers in this domain (and more generally in industrial engineering) often consider that the human operator is the supervisor of the whole [7, 8]. Figure 1 (from [8]) is a typical illustration. According to this approach, the human operator sets the objectives and tunes parameters, constraints and rules; he then influences or explicitly controls the artificial entities. Even if such works try to integrate the human operator, many others do not even consider this possibility, which is unrealistic in a real-life context.
Fig. 1

An example of a techno-centred design of a HIMCoS [8]

This is a usual design approach in manufacturing and industrial engineering. Paradoxically, it can also be identified even in the widespread and historical field of Decision Support Systems (DSS), see for example [9, 10] or [11]. This design approach can be characterized as "techno-centred", meaning that priority is assigned to solving technical issues to the detriment of human aspects. A techno-centred approach consists in automating a maximum number of functions (in nominal or degraded mode) in a pre-defined context and in assuming that the human operator will supervise and handle all the situations that were not foreseen.

2.2 The Hidden Assumption of the “Magic Human”

In fact, in a techno-centred design of HIMCoS, there is a hidden assumption: the human operator is considered an omniscient person who will:
  • Solve all the problems for which there is no anticipated solving process,

  • Provide the right information to the control system in due time,

  • Decide among alternatives in real time (i.e., as fast as possible or by the due date), whenever it is necessary (cf. the concept of DSS),

  • Ensure, with full reliability, the switching between control configurations and especially the recovery towards normal operating conditions after an unforeseen perturbation or degradation.

    With derision, we call him the magic human (Fig. 2).
    Fig. 2

    Techno-centred HIMCoS design approach

As a synthesis of our point of view, Fig. 2 sums up the techno-centred design approach in HIMCoS. In this figure, the dotted lines represent the fact that the human operator is not really considered during the design of the HIMCoS.

2.3 The Risks of Keeping This Assumption Hidden

Assuming the human operator is a magic human is obviously not realistic, yet it is a reality in industrial engineering research. In light of the French ministry study mentioned in the introduction, a techno-centred design pattern in HIMCoS is risky since it leads to overestimating the ability of the human operator, who is expected to behave perfectly when required, within due response times, and to react perfectly to unexpected situations. How can we be sure that he is able to carry out everything he is expected to do, and in the best possible way? Moreover, do human reaction times match the high-speed ones of computerized artificial entities? What if he takes too much time to react? What if he makes wrong or risky decisions? What if he simply does not know what to do?

Moreover, one specificity of HIMCoS makes the techno-centred approach even riskier. Indeed, as explained before, "bottom-up" emerging behaviours will occur in HIMCoS. Emerging behaviours are never faced (nor sought) in classical hierarchical/centralized control approaches in manufacturing. This novelty, analysed with regard to the need to maintain and guarantee safety levels in manufacturing systems, makes the issue even more crucial. Typically, is the human operator ready to face the unexpected behaviour of complex self-organizing systems? This critical issue has seldom been addressed, see for example [12]. And, from the opposite point of view, what should be done in the case of unexpected events for which no technical solution is available and the human is the only entity really able to invent one?

2.4 Why Does Such an Obvious Assumption Remain Hidden?

From our point of view, three main reasons explain why this assumption remains hidden and is seldom explicitly pointed out.

The first reason is that researchers in industrial engineering are often not experts in, or even aware of, ergonomics, human factors or human-machine systems. The second is that integrating the human operator requires introducing undesired qualitative and fuzzy elements, coupled with behaviours that are hard to reproduce and evaluate, including complex experimental protocols potentially involving several humans as "guinea pigs" for test purposes. Last, the technological evolution in CPS, infotronics and information and communication technologies facilitates the automation of control functions (whose degree is denoted LoA, for Level of Automation), which makes it easier for researchers to automate as much as possible the different control functions they consider.

For all these reasons, researchers, consciously or not, "kick into touch" or sidestep the integration of the human dimension when designing their HIMCoS or their industrial control systems.

3 Towards a More Human-Centred Control Design in IMS

In HIMCoS, and in industrial engineering in general, it is now crucial to revise the basic research design patterns and adopt a more human-centred approach to limit the hidden assumption of the magic human. Obviously, this challenge is huge and complex, but a growing number of researchers, especially in ergonomics and human engineering, now address this objective, particularly in the domain of industrial engineering. They typically work on the aforementioned LoA and on Situation Awareness as a prerequisite [13, 14]. Some EU projects have also been launched that illustrate this recent evolution (e.g., SO-PC-PRO "Subject Orientation For People Centred Production" and MAN-MADE "MANufacturing through ergonoMic and safe Anthropocentric aDaptive workplacEs for context aware factories in Europe").

This paper does not intend to provide absolute solutions; rather, it aims to ring the alarm bell and to provide some guidelines and insights for researchers to manage this hidden assumption of the "magic human" when designing HIMCoS.

The main principle of a human-centred approach is for the designer to anticipate the functions that the human will perform, and thus to determine the information he will need to understand the state of the system, formulate a decision and, finally, act. This anticipation must be accompanied by several main principles. The list proposed below focuses on the decisional and informational aspects (ergonomic aspects, for example, are not studied here, although they could yield several additional items).

The human can be the devil and, unfortunately, some design engineers consider him as such: his rationality is bounded (cf. Simon's principle); he may forget, make mistakes, over-react, be absent or even be the root cause of a disaster, for example because of a poor understanding of the behaviour of a system that decides to switch to a safe mode (Three Mile Island), or because of acting badly owing to a lack of trust in the automated system or a lack of knowledge about the industrial system (Chernobyl) [15]. The controlled system must be designed accordingly to manage this risk. A typical example of such a system is the "dead-man's vigilance device" in rail transportation, where the driver must frequently trigger a device so that the control system knows he is really in command. This is a first level of mutual control between the system and the human, each assuming that the other has limited reliability. More elaborate levels would address the issue of wrong or abnormal command signals coming from either the human or the control system. This principle thus links our discussion to safety (RAM: reliability, availability and maintainability) and FDI (fault detection and isolation) studies, not only from a technical point of view but also from a human one.
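As an illustration only, the following Python sketch captures the watchdog logic behind such a vigilance device: the operator must acknowledge periodically, otherwise the control side assumes a loss of vigilance and triggers a safe reaction. The class name, the timeout value and the safe-mode callback are hypothetical, not a specification of any real railway device.

```python
import time

class DeadMansVigilance:
    """Minimal watchdog: the operator must acknowledge within a deadline,
    otherwise the control side assumes loss of vigilance and reacts safely."""

    def __init__(self, timeout_s: float = 30.0):
        self.timeout_s = timeout_s
        self.last_ack = time.monotonic()

    def acknowledge(self) -> None:
        # Called each time the operator presses the vigilance button.
        self.last_ack = time.monotonic()

    def operator_vigilant(self) -> bool:
        return (time.monotonic() - self.last_ack) < self.timeout_s

    def supervise(self, apply_safe_mode) -> None:
        # Mutual-control step: if the human side stays silent too long,
        # the technical side takes a conservative action (e.g., stop the line).
        if not self.operator_vigilant():
            apply_safe_mode()

vigilance = DeadMansVigilance(timeout_s=30.0)
vigilance.acknowledge()                                   # operator presses the button
vigilance.supervise(lambda: print("switching to safe mode"))
```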

The human can be the hero, and more often than we believe: he may save lives through innovative, unexpected behaviours (e.g., Apollo 13). The whole human-machine system must be designed to allow the human, as much as possible, to introduce unforeseen processes and mechanisms.

The human can be the powerless witness: he may be unable to act despite being sure that he is right and the automated system is wrong, for example as the spectator of a disaster due to the design of an automated but badly sized plant (Fukushima) [15]. The system must be designed to ensure its controllability by the human whenever required.

The human is accountable, legally and socially speaking: the allocation of authority and responsibility between humans and machines is not easy to solve (e.g., automatic cruise control, air traffic control, automated purchase processes for supply, automatic scheduling of processes, etc.). The designer must consider this aspect when designing and allocating decisional abilities among entities. In other words, if the human is accountable, he must be allowed to fully control the system.

Therefore:

The human must always be aware of the situation: according to Endsley [16], Situation Awareness (SA) is composed of three levels: SA1 (perception of the elements), SA2 (comprehension of the situation) and SA3 (projection of future states). Each of these SA levels must be considered to ensure that humans can make decisions and continuously update their mental models of the system (e.g., to take over control or simply to know what the situation is).
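As a rough illustration of how a designer might keep these three SA levels explicit in a HIMCoS interface, the sketch below groups perceived elements (SA1), an interpreted situation (SA2) and a short-horizon projection (SA3) into one structure handed to the operator display; all field names and example values are assumptions, not part of Endsley's model.

```python
from dataclasses import dataclass, field

@dataclass
class SituationAwarenessView:
    """What the control system should expose so the operator can build SA."""
    # SA1: perception of the elements (raw but relevant facts)
    perceived_elements: dict = field(default_factory=dict)
    # SA2: comprehension of the current situation
    situation_assessment: str = ""
    # SA3: projection of near-future states
    projected_states: list = field(default_factory=list)

view = SituationAwarenessView(
    perceived_elements={"M2_status": "blocked", "buffer_B2_level": 14},
    situation_assessment="bottleneck forming on line 2",
    projected_states=["order-A late by 20 min if M2 is still blocked at t+10"],
)
print(view)
```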

The LoA must be adaptive: some tasks must be automated and some others cannot be. But the related LoA must not be predefined and fixed forever. It must evolve according to situations and events, sometimes easing the work of the human (for example, in normal conditions) and at other times handing control of critical tasks back to him (for example, when abnormal situations occur). As a consequence, the control system must cooperate differently with the human according to the situation: task allocation must be dynamic and handled in an adaptive way.
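A minimal sketch of such adaptive allocation is given below, assuming Sheridan-style numeric LoA values (discussed in Sect. 4): the criticality of the situation and the available reaction time drive whether a task stays automated or is handed back to the human. The thresholds and situation attributes are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    abnormal: bool          # has the control system detected an unforeseen state?
    reaction_time_s: float  # time available before the decision must be applied

def select_loa(task: str, s: Situation) -> int:
    """Return a Sheridan-style level of automation (1 = fully manual,
    10 = fully automatic) for the task, adapted to the current situation."""
    if s.reaction_time_s < 0.1:
        return 10   # too fast for a human: the machine decides and acts
    if s.abnormal:
        return 4    # abnormal but time is available: propose, the human decides
    return 7        # nominal conditions: automate, keep the human informed

print(select_loa("reschedule line 2", Situation(abnormal=True, reaction_time_s=45.0)))
```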

The diversity and repeatability of decisions must be considered, typically to avoid boring, repetitive actions and decisions. This also requires making explicit, as much as possible, all the rare decisions for which the human was not prepared. To this end, a time-based hierarchy (e.g., strategic, tactical and operational levels) and a typology of decisions (e.g., according to skill-, rule- or knowledge-based behaviour) can be defined.

Therefore, the human mental workload must be carefully addressed: related to some of the previous principles, there exists an "optimal workload" between having nothing to do, which potentially induces a lack of interest, and having too much to do, which induces stress and fatigue. A typical consequence is that the designer must carefully define different time horizons (from real time to long term) and balance the reaction times of the human with those of the controlled industrial system. This is one of the historical issues dealt with by researchers in human engineering [17].

4 Proposal of a Human-Centred Design Framework

It is certainly not possible to draw a generic model of a HIMCoS that complies, for every possible case, with all the previous principles. Nevertheless, we can propose a human-centred design framework that provides researchers in IMS (and, more generally, in industrial engineering) with some ideas to limit the magic-human effect in their control systems. For that purpose, Fig. 3 presents such a global framework. As suggested before, the process has been decomposed into three levels: operational for the short term, tactical at a higher hierarchical level for achieving intermediate objectives, and strategic at the highest level. The human may appear absent from the lower level, but this does not mean the system is fully automated. We can therefore consider the automation in the system as three subsets, as in nuclear plant control: one subset is fully automated; a second is not fully automated, but feedback from experience makes it possible to design procedures that the human must follow (a kind of automation of the human); and the last subset is neither automated nor foreseen and must therefore be handled thanks to the human's inventive capabilities. This requires paying particular attention when designing the whole system so that humans are able to give the best of themselves, especially when no technical solution is available!
Fig. 3

Human-centred HIMCoS design approach
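The following sketch illustrates, under obviously simplified assumptions, how an event could be routed across the three subsets described above: fully automated handling, procedure-guided human handling, and unforeseen cases left to human inventiveness. The event categories and procedure identifiers are hypothetical.

```python
AUTOMATED_EVENTS = {"tool_wear_alarm", "buffer_full"}            # subset 1: fully automated
PROCEDURES = {"conveyor_jam": "procedure_CJ-12",                 # subset 2: human follows a procedure
              "quality_drift": "procedure_QD-03"}

def handle_event(event: str) -> str:
    if event in AUTOMATED_EVENTS:
        return f"automated response applied to {event}"
    if event in PROCEDURES:
        return f"operator guided by {PROCEDURES[event]} for {event}"
    # Subset 3: neither automated nor proceduralized -> rely on human inventiveness,
    # so the system must give the operator full situation awareness and controllability.
    return f"unforeseen event {event}: escalate to operator with full situation picture"

for e in ["buffer_full", "quality_drift", "power_micro_outage"]:
    print(handle_event(e))
```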

This framework features some of the previously introduced principles. For example, mutual observation (through cooperation) is performed to account for the limited reliability of either the human or the intelligent manufacturing control system. Also, different time-horizon levels are proposed. But some other principles can hardly be represented in this figure. This is typically the case for the one dealing with the adaptive LoA. Research in this field has been very active in recent years. A famous guideline based on 10 levels has been proposed in [18], where at level 1 the control is completely manual while at level 10 the control is fully automated. The 4th, intermediate level corresponds to the DSS (the control selects a possible action and proposes it to the operator). At level 6, the control gives the operator a limited time to veto the decision before it is executed automatically. This can be specified for each level (strategic, tactical and operational). For example, it is nowadays conceivable that the intelligent manufacturing control system depicted in Fig. 3 changes the operational decision level itself, from a level between 1 and 4 to level 10, because of the need to react within milliseconds to avoid an accident, while leaving the tactical decision level unchanged at an intermediate level. Researchers in automated vehicles have addressed adaptive LoA, which may be inspiring for industrial engineering [19]. Work on human-machine cooperation is a very promising track since current technology allows embedding more and more decisional abilities into machines, transforming them into efficient assistants (CPS, avatars, agents, holons…) for humans in order to enhance performance. In such a context, it is suggested that each of these assistants embed (see the sketch after the following list):
  • A Know-How (KH): knowledge and processing capabilities, as well as capabilities for communicating with other assistants and with the environment (sensors, actuators), and

  • A Know-How to Cooperate (KHC) allowing the assistant to cooperate with others (e.g., gathering coordination abilities and capabilities to facilitate the achievement of the goals of the other assistants) [13].
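A minimal sketch of such an assistant is given below: Know-How is represented by a local decision capability, and Know-How-to-Cooperate by the ability to expose intentions and adjust them to a partner's goals. The class and method names are assumptions for illustration, not the cooperation model of [13].

```python
class Assistant:
    """Artificial assistant embedding Know-How (KH) and Know-How-to-Cooperate (KHC)."""

    def __init__(self, name: str, goal: str):
        self.name = name
        self.goal = goal

    # --- Know-How: local knowledge and processing ---
    def propose_action(self, observation: str) -> str:
        return f"{self.name} proposes an action for '{observation}' to reach goal '{self.goal}'"

    # --- Know-How-to-Cooperate: make intentions visible and negotiable ---
    def share_intention(self) -> dict:
        return {"agent": self.name, "goal": self.goal}

    def accommodate(self, partner_intention: dict) -> None:
        # Facilitate the partner's goal, e.g., by relaxing one's own constraints.
        print(f"{self.name} adjusts its plan to help {partner_intention['agent']} "
              f"achieve '{partner_intention['goal']}'")

human_side = Assistant("operator-assistant", "keep line 2 throughput")
machine_side = Assistant("scheduler-holon", "minimize tardiness")
machine_side.accommodate(human_side.share_intention())
```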

Recent works have shown that team Situation Awareness can be increased when humans cooperate with assistant machines equipped with such cooperative abilities. Examples have been shown in several application fields: air traffic control, fighter aircraft cockpits and human-robot cooperative rescue actions [20].

5 Conclusion

The aim of this paper was to raise awareness of the risk of keeping the "magic human" assumption hidden and taken for granted when designing HIMCoS and, more generally, industrial control systems with the human in the loop as a decision maker.

The suggested human-centred design aims to reconcile two apparently antagonistic behaviours: the imperfect human, who can correct and learn from his errors, and the attentive and inventive human, capable of detecting problems and bringing solutions even when they are difficult and new. With a human-centred design approach in IMS, human capabilities can be amplified by recent ICT tools that support decision and action. The integration of such tools raises the question of the level of automation, since these tools could become real decision partners and even real collaborators for humans [21].

References

  1. Cardin, O., Trentesaux, D., Thomas, A., Castagna, P., Berger, T., El-Haouzi, H.B.: Coupling predictive scheduling and reactive control in manufacturing hybrid control architectures: state of the art and future challenges. J. Intell. Manuf. doi:10.1007/s10845-015-1139-0 (2016)
  2. Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P.: Reference architecture for holonic manufacturing systems: PROSA. Comput. Ind. 37, 255–274 (1998)
  3. Leitão, P., Restivo, F.: ADACOR: a holonic architecture for agile and adaptive manufacturing control. Comput. Ind. 57, 121–130 (2006)
  4. Barbosa, J., Leitão, P., Adam, E., Trentesaux, D.: Dynamic self-organization in holonic multi-agent manufacturing systems: the ADACOR evolution. Comput. Ind. 66, 99–111 (2015)
  5. McFarlane, D., Giannikas, V., Wong, A.C.Y., Harrison, M.: Product intelligence in industrial control: theory and practice. Annu. Rev. Control 37, 69–88 (2013)
  6. Lee, J., Bagheri, B., Kao, H.-A.: A cyber-physical systems architecture for Industry 4.0-based manufacturing systems. Manuf. Lett. 3, 18–23 (2015)
  7. Gaham, M., Bouzouia, B., Achour, N.: Human-in-the-loop cyber-physical production systems control (HiLCP2sC): a multi-objective interactive framework proposal. In: Service Orientation in Holonic and Multi-Agent Manufacturing, pp. 315–325. Springer (2015)
  8. Zambrano Rey, G., Carvalho, M., Trentesaux, D.: Cooperation models between humans and artificial self-organizing systems: motivations, issues and perspectives. In: 6th International Symposium on Resilient Control Systems (ISRCS), pp. 156–161 (2013)
  9. Oborski, P.: Man-machine interactions in advanced manufacturing systems. Int. J. Adv. Manuf. Technol. 23, 227–232 (2003)
  10. Mac Carthy, B.: Organizational, systems and human issues in production planning, scheduling and control. In: Handbook of Production Scheduling, pp. 59–90. Springer, US (2006)
  11. Trentesaux, D., Dindeleux, R., Tahon, C.: A multicriteria decision support system for dynamic task allocation in a distributed production activity control structure. Int. J. Comput. Integr. Manuf. 11, 3–17 (1998)
  12. Valckenaers, P., Van Brussel, H., Bruyninckx, H., Saint Germain, B., Van Belle, J., Philips, J.: Predicting the unexpected. Comput. Ind. 62, 623–637 (2011)
  13. Millot, P.: Designing Human-Machine Cooperation Systems. ISTE-Wiley, London (2014)
  14. Pacaux-Lemoine, M.-P., Debernard, S., Godin, A., Rajaonah, B., Anceaux, F., Vanderhaegen, F.: Levels of automation and human-machine cooperation: application to human-robot interaction. In: IFAC World Congress, pp. 6484–6492 (2011)
  15. Schmitt, K.: Automation's influence on nuclear power plants: a look at three accidents and how automation played a role. In: International Ergonomics Association World Conference, Recife, Brazil (2012)
  16. Endsley, M.R.: Toward a theory of situation awareness in dynamic systems. Hum. Factors: J. Hum. Factors Ergon. Soc. 37, 32–64 (1995)
  17. Trentesaux, D., Moray, N., Tahon, C.: Integration of the human operator into responsive discrete production management systems. Eur. J. Oper. Res. 109, 342–361 (1998)
  18. Sheridan, T.B.: Telerobotics, Automation, and Human Supervisory Control. MIT Press (1992)
  19. Sentouh, C., Popieul, J.C.: Human-machine interaction in automated vehicles: the ABV project. In: Risk Management in Life-Critical Systems, pp. 335–350. ISTE-Wiley (2014)
  20. Millot, P.: Cooperative organization for enhancing situation awareness. In: Risk Management in Life-Critical Systems, pp. 279–300. ISTE-Wiley, London (2014)
  21. Millot, P., Boy, G.A.: Human-machine cooperation: a solution for life-critical systems? Work 41 (2012)

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. LAMIH, UMR CNRS 8201, University of Valenciennes and Hainaut-Cambrésis, Valenciennes, France
