Social Science Contributions to Engineering Projects: Looking Beyond Explicit Knowledge Through the Lenses of Social Theory

With this paper, we will illustrate the synergetic potential of interdisciplinary research by demonstrating how socio-scientific perspectives can serve engineering purposes and contribute to engineering assignments. This especially concerns eliciting knowledge models and ergonomically optimizing technological design. For this purpose, we report our findings on socio-technical arrangements within smart factory research initiatives that are part of the IMPROVE project. We focus on the systemic interplay between the formally modelled plant, its actual physical state and the social environment. We also look at how operators, as parts of the plant’s environment, adapt themselves and thereby develop their own particular work culture. We then integrate these findings by reconstructing this operator work culture as a specific way of performing, accounting for and addressing particular issues. We enhance these findings through the lenses of recent concepts developed in the field of social theory, namely the praxeological understanding of tacit knowledge, systems theory differentiations and an actor-network-theory understanding of human-machine agency. Applying these concepts from social theory, we revisit our empirical findings and integrate them to provide context-sensitive, socio-scientifically informed suggestions for engineering research on knowledge models concerning HMI design.


Introduction
Imagine you are in a factory and hear someone (or something) say: "Although this seems to be a batch problem, you should rather consider checking the rolls". With the contemporary smartification of industries, you cannot know whether you are hearing a human operator or a decision support system (DSS) speaking. The machines and devices in smart factories are not mere high-tech tools; they connect, cooperate and synergize with human actors. They are no simple instruments of monitoring and control but a nexus of human-computer interaction (HCI), enhancing the capabilities and capacities of both human operators and the industrial plant complex. This development raises new concerns, issues and objectives for human-machine interface (HMI) design: How to organize changing knowledge models? How to integrate informal intervention suggestions and formal automation? How to introduce new tools like smart glasses, tablets or smart watches? And, not least, how to alter rather passive tools of knowledge assessment like dashboards and operating panels so that they include a close, pro-active entanglement of these features, rendering the smart factory, in terms of HCI, sensorimotor and self-reflexive? However, these are well-known issues of HMI engineering, especially concerning smart, recursively learning software and human-machine cooperation. Topics such as responsibility management, the avoidance of human errors and overcoming bounded rationality have been intensively discussed by HMI engineers and HCI scholars [1] [2]. With the critical momentum created by HMI software's ability to learn from its users' behavior, human-centered engineering is required and must be adjusted to the new assignments deriving from this significant technical shift: there are no longer any clear boundaries between passive technological tools and active users.
While operators have developed their own particular ways of handling (their) HMIs, these interfaces themselves are beginning to be designed to act in a creative style, since they are the media technologies through which new patterns of knowledge and expertise can emerge and evolve.
Because of all this, new forms and sets of expertise for interdisciplinary research are being invoked. We claim that the social sciences can contribute a suitable perspective and methodology that goes beyond the canon of cognitive science, psychology and ergonomics that has so far been used to support and enhance HMI research. The social sciences specialize in analyzing and understanding integral shifts of technology that affect not only work routines but also organizational necessities concerning entrepreneurial, economic and personnel issues. Furthermore, the social sciences, sociology and Science & Technology Studies (STS) in particular, are designed to tackle the kinds of complex entanglements that resist technical approaches of proper explication, operationalization, formalization and trivialization. Patterns and meanings of complex work routines and conduct cannot be modeled by means of data mining, for they cannot be accurately operationalized in numbers.
Being focused on the study of societal phenomena, the social sciences possess a specific repertoire of theories, concepts, methods and strategies for understanding the diverse and simultaneous dynamics of practices and situations that draw together knowledge, awareness, technology, incorporated routines, work cultures, organizational frameworks and business requirements. There are specialized disciplines for each of these aspects; however, when it comes to such complex assemblages with opaque effects and conditions, the social sciences can offer mediation and informed intuitions to supplement design processes and complement implementation scenarios.

Introducing our role(s) as social science researchers

What do social scientists do?
With the turn from HCI to human-machine cooperation, it is necessary to draw stronger epistemological connections between the individual actors, experts and stakeholders involved in and affected by HMI development and assessment. Since the social sciences specialize in questioning the things that are taken for granted, they can help bridge the assumptive bounds that seem to analytically separate them. This conduct of questioning and looking out for the danger of implicit ontologies is accompanied by a sensitivity for latent, implicit structures, practices and knowledges. In the social sciences it is common to conceive of 'knowledge' in plural terms because there are many different kinds, forms and constellations of knowledge(s). The peculiar perspectives, conduct, thinking and mindset of social science research have already been indicated. So far, we have claimed that this includes capacities which can contribute to engineering projects beyond technological expertise. Socio-scientific perspectives have their own particular ways and strategies of research. Some readers will already have recognized that the social sciences not only have their own epistemological and methodological set(up), but also a specific style of thought and communication. Sociologists in particular tend to produce illustrative and passionate narrations instead of modest reports. This is not a (mere) result of the social sciences being so-called soft sciences, but rather reflects their genuine interest in issues of self-reflection and self-involvement [3]. Hence, we tell the story of our particular scientific role within and experiences with the IMPROVE project, for our experiences are a great part of, or at least a complement to, our research material and practices.
Social science research is empirical research [4]: it looks for the prerequisites of (inter)actions and practices, and it reconstructs seemingly non-social entities like technology as deeply entangled with social and cultural path dependencies and events. We therefore look for social artifacts (like documents, reports, communications) as well as for social processes like work practices, decision making, etc. Sociology stresses terms like practices to highlight that conscious and rational modi operandi are the exception, not the standard of how actors act. From this perspective, the social and our everyday lives mostly consist of latent structures and behavioral routines that are usually reflected and reasoned about only retrospectively, not in advance. Thus, the social sciences have developed an elaborate terminology around latency and implicit patterns that sheds some light on meanings embedded in social practices like work routines and HCI beyond the explicit dimensions of phenomena and data.
Another distinctive feature of empirical, socio-scientific research is that the social sciences regard qualitative research as sufficient for a certain subset of questions. Such research is characterized by small sample sizes, highly diverse data types (e.g. observations, documents, interviews) and an iterative, open (i.e. not fully formalized) methodological conduct. We organize, reflect on and report our research process using the technique of memoing. Memos are a specific type of data: they are research notes that help to organize further research steps and which reconstruct the (subjective) research project to compensate for this sociologically inevitable subjectivity. We process these data by coding, i.e. clustering different semantic commonalities. These commonalities are studied by means of comparison, organized by structural features of the smallest and greatest similarities/differences. As a result, we identify central concepts hidden in the data and codes, which then structure our account of the phenomena in question. Social scientists handle the given lack of standardization through a theoretically informed and detailed description of how they reach their conclusions. Over the course of the project, we collected three ethnographic meeting protocols, one complementary ethnographic protocol of the interview situation with our HMI partners, 29 project-internal documents (mental models, project plans, communication, HMI drafts, etc.), eight memos and, both elicited by our HMI partners, three operator interviews and eight technician interviews. Our designated role within IMPROVE was to investigate and reflect on the socio-technical arrangements involved with or stressed by the project's technological development or by smart factory environments. We were commissioned to derive a methodological toolkit from our research experiences, intended to work as a ready-to-use methodology for further inquiries into socio-technical arrangements.
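The coding and comparison steps described above can be pictured with a deliberately simplified sketch. All interview segments, code labels and the grouping criterion below are hypothetical and serve only to illustrate how codes shared across segments surface candidate key concepts; actual qualitative coding is an interpretive, not a mechanical, procedure:

```python
# Illustrative sketch only: coded interview segments are clustered by shared
# codes to surface semantic commonalities across the material.
from collections import defaultdict

# Hypothetical segments, each tagged with a set of codes.
segments = [
    ("Everyone solves this their own way", {"particularity", "routines"}),
    ("You just feel the film tension",     {"sensitivity", "tacit"}),
    ("I check the rolls first, always",    {"routines", "responsibility"}),
    ("You smell when a batch is off",      {"sensitivity", "tacit"}),
]

# Invert the tagging: which segments share which code?
by_code = defaultdict(list)
for text, codes in segments:
    for code in codes:
        by_code[code].append(text)

# Codes carried by more than one segment hint at candidate key concepts.
candidates = {c: txts for c, txts in by_code.items() if len(txts) > 1}
for code, txts in sorted(candidates.items()):
    print(code, "->", len(txts), "segments")
```

In our case, such comparison across the material is what eventually condensed into the key concepts "operators' work culture/particularization" and "explicity/implicity" discussed below.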
Ultimately, this is a methodological conundrum, because socio-scientific methods often require practical training and can barely be standardized. Yet this was also an opportunity to develop strategies and precedents for interdisciplinary diplomacy [8]. For this purpose, we developed and set up a preliminary, socio-scientifically informed methodology that is formulated in a propaedeutic manner and takes into account discrepancies of disciplinary perspectives, demands and matters of concern. This toolkit is thus rather a how-to-become-socio-scientific than a how-to-do-social-science. Furthermore, it was our task to design a learning center for operators of smart factory plants. For this purpose, we made specific didactic suggestions based on our own research on field-specific socio-technical arrangements and drafted tentative training scenarios and tools. Again, we brought in a dash of sociology, since we proposed not a positive, finalized learning center but a recursive re-learning center that would reflect the dynamic developments of both the usage of our toolkit and the matters of concern.
We started our participation in the IMPROVE project with a review of project documents and the industrial engineering literature. We revisited the industrial sociology literature and papers on STS and the sociology of technology to deepen and broaden our understanding of the project's specific topics, issues and perspectives. We focused on requirements engineering, knowledge elicitation and modelling, as well as on HMI and DSS in smart factories and issues of human error and responsibility accounts. We thus focused on the role of tacit knowledge. By doing this, we discovered the fundamental discrepancy between engineering and sociological terminology mentioned in chapter one. We stressed the sociological or praxeological term tacit knowledge, which explicitly encompasses non-explicit (and even non-explicable) knowledges (e.g. of body movements, sensing practices or latent structures and interrelations of communication). Hence, we concentrated our research efforts on the elicitation and exploitation of these technically neglected aspects of tacit, implicit practice knowledges.
We assisted and consulted on the elicitation process and post-processed the empirical data (interview recordings, mental models, etc.). Examining these empirical data, we identified central socio-technical arrangements. We also documented the project progress meetings in a socio-scientific manner to gain our own understanding of engineering mindsets, assignments, communication and organization.
Methodologically, our research case produced two key concepts: "operators' work culture/particularization" and "explicity/implicity". These key concepts include codes like responsibility, routines, sensitivity and bounded rationality of operators, several industrial assignments, and the boundary between the formal, technological plant and the vast, vibrant plant environment (e.g. batch quality, machine quality, environmental parameters, organizational amendments, economic demands and limits). Thus, the physical plant itself is not identical to its formal, technological and scientific model, a technical discrepancy that is continuously bridged in terms of actualisation and contextualisation by human, social praxes. It is this realm beyond automatization which renders human operation necessary (aside from normative regulations for supervising high-risk processes). However, we wanted to avoid complacent sociological research that would exclusively follow the essential interests of our discipline. Therefore, we tried to work more towards applicable results and strategies of interdisciplinary diplomacy because these two assignments are interrelated: while interdisciplinary diplomacy is an applicable contribution, playing (with) the role of an applied science researcher supported our immersion in the disciplinary realms of our project partners. The results of this (re)search for a synergy of application orientation and mediation will be explained and illustrated in the subsequent chapters.

Empirical findings on socio-technical arrangements in the HMI-supported operation of smart factory plants
First of all, in social science, and especially in qualitative research, theory is not always presented before empirical analysis. This is because we distinguish between empirical theory and social theory. Social science, and quantitative research in particular, has empirical theories that have to be taken into account in advance in order to render deductive tests or explanations feasible. Qualitative social research, however, applies so-called social theory, which consists of non-empirical, theoretical considerations that give the social scientist perspectives, concepts, terms, suspicions or connections that can be used to organize the study and its findings according to the chosen social theory. This is a necessary part of socio-scientific research, since social phenomena are vastly complex and entangled in terms of synchronicity and overdetermination. Thus, classic deductive-nomological models are insufficient, and simple, empirically grounded statements will fail to organize socio-scientific knowledge and facts. Hence, social scientists apply similarly complex theoretical considerations in advance in order to identify significant data and draw plausible connections beyond everyday-life plausibilities. However, social theory is not the absolute first step of social science activities. It is rather integrated in an iterative research cycle, as also described by Grounded Theory, one of the earliest and most widely disseminated proposals for doing socio-scientific research in a way that generates theory from empirical data [9]. Such grounded theories are always, somehow, associated with social theory assumptions. Eventually, however, they must be integrated by means of broader social theories. Hence, social theory consists of both the presumed concepts that render socio-scientific observations or interpretations feasible and the conclusive integration of findings in a common terminology and conceptual perspective.
Nevertheless, we will now begin with a report on our empirical findings which will then be enhanced by social theory accounts. These social theories will elevate our qualitative findings on an analytical level of deeper understanding that can provide insights for technical design, planning and organization issues.
Concerning socio-technical arrangements, we encountered two significant features of human-operated industrial production. There is 1) a particular relation between a machine's formal system and its unstable environment. The formal machine system is the explicit, technological model that grounds any automation or overall technological design, e.g. the correlation between orifice width and film diameter, while the unstable environment comprises all those qualities which elude the idealized models due to their complexity (e.g. climate variations, machine deterioration, market fluctuations). Hence, this complex environment is composed of physical and social entities and processes. In addition to this, operators have established 2) a specific kind of work culture that derives from the intersection of historic, socio-economic subjectivity and their concrete task of flexibly operating these machines. We have already indicated that this is a necessity resulting from the discrepancies between formalized technology and actual materiality.
Following our experience with and analysis of the operator interviews, industrial plant operators have developed a work culture centered around particularity. This is no arbitrary quality, but a strategic outcome of the tension between the machine's formal system and its unstable environment. By work culture, we mean the self-descriptions, accountings and practical structuring of (re-)actions. We call it culture because it is a contingent result of adapting a specific (organizational) role within the given formal and practical limits of operators' work. While this work culture re-addresses itself as being particular to every individual, it organizes these individuals within their common situation through cultural presets of self-conception, praxis and perception. Thus, operator self-descriptions integrate an individualization of their role with the given requirements for common routines, semantics, etc. In tracing this work culture, we have learned that operators describe their own tasks as highly particular and often as incomprehensible from the exclusive perspective of formalization. Thus, the particularity produced by this work culture is a social reaction of personal and physical individuality confronted with bounded-rationality operating situations. This particularity results both from plain psycho-physical, factual individuality and from the reaction to complex situations that cannot be reasoned about beyond (individual) intuition.
However, there are also strategic reasons for operators to avoid describing their work as trivial so as to articulate their (technical) indispensability. Nevertheless, the resources of this self-presentation are no artifact of this strategy. Particularity within operating is rather a result of organizationally given frameworks (e.g. which adjustments are permitted to be done, which production assignments are set) and the actual, material complexity of the operated plant itself.
Thus, the operators who were studied produced a conglomeration of individualization figures, e.g. comparisons to driving a car (although it is a common practice, everyone has their own particular way of doing it) and explicit demarcations of the individuality and particularity of their work: everybody has another way of handling this problem, everyone has their favorite way of solving such situations and issues, etc. Or, as they put it: ask any operator about a specific problem and you will get at least one opinion, but probably more than one.
However, in contrast to these claims, operating work is partially more standardizable than depicted in these interviews, a fact which drives the further automatization of production plants. Automation is easily regarded as a threat of job loss by industrial workers, a perception that can result in the rejection of automation innovations. Although this rejection is another issue, its practical results for operator practices and self-conceptions have to be taken into account. Nevertheless, there is a particularity which exists through the plant's realm of opaqueness, where different solutions can be applied, and thus different problems are identified. As a result, particularity is no simple self-display but part of the operators' practical self-specialization. However, there is a communality of operating which can be found on several levels (facility-wide, company-wide, etc.). This is indicated by the operators' shared terminology and conception of the plant. Beyond these facility- and individual-ordered cultures, other role-specific cultures and discrepancies can be found. These are marked by conflicts between technologists and operators in particular. There are, in general, latent conflicts and tensions, expressed by the plain fact of how important it was, during our research, to guarantee that our elicitations were not and could not be used for operator assessment. These conflicts represent a basic tension between educated, abstract and formal expertise on the one side, and practical, application-grounded expertise on the other. Intriguingly, during the experimental production scenarios, operating experience could not be successfully applied (as those scenarios were exceptional), nor could the formal models be completely confirmed. These conflicts can also be understood as tensions between those who design new technology and install and initialize plants, and those who maintain plants and keep them running.
From our STS perspective, this is a particular socio-technical conflict of technologically shaped and undergirded social order: e.g. the responsibility of accounting for or reflecting organizational structures by means of technology access and the particular meanings that it has for its users. [10] [11] Thus, the tensions that appear when discussing a plant's reality happen between the two poles of formally understanding a machine complex and practically operating it in running scenarios.
These examples and observations are highly redundant in our given data. They applied to both case studies and they were repeated with each interviewed and observed operator. Even the one exceptional interviewee, an operator who was regarded as someone who shared the technological expertise of technologists and was familiar with both engineering expertise and practical efforts of operating, confirmed this particularity: whenever it comes to practically operating a plant, particularity is required or just results from the particularity of events.
Significantly, operators did not stress their particularity when describing their individual skill-sets. Instead, they particularized the description of their practices when it came to tacit knowledge requirements. They described their attentiveness towards the plant itself as multi-sensory: a feeling for the temperature and tension of the produced film, and even olfactory and acoustic signals, were taken into account. Hence, their particularity cannot be reduced to mere expressions of an individualization discourse or a responsibility-handling strategy. It is first and foremost their practical way of operating plants beyond formality.
However, the operators we studied often told us about their indispensability, insisting that not everything could be automated. This is, obviously, an expression of their own (economic) interest, highlighting their own importance within the industrial processes. Therefore, they often referred to the necessity of their work in terms of the limits of automatization: human work, they claim, cannot be completely replaced by software. From a neutral standpoint, automatization limits imply much more than the concerns uttered by operators. These limits are less a purely technological limitation than the economic limits of technological development (e.g. sensors) and legal or organizational limits (e.g. accountability requirements). Requirements that are not covered by contemporary software infrastructures include problems of accountability, responsible learning, etc. However, when it comes to tacit knowledge, the socio-technical arrangements of the plant itself and its human operators have their own path dependency and their own evolution: they are adapted to each other, and their development cannot be understood without each other. Anything else would require a radical redesign.
This incorporated and situated tacit knowledge contains many valuable capacities that could, in principle, be extracted from the field's practices. However, it cannot be transformed into formal, explicit knowledge and remains, thus far, a human peculiarity and privilege. While the use of machine and deep learning technologies enables machines to reach insights beyond human scales, they are still unable to smoothly integrate themselves into an ever-changing environment. This is a result of machines' strict mode of perception and of limited real-world training and test data. While their cognitive processing is impressive, they often ignore events and data that are outside their preset scope. Hence, they fail in terms of human perception, since we are used to understanding and assessing everything from our very own perceptive scope. Nevertheless, this human perceptual worldliness must not lead us to neglect the greater sensory opportunities that machines present. However, neither engineers nor operators seem to regard human work in this way; they merely address an opaque complexity to render their industrial value and indispensability plausible.

Social Theory Plugins
We will briefly present three selected social theory perspectives in connection with our empirical findings. Since we want to avoid a simple eclecticism of theoretical possibilities, the selected concepts presented below are all commensurable with each other and will be reflectively interconnected in the final theoretical subsection. Three different social theories were chosen for their specific theoretical and terminological accounts: systems theory (system/environment), practice theory (tacit knowledge) and actor-network-theory (agency). We chose these conceptualizations because they each feature particular sensitivities towards the empirical field of our research.
First, we will introduce systems theory, since its differentiation of system and environment can organize the complexity of the socio-technical arrangements identified in our research and can take into account the difference between the assumed, formalized concept of an industrial plant and its actual physical existence.
Second, we will tackle tacit knowledge which already is a theoretical account of practice theory (or: praxeology). Tacit knowledge fills in the gap between formalized rules and models and the actual physical entities. It is a modus operandi which we encountered directly in our research field. With the lens of praxeology we will thus enhance our empirical findings.
Third and last, we will introduce actor-network-theory (ANT) and its theoretical account of "agency", which focuses on the relations that make action and behavior possible at all. Hence, we will use the concept of agency to shed (more) light on the difficult relationship of human-machine interaction and co-operation (in the descriptive, not the normative sense). Such a concept of agency will allow us to integrate the other two social theories into one analytical canon, for the concept of network-driven agency touches both systemic complexity and the body- and physicality-oriented praxeology.
However, note that although we try to create a coherent theoretical setup for the analysis of our findings, it works rather heuristically and in terms of applied science or (socio-scientific) technique. From a genuine sociological point of view, however, this needs further theoretical elaboration and empirical substantiation. What we present here is, so far, rather an illustration and proof of concept.

A systems theory of (smart) factories
The social theory accounts we introduce first are 'system' and 'environment' as they are defined by contemporary sociological systems theory. It is a conceptualization that focuses on process, time and distributed logics of cognition and perception. We will begin with a basic introduction of the central systems theory terminology: autopoiesis, system and environment. We will also highlight the cybernetic design of systems theory and show how our empirical findings can be organized in terms of autopoietic systems and their environments. This organization will then be used to structure our empirical findings and, in particular, to identify the complexity requirements they contain.
The sociological systems theory that we stress here might remind engineers of cybernetics. There are, by design, many similarities and congruencies, since sociology's systems theory is, basically, sociologically applied cybernetics [12]. From a technological point of view, production plants are operatively closed systems and appear identical to their formal depiction. Any external event or stimulus that occurs within the scope of this formal setup of the plant will be reproduced as a part of the plant itself. Nevertheless, formal models can be applied, and smart factories are even capable of further processing those data points into their own behavioral model. When they detect an increase in heat, they can compensate in order to maintain the plant's functional homeostasis. However, what suffices for the formal system might differ from the needs of the actual physical plant. Required transformations run through the different systems with their frames of reference, e.g. formal correctness and practical viability. This reference discrepancy might cause errors or confusion, but a forced breaching of the system bounds would lead to a malfunction: e.g. halting production for the sake of formal comprehension. This is an error typical of bureaucratic processes, which tend to ignore non-bureaucratic semantics and logics.
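The homeostasis described above can be sketched in a deliberately minimal way: the formal system "sees" an external heat disturbance only as a change in its own internal state and compensates toward a setpoint. All names, numbers and the proportional rule below are hypothetical and purely illustrative of the cybernetic idea, not of any actual plant controller:

```python
# Minimal illustration: an operatively closed system reproduces environmental
# disturbances as internal state and regulates itself toward a setpoint.
def compensate(temperature, setpoint=200.0, gain=0.5):
    """Proportional adjustment pulling the state back toward the setpoint."""
    return gain * (setpoint - temperature)

temp = 200.0
history = []
for disturbance in [0.0, 8.0, 8.0, 0.0]:    # heat entering from the environment
    temp += disturbance + compensate(temp)   # the event exists only as system state
    history.append(round(temp, 2))

print(history)  # → [200.0, 208.0, 212.0, 206.0]
```

The point of the sketch is systems-theoretical rather than technical: the disturbance never appears "as itself" inside the loop, only as a deviation the system can process under its own code.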
Systems theory calls such selective reproduction processes autopoiesis [13], since systems (e.g. plants) are marked and constituted by their own boundaries, which are constantly reproduced by the decision regarding whether something is part of the plant (or the system) or not. The defining feature of sociological systems theory is that any event might be interpreted as an event of autopoiesis. This way, interconnections between entities become more complex, but environmental differences of meaning can be integrated and a system's stability can be explained by understanding its boundaries. Furthermore, unintentional effects can be traced back to their systemic source so that amendments do not lead to infinite sequences of (re-)irritating the treated system. This radical systems theory conceptualization was originally developed in neurobiology and has been intensively and extensively redesigned in terms of social theory by Niklas Luhmann [14].
The technological design, development and maintenance of a plant are, however, second order cybernetics: a systemic observation that takes into account both the differentiated system and its acts of differentiation, including their residues. Adjusting and further developing components is a reflection on the functionality of the whole system, and operating is even more a matter of highlighting the external overview of the plant in terms of restoring its formal normality and applying (inter-)actions beyond the formal scope. However, with the development of formal models and their grades of formalization, system boundaries are set. It is important to recognize that not only is a plant's common environment (socio-economic or material environments, i.e. markets, climate, batches, etc.) beyond the systemic scope of the plant, but so are parts of the plant that seem to belong to it, e.g. randomly deteriorating components. Furthermore, overdetermined correlations are part of the system's environment insofar as they cannot be rationalized and raise the risk of the system's misunderstanding, i.e. of its being unable to continue its behavioral reproduction (e.g. if intervening is not optional but the chosen measure is only possibly correct).
Being aware of the systemic boundaries of a plant supports its conceptualization with regard to socio-technical tasks such as organizing responsibilities and accountabilities and handling ergonomic demands within different, complex environments. It signifies organizational and social demands as well as technological limitations that might be complemented through human work. Human operating work can thus be regarded as a complementary system that reproduces itself from everything that lies within the re-entry of the plant's system but falls outside its formal scope. 'Re-entry' describes how, for any system, it is necessary not only to differentiate between stimulus acceptance or non-acceptance (e.g. a reflection that might indicate an unclean film) but also between those events that concern this differentiation and those that do not (e.g. a bird flying over the facility).
With this, we encounter another feature of the systems theory perspective: even though cybernetics was, traditionally, a philosophy of control, modern systems theory no longer uses the term 'control' because system boundaries render immediate control impossible. Instead, regulation is defined as a specific, evolved arrangement of irritations. This requires interface systems that arrange different irritations (as action resources) in a way that renders operating and maintenance feasible. For smart factories, this also requires an HCI that can organize, foster and synergize knowledge from this ecology of irritations.
This systems theory perspective can, however, structure the different areas that are at stake when it comes to technological, organizational or economic design. Regarding HMI and other technological design issues, HMIs serve as a materialized systems interface. While this theoretical perspective depicts everything as inter-systemic irritations, an HMI can take insights about a system's order into account by structuring its own knowledge and decision management in a way that is informed by identified system boundaries. This corresponds, for example, with required rule behavior, which addresses the physical plant as the plant's systemic environment, etc. Note that the software/hardware and algorithms behind an HMI can only work as second-order cybernetics if their system environment is both computably commensurable and able to observe its first-order systems under its own system code. In other words, second-order cybernetics software requires explicit and expectable objects. However, second-order cybernetics can easily be attained by integrating human actors. This can even be enhanced through HMI or DSS technology that is specifically designed not as a second-order observer but as a second cybernetic entity of order and organization, providing feedback to its surrounding systems (which are environmental towards the HMI itself) and shaping irritations in terms of translation by supplementing semantic interaction homeostasis, i.e. by identifying signals and interventions, or means and results. As a constructive act in itself, systems theory analysis reflects the engineers' own adaptation towards their use cases and technologies, thus integrating constructive reflection into the theoretical framework of design: taking into account system functions and boundaries without mistaking them for the differentiation of explicit and implicit data, knowledge, processes, structures, etc.

Tacit knowledge beyond explicitness
The role of tacit knowledge for social science and engineering purposes has already been highlighted above. There is, however, a specific, elaborated social theory account, known as 'practice theory', which tries to reconstruct and understand social events and continuities as the result of incorporated knowledge and routines, bodies and their (physical) environment. Hence, we will begin with an illustration of how focusing on the corporeal dimension works as a social science epistemology. After briefly describing the concept of tacit knowledge and its relation to routines, rules and explicit knowledge, we will elaborate on the tacit knowledge we found during our empirical research and on the conclusions that can be drawn by using the lenses of practice theory.
In contrast to engineering that is informed by ergonomics or cognitive science [15], sociologists make use of a more radical concept of implicit knowledge. While the engineering of knowledge modelling and knowledge-driven data mining understands implicit knowledge as proto-explicit knowledge hidden in someone's mind, the sociology of practice labels this hidden knowledge, which has yet to be elicited, as explicit knowledge. Engineers thus transform not-yet-explicit knowledge into explicit knowledge using methods that rely solely on knowledge-orientated protocols and are designed to literally make respondents tell what they think, which reinforces the assumed boundary between explicit and implicit knowledge. Such methods suffer from the fact that respondents usually cannot do their work or answer in that way, because they do not actually think consciously about what they do and are further strained by thinking about what they might think while doing their work. What is easily mistaken for the implicit knowledge of a practice is rather explicit knowledge that is reconstructed in hindsight (as most thoughts are anyway); the actual tacit knowledge of a practice lies in its actual process and gets partially lost during its reconstructive explication.
Praxeological social theory tackles this issue by differentiating knowledge in terms of its form: can it be transmitted through textual media? If not, it is implicit or tacit knowledge. It is knowledge that is completely dissolved in unconscious bodily practices or in emergent behavior patterns (e.g. how we use our native language), or that is generally explicable but cannot be learned by using its explicit form (e.g. you cannot learn to ride a bike by merely observing it or reading a book about it) [16]. Since engineering is accustomed to processing explicit data, which can often be easily quantified or brought into clear logi(sti)c schemes, there is, quite frankly, a whole new world of knowledge out there.
Tacit knowledge can be elicited by qualitative observations and techniques of praxis immersion. While latent, emergent interaction patterns might be comprehended through mere long-term observation, some tacit knowledge might require a rather detailed, videographically supported method. Furthermore, some knowledge can only be elicited through participation in the practices being studied. A large set of methodological suggestions and readjustments of perspective within sociology deals with eliciting, analyzing and interpreting tacit knowledge for empirical science purposes [17]. HMI for operators in smart factories would therefore benefit from such qualitative research. Socio-scientifically inspired qualitative research might lack objectivity and generalizability, but it can nevertheless make a significant difference in the quality of technology design.
However, tacit knowledge is ubiquitous and can be taken into account for all kinds of purposes. Any explicit knowledge requires whole sets of tacit knowledge, such as grammar or an understanding of specific terminology. Much explicit knowledge is already contextualized, is therefore not regarded as involving tacit knowledge, and can be taken for granted. When it comes to contextual practice, however, tacit knowledge is much more significant and visible (since it requires specific incorporated knowledge that might not be shared). This especially concerns rule-based practices like checking a machine's condition and being able to focus on particular, distinct features. Following this relation of tacit and explicit knowledge, acquiring tacit body knowledge can indeed be supported by explicit knowledge provision. It may be necessary to present this explicit support in media that are not text-based, like videos or physical support (from a teacher or trainer). It is worth noting that the learning and training of tacit knowledge can be illustrated and investigated, again, by qualitative research. Therefore, for the commissioned learning center and training scenarios, we suggested a relearning center that would learn from its own practice-objective relations and their developments.
As discussed in the section on our empirical findings above, particularity is stressed and narratively constructed by operators whenever it comes to the application of tacit knowledge. With their particularity narrative, they can address those unspeakable things which defy plain explication. Since such particularized content systematically falls outside the methodological scope of classic knowledge acquisition, the indicated content is structurally missed. It is important to note that not all tacit knowledge is indicated by communication or by significant events or phenomena. Sometimes tacit knowledge has to be dug out of all the obviousness within a field. The indication of narrative self-description by particularization is in fact a rather field-specific situation, and in this specific case it might even point to the most significant issues of practical knowledge. However, a lot more can be extracted from the field by observation methods or video analysis (where possible). The engineering differentiation of explicit and implicit knowledge is orientated towards explicit knowledge because engineering is focused on use cases. Strong research on a field's tacit knowledge, however, might not only contribute to positive technological developments but might also render observable complementary problem cases that indicate transintentional, perturbing or irritating disturbances resulting from the use cases' positivity. For example, physical interfaces like smart glasses might interfere with safety requirements for wearing helmets, or be generally inappropriate if maintenance also involves dirty and rough work that might damage such equipment. We propose, in contrast, an additional problem-case focus that highlights not the positive technology but rather the frames of reference that situate and perturb socio-technical entities and connections.
In conclusion, HMI and DSS design can profit from this perspective and complementary research. Some content, like rules, can be put into the knowledge- and rule-organizing structure of the DSS, and some issues might even be amenable to ordinal processing. Beyond that, the social sciences can highlight the expertise that lies beyond explicit knowledge and help to conceive of an HMI and DSS that can synergize an operator's particular experience and further enhance their system regulation skills by providing feedback in the form of successful and handy solutions (e.g. by applying a recommender system). Or, vice versa, an HMI and DSS would be able to warn of reported intuitive deceptions. Socio-scientifically informed research on tacit knowledge can also be applied for the purpose of automatization. Tacit knowledge not only takes additional, complementary data into account; it also reveals the complexity of practices that seem to be simple routines, and it can thus contribute to modeling processes for automatization purposes. [18]
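As a toy illustration of the recommender idea mentioned above: a DSS could log which operator interventions succeeded under which observed symptoms, and later rank candidate interventions for similar situations. The class, the symptom labels and the scoring rule below are purely hypothetical, a minimal sketch rather than an implementation from the IMPROVE project.

```python
from collections import defaultdict

class InterventionRecommender:
    """Hypothetical sketch: rank previously successful operator
    interventions for situations with similar fault symptoms."""

    def __init__(self):
        # fault signature (frozenset of symptoms) -> action -> success count
        self.history = defaultdict(lambda: defaultdict(int))

    def record(self, symptoms, action, success):
        """Store an intervention outcome reported by an operator."""
        if success:
            self.history[frozenset(symptoms)][action] += 1

    def suggest(self, symptoms, min_overlap=1):
        """Rank recorded actions by success count, weighted by how many
        symptoms the current situation shares with the stored case."""
        scores = defaultdict(int)
        query = set(symptoms)
        for signature, actions in self.history.items():
            overlap = len(query & signature)
            if overlap >= min_overlap:
                for action, count in actions.items():
                    scores[action] += count * overlap
        return sorted(scores, key=scores.get, reverse=True)

rec = InterventionRecommender()
rec.record({"streaks", "batch_change"}, "check rolls", success=True)
rec.record({"streaks"}, "check rolls", success=True)
rec.record({"streaks"}, "adjust temperature", success=True)
print(rec.suggest({"streaks"}))  # "check rolls" ranked first
```

Such a mechanism would make standardization a consequence of operating rather than its prerequisite: the ranking emerges from recorded particular cases instead of a predefined rule set.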

Conceptualizing human-machine agency
In this section we introduce the theoretical account of 'agency' to better understand human-machine interaction. We begin with a basic introduction to Actor-Network Theory (ANT) and pragmatism (as applied by social theory). Since these social theories can appear counterintuitive, we offer typical examples that highlight their analytical and interpretative value. We then integrate our empirical findings into this theoretical account and conclude with a synthesis of the social theory accounts chosen for this study. [19] This last sub-section reflects on the common conception of agency, of the capability of being active or passive. HMI engineering has found that HMI software is not a mere tool that simply enhances its users' capabilities, but rather shapes its users' behavior, which can be risky if human actors do not correct machine errors [20]. Nevertheless, a theoretical perspective is missing that can systematize these insights beyond mere empirical observation. Several theories focus on the issue of action networks, i.e. the interconnected entities that enable the whole network to act, since no single entity would be able to act on its own (for illustration: imagine that, if you were the only thing that existed, there would be nothing towards which you could possibly act). Although it is some decades old now, ANT [21] is still being discussed and has been further developed by many authors [22]. A similar, much older theoretical approach had been developed by pragmatist philosophy [23].
As engineering usually focuses on use cases, its tool conception of technology affirms the common structure of activity and passivity. Insofar as human actors are regarded as detached entities who merely use technological tools, this was at least self-sufficient within the discipline. However, engineering is increasingly taking into account research on humans themselves: humans become objects of inquiry when it comes to technological design and development. Cognitive science, ergonomics and psychology have restructured the conception of humans within engineering and thus require an appropriate theory that makes it possible to overcome the classical differentiation between tools and users, passivity and activity. Emerging engineering fields like human-centered software development demonstrate this demand for a sociologically supplemented knowledge acquisition.
Agency is the result of these very intricate networks. These networks are no simple condition of action; rather, they are intersections of interplay and, in terms of systems theory, accountable as actions. Things also behave in correspondence with each other. There is no so-called "reflex arc", i.e. the hypothesis that a stimulus, the cognitive processing of that stimulus and a response are sequentially ordered and distinct phases of action or behavior. Rather, it is a matter of a sensorimotor nexus [24] that integrates the perceived environment into the organism. Here we find the behavioral description of a cybernetic conception of human action once again. As described above, stability within complex practices derives exclusively from second-order cybernetics that can organize vibrant irritations. Since objective environments react depending on an organism's actions, this also goes the other way around. This means that meaning is, first of all, a direct link between perception, conduct and behavior. If one encounters an emergency, one usually reacts well before one recognizes what is going on, and this is an indispensable feature in terms of evolution.
This also touches on issues of social order; hence, social order might be taken into account in HMI design. There are identifications of situations, of sets of stimuli and responses (which are situationally identical), that are taken for granted or as inevitable. As a result, deviance requires high cognitive effort (like acting weird on purpose). This is an important fact which was already stated by Durkheim [25] and concerns issues of responsibility, behavior control and raising consciousness.
However, a stimulus already requires sensorimotor sensitivity. Perceiving something, e.g. focusing the eyes, is no mere sensory task, but a task of motoric adjustment. The detection of a significant phenomenon (e.g. a threat) requires and involves specific, bodily conduct.
The important lesson of this theoretical conceptualization is that those things we usually discuss as explicit knowledge (e.g. feelings, intentions, reasons, meanings) are not additional intermediators between stimuli and responses but are operationally identical with them. This perspective can be hard to grasp, since this operational identity is only separated in analytical reconstruction, which is what we address as conscious thought. We can, for example, flee spontaneously from a fire, but we will intellectually separate our motives, movements and means in retrospect. Thus, reflexive and practical thought are operationally separated, a fact that must be taken into account for scenarios that combine practical routines and mindful intervention.
There is, however, an analytical meaning in this active-passive conception, which stresses the role of motives and goals: active actors have goals while passive actors have none. Passive actors are therefore very predictable, since they do not decide on or rearrange stimuli and responses (responses are themselves also stimuli, just as stimuli are at the same time also responses). Passive actors thereby seem qualified as means. Nevertheless, within agency networks, goals and ends might be identified, but they do not actually contribute to a plain, objective description of events. Specifically, they might play a role but are superfluous regarding effective outcomes, and in such detailed descriptions they can even lead to infinite regressions of motives. A common example from ANT [26] is cars that feature the security measure of only being drivable if the seatbelts are fastened. In this scenario, the primary motive for buckling up before driving is the motivation to drive itself, while safety is the actual outcome. Such planned networks can nonetheless be manipulated, e.g. one could put the seatbelt behind one's back, but this would not really pay off and, furthermore, it indicates that there are other actors participating, like discourses of law, law enforcement and security. However, safety is no arbitrary outcome, since some engineers have designed the car that way. Again, their motivation might derive from legal prescriptions, economic benefits or specific engineering discourses. Legal measures might be mere reactions to faster-driving cars. This development, again, can be identified as an effect of economic necessities and then interpreted as a result of capitalism, which is, yet again, a result of necessities within a growing structure of work and goods distribution, and so on and so forth. Some technology might seem to have a specific goal and, in the end, turn out to be the result of the motivation of some graduate students to get their PhDs.
In conclusion, this further illustrates the sociological concept of tacit knowledge: lying beyond explicit reflection or consciousness, it renders any action and practice possible and feasible at all. Thus, our chosen theoretical accounts can be integrated. We understand tacit knowledge better when we follow the concept of distributed, synergetic agency and comprehend the adjustment of behavior and thought through cybernetic complexes and nexuses. For example, when we want to remember a thing, we cannot simply decide to do so (even though we are used to this image) but have to write a note in our calendars or knot a handkerchief, since our self-control is not an immediate capacity but a strategic effort in arranging our own environment, rendering likely those (re)actions which we desire. Although the social theories introduced above are quite different, we can present the sensorimotor actor-networks as a certain synthesis of them. Systems theory can be taken into account because the infrastructure and logistics of agency correspond with the inter-systemic irritations between any system and its environment: there is no general, central point of structure or meaning but a vibrant complex of practices and events that are describable less in terms of control than in terms of regulation, resilience and homeostasis.
Consequently, HMI is a large part of human-machine cooperation, and an ontologically symmetrical perspective, as depicted in this section, can help to understand the agency that results from human-machine interaction and can thus inspire HMI and DSS design considerations with ethnographic insights, contemplation and immersion [27]. For IMPROVE, we therefore delivered a qualitative methodological toolbox that grants access to structures of tacit knowledge and to the intricate inter-reactions in HCI scenarios. Again, we cannot produce general knowledge for every (smart) industry scenario, but we can provide generic tools and strategies to handle each case in particular. The theoretical and disciplinary perspective we offered can, thus far, only provide general remarks on the design of HMI and DSS infrastructures, since these have to reflect symmetrical agency: the DSS has to listen as well as speak; just as operators learn to use an HMI, an HMI has to learn to be used by (or to use) operators. Since this also corresponds to the postulated particularity of operator work culture, these are initial, general instructions deriving from our research and reflection. A smart factory DSS must learn from situations and its operators and organize and synergize the given content. A smart HMI and DSS must be coded counterintuitively: it has to produce vibrant dynamics that are then, by necessity, reduced by the particular, practical relation between device and operator. Simply put, it has to start by making 'stupid' requests, because clever software is pre-configured, whereas a smart industry setup has to follow the particular inter-reactions of a particular individual in a particular environment. Standardization cannot synergize this particularity without erasing it. Hence, standardization has to be the consequence of operating, not its prerequisite.
In this way, amplifying the required particularity of particular environments, systems and scenarios can, in the long run, become a viable path towards further standardization, provided there are smart digital infrastructures that can assess, construct and disseminate collective operating knowledge and behavioral patterns.
HMI can restructure this set-up and smooth the transition from traditional to smart industry. HMI and smartification affect behavior and perception, and both organize the interaction between operator and plant. If the HMI contains a DSS that not only offers knowledge and suggestions but also learns from the dynamic interaction triangle between plant, operator and interface, smartification changes can turn from a high-risk challenge into a form of incremental progress. This is an opportunity to take systems theory into account and to order cases, databases and other informative or instructive notes according to the identified systems and environments. Furthermore, it is possible to analytically separate tacit knowledge from complex, semi-implicit rule knowledge. That way, DSSs are able to reduce the overdetermination of case structures: incidents can be differentiated and knowledge organized by the skills required, according to the identified systemic areas. If the physical plant environment interferes with the formal system, HCI can resolve this loss of homeostasis. But operators can also enhance the HMI's knowledge organization (and can enhance high-tech interfaces fed with real-time data) and act as applicators and logisticians of tacit knowledge practices. With most HMIs in smart factories, mechanical adjustments such as adjusting a V-belt, which required almost entirely tacit body skills, have been replaced by interfaces that represent machine movements and statuses through distinct, yet rather abstract, numbers. While this is a very good basis for further automatization, HMI should be designed in a way that makes it possible for operators to handle those issues that lie beyond numbers, or where it would be inappropriate to treat issues in a forceful, formalized way.
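To make the idea of ordering cases according to identified systemic areas concrete, the following sketch files incidents into a case base using a crude rule-of-thumb classification. The three areas follow the distinctions discussed above (formal model, physical plant, social environment), but the field names, classification rules and incident records are invented for illustration and are not taken from the project.

```python
# Illustrative sketch: a DSS case base that files incidents by the
# systemic area identified for them, so that knowledge and required
# skills can be organized accordingly. All fields are hypothetical.

SYSTEM_AREAS = ("formal_model", "physical_plant", "social_environment")

def classify_case(case):
    """Crude rule-of-thumb classifier over an incident record."""
    if case.get("sensor_consistent") and case.get("model_deviation"):
        return "formal_model"        # the formal model itself deviates
    if case.get("component_wear") or case.get("material_batch_issue"):
        return "physical_plant"      # matter beyond the formal scope
    return "social_environment"      # e.g. handover, work culture

case_base = {area: [] for area in SYSTEM_AREAS}

incidents = [
    {"id": 1, "sensor_consistent": True, "model_deviation": True},
    {"id": 2, "component_wear": True},
    {"id": 3, "note": "handover miscommunication"},
]
for incident in incidents:
    case_base[classify_case(incident)].append(incident["id"])

print(case_base)
# {'formal_model': [1], 'physical_plant': [2], 'social_environment': [3]}
```

A real DSS would of course replace these hard-coded rules with learned or expert-maintained criteria; the point is only that explicitly modelling system boundaries gives the case base a structure that plain incident logs lack.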

Summary and outlook
Summing up, we have introduced the specific perspectives, capacities and strategies of social science researchers and presented our own involvement with the IMPROVE project. In the subsequent sections, we illustrated our central empirical findings and interpreted them by means of three different social theory accounts: systems theory, practice theory and agency theory. We concluded the last section with an integrative synthesis of these accounts, implying first instructive suggestions that could be distilled from our analysis. We have depicted our findings concerning socio-technical arrangements within smartified industry. We began by focusing especially on the interplay between operators and HMI, and thus identified those environmental connections that are beyond a formal technological scope and which are located in social areas. Beyond basic environmental factors like climate, market, batch, social norms and organizational bureaucracy, the physical plant is also part of the plant's environment with regard to its technologically purified model.
We then sketched the application of three exemplary social theory perspectives to engineering objects and objectives, thereby demonstrating how they can contribute to engineering assignments and why they are appropriate candidates for interdisciplinary research projects. The concept of tacit knowledge, the practical body knowledge that lies beyond textual explicitness and consciousness, can contribute new fields of knowledge to research concerning HMI design, knowledge models and the ergonomics of technological fields. Systems theory provides a particular perspective that makes it possible to systematically differentiate those factors of a plant's model that lie beyond proper formalization. It is even possible to use a DSS to categorize and synergistically integrate operating cases. The symmetrical, pragmatist perspective of ANT is an appropriate way of thinking through HMI design, since it can respect the interplay between operators and software and understand interaction and coproduced agency, which is, again, a way of integrating tacit knowledge into technological design. Finally, we drew connections between our observations and insights into operators' work culture.
With this paper, we have illustrated the capacity of the social sciences to contribute to engineering purposes, especially concerning HMI development. We have done this by applying a socio-scientific perspective in order to identify the described socio-technical arrangements and by cultivating a social theory-inspired perspective on the data and technological assignments. In doing so, we have thus far set aside the practical methodological issues this implies. However, in order to successfully design an interdisciplinary project that combines engineering and the social sciences, and to be able to take tacit knowledge into account and make full use of social theory's potential, methodological readjustments and advancements are required. Since sociologists of practice already focus on tacit knowledge, they have developed a battery of methods, such as technography [28], specifically detailed interview interpretations and breaching experiments. Yet those methods must be reinterpreted as particular tools for socio-scientifically informed engineering research. Issues of and strategies for field access also have to be discussed and revisited. Hence, as an outlook, we suggest further methodological considerations and practical, tentative experiments in the form of interdisciplinary projects between the social sciences and engineering.