1 Introduction

Human-Machine Interaction is an important subject of study not only among sociologists and philosophers, but also among engineers. These two camps, however, differ significantly in their modes of perception and conceptualisation. This raises the question of an inter- and transdisciplinary exchange between those who are close to the immediate processes of producing and using such machines and those who study these processes on larger scales.

Such an exchange across disciplinary boundaries is not easy. The present article emerged from a lengthy discourse between a software engineer and a philosopher and attempts to add a small contribution to this large debate.

We concentrate on the concept of intentionality and show how this is related to concepts which are central to engineering such as usefulness and functionality, but also to concepts in the humanities such as destination and essentiality. We consider machines as socio-technical systems not only in terms of their function, but also with regard to their production and usage, a perspective that is largely obligatory in engineering and even formally anchored in the VDI standard 3780 VDI (2000).

Another central notion is the concept of a system, which is used in a much more focussed way in the engineering sciences (up to the large field of Systems Engineering) than is apparently the case in the humanities. We contrast this in particular with philosophical work from the Moscow Methodological Circle around G.P. Shchedrovitsky in the 1960s on organisations, which stands in the tradition of Hegelian dialectical thinking and describes in more detail forms of institutionalisation of cooperative action. Shchedrovitsky and his followers later developed these conceptualisations further in the context of a Methodological School of Management Shchedrovitsky (2014) and applied them in their Organisational Activity Games.

Machines are generally made up of parts, hence the categorical part-whole relationship comes into focus. At first, it seems that this part-whole relationship in the understanding of machinic processing extends only within the limits of the Artificial. But where is this boundary to be located? This boundary between the “Organismic” and the “Artificial” [Gräbe and Kleemann (2020), ch. 4] quickly becomes obsolete if one starts to compare the structured mode of operation of machines with the structured mode of operation of organisations that produce or use machines and thus emphasises the socio-technical character of the Machine Problem. In this way, however, the concept of machine scales up to that global “machine” of institutionalised forms of socio-cultural cooperative action and, in view of the intended and unintended global effects of that World of Machines, cannot be meaningfully discussed without such a perspective.

With regard to the current philosophical debate, we orient ourselves by the Onlife Manifesto Floridi (2015), since central positions of a debate that has lasted several years are explicated there – especially in its preface – with particular precision and pointedness. In our perception, there are epistemological nuances between that approach and ours, which ultimately manifest themselves in a different appreciation of co-evolutionary developmental dynamics and thus in a specific view of cooperative action in the two conceptualisations. This requires further discussion.

2 Technology and Modes of Production

The technical possibilities of humanity have grown rapidly in the last 150 years and with them the possibilities of intentional action to achieve anticipated effects in the real world. The unfolding of the multi-optionality Laitko (2001) of such intentions in a common reality is nevertheless full of contradictions, on both small and large scales. These contradictions cannot be meaningfully addressed solely from the perspective of the use of those means, but must also take into account their modes of production and emergence VDI (2000).

The development of modern technologies as a combination of globally available procedural knowledge, socially institutionalised procedures and private procedural skills Gräbe (2022a) is thus closely related to the respective modes of production – technological advances change the mode of production, and changed modes of production enable the departure for new technological shores.

The interrelation of technological development and the development of productive forces can be conceptualised, for example, in a theory of long waves Gräbe (2013). It states that in periods of about 50 years, comprehensive technological upheavals and the subsequent consolidation of these developments in the socio-cultural structures of society can be identified, with periods of a basic technological revolution alternating with periods of a revolution in the organisation of production. Revolution here is not to be understood as sudden change, but in the sense of a “gradualness of revolution” as developed by Rainer Thiel in Thiel (2000). In such double periods, new professional profiles and training structures also emerge, such as the profession of the engineer since the second half of the 19th century Kaiser and König (2020) or the profession of the manager in the second half of the 20th century.

With the digital transformation, we are undoubtedly dealing with another such fundamental technological upheaval, which stands in peculiar correspondence to that dawn of the computer age in the 1960s. In the past 50 years, the “colleague computer” has evolved from a tool and digital assistant to an equal and possibly soon dominant partner, if one is to believe the diverse analyses about Human-Machine Interaction.

In his Onlife Manifesto Floridi (2015), Luciano Floridi comprehensively contrasts the opportunities and risks of this technological game change. Under four major headings (slightly modified here) – “Game Over for Modernity?”, “Frankenstein and Big Brother”, “Dualism is Dead?” (control and complexity, public and private) and “Good Onlife Governance” (the relational self, digital literacy, care for attentional capabilities) – fields of thought are delineated in order to compare opportunities and risks in these contexts more precisely and also to ask whether some developments perceived as opportunities entail great risks and some risks harbour unexpected opportunities.

Floridi establishes a close relationship between the notions of perception and concept:

The world is grasped by human minds through concepts: perception is necessarily mediated by concepts, as if they were the interfaces through which reality is experienced and interpreted. Concepts provide an understanding of surrounding realities and a means by which to apprehend them. However, the current conceptual toolbox is not fitted to address new ICT-related challenges and leads to negative projections about the future: we fear and reject what we fail to make sense of and give meaning to. [Floridi (2015), Preface]

An engineer will ask what this has to do with the actions of engineering. The challenges are indeed great, especially to organise cooperative action in interdisciplinary teams and to develop common conceptualisations for concrete challenges. What do “sense” and “meaning” mean in such spaces of cooperative action Goodwin (2018) and for the cooperative subjects that emerge there?

So what does “we” mean for Floridi? How does this “we” as a “relational self” [Floridi (2015), 4.1] relate to those processes of institutionalisation of technological procedures VDI (2000) that practically cannot take place in any other way than socio-technically, involving skilled people as “human resources”? Those processes were and still are made by people. Klaus Fuchs-Kittowski continues to emphasise his 50-year-old thesis:

The question then, as now, was: What is the position of the human being in the highly complex information technology system? Our answer to the question has always been: The human being is the only creative productive force; he must be and remain the subject of development. Therefore, the concept of full automation, according to which the human being is to be eliminated step by step from the process, is mistaken. [Fuchs-Kittowski et al. (1976), ch. 1.1.3], [Fuchs-Kittowski (2001), p. 10]

Floridi focuses rather on “blurring of distinctions”:

The deployment of information and communication technologies (ICTs) and their uptake by society radically affect the human condition, insofar as it modifies our relationships to ourselves, to others and to the world. The ever-increasing pervasiveness of ICTs shakes established reference frameworks through the following transformations:

i. the blurring of the distinction between reality and virtuality;

ii. the blurring of the distinctions between human, machine and nature;

iii. the reversal from information scarcity to information abundance; and

iv. the shift from the primacy of entities to the primacy of interactions. [Floridi (2015), Preface]

3 Intentionality and Mental Worlds

Let us return to the question, raised at the beginning, of intentionality in such a highly technological world. With Marx’s 11th Feuerbach thesis – “the point is to change the world” – we grasp intentionality in this paper always as intentional and thus purposeful action. This does not devalue the “interpreting philosophers” (Marx ibid.), because thinking is also action. In Shchedrovitsky’s work, thinking as action occupies a central place, whereby he clearly distinguishes between mental activity and pure thought [Shchedrovitsky (2014), pp. 33-48]. Understanding and meaning also occur there [Shchedrovitsky (2014), pp. 40-44], as they do in Floridi’s work, but the term sense is not developed. In the section “Natural and Artificial” [Shchedrovitsky (2014), p. 22], however, Shchedrovitsky explains differences in the perception of practical and scientific persons:

The concepts and categories of the natural and the artificial are essential for the 20th century. They were also important before, but today a mass of complex upheavals is based on these concepts.

Take this example. A piece of chalk is lying on the table. Now I pick it up, I can throw it out of the open window, I can unclench my fingers and it will start to fall. How will a practical person and a scientific person describe these processes? They will describe them in fundamentally different ways.

The practical person will say, ’The chalk was lying on the table, then I picked it up and threw it. The chalk flew through the air because I threw it,’ or, ’The chalk started to fall because I was holding it at first and then I unclenched my fingers and stopped holding it.’ The scientific person would say, ’Everything that the practical person said is nonsense. If the chalk flies then it is not flying because the person threw it, but according to the laws of nature. There is a law of inertia, a law of attraction and a law of resistance of material, and therefore the entire curve of its flight is determined by applying these three laws. Moreover, you say that it was lying on the table. It was not in fact lying there, but was flying at a constant speed.’

Intentionality, as Floridi emphasises, cannot be studied without the mental worlds of the intenders, which are essentially constituted by the experiences of their cooperative action at different scales. Cooperative action leads to shared mental worlds and vice versa. Cooperative action and common conceptual worlds are thus in a relationship of mutual development. These conceptual worlds are the environment for decisions as to whether individual concepts are “making sense” or “failing to make sense” [Floridi (2015), p. 7].

Like the cooperative contexts of action, the mental worlds also differ in a mode of production based on a deep division of labour. For the development engineer, who in his everyday work assembles solutions from components and services of independent third parties and from his own developments in a component-based technological world (see Szyperski (2002) for details of such a perspective), the mental world may look different from that of the production engineer, who is more closely bound to the “hassle of the lowlands” of practical everyday cooperation. One can assume, however, that for these engineers the concrete connections and mutual dependencies in a World of Technical Systems Gräbe (2020) dominate their mental world, and not the more general socio-cultural structures of reality, which may be in the foreground in the mental world of a philosopher.

That categorisation of “Natural and Artificial” [Shchedrovitsky (2014), p. 22], which we have provisionally assumed here as a basic difference between the mental worlds of two professions in a modern society, motivated J. Mittelstraß in 2011 to a campaign against such a “Leonardo World” Mittelstraß (2017), which was taken up in the humanities in a completely different way than in the sciences. We use this picture of a schism of a formerly unified science, a schism that has developed more or less in parallel with the unfolding of industrial society since the beginning of the 20th century.

This schism of mental worlds also played a major role at the legendary CPOV conference that U. J. Schneider organised in Leipzig in 2010 to place the developments of Wikipedia in the context of conceptual and meaning-generating processes. The clash between Wikipedians (the makers of Wikipedia) and Wikipedists (the sociologically grounded investigators of those practices) could not have been clearer. The Wikipedians were confronted with the accusation that their reflections were a bit homespun, while the Wikipedists had to put up with the question of whether they were not measuring today’s developments with yesterday’s standards Haber (2010).

We can therefore diagnose a “Clash of Civilisations”, though probably not in Huntington’s understanding (this question, too, deserves further analysis), but in the way real-world developments are reflected in the unity of a common socio-cultural reality of both the engineers and the philosophers. Intentionalities are embedded in such mental worlds. The accusation that the Wikipedians’ mental world is too naively constructed comes from the Wikipedists and marks a certain moment of arrogance and a refusal of an urgently needed inter- and transdisciplinary discourse. We cannot elaborate further here on the pressing question of the intentionality of those intentionality researchers.

4 Machines and Information

Intentionality in the Human-Machine relationship essentially depends on linguistic forms of transmission, if the concept of language is taken broadly enough to subsume formalised and institutionalised forms of interaction – such as pressing a button or the gesture of starting a machine – which have a clear semantic meaning and can thus be counted under the heading of verbal abbreviations. Such verbal abbreviations play an important role as shortcuts of speaking in established practices, in contexts whose reproduction is institutionally organised in a sustainable way through cooperative action on a larger scale. We come back to the role of such shortcuts in the constitution of mental worlds and Human-Human Interaction at the end of our text.

In the following, we refer to such forms of interaction as transmission of information. We leave the concept of information vague and initially ignore the complexity of related conceptualisations as discussed, e.g., in Capurro et al. (1997); Janich (2006); Klemm (2003). The aim of our explanations is to explore the space of the non-intentional in this Human-Machine exchange relationship in more detail and, in particular, to ask about the role of unintended information in this context.

Does such a notion make sense at all in the interaction with a machine? Is that machine not precisely a product of pure intentionality? How can there be anything unintended in the interaction with a tool of human action? Well, some tools are bulky, one occasionally slips or cuts one’s finger with a sharp knife if one is careless. So unintended effects can certainly occur when tools are used incorrectly. But unintended information?

How does information relate to the use of tools in a systemic context such as that of TRIZ, the Theory of Inventive Problem Solving Koltze and Souchkov (2018)? The minimal technical system as a basic notion in TRIZ is reduced to the relationship tool → acts on → object and concentrates on the purely functional effect of the tool in changing the state of the processed object. Energy supply, transmission and control, as further components of a complete technical system [Lyubomirsky et al. (2018), ch. 4.2], are in the context of a minimal technical system contributed by the purpose-setting activities of the human who uses the tool. This changes with the transition to more complete technical systems, to which parts of these additional functions are transferred. In Lyubomirsky et al. (2018), a large number of examples are gathered that demonstrate such transfers in the course of the evolution of technical systems. Ten trends of evolution of technical systems are formulated there, including the three trends of increasing coordination, controllability and dynamisation, which are connected with the growing importance of information processing in that evolution.
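To make this division of labour between human and tool palpable, here is a minimal sketch in Python. All names (Obj, Tool, CompleteTechnicalSystem) are our own illustrative inventions, not TRIZ-normative terminology; the sketch merely mimics how energy supply, transmission and control migrate from the human user into the system:

```python
from dataclasses import dataclass

# Minimal technical system in the TRIZ sense: a tool acts on an object and
# changes its state; purpose-setting, energy and control are contributed by
# the human user and do not appear in the model at all.

@dataclass
class Obj:
    state: str

class Tool:
    def act_on(self, obj: Obj) -> None:
        obj.state = "changed"  # the purely functional effect of the tool

# A more complete technical system internalises functions that the human
# contributes in the minimal case: energy supply, transmission and control.
class CompleteTechnicalSystem:
    def __init__(self, tool: Tool):
        self.tool = tool

    def energy_supply(self) -> float:
        return 1.0  # e.g. a motor instead of a human hand

    def transmission(self, energy: float) -> float:
        return 0.9 * energy  # e.g. gears instead of an arm, with losses

    def control(self, obj: Obj) -> bool:
        return obj.state != "changed"  # e.g. a sensor instead of an eye

    def run(self, obj: Obj) -> None:
        while self.control(obj):  # control decides *whether* to act
            if self.transmission(self.energy_supply()) > 0:
                self.tool.act_on(obj)  # the tool still does the acting

nail = Obj(state="raw")
CompleteTechnicalSystem(Tool()).run(nail)
print(nail.state)  # -> changed
```

In the minimal system only Tool.act_on would exist; hand, arm and eye of the user would supply the three other functions.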

Such laws of the development of technical systems were already formulated by G.S. Altshuller, the “father of TRIZ”, in the 1960s and were first systematically developed in the context of an S-curve analysis in Altshuller (1979). Opinions differ among authors as to what of this really has the character of a law (and in what sense) and what is rather to be regarded as patterns, trends or lines of evolution. While the authors of Lyubomirsky et al. (2018) consistently speak of 10 trends and a transition “from S-curve analysis to pragmatic S-curve analysis”, in [Koltze and Souchkov (2018), ch. 4.8] the evolution of technical systems is modelled in 5 laws and 10 “lines of development or trends”, among which the trends of increasing controllability and of increasing automation are directly linked to information processes – in the first case across the system boundary and in the second case within the system.

However, in both cases this should be a matter of intended information, because control means transferring human intention to the machine. In the case of automation, we are dealing with system-internal processes of generation and consumption of information without human intervention; but this “intention” of the machine to generate that information is a consequence of the intention of the cooperative action of the people who, on the one hand, built the machine and, on the other hand, use it, as conceptualised in VDI (2000).

5 Unintended Information

The picture of Human-Machine Interaction conveyed so far is that of an individual who transfers his intentions one-to-one into the pure functionality of a machine, so that the anticipated effect equals the real one. Such a picture has often been criticised and may therefore be overdrawn here, but it points to a problematic concept in the tradition of certain argumentations on the subject of Human-Machine Interaction.

In such argumentations machines and individuals interact with each other, but those interactions are not sufficiently considered in their mutual dependence. Such an individualistic instrumental dimension underestimates the cooperative use of tools and their cooperative production based on the division of labour. The relation of such argumentations to conceptualisations of human social structures and the connection to socio-cultural development remain difficult. We argue that the conceptualisation of socio-culturality crosses various dimensions up to ethics and morality, but should be rooted in a concept of the human and of cooperative human action. Without such grounding it remains unclear what “intentions” are, how they are constituted and what consequently is to be named “unintended”.

A general point in such argumentations seems to be that the flow of information in Human-Machine Interactions – “intended” as well as “unintended” – is considered as largely directed towards guaranteeing or restricting the freedom of action of human individuals.

In some argumentations an “autonomy” of machines is presupposed, as if the machines pursued their own purposes. Intentions seem to be independent of humans and realised in an algorithmic-automatic, ultimately “inhuman” way, although Human-Human Interactions certainly precede those Human-Machine Interactions. But is it not precisely the question of the genesis of technology and its forms of Human-Human Interaction that a sound conception must address? Technology cannot be meaningfully reduced to an artefactual dimension, but always includes interactions between people – at least in the understanding of the definition of technology given by the VDI, the German Association of Engineers founded in 1856, in their Guideline 3780 VDI (2000).

Human-Machine Interaction in the digital age seems to be a new topic, which is mostly shaped by algorithmic questions or those of information processing, e.g. in Algorithmic Cultures Seyfert and Roberge (2017). Concepts such as Stalder’s Culture of Digitality Stalder (2016) or Schetsche’s Digital Knowledge Revolution Schetsche (2006) remain largely unnoticed in the new Human-Machine Interaction debates. Neither Stalder’s nor Schetsche’s notion of culture can be reduced to an algorithmic or a purely informational one.

6 Human-Machine Interaction – The Engineer’s Perspective

We have already pointed out with VDI (2000) that in the engineering field the concept of machines (more precisely: material systems VDI (2000)) is not limited to their artefactual dimension, but must encompass on the one hand the production and on the other hand the use of those material systems. This view still remains narrow insofar as it does not directly address comprehensive effects of this production or use on socio-cultural structures. However, it at least helps to create conceptual foundations for such broader debates in technology assessment.

Let us explain the multidimensionality of such relations between “experts” and “users”, with special emphasis on – intended and unintended – information, by an example – a use case in the language of engineers.

The customer (you, my neighbour) comes to the expert (me, an engineer) with a broken device. “Can you help me?” As an ardent follower of the moral principles of the GNU Manifesto Stallman (1985), I am of course happy to help my friends and neighbours with their problems free of charge. “Let me see, what happened?” You explain the circumstances to me. I throw my hands up in horror: “Didn’t you read the manual? You must never use the device in this way!” You: “But we’ve been using it like that for a long time and never had any problems, until now.” A little interim conclusion: we are talking about unintended use and unintended behaviour, but unintended information?

So I take my diagnostic tool to extract the data from the memory of the device (hereafter we use the term “brain” – mostly without quotes – for this data memory, not for a neurological unit).

First version: The brain is empty. Me: “There’s nothing recorded in the brain.” You: “Yes, I turned off this recording function out of fear for the privacy of my data.” Me: “If I don’t even have the data that is indispensable according to the state of the art, I can’t help you. Buy a new device.”

Second version: The data can be extracted from the memory, I start the repair and after a while the device is up and running again.

Now let’s have a closer look at the flow of information between me and “the brain”. More precisely, there are two such information flows – the first from the “brain” to my diagnostic tool (from “machine” to “machine”) and the second from my diagnostic tool to me. Of course, privacy plays a role if data about weeks of use of the device by a third party are considered, but who owns the data we are talking about? Does the customer, my neighbour, own the data, or does her device, the “brain”? After all, she didn’t collect the data, the “brain” did. You can of course object that this was completely intended, but what does that say? There are many other “brains” out there – technical and human ones – that observe customers every day (like the “brain” of the device), collect data and draw their own conclusions. Self-impression management is a tedious business.

At least the information in question is now not only in the “brain” of her device, but also in the “brain” of my diagnostic tool (my “machine”) and I have seen unintended information about her lifestyle. Fortunately, I promised not to gossip about this.
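The asymmetry between the two kinds of information in this service case can be made concrete in a small, admittedly simplistic sketch; the record layout and all field names are invented for illustration only:

```python
# Hypothetical content of a device "brain" as handed to a diagnostic tool.
# The fault data are the intended information of the service case; the usage
# log, recorded for the very same diagnostic purpose, additionally reveals
# unintended information about the owner's lifestyle.

diagnostic_record = {
    "device_id": "X-1000",              # illustrative identifier
    "fault_codes": ["E42"],             # intended: what the repair needs
    "usage_log": [                      # unintended: a behavioural profile
        {"day": "Mon", "hours_used": 0.5},
        {"day": "Sun", "hours_used": 7.5},  # heavy weekend use ...
    ],
}

def repair_view(record: dict) -> dict:
    """What the repair actually requires."""
    return {"fault_codes": record["fault_codes"]}

def lifestyle_view(record: dict) -> dict:
    """What the service partner sees in passing."""
    return {"usage_log": record["usage_log"]}

print(repair_view(diagnostic_record))
print(lifestyle_view(diagnostic_record))
```

Both views are derived from one and the same intendedly recorded data set; the second one is what our text calls unintended information.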

But what about the diagnostic tool – where did I get it, and how is it connected to the device which I examined with it? Both obviously belong to a larger productive context in which she (as customer), I (as unpaid service partner) and the producer of the device and the diagnostic tool play a role as independent third parties. The device and the diagnostic tool are obviously pieces of equipment that have been manufactured by the producer in larger quantities according to a specific design. Devices are used by customers, diagnostic tools by service partners; but diagnostic tools only exist because customers occasionally have problems with their devices and then go to the service partner for help.

So our (hers and mine) specific service relationship is just one among many, and each of the service partners uses standardised tools (“machines”) and procedures (“machine based operations”) to track down the problems. Over time, it turns out that some problems reoccur. For example, customers use the device in a specific way that causes problems over and over again. But even after this is written in bold face as a warning in the manual, these kinds of problems still don’t stop. Fortunately, they can be easily fixed by experienced service partners (like me) if the data memory can be read out.

Of course, problem classification is only possible if the service partners exchange information about the problems – preferably not just fragments, but the complete information. Then we can all improve the products together (first and foremost the producer, of course) and the customers are more satisfied. But you have probably guessed it already – it is my diagnostic tool, not me, that has the complete information about my service cases. Hence Machine-Machine Interaction is required at this point once more.

Should the customer have a right to object? This is a difficult question e.g. for car manufacturers. It is less about autonomous driving than about the multitude of driver assistance systems (the brain of a modern car).

Within that Machine-Machine Interaction the information – intended as well as unintended – converges in an even larger brain, the brain of the company that produces these devices. Does this mean we are leaving the Human-Machine Interaction area? That depends on how the notion of machine is conceptualised.

What is the quintessence of our thought experiment? We see that in a concrete use case the concept of machine quickly unfolds into several dimensions with contradictory requirements and thus intentions.

7 Socio-Technical Systems

How have these dimensions of contradictoriness been dealt with so far? Machines have existed since ancient times Kaiser and König (2020), but the machine age or industrial age (we do not differentiate here) usually refers only to the period from the middle of the 19th century onward. In those times machines came to be developed technologically in a systematic manner and used in a dominant way in the organisation of production. In particular, tools and machines were themselves produced by and with machines. With Industry 4.0, the “automated factory” is the vision, i.e. the perfected machine-producing machine.

Under such real-world conditions, it will therefore be problematic to reject a conceptualisation in which the modern factory also passes for a machine. This idea was already formulated 150 years ago:

Once adopted into the production process of capital, the means of labor passes through different metamorphoses, whose culmination is the machine, or rather, an automatic system of machinery (system of machinery: the automatic one is merely its most complete, most adequate form, and alone transforms machinery into a system), set in motion by an automaton, a moving power that moves itself; this automaton consists of numerous mechanical and intellectual organs, so that the workers themselves are cast merely as its conscious linkages. [Marx (1858), ch. 13]

In the engineering domain the notion of a technical system is used instead of machine. This term is also controversial, as it quickly turns out that there are actually only socio-technical systems.

Ian Sommerville Sommerville (2007) starts with the concept of a system as a “meaningful set of interconnected components that work together to achieve a specific goal”. A technical system is part of a World of Technical Systems. From the perspective of such a system, the neighbouring systems appear as Black Boxes, of which only “meaning” (in the form of a specification) and “purpose” (as provided function) are relevant in the given context. The system itself is seen as a White Box whose functionality has to be designed, modelled, implemented, integrated and tested before the system can go into operation. Nevertheless, the neighbouring systems are more than Black Boxes, because the system not only accesses them through interfaces, but also depends on the promised performance being made available via these interfaces at the right time Gräbe (2020).
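A minimal sketch of this Black Box view, under naming assumptions of our own (PowerSupply, Pump and Grid are invented for illustration and do not stem from Sommerville): the neighbouring system is present only through its specification, while the dependence on timely delivery of the promised performance remains:

```python
from typing import Protocol

# The neighbouring system as a Black Box: only its specification ("meaning")
# and its provided function ("purpose") are visible in the given context.
class PowerSupply(Protocol):
    def provide(self, watts: float) -> float: ...

# The White Box under design: its functionality has to be designed and
# tested, and it depends on the promise encoded in the interface.
class Pump:
    def __init__(self, power: PowerSupply):
        self.power = power

    def move_water(self, litres: float) -> float:
        watts = self.power.provide(10.0)   # must arrive at the right time
        return litres if watts >= 10.0 else 0.0

class Grid:                                # one possible neighbour
    def provide(self, watts: float) -> float:
        return watts                       # the promise, kept

print(Pump(Grid()).move_water(5.0))        # -> 5.0
```

If Grid breaks its promise, the interface is still formally in place, but the White Box does not function – exactly the point that neighbouring systems are more than Black Boxes.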

Like the VDI guideline VDI (2000), Sommerville also strongly emphasises the social embedding of technical systems:

Socio-technical systems contain one or more technical systems, but beyond that – and this is crucial – the knowledge of how the system should be used to achieve a broader purpose. This means that these systems have defined work processes, human operators as integral part of the system, are governed by organisational policies and are affected by external constraints such as national laws and regulations. [Sommerville (2007), p. 48]

This already complicates Human-Machine Interaction issues. What does it mean to interact with such a “machine” that has human operators as an integral part? What does it mean to convince the human operator to fulfil my needs, my intentions, even if “the machine does not want this”, i.e. the defined work processes and organisational policies do not provide for this? Do the intentions of managers or even of individual human operators as subjects of individual action come into conflict with the individual intentions of the customer, or must the entire cooperative context of that socio-technical system, the organisation [Shchedrovitsky (2014), p. 10], [Sommerville (2007), ch. 2.3], also be attributed an independent “intentionality”?

In the context of TRIZ and also in Sommerville’s understanding, a system is characterised by the fact that it realises an emergent function that only unfolds in the interaction of its parts [Petrov (2020), p. 17, vol. 1], Shchedrovitsky (2014), [Sommerville (2007), p. 49]. In his analysis of the part-whole relationship to be conceptualised in this context, Shchedrovitsky distinguishes functional and attributive properties of the parts [Shchedrovitsky (2014), p. 96], but it is only the operational dimension which turns the totality of parts with connections as a system of first kind into a unified whole, a system of second kind [Shchedrovitsky (2014), p. 98]. Such a “living system” cannot be divided up into parts. “Hegel puts this very clear when he said that a living system has no parts, only a dead body has parts. If we cut a unitary living body into parts, what we obtain are parts of a dead body instead of parts of the living body.” [Shchedrovitsky (2014), p. 91]

Such a structuring of the world into small and larger systems leads in our search for an adequate concept of intentionality to that imagined borderline between the natural and the artificial already discussed above, at which the dominance of intentional action as an external purpose-setting, of controllability and intentionality from the outside seems to pass to a movement according to internal laws in the given context. Today one even expects systemic resilience Holling (2001) from such a “movement according to internal laws” particularly for systems of large socio-ecological dimensions.

But is there really such a boundary where a dominance mode changes or is it a specific form of movement of a fundamental contradiction of that systemic conceptualisation itself between adaptation to external constraints and internal laws of movement? And don’t these internal laws of motion describe the functional interaction of the parts of a system to constitute the emergent function? But then the internal laws of motion of the system would be the outer constraints of its parts.

It is precisely this self-similar pattern that Shchedrovitsky exploits to conceptualise the systemic structure of the world. This says nothing about whether the world “really is like this”, but it describes to a good extent the mental world accompanying the engineering practices according to which that “artificial” world is built, and it has also been successfully applied to socio-economic management structures in the last 50 years Gassmann et al. (2020); APQC (2018); APICS (2017); Ackoff (2001).

However, this seems to differ from Floridi’s approach to the world: “The world is grasped by human minds through concepts ... that provide an understanding of surrounding realities and a means by which to apprehend them.” [Floridi (2015), Preface]. It sounds as if one first has concepts and then uses them to develop “an understanding of surrounding realities” and as “means to apprehend them”. Of course, Floridi’s argumentation is more subtle, but we use this rough reading to mark a problematic point in the debate. Shchedrovitsky’s approach focuses on the cooperative action of living organisations – “the organisation as a form of the life of the collective” [Shchedrovitsky (2014), p. 30] – and postulates a co-evolutionary connection between those “surrounding realities”, “concepts”, “intentionalities” and “means to apprehend them”.

8 The Human-Machine Relationship

In the previous section it was shown that the Human-Machine relationship scales up to a system, a “machine” of global dimension. What are the conceptual foundations and approaches for grasping this dimension of the topic in the shift towards a digital future with its increasing importance of processes of exchange of information? Is there a need for a Cybernetic Anthropology, as Karl Steinbuch Steinbuch (1971) brought up at the time, or rather a Cybernetic Sociology in the sense of Stanislaw Lem’s Summa technologiae Lem (1964)?

A system concept based on the emergent cooperative functionality of components is also developed in more detail in Gräbe and Kleemann (2020). Accordingly, the Human-Machine relationship is a system with its own concepts, terminology, forms of movement and laws, as developed from an engineering point of view e.g. in Goldovsky (1983, 2018); Lyubomirsky et al. (2018); Petrov (2020); Sommerville (2007). What are the consequences of the emergent character of such a system, of the “special properties that affect the system as a whole, and are not related to individual parts of the system” [Sommerville (2007), p. 49]? It means that it is impossible to obtain all important systemic aspects from an analysis of the parts of such a system alone. Even a synthesis of these details is insufficient to grasp that emergent character. On the other hand, there is no way other than through such an analysis and synthesis, in which partial truths are worked out only to be discarded later on. The systemic concept attempts to process this epistemologically challenging dialectical contradictoriness by contextualisation and reduction to the respective essentials Gräbe (2020).

Shchedrovitsky describes that process of the mental decomposition of the whole, as a system of first kind, into parts:

A complex object is represented as a system in the first place, when we have distinguished it from its surroundings by either completely breaking all of its connections or by preserving them in the form of functional properties; in the second place, when we have divided it into parts (mechanically or according to its inner structure) and thus obtained a totality of parts; in the third place, we have connected the parts and turned them into elements; in the fourth place, when we have organised the connections into a unified structure; and when, in the fifth place, we have put this structure back in its previous place, thus delineating this system as a unity. [Shchedrovitsky (2014), p. 89 ff.] (our emphases)

In those functional properties the usefulness of those elements is encoded; it transforms them into components that only “come to life” in a sufficiently efficient environment that guarantees their operating conditions Gräbe (2022b).

This approach to the dialectical contradiction between parts and the whole – specific, but characteristic for our highly technical world – makes it possible to assemble viable parts, via specifications of their pure functionality covering both the operational requirements and the services provided, via their “connections”, into larger systems with just such a potentiality of a pure (emergent) function. Nevertheless, each individual system of such a multi-layered, granularly composed World of Technical Systems – and that world is a synonym for the World of Machines if one takes the concept of machine as unfolded here – remains a “dead body” [Shchedrovitsky (2014), p. 91] and only comes to life when the operating conditions are practically provided as throughput of material, energy and information in the qualitative and quantitative form precisely described by the specifications. The organisation of those throughput conditions, which Shchedrovitsky describes in more detail with his second concept of a system [Shchedrovitsky (2014), p. 98 ff.], is a processual view that ties processes to the “connections” of the functional structure of that system of first kind, derives organisational conditions of matter from this and finally asks about the practical organisation of matter itself. The “blurring of the distinction between reality and virtuality” (Floridi) thus loses its vagueness, becomes a sharp distinction between virtuality and reality and constitutes the space also for the unintended.
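The difference between the two system concepts can be illustrated with a toy model; the components, connections and throughput figures are invented for this purpose and carry no claim beyond illustration:

```python
# System of first kind: a functional structure, i.e. components and their
# connections with specified operating conditions - on its own a "dead body".
structure = {
    "press":  {"needs": {"energy": 5, "material": 1}, "feeds": "welder"},
    "welder": {"needs": {"energy": 3, "material": 1, "information": 1},
               "feeds": None},
}

# System of second kind: the same structure under actually provided
# throughputs of material, energy and information; only now do the
# components "come to life".
def operate(structure: dict, supply: dict) -> list:
    alive = []
    for name, spec in structure.items():
        if all(supply.get(k, 0) >= v for k, v in spec["needs"].items()):
            alive.append(name)  # operating conditions are met
    return alive

print(operate(structure, {"energy": 10, "material": 2, "information": 1}))
# -> ['press', 'welder']
print(operate(structure, {"energy": 4, "material": 2, "information": 1}))
# -> ['welder'] only; the press starves for energy
```

The wiring in structure does not change between the two calls; what changes is solely the practically provided throughput – the processual dimension that Shchedrovitsky’s second concept of a system addresses.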

Virtuality is thus closely linked to the concepts of intentionality, usefulness and functionality of those parts of a systemically structured world that are already mentally permeated. The same mental scheme can also be applied to parts of the world whose mental penetration is still pending or only slightly advanced and which have so far revealed only their “inner laws of motion” to the mental activity of humans. In these early phases of gaining knowledge, destination comes into focus as a fifth concept, since, as a rule, we will only take the trouble of the mental activity of analysis for those parts of the world from which we expect a certain usefulness. In the interplay of usefulness and purpose, of external and internal views of that part of the world, we assign it an imagined main useful function as destination before we embark on its more detailed analysis as a system of first kind. In TRIZ, this intentional anticipation of the required result as Ideal Final Result or Ideal Machine [Koltze and Souchkov (2018), ch. 4.1] is a conceptual cornerstone.

In this understanding, the hammer is there for hammering – this is its destination (as “ideal machine”) and purpose (as a tool); with this functionality, this “machine” is useful when set in motion in this systemic world by the hand of an expert, and hence intentionally used. But one can also use the same hammer as a wedge to prevent a balcony door from slamming, as A. Kuryan argued in a Facebook debate on the concept of system Luckcuck et al. (2019).

The common real world of use of such “machinic tools” thus clearly contains a large number of moments of indetermination if considered in the spectrum of multi-optionality and multi-functionality of different mental worlds. The unintended reveals itself at the interface of both worlds as a contradiction between theory and practice and thus ultimately as a problem in the use of those machines – the small ones as well as the large socio-technical systems, the organisations.

9 The Unintended and Problem Solving

This brings problem solving into focus as the main engineering task in the interrelationship of production and use of machines and, more generally, “material systems” VDI (2000).

“Unintended information” marks such a problem perception of an “unintended effect” – first of all in the mental world of individuals, as in the repair example above – which can condense into a cooperatively perceived problem if it occurs individually many times. In the special “use case” above, in the cooperative space of action shared by manufacturers and service partners, an initial solution has already been found: the service partners have tools to fix the problem when the data is recorded in the “brain”, and the users have a notice printed in bold face in the manual not to use the device in this way, or at least not to switch off the recording function.

The fact that a part of the user group still does not pay attention to this warning is connected to a second problem perception that is less technical in nature but has penetrated the mental world of those users in a more mediated way – the question of the privacy of data. Both problem perceptions stand in veritable contradiction to each other, a “physical contradiction” in TRIZ terminology [Koltze and Souchkov (2018), p. 67]: the device must record the data so that it can be repaired if necessary, and it must not record the data in order to guarantee data privacy. There is no space here to discuss in more detail the engineering potential of such a contradiction-oriented systematic innovation methodology. Nor is it possible to discuss what it means that problems are not only imagined, but have a real-world background and express themselves as a harmful effect – another central concept of TRIZ. We limit ourselves to the latter perspective, since a positive unintended effect is usually not problematic, although TRIZ also addresses the possible harmfulness of a useful but excessive effect [Koltze and Souchkov (2018), p. 121].
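TRIZ typically resolves such physical contradictions by separation principles – in time, in space, or by condition. A conceivable, here purely hypothetical, resolution by condition for our example: the device records the data only in encrypted form, and the key remains with the owner. The sketch uses the third-party Python package cryptography and is a conceptual illustration, not a security design:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# "The device must record the data (for repair) and must not record the
# data (for privacy)." Separation by condition: the plaintext exists
# exactly when the owner consents and hands over the key.

key = Fernet.generate_key()     # stays with the owner, not with the device
brain = Fernet(key).encrypt(b"usage log: heavy weekend use")

# Without the key the data is recorded, yet unreadable - "not recorded"
# in the privacy-relevant sense.

# Service case: the owner consents and supplies the key.
print(Fernet(key).decrypt(brain))  # b'usage log: heavy weekend use'
```

Whether such a scheme would satisfy both problem perceptions in practice is an open engineering question; the sketch only shows how a contradiction-oriented methodology reframes “either/or” as “both, separated by condition”.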

Such a harmful effect prevents the user of a machine from using it according to her intentions and, in the civil legal system with its foundations in contract law and the law of obligations, leads to the question of the attribution of the consequences of that harmful effect and, if necessary, their monetary compensation. The unintended effect thus revives the social relationship between user and manufacturer of that machine in the dispute about the private attribution of these consequences of the actions of the user of the machine. Even if detailed questions can be clarified in individual cases by lawyers, the term state of the art plays a central role here. Manufacturers are in a bad position if their machines were not designed and produced according to this state of the art, and the question of guilt is quickly clarified in such a case.

This divides the world of harmful effects into a serious and a less serious part. In the following, we will only deal with the serious part, which marks the limits of the state of the art as the current level of globally available processual knowledge.

10 The Unintended and Systemic Development

As a rule, the formulation of an unintended or even accepted harmful effect is preceded by its perception as a problem in the real world and thus as an event in the operation of the World of Technical Systems. As worked out above, this world forms a unified whole in the operational dimension, which can only be structured with the concept of a system of second kind, since the partitioning as a system of first kind according to purely functional aspects says little about dependencies in the operational dimension.

Unintended effects propagate in that system of operational dependencies and, through disturbances of the operating conditions of subsystems, generate time-critical new unintended effects in other places in the Network of Socio-Technical Systems as a consequence of the original unintended effect. These chains of effects can be partially anticipated in descriptions of larger systems with many parts, but this reproduces the relationship between the intended and the unintended (or even only the “harmful”) at a higher systemic level. Such considerations form the basis for the concept of resilience Holling (2001) as the ability of a large system to locally confine a wide range of unintended effects. Other forms of organising socio-cultural processes with a high proportion of unintendedness are “agile” methods or “adaptive management” – special systemic development modes under which parts of the system can be rapidly reconfigured or reorganised. In such systemic contexts, the evolutionary trends of increasing coordination, controllability and dynamisation in system development Lyubomirsky et al. (2018) often manifest themselves as anti-trends, which once more highlights the dialectical character of those trends of systemic development.
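A toy model of such propagation and local confinement, with an invented dependency topology:

```python
from collections import deque

# An unintended effect propagates along operational dependencies: a failure
# disturbs the operating conditions of every subsystem that depends on the
# failed one - unless a "resilient" subsystem locally confines the effect.

depends_on = {                    # subsystem -> what it operationally needs
    "factory":   ["grid", "logistics"],
    "logistics": ["grid"],
    "hospital":  ["grid"],
}
resilient = {"hospital"}          # e.g. has its own backup generator

def affected(failed: str) -> set:
    """Breadth-first propagation of a disturbance through the network."""
    hit, queue = set(), deque([failed])
    while queue:
        broken = queue.popleft()
        for system, needs in depends_on.items():
            if broken in needs and system not in hit:
                if system in resilient:
                    continue      # effect locally confined (resilience)
                hit.add(system)
                queue.append(system)
    return hit

print(affected("grid"))  # {'factory', 'logistics'} - the hospital holds out
```

In this picture, resilience is nothing other than cutting the propagation path; agile reconfiguration would correspond to rewriting depends_on at run time.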

The unintended is thus a constitutive part of our real-world actions and a consequence of the fundamental limitation of the procedural knowledge that is globally available at any given time. At the same time, it drives the feedback cycle of justified expectations and experienced results in cooperative action Gräbe (2020), which propels the further development of that globally available procedural knowledge. In this sense, not only “perception is necessarily mediated by concepts” (Floridi), but also “concepts are necessarily rooted in perceptions”. From there, in the sense of Marx’s 11th Feuerbach thesis, we have to move on to the transition of this development of knowledge into the systemic development of socio-cultural forms of institutionalisation of cooperative action and thus into the hierarchical structuring of the World of Socio-Technical Systems.

The self-similarity of the systemic concept is well suited to formulate such a hierarchisation and at the same time processual bindings through hierarchies on different scales in a methodologically uniform way. A scaling of environment-system relationship plays an important role especially in socio-ecological models Holling (2001). Such multi-scale systems are at the same time necessary to express the contradictions between context and internal system dynamics at different scales. The “machinic” character of every adequately modelled ecological system has active Human-Human Relationships, i.e. Human-Human Interactions, as its “environment”. Thus, once more Human-Human Interaction is the decisive point in order to understand the “concept of machine” adequately at all.

11 Scientific Thought as Planetary Phenomenon

These systemic processes of development of ever more powerful and ever more comprehensive socio-technical systems as “machines” have nowadays reached a global dimension, in which the unintended effects, which in principle cannot be avoided, increasingly throw off the finely balanced equilibrium of material and biochemical exchange processes that has evolved over tens of thousands to millions of years as the “operating conditions” of our socio-cultural system.

Systemically, such equilibria are conceptualised as steady-state equilibria with relatively complicated attractor structures as spaces of possible development [Prigogine and Stengers (1976), part 2]. In the last 50 years, global observation structures have been developed and are being operated that allow us to record the experienced results of our actions on this global scale. The development that V.I. Vernadsky visionarily anticipated with his concept of the Noosphere at the end of the 1930s Vernadsky (1938) has thus been advancing further ever since.
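For readers who prefer a formula: in the usual dynamical-systems notation, such a steady-state equilibrium can be sketched minimally as follows – a textbook simplification, not a model of the Earth system:

```latex
% x(t) collects the state variables of the exchange processes.
\begin{align*}
  \dot{x} &= f(x), \qquad x \in \mathbb{R}^n \\
  f(x^{*}) &= 0
    && \text{(steady state: throughputs balance, flows do not stop)} \\
  \operatorname{Re} \lambda_i\bigl(Df(x^{*})\bigr) &< 0 \quad \forall i
    && \text{(the equilibrium attracts nearby trajectories)}
\end{align*}
% A "complicated attractor structure" then means: several such x* (or more
% general attractors) partition the state space into basins - the spaces of
% possible development; a sufficiently large perturbation moves the system
% into a different basin.
```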

The digital transformation, especially with its more comprehensive possibilities for the distributed collection and processing of data, lifts these forms of description to a completely new level. We know better than ever what the consequences of our actions are. On our own, however, we are not yet adequately able to translate this knowledge into cooperative action. The digital transformation thus puts the reconstruction of our socio-cultural structures towards a new concept of freedom on the agenda.

But how are we to understand this freedom if it is not the foolish freedom to do the wrong thing? How do we preserve ourselves and the world with us from our arbitrariness after we have stepped a little way out of the conditional structure of ’co-evolution’? [Dahm et al. (2005), p. 8]

Thus ask the authors of the “Potsdam Manifesto”, which was published by the VDW, the Association of German Scientists, in the Einstein Year 2005 and signed by a large number of scientists.

In the face of the challenges listed above, freedom can only mean acting cooperatively and responsibly binding oneself to do so. With the 17 SDGs as a result of the UNESCO World Programme of Action on Education for Sustainable Development, a reasonable part of mankind has embarked on this path under the aegis of UNESCO.

12 Open Culture and the Media Machine

We have shown that the unintended plays an important role in the further development of our technical and social abilities, from simple neighbourly help in joint problem solving up to the further development of our socio-cultural possibilities as humanity. The unintended often enters our life surprisingly: as an unexpected effect, as something not yet known, as something known in principle but mentally suppressed, as the effect of crossing a boundary that should not actually (intentionally) be crossed, or as something negatively intended, a harmful effect. Classifying knowledge about these unintended effects as “unintended information” to be hidden from the public by the Media Machine prevents us from dealing appropriately with this developmental potential.

An Open Society in digital format must therefore above all be committed to an Open Culture and thus to specific construction principles of that Media Machine as the core of such an Open Culture. The reorganisation of the globally available procedural knowledge in digital form and the development of appropriate tools have progressed rapidly, especially in the field of scientific and technical information, since Paul Ginsparg’s arXiv.org Ginsparg (2011) was set up in 1991. The agreement on common conceptual standards in machine-readable form for important sub-questions of a world conception has gained massive momentum in the last 15 years, as can be seen already in the purely quantitative development of the Linked Open Data World McCrae et al. (2019). In such a mental world, there is no such thing as “unintended information”.

Powerful forces oppose this – forces that perceive “unintended information” as a threat and a loss of control. They use all the technical possibilities of the modern Media Machine to push back and marginalise the unintended. In this way, however, it is no longer real life that becomes the corrective for development, but the cooperative mental world of a more or less comprehensive social group. Corresponding obvious developments of the Media Machine in China and now also in Russia should not conceal the fact that “leaking” is harshly prosecuted in Western democracies as well (J. Assange, E. Snowden). Attempts at a comprehensive use of this Media Machine to influence elections do not only originate from “Russian hackers”, but were also made specifically in the election campaign of D. Trump as well as in the Brexit vote (S. Bannon, Cambridge Analytica). The limits of the possibility of keeping the city of Shanghai, with its 26 million inhabitants, in lockdown for several weeks in the spring of 2022, while using nationwide control of the Media Machine to suppress any form of “unintended information” and thus resistance, mark particularly clearly the limits of a balance between the officially intended and the real life situation, and thus another front of challenges in relation to that Media Machine.

Floridi addresses such questions in [Floridi (2015), 2.1 to 2.3]. In our examples above, however, it is not so “hard to identify who has control of what, when, and within which scope” (ibid.) if the difference between the intended and the unintended is used as a concept in the analytical tools.

Floridi [2015, 3.1] asks “what does it mean to be human in a hyperconnected era?” In his items 3.1 to 3.8 (ibid.), cooperative action remains underexposed as something “private”, which seems to be embedded in a “public” rather than constituting it. The development of the structures of self-movement of an Open Culture in the last 70 years witnesses something else, as shown in more detail in Stalder (2016) and Schetsche (2006).

13 Conclusion

With the increasing power of the means, of the “machines”, the power of the unintended, but above all the power of the well-known but accepted “harmful effects”, also increases. These “harmful effects” reveal themselves as “unintended” only in the operation of these machines, and in surprising ways. The verbalisation of these “surprises” as problems is a starting point for the further development of the state of the art, and the technological solution of these problems drives the systemic development in the Network of Socio-Technical Systems. The formulation and the solution of problems, however, are separated in time and constitute a further World of Harmful Effects, which are certainly, at least temporarily, accepted in the practical operation of machines. Through specific modes of operation, risk assessments as well as constructive and non-constructive measures, attempts are made to limit and minimise the influence of such effects.

The impact of those “harmful effects” – known as side effects to the global procedural knowledge but accepted in the institutionalised forms of our socio-cultural action – has now reached a global dimension. This includes not only major industrial accidents such as those in Harrisburg on 28 March 1979, in Chernobyl on 26 April 1986 and in Fukushima on 11 March 2011, the dam break of Brumadinho on 25 January 2019 or the devastating explosion of almost 3000 tons of ammonium nitrate in the port of Beirut on 4 August 2020. It also includes questions such as the storage of radioactive waste as a “harmful side effect” of the nuclear energy production that was praised as so “peaceful” in the 1960s.

The study of Human-Machine Interaction thus stops halfway if it does not also address the Human Interaction with this global machine, the “machine, or rather, automatic system of machinery [...], set in motion by an automaton, a moving power that moves itself” (Marx). This machine, however, stands for the socialisation of our cooperative action in socio-cultural forms of institutionalisation. Marx’s view of it at this point of his writings as an “automaton” reduces the diversity of use values to a single abstract pure functionality of “value proposition” as exchange value. This is a description of the “machine” as a system of first kind, which would do credit to any recent TRIZ analysis. But it is about more: about the “spiritually-vivid cosmos” Dahm et al. (2005) of the potentials and realia of systemic development of that global “machine” as a system of second kind.

These days we are witnessing an abrupt end to 40 years of hope for “thinking in a new way” – a “new thinking” that is associated not only with the name of the politician Gorbachev, but also with the name of the scientist Vernadsky (1938), with the Russell-Einstein Manifesto (1955) and with the Potsdam Manifesto (2005) Dahm et al. (2005). We are confronted with a pushback in mankind’s attempt to bring its own socio-cultural development under the control of its cooperative action and to consciously develop it systemically, as a system of second kind of the networking of organisations. Ernst Bloch warned early on of “losses in stepping forward”, but also pointed to the power of “concrete utopias” which that new world carries as “not settled”, as traces of hope for a new time of unfolding.