Abstract
In the previous chapter we used Ashby’s cybernetic theory to discuss the “experimental arche” of organizations. This arche referred to a continuous and risky process of control, design, and operational regulation with respect to organizational transformation processes. At the heart of our discussion of the experimental arche was Ashby’s regulatory logic, stating that, in order to regulate a particular concrete system, one has to:
1. Select essential variables and desired values
2. Identify parameters disturbing the essential variables
3. Design an infrastructure (a “mechanism”) by means of which:
   - disturbances are attenuated,
   - the system’s transformation processes can be realized, and
   - regulatory potential (regulatory parameters) becomes available
4. And, given 1, 2, and 3: select values of regulatory parameters (= select regulatory actions) in the face of actual disturbances.
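The four steps of this regulatory logic can be sketched in code. The following is a minimal illustration only, with all variable names, numbers, and the temperature scenario invented for the example; it is not an implementation from the chapter.

```python
# Illustrative sketch of Ashby's four regulatory steps (all names and
# numbers are invented for this example): an essential variable must be
# kept within its desired range despite disturbances, and the regulator
# selects a compensating action for each disturbance it observes.

# Step 1: essential variable and its desired values.
DESIRED_RANGE = (18.0, 22.0)   # e.g. a temperature to be held stable

# Step 2: disturbances that push the essential variable around.
disturbances = [-3.0, 0.5, 4.0, -1.5]

# Step 3: regulatory potential -- the actions the designed
# infrastructure makes available to the regulator.
actions = [-4.0, -2.0, 0.0, 2.0, 4.0]

def regulate(baseline, disturbance):
    """Step 4: select the action that best cancels the disturbance."""
    best = min(actions, key=lambda a: abs(baseline + disturbance + a - 20.0))
    return baseline + disturbance + best

baseline = 20.0
for d in disturbances:
    value = regulate(baseline, d)
    lo, hi = DESIRED_RANGE
    print(f"disturbance {d:+.1f} -> essential variable {value:.1f}, "
          f"in range: {lo <= value <= hi}")
```

The sketch makes the division of labor visible: steps 1–3 fix the variables, disturbances, and regulatory potential in advance; only step 4 is performed in the face of actual disturbances.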
Moreover, in this Ashby-based notion of regulation, one needs a model of the behavior of the concrete system: a transformation. According to Ashby (1958), a good (conditional, single-valued) transformation relates the selected variables and parameters in such a way that predictions can be made about the behavior of the concrete system. To arrive at such a transformation, the black-box method was introduced – a method enabling a regulator to derive a transformation based only on the values of the variables and parameters chosen to describe the concrete system that should be regulated. Ashby’s black-box method seems to suggest that we can “objectively” select variables and parameters, and derive a transformation connecting them based on trial and error, without, as Ashby puts it, “reference to prior knowledge”. If this is what regulating systems is about, one might say that it does not contain much risk. It is “just” a matter of selecting variables/parameters, observation, and deduction. The risk attached to it may have to do with the mistakes we make in selecting variables/parameters or in deducing a conditional transformation from empirical observations; it may have to do with the time constraints we face while regulating; or with the probabilities appearing in a transformation and governing the behavior of the system.
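The black-box method described above can be sketched as follows. The function and state names are my own illustration: the regulator records observed (operand, transform) pairs and tries to derive a single-valued transformation from them, without reference to prior knowledge of the box’s inside.

```python
# Minimal sketch of the black-box method (names are illustrative):
# derive a transformation purely from observed state pairs.

def derive_transformation(observations):
    """Build a transformation from (operand, transform) observations.

    Returns the mapping if it is single-valued, i.e. every operand is
    always followed by the same transform; otherwise returns None,
    meaning no good (predictive) transformation was found.
    """
    table = {}
    for operand, transform in observations:
        if operand in table and table[operand] != transform:
            return None  # not single-valued under the chosen variables
        table[operand] = transform
    return table

# Trials on a well-behaved box: 'a' is always followed by 'b', etc.
good = [("a", "b"), ("b", "c"), ("a", "b"), ("c", "a")]
print(derive_transformation(good))   # {'a': 'b', 'b': 'c', 'c': 'a'}

# Trials where 'a' is observed to yield two different transforms:
bad = [("a", "b"), ("a", "c")]
print(derive_transformation(bad))    # None
```

The second case shows where the risk enters: if the selected variables do not yield a single-valued transformation, observation and deduction alone produce no usable model.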
Notes
- 1.
This is easy and straightforward with a limited number of input and output states. It quickly becomes tiresome, however, if the number of states increases. If the number of input states is denoted by #X and the number of output states by #Y, the total number of possible trivial machines is (#Y)^(#X). In the example, there are 4 possible trivial machines and the observer needs at most 3 trials to determine the right one. If the number of states increases only moderately, the number of possible machines increases rapidly (cf. von Foerster 1970).
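The count in this note can be checked by brute-force enumeration. The state names below are my own illustration; a trivial machine is simply a function from input states to output states, so there are (#Y)^(#X) of them.

```python
# Checking the trivial-machine count by enumeration (state names are
# illustrative): each machine assigns one output state to every input
# state, so the machines are exactly the functions from X to Y.
from itertools import product

def count_trivial_machines(inputs, outputs):
    # Enumerate every assignment of an output to each input.
    machines = list(product(outputs, repeat=len(inputs)))
    return len(machines)

# With 2 input and 2 output states: 2**2 = 4 possible machines.
print(count_trivial_machines(["x1", "x2"], ["y1", "y2"]))   # 4

# A moderate increase already explodes the count: 4**4 = 256.
print(count_trivial_machines(list("abcd"), list("wxyz")))   # 256
```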
- 2.
Beware: this table is different from the Ashby-based regulation tables, because it gives both the values of the output states and the internal parameter co-determining the output.
- 3.
Although most of the non-trivial machines we have to deal with have a large number of input and output states, von Foerster’s argument even holds for non-trivial machines with few input states and output states (such as the one from our example, which has only four input and four output states).
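The kind of non-trivial machine discussed in notes 2 and 3 can be sketched in a few lines. The transition table below is invented for illustration: the output depends not only on the input but also on a hidden internal state, which the input also changes, so from the outside the same input can yield different outputs on different trials.

```python
# A minimal non-trivial machine (transition table invented for
# illustration): output and next internal state both depend on the
# pair (input, current state); the state is invisible to the observer.

class NonTrivialMachine:
    def __init__(self):
        self.state = 0  # hidden internal parameter
        # (input, state) -> (output, next state)
        self.table = {
            ("a", 0): ("x", 1),
            ("a", 1): ("y", 0),
            ("b", 0): ("y", 0),
            ("b", 1): ("x", 1),
        }

    def step(self, inp):
        out, self.state = self.table[(inp, self.state)]
        return out

m = NonTrivialMachine()
# The same input 'a' produces different outputs as the hidden state flips:
print([m.step("a") for _ in range(4)])   # ['x', 'y', 'x', 'y']
```

This is why the observer’s trial-and-error procedure for trivial machines breaks down here: the observed input–output pairs are no longer single-valued.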
- 4.
To see the resemblance, it is important to note that what von Foerster treats as input for the operation is called “operand” by Ashby, and von Foerster’s output is Ashby’s transform.
- 5.
From standard algebra, one can learn that, for any function f, if a and λ exist such that f(a) = λa, then a is called the eigenvector and λ the eigenvalue. In von Foerster’s formalism, λ seems to be set to 1, and a is called the eigenvalue (e.g., Lipschitz 1987).
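With λ set to 1, a von Foerster-style “eigenvalue” is a fixed point of the operation, a = f(a), which can be found by applying the operation repeatedly. The choice of the cosine function below is my own standard illustration, not taken from the chapter.

```python
# Computing a fixed point a = f(a) by repeated application of f,
# in the spirit of von Foerster's eigenvalues (cosine is my own
# illustrative choice of operation).
import math

def eigenvalue(f, x, tol=1e-10, max_iter=10_000):
    """Iterate f from x until the result no longer changes."""
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("no fixed point found")

a = eigenvalue(math.cos, 1.0)
print(round(a, 6))                     # 0.739085 -- the fixed point of cos
print(math.isclose(a, math.cos(a)))    # True: a = f(a)
```

The point of the illustration is that the stable value is a property of the recursively applied operation, not of any particular starting input.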
- 6.
In many formal representations of nerve cells, such internal states are represented by threshold functions.
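Such a threshold function can be sketched in the style of a McCulloch–Pitts nerve-cell model. The weights and threshold below are chosen for illustration only: the cell fires exactly when the weighted sum of its inputs reaches its internal threshold.

```python
# Sketch of a threshold function as used in formal nerve-cell models
# (weights and threshold invented for illustration).

def threshold_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# An AND-like cell: both inputs must be active to cross the threshold.
weights = [1.0, 1.0]
for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, "->", threshold_neuron(pattern, weights, threshold=2.0))
```

Changing the threshold changes the cell’s input–output behavior without changing its wiring, which is what makes the threshold a natural stand-in for an internal state.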
- 7.
Note that only the index in Pt refers to a moment in time. The other indices indicate that both the eigenvalue and the motor output are based on Pt.
- 8.
“I have presented my conjectures about molecular computational processes only in order to suggest that there are perspectives which point to a participation of the molecules in the great drama of conscious thought […]” (von Foerster 1991, p. 93).
- 9.
“the theory of self-referential, self-enclosed cognition only now gains the form in which […] the inaccessibility of the external world ‘in itself’ […] can be expressed”.
- 10.
Of course, a large number of methods for the divergence of ideas (e.g., tools for fostering creativity) exist and are used to support problem-solving. The main focus here is on more traditional decision aids, e.g., system dynamics or multi-criteria analysis, which have a long tradition as management decision aids.
- 11.
This is congruent with Aristotle’s ideas on responsibility – cf. Hughes (2001).
References
Ackoff, R. L. (1978). The art of problem solving. New York: Wiley.
Ashby, W. R. (1958). An introduction to cybernetics. London: Chapman & Hall.
Cybernetics & Human Knowing (2003). Special issue on von Foerster, 10(3–4).
Dreyfus, H. L., & Dreyfus, S. E. (1986). Mind over machine. Oxford: Basil Blackwell.
Glanville, R. (2003). Machines of wonder and elephants that float through air. Cybernetics and Human Knowing, 10(3–4), 91–106.
Hogarth, R. M. (1994). Judgment and choice. Chichester: Wiley.
Hughes, G. J. (2001). Aristotle on ethics. London: Routledge.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgement under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.
Lindsay, P. H., & Norman, D. A. (1977). Human information processing. New York: Academic Press.
Lipschitz, S. (1987). Linear algebra. New York: McGraw-Hill.
Luhmann, N. (1984). Soziale Systeme. Frankfurt am Main: Suhrkamp.
Luhmann, N. (1990). Das Erkenntnisprogramm des Konstruktivismus und die unbekannt bleibende Realität. In N. Luhmann (Ed.), Soziologische Aufklärung (5) (pp. 31–58). Opladen: Westdeutscher Verlag.
March, J. G. (1978). Bounded rationality, ambiguity, and the engineering of choice. Reprinted In D. E. Bell, H. Raiffa & A. Tversky (Eds.). (1988) Decision Making (pp. 33–57). Cambridge: Cambridge University Press.
Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and cognition: The realization of the living. Dordrecht: Reidel.
Maturana, H. R., & Varela, F. J. (1984). The tree of knowledge. Boston: Shambhala.
Nisbett, R. E., & Ross, L. (1980). Human inferences: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice Hall.
Pask, G. (1992). Different kinds of cybernetics. In G. van de Vijver (Ed.), New perspectives on cybernetics (pp. 11–31). Deventer: Kluwer.
Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155–169.
Rosenhead, J. (ed). (1989). Rational analysis for a problematic world. Chichester: Wiley.
Segal, L. (1986). The dream of reality. New York: Norton.
Schütz, A., & Luckmann, T. (1994). Strukturen der Lebenswelt (I). Frankfurt am Main: Suhrkamp.
Thompson, R. F. (1976). Introduction to physiological psychology. New York: Harper & Row.
Varela, F. J. (1984). Two principles of self-organization. In H. Ulrich & G. J. B. Probst (Eds.), Self-organization and management of social systems (pp. 2–24). Berlin: Springer.
Varela, F. J. (1988). Cognitive science: A cartography of current ideas. Cambridge, MA: MIT Press.
Varela, F. J., Thompson, E., & Rosch, E. (1993). The embodied mind. Cambridge, MA: MIT Press.
von Foerster, H. (1970). Molecular ethology: An immodest proposal for semantic clarification. In G. Ungar (Ed.), Molecular mechanisms in memory and learning (pp. 213–248). New York: Plenum.
von Foerster, H. (1981). Observing systems. Seaside, CA: Intersystems Publications.
von Foerster, H. (1988). KybernEthik. Berlin: Merve Verlag.
von Foerster, H. (1984). Principles of self-organization – in a socio-managerial context. In H. Ulrich & G. J. B. Probst (Eds.), Self-organization and management of social systems (pp. 2–24). Berlin: Springer.
von Foerster, H. (1991). Was ist Gedächtnis, dass es Rückschau und Vorschau ermöglicht? In S. J. Schmidt (Ed.), Gedächtnis (pp. 56–95). Frankfurt am Main: Suhrkamp.
von Foerster, H. (1992). Entdecken oder Erfinden: Wie lässt sich verstehen? In H. Gumin & H. Meier (Eds.), Einführung in den Konstruktivismus (pp. 41–88). München: Piper.
von Foerster, H. (2002). Understanding understanding. Heidelberg: Springer.
von Foerster, H., & Poerksen, B. (2002). Understanding systems. New York: Kluwer.
Winograd, T., & Flores, F. (1986). Understanding computers and cognition. Norwood, NJ: Ablex.
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this chapter
Achterbergh, J., Vriens, D. (2009). The Experimental Arche Continued: Von Foerster on Observing Systems. In: Organizations. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-00110-9_3
Print ISBN: 978-3-642-00109-3
Online ISBN: 978-3-642-00110-9