Abstract
In this chapter, I will provide a precise definition of Boundary Set S, setting forth the theoretical and mathematical framework within which to approach immediate awareness and knowing how. For those who find mathematical notation or formulas burdensome, please feel free to simply skip over to sections with which you feel more comfortable. I have tried to get at the more “intuitive” ideas underlying the formal parts by using some illustrations or analogies so as to make the following chapters more reader-friendly.
References
My analysis of this task is not intended as definitive of the steps for doing it. Those steps differ relative to wound presentation, the overall medical condition of the victim, and the conditions for conducting the task, including the availability of surgical tools, medications such as anaesthesia, and other aids. For example, how the task is performed can differ depending upon whether one is in a controlled sterile environment, such as a hospital surgical OR, or an uncontrolled environment, such as combat. I am assuming the most primitive conditions.
This was especially the case in combat conditions that existed in prior wars fought by the U.S. Those kinds of conditions no longer generally exist.
The terms ‘proximal’ and ‘distal’ are borrowed from anatomy, but can be used to unfold the structure [or anatomy] of our knowing.
See Stephen Kosslyn, “Visual Mental Images in the Brain: How Low Do They Go,” presented at a meeting of the American Association for the Advancement of Science on the Cognitive Neuroscience of Mental Imagery, February, 2002. According to Kosslyn, using images this way also causes the same effects on memory and the body as occur during actual perception, but the two functions are not identical.
The description here assumes few technological assists such as X-ray, as in severe combat conditions.
Gary Stix, “Boot Camp for Surgeons,” in Scientific American, September 1995, p. 24.
Ian Stewart, Nature’s Numbers, New York, Basic Books, 1995, p. 123.
Throughout this section, I follow Stuart Kauffman’s use of random Boolean networks. See his The Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press, 1993, and his At Home in the Universe: The Search for the Laws of Self-Organization and Complexity, Oxford University Press, 1995.
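For readers who want a concrete feel for what a random Boolean network is, the following sketch builds and iterates one until it falls onto a state cycle. The network size, the connectivity K = 2, and all helper names are illustrative assumptions of mine, not parameters taken from Kauffman.

```python
import random

def make_rbn(n, k, seed=0):
    """Build a random Boolean network: each of n nodes reads k randomly
    chosen inputs and applies a randomly chosen Boolean function of them."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    # A Boolean function of k inputs is a lookup table with 2**k entries.
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its inputs' current values."""
    new = []
    for node in range(len(state)):
        idx = 0
        for src in inputs[node]:
            idx = (idx << 1) | state[src]
        new.append(tables[node][idx])
    return new

def find_cycle(n=12, k=2, seed=1, max_steps=5000):
    """Iterate from a random start state until a state repeats; the
    repeated segment is the network's state-cycle attractor."""
    rng = random.Random(seed)
    inputs, tables = make_rbn(n, k, seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    seen = {}
    for t in range(max_steps):
        key = tuple(state)
        if key in seen:
            return t - seen[key]  # length of the state cycle
        seen[key] = t
        state = step(state, inputs, tables)
    return None
```

Since the state space of a 12-node network has only 2¹² = 4096 states and the dynamics are deterministic, a cycle is always found. Kauffman's observation is that K = 2 networks tend to settle onto remarkably short cycles.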
Ian Stewart, Nature’s Numbers, New York, Basic Books, 1995, p. 100.
M. Estep, “Toward Alternative Methods in Systems Analysis: The Case of Qualitative Knowing,” in Cybernetics and Systems Research, Vol. 2, Robert Trappl, (ed.), Elsevier Science Publishers B.V. (North-Holland), 1984.
I am using the term immense as defined by Walter M. Elsasser, Atom and Organism: A New Approach to Theoretical Biology, Princeton University Press, 1966, as cited by Alwyn Scott in Stairway to the Mind, Springer-Verlag, 1995. An immense number is of the order J = 10^110. In contrast to a finite number of items, which can be put on a list and examined, this is not possible for an immense number of items. There would not be sufficient memory capacity in any computer that could ever be built to store an immense number of items.
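The point about immensity can be made with a line of arithmetic. The figure of roughly 10^80 atoms in the observable universe is a common order-of-magnitude estimate I am assuming here, not one taken from Elsasser or Scott.

```python
# Elsasser's threshold of "immensity": about 10**110 items.
IMMENSE = 10 ** 110
# Assumed order-of-magnitude estimate of atoms in the observable universe.
ATOMS = 10 ** 80
# Even a memory storing one item per atom falls short by a factor of 10**30.
SHORTFALL = IMMENSE // ATOMS
```

So no physically realizable store, however encoded, approaches an immense list.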
E. Steiner, Methodology of Theory Building, Sydney, Australia, Educology Research Associates, 1988.
F. Harary, Graph Theory, Massachusetts, Addison-Wesley, 1969, p. 199.
M. Estep, “Toward a SIGGS Characterization of Epistemic Properties of Educational Design,” in Applied General Systems Research, George Klir, (ed.), NATO Conference Series, New York, Plenum Press, 1978, pp. 917–935.
See Ralph Grimaldi, Discrete and Combinatorial Mathematics, Third Edition, Reading, Massachusetts, Addison-Wesley Publishing Company, 1994, pp. 374–375.
For the sake of argument, we assume the epistemic universe is like the filled Julia set of a polynomial map on the Riemann sphere, where S = ℂ ∪ {∞}, of the form g(z) = z² + c. The boundary Julia set is the set of points that do not go off to infinity under iteration of g. [See Blum, 1989.] Boundary Set S is rule-bound in this sense.
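A minimal sketch of the escape-time test behind this idea: a point belongs to the filled Julia set of g(z) = z² + c just when its orbit stays bounded under iteration. The iteration cap and escape radius below are conventional choices of mine, not values from the text.

```python
def in_filled_julia(z, c, max_iter=200, radius=2.0):
    """Escape-time test: a point belongs to the filled Julia set of
    g(z) = z**2 + c if its orbit stays bounded under iteration."""
    for _ in range(max_iter):
        if abs(z) > radius:
            return False  # orbit escapes to infinity
        z = z * z + c
    return True
```

For c = 0 the filled Julia set is the closed unit disk, so a start at 0.5 stays bounded while a start at 1.5 escapes; the boundary Julia set is the circle between the two behaviors.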
James Albus, Brains, Behavior, and Robotics, Peterborough, New Hampshire, BYTE Books, 1981.
Hartley, 1928.
I arbitrarily sorted these kinds into a matrix of one/many paths and one/many termini. ‘Pr’ stands for “protocolic,” which means one path to one terminus; it is also possible to have a performance with one path but leading to many termini; ‘Co’ stands for “conventional,” which means many paths leading to a single terminus or many paths leading to many termini. ‘In’ stands for “innovative,” which means combining one or more given paths with one or more given termini in new ways; ‘Cr’ stands for “creative” which means producing new paths or new termini.
Ian Stewart, Nature’s Numbers, New York, Basic Books, 1995, p. 117.
Ibid., p. 94.
James L. McClelland and David E. Rumelhart, Parallel Distributed Processing, Volumes 1 and 2, Cambridge: MIT Press, 1986.
Stuart Kauffman, At Home in the Universe: The Search for the Laws of Self-Organization and Complexity, New York, Oxford University Press, 1995, p. 18.
Ibid.
Bertrand Russell, 1984, p. 80.
Kauffman, 1995, p. 56.
Computationally, the size of a problem instance is the length of (the description of) the instance, measured in some standard unit, usually bits of a binary encoding.
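As a small worked example (the helper name is mine), the standard binary length of a positive integer n is its bit length, so the instance "1024" has size 11, not 1024:

```python
def instance_size_bits(n):
    """Length, in bits, of the standard binary encoding of a positive
    integer n -- the usual measure of the size of a problem instance."""
    return n.bit_length()
```

This is why complexity is measured against the length of the description rather than the magnitude of the number described.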
See Russell, 1984.
Alwyn Scott, Stairway to the Mind, Springer-Verlag, 1995, p. 129f, gives an excellent summary of Koch and Crick’s statement of the problem.
Ibid., p. 129.
See Cohen and Stewart, The Collapse of Chaos, New York, Penguin Books, 1994, p. 41. I have obviously used an example which is not strictly bodily kinaesthetic, but the relation between primitive relations such as imagining and other kinds of knowing is evident.
See Gottlob Frege, “Über Begriff und Gegenstand” in Vierteljahrsschrift für wissenschaftliche Philosophie, 16, 1892, pp. 192–205; and “Der Gedanke” in Beiträge zur Philosophie des deutschen Idealismus, I, 1918–1919, pp. 58–77; and “Gedankengefüge” in Beiträge zur Philosophie des deutschen Idealismus, 3, 1923–1926, pp. 36–51.
I am referring here to Hebb’s postulate of learning. See D. O. Hebb, The Organization of Behavior, New York, Wiley, 1949. According to Haykin, [Simon Haykin, Neural Networks: A Comprehensive Foundation, New York, Macmillan, 1994, pp. 51–53], Hebb’s postulate has been a subject of intense experimental interest among neurophysiologists and neuropsychologists for many years. Empirical research has shown that “a time-dependent, highly local, and strongly interactive mechanism is responsible for one form of long-term potentiation (LTP) in the hippocampus.” LTP is a use-dependent and long-lasting increase in synaptic strength that may be induced by short periods of high-frequency activation of excitatory synapses in the hippocampus. Experimentation showing that LTP in certain hippocampal synapses is Hebbian appears to have been replicated by other investigators.
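The simplest formal reading of Hebb's postulate is that a synaptic weight grows in proportion to the product of pre- and postsynaptic activity. The learning rate and the binary activity values below are illustrative assumptions of mine; real LTP, as the note says, is time-dependent and highly local.

```python
def hebbian_update(w, x_pre, y_post, eta=0.1):
    """Hebb's rule in its simplest form: delta-w = eta * x_pre * y_post,
    so the weight strengthens only when both activities coincide."""
    return w + eta * x_pre * y_post

w = 0.0
# Repeated coincident activity (as in high-frequency LTP induction)
# progressively potentiates the synapse.
for _ in range(10):
    w = hebbian_update(w, 1.0, 1.0)
```

If either activity is zero, the product vanishes and the weight is unchanged, which captures the "strongly interactive" character of the mechanism.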
See Miguel A. L. Nicolelis, et al, “Sensorimotor Encoding by Synchronous Neural Ensemble Activity at Multiple Levels of the Somatosensory System,” in Science, AAAS, Vol. 268, 2 June 1995.
Kauffman, 1995, p. 84, cites the research of Derrida and Weisbuch showing this.
The relations between these kinds of ordered but flexible behavior in humans and lower animals have been explored to some degree in research into the rhythmic movements of the rat’s trigeminal system. That system is a multilevel, recurrently interconnected neural network generating complex emergent dynamic patterns. See Nicolelis, et al., “Sensorimotor Encoding by Synchronous Neural Ensemble Activity at Multiple Levels of the Somatosensory System,” in Science, AAAS, Vol. 268, 2 June 1995.
Kauffman, 1991. Also see Stein, 1988.
Kauffman, 1995, p. 78.
The concept basin of attraction is given meaning within stability theory. ‘Stability’ is a general term characterizing response to perturbations. Stability applies to different aspects of a dynamical system and can refer to individual points (local stability), to trajectories (asymptotic local stability), to families of trajectories (attractors), or to an entire dynamical system (that is, whether or not there is a unique attractor). Following Eubank and Farmer [1989], an attractor is an invariant set of components (points) that “attracts” nearby states, and the basin of attraction is the set of points that are attracted to it. Formally, Λ is an attractor if there is a neighborhood N about it such that F^t(N) → Λ as t → ∞, and Λ cannot be broken into pieces Λ₁, Λ₂, … such that F^t(Λ₁) ∩ F^t(Λ₂) = Ø. The basin of the attractor is the set of points attracted to Λ, that is, {x : F^t(x) → Λ as t → ∞}. There may be many attractors in any given dynamical system, each with its own distinct basin of attraction.
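A one-dimensional toy map makes the basin idea concrete. Under F(x) = x², the point 0 attracts every start with |x| < 1, while orbits with |x| > 1 escape toward infinity; this map is my illustration, not one drawn from Eubank and Farmer.

```python
def basin_label(x, steps=100):
    """Iterate F(x) = x**2 and report which basin the start point lies in:
    the attractor at 0 (basin |x| < 1) or escape to infinity (|x| > 1)."""
    for _ in range(steps):
        if abs(x) > 1e6:
            return "escapes"
        x = x * x
    return "attracted to 0"
```

The boundary between the two basins is the pair of points x = ±1, the one-dimensional analogue of a boundary Julia set.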
Much of supervised and reinforcement learning theory as applied to neural networks is based upon a reactive stimulus-response model, not an anticipatory, proactive model. Elsewhere, I have argued that the concept ‘learning’ and the concept ‘coming to know’ are neither identical nor equivalent. It is the latter that these theorists aim for, yet it requires epistemological analysis, not psychological analysis as does ‘learning’. Where I consider the property of self-organization in neural networks, I will of course be addressing only unsupervised learning [or coming to know]. From this, it is also evident that I will not be concerned with backpropagation.
Cohen and Stewart, 1994.
At this point, I want to stress again that a great deal of caution must attend the use of dynamical systems theory and phase space models of Boundary Set S. My effort is to use the best and most precise mathematical tools and models available to a scientific as well as philosophic study of the nature of this kind of knowing, which, in terms of its epistemic properties, has much in common with what Dreyfus earlier referred to as the commonsense know-how and understanding of human beings.
This has been established over the past several decades in both engineering and computational neuroscience. For example, see D. J. Bell, Mathematics of Linear and Nonlinear Systems, Oxford University Press, 1990; A. G. Feldman, Biophysics, Vol. 11, p. 565, 1966; various publications by Hiroaki Gomi and Mitsuo Kawato, most recently their “Equilibrium-Point Control Hypothesis Examined by Measured Arm Stiffness During Multijoint Movement,” in Science, American Association for the Advancement of Science, Vol. 272, 5 April 1996, pp. 117–120; and Lenore Blum, Lectures on a Theory of Computation and Complexity over the Reals (or an Arbitrary Ring), Berkeley, International Computer Science Institute, 1989.
Stuart Kauffman, 1993 and 1995.
S. Wright, “Evolution in Mendelian Populations,” Genetics, Vol. 16, p. 97, 1931; and S. Wright, “The Roles of Mutation, Inbreeding, Crossbreeding and Selection in Evolution,” Proceedings of the Sixth International Congress in Genetics, Vol. 1, p. 356, 1932.
See Berthoz and Israel.
Substantial empirical research has established this claim, in addition to that of Berthoz and Israel. See Gardner’s Frames of Mind: The Theory of Multiple Intelligences, Basic Books, 1993. See especially references included under bodily-kinaesthetic intelligence.
A thorough analysis of the epistemic structure of touching requires an analysis of probes and their epistemic and spatial relations to our body. Moreover, what we know of the human use of the fingers to explore or come to know the texture and shape of objects has much in common with results of scientific neural experimentation with the rat trigeminal system. We know that rats rely on rhythmic movements of their facial whiskers much as humans rely on coordinated movements of fingertips to explore or come to know objects in their proximal environment. The trigeminal system is a multilevel, recurrently interconnected neural network which generates complex emergent dynamic patterns of neural activity manifesting synchronous oscillations and even chaotic behavior [see Nicolelis, et al, “Sensorimotor Encoding by Synchronous Neural Ensemble Activity at Multiple Levels of the Somatosensory System,” in Science, AAAS, Vol. 268, 2 June 1995].
Again, the terms ‘close’ and ‘distant’ as related to epistemic relations have meaning in relation to proximity with the human body, the ultimate instrument of all our external knowing. I am not happy with the distinction between touching and moving as I have left it here, and am not resigned to the distinctions between them as I have drawn them.
See Stephanie Forrest and John H. Miller, “Emergent Behavior in Classifier Systems,” in Emergent Computation, Stephanie Forrest, (ed.), Cambridge: MIT Press, 1991, pp. 213–227.
© 2003 Springer Science+Business Media Dordrecht
Estep, M. (2003). Boundary Set S: At the Core of Multiple Intelligences. In: A Theory of Immediate Awareness. Springer, Dordrecht. https://doi.org/10.1007/978-94-017-0183-9_5