Machine and person: reconstructing Harry Collins’s categories

  • Original Article
  • AI & SOCIETY

Abstract

Are there aspects of human intelligence that artificial intelligence cannot emulate? Harry Collins uses a distinction between tacit aspects of knowing, which cannot be digitized, and explicit aspects, which can be, to formulate an answer to this question. He postulates three purported areas of the tacit and argues that only “collective tacit knowing” cannot be adequately digitized. I argue, first, that Collins’s approach rests upon problematic Cartesian assumptions—particularly his claim that animal knowing is strictly deterministic and, thus, radically different from human knowing. I offer evidence that human linguistic intelligence depends upon embodied forms of animal intelligence. Second, I suggest the development of deep-learning algorithms means that Collins’s mimesis assumption (that successfully realized explicit instructions to machines are needed to confirm human-artificial intellectual equivalence) is no longer appropriate; equivalent accomplishment of goals is what counts (as he also concedes). However, persons must realize and integrate many goals and also deal with failures; a general-purpose AI capable of integrating all the needs and goals of human existence resists development. Third, I explain how Michael Polanyi’s understanding of tacit knowing, quite different from Collins’s concept of the tacit, exemplifies features that are missing in contemporary AI. I make use of evolutionary theory, studies of animal intelligence, biological insights, individual construction of meaning, and notions of human responsibility to argue for the existence of three categories that distinguish human from artificial intelligence.


Notes

  1. See Harry Collins, Tacit and Explicit Knowledge (Chicago: University of Chicago Press, 2010). Charles Lowney provides a generally appreciative extended review of this book in his “Ineffable, Tacit, Explicable and Explicit: Qualifying Knowledge in the Age of ‘Intelligent’ Machines” [Tradition and Discovery 38:1 (2011–2012)], 18–37.

  2. Collins spends much of his first three chapters exploring the surprisingly elusive notion of explicitness. He summarizes his findings at TEK 81.

  3. Further elaboration on the diverse components included in Polanyi’s notion of tacit knowing and then an analysis of different treatments of the tacit by Harry Collins, Neil Gascoigne and Tim Thornton, and Stephen Turner is found in Walter Gulick, “Relating Polanyi’s Tacit Dimension to Social Epistemology: Three Recent Interpretations” [Social Epistemology 30:3 (2016)], 297–325.

  4. Michael Polanyi, Knowing and Being: Essays by Michael Polanyi, ed. Marjorie Grene (Chicago: University of Chicago Press, 1969), 126–127.

  5. See Gilbert Ryle, The Concept of Mind (London: Hutchinson, 1949); Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011); and Eugene Gendlin, “Befindlichkeit: Heidegger and the Philosophy of Psychology” in Edward S. Casey and Donata M. Schoeller, eds., Saying What We Mean: Implicit Precision and the Responsive Order: Essays by Eugene T. Gendlin (Evanston, IL: Northwestern University Press, 2017).

  6. Eric Kandel, In Search of Memory: The Emergence of a New Science of Mind (New York: W. W. Norton, 2007), 132.

  7. See Jean Bocharova, “The Emergence of Mind: Personal Knowledge and Connectionism” [Tradition and Discovery 41:3 (2014–2015), 20–31] for a helpful discussion of neural nets in relation to Polanyi’s critique of the neural model.

  8. Stephen Wolfram claims that “Purpose is something that comes from history” [“Artificial Intelligence and the Future of Civilization” in John Brockman, ed., Possible Minds: Twenty-Five Ways of Looking at AI (New York: Penguin, 2019), 283]. He is thinking of evolutionary emergence as well as human experience and contrasting history with mere physical process. But there is more to purpose than memory and accumulated continuity. Even in the amoeba’s following traces of glucose there is a nascent form of intentionality supported by primeval emotion. And there is internal drive that is irreducible to external forces. Polanyi gets at this point when he speaks of “an active principle which controls and sustains” the tacit intelligence of animal existence, “an urge to achieve intellectual control over the situations confronting it” [Personal Knowledge (New York: Harper Torchbooks, 1964), 132].

  9. John M. Barry, The Great Influenza (New York: Penguin Books, 2018), 100–101.

  10. Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (New York: Viking, 2019), 48.

  11. Polanyi, Personal Knowledge, x.

  12. Russell, Human Compatible, 84.

  13. Ibid., 89–90.

  14. A number of the essays in Knowing and Being describe and give examples of tacit knowing. Polanyi describes the subsidiary-focal relation as presenting a “from-to” structure of consciousness. There are many factors including language in the “from” dimension in contrast to the relative simplicity of focal meaning, so in a number of articles beginning with “Polanyi’s Theory of Meaning: Exposition, Elaboration, and Reconstruction” [Polanyiana 2:4—3:1 (1992–1993), 7–42], I have argued for a “from-via-to” structure of consciousness in which the “via” represents the articulation provided by language in everyday human consciousness. See Walter Gulick, “Polanyian Biosemiotics and the From-Via-To Dimensions of Meaning” [Tradition and Discovery 39:1 (2012–2013), 18–33] for a recent version of this structure. The three categories I describe at the end of this article, EEK, CCK, and PPK, roughly correspond to the “from,” “via,” and the “to” respectively, each of which is energized by a tacit drive to know and control.

  15. Harry Collins, Artifictional Intelligence: Against Humanity’s Surrender to Computers (Cambridge, UK: Polity Press, 2018).

  16. An important sub-theme in AI is indicated in the book’s subtitle: we must protect ourselves from the manipulation and control that the internet and social media make possible.

  17. To be sure, Collins’s reason for the irreducibility of social knowledge to adequate computerization differs from the reasons I offered in my first crucial point. He bases his claim about irreducibility on the speed of change in social relations, not on the irreducibility of focal insights to lower level subsidiary particulars.

  18. See AI 192—TEK 115 and AI 3–4, 36, and 80 for good examples of what Collins means by a broken text.

  19. Stuart Russell states, “The Turing test is not useful for AI because it’s an informal and highly contingent definition: it depends on the enormously complicated and largely unknown characteristics of the human mind, which derive from both biology and culture. There is no way to ‘unpack’ the definition and work back from it to create machines that will provably pass the test” (Human Compatible, 41).

  20. See Mihály Héder and Daniel Paksi, “Autonomous Robots and Tacit Knowledge” [Appraisal 9:2 (2012), 8–14] for a thoughtful exploration of whether autonomous machines can approximate human intentional action. They introduce their reflections by discussing what might be learned from what a particular bicycle riding robot can and can’t do.

  21. Collins makes the interesting distinction between mimeomorphic and polimorphic actions. “Mimeomorphic actions are carried out by executing the same externally visible behavior every time.” (AI 65) “Actions where the associated behaviors are responsive to context and meaning are called polimorphic actions.” (TEK 55) Collins cites the salute as an example of an intended, mimeomorphic action that does not change according to context, whereas linguistic greetings are polimorphic, changing according to context and culture. Crucially, he thinks all non-human animals, lacking language, can only execute mimeomorphic actions. I will challenge that view shortly.

  22. Collins calls his view of the radical difference between human and other animals “Social Cartesianism”—see TEK 125.

  23. Harry Collins, “Symbols, Strings and Social Cartesianism: Response to Mihály Héder,” Polanyiana 21:1–2 (2012), 59.

  24. Frans de Waal, Are We Smart Enough to Know How Smart Animals Are? (New York: W. W. Norton, 2016), 109–110. Donald Griffin, Jane Goodall, and Marc Bekoff are just a few of the better known writers among a great number of evolutionary biologists, cognitive ethologists, and social neuroscientists who would strongly disagree with the stance towards animal intelligence Collins takes.

  25. Michael Tomasello, A Natural History of Human Thinking (Cambridge, MA: Harvard University Press, 2014), 2.

  26. Eric Kandel, The Disordered Mind: What Unusual Brains Tell Us about Ourselves (New York: Farrar, Straus, and Giroux, 2018), 57.

  27. Discussion with Harry Collins in Cambridge, UK, on June 28, 2019.

  28. See Hubert L. Dreyfus, What Computers Still Can’t Do: A Critique of Artificial Intelligence (Cambridge, MA: MIT Press, 1992).

  29. In a gracious memorial tribute to Dreyfus, Collins summarizes his relationship to and difference from Dreyfus’s world of thought. See “Remembering Bert Dreyfus” (AI & Society 34:2 [June 2019], 373–376).

  30. Susanne Langer, Philosophy in a New Key: A Study in the Symbolism of Reason, Rite, and Art, 3rd ed. (Cambridge, MA: Harvard University Press, 1957). See especially Chapters III and IV.

  31. Imagistic symbols are based on perception just as external signals are, but unlike signals they are freed in imagination from their origins and can relate to possible experience through representation. The connotations of words are even more arbitrarily conventional than images; only very rarely does a word’s shape or sound have a representative function (“bow-wow” or “woof-woof” in imitation of a dog’s bark). The conventionality of words calls into question the idea that the meaning of language can be captured by the bottom-up pattern recognition characteristic of deep learning computers. Collins argues convincingly against Geoffrey Hinton that bottom-up pattern recognition is insufficient to account for linguistic meaning, and that this inability of computers inhibits them from achieving the intelligence of humans. Some mixture of bottom-up representational pattern recognition and top-down meaning or pattern imposition, Collins states, is essential to the way any intelligence can successfully engage and understand the world. A part of Collins’s interesting discussion of these issues in Chapter 6 of AI that needs further development is his qualified acceptance of items like circles, squares, and cones as bottom-up symbols that provide the necessary stability for commonality of perception across cultures. No, the cross-cultural universals are things like water, stones, and people.

  32. Langer, 96–97.

  33. Collins’s discussion of CTK is vitiated by a strange discussion in which he states, “my brain’s neurons are connected to the neurons of every other brain with which it is ‘in touch’ via my five senses” (TEK 132). Even if he is trying to make a point metaphorically, his language is more misleading than illuminating.

  34. Michael Polanyi, Knowing and Being, 120.

  35. Michael Polanyi, Personal Knowledge, 70–77.

  36. Mark Coeckelbergh notes that skill in craftsmanship has a social dimension; it is “about doing things together” [“Skillful Coping with and through Technologies: Some Challenges and Avenues for a Dreyfus-inspired Philosophy of Technology,” AI & Society 34:2 (June 2019), 284]. Both Dreyfus’s emphasis on embodied acts and Collins’s emphasis on social settings are properly honored in this comment.

  37. Satinder Gill adopts a Polanyian perspective with which I agree concerning the importance of the embodied individual as the basis for evaluating Collins’s approach to tacit knowing. She states that “the somatic, the relational and collective categories of Collins are collapsed in this personal act of knowing where the body mediates experience of knowing how, knowing that, and knowing when” [Tacit Engagement: Beyond Interaction (New York: Springer, 2015), 124].

  38. Siddhartha Mukherjee, The Gene: An Intimate History (New York: Scribner, 2016), 197.

  39. My discussion in this section is influenced by Michael Polanyi, The Study of Man (Chicago: University of Chicago Press, 1959), 94–99.

Author information

Correspondence to Walter B. Gulick.

Cite this article

Gulick, W.B. Machine and person: reconstructing Harry Collins’s categories. AI & Soc 38, 1847–1858 (2023). https://doi.org/10.1007/s00146-020-01046-3
