AI & SOCIETY

Volume 6, Issue 3, pp 197–220

The social acceptability of AI systems: Legitimacy, epistemology and marketing

  • Romain Laufer

Abstract

The expression ‘the culture of the artificial’ results from the confusion between nature and culture, when nature mingles with culture to produce the ‘artificial’ and science becomes ‘the science of the artificial’. Artificial intelligence can thus be defined as the ultimate expression of the crisis affecting the very foundation of the system of legitimacy in Western society, i.e. Reason, and more precisely, Scientific Reason. The discussion focuses on the emergence of the culture of the artificial and the radical forms of pragmatism, sophism and marketing from a French philosophical perspective. The paper suggests that in the postmodern age of the ‘crisis of the systems of legitimacy’, the question of the social acceptability of any action, especially actions arising out of the application of AI, cannot be avoided.

Keywords

Artificial intelligence · Culture of the artificial · Epistemology · Sophism · Common sense psychology · Social acceptability · Postmodernism · Aesthetics


Notes and References

  1. Herbert Simon, La science des systèmes, Paris (EPI), 1974.
  2. When one considers the debate on the social acceptability of Artificial Intelligence, one must immediately distinguish between those who believe it exists and is (or will be) well defined, those who believe it is not possible to say whether it exists or not, and those who believe it is possible to prove it does not exist. The first category can be termed ‘A.I. scientists’. This category can be divided into the optimists and the pessimists. The optimists constitute what Dreyfus calls the ‘Artificial Intelligentsia’, i.e. Simon, Minsky, Feigenbaum. They believe that what is at stake is achieving two goals through the operation of computers: the understanding of human intelligence and the use of the power of intelligence thus understood. The pessimists correspond to a group that we shall designate by the expression ‘concerned scientists’. While this category is logically possible and should be composed of preoccupied scientists, I must say that I cannot find a concrete example of this position. This may be due to the fact that those who are negative with respect to A.I. tend to deny its very existence. The second category can be called ‘A.I. technologists’, as they believe that A.I. systems are technological systems which should not necessarily be assimilated with the production of actual intelligence, even if their performance in terms of simulating human behavior cannot be limited on an a priori basis. The optimists of this category can be found among the many researchers in the field of A.I. who, although they do not share the ideology or dogmas of the ‘Artificial Intelligentsia’, devote their time and interest to the development of ever more sophisticated systems. The pessimists could be termed ‘concerned technologists’, as they are afraid of the power developed by these techniques and would like to see limitations put on certain types of applications. Joseph Weizenbaum seems to support such a position in Computer Power and Human Reason.
     Finally, we find those who think it can be proven that developing A.I. is not a project one can carry out with a computer. These could be called the ‘Sceptics’. Here again two positions can be found. The pessimistic view is best represented by H. Dreyfus in his philosophical criticism of the notion of A.I. We may note that this pessimism concerns the very existence of A.I. and not its consequences, as something which does not exist cannot have consequences — at least, as long as nobody contends it exists and persuades other people of its existence. This explains the latent accusation made against the ‘Artificial Intelligentsia’ by Dreyfus, who sees the latter's enthusiasm and certitude relative to the existence of A.I. as part of a conscious or unconscious strategy to obtain funding and recognition from business and academia: in other words, A.I. could be the name of a marketing strategy rather than the name of an intellectual achievement. This accusation is made explicitly by Winograd and Flores, who share with Dreyfus the idea that A.I. is not a philosophically sound concept, but have an otherwise positive attitude toward the technological developments which have taken place under this relatively inadequate banner.
  2a. H. A. Simon and A. Newell, ‘Heuristic Problem Solving: The Next Advance in Operations Research’, Operations Research, Vol. 6, Jan.–Feb. 1958. M. Minsky, La société de l'esprit, Paris (Interéditions), 1988. E. A. Feigenbaum and J. Feldman, Computers and Thought, New York (McGraw-Hill), 1963.
  2b. J. Weizenbaum, Computer Power and Human Reason, San Francisco (W. H. Freeman and Company), 1976.
  2c. H. L. Dreyfus, Intelligence Artificielle: mythes et limites, Paris (Flammarion), 1984.
  2d. T. Winograd and F. Flores, L'intelligence artificielle en question, Paris (PUF), 1989.
  3. These hypotheses have already been defined and applied in earlier works. See R. Laufer and C. Paradeise, Marketing Democracy: Public Opinion and Media Formation in Democratic Societies, New Brunswick (Transaction Books), 1990, and R. Laufer, ‘The Question of the Legitimacy of the Computer: An Epistemological Point of View’ in J. Berleur et al., The Information Society: Evolving Landscapes, New York (Springer-Verlag/Captus), 1990.
  4. We could note that the Turing Criterion for artificial intelligence does not depend on the fact that the machine behind the curtain is a Turing machine. H. Dreyfus' critique of A.I., however rigorous and complete, addresses essentially this special case, where the ‘body’ (or material part of the machine) is reduced to what is required for a Turing machine to be specified.
  5. Le Robert, Le Larousse du XXème siècle, Webster's, Le Dictionnaire Philosophique de Lalande.
  6. Encyclopedia Universalis, p. 1252.
  7. M. Narcy, ‘A qui la parole? Platon et Aristote face à Protagoras’ in B. Cassin, ed., Positions de la Sophistique, Paris (VRIN), 1986.
  8. For a more detailed analysis, cf. R. Laufer and C. Paradeise, op. cit., pp. 170–189.
  9. I. Kant, Logique, Paris (VRIN), 1982.
  10. I. Kant, The Critique of Pure Reason, trans. J. M. D. Meiklejohn, London and Melbourne (Dent), 1984, p. 210.
  11. I. Kant, The Critique of Pure Reason, London (Oxford University Press), 1958, p. 351.
  12. I. Kant, Anthropologie, Paris (VRIN), 1979.
  13. I. Kant, The Critique of Pure Reason, London (Dent), 1984, p. 269.
  14. Michel Serres, La Traduction, Paris (Editions de Minuit), 1974, p. 165.
  15. Auguste Comte, Cours de Philosophie Positive, 45ème leçon, in Pierre Arnaud, Textes Choisis, Paris (Bordas), p. 65.
  16. Herbert Simon, La Science des Systèmes, Paris (EPI), 1974.
  17. Joëlle Proust, Questions de Forme, Paris (Fayard), 1988.
  18. A. Soulez, ed., Manifeste du Cercle de Vienne, Paris (PUF), 1985.
  19. E. Durkheim, Sociologie et Pragmatisme, Paris (VRIN), 1955, pp. 28–29.
  20. S. Stich, From Folk Psychology to Cognitive Science: The Case Against Belief, Cambridge (MIT Press), 1986.
  21.
  22.
  23.
  24. E. Husserl, La Crise des sciences européennes et la phénoménologie transcendantale, Paris (Gallimard), 1976, p. 89.
  25. Guy Cellérier, ‘La psychologie génétique et le cognitivisme’ in Le Débat, Paris, Nov.–Dec. 1987, no. 47.
  26. On the relationship between art and marketing see Romain Laufer, ‘Système de Légitimité, art et marketing’, in Rencontres de l'Ecole du Louvre, Paris (Documentation Française), 1987.
  27. We may note that medieval theology did use the expression ‘divine intelligence’. This could be accounted for by the fact that, for the divinity, to know and to do are the same (pragmatic) thing.
  28. Translated from H. Dreyfus, Intelligence Artificielle: Mythes et Limites, Flammarion, 1979, p. 3. H. Dreyfus first quotes Plato, Euthyphro VII, then Marvin Minsky, Computation: Finite and Infinite Machines, Englewood Cliffs (Prentice-Hall), 1967, p. 106.
  29. William James, Philosophie de l'Expérience, Paris (Flammarion), 1914, pp. 208–209.
  30. P. Feyerabend, Contre la Méthode, Paris (Seuil), 1979.
  31. Jürgen Habermas, Theorie des kommunikativen Handelns, Frankfurt/Main (Suhrkamp), 1981.
  32. Jean-François Lyotard, La Condition Postmoderne, Paris (Minuit), 1979; Le Différend, Paris (Minuit), 1984.

Copyright information

© Springer-Verlag 1992

Authors and Affiliations

  • Romain Laufer
  1. Groupe HEC, Jouy-en-Josas, France
