From Intelligence to Rationality of Minds and Machines in Contemporary Society: The Sciences of Design and the Role of Information


The presence of intelligence and rationality in Artificial Intelligence (AI) and the Internet requires a new context of analysis in which Herbert Simon’s approach to the sciences of the artificial is surpassed in order to grasp the role of information in our contemporary setting. This new framework requires taking into account some relevant aspects. (i) In the historical endeavor of building up AI and the Internet, minds and machines have interacted over the years and in many ways through the interrelation between scientific creativity and technological innovation. (ii) Philosophically, minds and machines can have epistemological, methodological and ontological differences, which are based on the distinct configuration of human intelligence and artificial intelligence. Their comparison with rationality and its various forms is particularly relevant. (iii) Scientifically, AI and the Internet belong to the sciences of the artificial, because they work on designs that search for specific aims, following selected processes in order to achieve expected results. (iv) Technologically, AI and the Internet require the support of information and communication technologies (ICT). These have an instrumental role regarding the existence of AI and the Internet. ICT also shape their diverse forms of configuration over the years.

Within this framework, this paper offers a new context of analysis that goes beyond Simon’s and follows four main steps: (i) the interaction between scientific creativity and technological innovation as the philosophico-methodological setting for Artificial Intelligence and the Internet; (ii) artificial intelligence and human intelligence as the epistemological basis for machines and minds, where the differences between artificial intelligence and human intelligence are made explicit (under the consideration of “computational intelligence”) and the analysis of minds and machines is made from the perspective of rationality (“symbolic rationality” and “adaptive rationality”); (iii) intention and its difference from the design of machine learning are considered in order to distinguish human intelligence from artificial intelligence; and (iv) the internal and external aspects of artificial designs in contemporary society are considered through the perspective of rationality, which leads to the transition from intelligence to rationality in the Internet as well as to the historicity of information (how aims, processes, and results can be based on conceptual revolutions).


  1.

    The historical and thematic coordinates of AI can be found in Copeland and Proudfoot (2007); and Copeland (1993). On methodological issues, see Gillies (1996); and Cristianini (2014).

    Many other aspects were discussed in the papers published in Boden (1990); Floridi (2004); and Frankish and Ramsey (2014). The discussion on human mind is a complementary topic to the present paper. A new approach to this well-known topic can be found in Ludwig (2015).

  2.

    An analysis of Herbert Simon’s approach to AI was already made in Gonzalez (2007a). All the references to Simon’s publications related to the topics discussed in this paper are available in Gonzalez (2003).

  3.

    “The phrase ‘artificial intelligence’ is invoked as if its meaning were self-evident, but it has always been a source of confusion and controversy”, Lewis-Kraus (2016, p. 5).

  4.

    Science and technology can be distinguished in conceptual terms even though they are closely interrelated in practical terms. Thus, scientific activities and technological undertakings are conceptually distinguishable in terms of aims, processes, and results as well as regarding values on ends and means. See Gonzalez (2015a).

  5.

    Solving concrete problems in a practical sphere is the crucial feature of applied sciences, see Niiniluoto (1993).

  6.

    “Scientification” of a practice is achieved when the researcher is able to identify the patterns in the solving of concrete problems within a given practical sphere. Pharmacology is a science of design where this transition from practical knowledge to a genuine scientific contribution frequently occurs.

  7.

    On the difference between “applied science” and “application of science” and the role of technology, see Gonzalez (2013b, especially, pp. 17–18).

  8.

    Simon (1996) is the third edition. The first edition appeared in 1969; it was updated in the second edition (1981) and again in the final version, published 15 years later.

  9.

    On his contributions to AI, see Simon (1991b).

  10.

    These points on Simon’s approach are discussed in a number of publications, such as Gonzalez (2008, 2011a, 2013a, and 2015b, ch. 8, pp. 203–228).

  11.

    For Dasgupta, Simon adopts operationalism insofar as “to know or understand something (such as a concept or an idea) one must know the operations (or procedures or rules) by which that something can be realized” (Dasgupta 2003, pp. 688–689).

  12.

    James Fetzer points out “the static difference between computer systems and thinking things,” which is related to the distinction between signs that are meaningful for use in a system (“symbol systems”) and signs that are meaningful for the users of the system (“semiotic systems”), where the marks can function as signs for the users of the systems without having to work as signs for those systems.

    Fetzer also mentions a dynamic difference between computer systems and thinking things. This happens when “digital machines are under the control of programs as causal implementations of algorithms, where ‘algorithms’ in turn are effective decision procedures. Effective decision procedures are completely reliable in producing solutions to problems within appropriate classes of cases that are invariably correct and they do in a finite number of steps. If these machines are under the control of algorithms but minds are not, then there is a dynamic difference” (Fetzer 2004, p. 130).
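    Fetzer’s notion of an effective decision procedure can be illustrated with a minimal sketch (my example, not Fetzer’s): trial-division primality testing halts after finitely many steps for every integer input and invariably returns the correct answer within its class of cases.

    ```python
    def is_prime(n: int) -> bool:
        """An effective decision procedure in Fetzer's sense: for every
        integer input it halts after finitely many steps and returns the
        invariably correct answer to 'is n prime?'."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:      # at most sqrt(n) iterations: always finite
            if n % d == 0:
                return False   # a divisor was found: n is composite
            d += 1
        return True            # no divisor up to sqrt(n): n is prime
    ```

    The contrast Fetzer draws is that a digital machine running this procedure is causally bound to its steps, whereas, on his view, minds are not under the control of algorithms in this sense.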

  13.

    Luciano Floridi has insisted on the revolutionary changes introduced by the Internet based on the “digital revolution.” See Floridi (1999, pp. 1–9 and 56–87). These changes can commonly be explained in terms of conceptual revolutions, as has happened with other well-known changes in the history of science.

  14.

    In the history of Artificial Intelligence, this interaction between scientific creativity and technological innovation can be traced in the research topics of AI. Among them are machine learning, evolutionary computing, expert systems, neuron-like computing, … See, in this regard, Copeland and Proudfoot (2007, pp. 429–482; especially, pp. 429–446).

    There is a connection between “data driven” approaches or “statistical AI” and machine learning technology: “From spelling correction to face recognition, including question answering, machine translation, information retrieval, a series of problems in machine intelligence were (partly) conquered over the past decade by the deployment of data intensive methods,” Cristianini (2014, p. 38).

  15.

    Among other factors, the relations between scientific creativity and technological innovation depend on the kinds of knowledge involved in both and their interaction, see Gonzalez (2013b).

  16.

    Machine learning is central to the tasks performed in social networks such as Facebook, where the software learns by crunching data rather than by being explicitly programmed. The Economist, v. 491, n. 8984, 9–15 April 2016, p. 9.

  17.

    Here it is assumed that “knowledge” is more than “information” as well as more than “data”, insofar as knowledge categorizes and organizes information and gives the context for data. This triple distinction is emphasized by Nicholas Rescher (1999).

  18.

    This book is the first volume of the systematic presentation of Rescher’s philosophy: A System of Pragmatic Idealism. See also Rescher (2003).

  19.

    See, in this regard, Hinton and Salakhutdinov (2006), “Reducing the Dimensionality of Data with Neural Networks,” Science, v. 313 (28 July 2006), pp. 504–507.

    Certainly, Artificial Intelligence programs can be designed and implemented for one kind of phenomena and thereafter deployed in new contexts. The interaction between programs and contexts can affect the behavior of the system, which also includes the possibility of a change in the code itself. Recent contributions to AI through neural nets can have this type of evolution.

    In this regard, the work done at the London firm BenevolentAI seems to me of interest, because BenevolentAI’s version of AI “is a form of machine learning that can draw inferences about what it has learned. In particular, it can process natural language and formulate new ideas from what it reads. Its job is to sift through vast chemical libraries, medical databases, and conventionally presented scientific papers, looking for potential drug-molecules,” Article “The shoulders of gAInts (Medicine and Computing),” The Economist, January 7th 2017, p. 60.

  20.

    See, in this regard, Rescher (1988, pp. 2–3).

  21.

    Although this is the common case, there is the possibility of writing a program that is specifically designed to be surprising. This might be done, for example, if the program can rewrite itself to do something that the original designer never anticipated (i.e., a kind of “mutation”).
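    The possibility mentioned here — a program that rewrites itself — can be sketched in a few lines (a toy illustration of the idea, not an actual AI system): the program holds its own rule as data, “mutates” it, and executes the new version, so its later behavior is no longer the one the original code prescribed.

    ```python
    # Toy sketch of a self-rewriting program (illustrative only):
    # the program stores one of its own functions as source text,
    # mutates that text, and re-executes it, changing its behavior.
    source = "def step(x):\n    return x + 1\n"

    namespace = {}
    exec(source, namespace)
    original = namespace["step"](3)   # behaves as written: 3 + 1

    # A "mutation": the program rewrites its own rule.
    mutated_source = source.replace("x + 1", "x * 2")
    exec(mutated_source, namespace)
    mutated = namespace["step"](3)    # new behavior: 3 * 2

    print(original, mutated)          # 4 6
    ```

    Whether such a mutation counts as genuine “surprise” or merely as a further, higher-order design is precisely the philosophical question at stake in this note.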

  22.

    The technological innovations in Bletchley Park (such as the Bombe or Colossus I and II) were a consequence of the need to gather information for practical purposes. See Hodges (2014).

  23.

    On the contacts between von Neumann and Turing, see Hodges (2014), pp. 151, 158, 163, 167, 183–184, and 520. On their approaches and the results obtained following them, see Hodges (2014), pp. 109, 111, 121–122, 149, 161, 376–377, 379–382, 406–407, 409, 413, 428, 430–431, 444, 446, 489, 513–514, 520, 526, 624, and 654.

  24.

    Commonly, AI includes operativity in the performance of some functions, either with well-established goals or with evolving objectives, and some type of inference, either according to certain rules run by the program or through patterns that evolve from the designs.

  25.

    Cf. Hodges (2014), p. 455. Regarding “intelligence,” Turing’s biographer also recognizes that “when Alan first began to use the word, it was applied to chess playing and other kinds of puzzle-solving. (…) But people had always used the word [intelligence] in a broader sense, involving some insight into reality, rather than the ability to achieve goals or solve puzzles or break ciphers,” Hodges (2014, p. 535).

    Mark Zuckerberg, the founder of Facebook, a big company related to the Internet, has maintained that “we're nowhere near understanding how intelligence actually works” (Mobile World Congress, Barcelona, 22.2.2016, accessed on 9.8.2016).

  26.

    According to Jack Copeland and Diane Proudfoot, Turing’s section on “Intelligence as an emotional concept” (1948) sets out the externalist thesis that whether an entity—real or artificial—is intelligent “is determined, at least in part, by our responses to the entity’s behaviour,” (Copeland and Proudfoot 2007, p. 451).

  27.

    Turing also considers “something as behaving in an intelligent manner,” (Turing 1948; Copeland 2004, p. 431).

  28.

    Cf. Hodges (2014, p. 455). On the issue of the IQ tests, see Urbach (1974).

  29.

    This is basically what Peter Strawson used to call the “principle of significance,” which he associated with Kant’s views on human knowledge. See Strawson (1966, p. 16).

  30.

    On this sphere of the values, see Rescher (1993, 1999, ch. 3, pp. 73–96). Rescher (1993) is the second volume of his book A System of Pragmatic Idealism.

  31.

    Regarding the differences between change understood as “evolution” and change understood as “historicity,” see Gonzalez (2011c).

  32.

    This dual aspect of historicity is also present in the sciences of design. In addition, there are other aspects of historicity, such as the interrelation of researchers over time. In the case of Alan Turing, this can be seen in Andrew Hodges’s book Alan Turing: The Enigma. In the case of Simon, there is also plenty of information in his intellectual autobiography Models of My Life.

  33.

    See Turing (1950). In this paper he presented the “Turing test” for machine intelligence. There is another version of the “Turing test” in a BBC radio broadcast recorded in January 1952. See Copeland and Proudfoot (2007, pp. 448–449) as well as Sterrett (2000), “Turing’s Two Tests for Intelligence,” Minds and Machines, v. 10, n. 4, pp. 541–559.

  34.

    Cf. Wittgenstein (1974), V, 23. English translation Wittgenstein (1978, pp. 280–281). “The calculus makes no predictions, but by means of it you can make predictions,” Wittgenstein (1976, XV, p. 150).

  35.

    On the role of complexity regarding AI, see Urquhart (2004, pp. 18–27). The other sort of complexity is broader than the one specific to AI and deals with “complex systems” in general.

  36.

    On the limits of instrumental rationality, see Nozick (1993, ch. 5, pp. 133–181 and 207–217).

  37.

    This is a component that leads to evaluative rationality or rationality of ends. See, in this regard, Rescher (1988, pp. 92–106).

  38.

    All the work done at Bletchley Park to decipher naval Enigma needed to go beyond the signs in order to make sense of the messages sent to the German U-boats.

  39.

    Cf. Lewis-Kraus, “The Great A.I. Awakening,” The New York Times Magazine, 14 December 2016, pp. 7–18 (accessed on 12.3.2017).

  40.

    Obviously, what AI commonly does is one thing; what AI can do, or even what it should do, is another. The last 20 years have seen an outpouring of new contributions to AI, mainly in the sphere of machine learning through neural networks. But, from an epistemological perspective, there are still differences in the main qualitative parameters between artificial intelligence and human intelligence.

  41.

    Cf. Simon, “Rationality,” in Gould, J. and Kolb, W. L. (eds.), A Dictionary of the Social Sciences, The Free Press, Glencoe, IL, 1964, pp. 573–574. Reprinted in Simon, H. A., Models of Bounded Rationality. Vol. 2: Behavioral Economics and Business Organization, The MIT Press, Cambridge, MA, 1982, pp. 405–407.

  42.

    Turing was especially interested in the question of whether it is possible for machinery to show intelligent behavior. Cf. Turing (1948; Copeland 2004).

  43.

    On multi-tasks, see Dong et al. (2015). Multi-Task Learning for Multiple Language Translation. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Beijing, 26–31 July 2015, pp. 1723–1732.

  44.

    Even when there is “deep learning” in machines, which has brought great success to some important technological firms in recent decades (such as Google, Facebook, Apple, etc.), Artificial Intelligence in the Internet commonly works on specific tasks.

  45.

    On the origin and characteristics of these examples, see Hodges (2014, chs. 3, 4, and 5). More details on ENIAC are available in Haigh, Priestley, and Rope (2016).

  46.

    For Simon, “intelligence was a matter of reasoning, and reasoning was a matter of symbolic manipulation,” (Cristianini 2014, p. 39).

  47.

    These contributions can be seen in historical terms, such as in Hodges’s book on Turing, or in more thematic terms, such as in the volume The Essential Turing edited by Copeland.

  48.

    An analysis of Simon’s ideas on Artificial Intelligence is in Gonzalez (2007a).

  49.

    “Human rationality” is understood here in the sense of “universal rationality”, i.e., the kind of rationality present in each human being. Simon also recognized the difference between the kind of rationality in human agents (mainly in economics) and rationality in AI. On the former, see Simon (1983).

  50.

    That AI machines cannot lie is important for several reasons: (a) to lie is to say or to write a false statement on purpose, which requires the intention of deceiving someone in a given context; (b) a lie requires a full understanding of the meaning of the statement (otherwise it is an error, a lapsus, an involuntary mistake, etc.); and (c) a lie also includes a free act, which is different from merely saying or writing a false statement and, therefore, involves ethical responsibility. Following the present analysis, (a) and (b) are not available in current AI, and (c) is difficult to accept for AI, insofar as there is no actual free will in it. What machines can do—and often do—is offer false information, which is completely different from the concept of “lie.” On the connected issues, see Sect. 4, “Intention, Design of Machine Learning, and Intelligence.”

  51.

    On the distinction between “behavior” and “activity”, see Gonzalez (2015b, pp. 175–179, 182, 202, 206, 210, 219–225, 233 and 293n). See also note 72 here.

  52.

    “Volition and sense-knowledge cannot be described independently of one another,” Anscombe (2000 [1963], n. 36, p. viii; see also, pp. 67–70). “There are two features present in wanting; movement towards a thing and knowledge (or at least opinion) that the thing is there,” Anscombe (2000[1963], p. 68).

  53.

    Frequently, “intention” is linked to “want” in two ways: insofar as intention has a component of volition, and insofar as it includes an attitude regarding the objective or aim of the intention.

  54.

    On the existence and characteristics of we-intentions, see Tuomela and Miller (1988), Tuomela (1991), Tuomela (2013), and Bratman (1993). According to Raimo Tuomela, “joint intention” requires a normative component: a joint commitment regarding an agreement—or the belief in an agreement—either explicit or implicit. See Tuomela (1993).

  55.

    “Fully and normally developed NIs [natural intelligences] seem entrapped in a semantic stance. Strictly speaking, we do not consciously cognize pure meaningless data”, Floridi (2011, p. 39).

  56.

    “One of the major dissimilarities between current generation artificial intelligence systems (AIs) and human natural intelligences (NIs) is that AIs can identify and process only data (uninterpreted patterns of differences and invariances), whereas NIs can identify and process mainly informational contents (in the weak sense of well-informed patterns of meaningful data)”, Floridi (2011, p. 39).

  57.

    Human intelligence is contextual insofar as it is not a disembodied intelligence.

  58.

    “Humans differ from (currently existing) machines in that humans have more sensitive, highly developed bodies (…). This sensitivity can allow humans to be affected by non-computable aspects of their environment,” Lyngzeidetson and Solomon (1994, p. 553).

  59.

    This ontological openness is the basis for human freedom, understood as a fundamental feature of the human being, which is exercised in free will, when something is chosen, and which opens up the ethical evaluation of human action.

  60.

    Intentionality is a feature of some actions, which reveals the existence of an intention in the agent who performs the action. Cf. von Wright (1976, pp. 415–435, especially, p. 423); and von Wright (1983, p. 42). Intentionality may also be present in an action of several agents (in joint action, cooperation, solidarity, etc.).

  61.

    See the impressive contributions of the research made by Google in areas such as translation, image recognition, etc. A good example is the cat paper: Le et al. (2012). See also Dean et al. (2012).

  62.

    Luciano Floridi insists on the need to distinguish three main approaches (computationalism, connectivism, and dynamicism) in AI instead of just two (2011, p. 37).

  63.

    Article “March of the machines. What history tells us about the future of artificial intelligence—and how society should respond.” The Economist, June 25th 2016, p. 9.

  64.

    Cf. Hinton, Osindero and Teh (2006); Hinton and Salakhutdinov (2006); and Dean et al. (2012).

  65.

    See, in this regard, Laskow (2016).

  66.

    A version of this article appeared in print on December 18, 2016, on page MM40 of the Sunday Magazine with the headline: “Going Neural”.

  67.

    Cf. Gillies (1996, pp. 17–55). Chapter 2 of this book includes a comparison with Simon’s views on machine learning.

  68.

    Google’s research team of the cat paper works with unlabeled data: Le et al. (2012). See also Dean et al. (2012).

  69.

    For Simon (1978), “rationality” was a process related to social events and artificial phenomena.

  70.

    There is a non-observational knowledge here: “Knowledge of one’s own intentional actions – I can say what I am doing without looking to see,” Anscombe (2000 [1963] n. 28, p. vii; see also pp. 49–51).

  71.

    This difference between “intention” and “intentionality” is based on G. H. von Wright. Cf. von Wright (1976, pp. 415–435, especially, p. 423); and von Wright (1983, p. 42).

  72.

    The distinction between “activity” and “behavior” is developed in Gonzalez (2015b, pp. 175–179, 182, 202, 206, 210, 219–225, 233 and 293n).

    Regarding action and behavior, the differences are also noticeable in American English, the language used in the works on AI analyzed here. “In effect, in the well-known Webster’s Dictionary (ninth new collegiate edition), action appears with three main senses in the human case: (1) “an act of will”; (2) “the manner or method of performing”; and (3) “a thing done.” In addition, it includes the sense of “a man of action,” linked to the idea of initiative. Putting it all together, action is something human which could be carried out with initiative, but, above all, action is based on an act of will, which appears in a manner of performing and brings about a result (a thing done). Meanwhile, “behavior,” according to the Webster's Dictionary, has three senses: (a) “the response of an individual, group or species to its environment;” (b) “the manner of conducting oneself;” and (c) “the way in which something (as a machine) behaves.” Thus, in behavior the external element prevails as in the starting point—a response to stimulation or to its environment—as in its form of evolving—the manner of conducting oneself—, and it exhibits a regular pattern—the functionality of a machine or procedures which could be similar to it—. Therefore, the point of convergence between action and behavior is in the process—the manner of performing—insofar as there is involved an external factor and the performance could be repeated; whereas action—in the Dictionary—has a broader scope: its teleological character, with its starting in an act of will, goes to an end (a thing to be done) and gives more possibilities to human initiative.” Gonzalez (1997, pp. 225–226).

  73.

    This is the case in high-frequency trading (i.e., high-frequency buying and selling carried out by supercomputers), where the starting point is still in the designer.

  74.

    There are also other aspects in AI, which might be synchronic or diachronic, such as photo classification or music selection.

  75.

    A key aspect for the future is the development of the “Internet of Things.”

  76.

    According to the founder of Facebook, around 4,000 million inhabitants of our planet still do not have access to the Internet (Mark Zuckerberg at the Mobile World Congress, Barcelona, 22.2.2016, accessed on 8.8.2016).

  77.

    There is now a widespread recognition that AI and the Internet have involved a revolution, which was above all digital. But such relevant changes require conceptual revolutions in order to actually shift things.

  78.

    See Alan Turing’s papers with the analysis made by B. Jack Copeland in the book The Essential Turing, published by Clarendon Press in 2004.

  79.

    On the difference in Simon between the symbolic problem solver (in Artificial Intelligence) and the universal decision maker (in economics), see Dasgupta (2003, pp. 683–707; especially, pp. 694–695).

  80.

    “We see that reason is wholly instrumental. It cannot tell us where to go; at best it can tell us how to get there. It is a gun for hire that can be employed in the service of whatever goals we have, good or bad,” Simon (1983, pp. 7–8). The consequences of this view for AI are clear: see Simon (1995b, p. 24).

  81.

    On the three kinds of knowledge in technology, including the evaluative one, see Gonzalez (2015a).

  82.

    On the role of values regarding sciences such as economics, which are also sciences of the artificial, see Gonzalez (2013c).

  83.

    On the distinction between applied science and application of a science see Niiniluoto (1993, pp. 1–21; especially, pp. 9–19); and Gonzalez (2013b, pp. 12, 17–18, 25 and 27–28).

  84.

    Commonly, “classical AI” is GOFAI (Good Old-Fashioned AI), which is symbolic. Artificial Intelligence also includes other approaches, such as connectionism (of which there are several varieties), evolutionary programming, and situated and evolutionary robotics. In this regard, see Boden (2014, pp. 89–107; especially, p. 89). For Floridi, the main approaches to AI are three: computationalism, connectivism, and dynamicism. See Floridi (2011, p. 37).

  85.

    Floridi maintains that “AI and cognitive science study agents as informational systems that receive, store, retrieve, transform, generate and transmit information. This is the information processing view. Before the development of connectionist and dynamic-system models of information processing, it was also known as the computational view. The latter expression was acceptable when a Turing machine (Turing (1936)) and the machine involved in the Turing test (Turing (1950)) were inevitably the same. The equation information processing view = computational view has become misleading, however, because computation, when used as a technical term (effective computation), refers only to the specific class of algorithmic symbolic processes that can be performed by a Turing machine, that is recursive functions (Turing (1936), Minsky (1967), Floridi (1996b), Boolos et al. (2002)),” Floridi (2011, p. 35).

  86.

    Even in the “New AI” there is “behavior” and “adaptation” in evolutionary terms rather than in terms of historicity (Alonso 2014). On the distinction between process, evolution, and historicity in a dynamic complexity, see Gonzalez (2013a, pp. 299–311; especially, pp. 304–307).

  87.

    On those components as constitutive elements of science, see Gonzalez (2005, pp. 3–49; especially, pp. 10–11).

  88.

    This case has some similarities with economics as a science of design: see in this regard Gonzalez (2008).

  89.

    Simon’s “approach did not deliver the kind of results that today we would consider to be successful. For example it did not deliver translation, nor summarisation, nor viable robot navigation,” (Cristianini 2014, p. 39).

  90.

    See, for example, a central claim of the cat paper: “experimental results using classification and visualization confirm that it is indeed possible to build high-level features from unlabeled data,” Le, Ranzato, Monga, Devin, Chen, Corrado, Dean, and Ng (2012), p. 2.
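    The general idea behind that claim — structure extracted from data that carries no labels — can be illustrated with a far simpler stand-in than the large sparse autoencoders of Le et al. (this toy k-means sketch is my own, not from the paper): given unlabeled points drawn from two groups, the algorithm recovers the groups by itself.

    ```python
    import random

    def kmeans(points, k, iters=20, seed=1):
        """Toy unsupervised learning: no labels are ever seen, yet the
        centroids converge to the structure hidden in the data."""
        rng = random.Random(seed)
        centroids = rng.sample(points, k)
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                # assign each point to its nearest centroid
                i = min(range(k),
                        key=lambda j: (p[0] - centroids[j][0]) ** 2
                                      + (p[1] - centroids[j][1]) ** 2)
                clusters[i].append(p)
            # move each centroid to the mean of its cluster
            centroids = [
                (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                if c else centroids[j]
                for j, c in enumerate(clusters)
            ]
        return centroids

    # Two unlabeled blobs of points, around (0, 0) and (10, 10).
    random.seed(42)
    pts = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
           + [(random.gauss(10, 1), random.gauss(10, 1)) for _ in range(50)])
    centers = sorted(kmeans(pts, 2))  # should land near (0, 0) and (10, 10)
    ```

    The “high-level features” of the cat paper are, of course, far richer than two centroids, but the epistemological point is the same: the structure is built from unlabeled data alone.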

  91.

    Even the ordinary users of social networks, such as Facebook and Snapchat, are well aware of the historicity of information.


  1. Alonso, E. (2014). Actions and Agents. In K. Frankish & W. Ramsey (Eds.), The Cambridge handbook of Artificial Intelligence (pp. 232–246). Cambridge: Cambridge University Press, (reprinted in 2015).

  2. Anscombe, G. E. M. (2000). Intention. Cambridge, MA: Harvard University Press. Reprint of the 2nd edition (1963); first paperback edition at HUP.

  3. Boden, M. (Ed.). (1990). The Philosophy of Artificial Intelligence. Oxford: Oxford University Press.

  4. Boden, M. (2014). GOFAI. In K. Frankish & W. Ramsey (Eds.), The Cambridge handbook of Artificial Intelligence (pp. 89–107). Cambridge: Cambridge University Press, (reprinted in 2015).

  5. Bratman, M. (1993). Shared intention. Ethics, 104, 97–113.

  6. Copeland, B. J. (1993). Artificial Intelligence. Oxford: Blackwell.

  7. Copeland, B. J. (Ed.). (2004). The Essential Turing. Oxford: Clarendon Press.

  8. Copeland, B. J., & Proudfoot, D. (2007). Artificial Intelligence: History, Foundations, and Philosophical issues. In P. Thagard (Ed.), Philosophy of Psychology and Cognitive Science (pp. 429–482). Amsterdam: Elsevier.

  9. Cristianini, N. (2014). On the current paradigm in Artificial Intelligence. AI Communications, 27, 37–43. doi:10.3233/AIC-130582.

  10. Dasgupta, S. (2003). Multidisciplinary creativity: The case of Herbert A. Simon. Cognitive Science, 27, 683–707.

  11. Dean, J., Corrado, G. S., Monga, R., Chen, K., Devin, M., Le, Q. V., Mao, M. Z., Ranzato, M. A., Senior, A., Tucker, P., Yang, K. & Ng, A. Y. (2012). Large scale distributed deep networks. In: Neural Information Processing Systems, NIPS 2012, Lake Tahoe, Nevada (accessed on 11.3.2016).

  12. Dong, D., Wu, H., He, W., Yu, D. & Wang, H. (2015). Multi-task learning for multiple language translation. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, Beijing, 26–31 July 2015 (pp. 1723–1732).

  13. Fetzer, J. H. (2004). The Philosophy of AI and its critique. In L. Floridi (Ed.), The Blackwell guide to the Philosophy of Computing and Information (pp. 119–134). Oxford: Blackwell.

  14. Floridi, L. (1999). Philosophy and computing. London: Routledge.

  15. Floridi, L. (Ed.). (2004). The Blackwell guide to the Philosophy of Computing and Information. Oxford: Blackwell.

  16. Floridi, L. (2011). Philosophy of Information. Oxford: Oxford University Press.

  17. Frankish, K., & Ramsey, W. (Eds.). (2014). The Cambridge handbook of Artificial Intelligence. Cambridge: Cambridge University Press, (reprinted in 2015).

  18. Gillies, D. (1996). Artificial Intelligence and scientific method. Oxford: Oxford University Press.

  19. Gonzalez, W. J. (1986). La Teoría de la Referencia. Strawson y la Filosofía Analítica. Salamanca-Murcia: Ediciones Universidad de Salamanca and Publicaciones de la Universidad de Murcia.

  20. Gonzalez, W. J. (1997). Rationality in economics and scientific predictions: A critical reconstruction of bounded rationality and its role in economic predictions. Poznan Studies in the Philosophy of the Sciences and the Humanities, 61, 205–232.

  21. Gonzalez, W. J. (2003). Herbert A. Simon: Filósofo de la Ciencia y economista (1916–2001). In W. J. Gonzalez (Ed.), Racionalidad, historicidad y predicción en Herbert A. Simon (pp. 7–63). A Coruña: Netbiblo.

  22. Gonzalez, W. J. (2005). The philosophical approach to Science, Technology and Society. In W. J. Gonzalez (Ed.), Science, Technology and Society: A philosophical perspective (pp. 3–49). A Coruña: Netbiblo.

  23. Gonzalez, W. J. (2007a). Configuración de las Ciencias de Diseño como Ciencias de lo Artificial: Papel de la Inteligencia Artificial y de la racionalidad limitada. In W. J. Gonzalez (Ed.), Las Ciencias de Diseño: Racionalidad limitada, predicción y prescripción (pp. 41–69). A Coruña: Netbiblo.

  24. Gonzalez, W. J. (2007b). Análisis de las Ciencias de Diseño desde la racionalidad limitada, la predicción y la prescripción. In W. J. Gonzalez (Ed.), Las Ciencias de Diseño: Racionalidad limitada, predicción y prescripción (pp. 3–38). A Coruña: Netbiblo.

  25. Gonzalez, W. J. (2008). Rationality and prediction in the Sciences of the Artificial: Economics as a Design Science. In M. C. Galavotti, R. Scazzieri, & P. Suppes (Eds.), Reasoning, rationality, and probability (pp. 165–186). Stanford: CSLI Publications.

  26. Gonzalez, W. J. (2011a). Complexity in Economics and prediction: The role of parsimonious factors. In D. Dieks, W. J. Gonzalez, S. Hartman, Th Uebel, & M. Weber (Eds.), Explanation, prediction, and confirmation (pp. 319–330). Dordrecht: Springer.

  27. Gonzalez, W. J. (2011b). The problem of conceptual revolutions at the present stage. In W. J. Gonzalez (Ed.), Conceptual revolutions: From Cognitive Science to Medicine (pp. 7–38). A Coruña: Netbiblo.

  28. Gonzalez, W. J. (2011c). Conceptual changes and scientific diversity: The role of historicity. In W. J. Gonzalez (Ed.), Conceptual revolutions: From Cognitive Science to Medicine (pp. 39–62). A Coruña: Netbiblo.

  29. Gonzalez, W. J. (2013a). The Sciences of Design as Sciences of Complexity: The dynamic trait. In H. Andersen, D. Dieks, W. J. Gonzalez, Th Uebel, & G. Wheeler (Eds.), New challenges to Philosophy of Science (pp. 299–311). Dordrecht: Springer.

  30. Gonzalez, W. J. (2013b). The roles of scientific creativity and technological innovation in the context of complexity of Science. In W. J. Gonzalez (Ed.), Creativity, innovation, and complexity in Science (pp. 11–40). A Coruña: Netbiblo.

  31. Gonzalez, W. J. (2013c). Value ladenness and the value-free ideal in scientific research. In Ch. Lütge (Ed.), Handbook of the philosophical foundations of Business Ethics (pp. 1503–1521). Dordrecht: Springer.

  32. Gonzalez, W. J. (2015a). On the role of values in the configuration of technology: From axiology to ethics. In W. J. Gonzalez (Ed.), New perspectives on technology, values, and ethics: Theoretical and practical. Boston Studies In the Philosophy and History of Science (pp. 3–27). Dordrecht: Springer.

  33. Gonzalez, W. J. (2015b). Philosophico-methodological analysis of prediction and its role in Economics. Dordrecht: Springer.

  34. Haigh, T., Priestley, M., & Rope, C. (2016). ENIAC in action: Making and remaking the modern computer. Cambridge, MA: The MIT Press.

  35. Hinton, G. E., Osindero, S., & Teh, Y. W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527–1554.

  36. Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313, 504–507.

  37. Hodges, A. (2014). Alan Turing: The Enigma. London: Vintage Books/Random House.

  38. Laskow, S. (2016). “With no Human Supervision, 16,000 Computers Learn to Recognize Cats.” (26.6.12) Available at: (accessed on 21.3.2016).

  39. Le, Q. V., Ranzato, M. A., Monga, R., Devin, M., Chen, K., Corrado, G. S., Dean, J., & Ng, A. Y. (2012). Building high-level features using large scale unsupervised learning. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK, 2012 (accessed on 11.3.2016).

  40. Lewis-Kraus, G. (2016). “The Great A.I. Awakening”, The New York Times Magazine, 14 December 2016, pp. 1–37. (accessed on 12.3.2017).

  41. Ludwig, D. (2015). A pluralist theory of mind. Dordrecht: Springer.

  42. Lyngzeidetson, A. E., & Solomon, M. K. (1994). Abstract complexity theory and the mind-machine problem. The British Journal for the Philosophy of Science, 45(2), 549–554.

  43. Newell, A., & Simon, H. A. (1976). Computer Science as empirical inquiry: Symbols and search [1975 ACM Turing Award lecture]. Communications of the Association for Computing Machinery, 19(3), 113–126. Reprinted in M. Boden (Ed.), The Philosophy of Artificial Intelligence (pp. 105–132). Oxford: Oxford University Press, 1990.

  44. Niiniluoto, I. (1993). The aim and structure of applied research. Erkenntnis, 38, 1–21.

  45. Nozick, R. (1993). The nature of rationality. Princeton: Princeton University Press.

  46. Rescher, N. (1988). Rationality: A philosophical inquiry into the nature and the rationale of reason. Oxford: Clarendon Press.

  47. Rescher, N. (1992). Our science as our science. In N. Rescher (Ed.), Human knowledge in idealistic perspective (pp. 110–125). Princeton: Princeton University Press.

  48. Rescher, N. (1993). The validity of values: Human values in pragmatic perspective. Princeton: Princeton University Press.

  49. Rescher, N. (1999). Razón y valores en la Era científico-tecnológica. Barcelona: Paidós.

  50. Rescher, N. (2003). Rationality in pragmatic perspective. Lewiston, NY: The Edwin Mellen Press.

  51. Simon, H. A. (1964). Rationality. In J. Gould & W. L. Kolb (Eds.), A Dictionary of the Social Sciences (pp. 573–574). Glencoe, IL: The Free Press. Reprinted in Simon, H. A. (1982), Models of Bounded Rationality. Vol. 2: Behavioral Economics and Business Organization (pp. 405–407). Cambridge, MA: The MIT Press.

  52. Simon, H. A. (1978). Rationality as process and as product of thought. American Economic Review, 68(2), 1–16.

  53. Simon, H. A. (1983). Reason in human affairs. Stanford: Stanford University Press.

  54. Simon, H. A. (1991a). Mind as machine: The cognitive revolution in Behavioral Science. In R. Jessor (Ed.), Perspectives in Behavioral Science: The Colorado lectures (Vol. 3). Boulder, CO: Westview Press.

  55. Simon, H. A. (1991b). Models of my life. New York, NY: Basic Books (reprinted in Cambridge, MA: The MIT Press, 1996).

  56. Simon, H. A. (1995a). Artificial Intelligence: An Empirical Science. Artificial Intelligence, 77(1), 95–127.

  57. Simon, H. A. (1995b). Machine as Mind. In K. M. Ford, C. Glymour, & P. J. Hayes (Eds.), Android epistemology (pp. 23–40). Menlo Park, CA: AAAI/MIT Press.

  58. Simon, H. A. (1996). The Sciences of the Artificial (3rd ed.). Cambridge, MA: The MIT Press.

  59. Sterrett, S. (2000). Turing’s two tests for intelligence. Minds and Machines, 10(4), 541–559.

  60. Strawson, P. F. (1966). The Bounds of Sense. An Essay on Kant's Critique of Pure Reason. London: Methuen.

  61. Thagard, P. (1992). Conceptual revolutions. Princeton: Princeton University Press.

  62. Tuomela, R. (1991). We will do it: An analysis of group-intentions. Philosophy and Phenomenological Research, 51, 249–277.

  63. Tuomela, R. (1993). What are joint intentions? In R. Casati & G. White (Eds.), Philosophy and Cognitive Sciences (pp. 543–547). Kirchberg am Wechsel: The Austrian Ludwig Wittgenstein Society.

  64. Tuomela, R. (2013). Social Ontology: Collective intentionality and group agents. Oxford: Oxford University Press.

  65. Tuomela, R., & Miller, K. (1988). We-intentions. Philosophical Studies, 53, 115–137.

  66. Turing, A. M. (1948). Intelligent machinery. Report, National Physical Laboratory. Reprinted in B. J. Copeland (Ed.), The Essential Turing (pp. 410–432). Oxford: Clarendon Press, 2004.

  67. Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

  68. Urbach, P. (1974). Progress and degeneration in the ‘I.Q. Debate’ (I). British Journal for the Philosophy of Science, 25, 99–135.

  69. Urquhart, A. (2004). Complexity. In L. Floridi (Ed.), The Blackwell guide to the Philosophy of Computing and Information (pp. 18–27). Oxford: Blackwell.

  70. von Wright, G. H. (1976). Determinism and study of man. In J. Manninen & R. Tuomela (Eds.), Essays on explanation and understanding (pp. 415–435). Dordrecht: Reidel.

  71. von Wright, G. H. (1983). Practical reason. Oxford: B. Blackwell.

  72. Wittgenstein, L. (1974). Bemerkungen über die Grundlagen der Mathematik (G. H. von Wright, R. Rhees, & G. E. M. Anscombe, Eds.). Frankfurt: Suhrkamp. English translation by G. E. M. Anscombe: Remarks on the foundations of mathematics (3rd ed.). Oxford: Blackwell, 1978.

  73. Wittgenstein, L. (1976). Lectures on the Foundations of Mathematics (C. Diamond, Ed., from the notes of R. G. Bosanquet, N. Malcolm, R. Rhees, & Y. Smythies). Hassocks: Harvester Press.


This paper was prepared at the Centre for Philosophy of Natural and Social Sciences (London School of Economics). It is part of the research project FFI2016-79728-P, which is supported by the Spanish Ministry of Economics, Industry and Competitiveness (AEI).

I am grateful to Jeffrey Barrett for the discussions on recent approaches to Artificial Intelligence. The contribution made by the referees is also acknowledged: their points have been particularly useful in making explicit some conceptual features and in developing some aspects connected with recent advances in AI.

Author information

Correspondence to Wenceslao J. Gonzalez.

Cite this article

Gonzalez, W.J. From Intelligence to Rationality of Minds and Machines in Contemporary Society: The Sciences of Design and the Role of Information. Minds & Machines 27, 397–424 (2017).

  • Intelligence
  • Rationality
  • Minds
  • Machines
  • Contemporary society
  • Sciences of design
  • Role of information