What was/is “Cognitive Science” (CS)?

My purpose here is not to discuss current research, nor to provide a comprehensive account of the history of cognitive science, but rather to try to place the emergence of cognitive science, of which cognitive psychology is only a part, in historical context.

CS was and remains an interdisciplinary research program, the roots of which reach back before 1945, but which began to take shape in the late 1940s and 1950s and was first explicitly called by that name in the early 1960s. As we will soon see, the disparate mix of disciplines in the program changed over time, but the common thread running through this history appears to have been consistent: the goal, or should we say, the hope of uniting a newly reassertive cognitive psychology, or at least the idea that mental processes are and should be part of psychological science, with efforts coming from computer science and other sources to replicate or develop functional machine equivalents of those very mental processes. Cybernetics appeared at first to be, and was indeed presented as, the royal road to achieving such a program. This road was not taken, but concepts or metaphors borrowed or modified from cybernetics and computer science, such as information processing, have had long lives. Another common thread that ran throughout this history was less explicit, but has become central to critical historical and philosophical scholarship on cognitive science – the extreme complexity and stickiness of the fundamental philosophical and methodological issues at stake in such an ambitious program. The most significant among these issues was the apparent inability of the participants in this program to agree on the meanings of key terms, or on how to address the mind-brain relationship.

What follows is a broad, deliberately eclectic and selective overview of the institutional and epistemic situation during the early postwar period, focusing primarily on psychology and artificial intelligence (AI). It is neither possible nor necessary here to go into all aspects of the 1950s or to delve into the changes that took place after 1960, though I will make a remark or two about this in the conclusion. I begin with the institutional situation, because this was fundamentally important, but is often ignored in intra-disciplinary or philosophical accounts.

Institutional Contexts

The defeat of Nazism and the authoritarian dictatorship in Japan led to a radical realignment of global power relations. The resulting constellation was dominated by the United States. As the victorious and practically undamaged superpower, now in possession of nuclear weapons, the US clearly held sway over a prostrate but re-nationalising Europe and East Asia. The Soviet Union was also economically and demographically damaged, but projected itself successfully as a victorious superpower after acquiring nuclear weapons on its own. Nonetheless, the fundamentally asymmetrical character of the situation can be expressed as follows: Between 1940 and 1955, the nominal GDP of the US grew by more than 80 per cent, from $228 billion to $425.5 billion; the total in 1950, $299.8 billion, was roughly equal to that of the UK, France, Germany, India, China and Japan combined (O’Neill, 2024).

These asymmetrical power relations, following as they did the successful mobilisation of a wide spectrum of sciences in the service of the war effort, gave weight and significance to the revolution in science policy in the US from 1945 onward (for an overview, see Ash, 2018, and the literature cited there). Support for basic research shifted from the great foundations (Carnegie and Rockefeller) to the Federal government and the research arms of the armed services; the acronyms AEC, NSF, NIH and DOD stand for agencies established in this period. The result was a vast expansion and increased complexity of the university and non-university research establishments. However, alongside the Federal government, the great foundations (soon joined by the Ford Foundation) remained in place and expanded their activities, especially in the biomedical and social sciences (for the social sciences, see Solovey, 2013; Cohen-Cole, 2014). The predominance of the emerging science-technology-military-industrial complex was hard to ignore, but funding was not limited to militarily relevant projects. Indeed, as we will see, support for many of the activities that led to CS came from smaller private foundations. In this context, both public and private money was available to support a wide range of initiatives, many of which were highly innovative. The institutional expansion and the sheer size of the American research establishment made possible the rise and development of multiple theoretical approaches alongside one another.

A partial indicator of what all this meant for psychology and the social sciences is the enormous increase in the size of the American Psychological Association; its membership grew from 4,183 in 1945 to over 12,000 in 1960 (Heims, 1993, p. 3; Ash, 2010, p. 18). Also in 1945, 19 (later 17) APA Divisions were introduced. The fact that the largest of these were for clinical and personnel psychology indicates the shift of membership in the discipline from basic to applied or professional psychology that had begun long before 1945. The important point for the topic at hand is the fragmentation of psychology and other sciences into sub-disciplines. Put simply, CS did not come from psychology in general, but rather from developments in two of its sub-disciplines, experimental and, to a lesser extent, developmental psychology, as well as, of course, from many other disciplines, including neurophysiology, mathematics, linguistics, computer science, and engineering (see below).

The international impact of all this has often been oversimplified by using the term “Americanisation” (Thue, 2006) or by referring to “American hegemony” (Krige, 2006), but this is not entirely correct. Though it is true that in West Germany and many smaller European countries psychologists and social scientists oriented their research projects and citation practices to those of the US, psychology and related disciplines retained considerable autonomy in France and in Britain, where contributions to CS and AI were developed largely independently of the U.S., though they were rapidly acknowledged there, at least by Jerome Bruner. In addition, “Americanisation” of the social sciences in the West was contested by a version of Pavlovism advanced in the Soviet Union in the context of the attempted Stalinisation of nearly all of the sciences there in the late 1940s and early 1950s (for the context, see Kojevnikov, 2004). (Luciano Mecacci refers to Pavlovism briefly in his contribution to this issue; see also Ruiz & Sánchez, 2020.) Orthodox Pavlovism reduced cognition to a form of higher nervous activity. However, as recent research has shown, psychologists in the Soviet sphere of influence found ways to appear compliant with this orthodoxy, while also making space for non-reductionistic psychological concepts (Pleh, 2024). The following section addresses CS as it emerged in the U.S. and Great Britain.

Epistemic Dimensions

I turn now to some of the multiple epistemic dimensions that fed into CS, which I can only name or sketch here. Howard Gardner (1987, 16 ff.) lists the following “key theoretical inputs to cognitive science”: mathematics and computation, the neuronal model, the cybernetic synthesis, information theory, and neuropsychological syndromes. I will briefly discuss all of these here except the last (see also the paper by Stahnisch in this special issue). A common thread in this multiplicity is the ubiquity of machine metaphors of various kinds. Psychological theorising has always made use of analogies or metaphors taken from the technologies that prevailed at a given time. For example, Hermann Helmholtz (1971/1868, 168) compared the human nervous system to a system of telegraph wires (Hoffmann, 2003). Pavlov spoke of a central telephone switchboard in his account of the neural basis of a “second signal system” a generation later (Gerovitch, 2002), and the computer was the go-to technology from the 1950s onward. However, as we will see, this does not necessarily imply that all theories at a given time rely upon machine metaphors from the same technology to the same extent.

CS is often described as the result of a successful revolt against the predominance of behaviourism. However, such a linear succession of “paradigms” does not capture the complex history of the 1950s. As George Mandler, an active contributor, put it: “The term ‘revolution’ is probably inappropriate … Stimulus-response behaviourism was not violently displaced. Rather as a cognitive approach evolved behaviourism faded because of its failure to solve basic questions of human thought and action, and memory in particular” (Mandler, 2002, p. 339). In fact, behaviourist or neobehaviourist learning theory continued to prosper throughout this period, aided in part by the disciplinary fragmentation just mentioned. Machine theories were prevalent in this field, as they had been since John Watson issued the first behaviourist manifesto, prominently featuring Pavlovian reflexology. Rather more sophisticated was the role of physics in the work of Clark L. Hull (1943) at Yale, whose learning theory was based on what he termed “hypothetico-deductive” reasoning from basic principles of Pavlovian conditioning, in order to derive testable propositions. In this he appeared to follow what he took to be philosophers of science’s accounts of Newton’s reasoning. In contrast, Hull’s colleague and friend Kenneth Spence, who built an experimentalist’s empire of his own at the University of Iowa from the 1940s onward, developed a rather different theory of conditioning which, though also mathematical, did not share Hull’s model of theory-making (Spence, 1956). Also in the 1950s, B.F. Skinner (1954) elaborated his own principles of operant conditioning, in which he leaned heavily on Ernst Mach’s economical philosophy of science rather than on Newton. Relevant for this discussion is that in all of these approaches, “central” mental operations, or indeed everything that happened between S and R, including central physiological processes, were black-boxed. I should add that the emphasis on “peripheral” processes and the black-boxing of central processes were also the norm in psychophysics and sensory psychology; thus, the object of the cognitivists’ revolt was not only behaviourism.

However, the opposition among behaviourists to invoking cognitive processes to account for behaviour was not total. In the 1930s Edward Tolman at Berkeley had already developed an alternative approach to learning theory emphasising purposive behaviour and variables intervening between S and R, influenced by Gestalt psychology and by the work of Egon Brunswik in Vienna, whom he brought to Berkeley in part because Brunswik had developed a “probabilistic” theory of cognition early in that decade. Tolman took up a version of this approach, e.g., in his studies of rodents’ behaviour at “choice points” in mazes (Tolman, 1938), and most influentially in his concept of “cognitive maps” (1948), later applied to human judgment and decision-making by Kenneth Hammond (e.g. Hammond, 2000). It is important to note a methodological issue here: As Tolman had learned from the Gestalt psychologists, cognition-like processes were more likely to be observed in animals and humans when the stimulus situation was structured to let them happen. This version of behaviourism was not widely adopted by other behaviourists, but was surely more congenial than Hull’s version to the so-called “behaviouralism” then fashionable in social science, especially political science. Jerome Bruner and others proved receptive to Tolman’s ideas. Because learning, psychophysics, sensory psychology, perception, and cognition were separate research areas, they could run parallel to one another, rather than one replacing the other.

The re-emergence and transformations of cognitive psychology ran parallel to these multiple versions of behaviourism or neobehaviourism, as well as to the systems approaches that took hold in the social sciences (Erickson et al., 2013). The reception of Piaget’s studies of cognitive development was only one of several developments. Its impact was limited at first primarily to developmental psychology, because his studies showed that different cognitive capabilities or operations emerged at different developmental stages. However, the idea of refocusing on cognitive processes was also taken up by others in the U.S.

Here is where the young Jerome Bruner enters our story. He and Leo Postman, along with Bruner’s undergraduate student Cecile Goodman, developed a series of experiments designed to show that cognition was not merely receptive but proactive, specifically that perceptual and conceptual learning are guided by conscious and unconscious expectancies, hypotheses or models tested by cues (Bruner, 1980/1951; Bruner and Goodman, 1980/1947). Prominent in these early studies of children’s judgments of the size of coins, or adults’ reactions to briefly presented normal or unusual (manipulated) playing cards, was the attention paid to subjects’ motivations and (conscious or unconscious) dispositions to search for or ignore stimuli, that is, to effects of motivation and personality on perception, or perceptual behaviour, as they strategically called it (Bruner and Postman, 1980/1949). The methodological agenda was to show that it was possible to study such topics experimentally while following the strict protocols established in psychophysics and sensory psychology.

Bruner’s primary input into the emergence of CS came in the monograph A Study of Thinking, written with Jacqueline Goodnow and George A. Austin (Bruner et al., 1956). In this work the authors attempted to show how crucial aspects of perception are involved in human reasoning, memory and creativity. The authors acknowledged earlier pioneering work by Gestalt psychologists Max Wertheimer and Karl Duncker on productive thinking, and paid homage to Tolman’s ideas of “intervening variables” and “cognitive maps”, while focusing on what they called the unconscious “strategies” employed by subjects in problem solving, “a pattern of decisions in the acquisition, retention and utilization of information” to meet certain objectives, which could be modified by results or maintained conservatively (Ibid., 54; Boden, 2006, p. 305). Bruner later said that the term “strategy” came from a remark by John von Neumann at a seminar at the Institute for Advanced Study (IAS) in Princeton in 1951, where he said, according to Bruner, that “information seeking systems” would have strategies that would influence not only what information was taken up (or focused on), but also what information was searched for (Bruner, 1980, p. 112, cited in Boden, 2006, p. 308). In his famous 1957 paper, “Going Beyond the Information Given,” Bruner (1980/1957, 219) cited a statement by Cambridge professor Frederic Bartlett, who was well known for a similar emphasis on “scripts” or “schemata” in human problem solving, on the role of “characters that run beyond what is directly observed by the senses.”

The significance of all this for our discussion appears clear: Bruner had no hesitation about speaking of cognition as a set of concrete mental operations, or about filling in the black box between S and R with mental representations and strategies, that is, with cognitive operations inferred from experimental observation. (This is quite different from Helmholtz’s idea of “unconscious inference,” on which the step from physiological information to inferences about its significance is based.) In this work, however, we find no direct collaboration with computer science or use of machine analogies, other than continuous reference to the concept of “information”. The symbolic impact of Claude Shannon seems evident here, but we should remember that in Shannon’s theory information is content-neutral, which it definitely was not for Bruner or other psychologists. And it is doubtful that Bruner ever attempted to engage in the precise mathematical-logical theorising that Shannon was known for. However, Shannon’s view, advanced in this period, that computers engaged in “symbolic processing” (Shannon & Weaver, 1949) may have given Bruner and others permission to work with the “information” concept or the “information processing” metaphor.
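To make the contrast concrete, here is a minimal statement of Shannon’s standard measure (not a formula Bruner himself used): the information carried by a source depends only on the probabilities with which its symbols occur, not on what the symbols mean.

```latex
% Shannon entropy of a discrete source X with symbol probabilities p(x):
% a purely statistical quantity, indifferent to the content of the symbols.
H(X) = -\sum_{x} p(x)\,\log_2 p(x) \qquad \text{(bits per symbol)}
```

In this sense the measure is content-neutral: two sources with identical symbol statistics carry exactly the same amount of “information,” whatever the symbols stand for.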

Computer Science and Cybernetics

I turn now very briefly to computers and computer science, especially to John von Neumann. Von Neumann’s central role in developing EDVAC (the Electronic Discrete Variable Automatic Computer), one of the first stored-program digital computers, during the Second World War, as part of the effort to mechanise ballistics, is well known. Perhaps less well known is that von Neumann’s description of EDVAC’s mode of operation in a 1945 working paper (cited in Boden, 2006, p. 196) drew on a foundational paper by neuropsychiatrist Warren McCulloch and Walter Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity” (McCulloch and Pitts, 1943), which also played an important role in the development of cybernetics. The model presented there was concerned primarily with the mathematical rules governing the input and output of signals. The thesis was that the all-or-none operation of (idealised) neurons in the central nervous system (CNS) meant that neural events could be represented by means of propositional logic. Understanding the computer as being built from idealised neurons rather than vacuum tubes appeared strange to engineers, and perhaps to psychologists as well, once they learned about it, but von Neumann hoped that his theory of natural and artificial automata would improve understanding of the design of computers and of the human nervous system alike. McCulloch and Pitts claimed for their part that all “higher” mental processes could be the results of activities in “neural nets,” as they called them. Though McCulloch professed at first to be reluctant to make grandiose claims, he later consistently stated that “Any behaviour that can be logically, precisely completely and unambiguously described in a finite number of symbols is computable by a neural network” (Dupuy, 2001, 58).
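For readers who want to see what this thesis amounts to in practice, the following is a minimal, ahistorical sketch in modern Python (the function names, weights and thresholds are illustrative assumptions, not McCulloch and Pitts’ own formalism): an idealised all-or-none unit fires when the weighted sum of its binary inputs reaches a threshold, and suitably configured units reproduce the propositional connectives.

```python
def mp_neuron(inputs, weights, threshold):
    """An idealised all-or-none unit: output 1 if the weighted sum of
    binary inputs reaches the threshold, otherwise 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With suitable weights and thresholds, such units realise propositional connectives.
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)   # fires only if both inputs fire
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)   # fires if either input fires
NOT = lambda a:    mp_neuron([a], [-1], 0)        # an inhibitory input suppresses firing

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}  NOT a={NOT(a)}")
```

Chained together, such units can evaluate any finite propositional expression, which is the sense in which “neural events could be represented by means of propositional logic”.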

Norbert Wiener built upon, but also transformed, this work in his Cybernetics (Wiener, 1948a). The term came from the Greek word for steersman, but Wiener meant the entire steering apparatus, including the steering stick or wheel and the rudder, that is, a combination of human and machine operations. The aim, expressed in the book’s subtitle, was to study “control and communication in the animal and the machine,” and was thus transdisciplinary from the outset. The key concept, of course, was that of the feedback loop, derived in part from applied research on artillery targeting of moving aircraft, but not limited to such problems. Anti-aircraft guns were controlled by humans who learned to anticipate flight patterns while firing, but the idea that machines might be programmed to engage in such reflexive behaviour was tantalising. However, though a machine metaphor was prominent and a leading French scholar speaks of a “mechanisation of mind” in this connection (Dupuy, 2000), cybernetics was not a reductionist program, did not propose mechanical explanations for all of human behaviour, and did not rely on simple analogies to existing or imagined computers. Rather, cybernetics was presented as a tertium quid integrating mind and machine in the mode of circular rather than unidirectional causality, which could be, and soon was, brought to other fields, for example, in G. E. Hutchinson’s study of biological, chemical and physical processes in a lake populated with organisms, from which his students later developed systems and community ecology (Hutchinson, 1948). Wiener’s title named communication as well as control. As he put it in an article circulated in 1946 to the first participants in the Macy Conferences (discussed below), “The unifying idea of these diverse disciplines is the MESSAGE and not any special apparatus acting on messages” (cited in Heims, 1993, 22. Emphasis in the original. See also Wiener, 1948b). If you are reminded by this of Marshall McLuhan’s later slogan “The medium is the message,” you would not be mistaken.

Wiener developed these ideas in intensive conversation with McCulloch, Pitts, and a select group of other mathematicians, engineers and social scientists in a series of influential conferences supported by the Josiah Macy Jr. Foundation from 1946 to 1953 (Heims, 1993; Hines, 2015). The colleagues came to call themselves the Cybernetics Group, and their discussions led to the emergence of a common community with a shared idiom, but as Heims (1993) has shown, the group was not as cohesive as it was later described to be. Harvard sociologist Talcott Parsons, a guest, joined in the chorus at least at times: “It could now be plausibly argued that the basic form of control in action systems was the cybernetic type” and not the coercive-compulsive power relations studied in political science (cited by Heims, 1993, p. 184). Anthropologist Gregory Bateson, a regular participant, later applied the concept of feedback in his studies of small social systems, such as families, which ultimately led him to develop the double bind concept. Cultural anthropologist Margaret Mead, then a researcher and curator at the American Museum of Natural History in New York, attended regularly, shared Bateson’s interest in small groups, and argued that personality and culture could be viewed as cybernetic systems with purpose, feedback and communicative links. “Still,” Heims (1993, 71) suggests, “Mead and (Lawrence K.) Frank lived in a different intellectual universe from that inhabited by Pitts, McCulloch or von Neumann.”

The ambivalent role of the human sciences is also evident when we briefly consider the impact of Noam Chomsky (extensively discussed in Boden, 2006, Chap. 9). I claim no expertise in linguistics and look forward to any comments colleagues might have on this issue, but I note that Chomsky did write a foundational paper on hierarchies of languages, which has had a long life. He also worked briefly with Bruner. But the profound misunderstanding at the root of this particular case of interdisciplinarity became manifest at the latest with his Aspects of the Theory of Syntax (Chomsky, 1965), if not earlier. Chomsky was always focused on a theory of language as such, not on the actual uses of language or on the psychological processes involved in language learning. Psychologists tried to use Chomsky’s ideas as a guide to studying such processes, but eventually realised that this was not what he was interested in.

The Birth of CS and AI

It was at just this time, in the mid-1950s, that CS as an interdisciplinary project took hold, or “came together” as Boden (2006, 282) puts it. Boden nicely encapsulates the multiple inputs by pointing to the miraculous year (annus mirabilis) 1956 (ibid., 330). In addition to A Study of Thinking, already discussed, Boden names (inter alia) George Miller’s classic paper, “The Magical Number Seven, Plus or Minus Two”, which appeared in that year. The central claim of the paper concerns the limits of human attention – the ability of the “mental apparatus” to “hold onto” only a limited number of items (information inputs) at a time. In the same paper Miller mentions shortcuts and “chunking” (e.g., when people try to remember telephone numbers) as tactics subjects use to overcome or work around such limits (Boden, 2006, p. 288). Of course, both of these key publications were by psychologists.

Boden (2006, 330) also names two meetings as key events of 1956: the Summer Research Project on Artificial Intelligence at Dartmouth, often called the “birthplace” of AI, and the symposium on information theory held at MIT (organised under the auspices of the Institute of Radio Engineers, a forerunner of today’s IEEE). Gardner (1985, 138), citing a statement by George Miller, and Boden (2006, 330) both identify the first meeting as the birth date of CS. However, the Dartmouth meeting was organised not by psychologists, but by mathematicians John McCarthy, Nathaniel Rochester, Claude Shannon, and Marvin Minsky (whose working paper entitled “Heuristic Aspects of the Artificial Intelligence Problem” was written that year but was not published until 1961, under the title “Steps Toward Artificial Intelligence”; see Boden, 2006, p. 324; for the original title see Simon, 1991, p. 210). There Allen Newell and Herbert A. Simon, who came not from mathematics or psychology, but rather from engineering and management theory, respectively, presented their “logic machine” or Logic Theorist (LT). In his later memoir and publications on this subject, Simon (1991) emphasises the contributions of Otto Selz and the Gestalt psychologists, who had attempted to describe thinking in operation, but had used “vague” terms like “insight”. Actually, as Gerd Gigerenzer (1996; see also Gigerenzer & Sturm, 2007) points out, what Newell and Simon did bears little resemblance to the qualitative descriptions of dynamic thinking offered by Selz and Wertheimer, and he suggests that Simon cited them to create an impression of historical continuity. Indeed, rather than relying on what subjects said about their thought processes, as Wertheimer and Duncker had done, Newell and Simon wrote programs for computers to carry out such operations non-verbally, determining on the basis of conversations with family and friends which operations to model. What the LT provided was not a theory of intelligence, but machine-generated proofs of theorems of the Principia Mathematica of Whitehead and Russell (1910–1913), a foundational work of symbolic logic. It is worth mentioning here that although the Dartmouth seminar where they first presented the LT had been called a meeting on artificial intelligence, Newell and Simon did not call what they had achieved AI at the time, but proposed more neutral-sounding names. The distinction between “strong” and “weak” AI had not yet been invented, but Newell and Simon were well aware of Ross Ashby’s brash claim that his goal was to copy the living brain (Ashby, 1952). Whether this was what AI was to be, or whether the term was to be used only for computers being programmed to carry out problem-solving operations with no direct claim to imitate what brains do, remains a central issue of debate today, just as it was in the 1950s.

We might add 1958 as a second annus mirabilis. In that year, a meeting on “The Mechanisation of Thought Processes” at the National Physical Laboratory in London combined AI with psychology and neurophysiology for the first time (Boden, 2006, 330), and Donald Broadbent published Perception and Communication, which is conventionally cited as marking the systematic introduction of the idea of “information processing” into psychology (Broadbent, 1958). A diagram Broadbent inserted into his book shows how he tried to open up the black box, at least in theory (see Boden, 2006, 292). In the same year, in a paper entitled “Elements of a theory of human problem solving,” Newell, J. C. Shaw, and Simon introduced a further development of the LT, which they called the GPS (General Problem Solver); this is, of course, not what we know by that acronym today, but a program for simulating human thought. Simon (1991, 221) grandiloquently termed this “the first explicit and deliberate exposition of information processing psychology, but without using that or any other trademark name”.

As many of these examples indicate, a highly visible result of these interdisciplinary encounters was the use of terms from computer science (or more precisely from Shannon), especially “information processing,” to describe what happens in sensory organs and later in elementary as well as so-called “higher” cognitive operations. Talk of an “information processing model” became nearly ubiquitous from this time onward. Viewed from the present, it is perhaps appropriate to speak of “information processing” as a highly productive metaphor rather than a clearly formulated conception. It’s questionable whether anyone actually asked whether neurons “fire” the way vacuum tubes do, as Wiener and others seemed to imply, or whether Shannon’s information theory had anything to do with entropy in physical systems or could be taken literally to describe the transfer of genetic information. The machines were there, they worked, and the volume of information they could process grew daily. The growing power of the machines lent power to the metaphor. The irony here seems clear in retrospect: Just as advocates of cognitive science were beginning to gain ground within psychology, without necessarily relying on direct machine metaphors, they adopted reductionistic-sounding metaphors from cybernetics, thus establishing, or appearing to establish, an affinity with these successful technologies.

The story does not end, but conventionally peaks, with the foundation of the “Center for Cognitive Studies” at Harvard in 1960, headed by Miller and Bruner. The Center was funded by a generous, unrestricted grant from the Carnegie Corporation, and thus illustrates yet again the role of private funding mentioned above. However, as Gigerenzer (1996) has claimed, computer science, and even computers themselves, played a very small role at the Center, for two simple reasons: according to him, the mainframe purchased with the grant money could not be made to work properly, and the PC had not yet been invented, so computers were not yet part of the working toolkit of experimenting psychologists. A participant in the workshop at which this paper was presented remarked, on the basis of personal knowledge, that the Harvard mainframe actually worked well enough, but the second point seems valid nonetheless.

Conclusions

Of course, I have not been able to present or even name all of the inputs into or aspects of this complex history. In this paper, I hope to have shown the following. First, World War II was of central importance for the interdisciplinary collaborations and innovations in technology and science that were in turn the roots of the postwar transformations that led to the emergence of CS as an explicit program. However, the relation of the political (and academic-political) situation after 1945 to the emergence of CS as an explicit research program was one of enablement, not linear causality. I note here as well that, with the exception of the RAND Corporation, initial funding for this effort came from private foundations, not the military. Second, the story of CS should not be told as a triumphalist tale of linear progress from behaviourism to CS to cognitive neuroscience. Rather, CS was from the start, and has remained, a complex mix of machine and other metaphors, ideas, and paper as well as metal or plastic tools. Moreover, behaviourist learning theory was not replaced by CS, but continued to guide research on learning parallel to its rise. And many trends that are often portrayed as integral components of CS ran parallel to, rather than in interaction with, one another.

A central aim of this approach to interdisciplinarity was the effort to bridge the widely perceived gaps between the natural and technical sciences on the one hand and the social sciences and cultural studies on the other. We might now say this was an attempt to overcome the “two cultures” divide described by C.P. Snow at just this time (Snow, 1961); the historical background to Snow’s iconic lecture cannot be discussed here. However, the creators of CS made no effort at first to establish a new discipline; each group remained affiliated with the discipline or complex of disciplines from which it came. A certain tension remained at the core of the project: the machine dreams of the AI community were not easily reconciled with the idea of autonomous mental processes and the opposition to reductionist neobehaviourist learning theory that together drove cognitive psychology.

In conclusion, I would like to make two points about what came later that might be relevant for discussion of the contemporary situation. First, a better marker for the establishment of CS as a discipline (or inter-discipline) than the founding of the Center for Cognitive Studies at Harvard was the establishment of the journal Cognitive Science in 1978. Ironically, it appears that, following the founding of the journal, psychology became more dominant in the field than ever before (Gentner, 2010). In view of this development we might well ask: Did the interdisciplinarity that had marked CS from the start somehow end, or did it take new forms? Second: Historians love irony; it’s our favourite trope. So I will end with an ironic point about AI. It appears that much of the current hype and many of the passionate debates around AI repeat much older ones. This appears to be especially true of the debate over whether AI-programmed machines will “take over” and drive their creators out of the labs, which is reminiscent of similar fears in connection with automation (for further background see Nowotny, 2021). Have the purveyors of today’s AI hype and their adherents reinvented the wheel? Do they even know that they have been doing so? Or is such talk merely an artefact of the kind of overexcited discourse that both journalists and entrepreneurs think they need to engage in, in order to gain attention? Stay tuned!