1 Introduction

In Washington, DC, on April 2, 1968, the doors of the Uptown Theater opened for the premiere of 2001: A Space Odyssey. HAL 9000, the film’s artificial intelligence (AI) system, introduced the crowded screening to the concept of generalisable machine intelligence [1]. Designed in consultation with Massachusetts Institute of Technology computer scientist Marvin L. Minsky, the emphatic HAL embodied Minsky’s belief that “within a generation the problem of creating ‘artificial intelligence’ will be substantially solved [2].” Exiting the Washington cinema on a Tuesday night, patrons might be forgiven for thinking the same thing.

This paper explores the ways in which computer scientists at the Massachusetts Institute of Technology (MIT) popularised conceptions of the future of computing in the United States in the 1960s. It is not a technical history of computing at MIT; nor is it an exploration of the people and personalities that drove advances in the field. Instead, this paper demonstrates that MIT researchers positioned the university as an indispensable resource for the development of computing technologies through the construction of visions of the future. By disentangling conceptions of the future of intelligent machines and the mid-twentieth century AI research agenda, this paper connects the imaginaries that surrounded AI development at MIT with the research process. It draws from examples across film, television and print media to contend that while efforts by MIT researchers to develop, advance and popularise imaginaries in the fields of space exploration and nanotechnology in the 1970s are well documented, actions taken by the university’s computer scientists in the 1960s have so far been overlooked [3]. It suggests that while the state and its institutions appeal to the future to direct scientific agendas, so too do scientists strive to regain control of the research environment in which they operate.

Futures are generated through an “unstable field of language, practice and materiality” in which actors and agents compete for the right to represent progress and deterioration [4]. With regard to science, visions of potential futures are deployed to stimulate a desire to see potential technologies become realities [5]. Technoscientific futures constructed by MIT’s computer scientists can, therefore, be understood as ‘sociotechnical imaginaries’ that sought to shape contemporaneous perceptions of computing in 1960s America rather than accurately predict its future. In the canonical text for understanding the relationship between technology and the social order, Sheila Jasanoff and Sang-Hyun Kim defined sociotechnical imaginaries as “collectively imagined forms of social life and social order reflected in the design and fulfilment of nation-specific scientific and/or technological projects [6].” Jasanoff contends that the state creates imaginaries to legitimise its position as a champion of science and technology, and in turn, to harness landmark national projects to reinforce conceptions of nationhood by describing what the future could look like and prescribing what the future should look like. Sociotechnical imaginaries, Jasanoff argues, permeate the sphere of public policy, reconfiguring the socioeconomic environment by determining which citizens are included—and which are excluded—from the benefits of technoscientific development [6].

Jasanoff’s sociotechnical imaginaries belong to one of two scholarly approaches in the history of futurity of science and technology. The first approach suggests that futures play a generative role in shaping socioeconomic, scientific, and political life at the point of production. Understandings of the utility of expectations within research, the deployment of “promissory stories”, and reflections on the temporal structures of science and capitalism belong to this group [5, 7]. Characterised as ‘futurology’, the second approach explores the role that forecasting plays in contesting sites of action. It examines the way in which prediction can be used to control or protest futures and the role that “fictive scripts” play when deployed by scientists to win and sustain patronage. Analysis of the way in which futures are used by governments to manage the economy, and of the processes by which socioeconomic models of the future become entangled with political ideology, is central to this scholarly approach [4, 8, 9, 10]. This paper is concerned with the first approach; its primary unit of analysis is the sociotechnical imaginary, which scholars have expanded to include collective visions of the future produced by corporations and social movements [11]. It contends that scholars have overlooked efforts beginning in the 1960s by MIT computer scientists to produce visions of the future. In doing so, it argues that while the state and its institutions appealed to the future to control scientific agendas, so too did state-managed scientists appeal to it to regain control of the research environment. This paper proceeds by examining the impact of the Cold War on academia and the shifting terms of the university’s relationship with the state that provoked researchers to generate futures. 
It explores the role that scientists played in shaping specific futures through the development of “diegetic prototypes”, the ways in which researchers carefully managed scientific narratives, and the processes by which MIT’s computer scientists exploited a positivist media environment to popularise futures.

2 Sociotechnical imaginaries beyond the nation-state

Two overlapping technoscientific futures encouraged Americans to look forward in the mid-twentieth century. First, in the wake of the 1957 Sputnik shock, the state asserted greater control over the scientific agenda, investing in landmark national projects and deepening the incorporation of academia into the military industrial complex to plan, forecast and speculate [12, 13]. Secondly, popularised by activists and civil society organisations in the 1960s, an alternative future contained visions of looming disasters such as ecological collapse, atomic war, and unsustainable population growth [8]. Though distinct in character, these parallel futures, of awe and of anxiety, of high optimism and of fear, provoked a growing number of scientists to look at the future as well as to the future. As President John F. Kennedy’s 1961 speech to Congress made clear, prior to the 1960s, the United States had “never specified long-range goals on an urgent time schedule, or managed resources and time so as to ensure their fulfillment” [14]. As new areas of scientific contest grew in the shadow of the Cold War, the state manufactured a research environment concerned with the domination of strategic industries. A new technoscientific ecology emerged based on the proliferation of academic research sponsored by the state, an imperative for scientific “manpower” that fuelled university enrolments, and the development of new agencies to connect and coordinate research efforts [15, 16]. The National Science Foundation was established in 1950 to formalise and institutionalise science management and funding. Organisations such as the Advanced Research Projects Agency (ARPA), established in 1958, determined which strategic defence areas—such as space, ballistic missile defence, and nuclear test detection—to promote and which to neglect [17]. 
These organisations enabled the state to steer the research agenda in its academic institutions towards strategically sensitive areas and away from foundational science [18].

To reassert authority over the direction of research, scientists were forced to indirectly influence organisations such as ARPA to regain a measure of control over university science. In 1946, for example, as MIT’s wartime Radiation Laboratory became the Research Laboratory of Electronics (RLE), one of its first responsibilities was to study missile guidance as part of Project Meteor under a contract awarded by the U.S. Navy’s Bureau of Ordnance [19]. As these types of sponsored projects increased in number, however, so too did the influence of the universities undertaking them [20]. By the 1960s, the concentration of state patronage in influential institutions precipitated emergent properties that shifted the balance of power towards researchers. As enrolments, funding and sponsored research projects increased, state agencies lost their monopoly on the production of technoscientific futures, and, by extension, the formalisation and institutionalisation of science management in America. By the 1960s MIT had been transformed through intensifying ties with military institutions. The university had undertaken a “special” role in the Second World War, placing it in a lucrative position from which to benefit from military contacts and contracts [19]. Sponsored research contributed approximately 80 per cent of MIT’s operating budget in the 1960s; in 1969, 96 per cent of sponsored research funding came from federal agencies [21]. In 1963, MIT launched the Project on Mathematics and Computation (Project MAC) to focus on research into computing technologies with a $2 million grant from ARPA. In 1969, the newly formed Artificial Intelligence Laboratory split from Project MAC to focus on research into machine intelligence. Amidst a backdrop of intensifying ties with state institutions, ballooning research budgets, and a major uptick in student enrolments, MIT’s computer scientists sat at the epicentre of a powerful research ecosystem.

For the state, patronage proved to be a double-edged sword. As MIT reached a critical mass of expertise and influence, its researchers were no longer bound by the terms of the futures prescribed by ARPA or the National Science Foundation. MIT’s computer scientists sought to regain control over the research agenda through scientific sleight of hand, fabricating futures to set the direction of AI research, and, in turn, the academic climate in which they operated. While the state and its institutions appealed to the future to direct scientific agendas, so too did scientists appeal to it to regain control of the research environment.

3 Producing and circulating scientific narratives

Technoscientific futures can be concentrated into artefacts. In the entertainment industry, for example, scientists can shape the narrative and visual content of specific futures through the development of ‘diegetic prototypes’ [22]. MIT’s Marvin Minsky influenced the development of HAL 9000 in 2001: A Space Odyssey to reflect his views on the proximity and desirability of generalisable machine intelligence and the potency of the ‘symbolic’ approach he believed was necessary to achieve it. Science consultants typically play two roles within the entertainment industry. First, as technical consultants, they advise on visual technologies to aid in the development of special effects in TV and film. Second, consultants can provide advice to directly shape the narrative and content of visual media. These consultants advise on the ‘believability’ of a film’s narrative, how best to situate scientific elements within a broader cultural context, and on the use of science as a tool to create drama [22]. This group also advises on the development of diegetic prototypes that act as representations of conceptual or early-stage technologies. In this way, diegetic prototypes serve to both prescribe futures in which certain technologies ought to exist and describe futures in which the development of these technologies is sensible, realistic, or achievable.

Marvin Minsky was one of several technical partners who advised on the portrayal of technologies in 2001: A Space Odyssey [22]. Minsky contributed to the development of HAL, a ‘generalisable machine intelligence’, providing advice on the capabilities of such a system and the technical hurdles that would have to be cleared to develop it. Generalisable machine intelligence refers to a hypothetical synthetic agent with the capacity to understand or learn any intellectual task that a human being can. These systems differ from so-called ‘narrow’ programmes, which excel at a particular task but are incapable of working effectively across multiple domains. Minsky believed that narrow AI systems would give way to generalisable agents, remarking several times during the 1960s that such programmes would be developed “within a generation” [2]. Fundamental to building generalisable systems, Minsky believed, was a process known as ‘symbolic reasoning’, in which physical patterns are combined into expressions and manipulated via processes to produce new expressions that enable actions across a wide range of domains. An influential artificial intelligence paradigm in the 1960s, symbolic reasoning produced a number of significant technical advances in areas such as natural language processing and mathematics that encouraged researchers to consider more complex applications. The symbolic branch of artificial intelligence stood in contrast to the ‘connectionist’ or machine learning school, which proposes that systems ought to mirror the interaction between neurons in the brain to independently learn from data. Artificial neural networks, which dominate the field today, belong to this group. This distinction was represented by HAL 9000. 
Minsky stated that HAL was “supposed to have the best of both worlds” in reference to the use of case-based reasoning and heuristics representative of the symbolic approach to artificial intelligence, which he viewed as a promising path towards the development of generally intelligent systems [23]. As a result, the system can be understood as a diegetic prototype designed to demonstrate the utility of generalisable machine intelligence and its attainability through the extrapolation of the symbolic approaches popular in the 1960s. The power of such imaginaries to direct the course of AI research is well-documented. Cave, Dihal, and Dillon have argued that “Narratives of intelligent machines matter because they form the backdrop against which AI systems are being developed, and against which these developments are interpreted and assessed [24].” My account follows in that tradition.
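The symbolic paradigm described above can be sketched in miniature. The snippet below is purely illustrative: the tuple-based expression format, the example facts, and the `forward_chain` helper are inventions for this example, not reconstructions of any actual 1960s MIT system.

```python
# Illustrative sketch of the 'symbolic' paradigm: knowledge is held as
# explicit expressions, and inference rules rewrite known expressions
# into new ones. All facts and rules here are hypothetical examples.

def forward_chain(facts, rules):
    """Apply rules repeatedly until no new expressions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # A rule fires when all of its premises are already known.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Expressions are symbolic tuples, in the spirit of 1960s demonstrations.
rules = [
    (frozenset({("is", "HAL", "computer")}), ("can", "HAL", "compute")),
    (frozenset({("can", "HAL", "compute"), ("has", "HAL", "speech")}),
     ("can", "HAL", "converse")),
]
derived = forward_chain({("is", "HAL", "computer"), ("has", "HAL", "speech")}, rules)
```

Connectionist systems, by contrast, encode no such explicit expressions at all; their behaviour emerges from numerical weights learned from data.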

HAL’s capabilities included speech synthesis, computer vision, lip reading as well as traits including planning, recognition, and comprehension. While a number of contemporary and historical critics have suggested that HAL, in using these abilities to commit acts of violence against his crew members, reflected ‘hostility’ on the part of the filmmakers towards computers, this was not the view of Minsky—a researcher who had devoted his career to the study of AI. Nor was it a view held by director Stanley Kubrick, who told Playboy Magazine in 1968 that he was “not hostile toward machines at all” and that he was instead concerned with the effective management of the interrelationship between humans and computers [25]. The filmmakers aimed to use HAL not to warn of the threat posed by machine intelligence, but of a failure to effectively manage the process of human–machine symbiosis, a concept introduced by MIT researcher J.C.R. Licklider in 1960 [26].

Through HAL, Kubrick and Minsky prescribed futures in which generalisable machine intelligence ought to exist––by entangling the future of computing with that of space travel, commercialisation, and communication––and described futures in which the development of such systems appeared feasible. HAL aimed to prove that generalisable systems contained technical knowledge suitable for a wide array of applications, while demonstrating that the effective management of such systems was necessary to successfully manage risks. By fabricating imaginaries that demonstrated both its utility and viability, Minsky moved to shape the direction of artificial intelligence research by promoting the symbolic approach to AI development [27]. Though diegetic prototypes were powerful tools for the fabrication of sociotechnical imaginaries, they existed as just one mechanism for the popularisation of particular futures. Throughout the 1960s, MIT researchers would support such activity with a constellation of approaches concerned with the promotion of imaginaries as they looked to contest some futures and solidify others.

Scientific narratives enable the popularisation of technoscientific imaginaries. While MIT researchers told scientific stories across many media, they found uniquely fertile ground in American television. The 1960s were a transitional period in science communication on American television in which researchers exerted creeping levels of influence over the production and circulation of scientific narratives. Television in the 1950s was characterised by researchers taking a supporting role in the development process, a focus on “everyday science”, funding driven by industrial scientific partners, the idealisation of white, moderately wealthy citizens as the target audience, and a focus on entertainment. By the 1970s, however, researchers sought opportunities to work on television, audiences were encouraged to “think like a scientist”, science television was increasingly funded by federal agencies, researchers began to consider the inclusion of diverse scientists, and the reconsideration of personal relationships with science was encouraged [28]. MIT’s computer scientists were the beneficiaries of these trends, taking advantage of a climate in which scientists became more willing and more able to directly shape narratives on American television. Where scientists used television to sell products in the 1950s, by the end of the 1960s researchers were increasingly afforded the latitude to tell provocative, emotional stories that emphasised the role of science within society. This was the environment in which MIT’s researchers would share visions of the future: a crystallising public understanding of the relationship between science and its impact on political, economic and social life.

The production of scientific narratives typically relied on external partners to shape television content. To assert control, the university developed its own show, “MIT Science Reporter”, communicating advances across a wide range of disciplines including space exploration, nuclear energy and computing from 1963 to 1966 [29]. In 1963, MIT professor of computer science Fernando J. Corbato appeared on the programme to discuss his Compatible Time-Sharing System (CTSS). Corbato noted that, after several improvements to increase the reliability of computers, the ‘batch processing method’, in which many requests were combined into a single large job for the computer to process, was the next major bottleneck in the development of computing technologies. The solution, explained Corbato, was CTSS, which involved attaching consoles to a central computer, with each console afforded a portion of the computer’s time. This enabled multiple individuals to use the computer simultaneously, which, he stated, would be necessary for the mass adoption of computing technologies [30]. Corbato, in positioning time-sharing as fundamental to the growth of the computing industry, aimed to popularise a future in which the failure to adopt such systems would stymie general access to computing technologies.
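The scheduling idea Corbato describes can be sketched as a simple round-robin loop. This is an illustrative toy, not CTSS’s actual scheduler; the console names, job lengths, and one-unit time quantum are all assumptions made for the example.

```python
# Illustrative round-robin time-sharing: the processor cycles through
# consoles, giving each a small slice of time, rather than running one
# batch job to completion before starting the next.
from collections import deque

def round_robin(jobs, quantum=1):
    """Return the order in which consoles receive processor slices."""
    queue = deque(jobs.items())          # (console, remaining work units)
    trace = []
    while queue:
        name, remaining = queue.popleft()
        trace.append(name)               # this console is served now
        if remaining - quantum > 0:
            queue.append((name, remaining - quantum))  # re-queue the rest
    return trace

# Three hypothetical consoles sharing one central computer.
trace = round_robin({"console_A": 2, "console_B": 1, "console_C": 2})
```

Every console is serviced within the first three slices; under batch processing, by contrast, the last console would wait for the complete jobs of the other two before receiving any service at all.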

Futures can warn and promote, and hold the power to exclude as well as include. Certain narratives reinforced the notion that science—and by extension the future—were primarily ‘for’ white women and men, often obscuring the role that researchers from minority backgrounds played in the development of technologies. In 1961, MIT’s Jerome Wiesner, the then director of the Research Laboratory of Electronics, discussed machine cognition in “The Thinking Machine”, a show developed by MIT in collaboration with CBS as part of its Tomorrow television series. Wiesner stated that computing will “spark a revolution that will change the face of the earth” [31]. The show was heavily geared towards an idealised middle class, white audience; Wiesner and presenter David Wayne were white men, while white women featured in a number of cutaway segments to ‘illustrate’ the way in which computers and humans learn. The use of a clip featuring the transformation of a robot into a white woman from the 1927 film Metropolis, though usefully understood as a diegetic prototype, also underscores the way in which visions of the future of computing—particularly in the context of human–machine symbiosis—were presented as inseparable from notions of whiteness [31]. The fabrication of highly visual futures geared towards an idealised audience failed to include constituencies who were responsible for the development of particular computing technologies: black researchers at MIT working at Project MAC, a computation-focused research group, and the Research Laboratory of Electronics. There was no universal experience of race at MIT’s computing labs during this period, with some black researchers noting apathy and others hostility [32]. Where there was consistency, however, was in the failure of MIT’s computer scientists to include minorities within the futures they would popularise. 
An August 1970 edition of Ebony Magazine contained a feature on black students at MIT that made an appeal for the inclusion of black people in technoscientific futures, stating that “our communities need people who are qualified professionals in science” [33, 34]. Cave and Dihal have suggested that “Whiteness both illuminates particularities of what (Anglophone Western) society hopes for and fears from these machines, and situates these affects within long-standing ideological structures that relate race and technology.” In this way, MIT’s computer scientists both fabricated futures in which computing was for an idealised white constituency and overlooked the role that minority researchers played in the development of the very technologies on which those futures were based.

4 Positivism and the future

The form that imaginaries took on consumption was governed by the influence of the organisations, institutions, and individuals that sat between MIT’s computer scientists and their public-facing output. Positivist portrayals—which MIT researchers co-opted in the construction of certain imaginaries—were predicated on close relationships between researchers and science journalists, a desire to emphasise applications and results, and epideictic language to praise or blame [35]. The scholarly consensus in science communication holds that positivist portrayals, in which science is seen as a rational, neutral and heroic endeavour, have historically been perpetuated by science journalists. Science reporting has therefore often been “generally uncritical of its subject” [35]. Though scholars have suggested that uncritical attitudes serve those interests that have used science in the legitimation of the prevailing social order, they have so far largely overlooked the role that uncritical attitudes play in empowering scientists to appeal to the future when emphasising applications of emerging technologies. This receptive stance permitted MIT’s computer scientists to engage in a highly effective promotion of futures, fabricating visions to underwrite speculated scientific and technical activity.

Positivist journalistic portrayals of MIT’s computer scientists existed throughout the 1960s. Reporting on the development of an MIT system that synthesised human speech in 1961, Florida newspaper the St. Petersburg Times optimistically noted “speech synthesisers will play an important role as communication between man and machine becomes more advanced” [36]. In a September 1966 edition of Scientific American, no fewer than six professors associated with MIT’s computing breakthroughs wrote for the magazine, including Minsky and Corbato, on artificial intelligence and time-sharing respectively [37]. Though Scientific American was aimed at a small, technically versed audience, scientists wrote directly to this constituency without a journalist intermediary. With the exception of a note in the author biographies that makes clear ARPA’s role in funding MIT research, the authors were otherwise free to represent their views—including applications and warnings—without clarification or caveat. Writing with Robert Fano, Corbato stated that the Compatible Time-Sharing System would act as an “extraordinarily powerful library serving an entire community—in short, an intellectual public utility [38].” Corbato and Fano constructed visions of future applications to develop what Mads Borup refers to as ‘constitutive expectations’ to attract the interest of necessary allies [5, 38]. Minsky would similarly exploit the platform to expound his arguments regarding the viability and utility of generally intelligent machines if developed in conjunction with appropriate safeguards, writing “Whether or not we could retain some sort of control of the machines…the nature of our activities and aspirations would be changed utterly by the presence on earth of intellectually superior beings [39].”

MIT’s computer scientists co-opted an uncritical media to win technical arguments in the academic sphere. By popularising visions using the press, researchers competed for space with their contemporaries to create an environment in which MIT’s computing technologies, not those of a rival academic institution, were positioned as crucial to the fulfilment of computing futures. One example came in 1969 when Marvin Minsky and his longtime friend and collaborator Seymour Papert published Perceptrons: An Introduction to Computational Geometry. The book, which was warmly received by critics, contains a critique of ‘connectionist’ approaches in the field of artificial intelligence by examining the Perceptron, a single layer neural network developed by psychologist Frank Rosenblatt in 1958 [40, 41]. Perceptrons is widely considered to have diverted research away from connectionist approaches and towards symbolic reasoning [27]. Minsky’s efforts to discredit connectionism involved the popularisation of new visions to challenge those that promoted connectionist research, such as those in the New York Times in 1958 [42]. In a 1968 article in the New York Times, Minsky outlined a proof of concept for a machine designed to support construction engineers. Fundamental to the success of such machines, the article explained, was the ability for a system to ‘know’ the intrinsic qualities of objects—a feature heavily associated with symbolic systems [43]. Epitomised by Marvin Minsky, MIT researchers generated futures to win technical arguments within the academic arena, legitimising certain approaches and discrediting others to cement their position within the scientific ecosystem.
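The technical heart of Minsky and Papert’s critique can be illustrated in a few lines. The sketch below trains a single-layer perceptron with the classic error-correction rule; the learning rate and epoch count are arbitrary choices for the example. The perceptron learns AND, which is linearly separable, but no choice of weights allows it to compute XOR, the limitation Perceptrons made famous.

```python
# Single-layer perceptron with the classic error-correction update.
# Learning rate and epoch count are illustrative, not canonical values.

def train_perceptron(samples, epochs=20, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, +/-1 otherwise
            w[0] += lr * err * x1        # nudge the decision boundary
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

learned_and = train_perceptron(AND)  # AND is linearly separable: learnable
learned_xor = train_perceptron(XOR)  # XOR is not: no weights can succeed
```

However many epochs are run, some XOR input is always misclassified, because no single line separates the two XOR classes; demonstrating the consequences of such limits is what gave the book its influence over the direction of research funding and attention.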

5 Contest, cooperation, and futurity

During the 1960s, MIT’s computer scientists produced and popularised imaginaries to regain control of the research agenda from the state. Amidst the growth of the university’s expertise and influence, researchers constructed futures to influence state organs such as ARPA, and, as a result, regulate the downstream impact that these organisations had on the academic research environment. Led by artificial intelligence researcher Marvin Minsky, MIT’s computer scientists utilised diegetic prototypes to demonstrate both the utility and viability of particular futures and the paths that research must therefore take to realise them. The university’s computer scientists asserted greater control over the production and circulation of scientific narratives to popularise technoscientific futures containing promises of advancement for certain societal constituencies, promises themselves conditional on the adoption of MIT’s computing technologies. In exploiting a potent strain of positivism in American science journalism, researchers amplified visions by bringing the weight of the US media apparatus to bear on the legitimation and popularisation of imaginaries.

Minsky’s efforts to shape the development of HAL 9000 in 2001: A Space Odyssey to prove the proximity and desirability of generalisable machine intelligence were representative of a wider trend that included repackaging imagery from films such as Metropolis in “The Thinking Machine”. The development of television shows such as the “MIT Science Reporter” enabled the university to construct futures whose fulfilment was conditional on the use of both its technologies and their underlying concepts, such as time sharing, human–machine symbiosis, and symbolic reasoning. Not content with the use of autonomous media platforms to popularise visions, the university’s computer scientists co-opted a positivist media environment by writing directly to the public in the instance of Scientific American, or via carefully choreographed messaging in national media outlets like The New York Times and in local papers such as Florida’s St. Petersburg Times.

This paper has argued that MIT’s computer scientists fabricated and popularised visions of the future of AI prior to the efforts of colleagues in the fields of space exploration and nanotechnology in the 1970s. It has acknowledged that instruments of the state appealed to the future to guide research towards strategically sensitive areas in the context of Cold War technoscientific contest. It asserts, however, that intensifying ties between military and academic institutions provided researchers with the ability to construct independent visions of the future to regain control of the research environment. By foregrounding the porous nature of relationships that govern groups shaping particular futures, the paper has demonstrated that the state’s monopoly on sociotechnical imaginaries can be understood in less certain or absolute terms. While the future is a competitive arena, MIT’s computer scientists indicate that––in the case of AI and computer science––the contest to represent progress and deterioration can be read in less totalising, unidirectional, and martial terms.