Martial Bliss: War and Peace in Popular Science Robotics

Abstract

In considering how best to deploy robotic systems in public and private sectors, we must consider what individuals will expect from the robots with which they interact. Public awareness of robotics—as both military machines and domestic helpers—emerges out of a braided stream composed of science fiction and popular science. These two genres influence news media, government and corporate spending, and public expectations. In the Euro-American West, both science fiction and popular science are ambivalent about the military applications for robotics, and thus we can expect their readers to fear the dangers posed by advanced robotics while still eagerly anticipating the benefits to be accrued through them. The chief pop science authors in robotics and artificial intelligence have a decidedly apocalyptic bent and have thus been described as leaders in a social movement called "Apocalyptic AI." In one form or another, such authors look forward to a transcendent future in which machine life succeeds human life, thanks to the march of evolutionary progress. The apocalyptic promises of popular robotics presume that presently exponential growth in computing will continue indefinitely, producing a "Singularity." During the Singularity, technological progress will be so rapid that undreamt-of changes will take place on earth, the most important of which will be the evolutionary succession of human beings by massively intelligent robots and the "uploading" of human consciousness into computer bodies. This supposedly inevitable transition into post-biological life looms across the entire scope of pop robotics and artificial intelligence (AI), and it is from beneath that shadow that all popular books engage the military and the ethics of warfare.
Creating a just future will require that we transcend the apocalyptic discourse of pop science and establish an ethical approach to researching and deploying robots, one that emphasizes human rather than robot welfare; doing so will require the collaboration of social scientists, humanists, and scientists.

Notes

  1.

    Path dependence is a feature of technological developments across a wide array of fields. For example, part of what enabled the Moog synthesizer to outsell the Buchla synthesizer was its keyboard, unnecessary for the most part but appealing to people who thought producing music depended upon such a traditional element (see Pinch and Bijsterveld 2003, 549–51). In a more dramatic example from the realm of digital music, the development of MIDI sound files in the 1980s eventually led to the entire realm of digital music conforming to its stylistic approach (Lanier 2010, 7–16).

  2.

    Moravec’s work has been labeled “seminal,” and he has been described as the single most important individual in the field of mobile robotics by one of his colleagues in an interview with me, and as a “living legend” elsewhere (Gutkind 2006, 93). De Garis, on the other hand, has yet to produce a product that lives up to his considerable self-promotion.

  3.

    This is a rather charitable description of his status in film. De Garis is interviewed in Ken Gumb’s Building Gods (a film never released), appears as one among many doomsayers in the film Last Days on Earth, is among those interviewed in the BBC documentary Human v2.0, and appears briefly in Transcendent Man (a movie that is actually about Ray Kurzweil).

  4.

    Academic articles do not generally affect the lay public. Rather, they are meant to convince other scientists that they should change their beliefs and behaviors to reflect those of the author. They are, in Latour’s words, “trials of strength.”

  5.

    Obviously, Frankenstein’s monster is not, itself, a robot. Nevertheless, Asimov chose it for its symbolic value as an artificially intelligent human creation feared and rejected by its creator.

  6.

    When I was Visiting Researcher at Carnegie Mellon University’s Robotics Institute during the summer of 2007, I found that Moravec and Kurzweil were widely respected on account of their technical work, while Warwick was a very distant third in significance and there were no kind words for de Garis.

  7.

    I am deeply grateful to Geoff Hollinger, who discussed this meeting with me, and Stuart Anderson, who provided me with a recording of the event.

  8.

    One exception to this is Sun Microsystems’ former Chief Scientist, Bill Joy, who wrote a widely publicized essay in Wired urging that we relinquish progress in some fields, including robotics (Joy 2000).

  9.

    The confluence of moral machines and a friendly partnership with robots is rampant in popular science books covering Japanese robotics. See, for example, Schodt (1988) and Hornyak (2006).

  10.

    Moravec explicitly rejects the idea that robots will have sexual feelings or activity (Moravec 1999, 118).

  11.

    This is a relatively naïve position in that it implies that war happens only where resources are scarce. While scarcity (real or perceived) may be a motivating factor in most or even in all human warfare, it certainly is not the only cause for war. Even in a post-scarcity economy, therefore, it would seem at least plausible that war could be carried out for ideological or other reasons.

  12.

    There is little correspondence here between Kurzweil’s forecasts and reality. Already, destructive technologies are far more efficacious than defensive technologies, as anyone who follows the U.S. Strategic Defense Initiative missile shield can attest. We have no answer to nuclear war at present, nor is there one on the immediate horizon; thus, it is difficult to be as optimistic as Kurzweil that future technologies will offer greater defensive than offensive power. Kurzweil’s position—widely adopted in tech enthusiast circles—may be an example of what Lee Bailey calls “enchantments.” Bailey believes that enchantments “narrow the focus of a society’s consciousness into a consensus agreement on certain beliefs and behaviors, such as optimism about technological progress” (Bailey 2005, 3). Rejecting the claims of AI advocates, Bailey believes that robots do not and will not possess human equivalence (Bailey 2005, 196–198) and that faith in the enchantments he describes diminishes human experience in the world (pp. 228–229).

  13.

    Perhaps the most significant piece of advice on offer today is that we focus on our human potential to create great things. Just as Singer deplores that our creative energies seem focused upon producing weapons of war (Singer 2009, 435–436), influential technocrat Jaron Lanier rejects any and all ways of making human beings subservient or inferior to machines in favor of using technology to produce meaningful human relationships (Lanier 2010).

  14.

    Philosophers Wendell Wallach and Colin Allen take this code to indicate that American engineers are professionally obligated to work toward providing robots with ethical governors, as this would presumably help guarantee the public welfare (Wallach and Allen 2009, 25).

  15.

    The specter of human subservience to machines has already been raised by Sparrow, who believes not only that it is plausible that human beings could be placed in harm’s way in order to defend or recover expensive military assets, but that it may well have already happened (Sparrow 2009, 173).

  16.

    Arkin’s mode of address confirms this, as when he writes in his epilogue: “Hopefully the goals of this effort will fuel other scientists’ interest to assist in ensuring that the machines that we as roboticists create fit within international and societal expectations and requirements” (Arkin 2009, 212).

  17.

    The 15 items and essays so identified are: Atwood (2009, 2011); Atwood and Klein (2007); Berry (2006, 2008, 2010); Govers (2008, 2010); Marsh (2009, 2010a, b); Mitsuoka (2011); Robot Magazine Staff (2008); Robot Magazine Staff et al. (2009); and Spinetta (2009). It should be noted that one of these (Atwood and Klein) describes a robot designed to rescue injured soldiers in the field; it is difficult to imagine serious ethical problems with such a robot, as long as it is safe to use. Although the articles generally lack reference to important ethical concerns, they frequently assert the familiar mantras that military use will benefit civilians and that human beings will always remain “in the loop” when robots deploy lethal force. This latter is, of course, highly debatable, and its use appears politically motivated rather than accurately representative of the future.

References

  1. Arkin, R. (2009). Governing lethal behavior in autonomous robots. Boca Raton: Chapman and Hall.

  2. Asimov, I. ([1953] 1991). Caves of steel. New York: Bantam Books.

  3. Asimov, I. ([1956] 1957). The naked sun. Garden City: Doubleday.

  4. Asimov, I. ([1983] 1991). The robots of dawn. New York: Del Rey.

  5. Atwood, T. (2009). Future bytes: Raytheon’s agile exoskeleton. Robot, 18, 90.

  6. Atwood, T. (2011). Future bytes: Warfighter robot upgrade. Robot, 26, 90.

  7. Atwood, T., & Klein, J. (2007). VECNA’s battlefield extraction-assist robot. Robot, 7, 24–28.

  8. Bailey, L. W. (2005). The enchantments of technology. Urbana: University of Illinois Press.

  9. Berry, K. (2006). Team robotics: DARPA grand challenge. Robot, 2, 76–79.

  10. Berry, K. (2008). DARPA urban challenge. Robot, 10, 24–28.

  11. Berry, K. (2010). Uncle Sam wants robots! Robot, 21, 48–49.

  12. Brand, S. (1987). The media lab: Inventing the future at MIT. New York: Viking.

  13. Čapek, K. ([1923] 2001). R.U.R. Mineola: Dover.

  14. Crevier, D. (1993). AI: The tumultuous history of the search for artificial intelligence. New York: Basic Books.

  15. De Garis, H. (2005). The artilect war: Cosmists vs. Terrans. A bitter controversy concerning whether humanity should build godlike massively intelligent machines. Palm Springs: ETC Publications.

  16. Dreyfus, H., & Dreyfus, S. (1986). Mind over machine: The power of human intuition and expertise in the era of the computer. New York: The Free Press.

  17. Friedman, R. (2010). 10 questions for Ray Kurzweil. Time (December 6). Retrieved 20 December 2010 from http://www.time.com/time/magazine/article/0,9171,2033076,00.html.

  18. Geraci, R. M. (2008). Apocalyptic AI: Religion and the promise of artificial intelligence. Journal of the American Academy of Religion, 76(1), 138–166.

  19. Geraci, R. M. (2010). Apocalyptic AI: Visions of heaven in robotics, artificial intelligence, and virtual reality. New York: Oxford University Press.

  20. Govers, F. X. (2008). The MULE: Anatomy of a U.S. Army warrior bot. Robot, 12, 30–31.

  21. Govers, F. X. (2010). U.S. Army holds ‘robot rodeo’ at Fort Hood. Robot, 20, 24–27.

  22. Gutkind, L. (2006). Almost human: Making robots think. New York: W.W. Norton.

  23. Haraway, D. (1997). Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™. New York: Routledge.

  24. Hornyak, T. N. (2006). Loving the machine: The art and science of Japanese robots. New York: Kodansha International.

  25. Joy, B. (2000). Why the future doesn’t need us. Wired 8.04 (April). Retrieved June 2007 from www.wired.com/wired/archive/8.04/joy.html.

  26. Kurzweil, R. (1999). The age of spiritual machines: When computers exceed human intelligence. New York: Viking.

  27. Kurzweil, R. (2005). The Singularity is near: When humans transcend biology. New York: Penguin.

  28. Lang, F. (1927). Metropolis. Berlin: Universum Film AG.

  29. Lanier, J. (2010). You are not a gadget: A manifesto. New York: Knopf.

  30. Latour, B. (1983). Give me a laboratory and I will raise the world. In K. Knorr-Cetina & M. Mulkay (Eds.), Science observed: Perspectives on the social study of science (pp. 141–170). Beverly Hills: Sage.

  31. Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Cambridge: Harvard University Press.

  32. Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. New York: Oxford University Press.

  33. Levy, D. (2006). Robots unlimited: Life in a virtual age. Wellesley: A.K. Peters.

  34. Levy, D. (2007). Love and sex with robots: The evolution of human–robot relationships. New York: HarperCollins.

  35. Liebowitz, S. J., & Margolis, S. E. (1995). Path dependence, lock-in, and history. Journal of Law, Economics, & Organization, 11(1), 205–226.

  36. Marsh, T. (2009). Robobusiness 2009 highlights: Military and healthcare. Robot, 18, 52–55.

  37. Marsh, T. (2010a). 2009 AUVSI conference coverage: Latest-generation unmanned systems for military and commercial applications. Robot, 20, 74–77.

  38. Marsh, T. (2010b). Snapshot: Guess who’s coming to dinner? Robot, 21, 50.

  39. Massimov, K. K. (2011). Posting on Twitter.com (May 22). Retrieved 24 May 2011 from http://twitter.com/KarimMassimov_E?_escaped_fragment_=/KarimMassimov_E#!/KarimMassimov_E.

  40. Minsky, M. (1994). Will robots inherit the earth? Scientific American (October). Retrieved 14 June 2007 from http://web.media.mit.edu/~minsky/papers/sciam.inherit.html.

  41. Mitsuoka, G. (2011). AUVSI North America 2010: Unmanned systems land in Denver. Robot, 26, 42–45.

  42. Moravec, H. (1978). Today’s computers, intelligent machines and our future. Analog, 99(2), 59–84. Retrieved 5 August 2007 from http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1978/analog.1978.html.

  43. Moravec, H. (1988). Mind children: The future of robot and human intelligence. Cambridge: Harvard University Press.

  44. Moravec, H. (1992). Letter from Moravec to Penrose. Electronic-mail correspondence published in R. B. Miller & M. T. Wolf (Eds.), Thinking robots, an aware Internet, and cyberpunk librarians: The 1992 LITA President’s Program (pp. 51–58). Chicago: Library and Information Technology Association.

  45. Moravec, H. (1999). Robot: Mere machine to transcendent mind. New York: Oxford University Press.

  46. Moshkina, L., & Arkin, R. (2007). Lethality and autonomous systems: Survey design and results. Technical Report GIT-GVU-07-16, Georgia Institute of Technology. http://smartech.gatech.edu/handle/1853/20068.

  47. Pinch, T. J., & Bijsterveld, K. (2003). ‘Should one applaud?’: Breaches and boundaries in the reception of new technology in music. Technology and Culture, 44(3), 536–559.

  48. Schodt, F. L. (1988). Inside the robot kingdom: Japan, mechatronics, and the coming robotopia. New York: Kodansha International.

  49. Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the 21st century. New York: Penguin.

  50. Sparrow, R. (2009). Building a better warbot: Ethical issues in the design of unmanned systems for military applications. Science and Engineering Ethics, 15(2), 169–187.

  51. Spinetta, L. (2009). Predator UAV: The ultimate teleoperated robot. Robot, 16, 20–23.

  52. Robot Magazine Staff. (2008). Snapshot: Global hawk unmanned aerial vehicle. Robot, 12, 32.

  53. Robot Magazine Staff, Lastrapes, T., Newhouse, S., & Krasny, D. (2009). Battelle multi-use robotic system: Robots to clean tanks of mighty B-52 Stratofortress. Robot, 15, 32–35.

  54. Stross, C. (2005). Accelerando. New York: Penguin.

  55. Telotte, J. P. (1995). Replications: A robotic history of the science fiction film. Urbana: University of Illinois Press.

  56. Tierney, J. (2008). The future is now? Pretty soon, at least. New York Times (June 3). http://www.nytimes.com/2008/06/03/science/03tier.html.

  57. Vinge, V. ([1993] 2003). Technological Singularity (revised edition). Retrieved 29 August 2009 from www.rohan.sdsu.edu/faculty/vinge/mis/WER2.html.

  58. Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. New York: Oxford University Press.

  59. Warwick, K. ([1997] 2004). March of the machines: The breakthrough in artificial intelligence. Urbana: University of Illinois Press.

  60. Weinberg, S. (1992). Dreams of a final theory: The scientist’s search for the ultimate laws of nature. New York: Vintage Books.

  61. Wilson, D. (2005). How to survive a robot uprising: Tips on defending yourself against the coming rebellion. New York: Bloomsbury USA.

  62. Young, J. R. (2009). Founder of Singularity University talks about his unusual new university. Chronicle of Higher Education (Feb 3). http://chronicle.com/wiredcampus/article/3592/founder-of-singularity-university-talks-about-his-unusual-new-institution.

Author information

Corresponding author

Correspondence to Robert M. Geraci.

Cite this article

Geraci, R.M. Martial Bliss: War and Peace in Popular Science Robotics. Philos. Technol. 24, 339 (2011). https://doi.org/10.1007/s13347-011-0038-3

Keywords

  • Apocalyptic AI
  • Artificial intelligence
  • Ethics
  • Hans Moravec
  • Military
  • Morality
  • Popular science
  • Ray Kurzweil
  • Robotics
  • Science fiction
  • War