Some Economic Incentives Facing a Business that Might Bring About a Technological Singularity

Singularity Hypotheses

Part of the book series: The Frontiers Collection (FRONTCOLL)

Abstract

A business that created an artificial general intelligence (AGI) could earn trillions for its investors, but might also bring about a “technological Singularity” that destroys the value of money. Such a business would face a unique set of economic incentives that would likely push it to behave in a socially sub-optimal way by, for example, deliberately making its software incompatible with a friendly AGI framework.

Notes

  1. The term foom comes from AGI theorist Eliezer Yudkowsky. See Yudkowsky (2011).

  2. The small investors who did seek to influence the probability of a utopian foom would essentially be accepting Pascal's Wager.

  3. A friendly AGI framework, however, might be adopted by an AGI-seeking firm if the framework reduced the chance of an unsuccessful foom and didn't guarantee that a foom the firm might deliberately create would be utopian.

  4. The anthropic principle could explain how the slave point could be very low even though AGI hasn't yet been invented. Perhaps the "many worlds" interpretation of quantum physics is correct, and in 99.9% of the Everett branches that came out of our January 1, 2000, someone created an AGI that quickly went foom and destroyed humanity. Given that we exist, however, we must be in the 0.1% of the Everett branches in which extreme luck saved us. It therefore might be misleading to draw inferences about the slave point from the fact that an AGI hasn't yet been created. For a fuller discussion of this issue see Shulman and Bostrom (2011). (The short simulation after these notes makes this selection effect concrete.)

  5. See Penrose (1996).

  6. http://papers.nber.org/papers/w17394
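
The selection effect in note 4 can also be seen in a short simulation (a sketch added here for illustration; the branch counts, hazard rates, and function name are assumptions, not anything from the chapter). However deadly fooms are across branches, every surviving observer records the same datum, "no foom yet":

    # Sketch of the observation-selection effect described in note 4.
    # Assumption: each Everett branch independently suffers a fatal foom
    # with probability `hazard`; observers exist only in surviving
    # branches, so survivors always observe "no foom so far".
    import random

    def surviving_branches(n_branches: int, hazard: float) -> int:
        """Count branches in which no fatal foom occurred."""
        return sum(random.random() > hazard for _ in range(n_branches))

    for hazard in (0.1, 0.5, 0.999):
        survivors = surviving_branches(100_000, hazard)
        # Each survivor's own history shows zero fooms regardless of
        # `hazard`, so that history alone cannot pin `hazard` down.
        print(f"hazard={hazard}: {survivors:,} of 100,000 branches survive; "
              f"every survivor has observed zero fooms")

Because "we have not yet been destroyed" is guaranteed by our existence, it carries little information about the true hazard, which is the point Shulman and Bostrom (2011) develop.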

References

  • Penrose, R. (1996). Shadows of the mind: A search for the missing science of consciousness. New York: Oxford University Press.

  • Shulman, C., & Bostrom, N. (2011). How hard is artificial intelligence? The evolutionary argument and observation selection effects. Journal of Consciousness Studies.

  • Yudkowsky, E. (2011). Recursive self-improvement. Less Wrong, 6 Sept 2011. http://lesswrong.com/lw/we/recursive_selfimprovement/.

Author information

Correspondence to James D. Miller.

Robin Hanson on Miller's "Some Economic Incentives Facing a Business that Might Bring About a Technological Singularity"

James Miller imagines a public firm whose product is an artificial intelligence (AI). While this AI device is assumed to become the central component of a vast new economy, this firm does not sell one small part of such a system, nor does it attempt to make a small improvement to a prior version. Miller instead imagines a single public firm developing the entire system in one go. Furthermore, if this firm succeeds, it succeeds so quickly that there is no chance for others to react: the world is remade overnight.

Miller then focuses on a set of extreme scenarios where AIs “destroy the value of money”. He gives examples: “mankind has been exterminated, … the new powers that be redistribute wealth independent of pre-existing property rights, …, [or] all sentient beings are merged into a single consciousness”. Miller’s main point in the paper is that a firm’s share prices estimate its financial returns conditional on money still having value, yet we care overall about unconditional estimates. This can lead such an AI firm to make socially undesirable investment choices.
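
Hanson's summary of this wedge can be written out explicitly (my notation, added for illustration; neither Miller nor Hanson states it this way). Let $p$ be the probability that the firm's project triggers a money-destroying foom, $R$ its monetary return in the scenarios where money keeps its value, and $V_{\text{foom}}$ the social value, utopian or catastrophic, of the foom scenarios themselves:

$$ \text{share price} \propto \mathbb{E}[R \mid \text{money retains value}], $$

$$ \text{social value} = (1-p)\,\mathbb{E}[R \mid \text{money retains value}] + p\,V_{\text{foom}}. $$

A firm that maximizes the first expression is indifferent to the $p\,V_{\text{foom}}$ term, so as $p$ grows its privately optimal investments can diverge arbitrarily from the socially optimal ones.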

This is all true, but it is only as useful as the assumptions on which it is based. Miller's chosen assumptions seem to me quite extreme, and quite unlikely. I would have been much more interested to see Miller identify market failures under less extreme circumstances.

By the way, an ambitious high-risk AI project seems more likely to be undertaken by a private firm than by a public one. In the US, private firms accounted for 54.5% of aggregate non-residential fixed investment in 2007, and they seem 3.5 times more responsive to changes in investment opportunities (see note 6). Public firms mostly undertake only the sorts of investments that can give poorly informed stock speculators reasonable confidence of good returns; they leave subtler opportunities to private firms. Since 83.2% of private firms are managed by a controlling shareholder, a private firm would likely consider, when choosing AI strategies, scenarios where the value of money is destroyed. So to the extent that public firms' neglect of such scenarios is a problem, we might prefer that private firms do ambitious AI research.

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Miller, J.D. (2012). Some Economic Incentives Facing a Business that Might Bring About a Technological Singularity. In: Eden, A., Moor, J., Søraker, J., Steinhart, E. (eds) Singularity Hypotheses. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-32560-1_8

  • DOI: https://doi.org/10.1007/978-3-642-32560-1_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-32559-5

  • Online ISBN: 978-3-642-32560-1

  • eBook Packages: Engineering
