
Some Economic Incentives Facing a Business that Might Bring About a Technological Singularity

James D. Miller

Chapter, part of The Frontiers Collection book series (FRONTCOLL)

Abstract

A business that created an artificial general intelligence (AGI) could earn trillions for its investors, but might also bring about a “technological Singularity” that destroys the value of money. Such a business would face a unique set of economic incentives that would likely push it to behave in a socially sub-optimal way by, for example, deliberately making its software incompatible with a friendly AGI framework.


Introduction

A business that created an artificial general intelligence (AGI) could earn trillions for its investors, but might also bring about a “technological Singularity” that destroys the value of money. Such a business would face a unique set of economic incentives that would likely push it to behave in a socially sub-optimal way by, for example, deliberately making its software incompatible with a friendly AGI framework. Furthermore, all else being equal, the firm would probably have an easier time raising funds if failure to create a profitable AGI resulted in the destruction of mankind rather than the mere bankruptcy of the firm. Competition from other AGI-seeking firms would likely cause each firm to accept a greater chance of bringing about a Singularity than it would without competition, even if the firm believes that any possible Singularity would be dystopian.

In writing this chapter I didn’t seek to identify worst-case scenarios. Rather, I sought to use basic microeconomic thinking to make a few predictions about how a firm might behave if it could bring about a technological Singularity. Unfortunately, many of these predictions are horrific.

The Chapter’s General Framework

This chapter explores several scenarios in which perverse incentives can cause actors to make socially suboptimal decisions. In most of these scenarios a firm must follow one of two possible research and development paths. The chapter also makes the simplifying assumption that a firm’s attempt to build an AGI will result in one of three possible outcomes:
  • Unsuccessful—The firm doesn’t succeed in creating an AGI. The firm’s owners and investors are made worse off because of their involvement with the firm.

  • Riches—The firm succeeds in creating an AGI. This AGI performs extremely profitable tasks, possibly including operating robots which care for the elderly; outperforming hedge funds at predicting market trends; writing software; developing pharmaceuticals; and replacing professors in the classroom. Although an AGI which brings about outcome riches might completely remake society, by assumption money still has value in outcome riches.

  • Foom—The AGI experiences an intelligence explosion that ends up destroying the value of money.1 This destruction assumption powers most of my results. Here is how outcome foom might arise: An AGI with human-level intelligence is somehow created, and this AGI has the ability to modify its own software. The AGI initially figures out ways to improve its intelligence slightly, perhaps by giving itself better hardware. After the AGI has made itself a bit smarter, it becomes even better at improving its own intelligence. Eventually, through recursive self-improvement, the AGI experiences an intelligence explosion, possibly making it as superior to humans in intelligence as we are to ants.

The Singularity gives me an opportunity to play with an assumption that would normally seem crazy to economists: that a single firm might obliterate the value of all past investments. My property-destruction assumption is reasonable because if any of the following conditions—all of which (especially the first) are plausible side effects of a foom—hold, you will not be better off because of your pre-foom investments:
  • Mankind has been exterminated;

  • scarcity has been eliminated;

  • the new powers that be redistribute wealth independent of pre-existing property rights;

  • everyone becomes so rich that any wealth they accumulated in the past is trivial today;

  • all sentient beings are merged into a single consciousness;

  • the world becomes so weird that money no longer has value, e.g. we all become inert mathematical abstractions.

For investments made in the past to have value today, there must exist certain kinds of economic and political links between the past and present. Anything that breaks these necessary connections annihilates the private value of past investments. As a Singularity would create massive change, it has a significant chance of dissolving the links necessary to preserving property rights.
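
To make the framework concrete, here is a minimal sketch of the payoff structure, written as an illustration rather than taken from the chapter; the dollar figures and the ten-fold multiple are arbitrary assumptions, and the only structural feature carried over from the text is that a foom zeroes out every investment.

```python
# Illustrative payoff structure for the chapter's three outcomes.
# All numbers are hypothetical; the one assumption taken from the text
# is that a foom destroys the value of all investments.

def investor_payoff(outcome: str, invested: float) -> float:
    """Return a small investor's payoff under each outcome."""
    if outcome == "unsuccessful":
        return 0.0             # the investment is lost
    if outcome == "riches":
        return invested * 10   # hypothetical large return
    if outcome == "foom":
        return 0.0             # money no longer has value, whatever you hold
    raise ValueError(f"unknown outcome: {outcome}")

for outcome in ("unsuccessful", "riches", "foom"):
    print(outcome, investor_payoff(outcome, invested=1_000.0))
```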

Small, Self-Interested Investors

This chapter extensively discusses the decisions of small, self-interested investors. A single small investor can’t affect what happens in any of the three outcomes, nor influence the probability of any of the outcomes occurring. Because of how financial markets operate, small investors have zero (rather than just a tiny) effect on the share price of companies. If the fundamentals of a company dictate that its stock is worth $20, then if you buy the stock its price might go slightly above $20 for a tiny amount of time. But as the stock would still, fundamentally, be worth $20 your actions would cause someone else to sell the stock, pushing the price per share back to $20. For large companies, such as IBM, almost all investors are small, because none of them can permanently change the stock price. Billionaire Bill Gates owning $100 million of IBM stock would still qualify him as a small investor in IBM.

Even though a small investor acting on his own can’t seriously affect a firm, anything which makes the company more or less attractive to most small investors will impact a company’s stock price and its ability to raise new capital.

I assume that investors are self-interested and care only about how their decisions will affect themselves. Since small investors can’t individually impact what happens to a company, assuming small investors are self-interested is probably an unnecessary assumption. But I make the assumption to exclude the possibility that a huge percentage of investors will make their investment decisions based on moral considerations of what would happen if their actions determined how others invested. This self-interested assumption is consistent with the normal behavior of almost all investors.

No Investment without Riches

An AGI-seeking firm would have no appeal to small, self-interested investors if the firm followed a research and development path that could lead only to outcomes unsuccessful or foom. An obvious condition for a self-interested individual to invest in a firm is that making the investment should sometimes cause the individual to become better off. If an AGI-seeking firm ended up being unsuccessful, then its investors would be made worse off. If the firm achieved outcome foom, then although a small investor might have been made much better or worse off because of the firm’s activities, his investment in the firm cannot have been a cause of the change in his welfare because, by assumption, a single small investor can’t affect the probability of a foom or what happens in a foom, and how one is treated post-foom isn’t influenced by one's pre-foom property rights.

More troubling, the type of foom an AGI might cause would have no effect on small investors’ willingness to fund the firm. Rational investors consider only the marginal costs and benefits of making an investment. If an individual’s investment would have no influence over the type and probability of a foom, then rational investors will ignore what type of foom a firm might bring about, even though the collective actions of all small investors affect the probability of a foom and the type of foom that might occur.
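
In expected-utility terms (the notation here is mine, not the chapter's), the point can be written as:

$$
\mathbb{E}[U \mid \text{invest}] - \mathbb{E}[U \mid \text{don't invest}]
= p_u\bigl(U_u^{\text{invest}} - U_u^{\text{don't}}\bigr)
+ p_r\bigl(U_r^{\text{invest}} - U_r^{\text{don't}}\bigr)
+ p_f\bigl(U_f - U_f\bigr),
$$

where $p_u$, $p_r$, and $p_f$ are the probabilities of outcomes unsuccessful, riches, and foom, none of which a single small investor can affect. Because that investor also cannot change the post-foom outcome $U_f$, the last term is zero, and both the probability and the type of foom drop out of the individual investment decision entirely.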

Imagine that a firm follows one of two possible research and development paths. Each path leads to a 98 % chance of unsuccessful, a 1 % probability of riches, and a 1 % chance of a foom. Let’s further postulate that if outcomes unsuccessful or riches occurs, then the firm and its investors would be just as well off under either path. The foom that Path 1 would create, however, would be utopian, whereas the foom that Path 2 would bring would kill us all. Small, self-interested investors would be just as willing to buy stock in the firm if it followed Path 1 or Path 2. In a situation in which it’s slightly cheaper to follow Path 2 than Path 1, the firm would have an easier time raising funds from small, self-interested investors if it followed Path 2. And the situation is going to get much worse (Fig. 8.1).
Fig. 8.1 When the type of foom is irrelevant to investment decisions
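
A back-of-the-envelope calculation makes the indifference explicit. The probabilities come from the example above; the dollar payoffs are hypothetical numbers of my own choosing.

```python
# Expected monetary value of a $1,000 stake under Path 1 and Path 2.
# Payoffs in unsuccessful and riches are identical across paths, as the
# example assumes; in a foom money is worthless, so that payoff is zero
# whether the foom is utopian (Path 1) or dystopian (Path 2).
paths = {
    "Path 1 (utopian foom)":   {"unsuccessful": 0.98, "riches": 0.01, "foom": 0.01},
    "Path 2 (dystopian foom)": {"unsuccessful": 0.98, "riches": 0.01, "foom": 0.01},
}
payoff = {"unsuccessful": 0.0, "riches": 10_000.0, "foom": 0.0}  # hypothetical dollars

for name, probs in paths.items():
    expected = sum(probs[o] * payoff[o] for o in probs)
    print(f"{name}: expected payoff = ${expected:,.2f}")
# Both lines print the same number, so a small, self-interested investor
# has no monetary reason to prefer the utopian path.
```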

Some might object that my analysis places too high a burden on the assumption that an individual small investor has zero effect on stock prices, and that if I slightly weakened this assumption my results wouldn’t hold. These objectors might claim that since a utopian foom would give everyone billions of years of bliss, even a slight chance of influencing a foom would impact the behavior of a small investor.2 To the extent that this objection is true, the results in this chapter become less important. But an investor with a typical discount rate might not significantly distinguish between, say, living for fifty years in bliss and living forever in such a state.

The actions of members of the Singularity community show that most people who believe in the possibility of a Singularity usually ignore opportunities to only slightly increase their subjective probability of a utopian foom occurring. Many members of this community think that the Singularity Institute for Artificial Intelligence is working effectively to increase the probability of a positive intelligence explosion, and that the more resources this organization receives, the greater the chance of a utopian foom. Yet most people with such beliefs (including this author) spend money on goods such as vacations, candy, and video games rather than donating all the resources they use to buy these goods to the Institute. Furthermore, the vast majority of people who believe in the possibility of a utopian Singularity and think that cryonics would increase their chance of surviving until the Singularity don’t sign up with a cryonics provider such as Alcor (although this author has). The revealed preferences of Singularity “believers” show that I’m not putting too high a burden on my “zero effect” assumption.

Even if, however, investors do act as if their actions impact the type and probability of a foom, there would still be colossal externalities to investors’ decisions because people other than the investors, the firm, and the firm’s customers would be impacted by the investors’ decisions. Basic economic theory could easily show that the investors’ decisions almost certainly won’t be optimal because these investors, compared to what would be socially optimal, would invest too little in firms that might bring about a utopian foom and too much in businesses that could unleash a dystopian foom.

Deliberately Inconsistent with a Pre-Existing Friendly AGI Framework

Let’s now postulate that after a firm has chosen its research and development path it has some power to alter the probability of a foom occurring. Such flexibility could hinder a firm’s ability to attract investors.

For example, let’s again assume that a firm must follow one of two research paths. As before, both paths have a 98 % chance of leading to unsuccessful. Two percent of the time, the firm will create an AGI, and we assume the firm will then have the ability to decide whether the AGI will undergo an intelligence explosion and achieve foom, or not undergo an intelligence explosion and achieve riches. Any foom that occurs through Path 3 will be utopian, whereas a foom that occurs through Path 4 will annihilate mankind. Recall that a small, self-interested investor will never invest in a firm that could achieve only outcomes unsuccessful or foom. To raise capital, the firm in this example would have to promise investors that it would never pick foom over riches. This would be a massively non-credible promise for a firm that followed Path 3, because everyone—including the firm’s investors—would prefer to live in a utopian foom than to have the firm achieve riches. In contrast, if the firm intended to follow Path 4, everyone would believe that the firm would prefer to achieve riches rather than experience a dystopian foom (Fig. 8.2).
Fig. 8.2 A non-credible promise

So now, let’s imagine that at the time the firm tries to raise capital, there exists a set of programming protocols that provides programmers with a framework for creating friendly AGI. This framework makes it extremely likely that if the AGI goes foom, it will be well disposed towards humanity and create a utopia.3

To raise funds, an AGI-seeking firm would need to choose a research and development path that makes it difficult for the firm to use the friendly AGI framework. Unfortunately, this means that any foom the firm brings about unintentionally would be less likely to be utopian than if the firm had used the friendly framework.

Further Benefits From a World-Destroying Foom

A bad foom is essentially a form of pollution, meaning it’s what economists call a “negative externality”. Absent government intervention, investors in a business have little incentive to take into account the pollution their business creates because a single investor’s decision has only a trivial effect on the total level of pollution. Economists generally assume that firms are indifferent to the harms caused by their pollution externalities, because firms are neither helped nor hurt by the externalities they create. An AGI-seeking firm, however, might actually benefit from a bad foom externality.

To understand this, imagine that an investor is choosing between investing in the firm and buying a government bond. If the firm achieves outcome riches, then the firm will end up giving the investor a higher payoff than the bond would have. If the firm achieves unsuccessful, then the bond will outperform the firm. But if the firm achieves foom, then both the firm and the bond offer the same return: zero. A foom destroys the value of all investments, not just those of the AGI-seeking firm. For an investor in an AGI-seeking firm, riches gives you a win, unsuccessful a loss, but foom a tie. Consequently, all else being equal, a firm would have greater success attracting small, self-interested investors when it increased the probability of achieving foom at the expense of decreasing the chance of achieving unsuccessful (while keeping the probability of riches constant) (Fig. 8.3).
Fig. 8.3 When unsuccessful deters investors more than a dystopian foom does
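
The win/loss/tie logic can be illustrated numerically. The probabilities and payoffs below are hypothetical; the only structural assumptions are that the bond loses to the stock in riches, beats it in unsuccessful, and ties it at zero in a foom.

```python
# Compare a $1,000 stake in the firm with a $1,000 bond paying 5%.
# Shifting probability mass from unsuccessful to foom, holding riches
# fixed, improves the stock's standing relative to the bond, because
# the bond is also worthless after a foom (a tie rather than a loss).
bond  = {"unsuccessful": 1_050.0, "riches": 1_050.0, "foom": 0.0}
stock = {"unsuccessful": 0.0, "riches": 10_000.0, "foom": 0.0}

def stock_minus_bond(probs):
    """Expected payoff advantage of the stock over the bond."""
    return sum(p * (stock[o] - bond[o]) for o, p in probs.items())

before = {"unsuccessful": 0.50, "riches": 0.10, "foom": 0.40}
after  = {"unsuccessful": 0.10, "riches": 0.10, "foom": 0.80}  # mass moved to foom

print(stock_minus_bond(before))  # 370.0
print(stock_minus_bond(after))   # 790.0 -- the firm now looks better to investors
```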

A Shotgun Strategy

Pharmaceutical companies often take a shotgun approach to drug development by testing a huge number of compounds, knowing that only a few will be medically useful. An AGI-seeking firm could take a shotgun approach by writing thousands of recursive self-improving programs, hoping that at least one brings about riches. You might think that this approach would have little appeal to an AGI-seeking firm, because one foom would cancel out any number of riches, so the probability of foom would be very high. But, as the following example shows, this isn’t necessarily the case:

Assume a business could follow one of two paths to produce an AGI:
  • Path 7:

  • Probability of Unsuccessful = 0.5

  • Probability of Riches = 0.5

  • Probability of Foom = 0

  • Path 8:

  • Probability of Unsuccessful = 0.01

  • Probability of Riches = 0.01

  • Probability of Foom = 0.98

Imagine an investor is deciding between buying stock in the firm and purchasing a government bond. The stock would provide a higher return in outcome riches, but a lower return in outcome unsuccessful. If the firm takes Path 7, then the stock will outperform the bond half of the time.

If the firm follows Path 8 then 98 % of the time a foom occurs, rendering irrelevant the investor’s decision. When deciding whether to buy the firm’s stock, consequently, the investor should ignore the 98 % possibility that a foom might occur and then realize that, as with Path 7, the stock will outperform the bond half of the time. If, given the same outcome, the firm would pay the same dividends in Path 7 and Path 8, then the investor should be equally willing to invest in the firm regardless of the path taken (Fig. 8.4).
Fig. 8.4 A shotgun approach
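
The equivalence of Path 7 and Path 8 to a small investor follows from conditioning on the only states in which the investment decision matters. A quick calculation (my own illustrative code) shows this:

```python
# Path 7 vs Path 8: since both the stock and the bond return zero in a
# foom, a small investor should condition on "no foom" when comparing them.
paths = {
    "Path 7": {"unsuccessful": 0.50, "riches": 0.50, "foom": 0.00},
    "Path 8": {"unsuccessful": 0.01, "riches": 0.01, "foom": 0.98},
}

for name, p in paths.items():
    p_no_foom = p["unsuccessful"] + p["riches"]
    p_riches_given_no_foom = p["riches"] / p_no_foom
    print(f"{name}: P(riches | no foom) = {p_riches_given_no_foom:.2f}")
# Both paths give 0.50: conditional on the decision mattering at all,
# the stock outperforms the bond exactly half the time under either path.
```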

The Effects of Competition among AGI-Seeking Firms

The existence of multiple AGI-seeking firms would likely cause them all to accept a greater probability of creating a foom, even if that foom would annihilate mankind. When considering some risky approach that could result in a dystopian foom, an AGI-seeking firm might rationally decide that if it doesn’t take the risk, another firm eventually will. Consequently, if that approach will result in the destruction of mankind, then our species is doomed anyway, so the firm might as well go for it. The situation becomes even more dire if firms can pick how powerful to make their AGIs. To give you an intuitive feel for this situation, consider the following story:

Pretend you find a magical book that gives its reader the power to conjure a genie. As the book explains, when conjuring, you must specify how strong the genie will be by picking its “power level”. Unfortunately, you don’t know how powerful a genie is supposed to be.

There is an ideal power level called the slave point. The closer your chosen power level is to the slave point, the more likely that the summoned genie will become your slave and bestow tons of gold on you. The book does not tell you what the slave point is, although its probability distribution is given in the Appendix.

According to the book, the lower your power level is, the more likely the summoned genie will be useless. But the further your power level is above the slave point, the more likely the genie will become too strong for you to control. An uncontrollable genie will destroy mankind.

Basically, if the slave point is high, then a genie is an inherently weak creature that needs to be instilled with much power to not be useless. In contrast, if the slave point is low, then genies are inherently very strong, and only by giving one a small amount of power can you hope to keep it under control.

You decide on a power level, and begin conjuring. But just before you are about to finish the summoning spell, the book speaks to you and says, “Know this, mortal. There are other books just like me, and they are all being used by men just like you. You are all fated to summon your genies at the exact same time. Each genie will have the same slave point. If any of you summons a world-destroying genie, then you shall all die. The forces of chance are such that even if multiple conjurers pick the same power level, it is possible that you shall summon different types of genie since, given the slave point, all the relevant probabilities are independent of each other”.

The book then asks, “Do you want to pick the same power level as you did before learning of the other conjurers?” No, you realize; you should pick a higher one. Here’s why:

Imagine that the slave point will be either high or low, and that if the slave point is low, another conjurer will almost certainly destroy the world. If it is high, then there is a chance that the world will survive. In this circumstance, you should pick the same power level you would if you knew the slave point would be high.

In general, the lower the slave point, the more likely it is that another conjurer will destroy the world, rendering your power-level decision irrelevant. This means that learning of the other books’ existence should cause you to give less weight to the possibility of the slave point taking on a low value; and the less weight you give to the slave point being low, the higher your optimal power level will be.

You should further reason that the other conjurers will reason just as you have, and will pick a higher power level than they would have chosen had they not known of one another’s existence. But the higher the power level the others pick, the more likely it is that, if the slave point is low, one of them will destroy the world. Consequently, realizing that the other conjurers will also learn of each other’s existence should cause you to raise your power level even further.
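
A stripped-down, two-state version of this reasoning can be put into numbers. Everything below is a hypothetical illustration of the update, not a model from the chapter: the prior, the survival probabilities, and the payoff table are arbitrary choices that merely respect the story's assumptions (a low slave point makes other conjurers more likely to destroy the world, and a high power level pays off when the slave point is high).

```python
# Stylized version of the conjurer's update after learning that other
# conjurers exist. All numbers are hypothetical.
prior = {"low": 0.5, "high": 0.5}                # prior over the slave point
others_spare_world = {"low": 0.1, "high": 0.8}   # P(no other conjurer destroys the world | slave point)

# Your choice matters only in states where the others spare the world, so
# the decision-relevant weight on each slave-point value is the prior times
# that survival probability, renormalized.
weights = {s: prior[s] * others_spare_world[s] for s in prior}
total = sum(weights.values())
weights = {s: w / total for s, w in weights.items()}   # ~0.11 low, ~0.89 high

# Hypothetical payoffs: a low power level does well if the slave point is
# low, a high power level does well if it is high.
payoff = {
    "low power":  {"low": 1.0, "high": 0.2},
    "high power": {"low": 0.0, "high": 0.9},
}
for label, dist in (("prior", prior), ("after learning of others", weights)):
    best = max(payoff, key=lambda lvl: sum(dist[s] * payoff[lvl][s] for s in dist))
    print(f"{label}: optimal choice = {best}")
# Under the prior the low power level wins; after reweighting toward a
# high slave point, the high power level wins -- the story's conclusion.
```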

Let’s now leave the genie story, and investigate how correlations among fooms influence research and development paths. If the probabilities of the AGI-seekers going foom are perfectly correlated—meaning that if one or more goes foom they all go foom, and if one or more doesn’t go foom then none go foom—and the wealth obtained by achieving riches is unaffected by the total number of firms that achieve riches, then the possibility of the other firms going foom would have no effect on any one firm’s chosen research path. This is because other firms’ possible fooms matter only to a business when the business itself doesn’t cause a foom.

If, however, the probability of the firms going foom is positively (but not perfectly) correlated then the possibility of another firm going foom will affect each firm’s optimal research path. To see this, we need a model of how fooms are correlated. We will do this by assuming that there is some inherent ease or difficulty in creating an AGI that is common to all AGI-seekers. Borrowing terminology from the conjurer story, let’s label this commonality a “slave point”. Firms, I assume, have imperfect information about the slave point.

If the slave point is low, then it’s relatively easy to create an AGI, but an AGI can easily get out of control and go foom. If the slave point is high, then creating an AGI is very difficult, and unless the AGI is made powerful enough the firm will achieve outcome unsuccessful.

We assume that each firm’s research and development path simply consists of its chosen power level, which measures how strong the firm makes its AGI. For a given slave point, the higher the power level, the more likely that a foom will happen, and the less likely that outcome unsuccessful will occur.
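
One way to make the assumed relationship concrete is a simple parametric sketch. The functional forms below are my own arbitrary choices; the only properties carried over from the text are the monotonic ones (for a fixed slave point, a higher power level means a higher chance of foom and a lower chance of unsuccessful, with riches most likely when the power level sits near the slave point).

```python
# Hypothetical mapping from (power level, slave point) to outcome probabilities.
def outcome_probabilities(power_level: float, slave_point: float) -> dict:
    p_foom = min(1.0, max(0.0, (power_level - slave_point) / 10.0))
    p_unsuccessful = min(1.0, max(0.0, (slave_point - power_level) / 10.0))
    p_riches = 1.0 - p_foom - p_unsuccessful
    return {"unsuccessful": p_unsuccessful, "riches": p_riches, "foom": p_foom}

print(outcome_probabilities(power_level=6.0, slave_point=3.0))  # low slave point: foom risk
print(outcome_probabilities(power_level=6.0, slave_point=9.0))  # high slave point: unsuccessful risk
```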

The slave point might be low because there is a huge class of relatively simple self-improving algorithms that could be used to create an AGI, and it is just through bad/good luck that researchers haven’t already found one.4 Or, perhaps the slave point is low because the “software” that runs our brains can be easily inferred from DNA analysis and brain scans, while our brain’s “hardware” is inferior in speed and memory to today’s desktop computers.

The slave point might be high because, for example, our brains use a type of quantum computing that is faster than any computer we have now, and any type of AGI would need quantum processors.5 Or perhaps it’s high because our brain’s source code arises in part through epigenetic changes occurring during the two years after conception, and any software that could produce a human-level or above intelligence would have to be far more complex than the most sophisticated programs in existence today.

Similarly to what happened in the conjuring story, knowledge of other AGI-seeking firms causes each firm to put less weight on the possibility of the slave point being low. This occurs because the lower the slave point the more likely it is that a firm will go foom. And if one firm goes foom, all the other firms’ research and development choices become irrelevant. Each firm, therefore, should give less importance to the possibility of the slave point being low than it would if it were the only AGI-seeker.

This section has so far assumed that without a foom, the other AGI-seekers have no influence over our firm’s payoff. But normally, a firm benefits more from an innovation if other firms haven’t come up with a similar innovation. So let’s now return to our conjuring story, to get some insight into what happens when the benefit each firm receives from outcome riches decreases with the number of other firms that achieve riches.

After taking into account the other conjurers’ existence, you pick a new, higher power level and start summoning the genie. But before you finish the spell, the book once again speaks to you, saying, “Know also this, mortal. Gold is valuable only for what it can buy, and the more gold that is created, the less valuable gold will be. Therefore, the benefit you would receive from successfully conjuring a genie goes down the more other controllable genies are summoned”. The book then says, “Do you wish to pick the same power level as you did before you learned of the economics of gold?” Probably not, you conclude. This new information should again cause you to give less weight to the possibility of the slave point being low, but it should also cause you to be less willing to risk destroying the world. Here’s why:

Conditional on the other conjurers not destroying the world, other conjurers are more likely to summon a controllable genie at a lower slave point. Conditional on the world not being destroyed, you get a greater benefit from summoning a useful genie when fewer useful genies have been summoned. Consequently, learning of the economics of gold should cause you to give greater importance to the possibility of the slave point being high, because it’s when the slave point is high that you have the best chance of being the only one, or at least one of the few, who summons a useful genie.

If the slave point is low, chances are that either someone else’s genie will destroy the world, or lots of useful genies will appear. Both possibilities reduce the value of getting a useful genie, and so you should give less weight to the possibility of the slave point being low. And, all else being equal, the less weight you give to the possibility of the slave point being low, the higher the power level you should pick.

But you also realize that the expected benefit of achieving riches is now smaller than it was before you learned the economics of gold. This factor, all else being equal, would cause you to be less willing to destroy the world. This second factor should cause you to pick a lower power level. So it’s ambiguous whether learning the economics of gold should cause you to raise or lower your power level.

If, however, you are in a military competition with the other conjurers (analogous to the United States and Chinese militaries both seeking to create an AGI) and would greatly suffer if they, but not you, summoned a controllable genie, then learning about the economics of gold would unambiguously cause you to raise your power level.

What is to be Done?

Markets excel at promoting innovation, but have difficulty managing negative externalities. This combination might plunge humanity into a bad foom. Governments sometimes increase social welfare by intervening in markets to reduce negative externalities. Unfortunately, this approach is unlikely to succeed with fooms.

National regulation of artificial intelligence research would impose such a huge burden on an economically and militarily advanced nation that most such nations would be wary of restricting it within their borders. But there exists no body able to impose its will on the entire world. And as the inability of national governments to come to a binding agreement on limiting global warming gases shows, it’s difficult for governments to cooperate to reduce global negative externalities. Furthermore, even if national governments wanted to regulate AGI research, they almost certainly couldn’t stop small groups of programmers from secretly working to create an AGI.

A charitable organization such as The Singularity Institute for Artificial Intelligence offers a potential path to reducing the probability of a bad foom. Ideally, such an organization would create a utopian foom. If this proves beyond their capacity, their work on friendly AGI could lower the cost of following research and development paths that have a high chance of avoiding a bad foom.

Footnotes

  1. The term foom comes from AGI theorist Eliezer Yudkowsky. See Yudkowsky (2011).

  2. The small investors who did seek to influence the probability of a utopian foom would essentially be accepting Pascal’s Wager.

  3. A friendly AGI framework might, however, be adopted by an AGI-seeking firm if the framework reduced the chance of unsuccessful and didn’t guarantee that any foom the firm might deliberately create would be utopian.

  4. The anthropic principle could explain how the slave point could be very low even though an AGI hasn’t yet been invented. Perhaps the “many worlds” interpretation of quantum physics is correct, and in 99.9 % of the Everett branches that split off from our world on January 1, 2000, someone created an AGI that quickly went foom and destroyed humanity. Given that we exist, however, we must be in the 0.1 % of Everett branches in which extreme luck saved us. It therefore might be misleading to draw inferences about the slave point from the fact that an AGI hasn’t yet been created. For a fuller discussion of this issue see Shulman (2011).

  5. See Penrose (1996).

References

  1. Penrose, R. (1996). Shadows of the mind: A search for the missing science of consciousness. New York: Oxford University Press.
  2. Shulman, C., & Bostrom, N. (2011). How hard is artificial intelligence? The evolutionary argument and observation selection effects. Journal of Consciousness Studies.
  3. Yudkowsky, E. (2011). Recursive self-improvement. Less Wrong, 6 Sept 2011. http://lesswrong.com/lw/we/recursive_selfimprovement/.

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. Economics, Smith College, Northampton, USA
