Often we are interested in systems that can reason about the results of their actions, and that act to maximize these results. Game theory allows us to predict how such systems will act. This chapter starts by introducing basic concepts from game theory, such as the notions of autonomous choice, utility, rationality, and intelligence. These concepts give us a framework for rigorously identifying conditions under which players have incentives to act in ways that can be seen as independent, coordinated, or competitive. Matched to these conditions, respectively, game theory provides us with analytical tools for predicting the behavior of players in these situations; namely, strictly dominant (or dominated) strategies, Nash Equilibrium, mixed strategies, and mixed strategy Nash Equilibrium.

1 The Role of Game Theory in CPS Design

With the increasing pace of CPS innovation, the number of interacting systems is also increasing. At least two factors have contributed to this trend. First, as devices and systems are endowed with more computational power, their functionality increases. More computation implies more decision-making. Unless we delegate more control to these devices—which means endowing them with more autonomy—we will simply be unable to process all the information needed to make such decisions. Second, because networking devices provides more opportunities for both controlling and optimizing their performance, networking is also increasing and, as a result, so is the interaction between systems.

Predicting the behavior of interacting autonomous systems can be challenging, and increasing the number of such systems only makes the situation more so. Game theory is a discipline that provides important analytical and computational methods that can help us in analyzing such systems and predicting their behavior.

2 Games, Players, Strategies, Utilities, and Independent Maximization

For the purposes of this chapter, a game consists of two players, both having a finite set of strategies (or plays) to choose from. It also consists of two utility functions (one for each player) that assign a utility to each outcome, where an outcome is determined by the combined choices made by the two players. We assume that the utilities possible for each player are ordered, that is, there is always a sense in which one utility is greater than, or greater than or equal to, another utility. We then further assume that both players are rational, in that each is trying to choose the play that maximizes the payoff determined by the utility of the combined play of the two players.

The challenge is that each player can choose their own play, but they have no control over the other player’s play. Thus we can see each player as trying to independently maximize his or her utilities, in a setting where the other player’s choice can influence the final utility. We further assume that the players are intelligent, in the sense that they are aware of the other player’s utilities, and are able to reason about their expected behavior, given the assumption of their rationality.

The general problem posed by a game is “what strategy will the players choose, given the definition of the game?”

3 Rationality, Independence, and Strictly Dominant (or Dominated) Strategies

Since utility functions capture the quality that the players are trying to maximize, they also induce a pattern of incentives for the players to act in a certain way. In fact, when players are working strictly to find the choices that maximize their utility, the utilities can be seen as forcing them to act in a certain way. This is not to say that all systems in the real world are working to maximize a specific utility. Rather, in a situation where we can justifiably identify a utility that players are working to maximize, the utility function can be productively used to analyze their choices in this manner.

In the rest of this chapter, we will consider three patterns of incentive induced by the utilities. For each of these patterns, we will also introduce a powerful analytical tool that can be used to predict the behavior of players in games that exhibit this pattern. We will start with the simplest and work our way to more challenging ones.

3.1 The Independence Pattern

The most elementary utility pattern is independence, which is the situation where each player’s utility for one strategy is superior to that for the others, independent of the other player’s choice. In more concrete terms, consider a two-player game where each player can choose between two strategies, A and B. Both players are hungry and can either (A) do nothing or (B) go to the store and buy lunch. If we assume that the actions of the two players are independent, then it is reasonable to believe that each one’s utility for (B) is higher than for (A) no matter what choice the other player makes. We will call this the “Independence Pattern.”

One of the most basic questions we can ask about games is “What would the players do if they were to act rationally?” Here we can define a rational player as one that chooses the strategy that maximizes his or her utility. In the case of lunch, the independence of the two players’ actions makes it easy to see that choice (B) is rational for both. But how can we answer this question for games where players are not independent? Those cases are, in fact, the primary focus of game theory.

To analyze such games, it is often convenient to represent combined utilities using a table. The rows will represent the first player’s choice, and the columns will represent the second player’s choice. In each cell, there will be two values, representing, respectively, the utilities of the first and second players. To represent utility patterns, we will indicate lower utilities with the minus sign (−) and higher utilities with the plus sign (+). The lunch example would be represented as follows:

The utility pair $(u_1, u_2)$, such as the + − in the cell at row (B) and column (A), represents, respectively, the positive utility for Player 1 and the negative utility for Player 2, when Player 1 chooses to get lunch (B) and Player 2 chooses to do nothing (A). Ignoring for a moment that we already know the answer to the question, let us consider how we can systematically use this table to determine the rational preference for each player. The way to do this is to analyze the table from the point of view of each of the two players, completely ignoring the utilities of the other player (but not their choices). We can visualize this analysis using two tables, one from the point of view of Player 1:

and one from the point of view of Player 2:

In both cases, we simply replaced the other player’s plus/minus signs with a star (*) to indicate that we are ignoring that player’s utility in each utility pair. Hiding the other player’s (irrelevant) utility makes it clear for each player that strategy (B) is always positive.

What this exercise illustrates is that we only need to know that, for all possible choices that the other player could make, one of our choices will always have greater utility than the other. Visually, this means that, for Player 1, the justification for (B) is based solely on comparisons between utilities in the same column. Going “down” in this table always leads to an improvement in utility, making (B) always a better option for Player 1. Similarly, for Player 2, the justification for (B) is based solely on comparisons between utilities in the same row. Going “right” in this table always leads to an improvement in utility, making (B) always a better option for Player 2.

When an option has this kind of relation to another option, as is the case with options (B) and (A) above, respectively, we say that the first strictly dominates the second. Strict dominance is an important concept in game theory, as it captures a pattern of reasoning that a rational system can correctly use to exclude certain choices in favor of others that will always be more effective at maximizing its utility.

Because the rational choice for both players is (B), the “solution” to this game is the play (BB), which represents the choice made by both players, respectively, when we assume that they are rational.
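As a computational aside (not part of the text’s tables), the following Python sketch automates the column/row comparisons just described; the dictionary encoding and the 0/1 stand-ins for − and + are assumptions made for illustration.

```python
# A minimal sketch (the 0/1 values standing in for - and + are illustrative
# assumptions) of checking strict dominance in a two-player game. u1 and u2 map a
# (row, column) pair of choices to Player 1's and Player 2's utilities, respectively.
u1 = {("A", "A"): 0, ("A", "B"): 0, ("B", "A"): 1, ("B", "B"): 1}
u2 = {("A", "A"): 0, ("A", "B"): 1, ("B", "A"): 0, ("B", "B"): 1}
strategies = ("A", "B")

def strictly_dominates(u, player, s, t):
    """True if strategy s strictly dominates strategy t for the given player
    (Player 1 chooses the row, Player 2 chooses the column)."""
    for other in strategies:
        own = (s, other) if player == 1 else (other, s)
        alt = (t, other) if player == 1 else (other, t)
        if u[own] <= u[alt]:
            return False
    return True

print(strictly_dominates(u1, 1, "B", "A"))  # True: (B) dominates (A) for Player 1
print(strictly_dominates(u2, 2, "B", "A"))  # True: (B) dominates (A) for Player 2
```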

When we get to more complex examples of the Independence Pattern, the reader will find it helpful to remember two key observations. The first is represented by the last two tables above: the right way to read a utility table from the point of view of one player is to ignore the other player’s utilities (but not their choices).

The second observation is more subtle, and requires discerning from the analysis above a pattern that is deeper than the Independence Pattern. In particular, in deciding that (B) is better than (A) from both players’ point of view, we do not really need to know that all the lower utilities are equally low, or that all the higher utilities are equally high. The following series of examples will help us understand the significance of this observation.

Example 7.1: A Basic Lunch Returning to our lunch example, we can imagine that the players’ utility represents their need to get a meal, and that we will represent one meal’s worth by the number 1. In this case, the Independence Pattern can be instantiated to the following concrete game:

Here the solution is still (BB). Note that each + or − in the original pattern table stands for the respective player’s utility for that choice, holding the other player’s choice fixed.

Example 7.2: An Asymmetric Lunch The way we determined that (BB) is the solution in the case of the Independence Pattern also applies to other—possibly less obvious—situations. For example, it applies in exactly the same way when one player’s utility for his or her lunch is greater than the other player’s. The following table represents such a situation:

The utilities of Player 1 still always increase when we go down, and those of Player 2 always increase when we go right. Thus, (BB) is still the solution to this game.

Example 7.3: A Split Lunch The previous example may appear to suggest that the Independence Pattern only applies in cases when the two players’ utilities are independent. This would mean that it only applies when there is little or no “real” interaction between the two players. This is not the case: this rather simple analysis can also be useful when there is a substantive interaction between the players. As a first example of such a situation, consider the case when the two players go to the same supermarket, to buy the same type of lunch, and there is only enough for one person. To avoid the use of fractions in utilities, we will now count getting one lunch as a utility of 2, and half a lunch as a utility of 1. With this convention, we can represent this situation using the following table:

Does the above analysis still apply to this case? It may be a bit surprising to find out that the answer is yes. One way to see why the analysis still applies is that the first player’s utilities still improve when we go down, and the second player’s utilities improve when we go right. To make it easier to see that this is the case, we will break up the table into two, each one representing a player’s view. Player 1 sees:

and Player 2 sees:

So, the solution is again (BB). The outcome of (BB) does not change if there is “waste” in sharing the lunch, which would be represented by a higher value in place of the “2” in these tables.

Example 7.4: A Small-Auction Lunch Imagine a situation where both players have only a penny, and when only one goes to buy lunch, the auction views this as low demand and makes available, for a penny, a great meal with a utility of 5. But when both go at the same time, the auction sees this as high demand, and offers only a modest meal with a utility of 1. In this case, our table looks like this:

Seeing that (BB) is the solution to this game even when the “lunch alone” option has significantly higher utility may lead us to be a bit suspicious of dominant strategies; and it may even lead us to question the way in which we have been analyzing games to determine how their constraints play out in terms of player choices. This is healthy skepticism, because it prepares us for the next example, which pushes the Independence Pattern to the limit.

Example 7.5: A Small-Auction vs. Fridge Lunch Imagine a slightly different situation where both players actually have a readily available lunch in their fridge that they could prepare only if they both chose to stay at home. Imagine further that this lunch was so good that they would both give it a utility of 4. But the auction lunch, which would have a utility of 5 if one of them goes alone, is still slightly better. The following utility table illustrates this situation:

This situation still fits the Independence Pattern, and the choice of (B) still dominates that of (A) for each player, independent of what the other player chooses to do. Thus, the rational solution to this game is the choice (BB). This is a peculiar outcome, because the utility of (AA) is higher for both players than that of (BB). So, how can the rational choice for both lead them to (BB)?

We can confirm the pattern’s logic by checking that the first player’s utilities always increase when we go down, and the second player’s utilities always increase when we go right. To see why this down/right pattern really does force any two rational players to choose (B), it helps to consider what happens if they make any other choice. Making his or her choice independently, Player 1 can only pick one of the two options. From the point of view of Player 1, picking (A) means he or she could end up with a nice lunch if the other player stays in, but could end up with no lunch if the other player goes out. In contrast, picking (B) means he or she would get the best lunch if the other stays in, and a passable lunch if the other goes out. So, whichever choice the other player makes, choosing (B) improves Player 1’s lunch.

This example has the same features as the Prisoner’s Dilemma, a classic example in game theory. More background about this game can be found in the article Prisoner’s Dilemma.
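For readers who prefer to verify the reasoning mechanically, here is a hedged Python sketch of the Fridge-Lunch game; the value 0 for staying home alone with no lunch is an assumption, while 4, 5, and 1 come from the example. It confirms that (B) strictly dominates (A) for each player even though (AA) would be better for both.

```python
# A sketch of the Fridge-Lunch game: 4, 5, and 1 come from Example 7.5, while the 0
# for "stays home alone and gets no lunch" is an assumption. Each cell holds
# (Player 1 utility, Player 2 utility); the row is Player 1's choice.
U = {
    ("A", "A"): (4, 4),  # both stay home and enjoy the fridge lunch
    ("A", "B"): (0, 5),  # Player 1 stays (no lunch); Player 2 gets the great auction meal
    ("B", "A"): (5, 0),
    ("B", "B"): (1, 1),  # both go out; the auction offers only the modest meal
}

# (B) strictly dominates (A) for Player 1: better in every column.
p1_dominates = all(U[("B", c)][0] > U[("A", c)][0] for c in ("A", "B"))
# (B) strictly dominates (A) for Player 2: better in every row.
p2_dominates = all(U[(r, "B")][1] > U[(r, "A")][1] for r in ("A", "B"))
# Yet (AA) gives both players more than the dominant-strategy outcome (BB).
better_for_both = all(U[("A", "A")][i] > U[("B", "B")][i] for i in (0, 1))

print(p1_dominates, p2_dominates, better_for_both)  # True True True
```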

3.2 The Cost of Lacking Communication and Trust Can Be Unbounded

To convince ourselves of the soundness of the above analysis, it is important to realize that each player must make his or her decision independently. This does not mean that this is what any two people in this situation should do; rather, it clarifies how the formal notion of games that we are studying works. We said that we are studying games where each player is trying to maximize utility, and that, for this particular game, the utilities are as shown in the table. We did not say anything about the players’ ability to communicate or their ability to trust each other; as a result, we have to exclude the possibility of the players coordinating, because the ability to communicate and to trust are strong assumptions that we cannot make without changing our original problem statement. In fact, a profound lesson that can be drawn from this example is that the cost of lacking communication and/or trust can be unbounded: we can replace 4 and 5 in this example by any pair of arbitrarily large values, and as long as the first is less than the second, rationality and self-interest force both players to choose (B). Lacking communication and trust can be arbitrarily costly for everyone involved.
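This observation can be checked parametrically. In the sketch below, the fridge value and the lone-auction value are arbitrary numbers with the first less than the second (an illustrative parameterization, not a table from the text), and (B) remains strictly dominant for both players no matter how large those values grow.

```python
# A hedged sketch of the unbounded-cost observation: replace the text's 4 and 5 by
# any fridge value x and lone-auction value y with x < y (keeping the modest meal
# at 1 and "no lunch" at 0, both assumptions), and (B) still strictly dominates (A)
# for both players, so rational independent play still lands on (BB).
def b_dominates_for_both(x, y, modest=1, nothing=0):
    U = {
        ("A", "A"): (x, x), ("A", "B"): (nothing, y),
        ("B", "A"): (y, nothing), ("B", "B"): (modest, modest),
    }
    p1 = all(U[("B", c)][0] > U[("A", c)][0] for c in ("A", "B"))
    p2 = all(U[(r, "B")][1] > U[(r, "A")][1] for r in ("A", "B"))
    return p1 and p2

print(all(b_dominates_for_both(x, x + 1) for x in (4, 100, 10**6)))  # True
```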

4 Coordination, Intelligence, and Nash Equilibrium

In the last section we saw the Independence Pattern, where utilities had this form:

We also saw how strict dominance can be used to determine that the rational behavior of two players in such a game has to be (BB). At the same time, we also saw that (BB) may not be the highest possible payoff for both players, but it is the highest payoff that they can guarantee independently.

The power of strict dominance lies in its usefulness in narrowing down the set of possible rational strategy pairs to a smaller set. However, it will not always be possible to find strictly dominant strategies (or more specifically, strictly dominated strategies to exclude). It is therefore useful to consider how to interpret games where there are no strictly dominated strategies, and where we have more than one possible rational outcome.

4.1 The Coordination Pattern

Consider a two-player game where each player can choose between two strategies: going to a movie (A) or going to a play (B). Both players only care about being together. Let us call this the Coordination Pattern. The following table represents this pattern:

It is clear that in this case there is no strictly dominant strategy: For each player, (A) is better if, and only if, the other player chooses (A), and the same holds for (B). We have two cases where there is a win-win choice, (AA) and (BB), but achieving either depends critically on coordination.

When we considered our Small-Auction vs. Fridge Lunch example, we noted that communication and mutual trust would have been needed to arrive at a better outcome than that provided by the dominant strategy. Here, there is no dominant strategy at all. The absence of a dominant strategy can be viewed as the absence of a reward for always unilaterally selecting one strategy over the other. In such cases, communication is key. However, trust is no longer necessary: the utilities put both players in a situation where (a) it is in their interest to communicate their intent truthfully and (b) once they have shared their intent, each player is only motivated to act in a manner that is optimal for both of them.
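To make point (b) concrete, here is a minimal sketch that encodes the Coordination Pattern ordinally (+ as 1, − as 0, an assumption for illustration): once one player truthfully announces a choice, the other player’s best response is simply to match it.

```python
# A minimal sketch (an illustrative ordinal encoding: + as 1, - as 0) of point (b):
# once one player truthfully announces a choice, the other player's best response
# is to make the matching choice.
U2 = {("A", "A"): 1, ("A", "B"): 0, ("B", "A"): 0, ("B", "B"): 1}  # Player 2's utilities

def best_response_p2(announced_p1_choice):
    """Player 2's utility-maximizing reply to Player 1's announced choice."""
    return max(("A", "B"), key=lambda s: U2[(announced_p1_choice, s)])

print(best_response_p2("A"), best_response_p2("B"))  # A B
```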

4.2 Nash Equilibrium

Note that this type of reasoning reflects intelligence on the part of the players, in the sense that it takes into account that they are aware of the other player’s utilities and decision-making process. The observation that we can predict the outcomes of games more precisely when we take into account not only each player’s rationality but also their ability to reason about the other player’s decision-making process is attributed to John Nash. It is his name that is acknowledged in the term “Nash Equilibrium,” which refers to the set of plays (strategy combinations) out of which no player has an incentive to depart unilaterally. In the example above, the set { (AA), (BB) } is the Nash Equilibrium for this game. The game motivates both players to be in one of these plays, and once they are in one of them, they would only be motivated to move to another one in coordination with the other player.
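The definition translates directly into a check. The sketch below (again encoding + as 1 and − as 0, an illustrative assumption) tests every play of the Coordination Pattern for profitable unilateral deviations and recovers the set { (AA), (BB) }.

```python
# A sketch of the definition above: a play is in the Nash Equilibrium set if neither
# player can improve their own utility by departing from it unilaterally. The
# Coordination Pattern is encoded ordinally (+ as 1, - as 0), an illustrative assumption.
U = {
    ("A", "A"): (1, 1), ("A", "B"): (0, 0),
    ("B", "A"): (0, 0), ("B", "B"): (1, 1),
}
strategies = ("A", "B")

def is_nash(r, c):
    """True if no unilateral deviation improves the deviating player's utility."""
    if any(U[(alt, c)][0] > U[(r, c)][0] for alt in strategies):  # Player 1 deviates
        return False
    if any(U[(r, alt)][1] > U[(r, c)][1] for alt in strategies):  # Player 2 deviates
        return False
    return True

print([play for play in U if is_nash(*play)])  # [('A', 'A'), ('B', 'B')]
```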

4.3 Determining the Nash Equilibrium

With one additional condition, the Nash Equilibrium for a game pattern is simply the set of all strategy combinations with (++) utilities. The extra condition is that, for each player, a plus (+) must mark the maximum utility that player can obtain against each fixed choice by the other player. This is the case for both the Independence and Coordination Patterns. Thus, in the Independence Pattern, the Nash Equilibrium is the set { (BB) }.

Example 7.6: An Asymmetric Four-Strategy Game To check our understanding of this method of computing the Nash Equilibrium set, we will consider a game with four strategies and asymmetric utilities:

When we mark the first player’s highest utility for each choice by the second player (that is, in each column), we get the following table:

When we mark the second player’s highest utility for each choice by the first player (that is, in each row), we get the following table:

Combining all the marks into one table we get the following, where the cells that have utility now marked (++) form the Nash Equilibrium set:

As this table shows, the Nash Equilibrium set is { (CC) }. Thus, if both players reason rationally, taking into account the other player’s utilities and options, the first player would choose (C) and the second would also choose (C). These are the choices that each player can make independently to secure the best payoff available, given the other player’s choice of strategy.
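The marking method also translates directly into code. The sketch below applies it to a hypothetical three-strategy game; the utility values are illustrative assumptions rather than the table of Example 7.6, and the intersection of the two players’ marks is the pure strategy Nash Equilibrium set.

```python
# A sketch of the marking method of this section, applied to a hypothetical
# three-strategy game (the values are illustrative assumptions, not the table of
# Example 7.6). Player 1's marks are the best replies in each column, Player 2's
# are the best replies in each row; cells carrying both marks are the (++) cells.
strategies = ("A", "B", "C")
u1 = {("A", "A"): 3, ("A", "B"): 0, ("A", "C"): 1,
      ("B", "A"): 2, ("B", "B"): 2, ("B", "C"): 0,
      ("C", "A"): 1, ("C", "B"): 1, ("C", "C"): 2}
u2 = {("A", "A"): 1, ("A", "B"): 2, ("A", "C"): 0,
      ("B", "A"): 0, ("B", "B"): 3, ("B", "C"): 1,
      ("C", "A"): 1, ("C", "B"): 0, ("C", "C"): 3}

marks1 = {(r, c) for c in strategies for r in strategies
          if u1[(r, c)] == max(u1[(rr, c)] for rr in strategies)}
marks2 = {(r, c) for r in strategies for c in strategies
          if u2[(r, c)] == max(u2[(r, cc)] for cc in strategies)}

# The pure strategy Nash Equilibrium set for these illustrative values:
print(sorted(marks1 & marks2))  # [('B', 'B'), ('C', 'C')]
```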

4.4 Eliminating Strictly Dominated Strategies Preserves Nash Equilibria

In games where there are a large number of possible strategies, it is useful to remove from consideration (or eliminate) strategies that a rational player would never choose. Strict dominance gives us just the right tool for doing so, as any strictly dominated strategy can be safely eliminated in this manner. What is more, eliminating one choice for one player can reveal other dominated strategies for the other player (since they are also intelligent, and can determine for themselves that the first player would never play that strategy).

This technique is synergistic with the notion of Nash Equilibria: eliminating strictly dominated choices does not remove any elements of the Nash Equilibrium of a game.
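A sketch of this elimination procedure is given below. The 3 × 3 utilities are illustrative assumptions (not the table from the last example), chosen so that removing one dominated strategy reveals further dominated strategies; the reduced game still contains the original game’s only pure Nash Equilibrium.

```python
# A hedged sketch of iterated elimination of strictly dominated strategies; the
# 3 x 3 utility values are illustrative assumptions (not Example 7.6's table),
# chosen so that one removal reveals further dominated strategies.
def eliminate(u1, u2, rows, cols):
    """Repeatedly remove any strictly dominated row (Player 1) or column (Player 2)."""
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for r in list(rows):
            if any(all(u1[(s, c)] > u1[(r, c)] for c in cols) for s in rows if s != r):
                rows.remove(r)
                changed = True
        for c in list(cols):
            if any(all(u2[(r, s)] > u2[(r, c)] for r in rows) for s in cols if s != c):
                cols.remove(c)
                changed = True
    return rows, cols

R = ("A", "B", "C")
u1 = {("A", "A"): 3, ("A", "B"): 1, ("A", "C"): 4,
      ("B", "A"): 2, ("B", "B"): 3, ("B", "C"): 3,
      ("C", "A"): 1, ("C", "B"): 0, ("C", "C"): 2}
u2 = {("A", "A"): 2, ("A", "B"): 3, ("A", "C"): 1,
      ("B", "A"): 1, ("B", "B"): 2, ("B", "C"): 0,
      ("C", "A"): 3, ("C", "B"): 2, ("C", "C"): 1}

# The reduced game keeps the original game's only pure Nash Equilibrium, (B, B).
print(eliminate(u1, u2, R, R))  # (['B'], ['B'])
```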

Exercise 7.1 Remove strictly dominated strategies from the game presented in this last example. Repeat this process until there are no more strictly dominated strategies. Draw the table for the reduced game. Once you have done so, determine the Nash Equilibrium for the reduced game.

5 Competitiveness, Privacy, and Mixed Strategies

So far, we have seen an example where strict dominance alone can be used to determine how two rational players will behave (the Independence Pattern), and one where it cannot be applied (the Coordination Pattern). In the latter case, we were able to use the idea of the Nash Equilibrium to determine the set of plays (strategy pairs) that the two players would be simultaneously motivated to choose. There are, however, games where the Nash Equilibrium would have no elements. The following table represents an example game pattern:

We will call this the Competition Pattern. In this pattern, there are no win-win plays. In fact, every play is win-lose. In concrete instances of this pattern, if the values of minus and plus in each cell are consistently equal in magnitude but opposite in sign, this is what would be called a zero-sum game.

5.1 Mixed Strategy Games

Whereas the Coordination Pattern incentivized both players to communicate truthfully, the Competition Pattern incentivizes them to keep their decisions as private as possible. In fact, if anything, this utility pattern could give each player an incentive to mislead the other player.

How would rational players act in such situations?

If we are looking at just a single round of the game, what we can say in this situation is very limited. In fact, all we can do is advise both sides to work hard on keeping their planned strategy secret. But in addition to the usual difficulties in keeping secrets, this situation becomes harder if the game is played multiple times. Then players can simply observe each other and infer the decision-making process of the other side. If one side succeeds in doing this, it can secure only the desirable utilities, leaving the other side with only the undesirable ones.
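The following hedged simulation illustrates the danger of predictability under repeated play. The matching-pennies-style utilities are illustrative assumptions, not a game from the text; the point is only that a player who always makes the same choice is exploited in essentially every round by an opponent who observes past play.

```python
# A hedged simulation (illustrative matching-pennies-style utilities, not a game
# from the text): under repeated play, a predictable Player 1 who always picks "A"
# is exploited in essentially every round by a Player 2 who observes past play.
import random
from collections import Counter

def u1(a, b):
    """Player 1's payoff: +1 when the choices match, -1 when they differ."""
    return 1 if a == b else -1  # win-lose in every cell (zero-sum)

history = []
p2_wins = 0
for _ in range(1000):
    p1 = "A"  # a predictable pure strategy
    # Player 2 predicts Player 1's most frequent past choice and mismatches it.
    if history:
        guess = Counter(history).most_common(1)[0][0]
    else:
        guess = random.choice(["A", "B"])
    p2 = "B" if guess == "A" else "A"
    history.append(p1)
    if u1(p1, p2) < 0:  # Player 2 gets the "+" outcome
        p2_wins += 1
print(p2_wins)  # at least 999 of the 1000 rounds
```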

This situation gives rise to the idea of a mixed strategy, in contrast to what we have discussed so far, which was a pure strategy. To play with a pure strategy is simply to select one of the possible strategies. To play with a mixed strategy is to select a probability distribution and use it to select from among all the available strategies. As long as we are able to make random choices, this can be an effective way to mitigate the risk of being exploited by the other player. The key to doing so effectively becomes the selection of the right distribution.
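Operationally, playing a mixed strategy amounts to sampling from a fixed distribution on every round, as in the sketch below; the even 0.5/0.5 split is just a placeholder, and Sect. 5.2 shows how the distribution should actually be chosen.

```python
# A minimal sketch of playing a mixed strategy: fix a probability distribution over
# the available strategies and sample from it on every round. The even 0.5/0.5 split
# is only a placeholder; Sect. 5.2 shows how the distribution should be chosen.
import random
from collections import Counter

strategies = ("A", "B")
distribution = (0.5, 0.5)  # (p_A, p_B): each in [0, 1], summing to 1

plays = [random.choices(strategies, weights=distribution, k=1)[0] for _ in range(10000)]
print(Counter(plays))  # long-run frequencies approach the chosen distribution
```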

5.2 Selecting a Mixed Strategy (or, Mixed Strategy Nash Equilibria)

In selecting the random distribution, each player’s goal will, in fact, have to be to reduce the other player’s incentive to pick a particular strategy. To do this, we (and each player) will have to analyze the other player’s expected payoff. We will illustrate this concept in more detail when we consider a concrete example.

It is important to note that we cannot select a mixed strategy at the level of game patterns, but rather, must do so within the concrete games. This is different from strict dominance and pure strategy Nash Equilibria, which could be determined at pattern level. The reason for this is that the expected payoff is sensitive to the concrete value of the utility in each situation.

Example 7.7: Feud Consider the following concrete instance of the Competition Pattern:

If Player 1 is deciding on a mixed strategy, he or she must select a distribution for choosing between (A) and (B). The distribution consists of two probabilities, $p_{\text{1A}}$ and $p_{\text{1B}}$, both of which must be values between 0 and 1, and which together must add up to 1. The two probabilities represent the relative frequency with which, in the long term, Player 1 will choose A and B, respectively.

Now we need to focus on the payoff of the second player for each play by Player 1. If Player 1 plays (A), then Player 2 will choose (A) to maximize their outcome (which will have value 6). If Player 1 plays (B), then Player 2 will choose (B) (which will have value 8). What Player 1 can do through the choice of distribution is to make the expected payoff for the second player equal in the cases of Player 2 playing (A) or (B). The expected value for each of Player 2’s options is determined by summing the products of utilities and probabilities in each case. For Player 2’s option (A), that would be

$$\displaystyle \begin{aligned}E\text{(2A)}\ = \ 6p_{\text{1A}}\ + 7p_{\text{1B}},\end{aligned}$$

and for Player 2’s option (B), that would be

$$\displaystyle \begin{aligned}E\text{(2B)}\ = \ 5p_{\text{1A}}\ + \ 8p_{\text{1B}}.\end{aligned}$$

If we want these two expected values to be equal, then we want to solve E(2A) = E(2B); substituting the right-hand sides from the two equations above, we get

$$\displaystyle \begin{aligned}6p_{\text{1A}}\ + 7p_{\text{1B}}\ = 5p_{\text{1A}}\ + \ 8p_{\text{1B}}.\end{aligned}$$

This is one equation with two unknowns, which means we need another equation. The other equation we have is $p_{\text{1A}} + p_{\text{1B}} = 1$, from which we can determine that $p_{\text{1B}} = 1 - p_{\text{1A}}$. We can use that to replace all occurrences of $p_{\text{1B}}$ in the above equation by a term that only involves $p_{\text{1A}}$. This yields the following equation:

$$\displaystyle \begin{aligned}6p_{\text{1A}}\ + 7(1 - p_{\text{1A}})\ = \ 5p_{\text{1A}}\ + \ 8(1 - p_{\text{1A}}).\end{aligned}$$

By simplifying, we get

$$\displaystyle \begin{aligned}7 - p_{\text{1A}} = 8 - 3p_{\text{1A}}.\end{aligned}$$

From this we can determine that $2p_{\text{1A}} = 1$, or $p_{\text{1A}} = 0.5$, and so $p_{\text{1B}} = 0.5$ as well. Here we got an even split between the two choices, but that is not always the case. In fact, even in this game, Player 2’s optimal strategy will not be an even split. But first, to check our answer, let us make sure that these probabilities do ensure that the expected payoff for Player 2 is the same for the two choices. We do that simply by substituting these values into the equations we wrote above

$$\displaystyle \begin{aligned} E\text{(2A)} & = 6p_{\text{1A}} + 7p_{\text{1B}} = 6 \cdot 0.5 + 7 \cdot 0.5 = 3 + 3.5 = 6.5, \\ E\text{(2B)} & = 5p_{\text{1A}} + 8p_{\text{1B}} = 5 \cdot 0.5 + 8 \cdot 0.5 = 2.5 + 4 = 6.5. \end{aligned} $$

So, indeed, our calculations were correct. And if Player 2 knows (or sees) that Player 1 will be making choices according to this distribution, Player 2 will have no incentive to prefer one of the two strategies over the other.

Player 2 still has an incentive to make sure that Player 1 cannot benefit by choosing one strategy over the other. To do so, he or she will make choices based on a random distribution, and will determine two probabilities $p_{\text{2A}}$ and $p_{\text{2B}}$ analogous to the ones Player 1 used. To make this determination, Player 2 analyzes Player 1’s expected payoffs as follows:

When Player 1 chooses (A), the expected payoff is

$$\displaystyle \begin{aligned}E\text{(1A)} = p_{\text{2A}} + 5p_{\text{2B}}.\end{aligned}$$

For Player 1’s option (B), that would be

$$\displaystyle \begin{aligned}E\text{(1B)} = 2p_{\text{2A}} + 3p_{\text{2B}}.\end{aligned}$$

Equating both expectations we get

$$\displaystyle \begin{aligned}p_{\text{2A}} + 5p_{\text{2B}} = 2p_{\text{2A}} + 3p_{\text{2B}}.\end{aligned}$$

Using the substitution $p_{\text{2B}} = 1 - p_{\text{2A}}$ we get

$$\displaystyle \begin{aligned}p_{\text{2A}} + 5(1 - p_{\text{2A}})\ = 2p_{\text{2A}} + 3(1 - p_{\text{2A}}),\end{aligned}$$

which simplifies to $5 - 4p_{\text{2A}} = 3 - p_{\text{2A}}$, and then to $2 = 3p_{\text{2A}}$, which means that $p_{\text{2A}} = 2/3$ and $p_{\text{2B}} = 1/3$.

The uneven split in probabilities in this case can be explained as follows: if Player 2 played an even split between (A) and (B), Player 1 would eventually notice, and start playing (A) more often, because, on average, it gives a higher payoff than playing (B). With Player 2 playing an even split, Player 1’s expected payoff for (A) is $1 \cdot 0.5 + 5 \cdot 0.5 = 3$, which is higher than that for (B), which is $2 \cdot 0.5 + 3 \cdot 0.5 = 2.5$. Even though this might seem like a small difference, if Player 1 notices it, he or she will start playing (A) consistently to maximize the expected payoff. In contrast, the probabilities we calculated for Player 2 make Player 1’s expected payoffs $1 \cdot 2/3 + 5 \cdot 1/3 = 7/3$ and $2 \cdot 2/3 + 3 \cdot 1/3 = 7/3$, leaving Player 1 free to mix as well, maximizing its expected payoff while minimizing the chance that the other player can predict the next play.

The distributions $(p_{\text{1A}}, p_{\text{1B}})$ and $(p_{\text{2A}}, p_{\text{2B}})$, together, constitute the mixed strategy Nash Equilibrium for this game.
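The whole calculation can be reproduced mechanically. The sketch below uses the utilities implied by the expected-payoff equations of this example (the concrete table itself is not reproduced here, so treat the numbers as read off those equations) and recovers the equalizing probabilities for both players.

```python
# A sketch that recomputes the mixed strategy Nash Equilibrium of Example 7.7,
# using the utilities implied by the expected-payoff equations above
# (u1 is Player 1's payoff table, u2 Player 2's; the row is Player 1's choice).
from fractions import Fraction as F

u1 = {("A", "A"): F(1), ("A", "B"): F(5), ("B", "A"): F(2), ("B", "B"): F(3)}
u2 = {("A", "A"): F(6), ("A", "B"): F(5), ("B", "A"): F(7), ("B", "B"): F(8)}

# Player 1 picks p1A so that Player 2 is indifferent between columns (A) and (B):
#   u2(AA) p1A + u2(BA) p1B = u2(AB) p1A + u2(BB) p1B,  with p1B = 1 - p1A.
p1A = (u2[("B", "B")] - u2[("B", "A")]) / (
    u2[("A", "A")] - u2[("A", "B")] - u2[("B", "A")] + u2[("B", "B")])
# Player 2 picks p2A so that Player 1 is indifferent between rows (A) and (B).
p2A = (u1[("B", "B")] - u1[("A", "B")]) / (
    u1[("A", "A")] - u1[("A", "B")] - u1[("B", "A")] + u1[("B", "B")])

print(p1A, 1 - p1A)  # 1/2 1/2
print(p2A, 1 - p2A)  # 2/3 1/3

# Check: under these mixtures each player's two options have equal expected payoff.
E2A = u2[("A", "A")] * p1A + u2[("B", "A")] * (1 - p1A)
E2B = u2[("A", "B")] * p1A + u2[("B", "B")] * (1 - p1A)
E1A = u1[("A", "A")] * p2A + u1[("A", "B")] * (1 - p2A)
E1B = u1[("B", "A")] * p2A + u1[("B", "B")] * (1 - p2A)
print(E2A == E2B, E1A == E1B)  # True True
```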

6 Chapter Highlights

  1. Game Theory and CPS
     (a) Trends
         • Networking is going up
         • Automation is going up
         • Result: more interaction
     (b) Game theory gives us tools to
         • model self-interest and awareness of others
         • predict and encourage certain outcomes
     (c) A game consists of
         • Players (2)
         • Choices/strategies (2)
         • Utilities (at least ordered)
         • Choices made to maximize each player’s OWN utility
     (d) Patterns of utilities direct us to different analysis tools
  2. Independence Pattern and Dominance
     (a) Basic pattern table (basic example: going out to lunch)
     (b) How to read the table from each player’s point of view
     (c) How to determine the best decision
         • The concept of a strictly dominant strategy
         • Each player can make that decision independently
     (d) Warning: the pattern is not as simple as it may seem!
  3. Coordination Pattern and Equilibria
     (a) Basic pattern (example: going out)
         • No dominance
         • Coordination needed
         • Communication valuable
         • Note that these are often sub-patterns
  4. Competition Pattern and Mixed Strategies
     (a) Basic pattern (example: two shops that sell lunch)
     (b) No dominance
     (c) No simple equilibrium
     (d) Must not share information!
     (e) Must mix
     (f) Concrete values in utilities become critical!

7 Study Problems

  1. Calculate the mixed strategy Nash Equilibrium for this concrete game:

     Be sure to include the four probabilities, and to check that your answer does equalize the expected payoff for the other player.

8 To Probe Further