Theory and Decision

Volume 80, Issue 3, pp. 451–462

Optimal stealing time


Abstract

We study a dynamic game in which players can steal parts of a homogeneous and perfectly divisible pie from each other. The effectiveness of a player’s theft is a random function which is stochastically increasing in the share of the pie the agent currently owns. We show how the incentives to preempt or to follow the rivals change with the number of players involved in the game and investigate the conditions that lead to the occurrence of symmetric or asymmetric equilibria.

Keywords

Stealing · Stochastic games · Optimal timing · Pie allocation

1 Introduction

In this paper, we introduce and study what we call the stealing game. A stealing game is a dynamic game in which a number of agents steal portions of a homogeneous and perfectly divisible pie from each other. The portion of the pie that a player can steal is stochastic. However, the expected value of this random variable is increasing in the agent’s current holdings such that larger players are able on average to steal larger portions. Within such a framework, agents must decide when and whom to rob, with the goal of finishing the game as the leader, i.e., the player who holds the largest share of the pie.1

Our primary goal is to solve for the optimal timing strategies of the agents. We want to find the best moment for a player to behave aggressively and steal part of the pie owned by his rivals. Such a decision is affected by an intuitive trade-off between preempting or postponing one’s move. A player who moves as soon as possible eliminates the possibility of being preempted, but he is then forced to passively suffer the potential retaliation of those who waited. On the other hand, a player who postpones his move can observe the new state of the world and react optimally. However, the agent faces the risk of being preempted and robbed by a rival, in which case his market share goes down as does the expected effectiveness of his stealing attempt.

We characterize the pure strategy equilibria of the stealing game under different specifications for the number of players, the duration of the game, and the number of stealing possibilities players are endowed with. We start by explicitly solving a two-period stealing game in which players have a single stealing opportunity. Despite its simplicity, this setting highlights the strategic peculiarities of the game and shows how the above-mentioned trade-off has different solutions depending on the number of participants. No player postpones his move in a two-player game. A three-player game displays multiple equilibria and, in some of them, all agents postpone their moves. Finally, when the number of players is larger than three, we show that the number of preempting equilibria is strictly larger than the number of postponing equilibria and that asymmetric equilibria may also occur. We then generalize some of these results to a setting in which n players have K stealing opportunities in a stealing game that lasts for \(T>K\) periods.

The paper is organized as follows: Sect. 2 reviews the relevant literature. Section 3 formally introduces the stealing game. Section 4 defines the equilibria of the game when players have a single stealing opportunity and there are only two periods. Section 5 generalizes the results, and Sect. 6 concludes.

2 Literature review

In terms of approach, modeling strategy, and topic of investigation, the stealing game has ties with various strands of the literature. The game is a timing game, i.e., a game in which agents must decide when to move (in our specific case, when to use their stealing attempts). The stealing game shares features with different archetypes of timing games. The two-player game belongs, in fact, to the class of preemption games. These are games in which it is better to act before one’s rivals; famous examples are the Stackelberg quantity game (Stackelberg 1934) and the centipede game (Rosenthal 1981). On the other hand, the game with more than two players displays some features that are typical of a war of attrition (Maynard Smith 1974), a strategic situation in which preempting the others is disadvantageous.2

The stealing game can also be seen as a dynamic contest where agents compete over different periods with the goal of winning a final prize. Within the rich literature on dynamic contests (see Konrad 2009 for a review), the framework recently discussed by Sela and Erez (2013) shares some similarities with our stealing game. The authors study a specific form of repeated competition where, in any given period, a firm wins a contest against its rival with a probability that is positively affected by the firm’s relative allocation of a finite resource. Firms are budget-constrained: they start with a given budget, and this budget is progressively eroded by the allocation they implement in each period. Their game thus differs from the stealing game along a number of dimensions, such as the payoff structure (a prize in each period versus a unique final prize in our game), the number of players involved (\(n=2\) versus \(n\ge 2\) in our game), and the way the agents compete (by allocating resources versus by stealing resources in our game). Nevertheless, two important features characterize both games: players are budget-constrained (in our case, with respect to the number of stealing possibilities) and must therefore choose the timing profile of their actions, and, in any given period, the outcome of the interaction among agents is stochastic.

The idea that players compete by allocating finite resources across periods is also reminiscent of the Colonel Blotto game introduced by Borel (1921). In this game, two contestants must simultaneously deploy their armies over various battlefields, and in every battlefield, victory goes to the agent who positions the greater force. The winner of the game is the agent who wins in the majority of battlefields. Indeed, the basic structure of the stealing game is a specific version of a Blotto game where players can deploy at most one unit (i.e., one stealing possibility) over a subset of battlefields/periods. Our game is then enriched by other elements, such as the presence of more than two players and the positive relationship between a player’s strength and the expected effectiveness of his move.

Concerning the last point, Rinott et al. (2012) introduce and study a gladiator game. The game is a stochastic version of the Blotto game where the coaches of two teams of gladiators must decide how to allocate a finite amount of “total strength” within their teams. Gladiators are then involved in a sequence of one-to-one fights. In any given fight, the probability of winning is a probabilistic function of the fighters’ strength. At the end of each fight, the winner recovers his initial strength and remains to fight a new challenger. Thus, some stochastic elements in the determination of the winner as well as a positive relationship between a player’s current strength and his probability of winning are characteristics similar to our stealing game.

Finally, and partly moving to a different strand of the literature, Dubovik and Parakhonyak (2014) study a dynamic model of targeted competition (i.e., a model in which a player can compete/fight against a specific chosen rival). More precisely, three drug cartels compete over three markets, where each market is served by a different pair of cartels. Each cartel can allocate resources to fight the rival in any of the markets where it operates, and the amount of damage that a cartel can inflict on a rival is positively related to the cartel’s local strength, as measured by its manpower. There are thus some important similarities to the approach that we adopt in modeling the stealing game. In fact, the stealing game also provides a model of targeted competition (whenever \(n\ge 3\), each player must choose not only when to steal but also from whom). Moreover, our model also features a positive relationship between the current strength of a player and his expected ability to damage a rival. On the other hand, the two models differ in a number of ways. For instance, in our game, the aforementioned relationship is stochastic rather than deterministic, all players are active in a common market, agents do not accrue payoffs over time, and the analysis is not restricted to a situation with three players.

3 The stealing game

The stealing game is a discrete-time stochastic dynamic game in which \(n\ge 2\) risk-neutral players compete for the possession of a perfectly divisible resource whose size is constant and normalized to 1. Let \(\pi _{i}^{t}\in \left[ 0,1\right] \) be the share of the resource that agent \(i\in N=\left\{ 1,...,n\right\} \) holds at time \(t\in \left\{ 1,...,T+1\right\} \), where \(T\ge 2\) is finite and common knowledge. The vector \(\pi ^{t}=\left( \pi _{1}^{t},...,\pi _{n}^{t}\right) \) such that \(\sum _{i}\pi _{i}^{t}=1\) thus defines the allocation at time t, with \(\pi ^{1}=\left( \frac{1}{n},...,\frac{1}{n}\right) \).

The goal of the players is to be the largest shareholder at the moment the game is over (i.e., at \(t=T+1\)). Throughout the game, the only way in which an agent can increase his holdings is to steal part of the resource from someone else. Each agent is endowed with \(K<T\) stealing opportunities. A player’s problem consists of deciding when to use these opportunities (agents can use at most one stealing opportunity per period) and which opponent to target (agents can steal from a single rival).3 The vector \(k^{t}=\left( k_{1}^{t},...,k_{n}^{t}\right) \) with \(k_{i}^{t}\in \left\{ 0,...,K\right\} \) describes players’ remaining stealing opportunities at the beginning of period t.

The state of the game at time t is thus defined by \(\theta ^{t}=\left( \pi ^{t},k^{t}\right) \). In any period \(t\in \left\{ 1,...,T\right\} \), agents first observe \(\theta ^{t}\) and then simultaneously choose whether to remain inactive (action \(a_{i}^{t}=\emptyset \)) or steal from a specific opponent j (action \(a_{i}^{t}=j\)). Obviously, an agent who runs out of stealing opportunities must necessarily remain inactive. We indicate with \(A_{i}^{t}\) a player’s action space and with \(a^{t}=(a_{1}^{t},...,a_{n}^{t})\) an action profile.
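To fix ideas, the following minimal sketch (our own illustration, not part of the paper’s formal apparatus; all names are hypothetical) encodes the state \(\theta ^{t}=\left( \pi ^{t},k^{t}\right) \) just described and the actions available to a player: remaining inactive is always feasible, while stealing from a specific rival requires an unused stealing opportunity.

```python
# A minimal sketch of the state theta_t = (pi_t, k_t) and of feasible actions.

from dataclasses import dataclass
from typing import Dict

@dataclass
class State:
    pi: Dict[str, float]   # current shares, summing to 1
    k: Dict[str, int]      # remaining stealing opportunities per player

def feasible_actions(state: State, i: str):
    """Staying inactive (None, i.e. a_i = "empty set") is always feasible;
    stealing from a specific rival j requires an unused stealing opportunity."""
    actions = [None]
    if state.k[i] > 0:
        actions += [j for j in state.pi if j != i]
    return actions

# Hypothetical three-player game at t = 1:
s0 = State(pi={"a": 1/3, "b": 1/3, "c": 1/3}, k={"a": 1, "b": 1, "c": 1})
print(feasible_actions(s0, "a"))   # [None, 'b', 'c']
```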

Whenever an agent plays \(a_{i}^{t}=j\), the maximal amount \(s_{i}^{t}\in \left[ 0,1\right] \) that agent i can steal from j is determined by the realization of the random variable \(S_{i}^{t}\). Let \(\bar{s}_{i}^{t}\) denote the expected value of \(S_{i}^{t}\). The following assumption states that, on average, larger players are better thieves, i.e., players whose stealing attempts are expected to be more effective.

Assumption 1

\(\bar{s}_{i}^{t}=f(\pi _{i}^{t})\) with \( f(0)=0\) and \(f^{\prime }(\cdot )>0\).
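For concreteness, one admissible specification (our example; the paper does not commit to a particular functional form) is the linear rule
$$\begin{aligned} f(\pi _{i}^{t})=\alpha \pi _{i}^{t},\quad \alpha \in \left( 0,\tfrac{1}{2}\right] , \end{aligned}$$
which satisfies \(f(0)=0\) and \(f^{\prime }(\cdot )=\alpha >0\); for instance, \(S_{i}^{t}\) uniformly distributed on \(\left[ 0,2\alpha \pi _{i}^{t}\right] \) has exactly this mean and remains in \(\left[ 0,1\right] \).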

Clearly, there may be cases of “excess demand,” i.e., situations in which one or more players simultaneously steal from agent j but j’s holdings are not enough to satisfy aggregate demand. More formally, \(\sum _{l:a_{l}^{t}=j}s_{l}^{t}>\pi _{j}^{t}\). Whenever such an event occurs, we assume that thieves obtain a share that is proportional to the strength of their stealing attempts. We can thus define the actual amount \(y_{i}^{t}\le s_{i}^{t}\) that agent i manages to steal from j as follows:
$$\begin{aligned} y_{i}^{t}=\min \left\{ s_{i}^{t},\frac{s_{i}^{t}}{\sum \nolimits _{l:a_{l}^{t}=j}s_{l}^{t}}\pi _{j}^{t}\right\} . \end{aligned}$$
(1)
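The rationing rule in (1) is easy to operationalize. The following sketch (ours; the player labels and numerical values are purely hypothetical) computes the realized loot of every thief given the targets, the realized draws \(s_{i}^{t}\), and the current allocation.

```python
# Sketch of the rationing rule (1): each thief receives his draw when the
# victim's holdings suffice, otherwise a share proportional to his draw.

def realized_loot(targets, draws, pi):
    """targets: dict thief -> victim; draws: dict thief -> s_i in [0, 1];
    pi: dict player -> current share. Returns dict thief -> y_i."""
    loot = {}
    for j in set(targets.values()):
        thieves = [i for i, victim in targets.items() if victim == j]
        demand = sum(draws[i] for i in thieves)
        for i in thieves:
            if demand <= pi[j]:
                loot[i] = draws[i]                   # no excess demand
            else:
                loot[i] = draws[i] / demand * pi[j]  # proportional rationing
    return loot

# Hypothetical example: a and b both steal from c, who holds 0.2, while
# aggregate demand is 0.15 + 0.10 = 0.25 > 0.2.
print(realized_loot({"a": "c", "b": "c"}, {"a": 0.15, "b": 0.10},
                    {"a": 0.4, "b": 0.4, "c": 0.2}))
# a receives 0.12 and b receives 0.08 (up to floating-point rounding)
```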
Agents’ payoffs are determined by the final allocation of the resource. More precisely, the player who, in period \(T+1\), holds the largest share gets a prize of size 1. The others get zero. If there is more than one market leader, the prize is equally shared among the winners.

We follow the standard practice in the stochastic games literature (see for instance Maskin and Tirole 2001) and focus on Markov strategies. Let \( h_{t}=\left( a^{1},...,a^{t-1}\right) \) be the history of the game at the beginning of period t, i.e., the sequence of actions chosen up to period \( t-1\). A Markov strategy depends only on the current state \(\theta ^{t}\) and not on the entire history of play \(h_{t}\). Formally, it is given by \(\sigma _{i}=\left( \sigma _{i}^{1},...,\sigma _{i}^{T}\right) \) where \(\sigma _{i}^{t}\) maps the current state into actions, i.e., \(\sigma _{i}^{t}:\varTheta \rightarrow A_{i}^{t}\). Notice that we only consider pure strategies as every \(\sigma _{i}^{t}\) selects a specific action in \(A_{i}^{t}\) and does not involve any randomization. We indicate with \(\sigma =\left( \sigma _{1},...,\sigma _{n}\right) \) a profile of pure Markov strategies.

We use as a solution concept the notion of Markov perfect equilibrium (MPE). An MPE is a subgame perfect equilibrium in which all players use Markov strategies (see again Maskin and Tirole 2001). Let \(\bar{u}_{i}\left( \sigma \right) \) indicate agent i's expected payoff given the strategy profile \(\sigma \). A profile \(\hat{\sigma }=\left( \hat{\sigma }_{1},...,\hat{\sigma }_{n}\right) \) is a pure strategy MPE if, for any \(t, h_{t}\), and i,
$$\begin{aligned} \bar{u}_{i}\left( \hat{\sigma }_{i},\hat{\sigma }_{-i}\mid h_{t}\right) \ge \bar{u}_{i}\left( \sigma _{i},\hat{\sigma }_{-i}\mid h_{t}\right) \quad \text { for all }\sigma _{i}. \end{aligned}$$
(2)

3.1 Some preliminary results

The following lemma, whose proof is trivial and is therefore omitted, reduces the set of strategies that can be part of a MPE. It states that in any equilibrium, all agents use all their stealing opportunities.

Lemma 1

Let the strategy profile \(\hat{\sigma }=\left( \hat{\sigma }_{1},...,\hat{ \sigma }_{n}\right) \) be a Markov perfect equilibrium. Then, \(\hat{\sigma } _{i} \) is such that \(k_{i}^{T+1}=0\) for any \(i\in N\).

Lemma 2 instead establishes the relationship between an agent’s current holdings \(\left( \pi _{i}^{t}\right) \) and the expected value of his loot \(\left( \bar{y}_{i}^{t}\right) \). It states that \(\bar{y}_{i}^{t}\) is strictly increasing in \(\pi _{i}^{t}\). The result immediately follows from Assumption 1 and the definition of \(y_{i}^{t}\) (see expression 1).

Lemma 2

\(\bar{y}_{i}^{t}=g(\pi _{i}^{t})\) with \(g(0)=0\) and \(g^{\prime }(\cdot )>0\).

In the remainder of the paper, we will also make extensive use of the notion of a “circle” of players.

Definition 1

Given any set of agents \(M\subseteq N\) where \(\left| M\right| =m\ge 2\), a circle of players \(C_{m}^{t}\) forms in M at time t if for any \( i\in M\) there is a unique \(j\in M\) such that \(a_{j}^{t}=i\).

In other words, a circle of players \(C_{m}^{t}\) is such that at time t all agents who belong to the subset M steal from each other.
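As a small illustration (ours, not from the paper), the check below verifies Definition 1 directly: an action profile forms a circle in M when every member of M is targeted by exactly one other member of M.

```python
# Sketch of a direct check of Definition 1 for a given set M and action profile.

def is_circle(M, actions):
    """M: set of players; actions: dict player -> target chosen at time t.
    True iff every i in M is targeted by exactly one j in M and nobody in M
    is inactive or targets himself -- Definition 1 restricted to M."""
    targets = {j: actions.get(j) for j in M}
    if any(t is None or t == j or t not in M for j, t in targets.items()):
        return False
    return sorted(targets.values()) == sorted(M)   # each member hit exactly once

# Hypothetical examples with M = {a, b, c}:
print(is_circle({"a", "b", "c"}, {"a": "b", "b": "c", "c": "a"}))  # True
print(is_circle({"a", "b", "c"}, {"a": "b", "b": "a", "c": "a"}))  # False: a hit twice, c never
```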

In the following, we investigate the stealing game under various specifications for the parameters. We adopt a “bottom-up” approach, that is, we start from the simplest possible setting and progressively generalize the analysis. Our primary goal is to solve for the optimal timing decisions of the players. The fact that the stealing game is a dynamic game that may involve many players competing over many periods often renders unfeasible a complete characterization of the equilibria in terms of strategy profiles that define a complete plan of action for every possible contingency that may arise. Therefore, we will often define the equilibria of the game in terms of the action profiles that emerge on the equilibrium path rather than in terms of complete strategy profiles. In other words, we do not distinguish between strategy profiles that may differ in terms of off-equilibrium-path behavior but still lead to the same equilibrium outcome.

4 The game with \(T=2\) and \(K=1\)

We start the analysis of the game by focusing on a basic, yet highly informative, case. More precisely, we study a stealing game that lasts two periods (i.e., \(T=2\)) and in which players have only one stealing opportunity (i.e., \(K=1\)). We first analytically solve the game with two and three players, then extend the results to the case in which \(n>3\).

4.1 The two-player game

If \(n=2\), each player has only one opponent from whom he can steal. The game is essentially a \(2\,\times \,2\) game in which players need only decide when to use their single stealing opportunity. The following proposition states that in the unique equilibrium of the game, both agents steal from each other in \(t=1\). The proof is trivial (in expectation, stealing in \(t=1\) is strictly dominant) and is therefore omitted.

Proposition 1

The stealing game with two players has a unique MPE. In this equilibrium, both agents belong to the circle \(C_{2}^{1}\).
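The preemption logic behind Proposition 1 can be illustrated with a quick Monte Carlo sketch. Everything below is our own illustration under an assumed specification (the paper does not fix one): \(S_{i}^{t}\) uniform on \(\left[ 0,\pi _{i}^{t}\right] \), so that \(f(\pi )=\pi /2\). We compare player 2's expected payoff when both players steal in \(t=1\) with his payoff when he waits until \(t=2\) while player 1 preempts him.

```python
import random

# Assumed specification (ours, for illustration only): S_i^t ~ Uniform(0, pi_i^t),
# so the expected theft is f(pi) = pi / 2, which satisfies Assumption 1.

def draw(pi_i):
    return random.uniform(0.0, pi_i)

def payoff_2(p1_final, p2_final):
    """Player 2's payoff: 1 if he is the leader, 1/2 if tied, 0 otherwise."""
    if p2_final > p1_final:
        return 1.0
    return 0.5 if p2_final == p1_final else 0.0

def expected_payoff_2(waits, trials=100_000):
    """Player 1 steals in t=1; player 2 steals in t=1 (waits=False) or t=2 (waits=True)."""
    total = 0.0
    for _ in range(trials):
        p1 = p2 = 0.5
        y1 = min(draw(p1), p2)                  # player 1's theft from player 2 in t=1
        if waits:
            p1, p2 = p1 + y1, p2 - y1           # player 2 suffers the theft first...
            y2 = min(draw(p2), p1)              # ...and retaliates from a smaller base in t=2
            p1, p2 = p1 - y2, p2 + y2
        else:
            y2 = min(draw(p2), p1)              # simultaneous thefts in t=1
            p1, p2 = p1 + y1 - y2, p2 + y2 - y1
        total += payoff_2(p1, p2)
    return total / trials

print("both steal in t=1:", expected_payoff_2(waits=False))  # about 0.5 by symmetry
print("player 2 waits    :", expected_payoff_2(waits=True))  # noticeably below 0.5
```

The gap reflects Lemma 2: after being robbed, the waiting player's expected loot shrinks, so retaliating from a smaller share cannot compensate on average.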

4.2 The three-player game

When \(n=3\), the stealing game presents multiple equilibria. These can be characterized as follows:

Proposition 2

The three-player stealing game has four pure strategy MPE:
  • two equilibria are such that every agent belongs to a circle \( C_{3}^{1}\) (all players move in \(t=1\));

  • two equilibria are such that every agent belongs to a circle \( C_{3}^{2}\) (all players move in \(t=2\)).

Defining the set of players as \(N=\left\{ a,b,c\right\} \), Proposition 2 thus identifies the following equilibrium outcomes:
$$\begin{aligned} O_{1}&=\left( (b,\emptyset ),(c,\emptyset ),\left( a,\emptyset \right) \right) \quad O_{3}=\left( (\emptyset ,b),(\emptyset ,c),\left( \emptyset ,a\right) \right) \\ O_{2}&=\left( (c,\emptyset ),(a,\emptyset ),\left( b,\emptyset \right) \right) \quad O_{4}=\left( (\emptyset ,c),(\emptyset ,a),\left( \emptyset ,b\right) \right) \end{aligned}$$
All the equilibria are Pareto equivalent, with \(\bar{u}_{i}\left( \hat{\sigma }\right) =\frac{1}{3}\) for all i, since all players are equally likely to finish the game as the largest shareholder. The interesting feature of the three-player game, compared with the two-player game, is that equilibria exist in which all agents postpone their moves. While in the two-player case any strategy that prescribes postponing one's move is dominated, the same is not true when the number of players equals three. There exist, in fact, states of the world where postponing one’s move pays off. Consider, for instance, a situation in which player a is the unique agent who moves in the first period, and assume he steals from player b. Clearly, this is an ideal scenario for player c, because he can now observe the new state \(\theta ^{2}\) and then decide how to use his stealing opportunity (which is still fully effective).

4.3 The game with \(n>3\) players

We now extend the analysis of the stealing game to a situation in which more than three players compete over two periods and have a single stealing opportunity. As before, our primary interest lies in investigating the timing of agents’ moves and their decision whether to preempt or postpone their stealing opportunity. The following proposition defines the preempting equilibria, the ones in which all players move in \(t=1\).

Proposition 3

The stealing game with more than three players has multiple pure strategy MPE in which all players move in \(t=1\). All these equilibria are such that every agent belongs to a circle of players.

Notice that the number of preempting equilibria rapidly explodes with the number of players. In fact, for any \(n>3\), equilibrium profiles are not only those that support the \((n-1)!\) possible circles that involve all the players (i.e., the circles \(C_{n}^{1}\)) but also those in which the set N is partitioned and smaller circles (possibly of different sizes) emerge in every part.

There also exist postponing equilibria in which all the players use their stealing opportunities in \(t=2\). However, the conditions that define them are stricter, as shown by the following proposition.

Proposition 4

The stealing game with more than three players has multiple pure strategy MPE in which all players move in \(t=2\). All these equilibria are such that every agent belongs to a circle of players, and every circle contains at least three players.

Proposition 4 indicates that there cannot exist postponing equilibria that feature circles made of two players because within any circle of this kind, players would like to deviate in order to preempt their rival. In fact, Proposition 1 showed that the only circle that qualifies as an equilibrium when \(n=2\) is the one in which both players move in \(t=1\).
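Before summarizing these comparisons, the following enumeration sketch (ours) makes the gap between preempting and postponing equilibria concrete. Encoding an equilibrium outcome as a target assignment in which every agent belongs to a circle, preempting outcomes correspond to assignments whose cycles all have length at least two, and postponing outcomes to assignments whose cycles all have length at least three; for \(n=5\) this reproduces the counts of 44 and 24 reported in footnote 4.

```python
from itertools import permutations

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a tuple of images of 0..n-1."""
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, node = 0, start
        while node not in seen:
            seen.add(node)
            node = perm[node]
            length += 1
        lengths.append(length)
    return lengths

n = 5
preempting = postponing = 0
for perm in permutations(range(n)):      # perm[i] = the player from whom i steals
    lengths = cycle_lengths(perm)
    if min(lengths) >= 2:                # every agent belongs to some circle
        preempting += 1
    if min(lengths) >= 3:                # every circle has at least three players
        postponing += 1

print(preempting, postponing)            # 44 preempting vs 24 postponing outcomes
```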

Comparing Propositions 1 through 4, it is possible to state three additional results that characterize a stealing game in which \(n\ge 2\) agents compete over two periods and have a single stealing opportunity:
  • whenever \(n\ne 3\), not all the strategy profiles where every agent belongs to a circle of players are equilibria.

  • for any \(n\ge 4\), the number of preempting equilibria is strictly larger than the number of postponing equilibria.4

  • for any \(n\ge 4\), there exist equilibria that are asymmetric with respect to the timing decision.5

5 The game with \(T>2\) and \(K<T\)

As a further generalization, we consider a stealing game that lasts for T periods and in which players have \(K\in \left\{ 1,...,T-1\right\} \) stealing possibilities. The widening of players’ action spaces, paired with the stochastic nature of the game, rapidly enlarges the state space. This fact renders a clear categorization of agents’ best responses unfeasible. As such, not only a proper characterization of the equilibria but also the mere description of the action profiles that emerge along the various equilibrium paths appears to be out of reach.

It is, however, possible to state some very general results. These maintain the same qualitative features as those presented in the previous sections, at least as far as the timing of agents’ first moves is concerned. The main insights are that in equilibrium, all players may remain idle for some initial periods (the two-player case being an exception), and that when a player uses his first stealing opportunity, he necessarily belongs to a circle of players. The following proposition formalizes these results:

Proposition 5

All pure strategy MPE of a stealing game in which \(n\ge 2\) players compete over \(T>2\) periods and have \(K<T\) stealing opportunities are such that
  • if \(n=2\), both agents belong to the circles \(C_{2}^{t}\) for any \(t\in \left\{ 1,...,K\right\} \);

  • if \(n\ge 3\), each agent belongs to a circle \(C_{m}^{t^{*}}\) where \(t^{*}\) is the period in which the agent uses his first stealing opportunity. In particular, \(t^{*}=1\) if \(m=2\) whereas \(t^{*}\in \left\{ 1,...,T-K+1\right\} \) if \(m\ge 3\).

6 Conclusions

We analyzed what we called the stealing game, a stochastic game in which players must decide when to steal portions of a homogeneous good from each other with the goal of finishing with the largest share. The peculiarity of the game is that the expected effectiveness of a player’s theft is increasing in the agent’s holdings. We showed that in a stealing game with two agents, players always want to preempt their rival and thus employ their stealing opportunities as soon as possible. In contrast, with three players, the game also displays equilibria in which all the agents postpone their moves. Finally, we showed that when the number of players is larger than three, asymmetric equilibria exist, and not all the players necessarily steal in the same periods.

Footnotes

  1.

    As an example of a situation that matches some of the key features of the game, consider the case of electoral competition among political candidates. By campaigning on specific topics, a candidate may target a particular opponent and thus “steal” a portion of his voters. Moreover, larger players (i.e., candidates with many supporters) are usually able to raise more funds, so they can afford more expensive campaigns, which are in turn expected to be more effective.

  2.

    More recent literature on timing games has focused on generalizing earlier results (Bulow and Klemperer 1999), on providing a unified framework to study preemption games and wars of attrition (Park and Smith 2008), and on experimentally testing some of the theoretical results (Brunnermeier and Morgan 2010).

  3.

    There are a number of things to notice here. First, we set \(K<T\) because we are interested in studying a situation in which stealing opportunities are a scarce resource, and players must decide when to use them. Second, an agent can freely change the rival he targets across periods: agent a may steal from b in a certain period and then from c in a subsequent period. Finally, we assume for simplicity that there are no explicit monetary costs associated with the act of stealing. Such an assumption implies little loss of generality since all the results would remain valid as long as stealing costs do not exceed a certain threshold.

  4.

    Consider, for instance, a stealing game with \(n=5\). Proposition 3 implies that preempting equilibria can emerge only in partitions (5) and (3, 2). The number of preempting equilibria is thus 44: there exist \(4!=24\) equilibrium outcomes in partition (5) and 20 equilibrium outcomes in partition (3, 2) (ten pairs can be drawn from a set of 5 elements, and for each of these pairs there are two possible circles that can emerge in the part that involves 3 players). On the contrary, Proposition 4 states that postponing equilibria can emerge only in partition (5), since players must necessarily belong to a circle \(C_{5}^{2}\). It follows that there are only 24 postponing equilibria.

  5.

    Let \(N=\left\{ a,b,c,d,e\right\} \). The outcome \(O_{1}=\left( (b,\emptyset ),(a,\emptyset ),(\emptyset ,d),(\emptyset ,e),(\emptyset ,c)\right) \) is an example of an asymmetric equilibrium: a and b belong to the circle \(C_{2}^{1}\) and move in \(t=1\), while c, d, and e belong to a circle \(C_{3}^{2}\) and move in \(t=2\).


Acknowledgments

I thank an anonymous referee for very helpful comments that substantially improved the paper. I am also grateful to Pascal Courty, Dino Gerardi, Edoardo Grillo, Dorothea Kübler, Vilen Lipatov, Marco Mariotti, Ignacio Monzón, and Karl Schlag for useful suggestions and discussions. All remaining errors are my own.

References

  1. Borel, E. (1921). La théorie du jeu et les équations intégrales à noyau symétrique. Comptes Rendus de l'Académie des Sciences, 173, 1304–1308. English translation: Savage, L. (1953). The theory of play and integral equations with skew-symmetric kernels. Econometrica, 21, 97–100.
  2. Brunnermeier, M. K., & Morgan, J. (2010). Clock games: Theory and experiments. Games and Economic Behavior, 68, 532–550.
  3. Bulow, J., & Klemperer, P. (1999). The generalized war of attrition. American Economic Review, 89, 175–189.
  4. Dubovik, A., & Parakhonyak, A. (2014). Drugs, guns, and targeted competition. Games and Economic Behavior, 87, 497–507.
  5. Harsanyi, J., & Selten, R. (1988). A general theory of equilibrium selection in games. Cambridge: MIT Press.
  6. Konrad, K. A. (2009). Strategy and dynamics in contests. Oxford, UK: Oxford University Press.
  7. Maynard Smith, J. (1974). Theory of games and the evolution of animal contests. Journal of Theoretical Biology, 47, 209–221.
  8. Maskin, E., & Tirole, J. (2001). Markov perfect equilibrium I. Observable actions. Journal of Economic Theory, 100, 191–219.
  9. Park, A., & Smith, L. (2008). Caller number five and related timing games. Theoretical Economics, 3, 231–256.
  10. Rinott, Y., Scarsini, M., & Yu, Y. (2012). A Colonel Blotto gladiator game. Mathematics of Operations Research, 37, 574–590.
  11. Rosenthal, R. (1981). Games of perfect information, predatory pricing, and the chain store paradox. Journal of Economic Theory, 25, 92–100.
  12. Sela, A., & Erez, E. (2013). Dynamic contests with resource constraints. Social Choice and Welfare, 41, 863–882.
  13. von Stackelberg, H. (1934). Marktform und Gleichgewicht. Vienna and Berlin: Springer Verlag.

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. Department of Economics and Statistics, University of Torino and Collegio Carlo Alberto, Torino, Italy
