The evolution of control in the digital economy


Control over digital transactions has risen steadily in recent years, to an extent that calls into question the Internet’s traditional openness. To investigate the origins and effects of this change, the paper formally models the historical evolution of digital control. In the model, the economy-wide features of the digital space emerge from the endogenous adaptation (co-evolution) of users’ preferences (culture) and platform designs (technology). The model shows that: a) in the digital economy there exist two stable cultural-technological equilibria, one with intrinsically motivated users and low control, and the other with purely extrinsically motivated users and high control; b) before the opening of the Internet to commerce, the emergence of a low-control, intrinsic-motivation equilibrium was favored by the specific set of norms and values that formed the early culture of the networked environment; and c) the opening of the Internet to commerce can indeed cause a transition to a high-control, extrinsic-motivation equilibrium, even if the latter is Pareto inferior. Although it is too early to say whether such a transition is actually taking place, these results call for great care in evaluating policy proposals on Internet regulation.



  1. The intentional absence of control over users’ actions (e.g. in the provision of content) is a key feature of most sharing-based platforms, such as Wikipedia, YouTube and Flickr, as well as of the communities of free software developers and peer-to-peer file sharing networks. Similarly, the lack of control plays an important role in the decentralized mechanisms of relevance and accreditation implemented in on-line marketplaces such as Amazon and eBay. All these platforms can generally be considered instances of what Benkler (2006) calls peer production. For a detailed discussion of the role that users’ decisional autonomy plays in peer production, see Benkler and Nissenbaum (2006).

  2. See Robert Booth, Government plans increased email and social network surveillance, The Guardian, April 1, 2012 (last checked: April 24, 2012).

  3. For a detailed analysis of ACTA and related criticisms, see McManis (2008).

  4. See Jonathan Weisman, After an Online Firestorm, Congress Shelves Antipiracy Bills, The New York Times, January 20, 2012 (last checked: April 24, 2012).

  5. See Claire Cain Miller and Miguel Helft, Web Plan From Google and Verizon Is Criticized, The New York Times, August 9, 2010 (last checked: April 25, 2012). For more detail on the concept of “net neutrality”, see Wu (2003a).

  6. Lessig (1999, 2006) coined the well-known catchphrase “Code is Law”, capturing the idea that, in the digital space, software code (as opposed to law, markets and social norms) becomes the most powerful regulator of all. This is due to two main factors: first, the weakness of traditional law as a tool of on-line regulation; and second, the specific features of code associated with its malleability and nearly perfect enforceability. Overall, it is the combination of these features that, according to Lessig, makes cyberspace an arena of (potentially) perfect control. Obviously this does not mean that, at present, control is close to being perfectly implemented in the digital space. The diffusion of open-source initiatives, as well as the adoption of multi-licensing in the distribution of software packages, shows that there still exist wide segments of the media industry characterized by low levels of control. What Lessig’s argument suggests, however, is simply that, even in these segments, control is potentially available, and if it is not implemented there must be good reasons for it.

  7. Similar provisions are included in the terms of service of most digital platforms. See, for instance, art. 5.5 of Facebook’s Terms of Service: “if you repeatedly infringe other people’s intellectual property rights, we will disable your account when appropriate” (last checked: April 25, 2012).

  8. See Zittrain (2000) on the creation of so-called trusted systems.

  9. See MacKinnon (2012) on Apple’s app censorship practices.

  10. For a similar approach, see Benkler (2002b).

  11. This way of modelling motivational crowding out is generally called “marginal”. An alternative is to assume “categorical” crowding out. On the distinction between marginal and categorical crowding out, see Bowles (2012).

  12. In particular, this assumption abstracts from the possibility that an interior optimal rate of control exists. At the same time, however, the interior optimal rate of control would itself be a function of intrinsic motivation. It follows that, in the presence of users with heterogeneous motivations, two optimal rates of control would still exist, one involving lower control than the other. By focusing on corner solutions, we simply approximate (one or both of) these interior points and make the model easier to study.

  13. I choose to consider users with both intrinsic and extrinsic motivations, instead of purely intrinsically motivated users, for two reasons: first, in most parts of the digital economy, users who are both intrinsically and extrinsically motivated tend to be more frequent than purely intrinsically motivated users (see, for instance, Hertel et al. 2003; Lakhani and Wolf 2005; Hars and Ou 2001); second, comparing the effort levels associated with pure intrinsic motivation and pure extrinsic motivation would require additional constraints on parameters λ and ϕ, without relevant effects on the final results.

  14. A more complex version of the model could include other behavioral types, such as purely intrinsically motivated users or users with different degrees of intrinsic motivation. At this stage, however, I prefer to favor simplicity and leave more complex specifications for further research.

  15. For similar interpretations of the qualitative properties (i.e. optimality) of equilibria activating intrinsic motivation in an evolutionary game-theoretic setting, see Belloc and Bowles (2011, 2013).

  16. Digital rights management (DRM) systems are an example of access control technology: they add code to digital content that disables the simple ability to copy or distribute that content, at least without the technical permission of the DRM system itself (Lessig 2006). Presently, DRM is in common use by the entertainment industry (e.g. audio and video publishers). Many on-line music stores, such as Apple Inc.’s iTunes Store, as well as many e-book publishers, also use DRM, as do cable and satellite service operators to prevent unauthorized use of content or services.

  17. Deep packet inspection (DPI) systems are a form of computer network packet filtering that reads and classifies Internet traffic as it passes through a network, enabling the identification, analysis, blockage and even alteration of information (MacKinnon 2012). Initially, DPI was used mainly to secure private internal networks. Recently, Internet service providers (ISPs) have also started to apply this technology to the public network provided to consumers. Common uses of DPI by ISPs are lawful interception, policy definition and enforcement, targeted advertising, quality of service and copyright enforcement.

  18. The two graphs are adaptations of the data reported in Zittrain (2008).

  19. Internet Systems Consortium (ISC) is a non-profit public benefit corporation dedicated to supporting the infrastructure of the universal connected self-organizing Internet (and the autonomy of its participants) by developing and maintaining core production-quality software, protocols, and operations. For more detail on ISC and the data reported in Fig. 3, see the ISC website (last checked: April 30, 2012).

  20. The Computer Emergency Response Team (CERT) Coordination Center is a research center located at Carnegie Mellon University’s Software Engineering Institute with the aim of studying Internet security vulnerabilities. The same data were originally reported by Zittrain (2006). The data are available only for the period 1988-2003 because in 2004 CERT announced it would no longer keep track of security incidents, since attacks had become so commonplace as to be indistinguishable from one another.

  21. Software designed to infiltrate and damage a computer system (Zittrain 2006).

  22. A similar device is usually employed in population genetics to study the effects of random migration among groups.

  23. On the relationship between risk-dominance and stochastic stability, see Foster and Peyton Young (1990).

  24. At the beginning of 2012, after widespread protests, the vote on the two bills was indefinitely postponed by the U.S. Congress.

  25. See the Google-Verizon Proposal for a legislative framework for network neutrality (last checked: May 3, 2012).

  26. See Cain Miller and Helft, supra note 5.

  27. See Zack Whittaker, Wikipedia losing contributors: Fatal flaw, the community editors?, ZDNet, August 4, 2011 (last checked: May 3, 2012).


  1. Abbate J (1999) Inventing the internet. MIT Press, Cambridge

  2. Aghion P, Dewatripont M, Rey P (2004) Transferable control. J Eur Econ Assoc 2(1):115–138

  3. Aghion P, Tirole J (1997) Formal and real authority in organizations. J Polit Econ 105(1):1–29

  4. Baker G, Gibbons R, Murphy KJ (1999) Informal authority in organizations. J Law Econ Org 15(1):56–73

  5. Belloc M, Bowles S (2011) International trade, factor mobility and the persistence of cultural-institutional diversity. Unpublished manuscript

  6. Belloc M, Bowles S (2013) The persistence of inferior cultural-institutional conventions. Am Econ Rev Pap Proc 103(3):1–7

  7. Benabou R, Tirole J (2003) Intrinsic and extrinsic motivation. Rev Econ Stud 70:489–520

  8. Benkler Y (1998) Overcoming agoraphobia: Building the commons of the digitally networked environment. Harvard J Law Technol 11(2):287–400

  9. Benkler Y (2001) Siren songs and Amish children: Autonomy, information, and law. N Y Univ Law Rev 76:23–113

  10. Benkler Y (2002a) Coase’s penguin, or Linux and the nature of the firm. Yale Law J 112(3):369–446

  11. Benkler Y (2002b) Intellectual property and the organization of information production. Int Rev Law Econ 22:81–107

  12. Benkler Y (2006) The wealth of networks: How social production transforms markets and freedom. Yale University Press, New Haven

  13. Benkler Y (2012a) A free irresponsible press: Wikileaks and the battle over the soul of the networked fourth estate. Harvard Civil Rights-Civil Liberties Law Review, forthcoming

  14. Benkler Y (2012b) Wikileaks and the PROTECT IP Act: A new public-private threat to the internet commons. Daedalus, J Am Acad Arts Sci 140(4):154–164

  15. Benkler Y, Nissenbaum H (2006) Commons-based peer production and virtue. J Pol Philos 14(4):394–419

  16. Berners-Lee T (1999) Weaving the web: The original design and ultimate destiny of the world wide web. HarperCollins, New York

  17. Bisin A, Verdier T (2001) The economics of cultural transmission and the dynamics of preferences. J Econ Theory 97:298–319

  18. Bollier D (2008) Viral spiral: How the commoners built a digital republic of their own. The New Press, New York

  19. Bowles S (1985) The production process in a competitive economy: Walrasian, neo-Hobbesian, and Marxian. Am Econ Rev 75(1):16–36

  20. Bowles S (2006) Microeconomics: Behavior, institutions and evolution. Princeton University Press, Princeton

  21. Bowles S, Choi J-K, Hopfensitz A (2003) The co-evolution of individual behaviors and social institutions. J Theor Biol 223:135–147

  22. Bowles S, Hwang S-H (2008) Social preferences and public economics: Mechanism design when social preferences depend on incentives. J Public Econ 92(8-9):1811–20

  23. Bowles S (2012) Economic incentives and social preferences. J Econ Lit, forthcoming

  24. Charness G, Cobo-Reyes R, Jimenez N, Lacomba JA, Lagos F (2011) The hidden advantage of delegation: Pareto-improvements in a gift-exchange game. Am Econ Rev, forthcoming

  25. Conner K, Rumelt R (1991) Software piracy: An analysis of protection strategies. Manag Sci 37:125–139

  26. Deci EL, Ryan RM (1985) Intrinsic motivation and self-determination in human behavior. Plenum Press, New York

  27. Deibert R, Palfrey J, Rohozinski R, Zittrain J (eds) (2010) Access controlled: The shaping of power, rights and rule in cyberspace. MIT Press, Cambridge

  28. Elkin-Koren N, Salzberger EM (2000) Law and economics in cyberspace. Int Rev Law Econ 19:553–581

  29. Falk A, Kosfeld M (2006) The hidden costs of control. Am Econ Rev 96(5):1611–30

  30. Fehr E, Herz H, Wilkening T (2010) The lure of authority: Motivation and incentive effects of power. Unpublished manuscript

  31. Foster DP, Peyton Young H (1990) Stochastic evolutionary game dynamics. Theor Popul Biol 38(2):219–232

  32. Frey BS (1997) Not just for the money: An economic theory of personal motivation. Edward Elgar, Cheltenham

  33. Frey BS, Jegen R (2001) Motivation crowding theory: A survey of empirical evidence. J Econ Surv 15(5):589–611

  34. Gagne M, Deci EL (2005) Self-determination theory and work motivation. J Organ Behav 26:331–362

  35. Goldsmith J, Wu T (2006) Who controls the internet? Illusions of a borderless world. Oxford University Press, New York

  36. Hars A, Ou S (2001) Working for free? Motivations for participating in open source projects. In: Proceedings of the 34th Annual Hawaii International Conference on System Sciences. IEEE

  37. Hertel G, Niedner S, Herrmann S (2003) Motivation of software developers in open source projects: An internet-based survey of contributors to the Linux kernel. Res Policy 32:1159–1177

  38. Himanen P (2001) The hacker ethic: A radical approach to the philosophy of business. Random House, New York

  39. Irlenbusch B, Ruchala GK (2008) Relative rewards within team-based compensation. Labour Econ 15:141–167

  40. Johnson DR, Post D (1996) Law and borders: The rise of law in cyberspace. Stanford Law Rev 48(5):1367–1402

  41. Lakhani KR, Wolf RG (2005) Why hackers do what they do: Understanding motivation and effort in free/open source software projects. In: Feller J, Fitzgerald B, Hissam SA, Lakhani K (eds) Perspectives on free and open source software. MIT Press, Cambridge

  42. Landini F (2012) Technology, property rights and organizational diversity in the software industry. Struct Chang Econ Dyn 23(2):137–150

  43. Leiner BM, Cerf VG, Clark DD, Kahn RE, Kleinrock L, Lynch DC, Postel J, Roberts LG, Wolff SS (2001) The past and future history of the internet. Commun ACM 40(2):102–108

  44. Lessig L (1996) The zones of cyberspace. Stanford Law Rev 48(5):1403–1411

  45. Lessig L (1999) Code and other laws of cyberspace. Basic Books, New York

  46. Lessig L (2006) Code: Version 2.0. Basic Books, New York

  47. MacKinnon R (2012) Consent of the networked: The worldwide struggle for internet freedom. Basic Books, New York

  48. Marx K (1970) Il capitale: Critica dell’economia politica. Newton Compton, Roma

  49. McManis CR (2008) The proposed anti-counterfeiting trade agreement (ACTA): Two tales of a treaty. Houston Law Rev 46(4):1235–1256

  50. Mitchell WJ (1995) City of bits: Space, place, and the infobahn. MIT Press, Cambridge

  51. Naidu S, Hwang S-H, Bowles S (2010) Evolutionary bargaining with intentional idiosyncratic play. Econ Lett, forthcoming

  52. Parsons T (1963) On the concept of political power. Proceedings of the American Philosophical Society, June, 232–262

  53. Posner RA (1974) Theories of economic regulation. Mimeo

  54. Post D (1995) Anarchy, state and the internet. Journal of Online Law (3)

  55. Reidenberg JR (1998) Lex informatica: The formulation of information policy rules through technology. Texas Law Rev 76(3):553–593

  56. Shaw A (2008) The problem with the anti-counterfeiting trade agreement (and what to do about it). KEStudies 2

  57. Shy O, Thisse J-F (1999) A strategic approach to software protection. J Econ Manag Strateg 8:163–190

  58. Simon HA (1951) A formal theory of the employment relationship. Econometrica 19(3):293–305

  59. Slive J, Bernhardt D (1998) Pirated for profit. Can J Econ 31:886–899

  60. Sterling B (2002) The hacker crackdown: Law and disorder on the electronic frontier. Bantam Books, New York

  61. Strahilevitz LJ (2003) Charismatic code, social norms, and the emergence of cooperation on the file-swapping networks. Virginia Law Rev 89(3):505–595

  62. Takeyama L (1994) The welfare implications of unauthorized reproduction of intellectual property in the presence of demand network externalities. J Ind Econ 42:155–166

  63. von Hippel E (2005) Democratizing innovation. MIT Press, London

  64. Weber M (1978) Economy and society. University of California Press, Berkeley

  65. Wu T (2003a) Network neutrality, broadband discrimination. J Telecommun High Technol Law 2:141–176

  66. Wu T (2003b) When code isn’t law. Virginia Law Rev 89(4):679–751

  67. Wu T (2010) The master switch: The rise and fall of information empires. Random House, New York

  68. Young HP (1998) Individual strategy and social structure: An evolutionary theory of institutions. Princeton University Press, Princeton

  69. Zittrain J (2000) What the publisher can teach the patient: Intellectual property and privacy in an era of trusted privication. Stanford Law Rev 52(5):1201–1250

  70. Zittrain J (2003) Internet points of control. Boston College Law Rev 44:653

  71. Zittrain J (2006) The generative internet. Harvard Law Rev 119(7):1974–2040

  72. Zittrain J (2008) The future of the internet and how to stop it. Yale University Press, New Haven and London



Acknowledgments

The author is grateful to Ugo Pagano, Sam Bowles, and participants in the ISLE 2012 conference at the University of Rome 3 for useful discussions and comments. The usual caveat applies.

Author information



Corresponding author

Correspondence to Fabio Landini.


Appendix A


Proof of Lemma 1 Differentiating Eq. 4 with respect to a gives the following best-response functions for EI- and PE-users when paired with a generic designer j: a EI,j = ϕ + λ(1 − t) and a PE,j = ϕ. By substituting away for t, we obtain the best-response levels of a reported in the lemma. □


Proof of Proposition 1 {EI, L} is a Nash equilibrium as long as: (a) (ϕ + λ)²/2 > ϕ²/2, and (b) q(ϕ + λ) − γηk > qϕ − δ/2. Condition (a) is self-evident. Condition (b) reduces to \(\delta >2(\gamma \eta k - q\lambda )=\underline {\delta }\). Similarly, {PE, H} is a Nash equilibrium as long as: (c) ϕ²/2 > ϕ²/2 − μ and (d) qϕ − γk < qϕ − δ/2. Condition (c) is self-evident. Condition (d) reduces to \(\delta <2\gamma k=\overline {\delta }\). For 0 < η < 1, \(\underline {\delta }<\overline {\delta }\) always holds. It follows that: (i) when \(\delta >\overline {\delta }\) condition (b) is satisfied but not condition (d), hence {EI, L} is the only Nash equilibrium; (ii) when \(\delta <\underline {\delta }\) condition (d) is satisfied but not condition (b), hence {PE, H} is the only Nash equilibrium; and (iii) when \(\underline {\delta }<\delta <\overline {\delta }\) conditions (b) and (d) are simultaneously satisfied, hence both {EI, L} and {PE, H} are Nash equilibria. Corollary 1.1 follows from the fact that two necessary conditions for {PE, L} and {EI, H} to be Nash equilibria are that PE is a best response to L and EI is a best response to H, but this is impossible because it would violate conditions (a) and (c) above. Corollary 1.2 follows directly from points (i), (ii) and (iii) above. □
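The equilibrium structure described above can be checked numerically from the Table 1 payoffs. A minimal sketch (all parameter values are my own illustrative assumptions, chosen so that the two thresholds are positive and distinct):

```python
# Numerical check of the equilibrium structure in Proposition 1, using the
# payoffs derived in Appendix B (all parameter values are illustrative
# assumptions, not taken from the paper).
phi, lam, mu, q, gamma, eta, k = 1.0, 0.5, 0.2, 1.0, 1.0, 0.5, 2.0
d_lo = 2*(gamma*eta*k - q*lam)   # underline delta = 1.0
d_hi = 2*gamma*k                 # overline delta = 4.0

def nash_set(delta):
    # Table 1 payoffs: U[(user, designer)], P[(designer, user)]
    U = {('EI', 'L'): (phi+lam)**2/2, ('EI', 'H'): phi**2/2 - mu,
         ('PE', 'L'): phi**2/2,       ('PE', 'H'): phi**2/2}
    P = {('L', 'EI'): q*(phi+lam) - gamma*eta*k, ('L', 'PE'): q*phi - gamma*k,
         ('H', 'EI'): q*phi - delta/2,           ('H', 'PE'): q*phi - delta/2}
    # a profile is Nash when neither the user nor the designer can gain by deviating
    return {(u, d) for u in ('EI', 'PE') for d in ('L', 'H')
            if all(U[(u, d)] >= U[(u2, d)] for u2 in ('EI', 'PE'))
            and all(P[(d, u)] >= P[(d2, u)] for d2 in ('L', 'H'))}

low = nash_set(d_lo - 0.5)        # delta below underline-delta
mid = nash_set((d_lo + d_hi)/2)   # delta in the bistable range
high = nash_set(d_hi + 0.5)       # delta above overline-delta
print(low, mid, high)
```

For these parameters the three regimes of Corollary 1.2 appear exactly as stated: only {PE, H}, then both equilibria, then only {EI, L}.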


Proof of Proposition 2 For any λ > 0, a necessary and sufficient condition for {PE, H} to be Pareto efficient is that q(ϕ + λ) − γηk < qϕ − δ/2, which reduces to \(\delta <\underline {\delta }\). Otherwise, {EI, L} Pareto dominates {PE, H}. This, together with the results of Proposition 1, implies that: (i) if \(\delta <\underline {\delta }\), then {PE, H} is Pareto efficient and it is also the only Nash equilibrium of the game; (ii) if \(\delta >\underline {\delta }\), then {EI, L} is Pareto dominant and it is also a Nash equilibrium. Points (i) and (ii), together with the fact that for \(\underline {\delta }<\delta <\overline {\delta }\) both {EI, L} and {PE, H} are Nash equilibria, prove the proposition. □


Proof of Proposition 3 The five cultural-technological equilibria are derived by solving the system (10)–(11) for \({\Delta }\omega _{EI}^{\tau }=0\) and \({\Delta }\omega _{L}^{\tau }=0\); the algebra is omitted. The asymptotic properties of each equilibrium are derived by analyzing the Jacobian matrix J(ω EI , ω L ) associated with system (10)–(11), which takes the following form:

$$J=\left( \begin{array} {ll} (1-2\omega_{EI})\left[\omega_{L}\left( \frac{\lambda^{2}}{2}+\phi\lambda+\mu \right)-\mu \right] &\quad\quad\quad\,\, (\omega_{EI}-\omega_{EI}^{2})\left( \frac{\lambda^{2}}{2}+\phi\lambda+\mu \right) \\ \quad\quad\,\,\,(\omega_{L}-{\omega_{L}^{2}})\left[q\lambda+\gamma k (1-\eta)\right] & (1-2\omega_{L})\left\lbrace\omega_{EI} \left[q\lambda+\gamma k (1-\eta)\right]+\frac{\delta}{2}-\gamma k \right\rbrace \end{array} \right)$$

At {0, 0}, we have

$$J=\left( \begin{array}{cc} -\mu & 0 \\ 0 & \frac{\delta}{2}-\gamma k \end{array} \right)$$

from which it follows that

$$ Tr(J)=-\mu+\frac{\delta}{2}-\gamma k \;\;\;\;\;\;\;\; and \;\;\;\;\;\;\;\; Det(J)=-\mu\left( \frac{\delta}{2}-\gamma k\right) $$

Since Tr(J) < 0 and Det(J) > 0 for any δ < 2γk, {0, 0} is asymptotically stable. At {1, 0}, we have

$$J=\left( \begin{array}{cc} \mu & 0 \\ 0 & q\lambda-\gamma\eta k+\frac{\delta}{2} \end{array} \right)$$

from which it follows that

$$ Tr(J)=\mu+ q\lambda -\gamma\eta k+\frac{\delta}{2} \;\;\;\;\;\;\;\; and \;\;\;\;\;\;\;\; Det(J)=\mu \left( q\lambda -\gamma\eta k+\frac{\delta}{2}\right) $$

Since Tr(J) > 0 and Det(J) > 0 for any δ > 2(γηk − qλ), {1, 0} is unstable. At {0, 1}, we have

$$J=\left( \begin{array}{cc} \frac{\lambda^{2}}{2}+\phi\lambda & 0 \\ 0 & -\frac{\delta}{2}+\gamma k \end{array} \right)$$

from which it follows that

$$ Tr(J)=\frac{\lambda^{2}}{2}+\phi\lambda -\frac{\delta}{2}+\gamma k \;\;\;\;\;\;\;\; and \;\;\;\;\;\;\;\; Det(J)=\left( \frac{\lambda^{2}}{2}+\phi\lambda\right)\left( \gamma k-\frac{\delta}{2}\right) $$

Since Tr(J) > 0 and Det(J) > 0 for any δ < 2γk, {0, 1} is unstable. At {1, 1}, we have

$$J=\left( \begin{array}{cc} -\frac{\lambda^{2}}{2}-\phi\lambda & 0 \\ 0 & -q\lambda+\gamma\eta k -\frac{\delta}{2} \end{array} \right)$$

from which it follows that

$$ Tr(J)=-\frac{\lambda^{2}}{2}-\phi\lambda -q\lambda +\gamma\eta k -\frac{\delta}{2} \;\;\;\;\;\;\;\; and \;\;\;\;\;\;\;\; Det(J)=-\left( \frac{\lambda^{2}}{2}+\phi\lambda\right) \left( \gamma\eta k -q\lambda -\frac{\delta}{2}\right) $$

Since Tr(J) < 0 and Det(J) > 0 for any δ > 2(γηk − qλ), {1, 1} is asymptotically stable. At \(\lbrace \omega _{EI}^{*},\omega _{L}^{*} \rbrace \), we have

$$J=\left( \begin{array} {cc} 0 & \frac{\left( 2\gamma k-\delta\right)\left[2q\lambda-2\gamma\eta k+\delta\right]}{4\left[q\lambda+\gamma k(1-\eta)\right]^{2}} \left( \frac{\lambda^{2}}{2}+\phi\lambda+\mu\right) \\ \frac{2\mu\lambda(\lambda+2\phi)}{\left[\lambda(\lambda+2\phi)+2\mu\right]^{2}} \left[ q\lambda+\gamma k(1-\eta)\right] & 0 \end{array}\right)$$

from which it follows that

$$ Det(J)=- \frac{\left( 2\gamma k-\delta\right)\left[2q\lambda -2\gamma\eta k+\delta\right]}{4\left[q\lambda +\gamma k(1-\eta)\right]^{2}} \left( \frac{\lambda^{2}}{2}+\phi\lambda +\mu\right) \cdot \frac{2\mu\lambda(\lambda +2\phi)}{\left[\lambda(\lambda +2\phi)+2\mu\right]^{2}} \left[ q\lambda +\gamma k(1-\eta)\right] $$

Since Det(J) < 0 for any δ > 2(γηk − qλ), \(\lbrace \omega _{EI}^{*},\omega _{L}^{*} \rbrace \) is a saddle. □
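The stability classification above can be verified numerically by evaluating the Jacobian entries at each equilibrium. A minimal sketch, with all parameter values my own illustrative assumptions inside the bistable range \(\underline {\delta }<\delta <\overline {\delta }\):

```python
# Illustrative numerical check of the stability analysis in Proposition 3
# (parameter values are arbitrary assumptions satisfying
#  2(gamma*eta*k - q*lam) < delta < 2*gamma*k, i.e. the bistable range).
phi, lam, mu, q, gamma, eta, k, delta = 1.0, 0.5, 0.2, 1.0, 1.0, 0.5, 2.0, 2.5

def jacobian(w_ei, w_l):
    # entries of the Jacobian of system (10)-(11), as given in Appendix A
    j11 = (1 - 2*w_ei) * (w_l*(lam**2/2 + phi*lam + mu) - mu)
    j12 = (w_ei - w_ei**2) * (lam**2/2 + phi*lam + mu)
    j21 = (w_l - w_l**2) * (q*lam + gamma*k*(1 - eta))
    j22 = (1 - 2*w_l) * (w_ei*(q*lam + gamma*k*(1 - eta)) + delta/2 - gamma*k)
    return j11, j12, j21, j22

def classify(w_ei, w_l):
    j11, j12, j21, j22 = jacobian(w_ei, w_l)
    tr, det = j11 + j22, j11*j22 - j12*j21
    if det < 0:
        return 'saddle'
    return 'stable' if tr < 0 else 'unstable'

# the interior equilibrium of Proposition 3
w_ei_star = (2*gamma*k - delta) / (2*(q*lam + gamma*k*(1 - eta)))
w_l_star = 2*mu / (lam*(lam + 2*phi) + 2*mu)

labels = {(0, 0): classify(0, 0), (1, 0): classify(1, 0),
          (0, 1): classify(0, 1), (1, 1): classify(1, 1),
          'interior': classify(w_ei_star, w_l_star)}
print(labels)
```

For these parameters the corners {0, 0} and {1, 1} come out stable, {1, 0} and {0, 1} unstable, and the interior point a saddle, matching the proof.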


Proof of Proposition 4 From Definition 3 and the values of \(\omega _{EI}^{*}\) and \(\omega _{L}^{*}\) reported in Proposition 3 it follows that:

  • μ ≥ ψ(2γk − δ)/2[δ + 2(qλ − γkη)] ⇔

    $$ r_{01}=\omega_{EI}^{*}=\frac{2\gamma k-\delta}{2\left[q\lambda +\gamma k(1-\eta)\right]}\;\;\;\;and\;\;\;\;r_{10}=1-\omega_{L}^{*}=\frac{\psi}{\psi +2\mu} $$
  • μ < ψ(2γk − δ)/2[δ + 2(qλ − γkη)] ⇔

    $$ r_{01}=\omega_{L}^{*}=\frac{2\mu}{\psi +2\mu}\;\;\;\;and\;\;\;\;r_{10}=1-\omega_{EI}^{*}=\frac{\delta + 2(q\lambda -\gamma k\eta)}{2\left[q\lambda +\gamma k(1-\eta)\right]} $$

where ψ = λ(λ + 2ϕ). According to Definition 5, E 0 is SSS if and only if r 10 < r 01. Simple algebra shows that, given Eqs. 20 and 21, the latter condition holds if and only if k > [ψ(2qλ + δ) + 2μδ]/2γ(2μ + ηψ) = k*. The second part of the proposition follows directly from Proposition 2. □
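The resistance comparison can be probed numerically. The sketch below (all parameter values are my own illustrative assumptions, chosen inside the bistable range) computes r01 and r10 on either side of the threshold k*:

```python
# Numerical probe of the resistances r01 and r10 in Proposition 4
# (all parameter values below are arbitrary illustrative assumptions,
#  chosen so that delta lies in the bistable range for both k values).
phi, lam, mu, q, gamma, eta, delta = 1.0, 0.5, 0.2, 1.0, 1.0, 0.5, 2.5
psi = lam*(lam + 2*phi)

def resistances(k):
    w_ei_star = (2*gamma*k - delta) / (2*(q*lam + gamma*k*(1 - eta)))
    w_l_star = 2*mu / (psi + 2*mu)
    # case split from the proof of Proposition 4
    if mu >= psi*(2*gamma*k - delta) / (2*(delta + 2*(q*lam - gamma*k*eta))):
        return w_ei_star, 1 - w_l_star        # r01, r10 (first case)
    return w_l_star, 1 - w_ei_star            # r01, r10 (second case)

k_star = (psi*(2*q*lam + delta) + 2*mu*delta) / (2*gamma*(2*mu + eta*psi))
checks = []
for k in (0.9*k_star, 1.1*k_star):            # one value below k*, one above
    r01, r10 = resistances(k)
    checks.append(r10 < r01)                  # per Definition 5, E0 is SSS when True
print(round(k_star, 3), checks)
```

For these parameters the comparison flips exactly at k*, i.e. the stochastically stable equilibrium changes as k crosses the threshold.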

Appendix B

Payoffs in Table 1

Let us denote by U i,j the utility of an i-type user when matched with a j-type designer, and by π j,i the return to a j-type designer when matched with an i-type user. Moreover, let a i,j denote the best-response level of a for an i-type user when matched with a j-type designer. Given Eqs. 1 and 2 we have:

$$ U_{EI,j}=[\phi +\lambda(1-t)]a_{EI,j}-\frac{a_{EI,j}^{2}}{2}-\mu t \;\;\;\; , \;\;\;\; U_{PE,j}=\phi a_{PE,j}-\frac{a_{PE,j}^{2}}{2} $$
$$ \pi_{L,i}=q a_{i,L} -\gamma\eta(\lambda)k \;\;\;\; , \;\;\;\; \pi_{H,i}=q a_{i,H}-\frac{\delta}{2} $$

where η(λ) takes the following form:

$$ \eta(\lambda)=\left\{\begin{array}{ll} 1, & \text{if } i=PE \\ & \\ \eta, & \text{if } i=EI \end{array}\right. $$

By substituting into Eqs. 23 and 24 the values of a i,j reported in Lemma 1, and substituting away for t (i.e. setting t = 0 and t = 1 for a match with an L-type and an H-type designer, respectively), we obtain the following results:

$$ U_{EI,L}=\frac{(\phi +\lambda)^{2}}{2} \;\;\;\; , \;\;\;\; U_{EI,H}=\frac{\phi^{2}}{2}-\mu \;\;\;\; , \;\;\;\; U_{PE,L}=U_{PE,H}=\frac{\phi^{2}}{2} $$
$$ \pi_{L,EI}=q (\phi +\lambda) -\gamma\eta k \;\;\;\; , \;\;\;\; \pi_{L,PE}=q\phi -\gamma k \;\;\;\; , \;\;\;\; \pi_{H,EI}=\pi_{H,PE}=q\phi -\frac{\delta}{2} $$

Replicator equations

The system of replicator equations represented by Eqs. 10 and 11 is obtained as follows. Let us write the probability that an agent (user or designer) of type i switches to type j at time τ as \(p_{ij}^{\tau }\). Given the updating process described above we have:

$$ p_{ij}^{\tau}=\left\{\begin{array}{ll} \beta \left( V_{j}^{\tau}-V_{i}^{\tau}\right), & \text{if }V_{j}^{\tau}>V_{i}^{\tau} \\ & \\ 0, & \text{if }V_{j}^{\tau}\leq V_{i}^{\tau} \end{array}\right. $$

for i, j = EI, PE with i ≠ j in the case of users, and i, j = L, H with i ≠ j in the case of designers. On this basis, the expected fraction of EI-users in period τ + 1 is given by:

$$ \omega_{EI}^{\tau + 1}= \omega_{EI}^{\tau}-\omega_{EI}^{\tau} (1-\omega_{EI}^{\tau})\alpha \sigma_{PE}\beta(V_{PE}^{\tau}-V_{EI}^{\tau})+(1-\omega_{EI}^{\tau})\omega_{EI}^{\tau} \alpha \sigma_{EI}\beta(V_{EI}^{\tau}-V_{PE}^{\tau}) $$

where σ PE and σ EI are two binary functions such that σ PE = 1 if \(V_{PE}^{\tau }>V_{EI}^{\tau }\) and is zero otherwise, σ EI = 1 if \(V_{EI}^{\tau }\geq V_{PE}^{\tau }\) and is zero otherwise, and σ PE + σ EI = 1. Equation 29 reads as follows: the expected fraction of EI-users at τ + 1 is given by the fraction of EI-users at τ (first term), minus the fraction of EI-users who are paired with a PE-user and switch their type (second term), plus the fraction of PE-users who are paired with an EI-user and switch their type (third term). Similarly, the expected fraction of L-designers in period τ + 1 is given by:

$$ \omega_{L}^{\tau + 1}= \omega_{L}^{\tau}-\omega_{L}^{\tau} (1-\omega_{L}^{\tau})\alpha \sigma_{H}\beta(V_{H}^{\tau}-V_{L}^{\tau})+(1-\omega_{L}^{\tau})\omega_{L}^{\tau} \alpha \sigma_{L}\beta(V_{L}^{\tau}-V_{H}^{\tau}) $$

where σ H = 1 if \(V_{H}^{\tau }>V_{L}^{\tau }\) and is zero otherwise, σ L = 1 if \(V_{L}^{\tau }\geq V_{H}^{\tau }\) and is zero otherwise, and σ H + σ L = 1. Subtracting \(\omega _{EI}^{\tau }\) and \(\omega _{L}^{\tau }\) from both sides of Eqs. 29 and 30, respectively, and rearranging, we get Eqs. 10 and 11.
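The deterministic dynamics of Eqs. 10 and 11 can be simulated directly. A minimal sketch (α, β and all other parameter values are my own illustrative assumptions; the payoff differences are those implied by the Table 1 payoffs):

```python
# Simulation of the deterministic replicator system, Eqs. 10 and 11
# (alpha, beta and all parameter values are illustrative assumptions;
#  the payoff differences follow from the Table 1 payoffs).
phi, lam, mu, q, gamma, eta, k, delta = 1.0, 0.5, 0.2, 1.0, 1.0, 0.5, 2.0, 2.5
alpha, beta = 1.0, 0.2

def step(w_ei, w_l):
    dV_user = w_l*(lam**2/2 + phi*lam + mu) - mu                        # V_EI - V_PE
    dV_designer = w_ei*(q*lam + gamma*k*(1 - eta)) + delta/2 - gamma*k  # V_L - V_H
    return (w_ei + w_ei*(1 - w_ei)*alpha*beta*dV_user,
            w_l + w_l*(1 - w_l)*alpha*beta*dV_designer)

def run(w_ei, w_l, steps=2000):
    for _ in range(steps):
        w_ei, w_l = step(w_ei, w_l)
    return round(w_ei), round(w_l)

high_start = run(0.9, 0.9)   # starts well inside the basin of E1 = {1, 1}
low_start = run(0.1, 0.1)    # starts well inside the basin of E0 = {0, 0}
print(high_start, low_start)
```

The two runs converge to the two stable equilibria of Proposition 3, illustrating the bistability of the deterministic system.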

Stochastic dynamical system

In the stochastic environment described in Section 4, the expected fraction of EI-users in period τ + 1 is given by

$$ \omega_{EI}^{\tau + 1}= \left[\omega_{EI}^{\tau}-\omega_{EI}^{\tau} (1-\omega_{EI}^{\tau})\alpha \sigma_{PE}\beta(V_{PE}^{\tau}-V_{EI}^{\tau})+(1-\omega_{EI}^{\tau})\omega_{EI}^{\tau} \alpha \sigma_{EI}\beta(V_{EI}^{\tau}-V_{PE}^{\tau})\right]\chi_{u}^{\tau}+\nu_{EI}^{\tau}(1-\chi_{u}^{\tau}) $$

where σ P E and σ E I are two binary functions such that σ P E = 1 if \(V_{PE}^{\tau }>V_{EI}^{\tau }\) and is zero otherwise, σ E I = 1 if \(V_{EI}^{\tau }\geq V_{PE}^{\tau }\) and is zero otherwise, and σ P E + σ E I = 1, and where

$$ \chi_{u}=\frac{n_{u}^{\tau}}{n_{u}^{\tau}+\varepsilon s_{u}^{\tau}}=\frac{1}{1+\varepsilon\rho_{u}} $$

is a normalizing factor that varies according to the number of new users who enter the economy. The part of Eq. 31 inside the square brackets refers to the inside population and reads as follows: the expected fraction of EI-users at τ + 1 is given by the fraction of EI-users at τ (first term), minus the fraction of EI-users who are paired with a PE-user and switch their type (second term), plus the fraction of PE-users who are paired with an EI-user and switch their type (third term). Once this updating process is completed, \(s_{u}^{\tau }\) new users enter the economy with probability ε. The fraction of EI-users at the beginning of the next period is thus given by the updated fraction of EI-users normalized by the new size of the users’ population (i.e. multiplication by \(\chi _{u}^{\tau }\)), plus the fraction of EI-users included in the set of new entrants (i.e. \(\nu _{EI}^{\tau }(1-\chi _{u}^{\tau })\)). Similarly, the expected fraction of L-designers in period τ + 1 is given by:

$$ \omega_{L}^{\tau + 1}= \left[\omega_{L}^{\tau}-\omega_{L}^{\tau} (1-\omega_{L}^{\tau})\alpha \sigma_{H}\beta(V_{H}^{\tau}-V_{L}^{\tau})+(1-\omega_{L}^{\tau})\omega_{L}^{\tau} \alpha \sigma_{L}\beta(V_{L}^{\tau}-V_{H}^{\tau})\right]\chi_{d}^{\tau}+\nu_{L}^{\tau}(1-\chi_{d}^{\tau}) $$


$$ \chi_{d}=\frac{n_{d}^{\tau}}{n_{d}^{\tau}+\varepsilon s_{d}^{\tau}}=\frac{1}{1+\varepsilon\rho_{d}} $$

where σ H = 1 if \(V_{H}^{\tau }>V_{L}^{\tau }\) and is zero otherwise, σ L = 1 if \(V_{L}^{\tau }\geq V_{H}^{\tau }\) and is zero otherwise, and σ H + σ L = 1. Subtracting \(\omega _{EI}^{\tau }\) and \(\omega _{L}^{\tau }\) from both sides of Eqs. 31 and 33, respectively, we get:

$$ {\Delta}\omega_{EI}^{\tau}= \omega_{EI}^{\tau} (1-\omega_{EI}^{\tau})\alpha\beta(V_{EI}^{\tau}(\omega_{L}^{\tau})-V_{PE}^{\tau}(\omega_{L}^{\tau}))\chi_{u}+(1-\chi_{u})(\nu_{EI}^{\tau}-\omega_{EI}^{\tau}) $$
$$ {\Delta}\omega_{L}^{\tau}= \omega_{L}^{\tau} (1-\omega_{L}^{\tau})\alpha\beta(V_{L}^{\tau}(\omega_{EI}^{\tau})-V_{H}^{\tau}(\omega_{EI}^{\tau}))\chi_{d}+(1-\chi_{d}) (\nu_{L}^{\tau}-\omega_{L}^{\tau}) $$

Equations 35 and 36 represent a system of difference equations describing how the distribution of types \(\lbrace \omega _{EI}^{\tau }, \omega _{L}^{\tau } \rbrace \) evolves over time. The main difference with the system composed of Eqs. 10 and 11 is that there are now also some stochastic components, represented by the variables χ u , χ d , \(\nu _{EI}^{\tau }\) and \(\nu _{L}^{\tau }\). The latter are the sources of exogenous variation that make a transition between the basins of attraction of the two stable equilibria E 0 and E 1 possible.
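A minimal sketch of how such a transition can occur: the system sits near E 0 until a single large entrant cohort arrives, after which the deterministic dynamics carry it to E 1. All parameter values, the cohort size ρ and the entrant composition ν are my own illustrative assumptions, not taken from the paper.

```python
# One-shock illustration of a basin transition in the system of Eqs. 35-36
# (parameter values, cohort size rho and entrant mix nu_ei, nu_l are all
#  illustrative assumptions; the shock is treated as realized, i.e. eps = 1
#  in that single period).
phi, lam, mu, q, gamma, eta, k, delta = 1.0, 0.5, 0.2, 1.0, 1.0, 0.5, 2.0, 2.5
alpha, beta = 1.0, 0.2

def step(w_ei, w_l):
    dV_user = w_l*(lam**2/2 + phi*lam + mu) - mu                        # V_EI - V_PE
    dV_designer = w_ei*(q*lam + gamma*k*(1 - eta)) + delta/2 - gamma*k  # V_L - V_H
    return (w_ei + w_ei*(1 - w_ei)*alpha*beta*dV_user,
            w_l + w_l*(1 - w_l)*alpha*beta*dV_designer)

w = (0.02, 0.02)                      # start near E0 = {0, 0}
for _ in range(500):
    w = step(*w)
near_e0 = (round(w[0]), round(w[1]))  # still (0, 0): E0 is locally stable

rho, nu_ei, nu_l = 2.0, 1.0, 1.0      # one large cohort of EI-users and L-designers
chi = 1/(1 + rho)                     # normalizing factor, analogous to Eqs. 32 and 34
w = (w[0]*chi + nu_ei*(1 - chi), w[1]*chi + nu_l*(1 - chi))
for _ in range(3000):
    w = step(*w)
after_shock = (round(w[0]), round(w[1]))
print(near_e0, after_shock)
```

With a cohort this large the post-shock state lands in the basin of E 1, so the subsequent dynamics converge there: the entrant composition, not the within-population updating, is what moves the economy between equilibria.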


Cite this article

Landini, F. The evolution of control in the digital economy. J Evol Econ 26, 407–441 (2016).

Keywords

  • Internet control
  • Internet regulation
  • Motivation
  • On-line law enforcement
  • Technology
  • Endogenous preferences
  • Evolutionary games

JEL Classification

  • C73
  • D02
  • K00
  • L23