Abstract
The importance of network externalities for the development of technology and industry structure has long been recognized in evolutionary economics. However, network externalities are not isolated phenomena. They arise from competing standards in a comprehensive network of technology lines that build on one another and remain to various degrees interoperable or compatible. As some evidence from the ICT sector in particular shows, compatibility and the tying or bundling of standards may be employed as strategic tools. The present paper investigates the economic role of tied standards for the dynamics of competition between standards. A replicator model operating at an aggregated level is complemented by an agent-based simulation with an explicit representation of the network structure among users. A variety of effects are studied, including the role of initial usage share, manipulation of compatibility, expansion of vendors into other segments, as well as the network structure and central or peripheral positioning of agents. The agent-based model contrasts a complete network and a regular ring network with asymmetric network structures derived from Barabási and Albert’s preferential attachment mechanism and triadic closure.
1 Introduction
When users, both corporate and private, consider employing a new technology, say Voice-over-IP telephony (VoIP), their choice between different available standards or products implementing this technology may be severely limited. While many standards^{Footnote 1} may be available in theory, practical usability depends on which standard predominates in the users’ direct environment, which ones are used by their business partners, etc. This effect, network externalities, has been investigated extensively (David 1985; Katz and Shapiro 1985; Arthur et al. 1987). However, there is a second constraint, introduced by compatibility with other standards already employed by the respective users; they may, for instance, need to consider whether the desired VoIP software works well with the operating system, office software, computer and network hardware in use, etc. Network externalities will then develop not only within but also across segments.
Vendors of standards have been known to use this to their advantage: Microsoft’s breakthrough famously came with agreements to couple its software with IBM hardware. Another well-known case linked to the company is the bundling of its operating system Windows with its web browser Internet Explorer. Today, many large companies in the ICT (information and communication technology) sector maintain extensive portfolios of partly bundled or integrated products. While the phenomenon is by no means limited to the ICT sector, the strength of network externalities in ICT makes examples in this sector both more numerous and more obvious.^{Footnote 2}
Obvious strategies to gain an advantage for a competing standard in such a setting include increasing compatibility with major competitors in other segments,^{Footnote 3} introducing spin-off standards in other segments, and reducing compatibility with weaker competitors’ products in order to drive them out of business.
Can strategic exploitation of network effects of this type be demonstrated in a simple evolutionary model?^{Footnote 4} Can it be demonstrated for (1) the initial usage share in either segment (putting incumbents at an advantage), (2) the compatibility between standards across segments, and (3) the positioning of initial adopters? Is there a point beyond which a reduction of compatibility is desirable for a competitor? Is it wise to expand into another segment to control the standard in that segment directly, given the capacity to do so?
The present contribution offers a replicator dynamic model of standard competition with cross-segment ties. A standard replicator equation is used in which the compatibility with, and the usage shares of, standards in one or several other segments take the role of the evolutionary fitness. The replicator model yields a first-order dynamic system that makes it possible to investigate the impact of initial conditions and of compatibility terms. These are what governs the strategic actions of major competitors in sectors with densely interconnected standards such as information and communication technology.
Direct interaction between agents on the micro level sometimes leads to the emergence of non-trivial macro-level dynamics. Therefore, it is necessary to show that benchmark models operating at an aggregated level will still work if a micro layer of massive numbers of interacting agents is included. For this, an agent-based version of the model is added. The deterministic dynamics resulting from the (aggregated-level) replicator model are replaced by transition probabilities between user groups of standards.^{Footnote 5} For the trivial network structure of a complete graph, this results in a stochastic dynamic system that is equivalent to the macro-level model in its behavior, while for other network structures, the probabilities change locally, i.e. between agents, depending on their neighborhood.
Section 2 gives a brief literature overview before the model is discussed in Sect. 3. Section 4 discusses simulations and results. These analyses are contrasted with some evidence of strategic use of standard tying in the ICT sector in Sect. 5; Sect. 6 concludes.
2 Literature Review
While there were some earlier considerations of increasing returns in economic theory (see, e.g. Sraffa 1926; Young 1928), the specific role of increasing returns to the number of consumers, and thus the phenomenon of network externalities, was only analyzed in detail starting in the 1980s.^{Footnote 6}
There are two major schools of thought on modeling network externalities: One relies on game theory and the analysis of equilibria for rational agents in a game-theoretic setting. This approach was pioneered by Katz and Shapiro (1985, 1986). It received much attention and drew a large number of contributions in the subsequent years; Farrell and Klemperer (2007) provide an overview. The other line of research emphasized path dependence, self-reinforcing feedbacks, and dynamics. Nelson’s (1968) and Fisher and Pry’s (1971) models of technological change may, without directly focusing on network externalities, have been the first predecessors of this class of models. This line of literature fully developed in the 1980s with David’s (1985, 1992) historical analyses and Arthur et al.’s (1983[1982], 1987, 1988, 1989) urn scheme (Eggenberger–Pólya process) models. The consensus among the two traditions holds that network effects tend to lead to a lock-in with only one alternative as the uncontested standard, which may bring certain disadvantages in the form of technological alternatives that are no longer viable because the user base concentrates on another, potentially inferior technology.^{Footnote 7}
As the body of literature and evidence grew, scholars turned to further details, including the tying of standards across sectors, or rather across sub-sectors/segments within a larger sector. As an example, this may be thought of as an operating system, an office software package, a web browser, a database system, and numerous other categories of software and hardware products that require a certain compatibility with one another in order to work properly.^{Footnote 8} Many of the larger vendors are active in several or almost all of these sub-sectors. It is obvious that network externalities may also unfold indirectly^{Footnote 9} across sectors and that this opens a large variety of strategic options to any commercial vendor. The idea of tied standards was initially proposed by Choi (2004) and analyzed in a game-theoretic framework. Later, an evolutionary replicator model was put forward (Heinrich 2014); this model will be used and extended for the analysis in the present article.
With recent advances in network theory (Watts and Strogatz 1998; Barabási and Albert 1999; Vázquez 2003) and their application to the field of technology diffusion and network externality models, it became clear that the result of swift and complete lock-in and monopolization relied crucially on the implicitly assumed network structure of a complete network. The properties of other network structures in this respect, including lattice networks, Watts–Strogatz random graphs (i.e., small-world networks), and scale-free networks (both Barabási–Albert preferential attachment networks and Vázquez’s connect nearest neighbor (CNN) networks), were investigated subsequently (Delre et al. 2007; Lee et al. 2006; Frenken et al. 2008; Uchida and Shirayama 2008; Pegoretti et al. 2009). It was found that networks with large diameter effectively inhibited complete lock-in (Uchida and Shirayama 2008). Small-world networks, on the other hand,^{Footnote 10} reproduce the findings of the complete network (Delre et al. 2007; Lee et al. 2006; Pegoretti et al. 2009). Some of the findings by Uchida and Shirayama (2008) also hint that clustering (CNN instead of BA scale-free networks) may reduce the probability of a lock-in as well.
However, comprehensive studies of the mechanisms and the market-strategic and economic consequences of complex (tied multi-sector or multi-segment) network externality systems have yet to be conducted. One difficulty is the dimensionality of the resulting problems, with many free variables whose effects would need to be analyzed systematically. This, together with the need to take the network structure into account, suggests simulation as the best option for this analysis, which will form the centerpiece of the present article. Simulation makes it possible to investigate not only the effect of neighborhood structures but also the possible role and plausibility of strategic use of tying and network externalities.
3 A Model of Network Externalities and Tied Standards
3.1 Replicator Dynamic Model with Implicit Network Structure
The present study will be based on a replicator model that largely follows the model proposed in Heinrich (2014). The numerical study below requires assuming specific parameter sets. Different values for several of the more interesting parameters (with otherwise plausible parameter settings) will be considered in order to study the sensitivity of the system with respect to those parameters.
The starting point of the model is a replicator equation^{Footnote 11}

$$p_{i,j,t+1} = p_{i,j,t}\,\frac{f_{i,j,t}}{\phi _{j,t}}$$

where \(p_{i,j,t}\) are the usage shares of standard i in segment^{Footnote 12} j at time t, \(f_{i,j,t}\) is the evolutionary fitness term of this standard, and \(\phi _{j,t}\) is the average evolutionary fitness in segment j at time t,

$$\phi _{j',t} = f_{j',t}^{T}\, p_{j',t}$$
where \(f_{j',t}=\left( \begin{array}{c}f_{1,j',t}\\ f_{2,j',t}\\ ...\end{array}\right) \) and \(p_{j',t}=\left( \begin{array}{c}p_{1,j',t}\\ p_{2,j',t}\\ ...\end{array}\right) \) are vectors with components for each standard of the segment \(j'\) and \(^T\) denotes the transpose. The fitness term must include measures of the size of the standard’s network or usage share and, if tying between standards across segments is to be taken into account, also such measures for compatible standards in other segments. Compatibility denotes the interoperability between two standards. Can files created with program 1 be opened with program 2? Is program 1 available for operating system 3 on mobile device 4? Is it usable in the same way as former industry standard program 5? In reality, there are many cases of limited interoperability; it therefore seems fitting to denote the interoperability between any two standards \(i'\) and i as a real-valued number between 0 and 1, \(a_{i'i} \in [0,1]\) (if in the same segment) or \(c_{i'i} \in [0,1]\) (if in different segments).
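Numerically, one replicator step per segment amounts to a single rescaling of the share vector. The following Python sketch is a minimal illustration of the update (not the authors' implementation); the example fitness values are arbitrary:

```python
import numpy as np

def replicator_step(p, f):
    """One discrete replicator step for one segment: p' = p * f / phi,
    where phi = f . p is the average fitness in the segment."""
    phi = f @ p
    return p * f / phi

# Example: two standards whose fitness equals their own usage share
# (pure within-segment network externality).
p = np.array([0.6, 0.4])
f = p.copy()
p_next = replicator_step(p, f)   # the larger standard's share grows further
```

Since the update divides by the average fitness, the shares remain normalized to one after every step.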
Consider the vector of the fitnesses of standards in segment \(j'\) at time t, \(f_{j',t}\):

$$f_{j',t} = w_{j',j'}\, A\, p_{j',t} + \sum _{j \ne j'} w_{j',j}\, C_{j',j}\, p_{j,t}$$
where \(p_{j,t}\) is the vector of the corresponding population shares, \(w_{j',j}\) are parameters indicating the weight^{Footnote 13} of the compatibility with standards in segment j on the fitness terms in segment \(j'\), and A and C are matrices of the compatibilities of standards in two segments (\(C_{j',j}\) between segments j and \(j'\)) or between standards in the same segment (A). Let the elements of those matrices be denoted \(a_{i'i}\) and \(c_{i'i}\) respectively; they hold values between 0 and 1 which indicate to what degree standard i is compatible with standard \(i'\) from the point of view of a user of standard \(i'\). Note that for most technologies, this compatibility structure would be assumed to be symmetric,^{Footnote 14} but there may be exceptions.^{Footnote 15}
Consider for illustrative purposes a system with one segment and two standards. The above function would become

$$f_{1,t} = w_{1,1}\, A\, p_{1,t} = w_{1,1} \left( \begin{array}{c c}a_{11} &{} a_{12}\\ a_{21} &{} a_{22}\end{array}\right) \left( \begin{array}{c}p_{1,1,t}\\ p_{2,1,t}\end{array}\right) $$
and would not need to contain any cross-segmental compatibility matrices C. In the case of a two-segment system with two standards in each segment, the fitness function for standards in segment 1 would read

$$f_{1,t} = w_{1,1}\, A\, p_{1,t} + w_{1,2}\, C_{1,2}\, p_{2,t}$$
Note that this is still a very general model except for two aspects: First, the fitness terms resulting from compatibilities across different segments are additive in this model. They could also^{Footnote 16} be connected multiplicatively, but this would lead to a very strong effect of single segments in a very large model,^{Footnote 17} while an additive model allows the effects to be tuned with the parameters w. Second, no stand-alone fitness term (one without relation to network externalities) is included. Such a term would only add additional variables without contributing substantially to the purpose of the model: to investigate and illustrate the effect of compatibility on the development of usage shares and market power.
For this model, the direct effects of different variables can now be studied; of particular interest is whether the compatibility terms have a positive or a negative effect.^{Footnote 18} This is derived in “Appendix 1”. For the terms of matrix A, this yields \(\frac{\partial p_{1,1,t+1}}{\partial a_{11,1}}\ge 0\), \(\frac{\partial p_{1,1,t+1}}{\partial a_{12,1}}\ge 0\), \(\frac{\partial p_{1,1,t+1}}{\partial a_{21,1}}\le 0\), \(\frac{\partial p_{1,1,t+1}}{\partial a_{22,1}}\le 0\). If A is symmetric, hence \(\alpha =a_{12,1}=a_{21,1}\), and \(0<p_{1,1,t}<0.5\), we further derive that \(\frac{\partial p_{1,1,t+1}}{\partial \alpha } > 0\). For the direct influence of the terms of the inter-segmental compatibility matrix C, it is obtained that \(\frac{\partial p_{1,1,t+1}}{\partial c_{11,1,2}}\ge 0\), \(\frac{\partial p_{1,1,t+1}}{\partial c_{12,1,2}} \ge 0\), \(\frac{\partial p_{1,1,t+1}}{\partial c_{21,1,2}}\le 0\), \(\frac{\partial p_{1,1,t+1}}{\partial c_{22,1,2}} \le 0.\) However, in the inter-segmental case, there may be irreducible indirect effects that work by first influencing the usage shares in the other segment and only then affecting the segment in question. This will be discussed in more detail in Sect. 4.2.
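These signs can be spot-checked numerically by finite differences. The Python sketch below (single segment, two standards, illustrative parameter values) perturbs one element of A and compares the resulting next-period share of standard 1:

```python
import numpy as np

def p_next(p1, A):
    """Next-period share of standard 1 under the single-segment replicator update."""
    p = np.array([p1, 1.0 - p1])
    f = A @ p
    return (p * f / (f @ p))[0]

def dp_da(i, k, p1=0.3, eps=1e-6):
    """Finite-difference estimate of the derivative of p_{1,t+1} w.r.t. a_{ik}."""
    A = np.array([[1.0, 0.9], [0.9, 1.0]])
    Ah = A.copy()
    Ah[i, k] += eps
    return (p_next(p1, Ah) - p_next(p1, A)) / eps

# raising a11 or a12 helps standard 1; raising a21 or a22 hurts it
```

The finite-difference signs agree with the analytical derivatives for any interior share of standard 1.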
The proper way to analyze this is to compute the attractors of this dynamical system and assess their stability. For a single segment with two standards, symmetric A, and \(\alpha <1\), this unsurprisingly yields the result that there are two stable equilibria (the monopolization of the segment by either of the two standards) and one unstable tipping equilibrium (for a detailed derivation, see “Appendix 2”).
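The two stable monopoly equilibria and the unstable tipping point can be illustrated by iterating the replicator update from different starting shares; a short Python sketch (symmetric A with \(\alpha = 0.9\), an illustrative choice):

```python
import numpy as np

A = np.array([[1.0, 0.9], [0.9, 1.0]])   # symmetric, alpha = 0.9 < 1

def iterate(p1_0, steps=2000):
    """Iterate the single-segment replicator dynamic; return the final share of standard 1."""
    p = np.array([p1_0, 1.0 - p1_0])
    for _ in range(steps):
        f = A @ p
        p = p * f / (f @ p)
    return p[0]

# Starting below the tipping point (0.5) drives standard 1 to extinction,
# starting above it leads to monopolization; at exactly 0.5 the system stays put.
```

The tipping equilibrium at equal shares is unstable: any perturbation away from it is amplified in subsequent steps.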
This is in agreement with the present analysis and the previous literature: network externalities must, if present, have a very strong effect towards asymmetric market power and usage shares and ultimately towards monopolization.
3.2 AgentBased Simulation Model with Explicit Network Structure
Macro-level models like the replicator dynamics above, and like many of the network externality models in the literature, generally assume a complete network between the agents and often (sometimes depending on the interpretation) homogeneous agents as well. This does not accurately represent what is observed in the real world. The purpose of those models, creating simplified mathematical representations of the real world in order to identify general characteristics, should therefore be complemented by analyses that drop these simplifications and allow for heterogeneity and, perhaps more importantly, for a greater variety of network structures. It must be shown that the general characteristics identified in macro-level models continue to hold there. Further, the effect of network structures can be investigated, as can mechanisms that rely on their characteristics (the friendship paradox is addressed as an example in a simulation in Sect. 4.3).
The proper tool for such an analysis is agent-based modeling and simulation (Pyka and Fagiolo 2005; Elsner et al. 2015; Gräbner 2015). Specifically, the aggregated-level development equations from the above replicator model are dropped; agents are modelled explicitly and are periodically allowed to reconsider their adoption decision. Adoption decisions may be assumed to be costly (perhaps requiring new equipment) and are therefore not taken lightly or reconsidered frequently. In making adoption decisions, agents do take network externalities into account, but only those that arise from their direct neighbors. That is, connections indicate nothing more and nothing less than the potential need to interact by making use of the standard in question, such that the choice of the connected neighbor causes an external effect on the agent.^{Footnote 19} This, in turn, prompts the agent to take her information about previous adoption decisions by neighbors into account in her own adoption decision. It is reasonable to assume that agents are perfectly informed about their neighbors’ adoption decisions, since the network externality gives an incentive to coordinate and neighbors would therefore have an incentive to announce their adoption decisions both immediately and truthfully. In order to keep the model close and comparable to the aggregated-level replicator model above, the future population shares \(p_{i,j,t+1}\) in the replicator model are used as probabilities for the agent to adopt the respective technologies, hence
$$p_{i,j,t+1}^{L} = p_{i,j,t}^{L}\,\frac{f_{i,j,t}^{L}}{\phi _{j,t}^{L}}$$

with

$$f_{j',t}^{L} = w_{j',j'}\, A\, p_{j',t}^{L} + \sum _{j \ne j'} w_{j',j}\, C_{j',j}\, p_{j,t}^{L}$$
where the variables with superscript L indicate quantities in the immediate neighborhood of the respective agent that are not necessarily constant across the network. Furthermore, the usage shares \(p_{j',t}^L\) are absolute, not relative, usage shares; that is, non-adopters count as a separate share. This means that agents who encounter no adopters in their neighborhood will not adopt any technology (since all adoption probabilities are then multiplied by 0), while agents with a small share of adopters in their vicinity will have only a small (but positive) probability of joining a standard’s usage network.
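A sketch of how such local adoption probabilities could be computed for a single agent is given below (Python). The parameter values match the defaults used later in Sect. 4; the normalization by the maximum attainable fitness \(w_{own} + w_{cross}\), which maps the replicator weights into [0, 1], is our illustrative assumption rather than the paper's exact choice:

```python
import numpy as np

A = np.array([[1.0, 0.9], [0.9, 1.0]])   # intra-segment compatibilities
C = np.array([[0.1, 0.0], [0.0, 0.1]])   # cross-segment compatibilities
W_OWN, W_CROSS = 1.0, 2.0

def local_adoption_probs(neighbor_choices, p_other_local):
    """Adoption probabilities for one agent, given its neighbors' choices.

    neighbor_choices: one entry per neighbor, the index of the adopted
    standard or None for non-adopters. Shares are absolute, so
    non-adopters dilute the network externality."""
    n = len(neighbor_choices)
    p_local = np.zeros(2)
    for c in neighbor_choices:
        if c is not None:
            p_local[c] += 1.0 / n
    f = W_OWN * (A @ p_local) + W_CROSS * (C @ p_other_local)
    probs = p_local * f / (W_OWN + W_CROSS)   # replicator weights scaled into [0, 1]
    return probs                              # residual 1 - probs.sum(): remain unadopted

# three neighbors on standard 0, one non-adopter; local cross-segment shares 50/50
probs = local_adoption_probs([0, 0, 0, None], np.array([0.5, 0.5]))
```

With no adopters in the neighborhood, the local shares are zero and so are all adoption probabilities, reproducing the behavior described above.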
Five network structures will be studied:

1. Complete network: All agents are direct neighbors of all other agents. This should correspond most directly to the aggregated-level replicator model above (with only minor changes, such as a stochastic term for the agent’s technology adoption decision instead of deterministic development equations for the shares). It is meant as a mere benchmark case.

2. Regular 1d grid: Agents are arranged in a circle and directly connected to n neighbors (out of a total of N agents) on both sides. For the following simulations, the parameter setting \(n=30\), \(N=1000\) is used. Grid networks are known to have constant betweenness centrality, high clustering, and a large diameter relative to the number of vertices. As discussed in the literature review above, they tend to cancel out monopolization effects in network externality models of technology diffusion.

3. Barabási–Albert preferential attachment networks: Starting with one agent, new agents are added and connected to k nodes with a probability proportional to their current degree (number of direct neighbors). This produces a heavily asymmetric degree distribution which is, in fact, scale-free; such networks are known to also have a small diameter. The parameter settings used below are \(k=2\), \(N=1000\).

4. Barabási–Albert preferential attachment networks with triadic closure: Since real-world networks tend to be highly clustered, which is not the case for Barabási–Albert networks, clustering is increased here by triadic closure: m open triads (pairs of unconnected nodes that have a common neighbor) are randomly selected and closed. The parameter setting used in the simulations below is \(k=1\), \(N=1000\), \(m=1000\), which gives the network as many edges as network (3) but a larger diameter. Note that triadic closure is similar to Vázquez’s (2003) connect nearest neighbor (CNN) network generating mechanism: as nodes of higher degree are more likely to be selected, this should increase the asymmetry of the degree distribution (and indeed combine two power-law-generating mechanisms).

5. Barabási–Albert preferential attachment networks with triadic closure, like network (4), but with parameters \(k=2\), \(N=1000\), \(m=1000\), which gives the network a diameter similar to that of network (3) but higher density.
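The preferential-attachment-plus-triadic-closure construction (networks 4 and 5) can be sketched in plain Python as follows. This is an illustrative reimplementation under our reading of the mechanism, not the authors' code, and it uses smaller parameter values for brevity:

```python
import random
from collections import defaultdict

def ba_triadic(N=200, k=2, m=100, seed=7):
    """Barabási–Albert preferential attachment followed by m triadic closures."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    repeated = []                    # attachment lottery: one entry per unit of degree
    targets = set(range(k))          # the first new node attaches to all k seed nodes
    for v in range(k, N):
        for t in targets:
            adj[v].add(t); adj[t].add(v)
            repeated += [v, t]
        targets = set()
        while len(targets) < k:      # degree-proportional sampling without replacement
            targets.add(rng.choice(repeated))
    closed = 0
    while closed < m:                # close m open triads
        v = rng.choice(range(N))
        nbrs = list(adj[v])
        if len(nbrs) < 2:
            continue
        a, b = rng.sample(nbrs, 2)
        if b not in adj[a]:
            adj[a].add(b); adj[b].add(a)
            closed += 1
    return adj

adj = ba_triadic()
```

Each new node contributes exactly k edges and each closure exactly one, so the final edge count is \(k(N-k)+m\); every closure also creates at least one triangle, which is what raises the clustering relative to plain preferential attachment.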
These five structures cover both basic benchmark cases for comparison with the simple aggregated-level replicator model (the complete network and, to a lesser extent, the grid network) and network structures that include many features also observed in real-life networks, including clustering (grid network, preferential attachment with triadic closure), small diameter (the small-world property), and scale-free degree distributions (preferential attachment networks). The literature discussed in Sect. 2 offers some guidance on what to expect in models with these network structures: clustering should tend to reduce network externalities and the subsequent monopolization effects, while high density and scale-free degree distributions may counteract this reduction.
4 Simulation Analysis
Simulation offers a convenient and reliable method to study the behavior of complex systems, at least in parts of their potentially vast possibility space. With the limits of analytical tractability of the general model exhausted in the face of a large number of free parameters, this section first turns to Monte Carlo simulation to study the development of some representatives of the general class of models before proceeding to agent-based simulation in order to analyze the effects of the network structure and to verify that the general characteristics derived for the aggregated-level model continue to hold with an agent-based micro level.
4.1 Experimental Design
Most of the following simulation studies consider two-segment models with two standards in each segment (hence quadratic \(2 \times 2\) matrices A and C for both segments). It is further assumed that each standard is “tied” to one standard in the other segment, hence having a higher compatibility with it than with the other one; for convenience, the first standards in both segments and the second standards in both segments are considered “tied”.^{Footnote 20} Matrices \(C_{1,2}\) and \(C_{2,1}\) are assumed to be transposes of each other, i.e. inter-segmental compatibility is symmetric.^{Footnote 21}
The specific effects that are to be studied with either aggregated level Monte Carlo simulation (MC) or agentbased simulation (ABM) are listed in Table 1 with the resulting developments shown in more detail in the respective figures as indicated in the table. Some aspects are discussed in more detail in the next sections.
The simulation study starts by investigating effects 1 through 5 in one-, two-, and three-segment replicator models. These are fully deterministic, hence a single-run Monte Carlo simulation suffices. For this part of the study, the variable of interest as indicated in the table is varied while the other parameters are kept constant.
For comparison with the two-segment models below, a one-segment Monte Carlo simulation^{Footnote 22} with variations in the initial usage share (left panel) and intra-segmental compatibility (right panel) is shown in Fig. 1. As predicted analytically, it is shown that a high one-way compatibility (\(a_{12}\) but not \(a_{21}\)) can help a standard recover from an inferior position with a low usage share, but only if \(a_{12}\) exceeds a certain threshold.^{Footnote 23} From the theoretical analysis in Sect. 3.1 and Appendices “1” and “2”, it becomes clear that this must be the case for all models of this type. For the multi-segment models, however, this is less easy to assess.
If not indicated otherwise, the parameters for the following two- and three-segment Monte Carlo simulations are set as follows: \(A=\left( \begin{array}{c c}1 &{} 0.9\\ 0.9 &{} 1\end{array}\right) \), \(C=\left( \begin{array}{c c}0.1 &{} 0\\ 0 &{} 0.1\end{array}\right) \), \(w_{1,1}=w_{2,2}=1\), \(w_{1,2}=w_{2,1}=2\); the settings for the initial usage shares vary according to the scenario needed for the study of the respective effect.
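With these defaults, a full two-segment run takes only a few lines. The Python sketch below uses the parameter values just listed (the initial shares are illustrative, not a specific scenario from the figures) and shows the joint takeover by one pair of tied standards:

```python
import numpy as np

A = np.array([[1.0, 0.9], [0.9, 1.0]])
C = np.array([[0.1, 0.0], [0.0, 0.1]])   # C_{1,2}; C_{2,1} is its transpose
w_own, w_cross = 1.0, 2.0                 # w_{1,1} = w_{2,2} = 1, w_{1,2} = w_{2,1} = 2

def step(p1, p2):
    """One replicator step for both segments of the tied two-segment model."""
    f1 = w_own * (A @ p1) + w_cross * (C @ p2)
    f2 = w_own * (A @ p2) + w_cross * (C.T @ p1)
    return p1 * f1 / (f1 @ p1), p2 * f2 / (f2 @ p2)

p1 = np.array([0.55, 0.45])   # slight lead for the first tied pair in segment 1
p2 = np.array([0.50, 0.50])   # segment 2 initially split evenly
for _ in range(5000):
    p1, p2 = step(p1, p2)
# the leading tied pair ends up dominating both segments
```

Even though segment 2 starts out evenly split, the cross-segment tie transmits the lead from segment 1 and tips both segments toward the same pair.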
The agent-based simulation follows the same principle (just one effect or variable is varied ceteris paribus) but with 100 runs per effect and setting, with all studies repeated for all five network types under investigation. The illustrations in Figs. 8 through 13 show the averages and the 90% intervals. The central purpose of the agent-based simulation is to confirm that the findings of the aggregated-level models persist in the agent-based version. Further, the effects of the network structure etc. are to be assessed.
4.2 Monte Carlo Simulation of the Replicator Dynamic Model
4.2.1 Initial Usage Shares (Effects 1, 2)
A higher initial usage share of a standard i has a direct positive effect on future usage shares of the standard itself, i.e. the direct network externality effect (Fig. 1). It will also have a positive effect on the future usage shares of any standards that have a high two-way compatibility with standard i, be it in the same or in other segments. The direct effect of a higher compatibility of a standard i with any other standards will also be beneficial for the future usage shares of i (Fig. 2). There can, however, be indirect effects similar to the ones discussed in relation to cross-segmental compatibility below.
4.2.2 InterSegmental Compatibility (Effects 3, 4)
As seen already in the one-segment case in Fig. 1, compatibility also has a direct positive effect on the future usage shares of the involved standards, though it may be desirable to have low compatibility with weaker competitors in order to drive them out of business. This is also true for cross-segmental compatibility, as shown in Figs. 3 and 4. Figure 3 demonstrates that sufficiently high compatibility even between minority standards will help to expand usage shares and eventually establish a position of dominance in both segments.^{Footnote 24} Note that the same effects can be shown for cases with higher numbers of segments.^{Footnote 25}
4.2.3 When Is it Time to Reduce Compatibility? (Effects 3, 4)
Let there be two pairs of tied standards^{Footnote 26} across two segments; one pair has low usage shares, the other one is dominant. A standard i in the pair with comparatively low usage shares may try to improve its position by establishing compatibility with the other standard in the other segment (i.e. increasing \(c_{12}\)). This standard in the other segment is then temporarily highly compatible with both standard i and its competitor in the same segment. If standard i, as would be expected, increases its usage share, is there a threshold beyond which it is better to end this temporary engagement, reduce \(c_{12}\) again, and return to the initial two-pair situation? Figure 5 hence shows a setting with a compatibility matrix \(C=\left( \begin{array}{c c}0.103 &{} 0.05\\ 0 &{} 0.1\end{array}\right) \) and assumes that a standard’s vendor is theoretically capable of reducing compatibility terms by inserting artificial obstacles preventing interaction between the standards. In these simulation runs, \(c_{12}\) is reduced to 0 if the usage share of standard 1 in segment 1, \(p_{1,1,t}\), reaches a threshold level of \(th\times \max (p_{2,t})\).^{Footnote 27} The result is a direct effect that decreases the standard’s usage share growth (upper panel), which is, however, offset by an indirect effect after some time. The indirect effect works through the shifts in the other segment (lower panel).
The answer to the question whether compatibility should be reduced consequently depends on the time frame across which the standard’s vendor attempts to maximize usage shares. In the immediate future, the direct effect dominates (thus, compatibility should be decreased); after some time, this may be offset by the indirect effect (thus, compatibility should be left as high as it is if this longer time frame is the relevant one). Given that competition between standards in reality is much less stylized, with frequent new developments, innovations, etc., many vendors may prefer to choose shorter time frames as the basis for their decisions.
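The triggered compatibility cut can be wired into the replicator iteration as follows (Python sketch). The compatibility matrix is the one given above for the Fig. 5 setting, but the weights, initial shares, and thresholds used here are illustrative assumptions, so the trajectories are qualitative only:

```python
import numpy as np

A = np.array([[1.0, 0.9], [0.9, 1.0]])
C0 = np.array([[0.103, 0.05], [0.0, 0.1]])   # c12 = 0.05: temporary compatibility link

def run(th, steps=500):
    """Iterate two tied segments; cut c12 to 0 once p_{1,1} >= th * max(p_2)."""
    C = C0.copy()
    p1 = np.array([0.45, 0.55])   # illustrative initial shares
    p2 = np.array([0.45, 0.55])
    path = []
    for _ in range(steps):
        if C[0, 1] > 0.0 and p1[0] >= th * p2.max():
            C[0, 1] = 0.0          # the vendor inserts artificial incompatibility
        f1 = A @ p1 + 2.0 * (C @ p2)
        f2 = A @ p2 + 2.0 * (C.T @ p1)
        p1 = p1 * f1 / (f1 @ p1)
        p2 = p2 * f2 / (f2 @ p2)
        path.append(p1[0])
    return np.array(path)

never_cut = run(th=np.inf)   # keep the link for the whole run
early_cut = run(th=0.5)      # cut as soon as the threshold is crossed
```

Comparing the two trajectories makes the direct effect visible immediately after the cut: the share of standard 1 in segment 1 grows more slowly (or shrinks) relative to the uncut run, while any offsetting indirect effect can only build up later through the other segment.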
4.2.4 Expansion into the Other Segment (Effect 5)
The commercial vendor of a standard in segment 1, standard \(i=1\), dissatisfied with her standard’s compatibility with standards in segment 2, may consider expanding into this segment. She would thereby create a standard with a higher compatibility with her standard in segment 1. Starting with the basic setup as introduced above, a new standard in segment 2 is created with an initial usage share of \(p_{3,2,t}=0.01\), and the compatibility matrices are changed to accommodate this additional standard.^{Footnote 28} Both the effect of the usage share of standard i in segment 1 (with \(c_{13}=0.2\)) and that of the compatibility term with the new standard, \(c_{13}\) (with \(p_{1,1,0}=0.45\)), are studied (Fig. 6). In both experiments, the remainder of the second segment is initially shared equally between the other two standards in this segment. It is shown that the newly introduced standard is quickly able to corner the second segment if the usage share of standard i in the first segment was large enough before expanding into the second segment. This further improves the position of standard i in the first segment.
4.3 AgentBased Simulation
4.3.1 Network Structure and Initial Usage Share (Effect 6)
The agent-based simulation is conducted in two steps: First, it is to be shown that the characteristic effects found for the macro-level replicator model, either in the analytical setup or in the Monte Carlo simulations above, are still present in the agent-based version (otherwise they might be accidental results of the macro-level model). Second, the impact of the network structure and that of the initial share of total adopters (\(\sum _i p_{i,j,t}\)) can be investigated.
Where not indicated otherwise, the agent-based simulations use the same parameter setup as the Monte Carlo simulations, with 75% early adopters in each segment (25% of the agents as non-adopters).^{Footnote 29} A single time step corresponds to one agent reevaluating her adoption decisions, i.e. with \(N=1000\) agents and \(t_{max}=10000\) time steps, the average agent will reevaluate her decision 10 times. As seen below, this leads to a rather slow development compared to what is to be expected in the real world and compared to the replicator dynamic above.
Figure 10 shows the results of 9 example runs in a complete network for 9 different initial relative shares in the second segment, while the first segment is divided \(p_{1,0}=\left( \begin{array}{c}0.6\\ 0.4\end{array}\right) \) for all 9 runs. The simulations show that the monopolization towards the pair of tied standards with the larger overall usage share, as predicted above, does occur and that it occurs in both segments. This was to be expected since the complete network was used in this case. Results for all network structures under consideration here (for 100 runs per setting) are shown in a more compressed form (not the entire development, just the final usage shares) in Fig. 8. As would be expected, the resulting curve is S-shaped, with very asymmetric starting distributions converging to monopolization much more quickly; the effect can be reproduced for all network structures.
4.3.2 Network Structure and Intersegmental Compatibility (Effect 7)
To a lesser degree, this also holds for the effects of intersegmental compatibility as analyzed above and as shown in the results of the agent-based studies in Figs. 9 and 11. Considerable variation remains for some parameter values and some network structures; for the \(k=1\) preferential attachment network with triadic closure (diagram D in Figs. 9, 11), the effects are much less pronounced than for the other network types, but they certainly remain detectable. The effects are strongest in the case of the complete network (diagrams A) and the regular grid (diagrams B). Interestingly, the same can be said for values between \(p_1=0.2\) and \(p_1=0.8\) in the effect of the initial usage share in Fig. 8. Here too, the preferential attachment networks (and to a lesser degree the grid network in diagram B) lead to a slightly less pronounced effect. The effect is clearly detectable but with substantial variation; in the center, the clear s-shape of the curve does not appear to emerge in these cases. Asymmetric network structures (C through E) allow for more isolated (in cases D and E even clustered) subcommunities and consequently tend to preserve initial usage shares against any outside effects homogenizing the network.
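The asymmetric network structures contrasted here derive from Barabási and Albert's (1999) preferential attachment, optionally combined with triadic closure (Vázquez 2003). The following generator is a minimal illustrative sketch; the function name, the initial-clique construction, and the closure probability are my assumptions, not necessarily the paper's exact procedure.

```python
import random

random.seed(1)

def pa_network(n, k, p_triadic=0.0):
    """Grow a network by preferential attachment: each new node attaches
    k edges to existing nodes drawn proportionally to degree. With
    probability p_triadic, one extra edge closes a triangle by linking
    the new node to a random neighbor of its first target."""
    adj = {i: set() for i in range(n)}
    targets = list(range(k + 1))              # small initial clique
    for i in targets:
        adj[i] = set(targets) - {i}
    # List with each node repeated once per unit of degree; uniform draws
    # from it implement degree-proportional (preferential) attachment.
    degree_list = [i for i in targets for _ in range(k)]
    for new in range(k + 1, n):
        chosen = set()
        while len(chosen) < k:
            chosen.add(random.choice(degree_list))
        for tgt in chosen:
            adj[new].add(tgt)
            adj[tgt].add(new)
            degree_list += [new, tgt]
        if random.random() < p_triadic:       # triadic closure step
            first = next(iter(chosen))
            candidates = list(adj[first] - adj[new] - {new})
            if candidates:
                tri = random.choice(candidates)
                adj[new].add(tri)
                adj[tri].add(new)
                degree_list += [new, tri]
    return adj

g = pa_network(1000, k=1, p_triadic=0.5)
degrees = sorted(len(nb) for nb in g.values())
print(degrees[-1], degrees[len(degrees) // 2])  # heavy tail: max >> median
```

The resulting degree distribution is strongly asymmetric (a few hubs, many low-degree nodes), and the closure step adds the local clustering that, as discussed above, tends to preserve isolated subcommunities of competing standards.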
4.3.3 Initial Non-Adopters (Effect 8)
The effect of the initial total usage share (\(\sum _i p_{i,j,0}\)) is shown in Fig. 12. Starting from an equal initial division \(\left( p_{j,0}=\left( \begin{array}{c}0.5\\ 0.5\end{array}\right) \right) \) between the standards, the share of the then-largest standard is studied after \(t_{max}=10{,}000\) time steps. While the complete network leads to an asymmetric final distribution, particularly if the initial total usage share was low, this is much less the case for the other network structures; in particular, network structure D (the \(k=1\) preferential attachment network with triadic closure) does not deviate far from the initial distribution \(p_{j,0}=\left( \begin{array}{c}0.5\\ 0.5\end{array}\right) \) and also has a markedly lower standard deviation. This is probably also a result of the presence of a multitude of isolated clusters.
4.3.4 Positioning of Initial Adopters (Effect 9)
This section considers Feld's (1991) friendship paradox as an example to demonstrate the potential of positioning in networks in this context; many other effects of this family are conceivable. The "paradox" refers to the phenomenon that in a network with an asymmetric degree distribution, an agent's neighbors ("friends") on average have more neighbors than the agent herself and are thus more central. To study this effect, the competition between two standards, one with and one without the benefits of this phenomenon, is considered. The standard benefiting from the effect selects random neighbors of arbitrarily chosen agents as initial adopters (Fig. 13). Unsurprisingly, this has no effect in the network structures with homogeneous degree distributions (complete network and regular grid, A and B). For all other network structures, it changes the s-shaped curve into a concave one, with the lower end (low initial shares) inflated upwards: moderately high to high final usage shares result even from small initial groups of well-connected adopters. It should be noted that commercial vendors frequently seek to employ such a strategy (trying to win over well-connected individuals first), for instance by approaching institutions like universities with favorable contracts.
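The mechanism behind effect 9, Feld's observation that a randomly chosen neighbor has a higher expected degree than a randomly chosen agent, is easy to verify numerically. The sketch below uses a simple preferential-attachment tree as a stand-in for the asymmetric networks studied here; all names and parameters are illustrative, not taken from the paper's setup.

```python
import random

random.seed(7)

# Build a simple preferential-attachment tree: each new node links to one
# existing node chosen proportionally to degree (heavy-tailed degrees).
n = 2000
adj = {0: {1}, 1: {0}}
degree_list = [0, 1]   # each node repeated once per unit of degree
for new in range(2, n):
    tgt = random.choice(degree_list)
    adj[new] = {tgt}
    adj[tgt].add(new)
    degree_list += [new, tgt]

deg = {i: len(adj[i]) for i in adj}

# Mean degree of randomly chosen agents...
sample = [random.randrange(n) for _ in range(5000)]
mean_agent = sum(deg[i] for i in sample) / len(sample)
# ...versus mean degree of a random neighbor ("friend") of each of them.
mean_friend = sum(deg[random.choice(list(adj[i]))] for i in sample) / len(sample)

print(mean_agent, mean_friend)  # friends have more friends on average
```

Seeding initial adopters via the second sampling rule thus places a standard on systematically better-connected agents at no extra cost, which is exactly the advantage studied in Fig. 13.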
5 Evidence from the ICT Sector
While the literature does not offer any previous models of the strategic use of network externalities and market power in the case of tied standards, there is a small tradition of literature on product bundling (Choi 2004; Nalebuff 2004; Miao 2010; Eisenmann et al. 2011), and there are some empirical observations that strongly suggest the systematic employment of such strategies. This section discusses a few examples from the ICT sector.
Luksha (2008) recounts several cases of co-optation of organizational networks (supplier networks, user networks) by a dominant firm for its own purposes. His examples include Microsoft, as the dominant player in the PC operating system segment, forcing cooperation between Intel and AMD, the two major competitors in the PC processor segment (which is clearly tied to PC operating systems). Luksha also identifies firms that actively shape and coordinate their user networks, namely Sun (then the vendor of a wide array of IT products including Java, StarOffice, MySQL, and Solaris), Google, and Intel. Luksha does not go into detail on this, but it is clear that this activity is targeted at entrenching and extending the dominant position and the usage shares of the various products. This can be done by bundling products (across tied segments), particularly in the case of firms that offer a wide variety of related products, like Sun (at the time), Google, Apple, and Microsoft. It can also be accomplished by acquiring well-connected users (say, university students and faculty) and perhaps by enlisting the help of prominent institutions (say, by offering special deals to universities); in fact, aggressive marketing to otherwise privileged user groups is rather common. The case of Sun's Java offers a prime example of a huge marketing campaign run to position a new standard in the mid-1990s, probably in full awareness of the intricacies of network externalities (Garud et al. 2002).
Further, many successful commercial ICT standards started out by securing a temporary bundling or at least a licensing cooperation with one of the major players in a tied subsector: Microsoft's success in the 1980s came after (and as a direct consequence of) its alliance with IBM; one of the first strategic actions in Sun's deployment of Java in 1995 was a licensing agreement with Microsoft (Garud et al. 2002); and the three successful mobile operating systems (Android, iOS, Windows Mobile) were established by major vendors of other (tied) standards: the Google search engine (in combination with other Google online services) in the case of Android, and PC operating systems in the cases of Apple's iOS and Microsoft's Windows Mobile. Sure enough, they swiftly displaced the early mobile operating systems of less well-positioned vendors, such as Blackberry and Symbian (West and Mace 2010).^{Footnote 30}
More recently, there has been speculation about which segment is likely to determine the future development of the ongoing mobile device platform competition, with some scholars arguing that attracting third-party developers is the most important aspect (Ghazawneh and Henfridsson 2013) while others contend that mobile online services (app stores, integration with social networks) play a crucial role compared to more traditional (software and hardware) segments (Kenney and Pon 2011). Specifically, the platforms that emerged as superior (Apple iOS, Google Android, and Windows Mobile) were identified as those able to generate revenue in those services. An interesting strategy may be that of Google, which generates its revenue, and the network externalities crucial to keeping the platform competitive, in entirely different segments (Kenney and Pon 2011). Note, however, that this particular analysis in Kenney and Pon (2011) does not place much value on the integration of and compatibility among the segments; as the simulations above suggest, this may be another crucial effect, one which may underlie the success of the tightly closed Apple platform.
6 Conclusions
One of the open problems in the economic analysis of information and communication technologies is that of "tied" segments, i.e. segments that are subject not only to network externalities originating in the same segment but also to those that unfold in other, connected segments. A practical example is provided by the large number of integrated software products in the ICT sector, whose vendors are partly locked in fierce battles for dominance of one segment or another and partly engaged in efforts to integrate their products across segments and cement their commanding positions in those markets.
The present article proposed a model for the analysis of standards in tied segments. In a simple replicator-dynamic model, it was demonstrated that commercial vendors of standards can feasibly make strategic use of network externalities acting within and across segments. While initial usage share is, as variously pointed out in previous literature, crucially important for the success or failure of standards, it was shown that compatibility with other standards has a considerable direct effect, which may offset a low usage share. Compatibility is in this context understood as the interoperability between standards: the potential for users and groups of users to make efficient use of both standards at the same time.
It was also shown, however, that there is a second, indirect effect of intersegmental compatibility, which may make it desirable for sufficiently strong competitors to reduce certain compatibility terms. This may serve to extend a dominating position in one segment, or to expand into another segment while displacing other standards there, on whose compatibility the competitor's earlier success relied. It was shown that the time horizon of the consideration is one crucial element in choosing between reducing and maintaining compatibility: in the very short term, the direct effect will prevail.
Nevertheless, a replicator model must remain at a fairly high level of abstraction. It derives its legitimacy from its claim of being an aggregated version of micro-level dynamics; but does the micro level, if modelled explicitly, actually follow this behavior? For the present model, it could be shown in Sect. 4.3 that all results derived for the aggregated replicator-dynamic version can be recovered in a fully agent-based model. Five different network structures were studied; the effects under consideration could be found in all of them, though the effect of the compatibility terms is much less pronounced in user network structures with asymmetric degree distributions and especially in those with local clusters (introduced in this case by triadic closure), and many network structures tend to preserve at least isolated niches of competing standards.
Finally, the agent-based model also allowed an assessment of the potential role of the positioning of initial adopters in the network structure. As an example of this potentially vast class of effects, the strategic use of Feld's friendship paradox was investigated; the effect is considerable for all network structures with an asymmetric degree distribution (it cannot exist in regular grid networks).
Unsurprisingly, ample evidence of the strategic use of network effects along various lines can be found, as detailed in Sect. 5. While that section focused on the effects also studied at the theoretical level in this paper, most practical strategies can be expected to be more intricate, making use not just of the initial market share and interference with compatibility but also of the network structure itself: the positioning of initial adopters can be manipulated through targeted advertisement. Strategic approaches would also not attempt positioning in the entire network (hence, the entire world) at once but would focus on specific niches, niches that are large enough to form an installed base but clustered enough to withstand outside influences by other competitors. With the rise of social media and big data, knowledge about social network structures has developed immensely, and commercial vendors may soon develop the capability of influencing the network structure itself.^{Footnote 31} This would not only have a huge potential of upsetting established industry and market structures but would also raise pressing ethical questions.
Notes
This may include, e.g., Skype, Google's and Facebook's video chats, Tox, as well as the products of a wealth of other providers operating predominantly at the national level in different countries.
For more examples, see Sect. 5.
This seems important for small competitors, with Microsoft’s initial contract with IBM falling in this category.
Evolutionary model in this context refers to a model that emphasizes development and dynamics while remaining agnostic towards the existence of an equilibrium. It will become clear in Sect. 2 that much of the relevant literature in the field of competition with network externalities comes from evolutionary economics. It will also become clear that evolutionary techniques such as replicator dynamics and agent-based modeling are well-suited to approaching these questions.
Transition probabilities here refers to the probabilities that a user will continue using or switch to a certain standard (thus becoming part of its user group).
Though Demsetz (1968) quotes an 1859 paper by Chadwick, who may have been the first to realize the strategic consequences for commercial vendors of standards in sectors with network externalities when he distinguished between "competition in a market" and "competition for a market".
Aside from these asymptotically stable equilibria, i.e. the complete monopolization of the sector in favor of just one alternative, there are also one or more unstable tipping-point equilibria between the basins of attraction of the stable equilibria.
More concrete examples are given in Sect. 5.
Note that this is not the same as Katz and Shapiro's (1985) indirect network effect, which is a network effect that unfolds not directly via the number of users but by means of another mechanism, for instance reduced costs.
That is, Watts–Strogatz random graphs with more than a certain threshold of rewired edges; values of around 20% have been suggested. However, the threshold for this phase transition may depend on other environment variables.
This is a discretization of the canonical form \(\frac{dp_{i,j}}{dt}=p_{i,j,t}(f_{i,j,t}\phi _{j,t})\).
This refers to a subsector or one of several types of goods within the same sector such that interacting network externalities can be expected (as in the above example of operating systems, office software, web browsers etc.). Of course, in reality the sector association of these segments would not be unique or homogeneous, but different segments and even different standards within segments would align only to varying degrees. This is reflected in the compatibility matrices A and C in the present model, which allow the level of interaction of network externalities to be fine-tuned or investigated.
This allows different tied segments to be weighted differently, especially the segment whose evolutionary fitness is being computed with the above equation.
In formal terms, A would be a symmetric matrix and \(C_{j,j'}\) would be the transpose of \(C_{j',j}\), \(C_{j,j'}=C_{j',j}^T\). Since a standard should be perfectly compatible with itself, all main diagonal elements of A should further equal 1.
For instance, a software product may offer to read, open, and transform, but not to write, the file types of a competing product.
It is unlikely that all network effects follow the same functional form; therefore, an approach was chosen that seemed relatively universal and flexible yet integrates easily with the evolutionary model and the simulation below. For a comprehensive account of proposed functional forms of network externalities, see Swann (2002).
Incompatibility with all standards in one segment would lead to the fitness term being multiplied by zero for this segment and, thus, to the overall fitness also being reduced to zero.
The partial derivatives of the replicator equation with respect to the variables in question are used to assess this.
A potential extension for a more elaborate model would be to consider links of different strengths; the present model considers just one type and strength of link, hence an unweighted network.
Hence \(c_{11}\) will for instance generally be larger than \(c_{12}\) except in runs in which variation of these terms is studied.
The parameters for the fitness weights of the two segments are set to 1 for intrasegmental influence, \(w_{1,1}=w_{2,2}=1\), and 2 for intersegmental influence, \(w_{1,2}=w_{2,1}=2\), but some of the runs will be contrasted with runs without intrasegmental network effects (hence \(w_{1,1}=w_{2,2}=0\)) in order to show a more pronounced intersegmental effect.
Which is between 0.4 and 0.5 for the current settings.
Initial usage shares in the shown case are \(p_{1,0}=\left( \begin{array}{c}0.3\\ 0.7\end{array}\right) \), \(p_{2,0}=\left( \begin{array}{c}0.6\\ 0.4\end{array}\right) \); the compatibility term of the first (tied) standards in both segments is varied; about \(c=0.135\) is sufficient for the pair to become dominant.
A threesegment model as shown in Fig. 7 uses the same matrices A for all segments and the basic intersegmental compatibility matrices \(C=\left( \begin{array}{c c}0.1 &{} 0\\ 0 &{} 0.1\end{array}\right) \) between segments 1 and 3 and between segments 2 and 3 while only varying \(c_{12}\) between segments 1 and 2. (\(w_{1,1}=w_{2,2}=w_{3,3}=1\), \(w_{1,2}=w_{2,1}=w_{3,1}=w_{1,3}=w_{3,2}=w_{2,3}=2\)).
For instance, both standards of each pair might be offered by the same respective vendor.
Note that \(p_{2,t}\) (with indices for segment and time, but without an index for the standard) denotes the vector of usage shares of segment 2 at time t. \(\max (p_{2,t})\) thus denotes the highest usage share in this segment at this time.
I.e. matrix \(A_2\) now has to be a \(3 \times 3\) matrix, C has to be a \(2 \times 3\) matrix: \(A_2=\left( \begin{array}{c c c}1 &{} 0.9 &{} 0.9\\ 0.9 &{} 1 &{} 0.9\\ 0.9 &{} 0.9 &{} 1\end{array}\right) \), \(C=\left( \begin{array}{c c c}0.1 &{} 0 &{} c_{13}\\ 0 &{} 0.1 &{} 0\end{array}\right) \).
The early adopters are randomly chosen without regard to their specific positions in the network. This serves as a neutral benchmark case in which no strategic use of positioning in the network is made; effect 9 changes this assumption and studies an example of the strategic use of the network structure for the positioning of early adopters. The share of early adopters is studied more closely in effect 8.
Note that there is a strand of literature that analyzes technological change as successive emergence of sectors and segments within sectors (see e.g. Saviotti and Pyka 2008, 2013)—with all implications for tying between the respective standards; for the ICT sector, this is discussed by Bass and Bass (2001).
Aggressive attempts by social networks like Facebook and LinkedIn to connect users who seem too isolated or too passive may represent first attempts at doing exactly that. Their practical merit, however, seems questionable.
If \(w_{1,1}\) is too large, the discrete system is not an adequate discretization of the continuous system from which it was derived above, as the discrete form would in this case not only destabilize the equilibria but also likely violate the restriction \(0\le p_{i,j,t}\le 1\), which must hold since the \(p_{i,j,t}\) are usage shares.
References
Arthur, W. B. (1988). Self-reinforcing mechanisms in economics. In P. W. Anderson, K. J. Arrow, & D. Pines (Eds.), The economy as an evolving complex system (Santa Fe Institute Studies in the Sciences of Complexity). Redwood City, California: Addison-Wesley.
Arthur, W. B. (1989). Competing technologies, increasing returns, and lock-in by historical events. Economic Journal, 99(394), 116–131.
Arthur, W. B., Ermoliev, Y. M., & Kaniovski, Y. M. (1983 [1982]). A generalized urn problem and its applications. Cybernetics, 19, 61–71. Translated from Russian (Kibernetica, 1, 49–56).
Arthur, W. B., Ermoliev, Y. M., & Kaniovski, Y. M. (1987). Path dependent processes and the emergence of macrostructure. European Journal of Operational Research, 30, 294–303.
Barabási, A. L., & Albert, R. (1999). Emergence of scaling in random networks. Science, 286(5439), 509–512. doi:10.1126/science.286.5439.509.
Bass, P. I., & Bass, F. M. (2001). Diffusion of technology generations: A model of adoption and repeat sales. Working paper, Bass Economics, Frisco, TX. http://www.bassbasement.org/F/N/BBDL/Bass%20and%20Bass%202001.pdf
Choi, J. P. (2004). Tying and innovation: A dynamic analysis of tying arrangements. The Economic Journal, 114(492), 83–101.
David, P. A. (1985). Clio and the economics of QWERTY. American Economic Review, 75(2), 332–337.
David, P. A. (1992). Heroes, herds and hysteresis in technological history: Thomas Edison and ’the battle of the systems’ reconsidered. Industrial and Corporate Change, 1(1), 129–180. doi:10.1093/icc/1.1.129.
Delre, S., Jager, W., & Janssen, M. (2007). Diffusion dynamics in small-world networks with heterogeneous consumers. Computational and Mathematical Organization Theory, 13, 185–202. doi:10.1007/s10588-006-9007-2.
Demsetz, H. (1968). Why regulate utilities? Journal of Law and Economics, 11(1), 55–65.
Eisenmann, T., Parker, G., & Van Alstyne, M. (2011). Platform envelopment. Strategic Management Journal, 32(12), 1270–1285. doi:10.1002/smj.935.
Elsner, W., Heinrich, T., & Schwardt, H. (2015). Microeconomics of complex economies: Evolutionary, institutional, neoclassical, and complexity perspectives. Amsterdam, NL / San Diego, CA: Academic Press.
Farrell, J., & Klemperer, P. (2007). Coordination and lock-in: Competition with switching costs and network effects. In M. Armstrong & R. Porter (Eds.), Handbook of Industrial Organization (Handbooks in Economics, Vol. 3, pp. 1967–2072). Elsevier. doi:10.1016/S1573-448X(06)03031-7.
Feld, S. L. (1991). Why your friends have more friends than you do. American Journal of Sociology, 96(6), 1464–1477.
Fisher, J., & Pry, R. (1971). A simple substitution model of technological change. Technological Forecasting and Social Change, 3, 75–88. doi:10.1016/S0040-1625(71)80005-7.
Frenken, K., Silverberg, G., & Valente, M. (2008). A percolation model of the product lifecycle. In DRUID Working Papers 0820, http://EconPapers.repec.org/RePEc:aal:abbswp:0820
Garud, R., Jain, S., & Kumaraswamy, A. (2002). Institutional entrepreneurship in the sponsorship of common technological standards: The case of Sun Microsystems and Java. The Academy of Management Journal, 45(1), 196–214.
Ghazawneh, A., & Henfridsson, O. (2013). Balancing platform control and external contribution in third-party development: The boundary resources model. Information Systems Journal, 23(2), 173–192. doi:10.1111/j.1365-2575.2012.00406.x.
Gräbner, C. (2015). Agent-based computational models: A formal heuristic for institutionalist pattern modelling? Journal of Institutional Economics, FirstView, 1–21. doi:10.1017/S1744137415000193.
Heinrich, T. (2014). Standard wars, tied standards, and network externality induced path dependence in the ICT sector. Technological Forecasting and Social Change, 81, 309–320. doi:10.1016/j.techfore.2013.04.015.
Katz, M. L., & Shapiro, C. (1985). Network externalities, competition and compatibility. American Economic Review, 75(3), 424–440.
Katz, M. L., & Shapiro, C. (1986). Technology adoption in the presence of network externalities. Journal of Political Economy, 94(4), 822–841.
Kenney, M., & Pon, B. (2011). Structuring the smartphone industry: Is the mobile internet OS platform the key? Journal of Industry, Competition and Trade, 11(3), 239–261. doi:10.1007/s10842-011-0105-6.
Lee, E., Lee, J., & Lee, J. (2006). Reconsideration of the winner-take-all hypothesis: Complex networks and local bias. Management Science, 52, 1838–1848. doi:10.1287/mnsc.1060.0571.
Luksha, P. (2008). Niche construction: The process of opportunity creation in the environment. Strategic Entrepreneurship Journal, 2(4), 269–283. doi:10.1002/sej.57.
Miao, C. H. (2010). Tying, compatibility and planned obsolescence. The Journal of Industrial Economics, 58(3), 579–606. doi:10.1111/j.1467-6451.2010.00425.x.
Nalebuff, B. (2004). Bundling as an entry barrier. The Quarterly Journal of Economics, 119(1), 159–187.
Nelson, R. R. (1968). A ”diffusion model” of international productivity differences in manufacturing industry. American Economic Review, 58, 1219–1248.
Pegoretti, G., Rentocchini, F., & Vittucci Marzetti, G. (2009). An agent-based model of competitive diffusion: Network structure and coexistence. OPENLOC Working Paper Series, No. 16/2009, Department of Economics, University of Trento, Italy.
Pyka, A., & Fagiolo, G. (2005). Agent-based modelling: A methodology for neo-Schumpeterian economics. In H. Hanusch & A. Pyka (Eds.), The Elgar companion to neo-Schumpeterian economics. Cheltenham: Edward Elgar.
Saviotti, P. P., & Pyka, A. (2008). Micro and macro dynamics: Industry life cycles, inter-sector coordination and aggregate growth. Journal of Evolutionary Economics, 18, 167–182. doi:10.1007/s00191-007-0077-1.
Saviotti, P. P., & Pyka, A. (2013). From necessities to imaginary worlds: Structural change, product quality and economic development. Technological Forecasting and Social Change, 80(8), 1499–1512. doi:10.1016/j.techfore.2013.05.002.
Sraffa, P. (1926). The laws of returns under competitive conditions. The Economic Journal, 36(144), 535–550.
Swann, G. P. (2002). The functional form of network effects. Information Economics and Policy, 14(3), 417–429.
Uchida, M., & Shirayama, S. (2008). Influence of a network structure on the network effect in the communication service market. Physica A: Statistical Mechanics and its Applications, 387(21), 5303–5310. doi:10.1016/j.physa.2008.06.012.
Vázquez, A. (2003). Growing network with local rules: Preferential attachment, clustering hierarchy, and degree correlations. Physical Review E, 67, 056104. doi:10.1103/PhysRevE.67.056104.
Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of 'small-world' networks. Nature, 393(6684), 440–442. doi:10.1038/30918.
West, J., & Mace, M. (2010). Browsing as the killer app: Explaining the rapid success of Apple's iPhone. Telecommunications Policy, 34(5–6), 270–286. doi:10.1016/j.telpol.2009.12.002.
Young, A. A. (1928). Increasing returns and economic progress. The Economic Journal, 38(152), 527–542. http://www.jstor.org/stable/2224097
Acknowledgements
For many helpful and instructive comments and discussions, I am grateful to Sidonia von Proff, Ben Vermeulen, Claudius Gräbner, Wolfram Elsner, to the participants of the Annual Meeting of the Evolutorischen Ausschuss of the Verein für Socialpolitik 2015, the Annual Conference of the European Association for Evolutionary and Political Economy 2015, and the Conference on Complex Systems 2015, as well as to two anonymous reviewers. All remaining errors are my own.
Ethics declarations
Conflicts of interest
There are no conflicts of interest.
Ethical Standard
The research presented in this paper is in compliance with accepted ethical standards for good science.
Appendices
Appendix 1: Derivation of the Marginal Effects of the Compatibility Terms in the 2Segment Replicator Model
From Eq. (1) the evolutionary fitness of standard 1 in segment 1 follows as
(and analogously that for standard 2 in segment 1). The average fitness in segment 1 is
and the usage share of standard 1 in segment 1 at time \(t+1\) follows as
and for \(p_{2,1,t+1}\) analogously. The direct effects of different variables can now be studied; particularly the compatibility terms are of interest. For the terms of matrix A,
and, specifically if A is to be symmetric, hence \(\alpha =a_{12,1}=a_{21,1}\),
which is \(\frac{\partial p_{1,1,t+1}}{\partial \alpha } > 0\) if, and only if, \(0<p_{1,1,t}<0.5\).
The direct influence of the terms of the intersegmental compatibility matrix C is obtained in the same way as
Appendix 2: Equilibrium and Stability Analysis of the Symmetric 2Segment Replicator Model
In this case, we have
which has the equilibrium set (most easily seen from the secondto last form)
with solutions (assuming \(w_{1,1}\ne 0\))
and stability conditions following from the system's eigenvalue \(\lambda \) (linearized at the equilibria), with an equilibrium being stable if, and only if, \(|\lambda | <1\).
thus
The first two equilibria are thus stable if \(w_{1,1}\) is small enough in comparison to \(\alpha \),^{Footnote 32} specifically if \(2>(1\alpha )w_{1,1}\). These are the monopolization equilibria. The third equilibrium, the tipping point, is never stable. An equilibrium and stability analysis for the continuous form of the system for specific numerical examples with nonvanishing intersegmental compatibility matrix (but vanishing intrasegmental compatibility term) is conducted in Heinrich (2014) with very similar results (i.e. stable monopolization equilibria but unstable tipping point equilibrium).
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Heinrich, T. Network Externalities and Compatibility Among Standards: A Replicator Dynamics and Simulation Analysis. Comput Econ 52, 809–837 (2018). https://doi.org/10.1007/s10614-017-9706-4
DOI: https://doi.org/10.1007/s10614-017-9706-4