1 Introduction

Many systems of interacting objects or individuals in the natural and social sciences can be described by complex graphs [1, 5, 8, 10, 18]. Individuals positioned at the vertices of such graphs interact along edges with their neighbors. The structure of neighborhoods may have quite a complex topology, resulting from various random processes which describe mechanisms of graph growth. To mimic the “rich get richer” rule, Barabási and Albert used preferential attachment to grow their graphs [13]. The preferential attachment rule says that a new vertex is linked to already existing ones with probabilities proportional to their degrees. Such a procedure leads to a scale-free graph with a power-law degree distribution, \(P(k)\sim k^{-3}\). This was understood heuristically in [13] and proved mathematically in [7, 11].

Since then, many generalizations and extensions of the preferential attachment procedure have been proposed. In the Erdős-Rényi random graph [13], an edge is created with a fixed probability between any two vertices of a given finite set. It is easy to see that vertex degrees in the Erdős-Rényi graph follow the Poisson distribution. The Erdős-Rényi procedure was generalized in [4, 6] to graphs whose vertices carry an internal fitness. In such models, the probability of creating an edge between two vertices depends on their fitnesses. Various distributions of internal fitnesses lead to scale-free graphs with various exponents. In growth models which generalize the preferential attachment rule directly, the probability of linking a new vertex to an already existing one is proportional to the product of its degree and fitness [4, 17].

In this paper, we generalize the above procedure to allow both vertex fitnesses and degrees to evolve in a coupled dynamics. Namely, we introduce a variable, called a weight, which describes the internal state of a given vertex. We allow the weights of vertices to undergo a simple dynamics with rates proportional to their current degrees. At the same time, our preferential attachment procedure takes into account both the weights and the degrees of vertices: the probability of linking a new vertex to an already existing one is proportional to the product of its degree and weight. Our model is a toy model of coupled dynamics. Its simplicity allows us to prove rigorously that the generated graphs are scale free and to derive analytically power-law exponents which depend on the parameters of the weight dynamics.

Recently, in the framework of evolutionary game theory, models with a co-evolving graph structure and strategy profile have been analyzed [12, 14, 16, 19]. In spatial games, players are located at vertices and play games with their neighbors. The payoff of any player is then the sum of the payoffs resulting from individual games. Players may simultaneously change their strategies and rewire connections with other players, taking into account their payoffs. This leads to a co-evolutionary model of graph structure and strategies. In [14], scale-free networks were obtained in special multi-adaptive games. In the above and other models, topological and strategic properties were obtained by means of computer simulations and various approximations. To the best of our knowledge, our model is the first coupled dynamics with analytically derived power-law exponents. To prove our results we introduce a fairly general new approach.

2 Coupled Dynamics of Graph Growth and Weights

We will now define precisely our discrete-time dynamical model. We assume that every vertex may have one of two weights, \(w_{1}>0\) and \(w_{2}>0\), satisfying \(w_{1}+w_{2}=1\). The mechanism of the graph growth combines the classical preferential attachment procedure [13] with a simple mutation dynamics of weights. At time \(t=1\), the graph consists of two vertices connected by a single edge, both with the weight \(w_{1}\). We now describe the dynamics inductively. Every time step \(t+1\) consists of two substeps. In the first substep, we add one new vertex with the weight \(w_{1}\) and connect it with the probability \(\frac{k_{i}w_{i}}{2t}\) to one of the vertices present at time \(t\), where \(k_{i}\) is the degree of the vertex \(i\) and \(w_{i}\) is its weight. Observe that the sum of the degrees of all vertices at time \(t\) is equal to \(2t\). Because \(\sum_{i}\frac{k_{i}w_{i}}{2t} < 1\), it is possible that no vertex is chosen to be linked with the new one. In that case we link the new vertex with itself and assume that its degree is 2. In the second substep, we choose one vertex with probability proportional to its degree, in order to update its weight. We then assign to the chosen vertex the weight \(w_{i}\) with the probability \(w_{i}\), \(i=1,2\).
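
The growth procedure is straightforward to simulate directly. The following is a minimal Python sketch of a single realization (not the code used for the simulations reported below); the function and variable names are ours, and we assume that the degrees used in the second substep are those after the new vertex has been attached. The implementation is quadratic in the number of steps and is meant only as an illustration.

```python
import random

def grow(T, w1, seed=None):
    """Simulate the coupled growth and weight dynamics up to time T."""
    rng = random.Random(seed)
    w2 = 1.0 - w1
    # t = 1: two vertices of weight w1 joined by a single edge.
    deg = [1, 1]
    w = [w1, w1]
    for t in range(1, T):
        # Substep 1: a new vertex of weight w1 is attached to vertex i
        # with probability deg[i]*w[i]/(2t); with the remaining
        # probability it is linked to itself and gets degree 2.
        u = rng.random() * 2 * t
        acc, target = 0.0, None
        for i, (ki, wi) in enumerate(zip(deg, w)):
            acc += ki * wi
            if u < acc:
                target = i
                break
        if target is None:
            deg.append(2)            # self-loop
        else:
            deg[target] += 1
            deg.append(1)
        w.append(w1)
        # Substep 2: pick a vertex with probability proportional to its
        # degree and resample its weight (w1 with prob. w1, w2 with prob. w2).
        v = rng.random() * sum(deg)
        acc = 0.0
        for i, ki in enumerate(deg):
            acc += ki
            if v < acc:
                w[i] = w1 if rng.random() < w1 else w2
                break
    return deg, w
```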

Let us observe that for \(w_{1}=1\) we retrieve the original Barabási-Albert model. On the other hand, in the case \(w_{1}=w_{2}=1/2\), although vertices do not differ with respect to their weights, our model does not reduce to the original one because at every step a self-connected vertex is created with probability 1/2. As a consequence, with a very high probability our graph is not connected.

Let \(N_{k}(t)=[N_{k}(1,t),N_{k}(2,t)]^{T}\) be the column vector of the expected numbers of vertices of degree \(k\) with weights \(w_{1}\) and \(w_{2}\), respectively, at time \(t\). We assume that both substeps are performed independently. It follows that

$$ N_{k}(i,t+1) = N_{k}(i,t) - \frac{k}{2t}N_{k}(i,t) + \frac{kw_{i}}{2t}N_{k}(j,t) + \frac{(k-1)w_{i}}{2t}N_{k-1}(i,t) $$
(1)

for \(k>2\), \(t\geq 2\) and \(i,j=1,2\), \(i\neq j\), with the initial condition \(N_{1}(1,1)=2\). We do not write the recurrence relations for \(N_{k}(i,t)\), \(k=1,2\); the asymptotic behavior of \(N_{k}(i,t)\) does not depend on them, as will be seen below.

We can write Eq. (1) in the following matrix form:

$$ N_{k}(t+1) = \biggl(I-\frac{G_{k}}{t}\biggr)N_{k}(t) + \frac{k-1}{t}FN_{k-1}(t), $$
(2)

where \(G_{k}=[g_{ij}]_{i,j=1,2}\) with \(g_{ii} = \frac{k}{2}\) and \(g_{ij} = -\frac{k}{2}w_{i}\) for \(i\neq j\), and \(F=[f_{ij}]_{i,j=1,2}\) with \(f_{ii}=w_{i}/2\) and \(f_{ij}=0\) for \(i\neq j\).

The above recurrence equation is a two-dimensional generalization of a scalar equation which served as a starting point in the analysis of the original Barabási-Albert model [9]. Our approach is different. As a special case (\(w_{1}=1\)) we re-derive the exponent of the original Barabási-Albert model. Using induction on \(k\) in Eq. (2), we can prove that, as in the original model, the graph evolves linearly, that is, for every \(k\), \(\frac{N_{k}}{t} \to v_{k}\) as \(t\to\infty\), for some vector of constants \(v_{k}=[v_{k}(1),v_{k}(2)]^{T}\). To be more precise, the linear evolution follows from the fact that the matrix \(I+G_{k}\) is positive definite (details will be provided in a separate paper).

It is easy to see that rates of linear evolution, v k , satisfy the following system of linear equations:

$$ v_{k-1} = \frac{1}{k-1}F^{-1}(I+G_{k})v_{k}. $$
(3)
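
For completeness, here is the step leading from (2) to (3): substituting the linear asymptotics \(N_{k}(t)\approx v_{k}t\) into (2) and keeping the terms of order one gives

$$ v_{k}(t+1) \approx \biggl(I-\frac{G_{k}}{t}\biggr)v_{k}t + \frac{k-1}{t}Fv_{k-1}t \quad\Longrightarrow\quad (I+G_{k})v_{k} = (k-1)Fv_{k-1}, $$

and multiplying by \(\frac{1}{k-1}F^{-1}\) yields (3).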

Let \(s_{k}=v_{k}(1)+v_{k}(2)\); then \(r_{k}(i)=\frac{v_{k}(i)}{s_{k}}\), \(i=1,2\), is the fraction of vertices with the weight \(w_{i}\) in the population of vertices of degree \(k\). It is easy to see that \(r_{k}(i)\) converges to \(w_{i}\) as \(k\to\infty\). Our main result is that an appropriate rate of this convergence implies a power law for the vertex degree distribution. Our approach is very general and enables us to calculate analytically the exponent of the power law and to show its dependence on the weight \(w_{1}\). From Eq. (3) we get

$$ \frac{s_{k-1}}{s_{k}} = r_{k}(1)\alpha_{k}(1) + r_{k}(2)\alpha_{k}(2), $$
(4)

where

$$ \alpha_{k}(i) = \biggl(\frac{1}{w_{i}}-1\biggr) + \frac{1}{k}\biggl(\frac{3}{w_{i}}-1\biggr) + O\biggl(\frac {1}{k^{2}} \biggr). $$
(5)
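
Explicitly, summing the two components of (3) (written out componentwise in (9) below) and dividing by \(s_{k}\) shows that the coefficients in (4) are

$$ \alpha_{k}(i) = \frac{1}{k-1}\biggl(\frac{k+2}{w_{i}}-k\biggr) = \biggl(\frac{1}{w_{i}}-1\biggr) + \frac{1}{k}\biggl(\frac{3}{w_{i}}-1\biggr) + O\biggl(\frac{1}{k^{2}}\biggr). $$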

3 Results and Proofs

Let us now formulate our main result.

Theorem

The distribution of vertex degrees in our coupled preferential attachment and weight dynamics satisfies a power law, that is, \(s_{k}k^{\beta}\rightarrow c\) for some positive constant c, where

$$ \beta= 5 + d(w_{1}) \biggl(\frac{1}{w_{1}}- \frac{1}{w_{2}} \biggr), $$
(6)

where \(d(w_{1})\) is given in (28).

In Fig. 1 we present β, the exponent of the power law, as a function of the weight \(w_{1}\). For \(w_{1}\to 0\) and \(w_{1}\to 1\), the exponent tends to 3. For \(w_{1}=1\) this is expected because in that case our model becomes the standard Barabási-Albert one. For \(w_{1}=0\), our model behaves like the Barabási-Albert one in the limit of infinite \(k\). We also see that the exponent of the power law is symmetric with respect to the line \(w_{1}=w_{2}=1/2\), which again is a consequence of the fact that the exponent describes the behavior of the network in the limit of infinite \(k\) and therefore does not depend on initial conditions. The maximum β=5 is obtained for \(w_{1}=w_{2}=1/2\). As was mentioned before, although in this case vertices do not differ with respect to their weights, our model does not reduce to the original one because self-connected vertices are created with probability 1/2.

Fig. 1
Power-law exponent β as a function of the weight \(w_{1}\); the solid line is the graph of formula (30), open circles represent results of stochastic simulations of the process of building the graph, and the dotted line is just a guide for the eye

We have also performed stochastic simulations of the graph growth for certain values of \(w_{1}\) and obtained power-law exponents numerically. We build networks of \(10^{9}\) vertices and repeat the simulation 1000 times to have more data. The results of the computer simulations agree with the analytical solution quite well, as can be seen in Fig. 1. The discrepancy grows as we approach \(w_{1}=1/2\). In the limiting case we get numerically β=4.92 instead of the rigorous analytical result β=5. In this case, self-connected vertices are created with probability 1/2. One then needs to build big networks to get the right exponent. The necessity of having big networks to obtain large exponents was discussed in [15]. It was shown there that to get β=5 one really needs \(10^{12}\) vertices, which is beyond our computational capabilities.
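
As an illustration of how such exponents can be extracted, a simple least-squares fit of \(\log N_{k}\) against \(\log k\) over an intermediate range of degrees can be used. The Python sketch below is our own, with hypothetical sizes and parameter values in the comments, and assumes the degree list produced by the simulation sketch in Sect. 2.

```python
import math
from collections import Counter

def estimate_beta(deg, kmin=5, kmax=100):
    """Crude power-law exponent estimate from a list of vertex degrees."""
    counts = Counter(deg)
    xs, ys = [], []
    for k in range(kmin, kmax + 1):
        if counts.get(k, 0) > 0:
            xs.append(math.log(k))
            ys.append(math.log(counts[k]))
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope                    # N_k ~ k^{-beta}

# Example (hypothetical size and weight):
# deg, w = grow(10**6, w1=0.3)
# print(estimate_beta(deg))          # formula (30) gives beta(0.3) of about 4.45
```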

Special Cases

We will first derive an approximate formula for β as a function of w 1 in the vicinity of w 1=1/2. The starting point is Eq. (3). We begin with the following mathematical result.

Lemma

If a sequence \(n_{k}\), \(k=0,1,\ldots\), of positive real numbers satisfies the following recurrence equation:

$$\frac{n_{k-1}}{n_{k}} = 1 + \frac{\beta}{k} + O\biggl(\frac {1}{k^\theta}\biggr), $$

where θ>1, then for some constant c we have

$$n_{k}k^{\beta} \rightarrow c \quad \mbox{\textit{when} } k \rightarrow \infty. $$

Proof

We write n k in the form:

$$ \frac{1}{n_{k}} = \frac{1}{n_{1}}\frac{n_{1}}{n_{2}}\frac {n_{2}}{n_{3}}\cdots \frac{n_{k-1}}{n_{k}} $$
(7)

hence

$$ \frac{1}{n_{k}} = \frac{1}{n_{1}}\prod_{j=2}^{k} \biggl(1+\frac{\beta}{j} + O\biggl(\frac{1}{j^{\theta}}\biggr )\biggr)= \frac{1}{n_{1}} \prod_{j=2}^{k}\biggl(1+ \frac{\beta}{j}\biggr) \prod_{j=2}^{k} \biggl(1 + O\biggl(\frac{1}{j^{\theta}}\biggr)\biggr). $$
(8)

An analogous equation is satisfied by the sequence \(\tilde{n}_{k}=k^{-\beta}\), which satisfies the assumption of the lemma with \(\theta=2\). It is easy to see that the second product has a limit. Dividing Eq. (8) by the analogous equation for \(\tilde{n}_{k}\) shows that \(k^{-\beta}/n_{k}\) converges, and the lemma is proved. □

Now we come back to Eq. (3) which can be written in the following form:

$$ \everymath{\displaystyle }\begin{array} {l} v_{k-1}(1) = \frac{1}{k-1}\biggl( \frac{k+2}{w_{1}}v_{k}(1) - kv_{k}(2)\biggr), \\ \noalign{\vspace{5pt}} v_{k-1}(2) = \frac{1}{k-1}\biggl( \frac{k+2}{w_{2}}v_{k}(2) - kv_{k}(1)\biggr). \end{array} $$
(9)

First we consider the case \(w_{1}=w_{2}=1/2\). We add the above two equations and get a one-dimensional recurrence equation,

$$ s_{k-1} = s_{k} + \frac{5}{k-1}s_{k}. $$
(10)
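
Indeed, for \(w_{1}=w_{2}=1/2\) both equations in (9) have the same coefficients, and adding them gives

$$ s_{k-1} = \frac{1}{k-1}\bigl[2(k+2)s_{k} - ks_{k}\bigr] = \frac{k+4}{k-1}s_{k} = s_{k} + \frac{5}{k-1}s_{k}. $$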

The theorem then follows immediately from the lemma, with the power-law exponent β=5. Now we set \(w_{1}=1/2+\epsilon\), expand Eqs. (9) in powers of ϵ, keep only the linear terms, and get

$$ \everymath{\displaystyle }\begin{array} {l} v_{k-1}(1) = \frac{1}{k-1} \bigl[2(1-2\epsilon) (k+2)v_{k}(1) - kv_{k}(2)\bigr], \\ \noalign{\vspace{5pt}} v_{k-1}(2) = \frac{1}{k-1}\bigl[2(1+2 \epsilon) (k+2)v_{k}(2) - kv_{k}(1)\bigr]. \end{array} $$
(11)

We add and subtract the above two equations and get

$$ \everymath{\displaystyle }\begin{array} {l} s_{k-1}=\frac{1}{k-1} \bigl[(k+4)s_{k}-4\epsilon(k+2)w_{k}\bigr], \\ \noalign{\vspace{5pt}} w_{k-1}=\frac{1}{k-1}\bigl[(3k+4)w_{k}-4 \epsilon(k+2)s_{k}\bigr], \end{array} $$
(12)

where \(s_{k}=v_{k}(1)+v_{k}(2)\) and \(w_{k}=v_{k}(1)-v_{k}(2)\).

We set \(w_{k}=2(d(\epsilon)/k+\epsilon)s_{k}\) for some function \(d(\epsilon)\). At the moment this is an ansatz which allows us to solve the system of recurrence equations; it follows directly from the proposition stated and proved below. Equations (12) now read

$$ \everymath{\displaystyle }\begin{array} {l} s_{k-1} = s_{k} + \biggl( \frac{5-8\epsilon d(\epsilon)}{k-1}-\frac{16\epsilon d(\epsilon)}{k(k-1)}+ O\bigl(\epsilon^{2}\bigr) \biggr)s_{k}, \\ \noalign{\vspace{5pt}} s_{k-1} = s_{k} + \biggl( \frac{1}{k-1} + \frac{2d(\epsilon)}{(k-1)\epsilon}+ O\biggl(\frac {1}{k^{2}}\biggr) \biggr)s_{k}. \end{array} $$
(13)
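
For instance, the first equation of (13) is obtained by substituting the ansatz into the first equation of (12):

$$ s_{k-1} = \frac{1}{k-1}\biggl[(k+4) - 8\epsilon(k+2)\biggl(\frac{d(\epsilon)}{k}+\epsilon\biggr)\biggr]s_{k} = \frac{k+4-8\epsilon d(\epsilon)-\frac{16\epsilon d(\epsilon)}{k}}{k-1}s_{k} + O\bigl(\epsilon^{2}\bigr)s_{k}, $$

and splitting off \(k-1\) in the numerator yields the stated form; the second equation of (13) is obtained analogously from the second equation of (12), using the ansatz for both \(w_{k}\) and \(w_{k-1}\).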

If we neglect the \(\epsilon^{2}\) term, then the first equation in (13) tells us that \(s_{k}\) satisfies a power law with the exponent \(\beta=5-8\epsilon d(\epsilon)\), and the second one tells us that the exponent is equal to \(1+\frac{2d(\epsilon)}{\epsilon}\). Consistency requires that these two expressions are equal, and we get

$$ \beta(\epsilon) = 5 - 16\epsilon^{2} + O\bigl(\epsilon^{3} \bigr ). $$
(14)

The rigorous expression for β is given below in (30).

Proofs

Let us now come back to our theorem. We will examine the rate of convergence of \(r_{k}(i)\) to \(w_{i}\).

Proposition

\(r_{k}(i) = w_{i} + \frac{d_{i}}{k} + O(\frac{1}{k^{\theta}})\), where \(\theta>1\) and \(d_{1}+d_{2}=0\).

The theorem follows directly from the above proposition and Eqs. (4)–(5), with \(d(w_{1})=d_{1}\).
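
Indeed, inserting the expansion from the Proposition into (4)–(5) and using \(w_{1}+w_{2}=1\) and \(d_{1}+d_{2}=0\), one finds

$$ \frac{s_{k-1}}{s_{k}} = \sum_{i=1,2}\biggl(w_{i}+\frac{d_{i}}{k}+O\biggl(\frac{1}{k^{\theta}}\biggr)\biggr)\alpha_{k}(i) = 1 + \frac{1}{k}\biggl[5 + d_{1}\biggl(\frac{1}{w_{1}}-\frac{1}{w_{2}}\biggr)\biggr] + O\biggl(\frac{1}{k^{\min(\theta,2)}}\biggr), $$

so the Lemma applies with \(\beta = 5 + d_{1}(\frac{1}{w_{1}}-\frac{1}{w_{2}})\), which is (6).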

Proof of the Proposition

From (4) and (5) we have:

$$ r_{k}(1) = \frac{A_{k}r_{k-1}(1)-B_{k}}{1-(1+C_{k})r_{k-1}(1)}, $$
(15)

where:

$$ \everymath{\displaystyle }\begin{array}{c} A_{k} =\frac{\frac{k}{2}\frac{w_{1}}{w_{2}}+\frac{1}{w_{2}}}{\frac {k}{2}(1+\frac{1}{w_{1}})+\frac{1}{w_{1}}}, \qquad B_{k} =-\frac {\frac{k}{2}}{\frac{k}{2}(1+\frac{1}{w_{1}})+\frac{1}{w_{1}}}, \\ \noalign{\vspace{5pt}} C_{k} =-\frac{\frac{k}{2}(1+\frac{1}{w_{2}})+\frac{1}{w_{2}}}{\frac {k}{2}(1+\frac{1}{w_{1}})+\frac{1}{w_{1}}} . \end{array} $$
(16)

We expand A k , B k and C k in powers of 1/k and get

$$ \everymath{\displaystyle }\begin{array}{c} A_{k} = A + \frac{A(1)}{k} + O\biggl(\frac{1}{k^{2}}\biggr), \qquad B_{k} = B + \frac{B(1)}{k} + O\biggl(\frac{1}{k^{2}} \biggr), \\ \noalign{\vspace{5pt}} C_{k} = C + \frac{C(1)}{k} + O\biggl(\frac{1}{k^{2}}\biggr), \end{array} $$
(17)

where A, B, and C are limits of A k , B k , and C k as k→∞, so we have:

$$ A =\frac{\frac{w_{1}}{w_{2}}}{1+\frac{1}{w_{1}}}, \qquad B = - \frac{w_{1}}{1+w_{1}}, \qquad C =- \frac{1+\frac{1}{w_{2}}}{1+\frac{1}{w_{1}}} $$
(18)

and

$$ \everymath{\displaystyle }\begin{array}{c} A(1) = {\frac{2 w_1}{ ( 1-{w_1} ) ( {1+ w_1} ) ^{2}}},\qquad B(1) = {\frac{2 {w_1}}{ ( 1+{w_1} ) ^{2}}}, \\ \noalign{\vspace{5pt}} C(1) = {\frac{ 2w_{1} -4 w_{1}^{2}}{ ( 1-{w_1} ) (1+ {w_1} ) ^{2}}}. \end{array} $$
(19)
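
The limits (18) and the \(1/k\)-coefficients (19) can be verified mechanically. A short script using the sympy library (our choice of tool; the variable names are ours) which reproduces them:

```python
import sympy as sp

k, x, w1 = sp.symbols('k x w1', positive=True)
w2 = 1 - w1

den = k / 2 * (1 + 1 / w1) + 1 / w1
A_k = (k / 2 * w1 / w2 + 1 / w2) / den
B_k = -(k / 2) / den
C_k = -(k / 2 * (1 + 1 / w2) + 1 / w2) / den

# Expand in powers of 1/k: substitute k = 1/x and expand around x = 0.
for name, expr in [('A_k', A_k), ('B_k', B_k), ('C_k', C_k)]:
    series = sp.series(sp.cancel(expr.subs(k, 1 / x)), x, 0, 2).removeO()
    print(name, '=', sp.simplify(series))   # constant term + (1/k) term
```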

Proof of the convergence of the sequence k(r k (i)−w i )

We use the recurrence formula (15) for r k (1) and obtain the following equation:

(20)

where

$$ \eta(k) = \bigl(A(1)r_{k-1}(1)-B(1)\bigr) \bigl(1-(1+C)w_{1} \bigr)+C(1)r_{k-1}(1) (Aw_{1}-B). $$
(21)

It is easy to see that η(k) is convergent; its limit is denoted by η.

Let \(\gamma(k)=k(r_{k}(1)-w_{1})\). It follows from Eqs. (20)–(21) that

$$ \gamma(k) = m\gamma(k-1) + b + O\biggl(\frac{1}{k}\biggr) $$
(22)

for some constants \(m\), \(b\), where \(0<m<1\). Now let \(\gamma_{2}(k)=m\gamma_{2}(k-1)+b\) for \(k>1\) and \(\gamma_{2}(1)=\gamma(1)\). It can be easily shown that

$$ \bigl|\gamma_{2}(k)-\gamma(k)\bigr| \leq P\sum_{i=1}^{k-1} \frac{m^{i}}{k-i} $$
(23)

for some constant P. Now we have to prove that the sequence \(l_{k} = \sum_{i=1}^{k-1} \frac{m^{i}}{k-i}\) converges to 0. We have

$$ l_{k} = \frac{\sum_{i=1}^{k-1}\frac{m^{-i}}{i}}{m^{-k}}. $$
(24)

We use the Stolz theorem and have

$$ \lim _{k\rightarrow\infty} l_{k} = \lim _{k\rightarrow \infty} \frac{\frac{m^{-k}}{k}}{m^{-(k+1)}-m^{-k}} = 0. $$
(25)

We have shown that

$$r_{k}(i) = w_{i} + \frac{d_{i}}{k} + o\biggl( \frac{1}{k}\biggr). $$

Proof that \(r_{k}(i)= w_{i} + \frac{d_{i}}{k} + O(\frac {1}{k^{\theta}})\) for some θ>1

Let us denote the limit of the sequence \(\gamma(k)\) by \(\gamma\); hence \(\gamma=m\gamma+b\). We would like to prove that there exists \(\sigma>0\) such that \(\lim_{k\rightarrow\infty} k^{\sigma}(\gamma(k)-\gamma)=0\). We denote \(k^{\sigma}(\gamma(k)-\gamma)\) by \(\zeta_{\sigma}(k)\). We subtract \(\gamma=m\gamma+b\) from (22), multiply the new equation by \(k^{\sigma}\) (for \(\sigma\in(0,1)\)), and obtain

$$ \zeta_{\sigma}(k) = m \zeta_{\sigma}(k-1) + O\biggl( \frac{1}{k^{1-\sigma}}\biggr). $$
(26)

We define a supporting sequence: \(\zeta(0)=\zeta_{\sigma}(0)\) and \(\zeta(k)=m\zeta(k-1)\). We get

$$ \bigl|\zeta(k)-\zeta_{\sigma}(k)\bigr| \leq W\sum_{i=1}^{k-1} \frac {m^{i}}{(k-i)^{1-\sigma}} $$
(27)

for some positive constant W. We use again the Stolz theorem and obtain that sequences, {ζ(k)} and {ζ σ (k)}, have the same limit hence it is equal to 0. We can take θ=1+σ and the Proposition is proved.

We have also obtained the formula for \(d(w_{1})\) appearing in (6),

$$ d(w_{1}) = \frac{\eta}{(1-(1+C)w_{1})^{2}-(A-B(1+C))}. $$
(28)

From (21) we get

$$ \eta=-2\,{\frac{ ( 2\,{w_1} - 1 ) {w_1}\, ( {w_1}^{2}-{w_1} +1 ) }{ ( w_1 -1 ) ( w_1+1 ) ^{2}}} $$
(29)

and finally

$$ \beta(w_1)={\frac{2 {w_1}^{2}-2 {w_1}+3}{2 {w_1}^{2}-2 {w_1}+1}}. $$
(30)
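
The algebra leading from (6), (18), (28) and (29) to (30) can again be checked mechanically, e.g. with the following sympy snippet (ours):

```python
import sympy as sp

w1 = sp.symbols('w1', positive=True)
w2 = 1 - w1

# Limits (18), eta from (29), d from (28), beta from (6).
A = (w1 / w2) / (1 + 1 / w1)
B = -w1 / (1 + w1)
C = -(1 + 1 / w2) / (1 + 1 / w1)
eta = -2 * (2 * w1 - 1) * w1 * (w1**2 - w1 + 1) / ((w1 - 1) * (w1 + 1)**2)
d = eta / ((1 - (1 + C) * w1)**2 - (A - B * (1 + C)))
beta = 5 + d * (1 / w1 - 1 / w2)

# Difference with the closed form (30); should simplify to 0.
print(sp.simplify(beta - (2 * w1**2 - 2 * w1 + 3) / (2 * w1**2 - 2 * w1 + 1)))
```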

Let us observe that the ϵ expansion of (30) around \(w_{1}=1/2\) agrees with (14).
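
Explicitly, setting \(w_{1}=1/2+\epsilon\) in (30) gives

$$ \beta\biggl(\frac{1}{2}+\epsilon\biggr) = \frac{\frac{5}{2}+2\epsilon^{2}}{\frac{1}{2}+2\epsilon^{2}} = 5 - 16\epsilon^{2} + O\bigl(\epsilon^{4}\bigr), $$

in agreement with (14).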

4 Conclusions

We introduced a coupled dynamics of growing scale-free graphs with evolving vertex weights. In our generalized preferential attachment procedure, the probability of a new link is proportional to the product of the degree and the weight of a given vertex. Vertex weights evolve as the graph grows. Our main general result is that a sufficiently fast convergence of the fraction of vertices with a given weight among vertices of a given degree implies a power law for the overall degree distribution. We derive analytically the power-law exponents and show that they depend on the parameters of the weight dynamics. Our approach involves two-dimensional recurrence relations, as opposed to the one-dimensional relation of the original Barabási-Albert model. To the best of our knowledge, our model is the first one with a coupled dynamics of graph growth and evolution of vertex weights (fitnesses) for which power-law exponents have been derived analytically. The methods developed here can be used in other models of growing graphs.