1 Introduction

Random sequential adsorption (RSA) refers to a process in which particles appear sequentially at random positions in some space, and if accepted, remain at those positions forever. This strong form of irreversibility is often observed in dynamical interacting particle systems; see [5, 6, 13, 19, 21] and the references therein for many applications across various fields of science. One example concerns particle systems with hard-core interaction, in which a particle is accepted only when no other particle is already present in its direct neighborhood. In a continuum, the hard-core constraint says that particles should be separated by at least some fixed distance.

Certain versions of RSA are called parking problems [22], where cars of a certain length arrive at random positions on some interval (or on \(\mathbbm {R}\)). Each car sticks to its location if it does not overlap with any of the cars already present. The fraction of space occupied when there is no more room for further cars is known as Rényi’s parking constant. RSA or parking problems were also studied on random trees [8, 24], where the nodes of an infinite random tree are selected one by one; a selected node is declared active if none of its neighbors is already active, and becomes frozen otherwise.

We will study RSA on random graphs, where as in a tree, nodes either become active or frozen. We are interested in the fraction of active nodes in the large-network limit when the number of nodes n tends to infinity. We call this limiting fraction the jamming constant; it can be interpreted as the counterpart of Rényi’s parking constant, but now for random graphs. For classical RSA with nearest-neighbor blocking, the jamming constant corresponds to the normalized size of a greedy maximal independent set, where at each step one vertex is selected uniformly at random from the set of all vertices that have not been selected yet, and is included in the independent set if none of its neighbors is already included. The size of the greedy maximal independent set of an Erdős–Rényi random graph was first considered in [15]; see Remark 1 below. Recently, jamming constants for the Erdős–Rényi random graph were studied in [2, 23], and for random graphs with given degrees in [1, 3, 4]. In [3], random graphs were used to model wireless networks, in which nodes (mobile devices) try to activate after random times, and can only become active if none of their neighbors is active (transmitting). When the size of the wireless network becomes large and nodes try to activate after a short random time independently of each other, the jammed state with a maximal number of active nodes becomes the dominant state of the system. In [23], random graphs with nearest-neighbor blocking were used to model a Rydberg gas with repelling atoms. In ultra-cold conditions, the repelling atoms with quantum interaction favor a jammed state, or frozen disorder, and in [23] it was shown that this jammed state could be captured in terms of the jamming limit of a random graph, with specific choices for the free parameters of the random graph to fit the experimental setting.

In this paper we consider three generalizations of RSA on the Erdős–Rényi random graph. The generalizations cover a wide variety of models, where the interaction between the particles is repellent, but not as stringent as nearest-neighbor blocking. The first generalization is inspired by wireless networks. Suppose that each active node causes one unit of noise to all its neighboring nodes. Further, a node is allowed to transmit (and hence to become active) unless it senses too much noise, or causes too much noise to some already active node. We assume that there is a threshold value K such that a node is allowed to become active only when the total noise experienced at that node is less than K, and the total noise that the activation of this node would cause at each of its neighboring active nodes remains below K. We call this the Threshold model. In the jammed state, all active nodes have fewer than K active neighbors, and all frozen nodes would violate this condition when becoming active. This condition relaxes the strict hard-core constraint (\(K=1\)) and, combined with RSA, produces a greedy maximal K-independent set, defined as a subset of vertices U in which each vertex has at most \(K-1\) neighbors in U. The Threshold model was studied in [17, 18] on two-dimensional grids in the context of distributed message spreading in wireless networks.

The second generalization considers a multi-frequency or multi-color version of classical RSA. There are K different frequencies available. A node can only receive a ‘higher’ frequency than any of its already active neighbors. Otherwise the node gets frozen. As in the Threshold model, the case \(K=1\) reduces to the classical hard-core constraint. But for \(K\ge 2\), this multi-frequency version gives different jammed states, and is also known as RSA with screening or the Tetris model [10]. In the Tetris model, particles sequentially drop from the sky, at random locations (nodes in the case of graphs), and stick at a height that is one unit higher than the heights of the particles that occupy neighboring locations. This model has been studied in the context of ballistic particle deposition [16], where particles dropping vertically onto a surface stick to a location when they hit either a previously deposited particle or the surface.

The third generalization concerns the random Sequential Frequency Assignment Process (SFAP) [9, 11]. As in the Tetris model, there are K different frequencies, and a node cannot use a frequency at which one of its neighbors is already transmitting. But this time, a new node selects the lowest available frequency. If no frequency is available (i.e., all K frequencies are taken by its neighbors), the node becomes frozen. The SFAP model can be used as a simple and easy-to-implement algorithm for determining interference-free frequency assignments in radio communications regulatory services [11].

The paper is structured as follows. Section 2 describes the models in detail and presents the main results. We quantify how the jamming constant depends on the value of K and the edge density of the graph. Section 3 gives the proofs of all the results and Section 4 describes some further research directions.

1.1 Notation and Terminology

We denote an Erdős–Rényi random graph on the vertex set \([n]=\{1,2,\dots ,n\}\) by \(G(n,p_n)\), where for any \(u\ne v\), (u, v) is an edge of \(G(n,p_n)\) with probability \(p_n=c/n\) for some \(c>0\), independently for all distinct (u, v)-pairs. We often mean by \(G(n,p_n)\) the distribution of all possible configurations of the Erdős–Rényi random graph with parameters n and \(p_n\), and we sometimes omit the sub-/superscript n when it is clear from the context. The symbol \({\mathbbm {1}}_A\) denotes the indicator random variable corresponding to the set A. An empty sum and an empty product are always taken to be zero and one, respectively. We use calligraphic letters such as \(\mathcal {A},\) \(\mathcal {I}\), to denote sets, and the corresponding normal fonts such as A, I, to denote their cardinality. Also, for discrete functions \(f:\{0,1,\dots \}\mapsto \mathbbm {R}\) and \(x>0\), f(x) should be understood as \(f({\left\lfloor x \right\rfloor })\). Boldfaced notation such as \({\varvec{x}}\), \({\varvec{\delta }}\) is reserved for vectors, and \({\left\| \cdot \right\| }\) denotes the sup-norm on the Euclidean space. The convergence in distribution statements for processes are to be understood as uniform convergence over compact sets.

2 Main Results

We now present the three models in three separate sections. For each model, we describe an algorithm that lets the graph grow and simultaneously applies RSA. Asymptotic analysis of the algorithms in the large-graph limit \(n\rightarrow \infty \) then leads to characterizations of the jamming constants.

2.1 Threshold Model

For any graph G with vertex set V, let \(d_{\max }(G)\) denote the maximum degree, and denote the subgraph induced by \(U\subset V\) as \(G_{\scriptscriptstyle U}\). Define the configuration space as

$$\begin{aligned} \Omega _K(G)=\{U\subset V:d_{\max }(G_{\scriptscriptstyle U})<K\}. \end{aligned}$$
(1)

We call any member of \(\Omega _K(G)\) a K-independent set of G. Now consider the following process on \(G(n,p_n)\): Let \(\mathcal {I}(t)\) denote the set of active nodes at time t, and \(\mathcal {I}(0):=\varnothing \). Given \(\mathcal {I}(t)\), at step \(t+1\), one vertex v is selected uniformly at random from the set of all vertices that have not been selected yet, and if \(d_{\max }(G_{\scriptscriptstyle \mathcal {I}(t)\cup \{v\}})<K\), then set \(\mathcal {I}(t+1)=\mathcal {I}(t)\cup \{v\}\). Otherwise, set \(\mathcal {I}(t+1)=\mathcal {I}(t).\) Note that, given the graph \(G(n,p_n)\), \(\mathcal {I}(t)\) is a random element of \(\Omega _K(G(n,p_n))\) for each t, and after n steps we get a greedy maximal K-independent set. We are interested in the jamming fraction \(I(n)/n\) as n grows large, and we call the limiting value, if it exists, the jamming constant.
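As a concrete illustration, this greedy process is straightforward to simulate. The following minimal Python sketch (our own illustration; the function name and parameter choices are not from the paper) samples \(G(n,c/n)\), runs the dynamics described above in a uniformly random order, and returns the jamming fraction \(I(n)/n\).

```python
import random

def threshold_rsa(n, c, K, seed=None):
    """Greedy maximal K-independent set on G(n, c/n); returns I(n)/n."""
    rng = random.Random(seed)
    p = c / n
    adj = [[] for _ in range(n)]          # sample G(n, p) as adjacency lists
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    active = [False] * n
    deg_act = [0] * n                     # number of active neighbors per vertex
    order = list(range(n))
    rng.shuffle(order)                    # uniform selection without replacement
    for v in order:
        # v activates iff it has < K active neighbors and every active
        # neighbor keeps fewer than K active neighbors after v joins.
        if deg_act[v] < K and all(deg_act[w] < K - 1
                                  for w in adj[v] if active[w]):
            active[v] = True
            for w in adj[v]:
                deg_act[w] += 1
    return sum(active) / n

print(threshold_rsa(n=1000, c=5.0, K=2, seed=1))
```

In line with the simulations reported below (see Fig. 1), single runs at \(n=1000\) already lie close to the limiting jamming constant of Theorem 1.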

To analyze the jamming constant for the Threshold model, we introduce an exploration algorithm that generates both the random graph and the greedy K-independent set simultaneously. The algorithm thus outputs a maximal K-independent set equal in distribution to \(\mathcal {I}(n)\).

Algorithm 1

(Threshold exploration) At time t, we keep track of the sets \(\mathcal {A}_k(t)\) of active vertices that have precisely k active neighbors, for \(0\le k\le K-1\), the set \(\mathcal {B}(t)\) of frozen vertices, and the set \(\mathcal {U}(t)\) of unexplored vertices. Initialize by setting \(\mathcal {A}_k(0)=\mathcal {B}(0)=\varnothing \) for \(0\le k\le K-1\), and \(\mathcal {U}(0)=V\). Define \(\mathcal {A}(t):=\bigcup _{k}\mathcal {A}_k(t)\). At time \(t+1\), if \(\mathcal {U}(t)\) is nonempty, we select a vertex v from \(\mathcal {U}(t)\) uniformly at random and try to pair it with the vertices of \(\mathcal {A}(t)\cup \mathcal {B}(t)\), mutually independently, with probability \(p_n\). Suppose the set of all vertices in \(\mathcal {A}(t)\) to which the vertex v is paired is given by \(\{v_1,\dots ,v_r\}\) for some \(r\ge 0\), where for all \(i\le r\), \(v_i\in \mathcal {A}_{k_i}(t)\) for some \(0\le k_i\le K-1\). Then:

  • If \(r<K\) and \(v_i\notin \mathcal {A}_{K-1}(t)\) for all \(i\le r\) (i.e. \(\max _i k_i<K-1\)), then put v in \(\mathcal {A}_r(t)\) and move each \(v_i\) from \(\mathcal {A}_{k_i}(t)\) to \(\mathcal {A}_{k_i+1}(t)\). More precisely, set

    $$\begin{aligned}&\mathcal {A}_r(t+1)=\mathcal {A}_r(t)\cup \{v\},&\mathcal {A}_{k_i}(t+1)&=\mathcal {A}_{k_i}(t)\setminus \{v_1,\dots ,v_r\},\\&\mathcal {A}_{k_i+1}(t+1)= \mathcal {A}_{k_i+1}(t)\cup \{v_i\},&\mathcal {B}(t+1)&=\mathcal {B}(t),\\&\mathcal {U}(t+1)=\mathcal {U}(t)\setminus \{v\}.&\\ \end{aligned}$$
  • Otherwise, if \(r\ge K\) or \(\mathcal {A}_{K-1}(t)\cap \{v_1,\dots ,v_r\} \ne \varnothing \), declare v to be blocked, i.e. \(\mathcal {B}(t+1)= \mathcal {B}(t)\cup \{v\}\), \(\mathcal {A}_{k}(t+1)= \mathcal {A}_{k}(t)\) for all \(0\le k\le K-1\) and \(\mathcal {U}(t+1)=\mathcal {U}(t)\setminus \{v\}\).

The algorithm terminates at \(t=n\) and produces as output the set \(\mathcal {A}(n)\) and a graph \(\mathcal {G}(n)\). The following result guarantees that we can use Algorithm 1 for analyzing the Threshold model:

Proposition 1

The joint distribution of \((\mathcal {G}(n),\mathcal {A}(n))\) is identical to the joint distribution of \((G(n,p_n),\mathcal {I}(n))\).

Observe that \(|\mathcal {U}(t)|=n-t\). Our goal is to find the jamming constant, i.e. the asymptotic value of A(n)/n. For that purpose, define \(\alpha ^n_k(t):=A_k({\left\lfloor nt \right\rfloor })/n\), and the vector \({\varvec{\alpha }}^n(t)=(\alpha ^n_0(t),\ldots ,\alpha ^n_{K-1}(t))\) for \(t\in [0,1]\). We can now state the main result for the Threshold model.

Theorem 1

(Threshold jamming limit) The process \(\{{\varvec{\alpha }}^n(t)\}_{0\le t\le 1}\) on \(G(n,p_n)\), with \(p_n=c/n\), converges in distribution to the deterministic process \(\{{\varvec{\alpha }}(t)\}_{0\le t\le 1}\) that can be described as the unique solution of the integral recursion equation

$$\begin{aligned} \alpha _k(t)=\int _0^t\delta _k({\varvec{\alpha }}(s)){\mathrm {d}}s, \end{aligned}$$
(2)

where

$$\begin{aligned} \delta _k({\varvec{\alpha }})= {\left\{ \begin{array}{ll} -c\alpha _{0}{\mathrm {e}}^{-c\alpha _{\scriptscriptstyle \le K-1}}\sum \nolimits _{r=0}^{K-2}c^r\alpha _{\scriptscriptstyle \le K-2}^r/r!+{\mathrm {e}}^{-c\alpha _{\scriptscriptstyle \le K-1}}, &{}\quad k=0,\\ c(\alpha _{k-1}-\alpha _k){\mathrm {e}}^{-c\alpha _{\scriptscriptstyle \le K-1}}\sum \nolimits _{r=0}^{K-2}c^r\alpha _{\scriptscriptstyle \le K-2}^r/r! &{}\quad 1\le k\le K-2,\\ \qquad \qquad \quad +{\mathrm {e}}^{-c\alpha _{\scriptscriptstyle \le K-1}}c^k\alpha _{\scriptscriptstyle \le K-2}^k/k!,&{}\\ c\alpha _{K-2}{\mathrm {e}}^{-c\alpha _{\scriptscriptstyle \le K-1}}\sum \nolimits _{r=0}^{K-2}c^r\alpha _{\scriptscriptstyle \le K-2}^r/r! &{}\quad k=K-1,\\ \qquad \qquad \quad +{\mathrm {e}}^{-c\alpha _{\scriptscriptstyle \le K-1}}c^{K-1}\alpha _{\scriptscriptstyle \le K-2}^{K-1}/(K-1)!, \end{array}\right. } \end{aligned}$$
(3)

with \(\alpha _{\scriptscriptstyle \le k}=\alpha _0+\dots +\alpha _k\). Consequently, as \(n\rightarrow \infty \), the jamming fraction converges in distribution to a constant, i.e.,

$$\begin{aligned} \frac{I(n)}{n}{\xrightarrow {d}}\sum _{k=0}^{K-1}\alpha _k(1). \end{aligned}$$
(4)

Figure 1 displays some numerical values for the fraction of active nodes given by \( \sum _{k=0}^{K-1}\alpha _k(1)\), as a function of the average degree c and the threshold K. As expected, an increased threshold K results in a larger fraction. Figure 1 also shows prelimit values of this fraction for a finite network of \(n=1000\) nodes. These values are obtained by simulation, where for each value of c we show the result of one run only. This leads to the rougher curves that closely follow the smooth deterministic curves of the jamming constants. If we had plotted the average values of multiple simulation runs, 100 say, this average simulated curve would be virtually indistinguishable from the smooth curve. This not only confirms that our limiting result is correct, but it also indicates that the limiting results serve as good approximations for finite-sized networks. We have drawn similar conclusions based on extensive simulations for all the jamming constants presented in this section.

Fig. 1

Fraction of active nodes as a function of c, for \(0\le c\le 10\) and several K-values. The smooth lines display \(\sum _{k=0}^{K-1}\alpha _k(1)\) and the rough lines follow from simulation of a network with \(n=1000\) nodes
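The smooth curves in Fig. 1 can be reproduced by integrating the system (2)–(3) with a forward-Euler scheme. The sketch below is our own illustration (the number of steps is an arbitrary discretization parameter); for \(K=1\) it recovers \(\log (1+c)/c\), in line with Remark 1 below.

```python
import math

def threshold_jamming(c, K, steps=100000):
    """Forward-Euler integration of the drift (3) on [0, 1].

    Returns the jamming constant sum_k alpha_k(1)."""
    dt = 1.0 / steps
    alpha = [0.0] * K                       # alpha_0, ..., alpha_{K-1}
    for _ in range(steps):
        s_le_km1 = sum(alpha)               # alpha_{<= K-1}
        s_le_km2 = s_le_km1 - alpha[K - 1]  # alpha_{<= K-2}
        # sum_{r=0}^{K-2} (c * alpha_{<=K-2})^r / r!  (an empty sum is zero)
        tail = sum((c * s_le_km2) ** r / math.factorial(r)
                   for r in range(K - 1))
        e = math.exp(-c * s_le_km1)
        delta = []
        for k in range(K):
            prev = alpha[k - 1] if k >= 1 else 0.0   # alpha_{-1} := 0
            nxt = alpha[k] if k <= K - 2 else 0.0    # no loss term at k = K-1
            move = c * (prev - nxt) * e * tail       # flow A_{k-1} -> A_k
            accept = e * (c * s_le_km2) ** k / math.factorial(k)
            delta.append(move + accept)
        alpha = [a + dt * d for a, d in zip(alpha, delta)]
    return sum(alpha)

c = 5.0
print(threshold_jamming(c, K=1), math.log(1 + c) / c)   # should nearly agree
```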

Remark 1

It can be checked that Theorem 1 gives the known jamming constant for \(K=1\). In this case, (2) reduces to

$$\begin{aligned} \alpha _0(t)=\int _0^t{\mathrm {e}}^{-c\alpha _0(s)}{\mathrm {d}}s \end{aligned}$$
(5)

with \(\alpha _0(0)=0\). Indeed, (5) is the integral form of the ODE \(\alpha _0'(t)={\mathrm {e}}^{-c\alpha _0(t)}\), which is solved by \(\alpha _0(t)=c^{-1}\log (1+ct)\). Thus the value of the jamming constant becomes \(\alpha _0(1)=c^{-1}\log (1+c)\), which agrees with the known value [15, Theorem 2.2(ii)].

Remark 2

Theorem 1 can be understood intuitively as follows. Observe that when a vertex v is selected from \(\mathcal {U}\), it will only be added to \(\mathcal {A}_k\) if it is not connected to \(\mathcal {A}_{K-1}\), and it has precisely \(k\le K-1\) connections to the rest of \(\mathcal {A}\). Further, if the selected vertex v becomes active, then all the vertices in \(\mathcal {A}_j\), to which v gets connected, are moved to \(\mathcal {A}_{j+1}\), \(0\le j<K-1\). The number of connections to \(\mathcal {A}_{k}\), in this case, is Bin\((A_{k},p_n)\) and that to \(\bigcup _{i=0}^{K-2}\mathcal {A}_i\) is Bin\((A_{\scriptscriptstyle \le K-2},p_n)\), with the additional restriction that the latter is less than or equal to \(K-1\). The expectation of Bin\((A_{k},p_n)\) restricted to the event Bin\((A_{\scriptscriptstyle \le K-2},p_n)\le K-1\) is given by \(A_kp_n\) times another binomial probability (see Lemma 1). This explains the first terms on the right side of (3). Finally, taking into account the probability of acceptance to \(\mathcal {A}_k\) gives rise to the second terms on the right side of (3).

Remark 3

Algorithm 1 is different in spirit from the exploration algorithms in the recent works [4, 23]. The standard greedy algorithms in [4, 23] work as follows: Given a graph G with n vertices, include the vertices in an independent set I consecutively, where at each step one vertex is chosen randomly from those not already in the set, nor adjacent to a vertex in the set. This algorithm must find all vertices adjacent to I as each new vertex is added to I, which requires probing all the edges adjacent to vertices in I. However, since in the Threshold model with \(K\ge 2\) an active node does not obstruct its neighbors from activation per se, we need to keep track of the nodes that are neither active nor blocked. We deal with this additional complexity by simply observing that the activation of a node is determined by exploring the connections with the previously active vertices only. Therefore, Algorithm 1 only describes the connections between the new (and potentially active) vertex and the already active vertices (and the frozen vertices, in order to complete the construction of the graph). Since the graph is built one vertex at a time, the jamming state is reached precisely at time \(t=n\), and not at some random time between 1 and n as in [4, 23].

Remark 4

For the other two RSA generalizations discussed below we will use a similar algorithmic approach, building and exploring the graph one vertex at a time. These algorithms form a crucial ingredient of this paper because they make the RSA processes amenable to analysis.

2.2 Tetris Model

In the Tetris model, particles are sequentially deposited on the vertices of a graph. For a vertex v, the incoming particle sticks at some height \(h_v\in [K]=\{1,2,\ldots ,K\}\) determined by the following rules: At time \(t=0\), initialize by setting \(h_v(0)=0\) for all \(v\in V\). Given \(\{h_v(t):v\in V\}\), at time \(t+1\), one vertex u is selected uniformly at random from the set of all vertices that have not been selected yet. Set \(h_u(t+1)=\max \{h_w(t):w\in V_u\}+1\) if \(\max \{h_w(t):w\in V_u\}<K\), where \(V_u\) is the set of neighboring vertices of u, and set \(h_u(t+1)=0\) otherwise. Observe that the height of a vertex can change only once, and in the jammed state no further vertex at zero height can achieve non-zero height. Note that K now has a different interpretation than in the Threshold model. In the Tetris model, the number of possible states of any vertex ranges from 0 to K, whereas in the Threshold model the vertices have only two possible states (active/frozen), and K determines “the flexibility” in the acceptance criterion. We are interested in the height distribution in the jammed state. Define \(\mathcal {N}_i(t):=\{v:h_v(t)=i\}\) and \(N_i(t)=|\mathcal {N}_i(t)|\). We study the scaled version of the vector \((N_1(n),\ldots , N_K(n))\) and refer to \(N_i(n)/n\) as the jamming density of height i.
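These local rules are easy to simulate directly; the sketch below (our own illustration, with hypothetical names) samples \(G(n,c/n)\), deposits particles in a uniformly random order, and returns the empirical jamming densities \(N_k(n)/n\).

```python
import random

def tetris_rsa(n, c, K, seed=None):
    """Tetris dynamics on G(n, c/n); returns [N_1/n, ..., N_K/n]."""
    rng = random.Random(seed)
    p = c / n
    adj = [[] for _ in range(n)]          # sample G(n, p) as adjacency lists
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    height = [0] * n
    order = list(range(n))
    rng.shuffle(order)
    for v in order:
        top = max((height[w] for w in adj[v]), default=0)
        if top < K:
            height[v] = top + 1           # stick one unit above highest neighbor
        # else: v stays at height 0 (frozen)
    return [height.count(k) / n for k in range(1, K + 1)]

print(tetris_rsa(n=1000, c=5.0, K=2, seed=1))
```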

Again, we assume that the underlying interference graph is an Erdős–Rényi random graph on n vertices with independent edge probabilities \(p_n=c/n\), and we use a suitable exploration algorithm that generates the random graph and the height distribution simultaneously.

Algorithm 2

(Tetris exploration) At time t, we keep track of the set \(\mathcal {A}_k(t)\) of vertices at height k, for \(0\le k\le K\), and the set \(\mathcal {U}(t)\) of unexplored vertices. Initialize by putting \(\mathcal {A}_{k}(0)=\varnothing \) for \(0\le k\le K\) and \(\mathcal {U}(0)=V\). Define \(\mathcal {A}(t):=\bigcup _{k}\mathcal {A}_k(t)\). At time \(t+1\), if \(\mathcal {U}(t)\) is nonempty, we select a vertex v from \(\mathcal {U}(t)\) uniformly at random and try to pair it with the vertices of \(\mathcal {A}(t)\), independently, with probability \(p_n\). Suppose that the set of all vertices in \(\mathcal {A}(t)\) to which the vertex v is paired, is given by \(\{v_1,\dots ,v_r\}\) for some \(r\ge 0\), where each \(v_i\in \mathcal {A}_{k_i}(t)\) for some \(0\le k_i\le K\). Then:

  • When \(\max _{i\in [r]} k_i\le K-1\) (with the convention that the maximum over an empty set is zero), set \(h_v(t+1)=\max _{i\in [r]} k_i+1\), and \(h_u(t+1)=h_u(t)\) for all \(u\ne v\).

  • Otherwise \(h_u(t+1)=h_u(t)\) for all \(u\in V\).

The algorithm terminates at time \(t=n\), when \(\mathcal {U}(t)\) becomes empty, and outputs the vector \((\mathcal {A}_1(n),\ldots ,\mathcal {A}_K(n))\) and a graph \(\mathcal {G}(n)\).

Proposition 2

The joint distribution of \((\mathcal {G}(n),\mathcal {A}_1(n),\ldots ,\mathcal {A}_K(n))\) is identical to that of \((G(n,p_n), \mathcal {N}_1(n),\ldots , \mathcal {N}_K(n))\).

Due to Proposition 2 the desired height distribution can be obtained from the scaled output produced by Algorithm 2. Define \(\alpha _k^n(t)=A_k(nt)/n\) as before. The main result for the Tetris model then reads as follows:

Theorem 2

(Tetris jamming limit) The process \(\{{\varvec{\alpha }}^n(t)\}_{0\le t\le 1}\) on the graph \(G(n,c/n)\) converges in distribution to the deterministic process \(\{{\varvec{\alpha }}(t)\}_{0\le t\le 1}\) that can be described as the unique solution of the integral recursion equation

$$\begin{aligned} \alpha _k(t)=\int _0^t\delta _k({\varvec{\alpha }}(s)){\mathrm {d}}s, \end{aligned}$$
(6)

where

$$\begin{aligned} \delta _k({\varvec{\alpha }})= {\left\{ \begin{array}{ll} \big ( 1-{\mathrm {e}}^{-c\alpha _{k-1}}\big ){\mathrm {e}}^{-c(\alpha _k+\dots +\alpha _{\scriptscriptstyle K})}, \quad &{}\mathrm {for }\ k\ge 2,\\ {\mathrm {e}}^{-c(\alpha _1+\dots +\alpha _K)}, &{}\mathrm {for }\ k=1. \end{array}\right. } \end{aligned}$$
(7)

Consequently, the jamming density of height k converges in distribution to a constant, i.e., for all \(1\le k\le K\),

$$\begin{aligned} \frac{N_k(n)}{n}{\xrightarrow {d}}\alpha _k(1), \quad \mathrm{as} \ n\rightarrow \infty . \end{aligned}$$
(8)

Figure 2 shows the jamming densities of the different heights for \(K=2,3,4\) and increasing average degree c. Observe that in general the jamming heights do not obey a natural order. For \(K=2\), for instance, the order of active nodes at heights one and two changes around \(c\approx 4.4707\). Similar regime switches occur for larger K-values as well. In general, for relatively sparse graphs with small c, the density of active nodes can be seen to decrease with the height, possibly due to the presence of many small subgraphs (like isolated vertices or pairs of vertices). But as c increases, the screening effect becomes prominent, and the densities increase with the height. Related phenomena have been observed for parking on \(\mathbbm {Z}\) [9], and on a random tree [10]. However, the models considered in [9, 10] are different from the ones considered here in the sense that the heights in [9, 10] are unbounded (there are no frozen sites as in this paper). Furthermore, Fig. 3 displays the fraction of active nodes as a function of the average degree. Notice here also that the jamming constant increases with K, as expected.

Fig. 2

Jamming densities of the different heights in the Tetris model as a function of c for \(0\le c\le 20\)

Theorem 2 also gives the jamming constant \(\alpha _1(1)+\cdots +\alpha _K(1)\) for the limiting fraction of active, or non-zero height, nodes. For \(K=1\) this corresponds to the fraction of nodes contained in the greedy maximal independent set, and as expected, relaxing the hard constraint by introducing more than one height (\(K\ge 2\)) considerably increases this fraction.

Fig. 3

Fraction of active nodes in the Tetris model as a function of c for \(0\le c\le 10\)
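The curves in Figs. 2 and 3 follow from integrating (6)–(7). Below is a minimal forward-Euler sketch of ours (the step count is an arbitrary discretization choice); evaluated at \(c=4.4707\) with \(K=2\) it shows \(\alpha _1(1)\) and \(\alpha _2(1)\) nearly coinciding, consistent with the crossing point mentioned above.

```python
import math

def tetris_densities(c, K, steps=20000):
    """Forward-Euler integration of the Tetris drift (7) on [0, 1].

    Returns [alpha_1(1), ..., alpha_K(1)]."""
    dt = 1.0 / steps
    alpha = [0.0] * (K + 1)               # indices 1..K used; alpha[0] unused
    for _ in range(steps):
        delta = [0.0] * (K + 1)
        delta[1] = math.exp(-c * sum(alpha[1:]))
        for k in range(2, K + 1):
            delta[k] = ((1 - math.exp(-c * alpha[k - 1]))
                        * math.exp(-c * sum(alpha[k:])))
        for k in range(1, K + 1):
            alpha[k] += dt * delta[k]
    return alpha[1:]

a1, a2 = tetris_densities(4.4707, K=2)
print(a1, a2)      # nearly equal near the crossing point in Fig. 2
```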

Remark 5

As in the Threshold model, it can be observed that the number of connections to the set \(\mathcal {A}_i\) will be distributed approximately as Poi\((c\alpha _i)\). Now, at any step, a selected vertex v is added to \(\mathcal {A}_i\) if and only if it has a connection to at least some vertex in \(\mathcal {A}_{i-1}\), and has no connections with \(\bigcup _{j=i}^K\mathcal {A}_{j}\). The probability of this event can be recognized in the function \(\delta _k\).

2.3 Sequential Frequency Assignment Process (SFAP) Model

The SFAP works as follows: Each node can take one of K different frequencies indexed by \(\{1,2,\ldots ,K\}\), and neighboring nodes are not allowed to have identical frequencies, because this would cause a conflict. One can see that if the underlying graph G is not K-colorable, then a conflict-free frequency assignment to all the vertices is ruled out. The converse is also true: If there is a feasible K-coloring for the graph G, then there exists a conflict-free frequency assignment. Determining the optimal frequency assignment, in the sense of the maximum number of nodes receiving a frequency for transmission, can be seen to be NP-hard in general (notice that \(K=1\) gives the maximum independent set problem). This creates the need for distributed algorithms that generate a maximal (not necessarily maximum) conflict-free frequency assignment. The SFAP model provides such a distributed scheme [11]. As in the Threshold model and Tetris model, the vertices are selected one at a time, uniformly at random amongst those that have not yet been selected. A selected vertex probes its neighbors and selects the lowest available frequency. When all K frequencies are already taken by its neighbors, the vertex gets no frequency and is called frozen.
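In code, the SFAP is simply first-fit greedy coloring with K colors in a uniformly random vertex order; the following sketch (ours, with hypothetical names) returns the empirical jamming densities \(N_k(n)/n\).

```python
import random

def sfap_rsa(n, c, K, seed=None):
    """SFAP on G(n, c/n): first-fit with K frequencies; returns [N_1/n, ..., N_K/n]."""
    rng = random.Random(seed)
    p = c / n
    adj = [[] for _ in range(n)]          # sample G(n, p) as adjacency lists
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    freq = [0] * n                        # 0 = not yet selected, or frozen
    order = list(range(n))
    rng.shuffle(order)
    for v in order:
        taken = {freq[w] for w in adj[v]}
        # lowest frequency in {1, ..., K} not used by any neighbor, if any
        freq[v] = next((k for k in range(1, K + 1) if k not in taken), 0)
    return [freq.count(k) / n for k in range(1, K + 1)]

print(sfap_rsa(n=1000, c=5.0, K=4, seed=1))
```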

Denote by \(f_v(t)\) the frequency of the vertex v, and by \(\mathcal {N}_i(t)\) the set of all vertices using frequency i at time step t. As before, we are interested in the jamming density \(N_i(n)/n\) of each frequency \(1\le i\le K\). Again we consider the Erdős–Rényi random graph, and the exploration algorithm is quite similar to that of the Tetris model, except for different local rules for determining the frequencies.

Algorithm 3

(SFAP exploration) At time t, we keep track of the set \(\mathcal {A}_k(t)\) of vertices currently using frequency k, for \(1\le k\le K\), the set \(\mathcal {A}_0(t)\) of vertices that have been selected before time t, but did not receive a frequency (frozen), and the set \(\mathcal {U}(t)\) of unexplored vertices. Initialize by setting \(\mathcal {A}_{k}(0)=\varnothing \) for \(0\le k\le K\) and \(\mathcal {U}(0)=V\). Define \(\mathcal {A}(t):=\bigcup _{k}\mathcal {A}_k(t)\). At time \(t+1\), if \(\mathcal {U}(t)\) is nonempty, we select a vertex v from \(\mathcal {U}(t)\) uniformly at random and try to pair it with all vertices in \(\mathcal {A}(t)\), independently with probability \(p_n\). Suppose that the set of all vertices in \(\mathcal {A}(t)\setminus \mathcal {A}_0(t)\) to which the vertex v is paired is given by \(\{v_1,\dots ,v_r\}\) for some \(r\ge 0\), where each \(v_i\in \mathcal {A}_{k_i}(t)\) for some \(1\le k_i\le K\). Then:

  • If the set \(\mathcal {F}_v(t):=\{1,\dots ,K\}\setminus \{k_i:1\le i\le r\}\) of non-conflicting frequencies is nonempty, then assign the vertex v the frequency \(f_v(t+1)=\min \mathcal {F}_v(t)\), and \(f_u(t+1)=f_u(t)\) for all \(u\in \mathcal {A}(t)\).

  • Otherwise set \(f_v(t+1)=0\), and \(f_u(t+1)=f_u(t)\) for all \(u\in \mathcal {A}(t)\).

The algorithm terminates at time \(t=n\) and outputs \((\mathcal {A}_1(n),\ldots ,\mathcal {A}_K(n))\) and a graph \(\mathcal {G}(n)\). Again, we can show that this algorithm produces the right distribution.

Proposition 3

The joint distribution of \((\mathcal {G}(n),\mathcal {A}_1(n),\ldots ,\mathcal {A}_K(n))\) is identical to that of \((G(n,p_n),\mathcal {N}_1(n),\ldots , \mathcal {N}_K(n))\).

Again, define \(\alpha _k^n(t)=A_k(nt)/n\).

Theorem 3

(SFAP jamming limit) The process \(\{{\varvec{\alpha }}^n(t)\}_{0\le t\le 1}\) converges in distribution to the process \(\{{\varvec{\alpha }}(t)\}_{0\le t\le 1}\) that can be described as the unique solution to the deterministic integral recursion equation

$$\begin{aligned} \alpha _k(t)=\int _0^t\delta _k({\varvec{\alpha }}(s)){\mathrm {d}}s, \end{aligned}$$
(9)

where, for all \(1\le k\le K\),

$$\begin{aligned} \delta _k({\varvec{\alpha }})={\mathrm {e}}^{-c\alpha _k}\prod _{r=1}^{k-1}\big (1-{\mathrm {e}}^{-c\alpha _r}\big ). \end{aligned}$$
(10)

Consequently, the jamming density of frequency k converges in distribution to a constant, i.e. for all \(1\le k\le K\),

$$\begin{aligned} \frac{N_k(n)}{n}{\xrightarrow {d}}\alpha _k(1), \quad \mathrm{as} \ n\rightarrow \infty . \end{aligned}$$
(11)

It is straightforward to check that the system of equations in (9) has the solution

$$\begin{aligned} \alpha _1(t)&=\frac{1}{c}\log (1+ct), \nonumber \\ \alpha _i(t)&=\frac{1}{c}\log ({\mathrm {e}}^{c\alpha _{i-1}(t)}-c\alpha _{i-1}(t)),\quad \text{ for } i\ge 2. \end{aligned}$$
(12)

As in the Tetris model, the proportion of nodes with the same frequency is a relevant quantity. We plot the jamming densities for the first four frequencies for increasing values of c in Fig. 4a. Observe that in this case the density decreases with the frequency. The total fraction of active nodes is given by the sum of the jamming densities over all K frequencies, as displayed in Fig. 4b.

Fig. 4

SFAP model with \(K=4\)
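The closed form (12) makes these curves elementary to compute; the sketch below (our illustration) evaluates the recursion and prints \(\alpha _1(1),\ldots ,\alpha _K(1)\) for a sample parameter choice.

```python
import math

def sfap_densities(c, K, t=1.0):
    """Evaluate the closed form (12): alpha_1(t), ..., alpha_K(t)."""
    alphas = [math.log(1 + c * t) / c]              # alpha_1(t)
    for _ in range(2, K + 1):
        a = alphas[-1]
        alphas.append(math.log(math.exp(c * a) - c * a) / c)
    return alphas

print(sfap_densities(c=5.0, K=4))   # decreasing in the frequency index (Fig. 4a)
```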

Remark 6

Observe that at each step the newly selected vertex v is added to the set \(\mathcal {A}_k\) if and only if v has at least one connection to each of the sets \(\mathcal {A}_j\) with \(1\le j<k\), and has no connection with the set \(\mathcal {A}_k\). Further, as in the previous cases, since the number of connections to the set \(\mathcal {A}_j\), in the limit, is Poisson\((c\alpha _j)\) distributed, the probability of this event is given by the function \(\delta _k\).

Remark 7

In the random graph literature the SFAP model is used as a greedy algorithm for finding an upper bound on the chromatic number of the Erdős–Rényi random graph [15, 20]. However, the SFAP version in this paper uses a fixed K, which is why Theorem 3 does not approximate the chromatic number; instead it gives the fraction of vertices that can be colored in a greedy manner with K given colors.

3 Proofs

In this section we first prove Theorem 1 for the Threshold model. The proofs of Theorems 2 and 3 use similar ideas except for the precise details, which is why we present these proofs in a more concise form. For the same reason, we give the proof of Proposition 1 and skip the proofs of the similar results in Propositions 2 and 3.

3.1 Proof of Theorem 1

Proof of Proposition 1

The difference between the Threshold model and Algorithm 1 lies in the fact that the activation process in the Threshold model takes place on a given realization of \(G(n,p_n)\), whereas Algorithm 1 generates the graph sequentially. To see that \((\mathcal {G}(n),\mathcal {A}(n))\) is indeed distributed as \((G(n,p_n),\mathcal {I}(n))\), it suffices to produce a coupling such that the graphs \(G(n,p_n)\) and \(\mathcal {G}(n)\) are identical and \(\mathcal {I}(t)=\mathcal {A}(t)\) for all \(1\le t\le n\). For that purpose, associate an independent Uniform[0, 1] random variable \(U_{i,j}\) to each unordered pair (i, j), both in the Threshold model and in Algorithm 1, for \(1\le i<j\le n\). It can be seen that if we keep only those edges for which \(U_{i,j}\le p_n\), the resulting graph is distributed as \(G(n,p_n)\). Therefore, when we create edges in both graphs according to the same random variables \(U_{i,j}\), we ensure that \(G(n,p_n)=\mathcal {G}(n).\)

Now to select vertices uniformly at random from the set of all vertices that have not been selected yet, initially choose a random permutation of the set \(\{1,2,\ldots , n\}\) and denote it by \((\sigma _1,\sigma _2,\ldots ,\sigma _n)\). In both the Threshold model and Algorithm 1, at time t, select the vertex with index \(\sigma _t\). Now, at time t, Algorithm 1 only discovers the edges satisfying \(U_{\sigma _t,j}\le p_n\) for \(j\in \mathcal {A}(t)\). Observe that this is enough for deciding whether \(\sigma _t\) will be active or not. Therefore, if \(\sigma _t\) becomes active in the Threshold model, then it will become active in Algorithm 1 as well, and vice versa. We thus end up getting precisely the same set of active vertices in the original model and the algorithm, which completes the proof. \(\square \)
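This coupling can also be checked mechanically. The sketch below (our own illustration, not part of the proof) drives the static Threshold process and Algorithm 1 with the same uniforms \(U_{i,j}\) and the same permutation, and verifies that the two active sets coincide.

```python
import random

def run_coupled(n, c, K, seed=0):
    """Couple the static Threshold process with Algorithm 1; returns True iff
    the two constructions produce the same active set."""
    rng = random.Random(seed)
    p = c / n
    U = {(i, j): rng.random() for i in range(n) for j in range(i + 1, n)}
    def edge(i, j):
        return U[(min(i, j), max(i, j))] <= p
    sigma = list(range(n))
    rng.shuffle(sigma)                    # common selection order

    # Static Threshold process on the fully revealed graph G(n, p_n).
    deg_act = [0] * n
    active_static = set()
    for v in sigma:
        nbrs = [w for w in range(n) if w != v and edge(v, w)]
        if deg_act[v] < K and all(deg_act[w] < K - 1
                                  for w in nbrs if w in active_static):
            active_static.add(v)
            for w in nbrs:
                deg_act[w] += 1

    # Algorithm 1: reveal only edges towards already explored vertices.
    deg_alg = [0] * n
    active_alg, explored = set(), []
    for v in sigma:
        hits = [w for w in explored if w in active_alg and edge(v, w)]
        if len(hits) < K and all(deg_alg[w] < K - 1 for w in hits):
            active_alg.add(v)
            deg_alg[v] = len(hits)        # v enters A_r with r = len(hits)
            for w in hits:
                deg_alg[w] += 1
        explored.append(v)
    return active_static == active_alg

print(run_coupled(n=200, c=3.0, K=2))     # True: the constructions agree
```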

We now proceed to prove Theorem 1. The proof relies on a decomposition of the rescaled process as a sum of a martingale part and a drift part, and then showing that the martingale part converges to zero and the drift part converges to the appropriate limiting function. Let \(\xi _k^n(t+1)\) be the number of edges created at step \(t+1\) between the random vertex selected at step \(t+1\) and the vertices in \(\mathcal {A}_k(t)\). Also, for notational consistency, define \(\xi _{-1}^n\equiv 0 \), and let \(\xi ^n(t+1):=\sum _{k=0}^{K-1}\xi _k^n(t+1)\). Recall that an empty sum is taken to be zero. Note that, for any \(0\le k\le K-1\),

$$\begin{aligned} A_k^n(t+1)= A_k^n(t)+\zeta _{ k}^n(t+1), \end{aligned}$$
(13)

where

$$\begin{aligned} \zeta _{ k}^n(t+1)=\xi _{k-1}^n(t+1)-\xi _k^n(t+1)+ {\mathbbm {1}_{\left[ \xi ^{n}(t+1)=k\right] }} \end{aligned}$$
(14)

if \(\xi ^n(t+1)\le K-1\) and \(\xi ^n_{K-1}(t+1)=0\), and \(\zeta _{ k}^n(t+1)=0\) otherwise. To see this, observe that at time \(t+1\), if the number of new connections to the set of active vertices exceeds \(K-1\), or a connection is made to some active vertex that already has \(K-1\) active neighbors, then the newly selected vertex cannot become active. Otherwise, the newly selected vertex instantly becomes active, and if the total number of new connections to \(\mathcal {A}^n(t)\) is j for some \(j\le K-1\), then \(\xi ^n_{k}(t+1)\) vertices of \(\mathcal {A}_k(t)\) will now have \(k+1\) active neighbors, for \(0\le k\le K-2\), and the newly active vertex will be added to \(\mathcal {A}_j(t+1)\).

Observe that \(\{{\varvec{A}}^n(t)\}_{t\ge 0}=\{(A_0^n(t),\dots ,A_{\scriptscriptstyle K-1}^n(t))\}_{t\ge 0}\) is an \(\mathbbm {R}^K\)-valued Markov process. Moreover, for any \(0\le k\le K-1\), given the value of \({\varvec{A}}^n(t)\), \(\xi _k^n(t+1)\sim \mathrm {Bin}(A_k^n(t),p_n)\) and \(\xi _0^n(t+1),\dots ,\xi _{\scriptscriptstyle K-1}^n(t+1)\) are mutually independent when conditioned on \({\varvec{A}}^n(t)\). Write \(A_{\le r}^n(t)= A_0^n(t)+\dots +A_r^n(t)\). For a random variable \(X\sim \mathrm {Bin}(n,p)\), denote \(\mathsf {B}(n,p;k)=\mathbbm {P}\left( X\le k\right) \) and \(\mathsf {b}(n,p;k)=\mathbbm {P}\left( X= k\right) \). Now we need the following technical lemma:

Lemma 1

Let \(X_1,\dots ,X_r\) be r independent random variables with \(X_i\) distributed as \(\mathrm {Bin}(n_i,p)\). Then, for any \(1\le R\le \sum _{i=1}^rn_i\),

$$\begin{aligned} \mathbbm {E}\left( X_i|X_1+\dots +X_r\le R\right) =n_ip\frac{\mathbbm {P}\left( Z_1\le R-1\right) }{\mathbbm {P}\left( Z_2\le R\right) } \end{aligned}$$
(15a)

and

$$\begin{aligned} \mathbbm {E}\left( X_i(X_i-1)|X_1+\dots +X_r\le R\right) \le \frac{n_i(n_i-1)p^2}{\mathbbm {P}\left( Z_2\le R\right) }, \end{aligned}$$
(15b)

where \(Z_1\sim \mathrm {Bin}\left( \sum _{i=1}^rn_i-1,p\right) \) and \(Z_2\sim \mathrm {Bin}\left( \sum _{i=1}^rn_i,p\right) \).

Proof

Note that \(\mathbbm {E}\left( X_i|X_1+\dots +X_r=j\right) =n_ij/(n_1+\dots +n_r)\). Therefore,

$$\begin{aligned} \mathbbm {E}\left( X_i\right)&=\mathbbm {E}\left( X_i|X_1+\dots +X_r\le R\right) \mathbbm {P}\left( X_1+\dots +X_r\le R\right) \nonumber \\&+\sum _{j=R+1}^{n_1+\dots +n_r}\mathbbm {E}\left( X_i|X_1+\dots +X_r=j\right) \mathbbm {P}\left( X_1+\dots +X_r=j\right) . \end{aligned}$$
(16)

Thus, since

$$\begin{aligned} \frac{j}{p\sum _{i=1}^rn_i}\mathrm {b}\left( \sum _{i=1}^rn_i,p;j\right) = \mathrm {b}\left( \sum _{i=1}^rn_i-1,p;j-1\right) \end{aligned}$$
(17)

we get

$$\begin{aligned}&\mathbbm {E}\left( X_i|X_1+\dots +X_r\le R\right) \nonumber \\&\quad =\frac{n_ip}{\mathbbm {P}\left( X_1+\dots +X_r\le R\right) }\bigg (1-\frac{1}{p\sum _{i=1}^rn_i}\sum _{j=R+1}^{n_1+\dots +n_r}j \mathbbm {P}\left( X_1+\dots +X_r=j\right) \bigg ) \nonumber \\&\quad =\frac{n_ip}{\mathbbm {P}\left( Z_2\le R\right) }(1-\mathbbm {P}\left( Z_1\ge R\right) )=n_ip\frac{\mathbbm {P}\left( Z_1\le R-1\right) }{\mathbbm {P}\left( Z_2\le R\right) }. \end{aligned}$$
(18)

Further,

$$\begin{aligned} n_i(n_i-1)p^2&=\mathbbm {E}\left( X_i(X_i-1)\right) \nonumber \\&\ge \mathbbm {E}\left( X_i(X_i-1)|X_1+\dots +X_r\le R\right) \mathbbm {P}\left( Z_2\le R\right) \end{aligned}$$
(19)

and the proof is complete. \(\square \)
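Identity (15a) is easy to sanity-check by Monte Carlo; the sketch below (ours, with arbitrary test parameters) compares the empirical conditional mean with the right side of (15a).

```python
import math
import random

def binom_cdf(n, p, k):
    """P(Bin(n, p) <= k), computed directly from the pmf."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1))

def check_15a(ns=(7, 5, 4), p=0.3, R=4, i=0, samples=100000, seed=0):
    rng = random.Random(seed)
    total = count = 0
    for _ in range(samples):
        xs = [sum(rng.random() < p for _ in range(n)) for n in ns]
        if sum(xs) <= R:                  # condition on X_1 + ... + X_r <= R
            total += xs[i]
            count += 1
    lhs = total / count                   # empirical E(X_i | X_1+...+X_r <= R)
    N = sum(ns)
    rhs = ns[i] * p * binom_cdf(N - 1, p, R - 1) / binom_cdf(N, p, R)
    return lhs, rhs

print(check_15a())   # the two values should agree to Monte Carlo accuracy
```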

Using Lemma 1 we get, for \(0\le k\le K-2\), the following expected values:

$$\begin{aligned}&\mathbbm {E}\left( \xi _k^n(t+1){\mathbbm {1}_{\left[ \xi ^n(t+1) \le K-1, \xi ^n_{\scriptscriptstyle K-1}(t+1)=0\right] }}\big \vert {\varvec{A}}^n(t)\right) \nonumber \\&=A_k^n(t)p_n(1-p_n)^{A_{K-1}^n(t)}\mathsf {B}(A_{\scriptscriptstyle \le K-2}^n(t)-1,p_n; K-2), \end{aligned}$$
(20)

while for \(k=K-1\) the left side vanishes, since the indicator forces \(\xi ^n_{\scriptscriptstyle K-1}(t+1)=0\). Thus, for \(0\le k\le K-1\),

$$\begin{aligned}&\mathbbm {E}\left( \zeta ^n_k(t+1)|{\varvec{A}}^n(t)\right) \nonumber \\&=(A_{k-1}^n(t)-A_k^n(t){\mathbbm {1}}_{\left[ k<K-1\right] })p_n(1-p_n)^{A_{K-1}^n(t)}\mathsf {B}(A_{\scriptscriptstyle \le K-2}^n(t)-1,p_n; K-2) \nonumber \\&+\mathsf {b}(A^n_{\scriptscriptstyle \le K-2}(t),p_n;k)(1-p_n)^{A^n_{\scriptscriptstyle K-1}(t)}, \end{aligned}$$
(21)

where \(A_{-1}^n\equiv 0\). For \(\varvec{i}=(i_0,\dots ,i_{\scriptscriptstyle K-1})\in \{1,\ldots ,n\}^K\), define the drift function

$$\begin{aligned} \left. \Delta _k^n({\varvec{i}}):=\mathbbm {E}\left( \zeta ^n_k(t+1)\right| {\varvec{A}}^n(t)={\varvec{i}}\right) . \end{aligned}$$
(22)

Denote \(\delta _k^n(\varvec{\alpha }):=\Delta _k^n(n\varvec{\alpha })\) for \({\varvec{\alpha }}\in [0,1]^{K}\), and

$$\begin{aligned} {\varvec{\delta }}^n({\varvec{\alpha }}):=(\delta _0^n(\varvec{\alpha }),\ldots ,\delta _{K-1}^n(\varvec{\alpha })). \end{aligned}$$

Recall the definition of \({\varvec{\delta }}({\varvec{\alpha }})=(\delta _0({\varvec{\alpha }}),\dots , \delta _{K-1}({\varvec{\alpha }}))\) in (3).

Lemma 2

(Convergence of the drift function) The time-scaled drift function \({\varvec{\delta }}^n\) converges uniformly on \([0,1]^K\) to the Lipschitz-continuous function \({\varvec{\delta }}:[0,1]^K\mapsto [0,1]^K\).

Proof

Observe that \({\varvec{\delta }}(\cdot )\) is continuously differentiable, defined on a compact set, and hence, is Lipschitz continuous. Also, \({\varvec{\delta }}^n\) converges to \({\varvec{\delta }}\) point-wise, and since the functions \({\varvec{\delta }}^n\) are Lipschitz continuous with a constant that does not depend on n, the pointwise convergence upgrades to uniform convergence on the compact domain. \(\square \)

Recall that \(A_k^n(0)=0\) for \(0\le k\le K-1\). The Doob-Meyer decomposition of (13) gives

$$\begin{aligned} A_k^n(t)&=\sum _{i=1}^t\zeta _k^n(i)=M_k^n(t)+\sum _{i=1}^t\mathbbm {E}\left( \zeta _k^n(i)|{\varvec{A}}^n(i-1)\right) , \end{aligned}$$
(23)

where \((M_k^n(t))_{t\ge 1}\) is a locally square-integrable martingale. We can write

$$\begin{aligned} \alpha _k^n(t)&=\frac{M_k^n({\left\lfloor nt \right\rfloor })}{n}+\frac{1}{n}\sum _{i=1}^{{\left\lfloor nt \right\rfloor }}\Delta _k^n({\varvec{A}}^n(i-1))\nonumber \\&=\frac{M_k^n({\left\lfloor nt \right\rfloor })}{n}+\frac{1}{n}\int _0^{{\left\lfloor nt \right\rfloor }-1}\Delta _k^n({\varvec{A}}^n(s)){\mathrm {d}}s \nonumber \\&=\frac{M_k^n({\left\lfloor nt \right\rfloor })}{n}+\int _0^{t}\Delta _k^n({\varvec{A}}^n(ns)){\mathrm {d}}s-\int _{({\left\lfloor nt \right\rfloor }-1)/n}^t\Delta _k^n({\varvec{A}}^n(ns)){\mathrm {d}}s \nonumber \\&= \frac{M_k^n({\left\lfloor nt \right\rfloor })}{n}+\int _0^{t}\delta _k^n({\varvec{\alpha }}^n(s)){\mathrm {d}}s-\int _{({\left\lfloor nt \right\rfloor }-1)/n}^t\delta _k^n({\varvec{\alpha }}^n(s)){\mathrm {d}}s. \end{aligned}$$
(24)

First we show that the martingale terms converge to zero.

Lemma 3

For all \(0\le k\le K-1\), as \(n\rightarrow \infty \)

$$\begin{aligned} \sup _{s\in [0,1]}\frac{|M_k^n(ns)|}{n}{\xrightarrow {d}}0. \end{aligned}$$
(25)

Proof

The scaled quadratic variation term can be written as

$$\begin{aligned} \frac{1}{n^2}\langle M_k^n \rangle ({\left\lfloor ns \right\rfloor })&=\frac{1}{n^2}\sum _{i=1}^{{\left\lfloor ns \right\rfloor }}{\mathrm {Var}\left( \zeta _k^n(i)|{\varvec{A}}^n(i-1)\right) }. \end{aligned}$$
(26)

Now, using Lemma 1 we get,

$$\begin{aligned}&\mathbbm {E}\left( \xi _k^n(i)(\xi _k^n(i)-1){\mathbbm {1}_{\left[ \xi ^n(i) \le K-1, \xi ^n_{\scriptscriptstyle K-1}(i)=0\right] }}\big \vert {\varvec{A}}^n(i-1)\right) \nonumber \\&\le A_k^n(i-1)(A_k^n(i-1)-1)p_n^2(1-p_n)^{A_{\scriptscriptstyle K-1}^n(i-1)}\le c^2 \end{aligned}$$
(27)

for all large enough n, where we have used that \(A_k^n(i-1)\le n\) and \(p_n=c/n\) in the last step. Thus, there exists a constant \(C>0\) such that for all large enough n,

$$\begin{aligned} {\mathrm {Var}\left( \zeta _k^n(i)|{\varvec{A}}^n(i-1)\right) }\le C. \end{aligned}$$
(28)

Therefore, (26) implies \( \langle M_k^n\rangle ({\left\lfloor ns \right\rfloor })/n^2{\xrightarrow {d}}0\), and an application of Doob’s inequality proves (25). \(\square \)

Also, Lemma 2 implies that \(\sup _{n\ge 1}\sup _{{\varvec{x}}\in [0,1]^{K}}|\delta _k^n({\varvec{x}})|<\infty \) for any \(0\le k\le K-1\). Therefore,

$$\begin{aligned} \int _{({\left\lfloor nt \right\rfloor }-1)/n}^t\delta _k^n({\varvec{\alpha }}^n(s)){\mathrm {d}}s\le \varepsilon _n', \end{aligned}$$
(29)

where \(\varepsilon _n'\) is non-random, independent of t and k, and \(\varepsilon _n'\rightarrow 0\). Thus, for any \(t\in [0,1]\),

$$\begin{aligned}&\sup _{s\in [0,t]}{\left\| {\varvec{\alpha }}^n(s)-{\varvec{\alpha }}(s)\right\| } \nonumber \\&\quad \le \sup _{s\in [0,t]}\frac{{\left\| {\varvec{M}}_n(ns)\right\| }}{n}+\int _0^t\sup _{u\in [0,s]}{\left\| {\varvec{\delta }}^n({\varvec{\alpha }}^n(u))-{\varvec{\delta }}({\varvec{\alpha }}(u))\right\| }{\mathrm {d}}s +\varepsilon '_n. \end{aligned}$$
(30)

Now, since \({\varvec{\delta }}\) is a Lipschitz-continuous function, there exists a constant \(C>0\) such that \({\left\| {\varvec{\delta }}({\varvec{x}})-{\varvec{\delta }}({\varvec{y}})\right\| }\le C {\left\| {\varvec{x}}-{\varvec{y}}\right\| }\) for all \({\varvec{x}},{\varvec{y}}\in [0,1]^K\). Therefore,

$$\begin{aligned} \sup _{u\in [0,s]}{\left\| {\varvec{\delta }}^n({\varvec{\alpha }}^n(u))-{\varvec{\delta }}({\varvec{\alpha }}(u))\right\| }&\le \sup _{{\varvec{x}}\in [0,1]^K}{\left\| {\varvec{\delta }}^n({\varvec{x}})-{\varvec{\delta }}({\varvec{x}})\right\| } \nonumber \\&\quad + C\sup _{u\in [0,s]} {\left\| {\varvec{\alpha }}^n(u)-{\varvec{\alpha }}(u)\right\| }. \end{aligned}$$
(31)

Lemma 2, (30) and (31) together imply that

$$\begin{aligned} \sup _{s\in [0,t]}{\left\| {\varvec{\alpha }}^n(s)-{\varvec{\alpha }}(s)\right\| }&\le C\int _0^t\sup _{u\in [0,s]} {\left\| {\varvec{\alpha }}^n(u)-{\varvec{\alpha }}(u)\right\| }{\mathrm {d}}s+ \varepsilon _n, \end{aligned}$$
(32)

where \(\varepsilon _n{\xrightarrow {d}}0\). Using Grönwall’s inequality [14, Proposition 6.1.4], we get

$$\begin{aligned} \sup _{s\in [0,t]}{\left\| {\varvec{\alpha }}^n(s)-{\varvec{\alpha }}(s)\right\| }\le \varepsilon _n {\mathrm {e}}^{Ct}. \end{aligned}$$
(33)

Thus the proof of Theorem 1 is complete. \(\square \)

3.2 Proof of Theorem 2

The proof of Theorem 2 is similar to the proof of Theorem 1. Again denote \(A^n_k(t)=|\mathcal {A}_k(t)|\), where \(\mathcal {A}_k(t)\) is the set of vertices at height k at time t. Note here that \(\mathcal {A}_0(t)\) is the set of frozen vertices. Let \(\xi _k^n(t+1)\) be the number of vertices in \(\mathcal {A}_k(t)\) that are paired to the vertex selected at time \(t+1\) by Algorithm 2. Then, for \(1\le k\le K\),

$$\begin{aligned} A_k^n(t+1)=A_k^n(t)+\zeta _k^n(t+1), \end{aligned}$$
(34)

where, for \(k\ge 2\),

$$\begin{aligned} \zeta _k^n(t+1)&= {\left\{ \begin{array}{ll} 1 \quad \text {if } \xi _r^n(t+1)=0, \ \forall r\ge k, \ \xi _{k-1}^n(t+1)>0,\\ 0 \quad \text {otherwise}, \end{array}\right. } \nonumber \\ \zeta _1^n(t+1)&= {\left\{ \begin{array}{ll}1 \quad \text {if } \xi _r^n(t+1)=0, \ \forall r\ge 1,\\ 0 \quad \text {otherwise}, \end{array}\right. } \end{aligned}$$
(35)

Indeed, observe that if j is the maximum index for which the new vertex selected at time \(t+1\) makes a connection to \(\mathcal {A}_j(t)\), and \(j\le K-1\), then this vertex will be assigned height \(j+1\). Therefore,

$$\begin{aligned}&\mathbbm {E}\left( \zeta _k^n(t+1)|{\varvec{A}}^n(t)\right) \nonumber \\&= {\left\{ \begin{array}{ll} \big (1-(1-p_n)^{A_{k-1}^n(t)}\big )(1-p_n)^{A_k^n(t)+\dots +A_{\scriptscriptstyle K}^n(t)}, &{} \text {for } k\ge 2,\\ (1-p_n)^{A_1^n(t)+\dots +A_K^n(t)},&{} \text {for } k=1. \end{array}\right. } \end{aligned}$$
(36)

For \(\varvec{i}=(i_1,\dots ,i_{\scriptscriptstyle K})\in [n]^{K}\), define the drift rate functions

$$\begin{aligned} \left. \Delta _k^n({\varvec{i}}):=\mathbbm {E}\left( \zeta ^n_k(t+1)\right| {\varvec{A}}^n(t)={\varvec{i}}\right) , \end{aligned}$$
(37)

and denote \(\delta _k^n(\varvec{\alpha })=\Delta _k^n(n\varvec{\alpha })\) for \({\varvec{\alpha }}\in [0,1]^{K}\), \({\varvec{\delta }}^n({\varvec{\alpha }})=(\delta _1^n(\varvec{\alpha }),\dots ,\delta _{K}^n(\varvec{\alpha }))\). Also, let \({\varvec{\delta }}({\varvec{\alpha }})=(\delta _1({\varvec{\alpha }}),\dots , \delta _{K}({\varvec{\alpha }}))\) where we recall the definition of \(\delta _k(\cdot )\) from (7).

Lemma 4

(Convergence of the drift function) The time-scaled drift function \({\varvec{\delta }}^n\) converges uniformly on \([0,1]^{K}\) to the Lipschitz continuous function \({\varvec{\delta }}:[0,1]^{K}\mapsto [0,1]^{K}\).

The above lemma follows from the same arguments as used in the proof of Lemma 2. In this case also, we can obtain a martingale decomposition similar to (24). Here, the increments \(\zeta _k^n(\cdot )\) take values in \(\{0,1\}\). Therefore, the quadratic variation of the scaled martingale term is at most 1 / n. Hence one obtains the counterpart of Lemma 3 in this case, and the proof can be completed using similar arguments as in the proof of Theorem 1. \(\square \)

3.3 Proof of Theorem 3

As in the previous section, we only compute the drift function and the rest of the proof is similar to the proof of Theorem 1. Let \(A^n_k(t)=|\mathcal {A}_k(t)|\), where \(\mathcal {A}_k(t)\) is obtained from Algorithm 3, and let \(\xi _k^n(t+1)\) be the number of vertices of \(\mathcal {A}_k(t)\) that are paired to the vertex selected randomly among the set of unexplored vertices at time \(t+1\). Then,

$$\begin{aligned} A_k^n(t+1)=A_k^n(t)+\zeta _k^n(t+1), \end{aligned}$$
(38)

where, for any \(1\le k\le K\),

$$\begin{aligned} \zeta _k^n(t+1)= {\left\{ \begin{array}{ll}1 \quad \text {if } \xi _r^n(t+1)>0, \ \forall \ 1\le r< k, \text { and } \xi _{k}^n(t+1)=0,\\ 0 \quad \text {otherwise}. \end{array}\right. } \end{aligned}$$
(39)

This follows by observing that the new vertex selected at time \(t+1\) is assigned frequency j, for some \(j\le K\), if and only if the new vertex makes no connection with \(\mathcal {A}^n_j(t)\), and has at least one connection with \(\mathcal {A}_k^n(t)\) for all \(1\le k\le j-1\). Hence the respective expectations can be written as

$$\begin{aligned} \mathbbm {E}\left( \zeta _k^n(t+1)|{\varvec{A}}^n(t)\right) =(1-p_n)^{A_{k}^n(t)}\prod _{r=1}^{k-1}\left( 1-(1-p_n)^{A_r^n(t)}\right) , \end{aligned}$$
(40)

for \(1\le k\le K\). Defining the functions \(\Delta \), \(\delta \) suitably, as for the Tetris model, the proof can be completed in exactly the same manner. \(\square \)

4 Further Research

This paper considers Random Sequential Adsorption (RSA) on the Erdős–Rényi random graph and relaxes the strict hard-core interaction between active nodes in three different ways, leading to the Threshold model, the Tetris model and the SFAP model. The Threshold model constructs a greedy maximal K-independent set. For \(K=1\) it is known that the size of the maximum independent set is almost twice as large as the size of a greedy maximal set [7, 12]. From the combinatorial perspective, it is interesting to study the size of the maximum K-independent set in random graphs in order to quantify the gap with the greedy solutions. Similarly, in the context of the SFAP model, it is interesting to find the maximum fraction of vertices that can be activated if there are K different frequencies. Another fruitful direction is to determine the jamming constants for the three generalized RSA models when applied to other classes of random graphs, such as random regular graphs, inhomogeneous random graphs, the configuration model, or preferential attachment graphs.