Correction to: Layered Networks, Equilibrium Dynamics, and Stable Coalitions

An important aspect that has been missing from our understanding of network dynamics in various applied settings is the influence of strategic behavior in determining equilibrium network dynamics. Our main objective here is to say what we can regarding the emergence of stable club networks—and therefore, stable coalition structures—based on the stability properties of strategically determined equilibrium network formation dynamics. Because club networks are layered networks, our work here can be thought of as a first study of the strategic dynamics of layered networks. In addition to constructing a discounted stochastic game model (i.e., a DSG model) of club network formation, (1) we show that our DSG of network formation possesses a stationary Markov perfect equilibrium in players' membership-action strategies; (2) we identify assumptions on primitives which ensure that the induced equilibrium Markov process of layered club network formation satisfies the Tweedie Stability Conditions (Tweedie in Stoch Processes Their Appl 92:345–354); and (3) we show that, as a consequence, the equilibrium Markov network formation process generates a unique decomposition of the set of state-network pairs into a transient set together with finitely many basins of attraction. Moreover, we show that if there is a basin containing a vio set (a visited-infinitely-often set) of club networks sufficiently close together, then the coalition structures across club networks in the vio set will be the same (i.e., closeness across networks in a vio set leads to invariance of coalition structure across those networks).


Introduction
A coalition is a group of players who, through their own actions, can realize some set of outcomes for its own members [38]. Here we will be interested in the equilibrium dynamics governing the formation and evolution of coalitions, as well as the strategic forces which give rise to these dynamics. We will think of a coalition as a group of players belonging to the same club, and we will represent the prevailing club membership structure as a labeled, directed bipartite network. Because we will allow each player to be a member of multiple clubs, each player can be a member of multiple coalitions (see Page and Wooders [34]). 1 Each club network is built from three primitives: a set of players, a set of clubs, and a set of arc labels. In our network model, a player's club membership is represented by a labeled directed arc from the node representing the player to the node representing the club. The arc label, which must be feasible for that player in that club, indicates the action chosen by the player to be taken in the chosen club. Thus, a player establishes a directed connection by choosing a club and a feasible club action. The set of all such player-specific directed club connections is the player's club network, and the union of these player club networks constitutes the club network. At each of infinitely many time points, players, in light of the prevailing state and club network, are free to noncooperatively alter their club memberships as well as their corresponding club action profiles in accordance with the rules of network formation. We will assume that after players have altered their own club networks, each player receives a stage payoff, a function of the prevailing state-network pair; then, given the prevailing state and the new club network chosen by the players, a new state is generated in accordance with the law of motion.
We will assume that players, in making their membership-action choices through discrete time, seek to maximize the discounted sum of their expected stage payoffs. We will show that if the vio set of networks in a basin is chainable, then a stable coalition structure will emerge. 3 Thus, if club network vio sets are chainable, then in finite time with probability 1, there will emerge from the equilibrium process of club network formation a basin-specific stable coalition structure.
As a running example we consider a DSG over time allocation networks (a class of club networks sufficiently simple to allow us to illustrate the ideas and results we obtain for our general DSG of network formation and equilibrium network dynamics). We show that because players' payoff functions are naturally affine over the convex, compact feasible set of time allocation networks, players' stationary Markov perfect equilibrium network formation strategies are bang-bang. Thus, rather than diversifying their club time across several clubs, each player in each state spends all club time in one and only one club. Moreover, we show that all that is required for the equilibrium time allocation process induced by players' bang-bang time allocation strategies to satisfy the Tweedie conditions is for the state space to be compact and the conditional densities over coming states to be continuous in the current state and time allocation network for almost all coming states (i.e., except for potential coming states having probability zero of occurring).

Layered Club Networks
We begin with a formal definition of club networks and a discussion of their properties. The discussion here is based, in part, on prior work by the second author with M. Wooders (see Page and Wooders [32][33][34], Wooders and Page [38], and Page et al. [31]). 4 Multiple membership club networks, as defined in Page and Wooders [34], are examples of layered networks in which connections between layers are brought about by overlapping club memberships. In a club network where each player is a member of one and only one club, the induced club membership structure partitions the set of active players, making each club layer isolated, having no connections, via overlapping memberships, to other layers in the network.
In the club network model we construct here, the feasible action sets available to the players who are active in a particular club layer are subsets of a compact metric space of actions specific to that club layer, and these club-specific action spaces can differ across layers. In Page and Wooders [34], the underlying metric space of player actions, whose subsets form the various player feasible sets, is the same across club layers. Here the heterogeneity of player action sets across club layers makes defining a metric to measure the distance between club networks a much more delicate task, but we do accomplish this, thereby providing us with a compact metric hyperspace of club networks in which to carry out our game-theoretic analysis of the emergence of equilibrium layered club network dynamics.
Finally, here unlike in Page and Wooders [32][33][34], and Page et al. [31], the game of network formation is dynamic with the equilibrium network dynamics being determined by the law of motion and the stationary Markov perfect equilibrium in behavioral network formation strategies which emerge from the game of network formation.
We begin by defining connections, layers, and networks.

Connections
[N-1] Assume the following: (1) D is a finite set of n players equipped with the discrete metric η_D, having typical element d. 5 (2) C is a finite set of m clubs equipped with the discrete metric η_C, having typical element c. We will refer to a player-club pair, (d, c), as a preconnection. A preconnection, (d, c), acquires the status of a connection only when player d has chosen a feasible arc a and (a, (d, c)) becomes part of a network. For preconnection (d, c), the set of all feasible connections is given by K_dc := {(a, (d, c)) : a ∈ A(d, c)}, where A(d, c) denotes the set of arcs feasible for player d in club c. The collection, {K_dc : (d, c) ∈ D × C}, contains the basic building blocks of layered networks. Let 2^{K_dc} denote the collection of all closed subsets of K_dc (including the empty set), and let K_dc ⊂ 2^{K_dc} be the collection of subsets of K_dc of size at most 1.
[N-2] Assume the following: (1) Each player, in considering whether or not to connect to a particular club either does not connect or connects in one and only one way.
(2) Each player or group of players can move freely and unilaterally from one club to another via feasible connections. Thus a player can drop his membership in any given club and, if allowed by the feasible arc correspondence, join any other club without bargaining and without seeking the permission of any player or group of players. 6

5 Under the discrete metric, the distance between two nodes d and d′ in D is given by η_D(d, d′) = 0 if d = d′ and η_D(d, d′) = 1 otherwise.

6 It is interesting to note that strategic network formation with club memberships being determined via bargaining, with voting by the club's existing membership, can also be treated via a discounted stochastic game, but a game played over stationary semi-Markov behavioral strategies or a game in which the state space consists of network-coalition pairs. Here we will focus on stationary Markov behavioral strategies, leaving the semi-Markov case for future research.

Layered Networks
If G_dc ∈ K_dc, then G_dc is either the empty set or a singleton {(a_dc, (d, c))} for some feasible arc a_dc. The feasible set of club networks, K := (K_dc)_dc, is given by an n × m array of feasible player-club connections (see (1) below), where K_c := (K_dc)_d is the feasible set of all possible c-layers (or club c layers), and where K_d := (K_dc)_c is the feasible set of all possible d-layers (or player d layers). Formally, we have the following definition of a feasible club network, G ∈ K := (K_dc)_dc, as an n × m array of feasible player-club connections.

Definition 1 (Feasible Club Networks, c-Layers, d-Layers, and Connection Arrays)
A feasible club network G ∈ K is an n × m array of feasible player-club connections, G = (G_dc)_dc, where for each player-club pair (d, c), G_dc is player d's part of c-layer G_c ∈ K_c, given by the c-th column in the club network G above; where G_dc ∈ K_dc if and only if G_dc = ∅ or G_dc = {(a_dc, (d, c))} for some a_dc ∈ A(d, c); and where G_d ∈ K_d is player d's connection choice profile, given by the d-th row in the club network G above.
If we agree to the notational convention that G_dc := a_dc whenever G_dc = {(a_dc, (d, c))}, then club networks in K = (K_dc)_dc can be given a reduced form array representation, as an array of arc types (without loss of information). A club network G ∈ K implicitly determines an arc selection, (d, c) −→ G(d, c), with domain given by {(d, c) ∈ D × C : G(d, c) ≠ ∅}. If G(d, c) = ∅, then in network G the preconnection (d, c) is not elevated to the status of a connection (i.e., player d is not a member of club c in network G, either because d chose not to join club c or because player d was not allowed to join club c, i.e., A(d, c) = ∅). Alternatively, if G(d, c) ≠ ∅, then preconnection (d, c) has been elevated to a connection.

Coalition Structure
Each feasible club network, with its implied arc selection, determines a coalition structure. In particular, given the arc selection G(·, ·) determined by club network G ∈ K, each club c has a membership coalition given by the domain of the mapping d −→ G(d, c) for fixed c. In particular, club c has membership coalition in network G, denoted by S_cG and given by S_cG := {d ∈ D : G(d, c) ≠ ∅} ∈ 2^D, where 2^D is the collection of all subsets of D, including the empty set. Thus, the coalition structure determined by club network G ∈ K is given by the collection {S_cG : c ∈ C}. In a club network, connections between layers are made through overlapping club memberships. Without this, each club layer is isolated. For example, in layered club network G ∈ K, if S_cG ∩ S_c′G ≠ ∅, then each player d ∈ S_cG ∩ S_c′G is a member of club c as well as a member of club c′. In this way, club layers G_c and G_c′ are connected in network G. Note that if in club network G players are members of one and only one club, then if the network has multiple nonempty layers, there are no connections between these layers: each layer is isolated precisely because there are no overlapping club memberships. In this case, the coalition structure induced by club network G, {S_cG : c ∈ C}, is a partition of the active club members, and players are siloed by their club memberships.
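To make the mapping from club networks to coalition structures concrete, here is a small sketch. It is our illustration only: the dict representation and the function name are hypothetical, not the paper's formalism.

```python
# Illustrative sketch (hypothetical representation): a club network G is a
# dict mapping each connected (player, club) pair to the arc label chosen by
# the player; an absent pair corresponds to G(d, c) = empty (no connection).
def membership_coalitions(G, clubs):
    """Return {c: S_cG}, where S_cG is the set of players connected to club c."""
    return {c: {d for (d, club) in G if club == c} for c in clubs}

# Player 1 belongs to both clubs, so layers c1 and c2 are connected
# via overlapping membership rather than isolated.
G = {(1, "c1"): "a", (2, "c1"): "b", (1, "c2"): "a2"}
S = membership_coalitions(G, ["c1", "c2"])
assert S["c1"] & S["c2"] == {1}
```

If instead every player appeared under exactly one club, the resulting coalitions would be pairwise disjoint, matching the siloed, partition case described above.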

Measuring the Distance Between Club Networks
In order to analyze the co-evolution of strategic behavior, club network structure, and equilibrium dynamics, we require a topology for the space of club networks that is simultaneously coarse enough to guarantee compactness of the set of networks and fine enough to discriminate between differences across networks that are due to differences in the ways nodes are connected (via differing arc types) versus differences across networks that are due to the complete absence of connections. We resolve this topological dilemma by equipping the space of club networks, K, with the Hausdorff metric h_K, making the space of feasible club network connection arrays a compact metric space (see Appendix 1 on the Hausdorff metric on the hyperspace of layered networks). It is easy to show that if the Hausdorff distance between any pair of club networks G and G′ is less than ε ∈ (0, 1), then the networks can differ only in the ways a given set of player-club pairs are connected, and not in the set of player-club pairs that are connected. In particular, if for networks G and G′, h_K(G, G′) < ε < 1, then G(d, c) ≠ ∅ if and only if G′(d, c) ≠ ∅, and whenever both connections are present, their arcs a and a′ satisfy ρ_{A_c}(a, a′) < ε. Thus, if two club networks are at h_K-distance ε < 1, then both club networks G and G′ have the same coalition structures. Such closeness will often occur and can only persist in network vio sets (sets of networks visited infinitely often by the equilibrium stochastic process of network formation), i.e., sets belonging to basins of attraction generated by the equilibrium dynamics governing the movements of club networks, all discussed in detail below. We will equip the hyperspace of feasible club network connection arrays, K = (K_dc)_dc, with the Hausdorff metric h_K, where, following the notational convention above, G_dc = ∅ if d is not a member of club c and where, due to the single-arc connection rule, each G_dc contains at most one connection. But now, for our running example of time allocation networks, we add a further condition given by the time allocation constraint.
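The role of the threshold ε < 1 can be illustrated with a sketch. This is our illustration only: the set representation, the name `conn_dist`, and the use of `abs` as a stand-in for the club-specific arc metric ρ_{A_c} are assumptions, not the paper's construction.

```python
# Illustrative sketch: treat a club network as a finite set of connections
# (a, (d, c)). The distance between two connections is the arc distance when
# they share the same player-club pair, and is capped at 1 when they do not,
# so a Hausdorff distance below 1 forces the two networks to connect exactly
# the same player-club pairs.
def conn_dist(x, y):
    (a1, dc1), (a2, dc2) = x, y
    if dc1 != dc2:
        return 1.0
    return min(1.0, abs(a1 - a2))  # stand-in for the club metric rho_{A_c}

def hausdorff(G1, G2):
    if not G1 and not G2:
        return 0.0
    if not G1 or not G2:
        return 1.0  # convention: a nonempty set is at distance 1 from the empty set
    d12 = max(min(conn_dist(x, y) for y in G2) for x in G1)
    d21 = max(min(conn_dist(x, y) for y in G1) for x in G2)
    return max(d12, d21)

# Same connected pairs, slightly different arcs: distance below 1, so both
# networks induce the same coalition structure.
G  = {(0.2, (1, "c1")), (0.5, (2, "c1"))}
Gp = {(0.3, (1, "c1")), (0.5, (2, "c1"))}
assert hausdorff(G, Gp) < 1.0   # close to 0.1, up to floating point
```

By contrast, moving a connection to a different player-club pair pushes the distance up to 1, reflecting a genuine change in who is connected where.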
We will require that the d-layers of each club network G ∈ K := (K_dc)_dc, with arc selection G(·, ·), satisfy the time allocation constraint: Σ_{c ∈ D(G(d,·))} a_dc = 1 for each player d, where D(G(d, ·)) is the club domain of player d, i.e., the clubs to which d belongs. We will denote the feasible set of time allocation club networks by K^TA. Thus, in a time allocation network, players freely and noncooperatively choose which clubs they want to join as well as their levels of participation in each club they join. Their levels of participation in each club are then expressed as fractions of their total club time, and these fractions must sum to one. Following notational convention (8) and writing out the connections long hand, the connections in the d-layers are given by G_d = (G_dc)_c, with G_dc = {(a_dc, (d, c))} for c ∈ D(G(d, ·)) and G_dc = ∅ otherwise, while the connections in the c-layers are given by the corresponding columns G_c = (G_dc)_d. We note that the time allocation club networks in K^TA, and in general the club networks in K, satisfy assumptions [N-1]. We will assume for the remainder of the paper that [N-2] holds. We will return to this example later in the paper.
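The time allocation constraint can be checked mechanically; here is a small sketch (our illustration; the list representation and the function name are hypothetical).

```python
# Illustrative feasibility check for a player's d-layer in a time allocation
# network (hypothetical representation): the layer is a vector of club-time
# fractions a_dc, one per club, with a_dc = 0 meaning "not a member" and the
# fractions over the clubs actually joined summing to one.
def is_feasible_d_layer(fractions, tol=1e-12):
    joined = [a for a in fractions if a > 0]
    return (all(0 <= a <= 1 for a in fractions)
            and len(joined) >= 1
            and abs(sum(joined) - 1.0) <= tol)

assert is_feasible_d_layer([0.5, 0.5, 0.0])      # time split across two clubs
assert is_feasible_d_layer([0.0, 1.0, 0.0])      # bang-bang: all time in one club
assert not is_feasible_d_layer([0.4, 0.4, 0.0])  # fractions do not sum to one
```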

Discounted Stochastic Games of Club Network Formation
In order to address the questions of whether or not and under what conditions the strategic formation of club networks will lead to the emergence of dynamically stable coalition structures, we will show that our discounted stochastic game (DSG) of club network formation possesses Nash equilibria in stationary Markov perfect behavioral club network formation strategies. It is the players' equilibrium behavioral network formation strategies which determine the equilibrium dynamics of club network formation. By identifying conditions under which stationary Markov perfect equilibria (SMPE) exist in such behavioral strategies, and by showing that the resulting equilibrium state and club network dynamics are stable, we will be able to establish the conditions under which stable coalition structures will emerge and persist in the form of stable club networks. In this section we will construct a DSG model of club network formation and show that our model possesses SMPE in behavioral club network formation strategies. The SMPE existence problem in the setting considered here (with uncountable states and compact metric action spaces) is quite difficult, and its solution, and the relevant counterexamples, are of independent interest (for details see Levy [23], Levy and McLennan [24], Page [29,30], and Fu and Page [15]).
An n-player, nonzero-sum discounted stochastic game, DSG, over the product space of probability measures over player club networks (i.e., over the convex, compact metric space of behavioral actions), is given by the following primitives: (Ω, B_Ω, μ), where Ω is the state space, B_Ω is the Borel σ-field of events, and μ is a probability measure. For each player d, K_d is the set of all possible player club networks available to player d, while the convex, compact feasible set of behavioral actions available to player d in state ω consists of probability measures over player d's feasible club networks in state ω; a feasible behavioral action available to player d in state ω is such a probability measure. Player d has a payoff function in state ω given valuations (or prices) v_d, and q(·|ω, ·) is the law of motion in state ω. If players holding value function profile v = (v_1, . . . , v_n) choose a feasible profile of behavioral actions in state ω, then the next state ω′ is chosen in accordance with the probability measure q(·|ω, G) ∈ Δ(Ω) and each player d receives his stage payoff. Here πσ(dG) := π(σ_1(dG_1), . . . , σ_n(dG_n)) := ⊗_{d=1}^n σ_d(dG_d) is the product probability measure representing the random club network determined by the n-tuple of random player club networks, (σ_1(dG_1), . . . , σ_n(dG_n)), chosen by the players.
(2) (Ω, B_Ω, μ), the state space, where Ω is a complete separable metric space with metric ρ_Ω, equipped with the Borel σ-field B_Ω, upon which is defined a probability measure μ.
G_d is player d's club network, where K_d := K_d1 × · · · × K_dc × · · · × K_dm is a compact metric space of feasible player club networks with typical element G_d, equipped with a metric compatible with its product topology. Δ(K_d) is the space of all probability measures, σ_d, with supports contained in player d's set of club networks, K_d, equipped with the compact metrizable weak star topology (a topology denoted by w*_d-ca) inherited from ca(K_d), the Banach space of finite signed Borel measures on K_d with the total variation norm. 7 We will equip Δ(K_d) with a metric, ρ_{w*_d-ca}, compatible with the relative w*_d-ca-topology on Δ(K_d) inherited from ca(K_d), and we will refer to σ_d as player d's random player club network. (6) Δ(K) := Δ(K_1) × · · · × Δ(K_n), the space of player behavioral action profiles, σ := (σ_1, . . . , σ_n), equipped with the sum metric, ρ_{w*-ca} := Σ_d ρ_{w*_d-ca}, a metric compatible with the relative w*-ca-product topology, and the set of all μ-equivalence classes of measurable profiles (selection profiles). The remaining items specify that q(·|ω, G) has conditional densities with respect to μ satisfying the measurability and continuity conditions used below. We note that the assumptions above are the usual assumptions underlying discounted stochastic game models (see Appendix 2 for technical notes on these assumptions). 8 The space of μ-equivalence classes of essentially bounded functions, L∞_R, is the separable norm dual of the space of μ-equivalence classes of μ-integrable functions, L1_R. Because the Borel σ-field B_Ω is countably generated, L1_R is separable. As a consequence, the subset of value function μ-equivalence classes, L∞_{Y_d}, is a compact, convex, and metrizable subset of L∞_R for the weak star topology.

Existence
Let DSG be a discounted stochastic game of club network formation satisfying assumptions [A-1] above, with one-shot game defined for each state-value function pair (ω, v).

Definition 2 (Nash Equilibria in Behavioral Strategies) A feasible profile of probability measures over player club networks, σ*, is said to be a Nash equilibrium of the one-shot network formation game if no player can improve his payoff by unilaterally changing his behavioral action. Denote by N(ω, v) the set of all Nash equilibria of the (ω, v)-game, and by P(ω, v) the corresponding set of Nash equilibrium payoff profiles. Under assumptions [A-1], we know that N(ω, v) is nonempty and ρ_{w*-ca}-compact, and therefore we know that P(ω, v) is nonempty and ρ_Y-compact. Moreover, applying optimal measurable selection results (e.g., Himmelberg et al. [19]) and Berge's maximum theorem (e.g., see 17.31 in Aliprantis and Border [1]), we can show that the Nash correspondences, N(·, ·) and P(·, ·), are upper Caratheodory (also see Proposition 4.2 in Page [28]). In particular, the Nash correspondence N(·, ·) is jointly measurable in (ω, v) and N(ω, ·) is upper semicontinuous in v for each ω, and the Nash payoff correspondence P(·, ·) is jointly measurable in (ω, v) and P(ω, ·) is upper semicontinuous in v for each ω.
Our main existence result is the following: let DSG be a discounted stochastic game of club network formation satisfying assumptions [A-1], with upper Caratheodory (uC) Nash correspondences N(·, ·) and P(·, ·); then there exists a pair consisting of a value function profile and a stationary Markov perfect equilibrium in network formation strategies. For a formal proof we refer the reader to Fu and Page [15]. Informally, the proof proceeds along the following lines: Let S∞(P(·, v)) be the set of all μ-equivalence classes of measurable selections of the correspondence ω −→ P(ω, v), a.e. [μ]. It follows from Blackwell's Theorem [9], extended to DSGs, that our discounted stochastic game of club network formation (10) will have stationary Markov perfect equilibria in network formation strategies if and only if the Nash payoff selection correspondence, v −→ S∞(P_v), has fixed points, i.e., has at least one value function profile v* ∈ L∞_Y such that v* ∈ S∞(P_{v*}). This is a very difficult fixed point problem because the measurable selection valued correspondence, S∞(P_(·)), is neither convex valued nor closed valued, nor of course is it upper semicontinuous. Until Fu and Page [15], no results were available to establish the existence of a fixed point for Nash payoff selection correspondences. Essentially, what Fu and Page [15] show is that while the Nash payoff selection correspondence, v −→ S∞(P_v), is badly behaved, under assumptions [A-1] its underlying upper Caratheodory Nash payoff correspondence, P(·, ·), always contains an upper Caratheodory sub-correspondence, p(·, ·), that has ε-approximate Caratheodory selections for all ε > 0, implying that the Nash payoff selection sub-correspondence, v −→ S∞(p_v), has fixed points.
Thus, Fu and Page [15] show that, in general, there exists a Nash payoff sub-correspondence, S∞(p_(·)), having fixed points, i.e., that there exists a value function profile v* ∈ L∞_Y with v* ∈ S∞(p_{v*}). More fundamentally, Fu and Page [15] are able to show that the upper Caratheodory Nash payoff sub-correspondence, p(·, ·), has ε-approximate Caratheodory selections for all ε > 0 because, under assumptions [A-1], the underlying upper Caratheodory Nash correspondence, N(·, ·), always contains an upper Caratheodory Nash sub-correspondence, η(·, ·), that takes closed connected values. Thus, because p(ω, v) = (p_1(ω, v), . . . , p_n(ω, v)) := (U_1(ω, v_1, η(ω, v)), . . . , U_n(ω, v_n, η(ω, v))), the continuity of U_d(ω, v_d, ·) in behavioral actions σ for each player d = 1, 2, . . . , n, together with the closed connectedness of η(ω, v), implies that p_d(·, ·) is upper Caratheodory and interval valued for each player d = 1, 2, . . . , n. It then follows from Corollary 4.3 in Kucia and Nowak [22] that each player's Nash payoff sub-correspondence, p_d(·, ·), is Caratheodory approximable, and this, together with the unusual properties of Komlos convergence [21] of value functions, allows us to show that such a fixed point value function profile exists. In order to complete our informal argument for existence, we need only note that by implicit measurable selection (e.g., see Theorem 7.1 in Himmelberg [18]), there exists a profile, σ*(·) = (σ*_1(·), . . . , σ*_n(·)), of a.e. measurable selections of ω −→ η(ω, v*) attaining the Nash payoffs for each player d = 1, 2, . . . , n. Thus, for each player d, the state-contingent prices given by value function v*_d(·) ∈ L∞_{Y_d} incentivize the continued choice by each player d of behavioral strategy σ*_d(·). Thus, for the value function-behavioral strategy profile pair (v*, σ*(·)), we have, for each player d = 1, 2, . . . , n and for ω a.e. [μ], that (v*, σ*(·)) satisfies the Bellman equation and the Nash condition. Thus here we have argued informally, and shown formally in Fu and Page [15], that under the usual assumptions specifying a discounted stochastic game (in this case a club network formation DSG), while the DSG's Nash payoff selection correspondence is badly behaved, it nonetheless naturally possesses (without additional assumptions) an underlying Nash payoff correspondence, P(·, ·), containing sub-correspondences, p(·, ·), which are Caratheodory approximable, implying that the induced selection correspondence, v −→ S∞(p_v), has fixed points. He and Sun [16], by making an additional assumption (that the DSG is G-nonatomic or has a coarser transition kernel), guarantee that the DSG's Nash payoff selection correspondence has a convex valued sub-correspondence, and therefore an approximable sub-correspondence. 9 Moreover, He and Sun [16] show that Duggan [12] accomplishes the same thing by assuming that the DSG has a noisy state. In the negative direction, Levy [23] and Levy and McLennan [24] construct counterexamples showing that not all uncountable-finite DSGs have stationary Markov perfect equilibria. They accomplish this by constructing counterexamples in which the Nash correspondences are not approximable, as follows from the fact that in their counterexamples there is an absence of fixed points. Because the club network formation DSG we analyze here is approximable, we avoid the Levy-McLennan counterexamples.

Example: DSGs over Time Allocation Networks
The one-shot game for a DSG satisfying assumptions [A-1] over time allocation networks, K^TA, is defined for each (ω, v). In an (ω, v)-game, each player d has preferences over the d-layer networks in K_d, where G_d = (G_d1, . . . , G_dm) and G_dc = {(a_dc, (d, c))} for c ∈ D(G(d, ·)), and G_dc = ∅ for c ∉ D(G(d, ·)). Rewriting these preferences as preferences over d-layer networks in reduced form, we follow the notational convention a_dc_k ∈ (0, 1] ⟺ G_dc_k = {(a_dc_k, (d, c_k))} and a_dc_k = 0 ⟺ G_dc_k = ∅. We will assume that for each player d and for each (ω, v) ∈ Ω × L∞_Y, u_d(ω, v_d, ·) is affine on K^TA. Thus, for time allocation networks G and G′ in K^TA and for any λ ∈ [0, 1], u_d(ω, v_d, λG + (1 − λ)G′) = λ u_d(ω, v_d, G) + (1 − λ) u_d(ω, v_d, G′). We will also assume that players' one-shot payoff functions are measurable in ω for each (v_d, a_d, a_−d) and jointly continuous in (v_d, a_d, a_−d) for each ω. By Theorem 1 above (also see Fu and Page [15]), there exists a value function profile, v* ∈ L∞_Y, and an n-tuple of measurable functions (a*_{d_1}(ω), . . . , a*_{d_n}(ω)) taking values in the n-fold product Δ_m × Δ_m × · · · × Δ_m, where Δ_m denotes the simplex of time allocation vectors over the m clubs.
Here, by the notational convention above, each a*_{d_i}(ω) = (a*_{d_i c_1}(ω), . . . , a*_{d_i c_m}(ω)) is player d_i's state-contingent time allocation across the m clubs. In fact, given the assumption that players' one-shot payoff functions are affine over the convex set of time allocation networks (see (28) and (29) above), we know by Corollary 1.4 in Balder [4] that the equilibrium profile is bang-bang: in each state ω, the stationary Markov perfect time allocation network, given by the arrays G*(ω) := (G*_dc(ω))_dc := (a*_dc(ω))_dc, has a reduced form n × m array, (a*_dc(ω))_dc, consisting of 0s and 1s. Moreover, in equilibrium, the state-contingent coalition of members of any club c_k determined by the equilibrium club network G*(·) is given by S_{c_k G*(ω)} = {d ∈ D : a*_{dc_k}(ω) = 1}.
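The bang-bang logic can be illustrated directly: an affine function on the simplex attains its maximum at a vertex, so a player's best time allocation puts all club time in a single club. The sketch below is our illustration only (the coefficient vector is a hypothetical stand-in for a player's per-club marginal payoffs).

```python
# Why equilibrium time allocations are bang-bang (illustrative): if a payoff
# is affine in player d's time allocation, i.e. sum_c coeffs[c] * a_c, then
# over the simplex it is maximized at a vertex -- an allocation that puts all
# club time into the single best club.
def best_affine_allocation(coeffs):
    """Maximize sum_c coeffs[c] * a_c over the simplex by picking the best vertex."""
    m = len(coeffs)
    best = max(range(m), key=lambda c: coeffs[c])
    return [1.0 if c == best else 0.0 for c in range(m)]

# Hypothetical per-club marginal values in some state:
alloc = best_affine_allocation([0.3, 1.7, 0.9])
assert alloc == [0.0, 1.0, 0.0]   # all time in the single best club
assert sum(alloc) == 1.0          # still a feasible time allocation
```

Reading the resulting 0/1 vector row by row across players reproduces the reduced form array of 0s and 1s described above, and the club coalitions are exactly the players whose entry for that club equals 1.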

Equilibrium State-Network Dynamics and Stable Coalitions
Returning to our general model: under the profile of stationary Markov perfect equilibrium behavioral strategies, σ*(·) := (σ*_1(·), . . . , σ*_n(·)), the equilibrium state and network formation process, with underlying probability space (Ω × K, B_Ω × B_K, μ ⊗ σ*) := (Z, B_Z, P*), consists of random objects taking values z = (ω, G) ∈ Ω × K, where for any measurable set E ∈ B_Ω × B_K, E_ω := {G ∈ K : (ω, G) ∈ E} (see 2.6.2, the Product Measure Theorem, in Ash [3]). The movements of the process, {Z*_t}_{t=0}^∞, are governed by the equilibrium Markov transition kernel π*(·|·). For each (ω, G) ∈ Ω × K, π*(d(ω′, G′)|ω, G) is given by the product probability measure built from the law of motion and the players' equilibrium behavioral strategies. Thus, if the current state-club network is (ω, G) = (ω, G_1, . . . , G_n), with d-layers (G_d)_{d=1}^n, then the probability that the coming state and coming club network lie in a set R_Ω × R_K ∈ B_Ω × B_K is given by the corresponding product measure π*(R_Ω × R_K|ω, G). Whether or not a stable club network emerges depends on the stability properties of the equilibrium state-network dynamics underlying club network formation. Our main objective is to say what we can regarding the emergence of stable club structures, and hence stable coalition structures, based on the stability properties of the equilibrium state-network formation dynamics. We would argue that this is one of the main aspects of network dynamics that has been missing from our understanding of network dynamics in various applied settings: the influence of strategic behavior on network dynamics. Here we present a first attempt. We will proceed as follows: First, we state the Tweedie (Stability) Conditions [37] guaranteeing Markov stability. Second, we show that if we slightly strengthen assumptions [A-1](2) and [A-1](15) in our discounted stochastic game model, then the stationary Markov perfect equilibrium (SMPE) of our discounted stochastic game of network formation gives rise to a Markov stable equilibrium state-network formation process satisfying the Tweedie Conditions.
Finally, we summarize some of the main implications of the Tweedie Conditions and the resulting Markov stability properties of the equilibrium state-network formation process-such as the existence of finitely many basins of attraction and ergodic probabilities, and the implications of Markov stability for the emergence of stable coalition structures.
We begin by stating the Tweedie Conditions; then, after strengthening assumptions [A-1](2) and [A-1](15)(iii) in our discounted stochastic game model, we show that our strengthened discounted stochastic game model, in general, gives rise to an equilibrium state-network process satisfying the Tweedie conditions.

The Tweedie Conditions and Discounted Stochastic Games of Club Network Formation
We say that the Markov transition kernel, π*(·|·), satisfies the Tweedie conditions, [T], provided:

(1) (Drift Condition) There exist (i) a nonnegative measurable function V(·) : Ω × K −→ [0, +∞], finite at at least one point, (ii) a measurable set C, and (iii) a constant b < ∞ such that

∫_{Ω×K} V(z′) π*(dz′|z) ≤ V(z) − 1 + b·1_C(z) for all z ∈ Ω × K. (36)

(2) (Uniform Countable Additivity) For any sequence of measurable sets S_n ↓ ∅,

sup_{z ∈ Ω×K} π*(S_n|z) −→ 0 as n −→ ∞. (37)

The intuition behind the Tweedie conditions can be described as follows: Rewriting the first condition (36), the drift condition, we have for each z ∈ Ω × K

ΔV(z) := ∫_{Ω×K} V(z′) π*(dz′|z) − V(z) ≤ −1 + b·1_C(z), (38)

where ΔV(·) is the drift operator evaluated at z = (ω, G) ∈ Ω × K. The drift operator measures the expected drift (or movement) of the process away from states in C starting from state z, and drift is measured by the value taken by the drift operator at z. We see from the state-contingent bound on the right-hand side of (38) that if we are measuring expected drift from a state z in C and if b > 1, so that b − 1 > 0, then condition (36) will tolerate some drift away from C. However, if we are measuring expected drift from a state z not in C, then condition (36) requires that the expected drift be back toward C. The first Tweedie condition requires that the equilibrium process be such that there exist such a function, V(·) : Ω × K −→ [0, +∞], finite valued at least at one point, a set C, and a bounding constant b, such that (36) is satisfied. The second Tweedie condition requires that the collection of probability measures, π*(Ω × K) := {π*(dz′|z) : z ∈ Ω × K}, determined by the discounted stochastic game's strengthened law of motion and the equilibrium behavioral strategies of the players, be uniformly countably additive (37), guaranteeing that there are no discontinuous jumps in the expected magnitude of the drift.
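To make the drift condition concrete, the following toy sketch (entirely our illustration; the finite-state kernel, V, C, and b are hypothetical) checks, state by state, the drift inequality: the expected value of V under the kernel, minus V(z), is bounded by −1 + b·1_C(z).

```python
# Numerical illustration (toy example, not from the paper) of the drift
# condition: for a finite-state chain with kernel P (rows sum to 1), function
# V, drift set C, and bound b, verify at every state z that
#   sum_z' P[z][z'] * V[z'] - V[z]  <=  -1 + b * 1_C(z).
def satisfies_drift(P, V, C, b):
    n = len(V)
    for z in range(n):
        drift = sum(P[z][zp] * V[zp] for zp in range(n)) - V[z]
        if drift > -1 + (b if z in C else 0) + 1e-12:
            return False
    return True

# A 3-state chain that drifts back toward state 0 from states 1 and 2:
P = [[0.9, 0.1, 0.0],
     [0.8, 0.1, 0.1],
     [0.0, 0.9, 0.1]]
V = [0.0, 2.0, 4.0]
assert satisfies_drift(P, V, C={0}, b=2.0)
```

Outside C = {0} the expected drift is at most −1 (movement back toward C), while inside C the bound b − 1 = 1 tolerates some outward drift, exactly as in the intuition above.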
Up until now, we have assumed that the set of states, Ω, is Polish (complete, separable, metric). Our strengthening of [A-1](2) is to assume that: [A-1](2)* In the probability space of states, (Ω, B_Ω, μ), Ω is a compact metric space.
Because the hyperspace of club networks, K, is compact, and the strengthening of [A-1](2) makes Ω compact, for any sequence {(ω_n, G_n)}_n in Ω × K there is a convergent subsequence {(ω_{n_k}, G_{n_k})}_k with limit (ω*, G*) ∈ Ω × K. Given the arguments and observations immediately above, this means that the set of probability measures Π := {π*(·|z) : z ∈ Ω × K} is weak star compact: because Z := Ω × K is compact, any sequence {z_n}_n has a subsequence converging to some z* ∈ Z, implying that any sequence of probability measures, {π*(·|z_n)}_n, has a subsequence, {π*(·|z_{n_k})}_k, setwise converging and therefore weak star converging to π*(·|z*). Moreover, by Proposition 1.4.2(a)(ii) in Hernandez-Lerma and Lasserre [16], because Π is weak star compact, the equilibrium Markov transition kernel, z −→ π*(dz′|z), is uniformly countably additive on any closed subset C of Ω × K. Thus, the equilibrium Markov kernel, π*(·|·), governing the state-network process, Z*_t, is such that for any closed subset C of Ω × K and for any sequence of sets {S_n} ⊂ B_Ω × B_K with S_n ↓ ∅, sup_{z ∈ C} π*(S_n|z) −→ 0. We will denote by [A-1]* our list of assumptions [A-1] but with [A-1](2) and [A-1](15)(iii) strengthened as above. Our next result, which follows directly from the arguments above, states that any equilibrium state-network process determined by the stationary Markov perfect equilibrium of a DSG of club network formation satisfying assumptions [A-1]* satisfies the Tweedie conditions.

In our DSG over time allocation networks discussed above, if we strengthen the stochastic continuity assumptions [A-1](15)(iii) to [A-1](15)(iii)* and if we assume that the state space, Ω, is compact, then the equilibrium state-network process induced by players' bang-bang strategies will satisfy the Tweedie conditions. Moreover, if Ω is compact and we assume that the set of conditional densities of q(·|ω, G) with respect to μ is such that the function (ω, G) → h(ω′|ω, G) is ρ_{Ω×K}-continuous (ρ_{Ω×K} := ρ_Ω + h_K) in ω and G a.e. [μ] in ω′, then the Tweedie conditions will be satisfied in our DSG over time allocation networks.

Strategically Stable Club Networks and Coalition Structures
By Theorem 2 above, the equilibrium state-network transition, π*(·|·), governing the movements of the state-network process, {Z*_t}_t, through the space, Ω × K, satisfies the Tweedie conditions. As a consequence, as shown by Tweedie [37], the space Ω × K can be decomposed in a unique way into a transient set of state-network pairs and a finite collection of largest absorbing sets (i.e., basins of attraction given by Harris sets) consisting of state-network pairs that might persist in the long run (and will persist in the long run if there is only one basin of attraction) and that exhaust all the possibilities for what will happen in the future with regard to the path of the network formation process. 10 Restating Tweedie's decomposition result for the strategically determined equilibrium state-network transition, π*(·|·), we have the following: Ω × K = E ∪ H_1 ∪ · · · ∪ H_n, where E is a transient set and each H_i is a maximal Harris set containing a set R_i of topologically Harris recurrent state-network pairs (i.e., the process, once in H_i, visits each neighborhood of each z*_i ∈ R_i ⊂ H_i infinitely often). 12 Essentially, when the process, Z*_t, enters the maximal Harris set H_i it stays there for all future periods and becomes a λ_i(·)-irreducible T-process, visiting the topologically Harris recurrent state-network pairs, z*_i ∈ R_i, infinitely often, passing through state-network pairs in E_i on its way to state-network pairs in R_i. Thus, a refinement of the decomposition in Theorem 3 is given by Ω × K = E ∪ (R_1 ∪ E_1) ∪ · · · ∪ (R_n ∪ E_n). If the process begins at some state-network pair in E, then in finite time with probability 1 the process will leave E and enter one of the basins, H_i = R_i ∪ E_i, where it will remain, visiting each state-network pair in R_i infinitely often with probability 1; and if the process visits a state-network pair in E_i, then with probability 1 it will leave that state-network pair in finite time never to return, perhaps visiting a different state-network pair in E_i and, with probability 1, leaving that state-network pair in finite time never to return.
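A finite-state caricature of this decomposition may help fix ideas. The sketch below is illustrative only (the transition matrix is invented, and finite chains sidestep all the topological subtleties above): it splits a small chain's state space into a transient set E and closed "basins" (closed communicating classes).

```python
import numpy as np

# Finite-state analogue of the decomposition: transient set E plus
# closed communicating classes ("basins"). The kernel P is invented.
P = np.array([
    [0.5, 0.5, 0.0, 0.0, 0.0],   # states 0,1: transient, leak into the basins
    [0.0, 0.2, 0.4, 0.0, 0.4],
    [0.0, 0.0, 0.3, 0.7, 0.0],   # states {2,3}: a closed class (basin 1)
    [0.0, 0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],   # state {4}: absorbing (basin 2)
])
n = len(P)

# reachability: transitive closure of the support graph of P
reach = (P > 0) | np.eye(n, dtype=bool)
for _ in range(n):
    reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)

comm = reach & reach.T   # mutual reachability = communicating classes
classes = {frozenset(int(j) for j in np.flatnonzero(comm[z])) for z in range(n)}

# a communicating class is a basin iff the chain cannot escape it
basins = [set(c) for c in classes
          if all(set(int(j) for j in np.flatnonzero(reach[z])) <= set(c)
                 for z in c)]
transient = set(range(n)) - set().union(*basins)
print(sorted(sorted(b) for b in basins), sorted(transient))
```

Once the chain enters {2, 3} or {4} it never leaves, and within a basin every state is visited infinitely often with probability 1, the finite-state shadow of the Harris-set picture above.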
Because the equilibrium Markov transition, π*(·|·), governing the movements of the state-network process, Z*_t, is strongly stochastically continuous (i.e., because z_n converging to z* implies that π*(·|z_n) converges setwise to π*(·|z*)), π*(·|·) has the strong Feller property (e.g., see Meyn and Tweedie [25], Section 6.1). Moreover, because Z*_t restricted to H_i is a λ_i(·)-irreducible T-process governed by the Markov kernel, π*_i(·|·), having the strong Feller property, it follows from Theorem 7.1(iii) in Tuominen and Tweedie [36] that, because the state-network space is compact (see [A-1]*), the set of topologically Harris recurrent (THR) state-network pairs, R_i, is closed (and hence compact) and E_i is open.
Each set of topologically Harris recurrent state-network pairs, R_i ⊂ Z := Ω × K, determines a correspondence, ω → {G ∈ K : (ω, G) ∈ R_i}, with range given by the network part of R_i, R_i^K := {G ∈ K : (ω, G) ∈ R_i for some ω ∈ Ω}. For club network G ∈ R_i^K, there is at least one state ω (and possibly many states) such that (ω, G) ∈ R_i. Because R_i is compact, R_i^K is compact. Thus, for each 0 < ε < 1, there is a finite set of club networks contained in R_i^K, {G*_{iεh} : h = 1, 2, . . . , N_{iε}}, and a corresponding finite set of open balls, {B_{h_K}(ε, G*_{iεh}) : h = 1, 2, . . . , N_{iε}}, covering R_i^K such that any club network G ∈ R_i^K is contained in an open ball of networks, B_{h_K}(ε, G*_{iεh}), with each network in B_{h_K}(ε, G*_{iεh}) being at h_K-distance less than ε ∈ (0, 1) from network G*_{iεh} for some h = 1, 2, . . . , N_{iε}. Given our prior observation concerning the properties of the Hausdorff metric on the hyperspace of club networks, we know that for ε ∈ (0, 1), all club networks contained in the open ball B_{h_K}(ε, G*_{iεh}) have the same coalition structure as club network G*_{iεh}. Thus, club networks G and G′ contained in B_{h_K}(ε, G*_{iεh}), for ε ∈ (0, 1), have the same coalition structure. What conditions are sufficient to guarantee that the finite set of networks, {G*_{iεh} : h = 1, 2, . . . , N_{iε}}, have the same coalition structure? Consider the following definition.

Definition 3 (Chainable THR Sets)
Let R_i be a compact topologically Harris recurrent (THR) set of state-network pairs with network part R_i^K. We say that R_i is chainable if there is an ε ∈ (0, 1) and a finite ε-covering of R_i^K by open balls, {B_{h_K}(ε, G*_{iεh}) : h = 1, 2, . . . , N_{iε}}, such that for any two club networks G and G′ in R_i^K there is an ε-open ball path from G to G′ consisting of pairwise intersecting consecutive open balls from the covering.

We can distill the arguments and conclusions above in the following theorem:

Theorem 4 (Topological Harris Recurrence and Coalitional Homogeneity) Let {Z*_t}_t be an equilibrium state-network Markov process governed by the transition kernel, π*(dz′|z), satisfying [T], induced by the stationary Markov perfect equilibrium behavioral strategy profile, σ*(·), of a discounted stochastic game of club network formation satisfying assumptions [A-1]*. If R_i is chainable, then all club networks in R_i^K have the same underlying coalition structure. Moreover, all club networks in any chainable subset C_i of R_i contain networks with the same underlying coalition structure.
Proof If R_i is chainable, then for any two club networks G and G′ in R_i^K, there is an ε-open ball path from G to G′ consisting of pairwise intersecting consecutive open balls. Because consecutive open balls intersect, and because any two networks lying in a common open ball of h_K-radius ε ∈ (0, 1) have the same coalition structure, all the networks contained in ∪_k B_{h_K}(ε, G*_{iεh_k}) have the same coalition structure. Because R_i is chainable, such an ε-open ball path with intersecting consecutive open balls can be found between any two club networks in R_i^K, implying that all the club networks in R_i^K have the same coalition structure.
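The chaining argument can be illustrated computationally. In the sketch below, networks are abstracted as points in the plane and h_K as the Euclidean distance; both are stand-ins for the paper's objects, not the actual hyperspace. Chainability then reduces to connectivity of the graph whose vertices are the covering balls, with an edge whenever two balls intersect.

```python
from itertools import combinations
from collections import deque

def balls_intersect(c1, c2, eps):
    # two open balls of radius eps intersect iff their centers
    # are less than 2 * eps apart
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5 < 2 * eps

def chainable(centers, eps):
    """BFS on the ball-intersection graph: True iff every ball can be
    reached from the first through pairwise intersecting balls."""
    n = len(centers)
    adj = {i: [] for i in range(n)}
    for i, j in combinations(range(n), 2):
        if balls_intersect(centers[i], centers[j], eps):
            adj[i].append(j)
            adj[j].append(i)
    seen, queue = {0}, deque([0])
    while queue:
        i = queue.popleft()
        for j in adj[i]:
            if j not in seen:
                seen.add(j)
                queue.append(j)
    return len(seen) == n

centers = [(0.0, 0.0), (0.9, 0.0), (1.8, 0.0)]  # a chain of ball centers
print(chainable(centers, eps=0.5))   # True: consecutive balls overlap
print(chainable(centers, eps=0.3))   # False: the covering splits in two
```

With eps = 0.5 an open-ball path links every center to every other, so a property constant on each ball (such as the coalition structure) is constant on the union; with eps = 0.3 the covering disconnects and the argument fails.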

Summary and Conclusions
If we strengthen the assumptions underlying our discounted stochastic game of club network formation (from [A-1] to [A-1]*), then because the equilibrium state-network transition kernel, π*(dz′|z), induced by the SMPE strategy profile, σ*(·), of the DSG satisfies the Tweedie conditions [T], we know by Theorem 2 in Tweedie [37] and further results by Meyn and Tweedie [25], which build on Tuominen and Tweedie [36], Tweedie [37], and Costa and Dufour [10], that the state-network space, Z := Ω × K, can be uniquely partitioned into a finite number of maximal Harris sets, H_i ⊂ Z, and a transient set, E ⊂ Z; in particular, we know that Z = E ∪ H_1 ∪ · · · ∪ H_n.

The Fu-Page SMPE Existence Result
Fu and Page [15] show that all convex DSGs (i.e., DSGs satisfying the usual assumptions where players have convex, compact metric action choice sets) have uC Nash correspondences containing continuum-valued uC Nash sub-correspondences. Thus, any DSG satisfying the usual assumptions with finite action sets (as in LM2015) played over the induced behavioral action sets (i.e., all probability measures over pure action sets), making the induced DSG a convex DSG, has a Nash correspondence containing a continuum-valued uC Nash sub-correspondence. It then follows from the Fu-Page Fixed Point Theorem [15] that there exists an m-tuple of value functions, v* ∈ L∞_Y, such that v*(ω) ∈ P(ω, v*) a.e. [μ], where P(·, ·) is the induced uC Nash payoff correspondence. To see why a convex DSG always contains a continuum-valued uC Nash sub-correspondence, we begin by noting that the uC Nash correspondence is given by the composition of two mappings. In particular, we have for each (ω, v) ∈ Ω × L∞_Y the composition N(K(ω, v)), where N(·) is the Ky Fan correspondence (i.e., the KFC) defined on the metric space of Ky Fan sets, S, in X × X, taking set values in the set of Nash equilibria, and K(·, ·) is the collective security mapping (CSM) defined on the set of state-value function profile pairs, (ω, v) ∈ Ω × L∞_Y, taking Ky Fan set values. Thus, for each (ω, v) ∈ Ω × L∞_Y, K(ω, v) ∈ S is a Ky Fan set, and for each Ky Fan set, E ∈ S, N(E) is the corresponding nonempty, compact set of Nash equilibria. Equipping the compact metric product space X × X with the sum metric, ρ_{X×X} := ρ_X + ρ_X, and then equipping the hyperspace, P_f(X × X), of nonempty closed subsets of X × X with the Hausdorff metric, h_{ρ_{X×X}}, induced by the sum metric, ρ_{X×X}, on X × X, the KFC is an h_{ρ_{X×X}}-ρ_X-upper semicontinuous correspondence defined on the compact metric hyperspace of Ky Fan sets, S, taking nonempty compact Nash equilibrium values.
In the language of Holá-Holý [20], the KFC is a USCO and as such contains minimal USCOs (i.e., minimal KFCs). More importantly, each such minimal KFC belonging to the KFC takes closed, connected, and minimally essential Nash equilibrium values (see Fu-Page [15]). Thus, any such continuum-valued minimal KFC, n(·), when composed with the DSG's Ky Fan valued collective security mapping, K(·, ·) : Ω × L∞_Y → S, delivers a continuum-valued uC Nash sub-correspondence, (ω, v) → n(K(ω, v)).
It then follows from the Fu-Page Fixed Point Theorem [15] that there exists an m-tuple of value functions, v* ∈ L∞_Y, such that v*(ω) ∈ P(ω, v*) a.e. [μ]. The key fact making the proof of the Fu-Page [15] fixed point result possible and straightforward is that the composition of the continuum-valued uC Nash sub-correspondence, n(K(·, ·)), with players' m-tuple of Caratheodory payoff functions induces, for each player d, an interval-valued uC Nash payoff sub-correspondence, p_d(·, ·) (57). Thus, each player's Nash payoff sub-correspondence, p_d(·, ·), is Caratheodory approximable (see Kucia and Nowak [22]), making possible a simple proof, using approximation methods, that there exists v* ∈ L∞_Y such that v*(ω) ∈ P(ω, v*), a.e. [μ]. In Fu-Page [15], the fixed point problem, i.e., showing that the Nash payoff selection correspondence has fixed points, takes center stage. This is not the case in Levy-McLennan [24]. In fact, in Levy-McLennan [24] the statement of the one-shot deviation principle is incomplete; more on this below.

The Levy-McLennan DSG Model
In LM2015, there are finitely many players, d = A, B, C, C′, D, D′, E, F, each with a finite action set, A_d. The state space is given by the unit interval, i.e., Ω = [0, 1], equipped with the Borel σ-field, B_{[0,1]}. Each player's stage payoff function, (ω, a) → g_d(ω, a), where a := (a_A, a_B, a_C, a_{C′}, a_D, a_{D′}, a_E, a_F) ∈ A, is a measurable function, and g_d(·, a) is continuous on (−1/2, 1/2) for each pure action profile a ∈ A. The law of motion governing state transitions is given by a transition probability, q(dω′|ω, a). To begin restating the LM2015 game over behavioral actions, let σ → π_σ be the mapping from behavioral action profiles, (σ_d, σ_{−d}) ∈ Δ(A), into product probability measures, π_σ := ⊗_{d=A}^{F} σ_d(da_d). Also, as in LM2015, let γ^σ(ω_0) be the expected payoff vector under behavioral strategy profile σ in the game starting from state ω_0. Restating stage payoffs over behavioral actions, we have a Caratheodory function on Ω × Δ(A). Also, each player's payoff function in the underlying one-shot game (over behavioral actions) is given by U_d(ω, v_d, σ). Thus, U_d(·, ·, ·) is Caratheodory (measurable in ω and jointly continuous in v_d and σ).
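The product-measure construction π_σ := ⊗_d σ_d(da_d) can be sketched directly. The two players and their actions below are placeholders for illustration, not the LM2015 game: each player independently mixes over finitely many pure actions, and the induced distribution over pure action profiles is the product of the marginals.

```python
from itertools import product

# Behavioral (mixed) actions for two hypothetical players over two actions.
sigma = {
    "A": {"L": 0.5, "R": 0.5},
    "B": {"L": 0.25, "R": 0.75},
}

# Build the product measure pi_sigma over pure action profiles.
players = sorted(sigma)
profiles = {}
for actions in product(*(sigma[d] for d in players)):
    prob = 1.0
    for d, a in zip(players, actions):
        prob *= sigma[d][a]
    profiles[actions] = prob

assert abs(sum(profiles.values()) - 1.0) < 1e-12   # a probability measure
print(profiles[("L", "R")])   # 0.5 * 0.75 = 0.375
```

Expected stage payoffs over behavioral actions are then sums of pure-action payoffs weighted by these product probabilities, which is what makes them jointly continuous in the players' mixtures.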
It would seem that the LM2015 DSG model satisfies the usual assumptions, and therefore, because all such DSGs have Nash correspondences containing continuum-valued Nash sub-correspondences, the Levy-McLennan counterexamples do not apply to SMPE existence in behavioral strategies in these DSGs.
As noted above, because in the network formation DSG considered here each player's uC Nash payoff sub-correspondence induced by a continuum-valued minimal uC Nash correspondence is Caratheodory approximable, the DSG is approximable.

Final Observations
We close by making two observations. First, concerning an observation made earlier about the one-shot deviation principle: for behavioral strategy, σ(da|·), with σ(da|ω) a Nash equilibrium of the one-shot game X^σ(ω, ·) in state ω, where player d's payoff function is X^σ_d(ω, ·), it is not automatic that X^σ_d(ω, σ(ω)) = γ^σ_d(ω) for each player, as suggested in Levy-McLennan. The fixed point problem remains. In order for the strategy, σ(da|·), with σ(da|ω) being a Nash equilibrium of the one-shot game X^σ(ω, ·), to be a stationary Markov perfect equilibrium, it must also be true that X^σ_d(ω, σ(ω)) = γ^σ_d(ω) for each player d and for ω a.e. [μ] (i.e., in addition to the Nash condition, the Bellman condition above must hold).

Second, the steps in the construction of the Levy-McLennan DSG model are highly abbreviated. Thus, it is difficult to determine whether or not, in constructing their DSG model, Levy and McLennan make new assumptions implicitly or modify existing assumptions. Here we show that all DSGs satisfying the usual assumptions are approximable. Levy-McLennan [24] start with a static strategic form base game with circular Nash equilibria (i.e., Nash equilibria homeomorphic to the unit circle) and, via a sequence of modifications and additions to the base game (perhaps inadvertently making new assumptions or modifying the existing assumptions), construct a DSG that is not approximable and is without SMPE. One way to think about the Levy-McLennan results is that they show that in the class of nonapproximable DSGs, there exist DSGs having no stationary Markov perfect equilibria. Thus, while not all nonapproximable DSGs have SMPE, as shown by Levy and McLennan [24], all approximable DSGs do, as shown by Fu and Page [15].
Moreover, because all DSGs satisfying the usual assumptions have players with Caratheodory approximable Nash payoff sub-correspondences, all such DSGs escape the Levy-McLennan counterexamples and, as we state here (and show formally in Fu-Page [15]), possess stationary Markov perfect equilibria.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

We will equip each hyperspace of c-layers, 2^{K_c}, with the Hausdorff metric induced by the metric, ρ_{K_c}, on the set of c-connections. In defining the Hausdorff metric h_{K_c} on 2^{K_c}, we must allow for empty c-layers. For nonempty c-layer G_c ∈ 2^{K_c} and connection (a, (d, c)) ∈ K_c, we define the distance from (a, (d, c)) to the nonempty c-layer G_c to be dist((a, (d, c)), G_c) := min_{(a′,(d′,c)) ∈ G_c} ρ_{K_c}((a, (d, c)), (a′, (d′, c))); and for c-layers G_c ≠ ∅, G′_c ≠ ∅, we define the excess of G′_c over G_c to be e(G′_c, G_c) := max_{(a′,(d′,c)) ∈ G′_c} dist((a′, (d′, c)), G_c).
The Hausdorff distance between nonempty c-layers, G_c and G′_c, is given by h_{K_c}(G_c, G′_c) := max{e(G_c, G′_c), e(G′_c, G_c)}, while for nonempty G_c we set h_{K_c}(G_c, ∅) = h_{K_c}(∅, G_c) := diam(K_c) and h_{K_c}(∅, ∅) := 0. The diameter, diam(K_c), of the set of c-connections, K_c, is given by diam(K_c) := max_{(a′,(d′,c)), (a″,(d″,c)) ∈ K_c} ρ_{K_c}((a′, (d′, c)), (a″, (d″, c))).
Thus, the Hausdorff metric on the hyperspace of c-layers, 2^{K_c}, is given by h_{K_c}(·, ·) as defined above (70). Given that the basic building block of a club network array is the hyperspace K_{dc} of feasible c-connections belonging to player d, with an underlying set of connections given by K_{dc} := A(d, c) × ({d} × {c}), and given that each player can take at most one action in each club, we see that the Hausdorff metric h_{K_dc} on K_{dc} reduces to ρ(a_{dc}, a′_{dc}) for nonempty dc-layers G_{dc} ≠ ∅ and G′_{dc} ≠ ∅.
The Hausdorff metric, h_K, on the hyperspace of feasible club networks, K = (K_{dc})_{dc}, is given, for G := (G_c)_{c∈C} and G′ := (G′_c)_{c∈C} in ∏_{c∈C} K_c, by the product metric induced by the layer metrics h_{K_c}. Because (K_c, ρ_{K_c}) is a compact metric space, we have by Proposition C.2 in Bertsekas and Shreve [6] that (2^{K_c}, h_{K_c}) is a compact metric space of c-layers, and because K_{dc} is an h_{K_dc}-closed subset of the h_{K_dc}-compact hyperspace 2^{K_dc}, K_{dc} is h_{K_dc}-compact, implying that (K, h_K) is a compact metric space.
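As a concrete illustration of the excess/Hausdorff construction above, the sketch below computes Hausdorff distances between finite "layers" (sets of connections). The ground metric rho and the diameter value are invented for illustration; the empty-layer convention follows the definition above.

```python
# Illustrative ground metric on connections (action value, club label).
def rho(x, y):
    return abs(x[0] - y[0]) + (0.0 if x[1] == y[1] else 1.0)

DIAM = 10.0  # stands in for diam(K_c), assigned when one layer is empty

def excess(A, B):
    # e(A, B): distance of the worst-placed point of A from the set B
    return max(min(rho(a, b) for b in B) for a in A)

def hausdorff(A, B):
    if not A and not B:
        return 0.0
    if not A or not B:
        return DIAM          # empty-layer convention, as in the text
    return max(excess(A, B), excess(B, A))

G1 = {(0.0, "c1"), (1.0, "c1")}
G2 = {(0.2, "c1"), (1.0, "c1")}
print(hausdorff(G1, G2))     # 0.2: the layers differ by one nearby connection
print(hausdorff(G1, set()))  # 10.0: the diam(K_c) convention for empty layers
```

Two layers are h_K-close exactly when every connection of each lies near some connection of the other, which is why small Hausdorff distance between club networks preserves the pattern of memberships and hence the coalition structure.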
A sequence, {v_n}_n ⊂ L∞_Y, K-converges (i.e., Komlos convergence; Komlos [21]) to v ∈ L∞_Y, denoted by v_n →^K v, if the arithmetic means of every subsequence of {v_n}_n converge a.e. [μ] to v. The relationship between w*-convergence and K-convergence is summarized via the following results from Balder [5]. For every sequence of value functions, {v_n}_n ⊂ L∞_Y, and v ∈ L∞_Y, the following statements are true: (i) If the sequence {v_n}_n K-converges to v, then {v_n}_n w*-converges to v.
(ii) The sequence {v n } n w * -converges to v if and only if every subsequence {v n k } k of{v n } n has a further subsequence, {v n kr } r , K -converging to v.
For any sequence of value function profiles, {v_n}_n, in L∞_Y it is automatic that sup_n ∫ ‖v_n(ω)‖_{R^m} dμ(ω) < +∞.
Thus, by the classical Komlos Theorem [21], any such sequence, {v n } n , has a subsequence, {v n k } k that K -converges to some K -limit, v ∈ L ∞ Y .
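Komlos-type averaging can be illustrated numerically. The sketch below uses an invented oscillating sequence, not the paper's value functions: the terms v_n = (−1)^n f never converge pointwise, yet their arithmetic (Cesàro) means converge to 0 everywhere, the kind of averaged convergence the Komlos theorem guarantees (along a subsequence) for any L1-bounded sequence.

```python
import numpy as np

# An L1-bounded, pointwise non-convergent sequence whose Cesaro means converge.
grid = np.linspace(0.0, 1.0, 101)
f = np.sin(np.pi * grid)          # an arbitrary bounded profile on [0, 1]

def cesaro_mean(r):
    """Arithmetic mean (1/r) * sum_{n=1}^{r} (-1)^n * f."""
    vs = [((-1) ** n) * f for n in range(1, r + 1)]
    return sum(vs) / r

m_1000 = cesaro_mean(1000)
print(float(np.max(np.abs(m_1000))))  # sum of (-1)^n over n = 1..1000 is 0
```

The individual terms flip sign forever, but after averaging an even number of them the oscillation cancels exactly, illustrating why passing to arithmetic means recovers a limit that the raw sequence lacks.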

Strong Stochastic Continuity of the Law of Motion
Under the stochastic continuity assumptions made above, [A-1](14), we have by Scheffé's theorem (see Billingsley [8], Theorem 16.11) that for each ω ∈ Ω and any sequence {G_n}_n of feasible networks at ω converging to a feasible network G*, h(·|ω, G_n) converges to h(·|ω, G*) in L1 norm (i.e., for each ω ∈ Ω the conditional density mapping, G → h(·|ω, G), is continuous in L1 norm with respect to G). Thus, by Scheffé's theorem, the L1 norm continuity of G → h(·|ω, G) with respect to network G in each state ω is equivalent to the continuity of G → q(E|ω, G) in each state ω with respect to network G uniformly in E ∈ B_Ω (i.e., for each ω ∈ Ω, q(E|ω, ·) is continuous in G, uniformly with respect to E ∈ B_Ω).
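The Scheffé-type equivalence invoked here rests on the identity sup_E |q_n(E) − q(E)| = (1/2)·‖h_n − h‖_{L1} for probability densities, so L1 convergence of the densities controls the induced measures uniformly over events. The sketch below checks this numerically on a discretized state space; the density family is invented for illustration.

```python
import numpy as np

# Discretized densities on [0, 1]: h_theta = 1 + theta * sin(2*pi*x),
# which integrates to (approximately) 1 for any theta.
grid = np.linspace(0.0, 1.0, 1001)
dx = grid[1] - grid[0]

def density(theta):
    return 1.0 + theta * np.sin(2 * np.pi * grid)

h_star = density(0.0)
for theta in (0.4, 0.1, 0.01):
    h_n = density(theta)
    l1 = np.sum(np.abs(h_n - h_star)) * dx
    # the event maximizing |q_n(E) - q(E)| is E* = {h_n > h}; the gap there
    # equals half the L1 distance
    E = h_n > h_star
    gap = abs(np.sum((h_n - h_star)[E]) * dx)
    assert abs(gap - l1 / 2) < 1e-9
    print(round(l1, 3))
```

As theta shrinks, the L1 distance shrinks and with it the worst-case disagreement over any event, which is exactly the uniform-in-E continuity of G → q(E|ω, G) asserted above.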