Optimal coalition structures for probabilistically monotone partition function games

For cooperative games with externalities, the problem of optimally partitioning a set of players into disjoint exhaustive coalitions is called coalition structure generation, and is a fundamental computational problem in multi-agent systems. Coalition structure generation is, in general, computationally hard and a large body of work has therefore investigated the development of efficient solutions for this problem. However, the existing methods are mostly limited to deterministic environments. In this paper, we focus attention on uncertain environments. Specifically, we define probabilistically monotone partition function games, a subclass of the well-known partition function games in which we introduce uncertainty. We provide a constructive proof that an exact optimum can be found using a greedy approach, present an algorithm for finding an optimum, and analyze its time complexity.


Introduction
A key open problem in multi-agent systems research is how to organise agents into disjoint teams so as to maximise some overall welfare measure. This coalition structure generation (CSG) problem is in general computationally complex: NP-hard even under quite modest assumptions. For this reason, there have been many studies directed at finding easy instances of the problem (see [2,3,17,35,37] for some examples and [32] for a detailed survey).
Coalition/cooperative game theory provides a conventional framework for modelling CSG problems [9,34,40]. A coalition game is defined by a pair comprised of a set of agents and a function that maps coalitions to values. A widely studied subclass of coalition games is that of characteristic function games (CFGs). For a CFG, the value of a coalition depends only on its members. A relatively less well studied subclass is that of partition function games (PFGs). For a PFG, the value of a coalition depends on its members as well as on the make-up of the non-members. CFGs are a proper subclass of PFGs, and they are also the most extensively studied coalition games: except for a few recent ones, most of the existing methods for solving the CSG problem are for CFGs [32].
Existing solutions for the CSG problem for PFGs were devised either by placing constraints on externalities or else on the function that maps coalition structures to values (see Sect. 6 for details). A common feature of existing work is that it is focussed on games whose properties are known with certainty (we will call such games deterministic). However, stochasticity is inherent to many multi-agent settings. Given this, the goal of our present work is to investigate how to solve the CSG problem for stochastic environments in which some aspects of the problem are not known with certainty. To this end, we build on our prior work [17] in which we considered the CSG problem for PFGs with priority ordered players and a restricted class of value functions, viz., those that satisfy a certain monotonicity property, and devised a polynomial time solution. In this previous work, the notion of monotonicity was deterministic in the sense that, with probability one, the function that maps coalition structures to values satisfies monotonicity. In this paper, we relax the deterministic monotonicity assumption by allowing a certain degree of non-monotonicity. Specifically, we replace the deterministic monotonicity restriction by probabilistic monotonicity. Probabilistic monotonicity means that the value function obeys monotonicity with a certain probability 0 < p ≤ 1 (for the deterministic case p = 1 ). For probabilistically monotone PFGs with priority ordered players, we devise an algorithm for optimally solving the CSG problem and characterize its time complexity.
The need for optimal coalition structures arises in many real-world applications, for example in the formation of supply chains [18,26]. In such a setting, several different manufacturers of components form coalitions to achieve what they cannot do individually. Externalities arise, for example, from the requirement that all components ultimately conform to the same standards: the cost of standardisation procedures incurred by any coalition depends on the number and structure of the other coalitions. Another application is job scheduling [19]. In a job scheduling problem, there is a set of jobs and a set of machines, and each job must be allocated to a machine such that the overall cost of processing jobs is minimized. For further applications of cooperative games, see [12] for a comprehensive set of references. In all such applications, the general problem of optimal coalition structure determination involves a combinatorial search. However, the search space is not always unstructured: often there is some form of inherent regularity in at least a part of the space. For example, consider an airline crew scheduling problem, which requires organising staff into coalitions based on individual characteristics, and optimally scheduling the coalitions. The players, i.e., the crew, are ordered in that any non-optimal placement of an individual in the early part of a schedule can propagate inefficiencies down the chain and reduce the value of the entire partition. It is possible that the earlier in the schedule a non-optimality is introduced, the greater the reduction in the value of the partition as a whole, relative to the optimum. In other words, the search space is structured in that there is a relation between how close a partition is to the optimum and the value of that partition: the closer a structure is to the optimum, the more likely it is to have a higher value.
A lack of certainty in the values can arise because the number of possible coalition structures is too large to permit an accurate measurement of their values. It is therefore important to account for such uncertainties.
To the best of our knowledge, we are the first to consider the CSG problem in an uncertain environment. The key contributions of this paper are: 1. We develop a new model of probabilistic monotonicity in partition function games that extends previously studied deterministic monotonicity. 2. We analyse the model and constructively prove that an exact optimum can be found. 3. We devise a greedy algorithm for solving the CSG problem for probabilistically monotone partition function games. 4. We analyse the time complexity of the devised algorithm.
The remainder of the article is organised as follows. Section 2 provides background. Section 3 describes the model under investigation and Sect. 4 the proposed method. An algorithm for solving the CSG problem is in Sect. 5. Section 6 reviews related literature and Sect. 7 concludes. Appendix A collects together all our theorems and proofs. Appendix B describes an algorithm for generating the partitions of a set and analyses its time complexity.

Background
There is a finite non-empty set of players N = {1, ..., n} (see Table 1 for a summary of key notation). The term coalition refers to a non-empty subset of N. The symbol C, possibly with sub/superscripts, denotes a coalition, and 𝒞 denotes the set of all coalitions of N.
A coalition structure is an exhaustive partition of a set of players into mutually disjoint coalitions. Formally:

Definition 1 For any coalition C, let Π C denote the set of all coalition structures over C, i.e., the set of all collections {C 1 , … , C k } such that ⋃ i C i = C , ∀i C i ≠ ∅ , and ∀i ∀j ≠ i C i ∩ C j = ∅ .

The symbol π, possibly with sub/superscripts, will denote a coalition structure. An embedded coalition is a coalition together with a specification of how the non-members are organised into coalitions. It is formally defined as follows:

Definition 2 Let E denote the set of all embedded coalitions. Then E = {(C, π) ∶ π ∈ Π N , C ∈ π} .

Definition 3 A characteristic function game (CFG) is a pair (N, v 1 ) where v 1 ∶ 2 N → ℝ and 2 N denotes the set of all subsets of N. A partition function game (PFG) is a pair (N, v 2 ) where v 2 ∶ E → ℝ.
Thus CFGs are a subclass of PFGs.

Definition 4
The value of a coalition structure over N is given by an objective function v ∶ Π N → ℝ .
In the literature on CSG, the function that maps coalition structures to values, i.e., the objective function v, is a social welfare function. It is commonly assumed to be the sum of coalition values. In the proposed model (described in Sect. 3), however, the value of a structure does not have to be the sum of the values of its coalitions but could be any function.
The CSG problem then is to find an optimal structure π* , i.e., a structure such that v(π*) is highest among all coalition structures. For n player games, the number of all possible structures is Bell(n) [5,6] where Bell(n) = ∑ k C(n − 1, k) Bell(k) for 0 ≤ k ≤ n − 1 , with Bell(0) = Bell(1) = 1 . Since Bell(n) grows as O(n n ) [6], a brute force method for PFGs (and CFGs) is not feasible.
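The Bell-number recurrence can be checked with a short script; a sketch in Python (not from the paper):

```python
from math import comb

def bell(n):
    """Bell(n) via the recurrence Bell(n) = sum_{k=0}^{n-1} C(n-1, k) * Bell(k)."""
    b = [1]  # Bell(0) = 1
    for m in range(1, n + 1):
        b.append(sum(comb(m - 1, k) * b[k] for k in range(m)))
    return b[n]

# The number of coalition structures grows super-exponentially:
print([bell(n) for n in range(7)])  # [1, 1, 2, 5, 15, 52, 203]
```

Even for modest n the count is prohibitive (Bell(20) is already over 5 × 10^13), which is why structured instances are of interest.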
Given the computational complexity of CSG, it is natural to investigate easy instances of the problem. In prior work [17], we showed how the CSG problem for PFGs can be solved optimally in polynomial time, provided that the set N is ordered and the objective function is deterministically monotonic. Our goal now is to generalize this method to stochastic environments where monotonicity is satisfied with a certain probability. Monotonicity is defined in terms of a player ordering.

Player ordering
In the definition of both CFGs and PFGs, the term coalition refers to a set of players, i.e., there is no notion of ordering. Nevertheless, the Shapley value [36], a well-known axiomatic solution for CFGs, and its adaptation [27] to PFGs are motivated by a bargaining procedure in which the players are assigned a random ordering and coalition formation is viewed as a sequential process that happens as per the ordering. In generalized CFGs [28], attention is paid to ordering within the definition of a game. A generalized CFG is defined in terms of a set of players and a characteristic function that maps orderings on coalitions to numbers.
The notion of ordering is prevalent not only in the definition of cooperative games and solutions to them, but also in their applications to computationally hard optimization problems such as matching, network optimisation, and scheduling [12,22]. In the context of these applications, the notion of ordering is used to determine computationally easy problem instances. For example, consider scheduling. In a scheduling problem [39], there is a set of jobs and a set of machines. Each job must be allocated to a machine and, for each machine, an ordering over its allocated jobs must be determined such that the overall cost of processing jobs as per the order is minimized. Computationally easy instances of this problem are sought by imposing restrictions such as the cost function being monotonic, and the set of jobs being an ordered set. In particular, much of the scheduling literature [19,20] has focussed on a restricted class of objective functions called priority-generating functions. An objective function is said to be priority-generating if a related function called priority function exists which imposes an ordering over the set of jobs by assigning to jobs certain values called priorities. Crucially, priorities are assigned to jobs such that, based on priorities, the optimal schedule can be found in polynomial time. Thus, if a priority function can be found for a given objective function, and it takes polynomial time to compute job priorities using the function, then the scheduling problem can be solved in polynomial time [39].
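The notion of a priority-generating function can be illustrated with a classic single-machine example from the scheduling literature (not taken from this paper): minimizing total weighted completion time, where Smith's rule assigns each job the priority w/p and sequencing by non-increasing priority is optimal. A minimal sketch in Python:

```python
from itertools import permutations

def weighted_completion(order, p, w):
    """Total weighted completion time of jobs processed in the given order."""
    t = total = 0
    for j in order:
        t += p[j]            # completion time of job j
        total += w[j] * t
    return total

p = {'a': 3, 'b': 1, 'c': 2}   # processing times
w = {'a': 1, 'b': 4, 'c': 2}   # weights

# Smith's rule: order jobs by non-increasing priority w / p.
by_priority = sorted(p, key=lambda j: w[j] / p[j], reverse=True)

# Exhaustive check that the priority order is optimal for this instance.
best = min(permutations(p), key=lambda o: weighted_completion(o, p, w))
print(by_priority, weighted_completion(tuple(by_priority), p, w))  # ['b', 'c', 'a'] 16
assert weighted_completion(tuple(by_priority), p, w) == weighted_completion(best, p, w)
```

Here the priorities themselves are computable in polynomial time, so the optimal schedule is found by a single sort; this is the pattern that priority-based CSG methods mimic.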
Motivated by the above observations, in prior work [17] we took a priority-based approach and showed how the CSG problem for PFGs can be solved optimally in polynomial time provided the set N is ordered and the function v is monotonic (see Sect. 2.2). The CSG problem is related to scheduling in that players are analogous to jobs and coalitions to machines. A crucial difference though between existing priority-based scheduling methods [19,20,39] and [17] is that the former require job priorities as an input, while the latter requires only the existence of player priorities to be known, without requiring the actual priorities as problem input.

Monotonicity
In [17], the players are assumed to be priority ordered, and monotonicity of the objective function v is defined in terms of a distance metric d. For any two structures π 1 and π 2 , d(π 1 , π 2 ) denotes the distance between π 1 and π 2 .
Definition 5 (Deterministic monotonicity) For any two structures π 1 and π 2 , and a unique optimum π* : d(π 1 , π*) < d(π 2 , π*) ⟹ v(π 1 ) > v(π 2 ) with probability one. ◻

A PFG is deterministically monotonic if, for some player ordering, v is monotonic. It was shown that deterministic monotonicity is a property that can be satisfied by PFGs with positive only, negative only, and mixed externalities. Further, deterministic monotonicity was shown to be satisfiable for a wide range of objective functions, i.e., the value of a partition is not restricted to be the sum of the values of its coalitions but can be any function of these values. Illustrations of deterministically monotonic PFGs, and a polynomial time method for solving the CSG problem for deterministically monotonic PFGs, are in [17]. Our aim now is to generalise this method to probabilistic monotonicity.

Coalition structure generation in stochastic environments
The players to be partitioned are given by the set N. The priority ordering referred to in Sect. 2.1 exists over certain players in N. The set of these ordered players is denoted 𝒪 ⊆ N with ℓ = |𝒪| . Each player in N − 𝒪 is called a non-priority player. Let ℙ i ∈ 𝒪 denote the top ith priority player, i.e., the ordering is ℙ 1 ≻ ⋯ ≻ ℙ ℓ . The quality of a coalition structure depends on how the priority players are positioned in the structure. The higher up a player is in the ordering, the more important it is to have the player in its optimal position. Any coalition that contains at least one priority player is called a priority coalition. An ordering over 𝒪 induces an ordering over the priority coalitions: they are ordered as per the priorities of their highest priority members. Suppose hpm(X) denotes the highest priority member of coalition X. Suppose that the priority players are spread over m coalitions. Then these m coalitions form a sequence (C 1 , C 2 , … , C m ) such that ℙ 1 ∈ C 1 , and if ℙ x ≻ ℙ y and ℙ x = hpm(C i ) and ℙ y = hpm(C j ) , then i < j . For any coalition structure over N, there is no ordering over coalitions comprised solely of non-priority players.
We consider stochastic environments modelled as probabilistically monotone PFGs for which it is known that a priority ordering exists, but the ordering itself is unknown, i.e., the identities of the priority players ℙ 1 , … , ℙ ℓ are unknown. Solving the CSG problem requires determining these identities and an optimal way of partitioning the players in N. To achieve this, we represent coalition structures as described in Sect. 3.1. In Sect. 3.2, we introduce a distance metric, in terms of which we define probabilistic monotonicity in Sect. 3.3. The proposed method for solving the CSG problem is described in Sect. 4.

Representation
Let ℙ denote the sequence (ℙ 1 , … , ℙ ℓ ) . For any k ≤ ℓ , ℙ |k will denote the k element prefix of ℙ , i.e., ℙ |k = (ℙ 1 , … , ℙ k ) . ℙ E |k is defined as the set of k element prefixes of the elements of Perm(ℙ) , where Perm(ℙ) denotes the set of all permutations of ℙ . A coalition structure over 𝒪 is represented as a sequence π = (x 1 , x 2 , … , x ℓ ) such that x i ∈ {1, … , ℓ} is the index of the coalition to which player ℙ i belongs. Then Π 𝒪 will denote the set of all structures over 𝒪 represented in this way. Π E |k will denote the set of all those sequences in Π 𝒪 whose k element prefix is π |k .
Example 1 Consider a PFG with n = 3 players who are members of an airline crew. There are Bell(3) = 5 possible coalition structures. Let ℓ = 3 . Column 1 of Table 2 shows how coalition structures will be represented. For three players, there are six possible orderings of players as illustrated in Columns 2 to 7. How the representation of a particular structure is interpreted depends on the player ordering ℙ . For example, for ℙ = (1, 2, 3) (see Column 2), we have ℙ 1 = 1 , ℙ 2 = 2 , and ℙ 3 = 3 . The representation (1, 1, 1) (see Row 1, Column 2) means the structure comprised of the single grand coalition C 1 = {1, 2, 3} .

Let Π* denote the set of all optimal structures over N. For any π ∈ Π N , π i will denote the index of the coalition to which player i belongs in the structure π . We assume that the optima are unique up to the positions of the top ζ priority players, where 3 < ζ ≤ n − 3 , i.e., any two distinct optimal structures π 1 ∈ Π* and π 2 ∈ Π* assign each of ℙ 1 , … , ℙ ζ to identically indexed coalitions.
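The representation in Column 1 of Table 2 can be enumerated as restricted-growth index sequences. A minimal sketch (Python; the canonical numbering of coalitions in order of first appearance is an assumption matching Table 2):

```python
from itertools import product

def structures(num_players):
    """Enumerate index-sequence representations of coalition structures.

    A sequence (x_1, ..., x_n) is kept only in canonical form: coalitions
    are numbered in order of first appearance (x_1 = 1, and each x_i is at
    most one larger than the maximum index seen so far).
    """
    out = []
    for seq in product(range(1, num_players + 1), repeat=num_players):
        if seq[0] == 1 and all(seq[i] <= max(seq[:i]) + 1
                               for i in range(1, num_players)):
            out.append(seq)
    return out

print(structures(3))
# 5 sequences, matching Bell(3): (1,1,1), (1,1,2), (1,2,1), (1,2,2), (1,2,3)
```

For ℙ = (1, 2, 3), the sequence (1, 2, 1) is read as the structure ({1, 3}, {2}), exactly as in Table 2.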

Distance measure
To measure the distance between any two structures π 1 and π 2 over N, we define a metric d in terms of the positions of the priority players, i.e., in terms of the restriction of π 1 and π 2 to 𝒪 . Such a restriction can be represented as described in Sect. 3.1.

Theorem 1 The distance function d obeys all metric axioms.
Proof The axioms are identity, symmetry, and triangle inequality.

Identity

For any coalition structure π 1 over N, d(π 1 , π 1 ) = 0 .

(Table 2: an illustration of the coalition structure representation and semantics for each one of the six possible player orderings for n = 3 player games.)

Probabilistic monotonicity

Probabilistic monotonicity is modelled as follows. Let the set Ω be defined as follows: Ω = {(x, y) ∶ x ∈ Π N , y ∈ Π N , x ≠ y} . Let the functions f ∶ Ω → {<, =, >} × {<, =, >} , r d ∶ Ω → {<, =, >} , and r v ∶ Ω → {<, =, >} be defined as follows. For any (x, y) ∈ Ω , f (x, y) = (r d (x, y), r v (x, y)) where r d (x, y) is the relation (<, =, or >) between d(x, π*) and d(y, π*) , and r v (x, y) is the relation between v(x) and v(y) . The set Ω can be partitioned into nine pairwise disjoint subsets Ω ee , Ω eg , Ω el , Ω ge , Ω gg , Ω gl , Ω le , Ω lg , and Ω ll , where the first letter of a subscript records r d , the second records r v , and e, g, and l stand for = , > , and < respectively. Then Ω is the union of these nine subsets. Among these nine subsets, only Ω ee , Ω eg , Ω el , Ω gl , and Ω lg satisfy monotonicity. The union of these is denoted S MON : S MON = Ω ee ∪ Ω eg ∪ Ω el ∪ Ω gl ∪ Ω lg .

Definition 8 Each pair in S MON is called a monotonicity-satisfying pair.
Probabilistic monotonicity is modelled by a probability distribution (see Table 3) induced by the function v over the set Ω . As per Table 3, each of the nine subsets Ω ab is assigned a probability p ab , and these probabilities sum to one. Since the elements of Ω are ordered pairs, we have the following: p eg = p el , p ge = p le , p gg = p ll , and p gl = p lg . For probabilistic monotonicity, p ge ≥ 0 , p gg ≥ 0 , p le ≥ 0 , and p ll ≥ 0 . In contrast, for deterministic monotonicity, each one of the probabilities p ge , p gg , p le , and p ll is zero. Example 2 is an illustration of probabilistic monotonicity.

Example 2 Consider the game given in Table 4 (the corresponding probability distribution is shown in Table 5). Suppose the optimum is π* = ({1, 3}, {2}) , which is represented as the sequence (1, 2, 1) (enclosed in an oval in the table). Consider Row 4: π 1 is further away from the optimum than is π 2 and yet v(π 1 ) > v(π 2 ) . This is a violation of monotonicity. Likewise, monotonicity is violated in Rows 11, 15, and 17. There is no violation in any of the remaining rows.
Clearly, the number of rows in which monotonicity is violated cannot be arbitrary. If we want to manage computational complexity, the degree of non-monotonicity must be bounded.
Definition 9 (Degree of non-monotonicity) The degree of non-monotonicity D is the sum of the cardinalities of Ω ge , Ω gg , Ω le , and Ω ll , i.e., D = |Ω ge | + |Ω gg | + |Ω le | + |Ω ll | . Suppose that D satisfies the relation D < ζ for some 3 < ζ ≤ n − 3 and n > 7 . Our aim then is to solve the following CSG problem:

CSG problem definition: For a probabilistically monotone PFG (N, v 2 , v) with the degree of non-monotonicity known to be bounded above by ζ , find the identities of ℙ 1 , … , ℙ ℓ and an optimal structure π* ∈ Π* , given as input N and the function r v induced by v.
Note that the actual values of coalition structures as given by v are not part of the input. Rather, only the relation between the values of any two structures is part of the problem input.
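The nine-way classification of ordered structure pairs and the resulting degree of non-monotonicity can be sketched as follows (Python; the structures, distance, value function, and optimum below are illustrative stand-ins used only to generate the classification, since, as just noted, only r v is part of the actual problem input):

```python
# Classify ordered pairs of structures by (r_d, r_v), where r_d compares
# distances to the optimum and r_v compares values. Labels: e/g/l for =/>/<.

def rel(a, b):
    return 'e' if a == b else ('g' if a > b else 'l')

def classify(pairs, d, v, optimum):
    """Partition Omega into the nine subsets Omega_ee, ..., Omega_ll."""
    buckets = {x + y: [] for x in 'egl' for y in 'egl'}
    for (p1, p2) in pairs:
        label = rel(d(p1, optimum), d(p2, optimum)) + rel(v(p1), v(p2))
        buckets[label].append((p1, p2))
    return buckets

MON_LABELS = {'ee', 'eg', 'el', 'gl', 'lg'}  # monotonicity-satisfying subsets

# Toy instance: structures are integers, the optimum is 0, distance is |x|,
# and the value function decays with distance except for one violation.
values = {0: 10, 1: 7, 2: 8, 3: 2}           # v(2) > v(1) breaks monotonicity
omega = [(x, y) for x in values for y in values if x != y]
b = classify(omega, d=lambda s, opt: abs(s - opt), v=values.get, optimum=0)
D = sum(len(b[l]) for l in ('ge', 'gg', 'le', 'll'))
print(D)  # 2: the pairs (2, 1) and (1, 2) are the only violations
```

With D = 2 < ζ for any ζ ≥ 3, this toy game would qualify as probabilistically monotone in the sense of Definition 9.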

The proposed method
Section 4.1 gives a brief overview of the three key steps of the proposed method for finding ℙ and an optimal structure π* . Details of Step 1 are in Sect. 4.2, Step 2 in Sect. 4.3, and Step 3 in Sect. 4.4. A complete formulation of the method is given as an algorithm in Sect. 5.

CSG method: overview
Step 1 Determine who the two top priority players are, and their optimal coalitions. At the end of this step, ℙ |2 and π* |2 will become known.
Step 2 For each 3 ≤ i ≤ ℓ , determine the identity of the player ℙ i and its optimal coalition. At the end of this step, ℙ |ℓ and π* |ℓ will become known.

Consider Step 1. To begin, we know that ℙ 1 ∈ N , so there are n possibilities for the identity of ℙ 1 . We also know that ℙ 1 must belong to the first coalition (denoted C 1 ) in π* . Then, for any one possibility for the identity of ℙ 1 , say ℙ 1 = x , we know that there must be n − 1 possibilities for the identity of ℙ 2 , i.e., ℙ 2 ∈ N − {x} , and that there must be two possibilities for its optimal coalition, i.e., ℙ 2 must be a member of either C 1 or C 2 . Let Z be the set of all these possibilities and let each element of Z be a quadruple, i.e., Z = {(x, y, 1, z) ∶ x ∈ N, y ∈ N − {x}, z ∈ {1, 2}} .

The semantics of the quadruple (x, y, 1, z) is that it is possible that ℙ 1 = x , ℙ 2 = y , x belongs to the coalition C 1 , and y belongs to the coalition C z in π* . For example, the quadruple (2, 3, 1, 2) means that ℙ 1 = 2 , ℙ 2 = 3 , 2 ∈ C 1 , and 3 ∈ C 2 is a possibility. We find it convenient to introduce terminology for referring to certain pairs of elements of Z.

Definition 10
For any x ∈ N and any y ∈ N − {x} , (x, y, 1, 1) and (y, x, 1, 1) are each other's partners, and (x, y, 1, 2) and (y, x, 1, 2) are each other's partners. A partner pair is thus in one of the two forms ((x, y, 1, 1), (y, x, 1, 1)) or ((x, y, 1, 2), (y, x, 1, 2)) . Lemmas 1 and 2 readily follow from the definition of Z and that of a partner pair.
Lemma 1 All the following assertions are true.
• Every element in Z has a unique partner in Z.
• Every element in Z is the partner of a unique element in Z. ◻
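The hypothesis set Z and the partner relation of Definition 10 can be sketched as follows (Python; a minimal illustration for n = 5):

```python
from itertools import permutations

def build_Z(players):
    """Z: quadruples (x, y, 1, z) meaning 'perhaps P1 = x belongs to C1
    and P2 = y belongs to Cz' for z in {1, 2}."""
    return {(x, y, 1, z) for x, y in permutations(players, 2) for z in (1, 2)}

def partner(q):
    """The partner of (x, y, 1, z) is (y, x, 1, z) (Definition 10)."""
    x, y, one, z = q
    return (y, x, one, z)

Z = build_Z(range(1, 6))              # n = 5 players
print(len(Z))                         # 2 * n * (n - 1) = 40 hypotheses
assert all(partner(q) in Z and partner(partner(q)) == q for q in Z)
```

The final assertion mirrors Lemma 1: every element of Z has a unique partner in Z, and partnership is an involution.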

Lemma 2
Any two distinct partner pairs of Z must be disjoint. Now, we know that there is a unique ẑ ∈ Z such that ẑ ⊧ π* and, for each z ∈ Z , z ̸⊧ π* iff z ≠ ẑ . This observation leads to the definition of the set Z̄ : Z̄ = {z ∈ Z ∶ z ̸⊧ π*} . Since only one element of Z corresponds to π* , we get |Z̄| = |Z| − 1 .
Step 1 (details in Sect. 4.2) involves doing three different tests T1, T2, and T3 with appropriate arguments in such a way that at least |Z| − 2 elements of Z̄ can be determined. Once this is done, one of the remaining elements of Z must correspond to π* and this concludes Step 1. For Step 2, we define a further test T4, details of which are in Sect. 4.3.

CSG method: Step 1
Section 4.2.1 gives a definition of the tests T1, T2, and T3. Section 4.2.2 is a specification of the parameters of these tests. Section 4.2.3 describes how these tests can be used to find an element of S MON . Section 4.2.4 describes how these tests can be used to determine who the two top priority players are and their positions in an optimal structure.

The tests T1, T2, T3
The tests T1, T2, and T3 are defined as follows:

T1 For any i ∈ N and j ∈ N − {i} , the test T1(i, j) compares the values of any two structures π 1 ∈ Π N and π 2 ∈ Π N such that
* in π 1 , the players i and t belong to different coalitions but j and s belong to the same coalition, and
* in π 2 , the players i and t belong to the same coalition but j and s belong to different coalitions,
where t is an arbitrary element of N − {i, j} and s is an arbitrary element of N − {i, j} . The values of i, j, s, and t are determined on the basis of the elements in Z (see Theorem 5 for details). Let S 1 T1 (i, j) denote the set of all those structures in which the players i and t belong to different coalitions but j and s belong to the same coalition. Let S 2 T1 (i, j) denote the set of all those structures in which the players i and t belong to the same coalition but j and s belong to different coalitions. The test T1(i, j) uses r v to compare the value of any structure from S 1 T1 (i, j) to the value of any structure from S 2 T1 (i, j).

T2 For any i ∈ N and any j ∈ N − {i} , the test T2(i, j) compares the values of any two structures π 1 ∈ Π N and π 2 ∈ Π N such that
* in π 1 , i and t are apart, and j and s are apart, and
* in π 2 , i and t are together, and j and s are together,
where t is an arbitrary element of N − {i, j} and s is an arbitrary element of N − {i, j} . The values of i, j, s, and t are determined on the basis of the elements in Z (see Theorem 5 for details). Let S 1 T2 (i, j) denote the set of all those structures in which the players i and t belong to different coalitions, and j and s belong to different coalitions. Let S 2 T2 (i, j) denote the set of all those structures in which the players i and t belong to the same coalition, and j and s belong to the same coalition. The test T2(i, j) uses r v to compare the value of any structure from S 1 T2 (i, j) to the value of any structure from S 2 T2 (i, j).
T3 For any i ∈ N and j ∈ N − {i} , the test T3(i, j) compares the values of any two structures π 1 ∈ Π N and π 2 ∈ Π N such that
* in π 1 , i and j are apart, and
* in π 2 , i and j are together.
Let S 1 T3 (i, j) denote the set of all those structures in which the players i and j belong to different coalitions. Let S 2 T3 (i, j) denote the set of all those structures in which the players i and j belong to the same coalition. The test T3(i, j) uses r v to compare the value of any structure from S 1 T3 (i, j) to the value of any structure from S 2 T3 (i, j).
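Test T3 can be sketched as follows (Python; the structure enumeration, the 0-based player indices, and the toy r v oracle are illustrative assumptions, not the paper's):

```python
from itertools import product

def all_structures(n):
    """Canonical index-sequence representations (coalitions numbered by
    first appearance); player p belongs to coalition seq[p] (0-based p)."""
    return [seq for seq in product(range(1, n + 1), repeat=n)
            if seq[0] == 1 and
            all(seq[k] <= max(seq[:k]) + 1 for k in range(1, n))]

def T3(i, j, structures, r_v):
    """Pick any pi_1 with players i, j apart and any pi_2 with i, j
    together, and return r_v(pi_1, pi_2)."""
    S1 = [s for s in structures if s[i] != s[j]]  # i, j in different coalitions
    S2 = [s for s in structures if s[i] == s[j]]  # i, j in the same coalition
    return r_v(S1[0], S2[0])

structs = all_structures(4)
# Stand-in r_v: compare a toy value, here the number of coalitions.
result = T3(0, 1, structs,
            lambda a, b: '<' if len(set(a)) < len(set(b))
            else ('>' if len(set(a)) > len(set(b)) else '='))
print(result)  # '>' for this toy r_v
```

Tests T1 and T2 follow the same pattern with the membership predicates on (i, t) and (j, s) defining S 1 and S 2.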

The universes of T1, T2, T3
By definition, there are several different ways of doing tests T1(i, j), T2(i, j), and T3(i, j) for any i and j. The set of all these possibilities is defined in terms of the universe of a test.

Definition 11
For any 1 ≤ a ≤ 3 and any two distinct players i and j, the universe of Ta(i, j), denoted U Ta (i, j) , is the set of all pairs (π 1 , π 2 ) with π 1 ∈ S 1 Ta (i, j) and π 2 ∈ S 2 Ta (i, j) . Then, as per the definition of Ω given in Sect. 3.3, U Ta (i, j) ⊆ Ω .

Theorem 2 gives the exact cardinalities of U T1 (i, j) , U T2 (i, j) , and U T3 (i, j) for any two distinct players i and j; its three parts follow, respectively, from Lemma 3 together with Theorems 19, 20, and 21 (see Appendix A). ◻

For any n ≥ 5 and any two distinct players i and j, the cardinalities of the universes of T1, T2, and T3 are lower bounded as |U T1 (i, j)| > 3ζ , |U T2 (i, j)| > 3ζ , and |U T3 (i, j)| > 3ζ (see Theorem 23 in Appendix A for proof). Theorem 22 (see Appendix A) gives the relation between the cardinalities of U T1 (i, j) , U T2 (i, j) , and U T3 (i, j).

Monotonicity-satisfying relations for T1, T2, T3
For any 1 ≤ a ≤ 3 , the universe of Ta(i, j) can be partitioned into three disjoint subsets U < Ta (i, j) , U = Ta (i, j) , and U > Ta (i, j) , where U x Ta (i, j) is the set of all pairs (π 1 , π 2 ) in the universe with r v (π 1 , π 2 ) = x . Then, the set of monotonicity-satisfying relations for a test is defined as follows:

Definition 12
For any 1 ≤ a ≤ 3 , and any two distinct players i and j, the set of monotonicity-satisfying relations for Ta(i, j) is MSR Ta (i, j) = {x ∈ {<, =, >} ∶ U x Ta (i, j) ∩ S MON ≠ ∅} .

Theorem 3 Consider any 1 ≤ a ≤ 3 . For any two distinct players i and j, each one of the following assertions must be true.
1. More than 2ζ elements of the universe of Ta(i, j) must obey monotonicity.

2. The set of monotonicity-satisfying relations for Ta(i, j) must be non-empty.
Proof The universe of T1(i, j) contains more than 3ζ elements (see Theorem 23 in Appendix A for proof). By the CSG problem definition given in Sect. 3, D < ζ . So more than 2ζ elements of the universe of T1(i, j) must obey monotonicity. Moreover, since the universe is partitioned into three subsets, |U x T1 (i, j)| > ζ for some x, and then at least one element of U x T1 (i, j) must belong to S MON because D < ζ . So the set of monotonicity-satisfying relations for T1(i, j) must be non-empty.

By Theorem 23 (see Appendix A), |U T2 (i, j)| > 3ζ and |U T3 (i, j)| > 3ζ . Thus for each one of the two tests T2(i, j) and T3(i, j), more than 2ζ elements of their universe must obey monotonicity, and the set of their monotonicity-satisfying relations must be non-empty. ◻

Probabilistic monotonicity induces the implications X 1 to X 8 listed in Table 6. In order to make deductions on the basis of any implication, it is necessary to ensure that its antecedent is satisfied. Observe that each one of the implications X 1 to X 8 has (π 1 , π 2 ) ∈ S MON as part of the antecedent of R ⇒ . But the key question is: how can a pair (π 1 , π 2 ) be found such that (π 1 , π 2 ) ∈ S MON when S MON is not part of the problem input (as per the problem definition given in Sect. 3)? The answer lies in Theorem 4.

Theorem 4
Consider any 1 ≤ w ≤ 3 . If D < ζ , then for any two distinct players i and j, at least ζ and at most 3ζ − 2 invocations of r v are needed to compute an element of the set of monotonicity-satisfying relations for Tw(i, j).
Proof Let u 1 , u 2 , … be any enumeration of the elements of the universe of Tw(i, j). Based on what is returned by r v , one of the following scenarios must occur:
• Let a ∈ {<, =, >} . The function r v returns r v (u i ) = a for each 1 ≤ i ≤ ζ , so a ∈ MSR Tw (i, j) and a monotonicity-satisfying relation is found with ζ invocations of r v (since D < ζ , at least one of any ζ pairs yielding the same relation must belong to S MON ). Note that this means that an element of U a Tw (i, j) must belong to S MON .
• Let a, b, and c be any three pairwise distinct elements of {<, =, >} . Suppose r v is applied to the first 3(ζ − 1) elements of the sequence u 1 , u 2 , … and each of a, b, and c is returned exactly ζ − 1 times. Then the relation returned by the (3ζ − 2)th invocation must have been returned ζ times in total, and therefore a monotonicity-satisfying relation will be found with 3ζ − 2 invocations of r v . Say a is monotonicity-satisfying; then an element of U a Tw (i, j) must belong to S MON .
• If it is neither of the above two scenarios, then a monotonicity-satisfying relation, say a, will be found with more than ζ but fewer than 3ζ − 2 applications of r v . Consequently, an element of U a Tw (i, j) must belong to S MON . ◻
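The scanning procedure just described can be sketched as follows (Python; the universe enumeration and the r v oracle are stand-ins, and zeta plays the role of the bound ζ on D):

```python
# Invoke r_v on successive universe elements until some relation has been
# returned zeta times. Since fewer than zeta pairs violate monotonicity
# (D < zeta), any relation seen zeta times is monotonicity-satisfying.

def find_msr(universe, r_v, zeta):
    """Return (relation, number of invocations of r_v)."""
    counts = {'<': 0, '=': 0, '>': 0}
    for k, u in enumerate(universe, start=1):
        r = r_v(u)
        counts[r] += 1
        if counts[r] == zeta:        # zeta agreeing comparisons certify r
            return r, k
    raise ValueError("universe smaller than Theorem 23 guarantees")

# Worst case from the proof: each relation appears zeta - 1 times, and the
# (3 * zeta - 2)-th invocation settles the outcome.
zeta = 3
answers = ['<', '=', '>'] * (zeta - 1) + ['=']
relation, used = find_msr(range(len(answers)), lambda u: answers[u], zeta)
print(relation, used)  # = 7   (i.e., 3 * zeta - 2 invocations)
```

The best case (all of the first zeta invocations agree) terminates with exactly zeta invocations, matching the lower bound of the theorem.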

Application of tests T1, T2, and T3
Those elements of Z that do not correspond to an optimal solution can be eliminated using the tests T1, T2, and T3 with suitable arguments, as characterized in Theorem 5.

Theorem 5
If Z contains at least two partner pairs, then there exists 1 ≤ a ≤ 3 such that at least one partner pair can be eliminated by doing Ta (with suitable arguments) at most 3ζ − 2 times.

Thus, regardless of what the monotonicity-satisfying relation is, the elimination of at least one pair of elements is guaranteed for Case 1.
Consider Case 2, which also gives rise to two possibilities: either t = s or else t ≠ s . No matter which one of these two possibilities holds, Theorem 3 guarantees that the set of monotonicity-satisfying relations for T1(i, j) must be non-empty. By Theorem 4, at most 3ζ − 2 comparisons are needed to compute a monotonicity-satisfying relation for T1(i, j).
* If either > or = is a monotonicity-satisfying relation for T1(i, j), then the pair ((j, s, 1, 2), (s, j, 1, 2)) must be eliminated from Z. This is because, by the implication X 4 given in Table 6, the antecedent of R ⇒ is true but its consequent is false. This makes the consequent of L ⇒ false. By the contrapositive, the antecedent of L ⇒ must be false. In other words, (j, s, 1, 2) ̸⊧ π* and (s, j, 1, 2) ̸⊧ π* . Consequently, ((j, s, 1, 2), (s, j, 1, 2)) does not correspond to π* ; equivalently, both its elements belong to Z̄ .
* If either < or = is a monotonicity-satisfying relation for T1(i, j), then the pair ((i, t, 1, 2), (t, i, 1, 2)) must be eliminated from Z. This is because, by the implication X 2 given in Table 6, the antecedent of R ⇒ is true but its consequent is false. This makes the consequent of L ⇒ false. By the contrapositive, the antecedent of L ⇒ must be false. In other words, (i, t, 1, 2) ̸⊧ π* and (t, i, 1, 2) ̸⊧ π* , so both elements of ((i, t, 1, 2), (t, i, 1, 2)) belong to Z̄ .
Thus, regardless of what the monotonicity-satisfying relation for T1(i, j) is, the elimination of at least one pair of elements is guaranteed for Case 2.
Consider Case 3, which also gives rise to two possibilities: either t = s or else t ≠ s . No matter which one of these two possibilities holds, Theorem 3 guarantees that the set of monotonicity-satisfying relations for T2(i, j) must be non-empty. By Theorem 4, at most 3ζ − 2 comparisons are needed to compute a monotonicity-satisfying relation for T2(i, j).
* If the monotonicity-satisfying relation for T2(i, j) is either "<" or " = ", then the pair ((i, t, 1, 2), (t, i, 1, 2)) must be eliminated from Z. This is because, by the implication X 5 given in Table 6, the antecedent of R ⇒ is true but its consequent is false. This makes the consequent of L ⇒ false. By the contrapositive, the antecedent of L ⇒ must be false. In other words, (i, t, 1, 2) ̸⊧ π* and (t, i, 1, 2) ̸⊧ π* .
* If the monotonicity-satisfying relation for T2(i, j) is either " = " or ">", then the pair ((j, s, 1, 1), (s, j, 1, 1)) must be eliminated. This is because, by the implication X 6 given in Table 6, the antecedent of R ⇒ is true but its consequent is false. This makes the consequent of L ⇒ false. By the contrapositive, the antecedent of L ⇒ must be false. In other words, (j, s, 1, 1) ̸⊧ π* and (s, j, 1, 1) ̸⊧ π* .
Thus, regardless of what the monotonicity-satisfying relation for T2(i, j) is, the elimination of at least one pair of elements is guaranteed for Case 3.
Consider Case 4. Theorem 3 guarantees that the set of monotonicity-satisfying relations for T3(i, t) must be non-empty. By Theorem 4, at most 3 − 2 comparisons are needed to compute a monotonicity-satisfying relation for T3(i, t). * If the monotonicity-satisfying relation for T3(i, t) is either "<" or " = ", then the pair ((i, t, 1, 2), (t, i, 1, 2)) must be eliminated from Z. This is because, by the implication X 7 given in Table 6, the antecedent of R ⇒ is true but its consequent is false. This makes the consequent of L ⇒ false. By contraposition, the antecedent of L ⇒ must be false. In other words, neither (i, t, 1, 2) nor (t, i, 1, 2) corresponds to an optimum. * If the monotonicity-satisfying relation for T3(i, t) is either " = " or ">", then the pair ((i, t, 1, 1), (t, i, 1, 1)) must be eliminated. This is because, by the implication X 8 given in Table 6, the antecedent of R ⇒ is true but its consequent is false. This makes the consequent of L ⇒ false. By contraposition, the antecedent of L ⇒ must be false. In other words, neither (i, t, 1, 1) nor (t, i, 1, 1) corresponds to an optimum.
Thus, regardless of what the monotonicity-satisfying relation for T3(i, t) is, the elimination of at least one pair of elements is guaranteed for Case 4. ◻ Theorem 5 forms the basis for Step 1, i.e., for computing the set {ℙ 1 , ℙ 2 } of the two top priority players and the coalitions to which they belong in an optimal structure. Specifically, Step 1 is achieved as follows: • Initialize Z as per Equation 7.
• While Z contains more than one partner pair, choose a relevant test (i.e., T1, T2, or T3 as per Theorem 5) and perform it in order to eliminate from Z those elements that do not correspond to an optimum.
At the end of Step 1, Z contains exactly one partner pair. At this stage, the set {ℙ 1 , ℙ 2 } is known, although we do not know which of these two players is ℙ 1 and which is ℙ 2 . However, knowledge of the identities of ℙ 1 and ℙ 2 is unnecessary for computing an optimum as long as the set {ℙ 1 , ℙ 2 } is known.
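The shape of the Step 1 loop can be sketched in a few lines. This is an illustrative sketch, not the paper's Algorithm 1: `run_test` stands in for choosing and performing one of T1, T2, or T3 and applying the elimination rules of Table 6; the toy oracle below simply discards a fixed element so that only the loop's structure is on display.

```python
def step1_eliminate(Z, run_test):
    """Shrink the candidate set Z of partner pairs to a single pair by
    repeatedly running a relevant test; run_test(p, q) must return the
    pair that the test eliminates (a stand-in for Theorems 4 and 5)."""
    Z = list(Z)
    while len(Z) > 1:
        p, q = Z[0], Z[1]         # any two candidate partner pairs
        Z.remove(run_test(p, q))  # each test eliminates at least one pair
    return Z[0]                   # the surviving pair: {P1, P2}

# Toy oracle: pretend the test always rules out the lexicographically
# larger pair (purely illustrative, not the paper's elimination rule).
survivor = step1_eliminate(
    [("a", "b"), ("c", "d"), ("e", "f")],
    run_test=lambda p, q: max(p, q),
)
```

Since every iteration removes at least one element, the loop terminates after at most |Z| − 1 tests, mirroring the guarantee of Theorem 5.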

CSG method: Step 2
At this stage, the set {ℙ 1 , ℙ 2 } comprising the two top priority players is known, and it is also known whether the two top priority players are together in an optimal structure or apart. The identity of each ℙ i , for 3 ≤ i ≤ n , and its optimal coalition remain to be found.
Step 2 finds these in a series of stages. In Stage i, 3 ≤ i ≤ n , the identity of ℙ i and the coalition C j to which ℙ i belongs are found using the test T4. We first describe T4 and then explain how it is used. In general, consider any stage k > 2 where the identities and optimal positions of the top k priority players have already been found. Let (k) be the number of coalitions in |k . Let Q(k) = N − {ℙ 1 , … , ℙ k } . There are |Q(k)| = n − k possibilities for the identity of ℙ k+1 , and (k) + 1 possibilities for its optimal coalition. Let V(k) be the set of all these possibilities: (9) V(k) = {(x, y) | x ∈ Q(k), y ∈ {1, … , (k) + 1}}. The semantics of (x, y) is that it is possible that ℙ k+1 = x and that x belongs to the coalition C y in an optimum. Thus |V(k)| = (n − k) × ( (k) + 1) . The problem now is to find an (x, y) ∈ V(k) that corresponds to an optimum. The test T4 finds such an element by eliminating from V(k) those elements that do not correspond to an optimum.
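The candidate set V(k) can be materialised directly from this definition. A minimal sketch, with function and argument names of our own choosing rather than the paper's:

```python
def candidate_set(N, top_k, num_coalitions):
    """V(k): every pair (x, y) where x is a player outside the top k
    priority players and y indexes one of the num_coalitions existing
    coalitions or a fresh (num_coalitions + 1)-th coalition."""
    Q = [x for x in N if x not in top_k]            # Q(k) = N - {P1, ..., Pk}
    return [(x, y) for x in Q for y in range(1, num_coalitions + 2)]

# Five players, the top k = 2 players already placed into 2 coalitions:
V = candidate_set(N=range(1, 6), top_k={1, 2}, num_coalitions=2)
# |V(k)| = (n - k) * ((k) + 1) = (5 - 2) * (2 + 1) = 9
```

The cardinality check at the end matches the formula |V(k)| = (n − k) × ( (k) + 1) from the text.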
The remainder of this section is organised as follows. Section 4.3.1 gives a definition of the test T4. Section 4.3.2 is a specification of the parameters of T4. Section 4.3.3 describes how T4 can be used to find an element of S MON . Section 4.3.4 describes how T4 can be used to determine the identities of players ℙ 3 , … , ℙ n and their positions in an optimal structure.

The test T4
The test T4 is defined as follows. For any two distinct elements (a, b) and (c, d) of V(k), the test T4(k, a, b, c, d) uses r v to compare the values of any two structures 1 and 2 such that * in 1 , each of the k top priority players belongs to its respective optimal coalition, player a belongs to C b , and player c belongs to any coalition except C d , and * in 2 , each of the k top priority players belongs to its respective optimal coalition, player c belongs to C d , and player a belongs to any coalition except C b .
Let S 1 T4 (k, a, b, c, d) denote the set of all those structures in which each of the k top priority players belongs to its respective optimal coalition, player a belongs to C b , and player c belongs to any coalition except C d . Let S 2 T4 (k, a, b, c, d) denote the set of all those structures in which each of the k top priority players belongs to its respective optimal coalition, player c belongs to C d , and player a belongs to any coalition except C b . Lemma 5 follows readily from the definitions of S 1 T4 (k, a, b, c, d) and S 2 T4 (k, a, b, c, d).
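Membership in S 1 T4 can be phrased as a simple predicate over codewords (using the codeword representation introduced in Appendix B; the function name and the `placed` argument are illustrative, not the paper's notation):

```python
def in_S1_T4(code, placed, a, b, c, d):
    """True iff the structure given by codeword `code` lies in
    S1_T4(k, a, b, c, d): every already-placed top priority player p sits
    in its recorded coalition placed[p], player a is in C_b, and player c
    is in any coalition except C_d."""
    if any(code[p - 1] != y for p, y in placed.items()):
        return False                       # a top priority player moved
    return code[a - 1] == b and code[c - 1] != d

# For the structure {{1, 2}, {3}, {4, 5}} with codeword (1, 1, 2, 3, 3),
# players 1 and 2 placed in coalition 1:
ok = in_S1_T4((1, 1, 2, 3, 3), placed={1: 1, 2: 1}, a=3, b=2, c=4, d=1)
```

The predicate for S 2 T4 is symmetric: swap the roles of (a, b) and (c, d).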

The universe of T4
For any k, a, b, c, and d, there are several ways of doing T4(k, a, b, c, d). The set of all these possibilities is given by the universe of T4(k, a, b, c, d), denoted U T4 (k, a, b, c, d) = S 1 T4 (k, a, b, c, d) × S 2 T4 (k, a, b, c, d).

Theorem 6 characterises a bound on the cardinality of U T4 (k, a, b, c, d).
Theorem 6 For any 2 < k < n − 2 , any (a, b) ∈ V(k) , and any (c, d) ∈ V(k) , the cardinality of U T4 (k, a, b, c, d) satisfies the following relation: Proof We are given that U T4 (k, a, b, c, d) = S 1 T4 (k, a, b, c, d) × S 2 T4 (k, a, b, c, d) . Now, in some of the structures in S 1 T4 (k, a, b, c, d) , the k + 1 top priority players are split into (k) coalitions, while in others they are split into (k) + 1 coalitions. Let S 11 T4 (k, a, b, c, d) denote the set of all those structures in which each of the k top priority players is in its optimal coalition, a ∈ C b , and, for some specific x ≠ d , c ∈ C x . Then

Monotonicity-satisfying relations for T4
For any 2 < k ≤ n − 1 , the universe of T4(k, a, b, c, d) can be partitioned into three disjoint subsets U < T4 (k, a, b, c, d) , U = T4 (k, a, b, c, d) , and U > T4 (k, a, b, c, d) . The set of monotonicity-satisfying relations for T4(k, a, b, c, d) is then defined as follows: Definition 14 For any 2 < k < n , any (a, b) ∈ V(k) , and any (c, d) ∈ V(k) , the set of monotonicity-satisfying relations for T4(k, a, b, c, d) is the set of all x ∈ {<, =, >} such that |U x T4 (k, a, b, c, d)| ≥ 2 (k) , where, for 2 < k < n , 2 (k) is the threshold for T4, defined analogously to 1 .
Proof Follows readily from the definitions of D, 1 , and 2 (k) . ◻ Theorem 7 If D < , then for each 2 < k ≤ , each of the following assertions must be true: 1. More than 2 2 (k) elements of the universe of T4(k, a, b, c, d) must obey monotonicity.

The set of monotonicity-satisfying relations for T4(k, a, b, c, d) must be non-empty.
Proof Consider any 2 < k ≤ . By Lemma 6, there are |U T4 (k, a, b, c, d)| distinct ways of doing T4(k, a, b, c, d) for any (a, b) ∈ V(k) and any (c, d) ∈ V(k) . By Theorem 6, |U T4 (k, a, b, c, d)| > 3 2 (k) . By the CSG problem definition of Sect. 3, D < . So, by Lemma 7, D < 2 (k) . Since at most D elements of the universe can violate monotonicity, more than 3 2 (k) − 2 (k) = 2 2 (k) elements of the universe of T4(k, a, b, c, d) must obey monotonicity.
Since |U T4 (k, a, b, c, d)| > 3 2 (k) , it follows that at least one of the three sets U < T4 (k, a, b, c, d) , U = T4 (k, a, b, c, d) , or U > T4 (k, a, b, c, d) must contain 2 (k) or more elements; otherwise, the three sets together would contain fewer than 3 2 (k) elements, contradicting the bound of Theorem 6. If |U x T4 (k, a, b, c, d)| ≥ 2 (k) for some x, then at least one element of U x T4 (k, a, b, c, d) must belong to S MON because D < 2 (k) . So the set of monotonicity-satisfying relations for T4(k, a, b, c, d) must be non-empty. ◻ Probabilistic monotonicity induces the implications X ab and X cd listed in Table 7. For 2 < k ≤ , these implications will be used to determine the identity of ℙ k and its optimal coalition, by eliminating those elements from V(k) that do not correspond to an optimum. In order to use these implications, a pair ( 1 , 2 ) must be found in the universe of T4 such that ( 1 , 2 ) ∈ S MON . Since S MON is unknown, the question of how to find such a pair is answered in Theorem 8.

Theorem 8
Consider any 2 < k ≤ , any (a, b) ∈ V(k) , and any (c, d) ∈ V(k) . If D < , then at most 3 − 2 invocations of r v are needed to compute an element of the set of monotonicity-satisfying relations for T4(k, a, b, c, d).
Proof Consider any 2 < k ≤ , any (a, b) ∈ V(k) , and any (c, d) ∈ V(k) . We are given that D < . By Lemma 7, D < 2 (k) . So, for any x ∈ {<, =, >} , x ∈ MSR T4 (k, a, b, c, d) if |U x T4 (k, a, b, c, d)| ≥ 2 (k) . Suppose the elements of U T4 (k, a, b, c, d) are arranged in some arbitrary sequence u 1 , u 2 , … , and suppose the function r v is applied to them in the order u 1 , u 2 , … . Based on the outcomes of these tests, one of the following scenarios will occur: • A monotonicity-satisfying relation, say x, is found with exactly 3 − 2 invocations of r v ; then an element of U x T4 (k, a, b, c, d) must belong to S MON . • Otherwise, a monotonicity-satisfying relation, say x, is found with fewer than 3 − 2 applications of r v . Consequently, an element of U x T4 (k, a, b, c, d) must belong to S MON . ◻

Application of test T4
Theorem 9 Consider any 2 < k ≤ and suppose that D < . If |V(k)| ≥ 2 and (a, b) and (c, d) are any two distinct elements of V(k), then at least one element of V(k) can be deleted by doing T4(k, a, b, c, d) at most 3 − 2 times.
Proof Consider any 2 < k ≤ . Consider any two distinct elements (a, b) and (c, d) of V(k). By Theorem 7, the set of monotonicity-satisfying relations for T4(k, a, b, c, d) must be non-empty. By Theorem 8, at most 3 − 2 invocations of r v are needed to compute a monotonicity-satisfying relation for T4(k, a, b, c, d). * If either < or = is a monotonicity-satisfying relation for T4(k, a, b, c, d), then the element (a, b) must be eliminated from V(k). This is because, by the implication X ab given in Table 7, the antecedent of R ⇒ is true but its consequent is false. This makes the consequent of L ⇒ false. By contraposition, the antecedent of L ⇒ must be false. In other words, (a, b) does not correspond to an optimum and must therefore be eliminated from V(k). * If either > or = is a monotonicity-satisfying relation for T4(k, a, b, c, d), then the element (c, d) must be eliminated from V(k). This is because, by the implication X cd given in Table 7, the antecedent of R ⇒ is true but its consequent is false. This makes the consequent of L ⇒ false. By contraposition, the antecedent of L ⇒ must be false. In other words, (c, d) does not correspond to an optimum and must therefore be eliminated from V(k). Thus, regardless of what the monotonicity-satisfying relation for T4(k, a, b, c, d) is, at least one element of V(k) can be deleted by doing T4(k, a, b, c, d) at most 3 − 2 times. ◻
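Once a monotonicity-satisfying relation is in hand, the elimination rule of Theorem 9 is mechanical. A sketch (the relation itself would come from invocations of r v; here it is simply passed in, and the function name is ours):

```python
def eliminate_by_T4(V, ab, cd, relation):
    """Apply Theorem 9's rules to the candidate set V: a relation of
    '<' or '=' rules out the element ab = (a, b); a relation of '>' or
    '=' rules out cd = (c, d). At least one candidate is always removed."""
    return [v for v in V
            if not (v == ab and relation in ("<", "="))
            and not (v == cd and relation in (">", "="))]

V = [(3, 1), (4, 1), (5, 2)]
V = eliminate_by_T4(V, ab=(3, 1), cd=(4, 1), relation="<")  # drops (3, 1)
V = eliminate_by_T4(V, ab=(4, 1), cd=(5, 2), relation=">")  # drops (5, 2)
```

Note that a relation of "=" removes both candidates at once, which is why every invocation of the test shrinks V(k) by at least one element.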

The CSG method: Step 3
At this stage, ℙ 1 , … , ℙ and | have been determined. An element of E | with the highest value remains to be found. As per Definition 6, for non-optimal structures, probabilistic monotonicity does not hold beyond this point. An exhaustive search is therefore needed over the space of all those structures in E | .

CSG algorithm
The complete CSG method (described in Sect. 4) is summarised as Algorithm 1, the input to which is a set of players N, a mapping r v , and a bound on the degree of non-monotonicity. Lines 1 to 8 constitute Step 1 (described in Sect. 4.2) of the method, during which ℙ 1 , ℙ 2 , and |2 are determined. In Line 1, Z is initialized. In the while loop of Line 2, elements of Z are eliminated by doing relevant tests. In Line 3, any two elements of Z are considered in order to choose a test Ta (where a ∈ {1, 2, 3} as given in Theorem 5) to do. Then in Line 4, 3 arbitrary elements of the universe of Ta are generated using the method described in Appendix B. In Line 5, a monotonicity-satisfying relation is computed for Ta (as per Theorem 4), and, on the basis of this relation, elements of Z are eliminated (as per Theorem 5). Once the while loop is exited, ℙ 1 , ℙ 2 , and |2 become known in Line 8. The remaining lines constitute Step 2 (described in Sect. 4.3). Since |V(k)| = (n − k) × ( (k) + 1) , the time taken by Line 10 will be O(n 2 ) . Since any two elements of V(k) may be considered, Line 12 will take constant time. As per Appendix B, the time taken by Line 13 will be ( ) . By Theorem 8, at most 3 − 2 invocations of r v are needed to compute MSR T4 , so the time to run Line 14 will be O( ) . Once MSR T4 is computed, the eliminations in Line 15 (as per Theorem 9) can be performed in constant time. Since k < n , (k) ≤ n , and < n , at most n 3 × (3 − 2) invocations of r v will be needed to complete all iterations of the for loop. The time complexity of Step 2 is therefore O(n 3 × ) , which is O(n 3 × ((n − )!) 2 ).
The time to search the part of the search space that satisfies probabilistic monotonicity, i.e., the time to run Step 1 and Step 2, is O(n 3 × ((n − )!) 2 ) . ◻ As shown in Theorem 10, the time to run the CSG algorithm over the probabilistically monotonic part of the search space is decreasing in , or equivalently, increasing in the degree of non-monotonicity. Depending on , the time complexity may or may not be polynomial in n (see Table 8).
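To get a feel for how the Theorem 10 bound behaves, one can tabulate n³ × ((n − β)!)² for a few values, writing β as our stand-in name for the paper's bound symbol. When n − β is held constant the bound is polynomial in n; when β itself is constant the factorial term dominates:

```python
from math import factorial

def step_cost(n, beta):
    """Illustrative evaluation of the n^3 * ((n - beta)!)^2 growth rate
    from Theorem 10 (beta is a stand-in name for the paper's bound)."""
    return n ** 3 * factorial(n - beta) ** 2

# n - beta fixed at 2: the cost is just (2!)^2 * n^3, i.e. polynomial.
poly_like = [step_cost(n, n - 2) for n in (5, 10, 20)]

# beta fixed at 2 while n grows: the ((n - 2)!)^2 factor explodes.
explosive = step_cost(12, 2)
```

This is the qualitative picture behind Table 8: the smaller the monotone prefix of the priority ordering, the closer the algorithm gets to exhaustive search.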

Literature review
The problem of optimally partitioning a set of players has been studied from various perspectives. From a system-wide perspective, the aim is to maximize a social welfare function. From the perspective of individual players, the aim is to find solutions that are stable [10,11], or those that are Pareto optimal [1]. We have taken the societal perspective. Regardless of whether optimization is for individuals or for the society, the literature on optimal partitioning can broadly be divided into two categories: partitioning for deterministic environments and partitioning for stochastic environments.
Deterministic environments: Finding a socially optimal structure for CFGs where the value of a structure is the sum of the values of its coalitions is NP-complete [35]. Numerous approaches such as dynamic programming [30,42], branch-and-bound [33], and hybrid methods [25] have been used to solve the CSG problem for CFGs. Ueda et al. [41] showed how concise representation schemes for characteristic functions can be used to efficiently solve the CSG problem. Within the context of CFGs, Dang et al. [13], Chalkiadakis et al. [8], and Habib et al. [23] addressed the CSG problem for overlapping coalitions. Compared to CFGs, there are relatively fewer solutions for the CSG problem for PFGs. Rahwan et al. [31] showed how to find optimal partitions for PFGs by restricting externalities to positive only or negative only but not mixed. Their method prunes the search space on the basis of bounds on the values of groups of coalitions. Epstein et al. [15] used a distributed approach to solve the CSG problem for PFGs with positive only or negative only externalities. Banerjee and Kraemer [4] used a branch-and-bound approach to solve the CSG problem for PFGs by restricting externalities on the basis of agent types. In [15,31], and [4], the value of a partition is the sum of the values of its coalitions. In contrast, we consider probabilistically monotone PFGs for which the value of a structure does not necessarily have to be the sum but can be any function of the values of its coalitions.
For CFGs where the value of a coalition depends both on its members and the tasks the members execute, and the value of a structure is the sum of the values of its constituent coalitions, Prantare and Heintz [29] used a branch-and-bound approach to find an anytime solution to the CSG problem.
Stochastic environments: Compared to deterministic cooperative games, the literature on stochastic games is rather small. Charnes and Granot [10,11] addressed the problem of finding stable solutions to stochastic CFGs for which the values of coalitions are not deterministic but rather random variables with given distribution functions. Suijs et al. [38] showed how certain insurance scenarios can be modelled as CFGs with stochastic payoffs and stable solutions determined. This framework was later applied for modelling water resources [14]. A key distinction between all this work on stochastic models and ours is that they take an economic perspective and pay attention to finding stable solutions. In contrast, we take an algorithmic perspective and address the problem of finding coalition structures that are socially optimal.
For CFGs with the value of a coalition structure given by the sum of the values of its coalitions, Matsumara et al. [24] proposed a framework for probabilistic coalition structure generation. In this model, each agent can belong to no more than one coalition and there is uncertainty about the membership of an agent in a coalition. For this framework, approximation algorithms are given for finding an optimal structure. A key difference between their work and ours is that they consider CFGs, while we consider PFGs. Another difference is that they assume that the value of a coalition structure is the sum of the values of its coalitions, while we allow the value of a coalition structure to be any function of the values of its coalitions. Yet another difference is that their method requires the values of coalitions as an input. In contrast, our method does not require the values of coalitions as input nor does it require the values of coalition structures. Rather, our method only requires the relation (given by r v ) between the values of structures.
For stochastic environments, a different strand of work on coalition formation has dealt with situations requiring coalitions to form repeatedly. This opens up the possibility of introducing learning. For CFGs, Chalkiadakis and Boutilier [7] considered scenarios where agents are typed and there is uncertainty about agent types, and proposed a Bayesian learning framework for coalition formation.
In summary, the key distinguishing features of our work are as follows. Unlike existing work, we solve the CSG problem for PFGs in a stochastic setting. Further, in the existing work on stable solutions for stochastic CFGs, the values of coalitions are random variables with known probability distribution functions. However, our CSG algorithm does not require a known probability distribution function; it is sufficient to know that the degree of non-monotonicity is within a certain bound. Yet another difference is that, unlike existing CSG methods for PFGs, we do not restrict the value of a partition to be the sum of the values of its coalitions, but allow it to be any function of the values of its coalitions. Finally, unlike the existing CSG methods for both CFGs and PFGs, our CSG algorithm does not require the values of structures to be known; rather it is sufficient to know the relation (given by r v ) between the values of structures.

Conclusions
In this paper, we considered the problem of optimally partitioning a set of players into disjoint exhaustive coalitions. We focussed on solving this problem in the context of probabilistically monotone partition function games with priority-ordered players. For such games, we showed how an optimum can be found knowing just that a priority ordering exists but without knowing the actual ordering. We presented a greedy algorithm for solving the problem. The time complexity of the algorithm depends on the degree of non-monotonicity. We showed how the time complexity varies with the degree of non-monotonicity.
There are various avenues for further research. We considered PFGs where the property of monotonicity is violated with a certain non-zero probability. In the future, it will be interesting to consider situations where the probability that any two random structures violate monotonicity depends on the distance between them and their closest optima; those structures that are further away from an optimum are more likely to violate monotonicity than those that are closer. Another possibility is to consider situations where a player can be a member of multiple coalitions as opposed to being a member of a single coalition.

Since a and b are in the same coalition, these two players may be regarded as a single player. So there will be n − 1 players to be arranged in coalitions. We therefore have Bell(n − 1) such structures.

Theorem 12
Let K ⊆ N be a set of k players. Let 1 be the set of all those structures of n players in which all the players in K belong to the same coalition. Proof Since all the players in K belong to the same coalition, the set K may be treated as a single player. This means we have n − k + 1 players, and therefore | 1 | = Bell(n − k + 1) . ◻
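Theorem 12's count can be checked by brute force for small n: compute Bell numbers with the Bell triangle, enumerate all set partitions as restricted-growth codewords, and count those in which a chosen K shares a block. This is a verification sketch of ours, not part of the paper's method:

```python
def bell(n):
    """Bell(n) via the Bell triangle: 1, 1, 2, 5, 15, 52, ..."""
    row = [1]
    for _ in range(n):
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
    return row[0]

def partitions(n):
    """All partitions of {1..n} as restricted-growth codewords (n >= 1)."""
    def rec(code, m):
        if len(code) == n:
            yield tuple(code)
            return
        for c in range(1, m + 2):          # next player joins block 1..m+1
            yield from rec(code + [c], max(m, c))
    yield from rec([1], 1)                 # player 1 is always in block 1

# Structures of n = 5 players in which K = {1, 2, 3} share a coalition:
n, K = 5, {1, 2, 3}
count = sum(1 for p in partitions(n) if len({p[x - 1] for x in K}) == 1)
# Theorem 12 predicts Bell(n - |K| + 1) = Bell(3) = 5
```

The same enumerator doubles as a sanity check of the Bell numbers themselves, since the total number of partitions of {1..n} is Bell(n).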
Proof Let 3a be the set of all structures in which all the players in K belong to the same coalition. Let 3b be the set of all structures in which all the players in K ∪ {i} belong to the same coalition.
By Theorem 15, since in 6b , a and d are in the same coalition, they may be viewed as a single player called ad . Then the players ad , b, and c must be in three different coalitions. Applying the same argument gives the remaining cardinalities. Proof The set 7 has the following composition: Then, by Theorem 13, the result follows.

Theorem 19
For any i ∈ N , and any j ∈ N − {i} , the cardinalities of S 1 T1 (i, j) , S 2 T1 (i, j) , and U T1 (i, j) are: Proof For any i ∈ N and j ∈ N − {i} , the test T1(i, j) (see Sect. 4.2.1) compares the values of any two structures 1 and 2 such that * in 1 , the players i and t (where t is an arbitrary element of N − {i, j} and is determined on the basis of the elements in Z) belong to different coalitions but j and s (where s is an arbitrary element of N − {i, j} and is determined on the basis of the elements in Z) belong to the same coalition, and * in 2 , the players i and t belong to the same coalition but j and s belong to different coalitions.
S 1 T1 (i, j) denotes the set of all those structures in which the players i and t belong to different coalitions but j and s belong to the same coalition. S 2 T1 (i, j) denotes the set of all those structures in which the players i and t belong to the same coalition but j and s belong to different coalitions.
Since there is no constraint on whether s = t or s ≠ t , we must consider both cases.
• The case s ≠ t : The set S 1 T1 (i, j) has the following composition: where, by Theorem 16, | T1a | = Bell(n − 1) − 3Bell(n − 2) + 2Bell(n − 3) . | T1b | is the number of structures in which t, j, and s belong to the same coalition, less the number of structures in which i, t, j, and s all belong to the same coalition. By Theorem 12 we get • The case s = t : The set S 1 T1 (i, j) has the following composition: By Theorem 14, |S 1 T1 (i, j)| = Bell(n − 1) − Bell(n − 2) . Next, the set S 2 T1 (i, j) has the following composition: ◻ Theorem 20 For any i ∈ N , and any j ∈ N − {i} , the cardinalities of S 1 T2 (i, j) , S 2 T2 (i, j) , and U T2 (i, j) are: Proof For any i ∈ N and j ∈ N − {i} , the test T2(i, j) (see Sect. 4.2.1) compares the values of any two structures 1 and 2 such that * in 1 , the players i and t are apart, and the players j and s are apart, and * in 2 , the players i and t are together, and the players j and s are together.
S 1 T2 (i, j) denotes the set of all those structures in which the players i and t are apart, and j and s are apart. S 2 T2 (i, j) denotes the set of all those structures in which the players i and t belong to the same coalition, and j and s belong to the same coalition. Since there is no constraint on whether s = t or s ≠ t , we must consider both cases.

By Theorem 21,
For any n, here is a proof that |U T3 (i, j)| > |U T2 (i, j)| . By Theorem 20: By Theorem 21: * For any n and any 2 < k < n − 2 , here is a proof that |U T1 (i, j)| > Bell 2 (n − k − 2) . Since 2 < k < n − 2 and since Bell 2 (n − k − 2) is decreasing in k, it is sufficient to prove the above inequality for k = 2 , i.e., it is sufficient to prove that |U T1 (i, j)| > Bell 2 (n − 4) . We know by Theorem 19 that a monotonicity-satisfying relation for Ta ( 1 ≤ a ≤ 3 ) can be found with at most 3 − 2 invocations of r v . Then, by Theorem 5, at least one element of Z can be deleted by doing Ta (with a and the arguments to Ta as defined in Theorem 5). Also, by Theorem 8, a monotonicity-satisfying relation for T4 can be found with 3 − 2 invocations of r v . Then, by Theorem 9, at least one element of V(k) can be deleted by doing T4 (with arguments as defined in Theorem 9).

For each test Ti, ⌈√3⌉ arbitrary elements of S 1 Ti and ⌈√3⌉ arbitrary elements of S 2 Ti are generated to achieve |U Ti | = 3 . In order to generate the elements of S 1 Ti or S 2 Ti for any i, it is convenient to first choose a suitable representation for coalition structures. To this end, we represent the set of players as N = {1, … , n} . Any coalition structure over {1, … , n} will have at most n coalitions. Each coalition structure will be represented by a codeword ( code cs ). The codeword for a coalition structure is a vector of the form (C 1 , C 2 , … , C n ) such that player 1 ≤ x ≤ n is in the coalition C x in the structure. Without any loss of generality, assume that the coalitions in a structure are ordered as follows: coalition C a precedes coalition C b if the smallest element of C a is less than the smallest element of C b .
Observe that, with coalitions ordered in this way, coalition structures have the following property. In any coalition structure, player 1 must belong to the first coalition, player 2 must belong to one of the first two coalitions, and so on. In general, if the players 1, … , k ( 1 ≤ k < n ) belong to the first 1 ≤ m ≤ k non-empty coalitions, then player k + 1 must belong to one of the first m + 1 coalitions. For example, for a game of five players, the coalition structure {{1, 2}, {3}, {4, 5}} has the codeword (1, 1, 2, 3, 3) . The following notation will be used for subwords of the codeword of a coalition structure. For a codeword, the subword beginning at index a and ending at index b will be denoted code cs|a,b . For example, for the codeword above, code cs|2,4 = (1, 2, 3) . With a representation for coalition structures in place, we are now ready to describe a method (see Algorithm 2) for generating coalition structures in this representation. We will describe the procedure for generating elements of S 1 T1 (i, j) as codewords, given i, j, s, and t as inputs (the elements of S 2 T1 (i, j) can be generated analogously). As per Sect. 4.2.1, the set S 1 T1 (i, j) has the following composition: and the constraints that must be satisfied by elements of T1a , T1b , and T1c are as follows: The procedure we are going to describe is an adaptation of the method proposed in [16] for generating all possible partitions of a set {1, … , n} . Since [16] generates all possible partitions, it must be adapted to generate only those that satisfy the above listed constraints for T1a , T1b , and T1c . The required adaptation is done as follows.
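The codeword construction can be written down directly; running it on the five-player structure {{1, 2}, {3}, {4, 5}} yields the codeword (1, 1, 2, 3, 3) derived above. (A sketch; the function name is ours.)

```python
def codeword(structure, n):
    """Codeword of a coalition structure: entry x holds the index of the
    coalition containing player x, with coalitions ordered by their
    smallest member, so the result is a restricted-growth string."""
    blocks = sorted(structure, key=min)   # order coalitions as in the text
    code = [0] * n
    for idx, block in enumerate(blocks, start=1):
        for player in block:
            code[player - 1] = idx
    return tuple(code)

cw = codeword([{1, 2}, {3}, {4, 5}], n=5)
sub = cw[1:4]   # the subword code_cs|2,4
```

Because the blocks are sorted by smallest member, the output always satisfies the "player k + 1 is in one of the first m + 1 coalitions" property stated in the text.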
Note that, in the above listed constraints, s may or may not equal t, so we need to consider both cases. However, regardless of whether s = t or not, the players j and s are required to belong to the same coalition in any coalition structure that belongs to T1a , T1b , or T1c . Thus, j and s may be treated as a single player. Consequently, the total number of players will now be reduced to n − 1 . So coalition structures over these n − 1 players must be generated such that every generated coalition structure satisfies each of the above listed constraints for T1a , T1b , and T1c .
Without any loss of generality, suppose j < s . Since j and s are treated as a single player, the set of players will be {1, … , j, … , s − 1, s + 1, … , n} . For convenience, we will encode the players in this set such that the numbers are all consecutive. The encoding is done by the mapping code ag ∶ {1, … , n} → {1, … , n − 1} defined as follows (distinguish between codes ( code ag ) for agents and codewords ( code cs ) for partitions): • code ag (j) = code ag (s) = 1 • code ag (i) = 2 • code ag (t) = 3 • code ag (X y ) = y + 3 where X = N − {i, j, s, t} and the elements of X are in ascending order, with X y denoting the yth element of X. Figure 1 is an illustration of this encoding for the example N = {1, … , 9} , i = 5 , j = 3 , s = 7 , and t = 8.
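The mapping code ag takes only a few lines to implement; running it on the worked example (N = {1, …, 9}, i = 5, j = 3, s = 7, t = 8) reproduces the assignment illustrated in Figure 1. (The function name is ours.)

```python
def encode_agents(n, i, j, s, t):
    """code_ag: j and s collapse to code 1, i maps to 2, t to 3, and the
    remaining players, taken in ascending order, to 4, 5, ..., n - 1."""
    X = sorted(set(range(1, n + 1)) - {i, j, s, t})
    code = {j: 1, s: 1, i: 2, t: 3}
    for y, x in enumerate(X, start=1):
        code[x] = y + 3          # code_ag(X_y) = y + 3
    return code

code_ag = encode_agents(n=9, i=5, j=3, s=7, t=8)
# j = 3 and s = 7 share code 1; i = 5 gets 2; t = 8 gets 3;
# the remaining players 1, 2, 4, 6, 9 get 4..8 in ascending order.
```

The inverse step, which restores j and s as distinct players, is exactly the codeword transformation described below for Lines 31 to 37 of Algorithm 2.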
With the description of the encoding of agents and coalition structures in place, we are now ready to describe a procedure for generating codewords for the coalition structures in S 1 T1 (i, j) . The method in [16] is used to generate codewords for the partitions of the n − 1 element set of coded players, i.e., the set {code ag (1), … , code ag (n)} where |{code ag (1), … , code ag (n)}| = n − 1 . Each generated codeword is then checked to ensure it satisfies each of the constraints listed above for T1a , T1b , and T1c . The complete method is presented as Algorithm 2. In Algorithm 2, the procedure GENERATE-UNIVERSE is the main routine. The inputs to it are i, j, s, t, n, and count (the number of elements of S 1 T1 to generate, i.e., count = √3 ). Within GENERATE-UNIVERSE is defined a recursive routine SP (Lines 7 to 40) for generating the required codewords.
To begin, in Line 41, the list X is set to {1, … , n} − {i, j, s, t} (note that the elements of X are to be in ascending order). The variable index, used to track the number of elements of S 1 T1 generated so far, is initialised to zero. Then the procedure SP is invoked in Line 43. This procedure SP (defined in Lines 7 to 40) is for generating the elements of S 1 T1 recursively: the codewords of all partitions of the set {1, … , n} are obtained from the codewords of all partitions of the set {1, … , n − 1} by appending C n to the respective codewords. The range of values that C n may assume is 1, … , max(C 1 , C 2 , … , C n−1 ) + 1 . Note that, for SP(m, p), the parameter m = max(C 1 , … , C p−1 ) , and the parameter p indicates the current index of the codeword under consideration. Figure 2 (taken from [16]) is an illustration of the recursive generation of codewords for the set {1, 2, 3, 4}.
The If statement from Line 9 to 39 is for tracking the number of codewords generated so far. In the If statement between Lines 10 and 29, the elements of a codeword are generated one-by-one, starting from the first element and going to the (n − 1) th element. Once a complete n − 1 element codeword is generated, it is transformed to a corresponding n element codeword by now treating the players j and s as distinct. The transformed codeword is saved in Table[index] (see Lines 31 to 37).
In Lines 12 to 23, checks are done to ensure that a codeword satisfies the required constraints. This check is done in Lines 12 to 16 for the case s ≠ t , and in Lines 19 to 23 for the case s = t . For the case s ≠ t , a constraint is violated if any one of the following conditions is true: • code cs|1,3 = (1, 1, 1) • code cs|1,3 = (1, 2, 1) • code cs|1,3 = (1, 2, 2) For the case s = t , a constraint is violated if the following condition is true: • code cs|2,2 ≠ 2 Once an n − 1 element constraint-satisfying codeword is generated, it is transformed to a corresponding n element codeword that represents a coalition structure over n players as follows. Let tcode cs denote a transformed codeword. Transformation involves treating j and s as distinct players, and is done as follows: • tcode cs (j) = tcode cs (s) = code cs|1,1 • tcode cs (i) = code cs|2,2 • tcode cs (t) = code cs|3, 3 • tcode cs (X x−3 ) = code cs|x,x for each 4 ≤ x ≤ n − 1.
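The core of Algorithm 2 is the recursive codeword generator with the constraint checks spliced in. Below is a compact sketch for the case s ≠ t only (names are ours; the real Algorithm 2 also handles the case s = t and performs the transformation step):

```python
def generate_constrained(n_coded, count):
    """Enumerate restricted-growth codewords over the n - 1 coded players
    in the recursive order of [16], pruning any codeword whose first three
    entries violate the s != t constraints above (code_cs|1,3 must not be
    (1,1,1), (1,2,1), or (1,2,2)); stop after `count` codewords."""
    out = []
    def sp(code, m):                       # m = max(code), as in SP(m, p)
        if len(out) >= count:
            return                         # enough codewords generated
        if len(code) == 3 and tuple(code) in {(1, 1, 1), (1, 2, 1), (1, 2, 2)}:
            return                         # constraint violated: prune subtree
        if len(code) == n_coded:
            out.append(tuple(code))
            return
        for c in range(1, m + 2):          # next entry ranges over 1..m+1
            sp(code + [c], max(m, c))
    sp([1], 1)                             # coded player 1 is in coalition 1
    return out

first = generate_constrained(n_coded=4, count=5)
```

Pruning at length three discards whole subtrees at once, so forbidden prefixes cost no more than a constant amount of work each, which is what keeps the per-codeword generation cost low in Theorem 24.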
Codeword transformation is illustrated in Figure . Each transformed codeword is saved in Table (see Lines 31 to 37); each codeword in Table is an n element codeword.
Elements of U Ta (i, j) for each 2 ≤ a ≤ 4 can also be generated by using Algorithm 2 with constraint checking conditions suitably modified for each respective case.
We will now analyse the time complexity of generating 3 arbitrary elements of U T1 (i, j) = S 1 T1 (i, j) × S 2 T1 (i, j) using Algorithm 2.

Theorem 24
For any given i, j, s, and t, 3 arbitrary elements of U Ta (i, j) (for each 1 ≤ a ≤ 4 ) can be generated in ( √ ) time.
Proof Elements of U Ta (i, j) for each 2 ≤ a ≤ 4 can also be generated by using Algorithm 2 with the constraint checking conditions suitably modified for each respective case. Thus, the time taken to generate 3 distinct elements of U Ta (i, j) will be ( ) for each 2 ≤ a ≤ 4 . ◻

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.