On the convexity of step out–step in sequencing games
Abstract
The main result of this paper is the convexity of step out–step in (SoSi) sequencing games, a class of relaxed sequencing games first analyzed by Musegaas et al. (Eur J Oper Res 246:894–906, 2015). The proof makes use of a polynomial time algorithm determining the value and an optimal processing order for an arbitrary coalition in a SoSi sequencing game. In particular, we use that in determining an optimal processing order of a coalition, the algorithm can start from the optimal processing order found for any subcoalition of smaller size and thus all information on such an optimal processing order can be used.
Keywords
(Cooperative) game theory · Relaxed sequencing games · Convexity
Mathematics Subject Classification
91A12 (Cooperative games) · 90B35 (Scheduling theory, deterministic)
JEL Classification
C71 · C44
1 Introduction
This paper considers one-machine sequencing situations in which a number of players, each with one job, have to be served by a single machine. The processing time of a job is the time the machine takes to process it. Every player has an individual linear cost function, specified by an individual cost parameter, that depends on the completion time of his job, defined as the sum of the processing times of his own job and of all jobs processed before it. There are no further restrictive assumptions such as due dates, ready times or precedence constraints imposed on the jobs. Smith (1956) showed that the total joint costs are minimal if the jobs are processed in weakly decreasing order of urgency, defined as the ratio of the individual cost parameter to the processing time.
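Smith's rule lends itself to a short computational illustration. The sketch below is a toy implementation, not code from the paper: it sorts jobs by weakly decreasing urgency \(u_i = \alpha_i/p_i\) and evaluates the total linear costs of a processing order. The three-player data are hypothetical (they happen to match Example 2.1).

```python
from fractions import Fraction

def total_cost(order, p, alpha):
    """Total linear costs sum_i alpha_i * C_i for a processing order (list of players)."""
    t = cost = 0
    for i in order:
        t += p[i]            # completion time of player i
        cost += alpha[i] * t
    return cost

def smith_order(p, alpha):
    """Optimal order: weakly decreasing urgency u_i = alpha_i / p_i (Smith, 1956)."""
    return sorted(p, key=lambda i: Fraction(alpha[i], p[i]), reverse=True)

# Hypothetical three-player instance:
p     = {1: 3, 2: 2, 3: 1}
alpha = {1: 4, 2: 6, 3: 5}
opt = smith_order(p, alpha)            # urgencies 4/3 < 3 < 5, so order 3, 2, 1
print(opt, total_cost(opt, p, alpha))  # [3, 2, 1] 47
```

Processing the jobs in the initial order 1, 2, 3 instead would cost \(4\cdot 3 + 6\cdot 5 + 5\cdot 6 = 72\), so rearranging saves \(72-47=25\).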
We assume that the players are arranged in an initial order, so rearranging this order into an optimal order leads to cost savings. To analyze how these cost savings should be allocated among the players, sequencing games are introduced. The value of a coalition in a sequencing game serves as a benchmark for determining a fair allocation of the optimal cost savings and represents the “virtual” maximal cost savings which this coalition can achieve by means of admissible rearrangements. Which rearrangements are admissible for a coalition is a modeling choice. The classical assumption, made in Curiel et al. (1989), is that two players of a certain coalition can only swap their positions if all players between them are also members of the coalition. They show that the resulting sequencing games are convex and therefore have a nonempty core. Relaxed sequencing games arise by relaxing this classical assumption about the set of admissible rearrangements for coalitions in a consistent way.
In Curiel et al. (1993), four different relaxed sequencing games are introduced. These relaxations are based on requirements for the players outside the coalition regarding either their position in the processing order or their starting time. Slikker (2006) considered these four relaxed sequencing games in more detail by investigating the corresponding cores. In van Velzen and Hamers (2003) two further classes of relaxed sequencing games are considered. In relaxed sequencing games the values of coalitions become larger because the set of admissible rearrangements is larger than in the classical case. As a consequence, while classical sequencing games are convex, relaxed sequencing games might not be convex anymore. To the best of our knowledge there is no general convexity result with respect to specific subclasses of relaxed sequencing games.
In Musegaas et al. (2015) an alternative class of relaxed sequencing games is considered, the class of step out–step in (SoSi) sequencing games. In a SoSi sequencing game a member of a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. Providing an upper bound on the values of the coalitions in a SoSi sequencing game, Musegaas et al. (2015) showed that every SoSi sequencing game has a nonempty core. Also, Musegaas et al. (2015) provided a polynomial time algorithm to determine the value and an optimal processing order for an arbitrary coalition in a SoSi sequencing game. This paper shows, by means of this polynomial time algorithm, that SoSi sequencing games are convex. For proving this, we use a specific feature of the algorithm. Namely, for determining an optimal processing order for a coalition, one can use the information of the optimal processing orders of subcoalitions. More precisely, if one wants to know an optimal processing order for a coalition \(S \cup \{i\}\), then the algorithm can start from the optimal processing order found for coalition S. In particular, this helps to analyze the marginal contribution of a player i to joining coalitions S, T with \(S \subseteq T\) and \(i \not \in T\), and thus it helps to prove the convexity of SoSi sequencing games.
The organization of this paper is as follows. Section 2 recalls basic definitions on one-machine sequencing situations and the formal definition of a SoSi sequencing game. Section 3 identifies a number of key features of the algorithm of Musegaas et al. (2015) that are especially useful in proving the convexity of SoSi sequencing games. In Sect. 4 the proof of convexity for SoSi sequencing games is provided.
2 SoSi sequencing games
This section recalls basic definitions on one-machine sequencing situations and related SoSi sequencing games.
A one-machine sequencing situation can be summarized by a tuple \((N, \sigma _0, p, \alpha )\), where N is the set of players, each with one job to be processed on the single machine. A processing order of the players can be described by a bijection \(\sigma : N \rightarrow \{1, \ldots , |N|\}\). More specifically, \(\sigma (i)=k\) means that player i is in position k. Let \(\Pi (N)\) denote the set of all such processing orders. The processing order \(\sigma _0 \in \Pi (N)\) specifies the initial order. The processing time \(p_i>0\) of the job of player i is the time the machine takes to process this job. The vector \(p \in \mathbb {R}^N_{++}\) summarizes the processing times. Furthermore, the costs for player i of spending t time units in the system are assumed to be determined by a linear cost function \(c_i : [0, \infty ) \rightarrow \mathbb {R}\) given by \(c_i(t)=\alpha _i t\) with \(\alpha _i >0\). The vector \(\alpha \in \mathbb {R}^N_{++}\) summarizes the coefficients of the linear cost functions. It is assumed that the machine starts processing at time \(t=0\), and that all jobs enter the system at \(t=0\).
A coalitional game is a pair (N, v) where N denotes a nonempty, finite set of players and \(v: 2^N \rightarrow \mathbb {R}\) assigns a monetary payoff to each coalition \(S \in 2^N\), where \(2^N\) denotes the collection of all subsets of N. In general, the value v(S) equals the highest payoff the coalition S can jointly generate by means of optimal cooperation without help of players in \(N \backslash S\). By convention, \(v(\emptyset )=0\).
To tackle the allocation problem of the maximal cost savings in a sequencing situation \((N, \sigma _0, p, \alpha )\), one can analyze an associated coalitional game (N, v). Here N naturally corresponds to the set of players in the game and, for a coalition \(S \subseteq N\), v(S) reflects the maximal cost savings this coalition can make with respect to the initial order \(\sigma _0\). In order to determine these maximal cost savings, assumptions must be made on the possible reorderings of coalition S with respect to the initial order \(\sigma _0\).
The classical (strong) assumption is that a member of a certain coalition \(S \subset N\) can only swap with another member of the coalition if all players between these two players, according to the initial order, are also members of S. Note that the resulting set of admissible reorderings for a coalition is quite restrictive, since more reorderings may be possible that do not hurt the interests of the players outside the coalition.
In a SoSi sequencing game, the set \(\mathcal {A}(\sigma _0,S)\) of admissible processing orders for a coalition \(S \in 2^N \backslash \{\emptyset \}\) consists of all \(\sigma \in \Pi (N)\) satisfying
 (i)
\(P(\sigma ,i) \subseteq P(\sigma _0,i)\) for all \(i \in N \backslash S\),
 (ii)
\(\sigma ^{-1}(\sigma (i)+1) \in F(\sigma _0,i)\) for all \(i \in N \backslash S\) with \(\sigma (i) \ne |N|\),
where \(P(\sigma ,i)\) and \(F(\sigma ,i)\) denote the sets of predecessors and followers, respectively, of player i with respect to \(\sigma \).
The following example provides an instance of a SoSi sequencing game.
Example 2.1
Consider a onemachine sequencing situation with \(N=\{1,2,3\}\). The vector of processing times is \(p=(3,2,1)\), the vector of coefficients corresponding to the linear cost functions is \(\alpha =(4,6,5)\) and the initial order is \(\sigma _0=(1~2~3)\). Let (N, v) be the corresponding SoSi sequencing game. Table 1 provides the values of all coalitions.
Table 1 The SoSi sequencing game of Example 2.1
S     \(\{1\}\)  \(\{2\}\)  \(\{3\}\)  \(\{1,2\}\)  \(\{1,3\}\)  \(\{2,3\}\)  N
v(S)  0  0  0  10  3  4  25
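The convexity of this small game (the content of Theorem 4.1) can be verified directly by brute force. The sketch below is a hypothetical helper, not code from the paper; it checks the standard convexity condition \(v(S \cup \{i\}) - v(S) \le v(T \cup \{i\}) - v(T)\) for all \(S \subseteq T \subseteq N \backslash \{i\}\), with the coalition values taken from Table 1.

```python
from itertools import combinations

# Coalition values of the SoSi sequencing game of Example 2.1 (Table 1).
v = {frozenset(): 0,
     frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 10, frozenset({1, 3}): 3, frozenset({2, 3}): 4,
     frozenset({1, 2, 3}): 25}

def is_convex(v, N):
    """Check v(S u {i}) - v(S) <= v(T u {i}) - v(T) for all S <= T <= N \\ {i}."""
    subsets = [frozenset(c) for r in range(len(N) + 1) for c in combinations(N, r)]
    for i in N:
        for S in subsets:
            for T in subsets:
                if S <= T and i not in T:
                    if v[S | {i}] - v[S] > v[T | {i}] - v[T]:
                        return False
    return True

print(is_convex(v, {1, 2, 3}))  # True: marginal contributions are monotone
```

For instance, player 3's marginal contributions are \(0, 3, 4, 15\) to \(\emptyset, \{1\}, \{2\}, \{1,2\}\) respectively, which are indeed weakly increasing along set inclusion.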
3 On the algorithm for finding the values of the coalitions
Musegaas et al. (2015) provided a polynomial time algorithm to determine an optimal order for every possible coalition and, consequently, the values of the coalitions. For proving convexity of SoSi sequencing games, we use specific key features of this algorithm. In this section we will derive and summarize these specific features. For example, in Theorem 3.4, we will show that in determining an optimal processing order of a coalition \(S \cup \{i\}\) in a SoSi sequencing game, the algorithm can start from the optimal processing order found for coalition S.
Example 3.1
Consider a one-machine sequencing situation \((N, \sigma _0, p, \alpha )\) with \(S \subseteq N\) such that \(S=\{1,2, \dots , 10\}\). In Fig. 3a an illustration can be found of the initial processing order \(\sigma _0\) and the partition of S into components. Next, consider the processing order \(\sigma \) illustrated in Fig. 3b, which is admissible for S. Note that \(\sigma \) contains fewer components than \(\sigma _0\). Figure 3b also illustrates the definition of modified components. Note that there is one modified component that is empty, namely \(S_3^{\sigma _0,\sigma }\). Since player 3 belongs to the first modified component, we have \(c(3,S,\sigma )=1\). Moreover, since player 3 is the only player who belongs to the first modified component, we have \(S^{\sigma _0,\sigma }_1=\{3\}\). Similarly, we have \(c(4,S,\sigma )=c(2,S,\sigma )=2\) and \(c(i,S,\sigma )=4\) for all \(i \in S \backslash \{2,3,4\}\). \(\triangle \)
 (i)(\(\sigma \) is componentwise optimal) for all \(i,j \in S\) with \(c(i, S, \sigma ) = c(j, S, \sigma )\):$$\begin{aligned} \sigma (i) < \sigma (j) \Rightarrow u_i \ge u_j. \end{aligned}$$
 (ii)(\(\sigma \) satisfies partial tiebreaking) for all \(i,j \in S\) with \(c(i, S, \sigma _0) = c(j, S, \sigma _0)\):$$\begin{aligned} u_i = u_j, \sigma _0(i)<\sigma _0(j) \Rightarrow \sigma (i) < \sigma (j). \end{aligned}$$
After the preprocessing step, the players in S are considered in reverse order with respect to \(\sigma _0^S\) and for every player the algorithm checks whether moving the player to a certain position later in the processing order is beneficial. If so, then the algorithm will move this player. The algorithm works in a greedy way in the sense that every player is moved to the position giving the highest cost savings at that moment. Moreover, every player is considered in the algorithm exactly once and every player is moved to another position in the processing order at most once. The obtained processing order after the complete run of the algorithm is denoted by \(\sigma _S\).
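For intuition, the greedy pass can be sketched for the grand coalition, where every reordering is admissible and component bookkeeping plays no role. The toy version below is an illustrative simplification, not the actual algorithm of Musegaas et al. (2015), which must additionally handle admissibility constraints and components for proper subcoalitions. Each player is considered once, in reverse order of the initial order, and moved at most once, to the later position yielding the highest strictly positive cost savings (leftmost in case of ties).

```python
def total_cost(order, p, alpha):
    t = cost = 0
    for i in order:
        t += p[i]
        cost += alpha[i] * t
    return cost

def greedy_grand_coalition(sigma0, p, alpha):
    """One reverse pass over sigma0; each player is moved later at most once, greedily."""
    order = list(sigma0)
    for player in reversed(sigma0):
        pos = order.index(player)
        best, best_order = 0, None
        for new_pos in range(pos + 1, len(order)):   # only later positions
            cand = order[:pos] + order[pos + 1:]     # remove the player ...
            cand.insert(new_pos, player)             # ... and reinsert him later
            savings = total_cost(order, p, alpha) - total_cost(cand, p, alpha)
            if savings > best:                       # strictly positive, highest; ties keep leftmost
                best, best_order = savings, cand
        if best_order is not None:
            order = best_order
    return order

p     = {1: 3, 2: 2, 3: 1}
alpha = {1: 4, 2: 6, 3: 5}
print(greedy_grand_coalition([1, 2, 3], p, alpha))  # [3, 2, 1]
```

On this hypothetical instance the pass first moves player 2 behind player 3 (saving 4), then moves player 1 to the end (saving a further 21), reaching the optimal order.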

Property (i): after every step during the run of the algorithm, we have a processing order that is urgency respecting with respect to S.

Property (ii): if during the run of the algorithm a player is moved to a position later in the processing order, then this results in strictly positive cost savings, corresponding to the highest possible cost savings at that moment. In case of multiple options, we choose the component that is most to the left and, in that component, the position that is most to the left.

Property (iii): the mutual order between players who have already been considered will stay the same during the rest of the run of the algorithm.

Property (iv): the processing order \(\sigma _S\) is the unique optimal processing order such that no player can be moved to an earlier component while the total costs remain the same. Also, if there are two players with the same urgency in the same component, then the player who was first in \(\sigma _0\) is earlier in processing order \(\sigma _S\).

Property (v): if it is admissible with respect to \(\sigma _0\) to move a player to a component more to the left with respect to order \(\sigma _S\), then moving this player to this component will lead to higher total costs.
Proposition 3.1
[cf. Lemma 4.1 in Musegaas et al. (2015)] Let \((N,\sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \in 2^N \backslash \{\emptyset \}\) and let \(\sigma \in \mathcal {A}(\sigma _0,S)\) be an optimal order for S. Let \(k,l \in S\) with \(\sigma (k) < \sigma (l)\) and \(c(l,\sigma _0,S) \le c(k,\sigma ,S)\). Then, \(u_k \ge u_l\).
From the previous proposition together with the fact that the algorithm moves a player to the left as far as possible (see property (ii) of the algorithm), we have that if the algorithm moves player k to a later component, then the players from coalition S that player k jumps over all have a strictly higher urgency than player k.
Example 3.2
Consider a one-machine sequencing situation \((N, \sigma _0, p, \alpha )\) with \(S \subseteq N\) such that \(S=\{1,2, \dots , 10\}\). In Fig. 4 an illustration can be found of the initial order \(\sigma _0\) together with all relevant data on the cost coefficients and processing times (the numbers above and below the players, respectively). The completion times of the players with respect to this initial order are also indicated in the figure (bottom line in bold).
Player 4: since all followers of player 4 who are members of S have a lower urgency, it is impossible to reduce the total costs by moving player 4 to a different position (see Proposition 3.1). Hence, \(\sigma \) and v(S) are not changed.
Player 3: there are two components behind player 3. Note that all players in the last component have a lower urgency than player 3. Therefore, it is impossible to reduce the total costs by moving player 3 to the last component. If player 3 is moved to the second component, then the position of player 3 should be directly behind player 4. The resulting cost savings are \(-21\) and thus moving player 3 to the second component will not reduce the total costs. Hence, the order depicted in Fig. 9 is the optimal processing order \(\sigma _S\) for coalition S obtained by the algorithm. Furthermore, \(v(S)=278\). \(\triangle \)
The following proposition, which will frequently be used later on, provides a basic property of composed costs per time unit and composed processing times. Namely, if every player in a set of players U is individually more urgent than a specific player i, then also the composed job U as a whole is more urgent than player i.^{2}
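This composed-urgency property is in essence a mediant inequality: if \(\alpha_k/p_k > \alpha_i/p_i\) for all \(k \in U\), then \(\sum_{k \in U}\alpha_k > (\alpha_i/p_i)\sum_{k \in U} p_k\), and hence \((\sum_{k \in U}\alpha_k)/(\sum_{k \in U} p_k) > \alpha_i/p_i\). A numerical sketch with hypothetical data:

```python
from fractions import Fraction

def urgency(alpha, p):
    return Fraction(alpha, p)

# Hypothetical composed job U = {a, b} versus a single player i.
alpha_U, p_U = {'a': 6, 'b': 5}, {'a': 2, 'b': 1}
alpha_i, p_i = 4, 3

# Every member of U is individually more urgent than i ...
assert all(urgency(alpha_U[k], p_U[k]) > urgency(alpha_i, p_i) for k in alpha_U)

# ... hence the composed job (summed cost coefficients over summed
# processing times) is more urgent than i as well.
composed = urgency(sum(alpha_U.values()), sum(p_U.values()))
print(composed, composed > urgency(alpha_i, p_i))  # 11/3 True
```

The same argument goes through with every \(<\) replaced by \(>\), \(\le \) or \(\ge \), matching the remark in footnote 2.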
Proposition 3.2
Proof
The following lemma compares the processing orders that are obtained from the algorithm with respect to coalition S and coalition \(S \cup \{i\}\), in case player \(i \in N \backslash S\) is the only player in the component of \(S \cup \{i\}\) with respect to \(\sigma _0\). This lemma will be the driving force behind Theorem 3.4, which in turn is the crux for proving convexity of SoSi sequencing games.
Lemma 3.3
Proof
See Appendix A. \(\square \)
From the previous lemma it follows that if one wants to determine an optimal processing order of a coalition in a SoSi sequencing game, then the information of optimal processing orders of specific subcoalitions can be used. More precisely, if one wants to know the optimal processing order \(\sigma _{S \cup \{i\}}\) derived by the algorithm for a coalition \(S \cup \{i\}\) with \(i \not \in S\) and i being the only player in its component in \(\sigma _0\), then it does not matter whether one takes \(\sigma _0\) or \(\sigma _S\) as the initial processing order, as stated in the following theorem.
Since the initial order will be varied, we need some additional notation. We denote the processing order obtained after the complete run of the algorithm for the one-machine sequencing situation \((N,\sigma ,p,\alpha )\) with initial order \(\sigma \) and coalition S by \(\text {Alg}((N, \sigma ,p, \alpha ),S)\). Hence, \(\text {Alg}((N, \sigma _0,p, \alpha ),S)=\sigma _S\).
Theorem 3.4
Proof
We start by proving that the minimum costs for coalition \(S \cup \{i\}\) in the sequencing situation \((N, \sigma _0, p, \alpha )\) are equal to the minimum costs for coalition \(S \cup \{i\}\) in the sequencing situation \((N, \sigma _S, p, \alpha )\). Then, we show that the two corresponding sets of optimal processing orders are equal. Finally, the fact that the algorithm always selects a unique processing order among the set of all optimal processing orders (property (iv)) completes the proof.
First, take \(\sigma ^* \in \mathcal {O}(\sigma _S,S \cup \{i\})\). Since \(\mathcal {A}(\sigma _S,S \cup \{i\}) \subseteq \mathcal {A}(\sigma _0,S \cup \{i\})\), we have \(\sigma ^* \in \mathcal {A}(\sigma _0,S \cup \{i\})\). Moreover, due to (3), we also have \(\sigma ^* \in \mathcal {O}(\sigma _0,S \cup \{i\})\).
It readily follows from the previous theorem that all players in a component to the right of player i with respect to \(\sigma _S\) are not moved to a different component when applying the algorithm to the one-machine sequencing situation \((N,\sigma _S,p,\alpha )\) and coalition \(S \cup \{i\}\). This is stated in the following proposition.
Proposition 3.5
The next proposition states that all players in a component to the left of player i with respect to \(\sigma _S\) are, if they are moved by the algorithm, moved componentwise at least as far as the original component of player i in \(\sigma _0\). As a consequence, all players that are in \(\sigma _{S \cup \{i\}}\) to the left of the original component of player i in \(\sigma _0\), are not moved by the algorithm when going from \(\sigma _S\) to \(\sigma _{S \cup \{i\}}\).
Proposition 3.6
 (i)For all \(k \in S\) with \(c(k,S \cup \{i\}, \sigma _{S \cup \{i\}})> c(k,S \cup \{i\},\sigma _S)\) we have$$\begin{aligned} c(k,S \cup \{i\},\sigma _{S \cup \{i\}}) \ge c(i,S \cup \{i\},\sigma _0), \end{aligned}$$
 (ii)For all \(k \in S\) with \(c(k, S \cup \{i\}, \sigma _{S \cup \{i\}})<c(i,S \cup \{i\},\sigma _0)\) we have$$\begin{aligned} c(k, S \cup \{i\}, \sigma _{S \cup \{i\}})=c(k,S \cup \{i\}, \sigma _S). \end{aligned}$$
The previous proposition follows directly from the following, more technical, lemma. This lemma shows that, when applying the algorithm to the one-machine sequencing situation \((N,\sigma _S,p,\alpha )\) and coalition \(S \cup \{i\}\), once a predecessor of player i with respect to \(\sigma _S\) is considered by the algorithm, moving this player to a position to the left of the original component of player i in \(\sigma _0\) is never beneficial.
Lemma 3.7
Proof
See Appendix B. \(\square \)
4 On the convexity of SoSi sequencing games
The main result of this paper is the following theorem.
Theorem 4.1
Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation and let (N, v) be the corresponding SoSi sequencing game. Then, (N, v) is convex.
Before presenting the formal proof of our main result, we first highlight some of its important aspects. Using (6), let \(S \in 2^N \backslash \{\emptyset \}\) and let \(i,j \in N\) with \(i \ne j\) be such that \(S \subseteq N \backslash \{i,j\}\).

Assumption 1: \(\sigma _0(j) < \sigma _0(i)\).

Assumption 2: \((S \cup \{j\} \cup \{i\})_{c(j,S \cup \{j\} \cup \{i\},\sigma _0)}^{\sigma _0}=\{j\}\) and \((S \cup \{j\} \cup \{i\})_{c(i,S \cup \{j\} \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\).
In order to prove Theorem 4.1 we need to compare the marginal contribution of player i to coalition S and the marginal contribution of player i to coalition \(S \cup \{j\}\). As argued above, both marginal contributions can be written as the sum of the positive cost differences of the players who are moved by the algorithm to a different component. In order to compare those cost differences more easily, we first partition the players in \(M^i(S)\), based on their positions in the processing orders \(\sigma _S\) and \(\sigma _{S \cup \{j\}}\), into four subsets. Second, we derive from \(\sigma _S\) a special processing order \(\overline{\sigma }\) such that all players from \(M^i(S)\) are in the same component in \(\overline{\sigma }\) and \(\sigma _{S \cup \{j\}}\). The convexity proof is finished by adequately comparing all positive cost differences.
Proof of Theorem 4.1

\(M_1^i(S)\): the set of players in \(M^i(S)\) who are in \(\sigma _S\) to the left of player j,

\(M_2^i(S)\): the set of players in \(M^i(S)\) who are in \(\sigma _S\) between player j and player i, or player i himself.

\(M_{1a}^i(S)\): the set of players in \(M_1^i(S)\) who are in \(\sigma _{S \cup \{j\}}\) to the left of the original component of player j,

\(M_{1b}^i(S)\): the set of players in \(M_1^i(S)\) who are in \(\sigma _{S \cup \{j\}}\) between the original components of player j and player i, or in the original component of player j,

\(M_{1c}^i(S)\): the set of players in \(M_1^i(S)\) who are in \(\sigma _{S \cup \{j\}}\) to the right of the original component of player i.

Claim 1 \(c(k,\sigma _{S \cup \{j\} \cup \{i\}}) = c(k,\sigma _{S \cup \{i\}}) \ge c(i,\sigma _S)\) for all \(k \in M^i(S)\).

Claim 2 \(c(k,\sigma _{S \cup \{j\} \cup \{i\}}) = c(k,\sigma _{S \cup \{j\}})\) for all \(k \in M^i_{1c}(S)\).

Claim 3 \(c(k,\sigma _S) = c(k,\sigma _{S \cup \{j\} })\) for all \(k \in M_2^i(S)\).

Claim 4 \(c(k,\sigma _S) = c(k,\sigma _{S \cup \{j\} })\) for all \(k \in M_{1a}^i(S)\).
 (i)for all \(k \in M^i(S)\):$$\begin{aligned} c(k,\overline{\sigma })=c(k,\sigma _{S \cup \{j\}}), \end{aligned}$$(11)
 (ii)for all \(k \in S \backslash M^i(S)\):$$\begin{aligned} c(k,\overline{\sigma })=c(k,\sigma _S), \end{aligned}$$
 (iii)for all \(k, l \in S\) with \(c(k,\overline{\sigma })=c(l,\overline{\sigma })\):$$\begin{aligned} u_k=u_l, \sigma _0(k)<\sigma _0(l) \Rightarrow \overline{\sigma }(k) < \overline{\sigma }(l). \end{aligned}$$
The following claim states that the cost savings from moving a player in \(M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\) when going from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\) are at most the cost savings from moving the same player when going from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\).
– Claim 5 \(\overline{\delta }_k \le \delta _k\) for all \(k \in M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\).
Proof
The proof can be found in Appendix E.
 (i)
The extra worth that is obtained by adding player i to coalition S can be split into two parts. The first part is due to the fact that player i joins the coalition and it represents the cost savings for player i in processing order \(\sigma _S\) compared to \(\sigma _0\). The completion time of player i is reduced by the sum of the processing times of the players that jumped over player i when going from \(\sigma _0\) to \(\sigma _S\) without moving any players. The second part represents the cost savings for coalition \(S \cup \{i\}\) by additionally moving players when going from \(\sigma _S\) to the optimal processing order \(\sigma _{S \cup \{i\}}\).
 (ii)
The optimal processing order \(\sigma _{S \cup \{i\}}\) can be obtained from \(\sigma _S\) via \(\overline{\sigma }\) where some players are already (partially) moved to the right.
 (iii)
The cost difference for coalition \(S \cup \{i\}\) when going from \(\sigma _S\) to \(\overline{\sigma }\) can be split into two parts: the cost difference for coalition S and the cost difference for player i. By the definition of \(\overline{\sigma }\) and since \(i \not \in S \cup \{j\}\), player i is not moved when going from \(\sigma _S\) to \(\overline{\sigma }\) and the completion time of player i is reduced by the sum of the processing times of the players that jumped over player i when going from \(\sigma _S\) to \(\overline{\sigma }\), i.e., the sum of the processing times of the players in \(M^i_{1c}(S)\).
 (iv)
Processing order \(\sigma _S\) is optimal for coalition S and thus \(C(\sigma _S,S) - C(\overline{\sigma },S) \le 0\).
 (v)
This follows from the definition of \(\overline{\delta }_k\).
 (vi)
This follows from Claim 5.
 (vii)
This follows from \((M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)) \subseteq M^i(S \cup \{j\})\) (cf. Fig. 10) and \(\delta _k > 0\) for all \(k \in M^i(S \cup \{j\})\) due to property (ii) of the algorithm.
 (viii)
This follows from the definition of \(\delta _k\).
 (ix)
This follows from \(M^i_{1c}(S) \subseteq (P(\sigma _S,i) \cap F(\sigma _{S \cup \{j\}},i))\) (cf. Fig. 10).
 (x)
The group of players that jump over player i when going from \(\sigma _0\) to \(\sigma _{S \cup \{j\}}\) can be split into two groups: the group of players that jumped over player i when going from \(\sigma _0\) to \(\sigma _S\) and the group of players that were positioned in front of player i in \(\sigma _S\) but jumped over player i when going from \(\sigma _S\) to \(\sigma _{S \cup \{j\}}\). Hence, \(\{P(\sigma _0,i) \cap F(\sigma _S,i), P(\sigma _S,i) \cap F(\sigma _{S \cup \{j\}},i)\}\) is a partition of \(P(\sigma _0,i) \cap F(\sigma _{S \cup \{j\}},i)\).
 (xi)
Similar to the explanation in (i).
Footnotes
 1.
Processing order (2 3 1) means that player 2 is in the first position, player 3 in the second position and player 1 in the last position.
 2.
Note that this proposition also holds if every < sign is replaced by a >, \(\le \) or \(\ge \) sign.
 3.
Note that in case \(c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})=c(i,\sigma _0)\), it is not admissible for the algorithm to move player m to component \((S \cup \{i\})^{\sigma _0,\tau ^S}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})}\) due to requirement (ii) of admissibility, but this is no problem as also in this case the upcoming arguments are still valid.
References
 Curiel I, Pederzoli G, Tijs S (1989) Sequencing games. Eur J Oper Res 40:344–351
 Curiel I, Potters J, Prasad R, Tijs S, Veltman B (1993) Cooperation in one machine scheduling. Z Oper Res 38:113–129
 Musegaas M, Borm P, Quant M (2015) Step out–step in sequencing games. Eur J Oper Res 246:894–906
 Slikker M (2006) Relaxed sequencing games have a nonempty core. Nav Res Logist 53:235–242
 Smith W (1956) Various optimizers of single-stage production. Nav Res Logist Q 3:59–66
 van Velzen B, Hamers H (2003) On the balancedness of relaxed sequencing games. Math Methods Oper Res 57:287–297
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.