Abstract
The main result of this paper is the convexity of step out–step in (SoSi) sequencing games, a class of relaxed sequencing games first analyzed by Musegaas et al. (Eur J Oper Res 246:894–906, 2015). The proof makes use of a polynomial time algorithm determining the value and an optimal processing order for an arbitrary coalition in a SoSi sequencing game. In particular, we use the fact that, in determining an optimal processing order for a coalition, the algorithm can start from an optimal processing order already found for any smaller subcoalition, so that all information on such an optimal processing order can be reused.
1 Introduction
This paper considers one-machine sequencing situations in which a number of players, each with one job, have to be served by a single machine. The processing time of a job is the time the machine takes to process it. Every player has an individual linear cost function, specified by an individual cost parameter, which depends on the completion time of his job, defined as the sum of the processing times of his own job and of all jobs processed before it. There are no further restrictive assumptions such as due dates, ready times or precedence constraints imposed on the jobs. Smith (1956) showed that the total joint costs are minimal if the jobs are processed in weakly decreasing order of urgency, defined as the ratio of the individual cost parameter to the processing time.
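Smith's rule is easy to illustrate computationally. The following sketch (with hypothetical job data, not taken from the paper) sorts jobs by weakly decreasing urgency and compares total joint costs:

```python
# Smith's rule: processing jobs in weakly decreasing order of urgency
# u_i = alpha_i / p_i minimizes the total joint costs sum_i alpha_i * C_i.
# Minimal sketch; the job data below is hypothetical.

def total_cost(order, p, alpha):
    """Total joint costs of a processing order (a list of players)."""
    t, cost = 0, 0
    for i in order:
        t += p[i]              # completion time C_i of player i
        cost += alpha[i] * t
    return cost

def smith_order(p, alpha):
    """Players sorted in weakly decreasing order of urgency alpha_i / p_i."""
    return sorted(p, key=lambda i: alpha[i] / p[i], reverse=True)

p     = {"a": 3, "b": 2, "c": 1}   # processing times (hypothetical)
alpha = {"a": 4, "b": 6, "c": 5}   # cost coefficients (hypothetical)

opt = smith_order(p, alpha)        # urgencies: a = 4/3, b = 3, c = 5
```

Here `opt` is `["c", "b", "a"]` with total joint costs 47, against 72 for the initial order `["a", "b", "c"]`, so the maximal total cost savings are 25.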
We assume that the players are arranged in an initial order, so that rearranging this order into an optimal one leads to cost savings. To analyze how these cost savings should be allocated among the players, sequencing games are introduced. The value of a coalition in a sequencing game serves as a benchmark for determining a fair allocation of the optimal cost savings: it represents the "virtual" maximal cost savings that this coalition can achieve by means of admissible rearrangements. Which rearrangements are admissible for a coalition is a modeling choice. The classical assumption, made in Curiel et al. (1989), is that two players of a certain coalition can only swap their positions if all players between them are also members of the coalition. They show that the resulting sequencing games are convex and therefore have a non-empty core. Relaxed sequencing games arise by relaxing this classical assumption about the set of admissible rearrangements for coalitions in a consistent way.
In Curiel et al. (1993), four different relaxed sequencing games are introduced. These relaxations are based on requirements for the players outside the coalition regarding either their position in the processing order or their starting time. Slikker (2006) studied these four relaxed sequencing games in more detail by investigating the corresponding cores. In van Velzen and Hamers (2003) two further classes of relaxed sequencing games are considered. In relaxed sequencing games the values of coalitions can only become larger, because the set of admissible rearrangements is larger than in the classical case. As a consequence, while classical sequencing games are convex, relaxed sequencing games need not be convex anymore. To the best of our knowledge, there is no general convexity result for specific subclasses of relaxed sequencing games.
In Musegaas et al. (2015) an alternative class of relaxed sequencing games is considered, the class of step out–step in (SoSi) sequencing games. In a SoSi sequencing game a member of a coalition is allowed to step out from his position in the processing order and to step in at any later position in the processing order. By providing an upper bound on the values of the coalitions in a SoSi sequencing game, Musegaas et al. (2015) showed that every SoSi sequencing game has a non-empty core. Moreover, Musegaas et al. (2015) provided a polynomial time algorithm to determine the value and an optimal processing order for an arbitrary coalition in a SoSi sequencing game. This paper shows, by means of this polynomial time algorithm, that SoSi sequencing games are convex. To prove this, we use a specific feature of the algorithm: in determining an optimal processing order for a coalition, one can use the information of the optimal processing orders of subcoalitions. More precisely, if one wants to know an optimal processing order for a coalition \(S \cup \{i\}\), then the algorithm can start from the optimal processing order found for coalition S. In particular, this helps to analyze the marginal contribution of a player i when joining coalitions S and T with \(S \subseteq T\) and \(i \not \in T\), and thus it helps to prove the convexity of SoSi sequencing games.
The organization of this paper is as follows. Section 2 recalls basic definitions on one-machine sequencing situations and the formal definition of a SoSi sequencing game. Section 3 identifies a number of key features of the algorithm of Musegaas et al. (2015) that are especially useful in proving the convexity of SoSi sequencing games. Section 4 provides the proof of convexity for SoSi sequencing games.
2 SoSi sequencing games
This section recalls basic definitions on one-machine sequencing situations and related SoSi sequencing games.
A one-machine sequencing situation can be summarized by a tuple \((N, \sigma _0, p, \alpha )\), where N is the set of players, each with one job to be processed on the single machine. A processing order of the players can be described by a bijection \(\sigma : N \rightarrow \{1, \ldots , |N|\}\). More specifically, \(\sigma (i)=k\) means that player i is in position k. Let \(\Pi (N)\) denote the set of all such processing orders. The processing order \(\sigma _0 \in \Pi (N)\) specifies the initial order. The processing time \(p_i>0\) of the job of player i is the time the machine takes to process this job. The vector \( p \in \mathbb {R}^N _{++}\) summarizes the processing times. Furthermore, the costs for player i of spending t time units in the system are assumed to be determined by a linear cost function \(c_i : [0, \infty ) \rightarrow \mathbb {R}\) given by \(c_i(t)=\alpha _it\) with \(\alpha _i >0\). The vector \(\alpha \in \mathbb {R}^N _{++}\) summarizes the coefficients of the linear cost functions. It is assumed that the machine starts processing at time \(t=0\), and also that all jobs enter the system at \(t=0\).
The total joint costs of a processing order \(\sigma \in \Pi (N)\) are given by \(\sum _{i \in N}{\alpha _iC_i(\sigma )}\), where \(C_i(\sigma )\) denotes the completion time of player i and is defined by
$$\begin{aligned} C_i(\sigma )=\sum _{j \in N :\, \sigma (j) \le \sigma (i)}{p_j}. \end{aligned}$$
A processing order is called optimal if it minimizes the total joint costs over all possible processing orders. In Smith (1956) it is shown that in each optimal order the players are processed in weakly decreasing order with respect to their urgency \(u_i\) defined by \(u_i=\frac{\alpha _i}{p_i}\). The maximal total cost savings are equal to the difference in total costs between the initial order and an optimal order.
A coalitional game is a pair (N, v) where N denotes a non-empty, finite set of players and \(v: 2^N \rightarrow \mathbb {R}\) assigns a monetary payoff to each coalition \(S \in 2^N\), where \(2^N\) denotes the collection of all subsets of N. In general, the value v(S) equals the highest payoff the coalition S can jointly generate by means of optimal cooperation without help of players in \(N \backslash S\). By convention, \(v(\emptyset )=0\).
To tackle the allocation problem of the maximal cost savings in a sequencing situation \((N, \sigma _0, p, \alpha )\), one can analyze an associated coalitional game (N, v). Here N naturally corresponds to the set of players in the game and, for a coalition \(S \subseteq N\), v(S) reflects the maximal cost savings this coalition can make with respect to the initial order \(\sigma _0\). In order to determine these maximal cost savings, assumptions must be made on the possible reorderings of coalition S with respect to the initial order \(\sigma _0\).
The classical (strong) assumption is that a member of a certain coalition \(S \subset N\) can only swap positions with another member of the coalition if all players between these two players, according to the initial order, are also members of S. The resulting set of admissible reorderings for a coalition is quite restrictive, because there may be further reorderings that do not hurt the interests of the players outside the coalition.
In a SoSi sequencing game a member of the coalition S is allowed to step out from his position in the processing order and to step in at any later position in the processing order. Note that, from an optimality point of view, one can assume without loss of generality that a member of S who steps out only steps in at a position directly behind another member of the coalition S. This means that, for every player outside S, the set of predecessors cannot become larger and his direct follower was already one of his followers in the initial order. Hence, a processing order \(\sigma \) is called admissible for S in a SoSi sequencing game if
- (i) \(P(\sigma ,i) \subseteq P(\sigma _0,i)\) for all \(i \in N \backslash S\),
- (ii) \(\sigma ^{-1}(\sigma (i)+1) \in F(\sigma _0,i)\) for all \(i \in N \backslash S\) with \(\sigma (i) \ne |N|\),
where \(P(\sigma ,i)=\{j \in N~|~\sigma (j) < \sigma (i)\}\) denotes the set of predecessors of player i with respect to processing order \(\sigma \) and \(F(\sigma ,i)=\{j \in N~|~\sigma (j) > \sigma (i)\}\) denotes the set of followers. Given an initial order \(\sigma _0\), the set of admissible orders for coalition S is denoted by \(\mathcal {A}(\sigma _0,S)\). Correspondingly, Musegaas et al. (2015) defined the Step out, Step in (SoSi) sequencing game (N, v) by
$$\begin{aligned} v(S)=\sum _{i \in S}{\alpha _iC_i(\sigma _0)}-\min _{\sigma \in \mathcal {A}(\sigma _0,S)}{\sum _{i \in S}{\alpha _iC_i(\sigma )}} \end{aligned}$$
for all \(S \subseteq N\). A processing order \(\sigma ^* \in \mathcal {A}(\sigma _0,S)\) is called optimal for S if
$$\begin{aligned} \sum _{i \in S}{\alpha _iC_i(\sigma ^*)}=\min _{\sigma \in \mathcal {A}(\sigma _0,S)}{\sum _{i \in S}{\alpha _iC_i(\sigma )}}. \end{aligned}$$
Note that a processing order is admissible for a coalition in a classical sequencing game if condition (i) holds with equality. Therefore, given a coalition, the set of admissible orders in a SoSi sequencing game contains the set of admissible orders in the corresponding classical sequencing game. As a consequence, the values of coalitions in SoSi sequencing games can be larger than in classical sequencing games.
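The two admissibility conditions can be made concrete in a short sketch. The checker below (not part of the original paper) represents orders as lists of players, first position first:

```python
# Sketch of the two admissibility conditions for a SoSi sequencing game:
# (i)  no player outside S gets new predecessors,
# (ii) the direct successor of a player outside S must already be one of
#      his followers in the initial order.

def predecessors(order, i):
    """P(order, i): the set of players processed before player i."""
    return set(order[:order.index(i)])

def is_admissible(order, initial, S):
    for i in initial:
        if i in S:
            continue
        # condition (i): P(sigma, i) must be a subset of P(sigma_0, i)
        if not predecessors(order, i) <= predecessors(initial, i):
            return False
        # condition (ii): the direct successor of i must be in F(sigma_0, i)
        pos = order.index(i)
        if pos != len(order) - 1:
            successor = order[pos + 1]
            if initial.index(successor) < initial.index(i):
                return False
    return True
```

For example, with initial order `[1, 2, 3]` and `S = {1, 3}`, exactly the orders `[1, 2, 3]` and `[2, 3, 1]` pass both conditions.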
The following example provides an instance of a SoSi sequencing game.
Example 2.1
Consider a one-machine sequencing situation with \(N=\{1,2,3\}\). The vector of processing times is \(p=(3,2,1)\), the vector of coefficients corresponding to the linear cost functions is \(\alpha =(4,6,5)\) and the initial order is \(\sigma _0=(1~2~3)\). Let (N, v) be the corresponding SoSi sequencing game. Table 1 provides the values of all coalitions.
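For an instance this small, the coalition values can be verified by brute force. The sketch below assumes that v(S) is the maximal total cost saving of the members of S over all admissible orders (an assumption consistent with the example; the admissibility conditions are those of Sect. 2):

```python
# Brute-force computation of the SoSi coalition values in Example 2.1:
# enumerate all processing orders, keep those admissible for S, and
# maximize the total cost savings of the members of S. The value
# definition used here (savings summed over members of S only) is an
# assumption consistent with the example.
from itertools import permutations

p      = {1: 3, 2: 2, 3: 1}
alpha  = {1: 4, 2: 6, 3: 5}
sigma0 = [1, 2, 3]

def completion_times(order):
    t, C = 0, {}
    for i in order:
        t += p[i]
        C[i] = t
    return C

def is_admissible(order, S):
    for i in sigma0:
        if i in S:
            continue
        pos = order.index(i)
        # condition (i): the predecessors of i may not grow
        if not set(order[:pos]) <= set(sigma0[:sigma0.index(i)]):
            return False
        # condition (ii): the direct successor of i must follow i in sigma0
        if pos < len(order) - 1 and sigma0.index(order[pos + 1]) < sigma0.index(i):
            return False
    return True

def v(S):
    C0 = completion_times(sigma0)
    best = 0
    for order in permutations(sigma0):
        if is_admissible(list(order), S):
            C = completion_times(order)
            best = max(best, sum(alpha[i] * (C0[i] - C[i]) for i in S))
    return best
```

Under this assumption the brute force yields, for instance, \(v(\{1,3\})=3\) and \(v(N)=25\).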
Note that the values of the coalitions in the game (N, v) are equal to the values of the coalitions in the classical sequencing game of this one-machine sequencing situation except for the only disconnected coalition, coalition \(\{1,3\}\). Coalition \(\{1,3\}\) cannot save costs in the classical sequencing game because there exists no admissible order other than the initial order. However, in the SoSi sequencing game coalition \(\{1,3\}\) has two admissible orders:
These processing orders are illustrated in Fig. 1. Hence, the value of coalition \(\{1,3\}\) is given by
$$\begin{aligned} v(\{1,3\})=5 \cdot (6-3)+4 \cdot (3-6)=15-12=3. \end{aligned}$$
3 On the algorithm for finding the values of the coalitions
Musegaas et al. (2015) provided a polynomial time algorithm to determine an optimal order for every possible coalition and, consequently, the values of the coalitions. For proving convexity of SoSi sequencing games, we use specific key features of this algorithm, which we derive and summarize in this section. For example, in Theorem 3.4, we will show that in determining an optimal processing order of a coalition \(S \cup \{i\}\) in a SoSi sequencing game, the algorithm can start from the optimal processing order found for coalition S.
We start by recalling some definitions, such as components. For \(S \in 2^N \backslash \{\emptyset \}\), \(\sigma \in \Pi (N)\) and \(s,t \in N\) with \(\sigma (s)<\sigma (t)\), define
$$\begin{aligned} S^{\sigma }[s,t]&=\{j \in S~|~\sigma (s) \le \sigma (j) \le \sigma (t)\},\\ \bar{S}^{\sigma }[s,t]&=\{j \in N \backslash S~|~\sigma (s) \le \sigma (j) \le \sigma (t)\}. \end{aligned}$$
The sets of players \(S^{\sigma }[s,t)\), \(\bar{S}^{\sigma }[s,t)\), \(S^{\sigma }(s,t]\) and \(\bar{S}^{\sigma }(s,t]\) are defined in a similar way.
A coalition \(S \in 2^N \backslash \{\emptyset \}\) is called connected with respect to \(\sigma _0\) if for all \(i,j \in S\) and \(k \in N\) such that \(\sigma _0(i)<\sigma _0(k)<\sigma _0(j)\) it holds that \(k \in S\). A connected coalition \(U \subseteq S\) with respect to \(\sigma _0\) is called a component of S with respect to \(\sigma _0\) if \(U \subseteq U' \subseteq S\) and \(U'\) connected with respect to \(\sigma _0\) implies that \(U'=U\). Let \(h(\sigma _0,S) \ge 1\) denote the number of components of S with respect to \(\sigma _0\). The partition of S into components with respect to \(\sigma _0\) is denoted by
where for each \(k \in \{1, \ldots , h(\sigma _0,S)-1\}\), \(i \in S_k^{\sigma _0}\) and \(j \in S_{k+1}^{\sigma _0}\) we have \(\sigma _0(i) < \sigma _0(j)\). In the same way, processing order \(\sigma _0\) divides \(N \backslash S\) into subgroups. For this, define
for all \(k \in \{1,\ldots ,h(\sigma _0,S)-1\}\). Notice that \(\overline{S}_0^{\sigma _0}\) and \(\overline{S}_{h(\sigma _0,S)}^{\sigma _0}\) might be empty sets, but \(\overline{S}_k^{\sigma _0} \ne \emptyset \) for all \(k \in \{1,\ldots ,h(\sigma _0, S)-1\}\). See Fig. 2 for an illustration of the subdivision of S and \(N \backslash S\) into subgroups by means of processing order \(\sigma _0\).
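The partition of S into components can be computed with a simple scan over the initial order. The following sketch (orders as lists of players) is an illustration, not the paper's algorithm:

```python
# Components of a coalition S with respect to an initial order: maximal
# groups of members of S that occupy consecutive positions.

def components(initial, S):
    comps, current = [], []
    for player in initial:
        if player in S:
            current.append(player)
        elif current:
            comps.append(current)   # a component of S just ended
            current = []
    if current:
        comps.append(current)
    return comps
```

For instance, for initial order `[1, 2, 3, 4, 5, 6]` and `S = {1, 2, 4, 6}` the components are `[[1, 2], [4], [6]]`, so \(h(\sigma _0,S)=3\), and \(N \backslash S\) splits into the subgroups \(\{3\}\) and \(\{5\}\) in between, while the outer subgroups are empty.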
Note that for given \(S \subseteq N\) it is possible that a processing order \(\sigma \in \mathcal {A}(\sigma _0,S)\) contains fewer components than \(\sigma _0\), because all players of a certain component of S may step out from this component and join later components. For \(\sigma \in \mathcal {A}(\sigma _0,S)\) with \(\sigma _0 \in \Pi (N)\), define modified components \(S_1^{\sigma _0, \sigma }, \dots , S_{h(\sigma _0,S)}^{\sigma _0, \sigma }\) by
for all \(k \in \{1, \ldots , h(\sigma _0,S)\}\). Hence, \(S_k^{\sigma _0,\sigma }\) consists of the group of players that are positioned in processing order \(\sigma \) in between the subgroups \(\overline{S}_{k-1}^{\sigma _0}\) and \(\overline{S}_{k}^{\sigma _0}\).
Note that \(S_k^{\sigma _0, \sigma }\) might be empty for some k while
Moreover, recall that a player is not allowed to move to an earlier component (condition (i) of admissibility), but he is allowed to move to any position later in the processing order, and thus we have
$$\begin{aligned} \bigcup _{k=1}^{l}{S_k^{\sigma _0, \sigma }} \subseteq \bigcup _{k=1}^{l}{S_k^{\sigma _0}} \end{aligned}$$
for all \(l \in \{1, \ldots , h(\sigma _0,S)\}\). Furthermore, denote the index of the corresponding modified component of player \(i \in S\) in processing order \(\sigma \) with respect to initial processing order \(\sigma _0\) by \(c(i,S,\sigma )\), where \(c(i,S,\sigma )=k\) if and only if \(i \in S_k^{\sigma _0, \sigma }\).
Since the component index of player \(i \in S\) with respect to \(\sigma \) can only increase (due to condition (i) of admissibility), we have
$$\begin{aligned} c(i,S,\sigma ) \ge c(i,S,\sigma _0) \end{aligned}$$
for all \(i \in S\).
An illustration of the definitions of components, modified components and the index \(c(i,S,\sigma )\) can be found in the following example.
Example 3.1
Consider a one-machine sequencing situation \((N, \sigma _0, p, \alpha )\) with \(S \subseteq N\) such that \(S=\{1,2, \dots , 10\}\). In Fig. 3a an illustration can be found of initial processing order \(\sigma _0\) and the partition of S into components. Next, consider processing order \(\sigma \) as illustrated in Fig. 3b that is admissible for S. Note that \(\sigma \) contains fewer components than \(\sigma _0\). Figure 3b also illustrates the definition of modified components. Note that there is one modified component that is empty, namely \(S_3^{\sigma _0,\sigma }\). Since player 3 belongs to the first modified component, we have \(c(3,S,\sigma )=1\). Moreover, since player 3 is the only player who belongs to the first modified component, we have \(S^{\sigma _0,\sigma }_1=\{3\}\). Similarly, we have \(c(4,S,\sigma )=c(2,S,\sigma )=2,\) and \(c(i,S,\sigma )= 4,\) for all \(i \in S \backslash \{2,3,4\}\).\(\triangle \)
Given a one-machine sequencing situation \((N, \sigma _0, p, \alpha )\) and a coalition \(S \in 2^N \backslash \{\emptyset \}\), the polynomial time algorithm introduced by Musegaas et al. (2015) starts with a preprocessing step. In this preprocessing step, the players within the components of S are reordered such that they are in weakly decreasing order with respect to their urgency. This is done by setting the initial processing order \(\sigma _0\) equal to the processing order \(\sigma _0^S\), where \(\sigma _0^S \in \mathcal {A}(\sigma _0,S)\) is the unique urgency respecting processing order such that for all \(i \in S\)
$$\begin{aligned} c(i,S,\sigma _0^S)=c(i,S,\sigma _0), \end{aligned}$$ (1)
where a processing order \(\sigma \in \Pi (N)\) is called urgency respecting with respect to S if
- (i) (\(\sigma \) is componentwise optimal) for all \(i,j \in S\) with \(c(i, S, \sigma ) = c(j, S, \sigma )\):
$$\begin{aligned} \sigma (i) < \sigma (j) \Rightarrow u_i \ge u_j. \end{aligned}$$
- (ii) (\(\sigma \) satisfies partial tiebreaking) for all \(i,j \in S\) with \(c(i, S, \sigma _0) = c(j, S, \sigma _0)\):
$$\begin{aligned} u_i = u_j, \sigma _0(i)<\sigma _0(j) \Rightarrow \sigma (i) < \sigma (j). \end{aligned}$$
Note that (1) states that all players in S stay in their component, i.e., the partition of S into components stays the same. Condition (i) of urgency respecting states that the players within a component of S are in weakly decreasing order with respect to their urgency. Moreover, the tiebreaking rule in condition (ii) ensures that if there are two players with the same urgency in the same component of S with respect to \(\sigma _0\), then the player who was first in \(\sigma _0\) is earlier in processing order \(\sigma \). Note that the partial tiebreaking condition does not imply anything about the relative order of two players with the same urgency who are in the same component of S with respect to \(\sigma \) but who were in different components of S with respect to \(\sigma _0\). Therefore, an urgency respecting order need not be unique in general, but \(\sigma _0^S\) is, because of condition (1).
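The preprocessing step can be sketched as a componentwise stable sort (hypothetical data below; Python's `sorted` is stable, so players with equal urgency keep their relative order from \(\sigma _0\), which realizes the partial tiebreaking condition (ii)):

```python
# Sketch of the preprocessing step: within every component of S, the
# members are put in weakly decreasing order of urgency alpha_i / p_i.
# sorted() is stable, so ties keep their sigma_0 order.

def preprocess(initial, S, p, alpha):
    order = list(initial)
    i = 0
    while i < len(order):
        if order[i] not in S:
            i += 1
            continue
        j = i
        while j < len(order) and order[j] in S:
            j += 1                     # positions [i, j) form one component
        order[i:j] = sorted(order[i:j], key=lambda k: -alpha[k] / p[k])
        i = j
    return order

# hypothetical data: components of S = {1, 2, 4, 5} are [1, 2] and [4, 5]
p     = {1: 2, 2: 1, 3: 1, 4: 1, 5: 2}
alpha = {1: 2, 2: 3, 3: 1, 4: 2, 5: 4}
```

Here the urgencies are \(u_1=1\), \(u_2=3\), so the first component becomes `[2, 1]`, while \(u_4=u_5=2\) is a tie, so the second component keeps the \(\sigma _0\) order `[4, 5]`.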
After the preprocessing step, the players in S are considered in reverse order with respect to \(\sigma _0^S\) and for every player the algorithm checks whether moving the player to a certain position later in the processing order is beneficial. If so, then the algorithm will move this player. The algorithm works in a greedy way in the sense that every player is moved to the position giving the highest cost savings at that moment. Moreover, every player is considered in the algorithm exactly once and every player is moved to another position in the processing order at most once. The obtained processing order after the complete run of the algorithm is denoted by \(\sigma _S\).
The following properties follow directly from the definition and the characteristics of the algorithm for finding the optimal processing order \(\sigma _S\) for coalition S and will be used in this paper in order to show that SoSi sequencing games are convex.
- Property (i): after every step during the run of the algorithm, we have a processing order that is urgency respecting with respect to S.
- Property (ii): if during the run of the algorithm a player is moved to a position later in the processing order, then this results in strictly positive cost savings, corresponding to the highest possible cost savings at that instance. In case of multiple options, the algorithm chooses the component that is most to the left and, within that component, the position that is most to the left.
- Property (iii): the mutual order of players who have already been considered stays the same during the rest of the run of the algorithm.
- Property (iv): the processing order \(\sigma _S\) is the unique optimal processing order such that no player can be moved to an earlier component while the total costs remain the same. Also, if there are two players with the same urgency in the same component, then the player who was first in \(\sigma _0\) is earlier in processing order \(\sigma _S\).
- Property (v): if it is admissible with respect to \(\sigma _0\) to move a player to a component more to the left with respect to order \(\sigma _S\), then moving this player to this component leads to higher total costs.
An interesting property for the urgencies of players in an optimal order is that if it is admissible that two players switch position, then the player with the highest urgency should be positioned first. This is stated in the following proposition.
Proposition 3.1
[cf. Lemma 4.1 in Musegaas et al. (2015)] Let \((N,\sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \in 2^N \backslash \{\emptyset \}\) and let \(\sigma \in \mathcal {A}(\sigma _0,S)\) be an optimal order for S. Let \(k,l \in S\) with \(\sigma (k) < \sigma (l)\) and \(c(l,S,\sigma _0) \le c(k,S,\sigma )\). Then, \(u_k \ge u_l\).
From the previous proposition together with the fact that the algorithm moves a player to the left as far as possible (see property (ii) of the algorithm), we have that if the algorithm moves player k to a later component, then the players from coalition S that player k jumps over all have a strictly higher urgency than player k.
In the following example the algorithm is applied to an instance of a SoSi sequencing game. In this example we use the concept of composed costs per time unit and composed processing times, where the composed costs per time unit \(\alpha _U\) and the composed processing time \(p_U\) for a coalition \(U \in 2^N\) are defined by
$$\begin{aligned} \alpha _U=\sum _{j \in U}{\alpha _j} \end{aligned}$$
and
$$\begin{aligned} p_U=\sum _{j \in U}{p_j}, \end{aligned}$$
respectively.
Example 3.2
Consider a one-machine sequencing situation \((N, \sigma _0, p, \alpha )\) with \(S \subseteq N\) such that \(S=\{1,2, \dots , 10\}\). In Fig. 4 an illustration can be found of initial order \(\sigma _0\) together with all relevant data on the cost coefficients and processing times (the numbers above and below the players, respectively). The completion times of the players with respect to this initial order are also indicated in the figure (bottom line in bold).
In the preprocessing step of the algorithm, processing order \(\sigma \) is set to processing order \(\sigma _0^S\) (see Fig. 5) and we initialize \(v(S):=187\), the cost savings obtained in this preprocessing step.
Next, the players in S are considered in reverse order with respect to \(\sigma _0^S\) and the algorithm starts with the last player of the penultimate component, which is player 6.
Player 6: if player 6 is moved to the last component, then the position of player 6 should be behind player 7 (since the players in the components must stay in weakly decreasing order with respect to their urgencies, see property (i) of the algorithm). The resulting cost savings are 31.
Hence, we update processing order \(\sigma \) by moving player 6 to the position directly behind player 7 (see Fig. 6) and we set \(v(S):=187+31=218\).
Player 5: according to the given urgencies, player 5 should be moved to the position directly behind player 6 if he is moved to a later component. The resulting cost savings are 30.
Hence, we update processing order \(\sigma \) by moving player 5 to the position directly behind player 6 (see Fig. 7) and we set \(v(S):=218+30=248\).
Player 4: since all followers of player 4 who are members of S have a lower urgency, it is impossible to reduce the total costs by moving player 4 to a different position (see Proposition 3.1). Hence, \(\sigma \) and v(S) are not changed.
Player 1: there are two components behind player 1. If player 1 is moved to a different component, then the position of player 1 should be either directly behind player 4 or directly behind player 10. The resulting cost savings are 18 and 21, respectively. Hence, player 1 is moved behind player 10 (see property (ii) of the algorithm). Processing order \(\sigma \) is updated (see Fig. 8) and v(S) is increased by 21, so \(v(S):=269\).
Player 2: as in the previous step, there are again two possibilities, namely moving behind player 4 with cost savings 9 or behind player 10 with cost savings 6. Hence, it is most beneficial to move player 2 behind player 4. Processing order \(\sigma \) is updated (see Fig. 9) and v(S) is increased by 9, so \(v(S):=278\).
Player 3: there are two components behind player 3. Note that all players in the last component have a lower urgency than player 3. Therefore, it is impossible to reduce the total costs by moving player 3 to the last component. If player 3 is moved to the second component, then the position of player 3 should be directly behind player 4. The resulting cost savings are \(-21\) and thus moving player 3 to the second component will not reduce the total costs. Hence, the order depicted in Fig. 9 is the optimal processing order \(\sigma _S\) for coalition S obtained by the algorithm. Furthermore, \(v(S)=278\). \(\triangle \)
The following proposition, which will frequently be used later on, provides a basic property of composed costs per time unit and composed processing times. Namely, if every player in a set of players U is individually more urgent than a specific player i, then the composed job U as a whole is also more urgent than player i.
Proposition 3.2
Let \(U \subsetneq N\) with \(U \ne \emptyset \) and let \(i \in N \backslash U\). If \(u_i < u_j\) for all \(j \in U\), then
$$\begin{aligned} u_i < \frac{\alpha _U}{p_U}, \end{aligned}$$
or equivalently,
$$\begin{aligned} \alpha _ip_U < \alpha _Up_i. \end{aligned}$$
Proof
Assume \(u_i< u_j\) for all \(j \in U\), i.e., \(\alpha _ip_j<\alpha _jp_i\) for all \(j \in U\). By adding these |U| inequalities we get \(\alpha _i\sum _{j \in U}{p_j} < p_i\sum _{j \in U}{\alpha _j}\), i.e., \(\alpha _ip_U < \alpha _Up_i\).
\(\square \)
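Proposition 3.2 is also easy to check numerically: the composed urgency \(\alpha _U/p_U\) is a weighted average of the individual urgencies in U. The sketch below (hypothetical data) verifies the implication exhaustively for a small player set, using exact rational arithmetic:

```python
# Numerical check of Proposition 3.2: if player i is strictly less urgent
# than every player in U, then i is also less urgent than the composed job
# U with alpha_U = sum of the alphas and p_U = sum of the processing times.
# Fractions avoid floating-point comparisons; the data is hypothetical.
from fractions import Fraction
from itertools import combinations

p     = {1: 3, 2: 2, 3: 1, 4: 4}
alpha = {1: 4, 2: 6, 3: 5, 4: 3}

def urgency(i):
    return Fraction(alpha[i], p[i])

def composed_urgency(U):
    return Fraction(sum(alpha[j] for j in U), sum(p[j] for j in U))

for r in (1, 2, 3):
    for U in combinations(p, r):
        for i in set(p) - set(U):
            if all(urgency(i) < urgency(j) for j in U):
                assert urgency(i) < composed_urgency(U)
```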
The following lemma compares the processing orders that are obtained from the algorithm with respect to coalition S and coalition \(S \cup \{i\}\), in case player \(i \in N \backslash S\) is the only player in the component of \(S \cup \{i\}\) with respect to \(\sigma _0\). This lemma will be the driving force behind Theorem 3.4, which in turn is the crux for proving convexity of SoSi sequencing games.
Lemma 3.3
Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \subsetneq N\) with \(S \ne \emptyset \) and let \(i \in N \backslash S\) be such that \((S \cup \{i\})_{c(i,S \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\). Then, for all \(k \in S\) we have
$$\begin{aligned} c(k,S \cup \{i\},\sigma _{S \cup \{i\}}) \ge c(k,S \cup \{i\},\sigma _S). \end{aligned}$$
Proof
See Appendix A. \(\square \)
From the previous lemma it follows that if one wants to determine an optimal processing order of a coalition in a SoSi sequencing game, then the information of optimal processing orders of specific subcoalitions can be used. More precisely, if one wants to know the optimal processing order \(\sigma _{S \cup \{i\}}\) derived by the algorithm for a coalition \(S \cup \{i\}\) with \(i \not \in S\) and i being the only player in its component in \(\sigma _0\), then it does not matter whether one takes \(\sigma _0\) or \(\sigma _S\) as the initial processing order, as stated in the following theorem.
Since the initial order will be varied we need some additional notation. We denote the obtained processing order after the complete run of the algorithm for one-machine sequencing situation \((N,\sigma ,p,\alpha )\) with initial order \(\sigma \) and coalition S by \(\text {Alg}((N, \sigma ,p, \alpha ),S)\). Hence, \(\text {Alg}((N, \sigma _0,p, \alpha ),S)=\sigma _S\).
Theorem 3.4
Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \subsetneq N\) with \(S \ne \emptyset \) and let \(i \in N \backslash S\) be such that \((S \cup \{i\})_{c(i,S \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\). Then,
$$\begin{aligned} \sigma _{S \cup \{i\}} = \text {Alg}((N, \sigma _S, p, \alpha ),S \cup \{i\}). \end{aligned}$$
Proof
We start by proving that the minimal costs for coalition \(S \cup \{i\}\) in the sequencing situation \((N, \sigma _0, p, \alpha )\) are equal to the minimal costs for coalition \(S \cup \{i\}\) in the sequencing situation \((N, \sigma _S, p, \alpha )\). Then, we show that the two corresponding sets of optimal processing orders are equal. Finally, the fact that the algorithm always selects a unique processing order among the set of all optimal processing orders (property (iv)) completes the proof.
Note that \(\mathcal {A}(\sigma _S,S \cup \{i\}) \subseteq \mathcal {A}(\sigma _0,S \cup \{i\})\) and thus
$$\begin{aligned} \min _{\sigma \in \mathcal {A}(\sigma _S,S \cup \{i\})}{\sum _{k \in S \cup \{i\}}{\alpha _kC_k(\sigma )}} \ge \min _{\sigma \in \mathcal {A}(\sigma _0,S \cup \{i\})}{\sum _{k \in S \cup \{i\}}{\alpha _kC_k(\sigma )}}. \end{aligned}$$ (2)
Moreover, from Lemma 3.3 we know that for all \(k \in S \cup \{i\}\) we have \(c(k,S \cup \{i\},\sigma _{S \cup \{i\}}) \ge c(k,S \cup \{i\},\sigma _S)\) and thus \(\sigma _{S \cup \{i\}} \in \mathcal {A}(\sigma _S,S \cup \{i\})\). As a consequence, since
$$\begin{aligned} \sum _{k \in S \cup \{i\}}{\alpha _kC_k(\sigma _{S \cup \{i\}})}=\min _{\sigma \in \mathcal {A}(\sigma _0,S \cup \{i\})}{\sum _{k \in S \cup \{i\}}{\alpha _kC_k(\sigma )}}, \end{aligned}$$
we have together with (2) that
$$\begin{aligned} \min _{\sigma \in \mathcal {A}(\sigma _S,S \cup \{i\})}{\sum _{k \in S \cup \{i\}}{\alpha _kC_k(\sigma )}}=\min _{\sigma \in \mathcal {A}(\sigma _0,S \cup \{i\})}{\sum _{k \in S \cup \{i\}}{\alpha _kC_k(\sigma )}}. \end{aligned}$$ (3)
Let \(\mathcal {O}(\sigma _0,S \cup \{i\})\) and \(\mathcal {O}(\sigma _S,S \cup \{i\})\) denote the set of optimal processing orders for coalition \(S \cup \{i\}\) in sequencing situations \((N, \sigma _0, p, \alpha )\) and \((N, \sigma _S, p, \alpha )\), respectively. We will show \(\mathcal {O}(\sigma _0,S \cup \{i\}) = \mathcal {O}(\sigma _S,S \cup \{i\})\).
First, take \(\sigma ^* \in \mathcal {O}(\sigma _S,S \cup \{i\})\). Since \(\mathcal {A}(\sigma _S,S \cup \{i\}) \subseteq \mathcal {A}(\sigma _0,S \cup \{i\})\), we have \(\sigma ^* \in \mathcal {A}(\sigma _0,S \cup \{i\})\). Moreover, due to (3), we also have \(\sigma ^* \in \mathcal {O}(\sigma _0,S \cup \{i\})\).
Second, take \(\sigma ^* \in \mathcal {O}(\sigma _0,S \cup \{i\})\). From property (iv) of the algorithm we know that for all \(k \in S \cup \{i\}\) we have \(c(k, S \cup \{i\},\sigma ^*) \ge c(k,S \cup \{i\},\sigma _{S \cup \{i\}})\). Therefore, together with \(c(k,S \cup \{i\},\sigma _{S \cup \{i\}}) \ge c(k,S \cup \{i\},\sigma _S)\) from Lemma 3.3, we know \(\sigma ^* \in \mathcal {A}(\sigma _S,S \cup \{i\})\). Consequently, together with (3), we can conclude \(\sigma ^* \in \mathcal {O}(\sigma _S,S \cup \{i\})\). Hence, we have
$$\begin{aligned} \mathcal {O}(\sigma _0,S \cup \{i\})=\mathcal {O}(\sigma _S,S \cup \{i\}). \end{aligned}$$ (4)
Finally, since among all optimal processing orders the algorithm chooses the order in which every player is in a component as far to the left as possible, and because the algorithm chooses a fixed order within the components (property (iv)), we have \(\sigma _{S \cup \{i\}} = \text {Alg}((N, \sigma _S,p, \alpha ),S \cup \{i\})\). \(\square \)
It readily follows from the previous theorem that no player in a component to the right of player i with respect to \(\sigma _S\) is moved to a different component when applying the algorithm to one-machine sequencing situation \((N,\sigma _S,p,\alpha )\) and coalition \(S \cup \{i\}\). This is stated in the following proposition.
Proposition 3.5
Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \subsetneq N\) with \(S \ne \emptyset \) and let \(i \in N \backslash S\) be such that \((S \cup \{i\})_{c(i,S \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\). Then for all \(k \in S \cap F(\sigma _S,i)\) we have
$$\begin{aligned} c(k,S \cup \{i\},\sigma _{S \cup \{i\}})=c(k,S \cup \{i\},\sigma _S). \end{aligned}$$
The next proposition states that all players in a component to the left of player i with respect to \(\sigma _S\) are, if they are moved by the algorithm, moved componentwise at least as far as the original component of player i in \(\sigma _0\). As a consequence, all players that are in \(\sigma _{S \cup \{i\}}\) to the left of the original component of player i in \(\sigma _0\), are not moved by the algorithm when going from \(\sigma _S\) to \(\sigma _{S \cup \{i\}}\).
Proposition 3.6
Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \subsetneq N\) with \(S \ne \emptyset \) and let \(i \in N \backslash S\) be such that \((S \cup \{i\})_{c(i,S \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\).
- (i) For all \(k \in S\) with \(c(k,S \cup \{i\}, \sigma _{S \cup \{i\}})> c(k,S \cup \{i\},\sigma _S)\) we have
$$\begin{aligned} c(k,S \cup \{i\},\sigma _{S \cup \{i\}}) \ge c(i,S \cup \{i\},\sigma _0), \end{aligned}$$
- (ii) For all \(k \in S\) with \(c(k, S \cup \{i\}, \sigma _{S \cup \{i\}})<c(i,S \cup \{i\},\sigma _0)\) we have
$$\begin{aligned} c(k, S \cup \{i\}, \sigma _{S \cup \{i\}})=c(k,S \cup \{i\}, \sigma _S). \end{aligned}$$
The previous proposition follows directly from the following, more technical, lemma. This lemma shows that, when applying the algorithm to one-machine sequencing situation \((N,\sigma _S,p,\alpha )\) and coalition \(S \cup \{i\}\), once a predecessor of player i with respect to \(\sigma _S\) is considered by the algorithm, moving this player to a position that is to the left of the original component of player i in \(\sigma _0\) is never beneficial.
Lemma 3.7
Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \subsetneq N\) with \(S \ne \emptyset \) and let \(i \in N \backslash S\) be such that \((S \cup \{i\})_{c(i,S \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\). Let \(m \in S \cap P(\sigma _S,i)\) and \(l \in S \cap F(\tau _m,m)\) with \(c(l, S \cup \{i\}, \tau _m)<c(i, S \cup \{i\},\sigma _0)\). Then
where \(\tau _m\) denotes the processing order during the run of the algorithm for one-machine sequencing situation \((N,\sigma _S,p,\alpha )\) and coalition \(S \cup \{i\}\) just before player m is considered.
Proof
See Appendix B. \(\square \)
4 On the convexity of SoSi sequencing games
A game \(v \in \text {TU}^N\) is called convex if
$$\begin{aligned} v(S \cup \{i\}) - v(S) \le v(T \cup \{i\}) - v(T) \end{aligned}$$(5)
for all \(S, T \in 2^N \backslash \{\emptyset \}\), \(i \in N\) such that \(S \subset T \subseteq N \backslash \{i\}\), i.e., the incentive for joining a coalition increases as the coalition grows. Using recursive arguments it can be seen that in order to prove convexity it is sufficient to show (5) for the case \(|T|=|S|+1\), which boils down to
$$\begin{aligned} v(S \cup \{i\}) - v(S) \le v(S \cup \{j\} \cup \{i\}) - v(S \cup \{j\}) \end{aligned}$$(6)
for all \(S \in 2^N \backslash \{\emptyset \}\), \(i,j \in N\) and \(i \ne j\) such that \(S \subseteq N \backslash \{i,j\}\).
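Condition (6) is easy to check by brute force for small games. The following sketch (in Python, with two small hypothetical games that are not taken from the paper) verifies the inequality for all triples \(S, i, j\) with \(S\) nonempty:

```python
from itertools import combinations

def is_convex(v, N):
    # Check condition (6): v(S+{i}) - v(S) <= v(S+{i,j}) - v(S+{j})
    # for all nonempty S contained in N minus {i, j} and all i != j.
    # v maps frozensets of players to values.
    for i in N:
        for j in N:
            if i == j:
                continue
            rest = [k for k in N if k not in (i, j)]
            for r in range(1, len(rest) + 1):
                for T in combinations(rest, r):
                    S = frozenset(T)
                    if v[S | {i}] - v[S] > v[S | {i, j}] - v[S | {j}]:
                        return False
    return True

N = [1, 2, 3]
subsets = [frozenset(T) for r in range(4) for T in combinations(N, r)]
square = {S: len(S) ** 2 for S in subsets}     # supermodular, hence convex
capped = {S: min(len(S), 2) for S in subsets}  # concave in |S|, not convex
assert is_convex(square, N) and not is_convex(capped, N)
```

For a SoSi sequencing game one would first tabulate \(v\) with the polynomial-time algorithm of Musegaas et al. (2015); the two games above merely illustrate the condition.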
The main result of this paper is the following theorem.
Theorem 4.1
Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation and let (N, v) be the corresponding SoSi sequencing game. Then, (N, v) is convex.
Before presenting the formal proof of our main result, we highlight some of its important aspects beforehand. Using (6), let \(S \in 2^N \backslash \{\emptyset \}\), \(i,j \in N\) and let \(i \ne j\) be such that \(S \subseteq N \backslash \{i,j\}\).
Note that without loss of generality we can assume
-
Assumption 1: \(\sigma _0(j) < \sigma _0(i)\).
-
Assumption 2: \((S \cup \{j\} \cup \{i\})_{c(j,S \cup \{j\} \cup \{i\},\sigma _0)}^{\sigma _0}=\{j\}\) and \((S \cup \{j\} \cup \{i\})_{c(i,S \cup \{j\} \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\).
The first assumption is harmless because of the symmetric roles of players i and j in (6). The second assumption states that players i and j are each the only player of \(S \cup \{j\} \cup \{i\}\) in their respective components with respect to \(\sigma _0\). This is no real restriction since it is always possible to add dummy players with zero processing times and zero costs per time unit (a more formal explanation can be found in Appendix C). This assumption facilitates the comparison of the marginal contribution of player i to coalition S with the marginal contribution of player i to coalition \(S \cup \{j\}\). For example, if one determines the optimal processing order for coalition \(S \cup \{i\}\) via initial processing order \(\sigma _S\) and player i is the only player of \(S \cup \{j\} \cup \{i\}\) in its component with respect to \(\sigma _0\) (and thus also with respect to \(\sigma _S\)), then the players of coalition \(S \cup \{i\}\) are in every component already ordered with respect to their urgency and thus the preprocessing step of the algorithm can be skipped. As a consequence, the marginal contribution of player i to coalition S can be written as the sum of the positive cost differences of the players who are moved by the algorithm to a different component.
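The urgency ordering invoked here is Smith's rule from the introduction: processing jobs in weakly decreasing urgency \(\alpha _k / p_k\) minimizes total cost. A minimal sketch with hypothetical data (not an instance from the paper):

```python
from itertools import permutations

def total_cost(order, p, a):
    # sum over players of (cost coefficient) * (completion time),
    # with jobs processed back to back from time 0
    t, c = 0, 0
    for k in order:
        t += p[k]
        c += a[k] * t
    return c

p = {1: 2, 2: 1, 3: 4}   # processing times
a = {1: 4, 2: 3, 3: 2}   # cost coefficients; urgencies a/p: 2.0, 3.0, 0.5
smith = sorted(p, key=lambda k: -a[k] / p[k])          # gives [2, 1, 3]
best = min(total_cost(o, p, a) for o in permutations(p))
assert total_cost(smith, p, a) == best == 29           # Smith's rule is optimal
```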
In order to denote the different types of players that are moved, we introduce the following notation. For \(U \in 2^N \backslash \{\emptyset \}\) and \(k \in N\) such that \(U \subseteq N \backslash \{k\}\) and \((U \cup \{k\})_{c(k,U \cup \{k\},\sigma _0)}^{\sigma _0}=\{k\}\), let \(M^k(U)\) denote the set of players who are moved to a different component during the run of the algorithm with respect to one-machine sequencing situation \((N, \sigma _U, p, \alpha )\) and coalition \(U \cup \{k\}\). Since the algorithm only moves players to components that are to the right of their original components in \(\sigma _U\), we have
As the algorithm only moves the players of the coalition \(U \cup \{k\}\) and all players outside this coalition are not moved, we have
Moreover, from Propositions 3.5 and 3.6 it follows, respectively, that
and
for all \(l \in M^k(U)\).
In order to prove Theorem 4.1 we need to compare the marginal contribution of player i to coalition S with the marginal contribution of player i to coalition \(S \cup \{j\}\). As argued above, both marginal contributions can be written as the sum of the positive cost differences of the players who are moved by the algorithm to a different component. In order to compare those cost differences more easily, we first partition the players in \(M^i(S)\), based on their positions in the processing orders \(\sigma _S\) and \(\sigma _{S \cup \{j\}}\), into four subsets. Second, we derive from \(\sigma _S\) a special processing order \(\overline{\sigma }\) in such a way that all players from \(M^i(S)\) are in the same component in \(\overline{\sigma }\) as in \(\sigma _{S \cup \{j\}}\). The convexity proof is completed by adequately comparing all positive cost differences.
Proof of Theorem 4.1
Let \(S \in 2^N \backslash \{\emptyset \}\), \(i,j \in N\) and \(i \ne j\) such that \(S \subseteq N \backslash \{i,j\}\). We will prove
$$\begin{aligned} v(S \cup \{i\}) - v(S) \le v(S \cup \{j\} \cup \{i\}) - v(S \cup \{j\}). \end{aligned}$$(10)
We partition the players in \(M^i(S)\), based on their positions in the processing orders \(\sigma _S\) and \(\sigma _{S \cup \{j\}}\), into four subsets. First, note that from (7) it follows that \(M^i(S) \subseteq S \cup \{i\}\) and thus \(j \not \in M^i(S)\). From (8) it follows that all players in \(M^i(S)\) are in \(\sigma _S\) to the left of player i, or are player i himself. By Assumption 1, player j is to the left of player i in \(\sigma _0\) (and thus also in \(\sigma _S\)). So, we can split \(M^i(S)\) into the following two disjoint sets:
-
\(M_1^i(S)\): the set of players in \(M^i(S)\) who are in \(\sigma _S\) to the left of player j,
-
\(M_2^i(S)\): the set of players in \(M^i(S)\) who are in \(\sigma _S\) between player j and player i, or player i himself.
Based on the position in \(\sigma _{S \cup \{j\}}\), we can split \(M_1^i(S)\) into another three disjoint subsets:
-
\(M_{1a}^i(S)\): the set of players in \(M_1^i(S)\) who are in \(\sigma _{S \cup \{j\}}\) to the left of the original component of player j,
-
\(M_{1b}^i(S)\): the set of players in \(M_1^i(S)\) who are in \(\sigma _{S \cup \{j\}}\) between the original components of player j and player i, or in the original component of player j,
-
\(M_{1c}^i(S)\): the set of players in \(M_1^i(S)\) who are in \(\sigma _{S \cup \{j\}}\) to the right of the original component of player i.
From Proposition 3.5 it follows that all players in \(M_2^i(S)\) are in \(\sigma _{S \cup \{j\}}\) between the original components of player j and player i, so we do not further split \(M_2^i(S)\) into subsets. We now have a partition of \(M^i(S)\) into four subsets, namely \(\{M_{1a}^i(S), M_{1b}^i(S), M_{1c}^i(S), M_2^i(S)\}\). Moreover, if \( i \in M^i(S)\) then \(i \in M_2^i(S)\).
The definition of the partition of \(M^i(S)\) into four subsets explains the positions of the corresponding players in the processing orders \(\sigma _S\) and \(\sigma _{S \cup \{j\}}\). The following four claims indicate how the partition also determines the positions in the two other processing orders \(\sigma _{S \cup \{i\}}\) and \(\sigma _{S \cup \{j\} \cup \{i\}}\). For notational convenience, we denote \(c(k,S \cup \{i\} \cup \{j\}, \sigma )\) by \(c(k,\sigma )\) for every \(k \in S \cup \{i\} \cup \{j\}\) and \(\sigma \in \Pi (N)\).
-
Claim 1 \(c(k,\sigma _{S \cup \{j\} \cup \{i\}}) = c(k,\sigma _{S \cup \{i\}}) \ge c(i,\sigma _S)\) for all \(k \in M^i(S)\).
-
Claim 2 \(c(k,\sigma _{S \cup \{j\} \cup \{i\}}) = c(k,\sigma _{S \cup \{j\}})\) for all \(k \in M^i_{1c}(S)\).
-
Claim 3 \(c(k,\sigma _S) = c(k,\sigma _{S \cup \{j\} })\) for all \(k \in M_2^i(S)\).
-
Claim 4 \(c(k,\sigma _S) = c(k,\sigma _{S \cup \{j\} })\) for all \(k \in M_{1a}^i(S)\).
The proofs of these four claims can be found in Appendix D. Figure 10 illustrates for all four partition elements of \(M^i(S)\) its position with respect to the original components of player i and player j in the four different processing orders. The solid arrows give the original components and/or the actual positions of player i and j. The dotted arrows give possible positions of player i or j.
We define \(\overline{\sigma } \in \Pi (N)\) as the unique urgency respecting processing order that satisfies
-
(i)
for all \(k \in M^i(S)\):
$$\begin{aligned} c(k,\overline{\sigma })=c(k,\sigma _{S \cup \{j\}}), \end{aligned}$$(11) -
(ii)
for all \(k \in S \backslash M^i(S)\):
$$\begin{aligned} c(k,\overline{\sigma })=c(k,\sigma _S), \end{aligned}$$ -
(iii)
for all \(k, l \in S\) with \(c(k,\overline{\sigma })=c(l,\overline{\sigma })\):
$$\begin{aligned} u_k=u_l, \sigma _0(k)<\sigma _0(l) \Rightarrow \overline{\sigma }(k) < \overline{\sigma }(l). \end{aligned}$$
Note that conditions (i) and (ii) determine the components for the players in S. Next, the urgency respecting requirement determines the order within the components for the players with different urgencies. Finally, in case there is a tie for the urgency of two players in the same component, item (iii) states a tiebreaking rule. As a consequence, due to this tiebreaking rule, we have that \(\overline{\sigma }\) is unique.
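The construction of \(\overline{\sigma }\) from prescribed components can be sketched as a single sort with a lexicographic key: component index first, then decreasing urgency, then the tiebreaking rule of condition (iii). The data below are hypothetical:

```python
def urgency_respecting(component, p, a, pos0):
    # component: player -> prescribed component index (conditions (i)-(ii));
    # pos0: player -> position in the initial order sigma_0.
    # Within a component, players appear in weakly decreasing urgency a/p;
    # urgency ties are broken by the initial order (condition (iii)),
    # which makes the resulting order unique.
    return sorted(component, key=lambda k: (component[k], -a[k] / p[k], pos0[k]))

p = {1: 1, 2: 2, 3: 1}
pos0 = {1: 1, 2: 2, 3: 3}
comp = {1: 0, 2: 0, 3: 1}
# urgencies 2.0, 2.0, 1.0: tie between players 1 and 2, sigma_0 decides
assert urgency_respecting(comp, p, {1: 2, 2: 4, 3: 1}, pos0) == [1, 2, 3]
# urgencies 2.0, 3.0, 1.0: player 2 is more urgent and comes first
assert urgency_respecting(comp, p, {1: 2, 2: 6, 3: 1}, pos0) == [2, 1, 3]
```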
Note that \(\overline{\sigma }\) can be considered as a temporary processing order when going from \(\sigma _S\) to \(\sigma _{S \cup \{i\}}\) (cf. Fig. 11). The processing order \(\overline{\sigma }\) is derived from processing order \(\sigma _S\) in such a way that all players from \(M^i(S)\) occupy the same component in \(\overline{\sigma }\) as in \(\sigma _{S \cup \{j\}}\). From Claims 3 and 4 it follows that the players in \(M^i_{1a}(S)\) and \(M^i_2(S)\) are in the same component in \(\sigma _S\) and \(\sigma _{S \cup \{j\}}\), so those players do not need to be moved. Hence, only the players in \(M^i_{1b}(S)\) and \(M^i_{1c}(S)\) need to be moved: starting from \(\sigma _S\), we move all players in \(M^i_{1b}(S)\) and \(M^i_{1c}(S)\) to the components they occupy in \(\sigma _{S \cup \{j\}}\). Note that since the tiebreaking rule in condition (iii) is the same as in property (iv) of the algorithm, the mutual order of the players in \(M^i(S)\) is the same in \(\overline{\sigma }\) as in \(\sigma _{S \cup \{j\}}\).
An illustration of the position of the players in \(M^i(S)\) in \(\overline{\sigma }\) can be found in Fig. 12. Note that since \(i \not \in S \cup \{j\}\) it follows that \(c(i,\sigma _S)=c(i,\sigma _{S \cup \{j\}})\). Moreover, we note that \(\overline{\sigma }\) and \(\sigma _{S \cup \{j\}}\) are not necessarily equal to each other as the players in \(M^j(S) \backslash M^i(S)\) are in \(\overline{\sigma }\) and \(\sigma _{S \cup \{j\}}\) in different components. However, as the players in \(M^j(S)\) will be moved to a component to the right when going from \(\sigma _S\) to \(\sigma _{S \cup \{j\}}\), we have
for all \(k \in S \cup \{j\} \cup \{i\}\).
Now we consider the transition from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\) and its corresponding cost differences. Since the players in \(M_{1c}^i(S)\) already occupy in \(\overline{\sigma }\) the components they have in \(\sigma _{S \cup \{i\}}\), only the players in \(M^i_{1a}(S)\), \(M^i_{1b}(S)\) and \(M^i_{2}(S)\) need to be moved to a component to the right when going from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\) (see also Fig. 11). We go from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\) by considering the players in \(M^i_{1a}(S)\), \(M^i_{1b}(S)\) and \(M^i_2(S)\) in the reverse of their order in \(\overline{\sigma }\), i.e., the players are considered from right to left. For \(k \in M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\), denote the processing order just before player k is moved by \(\overline{\tau }_k\) and let \(\overline{r}_k\) denote the player that player k will be moved behind. The cost difference for coalition \(S \cup \{i\}\) due to moving this player, when going from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\), is denoted by \(\overline{\delta }_k\), i.e.,
Similarly, we can write the marginal contribution of player i to coalition \(S \cup \{j\}\) as the sum of positive cost differences of the players in \(M^i(S \cup \{j\})\). We go from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\) by considering the players in \(M^i(S \cup \{j\})\) in an order reverse to the order they are in \(\sigma _{S \cup \{j\}}\), i.e., the players are considered from the right to the left. We note that since the mutual order of the players in \(M^i(S)\) is the same in \(\overline{\sigma }\) and \(\sigma _{S \cup \{j\}}\), the order in which the players in \(M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\) are considered when going from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\) is the same as the order in which they are considered when going from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\). For \(k \in M^i(S \cup \{j\})\), denote the processing order just before player k is moved by \(\tau _{k}\) and let \(r_k\) denote the player that player k will be moved behind. The cost difference for coalition \(S \cup \{j\} \cup \{i\}\) due to moving this player, when going from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\), is denoted by \(\delta _k\), i.e.,
From (12) together with the fact that the players in \(M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\) are moved to the same component in \(\sigma _{S \cup \{i\}}\) and \(\sigma _{S \cup \{j\} \cup \{i\}}\), and the fact that the players in \(M^j(S) \backslash M^i(S)\) are moved to a component to the right, we have
for all \(k \in M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\) and \(l \in S \cup \{j\} \cup \{i\}\).
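The cost differences \(\overline{\delta }_k\) and \(\delta _k\) are both of the same simple form: the change in a coalition's total cost when one player is moved directly behind another. A minimal sketch with hypothetical data (not an instance from the paper):

```python
def coalition_cost(order, coalition, p, a):
    # total cost of the coalition members under the given processing order
    t, c = 0, 0
    for k in order:
        t += p[k]
        if k in coalition:
            c += a[k] * t
    return c

def move_behind(order, k, r):
    # processing order obtained from `order` by moving player k behind player r
    o = [x for x in order if x != k]
    o.insert(o.index(r) + 1, k)
    return o

p = {1: 3, 2: 1, 3: 1}
a = {1: 1, 2: 3, 3: 3}           # urgencies 1/3, 3, 3
tau = [1, 2, 3]                  # order just before player 1 is moved
coalition = {1, 2, 3}
delta = coalition_cost(tau, coalition, p, a) \
        - coalition_cost(move_behind(tau, 1, 3), coalition, p, a)
assert delta == 16               # moving the non-urgent player 1 saves cost
```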
The following claim states that the cost savings obtained from moving a player in \(M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\) when going from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\) are at most the cost savings obtained from moving the same player when going from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\).
– Claim 5 \(\overline{\delta }_k \le \delta _k\) for all \(k \in M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\).
Proof
The proof can be found in Appendix E. \(\square \)
We are now ready to prove (10). Note that a detailed explanation of the subsequent equalities and inequalities can be found after the equations.
which proves (10).
Explanations
-
(i)
The extra worth that is obtained by adding player i to coalition S can be split into two parts. The first part is due to the fact that player i joins the coalition and it represents the cost savings for player i in processing order \(\sigma _S\) compared to \(\sigma _0\). The completion time of player i is reduced by the sum of the processing times of the players that jumped over player i when going from \(\sigma _0\) to \(\sigma _S\) without moving any players. The second part represents the cost savings for coalition \(S \cup \{i\}\) by additionally moving players when going from \(\sigma _S\) to the optimal processing order \(\sigma _{S \cup \{i\}}\).
-
(ii)
The optimal processing order \(\sigma _{S \cup \{i\}}\) can be obtained from \(\sigma _S\) via \(\overline{\sigma }\) where some players are already (partially) moved to the right.
-
(iii)
The cost difference for coalition \(S \cup \{i\}\) when going from \(\sigma _S\) to \(\overline{\sigma }\) can be split into two parts: the cost difference for coalition S and the cost difference for player i. By the definition of \(\overline{\sigma }\) and since \(i \not \in S \cup \{j\}\), player i is not moved when going from \(\sigma _S\) to \(\overline{\sigma }\) and the completion time of player i is reduced by the sum of the processing times of the players that jumped over player i when going from \(\sigma _S\) to \(\overline{\sigma }\), i.e., the sum of the processing times of the players in \(M^i_{1c}(S)\).
-
(iv)
Processing order \(\sigma _S\) is optimal for coalition S and thus \(C(\sigma _S,S) -C(\overline{\sigma },S) \le 0\).
-
(v)
This follows from the definition of \(\overline{\delta }_k\).
-
(vi)
This follows from Claim 5.
-
(vii)
This follows from \((M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)) \subseteq M^i(S \cup \{j\})\) (cf. Fig. 10) and \(\delta _k > 0\) for all \(k \in M^i(S \cup \{j\})\) due to property (ii) of the algorithm.
-
(viii)
This follows from the definition of \(\delta _k\).
-
(ix)
This follows from \(M^i_{1c}(S) \subseteq (P(\sigma _S,i) \cap F(\sigma _{S \cup \{j\}},i))\) (cf. Fig. 10).
-
(x)
The group of players that jump over player i when going from \(\sigma _0\) to \(\sigma _{S \cup \{j\}}\) can be split into two groups: the group of players that jumped over player i when going from \(\sigma _0\) to \(\sigma _S\) and the group of players that were positioned in front of player i in \(\sigma _S\) but jumped over player i when going from \(\sigma _S\) to \(\sigma _{S \cup \{j\}}\). Hence, \(\{P(\sigma _0,i) \cap F(\sigma _S,i), P(\sigma _S,i) \cap F(\sigma _{S \cup \{j\}},i)\}\) is a partition of \(P(\sigma _0,i) \cap F(\sigma _{S \cup \{j\}},i)\).
-
(xi)
Similar to the explanation in (i).
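The set partition used in explanation (x) can be illustrated on small hypothetical orders in which players only jump forward over player i:

```python
def P(order, i):
    # predecessors of player i in the processing order
    return set(order[:order.index(i)])

def F(order, i):
    # followers of player i in the processing order
    return set(order[order.index(i) + 1:])

i = 4
sigma_0  = [1, 2, 3, 4, 5]      # initial order
sigma_S  = [1, 2, 4, 3, 5]      # player 3 jumped over player i = 4
sigma_Sj = [1, 4, 2, 3, 5]      # player 2 jumped over player i as well

jumped_first = P(sigma_0, i) & F(sigma_S, i)    # jumped going sigma_0 -> sigma_S
jumped_later = P(sigma_S, i) & F(sigma_Sj, i)   # jumped going sigma_S -> sigma_Sj
assert jumped_first | jumped_later == P(sigma_0, i) & F(sigma_Sj, i)
assert not (jumped_first & jumped_later)        # the two groups are disjoint
```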
To conclude, we have shown \(v(S \cup \{i\}) - v(S) \le v(S \cup \{j\} \cup \{i\}) - v(S \cup \{j\})\) which proves the convexity of SoSi sequencing games. \(\square \)
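As a sanity check of condition (6), the older convexity result for classical sequencing games (Curiel et al. 1989), where a coalition may only reorder maximal blocks of consecutive members, can be verified by brute force on a small hypothetical instance; SoSi games themselves would require the algorithm of Musegaas et al. (2015) to tabulate \(v\), which is not reproduced here:

```python
from itertools import combinations, permutations

def cost(order, p, a):
    t, c = 0, 0
    for k in order:
        t += p[k]
        c += a[k] * t
    return c

def v_classical(S, sigma0, p, a):
    # Classical sequencing game: each maximal block of consecutive members
    # of S in sigma0 may be reordered freely; v(S) is the maximal saving.
    # A block's internal savings do not depend on its start time.
    saving, block = 0, []
    for k in sigma0 + [None]:
        if k is not None and k in S:
            block.append(k)
        else:
            if len(block) > 1:
                saving += cost(block, p, a) - min(
                    cost(o, p, a) for o in permutations(block))
            block = []
    return saving

sigma0 = [1, 2, 3, 4]
p = {1: 2, 2: 1, 3: 3, 4: 1}
a = {1: 1, 2: 3, 3: 1, 4: 4}
v = {frozenset(S): v_classical(set(S), sigma0, p, a)
     for r in range(5) for S in combinations(sigma0, r)}

# condition (6) for all i != j and nonempty S contained in N minus {i, j}
convex = all(
    v[S | {i}] - v[S] <= v[S | {i, j}] - v[S | {j}]
    for i in sigma0 for j in sigma0 if i != j
    for r in range(1, 3)
    for S in (frozenset(T) for T in combinations(
        [k for k in sigma0 if k not in (i, j)], r)))
assert convex   # Curiel et al. (1989): classical sequencing games are convex
```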
Notes
Processing order (2 3 1) means that player 2 is in the first position, player 3 in the second position and player 1 in the last position.
Note that this proposition also holds if every < sign is replaced by a >, \(\le \) or \(\ge \) sign.
Note that in case \(c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})=c(i,\sigma _0)\), it is not admissible for the algorithm to move player m to component \((S \cup \{i\})^{\sigma _0,\tau ^S}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})}\) due to requirement (ii) of admissibility, but this is no problem as also in this case the upcoming arguments are still valid.
References
Curiel I, Pederzoli G, Tijs S (1989) Sequencing games. Eur J Oper Res 40:344–351
Curiel I, Potters J, Prasad R, Tijs S, Veltman B (1993) Cooperation in one machine scheduling. Z Oper Res 38:113–129
Musegaas M, Borm P, Quant M (2015) Step out–step in sequencing games. Eur J Oper Res 246:894–906
Slikker M (2006) Relaxed sequencing games have a nonempty core. Nav Res Logist 53:235–242
Smith W (1956) Various optimizers of single-stage production. Nav Res Logist Q 3:59–66
van Velzen B, Hamers H (2003) On the balancedness of relaxed sequencing games. Math Methods Oper Res 57:287–297
Appendices
Appendix A: Proof of Lemma 3.3
In this proof we denote \(c(k,S \cup \{i\},\sigma )\) by \(c(k,\sigma )\) for every \(k \in S \cup \{i\}\) and every \( \sigma \in \Pi (N)\). We prove the lemma with the help of the algorithm. First, note that because player i is the only player in its component in \(\sigma _0\), we have \(\sigma _0^S = \sigma _0^{S \cup \{i\}}\), i.e., the processing orders are the same after the preprocessing step of the algorithm. Therefore, if we go from \(\sigma _0\) to the optimal processing orders \(\sigma _S\) and \(\sigma _{S \cup \{i\}}\), then the steps performed by the algorithm are the same up to the moment that player i is considered. Moreover, since the players are considered in reverse order with respect to \(\sigma ^S_0\), we have
for all \(k \in S \cap F(\sigma _0^S,i)\). Hence, it remains to be proven that also for the players in \(S \cap P(\sigma _0^S,i)\) the lemma is true.
Let player \(m \in S\) be the closest predecessor of player i with respect to \(\sigma _0^S\) for which the lemma is not true, i.e.,
and
for all \(k \in S \cap F(\sigma _0^S,m) \cap P(\sigma _0^S,i)\). We will derive a contradiction as follows. We look at the component that player m is moved to by the algorithm with respect to coalition S. Then, there is a specific player in a component with index at least as high as that component. We show that moving player m behind this specific player is actually more beneficial with respect to coalition \(S \cup \{i\}\), which contradicts the optimality of the algorithm.
Denote the processing order when player m is considered by the algorithm with respect to coalition S by \(\tau ^S\) and with respect to coalition \(S \cup \{i\}\) by \(\tau ^{S \cup \{i\}}\). Let \(r^S\) denote the player that player m will be moved behind with respect to coalition S according to the algorithm. Similarly, let \(r^{S \cup \{i\}}\) denote the player that player m will be moved behind with respect to coalition \(S \cup \{i\}\) according to the algorithm. Note that in case player m is not moved by the algorithm with respect to coalition \(S \cup \{i\}\), then we define player \(r^{S \cup \{i\}}\) as player m. Since \(c(m,\sigma _{S \cup \{i\}}) = c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})\) and \(c(m,\sigma _S)=c(r^S,\tau ^S)\), we have
As we will see later, \((S \cup \{i\})^{\sigma _0,\tau ^S}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})} \ne \emptyset \). Let \(\tilde{r}^S \in (S \cup \{i\})^{\sigma _0,\tau ^S}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})}\) be such that player m would be moved behind this player in case player m is moved to component \((S \cup \{i\})^{\sigma _0,\tau ^S}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})}\) according to the algorithm with respect to coalition S. Note that player \(\tilde{r}^S\) is unique because the algorithm always selects a unique player per component. Also, player \(\tilde{r}^S\) might be player i; in this way we make sure that player \(\tilde{r}^S\) also exists if \(c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})=c(i,\sigma _0)\).Footnote 3 Finally, in case players m and \(r^{S \cup \{i\}}\) coincide (that is, player m is not moved by the algorithm with respect to coalition \(S \cup \{i\}\)), we define player \(\tilde{r}^S\) as player m.
Since player m is moved behind player \(r^S\) and not behind player \(\tilde{r}^S\), we have due to property (v) of the algorithm that
We distinguish between two cases:
-
Case A: \(\{k \in S~|~\sigma _0^S(m)<\sigma _0^S(k)<\sigma _0^S(i)\} =\emptyset \),
-
Case B: \(\{k \in S~|~\sigma _0^S(m)<\sigma _0^S(k)<\sigma _0^S(i)\} \ne \emptyset \).
Case A [\(\{k \in S~|~\sigma _0^S(m)<\sigma _0^S(k)<\sigma _0^S(i)\} = \emptyset \)]
Hence, there are no players of coalition S in between player m and player i in \(\sigma _0^S\). Then, from (14) it follows that for every \(k \in S \cap F(\tau ^S,m)\) we have
Note that this implies that \((S \cup \{i\})^{\sigma _0,\tau ^S}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})}\) is non-empty.
We will prove that moving player m behind player \(r^S\) is more beneficial than moving player m behind player \(r^{S \cup \{i\}}\), i.e., we will prove that
This would imply that the step made by the algorithm for player m when applied on coalition \(S \cup \{i\}\) is not optimal, which contradicts the optimality of the algorithm. Hence, for Case A, it remains to prove (18).
We distinguish from now on between the following four cases:
-
Case A.1: \(i \not \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \not \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},r^S]\),
-
Case A.2: \(i \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \not \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},r^S]\),
-
Case A.3: \(i \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},r^S]\),
-
Case A.4: \(i \not \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},r^S]\).
Case A.1 [\(i \not \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \not \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},r^S]\)]
Note that this case occurs if \(r^{S \cup \{i\}} \ne m\) and player i is not necessarily moved by the algorithm with respect to coalition \(S \cup \{i\}\). Then, it follows from (17) together with the fact \(c(\tilde{r}^S,\tau ^S)=c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})\) that
Case A.2 [\(i \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \not \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},r^S]\)]
Note that this case occurs if \(r^{S \cup \{i\}}=m\) and player i has been moved by the algorithm with respect to coalition \(S \cup \{i\}\) such that \(\tau ^{S \cup \{i\}}(i)>\tau ^{S \cup \{i\}}(r^S)\). Then, using the same arguments as in Case A.1, we have
Case A.3 [\(i \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},r^S]\)]
Note that this case occurs if \(r^{S \cup \{i\}}=m\) and player i has either not been moved by the algorithm with respect to coalition \(S \cup \{i\}\) or it has been moved such that \(\tau ^{S \cup \{i\}}(i)<\tau ^{S \cup \{i\}}(r^S)\). Then, using the same arguments as in Case A.1, we have
Case A.4 [\(i \not \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},r^S]\)]
Note that this case occurs if \(r^{S \cup \{i\}} \ne m\) and player i has been moved by the algorithm with respect to coalition \(S \cup \{i\}\) such that \(\tau ^{S \cup \{i\}}(r^{S \cup \{i\}})<\tau ^{S \cup \{i\}}(i)<\tau ^{S \cup \{i\}}(r^S)\). Then, using the same arguments as in Case A.1, we have
If we can show that \(\alpha _ip_m-\alpha _mp_i>0\), (18) follows.
Let \(\hat{r}^S\) be the direct predecessor of player i in \(\tau ^{S \cup \{i\}}\). Since player i is moved behind player \(\hat{r}^S\) and not behind player \(r^S\), we have due to the optimality of the algorithm that
Consequently, it follows from (17) together with the fact \(c(\hat{r}^S,\tau ^S)=c(i,\tau ^{S \cup \{i\}})\) that we also have
Therefore, together with (16), we can conclude \(\frac{\alpha _i}{p_i} > \frac{\alpha _m}{p_m}\), i.e., \(\alpha _ip_m-\alpha _mp_i>0\).
Case B [\(\{k \in S~|~\sigma _0^S(m)<\sigma _0^S(k)<\sigma _0^S(i)\} \ne \emptyset \)]
Hence, there are players of coalition S in between player m and player i in \(\sigma _0^S\). Therefore, due to the definition of player m, it follows that for every \(k \in S\) with \(\sigma _0^S(m)<\sigma _0^S(k)<\sigma _0^S(i)\) we have
i.e., the statement in the lemma holds for all followers of player m with respect to \(\sigma _0^S\) intersected with \(S \cap P(\sigma _0^S,i)\). First, note that \((S \cup \{i\})^{\sigma _0,\tau ^S}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})}\) is non-empty because of the following. Due to requirement (i) and (ii) of admissibility we know that the first player in \((S \cup \{i\})^{\sigma _0,\tau ^{S \cup \{i\}}}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})}\) with respect to \(\tau ^{S \cup \{i\}}\) is also the first player in \((S \cup \{i\})^{\sigma _0, \sigma _0^S}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})}\) with respect to \(\sigma _0^S\). Therefore, using (19) we know that this player also belongs to \((S \cup \{i\})^{\sigma _0,\tau ^S}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})}\). Hence, \((S \cup \{i\})^{\sigma _0,\tau ^S}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})}\) is non-empty.
Next, define player \(l \in S^{\tau ^S}(\tilde{r}^S,r^S]\) as the player in \(S^{\tau ^S}(\tilde{r}^S,r^S]\) who is positioned last with respect to \(\tau ^{S \cup \{i\}}\), i.e., \(\tau ^{S \cup \{i\}}(l) \ge \tau ^{S \cup \{i\}}(k)\) for all \(k \in S^{\tau ^S}(\tilde{r}^S,r^S]\). Note that player l was actually player \(r^S\) in Case A because of (17). From the assumptions in (19) and (15) it follows that
i.e., player l is to the right of player \(r^{S \cup \{i\}}\) in \(\tau ^{S \cup \{i\}}\). We will prove that moving player m behind player l is more beneficial than moving player m behind player \(r^{S \cup \{i\}}\), i.e., we will prove that
This implies that the step made by the algorithm for player m when applied on coalition \(S \cup \{i\}\) is not optimal, which contradicts the optimality of the algorithm. Hence, for Case B, it remains to prove (21).
From the definitions of players \(\tilde{r}^S\) and l, together with (19), it follows that
Moreover, since during the run of the algorithm player m jumps over all players in \(S^{\tau ^S}(m,r^S]\), Proposition 3.1 implies that
for all \(k \in S^{\tau ^S}(m,r^S]\).
Below we distinguish between the following four cases:
-
Case B.1: \(i \not \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \not \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l]\),
-
Case B.2: \(i \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \not \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l]\),
-
Case B.3: \(i \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l]\),
-
Case B.4: \(i \not \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l]\).
Case B.1 [\(i \not \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \not \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l]\)]
Using \(c(l,\tau ^{S \cup \{i\}}) \ge c(r^S,\tau ^S)\) from (20), we distinguish between another two cases:
-
Case B.1(i): \(c(l,\tau ^{S \cup \{i\}})=c(r^S,\tau ^S)\),
-
Case B.1(ii): \(c(l,\tau ^{S \cup \{i\}})>c(r^S,\tau ^S)\).
Case B.1(i) [\(c(l,\tau ^{S \cup \{i\}})=c(r^S,\tau ^S)\)]
Since \(\tilde{r}^S \in (S \cup \{i\})^{\sigma _0,\tau ^S}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})}\) and thus \(c(\tilde{r}^S,\tau ^S)=c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})\), the assumptions \(i \not \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(c(l,\tau ^{S \cup \{i\}})=c(r^S,\tau ^S)\) imply that
For every \(k \in S^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l] \backslash S^{\tau ^S}(\tilde{r}^S,r^S]\) we know that \(k \in S^{\tau ^S}(m,\tilde{r}^S]\), because \(c(k,\tau ^S) \le c(k,\tau ^{S \cup \{i\}})\) by (19) and \(P(\tau ^S,m)=P(\tau ^{S \cup \{i\}},m)\). Hence, also \(k \in S^{\tau ^S}(m,r^S]\) and thus from (23) it follows that \(u_k>u_m\). Together with Proposition 3.2 applied on the set \(S^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l] \backslash S^{\tau ^S}(\tilde{r}^S,r^S]\) and player m we have
where the equality follows from the assumption \(i \not \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l]\). As a consequence,
proving (21).
Case B.1(ii) [\(c(l,\tau ^{S \cup \{i\}})>c(r^S,\tau ^S)\)]
Define Q as the set of players from \(S^{\tau ^S}(\tilde{r}^S,r^S]\) who are positioned in \(\tau ^{S \cup \{i\}}\) in a component to the right of component \((S \cup \{i\})^{\sigma _0,\tau ^{S \cup \{i\}}}_{c(r^S,\tau ^S)}\). Since \(c(l,\tau ^{S \cup \{i\}})>c(r^S,\tau ^S)\) and \(l \in S^{\tau ^S}(\tilde{r}^S,r^S]\), we have \(l \in Q\) and thus \(Q \ne \emptyset \). Select from Q only the players who have in their corresponding component in \(\tau ^{S \cup \{i\}}\) no players from \(S^{\tau ^S}(\tilde{r}^S,r^S]\) in front of him and denote this set of players by \(\mathcal {Q}\), i.e., we select in each component a player of Q (if possible) that is most to the left in \(\tau ^{S \cup \{i\}}\).
Set \(\mathcal {Q}=\{q_1,q_2,\ldots ,q_{|\mathcal {Q}|}\}\) such that \(c(q_k,\tau ^{S \cup \{i\}})<c(q_{k+1},\tau ^{S \cup \{i\}})\) for all \(k \in \{1, \ldots , |\mathcal {Q}|-1\}\). Define \(w_1\) as the first player in \(\overline{(S \cup \{i\})}^{\sigma _0}_{c(r^S,\tau ^S)}\), i.e., the direct follower in \(\tau ^{S \cup \{i\}}\) of component \((S \cup \{i\})^{\sigma _0,\tau ^{S \cup \{i\}}}_{c(r^S,\tau ^S)}\). For \(k \in \{2, \ldots , |\mathcal {Q}|\}\), define \(w_k\) as the first player in \(\overline{(S \cup \{i\})}^{\sigma _0}_{c(q_{k-1},\tau ^{S \cup \{i\}})}\), i.e., the direct follower in \(\tau ^{S \cup \{i\}}\) of component \((S \cup \{i\})^{\sigma _0,\tau ^{S \cup \{i\}}}_{c(q_{k-1},\tau ^{S \cup \{i\}})}\). Note that because of the definition of player l we have \(c(q_{|\mathcal {Q}|}, \tau ^{S \cup \{i\}})=c(l,\tau ^{S \cup \{i\}})\).
The sets \(N^{\tau ^{S \cup \{i\}}}[w_k,q_k)\), \(k \in \{1, \ldots , |\mathcal {Q}|\}\), are by definition mutually disjoint. Moreover, for \(k \in \{1, \ldots , |\mathcal {Q}|\}\), we have \(N^{\tau ^{S \cup \{i\}}}[w_k,q_k) \cap S^{\tau ^S}(\tilde{r}^S,r^S] = \emptyset \) and \((S \cup \{i\})^{\tau ^{S \cup \{i\}}}[w_k,q_k) \subseteq (S \cup \{i\})^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l]\). If we set \(R=S^{\tau ^S}(\tilde{r}^S,r^S] \cup \bigcup _{k=1}^{|\mathcal {Q}|}{(S \cup \{i\})^{\tau ^{S \cup \{i\}}}[w_k,q_k)}\), then using (22) we have
and
and
Note that, for \(k \in \{1, \ldots , |\mathcal {Q}|\}\), we have \(c(q_k,\sigma _0) \le c(q_k, \tau ^S) \le c(r^S,\tau ^S)\). Hence, as \(w_k\) is in \(\tau ^{S \cup \{i\}}\) to the right of component \((S \cup \{i\})^{\sigma _0,\tau ^{S \cup \{i\}}}_{c(r^S,\tau ^S)}\), it is admissible to move player \(q_k\) in front of \(w_k\) with respect to \(\tau ^{S \cup \{i\}}\). In other words, it is admissible to move player \(q_k\) to the tail of component \((S \cup \{i\})^{\sigma _0,\tau ^{S \cup \{i\}}}_{c(q_{k-1},\tau ^{S \cup \{i\}})}\) (and the tail of component \((S \cup \{i\})^{\sigma _0,\tau ^{S \cup \{i\}}}_{c(r^S,\tau ^S)}\) in case \(k=1\)). Since player \(q_k\) is not moved behind player \(w_k\), we have due to property (v) of the algorithm that
Moreover, since \(q_k \in S^{\tau ^S}(m,r^S]\) it follows from (23) that \(u_m<u_{q_k}\). As a consequence,
We have
Note that the second equality follows from \(c(\tilde{r}^S, \tau ^S)=c(r^{S \cup \{i\}}, \tau ^{S \cup \{i\}})\), the fact that \(w_1\) is the direct follower in \(\tau ^{S \cup \{i\}}\) of component \((S \cup \{i\})^{\sigma _0,\tau ^{S \cup \{i\}}}_{c(r^S,\tau ^S)}\), and the assumption \(i \not \in N^{\tau ^S}(\tilde{r}^S,r^S]\). Moreover, the collection of sets \(\{(\overline{S \cup \{i\}})^{\tau ^{S \cup \{i\}}}[w_k,q_k)~|~k \in \{1, \ldots , |\mathcal {Q}|\}\}\) forms a partition of the set \((\overline{S \cup \{i\}})^{\tau ^{S \cup \{i\}}}[w_1,l]\).
For every \(k \in S^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l] \backslash R\), we know that either \(k \in S^{\tau ^S}(m,\tilde{r}^S]\) or \(c(k,\tau ^S) \ge c(r^S,\tau ^S)\). If \(k \in S^{\tau ^S}(m,\tilde{r}^S]\), then also \(k \in S^{\tau ^S}(m,r^S]\) and thus from (23) we have \(u_k>u_m\). Next, if \(c(k,\tau ^S) \ge c(r^S,\tau ^S)\), then because \(c(l,\tau ^S) \le c(r^S,\tau ^S)\) we know that the swap of players k and l with respect to \(\tau ^{S \cup \{i\}}\) is admissible. Therefore, according to Proposition 3.1, we know \(u_k \ge u_l\). Moreover, since \(l \in S^{\tau ^S}(m,r^S]\), it follows from (23) that \(u_m < u_l\). As a consequence, \(u_m<u_k\). Together with Proposition 3.2 applied on the set \(S^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l] \backslash R\) and player m we have
where the equality follows from the assumption \(i \not \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l]\). As a consequence,
proving (21).
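Throughout these cases, inequalities such as (21) ultimately reduce to the sign of expressions of the form \(\alpha _mp_T-p_m\alpha _T\) for a block of followers T. As a rough numeric illustration (not from the paper; the data and the helper `total_cost` are our own), moving a player behind a block of strictly more urgent followers lowers the total weighted completion cost by exactly this amount:

```python
# Sketch (our own illustration, not the paper's notation): total weighted
# completion cost of a processing order, and the closed-form cost change
# of moving a player behind a block of more-urgent followers.

def total_cost(order, p, a):
    """Sum of a[k] * C_k, with C_k the completion time of player k."""
    t, cost = 0, 0
    for k in order:
        t += p[k]
        cost += a[k] * t
    return cost

# Hypothetical data: player 'm' is followed by a block T = ['x', 'y'],
# both with higher urgency alpha/p than m.
p = {'m': 4, 'x': 1, 'y': 2}
a = {'m': 1, 'x': 2, 'y': 3}   # u_m = 0.25 < u_y = 1.5 < u_x = 2.0

before = total_cost(['m', 'x', 'y'], p, a)
after = total_cost(['x', 'y', 'm'], p, a)

# Cost change of the move: alpha_m * p_T - p_m * alpha_T.
pT = p['x'] + p['y']
aT = a['x'] + a['y']
assert after - before == a['m'] * pT - p['m'] * aT
assert after < before  # moving m behind more-urgent players saves cost
```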
Case B.2 [\(i \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \not \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l]\)]
Note that this case occurs exactly if \(c(\tilde{r}^S,\tau ^S)< c(i,\sigma _0) < c(r^S,\tau ^S)\) and player i has been moved by the algorithm with respect to coalition \(S \cup \{i\}\), such that \(\tau ^{S \cup \{i\}}(i)>\tau ^{S \cup \{i\}}(l)\). We have
Then, using the same arguments as in Case B.1, we can prove (21): where we used (16) in Case B.1, we now use the above equation. Hence, player i has already been taken into account, and thus, when applying the same arguments as in Case B.1, we can assume \(i \not \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \not \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l]\).
For example, analogous to Case B.1(i), Case B.2(i) goes as follows. Since \(\tilde{r}^S \in (S \cup \{i\})^{\sigma _0,\tau ^S}_{c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})}\) and thus \(c(\tilde{r}^S,\tau ^S)=c(r^{S \cup \{i\}},\tau ^{S \cup \{i\}})\), the assumption \(c(l,\tau ^{S \cup \{i\}})=c(r^S,\tau ^S)\) implies that
Using exactly the same arguments as in Case B.1(i), we have
As a consequence,
proving (21).
Case B.3 [\(i \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l]\)]
Note that this case occurs exactly if \(c(\tilde{r}^S,\tau ^S)< c(i,\sigma _0) < c(r^S,\tau ^S)\) and player i has either not been moved by the algorithm with respect to coalition \(S \cup \{i\}\) or it has been moved such that \(\tau ^{S \cup \{i\}}(i)<\tau ^{S \cup \{i\}}(l)\). We have
Then, using the same arguments as in Case B.1, we can prove (21).
Case B.4 [\(i \not \in N^{\tau ^S}(\tilde{r}^S,r^S]\) and \(i \in N^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},l]\)]
Then,
It suffices to show that \(\alpha _ip_m-\alpha _mp_i>0\). Together with the above equation and (16), we can prove (21) by using the same arguments as in Case B.1.
We distinguish between two cases:
-
Case B.4(i): \(c(i,\sigma _0) > c(r^S,\tau ^S)\),
-
Case B.4(ii): \(c(i,\sigma _0) < c(\tilde{r}^S,\tau ^S)\).
Case B.4(i) [\(c(i,\sigma _0) > c(r^S,\tau ^S)\)]
Note that this case occurs exactly if \(\tau ^{S \cup \{i\}}(i)<\tau ^{S \cup \{i\}}(l)\). This means that player i is not necessarily moved by the algorithm with respect to coalition \(S \cup \{i\}\). Then,
Hence, the swap of players i and l with respect to \(\sigma _{S \cup \{i\}}\) is admissible and thus, according to Proposition 3.1, we have \(u_i \ge u_l\). Consequently, together with (23), we have \(u_i \ge u_l > u_m\). Hence, \(\alpha _ip_m-\alpha _mp_i>0\).
Case B.4(ii) [\(c(i,\sigma _0) < c(\tilde{r}^S,\tau ^S)\)]
Note that this case occurs exactly if player i has been moved by the algorithm with respect to coalition \(S \cup \{i\}\) such that \(\tau ^{S \cup \{i\}}(r^{S \cup \{i\}})< \tau ^{S \cup \{i\}}(i)<\tau ^{S \cup \{i\}}(l)\). Suppose \(c(l,\sigma _0)<c(i,\sigma _0)\), then
Hence, the swap of players m and l with respect to \(\sigma _{S \cup \{i\}}\) is admissible and thus, according to Proposition 3.1, we have \(u_m \ge u_l\). This contradicts (23), and thus we know \(c(l,\sigma _0)>c(i,\sigma _0)\). Therefore, using (14), we have \(c(l,\tau ^S)=c(l,\tau ^{S \cup \{i\}})\), which implies \(l=r^S\). As a consequence, if \(k \in S^{\tau ^{S \cup \{i\}}}(r^{S \cup \{i\}},r^S]\), then also \(k \in S^{\tau ^S}(m,r^S]\) and thus from (23) we have \(u_m<u_k\). By the same arguments as used for player l we can conclude \(c(k,\sigma _0)>c(i,\sigma _0)\) and thus \(c(k,\tau ^S)=c(k,\tau ^{S \cup \{i\}})\). Therefore, we have
and
for all \(k \in S^{\tau ^S}(\tilde{r}^S,r^S]\).
Let \(\hat{r}^S\) be the direct predecessor of player i in \(\tau ^{S \cup \{i\}}\). Since player i is in \(\tau ^{S \cup \{i\}}\) behind player \(\hat{r}^S\) and not behind player \(r^S\), although the latter move would have been admissible, we have
Consequently, it follows from (35) and (36) together with the fact that \(c(\hat{r}^S,\tau ^S)=c(i,\tau ^{S \cup \{i\}})\), that
Moreover, since player m is moved behind player \(r^S\) and not behind player \(\hat{r}^S\), we have due to property (v) of the algorithm that
Therefore, we can conclude \(\frac{\alpha _i}{p_i} > \frac{\alpha _m}{p_m}\), i.e., \(\alpha _ip_m-\alpha _mp_i>0\), which is exactly what we needed.
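For completeness, the equivalence used in this last step: since processing times are positive, comparing urgencies is the same as checking the sign condition,

```latex
\frac{\alpha_i}{p_i} > \frac{\alpha_m}{p_m}
\iff \alpha_i p_m > \alpha_m p_i
\iff \alpha_i p_m - \alpha_m p_i > 0,
```

obtained by multiplying both sides by \(p_ip_m>0\), which preserves the inequality.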
Appendix B: Proof of Lemma 3.7
In this proof we denote \(c(k,S \cup \{i\},\sigma )\) by \(c(k,\sigma )\) for every \(k \in S \cup \{i\}\) and every \(\sigma \in \Pi (N)\). We continue the proof by means of induction on the number
i.e., the number of players in coalition S between player m and player i in \(\sigma _S\).
Base step If \(n_m=0\), then player m is the closest predecessor of player i with respect to \(\sigma _S\) among all players in S. Then,
and thus (4) is true.
Induction step Assume that (4) holds for every \(k \in S \cap P(\sigma _S,i)\) with \(n_k<n_m\). Since the followers of player m with respect to \(\sigma _S\) intersected with \(S \cap P(\sigma _S,i)\) are exactly the players with \(n_k<n_m\), we actually assume that for every \(k \in S \cap P(\sigma _S,i) \cap F(\sigma _S,m)\) and \(r \in S \cap F(\tau _k,k)\) with \(c(r,\tau _k)<c(i,\sigma _0)\) we have
We distinguish between the following cases:
-
\(u_m \le u_k\) for all \(k \in S^{\sigma _S}(m,l]\),
-
there exists a player \(k \in S^{\sigma _S}(m,l]\) with \(u_m > u_k\).
Case 1 [\(u_m \le u_k\) for all \(k \in S^{\sigma _S}(m,l]\)]
We will show that since moving player m behind player l in \(\sigma _S\) is not beneficial, moving player m behind player l in \(\tau _m\) is also not beneficial.
Note that because of the induction assumption in (37), we know that none of the players in \((S \cup \{i\})^{\tau _m}(m,l]\) has been moved by the algorithm and thus \(N^{\tau _m}(m,l] \subseteq N^{\sigma _S}(m,l]\). Moreover, since player l is in both \(\sigma _S\) and \(\tau _m\) to the left of the original component of player i in \(\sigma _0\), we have
and
Note, as processing order \(\sigma _S\) is optimal for coalition S, we have
Since \(u_m \le u_k\) for all \(k \in S^{\sigma _S}(m,l]\), it follows from Proposition 3.2 applied on the set \(S^{\sigma _S}(m,l]\) and player m together with (38) that
where there is an equality if \(S^{\sigma _S}(m,l] = (S \cup \{i\})^{\tau _m}(m,l]\). As a consequence, we have
and (4) follows.
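The care needed in Case 1 stems from the fact that only coalition members' costs count as savings, while every jumped player, member or not, delays player m. A small sketch of this effect (hypothetical data; the helper `move_gain` is our own, not the paper's notation):

```python
def move_gain(order, S, p, a, m, l):
    """Coalition cost saving of moving m directly behind l: each jumped
    coalition member finishes p[m] earlier, while m finishes later by
    the total processing time of all jumped players."""
    i, j = order.index(m), order.index(l)
    jumped = order[i + 1: j + 1]
    return (p[m] * sum(a[k] for k in jumped if k in S)
            - a[m] * sum(p[k] for k in jumped))

# Hypothetical data: u_m = 1 <= u_k = 2, yet moving m behind k is not
# beneficial, because the non-member z delays m by 10 time units.
p = {'m': 1, 'z': 10, 'k': 1}
a = {'m': 1, 'z': 0, 'k': 2}
S = {'m', 'k'}
assert move_gain(['m', 'z', 'k'], S, p, a, 'm', 'k') < 0
```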
Case 2 [there exists a player \(k \in S^{\sigma _S}(m,l]\) with \(u_m > u_k\)]
Let player \(q \in S^{\sigma _S}(m,l]\) be the closest follower of player m in \(\sigma _S\) with a smaller urgency than player m: \(u_q<u_m \le u_k\) for all \(k \in S^{\sigma _S}(m,q)\). Since \(q \in S \cap P(\sigma _S,i) \cap F(\sigma _S,m)\), it follows from the induction assumption in (37) that
As a consequence, since \(u_q<u_m\), we also have
We distinguish between two cases:
-
Case 2(i): \(S^{\sigma _S}(m,q) \cap S^{\tau _m}(m,l] = \emptyset \), i.e., all players in \(S^{\sigma _S}(m,q)\) have been moved by the algorithm,
-
Case 2(ii): \(S^{\sigma _S}(m,q) \cap S^{\tau _m}(m,l] \ne \emptyset \), i.e., not all players in \(S^{\sigma _S}(m,q)\) have been moved by the algorithm.
Case 2(i) [\(S^{\sigma _S}(m,q) \cap S^{\tau _m}(m,l] = \emptyset \)]
We will show that since moving player q behind player l in \(\tau _q\) is not beneficial, moving player m behind player l in \(\tau _m\) is also not beneficial.
Since player m is a predecessor of player q in \(\sigma _S\), we have
with an equality in case \(c(q,\sigma _S)=c(m,\sigma _S)\). Since all players in \(S^{\sigma _S}(m,q)\) have been moved by the algorithm, we have
if player q has also been moved by the algorithm, and
if player q has not been moved by the algorithm. As a consequence,
if player q has been moved by the algorithm, and
if player q has not been moved by the algorithm, which proves (4).
Case 2(ii) [\(S^{\sigma _S}(m,q) \cap S^{\tau _m}(m,l] \ne \emptyset \)]
We move player m behind player l in \(\tau _m\) in two stages. In the first stage, player m is moved behind a specific player t; in the second stage, player m is moved behind player l. Using similar arguments as in Case 1 we can show that the move in the first stage is not beneficial, and using similar arguments as in Case 2(i) we can show that the move in the second stage is not beneficial. As a consequence, since cost differences have an additive structure and the moves in both stages are not beneficial, moving player m behind player l in \(\tau _m\) is not beneficial.
Let player \(t \in S^{\sigma _S}(m,q)\) be the closest predecessor of player q in \(\sigma _S\) who is also a member of \(S^{\tau _m}(m,l]\). Note that due to the assumption \(S^{\sigma _S}(m,q) \cap S^{\tau _m}(m,l] \ne \emptyset \), player t exists. Because player t is a predecessor of player q in \(\sigma _S\), we have \(u_m \le u_k\) for all \(k \in S^{\sigma _S}(m,t]\). Therefore, using the same arguments as in Case 1, we have
Since player t is a predecessor of player q in \(\sigma _S\) and because \(t \in S^{\tau _m}(m,l]\) (which means that player t has not been moved by the algorithm), we have
with an equality in case \(c(q,\sigma _S)=c(t,\sigma _S)\). Because of the definition of player t we have
if player q has been moved by the algorithm, and
if player q has not been moved by the algorithm. As a consequence, by using the additive structure of cost differences, we have
if player q has been moved by the algorithm, and
if player q has not been moved by the algorithm. Hence, by using the additive structure of cost differences we have shown that moving player m behind player l in \(\tau _m\) is not beneficial.
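The additive structure of cost differences invoked above can be made concrete: the saving of the full move equals the sum of the savings of the two stages, since the cost of the intermediate order cancels telescopically. A sketch under hypothetical data (the helpers `coalition_cost` and `move_behind` are our own, not the paper's notation):

```python
def coalition_cost(order, S, p, a):
    """Total cost of coalition members: sum of a[k] * completion time."""
    t, cost = 0, 0
    for k in order:
        t += p[k]
        if k in S:
            cost += a[k] * t
    return cost

def move_behind(order, m, l):
    """Return the order with player m moved directly behind player l."""
    o = [k for k in order if k != m]
    o.insert(o.index(l) + 1, m)
    return o

# Hypothetical instance: move m behind t first, then behind l.
order = ['m', 'x', 't', 'y', 'l']
S = {'m', 't', 'l'}
p = {'m': 2, 'x': 3, 't': 1, 'y': 4, 'l': 2}
a = {'m': 1, 'x': 0, 't': 3, 'y': 0, 'l': 2}

stage1 = move_behind(order, 'm', 't')   # first stage: m behind t
stage2 = move_behind(stage1, 'm', 'l')  # second stage: m behind l
direct = move_behind(order, 'm', 'l')   # the same move in one step
assert stage2 == direct

c0 = coalition_cost(order, S, p, a)
c1 = coalition_cost(stage1, S, p, a)
c2 = coalition_cost(stage2, S, p, a)
# the saving of the full move is the sum of the two stage savings
assert (c0 - c2) == (c0 - c1) + (c1 - c2)
```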
Appendix C: On Assumption 2 for Theorem 4.1 in Section 4
We will prove that without loss of generality we can assume
and
in order to prove the convexity of SoSi sequencing games. Suppose that player i is not the only player in his component in \(\sigma _0\), i.e.,
Then, for example, the direct predecessor of player i in \(\sigma _0\) is a member of \(S \cup \{j\} \cup \{i\}\). We can define a different one-machine sequencing situation \((\overline{N},\overline{\sigma _0},\overline{p},\overline{\alpha })\) where \(\overline{N}=N \cup \{d\}\) with \(d \not \in N\),
and
Hence, this new sequencing situation \((\overline{N},\overline{\sigma _0},\overline{p},\overline{\alpha })\) is obtained from the original sequencing situation \((N,\sigma _0,p,\alpha )\) by adding a dummy player, with processing time and costs per time unit both equal to zero, directly in front of player i such that the predecessor of player i does not belong to \(S \cup \{j\} \cup \{i\}\) anymore.
Let \((\overline{N},\overline{v})\) be the SoSi sequencing game corresponding to one-machine sequencing situation \((\overline{N},\overline{\sigma _0},\overline{p},\overline{\alpha })\). Although \(\alpha _d=0\) and \(p_d=0\), we can still apply the algorithm with respect to coalition \(S \cup \{j\} \cup \{i\}\) and initial processing order \(\overline{\sigma _0}\), because player d is not a member of \(S \cup \{j\} \cup \{i\}\). The mutual order of the players in N will be the same in \(\text {Alg}((N, \sigma _0,p, \alpha ),S \cup \{j\} \cup \{i\})\) and \(\text {Alg}((\overline{N}, \overline{\sigma _0},\overline{p}, \overline{\alpha }),S \cup \{j\} \cup \{i\})\). Therefore, \(v(S \cup \{j\} \cup \{i\})=\overline{v}(S \cup \{j\} \cup \{i\})\). Similarly, we have \(v(S)=\overline{v}(S)\), \(v(S \cup \{i\})=\overline{v}(S \cup \{i\} )\) and \(v(S \cup \{j\})=\overline{v}(S \cup \{j\})\).
Note that if the direct follower of player i in \(\sigma _0\) is also a member of \(S \cup \{j\} \cup \{i\}\), then we also add a dummy player directly behind player i. By adding a dummy player directly in front of and behind player i, player i becomes the only player in his component. Moreover, all arguments can also be applied to player j. Hence, without loss of generality we can assume that players i and j are each the only player in their component in \(\sigma _0\).
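The dummy-player construction can be sanity-checked directly: a job with zero processing time and zero cost coefficient alters no completion time and no cost, wherever it is inserted. A minimal sketch with hypothetical data (the helper `total_cost` is our own):

```python
def total_cost(order, p, a):
    """Sum of a[k] times the completion time of player k."""
    t, cost = 0, 0
    for k in order:
        t += p[k]
        cost += a[k] * t
    return cost

p = {'a': 2, 'i': 3, 'b': 1, 'd': 0}   # dummy d: zero processing time
a = {'a': 1, 'i': 2, 'b': 1, 'd': 0}   # dummy d: zero cost per time unit

without = total_cost(['a', 'i', 'b'], p, a)
with_d = total_cost(['a', 'd', 'i', 'b'], p, a)  # d directly in front of i
assert without == with_d
```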
Appendix D: Proof of Claims 1–4 in Theorem 4.1
Claim 1 [\(c(k,\sigma _{S \cup \{j\} \cup \{i\}}) = c(k,\sigma _{S \cup \{i\}}) \ge c(i,\sigma _S) \) for all \(k \in M^i(S)\)]
From (9) it follows that \(c(k,\sigma _{S \cup \{i\}}) \ge c(i,\sigma _S)\) for all \(k \in M^i(S)\). Moreover, from Proposition 3.5 it follows that if we go from \(\sigma _{S \cup \{i\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\), then the players to the right of player j in \(\sigma _{S \cup \{i\}}\) will stay in the same component. Since all players in \(M^i(S)\) are to the right of player j in \(\sigma _{S \cup \{i\}}\), we have
for all \(k \in M^i(S)\).
Claim 2 [\(c(k,\sigma _{S \cup \{j\} \cup \{i\}}) = c(k,\sigma _{S \cup \{j\}})\) for all \(k \in M^i_{1c}(S)\)]
From Proposition 3.5 it follows that if we go from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\), then the players to the right of player i in \(\sigma _{S \cup \{j\}}\) will stay in the same component. Since all players in \(M^i_{1c}(S)\) are to the right of player i in \(\sigma _{S \cup \{j\}}\), we have
for all \(k \in M^i_{1c}(S)\).
Claim 3 [\(c(k,\sigma _S) = c(k,\sigma _{S \cup \{j\} })\) for all \(k \in M_2^i(S)\)]
From Proposition 3.5 it follows that if we go from \(\sigma _S\) to \(\sigma _{S \cup \{j\} }\), then the players to the right of player j in \(\sigma _{S }\) will stay in the same component. Since all players in \(M_2^i(S)\) are to the right of player j in \(\sigma _S\), we have
for all \(k \in M_2^i(S)\).
Claim 4 [\(c(k,\sigma _S) = c(k,\sigma _{S \cup \{j\} })\) for all \(k \in M_{1a}^i(S)\)]
From Proposition 3.6 it follows that the players who are in \(\sigma _{S \cup \{j\}}\) to the left of the original component of player j have not been moved when going from \(\sigma _S\) to \(\sigma _{S \cup \{j\}}\). Since all players in \(M^i_{1a}(S)\) are to the left of the original component of player j in \(\sigma _{S \cup \{j\}}\), we have
for all \(k \in M_{1a}^i(S)\).
Appendix E: Proof of Claim 5 in Theorem 4.1
Let \(k \in M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\); we will prove
Note that we consider the players in \(M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\) from the right to the left with respect to \(\overline{\sigma }\) and \(\sigma _{S \cup \{j\}}\). So, if \(i \in M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\), then player i is the first player who is moved. From now on we distinguish between two cases: \(k=i\) and \(k \ne i\).
Case 1 [\(k=i\)]
As player i is the first player who is moved, we have \(\overline{\tau }_i=\overline{\sigma }\) and \(\tau _i=\sigma _{S \cup \{j\}}\). Note that we have
and
In order to prove \(\overline{\delta }_i \le \delta _i\), we compare the sets of players that player i jumps over in \(\overline{\tau }_i\) and in \(\tau _i\). We will show that every player that player i jumps over in \(\overline{\tau }_i\) is also jumped over in \(\tau _i\). There may, however, be players that player i jumps over in \(\tau _i\) but not in \(\overline{\tau }_i\). It can be shown that these extra players all have a higher urgency than player i, so jumping over them results in extra cost savings. Formally, we show the following three statements:
-
Statement 1(a): \((\overline{S \cup \{i\}} )^{\overline{\tau }_i}(i,\overline{r}_i]= (\overline{S \cup \{j\} \cup \{i\}} )^{\tau _i}(i,r_i]\),
-
Statement 1(b): \( (S \cup \{i\})^{\overline{\tau }_i}(i,\overline{r}_i] \subseteq (S \cup \{j\} \cup \{i\} )^{\tau _i}(i,r_i]\),
-
Statement 1(c): if \((S \cup \{i\})^{\overline{\tau }_i}(i,\overline{r}_i] \subsetneq (S \cup \{j\} \cup \{i\} )^{\tau _i}(i,r_i]\), then
$$\begin{aligned} \alpha _{\left( S \cup \{j\} \cup \{i\} \right) ^{\tau _i}(i,r_i] \backslash \left( S \cup \{i\} \right) ^{\overline{\tau }_i}(i,\overline{r}_i]}p_{i}-\alpha _{i}p_{\left( S \cup \{j\} \cup \{i\} \right) ^{\tau _i}(i,r_i] \backslash \left( S \cup \{i\} \right) ^{\overline{\tau }_i}(i,\overline{r}_i]}>0. \end{aligned}$$
Proof of Statement 1(a)
From Claim 1 it follows that \(c(i,\sigma _{S \cup \{i\}})=c(i,\sigma _{S \cup \{j\} \cup \{i\}})\) and thus player i will be moved in both processing orders to the same component, i.e.,
Moreover, by the definition of \(\overline{\sigma }\) and since \(i \not \in S \cup \{j\}\), we have \(c(i,\overline{\sigma })=c(i,\sigma _{S \cup \{j\}})\) and thus
Hence, player i is moved in \(\overline{\tau }_i\) and \(\tau _i\) from the same component and to the same component, so player i jumps in \(\overline{\tau }_i\) and \(\tau _i\) over the same players outside \(S \cup \{j\} \cup \{i\}\). Moreover, because \(\overline{\sigma }(j)<\overline{\sigma }(i)\) and thus \(\overline{\tau }_i(j)<\overline{\tau }_i(i)\), we have \(j \not \in N^{\overline{\tau }_i}(i,\overline{r}_i]\). To summarize,
\(\square \)
Proof of Statement 1(b)
Let \(l \in (S \cup \{i\})^{\overline{\tau }_i}(i,\overline{r}_i]\). From (12) we know that player l is in \(\overline{\tau }_i\) in a component at most as far to the right as in \(\tau _i\), i.e.,
Moreover, from \(l \in (S \cup \{i\})^{\overline{\tau }_i}(i,\overline{r}_i]\) and (50) it follows that
Note that there is a strict inequality because player i is the only player in his component. Combining the previous two equations we get
i.e., player l is also to the right of player i in \(\tau _i\). Now suppose player l is to the right of player \(r_i\) in \(\tau _i\), then player l is to the right of player i in \(\sigma _{S \cup \{j\} \cup \{i\}}\) and thus
where the first inequality follows from \(l \in (S \cup \{i\})^{\overline{\tau }_i}(i,\overline{r}_i]\) and the first equality follows from Claim 1. Therefore, it follows that the swap of player i and player l with respect to \(\sigma _{S \cup \{j\} \cup \{i\}}\) is admissible. From Proposition 3.1 it then follows that \(u_i \ge u_l\). However, since player i jumps over player l when going from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\), it follows from Proposition 3.1 that \(u_l>u_i\), which contradicts \(u_i \ge u_l\). Therefore l cannot be to the right of player \(r_i\) in \(\tau _i\), so player l is to the left of player \(r_i\) in \(\tau _i\). Combining this result with (51) we have
\(\square \)
Proof of Statement 1(c)
Let \((S \cup \{i\})^{\overline{\tau }_i}(i,\overline{r}_i] \subsetneq (S \cup \{j\} \cup \{i\} )^{\tau _i}(i,r_i]\). For every player \(l \in (S \cup \{j\} \cup \{i\} )^{\tau _i}(i,r_i] \backslash (S \cup \{i\})^{\overline{\tau }_i}(i,\overline{r}_i]\), we have \(u_l>u_i\) since player i jumps over player l when going from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\) (cf. Proposition 3.1). Combining this with Proposition 3.2 applied on the set \(U=(S \cup \{j\} \cup \{i\} )^{\tau _i}(i,r_i] \backslash (S \cup \{i\})^{\overline{\tau }_i}(i,\overline{r}_i]\) and player i, we have that
i.e., \(\alpha _{\left( S \cup \{j\} \cup \{i\} \right) ^{\tau _i}(i,r_i] \backslash \left( S \cup \{i\} \right) ^{\overline{\tau }_i}(i,\overline{r}_i]}p_{i}-\alpha _{i}p_{\left( S \cup \{j\} \cup \{i\} \right) ^{\tau _i}(i,r_i] \backslash \left( S \cup \{i\} \right) ^{\overline{\tau }_i}(i,\overline{r}_i]}>0\). \(\square \)
Note that if in Statement 1(b) we have equality, then the inequality \(\overline{\delta }_i \le \delta _i\) follows immediately from Statement 1(a). Next, if in Statement 1(b) we have a strict subset, then
The idea behind the previous strict inequality is as follows. Player i jumps in \(\overline{\tau }_i\) over the same players as in \(\tau _i\), but additionally player i jumps in \(\tau _i\) also over some extra players. It follows from Statement 1(c) that these extra players all have a higher urgency and thus the jump of player i over those extra players results in cost savings.
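The extra savings in Statement 1(c) can be made explicit: each extra player l that player i jumps over in \(\tau _i\) contributes \(p_i\alpha _l-\alpha _ip_l\), and this contribution is positive precisely when \(u_l>u_i\). A numeric sketch with hypothetical values:

```python
# Hypothetical values: player i with urgency u_i = 1/3, plus two
# extra jumped players (p_l, a_l), each with higher urgency.
p_i, a_i = 3.0, 1.0
extras = [(1.0, 2.0), (2.0, 5.0)]
assert all(a_l / p_l > a_i / p_i for p_l, a_l in extras)

# Each extra jumped player contributes p_i * a_l - a_i * p_l > 0.
extra_saving = sum(p_i * a_l - a_i * p_l for p_l, a_l in extras)
assert extra_saving > 0
```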
Case 2 [\(k \ne i\)]
The difference with respect to Case 1 is that in this case player k is not necessarily the only player in its component, while in Case 1 player i was the only player of \(S \cup \{j\} \cup \{i\}\) in its component with respect to \(\sigma _0\). Another difference is that now player k might not be the first player who is moved, and thus \(\overline{\tau }_k\) and \(\overline{\sigma }\), and \(\tau _k\) and \(\sigma _{S \cup \{j\}}\), might differ.
Note that
and
In order to prove \(\overline{\delta }_k \le \delta _k\), we compare the sets of players that player k jumps over in \(\overline{\tau }_k\) and in \(\tau _k\). We will show that every player, excluding player j, that player k jumps over in \(\overline{\tau }_k\) is also jumped over in \(\tau _k\). Formally, we show the following three statements (which are similar to Statements 1(a)–(c)):
-
Statement 2(a): \( \left( (\overline{S \cup \{i\}} )^{\overline{\tau }_k}(k,\overline{r}_k] \right) \backslash \{j\} = (\overline{S \cup \{j\} \cup \{i\}} )^{\tau _k}(k,r_k]\),
-
Statement 2(b): \( (S \cup \{i\})^{\overline{\tau }_k}(k,\overline{r}_k] \subseteq \left( (S \cup \{j\} \cup \{i\} )^{\tau _k}(k,r_k] \right) \backslash \{j\}\),
-
Statement 2(c): if \( (S \cup \{i\})^{\overline{\tau }_k}(k,\overline{r}_k] \subsetneq \left( (S \cup \{j\} \cup \{i\} )^{\tau _k}(k,r_k] \right) \backslash \{j\}\), then
$$\begin{aligned}&\alpha _{\left( \left( (S \cup \{j\} \cup \{i\} )^{\tau _k}(k,r_k] \right) \backslash \{j\}\right) \backslash \left( (S \cup \{i\})^{\overline{\tau }_k}(k,\overline{r}_k]\right) }p_k\\&\quad -\alpha _kp_{\left( \left( (S \cup \{j\} \cup \{i\} )^{\tau _k}(k,r_k] \right) \backslash \{j\}\right) \backslash \left( (S \cup \{i\})^{\overline{\tau }_k}(k,\overline{r}_k] \right) }>0. \end{aligned}$$
Note that in the proofs of Statements 1(a) and 1(c) the fact that player i is the only player in its component is not used, and therefore the proofs of Statements 2(a) and 2(c) are similar.
Proof of Statement 2(b)
Let \(l \in (S \cup \{i\})^{\overline{\tau }_k}(k,\overline{r}_k]\). From (13) we know that player l is in \(\overline{\tau }_k\) in a component at most as far to the right as in \(\tau _k\), i.e.,
Moreover, from (11) we have \(c(k,\overline{\sigma })=c(k,\sigma _{S \cup \{j\}})\) and thus \(c(k,\overline{\tau }_k)=c(k,\tau _k)\). Together with \(l \in (S \cup \{i\})^{\overline{\tau }_k}(k,\overline{r}_k]\), it follows that
Combining the previous two equations we get
This implies
For this, note that if we have an equality in (52), then \(c(k,\overline{\tau }_k)=c(l,\overline{\tau }_k)\). Moreover, since \(l \in (S \cup \{i\})^{\overline{\tau }_k}(k,\overline{r}_k]\), we know \(\overline{\tau }_k(k)<\overline{\tau }_k(l)\) and thus also \(\tau _k(k)<\tau _k(l)\) (because both \(\tau _k\) and \(\overline{\tau }_k\) are urgency respecting processing orders and moreover because the tiebreaking rule, in case of equal urgencies, mentioned in condition (iii) of \(\overline{\sigma }\) is the same tiebreaking rule as in property (iv) of the algorithm). On the other hand, if there is a strict inequality in (52), then automatically \(\tau _k(k)<\tau _k(l)\). Using similar arguments as in the proof of Statement 1(b) we have
and since \(l \in S \cup \{i\}\), we have
\(\square \)
Now we continue with the main line of the proof. Using similar arguments as in Case 1, it follows from Statement 2(a)–(c) that
We distinguish from now on between the following four cases:
-
Case 2(i): player k jumps over player j neither in \(\overline{\tau }_k\) nor in \(\tau _k\),
-
Case 2(ii): player k jumps over player j in both \(\overline{\tau }_k\) and \(\tau _k\),
-
Case 2(iii): player k jumps over player j in \(\overline{\tau }_k\) but not in \(\tau _k\),
-
Case 2(iv): player k jumps over player j in \(\tau _k\) but not in \(\overline{\tau }_k\).
Case 2(i) [\(j \not \in N^{\overline{\tau }_k}(k,\overline{r}_k]\) and \(j \not \in N^{\tau _k}(k,r_k]\)]
Note that if in Statement 2(b) we have equality, then the inequality \(\overline{\delta }_k \le \delta _k\) follows immediately from Statement 2(a). Next, if in Statement 2(b) we have a strict subset, then
The idea behind the strict inequality is as follows. Player k jumps in \(\overline{\tau }_k\) over the same players as in \(\tau _k\), but additionally player k jumps in \(\tau _k\) also over some extra players. It follows from Statement 2(c) that these extra players all have a higher urgency and thus the jump of player k over those extra players results in cost savings.
Case 2(ii) [\(j \in N^{\overline{\tau }_k}(k,\overline{r}_k]\) and \(j \in N^{\tau _k}(k,r_k]\)]
It follows that
The idea behind the strict inequality is as follows. All players, including player j, that player k jumps over in \(\overline{\tau }_k\), player k also jumps over in \(\tau _k\). However, as player j belongs to coalition \(S \cup \{j\} \cup \{i\}\) and not to coalition \(S \cup \{i\}\), there are some extra cost savings in \(\delta _k\). These extra cost savings are due to the reduction of the completion time of player j caused by the jump of player k, namely \(\alpha _jp_k\).
Case 2(iii) [\(j \in N^{\overline{\tau }_k}(k,\overline{r}_k]\) and \(j \not \in N^{\tau _k}(k,r_k]\)]
It follows that
The idea behind the strict inequality is as follows. All players, excluding player j, that player k jumps over in \(\overline{\tau }_k\), player k also jumps over in \(\tau _k\). However, as player k jumps over player j in \(\overline{\tau }_k\) and not in \(\tau _k\), the completion time of player k increases by at least \(p_j\) more in \(\tau _k\) than in \(\overline{\tau }_k\), and thus the cost savings in \(\overline{\delta }_k\) are less than in \(\delta _k\).
Case 2(iv) [\(j \not \in N^{\overline{\tau }_k}(k,\overline{r}_k]\) and \(j \in N^{\tau _k}(k,r_k]\)]
We will show that this case is not possible. As player k jumps over player j when going from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\), player j did not jump over player k when going from \(\sigma _S\) to \(\sigma _{S \cup \{j\}}\) (otherwise there would be a contradiction with respect to the urgencies, cf. Proposition 3.1). Hence, the mutual order of player k and player j is in \(\sigma _S\) the same as in \(\sigma _{S \cup \{j\}}\) and thus \(\sigma _S(k) < \sigma _S(j)\). Using Fig. 10 this implies that \(k \in M^i_1(S)\). Moreover, as \( j \not \in N^{\overline{\tau }_k}(k,\overline{r}_k]\) it follows from Fig. 12 that \(k \not \in M_{1a}^i(S)\) and thus \(k \in M_{1b}^i(S)\). Therefore, it follows from Fig. 10c that \(c(k,\sigma _{S \cup \{j\}}) \ge c(j,\sigma _S)\). Hence,
where the last inequality follows from \(j \in N^{\tau _k}(k,r_k]\). Therefore, it follows that the swap of player k and player j with respect to \(\sigma _{S \cup \{j\}}\) is admissible.
From Proposition 3.1 it then follows that \(u_k \ge u_j\). However, since player k jumps over player j when going from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\), it follows from Proposition 3.1 that \(u_j>u_k\) too, a contradiction.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Musegaas, M., Borm, P.E.M. & Quant, M. On the convexity of step out–step in sequencing games. TOP 26, 68–109 (2018). https://doi.org/10.1007/s11750-017-0455-2