1 Introduction

This paper considers one-machine sequencing situations in which a number of players, each with one job, have to be served by a single machine. The processing time of a player's job is the time the machine needs to process it. Every player has an individual linear cost function, specified by an individual cost parameter, which depends on the completion time of his job, defined as the sum of the processing times of his own job and of all jobs processed before it. There are no further restrictive assumptions such as due dates, ready times or precedence constraints imposed on the jobs. Smith (1956) showed that the total joint costs are minimal if the jobs are processed in weakly decreasing order with respect to their urgency, defined as the ratio of the individual cost parameter to the processing time.

We assume that the players are arranged in an initial order, so rearranging this initial order into an optimal order leads to cost savings. To analyze how these cost savings should be allocated among the players, sequencing games are introduced. The value of a coalition in a sequencing game serves as a benchmark for determining a fair allocation of the optimal cost savings and represents the “virtual” maximal cost savings which this coalition can achieve by means of admissible rearrangements. Which rearrangements are admissible for a coalition is a modeling choice. The classical assumption made in Curiel et al. (1989) is that two players of a certain coalition can only swap their positions if all players between them are also members of the coalition. Curiel et al. (1989) show that the resulting sequencing games are convex and therefore have a non-empty core. Relaxed sequencing games arise by relaxing this classical assumption about the set of admissible rearrangements for coalitions in a consistent way.

In Curiel et al. (1993), four different relaxed sequencing games are introduced. These relaxations are based on requirements for the players outside the coalition regarding either their position in the processing order or their starting time. Slikker (2006) considered these four relaxed sequencing games in more detail by investigating the corresponding cores. In van Velzen and Hamers (2003) two further classes of relaxed sequencing games are considered. In relaxed sequencing games the values of coalitions become weakly larger because the set of admissible rearrangements is larger than in the classical case. As a consequence, while classical sequencing games are convex, relaxed sequencing games need not be convex anymore. To the best of our knowledge there is no general convexity result for specific subclasses of relaxed sequencing games.

In Musegaas et al. (2015) an alternative class of relaxed sequencing games is considered, the class of step out–step in (SoSi) sequencing games. In a SoSi sequencing game a member of a coalition is allowed to step out from his position in the processing order and to step in at any position later in the processing order. By providing an upper bound on the values of the coalitions in a SoSi sequencing game, Musegaas et al. (2015) showed that every SoSi sequencing game has a non-empty core. Also, Musegaas et al. (2015) provided a polynomial time algorithm to determine the value and an optimal processing order for an arbitrary coalition in a SoSi sequencing game. This paper shows, by means of this polynomial time algorithm, that SoSi sequencing games are convex. To prove this, we use a specific feature of the algorithm: for determining an optimal processing order for a coalition, one can use the information of the optimal processing orders of subcoalitions. More precisely, if one wants to know an optimal processing order for a coalition \(S \cup \{i\}\), then the algorithm can start from the optimal processing order found for coalition S. In particular, this helps to analyze the marginal contribution of a player i upon joining coalitions S and T with \(S \subseteq T\) and \(i \not \in T\), and thus it helps to prove the convexity of SoSi sequencing games.

The organization of this paper is as follows. Section 2 recalls basic definitions on one-machine sequencing situations and the formal definition of a SoSi sequencing game. Section 3 identifies a number of important key features of the algorithm of Musegaas et al. (2015) that are especially useful in proving the convexity of SoSi sequencing games. In Sect. 4 the proof of convexity for SoSi sequencing games is provided.

2 SoSi sequencing games

This section recalls basic definitions on one-machine sequencing situations and related SoSi sequencing games.

A one-machine sequencing situation can be summarized by a tuple \((N, \sigma _0, p, \alpha )\), where N is the set of players, each with one job to be processed on the single machine. A processing order of the players can be described by a bijection \(\sigma : N \rightarrow \{1, \ldots , |N|\}\). More specifically, \(\sigma (i)=k\) means that player i is in position k. Let \(\Pi (N)\) denote the set of all such processing orders. The processing order \(\sigma _0 \in \Pi (N)\) specifies the initial order. The processing time \(p_i>0\) of the job of player i is the time the machine takes to process this job. The vector \( p \in \mathbb {R}^N _{++}\) summarizes the processing times. Furthermore, the costs for player i of spending t time units in the system are assumed to be determined by a linear cost function \(c_i : [0, \infty ) \rightarrow \mathbb {R}\) given by \(c_i(t)=\alpha _it\) with \(\alpha _i >0\). The vector \(\alpha \in \mathbb {R}^N _{++}\) summarizes the coefficients of the linear cost functions. It is assumed that the machine starts processing at time \(t=0\), and also that all jobs enter the system at \(t=0\).

The total joint costs of a processing order \(\sigma \in \Pi (N)\) are given by \(\sum _{i \in N}{\alpha _iC_i(\sigma )}\), where \(C_i(\sigma )\) denotes the completion time of player i and is defined by

$$\begin{aligned} C_i(\sigma )=\sum _{j \in N:\sigma (j) \le \sigma (i)}{p_j}. \end{aligned}$$

A processing order is called optimal if it minimizes the total joint costs over all possible processing orders. In Smith (1956) it is shown that in each optimal order the players are processed in weakly decreasing order with respect to their urgency \(u_i\) defined by \(u_i=\frac{\alpha _i}{p_i}\). The maximal total cost savings are equal to the difference in total costs between the initial order and an optimal order.
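As a minimal illustration of these definitions (the function names are ours, not from the literature), completion times, total joint costs and an optimal order according to Smith's rule can be computed as follows:

```python
def completion_times(order, p):
    """C_i(sigma): sum of the processing times of i's job and all jobs
    processed before it, for `order` a list of players and p a dict of
    processing times."""
    C, t = {}, 0
    for i in order:
        t += p[i]
        C[i] = t
    return C

def total_cost(order, p, a):
    """Total joint costs: sum over all players of a_i * C_i(order)."""
    C = completion_times(order, p)
    return sum(a[i] * C[i] for i in order)

def smith_order(players, p, a):
    """Smith's rule: processing the jobs in weakly decreasing order of
    urgency u_i = a_i / p_i minimizes the total joint costs."""
    return sorted(players, key=lambda i: a[i] / p[i], reverse=True)
```

For example, with three players, \(p=(3,2,1)\) and \(\alpha =(4,6,5)\), the urgencies are 4/3, 3 and 5, so Smith's rule yields the order (3 2 1); the initial order (1 2 3) has total costs 72, the Smith order 47, giving maximal total cost savings 25.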

A coalitional game is a pair \((N,v)\), where N denotes a non-empty, finite set of players and \(v: 2^N \rightarrow \mathbb {R}\) assigns a monetary payoff to each coalition \(S \in 2^N\), where \(2^N\) denotes the collection of all subsets of N. In general, the value v(S) equals the highest payoff the coalition S can jointly generate by means of optimal cooperation without help of players in \(N \backslash S\). By convention, \(v(\emptyset )=0\).

To tackle the allocation problem of the maximal cost savings in a sequencing situation \((N, \sigma _0, p, \alpha )\), one can analyze an associated coalitional game \((N,v)\). Here N naturally corresponds to the set of players in the game and, for a coalition \(S \subseteq N\), v(S) reflects the maximal cost savings this coalition can make with respect to the initial order \(\sigma _0\). In order to determine these maximal cost savings, assumptions must be made on the possible reorderings of coalition S with respect to the initial order \(\sigma _0\).

The classical (strong) assumption is that a member of a certain coalition \(S \subset N\) can only swap with another member of the coalition if all players between these two players, according to the initial order, are also members of S. Note that the resulting set of admissible reorderings for a coalition is quite restrictive, because there may be additional reorderings that do not harm the interests of the players outside the coalition.

In a SoSi sequencing game a member of the coalition S is allowed to step out from his position in the processing order and to step in at any position later in the processing order. Note that from an optimality point of view one can assume without loss of generality that a member of S who steps out only steps in at a position directly behind another member of the coalition S. This means that for every player outside S the set of predecessors cannot become larger and his direct follower in the new order was already a follower of his in the initial order. Hence, a processing order \(\sigma \) is called admissible for S in a SoSi sequencing game if

(i) \(P(\sigma ,i) \subseteq P(\sigma _0,i)\) for all \(i \in N \backslash S\),

(ii) \(\sigma ^{-1}(\sigma (i)+1) \in F(\sigma _0,i)\) for all \(i \in N \backslash S\) with \(\sigma (i) \ne |N|\),

where \(P(\sigma ,i)=\{j \in N~|~\sigma (j) < \sigma (i)\}\) denotes the set of predecessors of player i with respect to processing order \(\sigma \) and \(F(\sigma ,i)=\{j \in N~|~\sigma (j) > \sigma (i)\}\) denotes the set of followers. Given an initial order \(\sigma _0\), the set of admissible orders for coalition S is denoted by \(\mathcal {A}(\sigma _0,S)\). Correspondingly, Musegaas et al. (2015) defined the step out–step in (SoSi) sequencing game \((N,v)\) by

$$\begin{aligned} v(S)=\max _{\sigma \in \mathcal {A}(\sigma _0,S)}{\sum _{i \in S}{\alpha _i(C_i(\sigma _0)-C_i(\sigma ))}}, \end{aligned}$$

for all \(S \subseteq N\). A processing order \(\sigma ^* \in \mathcal {A}(\sigma _0,S)\) is called optimal for S if

$$\begin{aligned} \sum _{i \in S}{\alpha _i(C_i(\sigma _0)-C_i(\sigma ^*))}= \max _{\sigma \in \mathcal {A}(\sigma _0,S)}{\sum _{i \in S}{\alpha _i(C_i(\sigma _0)-C_i(\sigma ))}}. \end{aligned}$$

Note that a processing order is admissible for a coalition in a classical sequencing game if condition (i) holds with equality. Therefore, given a coalition, the corresponding set of admissible orders in a SoSi sequencing game is larger than the set of admissible orders in the corresponding classical sequencing game. As a consequence, the values of coalitions in SoSi sequencing games can be larger than in classical sequencing games.

The following example provides an instance of a SoSi sequencing game.

Example 2.1

Consider a one-machine sequencing situation with \(N=\{1,2,3\}\). The vector of processing times is \(p=(3,2,1)\), the vector of coefficients corresponding to the linear cost functions is \(\alpha =(4,6,5)\) and the initial order is \(\sigma _0=(1~2~3)\). Let \((N,v)\) be the corresponding SoSi sequencing game. Table 1 provides the values of all coalitions.

Note that the values of the coalitions in the game \((N,v)\) are equal to the values of the coalitions in the classical sequencing game of this one-machine sequencing situation except for the only disconnected coalition, coalition \(\{1,3\}\). Coalition \(\{1,3\}\) cannot save costs in the classical sequencing game because there exists no admissible order other than the initial order. However, in the SoSi sequencing game coalition \(\{1,3\}\) has two admissible orders:

$$\begin{aligned} \mathcal {A}(\sigma _0, \{1,3\})=\left\{ (1~2~3), (2~3~1)\right\} . \end{aligned}$$

These processing orders are illustrated in Fig. 1. Hence, the value of coalition \(\{1,3\}\) is given by

$$\begin{aligned} v(\{1,3\})= \max \left\{ 0, \sum _{i \in \{1,3\}}{\alpha _i(C_i((1~2~3)){-}C_i((2~3~1)))} \right\} = \max \left\{ 0,-12+15\right\} = 3. \quad \triangle \end{aligned}$$
Table 1 The SoSi sequencing game of Example 2.1
Fig. 1 The two admissible orders for coalition \(\{1,3\}\) in Example 2.1
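On small instances, conditions (i) and (ii) and the value v(S) can be verified by brute force. The sketch below (helper names are ours) enumerates all permutations and keeps the admissible ones:

```python
from itertools import permutations

def admissible(sigma, sigma0, S):
    """SoSi admissibility of order sigma for coalition S w.r.t. initial
    order sigma0: every player outside S gets (i) no new predecessors and
    (ii) a direct successor that already followed him in sigma0."""
    pos = {i: k for k, i in enumerate(sigma)}
    pos0 = {i: k for k, i in enumerate(sigma0)}
    players = set(sigma0)
    for i in players - S:
        pred = {j for j in players if pos[j] < pos[i]}
        pred0 = {j for j in players if pos0[j] < pos0[i]}
        if not pred <= pred0:                  # condition (i)
            return False
        if pos[i] < len(sigma) - 1:            # condition (ii)
            successor = sigma[pos[i] + 1]
            if pos0[successor] <= pos0[i]:
                return False
    return True

def value(sigma0, S, p, a):
    """v(S): maximal cost savings of S over all admissible reorderings."""
    def coalition_cost(sigma):
        t, cost = 0, 0
        for i in sigma:
            t += p[i]
            if i in S:
                cost += a[i] * t
        return cost
    base = coalition_cost(sigma0)
    return max(base - coalition_cost(s)
               for s in permutations(sigma0) if admissible(s, sigma0, S))
```

For the data of Example 2.1 this reproduces \(\mathcal {A}(\sigma _0,\{1,3\})=\{(1~2~3),(2~3~1)\}\) and \(v(\{1,3\})=3\).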

3 On the algorithm for finding the values of the coalitions

Musegaas et al. (2015) provided a polynomial time algorithm to determine an optimal order for every possible coalition and, consequently, the values of the coalitions. For proving convexity of SoSi sequencing games, we use specific key features of this algorithm. In this section we will derive and summarize these specific features. For example, in Theorem 3.4, we will show that in determining an optimal processing order of a coalition \(S \cup \{i\}\) in a SoSi sequencing game, the algorithm can start from the optimal processing order found for coalition S.

We start by recalling some definitions, such as that of components. For \(S \in 2^N \backslash \{\emptyset \}\), \(\sigma \in \Pi (N)\) and \(s,t \in N\) with \(\sigma (s)<\sigma (t)\), define

$$\begin{aligned} S^{\sigma }(s,t)= & {} \left\{ i \in S~|~\sigma (s)< \sigma (i)< \sigma (t) \right\} ,\\ \bar{S}^{\sigma }(s,t)= & {} \left\{ i \in N \backslash S~|~\sigma (s)< \sigma (i) < \sigma (t) \right\} , \\ S^{\sigma }[s,t]= & {} \left\{ i \in S~|~\sigma (s) \le \sigma (i) \le \sigma (t) \right\} ,\\ \bar{S}^{\sigma }[s,t]= & {} \left\{ i \in N \backslash S~|~\sigma (s) \le \sigma (i) \le \sigma (t) \right\} . \end{aligned}$$

The sets of players \(S^{\sigma }[s,t)\), \(\bar{S}^{\sigma }[s,t)\), \(S^{\sigma }(s,t]\) and \(\bar{S}^{\sigma }(s,t]\) are defined in a similar way.

A coalition \(S \in 2^N \backslash \{\emptyset \}\) is called connected with respect to \(\sigma _0\) if for all \(i,j \in S\) and \(k \in N\) such that \(\sigma _0(i)<\sigma _0(k)<\sigma _0(j)\) it holds that \(k \in S\). A connected coalition \(U \subseteq S\) with respect to \(\sigma _0\) is called a component of S with respect to \(\sigma _0\) if \(U \subseteq U' \subseteq S\) and \(U'\) connected with respect to \(\sigma _0\) implies that \(U'=U\). Let \(h(\sigma _0,S) \ge 1\) denote the number of components of S with respect to \(\sigma _0\). The partition of S into components with respect to \(\sigma _0\) is denoted by

$$\begin{aligned} S \backslash \sigma _0 = \left\{ S_1^{\sigma _0}, S_2^{\sigma _0}, \ldots , S_{h(\sigma _0,S)}^{\sigma _0} \right\} , \end{aligned}$$

where for each \(k \in \{1, \ldots , h(\sigma _0,S)-1\}\), \(i \in S_k^{\sigma _0}\) and \(j \in S_{k+1}^{\sigma _0}\) we have \(\sigma _0(i) < \sigma _0(j)\). In the same way, processing order \(\sigma _0\) divides \(N \backslash S\) into subgroups. For this, define

$$\begin{aligned} \begin{aligned} \overline{S}_0^{\sigma _0}&= \left\{ i \in N \backslash S~|~\sigma _0(i)< \sigma _0(j) \text { for all } j \in S_1^{\sigma _0} \right\} ,\\ \overline{S}_{h(\sigma _0,S)}^{\sigma _0}&= \left\{ i \in N \backslash S~|~\sigma _0(i) > \sigma _0(j) \text { for all } j \in S_{h(\sigma _0,S)}^{\sigma _0} \right\} ,\\ \overline{S}_k^{\sigma _0}&= \left\{ i \in N \backslash S~|~\sigma _0(j)< \sigma _0(i) < \sigma _0(l) \text { for all } j \in S_{k}^{\sigma _0}, \text { for all } l \in S_{k+1}^{\sigma _0} \right\} , \end{aligned} \end{aligned}$$

for all \(k \in \{1,\ldots ,h(\sigma _0,S)-1\}\). Notice that \(\overline{S}_0^{\sigma _0}\) and \(\overline{S}_{h(\sigma _0,S)}^{\sigma _0}\) might be empty sets, but \(\overline{S}_k^{\sigma _0} \ne \emptyset \) for all \(k \in \{1,\ldots ,h(\sigma _0, S)-1\}\). See Fig. 2 for an illustration of the subdivision of S and \(N \backslash S\) into subgroups by means of processing order \(\sigma _0\).
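The partition of S into components with respect to an order can be sketched as follows (a hypothetical helper of ours, scanning for maximal consecutive runs of members of S):

```python
def components(order, S):
    """Partition coalition S into its components w.r.t. `order`: maximal
    groups of members of S occupying consecutive positions.  Returned in
    order of appearance, matching the indexing S_1, S_2, ..."""
    comps, current = [], []
    for i in order:
        if i in S:
            current.append(i)
        elif current:                  # a non-member closes the current run
            comps.append(current)
            current = []
    if current:
        comps.append(current)
    return comps
```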

Fig. 2 Partition of the players in S and \(N \backslash S\) with respect to an order \(\sigma _0\)

Note that for given \(S \subseteq N\) it is possible that a processing order \(\sigma \in \mathcal {A}(\sigma _0,S)\) contains fewer components than \(\sigma _0\), because all players of a certain component of S may step out from this component and join other components. For \(\sigma \in \mathcal {A}(\sigma _0,S)\) with \(\sigma _0 \in \Pi (N)\), define modified components \(S_1^{\sigma _0, \sigma }, \dots , S_{h(\sigma _0,S)}^{\sigma _0, \sigma }\) by

$$\begin{aligned} S_k^{\sigma _0,\sigma }=\left\{ i \in S~|~ \sigma (j)<\sigma (i)<\sigma (l) \text { for all } j \in \overline{S}_{k-1}^{\sigma _0}, \text { for all } l \in \overline{S}_{k}^{\sigma _0}\right\} , \end{aligned}$$

for all \(k \in \{1, \ldots , h(\sigma _0,S)\}\). Hence, \(S_k^{\sigma _0,\sigma }\) consists of the group of players that are positioned in processing order \(\sigma \) in between the subgroups \(\overline{S}_{k-1}^{\sigma _0}\) and \(\overline{S}_{k}^{\sigma _0}\).

Note that \(S_k^{\sigma _0, \sigma }\) might be empty for some k while

$$\begin{aligned} \bigcup _{k=1}^{h(\sigma _0,S)}{S_k^{\sigma _0,\sigma }}=S. \end{aligned}$$

Moreover, recall that a player is not allowed to move to an earlier component (condition (i) of admissibility), but he is allowed to move to any position later in the processing order and thus we have

$$\begin{aligned} \bigcup _{k=1}^{l}{S_k^{\sigma _0,\sigma }}\subseteq \bigcup _{k=1}^{l}{S_k^{\sigma _0}}, \end{aligned}$$

for all \(l \in \{1, \ldots , h(\sigma _0,S)\}\). Furthermore, denote the index of the corresponding modified component of player \(i \in S\) in processing order \(\sigma \) with respect to initial processing order \(\sigma _0\) by \(c(i,S,\sigma )\), where

$$\begin{aligned} c(i,S,\sigma )=k \text { if and only if } i \in S_k^{\sigma _0,\sigma }. \end{aligned}$$

Since, by condition (i) of admissibility, the component index of player \(i \in S\) can only increase when passing from \(\sigma _0\) to \(\sigma \), we have

$$\begin{aligned} c(i,S,\sigma ) \ge c(i,S,\sigma _0). \end{aligned}$$

An illustration of the definitions of components, modified components and the index \(c(i,S,\sigma )\) can be found in the following example.

Example 3.1

Consider a one-machine sequencing situation \((N, \sigma _0, p, \alpha )\) with \(S \subseteq N\) such that \(S=\{1,2, \dots , 10\}\). Figure 3a illustrates the initial processing order \(\sigma _0\) and the partition of S into components. Next, consider processing order \(\sigma \), illustrated in Fig. 3b, which is admissible for S. Note that \(\sigma \) contains fewer components than \(\sigma _0\). Figure 3b also illustrates the definition of modified components. Note that there is one modified component that is empty, namely \(S_3^{\sigma _0,\sigma }\). Since player 3 belongs to the first modified component, we have \(c(3,S,\sigma )=1\). Moreover, since player 3 is the only player who belongs to the first modified component, we have \(S^{\sigma _0,\sigma }_1=\{3\}\). Similarly, we have \(c(4,S,\sigma )=c(2,S,\sigma )=2,\) and \(c(i,S,\sigma )= 4,\) for all \(i \in S \backslash \{2,3,4\}\).\(\triangle \)

Fig. 3 Illustration of components and modified components: a the components of S with respect to \(\sigma _0\); b the modified components of S with respect to \(\sigma \) and initial order \(\sigma _0\)

Given a one-machine sequencing situation \((N, \sigma _0, p, \alpha )\) and a coalition \(S \in 2^N \backslash \{\emptyset \}\), the polynomial time algorithm of Musegaas et al. (2015) starts with a preprocessing step. In this preprocessing step, the players within the components of S are reordered such that they are in weakly decreasing order with respect to their urgency. This is done by replacing the initial processing order \(\sigma _0\) by the processing order \(\sigma _0^S\), where \(\sigma _0^S \in \mathcal {A}(\sigma _0,S)\) is the unique urgency respecting processing order such that for all \(i \in S\)

$$\begin{aligned} c(i,S,\sigma _0^S)=c(i,S,\sigma _0), \end{aligned}$$
(1)

where a processing order \(\sigma \in \Pi (N)\) is called urgency respecting with respect to S if

(i) (\(\sigma \) is componentwise optimal) for all \(i,j \in S\) with \(c(i, S, \sigma ) = c(j, S, \sigma )\):

$$\begin{aligned} \sigma (i) < \sigma (j) \Rightarrow u_i \ge u_j. \end{aligned}$$

(ii) (\(\sigma \) satisfies partial tiebreaking) for all \(i,j \in S\) with \(c(i, S, \sigma _0) = c(j, S, \sigma _0)\):

$$\begin{aligned} u_i = u_j, \sigma _0(i)<\sigma _0(j) \Rightarrow \sigma (i) < \sigma (j). \end{aligned}$$

Note that (1) states that all players in S stay in their component, i.e., the partition of S into components stays the same. Condition (i) of urgency respecting states that the players within a component of S are in weakly decreasing order with respect to their urgency. In addition, the tiebreaking rule in condition (ii) ensures that if two players with the same urgency are in the same component of S with respect to \(\sigma _0\), then the player who was first in \(\sigma _0\) is also earlier in processing order \(\sigma \). Note that the partial tiebreaking condition does not say anything about the relative order of two players with the same urgency who are in the same component of S with respect to \(\sigma \) but who were in different components of S with respect to \(\sigma _0\). Therefore, an urgency respecting order need not be unique in general, but \(\sigma _0^S\) is unique because of condition (1).

After the preprocessing step, the players in S are considered in reverse order with respect to \(\sigma _0^S\) and for every player the algorithm checks whether moving the player to a certain position later in the processing order is beneficial. If so, then the algorithm will move this player. The algorithm works in a greedy way in the sense that every player is moved to the position giving the highest cost savings at that moment. Moreover, every player is considered in the algorithm exactly once and every player is moved to another position in the processing order at most once. The obtained processing order after the complete run of the algorithm is denoted by \(\sigma _S\).
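The preprocessing step can be sketched as follows (the function name is ours; exact fractions avoid rounding issues when comparing urgencies, and the stability of Python's sort realizes the partial-tiebreaking rule, since players with equal urgency in the same initial component keep their initial relative order):

```python
from fractions import Fraction

def preprocess(order, S, p, a):
    """Sketch of computing sigma_0^S: within every component of S
    (maximal consecutive run of members of S in `order`), put the
    members in weakly decreasing order of urgency a_i / p_i."""
    order = list(order)
    i = 0
    while i < len(order):
        if order[i] not in S:
            i += 1
            continue
        j = i
        while j < len(order) and order[j] in S:
            j += 1                     # [i, j) is one component of S
        order[i:j] = sorted(order[i:j],
                            key=lambda k: -Fraction(a[k], p[k]))
        i = j
    return order
```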

The following properties follow directly from the definition and the characteristics of the algorithm for finding the optimal processing order \(\sigma _S\) for coalition S and will be used in this paper in order to show that SoSi sequencing games are convex.

  • Property (i): after every step during the run of the algorithm, we have a processing order that is urgency respecting with respect to S.

  • Property (ii): if during the run of the algorithm a player is moved to a position later in the processing order, then this results in strictly positive cost savings which correspond to the highest possible cost savings at that moment. In case of multiple options, we choose the component that is most to the left and, in that component, the position that is most to the left.

  • Property (iii): the mutual order between players who have already been considered will stay the same during the rest of the run of the algorithm.

  • Property (iv): the processing order \(\sigma _S\) is the unique optimal processing order such that no player can be moved to an earlier component while the total costs remain the same. Also, if there are two players with the same urgency in the same component, then the player who was first in \(\sigma _0\) is earlier in processing order \(\sigma _S\).

  • Property (v): if it is admissible with respect to \(\sigma _0\) to move a player to a component more to the left with respect to order \(\sigma _S\), then moving this player to this component will lead to higher total costs.

An interesting property of the urgencies of players in an optimal order is that if two players may admissibly switch positions, then the player with the higher urgency should be positioned first. This is stated in the following proposition.

Proposition 3.1

[cf. Lemma 4.1 in Musegaas et al. (2015)] Let \((N,\sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \in 2^N \backslash \{\emptyset \}\) and let \(\sigma \in \mathcal {A}(\sigma _0,S)\) be an optimal order for S. Let \(k,l \in S\) with \(\sigma (k) < \sigma (l)\) and \(c(l,S,\sigma _0) \le c(k,S,\sigma )\). Then, \(u_k \ge u_l\).

Fig. 4 Initial order \(\sigma _0\) in Example 3.2

From the previous proposition together with the fact that the algorithm moves a player to the left as far as possible (see property (ii) of the algorithm), we have that if the algorithm moves player k to a later component, then the players from coalition S that player k jumps over all have a strictly higher urgency than player k.

In the following example the algorithm is applied to an instance of a SoSi sequencing game. In this example we use the concept of composed costs per time unit and composed processing times, where the composed costs per time unit \(\alpha _U\) and the composed processing time \(p_U\) for a coalition \(U \in 2^N\) are defined by

$$\begin{aligned} \alpha _U=\sum _{i \in U}{\alpha _i}, \end{aligned}$$

and

$$\begin{aligned} p_U= \sum _{i \in U}{p_i}, \end{aligned}$$

respectively.
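Using these composed quantities, the cost savings of moving a player s directly behind a later player t, of the form appearing in the computations of the example below, can be sketched as follows (the function name is ours): every member of S in the interval \((s,t]\) completes \(p_s\) time units earlier, while s itself is delayed by the composed processing time of \((s,t]\).

```python
def move_gain(order, S, p, a, s, t):
    """Savings of moving player s (a member of S, preceding t in `order`)
    to the position directly behind player t:
    alpha_{S(s,t]} * p_s - alpha_s * p_{N(s,t]}."""
    i, j = order.index(s), order.index(t)
    interval = order[i + 1 : j + 1]                   # players in (s, t]
    alpha_S = sum(a[k] for k in interval if k in S)   # composed alpha
    p_N = sum(p[k] for k in interval)                 # composed proc. time
    return alpha_S * p[s] - a[s] * p_N
```

Applied to the data of Example 2.1 (\(p=(3,2,1)\), \(\alpha =(4,6,5)\), \(S=\{1,3\}\)), moving player 1 behind player 3 yields \(5 \cdot 3 - 4 \cdot 3 = 3\), matching \(v(\{1,3\})=3\).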

Example 3.2

Consider a one-machine sequencing situation \((N, \sigma _0, p, \alpha )\) with \(S \subseteq N\) such that \(S=\{1,2, \dots , 10\}\). Figure 4 illustrates the initial order \(\sigma _0\) together with all relevant data on the cost coefficients and processing times (the numbers above and below the players, respectively). The completion times of the players with respect to this initial order are also indicated in the figure (bottom line in bold).

In the preprocessing step of the algorithm, processing order \(\sigma \) is set to processing order \(\sigma _0^S\) (see Fig. 5) and we initialize

$$\begin{aligned} v(S):=\sum _{i \in S}{\alpha _i(C_i(\sigma _0)-C_i(\sigma _0^S))}=187. \end{aligned}$$

Next, the players in S are considered in reverse order with respect to \(\sigma _0^S\) and the algorithm starts with the last player of the penultimate component, which is player 6.

Fig. 5 The processing order \(\sigma \) after the preprocessing step in Example 3.2

Fig. 6 The processing order \(\sigma \) after player 6 is considered in Example 3.2

Fig. 7 The processing order \(\sigma \) after player 5 is considered in Example 3.2

Player 6: if player 6 is moved to the last component, then the position of player 6 should be behind player 7 (since the players in the components must stay in weakly decreasing order with respect to their urgencies, see property (i) of the algorithm). The resulting cost savings are

$$\begin{aligned} \alpha _{S^{\sigma }(6,7]}p_6 - \alpha _6p_{N^{\sigma }(6,7]}&= (\alpha _{10}+\alpha _7) p_6 - \alpha _6 (p_{\overline{S}^{\sigma _0}_3}+p_{10}+p_7)\\&= 11 \cdot 8 - 3 \cdot 19=31. \end{aligned}$$

Hence, we update processing order \(\sigma \) by moving player 6 to the position directly behind player 7 (see Fig. 6) and we set \(v(S):=187+31=218\).

Player 5: according to the given urgencies, player 5 should be moved to the position directly behind player 6 if he is moved to a later component. The resulting cost savings are

$$\begin{aligned} \alpha _{S^{\sigma }(5,6]}p_5 - \alpha _5p_{N^{\sigma }(5,6]}= & {} (\alpha _{10}+\alpha _7+\alpha _6) p_5 - \alpha _5 (p_{\overline{S}^{\sigma _0}_2} +p_{\overline{S}^{\sigma _0}_3}+p_{10}+p_7+p_6)\\= & {} 14 \cdot 9 - 3 \cdot 32=30. \end{aligned}$$

Hence, we update processing order \(\sigma \) by moving player 5 to the position directly behind player 6 (see Fig. 7) and we set \(v(S):=218+30=248\).

Player 4: since all followers of player 4 who are members of S have a lower urgency, it is impossible to reduce the total costs by moving player 4 to a different position (see Proposition 3.1). Hence, \(\sigma \) and v(S) are not changed.

Player 1: there are two components behind player 1. If player 1 is moved to a different component, then the position of player 1 should be either directly behind player 4 or directly behind player 10. The resulting cost savings are 18 and 21, respectively. Hence, player 1 is moved behind player 10 (see property (ii) of the algorithm). Processing order \(\sigma \) is updated (see Fig. 8) and v(S) is increased by 21, so \(v(S):=269\).

Fig. 8 The processing order \(\sigma \) after player 1 is considered in Example 3.2

Player 2: as in the previous step there are again two possibilities, namely moving behind player 4 with cost savings 9 or behind player 10 with cost savings 6. Hence, it is most beneficial to move player 2 behind player 4. Processing order \(\sigma \) is updated (see Fig. 9) and v(S) is increased by 9, so \(v(S):=278\).

Fig. 9 The processing order \(\sigma \) after player 2 is considered in Example 3.2

Player 3: there are two components behind player 3. Note that all players in the last component have a lower urgency than player 3. Therefore, it is impossible to reduce the total costs by moving player 3 to the last component. If player 3 is moved to the second component, then the position of player 3 should be directly behind player 4. The resulting cost savings would be \(-21\), so moving player 3 to the second component does not reduce the total costs. Hence, the order depicted in Fig. 9 is the optimal processing order \(\sigma _S\) for coalition S obtained by the algorithm. Furthermore, \(v(S)=278\). \(\triangle \)

The following proposition, which will frequently be used later on, provides a basic property of composed costs per time unit and composed processing times. Namely, if every player in a set of players U is individually more urgent than a specific player i, then the composed job U as a whole is also more urgent than player i.

Proposition 3.2

Let \(U \subsetneq N\) with \(U \ne \emptyset \) and let \(i \in N \backslash U\). If \(u_i < u_j\) for all \(j \in U\), then

$$\begin{aligned} \frac{\alpha _i}{p_i} < \frac{\alpha _U}{p_U}, \end{aligned}$$

or equivalently,

$$\begin{aligned} \alpha _ip_U-\alpha _Up_i<0. \end{aligned}$$

Proof

Assume \(u_i< u_j\) for all \(j \in U\), i.e., \(\alpha _ip_j<\alpha _jp_i,\) for all \(j \in U\). By adding these |U| inequalities we get \(\alpha _i\sum _{j \in U}{p_j} < p_i\sum _{j \in U}{\alpha _j},\) i.e.,

$$\begin{aligned} \frac{\alpha _i}{p_i}<\frac{\sum _{j \in U}{\alpha _j}}{\sum _{j \in U}{p_j}}=\frac{\alpha _U}{p_U}. \end{aligned}$$

\(\square \)
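Proposition 3.2 can be checked numerically; the sample data below are ours, chosen only for illustration:

```python
from fractions import Fraction

# If player i is strictly less urgent than every player in U, then i is
# also less urgent than the composed job U (Proposition 3.2).
a = {"i": 3, 1: 5, 2: 7, 3: 4}   # cost coefficients (illustrative values)
p = {"i": 8, 1: 6, 2: 9, 3: 2}   # processing times (illustrative values)
U = [1, 2, 3]

u = lambda k: Fraction(a[k], p[k])            # urgency u_k = a_k / p_k
assert all(u("i") < u(j) for j in U)          # premise: u_i < u_j for all j
alpha_U = sum(a[j] for j in U)                # composed cost coefficient
p_U = sum(p[j] for j in U)                    # composed processing time
assert u("i") < Fraction(alpha_U, p_U)        # conclusion: u_i < alpha_U/p_U
assert a["i"] * p_U - alpha_U * p["i"] < 0    # equivalent formulation
```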

The following lemma compares the processing orders that are obtained from the algorithm with respect to coalition S and coalition \(S \cup \{i\}\), in case player \(i \in N \backslash S\) is the only player in the component of \(S \cup \{i\}\) with respect to \(\sigma _0\). This lemma will be the driving force behind Theorem 3.4, which in turn is the crux for proving convexity of SoSi sequencing games.

Lemma 3.3

Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \subsetneq N\) with \(S \ne \emptyset \) and let \(i \in N \backslash S\) be such that \((S \cup \{i\})_{c(i,S \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\). Then, for all \(k \in S\) we have

$$\begin{aligned} c\left( k,S \cup \{i\},\sigma _{S \cup \{i\}}\right) \ge c(k,S \cup \{i\},\sigma _S). \end{aligned}$$

Proof

See Appendix A. \(\square \)

From the previous lemma it follows that if one wants to determine an optimal processing order of a coalition in a SoSi sequencing game, then the information of optimal processing orders of specific subcoalitions can be used. More precisely, if one wants to know the optimal processing order \(\sigma _{S \cup \{i\}}\) derived by the algorithm for a coalition \(S \cup \{i\}\) with \(i \not \in S\) and i being the only player in its component in \(\sigma _0\), then it does not matter whether one takes \(\sigma _0\) or \(\sigma _S\) as initial processing order, as is stated in the following theorem.

Since the initial order will be varied we need some additional notation. We denote the obtained processing order after the complete run of the algorithm for one-machine sequencing situation \((N,\sigma ,p,\alpha )\) with initial order \(\sigma \) and coalition S by \(\text {Alg}((N, \sigma ,p, \alpha ),S)\). Hence, \(\text {Alg}((N, \sigma _0,p, \alpha ),S)=\sigma _S\).

Theorem 3.4

Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \subsetneq N\) with \(S \ne \emptyset \) and let \(i \in N \backslash S\) be such that \((S \cup \{i\})_{c(i,S \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\). Then,

$$\begin{aligned} \sigma _{S \cup \{i\}} = \text {Alg}\left( (N, \sigma _S,p, \alpha ),S \cup \{i\}\right) . \end{aligned}$$

Proof

We start by proving that the minimum costs for coalition \(S \cup \{i\}\) in the sequencing situation \((N, \sigma _0, p, \alpha )\) are equal to the minimum costs for coalition \(S \cup \{i\}\) in the sequencing situation \((N, \sigma _S, p, \alpha )\). Then, we show that the two corresponding sets of optimal processing orders are equal. Finally, the fact that the algorithm always selects a unique processing order among the set of all optimal processing orders (property (iv)) completes the proof.

Note that \(\mathcal {A}(\sigma _S,S \cup \{i\}) \subseteq \mathcal {A}(\sigma _0,S \cup \{i\})\) and thus

$$\begin{aligned} \min _{\sigma \in \mathcal {A}(\sigma _0,S \cup \{i\})}{\sum _{j \in S \cup \{i\}}{\alpha _jC_j(\sigma )}} \le \min _{\sigma \in \mathcal {A}(\sigma _S,S \cup \{i\})}{\sum _{j \in S \cup \{i\}}{\alpha _jC_j(\sigma )}}. \end{aligned}$$
(2)

Moreover, from Lemma 3.3 we know that for all \(k \in S \cup \{i\}\) we have \(c(k,S \cup \{i\},\sigma _{S \cup \{i\}}) \ge c(k,S \cup \{i\},\sigma _S)\) and thus \(\sigma _{S \cup \{i\}} \in \mathcal {A}(\sigma _S,S \cup \{i\})\). As a consequence, since

$$\begin{aligned} \sum _{j \in S \cup \{i\}}{\alpha _jC_j(\sigma _{S \cup \{i\}})} = \min _{\sigma \in \mathcal {A}(\sigma _0,S \cup \{i\})}{\sum _{j \in S \cup \{i\}}{\alpha _jC_j(\sigma )}}, \end{aligned}$$

we have together with (2) that

$$\begin{aligned} \min _{\sigma \in \mathcal {A}(\sigma _0,S \cup \{i\})}{\sum _{j \in S \cup \{i\}}{\alpha _jC_j(\sigma )}} = \min _{\sigma \in \mathcal {A}(\sigma _S,S \cup \{i\})}{\sum _{j \in S \cup \{i\}}{\alpha _jC_j(\sigma )}}. \end{aligned}$$
(3)

Let \(\mathcal {O}(\sigma _0,S \cup \{i\})\) and \(\mathcal {O}(\sigma _S,S \cup \{i\})\) denote the set of optimal processing orders for coalition \(S \cup \{i\}\) in sequencing situations \((N, \sigma _0, p, \alpha )\) and \((N, \sigma _S, p, \alpha )\), respectively. We will show \(\mathcal {O}(\sigma _0,S \cup \{i\}) = \mathcal {O}(\sigma _S,S \cup \{i\})\).

First, take \(\sigma ^* \in \mathcal {O}(\sigma _S,S \cup \{i\})\). Since \(\mathcal {A}(\sigma _S,S \cup \{i\}) \subseteq \mathcal {A}(\sigma _0,S \cup \{i\})\), we have \(\sigma ^* \in \mathcal {A}(\sigma _0,S \cup \{i\})\). Moreover, due to (3), we also have \(\sigma ^* \in \mathcal {O}(\sigma _0,S \cup \{i\})\).

Second, take \(\sigma ^* \in \mathcal {O}(\sigma _0,S \cup \{i\})\). From property (iv) of the algorithm we know that for all \(k \in S \cup \{i\}\) we have \(c(k, S \cup \{i\},\sigma ^*) \ge c(k,S \cup \{i\},\sigma _{S \cup \{i\}})\). Therefore, together with \(c(k,S \cup \{i\},\sigma _{S \cup \{i\}}) \ge c(k,S \cup \{i\},\sigma _S)\) from Lemma 3.3, we know \(\sigma ^* \in \mathcal {A}(\sigma _S,S \cup \{i\})\). Consequently, together with (3), we can conclude \(\sigma ^* \in \mathcal {O}(\sigma _S,S \cup \{i\})\). Hence, we have

$$\begin{aligned} \mathcal {O}(\sigma _0,S \cup \{i\}) = \mathcal {O}(\sigma _S,S \cup \{i\}). \end{aligned}$$

Finally, since among all optimal processing orders the algorithm chooses the one in which every player is in a component as far to the left as possible, and since the algorithm fixes the order within the components (property (iv)), we have \(\sigma _{S \cup \{i\}} = \text {Alg}((N, \sigma _S,p, \alpha ),S \cup \{i\})\). \(\square \)

It readily follows from the previous theorem that no player in a component to the right of player i with respect to \(\sigma _S\) is moved to a different component when applying the algorithm to the one-machine sequencing situation \((N,\sigma _S,p,\alpha )\) and coalition \(S \cup \{i\}\). This is stated in the following proposition.

Proposition 3.5

Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \subsetneq N\) with \(S \ne \emptyset \) and let \(i \in N \backslash S\) be such that \((S \cup \{i\})_{c(i,S \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\). Then for all \(k \in S \cap F(\sigma _S,i)\) we have

$$\begin{aligned} c\left( k, S \cup \{i\}, \sigma _{S \cup \{i\}}\right) =c\left( k, S \cup \{i\}, \sigma _S\right) . \end{aligned}$$

The next proposition states that every player in a component to the left of player i with respect to \(\sigma _S\) is, if moved by the algorithm at all, moved componentwise at least as far as the original component of player i in \(\sigma _0\). As a consequence, all players that are positioned in \(\sigma _{S \cup \{i\}}\) to the left of the original component of player i in \(\sigma _0\) are not moved by the algorithm when going from \(\sigma _S\) to \(\sigma _{S \cup \{i\}}\).

Proposition 3.6

Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \subsetneq N\) with \(S \ne \emptyset \) and let \(i \in N \backslash S\) be such that \((S \cup \{i\})_{c(i,S \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\).

  (i)

    For all \(k \in S\) with \(c(k,S \cup \{i\}, \sigma _{S \cup \{i\}})> c(k,S \cup \{i\},\sigma _S)\) we have

    $$\begin{aligned} c(k,S \cup \{i\},\sigma _{S \cup \{i\}}) \ge c(i,S \cup \{i\},\sigma _0), \end{aligned}$$
  (ii)

    For all \(k \in S\) with \(c(k, S \cup \{i\}, \sigma _{S \cup \{i\}})<c(i,S \cup \{i\},\sigma _0)\) we have

    $$\begin{aligned} c(k, S \cup \{i\}, \sigma _{S \cup \{i\}})=c(k,S \cup \{i\}, \sigma _S). \end{aligned}$$

The previous proposition follows directly from the following, more technical, lemma. This lemma shows that, when applying the algorithm to one-machine sequencing situation \((N,\sigma _S,p,\alpha )\) and coalition \(S \cup \{i\}\), once a predecessor of player i with respect to \(\sigma _S\) is considered by the algorithm, moving this player to a position that is to the left of the original component of player i in \(\sigma _0\) is never beneficial.

Lemma 3.7

Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation, let \(S \subsetneq N\) with \(S \ne \emptyset \) and let \(i \in N \backslash S\) be such that \((S \cup \{i\})_{c(i,S \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\). Let \(m \in S \cap P(\sigma _S,i)\) and \(l \in S \cap F(\tau _m,m)\) with \(c(l, S \cup \{i\}, \tau _m)<c(i, S \cup \{i\},\sigma _0)\). Then

$$\begin{aligned} \alpha _{(S \cup \{i\})^{\tau _m}(m,l]}p_m-\alpha _mp_{N^{\tau _m}(m,l]} \le 0, \end{aligned}$$
(4)

where \(\tau _m\) denotes the processing order during the run of the algorithm for one-machine sequencing situation \((N,\sigma _S,p,\alpha )\) and coalition \(S \cup \{i\}\) just before player m is considered.

Proof

See Appendix B. \(\square \)

4 On the convexity of SoSi sequencing games

A game \(v \in \text {TU}^N\) is called convex if

$$\begin{aligned} v(S \cup \{i\}) - v(S) \le v(T \cup \{i\}) - v(T), \end{aligned}$$
(5)

for all \(S, T \in 2^N \backslash \{\emptyset \}\) and \(i \in N\) such that \(S \subset T \subseteq N \backslash \{i\}\), i.e., the incentive for joining a coalition increases as the coalition grows. Using recursive arguments, it can be seen that in order to prove convexity it is sufficient to show (5) for the case \(|T|=|S|+1\), which boils down to

$$\begin{aligned} v(S \cup \{i\}) - v(S) \le v(S \cup \{j\} \cup \{i\}) - v(S \cup \{j\}), \end{aligned}$$
(6)

for all \(S \in 2^N \backslash \{\emptyset \}\) and \(i,j \in N\) with \(i \ne j\) such that \(S \subseteq N \backslash \{i,j\}\).
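The reduction to (6) can be checked mechanically. The following minimal Python sketch verifies condition (6) for a TU game given explicitly as a table of coalition values; the dictionary representation and the function name are illustrative assumptions, not part of the paper:

```python
from itertools import combinations

def is_convex(players, v):
    """Check convexity of a TU game via condition (6): for every nonempty
    coalition S and distinct players i, j outside S, verify
        v(S + i) - v(S) <= v(S + j + i) - v(S + j).
    `v` maps frozensets of players to coalition values (a hypothetical
    table representation; v must contain every coalition)."""
    n = len(players)
    for r in range(1, n - 1):                 # S nonempty, S subset of N \ {i, j}
        for S in map(frozenset, combinations(players, r)):
            outside = [k for k in players if k not in S]
            for i in outside:
                for j in outside:
                    if i == j:
                        continue
                    lhs = v[S | {i}] - v[S]
                    rhs = v[S | {i, j}] - v[S | {j}]
                    if lhs > rhs + 1e-9:      # tolerance for float values
                        return False
    return True
```

By the reduction above, checking only the case \(|T|=|S|+1\) suffices, which is exactly what the double loop over i and j implements; the full condition (5) then follows recursively.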

The main result of this paper is the following theorem.

Theorem 4.1

Let \((N, \sigma _0, p, \alpha )\) be a one-machine sequencing situation and let (Nv) be the corresponding SoSi sequencing game. Then, (Nv) is convex.

Before presenting the formal proof of our main result, we first highlight some of its important aspects. Following (6), let \(S \in 2^N \backslash \{\emptyset \}\) and let \(i,j \in N\) with \(i \ne j\) be such that \(S \subseteq N \backslash \{i,j\}\).

Note that without loss of generality we can assume

  • Assumption 1: \(\sigma _0(j) < \sigma _0(i)\).

  • Assumption 2: \((S \cup \{j\} \cup \{i\})_{c(j,S \cup \{j\} \cup \{i\},\sigma _0)}^{\sigma _0}=\{j\}\) and \((S \cup \{j\} \cup \{i\})_{c(i,S \cup \{j\} \cup \{i\},\sigma _0)}^{\sigma _0}=\{i\}\).

The first assumption is harmless because of the symmetric roles of i and j in (6). The second assumption states that players i and j are each the only player in their component of \(S \cup \{j\} \cup \{i\}\) with respect to \(\sigma _0\). This is no restriction since it is always possible to add dummy players with zero processing times and zero costs per time unit (a more formal explanation can be found in Appendix C). This assumption facilitates the comparison of the marginal contribution of player i to coalition S with the marginal contribution of player i to coalition \(S \cup \{j\}\). For example, if one determines the optimal processing order for coalition \(S \cup \{i\}\) via initial processing order \(\sigma _S\) and player i is the only player in its component of \(S \cup \{j\} \cup \{i\}\) with respect to \(\sigma _0\) (and thus also with respect to \(\sigma _S\)), then the players of coalition \(S \cup \{i\}\) are already ordered within every component with respect to their urgency, and thus the preprocessing step of the algorithm can be skipped. As a consequence, the marginal contribution of player i to coalition S can be written as the sum of the positive cost differences of the players who are moved by the algorithm to a different component.

In order to denote the different types of players that are moved, we introduce the following notation. For \(U \in 2^N \backslash \{\emptyset \}\) and \(k \in N\) such that \(U \subseteq N \backslash \{k\}\) and \((U \cup \{k\})_{c(k,U \cup \{k\},\sigma _0)}^{\sigma _0}=\{k\}\), let \(M^k(U)\) denote the set of players who are moved to a different component during the run of the algorithm with respect to the one-machine sequencing situation \((N, \sigma _U, p, \alpha )\) and coalition \(U \cup \{k\}\). Since the algorithm only moves players to components that are to the right of their original component in \(\sigma _U\), we have

$$\begin{aligned} M^k(U) = \left\{ l \in N~|~c(l,\sigma _{U \cup \{k\}})>c(l,\sigma _U) \right\} . \end{aligned}$$

As the algorithm moves only players of the coalition \(U \cup \{k\}\), leaving all players outside this coalition in place, we have

$$\begin{aligned} M^k(U) \subseteq \left( U \cup \{k\} \right) . \end{aligned}$$
(7)

Moreover, from Propositions 3.5 and 3.6 it follows, respectively, that

$$\begin{aligned} M^k(U) \subseteq \left( P(\sigma _U,k) \cup \{k\} \right) , \end{aligned}$$
(8)

and

$$\begin{aligned} c(l,\sigma _{U \cup \{k\}}) \ge c(k,\sigma _U), \end{aligned}$$
(9)

for all \(l \in M^k(U)\).

In order to prove Theorem 4.1 we need to compare the marginal contribution of player i to coalition S with the marginal contribution of player i to coalition \(S \cup \{j\}\). As argued above, both marginal contributions can be written as the sum of the positive cost differences of the players who are moved by the algorithm to a different component. In order to compare those cost differences more easily, we first partition the players in \(M^i(S)\), based on their positions in the processing orders \(\sigma _S\) and \(\sigma _{S \cup \{j\}}\), into four subsets. Second, we derive from \(\sigma _S\) a special processing order \(\overline{\sigma }\) in such a way that all players from \(M^i(S)\) are in the same component in \(\overline{\sigma }\) as in \(\sigma _{S \cup \{j\}}\). The convexity proof is finished by adequately comparing all positive cost differences.

Proof of Theorem 4.1

Let \(S \in 2^N \backslash \{\emptyset \}\) and \(i,j \in N\) with \(i \ne j\) be such that \(S \subseteq N \backslash \{i,j\}\). We will prove

$$\begin{aligned} v(S \cup \{i\}) - v(S) \le v(S \cup \{j\} \cup \{i\}) - v(S \cup \{j\}). \end{aligned}$$
(10)

We partition the players in \(M^i(S)\), based on their positions in the processing orders \(\sigma _S\) and \(\sigma _{S \cup \{j\}}\), into four subsets. First, note that from (7) it follows that \(M^i(S) \subseteq S \cup \{i\}\) and thus \(j \not \in M^i(S)\). From (8) it follows that every player in \(M^i(S)\) is in \(\sigma _S\) to the left of player i, or is player i himself. By Assumption 1, player j is to the left of player i in \(\sigma _0\) (and thus also in \(\sigma _S\)). So, we can split \(M^i(S)\) into the following two disjoint sets:

  • \(M_1^i(S)\): the set of players in \(M^i(S)\) who are in \(\sigma _S\) to the left of player j,

  • \(M_2^i(S)\): the set of players in \(M^i(S)\) who are in \(\sigma _S\) between player j and player i, or player i himself.

Based on the position in \(\sigma _{S \cup \{j\}}\), we can split \(M_1^i(S)\) into another three disjoint subsets:

  • \(M_{1a}^i(S)\): the set of players in \(M_1^i(S)\) who are in \(\sigma _{S \cup \{j\}}\) to the left of the original component of player j,

  • \(M_{1b}^i(S)\): the set of players in \(M_1^i(S)\) who are in \(\sigma _{S \cup \{j\}}\) between the original components of player j and player i, or in the original component of player j,

  • \(M_{1c}^i(S)\): the set of players in \(M_1^i(S)\) who are in \(\sigma _{S \cup \{j\}}\) to the right of the original component of player i.

From Proposition 3.5 it follows that all players in \(M_2^i(S)\) are in \(\sigma _{S \cup \{j\}}\) between the original components of player j and player i, so we do not split \(M_2^i(S)\) further into subsets. We now have a partition of \(M^i(S)\) into four subsets, namely \(\{M_{1a}^i(S), M_{1b}^i(S), M_{1c}^i(S), M_2^i(S)\}\). Moreover, if \( i \in M^i(S)\) then \(i \in M_2^i(S)\).

The definition of the partition of \(M^i(S)\) into four subsets specifies the positions of the corresponding players in the processing orders \(\sigma _S\) and \(\sigma _{S \cup \{j\}}\). The following four claims indicate how the partition also determines the positions in the two other processing orders \(\sigma _{S \cup \{i\}}\) and \(\sigma _{S \cup \{j\} \cup \{i\}}\). For notational convenience, we denote \(c(k,S \cup \{i\} \cup \{j\}, \sigma )\) by \(c(k,\sigma )\) for every \(k \in S \cup \{i\} \cup \{j\}\) and \(\sigma \in \Pi (N)\).

  • Claim 1 \(c(k,\sigma _{S \cup \{j\} \cup \{i\}}) = c(k,\sigma _{S \cup \{i\}}) \ge c(i,\sigma _S)\) for all \(k \in M^i(S)\).

  • Claim 2 \(c(k,\sigma _{S \cup \{j\} \cup \{i\}}) = c(k,\sigma _{S \cup \{j\}})\) for all \(k \in M^i_{1c}(S)\).

  • Claim 3 \(c(k,\sigma _S) = c(k,\sigma _{S \cup \{j\} })\) for all \(k \in M_2^i(S)\).

  • Claim 4 \(c(k,\sigma _S) = c(k,\sigma _{S \cup \{j\} })\) for all \(k \in M_{1a}^i(S)\).

The proofs of these four claims can be found in Appendix D. Figure 10 illustrates, for each of the four partition elements of \(M^i(S)\), its position with respect to the original components of player i and player j in the four different processing orders. The solid arrows indicate the original components and/or the actual positions of players i and j. The dotted arrows indicate possible positions of player i or j.

Fig. 10

The position of the players in \(M^i(S)\) in the four different processing orders. a Processing order \(\sigma _S\). b Processing order \(\sigma _{S \cup \{i\}}\). c Processing order \(\sigma _{S \cup \{j\}}\). d Processing order \(\sigma _{S \cup \{j\} \cup \{i\}}\)

We define \(\overline{\sigma } \in \Pi (N)\) as the unique urgency respecting processing order that satisfies

  (i)

    for all \(k \in M^i(S)\):

    $$\begin{aligned} c(k,\overline{\sigma })=c(k,\sigma _{S \cup \{j\}}), \end{aligned}$$
    (11)
  (ii)

    for all \(k \in S \backslash M^i(S)\):

    $$\begin{aligned} c(k,\overline{\sigma })=c(k,\sigma _S), \end{aligned}$$
  (iii)

    for all \(k, l \in S\) with \(c(k,\overline{\sigma })=c(l,\overline{\sigma })\):

    $$\begin{aligned} u_k=u_l, \sigma _0(k)<\sigma _0(l) \Rightarrow \overline{\sigma }(k) < \overline{\sigma }(l). \end{aligned}$$

Note that conditions (i) and (ii) determine the components for the players in S. Next, the urgency respecting requirement determines the order within the components for the players with different urgencies. Finally, in case two players in the same component have the same urgency, item (iii) states a tiebreaking rule. As a consequence of this tiebreaking rule, \(\overline{\sigma }\) is unique.
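Conditions (i) to (iii) together amount to a single lexicographic sort: first by component, then by weakly decreasing urgency, then by initial position. A minimal sketch, assuming component indices, urgencies \(u_k = \alpha _k / p_k\) and initial positions are given as dictionaries (hypothetical names, not the paper's notation):

```python
def urgency_respecting_order(players, component, urgency, sigma0):
    """Return the unique urgency-respecting order: players are grouped by
    component, sorted within each component by weakly decreasing urgency,
    with ties broken by the initial position sigma0 (condition (iii)).
    The dict-based representation is an illustrative assumption."""
    return sorted(players, key=lambda k: (component[k], -urgency[k], sigma0[k]))
```

Since Python's sort is stable and tuple keys compare lexicographically, the tiebreaking rule is encoded simply by making sigma0 the last key component.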

Note that \(\overline{\sigma }\) can be considered as a temporary processing order when going from \(\sigma _S\) to \(\sigma _{S \cup \{i\}}\) (cf. Fig. 11). The processing order \(\overline{\sigma }\) is derived from \(\sigma _S\) in such a way that all players from \(M^i(S)\) are in the same component in \(\overline{\sigma }\) as in \(\sigma _{S \cup \{j\}}\). From Claims 3 and 4 it follows that the players in \(M^i_{1a}(S)\) and \(M^i_2(S)\) are in the same component in \(\sigma _S\) as in \(\sigma _{S \cup \{j\}}\), so those players do not need to be moved. Hence, starting from \(\sigma _S\), we move all players in \(M^i_{1b}(S)\) and \(M^i_{1c}(S)\) to the components they occupy in \(\sigma _{S \cup \{j\}}\). Note that since the tiebreaking rule in condition (iii) is the same as in property (iv) of the algorithm, the mutual order of the players in \(M^i(S)\) is the same in \(\overline{\sigma }\) as in \(\sigma _{S \cup \{j\}}\).

Fig. 11

Overview how to obtain \(\sigma _{S \cup \{i\}}\) from \(\sigma _S\) via \(\overline{\sigma }\)

An illustration of the position of the players in \(M^i(S)\) in \(\overline{\sigma }\) can be found in Fig. 12. Note that since \(i \not \in S \cup \{j\}\) it follows that \(c(i,\sigma _S)=c(i,\sigma _{S \cup \{j\}})\). Moreover, we note that \(\overline{\sigma }\) and \(\sigma _{S \cup \{j\}}\) are not necessarily equal, as the players in \(M^j(S) \backslash M^i(S)\) are in different components in \(\overline{\sigma }\) and \(\sigma _{S \cup \{j\}}\). However, as the players in \(M^j(S)\) are moved to a component to the right when going from \(\sigma _S\) to \(\sigma _{S \cup \{j\}}\), we have

$$\begin{aligned} c(k, \sigma _{S \cup \{j\}}) \ge c(k,\overline{\sigma }), \end{aligned}$$
(12)

for all \(k \in S \cup \{j\} \cup \{i\}\).

Fig. 12

The position of the players in \(M^i(S)\) in \(\overline{\sigma }\)

Now we consider the transition from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\) and the corresponding cost differences. Since the players in \(M_{1c}^i(S)\) are in \(\overline{\sigma }\) already in the component they occupy in \(\sigma _{S \cup \{i\}}\), only the players in \(M^i_{1a}(S)\), \(M^i_{1b}(S)\) and \(M^i_{2}(S)\) need to be moved to a component to the right when going from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\) (see also Fig. 11). We go from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\) by considering the players in \(M^i_{1a}(S)\), \(M^i_{1b}(S)\) and \(M^i_2(S)\) in the order reverse to their order in \(\overline{\sigma }\), i.e., the players are considered from right to left. For \(k \in M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\), denote the processing order just before player k is moved by \(\overline{\tau }_k\) and let \(\overline{r}_k\) denote the player behind which player k is moved. The cost difference for coalition \(S \cup \{i\}\) due to moving this player, when going from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\), is denoted by \(\overline{\delta }_k\), i.e.,

$$\begin{aligned} \overline{\delta }_k= \alpha _{(S \cup \{i\})^{\overline{\tau }_k}(k,\overline{r}_k]}p_k-\alpha _kp_{N^{\overline{\tau }_k}(k,\overline{r}_k]}. \end{aligned}$$

Similarly, we can write the marginal contribution of player i to coalition \(S \cup \{j\}\) as the sum of the positive cost differences of the players in \(M^i(S \cup \{j\})\). We go from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\) by considering the players in \(M^i(S \cup \{j\})\) in the order reverse to their order in \(\sigma _{S \cup \{j\}}\), i.e., the players are considered from right to left. We note that since the mutual order of the players in \(M^i(S)\) is the same in \(\overline{\sigma }\) and \(\sigma _{S \cup \{j\}}\), the order in which the players in \(M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\) are considered when going from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\) is the same as the order in which they are considered when going from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\). For \(k \in M^i(S \cup \{j\})\), denote the processing order just before player k is moved by \(\tau _{k}\) and let \(r_k\) denote the player behind which player k is moved. The cost difference for coalition \(S \cup \{j\} \cup \{i\}\) due to moving this player, when going from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\), is denoted by \(\delta _k\), i.e.,

$$\begin{aligned} \delta _k= \alpha _{(S \cup \{j\} \cup \{i\})^{\tau _k}(k,r_k]}p_k-\alpha _kp_{N^{\tau _k}(k,r_k]}. \end{aligned}$$
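Both \(\overline{\delta }_k\) and \(\delta _k\) are instances of the same elementary quantity: the net cost change for a coalition when player k is moved directly behind a player to his right. A minimal Python sketch of this quantity (the function name and the list/dict data layout are assumptions for illustration):

```python
def move_saving(order, k, r, coalition, alpha, p):
    """Net cost saving for `coalition` when player k is moved directly behind
    player r (with r to the right of k) in `order`:
        p_k * (sum of alpha_j over coalition members j in (k, r])
        - alpha_k * (sum of p_j over all players j in (k, r]).
    Every coalition member that k jumps over completes p_k earlier, while k
    himself is delayed by the total processing time he jumps over."""
    i_k, i_r = order.index(k), order.index(r)
    between = order[i_k + 1 : i_r + 1]   # the players in the interval (k, r]
    gain = p[k] * sum(alpha[j] for j in between if j in coalition)
    loss = alpha[k] * sum(p[j] for j in between)
    return gain - loss
```

Property (ii) of the algorithm guarantees that this quantity is strictly positive for every move the algorithm actually performs.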

From (12) together with the fact that the players in \(M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\) are moved to the same component in \(\sigma _{S \cup \{i\}}\) and \(\sigma _{S \cup \{j\} \cup \{i\}}\), and the fact that the players in \(M^j(S) \backslash M^i(S)\) are moved to a component to the right, we have

$$\begin{aligned} c(l,\tau _k) \ge c(l,\overline{\tau }_k), \end{aligned}$$
(13)

for all \(k \in M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\) and \(l \in S \cup \{j\} \cup \{i\}\).

The following claim states that the cost savings from moving a player in \(M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\) when going from \(\overline{\sigma }\) to \(\sigma _{S \cup \{i\}}\) are at most the cost savings from moving the same player when going from \(\sigma _{S \cup \{j\}}\) to \(\sigma _{S \cup \{j\} \cup \{i\}}\).

Claim 5 \(\overline{\delta }_k \le \delta _k\) for all \(k \in M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)\).

Proof

The proof can be found in Appendix E.

We are now ready to prove (10) by means of a chain of equalities and inequalities whose steps are labeled (i) to (xi); a detailed explanation of each step is given below. The chain starts from \(v(S \cup \{i\}) - v(S)\) and ends with \(v(S \cup \{j\} \cup \{i\}) - v(S \cup \{j\})\), which proves (10).

Explanations

  (i)

    The extra worth obtained by adding player i to coalition S can be split into two parts. The first part is due to the fact that player i joins the coalition and represents the cost savings for player i in processing order \(\sigma _S\) compared to \(\sigma _0\): without any players being moved, the completion time of player i is reduced by the sum of the processing times of the players that jumped over player i when going from \(\sigma _0\) to \(\sigma _S\). The second part represents the cost savings for coalition \(S \cup \{i\}\) obtained by additionally moving players when going from \(\sigma _S\) to the optimal processing order \(\sigma _{S \cup \{i\}}\).

  (ii)

    The optimal processing order \(\sigma _{S \cup \{i\}}\) can be obtained from \(\sigma _S\) via \(\overline{\sigma }\), in which some players are already (partially) moved to the right.

  (iii)

    The cost difference for coalition \(S \cup \{i\}\) when going from \(\sigma _S\) to \(\overline{\sigma }\) can be split into two parts: the cost difference for coalition S and the cost difference for player i. By the definition of \(\overline{\sigma }\) and since \(i \not \in S \cup \{j\}\), player i is not moved when going from \(\sigma _S\) to \(\overline{\sigma }\) and the completion time of player i is reduced by the sum of the processing times of the players that jumped over player i when going from \(\sigma _S\) to \(\overline{\sigma }\), i.e., the sum of the processing times of the players in \(M^i_{1c}(S)\).

  (iv)

    Processing order \(\sigma _S\) is optimal for coalition S and thus \(C(\sigma _S,S) -C(\overline{\sigma },S) \le 0\).

  (v)

    This follows from the definition of \(\overline{\delta }_k\).

  (vi)

    This follows from Claim 5.

  (vii)

    This follows from \((M^i_{1a}(S) \cup M^i_{1b}(S) \cup M^i_2(S)) \subseteq M^i(S \cup \{j\})\) (cf. Fig. 10) and \(\delta _k > 0\) for all \(k \in M^i(S \cup \{j\})\) due to property (ii) of the algorithm.

  (viii)

    This follows from the definition of \(\delta _k\).

  (ix)

    This follows from \(M^i_{1c}(S) \subseteq (P(\sigma _S,i) \cap F(\sigma _{S \cup \{j\}},i))\) (cf. Fig. 10).

  (x)

    The group of players that jumped over player i when going from \(\sigma _0\) to \(\sigma _{S \cup \{j\}}\) can be split into two groups: the players that already jumped over player i when going from \(\sigma _0\) to \(\sigma _S\), and the players that were positioned in front of player i in \(\sigma _S\) but jumped over player i when going from \(\sigma _S\) to \(\sigma _{S \cup \{j\}}\). Hence, \(\{P(\sigma _0,i) \cap F(\sigma _S,i), P(\sigma _S,i) \cap F(\sigma _{S \cup \{j\}},i)\}\) is a partition of \(P(\sigma _0,i) \cap F(\sigma _{S \cup \{j\}},i)\).

  (xi)

    Similar to the explanation in (i).

To conclude, we have shown \(v(S \cup \{i\}) - v(S) \le v(S \cup \{j\} \cup \{i\}) - v(S \cup \{j\})\) which proves the convexity of SoSi sequencing games. \(\square \)