Abstract
We study the shared processor scheduling problem with a single shared processor to maximize total weighted overlap, where the overlap for a job is the amount of time it is processed on its private and shared processor in parallel. A polynomial-time optimization algorithm has been given for the problem with equal weights in the literature. This paper extends that result by showing an \(O(n \log n)\)-time optimization algorithm for a class of instances in which the nondecreasing order of jobs with respect to processing times provides a nonincreasing order with respect to weights—this class of instances generalizes the unweighted case of the problem. This algorithm also leads to a \(\frac{1}{2}\)-approximation algorithm for the general weighted problem. The complexity of the weighted problem remains open.
Keywords
Divisible jobs · Scheduling · Shared processor
1 Introduction
We consider a subcontracting system where each agent j has a job with processing time \(p_j\) to be executed. Such an agent can perform the work by itself on its private processor, in which case the job completes after \(p_j\) units of time, or it can send (subcontract) a part (overlap) of length \(t_j\le p_j/2\) of this job to a subcontractor for processing on a shared processor. The subcontractor needs to complete this part of agent j's job by \(p_j-t_j\), bearing in mind that the shared processor can do at most one job at a time, which obviously constrains the subcontractor. The speedup in terms of the completion time that the agent achieves in this scenario is exactly \(t_j\); in other words, the work of agent j is completed at time \(p_j-t_j\). Whenever \(t_j>0\), the subcontractor is paid by agent j: the payoff for executing \(t_j\) units of the jth agent's job is \(t_jw_j\). The goal of the subcontractor is to maximize its total weighted overlap (equivalently, the total weighted payoff to the subcontractor, or the total weighted time savings to the agents, depending on interpretation). Thus, in this subcontracting system all agents try to minimize the completion times of their jobs (the parameters \(p_j\) and \(w_j\) are fixed for each agent j) by commissioning the biggest possible parts of their jobs to the subcontractor. The subcontractor is the party that decides the overlaps \(t_j\) so as to maximize the total weighted overlap.
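The bookkeeping of the model above can be sketched in a few lines; `agent_outcome` is a hypothetical helper name introduced here for illustration, not part of any cited work:

```python
def agent_outcome(p, t, w):
    """Outcome of subcontracting t units of a job of length p and weight w.

    Hypothetical helper illustrating the model: the private and shared
    parts run in parallel, so the agent's job completes at p - t, and
    the subcontractor is paid t * w. The overlap is capped at p / 2,
    since the shared part must itself finish by time p - t.
    """
    assert 0 <= t <= p / 2, "overlap cannot exceed half the processing time"
    completion_time = p - t  # the agent's job finishes this much earlier
    payoff = t * w           # the subcontractor's weighted overlap for this job
    return completion_time, payoff
```

For example, subcontracting one unit of a job with \(p_j=4\) and \(w_j=3\) completes the job at time 3 and pays 3 to the subcontractor.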
The shared processor scheduling problem can be placed in a wider context of scheduling in the presence of private (local) processors (machines), available only to a particular job or a set of jobs, and shared (global) processors that are available to all jobs. Additional rules then specify the conditions under which a job can gain access to a shared processor in such systems. These systems can be run as either centralized or decentralized. The former typically has a single optimization criterion forcing all parties to pursue the same goal. The latter emphasizes that each party is trying to optimize its own goal, which may (and often does) lead to problems having no solutions that are optimal for each agent (job) individually. These problems can be seen as multicriteria optimization or coordination problems. The latter can be further subdivided into problems in which agents have complete knowledge about the resources of other agents (complete information games) and problems without such complete knowledge (distributed systems) in the search for coordinating mechanisms. This research falls into the category of centralized problems, as the subcontractor decides on the schedule that reflects its best interest.
The outline of this paper is as follows. In the next section, we briefly survey the related work to provide a state-of-the-art overview. Section 3 gives a formal statement of the shared processor scheduling problem and introduces the necessary notation. Then, in Sect. 4, we recall some facts related to the problem, mainly that when computing optimal schedules one may restrict attention to schedules that are called synchronized. This greatly simplifies the formal arguments and the algorithmic approach. Section 5 considers a restricted version of the problem in which it is assumed that, for any pair of jobs, neither job has both weight and processing time strictly smaller than the other's. We give an \(O(n\log n)\)-time optimization algorithm for this case, and we use it subsequently as a building block to obtain an \(O(n\log n)\)-time 1/2-approximation algorithm for the general case in Sect. 6.
2 Related work
The shared processor scheduling problem has recently been studied by Vairaktarakis and Aydinliyim (2007), Hezarkhani and Kubiak (2015), and Dereniowski and Kubiak (2017). Vairaktarakis and Aydinliyim (2007) consider the unweighted problem with a single shared processor and with each job allowed to use at most one time interval on the shared processor. This case is sometimes referred to as nonpreemptive since jobs are not allowed preemption on the shared processor. Vairaktarakis and Aydinliyim (2007) prove that there are optimal schedules that complete each job's execution on its private and the shared processor at the same time; we call such schedules synchronized. They further show that this guarantees that sequencing jobs in nondecreasing order of their processing times leads to an optimal solution for this case. Hezarkhani and Kubiak (2015) observe that this algorithm also gives optimal solutions to the preemptive unweighted problem where more than one interval can be used by a job on the shared processor. Dereniowski and Kubiak (2017) consider the shared multiprocessor problem, proving its strong NP-hardness and giving an efficient, polynomial-time algorithm for the shared multiprocessor problem with equal weights. It is also shown in Dereniowski and Kubiak (2017) that synchronized optimal schedules always exist for weighted multiprocessor instances. Vairaktarakis and Aydinliyim (2007), Vairaktarakis (2013), and Hezarkhani and Kubiak (2015) also study decentralized subcontracting systems, focusing on coordinating mechanisms to ensure their efficiency. Hezarkhani and Kubiak (2015) show such a coordination mechanism for the unweighted problem and give examples where such schemes do not exist for the weighted problem.
The motivation to study the shared processor scheduling problem comes from diverse applications. Vairaktarakis and Aydinliyim (2007) consider it in the context of supply chains where subcontracting allows jobs to reduce their completion times by using a shared subcontractor's processor. Bharadwaj et al. (2003) use divisible load scheduling to reduce job completion times in parallel and distributed computer systems, and Anderson (1981) argues for using batches of potentially infinitely small items that can be processed independently of other items of the batch in scheduling job shops. We refer the reader to Dereniowski and Kubiak (2017) for more details on these applications.
We also remark on multi-agent scheduling models in which each agent has its own optimality criterion and performs actions aimed at optimizing it. In these models, which are examples of decentralized systems, agents usually have a number of non-divisible jobs to execute (depending on the optimization criterion, this may be seen as having one divisible job, but restricted by allowing preemptions only at certain specified points). For the minimization of weighted total completion time in such models see Lee et al. (2009), and for the weighted number of tardy jobs see Cheng et al. (2006). Bukchin and Hanany (2007) give an example of a game-theoretic analysis of a problem of this type. For overviews and further references on multi-agent scheduling we refer to the book by Agnetis et al. (2014).
3 Problem formulation
We are given a set \(\mathcal {J}\) of n preemptive jobs. Each job \(j\in \mathcal {J}\) has its processing time \(p_j\) and weight \(w_j\). With each job \(j\in \mathcal {J}\) we associate its private processor denoted by \(\mathcal {P}_j\). Moreover, there exists a single shared processor, denoted by \(\mathcal {M}\), that is available to all jobs. We follow the convention and notation from Dereniowski and Kubiak (2017) to formulate the problem in this paper. A schedule \(\mathcal {S}\) for \(\mathcal {J}\) is feasible if the following conditions hold:

each job \(j\in \mathcal {J}\) executes nonpreemptively in a single time interval \((0,C_{\mathcal {S}}^{\mathcal {P}}(j))\) on its private processor and there is a (possibly empty) collection \(\mathcal {I}_j\) of open, nonempty intervals such that j executes nonpreemptively in each time interval \(I\in \mathcal {I}_j\) on the shared processor,
 for each job \(j\in \mathcal {J}\),$$\begin{aligned} C_{\mathcal {S}}^{\mathcal {P}}(j)+\sum _{I\in \mathcal {I}_j}|I| = p_j, \end{aligned}$$

the time intervals in \(\bigcup _{j\in \mathcal {J}}\mathcal {I}_j\) are pairwise disjoint (i.e., at most one job on \(\mathcal {M}\) at a time).

Instance A set \(\mathcal {J}\) of n jobs with given processing times and weights.

Goal Find a feasible schedule for \(\mathcal {J}\) that maximizes the total weighted overlap \(\varSigma (\mathcal {S})=\sum _{j\in \mathcal {J}}w_j\sum _{I\in \mathcal {I}_j}|I|\).
4 Preliminaries
This section provides a brief discussion of the main characteristics of optimal schedules on the shared processor. These characteristics simplify the formal arguments and the algorithmic approach in Sects. 5 and 6.
Observation 1
(Dereniowski and Kubiak 2017) There exists an optimal schedule that has no gaps.
A schedule \(\mathcal {S}\) is called synchronized if:
 (i)
\(\mathcal {S}\) is nonpreemptive and has no gaps,
 (ii)
for each job j that appears on the shared processor it holds \(C_{\mathcal {S}}^{\mathcal {M}}(j)=C_{\mathcal {S}}^{\mathcal {P}}(j)\).
Theorem 1
(Dereniowski and Kubiak 2017) There exists an optimal synchronized schedule.
5 An \(O(n \log n)\)-time optimal algorithm for antithetical instances
We call an instance \(\mathcal {J}\) of the problem antithetical if for any two jobs i and j it holds: \(p_i\le p_j\) implies \(w_i \ge w_j\). We call a schedule \(\mathcal {S}\) processing time ordered if \(\mathcal {S}=(j_1,\ldots ,j_n)\), where \(p_{j_i}\le p_{j_{i+1}}\) for each \(i\in \{1,\ldots ,n-1\}\). In other words, all jobs are present on the shared processor, they are arranged according to the nondecreasing order of their processing times, and the schedule is synchronized. The definition is correct since, by the construction at the end of Sect. 4, \(\mathcal {S}\) is synchronized and all jobs from \(\mathcal {J}\) appear on the shared processor; see Vairaktarakis and Aydinliyim (2007) and Hezarkhani and Kubiak (2015). We prove below that an optimal solution for an antithetical instance is a processing time ordered schedule. We remark that the resulting algorithm generalizes the previously known solutions for the unweighted case (\(w_1=\cdots =w_n\)) from Hezarkhani and Kubiak (2015) and Vairaktarakis and Aydinliyim (2007).
Before giving the main result of this section in Lemma 3, we prove a technical lemma which shows how to transform a schedule with \(k-1\) synchronized jobs, i.e., jobs completing on the shared and private processors at the same time, into a schedule that has k jobs synchronized. This transformation will be used in the proof of Lemma 3.
Lemma 2
 (i)
exactly the jobs \(j_1,\ldots ,j_k\) appear on the shared processor in \(\mathcal {S}'\) in this order and \(C_{\mathcal {S}'}^{\mathcal {M}}(j_{i})<C_{\mathcal {S}'}^{\mathcal {P}}(j_{i})\), or
 (ii)
exactly the jobs \(j_1,\ldots ,j_{i-1},j_{i+1},\ldots ,j_k\) appear on the shared processor in \(\mathcal {S}'\) in this order, \(p_{j_i}>s_{\mathcal {S}'}^{\mathcal {M}}(j_{i+1})\), and \(j_i\) does not appear on \(\mathcal {M}\).
Proof
The transformation described in the proof is shown in Fig. 2. Informally speaking, we obtain \(\mathcal {S}''\) by moving a part of \(j_i\) from its private processor to the shared processor \(\mathcal {M}\), so that \(j_i\) becomes synchronized, i.e., it ends on both of these processors at the same time. This move forces all jobs that follow \(j_i\) on \(\mathcal {M}\), i.e., the jobs \(j_{i+1},\ldots ,j_k\), to be postponed on \(\mathcal {M}\) as described below. Note that the transformation is exactly the same for both case (i) and case (ii) but Fig. 2 depicts case (i) only.
We remark that it is still premature to conclude from Lemma 2 that no job is missing on \(\mathcal {M}\) in an optimal schedule \(\mathcal {S}\). This would simplify the proof of Lemma 3 as it would eliminate case (5) in the proof. However, an insertion of a job missing on \(\mathcal {M}\) in \(\mathcal {S}\) requires the jobs on \(\mathcal {M}\) to the right of the insertion point to be ordered according to the nondecreasing order of processing times. Otherwise, if this key assumption in Lemma 2 is not met, then the synchronized schedule \(\mathcal {S}''\) from Lemma 2 may not be feasible or may not satisfy the formula for its total weighted overlap given in the lemma. Unfortunately, at this point we cannot guarantee that \(\mathcal {S}\) satisfies the assumption.
Lemma 3
An optimal schedule for an antithetical instance of the problem \({\texttt {WSPS}}\) is a processing time ordered schedule.
Proof
Let \(\mathcal {S}\) be an optimal schedule for an antithetical instance \(\mathcal {J}\). By Theorem 1 we can assume that \(\mathcal {S}\) is synchronized. We assume without loss of generality that the jobs in \(\mathcal {J}=\{1,\ldots ,n\}\) are ordered in nondecreasing order of their processing times, i.e., \(p_1\le p_2 \le \cdots \le p_n\). Let \(A \subseteq \mathcal {J}\) be the set of jobs that appear on the shared processor in \(\mathcal {S}\). Let \(\pi (1), \pi (2), \ldots ,\pi (k)\), \(k=|A|\), be the order of jobs on the shared processor in \(\mathcal {S}\), i.e., \(\mathcal {S}=(\pi (1), \pi (2), \ldots ,\pi (k))\). We have \(n\in A\). Indeed, by Vairaktarakis and Aydinliyim (2007) and Hezarkhani and Kubiak (2015), regardless of the ordering of the jobs \(1,\ldots ,n-1\), it is always possible to add the job n as the last one on \(\mathcal {M}\) and increase the total overlap.
Since \(n\in A\), we have that if there is no violator, then \(\mathcal {S}\) is a processing time ordered schedule and the proof of the lemma is thus completed. Therefore, we assume in the following that at least one violator exists. Then, the maximal index \(i\in \{1,\ldots ,k\}\) such that \(\pi (i)\) is the violator is called the violation point in \(\mathcal {S}\).
Among all optimal synchronized schedules for \(\mathcal {J}\), we choose \(\mathcal {S}\) such that:
 (a)
\(|A|\) is maximum, and
 (b)
with respect to (a): the violation point of \(\mathcal {S}\) is minimum.
Since some jobs may have equal processing times yet different weights in an antithetical instance, not all processing time ordered schedules are optimal for the instance. However, any processing time ordered schedule \(\mathcal {S}=(j_1,\ldots ,j_n)\) such that \(w_{j_1}\ge \cdots \ge w_{j_n}\) is optimal for the instance. To prove that, we observe that any maximal sequence of jobs having the same processing time occupies the same interval (s, C) on \(\mathcal {M}\) in any processing time ordered schedule. The earlier a job appears in the sequence, the longer its overlap; moreover, the overlaps in (s, C) remain the same regardless of the sequence of jobs having equal processing times. Hence, among the jobs with the same processing time, the first position should be occupied by the heaviest job, the second position by the second heaviest job, etc., to ensure total weighted overlap maximization. This gives an \(O(n\log n)\)-time optimization algorithm for the antithetical instances.
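The resulting algorithm can be sketched as follows; this is a minimal illustration under our reading of the synchronization property (a job of length p starting on the shared processor at time s and completing on both processors simultaneously at time C satisfies \(C+(C-s)=p\), hence \(C=(p+s)/2\) with overlap \((p-s)/2\)), not the authors' code:

```python
def antithetical_schedule_value(jobs):
    """Total weighted overlap of the processing time ordered schedule.

    jobs: a list of (p, w) pairs assumed to form an antithetical
    instance, i.e., sorting by nondecreasing p yields nonincreasing w.
    In a synchronized schedule, a job of length p starting on the
    shared processor at time s and finishing on both its processors
    at the same time C satisfies C + (C - s) = p, so C = (p + s) / 2
    and the job's overlap is (p - s) / 2.
    """
    # Sort by nondecreasing processing time; among jobs with equal
    # processing times put the heaviest first.
    order = sorted(jobs, key=lambda pw: (pw[0], -pw[1]))
    total, s = 0.0, 0.0  # s = start of the next job on the shared processor
    for p, w in order:
        overlap = (p - s) / 2.0  # positive, since s < previous p <= p
        total += w * overlap
        s = (p + s) / 2.0        # next job starts when this one completes
    return total
```

For instance, two unit-weight jobs with processing times 2 and 4 receive overlaps 1 and 1.5, for a total weighted overlap of 2.5. The sort dominates the running time, which matches the \(O(n\log n)\) bound.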
6 A 1/2-approximation algorithm
Assume that the jobs in \(\mathcal {J}=\{1,\ldots ,n\}\) are ordered in nondecreasing order of their processing times. A key sequence for \(\mathcal {J}\) is a sequence of job indices \(i_1<\cdots <i_{\ell }\) such that:
 (i)
\(i_{\ell }=n\),
 (ii)
\(w_{i_{1}}>\cdots >w_{i_{\ell }}\),
 (iii)
\(w_{k}\le w_{i_{j}}\) for each \(k\in I_{i_{j}}=\{i_{j-1}+1,\ldots ,i_{j}\}\) and \(j\in \{1,\ldots ,\ell \}\), where \(i_0=0\).
Note that the key sequence always exists. This follows from the fact that it can be constructed 'greedily' by starting with picking the last job of the sequence [see (i)] and then iteratively selecting the predecessor of the previously selected job so that the predecessor has strictly bigger weight [see (ii)] and satisfies condition (iii). Also, the key sequence is unique by the same argument.
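The greedy construction can equivalently be carried out right to left: an index joins the key sequence exactly when its weight strictly exceeds every weight to its right. A minimal sketch, assuming the jobs are already sorted by nondecreasing processing time (`key_sequence` is our illustrative name, not from the paper):

```python
def key_sequence(jobs):
    """Return the key sequence (1-based indices) for jobs = [(p, w), ...],
    assumed sorted by nondecreasing processing time.

    Scanning right to left, an index is selected exactly when its weight
    strictly exceeds every weight to its right. This yields i_l = n
    [condition (i)], strictly decreasing weights along the sequence
    [condition (ii)], and w_k <= w_{i_j} for every k in the block ending
    at i_j [condition (iii)].
    """
    seq, best = [], float('-inf')
    for idx in range(len(jobs) - 1, -1, -1):
        w = jobs[idx][1]
        if w > best:             # strictly heavier than everything to its right
            best = w
            seq.append(idx + 1)  # store the 1-based job index
    seq.reverse()
    return seq
```

For example, weights \(3,5,2,4,1\) (listed in processing time order) give the key sequence \((2,4,5)\).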
We have the following simple observation.
Claim
For each \(k \in \{1,\ldots ,\ell \}\), \(w_{i_{k}}=\max \{w_{j} \mid i_{k}\le j\le n\}\). \(\square \)
The key sequence for \(\mathcal {J}\) defines a synchronized schedule \(\mathcal {S}_{{\mathrm{key}}}\) for \(\mathcal {J}\) with the set of jobs executed on the shared processor being \(\mathcal {J}_{{\mathrm{key}}}=\{i_{1},\ldots ,i_{\ell }\}\) and the permutation of the jobs on the processor being \(\pi (j)=i_{j}\) for \(j\in \{1,\ldots ,\ell \}\). The jobs in \(\mathcal {J}\setminus \mathcal {J}_{{\mathrm{key}}}\) are executed on their private processors only. Following our notation introduced in Sect. 4, we get \(\mathcal {S}_{{\mathrm{key}}}=(i_1,\ldots ,i_{\ell })\). We have the following lemma.
Lemma 4
For the schedule \(\mathcal {S}_{{\mathrm{key}}}\) it holds \(2\varSigma (\mathcal {S}_{{\mathrm{key}}})\ge u^{*}\).
Proof
Lemma 5
It holds \(e^{*}\le u^{*}\).
Proof
Since \(\varSigma (\mathcal {S}_{{\mathrm{opt}}})=e^{*}\) and \(\varSigma (\mathcal {S}_{{\mathrm{key}}})\le \varSigma (\mathcal {S}_{{\mathrm{opt}}})\), Lemmas 4 and 5 give the following.
Corollary 1
It holds \(e^{*}/2\le \varSigma (\mathcal {S}_{{\mathrm{key}}}) \le e^{*}\). \(\square \)
Theorem 2
The key sequence for \(\mathcal {J}\) provides a 1/2-approximate solution to the problem \({\texttt {WSPS}}\). This sequence can be found in time \(O(n\log n)\) for any set of jobs \(\mathcal {J}\), where \(n=|\mathcal {J}|\). Moreover, the bound of 1/2 is tight, i.e., for each \(\varepsilon >0\) there exists a problem instance such that \(\varSigma (\mathcal {S}_{{\mathrm{key}}})<\left( \frac{1}{2}+\varepsilon \right) \varSigma (\mathcal {S}_{{\mathrm{opt}}})\).
Proof
The fact that the key sequence is a 1/2-approximation of the optimal solution follows from Corollary 1. The key sequence can be constructed directly from its definition; sorting the jobs in \(\mathcal {J}\) according to their processing times determines the \(O(n\log n)\) running time.
7 Open problems and further research
The complexity status of \({\texttt {WSPS}}\) remains open. The generalized problem with multiple shared processors is strongly NP-hard (Dereniowski and Kubiak 2017) when the number of shared processors is a part of the input. However, it remains open whether the generalized problem with a fixed number of shared processors is NP-hard, or whether it is FPT, for instance when the parameter is the number of shared processors. This complexity result and the open complexity questions clearly underline the difficulty in finding efficient optimization algorithms for the shared processor scheduling problem. The development of an efficient branch-and-bound algorithm for the problem remains unexplored so far. The 1/2-approximation algorithm, along with the structural properties of optimal schedules presented in this paper and in Dereniowski and Kubiak (2017), may prove to be useful building blocks of such an algorithm.
Footnotes
 1.
We remark that the definition of synchronized schedule that appears in Dereniowski and Kubiak (2017) uses, as an intermediate step in the analysis, a weaker condition \(C_{\mathcal {S}}^{\mathcal {M}}(j)\le C_{\mathcal {S}}^{\mathcal {P}}(j)\) [such schedules are called normal in Dereniowski and Kubiak (2017)], which can be omitted here due to the stronger condition (ii) in our definition of synchronized schedule. Also, it has been proved, cf. Observation 3.2 in Dereniowski and Kubiak (2017), that a normal schedule has no gaps, which justifies our condition (i).
Notes
Acknowledgements
This research has been supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Grant OPG0105675 and by the Polish National Science Centre under Contract DEC-2011/02/A/ST6/00201. The authors are grateful to two anonymous reviewers for their insightful comments that have led to improvements in the paper's presentation.
References
 Agnetis, A., Billaut, J.-C., Gawiejnowicz, S., Pacciarelli, D., & Soukhal, A. (2014). Multiagent scheduling. Models and algorithms. Berlin: Springer.
 Anderson, E. J. (1981). A new continuous model for job-shop scheduling. International Journal of System Science, 12, 1469–1475.
 Bharadwaj, V., Ghose, D., & Robertazzi, T. G. (2003). Divisible load theory: A new paradigm for load scheduling in distributed systems. Cluster Computing, 6, 7–17.
 Bukchin, Y., & Hanany, E. (2007). Decentralization cost in scheduling: A game-theoretic approach. Manufacturing & Service Operations Management, 9(3), 263–275.
 Cheng, T. C. E., Ng, C. T., & Yuan, J. J. (2006). Multi-agent scheduling on a single machine to minimize total weighted number of tardy jobs. Theoretical Computer Science, 362(1–3), 273–281.
 Dereniowski, D., & Kubiak, W. (2017). Shared multiprocessor scheduling. European Journal of Operational Research, 261(2), 503–514.
 Hezarkhani, B., & Kubiak, W. (2015). Decentralized subcontractor scheduling with divisible jobs. Journal of Scheduling, 18(5), 497–511.
 Lee, K., Choi, B.-C., Leung, J. Y.-T., & Pinedo, M. L. (2009). Approximation algorithms for multi-agent scheduling to minimize total weighted completion time. Information Processing Letters, 109(16), 913–917.
 Vairaktarakis, G. L. (2013). Noncooperative games for subcontracting operations. Manufacturing and Service Operations Management, 15, 148–158.
 Vairaktarakis, G. L., & Aydinliyim, T. (2007). Centralization versus competition in subcontracting operations. Technical Memorandum Number 819, Case Western Reserve University.
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.