Decentralized Online Scheduling of Malleable NP-hard Jobs

Abstract. In this work, we address an online job scheduling problem in a large distributed computing environment. Each job has a priority and a demand of resources, takes an unknown amount of time, and is malleable, i.e., the number of allotted workers can fluctuate during its execution. We subdivide the problem into (a) determining a fair amount of resources for each job and (b) assigning each job to an according number of processing elements. Our approach is fully decentralized, uses lightweight communication, and arranges each job as a binary tree of workers which can grow and shrink as necessary. Using the NP-complete problem of propositional satisfiability (SAT) as a case study, we experimentally show on up to 128 machines (6144 cores) that our approach leads to near-optimal utilization, imposes minimal computational overhead, and performs fair scheduling of incoming jobs within a few milliseconds.


Introduction
A parallel task is called malleable if it can handle a fluctuating number of workers during its execution. In the field of distributed computing, malleability has long been recognized as a powerful paradigm which opens up vast possibilities for fair and flexible scheduling and load balancing [13,17]. While most previous research on malleable job scheduling has steered towards iterative data-driven applications, we want to shed light on malleability in a very different context, namely for NP-hard tasks with unknown processing times. For instance, the problem of propositional satisfiability (SAT) is of high practical relevance and an important building block for many applications including automated planning [26], formal verification [18], and cryptography [20]. We consider malleable scheduling of such tasks highly promising: On the one hand, the description of a job can be relatively small even for very difficult problems, and the successful approach of employing many combinatorial search strategies in parallel can be made malleable without redistribution of data [27]. On the other hand, the limited scalability of these parallel algorithms calls for careful distribution of computational resources. We believe that a cloud-like on-demand system for resolving NP-hard problems has the potential to drastically improve efficiency and productivity for many organizations and environments. Using malleable job scheduling, we can schedule new jobs within a few milliseconds, resolve trivial jobs in a fraction of a second, and rapidly resize more difficult jobs to a fair share of all resources, as far as the job can make efficient use of these resources.
To meet these objectives, we propose a fully decentralized scheduling approach which guarantees fast, fair, and bottleneck-free scheduling of resources without any knowledge on processing times. In previous work [27], we briefly outlined initial algorithms for this purpose while focusing on our award-winning scalable SAT solving engine which we embedded into our system. In this work, we shed more light on our earlier scheduling algorithms and proceed to propose significant improvements both in theory and in practice.
We address two subproblems. The first problem is to let the m PEs compute a fair number of workers v_j for each active job j, accounting for its priority and maximum demand, such that the volumes v_j result in optimal system utilization. In previous work [27] we outlined this problem and employed a black box algorithm to solve it. The second problem is to assign v_j workers to each job j while keeping the assignment as stable as possible over time. Previously [27], we proposed to arrange each job j as a binary tree of workers which grows and shrinks depending on v_j, and we described and implemented a worker assignment strategy which routes request messages randomly through the system. When aiming for optimal utilization, this protocol leads to high worst-case scheduling latencies.
In this work, we describe fully distributed and bottleneck-free algorithms for both of the above problems. Our algorithms have O(log m) span and are designed to consistently achieve optimal utilization. Furthermore, we introduce new measures to preferably reuse existing (suspended) workers for a certain job rather than initializing new workers. We then present our scheduling platform Mallob, which features simplified yet highly practical implementations of our approaches. Experiments on up to 128 nodes (6144 cores) show that our system leads to near-optimal utilization and schedules jobs with a fair share of resources within tens of milliseconds. We consider our theoretical as well as practical results to be promising contributions towards processing malleable NP-hard tasks in a more scalable and resource-efficient manner.

Preliminaries
We now establish important preliminaries and discuss work related to ours.

Malleable Job Scheduling
We use the following definitions [10]: A rigid task requires a fixed number of workers. A moldable task can be scaled to a number of workers at the time of its scheduling but then remains rigid. Finally, a malleable task is able to adapt to a fluctuating number of workers during its execution. Malleability can be a highly desirable property of tasks because it allows tasks to be balanced continuously to warrant fair and optimal utilization of the system at hand [17]. For instance, if an easy job arrives in a fully utilized system, malleable scheduling allows the system to shrink an active job in order to schedule the new job immediately, significantly decreasing its response time. Due to the appeal of malleable job scheduling, there has been ongoing research to exploit malleability, from shared-memory systems [13] to HPC environments [6,9], even to improve energy efficiency [25].
The effort required to transform a moldable (or rigid) algorithm into a malleable algorithm depends on the application at hand. For iterative data-driven applications, redistribution of data is necessary if a task is expanded or shrunk [9]. In contrast, we demonstrated in previous work [27] for the use case of propositional satisfiability (SAT) that basic malleability is simple to achieve if the parallel algorithm is composed of many independent search strategies: The abrupt suspension and/or termination of individual workers can imply the loss of progress but preserves completeness. Moreover, if workers periodically exchange knowledge, the progress made on a worker can benefit the job even if the worker is removed. For these reasons, we have not yet considered the full migration of application processes as is done in adaptive middlewares [9,16] but instead hold the application itself responsible for reacting to workers being added or removed.
Most prior approaches rely on known processing times of jobs and on an accurate model for their execution time relative to the degree of parallelism [5,24], whereas we do not rely on such knowledge. Furthermore, most approaches employ a centralized scheduler, which implies a potential bottleneck and a single point of failure. Our approach is fully decentralized and uses a small part of each process's CPU time to perform distributed scheduling, which also opens up the possibility to add more general fault-tolerance to our work in the future. For instance, this may include continuing to schedule and process jobs correctly even in case of network-partitioning faults [2], i.e., failures where sub-networks in the distributed environment are disconnected from one another. Other important aspects of fault-tolerance include mitigation of simple node failures (i.e., a machine suddenly goes out of service) and of Byzantine failures [7] (i.e., a machine exhibits arbitrary behavior, potentially due to a malicious attack).

Scalable SAT Solving
The propositional satisfiability (SAT) problem poses the question whether a given propositional formula $F = \bigwedge_{i=1}^{k} \bigvee_{j=1}^{c_i} l_{i,j}$ (in conjunctive normal form, i.e., a conjunction of clauses, each a disjunction of literals) is satisfiable, i.e., whether there is an assignment to all Boolean variables in F such that F evaluates to true. SAT is the archetypical NP-complete problem [8] and, as such, a notoriously difficult problem to solve. SAT solving is a crucial building block for a plethora of applications such as automated planning [26], formal verification [18], and cryptography [20]. State-of-the-art SAT solvers are highly optimized: The most popular algorithm, named Conflict-Driven Clause Learning (CDCL), performs depth-first search on the space of possible assignments, backtracks and restarts its search frequently, and derives redundant conflict clauses when encountering a dead end in its search [19]. As these clauses prune search space and can help to derive unsatisfiability, remembering important derived clauses is crucial for modern SAT solvers' performance [3].
The empirically best performing approach to parallel SAT solving is a so-called portfolio of different solver configurations [14] which all work on the original problem and periodically exchange learned clauses. In previous work, we presented a highly competitive portfolio solver with clause sharing [27] and demonstrated that careful periodic clause sharing can lead to respectable speedups for thousands of cores. The malleable environment of this solver is the system which we present here. Other recent works on decentralized SAT solving [15,21] rely on a different parallelization which generates many independent subproblems and tends to be outperformed by parallel portfolios for most practical inputs [11].

Problem Statement
We consider a homogeneous computing environment with a number of interconnected machines on which a total of m processing elements, or PEs in short, are distributed. Each PE has a rank x ∈ {0, . . . , m − 1} and runs exclusively on c ≥ 1 cores of its local machine. PEs can only communicate via message passing.
Jobs are introduced over an interface connecting to some of the PEs. Each job j has a job description, a priority p_j ∈ R+, a demand d_j ∈ N+, and a budget b_j (in terms of wallclock time or CPU time). If a PE participates in processing a job j, it runs an execution environment of j named a worker. A job's demand d_j indicates the maximum number of parallel workers it can currently employ: d_j is initialized to 1 and can then be adjusted by the job after an initial worker has been scheduled. A job's priority p_j may be set, e.g., depending on who submitted j and on how important they deem j relative to an average job of theirs. In a simple setting where all jobs are equally important, assume p_j = 1 for all j. A job is cancelled if it spends its budget b_j before finishing. We assume for the active jobs J in the system that the number n = |J| of active jobs is no higher than m and that each PE employs at most one worker at any given time. However, a PE can preempt its current worker, run a worker of another job, and possibly resume the former worker at a later point.
Let T_j be the set of active workers of j ∈ J. We call v_j := |T_j| the volume of j. Our aim is to continuously assign each j ∈ J to a set T_j of PEs subject to the following constraints:
(C1) Optimal utilization: Either all job demands are fully met or all m PEs are working on a job, i.e., v_j = d_j for all j ∈ J or Σ_{j∈J} v_j = m.
(C2) Individual job constraints: Each job must have at least one worker and is limited to d_j workers, i.e., 1 ≤ v_j ≤ d_j.
(C3) Fairness: Resources are distributed proportionally to job priorities, i.e., among jobs whose demand is not saturated, the ratio v_j / p_j is (approximately) equal. Due to rounding, in C3 we allow job volumes to deviate by a single unit (ε ≤ 1) from a perfectly fair distribution as long as the job of higher priority is favored.

Approach
We subdivide the problem at hand into two subproblems: First, find fair volumes v j for all currently active jobs j ∈ J subject to C1-C3. Secondly, identify pairwise disjoint sets T j with |T j | = v j for each j ∈ J. In this section, we present fully decentralized and highly scalable algorithms for both subproblems. In Section 4.1 we describe how our practical implementation differs from these algorithms.
To assess our algorithms, we consider two important measures from parallel processing. Given a distributed algorithm, consider the dependency graph which is induced by the necessary communication among all PEs. The span (or depth) of the algorithm is the length of a critical path through this graph. The local work is the complexity of local computations summed up over all PEs.

Calculation of Fair Volumes
Given jobs J with individual priorities and demands, we want to find a fair volume v_j for each job j such that constraints C1-C3 are met. Volumes are recomputed periodically, taking into account new jobs, departing jobs, and changed demands. In the following, assume that each job has a single worker which represents this (and only this) job. We elaborate on these representative workers in Section 3.2.
We defined our problem such that n = |J| ≤ m. Similarly, we assume Σ_{j∈J} d_j > m since otherwise we can trivially set v_j = d_j for all jobs j. Assuming real-valued job volumes for now, we can observe that for any parameter α ≥ 0, constraints C2-C3 are fulfilled if we set v_j = v_j(α) := max(1, min(d_j, α p_j)). By appropriately choosing α, we can also meet the utilization constraint C1: Consider the function ξ(α) := m − Σ_{j∈J} v_j(α) which expresses the unused resources for a particular value of α. Function ξ is continuous, monotonically decreasing, and piece-wise linear (see Fig. 1). Moreover, ξ(0) = m − n ≥ 0 and ξ(max_{j∈J} d_j/p_j) = m − Σ_{j∈J} d_j < 0. Hence ξ(α) = 0 has a solution α_0 which represents the desired choice of α that exploits all resources, i.e., it also fulfills constraint C1.
Fig. 1. Volume calculation example with four jobs and m = 7. Five of the eight points where ξ(α) is evaluated are depicted; three more (d_3/p_3, d_1/p_1, and d_2/p_2) are omitted.
Once α_0 is found, we need to round each v_j(α_0) to an integer. Due to C1 and C3, we propose to round down all volumes and then increment the volume of the k := m − Σ_j ⌊v_j(α_0)⌋ jobs of highest priority: We identify J′ := {j ∈ J : v_j(α_0) < d_j}, sort J′ by job priority, and select the first k jobs. We now outline a fully distributed algorithm which finds α_0 in logarithmic span. We exploit that ξ′, the derivative of ξ, changes at no more than 2n values of α, namely where α p_j = 1 or α p_j = d_j for some j ∈ J. Since we have m ≥ n PEs available, we can evaluate ξ at these O(n) values of α in parallel. We then find the two evaluation points with the smallest positive and the largest negative value of ξ using a parallel reduction operation. Lastly, we interpolate ξ linearly between these two points to find α_0.
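The procedure above can be sketched sequentially as follows. This is an illustrative Python sketch under our own naming; the paper's algorithm performs the breakpoint evaluation, reduction, and sorting as distributed operations across the m PEs.

```python
def v(alpha, p, d):
    """Real-valued volume v_j(alpha) = max(1, min(d_j, alpha * p_j))."""
    return max(1.0, min(float(d), alpha * p))

def xi(alpha, jobs, m):
    """Unused resources xi(alpha) = m - sum_j v_j(alpha)."""
    return m - sum(v(alpha, p, d) for (p, d) in jobs)

def fair_volumes(jobs, m):
    """Find alpha_0 with xi(alpha_0) = 0, then round volumes as described.
    jobs is a list of (priority, demand) pairs with sum of demands > m."""
    # xi is piece-wise linear; its slope only changes where alpha*p = 1
    # or alpha*p = d for some job, so we scan these breakpoints in order.
    points = sorted({1.0 / p for (p, d) in jobs} | {d / p for (p, d) in jobs})
    prev_a, prev_xi = 0.0, xi(0.0, jobs, m)
    alpha0 = points[-1]
    for a in points:
        x = xi(a, jobs, m)
        if x <= 0:
            # Interpolate xi linearly on [prev_a, a] to find its root.
            alpha0 = a if x == 0 else \
                prev_a + (a - prev_a) * prev_xi / (prev_xi - x)
            break
        prev_a, prev_xi = a, x
    # Round all volumes down, then give one extra unit each to the
    # k highest-priority jobs whose demand is not yet saturated.
    vols = [int(v(alpha0, p, d)) for (p, d) in jobs]
    k = m - sum(vols)
    eligible = sorted((i for i, (p, d) in enumerate(jobs) if vols[i] < d),
                      key=lambda i: -jobs[i][0])
    for i in eligible[:k]:
        vols[i] += 1
    return vols
```

For example, two jobs with priorities 2 and 1 and demand 10 each on m = 9 PEs receive volumes 6 and 3, i.e., resources proportional to their priorities.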
The parallel evaluation of ξ is still nontrivial since a naive implementation would incur quadratic work: O(n) for each value of α. We now explain how to accelerate the evaluation of ξ. For this, we rewrite ξ(α) = m − Σ_{j∈J} v_j(α) as ξ(α) = m − R − αP. Intuitively, R sums up all resources which are assigned due to raising a job volume to 1 (if α p_j < 1) or due to capping a job volume at d_j (if α p_j > d_j), and αP sums up all resources assigned proportionally, i.e., as α p_j for jobs with 1 ≤ α p_j ≤ d_j. This new representation features only two unknown variables, R and P, which can be computed efficiently. At α = 0, we have R = n and P = 0 since all job volumes are raised to one. If we then successively increase α, we pass 2n events where R and P are modified, namely whenever α p_j = 1 or α p_j = d_j for some job j. Since each such event modifies R and P by a fixed amount, we can use a single prefix sum calculation to obtain all intermediate values of R and P.
Each event e = (α_e, r_e, p_e) occurs at point α_e and adds r_e to R and p_e to P. Each job j causes two events: (1/p_j, −1, p_j) for the point α p_j = 1 where v_j stops being raised to 1, and (d_j/p_j, d_j, −p_j) for the point α p_j = d_j where v_j begins to be capped at d_j. We sort all events by α_e and then compute a prefix sum over r_e and p_e: (R_e, P_e) = (Σ_{e′⪯e} r_{e′}, Σ_{e′⪯e} p_{e′}), where "⪯" denotes the ordering of events after sorting. We can now compute ξ(α_e) = m − (n + R_e) − α_e P_e at each event e. The value of n can be obtained with a parallel reduction.
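The event-based evaluation can be sketched sequentially as follows; the sequential sort and running sums below stand in for the parallel sort and prefix-sum collectives of the distributed algorithm, and the function name is ours.

```python
def xi_at_events(jobs, m):
    """Evaluate xi at all 2n event points alpha_e via one prefix sum
    over the event contributions. jobs: list of (priority, demand)."""
    n = len(jobs)
    events = []
    for (p, d) in jobs:
        events.append((1.0 / p, -1.0, p))     # v_j stops being raised to 1
        events.append((d / p, float(d), -p))  # v_j starts being capped at d_j
    events.sort()  # a parallel sort in the distributed setting
    R, P, out = 0.0, 0.0, []
    for (a, r, q) in events:  # running (R_e, P_e): the prefix sums
        R, P = R + r, P + q
        out.append((a, m - (n + R) - a * P))
    return out
```

Each returned pair (α_e, ξ(α_e)) matches a direct evaluation of ξ(α_e) = m − Σ_j v_j(α_e), but the whole sweep costs only one sort and one prefix sum instead of quadratic work.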
Overall, our algorithm has O(log m) span and takes O(m log m) work: Sorting O(n) elements in parallel on m ≥ n PEs is possible in logarithmic time, as is computing reductions and prefix sums. Selecting the k jobs to receive additional volume after rounding down all volumes can be reduced to sorting as well.

Assignment of Jobs to PEs
We now describe how the fair volumes computed as in the previous section translate to an actual assignment of jobs to PEs.
Basic Approach. We begin with our basic approach as introduced in [27].
For each job j, we address the k current workers in T_j as w_j^0, w_j^1, . . . , w_j^{k−1}. These workers can be scattered throughout the system, i.e., their job-internal indices 0, . . . , k−1 within T_j are not to be confused with their ranks. The k workers form a communication structure in the shape of a binary tree (Fig. 2). Worker w_j^0 is the root of this tree and represents j for the calculation of its volume (Section 3.1). Workers w_j^{2i+1} and w_j^{2i+2} are the left and right children of w_j^i. Jobs are made malleable by letting T_j grow and shrink dynamically. Specifically, we enforce that T_j consists of exactly k = v_j workers. If v_j is updated, all workers w_j^i with i ≥ v_j are suspended and the corresponding PEs turn idle. Likewise, workers without a left (right) child for which 2i + 1 < v_j (2i + 2 < v_j) attempt to find a child worker w_j^{2i+1} (w_j^{2i+2}). New workers are found via request messages: A request message r = (j, i, x) holds the index i of the requested worker w_j^i as well as the rank x of the requesting worker. If a new job is introduced at some PE, then this PE emits a request for the root node w_j^0 of T_j. All requests for w_j^i, i > 0, are emitted by the designated parent node w_j^{⌊(i−1)/2⌋} of the desired worker. In [27], we proposed that each request performs a random walk through a regular graph of all PEs and is resolved as soon as it hits an idle PE. While this strategy resolves most requests quickly, some requests can require a large number of hops. If we assume a fully connected graph of PEs in which a small share ε of workers is idle, then each hop of a request corresponds to a Bernoulli trial with success probability ε, and a request takes an expected 1/ε hops until an idle PE is hit. Consequently, to improve worst-case latencies, a small ratio of workers should be kept idle [27]. By contrast, our following algorithm with logarithmic span does not depend on suboptimal utilization.
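The index arithmetic on the binary tree of workers is simple; a small sketch (helper names are ours):

```python
def parent(i):
    """Index of the parent worker of w_j^i (for i > 0)."""
    return (i - 1) // 2

def children(i, v):
    """Indices of the children of w_j^i that exist at volume v."""
    return [c for c in (2 * i + 1, 2 * i + 2) if c < v]

def suspended_on_update(v_old, v_new):
    """Worker indices suspended when the volume shrinks to v_new."""
    return [i for i in range(v_old) if i >= v_new]
```

For instance, shrinking a job from volume 7 to 3 suspends exactly the workers with indices 3 through 6, i.e., the bottom layer of the tree.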
Matching Requests and Idle PEs. In a first phase, our improved algorithm (see Fig. 3) computes two prefix sums with one collective operation: the number q_i of requests emitted by PEs of rank < i, and the number o_i of idle PEs of rank < i, where each idle PE contributes an idle token. We also compute the total sums, q_m and o_m, and communicate them to all PEs. The q_i and o_i provide an implicit global numbering of all requests and all idle tokens. In a second phase, the i-th request and the i-th idle token are both sent to rank i. In the third and final phase, each PE which received both a request and an idle token sends the request to the idle PE referenced by the token.
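A sequential simulation of the three phases may clarify the idea; in the real algorithm, the global numbering comes from one collective prefix-sum operation, and the function name is ours.

```python
def match_requests(requests, idle):
    """requests[i]: number of requests emitted by the PE of rank i;
    idle[i]: True iff the PE of rank i is idle. Returns a list of
    (requesting rank, idle rank) pairs; surplus requests or tokens
    remain unmatched and re-participate in the next iteration."""
    # Phase 1: prefix sums over requests and idle PEs yield an implicit
    # global numbering of all requests and all idle tokens.
    numbered_requests = [rank for rank, cnt in enumerate(requests)
                         for _ in range(cnt)]
    numbered_tokens = [rank for rank, is_idle in enumerate(idle) if is_idle]
    # Phases 2+3: the i-th request and the i-th token meet at rank i,
    # which forwards the request to the idle PE named by the token.
    return list(zip(numbered_requests, numbered_tokens))
```

With three requests (two from rank 0, one from rank 3) and idle PEs at ranks 1 and 2, the first two requests are matched and the third stays pending for the next iteration.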
If the request for a worker w_j^i is only emitted by its designated parent w_j^{⌊(i−1)/2⌋}, then our algorithm so far may need to be repeated O(log m) times: Repetition l activates a worker which then emits requests for repetition l + 1. Instead, we can let a worker emit requests not only for its direct children but for all transitive children it deserves. Each worker w_j^i can compute the number k of desired transitive children from v_j and i. The worker then contributes k to q_i. In the second phase, the k requests can be distributed communication-efficiently to a range of ranks {x, . . . , x + k − 1}: w_j^i sends requests for workers w_j^{2i+1} and w_j^{2i+2} to ranks x and x + 1, which send requests for the corresponding child workers to ranks x + 2 through x + 5, and so on, until worker index v_j − 1 is reached. To enable this distribution, we append to each request the values x, v_j, and the rank of the PE where the respective parent worker will be initialized. As such, each child knows its parent within T_j (Fig. 3) for job-internal communication.
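The number k of transitive children a worker deserves follows from the same index arithmetic; a sketch (the function name is ours):

```python
def num_transitive_children(i, v):
    """Number of transitive children of worker index i in a binary job
    tree that contains exactly the worker indices 0 <= index < v."""
    count, frontier = 0, [i]
    while frontier:
        # Descend one layer: keep only child indices that exist at volume v.
        frontier = [c for f in frontier
                    for c in (2 * f + 1, 2 * f + 2) if c < v]
        count += len(frontier)
    return count
```

For example, at volume 7 the root (index 0) deserves 6 transitive children, while worker 1 deserves 2 (namely indices 3 and 4).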
We now outline how our algorithm can be executed in a fully asynchronous manner. We compute the prefix sums within an in-order binary tree of PEs [22, Chapter 13.3], that is, all PEs in the left subtree of rank i have a rank < i and all PEs in the right subtree have a rank > i. This prefix sum computation can be made sparse and asynchronous: Only non-zero contributions to a prefix sum are sent upwards explicitly, and there is a minimum delay between sending successive contributions to a parent. Furthermore, we extend our prefix sums to also include the inclusive prefix sums q̄_i, ō_i which denote the number of requests (tokens) at PEs of rank ≤ i. As such, every PE can see from the difference q̄_i − q_i (ō_i − o_i) how many of its local requests (tokens) took part in the prefix sum. Last but not least, the number of tokens and the number of requests may not always match: a PE which receives either a request or an idle token (but not both) knows of this imbalance due to the total sums q_m, o_m. The unmatched message is sent back to its origin and can re-participate in the next iteration.
Our matching algorithm has O(log m) span and takes O(m) local work. The maximum local work of any given PE is in O(log m) (to compute the above k), which is amortized by other PEs because at most m requests are emitted.

Reuse of Suspended Workers
Each PE remembers up to C most recently used workers (for a small constant C) and deletes older workers. Therefore, if a worker w_j^i is suspended, it may be resumed at a later time. Our algorithms so far may choose different PEs and hence create new workers whenever T_j shrinks and then re-grows. We now outline how we can increase the reuse of suspended workers.
In our previous approach [27], each worker remembers a limited number of ranks of its past (direct) children. A worker which desires a child queries them for reactivation one after the other until success or until all past children have been queried unsuccessfully, at which point a normal job request is emitted.
We make two improvements to this strategy. First, we remember past workers in a distributed fashion. More precisely, whenever a worker joins or leaves T_j, we distribute information along T_j to maintain the following invariant: Each current leaf w_j^i in T_j remembers the past workers which were located in a subtree below index i. As such, past workers can be remembered and reused even if T_j shrinks by multiple layers and re-grows differently.
Secondly, we adjust our scheduling to actively prioritize the reuse of existing workers over the initialization of new workers. In our implementation, each idle PE can infer from its local volume calculation (Section 4.1) which of its local suspended workers w_j^i are eligible for reuse, i.e., v_j > i in the current volume assignment. If a PE has such a worker w_j^i, the PE will reject any job requests until it receives a message regarding w_j^i. This message is either a query to resume w_j^i or a notification that w_j^i will not be reused. On the opposite side, a worker which desires a child begins to query past children according to a "most recently used" strategy. If a query succeeds, all remaining past children are notified that they will not be reused. If all queries fail, a normal job request is emitted.

The Mallob System
In the following, we outline the design and implementation of our platform named Mallob, short for Malleable Load Balancer. Mallob is a C++ application using the Message Passing Interface (MPI) [12]. Each PE can be configured to accept jobs and return responses, e.g., over the local file system or via an internal API. The application-specific worker running on each PE is defined via an interface with a small set of methods. These methods define the worker's behavior if it is started, suspended, resumed, or terminated, and allow it to send and receive application-specific messages at will. Note that we outlined some of Mallob's earlier features in previous work [27] with a focus on our malleable SAT engine.

Implementation of Algorithms
Our system features practical and simplified implementations solving the volume assignment problem and the request matching problem. We now explain how and why these implementations differ from the algorithms provided in Section 3.
Volume Assignment. Our implementation computes job volumes similarly to the algorithm outlined in Section 3.1. However, each PE computes the root α_0 of ξ locally. All events in the system (job arrivals, departures, and changes in demands) are aggregated and broadcast periodically such that each PE can maintain a local image of all active jobs' demands and priorities [27]. The local search for α_0 is then done via bisection over the domain of ξ. This approach requires more local work than our fully distributed algorithm and features a broadcast of worst-case message length O(n). However, it only requires a single all-reduction. At the scale of our current implementation (n < 10³ and m < 10⁴), we expect that our simplified approach performs better than our asymptotically superior algorithm, which features several stages of collective operations. When targeting much larger configurations in the future, it may be beneficial to implement and employ our fully distributed algorithm instead.
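The local bisection might look as follows. This is a sketch under our own naming; the actual implementation operates on each PE's local image of the active jobs.

```python
def alpha0_by_bisection(jobs, m, iterations=50):
    """Locate the root alpha_0 of xi via bisection. jobs: list of
    (priority, demand) pairs with n <= m and sum of demands > m."""
    def xi(alpha):
        return m - sum(max(1.0, min(float(d), alpha * p))
                       for (p, d) in jobs)
    # xi(0) = m - n >= 0 and xi(max_j d_j/p_j) = m - sum_j d_j < 0,
    # so the root lies in this interval; xi is monotonically decreasing.
    lo, hi = 0.0, max(d / p for (p, d) in jobs)
    for _ in range(iterations):
        mid = (lo + hi) / 2.0
        if xi(mid) > 0:   # resources still unused: increase alpha
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Each of the (constantly many) bisection steps costs O(n) local work, which is acceptable at the system sizes stated above.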
Request Matching. We did not yet implement asynchronous prefix sums as described in Section 3.2. Instead, we route requests directly along a communication tree R of PEs. Each PE keeps track of the idle count, i.e., the number of idle PEs, in each of its subtrees in R. This count is updated transitively whenever the idle status of a child changes. Emitted requests are routed upwards through R until they hit an idle PE or a PE whose subtree has a non-zero idle count, at which point the request is routed down towards the idle PE. If a large number of requests (close to n) are emitted, the traffic along the root of R may constitute a bottleneck. However, we found that individual volume updates in the system typically result in a much smaller number of requests, hence we did not observe such a bottleneck in practice. We intend to include our bottleneck-free algorithm (Section 3.2) in a future version of our system.
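The downward half of this routing can be sketched as follows. Class and function names are illustrative, and the real system caches and incrementally updates the idle counts via messages rather than recomputing them.

```python
class PE:
    """A node of the communication tree R."""
    def __init__(self, rank, children=()):
        self.rank, self.children, self.idle = rank, list(children), False

    def idle_count(self):
        """Idle PEs in the subtree rooted here (cached in practice)."""
        return int(self.idle) + sum(c.idle_count() for c in self.children)

def route_down(node, path=()):
    """Route a request down towards an idle PE and return the hop path,
    or None if this subtree holds no idle PE (then the request would be
    forwarded further upwards through R instead)."""
    path = path + (node.rank,)
    if node.idle:
        return path
    for child in node.children:
        if child.idle_count() > 0:
            return route_down(child, path)
    return None
```

Since the request descends one tree level per hop, the downward phase takes at most depth-of-R hops, i.e., O(log m) for a balanced tree.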

Engineering
For good practical performance of our system, careful engineering was necessary. For instance, our system exclusively features asynchronous communication, i.e., a PE will never block for an I/O event when sending or receiving messages. As a result, our protocols are designed without explicit synchronization (barriers or similar). We only let the main thread of a PE issue MPI calls, which is the most widely supported mode of operation for multithreaded MPI programs.
As we aim for scheduling latencies in the range of milliseconds, each PE must frequently check its message queue and react to messages. For instance, if the main thread of a PE allocates space for a large job description, this can cause a prohibitively long period where no messages are processed. For this reason, we use a separate thread pool for all tasks which involve a risk of taking a long time. Furthermore, we split large messages into batches of smaller messages, e.g., when transferring large job descriptions to new workers.

Evaluation
We now present our experimental evaluation. All experiments have been conducted on the supercomputer SuperMUC-NG. If not specified otherwise, we used 128 compute nodes, each with an Intel Skylake Xeon Platinum 8174 processor clocked at 2.7 GHz with 48 physical cores (96 hardware threads) and 96 GB of main memory. SuperMUC-NG is running Linux (SLES) with kernel version 4.12 at the time of running our experiments. We compiled Mallob with GCC 9 and with Intel MPI 2019. We launch twelve PEs per machine, assign eight hardware threads to each PE, and let a worker on a PE use four parallel worker threads. Our system can use the four remaining hardware threads on each PE in order to keep disturbance of the actual computation at a minimum. Our software and experimental data are available at https://github.com/domschrei/mallob.

Uniform Jobs
In a first set of experiments, we analyze the base performance of our system by introducing a stream of jobs in such a way that exactly n_par jobs are in the system at any time. We limit each job j to a CPU time budget B inversely proportional to n_par. Each job corresponds to a difficult SAT formula which cannot be solved within the given budget. As such, we emulate jobs of fixed size.
We chose m and the values of n_par in such a way that m/n_par ∈ N for all runs. We compare our runs against a hypothetical rigid scheduler which functions as follows: Exactly m/n_par PEs are allotted to each job, starting with the first n_par jobs at t = 0. At periodic points in time, all jobs finish and each set of PEs instantly receives the next job. This leads to perfect utilization and maximizes throughput. We neglect any kind of overhead for this scheduler.
For a modest number of parallel jobs n_par in the system (n_par ≤ 192), our scheduler reaches 99% of the optimal rigid scheduler's throughput (Table 1). This efficiency decreases to 97.6% for the largest n_par, where v_j = 2 for each job. As the CPU time of each job is calculated in terms of its assigned volume and as the allocation of workers takes some time, each job uses slightly less CPU time than advertised: Dividing the time for which each job's workers have been active by its advertised CPU time, we obtained a work efficiency of η ≥ 99%. Lastly, we measured the CPU utilization u of all worker threads as reported by the operating system, which averages at 98% or more. In terms of overall efficiency η × u, we observed an optimum of 98% at n_par = 192, a point where neither n_par nor the size of individual job trees is close to m.
Table 1. Scheduling uniform jobs on 1536 PEs (6144 cores) compared to a hypothetical optimal rigid scheduler. From left to right: max. number n_par of parallel jobs; max. measured throughput θ, optimal throughput θ_opt (in jobs per second), throughput efficiency θ/θ_opt; work efficiency η; mean measured CPU utilization u of worker threads.

Impact of Priorities
In the following we evaluate the impact of job priorities. We use 32 nodes (1536 cores, 384 PEs) and introduce nine streams of jobs, each stream with a different job priority p ∈ [0.01, 1] (see Fig. 4 right) and with a wallclock limit of 300 s per job. As such, the system processes nine jobs with nine different priorities at a time. Each stream is a permutation of 80 diverse SAT instances [27].
As expected, we observed a proportional relationship between priority and assigned volume, with small variations due to rounding (Fig. 4). By contrast, response times appear to decrease exponentially towards a certain lower bound, which is in line with the NP-hardness of SAT and the diminishing returns of parallel SAT solving [27]. The modest margin by which average response times decrease is due to the difficulty of the chosen SAT benchmarks, many of which cannot be solved within the imposed time limit regardless of the assigned resources.

Realistic Job Arrivals
In the next set of experiments, we analyze the properties of our system in a more realistic scenario. Four PEs introduce batches of jobs with Poisson-distributed arrivals (mean inter-arrival time 1/λ ∈ {2.5 s, 5 s, 10 s}) and between one and eight jobs per batch. As such, we simulate users who arrive at independent times and submit a number of jobs at once. We also sample a priority p_j ∈ [0.01, 1], a maximum demand d_j ∈ {1, . . . , 1536}, and a wallclock limit b_j ∈ [1, 600] s for each job. We ran this experiment with our current request matching (Section 4.1) and, for varying values of h, with each request message performing up to h random hops (as in [27]) before our request matching is employed. In addition, we ran the experiment with three different strategies for reusing suspended workers: no deliberate reuse at all, the basic approach from [27], and our current approach. Fig. 5 (left) shows the number of active jobs in the system over time for our default configuration (our reuse strategy and immediate matching of requests). For all tested inter-arrival times, considerable changes in the system load can be observed during a job's average life time, which justifies the employment of a malleable scheduling strategy. Fig. 5 (right) illustrates for 1/λ = 5 s that system utilization is at around 99.8% on average and almost always above 99.5%. We also measured the ratio of time for which each PE has been idle: The median PE was busy 99.08% of all time for the least frequent job arrivals (1/λ = 10 s), 99.77% for 1/λ = 5 s, and 99.85% for 1/λ = 2.5 s. Also note that Σ_j d_j < m for the first seconds of each run, hence not all PEs can be utilized immediately.
In the following, we focus on the experiment with 1/λ = 5 s. The latency of our volume calculation, i.e., the latency until a PE received an updated volume for an updated job, reached a median of 1 ms and a maximum of 34 ms for our default configuration. For the scheduling of an arriving job, Fig. 6 (left) shows that the lowest latencies were achieved by our request matching (h = 0). For increasing values of h, the variance of latencies increases and high latencies (≥ 50 ms) become more and more likely. Note that jobs normally enter a fully utilized system and have an initial demand of d_j = 1. Therefore, the triggered balancing calculation may render only a single PE idle, which heavily disfavors performing a random walk. Regarding the latency of expanding a job tree by another layer, Fig. 6 (right) indicates that requests performing random walks have a high chance to succeed quickly but can otherwise result in high latencies (> 10 ms).
To compare strategies for reusing suspended workers, we divided the number of created workers for a job j by its maximum assigned volume v_j. This Worker Creation Ratio (WCR) is ideally 1 and becomes larger the more often a worker is suspended and then re-created at a different PE. We computed the WCR for each job and in total: As Tab. 2 shows, our approach reduces the total WCR from 2.14 down to 1.8 (−15.9%). Context switches (i.e., how many times a PE changed its job affiliation) and average response times are improved marginally compared to the naive approach. Last but not least, we counted for each worker w_j^i the number of distinct PEs on which it has been created: Our strategy initializes 89% of all workers only once, and 94% of workers have been created at most five times. We conclude that most jobs feature only a small number of workers which are rescheduled frequently.

Conclusion
Table 2. Comparison of worker reuse strategies in terms of worker creation ratio (WCR, per job: median, maximum; and in total), context switches (CS, median per PE and mean), the number of processed jobs within 1 h (Pr.), their mean response time (RT), and the fraction of workers created on at most {1, 2, 5, 10, 25} distinct PEs.
We have presented a decentralized and highly scalable approach to online job scheduling of malleable NP-hard jobs with unknown processing times. We split our problem into two subproblems, namely the computation of fair job volumes and the assignment of jobs to PEs, and proposed scalable distributed algorithms with O(log m) span for both of them. We presented a practical implementation and experimentally showed that it schedules incoming jobs within tens of milliseconds, distributes resources proportional to each job's priority, and leads to near-optimal utilization of resources.
For future work, we intend to add engines for applications beyond SAT into our system. Furthermore, we want to generalize our approach to heterogeneous computing environments and add fault tolerance to our distributed algorithms.