1 Introduction and state of the art

The concept of Industry 4.0 is closely linked to the objective of economical and flexible production of customized products in small batch sizes. To meet this objective, unexpected failures of machines in a production system and sudden disruptions in supply chains, e.g., caused by a pandemic, have to be considered. An essential aspect that is affected by these influences is production planning and control (PPC) [1]. This includes, among other steps, the planning of material requirements, the scheduling of orders, and capacity planning. The goals include a reduction of work in progress, a minimization of processing times, a reduction in inventory costs, and the ability to react to changes in demand or supply [2]. Therefore, PPC has to be dynamic, adaptive, and integrative and has to consider material requirements planning, enterprise resource planning, just-in-time manufacturing, and collaborative planning, forecasting, and replenishment, among other activities [3].

In order to meet these requirements, PPC needs tools for decision-making and planning. Thus, process scheduling, also known as job shop scheduling (JSS), is a crucial task of PPC. JSS aims at determining the chronological processing sequence of given orders. The goal is to allocate a set of tasks (jobs) to available functional units (machines) as efficiently as possible with regard to certain objectives [4]. These objectives can be, e.g., minimizing the makespan (i.e., the completion time of all jobs), minimizing the tardiness of each job [5], minimizing the energy consumption [6], or maximizing machine utilization [7]. Objectives can be considered individually or on a multi-criteria basis. In consequence, optimization problems can be formulated, which are subsumed under the term job shop scheduling problem (JSSP). The general assumption is that the operations of a job have to be processed in a given order and that a machine cannot process two operations at the same time, which constitutes the constraints of the JSSP. Constraints and objectives vary between JSSP types. One important extension of the JSSP towards industrial applicability is the flexible job shop scheduling problem (FJSSP), in which operations can be processed by more than one machine [8]. Furthermore, in the dynamic job shop scheduling problem (DJSSP), e.g., the availability states of machines are considered [9].

In order to solve these various optimization problems, computer-aided methods such as optimization algorithms can be used. For NP-hard problems such as the JSSP, the runtime of exact optimization methods grows exponentially with the problem size. Therefore, approximation methods are deployed to find solutions, since exact methods are mostly not able to compute them in a time-efficient manner [10]. Mokhtari and Hasani used a combination of genetic and simulated annealing algorithms in order to solve multi-objective FJSSP [11]. In addition, Zhang et al. used a digital twin to improve the solution of a DJSSP. The digital twin serves as a basis for forecasting machine failures as well as for intelligent JSS by means of a genetic algorithm [12]. These methods show capabilities for finding solutions in a time-efficient manner. However, the computational effort increases rapidly with the problem size even with approximation methods. Furthermore, current developments in the field of Quantum Annealing (QA) show huge potential for application.

QA is a metaheuristic that is proposed to show advantages in solving combinatorial optimization problems compared to algorithms on classical computers [13]. To make QA applicable, quantum annealers based on the adiabatic theorem have been realized. The basic units of a quantum annealer are quantum bits, or qubits, which are the lowest information unit in the quantum processing unit (QPU). Similar to bits of classical computers, qubits can attain the states 0 or 1. However, qubits are quantum objects, which allows them to assume a superposition of the states 0 and 1 at the same time. This phenomenon is known as superposition [14]. Before a QA process is initiated, the qubits are in superposition. The superposition ends when the QA process is finished and the qubits collapse to either 0 or 1. The probability with which a qubit collapses into either state can be influenced by biases, i.e., external magnetic fields. In addition, through the quantum physical phenomenon of entanglement, qubits can be linked together so that they influence each other. Entanglement can be controlled, analogously to biases, by couplers that apply external magnetic fields during the QA. Through couplers, the end states of entangled qubits are influenced [15]. During QA, an energy landscape for the qubits is defined through couplers and biases, in which the quantum annealer searches for the minimum energy state. These states can be assigned to possible solutions of a minimization problem, where low energy states correspond to good solutions. In finding an optimal solution along the energy profile, states of higher energy usually have to be overcome to reach a state of lower energy. However, during QA, quantum tunneling makes it possible to find the lowest energy level by passing through higher energy levels [16]. This procedure is shown in Fig. 1, which summarizes the QA process. Based on the described effects, QA is able to represent many solutions of a combinatorial optimization problem at the same time through superposition and finds a good solution through tunneling and entanglement within milliseconds, even for large problem sizes [17]. In contrast to gate-based quantum computers, the latest quantum annealers have over 5000 qubits and are already able to address realistic problem sizes. In addition, the usability of QA for industrial applications has improved with the possibility of controlling QA via cloud services by providers such as D-Wave [18]. For example, to show the potential of QA for industrial applicability, an approach for factory layout planning using QA was proposed by Klar et al. [19].

Fig. 1 Quantum Annealing process

The inputs for quantum annealers are specific formulations of an energy optimization problem in the form of Ising models. Many optimization problems can be formulated in this way using a Hamiltonian formulation [20]. The Hamiltonian (Eq. 1) maps the states of an optimization problem to their specific energy levels. It is described as the sum of the initial Hamiltonian and the final Hamiltonian, also called the tunneling and problem Hamiltonian, respectively.

$$H_{Ising}=A(s)\,H_{initial}+B(s)\,H_{final}$$
(1)

\(A(s)\) and \(B(s)\) are energy scaling functions, where \(A(s)\) decreases and \(B(s)\) increases monotonically with the progress of the QA. At the beginning of a QA process, all qubits are in superposition and the system is in its lowest energy state, mainly described by the initial Hamiltonian (i.e., \(A(s)=1, B(s)=0\)). During the QA, the influence of the initial Hamiltonian decreases and that of the final Hamiltonian increases. The lowest energy state of the final Hamiltonian describes the solution of the minimization problem, encoded via the qubit biases and couplings. At the end of the QA process (i.e., \(A(s)=0, B(s)=1\)), the qubits remain in a state described by the final Hamiltonian, which for quantum annealers can be formulated as [18]:

$$H_{final}=\sum_{i}Q_{ii}\,x_{i}+\sum_{i<j}Q_{ij}\,x_{i}x_{j}$$
(2)

with scalar weights \({Q}_{ii}\), \({Q}_{ij}\) and binary variables \({x}_{i}\), \({x}_{j}\).
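For illustration, Eq. 2 corresponds to a quadratic unconstrained binary optimization (QUBO) model. The following minimal sketch, which assumes D-Wave's open-source dimod package and uses invented toy weights, shows how such a final Hamiltonian can be expressed and solved by exact enumeration on a classical computer; on a quantum annealer, the same model would be submitted to a sampler.

```python
# Toy QUBO in the form of Eq. 2 (weights are illustrative only).
import dimod

linear = {"x0": -1.0, "x1": -1.0, "x2": 2.0}         # Q_ii terms
quadratic = {("x0", "x1"): 2.0, ("x1", "x2"): -1.0}  # Q_ij terms (i < j)

bqm = dimod.BinaryQuadraticModel(linear, quadratic, 0.0, dimod.BINARY)

# Enumerate the energy landscape classically for this small example.
sampleset = dimod.ExactSolver().sample(bqm)
print(sampleset.first)   # lowest-energy assignment of x0, x1, x2
```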

Recent studies regarding JSS have shown the potential of QA to solve such complex assignment problems within milliseconds using the Hamiltonian formulation [21, 22]. QA offers the potential to compute the JSSP while finding a near-optimal solution in a time-efficient manner. Thus, Venturelli et al. proposed a valuable approach for solving small JSSP while finding optimal solutions [21]. This approach successfully demonstrated the feasibility of solving JSSP with QA. However, the full potential of QA should be explored by testing larger problem sizes. In addition, Kurowski et al. showed how the JSSP can be decomposed into a set of smaller optimization problems that require less quantum hardware capacity [22], which makes the solution of larger problems possible. Aiming at industrial-scale problems, Denkena et al. use a digital annealer, which simulates the principles of a quantum annealer, to solve larger instances of the FJSSP [23]. Especially through the solutions of the considered problem sizes, this approach shows perspectives for application in industry. Nevertheless, digital annealers only simulate quantum mechanical effects. Therefore, it can be assumed that a quantum annealer with an adequate number of qubits performs better regarding computation time when solving problem instances of industrial scale.

Consequently, this paper presents a QA-based approach for solving FJSSP of varying problem sizes. In a first step, the mathematical formulations for mapping the FJSSP to a quantum annealer are shown. Furthermore, the different QA solvers are evaluated through a scientific benchmark to demonstrate the efficiency of the approach regarding scalability, solution quality, and computing time.

2 Problem formulation and solver description

2.1 Framework

In this paper, a QA-based job shop scheduling approach is presented to solve FJSSP of different problem sizes using D-Wave solvers. Whereas the QPU solvers only use the QPU of the quantum annealer, the hybrid solvers use both classical and quantum resources to solve problems. In this paper, only hybrid solvers are applied, which are suitable for solving large problem instances. D-Wave offers access to Leap hybrid solvers and classical hybrid solvers (CHS). The Leap hybrid solvers support different kinds of quadratic models as input. Sets of binary variables defined in a Hamiltonian formulation as a binary quadratic model (BQM) are suitable for the Leap hybrid BQM solver (HBQM). In contrast, discrete variables, which can assume e.g., integer values, are combined in a discrete quadratic model (DQM) that the Leap hybrid DQM solver (HDQM) supports. Besides, constrained quadratic models (CQM) containing integer and binary variables are required for the Leap hybrid CQM solver (HCQM). The CHS support only BQM. Whereas the Leap hybrid solvers use the QPU only once for computing a submitted problem, the CHS are programmed to decompose the problem into smaller instances and compute iteratively.
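The following sketch illustrates, under the assumption of D-Wave's Ocean SDK (the dimod and dwave-system packages) and with deliberately trivial toy models, which hybrid solver accepts which model type; it is not the scheduling model of this paper and requires a configured Leap API token to run.

```python
import dimod
from dwave.system import (LeapHybridSampler,
                          LeapHybridDQMSampler,
                          LeapHybridCQMSampler)

# Trivial toy models, only to show the expected input of each solver.
bqm = dimod.BinaryQuadraticModel({"x": -1.0}, {}, 0.0, dimod.BINARY)

dqm = dimod.DiscreteQuadraticModel()
dqm.add_variable(3, label="v")            # one discrete variable with 3 cases
dqm.set_linear("v", [0.0, 1.0, 2.0])

cqm = dimod.ConstrainedQuadraticModel()
t = dimod.Integer("t", lower_bound=0, upper_bound=10)
cqm.set_objective(t)
cqm.add_constraint(t >= 2, label="lower bound on t")

# HBQM, HDQM, and HCQM each accept exactly one model type.
print(LeapHybridSampler().sample(bqm).first)
print(LeapHybridDQMSampler().sample_dqm(dqm).first)
print(LeapHybridCQMSampler().sample_cqm(cqm).first)
```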

The proposed approach aims at finding feasible schedules for given sets of jobs and machines in a time-efficient manner, so that multiple schedules can be computed and the best solution can be evaluated (shown in Fig. 2). In addition, it is examined which solver is best suited for the various problem sizes by testing different solvers. Therefore, the Leap hybrid solvers are used as well as one CHS. Although the CHS already provide a decomposer that splits large problems into multiple smaller sub-problems, an additional iterative approach is proposed to achieve faster computing times with a CHS. As the starting point, it is essential for the QA approach to determine the input variables, constraints, and objectives and to summarize them in a mathematical formulation.

Fig. 2 Proposed framework

FJSSP aims to schedule A jobs \(J=\{{j}_{1},\cdots ,{j}_{A}\}\) on B machines \(M=\{{m}_{1},\cdots ,{m}_{B}\}\) with given optimization objectives. Each job consists of various operations that must be performed in a predefined sequence. \({O}_{i}\) denotes the set of operations for any job \(i\in J\). Furthermore, any operation \({o}_{i}\in {O}_{i}\) can be processed on at least one machine out of the given set \({M}_{{o}_{i}}\). For any \({o}_{i}\in {O}_{i}\) and \(m\in {M}_{{o}_{i}}\), the processing time of the operation on machine \(m\) is defined as \({p}_{{o}_{i},m}\) and the starting time of the operation is denoted as \(t\), which lies in the given timeline \(T=\{0,\dots ,{T}_{max}\}\).
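For illustration, such an instance can be represented as plain data; the following sketch uses invented jobs, machines, and processing times and is not one of the benchmark instances.

```python
# jobs[i] is the ordered operation list O_i of job i; each operation maps
# its eligible machines M_{o_i} to the processing times p_{o_i,m}.
jobs = {
    "j1": [{"m1": 3, "m2": 5},      # first operation of job j1
           {"m2": 2, "m3": 4}],     # second operation of job j1
    "j2": [{"m1": 4, "m3": 3}],     # single operation of job j2
}
machines = ["m1", "m2", "m3"]
T_max = 12                          # upper end of the timeline T
```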

Additionally, the solution for the FJSSP should satisfy the following three constraints:

  1. Processing constraint: During the processing of any job \(i\in J\), each operation \({o}_{i}\in {O}_{i}\) must start only once on a single machine \(m\in {M}_{{o}_{i}}\).

  2. Procedure constraint: For any job \(i\in J\), each operation \({o}_{i}\in {O}_{i}\) must be processed in the given order \({L}_{{O}_{i}}=\{0,\dots ,{l}_{{o}_{i}},\dots ,{l}_{{o}_{i,last}}\}\), in which \({o}_{i,last}\) indicates the last operation of the job \(i\).

  3. Overlapping constraint: Each machine \(m\in M\) cannot process more than one operation at the same time.

The objectives for FJSSP can vary (e.g., energy consumption, job completion time, and processing costs). However, the proposed approach focuses on the optimization of the job completion time. Therefore, only one optimization objective, the makespan objective, is implemented.

2.2 Mathematical formulation

In order to exploit and evaluate the full potential of the various solvers for computing FJSSP, the constraints and objectives have to be formulated using the specific kind of variables supported by each solver.

2.2.1 BQM formulation

In the BQM formulation (Table 1) according to [23], a set of binary variables is used to denote all possible starting times of each operation on the corresponding machine. The binary variable \({k}_{{o}_{i},m,t}\) is equal to 1 if the referred operation \({o}_{i}\in {O}_{i}\) starts on the corresponding machine \(m\in {M}_{{o}_{i}}\) at the allocated discrete time \(t\in T\) (Eq. 3). The constraints and objectives are defined as binary polynomials in a Hamiltonian formulation and summarized in a binary polynomial \(H\) (Eq. 11), which serves as the cost function of the optimization problem. The binary polynomials \({H}_{1},{H}_{2},{H}_{3},{H}_{4}\) are added together with non-negative scalar weights \(\alpha ,\beta ,\gamma ,\delta\), which determine the impact of the respective polynomial. With \({H}_{1}\), a processing constraint is formulated (Eq. 4). The constraint is satisfied for \({H}_{1}=0\); combinations of binary variables that do not fulfill the constraint result in larger values, e.g., if an operation has multiple starting times. The procedure constraint is fulfilled, analogously to the first constraint, for \({H}_{2}=0\) (Eq. 5). Here, the constraint is violated if the given order of a job is not respected. The third constraint leads to \({H}_{3}=0\) if no machine is occupied by two operations simultaneously. To accomplish the optimization objective of completing each job in the shortest time, a binary polynomial \({H}_{4}\) is defined to penalize the completion time of late operations (Eq. 9). The makespan objective \({H}_{4}\) penalizes the completion time of any operation that is later than the minimum predecessor time \({P}_{{o}_{i}}\) of the operation, which is the sum of the minimum processing times of the preceding operations of operation \({o}_{i}\) (Eqs. 9, 10).

Table 1 Constraints and objectives in BQM formulation
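As a minimal sketch of how such penalty terms translate into a BQM, the processing constraint \(H_1\) can be expanded for one operation as a one-hot penalty \((\sum_{m,t} k_{{o}_{i},m,t}-1)^2\), using \(x^2=x\) for binary variables. The helper function and the toy variable set below are illustrative assumptions (dimod package), not the exact implementation of [23].

```python
from itertools import combinations
import dimod

def add_one_hot_penalty(bqm, variables, weight):
    """Add weight * (sum(variables) - 1)^2 to the BQM."""
    for v in variables:
        bqm.add_linear(v, -weight)              # -x_v, since x_v^2 = x_v
    for u, v in combinations(variables, 2):
        bqm.add_quadratic(u, v, 2.0 * weight)   # +2 x_u x_v
    bqm.offset += weight                        # constant term +1

bqm = dimod.BinaryQuadraticModel(dimod.BINARY)
# Binary variables k_{o,m,t} of one operation that may start on machine
# m1 or m2 at the discrete times t = 0, 1, 2.
k_vars = [f"k_o1_{m}_t{t}" for m in ("m1", "m2") for t in range(3)]
add_one_hot_penalty(bqm, k_vars, weight=2.0)    # weight plays the role of alpha
```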

2.2.2 DQM formulation

In the DQM formulation (illustrated in Table 2), the starting time of an operation can be determined by a discrete variable instead of a set of binary variables as in the BQM formulation. To assign the corresponding machine to the operation, \({v}_{{o}_{i}}\) is an integer in the range from \(0\) to \({N}_{{o}_{i}}\cdot {T}_{max}\), where \({N}_{{o}_{i}}\) is the number of selectable machines for the operation and \({T}_{max}\) is the defined maximum completion time (Eq. 15). The starting time of the operation on the machine is determined using the following equation:

$$x_{o_{i},m}=v_{o_{i}}-\left(n_{o_{i},m}-1\right)T_{max}$$
(12)

where \({n}_{{o}_{i},m}\in {[1,\dots ,N}_{{o}_{i}}]\) indicates the index of the assigned machine \(m \in {M}_{{o}_{i}}\) among the available machines for operation \({o}_{i}\).

Table 2 Constraints and objectives in DQM formulation

As in the BQM formulation, a binary variable is used in the DQM to determine whether an operation starts on the indicated machine at the assigned discrete time (Eq. 13), and a polynomial \(H\) summarizes the constraints and objectives correspondingly (Eq. 22). The polynomials \({H}_{2}\), \({H}_{3}\), \({H}_{4}\), \({H}_{5}\) are added with non-negative scalar weights \(\alpha ,\beta ,\gamma ,\delta\) corresponding to the constraints and objectives. However, compared to the BQM, the DQM does not require a processing constraint. Since the starting time of each operation is already defined by Eq. 14, no more than one machine can be assigned to an operation.

Moreover, to satisfy the makespan objective, a discrete variable \({T}_{sum}\) is directly defined as the makespan (Eq. 15) and a makespan constraint is added by the polynomial \({H}_{5}\). Here, the constraint is violated if the completion time of the last operation of any job is greater than \({T}_{sum}\) (Eq. 20); \({H}_{5}=0\) means that the makespan constraint is satisfied.
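As an illustration of Eq. 12, a single discrete case value \(v_{{o}_{i}}\) can be decoded back into a machine and a starting time. The small helper below is an assumption for demonstration purposes and uses invented values.

```python
def decode_case(v, machines_for_op, T_max):
    """Map a case v in [0, N*T_max) to (machine, starting time) per Eq. 12."""
    n = v // T_max               # zero-based machine index, i.e. n_{o,m} - 1
    start = v - n * T_max        # x_{o,m} = v - (n_{o,m} - 1) * T_max
    return machines_for_op[n], start

# Example: an operation that may run on m1 or m2 with T_max = 10;
# case v = 13 corresponds to machine m2 starting at time 3.
print(decode_case(13, ["m1", "m2"], 10))   # -> ('m2', 3)
```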

2.2.3 CQM formulation

Both BQM and DQM are unconstrained models in which all constraints and objectives are contained in a single polynomial. The optimal solution is found by tuning the scalar weights of the respective polynomials. In contrast, the CQM is a constrained quadratic model in which the constraints are set as independent equations. The objective is expressed in terms of minimizing a polynomial. Analogously to the DQM, the CQM supports integer variables and binary variables. In the CQM expression (presented in Table 3), a binary variable is defined by an operation and the corresponding machine. If the operation is assigned to the corresponding machine, the variable equals 1 (Eq. 23). Another binary variable is defined by two operations and a machine in order to determine whether the two operations are assigned to the same machine (Eq. 24). In addition, the starting time of an operation is defined by a positive integer variable (Eq. 25). Similar to the DQM, the makespan is determined by an integer variable (Eq. 26). In the CQM, the processing constraint is formulated in Eq. 27: the sum of the binary variables \({y}_{{o}_{i},m}\) over all selectable machines for any one operation equals 1.

Table 3 Constraints and objectives in CQM formulation

To fulfill the procedure constraint, the difference between the starting times of an operation and the next operation in the same job must be at least the processing time required for the operation (Eq. 28). Equation 29 forms the overlapping constraint. If two operations are assigned to the same machine, the difference between their starting times cannot be less than the processing time of the previously executed operation. Moreover, the makespan objective minimizes the makespan, which is defined by an integer variable (Eq. 31). Equation 30, the makespan constraint, is used to ensure that the completion time of the last operation of any job does not exceed the makespan.
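A minimal sketch of this CQM structure for a single job with two operations is given below, assuming D-Wave's dimod package; the processing times are invented and the overlapping constraint (Eq. 29) is omitted for brevity.

```python
import dimod

cqm = dimod.ConstrainedQuadraticModel()
T_max = 12
p = {("o1", "m1"): 3, ("o1", "m2"): 5, ("o2", "m2"): 2}   # p_{o,m}

# Binary assignment variables y_{o,m} and integer starting times.
y = {om: dimod.Binary(f"y_{om[0]}_{om[1]}") for om in p}
start = {o: dimod.Integer(f"t_{o}", lower_bound=0, upper_bound=T_max)
         for o in ("o1", "o2")}
makespan = dimod.Integer("makespan", lower_bound=0, upper_bound=T_max)

# Processing constraint (Eq. 27): each operation is assigned exactly once.
cqm.add_constraint(y[("o1", "m1")] + y[("o1", "m2")] == 1, label="assign o1")
cqm.add_constraint(y[("o2", "m2")] == 1, label="assign o2")

# Procedure constraint (Eq. 28): o2 may only start after o1 has finished.
dur_o1 = p[("o1", "m1")] * y[("o1", "m1")] + p[("o1", "m2")] * y[("o1", "m2")]
cqm.add_constraint(start["o2"] - start["o1"] - dur_o1 >= 0, label="order o1->o2")

# Makespan constraint (Eq. 30) and makespan objective (Eq. 31).
cqm.add_constraint(makespan - start["o2"] - p[("o2", "m2")] >= 0,
                   label="makespan bound")
cqm.set_objective(makespan)
```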

2.2.4 Variable pruning

In addition to the minimum predecessor time \({P}_{{o}_{i}}\), non-final operations in any job have a minimum successor time \({S}_{{o}_{i}}\), which is the sum of the minimum processing durations of all subsequent operations including the processing duration of the current operation \({o}_{i}\) (Eq. 32).

$$S_{o_{i}}=\sum_{l_{o_{i}^{\prime}}\ge l_{o_{i}}}\;\min_{m^{\prime}\in M_{o_{i}^{\prime}}} p_{o_{i}^{\prime},m^{\prime}}$$
(32)

The starting time of the operation \({o}_{i}\) should be in the range \([{P}_{{o}_{i}},{T}_{max}-{S}_{{o}_{i}}]\). Therefore, variables that are not in this time range can be pruned directly in the BQM to reduce the computation time and decrease the size of the computational problem. Analogously, in the DQM formulation, the range of the discrete variable can be pruned, leading to fewer interactions between variables and smaller problem sizes. Furthermore, in the CQM, the set of binary variables can be reduced by removing infeasible machine combinations as well as invalid starting times. In addition, a value as small as possible should be chosen for the parameter \({T}_{max}\) in order to minimize the problem size for all solvers. Therefore, an estimate is required that depends on the number of operations in the given jobs, the available machines, and the corresponding processing times. For each job, the processing times of its operations can be summed up. The maximum of these sums over all jobs yields a lower bound for \({T}_{max}\). Additionally, the sum of the processing times of all jobs yields an upper bound for \({T}_{max}\). In order to use suitable values for the different problem sizes, starting from the lower bound, the value is increased with the number of operations and processing durations and decreased with the number of machines.
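The pruning window and the \(T_{max}\) bounds can be computed directly from the instance data. The short sketch below assumes the toy instance format introduced in Sect. 2.1; the helper names and the reading of the bounds (using the fastest machine per operation) are illustrative assumptions.

```python
def min_durations(job):
    """Minimum processing time of each operation over its eligible machines."""
    return [min(op.values()) for op in job]

def time_window(job, idx, T_max):
    """Valid starting times [P_{o_i}, T_max - S_{o_i}] of operation idx."""
    d = min_durations(job)
    P = sum(d[:idx])        # minimum predecessor time P_{o_i}
    S = sum(d[idx:])        # minimum successor time S_{o_i} (incl. o_i)
    return P, T_max - S

jobs = {"j1": [{"m1": 3, "m2": 5}, {"m2": 2, "m3": 4}],
        "j2": [{"m1": 4, "m3": 3}]}

# Bounds for the T_max estimate: lower = longest single job, upper = all
# jobs processed one after another on their fastest machines.
lower = max(sum(min_durations(j)) for j in jobs.values())
upper = sum(sum(min_durations(j)) for j in jobs.values())
print(lower, upper)                          # -> 5 8
print(time_window(jobs["j1"], 1, T_max=8))   # second operation of j1 -> (3, 6)
```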

2.2.5 Iterative approach for large problems

In order to reduce the number of variables, an iterative approach is proposed that decomposes large problems into multiple small sub-problems in addition to the decomposition mechanism of the CHS mentioned above. The approach is illustrated in Fig. 3. As the starting point, the given problems are split into different samples to determine the number of jobs and corresponding operations scheduled in each iteration. The chosen sample sizes depend on the number of jobs. To fulfill the makespan objective, the jobs are selected in priority order according to the minimum processing times of all their operations. Moreover, the initial range of times in which each job can be processed on the corresponding machine, \({T}_{i,m}^{(0)}\), and the scalar weight of the objective function have to be defined. For this purpose, an estimation according to Sect. 2.2.4 is performed. To reduce the number of variables, an initial maximum completion time is set, which is then gradually increased with each iteration, while the processing times of operations that have already been scheduled are removed. Consequently, \({T}_{max}\) is updated if the estimated completion time of the remaining jobs in the loop exceeds \({T}_{max}\). Afterwards, each sample is solved using the CHS. The iteration continues until all samples are completed. The solution to the problem is the combination of the solutions of all samples in the loop.

Fig. 3 Workflow of the iterative approach
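The following sketch outlines the loop of Fig. 3 under simplifying assumptions: solve_sample stands in for the CHS call, jobs use the toy format from Sect. 2.1, and details such as the handling of already-scheduled operations are only approximated.

```python
def iterative_schedule(jobs, sample_size, T_max, solve_sample):
    """Schedule jobs sample by sample and combine the partial solutions."""
    # Prioritize jobs by their minimum total processing time.
    ordered = sorted(jobs, key=lambda j: sum(min(op.values()) for op in jobs[j]))
    schedule, t_offset = {}, 0
    for i in range(0, len(ordered), sample_size):
        sample = {j: jobs[j] for j in ordered[i:i + sample_size]}
        # Extend T_max if the remaining jobs cannot fit the current horizon.
        estimate = t_offset + sum(sum(min(op.values()) for op in job)
                                  for job in sample.values())
        T_max = max(T_max, estimate)
        # solve_sample returns {operation id: (start, end)} for the sample,
        # scheduled no earlier than t_offset (here: the CHS call).
        partial = solve_sample(sample, t_offset, T_max)
        schedule.update(partial)
        t_offset = max(end for _, end in partial.values())
    return schedule
```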

3 Benchmark

For the evaluation of the different approaches, various FJSSP were computed with the different hybrid QA solvers under consideration of various problem instances, and the solvers were examined regarding performance and solution quality. Hence, statements could be made as to which QA-based solver is best suited to which problem sizes. For that purpose, the input parameters of the solvers were adjusted until feasible solutions could be found. The results are shown in Table 4, in which “–” indicates that no solution was found.

Table 4 Results of various FJSSP

The results of computing various FJSSP show that very small problem instances can be solved by every solver with good solutions. However, the iterative CHS needs significantly less computation time than the other hybrid solvers for these instances. Since the Leap hybrid solvers such as the HDQM and HCQM cannot go below a minimum computing time of five seconds due to input parameter restrictions, the iterative CHS shows advantages over the Leap hybrid solvers regarding computing time when computing small problem instances. In addition, the solution quality of all solvers is comparable, which favors the iterative CHS for small problem instances. With increasing problem sizes, the computing time for the Leap hybrid solvers initially remains the same and starts to increase noticeably only from the 20 × 10 × 10 instance. In comparison, the computation time of the iterative CHS does not increase with the size of the problem due to the iteration approach. The computing time of the iterative CHS is affected by the number of iterations, which also has an influence on the quality of the solution. Therefore, the computing time for the large problem instances (30 × 20 × 10 and 30 × 20 × 15) is essentially the same as for the small problem instances (3 × 3 × 3 and 6 × 6 × 6). The solution quality is still comparable for the HDQM, the HBQM, and the iterative CHS. However, the solution quality of the HCQM worsens in comparison with the other hybrid solvers. For some instances, the HCQM does not even find a solution because the permissible number of variables is exceeded. It can be concluded that for medium-sized instances, the HDQM as well as the HBQM show the highest suitability for finding good solutions, while the iterative CHS can be used for evaluating many solutions due to its significantly lower computing time. Moreover, the computing time for solving medium-sized instances increases faster with the HBQM than with the HDQM. Consequently, for the 20 × 15 × 15 instance, the computing time is almost five times higher. Regarding the large problem instances (30 × 20 × 10 and 30 × 20 × 15), the HDQM requires higher computing times as well. While the solution times of the iterative CHS are still in the range of milliseconds or seconds, the solution times of the Leap hybrid solvers are in the range of minutes. Regarding solution quality, the HDQM produces the best solutions, whereas the HBQM only reaches poor makespan values. The solution quality of the iterative CHS is lower than that of the HDQM, but given the low computing time, many solutions with adapted solver parameters can be computed and evaluated in a time-efficient manner. The computing times are shown in Fig. 4.

Fig. 4 Computing times for various FJSSP

Since the HDQM and the iterative CHS achieved the best results regarding solution quality and computing times, they were chosen for an additional scientific benchmark according to [24]. The benchmark includes ten FJSSP (MK01–MK10) with various numbers of machines, operations, and processing times and has already been performed by several authors, e.g., [23, 25–28]. The objective is the minimization of the makespan. For comparison, the results of the benchmark are illustrated in Table 5. Furthermore, solutions and computing times of different approaches are shown in Table 6. This ensures the comparability of the QA approaches with state-of-the-art algorithms. However, it has to be mentioned that the best-known solutions are computed with approximate methods because of the NP-hardness of the problem. Therefore, it is possible that the best-known solution is not the optimal solution.

Table 5 Results of scientific benchmark
Table 6 Comparison of solutions from other proposed approaches

According to Table 6, the approach proposed by [28] demonstrates significantly higher computing times, ranging from 3.02 to 122.52 min, relative to the other approaches. Although the approach using memetic algorithms proposed by [26] finds all the best-known solutions for the ten MK problems, its highest computing time, for MK06, reaches 80 s. Therefore, the approach proposed in this paper using the iterative CHS has a great advantage in computing time compared to the other approaches. Even for the largest problems, the computing time is within 1 s. However, in the iterative approach, the jobs are prioritized according to the length of the required processing time, which can affect the solution quality. Nevertheless, the results of the benchmark exhibit a good performance of the HDQM. It finds solutions for every MK problem. For MK03 and MK08, it even achieves the best-known benchmark solution. Furthermore, no solution deviates from the best-known solution by more than 19%. The computing time increases with the problem size but is still much lower than the times achieved by, e.g., [25] or [28].

In conclusion, QA bears potential for solving FJSSP. In particular, the short computing times offer the advantage of generating many solutions to a given problem and evaluating them with respect to different objectives or constraints. Consequently, the QA-based job shop scheduling approach using hybrid solvers can solve FJSSP relatively efficiently.

4 Conclusion and Outlook

Modern manufacturing in the context of Industry 4.0 increases the frequency and complexity of scheduling. In order to meet these requirements, efficient scheduling algorithms are needed. Consequently, this paper presents a QA-based approach for solving FJSSP. After presenting the framework of the approach, the mathematical formulations for the different kinds of solvers are shown. Afterwards, FJSSP of various sizes were solved using the HBQM, HDQM, and HCQM solvers as well as the iterative CHS. In this process, FJSSP were computed to analyze the suitability of the different solvers for the various problem sizes. Based on these results, a scientific benchmark against other state-of-the-art algorithms was performed with a makespan objective. The scientific benchmark has shown the huge potential of QA for solving FJSSP by solving problems within seconds or milliseconds while finding good solutions. Consequently, the presented approach demonstrated its ability to find high-quality solutions in a short time and can be used to generate different schedule variants that can be evaluated against each other. Nevertheless, efficient algorithms for solving FJSSP are still quite far away from industrial application. In industry, many dynamic factors have to be considered, such as unexpected machine failures in a production system or disruptions in supply chains. Furthermore, jobs are received gradually and order inquiries fluctuate. This leads to a demand for algorithms that consider these dynamic conditions.

In future research, the complexity of the allocation problem will be increased by incorporating additional boundary conditions such as machine unavailabilities. Moreover, objectives such as energy costs or machine utilization will be incorporated to realize a multi-objective optimization problem. In addition, it will be investigated how the mathematical formulations must be adjusted for different problem types (e.g., dynamic, flexible, multi-objective). Also, the solution quality of the HDQM and the iterative CHS has to be improved. For this purpose, the HDQM approach will be extended with an iterative solving technique. In addition, it would be desirable to develop a method to estimate the gap between the optimal and the approximate solution. Therefore, further research will build on these results, incorporating techniques to choose suitable sample sets for the iterative approach. Since the iterative solution method for the BQM formulation has already been shown to improve computation time, the same effect can be expected for an iterative formulation of DQM problems. In addition, the iterative CHS will be extended with bottleneck factors according to [23] to improve the job selection for each iteration. Besides, a software demonstrator will be developed to enable intuitive interaction for planners without extensive experience with the QA interface structure.