1 Introduction

Global competition and technological progress have led to ever shorter product cycles in many industries in recent decades. Combined with the rising relevance of product individualisation, this results in higher levels of variance in manufacturing. New concepts were developed to increase part flexibility levels in production systems [1]. These concepts need to find an optimal tradeoff between cost efficiency and flexibility. One such concept is the so-called matrix production system, consisting of cycle-time-independent production cells with product-neutral equipment and flexible transportation that can be scheduled ad hoc [2, 3]. Although this system leads to a combined use of resources, shorter transport routes compared to workshop production, and only short intermediate storage, it requires a high capital investment, a large spatial footprint, and increased coordination and control requirements [4]. One of the challenges in matrix production systems is investment planning [5]. Due to interdependencies between machines, conventional optimisation methods for the production system, such as mixed integer linear programming, can only be applied with severe disadvantages. Alternatively, the production system can be modelled using discrete event simulation (DES) and agent-based simulation (ABS). Simulation can be used to predict the behaviour of system configurations given specific production programs. Metaheuristics can be combined with simulation to create systems that both predict behaviour and prescribe desirable configurations [6]. Metaheuristics like genetic algorithms (GA), simulated annealing (SA) and tabu search can optimise a broad range of models. However, the efficiency of different heuristics varies significantly depending on their specific application [7]. Furthermore, as heuristics cannot guarantee finding global optima, the quality of the solution may depend on the chosen method [7]. Thus, apt metaheuristics are crucial for successful simulation-based optimisation in matrix production systems.

This paper contributes a case study on using metaheuristics and simulation to configure a matrix production system to the body of knowledge. Two different metaheuristics, GA and SA, are tested and compared. Furthermore, the parametrisation of soft constraints and different observation strategies to cope with stochastic effects are examined. This work uses the simulation model of an industrial PC manufacturer implemented in Tecnomatix Plant Simulation to compare GA and SA regarding result quality and processing time.

The remainder of this contribution is structured as follows: Sect. 2 provides an overview of existing literature on the subject and its relationship with the present work. Section 3 presents the underlying use case, the simulation model and the metaheuristic algorithms used. The experimental results are described in Sect. 4 and discussed in Sect. 5. Finally, the work is summarised in Sect. 6.

2 Related work

2.1 Matrix production

The term matrix production or matrix manufacturing has only recently seen more attention. Matrix production systems address the challenges of high-volume, high-customisation production, which typical sequence-based or function-based production systems cannot meet [8]. In ideal matrix production systems, each station can process each product variant with minimal set-up times. The strict alignment of cycle times can be foregone in matrix production systems [2]. Orders are allocated to stations based on availability, optimising the capacity utilisation throughout the production system. Because of the required flexibility, matrix production systems have only become possible with the advent of Industry 4.0 and the accompanying computational capability to track and control the flow of products precisely. Hofmann et al. [8] investigate under which circumstances matrix production outperforms conventional production lines. Within matrix production systems, intelligent automatic scheduling has been a focus of research. Stricker et al. [9] propose a Monte Carlo tree search-based multi-objective optimisation of scheduling. May et al. [10] use an automated generation of orders that ideally utilise the ad-hoc available production capacities. Configuring or planning matrix production systems has also received considerable research interest. Filz et al. [11] use a data analysis framework and discrete event simulation to evaluate different configuration options for matrix production systems, specifically focusing on intra-logistics with automated guided vehicles (AGVs). Trierweiler and Bauernhansl [5] propose an approach for the dynamic reconfiguration of production systems based on constant requirements monitoring and suggest the use of optimisation techniques for the reconfiguration.

2.2 Discrete event simulation

DES is commonly used to replicate the behaviour of production systems, as these are characterised by discrete state changes and stochastic behaviour [12]. A large body of research examines the use of simulation for configuration and for operation or control, though more recently, the focus has shifted towards the latter [13]. Mourtzis [14] examines the state of the art of simulation in the design and operation of manufacturing systems. He points out that DES has become increasingly sophisticated and can accurately depict real production systems. Schönemann et al. [15] create a multiscale framework for the simulation of production systems and use it for different challenges concerning production systems, like configuration and control.

2.3 Metaheuristics

Concerning decision-making processes, simulation models are classified as predictive models, able to project the behaviour of a system under given circumstances [16]. However, in many decision situations, prescriptive decision support is required, which identifies suitable solutions for problems. For this purpose, either specifically developed exact solution methods can be used, or generalistic, usually heuristic methods that cannot guarantee to find optimal solutions but are more tolerant towards the complexity of the modelled system [17, 18]. Many metaheuristics are inspired by coping strategies from biology and nature that have been used to find suitable solutions for a long time [19]. Metaheuristics can be classified as population-based or single-point-based [20]. GA is likely the most often used population-based method, while SA and tabu search are among the most common single-point-based methods [20]. GA is a direct, stochastic search method that effectively evaluates large search spaces. The search begins with a population of random solutions and develops this population over several generations by applying probability techniques and reproduction operators to each member of the population [19]. SA imitates the gradual cooling of metals, called annealing, which leads to the formation of low-energy crystal structures [20]. In SA, the initially high temperature allows for a broad-ranging search of the solution space, whereas the low temperature promotes a targeted search for optimal solutions [20]. Several works have compared different metaheuristics on different combinatorial problems [6]. Mohan et al. [21] investigate the performance of hill-climbing, guided local search, tabu search, and SA for a vehicle routing problem, finding that SA was prone to getting stuck in local optima. Halim and Ismail [7] compare six different metaheuristics for solving the travelling salesman problem, finding that especially for large problems, GA, tree physiology optimisation (TPO), and SA are well suited. Zolfaghari and Liang [22] compare GA, SA, and Tabu Search (TS) for machine grouping problems. Also, several existing works have combined metaheuristics with DES. Hatami-Marbini et al. [23] develop a simulation-optimisation framework using DES and SA to find optimal control policies for producing perishable goods. de Sousa Junior et al. [24] propose the use of machine learning to improve metaheuristics used for the optimisation of shop floor simulations.
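To illustrate the population-based principle described above, the following minimal Python sketch outlines a generic GA loop for a minimisation problem. The encoding, fitness function, and operator choices are illustrative assumptions only and do not represent the implementation used in this work.

```python
import random

def genetic_algorithm(fitness, init_individual, mutate, crossover,
                      pop_size=30, generations=50, mutation_rate=0.1):
    """Generic GA loop for minimisation: lower fitness is better."""
    population = [init_individual() for _ in range(pop_size)]
    best = min(population, key=fitness)
    for _ in range(generations):
        # Tournament selection of parents
        def select():
            return min(random.sample(population, 3), key=fitness)
        offspring = []
        while len(offspring) < pop_size:
            child = crossover(select(), select())
            if random.random() < mutation_rate:
                child = mutate(child)
            offspring.append(child)
        population = offspring
        best = min([best, *population], key=fitness)
    return best

# Toy usage: minimise the number of ones in a 31-bit configuration vector
if __name__ == "__main__":
    n = 31
    result = genetic_algorithm(
        fitness=sum,
        init_individual=lambda: [random.randint(0, 1) for _ in range(n)],
        mutate=lambda ind: [b ^ 1 if random.random() < 1 / n else b for b in ind],
        crossover=lambda a, b: a[: n // 2] + b[n // 2 :],
    )
    print(result, sum(result))
```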

2.4 Metaheuristics for the configuration of production systems

As described above, the configuration of production systems is one application area for prescriptive models. The use of DES and metaheuristics has been examined in several previous works. Petroodi et al. [25] combine DES and SA to optimise the configuration of an automotive production system. Rabe et al. [26] propose a simheuristic framework that dynamically adapts the granularity and number of simulation experiments to examine the results of heuristically determined solutions. They validate the framework using a configuration problem in a job-shop manufacturing system using DES and GA. In addition, an SA procedure [27] and a GA [28] have also been proposed for the dynamic layout problem. Optimising the spatial arrangement of system resources is a subset of the configuration problem. Kia et al. [29] and Zhang et al. [30] deal with multiple-level warehouse layout problems. Kia et al. [29] minimise the total transport, machining and other costs, whereas Zhang et al. [30] focus solely on transportation costs. Both use GA to generate satisfactory solutions. Arostegui et al. [31] compare GA, SA, and TS methods on different facility layout problems (FLP): the capacitated FLP, the periodical FLP, and the multi-commodity FLP. Zupan et al. [32] combine a GA with a digital twin to optimise the layout of a production cell. Furthermore, Tubaileh [33] proposes an SA method to optimise the machine layout in a flexible manufacturing system. Only a few applications of metaheuristics are directly concerned with matrix production systems. For example, Völker and Verbeet [34] discuss a simulation-based model of a matrix production system. The model is optimised using an SA method. Bányai et al. [35] model a matrix production system and optimise the logistics using a sequential black hole-floral pollination algorithm.

The existing work on simulation-based heuristics indicates the interest in this topic. However, most existing works focus on developing specific new methods for configurational problems, and only a few examine different metaheuristics in a specific context. Furthermore, many works that compare different algorithms use synthetic use cases. Thus, the contribution of this paper is to compare two of the most commonly used metaheuristics, GA and SA, on a real-world configuration problem. Thereby, insights are provided into the particular challenges faced when applying these methods.

3 Experimental setting and methodology

3.1 Use case

The considered use case is located in the electrical testing of circuit boards. The production of boards includes 100% testing using a product-specific protocol with tactile resistance measurements. The testing time of each board varies depending on the protocol, the testing machine generation, and the necessity to repeat tests. If a test is repeatedly unsuccessful, the board is checked by an operator and placed in a “defect” magazine, which is sent to a repair station when all products of an order have been tested or the magazine is full. The production system consists of two rows of testing machines with twelve available spaces. In the system, a wide variety of boards is processed. Each board can be processed at any of the testing stations, given that it fits the testing machine loaders, of which there are two sizes. Based on the flexibility of the testing machines regarding work assignments and the transportation system, this system can be characterised as a matrix production system. Since many boards need to be tested from both sides, two machines are sometimes linked together so that the first can test side A and the second can test side B. Such a linking of two machines consists of an entry and an exit loader, which allow magazines of products to be entered and taken out, a turning station, and the testing machines themselves. As some products only need testing from one side, machines can also be operated as standalone units with two loaders. The boards are transported in magazines, which can be adjusted to fit the specific board size. The products are assembled on eight assembly lines, which produce order-specific batches. The assembly lines are only included in the simulation model as sources. One batch can consist of one or multiple magazines, each of which can hold about 20 boards, depending on the board height. Some boards require specific, larger loader sizes to be tested and can thus only be allocated to machines equipped with those loaders. The orders are assigned to testing machines based on a first-in-first-out principle, though products can be delayed due to loader incompatibility and prioritised if a machine is already set up for the board type. The system is operated by four employees, three of whom are assigned to four machines each, while one “jumper” can support freely wherever needed.

The configuration of the simulated system can be changed in terms of the number of machines, their generation and linkage configuration, the loader configuration, and the number and occupation of workers. The number of machines is limited to twelve but can be reduced if necessary. For the machines, four generations are available. Generation 1 needs a manual set-up process and is generally slower than Generation 2. Generation 3 machines do not require relevant set-up times to switch between variants, and Generation 4 machines can even test both sides of the board, eliminating the need for linkages and turning machines. Machines can be linked to reduce necessary operator interactions. However, this decreases the efficiency of the linked pair, as the testing times of sides A and B typically deviate and, subsequently, one of the two machines needs to wait for the other. This problem can be mitigated by combining slower machines of Generations 1 and 2 with Generation 3. Machines can also be operated in an alternating linkage, where each machine alternates between testing sides A and B and testing times are harmonised. This linkage, however, also requires two additional turning machines. For each machine or each linkage, different loaders can be chosen. The larger loaders are slightly more expensive. Finally, the number and occupation of employees can be changed to include between three and five employees and different numbers of jumpers. The available parameters and a graphical representation are shown in Fig. 1.

Fig. 1
figure 1

Configuration parameters of the matrix production system and graphical representation of the current configuration (linked machines can only have the same loader type; thus, only one is shown)
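For illustration, a possible encoding of one such configuration as a simple data structure is sketched below. The field names and value ranges are assumptions derived from the description above, not the actual 31-parameter encoding used in the model (see Sect. 3.3).

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MachineSlot:
    """One of the twelve available machine spaces (illustrative field names)."""
    generation: Optional[int] = None   # 1-4, or None if the slot is left empty
    loader: str = "small"              # "small" or "large"
    linked_to: Optional[int] = None    # index of the partner slot, None if standalone
    alternating: bool = False          # alternating A/B linkage (requires turning machines)

@dataclass
class Configuration:
    """Candidate configuration evaluated by the simulation model."""
    machines: List[MachineSlot] = field(
        default_factory=lambda: [MachineSlot() for _ in range(12)])
    n_employees: int = 4               # between three and five employees
    n_jumpers: int = 1                 # employees without a fixed machine assignment
```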

3.2 DES modeling and fitness function

The simulation model used for the experimental comparison between GA and SA represents the matrix production system of an industrial PC manufacturer as discussed in the previous section. The model has been implemented in Tecnomatix Plant Simulation (TPS). The model contains several processing stations, multiple employees with specific assigned tasks, transport units, and product instances. The modelling used an agile development process adapted from VDI 3633 [12], as described in [36]. Information regarding order- and machine-specific processing times was defined based on actual processing times, enhanced with master data whenever no recordings were available. Layout information and working times were based on master data. The number of transport units was assumed to be infinite, though transport units were only allowed to spawn and despawn at specific source and sink points, ensuring realistic logistic processes within the modelled production segment. The dynamic failure rates and machine downtimes thereby required specific model adaptions. The machines electrically test finished products using a product-specific testing protocol. The operator needs to confirm the result should any test be unsuccessful. As operators are assigned to multiple machines at a time, this can lead to waiting times if no operator is available. Additionally, the failure rate at a machine dynamically changed as an order was processed due to necessary protocol adaptions by the operator to avoid parts being falsely flagged as defective. This mechanism was represented in the model using a stochastic learning process, decreasing the likelihood of these type II errors after each occurrence using the following formula,

$$\begin{aligned} \psi _{II,j+1}=\alpha _{teach} p(s) \psi _{II,j} \end{aligned}$$
(1)

where \(\psi _{II,j}\) is the likelihood of a type II error at the jth tested board, p(s) is a triangular density function with a maximum at \(s=0\), and \(\alpha _{teach}\) reflects the teaching parameter that was fitted to the distribution of errors found in the use case. The function parameters were fitted using half a year of process recordings. The time to rectify a problem at a machine by adapting the protocol or manual product inspection was not distinguished from waiting times in the recordings. Thus, these times needed to be calibrated in an iterative procedure, aligning the waiting times determined with the model with those of the real production system. The model was then validated through extensive testing and comparison with multiple production periods.
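A minimal sketch of this learning mechanism is given below. It assumes a symmetric triangular density p(s) on [-1, 1] evaluated at a randomly drawn s and uses illustrative values for the initial error likelihood and \(\alpha _{teach}\); the actual fitted parameter values and the exact role of s in the model are not reproduced here.

```python
import random

def tri_density(s, half_width=1.0):
    """Symmetric triangular density on [-half_width, half_width], mode at s = 0."""
    return max(0.0, (1.0 - abs(s) / half_width) / half_width)

def simulate_type2_errors(psi_0, alpha_teach, n_boards, seed=None):
    """Sequential testing with the learning rule of Eq. (1):
    after every type II error, psi_{j+1} = alpha_teach * p(s) * psi_j."""
    rng = random.Random(seed)
    psi, error_indices = psi_0, []
    for j in range(n_boards):
        if rng.random() < psi:                  # board j is falsely flagged as defective
            error_indices.append(j)
            s = rng.triangular(-1.0, 1.0, 0.0)  # assumed source of s (illustrative only)
            psi = alpha_teach * tri_density(s) * psi
    return error_indices, psi

# Illustrative values; the real psi_0 and alpha_teach were fitted to process recordings.
errors, psi_final = simulate_type2_errors(psi_0=0.3, alpha_teach=0.9, n_boards=200, seed=42)
print(len(errors), round(psi_final, 4))
```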

The fitness of a particular configuration was determined as a combination of the achieved production capacity and the required investments and costs for employees. The capacity was evaluated using the simulation model, capturing the time necessary to fulfil the production orders of a week. As users of such optimisation models are typically only concerned with fulfilling the required capacity, the capacity fitness function \(fit_{cap,i,o,s}\) of a particular configuration \(i\in I\), for a given observation \(o\in O\) and chosen capacity increase scenario \(s\in S\) was defined as

$$\begin{aligned} fit_{cap,i,o,s}=\begin{cases} 0, &\quad T_{i,o}\le T_{ref,s}\\ \dfrac{T_{i,o}-T_{ref,s}}{T_{ref,s}}, &\quad T_{i,o}>T_{ref,s} \end{cases} \end{aligned}$$
(2)

where \(T_{i,o}\) denotes the time elapsed for the production in the model and \(T_{ref,s}\) refers to the reference time set for a specific capacity expansion scenario. The capacity expansion reference time is calculated as the reference time \(T_{ref,0}\) divided by the desired capacity increase. The yearly cost \(C_i\) was calculated based on the necessary investment for machines and loaders as the average yearly depreciation as well as the employee costs:

$$\begin{aligned} C_i=\sum _{u\in U_i}\left( C_{inv,u}\left( \frac{1}{T_{use}}+\frac{ir}{2}\right) \right) +n_{emp,i}C_{emp} \end{aligned}$$
(3)

where \(U_i\) denotes the set of necessary changes to the starting configuration, \(C_{inv,u}\) the overall investment for each new asset \(u\), \(T_{use}\) the useful life of the asset, and \(ir\) the interest rate. \(n_{emp,i}\) describes the number of employees in configuration i and \(C_{emp}\) the average yearly cost of an employee. The cost fitness function \({fit}_{cost,i}\) is then defined as

$$\begin{aligned} {fit}_{cost,i}=\frac{C_i}{C_{max}} \end{aligned}$$
(4)

where \(C_{max}\) are the maximum possible investment costs. The overall fitness function \({fit}_{i,o,w,s}\) of a configuration i for an observation o, with temporal weight w and capacity threshold scenario s is defined as

$$\begin{aligned} {fit}_{i,o,w,s}={fit}_{cost,i}+g_w{fit}_{cap,i,o,s} \end{aligned}$$
(5)

where \(g_w\) is the weight factor. Using different weights, the algorithm can be influenced in the tradeoff between higher capacities and lower costs.
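The fitness evaluation of Eqs. (2)–(5) can be summarised in the following short sketch. The numerical inputs (production time, investments, useful life, interest rate, employee costs) would come from the simulation model and master data; the function names and values here are placeholders only.

```python
def capacity_fitness(T_io, T_ref_s):
    """Eq. (2): penalise only configurations exceeding the reference time."""
    return 0.0 if T_io <= T_ref_s else (T_io - T_ref_s) / T_ref_s

def yearly_cost(investments, T_use, ir, n_emp, C_emp):
    """Eq. (3): average yearly depreciation plus interest and employee costs."""
    return sum(c * (1.0 / T_use + ir / 2.0) for c in investments) + n_emp * C_emp

def overall_fitness(T_io, T_ref_s, investments, T_use, ir, n_emp, C_emp, C_max, g_w):
    """Eqs. (4) and (5): cost fitness plus weighted capacity fitness (lower is better)."""
    fit_cost = yearly_cost(investments, T_use, ir, n_emp, C_emp) / C_max
    return fit_cost + g_w * capacity_fitness(T_io, T_ref_s)
```

Increasing \(g_w\) makes violations of the capacity threshold more costly relative to investments, which is exactly the lever examined in the experiments below.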

3.3 Metaheuristics

For both investigated metaheuristics, the system’s configuration was parametrised using 31 parameters, resulting in a total of approximately \(3.02\times 10^{11}\) possible configurations. TPS provides a built-in tool for optimisation using GA, the GAAssistant. This tool allows the selection of parameters to be optimised and their permissible value ranges. Furthermore, some hyperparameters of the GA can also be changed, such as the population size and the number of observations per generation. A proprietary SA algorithm was developed and implemented because TPS does not provide alternative metaheuristics. The algorithm uses an exponentially decreasing temperature \(\Theta _n\), defined at step \(n\in \left[ 1,N\right] \) as

$$\begin{aligned} \Theta _n=\Theta _0\left( \frac{\Theta _N}{\Theta _0}\right) ^\frac{n}{N} \end{aligned}$$
(6)

where \(\Theta _0\) denotes the initial temperature and \(\Theta _N\) the final temperature. The temperature \(\Theta _n\) defines the likelihood \(\phi _n\) of accepting an inferior configuration i instead of the current configuration \({\hat{i}}\) as the starting point of the subsequent neighbourhood search at step n:

$$\begin{aligned} \phi _n=e^{-\frac{{fit}_{i}-{fit}_{{\hat{i}}}}{\Theta _n}} \end{aligned}$$
(7)

The creation of neighbour configurations in SA was facilitated by the operations “flip”, “insert”, and “change”. Moreover, these operations were optimised for the given parameter space. Figure 2 shows the GA and SA algorithms as a flowchart.

Fig. 2
figure 2

Flow chart of optimisation routines for GA & SA
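A condensed sketch of such an SA routine, combining the exponential cooling of Eq. (6), the acceptance criterion of Eq. (7), and randomly chosen neighbourhood operations, is shown below. The operator implementations, hyperparameter values, and the toy fitness in the usage example are illustrative assumptions, not the proprietary implementation in TPS.

```python
import math
import random

def simulated_annealing(fitness, initial, neighbour_ops, n_steps,
                        theta_0=1.0, theta_final=0.01, seed=None):
    """SA for minimisation with exponential cooling (Eq. 6) and
    probabilistic acceptance of inferior configurations (Eq. 7)."""
    rng = random.Random(seed)
    current, f_current = initial, fitness(initial)
    best, f_best = current, f_current
    for n in range(1, n_steps + 1):
        theta = theta_0 * (theta_final / theta_0) ** (n / n_steps)   # Eq. (6)
        candidate = rng.choice(neighbour_ops)(current, rng)          # e.g. "flip", "insert", "change"
        f_cand = fitness(candidate)
        # Always accept improvements; accept inferior candidates with probability per Eq. (7)
        if f_cand <= f_current or rng.random() < math.exp(-(f_cand - f_current) / theta):
            current, f_current = candidate, f_cand
            if f_cand < f_best:
                best, f_best = candidate, f_cand
    return best, f_best

# Toy usage with a bit-vector configuration and a single "flip" neighbourhood operation:
if __name__ == "__main__":
    def flip(cfg, rng):
        i = rng.randrange(len(cfg))
        return cfg[:i] + [cfg[i] ^ 1] + cfg[i + 1:]
    best, f = simulated_annealing(sum, [1] * 31, [flip], n_steps=2950, seed=0)
    print(best, f)
```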

3.4 Experimental design

To compare SA and GA, several experiments were conducted. Since the number of required simulation runs is the major determinant of the required computing time, GA and SA were compared for three allowed numbers of simulation runs N: 1500, 2950, and 4500. Two aspects of the fitness function \({fit}_{i,o,w,s}\) were also varied, namely the reference time \(T_{ref,s}\), simulating different requirements towards the production system, and the weight factor for the capacity fitness function \(g_w\), to distinguish the severity with which too low a capacity is punished. The variation of the reference time is equivalent to enforcing a higher throughput of the system, as the same number of orders needs to be processed in less time. The goal of the metaheuristic optimisation is thus to increase throughput at minimal investment and operating costs. An overview of the experiments is given in Table 1. To limit the number of experiments, the number of simulation runs was only varied for \(g_w=4\); for \(g_w=3\) and \(g_w=5\), \(N=2950\) was fixed.

Table 1 Examined experimental values

The starting configuration for all experiments was the current configuration, which required an average production time of \(\bar{T_0}=0.9895\ T_{ref,0}\). It results in an average capacity fitness of \({{\bar{fit}}}_{cap,0,s=1}=0.0994\). The fitness of the starting configuration for each value of \(T_{ref,s}\) and \(g_w\) is shown in Table 2.

Table 2 Average fitness value of current configuration for different values of \(T_{ref,s}\) and \(g_w\)

One important factor to consider in simulation-based optimisation is the stochastic nature of the simulation and, thus, the necessity to perform multiple simulation runs to evaluate a given configuration i [26]. The simulation model shows an average relative standard deviation of the production time \(T_{i,o}\) of \(1.67\%\), estimated using ten different configurations and 100 observations each. This deviation is considerable, as the desired improvements of the capacity are only one order of magnitude larger. In the above-described experiments, each configuration is tested with a random seed. This method poses the danger that configurations become optimised for a specific seed and thus lose relevance for the stochastic solution space. Thus, the effect of this seed dependence was tested by comparing GA runs with one fixed seed value for all runs, GA runs with ten observations using different seeds per configuration, and runs with seeds alternating between configurations. This test showed only a slight improvement in the fitness value for ten observations compared to alternating and fixed seeds, as shown in Fig. 3, even though only a tenth of the computation time was necessary for the latter. Furthermore, the fixed-seed results seemed to slightly outperform the alternating seeds, even when comparing the final results for multiple different seeds. This may be due to decreased selection efficiency when the simulation seed changes. Therefore, fixed seeds were chosen for the following experiments.

Fig. 3
figure 3

Resulting distributions of observation strategies (fixed seeds, ten observations per individual, and alternating seeds) using GA
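The three observation strategies compared in Fig. 3 can be expressed as alternative fitness evaluations wrapped around the simulation call, as in the following sketch. The `evaluate` function is a hypothetical stand-in for one simulation run; only the seed-handling logic reflects the strategies described above.

```python
import itertools
import random
import statistics

def evaluate(cfg, seed):
    """Hypothetical stand-in for one stochastic simulation run of a configuration."""
    rng = random.Random(hash((tuple(cfg), seed)))
    return sum(cfg) * (1 + rng.gauss(0, 0.0167))       # ~1.67 % relative standard deviation

_seed_counter = itertools.count(1)

def fitness_fixed_seed(cfg):
    return evaluate(cfg, seed=0)                        # one fixed seed for all configurations

def fitness_alternating_seed(cfg):
    return evaluate(cfg, seed=next(_seed_counter))      # a new seed for every configuration

def fitness_ten_observations(cfg):
    return statistics.mean(evaluate(cfg, seed=s) for s in range(10))  # ten seeds per configuration
```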

4 Results

Figure 4 shows the best configuration found by the metaheuristics for different values of s. The metaheuristics improved the fitness value of the original configuration in every case by at least a factor of 2. In practice, each of the final results was able to process the specified number of orders within the reduced time specified by \(T_{ref,s}\) in Table 1. Thereby, the potential throughput of the system was increased by \(11.11\%\), \(27.35\%\), and \(45.54\%\), respectively. For the best SA configurations, investment costs of 196, 944, and 1710 currency units, respectively, would be necessary. The best solutions consistently used the same number of employees but made all of them jumpers. Figure 5 shows the fitness achieved by the final chosen configuration of GA, framed in black, and SA, framed in grey, using ten observations in the simulation model. On average, the chosen SA configuration shows a relative improvement of 0.0967 compared to the GA result; using a paired t-test on all observations, this difference is significant with \(p=0.00035\) [6]. As expected, the experiments also show a fitness improvement with increasing N, though this improvement is not as substantial as expected. GA showed an average relative improvement of 0.1582 from \(N=1500 \rightarrow 2950\) and a decline of 0.0519 for \(N=2950 \rightarrow 4500\). SA improved by 0.1216 and 0.0383, respectively.

Fig. 4
figure 4

Resulting best configurations for GA and SA using \(N=2950, g_w=4\) for different s

Fig. 5
figure 5

Fitness of best configurations for GA and SA using \(g_w=4\) for ten different observations each, lower is better

As shown in Fig. 6, the final configurations are a tradeoff between minimising the capacity fitness function and the cost fitness function. This figure shows an overview of all tested configurations for \(N=2950\) and of the final best configurations the SA algorithm was able to find with different values of \(g_w\) and s. All chosen configurations show a value of \({fit}_{cap}\) at or close to zero.

Fig. 6
figure 6

SA configuration performance for \(s=3\) (left), 2 (center), and 1 (right) and chosen best solution for \(g_w=\left( 3,4,5\right) \) using \(N=2950\)

Interestingly, the best configurations for all three values of s were found for \(g_w=3\), indicating that a less strict capacity threshold could be beneficial for finding a solution. This becomes apparent in Fig. 7, which shows a broad view of the performances of different configurations as well as isovalue lines, i.e. lines with equal values of \({fit}_{i,o,w,s}\), for different values of \(g_w\). The figure indicates that the slope of the Pareto optimal set of configurations, effectively the line limiting the space of found solutions from the bottom, is significantly shallower than all isovalue lines, and thus the algorithm is forced towards values of \({fit}_{cap,i,o,s}\) close to 0.

Fig. 7
figure 7

Distribution of SA results for \(s=3\) with isovalue lines for different \(g_w\)

Another interesting aspect of these experiments is each heuristic’s progress throughout the run. Figure 8 shows the average fitness of the best configuration found at each point of the optimisation runs. The progress of the GA shows a continuous approximation towards the optimum, where the average rate of progress slows down with each generation. SA, on the other hand, shows an interesting behaviour. The algorithm underperforms compared to GA until the last third of the experiment. Then, when \(\Theta \) is low enough, SA seems to be much more efficient at refining the solution and dominates GA in every case. For both SA and GA, \(N=2950\) seems to be the saturation point, as no significant average improvement is found for \(N=4500\).

Fig. 8
figure 8

Average best fitness throughout different runs of SA and GA. (Note: For \(N=2950\), 9 runs are aggregated for SA and GA, respectively, for \(N=1500\), 4500 only 3 runs were performed)

5 Discussion

The results show that the combination of metaheuristics and simulation is valuable for complex decision-making in production system planning. However, the chosen metaheuristic and its parametrisation have a considerable impact on the quality of the generated results. The results above clearly indicate that the SA algorithm used is superior to the GA algorithm provided by TPS, as SA dominates GA in every experiment. This is likely due to SA being more efficient in selecting configurations to be tested. It is particularly noteworthy that the GA cannot reduce the gap to SA even in the case of larger numbers of simulation runs, where an efficiency gap would be expected to diminish. Other studies have also observed similar comparative behaviour between GA and SA [7].

Interestingly, the most significant increases in result quality were found by SA in the last third of the process, where \(\Theta \) was already relatively low. For the problem presented, it might have been more efficient to start at a lower value of \(\Theta \) to limit the amount of inefficient searching. Thus, the selection of an apt cooling schedule should be considered crucial. In addition, numerous different cooling functions are applied in the literature. It would be interesting to investigate the influence of different functions on the configuration solution in this specific problem. Since these functions govern the compromise between exploration and exploitation, the convergence of the method would also be affected.
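For illustration, a few commonly used cooling schedules are sketched below for comparison with the exponential schedule of Eq. (6). Whether any of them would actually improve convergence for this problem is an open question and is not claimed here.

```python
import math

def exponential_cooling(n, N, theta_0, theta_final):
    """Schedule used in this work (Eq. 6)."""
    return theta_0 * (theta_final / theta_0) ** (n / N)

def linear_cooling(n, N, theta_0, theta_final):
    """Temperature decreases by a constant amount per step."""
    return theta_0 + (theta_final - theta_0) * n / N

def logarithmic_cooling(n, N, theta_0, theta_final=None):
    """Very slow classical schedule, proportional to 1 / log(n + 1); theta_final is ignored."""
    return theta_0 * math.log(2) / math.log(n + 1) if n >= 1 else theta_0

# Starting at a lower theta_0, as suggested above, shifts each schedule downwards
# and shortens the broad exploration phase at the beginning of the run.
```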

To further enhance the present evaluation of the methods for such optimisation problems, it would be desirable to carry out additional experiments with further implementations of the methods in the simulation model. This way, further degrees of freedom of the SA procedure and the GA, such as termination criteria, could be investigated. In this work, the GA is set up with the standard parameters of TPS. It would also be interesting to observe the results of this procedure in other settings. One degree of freedom of the GA is, for example, the population size, which influences the convergence of the GA [37]. In addition, crossover operators other than those used for this optimisation could be introduced and their impact investigated. Some crossover operators are better suited for certain problems [38]. The same applies to mutation operators and rates; likewise, other neighbourhood operators can be used for the SA method.

The parametrisation of the fitness function also seems to considerably influence the quality of the solutions. In the presented case, the capacity part of the fitness function was implemented as an increasing linear term for values exceeding a threshold time. The resulting isovalue lines of capacity and cost fitness were significantly steeper than the local gradient of the optimal Pareto set of solutions. Therefore, the resulting best solutions were always located on the \({fit}_{cap,i,o,s}=0\) line. Thus, the evaluation of the final solution was not directly dependent on the weighting of the capacity fitness. However, lower weightings of the capacity fitness and, thus, shallower isovalue lines seemed to allow the algorithms to explore solutions close to the threshold more efficiently and thus improve the solution quality. To optimise the algorithms, it may be prudent to choose weightings for such soft constraints that result in isovalue lines close to the expected gradient of the Pareto optimal set. This idea should be investigated further in subsequent research.
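To make this geometric argument explicit, rearranging Eq. (5) shows that an isovalue line with constant overall fitness \(c\) in the \(({fit}_{cap},{fit}_{cost})\) plane is a straight line with slope \(-g_w\):

$$\begin{aligned} {fit}_{cost,i}=c-g_w\,{fit}_{cap,i,o,s} \end{aligned}$$

Choosing \(g_w\) such that this slope approximates the local gradient of the Pareto front would therefore reward movements along the front instead of forcing the search onto the \({fit}_{cap}=0\) axis.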

The experiments also show that, at least for the present use case, the stochastic nature of simulations can be ignored during the optimisation to drastically reduce the required computational effort. However, this may not necessarily be the case for different problems. For example, fixed-seed observation methods will likely perform worse in systems with higher levels of stochastic variation. Thus, finding thresholds for using different observation methods would be interesting, though very laborious. Also, dynamically adjusted observation methods, as proposed by [26], could be used.

6 Conclusion

This contribution investigated genetic algorithms and simulated annealing to optimise a material flow simulation of a matrix production system. The discrete event simulation model was developed and validated to closely resemble the production system. Subsequently, both GA and SA algorithms were tested using different parametrisations. The results show that SA is superior to GA in the tested use case and metaheuristic configuration, likely due to its increased efficiency in the final phase of the optimisation. In conclusion, it can thus be said that using such approaches is a valuable addition to the planning of matrix production systems.