1 Introduction

According to Dillon et al. (2010), meeting service level agreements is a major issue for cloud computing providers, and “effective decision models and optimization algorithms” for efficient resource management are key factors in reaching this goal. In this context, Google proposed the subject of the ROADEF/EURO challenge 2012 (http://challenge.roadef.org/2012/en/): a complex and large-scale machine reassignment problem, in which a set of processes assigned to a set of machines has to be reassigned (or moved) so as to balance machine usage improvement against moving costs, under resource (more precisely CPU, RAM, disk) and operational constraints.

The ROADEF/EURO challenge is a contest jointly organized by the French Operational Research and Decision Aid society (ROADEF) and the European Operational Research society (EURO). The contest has been held on a regular basis since 1999 and always concerns an industrial optimization problem proposed by an industrial partner. For the 2012 edition, registered participant teams had to send a first solution program for the qualification phase that ended in December 2011. The programs were evaluated on a first set of training problem instances, the “A” set. The best teams were then qualified for the subsequent stage and could adjust their program to solve a more realistic set of instances that was made available to them, the “B” set. They had to send the revised version of their program in June 2012. The programs were evaluated both on the B set and on a set that was kept unknown to the participants, the “X” set. Prizes were awarded at the EURO 2012 conference in Vilnius.

In terms of participation, the 2012 edition of the challenge was an unprecedented success, with 82 registered teams, 48 teams that actually sent a program for qualification, 30 qualified teams and 27 teams that sent a program for the final evaluation. Figure 1 is a map presenting the worldwide distribution of registered and qualified teams.

Fig. 1 Team distribution map

Special issues of journals have been devoted to the challenge since 2005: the car sequencing problem proposed by Renault (Solnon et al. 2008), the workforce scheduling problem proposed by France Telecom (Artigues et al. 2009), the disruption management problem for commercial airlines proposed by Amadeus (Artigues et al. 2012) and the large-scale electricity production problem proposed by EDF (Özcan et al. 2013). This paper aims at introducing this special issue by presenting the ROADEF/EURO challenge 2012 subject, as well as the methods of the finalist teams and their results. Section 2 presents the problem, as defined by Google. Related work in the literature is described in Sect. 3. The characteristics of the problem instances provided by Google are detailed in Sect. 4. The results of the participant teams for the qualification and for the final stages are analysed in Sect. 5.

2 Problem description

The aim of the addressed problem is to improve the usage of a set of machines related to several resources, such as RAM and CPU, and processes consuming these resources. Initially each process is assigned to a machine. In order to improve the machine usage, processes can be moved from one machine to another. A solution to this problem is a new process-machine assignment which satisfies all hard constraints and minimizes a given overall cost.

Let \(\mathcal {M}\) be the set of machines and \(\mathcal {P}\) the set of processes.

Definition 1

(Solution) A solution is an assignment of each process \(p \in \mathcal {P}\) to one and only one machine \(m \in \mathcal {M}\); this assignment is denoted by the mapping \(M(p) = m\). The original assignment of process p is denoted \(M_0(p)\).

2.1 Constraints

A valid solution is composed of possible moves subject to a set of hard constraints: capacity, conflict, spread, dependency and transient constraints.

Let \(\mathcal {R}\) be the set of resources, which is common to all the machines, \(C(m, r)\) be the capacity of resource \(r \in \mathcal {R}\) for machine \(m \in \mathcal {M}\) and \(R(p, r)\) the requirement of resource \(r \in \mathcal {R}\) for process \(p \in \mathcal {P}\).

Definition 2

(Usage) Given an assignment M, the usage U of a machine m for a resource r is defined as:

$$\begin{aligned} U(m, r) = \sum _{\begin{array}{c} p \in \mathcal {P},\ M(p) = m \end{array}} R(p, r) \end{aligned}$$

Constraint 1

(Capacity) A process can run on a machine if and only if the machine has enough available capacity on every resource. A feasible assignment must satisfy:

$$\begin{aligned} \forall ~m \in \mathcal {M}, r \in \mathcal {R}, ~~U(m, r) \le C(m, r) \end{aligned}$$
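As an illustration, the usage computation and capacity check can be sketched in Python; the dictionary layouts and names below are our own choice, not part of the official challenge format.

```python
def usage(assignment, requirement, machine, resource):
    """U(m, r): total requirement of the processes assigned to `machine`."""
    return sum(req[resource] for proc, req in requirement.items()
               if assignment[proc] == machine)

def satisfies_capacity(assignment, requirement, capacity):
    """Constraint 1: U(m, r) <= C(m, r) for every machine and resource."""
    return all(usage(assignment, requirement, m, r) <= cap
               for m, resources in capacity.items()
               for r, cap in resources.items())

# Two machines, two resources (CPU, RAM), two processes.
capacity = {"m1": {"cpu": 4, "ram": 8}, "m2": {"cpu": 4, "ram": 8}}
requirement = {"p1": {"cpu": 3, "ram": 2}, "p2": {"cpu": 2, "ram": 2}}

print(satisfies_capacity({"p1": "m1", "p2": "m2"}, requirement, capacity))  # True
print(satisfies_capacity({"p1": "m1", "p2": "m1"}, requirement, capacity))  # False (CPU: 5 > 4)
```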

Let \(\mathcal {S}\) be a set of services which partition the processes.

Constraint 2

(Conflict) A service \(s \in \mathcal {S}\) is a set of processes which must run on distinct machines, i.e.

$$\begin{aligned} \forall ~s \in \mathcal {S}, \quad \forall (p_i,p_j) \in s^2, \quad p_i \ne p_j \Rightarrow M(p_i) \ne M(p_j) \end{aligned}$$

Let \(\mathcal {L}\) be the set of locations, a location \(l \in \mathcal {L}\) being a set of machines.

Constraint 3

(Spread) For each \(s \in \mathcal {S}\), let \(spreadMin(s) \in \mathbb {N}\) be the minimum number of distinct locations where at least one process of service s should run:

$$\begin{aligned} \forall s \in \mathcal {S}, \sum _{l \in \mathcal {L}} min\bigg (1, \bigg |\{p \in s ~|~ M(p) \in l\} \bigg | \bigg ) \ge spreadMin(s) \end{aligned}$$
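The spread count amounts to counting distinct locations hosting the service; a minimal sketch (with a hypothetical `location_of` map from machines to locations):

```python
def spread(assignment, service, location_of):
    """Number of distinct locations hosting at least one process of `service`."""
    return len({location_of[assignment[p]] for p in service})

# Machines m1, m2 are in location l1 and m3 in l2.
location_of = {"m1": "l1", "m2": "l1", "m3": "l2"}
assignment = {"p1": "m1", "p2": "m2", "p3": "m3"}

# Service {p1, p2, p3} spans locations {l1, l2}, so spreadMin(s) = 2 is satisfied.
print(spread(assignment, {"p1", "p2", "p3"}, location_of))  # 2
```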

Let \(\mathcal {N}\) be the set of neighborhoods, a neighborhood \(n \in \mathcal {N}\) being a set of machines.

Constraint 4

(Dependency) If service \(s^a\) depends on service \(s^b\), then each process of \(s^a\) should run in the neighborhood of a \(s^b\) process:

$$\begin{aligned} \forall ~p^a \in s^a, \;\exists ~p^b \in s^b ~\text {and}~ n \in \mathcal {N} ~\text {such that}~ M(p^a) \in n ~\text {and}~ M(p^b) \in n \end{aligned}$$
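A sketch of the dependency check, with a hypothetical `neighborhood_of` map from machines to neighborhoods:

```python
def satisfies_dependency(assignment, service_a, service_b, neighborhood_of):
    """Each process of service_a must share a neighborhood with some process of service_b."""
    neighborhoods_b = {neighborhood_of[assignment[p]] for p in service_b}
    return all(neighborhood_of[assignment[p]] in neighborhoods_b for p in service_a)

# Machines m1, m2 are in neighborhood n1 and m3 in n2.
neighborhood_of = {"m1": "n1", "m2": "n1", "m3": "n2"}

print(satisfies_dependency({"pa": "m1", "pb": "m2"}, {"pa"}, {"pb"}, neighborhood_of))  # True
print(satisfies_dependency({"pa": "m1", "pb": "m3"}, {"pa"}, {"pb"}, neighborhood_of))  # False
```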

Let \(\mathcal {TR} \subseteq \mathcal {R}\) be the subset of resources which need transient usage.

Constraint 5

(Transient) When a process p is moved from one machine m to another machine \(m'\), some resources are consumed twice, i.e. they require capacity on both the original machine \(M_0(p)\) and the current machine M(p):

$$\begin{aligned} \forall m \in \mathcal {M}, r \in \mathcal {TR}, \sum _{\begin{array}{c} p \in \mathcal {P},\ M_0(p)=m ~\vee ~ M(p)=m \end{array}} R(p, r) \le C(m, r) \end{aligned}$$
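For a transient resource, a moved process therefore counts on both its original and its current machine; a sketch with hypothetical names:

```python
def transient_usage(assignment, original, requirement, machine, resource):
    """Usage on `machine` counting processes whose original OR current host it is."""
    return sum(req[resource] for p, req in requirement.items()
               if original[p] == machine or assignment[p] == machine)

requirement = {"p1": {"disk": 3}}

# p1 has moved from m1 to m2: its 3 units of disk are still held on m1 as well.
print(transient_usage({"p1": "m2"}, {"p1": "m1"}, requirement, "m1", "disk"))  # 3
print(transient_usage({"p1": "m2"}, {"p1": "m1"}, requirement, "m2", "disk"))  # 3
```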

Figure 2 displays a small problem instance with nine machines on which processes are assigned and occupy a certain amount of resources (CPU, RAM and disk). Each process is a colored rectangle and the color identifies the process service. Machines are doubly partitioned in locations (there are four locations) and neighborhoods (there are three neighborhoods). There are also two dependencies.

Fig. 2 A small problem instance

2.2 Objectives

The aim of the problem is to improve the usage of a set of machines. To do so, a total objective cost is built by combining a load cost, a balance cost and several move costs.

Let \(SC(m, r)\) be the safety capacity of a resource \(r \in \mathcal {R}\) on a machine \(m \in \mathcal {M}\).

Cost 1

(Load) The load cost is defined per resource and corresponds to the used capacity above the safety capacity; more formally:

$$\begin{aligned} loadCost(r) = \sum _{m \in \mathcal {M}} max\big (0, U(m, r) -SC(m, r)\big ) \end{aligned}$$

Let \(\mathcal {B}\) be a set of triples defined in \(\mathcal {R}^2 \times \mathbb {N}\).

Cost 2

(Balance) For a given triple \(b = \langle r_1, r_2, target \rangle \in \mathcal {B}\),

$$\begin{aligned} balanceCost(b) = \sum _{m \in \mathcal {M}} max \big (0, target \times A(m, r_1) - A(m, r_2) \big ) \end{aligned}$$

with \(A(m, r) = C(m, r) - U(m, r)\).

Let PMC(p) be the cost of moving the process p from its original machine \(M_0(p)\).

Cost 3

(Process Move)

$$\begin{aligned} processMoveCost = \sum _{\begin{array}{c} p \in \mathcal {P} ~ such ~ that \\ M(p) \ne M_0(p) \end{array}} PMC(p) \end{aligned}$$

Cost 4

(Service Move)

$$\begin{aligned} serviceMoveCost = \max _{s \in \mathcal {S}} \bigg ( \big |\{p \in s ~|~ M(p) \ne M_0(p)\} \big | \bigg ) \end{aligned}$$

Let \(MMC(m_{source}, m_{destination})\) be the cost of moving any process p from machine \(m_{source}\) to machine \(m_{destination}\).

Cost 5

(Machine move)

$$\begin{aligned} machineMoveCost = \sum _{p \in \mathcal {P}} MMC(M_0(p), M(p)) \end{aligned}$$

2.2.1 Total objective cost

The objective is to minimize the weighted sum of all the previous cost components.

$$\begin{aligned} totalCost = {} & \sum _{r \in \mathcal {R}} weight_{loadCost}(r) \cdot loadCost(r) \\ & + \sum _{b \in \mathcal {B}} weight_{balanceCost}(b) \cdot balanceCost(b) \\ & + weight_{processMoveCost} \cdot processMoveCost \\ & + weight_{serviceMoveCost} \cdot serviceMoveCost \\ & + weight_{machineMoveCost} \cdot machineMoveCost \end{aligned}$$
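As an illustrative sketch (our own simplification, not the organizers' code), the weighted objective can be computed as follows; the balance and machine-move terms are omitted for brevity, and all data and weights below are hypothetical.

```python
def usage(assignment, requirement, machine, resource):
    return sum(req[resource] for p, req in requirement.items()
               if assignment[p] == machine)

def load_cost(assignment, requirement, machines, safety, resource):
    """Sum over machines of the usage above the safety capacity."""
    return sum(max(0, usage(assignment, requirement, m, resource) - safety[m][resource])
               for m in machines)

def process_move_cost(assignment, original, pmc):
    return sum(pmc[p] for p in assignment if assignment[p] != original[p])

def service_move_cost(assignment, original, services):
    return max(sum(1 for p in s if assignment[p] != original[p]) for s in services)

def total_cost(assignment, original, requirement, machines, safety, pmc,
               services, w_load, w_pmc, w_smc):
    return (sum(w * load_cost(assignment, requirement, machines, safety, r)
                for r, w in w_load.items())
            + w_pmc * process_move_cost(assignment, original, pmc)
            + w_smc * service_move_cost(assignment, original, services))

machines = ["m1", "m2"]
safety = {"m1": {"cpu": 2}, "m2": {"cpu": 2}}
requirement = {"p1": {"cpu": 3}, "p2": {"cpu": 2}}
pmc = {"p1": 10, "p2": 4}
services = [{"p1"}, {"p2"}]

# p1 moved from m2 to m1: usage on m1 is 5, i.e. 3 above safety; one move of cost 10;
# at most one moved process per service.
print(total_cost({"p1": "m1", "p2": "m1"}, {"p1": "m2", "p2": "m1"},
                 requirement, machines, safety, pmc, services,
                 w_load={"cpu": 1}, w_pmc=1, w_smc=10))  # 3 + 10 + 10 = 23
```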

3 Related work

Cloud computing aims at enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction (Dillon et al. 2010). Resource managers need to employ fast and effective decision models and optimization algorithms, and the topic proposed for the ROADEF/EURO challenge aims at providing such algorithms. Several related works have been conducted in this area; we highlight the following.

Xu et al. (2014) review the state-of-the-art research on managing the performance overhead of virtual machines, and summarize it under diverse scenarios of the Infrastructure-as-a-Service (IaaS) cloud, ranging from single-server virtualization and single mega datacenters to multiple geo-distributed datacenters. Specifically, they discuss performance modeling methods with a particular focus on their accuracy and cost, and compare overhead mitigation techniques by identifying their effectiveness and implementation complexity. Yang and Tate (2012) present a descriptive literature review and classification scheme for cloud computing research that consists of four main categories: technological issues, business issues, domains and applications, and conceptualising cloud computing. We can also mention the review of Zhao et al. (2014) on the main challenges of the cloud computing paradigm.

Setzer and Stage (2010) propose a method that aims at the reduction of managerial complexity of resource and workload management in data centers hosting thousands of applications with varying workload behaviors. Their method is based on determining points in time where migrations are likely to be beneficial for a given set of workloads.

Lin et al. (2011) consider that power consumption is one of the most critical problems in data centers. One effective way to reduce power consumption is to consolidate the hosting workloads and shut down physical machines which become idle after consolidation. They show that server consolidation is an NP-hard problem and propose a dynamic round-robin algorithm for energy-aware virtual machine scheduling and consolidation. Cambazard et al. (2013) address the allocation of virtual machines to servers with time-variable resource demands in data centers in order to minimize energy costs while ensuring service quality. They present a scalable constraint programming-based large neighborhood search method. Kessaci et al. (2013) propose a multi-objective genetic algorithm to optimize the energy consumption, carbon dioxide emissions and generated profit of a geographically distributed cloud computing infrastructure. Wang et al. (2014) propose a multi-objective bi-level programming model to improve the energy efficiency of servers. They combine an energy-aware data placement policy and a locality-aware multi-job scheduling scheme. Chang et al. (2010) formulate demand for computing power and other resources as a resource allocation problem with multiplicity, where computations that have to be performed concurrently are represented as tasks and a later task can reuse resources released by an earlier task. They show that finding a minimal allocation is NP-complete and present an approximation algorithm. Mezmaz et al. (2011) investigate the problem of scheduling precedence-constrained parallel applications on heterogeneous computing systems such as cloud computing infrastructures. They propose a parallel bi-objective hybrid genetic algorithm that takes into account not only makespan but also energy consumption.

In this general context, the problem considered in the ROADEF/EURO challenge addresses a single step of the dynamics of the virtual machine assignment process, more precisely the benefit of machine reassignment.

4 Instances

Tables 1 and 2 present the characteristics of data sets A, B and X. Data set A was used to rank the participants during the qualification phase. Data sets B and X were used to rank the participants for the final stage. Data set X was kept unknown to the participants throughout the competition. In dataset A, the number of machines ranges from 4 to 100 and the number of processes ranges from 100 to 1000. In the larger datasets B and X, the number of machines ranges from 100 to 5000 and the number of processes from 5000 to 50,000. There are ten instances per set. Note that each instance of dataset X has exactly the same parameter ranges as its corresponding B instance.

Table 1 Characteristics of dataset A
Table 2 Characteristics of datasets B and X

5 Results and methods overview

5.1 Results

A trivial solution corresponds to the solution where no process is reassigned. We call this solution the reference solution. For both the qualification and the final stages, teams were ranked according to the same scheme. The score of a team for a given instance is the gap between the team’s cost and the best cost, divided by the cost of the reference solution. For the qualification stage, the score of a team was the sum of its scores on all instances of dataset A. For the final stage, the score of a team was the sum of its scores on all B and X dataset instances.
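The per-instance scoring scheme described above can be sketched as follows (the function name and the numbers are ours, for illustration only):

```python
def instance_score(team_cost, best_cost, reference_cost):
    """Gap from the best submitted cost, normalized by the do-nothing reference."""
    return (team_cost - best_cost) / reference_cost

# Hypothetical instance: the reference (no reassignment) costs 1000,
# the best team reaches 200 and our team 260.
print(instance_score(260, 200, 1000))  # 0.06, i.e. a 6 % score on this instance
```

A team's overall score is then the sum of these per-instance scores over the relevant dataset.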

For the qualification stage, a qualification threshold was set at a score of 100 %. Figure 3 shows a chart displaying the score of each competing team and the qualified teams. A team identifier starts with an “S” for Senior teams and a “J” for Junior teams. To compete in the Junior category, a team must be composed only of students, none of whom has completed a PhD; any other team is registered in the Senior category. For more information on the registered teams, we refer to the 2012 challenge web site (http://challenge.roadef.org/2012/en/). Table 3 gives a more detailed description of the qualified team compositions and results for the qualification stage. It is worth noticing that the best team in the qualification stage was the junior team J17 (Wauters and Vancroonenburg).

Fig. 3 Rank of the teams for the qualification stage

Table 3 Results of the qualified teams for the qualification phase

For the final stage, the qualified participants had to deal with the much larger instances (see Table 2). Among the qualified teams, 27 teams sent a program for evaluation. The results of these 27 teams are summarized in Fig. 4 and in Table 4.

Fig. 4 Final ranking

Table 4 Final scores

To evaluate instance difficulty with regard to the results of the participants’ methods, Fig. 5 displays, for each instance of the B and X datasets, the percentiles of the scores among the participants. The figure shows that some instances were highly discriminating (e.g. B2, X7).

5.2 Methods overview

Below we provide a brief description of the method of each participant team, which clearly illustrates the diversity of the proposed optimization approaches. We only mention quantitative results for the teams that obtained the Best Known Solutions (BKS) on at least one instance, or a gap close to the best known results on some instances. We also underline that, due to the exceptional participation rate, being in this short list was a real challenge. Table 5 gives a short and rough overview of the components of each method, where the columns refer to the following components: LB (a lower bound method was included), LS (local search other than LNS, HH, GRASP, TS, SA and VNS was used), LNS (Large Neighborhood Search), HH (Hyper-Heuristic), GRASP (Greedy Randomized Adaptive Search Procedure), TS (Tabu Search), SA (Simulated Annealing), VNS (Variable Neighborhood Search), LA (Late Acceptance heuristic), PR (Path Relinking), CP (Constraint Programming), (MI)LP (Mixed-Integer Linear Programming or Linear Programming was used), DP (Dynamic Programming), PS (a parallel search component was a key element of the method).

Buljubašić, Demirović and Gavranović (winning team S41, paper in this special issue) propose two lower bounds, based on load and balance costs, to evaluate the quality of their local search algorithm. The method exploits four neighbourhoods: shift, swap, Big Process Rearrangement (BPR) and chain shift. They observe that BPR influences the objective function much more than small process rearrangements. To choose a good set of processes, they construct an auxiliary directed weighted graph in which each node represents a process. They find two BKS in dataset X and three BKS in dataset B.

Mehta, O’Sullivan and Simonis (Team S38) (Mehta et al. 2012; Malitsky et al. 2013) present a Constraint Programming (CP) model with large neighbourhood search. At each iteration of the algorithm, a subset of processes to be reassigned is selected and the variable domains of the CP model are updated. The resulting CP model is solved with a threshold on the number of failures, and the best solution is kept for the following iteration. They also propose lower bounds summing load and balance costs and report a gap of 0.26 %. They find two BKS in dataset X and one BKS in dataset B.

The method of Jaśkowski, Gawron, Szubert and Wieloch (Team J12, paper in this special issue) consists in combining mixed integer linear programming with a heuristic approach. The method has three phases: a greedy hill climber to obtain a good starting solution with moves on processes and machines, a hyper-heuristic approach based on several low-level heuristics, and finally a Mixed Integer linear Programming (MIP) phase with randomized moves. The MIP heuristic iteratively selects a small subset of machines and optimally reassigns the processes within this subset using a MIP solver. The choice of the machine subset can be randomized or decided by a dynamic programming procedure. This method finds one BKS in dataset X.

Fig. 5 Percentiles for the final results

Another successful combination of a large neighbourhood search with a constraint programming model is proposed by Brandt, Völker and Speck (Team J25, paper in this special issue). Their hybrid method uses multiple threads. Iteratively, a subset of processes is chosen by neighbourhood search and a CP model reassigns these processes. At the end of each iteration, the best solution of each thread is synchronized. On seven instances out of ten of the X dataset, they have less than 1.6 % gap from the BKS. However, on some instances such as X3 and X5, they are very far from the best known solution, with gaps of 1883.36 % and 324.66 % respectively.

Teypaz (Team S14) also proposes an integration of an exact algorithm with a metaheuristic. The algorithm is a two-phase tabu search using matching moves. To evaluate these moves, a maximum weight matching problem is solved with the well-known blossom shrinking algorithm of Edmonds, as implemented in the open source LEMON library. The tabu search uses two elementary moves: the reassignment of a process and the swap of two processes. The method finds one BKS in dataset X.

Another tabu search is proposed by Pécot (Team S34), with two neighbourhoods: insert and switch. Instead of the whole neighbourhoods, only randomly picked subsets are explored. This simple method is very efficient and finds two BKS on the X instances and three BKS on the B instances.

Jarboui and Mladenović (Team S40) present a three-level Variable Neighborhood Search (VNS) algorithm which decomposes the problem into smaller subproblems. The main structure, a skewed variable neighborhood search, may accept slightly worse solutions if they are sufficiently different from the incumbent solution. This method finds two BKS on the B instances.

Table 5 Overview of used techniques

The Simulated Annealing (SA) of Ritt, Buriol, Portal, Borba and Benavides (Team S23, paper in this special issue) has two neighbourhoods (process reassignment and process swap) which are selected randomly. They propose a particular data structure which allows moves to be evaluated and updates to be performed in constant time. On six instances of the X dataset, they have less than 0.2 % gap from the BKS. Moreover, their maximum gap from the BKS is less than 50 % (48.7 % on instance X5).
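The constant-time evaluation idea can be sketched as follows (our own assumed reconstruction, not the team's code): per-machine usages are cached so that the load-cost delta of a single reassignment only touches the two machines involved, independently of the numbers of processes and machines.

```python
def load_delta(usage, req, safety, src, dst):
    """Change in loadCost if a process with requirements `req` moves src -> dst."""
    delta = 0
    for r, need in req.items():
        # Source machine loses `need` units of resource r.
        delta += max(0, usage[src][r] - need - safety[src][r]) \
                 - max(0, usage[src][r] - safety[src][r])
        # Destination machine gains `need` units of resource r.
        delta += max(0, usage[dst][r] + need - safety[dst][r]) \
                 - max(0, usage[dst][r] - safety[dst][r])
    return delta

def apply_move(usage, req, src, dst):
    """Keep the cached usages consistent once a move is committed."""
    for r, need in req.items():
        usage[src][r] -= need
        usage[dst][r] += need

usage = {"m1": {"cpu": 6}, "m2": {"cpu": 1}}
safety = {"m1": {"cpu": 4}, "m2": {"cpu": 4}}
print(load_delta(usage, {"cpu": 2}, safety, "m1", "m2"))  # -2: the overload on m1 vanishes
```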

Sansoterra, Ferruci, Calcaveccia and Sironi (Team J33) propose a parallel simulated annealing and variable neighbourhood search, which run concurrently to exploit multiple CPU cores. Communication between the heuristics happens through a shared solution pool. This solution pool provides safe access to two separate solution sets, one with high-quality and the other with high-diversity solutions. A path relinking procedure takes one solution from each set with probability 50 % and explores intermediate solutions. One BKS in dataset X and one BKS in dataset B are found.

A late acceptance hill climbing metaheuristic with two simple and fast neighbourhood functions is proposed by Vancroonenburg and Wauters (Team J17). This method accepts a new solution if it is better than the solution accepted L iterations earlier. As for several other teams, the two moves are process reassignment and process swap. Preliminary tests show that a greater value of L leads to slower convergence but to a better final solution. On six instances of set X, the maximum gap from the BKS is less than 1.5 %, but on some instances such as X5 the gap can be very large.
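The late acceptance scheme can be sketched generically as follows (an assumed skeleton, not the team's exact code); `cost` and `neighbor` are problem-specific callbacks and the toy problem below is ours.

```python
import random

def lahc(initial, cost, neighbor, history_len, iterations, seed=0):
    """Late acceptance hill climbing: accept a candidate if it beats the
    current solution or the one accepted `history_len` iterations earlier."""
    rng = random.Random(seed)
    current = best = initial
    history = [cost(initial)] * history_len  # circular buffer of past costs
    for k in range(iterations):
        candidate = neighbor(current, rng)
        c = cost(candidate)
        if c <= cost(current) or c <= history[k % history_len]:
            current = candidate
        history[k % history_len] = cost(current)
        if cost(current) < cost(best):
            best = current
    return best

# Toy usage: minimize x^2 over the integers with +/-1 moves.
best = lahc(10, lambda x: x * x, lambda x, rng: x + rng.choice([-1, 1]),
            history_len=5, iterations=1000)
print(best * best <= 100)  # True: the result is never worse than the start
```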

A hybrid method combining integer programming and a late acceptance metaheuristic with large variable neighbourhood search is proposed by Gogos, Valouxis, Alefragis and Housos (Team S5). These components are executed cooperatively until the time limit. Although team S5 finds no BKS in either set, its gap from the BKS is at most 234.94 %.

Team S25 (Gharbi, Haouari, Mrab and Kharbeche, in this special issue) proposes a mixed integer linear model for the problem. However, because of the size of the instances, the MIP is iteratively solved on small subsets of machines and their assigned processes. On seven instances of the X dataset the gap from the BKS is less than 3.06 %, but the gap is significantly larger on some other instances.

A three-phase algorithm combining simulated annealing, steepest descent and a linear model is used by Peekstok and Kuipers (Team S1). Although the initial algorithm took infeasible solutions into account through a penalty cost, only feasible solutions are considered in the final form of the method, because of the size of the implied solution space. They obtain one BKS on the instances of dataset X.

Another hybridization of integer programming with a metaheuristic is introduced by de Oliveira, Lopes, de Noronha, de Morais and de Souza (Team S27). Their iterated local search with variable neighbourhood descent has a perturbation procedure based on an IP formulation. They report an average improvement of 63.97 % over the initial solution.

A more classical iterated local search is proposed by Lu and Whang (Team S37), exploring three neighbourhoods: process reassignment, 1-1 process swap and 1-2 process swap. When the incumbent solution is not updated, a random number of moves is applied to perturb the current solution. On seven instances of set X they have less than 4.83 % gap from the BKS, but the maximum gap over all X instances is very large.

The variable neighborhood descent of Ruiz (Team J14) includes three descent algorithms which are applied randomly; one of them is based on a MIP.

Chemla, Gacias and Gianessi (Team S43) use a local search algorithm that provides a warm start to a MIP. For large instances, an aggregation phase is needed.

Catusse (Team J38) proposes a hill climbing heuristic using three neighborhoods: Move changes the assignment of a process from one machine to another, Swap exchanges two processes between two machines, and 2-1 Swap exchanges two processes for one. A new solution is accepted if it improves the incumbent solution, but a degrading solution may occasionally be accepted to accelerate convergence.

Dudebout, Masson, Michallet, Petrucci, Subramanian and Vidal (Team J6) present a hybridization of a large neighborhood search with a local search. The local search uses two basic moves: relocation and swap of processes between machines. The large neighborhood search iteratively destroys and reconstructs a large part of the solution; for the reconstruction, a mixed integer program is solved.

Alfandari, Butelle, Coti, Finta, Plateau, Roupin and Rozencnop (Team S26, paper in this special issue) run two heuristics, a large neighborhood search and a simulated annealing, each on a different thread, within a framework that aims at selecting the best algorithm depending on feedback gathered during the search, based on ideas close to the hyper-heuristic concept.

Zaourar and Gabay (Team J19, in this special issue) propose a greedy randomized adaptive search procedure for the vector bin packing problem with heterogeneous bin sizes, which uses a first fit decreasing greedy algorithm for the construction phase. They provide results on instances of the vector bin packing and show how the heuristics can be adapted to the challenge problem.

Hanafi, Hashimoto, Nonobe, Vasquez, Vimont and Yagiura (Team S21) combine a mixed integer linear formulation with two metaheuristics: an iterated local search and a tabu search using the reverse elimination method to maintain the tabu list. At each iteration, the incumbent solution is replaced by the best solution in the union of two neighborhoods, Shift and Swap. A solution is better if its aggregated objective value is smaller: each component of the objective function is weighted, and the weights are updated whenever a local optimum is found.

Another parallel approach is developed by Tierney, Delgado, Pacino and Malitsky (Team J5). They propose a large neighborhood search using a mixed integer model in a destroy/reconstruct fashion. Randomly selected variables of the incumbent solution are relaxed, and a MIP is called as a local search operator to find the best values for the relaxed variables. Iterations continue until a termination criterion is met.
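A minimal destroy/repair large neighborhood search loop in this spirit can be sketched as follows (our own assumed illustration: a greedy cheapest-feasible reinsertion stands in for the MIP reoptimization step, on a toy single-resource problem of our making).

```python
import random

def fits(partial, req, cap, proc, machine):
    """Single-resource capacity check on a (possibly partial) assignment."""
    used = sum(req[p] for p, m in partial.items() if m == machine)
    return used + req[proc] <= cap[machine]

def lns(start, req, cap, cost, destroy_size, iterations, seed=0):
    rng = random.Random(seed)
    best = dict(start)
    for _ in range(iterations):
        candidate = dict(best)
        for p in rng.sample(sorted(candidate), destroy_size):  # destroy: unassign
            del candidate[p]
        for p in sorted(set(best) - set(candidate)):           # repair: greedy reinsertion
            feasible = [m for m in cap if fits(candidate, req, cap, p, m)]
            candidate[p] = (min(feasible, key=lambda m: cost({**candidate, p: m}))
                            if feasible else best[p])
        if cost(candidate) < cost(best):                       # keep only improvements
            best = candidate
    return best

req = {"p1": 3, "p2": 1, "p3": 2}
cap = {"m1": 5, "m2": 5}
safety = {"m1": 3, "m2": 3}

def cost(assignment):  # load above safety, summed over machines
    return sum(max(0, sum(req[p] for p, mm in assignment.items() if mm == m) - safety[m])
               for m in cap)

start = {"p1": "m1", "p2": "m1", "p3": "m2"}   # cost 1: m1 holds 4 > 3
best = lns(start, req, cap, cost, destroy_size=1, iterations=20)
print(cost(best) <= cost(start))  # True: the incumbent never worsens
```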

Benoist, Estellon, Gardi, Megel, Darlay and Nouioua (Team S6) translate the problem into a binary model and run the default search of LocalSolver, a commercial solver based on local search which uses adaptive simulated annealing as its search heuristic. A move of the local search consists in flipping the values of a certain number of binary variables and computing the resulting objective function. All moves resulting in the violation of a constraint are rejected; other moves are accepted depending on their impact on the objective function.

Clautiaux, Liefooghe, Legillon and Talbi (Team S11) propose a method divided into two phases, applied sequentially. In the first phase, an iterated local search algorithm is performed. In the second phase, a heuristic re-balances the overload cost across machines. They use an integer programming solver to improve the incumbent solution locally.

Larose and Posta (Team J10) run a parallel version of an iterated local search on two threads of the test machine. While the iterated local search generates a pool of feasible solutions, these are improved by a path relinking phase. One iteration of path relinking consists in randomly selecting an initial and a guiding solution from the pool, then generating a neighborhood based on moving a process from one machine to another.

Chiraphadhanakul and Figueroa (Team J21) repeatedly decompose the problem into subproblems that balance the load of a subset of machines and swap processes, following two selection methods, in order to minimize the overall costs. Each subproblem starts by solving a linear relaxation of the unconstrained problem to obtain reduced costs; these reduced costs are then used to estimate how costly it is to assign a process to a particular machine.

6 Concluding remarks

As already mentioned, the ROADEF/EURO challenge 2012 was a tremendously successful scientific event. We believe that the numerous proposed methods, which include local search, metaheuristics, matheuristics, and hybrid constraint programming and integer programming components, significantly advanced the state of the art for the industrial virtual machine reassignment problem.

Further research should concentrate on the dynamic aspects of the problem, considering several reassignment steps over a delimited horizon. This raises predictability questions and makes it necessary to propose intelligent reactive procedures as well as robust optimization methods.