Stochastic project management: Multiple projects with multi-skilled human resources

This paper presents two stochastic optimization approaches for simultaneous project scheduling and personnel planning, extending a deterministic model previously developed by Heimerl and Kolisch. For the problem of assigning work packages to multi-skilled human resources with heterogeneous skill efficiencies, uncertainty in work package processing times is addressed. In the case where the required capacity exceeds the available capacity of internal resources, external human resources are used. The objective is to minimize the expected external costs. The first solution approach is a 'matheuristic' based on a decomposition of the problem into a project scheduling subproblem and a staffing subproblem. An iterated local search procedure determines the project schedules, while the staffing subproblem is solved by means of the Frank-Wolfe algorithm for convex optimization. The second solution approach is Sample Average Approximation, where, based on sampled scenarios, the deterministic equivalent problem is solved through mixed integer programming. Experimental results for synthetically generated test instances inspired by a real-world situation are provided, and some managerial insights are derived.


Introduction
In project management, the two important tasks of project scheduling on the one hand and of personnel planning on the other hand are usually not addressed separately from each other, but rather in a simultaneous or interleaved manner. This interaction considerably increases the complexity of the planning process, with the consequence that project managers frequently wish for computational support to alleviate the cognitive burden of the combined scheduling/staffing decision. Decision support tools addressing the combined problem have been proposed in the literature, but typically they use deterministic models. Despite the fact that managers and scientists recognize the importance of uncertainty within project management, few works take this aspect into account when proposing methods for quantitative decision support for the combined problem described above, obviously for the reason that, as argued, this problem is already hard in the deterministic case. In this study we highlight the potential of stochastic optimization applied to combined project scheduling and staffing, and compare two different methods of solving the proposed model computationally.
The project scheduling and staffing model proposed by Heimerl and Kolisch (2010) deals with the problem of assigning multi-skilled human resources to work packages, while taking into account resource-specific and heterogeneous skill efficiencies. The objective is to minimize the costs for internal and external personnel. Heimerl and Kolisch (2010) provide a mixed-integer programming (MIP) formulation with a tight LP bound and compare its performance, using the MIP solver of CPLEX, to that of simple heuristics. Additionally, they investigate the influence of different parameters, such as the time window size, the utilization, and the number of skills per resource, on the personnel costs. In Kolisch and Heimerl (2012) the authors extend their previously presented model and develop a hybrid metaheuristic as an innovative solution method. The proposed solution method separates the problem into a scheduling and a staffing subproblem, where the staffing problem is solved through a generalized network simplex algorithm. Results show that the hybrid metaheuristic outperforms the MIP solver. In Felberbauer et al. (2016), the models from Heimerl and Kolisch (2010) and Kolisch and Heimerl (2012) are extended by considering labor contracts. For the problem of project scheduling and staffing with labor contracts, Felberbauer et al. (2016) propose a hybrid metaheuristic combining iterated local search and a greedy staffing heuristic. Van den Bergh et al. (2013) observe that most papers on personnel scheduling problems still appear to feature a deterministic approach and advise researchers to consider stochastic problem versions. They highlight the importance of uncertainty, explicitly mentioning volatile demand, last-minute changes, and rescheduling based on new information, as interesting new research topics. A recent article on workforce planning by De Bruecker et al. (2015) states that the small number of papers in this field taking uncertainty into account is alarming.
They refer to the general consensus that uncertainty is ubiquitous in real workforce planning, but uncertainty still remains relegated to the "future research" section of many papers. The authors mention the high complexity of integrating uncertainty within optimization models and techniques as one reason for its inadequate representation in the literature.
Considering uncertainty in project management and its subordinate planning levels, a recent work of Barz and Kolisch (2014) describes resource assignments in the telecommunication industry. The authors investigate a hierarchical, multi-skill resource assignment problem and use a discrete Markov decision process model for incoming jobs over an infinite time horizon. Gutjahr and Froeschl (2013) present a stochastic optimization model for project selection and project staffing. They assume that both the returns and the required efforts of the selected projects are random variables. The problem is decomposed into a project selection problem and a staffing subproblem. For the computational solution of the two problems, an adapted version of Variable Neighborhood Search and a Frank-Wolfe type algorithm, respectively, are used. Artigues et al. (2013) propose a robust optimization approach to resource-constrained project scheduling with uncertain activity durations, assuming that the decision-maker cannot associate probabilities with possible activity durations. The authors describe how robust optimization can be applied to project scheduling under uncertainty, and they develop a scenario-relaxation algorithm and a heuristic solution method that solves medium-sized instances. Gutjahr (2015) proposes a model for stochastic multi-mode resource-constrained project scheduling under risk aversion with the two objectives of makespan and costs. Activity durations and costs are modelled as random variables. For the scheduling part of the decision problem, the class of early-start policies is considered. A further decision to be made concerns the assignment of execution modes to activities. To take risk aversion into account, an approach of optimization under multivariate stochastic dominance constraints is adopted.
For the resulting bi-objective stochastic integer programming problem, the Pareto frontier is determined by means of an exact solution method, incorporating a branch-and-bound technique. Ingels and Maenhout (2017a,b) investigate the assignment of employees to cover the staffing requirements for specific skills and shifts. Their research focuses on improving the short-term adjustment capability of the shift roster by maximising the substitutability of employees as a means to increase the robustness of a project plan. The authors propose a three-step methodology including a two-phase pre-emptive programming approach. Both uncertainty of demand and uncertainty of capacity are considered by stochastic models. A personnel shift roster for a medium-term period offering sufficient flexibility for between-skill substitution and within-skill substitution is provided.
The articles Heimerl and Kolisch (2010), Kolisch and Heimerl (2012), Felberbauer et al. (2016) and Gutjahr and Froeschl (2013) constitute the starting point for the present paper. In line with these articles, we assume that in the case where the work time demand exceeds the available capacity, external capacity is used, e.g. by hiring external personnel. However, we introduce the following new features: (a) Contrary to the deterministic models presented in Heimerl and Kolisch (2010), Kolisch and Heimerl (2012), and Felberbauer et al. (2016), we model the required work time demand as stochastic. On the other hand, in order to keep our model compact, we simplify the model of Heimerl and Kolisch (2010) and Kolisch and Heimerl (2012) by neglecting the possibility of overtime work. (b) Contrary to Gutjahr and Froeschl (2013), we address the project scheduling decision. This issue introduces a relevant additional source of computational complexity into the problem, since project scheduling problems (already in a simple, deterministic context) are notoriously hard. In particular, the demand information does not appear here per project, but in a more fine-grained way for each activity of a project. On the other hand, compared to Gutjahr and Froeschl (2013), we do not deal here with the project selection aspect.
To cope with the computational challenge of the proposed model, we develop two solution approaches and compare their performance. The first, "matheuristic" approach uses a metaheuristic for the scheduling part of the problem as well as an exact solution procedure inspired by Gutjahr and Froeschl (2013) for the staffing part. It is noteworthy that the staffing part constitutes a nonlinear (though convex) optimization problem. The second approach follows a completely different strategy by using Sample Average Approximation to achieve a linearization of the model, and by solving the resulting "deterministic equivalent" problem, which remains a combined scheduling-staffing problem, with the aid of a MIP solver. A comparison of these two ways to address a mixed-integer stochastic optimization problem with a convex "lower level" problem may also be interesting for applications outside the area of project management.
Substantial literature focuses on the deterministic case of project scheduling and staffing. However, to the best of our knowledge, no previous studies have addressed the scheduling and staffing of multiple projects with multi-skilled resources holding heterogeneous skill efficiencies, under consideration of uncertain work time demand. The contribution of our article is:
1. We adapt the model of Kolisch and Heimerl (2012) by considering stochastic demand.
2. We present two solution procedures, namely Sample Average Approximation and a matheuristic, and compare their performance.
3. We analyze deterministic and stochastic planning and discuss cost estimation accuracy and the value of the stochastic solution.
4. We formulate managerial insights from the stochastic planning approach.
The paper is organized as follows: In Section 2 the stochastic optimization model is introduced. Section 3 presents the developed solution procedures. The structure of our test instances and the experimental design are described and the results of the computational experiments are discussed in Section 4. Finally, in Section 5 we formulate some managerial insights that can be drawn from the study of the stochastic optimization problem, and we mention some interesting topics for future research.

Problem formulation
In this section the stochastic project scheduling and staffing problem is presented.
Projects: P is a set of projects that must be completed during the planning horizon T (e.g., one year). All projects are independent of each other but compete for the same resources. The planning horizon T comprises discrete time periods t (e.g., months). Each project p (p ∈ P) has a work schedule defined by a sequence of consecutive activities q = 1, . . . , d p , where an activity is defined as that part of a project that is executed during one time period t. In this way, we discretize the overall project into consecutive pieces, each of the same length; as will be discussed below, interruptions will be allowed. During activity q, project p demands D psq units of work requiring skill s (s ∈ S), where S is a set of skills. The amount of work in a skill s performed in a certain activity q of a certain project p is called a work package. The sizes D psq of the work packages, which we can also interpret as work contents or efforts, are considered as random variables. This reflects the very frequently occurring situation that the required efforts D psq are unknown at the beginning of the planning process. Their true realizations become known only during project execution; we assume, however, that the distribution of the variables D psq is known in advance (or can at least be estimated): it is described by the probability density function (PDF) h psq (p ∈ P, s ∈ S, 1 ≤ q ≤ d p ). By using a PDF, we suppose that the distribution of D psq is a continuous distribution, such as a triangular or a beta distribution.
Moreover, we assume that a time window [ES p , LS p ] for the start of each project p ∈ P is pre-defined. Therein, ES p and LS p denote the index t of the earliest time period and of the latest time period, respectively, in which project p can be started. The latest finish period of project p, denoted LF p , is defined as LF p = LS p + d p − 1. The time window size γ of project p is defined as γ = LS p − ES p . The earliest start time of activity q is denoted ES pq and defined as ES pq = ES p + q − 1. The latest start time of activity q of project p, denoted LS pq , is equal to the latest finish time LF pq of this activity and is defined as LS pq = LF pq = LS p + q − 1. The activities q = 1, . . . , d p have to be processed in an ascending order. However, it is possible to interrupt the project between the discrete activities q once the project has started. Nevertheless, the order of the activities as well as the start time and finish time constraints have to be respected. For example, if ES p = 1, LS p = 3 and d p = 3 for a given project p, then it would be allowed to schedule the three activities of project p in time periods 1, 2, 3 (earliest possible schedule); some other feasible alternatives would be to schedule them in time periods 1, 4, 5, in time periods 2, 3, 5, or in time periods 3, 4, 5 (latest possible schedule). The schedule 2, 4, 6 would not be allowed, because it exceeds the latest finish period LF p = 5.

Resources: Our model distinguishes between internal and external human resources. Each internal resource k taken from the set of all internal human resources K holds a subset of skills S k ⊆ S. Conversely, from the perspective of a skill s, the subset K s = {k ∈ K | s ∈ S k } of K contains all resources that can perform skill s. If s ∈ S k , we take account of different degrees in the performance of resource k in skill s, specified by the efficiency η sk of resource k in skill s.
The higher the efficiency value η sk , the faster is resource k in executing a work package D psq requiring skill s. It is assumed that η sk is known at the beginning of the time horizon (the time of the decision) and does not change until its end. (In particular, we disregard effects such as learning or knowledge depreciation.) The values η sk are strictly positive for all s ∈ S k . Whenever convenient, we extend the definition of the values η sk to all pairs (s, k) and then set η sk = 0 for s ∉ S k . The quantity η sk represents a factor for the speed with which a certain skill is exerted. To distinguish between work content and actual time needed to perform a work package, the realization d psq of the random variable D psq will also be called effective work time. To get the time actually needed by the considered resource, we have to divide the effective work time by η sk , so the real work time is d psq /η sk . For example, let us assume that the work package d psq requires 10 units of effective work time to be processed. (This would be the time a "standard" employee would need for it.) Then, if employee k has an efficiency of η sk = 1.25 in skill s, she or he needs only d psq /η sk = 10/1.25 = 8 time units of real work time to complete this work package.
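The effective-to-real work time conversion can be sketched as follows (a minimal helper of our own; only the division by the efficiency η sk is taken from the model):

```python
def real_work_time(effective_work, efficiency):
    """Real work time a resource needs for a work package with the given
    effective work content (d_psq), at a strictly positive skill
    efficiency (eta_sk)."""
    if efficiency <= 0:
        raise ValueError("resource does not hold this skill (eta_sk would be 0)")
    return effective_work / efficiency
```

For the example above, real_work_time(10.0, 1.25) reproduces the 8 time units of real work time.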
Capacities: Each (internal) resource k has a limited capacity of a kt during time period t; these values are given in advance. Similar to Gutjahr and Froeschl (2013), we assume that internal resources earn fixed wages per period. Under our assumption that there is no overtime work, the costs for these wages sum up to a constant value, so they have no influence on the optimization problem. (Note that for the assumed employment type, even if an employee has idle times, the company has to pay for the hours where she or he is available and not only for the hours of scheduled work.) On the other hand, external resources are paid for the effective work they do. Our model assumes that they are available for each skill s to an unlimited extent at the expense of a cost rate c e s per unit of effective work time. Since in our model the work time demand D psq is stochastic, but the capacities of the internal resources are fixed and the project schedule has to be decided upon in advance as well, it is necessary to perform recourse actions after having observed the actual realizations of the D psq in order to be able to stick to the chosen schedule. We assume that whenever the internal capacity turns out to be insufficient to cover the demand, the company resorts to external resources (experts in the required skills) and pays them at their given cost rates. The (expected) overall cost for the external resources has to be minimized.
Decision variables: For the stochastic optimization model, the decision variables x ptsk ≥ 0 define the amount of work performed by internal resource k with skill s in period t for project p. Note that x ptsk is measured in effective work time and not in real work time. To define the times when the periods of a project are executed, the binary decision variables z pqt ∈ {0, 1} are introduced: z pqt = 1 if in time period t, activity q of project p is executed, and 0 otherwise. Thus, the x ptsk variables define the staffing decision, whereas the z pqt variables define the scheduling decision.

Stochastic optimization model
In (2) -(7) below, we present our stochastic optimization model in mathematical terms. For a project p and a time period t, we let (1) denote the set of all possible activities q of project p that could lead to a resource demand in time period t.
The objective function in Equation (2) minimizes the expected external costs. The symbol E denotes the mathematical expectation and x + stands for max(x, 0). The external costs result from that part of the stochastic demand that is not covered by the internally scheduled capacity. The constraints in Equation (3) ensure that activity q of project p is scheduled exactly once within the pre-defined time window [ES pq , LS pq ]. The condition that activities q must be processed in an ascending order is guaranteed by the constraints in Equation (4). Equation (5) formulates the capacity constraints for each resource k and time-period t: observe that for a resource with efficiency η sk , an effective work time of x ptsk entails a real work time of x ptsk /η sk . Finally, the decision variables are defined in Equations (6) and (7).
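Since the displayed equations are referenced but not restated here, the following sketch illustrates the recourse term of objective (2) for a single work package: the expected external cost c · E[(D − x)+], estimated by Monte Carlo sampling. The triangular distribution, its parameters, and the cost rate are assumptions for illustration only.

```python
import random

def expected_external_cost(cost_rate, internal_cover, sampler, n=100_000, seed=7):
    """Monte Carlo estimate of c * E[(D - x)^+]: the expected cost of the
    demand share that exceeds the internally scheduled coverage x."""
    rng = random.Random(seed)
    shortfall = sum(max(sampler(rng) - internal_cover, 0.0) for _ in range(n))
    return cost_rate * shortfall / n

# Hypothetical work package: triangular demand on [8, 16] with mode 10,
# 12 effective hours covered internally, external rate 50 per hour.
tri = lambda rng: rng.triangular(8, 16, 10)
cost = expected_external_cost(50.0, 12.0, tri)
```

For these assumed numbers the analytic value of E[(D − 12)+] is 4/9, so the estimate settles near 50 · 4/9 ≈ 22.2.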
Let us remark that in some cases, external costs grow faster than linearly as a function of the outsourced work, for example because a selected supplier company has limited capacities itself. To generalize the model above to this situation, the given nonlinear external cost function could be approximated by a piecewise linear function, which can be handled by a simple extension of our Eq. (2). Numerical solution techniques for this more complex model are a topic of future research.
As can be seen from the formulation above, our model can be viewed as a tactical two-stage stochastic optimization model with project plan and work assignment as first-stage decisions and outsourcing to external resources as the possible recourse actions. During the 'real' project execution, we assume that we get updated information about the realizations of the work package processing times. Our assumed recourse in the case of insufficient internal capacity is the hiring of external resources, where we assume that the information about the work package processing times arrives early enough to hire the necessary external resources. The presented tactical planning approach could be extended to include more operative decisions by considering additional recourse actions such as re-assignment and re-scheduling of internal resources. This could lead to better solutions and new interesting managerial insights, but also to a more sophisticated planning approach.
Finally, let us recall that our model assumes that the activities q of each project p have to be arranged in a pre-specified linear order, as in Kolisch and Heimerl (2012). Moreover, because the work package processing time distributions are required to be independent, an additional assumption is needed: for each project p and in each time period t, not more than one activity q of project p is allowed to be scheduled (no parallel activity processing per project). As long as this requirement is not violated, an extension of the developed stochastic solution approach to more general precedence constraints between activities is straightforward. For this extension, the variable z pqt can again be the indicator variable for the decision that activity q of project p is scheduled in time period t. A constraint ∑ q z pqt ≤ 1 has to be added, and the precedence constraints have to be expressed by a more flexible set of constraints (Artigues et al., 2008) than those given by Eq. (4).
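Under the linear-order assumption with interruptions allowed, the feasible execution periods of a project's activities can be enumerated programmatically. The following sketch (helper names are our own) reproduces the worked example from the problem formulation, with ES p = 1, LS p = 3 and d p = 3:

```python
from itertools import combinations

def feasible_schedules(es, ls, d):
    """Enumerate all feasible period assignments (t_1 < t_2 < ... < t_d) for
    the d consecutive activities of one project. The activity with 0-based
    index q may only run in a period within [es + q, ls + q], and activities
    must be processed in ascending order (interruptions are allowed)."""
    lf = ls + d - 1  # latest finish period of the project
    return [periods
            for periods in combinations(range(es, lf + 1), d)
            if all(es + q <= periods[q] <= ls + q for q in range(d))]

schedules = feasible_schedules(es=1, ls=3, d=3)
```

Both the earliest schedule (1, 2, 3) and the latest schedule (3, 4, 5) are contained in the result, while the schedule (2, 4, 6), which exceeds the latest finish period 5, is not.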

Solution procedures
3.1 Problem structure

As mentioned above, the considered project scheduling and staffing problem distinguishes between two decisions: the project scheduling decision, where the execution times of the activities q are determined, and the staffing decision, where employees are assigned to cover parts of the work packages. Let z and x denote the array of the decision variables z pqt for project scheduling, and the array of the decision variables x ptsk for staffing, respectively. By ξ, we denote the random influence in our stochastic model. Abbreviating the total external cost by G(z, x, ξ) (it depends on ξ since it is a random variable) and its expected value by g(z, x) = E[G(z, x, ξ)], our optimization problem can be written as min z,x g(z, x), where (z, x) has to satisfy the constraints (3) - (7). For a given project schedule z, we call min x g(z, x), subject to the constraint that x is feasible with respect to (5) - (6), the staffing subproblem.

Expected value problem
The expected value (EV) problem or mean value problem is obtained from the original stochastic problem by replacing each random variable D psq by its expected value d̄ psq = E(D psq ), so that the distribution of D psq collapses to the point mass in d̄ psq . By introducing auxiliary variables y pts for the external work time required per project p, time period t and skill s, a mixed-integer linear problem is obtained: minimize the total external cost ∑ p,t,s c e s y pts subject to constraints (3), (4), (7), the capacity constraints (5) - (6), and linking constraints of the form ∑ q∈τ pt d̄ psq z pqt − ∑ k∈K s x ptsk ≤ y pts with y pts ≥ 0. The EV problem serves as an approximation for cases where the variances of the random variables D psq are very low, or, if the variances are larger, as a means to obtain an initial solution for an iterative search procedure.
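The limitation of the EV approximation can be seen already for a single work package: by Jensen's inequality, (E[D] − x)+ ≤ E[(D − x)+], so the EV problem systematically underestimates the expected external cost. A small Monte Carlo sketch (the distribution and all numbers are assumptions for illustration):

```python
import random

def mc_recourse(x, sampler, n=200_000, seed=1):
    """Monte Carlo estimate of E[(D - x)^+] for demand D and internal cover x."""
    rng = random.Random(seed)
    return sum(max(sampler(rng) - x, 0.0) for _ in range(n)) / n

tri = lambda rng: rng.triangular(8, 16, 10)   # assumed demand distribution
mean_demand = (8 + 16 + 10) / 3               # triangular mean, about 11.33
x = 12.0                                      # internally covered effective work
ev_cost = max(mean_demand - x, 0.0)           # the EV problem sees no shortfall
true_cost = mc_recourse(x, tri)               # but the expected shortfall is positive
```

Here the EV problem reports zero external work, while the true expected shortfall is about 0.44 units: exactly the kind of underestimation that motivates the stochastic model.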

Staffing subproblem
For a pre-defined feasible project schedule z, the staffing problem min x g(z, x) remains to be solved. Because z is now already fixed, D pst is a random variable that does not depend on any decision anymore, i.e., the distribution of D pst is already known. D pst represents the effective work time of project p in skill s that has been scheduled for time period t by the given schedule z. Using the random variables D pst , we can express the staffing problem in the form (12), subject to constraints (5) and (6). Obviously, the objective function (12) is nonlinear, but it is not difficult to see that it is convex, since the function x → x + is a convex function and the expectation operator, as a linear operator, preserves convexity. With h pst (θ) denoting the probability density function of D pst , (12) can be rewritten in integral form, again subject to (5) and (6). Elementary calculations show that the partial derivatives of the objective can be expressed in closed form by means of H pst , the cumulative distribution function (CDF) of D pst .
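The closed-form expression via the CDF can be checked numerically for one term of the objective: the derivative of x → E[(D − x)+] equals −(1 − H(x)), where H is the CDF of D. A sketch with an assumed triangular distribution (parameters chosen only for illustration):

```python
import random

def mc_shortfall(x, sampler, n=200_000, seed=3):
    """Monte Carlo estimate of E[(D - x)^+]."""
    rng = random.Random(seed)
    return sum(max(sampler(rng) - x, 0.0) for _ in range(n)) / n

def cdf_triangular(x, a=8.0, b=16.0, c=10.0):
    """CDF H(x) of the triangular distribution on [a, b] with mode c."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    if x <= c:
        return (x - a) ** 2 / ((b - a) * (c - a))
    return 1.0 - (b - x) ** 2 / ((b - a) * (b - c))

tri = lambda rng: rng.triangular(8, 16, 10)
x, eps = 11.0, 0.05
# central finite difference with common random numbers (identical seed)
numeric = (mc_shortfall(x + eps, tri) - mc_shortfall(x - eps, tri)) / (2 * eps)
analytic = -(1.0 - cdf_triangular(x))          # derivative via the CDF
```

The agreement of the two values is what makes a gradient-based method such as Frank-Wolfe attractive for the staffing subproblem: the gradient is available analytically from the CDFs.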

Matheuristic
Our first solution method is a matheuristic approach, i.e., a combination of a metaheuristic with an exact optimization technique. We solve the staffing subproblem by means of the exact Frank-Wolfe algorithm (see, e.g., Clarkson (2010)) for convex optimization under linear constraints. This is the topic of Subsection 3.4.1. The scheduling problem is solved by a metaheuristic of Iterated Local Search type, which will be described in Subsection 3.4.2.

Staffing: Frank-Wolfe algorithm
The idea of using the Frank-Wolfe algorithm for staffing problems has already been elaborated in Gutjahr and Froeschl (2013). Our technique used in the present work follows the approach described there rather closely, so we shall keep the presentation short. First, let us introduce the variables u ptsk = x ptsk /(η sk a kt ) (p ∈ P, t = 1, . . . , T, s ∈ S, k ∈ K). Using this substitution for a fixed project schedule z, the objective function (12) can be rewritten accordingly. Moreover, we combine the project p and the skill s to the pair σ = (p, s), which we call project-skill combination, and we combine the time period t and the resource k to the pair ν = (t, k), the time-employee combination. Additionally, we re-label the pairs σ = (p, s) and the pairs ν = (t, k) by introducing the new indices σ = 1, . . . , C = |P| · |S| and ν = 1, . . . , L = T · |K|. The variables u ptsk are re-labelled accordingly, i.e., for σ = (p, s) and ν = (t, k), the notation u σν abbreviates u ptsk . In the constraints of Equation (5), the inequalities can be replaced by equalities, since there is an optimal solution in which each constraint (5) is active. Therefore, (5) can be replaced by ∑ C σ=1 u σν = 1 (ν = 1, . . . , L). The column vector u ν = (u 1ν , . . . , u Cν ) is an element of the standard simplex in R C , hence the feasible set is a Cartesian product of L standard simplices.
The idea of the iterative Frank-Wolfe algorithm is to replace in each iteration i the convex function Θ z by its linear approximation at a current feasible solution u [i] . The approximating linear function has a minimizer g [i] on the feasible set which can be easily determined. Next, the minimum of the convex function Θ z restricted to the line segment between u [i] and g [i] is identified. This can be done by line search. The minimizer found in this way is used as the new current solution for the next iteration.
For our application, the linear approximation in point u produces an optimization problem that decomposes into L partial problems (18). The solution of the ν-th problem in (18) is an extremal point of the simplex S C . Therefore, the point g [i] is of the form (e σ * (1) , . . . , e σ * (L) ), where e σ is the σ-th unit vector, and σ * (ν) is the index of the optimal extremal point for time-employee combination ν. It is easily seen that σ * (ν) is given by the index σ of the smallest value among the partial derivatives in (18), which, by (15), can be evaluated in closed form; this yields problem (19). Solving (19) by enumeration, we find for each time-employee combination ν = (t, k) the project-skill combination σ * (ν) = (p * (ν), s * (ν)) providing the highest cost-reduction potential achievable by employee k in time period t. Algorithm 1 presents the basic version of the Frank-Wolfe algorithm applied to the stochastic staffing subproblem. We also slightly modify this basic procedure by doing the line search for each column ν separately, followed by an immediate change of column ν of the current matrix u [i] . This gives Algorithm 2.
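The structure described above — per-column linearized subproblems solved by picking the unit vector of the smallest partial derivative, followed by a line search — can be sketched on a toy convex objective. The quadratic test function below is an assumption for illustration; it is not the paper's staffing objective, but the feasible set (a product of L standard simplices) and the algorithmic steps match the description:

```python
def frank_wolfe(grad, f, L, C, iters=200):
    """Basic Frank-Wolfe on a product of L standard simplices in R^C.
    grad(u) and f(u) take a list of L columns, each a list of C weights.
    The linearized subproblem decomposes per column: its minimizer is the
    unit vector e_sigma* of the coordinate with the smallest partial
    derivative; a grid line search then moves towards that extremal point."""
    u = [[1.0 / C] * C for _ in range(L)]          # start at the simplex centers
    for _ in range(iters):
        g = grad(u)
        target = [[0.0] * C for _ in range(L)]
        for nu in range(L):
            sigma_star = min(range(C), key=lambda s: g[nu][s])
            target[nu][sigma_star] = 1.0           # extremal point for column nu
        best_gamma, best_val = 0.0, f(u)           # line search on the segment
        for step in range(1, 51):
            gamma = step / 50
            trial = [[(1 - gamma) * u[nu][s] + gamma * target[nu][s]
                      for s in range(C)] for nu in range(L)]
            val = f(trial)
            if val < best_val:
                best_gamma, best_val = gamma, val
        if best_gamma == 0.0:                      # no improving step left
            break
        u = [[(1 - best_gamma) * u[nu][s] + best_gamma * target[nu][s]
              for s in range(C)] for nu in range(L)]
    return u, f(u)

# Toy convex objective: squared distance to a point inside the feasible set.
a = [[0.5, 0.3, 0.2], [0.2, 0.2, 0.6]]
f = lambda u: sum((u[nu][s] - a[nu][s]) ** 2 for nu in range(2) for s in range(3))
grad = lambda u: [[2 * (u[nu][s] - a[nu][s]) for s in range(3)] for nu in range(2)]
u_opt, val = frank_wolfe(grad, f, L=2, C=3)
```

Every iterate remains a convex combination of feasible points, so the simplex constraints never need to be enforced explicitly; this is the main appeal of Frank-Wolfe for the staffing subproblem.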

Project Scheduling: Matheuristic solution method
After the description of the solution of the staffing subproblem in the previous section, the current section presents the search procedure that optimizes the project schedule z. The implemented metaheuristic (which is also used for solving a deterministic version of our project scheduling problem in Felberbauer et al. (2016)) is a modification of iterated local search (ILS) (e.g., Lourenço et al., 2010) using variable neighborhood descent (VND) as the local search component (e.g., Mladenović and Hansen, 1997). The metaheuristic is combined with the exact Frank-Wolfe algorithm into a matheuristic (MH), schematically depicted in Figure 1.
A pseudocode of the MH is given in Algorithm 3. Therein, instead of z, an alternative representation of a project schedule by a two-dimensional scheme Z is used. The rows of Z correspond to the projects p (1 ≤ p ≤ |P|), whereas the columns correspond to the activity indices q = 1, . . . , d p (since the numbers d p need not be identical, the number of entries per row can vary).
Algorithm 2: Modified Frank-Wolfe algorithm for the staffing problem

The entry Z pq in row p and column q indicates the time period t in which activity q of project p is processed. A project schedule Z must satisfy the requirements on the time windows and on the starting time relationships, as defined in terms of the representation z by Equation (3) and Equation (4), respectively. In Algorithm 3, some design parameters have already been set to fixed numerical values; these values resulted from numerous pre-tests as the most successful parametrizations.
Initial solution: In the first step of MH, an initial solution Z for the project schedule is generated by solving the Expected value problem presented in Section 3.2. The solution Z is also used as the initialization of the incumbent solution Z * .
Algorithm 3: MH search scheme (initialization: n r ← 0, i min ← 50, ĩ ← 1000)

First improvement: The procedure FirstImprovement() takes a current project schedule Z and improves it by a local search to a solution Z'. The local search is based on a neighborhood defined by the current value of the parameter k (see below), and it is continued until (i) the neighbor solution is better than the current solution Z, (ii) the neighbor solution is better than the current incumbent Z * , or (iii) a local optimum is reached (no better neighbor solution exists). The name "FirstImprovement" indicates that we terminate the search as soon as an improving neighbor has been found for the first time; we do not necessarily explore the entire k-neighborhood. The neighborhood definition, controlled by parameter k, is based on a local move operator ("k-move") that works as follows: First of all, for the given solution Z, a capacity profile is determined. This capacity profile indicates the costs of external capacity needed for each time period. Next, we identify the time period t for which the capacity costs are maximal, we consider the set of activities (p, q) that are executed in time period t, and we sort these activities in descending order according to their contribution to the external capacity costs in period t. A k-move to a neighbor solution consists in selecting k consecutive activities (p, q) from the list and in shifting the execution time periods of the selected activities either one period forward or backward. Afterwards, in a subprocedure RepairSolution(), all predecessor periods and successor periods of an affected project are checked for feasibility (see constraints in Equation (4)) and, if necessary, repaired.
Finally, the objective function of the modified project schedule is evaluated, which requires a call of the Frank-Wolfe algorithm as explained in Section 3.4.1. The numerical accuracy of the solution value determination by the Frank-Wolfe algorithm is controlled through a parameter i max , which will be explained in the Remark at the end of this subsection.
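The k-move operator described above can be sketched as follows. This is a simplified illustration with invented data structures (dictionaries keyed by activity and by period); the feasibility repair of RepairSolution() is omitted, and the k activities are taken as the top contributors rather than an arbitrary run of k consecutive list entries:

```python
def k_move(schedule, period_cost, contribution, k, direction):
    """One k-move: shift k activities out of the costliest period by one
    period (direction = +1 or -1). Feasibility repair is not included.
    schedule:     dict (p, q) -> execution period t
    period_cost:  dict t -> external capacity cost of period t
    contribution: dict (p, q) -> contribution to that period's external cost"""
    t_max = max(period_cost, key=period_cost.get)        # costliest period
    in_t = [act for act, t in schedule.items() if t == t_max]
    in_t.sort(key=lambda act: contribution.get(act, 0.0), reverse=True)
    neighbor = dict(schedule)                            # copy, keep the original
    for act in in_t[:k]:                                 # k largest contributors
        neighbor[act] = schedule[act] + direction
    return neighbor

# Toy instance: activity (1, 2) dominates the external cost of period 3.
schedule = {(1, 1): 2, (1, 2): 3, (2, 1): 3}
neighbor = k_move(schedule,
                  period_cost={2: 5.0, 3: 9.0},
                  contribution={(1, 2): 6.0, (2, 1): 3.0},
                  k=1, direction=+1)
```

In this toy instance the single selected activity (1, 2) is shifted from period 3 to period 4, while the other activities keep their periods.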
Neighborhood change: In the procedure neighborhoodChangeS(), the candidate solution Z′ that resulted from the local search in FirstImprovement() is either accepted as the new incumbent Z* or rejected, depending on its objective function value and on the outcome of a random event: If Z′ is better than the current Z*, then Z′ is accepted in any case, i.e., Z* is replaced by Z′, and the neighborhood size parameter k is reset to its initial value 1. Otherwise, we increase k by 1; moreover, with probability β, the solution Z′ is accepted even though it is worse than Z* (i.e., Z* is again replaced by Z′), and with probability 1 − β, the current incumbent solution Z* is preserved. In our experiments, β = 0.1 turned out to be a good choice.
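The acceptance rule can be sketched as follows (names ours; `cost` stands for the expected-external-cost evaluation via the Frank-Wolfe algorithm):

```python
import random

def neighborhood_change_s(z_cand, z_inc, cost, k, beta=0.1):
    """Hypothetical acceptance step: accept an improving candidate and reset
    the neighborhood size k; otherwise grow the neighborhood and accept a
    worse candidate with probability beta (diversification)."""
    if cost(z_cand) < cost(z_inc):
        return z_cand, 1                # new incumbent, reset neighborhood
    k = k + 1                           # enlarge neighborhood
    if random.random() < beta:
        return z_cand, k                # diversify: accept worse solution
    return z_inc, k                     # keep incumbent
```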
The repeat loop in the algorithm performs a local search with a varying neighborhood definition, specified by the value of the parameter k. Such a procedure is usually called Variable Neighborhood Descent (VND). For the maximal neighborhood size, we chose k_max = 3.
Perturbation: The procedure Perturbation() generates a new solution from the current solution Z by performing random swaps; a parameter indicates the number of swaps. The procedure is triggered when all neighborhoods up to k = k_max have failed to improve the incumbent solution. The number of swaps is chosen as the product of a perturbation factor π and the number of projects |P|; thus, the number of swaps depends on the problem size. A swap consists in exchanging the schedules (the start times of the activities) of two randomly chosen projects and repairing the solution afterwards if the new schedule is infeasible. In our experiments, π = 0.15 turned out to be a suitable value. Note that the perturbation procedure only works for projects with an identical number of activities.
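A sketch of the perturbation step (names ours; the feasibility repair is omitted, and all projects are assumed to have the same number of activities):

```python
import random

def perturbation(schedule, projects, pi=0.15, rng=random):
    """Hypothetical perturbation: swap the activity start times of randomly
    chosen project pairs; the number of swaps scales with the instance
    size as round(pi * |P|)."""
    n_swaps = max(1, round(pi * len(projects)))
    new = dict(schedule)
    for _ in range(n_swaps):
        p1, p2 = rng.sample(projects, 2)
        n_acts = len([a for a in new if a[0] == p1])
        for q in range(n_acts):         # exchange the two project schedules
            new[(p1, q)], new[(p2, q)] = new[(p2, q)], new[(p1, q)]
    return new
```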
Remark. When applying the Frank-Wolfe algorithm as a subprocedure of the heuristic Algorithm 3, we adopt two strategies from Gutjahr and Froeschl (2013). First, in order to use computation time economically, the solution accuracy of the Frank-Wolfe algorithm can be increased gradually during the search process as a function of the number n_r of conducted neighborhood searches. It turned out that using i_max = i_min · n_r^0.5 iterations in the Frank-Wolfe algorithm, where i_min is a fixed initial value, produced good results. For the evaluation in the procedure neighborhoodChangeS(), we always use a comparably large number ĩ of iterations. In the second strategy, we suppose that the current upper bound B(z, i) of the expected external costs after i iterations of the Frank-Wolfe algorithm converges with (approximately) exponential speed to the true value a = min_x g(z, x) as i → ∞. That is, we assume B(z, i) ≈ a + b · exp(−c·i), and estimate the parameters a, b, and c from the three observations B_1, B_2 and B_3 of the bound at iterations i = 0.5·i_max, i = 0.75·i_max and i = i_max, respectively. This leads to the following estimate of the expected external costs:
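Under the assumed model B(z, i) ≈ a + b·exp(−c·i) with three equally spaced observations B_1, B_2, B_3, eliminating b and c yields the Aitken-type estimate a ≈ (B_1·B_3 − B_2²)/(B_1 − 2·B_2 + B_3), since the deviations B_k − a form a geometric sequence. A minimal sketch (function name ours):

```python
def extrapolated_limit(b1, b2, b3):
    """Estimate the limit a of B(i) ~ a + b*exp(-c*i) from three equally
    spaced observations (Aitken-type extrapolation).  If the three values
    show no curvature, the bound is taken as already converged."""
    denom = b1 - 2.0 * b2 + b3
    if abs(denom) < 1e-12:              # no curvature: already converged
        return b3
    return (b1 * b3 - b2 * b2) / denom
```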

Sample average approximation
Our second solution approach for the stochastic project scheduling and staffing problem is the method of Sample Average Approximation (SAA); see Kleywegt et al. (2002). The SAA method samples scenarios from the given distribution of the random events and approximates the given stochastic problem by the so-called deterministic equivalent, i.e., the problem obtained by averaging over the scenarios. The deterministic equivalent can then be solved by methods from deterministic optimization.
In the case of our problem, we draw a set of N random scenarios, described by the realizations d^(n)_psq of the random variables D_psq (p ∈ P, s ∈ S, 1 ≤ q ≤ d_p). Thus, d^(n)_psq denotes the demand of work package (p, s, q) in scenario n. The external work time required under scenario n for project p and skill s in time period t is denoted by the new variable y^(n)_pts. The deterministic equivalent is then the problem (20)–(22), which is obviously a mixed-integer linear program. Note that Θ_SAA(z, x) represents the average of the external costs over all scenarios, which approximates the expected external costs, and that the values of the variables y^(n)_pts result in (21) from a formula analogous to (9). The mixed-integer linear program (20)–(22) can be solved by standard solvers such as CPLEX. However, for realistic instance sizes and an appropriate number of samples, the problem (20)–(22) can become rather large, such that it may be difficult for the solver even to find a feasible solution within a reasonable time budget.
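The principle of replacing the expectation by a scenario average can be sketched as follows (toy interface, names ours; in the paper, the averaged problem is the MILP (20)–(22)):

```python
def saa_objective(cost_fn, sample_scenario, n):
    """Sample Average Approximation sketch: replace E[G(z, x, xi)] by the
    average of G over n sampled scenarios.  `sample_scenario()` draws one
    scenario xi; the returned function is the SAA objective Theta_SAA."""
    scenarios = [sample_scenario() for _ in range(n)]

    def theta_saa(z, x):
        # deterministic equivalent: average the costs over the fixed sample
        return sum(cost_fn(z, x, xi) for xi in scenarios) / n

    return theta_saa
```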

Test instance generation
For our computational experiments, we generated test instances by the test instance generator proposed in Heimerl and Kolisch (2010). This test instance generator is inspired by data from the IT department of a large semiconductor manufacturer. However, we had to extend the test instances since, in the present paper, the information about the demand of work packages is assumed to be uncertain. In particular, in addition to the parameters number of projects |P|, time window size γ = LS_p − ES_p, and number of skills per resource |S_k|, a further parameter representing the degree of uncertainty (explained later) had to be varied across the test instances. The database with the instance set is available for download at the website http://phaidra.fhstp.ac.at/o:2529. Table 1 lists the parameter values for the basic instance structure. For this group of instances, the number of projects is 10, and the earliest start period ES_p of each project p is drawn from a uniform distribution between 1 and 7. The project length d_p is set to six periods for each project, and the planning horizon is defined as T = 12, which represents annual strategic project scheduling and personnel planning. We choose a time window size γ = 1, which means that for each activity q, there are two possible start times available, considering precedence constraints. With S^(p) denoting the set of skills required by project p, and with S^(p,q) denoting the set of skills required by activity q of project p, the test instance generator limits the number |S^(p)| of skills per project by a pre-defined bound; moreover, for each activity q of a project p, the test instance generator specifies the number |S^(p,q)| of required skills. For all skills s not contained in S^(p,q), the values d_psq are set to zero. (For details concerning this aspect of the test instance generation, see Heimerl and Kolisch (2010).)
In our case, we chose |S^(p,q)| = 2 and limited the total number of skills per project by |S^(p)| ≤ 3. Ten internal resources are assumed (|K| = 10), each owning |S_k| = 2 out of |S| = 10 skills. The resources have different efficiency values η_sk for each skill s they own; these efficiency values are drawn from a truncated normal distribution with an expected value of µ = 1, a standard deviation of σ = 0.25, and minimum and maximum threshold values of 0.5 and 1.5, respectively. The available capacity a_kt per internal resource is 20 per time period. Note that we do not assume that a time period has a length of one time unit: in our above-mentioned interpretation of the test instances as referring to annual planning, the time unit is a day and a time period extends over a month, so a_kt = 20 means that an employee works 20 days per month.
The external cost rates c^e_s differ between skills and are drawn from a truncated normal distribution TN_{a,b}(µ, σ) with µ = 800, σ = 100, a = 600 and b = 1000. In test instance generation, the (planned) utilization ρ is defined as the ratio of the overall expected resource demand to the available internal resource capacity. In the basic instance structure, we set ρ = 1. For each work package, the test instance generator computes an initial value [E(D_psq)]_init of the expected resource demand from the utilization ρ and the resource supply values a_kt. The actual expected value E(D_psq) of the resource demand is then drawn from a normal distribution with mean µ = [E(D_psq)]_init and a coefficient of variation CV = 0.1. For the basic instance structure, we assume a symmetric triangular distribution of the actual demand with parameters (D^min_psq, D^mod_psq, D^max_psq), where D^mod_psq = E(D_psq), D^max_psq = D^mod_psq · c_max, and D^min_psq = D^mod_psq · c_min. The interval [c_min, c_max] controls the level of uncertainty; if c_min = c_max, we obtain the deterministic boundary case. For the basic instance structure, a moderate level of uncertainty is assumed by setting c_min = 0.7 and c_max = 1.3. Notice that because we use symmetric triangular distributions in our basic test instances, the expected value E(D_psq) is identical to the modal value D^mod_psq of the distribution.¹ The basic instance structure is then varied to obtain other instance structures according to Table 2, which lists the parameters with their used values. For test instance generation, we use a ceteris-paribus design: we fix all parameters at the values of the basic instance structure and vary the value of one investigated parameter. This yields 4 + (4 − 1) + (6 − 1) + (3 − 1) = 14 different instance structures. For each instance structure, we generate 10 instances, which leads to 140 test instances.
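Demand sampling for the basic instance structure can be sketched with Python's standard library (function name ours):

```python
import random

def sample_demand(e_dpsq, c_min=0.7, c_max=1.3, rng=random):
    """Draw one demand realization D_psq from the (here symmetric) triangular
    distribution with mode E(D_psq) and support [c_min, c_max] * E(D_psq)."""
    mode = e_dpsq
    return rng.triangular(mode * c_min, mode * c_max, mode)
```

For the symmetric case c_min + c_max = 2, the sample mean indeed matches the modal value E(D_psq).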
Additionally, for each of these 140 test instances, four different levels of the degree of uncertainty (see Table 3) are investigated. This produces a total of 560 instances.
All tests are performed on a standard PC with an Intel quad-core processor. In detail, we used an Intel Xeon E3-1271 v3 processor (frequency: 3.60 GHz) with eight cores and 32 GB of main memory. All presented algorithms are implemented in Eclipse using Java version 1.7. We use the Java API of ILOG CPLEX version 12.4 for the SAA and the EV model formulations.

¹ The choice of the triangular distribution for related models is discussed, e.g., in Law et al. (1991) and Wing Chau (1995). Law et al. (1991) see the triangular distribution as a rough model suitable for cases where there is limited data available or the costs of data collection are high. The Beta distribution would also be a candidate for modeling the distributions of the random variables D_psq. However, as shown in Gutjahr and Froeschl (2013) for a related model, using a Beta distribution within the Frank-Wolfe framework comes at the price of a distinct increase in computation time, and the results are typically similar to those obtained with the triangular distribution.

Parameter setting for the Frank-Wolfe algorithm
There are several design decisions that have to be made when using the Frank-Wolfe algorithm within our matheuristic framework procedure: First of all, two slightly different implementation variants of this algorithm were presented in Section 3.4.1. Secondly, the algorithm requires an initial solution; we consider two options for its choice. Finally, it has to be specified how the line search is done; again, two different options will be investigated.
Algorithmic variants: We shall compare the basic Algorithm 1 ("old" in Table 4) to the modified Algorithm 2 ("new" in Table 4).
Initial solution: For the determination of the initial solution, two alternative approaches are tested. First, we use the MILP solver CPLEX ("lp" in Table 4) to solve the deterministic linear staffing problem defined by the parameters d_pst = E(D_pst) = Σ_q d̄_psq · z_pqt. Secondly, we apply a greedy staffing heuristic (Felberbauer et al., 2016) to solve the same deterministic counterpart problem heuristically ("gh" in Table 4).
Line search method: For the line search step of the algorithm, two methods are analyzed. The first method ("gs" in Table 4) uses Golden Section Search according to Kiefer (1953). The second method ("fs" in Table 4) follows the suggestion in Clarkson (2010): it refrains from determining the arg min in line 13 of Algorithm 1 or line 12 of Algorithm 2, respectively, and instead uses in each iteration i a pre-defined step size ϑ* = ϑ*(i) depending on the iteration index, calculated as ϑ* = 2/(i + 2) (i = 1, 2, . . .). The value of ϑ* determined in this way does not produce the minimizer on the line segment between u^[i] and g^[i], but by the special choice of the step sizes (convergence to zero and finiteness of the sum of the squares), the convergence of the Frank-Wolfe algorithm to the exact overall minimizer of Θ_z is preserved (for details, see Clarkson (2010)). The advantage of the fixed step size scheme is that it does not require an evaluation of the function values of Θ_z during the execution of the algorithm; it suffices to evaluate the derivatives of Θ_z.
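The fixed-step variant can be sketched generically as follows (names ours; `lin_min` plays the role of the linear subproblem that Frank-Wolfe solves in each iteration, returning a vertex of the feasible set):

```python
def frank_wolfe_fixed_steps(grad, lin_min, x0, iters=1000):
    """Frank-Wolfe sketch with the pre-defined step sizes 2/(i+2) from
    Clarkson (2010): no line search is performed, and only gradients are
    evaluated.  `lin_min(g)` returns a feasible point minimizing <g, .>."""
    x = list(x0)
    for i in range(1, iters + 1):
        g = grad(x)
        s = lin_min(g)                      # linear minimization oracle
        theta = 2.0 / (i + 2)               # diminishing step size
        x = [(1 - theta) * xj + theta * sj for xj, sj in zip(x, s)]
    return x
```

The usage example in the test minimizes a simple quadratic over the two-dimensional probability simplex.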
Combining the two alternative options for each of the three design decisions indicated above, we get 2³ = 8 different design variants of the Frank-Wolfe algorithm. The following results compare the performance of these eight design variants. For each design variant, we report its average solution value sv and its average computation time ct over 1000 randomly selected time schedules z = (z_pqt) for a single fixed problem instance generated according to the basic instance structure. In a pre-test, it turned out that i_max = 1000 iterations were sufficient to get close enough to the value Θ_z achieved by a much higher number of 10^6 iterations. Therefore, we used i_max = 1000 for all design variants.
The results are shown in Table 4. The comparison between the two algorithmic variants, Algorithm 1 and the new Algorithm 2 (immediately applying the best partial derivative for the update of the project plan), shows a slight superiority of Algorithm 2.

Table 4 Average solution value (sv), standard deviation of the solution value, average computation time (ct), and standard deviation of the computation time for the eight Frank-Wolfe design variants. Last two columns: relative values compared to those of the best design variants. Based on i_max = 1000 iterations and 1000 schedules z of a fixed instance.

For the decision on the initial solution method, the test shows that solving the deterministic staffing problem takes on average ≈ 3.73 ms using the LP solver and ≈ 0.26 ms using the greedy heuristic. On the other hand, the expected cost of the initial staffing plan x in iteration i = 1 according to the greedy heuristic is ≈ 9% higher than the one obtained by the exact LP solver. Nevertheless, applying a two-tailed sign test to the final results for solution values and computation times shows no significant difference between the performance of the LP solver and the greedy heuristic (significance level α = 0.05). For larger instances, where the LP solving time increases rapidly, the greedy heuristic can become the only feasible alternative, so that in total, we may give preference to the greedy heuristic. Concerning the line search method, finally, it can be seen that the pre-defined step size scheme clearly outperforms Golden Section Search: Although Golden Section Search provides faster improvements in the first few iterations than the step size scheme, the solution quality after 1000 iterations is not better, and the computation time per iteration is ≈ 200 times higher.
Summarizing, the best-performing Frank-Wolfe variant uses the immediate application of each best partial derivative for the update of the project plan, the greedy heuristic for the calculation of the initial staffing solution, and the pre-defined step sizes scheme.

Parameter setting for Sample Average Approximation
The crucial parameter of the Sample Average Approximation procedure is the number N of sampled random scenarios. Therefore, the following pretests have been conducted to find an appropriate value of N. It is clear that the objective function of (20) is only an approximation of the true objective function, such that even if the SAA problem is solved exactly, we do not necessarily obtain the exact solution of the original problem. To explore the tradeoff between the two effects of increasing the value of N, namely improving the accuracy of the objective function estimation on the one hand and increasing the computation time on the other hand, a subset of our instances has been investigated. We chose the instances of those instance structures where the number of projects is varied as |P| ∈ {10, 15, 20, 25}, the time window size as γ ∈ {0, 1, 2, 3}, and the number of skills per resource as well as the utilization are fixed to the values of the basic instance structure, i.e., |S_k| = 2 and ρ = 1. The degree of uncertainty was varied as [c_min, c_max] ∈ {[0.9, 1.1], [0.7, 1.3], [0.5, 1.5], [0.2, 1.8]}. For these instances, we varied the sample size as N ∈ {10, 20, 30, . . . , 100} and analyzed the solution time and the achieved solution quality. By the SAA model from Section 3.5 with sample size N, we compute the solution (z*_SAA, x*_SAA), where z*_SAA is the optimal project plan and x*_SAA is the optimal staffing plan. We then compare the obtained objective function value Θ_SAA(z*_SAA, x*_SAA) of (20) to the true evaluation E[G(z*_SAA, x*_SAA, ξ)] of the solution (z*_SAA, x*_SAA) according to the underlying exact probability model (cf. the notation in Section 3.1). The relative gap between the two evaluations is given by Eq. (23). In Figure 2, the solution time and the relative solution gap according to Eq. (23), as well as their 95% confidence intervals, are depicted as functions of the sample size N.
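A plausible form of the relative gap in Eq. (23), consistent with the definitions above (our reconstruction, not the paper's verbatim formula), is:

```latex
\mathrm{gap}_{\mathrm{rel}}
  \;=\;
  \frac{\bigl|\, E[G(z^{*}_{SAA}, x^{*}_{SAA}, \xi)]
        \;-\; \Theta_{SAA}(z^{*}_{SAA}, x^{*}_{SAA}) \,\bigr|}
       {E[G(z^{*}_{SAA}, x^{*}_{SAA}, \xi)]}
```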
We show here the special case of |P| = 20 projects and the other parameters as in the basic instance structure. It can be observed that for a sample size of N = 100, the solution gap is ≈ 0.5%, i.e., the average objective function value over the scenarios can be considered as a good estimate for the expected external costs. A further observation is that the solution time of the SAA model varies to a considerable extent. This behavior points out a first drawback of relying on the exact solution of the SAA model to solve our problem.
These results were extended by solving all test instances of the instance subset specified at the beginning of this subsection. With an appropriate sample size of N = 100 and a computation time limit of 360 sec for the CPLEX solver, we observed that CPLEX was able to solve the SAA problems for all considered test instances. On average over all considered test instances, a relative gap of 0.71% according to Eq. (23) was obtained.

Matheuristic vs. Sample Average Approximation
This section reports on the numerical comparison of the two presented solution methods, i.e., the developed matheuristic (MH) and the Sample Average Approximation (SAA) approach. Note that by fixing a sample size N, the computation time consumed by the SAA approach is already defined. To ensure a fair comparison between SAA and MH, we computed, for a given test instance structure, the average computation times of the SAA approach for sample sizes N = 10, 20, . . . , 100 and used each of these ten values as the time budget (termination criterion) for a corresponding run of the MH approach. For each problem instance structure, ten random instances were generated, and for each of these generated instances, ten optimization runs (with different seed values for the random number generator) were executed. This led to 100 solution values for MH and 100 solution values for SAA per time budget. The averages of these solution values for a special instance structure are depicted in Figure 3. The solution values have been computed based on the determination of the exact objective function value of the solution (z, x) provided by the respective approach, i.e., the value E[G(z, x, ξ)]. The reader will observe that Figure 3 only contains 9 solution time values instead of 10, as one would expect. This is because sample sizes N = 80 and N = 90 led to identical solution times, cf. Figure 2.
The results show that the solution quality of MH is less sensitive to the time budget than that of SAA. Applying a two-tailed sign test to the results of each of the ten different time budgets, we found that the MH results were significantly superior, at significance level α = 0.05, for the smallest time budget of 14 sec, which was the time budget produced by sample size N = 10. For larger time budgets, the sign test could not confirm statistically significant superiority at level α = 0.05 of either SAA over MH or vice versa. However, this lack of significance may be due to our small sample size of ten instances (in order to ensure independence, we had to take the average over the ten runs of each instance). Summarizing, in the considered instance structure, MH and SAA produce results of comparable quality in the medium computation time range, with a slight but nonsignificant advantage for SAA. For small computation times, MH is superior. Next, we compare SAA with sample size N = 100 to MH, using the same time limit of 360 seconds for both. Let (z*_MH, x*_MH) and (z*_SAA, x*_SAA) denote the optimal project schedule and staffing plan according to MH and SAA, respectively. For the comparison, we compute the normalized difference of the expected external costs. Averaging this measure over all test instances, we obtained a value of 0.1288, indicating that on average, the SAA solutions are 12.88% worse than the MH solutions. However, this result should be interpreted cautiously. The considerable difference can mainly be attributed to test instances where the expected external costs of the optimal solutions are almost zero. Such a situation easily occurs in instances where the number |S_k| of skills per resource is high. Investigating the same measure only for the basic instance structure, we find that there is no significant difference between the two solution methods. Real-world project scheduling problems are often rather large.
To investigate the effect of an increasing problem instance size on the comparison between MH and SAA, we generated test instances where both the number of projects and the number of resources were increased simultaneously as |P| = |K| = 50, 100, 150, 200, 250, and the parameter values γ = 2, |S_k| = 2 and ρ = 1 from Table 1 were used. For the SAA model, we applied CPLEX with default values and set the time limit to 10 hours, which is an appropriate time budget for a tactical planning problem. We found that the solver could not provide a feasible integer solution for 40% of these instances; the share of solvable instances drops rapidly as |P| and |K| grow beyond 150. However, even for the instances for which the SAA model can be feasibly solved, the SAA solution is on average 45.57% worse than the MH solution. Additionally, MH needs only a fraction of the SAA solution time to find a good solution.
We conclude that the developed matheuristic is a robust solution procedure that performs well for small and medium-sized test instances, and offers good solutions also where the SAA formulation fails to return a feasible solution. The main drawback of the SAA approach is its poor reliability with respect to the identification of a feasible solution and its volatile solution time. Nevertheless, for not too large instances, the SAA approach is a promising alternative to the application of a (partially) heuristics-based method, especially in cases where powerful hardware is available.

Deterministic vs. stochastic planning
Is the advantage of treating the given project scheduling and staffing problem as a stochastic optimization problem substantial enough to justify the increased computational effort, compared to a simplified, deterministic formulation? To shed light on this question, we deepen our experimental analysis in the following two subsections. In Section 4.5.1, the accuracy of using the solution value of the expected value problem as a forecast for budget planning is investigated, whereas in Section 4.5.2, the value of the stochastic solution is discussed.

Accuracy of the expected value problem
The solution of the expected value (EV) problem (8)–(10) provides a manager with a project schedule z*_EV, a staffing plan x*_EV, and a corresponding solution value Θ_EV(z*_EV, x*_EV). The obtained forecast Θ_EV(z*_EV, x*_EV) of the external costs could be used as an input for the budget planning process. If the demand information is actually stochastic, this forecast will be rather rough, and it will tend to underestimate the true costs. Evaluating the obtained project and staffing plan under the stochastic model, i.e., computing E[G(z*_EV, x*_EV, ξ)], gives an indication of how the forecast provided by the solution of the EV problem will perform in the stochastic environment. The difference E[G(z*_EV, x*_EV, ξ)] − Θ_EV(z*_EV, x*_EV) can be considered as a budget bias caused by the deterministic solution approach. Table 5 shows the relative bias BB_rel of the EV problem solution value for the basic instance structure and for all instances, in dependence of the degree of uncertainty. We see that with increasing uncertainty the bias increases; for the basic instance structure, the use of a deterministic planning approach leads to an underestimation of the external costs by 11%. Averaged over the instances from all instance structures (with ten random instances from each instance structure), this deviation is even distinctly higher (42%). According to these results, we conclude that the deterministic EV approach to the considered project scheduling and staffing problem leads to a systematic underestimation of external costs and can, as a consequence, seriously threaten the budget plan.
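A plausible form of the relative budget bias BB_rel, consistent with the bias definition above (our reconstruction, not the paper's verbatim formula), is:

```latex
BB_{\mathrm{rel}}
  \;=\;
  \frac{E[G(z^{*}_{EV}, x^{*}_{EV}, \xi)] \;-\; \Theta_{EV}(z^{*}_{EV}, x^{*}_{EV})}
       {\Theta_{EV}(z^{*}_{EV}, x^{*}_{EV})}
```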

Value of the stochastic solution
Whereas the last subsection investigated the difference between the predicted and the true costs of the EV solution, we now turn to the question of how much worse the EV solution is in comparison with the solution of the stochastic optimization problem. This latter difference indicates the value of taking the stochasticity of the demand into account instead of using the simplified deterministic EV model for the planning process. As the solution procedure for the stochastic optimization problem, we choose the MH approach. As before, let (z*_EV, x*_EV) and E[G(z*_EV, x*_EV, ξ)] denote the EV solution and its expected external cost, respectively, and let (z*_MH, x*_MH) and E[G(z*_MH, x*_MH, ξ)] denote the MH solution and its expected external cost, respectively. The difference between these two expected costs is called the value of the stochastic solution (VSS); it describes the cost savings achieved by applying the stochastic solution approach instead of the deterministic one. We also consider the relative value of the stochastic solution, a normalized version of this difference. In Figure 4, the absolute and the relative VSS are depicted as functions of the degree of uncertainty for the instances of the basic instance structure. Unsurprisingly, both the absolute and the relative VSS increase as uncertainty increases. A correlation analysis statistically confirms (α = 0.001) the intuitive conjecture that both the absolute and the relative VSS are positively correlated with the degree of uncertainty (for the VSS and the relative VSS, we obtain correlation coefficients of R = 0.936 and R = 0.753, respectively). In some other instance structures, the potential of the stochastic solution approach is even higher: for example, for large values of the number of projects and of the number of skills per resource, the gain reaches values between 30% and 100%. The considerable size of the VSS confirms the need for applying stochastic optimization techniques in project scheduling and staffing.
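A plausible form of the VSS and its relative variant, consistent with the text above and with the standard stochastic-programming definition (our reconstruction, not the paper's verbatim formulas), is:

```latex
VSS \;=\; E[G(z^{*}_{EV}, x^{*}_{EV}, \xi)] \;-\; E[G(z^{*}_{MH}, x^{*}_{MH}, \xi)],
\qquad
VSS_{\mathrm{rel}} \;=\; \frac{VSS}{E[G(z^{*}_{EV}, x^{*}_{EV}, \xi)]}
```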

Costs of Uncertainty
In Figure 5, we plot (i) the expected external costs and (ii) the relative expected external cost increments, both as functions of the degree of uncertainty, for the instances of the basic instance structure. In areas such as software engineering or the construction industry, even higher overruns occur. Therefore, the costs of uncertainty can also be considerable in practice.

Influence of parameters on external costs
In Heimerl and Kolisch (2010), the authors investigate in a deterministic context how parameters such as the number of projects, the time window size, etc. influence the optimal costs. We now extend their results to the stochastic context of the present paper. In this subsection, we use the ceteris-paribus design explained in Subsection 4.1: the parameters of the basic instance structure of Table 1 are applied, with one single parameter among those in Table 2 being modified at a time.
Influence of the number of projects. First, we investigate the influence of an increasing number of projects on the resulting expected external costs. As in Heimerl and Kolisch (2010), we keep the total resource demand constant while increasing the number of projects, which means that for a larger number of projects, the work packages become smaller. This change increases the flexibility of the planner, so one expects decreasing costs of the optimal solutions. This was indeed confirmed in Heimerl and Kolisch (2010) for the deterministic context. We obtained similar results in the stochastic context: In Figure 6, we depict the expected external costs for four levels of uncertainty as a function of the number of projects (with fixed total demand). In addition to the mean values, we plot the 95% confidence intervals to account for the randomness in test instance generation. Apart from the already known effect that a higher degree of uncertainty leads to higher external costs, one can see that all levels of uncertainty show the same behavior for an increasing number of projects: The conjecture that a larger number of smaller work packages is easier to balance across the planning horizon than a smaller number of bigger work packages is confirmed. Also as expected, we observe that from a certain value on, the potential of the flexibility achieved by reducing work package sizes diminishes.
Influence of the time window size. In a similar way as it was done for the influence of the number of projects, we investigated also the influence of the time window size on the costs. The results showed that in our test instances, an additional degree of freedom, i.e., a time window size change from zero to one, led to cost savings of around 20%. A further cost reduction of up to 50% was achieved by increasing the time window size γ from 1 to 2. For time window sizes larger than two time periods, the costs did not decrease further.
Influence of the number of skills per resource. In Figure 7, we plot the expected external costs in dependence of the number of skills per resource. Please note that |S_k| = 10 represents the situation where all resources possess all skills. It can be observed that the external costs decrease monotonically with increasing |S_k|. Two findings may be particularly important. First, in the context of the considered set of instances (basic instance structure, modified only with respect to |S_k|), a situation where all employees are extremely specialized, i.e., possess only one skill per person, leads to roughly twice the expected external costs compared to a situation where the employees have two skills per person. A further investment in training that increases the number of skills per person from two to four leads to another significant decrease in the expected external costs. Secondly, from a value of about four skills per person on, the investment in additional skills achieves only very limited cost savings. Of course, the quantitative amount of cost reduction depends on the specific characteristics of the test instance set (especially on the chosen parameter value for the utilization, here ρ = 1). Nevertheless, the results suggest the managerial insight that (i) multi-skilled resources can lead to significant cost savings, but (ii) companies should be aware that over-qualification yields no return on investment.
Influence of the utilization factor. Finally, in Figure 8, the expected external costs are visualized as a function of the utilization factor ρ, i.e., the ratio of the expected demand to the available time capacity of the resources. As expected, Figure 8 shows that the external costs increase with increasing utilization. In more detail, we find that, starting at 100% utilization, a 20% decrease of the utilization leads to a 57% decrease of the expected external costs, whereas a 20% increase of the utilization leads to a 23% rise of the costs.

Asymmetric work time distributions
In the previous tests, we assumed that the triangular distributions of the variables D_psq were symmetric. In practice, distributions of work times are often right-skewed. Therefore, we checked whether replacing the symmetric distributions by right-skewed triangular distributions yields substantially different results. For this purpose, we took the basic instance structure and changed it as follows: the modal value was defined as c_mode = c_min + (c_max − c_min) · 0.25, where c_min and c_max are the minimum and the maximum value of the distribution, respectively. For a series of four test instances, we fixed the expected value c_expected = (c_min + c_max + c_mode)/3 of the triangular distribution to 1. Under this constraint, each of the four instances was constructed in such a way that the length c_max − c_min of the support interval of the distribution was gradually increased, taking the values 0.2, 0.6, 1.0 and 1.6, respectively, which corresponds to increasing uncertainty. Figure 9 shows a plot of the results. The plot is very similar to Figure 5, and we conclude that the skewness of the distribution has little impact on the outcomes.
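The construction above determines the distribution parameters uniquely from the support width: with w = c_max − c_min and c_mode = c_min + 0.25 · w, the constraint (c_min + c_max + c_mode)/3 = 1 gives c_min = 1 − 1.25 · w/3. A minimal sketch of this construction and of sampling from the resulting distributions (the function name is ours, not from the paper):

```python
import random

def skewed_triangular_params(width, mean=1.0, skew=0.25):
    # Solve (c_min + c_max + c_mode)/3 = mean, with
    # c_mode = c_min + skew * width and c_max = c_min + width.
    c_min = mean - (1.0 + skew) * width / 3.0
    c_max = c_min + width
    c_mode = c_min + skew * width
    return c_min, c_max, c_mode

for width in (0.2, 0.6, 1.0, 1.6):
    c_min, c_max, c_mode = skewed_triangular_params(width)
    rng = random.Random(1)
    # Empirical mean of the right-skewed triangular distribution;
    # it stays close to the fixed expected value of 1 for all widths.
    mean_hat = sum(rng.triangular(c_min, c_max, c_mode)
                   for _ in range(100_000)) / 100_000
```

For all four widths the resulting parameters keep the expected value fixed at 1, while the variance, and hence the uncertainty, grows with the width.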

Fig. 9 Plot of (i) expected external costs and (ii) relative expected external cost increments compared to the situation with low stochasticity, c_max − c_min = 0.2, as a function of c_max − c_min, for the instances with right-skewed distributions.

Conclusion
We developed two solution approaches for a stochastic project scheduling and staffing problem under uncertainty on required efforts. The scheduling decision consists in choosing the start times of the activities within the planning horizon. Each activity can consist of several work packages, where each work package requires a stochastic amount of effort in one specific skill. The staff assignment decision matches the available human resources (with heterogeneous skills) with the work package requirements of the activities. Demand for work time that is not covered by the assigned internal resources has to be satisfied by paying for external work. For our first solution approach, a "matheuristic" combining a metaheuristic with a convex optimization procedure, we decompose the problem into a project scheduling subproblem and a staffing subproblem. The project schedules are optimized by an iterated local search heuristic that uses Variable Neighborhood Descent as a solution component. The search is guided to first tackle the time periods causing high external costs. The staffing subproblem is a convex optimization problem which we solve by the Frank-Wolfe algorithm; different design variants of this algorithm are investigated in application to our problem. Our second solution approach uses a Sample Average Approximation (SAA) model of the stochastic optimization problem. This yields a mixed-integer programming formulation that can be solved by CPLEX. For this second approach, a crucial parameter is the chosen sample size.
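As an illustration, the basic Frank-Wolfe iteration can be sketched on a generic smooth convex problem; the probability-simplex feasible set and the quadratic objective below are illustrative assumptions for the sketch, not the actual staffing subproblem of the paper:

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=2000):
    # Frank-Wolfe on the probability simplex: each iteration solves a
    # *linear* subproblem min_s <grad f(x), s>, whose optimum over the
    # simplex is a unit vector, then takes a convex-combination step.
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # linear minimization oracle
        gamma = 2.0 / (k + 2.0)      # classical diminishing step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# Toy convex objective f(x) = ||x - c||^2 whose minimizer c lies
# inside the simplex, so the iterates should approach c.
c = np.array([0.2, 0.5, 0.3])
x_star = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0]))
```

Because every iterate is a convex combination of feasible points, the method stays feasible throughout, which is one reason it suits structured convex subproblems like the staffing problem.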
Experimental results for synthetically generated test instances show that the matheuristic is a robust solution procedure that performs well for small as well as medium-sized test instances and provides solutions even in cases where the SAA model fails to return a feasible solution. Nevertheless, up to medium-sized instances, Sample Average Approximation is also a good choice. We find that deterministic planning, i.e., replacing the stochastic optimization problem by the corresponding expected value problem, bears the risk of drastically underestimating the external costs. Moreover, we demonstrate that the value of the stochastic solution, which measures the cost savings achievable by a stochastic solution approach compared to using the deterministic expected value problem, is considerable, especially for moderate and high levels of uncertainty.
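The underestimation effect can be illustrated with a minimal, hypothetical example (a single capacity facing a triangular demand; not the paper's model): plugging the expected demand into the cost function reports zero external cost, whereas a sample average over demand scenarios reveals a strictly positive expected cost.

```python
import random

def external_cost(demand, capacity, rate=1.0):
    # Pay for external work only for the uncovered part of the demand.
    return rate * max(demand - capacity, 0.0)

def saa_estimate(n, capacity, seed=0):
    # Sample Average Approximation: average the cost over n sampled
    # demand scenarios drawn from a symmetric triangular distribution.
    rng = random.Random(seed)
    return sum(external_cost(rng.triangular(0.5, 1.5, 1.0), capacity)
               for _ in range(n)) / n

capacity = 1.0
deterministic = external_cost(1.0, capacity)  # expected demand: cost 0
stochastic = saa_estimate(10_000, capacity)   # strictly positive
```

The deterministic plan reports zero cost, while the SAA estimate converges (for growing sample size) to the true expected external cost, here 1/12 ≈ 0.083; this is exactly the underestimation risk discussed above.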
Our experiments confirmed that some of the managerial insights obtained (in a deterministic context) by Heimerl and Kolisch (2010) remain valid in the context of uncertainty on required efforts. This holds especially for the potential as well as the limitations of multi-skilled resources, of small work packages, and of large time windows with respect to possible cost savings. The main difference from the deterministic context of Heimerl and Kolisch (2010) is that, in the presence of uncertainty, a much more conservative planning strategy is necessary in order to avoid excessive additional costs from required external work.
Finally, let us point out an important topic for future research: whereas our approach only assumes substitutability of internal by external resources, the articles by Ingels and Maenhout (2017a,b) focus on (within-skill or between-skill) substitutions of internal resources by other internal resources, which is another type of recourse action that is frequently deployed for absorbing workload peaks. It would be very interesting to extend the model presented here to these forms of reactive planning and to develop suitable solution techniques for such an extension.
Another topic for future research might be risk aversion. Our model is risk-neutral, which fits well to a situation of multi-project management with several small or medium-sized projects and a long-term perspective, but may no longer be appropriate in a situation where very large projects cause specific risks. For such situations, extensions to risk-averse optimization approaches should be studied.