A branch-and-bound procedure for the resource-constrained project scheduling problem with partially renewable resources and general temporal constraints

In this paper, we consider the resource-constrained project scheduling problem with partially renewable resources and general temporal constraints. For the first time, the concept of partially renewable resources is embedded in the context of projects with general temporal constraints. While partially renewable resources have already broadened the range of applications of project scheduling, the extension by general temporal constraints makes it possible to capture even more relevant aspects of real projects. We present a branch-and-bound procedure for the problem with the objective of minimizing the project duration. To improve the performance of the solution procedure, new consistency tests, lower bounds, and dominance rules are developed. Furthermore, new temporal planning procedures, based on forbidden start times of activities, are presented which can be used for any project scheduling problem with general temporal constraints, independently of the considered resource type. In a performance analysis, we compare our branch-and-bound procedure with the mixed-integer linear programming solver IBM CPLEX 12.8.0 on adaptations of benchmark instances from the literature. In addition, we compare our solution procedure with the only available branch-and-bound procedure for partially renewable resources. The results of the computational experiments demonstrate the efficiency of our branch-and-bound procedure.


Introduction
This paper is concerned with the resource-constrained project scheduling problem with partially renewable resources (RCPSP/π) extended by general temporal constraints, which, to the best of our knowledge, has not been treated in the open literature so far. The concept of partially renewable resources has already proved to be useful to model constraints occurring in different real applications. The first usage of this resource type can be found in Drexl et al. (1993) for a course scheduling problem. Further examples are given in Drexl and Salewski (1997) for school timetabling, in Bartsch et al. (2006) and Knust (2010) for sports scheduling, in Briskorn and Fliedner (2012) for container transshipment, and in Okubo et al. (2015) for machine scheduling. The availability of a partially renewable resource is limited to an arbitrary set of periods. Accordingly, a partially renewable resource is only consumed by an activity in those periods in which the resource is available and the activity is in process. In the field of project scheduling, it is well known that partially renewable resources generalize the concepts of renewable resources with time-varying capacities and nonrenewable resources. Therefore, the usage of partially renewable resources in project scheduling problems opens new application areas, where the modeling of labor regulations in staff scheduling appears as one of the most promising fields.
Different approximation methods have been considered for the RCPSP/π. Böttcher et al. (1999) and Schirmer (1999) have both proposed a schedule-generation scheme, where Schirmer (1999) additionally investigated local search procedures. Furthermore, the works of Alvarez-Valdes et al. (2006, 2008) and Alvarez-Valdes et al. (2015) are dedicated to a GRASP and a scatter search algorithm for the RCPSP/π, respectively.
An alternative approach to extending the classical resource-constrained project scheduling problem (RCPSP) is the consideration of general temporal constraints (RCPSP/max), which has been studied thoroughly in the literature but has not been combined with partially renewable resources so far. Therefore, it seems promising to combine both extensions, which we denote by RCPSP/max-π in the following. For an extensive overview of applications of general temporal constraints, we refer the reader to Neumann and Schwindt (1995) and Neumann et al. (2003).
For both extensions, branch-and-bound procedures have been developed. Böttcher et al. (1999) present the only branch-and-bound procedure for the RCPSP/π, based on the work of Talbot and Patterson (1978). All other procedures, in contrast, are dedicated to the RCPSP/max and are given in Bartusch et al. (1988), De Reyck and Herroelen (1998), Schwindt (1998a, c), Fest et al. (1999), Dorndorf et al. (2000b), and Bianco and Caramia (2012). Since none of these procedures can be directly adapted to solve the RCPSP/max-π, the need for a new concept tackling the RCPSP/max-π is evident.
In this paper, we present a branch-and-bound procedure for the RCPSP/max-π complemented by efficient procedures to improve the performance. The following section describes the RCPSP/max-π formally. Section 3 covers the enumeration scheme of the branch-and-bound procedure and Sect. 4 presents temporal planning procedures which are used for the consistency tests covered in Sect. 5. Sections 6 and 7 are dedicated to lower bounds and dominance rules, respectively. In Sect. 9, the branch-and-bound procedure is discussed, and Sect. 10 provides an experimental performance analysis. Finally, conclusions are presented in Sect. 11.

Problem description
The RCPSP/max-π is given by a project consisting of n real activities and two fictitious activities 0 and n + 1 which represent the start and the end of the project, respectively. Each activity i ∈ V := {0, 1, . . . , n + 1} is assigned a non-interruptible processing time p_i ∈ Z_{≥0} and a resource demand r^d_{ik} ∈ Z_{≥0} for each partially renewable resource k ∈ R, where the fictitious activities have neither a processing time nor a resource demand. Between pairs of activities (i, j) ∈ E ⊂ V × V, general temporal constraints are given. For each activity pair (i, j) ∈ E, a time lag δ_{ij} ∈ Z between the start times of activities i and j has to hold, i.e., S_j ≥ S_i + δ_{ij}. It should be noted that negative time lags can be interpreted as maximum time lags. Besides the temporal constraints, the resource capacities R_k of all partially renewable resources k ∈ R have to be taken into consideration. Each resource is defined on a subset Π_k ⊆ {1, 2, . . . , d̄} of all periods of the planning horizon, with d̄ as the prescribed maximum project duration. It should be noted that in the literature, partially renewable resources are also defined in other ways by assigning multiple subsets of periods to each of them. In this paper, we use the so-called normalized formulation for partially renewable resources, which is beneficial for theoretical purposes (Böttcher et al. 1999). An activity i ∈ V consumes r^d_{ik} units of resource k ∈ R in each period of Π_k in which the activity is in execution. In the following, we call the number of periods in Π_k during which an activity i ∈ V with start time S_i is in execution, given by r^u_{ik}(S_i) := |]S_i, S_i + p_i] ∩ Π_k|, the resource usage of resource k ∈ R by activity i. Accordingly, the consumption of a resource k ∈ R by an activity i ∈ V which starts at time S_i can be stated as r^c_{ik}(S_i) := r^u_{ik}(S_i) · r^d_{ik}, where the consumption by all activities of the project, r^c_k(S) := Σ_{i∈V} r^c_{ik}(S_i), must not exceed the capacity R_k.
To ensure that each activity is not executed just during a part of a period, the start times are restricted to integral values, i.e., S i ∈ Z ≥0 for all i ∈ V .
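To make the resource model above concrete, the following minimal Python sketch (not part of the original paper; all names are illustrative) computes the resource usage r^u_{ik}(S_i) = |]S_i, S_i + p_i] ∩ Π_k| and the total consumption r^c_k(S):

```python
def resource_usage(S_i, p_i, Pi_k):
    """r^u_ik(S_i): number of periods of Pi_k in ]S_i, S_i + p_i].

    Periods are identified by their end points, so an activity started at
    S_i with processing time p_i occupies the periods S_i + 1, ..., S_i + p_i.
    """
    return len(set(range(S_i + 1, S_i + p_i + 1)) & Pi_k)


def resource_consumption(S, p, demand, Pi_k):
    """r^c_k(S): total consumption of resource k by all activities."""
    return sum(resource_usage(S[i], p[i], Pi_k) * demand[i]
               for i in range(len(S)))
```

For Π_k = {2, 3, 5}, an activity with p_i = 3 started at S_i = 1 is in execution during the periods 2, 3 and 4, of which only 2 and 3 belong to Π_k, so its resource usage is 2.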
The objective of the RCPSP/max-π is to assign each activity a start time so that all temporal and resource constraints are satisfied and the project duration is minimized, with a presumed start of the project at time 0 and a maximum project duration d̄. In the following, a sequence of start times of all activities S := (S_i)_{i∈V} with S_i ∈ Z_{≥0} for all i ∈ V and S_0 = 0 is called a schedule, where S is said to be time-feasible, resource-feasible or feasible if it fulfills all temporal constraints, all resource constraints or all constraints, respectively. The problem RCPSP/max-π can thus be stated as follows:

Minimize S_{n+1}
subject to S_j − S_i ≥ δ_{ij} for all (i, j) ∈ E   (P1)
           r^c_k(S) ≤ R_k for all k ∈ R
           S_i ∈ H for all i ∈ V

It should be noted that in order to ensure the compliance of each time-feasible schedule with the maximum project duration d̄, a temporal constraint between the start and the end of the project is established, i.e., (n + 1, 0) ∈ E with δ_{n+1,0} = −d̄. Accordingly, the start time of each activity is restricted to the set H := {0, 1, . . . , d̄} representing all integral times of the planning horizon. In the following, the feasible region of problem (P1) is denoted by S, with OS ⊆ S as the set of all optimal schedules. In addition, regarding the temporal and resource constraints, the sets of all time-feasible schedules S_T and all resource-feasible schedules S_R are considered as well.

Enumeration scheme
In this section, the enumeration scheme of our branch-and-bound procedure is described, i.e., the way to generate a set Φ ⊆ S containing at least one optimal solution of problem (P1). This scheme can be illustrated as a directed out-tree, where each node is assigned a so-called start time restriction W which is defined as follows: a sequence W := (W_i)_{i∈V} of sets W_i ⊆ H with W_0 = {0} is called a start time restriction, with W_i as the start time restriction of activity i ∈ V.
In each node of the enumeration tree, the resource relaxation of problem (P1) is considered with additional constraints restricting the possible start times of each activity i ∈ V to the values contained in W_i, i.e., S_i ∈ W_i. Thus, the problem corresponding to an enumeration node can be stated as

Minimize S_{n+1}
subject to S_j − S_i ≥ δ_{ij} for all (i, j) ∈ E   (P2(W))
           S_i ∈ W_i for all i ∈ V

In the following, the solution space of problem (P2(W)) is termed the W-feasible region and denoted by S_T(W) := {S ∈ S_T | S_i ∈ W_i for all i ∈ V}. Furthermore, an algorithm is presented which is able to determine min S_T(W) exactly if there is at least one W-feasible schedule, or to prove S_T(W) = ∅ otherwise. The construction of the directed out-tree or rather the enumeration tree is outlined in Algorithm 1. At first, for each activity i ∈ V the earliest and latest time-feasible start times ES_i and LS_i are determined by a label-correcting algorithm (see, e.g., Ahuja et al. 1993, Sect. 5.4) if there is at least one time-feasible schedule, i.e., S_T ≠ ∅. Otherwise, the label-correcting algorithm proves S_T = ∅, so that Algorithm 1 terminates.
In the following, we assume that S_T ≠ ∅. At the beginning of the construction process, the start time restriction W with W_i := {ES_i, ES_i + 1, . . . , LS_i} for all activities i ∈ V is assigned to the root node and added to set Ω, which is used in the process for saving all enumeration nodes not explored yet. Additionally, set Φ, which gathers all potentially optimal schedules generated in the process, is initialized to an empty set. It should be noted that problem (P2(W)) corresponding to the root node is equal to the resource relaxation of problem (P1), so that S_T(W) = S_T ⊇ S holds.
The main step of Algorithm 1 describes the generation of the enumeration tree. In each iteration, a start time restriction W is removed from set Ω and the corresponding problem (P2(W)) is solved, i.e., S := min S_T(W) is determined. In case that schedule S is feasible, i.e., r^c_k(S) = Σ_{i∈V} r^c_{ik}(S_i) ≤ R_k for all k ∈ R, schedule S is added to set Φ, meaning that the corresponding node represents a leaf of the enumeration tree. Otherwise, there is at least one conflict resource k ∈ R with r^c_k(S) > R_k. In this case, the solution space S_T(W) is decomposed based on a reduction in the permitted maximum resource usages of all activities i ∈ V_k := {i ∈ V | r^d_{ik} > 0} consuming conflict resource k ∈ R. In the following, we will use ū_{ik} as the so-called resource usage bound of activity i ∈ V for resource k ∈ R, representing an upper bound for the resource usage, i.e., r^u_{ik}(S_i) ≤ ū_{ik}, added during the enumeration process. Additionally, W_{ik}(ū_{ik}) := {τ ∈ {ES_i, ES_i + 1, . . . , LS_i} | r^u_{ik}(τ) ≤ ū_{ik}} is said to be the start time restriction of activity i ∈ V induced by the resource usage bound ū_{ik}, comprising all time-feasible start times of activity i ∈ V with a maximum resource usage of ū_{ik}. The following explanations for the decomposition of S_T(W) are based on Theorem 1, which implies that no feasible schedule S ∈ S is excluded by the enumeration procedure. It should be noted that the enumeration process is therefore independent of the considered objective function.
it follows that r^c_k(S) = Σ_{i∈V_k} r^u_{ik}(S_i) · r^d_{ik} > R_k is given. Since this contradicts the assumption that S is feasible, the theorem is proven.
In what follows, we describe the decomposition of S_T(W) into subsets for some conflict resource k ∈ R so that each activity i ∈ V_k corresponds to one of them. Let W be the start time restriction of any node in the enumeration tree with schedule S := min S_T(W) ∉ S_R and conflict resource k ∈ R. Then, the decomposition of S_T(W) works as follows. Regarding Theorem 1, for each activity i ∈ V_k consuming conflict resource k ∈ R, the maximum resource usage ū_{ik} is set to r^u_{ik}(S_i) − 1, meaning that all start times t ∈ W_i with r^u_{ik}(t) ≥ r^u_{ik}(S_i) are removed from W_i. This is achieved by replacing W_i by W_i ∩ W_{ik}(ū_{ik}). The resulting start time restriction is added to Ω and explored in one of the following iterations. After the exploration of all enumeration nodes, i.e., Ω = ∅, Algorithm 1 terminates with output Φ containing all feasible schedules generated in the process.
Finally, it should be mentioned that the completeness of Algorithm 1 can easily be derived from Theorem 1, while the correctness follows directly from Φ ⊆ S. As a consequence of Lemma 1, we can additionally state that the enumeration scheme terminates after finitely many iterations.
Proof Since the permitted usage of a resource by an activity can be reduced at most p_max := max_{i∈V} p_i times, the maximum depth of the enumeration tree is given by |V||R|·p_max. Taking also into consideration that in each decomposition step at most |V| enumeration nodes can be added to Ω, we get a maximum number of |V|^{|V||R|·p_max} enumeration nodes.
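The enumeration scheme can be illustrated by the following self-contained Python sketch. It is a toy rendition under simplifying assumptions, not the authors' Algorithm 1: min S_T(W) is computed by a simple fixed-point iteration instead of the label-correcting algorithm, the root start time restrictions are taken as the whole horizon, and E is given as a list of triples (i, j, δ_ij).

```python
def enumerate_schedules(V, E, p, demand, Pi, R_cap, horizon):
    """Toy branch-and-bound enumeration over start time restrictions W."""

    def usage(i, t, k):
        # periods of Pi[k] covered by ]t, t + p_i]
        return len(set(range(t + 1, t + p[i] + 1)) & Pi[k])

    def min_time_feasible(W):
        # earliest W-feasible schedule (unique minimal point of S_T(W)),
        # found by repeatedly lifting start times onto admissible values
        ES = {i: min(W[i]) for i in V if W[i]}
        if len(ES) < len(V):
            return None
        changed = True
        while changed:
            changed = False
            for (i, j, d) in E:
                cand = [t for t in W[j] if t >= ES[i] + d]
                if not cand:
                    return None            # S_T(W) is empty
                if min(cand) > ES[j]:
                    ES[j] = min(cand)
                    changed = True
        return ES

    root = {i: set(range(horizon + 1)) for i in V}
    Omega, Phi = [root], []
    while Omega:
        W = Omega.pop()
        S = min_time_feasible(W)
        if S is None:
            continue
        conflict = next((k for k in Pi if sum(
            usage(i, S[i], k) * demand[i][k] for i in V) > R_cap[k]), None)
        if conflict is None:
            Phi.append(S)                  # leaf: feasible schedule found
            continue
        for i in V:                        # decompose over all consumers V_k
            if demand[i][conflict] > 0:
                ub = usage(i, S[i], conflict) - 1   # resource usage bound
                child = dict(W)
                child[i] = {t for t in W[i]
                            if usage(i, t, conflict) <= ub}
                Omega.append(child)
    return Phi
```

On a tiny instance with one real activity that overuses its resource at the earliest start, the scheme generates exactly one child node whose minimal point is feasible, so Φ contains one schedule.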

Temporal planning with start time restrictions
In this section, we discuss temporal planning procedures which represent the backbone of the consistency tests described in Sect. 5 and which are used in the enumeration scheme to determine for each enumeration node the minimal point of its corresponding feasible region. In the first part, we consider two algorithms which are able to determine the earliest and latest start times of all activities of the project, where a start time restriction and a lower or upper bound for the start time of some activity are taken into account. Based on these procedures, the second part is concerned with the calculation of minimum and maximum time lags between the start times of all activity pairs of the project.

Earliest and latest start times
The first procedure can be seen as a label-correcting algorithm which is able to determine the unique minimal point of S_T(W, α, t_α) := {S ∈ S_T(W) | S_α ≥ t_α} if this set is not empty, or to prove S_T(W, α, t_α) = ∅ otherwise. In the following, we denote by ES(W, α, t_α) the minimal point of S_T(W, α, t_α), where ES_i(W, α, t_α) represents the earliest W-feasible start time of activity i if activity α ∈ V is not started earlier than at time t_α. As described in the next section, the limitation of S_T(W) by S_α ≥ t_α is used to determine the start-time-dependent indirect minimum time lags between all activity pairs of the project. It should be noted that Algorithm 2 comprises the calculation of min S_T(W) by setting α := 0 and t_α := 0, i.e., requiring the project not to be started earlier than at time 0.
In what follows, we describe Algorithm 2, for which we use δ_{ij}(W, ν_i) to represent the time lag from the tentative start time ν_i of activity i ∈ V to the lowest start time ν_j ∈ W_j of activity j ∈ V which satisfies the minimum time lag δ_{ij} (if it exists). In the first step, Algorithm 2 checks conditions which imply that S_T(W, α, t_α) = ∅. In case that none of these conditions is satisfied, the earliest time t̄_α ≥ t_α in W_α is assigned as weight ν_α to node α, the initial weights of all other nodes i ∈ V \ {α} are set to ν_i := −∞, and Q, which is implemented as a queue, is initialized by Q := {α}.
In each iteration of Algorithm 2, the weights of the direct successors Succ(i) of all nodes i ∈ V which have been added to Q in the previous iteration are considered. In case that ν_i + δ_{ij}(W, ν_i) > ν_j is detected, the weight ν_j is set to ν_i + δ_{ij}(W, ν_i).
Since the weight of node j is increased, in the next iteration the node weights of all its direct successors have to be checked, which is ensured by adding j to Q. If the updated weight of node j is greater than max W_j, S_T(W, α, t_α) = ∅ can be stated. Finally, in case that Algorithm 2 terminates with Q = ∅, the schedule min S_T(W, α, t_α) is determined or rather ES(W, α, t_α) is returned. The correctness of the algorithm and its time complexity are stated in Theorem 2.
In the following, we call a set I := {a, a + 1, . . . , b} ⊂ Z a start time component of start time restriction W_i if and only if the conditions I ⊆ W_i and a − 1, b + 1 ∉ W_i are satisfied. Otherwise, in case that I ⊆ H \ W_i with a − 1, b + 1 ∈ W_i is given, I is said to be a start time break of W_i, where the number of start time breaks in W_i and W is denoted by B_i and B, respectively.
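As a small illustration of these notions (the helper names are hypothetical), the start time components and the number of breaks B_i of a start time restriction W_i can be computed as follows:

```python
def components(W_i):
    """Maximal runs of consecutive start times in W_i (start time components)."""
    runs, run = [], []
    for t in sorted(W_i):
        if run and t == run[-1] + 1:
            run.append(t)
        else:
            if run:
                runs.append(run)
            run = [t]
    if run:
        runs.append(run)
    return runs


def num_breaks(W_i):
    """B_i: every gap between two consecutive components is a start time break."""
    return max(0, len(components(W_i)) - 1)
```

For W_i = {0, 1, 2, 5, 6, 9}, the components are {0, 1, 2}, {5, 6} and {9}, so B_i = 2.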
Theorem 2 Algorithm 2 determines the unique minimal point of S_T(W, α, t_α) or shows that S_T(W, α, t_α) = ∅ with a time complexity of O(|V||E|(B + 1)).
Proof Let S ∈ S_T(W, α, t_α) be given. From t̄_α := min{τ ∈ W_α | τ ≥ t_α} ≤ S_α and ν_i + δ_{ij}(W, ν_i) ≤ ν'_i + δ_{ij}(W, ν'_i) for all ν_i ≤ ν'_i, we can derive ν_i ≤ S_i for all i ∈ V in any iteration of Algorithm 2. Since in addition any further iteration implies the increase in at least one weight ν_i, from S_T(W, α, t_α) ≠ ∅ the termination of Algorithm 2 follows with Q = ∅ and ν := (ν_i)_{i∈V} ≤ S after a finite number of iterations. Finally, with ν ∈ S_T(W, α, t_α) we can state that ν is equal to the unique minimal point of S_T(W, α, t_α). If otherwise S_T(W, α, t_α) = ∅, the algorithm terminates either because of a condition in row 1 or because the weight of some node exceeds max W_j after a finite number of iterations.
The time complexity of the algorithm can be deduced as follows. First of all, it can easily be verified that the termination conditions in row 1 and the initialization step can be done with a time complexity of O(|V||E| + B_α). Furthermore, the maximum number of iterations is given by O(|V|(1 + B)), which follows from the observation that after at most |V| iterations at least one weight ν_i has to be assigned to a succeeding start time component of W_i, since otherwise network N contains a cycle of positive length. Finally, it remains to consider the time complexity of each iteration. Obviously, the maximum number of verified and potentially updated weights in each iteration is given by |E|. Since each start time restriction W_i of an activity i ∈ V can be stored in memory as a list of non-decreasing values, representing the start and end times of all start time components, the update process can be done with a time complexity of O(|V| + B) over all iterations.
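A compact Python sketch of a label-correcting computation of ES(W, α, t_α) follows. It mirrors the structure described above under simplifying assumptions (adjacency given as succ[i] with time lags delta[(i, j)]; weights start at the smallest admissible value in each W_i instead of −∞), and is not a verbatim transcription of Algorithm 2:

```python
from collections import deque


def earliest_starts(V, succ, delta, W, alpha, t_alpha):
    """ES(W, alpha, t_alpha), or None if S_T(W, alpha, t_alpha) is empty."""
    nu = {}
    for i in V:
        lb = t_alpha if i == alpha else 0
        cand = [t for t in sorted(W[i]) if t >= lb]
        if not cand:
            return None
        nu[i] = cand[0]
    Q = deque(V)
    while Q:
        i = Q.popleft()
        for j in succ.get(i, []):
            # smallest start time in W_j satisfying the minimum time lag
            cand = [t for t in sorted(W[j]) if t >= nu[i] + delta[(i, j)]]
            if not cand:
                return None        # weight would exceed max W_j
            if cand[0] > nu[j]:
                nu[j] = cand[0]
                Q.append(j)        # successors of j must be re-checked
    return nu
```

Note how a start time break in W_j makes the weight of j jump to the next start time component, which is exactly the effect captured by the factor (B + 1) in the complexity bound.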
The second temporal planning procedure can be seen as a reversed version of Algorithm 2 which determines the unique maximal point LS(W, α, t_α) of {S ∈ S_T(W) | S_α ≤ t_α} if this set is not empty, or proves that it is empty otherwise. Based on the initialized weights ν_α := max{τ ∈ W_α | τ ≤ t_α} and ν_i := ∞ for all other activities, the algorithm decreases the activity weights iteratively until either a W-feasible schedule is determined or the emptiness of the considered set is established. In contrast to the first procedure, the reversed version checks in each iteration for all direct predecessors i ∈ Pred(j) of some activity j whether the tentative weight ν_i satisfies the minimum time span to ν_j. It is worth mentioning that in Franck et al. (2001a), label-correcting algorithms have already been introduced for a project scheduling problem with calendars which could also be used to determine the unique minimal and maximal point of S_T(W) with some adjustments. In contrast to the procedures in Franck et al. (2001a), Algorithm 2 and its reversed version provide the possibility to set a lower or upper bound for the start time of any activity of the project without the need to establish further temporal constraints or rather to extend the project network.

Minimum and maximum time lags
In the following, temporal planning procedures are presented which are essential for some consistency tests described in Sect. 5. These procedures are able to determine for each activity i ∈ V all W-feasible start times, where a start time t ∈ W_i is called W-feasible if at least one schedule S ∈ S_T(W) with S_i = t exists. Besides, the procedures can also determine the indirect minimum and maximum time lags between all activity pairs (i, j) ∈ V × V of the project for a subset of all time-feasible start times of activity i ∈ V, which can be used to calculate the indirect minimum and maximum time lag for each W-feasible start time as described later on. The indirect minimum (maximum) time lag d_{ij}(W, t) (d̄_{ij}(W, t)) for any activity pair (i, j) ∈ V × V is equal to the time span between the earliest (latest) W-feasible start time of activity j and time t if activity i is assumed to be started not earlier (later) than at time t.
It should be noted that in contrast to the temporal planning without start time restrictions, indirect minimum and maximum time lags have to be considered separately due to the start time breaks.
Algorithm 3 determines for each activity i ∈ V all W-feasible start times and the indirect minimum time lag d_{ij}(W, t) to any other activity j ∈ V \ {i} for each start time t ∈ W_i, with W_i as a subset of all time-feasible start times of activity i ∈ V. The indirect minimum time lags d_{ij}(W, t) for all start times t ∈ W_i are stored in a list [d_{ij}(W, t)] sorted by increasing values of t, where D(W) := ([d_{ij}(W, t)])_{i,j∈V} is called the minimum distance matrix of start time restriction W. Algorithm 3 is based on a right-shift over all start times in W_i of an activity i ∈ V, starting with t := min W_i. As long as S_T(W, i, t) ≠ ∅, the minimal point ES := ES(W, i, t) is determined. In case that ES_i > t, it follows directly that all start times in {t, t + 1, . . . , ES_i − 1} are not W-feasible, so that they are removed from W_i in row 8. After that, variable t is set to ES_i and the indirect minimum time lag between activity i with start time t and each activity j ∈ V \ {i} is stored as d_{ij}(W, t) := ES_j − ES_i. In the next step, which is based on Theorem 3, a start time t' is calculated to which the current start time t of activity i ∈ V could be right-shifted so that all start times τ ∈ [t, t'] ∩ W_i can be shown to be W-feasible. From Theorem 3, it can easily be derived that for all start times τ, τ' ∈ [t, t'] ∩ W_i with τ < τ' and τ' + d_{ij} ≤ ES_j, where d_{ij} corresponds to a longest directed path in network N from activity i to activity j, d_{ij}(W, τ') = d_{ij}(W, τ) + (τ − τ') holds, while for all other start times with τ' + d_{ij} > ES_j, the indirect minimum time lag is given by d_{ij} directly. In case that there is still a W-feasible start time τ ≥ t' + 1 in W_i, a further loop pass for activity i ∈ V is conducted. Otherwise, all remaining start times τ ≥ t' + 1 in W_i are removed and the next activity is considered. At the end of Algorithm 3, start time restriction W contains all W-feasible start times of the initial start time restriction, and the minimum distance matrix D(W) is returned.
First, we show that ν is time-feasible. For this, ν_0 = 0 can be derived from ES_0 = 0 and the condition τ ≤ −d_{i0}, which is equivalent to τ + d_{i0} ≤ 0. Furthermore, since d_{ih} represents a lower bound for the time span ES_h − τ, (τ + d_{ih})_{h∈V} ≤ ES follows as well, so that in conclusion we get ν ≤ ES. The indirect maximum time lags d̄_{ij}(W, t) for all activity pairs (i, j) ∈ V × V can be calculated in a similar way as described before. The corresponding procedure, which also determines all W-feasible start times, can be seen as a reversed version of Algorithm 3 which is based on left-shifts over all start times in W_i, considering the latest schedule LS(W, i, t) in each iteration.
It should be mentioned that in Kreter (2016, Sect. 5.1) and Kreter et al. (2016), different temporal planning procedures have been developed for a project scheduling problem with calendars which could also be used to determine all W-feasible start times of any start time restriction W as well as the start-time-dependent indirect minimum time lags for all activity pairs of the project. While for these procedures, applied to the problem of this work, time complexities of O(max(|V|³d̄³, |V|⁴d̄²)), O(|V|⁴d̄²) and O(max(|V|⁷(B + 1)³, |V|⁸(B + 1)²)) have been shown, Algorithm 3 and its reversed version can both be implemented with a time complexity of O(|V|²|E|(B + 1)). Furthermore, it should be noted that the procedures from the literature are not able to determine the start-time-dependent indirect maximum time lags between all activity pairs of a project.
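The core right-shift idea of Algorithm 3 can be sketched as follows (illustrative code with hypothetical names; a brute-force computation of the minimal point ES(W, i, t) stands in for the label-correcting algorithm, and E is given as triples (i, j, δ_ij)):

```python
from itertools import product


def es_brute(V, E, W, i, t):
    """Minimal point of S_T(W, i, t) by enumeration (illustration only)."""
    allowed = {j: [x for x in sorted(W[j]) if x >= (t if j == i else 0)]
               for j in V}
    feas = []
    for combo in product(*(allowed[j] for j in V)):
        S = dict(zip(V, combo))
        if all(S[b] >= S[a] + d for (a, b, d) in E):
            feas.append(S)
    if not feas:
        return None
    # S_T(W, i, t) has a unique minimal point, the componentwise minimum
    return {j: min(S[j] for S in feas) for j in V}


def w_feasible_starts(V, E, W, i):
    """Right-shift over W_i: t is W-feasible iff ES_i(W, i, t) equals t."""
    feasible, remaining = [], sorted(W[i])
    while remaining:
        ES = es_brute(V, E, W, i, remaining[0])
        if ES is None:
            break                      # no later start time is W-feasible
        feasible.append(ES[i])         # starts below ES_i are not W-feasible
        remaining = [tau for tau in remaining if tau > ES[i]]
    return feasible
```

The skipping step (dropping all start times below ES_i) is what makes the real algorithm efficient: each loop pass either certifies a W-feasible start time or removes a whole block of infeasible ones.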

Consistency tests
In the literature, it has already been shown that consistency tests can successfully be applied to project scheduling problems with renewable resources in the framework of an exact solution procedure (Dorndorf et al. 2000b; Schutt et al. 2013). Furthermore, in Alvarez-Valdes et al. (2006, 2008), consistency tests have been used in approximation procedures for the RCPSP/π. Commonly, consistency tests can be seen as pairs of a condition and a constraint, where the constraint is established if the condition is satisfied. In the following, we present five consistency tests whose possibly deduced constraint is unary, i.e., the established constraint can directly be transformed into a reduction in a start time restriction W_i as the domain of start time S_i of activity i ∈ V. Following the terminology in Dorndorf et al. (2000a), such tests can be referred to as domain-consistency tests and can be considered as functions γ mapping a start time restriction W to another start time restriction γ(W). In the following, we call the outcome of all consistency tests from a set Γ, iteratively applied until no domain reduction can be done anymore or W_i = ∅ for at least one activity i ∈ V is detected, a fixed point.
The first two consistency tests are based on the temporal constraints S_j ≥ S_i + δ_{ij} for all (i, j) ∈ E of problem (P1) and could thus be used for similar project scheduling problems independent of the considered type of resource. At first, we consider a well-known consistency test which has already been used for precedence constraints in Alvarez-Valdes et al. (2006, 2008) and also for general temporal constraints in Dorndorf et al. (2000b). This test is based on the fact that for any temporal constraint, min W_i + δ_{ij} represents a lower bound on start time S_j and max W_j − δ_{ij} gives an upper bound on start time S_i. In this work, the test is called the temporal-bound consistency test, which is given by the following conditions to be checked for all (i, j) ∈ E and the corresponding reduction rules for the start time restrictions: if min W_j < min W_i + δ_{ij}, all start times t ∈ W_j with t < min W_i + δ_{ij} are removed from W_j, and if max W_i > max W_j − δ_{ij}, all start times t ∈ W_i with t > max W_j − δ_{ij} are removed from W_i. It should be noted that the fixed point of the temporal-bound consistency test can be obtained with Algorithm 2 and its reversed version with a time complexity of O(|V||E|(B + 1)). The second consistency test is based on the temporal constraints as well. In contrast to the first consistency test, all start times t ∈ W_i of an activity i ∈ V are checked as to whether any W-feasible schedule S ∈ S_T(W) with t = S_i exists, rather than considering only the minimum and maximum start times in W_i. The second test is called the temporal consistency test and can be described by the following condition and its corresponding reduction rule: if no schedule S ∈ S_T(W) with S_i = t exists, start time t is removed from W_i. This test is conducted for all activities i ∈ V and their corresponding start times t ∈ W_i. The fixed point of the temporal consistency test only contains the W-feasible start times for all activities of the project. As described in Sect. 4, the fixed point of the temporal consistency test can be determined by Algorithm 3 with a time complexity of O(|V|²|E|(B + 1)).
It is easy to verify that the temporal consistency test dominates the temporal-bound consistency test, which means that W^b_i ⊇ W^t_i holds for all i ∈ V with W^b and W^t as the fixed points of the temporal-bound and the temporal consistency test, respectively.
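A minimal sketch of the temporal-bound test iterated to its fixed point follows (assumed data layout: E as triples (i, j, δ_ij)); in the paper, the fixed point is obtained far more efficiently with Algorithm 2 and its reversed version:

```python
def temporal_bound_fixed_point(V, E, W):
    """Iterate the temporal-bound reductions until nothing changes.

    Returns the reduced start time restriction, or None if some W_i
    becomes empty (temporal inconsistency detected).
    """
    W = {i: set(W[i]) for i in V}
    changed = True
    while changed:
        changed = False
        for (i, j, d) in E:
            if not W[i] or not W[j]:
                return None
            # min W_i + delta_ij bounds S_j from below,
            # max W_j - delta_ij bounds S_i from above
            Wj = {t for t in W[j] if t >= min(W[i]) + d}
            Wi = {t for t in W[i] if t <= max(W[j]) - d}
            if Wj != W[j] or Wi != W[i]:
                W[i], W[j] = Wi, Wj
                changed = True
    if any(not W[i] for i in V):
        return None
    return W
```

Because every reduction is monotone (sets only shrink), the iteration order does not affect the fixed point, only the number of passes.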
The following consistency tests take the resource constraints into consideration. The first of these consistency tests is used to remove a start time from the start time restriction of some activity if this start time implies a resource conflict, taking the minimum possible resource consumptions of all other activities into account. Accordingly, for this test the minimum resource consumption of each resource k ∈ R by an activity j ∈ V_k is determined under the assumption that the activity can be started at any time in W_j, i.e., r^{c,min}_{jk}(W) := min_{t∈W_j} r^c_{jk}(t). The so-called resource-bound consistency test then checks for each start time t ∈ W_i of any activity i ∈ V \ {0, n + 1} whether r^c_{ik}(t) + Σ_{j∈V_k\{i}} r^{c,min}_{jk}(W) > R_k holds for some resource k ∈ R, in which case start time t is removed from W_i. It should be noted that one pass over all activities and start times can be conducted with a time complexity of O(|V|I + |R|B), with I := Σ_{k∈R} I_k as the number of components over all sets Π_k and I_k as the number of components in Π_k. This can be achieved by storing the resource usages r^u_{ik}(t) for each activity i ∈ V and resource k ∈ R_i for a subset of the start times of the whole planning horizon H in a list [r^u_{ik}(t)] sorted by increasing values of t, which is sufficient to calculate the resource usage for any start time. Since the resource usage of an activity can only change in the vicinity of the starts and ends of the components of Π_k, the maximum number of start times stored in any list [r^u_{ik}(t)] is polynomially bounded by O(I_k), so that it can easily be verified that r^{c,min}_{ik}(W) can be determined with a time complexity of O(I_k + B_i), which results in a time complexity of O(|V|I + |R|B) over all activities i ∈ V and all resources k ∈ R.
Since all inconsistent start times t ∈ W_i with respect to the resource-bound consistency test for any resource k ∈ R can be removed from W_i with a time complexity of O(I_k + B_i), in conclusion we get a time complexity of O(|V|I + |R|B) for one pass of the resource-bound consistency test over all activities and resources.
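The logic of one pass of the resource-bound test, without the list-based speed-up, can be sketched as follows (function and parameter names are illustrative; a single resource k with period set Pi_k is assumed):

```python
def resource_bound_pass(V, W, p, demand, Pi_k, R_k):
    """Remove start times whose own consumption plus the minimum possible
    consumption of all other consumers already exceeds the capacity R_k."""

    def usage(i, t):
        # r^u_ik(t): periods of Pi_k covered by ]t, t + p_i]
        return len(set(range(t + 1, t + p[i] + 1)) & Pi_k)

    def min_consumption(j):
        # r^{c,min}_jk(W): cheapest consumption over all start times in W_j
        return min(usage(j, t) for t in W[j]) * demand[j]

    consumers = [i for i in V if demand[i] > 0 and W[i]]
    reduced = {i: set(W[i]) for i in V}
    for i in consumers:
        others = sum(min_consumption(j) for j in consumers if j != i)
        reduced[i] = {t for t in W[i]
                      if usage(i, t) * demand[i] + others <= R_k}
    return reduced
```

A start time survives only if, even under the most optimistic placement of all other consumers, the capacity R_k is not exceeded.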
The next consistency test can be seen as an extension of the resource-bound consistency test, where besides the resource constraints the temporal constraints between the activities of the project are considered as well. Thereby, this test makes use of the fact that each activity of the project has to be started within a schedule-dependent time window if the start time of some activity is fixed, so that the calculation of the minimum resource consumptions can be restricted to these time windows. For the so-called D-interval consistency test, the distance matrix D := (d_{ij})_{i,j∈V}, i.e., the lengths of the longest directed paths d_{ij} in network N between all activity pairs (i, j) ∈ V × V, is used to restrict the possible start times τ ∈ W_j of each activity j ∈ V \ {i} to start times in [t + d_{ij}, t − d_{ji}], with t as the given start time of activity i ∈ V. The consistency test is conducted for each activity i ∈ V and each D-consistent start time t ∈ W_i, comparing the consumption r^c_{ik}(t) plus the minimum resource consumption of each activity j ∈ V_k \ {i} over all start times in [t + d_{ij}, t − d_{ji}] ∩ W_j with the capacity R_k. One pass of the D-interval consistency test over all D-consistent start times of all activities i ∈ V is outlined in Algorithm 4. As can be seen in rows 4 and 5, the algorithm is based on the generation of lists which can be used in the same manner as the lists described before.
First of all, we consider the generation of list [r^{u,min}_{ijkt}(W, D)], which contains the minimum resource usages of activity j ∈ V_k \ {i} for a subset of all D-consistent start times of activity i ∈ V. The generation of the list is based on a right-shift over all start times t ∈ W_i of activity i ∈ V, where τ_min represents the greatest start time in W^r_j := [t + d_{ij}, t − d_{ji}] ∩ W_j with the lowest resource usage r^u_min. The greatest start time t_s to which the current start time t of activity i ∈ V can be right-shifted so that r^u_min remains unchanged is based on the calculation of τ' := min{τ ∈ W_j | τ > t − d_{ji} ∧ r^u_{jk}(τ) < r^u_min} and τ'' := min{τ ∈ W_j | τ > τ_min ∧ r^u_{jk}(τ) > r^u_min}, representing the next start times of activity j with a lower or greater resource usage than r^u_min, respectively. That means the minimum resource usage r^u_min does not decrease for all start times t of activity i ∈ V with t − d_{ji} < τ' and does not increase for all start times with t + d_{ij} ≤ max{τ ∈ W_j | τ < τ''}, so that t_s := min{τ' − 1 + d_{ji}, max{τ ∈ W_j | τ < τ''} − d_{ij}} can directly be deduced. After the storage of r^u_min in list [r^{u,min}_{ijkt}(W, D)] for start time t_s, the minimum resource usage r^u_min of the next start time t⁺ with W^r_j ≠ ∅ is stored as well. In case that for some directly succeeding start times t > t⁺ each right-shift by one unit leads to an increase or decrease in r^u_min by exactly one unit, the greatest of these start times and its corresponding minimum resource usage r^u_min are also stored in [r^{u,min}_{ijkt}(W, D)]. The generation of all lists can be conducted with a time complexity which is equal to the time complexity of the update procedure for each resource k ∈ R at the end of the algorithm. In conclusion, Algorithm 4 returns a start time restriction γ_D(W) with a time complexity of O(|V|²I² + |V||R|B²).
The last consistency test extends the D-interval consistency test in the sense that for each W-feasible start time t ∈ W_i of an activity i ∈ V the set of start times considered for each activity j ∈ V_k \ {i} is restricted even further by using the minimum and maximum distance matrices D(W) and D̄(W). That means for any W-feasible start time t ∈ W_i of activity i ∈ V the considered start times τ ∈ W_j of activity j ∈ V_k \ {i} are restricted to [t + d_ij(W, t), t − d_ji(W, t)]. Accordingly, the minimum consumption of resource k ∈ R by activity j ∈ V_k \ {i}, which depends on the W-feasible start time t ∈ W_i of activity i ∈ V, is given by r^{c,min}_{jk}(W, t + d_ij(W, t), t − d_ji(W, t)). The corresponding condition of the so-called W-interval consistency test for an activity i ∈ V and a W-feasible start time t ∈ W_i checks whether r^c_ik(t) plus the sum of these minimum consumptions over all j ∈ V_k \ {i} exceeds the capacity R_k for some resource k ∈ R; in this case, the reduction rule removes t from W_i. One pass over all W-feasible start times of all activities i ∈ V can be conducted in the same way as for the D-interval consistency test sketched in Algorithm 4. In contrast to the D-interval consistency test, a list [r^{u,min}_{ijkt}(W, D, D̄)] is determined for each activity j ∈ V_k \ {i}. The generation of this list works quite similarly to Algorithm 4, except that for the right-shifts over the start times of activity i ∈ V, intervals with constant courses of d_ij(W, t) and d̄_ij(W, t) have to be taken into consideration as well. Since the maximum number of start times stored in the lists for each resource k ∈ R can trivially be deduced, the time complexity of the update of W_i follows directly. Conclusively, the W-interval consistency test determines γ_W(W) with a time complexity of O(|V|²I² + |V|²IB + |V||R|B²).
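The window-based reasoning behind the D-interval test can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions, not the paper's implementation: function names are invented, resource consumptions r^c are represented as callables, and only a single resource is checked. For a fixed D-consistent start time t of activity i, every other activity j is confined to the window [t + d_ij, t − d_ji]; summing the minimum consumptions over these windows and comparing against the capacity decides whether t survives the test.

```python
def min_consumption(rc_j, W_j, lo, hi):
    """Minimum consumption of activity j over its admissible starts in [lo, hi];
    None signals that j has no admissible start in the window at all."""
    values = [rc_j(tau) for tau in W_j if lo <= tau <= hi]
    return min(values) if values else None

def d_interval_consistent(t, i, acts, d, R_k, rc, W):
    """Does start time t of activity i survive the D-interval test for one
    resource?  Every other activity j is confined to [t + d[i][j], t - d[j][i]];
    the summed minimum consumptions must not exceed the capacity R_k."""
    total = rc[i](t)                      # consumption induced by i itself
    for j in acts:
        if j == i:
            continue
        m = min_consumption(rc[j], W[j], t + d[i][j], t - d[j][i])
        if m is None:                     # j cannot start at all -> t inconsistent
            return False
        total += m
    return total <= R_k
```

A start time is removed from W_i exactly when this check fails for some resource, which mirrors the reduction rule of the test.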

Lower bounds
In the following, we describe two lower bounds for the project duration which can be used for any node in the search tree or rather its corresponding start time restriction W. The first lower bound LB0_π is given by the solution of problem (P2(W)), i.e., LB0_π := ES_{n+1}(W) with ES(W) := ES(W, 0, 0). In the literature, such a lower bound based on a relaxation is usually referred to as constructive. Conversely, the second lower bound LBD_π is termed destructive, meaning that a hypothetical maximum project duration d is increased as long as it can be shown that d precludes any feasible solution (Klein and Scholl 1999). Algorithm 5 shows the procedure to determine the destructive lower bound LBD_π, where the structure of Algorithm 5 is inspired by Franck et al. (2001b) and the way to find the greatest hypothetical project duration which cannot be rejected anymore is taken from Klein and Scholl (1999).
Algorithm 5 determines for any enumeration node or rather its corresponding start time restriction W the lowest hypothetical maximum project duration (if it exists) on a given interval [LB_start, UB_start] which does not contradict the existence of a feasible schedule, where the verification of the existence of a feasible schedule is described later on. First of all, the algorithm starts with LB_start := max(LB0_π, LB_G), where LB_G represents the global lower bound determined in the root node, and UB_start := min(d, S*_{n+1} − 1) with S* as the currently best found solution (if already detected). In the following, it is assumed that LB_start ≤ UB_start and S_T(W, n + 1, LB_start) ≠ ∅ are given, since otherwise we can state that LBD_π is greater than UB_start, so that the algorithm does not have to be conducted. The main step of Algorithm 5 executes a binary search on interval [LB_start, UB_start] in the following way. In each iteration, an interval [LB_d, UB_d] is considered, with LB_d := LB_start and UB_d := UB_start for the first iteration. Based on this interval, d := ⌊(LB_d + UB_d)/2⌋ is determined. If it can be shown that the maximum project duration d precludes the existence of any feasible solution, it follows that LBD_π has to be greater than d. In the case that d = UB_start, the algorithm terminates since LBD_π is known to be greater than UB_start.
Otherwise, LB_d := d + 1 is set for the next iteration, so that the interval [d + 1, UB_d] is considered next. Conversely, if d cannot be rejected, the interval [LB_d, d − 1] is investigated in the next iteration, i.e., UB_d := d − 1 is set. This procedure is reiterated as long as LB_d ≤ UB_d holds, where LB_d equals the destructive lower bound LBD_π at the end of the algorithm.
Finally, it remains to consider the way the algorithm verifies whether a feasible solution can exist for a given maximum project duration d on interval [LB_start, UB_start]. Let d be any hypothetical maximum project duration on this interval. Then, the minimum consumption of each resource k ∈ R by an activity i ∈ V_k over all start times t ∈ W_i ∩ [ES_i, LS_i] is determined, with ES_i as the earliest and LS_i as the latest W-feasible start time of activity i ∈ V_k if the project duration is not lower than LB_start and not greater than d. It should be noted that in contrast to ES_i, LS_i has to be determined in each iteration since it depends on d. Given r^{c,min}_{ik}(W, t1, t2) := min{r^c_ik(τ) | τ ∈ W_i ∩ [t1, t2]}, the minimum resource consumption over all start times t ∈ W_i ∩ [ES_i, LS_i] can be expressed by r^{c,min}_{ik}(W, ES_i, LS_i), and thus the total minimum consumption of a resource k ∈ R by Σ_{i∈V_k} r^{c,min}_{ik}(W, ES_i, LS_i). If there exists at least one resource k ∈ R with a total minimum resource consumption greater than R_k, the considered maximum project duration d precludes any feasible solution. Otherwise, the existence of a feasible solution with project duration d cannot be ruled out by the described procedure.
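The binary search of Algorithm 5 can be sketched as follows. This is a hedged sketch, not the paper's code: the feasibility check is abstracted into a callable `precludes_feasibility(d)` that stands in for the resource-consumption test described above, and all names are our own.

```python
def destructive_lower_bound(lb_start, ub_start, precludes_feasibility):
    """Binary search for the destructive lower bound LBD on [lb_start, ub_start].
    precludes_feasibility(d) must return True iff a maximum project duration d
    can be shown to rule out every feasible schedule.  Returns the smallest d
    in the interval that cannot be rejected, or ub_start + 1 if all are rejected."""
    lb, ub = lb_start, ub_start
    while lb <= ub:
        d = (lb + ub) // 2
        if precludes_feasibility(d):
            lb = d + 1        # LBD must be greater than d
        else:
            ub = d - 1        # d cannot be rejected; try smaller durations
    return lb
```

Since `precludes_feasibility` is monotone (a rejected duration rejects all smaller ones), the search needs only O(log(ub_start − lb_start)) evaluations, which matches the log(d) factor in the complexity statement below.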
In the following, the time complexities of both lower bounds are considered. For the first lower bound LB0_π, a time complexity of O(|V||E|(B+1)) has already been shown in Sect. 4, where it should be noted that the determination of LB0_π is already part of the enumeration process itself, so that it does not cause any additional computational effort. In contrast, the destructive lower bound LBD_π entails additional computing time with a possibly better lower bound as an outcome, i.e., LBD_π ≥ LB0_π. For Algorithm 5, we can state a time complexity of O(log(d)(|V||E|(B+1) + |R|B + |V|I)) based on the following observations. First of all, the maximum number of iterations is given by log(d) due to the binary search. Combined with the time complexity O(|V||E|(B+1)) to determine LS and the time complexity O(|R|B + |V|I) to obtain the total minimum resource consumption of all resources, the mentioned time complexity follows.

Dominance rules
In the following, two dominance rules are described which both have in common that they imply, for two nodes given by their corresponding start time restrictions W and W′, that S(W) ⊆ S(W′) with S(W) := S_T(W) ∩ S holds. Obviously, if both nodes are not reachable from each other in the search tree, this implies the redundancy of the enumeration node with start time restriction W, or rather its dominance by the enumeration node with start time restriction W′.
For the first dominance rule, a so-called resource usage bound Ū := (ū_ik)_{i∈V,k∈R} is assigned to each enumeration node in the search tree, used to store all resource usage bounds ū_ik established during the enumeration process as described in Sect. 3. Since at the beginning of the enumeration process no resource usage restriction is considered, ū_ik := p_i is set for all i ∈ V and k ∈ R for the root node. In order to represent all time-feasible start times of an activity for a node in the enumeration tree satisfying all resource usage bounds ū_ik established during the enumeration process, we introduce further notations. First, we define the so-called Ū-induced start time restriction of an activity i ∈ V by W_i(Ū) := ∩_{k∈R} W_ik(ū_ik) and call W(Ū) := (W_i(Ū))_{i∈V} the corresponding Ū-induced start time restriction. For the following explanations, in order to improve readability, we write W′ ⊆ W instead of W′_i ⊆ W_i for all i ∈ V, and Ū′ ≤ Ū to state ū′_ik ≤ ū_ik for all i ∈ V and k ∈ R.
The first rule, called Ū-dominance rule, compares the resource usage bounds of nodes which are not reachable from each other in the search tree to reveal redundancies. Let Ū and Ū′ be the resource usage bounds of such nodes and assume that Ū ≤ Ū′ is given. Then, the Ū-dominance rule detects that the node corresponding to Ū is redundant, or rather dominated by the other node, which can be deduced as follows. First of all, let W be the start time restriction and Ū the resource usage bound of an arbitrary node in the search tree. Then it can easily be verified that W ⊆ W(Ū) and S(W) = S(W(Ū)) are given, since no consistency test excludes feasible schedules from S_T(W) (cf. Sect. 5). Since Ū ≤ Ū′ implies S_T(W(Ū)) ⊆ S_T(W(Ū′)) and thus S(W(Ū)) ⊆ S(W(Ū′)) as well, S(W) ⊆ S(W′) follows directly, with W and W′ as the start time restrictions corresponding to the nodes with Ū and Ū′, respectively.
In contrast to the first rule, the second rule does not depend on storing additional information for each enumeration node. Instead, it directly compares the start time restrictions W and W′ of nodes which are not reachable from each other in the search tree, which is why the rule is called W-dominance rule. The redundancy of a node is detected by the rule if the condition W ⊆ W′ is satisfied, where the dominance of the node corresponding to W is trivially given by S_T(W) ⊆ S_T(W′).
Concluding, the time complexity of each dominance rule should be considered, where the time complexity refers to the dominance verification between two given nodes. For the Ū-dominance rule, a time complexity of O(|V||R|) can obviously be determined, and for the W-dominance rule a time complexity of O(|V| + min(B, B′)) can be stated, with B and B′ as the numbers of start time breaks corresponding to W and W′, respectively.
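Both dominance checks reduce to componentwise comparisons, which the following minimal sketch makes explicit (our own naming and data layout, assuming Ū is stored as nested dicts and each W_i as a set of start times):

```python
def u_dominated(U_bar, U_bar_prime):
    """Ū-dominance: the node with bound Ū is dominated by the node with Ū'
    if ū_ik <= ū'_ik holds for every activity i and resource k."""
    return all(U_bar[i][k] <= U_bar_prime[i][k]
               for i in U_bar for k in U_bar[i])

def w_dominated(W, W_prime):
    """W-dominance: the node with restriction W is dominated by the node with W'
    if W_i ⊆ W'_i holds for every activity i."""
    return all(W[i] <= W_prime[i] for i in W)   # '<=' is the subset test for sets
```

The loop structure directly reflects the stated complexities: O(|V||R|) comparisons for the Ū-rule and per-activity subset tests for the W-rule.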

Partitioning the feasible region
The dominance rules described in the previous section are only able, under specific conditions, to avoid that one and the same part of S_T is explored several times in the search tree. In contrast, the following procedure ensures that any part of S_T is explored at most once by partitioning the feasible region of each enumeration node. It should be noted that a similar approach has already been used in Murty (1968) for the assignment problem. In order to achieve the partitioning for each node, the enumeration scheme has to be adjusted as follows. Let W be the start time restriction corresponding to any enumeration node in the search tree with S := min S_T(W) ∉ S_R and the chosen conflict resource k ∈ R. Furthermore, assume that (i_1, i_2, ..., i_μ, ..., i_{|V_k(S)|}) is an arbitrary sequence of all activities considered for the decomposition of S_T(W) as described in Sect. 3, with V_k(S) := {i ∈ V_k | r^u_ik(S_i) > 0}. Then, the start time restriction W^{i_μ} corresponding to i_μ ∈ V_k(S) is set for each μ ∈ {1, 2, ..., |V_k(S)|} such that the reduction for activity i_μ is combined with the complements of the reductions of all preceding activities i_1, ..., i_{μ−1} in the sequence. In the following, we will show that the described decomposition leads to a partition of S_T(W) satisfying S_T(W) ∩ S = ∪_{i∈V_k(S)} (S_T(W^i) ∩ S), so that the correctness of the enumeration scheme with the adjusted decomposition is preserved. First of all, (S_T(W^{i_μ′}) ∩ S) ∩ (S_T(W^{i_μ″}) ∩ S) = ∅ holds for all μ′, μ″ ∈ {1, 2, ..., |V_k(S)|} with μ′ ≠ μ″, which follows directly from the guideline for the decomposition. Thus, it remains to show that any feasible schedule in S_T(W) is an element of the feasible region of some child node. For this, it is sufficient to show that Equation (1) holds, with W_{i_μ} as the start time restriction determined in the enumeration procedure for activity i_μ ∈ V_k(S) as described in Sect. 3.
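The Murty-style construction can be illustrated on generic start-time domains. The sketch below is our own simplification under stated assumptions: `domains` maps each conflict activity to its current start-time set, `reduced` to the subset enforced by the branching step, and child μ combines the reduction of i_μ with the complements of the reductions of i_1, ..., i_{μ−1}. It does not reproduce the paper's exact definition of W^{i_μ}, only the disjointness principle.

```python
from itertools import product

def murty_partition(domains, reduced):
    """Murty-style partition: child μ restricts activity i_μ to its reduced
    start-time set and every earlier activity i_ν (ν < μ) to the complement
    of its reduced set.  The children are pairwise disjoint by construction."""
    acts = list(domains)
    children = []
    for mu, a in enumerate(acts):
        child = dict(domains)
        child[a] = reduced[a]                  # i_μ: reduced starts only
        for b in acts[:mu]:                    # i_ν, ν < μ: complement
            child[b] = domains[b] - reduced[b]
        children.append(child)
    return children

def schedules(dom):
    """All start-time combinations admitted by a domain restriction."""
    acts = sorted(dom)
    return {tuple(pick) for pick in product(*(sorted(dom[a]) for a in acts))}
```

The children are pairwise disjoint, and their union covers exactly those schedules in which at least one activity takes a reduced start time, which mirrors the coverage argument for schedules resolving the resource conflict.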

Branch-and-bound procedure
In this section, the framework of our branch-and-bound procedure for the RCPSP/max-π is covered, which means that the corresponding representation of the procedure enables different specifications. Besides the enumeration scheme, the branch-and-bound procedure is given by a search strategy which determines how the search tree is built, consistency tests which are applied to start time restrictions, lower bounds on the project duration, and dominance rules used to prune redundant parts of the search tree. For the following explanations, we assume the search strategy to be subdivided into different strategies, called traversing, generation, ordering, and branching strategy. The traversing strategy determines the sequence in which all not completely explored nodes in the search tree are considered, where a node is said to be completely explored exactly if all its child nodes have been generated. As traversing strategy, the well-known depth-first search strategy (DFS) is applied. Since computational tests have shown that DFS can take a long time to find a first feasible solution, leading to a rather bad performance especially for large instance sets, an additional traversing strategy has been implemented to enhance the diversification in the search tree. This strategy works just like DFS, except that after a predefined time span one of the not completely explored nodes with lowest level in the search tree and lowest lower bound is considered next. We call this traversing strategy scattered-path search (SPS) and denote by SPS+ the extension which additionally considers priority values of the nodes, where the priority values are determined in the same way as for the ordering strategy described later on.
It should be noted that the traversing strategy states neither the maximum number of child nodes to be generated for any explored node nor the order in which the child nodes are considered. Instead, these specifications for the search procedure are determined by the generation and the ordering strategy, respectively. For the generation strategy, considering any search node which is explored, we distinguish between the alternatives to generate all its child nodes (all) or to restrict the number of generated child nodes by a maximum value (restr). Besides the maximum number for the generation of child nodes, the order in which they are considered during the search procedure has also been shown by computational studies to be crucial for the performance. As has already been observed for the RCPSP/max, it is also beneficial for the RCPSP/max-π to explore all generated child nodes in an order of non-decreasing lower bounds, which can be seen to locally increase the probability to find a good solution. Since it is likely that the lower bounds of some child nodes are equal, the ordering strategy additionally enables the usage of priority values for the child nodes to identify the most favorable ones. In the following, the most promising priority values we are aware of are presented. For this, let W be the start time restriction of any search node with S := min S_T(W) ∉ S_R and assume that k ∈ R_c := {k ∈ R | r^c_k(S) > R_k} is the chosen conflict resource for the decomposition of S_T(W). Furthermore, assume that (i_1, i_2, ..., i_s) is a sequence of all generated child nodes sorted by non-decreasing lower bounds on the project duration with i_μ ∈ V^c_k := V_k(S) = {i ∈ V_k | r^u_ik(S_i) > 0} for all μ ∈ {1, 2, ..., s} and s ≤ |V^c_k|, so that i_μ′ is explored before i_μ″ if μ′ < μ″ holds. Then, the sequence between the activities with equal lower bounds in (i_1, i_2, . . .
, i_s) is additionally sorted by priority values dependent on the chosen priority rule, where the corresponding activities can be sorted by either non-decreasing (min) or non-increasing (max) priority values for each rule. For the first two rules, a priority value π_i is assigned to each activity i ∈ V^c_k based on the resource usage induced by schedule S, with π_i := r^u_ik(S_i) for the so-called resource usage rule (RU) and π_i := r^u_ik(S_i)/p_i for the resource-usage-processing-time-ratio rule (RUPT). The delayed-start-time rule (DST) takes into consideration the minimum right-shift of the current start time S_i for each activity i ∈ V^c_k caused by the resource usage restriction of the enumeration process, i.e., π_i is set to this minimum right-shift. In contrast to the aforementioned priority rules, the following rules do not depend on the conflict resource k ∈ R_c. Instead, those rules are based on float or slack times which are adapted to take start time restrictions into consideration. The first such priority rule (TF) determines the so-called total float TF_i for each activity i ∈ V^c_k, which is defined as the maximum right-shift of start time S_i such that a W-feasible schedule in S_T(W) with a better project duration than that of the best feasible solution found so far still exists. Thus, the priority value of each activity i ∈ V^c_k is given by π_i := TF_i = LS^UB_i(W) − S_i with LS^UB_i(W) := LS_i(W, n + 1, UB − 1) and UB as the project duration of the best feasible solution in case that any solution has already been found, or UB := d + 1 otherwise. The last priority rule (EFF) is based on the early free float EFF_i of an activity i ∈ V^c_k, which is equal to the maximum possible right-shift of start time S_i such that all other activities of the project can still be started at their earliest W-feasible start times. Hence, π_i := EFF_i represents the priority value for each activity i ∈ V^c_k.
Conclusively, for the priority rules DST, TF and EFF a further variant has been implemented in which the number of start times skipped in the corresponding start time restriction by the right-shift is considered instead. These variants are denoted by DST_I, TF_I and EFF_I, respectively. The ordering based on the priority values as described can also be used to determine a sequence for the generation of all child nodes of an enumeration node. For this generation strategy, a candidate list of all child nodes is created, sorted by the priority values used for the ordering strategy, to determine the sequence in which the child nodes are generated (restrCL). It should be noted that this strategy is only useful if the number of generated child nodes is restricted.
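The interplay of lower bounds and priority tie-breaking can be sketched as follows; this is an illustrative fragment with invented names (the paper prescribes the rules, not this code), shown here for the RUPT rule and the min/max ordering variants:

```python
def rupt_priorities(conflict_acts, ru, S, p):
    """RUPT rule: π_i = r^u_ik(S_i) / p_i for every activity of the conflict set."""
    return {i: ru[i](S[i]) / p[i] for i in conflict_acts}

def order_children(children, lbs, priority, ext="max"):
    """Sort child nodes by non-decreasing lower bound; ties are broken by the
    priority value, preferring greater (max) or lower (min) values."""
    sign = -1 if ext == "max" else 1
    return sorted(children, key=lambda i: (lbs[i], sign * priority[i]))
```

Sorting on the tuple (lower bound, signed priority) realizes exactly the described ordering strategy: lower bounds dominate, priorities only decide among equal bounds.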
The last part of the search strategy is given by the branching strategy, which determines the way to choose a conflict resource k ∈ R_c for any unexplored node to decompose the corresponding feasible region S_T(W). As for the ordering of nodes, the selection of a conflict resource k ∈ R_c is related to priority values as well. Based on the priority rules for the ordering strategy, equally named priority rules for the branching strategy are directly derived by assigning π_k := Σ_{i∈V^c_k} π_i / |V^c_k| to each conflict resource k ∈ R_c. This means, for example, that the branching strategy TF assigns π_k := Σ_{i∈V^c_k} TF_i / |V^c_k| to each conflict resource k ∈ R_c, which is equal to the average total float over all activities in V^c_k. Besides the priority rules which are related to the priority values of the ordering strategies, additional rules are considered for the branching strategy. These rules assign to the priority value π_k of each conflict resource k ∈ R_c the absolute resource conflict Δ_k := r^c_k(S) − R_k (ARC), the relative resource conflict Δ_k/R_k (RRC), or the number of consuming activities |V^c_k| (NCA). Concluding, the conflict resource for the decomposition of S_T(W) is given by k := arg ext{π_k | k ∈ R_c}, where ext ∈ {min, max} determines if lower (min) or greater (max) priority values are preferred. Thus far, the different possibilities of the branch-and-bound procedure to build the search tree have been considered. Next, we take a closer look at the procedures applied to the search nodes to improve the performance. While the applications of lower bounds and dominance rules covered in Sects. 6 and 7 are straightforward, the consistency tests described in Sect. 5 require further explanations.
In general, different consistency tests are iteratively applied until any fixed point is reached. In our case, this fixed point is unique which can be deduced from Theorem 2.2 in Dorndorf et al. (2000a) due to the monotony of all consistency tests used in this work, i.e., γ (W ) ⊆ γ (W ) holds if W ⊆ W is given. Algorithm 6 shows a procedure to apply iteratively a set Γ β of domain-consistency tests on any start time restriction W until either the unique fixed point is detected (W W ) or a maximum number of iterations α is reached. This procedure is equal to Algorithm 2.1 in Dorndorf et al. (2000a) except that an iteration limit is considered additionally. The outcome of the algorithm is denoted by γ α β (W ) which is in general, due to the iteration limit, not equal to the unique fixed point. For the computational studies as shown in Sect. 10, we have examined the sets Γ B , Γ D and Γ W of domain-consistency tests, where Γ B comprises the temporal-bound and the resource-bound consistency test, Γ D the temporal-bound and D-interval consistency test and Γ W the temporal and the W -interval consistency test. Since we assume that the D-interval and W -interval consistency test can both be conducted either by considering all resources k ∈ R or only the resources k ∈ R i which are demanded by activity i ∈ V , we distinguish the corresponding outcomes of Algorithm 6 by γ α β [R] and γ α β [R i ] for Γ D and Γ W , respectively. Conclusively, we have additionally examined two alternatives for the maximum number of iterations with α 1 and α ∞, where α ∞ implies that Algorithm 6 determines the unique fixed point with respect to Γ β .
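The fixed-point iteration of Algorithm 6 can be sketched as follows. The sketch is our own, under the assumption that W is a dict mapping each activity to its set of admissible start times and each test is a monotone function on such dicts; α = None stands in for α = ∞:

```python
def apply_tests(W, tests, alpha=None):
    """Iteratively apply monotone consistency tests until the (unique) fixed
    point is reached, or stop early after alpha iterations (alpha=None: no limit).
    W maps each activity to its set of still-admissible start times."""
    it = 0
    while alpha is None or it < alpha:
        before = {i: set(s) for i, s in W.items()}
        for gamma in tests:
            W = gamma(W)
        it += 1
        if W == before:        # fixed point detected
            break
    return W
```

Because every test only removes start times and never adds any, the iteration terminates, and by the monotonicity argument cited above the fixed point reached for α = ∞ does not depend on the order of the tests.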
Algorithm 7 outlines the framework of the branch-and-bound procedure, where in order to improve the readability, it is assumed that a depth-first search is used and that for each explored node all child nodes are generated. This means that all other alternatives described above for the traversing and generation strategy are omitted in Algorithm 7.
In the first step of Algorithm 7, the start time restriction W is initialized, where the Floyd-Warshall algorithm (Ahuja et al. 1993, Sect. 5.6) is used to determine the distance matrix D or to prove that the project network contains a cycle of positive length (S_T = ∅) with a time complexity of O(|V|³). Next, a preprocessing step is applied to the start time restriction W, where the unique fixed point of set Γ_W considering all resources is calculated, i.e., W := γ^∞_W[R](W) is determined. In case that the preprocessing step cannot exclude the existence of a feasible schedule (S_T(W) ≠ ∅), the global lower bound LB_G is set to min W_{n+1}, where it should be noted that due to the preprocessing step min W_{n+1} = LBD_π(W) holds. After that, the root node given by a triple (W, S, LB) is put on stack Ω and the upper bound UB := d + 1 is initialized.
In each iteration, a triple (W, S, LB) is taken from stack Ω. If the corresponding node cannot be pruned due to LB ≥ UB, consistency tests as described above are applied to the start time restriction W. It should be noted that for the temporal-bound and the temporal consistency test a maximum project duration of UB − 1 is assumed, whereby it is taken into account that an optimal schedule has a project duration not greater than UB. Since the resource-bound, D-interval and W-interval consistency tests depend on the given maximum project duration, this can increase the number of detected inconsistent start times. If after the application of the consistency tests there exists no W-feasible schedule with a lower project duration than UB, i.e., S^UB_T(W) := S_T(W, n + 1, UB − 1) = ∅, then the corresponding node can be pruned. Otherwise, in case that the consistency tests have removed any start time from W, schedule S is updated. If S is resource-feasible, a new best schedule has been found, schedule S is stored by S* := S, and the upper bound UB for the project duration is set to S*_{n+1}. In case that schedule S is not resource-feasible, the feasible region S_T(W) is decomposed as described in Sect. 3 based on the conflict resource k ∈ R_c selected according to the priority rule of the branching strategy. As explained in Sect. 8, the decomposition could also be replaced by a partitioning of the feasible region. For each generated child node which does not exclude the existence of a W-feasible schedule with a project duration lower than UB, S := min S_T(W) is determined. If S is resource-feasible, S is stored as the best solution and UB is updated as described above. Otherwise, dominance rules are applied to W, and the lower bound LB is calculated if W is not dominated by another node in Ω which is not an ancestor of the search node. Lower bound LB is either given by LB0_π or LBD_π(W, S_{n+1}, UB − 1),
where in case of LB < UB the child node is stored in list Λ. After the generation of all child nodes, the nodes in list Λ are put on stack Ω according to the ordering strategy, so that the child node with the best priority value is considered in the next iteration. The described procedure is iteratively conducted until stack Ω does not contain any triple. Regarding the correctness of the enumeration scheme, Algorithm 7 returns an optimal schedule if and only if any feasible solution has been found, which is given by UB ≤ d. Accordingly, the infeasibility of the considered instance is proven by UB = d + 1 at the end of the algorithm.
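The control flow of Algorithm 7 can be condensed into a generic depth-first skeleton. This is a deliberately simplified sketch with invented callables standing in for the components described above (consistency tests, dominance rules and the ordering strategy are abstracted into `decompose`), not a faithful reimplementation:

```python
def branch_and_bound(root, lower_bound, is_feasible, objective, decompose):
    """Depth-first branch-and-bound skeleton: nodes are pruned when their lower
    bound reaches the incumbent upper bound; otherwise the node either becomes
    the new incumbent or is decomposed into child nodes."""
    best, ub = None, float("inf")
    stack = [root]
    while stack:
        node = stack.pop()
        if lower_bound(node) >= ub:
            continue                       # pruned by the incumbent
        if is_feasible(node):
            best, ub = node, objective(node)
            continue
        # children are pushed so the most promising one is explored next
        for child in reversed(decompose(node)):
            if lower_bound(child) < ub:
                stack.append(child)
    return best, ub
```

Running the skeleton on a toy tree whose leaves carry objective values shows the pruning behavior: subtrees whose lower bound reaches the incumbent are never expanded.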

Performance analysis
In order to evaluate the performance of our branch-and-bound procedure, we have conducted computational experiments on test instances comparing the branch-and-bound procedure with the mixed-integer linear programming (MILP) solver IBM CPLEX 12.8.0. Based on the binary linear program in Böttcher et al. (1999) for the RCPSP/π, we have developed different mathematical programs for the RCPSP/max-π which differ according to the considered type of decision variable. The different types of decision variables we have used are well known as pulse and step variables (see, e.g., Artigues 2017). Preliminary tests have shown that the program based on step variables provides the best results. Accordingly, in the following we compare our branch-and-bound procedure with the IBM CPLEX solver based on a formulation with step variables, which can be stated as follows. The mathematical program for the RCPSP/max-π is a time-indexed formulation with binary decision variables z_it for each activity i ∈ V and all its time-feasible start times t ∈ T_i := {ES_i, ..., LS_i}, where z_it takes value 1 if and only if activity i ∈ V starts at time t or earlier. To improve readability, we use ζ_it := z_it − z_{i,t−1} and H+ := H ∪ {0, −1, ..., −d} in the formulation. Accordingly, the constraints of the program ensure that all temporal constraints (2) and resource constraints (3) are satisfied, while the remaining conditions ensure that each activity is started exactly once.
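The step-variable encoding is easy to verify numerically. The following small sketch (our own helper names; a plain-Python illustration, not the MILP itself) shows that a non-decreasing step vector z encodes a single start time and that ζ_t = z_t − z_{t−1} recovers the pulse encoding:

```python
def step_vars(start, horizon):
    """Step variables for one activity: z[t] = 1 iff the activity starts at
    time t or earlier; z is non-decreasing over the scheduling horizon."""
    return [1 if t >= start else 0 for t in range(horizon)]

def pulse_from_step(z):
    """zeta_t = z_t - z_{t-1} recovers the pulse encoding: 1 exactly at the start."""
    return [z[0]] + [z[t] - z[t - 1] for t in range(1, len(z))]
```

Since the ζ values sum to one by construction, the "each activity is started exactly once" conditions of the program correspond to requiring z to be non-decreasing with z_{LS_i} = 1.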
Since instances for the RCPSP/max-π are not available in the open literature, we have used self-generated instances for the performance analysis. The new instance sets are based on the well-known benchmark test sets UBO for the RCPSP/max which were generated by the instance generator ProGen/max (Schwindt 1996, 1998b) and are available via the project scheduling library PSPLIB (Kolisch and Sprecher 1997). In a first step, for each of the UBO test sets with n = 10, 20, 50, 100, 200 real activities, we have chosen three instances which were generated by ProGen/max with values 0.25, 0.5 and 0.75 for the so-called order strength. The order strength OS is an estimator for the restrictiveness of a digraph which can be seen as a [0,1]-normalized control parameter for the number of possible execution sequences of the activities of the project, with OS = 0 implying a parallel and OS = 1 a series digraph. Since the restrictiveness cannot be calculated efficiently, the order strength OS is used instead, as it has been shown in Thesen (1977) to provide the lowest mean relative error to the restrictiveness among 40 evaluated estimators. Since the actual order strength OS′ after the generation of an instance is in general not equal to the target value OS, we have chosen for each UBO test set (n = 10, 20, 50, 100, 200) the instance with the lowest number for which OS′ deviates less than 10% from the target value OS = 0.25, 0.5, 0.75. Concluding, the project networks of the new instance sets for the RCPSP/max-π are taken from the UBO test sets with n = 10, 20, 50, 100, 200 real activities, which cover the processing times of the activities and all temporal constraints.
Accordingly, it remains to consider the generation of the problem parameters concerning the partially renewable resources. For the generation of the corresponding parameters, we have used the procedure described in Schirmer (1999, Sect. 10) for the instance generator ProGen/Π, which is an extension of the instance generator ProGen for the RCPSP (Kolisch et al. 1995). In Schirmer (1999), three [0,1]-normalized control parameters called horizon factor (HF), cardinality factor (CF) and interval factor (IF) are used to determine Π_k for each resource k ∈ R, where in line with Schirmer (1999), each instance contains 30 partially renewable resources. The horizon factor determines an upper bound d_R for the last period in Π_k for each resource k ∈ R with d_R := ES_{n+1} · (1 − HF) + d · HF, where we assume that for each instance a maximum project duration d := Σ_{i∈V} max(p_i, max_{(i,j)∈E} δ_ij) is given. Depending on d_R, the cardinality factor assigns a cardinality of |Π_k| := 2 · (1 − CF) + (d_R − 1) · CF to Π_k of each resource k ∈ R, where an upper bound for the number of components in Π_k is directly given by Ī := min(|Π_k|, d_R − |Π_k| + 1) (Schirmer 1999, Lemma 10.2). Finally, the interval factor determines the number of components for each resource k ∈ R by I_k := 1 · (1 − IF) + Ī · IF, so that after the assignment of |Π_k| periods to I_k components in Π_k and the determination of the number of periods between them, the period sets Π_k of all resources k ∈ R are defined. For further details, we refer the reader to Schirmer (1999, Sect. 10). In order to control the average ratio of resources used per real activity i ∈ V_r := V \ {0, n + 1}, the resource factor (RF) is used, where in a first step, as described in Kolisch et al. (1995), each real activity i ∈ V_r is randomly assigned a number |R_i| ∈ {a_min, ..., a_max} of demanded resources (|R_i| = Σ_{k∈R} a_ik), which is followed by a random selection of the corresponding resource demands r^d_ik from the set {r_min, ..., r_max}.
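The interaction of the three control parameters can be reproduced with a few lines of arithmetic. The sketch below follows the formulas above under our own assumptions (the rounding scheme is an assumption on our part, as Schirmer (1999) fixes the exact discretization):

```python
def period_set_parameters(es_end, d_bar, HF, CF, IF):
    """ProGen/Pi-style control parameters (rounding is an assumption here):
    d_R  - upper bound on the last period of a period set Pi_k,
    card - cardinality |Pi_k|, and I_k - its number of components."""
    d_R = round(es_end * (1 - HF) + d_bar * HF)
    card = round(2 * (1 - CF) + (d_R - 1) * CF)
    i_max = min(card, d_R - card + 1)          # upper bound on the components
    I_k = round(1 * (1 - IF) + i_max * IF)
    return d_R, card, I_k
```

For example, with ES_{n+1} = 20, d = 100 and HF = 0.25 the last period is bounded by d_R = 40, from which CF and IF then derive the cardinality and the number of components.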
For the generation of all instance sets for the RCPSP/max-π, we have used the parameters a_min = 5, a_max = 25, r_min = 1 and r_max = 10. Finally, the resource strength (RS) regulates the degree of scarcity of the resources by specifying the amounts of the resource capacities. As all control parameters described before, the resource strength is restricted to values in [0,1] as well. Dependent on the resource strength, the capacity of each resource k ∈ R is set to R_k := R^min_k · (1 − RS) + R^max_k · RS with R^ext_k := Σ_{i∈V_k} ext{r^c_ik(τ) | ES_i ≤ τ ≤ LS_i} and ext ∈ {min, max}, so that RS = 0 implies the greatest scarcity and RS = 1 the lowest.
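The capacity formula can be made concrete with a small sketch (our own naming; consumptions r^c_ik are again modeled as callables, and the rounding to an integer capacity is our assumption):

```python
def capacity_from_rs(consumers, RS):
    """R_k = R_min*(1-RS) + R_max*RS, where R_min/R_max sum the minimum/maximum
    consumption of every consumer over its time-feasible starts [ES_i, LS_i].
    consumers: list of (rc_i, ES_i, LS_i) with rc_i(t) the consumption of
    resource k if activity i starts at time t."""
    r_min = sum(min(rc(t) for t in range(es, ls + 1)) for rc, es, ls in consumers)
    r_max = sum(max(rc(t) for t in range(es, ls + 1)) for rc, es, ls in consumers)
    return round(r_min * (1 - RS) + r_max * RS)
```

With RS = 0 the capacity equals the summed minimum consumptions (greatest scarcity), with RS = 1 the summed maximum consumptions (lowest scarcity), and intermediate values interpolate linearly between the two.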
For each instance taken from a UBO test set with n = 10, 20, 50, 100, 200 real activities as described above, with actual order strengths deviating by less than 10% from the values OS ∈ {0.25, 0.5, 0.75}, instances for the RCPSP/max-π have been generated based on a full factorial design with control parameters HF, CF, IF, RF, RS ∈ {0.25, 0.5, 0.75} and a fixed number of 30 resources. Accordingly, we have generated for each number n = 10, 20, 50, 100, 200 of real activities an instance set containing 729 instances, which are denoted by UBO10π, UBO20π, UBO50π, UBO100π and UBO200π in the following. To provide a benchmark test set for the RCPSP/max-π, we have made these test sets available online. The computational experiments have been conducted on a PC with an Intel Core i7-8700 3.2 GHz CPU and 64 GB RAM under Windows 10 with a time limit of 300 s. The branch-and-bound algorithm and the binary linear program for the RCPSP/max-π were both coded in C++ and compiled with the 64-bit Visual Studio 2017 C++ compiler, where we used the IBM OPL C++ interface for the linear program. To solve the program, we have applied the MILP solver IBM CPLEX 12.8.0 restricted to a single thread in order to ensure a fair comparison with the branch-and-bound algorithm, which is conducted on a single thread as well. Table 1 shows the results of the computational performance analysis with a time limit of 300 s based on the settings given in Table 3, which are discussed later on. For each instance set, the results of the branch-and-bound algorithm (BnB) and the MILP solver IBM CPLEX 12.8.0 (CPX) are compared. In the first row, the number of non-trivial instances (#nTriv) of the corresponding instance set is given, where in line with Alvarez-Valdes et al. (2008) an instance is called non-trivial if and only if schedule ES is not resource-feasible. Since for each trivial instance an optimal solution can efficiently be determined, all results in Table 1 are restricted to non-trivial instances only.
The following rows show the number of instances which were solved to optimality (#opt), for which a feasible solution was found (#feas), for which infeasibility could be proved (#inf), or whose status (feasible or infeasible) remained open (#open). Finally, the last two rows list the average CPU time in seconds over all instances solved to optimality (∅CPUopt) and over all instances proved infeasible (∅CPUinf). The results in Table 1 indicate that BnB outperforms CPX in finding feasible solutions for more instances and in proving more solutions optimal over all instance sets with more than ten activities, where the differences even increase with the instance size. As a consequence, BnB is also able to determine the status of more instances. Additionally, it can be seen that BnB has an advantage over CPX regarding the average CPU time over all instances solved to optimality (∅CPUopt). In contrast, CPX dominates BnB in the sense of proving either more instances infeasible or using less average CPU time for the proof on instance sets UBO20π and UBO50π. The results of the performance analysis in Table 1 are supplemented by Table 2, which provides a closer look at the feasible solutions of BnB and CPX. The first four columns of Table 2 investigate to what extent the solution procedures are able to find feasible solutions for different instances. For this, the columns list the number of instances solved feasibly by at least one procedure (#∪feas) and by both procedures (#∩feas), followed by the number of instances for which only BnB (#<feas) or only CPX (#>feas) could find a solution. From the first part of Table 2, it can be seen that the proportion of feasible instances which could only be shown to be feasible by BnB is much greater than the corresponding proportion for CPX over all instance sets.
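The aggregation behind the rows of Table 1 can be illustrated with a small sketch. The run records below are hypothetical; the point is only that every non-trivial instance ends in exactly one of the four statuses, and that CPU times are averaged separately over the optimally solved and the proved-infeasible instances.

```python
from collections import Counter
from statistics import mean

# Hypothetical per-instance results of one solver on a set of non-trivial instances.
runs = [
    {"status": "opt",  "cpu": 0.4},
    {"status": "opt",  "cpu": 1.6},
    {"status": "feas", "cpu": 300.0},   # time limit reached, optimality not verified
    {"status": "inf",  "cpu": 2.0},     # infeasibility proved
    {"status": "open", "cpu": 300.0},   # neither a solution nor an infeasibility proof
]

counts = Counter(run["status"] for run in runs)  # yields #opt, #feas, #inf, #open
avg_cpu_opt = mean(r["cpu"] for r in runs if r["status"] == "opt")
avg_cpu_inf = mean(r["cpu"] for r in runs if r["status"] == "inf")
print(counts["opt"], counts["feas"], counts["inf"], counts["open"])
```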
The second part of Table 2 addresses the quality of the solutions for all instances which could be solved feasibly, but not verified as optimal, by both solution procedures (#∩,nv feas). These instances are segmented in the following columns into instances for which BnB found a better solution than CPX (#<), both procedures provided a solution with an equal project duration (#=), or CPX detected a better solution than BnB (#>). Finally, the last two columns list the average deviations between the objective function values of BnB and CPX. Denoting the corresponding objective function values by S^BnB_(n+1) and S^CPX_(n+1), the first column shows the average absolute deviation Δ^abs_CPX := S^BnB_(n+1) − S^CPX_(n+1) over all considered instances (∅Δ^abs_CPX) and the second column the average relative deviation Δ^rel_CPX := Δ^abs_CPX / S^CPX_(n+1) with respect to the objective function value of CPX (∅Δ^rel_CPX). Table 2 shows that on average the quality of the solutions of BnB is better than that of CPX over all instance sets, regarding both the absolute and the relative deviation. It can also be observed that the average deviations strongly increase with the instance size and that BnB determines a feasible solution with a better objective function value for far more instances than CPX.
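The two deviation measures can be computed as in the following sketch. The makespans are made-up numbers for illustration; a negative average means that BnB found shorter project durations than CPX on average.

```python
def deviations(s_bnb, s_cpx):
    """Per-instance absolute deviation S_BnB - S_CPX and relative
    deviation (S_BnB - S_CPX) / S_CPX, as defined in the text."""
    abs_dev = [b - c for b, c in zip(s_bnb, s_cpx)]
    rel_dev = [d / c for d, c in zip(abs_dev, s_cpx)]
    return abs_dev, rel_dev

# Hypothetical makespans on three instances solved feasibly by both procedures.
abs_dev, rel_dev = deviations([40, 55, 60], [42, 55, 66])
avg_abs = sum(abs_dev) / len(abs_dev)
avg_rel = sum(rel_dev) / len(rel_dev)
print(abs_dev)  # [-2, 0, -6]
```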
The settings of BnB used for the computational experiments, dependent on the instance size, are given in Table 3. The listed strategies and components applied to an instance set can be seen as the setting with the best balance, among all settings we tested, between the number of instances solved to optimality and the number whose status remains open. The terms used in Table 3 are in line with the descriptions in Sect. 9, except for some additional specifications discussed in the following. In the first row, the values in brackets represent the predefined time span for the scattered-path search until any of the not completely explored nodes with lowest level in the search tree is considered next. Furthermore, the values in brackets state the maximum number of child nodes allowed to be generated in each exploration step for the generation strategy, and the greatest search-tree level on which the corresponding set of consistency tests is applied (Consistency tests). For the branching and ordering strategy, the symbol ext ∈ {min, max}, which determines the ordering of the priority values, is given in parentheses. Finally, in the last row, "x" indicates that the enumeration scheme is conducted based on the concepts described in Sect. 8, whereas "-" stands for the application of the enumeration described in Sect. 3. From Table 3, it can be seen that for small instances (UBO10π and UBO20π) the partitioning of the feasible region of each search node is beneficial for the performance, whereas computational experiments on larger instances have shown that this procedure leads to a rather bad performance regarding the number of instances for which the status can be determined.
A possible reason is that, due to the partitioning, each part of the feasible region S can be reached by at most one path in the enumeration tree, which most likely decreases the probability of finding a feasible solution at all. Furthermore, Table 3 shows that with increasing instance size it is important for the performance to decrease the intensity of the consistency tests, to restrict the number of generated child nodes in each exploration step, and to invest less computational effort in the calculation of lower bounds. It should also be noted that the scattered-path search is already preferable for instance sets with more than ten activities and that the most promising priority values and their orderings depend on the instance size as well.
Computational tests have shown that for small instances the application of dominance rules can also improve the performance of the branch-and-bound procedure if the enumeration is conducted without partitioning. To show this, Table 4 compares the different techniques covered in this paper to avoid redundancies in the search tree on instance set UBO10π with a time limit of 300 s. In the first row, the results of BnB corresponding to the settings in Table 3 without partitioning are given, where the last columns show the average number of completely explored nodes per instance (∅explnodes) and the total CPU time over all instances (tcpu). The following rows list the results if, in addition, either only the Ū-dominance rule, only the W-dominance rule, or both dominance rules (Ū/W) are applied. From Table 4, it can be seen that both dominance rules reduce the average number of explored nodes per instance and the total CPU time, accompanied by an increase in the number of instances solved to optimality or proved infeasible, where the W-dominance rule shows the better performance. Furthermore, it can be observed that applying both dominance rules together performs slightly better still, although three instances remain without an optimality proof. The last row presents the results if the feasible region of each node in the search tree is partitioned as described in Sect. 8. These results demonstrate the dominance of the partitioning technique over the dominance rules, with a tremendous decrease in the average number of explored nodes per instance and in the total CPU time. Similar results could be observed for instance set UBO20π, whereas for larger instances neither the partitioning technique nor the dominance rules were able to improve the performance. Next, we illustrate the impact of the different components of the branch-and-bound procedure on the performance.
For this, Table 5 shows the results of the branch-and-bound procedure based on the search strategy and different combinations of the components given in Table 3 for instance set UBO10π with a time limit of 300 s. The first row provides the results of the basic version of the branch-and-bound procedure, i.e., the enumeration is done without partitioning and only the lower bound LB0π is used. The following rows show the results when the given component is applied in addition, where it can be observed that each added component improves the performance. Finally, it should be mentioned that similar results are obtained for larger instances as well.
Finally, we compare our branch-and-bound algorithm with the only available exact solution procedure for partially renewable resources, given in Böttcher et al. (1999) for the RCPSP/π (BOT). For this, Table 6 shows the results of a performance analysis conducted on test sets with 10, 20, 30 and 40 real activities (j10, j20, j30, j40) and 30 partially renewable resources each, which have been generated by ProGen/Π. The results for BOT in Table 6 are taken from Schirmer (1999, Sect. 10.4), where BOT was implemented in C and tested on a 66 MHz IBM RS/6000 workstation under AIX. For the comparison, we scaled the time limit by a factor of 50, corresponding to the clock rate ratio of the two workstations (3,200/66 ≈ 48.5), so that we used time limits of 6 (6, 12, 24) s for BnB while 300 (300, 600, 1200) s were chosen for BOT for instance set j10 (j20, j30, j40). It should be noted that nine instances of test set j10 which were proved infeasible by BOT could not be provided to us, so they are not part of the comparison. Table 6 shows the clear dominance of BnB, which has been applied with the settings for instance set UBO20π from Table 3.
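The time-limit scaling used for this comparison amounts to a simple computation, sketched below; the dictionary keys mirror the test-set names from the text, while the variable names are our own.

```python
# Clock rate ratio between the two machines: 3.2 GHz vs. 66 MHz.
CLOCK_RATIO = 3200 / 66  # approximately 48.5
SCALE = 50               # the rounded factor actually used in the comparison

# Time limits granted to BOT per test set (seconds), divided by the
# scaling factor to obtain the limits granted to BnB.
bot_limits = {"j10": 300, "j20": 300, "j30": 600, "j40": 1200}
bnb_limits = {s: t // SCALE for s, t in bot_limits.items()}
print(bnb_limits)  # {'j10': 6, 'j20': 6, 'j30': 12, 'j40': 24}
```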

Conclusions
We have considered the resource-constrained project scheduling problem with partially renewable resources and general temporal constraints with the objective to minimize the project duration, which to the best of our knowledge has not been treated in the open literature so far. For this problem, we have presented a branch-and-bound procedure whose enumeration scheme is based on a stepwise reduction of the resource usages permitted to the activities of the project. To enhance the performance of the solution procedure, we have developed consistency tests, lower bounds and dominance rules whose efficiency has been confirmed by computational experiments. Furthermore, it could be shown that the avoidance of redundancies in the search tree, obtained by an adaptation of the enumeration scheme, significantly improves the performance for small instances. A comparison with the mixed-integer linear programming solver IBM CPLEX 12.8.0 on adaptations of benchmark test sets from the literature revealed the clear dominance of the branch-and-bound procedure on feasible instances. In contrast, it turned out that the solver IBM CPLEX 12.8.0 is better suited to prove instances infeasible. Finally, the good performance of the branch-and-bound procedure was also confirmed by a comparison with the only available exact solution procedure for the RCPSP/π.
As the results of the computational study indicate, there is a great need for efficient heuristics for the RCPSP/max-π. In this context, an interesting direction for future research is to develop heuristics based on the temporal planning procedures and consistency tests presented in this work. Furthermore, the investigation of alternative lower bounds and consistency tests appears to be a promising topic as well.