Abstract
The multiple knapsack problem with grouped items aims to maximize rewards by assigning groups of items among multiple knapsacks, without exceeding knapsack capacities. Either all items in a group are assigned or none at all. We study the bicriteria variation of the problem, where capacities can be exceeded and the second objective is to minimize the maximum exceeded knapsack capacity. We propose approximation algorithms that run in pseudopolynomial time and guarantee that rewards are not less than the optimal solution of the capacity-feasible problem, with a bound on exceeded knapsack capacities. The algorithms have different approximation factors, where no knapsack capacity is exceeded by more than 2, 1, and \(1/2\) times the maximum knapsack capacity. The approximation guarantee can be improved to \(1/3\) when all knapsack capacities are equal. We also prove that for certain cases, solutions obtained by the approximation algorithms are always optimal: they never exceed knapsack capacities. To obtain capacity-feasible solutions, we propose a binary-search heuristic combined with the approximation algorithms. We test the performance of the algorithms and heuristics in an extensive set of experiments on randomly generated instances and show they are efficient and effective, i.e., they run reasonably fast and generate good-quality solutions.
Notes
 1.
In this paper, the proposed methods that solve bi-GMKP with performance bounds are referred to as algorithms; proposed methods that solve GMKP (capacity-feasible) and bi-GMKP, both with no performance guarantees, are referred to as heuristics.
Acknowledgements
The authors are grateful to the review team whose comments and suggestions helped improve the content and the exposition of the paper.
Appendices
Greedy LP-GMKP Algorithm
Proposition 1
Optimal extreme points of an LP-GMKP instance can have more than one partially assigned group.
Proof of Proposition 1
Consider the case with two knapsacks of capacities \({c_1=3}\) and \({c_2=1}\), and two groups with rewards \(p_1=p_2=3\). The first group has two items that weigh \(w_{1_a}=1\) and \(w_{1_b}=2\), and the second group has one item with \(w_{2}=3\). Consider the solution \((x,z)\) where:
Solution \((x,z)\) is feasible and optimal, with two partially assigned groups. It is also an extreme point since it has 8 variables and 8 active, linearly independent constraints, which are:
\(\square \)
Proposition 2
Algorithm 5 (i) generates an optimal solution for any feasible LP-GMKP instance, and (ii) runs in polynomial time.
Proof of Proposition 2
(i) The algorithm sorts all groups in non-increasing order of reward-to-total-weight ratio and then greedily assigns their items to the knapsacks, filling them one by one. Hence, no other groups can fill the knapsacks with higher rewards; i.e., the solution, if feasible, is optimal.
To guarantee feasibility, the algorithm checks before assigning a new group \( l\in K\) if it fits into the remaining capacity (considering all knapsacks). If it does, it assigns the group (\( z_l=1\), \( {\sum _{i\in M}x_{ij}=1}\), \( {\forall j\in G_l}\)). If the group does not fit entirely, it will fill up all the remaining capacity (\(z_l<1\) and \( {\sum _{i\in M}x_{ij}=z_l}\), \( {\forall j\in G_l}\)). In both cases, constraints (3) are satisfied.
Finally, capacity constraints (2) are satisfied because the knapsacks are filled up one by one, and whenever an item does not fit the current knapsack, it is fractionally split among the current knapsack and the next. Thus, the solution is feasible.
(ii) The sorting (line 1) runs in polynomial time \( {\mathcal {O}}(k\log k)\). The first “while” loop (line 3) iterates at most \( k\) times, the “for each” loop (line 5) at most \( n\) times, and the second “while” loop (line 7) at most \( m\) times. Combining all loops, in the worst case the algorithm iterates over \( k\) groups to go through all \( n\) items, while also iterating through all \( m\) knapsacks. Thus the algorithm runs in polynomial time \( {{\mathcal {O}}(k\log k+n+m)}\). \(\square \)
Corollary 4
Any solution found by Algorithm 5 has at most one partially assigned group.
Proof of Corollary 4
By construction, only the last group assigned can have \( {0<z_{l}<1}\).\(\square \)
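The greedy fill described in the proof of Proposition 2 can be sketched as follows. This is a minimal illustration, not the paper's Algorithm 5 verbatim: the item-level \(x\) variables are omitted, since by the proof above items can always be packed fractionally knapsack by knapsack, so only the total remaining capacity matters for the \(z\) values and the reward. The dict representation of groups is an assumption for the sketch.

```python
def greedy_lp_gmkp(capacities, groups):
    """Greedy sketch of the LP relaxation (LP-GMKP): sort groups by
    reward-to-total-weight ratio, non-increasing, and fill the knapsacks'
    aggregate capacity; the last group picked may be fractional (z_l < 1).
    Returns (z, reward), where z[l] is the assigned fraction of group l."""
    order = sorted(
        range(len(groups)),
        key=lambda l: groups[l]["reward"] / sum(groups[l]["weights"]),
        reverse=True,
    )
    remaining = sum(capacities)  # total free capacity across all knapsacks
    z = [0.0] * len(groups)
    reward = 0.0
    for l in order:
        if remaining <= 0:
            break
        gw = sum(groups[l]["weights"])
        if gw <= remaining:      # group fits entirely: z_l = 1
            z[l] = 1.0
            remaining -= gw
        else:                    # fill all remaining capacity: z_l < 1
            z[l] = remaining / gw
            remaining = 0.0
        reward += z[l] * groups[l]["reward"]
    return z, reward
```

On the instance from Proposition 1 (capacities 3 and 1, both groups with reward 3 and total weight 3), the sketch assigns group 1 fully and one third of group 2, in line with Corollary 4: at most one partially assigned group.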
Generalized mKP-based pseudopolynomial time approximation algorithm for bi-GMKP
Definition 8
Given a GMKP instance and a finite set \( {D\subset {\mathbb {R}}_{>0}}\):
\(\text {mKP}_\text {D}\)-GMKP generalizes KP-GMKP, 2mKP-GMKP and 3mKP-GMKP. KP-GMKP corresponds to \(\text {mKP}_\text {D}\)-GMKP for \(D=\emptyset \), 2mKP-GMKP corresponds to \(\text {mKP}_\text {D}\)-GMKP for \(D=\left\{ c_{\max }/2 \right\} \), and 3mKP-GMKP to \(\text {mKP}_\text {D}\)-GMKP for \(D=\left\{ c_{\max }/2,c_{\max }/3 \right\} \).
Proposition 3
Let \( Z(D)\) be the feasible region of an \( \text {mKP}_\text {D}\)-GMKP instance, and let
Then \( Z(D')= Z({\mathbb {R}}_{>0})\).
Proof of Proposition 3
This proposition shows that only finite sets \(D\subset D'\) are worth considering when defining an \( \text {mKP}_\text {D}\)-GMKP instance.
We first prove the upper bound \( w_{\max }\). Whenever \( d\ge w_{\max }\), then \( f_d(w_j)=0\), \( \forall j\in N\). This makes the left-hand side (lhs) coefficients of constraints (11) equal to 0, so these constraints are trivially satisfied and can be discarded.
Sort the elements of \( D'\) in increasing order and let \( d'>0\) be such that \( d_h<d'<d_{h+1}\), for some consecutive \( d_h,d_{h+1}\in D'\). We claim that constraint (11) for such \( d'\) is redundant given constraint (11) for \( d_h\), i.e.,
If (13) holds, then all such \( d'\) can be removed from \( D'\), and \( Z(D')= Z({\mathbb {R}}_{>0})\) still holds. Since \( d_h<d'\), when comparing the lhs of constraints (11) for \( d'\) and \( d_h\) we have
As the value of \( d\) increases, the right-hand side (rhs) of constraints (11) only changes, by integer amounts, at the points contained in \( D'\). Therefore
Combining constraint (11) for \( d_h\), with (14) and (15), we get
Thus, (13) holds. \(\square \)
Theorem 6
Let \( D'\) be defined as (12) (in Proposition 3), and let \( D\subset D'\) be a finite set containing \( c_{\max }/2\). For such \( D\), Algorithm 6 (i) is a \( \left( 1,1/2\right) \)approximation algorithm, and (ii) runs in pseudopolynomial time. (iii) This is a tight approximation.
Proof of Theorem 6
This theorem shows that even when the set \( D\) is very large, the worst-case approximation obtained with \( D=\{c_{\max }/2\} \) is not improved (it is equivalent to the worst-case approximation of Algorithm 2).
(i,ii) Analogous to the proof of Theorem 3, since \(c_{\max }/2\in D\). The pseudopolynomial time is
(iii) See Examples for Theorem 6 in Appendix 3. \(\square \)
Corollary 5
Even if Algorithm 6 could solve an instance for \( D={\mathbb {R}}_{>0}\), the \( \left( 1,1/2\right) \)approximation guarantee from Theorem 6 does not improve.
Proof of Corollary 5
Consider the tight example of Theorem 6 (Fig. 10 in Appendix 3). The groups picked by the algorithm have a feasible assignment in the corresponding GMKP instance (by rearranging items). Since all constraints (11) are valid inequalities for GMKP (proof analogous to Lemma 2), then the solution found by the algorithm is not removed by constraints (11) for any \(d\in D\); thus the tight \(\beta \le 1/2\) example works for \( D={\mathbb {R}}_{>0}\). \(\square \)
Theorem 7
Let \( D'\) be defined as (12) (in Proposition 3), and let \( D\subset D'\) be a finite set such that \( \left\{ c_{\max }/2,c_{\max }/3 \right\} \subseteq D\). For such \( D\), when all knapsacks have equal capacities, Algorithm 6 (i) is a \( \left( 1,1/3\right) \)approximation algorithm, and (ii) runs in pseudopolynomial time. (iii) This is a tight approximation.
Proof of Theorem 7
This theorem shows that even when the set \( D\) is very large, the worst-case approximation obtained with \( D=\left\{ c_{\max }/2,c_{\max }/3\right\} \) (equivalent to the worst-case approximation of Algorithm 3) is not improved when all knapsacks have equal capacities.
(i,ii) Analogous to the proof of Theorem 4, since \(\left\{ c_{\max }/2,c_{\max }/3\right\} \subseteq D\). The pseudopolynomial time is the same as in Theorem 6.
(iii) See Examples for Theorem 7 in Appendix 3. \(\square \)
Tight examples
Examples for Theorem 1. The tightness of guarantee \( \alpha \ge 1\) is trivial; any example where the algorithm gives an optimal solution to GMKP works. Refer to Fig. 7 for the tight \( \beta \le 2\) example. Consider \( m\ge 3\) knapsacks of equal capacities 1, and two groups where

Group 1 has \( m-1\) items that weigh 1 each and one item that weighs \(m/(m+1)\).

Group 2 has \( m+1\) items that weigh \(m/(m+1)\).
If all rewards equal total group weights then, given this ordering of groups, Algorithm 0 generates a solution where \( m-1\) knapsacks each contain two items of weights 1 and \( m/(m+1)\). The last knapsack contains three items of weight \( m/(m+1)\). Therefore, the maximum exceeded knapsack capacity is \( {2-3/(m+1)}\), and as \( m\rightarrow \infty \) it converges to 2.
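The arithmetic of this tight example can be checked with exact rationals; the sketch below (the helper name `theorem1_excess` is illustrative) computes the maximum exceeded capacity of the solution described above and confirms the \(2-3/(m+1)\) formula.

```python
from fractions import Fraction

def theorem1_excess(m):
    """Maximum exceeded capacity of the Theorem 1 tight example:
    m >= 3 knapsacks of capacity 1; m-1 knapsacks hold items of weights
    {1, m/(m+1)}, and the last holds three items of weight m/(m+1)."""
    w = Fraction(m, m + 1)
    loads = [1 + w] * (m - 1) + [3 * w]
    return max(loads) - 1  # excess over the unit capacity

# the excess 2 - 3/(m+1) converges to 2 as m grows
assert theorem1_excess(3) == 2 - Fraction(3, 4)
```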
Examples for Theorem 2. The tightness of guarantee \( \alpha \ge 1\) is trivial; any example where the algorithm gives an optimal solution to GMKP works. For the tight \( \beta \le 1\) example, refer to the same instance as in Examples for Theorem 1, but only considering group 2 with \( m+1\) items that weigh \(m/({m+1})\). Algorithm 1 generates a solution where \( m-1\) knapsacks have one item assigned, and one knapsack has two items assigned. Therefore, the maximum exceeded knapsack capacity is \( {1-2/(m+1)}\), and as \( m\rightarrow \infty \) the bound converges to \( 1\). Note constraint (6) of KP-GMKP is satisfied.
Examples for Theorem 3. The tightness of guarantee \( \alpha \ge 1\) is trivial; any example where the algorithm gives an optimal solution to GMKP works. Refer to Fig. 8 for the tight \( \beta \le 1/2\) example. Consider \( m\ge 3\) knapsacks of equal capacities 1, and one group that has \( 2m+1\) items that weigh \(m/({2m+1})\) each. Algorithm 2 generates a solution where \( m-1\) knapsacks each contain two items and one knapsack contains three items. Therefore, the maximum exceeded knapsack capacity is \( 1/2-3/({4m+2})\), and as \( m\rightarrow \infty \) it converges to \(1/2\). Note constraints (6) and (7) of 2mKP-GMKP are satisfied.
Examples for Theorem 4. The tightness of guarantee \( \alpha \ge 1\) is trivial; any example where the algorithm gives an optimal solution to GMKP works. Refer to Fig. 9 for the tight \( \beta \le 1/3\) example. Consider \( m\ge 3\) knapsacks of equal capacities 1, and one group with \(m\) items that weigh \((3m-1)/({3m})\) each and one item of weight \(1/3\). Algorithm 3 generates a solution where one knapsack has the item of weight \(1/3\) assigned and one item that weighs \(({3m-1})/({3m})\). Therefore, the maximum exceeded knapsack capacity is \(1/3-1/({3m})\), and as \( m\rightarrow \infty \) the bound converges to \( 1/3\). Note constraints (6), (7), and (9) of 3mKP-GMKP are satisfied.
Examples for Theorem 6. Refer to Fig. 10 for the tight \( \beta \le 1/2\) example.
Consider \( m\ge 3\) knapsacks whose capacities are \( c_i=({2m+1-i})/({2m}) \), \( \forall i\in M\), and consider a single group with \( m+1\) items, whose weights are \( w_j=({2m-j})/({2m})= c_j-1/({2m})\), \( \forall j\in N\setminus \{n\} \) and \( {w_n=1/2}\). The group is feasible in the GMKP instance, since the first \( m-1\) items can be assigned respectively to knapsacks \( j+1\), where they fit exactly, and both items with \( w_{n-1}=w_n=1/2\) can be assigned to the first knapsack of size 1. On the other hand, the algorithm sequentially assigns each item \( j\in N\setminus \{n\} \) to knapsack \( i=j\). Before assigning the last item \( n\), all knapsacks have \( 1/({2m})\) free capacity, so assigning \( n\) anywhere exceeds the capacity by \( 1/2-1/({2m})\). As \( m\rightarrow \infty \), the bound converges to \( 1/2\). This example is also tight for the \(\alpha \ge 1\) bound.
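This construction can likewise be verified numerically. The sketch below (the helper name `theorem6_excess` is illustrative) builds the capacities and weights, confirms that the greedy assignment of item \(j\) to knapsack \(j\) leaves exactly \(1/(2m)\) free everywhere, and returns the resulting excess \(1/2-1/(2m)\).

```python
from fractions import Fraction

def theorem6_excess(m):
    """Excess for the Theorem 6 tight example: capacities
    c_i = (2m+1-i)/(2m); item j (j <= m) weighs c_j - 1/(2m) and the
    last item weighs 1/2. After greedily placing item j in knapsack j,
    every knapsack has 1/(2m) free, so the last item overflows."""
    c = [Fraction(2 * m + 1 - i, 2 * m) for i in range(1, m + 1)]
    w = [ci - Fraction(1, 2 * m) for ci in c] + [Fraction(1, 2)]
    free = [c[j] - w[j] for j in range(m)]  # free capacity after the greedy pass
    assert all(f == Fraction(1, 2 * m) for f in free)
    return w[-1] - max(free)  # 1/2 - 1/(2m)
```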
Examples for Theorem 7. The tightness of guarantee \( \alpha \ge 1\) is trivial; any example where the algorithm gives an optimal solution to GMKP works.
Refer to Fig. 11 for the tight \( \beta \le 1/3\) example, where all knapsacks have equal capacities of 1. Recall from Proposition 3 that only elements of \(D\) of the form \({c_{\max }/q=1/q}\), \(q\in {\mathbb {Z}}_{>0}\), are worth considering. Let \(D\) be partitioned into \(D_{\text {odd}}\) and \(D_{\text {even}}\), where for all \( 1/q\in D_{\text {odd}}\), \(q\) is odd; and for all \( 1/q\in D_{\text {even}}\), \(q\) is even. Let \(1/q_{\max }\in D_{\text {even}}\) be the smallest number in \( D_{\text {even}}\).
For a small \(\epsilon >0\), consider \(m = \left\lceil 1/(3\epsilon ) \right\rceil +\left\lfloor q_{\max }/3 \right\rfloor +\sum _{1/q\in D_\text {odd}} \left\lfloor q/3\right\rfloor \) knapsacks and groups where

Group 1 has an item that weighs \(1/3\) and \(\left\lceil 1/({3\epsilon }) \right\rceil \) items that weigh \( 1-\epsilon \).

Group 2 has \(2\left\lfloor q_{\max }/3 \right\rfloor \) items that weigh \( 1/2\).

For each \(1/({2p+1})\in D_\text {odd}\), group 3 has \( \left\lfloor ({2p+1})/{3}\right\rfloor \) items that weigh \( ({p+1})/({2p+1})\), and \( \left\lfloor ({2p+1})/{3}\right\rfloor \) items that weigh \( {p}/({2p+1})\).
If all rewards equal total group weights then, given this ordering of groups, Algorithm 2 generates a solution where all groups are selected. One knapsack has an item of weight \( 1-\epsilon \) and another of weight \( 1/3\). Therefore, the maximum exceeded knapsack capacity is \( 1/3-\epsilon \), so as \( \epsilon \rightarrow 0 \) it converges to the bound \( 1/3\).
We show the solution is feasible in the \( \text {mKP}_\text {D}\)GMKP instance. Capacity constraint (6) is satisfied
Constraints (11) are satisfied for any \( 1/({2p})\in D_\text {even}\)
Constraints (11) are satisfied for any \( 1/({2p+1})\in D_\text {odd}\)
Computer cluster hardware
We ran all algorithms and heuristics on a computer cluster with CPU and RAM characteristics as described in Table 3. RAM was assigned as needed, up to 64 GB. No instances from the main experimental study reached the RAM threshold; only two instances from Appendix 6 reached it, and their results were not included. Therefore, our computational study is almost unrestricted by RAM limitations. The IP solver running IP-GMKP used the most RAM by far.
To have a fair comparison, all algorithms and heuristics solved a given instance on the same node; this way, no algorithm or heuristic benefits from hardware differences. We also measured processing clock time when calculating computation times, meaning that putting a node to sleep did not affect the computation time of our experiments.
Characteristics of generated instances
The outcome of the 3000 generated instances described in Sect. 9 can be visualized in Figs. 12 and 13. Figure 12 shows histograms of the instances’ parameters: \(m\), \(w_\text {split}\), \(w_\text {min}\), \(w_\text {mode}\), \(r_\text {load}\), \(r_\text {conc}\), \(n\), and \(k\). The first six correspond to the six dimensions defined by the maximum projection Latin hypercube design; the last two (number of items \(n\) and number of groups \(k\)) result from the instance generation procedure. Note how \(m\), \(w_\text {split}\), \(r_\text {load}\), and \(r_\text {conc}\) are uniformly distributed, as expected by design. \(w_\text {min}\) and \(w_\text {mode}\) are not uniformly distributed since the former depends on \(w_\text {split}\) and the latter depends on both \(w_\text {split}\) and \(w_\text {min}\). The instance with the most items has \(n=71{,}604\) and the instance with the most groups has \(k=19{,}205\); both histograms were cut off since the counts decreased significantly beyond the 5000 and 2500 thresholds, respectively.
Figure 13 shows six two-dimensional projections of the experimental design to exemplify how the maximum projection Latin hypercube design is space-filling in every dimensional subset.
Results for different rewards
All instances solved in Sect. 9 had rewards of each group equal to the total weight of items in that group. Here we test the same instances after modifying each reward \(p_l, l\in K\), in three different ways:

Original Reward R0: \(p^0_l\leftarrow \sum _{j\in G_l}w_j\)

Reward R1: \(p^1_l\leftarrow \lfloor 100\sqrt{p^0_l}\rceil \)

Reward R2: \(p^2_l\leftarrow \lfloor p^0_l\sqrt{p^0_l}\rceil \)

Reward R3: \(p^3_l\leftarrow \left\lfloor Random(1,10)\cdot p^0_l\right\rceil \)
The function \(\lfloor \cdot \rceil \) denotes rounding to the nearest integer, which avoids precision issues with the IP solver. Groups with reward R1 have a reward-to-weight ratio of approximately \(100/\sqrt{p_l^0}\), giving an incentive to pick lighter groups (we multiplied by 100 to retain more precision when rounding). Groups with reward R2 have a reward-to-weight ratio of approximately \(\sqrt{p_l^0}\), giving an incentive to pick heavier groups. Finally, reward R3 multiplies each original reward by a different real random number between 1 and 10, adding noise to the instance while still having rewards proportional to weights in expectation.
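The four reward structures above can be sketched as follows. The function name and the fixed seed are illustrative choices, and Python's `round` resolves exact ties by banker's rounding, which matches \(\lfloor \cdot \rceil\) everywhere else.

```python
import math
import random

def modified_rewards(group_weights, rng=random.Random(0)):
    """Sketch of the reward structures R0-R3 for a list of per-group
    item-weight lists. Each R3 reward gets its own random multiplier."""
    p0 = [sum(w) for w in group_weights]               # R0: total group weight
    p1 = [round(100 * math.sqrt(p)) for p in p0]       # R1: favors lighter groups
    p2 = [round(p * math.sqrt(p)) for p in p0]         # R2: favors heavier groups
    p3 = [round(rng.uniform(1, 10) * p) for p in p0]   # R3: noisy, proportional in expectation
    return p0, p1, p2, p3
```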
We repeated all experiments from Sect. 9 for an additional 9000 instances, given by the combination of the original 3000 instances and the three additional reward structures. Only one instance for reward R1 and one for reward R2 were not solved by Gurobi due to memory limitations, so we removed them from the results.
Different rewards: results for biGMKP algorithms
In Figs. 14, 15 and 16 we see the details of the maximum exceeded knapsack capacity of the bi-GMKP algorithms, after swap-optimal improvement, for different reward structures. Rewards R1 had solutions with a lower maximum exceeded knapsack capacity than rewards R2, which makes sense since R1 prioritizes lighter groups while R2 prioritizes heavier groups. Rewards R0 and R3 are somewhat similar (see Figs. 2 and 16, respectively), suggesting that having the same reward-to-weight ratio in expectation yields similar results. Algorithms 3mKP and 100mKP obtained the least exceeded knapsack capacity regardless of reward structure.
Figures 17, 18 and 19 show the computation times of each bi-GMKP algorithm on instances with different reward structures. The algorithms take longer on instances where heavier groups are prioritized (reward R2), even producing outliers that take almost an hour to run, although 99% of instances took under a minute. 3mKP persists as the most time-effective alternative regardless of reward structure, while running 100mKP might still be recommended since it obtains better results and computation times remain short. It is interesting to note that for rewards R1, R2 and R3, Gurobi reached the time limit of 3 h in around 67% of instances, while for the original reward R0 (Fig. 3) it only reached the time limit in about 33%. It seems that Gurobi works better when total group weights and rewards are equal.
Different rewards: results for GMKP heuristics
Figures 20, 21 and 22 show the details of the optimal reward ratio of the GMKP heuristics, after swap-optimal improvement, for different reward structures. As with the original reward R0 (Fig. 4), there does not seem to be an improvement after adding constraints beyond 2mKP in any reward structure. The GMKP heuristics obtained the best performance when prioritizing smaller groups (R1), and random noise on rewards did not affect the performance significantly (R3).
The performance of the best GMKP heuristic dropped slightly in comparison to the original reward R0; the 5th percentile of the optimal reward ratio was 0.83 in R0 (i.e., 95% of instances did better than 0.83), while for rewards R1, R2, and R3 the 5th percentile dropped to 0.79, 0.73, and 0.74, respectively. This difference might not be due to a loss in performance, but because most instances with rewards R1, R2, and R3 were not solved to optimality by the IP solver and the resulting gaps were larger; 95% of instances had a gap of 4.3% or less with reward R0, while the 95th percentile of gaps increased to 9.1%, 11.1%, and 8.2% for rewards R1, R2, and R3, respectively.
Figures 23, 24 and 25 show the computation times of each GMKP heuristic on instances with different reward structures. 2mKP runs faster than KP and similarly to 3mKP, regardless of reward structure. Computation times increased in comparison to the original reward structure (see Fig. 5): 95% of instances were solved in less than 103 s with reward R0, and in less than 111, 183, and 118 s with rewards R1, R2, and R3, respectively. This difference might also be explained by Gurobi performing better when total group weights equal rewards (recall that subproblems are also solved with Gurobi).
Different rewards: results for biGMKP heuristics
For different reward structures, Fig. 26 shows the density of the non-dominated solutions obtained by the bi-GMKP heuristic for the modified versions of Algorithm 2 (2mKP), as seen in line 6 of Heuristic 1. Analogous to reward R0 (see Fig. 6), most solutions lie slightly above the diagonal line that represents the case where changes in the optimal reward ratio generate a proportional change in the maximum exceeded knapsack capacity. This shows how the proposed bi-GMKP heuristic can be used, regardless of the reward structure, to generate different bicriteria combinations that do a good job of maximizing rewards while only slightly exceeding knapsack capacities.
Results for unequal knapsack capacities
All instances solved in Sect. 9 had identical knapsack capacities. Here we modify all instances by changing each knapsack capacity \(c_i, i\in M\), through a two-step procedure. The first step multiplies each knapsack capacity of 100 by a real random number between 0.5 and 1.5 (each knapsack capacity is multiplied by a different random number), so knapsack capacities remain 100 in expectation. Since \(w_{\min }\le 50\) for all instances by design, this procedure does not generate knapsack capacities where no item would fit. The second step avoids having groups that exceed the total knapsack capacity. Formally, the steps are the following for each instance:

1.
Define new knapsack capacities \(c'_i\), \(\forall i\in M\), as a function of the original knapsack capacity \(c_i=100\) by doing:
$$\begin{aligned} c'_i\leftarrow \left\lfloor Random\left( 0.5, 1.5 \right) \cdot 100\right\rceil \end{aligned}$$ 
2.
Let \(g_{\max } = \max _{l\in K} \sum _{j\in G_l}w_j\) and \(c_\text {total}=\sum _{i\in M}c'_i\). If \(g_{\max }>c_\text {total}\), update all knapsack capacities \(c'_i\), \(i\in M\), by doing:
$$\begin{aligned} c'_i\leftarrow \left\lceil c'_i \cdot \frac{g_{\max }}{c_\text {total}} \right\rceil \end{aligned}$$
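The two steps above can be sketched as follows. The function name and the fixed seed are illustrative assumptions; step 1 rounds to the nearest integer and step 2 rounds up, as in the formulas.

```python
import math
import random

def unequal_capacities(m, group_weights, rng=random.Random(0)):
    """Two-step capacity modification sketch.
    Step 1: scale each original capacity of 100 by Uniform(0.5, 1.5), rounded.
    Step 2: if the heaviest group exceeds the total capacity, scale all
    capacities up (rounding up) so the heaviest group fits overall."""
    c = [round(rng.uniform(0.5, 1.5) * 100) for _ in range(m)]
    g_max = max(sum(w) for w in group_weights)
    c_total = sum(c)
    if g_max > c_total:
        c = [math.ceil(ci * g_max / c_total) for ci in c]
    return c
```

Since the scaled capacities are rounded up, their sum is at least \(g_{\max }\) after step 2, so every group fits in total capacity.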
We repeated all experiments from Sect. 9 for the 3000 instances with the modified knapsack capacities.
Unequal knapsack capacities: results for biGMKP algorithms
In Fig. 27 we see the details of the maximum exceeded knapsack capacity of the bi-GMKP algorithms, after swap-optimal improvement, for instances with unequal knapsack capacities. Similar to the original instances with equal knapsack capacities, algorithms 3mKP and 100mKP obtained the least exceeded knapsack capacities. Performance improved in comparison to the original instances, likely because larger knapsacks allowed the algorithms to reduce the maximum exceeded knapsack capacity even further.
Figure 28 shows the computation times of each bi-GMKP algorithm on instances with unequal knapsack capacities. 3mKP persists as the most time-effective alternative, while running 100mKP might still be recommended since it obtains better results and computation times remain short. Computation times remained almost identical to those for instances with equal knapsack capacities (compare Fig. 28 to Fig. 3).
Unequal knapsack capacities: results for GMKP heuristics
Figure 29 shows the details of the optimal reward ratio of the GMKP heuristics, after swap-optimal improvement, for unequal knapsack capacities. As with the original instances (Fig. 4), there does not seem to be a relevant improvement after adding constraints beyond 2mKP. The performance of all GMKP heuristics improved in contrast to the original instances; for example, the 5th percentile of 3mKP improved from 0.78 to 0.85.
Figure 30 shows the computation times of each GMKP heuristic on instances with unequal knapsack capacities. As with equal knapsack capacities, 2mKP runs faster than KP and similarly to 3mKP. Computation times remained almost identical to those for instances with equal knapsack capacities (compare Fig. 30 to Fig. 5); computation times do not seem affected by different knapsack capacities.
Unequal knapsack capacities: results for biGMKP heuristics
Figure 31 shows the density of the non-dominated solutions obtained by the bi-GMKP heuristic for the modified versions of Algorithm 2 (2mKP), as seen in line 6 of Heuristic 1. Analogous to the original instances with equal knapsack capacities (see Fig. 6), most solutions lie slightly above the diagonal line that represents the case where changes in the optimal reward ratio generate a proportional change in the maximum exceeded knapsack capacity. This shows how the proposed bi-GMKP heuristic can be used, regardless of knapsack capacities, to generate different bicriteria combinations that do a good job of maximizing rewards while only slightly exceeding knapsack capacities.
Cite this article
Castillo-Zunino, F., Keskinocak, P. Bicriteria multiple knapsack problem with grouped items. J Heuristics 27, 747–789 (2021). https://doi.org/10.1007/s10732-021-09476-y
Keywords
 Multiple knapsacks
 Approximation algorithms
 Heuristics
 Bicriteria optimization
 Computational study