1 Introduction

One of the best known problems in mathematical programming is the 0–1 Knapsack Problem (KP) which is defined as follows. We are given an item set N, consisting of n items labeled \(1, 2, \ldots , n\), and a knapsack with a capacity C. Each item \(j \in N\) has a profit \(p_j\) and a weight \(w_j\). The objective is to select a subset \({N^{\prime }} \subseteq N\) such that the total profit of the selected items is maximized and their total weight does not exceed the capacity C. In this paper, we call this problem the Standard Knapsack (SK) problem.

The SK problem can be formulated by the following integer linear program:

$$\begin{aligned} (SK)&\hbox { maximize }&\sum _{j=1}^n p_{j} \, x_{j} \end{aligned}$$
(1)
$$\begin{aligned}&\hbox { Subject to }&\sum _{j=1}^n w_{j} \, x_{j}\le C, \end{aligned}$$
(2)
$$\begin{aligned}{} & {} x_{j} \in \{0,1\},\quad j=1,\ldots ,n, \end{aligned}$$
(3)

with \(p_{j}, w_{j}\) and C being non-negative integers. The SK problem is the simplest non-trivial integer programming model, with binary variables, a single constraint and only positive coefficients. Nevertheless, adding the integrality condition (3) to the linear program (1) and (2) already puts it into the class of weakly \(\mathcal{N}\mathcal{P}\)-hard problems (see, e.g., Garey & Johnson, 1979).

Despite the fact that in the SK problem the profits and weights of items are fixed, the problem plays an important role in applications and often appears in formulations of many combinatorial optimization problems. This problem is weakly \(\mathcal{N}\mathcal{P}\)-hard and can be solved exactly in pseudo-polynomial time or by constant factor approximation algorithms and approximation schemes. For an overview of all aspects of the SK problem, its relatives and applications we refer the reader to the monograph by Kellerer et al. (2004). Another interesting reference on the subject is the monograph by Martello and Toth (1990). A brief summary of recent research on the SK problem can be found in the reviews by Cacchiani et al. (2022a, 2022b).
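To make the pseudo-polynomial solvability of the SK problem concrete, the classical capacity-indexed dynamic program can be sketched as follows (an illustrative sketch with toy data, not taken from the cited monographs):

```python
def knapsack_sk(p, w, C):
    """Return the maximum total profit of items fitting into capacity C
    (classical O(n*C) dynamic program for the 0-1 knapsack problem)."""
    best = [0] * (C + 1)  # best[c] = max profit with total weight <= c
    for j in range(len(p)):
        # iterate capacities downwards so that each item is used at most once
        for c in range(C, w[j] - 1, -1):
            best[c] = max(best[c], best[c - w[j]] + p[j])
    return best[C]

# Example: items (profit, weight) = (6,3), (5,2), (4,2), capacity 4
print(knapsack_sk([6, 5, 4], [3, 2, 2], 4))  # -> 9 (pack items 2 and 3)
```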

Though the SK problem is considered mainly in optimization theory, it may also be studied in the context of scheduling theory. For example, consider a production unit where a worker operates a machine that is only available during a certain period of time. The worker has to execute on the machine a set of jobs composed of assembly, maintenance, testing or other production operations. Due to the repeated execution of these operations, the worker's skills improve, which reduces the time required to complete each job. Therefore, one can assume that the processing time of any job executed by the worker changes over time and is proportional to the value of a function of the job's position in the sequence of jobs completed so far. In order to motivate the worker to increase productivity, the employer pays the worker for each completed job a payment proportional to the processing time of the job. Under these assumptions, the aim of the worker is to find a job sequence which maximizes the total payment for the jobs executed during the machine availability period. The problem defined in this way is equivalent to a variant of a knapsack problem in which the job processing times and the payments are variable position-dependent item sizes and profits, respectively. The above example may be modified in many ways, since depending on the form of the functions which define the job processing times and the payments, we obtain various forms of variable job processing times. We refer the reader to the review by Gawiejnowicz (2020a) for an introductory presentation of the variability of job processing times. A detailed discussion of various classes of scheduling problems with variable job processing times and their applications can be found in the monograph by Gawiejnowicz (2020b).

Now, we formulate the three position-dependent knapsack problems we consider, introduce terminology and notation we use, summarize our contribution and describe the paper organization.

1.1 The formulation of position-dependent knapsack problems

In this paper, we consider three Position-Dependent Knapsack (PDK) problems with position-dependent weights or profits of items. In the first problem, denoted by P1, the weight of item j, \(1\le j \le n,\) is position-dependent and described by the function \(w_j(r)=w_jf_w(r),\) where \(r\in \{1,2,\dots ,n\}\) denotes the number of the position of an item in the sequence of items packed in the knapsack so far, and \(f_w(r)\) is a function of r which can be computed in constant time for any \(r=1,2,\dots ,n.\) The profit of item j is fixed (i.e., position-independent) and equals \(p_j(r)=p_j\). We assume that the functions \(f_w(r)\) are either monotonically non-decreasing or non-increasing.

In the second problem, denoted by P2, the weight of item j is fixed (i.e., position-independent), \(w_j(r)=w_j\) for \(1\le j \le n,\) while its profit is position-dependent and described by the function \(p_j(r)=p_jf_p(r),\) where j, r and \(f_p(r)\) are defined similarly as above. Analogously to problem P1, we assume that the functions \(f_p(r)\) are either monotonically non-decreasing or monotonically non-increasing.

Finally, in the third problem, denoted by P3, both the weights of items and their profits are position-dependent and described by functions \(w_j(r)\) and \(p_j(r)\) as above, where \(1\le j\le n\). We assume that either the functions \(f_w(r)\) or \(f_p(r)\) are monotonically non-increasing and consider two subproblems of P3, denoted by P3a and P3b, respectively. In problem P3a, the functions \(f_w(r)\) are monotonically non-increasing and the number of different profit values \(p_j\) is a constant. In problem P3b, the functions \(f_p(r)\) are monotonically non-increasing and the number of different weight values \(w_j\) is a constant. As for the SK problem, the aim is to find an ordered subset \(N^{\prime } \subseteq N\) such that the total weight of items from the subset does not exceed C, while the total profit is maximized. As generalizations of the SK problem, problems P1, P2 and P3 are weakly \(\mathcal{N}\mathcal{P}\)-hard, and we will prove that even problems P3a and P3b are weakly \(\mathcal{N}\mathcal{P}\)-hard.

1.2 Terminology and notation

Throughout the paper we apply the following terminology and notation. The values \(w_j(r)=w_jf_w(r)\) and \(p_j(r)=p_jf_p(r)\) are called position-dependent weights and position-dependent profits, respectively. A solution of any of the problems P1, P2, P3 can be considered as a sequence \(\sigma =(i_1,i_2,\ldots , i_{\nu })\), \(\nu \le n\), of a subset of the items in N. The weight of \(\sigma \) is given as

$$\begin{aligned} W(\sigma )= \sum _{j=1}^{\nu }w_{i_j}(j). \end{aligned}$$

The sequence \(\sigma \) is feasible if \(W(\sigma )\le C\). The profit of \(\sigma \) is then given as

$$\begin{aligned} P(\sigma )= \sum _{j=1}^{\nu }p_{i_j}(j). \end{aligned}$$
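The two definitions above can be evaluated directly in code (an illustrative helper; the item indices, weights, profits and functions in the example are toy data):

```python
def evaluate(seq, w, p, f_w, f_p):
    """Return (W(sigma), P(sigma)) for an ordered sequence of 0-based item
    indices; the position r of an item in the sequence counts from 1."""
    W = sum(w[i] * f_w(r) for r, i in enumerate(seq, start=1))
    P = sum(p[i] * f_p(r) for r, i in enumerate(seq, start=1))
    return W, P

# Example: position-dependent weights w_j(r) = w_j * r,
# position-independent profits (f_p(r) = 1), as in problem P1
W, P = evaluate([0, 1], w=[2, 3], p=[5, 4],
                f_w=lambda r: r, f_p=lambda r: 1)
print(W, P)  # -> 8 9   (W = 2*1 + 3*2, P = 5 + 4)
```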

An item \(j \in N\) dominates an item \(i \in N\) if \(p_j\ge p_i\) and \(w_j\le w_i\), and at least one of the two inequalities is strict. If in an instance only a constant number k of different profit values \(p_{1}, p_{2},\ldots , p_{k}\) are given, then items with profit \(p_j\) are said to be of profit type j, \(j =1,\ldots ,k\). Weight types j are defined analogously. The number of items of weight or profit type j is denoted by \(k_{j}\), \(\sum \nolimits _{j=1}^k k_j=n\). The optimal solution value of a problem instance I and the solution value computed by an approximation algorithm A are denoted as \(z^{\star }(I)\) and \(z^{A}(I)\), respectively. (If there is no ambiguity, the reference to the instance I is omitted and we write briefly \(z^{\star }\) and \(z^{A}\), respectively.) A lower (an upper) bound on the total profit of an optimal solution of an instance of any of the problems is denoted by LB (UB). The total sum of all profit values and the largest profit are denoted by \(P=\sum _j p_j\) and \(p_{\max }\), respectively. An algorithm A is called an \(\varepsilon \)-approximation scheme if for every input \(\varepsilon \in \; ]0,1[\) the inequality

$$\begin{aligned} z^{A}(I) \ge (1- \varepsilon ) z^{\star }(I) \end{aligned}$$

holds for all problem instances I. An \(\varepsilon \)–approximation scheme A is a polynomial time approximation scheme (PTAS) if its running time is polynomial in n. An \(\varepsilon \)–approximation scheme A is a fully polynomial time approximation scheme (FPTAS) if its running time is polynomial both in n and in \(\frac{1}{\varepsilon }\).

1.3 Our contribution and the organization of the paper

The contribution of the paper is three-fold. First, we prove properties of the three new knapsack problems with position-dependent item weights or profits defined in Sect. 1.1. Second, we propose dynamic programming algorithms for solving the problems exactly. Finally, we construct FPTASes for solving the problems approximately. A brief summary of our results is given in Table 1, where the symbols \(f_p(r)\nearrow \) and \(f_p(r)\searrow \) denote that function \(f_p(r)\) is monotonically non-decreasing and non-increasing, respectively, the symbol k(j) denotes the maximum index of an item as defined in Eq. (8), and \(U_z=n \max _{j=1,\ldots ,n} \left\{ p_j f_p(1)\right\} \).

Table 1 Summary of results for problems P1, P2, P3a and P3b

The remaining sections of the paper are organized as follows. Sections 2 and 3 contain dynamic programming algorithms with pseudo-polynomial running time and FPTASes for problems P1 and P2, respectively. In Sect. 4, we show that problems P3a and P3b are weakly \(\mathcal{N}\mathcal{P}\)-hard. Moreover, dynamic programming algorithms for problems P3a and P3b are given and FPTASes for both problems are presented. Finally, in Sect. 5, we conclude and give remarks on possible topics for future research on position-dependent knapsack problems.

2 Exact algorithm and approximation scheme for problem P1

Recall that in problem P1 we have \(p_j(r)=p_j\) and \(w_j(r)=w_jf_w(r)\) for \(1\le j\le n,\) where \(f_w(r)\) are either monotonically non-decreasing or non-increasing functions.

First, we show that for P1 the following auxiliary result holds.

Lemma 1

If function \(f_w(r)\) is monotonically non-decreasing (non-increasing), then there exists an optimal solution of problem P1 in which items are given in non-increasing (non-decreasing) order of weights \(w_j.\)

Proof

We proceed using a standard exchange argument. We assume that function \(f_w(r)\) is monotonically non-decreasing; the proof for a monotonically non-increasing function \(f_w(r)\) is analogous.

Consider an optimal solution to problem P1 which consists of a sequence \(\sigma =([1],[2],\dots ,[i_1],\dots ,[i_2],\dots ,[k])\) of \(k\le n\) items. Assume that there are two positions \(i_1\) and \(i_2\) in \(\sigma \) such that \(i_1 < i_2\) and the inequality \(w_{[i_1]} \le w_{[i_2]}\) is satisfied. Then the total weight of all items in \(\sigma \) equals

$$\begin{aligned} W(\sigma )=w_{[1]}f_w(1)+\dots +w_{[i_1]}f_w(i_1)+\dots +w_{[i_2]}f_w(i_2)+\dots +w_{[k]}f_w(k). \end{aligned}$$

Let us now consider the sequence \(\sigma ^{\prime }=([1],[2],\dots ,[i_2],\dots ,[i_1],\dots ,[k])\) in which the items on positions \(i_1\) and \(i_2\) have been swapped. Then

$$\begin{aligned} W(\sigma ^{\prime })=w_{[1]}f_w(1)+\dots +w_{[i_1]}f_w(i_2)+\dots +w_{[i_2]}f_w(i_1)+\dots +w_{[k]}f_w(k). \end{aligned}$$

Since sequence \(\sigma ^{\prime }\) differs from \(\sigma \) only by positions of items \(i_1\) and \(i_2,\) we get

$$\begin{aligned} \begin{array}{lcl} W(\sigma )-W(\sigma ^{\prime }) &{} = &{} w_{[i_1]}f_w(i_1)+w_{[i_2]}f_w(i_2)-w_{[i_1]}f_w(i_2)-w_{[i_2]}f_w(i_1) \\ &{} = &{} \left( w_{[i_1]}-w_{[i_2]}\right) \left( f_w(i_1)-f_w(i_2)\right) .\\ \end{array} \end{aligned}$$
(4)

Because function \(f_w(r)\) is monotonically non-decreasing, the difference \(f_w(i_1)-f_w(i_2)\) in Eq. (4) is non-positive, which implies that \(W(\sigma )-W(\sigma ^{\prime }) \ge 0\) if \(w_{[i_1]} \le w_{[i_2]}.\) Hence, sequence \(\sigma ^{\prime }\) is not worse than \(\sigma \) when \(w_{[i_1]} \le w_{[i_2]}.\) By successively exchanging all other pairs of items whose weights violate the non-increasing order, we obtain a sequence as required. \(\square \)

Lemma 1 shows that in an optimal solution for problem P1 a certain order on the items is required. After realizing this, it is easy to solve problem P1 in pseudo-polynomial time by applying a dynamic programming algorithm. The algorithm runs analogously to the dynamic programming algorithm for the SK problem with an additional dimension for the number of items in the knapsack.

If \(f_w(r)\) is monotonically non-decreasing, we assume that the items are sorted in non-increasing order of weights, i.e. \(w_1\ge w_2\ge \ldots \ge w_n\). In the case when \(f_w(r)\) is monotonically non-increasing, we assume that items are sorted in non-decreasing order of weights, i.e. \(w_1\le w_2\le \ldots \le w_n\). By Lemma 1, it is optimal to assign the items to the knapsack in the order \(1,2,\ldots ,n\).

Let \(y_j(g,r)\) denote the minimal weight of a subset of the set \(\{1,\dots ,j\},\) with cardinality r and total profit equal to g.

Problem P1 can be solved by the following dynamic programming algorithm which we will call Algorithm DP1.

First, Algorithm DP1 initializes the values \(y_j(g,0)\) as follows:

$$\begin{aligned} \begin{array}{lcl} y_j(0,0) &{} = &{} 0, \quad j=0,\dots ,n, \\ y_j(g,0) &{} = &{} +\infty , \quad g=1,\dots ,UB,\; j=0,\dots ,n, \end{array} \end{aligned}$$

where UB is an upper bound value defined below.

Assume that the values \(y_{j-1}(g,r)\) are already computed for \(g=0,\dots ,UB\) and \(r=0,\dots ,j-1,\) for some \(1\le j \le n\). Algorithm DP1 computes the values \(y_j(g,r)\) for \(g=0,\dots ,UB\) and \(r=1,\dots ,j\) as follows:

$$\begin{aligned} y_j(g,r)= {\left\{ \begin{array}{ll} y_{j-1}(g,r), &{} \textrm{if}\;\; g<p_j \\ \min \left\{ y_{j-1}(g,r),y_{j-1}(g-p_j,r-1)+w_jf_w(r)\right\} , &{} \textrm{if}\;\; g \ge p_j\\ \end{array}\right. } \end{aligned}$$
(5)

for \(j=1,\dots ,n.\)

Finally, it computes the value of an optimal solution as follows:

$$\begin{aligned} \max \left\{ g: y_n(g,r)\le C, r=1,\dots ,n\right\} . \end{aligned}$$

The running time of Algorithm DP1 is \(O(n^2UB).\) As an upper bound value we can choose \(UB=np_{\max }\) or \(UB=P=\sum _j p_j\). Hence, taking the latter bound, we obtain the following result.
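A compact illustrative implementation of Algorithm DP1, assuming positive integer profits and items pre-sorted as required by Lemma 1 (the array-based table mirrors recursion (5); this is a sketch, not a tuned implementation):

```python
import math

def dp1(p, w, f_w, C):
    """Sketch of Algorithm DP1. Items must be pre-sorted per Lemma 1
    (non-increasing weights if f_w is non-decreasing, and vice versa);
    profits are assumed to be positive integers."""
    n = len(p)
    UB = sum(p)  # upper bound UB = P on the total profit
    # y[g][r] = minimal total weight of r packed items with total profit g
    y = [[math.inf] * (n + 1) for _ in range(UB + 1)]
    y[0][0] = 0.0
    for j in range(n):
        # iterate g downwards so that y[g - p[j]][r - 1] still holds the
        # value from the previous item (0/1 semantics)
        for g in range(UB, p[j] - 1, -1):
            for r in range(1, n + 1):
                cand = y[g - p[j]][r - 1] + w[j] * f_w(r)
                if cand < y[g][r]:
                    y[g][r] = cand
    return max(g for g in range(UB + 1) if min(y[g]) <= C)

# f_w(r) = r is non-decreasing, so items are sorted by non-increasing weight
print(dp1([5, 4], [3, 2], lambda r: r, 7))  # -> 9: pack both, W = 3*1 + 2*2
```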

Theorem 1

Algorithm DP1 solves problem P1 in \(O(n^2P)\) time.

Algorithm DP1 can be easily transformed into an FPTAS by a standard scaling of the profit values \(p_j\) which is outlined in Kellerer et al. (2004, Section 2.6). For the sake of completeness, we will describe this procedure. First, we run DP1 for the instance with scaled profit values \({\tilde{p}}_j\) such that \({\tilde{p}}_j:= \left\lfloor \frac{p_j}{K} \right\rfloor \) for a given constant K. This yields a solution set \({\tilde{X}}\) for the scaled items which will usually be different from the original optimal solution set \(X^{\star }\). Evaluating the original profits of item set \({\tilde{X}}\) yields the approximate solution value \(z^{A}\). The difference between \(z^{A}\) and the optimal solution value can be bounded as follows:

$$\begin{aligned} \begin{array}{l l} z^{A} = \sum _{j \in {\tilde{X}}} p_j &{} \ge \sum _{j \in {\tilde{X}}} K \left\lfloor \frac{p_j}{K} \right\rfloor \ge \sum _{j \in X^{\star }} K \left\lfloor \frac{p_j}{K} \right\rfloor \\ &{} \ge \sum _{j \in X^{\star }} K \left( \frac{p_j}{K} -1 \right) \ge z^{\star } - nK. \end{array} \end{aligned}$$

Let LB denote a lower bound on the optimal solution value \(z^{\star }\). To get the desired performance guarantee of \(1-\varepsilon \), we have to choose K such that

$$\begin{aligned} K = \frac{\varepsilon LB}{n}. \end{aligned}$$
(6)

Let UB1 denote an upper bound for the optimal profit value of the scaled instance. Then, we get a running time for the FPTAS which is in

$$\begin{aligned} O(n^2\,UB1) = O\left( n^2\,\frac{UB}{K}\right) = O\left( \frac{n^3}{\varepsilon }\frac{UB}{LB}\right) . \end{aligned}$$
(7)

Simple lower and upper bounds for \(z^{\star }\) are \(LB = p_{\max }\) and \(UB= np_{\max }\), respectively. Since \(\frac{UB}{LB} =n\), it yields an overall running time of \(O(\frac{n^4}{\varepsilon })\). Hence, we have proven the following result.

Theorem 2

There exists an FPTAS for problem P1 running in \(O\left( \frac{n^4}{\varepsilon }\right) \) time.
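The scaling procedure behind Theorem 2 can be sketched as follows. For brevity, the sketch replaces the \(O(n^2\,UB)\) array table with a state dictionary that also records the packed item set, so that the original profits of the scaled-optimal set can be re-evaluated; all names here are ours and illustrative:

```python
def dp1_sets(p, w, f_w, C):
    """Profit-indexed DP with solution recovery. Items pre-sorted per
    Lemma 1. Returns (max profit, tuple of packed item indices)."""
    # state (total profit g, count r) -> (min weight, packed item tuple)
    states = {(0, 0): (0.0, ())}
    for j in range(len(p)):
        new = dict(states)
        for (g, r), (wt, items) in states.items():
            key = (g + p[j], r + 1)          # item j packed on position r+1
            wt2 = wt + w[j] * f_w(r + 1)
            if key not in new or wt2 < new[key][0]:
                new[key] = (wt2, items + (j,))
        states = new
    return max(((g, items) for (g, r), (wt, items) in states.items()
                if wt <= C), key=lambda t: t[0])

def fptas_p1(p, w, f_w, C, eps):
    """FPTAS sketch: scale profits by K = eps*LB/n with LB = p_max, solve
    the scaled instance exactly, return the original profit of the set."""
    K = eps * max(p) / len(p)
    p_scaled = [int(pj // K) for pj in p]
    _, items = dp1_sets(p_scaled, w, f_w, C)   # feasibility uses true weights
    return sum(p[i] for i in items)            # z^A >= (1 - eps) * z*

# eps = 0.5, toy instance with f_w(r) = 1 (plain SK)
print(fptas_p1([6, 5, 4], [3, 2, 2], lambda r: 1, 4, 0.5))  # -> 9
```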

3 Exact algorithm and approximation scheme for problem P2

In problem P2, we have \(p_j(r)=p_jf_p(r)\) and \(w_j(r)=w_j\) for \(1\le j\le n\). Below, we present a dynamic programming algorithm and an FPTAS for problem P2, applying an approach similar to that in Sect. 2. Since we assume that function \(f_p(r)\) is monotonic, the following result can be proven analogously to Lemma 1.

Lemma 2

If function \(f_p(r)\) is monotonically non-decreasing (non-increasing), then there exists an optimal solution of problem P2 in which items are given in non-decreasing (non-increasing) order of profits \(p_j\).

Hence, we may assume that if \(f_p(r)\) is monotonically non-increasing, items are sorted in non-increasing order of profits, i.e. \(p_1\ge p_2\ge \ldots \ge p_n\). Similarly, if \(f_p(r)\) is monotonically non-decreasing, we may assume that items are sorted in non-decreasing order of profits, i.e. \(p_1\le p_2\le \ldots \le p_n\). The corresponding dynamic program DP2 is equivalent to the dynamic program DP1 and the dynamic programming recursion is similar to Eq. (5):

$$\begin{aligned} y_j(g,r)= {\left\{ \begin{array}{ll} y_{j-1}(g,r), &{} \textrm{if}\;\; g<p_jf_p(r) \\ \min \left\{ y_{j-1}(g,r),y_{j-1}(g-p_jf_p(r),r-1)+w_j\right\} , &{} \textrm{if}\;\; g \ge p_jf_p(r)\\ \end{array}\right. } \end{aligned}$$

for \(g=0,\dots ,UB,\) \(j=1,\dots ,n.\)

The running time of DP2 is again in \(O(n^2UB)\). Lower bounds LB and upper bounds UB for the optimal solution value \(z^{\star }\) can be computed as follows. If \(f_p(r)\) is a non-increasing function, then we may choose \(LB= p_{\max }f_p(1)\) and \(UB = n p_{\max }f_p(1)\). If \(f_p(r)\) is a non-decreasing function, find for each item j the maximum position k(j) such that it can be put into the knapsack together with the other \(k(j)-1\) items of smallest weight. Let \(W_{k-1}(j)\) denote the index set of the \(k-1\) smallest weights in the set \(\{w_1,\ldots ,w_n\}\setminus \{w_j\}\). Then, we have

$$\begin{aligned} k(j) = \underset{k=1,\ldots ,n}{\arg \,\max }\left\{ w_j+\sum _{i\in W_{k-1}(j)} w_i\le C\right\} . \end{aligned}$$
(8)

Consequently,

$$\begin{aligned} LB= \max _{j=1,\ldots ,n} \left\{ p_jf_p(k(j))\right\} \end{aligned}$$

and \(UB=n\,LB\) serve as lower and upper bounds for \(z^{\star }\), respectively. Hence, the following result holds.
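The bound k(j) of Eq. (8) can be computed greedily, since the \(k-1\) smallest remaining weights always form a prefix of the sorted weight list (an illustrative sketch; the convention k(j) = 0 for an item that does not fit alone is our added assumption, as such items can simply be discarded):

```python
def max_position(j, w, C):
    """Return k(j) from Eq. (8): the largest k such that item j fits into
    the knapsack together with the k-1 smallest other weights; 0 if even
    item j alone does not fit (hypothetical edge-case convention)."""
    total = w[j]
    if total > C:
        return 0
    k = 1
    for wi in sorted(w[:j] + w[j+1:]):  # remaining weights, smallest first
        if total + wi > C:
            break
        total += wi
        k += 1
    return k

print([max_position(j, [2, 3, 5], 7) for j in range(3)])  # -> [2, 2, 2]
```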

Theorem 3

(i) If function \(f_p(r)\) is monotonically non-increasing, Algorithm DP2 solves problem P2 in \(O(n^3 p_{\max }f_p(1))\) time.

(ii) If function \(f_p(r)\) is monotonically non-decreasing, Algorithm DP2 solves problem P2 in \(O(n^3\max \limits _{j=1,\ldots ,n} \left\{ p_jf_p(k(j))\right\} )\) time.

Proceeding in a similar manner as for problem P1 and setting \(K= \frac{\varepsilon LB}{n} \) as in Eq. (6), we get an FPTAS for problem P2. Since in both cases \(\frac{UB}{LB} =n\), Eq. (7) implies that the running time of the FPTAS is \(O(\frac{n^4}{\varepsilon })\). Hence, we have proven the following result.

Theorem 4

There exists an FPTAS for problem P2 running in \(O(\frac{n^4}{\varepsilon })\) time.

4 Exact algorithms and approximation schemes for problems P3a and P3b

In problem P3, we have \(p_j(r)=p_jf_p(r)\) and \(w_j(r)=w_jf_w(r)\) for \(1\le j\le n\). Moreover, we assume that there is only a constant number k of different profit values, denoted as \(p_1, p_2,\ldots , p_k\), in problem P3a, and k different weight values, denoted as \(w_1, w_2,\ldots ,w_k\), in problem P3b. In the following, we assume that both \(f_p(r)\) and \(f_w(r)\) are non-increasing.

Obviously, problem P3 is weakly \(\mathcal{N}\mathcal{P}\)-hard. At first glance, it may seem that the assumption that there is only a constant number of different profit values or weight values simplifies problems P3a and P3b, respectively. Note that the SK problem with a constant number of different profit values can be solved in polynomial time, since we may guess for each profit type j the number \(\ell _j\) of items of this type in the knapsack. For each \(\ell _j\), \(j=1,\ldots , k\), we put the \(\ell _j\) items of smallest weight into the knapsack. This approach gives a running time of \(O(n^k)\) for the problem. However, for problems P3a and P3b the situation is different. In fact, we will show that both of these problems are weakly \(\mathcal{N}\mathcal{P}\)-hard by a reduction from the famous Partition problem.

Partition: Given t integers \(e_{j}\) such that \(\sum _{j=1}^{t}e_{j}=2E\), does there exist a partition of the index set \(T=\{1,2,\ldots ,t\}\) into two subsets \(T_{1}\) and \(T_{2}\) such that \(\sum _{j\in T_{1}}e_{j}=\sum _{j\in T_{2}}e_{j}=E\)?

We assume, without any loss of generality, that elements \(e_j\), \(j=1,\ldots ,t,\) of an instance of the Partition problem are ordered non-increasingly. (Otherwise, we can order them accordingly in \(O(t\log t)\) time.)

It is well known that Partition is weakly \(\mathcal{N}\mathcal{P}\)-complete (see, e.g., Garey and Johnson (1979)).

Theorem 5

Problems P3a and P3b are weakly \(\mathcal{N}\mathcal{P}\)-hard even if \(f_p(r) = f_w(r)\) for all positions r, if profits are equal to weights, and if there are only two different profit and weight values.

Proof

Given an arbitrary instance of Partition, define the following instance I of problem P3a (or P3b) with \(n=2t\) items. Items \(1,\ldots ,t\) have profit 1 and weight 1, and items \(t+1,\ldots ,2t\) are dummy items with profit 0 and weight 0. The capacity of the knapsack is \(C= E\). The functions \(f_p(r)\) and \(f_w(r)\) are defined as

$$\begin{aligned} f_p(r)= f_w(r) = e_r,&\quad&r=1,\ldots , t,\\ f_p(r)= f_w(r) = 0,&\quad&r=t+1,\ldots , 2t. \end{aligned}$$

To prove Theorem 5, we show that for the constructed instance I, there is a sequence of items \(\sigma \) with \(W(\sigma )\le E\) and \(P(\sigma )\ge E\) if and only if Partition has a solution.

Let Partition have a solution, and let \(T_{1}\) and \(T_2\) be the corresponding index sets. We put an item of weight 1 on each position \(r\in \{1,\ldots ,t\}\) with \(r\in T_1\). The rest of the first t positions is filled with dummy items. The remaining items are not included in the knapsack. Then, we get for the obtained sequence \(\sigma \) that

$$\begin{aligned} P(\sigma )= W(\sigma )= \sum _{r\in T_1}f_p(r) = \sum _{r\in T_1}e_r = E. \end{aligned}$$

Suppose now that there is a sequence \(\sigma \) of at most 2t items with \(W(\sigma )\le E\) and \(P(\sigma )\ge E\). Only the items with weight and profit 1 contribute to the total weight and the total profit. Let \(S_1\) denote the subset of the first t positions on which items of weight 1 are placed. Since \(W(\sigma )=P(\sigma )\) by construction, both inequalities must hold with equality, and we get

$$\begin{aligned} W(\sigma )= P(\sigma ) = \sum _{r\in S_1} f_p(r) = E. \end{aligned}$$

The values of \(f_p(r)\) which correspond to the positions \(r\in S_1\) establish a solution of Partition. \(\square \)
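The construction in the proof can be replayed on a small yes-instance of Partition (illustrative code; the function and variable names are ours):

```python
def reduction_instance(e):
    """Build instance I of Theorem 5 from Partition elements e (assumed
    sorted non-increasingly): t unit items, t dummy items, C = E, and
    f_p = f_w given by the elements on the first t positions."""
    t, E = len(e), sum(e) // 2
    p = [1] * t + [0] * t
    w = [1] * t + [0] * t
    f = lambda r: e[r - 1] if r <= t else 0
    return p, w, E, f

# Partition instance with E = 4; T1 = {1, 4} is a solution: e_1 + e_4 = 4
p, w, C, f = reduction_instance([3, 2, 2, 1])
positions_T1 = {1, 4}
# unit items on the positions in T1, dummy items on the remaining positions
W = sum(f(r) for r in positions_T1)   # weight contributed by unit items
print(W == C)  # -> True: W(sigma) = P(sigma) = E
```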

In Sect. 1.2, the notion of item dominance was introduced. For the position-dependent knapsack problem, the dominance of two items means that a dominated item is packed into the knapsack only if its dominating items are packed as well. This also holds for problem P3, as stated in the following lemma.

Lemma 3

Let in an instance of problem P3 an item \(i \in N\) be dominated by an item \(j \in N\). Then there is an optimal solution with minimum weight with the following property: if item i is packed into the knapsack, then also item j is packed into the knapsack.

Proof

We proceed by a simple exchange argument. Assume that item \(i \in N\) is packed into the knapsack, but item \(j \in N\) is not. Then, remove item i from the knapsack and put item j at its position. Since the positions of the other items do not change, we obtain a solution with total weight not larger and total profit not smaller than before. Because at least one of the two inequalities in the dominance relation is strict, the exchange either strictly increases the total profit or strictly decreases the total weight, which shows the assertion. \(\square \)

Under certain conditions the aforementioned domination notion applies not only to items but also to their sequence: dominating items are packed earlier than the corresponding dominated items. This double dominance is formulated in the following auxiliary result.

Lemma 4

Let \(N^{\prime }\subseteq N\) be a subset of items of an instance of problem P3. Assume that the following two conditions hold:

(i) \(f_w\) is monotonically non-increasing or all weights in \(N^{\prime }\) are equal.

(ii) \(f_p\) is monotonically non-increasing or all profits in \(N^{\prime }\) are equal.

Let an item \(i \in N^{\prime }\) be dominated by an item \(j \in N^{\prime }\). Then there is an optimal solution to the instance such that if both items are packed into the knapsack, then item j is packed earlier than item i.

Proof

If item i is packed before item j in an optimal sequence \(\sigma \), i.e. item i is on position \(i_1\) and item j is on position \(i_2\) with \(i_1<i_2\), then exchange the positions of i and j to get a sequence \(\sigma ^{\prime }\). Since the other items do not change their positions, we get as in Eq. (4) that

$$\begin{aligned} W(\sigma )-W(\sigma ^{\prime }) = \left( w_{i}-w_{j}\right) \left( f_w(i_1)-f_w(i_2)\right) . \end{aligned}$$

By the dominance of items i and j and condition (i), both factors on the right side are non-negative and we get that \(W(\sigma )-W(\sigma ^{\prime })\ge 0\). This means that \(\sigma ^{\prime }\) gives a feasible sequence. The profit difference of the two sequences is equal to

$$\begin{aligned} P(\sigma )-P(\sigma ^{\prime }) = \left( p_{i}-p_{j}\right) \left( f_p(i_1)-f_p(i_2)\right) . \end{aligned}$$

By the dominance of items i and j and condition (ii), we get that \(P(\sigma )-P(\sigma ^{\prime })\le 0\). This means that the total profit of \(\sigma ^{\prime }\) is not smaller than the total profit of \(\sigma \). Hence, the lemma holds. \(\square \)

Corollary 1

Let be given an instance of P3 such that both \(f_p\) and \(f_w\) are monotonically non-increasing. If profits and weights of the items are oppositely ordered such that \(p_1\ge p_2\ge \ldots \ge p_n\) and \(w_1\le w_2\le \ldots \le w_n\), then problem P3 is solved optimally by assigning the items to the knapsack in increasing order of indices.

4.1 Problem P3a

We are now ready to present a dynamic programming algorithm for problem P3a. By Lemmas 3 and 4, sorting the items of each profit type in non-decreasing order of weights yields a dominance relation among them. Hence, if the optimal solution contains \(\ell _j\) items of profit type j, we choose the \(\ell _j\) items of smallest weight of this profit type, and they are assigned in non-decreasing order of weights.

Recall that the number of items of weight or profit type j is denoted by \(k_{j}\). Let

$$\begin{aligned} \ell =\sum \limits _{j=1}^k\ell _j, \quad \vec {\ell } =(\ell _1,\ell _2,\ldots ,\ell _k), \quad \vec {\ell }_{-j} =(\ell _1,\ldots ,\ell _{j-1},\ell _j-1,\ell _{j+1}\ldots ,\ell _k). \end{aligned}$$
(9)

Let \(y(g;\ell _1,\ldots ,\ell _k)\) denote the minimal weight of a set of \(\ell _j\) items of profit type j, \(1\le j\le k\), with cardinality \(\ell _j\le k_j\), which has the total profit equal to g. Then, problem P3a can be solved by the following dynamic programming algorithm, which we will call Algorithm DP3a.

First, Algorithm DP3a initializes functions \(y(g;0,\ldots ,0)\) as follows:

$$\begin{aligned} y(g; 0,\ldots ,0) = \left\{ \begin{array}{cl} 0 &{} g=0,\\ \infty &{} g=1,\ldots ,UB. \end{array}\right. \end{aligned}$$

Assume that \(y(g^{\prime };\ell _1^{\prime },\ldots ,\ell _k^{\prime })\) is computed for \(g^{\prime }=0,\dots ,UB\) and \(\ell _1^{\prime }\le \ell _1,\ldots ,\ell _k^{\prime }\le \ell _k\) with \(\sum \limits _{j=1}^k\ell _j^{\prime }\le \ell -1\). Moreover, let the weights of the \(k_j\) items of profit type j be denoted by \(w_1^j,\ldots ,w_{k_j}^j\) with \(w_1^j\le w_2^j\le \ldots \le w_{k_j}^j\). Algorithm DP3a computes the values of the functions \(y(g; \vec {\ell })\) as follows. If there is no \(j\in \{1,\ldots ,k\}\) with \(\ell _j\ge 1\) and \(g-p_jf_p(\ell )\ge 0\), set \(y(g;\vec {\ell }) = \infty \). Otherwise,

$$\begin{aligned} y(g;\vec {\ell }) = \min _{j=1,\ldots ,k}\left\{ y( g-p_jf_p(\ell ); \vec {\ell }_{-j}) + w_{\ell _j}^jf_w(\ell ): \ell _j\ge 1, g-p_jf_p(\ell )\ge 0 \right\} \end{aligned}$$

for \(g=0,\ldots , UB\). Finally, it computes the value of an optimal solution as follows:

$$\begin{aligned} \max \left\{ g: y(g;\vec {\ell } )\le C\right\} \end{aligned}$$

with \(\ell _j\le k_j, j=1,\ldots ,k\). The running time of Algorithm DP3a is in \(O(n^kUB).\)
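A compact sketch of Algorithm DP3a using a state dictionary instead of a full table (illustrative only; each profit type holds its item weights sorted non-decreasingly, and the cardinality of a state equals the position of the last packed item):

```python
def dp3a(profits, weights_by_type, f_p, f_w, C):
    """Sketch of Algorithm DP3a. profits[j] is the profit value of type j;
    weights_by_type[j] lists that type's item weights, sorted
    non-decreasingly. Returns the maximum total profit achievable."""
    k = len(profits)
    counts = [len(ws) for ws in weights_by_type]
    # y[(g, l_vec)] = minimal total weight packing l_vec[j] items of type j
    # with total profit g; l = sum(l_vec) is the position of the last item
    y = {(0, (0,) * k): 0.0}
    frontier = y
    for l in range(1, sum(counts) + 1):
        new = {}
        for (g, lv), wt in frontier.items():
            for j in range(k):
                if lv[j] < counts[j]:
                    lv2 = lv[:j] + (lv[j] + 1,) + lv[j+1:]
                    g2 = g + profits[j] * f_p(l)
                    wt2 = wt + weights_by_type[j][lv[j]] * f_w(l)
                    if (g2, lv2) not in new or wt2 < new[(g2, lv2)]:
                        new[(g2, lv2)] = wt2
        y.update(new)
        frontier = new
    return max(g for (g, lv), wt in y.items() if wt <= C)

# k = 2 profit types, constant f_p and f_w: plain SK with profit types
print(dp3a([6, 4], [[3], [2, 2]], lambda r: 1, lambda r: 1, 4))  # -> 8
```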

Analogously to Eq. (8), the value k(j) shall denote for each item j the maximum position such that it can be put into the knapsack together with the other \(k(j)-1\) items of smallest weight. Then,

$$\begin{aligned} LB= \max _{j=1,\ldots ,n}\,\max _{r=1,\ldots ,k(j)} \left\{ p_jf_p(r)\right\} \end{aligned}$$

and \(UB=n\,LB\) serve as a lower bound and an upper bound for \(z^{\star }\), respectively.

Note that if \(f_p(r)\) is a non-increasing function, we may choose \(LB= p_{\max }f_p(1)\) and \(UB = n p_{\max }f_p(1)\). Summarizing, we have the following result.

Theorem 6

Algorithm DP3a solves problem P3a in \(O(n^{k+1}\max \limits _{j=1,\ldots ,n}\max \limits _{r=1,\ldots ,k(j)} \left\{ p_jf_p(r)\right\} )\) time.

Note that a correctness proof of Theorem 6 can be done by standard induction arguments; since it is straightforward, we omit it. Algorithm DP3a can be transformed into an FPTAS as in Sect. 3, with the difference that the running time bound of Eq. (7) becomes \(O\left( \frac{n^{k+1}}{\varepsilon }\frac{UB}{LB}\right) \). Since \(\frac{UB}{LB}=n\), the overall running time of the FPTAS is in \(O(\frac{n^{k+2}}{\varepsilon })\). This implies the following result.

Theorem 7

There exists an FPTAS for problem P3a running in \(O(\frac{n^{k+2}}{\varepsilon })\) time.

4.2 Problem P3b

A dynamic program for problem P3b can be derived in a similar way, provided that there is only a constant number of different weight values. If the optimal solution contains \(\ell _j\) items of weight type j, we choose the \(\ell _j\) items of largest profit of this weight type, and they are assigned in non-increasing order of profits.

Let \(y(g;\ell _1,\ldots ,\ell _k)\) denote the maximum profit of a set of \(\ell _j\) items of weight type j, \(1\le j\le k\), with cardinality \(\ell _j\le k_j\), which has the total weight equal to g. Then, problem P3b can be solved optimally by the following dynamic programming algorithm, which we will call Algorithm DP3b.

First, Algorithm DP3b initializes the functions \(y(g;0,\ldots ,0)\) as follows:

$$\begin{aligned} y(g; 0,\ldots ,0) = \left\{ \begin{array}{cl} 0 &{} g=0,\\ -\infty &{} g=1,\ldots ,C. \end{array}\right. \end{aligned}$$

Let \(\ell \), \(\vec {\ell }\) and \(\vec {\ell }_{-j}\) be as defined in Eq. (9), and let the profits of the \(k_j\) items of weight type j be denoted as \(p_1^j,\ldots ,p_{k_j}^j\) with \(p_1^j\ge p_2^j\ge \ldots \ge p_{k_j}^j\). Algorithm DP3b computes for \(g=0,\dots ,C,\) \(j=1,\ldots ,k\) the values of functions \(y(g; \vec {\ell })\) as follows:

$$\begin{aligned} y(g;\vec {\ell }) = \max _{j=1,\ldots ,k}\left\{ y( g-w_jf_w(\ell ); \vec {\ell }_{-j}) + p_{\ell _j}^jf_p(\ell ): \ell _j\ge 1, g \ge w_jf_w(\ell ) \right\} \end{aligned}$$

for all \(g=0,\ldots , C\) (we adopt the convention that the maximum over an empty set equals \(-\infty \)). Finally, it computes the value of an optimal solution as follows:

$$\begin{aligned} \max _{g=0,\ldots ,C; \; \vec {\ell }:\; \ell _j\le k_j, j=1,\ldots ,k} \left\{ y(g;\vec {\ell })\right\} . \end{aligned}$$

The running time of Algorithm DP3b is in \(O(n^k C).\)
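For concreteness, the recurrence and the final maximization can be sketched as follows (a minimal Python sketch for small instances; it hypothetically assumes that every position-dependent weight \(w_jf_w(\ell )\) is integral, so that it can index the weight dimension directly):

```python
from itertools import product

def dp3b(weights, profits, f_w, f_p, C):
    """Sketch of Algorithm DP3b (exact pseudo-polynomial DP).

    weights[j]  -- weight w_j of weight type j, j = 0..k-1
    profits[j]  -- profits of the k_j items of type j, sorted non-increasingly
    f_w, f_p    -- position-dependent multipliers (callables on l = 1..n);
                   we assume w_j * f_w(l) is always integral
    C           -- knapsack capacity

    y[vec][g] is the maximum profit of a solution with vec[j] items of
    type j whose total weight is exactly g, as in the recurrence above.
    """
    k = len(weights)
    NEG = float("-inf")
    # boundary condition: y(0; 0,...,0) = 0, y(g; 0,...,0) = -inf otherwise
    y = {(0,) * k: [0] + [NEG] * C}
    best = 0
    # enumerate count vectors (l_1,...,l_k), l_j <= k_j, by increasing l
    for vec in sorted(product(*(range(len(p) + 1) for p in profits)), key=sum):
        l = sum(vec)
        if l == 0:
            continue
        row = [NEG] * (C + 1)
        for j in range(k):
            if vec[j] == 0:
                continue
            prev = list(vec)
            prev[j] -= 1
            w = weights[j] * f_w(l)               # position-dependent weight
            p = profits[j][vec[j] - 1] * f_p(l)   # position-dependent profit
            prow = y[tuple(prev)]
            for g in range(w, C + 1):
                if prow[g - w] > NEG:
                    row[g] = max(row[g], prow[g - w] + p)
        y[vec] = row
        best = max(best, max(row))   # final maximization over g and vec
    return best
```

With \(f_w\equiv f_p\equiv 1\) the sketch degenerates to a standard knapsack DP grouped by weight types.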

Theorem 8

Algorithm DP3b solves problem P3b in \(O(n^{k} C)\) time.

Note that Algorithm DP3b cannot be transformed into an FPTAS in a way similar to Algorithm DP3a in Sect. 4.1, since rounding the weight values in DP3b could create infeasible solutions.

We now turn to an alternative DP formulation, which we will use to obtain an FPTAS via a different technique, called K-approximation sets and functions.

Let \(z_{\ell ,\ell _1,\ldots ,\ell _k}(g)\) be the maximum profit of a set of (exactly) \(\ell \) items, of which \(\ell _j\) items are of weight type \(j, \; j=1,\ldots ,k\), which has the total weight of at most g units (recall that \(\ell =\sum \limits _{j=1}^k\ell _j\)). The boundary condition is

$$\begin{aligned} z_{0,\vec {0}}(g)=0, \; g=0,\ldots ,C. \end{aligned}$$
(10)

The DP recurrence takes the following form (as previously, we adopt the convention that the maximum over an empty set equals \(-\infty \)):

$$\begin{aligned} z_{\ell ,\vec {\ell }}(g)= \max _{j=1,\ldots ,k} \left\{ z_{\ell -1,\vec \ell _{-j}}(g-w_j f_w(\ell ))+ p^j_{\ell _j}f_p(\ell ): \ell _j \ge 1, \; g \ge w_j f_w(\ell )\right\} \end{aligned}$$
(11)

for all \(g=0,\ldots , C\). The value of an optimal solution is computed as follows:

$$\begin{aligned} \max _{\vec {\ell }:\; \ell _j\le k_j, j=1,\ldots ,k} \left\{ z_{\ell ,\vec {\ell }}(C)\right\} . \end{aligned}$$

Note that the alternative DP recursion (10)–(11) is identical to the preceding one, with the small difference that exactly g weight units are replaced by at most g weight units, which changes the boundary conditions. This ensures that the functions \(z_{\ell ,\vec {\ell }}(\cdot )\) are monotone non-decreasing, since the more space there is in the knapsack, the more profit we can extract from it. The DP formulation (10)–(11) is therefore a monotone DP. Monotone DPs are known to admit FPTASes whenever they satisfy some additional technical conditions (see, e.g., Halman et al. (2014), Alon and Halman (2021, 2022)). The reason we switch from the \(y(g;\vec {\ell })\) notation of Algorithm DP3b to the \(z_{\ell ,\vec {\ell }}(g)\) notation of DP recurrence (10)–(11) is to keep a clear correspondence with the notation used in Halman et al. (2014) and Alon and Halman (2021, 2022).
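To make the monotonicity tangible, the following self-contained sketch (again hypothetically assuming that each \(w_jf_w(\ell )\) is integral) computes the "at most g" tables of (10)–(11); every row it produces is non-decreasing in g, as claimed:

```python
from itertools import product

def monotone_z(weights, profits, f_w, f_p, C):
    """Sketch of the monotone DP (10)-(11): z[vec][g] is the maximum profit
    of a solution with vec[j] items of type j and total weight AT MOST g.
    Assumes (hypothetically) that w_j * f_w(l) is always integral."""
    k = len(weights)
    NEG = float("-inf")
    # boundary (10): z_{0,0}(g) = 0 for every g -- already non-decreasing
    z = {(0,) * k: [0] * (C + 1)}
    for vec in sorted(product(*(range(len(p) + 1) for p in profits)), key=sum):
        l = sum(vec)
        if l == 0:
            continue
        row = [NEG] * (C + 1)
        for j in range(k):
            if vec[j] == 0:
                continue
            prev = list(vec)
            prev[j] -= 1
            w = weights[j] * f_w(l)
            p = profits[j][vec[j] - 1] * f_p(l)
            prow = z[tuple(prev)]
            # prow is non-decreasing, and shifting/adding a constant and
            # taking maxima over j preserve monotonicity in g
            for g in range(w, C + 1):
                if prow[g - w] > NEG:
                    row[g] = max(row[g], prow[g - w] + p)
        z[vec] = row
    return z
```

Monotonicity (Condition A.1/B.1 below) can then be checked directly on the returned tables, e.g., `all(row[g] <= row[g + 1] for row in z.values() for g in range(C))`.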

In the remainder of this section, we apply the framework of Alon and Halman (2021) to derive an FPTAS for the monotone DP (10)–(11). For completeness of presentation, we summarize this framework in Appendix A.

In order to show that DP formulation (10)–(11) fits into the framework, we first show that it can be seen as a special case of DP formulation (12) in the following sense. (i) We set the level index t to be the index \(\ell \); therefore, the number of different levels is \(T = n\), i.e., the number of items that can potentially be placed in the knapsack. (ii) We set the other index i to be multi-dimensional and equal to the vector \(\vec {\ell }\); therefore, the value of m, i.e., the total number of indices i, is the number of combinations \(\ell _j\le k_j, j=1,\ldots , k\) for which \(\sum _{j=1}^k \ell _j=\ell \). (iii) We set the state variable \(I_{t,i}\) to be g, i.e., the remaining available space in the knapsack. (iv) For every level \(t=\ell \) and \(i=\vec {\ell }\), we set the additional information to be \(A_{\ell ,\vec {\ell }}(g)=\{j=1,\ldots ,k \;: \; \ell _j \ge 1, \; g \ge w_j f_w(\ell )\}\), which is the set of indices of items that can potentially be placed in the knapsack. (v) When considering level t in (12), instead of using all previously calculated \(\{z_{r,j}\}_{r<t}\), we use only \(z_{t-1,j}\). (vi) We set the boundary functions to be \(f_{0,\vec {0}} \equiv 0\). Thus, by (i)–(vi), we conclude that DP formulation (10)–(11) is indeed a special case of DP formulation (12).

Next, we shall set the ratio \(U_z\) of the upper and lower bounds on the values of the functions \(z_{\ell ,\vec {\ell }}(g)\) and the bound \(U_S\) on the cardinality of the state space. This can be done by setting, respectively,

$$\begin{aligned} U_z=n \max _{j=1,\ldots ,n} \left\{ p_j f_p(1)\right\} \end{aligned}$$

and

$$\begin{aligned} U_S=C+1. \end{aligned}$$

Finally, we shall show that DP formulation (10)–(11) satisfies Conditions A.1–A.4 as stated in Appendix A. Condition A.1 is satisfied, since by construction \(z_{\ell ,\vec {\ell }}(\cdot )\) is a monotone non-decreasing function. Condition A.2 is satisfied due to the values of \(U_z\) and \(U_S\). Condition A.3 is satisfied as a direct consequence of the calculus of K-approximation functions, specifically the linearity, composition, and maximization of approximation rules (see Proposition A.1(2, 4, 5) in Appendix A). Condition A.4(i) holds, since Eq. (11) consists of maximization over at most k arguments. Therefore, \(\tau _{{{\tilde{f}}}_{t,i}}=k\tau _{\tilde{z}_{r,j}}\).

Hence, applying Theorem A.1 (see Appendix A) with parameter value set to \(\tau _f=O(n^k)\), we get the following result.

Theorem 9

There exists an FPTAS for problem P3b that runs in \( O\left( \frac{n^{k+1}}{\varepsilon } \log \frac{n \log U_z}{\varepsilon } \log U_z \log C\right) \) time, where \(U_z=n \max _{j=1,\ldots , n} \left\{ p_j f_p(1)\right\} \).

Note that unlike the other FPTASes designed in this paper, whose running times depend only on n and \(\frac{1}{\varepsilon }\) and which are therefore strongly polynomial in the instance size (SFPTASes), the running time of the FPTAS in Theorem 9 depends polynomially on the binary encoding of the numbers in the problem instance and is thus not strongly polynomial. It is, however, possible to give an SFPTAS using the very recent framework of Alon and Halman (2022) for the design of SFPTASes for monotone DPs, based on the same DP recurrence (10)–(11). This SFPTAS framework requires that a monotone DP can be cast as (12) and satisfies Conditions B.1–B.6 stated in Appendix B.

Recall from the discussion above that DP formulation (10)–(11) is a special case of DP formulation (12). Before showing how DP formulation (10)–(11) satisfies Conditions B.1–B.6, we note that Condition B.2 is not satisfied directly: an upper bound on the value that \(z_{\ell ,\vec {\ell }}\) can achieve is

$$\begin{aligned} UB= n \max _{j=1,\ldots ,n} \left\{ p_j f_p(1)\right\} , \end{aligned}$$

and a lower bound on every non-zero feasible solution is \(\min _{j=1,\ldots ,n} \left\{ p_j f_p(1)\right\} .\) However, the ratio

$$\begin{aligned} \frac{n \max _{j=1,\ldots ,n} \left\{ p_j \right\} }{\min _{j=1,\ldots ,n} \left\{ p_j \right\} } \end{aligned}$$

is not guaranteed to be strongly polynomially bounded. Therefore, we apply the SFPTAS framework to a relaxed problem. In the relaxed problem, we drop every item i with \(p_i\le \frac{\varepsilon }{2n}\max _{j=1,\ldots ,n} \left\{ p_j\right\} .\) Since

$$\begin{aligned} LB = \max _{j=1,\ldots ,n} \left\{ p_j f_p(1) \right\} \end{aligned}$$

is a lower bound on the optimal solution value, and we give up a total profit of no more than \(\frac{\varepsilon }{2}LB\) (as there are n items and \(f_p(\cdot )\) is monotonically non-increasing), the optimal solution of the relaxed problem serves as an \(\frac{\varepsilon }{2}\)-approximation of the original one. Therefore, an SFPTAS that \(\frac{\varepsilon }{3}\)-approximates the relaxed problem will serve as an SFPTAS for the original problem.
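The item-dropping step of this relaxation can be sketched as follows (illustrative names; the \(f_p(1)\) factor cancels out of the threshold comparison, so the sketch filters the raw profits \(p_i\)):

```python
def relax(profits, eps):
    """Relaxation step described above: drop every item i with
    p_i <= (eps / (2n)) * max_j p_j.

    Every surviving profit then exceeds the threshold, so the ratio of the
    largest to the smallest surviving profit is below 2n / eps, making the
    bound U_z of Condition B.2 strongly polynomially bounded.
    """
    n = len(profits)
    threshold = eps * max(profits) / (2 * n)
    return [p for p in profits if p > threshold]
```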

We next show that DP (10)–(11) for the relaxed problem satisfies the aforementioned six conditions. As discussed above, the functions \(z_{\ell ,\vec {\ell }}(\cdot )\) are non-decreasing over \(\{0,\ldots ,C\}\), so Condition B.1 holds. As for Condition B.2, we set \(z^{\min }=\frac{\varepsilon }{2n}LB\), as explained above, to be a lower bound on every non-zero value of \(z_{\ell ,\vec {\ell }}(\cdot )\) and \(z^{\max }=UB\) to be an upper bound on \(z_{\ell ,\vec {\ell }}(\cdot )\), so that \(U_z=\frac{z^{\max }}{z^{\min }}=O\left( \frac{n^2}{\varepsilon }\right) \). Condition B.3 holds by the linearity, composition, and maximization of approximation rules (Proposition A.1(2,4,5)). Condition B.4 holds, since \(z_{\ell ,\vec {\ell }}\) can be calculated in \(\tau _{f_{\ell ,\vec {\ell }}}=O(1)\) time for every \(1\le \ell \le n\) and \(\vec {\ell }\), as a maximization over up to k former z functions. Therefore, all \(z_{\ell ,\vec {\ell }}(\cdot )\) functions can be evaluated (for a single value) in \(\tau _f=O(n^k)\) time. By the calculus of sets of change points (Proposition B.2(1,3,5)), the set of change points (Definition B.1, Appendix B) \(\mathcal {C}_{\bar{z}_{\ell ,\vec {\ell }}}\) can be calculated in \(\tau ^{\mathcal {C}}_{\ell ,\vec {\ell }}=O(|\mathcal {C}_{\tilde{z}_{\ell -1,\vec {\ell }_{-j}}}|)\) time, \(|\mathcal {C}_{\bar{z}_{\ell ,\vec {\ell }}}|=O(|\mathcal {C}_{\tilde{z}_{{\ell -1},\vec {\ell }_{-j}}}|)\) for \(1\le \ell \le n\), and \(|\mathcal {C}_{{\bar{z}}_{0,\vec {0}}}|=O(1)\), so \(\tau _{\mathcal {C}}=O(n^k)\). Therefore, Conditions B.5 and B.6(ii) hold with the parameter setting \(a=k\), so \(U_{{\mathcal {C}}}\), as defined in Appendix B, is in \(O\left( \frac{n \log \frac{n}{\varepsilon }}{\varepsilon }\right) \). Applying Theorem B.1, we obtain the following result, which closes the technical part of the paper.

Theorem 10

There exists an FPTAS for problem P3b that runs in \(O\left( \frac{n^{k+1}}{\varepsilon }\log ^2\frac{n\log \left( \frac{n}{\varepsilon }\right) }{\varepsilon }\log \left( \frac{n}{\varepsilon }\right) \right) \) time.

5 Conclusions

In this paper, we considered three Position-Dependent Knapsack (PDK) problems, which are new knapsack problems with variable, position-dependent profits or weights of items. We proposed pseudo-polynomial exact algorithms and fully polynomial-time approximation schemes for these problems. Unlike the SK problem, which has been researched comprehensively, the PDK problems leave many questions unanswered.

It would be interesting to understand the impact of monotonicity on our problems. For example, in Sect. 4 we assumed that the functions \(f_w\) and \(f_p\) are non-increasing for problems P3a and P3b, respectively. Can the problems be solved analogously under other monotonicity assumptions on these functions?

Another group of questions relates to problem P3. For example, it is unknown whether problem P3 is strongly \(\mathcal{N}\mathcal{P}\)-hard or even \(\mathcal{APX}\)-hard. Does there exist a constant-factor approximation algorithm, a PTAS, or even an FPTAS for problem P3?

Finally, the last group of topics for future research on the PDK problems concerns generalizations of our results to profit and weight functions other than \(p_j(r) = p_jf_p(r)\) and \(w_j(r) = w_jf_w(r)\).