Obtaining a Proportional Allocation by Deleting Items

We consider the following control problem on the fair allocation of indivisible goods. Given a set $I$ of items and a set of agents, each having a strict linear preference order over the items, we ask for a minimum subset of the items whose deletion guarantees the existence of a proportional allocation in the remaining instance; we call this problem Proportionality by Item Deletion (PID). Our main result is a polynomial-time algorithm that solves PID for three agents. By contrast, we prove that PID is computationally intractable when the number of agents is unbounded, even if the number $k$ of item deletions allowed is small, since the problem turns out to be W[3]-hard with respect to the parameter $k$. Additionally, we provide some tight lower and upper bounds on the complexity of PID when regarded as a function of $|I|$ and $k$.


Introduction
We consider a situation where a set I of indivisible items needs to be allocated to a set N of agents in a way that is perceived as fair. Unfortunately, a fair allocation may fail to exist in a given setting. In such situations, we might be interested in the question of how our instance can be modified in order to achieve a fair outcome. Naturally, we seek a modification that is as small as possible. This can be thought of as a control action carried out by a central agency whose task is to find a fair allocation. The computational study of such control problems was first proposed by Bartholdi, III et al. [5] for voting systems; our paper follows the work of Aziz et al. [4], who have recently initiated the systematic study of control problems in the area of fair division.
The idea of fairness can be formalized in various different ways such as proportionality, envy-freeness, or max-min fair share (see the book chapter by Bouveret et al. [6] for an introduction). Here we focus on proportionality, a notion originally defined in a model where agents use utility functions to represent their preferences over items. In that context, an allocation is called proportional if each agent obtains a set of items for which their utility is at least 1/|N | of their total utility of all items. One way to adapt this notion to a model with linear preferences (not using explicit utilities) is to look for an allocation that is proportional with respect to any choice of utility functions for the agents that is compatible with the given linear preferences. Aziz et al. [3] referred to this property as "necessary proportionality"; for simplicity, we use the shorter term "proportionality." For a survey of other possible notions of proportionality and fairness under linear preferences, we also refer to Aziz et al. [3].
We have several reasons for considering linear preferences. First, the most important advantage of this setting is the easier elicitation of agents' preferences. In many practical applications, especially with a large number of items, it is unrealistic to assume that agents are able to assign a meaningful cardinal value to each of the items. This may be due to a lack of information, e.g., when agents need to declare preferences over items about which they have incomplete knowledge, or to an unwillingness to associate a definite value with each item: in scenarios where the usefulness or virtue of an item cannot simply be measured by its monetary value (e.g., students ranking assignments, shared owners of a holiday home ranking time slots, heirs ranking family assets), people may find it much more convenient to express their preferences in an ordinal way, thus reducing their cognitive burden. Besides easier elicitation, it is also easier to visualize ordinal preferences than cardinal ones, which may be significant when we wish to elicit preferences from children or people with impaired cognitive abilities. Hence, ordinal preferences may be more useful in practical applications. From a technical point of view, this simpler model is also more tractable in a computational sense: under linear preferences, the existence of a proportional allocation can be decided in polynomial time [3], whereas the same question for cardinal utilities is NP-hard [20]. Since Lipton et al. [20] show the NP-hardness of the problem already for two agents, we cannot even hope for an FPT-algorithm with the number of agents as the parameter. Clearly, if already the existence of a proportional allocation is computationally hard to decide, then we have no hope of solving the corresponding control problem efficiently.
Control actions can take various forms. Aziz et al. [4] mention several possibilities: control by adding/deleting/replacing agents or items in the given instance, or by partitioning the set of agents or items. In this paper we concentrate only on control by item deletion, where the task is to find a subset of the items, as small as possible, whose removal from the instance guarantees the existence of a proportional allocation. In other words, we ask for the maximum number of items that can be allocated to the agents in a proportional way.

Related Work
We follow the research direction proposed by Aziz et al. [4] who initiated the systematic study of control problems in the area of fair division. As an example, Aziz et al. [4] consider the complexity of obtaining envy-freeness by adding or deleting items or agents, assuming linear preferences. They show that adding/deleting a minimum number of items to ensure envy-freeness can be done in polynomial time for two agents, while for three agents it is NP-hard even to decide if an envy-free allocation exists. As a consequence, they obtain NP-hardness also for the control problems where we want to ensure envy-freeness by adding/deleting items in case there are more than two agents, or by adding/deleting agents.
The problem of deleting a minimum number of items to obtain envy-freeness was first studied by Brams et al. [7] who gave a polynomial-time algorithm for the case of two agents. In a setting with cardinal utilities, Caragiannis et al. [8] propose a model where items can be donated (i.e., deleted) before allocating the rest to agents; they propose an algorithm that, after deleting a set of items, yields an allocation for the remaining items that is envy-free up to any goods, and whose Nash welfare value is at least half of the optimum. In the context of cake cutting, Segal-Halevi et al. [24] proposed the idea of distributing only a portion of the entire cake in order to obtain an envy-free allocation efficiently.
Looking at the topic in a broader sense, several papers have investigated possible ways to achieve fairness by certain types of control actions. A prominent example is hiding information from agents in order to facilitate a fair allocation. Chen and Shah [10] have found that if agents do not receive any information about the items allocated to others, then the expected amount of envy experienced by the agents typically decreases. Aziz et al. [2] investigated a model where the information that agents obtain on the allocation is based on a graph representing social contacts. Hosseini et al. [18] have proposed an algorithm that eliminates envy by withholding information about a small set of items. Halpern and Shah [16] have also examined the possibilities for overcoming envy by subsidies, where agents receive monetary compensation.
For the Hospitals/Residents with Couples problem, Nguyen and Vohra [22] considered yet another type of control action: they obtained stability by slightly perturbing the capacities of hospitals.

Our Contribution
We first consider the case where the number of agents is unbounded (see Sect. 3). We show that the problem of deciding whether there exist at most k items whose deletion allows for a proportional allocation is NP-complete, and that this problem is W[3]-hard with parameter k (see Theorem 2). This latter result shows that even if we allow only a few items to be deleted, we cannot expect an efficient algorithm, since the problem is not fixed-parameter tractable with respect to the parameter k (unless FPT = W[3], which is widely believed not to be the case).
Additionally, we provide tight upper and lower bounds on the complexity of the problem. In Theorem 3 we prove that the trivial |I|^{O(k)}-time algorithm, which checks in a brute force manner for each subset of I of size at most k whether it is a solution, is essentially optimal (under the widely accepted assumption that FPT ≠ W[1]). We provide another simple algorithm in Theorem 4 that has optimal running time, assuming the Exponential Time Hypothesis.
Next, we look at the possibilities of approximation in Sect. 3.1. First we focus on the approximation problem where the objective is to minimize the number k of item deletions, and we provide a strong inapproximability result in Theorem 5 by proving that not even an FPT-algorithm with parameter k can yield a ratio of |I|^{1−ε} for any constant ε > 0. Next, we examine the possibilities for maximizing the number of items that agents obtain under a proportional allocation. In Corollary 1, we observe that it is NP-hard to decide if there exists a set of 2|N| items which can be allocated to our set N of agents in a proportional way. In contrast, we propose a simple polynomial-time algorithm in Theorem 6 that allocates one item to each agent proportionally, whenever this is possible.
In Section 4, we turn our attention to the case with only three agents. In Theorem 7 we propose a polynomial-time algorithm for this case, which can be viewed as our main result. The presented algorithm is based on dynamic programming, but relies heavily on a non-trivial insight into the structure of solutions.
Finally, in Sect. 5 we briefly look at the variant of our problem where we are given a fixed allocation in advance, and the task is to decide whether we can make the given allocation proportional by deleting certain items. We prove that this problem is easy for two agents (Theorem 8), but becomes NP-hard for six agents (Theorem 9). The computational intractability persists even if the number of deletions is small, as evidenced by Theorem 10 that proves W[2]-hardness with parameter k for this variant.

Preliminaries and Definitions
In this section, we revisit some technical concepts and notions that we use in the remainder of the paper. We also give a formal definition of the problem of Proportionality by Item Deletion (PID).
An important observation is that a proportional allocation can only exist if the number of items is a multiple of |N|: considering utility functions that are (almost) uniform shows that each agent needs to obtain at least ⌈|I|/|N|⌉ items, which is possible for all agents simultaneously only if |N| divides |I|.
Control by deleting items Given a profile P = (N, I, L) and a subset U of items, we can define the preference profile P − U obtained by removing all items in U from I and from all preference lists in L. Let us define the Proportionality by Item Deletion (PID) problem as follows. The input of PID is a pair (P, k), where P = (N, I, L) is a preference profile and k is an integer. We call a set U ⊆ I of items a solution for P if its removal from I allows for proportionality, that is, if there exists a proportional allocation π : I \ U → N for P − U. The task in PID is to decide if there exists a solution for P of size at most k. Note that the number of items remaining after the removal of a solution must be a multiple of |N|.
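To make the definition concrete, the following Python sketch (our own illustration, not part of the paper) models a profile as a dictionary mapping each agent to her strict preference list, and implements the operation P − U together with the divisibility condition noted above. The profile and item names are hypothetical.

```python
# A profile maps each agent to her strict ranking (most preferred first).

def remove_items(profile, deleted):
    """Return the profile P - U: every item in `deleted` is dropped
    from all preference lists (and hence from the item set)."""
    return {agent: [item for item in ranking if item not in deleted]
            for agent, ranking in profile.items()}

def sizes_are_feasible(profile, deleted):
    """Necessary condition from the text: after removing a candidate
    solution, the number of remaining items must be a multiple of |N|."""
    remaining = remove_items(profile, deleted)
    n_agents = len(remaining)
    n_items = len(next(iter(remaining.values())))
    return n_items % n_agents == 0

profile = {
    "a": ["g1", "g2", "g3", "g4"],
    "b": ["g1", "g3", "g2", "g4"],
}
```

For this two-agent profile, deleting a single item can never help (three items cannot be split evenly), whereas deleting two items at least satisfies the size condition.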

Unbounded Number of Agents
The existence of a proportional allocation can be decided in polynomial time by checking whether a certain bipartite graph corresponding to our instance admits a perfect matching [23, Lemma 4]. Therefore, the Proportionality by Item Deletion problem is solvable in |I|^{O(k)} time by the brute force algorithm that checks for each subset of I of size at most k whether it is a solution. In terms of parameterized complexity, this means that PID parameterized by the solution size k is in XP, the class of parameterized problems that can be solved in polynomial time for any constant value of the parameter.
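The brute force XP algorithm can be sketched as follows. The proportionality test via bipartite matching follows our reading of the characterization used in [23] and [3]: we assume that an item is eligible for the ith slot of an agent exactly if the agent ranks it among her first (i − 1)·|N| + 1 remaining items, and we look for a perfect matching with a simple augmenting-path routine. All function names and the example profiles are ours.

```python
from itertools import combinations

def admits_proportional(profile):
    """Decide whether a profile admits a proportional allocation by
    testing for a perfect matching between slots and items.  Assumed
    eligibility rule: an item fits slot (x, i) iff agent x ranks it
    among her first (i - 1) * |N| + 1 items."""
    n = len(profile)
    items = next(iter(profile.values()))
    m = len(items)
    if n == 0 or m % n != 0:
        return False
    per_agent = m // n
    slots = [(x, i) for x in profile for i in range(1, per_agent + 1)]
    eligible = {(x, i): set(profile[x][: (i - 1) * n + 1])
                for (x, i) in slots}
    match = {}  # item -> slot currently holding it

    def try_slot(slot, seen):
        # Kuhn's augmenting-path step for bipartite matching.
        for item in eligible[slot]:
            if item in seen:
                continue
            seen.add(item)
            if item not in match or try_slot(match[item], seen):
                match[item] = slot
                return True
        return False

    # Since |slots| = |items|, covering all slots yields a perfect matching.
    return all(try_slot(slot, set()) for slot in slots)

def brute_force_pid(profile, k):
    """The |I|^O(k) brute force algorithm from the text: try every
    subset of at most k items and test whether deleting it helps.
    Returns one solution (possibly empty), or None if there is none."""
    items = next(iter(profile.values()))
    for size in range(k + 1):
        for deleted in combinations(items, size):
            reduced = {x: [g for g in ranking if g not in deleted]
                       for x, ranking in profile.items()}
            if admits_proportional(reduced):
                return set(deleted)
    return None
```

For example, with two agents ranking four items as a: g1 ≻ g2 ≻ g3 ≻ g4 and b: g2 ≻ g1 ≻ g3 ≻ g4, no proportional allocation exists, but deleting {g3, g4} leaves each agent her distinct top item.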
Clearly, such a brute force approach may only be feasible if the number k of items we are allowed to delete is very small. Searching for a more efficient algorithm, one might ask whether the problem becomes fixed-parameter tractable with k as the parameter, i.e., whether there exists an algorithm for PID that, for an instance (P, k), runs in f(k) · |P|^{O(1)} time for some computable function f. Such an algorithm could be much faster in practice than the brute force approach described above.
Unfortunately, the next theorem shows that finding such a fixed-parameter tractable algorithm seems unlikely, as PID is W[2]-hard with parameter k. Hence, deciding whether the deletion of k items can result in a profile admitting a proportional allocation is computationally intractable even for small values of k.

Theorem 1 Proportionality by Item Deletion is NP-complete, and W[2]-hard when parameterized by the size k of the desired solution.

Proof We are going to present an FPT-reduction from the W[2]-hard k-Dominating Set problem, where we are given a graph G = (V, E) and an integer k, and the task is to decide if G contains a dominating set of size at most k; a vertex set D ⊆ V is dominating in G if each vertex in V is either in D or has a neighbor in D. We denote by N(v) the set of neighbors of a vertex v ∈ V, and we let N[v] = N(v) ∪ {v} denote its closed neighborhood.

Let us construct an instance I_PID = (P, k) of PID with P = (N, I, L) as follows. We let N contain 3n + 2m + 1 agents, where n = |V| and m = |E|: we create n + 1 so-called selection agents s_1, ..., s_{n+1}, and for each v ∈ V we create a set A_v of |N[v]| + 1 vertex agents. We let I contain 2|N| + k items; among them are a distinct first-choice item f(a) for each agent a ∈ N, a vertex item i_v for each v ∈ V, and k + 1 dummy items c_1, ..., c_{k+1}. For any vertex set U ⊆ V, we write I_U = {i_v | v ∈ U}; in particular, I_V denotes the set of all vertex items. Let F denote the set of all first-choice items, i.e., F = {f(a) | a ∈ N}.

Before defining the preferences of agents, we need some additional notation. We fix an arbitrary ordering ≺ over the items, and for any set X of items we let [X] denote the ordering of X according to ≺. Also, for any a ∈ N, we define the set F_i^a to contain the first i elements of [F \ {f(a)}], for any i ∈ {1, ..., |N| − 1}. We end preference lists below with the symbol '···', meaning all remaining items not listed explicitly, ordered according to ≺. Now we are ready to define the preference list L_a for each agent a.
- If a is a selection agent a = s_i with 1 ≤ i ≤ n − k, then let …
- If a is a selection agent a = s_i with n − k < i ≤ n + 1, then let …

This finishes the definition of our PID instance I_PID.
Suppose that there exists a solution S of size at most k to I PID and a proportional allocation π mapping the items of I \ S to the agents in N . Observe that by |I | = 2|N | + k, we know that S must contain exactly k items.
First, we show that S cannot contain any item from F. For the sake of contradiction, assume that f(a) ∈ S for some agent a. Since the preference list of a starts with more than k items from F (by |N| − n > n > k), the first item in L_a|_{I\S} must be an item f(b) for some agent b ≠ a. The first item in L_b|_{I\S} is exactly f(b), and thus any proportional allocation would have to allocate f(b) to both a and b, a contradiction.
Next, we prove that S ⊆ I_V. For the sake of contradiction, assume that S contains fewer than k items from I_V. Then, after the removal of S, the top |N| + 1 items in the preference list L_{s_i}|_{I\S} of any selection agent s_i are all contained in I_V ∪ F. Hence, π must allocate at least two items from I_V ∪ F to each s_i, by the definition of proportionality. Recall that for any agent a, π allocates f(a) to a, meaning that π would need to distribute the n items in I_V among the n + 1 selection agents, a contradiction. Hence, we have S ⊆ I_V.
We claim that the k vertices in D = {v | i_v ∈ S} form a dominating set in G. Let us fix a vertex v ∈ V. For the sake of contradiction, assume that N[v] ∩ D = ∅, and consider any vertex agent a in A_v. Then the top |N| + 1 items in L_a|_{I\S} are the same as the top |N| + 1 items in L_a (using that S ∩ F = ∅), and these items form a subset of I_{N[v]} ∪ F for every a ∈ A_v. But then, arguing as above, we get that π would need to allocate an item of I_{N[v]} to each of the |N[v]| + 1 vertex agents in A_v; since |I_{N[v]}| = |N[v]|, this is again a contradiction. Hence, N[v] ∩ D ≠ ∅ for each v ∈ V, and D is indeed a dominating set of size at most k in G.

For the other direction, let D be a dominating set of size k in G, and let S denote the set of k vertex items {i_v | v ∈ D}. To prove that S is a solution for I_PID, we define a proportional allocation π in the instance obtained by removing S. First, for each selection agent s_i with 1 ≤ i ≤ n − k, we let π allocate f(s_i) and the ith item from I_{V\D} to s_i. Second, for each selection agent s_{n−k+i} with 1 ≤ i ≤ k + 1, we let π allocate f(s_{n−k+i}) and the dummy item c_i to s_{n−k+i}. Third, each vertex agent a receives its first-choice item f(a) together with one further item. It is straightforward to check that π is indeed proportional.

For proving NP-completeness, observe that the presented FPT-reduction is a polynomial-time reduction as well, so the NP-hardness of Dominating Set implies that PID is NP-hard, too; since for any subset of the items we can verify in polynomial time whether it yields a solution, containment in NP follows.
As mentioned above, we can in fact strengthen the W[2]-hardness result of Theorem 1 and show that PID is even W[3]-hard with respect to parameter k.

Theorem 2 Proportionality by Item Deletion is W[3]-hard when parameterized by the size k of the desired solution.
Proof We are going to present an FPT-reduction from the W[3]-hard wcs⁻[3] problem, which is the weighted satisfiability problem for formulas of the form

$\varphi = \bigwedge_{i=1}^{m_1} \bigvee_{j=1}^{m_{2,i}} \bigwedge_{\ell=1}^{m_{3,i,j}} l_{i,j,\ell}$,

where each literal l_{i,j,ℓ} is negative [14, Theorem 4.13] (see also [11,15]). Let (ϕ, k) be an instance of the weighted satisfiability problem, where ϕ is a formula of the form described above; the task in wcs⁻[3] is to decide whether there is a truth assignment of weight k that satisfies ϕ. Let X = {x_1, ..., x_n} be the set of variables occurring in ϕ; that is, n denotes the number of variables in ϕ. We will construct an instance I_PID = (P, k) of PID with P = (N, I, L) as follows. We let N contain $n + 1 + \sum_{i=1}^{m_1} m_{2,i}$ agents: we create n + 1 so-called selection agents s_1, ..., s_{n+1}, and for each 1 ≤ i ≤ m_1 we create a set A_i = {a_i^j | 1 ≤ j ≤ m_{2,i}} of verification agents. Next we let I contain 2|N| + k items: we create a distinct first-choice item f(a) for each agent a ∈ N, a variable item w_u for each 1 ≤ u ≤ n, m_{2,i} verification items y_{i,1}, ..., y_{i,m_{2,i}} for each 1 ≤ i ≤ m_1, and k + 1 dummy items c_1, ..., c_{k+1}.
Let F denote the set of all first-choice items, i.e., F = {f(a) | a ∈ N}. For any subset X′ ⊆ X of variables, let W_{X′} = {w_u | x_u ∈ X′}; in particular, W_X denotes the set of all variable items.
Before defining the preferences of agents, we need the additional notation used also in the proof of Theorem 1. We fix an arbitrary ordering ≺ over the items, and for any set Z of items we let [Z] denote the ordering of Z according to ≺. Also, for any a ∈ N, we define the set F_i^a to contain the first i elements of [F \ {f(a)}], for any i ∈ {1, ..., |N| − 1}. Moreover, for any 1 ≤ i ≤ m_1 we define the sets Y_i = {y_{i,1}, ..., y_{i,m_{2,i}}} and Y_i′ = {y_{i,1}, ..., y_{i,m_{2,i}−1}}. We end preference lists below with the symbol '···', meaning all remaining items not listed explicitly, ordered according to ≺. Now we are ready to define the preference list L_a for each agent a.
- If a is a selection agent a = s_i with 1 ≤ i ≤ n − k, then let …
- If a is a verification agent a = a_i^j with 1 ≤ i ≤ m_1 and 1 ≤ j ≤ m_{2,i}, then let …, where C_{i,j} = X \ {x ∈ X | l_{i,j,ℓ} = ¬x for some 1 ≤ ℓ ≤ m_{3,i,j}} is the set of variables that do not occur in any of the literals l_{i,j,ℓ} with 1 ≤ ℓ ≤ m_{3,i,j}.
This finishes the definition of our PID instance I PID .
Suppose that there exists a solution S of size at most k to I PID and a proportional allocation π mapping the items of I \ S to the agents in N . Observe that by |I | = 2|N | + k, we know that S must contain exactly k items.
First, we show that S cannot contain any item from F. To derive a contradiction, assume that f(a) ∈ S for some agent a. We can safely assume that |N| − n > k and that |N| − n > m_{2,i} for each 1 ≤ i ≤ m_1. As a result, the preference list of a starts with more than k items from F. Therefore, the first item in L_a|_{I\S} must be an item f(b) for some agent b ≠ a. Clearly, the first item in L_b|_{I\S} is exactly f(b), which means that any proportional allocation would have to allocate f(b) to both a and b, which is a contradiction.
Next, we prove that S ⊆ W_X. To derive a contradiction, assume that S contains fewer than k items from W_X. Then, after the removal of S, the top |N| + 1 items in the preference list L_{s_i}|_{I\S} of any selection agent s_i are all contained in W_X ∪ F. Hence, π must allocate at least two items from W_X ∪ F to each s_i, by the definition of proportionality. Recall that for any agent a, π allocates f(a) to a, meaning that π would need to distribute the n items in W_X among the n + 1 selection agents, which is a contradiction. Hence, we have S ⊆ W_X. We also get that π must allocate all items in (W_X \ S) ∪ {c_1, ..., c_{k+1}} to the selection agents.

Consider the truth assignment α : X → {0, 1} defined by letting α(x_u) = 1 if and only if w_u ∈ S, for each x_u ∈ X. Since |S| = k, the truth assignment α has weight k. We show that α satisfies ϕ. To do so, we need to show that α satisfies $\varphi_i = \bigvee_{j=1}^{m_{2,i}} \bigwedge_{\ell=1}^{m_{3,i,j}} l_{i,j,\ell}$ for each 1 ≤ i ≤ m_1. Take an arbitrary 1 ≤ i ≤ m_1. To derive a contradiction, assume that for each 1 ≤ j ≤ m_{2,i} there is some 1 ≤ ℓ ≤ m_{3,i,j} such that l_{i,j,ℓ} is made false by α. Then for each such 1 ≤ j ≤ m_{2,i} it holds that |W_{C_{i,j}} ∩ S| < k. Consequently, for each verification agent a_i^j with 1 ≤ j ≤ m_{2,i}, the top |N| + 1 items in L_a|_{I\S} (for a = a_i^j) form a subset of Y_i′ ∪ W_X ∪ F. Then, arguing as above, we get that π would need to allocate an item of Y_i′ to each of the |Y_i| = |Y_i′| + 1 agents a_i^j, which is a contradiction. Since i was arbitrary, we can conclude that α satisfies ϕ.
For the other direction, let α : X → {0, 1} be a truth assignment of weight k that satisfies ϕ, and let S denote the set of k variable items {w_u | x_u ∈ X, α(x_u) = 1}. To prove that S is a solution for I_PID, we define a proportional allocation π in the instance obtained by removing S. First, for each selection agent s_i with 1 ≤ i ≤ n − k, we let π allocate f(s_i) and the ith item from W_X \ S to s_i. Second, for each selection agent s_{n−k+i} with 1 ≤ i ≤ k + 1, we let π allocate f(s_{n−k+i}) and the dummy item c_i to s_{n−k+i}. Then, for each 1 ≤ i ≤ m_1, let 1 ≤ j_i ≤ m_{2,i} be some index such that α makes $\bigwedge_{\ell=1}^{m_{3,i,j_i}} l_{i,j_i,\ell}$ true; we know that such a j_i exists for each i because α satisfies ϕ. For each verification agent a_i^j we let π allocate f(a_i^j) and one item from Y_i to a_i^j as follows. If j = j_i, we let π allocate y_{i,m_{2,i}} to a_i^j; if j < j_i, we let π allocate y_{i,j} to a_i^j; and if j > j_i, we let π allocate y_{i,j−1} to a_i^j. It is straightforward to check that π is indeed proportional.
Theorem 2 implies that we cannot expect an FPT-algorithm for PID with respect to the parameter k, the number of item deletions allowed, unless FPT = W[3]. Next we show that the brute force algorithm that runs in |I|^{O(k)} time is optimal, under the slightly stronger assumption that FPT ≠ W[1]. Observe that in the FPT-reduction presented in the proof of Theorem 1, the new parameter depends linearly on the original parameter (in fact, they coincide). Therefore, this reduction is a linear FPT-reduction, and consequently, PID is W_l[2]-hard. Hence, as proved by Chen et al. [9], PID on an instance (P, k) with item set I cannot be solved in f(k) · |I|^{o(k)} · |P|^{O(1)} time for any computable function f, unless FPT = W[1].

Theorem 3 Unless FPT = W[1], there is no algorithm for PID that, on an instance (P, k) with item set I, runs in f(k) · |I|^{o(k)} · |P|^{O(1)} time for any computable function f.
If we want to optimize the running time not with respect to the number k of allowed deletions but rather in terms of the total number of items, then we can also give the following tight complexity result, under the Exponential Time Hypothesis (ETH). This hypothesis, formulated in the seminal paper by Impagliazzo, Paturi, and Zane [19], says that 3-Sat cannot be solved in 2^{o(n)} time, where n is the number of variables in the 3-CNF formula given as input.

Theorem 4 PID can be solved in 2^{|I|} · |P|^{O(1)} time, but it cannot be solved in 2^{o(|I|)} time unless the ETH fails.

Proof To see that PID can be solved in 2^{|I|} · |P|^{O(1)} time, it suffices to consider the brute force algorithm that iterates over all possible subsets of items to delete, and for each such subset computes whether deleting it enables a proportional allocation (using polynomial-time matching techniques as described by Pruhs and Woeginger [23]). This algorithm runs in 2^{|I|} · |P|^{O(1)} time.

Next, we show that PID cannot be solved in 2^{o(|I|)} time, unless the ETH fails. The so-called Sparsification Lemma proved by Impagliazzo et al. [19] implies that, assuming the ETH, 3-Sat cannot be solved in 2^{o(m)} time, where m is the number of clauses in the 3-CNF formula given as input. Since the standard reduction from 3-Sat to Dominating Set transforms a 3-CNF formula with n variables and m clauses into an instance (G, n) of Dominating Set such that the graph G has O(m) vertices and maximum degree 3 (see, e.g., [25]), it follows that Dominating Set on a graph (V, E) cannot be solved in 2^{o(|V|)} time even on graphs having maximum degree 3, unless the ETH fails.
Recall that the reduction presented in the proof of Theorem 1 computes from each instance (G, k) of Dominating Set with G = (V, E) an instance (P, k) of PID with 3|V| + 2|E| + 1 agents. Hence, assuming that our input graph G has maximum degree 3, we obtain |I| = O(|V|) for the set I of items in P. Therefore, an algorithm for PID running in 2^{o(|I|)} time would yield an algorithm for Dominating Set running in 2^{o(|V|)} time on graphs of maximum degree 3, contradicting the ETH.

Approximating PID
In view of the intractability results we have encountered so far, it is natural to ask whether an efficient approximation algorithm might exist for PID. For some value c ≥ 1, we say that an algorithm A is an approximation for PID with ratio c if, for any instance (P, k) of PID, A either returns a solution for P containing at most c · k items, or correctly concludes that there is no solution for P of size k.
Unfortunately, the proof of Theorem 1 implies that we cannot hope for an efficient approximation algorithm. Even if we do not aim for a constant-factor approximation, but allow a ratio of |I|^{1−ε} for some fixed ε > 0, we cannot expect an efficient algorithm.
Theorem 5 Let ε > 0 be a constant. If FPT ≠ W[2], then there is no algorithm that, given an instance (P, k) of PID with item set I, yields an approximation for PID with ratio |I|^{1−ε} and runs in FPT time with parameter k.
Proof Let us suppose that A is an algorithm as described in the statement of the theorem. We are going to use A to give an FPT-algorithm for the W[2]-hard Dominating Set problem, implying FPT = W [2].
Let (G, k) be our instance of Dominating Set, and let n and m denote the number of vertices and edges in G, respectively. We first apply the reduction given in the proof of Theorem 1; let (P, k) be the constructed instance of PID with P = (N, I, L). Recall that |N| = 3n + 2m + 1 and |I| = 2|N| + k. We distinguish between two cases, depending on the relationship between |N| and k; recall that ε is a positive constant, and we may clearly assume ε < 1. First, if |N| < (3k)^{1/ε}, then we apply the brute force algorithm for Dominating Set that selects k vertices in every possible way and checks whether they form a dominating set. By n < |N|, this approach takes at most |N|^{O(k)} ⊆ (3k)^{O(k/ε)} time, which is fixed-parameter tractable with parameter k. Second, assume |N| ≥ (3k)^{1/ε}. In this case, we apply algorithm A, which either correctly concludes that there does not exist a solution of size k for P, or returns a solution S of size at most |I|^{1−ε} · k. Observe that by |I| = 2|N| + k ≤ 3|N| we have |S| ≤ |I|^{1−ε} k ≤ (3|N|)^{1−ε} k ≤ |N|, where the last inequality follows from our assumption on |N|.
Recall that P − S must contain a number of items that is a multiple of |N|, as otherwise no proportional allocation may exist for P − S. Hence, |S| ≡ k (mod |N|), and thus |S| ≤ |N| implies that S must be a solution of size exactly k. Therefore, A either finds a solution of size k for P, or reports that no such solution exists. By the correctness of our reduction, a solution of size k for P implies the existence of a dominating set of size k for G (in the proof of Theorem 1 we actually determined such a set). Since A is an FPT-algorithm with parameter k, the presented algorithm for Dominating Set is also FPT with parameter k.
Inspecting the proof of Theorem 5, one can observe that the necessity of finding a solution such that the number of remaining items is a multiple of |N | seems to be a major impediment when considering approximation for PID. This led us to ask a different question: instead of approximating the size of the solution, is it perhaps possible to approximate the number of items that each agent ends up with in a proportional allocation? More formally, our task is the following: given a profile P and some integer c, determine a set U of items with |U | = c|N | such that U can be proportionally allocated to the set N of agents (i.e., such that P − (I \ U ) admits a proportional allocation).
Looking into the proofs of Theorems 1 and 2, we can immediately observe that the case c = 2, that is, finding 2|N| items (yielding two items for each agent) for which a proportional allocation exists, is already computationally intractable.

Corollary 1 Given a profile P with a set N of agents and a set I of items, it is NP-hard to decide whether there exists a set of 2|N| items which can be proportionally allocated to the agents of N.
We remark that Corollary 1 directly implies that it is NP-hard to approximate the number of items each agent obtains in a proportional allocation with a ratio better than 1/2.
By contrast, there is a simple algorithm to decide whether we can find one item for each agent in a proportional way.

Theorem 6
There exists an algorithm that, given a profile P = (N, I, L), determines in polynomial time a set U of |N| items that can be proportionally allocated to the agents of N, whenever such a set U exists.
Proof Suppose that S is a set of |I| − |N| items such that P − S admits a proportional allocation. Then, clearly, there cannot be two agents whose first-choice items in P − S coincide. This simple observation leads to the following algorithm. Starting from P, we repeatedly search for a pair of agents whose first-choice items coincide. If there exist such agents, then we remove their common first-choice item from P (as this item must be contained in S), and proceed with the remaining profile. Once we reach a profile in which no two agents' first-choice items coincide, we allocate to each agent its first-choice item (and we delete all remaining items).
Since we only delete items that must belong to S (except for the deletion of the superfluous items performed after an appropriate allocation is found), this algorithm returns a set of |N| items as promised, unless P admits no solution S of size |I| − |N|. The running time is clearly polynomial in |P|.
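The elimination procedure from this proof can be sketched as follows (a minimal Python reconstruction of ours; the function name and the example profiles are hypothetical):

```python
def one_item_each(profile):
    """Repeatedly delete any item that is the current first choice of
    two agents; once all first choices are distinct, give each agent
    her first choice.  Returns the allocation as a dict, or None if
    some agent runs out of items (i.e., no such set of |N| items exists)."""
    remaining = {agent: list(ranking) for agent, ranking in profile.items()}
    while True:
        if any(not ranking for ranking in remaining.values()):
            return None  # an agent's list became empty: no solution
        tops = {}
        clash = None
        for agent, ranking in remaining.items():
            top = ranking[0]
            if top in tops:
                clash = top  # two agents share this first choice
                break
            tops[top] = agent
        if clash is None:
            # All first choices are distinct: allocate them.
            return {agent: ranking[0] for agent, ranking in remaining.items()}
        for ranking in remaining.values():
            if clash in ranking:
                ranking.remove(clash)
```

For instance, with three agents ranking a: g1 ≻ g2 ≻ g3 ≻ g4, b: g1 ≻ g3 ≻ g2 ≻ g4, and c: g4 ≻ g1 ≻ g2 ≻ g3, the item g1 is deleted once, after which the first choices g2, g3, g4 are distinct and can be allocated.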

Three Agents
It is known that PID for two agents is solvable in polynomial time: if there are only two agents, then an allocation is proportional if and only if it is envy-free [3]. Since the problem of obtaining an envy-free allocation by item deletion is polynomial-time solvable in the case of two agents [4,7], this immediately implies the tractability of PID for |N| = 2. In this section, we generalize this result by proving that PID is polynomial-time solvable for three agents.
In what follows, we assume that our profile P contains three agents, so let N = {a, b, c}. In Sect. 4.1, we define the basic concepts that we will need. Then, in Sect. 4.2 we present a high-level overview of our algorithm. In Sect. 4.3, we look at partial solutions and define the notion of branching sets. Finally, in Sect. 4.4, when all necessary notions are in place, we present our algorithm.

Basic Concepts: Prefixes and Minimal Obstructions
We begin by defining a graph representation of our profile P which can be used to determine whether P admits a proportional allocation. The following construction is identical to the one proposed by Pruhs and Woeginger [23, Section 4] and later generalized by Aziz et al. [3,Theorem 6].
Graph underlying a profile Let us define the underlying graph G of our profile P of PID as the following bipartite graph. The vertex set of G consists of the set I of items on the one side, and a set S on the other side, containing all pairs of the form (x, i) where x ∈ N is an agent and i ∈ {1, . . . , |I|/|N|}. Such pairs are called slots. We can think of the slot (x, i) as the place for the ith item that agent x receives in some allocation. We say that an item is eligible for a slot (x, i) if it is among the top (i − 1)|N| + 1 items in the preference list L_x of agent x. In the graph G, we connect each slot with the items that are eligible for it; see Fig. 1 for an illustration. Observe that any proportional allocation corresponds to a perfect matching in G; for the sake of completeness, we will prove this in Lemma 1.
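The construction of G can be sketched as follows; the encoding (a dict from agents to full preference lists, slots as (agent, i) pairs) is an assumption made for illustration:

```python
# Sketch of the underlying graph G: slots (x, i) on one side, items on the
# other; slot (x, i) is connected to the items eligible for it, i.e. the
# top (i - 1)*|N| + 1 items of agent x's preference list. Assumes |I| is
# divisible by |N|.

def underlying_graph(preferences):
    """preferences: dict agent -> full preference list.
    Returns dict slot -> set of eligible items."""
    n = len(preferences)
    num_items = len(next(iter(preferences.values())))
    per_agent = num_items // n
    edges = {}
    for x, lst in preferences.items():
        for i in range(1, per_agent + 1):
            # Slot (x, i): the place for the i-th item agent x receives.
            edges[(x, i)] = set(lst[: (i - 1) * n + 1])
    return edges
```

A proportional allocation then corresponds to a perfect matching in this bipartite graph, which is what Lemma 1 states.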
Since our approach to solve PID with three agents is to apply dynamic programming, we need to handle partial instances of PID. Let us define now the basic necessary concepts.
Prefixes For integers i_a, i_b, and i_c, we define the prefix Q = P[i_a, i_b, i_c] of P as the partial profile listing only the first i_a, i_b, and i_c items in the preference list of agents a, b, and c, respectively. We call (i_a, i_b, i_c) the size of Q and denote it by size(Q).
We say that a prefix P_i = P[i_a, i_b, i_c] contains a prefix P_j = P[j_a, j_b, j_c] if i_a ≥ j_a, i_b ≥ j_b, and i_c ≥ j_c; the containment is strict if, in addition, P_i ≠ P_j. We say that P_i and P_j are intersecting if neither of them contains the other; we call the unique largest prefix contained both in P_i and in P_j, i.e., the prefix P[min(i_a, j_a), min(i_b, j_b), min(i_c, j_c)], their intersection, and denote it by P_i ∩ P_j. We may also compare prefixes of different profiles, deciding their relationship (i.e., whether one contains the other, or they intersect) solely based on their sizes.
For some prefix Q = P[i_a, i_b, i_c], let I(Q) denote the set of all items appearing in Q. We define the set S(Q) of slots appearing in Q as the set of all slots (x, i) with x ∈ N and 1 ≤ i ≤ ⌈(i_x + 2)/3⌉; see Fig. 1 for an illustration. Note that a slot (x, i) with 3(i − 1) + 1 ≤ i_x is connected to the same items in the graph G(Q) underlying Q as in G; we say that such slots are complete in Q. By contrast, if i_x ≢ 1 mod 3, then the slot (x, ⌈(i_x + 2)/3⌉) is connected to fewer items in G(Q) than in G. Hence, the only slots which may be incomplete are the last slots in Q, that is, the slots (x, ⌈(i_x + 2)/3⌉) for x ∈ N; see Fig. 1 for an illustration.

Minimal obstructions
We say that a prefix Q is a minimal obstruction, if it is not solvable, but all prefixes strictly contained in Q are solvable. Observe that all slots in a minimal obstruction must be complete. Furthermore, Hall's Theorem tells us that a minimal obstruction must have exactly one item less than the number of slots, so |I(Q)| = |S(Q)| − 1. We will call any prefix Q that is not solvable an obstruction; note that any obstruction that does not strictly contain another obstruction is, indeed, a minimal obstruction in the above sense. See Fig. 2 for an illustration. Lemma 1 shows that a minimal obstruction, if one exists, can be found efficiently; Lemma 2 states some useful observations about minimal obstructions.

Lemma 1 A profile P admits a proportional allocation if and only if its underlying graph G admits a perfect matching. Moreover, in O(|I|^3) time we can either find a proportional allocation for P, or find a minimal obstruction in P.

Proof We prove this lemma for arbitrary |N|.
First, it is easy to see that any proportional allocation π immediately yields a perfect matching M for G: for each x ∈ N and each i ∈ {1, . . . , |I |/|N |} (note that |I |/|N | ∈ N since π is proportional), we simply put into M the edge connecting slot (x, i) with the ith item p (x,i) received by x; naturally, we rank items received by x according to x's preferences. The proportionality of π implies that p (x,i) is contained in the top (i − 1)|N | + 1 items in L x , and thus is indeed eligible for the slot (x, i).
For the other direction, consider a perfect matching M in G. Then giving each agent x all the items assigned to the slots {(x, i) | i ∈ {1, . . . , |I|/|N|}} by M, we obtain a proportional allocation π: for each agent x and index j ∈ {1, . . . , |I|}, our allocation π assigns at least j/|N| items to x from L_x[1 : j], namely the items matched by M to the slots (x, i) with (i − 1)|N| + 1 ≤ j.

Therefore, we can check whether there exists a proportional allocation for P by finding a maximum matching in the bipartite graph G. Using the Hopcroft–Karp algorithm [17], this takes O(|I|^{5/2}) time, since G has 2|I| vertices. If no perfect matching exists in G, then we can find a minimal obstruction using a variant of the classical augmenting path method that starts from an empty matching, and increases its size by finding augmenting paths one by one. Namely, at each iteration we pick an unmatched starting slot (x, i) for which all slots in {(x′, j) | x′ ∈ N, 1 ≤ j < i} are already matched, and search for an augmenting path that starts at (x, i).
Suppose that this algorithm stops at an iteration where the starting slot is (x, i), and no augmenting path starts at (x, i) for the current matching M. Let S_H be the set of all slots reachable by an alternating path in G from (x, i), and let I_H be the set of all items eligible for any slot in S_H. It is well known that S_H and I_H violate Hall's condition: |I_H| < |S_H|. Moreover, the slots in S_H "induce" a prefix in the sense that there exists a prefix Q with S(Q) = S_H. To prove this, it suffices to show that if (y, j) ∈ S_H and j′ ∈ {1, . . . , j − 1}, then (y, j′) ∈ S_H. By our strategy for picking starting slots, we know j′ < j ≤ i, implying that (y, j′) is matched by M. Let q be the item assigned to it by M; note that q is eligible for (y, j) as well. To obtain an alternating path from (x, i) to (y, j′), we can take any alternating path from (x, i) to (y, j), and append the two-edge path from (y, j) to (y, j′) through q. Hence, there indeed exists a prefix Q with S(Q) = S_H; we pick such a Q containing only complete slots. Using standard arguments from matching theory, it is straightforward to check that Q is a minimal obstruction.
Each iteration can be performed in O(|I|^2) time (e.g., with a BFS), and there are at most |I| iterations, so the algorithm runs in O(|I|^3) time.

Lemma 2 Let Q = P[i_a, i_b, i_c] be a minimal obstruction in P. Then i_a ≡ i_b ≡ i_c ≡ 1 mod 3, and either (i) i_a = i_b = i_c, or (ii) two of the values i_a, i_b, i_c coincide and exceed the third one by exactly 3.

Proof First, observe that if i_a ≢ 1 mod 3, then the set of complete slots is the same in Q as in P[i_a − 1, i_b, i_c], contradicting the minimality of Q. Thus, we have i_a ≡ 1 mod 3, and we get i_b ≡ i_c ≡ 1 mod 3 analogously.
Second, let us consider the graph G(Q) underlying our prefix. Since Hall's condition fails for the set S(Q) of (complete) slots but, by minimality, it holds for any proper subset of these slots, we know that |I(Q)| = |S(Q)| − 1 = (i_a + i_b + i_c + 6)/3 − 1, where the last equality follows from the first claim of the lemma.

Fig. 3 Each partial solution U is witnessed by an allocation π_U showing that Q − U is solvable; in each (partial) preference list for Q − U, we indicated the items allocated by π_U to the given agent by underlining them

Based on Lemma 2, we define the shape of a minimal obstruction Q as either straight or slant, depending on whether Q fulfills the conditions (i) or (ii), respectively. More generally, we also say that a prefix has straight or slant shape if it fulfills the respective condition. Furthermore, we define the boundary items of Q, denoted by δ(Q), as the set of all items that appear once or twice (but not three times) in Q.
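The solvability test behind Lemma 1 can be sketched as follows; for simplicity the sketch uses a plain augmenting-path matching rather than Hopcroft–Karp, and the edge encoding (a dict mapping each slot to its eligible items) is an assumption for illustration:

```python
# Sketch: a profile is solvable iff its underlying graph has a matching
# covering every slot. Standard augmenting-path (Hungarian-style) matching.

def max_matching(edges):
    """edges: dict slot -> set of eligible items.
    Returns dict item -> slot describing a maximum matching."""
    matched = {}                      # item -> slot currently assigned

    def augment(slot, seen):
        for item in edges[slot]:
            if item not in seen:
                seen.add(item)
                # Take a free item, or try to re-route the slot holding it.
                if item not in matched or augment(matched[item], seen):
                    matched[item] = slot
                    return True
        return False

    for slot in edges:
        augment(slot, set())
    return matched

def is_solvable(edges):
    # Perfect matching on the slot side <=> every slot is matched.
    return len(max_matching(edges)) == len(edges)
```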

High-level Overview of Our Algorithm
Having in place the most basic definitions, we are now able to give an intuition about how our algorithm works. The main idea is to repeatedly find a minimal obstruction, delete certain items from it to render it solvable (i.e., to ensure that the remainder admits a proportional allocation), and then proceed with the modified instance. However, we are not able to immediately tell which items should be deleted from the current minimal obstruction Q, as such a decision may have consequences later, when we are dealing with subsequent minimal obstructions. Therefore, instead of picking just one solution, we apply a bounded search tree approach: at each minimal obstruction, we perform a branching, and pursue several possible ways to delete a set of items to make Q solvable. To obtain a polynomial-time algorithm, we must bound the size of our search tree; for this we need several ideas.

Bounding the number of branches
In order to bound the number of branches investigated at each branching step, we use the important fact, stated in Lemma 4, that any inclusionwise minimal solution removes at most two items from a minimal obstruction. This insight is of crucial importance in our algorithm, as it yields a polynomial bound on the number of branches, namely O(|I|^2).
Bounding the size of the search tree Although our search tree algorithm has a recursive structure, we apply a dynamic programming technique to limit the number of recursive calls, i.e., the number of nodes in the search tree.
To this end, we define an equivalence relation between partial solutions, corresponding to nodes in the search tree. Intuitively, two partial solutions are equivalent if they can be extended in the same way into a solution. It turns out that we can determine sufficient conditions that guarantee equivalence. These conditions are somewhat technical, but they essentially ensure that two deletions have the same effect with respect to any possible minimal obstruction that may arise later during the run of the algorithm.
These conditions allow us to classify partial solutions into equivalence classes whose number is bounded by a polynomial; this results in a polynomial running time for our algorithm.

Partial Solutions and Branching Sets
Partial solutions For a prefix Q and a set U of items, we define Q − U in the natural way: by deleting all items of U from the (partial) preference lists of the prefix (note that the total length of the preference lists constituting the prefix may decrease). We say that an item set Y ⊆ I(Q) is a partial solution for Q if Q − Y is solvable. See again Fig. 2 or 3 for an example. Observe that for any item set Y we can check whether it is a partial solution for Q by checking whether all complete slots can be covered by a matching in the graph corresponding to Q − Y.

Branching set To solve PID we will repeatedly apply a branching step: whenever we encounter a minimal obstruction Q, we shall consider several possible partial solutions for Q, and for each partial solution Y we try to find a solution U for P such that U ∩ I(Q) = Y. To formalize this idea, we say that a family Y containing partial solutions for a minimal obstruction Q is a branching set for Q, if there exists a solution U of minimum size for the profile P such that U ∩ I(Q) ∈ Y. Such a set is exactly what we need to build a search tree algorithm for PID.
Lemma 4 shows that we never need to delete more than two items from any minimal obstruction. This will be essential for constructing a branching set.

Lemma 4 Let Q be a minimal obstruction in a profile P, and let U denote an inclusionwise minimal solution for P. Then |U ∩ I (Q)| ≤ 2.
Proof Let U Q := U ∩ I (Q), and let us assume |U Q | ≥ 3 for contradiction. We are going to select a set Y of three items from U Q for which we can prove that U \ Y is a solution for P, contradicting the minimality of U .
We rank the items of U Q according to the index of the first slot in which they appear in P: we say that an item u appears at i, if i is the smallest index such that u is eligible for a slot (x, i) for some x ∈ N . If there exist three items y 1 , y 2 , and y 3 in U Q appearing strictly earlier (i.e., at a smaller index) than all other items in U Q , then we let Y = {y 1 , y 2 , y 3 }.
Otherwise, we apply the following procedure to choose Y . Let Y 1 be the set of items in U Q that appear at the earliest index, say i 1 . We select y 1 from Y 1 by favoring items eligible for more than one slot from {(a, i 1 ), (b, i 1 ), (c, i 1 )}; if there are still several possibilities to choose y 1 , then we select it arbitrarily. Similarly, let Y 2 be the set of earliest appearing items in U Q \ {y 1 }, appearing at some index i 2 . We pick an item y 2 from Y 2 by favoring items eligible for more than one slot from {(a, i 2 ), (b, i 2 ), (c, i 2 )}; again, if there are still several possibilities to choose y 2 , then we select it arbitrarily. Note that we use the notion of eligibility based on the original preference lists in P.
To choose an item y_3 from the set Y_3 of the earliest appearing items in U_Q \ {y_1, y_2}, we create the profile P_3 = P − (U \ {y_1, y_2}). If there exists a minimal obstruction in P_3 strictly contained in Q, then we fix such a minimal obstruction Q_3, and we choose an item y_3 ∈ Y_3 eligible for a slot of S(Q_3). Otherwise we choose y_3 from Y_3 arbitrarily. Intuitively, we choose y_3 so as to overcome the possible obstructions obtained when putting y_1 and y_2 back into our instance, and our strategy for this is simply to choose an item lying within any such obstruction. Observe that if the minimal obstruction Q_3 exists, then (1) since there is no obstruction strictly contained in Q in the profile P − U, there must exist some item in U_Q \ {y_1, y_2} that is eligible for some slot in S(Q_3); and (2) if u appears earliest in P among all such items, then u ∈ Y_3. To see this, let (x, i) be the first slot in S(Q_3) for which u is eligible in P_3. By the claim of Lemma 2 on the shape of a minimal obstruction, all slots preceding (x, i) belong to Q_3 as well, that is, the prefix P_3^{<i} = P_3[3i − 5, 3i − 5, 3i − 5] "induced" by these slots in P_3 is contained in Q_3. Thus, by our choice of u, we get that P_3^{<i} is a prefix of P as well, implying that no item of U_Q \ {y_1, y_2} appears earlier in P than u. Hence, u ∈ Y_3, showing that y_3 is well-defined.
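The appearance index used in this proof can be computed directly from the eligibility threshold: for three agents, an item at 1-based position pos in some list is eligible for slot (x, i) exactly when pos ≤ 3(i − 1) + 1, so the smallest such i is ⌈(pos + 2)/3⌉. A sketch (the profile encoding is an assumption):

```python
# Sketch: an item "appears at i" for the smallest i such that it is
# eligible for some slot (x, i); with three agents this is the minimum of
# ceil((pos + 2) / 3) over the item's positions in the agents' lists.

from math import ceil

def appears_at(item, preferences):
    """preferences: dict agent -> full preference list (encoding assumed)."""
    best = None
    for lst in preferences.values():
        pos = lst.index(item) + 1          # 1-based position in L_x
        i = ceil((pos + 2) / 3)            # smallest i with 3(i-1)+1 >= pos
        best = i if best is None else min(best, i)
    return best
```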
Setting Y = {y_1, y_2, y_3}, we finish our proof by proving that U \ Y is a solution for P. For contradiction, suppose that R is a minimal obstruction in P′ = P − (U \ Y).
First, suppose that R contains all items in Y . As U is a solution, the profile R − Y is solvable, and hence contains at least as many items as complete slots. Note that adding the items of Y into the profile R − Y means adding exactly three new items and at most three new complete slots (since each agent's list contains at most three more items, resulting in at most one extra complete slot per agent). Hence, R has at least as many items as slots, contradicting the assumption that R is a minimal obstruction.
Hence we know that R does not contain all items in Y. By Lemma 2, R is then strictly contained in Q, and by the minimality of Q we get that R must contain an item from {y_1, y_2, y_3}. We claim that if R contains y_h for some h ∈ {2, 3}, then it contains all items y_j with 1 ≤ j < h. Since y_j appears not later than y_h, the only possible way for R to contain y_h but not y_j would be the following: y_j and y_h appear at the same slot number i, but R has a slant shape and thus only contains two slots from S_i := {(x, i) | x ∈ N}, missing exactly the (unique) slot where y_j appears. However, since R is a minimal obstruction, y_h must appear at both remaining slots from S_i by the last statement of Lemma 2, which contradicts our choice of y_j. This leaves us with the case when y_3 is not contained in R (for y_3 ∈ I(R) would imply Y ⊆ I(R), which we already proved not to be the case). Then R is not only a prefix of P′ but also of P_3. Assume w.l.o.g. that y_3 appears at index j in the slot (c, j).
Since R is a minimal obstruction in P_3 strictly contained in Q, we know that a minimal obstruction Q_3 was found when choosing y_3, but R ≠ Q_3. Thus, both R and Q_3 are minimal obstructions of slant shape, with R containing the slots (a, j) and (b, j) but not (c, j), and Q_3 containing the slot (c, j) and one of (a, j) and (b, j), say (b, j).
Note also that by the last statement of Lemma 2, we know

Corollary 2 For any minimal obstruction Q in a profile, a branching set Y for Q with max_{Y ∈ Y} |Y| ≤ 2 and |Y| = O(|I|^2) can be computed in O(|I|^4) time.
Proof By Lemma 4, in order to construct the branching set Y as required, it suffices to check for each Y ⊆ I(Q) of size at most 2 whether Q − Y is solvable. To do so, we first construct the graph G underlying the prefix Q and compute a maximum matching M in G. This can be done in O(|I|^{5/2}) time using the Hopcroft–Karp algorithm, as explained in Lemma 1. Note that since Q is a minimal obstruction, M matches all but one slot in G, so |M| = |S(Q)| − 1. Now, for each Y ⊆ I(Q) with 1 ≤ |Y| ≤ 2 we compute the graph G_Y underlying the prefix Q − Y. Observe that we can obtain G_Y from G by deleting the items of Y, and adding the necessary edges so that every slot is connected with all items eligible for it. Observe that M yields a matching M_Y of size at least |M| − 2 in G_Y, which covers at least |M| − 5 = |S(Q)| − 6 complete slots (because at most three slots may have become incomplete in Q − Y). Hence, starting from M_Y we only need to find a constant number of augmenting paths in order to check whether all complete slots of G_Y can be covered by a matching. This takes O(|I|^2) time, because G_Y has at most 2|I| vertices, yielding a running time of O(|I|^4) in total.
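The enumeration behind this proof can be sketched as follows; the prefix encoding (partial preference lists) and the helper names are assumptions, and the `forbidden` filter anticipates the variant used in Lemma 5:

```python
# Sketch of Corollary 2: try every subset Y of I(Q) with |Y| <= 2 and keep
# those whose removal lets a matching cover every complete slot of Q - Y.

from itertools import combinations

def complete_slot_edges(prefix):
    """prefix: dict agent -> partial preference list. A slot (x, i) is
    complete when its full eligibility range, the top 3*(i-1)+1 positions,
    lies inside the partial list."""
    edges = {}
    for x, lst in prefix.items():
        i = 1
        while 3 * (i - 1) + 1 <= len(lst):
            edges[(x, i)] = set(lst[: 3 * (i - 1) + 1])
            i += 1
    return edges

def all_slots_matchable(edges):
    """True iff some matching covers every slot (augmenting-path method)."""
    matched = {}

    def augment(slot, seen):
        for item in edges[slot]:
            if item not in seen:
                seen.add(item)
                if item not in matched or augment(matched[item], seen):
                    matched[item] = slot
                    return True
        return False

    return all(augment(slot, set()) for slot in edges)

def branching_set(prefix, forbidden=frozenset()):
    """All candidate partial solutions Y with |Y| <= 2, disjoint from
    `forbidden`, whose deletion makes the prefix solvable."""
    items = sorted({p for lst in prefix.values() for p in lst})
    result = []
    for r in (1, 2):
        for Y in combinations(items, r):
            if forbidden & set(Y):
                continue
            reduced = {x: [p for p in lst if p not in Y]
                       for x, lst in prefix.items()}
            if all_slots_matchable(complete_slot_edges(reduced)):
                result.append(set(Y))
    return result
```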

Polynomial-Time Algorithm for PID for Three Agents
Let us now present our algorithm for solving PID on our profile P = (N , I , L).
We are going to build the desired solution step-by-step, iteratively extending an already found partial solution. For a prefix T of P and a partial solution U for T , we call a set E ⊆ I an extension for (T , U ) if E is disjoint from I (T ) and E ∪ U is a solution for P; we will refer to the set of items in I (T ) \ U as forbidden w.r.t. (T , U ). We propose an algorithm Extend(T , U ) that, given a prefix T of P and a partial solution U for T , returns an extension for (T , U ) of minimum size if one exists, otherwise returns 'No'.

Branching set with forbidden items.
To address the problem of finding an extension for (T , U ), we modify the notion of a branching set accordingly. Given a minimal obstruction Q in some profile P and a set F ⊆ I (Q) of items, we say that a family Y of partial solutions for Q is a branching set for Q forbidding F, if the following holds: either there exists a solution U for the profile P that is disjoint from F and has minimum size among all such solutions, and moreover, fulfills U ∩ I (Q) ∈ Y, or P does not admit any solution disjoint from F (in which case Y can be arbitrary).

Lemma 5 There is an algorithm that, given a minimal obstruction Q in a profile and a set F ⊆ I (Q) of forbidden items, produces a branching set Y forbidding F with max Y ∈Y |Y | ≤ 2 and |Y| = O(|I | 2 ), and runs in time O(|I | 4 ).
Proof The algorithm given in Corollary 2 can be adapted in a straightforward fashion to take forbidden items into account: it suffices to simply discard in the first place any subset Y ⊆ I (Q) that is not disjoint from F. It is easy to verify that this modification indeed yields an algorithm as desired.
Equivalent partial solutions We will describe Extend as a recursive algorithm, but in order to ensure that it runs in polynomial time, we need to apply dynamic programming. For this, we need a notion of equivalence: we say that two partial solutions U_1 and U_2 for T are equivalent if
1. |U_1| = |U_2|, and
2. (T, U_1) and (T, U_2) admit the same extensions.
See Fig. 4 for an illustration.
Ideally, whenever we perform a call to Extend with a given input (T , U ), we would like to first check whether an equivalent call has already been performed, i.e., whether Extend has been called with an input (T , U ) for which U and U are equivalent. However, the above definition of equivalence is computationally hard to handle: there is no easy way to check whether two partial solutions admit the same extensions or not. To overcome this difficulty, we will use a stronger condition that implies equivalence.
Deficiency patterns Consider a solvable prefix Q of P. We let the deficiency of Q, denoted by def(Q), be the value |S(Q)| − |I(Q)|. Note that due to possibly incomplete slots in Q, the deficiency of Q may be positive even if Q is solvable. However, if Q contains only complete slots, then its solvability implies def(Q) ≤ 0. We define the deficiency pattern of Q, denoted by defpat(Q), as the set of all triples (size(R), def(Q ∩ R), I(Q ∩ R) ∩ δ(Q)), where R can be any prefix with a straight or a slant shape that intersects Q. Roughly speaking, the deficiency pattern captures all the information about Q that is relevant for determining whether a given prefix intersecting Q is a minimal obstruction or not. Note that any given value of size(R) can be present in only one triple from the deficiency pattern of Q, because def(Q ∩ R) and I(Q ∩ R) ∩ δ(Q) only depend on size(R) and Q. See Fig. 5 for an example.
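Under the slot-counting convention sketched in Sect. 4.1 (an agent listing i_x items contributes the slots (x, 1), . . . , (x, ⌈(i_x + 2)/3⌉)), the deficiency of a prefix can be computed directly; the encoding is again an assumption for illustration:

```python
# Sketch of def(Q) = |S(Q)| - |I(Q)| for three agents, assuming the
# slot-counting convention above: agent x with i_x listed items contributes
# ceil((i_x + 2) / 3) slots; I(Q) is the set of distinct items appearing.

from math import ceil

def deficiency(prefix):
    """prefix: dict agent -> partial preference list."""
    slots = sum(ceil((len(lst) + 2) / 3) for lst in prefix.values())
    items = len({p for lst in prefix.values() for p in lst})
    return slots - items
```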
For an intuitive understanding of the role of deficiency patterns, consider a prefix T and a partial solution U for T. In our algorithm, after we have decided on deleting U, we will not delete any further items from I(T); hence, it should not matter which items we have included in U, as long as its deletion leaves us with the same kind of prefix. So suppose that T′ is a prefix that may or may not become a minimal obstruction after deleting U; clearly we may suppose that R = T′ − U has a straight or a slant shape (otherwise it is certainly not a minimal obstruction).
In case T′ − U contains T − U, the only important properties of U are its size and its intersection with the boundary of T: deleting any partial solution U′ with |U′| = |U| that contains the same items from δ(T) as U will leave us with the same number of slots and the same number of items as the deletion of U. (See also Fig. 4 for an example showing why the boundary matters.) In case T′ − U = R does not contain T − U but intersects it, all further information necessary to "classify" U is contained in defpat(T − U). Indeed, to calculate the number of items in R, it suffices to know the number of items in the intersection (T − U) ∩ R, which is determined by the deficiency pattern of T − U.

Strong equivalence To formalize the above ideas, we call partial solutions U_1 and U_2 for T strongly equivalent, if
1. |U_1| = |U_2|,
2. U_1 ∩ δ(T) = U_2 ∩ δ(T), and
3. defpat(T − U_1) = defpat(T − U_2).
As the name suggests, strong equivalence is a sufficient condition for equivalence.

Lemma 6
If U 1 and U 2 are strongly equivalent partial solutions for T , then they are equivalent as well.
Proof Suppose that W is an extension for (T , U 1 ). We need to prove that it is an extension for (T , U 2 ) as well. Clearly, we have W ⊆ I \ I (T ), so it suffices to show that P − (U 2 ∪ W ) is solvable.
Suppose for contradiction that Q 2 is a minimal obstruction in P − (U 2 ∪ W ). Let us consider the prefix Q 1 of P − (U 1 ∪ W ) that has the same size as Q 2 ; such a prefix exists because |U 1 | = |U 2 |. In the remainder of the proof, we argue that Q 1 is not solvable in P − (U 1 ∪ W ), contradicting the assumption that W is an extension for (T , U 1 ) and thus U 1 ∪ W is a solution for P. Note that Q 1 and Q 2 clearly have the same slots, all of them complete.
Recall that U 2 is a partial solution for T , so T − U 2 is solvable. Since W is disjoint from I (T ), we know that Q 2 cannot be contained in T − U 2 .
First, let us assume that Q_2 contains T − U_2. In this case, |U_1| = |U_2| and U_1 ∩ δ(T) = U_2 ∩ δ(T) together immediately imply that Q_1 and Q_2 contain the same number of items; since Q_1 and Q_2 also have the same slots, the failure of Hall's condition for Q_2 implies that it fails for Q_1 as well, so Q_1 is not solvable.

Second, let us assume now that Q_2 and T − U_2 are intersecting, and let their intersection be T∩_2. Similarly, let T∩_1 be the intersection of Q_1 and T − U_1. Since Q_2 is a minimal obstruction and thus has a straight or a slant shape, we know that the triple (size(Q_2), def(T∩_2), I(T∩_2) ∩ δ(T − U_2)) is contained in the deficiency pattern of T − U_2. By the third condition of strong equivalence, the same triple must also be present in the deficiency pattern of T − U_1. Hence, T∩_1 must have the same deficiency as T∩_2. By |U_1| = |U_2| and U_1 ∩ δ(T) = U_2 ∩ δ(T), we know that T − U_1 and T − U_2 have the same size, and thus T∩_1 has the same size as T∩_2. This implies |I(T∩_1)| = |I(T∩_2)|, where the equality of the boundary items involved is guaranteed by the second condition of strong equivalence. Hence, adding the items contained in Q_2 but not in T increases the size of I(T∩_2) exactly as adding the items contained in Q_1 but not in T increases the size of I(T∩_1). Therefore, we can conclude that |I(Q_1)| = |I(Q_2)|, which again implies that Q_1 is not solvable.
Before giving the details of algorithm Extend, we need one more lemma on the relation of prefixes that we consider during the iterative approach of addressing minimal obstructions one-by-one.

Lemma 7
Let Q_0 be a minimal obstruction in P − U_0 for a set U_0 ⊆ I of items, and let T be the largest prefix of P for which T − U_0 = Q_0.

Let us now assume that Q_0 has a slant shape; w.l.o.g. we may assume that ..., then (ii) holds, because any position contained in Q but not in ..., and in either case (i) holds. This proves our claim.
Suppose now (i). Let (x, j) be the slot to which all positions contained in T − U but not in Q belong; let I′ denote the set of items on these positions in T − U. Since I′ contains items of T − U = Q_0 − Y, we know that they occur on positions belonging to the slot (x, j) in Q_0 as well (note that Q_0 is a prefix in P − (U \ Y), not in P − U). Thus (x, j) ∈ S(Q_0), but observe that (x, j + 1) ∉ S(Q_0). However, by the minimality of Q_0, any item present in a last slot of Q_0 occurs at least once more in Q_0 (since an item eligible only for (x, j) among all slots in S(Q_0) would imply that deleting the positions corresponding to (x, j) from Q_0 would yield a prefix with |S(Q_0)| − 1 slots and at most |I(Q_0)| − 1 items). Therefore, any item of I′ occurs at least once more in Q_0. By I′ ∩ Y = ∅, this implies that any item of I′ occurs at least once in a position of Q_0 − Y = T − U that does not belong to (x, j). However, any such position is contained in Q as well, by our assumption (i). Thus I′ ⊆ I(Q), which by Equality (2) ...

Supposing (ii), let (x, j) be the slot to which all positions contained in Q but not in T − U belong; let I′ denote the set of items on these positions in Q. Then it is clear that (x, j) ∈ S(Q) but (x, j + 1) ∉ S(Q); hence, arguing as above, we get that ..., implying that the minimal obstruction Q in P − U contains only items that are forbidden w.r.t. (T, U). Thus, there cannot exist an extension for (T, U).

Now, we are ready to describe algorithm Extend in detail. Let (T, U) be the input for Extend. Throughout the run of the algorithm, we will store all inputs with which Extend has been computed in a table SolTable, keeping track of the corresponding extensions as well. Initially, we call Extend with input (T_∅, ∅), where T_∅ denotes the empty prefix of our input profile P, i.e., P[0, 0, 0], and we initialize SolTable as empty.
For an example of running algorithm Extend on an instance of PID, see Appendix A.

Algorithm Extend(T , U ):
Step 0: Check for strongly equivalent inputs.
For each (T, U′) in SolTable, check whether U and U′ are strongly equivalent with respect to T, and if so, return Extend(T, U′).

Step 1: Check for trivial solution.
Check if P − U is solvable. If so, then return the empty extension ∅, and store the entry (T , U ) together with the value ∅ in SolTable.
Step 2: Find a minimal obstruction.
Find a minimal obstruction Q in P − U ; recall that P − U is not solvable in this step.

Step 3: Compute the new prefix.
Let T′ be the largest prefix of P for which T′ − U = Q. If I(T′) ⊉ I(T), then return 'No', and store the entry (T, U) together with the value 'No' in SolTable.

Step 4: Compute a branching set.
Using Lemma 5, determine a branching set Y for Q forbidding I (T ) \ U . If Y = ∅, then return 'No', and store the entry (T , U ) together with the value 'No' in SolTable.
Step 5: Branch on the partial solutions.
For each Y ∈ Y, compute E_Y := Extend(T′, U ∪ Y).
Step 6: Find a smallest extension.
Compute a set E_Y for which |Y ∪ E_Y| = min_{Y′ ∈ Y} |Y′ ∪ E_{Y′}|. Return the set Y ∪ E_Y, and store the entry (T, U) together with the extension Y ∪ E_Y in SolTable.

Lemma 8 When initially called with input (T ∅ , ∅), algorithm Extend is correct, i.e., for any prefix T of P and any partial solution U for T , Extend(T , U ) returns a minimum-size extension for (T , U ) (if existent).
Proof Observe that it suffices to prove the claim for those cases when algorithm Extend does not return a solution in Step 0: its correctness in the remaining cases (so when a solution contained in SolTable for a strongly equivalent input is found and returned in Step 0) follows from Lemma 6.
We are going to prove the lemma by induction on |I \ U|. Clearly, if |I \ U| = 0, then P − U is an empty instance, and hence is trivially solvable. Assume now that I \ U ≠ ∅, and that Extend returns a correct output for any input (T′, U′) with |I \ U′| < |I \ U|.

First, if the algorithm returns ∅ in Step 1, then this is clearly correct. Second, if it returns 'No' in Step 3, then in this case T ≠ T_∅, so Extend(T, U) is a recursive call, and hence was called when branching on a branching set. Thus, there exists some Y ⊆ U with 1 ≤ |Y| ≤ 2 for which T − (U \ Y) is a minimal obstruction Q_0. Hence, Lemma 7 can be applied, which implies the correctness of this step.
Third, if the algorithm returns 'No' in Step 4 because it finds that the branching set Y forbidding I (T )\U for the minimal obstruction Q is empty, then this, by the definition of a branching set (forbidding I (T ) \ U ) and by the soundness of the algorithm of Lemma 5, means that there is no solution S for P − U disjoint from I (T ) \ U . But then we also know that there is no solution S for P for which S ∩ I (T ) = U holds, so there is no extension for (T , U ). Hence this step is correct as well.
Therefore, we can assume that the algorithm's output is Y ∪ E_Y for some Y ∈ Y, where E_Y = Extend(T′, U ∪ Y) and T′ is the largest prefix of P for which T′ − U = Q. As |I \ (U ∪ Y)| < |I \ U| for any Y ∈ Y, the induction hypothesis implies that Extend runs correctly on all inputs (T′, U ∪ Y), Y ∈ Y. Hence, E_Y is an extension for (T′, U ∪ Y) and so U ∪ Y ∪ E_Y is a solution for P. Moreover, since E_Y ∩ I(T′) = ∅ and I(T′) ⊇ I(T) by Step 3, we know that E_Y is disjoint from I(T). Since Y is contained in a branching set for Q in P − U forbidding I(T) \ U, we get that Y ∪ E_Y is disjoint from I(T) as well. Thus, Y ∪ E_Y is an extension for (T, U).
It remains to argue that if E is an extension for (T, U), then |Y ∪ E_Y| ≤ |E|. Clearly, E is a solution for P − U disjoint from I(T) \ U. By the definition of a branching set forbidding I(T) \ U and the correctness of Lemma 5, we know that there must exist a solution E′ for P − U disjoint from I(T) \ U, having size at most |E|, and satisfying E′ ∩ I(Q) ∈ Y; let Y := E′ ∩ I(Q). Using again the induction hypothesis, we get that Extend(T′, U ∪ Y) returns an extension E_Y for (T′, U ∪ Y) of minimum size, so in particular, |E_Y| ≤ |E′ \ Y|, implying |Y ∪ E_Y| ≤ |E′| ≤ |E|. Thus, by our choice of Y, we get that the output of Extend(T, U) (that is, Y ∪ E_Y) has size at most |E|. This proves our claim. Therefore, we get that if Extend returns an output in Step 6, then this output is correct.

Lemma 8 immediately gives us an algorithm to solve PID: Extend(T_∅, ∅) returns a solution S for P of minimum size; we only have to compare |S| with the desired solution size k.
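As a sanity check for an implementation of Extend, a brute-force reference solver for tiny instances can enumerate deletion sets in order of increasing size and test solvability via the matching characterization of Lemma 1; it runs in exponential time and is purely illustrative (the profile encoding is an assumption):

```python
# Brute-force reference for PID on tiny instances: find a smallest deletion
# set whose removal leaves a solvable profile. Exponential; illustration only.

from itertools import combinations

def solvable(preferences):
    """Matching test on the underlying graph (as in Lemma 1)."""
    n = len(preferences)
    num_items = len(next(iter(preferences.values())))
    if num_items % n != 0:
        return False                 # items cannot be split evenly
    edges = {(x, i): set(lst[: (i - 1) * n + 1])
             for x, lst in preferences.items()
             for i in range(1, num_items // n + 1)}
    matched = {}

    def augment(slot, seen):
        for item in edges[slot]:
            if item not in seen:
                seen.add(item)
                if item not in matched or augment(matched[item], seen):
                    matched[item] = slot
                    return True
        return False

    return all(augment(slot, set()) for slot in edges)

def min_deletion(preferences):
    """Smallest item set S such that the profile minus S is solvable."""
    items = sorted(next(iter(preferences.values())))
    for k in range(len(items) + 1):
        for S in combinations(items, k):
            reduced = {x: [p for p in lst if p not in S]
                       for x, lst in preferences.items()}
            if solvable(reduced):
                return set(S)
```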
The next lemma states that Extend gets called polynomially many times.

Lemma 9
Throughout the run of algorithm Extend initially called with input (T ∅ , ∅), the table SolTable contains O(|I | 7 ) entries.
Proof Let us consider table SolTable at a given moment during the course of algorithm Extend, initially called with the input (T_∅, ∅) (and having possibly performed several recursive calls since then). Let us fix a prefix T. We are going to give an upper bound on the maximum cardinality of the family U_T of partial solutions U for T for which SolTable contains the entry (T, U). By Step 0 of algorithm Extend, no two sets in U_T are strongly equivalent. Recall that if U_1 and U_2, both in U_T, are not strongly equivalent with respect to T, then either |U_1| ≠ |U_2|, or U_1 ∩ δ(T) ≠ U_2 ∩ δ(T), or defpat(T − U_1) ≠ defpat(T − U_2).

Let us partition the sets in U_T into groups: we put U_1 and U_2 in the same group if |U_1| = |U_2| and U_1 ∩ δ(T) = U_2 ∩ δ(T). Examining Steps 2–4 of algorithm Extend, we can observe that if U ≠ ∅, then for some Y_U ⊆ U of size 1 or 2, the prefix T − (U \ Y_U) is a minimal obstruction Q_U. Since removing items from a prefix cannot increase the size of its boundary, Lemma 3 implies that the boundary of T − U contains at most 3 items. We get |δ(T) \ U| = |δ(T − U)| ≤ 3, from which it follows that δ(T) ∩ U is a subset of δ(T) of size at least |δ(T)| − 3. Therefore, the number of different values that δ(T) ∩ U can take is O(|I|^3). Since any U ∈ U_T has size at most |I|, we get that there are O(|I|^4) groups in U_T. Let us fix some group U_g of U_T. We are going to show that the number of different deficiency patterns for T − U where U ∈ U_g is constant.
Recall that the deficiency pattern of T − U consists of triples associated with prefixes R of P − U that have a slant or a straight shape; the first coordinate of such a triple is size(R), and its remaining coordinates are determined by R_∩, the intersection of T − U and R. First observe that by the definition of a group, size(T − U_1) = size(T − U_2) holds for any U_1, U_2 ∈ U_g. Let us fix an arbitrary U ∈ U_g. Since T − U can be obtained by deleting 1 or 2 items from a minimal obstruction, Lemma 2 implies that there can only be a constant number of prefixes R of P − U which intersect T − U and have a slant or a straight shape; in fact, it is not hard to check that the number of such prefixes R is at most 5 for any given T − U. Therefore, the number of values taken by the first coordinate size(R) of any triple in the deficiency pattern of T − U is constant. Since T − U has the same size for any U ∈ U_g, we also get that these values coincide for any U ∈ U_g. Hence, we obtain that (A) the total number of values the first coordinate of any triple in the deficiency pattern of T − U for any U ∈ U_g can take is constant.
Let R_∩ be the intersection of T − U and some prefix of straight or slant shape. By definition, R_∩ is contained in Q_U. Since Q_U is a minimal obstruction, Lemma 2 leaves only a constant number of possibilities for R_∩, and hence (B) the number of values the remaining coordinates of any triple in the deficiency pattern of T − U can take is constant as well. By (A) and (B), the number of different deficiency patterns for T − U with U ∈ U_g is constant, so each group contributes only a constant number of entries to SolTable. As there are O(|I|^4) groups for each prefix T, and the number of prefixes of P is O(|I|^3), we obtain that SolTable contains O(|I|^7) entries.

We are now able to formulate our main theorem, stating that PID is solvable in polynomial time for three agents. To bound the running time of the algorithm, let us distinguish between two types of recursive calls to Extend: a call Extend(T, U) is regular if Step 0 does not produce an output during its execution; otherwise we refer to this call as a shadow call. We first give an upper bound for the time spent on regular calls. Note that in each such call, an entry is added to SolTable. By Lemma 9, SolTable contains O(|I|^7) entries, and therefore the number of regular calls to Extend is also O(|I|^7). This gives us an upper bound of O(|I|^11) on the total time spent on regular calls to Extend.
To bound the time spent on shadow calls, observe that each regular call may give rise to at most O(|I|^2) shadow calls (and no recursive calls are performed within a shadow call). Hence, the number of shadow calls is O(|I|^9). Since Step 0 takes O(|I|) time, this yields a bound of O(|I|^10) for the total time spent on shadow calls. Hence the total running time of Extend(T_∅, ∅) is as claimed.

PID with Fixed Allocation
In this section we investigate a version of PID where an allocation is given in advance, and we want to make this allocation proportional by item deletion. The input of the problem, which we refer to as PID with Fixed Allocation, consists of a preference profile P = (N , I , L), an allocation π : I → N , and an integer k ∈ N. We call a set S ⊆ I of items a solution for (P, π), if the restriction of π to I \ S is proportional for the profile P − S; the task is to find a solution of size at most k for (P, π).
Since we are given a fixed allocation, the concept of minimal obstruction can be simplified accordingly: we say that agent x becomes envious at index i for some i ∈ {1, . . . , |I|} if the number of items in L_x[1:i] assigned to x by π is less than i/|N|, but for any smaller index j < i the number of items in L_x[1:j] assigned to x by π is at least j/|N|. Clearly, if no agent becomes envious at any index, then the allocation π is proportional.
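This index-wise test is easy to implement directly. The sketch below uses our own helper names (not from the paper): it finds the first index at which an agent becomes envious under a fixed allocation, comparing with exact fractions to avoid rounding issues, and then applies the test to every agent.

```python
from fractions import Fraction

def first_envy_index(pref, owner, agent, n_agents):
    """Return the smallest 1-based index i at which `agent` becomes envious:
    the number of items among pref[:i] assigned to `agent` is less than
    i / n_agents. Returns None if no such index exists."""
    owned = 0
    for i, item in enumerate(pref, start=1):
        if owner[item] == agent:
            owned += 1
        if owned < Fraction(i, n_agents):
            return i
    return None

def is_proportional(prefs, owner, n_agents):
    """A fixed allocation is proportional iff no agent becomes envious
    at any index of their own preference list."""
    return all(first_envy_index(lst, owner, agent, n_agents) is None
               for agent, lst in prefs.items())
```

Note that the first violating index is automatically the index at which the agent "becomes" envious in the sense of the definition, since every smaller index satisfies the threshold.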
Our first observation is that PID with Fixed Allocation can be solved in polynomial time if there are only two agents. To show this, we propose an algorithm that we call GreedyDel. Suppose that N contains only two agents. For an agent x ∈ N, we denote by x̄ the other agent, i.e., x̄ ∈ N with x̄ ≠ x.
Step 0: If k < 0, then return 'No'.
Step 1: If π is proportional for P, then return 'Yes'.
Step 2: Perform a greedy deletion: Let x denote an agent and i an index such that x becomes envious at i. Compute the item s that is the least preferred by agent x̄ among all items of L_x[1:i] assigned to x̄ by π, and call GreedyDel(P − {s}, π|_{I \ {s}}, k − 1).
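A minimal sketch of GreedyDel for two agents, assuming preference lists are given as ranked item lists and the allocation as an item-to-agent map (function names and data layout are our own):

```python
from fractions import Fraction

def first_envy_index(pref, owner, agent, n):
    """Smallest 1-based index where `agent` becomes envious, else None."""
    owned = 0
    for i, item in enumerate(pref, start=1):
        if owner[item] == agent:
            owned += 1
        if owned < Fraction(i, n):
            return i
    return None

def greedy_del(prefs, owner, k):
    """Returns the list of deleted items if at most k deletions make the
    fixed allocation proportional, and None otherwise."""
    a1, a2 = list(prefs)
    deleted = []
    while True:
        envy = None
        for x in (a1, a2):                  # Step 1: test proportionality
            i = first_envy_index(prefs[x], owner, x, 2)
            if i is not None:
                envy = (x, i)
                break
        if envy is None:
            return deleted                  # 'Yes': allocation is proportional
        if len(deleted) == k:
            return None                     # Step 0: deletion budget exhausted
        x, i = envy
        other = a2 if x == a1 else a1
        # Step 2: among the items of L_x[1:i] assigned to the other agent,
        # delete the one that agent likes least (deepest in their list)
        cand = [it for it in prefs[x][:i] if owner[it] == other]
        s = max(cand, key=prefs[other].index)
        deleted.append(s)
        prefs = {ag: [it for it in lst if it != s] for ag, lst in prefs.items()}
```

The candidate set is never empty: if x becomes envious at i, then strictly more than half of L_x[1:i] is assigned to the other agent.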

Theorem 8 If the number of agents is two, then PID with Fixed Allocation can be solved in polynomial time by algorithm GreedyDel.
Proof It is easy to see that algorithm GreedyDel can be implemented in quadratic running time. To prove its correctness, let us consider the steps of GreedyDel on our input instance (P, π, k). Note that Steps 0 and 1 are clearly correct. We claim that the deletion performed in Step 2 is safe in the following sense: if S is a solution for (P, π) and s is the item deleted by the algorithm in Step 2, then there exists a solution S′ for (P, π) with |S′| ≤ |S| that contains s. Clearly, if this holds, then GreedyDel is correct.
Let x be the agent and i the index for which x becomes envious at i, as found in Step 2. Let S_{[1:i]} denote those items of S that are contained in L_x[1:i]. Suppose that S does not contain the deleted item s. However, since S is a solution, we know that S_{[1:i]} must contain strictly more items assigned by π to x̄ than to x, i.e.,

|S_{[1:i]} ∩ π^{−1}(x̄)| > |S_{[1:i]} ∩ π^{−1}(x)|. (3)

Let s′ be the item least preferred by x in S_{[1:i]} ∩ π^{−1}(x̄). We will show that S′ = (S \ {s′}) ∪ {s} is a solution for (P, π), proving our claim that Step 2 is safe. Note that by our choice of s, agent x̄ prefers s′ to s, so x̄ cannot become envious at any index in P − S′ (for x̄, deleting her less preferred item s is weakly better at every prefix of L_x̄ than deleting s′). If x prefers s to s′, then x cannot become envious at any index in P − S′ either (because deleting an item assigned to x̄ that comes before s′ in L_x is always at least as good for x as deleting s′), so in this case S′ is a solution for (P, π). Thus, for the sake of contradiction, assume that x prefers s′ to s, and that x becomes envious somewhere in P − S′. This means that there exists an index j ∈ {1, . . . , |I|} such that

|J_x \ S′| < |J_x̄ \ S′|, (4)

where J_x and J_x̄ denote the set of items in L_x[1:j] assigned to x and to x̄ by π, respectively (note that L_x is the preference list of x in the original instance P). First, if J_x̄ contains either both s and s′ or neither of them, then |J_x̄ ∩ S| = |J_x̄ ∩ S′|, so

|J_x \ S′| = |J_x \ S| ≥ |J_x̄ \ S| = |J_x̄ \ S′|,

where the first equality is implied by {s, s′} ∩ J_x = ∅, and the inequality follows from the fact that S is a solution for (P, π). This contradicts (4).
Second, if |J_x̄ ∩ {s, s′}| = 1, then s′ ∈ J_x̄ but s ∉ J_x̄, because x prefers s′ to s. In this case we know j < i (as s lies in L_x[1:i] but not in L_x[1:j]). Using that s′ is the item least preferred by x among all items of S_{[1:i]} ∩ π^{−1}(x̄) but it still falls within L_x[1:j], we know that all items of S_{[1:i]} ∩ π^{−1}(x̄) fall within L_x[1:j] and are thus contained in J_x̄. From this, we get that

|J_x̄ ∩ S| ≥ |S_{[1:i]} ∩ π^{−1}(x̄)| > |S_{[1:i]} ∩ π^{−1}(x)| ≥ |J_x ∩ S|,

where the strict inequality follows from (3). Recall that x only becomes envious at i in P (and not at j < i), and therefore |J_x| ≥ |J_x̄|, leading us to

|J_x \ S′| = |J_x| − |J_x ∩ S| ≥ |J_x̄| − |J_x̄ ∩ S| + 1 = |J_x̄ \ S| + 1 = |J_x̄ \ S′|,

which again contradicts (4).
Interestingly, PID with Fixed Allocation becomes NP-hard if the number of agents is six. Hence, providing an allocation in advance does not seem to make PID much easier; the intuitive reason is that the given allocation may be quite unreasonable, so it can become a hindrance to proportionality rather than a help. Our results leave open the computational complexity of the problem when the number of agents is in {3, 4, 5}.

Theorem 9 If the number of agents is six, then PID with Fixed Allocation is NP-complete.
Proof It is straightforward to see that the problem is in NP. To prove its NP-completeness, we are going to present a reduction from the Cubic Monotone 1-in-3-SAT problem, whose input is a propositional formula ϕ in conjunctive normal form where variables only occur as positive literals, and the underlying graph G is a cubic graph (i.e., every vertex has degree 3). Formally, we define G = (V ∪ C, E) as a bipartite graph whose two vertex classes are the set V of variables and the set C of clauses, and a variable v ∈ V is adjacent to a clause C ∈ C in G if and only if v appears in C. Since G is cubic, each variable occurs in exactly three clauses, and conversely, each clause contains exactly three distinct variables. The task in the Cubic Monotone 1-in-3-SAT problem is to decide whether there exists a truth assignment for ϕ where exactly one out of three variables is true in each clause; we will call such a truth assignment valid. Moore and Robson [21] proved that this problem is NP-hard.
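The 1-in-3 semantics can be made concrete with a small brute-force checker (exponential, for intuition only; the toy instance in the test is our own and is cubic, since each of the three variables occurs in each of the three clauses):

```python
from itertools import product

def is_valid(assignment, clauses):
    """Valid = exactly one variable is true in every clause
    (1-in-3 semantics; all literals are positive)."""
    return all(sum(assignment[v] for v in clause) == 1 for clause in clauses)

def one_in_three(variables, clauses):
    """Brute-force search for a valid assignment; returns one if it
    exists, else None. Not an efficient algorithm, just the definition."""
    for bits in product([False, True], repeat=len(variables)):
        alpha = dict(zip(variables, bits))
        if is_valid(alpha, clauses):
            return alpha
    return None
```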
Given the input formula ϕ, we construct an instance (P, π, k) of PID with Fixed Allocation such that ϕ admits a valid truth assignment if and only if (P, π) admits a solution of size at most k. Let ϕ contain variables v_1, . . . , v_n and clauses C_1, . . . , C_n; note that n ≡ 0 mod 3 must hold, as G is cubic. We define the set of agents as N = {f, f̂, v, v′, w, w′}, as well as a set I_x containing some newly introduced items for each agent x ∈ N; we set |I_f| = 3n, |I_f̂| = 2n, and |I_x| = 4n for any other agent x ∈ N \ {f, f̂}. For each variable v_i we introduce three items a_i, b_i, and c_i corresponding to its three occurrences in ϕ, collected in the set V_i = {a_i, b_i, c_i}, together with their negated copies V̂_i = {â_i, b̂_i, ĉ_i}. We let U = V_1 ∪ · · · ∪ V_n and Û = V̂_1 ∪ · · · ∪ V̂_n. Items in Û and U are assigned by π to f and f̂, respectively, while for any agent x ∈ N, items in I_x are assigned to x by π. We set k = 3n.
To finish the definition of profile P = (N, I, L), it remains to give the preferences of each agent, for which we need additional notation. The three items in V_i will correspond to the three occurrences of variable v_i, each a positive literal, while the items in V̂_i will correspond to their negated forms. Hence, for each clause C_i ∈ C, we define P_i as the set of items that, for any j ∈ {1, . . . , n}, contains item a_j, b_j, or c_j if and only if C_i contains the first, second, or third occurrence, respectively, of variable v_j in ϕ. We also define N_i as the set of items containing â_j, b̂_j, or ĉ_j for some j if and only if P_i contains a_j, b_j, or c_j, respectively.
To simplify our notation for the preference lists, we fix an arbitrary ordering ≺ over I so that we can omit listing all irrelevant items in the preference lists: the symbol '···' at the end of a preference list stands for the sequence of all remaining items according to their order in ≺. Also, we write [X] for a set X of items to denote their sequence according to ≺. In the preference list of some agent x ∈ N, it will be sufficient to distinguish between items of D = I_v ∪ I_{v′} ∪ I_w ∪ I_{w′} only up to the point of indicating whether they are assigned to x by π or not. Thus, we will use '•' symbols in L_x to denote items from I_x, and we will use '◦' symbols to denote items from D \ I_x (these will serve as dummy items). If the number of '•' (or '◦') symbols in L_x is ℓ, then they refer to the first ℓ items from I_x (or from D \ I_x, respectively) according to ≺. Now, the preference lists are as shown in Table 1.
Notice that we need to make sure that we do not "run out of" the necessary items when using '◦' symbols: for any agent x, the number of available dummy items is |D \ I_x| ≥ 12n, while L_x contains at most 12n dummies. Hence, P is indeed a profile.
The reduction presented can clearly be computed in polynomial time, so let us prove its correctness. First assume that S is a solution for (P, π) of size at most k. Note that π assigns exactly 4n items to each agent except for f and f̂, while it assigns 6n and 5n items to f and f̂, respectively (recall that f receives all 3n items in Û, while f̂ receives all 3n items in U). Since |I| = 27n and each agent must keep at least (27n − 3n)/6 = 4n items in a proportional allocation, by k = 3n we get that S contains exactly 2n items of Û ∪ I_f and exactly n items of U ∪ I_f̂. In particular, S does not contain any (dummy) item from D.
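The counting argument above can be replayed mechanically. The following sketch uses our own naming ('fh' stands for the hatted agent f̂, 'v2' and 'w2' for v′ and w′) and computes the surplus each agent can afford to lose:

```python
def deletion_quotas(n):
    """Per-agent surplus over the proportional share after k = 3n deletions:
    |I_f| = 3n, |I_fh| = 2n, |I_x| = 4n otherwise; pi additionally gives
    the 3n items of U-hat to f and the 3n items of U to fh."""
    items_of = {'f': 3 * n + 3 * n,              # I_f plus U-hat
                'fh': 2 * n + 3 * n,             # I_f-hat plus U
                'v': 4 * n, 'v2': 4 * n, 'w': 4 * n, 'w2': 4 * n}
    total = sum(items_of.values())               # 27n items in all
    k = 3 * n                                    # deletion budget
    share = (total - k) // 6                     # each agent must keep >= 4n items
    return {agent: cnt - share for agent, cnt in items_of.items()}
```

For every n the surpluses are 2n for f, n for f̂, and 0 for the other four agents, and they sum to exactly k = 3n: every deletion must be charged to f or f̂.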
Fix some agent x ∈ N \ {f, f̂}, and consider the first 7j items in the preference list of x for some j ∈ {1, . . . , 3n}. Observe that L_x[1:7j] contains only j items assigned to x by π. However, π cannot be proportional as long as the set consisting of the first 6j + 1 items in x's preference list contains at most j items assigned to x by π. Hence we know that S must contain at least j items from L_x[1:7j]. Let Q^x_j denote the set of items in L_x[7j − 6 : 7j] that are not in D and hence may be included in the solution S; e.g., Q^v_1 = {a_1, â_1}, Q^v_2 = {b_1, b̂_1}, and so on. Then our observation for each x ∈ N \ {f, f̂} can be written as

|S ∩ (Q^x_1 ∪ · · · ∪ Q^x_j)| ≥ j. (5)

However, notice that Q^v_j = Q^{v′}_{3n+1−j} and also Q^w_j = Q^{w′}_{3n+1−j}, which implies

|S ∩ (Q^x_{j+1} ∪ · · · ∪ Q^x_{3n})| = |S ∩ (Q^{x′}_{3n−j} ∪ · · · ∪ Q^{x′}_1)| ≥ 3n − j (6)

for any x ∈ N \ {f, f̂}, where x′ denotes the partner of x and we abuse notation by setting (v′)′ = v and (w′)′ = w; note that we used Inequality (5) for agent x′ and index 3n − j. Using that |S| = 3n and that the sets Q^x_j, j ∈ {1, . . . , 3n}, are mutually disjoint, it is straightforward to verify that Inequalities (5) and (6) can only hold for each value of j ∈ {1, . . . , 3n} if

|S ∩ Q^x_j| = 1 for every j ∈ {1, . . . , 3n}. (7)
For agents v and v′, (7) implies that u ∈ S if and only if û ∉ S, for any item u ∈ U. Taking into account the statement of (7) for agents w and w′, we obtain that either V_i ⊆ S and V̂_i ∩ S = ∅, or V̂_i ⊆ S and V_i ∩ S = ∅, for each i ∈ {1, . . . , n}.
Let us define a truth assignment α by setting variable v_i ∈ V in ϕ to true if and only if V_i ⊆ S. We claim that α is valid for ϕ, i.e., each clause in C contains exactly one variable v_i for which V_i ⊆ S.
Considering the first 7j items from the preference list of f for some j ∈ {1, . . . , n} and arguing similarly as before, we obtain that

|S ∩ (P_1 ∪ · · · ∪ P_j)| ≥ j. (8)
Analogously, from the first 8j items of the preference list of f̂ (among which only j are assigned to f̂ by π), we get

|S ∩ (N_1 ∪ · · · ∪ N_j)| ≥ 2j. (9)
However, notice that |S ∩ (P_j ∪ N_j)| = 3 for any j ∈ {1, . . . , n}. Hence, Inequalities (8) and (9) imply |S ∩ P_j| = 1 and |S ∩ N_j| = 2 for every j ∈ {1, . . . , n}. This means exactly that α is valid for ϕ.

For the other direction, assume that there exists a valid truth assignment α for ϕ, and let T_α denote the set of true variables. First observe that |T_α| = n/3 must hold, because each variable appears in exactly three clauses, and each of the n clauses contains exactly one true variable. We define a solution S for (P, π) by putting a_i, b_i, and c_i into S for each v_i ∈ T_α, and putting â_i, b̂_i, and ĉ_i into S for each v_i ∈ V \ T_α. Hence |S ∩ U| = n and |S ∩ Û| = 2n. Note that Inequality (5) holds for S for any agent x ∈ N \ {f, f̂} and index j ∈ {1, . . . , 3n}. Inequalities (8) and (9) hold as well for any j ∈ {1, . . . , n}, due to the validity of α. Based on these facts, it is easy to check that S is indeed a solution for (P, π).
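The construction of S in the other direction can likewise be sketched. Item names such as 'a1' and 'a^1' below are illustrative stand-ins for a_i and â_i, and we assume a valid assignment, so that exactly n/3 variables are true:

```python
def deletion_set(true_vars, n):
    """Builds S from a valid assignment: delete the occurrence items of
    every true variable and the hatted items of every false variable.
    Returns (S, |S ∩ U|, |S ∩ U-hat|)."""
    S = set()
    for i in range(1, n + 1):
        if i in true_vars:
            S |= {f'a{i}', f'b{i}', f'c{i}'}       # V_i goes into S
        else:
            S |= {f'a^{i}', f'b^{i}', f'c^{i}'}    # V-hat_i goes into S
    in_U = sum(1 for item in S if '^' not in item)
    return S, in_U, len(S) - in_U
```

With |true_vars| = n/3 this yields |S| = 3n = k, split as n items of U and 2n items of Û, matching the surplus of f̂ and f exactly.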
Our next theorem shows that PID with Fixed Allocation remains computationally intractable even if the number of deletions allowed is small.

Theorem 10 PID with Fixed Allocation is W[2]-hard when parameterized by the size k of the desired solution.
agent of B becomes envious at index |N| + 1 in P − S, then π is guaranteed to be proportional in P − S.

Conclusion and Open Questions
In Section 4 we have shown that Proportionality by Item Deletion is polynomial-time solvable if there are only three agents. In comparison, the problem of obtaining an envy-free allocation by item deletion was shown to be polynomial-time solvable for two agents, but becomes NP-hard already for three agents [4]. We have also proved that if the number of agents is unbounded, then PID becomes NP-hard, and practically intractable already when we want to delete only a small number of items, as shown by the W[3]-hardness result of Theorem 2.
The complexity of PID remains open for the case when the number of agents is a constant greater than 3. Is it true that for every constant n there exists a polynomial-time algorithm solving PID for n agents? The reason why our algorithm is not directly applicable for more than three agents is that Lemma 4 relies heavily on the properties of minimal obstructions observed in Lemma 2; these properties imply a very strict structure for two properly intersecting minimal obstructions. For the case of more than three agents, there are more possibilities for how minimal obstructions may properly intersect, and so we cannot obtain a variant of Lemma 4 for n ≥ 4: it is an open question whether for n ≥ 4 it holds that any inclusion-wise minimal solution S contains at most n − 1 items from a minimal obstruction Q. In fact, it is not even known whether there is a bound f(n), for some function f, on the number of items in S ∩ I(Q); we believe that establishing such a bound would imply the existence of a polynomial-time algorithm for PID for any constant n.
Supposing that there does exist a polynomial-time algorithm for PID for any constant n, can we find an FPT-algorithm with respect to the parameter n? If not, that is, if PID turns out to be NP-hard for some constant number of agents, then can we at least give an FPT-algorithm with parameter k for a constant number of agents (or maybe with combined parameter (k, n))?
Since approximation seems hopeless w.r.t. the number of deletions, it may be interesting to see whether there exists an approximation w.r.t. the number of items obtained by each agent in a proportional way. Our results from Corollary 1 and Theorem 1 show that we cannot get a polynomial-time approximation with a ratio better than 2. Is this lower bound sharp?
Regarding PID with Fixed Allocation, we gave a polynomial-time algorithm for n = 2, but proved the problem to be NP-complete for n = 6. This leaves open the case when the number of agents is in {3, 4, 5}; it would be interesting to close the gap.
Finally, there is ample space for future research if we consider different control actions (such as adding or replacing items), different notions of fairness, or different models for agents' preferences.
Funding Open access funding provided by Budapest University of Technology and Economics.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Appendix A Example for Running Algorithm Extend
In this section we show how the algorithm runs on a profile P with agents a, b, and c ranking the item set {1, 2, . . . , 9, x, y, z, v, u} as below.

Profile P:
a: 1, 2, 3, 6, 4, 7, 8, 9, y, x, z, v, 5, u.
b: 2, 1, 5, 3, 4, 8, 6, x, 7, y, z, u, 9, v.
c: 3, 4, 6, 2, 5, 7, 1, x, 8, y, z, 9, v, u.

At each step, we will only indicate Step 0, Step 1, or Step 3 if it leads to an output. If not, then we give the minimal obstruction Q and the branching set Y computed in Steps 2 and 4. The numbering of the calls reflects how the algorithm performs the recursive calls in Step 5; Fig. 7 depicts the search tree traversed by the algorithm. Table 2 shows the content of SolTable. Sometimes, in order to omit details and avoid tedium, we will shorten the description of a call.