1 Introduction

Algorithms for mobility have been studied for decades in areas such as operations research, optimization, and graph theory. Typical examples include algorithms for the shortest path, minimum spanning tree, maximum flow, traveling salesman, Steiner tree, and facility location problems. These algorithms are used to solve real-world problems in networks such as traffic, information, communication, and social networks. For example, automotive navigation systems are based on shortest path algorithms [48]. Mobility algorithms are thus fundamental to our daily lives.

In the fields of optimization and computer science, much effort has been devoted to constructing efficient algorithms for various types of optimization problems. However, it is often difficult to model real-world problems as traditional optimization problems, and computational barriers, such as NP-hardness, stand in the way; these are briefly described in Sect. 9.2.

As an example of the difficulty of modeling, we may not know in advance all the information needed to solve an optimization problem. Suppose someone is driving to a restaurant and wants to arrive as early as possible. In this scenario, the next turn on the route is selected sequentially, taking the dynamically changing traffic conditions into account. In general, we must sometimes choose an action in each step based only on the current information, while the complete information is revealed piece by piece in the future. Such sequential problems are called online problems, whereas traditional problems, in which the entire input is given in advance, are called offline problems. As another example, real-world problems often involve multiple agents, each of whom aims to optimize their own objective, whereas in traditional optimization problems only one agent makes decisions. In the multi-agent case, achieving an outcome that is satisfactory for everyone is desirable. Problems with multiple agents have been studied in economics and game theory under the name of mechanism design, which aims to design economic mechanisms that achieve desired objectives when agents behave rationally and self-interestedly.

Algorithms for mobility have been studied in a joint project between Kyoto University and Toyota Motor Corporation, titled “Advanced Mathematical Science for Mobility Society,” which started in 2020, with a focus on traditional optimization, online optimization, and mechanism design. In this chapter, we briefly introduce online optimization and mechanism design and present some of the results obtained from this project.

The remainder of this chapter is organized as follows. In Sect. 9.2, we describe approaches to solving traditional combinatorial optimization problems, taking the shortest path and traveling salesman problems as examples. Subsequently, we discuss our results for the reallocation problem. In Sect. 9.3, we introduce the framework of online optimization problems and competitive analysis. In particular, we focus on the online traveling salesman and online dial-a-ride problems, which arise naturally in a mobility society. Subsequently, we describe our results for the online machine scheduling problem, which generalizes both of these online optimization problems. Finally, in Sect. 9.4, we describe the fair division of resources, which is one of the main topics in multi-agent systems. We introduce solution concepts for fair division, such as envy-freeness and proportionality, and present known results for the envy-free and proportional division of divisible resources (so-called cake-cutting) and of indivisible resources. We also describe our results for indivisible resources with and without monetary transfers.

2 Basic Problems and Algorithms for Mobility Society

In this section, we introduce two fundamental optimization problems: the shortest path and traveling salesman problems. We then outline their differences from the viewpoint of computational complexity. Additionally, we briefly describe a series of studies on developing efficient algorithms.

The primary task of an automotive navigation system is to determine a shortest route from an individual’s current location to a given destination. This task is modeled as the shortest path problem, which is described as follows. Let \(G=(V,E)\) be a directed graph with a vertex set V and an edge set \(E \subseteq V\times V\). The vertices correspond to road intersections, and the edges correspond to road segments. Each edge \((u,v)\in E\) has a positive real number \(\ell (u,v)\) called its weight, which represents the length of the corresponding road segment or the cost of traveling from u to v along it. In the shortest path problem, we are given an edge-weighted graph and two vertices, s and t, called the source and the sink, respectively, and the goal is to find a path from the source s to the sink t that minimizes the sum of the weights of the edges on the path. In general, the shortest path problem may allow negative edge weights. However, in this chapter, we only allow nonnegative edge weights because the edge weights in many mobility-related applications correspond to lengths, times, or nonnegative costs.

The traveling salesman problem (TSP) is the problem of determining a shortest tour that visits all the vertices exactly once in a given edge-weighted graph. The TSP has been studied extensively because it has many applications, for example, in planning, logistics, and microchip manufacturing. As a logistics example, consider a single van driver who delivers packages from a depot to customers’ homes. The driver needs a shortest tour that visits all the homes and returns to the depot. In this case, we have a complete graph \(G=(V,V\times V)\) whose vertices V correspond to the depot and the homes, and each edge weight represents the travel time or cost between its two endpoints. The problem of finding a shortest tour is then formulated as the TSP.

The shortest path problem admits an algorithm that runs in polynomial time [17]. Theoretically, such an algorithm is considered efficient. Thus, constructing polynomial-time algorithms is one of the ultimate goals in the fields of optimization and computational complexity. In contrast, the TSP is known to be NP-hard [32], which implies that the TSP is unlikely to be solvable in polynomial time.

Meanwhile, many NP-hard problems, including the TSP, arise in the real world. Hence, considerable effort has been devoted to developing fast algorithms for NP-hard problems from both theoretical and practical viewpoints. Owing to NP-hardness, we must compromise on the running time of the algorithms and/or the optimality of their outcomes, which leads to two main approaches: finding an exact optimal solution as quickly as possible, and finding a sufficiently good solution quickly.

2.1 Approaches for Hard Problems

Most combinatorial optimization problems can be trivially solved by a brute-force search, that is, by enumerating and checking all possible solutions. Thus, determining how to achieve performance better than a brute-force search is an important research topic. Theoretical research aimed at answering this question began in the 1960s. For example, the dynamic programming algorithms proposed by Bellman [8] and by Held and Karp [27] in 1962 solve the TSP with n vertices in \(\textrm{O}(2^n n^2)\) time. This bound is considerably faster than a simple brute-force search, which requires \(\textrm{O}(n\cdot n!)\) time. Such algorithms are now referred to as exact exponential algorithms. For further details, see the book by Fomin and Kratsch [20]. Parameterized complexity theory, a related area of research introduced by Downey and Fellows [18], aims to provide a refined analysis of NP-hard problems in terms of their parameters. In particular, this theory focuses on the existence of algorithms whose running times are bounded by a polynomial in the input length multiplied by a function of the parameter. Overviews of this approach have been presented by Flum and Grohe [19] and by Niedermeier [43]. From a practical viewpoint, many useful methods are based on logic, such as satisfiability (SAT) solvers, or on mathematical programming, such as mixed integer programming (MIP) solvers. There are several mixed integer programming formulations of the TSP, and practical MIP solvers enable us to obtain an (exact) optimal solution for medium-scale TSP instances.
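For concreteness, the following is a minimal Python sketch of the Held–Karp recursion (the function name and interface are our illustration, not taken from [8, 27]); it assumes a complete distance matrix indexed from 0, starts the tour at vertex 0, and returns only the optimal tour length.

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp dynamic program for the TSP.

    dist[i][j] is the travel cost from vertex i to vertex j; the tour
    starts and ends at vertex 0.  Runs in O(2^n * n^2) time for n >= 2.
    """
    n = len(dist)
    # dp[(S, j)] = cost of the cheapest path that starts at vertex 0,
    # visits exactly the vertices in frozenset S, and ends at j in S.
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                dp[(S, j)] = min(dp[(S - {j}, i)] + dist[i][j]
                                 for i in subset if i != j)
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

The table over all vertex subsets is exactly what yields the \(\textrm{O}(2^n n^2)\) bound mentioned above.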

In real-world applications, it is often adequate to find a sufficiently good solution quickly instead of spending much time finding an optimal one. Therefore, researchers have developed approximation algorithms and heuristics to solve real-world problems. Here, an approximation algorithm is a polynomial-time algorithm with a performance guarantee. For a problem instance I of a minimization problem, let \(\textrm{ALG}(I)\) denote the objective value obtained by an algorithm \(\textrm{ALG}\), and let \(\textrm{OPT}(I)\) denote the optimal value of the instance I. For \(\rho ~(\ge 1)\), algorithm \(\textrm{ALG}\) is said to be a \(\rho \)-approximation if \(\textrm{ALG}(I)\le \rho \cdot \textrm{OPT}(I)\) holds for every problem instance I. Approximation algorithms for various difficult problems have been studied extensively since the 1990s.

Unfortunately, the general TSP is hard even to approximate, unless \(\text {P}= \text {NP}\): for any polynomial-time algorithm, there exists a problem instance on which the output value of the algorithm is arbitrarily large compared with the optimal value. On the other hand, the metric TSP, an important special case of the TSP, admits approximation algorithms. Let \(\mathbb {R}_+\) be the set of nonnegative real numbers. The TSP is called metric if the vertex set V and the edge weight \(\ell :V \times V \rightarrow \mathbb {R}_+\) form a metric space, that is, \(\ell \) satisfies three conditions: (1) non-degeneracy: \(\ell (u,v)=0\) if and only if \(u=v\); (2) symmetry: \(\ell (u,v)=\ell (v,u)\) for any \(u,v\in V\); and (3) the triangle inequality: \(\ell (u,v)\le \ell (u,w)+\ell (w,v)\) for any \(u,v,w \in V\). A celebrated result of Christofides [12] is a 1.5-approximation algorithm for the metric TSP, which finds a tour whose length is at most 1.5 times the minimum tour length. A typical example of the metric TSP arises when the vertices are embedded in Euclidean space and the edge weights are the Euclidean distances \(\ell (u,v)=\Vert u-v\Vert _2\). This problem, called the Euclidean TSP, arises naturally in real-world applications such as microchip manufacturing. The Euclidean TSP further admits a polynomial-time approximation scheme [3]: for each fixed positive number \(\varepsilon >0\), we can find a \((1+\varepsilon )\)-approximate solution in polynomial time. Christofides’ 1.5-approximation algorithm remained the best approximation algorithm for the general metric TSP for several decades; recently, improved algorithms have been developed [30, 31]. In addition, substantial progress has been made on algorithms for a special case called the graphic TSP [21, 40, 42, 45].
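Christofides’ algorithm combines a minimum spanning tree with a minimum-weight perfect matching; as a simpler illustration of the same design idea, the following Python sketch implements the classical double-tree 2-approximation (not Christofides’ algorithm itself; the interface is ours). It builds a minimum spanning tree, walks it in preorder, and shortcuts repeated vertices; by the triangle inequality, shortcutting never increases the length, so the tour costs at most twice the tree weight and hence at most twice the optimum.

```python
import heapq

def double_tree_tsp(dist):
    """Double-tree 2-approximation for the metric TSP.

    dist[u][v] must satisfy the triangle inequality.  Builds a minimum
    spanning tree with Prim's algorithm, then shortcuts a preorder walk
    of the tree into a tour starting and ending at vertex 0.
    """
    n = len(dist)
    in_tree = [False] * n
    children = [[] for _ in range(n)]
    heap = [(0, 0, 0)]                      # (edge weight, vertex, parent)
    while heap:
        w, u, p = heapq.heappop(heap)
        if in_tree[u]:
            continue
        in_tree[u] = True
        if u != 0:
            children[p].append(u)           # record the MST edge p-u
        for v in range(n):
            if not in_tree[v]:
                heapq.heappush(heap, (dist[u][v], v, u))
    # Preorder walk of the MST; skipping already-visited vertices is
    # exactly the "shortcut" step justified by the triangle inequality.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    tour.append(0)
    return tour, sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))
```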

Simultaneously, considerable effort has been devoted to developing better heuristics from a practical viewpoint. A well-known heuristic for the TSP is k-opt, whose basic idea is to repeatedly remove k disjoint edges and reassemble the fragments into a different tour. Generalizing this, the Lin–Kernighan method [35] repeats the k-opt procedure while varying k. This is one of the core techniques for constructing better heuristics for the TSP when the edge weights are symmetric. Some studies have also applied general heuristics, such as genetic algorithms and simulated annealing, to the TSP.
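For concreteness, here is a minimal Python sketch of the simplest case, 2-opt (the function name and interface are ours): removing two edges and reconnecting the fragments corresponds to reversing a segment of the tour, and the procedure repeats any such reversal that shortens the tour.

```python
def two_opt(tour, dist):
    """Improve a closed tour by repeated 2-opt moves.

    tour is a list of vertices with tour[0] == tour[-1]; dist[u][v] is
    assumed symmetric, as 2-opt requires.
    """
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                # Replacing edges (i-1, i) and (j, j+1) by (i-1, j) and
                # (i, j+1) changes the length by delta.
                delta = (dist[tour[i - 1]][tour[j]] + dist[tour[i]][tour[j + 1]]
                         - dist[tour[i - 1]][tour[i]] - dist[tour[j]][tour[j + 1]])
                if delta < -1e-12:
                    tour[i:j + 1] = reversed(tour[i:j + 1])  # the 2-opt move
                    improved = True
    return tour
```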

Finally, we note that a polynomial-time algorithm does not necessarily run quickly in practice, particularly when the problem is large. Therefore, even though the shortest path problem is solvable in polynomial time, much additional effort has been devoted to finding a short path quickly. For example, Dijkstra’s algorithm [17], a well-known polynomial-time algorithm for the shortest path problem, is impractical on a large graph (e.g., a huge road network or social network). This is because the algorithm essentially determines a shortest path tree, which consists of shortest paths from the source to all the other vertices. To overcome this drawback, the \(\textrm{A}^*\) algorithm [26] was proposed; it determines a shortest path from the source to the sink, rather than a shortest path tree, by incorporating heuristics into Dijkstra’s algorithm. Approximation algorithms for the shortest path problem have also attracted attention [1, 7].
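The following Python sketch (our own illustration, not the method of [26]) shows the simplest point-to-point speedup: run Dijkstra’s algorithm but stop as soon as the sink is settled, instead of computing the full shortest path tree. A* goes further by biasing the search toward the sink with a heuristic lower bound on the remaining distance.

```python
import heapq

def shortest_path(adj, s, t):
    """Dijkstra's algorithm with early termination at the sink.

    adj maps each vertex to a list of (neighbor, nonnegative weight)
    pairs; returns the distance from s to t and one shortest path.
    """
    prev = {s: None}
    dist = {s: 0}
    settled = set()
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)
        if u == t:                      # early exit: t is settled
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return d, path[::-1]
        for v, w in adj[u]:
            if v not in settled and d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return float("inf"), []             # t unreachable from s
```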

2.2 Reallocation Scheduling

In this subsection, we present the results for reallocation scheduling that were obtained in the project. Reallocation scheduling is a fundamental problem in various fields, such as supply chain management, logistics, and transportation science. Many reallocation models have been studied from both theoretical and practical perspectives. Reallocation models can be categorized based on three aspects: (i) the reallocation cost, (ii) the fungibility of products, and (iii) parallel versus sequential execution.

For example, let us consider the dial-a-ride problem, which involves designing schedules for vehicles to handle customer requests. Specifically, we consider a company that provides delivery services. This company dispatches identical delivery vehicles to pick up and drop off packages at the requested locations. Each request involves carrying a package between two specified locations. To process a request, a vehicle must first move to the pickup location, and then the vehicle transports the package to its drop-off location. The vehicle completes the process by handing over the package. The problem is to find a schedule that minimizes some objective, for example, the makespan or sum of the completion times of the vehicles. Here, the makespan is the time at which all vehicles return to the depot after completing all requests. This problem can be regarded as a vehicle routing problem for reallocation [15, 25, 28]. In the dial-a-ride problem, (i) the reallocation cost depends on the route; (ii) customers are not fungible; and (iii) reallocation is performed sequentially for each vehicle. Another example is the bike-share rebalancing problem, which involves scheduling trucks to redistribute shared bikes with the objective of minimizing the total cost [16]. In this problem, conditions (i) and (iii) for the dial-a-ride problem also apply: the reallocation cost depends on the route, and reallocation is performed sequentially for each truck. However, condition (ii) does not apply: shared bikes are fungible. That is, we can fulfill the desired bike distribution without distinguishing between bikes. In both of these problems, costs arise from the transportation of the delivery vehicles rather than the reallocation of the products.

In the project, Ishii et al. [29] considered reallocation scheduling in which (i) the cost (i.e., transit time) of reallocating a product is given in advance; (ii) products are not fungible; and (iii) reallocations are performed in parallel. We refer to this problem as the reallocation problem. For example, suppose that several warehouses store many products (or items). Some products are already stored in their designated warehouses, whereas others are stored in temporary warehouses; the latter must be reallocated to their designated warehouses. Reallocating a product p requires a certain amount of time, called the transit time of p. Each product also has a size, and each warehouse has three types of capacity: (a) the capacity of the warehouse itself, (b) the carry-in size capacity, and (c) the carry-out size capacity. The capacity of type (a) restricts the total number of products that can be stored in a warehouse at any moment, while those of types (b) and (c) restrict the total size of products that are simultaneously carried in and out, respectively. For this setting, the goal is to find a reallocation schedule with the minimum completion time.

The project investigated the computational complexity of the reallocation problem under various scenarios and obtained the following results.

Theorem 9.1

(Ishii et al. [29]) The reallocation problem with uniform product sizes and uniform transit times can be solved in \({\mathrm O}(mn \log m)\) time, where m and n denote the number of products and warehouses, respectively.

Theorem 9.2

(Ishii et al. [29]) The reallocation problem is NP-hard in general.

The paper [29] also provided approximation algorithms for the reallocation problem under several scenarios that arise in real applications.

3 Online Optimization for Mobility Society

In online problems, the complete input is not known in advance but is revealed piece by piece. Decisions must be made based only on the currently known information, without knowledge of the future. Online problems have been studied actively in theoretical computer science since the seminal paper by Sleator and Tarjan in 1985 [46]. The authors suggested measuring the performance of an online algorithm by comparing it with an optimal offline algorithm, in a manner similar to an approximation ratio. For an input sequence \(\sigma \), let \(\textrm{ALG}(\sigma )\) and \(\textrm{OPT}(\sigma )\), respectively, denote the costs incurred by an online algorithm \(\textrm{ALG}\) and an optimal offline algorithm \(\textrm{OPT}\). For \(\rho ~(\ge 1)\), algorithm \(\textrm{ALG}\) is said to be (asymptotically) \(\rho \)-competitive if a nonnegative constant \(\alpha \) exists such that

$$\begin{aligned} \textrm{ALG}(\sigma )\le \rho \cdot \textrm{OPT}(\sigma )+\alpha \end{aligned}$$

for any input sequence \(\sigma \).

Sleator and Tarjan [46] studied the paging problem, which is an important problem in the implementation of computer operating systems. Consider a two-level memory system consisting of a primary and a secondary memory. The primary memory is typically small, while the secondary one is large. Suppose that we have a sequence of requests, each of which specifies a page in the memory system. A request is served if the corresponding page is in the primary memory. If the page is not in the primary memory, a page fault occurs; some page must then be evicted from the primary memory, and the requested page must be loaded from the secondary memory into the primary memory. The aim is to minimize the number of page faults incurred over the request sequence. Sleator and Tarjan [46] proved that any deterministic online algorithm for the paging problem is at least k-competitive, where k is the number of pages that fit in the primary memory, and that the algorithms known as least recently used (LRU) and first-in first-out (FIFO) are k-competitive.
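As a small illustration (our own sketch, with a hypothetical interface), the following Python function simulates the LRU policy and counts its page faults; changing the eviction rule to plain insertion order would simulate FIFO instead.

```python
from collections import OrderedDict

def count_faults_lru(requests, k):
    """Count the page faults of the least-recently-used policy.

    k is the number of pages the primary memory can hold; requests is
    the sequence of requested pages.
    """
    cache = OrderedDict()                  # least recently used page first
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)        # the page becomes most recent
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)  # evict the least recently used
            cache[page] = True
    return faults

print(count_faults_lru([1, 2, 3, 1, 4, 2], k=3))  # 5 faults
```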

The k-server problem proposed by Manasse et al. [39] is a generalization of the paging problem and is one of the most extensively studied online problems. Suppose that we have k mobile servers located in a metric space. We are given a sequence of service requests, each of which specifies a point to visit in the space. When a request arrives, we must immediately select a server to move to the requested point without knowing future requests; the next request becomes known only when the current request has been served. The aim is to assign servers to requests so as to minimize the total distance moved by the servers. Manasse et al. [39] conjectured that a k-competitive algorithm exists for the k-server problem. Koutsoupias and Papadimitriou [34] proved that the work function algorithm has a competitive ratio of at most \(2k-1\) in any metric space. Despite many efforts, the conjecture is still open.
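To illustrate the online decision model (this is our own sketch, not an algorithm from [34, 39]), the following Python code serves requests on the real line by always moving the nearest server. This greedy rule is known not to be competitive in general: if requests alternate between two nearby points far from the second server, greedy pays for every request while an offline solution moves the second server once, which is why more careful rules such as the work function algorithm are studied.

```python
def greedy_k_server(positions, requests):
    """Serve requests on the real line by moving the nearest server.

    positions is the list of initial server locations; returns the
    total distance moved.  Illustrative only: greedy is not competitive.
    """
    positions = list(positions)
    total = 0.0
    for r in requests:
        i = min(range(len(positions)), key=lambda j: abs(positions[j] - r))
        total += abs(positions[i] - r)
        positions[i] = r               # the chosen server moves to r
    return total
```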

Coester and Koutsoupias [13] studied the online k-taxi problem, which is a generalization of the k-server problem. In this problem, k taxis serve a sequence of requests in a metric space; a request consists of two points s and t, representing a passenger who wants to be carried by a taxi from s to t.

The online k-server and k-taxi problems assume that a new request arrives after the current request has been processed. To handle a situation in which each request arrives at a certain time, we require a real-time model. The online k-TSP problem is a real-time version of the k-server problem, wherein each request arrives at a release time, and the servers can move at a unit speed. When the requests are provided in advance, and there is only one server, the problem coincides with the TSP. The online k-dial-a-ride problem is a real-time version of the k-taxi problem. This is also a generalization of the online k-TSP. The goal of this problem is to minimize the makespan. Ascheuer et al. [4] proposed the SMARTSTART algorithm, which achieves the best possible competitive ratio of 2.

Goko et al. [23] introduced a scheduling problem called the online \(({\boldsymbol{M}},k)\)-scheduling problem, which generalizes the online k-dial-a-ride problem. In this problem, k machines move in a metric space \({\boldsymbol{M}}\). For example, consider a company that provides on-demand delivery services. As in the online k-dial-a-ride problem, we need to design schedules for the vehicles to process all requests so that the makespan is minimized. However, unlike in the online k-dial-a-ride problem, we do not know the destination state of a request until we begin to process it. Furthermore, the time required to process a request is not known until the process ends, because it also includes the time for pickup at the source and handover at the destination. Here, we assume that each vehicle can carry only one package at a time.

In the paper [23], Goko et al. provided the following result.

Theorem 9.3

(Goko et al. [23]) The online \(({\boldsymbol{M}},k)\)-scheduling problem is \(\Theta (\frac{\log k}{\log \log k})\)-competitive.

This theorem means that there exists an \(\textrm{O}(\log k/\log \log k)\)-competitive online algorithm and that every online algorithm is \(\mathrm{\Omega }(\log k/\log \log k)\)-competitive. The result is obtained in two steps. First, to address the unknown release times, we provided a framework that turns any \(\rho \)-competitive algorithm for the basic online \(({\boldsymbol{M}},k)\)-scheduling problem, in which all jobs are released at time 0, into a \(\min \{2\rho +1/2,\,\rho +2\}\)-competitive algorithm. Then, to address the unknown destination states and processing times, we constructed an \(\textrm{O}(\log k/\log \log k)\)-competitive algorithm for the basic problem. We also improved the competitive ratios of the online \(({\boldsymbol{M}},k)\)-scheduling problem in certain special cases.

4 Mechanism Design for Mobility Society

In traditional optimization problems, only one agent makes decisions. However, real-world problems often involve multiple agents, each of whom aims to optimize their own gain. In this setting, an outcome should be satisfactory for everyone. A preferable outcome is formulated as a solution that satisfies concepts such as equilibrium, stability, envy-freeness, proportionality, and popularity, and algorithms that achieve such desirable outcomes are essential in our daily lives. For example, when several passengers share the same taxi, they typically wish to split the taxi fare fairly. In terms of mechanism design, one way to achieve this is to calculate the payment of each passenger based on the Shapley value, a basic solution concept in cooperative game theory. Roughly speaking, the Shapley value of a passenger is the passenger’s marginal contribution to the taxi fare, averaged over all subsets of the other passengers.
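The following Python sketch computes Shapley values by averaging marginal contributions over all arrival orders, which is equivalent to the subset-averaging definition. The fare function and the drop-off distances in the example are hypothetical, chosen only to illustrate a shared taxi whose fare is determined by the farthest drop-off point.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, cost):
    """Shapley values of a cost-sharing game, by enumerating orders.

    cost maps a frozenset of players to the fare of a taxi serving
    exactly that set.  Exponential time, so a sketch for small n only.
    """
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        served = frozenset()
        for p in order:
            # p pays their marginal contribution in this arrival order.
            phi[p] += cost(served | {p}) - cost(served)
            served |= {p}
    norm = factorial(len(players))
    return {p: v / norm for p, v in phi.items()}

# Hypothetical example: passengers dropped off along one route, so a
# coalition pays the fare up to its farthest drop-off point.
drop_off = {"a": 6, "b": 12, "c": 42}
fare = lambda S: max((drop_off[p] for p in S), default=0)
print(shapley_values(list(drop_off), fare))  # {'a': 2.0, 'b': 5.0, 'c': 35.0}
```

In the example, the fare of each road segment is effectively shared equally among the passengers who travel along it, which is the intuitively fair split.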

In the joint project, we have studied stable matchings [24, 38] and fair division [2, 22, 33, 37]. In the remainder of this section, we describe the results for fair division in [33, 37]. Specifically, we define solution concepts for fair division, such as envy-freeness and proportionality, and then outline the related results from the joint project.

Fair division is the problem of dividing a set of resources among several agents in such a way that every agent receives their due share. This problem arises in various real-world settings, such as the division of an inheritance, electronic frequency allocation, and airport traffic management. Cake-cutting is a fundamental fair division problem that has been studied in economics and mathematics since the 1940s. Let \(N=\{1,2,\dots , n\}\) be a set of n agents. A cake C is a heterogeneous divisible good, represented by the interval [0, 1]. Let \(\mathfrak {C}\) denote the set of Borel subsets of C. Each agent \(i\in N\) has a valuation measure \(v_i:\mathfrak {C} \rightarrow \mathbb {R}_+\). An n-tuple \((C_1,\dots ,C_n) \in \mathfrak {C}^n\) is called a partition of C if \(\bigcup _{i \in N} C_i = C\) and \(C_i \cap C_j = \emptyset \) for any distinct \(i,j \in N\). For a partition \((C_1,\dots ,C_n)\) of C, agent \(i \in N\) receives \(C_i\). A partition \((C_1,\dots ,C_n)\) of C is called envy-free (EF) if \(v_i(C_i) \ge v_i(C_j)\) for any \(i,j\in N\); that is, no agent i has an incentive to exchange their own piece \(C_i\) for another piece \(C_j\).

In the literature, we assume that each valuation measure \(v_i\) (\(i\in N\)) satisfies the following natural properties:

  1. Normalization: \(v_i(C)=1\) and \(v_i(\emptyset ) = 0\).

  2. Non-atomicity: \(v_i(\{x\}) = 0\) for every single point \(x \in C\).

  3. \(\sigma \)-additivity: \(v_i(\bigcup _{k=1}^{\infty } X_k) = \sum _{k=1}^\infty v_i(X_k)\) for every sequence of disjoint sets \(X_1, X_2, \dotsc \in \mathfrak {C}\).

For such valuations, an EF partition of the cake always exists [47]. For example, if \(n=2\), the cut-and-choose (also called divide-and-choose) protocol finds an EF partition. In this protocol, one agent, called the cutter, cuts the cake into two pieces that she values equally; the other agent, called the chooser, chooses one of the pieces; and the cutter receives the remaining piece. The chooser envies no one because she picks the piece she prefers, and the cutter envies no one because both pieces are worth exactly 1/2 to her.
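A minimal Python sketch of cut-and-choose follows (the interface is ours: each agent’s value of the prefix [0, x] is given as a function of x, which non-atomicity makes continuous, so the cutter can locate her halfway point by bisection).

```python
def cut_and_choose(value_cutter, value_chooser, tol=1e-9):
    """Cut-and-choose on the cake [0, 1].

    value_*(x) returns an agent's value for the interval [0, x], with
    value(1) == 1 (normalization) and value continuous, non-decreasing.
    Returns (cutter's piece, chooser's piece) as intervals.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:               # bisection: value_cutter(x) = 1/2
        mid = (lo + hi) / 2
        if value_cutter(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    # The chooser takes whichever piece she weakly prefers.
    chooser_takes_left = value_chooser(x) >= 1 - value_chooser(x)
    chooser_piece = (0.0, x) if chooser_takes_left else (x, 1.0)
    cutter_piece = (x, 1.0) if chooser_takes_left else (0.0, x)
    return cutter_piece, chooser_piece

# Example: the cutter values the cake uniformly; the chooser only
# values the right half (density 2 on [1/2, 1]).
uniform = lambda x: x
right_half = lambda x: max(0.0, 2 * (x - 0.5))
print(cut_and_choose(uniform, right_half))  # cut near 1/2; chooser takes right
```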

Proportionality is another fairness criterion. A partition \((C_1,\dots ,C_n)\) of C is called proportional (PROP) if \(v_i(C_i) \ge 1/n\cdot v_i(C)\) for any \(i\in N\). This means that every agent receives at least the agent’s due share according to the agent’s own valuation measure. By definition, envy-freeness implies proportionality; hence, a proportional partition always exists if valuation measures satisfy the properties above.

Recently, the fair division of a set of indivisible goods, such as cars and houses, has been studied extensively in computer science. This problem can be modeled as follows. Let \(M=\{1,2,\dots ,m\}\) be a set of m indivisible goods. Each agent \(i\in N\) has a valuation function \(v_i:2^M \rightarrow \mathbb {R}_+\) with \(v_i(\emptyset )=0\). We assume that the valuation functions are monotone: \(v_i(X) \le v_i(Y)\) for any \(X \subseteq Y \subseteq M\). An allocation of the indivisible goods is a partition \(A=(A_1, \ldots , A_n)\) of M into n bundles, that is, \(\bigcup _{i\in N} A_i = M\) and \(A_i \cap A_j = \emptyset \) for any distinct i and j in N.

Unlike in the cake-cutting problem, an EF or PROP allocation of indivisible goods may not exist. As an extreme example, if there is only one good and it is valued positively by every agent, it must be given to a single agent, and the other agents envy that agent. In particular, no allocation is PROP, and hence no allocation is EF.

Thus, the following relaxed criteria of EF, introduced by Budish [9] and Caragiannis et al. [10], have been studied in the literature.

  • EFX [10]: an allocation A is called envy-free up to any good (EFX) if for any agents \(i,j \in N\), we have \(v_i(A_i) \ge v_i(A_j \setminus \{e\})\) for every \(e \in A_j\).

  • EF1 [9]: an allocation A is called envy-free up to one good (EF1) if for any agents \(i,j \in N\), either \(v_i(A_i) \ge v_i(A_j)\) or there exists an \(e \in A_j\) such that \(v_i(A_i) \ge v_i(A_j \setminus \{e\})\).

Any set of indivisible goods admits an EF1 allocation [36]. It is known that an EFX allocation exists for two agents [44], while its existence is open for three or more agents. When each agent has an additive valuation, that is, the valuation function \(v_i\) of each agent i satisfies \(v_i(X)=\sum _{e \in X} v_i(\{e\})\) for any subset \(X \subseteq M\), it is known that an EFX allocation always exists for three agents [11], while its existence is open for four or more agents.
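For additive valuations, a simple way to obtain an EF1 allocation is the round-robin procedure, in which agents take turns picking their favorite remaining good; the following Python sketch (our interface) illustrates it.

```python
def round_robin(n_agents, goods, value):
    """Round-robin allocation of indivisible goods.

    value(i, e) is agent i's (additive) value for good e.  Agents take
    turns picking a most-valued remaining good; for additive valuations
    the resulting allocation is known to be EF1.
    """
    bundles = [[] for _ in range(n_agents)]
    remaining = set(goods)
    turn = 0
    while remaining:
        i = turn % n_agents
        pick = max(remaining, key=lambda e: value(i, e))
        bundles[i].append(pick)
        remaining.remove(pick)
        turn += 1
    return bundles
```

The EF1 guarantee follows because an agent who picks before agent j in every round never envies j, and an agent who picks after j stops envying j once j’s first pick is ignored.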

In the joint project, Mahara [37] revealed a new class of fair division instances that admit EFX allocations.

Theorem 9.4

(Mahara [37]) An EFX allocation exists when each agent has one of two valuation functions.

Similar to the relaxation of EF to EF1 and EFX, two relaxed concepts of PROP have been proposed by Conitzer et al. [14] and Moulin [41].

  • PROPx [5, 41]: an allocation A is called proportional up to the least valued good (PROPx) if for any agent \(i\in N\), either \(v_i(A_i)\ge 1/n\cdot v_i(M)\) or for every \(e \in M\setminus A_i\), we have \(v_i(A_i\cup \{e\})\ge 1/n\cdot v_i(M)\).

  • PROP1 [14]: an allocation A is called proportional up to the highest valued good (PROP1) if for any agent \(i\in N\), either \(v_i(A_i)\ge 1/n\cdot v_i(M)\) or there exists an \(e \in M\setminus A_i\) such that \(v_i(A_i\cup \{e\})\ge 1/n\cdot v_i(M)\).

A PROPx allocation may not exist [5, 41], while a PROP1 allocation always exists [14].
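For additive valuations, both conditions are straightforward to verify for a given allocation; in the following Python sketch (our interface), replacing max with min in the marked line checks PROPx instead of PROP1.

```python
def is_prop1(values, bundles):
    """Check PROP1 for additive valuations.

    values[i][e] is agent i's value for good e; bundles[i] is the set
    of goods allocated to agent i.
    """
    n = len(bundles)
    all_goods = set().union(*bundles)
    for i in range(n):
        share = sum(values[i][e] for e in all_goods) / n
        own = sum(values[i][e] for e in bundles[i])
        outside = [values[i][e] for e in all_goods - set(bundles[i])]
        # PROP1: some outside good closes the gap; for PROPx, replace
        # max with min so that every outside good must close the gap.
        if own < share and (not outside or own + max(outside) < share):
            return False
    return True
```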

In the joint project, a new concept related to PROP1 and PROPx was proposed for additive valuations [33]. An allocation A is called proportional up to the least valued good on average (PROPavg) if it holds that

$$\begin{aligned} v_i(A_i) + \frac{1}{n-1}\sum _{k\in N\setminus \{i\}} \displaystyle \mathop {\text {Min}}_{e\in A_k} v_i(\{e\}) \ge \frac{1}{n}v_i(M), \end{aligned}$$
(9.1)

where we define \(\mathrm{{Min}}(S) = \min (S)\) for \(S \ne \emptyset \) and \(\mathrm{{Min}}(\emptyset ) = 0\). In other words, agent i’s value for her own bundle, plus the average over the other agents of the least value (to i) of a good in each of their bundles, must be at least a 1/n fraction of i’s value for all goods. Note that for additive valuations, PROPavg is a concept that lies between PROP1 and PROPx: any PROPx allocation is PROPavg, and any PROPavg allocation is PROP1. Moreover, PROPavg is a relaxation of Avg-EFX, which was introduced by Baklanov et al. [6]. The notion of Avg-EFX is obtained by replacing \(n-1\) with n on the left-hand side of Eq. (9.1):

$$\begin{aligned} v_i(A_i) + \frac{1}{n}\sum _{k\in N\setminus \{i\}} \displaystyle \mathop {\text {Min}}_{e\in A_k} v_i(\{e\}) \ge \frac{1}{n}v_i(M). \end{aligned}$$

In the project paper [33], Kobayashi and Mahara demonstrated that a PROPavg allocation always exists; in contrast, the existence of Avg-EFX allocations remains open [6].

Theorem 9.5

(Kobayashi and Mahara [33]) A PROPavg allocation always exists when each agent has an additive valuation.
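The PROPavg condition of Eq. (9.1) is also easy to verify for a given allocation; the following Python sketch (our own interface, assuming additive valuations and at least two agents) checks it directly.

```python
def is_propavg(values, bundles):
    """Check the PROPavg condition of Eq. (9.1) for additive valuations.

    values[i][e] is agent i's value for good e; bundles[i] is the set
    of goods allocated to agent i; requires n >= 2 agents, and the Min
    of an empty bundle counts as 0, as in the definition.
    """
    n = len(bundles)
    goods = [e for b in bundles for e in b]
    for i in range(n):
        own = sum(values[i][e] for e in bundles[i])
        avg_min = sum(min((values[i][e] for e in bundles[k]), default=0)
                      for k in range(n) if k != i) / (n - 1)
        if own + avg_min < sum(values[i][e] for e in goods) / n - 1e-12:
            return False
    return True
```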

The joint project also studied EF allocations with subsidies [22] and fair ride allocations based on the Shapley value [2].