Facility location with double-peaked preferences
Abstract
We study the problem of locating a single facility on a real line based on the reports of self-interested agents, when agents have double-peaked preferences, with the peaks being on opposite sides of their locations. We observe that double-peaked preferences capture real-life scenarios and thus complement the well-studied notion of single-peaked preferences. As a motivating example, assume that the government plans to build a primary school along a street; an agent with single-peaked preferences would prefer having the school built exactly next to her house. However, while that would make it very easy for her children to go to school, it would also introduce several problems, such as noise or parking congestion in the morning. A 5-minute walking distance would be sufficiently far for such problems to no longer be much of a factor and at the same time sufficiently close for the school to be easily accessible by the children on foot. There are two such positions, one in each direction (symmetrically placed), and those would be the two peaks of the agent's double-peaked preference. Motivated by natural scenarios like the one described above, we mainly focus on the case where the peaks are equidistant from the agents' locations and discuss how our results extend to more general settings. We show that most of the results for single-peaked preferences do not directly apply to this setting, which makes the problem more challenging. As our main contribution, we present a simple truthful-in-expectation mechanism that achieves an approximation ratio of \(1+b/c\) for both the social and the maximum cost, where b is the distance between an agent's location and her peaks and c is the minimum cost of an agent. For the latter case, we provide a 3/2 lower bound on the approximation ratio of any truthful-in-expectation mechanism. We also study deterministic mechanisms under some natural conditions, proving lower bounds and approximation guarantees.
We prove that among a large class of reasonable strategyproof mechanisms, there is no deterministic mechanism that outperforms our truthful-in-expectation mechanism. In order to obtain this result, we first characterize mechanisms for two agents that satisfy two simple properties; we use the same characterization to prove that no mechanism in this class can be group strategyproof.
Keywords
Facility location · Strategyproofness · Double-peaked preferences · Approximate mechanism design without money · Social cost · Maximum cost
1 Introduction
We study the problem of locating a single facility on a real line, based on the input provided by selfish agents who wish to minimize their costs. Each agent has a location \(x_i \in \mathbb {R}\) which is her private information and is asked to report it to some central authority, which then decides where to locate the facility, aiming to optimize some function of the agents’ reported locations. This model corresponds to problems such as finding the ideal location for building a primary school or a bus stop along a street, so that the total distance of all agents’ houses from the location is minimized, or so that no agent’s house will lie too far away from that location.
In our setting, we assume that agents have double-peaked preferences, i.e. we assume that each agent i has two unique most preferred points or peaks, located at some distances from \(x_i\) on opposite sides, where her cost is minimum. Traditionally, preferences in facility location problems are assumed to be single-peaked, i.e. each agent’s location is her most preferred point on the line, and furthermore the cost is assumed to be linear, i.e. it increases linearly (at the same rate) to the left and the right of that peak. Sometimes, however, single-peaked preferences do not model real-life scenarios accurately. Take for instance the example mentioned above, where the government plans to build a primary school on a street. An agent with single-peaked preferences would definitely want the school built next to her house, so that she wouldn’t have to drive her children there every day. However, it is quite possible that she is also not very keen on the inevitable drawbacks of having a primary school next to her house, like unpleasant noise or trouble with parking. On the other hand, a 5-minute walking distance is sufficiently far for those problems to no longer be a factor but also sufficiently close for her children to be able to walk to school. There are two such positions, one in each direction (symmetrically placed), and those would be her two peaks.
Our primary objective is to explore double-peaked preferences in facility location settings similar to the ones studied extensively for single-peaked preferences throughout the years [1, 11, 14, 18, 20, 23, 26, 27, 30, 32]. For that reason, following the literature, we assume that the cost functions are the same for all agents and that the cost increases linearly, at the same rate, as the output moves away from the peaks. The straightforward extension to the double-peaked case is piecewise linear cost functions, with the same slope in all intervals, which gives rise to the natural model of symmetric agents, i.e. the peaks are equidistant from the agent’s location. Note that this symmetry is completely analogous to the single-peaked case (for facility location problems, e.g. see [30]), where agents have exactly the same cost on two points equidistant from their peaks. Our lower bounds and impossibility results naturally extend to non-symmetric settings, but some of our mechanisms do not. We discuss those extensions in Sect. 6.
Our model also applies to more general spaces, beyond the real line. One can imagine for instance that the goal is to build a facility on the plane where, for the same reasons, agents would like the facility to be built at some distance from their location, in every direction. This translates to an agent having infinitely many peaks, located on a circle centered around her location. In that case of course, we would no longer refer to agents’ preferences as double-peaked, but the underlying idea is similar to the one presented in this paper. We do not explore such extensions here; we leave that for future work.
Agents are self-interested entities that wish to minimize their costs. We are interested in mechanisms that ensure that agents are not incentivized to report anything but their actual locations, namely strategyproof mechanisms. We are also interested in group strategyproof mechanisms, i.e. mechanisms that are resistant to manipulation by coalitions of agents. Moreover, we want those mechanisms to achieve some good performance guarantees with respect to our goals. If our objective is to minimize the sum of the agents’ costs, known as the social cost, then we are looking for strategyproof mechanisms that achieve a social cost as close as possible to that of the optimal mechanism, which need not be strategyproof. The prominent measure of performance for mechanisms in the computer science literature is the approximation ratio [2, 6, 12, 24], i.e. the worst possible ratio of the social cost achieved by the mechanism over the minimum social cost over all instances of the problem. The same holds if our objective is to minimize the maximum cost of any agent. In the case of randomized mechanisms, i.e. mechanisms that output a probability distribution over points in \(\mathbb {R}\) instead of a single point, as a weaker strategyproofness constraint we require truthfulness-in-expectation, i.e. a guarantee that no agent can reduce her expected cost by misreporting.
1.1 Our results
Our main contribution is a truthful-in-expectation mechanism (M1) that achieves an approximation ratio of \(1+b/c\) for the social cost and \(\max \{1+b/c,2\}\) for the maximum cost, where b is the distance between an agent’s location and her peak and c is her minimum cost. We also prove that no truthful-in-expectation mechanism can do better than a 3/2 approximation for the maximum cost, proving that, at least for the natural special case where \(b=c\), Mechanism M1 is not far from the best possible. For deterministic mechanisms, we prove that no mechanism in a wide natural class of strategyproof mechanisms can achieve an approximation ratio better than \(1+b/c\) for the social cost and \(1+2b/c\) for the maximum cost, and hence cannot outperform Mechanism M1. To prove this, we first characterize the class of strategyproof, anonymous and position invariant mechanisms for two agents, showing that it consists of only a single mechanism (M2). Intuitively, anonymity requires that all agents are handled equally (irrespective of their names) by the mechanism, while position invariance essentially requires that if we shift an instance by some amount, the location of the facility should be shifted by the same amount as well. This is quite a natural condition and can be interpreted as a guarantee that the facility will be located relative to the reports of the agents and independently of the underlying network (e.g. the street).
Table 1 Summary of our results

|                     | Double-peaked      |                    | Single-peaked |       |
|                     | Upper              | Lower              | Upper         | Lower |
| Social cost         |                    |                    |               |       |
| Deterministic       | \(n-1\)            | \(1+\frac{b}{c}\)  | 1             | 1     |
| Randomized          | \(1+\frac{b}{c}\)  | –                  | 1             | 1     |
| Maximum cost        |                    |                    |               |       |
| Deterministic\(^*\) | \(1+\frac{2b}{c}\) | \(1+\frac{2b}{c}\) | 2             | 2     |
| Randomized          | \(1+\frac{b}{c}\)  | 3/2                | 3/2           | 3/2   |
1.2 Related work on facility location
The strategic version of the facility location problem in computer science was first studied by Procaccia and Tennenholtz [30] in a seminal paper, where the authors coined the term approximate mechanism design without money to study problems where, in the absence of monetary transfers, strategyproof mechanisms can only achieve the desired objectives within some approximation factor. The main goal of [30] is the location of a single facility on the real line when agents have single-peaked preferences, and the corresponding bounds shown in Table 1 are from that paper. A series of papers have studied generalizations of the problem to more general metric spaces [1, 11, 32], multiple facilities [14, 23, 26, 27], or even enhancing strategyproof mechanisms with additional capabilities [21, 22]. Most of the related work actually considers the same objectives that we do here, namely the social cost or the maximum cost, with the notable exceptions of the least-squares objective [18], the \(L_p\) norm of costs [17] and the minimax envy [5]. In a recent paper, Procaccia et al. [29] use the facility location problem to explore the trade-offs between the approximation guarantees and the variance of truthful-in-expectation mechanisms.
In the artificial intelligence and multi-agent systems literature, several variations of the basic model have emerged in recent years, capturing different situations that may arise in practice. A series of papers [7, 8, 16] study the obnoxious facility location problem over an interval, where agents declare their least preferred point and their utility increases linearly in both directions away from that point. Preference relations that give rise to such utility structures are referred to as single-dipped or single-caved preferences [25]. Serafino and Ventre [33, 34] introduced and studied the setting with heterogeneous facilities, in which agents’ locations are known and they declare their interests in different facilities. Building on the idea of heterogeneous facilities, a recent line of work [16, 39] considers a single facility location setting where the agents report their most preferred positions, along with a binary variable indicating whether the facility is desirable or obnoxious. Preference profiles in such a setting can be seen as a combination of single-peaked and single-dipped preference orderings. In a different direction, Todo et al. [37] and Sonoda et al. [35] study the capabilities of false-name-proof mechanisms, which are strategyproof mechanisms that are also impervious to agents assuming false identities.
In light of these recent developments, the current paper (the conference version of which preceded many of the aforementioned papers) can be seen as a different generalization of the basic facility location model, capturing different real-life scenarios and building on the rapidly growing literature on such extensions.
1.3 Related work on doublepeaked preferences
Single-peaked preferences were introduced in [4] as a way to avoid Condorcet cycles in majority elections. Moulin [28] characterized the class of strategyproof mechanisms in this setting, proving that median voter schemes are essentially the only strategyproof mechanisms for agents with single-peaked preferences. Double-peaked preferences have been mentioned in the social choice literature to describe settings where preferences are not single-peaked, voting cycles do exist, and majority elections are not possible. For example, Cooter ([10], pages 39–42) argues that when the decisions involve multidimensional choices, preferences are more likely to be double-peaked rather than single-peaked, and the intransitivity of the preferences precludes majority winners. As a motivating example, he uses the choice of the level of expenditure on public schools in the U.S. and the example of “yuppies”, i.e. young urban professionals who would prefer either a high expenditure level or a low expenditure level rather than a moderate level, since in the first case they could send their children to public school, whereas in the second case they would send them to a private school without being burdened by large taxes for the support of public education. A simpler example of the same principle can be the choice of temperature in a room; both \(15^\circ \) with a lightweight jacket and \(25^\circ \) with just a regular shirt might be preferable to a moderate choice of \(20^\circ \).
More broadly, in social choice settings similar to the example of [10], double-peaked preferences can be used to model situations where e.g. a left-wing party might prefer a more conservative but quite effective policy to a more liberal but ineffective one on a left-to-right political axis. In fact, Egan [13] provides a detailed discussion of double-peaked preferences in political decisions. He uses a 1964–1970 survey about which course of action the United States should take with regard to the Vietnam war as an example where the status quo (keep U.S. troops in Vietnam but try to terminate the war) was ranked last by a considerable fraction of the population when compared to a left-wing policy (pull out entirely) or a right-wing policy (take a stronger stand). This demonstrates that in a scenario where the standard approach would be to assume that preferences are single-peaked, preferences can instead be double-peaked. Egan provides additional evidence for the occurrence of double-peaked preferences, supported by experimental results based on surveys of the U.S. population, for many different problems (education, health care, illegal immigration treatment, foreign oil treatment, etc.).
More examples of double-peaked preferences in real-life scenarios are presented in [31]. The related work demonstrates that, although they might not be as popular as their single-peaked counterpart, double-peaked preferences do have applications in settings more general than the street example described earlier. On the other hand, the primary focus of this paper is to study double-peaked preferences in facility location settings, and therefore the modeling assumptions follow the ones of the facility location literature.
In the literature of multi-agent systems, Yang and Guo [38] consider \(\kappa \)-peaked preferences (a straightforward generalization of double-peaked preferences) and the problem of controlling elections for such preference profiles under several well-known voting rules, proving several hardness and parameterized complexity results.
2 Preliminaries
Let \(N=\{1,2,\ldots ,n\}\) be a set of agents. We consider the case where agents are located on a line, i.e. each agent \(i \in N\) has a location \(x_i \in \mathbb {R}\). We will occasionally use \(x_i\) to refer to both the position of agent i and the agent herself. We will call the collection \({\mathbf {x}} = \langle x_1, \ldots , x_n\rangle \) a location profile or an instance.
A randomized mechanism is a function \(f: \mathbb {R}^n \mapsto \varDelta (\mathbb {R})\), where \(\varDelta (\mathbb {R})\) is the set of probability distributions over \(\mathbb {R}\). It maps a given location profile to probabilistically selected locations of the facility. The expected cost of agent i is \(\mathbb {E}_{y \sim {\mathcal {D}}} \left[ {\mathrm {cost}}(y, x_i) \right] \), where \({\mathcal {D}}\) is the probability distribution of the mechanism outputs.
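Concretely, the cost model used throughout the paper (piecewise linear, symmetric, with minimum cost c attained at the two peaks \(x_i-b\) and \(x_i+b\)) can be sketched as follows; the function name and the illustrative values \(b=c=1\) are ours, not the paper's.

```python
def cost(y, x_i, b=1.0, c=1.0):
    """Double-peaked cost: minimum value c at the peaks x_i - b and x_i + b,
    increasing linearly (unit slope) with the distance to the nearest peak."""
    return c + min(abs(y - (x_i - b)), abs(y - (x_i + b)))

# At either peak the cost is exactly c:
assert cost(-1.0, 0.0) == 1.0 and cost(1.0, 0.0) == 1.0
# At the agent's own location the cost is c + b (distance b from either peak):
assert cost(0.0, 0.0) == 2.0
```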
We will call a deterministic mechanism f strategyproof if no agent would benefit by misreporting her location, regardless of the locations of the other agents. This means that for every \({\mathbf {x}}\in \mathbb {R}^n\), every \(i\in N\) and every \(x'_i\in \mathbb {R}\), \({\mathrm {cost}}(f({\mathbf {x}}),x_i)\le {\mathrm {cost}} (f(x'_i, \mathbf {x}_{-i}), x_i)\), where \(\mathbf {x}_{-i}=\langle x_1,\ldots ,x_{i-1}, x_{i+1},\ldots ,x_n \rangle \). A mechanism is truthful-in-expectation if it guarantees that every agent always minimizes her expected cost by reporting her location truthfully. Throughout the paper we will use the term strategyproofness when referring to deterministic mechanisms and the term truthfulness when referring to randomized mechanisms.
A mechanism is (strongly) group strategyproof if there is no coalition of agents who, by jointly misreporting their locations, affect the outcome in a way such that the cost of none of them increases and the cost of at least one of them strictly decreases. In other words, there is no \(S \subseteq N\) such that for some misreports \(x'_S\) of agents in S and some reports \(\mathbf {x}_{-S}\) of agents in \(N\backslash S\), \({\mathrm {cost}} (f(x'_S, \mathbf {x}_{-S}), x_i) \le {\mathrm {cost}} (f({\mathbf {x}}), x_i)\) for all \(i \in S\), and \({\mathrm {cost}} (f(x'_S, \mathbf {x}_{-S}), x_j) < {\mathrm {cost}} (f({\mathbf {x}}), x_j)\) for at least one \(j \in S\).
We note here that the definition above is often referred to in the literature as strong group strategyproofness, to distinguish it from weak group strategyproofness, where for a group deviation to be possible, it should be the case that the cost of all agents of the deviating coalition strictly decreases. Throughout the paper, when referring to group strategyproofness, we will assume that the notion follows the strong definition.
We are interested in strategyproof mechanisms that perform well with respect to the goal of minimizing either the social cost or the maximum cost. We measure the performance of the mechanism by comparing the social/maximum cost it achieves with the optimal social/maximum cost, on any instance \({\mathbf {x}}\).
For randomized mechanisms, the definitions are similar and the approximation ratio is calculated with respect to the expected social or maximum cost, i.e. the expected sum of costs of all agents and expected maximum cost of any agent, respectively.
Finally we consider some properties which are quite natural and are satisfied by many mechanisms (including the optimal mechanism). A mechanism f is anonymous, if for every location profile \({\mathbf {x}}\) and every permutation \(\pi \) of the agents, \(f(x_1, \ldots , x_n) = f(x_{\pi (1)}, \ldots , x_{\pi (n)})\). We say that a mechanism f is onto, if for every point \(y \in \mathbb {R}\) on the line, there exists a location profile \({\mathbf {x}}\) such that \(f({\mathbf {x}}) = y \). Without loss of generality, for anonymous mechanisms, we can assume \(x_1\le \cdots \le x_n\).
A property that requires special mention is that of position invariance, which is a very natural property as discussed in the introduction. This property was independently defined by [17] where it was referred to as shift invariance. One can view position invariance as an analogue to neutrality in problems like the one studied here, where there is a continuum of outcomes instead of a finite set.
Definition 1
A mechanism f satisfies position invariance, if for all location profiles \(\mathbf {x}=\langle x_1,\ldots ,x_n \rangle \) and \(t \in \mathbb {R}\), it holds \(f(x_1+t, x_2+t, \ldots , x_n+t) = f({\mathbf {x}}) + t\). In this case, we will call such a mechanism position invariant. We will refer to instances \({\mathbf {x}}\) and \(\langle x_1+t, x_2+t, \ldots , x_n+t \rangle \) as position equivalent.
Note that position invariance implies the onto condition. Indeed, for any location profile \(\mathbf {x}\), with \(f(\mathbf {x})=y\), we have \(f(x_1+t, x_2+t, \ldots , x_n+t)=y'=y+t\) for any \(t \in \mathbb {R}\), so every point \(y' \in \mathbb {R}\) is a potential output of the mechanism.
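As a sanity check of these two properties, the following sketch (our own illustration, not one of the paper's mechanisms) verifies anonymity and position invariance for the simple deterministic rule that outputs the left peak of the left median agent.

```python
def left_peak_of_median(xs, b=1.0):
    """Illustrative deterministic rule: the left peak of the (left) median
    agent.  It is anonymous, since sorting discards the agents' identities,
    and position invariant, since shifting all reports shifts the median."""
    return sorted(xs)[(len(xs) - 1) // 2] - b

xs = [3.0, -1.0, 0.5]
t = 2.25
# anonymity: any permutation of the reports gives the same output
assert left_peak_of_median(xs) == left_peak_of_median(list(reversed(xs)))
# position invariance: shifting all reports by t shifts the output by t
assert left_peak_of_median([x + t for x in xs]) == left_peak_of_median(xs) + t
```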
3 A truthfulinexpectation mechanism
We start the exposition of our results with our main contribution, a truthful-in-expectation mechanism that achieves an approximation ratio of \(1+b/c\) for the social cost and \(\max \{1+b/c,2\}\) for the maximum cost.
Mechanism M1
Given any instance \(\mathbf {x}=\langle x_1,\ldots ,x_n\rangle \), find the median agent \(x_{m}={\mathrm {median}}(x_1,\ldots ,x_n)\), breaking ties in favor of the agent with the smallest index. Output \(f(\mathbf {x})=x_m-b\) with probability \(\frac{1}{2}\) and \(f(\mathbf {x})=x_m+b\) with probability \(\frac{1}{2}\).
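A minimal sketch of Mechanism M1, assuming illustrative values \(b=c=1\) and the double-peaked cost model from the preliminaries (function names are ours):

```python
import random

B, C = 1.0, 1.0  # peak distance b and minimum cost c (illustrative values)

def mechanism_m1(xs):
    """Mechanism M1: find the median agent (the left median for an even
    number of agents, matching the smallest-index tie-breaking on a sorted
    profile) and output median - b or median + b with probability 1/2 each."""
    x_m = sorted(xs)[(len(xs) - 1) // 2]
    return x_m - B if random.random() < 0.5 else x_m + B

def expected_social_cost_m1(xs):
    """Expected social cost of M1, computed exactly (no sampling)."""
    x_m = sorted(xs)[(len(xs) - 1) // 2]
    def cost(y, x):  # double-peaked cost, as in the preliminaries
        return C + min(abs(y - (x - B)), abs(y - (x + B)))
    return 0.5 * sum(cost(x_m - B, x) for x in xs) + \
           0.5 * sum(cost(x_m + B, x) for x in xs)

# The median of this profile is 1.0, so M1 outputs 0.0 or 2.0:
assert mechanism_m1([0.0, 0.0, 1.0, 2.0, 2.0]) in (0.0, 2.0)
```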
Theorem 1
Mechanism M1 is truthful-in-expectation.
Proof
First, note that the median agent does not have an incentive to deviate, since her expected cost is already minimum, and neither does any agent i for which \(x_i=x_m\). Hence, for the deviating agent i it must be that either \(x_i < x_m\) or \(x_i> x_m\). We consider three cases when \(x_i<x_m\); the proof for the case \(x_i>x_m\) is symmetric. Observe that for agent i to be able to move the position of the facility, she has to report \(x_i'\ge x_m\) and change the identity of the median agent. Let \(x'_m\) be the median agent in the new instance \(\langle x'_i, \mathbf {x}_{-i} \rangle \), after agent i’s deviation. If \(x'_m=x_m\), then obviously agent i does not gain from deviating, so we will assume that \(x'_m>x_m\).
Case 1 \(x_i + b \le x_m-b\) (symmetrically \(x_i-b \ge x_m+b\)).
In this case, the cost of agent i is calculated with respect to \(x_i+b\) for both possible outcomes of the mechanism. Since \(x'_m-b > x_m-b\) and \(x'_m+b > x_m+b\), it holds that \(|(x_i+b)-(x'_m-b)| > |(x_i+b)-(x_m-b)|\) and \(|(x_i+b)-(x'_m+b)| > |(x_i+b)-(x_m+b)|\), so agent i cannot gain from misreporting.
Case 2 \(x_m-b < x_i + b \le x_m\) (symmetrically \(x_m \le x_i-b < x_m+b\)).
Again, the cost of agent i is calculated with respect to \(x_i+b\) for both outcomes of the mechanism. This time, it might be that \(|(x_i+b)-(x'_m-b)| < |(x_i+b)-(x_m-b)|\), but since \((x'_m-b)-(x_m-b)=(x'_m+b)-(x_m+b)\), it also holds that \(|(x_i+b)-(x'_m+b)| > |(x_i+b)-(x_m+b)|\) and, moreover, \(|(x_i+b)-(x_m-b)| - |(x_i+b)-(x'_m-b)| \le |(x_i+b)-(x'_m+b)| - |(x_i+b)-(x_m+b)|\), i.e. any decrease in the cost with respect to the first outcome is offset by the increase with respect to the second. Hence, the expected cost of agent i after misreporting is at least as much as it was before.
Case 3 \(x_m < x_i+b \le x_m+b\) (symmetrically \(x_m-b \le x_i-b < x_m\)).
The cost of agent i before misreporting is calculated with respect to \(x_i-b\) when the outcome is \(x_m-b\) and with respect to \(x_i+b\) when the outcome is \(x_m+b\). For any misreport \(x'_i\) such that \(x'_m-b \le x_i\), this is still the case (for \(x'_m-b\) and \(x'_m+b\) respectively) and since \((x'_m-b)-(x_m-b)=(x'_m+b)-(x_m+b)\), her expected cost is not smaller than before. For any misreport \(x'_i\) such that \(x'_m-b > x_i\), her cost is calculated with respect to \(x_i+b\) for both possible outcomes of the mechanism and, for the same reason as in Case 2, her expected cost is at least as much as it was before misreporting. \(\square \)
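Theorem 1 can also be checked empirically. The sketch below (our own test harness, with illustrative values \(b=c=1\)) brute-forces random profiles and misreports and verifies that no agent ever lowers her expected cost under M1.

```python
import random

B, C = 1.0, 1.0  # illustrative values for b and c

def cost(y, x):
    return C + min(abs(y - (x - B)), abs(y - (x + B)))

def expected_cost_m1(xs, i):
    """Expected cost of agent i under M1 when everyone reports truthfully."""
    x_m = sorted(xs)[(len(xs) - 1) // 2]
    return 0.5 * cost(x_m - B, xs[i]) + 0.5 * cost(x_m + B, xs[i])

def expected_cost_after_misreport(xs, i, lie):
    """Expected cost of agent i (true position xs[i]) when she reports lie."""
    report = xs[:i] + [lie] + xs[i + 1:]
    x_m = sorted(report)[(len(report) - 1) // 2]
    return 0.5 * cost(x_m - B, xs[i]) + 0.5 * cost(x_m + B, xs[i])

random.seed(1)
for _ in range(300):
    xs = [round(random.uniform(-3, 3), 2) for _ in range(random.randint(2, 5))]
    i = random.randrange(len(xs))
    for lie in (round(random.uniform(-4, 4), 2) for _ in range(20)):
        # no misreport may decrease the expected cost (up to float noise)
        assert expected_cost_after_misreport(xs, i, lie) \
               >= expected_cost_m1(xs, i) - 1e-9
```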
3.1 Social cost
Next, we will calculate the approximation ratio of the mechanism for the social cost. In order to do that, we will need the following lemma.
Lemma 1
Let \(\mathbf {x}=\langle x_1,\ldots ,x_m,\ldots ,x_n\rangle \), where \(x_{m}={\mathrm {median}}\,(x_1,\ldots ,x_n)\), breaking ties in favor of the smallest index. There exists an optimal location for the social cost in \([x_m-b,x_m+b]\).
Proof
Assume by contradiction that there exists a point \(y<x_m-b\) or \(y>x_m+b\) with a strictly smaller social cost than all the points in \([x_m-b,x_m+b]\).
Assume first that \(y<x_m-b\). Since \(x_m\) is the median agent, it holds that for at least \(\lceil n/2 \rceil \) agents, \(x_i-b \ge x_m-b\), that is, \(x_m-b\) admits a smaller cost for at least \(\lceil n/2 \rceil \) agents when compared to y. Let \(X_1\) be the set of those agents. On the other hand, for each agent \(x_i<x_m\), \(x_m-b\) may admit a smaller or larger cost than y, depending on her position with respect to y. In the worst case, the cost is larger for every one of those agents, which happens when \(x_i+b \le y\) for every agent with \(x_i < x_m\). Let \(X_2\) be the set of those agents. Now observe that for any two agents \(x_\alpha \in X_1\) and \(x_\beta \in X_2\), it holds that \({\mathrm {cost}}(x_\alpha ,y)-{\mathrm {cost}}(x_\alpha ,x_m-b) = {\mathrm {cost}}(x_\beta ,x_m-b)-{\mathrm {cost}}(x_\beta ,y)\). Since \(|X_1| \ge |X_2|\), it holds that \(SC_{x_m-b}({\mathbf {x}}) \le SC_{y}({\mathbf {x}})\), which contradicts the assumption that \(SC_{y}({\mathbf {x}}) < SC_{x_m-b}({\mathbf {x}})\).
Now assume \(y> x_m+b\). If the number of agents is odd, then we can use an exactly symmetric argument to prove that \(SC_{x_m+b}(\mathbf {x}) \le SC_{y}(\mathbf {x})\). If the number of agents is even, the argument can still be used, since our tie-breaking rule selects agent \(x_{n/2}\) as the median. Specifically, \(x_m+b\) admits a smaller cost for exactly n/2 of the agents (including agent \(x_{n/2}\)) and, in the worst case, y admits a smaller cost for n/2 agents as well. If \(X_1\) and \(X_2\) are the sets of those agents respectively, then again it holds that \({\mathrm {cost}}(x_\alpha ,y)-{\mathrm {cost}}(x_\alpha ,x_m+b) = {\mathrm {cost}}(x_\beta ,x_m+b)-{\mathrm {cost}}(x_\beta ,y)\) for \(x_\alpha \in X_1\) and \(x_\beta \in X_2\) and we get a contradiction as before. \(\square \)
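Lemma 1 can be verified numerically. The social cost is piecewise linear in the facility location, and a minimum is attained at some agent's peak, so it suffices to compare the best peak inside \([x_m-b, x_m+b]\) with the best peak overall; the harness below (our own sketch, with illustrative values \(b=c=1\)) does exactly that on random profiles.

```python
import random

B, C = 1.0, 1.0  # illustrative values for b and c

def cost(y, x):
    return C + min(abs(y - (x - B)), abs(y - (x + B)))

def social_cost(y, xs):
    return sum(cost(y, x) for x in xs)

random.seed(0)
for _ in range(200):
    n = random.randint(1, 7)
    xs = sorted(round(random.uniform(-5, 5), 3) for _ in range(n))
    x_m = xs[(n - 1) // 2]  # left median = smallest-index tie-breaking
    # The social cost is piecewise linear and its slope only turns upward at
    # peaks, so a minimizer is attained at some peak x_i - b or x_i + b.
    peaks = [x + s * B for x in xs for s in (-1.0, 1.0)]
    best = min(social_cost(p, xs) for p in peaks)
    in_window = min(social_cost(p, xs) for p in peaks
                    if x_m - B - 1e-9 <= p <= x_m + B + 1e-9)
    # Lemma 1: an optimal location always lies inside [x_m - b, x_m + b]
    assert abs(best - in_window) < 1e-9
```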
We now proceed to proving the approximation ratio of Mechanism M1.
Theorem 2
Mechanism M1 has an approximation ratio of \(1+\frac{b}{c}\) for the social cost.
Proof
Consider an arbitrary instance \(\mathbf {x}=\langle x_1,\ldots ,x_n\rangle \) and let \(x_m\) be the median agent. By Lemma 1, there exists an optimal location \(y \in [x_m-b,x_m+b]\). Let \(\delta =y-(x_m-b)\). For every agent i, it holds that \(\text {cost}(x_i,x_m-b) \le \text {cost}(x_i,y)+\delta \). To see this, first observe that \(|(x_i-b)-(x_m-b)| \le |(x_i-b)-y|+\delta \) and that \(|(x_i+b)-(x_m-b)| \le |(x_i+b)-y|+\delta \). If the cost of an agent admitted by y and \(x_m-b\) is calculated with respect to the same peak, then \(\min (|(x_i-b)-(x_m-b)|,|(x_i+b)-(x_m-b)|) \le \min (|(x_i-b)-y|,|(x_i+b)-y|)+\delta \) and the inequality holds. If the cost is calculated with respect to different peaks for y and \(x_m-b\), it must be that \(\text {cost}(x_i,x_m-b)=c+|(x_i-b)-(x_m-b)|\) and \(\text {cost}(x_i,y)=c+|x_i+b-y|\), because \(x_m-b < y\). Since \(|(x_i-b)-(x_m-b)| \le |(x_i+b)-(x_m-b)| \le |(x_i+b)-y|+\delta \), the inequality holds. Similarly, we can prove that \(\text {cost}(x_i,x_m+b) \le \text {cost}(x_i,y)+(2b-\delta )\) for every agent i. Hence, we can upper bound the cost of Mechanism M1 by \(\frac{1}{2}\sum _{i=1}^{n}\text {cost}(x_i,x_m-b) +\) \(\frac{1}{2}\sum _{i=1}^{n}\text {cost}(x_i,x_m+b)\) \(\le \frac{1}{2} \sum _{i=1}^{n} \left( \text {cost}(x_i,y)+\delta \right) +\frac{1}{2} \sum _{i=1}^{n} \left( \text {cost}(x_i,y) + 2b-\delta \right) = SC_y(\mathbf {x}) +nb = SC_{{\mathrm {opt}}}(\mathbf {x}) + nb\). The approximation ratio then becomes \(1+\frac{nb}{SC_{{\mathrm {opt}}}(\mathbf {x})}\), which is at most \(1+\frac{b}{c}\), since \(SC_{{\mathrm {opt}}}(\mathbf {x})\) is at least nc.
For the lower bound, consider the location profile \(\mathbf {x}=\langle x_1,\ldots ,x_n\rangle \) with \(x_1=\cdots =x_{k-1}=x_k-b=x_{k+1}-2b=\cdots =x_n-2b\). Note that the argument works both when \(n=2k\) and when \(n=2k+1\), because Mechanism M1 selects agent \(x_k\) as the median agent in each case. The optimal location is \(x_1+b\), whereas Mechanism M1 equiprobably outputs \(f_{{\mathrm{M1}}}(\mathbf {x})=x_{k}-b\) or \(f_{{\mathrm{M1}}}(\mathbf {x})=x_{k}+b\). The cost of the optimal location is \(SC_{\mathrm {opt}}(\mathbf {x}) = nc + b\), whereas the cost of Mechanism M1 is \(SC_{\mathrm{M1}}(\mathbf {x})=nc + (1/2)(n-1)b +(1/2)(n-1)b=nc+(n-1)b\). The approximation ratio then becomes \(\frac{nc+(n-1)b}{nc+b} = 1 + \frac{b}{c}\cdot \frac{n-2}{n+(b/c)}\). As the number of agents grows to infinity, the approximation ratio of the mechanism on this instance approaches \(1+b/c\). This completes the proof. \(\square \)
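The lower-bound family used in the proof can be checked numerically for \(n=2k\); the sketch below (our own, with illustrative values \(b=c=1\)) confirms \(SC_{\mathrm{opt}}=nc+b\) and \(SC_{\mathrm{M1}}=nc+(n-1)b\).

```python
B, C = 1.0, 1.0  # illustrative values for b and c

def cost(y, x):
    return C + min(abs(y - (x - B)), abs(y - (x + B)))

def social_cost(y, xs):
    return sum(cost(y, x) for x in xs)

def lower_bound_instance(k):
    """k-1 agents at 0, one agent at b, k agents at 2b (n = 2k agents);
    the left median is the agent at b, so M1 outputs 0 or 2b."""
    return [0.0] * (k - 1) + [B] + [2.0 * B] * k

for k in (2, 5, 50):
    xs = lower_bound_instance(k)
    n = len(xs)
    opt = social_cost(B, xs)                           # optimum at x_1 + b
    m1 = 0.5 * social_cost(0.0, xs) + 0.5 * social_cost(2.0 * B, xs)
    assert abs(opt - (n * C + B)) < 1e-9               # SC_opt = nc + b
    assert abs(m1 - (n * C + (n - 1) * B)) < 1e-9      # SC_M1 = nc + (n-1)b
```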
3.2 Maximum cost
We also consider the maximum cost and prove the approximation ratio of Mechanism M1 as well as a lower bound on the approximation ratio of any truthfulinexpectation mechanism. The results are summarized in Table 1.
Theorem 3
Mechanism M1 has an approximation ratio of \(\max \{1+b/c,2\}\) for the maximum cost.
Proof
Let \(\mathbf {x}=\langle x_1,\ldots ,x_n\rangle \) be an arbitrary instance and let \(x_m\) be the median agent. We will consider two cases, based on the location of \(f_{\mathrm {opt}}(\mathbf {x})\) with respect to \(x_m-b\) (or symmetrically \(x_m+b\)).
Case 1 \(f_{\mathrm {opt}}(\mathbf {x}) < x_m-b\) (or \(f_{\mathrm {opt}}(\mathbf {x}) > x_m+b\)).
Let \(\delta = (x_m-b)-f_{\mathrm {opt}}(\mathbf {x})\). For the same reason as in the proof of Theorem 2, for every agent i, it holds that \(\text {cost}(x_i,x_m-b) \le \text {cost}(x_i,f_{\mathrm {opt}}(\mathbf {x}))+\delta \) and also that \(\text {cost}(x_i,x_m+b) \le \text {cost}(x_i,f_{\mathrm {opt}}(\mathbf {x}))+(2b+\delta )\).
Case 2 \(x_m-b \le f_{\mathrm {opt}}(\mathbf {x}) \le x_m+b\).
Now, let \(\delta =f_{\mathrm {opt}}(\mathbf {x})-(x_m-b)\). Again, it holds that \(\text {cost}(x_i,x_m-b) \le \text {cost}(x_i,f_{\mathrm {opt}}(\mathbf {x}))+\delta \) and also that \(\text {cost}(x_i,x_m+b) \le \text {cost}(x_i,f_{\mathrm {opt}}(\mathbf {x}))+(2b-\delta )\).
For the matching lower bound, consider an instance \(\mathbf {x}=\langle x_1,\ldots ,x_n\rangle \) in which \(x_1+b < x_2-b\) and \(x_i=x_2\) for all \(i \notin \{1,2\}\). It is \(f_{\mathrm {opt}}(\mathbf {x})= \frac{x_1+x_2}{2}\), i.e. the middle of the interval between \(x_1\) and \(x_2\), whereas Mechanism M1 selects equiprobably among \(x_2-b\) and \(x_2+b\). Let \(d = f_{\mathrm {opt}}(\mathbf {x}) - (x_1+b)\). Then \(MC_{\mathrm {opt}}(\mathbf {x}) = c+d\), whereas \(MC_{{\mathrm{M1}}}(\mathbf {x}) = c + \frac{1}{2}2d + \frac{1}{2}(2d+2b)= b+c+2d\). The approximation ratio is \(1+\frac{b+d}{c+d}\), which tends to \(1+\frac{b}{c}\) as d goes to 0 and to 2 as d goes to infinity. \(\square \)
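The matching lower-bound instance can likewise be checked numerically; the sketch below (ours, with illustrative values \(b=1\) and \(c=2\), so that \(1+b/c=3/2\) and the two limits are distinguishable) reproduces the ratio \(1+\frac{b+d}{c+d}\) at both extremes of d.

```python
B, C = 1.0, 2.0  # illustrative values for b and c (so 1 + b/c = 1.5)

def cost(y, x):
    return C + min(abs(y - (x - B)), abs(y - (x + B)))

def max_cost(y, xs):
    return max(cost(y, x) for x in xs)

def ratio(d, n=4):
    """Lower-bound instance: x_1 and n-1 agents at x_2, with
    d = f_opt - (x_1 + b); here x_1 = 0, so x_2 = 2b + 2d and the
    median agent is x_2 (for n >= 3)."""
    x1, x2 = 0.0, 2 * B + 2 * d
    xs = [x1] + [x2] * (n - 1)
    opt = max_cost((x1 + x2) / 2, xs)                        # optimal midpoint
    m1 = 0.5 * max_cost(x2 - B, xs) + 0.5 * max_cost(x2 + B, xs)
    return m1 / opt

# ratio -> 1 + b/c = 1.5 as d -> 0, and -> 2 as d -> infinity:
assert abs(ratio(1e-9) - 1.5) < 1e-6
assert abs(ratio(1e6) - 2.0) < 1e-3
```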
Next, we provide a lower bound on the approximation ratio of any truthfulinexpectation mechanism.
Theorem 4
For any values of c and b, no truthfulinexpectation mechanism can achieve an approximation lower than \(\frac{3}{2}\) for the maximum cost.
First, we state a couple of lemmas which are in essence very similar to those used in the proof for the single-peaked preferences case in [30]. Let \(\mathbf {x}=\langle x_1,x_2 \rangle \) be an instance such that \(x_1+b<x_2-b\) and let \(\lambda = (x_2-b)-(x_1+b)\). Let f be a truthful-in-expectation mechanism and let \(\mathcal {D}\) be the distribution that \(y=f(\mathbf {x})\) follows on instance \(\mathbf {x}\).
Lemma 2
On instance \(\mathbf {x}\), at least one of \(\mathbb {E}_{y \sim {\mathcal {D}}}\left[ \text {cost}(x_1,y)\right] \ge c+\frac{\lambda }{2}\) and \(\mathbb {E}_{y \sim \mathcal {D}}\left[ \text {cost}(x_2,y)\right] \ge c+\frac{\lambda }{2}\) holds.
Proof
Obviously, \(\text {cost}(x_1,y) + \text {cost}(x_2,y) \ge 2c+\lambda \) for any choice of y, hence \(\mathbb {E}_{y \sim {\mathcal {D}}}\left[ \sum _{i=1}^{2}\text {cost}(x_i,y)\right] = \sum _{i=1}^{2}\mathbb {E}_{y \sim {\mathcal {D}}}[\text {cost}(x_i,y)] \ge 2c+\lambda \). Therefore, it must be that \(\mathbb {E}_{y \sim {\mathcal {D}}}[\text {cost}(x_i,y)] \ge c+\frac{\lambda }{2}\) for at least one of \(i=1\) or \(i=2\). \(\square \)
Lemma 3
Let \(f_{\mathrm {opt}}(\mathbf {x})\) be the outcome of the optimal mechanism on instance \(\mathbf {x}\). If \(\mathbb {E}_{y \sim {\mathcal {D}}}\left[ |y-f_{\mathrm {opt}}(\mathbf {x})|\right] = \varDelta \), then the maximum cost of the mechanism on this instance is \(\mathbb {E}[MC_f(\mathbf {x})] = c+ \frac{\lambda }{2} + \varDelta \).
Proof
For any location y, it holds that \(\max _{i}\text {cost}(x_i,y) = c+\frac{\lambda }{2}+|y-f_{\mathrm {opt}}(\mathbf {x})|\); this can be verified directly for y in each of the regions determined by the peaks of the two agents. Taking expectations over \(y \sim \mathcal {D}\) yields \(\mathbb {E}[MC_f(\mathbf {x})] = c+\frac{\lambda }{2}+\varDelta \). \(\square \)
We can now prove the theorem.
Proof
Consider an instance with two agents on which \(x_1+b < x_2-b\) and \((x_2-b)-(x_1+b)=\lambda \). It holds that \(f_{\mathrm {opt}}(\mathbf {x})=\frac{x_1+x_2}{2}\). Assume there is a truthful-in-expectation mechanism M such that \(y=f(x_1,x_2)\) follows a distribution \({\mathcal {D}}\) on this instance. According to Lemma 2, at least one of \(\mathbb {E}_{y \sim \mathcal {D}}[\text {cost}(y,x_1)] \ge c+\lambda /2\) and \(\mathbb {E}_{y \sim \mathcal {D}}[\text {cost}(y,x_2)] \ge c+\lambda /2\) holds. W.l.o.g., assume the second inequality is true (if the first inequality is true, we can make a symmetric argument with agent \(x_1\) deviating).
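The maximum-cost identity behind Lemma 3 can be verified numerically on a grid; the snippet below is a sketch with arbitrary values for b, c and the agents' positions, under the same assumed cost model of c plus the distance to the nearer peak:

```python
# Grid check (sketch, arbitrary values) of the identity behind Lemma 3:
# for two agents with x1 + b < x2 - b, the maximum cost of any point y
# equals c + lambda/2 + |y - (x1 + x2)/2|.
def cost(x, y, b, c):
    return c + min(abs(y - (x - b)), abs(y - (x + b)))

b, c = 1.0, 0.3
x1, x2 = 0.0, 5.0                    # x1 + b < x2 - b holds
lam = (x2 - b) - (x1 + b)
mid = (x1 + x2) / 2                  # the optimal location f_opt
for y in [mid + 0.01 * k for k in range(-800, 800)]:
    mc = max(cost(x1, y, b, c), cost(x2, y, b, c))
    assert abs(mc - (c + lam / 2 + abs(y - mid))) < 1e-9
print("identity verified on a grid of points")
```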
4 Deterministic mechanisms
We now turn our attention to deterministic mechanisms. We will start by stating and proving the following lemma, which will be very useful throughout the paper. Variations of the instances used here will appear in several of our proofs. When \(n=2\), we define the following family of instances, called primary instances.
Primary instance We will say that an instance is a primary instance, if it holds that \({\mathbf {x}}=\langle x_1,x_2 \rangle \) with \(x_1+2b+\epsilon = x_2-b\), where \(\epsilon \) is a positive real number. In the following we will fix such an \(\epsilon >0\) (e.g. \(\epsilon = b/2\)) and refer to the resulting instance as the primary instance.
Lemma 4
On the primary instance, there is no anonymous, position invariant and strategyproof mechanism such that \(f({\mathbf {x}}) \in [x_1+b,x_2-b]\).
Proof
For contradiction, suppose there exists an anonymous, position invariant and strategyproof mechanism M that outputs \(f({\mathbf {x}})\in [x_1+b,x_2-b]\). Denote \(\delta _1 = f(\mathbf {x}) - (x_1+b)\) and \(\delta _2=(x_2-b)-f(\mathbf {x})\). Throughout the proof we start with the primary instance and construct some other instances to prove the lemma. There are two cases to consider.

Instance \(\mathrm {I}\) : \(\mathbf {x^{\mathrm {I}}}=\langle x^{\mathrm {I}}_1,x^{\mathrm {I}}_2 \rangle \), where \(x^{\mathrm {I}}_1=x_1+\delta _1, x^{\mathrm {I}}_2=x_2\).

Instance \(\mathrm {II}\) : \(\mathbf {x^{\mathrm {II}}}=\langle x^{\mathrm {II}}_1,x^{\mathrm {II}}_2 \rangle \), where \(x^{\mathrm {II}}_1=x_1+\delta _1, x^{\mathrm {II}}_2=x_2+\delta _1\).

Instance \(\mathrm {III}\) : \(\mathbf {x^{\mathrm {III}}}=\langle x^{\mathrm {III}}_1,x^{\mathrm {III}}_2 \rangle \), where \(x^{\mathrm {III}}_1=x_1+2b, x^{\mathrm {III}}_2=x_2\).
 (a)
\(f(\mathbf {x^{\mathrm {III}}})=x_{1}^{\mathrm {III}}-b\). See left hand side of Fig. 2, where we have instance \(\mathrm {IVa}\): \(x_1^{\mathrm {IVa}}=x_1+2b, x_2^{\mathrm {IVa}}=x_2+2b\).
Obviously, instance \(\mathrm {IVa}\) is position equivalent to the primary instance, so \(f(x_1^{\mathrm {IVa}},x_2^{\mathrm {IVa}})=x_1^{\mathrm {IVa}}+b\). Note that the cost of agent \(x_2^{\mathrm {III}}\) is \(b+c+\epsilon \). If agent \(x_2^{\mathrm {III}}\) misreports \(x_2^{\mathrm {IVa}}\), then her cost becomes \(b+c-\epsilon \), which is smaller than when reporting truthfully. So \(f(\mathbf {x^{\mathrm {III}}})\not =x_1^{\mathrm {III}}-b\) if Mechanism M is strategyproof.
 (b)
\(f(\mathbf {x^{\mathrm {III}}})=x_1^{\mathrm {III}}+b\). See right hand side of Fig. 2, where we have instance \(\mathrm {IVb}\): \(x_1^{\mathrm {IVb}}=x_1^{\mathrm {III}}, x_2^{\mathrm {IVb}}=x_1^{\mathrm {III}}-(b+\epsilon )\).
Obviously, instance \(\mathrm {IVb}\) is position equivalent to instance \(\mathrm {III}\), so it holds that \(f(x_1^{\mathrm {IVb}},x_2^{\mathrm {IVb}})=x_2^{\mathrm {IVb}}+b\) as implied by anonymity and position invariance. Note that on instance \(\mathrm {III}\), the cost of agent \(x_2^{\mathrm {III}}\) is \(b+c-\epsilon \). If agent \(x_2^{\mathrm {III}}\) misreports \(x_2^{\mathrm {IVb}}\), then her cost becomes \(c+2\epsilon \), which is smaller than when reporting truthfully (since \(\epsilon \) is arbitrarily small). So \(f(\mathbf {x^{\mathrm {III}}})\not =x_1^{\mathrm {III}}+b\) if Mechanism M is strategyproof.
In all, there is no anonymous, position invariant and strategyproof mechanism such that \(f({\mathbf {x}}) \in [x_1+b,x_2-b]\). \(\square \)
It is well known [28] that when agents have single-peaked preferences, outputting the location of the kth agent (the kth order statistic) results in a strategyproof mechanism. This is not the case, however, for double-peaked preferences, for any choice of k.
Lemma 5
Given any instance \(\mathbf {x}=\langle x_1,\ldots ,x_n \rangle \), any mechanism that outputs \(f({\mathbf {x}})=x_i-b\), for \(i=2,\ldots ,n\), or any mechanism that outputs \(f({\mathbf {x}})=x_i+b\), for \(i=1,\ldots ,n-1\), is not strategyproof.
Proof
We prove the case when \(f({\mathbf {x}})=x_1+b\). The arguments for the other cases are similar. Consider any instance where \(x_2=x_1+b\) and \(x_i-b > x_2+b\) for \(i=3,\ldots ,n\). Since \(f(\mathbf {x})=f(x_1,x_2,x_3,\ldots ,x_n)=x_1+b\), the cost of agent 2 is \(\mathrm {cost}(f(\mathbf {x}),x_2)=b+c\). If agent 2 misreports \(x_2'=x_1-b\), the outcome will be \(f(\mathbf {x'})=f(x_1,x_2',x_3,\ldots ,x_n)=x_2'+b=x_1\), and \(\mathrm {cost}(f(\mathbf {x'}),x_2)=c<b+c=\mathrm {cost}(f(\mathbf {x}),x_2)\). Agent 2 has an incentive to misreport, therefore the mechanism is not strategyproof. \(\square \)
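The deviation in this proof is easy to replay in code. Below is a sketch with illustrative numbers; the mechanism shown is the "leftmost report plus b" member of the family ruled out by Lemma 5, under the assumed cost model of c plus the distance to the nearer peak:

```python
# Replaying the deviation in the proof above (illustrative values) for
# the "leftmost report plus b" member of the ruled-out family.
def cost(x, y, b, c):
    # assumed double-peaked cost: c plus the distance to the nearer peak
    return c + min(abs(y - (x - b)), abs(y - (x + b)))

def leftmost_plus_b(reports, b):
    return min(reports) + b

b, c = 1.0, 0.2
x1, x2, x3 = 0.0, 1.0, 10.0          # x2 = x1 + b and x3 - b > x2 + b
truthful = leftmost_plus_b([x1, x2, x3], b)      # facility at x1 + b = x2
lying = leftmost_plus_b([x1, x1 - b, x3], b)     # agent 2 reports x1 - b
assert cost(x2, truthful, b, c) == b + c         # truthful cost of agent 2
assert cost(x2, lying, b, c) == c                # the misreport pays off
```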
This only leaves two potential choices among kth order statistics, either \(f({\mathbf {x}})=x_{1}-b\) or \(f({\mathbf {x}})=x_{n}+b\). Consider the following mechanism.
Mechanism M2
Given any instance \(\mathbf {x}=\langle x_1,\ldots ,x_n \rangle \), locate the facility always on the left peak of agent 1, i.e. \(f({\mathbf {x}})=x_{1}-b\), or always on the right peak of agent n, i.e. \(f(\mathbf {x})=x_n+b\).
From now on, we will assume that M2 locates the facility on \(x_1-b\) on any instance \(\mathbf {x}\). The analysis for the other case is similar.
Theorem 5
Mechanism M2 is strategyproof.
Proof
Obviously, agent 1 has no incentive to misreport, since her cost is already minimum. For any other agent \(i, i=2,\ldots ,n\), the cost is \(\mathrm {cost}(f({\mathbf {x}}),x_{i}) = c+x_i-x_1\). For any misreport \(x'_{i} \ge x_1\), the facility is still located on \(x_1-b\) and every agent’s cost is the same as before. For any misreport \(x'_{i} < x_{1}\), the facility moves to \(f(x_1,\ldots ,x_i',\ldots ,x_n)=x'_{i}-b<x_1-b\), i.e. further away from any of agent i’s peaks, and hence this choice admits a larger cost for her. \(\square \)
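A brute-force check over a finite grid of misreports (a sketch with arbitrary sample values, not a proof) illustrates Theorem 5:

```python
# Brute-force strategyproofness check for Mechanism M2 (a sketch over a
# finite grid of misreports with arbitrary sample values -- not a proof).
def cost(x, y, b, c):
    # assumed double-peaked cost: c plus the distance to the nearer peak
    return c + min(abs(y - (x - b)), abs(y - (x + b)))

def m2(reports, b):
    # Mechanism M2: the left peak of the leftmost reported location
    return min(reports) - b

b, c = 1.0, 0.5
profile = [0.0, 2.0, 3.5]
grid = [k / 4 for k in range(-40, 41)]          # candidate misreports
for i, xi in enumerate(profile):
    honest = cost(xi, m2(profile, b), b, c)
    for lie in grid:
        reports = profile[:i] + [lie] + profile[i + 1:]
        assert cost(xi, m2(reports, b), b, c) >= honest
print("no profitable deviation found")
```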
In the following, we prove that for the case of two agents, Mechanism M2 is actually the only strategyproof mechanism that satisfies anonymity and position invariance. We start with the following lemma.
Lemma 6
For any instance \({\mathbf {x}}= \langle x_1,x_2 \rangle \), where \(x_1+b<x_2-b\), if an anonymous, position invariant and strategyproof mechanism outputs \(f(x_1,x_2)=x_1-b\), then it must output \(f(x'_1,x'_2)=x_1'-b\) for any instance \(\mathbf {x'}=\langle x_1',x_2'\rangle \) with \(x_1' \le x_2'\).
Proof
Let \((x_2-b) - (x_1+b) = \gamma \). For any other instance \(\mathbf {x'}=\langle x'_1,x'_2\rangle \) with \(x_1' \le x_2'\), we may assume \(x'_1=x_1\) without loss of generality, due to position invariance. In particular, we will first prove that on any instance \(\mathbf {x'}\) with \(x'_1=x_1\) and \(x'_2 > x_1\), the position of the facility has to be \(x'_1-b\), and we will establish that by only considering potential deviations of agent 2 on instance \({\mathbf {x}}\) such that \(x_2'\ge x_1\). If the claim holds for this set of deviations, then it certainly holds even if agent 2 can deviate anywhere on the real line. Let S be the set of those instances. Then, any instance \({\tilde{\mathbf {x}}} \notin S\) is position equivalent to some instance \(\mathbf {x} \in S\), and therefore it has to be that \(f({\tilde{\mathbf {x}}})=\tilde{x}_1-b\), and the lemma will follow.
On instance \(\mathbf {x}\), the cost of agent 2 is \(2b+c+\gamma \), so for any deviation of \(x_2\), her cost must be at least \(2b+c+\gamma \), as required by strategyproofness. This implies that if agent 2 misreports \(x'_2\), then on the resulting instance it must be either \(f(x_1,x'_2) \in (-\infty ,x_1-b]\) or \(f(x_1,x'_2)\in [x_2+3b+\gamma ,+\infty )\).
First, assume \(f(x_1,x_2')\in [x_2+3b+\gamma ,+\infty )\). Let instance \(\mathrm {I}\) be \(\mathbf {x^I}=\langle x^\mathrm {I}_1,x^\mathrm {I}_2\rangle \), where \(x^\mathrm {I}_1=x_1, x^\mathrm {I}_2=f(x_1,x_2')+b\). On instance \(\mathrm {I}\) it must be either \(f(x^\mathrm {I}_1,x^\mathrm {I}_2)=f(x_1,x_2')\) or \(f(x^\mathrm {I}_1,x^\mathrm {I}_2)=f(x_1,x_2')+2b\), otherwise agent 2 can deviate from \(x^\mathrm {I}_2\) to \(x_2'\) and move the facility to \(x_2^{\mathrm {I}}-b\), minimizing her cost and violating strategyproofness. However then, on instance \(\mathrm {I}\), agent 1 can misreport \(\bar{x}_1^{\mathrm {I}}=x^\mathrm {I}_2-2b-\gamma \) and move the facility to \(x^{\mathrm {I}}_2-3b-\gamma \), reducing her cost and violating strategyproofness. This follows from the fact that the resulting instance \(\langle \bar{x}_1^\mathrm {I},x_2^\mathrm {I}\rangle \) and instance \(\mathbf {x}\) are position equivalent and that \(f(x_1,x_2)=x_1-b\) on instance \(\mathbf {x}\). Hence, it cannot be that \(f(x_1,x_2') \in [x_2+3b+\gamma ,+\infty )\) on instance \(\mathbf {x'}\).
Second, assume \(f(x_1,x_2')\in (-\infty ,x_1-b)\). Then, since \(x_2' > x_1\), agent 2 can deviate from \(x_2'\) to \(x_2\) and move the facility to \(x_1-b\), i.e. closer to her actual position. Hence, it can’t be that \(f(x_1,x_2') \in (-\infty ,x_1-b)\) on instance \(\mathbf {x'}\) either.
In conclusion, it must be that \(f(x_1,x_2')=f(x_1',x_2')=x_1'-b\) for any instance \(\mathbf {x'}=\langle x_1',x_2'\rangle \) with \(x_1' \le x_2'\). \(\square \)
Theorem 6
When \(n=2\), the only strategyproof mechanism that satisfies position invariance and anonymity is Mechanism M2.
Proof
For contradiction, suppose there exists an anonymous, position invariant strategyproof mechanism M which is different from Mechanism M2 and consider the primary instance used in Lemma 4.
We first argue that it must be that \(f(x_1,x_2) \in [x_1+b,x_2-b]\). Assume on the contrary that \(f(x_1,x_2) < x_1+b\) (the argument for \(f(x_1,x_2)>x_2-b\) is symmetric). Then, consider the instance \(\langle x'_1,x'_2 \rangle \), where \(x_2'=x_2\) and \(x_1'=f(x_1,x_2)-b\). On this instance, it must be that \(f(x_1',x_2') = x_1'-b\) or \(f(x_1',x_2') = x_1'+b\), otherwise agent 1 can deviate from \(x_1'\) to \(x_1\) and move the facility to \(x_1'+b\), minimizing her cost. In addition, if \(f(x_1',x_2') = x_1'-b\), then according to Lemma 6, M must be Mechanism M2, contradicting our assumption. So let us assume \(f(x_1',x_2')=x_1'+b\). Then, on the primary instance, agent 2 could report \(\hat{x}_2 =x_2+(x_1+b-f(x_1,x_2))\) and by position invariance (since instances \(\langle x_1',x_2'\rangle \) and \(\langle x_1,\hat{x}_2\rangle \) are position equivalent), it should be \(f(x_1,\hat{x}_2)=x_1+b\). This would give agent 2 an incentive to misreport, violating strategyproofness. Hence, on the primary instance it must be that \(f(x_1,x_2) \in [x_1+b,x_2-b]\).
However, according to Lemma 4, it is impossible for a strategyproof mechanism to output \(f(x_1,x_2) \in [x_1+b,x_2-b]\) on the primary instance. In conclusion, no mechanism other than Mechanism M2 is strategyproof, anonymous and position invariant. \(\square \)
4.1 Social cost
We have seen that Mechanism M2 is strategyproof, but how well does it perform with respect to our goals, namely minimizing the social cost or the maximum cost? In other words, what is the approximation ratio that Mechanism M2 achieves against the optimal choice? First, we observe that the optimal mechanism, which minimizes the social cost, is not strategyproof.
Theorem 7
The optimal mechanism with respect to the social cost, \(f_{\mathrm {opt}}(\mathbf {x}) = \arg \min \limits _y \sum \limits _{i=1}^{n} \mathrm {cost}(y,x_i)\), is not strategyproof.
Proof
Consider an instance \({\mathbf {x}}=\langle x_1,x_2,x_3\rangle \), such that \(x_2+b<x_3-b\), \(x_2-b< x_1 + b < x_2\) and \((x_1+b)-(x_2-b) = \epsilon \), where \(\epsilon \) is an arbitrarily small positive quantity. On this instance, the optimal facility location is \(x_2+b\) and the cost of agent \(x_1\) is \(c+2b-\epsilon \). Suppose now that agent \(x_1\) reports \(x'_1 < x_2-2b\). Moreover, suppose that when there are two locations \(y_1\) and \(y_2\), with \(y_1 < y_2\), that admit the same social cost, the optimal mechanism outputs \(y_1\). If the mechanism outputs \(y_2\) instead, we can use a symmetric argument on the instance \(\mathbf {x'}=\langle x'_1,x'_2,x'_3 \rangle =\langle x'_1,x_2,x_2+2b-\epsilon \rangle \) with agent 3 misreporting \(x_3\).^{2} By this tie-breaking rule, on the instance \(\langle x_1',x_2,x_3\rangle \), the location of the facility is \(x_2-b\) and the cost of agent \(x_1\) is \(c+\epsilon \), i.e. smaller than before. Hence, the optimal mechanism is not strategyproof. To extend this to an arbitrary number of agents, let \(x_j=x_2\) for every other agent \(x_j\). \(\square \)
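Theorem 7's deviation can be replayed numerically. The sketch below uses illustrative values (b=1, c=0.25, \(\epsilon =0.25\), chosen so that all arithmetic is exact in binary floating point), assumes the cost of an agent is c plus the distance to her nearer peak, and relies on the fact that the social-cost minimizer can be found among the reported peaks, with ties broken to the left as in the proof:

```python
# Replaying the deviation of Theorem 7 (illustrative values chosen so
# all arithmetic is exact in binary floating point: b=1, c=0.25, eps=0.25).
def cost(x, y, b, c):
    return c + min(abs(y - (x - b)), abs(y - (x + b)))

def opt_social(reports, b, c):
    # the social-cost minimizer lies at some reported peak; ties go left
    cands = sorted({r - b for r in reports} | {r + b for r in reports})
    return min(cands, key=lambda y: (sum(cost(r, y, b, c) for r in reports), y))

b, c = 1.0, 0.25
x1, x2, x3 = -1.75, 0.0, 3.0       # x2 - b < x1 + b < x2 and x2 + b < x3 - b
y_true = opt_social([x1, x2, x3], b, c)
y_lie = opt_social([-3.0, x2, x3], b, c)   # agent 1 reports some x1' < x2 - 2b
assert y_true == x2 + b                    # facility at x2 + b when truthful
assert y_lie == x2 - b                     # the tie at x2 +/- b breaks left
assert cost(x1, y_lie, b, c) < cost(x1, y_true, b, c)   # deviation pays
```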
Unfortunately, when considering the social cost, the approximation ratio of Mechanism M2 depends on the number of agents in the worst case. The approximation ratio is given by the following theorem.
Theorem 8
Mechanism M2 achieves an approximation ratio of \(\max \left\{ n-1,1+\frac{2b}{c}\right\} \) for the social cost.
Proof
Consider any instance \(\mathbf {x'}=\langle x'_1,\ldots ,x'_n\rangle \) and let \(y=f_{\mathrm {opt}}(\mathbf {x'})\). It also holds that \(f_{\mathrm{M2}}(\mathbf {x'})=x'_1-b\). Denote the social costs of the optimal mechanism and Mechanism M2 on instance \(\mathbf {x'}\) by \(SC_{\mathrm {opt}}(\mathbf {x'})\) and \(SC_{\mathrm{M2}}(\mathbf {x'})\), respectively. Let \(\mathbf {x}\) be the instance obtained from instance \(\mathbf {x'}\) as follows. For every agent \(i, i \ne 1\), if \(x'_i+b \le y\), let \(x_i=2y-x'_i\); if \(x'_i< y< x'_i + b\), let \(x_i =x'_i+2b\); if \(x'_i-b<y<x'_i\), let \(x_i =2y-x'_i+2b\); otherwise let \(x_i=x'_i\). Observe that \(|x_i-b-y|=\min \left( |x'_i-b-y|,|x'_i+b-y|\right) \) and hence \(\text {cost}(x_i,y)=\text {cost}(x'_i,y)\) for every agent i. Similarly, it holds that \((x_i-b)-(x_1-b)\ge (x'_i-b)-(x'_1-b)\) and hence (since \(x_1=x'_1\)), \(\text {cost}(x_i,x_1-b)\ge \text {cost}(x'_i,x'_1-b)\) for all agents i.
We will calculate an upper bound on the approximation ratio on instance \(\mathbf {x'}\). To do that, we will calculate an upper bound on the value of the ratio \(SC_{\mathrm{M2}}(\mathbf {x})/SC_{y}(\mathbf {x})\) on instance \(\mathbf {x}\), where \(SC_{\mathrm{M2}}(\mathbf {x})\) is the social cost of Mechanism M2 on instance \(\mathbf {x}\) and \(SC_{y}(\mathbf {x})\) is the social cost admitted by y on instance \(\mathbf {x}\). By the way the instance was constructed, it holds that \(SC_{\mathrm{M2}}(\mathbf {x}) \ge SC_{\mathrm{M2}}(\mathbf {x'})\) and \(SC_{y}(\mathbf {x})=SC_{y}(\mathbf {x'})=SC_{\mathrm {opt}}(\mathbf {x'})\), and hence \(SC_{\mathrm{M2}}(\mathbf {x})/SC_{y}(\mathbf {x})\) is an upper bound on \(SC_{\mathrm{M2}}(\mathbf {x'})/SC_{\mathrm {opt}}(\mathbf {x'})\).
Let \(d_i=(x_i-b)-y\), for \(i=2,\ldots ,n\), and let \(k = \sum _{i\ne 1}d_i\). Finally, let \(d = |y - (x_1+b)|\). We consider three cases (see Fig. 3).
Case 1 \(x_1+b < y\).
Case 2 \(x_1 < y \le x_1+b\), which means \(0\le d<b\).
Case 3 \(x_1-b<y \le x_1\), which means \(b \le d <2b\).
In all, since \(SC_{\mathrm{M2}}(\mathbf {x})/SC_{y}(\mathbf {x}) \le n-1\), the approximation ratio of Mechanism M2 is at most \(\max \{n-1,1+\frac{2b}{c}\}\). The approximation ratio is exactly \(n-1\) on any instance \(\mathbf {x}=\langle x_1,x_2,\ldots ,x_n\rangle \) with \(x_2=\cdots =x_n\) and \(x_1 \ll x_2\), i.e. when agent 1 lies far away to the left of the other \(n-1\) agents. We note that by our analysis, the second term of the upper bound is actually \(1+\frac{(n-1)2b}{nc}\), but we instead state the larger bound \(1+\frac{2b}{c}\) for ease of exposition. The ratio is exactly \(1+\frac{(n-1)2b}{nc}\) on any instance \(\mathbf {x}=\langle x_1,x_2,\ldots ,x_n\rangle \) with \(x_1+b=x_2-b=\cdots =x_n-b\) and goes to \(1+\frac{2b}{c}\) as n goes to infinity. \(\square \)
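The first tight instance mentioned above can be evaluated numerically. In this sketch (illustrative values, assuming the cost model of c plus the distance to the nearer peak), one agent is placed far to the left of \(n-1\) coinciding agents, and the ratio of Mechanism M2's social cost to that of the group's left peak, which is optimal here, approaches \(n-1\):

```python
# The tight instance for the (n - 1)-factor (a sketch with illustrative
# values): agent 1 far to the left of n - 1 coinciding agents.
def cost(x, y, b, c):
    return c + min(abs(y - (x - b)), abs(y - (x + b)))

def social(reports, y, b, c):
    return sum(cost(r, y, b, c) for r in reports)

b, c, n = 1.0, 0.5, 5
far = 10.0 ** 6                      # distance of the group from agent 1
profile = [0.0] + [far] * (n - 1)
sc_m2 = social(profile, profile[0] - b, b, c)    # Mechanism M2: x_1 - b
sc_opt = social(profile, far - b, b, c)          # the group's left peak is optimal here
print(sc_m2 / sc_opt)                # approaches n - 1 = 4 as far grows
```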
Next, we will prove a lower bound of \(1+b/c\) on the approximation ratio of any anonymous, position invariant, and strategyproof mechanism, when the number of agents is even. We will start with the following lemma. The main idea behind the proof of the lemma is to simulate two-agent profiles by n-agent profiles in which two groups of n/2 agents coincide on two different positions. The two-agent mechanism then outputs the same location that the n-agent mechanism outputs on the respective location profile. Similar ideas have been used before in the literature, e.g. see [19, 36].
Lemma 7
Let \(M^n\) be a strategyproof, anonymous and position invariant mechanism for n agents, where n is even. Then, for any location profile \(\mathbf {x}=\langle x_1=\cdots =x_{n/2}, x_{n/2+1}=\cdots =x_n\rangle \), it holds that \(M^n(\mathbf {x})= x_1-b\).
Proof
Let \(M^2\) be the following mechanism for two agents: On input location profile \(\langle x_1,x_2\rangle \), output \(M^n(\mathbf {x'})\), where \(\mathbf {x'}=\langle x'_1=\cdots =x'_{n/2}, x'_{n/2+1}=\cdots =x'_n \rangle \), and \(x'_1=x_1\) and \(x'_{n/2+1}=x_2\). First, we claim that \(M^2\) is strategyproof, anonymous and position invariant. If that is true, then by Theorem 6, \(M^2\) is Mechanism M2 and the lemma follows.
First let \(\mathbf {x}=\langle x_1,x_2 \rangle \), \({\hat{\mathbf {x}}}=\langle \hat{x}_1, \hat{x}_2 \rangle \) be any two position equivalent location profiles. Observe that the corresponding n-agent profiles \(\mathbf {x'}\) and \({\hat{\mathbf {x}}'}\), obtained by placing n/2 agents on \(x_1\) and \(\hat{x}_1\) and n/2 agents on \(x_2\) and \(\hat{x}_2\) respectively, are also position equivalent. Since \(M^n\) is position invariant, it must hold that \(M^n(\mathbf {x'})=M^n({\hat{\mathbf {x}}'})\) and hence, by construction of \(M^2\), \(M^2(\mathbf {x})=M^2({\hat{\mathbf {x}}})\). Since \(\mathbf {x}\) and \({\hat{\mathbf {x}}}\) were arbitrary, Mechanism \(M^2\) is position invariant.
Similarly, let \(\mathbf {x}=\langle x_1,x_2 \rangle \), \({\hat{\mathbf {x}}}=\langle \hat{x}_1, \hat{x}_2 \rangle \) be any two location profiles such that \({\hat{\mathbf {x}}}\) is obtained from \(\mathbf {x}\) by a permutation of the agents. The outcome of \(M^n\) on the corresponding n-agent location profiles is the same (since the number of agents placed on \(x_1\) and \(x_2\) is the same) and, by construction of \(M^2\), \(M^2(\mathbf {x})=M^2({\hat{\mathbf {x}}})\); since the profiles were arbitrary, the mechanism is anonymous.
Finally, for strategyproofness, start with a location profile \({\hat{\mathbf {x}}'}=\langle \hat{x}'_1,\hat{x}'_2 \rangle \) and let \(\mathbf {x'}=\langle x'_1=\cdots =x'_{n/2}, x'_{n/2+1}=\cdots =x'_n \rangle \) be the corresponding n-agent location profile. Let \(y=M^n(\mathbf {x'})\) and let \(\text {cost}(x',y)\) be the cost of agents \(x'_1,\ldots ,x'_{n/2}\) on \(\mathbf {x'}\). For any \(x_1\), let \(\langle x_1, x'_2=\cdots =x'_{n/2}, x'_{n/2+1}=\cdots =x'_n \rangle \) be the resulting location profile. By strategyproofness of \(M^n\), agent \(x'_1\) cannot decrease her cost by misreporting \(x_1\) on profile \(\mathbf {x'}\) and hence her cost on the new profile is at least \(\text {cost}(x',y)\). Next, consider the location profile \(\langle x_1=x_2,x'_3=\cdots =x'_{n/2}, x'_{n/2+1}=\cdots =x'_n \rangle \) and observe that by the same argument, the cost of agent \(x'_2\) is not smaller on the new profile when compared to \(\langle x_1, x'_2=\cdots =x'_{n/2}, x'_{n/2+1}=\cdots =x'_n \rangle \), and hence her cost is at least \(\text {cost}(x',y)\). Continuing like this, we obtain the profile \(\langle x_1=\cdots =x_{n/2},x'_{n/2+1}=\cdots =x'_n \rangle \) and by the same argument, the cost of agent \(x'_{n/2}\) on this profile is at least \(\text {cost}(x',y)\). The location profile \(\langle x_1=\cdots =x_{n/2},x'_{n/2+1}=\cdots =x'_n \rangle \) corresponds to the two-agent location profile \({\hat{\mathbf {x}}}=\langle \hat{x}_1,\hat{x}'_2 \rangle \) and by construction of \(M^2\), \(\text {cost}(\hat{x}'_1,M^2({\hat{\mathbf {x}}'})) \le \text {cost}(\hat{x}'_1,M^2({\hat{\mathbf {x}}}))\), and since the choice of \(x_1\) (and hence the choice of \(\hat{x}_1\)) was arbitrary, Mechanism \(M^2\) is strategyproof. \(\square \)
Theorem 9
When the number of agents is even, no strategyproof mechanism that satisfies position invariance and anonymity can achieve an approximation ratio lower than \(1+(b/c)\) for the social cost.
Proof
Let \(M^n\) be a strategyproof, anonymous and position invariant mechanism and consider any location profile \(\mathbf {x}=\langle x_1=\cdots =x_{n/2},x_{n/2+1}=\cdots =x_n\rangle \) with \(x_{n/2+1}=x_1+2b\). By Lemma 7, \(M^n(\mathbf {x})= x_1-b\) and the social cost of \(M^n\) is \(nc+(n/2)\cdot 2b\), while the social cost of the optimal allocation is only nc. The lower bound follows. \(\square \)
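The numbers in this proof can be checked directly; below is a sketch with illustrative values n=6, b=1 and c=1/2, under the assumed cost model of c plus the distance to the nearer peak:

```python
# Checking the numbers in the proof (sketch): n/2 agents at x_1 and n/2
# at x_1 + 2b, with n = 6, b = 1 and c = 1/2.
def cost(x, y, b, c):
    return c + min(abs(y - (x - b)), abs(y - (x + b)))

b, c, n = 1.0, 0.5, 6
profile = [0.0] * (n // 2) + [2 * b] * (n // 2)
sc_m2 = sum(cost(x, -b, b, c) for x in profile)   # M2 outputs x_1 - b
sc_opt = sum(cost(x, b, b, c) for x in profile)   # x_1 + b serves everyone
assert sc_m2 == n * c + (n // 2) * 2 * b          # n*c + (n/2)*2b
assert sc_opt == n * c
assert sc_m2 / sc_opt == 1 + b / c                # the 1 + b/c lower bound
```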
4.2 Maximum cost
First, it is easy to see that the mechanism that outputs the location that minimizes the maximum cost is not strategyproof. On any instance \(\langle x_1,x_2 \rangle \) with \(x_1+b < x_2-b\), the optimal location of the facility is \((x_1+x_2)/2\). If agent \(x_2\) misreports \(x_2'=2x_2-2b-x_1\), then the location moves to \(x_2-b\), minimizing her cost.
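This manipulation can be replayed numerically (a sketch with arbitrary illustrative values, assuming the cost model of c plus the distance to the nearer peak):

```python
# Replaying the misreport above (illustrative values).
def cost(x, y, b, c):
    return c + min(abs(y - (x - b)), abs(y - (x + b)))

b, c = 1.0, 0.5
x1, x2 = 0.0, 6.0                    # x1 + b < x2 - b
y_true = (x1 + x2) / 2               # minimizes the maximum cost
lie = 2 * x2 - 2 * b - x1            # the report x2' = 2*x2 - 2b - x1
y_lie = (x1 + lie) / 2               # new midpoint = x2 - b
assert y_lie == x2 - b               # lands exactly on agent 2's left peak
assert cost(x2, y_lie, b, c) == c    # her cost is now the minimum possible
assert cost(x2, y_true, b, c) > c
```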
While the approximation ratio of Mechanism M2 for the social cost is not constant, its approximation ratio for the maximum cost is. In fact, as we will prove, when the number of agents is even, Mechanism M2 actually achieves the best possible approximation ratio amongst strategyproof mechanisms. We start with the theorem about the approximation ratio of Mechanism M2.
Theorem 10
For \(n \ge 3\), Mechanism M2 achieves an approximation ratio of \(1+\frac{2b}{c}\) for the maximum cost.
Proof
Let \(\mathbf {x}=\langle x_1,\ldots ,x_n\rangle \) be any instance. We consider three cases, depending on the distance between agents \(x_1\) and \(x_n\).
Case 1 \(x_1+b \le x_n-b \Rightarrow x_n-x_1 \ge 2b\).
Case 2 \(x_1< x_n-b< x_1+b \Rightarrow b< x_n-x_1 < 2b\).
Case 3 \(x_n-b \le x_1 \Rightarrow x_n-x_1 \le b\).
The lower bound for the case when the number of agents is even follows.
Corollary 1
When the number of agents is even, no deterministic strategyproof mechanism that satisfies position invariance and anonymity can achieve an approximation ratio lower than \(1+\frac{2b}{c}\) for the maximum cost.
Proof
On the instance \(\mathbf {x}\) in the proof of Theorem 9, there are agents whose cost under any strategyproof, anonymous and position invariant mechanism is \(2b+c\), while under the optimal mechanism it is only c. The lower bound on the approximation ratio follows. \(\square \)
A different lower bound that holds for any number of agents (and without using the position invariance property) is proved in the next theorem.
Theorem 11
No deterministic strategyproof mechanism can achieve an approximation ratio lower than 2 for the maximum cost.
Proof
Consider an instance \(\mathbf {x}=\langle x_1,x_2 \rangle \) with \(x_1+b < x_2-b\) (the instance can be extended to arbitrarily many agents by placing agents on positions \(x_1\) and \(x_2\) and all the arguments will still hold). The optimal location of the facility is \(f_{\mathrm {opt}}(\mathbf {x}) = \frac{x_1+x_2}{2}\). Assume for contradiction that M is a deterministic strategyproof mechanism with approximation ratio smaller than 2.
First, we argue that it cannot be that \(f_\mathrm {M}(\mathbf {x}) \in [x_2-b,\infty )\). Let \(d = (x_2-b)-f_{\mathrm {opt}}(\mathbf {x})\). It holds that \(MC_{\mathrm {opt}}(\mathbf {x})=c+d\). If it were \(f_\mathrm {M}(\mathbf {x}) \in [x_2-b,\infty )\), then it would be that \(MC_{\mathrm {M}}(\mathbf {x})\ge c+2d\) and the approximation ratio would be at least \(2-\frac{c}{d+c}\), which goes to 2 as d grows to infinity (i.e. the agents are placed very far away from each other). Now, for Mechanism M to achieve an approximation ratio smaller than 2, it must be \(f_{\mathrm {M}}(\mathbf {x}) \in [f_{\mathrm {opt}}(\mathbf {x}),x_2-b)\) (or symmetrically \(f_{\mathrm {M}}(\mathbf {x}) \in (x_1+b,f_{\mathrm {opt}}(\mathbf {x})]\)).
Now consider the instance \(\mathbf {x'}=\langle x_1',x_2'\rangle \) with \(x_1'=x_1\) and \(x_2' = f_{\mathrm {M}}(\mathbf {x})+b\). On this instance, it must be either \(f_{\mathrm {M}}(\mathbf {x'})=f_{\mathrm {M}}(\mathbf {x})\) or \(f_{\mathrm {M}}(\mathbf {x'})=f_{\mathrm {M}}(\mathbf {x})+2b\) (the left or the right peak of agent \(x_2'\)), otherwise agent \(x_2'\) could report \(x_2\) and move the facility to \(x_2'-b\), minimizing her cost and violating strategyproofness. For calculating the lower bound, we need the choice that admits the smaller of the two costs, i.e. \(f_{\mathrm {M}}(\mathbf {x'})=f_{\mathrm {M}}(\mathbf {x})\). We calculate the approximation ratio on instance \(\mathbf {x'}\).
The optimal choice for the facility is again \(f_{\mathrm {opt}}(\mathbf {x'})=(x_1+x_2')/2\). Let \(\lambda = (x_2'-b) - f_{\mathrm {opt}}(\mathbf {x'})\). The approximation ratio then is \(2-\frac{c}{\lambda +c}\). We know that \(\lambda \ge d/2\), so when d grows to infinity as before, \(\lambda \) also grows to infinity and the approximation ratio goes to 2. This means that there exist instances on which the approximation ratio of the mechanism is arbitrarily close to 2, which gives us the lower bound. \(\square \)
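The quantity \(2-\frac{c}{d+c}\) that drives this lower bound can be evaluated numerically. The sketch below (illustrative values, assuming the cost model of c plus the distance to the nearer peak) places the facility at \(x_2-b\) and compares against the optimal midpoint for growing d:

```python
# The ratio driving Theorem 11 (sketch, illustrative values): placing the
# facility at x2 - b yields max cost c + 2d against the optimum c + d.
def cost(x, y, b, c):
    return c + min(abs(y - (x - b)), abs(y - (x + b)))

b, c = 1.0, 0.5
x1 = 0.0
ratios = {}
for d in (1.0, 10.0, 1000.0):
    x2 = x1 + 2 * b + 2 * d          # then d = (x2 - b) - (x1 + x2)/2
    f_opt = (x1 + x2) / 2
    mc_opt = max(cost(x1, f_opt, b, c), cost(x2, f_opt, b, c))     # c + d
    mc_edge = max(cost(x1, x2 - b, b, c), cost(x2, x2 - b, b, c))  # c + 2d
    ratios[d] = mc_edge / mc_opt     # equals 2 - c/(d + c)
print(ratios)
```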
5 Group strategyproofness
As we mentioned in the introduction, under the reasonable conditions of position invariance and anonymity, there is no group strategyproof mechanism for the problem. We will prove this claim by using Lemma 4 and the following lemma.
Recall the definition of the primary instance from Sect. 4. When \(n=2k+1, k\in \mathbb {Z}^+\), let P be the instance obtained by locating \(k+1\) agents on \(x_1\) and k agents on \(x_2\) on the primary instance. Similarly, let S be the instance obtained by locating k agents on \(x_1\) and \(k+1\) agents on \(x_2\) on the primary instance. Formally, let \({\mathbf {x}}^P=\langle x_1^P, \ldots ,x_n^P \rangle \), where \(x_1^P=\cdots =x_{k+1}^P\), \(x_{k+2}^P=\cdots =x_{n}^P\), and \((x_{n}^P-b)-(x_{1}^P+b)=b+\epsilon \), and \({\mathbf {x}}^S=\langle x_1^S, \ldots ,x_n^S \rangle \), where \(x_1^S=\cdots =x_{k}^S\), \(x_{k+1}^S=\cdots =x_{n}^S\), and \((x_{n}^S-b)-(x_{1}^S+b)=b+\epsilon \), where \(\epsilon \) is the same quantity as in the primary instance.
Lemma 8
When \(n=2k+1\), any position invariant and group strategyproof mechanism that outputs \(f(\mathbf {x^P})=x_{1}^{{\mathrm {P}}}+b\) on instance \({\mathrm {P}}\) must output \(f(\mathbf {x})=x_1+b\) on any instance \(\mathbf {x}=\langle x_1,\ldots ,x_n \rangle \), where \(x_1=\cdots =x_{k+1}\), \(x_{k+2}=\cdots =x_{n}\) and \((x_{n}-b)-(x_{1}+b)=2b\). Similarly, any position invariant and group strategyproof mechanism that outputs \(f(\mathbf {x^S})=x_{n}^{\mathrm {S}}-b\) on instance \({\mathrm {S}}\) must output \(f(\mathbf {x})=x_n-b\) on any instance \(\mathbf {x}=\langle x_1,\ldots ,x_n \rangle \), where \(x_1=\cdots =x_{k}\), \(x_{k+1}=\cdots =x_{n}\) and \((x_{n}-b)-(x_{1}+b)=2b\).
Proof
We prove the first part of the lemma; the proof of the second part is symmetric. Note that the difference between instances \({\mathbf {x}}^{{\mathrm {P}}}\) and \({\mathbf {x}}\) is that the distance between the two groups of agents is \(3b+\epsilon \) in \({\mathbf {x}}^{{\mathrm {P}}}\) while it is 4b in \({\mathbf {x}}\). First, we argue that \(f({\mathbf {x}})\in [x_1+b,x_n-b]\). Indeed, if that were not the case, say \(f({\mathbf {x}})<x_1+b\), then by the onto condition implied by position invariance, all agents could jointly misreport some different positions and move the facility to \(x_1+b\). This point admits a smaller cost for all agents; specifically, the cost of agents \(x_1,\ldots ,x_{k+1}\) is minimized while the cost of agents \(x_{k+2},\ldots ,x_{n}\) is reduced, and group strategyproofness is violated. Using a symmetric argument, we conclude that it can’t be \(f({\mathbf {x}})>x_n-b\) either.
Secondly, we argue that \(f({\mathbf {x}}) \not \in (x_1+b,x_n-b]\). Indeed, assume that was not the case. Then agents \(x^{{\mathrm {P}}}_{k+2},\ldots ,x^{{\mathrm {P}}}_{n}\) on instance \({\mathbf {x}}^{{\mathrm {P}}}\) could jointly misreport \(x_{k+2},\ldots ,x_{n}\) and move the facility from \(f(\mathbf {x^P})=x_{1}^{{\mathrm {P}}}+b\) to \(f({\mathbf {x}})\). Since by assumption \(f({\mathbf {x}}) \in (x_1+b,x_n-b]\), the cost of each deviating agent is smaller than her cost before deviating. Group strategyproofness is then violated and hence it must be that \(f(\mathbf {x})=x_1+b\). By position invariance, it must be that \(f({\bar{\mathbf {x}}})=\bar{x}_1+b\) on any instance \({\bar{\mathbf {x}}}\) which is position equivalent to instance \(\mathbf {x}\). \(\square \)
Theorem 12
There is no group strategyproof mechanism that is anonymous and position invariant.
Proof
When \(n=2\), on the primary instance, according to Lemma 4, there is no anonymous, position invariant and strategyproof mechanism such that \(f({\mathbf {x}}) \in [x_1+b,x_2-b]\). In addition, if the facility were placed on some point \(f(\mathbf {x}) < x_1+b\) (the argument for \(f(\mathbf {x})>x_2-b\) is symmetric), then for any mechanism that satisfies position invariance, which implies onto, agents 1 and 2 could jointly misreport some positions \(\hat{x}_1\) and \(\hat{x}_2\) such that \(f(\hat{x}_1,\hat{x}_2)=x_1+b\). Obviously, this deviation admits the minimum possible cost for agent 1 and a reduced cost for agent 2, violating group strategyproofness.
The proof can easily be extended to the case when n is even. On the primary instance, simply place \(\frac{n}{2}\) agents on \(x_1\) and \(\frac{n}{2}\) agents on \(x_2\). By considering deviations of coalitions of agents coinciding on \(x_1\) or \(x_2\) instead of deviations of agents \(x_1\) and \(x_2\) respectively, all the arguments still hold. However, additional care must be taken when n is odd.
When \(n=2k+1, k\in \mathbb {Z^+}\), we denote by \(P_{\mathrm {J}}\) the instance after placing \(k+1\) agents on \(x_{1}^{\mathrm {J}}\) and k agents on \(x_{2}^{\mathrm {J}}\) on instance \(\mathrm {J}\), where \(\mathrm {J}\) is either instance \(\mathrm {I}\), \(\mathrm {II}\), \(\mathrm {III}\), \(\mathrm {IVa}\) or \(\mathrm {IVb}\) of the proof of Lemma 4. Similarly, let \(S_{\mathrm {J}}\) be the instance after placing k agents on \(x_{1}^{\mathrm {J}}\) and \(k+1\) agents on \(x_{2}^{\mathrm {J}}\) on instance \(\mathrm {J}\). Finally, let \({X_{1}^{\mathrm {K}}}\) be the group of agents i for which \(x_{i}^{{\mathrm {K}}}=x_{1}^{\mathrm {K}}\) on instance K, where K is either \(P,P_{\mathrm {I}},P_{\mathrm {II}},P_{\mathrm {III}}, P_{\mathrm {IVa}},P_{\mathrm {IVb}}, S,S_{\mathrm {I}},S_{\mathrm {II}},S_{\mathrm {III}}, S_{\mathrm {IVa}}\) or \(S_{\mathrm {IVb}}\), and let \({X_{2}^{\mathrm {K}}}\) be the group of agents i for which \(x_{i}^{{\mathrm {K}}}=x_{n}^{\mathrm {K}}\) on instance \(\mathrm {K}\).
For contradiction, assume that there exists an anonymous, position invariant, and group strategyproof mechanism. On instance P, by group strategyproofness, it must be \(f(\mathbf {x^{P}}) \in [x_1^{P}+b,x_n^{P}-b]\) and by the same arguments used in case 1 of Lemma 4 (with instances \(P_{\mathrm {I}},P_{\mathrm {II}}\) instead of \(\mathrm {I},\mathrm {II}\) and with \(X_{1}^{P}\) and \(X_{2}^{P}\) instead of \(x_1\) and \(x_2\))^{3}, it must be \(f(\mathbf {x^P}) \notin (x_1^{P}+b,x_n^{P}-b)\). Hence, it must be that either \(f(\mathbf {x^P})=x_1^{P}+b\) or \(f(\mathbf {x^P})=x_n^{P}-b\). Assume w.l.o.g. that \(f(\mathbf {x^P})=x_1^{P}+b\); the other case can be handled symmetrically.
Following the arguments of case 2 of Lemma 4 (using instance \(P_{\mathrm {III}}\) instead of \(\mathrm {III}\)), group strategyproofness and position invariance imply that either \(f(\mathbf {x^{P_{\mathrm {III}}}}) = x_{1}^{P_{\mathrm {III}}}-b\) or \(f(\mathbf {x^{P_{\mathrm {III}}}}) = x_{1}^{P_{\mathrm {III}}}+b\) on instance \(P_{\mathrm {III}}\). By the arguments of subcase (a) (using instance \(P_{\mathrm {IVa}}\) instead of \(\mathrm {IVa}\)), it cannot be that \(f(\mathbf {x^{P_{\mathrm {III}}}}) = x_{1}^{P_{\mathrm {III}}}-b\), so it must be that \(f(\mathbf {x^{P_{\mathrm {III}}}}) = x_{1}^{P_{\mathrm {III}}}+b\). However, we cannot simply apply the argument used in subcase (b) to obtain a contradiction: since there is a different number of agents on \(x_{1}^P\) and \(x_{2}^P\), instances P and \(P_{\mathrm {IVa}}\) are no longer position equivalent.
Next, consider instance S and observe that \(f(\mathbf {x^S}) \notin (x_{1}^{S}+b,x_{n}^{S}-b)\) by the same arguments as above, and \(f(\mathbf {x^S}) \in [x_{1}^{S}+b,x_{n}^{S}-b]\) by group strategyproofness and position invariance. Hence, either \(f(\mathbf {x^S})= x_{1}^{S}+b\) or \(f(\mathbf {x^S})= x_{n}^{S}-b\). Assume first that \(f(\mathbf {x^S})=x_{1}^{S}+b\). By the same arguments as above (using instances \(S_{\mathrm {III}}\) and \(S_{\mathrm {IVa}}\)), on instance \(S_{\mathrm {III}}\) it must be that \(f(\mathbf {x^{S_{\mathrm {III}}}}) = x_{1}^{S_{\mathrm {III}}}+b\). Now, observe that if the agents in \(X_{2}^{P_{\mathrm {III}}}\) misreport \(\bar{x}_{i}^{P_{\mathrm {III}}}=x_{i}^{P_{\mathrm {III}}}-2b-2\epsilon \), we obtain instance \(P_{\mathrm {IVb}}\), which is position equivalent to instance \(S_{\mathrm {III}}\), and hence it must be that \(f(\mathbf {x^{P_{\mathrm {IVb}}}})=x_n^{P_{\mathrm {IVb}}}+b\). The cost of the agents in \(X_{2}^{P_{\mathrm {III}}}\) was \(b+c-\epsilon \) before misreporting and becomes \(c+2\epsilon \) after misreporting. This violates group strategyproofness, which means that on instance S, it must be that \(f(\mathbf {x^S})=x_{n}^{S}-b\).
We denote instance \(\mathrm {T}\) by \(\mathbf {x^T}=\langle x_1^{\mathrm {T}},\ldots ,x_n^{\mathrm {T}} \rangle \), where \(x_1^{\mathrm {T}}=\cdots =x_{k}^{\mathrm {T}}=x_{k+1}^{\mathrm {T}}-2b=x_{k+2}^{\mathrm {T}}-4b=\cdots =x_{n}^{\mathrm {T}}-4b\). Let \(X_{1}^{\mathrm {T}}\) be the set of agents i for which \(x_i^{\mathrm {T}}=x_1^{\mathrm {T}}\), let \(X_{2}^{\mathrm {T}}\) be the set of agents j for which \(x_j^{\mathrm {T}}=x_n^{\mathrm {T}}\), and let \(x_{t}\) be agent \(x_{k+1}^{\mathrm {T}}\). On instance \(\mathrm {T}\), it must be that either \(f(\mathbf {x}^{T})=x_{1}^{\mathrm {T}}+b\) or \(f(\mathbf {x}^{T})=x_{n}^{\mathrm {T}}-b\); otherwise, agent \(x_{t}\) could misreport \(x_{t}'=x_{1}^{\mathrm {T}}\), and then, by Lemma 8 and the fact that \(f(\mathbf {x^P})=x_1^{P}+b\) on instance P, it would be \(f(x_1^{\mathrm {T}},\ldots ,x_{t}',\ldots ,x_n^{\mathrm {T}})=x_t-b\), which admits a cost of c for agent \(x_t\). Similarly, by Lemma 8 and the fact that \(f(\mathbf {x^S})=x_n^{S}-b\) on instance S, if agent \(x_t\) misreports \(x_{t}''= x_{n}^{\mathrm {T}}\), then \(f(x_1^{\mathrm {T}},\ldots ,x_{t}'',\ldots ,x_n^{\mathrm {T}})=x_t+b\). If \(f(\mathbf {x^{T}})=x_{n}^{\mathrm {T}}-b\), agent \(x_t\) could form a coalition with the agents in \(X_{1}^{\mathrm {T}}\) and, by misreporting \(x_{t}'\), move the facility to \(x_{t}-b\), a choice that would admit the same cost for her but a strictly smaller cost for every other member of the coalition. If \(f(\mathbf {x^{T}})=x_{1}^{\mathrm {T}}+b\), agent \(x_t\) could form a coalition with the agents in \(X_{2}^{\mathrm {T}}\) and, by misreporting \(x_{t}''\), move the facility to \(x_t+b\), again with the same cost for her but a strictly smaller cost for every other member of the coalition. In each case, there is a coalition of agents that benefits from misreporting, and group strategyproofness is violated. This completes the proof. \(\square \)
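The coalition argument on instance \(\mathrm {T}\) can be sanity-checked numerically. The sketch below assumes the symmetric double-peaked cost model used throughout (each agent pays the minimum cost c at her peaks \(x_i \pm b\), with the cost growing linearly in the distance to the nearer peak); the concrete parameter values are illustrative choices, and the helper name `cost` is ours.

```python
# Numeric sanity check of the coalition deviation on instance T, assuming
# the symmetric double-peaked cost model:
#   cost_i(y) = c + min(|y - (x_i - b)|, |y - (x_i + b)|)

def cost(x, y, b, c):
    """Cost of an agent located at x when the facility is placed at y."""
    return c + min(abs(y - (x - b)), abs(y - (x + b)))

b, c, k = 1.0, 0.5, 3                              # illustrative (n = 2k + 1)
x1 = 0.0
xs = [x1] * k + [x1 + 2 * b] + [x1 + 4 * b] * k    # instance T
xt = xs[k]                                         # the middle agent

# Suppose f(x^T) = x_n - b.  This point is agent t's right peak, so she pays c:
f_right = xs[-1] - b
assert abs(cost(xt, f_right, b, c) - c) < 1e-9

# If t deviates together with the agents on x_1 and the facility moves to
# x_t - b (her left peak), her cost is unchanged while every agent on x_1
# strictly gains:
f_left = xt - b
assert abs(cost(xt, f_left, b, c) - c) < 1e-9            # same cost for t
assert cost(x1, f_left, b, c) < cost(x1, f_right, b, c)  # X_1 strictly gains
print("coalition deviation verified:",
      cost(x1, f_right, b, c), "->", cost(x1, f_left, b, c))
# prints: coalition deviation verified: 2.5 -> 0.5
```

The symmetric case (coalition of \(x_t\) with \(X_{2}^{\mathrm {T}}\)) can be checked in the same way by mirroring the instance.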
6 Conclusion and discussion
In this paper, we studied a natural variant of a well-known problem, that of truthful facility location, when agents have double-peaked linear preferences over the set of possible locations. As we saw, the fact that double-peaked preferences are not as well-behaved as their well-studied single-peaked counterpart makes the problem of designing good truthful mechanisms considerably more challenging. Given the standard interpretation of the median agent as a majority outcome in the single-peaked domain, our results seem to indicate that even slightly more complicated preference structures (which do not admit such majority outcomes) may necessarily come with very poor performance guarantees in the absence of such a consensus, and that some sort of randomization is in fact essential.
Our work can be placed directly in the middle of the extensive literature on truthful facility location, which comprises many interesting settings modelling different situations. The original model [1, 30] assumes single-peaked preferences, the obnoxious facility model [7, 8, 16] assumes single-dipped (also known as single-caved) preferences, whereas the dual preference model [33, 34, 39] assumes a combination of single-peaked and single-dipped preferences. Each preference structure in these works is motivated by corresponding real-life scenarios; in this light, our model can also be seen as another interesting preference structure, motivated by a different realistic scenario, which immediately places it in tight connection to the related work in artificial intelligence and theoretical computer science. Furthermore, as we mentioned in the introduction, the interest in these types of problems is expanding, with works considering different objectives or more complicated cost structures.
Given that double-peaked preferences are a natural preference structure for some scenarios, as also advocated by some of the related work in economics, one could consider their use in other problems, beyond facility location. We have already mentioned their use in [38] for the problem of controlling elections; one could think of other uses in computational aspects of social choice, an important subfield of artificial intelligence. For example, one could ask whether there exist efficient algorithms for deciding whether a given incomplete preference structure can be extended to a double-peaked profile, similarly to the corresponding questions for single-peaked preferences [3, 15]. Another question is whether it is possible to efficiently elicit double-peaked preference orderings using some kind of query operation, e.g. comparison queries, as in [9] for the single-peaked case.
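To illustrate the flavour of these recognition questions: for a *complete* ranking and a *known* axis, single-peakedness admits a simple linear-time check, since each successively ranked alternative must extend the contiguous interval of already-ranked alternatives. The sketch below covers only this easy single-peaked case (the function name is ours); the extension questions for incomplete or double-peaked profiles asked above are the harder analogues.

```python
def is_single_peaked(ranking, axis):
    """Check whether `ranking` (best to worst) is single-peaked with respect
    to `axis` (the alternatives in their left-to-right order on the line)."""
    pos = {a: i for i, a in enumerate(axis)}
    lo = hi = pos[ranking[0]]          # interval of already-ranked alternatives
    for a in ranking[1:]:
        p = pos[a]
        if p == lo - 1:
            lo = p                     # extend the interval to the left
        elif p == hi + 1:
            hi = p                     # extend the interval to the right
        else:
            return False               # jumps over an unranked alternative
    return True

# Peak at 'c', preferences falling off in both directions: single-peaked.
assert is_single_peaked(['c', 'b', 'd', 'a', 'e'], ['a', 'b', 'c', 'd', 'e'])
# 'a' ranked above 'b' although the peak is at 'c': not single-peaked.
assert not is_single_peaked(['c', 'a', 'b', 'd', 'e'], ['a', 'b', 'c', 'd', 'e'])
```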
7 Future work and extensions
Starting with randomized mechanisms, we would like to obtain lower bounds that are functions of b and c, in order to see how well Mechanism M1 fares in the general setting. For deterministic mechanisms, we would like to obtain a result that clears up the picture. Characterizing strategyproof, anonymous, and position invariant mechanisms would be ideal, but proving a lower bound on the approximation ratio of such mechanisms (for the social cost) that depends on n would also be quite helpful. The techniques used in our characterization for two agents and in our lower bounds seem to convey promising intuition for such a task.
Table 2 The results for the case when the peaks are not required to be symmetric (non-symmetric setting)

Social cost:
  Deterministic: ratio \(n-1\); lower bound \(1+\frac{b_1+b_2}{c}\)
  Randomized: ratio \(n-1\); lower bound −
Maximum cost:
  Deterministic: ratio \(1+\frac{b_1+b_2}{c}\); lower bound \(1+\frac{b_1+b_2}{c}\)
  Randomized: ratio \(1+\frac{b_1+b_2}{c}\); lower bound 3/2
A different extension could be in terms of the cost functions used in the model. Although the symmetric case is arguably the best analogue of the single-peaked preference setting, it could certainly make sense to consider a more general model, where the cost functions do not have the same slope in every interval and hence the peaks are not equidistant from the location of an agent. Let \(b_1\) and \(b_2\) be the distances from the left and the right peaks, respectively. Clearly, all our lower bounds still hold, although one could potentially prove even stronger bounds by taking advantage of the more general setting. The main observation is that Mechanism M1 is no longer truthful-in-expectation, because its truthfulness depends heavily on the peaks being equidistant. On the other hand, Mechanism M2 is still strategyproof, and the approximation ratio bounds extend naturally. A summary of the results for the non-symmetric setting is depicted in Table 2.
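The non-symmetric setting can be made concrete with a small numeric sketch. We assume a slope-1 analogue of the symmetric cost model (peaks at \(x_i - b_1\) and \(x_i + b_2\), each of minimum cost c); the paper's general model also allows different slopes per interval, and the "leftmost right peak" rule below is a hypothetical placement rule for illustration, not Mechanism M2, whose definition is not reproduced here.

```python
# Sketch of a non-symmetric double-peaked cost: agent i at x_i has peaks
# x_i - b1 and x_i + b2, each with minimum cost c (slope-1 form assumed).
def cost(x, y, b1, b2, c):
    return c + min(abs(y - (x - b1)), abs(y - (x + b2)))

b1, b2, c = 1.0, 2.0, 0.5
xs = [0.0, 2.0]                        # a small two-agent instance

# Brute-force (near-)optimal facility locations on a fine grid.
grid = [i / 100 for i in range(-300, 701)]
opt_social = min(sum(cost(x, y, b1, b2, c) for x in xs) for y in grid)
opt_max = min(max(cost(x, y, b1, b2, c) for x in xs) for y in grid)

# Hypothetical rule: place the facility on the leftmost agent's right peak.
y_mech = min(xs) + b2
mech_max = max(cost(x, y_mech, b1, b2, c) for x in xs)

# On this instance, the rule stays well within the 1 + (b1+b2)/c bound.
assert mech_max / opt_max <= 1 + (b1 + b2) / c
assert abs(opt_social - 2.0) < 1e-6
assert abs(opt_max - 1.0) < 1e-6
assert abs(mech_max - 1.5) < 1e-9
```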
Future work could also consider a different choice for the objective function. Here, we studied the objective functions of the social cost and the maximum cost, following the original literature on the problem with single-peaked preferences [30]. Since then, several other objective functions have been considered in the literature, such as the least-squares objective [18], the \(L_p\) norm of costs [17], or the minimax envy [5]; it would make sense to consider the same or at least similar objectives for the case of double-peaked preferences as well. Finally, it would be meaningful to consider the problem under the verification framework [21, 22], especially if it turns out that strong inapproximability bounds apply, at least for the case of deterministic strategyproof mechanisms.
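For concreteness, the objectives mentioned above can be written out for the symmetric double-peaked cost model assumed in the earlier sketch (minimum cost c at each peak \(x_i \pm b\), growing linearly with the distance to the nearer peak); all helper names are ours.

```python
# Candidate objective functions over double-peaked costs:
#   cost_i(y) = c + min(|y - (x_i - b)|, |y - (x_i + b)|)
def cost(x, y, b, c):
    return c + min(abs(y - (x - b)), abs(y - (x + b)))

def social_cost(xs, y, b, c):          # sum of costs (studied here and in [30])
    return sum(cost(x, y, b, c) for x in xs)

def max_cost(xs, y, b, c):             # maximum cost (studied here and in [30])
    return max(cost(x, y, b, c) for x in xs)

def lp_cost(xs, y, b, c, p):           # L_p norm of costs, in the spirit of [17]
    return sum(cost(x, y, b, c) ** p for x in xs) ** (1 / p)

def least_squares(xs, y, b, c):        # least-squares objective, as in [18]
    return sum(cost(x, y, b, c) ** 2 for x in xs)

xs, b, c = [0.0, 4.0], 1.0, 0.5
y = 1.0                                # the left agent's right peak
assert social_cost(xs, y, b, c) == 3.0         # 0.5 + 2.5
assert max_cost(xs, y, b, c) == 2.5
assert least_squares(xs, y, b, c) == 0.5**2 + 2.5**2
```

Different objectives already favour different locations on this instance (the midpoint \(y=2\) improves the maximum cost to 1.5 at the same social cost), which is one reason separate mechanisms and bounds are needed per objective.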
Footnotes
 1.
It is not hard to see from our results that if we let an agent’s cost be zero on her peaks, then in very general settings, no deterministic strategyproof mechanism can guarantee a finite approximation ratio.
 2.
In fact, even if the optimal mechanism outputs a distribution over points that all admit the minimum social cost, the argument still works.
 3.
Here we assume for convenience that any mechanism outputs the same location in \([x_1^{P}+c,x_n^{P}-c]\) on the primary instance and instance P. This is without loss of generality, because the argument for any output in \([x_1^{P}+b,x_n^{P}-b]\) is exactly the same.
References
 1. Alon, N., Feldman, M., Procaccia, A. D., & Tennenholtz, M. (2010). Strategyproof approximation of the minimax on networks. Mathematics of Operations Research, 35(3), 513–526.
 2. Ashlagi, I., Fischer, F., Kash, I., & Procaccia, A. D. (2010). Mix and match. In Proceedings of the 11th ACM Conference on Electronic Commerce (ACM-EC) (pp. 305–314). ACM.
 3. Bartholdi, J., & Trick, M. A. (1986). Stable matching with preferences derived from a psychological model. Operations Research Letters, 5(4), 165–169.
 4. Black, D. (1957). The theory of committees and elections. Dordrecht: Kluwer Academic Publishers (reprinted 1986).
 5. Cai, Q., Filos-Ratsikas, A., & Tang, P. (2016). Facility location with minimax envy. In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI) (pp. 137–143).
 6. Caragiannis, I., Filos-Ratsikas, A., & Procaccia, A. D. (2011). An improved 2-agent kidney exchange mechanism. In Proceedings of the 7th Workshop on Internet and Network Economics (WINE) (pp. 37–48). Springer.
 7. Cheng, Y., Wei, Y., & Zhang, G. (2013). Strategy-proof approximation mechanisms for an obnoxious facility game on networks. Theoretical Computer Science, 497, 154–163.
 8. Cheng, Y., Yu, W., & Zhang, G. (2011). Mechanisms for obnoxious facility game on a path. In International Conference on Combinatorial Optimization and Applications (pp. 262–271). Springer.
 9. Conitzer, V. (2007). Eliciting single-peaked preferences using comparison queries. In Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems (p. 65). ACM.
 10. Cooter, R. D. (2002). The strategic constitution. Princeton: Princeton University Press.
 11. Dokow, E., Feldman, M., Meir, R., & Nehama, I. (2012). Mechanism design on discrete lines and cycles. In Proceedings of the 13th ACM Conference on Electronic Commerce (ACM-EC) (pp. 423–440).
 12. Dughmi, S., & Ghosh, A. (2010). Truthful assignment without money. In Proceedings of the 11th ACM Conference on Electronic Commerce (ACM-EC) (pp. 325–334).
 13. Egan, P. J. (2013). “Do something” politics and double-peaked policy preferences. Journal of Politics, 76(2), 333–349.
 14. Escoffier, B., Gourvès, L., Thang, N. K., Pascual, F., & Spanjaard, O. (2011). Strategy-proof mechanisms for facility location games with many facilities. In The 2nd International Conference on Algorithmic Decision Theory (pp. 67–81). Berlin, Heidelberg: Springer.
 15. Escoffier, B., Lang, J., & Öztürk, M. (2008). Single-peaked consistency and its complexity. In The 18th European Conference on Artificial Intelligence (ECAI) (pp. 366–370).
 16. Feigenbaum, I., & Sethuraman, J. (2014). Strategyproof mechanisms for one-dimensional hybrid and obnoxious facility location. arXiv preprint arXiv:1412.3414.
 17. Feigenbaum, I., Sethuraman, J., & Ye, C. (2013). Approximately optimal mechanisms for strategyproof facility location: Minimizing \(l_p \) norm of costs. arXiv preprint arXiv:1305.2446.
 18. Feldman, M., & Wilf, Y. (2013). Strategyproof facility location and the least squares objective. In Proceedings of the 14th ACM Conference on Electronic Commerce (ACM-EC) (pp. 873–890).
 19. Filos-Ratsikas, A., & Miltersen, P. B. (2014). Truthful approximations to range voting. In International Conference on Web and Internet Economics (WINE) (pp. 175–188). Springer.
 20. Fotakis, D., & Tzamos, C. (2010). Winner-imposing strategyproof mechanisms for multiple facility location games. In Proceedings of the International Workshop on Internet and Network Economics (WINE) (pp. 234–245).
 21. Fotakis, D., Tzamos, C., & Zampetakis, E. (2015). Who to trust for truthfully maximizing welfare? arXiv preprint arXiv:1507.02301.
 22. Fotakis, D., & Tzamos, C. (2013). Winner-imposing strategyproof mechanisms for multiple facility location games. Theoretical Computer Science, 472, 90–103.
 23. Fotakis, D., & Tzamos, C. (2014). On the power of deterministic mechanisms for facility location games. ACM Transactions on Economics and Computation, 2(4), 15.
 24. Guo, M., & Conitzer, V. (2010). Strategy-proof allocation of multiple items between two agents without payments or priors. In The 9th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS) (pp. 881–888).
 25. Inada, K. I. (1964). A note on the simple majority decision rule. Econometrica, 32(4), 525–531.
 26. Lu, P., Sun, X., Wang, Y., & Zhu, Z. A. (2010). Asymptotically optimal strategy-proof mechanisms for two-facility games. In Proceedings of the 11th ACM Conference on Electronic Commerce (ACM-EC) (pp. 315–324).
 27. Lu, P., Wang, Y., & Zhou, Y. (2009). Tighter bounds for facility games. In Proceedings of the 5th International Workshop on Internet and Network Economics (WINE) (pp. 137–148).
 28. Moulin, H. (1980). On strategy-proofness and single peakedness. Public Choice, 35(4), 437–455.
 29. Procaccia, A. D., Wajc, D., & Zhang, H. (2016). Approximation-variance tradeoffs in mechanism design. Working paper.
 30. Procaccia, A. D., & Tennenholtz, M. (2013). Approximate mechanism design without money. ACM Transactions on Economics and Computation, 1(4), 18.
 31. Rosen, H. S. (2005). Public finance (7th ed.). McGraw-Hill Irwin.
 32. Schummer, J., & Vohra, R. V. (2002). Strategy-proof location on a network. Journal of Economic Theory, 104(2), 405–428.
 33. Serafino, P., & Ventre, C. (2015). Truthful mechanisms without money for non-utilitarian heterogeneous facility location. In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI) (pp. 1029–1035).
 34. Serafino, P., & Ventre, C. (2016). Heterogeneous facility location without money. Theoretical Computer Science, 636, 27–46.
 35. Sonoda, A., Todo, T., & Yokoo, M. (2016). False-name-proof locations of two facilities: Economic and algorithmic approaches. In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI) (pp. 615–621). AAAI Press.
 36. Svensson, L.-G., & Reffgen, A. (2014). The proof of the Gibbard–Satterthwaite theorem revisited. Journal of Mathematical Economics, 55, 11–14.
 37. Todo, T., Iwasaki, A., & Yokoo, M. (2011). False-name-proof mechanism design without money. In The 10th International Conference on Autonomous Agents and Multiagent Systems, Volume 2 (pp. 651–658). International Foundation for Autonomous Agents and Multiagent Systems.
 38. Yang, Y., & Guo, J. (2015). How hard is control in multi-peaked elections: A parameterized study. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (AAMAS) (pp. 1729–1730). International Foundation for Autonomous Agents and Multiagent Systems.
 39. Zou, S., & Li, M. (2015). Facility location games with dual preference. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (AAMAS) (pp. 615–623). International Foundation for Autonomous Agents and Multiagent Systems.
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.