## Abstract

A principal needs to elicit the true value of an object she owns from an agent who has a unique ability to compute this information. The principal cannot verify the correctness of the information, so she must incentivize the agent to report it truthfully. Previous works coped with this unverifiability by employing two or more information agents and rewarding them according to the correlation between their reports. We show that, in a common-value setting, the principal can elicit the true information even from a single information agent, and even when computing the value is costly for the agent. Moreover, the principal’s expense is only slightly higher than the cost of computing the value. For this purpose we provide three alternative mechanisms, all offering the same guarantee, and highlight the advantages and disadvantages of each. Extensions of the basic mechanism handle cases in which the principal and the agent value the object differently, the object is divisible, and the agent’s cost of computation is unknown. Finally, we deal with the case where delivering the information to the principal incurs a cost; here we show that substantial savings can be obtained in a multi-object setting.

## Notes

The object may have a negative value, in which case it cannot be freely disposed of. For example, a broken car takes up room in the garage, and movers must be paid to get rid of it. Similarly, a company in debt cannot be abandoned before its debt is fully paid.

We consider only cdfs *G* that are continuous and differentiable almost everywhere, so \(G'\) is well-defined almost everywhere. At points in which *G* is discontinuous (i.e., has a jump), \(G'\) can be defined using Dirac’s delta function.

If \(\epsilon = 0,\) then reporting the true *v* is only a weakly-dominant strategy: the agent never gains from reporting a false value, but may be indifferent between a false and the true value. For example, if the true value is 2 and the cdf is uniform on [3, 5] and zero elsewhere, then the agent is indifferent between reporting 1 and reporting 2, since in both cases he loses the object with probability 1. Making \(\epsilon\) even slightly above 0 prevents this indifference and makes reporting *v* strictly better than any other strategy. However, to attain this strict truthfulness, it is sufficient to have \(\epsilon\) arbitrarily small. Hence, in the following analysis we assume for simplicity that \(\epsilon \rightarrow 0.\)
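The indifference at \(\epsilon = 0\) can be illustrated numerically. The sketch below is an illustrative simplification, not the paper’s exact payment rule: it assumes a lottery in which the agent loses the object whenever a random threshold drawn from *G* exceeds his report, so with *G* uniform on [3, 5], reports of 1 and 2 both yield a loss probability of 1.

```python
# Minimal sketch: the agent's probability of losing the object under a
# threshold rule p ~ G, where G is uniform on [3, 5] and zero elsewhere.
# This is an illustrative simplification, not the paper's exact mechanism.

def G(x: float) -> float:
    """Cdf that is uniform on [3, 5]: 0 below 3, 1 above 5, linear between."""
    if x < 3:
        return 0.0
    if x > 5:
        return 1.0
    return (x - 3) / 2

def loss_probability(report: float) -> float:
    """P(agent loses the object) = P(p > report) = 1 - G(report)."""
    return 1 - G(report)

# The true value is 2, but reporting 1 gives the same loss probability,
# so at epsilon = 0 the agent is indifferent between the two reports.
print(loss_probability(1))  # 1.0
print(loss_probability(2))  # 1.0
print(loss_probability(4))  # 0.5
```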

If the object value is negative, then the risk for the principal is paying too much for getting rid of the object.

To get an idea of the magnitude of the principal’s loss, consider a special case in which \(v_a = a\cdot v_p\) for some constant *a*. Suppose also, for the sake of the example, that \(v_p\) is distributed uniformly in [0, 2*M*]. Suppose the principal uses Mechanism 1 with the \(G_{c'}\) of (3). Then, using the expressions in the text body, we find that the principal’s loss is at most \((3/a - 2) \cdot c'.\) So when \(a=1\) the principal’s loss is exactly \(c'\) (which may be very near *c*), but when \(a<1\) the loss is more than \(c',\) as can be expected. It is interesting that the loss (when *a* is fixed) is a linear function of \(c'.\) We do not know whether this is true in general.

Notice that such saving is only possible with the delivery cost; with the calculation cost *c*, the agent must first calculate the exact value in order to determine whether it is above or below the threshold.

The Matlab code used is downloadable from https://tinyurl.com/rl3rnw9.
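The closed-form bound \((3/a - 2) \cdot c'\) from the note above can be checked with a few lines of arithmetic. The function name below is ours; the bound itself is the one stated in the note.

```python
# Evaluate the bound (3/a - 2) * c' on the principal's loss for the
# special case v_a = a * v_p with v_p uniform on [0, 2M].
# The helper name is illustrative; the formula is from the note above.

def loss_bound(a: float, c_prime: float) -> float:
    """Upper bound on the principal's loss for ratio a and payment c'."""
    return (3 / a - 2) * c_prime

print(loss_bound(1.0, 10.0))  # 10.0: with a = 1 the loss equals c'
print(loss_bound(0.5, 10.0))  # 40.0: with a < 1 the loss exceeds c'
# For fixed a, the bound is linear in c':
print(loss_bound(0.5, 20.0))  # 80.0
```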

When there are many agents, the problem becomes easier. For example, with three or more agents the following mechanism is possible: (a) offer each agent to provide the information for \(c',\) for some \(c'>c;\) (b) collect the reports of all agreeing agents; (c) if an agent’s report is not identical to at least one other report, file a complaint against that agent and send her to jail. This creates a coordination game in which the focal point is to reveal the true value, similarly to the famous ESP game. In our setting there is a single agent, so this trick is not possible.
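The cross-checking step of this multi-agent mechanism can be sketched as follows (a toy illustration; the function and agent names are ours):

```python
# Toy sketch of the cross-checking step from the note above: collect the
# reports of all agreeing agents, then flag any agent whose report matches
# no other agent's report. Names are illustrative.
from collections import Counter

def flagged_agents(reports: dict) -> list:
    """Return agents whose report is not identical to any other report."""
    counts = Counter(reports.values())
    return [agent for agent, value in reports.items() if counts[value] == 1]

# If every agent reports the true value, nobody is flagged.
print(flagged_agents({"a1": 7.0, "a2": 7.0, "a3": 7.0}))  # []
# A lone deviator is caught, which makes truth-telling the focal point.
print(flagged_agents({"a1": 7.0, "a2": 7.0, "a3": 9.0}))  # ['a3']
```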

## References

1. Akerlof, G. A. (1970). The market for “lemons”: Quality uncertainty and the market mechanism. *The Quarterly Journal of Economics*, *84*(3), 488–500.
2. Alkoby, S., & Sarne, D. (2015). Strategic free information disclosure for a Vickrey auction. In *International workshop on agent-mediated electronic commerce and trading agents design and analysis* (pp. 1–18). Cham: Springer.
3. Alkoby, S., & Sarne, D. (2017). The benefit in free information disclosure when selling information to people. In *AAAI* (pp. 985–992).
4. Alkoby, S., Sarne, D., & David, E. (2014). Manipulating information providers access to information in auctions. In *Technologies and applications of artificial intelligence* (pp. 14–25). Cham: Springer.
5. Alkoby, S., Sarne, D., & Das, S. (2015). Strategic free information disclosure for search-based information platforms. In *Proceedings of the 2015 international conference on autonomous agents and multiagent systems* (pp. 635–643).
6. Alkoby, S., Sarne, D., & Milchtaich, I. (2017). Strategic signaling and free information disclosure in auctions. In *AAAI* (pp. 319–327).
7. Armantier, O., & Treich, N. (2013). Eliciting beliefs: Proper scoring rules, incentives, stakes and hedging. *European Economic Review*, *62*, 17–40.
8. Babaioff, M., Kleinberg, R., & Paes Leme, R. (2012). Optimal mechanisms for selling information. In *Proceedings of the 13th ACM conference on electronic commerce* (pp. 92–109). New York: ACM.
9. Baron, P., & Myerson, R. B. (1982). Regulating a monopolist with unknown costs. *Econometrica*, *50*(4), 911–930.
10. Barrage, L., & Lee, M. S. (2010). A penny for your thoughts: Inducing truth-telling in stated preference elicitation. *Economics Letters*, *106*(2), 140–142.
11. Ben-Porath, E., Dekel, E., & Lipman, B. L. (2014). Optimal allocation with costly verification. *The American Economic Review*, *104*(12), 3779–3813.
12. Bolton, P., Dewatripont, M., et al. (2005). *Contract theory*. Cambridge: MIT.
13. Emek, Y., Feldman, M., Gamzu, I., Paes Leme, R., & Tennenholtz, M. (2014). Signaling schemes for revenue maximization. *ACM Transactions on Economics and Computation*, *2*(2), 5.
14. Faltings, B., & Radanovic, G. (2017). Game theory for data science: Eliciting truthful information. *Synthesis Lectures on Artificial Intelligence and Machine Learning*, *11*(2), 1–151.
15. Glazer, J., & Rubinstein, A. (2004). On optimal rules of persuasion. *Econometrica*, *72*(6), 1715–1736.
16. Green, J. R., & Laffont, J. J. (1986). Partially verifiable information and mechanism design. *The Review of Economic Studies*, *53*(3), 447–456.
17. Grossman, S. J. (1981). The informational role of warranties and private disclosure about product quality. *The Journal of Law and Economics*, *24*(3), 461–483.
18. Grossman, S. J., & Hart, O. D. (1980). Disclosure laws and takeover bids. *The Journal of Finance*, *35*(2), 323–334.
19. Hajaj, C., & Sarne, D. (2017). Selective opportunity disclosure at the service of strategic information platforms. *Autonomous Agents and Multi-Agent Systems*, *31*(5), 1133–1164.
20. Hajaj, C., Dickerson, J. P., Hassidim, A., Sandholm, T., & Sarne, D. (2015). Strategy-proof and efficient kidney exchange using a credit mechanism. In *Twenty-ninth AAAI conference on artificial intelligence*.
21. Hart, S., Kremer, I., & Perry, M. (2017). Evidence games: Truth and commitment. *American Economic Review*, *107*(3), 690–713.
22. Hart, S., & Nisan, N. (2017). Approximate revenue maximization with multiple items. *Journal of Economic Theory*, *172*, 313–347.
23. Hausch, D. B., & Li, L. (1993). A common value auction model with endogenous entry and information acquisition. *Economic Theory*, *3*(2), 315–334.
24. Hendricks, K., Porter, R. H., & Tan, G. (1993). Optimal selling strategies for oil and gas leases with an informed buyer. *The American Economic Review*, *83*(2), 234–239.
25. Hendricks, K., Porter, R. H., & Wilson, C. A. (1994). Auctions for oil and gas leases with an informed bidder and a random reservation price. *Econometrica*, *62*, 1415–1444.
26. Hossain, T., & Okui, R. (2013). The binarized scoring rule. *Review of Economic Studies*, *80*(3), 984–1001.
27. Kagel, J. H., & Levin, D. (1986). The winner’s curse and public information in common value auctions. *The American Economic Review*, *76*(5), 894–920.
28. Kong, Y., & Schoenebeck, G. (2019). An information theoretic framework for designing information elicitation mechanisms that reward truth-telling. *ACM Transactions on Economics and Computation*, *7*(1), 2:1–2:33.
29. Manelli, A. M., & Vincent, D. R. (2007). Multidimensional mechanism design: Revenue maximization and the multiple-good monopoly. *Journal of Economic Theory*, *137*(1), 153–185.
30. Moldovanu, B., & Tietzel, M. (1998). Goethe’s second-price auction. *Journal of Political Economy*, *106*(4), 854–859.
31. Moscarini, G., & Smith, L. (2001). The optimal level of experimentation. *Econometrica*, *69*(6), 1629–1644.
32. Offerman, T., Sonnemans, J., Van de Kuilen, G., & Wakker, P. P. (2009). A truth serum for non-Bayesians: Correcting proper scoring rules for risk attitudes. *The Review of Economic Studies*, *76*(4), 1461–1489.
33. Persico, N. (2000). Information acquisition in auctions. *Econometrica*, *68*(1), 135–148.
34. Porter, R. H. (1995). The role of information in US offshore oil and gas lease auction. *Econometrica*, *63*(1), 1–27.
35. Prelec, D. (2004). A Bayesian truth serum for subjective data. *Science*, *306*(5695), 462–466.
36. Radanovic, G., Faltings, B., & Jurca, R. (2016). Incentives for effort in crowdsourcing using the peer truth serum. *ACM Transactions on Intelligent Systems and Technology*, *7*(4), 48:1–48:28.
37. Sarne, D., Alkoby, S., & David, E. (2014). On the choice of obtaining and disclosing the common value in auctions. *Artificial Intelligence*, *215*, 24–54.
38. Segal-Halevi, E., Alkoby, S., Sharbaf, T., & Sarne, D. (2019). Obtaining costly unverifiable valuations from a single agent. In *AAMAS* (pp. 1216–1224).
39. Vickrey, W. (1961). Counterspeculation, auctions, and competitive sealed tenders. *The Journal of Finance*, *16*(1), 8–37.
40. Waggoner, B., & Chen, Y. (2013). Information elicitation sans verification. In *Proceedings of the 3rd workshop on social computing and user generated content (SC13)*.
41. Weaver, R., & Prelec, D. (2013). Creating truth-telling incentives with the Bayesian truth serum. *Journal of Marketing Research*, *50*(3), 289–302.
42. Weitzman, M. L. (1979). Optimal search for the best alternative. *Econometrica*, *47*(3), 641–654.
43. Wiegmann, D. D., Weinersmith, K. L., & Seubert, S. M. (2010). Multi-attribute mate choice decisions and uncertainty in the decision process: A generalized sequential search strategy. *Journal of Mathematical Biology*, *60*(4), 543–572.
44. Witkowski, J., & Parkes, D. C. (2012). A robust Bayesian truth serum for small populations. In *AAAI*.

## Acknowledgements

The project was initiated thanks to an idea of Tomer Sharbaf, who also participated in the preliminary version of this paper [38]. This paper benefited greatly from discussions with the participants of the industrial engineering seminar at Ariel University, the game theory seminars at Bar-Ilan University, the Hebrew University of Jerusalem, and the Technion, and the Israeli Artificial Intelligence Day. We are particularly grateful to Igal Milchtaich and Sergiu Hart for their helpful mathematical ideas. We are also grateful to the anonymous reviewers of AAMAS for their helpful comments. This research was partially supported by the Israel Science Foundation (Grant No. 1162/17).

## Additional information

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A preliminary version appeared in the proceedings of AAMAS 2019. We are grateful to Tomer Sharbaf for participating in the preliminary version [38]. New in this version are: (a) handling objects whose value may be either positive or negative; (b) handling the case where, besides the information computation cost, the agent incurs an information delivery cost (Sect. 9), so the principal may want to elicit the true value only when it is above a certain threshold; (c) extending the section on different values (Sect. 6) by reducing the problem of minimizing the principal’s expense subject to revealing the true value to a substantially simpler form; (d) adding examples based on real value data (Sect. 3) and simulations (Sect. 9).

## About this article

### Cite this article

Segal-Halevi, E., Alkoby, S. & Sarne, D. Obtaining costly unverifiable valuations from a single agent.
*Auton Agent Multi-Agent Syst* **34**, 46 (2020). https://doi.org/10.1007/s10458-020-09469-4
