Efficiently Learning from Revealed Preference
In this paper, we consider the revealed preferences problem from a learning perspective. Every day, a price vector and a budget are drawn from an unknown distribution, and a rational agent buys his most preferred bundle according to some unknown utility function, subject to the given prices and budget constraint. We wish not only to find a utility function that rationalizes a finite set of observations, but to produce a hypothesis valuation function that accurately predicts the agent's behavior in the future. We give efficient algorithms with polynomial sample complexity for agents with linear valuation functions, as well as for agents with linearly separable, concave valuation functions with bounded second derivative.
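To make the observation model concrete, the following sketch simulates the daily interaction for an agent with a linear valuation over divisible goods. For linear utility, the budget-constrained optimum spends the entire budget on a good with maximal value-per-price ratio; the hidden valuation vector, the price distribution, and all names here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def best_bundle(values, prices, budget):
    """For a linear utility u(x) = values . x over divisible goods,
    the budget-constrained optimum spends the whole budget on a good
    with maximal value-per-price ("bang per buck") ratio."""
    ratios = values / prices
    i = int(np.argmax(ratios))
    bundle = np.zeros_like(values, dtype=float)
    bundle[i] = budget / prices[i]
    return bundle

# Hypothetical hidden valuation; the learner only sees (prices, budget, bundle).
rng = np.random.default_rng(0)
values = np.array([2.0, 1.0, 3.0])
observations = []
for _ in range(5):
    prices = rng.uniform(0.5, 2.0, size=3)   # assumed price distribution
    budget = float(rng.uniform(1.0, 5.0))
    observations.append((prices, budget, best_bundle(values, prices, budget)))
```

A learning algorithm in this setting would receive only the `observations` list and attempt to output a hypothesis valuation that predicts the purchased bundle on fresh price/budget draws.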