Motivating Time-Inconsistent Agents: A Computational Approach

Conference paper

DOI: 10.1007/978-3-662-54110-4_22

Part of the Lecture Notes in Computer Science book series (LNCS, volume 10123)
Cite this paper as:
Albers S., Kraft D. (2016) Motivating Time-Inconsistent Agents: A Computational Approach. In: Cai Y., Vetta A. (eds) Web and Internet Economics. WINE 2016. Lecture Notes in Computer Science, vol 10123. Springer, Berlin, Heidelberg


We study the complexity of motivating time-inconsistent agents to complete long-term projects in a graph-based planning model as proposed by Kleinberg and Oren [5]. Given a task graph G with n nodes, our objective is to guide an agent towards a target node t under certain budget constraints. The crux is that the agent may change its strategy over time due to its present bias. We consider two strategies to guide the agent. First, a single reward is placed at t and arbitrary edges can be removed from G. Second, rewards can be placed at arbitrary nodes of G, but no edges may be removed. In both cases we show that it is NP-complete to decide if a given budget is sufficient to guide the agent. For the first setting, we give complementing upper and lower bounds on the approximability of the minimum required budget. In particular, we devise a \((1+\sqrt{n})\)-approximation algorithm and prove NP-hardness for ratios greater than \(\sqrt{n}/3\). Finally, we argue that the second setting does not permit any efficient approximation unless \(\mathrm{P}=\mathrm{NP}\).
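The agent's behavior in this planning model can be sketched in code. In the Kleinberg–Oren model, a naive present-biased agent at each node inflates the cost of the very next edge by a bias factor b ≥ 1 while valuing the remainder of the plan at its true cost, then re-plans at every step; it abandons the project whenever its perceived cost exceeds the reward at t. The example graph, function names, and parameter choices below are illustrative assumptions, not the paper's notation:

```python
import heapq

def shortest_dist(graph, target):
    # graph: {v: [(u, cost), ...]}. Compute the true shortest distance
    # from every node to `target` via Dijkstra on the reversed edges.
    rev = {}
    for v, edges in graph.items():
        for u, c in edges:
            rev.setdefault(u, []).append((v, c))
    dist = {target: 0.0}
    pq = [(0.0, target)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue
        for u, c in rev.get(v, []):
            nd = d + c
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(pq, (nd, u))
    return dist

def walk(graph, source, target, bias, reward):
    """Simulate a naive present-biased agent with bias factor b >= 1.

    Returns the traversed path, or None if the agent abandons.
    """
    dist = shortest_dist(graph, target)
    v, path = source, [source]
    while v != target:
        # Perceived cost of continuing via edge (v, u): the immediate
        # edge cost is inflated by `bias`; the rest of the cheapest
        # plan from u is valued at its true cost dist[u].
        u, c = min(graph[v], key=lambda e: bias * e[1] + dist[e[0]])
        if bias * c + dist[u] > reward:
            return None  # perceived cost exceeds the reward: abandon
        v = u
        path.append(v)
    return path
```

A hypothetical three-node instance exhibits the time-inconsistency: with edges s→a (cost 1), a→t (cost 4), and s→t (cost 4.5), an agent with bias b = 2 and reward 7 sets out via a (perceived cost 2·1 + 4 = 6 ≤ 7) but abandons at a (perceived cost 2·4 = 8 > 7), even though an unbiased agent completes the project.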


Keywords: Approximation algorithms · Behavioral economics · Computational complexity · Planning and scheduling · Time-inconsistency

Copyright information

© Springer-Verlag GmbH Germany 2016

Authors and Affiliations

  1. Department of Computer Science, Technical University of Munich, Munich, Germany
