Avoiding Wireheading with Value Reinforcement Learning

Conference paper

DOI: 10.1007/978-3-319-41649-6_2

Part of the Lecture Notes in Computer Science book series (LNCS, volume 9782)
Cite this paper as:
Everitt T., Hutter M. (2016) Avoiding Wireheading with Value Reinforcement Learning. In: Steunebrink B., Wang P., Goertzel B. (eds) Artificial General Intelligence. AGI 2016. Lecture Notes in Computer Science, vol 9782. Springer, Cham

Abstract

How can we design good goals for arbitrarily intelligent agents? Reinforcement learning (RL) may seem like a natural approach. Unfortunately, RL does not work well for generally intelligent agents, as RL agents are incentivised to shortcut the reward sensor for maximum reward – the so-called wireheading problem. In this paper we suggest an alternative to RL called value reinforcement learning (VRL). In VRL, agents use the reward signal to learn a utility function. The VRL setup allows us to remove the incentive to wirehead by placing a constraint on the agent’s actions. The constraint is defined in terms of the agent’s belief distributions, and does not require an explicit specification of which actions constitute wireheading.
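To make the setup concrete, below is a minimal Python sketch of the general idea: an agent that maintains a belief over candidate utility functions, treats observed rewards as evidence about which utility is true, and restricts itself to actions that keep the reward signal informative. This is an illustrative assumption-laden toy, not the paper's formal construction; the candidate utilities, the Gaussian likelihood, and in particular the explicit `tampers_with_sensor` flag are hypothetical simplifications. In the paper, the constraint is expressed in terms of the agent's belief distributions rather than a hand-labelled list of wireheading actions.

```python
import numpy as np

# Illustrative sketch (not the paper's formal VRL construction):
# the agent keeps a belief over a finite set of candidate utility
# functions and treats observed rewards as evidence about which
# candidate is the true utility.

rng = np.random.default_rng(0)

# Candidate utility functions, each mapping a state index to a utility value.
candidate_utilities = [
    lambda s: float(s),    # u_0: utility grows with the state index
    lambda s: float(-s),   # u_1: utility shrinks with the state index
]
belief = np.array([0.5, 0.5])  # prior over candidate utilities

def likelihood(reward, state, utility, noise=1.0):
    """Gaussian likelihood of observing `reward` if `utility` is the true one."""
    return np.exp(-0.5 * ((reward - utility(state)) / noise) ** 2)

def update_belief(belief, reward, state):
    """Bayesian update of the belief over utility functions from one reward."""
    post = belief * np.array([likelihood(reward, state, u) for u in candidate_utilities])
    return post / post.sum()

def expected_utility(belief, state):
    """Expected utility of a state under the current belief."""
    return sum(p * u(state) for p, u in zip(belief, candidate_utilities))

# Toy action model: each action leads to a successor state; the flag marking
# sensor tampering is a stand-in for the paper's belief-based constraint.
actions = [
    {"next_state": 3, "tampers_with_sensor": False},
    {"next_state": 9, "tampers_with_sensor": True},   # would fake a high reward
]

def admissible(action):
    """Exclude actions that would decouple the reward signal from the state,
    so rewards keep informing the belief about the true utility."""
    return not action["tampers_with_sensor"]

# The agent picks the admissible action with the highest expected utility
# under its belief about utility, not the raw reward it would observe.
best = max((a for a in actions if admissible(a)),
           key=lambda a: expected_utility(belief, a["next_state"]))
print("chosen next state:", best["next_state"])

# After acting, the observed (noisy) reward refines the belief about utility.
observed_reward = best["next_state"] + rng.normal(scale=1.0)
belief = update_belief(belief, observed_reward, best["next_state"])
print("posterior over utilities:", belief)
```

In this toy, wireheading is unattractive because the agent maximises its estimate of utility rather than the reward itself, and the admissibility check preserves the reward channel's evidential value; the paper derives an analogous condition directly from the agent's belief distributions.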

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Australian National University, Canberra, Australia
