
Artificial General Intelligence

Lecture Notes in Computer Science, vol. 9782, pp. 12–22

Avoiding Wireheading with Value Reinforcement Learning

  • Tom Everitt, Australian National University
  • Marcus Hutter, Australian National University


Abstract

How can we design good goals for arbitrarily intelligent agents? Reinforcement learning (RL) may seem like a natural approach. Unfortunately, RL does not work well for generally intelligent agents, as RL agents are incentivised to shortcut the reward sensor for maximum reward – the so-called wireheading problem. In this paper we suggest an alternative to RL called value reinforcement learning (VRL). In VRL, agents use the reward signal to learn a utility function. The VRL setup allows us to remove the incentive to wirehead by placing a constraint on the agent’s actions. The constraint is defined in terms of the agent’s belief distributions, and does not require an explicit specification of which actions constitute wireheading.
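The core idea of the abstract — treating the reward signal as *evidence* about an unknown utility function rather than as the quantity to maximise — can be illustrated with a toy Bayesian sketch. This is not the paper's formalism; the candidate utility functions, the likelihood model, and the environment states below are all hypothetical, chosen only to show an agent updating a belief distribution over utilities from observed rewards and then acting on expected utility under that belief.

```python
# Toy sketch (assumptions, not the paper's VRL definition): the agent
# holds a belief over two hypothetical utility functions and updates it
# from observed rewards, instead of maximising the rewards themselves.

# Hypothetical candidate utility functions over states 0, 1, 2.
candidate_utilities = [
    {0: 0.0, 1: 0.5, 2: 1.0},  # hypothesis A
    {0: 1.0, 1: 0.5, 2: 0.0},  # hypothesis B
]
belief = [0.5, 0.5]  # uniform prior over the two hypotheses

def update_belief(belief, state, reward, tol=0.1):
    """Bayesian update: a hypothesis is likely if its utility for
    `state` is close to the observed reward (crude likelihood model)."""
    likelihoods = [
        1.0 if abs(u[state] - reward) < tol else 0.01
        for u in candidate_utilities
    ]
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(posterior)
    return [p / z for p in posterior]

def expected_utility(belief, state):
    """Utility of a state, averaged over the agent's current belief."""
    return sum(b * u[state] for b, u in zip(belief, candidate_utilities))

# Observing reward 1.0 in state 2 is strong evidence for hypothesis A,
# so the belief shifts toward it and expected utility follows.
belief = update_belief(belief, state=2, reward=1.0)
```

In this picture, tampering with the reward sensor only corrupts the agent's evidence about utility; the paper's contribution, not reproduced here, is a constraint on the agent's belief distributions that removes the incentive to perform such tampering in the first place.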