
Computational Neural Mechanisms of Goal-Directed Planning and Problem Solving

  • Original Paper
  • Published:
Computational Brain & Behavior

Abstract

The question of how animals and humans can solve arbitrary goal-driven problems remains open. Reinforcement learning (RL) methods have approached goal-directed control through model-based algorithms. However, the RL focus on maximizing long-term reward is inconsistent with the psychological notion of planning to satisfy homeostatic drives, which involves setting a goal first and then planning actions to achieve it. Optimal control theory suggests a solution: animals can learn a model of the world, learn where goals can be fulfilled, set a goal, and then act to minimize the difference between the actual and desired world states. Here, we present a purely localist neural network model, GOLSA, that can autonomously learn the structure of an environment and then achieve any arbitrary goal state in a changing environment without relearning reward values. The model achieves this through a backward spreading activation that propagates goal values to the agent. It elucidates how neural inhibitory mechanisms can support competition between goal representations, arbitrating between needs-based planning and exploration. The model performs similarly to humans in the canonical revaluation tasks used to classify human and rodent behavior as goal-directed, and it revalues optimal actions when goals, goal values, world structure, or the need to fulfill a drive change. The model also clarifies issues inherent in other RL-based representations, such as policy dependence in successor representations, while elucidating biological constraints such as the role of oscillations in gating information flow for learning versus action. Together, our model suggests a biologically grounded framework for multi-step planning behaviors, built on how goal representations compete for behavioral expression during planning.
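The core idea the abstract describes — learn the environment's transition structure, set a goal, spread activation backward from the goal, and act to climb the resulting gradient — can be illustrated with a toy sketch. This is not the authors' GOLSA implementation; the graph, decay factor, and update rule below are illustrative assumptions only.

```python
import numpy as np

# Adjacency matrix of a small deterministic toy environment (6 states);
# entry [i, j] = 1 means the agent can step from state i to state j.
# In GOLSA-like models this structure would itself be learned from experience.
A = np.array([
    [0, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 1, 1, 0],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0, 1],
    [0, 0, 0, 0, 1, 0],
], dtype=float)

def spread_goal_value(A, goal, decay=0.8, n_iters=20):
    """Propagate activation backward from the goal over the state graph.

    Each state's value becomes the decayed maximum of its neighbors'
    values, so activation falls off with graph distance from the goal.
    """
    v = np.zeros(len(A))
    v[goal] = 1.0
    for _ in range(n_iters):
        v = np.maximum(v, decay * (A * v).max(axis=1))
        v[goal] = 1.0  # clamp the goal's activation at its maximum
    return v

def plan_path(A, start, goal):
    """Greedily climb the activation gradient from start to goal."""
    v = spread_goal_value(A, goal)
    path, state = [start], start
    while state != goal:
        neighbors = np.flatnonzero(A[state])
        state = int(max(neighbors, key=lambda s: v[s]))
        path.append(state)
    return path
```

Because values are recomputed by spreading from whichever state is currently the goal, a new goal requires no relearning of the transition structure — which is the property the revaluation behavior described above depends on.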





Acknowledgments

We thank A. Ramamoorthy for helpful discussions on the manuscript.

Funding

JWB was supported by NIH R21 DA040773.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Joshua W. Brown.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

ESM 1

(PDF 3210 kb)

About this article


Cite this article

Fine, J.M., Zarr, N. & Brown, J.W. Computational Neural Mechanisms of Goal-Directed Planning and Problem Solving. Comput Brain Behav 3, 472–493 (2020). https://doi.org/10.1007/s42113-020-00095-7


Keywords

Navigation