

Call For Papers

Machine Learning Journal – Springer

Fast-track Special Issue on Reinforcement Learning for Real Life

Reinforcement learning (RL) is a general paradigm for learning, prediction, and decision making, with broad applications across science, engineering, and the arts. RL has seen prominent successes in many problems, both in simulated environments, such as Atari games and AlphaGo, and in real life, such as robotics, recommender systems, and nuclear fusion. However, despite the significant theoretical and algorithmic advances of the past few years, applying RL in real life remains challenging, and a natural question is:

Why isn’t RL used even more often and how can we improve this?

The main goals of the workshop are to: (1) identify key research problems that are critical for the success of real-world applications; (2) report progress on addressing these critical issues; and (3) have practitioners share their success stories of applying RL to real-world problems, and the insights gained from such applications.

We invite paper submissions of original work that successfully applies RL algorithms to real-life problems and/or addresses practically relevant RL issues. Our topics of interest are broad, spanning practical RL algorithms, practical issues, and applications.

Topics of interest include (but are not limited to):

+ studies of real-life RL systems, especially concerning deployment or productization

+ significant efforts toward a high-fidelity simulator, especially for a complicated system

+ significant efforts toward benchmarks/datasets

+ significant efforts toward human factors

The following alone is not considered real-life RL:

- practical work using only an existing simulator, benchmark, or dataset, without significant real-life efforts

- theory/algorithm work with only toy or simple experiments, without significant real-life efforts

Submission Options


Submissions to the NeurIPS 2022 Reinforcement Learning for Real Life Workshop

Submissions directly to the MLJ Special Issue

Submissions to the workshop will receive one reviewing round for the workshop (which also serves for selection), discussions during the workshop, and one further reviewing round for the MLJ after revisions. Submissions made directly to the MLJ Special Issue will receive only one reviewing round, with no opportunity for substantial revisions afterwards.

Submission Info For NeurIPS 2022 RL4RealLife Workshop

See the workshop website for details.

Submission Info For (Re)submissions to the MLJ Special Issue

Submissions should be made via the Machine Learning journal website. When submitting your paper, be sure to specify that the paper is a contribution for the Special Issue “SI: Reinforcement Learning For Real Life”.

It is the policy of the journal that no submission, or substantially overlapping submission, be published or be under review at another journal or conference at any time during the review process. Papers extending previously published conference papers are acceptable, as long as the journal submission provides a significant contribution beyond the conference paper and the overlap is described clearly at the beginning of the journal submission. If you have any questions about whether the overlap with another paper is “substantial,” please include in the paper a discussion of the similarities and differences with the other papers, including the unique contribution(s) of the Machine Learning submission.

Submission guidelines for the Machine Learning journal are available on the journal website.

For any inquiries about the special issue (and the workshop), please contact us. We look forward to receiving your contribution.

Editorial Schedule:

NeurIPS 2022 RL4RealLife Workshop
  Submission deadline: 09/15/2022
  Review: 10/15/2022

MLJ RL4RealLife Special Issue (direct submissions & revisions from the workshop)
  (Re)submission deadline (extended): 03/30/2023
  Review: 05/30/2023
  Final Decision: 06/30/2023

Guest Editors:

Emma Brunskill (Stanford)
Minmin Chen (Google)
Omer Gottesman (Amazon)
Lihong Li (Amazon)
Yuxi Li
Yao Liu (Amazon)
Zongqing Lu (PKU)
Niranjani Prasad (Microsoft)
Zhiwei (Tony) Qin (Lyft)
Csaba Szepesvari (DeepMind & U. of Alberta)
Matthew E. Taylor (U. of Alberta)