The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and it has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also additional challenges unique to RL that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, showing promise in a variety of applications from RNA design to playing games such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail, and pose open problems of interest to researchers going forward.