Abstract
In most real-world scenarios, a policy trained by reinforcement learning in one environment needs to be deployed in another, potentially quite different environment. However, generalization across different environments is known to be hard. A natural solution would be to keep training after deployment in the new environment, but this cannot be done if the new environment offers no reward signal. Our work explores the use of self-supervision to allow the policy to continue training after deployment without using any rewards. While previous methods explicitly anticipate changes in the new environment, we assume no prior knowledge of those changes yet still obtain significant improvements. Empirical evaluations are performed on diverse simulation environments from the DeepMind Control suite and ViZDoom, as well as real robotic manipulation tasks in continuously changing environments, taking observations from an uncalibrated camera. Our method improves generalization in 28 out of 32 environments across various tasks and outperforms domain randomization on a majority of environments.
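To make the idea of reward-free adaptation concrete, the minimal sketch below shows test-time training with a self-supervised inverse dynamics objective: the agent keeps acting, and after each transition it updates its observation encoder by predicting the action that connected two consecutive observations, never touching the reward. The module names (`encoder`, `policy_head`, `idm_head`), the choice of updating only the encoder, and the Gym-style `env` interface are illustrative assumptions for this sketch, not the released implementation.

```python
import torch
import torch.nn.functional as F

# Assumed pre-trained modules (torch.nn.Module instances):
#   encoder     - maps an observation tensor to a feature vector
#   policy_head - maps features to a continuous action
#   idm_head    - inverse dynamics model: predicts the action from the
#                 features of two consecutive observations
# `env` is assumed to follow the Gym step/reset interface and to return
# observations already converted to torch tensors.

optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

obs = env.reset()
done = False
while not done:
    with torch.no_grad():
        action = policy_head(encoder(obs))       # act with current weights
    next_obs, _, done, _ = env.step(action)      # reward is ignored entirely

    # Self-supervised update: predict the action taken between the two
    # consecutive observations and backpropagate into the encoder only.
    pred_action = idm_head(encoder(obs), encoder(next_obs))
    loss = F.mse_loss(pred_action, action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    obs = next_obs
```

Because the inverse dynamics loss needs only observed transitions, this loop can run indefinitely in the deployment environment, letting the representation track changes such as new backgrounds or lighting.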
Robotic manipulation
We train policies in simulation and deploy them on a real robot, operating solely from an uncalibrated camera. Policy Adaptation during Deployment (PAD) transfers successfully and can adapt to a variety of real-world environments, including environmental changes such as tablecloths and disco lights.
Tasks shown: Reach and Push.
Non-stationary environments
We evaluate on a collection of natural video backgrounds and show that Policy Adaptation during Deployment (PAD) continuously adapts to changes in the environment. Here we compare our method to a non-adaptive SAC trained with an inverse dynamics model (denoted SAC+IDM), as well as CURL (Srinivas et al.), a recently proposed contrastive method.
Stationary environments
We evaluate on randomized environments and show that Policy Adaptation during Deployment (PAD) outperforms both CURL (Srinivas et al.) and a non-adaptive SAC trained with an inverse dynamics model (denoted SAC+IDM) on a majority of tasks, while minimally impacting performance in the original (training) environment.