SC19 Proceedings

The International Conference for High Performance Computing, Networking, Storage, and Analysis

Metaoptimization on a Distributed System for Deep Reinforcement Learning

Workshop: Machine Learning in HPC Environments

Abstract: Training intelligent agents through reinforcement learning (RL) is a notoriously unstable procedure. Massive parallelization on GPUs and distributed systems has been exploited to generate large amounts of training experience and thereby reduce instability, but the success of training remains strongly influenced by the choice of hyperparameters. To overcome this issue, we introduce HyperTrick, a new metaoptimization algorithm, and demonstrate its effectiveness at tuning hyperparameters for deep RL while learning to play different Atari games on a distributed system. Our analysis provides evidence of an interaction between the identification of the optimal hyperparameters and the learned policy, which is peculiar to metaoptimization for deep RL. Compared with state-of-the-art metaoptimization algorithms, HyperTrick has a simpler implementation and learns similar policies, while making more effective use of the computational resources of a distributed system.
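The abstract does not spell out HyperTrick's mechanics, so the following is only a generic sketch of the metaoptimization setting it addresses: a population of parallel workers, each training with its own candidate hyperparameters, with periodic exploit/explore updates in the spirit of population-based training. All names, the toy objective, and the perturbation scheme are illustrative assumptions, not the authors' algorithm.

```python
import random

def metaoptimize(num_workers=4, steps=20, seed=0):
    """Generic population-based hyperparameter search (illustrative only;
    this is NOT the HyperTrick algorithm, whose details are not given here)."""
    rng = random.Random(seed)
    # Each worker holds a candidate learning rate (the hyperparameter)
    # and a score standing in for the return of its learned policy.
    workers = [{"lr": 10 ** rng.uniform(-5, -2), "score": 0.0}
               for _ in range(num_workers)]
    for _ in range(steps):
        for w in workers:
            # Toy surrogate objective: reward peaks at lr == 1e-3,
            # with a little noise mimicking unstable RL training.
            w["score"] = -abs(w["lr"] - 1e-3) + rng.gauss(0, 1e-5)
        workers.sort(key=lambda w: w["score"], reverse=True)
        half = num_workers // 2
        # Exploit: the bottom half copies hyperparameters from the top half...
        for bad, good in zip(workers[half:], workers[:half]):
            bad["lr"] = good["lr"]
            # ...and explores by perturbing the copied value.
            bad["lr"] *= rng.choice([0.8, 1.2])
    # Return the best-performing hyperparameter found.
    return workers[0]["lr"]

best = metaoptimize()
```

In a distributed deployment, each worker would occupy its own GPU node and train a full RL agent between exploit/explore rounds; the loop above compresses that into a toy objective so the population dynamics are visible.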

Back to Machine Learning in HPC Environments Archive Listing
