Learn What Not to Learn: Action Elimination with Deep Reinforcement Learning
Tom Zahavy · Matan Haroush · Nadav Merlis · Daniel J Mankowitz · Shie Mannor

Wed Dec 5th 05:00 -- 07:00 PM @ Room 517 AB #114

Learning how to act when there are many available actions in each state is a challenging task for Reinforcement Learning (RL) agents, especially when many of the actions are redundant or irrelevant. In such cases, it is easier to learn which actions not to take. In this work, we propose the Action-Elimination Deep Q-Network (AE-DQN) architecture that combines a Deep RL algorithm with an Action Elimination Network (AEN) that eliminates sub-optimal actions. The AEN is trained to predict invalid actions, supervised by an external elimination signal provided by the environment. Simulations demonstrate a considerable speedup and added robustness over vanilla DQN in text-based games with over a thousand discrete actions.
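The core idea of combining a Q-network with an elimination network can be sketched as follows. This is a simplified illustration, not the authors' implementation (the paper's AEN uses confidence bounds from a contextual-bandit formulation); the function name and the fixed probability threshold here are illustrative assumptions.

```python
import numpy as np

def eliminate_and_act(q_values, elim_probs, threshold=0.5):
    """Greedy action selection after masking actions an elimination
    network predicts to be invalid (illustrative sketch).

    q_values   -- Q-value estimates per action from the DQN
    elim_probs -- predicted probability that each action is invalid
    threshold  -- illustrative cutoff; actions above it are eliminated
    """
    valid = elim_probs < threshold
    if not valid.any():
        # Fall back to plain greedy selection if everything was eliminated.
        valid = np.ones_like(valid, dtype=bool)
    masked_q = np.where(valid, q_values, -np.inf)
    return int(np.argmax(masked_q))

q = np.array([1.0, 3.0, 2.0])
p_invalid = np.array([0.1, 0.9, 0.2])  # elimination net flags action 1
eliminate_and_act(q, p_invalid)        # -> 2: best action among those kept
```

Restricting the argmax to the surviving actions is what shrinks the effective action space, which is where the speedup in large discrete action spaces comes from.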

Author Information

Tom Zahavy (Technion)

Tom Zahavy is a Ph.D. student in the Faculty of Electrical Engineering at the Technion. Tom's research focuses on developing Artificial Intelligence (AI) that learns to make optimal decisions in dynamic environments. In particular, Tom uses Reinforcement Learning and Artificial Neural Networks (computational models inspired by the human brain) to solve video games like Atari and Minecraft. The problem is that the AI brain is a black box: we know how to train it, but we lack tools to understand how it works. Thus, Tom has developed methods to analyze and visualize the AI brain, like an fMRI for computers. The approach is to look into the AI's internal representation of the world (to see the world through the AI's eyes) and then simplify it using abstract models.

Matan Haroush (Technion)
Nadav Merlis (Technion)
Daniel J Mankowitz (Technion)
Shie Mannor (Technion)
