Poster

Learning Transferable Graph Exploration

Hanjun Dai · Yujia Li · Chenglong Wang · Rishabh Singh · Po-Sen Huang · Pushmeet Kohli

East Exhibition Hall B + C #166

Keywords: [ Deep Learning ] [ Embedding Approaches ] [ Reinforcement Learning and Planning ] [ Program Understanding and Generation ] [ Applications ]


Abstract:

This paper considers the problem of efficient exploration of unseen environments, a key challenge in AI. We propose a 'learning to explore' framework where we learn a policy from a distribution of environments. At test time, presented with an unseen environment from the same distribution, the policy aims to generalize the exploration strategy to visit the maximum number of unique states in a limited number of steps. We particularly focus on environments with graph-structured state-spaces that are encountered in many important real-world applications like software testing and map building. We formulate this task as a reinforcement learning problem where the 'exploration' agent is rewarded for transitioning to previously unseen environment states, and we employ a graph-structured memory to encode the agent's past trajectory. Experimental results demonstrate that our approach is extremely effective for exploration of spatial maps; and when applied to the challenging problems of coverage-guided software testing of domain-specific programs and real-world mobile applications, it outperforms methods that have been hand-engineered by human experts.
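The abstract's reward formulation can be illustrated with a minimal sketch: an agent walks a graph under a fixed step budget and receives a reward of 1 each time it enters a previously unseen node. This is a hypothetical illustration of the coverage objective described above, not the paper's implementation; the function names, the toy graph, and the random baseline policy are all assumptions for the sake of the example (the paper learns the policy with RL and a graph-structured memory).

```python
import random

def explore_episode(graph, start, step_budget, policy):
    """Run one exploration episode on a graph-structured environment.

    `graph` maps each node to a list of neighbors. The agent earns a
    reward of 1 for every transition into a previously unseen node,
    mirroring the coverage objective described in the abstract.
    """
    visited = {start}            # memory of the trajectory so far
    state, total_reward = start, 0.0
    for _ in range(step_budget):
        state = policy(state, visited, graph)  # choose a neighbor to move to
        if state not in visited:               # novelty bonus: +1 per new state
            total_reward += 1.0
            visited.add(state)
    return total_reward, visited

def random_policy(state, visited, graph):
    """Baseline: pick a uniformly random neighbor. A learned policy would
    instead condition on the visitation memory to seek out unseen states."""
    return random.choice(graph[state])

# Toy environment: a small undirected graph as an adjacency list.
toy_graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
reward, visited = explore_episode(toy_graph, start=0, step_budget=10,
                                  policy=random_policy)
print(f"unique states visited: {len(visited)}, reward: {reward}")
```

Under this objective, the return of an episode equals the number of unique states discovered beyond the start state, so maximizing expected return directly maximizes coverage within the step budget.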
