A Contextual Bandit Approach for Learning to Plan in Environments with Probabilistic Goal Configurations
Sohan Rudra · Saksham Goel · Anirban Santara · Claudio Gentile · Laurent Perron · Fei Xia · Vikas Sindhwani · Carolina Parada · Gaurav Aggarwal

Object-goal navigation (Object-nav) entails searching for, recognizing, and navigating to a target object. Object-nav has been extensively studied by the Embodied-AI community, but most solutions are restricted to static objects (e.g., television, fridge, etc.). We propose a modular framework for object-nav that efficiently searches indoor environments not only for static objects but also for movable objects (e.g., fruits, glasses, phones, etc.) that frequently change position due to human interaction. Our contextual-bandit agent explores the environment efficiently by showing optimism in the face of uncertainty, learning a model of the likelihood of spotting different objects from each navigable location. These likelihoods serve as rewards in a weighted minimum latency solver, which deduces a trajectory for the robot. We evaluate our algorithms in two simulated environments and a real-world setting, demonstrating high sample efficiency and reliability.
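The two ingredients the abstract describes — an optimistic estimate of the likelihood of spotting an object from each location, and a latency-aware routing step that consumes those likelihoods as rewards — can be illustrated with a minimal sketch. This is not the paper's actual method (the paper uses a contextual bandit over rich features and a proper weighted minimum latency solver); here a plain UCB-style bandit and a greedy likelihood-per-travel-time heuristic stand in for them, and all names (`OptimisticSpotter`, `greedy_route`) are hypothetical.

```python
import math

class OptimisticSpotter:
    """Toy UCB-style bandit: each navigable location is an arm, and the
    binary reward is whether the target object was spotted there.
    Illustrative only; the paper's agent conditions on context features."""

    def __init__(self, locations, c=1.0):
        self.c = c                                  # exploration strength
        self.counts = {loc: 0 for loc in locations}
        self.successes = {loc: 0 for loc in locations}
        self.t = 0                                  # total observations

    def likelihood(self, loc):
        # Optimism in the face of uncertainty: an unvisited location gets
        # the maximal upper confidence bound of 1.0.
        if self.counts[loc] == 0:
            return 1.0
        mean = self.successes[loc] / self.counts[loc]
        bonus = self.c * math.sqrt(math.log(self.t + 1) / self.counts[loc])
        return min(1.0, mean + bonus)

    def update(self, loc, spotted):
        self.t += 1
        self.counts[loc] += 1
        self.successes[loc] += int(spotted)


def greedy_route(likelihoods, travel_time, start):
    """Greedy stand-in for the weighted minimum latency solver: repeatedly
    visit the unvisited location with the best likelihood-to-travel-time
    ratio, so high-reward locations are reached with low latency."""
    route, current = [], start
    remaining = set(likelihoods)
    while remaining:
        nxt = max(remaining,
                  key=lambda loc: likelihoods[loc] / travel_time(current, loc))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route
```

As a usage sketch, locations on a line with `travel_time = lambda a, b: abs(a - b) + 1` and likelihoods `{1: 0.9, 2: 0.1, 3: 0.5}` from start `0` yield the route `[1, 3, 2]`: the nearby high-likelihood location is visited first, and the low-likelihood one is deferred to the end.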

Author Information

Sohan Rudra (Google)
Saksham Goel (Google)

Currently a SWE on Google Search, helping curious people go on exploration journeys with Google Search by day, and a Research SWE helping robots find interesting objects in the real world by night.

Anirban Santara (Google)
Claudio Gentile (Google Research)
Laurent Perron (Google)
Fei Xia (Google)
Vikas Sindhwani (Google)
Carolina Parada (Google)
Gaurav Aggarwal (Google)
