Object Goal Navigation using Data Regularized Q-Learning
IEEE CASE 2022

Abstract

Overview

Object Goal Navigation requires a robot to navigate to an instance of a target out-of-view object class in a previously unseen environment. The framework described in this paper first builds a semantic map of the environment gradually over time, and then repeatedly selects a long-term goal based on the semantic map to locate the target object instance. Selecting the long-term goal ('where to go') is formulated as a vision-based deep reinforcement learning problem. Specifically, an Encoder Network is trained to process the semantic map, extract high-level features, and select a long-term goal. In addition, we incorporate data augmentation and Q-function regularization to make long-term goal selection more effective. We report experimental results on the photo-realistic Gibson benchmark dataset in the AI Habitat 3D simulation environment, demonstrating that our framework substantially improves performance on standard measures in comparison with a state-of-the-art baseline.
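
To make the data augmentation and Q-function regularization step concrete, below is a minimal PyTorch sketch of a DrQ-style update (Kostrikov et al., 2021) applied to semantic-map inputs. The names q_net, target_q_net, and random_shift, and the assumption of a discrete set of candidate long-term goals, are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn.functional as F

def random_shift(maps, pad=4):
    # Random-shift augmentation: pad the semantic map with replicated
    # borders, then crop back to the original size at a random offset.
    b, c, h, w = maps.shape
    padded = F.pad(maps, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(maps)
    for i in range(b):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out

def drq_loss(q_net, target_q_net, batch, gamma=0.99, K=2, M=2):
    # batch: (obs, action, reward, next_obs, done), where obs/next_obs are
    # (B, C, H, W) semantic maps and done is a float mask in {0, 1}.
    obs, action, reward, next_obs, done = batch

    # Target regularization: average TD targets over K augmented next maps.
    with torch.no_grad():
        target = 0.0
        for _ in range(K):
            next_q = target_q_net(random_shift(next_obs)).max(dim=1).values
            target = target + reward + gamma * (1.0 - done) * next_q
        target = target / K

    # Q-function regularization: average the TD loss over M augmented
    # current maps, so the Q-function is trained to be augmentation-invariant.
    loss = 0.0
    for _ in range(M):
        q = q_net(random_shift(obs)).gather(1, action.unsqueeze(1)).squeeze(1)
        loss = loss + F.mse_loss(q, target)
    return loss / M

Averaging the target over K augmentations reduces the variance of the bootstrapped Q-target, while averaging the loss over M augmentations regularizes the Q-function itself; both uses of augmentation are what "data regularized" refers to here.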

Video

Citation

Acknowledgements

ObjNav-DrQ was implemented on top of the SemExp codebase.
The website template was borrowed from Michaël Gharbi.