Reward Finetuning for Faster and More Accurate Unsupervised Object Discovery

Katie Luo · Zhenzhen Liu · Xiangyu Chen · Yurong You · Sagie Benaim · Cheng Perng Phoo · Mark Campbell · Wen Sun · Bharath Hariharan · Kilian Weinberger

Great Hall & Hall B1+B2 (level 1) #1104
[ Paper ] [ Poster ] [ OpenReview ]
Thu 14 Dec 8:45 a.m. PST — 10:45 a.m. PST


Recent advances in machine learning have shown that Reinforcement Learning from Human Feedback (RLHF) can improve machine learning models and align them with human preferences. Although very successful for Large Language Models (LLMs), these advancements have not had a comparable impact in research for autonomous vehicles, where alignment with human expectations can be imperative. In this paper, we propose to adapt similar RL-based methods to unsupervised object discovery, i.e., learning to detect objects from LiDAR points without any training labels. Instead of labels, we use simple heuristics to mimic human feedback. More explicitly, we combine multiple heuristics into a simple reward function whose score correlates positively with bounding box accuracy, i.e., boxes containing objects are scored higher than those without. We start from the detector's own predictions to explore the space and reinforce boxes with high rewards through gradient updates. Empirically, we demonstrate that our approach is not only more accurate, but also orders of magnitude faster to train compared to prior works on object discovery. Code is available at
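The core idea, scoring candidate boxes with simple heuristics and reinforcing high-reward boxes, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the reward terms (point density, occupancy), the function names, and the softmax weighting are all assumptions chosen for clarity.

```python
# Hypothetical sketch of reward-weighted box reinforcement for unsupervised
# object discovery. All names and reward terms are illustrative assumptions,
# not the authors' released code.
import numpy as np

def heuristic_reward(points_in_box, box_volume):
    """Toy reward combining simple heuristics: boxes that tightly enclose
    many LiDAR points score higher than empty or oversized boxes."""
    density = points_in_box / max(box_volume, 1e-6)
    occupancy = 1.0 if points_in_box > 0 else 0.0
    return occupancy * np.tanh(density)

def reinforce_weights(rewards, temperature=1.0):
    """Softmax-normalized weights over candidate boxes: high-reward boxes
    would receive larger gradient updates in the detector."""
    r = np.asarray(rewards, dtype=float)
    z = (r - r.max()) / temperature  # subtract max for numerical stability
    w = np.exp(z)
    return w / w.sum()

# Example: three candidate boxes drawn from the detector's own predictions,
# given as (number of LiDAR points inside, box volume) pairs.
candidates = [(120, 8.0), (0, 8.0), (40, 50.0)]
rewards = [heuristic_reward(n, v) for n, v in candidates]
weights = reinforce_weights(rewards)
# The empty box (0 points) receives the lowest weight, so it is
# reinforced least; the dense, tight box is reinforced most.
```

In a full training loop, these weights would scale per-box gradient contributions, so the detector drifts toward predictions the heuristic reward prefers without ever seeing human labels.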
