Abstract
Understanding object affordances is essential for enabling robots to perform purposeful interactions in diverse and unstructured environments. However, existing approaches either rely on retrieval, which is fragile due to sparsity and coverage gaps, or on large-scale models, which frequently mislocalize contact points and mispredict post-contact actions when applied to unseen categories, thereby hindering robust generalization. We introduce Retrieval-Augmented Affordance Prediction (RAAP), a framework that unifies affordance retrieval with alignment-based learning. By decoupling static contact localization from dynamic action-direction prediction, RAAP transfers contact points via dense correspondence and predicts action directions through a retrieval-augmented alignment model that consolidates multiple references with dual-weighted attention. Trained on compact subsets of DROID and HOI4D with as few as tens of samples per task, RAAP achieves consistent performance across unseen objects and categories, and enables zero-shot robotic manipulation in both simulation and the real world.
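The sketch below is a minimal illustration of the two decoupled steps described above: transferring a contact point from a retrieved reference via dense-correspondence matching, and consolidating the references' action directions with a dual weighting of query-reference feature similarity and retrieval confidence. All function names, tensor shapes, the temperature, and the multiplicative weighting rule are illustrative assumptions, not the RAAP implementation.

import torch
import torch.nn.functional as F

def transfer_contact_point(ref_featmap, query_featmap, ref_uv):
    # ref_featmap:   (H, W, D) dense features of the retrieved reference image
    # query_featmap: (H, W, D) dense features of the query image
    # ref_uv:        (row, col) contact pixel annotated on the reference
    H, W, D = query_featmap.shape
    contact_feat = F.normalize(ref_featmap[ref_uv[0], ref_uv[1]], dim=0)  # (D,)
    flat = F.normalize(query_featmap.reshape(-1, D), dim=1)               # (H*W, D)
    sims = flat @ contact_feat                                            # cosine similarity per query pixel
    return divmod(sims.argmax().item(), W)                                # best-matching (row, col) in the query

def aggregate_action_direction(query_feat, ref_feats, ref_dirs, retrieval_scores, temperature=0.1):
    # query_feat:       (D,)   global embedding of the query image
    # ref_feats:        (K, D) embeddings of the K retrieved references
    # ref_dirs:         (K, 3) post-contact action directions of the references
    # retrieval_scores: (K,)   similarity scores from the retrieval stage
    attn = F.softmax(ref_feats @ query_feat / temperature, dim=0)  # weight 1: query-reference similarity
    retr = F.softmax(retrieval_scores / temperature, dim=0)        # weight 2: retrieval confidence
    w = attn * retr
    w = w / w.sum()                                                # dual weighting, renormalized
    direction = (w.unsqueeze(-1) * ref_dirs).sum(dim=0)            # weighted average over references
    return F.normalize(direction, dim=0)                           # unit-norm action direction

The hand-set temperature and the simple product of the two weights are only placeholders for the model's learned dual-weighted attention.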
BibTeX
@inproceedings{zhuang2026raap,
title={RAAP: Retrieval-Augmented Affordance Prediction with Cross-Image Action Alignment},
author={Zhuang, Qiyuan and Xu, He-Yang and Wang, Yijun and Zhao, Xin-Yang and Li, Yang-Yang and Wei, Xiu-Shen},
booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
year={2026}
}