RAAP: Retrieval-Augmented Affordance Prediction with Cross-Image Action Alignment

Qiyuan Zhuang1, He-Yang Xu1, Yijun Wang1, Xin-Yang Zhao2, Yang-Yang Li2, Xiu-Shen Wei1†
1Southeast University, 2Nanjing University of Science and Technology
† Corresponding author

Retrieval-Augmented Affordance Prediction (RAAP) transfers contact knowledge through dense correspondence and aligns action directions across images, enabling zero-shot robotic manipulation on unseen objects and novel categories.

Abstract

Understanding object affordances is essential for enabling robots to perform purposeful interactions in diverse and unstructured environments. However, existing approaches either rely on retrieval, which is fragile when the reference set is sparse or has coverage gaps, or on large-scale models, which frequently mislocalize contact points and mispredict post-contact actions on unseen categories, hindering robust generalization. We introduce Retrieval-Augmented Affordance Prediction (RAAP), a framework that unifies affordance retrieval with alignment-based learning. By decoupling static contact localization from dynamic action-direction prediction, RAAP transfers contact points via dense correspondence and predicts action directions through a retrieval-augmented alignment model that consolidates multiple references with dual-weighted attention. Trained on compact subsets of DROID and HOI4D with as few as tens of samples per task, RAAP achieves consistent performance on unseen objects and categories, and enables zero-shot robotic manipulation in both simulation and the real world.
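The two mechanisms named in the abstract, contact transfer via dense correspondence and dual-weighted aggregation of reference action directions, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the feature shapes, the specific combination of attention and retrieval-similarity weights, and all function names are our own assumptions.

```python
import numpy as np

def transfer_contact(ref_contact_feat, query_feats):
    """Transfer a contact point by dense correspondence (illustrative).

    ref_contact_feat: (D,) descriptor at the reference contact pixel.
    query_feats: (N, D) per-pixel descriptors of the query image, flattened.
    Returns the index of the query pixel best matching the reference contact.
    """
    sims = query_feats @ ref_contact_feat  # (N,) correspondence scores
    return int(sims.argmax())

def dual_weighted_direction(query_feat, ref_feats, ref_dirs, ref_sims, tau=0.1):
    """Aggregate reference action directions with two weights (illustrative):
    an attention weight from query-reference feature similarity, and a
    retrieval-similarity weight for each retrieved reference.

    query_feat: (D,) query feature; ref_feats: (K, D) reference features;
    ref_dirs: (K, 3) unit action directions; ref_sims: (K,) retrieval scores.
    """
    # Attention weights: softmax over references, temperature tau.
    logits = ref_feats @ query_feat / tau
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()
    # Second weight: modulate attention by retrieval similarity, renormalize.
    w = attn * ref_sims
    w /= w.sum()
    # Weighted sum of reference directions, renormalized to a unit vector.
    d = (w[:, None] * ref_dirs).sum(axis=0)
    return d / np.linalg.norm(d)
```

In RAAP the direction is predicted by a trained alignment model rather than this closed-form weighting; the sketch only conveys how multiple retrieved references could be consolidated with two multiplicative weights.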

BibTeX

@inproceedings{zhuang2026raap,
  title={RAAP: Retrieval-Augmented Affordance Prediction with Cross-Image Action Alignment},
  author={Zhuang, Qiyuan and Xu, He-Yang and Wang, Yijun and Zhao, Xin-Yang and Li, Yang-Yang and Wei, Xiu-Shen},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2026}
}