AffordGrasp: Cross-Modal Diffusion for Affordance-Aware Grasp Synthesis

1ShanghaiTech University, 2Shanghai Engineering Research Center of Intelligent Vision and Imaging, 3University of Science and Technology of China
Corresponding author

Abstract

Generating human grasping poses that accurately reflect both object geometry and user-specified interaction semantics is essential for natural hand–object interactions in AR/VR and embodied AI. However, existing semantic grasping approaches struggle with the large modality gap between 3D object representations and textual instructions, and often lack explicit spatial or semantic constraints, leading to physically invalid or semantically inconsistent grasps. In this work, we present AffordGrasp, a diffusion-based framework that produces physically stable and semantically faithful human grasps with high precision. We first introduce a scalable annotation pipeline that automatically enriches hand–object interaction datasets with fine-grained structured language labels capturing interaction intent. Building upon these annotations, AffordGrasp integrates an affordance-aware latent representation of hand poses with a dual-conditioning diffusion process, enabling the model to jointly reason over object geometry, spatial affordances, and instruction semantics. A distribution adjustment module further enforces physical contact consistency and semantic alignment. We evaluate AffordGrasp across four instruction-augmented benchmarks derived from HO-3D, OakInk, GRAB, and AffordPose, and observe substantial improvements over state-of-the-art methods in grasp quality, semantic accuracy, and diversity.
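To make the dual-conditioning idea concrete, here is a minimal, illustrative sketch of a reverse-diffusion sampler over a latent hand-pose vector, conditioned jointly on an object-geometry embedding and an instruction-text embedding. All dimensions, the toy two-layer denoiser, and the simplified deterministic update are assumptions for illustration only, not the AffordGrasp architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D_POSE, D_GEOM, D_TEXT, D_HID, T = 8, 4, 4, 16, 10  # toy sizes, not the paper's

# Randomly initialised stand-in for a trained denoiser network.
w1 = rng.normal(0, 0.1, (D_POSE + 1 + D_GEOM + D_TEXT, D_HID))
b1 = np.zeros(D_HID)
w2 = rng.normal(0, 0.1, (D_HID, D_POSE))
b2 = np.zeros(D_POSE)

def predict_noise(z_t, t, geom_emb, text_emb):
    """Denoiser conditioned jointly on geometry and instruction embeddings."""
    x = np.concatenate([z_t, [t / T], geom_emb, text_emb])
    h = np.maximum(x @ w1 + b1, 0.0)   # ReLU hidden layer
    return h @ w2 + b2                  # predicted noise

def sample(geom_emb, text_emb, beta=0.05):
    """Deterministic reverse pass from Gaussian noise (noise term omitted)."""
    z = rng.normal(size=D_POSE)         # start from pure noise
    alpha = 1.0 - beta
    for t in range(T, 0, -1):
        eps_hat = predict_noise(z, t, geom_emb, text_emb)
        z = (z - beta * eps_hat) / np.sqrt(alpha)  # simplified mean update
    return z

pose = sample(rng.normal(size=D_GEOM), rng.normal(size=D_TEXT))
print(pose.shape)  # (8,)
```

The key point is the single concatenated conditioning vector: because geometry and text enter the denoiser jointly at every step, the sampler can trade off spatial affordances against instruction semantics throughout the reverse process, rather than fusing them only once at the start.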


Affordance Prediction

Video Results

Wrap your fingers around the bottle's body

Carefully lift the bottle to avoid sliding.

Press the dispenser to avoid over-pouring.

Twist the top of the dispenser to open it.

BibTeX


@article{liu2024realdex,
  title={{RealDex}: Towards human-like grasping for robotic dexterous hand},
  author={Liu, Yumeng and Yang, Yaxun and Wang, Youzhuo and Wu, Xiaofei and Wang, Jiamin and Yao, Yichen and Schwertfeger, S{\"o}ren and Yang, Sibei and Wang, Wenping and Yu, Jingyi and others},
  journal={arXiv preprint arXiv:2402.13853},
  year={2024}
}