Human-Object Interaction from Human-Level Instructions

Zhen Wu, Jiaman Li, Pei Xu, C. Karen Liu

Stanford University

In IEEE/CVF International Conference on Computer Vision (ICCV), 2025.


Abstract

Intelligent agents must autonomously interact with their environments to perform daily tasks based on human-level instructions. They need a foundational understanding of the world to accurately interpret these instructions, along with precise low-level movement and interaction skills to execute the derived actions. In this work, we propose the first complete system for synthesizing physically plausible, long-horizon human-object interactions for object manipulation in contextual environments, driven by human-level instructions. We leverage large language models (LLMs) to interpret input instructions into detailed execution plans. Unlike prior work, our system generates detailed finger-object interactions in seamless coordination with full-body movements. We also train a policy via reinforcement learning (RL) to track the generated motions in physics simulation, ensuring their physical plausibility. Our experiments demonstrate the effectiveness of our system in synthesizing realistic interactions with diverse objects in complex environments, highlighting its potential for real-world applications.
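To make the pipeline concrete, below is a minimal sketch of the instruction-to-plan step, assuming a generic chat-style LLM backend. The prompt wording, the action vocabulary (walk_to, grasp, place), and the llm_complete helper are illustrative placeholders, not the system's actual interface; here llm_complete returns a canned plan so the sketch runs offline.

import json

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; swap in a real
    # chat-completion client. Returns a canned JSON plan for the demo.
    return json.dumps([
        {"action": "walk_to", "target": "kitchen_table"},
        {"action": "grasp", "object": "mug", "hand": "right"},
        {"action": "place", "object": "mug", "location": "shelf"},
    ])

PLANNER_PROMPT = (
    "You are a task planner for a humanoid agent.\n"
    "Scene objects: {objects}\n"
    "Instruction: {instruction}\n"
    "Return a JSON list of steps, each with an 'action' "
    "(walk_to, grasp, place) and its arguments."
)

def instruction_to_plan(instruction: str, objects: list[str]) -> list[dict]:
    # Fill the prompt with the scene context and parse the LLM's
    # structured output into an executable step list.
    prompt = PLANNER_PROMPT.format(objects=", ".join(objects),
                                   instruction=instruction)
    return json.loads(llm_complete(prompt))

if __name__ == "__main__":
    plan = instruction_to_plan("Put the mug on the shelf",
                               ["kitchen_table", "mug", "shelf"])
    for i, step in enumerate(plan, 1):
        print(i, step)

Likewise, the RL tracking stage can be viewed as rewarding the simulated character for staying close to the kinematic reference motion. The exponentiated pose-error reward below is a common DeepMimic-style formulation, shown only as an assumption; the paper's exact reward terms are not reproduced here.

import numpy as np

def tracking_reward(q_sim: np.ndarray, q_ref: np.ndarray,
                    k: float = 2.0) -> float:
    # Exponentiated squared error between the simulated and reference
    # joint configurations; 1.0 means perfect tracking.
    return float(np.exp(-k * np.sum((q_sim - q_ref) ** 2)))

# Example: a small deviation from a 30-DoF reference pose (dimension
# assumed for illustration) yields a reward near 1.
q_ref = np.zeros(30)
q_sim = q_ref + 0.01 * np.random.randn(30)
print(tracking_reward(q_sim, q_ref))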

Video

BibTeX

@inproceedings{hoifhli,
  title={Human-Object Interaction from Human-Level Instructions},
  author={Wu, Zhen and Li, Jiaman and Xu, Pei and Liu, C. Karen},
  booktitle={2025 IEEE/CVF International Conference on Computer Vision (ICCV)},
  pages={1--11},
  year={2025}
}