
Dynamic Object Manipulation (DOM)

Project Page | Paper | Code

TL;DR: DOM is a large-scale dynamic manipulation dataset with 200K episodes, 2,800+ scenes, and 206 objects for training and evaluating VLA models.

Introduction

The Dynamic Object Manipulation (DOM) benchmark is designed to address the challenges of rapid perception and temporal anticipation in robotics. It includes:

  • 200K synthetic episodes across 2,800+ scenes and 206 objects.
  • Support for evaluating VLA models in dynamic scenarios requiring continuous control and closed-loop adaptation.
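Access to the dataset is gated on the Hugging Face Hub, so the access conditions must be accepted before the files can be fetched. The snippet below is a minimal sketch of downloading the repository with the `huggingface_hub` client; the on-disk layout of episodes and scenes is an assumption and is not documented in this card.

```python
# Minimal sketch: download the full dataset repository from the Hub.
# Assumes the access conditions for hzxie/DOM have been accepted and you
# are authenticated (e.g. via `huggingface-cli login`). The local layout
# of episodes/scenes is an assumption, not documented in this card.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="hzxie/DOM", repo_type="dataset")
print(f"DOM dataset files downloaded to: {local_dir}")
```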

Citation

If you find this dataset or the DynamicVLA framework useful for your research, please cite:

@article{xie2026dynamicvla,
  title     = {DynamicVLA: A Vision-Language-Action Model for 
               Dynamic Object Manipulation},
  author    = {Xie, Haozhe and 
               Wen, Beichen and 
               Zheng, Jiarui and 
               Chen, Zhaoxi and 
               Hong, Fangzhou and 
               Diao, Haiwen and 
               Liu, Ziwei},
  journal   = {arXiv preprint arXiv:2601.22153},
  year      = {2026}
}

Changelog

  • [2026/04/26] The dataset is released.
  • [2026/01/31] The repo is created.