Linslab · 2025 · MIT
VLA-OS Dataset
A multi-dataset collection for training vision-language-action models, combining data from LIBERO, Colosseum, FurnitureBench, DexArt, deformable object manipulation, and PerAct2 with task planning annotations.
Downloads: 849
Likes: 2
Why This Matters for Physical AI
This dataset enables research into vision-language-action model architectures and planning representations by providing diverse multi-source robotic manipulation data with semantic annotations.
Technical Profile
- Modalities
- rgb, language
- Environment
- simulation, lab
- Task Types
- manipulation, grasping, pick_and_place, object_rearrangement
- Annotation Types
- language_instructions, task_planning
- License
- MIT
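To make the profile above concrete, here is a minimal sketch of what a single episode record might look like, combining RGB observations, a language instruction, and a task-planning annotation. All field and class names here are hypothetical illustrations; they are not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema sketch mirroring the Technical Profile:
# RGB + language modalities, with language_instructions and
# task_planning annotations per episode.

@dataclass
class PlanStep:
    description: str          # one semantic sub-goal, e.g. "grasp the bowl"

@dataclass
class Episode:
    source: str               # originating benchmark, e.g. "LIBERO" or "PerAct2"
    instruction: str          # natural-language task instruction
    plan: List[PlanStep]      # task-planning annotation for the episode
    rgb_frames: List[bytes]   # encoded RGB observations, one per timestep
    actions: List[List[float]] = field(default_factory=list)  # robot actions

# Illustrative record
ep = Episode(
    source="LIBERO",
    instruction="put the bowl on the plate",
    plan=[PlanStep("grasp the bowl"), PlanStep("place it on the plate")],
    rgb_frames=[],
)
print(ep.source, len(ep.plan))
```

A schema like this is what lets planning-representation research pair each low-level trajectory with its high-level sub-goal decomposition.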
Community Signals
Top 25% by downloads