eddyhkchiu2024 · apache-2.0
V2V-GoT-QA
A question-answering dataset for multimodal LLMs in cooperative autonomous driving with graph-of-thoughts reasoning, containing 110K training and 31K testing QA pairs across perception, prediction, and planning tasks.
Downloads: 109
Episodes: 141,000
Likes: 2
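To make the dataset's composition concrete, here is a minimal sketch of what one QA record might look like. The field names (`task`, `question`, `answer`) and the sample text are assumptions for illustration, not the dataset's actual schema; only the split sizes (110K train, 31K test) come from the card above.

```python
# Hypothetical illustration of a single V2V-GoT-QA record.
# Field names and sample strings are assumptions, not the real schema.
from dataclasses import dataclass


@dataclass
class QAPair:
    task: str      # one of "perception", "prediction", "planning"
    question: str
    answer: str


TRAIN_SIZE, TEST_SIZE = 110_000, 31_000

sample = QAPair(
    task="planning",
    question=(
        "Given the occluded pedestrian reported by the nearby vehicle, "
        "which waypoints avoid a collision?"
    ),
    answer="Shift the planned waypoints left and reduce speed.",
)

print(sample.task)               # planning
print(TRAIN_SIZE + TEST_SIZE)    # 141000
```

Note that the train and test splits together total 141,000 QA pairs, matching the episode count listed above.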
Why This Matters for Physical AI
This dataset advances cooperative autonomous driving research by grounding multimodal reasoning in vehicle-to-vehicle coordination, enabling LLMs to perform occlusion-aware perception, planning-aware prediction, and collision avoidance.
Technical Profile
- Modalities
- rgb, language
- Robot Embodiments
- autonomous_vehicle
- Action Space
- waypoints
- Environment
- outdoor
- Task Types
- navigation, planning, perception, prediction
- Episodes
- 141,000
- Annotation Types
- language_instructions, action_labels, bounding_boxes
- License
- apache-2.0
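Since the action space is waypoints, a planned action can be represented as a short sequence of positions in the ego-vehicle frame. The sketch below shows one plausible encoding, assuming (x, y) offsets in metres; the coordinate convention is an assumption, not taken from the dataset.

```python
# Hedged sketch: a waypoint action as a sequence of (x, y) offsets in the
# ego frame. Units and axes are assumptions for illustration only.
from typing import List, Tuple

Waypoint = Tuple[float, float]  # (x forward, y left), in metres (assumed)


def plan_length(waypoints: List[Waypoint]) -> float:
    """Total path length of a planned waypoint sequence."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return total


# A straight 9 m plan sampled every 3 m.
plan = [(0.0, 0.0), (3.0, 0.0), (6.0, 0.0), (9.0, 0.0)]
print(plan_length(plan))  # 9.0
```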
Community Signals
HuggingFace Discussions: 1