Training Data for Collaborative Robotics

Collaborative Robotics is building mobile manipulators for commercial human environments. Here is how real-world data accelerates their path from development to production deployment.

About Collaborative Robotics

Collaborative Robotics (Cobot) builds mobile manipulator robots designed to work alongside people in commercial environments. Founded in 2022 by Brad Porter (formerly VP of robotics at Amazon and CTO of Scale AI), the company has raised over $100 million. Their robot combines mobile base navigation with manipulation capabilities, targeting healthcare, hospitality, and logistics applications.

- Mobile manipulation in human environments
- Safe human-robot interaction
- Multi-floor autonomous navigation
- Task learning from demonstrations
- Cloud-connected fleet intelligence

Collaborative Robotics at a Glance

Founded: 2022
Stage: Funded
Deployment: Global
Approach: AI-First

Known Data Requirements

Collaborative Robotics' mobile manipulator must navigate and manipulate in spaces designed for people — hospitals, hotels, offices, warehouses. Their data needs span diverse indoor environments with the specific obstacle types, floor plans, and human interaction patterns these spaces present. The mobile manipulation paradigm requires data that jointly captures navigation and manipulation.

Diverse manipulation demonstrations

Source: Collaborative Robotics product deployments and research publications

Multi-modal recordings of manipulation tasks across diverse objects, environments, and conditions relevant to Collaborative Robotics' deployment contexts.

Real-world environment recordings

Source: Collaborative Robotics deployment requirements

Visual and geometric recordings of target deployment environments capturing the specific layouts, lighting, and conditions Collaborative Robotics' robots encounter.

Perception pretraining data

Source: Collaborative Robotics AI architecture requirements

Diverse egocentric and multi-view video for pretraining visual representations that ground Collaborative Robotics' AI in real-world physical understanding.
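
As an illustration of how such footage is typically consumed, the sketch below shows a minimal PyTorch dataset that loads short egocentric clips for self-supervised visual pretraining. The directory layout, clip format, and class name are assumptions made for the example, not a description of Claru's actual dataset API:

```python
from pathlib import Path

import torch
from torch.utils.data import Dataset
from torchvision.io import read_video


class EgocentricClipDataset(Dataset):
    """Illustrative loader for first-person video clips used in visual pretraining.

    Assumes a directory of short .mp4 clips; each item is a fixed-length stack
    of frames suitable for a self-supervised objective such as masked-frame or
    contrastive pretraining.
    """

    def __init__(self, clip_dir: str, frames_per_clip: int = 16):
        self.paths = sorted(Path(clip_dir).glob("*.mp4"))
        self.frames_per_clip = frames_per_clip

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, idx: int) -> torch.Tensor:
        # read_video returns (video, audio, info); video is (T, H, W, C) uint8
        video, _, _ = read_video(str(self.paths[idx]), output_format="THWC")
        # Sample frames_per_clip frames evenly across the clip
        indices = torch.linspace(0, video.shape[0] - 1, self.frames_per_clip).long()
        frames = video[indices]                              # (F, H, W, C)
        frames = frames.permute(0, 3, 1, 2).float() / 255.0  # (F, C, H, W) in [0, 1]
        return frames
```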

How Claru Data Addresses These Needs

Lab Need: Diverse manipulation demonstrations
Claru Offering: Manipulation Trajectory Dataset + Custom Collection
Rationale: Claru captures multi-modal manipulation recordings with dense annotations across diverse environments, matching the diversity Collaborative Robotics needs for robust policy training.

Lab Need: Real-world environment recordings
Claru Offering: Custom Environmental Recording Campaigns
Rationale: Claru coordinates multi-sensor recordings across partner facilities in Collaborative Robotics' target deployment environments, capturing authentic visual distributions.

Lab Need: Perception pretraining data
Claru Offering: Egocentric Activity Dataset (386K+ clips)
Rationale: Purpose-collected first-person video of human activities provides visual pretraining data that grounds Collaborative Robotics' AI in real physical interactions.

Technical Data Analysis

Brad Porter's experience at Amazon Robotics (where he oversaw 750K+ robots) and Scale AI (data infrastructure at massive scale) directly shapes Collaborative Robotics' approach. The company understands that the data bottleneck — not hardware — is what limits mobile manipulation deployment.

The mobile manipulator form factor creates compound data requirements. Navigation data alone is not enough; the robot must understand how to position its base to enable manipulation, how to navigate while carrying objects, and how to share space safely with people. This coupled navigation-manipulation capability requires training data where both modalities are captured simultaneously.
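
To make the coupling concrete, here is a minimal sketch of what one synchronized time-step in such a dataset might look like. The class and field names (MobileManipulationSample, base_pose, and so on) are illustrative assumptions, not Collaborative Robotics' actual schema:

```python
from dataclasses import dataclass, field
from typing import Dict, List

import numpy as np


@dataclass
class MobileManipulationSample:
    """One time-step of a coupled navigation + manipulation episode.

    All fields share a single timestamp so that base motion, arm state, and
    perception can be trained on jointly rather than as separate streams.
    """
    timestamp_ns: int                      # common clock for all modalities
    base_pose: np.ndarray                  # (x, y, theta) in the map frame
    base_velocity: np.ndarray              # (v_x, v_y, omega), measured
    arm_joint_positions: np.ndarray        # one value per arm joint, radians
    arm_joint_velocities: np.ndarray       # radians/second
    gripper_state: float                   # 0.0 = open, 1.0 = closed
    camera_frames: Dict[str, np.ndarray]   # camera name -> HxWx3 RGB image
    depth_frames: Dict[str, np.ndarray]    # camera name -> HxW depth map, meters
    nearby_people: List[np.ndarray] = field(default_factory=list)  # tracked (x, y)


@dataclass
class Episode:
    """A full demonstration: deployment metadata plus an ordered sample list."""
    robot_id: str
    site_id: str       # e.g. a specific hospital or hotel deployment
    task_label: str    # e.g. "deliver_supplies_to_nursing_station"
    samples: List[MobileManipulationSample]
```

Because every modality hangs off one timestamp, a policy can learn behaviors that cross the navigation/manipulation boundary, such as repositioning the base mid-grasp.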

Healthcare deployment presents unique environmental and interaction requirements. Hospital corridors, patient rooms, nursing stations, and supply areas each have specific layouts, obstacle types, and human behavior patterns. Training data from real healthcare facilities — not simulated hospital environments — captures the specific visual distributions, navigation constraints, and human interaction patterns that healthcare robots encounter.

The cloud fleet intelligence dimension adds another data requirement: learning from the collective experience of deployed robots. When one robot encounters a new situation, the fleet should benefit. This requires standardized data capture from every deployed robot plus the infrastructure to aggregate, annotate, and incorporate new experiences into model updates.
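
A hedged sketch of what standardized fleet capture could look like in practice: each robot emits a small structured event, and an aggregator groups repeated failure modes so that only novel situations are queued for human annotation. All names here (FleetEvent, FleetAggregator, the deduplication threshold) are hypothetical illustrations, not Collaborative Robotics' or Claru's actual pipeline:

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass
class FleetEvent:
    """A standardized record a deployed robot emits on hitting a novel situation."""
    robot_id: str
    site_id: str
    event_type: str     # e.g. "navigation_failure", "unrecognized_object"
    model_version: str  # which policy/perception models were running
    episode_ref: str    # pointer to the raw sensor log in object storage
    summary: dict       # small, JSON-serializable description of the situation


def event_fingerprint(event: FleetEvent) -> str:
    """Stable hash used to group repeated occurrences of the same failure mode."""
    key = json.dumps(
        {"type": event.event_type, "summary": event.summary}, sort_keys=True
    )
    return hashlib.sha256(key.encode()).hexdigest()[:16]


class FleetAggregator:
    """Collects events from every robot and batches novel ones for annotation."""

    def __init__(self):
        self.seen_counts: dict[str, int] = {}
        self.annotation_queue: list[FleetEvent] = []

    def ingest(self, event: FleetEvent) -> None:
        fp = event_fingerprint(event)
        self.seen_counts[fp] = self.seen_counts.get(fp, 0) + 1
        # Only the first few instances of a failure mode need human annotation;
        # later instances just update its frequency statistics.
        if self.seen_counts[fp] <= 3:
            self.annotation_queue.append(event)
```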

Key Research & References

[1] Brohan et al. “RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control.” CoRL 2023.
[2] Open X-Embodiment Collaboration. “Open X-Embodiment: Robotic Learning Datasets and RT-X Models.” ICRA 2024.
[3] Kim et al. “OpenVLA: An Open-Source Vision-Language-Action Model.” arXiv:2406.09246, 2024.

Frequently Asked Questions

What training data does Collaborative Robotics need?

Data that jointly captures navigation and manipulation across the human spaces the robot serves: hospitals, hotels, offices, and warehouses, each with its own obstacle types, floor plans, and human interaction patterns.

Why is real-world data necessary instead of simulation?

Simulation cannot faithfully model the contact dynamics, material properties, and environmental conditions that Collaborative Robotics' robots encounter in deployment. Real-world data provides the distributional coverage that fills simulation gaps.

Can Claru collect data in Collaborative Robotics' target environments?

Yes. Claru operates a global network of 10,000+ data collectors across 100+ cities who can capture teleoperated demonstrations, egocentric video, and sensor data in target environments using standardized recording protocols.

Accelerate Collaborative Robotics' Data Pipeline

Talk to our team about purpose-built datasets for Collaborative Robotics' robotic systems.