Training Data for Flexiv

Flexiv is building advanced robotic systems. Here is how real-world data accelerates their path from development to production deployment.

About Flexiv

Flexiv develops adaptive robots with force-controlled, AI-powered manipulation capabilities. Founded in 2016 by Shiquan Wang (Stanford PhD), the company builds robots that combine high-precision force control with AI-based task learning. Their Rizon series robots feature built-in force/torque sensing and adaptive compliance, targeting manufacturing, healthcare, and research applications. Flexiv has raised over $400 million.

Force-controlled adaptive manipulation
AI-based task learning and generalization
Contact-rich assembly automation
Compliant manipulation for delicate objects
Multi-modal sensory fusion for manipulation

Flexiv at a Glance

Founded: 2016
Stage: Funded
Deployment: Global
Approach: AI-First

Known Data Requirements

Flexiv's force-controlled robots excel at contact-rich manipulation tasks like polishing, assembly, and insertion that require precise force modulation. Their AI task learning system needs training data that captures both visual observations and force/torque signals during diverse manipulation tasks — data that is extremely scarce in existing public datasets.

Diverse manipulation demonstrations

Source: Flexiv product deployments and research publications

Multi-modal recordings of manipulation tasks across diverse objects, environments, and conditions relevant to Flexiv's deployment contexts.

Real-world environment recordings

Source: Flexiv deployment requirements

Visual and geometric recordings of target deployment environments capturing the specific layouts, lighting, and conditions Flexiv's robots encounter.

Perception pretraining data

Source: Flexiv AI architecture requirements

Diverse egocentric and multi-view video for pretraining visual representations that ground Flexiv's AI in real-world physical understanding.

How Claru Data Addresses These Needs

Lab need: Diverse manipulation demonstrations
Claru offering: Manipulation Trajectory Dataset + Custom Collection
Rationale: Claru captures multi-modal manipulation recordings with dense annotations across diverse environments, matching the diversity Flexiv needs for robust policy training.

Lab need: Real-world environment recordings
Claru offering: Custom Environmental Recording Campaigns
Rationale: Claru coordinates multi-sensor recordings across partner facilities in Flexiv's target deployment environments, capturing authentic visual distributions.

Lab need: Perception pretraining data
Claru offering: Egocentric Activity Dataset (386K+ clips)
Rationale: Purpose-collected first-person video of human activities provides visual pretraining data that grounds Flexiv's AI in real physical interactions.

Technical Data Analysis

Flexiv occupies a unique position in robotics: they build hardware with industry-leading force control (0.1 N force resolution, 1 kHz control loop) and then layer AI on top to make that hardware adaptable to new tasks. This combination addresses the fundamental limitation of traditional industrial robots — they are precise but rigid, unable to adapt to variation in parts, fixtures, or environments.
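
As a hedged illustration of that layered design, the sketch below runs a toy 1 kHz impedance loop underneath a slower target-setting layer. The single-joint plant, gains, and policy rate are assumptions for the example, not Flexiv's control law or API.

```python
# Toy two-layer controller: a fast impedance loop tracks a target that a
# slower high-level layer updates. Plant, gains, and rates are assumptions.

CONTROL_HZ, POLICY_HZ = 1000, 20   # 1 kHz inner loop; policy rate assumed


class OneJointPlant:
    """1-DOF stand-in for the arm: inertia plus viscous friction."""

    def __init__(self, inertia=0.1, friction=0.05):
        self.q, self.dq = 0.0, 0.0
        self.inertia, self.friction = inertia, friction

    def step(self, tau, dt):
        ddq = (tau - self.friction * self.dq) / self.inertia
        self.dq += ddq * dt
        self.q += self.dq * dt


def impedance_torque(q, dq, q_des, k=40.0, d=4.0):
    """Impedance law tau = k*(q_des - q) - d*dq: compliant, not rigid."""
    return k * (q_des - q) - d * dq


plant, q_des = OneJointPlant(), 0.0
for step in range(2 * CONTROL_HZ):                  # 2 s of simulated time
    if step % (CONTROL_HZ // POLICY_HZ) == 0:
        q_des = 1.0          # placeholder: the AI layer would set this
    plant.step(impedance_torque(plant.q, plant.dq, q_des), 1.0 / CONTROL_HZ)
print(f"settled at {plant.q:.3f} rad")
```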

The force-controlled approach creates a specific data requirement that most robotics datasets do not address. Flexiv's AI needs training data that pairs visual observations with synchronized force/torque signals — what does the robot see AND feel during a polishing operation, an insertion task, or a deformable object manipulation? This multi-modal data (vision + force) is extremely scarce in public datasets, which overwhelmingly focus on vision-only data.
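
To make that data requirement concrete, here is a minimal sketch of what one synchronized vision-plus-force training sample could look like, with a helper that aligns high-rate force/torque readings to camera timestamps. Field names, shapes, and rates are illustrative assumptions, not a published Flexiv or Claru schema.

```python
from dataclasses import dataclass

import numpy as np

# Hypothetical schema for one synchronized multi-modal training sample.
# Shapes, rates, and field names are illustrative assumptions.


@dataclass
class ManipulationSample:
    timestamp_ns: int        # single shared clock across all modalities
    rgb: np.ndarray          # (H, W, 3) uint8 camera frame
    wrench: np.ndarray       # (6,) wrist force/torque [N, N*m]
    joint_pos: np.ndarray    # (7,) joint angles [rad]
    action: np.ndarray       # (7,) commanded joint targets [rad]


def align_wrench_to_frames(wrench_ts, wrench_vals, frame_ts):
    """Interpolate ~1 kHz force/torque onto ~30 Hz camera timestamps so
    every frame carries the wrench the robot felt at that instant."""
    return np.stack(
        [np.interp(frame_ts, wrench_ts, wrench_vals[:, a]) for a in range(6)],
        axis=-1,
    )
```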

Contact-rich assembly tasks like peg-in-hole insertion, snap-fit assembly, and screw driving depend on force feedback to detect contact states, adjust insertion angle, and apply appropriate force. The physics of these contact interactions vary across materials (metal-on-metal vs plastic-on-plastic), tolerances (tight vs loose fit), and geometric configurations. Real-world force data from diverse assembly tasks provides the training signal for force-adaptive policies.
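
The sketch below illustrates the kind of force-feedback logic this paragraph describes: classifying the contact state from the measured wrench and nudging the tool compliantly when it jams. The thresholds, state names, and gain are invented for illustration; production insertion policies are learned or tuned per material and tolerance.

```python
import numpy as np

# Illustrative contact-state logic for insertion. Thresholds and the
# admittance gain are invented; real policies are learned or tuned per task.

CONTACT_N = 2.0        # axial force that signals contact (assumed)
JAM_LATERAL_N = 5.0    # lateral force that signals jamming (assumed)


def contact_state(wrench):
    fx, fy, fz = wrench[:3]          # tool-frame forces [N]
    if abs(fz) < CONTACT_N:
        return "free_space"          # still approaching: keep descending
    if np.hypot(fx, fy) > JAM_LATERAL_N:
        return "jammed"              # wedged against the hole edge
    return "inserting"               # aligned: maintain insertion force


def admittance_step(wrench, gain_m_per_n=1e-4):
    """Move *with* the lateral force to relieve contact (admittance
    behavior), one small Cartesian step per control tick."""
    fx, fy = wrench[0], wrench[1]
    return np.array([gain_m_per_n * fx, gain_m_per_n * fy, 0.0])


print(contact_state(np.array([3.0, -4.5, -6.0, 0.0, 0.0, 0.0])))
# lateral force ~5.4 N exceeds the threshold -> "jammed"
```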

Flexiv's compliant manipulation for delicate objects — medical devices, food items, electronics components — requires understanding how much force different objects can withstand. A force policy trained only on rigid objects will damage a ripe tomato or a flexible circuit board. Training data must include diverse object compliance characteristics captured through real force-controlled manipulation, not just visual observation.
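
A toy example of why compliance must appear in the training data: a grip-force schedule that ramps force on slip but caps it at an object-specific limit. The object classes and limit values are invented; the point is that a policy never exposed to compliant objects has no basis for learning such limits.

```python
# Toy grip-force schedule with object-specific limits. Objects and limits
# are invented for illustration.

FORCE_LIMITS_N = {
    "metal_bracket": 40.0,   # rigid: tolerates a firm grip
    "circuit_board": 8.0,    # flexes, then cracks
    "ripe_tomato": 2.5,      # bruises almost immediately
}


def next_grip_force(object_class, slip_detected, current_force_n, step_n=0.5):
    """Ramp grip force while the object slips, but never past its limit.
    A policy trained only on rigid objects effectively has no limit."""
    limit = FORCE_LIMITS_N[object_class]
    if slip_detected:
        return min(current_force_n + step_n, limit)
    return current_force_n


print(next_grip_force("ripe_tomato", slip_detected=True, current_force_n=2.4))
# -> 2.5 (capped), where a rigid-object policy would keep squeezing
```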

Frequently Asked Questions

What training data does Flexiv need?

Flexiv's force-controlled robots excel at contact-rich manipulation tasks like polishing, assembly, and insertion that require precise force modulation. Their AI task learning system needs training data that captures both visual observations and force/torque signals during diverse manipulation tasks — data that is extremely scarce in existing public datasets.

Why can't simulation replace real-world data?

Simulation cannot faithfully model the contact dynamics, material properties, and environmental conditions that Flexiv's robots encounter in deployment. Real-world recordings provide the distributional coverage that closes these sim-to-real gaps.

Can Claru collect data in Flexiv's target environments?

Yes. Claru operates a global network of 10,000+ data collectors across 100+ cities who can capture teleoperated demonstrations, egocentric video, and sensor data in target environments using standardized recording protocols.

Accelerate Flexiv's Data Pipeline

Talk to our team about purpose-built datasets for Flexiv's robotic systems.