System Online · 30K+ Hours Indexed

Ground Truth for Embodied Intelligence

The only end-to-end data engine for Sim-to-Real transfer. From teleoperation capture to production-ready action-state pairs.

30K+ Hours
193 Scenarios
5 Modalities
20 Hz Sync Rate
Live Preview
NVIDIA Isaac · FREQ: 100HZ · ROS 2 · DEPTH: ACTIVE STEREO · PyTorch · FORMAT: HDF5 · Hugging Face · SIM: ISAAC GYM · LeRobot · CAMERAS: 3+ · Open X-Embodiment · LATENCY: <20MS

The Platform

Data, pipeline, and services — unified.

Data Corpus

The Corpus

30,000+ hours of expert-annotated multi-modal manipulation data. RGB-D, force/torque, proprioception — every frame production-ready.

193 Scenarios · 5 Modalities · HDF5 + LeRobot
Pipeline

Processing Pipeline

Raw video in, action tokens out. Our auto-labeling pipeline transforms teleoperation recordings into training-ready datasets.

01
Raw Video
02
Segmentation
03
Action Labels
04
Action Tokens
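The four stages above can be sketched as composable functions. This is a toy illustration, not the actual SignIQ Lab pipeline: the function names are hypothetical, and a gripper-state flip stands in for real video segmentation.

```python
# Minimal sketch of the four pipeline stages as pure functions.
# All names (segment, label_actions, tokenize_actions) are illustrative.

def segment(frames):
    """Stage 02: split a raw frame stream into contiguous sub-sequences
    wherever the gripper state flips (a simple proxy for action boundaries)."""
    segments, current = [], [frames[0]]
    for prev, frame in zip(frames, frames[1:]):
        if frame["gripper"] != prev["gripper"]:
            segments.append(current)
            current = []
        current.append(frame)
    segments.append(current)
    return segments

def label_actions(segments):
    """Stage 03: attach a coarse action label to each segment."""
    return [{"label": "grasp" if seg[-1]["gripper"] else "release",
             "frames": seg} for seg in segments]

def tokenize_actions(labeled):
    """Stage 04: map labels to integer action tokens."""
    vocab = {"grasp": 0, "release": 1}
    return [vocab[item["label"]] for item in labeled]

frames = [{"gripper": g} for g in (0, 0, 1, 1, 1, 0)]  # Stage 01: raw stream
tokens = tokenize_actions(label_actions(segment(frames)))
```

Each stage consumes the previous stage's output, so an episode flows from raw frames to a token sequence without intermediate manual steps.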
Services

Teleop Services

Human-in-the-loop dexterity. Our expert operators capture complex manipulation scenarios with sub-20ms teleoperation latency and haptic feedback.

Latency: <20ms
Haptic: Enabled
Operators: Expert
Arms: Dual

Core Technology

End-to-End Data Collection Platform

A crowdsourced pipeline connecting researchers with expert operators and calibrated hardware. Define your task, and we handle the rest.

01

Create Task

Researchers define collection requirements, scenarios, and quality criteria.

02

Match Contractor

System assigns qualified operators based on expertise and equipment.

03

Connect Hardware

Contractor connects to calibrated robot arms, cameras, and sensors.

04

Collect Data

Expert teleoperation capture with real-time monitoring and feedback.

05

Auto-Verify

Automated QA checks for trajectory continuity, drift, and format compliance.

06

Review & Deliver

The task creator reviews quality and provides feedback; data is delivered in HDF5/LeRobot.

sample_task.json
{
  "task_type": "bimanual_manipulation",
  "scenarios": ["kitchen_pickup", "table_sort"],
  "episodes_required": 500,
  "quality_criteria": {
    "success_rate": 0.95,
    "max_drift_ms": 5
  },
  "output_format": "hdf5+lerobot"
}
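A spec like the one above can be sanity-checked before submission. The field names follow sample_task.json, but the validation rules and the `validate_task` helper are illustrative assumptions, not the platform's API.

```python
import json

# Hedged sketch: pre-submission validation of a task spec.
# Required fields mirror sample_task.json; thresholds are illustrative.

REQUIRED = {"task_type", "scenarios", "episodes_required",
            "quality_criteria", "output_format"}

def validate_task(spec: dict) -> list:
    """Return a list of problems; an empty list means the spec looks submittable."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - spec.keys())]
    qc = spec.get("quality_criteria", {})
    if not 0.0 < qc.get("success_rate", 0.0) <= 1.0:
        problems.append("success_rate must be in (0, 1]")
    if qc.get("max_drift_ms", float("inf")) > 5:
        problems.append("max_drift_ms exceeds the 5 ms delivery guarantee")
    return problems

spec = json.loads("""{
  "task_type": "bimanual_manipulation",
  "scenarios": ["kitchen_pickup", "table_sort"],
  "episodes_required": 500,
  "quality_criteria": {"success_rate": 0.95, "max_drift_ms": 5},
  "output_format": "hdf5+lerobot"
}""")
problems = validate_task(spec)
```

Catching a malformed spec locally avoids a failed round trip through contractor matching.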
Launch Platform · platform.signiq-lab.ai

Data Explorer

See the stream, the trajectory, the intent.

Toggle across modalities to inspect synchronized views before you train.

episode_0923_164719_cameras.mp4
metadata.json
{
  "task_id": "manipulation_04",
  "robot": "ALOHA_dual_arm",
  "cameras": ["left", "right", "top"],
  "fps": 20,
  "format": "HDF5",
  "modalities": ["rgb", "depth", "force_torque", "proprioception"],
  "episodes": 1247,
  "success_rate": 0.94
}

Training-Ready Guarantees

Robot-native data. No retargeting required.

Every dataset passes automated QA before delivery. What ships is what your policy sees — synchronized, labeled, and formatted for immediate training.

<5ms
Sync Drift

Cross-modal synchronization verified per-frame. RGB, depth, force/torque, and proprioception aligned within 5ms tolerance.
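A per-frame drift check of this kind can be sketched in a few lines. The stream names match the listed modalities, but the timestamps and the `max_sync_drift_ms` helper are illustrative assumptions, not the shipped QA code.

```python
# Illustrative per-frame sync check: given per-modality timestamp streams,
# verify that every frame's cross-modal spread stays within the tolerance.

def max_sync_drift_ms(streams: dict) -> float:
    """streams maps modality name -> per-frame timestamps in milliseconds.
    Returns the worst cross-modal spread over all frame indices."""
    spreads = []
    for stamps in zip(*streams.values()):  # one tuple of timestamps per frame
        spreads.append(max(stamps) - min(stamps))
    return max(spreads)

streams = {
    "rgb":            [0.0, 50.0, 100.0],
    "depth":          [1.2, 51.0, 101.4],
    "force_torque":   [0.4, 50.2, 100.9],
    "proprioception": [0.8, 49.6, 100.1],
}
drift = max_sync_drift_ms(streams)  # worst-case spread across all frames
```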

100%
Episode Labels

Every episode tagged success or failure with failure-mode annotation. Train on clean demonstrations, test against edge cases.

Zero
Retargeting

Data captured directly from robot hardware — not human video requiring kinematic retargeting. Action-state pairs are native.

5 min
To First Train

Native LeRobot v1.2 and HDF5 format. Load directly into your training loop — no conversion scripts, no schema mapping.

Automated QA Pipeline

Every episode passes through 12 automated checks before entering the dataset. Rejected episodes are re-collected, not patched.

Trajectory continuity validated — no teleportation artifacts
Joint position limits enforced per-robot URDF spec
Camera intrinsics and extrinsics calibrated per-session
Gripper state binary-labeled at action boundaries
Duplicate and corrupt frames automatically rejected
End-effector 6D pose (XYZ + RPY) computed and verified
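Two of these checks, trajectory continuity and joint limits, can be sketched as follows. The step threshold and joint ranges are placeholder values, not per-robot URDF figures.

```python
# Hedged sketch of two QA checks; thresholds and limits are illustrative.

def check_continuity(traj, max_step=0.1) -> bool:
    """Reject 'teleportation': no joint may jump more than max_step rad
    between consecutive frames."""
    for prev, curr in zip(traj, traj[1:]):
        if any(abs(c - p) > max_step for p, c in zip(prev, curr)):
            return False
    return True

def check_joint_limits(traj, lower, upper) -> bool:
    """Every frame must respect the per-robot joint limits."""
    return all(lo <= q <= hi
               for frame in traj
               for q, lo, hi in zip(frame, lower, upper))

# A short two-joint trajectory with small, in-range steps passes both checks.
traj = [[0.00, 0.50], [0.02, 0.48], [0.05, 0.45]]
ok = check_continuity(traj) and check_joint_limits(traj, [-3.14, -3.14], [3.14, 3.14])
```

Episodes failing any check are rejected outright, matching the re-collect-not-patch policy above.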

Featured Datasets

Ready-to-train packages.

View all datasets
Bimanual · Home Tasks · ALOHA

ALOHA Home Tasks

Bimanual manipulation in home environments — object sorting, kitchen tasks, and tabletop coordination with dual ALOHA arms.

View details
Humanoid · Dual Arm · Dexterous

Humanoid Dual-Arm Manipulation

Humanoid robot performing complex dual-arm coordination, object handling, and dexterous manipulation scenarios.

View details

Strategic partners

Deploying with the world's leading teams.

Collaborating with industry leaders to define the future of embodied AI.

UBTECH
Walker S

Walker S Series

Industrial-grade dexterity powered by SignIQ Lab's manipulation dataset. Deployed in major EV manufacturing lines.

X-HUMANOID
Tiangong

Tiangong (天工)

The world's first full-sized electric running humanoid. Training on SignIQ Lab's embodied intelligence platform.

Build with the world's most comprehensive robotic dataset.

Talk to our team to explore off-the-shelf packages or design a custom capture run in our instrumented facilities.