Northwestern HAND · NSF Engineering Research Center

A Haptic Generative Model for Robot Manipulation

A multi-institution initiative to collect the haptic data missing from robot learning — and release a shared generative model of what manipulation should feel like.

A shared resource · open to the robotics community
Contact Ed Colgate →

Proposed collaborating institutions

Northwestern University
Corporate partner
Massachusetts Institute of Technology
Carnegie Mellon University
Texas A&M University
Stanford University
UC Berkeley
Georgia Institute of Technology
Fluid Reality · Hardware partner
Hugging Face · Hardware & data partner

The problem · Robot learning has a touch-data problem

Robot manipulation policies are trained on teleoperation data. But the teleoperators producing that data cannot feel what the robot feels. The result: demonstrations with wrong force profiles, operators working at a fraction of normal speed, and models that fail at the contact-rich tasks that matter — insertion, threading, snap-fit assembly, in-hand manipulation.

Robot-side sensing is advancing rapidly. High-resolution tactile sensors now deliver thousands of pixels per fingertip. But sensing without feedback is a camera without a display: the information never reaches the human whose demonstrations shape the policy. And there is no shared dataset that pairs robot encoder and torque readings with ground-truth haptic expectation — the field has no ImageNet for touch.

As Ed Colgate put it in his 2026 IEEE Haptics Symposium keynote, dexterity is a negotiation between the operator, the robot, and the environment. Today's teleoperation stacks break that negotiation: hardware-in-the-middle introduces friction, hysteresis, and latency; the back-channel from environment to operator is too weak to convey what is actually happening at contact.

The vision · A shared dataset. A generative model. An open benchmark.

We propose a large-scale, multi-institution effort to capture what the robot should be feeling alongside encoder and torque readings across a standardized suite of touch-heavy manipulation tasks — and to release the resulting data, model, and benchmarks as a shared resource for the field.

Concretely: build capture rigs that minimize hardware-in-the-middle (in the spirit of UMI and MIT Dexop), deploy them across partner labs, run operators through a common benchmark battery, and record kinematics, torques, and ground-truth tactile state simultaneously. Train a generative model that predicts expected tactile state from kinematics — usable as a training signal, a reward shaper, or a source of synthetic haptic supervision for future policies.
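The mapping the generative model would learn — from a window of kinematics to expected tactile state — can be sketched with a deliberately minimal baseline. Everything below is illustrative: the dimensions (`N_JOINTS`, `TAXELS`, `WINDOW`), the synthetic data, and the least-squares fit are assumptions, not the initiative's actual architecture.

```python
import numpy as np

# Hypothetical dimensions, for illustration only.
N_JOINTS = 7          # encoder + torque channels per joint
TAXELS = 16 * 16      # tactile "pixels" per fingertip
WINDOW = 10           # kinematic history fed to the model

rng = np.random.default_rng(0)

# Toy dataset: flattened (position, torque) windows -> tactile frame at window end.
X = rng.normal(size=(500, WINDOW * 2 * N_JOINTS))
W_true = 0.1 * rng.normal(size=(X.shape[1], TAXELS))
Y = X @ W_true + 0.01 * rng.normal(size=(500, TAXELS))

# Least-squares fit: the simplest possible "motion -> expected feel" map.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = X @ W

# The residual between expected and measured touch is one candidate training
# signal: a large mismatch flags a demonstration with a wrong force profile.
residual = np.mean((pred - Y) ** 2)
```

A real model would replace the linear map with a sequence model and condition on task context, but the interface — kinematics in, expected tactile state out, residual as a supervision or reward-shaping signal — is the same.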

In practice · Touch-heavy manipulation, captured end-to-end

Early demonstrations from a prototype dual-arm teleoperation system show operators performing contact-rich manipulation with real-time fingertip haptic feedback — the kind of episodes the dataset would capture at scale.

Prototype teleoperation clips. Final dataset would span dozens of tasks across multiple labs.

Program · Five pillars of the initiative

1

A standardized benchmark

Touch-heavy manipulation tasks building on NIST benchmarks, ManipulationNET, Rice peg-in-hole, and inspection-to-100%-coverage protocols — tasks that don't just benefit from touch, but require it.

2

Hardware-in-the-middle-free capture

Rigs inspired by UMI and Dexop that avoid the hysteresis and friction of traditional teleop stacks — so recorded kinematics reflect real intent, not transmission artifacts.

3

Ground-truth haptic capture

Every episode pairs encoder and torque readings with high-resolution tactile state, so models can learn the mapping from motion to expected feel.
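A synchronized episode sample might look like the record below. The field names, units, and taxel counts are assumptions for illustration — not the initiative's actual data schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HapticFrame:
    """One synchronized sample in a capture episode (schema is illustrative)."""
    t: float                      # seconds since episode start
    joint_pos: List[float]        # encoder readings, one per joint (rad)
    joint_torque: List[float]     # measured joint torques (N*m)
    tactile: List[List[float]]    # per-fingertip tactile images, flattened

# Example: a 7-joint arm with five 16x16 tactile fingertips.
frame = HapticFrame(
    t=0.001,
    joint_pos=[0.0] * 7,
    joint_torque=[0.0] * 7,
    tactile=[[0.0] * 256 for _ in range(5)],
)
```

The point of pairing all three streams in one timestamped record is that the motion-to-feel mapping can be learned by supervised regression, with no post-hoc alignment step.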

4

Open release

Dataset, model weights, and benchmark suite released to the community. The ImageNet moment for touch — owned by no single lab.

5

Cross-lab reproducibility

Common rigs, common tasks, and common evaluation deployed across partner institutions so results transfer and cumulative progress compounds.

Stakeholders · Who is involved — and what each gets

University labs

Shared capture infrastructure, data access, co-authorship on dataset and model releases, ownership of specific benchmark tasks, reproducible evaluation.

Corporate partners

Early access to dataset and model, influence over benchmark task selection, integration support, research pipeline into the haptic data community.

Students & researchers

Training on open infrastructure, reproducible baselines, a standardized benchmark to publish against, and a new subfield to contribute to.

Hardware contributors

Reference deployments of their sensors, actuators, and capture rigs; co-development of the rig hardware standard that the dataset runs on.

Hardware · The capture rig

The capture rig combines high-resolution robot-side tactile sensing with operator-side haptic feedback — so the operator feels what the robot feels in real time, and every demonstration is annotated with ground-truth haptic state. Fluid Reality's 6 mm electroosmotic actuators contribute the operator feedback layer: 128 pressure levels per actuator, <15 ms response, and a 1 kHz update rate that renders both contact and vibration in a single silent micro-hydraulic element. Small enough to fit inside a glove fingertip; no compressors, no pneumatics. Hugging Face contributes the data infrastructure layer: dataset hosting, versioning, and distribution on the Hub, plus release of the model weights alongside the dataset.

128
Pressure levels per actuator (7-bit)
< 15 ms
Pressure response (100 µm displacement)
1 kHz
Update rate — contact & vibration in one actuator
6 mm
Actuator diameter · solid-state micro-hydraulic
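The feedback loop implied by these numbers is simple: at each 1 kHz tick, a continuous pressure command is quantized to one of the actuator's 128 (7-bit) levels. The function below is a sketch under that assumption; the names, normalization range, and interface are ours, not Fluid Reality's API.

```python
LEVELS = 128        # 7-bit pressure resolution
TICK_S = 0.001      # 1 kHz update rate

def to_level(pressure: float) -> int:
    """Quantize a normalized pressure command in [0, 1] to a 7-bit level."""
    p = min(max(pressure, 0.0), 1.0)   # clamp out-of-range commands
    return min(int(p * LEVELS), LEVELS - 1)
```

At 1 kHz and <15 ms response, roughly 15 ticks fit inside one actuator settling time — enough headroom to render vibration as well as sustained contact through the same element.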

What we're looking for · Partners to stand this up

University partners

  • Lab capacity to host capture rigs and run operator sessions
  • Ownership of one or more benchmark tasks end-to-end
  • Graduate student involvement in data collection and model training
  • Commitment to shared data-release and co-authorship norms
  • Cross-institutional coordination on evaluation protocols

Corporate partners

  • Program funding ($TBD — scaled to number of labs and task coverage)
  • Real-world task curation: which manipulation problems matter in deployment
  • Compute resources for model training and ablations
  • Optional: deployment testbeds for validating downstream policy transfer
  • Seats on the steering committee shaping benchmark direction

Let's build the ImageNet moment for touch.

If your lab or organization wants to contribute to — or help fund — a shared haptic data initiative for robot manipulation, Ed would love to talk.

ed@northwestern.edu