Seven principles that guide how we build safety infrastructure for humanoid robots. Not a mission statement. A thesis.
We build infrastructure, not insurance products. We generate the behavioral data that makes underwriting possible. The insurer prices the risk. The OEM builds the robot. We provide the bridge between them — the measurement layer that turns "we think it's safe" into "we can prove it's safe." Our actuarial data specification defines exactly what data we surface for underwriting.
Physical safety is a solved problem. Collision avoidance, force limiting, geofencing — the hardware layer has decades of engineering behind it. What isn't solved: what decisions does the robot make? When does it deviate from expected behavior? Can you prove, after the fact, that it was operating within policy? That's the gap. Hardware breaks predictably. Behavior doesn't.
Every decision a robot makes must be cryptographically signed, hash-chained, and tamper-evident. Not because regulation requires it — though it will. Because trust requires it. When a humanoid robot injures someone and the case goes to court, the audit trail is either cryptographically provable or it's just a log file. Log files are not evidence. Signed hash chains are.
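A minimal sketch of the mechanism, not the production design: it chains each record to the previous one with SHA-256 and signs with an HMAC key for brevity (a real deployment would use asymmetric signatures such as Ed25519 so verifiers never hold the signing key). All names are illustrative:

```python
import hashlib
import hmac
import json

GENESIS = b"\x00" * 32  # anchor hash for the first record in the chain

def append_record(chain, record, key):
    """Append a decision record, chained to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS.hex()
    payload = json.dumps({"record": record, "prev": prev_hash},
                         sort_keys=True).encode()
    entry = {
        "record": record,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload).hexdigest(),
        "sig": hmac.new(key, payload, hashlib.sha256).hexdigest(),
    }
    chain.append(entry)
    return entry

def verify_chain(chain, key):
    """Recompute every hash and signature; any edit breaks the chain."""
    prev_hash = GENESIS.hex()
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": entry["prev"]},
                             sort_keys=True).encode()
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        expected_sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(entry["sig"], expected_sig):
            return False
        prev_hash = entry["hash"]
    return True

key = b"demo-key"
chain = []
append_record(chain, {"action": "grip", "decision": "deny"}, key)
append_record(chain, {"action": "move", "decision": "allow"}, key)
assert verify_chain(chain, key)

chain[0]["record"]["decision"] = "allow"  # rewrite history...
assert not verify_chain(chain, key)       # ...and verification fails
```

The point of the chain is the last two lines: altering any past record, even one field, invalidates every signature check from that point on.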
If any rule says no, the answer is no. If no rule matches, the answer is no. Safety is not a negotiation. There is no "override" for a deny rule. There is no "maybe." The policy engine is deterministic: same input, same output, every time. This is not a design choice — it is a formal property. Monotonic safety: tightening a policy can never reduce safety. See how we implement these principles in practice.
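Those semantics — deny wins, no match means deny, same input always gives the same output — can be sketched in a few lines. This is an illustration of the evaluation order, not the real engine; all names are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Rule:
    effect: str                          # "allow" or "deny"
    action: str                          # action name to match; "*" matches any
    max_force_n: Optional[float] = None  # optional contact-force bound, newtons

def evaluate(rules, action, force_n):
    """Deny-by-default, deny-wins, deterministic evaluation."""
    # Pass 1: any matching deny rule is final -- there is no override.
    for r in rules:
        if r.effect == "deny" and r.action in ("*", action):
            return "deny"
    # Pass 2: an allow must explicitly match, within any stated force bound.
    for r in rules:
        if (r.effect == "allow" and r.action in ("*", action)
                and (r.max_force_n is None or force_n <= r.max_force_n)):
            return "allow"
    # No rule matched: the answer is no.
    return "deny"

rules = [
    Rule("allow", "grip", max_force_n=50.0),
    Rule("deny", "weld"),
]
assert evaluate(rules, "grip", 30.0) == "allow"
assert evaluate(rules, "grip", 62.0) == "deny"    # exceeds the force bound
assert evaluate(rules, "weld", 10.0) == "deny"    # explicit deny, no override
assert evaluate(rules, "sprint", 0.0) == "deny"   # no match: default deny
```

Monotonic safety falls out of the structure: adding a deny rule or lowering a bound can only shrink the set of allowed actions, never grow it.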
No physical action without simulation validation. The digital twin is not optional — it is the difference between "probably safe" and "provably safe." Before a robot arm moves, MuJoCo simulates the motion. Before a gripper closes, the contact forces are validated. If the simulation shows a 62 N contact against a 50 N limit, the action is denied before it touches hardware. Twelve milliseconds of simulation prevents twelve months of litigation.
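The gate itself is simple to state: predict, compare, deny-or-allow. In this sketch the simulator is a stand-in function, not MuJoCo's actual API; the toy force model and every name here are assumptions for illustration:

```python
def simulate_contact_force(action):
    """Stand-in for a physics rollout (e.g. a MuJoCo step loop):
    returns the predicted peak contact force for the action, in newtons."""
    # Hypothetical toy model: force scales with commanded gripper closure.
    return action["closure"] * 100.0

def gate_action(action, force_limit_n, simulate=simulate_contact_force):
    """Validate a proposed action in simulation before it reaches hardware."""
    predicted_n = simulate(action)
    decision = "deny" if predicted_n > force_limit_n else "allow"
    return {"decision": decision, "predicted_n": predicted_n}

# The scenario from the text: a predicted ~62 N contact on a 50 N limit.
result = gate_action({"closure": 0.62}, force_limit_n=50.0)
assert result["decision"] == "deny"
```

The denied action never touches the controller; only the gate's verdict (and the predicted force that justified it) goes into the audit trail.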
The policy format is open. The adapters are open. The enforcement engine is open. If engineers write safety policies in our YAML format and integrate our ROS 2 adapter, we own the ecosystem — not because we lock them in, but because we earned the standard. What's closed: the certification intelligence, the compliance mappings, the insurance data, the fleet analytics. Standards should be public. Business models should be defensible.
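The format itself isn't shown in this piece, so the snippet below is purely illustrative — a sketch of what a declarative safety policy in YAML might look like, with every field name a hypothetical, not the real schema:

```yaml
# Hypothetical policy sketch; field names are illustrative, not the real schema.
policy: warehouse-gripper-v1
default: deny                  # no matching rule means deny
rules:
  - action: grip
    effect: allow
    max_contact_force_n: 50    # validated in simulation before execution
  - action: navigate
    effect: deny
    zone: human_walkway        # deny is final; no override
```

Whatever the real schema looks like, the openness claim is about exactly this layer: anyone can read, write, and audit the rules without a license.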
Not regulation. Not technology. Insurance. The EU AI Act sets the floor. Technology sets the ceiling. But insurance determines who actually deploys. A robot that can't be underwritten can't operate in a warehouse, a hospital, or a home. The companies that make their robots insurable first will define the market. Everyone else will follow — or they won't ship. We're building the infrastructure that makes the first group possible.
The company that defines how robot behavior is measured, enforced, and insured becomes the standard. Standards don't get displaced.