The math that makes the difference between fleets that drift and fleets that hold
Multi-agent AI systems fail in ways that are invisible until everything breaks. Here's the math that proves it won't.
A boat navigating a rock passage using standard floating-point GPS makes micro-adjustments every few seconds. It overcorrects. It overshoots. It burns fuel fighting itself. After a hundred corrections the heading is garbage — and the system reports everything is fine because each individual correction was "close enough."
[Text diagram: boat in a rock passage]
Floating point says "close enough." Constraint theory says "here." The difference is provable. Here's the actual code:
// Floating point: accumulates error
let mut trust = 0.0_f64;
for _ in 0..100 { trust += 0.1; }
// Result: 9.99999999999998 — close to 10.0, never exactly 10.0
// The boat is now in the wrong rock field
// Constraint boundary: zero drift after any number of hops
let mut trust = Direction::from_u8(6); // 48-direction integer encoding
for _ in 0..100 { trust = trust.compose(Direction::from_u8(6)); }
// Result: exactly Direction::from_u8(6), always
// The boat is exactly where it started, every time
The 48-direction encoding used here (Pythagorean48) gives log₂ 48 ≈ 5.585 bits per trust vector. Deterministic. No rounding. The group theory guarantees zero drift regardless of how many times you compose.
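A minimal sketch of what a 48-direction integer encoding can look like. The real Pythagorean48 internals and the semantics of its `compose` are not shown in this article, so this models the simplest structure with the stated group property: the cyclic group Z/48 under addition, where every composition is exact integer arithmetic and every step has an exact inverse.

```rust
// Sketch only: assumes compose is addition mod 48 (a true group).
// log2(48) ≈ 5.585 bits per direction; no floating point anywhere.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Direction(u8); // invariant: always in 0..48

impl Direction {
    fn from_u8(v: u8) -> Self {
        Direction(v % 48)
    }
    // Group composition: integer addition mod 48 — no rounding, ever.
    fn compose(self, other: Direction) -> Direction {
        Direction((self.0 + other.0) % 48)
    }
    // Every element has an exact inverse, so any hop is exactly undoable.
    fn inverse(self) -> Direction {
        Direction((48 - self.0) % 48)
    }
}

fn main() {
    let start = Direction::from_u8(6);
    let step = Direction::from_u8(7);
    let mut trust = start;
    // Take a hop and exactly undo it, ten thousand times:
    for _ in 0..10_000 {
        trust = trust.compose(step).compose(step.inverse());
    }
    assert_eq!(trust, start); // exactly where it started, every time
    println!("zero drift: {:?}", trust);
}
```

The float version of this round trip (`x += 0.07; x -= 0.07;` ten thousand times) is not guaranteed to return `x` to its starting bits; the integer group version is, by construction.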
On a boat, "close enough" means you're drifting toward the rocks. "Here" means you're not.
The numbers below come from the live PLATO room server. Refresh to see them update.
Vessels — Each vessel is a named agent with a fixed role. The count tells you whether the fleet is intact. If it drops, a vertex vanished from the constraint graph.
Rooms — Rooms are the fleet's working memory. Each room holds a different type of constraint: vessel identities, trust vectors, ambient briefing state. The room count shows how much the fleet has written down.
Agents — Agents are live processes that read rooms, do work, and write results. Unlike vessels (which are roles), agents can come and go. The fleet survives agent churn because the constraint graph is in the rooms, not in any single agent.
Tiles — Tiles are compressed knowledge. The fleet distills what it learns into fixed-size constraint tiles. When one agent proves something, every agent can read the tile and use it — no retraining, no fine-tuning, no hallucinated constraints.
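One way to picture a fixed-size constraint tile. The article does not specify the tile layout, so the fields and the 64-byte payload below are illustrative assumptions; the point is that the tile is a fixed-size value any agent can copy and read, with no per-agent training state.

```rust
// Sketch only: field names and the 64-byte payload are assumptions,
// not the real PLATO tile format.
#[derive(Clone, Copy)]
struct Tile {
    constraint_id: u32, // which proven constraint this tile encodes
    version: u16,       // bumped when an agent re-proves the tile
    payload: [u8; 64],  // fixed-size encoded constraint, never resized
}

impl Tile {
    fn new(constraint_id: u32, bytes: &[u8]) -> Self {
        let mut payload = [0u8; 64];
        let n = bytes.len().min(64);
        payload[..n].copy_from_slice(&bytes[..n]);
        Tile { constraint_id, version: 1, payload }
    }
}

fn main() {
    // One agent proves something and writes the tile...
    let tile = Tile::new(42, b"GUARD x in [0, 48)");
    // ...every other agent reads the same fixed-size bytes:
    // no retraining, no fine-tuning, just shared state.
    println!("tile {} v{}", tile.constraint_id, tile.version);
}
```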
Every decision the fleet makes runs through three phases. This is not a pipeline — it's a control loop with provable termination guarantees at each phase.
The deadband captain is the navigation layer. P0 maps the rocks, P1 finds safe water, P2 steers. The name comes from control theory: a deadband is the range where the system does nothing because it's already in the right place. When the fleet is rigid and safe, the captain sleeps. When it isn't, it acts precisely.
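The three phases can be sketched as a single control step. The P0/P1/P2 names come from the text above; the one-dimensional position, the rock intervals, the deadband width, and the bounded correction step are all illustrative assumptions, not the fleet's real navigation code.

```rust
// Sketch only: a 1-D deadband control loop with the P0/P1/P2 phases.
const DEADBAND: f64 = 0.5; // do nothing while |error| < DEADBAND

// P0: map the rocks — the forbidden intervals (assumed fixed here).
fn p0_map_rocks() -> Vec<(f64, f64)> {
    vec![(-10.0, -2.0), (2.0, 10.0)]
}

// P1: find safe water — a point outside every forbidden interval.
// In this toy map the safe channel is the gap around 0.0.
fn p1_safe_water(_rocks: &[(f64, f64)]) -> f64 {
    0.0
}

// P2: steer — one bounded correction toward the target, never past it.
fn p2_steer(position: f64, target: f64) -> f64 {
    let error = target - position;
    position + error.clamp(-1.0, 1.0)
}

fn control_step(position: f64) -> f64 {
    let rocks = p0_map_rocks();
    let target = p1_safe_water(&rocks);
    if (target - position).abs() < DEADBAND {
        position // inside the deadband: the captain sleeps
    } else {
        p2_steer(position, target)
    }
}

fn main() {
    let mut pos = 4.7;
    // Each step either closes at least DEADBAND of error or stops,
    // so the loop terminates in a bounded number of iterations.
    for _ in 0..16 {
        pos = control_step(pos);
    }
    assert!(pos.abs() < DEADBAND);
    println!("settled at {pos}");
}
```

The termination argument is the reason for the deadband: without it, ever-smaller corrections chase the target forever and the float error from the first snippet accumulates; with it, the loop provably stops.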
Copy the prompt, paste it into DeepSeek, Groq, or any OpenAI-compatible chat. Each one gives you something concrete to work with, whether or not you use anything from this fleet.
Ask any chatbot to turn a real-world problem into a working constraint. Works in any chat that can do structured reasoning.
Pick something in your life with at least two ways to go wrong — a workflow,
a system, a number you keep managing wrong. Write three sentences about
what "too high" and "too low" look like for it. Then write one GUARD
statement in the style of: GUARD (x > max OR x < min) IMPLIES alert.
I'll turn your bounds into a working constraint.
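A minimal sketch of what such a bounds GUARD evaluates to in code. The struct and field names are illustrative, not part of the fleet's API; the shape is just "alert when x falls outside [min, max]".

```rust
// Sketch only: a two-sided bounds guard.
struct Guard {
    min: f64,
    max: f64,
}

impl Guard {
    // GUARD (x > max OR x < min) IMPLIES alert
    fn check(&self, x: f64) -> Option<String> {
        if x < self.min || x > self.max {
            Some(format!("alert: {x} outside [{}, {}]", self.min, self.max))
        } else {
            None
        }
    }
}

fn main() {
    let fuel = Guard { min: 10.0, max: 90.0 };
    assert!(fuel.check(50.0).is_none()); // in bounds: silent
    assert!(fuel.check(95.0).is_some()); // too high: alert
    assert!(fuel.check(3.0).is_some());  // too low: alert
    println!("guard holds");
}
```

Note the OR: a value cannot be above the maximum AND below the minimum at once, so the two violations must be joined with OR for the guard to ever fire.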
Ask any chatbot to check if a group coordination problem is provably self-organizing. Works in any chat.
Describe a group of things that need to coordinate — agents, services,
people, machines. For each one, describe what it does and what it needs
from the others. Then tell me the fewest rules that would make the whole
group self-organize without any of them needing to ask permission.
I'll map those rules into a rigid graph and tell you whether it's provably
self-coordinating.
Ask any chatbot to model a recurring decision as P0/P1/P2. Shows why greedy always fails.
Give me a decision you keep facing — something with at least two ways
to go wrong. I'll model it as P0 (what NOT to do), P1 (where you CAN be),
P2 (the best path). Then I'll show you why greedy always fails and what
the deadband protocol does instead.
Ask any chatbot to flip a search problem into a constraint problem. The rocks are the snap target.
Describe a problem you keep trying to solve by searching for the right
answer. Now describe it differently: "where are all the places this
definitely WON'T work?" I'll help you flip it. The rocks are the snap
target. Everything else is safe water.
The constraint playground. Fleet topology visualization. Live at: