Durable handoffs for multi-agent pipelines
The durability comes from the runtime, not from the application code.
Multi-agent systems are sequential pipelines that look like distributed systems.
A researcher gathers findings, a writer drafts, a reviewer checks.
Each agent makes API calls — to Claude, to OpenAI, to whatever LLM is doing the work.
Each call can fail mid-flight. And when one fails, you get a choice: re-run the whole pipeline from the top and burn tokens you already paid for, or wire up checkpointing yourself.
The example-multi-agent-orchestration-ts repo shows a third option.
Three specialist agents — researcher → writer → reviewer — coordinated by a 15-line generator.
Each yield* ctx.run(agent, args) is a durable checkpoint. Crash the writer mid-draft, Resonate retries only the writer.
The researcher does not re-run. Its cached output is fed straight back into the retried call.
There is no retry configuration, no step metadata, no routing schema.
The orchestrator is sequential code that reads top-to-bottom.
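The mechanics are easy to simulate. The sketch below is not the Resonate SDK — `Context`, its memoizing `run`, and the stub agents are hypothetical stand-ins — but it shows why a retry replays the researcher's cached output instead of re-executing it. (The repo's real orchestrator is a generator driven by `yield* ctx.run`; plain async/await is used here to keep the sketch self-contained.)

```typescript
// Self-contained sketch of the checkpointing mechanism (NOT the Resonate API:
// this `run` memoizes step results the way a durable runtime replays cached
// outputs instead of re-executing completed steps).
class Context {
  private cache = new Map<string, unknown>();
  private counts = new Map<string, number>();

  // Durable checkpoint: if this step already completed, return the cached
  // result; otherwise execute it and persist the result before returning.
  async run<T>(id: string, step: () => Promise<T>): Promise<T> {
    if (this.cache.has(id)) return this.cache.get(id) as T;
    this.counts.set(id, (this.counts.get(id) ?? 0) + 1);
    const result = await step();
    this.cache.set(id, result);
    return result;
  }

  executions(id: string): number {
    return this.counts.get(id) ?? 0;
  }
}

// Hypothetical agents standing in for the LLM calls.
const researcher = async () => "findings";
const reviewer = async (draft: string) => `reviewed(${draft})`;

let writerAttempts = 0;
const writer = async (findings: string) => {
  // Crash the writer on its first attempt, mid-pipeline.
  if (++writerAttempts === 1) throw new Error("writer crashed mid-draft");
  return `draft(${findings})`;
};

// The orchestrator reads top-to-bottom, one checkpoint per agent.
async function pipeline(ctx: Context): Promise<string> {
  const findings = await ctx.run("researcher", researcher);
  const draft = await ctx.run("writer", () => writer(findings));
  return ctx.run("reviewer", () => reviewer(draft));
}

async function main() {
  const ctx = new Context();
  try {
    await pipeline(ctx); // first run: the writer throws
  } catch {
    // retry against the same context: the researcher's cached output is
    // replayed, only the writer re-runs
  }
  const result = await pipeline(ctx);
  console.log(result);                       // reviewed(draft(findings))
  console.log(ctx.executions("researcher")); // 1 — never re-ran
  console.log(ctx.executions("writer"));     // 2 — crashed once, retried once
}
main();
```

The only state that matters is the cache keyed by step id; Resonate's contribution is persisting that state durably, so a crashed process can resume from its last checkpoint instead of restarting from the top.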
The bigger story isn’t the retry — it’s everything that’s missing. No orchestration platform. No event bus, no status dashboard backed by a separate database, no per-step retry knobs.
Resonate runs in embedded mode in this example: no external services, no servers to provision, no operational surface to learn.
The same primitive that powers retries also powers human-in-the-loop: yield* ctx.promise({ id: "approval/topic" }) blocks the workflow on an external signal.
The pipeline pauses at that line, waiting for an HTTP resolve. While it waits the process can crash, restart, redeploy — the promise survives, and when it resolves the workflow picks up at the next line.
One mechanism, two patterns. No additional surface area.
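The pause-on-a-promise pattern can be sketched the same way. Nothing below is Resonate's API — `PromiseStore`, `wait`, and `publish` are hypothetical stand-ins, and the Map stands in for durable storage — but it shows the shape: the workflow blocks on a named promise, and an external resolve (an HTTP call, in the real system) supplies the value, whether the workflow is still waiting or has restarted since.

```typescript
// Self-contained sketch of a durable promise (NOT the Resonate API).
type Waiter = (value: string) => void;

class PromiseStore {
  private resolved = new Map<string, string>(); // stands in for durable storage
  private waiters = new Map<string, Waiter[]>();

  // Block until the promise with this id is resolved. If it was already
  // resolved (e.g. before a restart), return the stored value immediately —
  // this is what lets a restarted workflow pick up at the next line.
  wait(id: string): Promise<string> {
    const value = this.resolved.get(id);
    if (value !== undefined) return Promise.resolve(value);
    return new Promise((wake) => {
      const list = this.waiters.get(id) ?? [];
      list.push(wake);
      this.waiters.set(id, list);
    });
  }

  // The external signal: in Resonate this arrives as an HTTP resolve.
  resolve(id: string, value: string): void {
    this.resolved.set(id, value);
    for (const wake of this.waiters.get(id) ?? []) wake(value);
    this.waiters.delete(id);
  }
}

// A workflow that pauses on a human approval before finishing.
async function publish(store: PromiseStore, topic: string): Promise<string> {
  const approval = await store.wait(`approval/${topic}`); // pipeline pauses here
  return `published ${topic} (${approval})`;
}

const store = new PromiseStore();
const pending = publish(store, "quarterly-report");
store.resolve("approval/quarterly-report", "approved by editor");
pending.then(console.log); // published quarterly-report (approved by editor)
```

Because the resolved value lives in the store rather than in the process, re-running `publish` against the same store after a crash returns immediately with the stored approval — the same replay mechanism the retries use.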


