
Compose Multiple Agents

You finished Now Wire In A Containment Pattern with one app, one decider, and three named invariants. The shape that page introduced — proposal-then-decide, with a relay namespace and a backstop invariant — is exactly what scales to multi-agent systems. This page is the rung-7 walkthrough. You will run three agents (infra, triage, remediation) over the bundled Incident Response example and exercise the four CLI surfaces that exist for multi-agent ops:

  • jacqos scaffold --agents … — partition the ontology by agent
  • jacqos lineage fork — branch the timeline to try a different resolution without losing the original
  • jacqos contradiction list / preview / resolve — name and decide a contradiction explicitly, with provenance
  • jacqos verify --composition-report … — prove the multi-agent boundary holds across a frozen composition-analysis artifact

Roughly forty minutes. Every code block is lifted verbatim from examples/jacqos-incident-response/ so you can paste freely.

Use the --agents flag to partition the ontology by owner. Three agents — infra reads telemetry, triage derives blast radius, remediation proposes actions — get one rule file each:

Terminal window
jacqos scaffold incident-response --agents infra,triage,remediation
cd incident-response

--agents takes a comma-separated namespace list (lowercase ASCII, digits, underscores; minimum two). The scaffold writes namespace-partitioned .dh files plus shared intents.dh and a starter fixtures/ directory. For this walkthrough, use the bundled examples/jacqos-incident-response/ copy directly — it ships filled-in rules, four golden fixtures, and frozen generated/ artifacts.

Terminal window
cp -r examples/jacqos-incident-response my-incident-app
cd my-incident-app

Open ontology/schema.dh. Every relation is prefixed by the namespace that owns it — that’s the coordination contract:

relation infra.service(service_id: text)
relation infra.depends_on(service_id: text, dependency_id: text)
relation infra.degraded(service_id: text)
relation infra.healthy(service_id: text)
relation infra.is_primary_db(service_id: text)
relation infra.replica_synced(service_id: text)
relation triage.blast_radius(service_id: text, root_service: text)
relation triage.root_cause(root_service: text)
relation triage.severity(root_service: text, severity: text)
relation triage.stakeholder_notified(root_service: text)
relation proposal.remediation_action(
  decision_id: text,
  root_service: text,
  target_service: text,
  action: text,
  seq: int
)
relation remediation.plan(
  root_service: text,
  target_service: text,
  action: text,
  seq: int
)
relation remediation.scale_down(service_id: text)
relation remediation.unsafely_scaled_primary(service_id: text)

infra.* is the topology + telemetry surface. triage.* derives from infra.* and never writes back. proposal.* is the fallible-decider relay namespace from rung 6 — the LLM remediation agent’s output lands here before any executable intent can fire. remediation.* is the ratified-decision surface that intents consume. The composition checker uses these prefixes to compute namespace reducts and prove they stay safe under composition.

Open ontology/rules.dh. Triage derives blast radius recursively from infra.* topology, then exposes severity for the other agents to react to. The transitive-closure rule is the heart of the shared-reality contract — it lets every downstream agent see the same impacted set without any agent needing to message another:

rule infra.transitively_depends(service, dependency) :-
  infra.depends_on(service, dependency).
rule infra.transitively_depends(service, root) :-
  infra.depends_on(service, dependency),
  infra.transitively_depends(dependency, root).
rule triage.root_cause(root) :-
  infra.degraded(root),
  not infra.healthy(root).
rule triage.blast_radius(root, root) :-
  triage.root_cause(root).
rule triage.blast_radius(service, root) :-
  infra.transitively_depends(service, root),
  triage.root_cause(root).
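
The fixpoint semantics of those rules can be sketched in plain Python. This is an illustrative model of the Datalog evaluation, not jacqos internals; the relation names are borrowed from the schema above:

```python
# Plain-Python model of the rules above: derive triage.root_cause, compute the
# least fixpoint of infra.transitively_depends, then project triage.blast_radius.

def blast_radius(depends_on, degraded, healthy):
    # triage.root_cause(root) :- infra.degraded(root), not infra.healthy(root).
    root_cause = {s for s in degraded if s not in healthy}

    # infra.transitively_depends: iterate the recursive rule to a fixpoint.
    closure = set(depends_on)
    changed = True
    while changed:
        changed = False
        for (service, dep) in list(closure):
            for (dep2, root) in list(closure):
                if dep == dep2 and (service, root) not in closure:
                    closure.add((service, root))
                    changed = True

    # triage.blast_radius: the root itself plus every transitive dependent.
    radius = {(root, root) for root in root_cause}
    radius |= {(s, r) for (s, r) in closure if r in root_cause}
    return radius

edges = {("api-gateway", "auth"), ("auth", "db-primary")}
print(sorted(blast_radius(edges, degraded={"db-primary"}, healthy=set())))
# [('api-gateway', 'db-primary'), ('auth', 'db-primary'), ('db-primary', 'db-primary')]
```

Every downstream agent reading triage.blast_radius sees this same impacted set; no agent computes a private copy.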

The remediation agent’s output is gated through the proposal.* relay (the same pattern from rung 6, just with a richer schema):

rule assert proposal.remediation_action(decision_id, root, target, action, seq) :-
  atom(obs, "proposal.id", decision_id),
  atom(obs, "proposal.root_service", root),
  atom(obs, "proposal.target_service", target),
  atom(obs, "proposal.action", action),
  atom(obs, "proposal.seq", seq).
rule remediation.plan(root, target, action, seq) :-
  proposal.remediation_action(_, root, target, action, seq).
rule remediation.scale_down(service) :-
  remediation.plan(_, service, "scale_down", _).
rule remediation.unsafely_scaled_primary(node) :-
  remediation.scale_down(node),
  infra.is_primary_db(node),
  not infra.replica_synced(node).

remediation.unsafely_scaled_primary is the catastrophic-action relation an invariant later reduces to zero. The shape is identical to the rung-6 reservation_requires_authorization backstop, lifted to a multi-agent surface.

Open ontology/intents.dh. The communications and remediation agents both react to triage’s output. Neither calls the other:

rule intent.notify_stakeholder(root, severity) :-
  triage.root_cause(root),
  triage.severity(root, severity),
  not triage.stakeholder_notified(root).
rule intent.remediate(root, severity) :-
  triage.root_cause(root),
  triage.severity(root, severity),
  not remediation.plan(root, _, _, _).

Two independent derived intents from the same shared facts. There is no orchestration graph and no "comms done, then remediate" gate. Both fire when their bodies hold; the platform dispatches each through its declared capability in jacqos.toml.
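
The independence of the two intents can be modeled in a few lines of Python. This is a sketch of the rule bodies above, not the jacqos dispatcher:

```python
# Sketch: both intents are derived from the same shared facts; neither rule
# mentions the other, so there is no ordering between them.

def derive_intents(root_cause, severity, notified, plans):
    # intent.notify_stakeholder(root, sev) :- ... not triage.stakeholder_notified(root).
    notify = {(root, sev) for (root, sev) in severity
              if root in root_cause and root not in notified}
    # intent.remediate(root, sev) :- ... not remediation.plan(root, _, _, _).
    planned_roots = {p[0] for p in plans}
    remediate = {(root, sev) for (root, sev) in severity
                 if root in root_cause and root not in planned_roots}
    return notify, remediate

notify, remediate = derive_intents(
    root_cause={"db-primary"},
    severity={("db-primary", "critical")},
    notified=set(),
    plans=set(),
)
print(notify == remediate == {("db-primary", "critical")})  # True: both fire together
```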

The invariants are the structural safety boundary. Each is a named invariant over a derived "violation" relation, identical to the reservation_requires_authorization shape from rung 6:

invariant no_kill_unsynced_primary(node) :-
  count remediation.unsafely_scaled_primary(node) <= 0.
invariant always_have_admin(service) :-
  count infra.admin_gap(service) <= 0.
invariant no_isolate_healthy(service) :-
  count remediation.unsafely_isolated(service) <= 0.

If any agent — or any composition of agents — produces a state that satisfies one of those violation bodies, the evaluator refuses the transition and names the violator in the diagnostic.
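
The refusal logic amounts to deriving the violation relation and checking its count. A minimal Python sketch of the no_kill_unsynced_primary backstop (illustrative only; jacqos evaluates this inside its own engine):

```python
# remediation.unsafely_scaled_primary(node) :- remediation.scale_down(node),
#   infra.is_primary_db(node), not infra.replica_synced(node).
def unsafely_scaled_primary(scale_down, is_primary_db, replica_synced):
    return {n for n in scale_down if n in is_primary_db and n not in replica_synced}

# invariant no_kill_unsynced_primary: count of the violation relation must be 0.
def check_no_kill_unsynced_primary(scale_down, is_primary_db, replica_synced):
    violations = unsafely_scaled_primary(scale_down, is_primary_db, replica_synced)
    if violations:
        # The transition is refused and the violator is named in the diagnostic.
        raise RuntimeError(f"no_kill_unsynced_primary violated by {sorted(violations)}")

# Synced replica: scaling down the primary derives no violation.
check_no_kill_unsynced_primary({"db-primary"}, {"db-primary"}, {"db-primary"})

# Unsynced replica: the proposed plan is refused before it can execute.
try:
    check_no_kill_unsynced_primary({"db-primary"}, {"db-primary"}, set())
except RuntimeError as err:
    print(err)
```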

In one terminal start the dev shell:

Terminal window
jacqos dev

In a second terminal, replay the happy-path fixture and verify:

Terminal window
jacqos replay fixtures/happy-path.jsonl
jacqos verify

The happy-path fixture shows all three agents coordinating: infra publishes topology and telemetry, triage derives that db-primary is the root cause with critical severity, and the remediation model proposes a safe reroute. Because multiple agent-owned namespaces coordinate, jacqos verify writes a composition analysis alongside the rest of the verification bundle:

✓ fixture_replay: replayed 4 fixture(s)
✓ golden_fixtures: 4 fixture expectation(s) matched
✓ invariants: 4 fixture(s) matched expected invariant outcomes (3 invariant violation(s) recorded)
✓ candidate_authority_lints: 0 authority warning(s) across 0 fixture(s)
✓ replay_determinism: 4 fixture(s) matched fresh replays
✓ shadow_reference_evaluator: 4 of 4 fixture replays matched the shadow reference evaluator
✓ composition: passed all 3 composition subchecks; report: generated/verification/composition-analysis-sha256-…json

The three composition subchecks are:

  • No unconstrained cross-namespace rules: every rule that crosses a namespace boundary is explicit and accounted for.
  • Namespace-reduct partition monotonicity: each namespace’s slice of the rule graph is a well-defined sub-program.
  • Fixture coverage: every named invariant is exercised by at least one fixture.
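
One way to picture the first subcheck: collect every head-to-body namespace edge in the rule graph and require each boundary crossing to appear in an explicit allowlist. A hypothetical Python sketch (the rule representation and allowlist are illustrative, not the checker's real data model):

```python
# Hypothetical allowlist of permitted head->body namespace crossings.
ALLOWED = {("triage", "infra"), ("remediation", "proposal"), ("remediation", "infra")}

def namespace(atom):
    # "triage.blast_radius" -> "triage"
    return atom.split(".", 1)[0]

def cross_namespace_edges(rules):
    """rules: list of (head_atom, body_atoms). Returns head->body namespace edges."""
    return {(namespace(head), namespace(body))
            for head, bodies in rules for body in bodies
            if namespace(head) != namespace(body)}

rules = [
    ("triage.blast_radius", ["infra.transitively_depends", "triage.root_cause"]),
    ("remediation.scale_down", ["remediation.plan"]),
]
unaccounted = cross_namespace_edges(rules) - ALLOWED
print(unaccounted)  # set(): every cross-namespace rule is accounted for
```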

Step 7: Inspect The Frozen Composition Report

jacqos verify always emits a fresh composition report, but the checked-in generated/verification/composition-analysis-sha256-*.json is the frozen one — pinned to the evaluator digest the example ships under. You verify against it the same way you’d verify against a golden fixture:

Terminal window
jacqos verify --composition-report generated/verification/composition-analysis-sha256-64f440b630e4f419dbccacdca46c502ecd8e090d20d693162258cadfa6e4de84.json

The verify output now reports both the freshly computed report and the supplied report:

✓ composition: passed all 3 composition subchecks; report: generated/verification/composition-analysis-sha256-…json; verified supplied report: generated/verification/composition-analysis-sha256-…json

If a future change shifts the namespace partition — adds a cross-namespace dependency, drops a fixture that was the only coverage of an invariant, makes a rule unconstrained across boundaries — the supplied report stops matching and verify fails before any fixture replays. The composition report is the multi-agent analogue of an expected.json golden.
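
The drift detection can be thought of as a digest comparison over a canonicalized report, the same way a golden fixture is compared. A minimal sketch, assuming the report canonicalizes to sorted-key JSON (an assumption for illustration, not the documented format):

```python
import hashlib
import json

def report_digest(report: dict) -> str:
    # Canonicalize so that semantically identical reports hash identically.
    canonical = json.dumps(report, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

frozen = {"subchecks": ["cross_namespace", "reduct_monotonicity", "fixture_coverage"]}
fresh = {"subchecks": ["cross_namespace", "reduct_monotonicity", "fixture_coverage"]}

# Any partition shift changes the fresh digest and fails verification early.
assert report_digest(fresh) == report_digest(frozen)
print("composition unchanged")
```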

The same artifact is also produced by the explicit export command, which is useful for promoting a freshly proven composition into the frozen slot:

Terminal window
jacqos export composition-analysis

Replay the second fixture. It is constructed so the timeline contains both an LLM proposal that violates a catastrophic invariant and a contradicting telemetry sequence (api-gateway flips degraded → healthy at a higher sequence number, retracting the earlier infra.degraded assertion):

Terminal window
jacqos replay fixtures/contradiction-path.jsonl
jacqos contradiction list

contradiction list returns a JSON array of every open contradiction. Each entry names the relation, the conflicting mutations, and the observations that produced them — full provenance, no hidden state:

[
  {
    "contradiction_id": "sha256:906704b8d97e66128b926b395b7b130dbdb3f37d7cbba903d9c89907b680352c",
    "relation": "infra.degraded",
    "value": ["api-gateway"],
    "rule_ids": [
      "rule:sha256:6c84dcb23eb697217761c03b37812b906913e5d5b13cbd0885d13b4ab6de7cb2",
      "rule:sha256:ff7eff5b3228ace539ea96cd65c286f6efe87dd4335dce55bb2a4fdd3274ec60"
    ],
    "observation_refs": [
      "contradiction-path.jsonl#5",
      "contradiction-path.jsonl#6"
    ]
  }
]

Notice what the platform did not do: it did not guess. The ontology asserted and retracted infra.degraded("api-gateway"), both with provenance, and surfaced the conflict for an explicit human (or upstream-system) decision.

Before committing a resolution, preview it. The preview reports exactly what observation the resolver would append, without mutating the lineage:

Terminal window
jacqos contradiction preview sha256:906704b8d97e66128b926b395b7b130dbdb3f37d7cbba903d9c89907b680352c \
  --decision accept-retraction \
  --note "telemetry-correction"
{
  "lineage_id": "default",
  "contradiction_id": "sha256:906704b8d97e66128b926b395b7b130dbdb3f37d7cbba903d9c89907b680352c",
  "decision": "accept_retraction",
  "note": "telemetry-correction",
  "observation": {
    "kind": "manual.contradiction_resolution",
    "ref": "manual.contradiction_resolution:sha256:…:accept_retraction:…"
  }
}

The three legal decisions are accept-assertion (the asserter won), accept-retraction (the retractor won), and defer (record that no decision is yet made; the contradiction stays open). Each one becomes one new observation appended to the timeline — the resolution itself is auditable.
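
The append-only shape of resolution can be modeled in a few lines. This is a toy illustration of the semantics (hypothetical observation records, not the jacqos storage format):

```python
# A contradiction is an assert/retract pair with no resolution observation yet.
log = [
    {"seq": 5, "op": "assert",  "fact": ("infra.degraded", "api-gateway")},
    {"seq": 6, "op": "retract", "fact": ("infra.degraded", "api-gateway")},
]

def open_contradictions(log):
    asserted = {o["fact"] for o in log if o["op"] == "assert"}
    retracted = {o["fact"] for o in log if o["op"] == "retract"}
    resolved = {o["fact"] for o in log if o["op"] == "resolve"}
    return (asserted & retracted) - resolved

print(open_contradictions(log))  # the conflict is surfaced, never guessed away

# Resolving appends one new observation; entries 5 and 6 are untouched.
log.append({"seq": 7, "op": "resolve", "fact": ("infra.degraded", "api-gateway"),
            "decision": "accept_retraction", "note": "telemetry-correction"})
print(open_contradictions(log))  # set(): the contradiction is closed
```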

To commit, swap preview for resolve:

Terminal window
jacqos contradiction resolve sha256:906704b8d97e66128b926b395b7b130dbdb3f37d7cbba903d9c89907b680352c \
  --decision accept-retraction \
  --note "telemetry-correction"

Now jacqos contradiction list returns [] and the timeline contains one new manual.contradiction_resolution observation that the next verify consumes deterministically.

Step 10: Fork The Lineage To Try A Different Resolution

The point of an immutable observation log is that you can branch without losing anything. Fork the lineage before committing the resolution, replay the same contradiction, and try accept-assertion instead — the original default lineage stays intact:

Terminal window
jacqos lineage fork
{
  "parent_lineage_id": "default",
  "lineage_id": "lineage-fork-30d4acec8e8a21a30ce337b3171ee8413c3893dda85f5c3d9924ae66c105610f",
  "fork_head_observation_id": 11
}
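
The reported fork_head_observation_id pins the shared prefix. As a plain-Python sketch of the semantics (not the lineage store's real layout):

```python
# Parent lineage with eleven observations; a fork copies the shared prefix.
lineages = {"default": [f"obs-{i}" for i in range(1, 12)]}

def fork(lineages, parent="default"):
    head = len(lineages[parent])              # fork_head_observation_id
    child = f"{parent}-fork"
    lineages[child] = list(lineages[parent])  # shares everything up to head
    return child, head

child, fork_head = fork(lineages)
lineages[child].append("resolve:accept_assertion")       # child diverges
lineages["default"].append("resolve:accept_retraction")  # parent keeps its own tail

print(fork_head)  # 11
print(lineages["default"][:fork_head] == lineages[child][:fork_head])  # True
```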

The child lineage shares every observation up to fork_head_observation_id and diverges from there. Use --lineage on replay and studio to act on the child:

Terminal window
jacqos replay --lineage lineage-fork-30d4acec8e8a21a30ce337b3171ee8413c3893dda85f5c3d9924ae66c105610f \
  fixtures/contradiction-path.jsonl
jacqos contradiction resolve sha256:906704b8d97e66128b926b395b7b130dbdb3f37d7cbba903d9c89907b680352c \
  --decision accept-assertion \
  --note "trust-the-original-degraded-signal"

Now compare the two lineages in Studio:

Terminal window
jacqos studio --lineage lineage-fork-30d4acec8e8a21a30ce337b3171ee8413c3893dda85f5c3d9924ae66c105610f

The Activity timeline shows the child’s accept-assertion resolution as a manual observation; the Ontology view shows the namespace-reduct partition is unchanged (no rule edits, no composition drift); the provenance pane shows infra.degraded is still asserted on the child and retracted on the default lineage. Two defensible answers to the same evidence, both fully auditable, neither one mutating the other.

In one session you exercised the full multi-agent observation-first loop:

  1. Three agents wrote into one shared, namespace-partitioned model. None of them messaged any other.
  2. jacqos verify --composition-report proved that the namespace boundary held across the frozen composition-analysis artifact — a multi-agent analogue of a golden fixture.
  3. A real contradiction was named, previewed, and resolved with one explicit decision per branch.
  4. jacqos lineage fork let you try a different resolution without losing the original — both lineages remain available for inspection and audit.

You never wrote orchestration code. You never managed state. The LLM remediation agent was structurally bounded by proposal.* and the catastrophic invariant — exactly the shape the rung-6 page generalized.

You are at rung 7 of the reader ladder. The natural next step depends on where you are heading.

  • Debug, verify, ship — rung 8. The day-to-day workflow once your multi-agent app is in front of real observations: how to read a red verify, when to fork, when to resolve, when to ship.
  • CLI Reference — every flag for every command exercised on this page, plus audit facts/intents, reconcile, and composition check.

Optional, but the mental model is worth knowing if you plan to ship more than three agents:

  • Model-Theoretic Foundations — the amalgamation property and CALM theorem are why two agents writing into the same shared model never deadlock or drift.
  • Multi-Agent Patterns — full narrative on namespace partitions and stigmergic coordination, with the same incident-response example as the worked spine.
  • Incident Response Walkthrough — the bundled flagship reference, with all four fixtures (happy-path, contradiction-path, cascade-path, coverage-path) and their expected world states.
  • Smart Farm — secondary multi-agent example with a different domain (irrigation, weather, crop rotation) if you want a second composition surface to study.