
Multi-Agent Patterns

Multi-agent systems usually fail in one of two ways:

  • the agents are tightly orchestrated and brittle
  • the agents are loosely orchestrated and unsafe

JacqOS takes a different path. Agents do not coordinate by passing hidden state back and forth. They coordinate through one shared derived model, and the ontology decides which transitions are allowed.

This guide walks through examples/jacqos-incident-response/, the flagship cloud incident-response app. It is the right example because the problem is hostile by default: cascading failures, incomplete telemetry, and an LLM-powered remediation agent that can suggest dangerous actions.

If you are starting from scratch, jacqos scaffold incident-response --agents infra,triage,remediation gives you the shape directly: namespace-partitioned ontology files, shared invariants, and golden fixtures that already prove the shared-worldview path.

For a small live-ingress version of the same idea, use examples/jacqos-multi-agent-live/. It has two independent producers, one shared lineage, relation-filtered SSE subscribers, and a dispatch receipt that prevents a subscriber loop. The app is intentionally smaller than incident response so you can see the live contract directly: producers append observations, the ontology derives shared facts, and subscribers follow only the relations they own.

The --agents scaffold is not a vague “make this multi-agent” toggle. You name the agent-owned namespaces explicitly.

jacqos scaffold incident-response --agents infra,triage,remediation

--agents takes a comma-separated namespace list. Each namespace must be lowercase ASCII, may include digits or underscores, and you must provide at least two namespaces.

The scaffold it writes is intentionally sparse:

incident-response/
  jacqos.toml
  ontology/
    schema.dh
    intents.dh
    invariants.dh
    infra/
      rules.dh
    triage/
      rules.dh
    remediation/
      rules.dh
  mappings/
    inbound.rhai
  fixtures/
    happy-path.jsonl
    happy-path.expected.json
    contradiction-path.jsonl
    contradiction-path.expected.json

That layout encodes the coordination contract:

  • each namespace owns one rule file
  • cross-agent dependencies are explicit in rule bodies, not hidden in prompts
  • intent.* stays at the world-touching boundary
  • contradiction and happy-path fixtures already check the shared worldview

An incident starts with weak signals:

  • one service looks degraded
  • downstream services start failing
  • operators do not know the root cause yet
  • the remediation agent wants to act before the full picture is clear

This is exactly where ad hoc agent orchestration becomes dangerous. If one agent has stale context or an unsafe action slips through, you can make the outage worse than the failure that triggered it.

The incident-response app solves that by putting all coordination in the shared model:

  • infra.* holds topology and health evidence
  • triage.* derives root cause, blast radius, and severity
  • intent.* derives the next external actions
  • candidate.* and remediation.* hold remediation proposals and accepted plans

The agents stay independently authored and independently triggered. The worldview stays shared.

The communications agent and remediation agent do not message each other directly. They both react to the same triage facts.

rule intent.notify_stakeholder(root, severity) :-
    triage.root_cause(root),
    triage.severity(root, severity),
    not triage.stakeholder_notified(root).

rule intent.remediate(root, severity) :-
    triage.root_cause(root),
    triage.severity(root, severity),
    not remediation.plan(root, _, _, _).

This is stigmergy: coordination through the shared environment rather than through orchestration graphs.

That buys you three things immediately:

  1. The agents stay loosely coupled. Adding a new agent means adding new rules against the same ontology, not rewiring the old ones.
  2. Every agent sees the same derived truth. There is no private cache of “what the incident means.”
  3. Provenance stays unified. A bad intent still traces back to the same shared evidence graph.

If you have read Model-Theoretic Foundations, this is North Star 5 running on top of North Star 1.
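The loop is small enough to model directly. Below is a hypothetical Python sketch of the same coordination: both agents read one shared fact store and never message each other. The fact tuples mirror the rules above, but the mini-evaluator is an illustration, not the JacqOS engine.

```python
# Illustrative model of stigmergy: two agents, one shared fact store.
shared = {
    ("triage.root_cause", ("db-primary",)),
    ("triage.severity", ("db-primary", "critical")),
}

def match(store, relation):
    """Return the argument tuples of every fact in `relation`."""
    return [args for (rel, args) in store if rel == relation]

def notify_agent(store):
    # intent.notify_stakeholder :- root_cause, severity, not stakeholder_notified
    return {
        ("intent.notify_stakeholder", (root, sev))
        for (root,) in match(store, "triage.root_cause")
        for (r, sev) in match(store, "triage.severity")
        if r == root and ("triage.stakeholder_notified", (root,)) not in store
    }

def remediation_agent(store):
    # intent.remediate :- root_cause, severity, not remediation.plan(root, ...)
    planned = {args[0] for args in match(store, "remediation.plan")}
    return {
        ("intent.remediate", (root, sev))
        for (root,) in match(store, "triage.root_cause")
        for (r, sev) in match(store, "triage.severity")
        if r == root and root not in planned
    }

# Both intents fire from the same shared triage facts.
print(notify_agent(shared))
print(remediation_agent(shared))
```

Adding a `remediation.plan` fact to the store silences `intent.remediate` without touching the notification agent, which is the decoupling the pattern buys.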

Identify agents without making them stateful


Use agent_id and agent_run_id when provenance needs to show which agent role produced an observation or model request. Do not use them as hidden workflow state.

For high-stakes systems, include actor metadata on observations that cross a trust boundary:

{
  "metadata": {
    "source": "capability:llm.complete",
    "actor_id": "model:remediation_planner",
    "actor_kind": "model",
    "agent_id": "remediation",
    "agent_run_id": "incident-42/remediation/run-3",
    "authority_scope": ["incident.propose_remediation"],
    "acted_on_behalf_of": "agent:remediation"
  }
}

For current local apps, mirror the identity fields your ontology must reason about into the observation payload and map those payload fields into atoms. The metadata remains useful for export, audit, and cloud handoff. Identity stays visible in provenance; authority still comes from ontology rules and invariants.
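As a sketch of that mirroring: the real mapping lives in mappings/inbound.rhai, so this Python stand-in is purely illustrative, and the payload field names and atom relations (`actor_id`, `agent.actor`, `agent.owner`) are assumptions for the example.

```python
# Hypothetical mapper sketch: mirror identity fields from the payload into
# atoms so ontology rules can reason about them. Metadata is left alone.
def map_observation(obs):
    payload = obs.get("payload", {})
    atoms = []
    if "actor_id" in payload:
        atoms.append(("agent.actor", (obs["id"], payload["actor_id"])))
    if "agent_id" in payload:
        atoms.append(("agent.owner", (obs["id"], payload["agent_id"])))
    return atoms

obs = {
    "id": "obs-42",
    "payload": {"actor_id": "model:remediation_planner", "agent_id": "remediation"},
    # Metadata stays on the observation for export and audit; it is not mapped.
    "metadata": {"source": "capability:llm.complete"},
}
print(map_observation(obs))
```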

The core of the example is recursive Datalog, not a hand-built workflow.

rule infra.transitively_depends(service, dependency) :-
    infra.depends_on(service, dependency).

rule infra.transitively_depends(service, root) :-
    infra.depends_on(service, dependency),
    infra.transitively_depends(dependency, root).

rule triage.blast_radius(root, root) :-
    triage.root_cause(root).

rule triage.blast_radius(service, root) :-
    infra.transitively_depends(service, root),
    triage.root_cause(root).

This is the right pattern whenever the world already has graph structure:

  • service dependencies
  • escalation chains
  • package dependency trees
  • account hierarchies
  • approval chains

Do not encode graph reachability in prompt logic or imperative retries. Put the graph in observations, derive transitive closure in .dh, and let every agent react to the same computed blast radius.

In the cascade fixture, that means a degraded db-primary can light up auth-service, edge-api, frontend-web, and cdn-edge without any agent maintaining its own copy of the dependency chain.
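The closure rules amount to a fixpoint computation. A Python sketch of the same derivation, using the service names from the cascade fixture above; the naive fixpoint loop is illustrative, not the JacqOS evaluator.

```python
def transitive_closure(depends_on):
    """Naive fixpoint for infra.transitively_depends over (service, dependency) edges."""
    closure = set(depends_on)
    changed = True
    while changed:
        changed = False
        for (s, d) in list(closure):
            for (d2, r) in depends_on:
                if d == d2 and (s, r) not in closure:
                    closure.add((s, r))
                    changed = True
    return closure

edges = {("auth-service", "db-primary"),
         ("edge-api", "auth-service"),
         ("frontend-web", "edge-api"),
         ("cdn-edge", "frontend-web")}

# triage.blast_radius: the root itself plus everything transitively depending on it.
root = "db-primary"
blast_radius = {root} | {s for (s, r) in transitive_closure(edges) if r == root}
print(sorted(blast_radius))
```

No agent holds a private copy of the chain: adding one `depends_on` edge grows the computed blast radius for every reader.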

The remediation agent is allowed to think probabilistically. The ontology is not.

The app accepts candidate remediation proposals, derives a plan, and then forbids catastrophic actions with invariants.

rule remediation.plan(root, target, action, seq) :-
    candidate.remediation_action(_, root, target, action, seq).

rule remediation.scale_down(service) :-
    remediation.plan(_, service, "scale_down", _).

rule remediation.unsafely_scaled_primary(node) :-
    remediation.scale_down(node),
    infra.is_primary_db(node),
    not infra.replica_synced(node).

invariant no_kill_unsynced_primary(node) :-
    count remediation.unsafely_scaled_primary(node) <= 0.

This is the pattern to copy for high-stakes agent work:

  1. Let the agent propose into candidate.* or another clearly non-authoritative namespace.
  2. Derive accepted plans in ordinary facts.
  3. Write catastrophic invariants against the derived plan.
  4. Let jacqos verify prove the boundary before anything touches the world.

The human review surface is the invariant, not the generated rule code and not the model prompt.
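The gate fits in a few lines. A Python sketch of the candidate-to-plan-to-invariant path, with relation shapes mirroring the rules above; the checker is an illustration, not `jacqos verify`.

```python
def derive_plan(candidate_actions):
    # remediation.plan(root, target, action, seq) :- candidate.remediation_action(...)
    return {(root, target, action, seq)
            for (_id, root, target, action, seq) in candidate_actions}

def unsafely_scaled(plan, primary_dbs, replica_synced):
    # remediation.unsafely_scaled_primary(node)
    return {target for (_root, target, action, _seq) in plan
            if action == "scale_down"
            and target in primary_dbs
            and target not in replica_synced}

def no_kill_unsynced_primary(plan, primary_dbs, replica_synced):
    # The invariant holds when the unsafe set is empty (count <= 0).
    return len(unsafely_scaled(plan, primary_dbs, replica_synced)) == 0

plan = derive_plan({("cand-1", "db-primary", "db-primary", "scale_down", 1)})
print(no_kill_unsynced_primary(plan, {"db-primary"}, replica_synced=set()))           # blocked
print(no_kill_unsynced_primary(plan, {"db-primary"}, replica_synced={"db-primary"}))  # allowed
```

The agent's proposal enters through `candidate_actions`; the invariant reads only the derived plan, never the prompt that produced it.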

When a multi-agent app misbehaves, the temptation is to open every rule and ask which agent did the wrong thing. In JacqOS, you start from the bad tuple instead.

Imagine the remediation path proposes scale_down("db-primary") and the app derives:

remediation.unsafely_scaled_primary("db-primary")

Open Studio and find the row for that derivation. The drill inspector’s Atoms / Observations and Facts sections list the local witnesses in text form:

  • remediation.scale_down("db-primary")
  • infra.is_primary_db("db-primary")
  • absence of infra.replica_synced("db-primary")
  • the candidate remediation proposal that introduced the action

You do not need the whole incident timeline. You need the local neighborhood around the unsafe fact and the provenance chain that fed it.

That is especially important in multi-agent systems because it prevents the usual blame-game debugging loop. You do not ask “which agent is wrong?” first. You ask “which evidence made this tuple true?” first.

Use this workflow:

  1. Replay the failing fixture, usually fixtures/contradiction-path.jsonl or fixtures/cascade-path.jsonl.
  2. Open Studio and select the bad fact or blocked intent in Activity.
  3. Read the drill inspector’s local witnesses before widening with the verification bundle’s full neighborhood export.
  4. Add or tighten the invariant or fixture that should forbid the bad state.

See Visual Provenance and Debugging with Provenance for the UI workflow.

Use namespace reducts to inspect agent boundaries


The incident app is split across explicit namespaces:

  • infra.*
  • triage.*
  • intent.*
  • candidate.*
  • remediation.*

That is not just naming hygiene. Namespace reduct analysis tells you where the coordination contract really lives.

An excerpt from the incident-response graph bundle looks like this:

{
  "namespaces": [
    { "namespace": "candidate", "rule_count": 1 },
    { "namespace": "infra", "rule_count": 15 },
    { "namespace": "intent", "rule_count": 2 },
    { "namespace": "remediation", "rule_count": 7 },
    { "namespace": "triage", "rule_count": 7 }
  ],
  "cross_namespace_edges": [
    {
      "from_namespace": "intent",
      "from_relation": "intent.notify_stakeholder",
      "to_namespace": "triage",
      "to_relation": "triage.root_cause"
    },
    {
      "from_namespace": "remediation",
      "from_relation": "remediation.plan",
      "to_namespace": "candidate",
      "to_relation": "candidate.remediation_action"
    }
  ]
}

This gives you the guarantee you want in multi-agent work:

  • each agent-owned rule domain is explicit
  • shared read models are explicit
  • coupling is visible as named cross-namespace edges instead of hidden control flow

When two namespaces are fully disjoint, jacqos stats will prove that with a reduct-disjoint pair. In the incident app, the report is useful for a different reason: it shows that the agents coordinate only through the declared triage and remediation surfaces. There is no hidden side channel.

That is the right test for a multi-agent JacqOS app. Independence should be structural and inspectable, not implied by file layout or team convention.
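One concrete use for the artifact is a CI gate that fails when a cross-namespace edge appears that review never approved. The report shape follows the excerpt above; the allowlist and the inline JSON are this sketch's own assumptions, not a JacqOS feature.

```python
import json

# Hypothetical CI gate over a composition-analysis report: every coordination
# edge must be on the reviewed allowlist.
ALLOWED = {
    ("intent.notify_stakeholder", "triage.root_cause"),
    ("remediation.plan", "candidate.remediation_action"),
}

report = json.loads("""{
  "cross_namespace_edges": [
    {"from_relation": "intent.notify_stakeholder", "to_relation": "triage.root_cause"},
    {"from_relation": "remediation.plan", "to_relation": "candidate.remediation_action"}
  ]
}""")

unexpected = [(e["from_relation"], e["to_relation"])
              for e in report["cross_namespace_edges"]
              if (e["from_relation"], e["to_relation"]) not in ALLOWED]
print("boundary drift" if unexpected else "boundaries match review")
```

A new side channel between agents then fails the build before anyone has to read the generated rules.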

The CLI makes that workflow first-class:

  1. jacqos scaffold incident-response --agents infra,triage,remediation creates the namespace-partitioned starting point.
  2. jacqos verify runs composition analysis automatically when the app has more than one agent-owned namespace.
  3. jacqos export composition-analysis writes the same portable report that verification embeds, so you can diff namespace boundaries, monotonicity summaries, and invariant-fixture coverage in CI.
  4. jacqos composition check recomputes the current report and tells you whether a checked-in artifact still matches current inputs.
  5. jacqos stats exposes the agent_reduct_report so you can inspect shared surfaces, coordination edges, and disjoint namespace pairs without guessing.

The visual rendering of namespace partitions and cross-namespace edges ships with the V1.1 Studio rule-graph surface. In V1, the boundary summary lives in the composition-analysis artifact you can check into generated/verification/ and validate with jacqos composition check.

Multi-agent correctness is not just “all fixtures are green.” You also need to know whether the namespaces still compose cleanly as independent domains.

Use the verification and composition commands together:

# Run the ten verification check families, including composition as check 10
jacqos verify
# Pin the static boundary report for review or CI
jacqos export composition-analysis
# Recompute and compare the checked-in report
jacqos composition check

When you check the composition-analysis artifact into generated/verification/, pin it through jacqos verify so the same run that proves your fixtures also proves the boundary report has not drifted:

jacqos verify --composition-report \
  generated/verification/composition-analysis-<evaluator_digest>.json

--composition-report re-runs check 10 against the path you pass and fails the verification run if the artifact no longer matches current inputs. This is the form to reach for in CI: one command, one exit code, one signed contract for every agent boundary the app declares.

The composition report is where Module Boundary Engine features become operational:

  • namespace reduct partitions show which agent domains are actually coupled
  • cross-namespace dependency analysis shows the exact coordination edges
  • monotonicity summaries distinguish warning-grade monotonic cycles from failure-grade non-monotonic cycles
  • invariant fixture coverage tells you whether the fixtures still prove the safety boundaries you think you shipped

That is the right review surface for multi-agent change. You do not need to read every generated rule. You need to inspect whether the shared reality still has the boundaries you intended.

Branch lineages to explore agent decisions safely


Multi-agent apps will routinely face decisions where you want to explore one choice without committing to it on the live observation history. Rather than mutating shared state, fork the lineage:

jacqos lineage fork

That creates a child lineage from the committed head of the active lineage and prints a JSON object with the new lineage_id, the parent_lineage_id, and the fork_head_observation_id. Use the returned id with the lineage-aware flags on the rest of the CLI to inspect the branch in isolation:

jacqos replay fixtures/cascade-path.jsonl --lineage <child-lineage-id>
jacqos studio --lineage <child-lineage-id>
jacqos export observations --lineage <child-lineage-id>
jacqos export facts --lineage <child-lineage-id>

Parent and child lineages never merge back. If the child branch derives the worldview you wanted, promote it by replaying its observations on a fresh lineage; if it does not, abandon it and the parent’s history is untouched. This is how you experiment with a new agent’s behavior — for example, a remediation proposal you are not yet sure should fire — without polluting the canonical lineage that the rest of the system relies on.

Resolve cross-agent contradictions explicitly


When two agents derive conflicting evidence about the same fact — one asserts a triage.root_cause, another retracts it — the evaluator surfaces a contradiction rather than silently picking a winner. Multi-agent apps will see these whenever an upstream sensor and a downstream decider disagree.

List the open contradictions:

jacqos contradiction list

Preview a resolution before committing it:

jacqos contradiction preview ctr-007 \
  --decision accept-assertion \
  --note "Confirmed by infra agent on second observation"

--decision accepts accept-assertion, accept-retraction, or defer. The preview shows you exactly which downstream facts and intents change without appending anything to the observation log.

Once you are sure, commit the resolution:

jacqos contradiction resolve ctr-007 \
  --decision accept-assertion \
  --note "Confirmed by infra agent on second observation"

The resolution is recorded as an observation with full provenance, so later audit runs can reconstruct exactly who decided what and why.

The payoff of stigmergic coordination is that a new agent should feel like a new namespace, not a rewrite.

Suppose you want to add a notify.* agent after triage.* already exists:

rule notify.page(root, severity) :-
    triage.root_cause(root),
    triage.severity(root, severity).

The incremental workflow is:

  1. Add the new namespace relation declarations and ontology/notify/rules.dh.
  2. Read from existing shared facts like triage.* rather than introducing a private message channel.
  3. Add or tighten invariants when the new namespace becomes part of a safety boundary.
  4. Extend happy-path.expected.json and contradiction-path.expected.json so the new coordination surface is proven, not assumed.
  5. Run jacqos verify and inspect the composition report before you treat the new agent as stable.

If the new namespace only reads shared facts and emits new derived facts or intents, existing agents stay independent. If the new namespace creates a cross-namespace negation or aggregate loop, the composition report will tell you immediately.

Use this pattern when you build your own multi-agent app:

  1. Put shared world state in neutral fact namespaces.
  2. Let each agent react to the shared model, not to another agent’s private output.
  3. Use recursive rules for graph problems like blast radius or dependency closure.
  4. Route non-authoritative agent output through candidate.* and stop unsafe outcomes with invariants.
  5. Debug from the bad tuple outward with Gaifman-scoped provenance.
  6. Use namespace reducts to prove where domains are disjoint and where coordination is intentional.

Why single-process evaluation is distribution-ready


JacqOS V1 evaluates all agent namespaces in a single process. This is a strength: one process, one model, zero coordination overhead. But a natural question follows — if you need to distribute agents across separate processes in the future, does the architecture support it, or would it require a redesign?

The answer is that distribution is a deployment concern, not a semantic one. The properties that make single-process evaluation correct are the same properties that make distributed evaluation possible. This section makes that claim precise.

Let P = (R, σ, S) be a JacqOS program where:

  • R is a finite set of .dh rules
  • σ is the vocabulary (the set of relation names declared in schema.dh)
  • S = S₀, S₁, …, Sₖ is the stratification computed by the loader

Let O be a finite, ordered observation sequence (one lineage). Let A(O) be the atom set produced by the deterministic mapper. Let M(P, O) denote the stratified minimal model — the unique set of derived facts computed by the evaluator.

Partition the rules by namespace into disjoint sets R₁, …, Rₙ (e.g., R_infra, R_triage, R_remediation). Each Rᵢ derives only into its own sub-vocabulary σᵢ ⊆ σ. The composition check enforces this partition: no rule in Rᵢ derives a relation in σⱼ for i ≠ j.

The claim to prove: there exists a distributed evaluation protocol D using n processes (one per namespace) such that the model M_D(P, O) produced by D is identical to M(P, O).

The proof proceeds by induction on the stratum index.

Base case — atom extraction. The mapper from observations to atoms is deterministic and per-observation. Every process that sees observation sequence O computes the identical atom set A(O). No coordination is required at this step. In the incident-response example, all 42 atoms are determined entirely by the mapper and the 11 observations.

Inductive step — stratum Sⱼ. Assume all processes agree on the derived facts for strata S₀ through Sⱼ₋₁. We show they agree on Sⱼ.

Case 1: Monotonic rules in Sⱼ. A rule is monotonic when it uses only positive body literals (no negation, no retraction, no aggregation). By the CALM theorem (Hellerstein, 2010), monotonic programs can be evaluated in a distributed, coordination-free manner — the order and location of rule application do not affect the result.

Concretely: if process Pᵢ applies its monotonic rules Rᵢ ∩ Sⱼ over the shared atom base plus the agreed-upon lower-stratum facts, it derives some facts Fᵢ. The least Herbrand model of Sⱼ is the unique minimal fixed point (Knaster-Tarski), so ⋃ᵢ Fᵢ converges to it regardless of evaluation order. Processes can compute independently and merge by set-union.

In the incident-response example, stratum 0 contains monotonic rules for infra.depends_on, infra.service, infra.health_signal, infra.is_primary_db, infra.replica_synced, infra.production_system, and infra.has_admin_access. Stratum 1 contains the monotonic recursive closure infra.transitively_depends. These 9 monotonic rules can be evaluated by the infra.* process independently, with no coordination within the stratum — CALM guarantees convergence.

Case 2: Non-monotonic rules in Sⱼ. A rule is non-monotonic when it uses negation (not), aggregation (max, count), or mutation (assert/retract). Stratified negation semantics require that every negated relation is fully computed in a lower stratum before the negating rule fires. By the inductive hypothesis, all processes agree on the lower-stratum facts. Therefore every process evaluates the same negated literals against the same stable base, and derives the same facts.

In the incident-response example, triage.root_cause (stratum 3) uses not infra.healthy(root). This is safe because infra.healthy is computed in stratum 2, which is complete and agreed-upon before stratum 3 begins. The negation sees identical inputs on every process.

Synchronization protocol. The protocol requires one barrier per stratum boundary: after all processes finish stratum Sⱼ, they exchange their derived facts before any process begins Sⱼ₊₁. The number of barriers equals k (the number of strata), which is structurally determined by the program — not by runtime conditions. The evaluator already computes this stratification at load time.

For the incident-response example, jacqos stats reports 7 strata (S₀ through S₆), so the distributed protocol requires 6 synchronization barriers. Within each monotonic stratum, evaluation is coordination-free.
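The protocol is small enough to simulate. A toy Python sketch, with illustrative rules rather than the real incident-response program: each stratum's rule functions run on independent "processes", and the barrier is a set-union exchange before the next stratum starts.

```python
def run_distributed(strata, base):
    facts = set(base)
    for stratum in strata:               # one barrier per stratum boundary
        derived = set()
        for process_rules in stratum:    # each process evaluates independently
            derived |= process_rules(facts)
        facts |= derived                 # barrier: exchange facts by set-union
    return facts

def run_single(strata, base):
    facts = set(base)
    for stratum in strata:               # same rules, one process
        for process_rules in stratum:
            facts |= process_rules(facts)
    return facts

# Stratum 0: two monotonic "infra" processes. Stratum 1: one negating rule
# that fires only after the lower stratum is complete and agreed upon.
def healthy(f):
    return {("healthy", t[1]) for t in f if t[0] == "signal" and t[2] == "ok"}

def degraded(f):
    return {("degraded", t[1]) for t in f if t[0] == "signal" and t[2] == "bad"}

def root_cause(f):
    return {("root_cause", t[1]) for t in f
            if t[0] == "degraded" and ("healthy", t[1]) not in f}

base = {("signal", "svc-a", "ok"), ("signal", "svc-b", "bad")}
strata = [[healthy, degraded], [root_cause]]
print(run_distributed(strata, base) == run_single(strata, base))  # identical models
```

The negation in `root_cause` is safe for the same reason as in the proof: by the time its stratum runs, every process has the same complete lower-stratum facts.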

Namespace disjointness and amalgamation. When two namespace-partitioned rule sets Rᵢ and Rⱼ derive into disjoint output vocabularies σᵢ and σⱼ and share only lower-stratum facts as inputs, the amalgamation property (see Model-Theoretic Foundations) guarantees:

M(Rᵢ ∪ Rⱼ, I) = M(Rᵢ, I) ∪ M(Rⱼ, I)

That is, independently derived models that agree on their shared substructure merge without contradiction. The composition check already verifies that namespace boundaries satisfy this condition. In the incident-response example, the composition report shows that the only cross-namespace edges are:

  • intent.notify_stakeholder reads triage.root_cause (lower stratum)
  • remediation.plan reads candidate.remediation_action (lower stratum)

Both are read-only references to lower-stratum facts — exactly the pattern that amalgamation permits. ∎
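The amalgamation equation can be checked on a toy example. The rule functions below are illustrative stand-ins for namespace-disjoint rule sets; the point is only that evaluating the union of the rules equals the union of the separately evaluated models.

```python
def model(rules, inputs):
    # Naive fixpoint over positive rules (enough for this illustration).
    facts = set(inputs)
    while True:
        new = set().union(*(rule(facts) for rule in rules)) - facts
        if not new:
            return facts
        facts |= new

def intent_rules(f):        # intent.* reads lower-stratum triage.* facts
    return {("intent.notify", t[1]) for t in f if t[0] == "triage.root_cause"}

def remediation_rules(f):   # remediation.* reads lower-stratum candidate.* facts
    return {("remediation.plan", t[1]) for t in f if t[0] == "candidate.action"}

I = {("triage.root_cause", "db-primary"), ("candidate.action", "db-primary")}
combined = model([intent_rules, remediation_rules], I)
merged = model([intent_rules], I) | model([remediation_rules], I)
print(combined == merged)   # amalgamation: model of union = union of models
```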

The proof identifies the exact coordination cost of distribution:

  Aspect                 Cost
  Atom extraction        Zero coordination (deterministic mapper)
  Monotonic strata       Zero coordination within strata (CALM)
  Non-monotonic strata   One barrier per stratum boundary
  Invariant checking     Zero coordination (read-only over complete model)

The total coordination overhead is bounded by the number of strata, which is a static property of the program. Adding a new agent namespace does not increase the number of strata unless the new rules introduce new negation or aggregation dependencies.

The honest caveat: in the incident-response example, 15 of 32 rules are non-monotonic (negation, aggregation, or mutation), spanning 7 of 9 strata. This means most strata require the synchronization barrier. The design is distribution-ready, but this particular program would not benefit from aggressive distribution — the coordination cost is real.

The examples/jacqos-smart-farm/ example demonstrates the opposite end of the spectrum. It models distributed IoT sensors across a farm — soil probes, weather stations, and crop scanners — each running as an independent agent namespace. Of its 24 rules, 21 are monotonic and only 3 use negation or mutation:

                                   incident-response   smart-farm
  Monotonic rules                  17 / 32 (53%)       21 / 24 (88%)
  Non-monotonic rules              15 / 32 (47%)       3 / 24 (12%)
  Monotonic strata                 2 / 9               5 / 8
  Synchronization barriers needed  6                   2

In the smart-farm example, sensor enrichment (soil.*, weather.*, crop.*) and even the cross-agent join (irrigation.candidate) are entirely monotonic. A soil probe in the north field and a weather station at the barn can each run their namespace rules locally and sync derived facts by set-union — CALM guarantees convergence with zero coordination. Only the final irrigation decision layer (irrigation.skip, intent.irrigate) uses negation and would require the synchronization barrier.

This is the distribution story the architecture was designed for: edge nodes run monotonic strata independently, the central hub runs the non-monotonic decision layer, and the stratum boundaries computed at load time tell you exactly which is which. jacqos stats already reports this breakdown.

The smart-farm stratum breakdown reveals a clean two-tier split:

Tier 1 — Edge agents (strata 0–3, all monotonic, coordination-free):

  Soil node:    soil.reading → soil.dry, soil.healthy, soil.acidic
  Weather node: weather.reading → weather.hot, weather.frost_risk
                weather.rainfall → weather.dry_period
  Crop node:    crop.scan → crop.water_demand, crop.frost_sensitive
                          → crop.high_demand
  Cross-agent:  irrigation.candidate (S3), irrigation.frost_protect (S2)

Every rule in Tier 1 only adds facts, never negates. Each physical sensor node runs its namespace rules locally. The cross-agent joins (irrigation.candidate needs soil.dry + crop.high_demand, irrigation.frost_protect needs crop.frost_sensitive + weather.frost_risk) are also monotonic — they join facts from different namespaces but use only positive body literals.

Under the CALM theorem, Tier 1 nodes can sync lazily. A soil probe with intermittent connectivity can buffer its derived facts and merge them whenever it reconnects. Eventual consistency is sufficient because monotonic derivation is order-independent.

Tier 2 — Central hub (strata 4–5, non-monotonic, requires barrier):

  S4: irrigation.unsafe_frost_irrigate (uses NOT frost_protect)
  S5: intent.irrigate (uses NOT skip, NOT irrigated)

Only these 2 rules out of 24 require the complete Tier 1 output before they can evaluate. The synchronization barrier sits between S3 and S4 — that is the exact CALM boundary for this program.

The amalgamation property guarantees that this split is safe:

M(soil ∪ weather ∪ crop ∪ irrigation, O) = M(soil, O) ∪ M(weather, O) ∪ M(crop, O) ∪ M(irrigation, O_merged)

Running all agents in one process (what V1 does) or running them on separate devices and merging — the derived model is identical.

Strata are dependency depth, not agent boundaries


A common misunderstanding: stratum numbers do not map to “which agent runs this.” They map to dependency depth in the rule graph.

In the smart-farm example:

  Stratum   What it computes                                      Why it is at this depth
  S0        Atom projections (soil.reading, crop.scan, etc.)      No dependencies
  S1        Single-hop enrichment (soil.dry, weather.hot, etc.)   Depends on S0 projections
  S2        crop.high_demand, irrigation.frost_protect            Depends on S1 enrichment
  S3        irrigation.candidate                                  Depends on S2 (cross-namespace join)
  S4        irrigation.unsafe_frost_irrigate                      First negation: not frost_protect
  S5        intent.irrigate                                       Second negation: not skip, not irrigated

The CALM boundary does not land at an agent namespace boundary — it lands between S3 and S4, which is the first point where negation appears. jacqos stats reports this as the monotonicity summary. When planning a distributed deployment, the stratum breakdown is the authoritative guide to where synchronization barriers are required.

Design pattern: express “don’t do X” as a positive fact


irrigation.skip looks like it should be non-monotonic — it is about not irrigating. But look at the rule:

rule irrigation.skip(zone) :-
    irrigation.candidate(zone),
    weather.rainfall("main", mm),
    mm >= 15.

This is a positive join with a comparison. No negation. The “skip” decision is asserted as a positive fact rather than derived by negating the intent. The evaluator classifies it as monotonic.

This is a design pattern worth copying: when you want to express “don’t do X under condition Y,” derive a positive skip or block fact and then negate it in the intent rule. This keeps the blocking condition in the monotonic tier (edge-safe, coordination-free) and confines negation to the final intent derivation (central hub).

Compare the two approaches:

-- Approach A: negation in the enrichment layer (non-monotonic, needs barrier)
rule irrigation.candidate(zone) :-
    soil.dry(zone),
    crop.high_demand(zone),
    not weather.rainfall("main", mm), mm >= 15.  -- negation here

-- Approach B: positive skip fact (monotonic enrichment, negation only in intent)
rule irrigation.candidate(zone) :-
    soil.dry(zone),
    crop.high_demand(zone).

rule irrigation.skip(zone) :-
    irrigation.candidate(zone),
    weather.rainfall("main", mm),
    mm >= 15.  -- positive fact

rule intent.irrigate(zone) :-
    irrigation.candidate(zone),
    not irrigation.skip(zone).  -- negation deferred to intent

Approach B keeps one more rule in the monotonic tier. In a distributed deployment, this means the skip decision can be computed at the edge. The negation is deferred to the central hub where it belongs.
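The skip-as-positive-fact pattern can be sketched in plain Python under the same assumptions; the zone names and function shapes here are illustrative, not part of the smart-farm app.

```python
# Approach B, modeled as three layers: positive join, positive skip fact,
# and a single negation in the final intent derivation.
def candidates(soil_dry, high_demand):
    return soil_dry & high_demand                    # positive join (edge-safe)

def skip(cands, rainfall_mm):
    return set(cands) if rainfall_mm >= 15 else set()  # positive skip fact (edge-safe)

def irrigate_intents(cands, skipped):
    return {z for z in cands if z not in skipped}    # negation deferred to the hub

cands = candidates({"north", "south"}, {"north"})
print(irrigate_intents(cands, skip(cands, rainfall_mm=18)))  # heavy rain: no intents
print(irrigate_intents(cands, skip(cands, rainfall_mm=3)))
```

`candidates` and `skip` are order-independent set constructions, so edge nodes can compute and sync them lazily; only `irrigate_intents` needs the complete merged picture.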

Whether evaluation happens in one process or across n processes, the following invariant holds:

For any observation sequence O and program P, the derived model M(P, O) is unique, deterministic, and independent of evaluation topology.

This is not an aspiration. It is a consequence of three properties that JacqOS enforces at load time:

  1. Deterministic atom extraction — the mapper is a pure function
  2. Stratified fixed-point semantics — the model is the unique stratified minimal model
  3. Namespace-disjoint derivation — the composition check verifies that no namespace writes into another’s vocabulary

Single-process evaluation is the simplest deployment of these properties. Distributed evaluation is another deployment of the same properties. The math does not change.

  • Advanced Agents walks the full multi-agent workflow end to end: scaffold with --agents, fork a lineage, drive a contradiction through preview and resolve, and pin the boundary contract with jacqos verify --composition-report.
  • Model-Theoretic Foundations explains rule shapes, locality, and reducts in more depth.
  • Invariant Review shows how to turn catastrophic safety into machine-checked constraints.
  • Fixtures and Invariants covers replay and counterexample-driven iteration.
  • Live Ingress shows how to run the same shared-reality pattern through jacqos serve, adapters, and SSE subscribers.
  • CLI Reference documents jacqos verify, jacqos lineage fork, jacqos contradiction, jacqos stats, and Studio launch workflows.