Security & Auditability
What is provably contained, and what still requires human review?
This is the question a risk officer, a compliance lead, or an audit partner will ask before they sign off on shipping an AI agent into a regulated workflow. JacqOS is built so that the answer can be precise rather than reassuring. Some properties of agent behavior are structurally enforced by the platform for a fixed evaluator and lineage: violating programs fail to load, and violating transitions are refused before effects fire. Other properties depend on the rules, fixtures, and helpers your team writes, and those still need human judgment. This page draws the line.
The boundary
Every JacqOS application runs through a single deterministic pipeline:
Observation → Atoms → Facts → Intents → Effects → (new Observations)

The evaluator computes the derived facts as a stratified Datalog model
over the current observation log. It then checks every named
invariant — declarative integrity constraints written in .dh —
against that model. Only transitions that satisfy every invariant are
allowed to produce effects on the world. This is what we mean by the
satisfiability boundary: the engine acts as an automated model
checker, and a transition that violates an invariant is rejected
before any external action fires.
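As a rough mental model, the boundary behaves like the following Python sketch. Everything here is illustrative: `check_boundary`, the model shape, and the invariant list are stand-ins, not JacqOS APIs.

```python
# Illustrative sketch of the satisfiability boundary -- not JacqOS source.
# An invariant is a named predicate over the derived model; a transition's
# effects fire only when every invariant holds.

def check_boundary(model, invariants, effects):
    """Return the effects to fire, or refuse the transition."""
    violated = [name for name, holds in invariants if not holds(model)]
    if violated:
        # Refused before any external action fires.
        raise ValueError(f"invariants violated: {violated}")
    return effects

# Toy derived model and one declared invariant over it.
model = {"balance": 42}
invariants = [("balance_non_negative", lambda m: m["balance"] >= 0)]

ok = check_boundary(model, invariants, effects=["notify_customer"])
```

The point of the real mechanism is the same as the sketch: the check runs before effects, not after, so a violating state never reaches the outside world.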
The boundary is mechanical, not cultural. Agents emit intents freely; the platform checks invariant satisfaction before anything touches the world. The mathematical guarantee is “no effect fires from a state that fails an invariant check for the fixed evaluator and lineage.” For the conceptual model behind this pipeline see Observation-First Thinking. For the review surface — what humans declare, what the evaluator checks — see Invariant Review.
Two reserved namespaces extend the boundary into AI-specific failure modes:
- `candidate.*` holds fallible-sensor evidence (LLM extractions, voice parses, vision labels). Candidate facts never become accepted truth until an explicit acceptance rule promotes them.
- `proposal.*` holds fallible-decider output (LLM-suggested actions). Proposals never become executable intents until an explicit domain decision relation ratifies them.
Any rule that tries to derive accepted truth or executable intents directly from a fallible-tool observation is rejected at load time. A bad LLM parse cannot silently become a fact. A bad LLM proposal cannot silently become an effect.
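A minimal sketch of the candidate-promotion pattern, assuming a hypothetical acceptance rule keyed on extraction confidence. The field names, record shapes, and 0.9 threshold are invented for illustration.

```python
# Illustrative containment sketch -- candidate.* evidence must pass an
# explicit acceptance rule before it is promoted to accepted truth.
# Field names, shapes, and the 0.9 threshold are hypothetical.

def promote_candidates(candidates, accept):
    """Promote only the candidate facts the acceptance rule ratifies."""
    return [fact for fact in candidates if accept(fact)]

# LLM-extracted intake fields with per-field confidence.
candidates = [
    {"field": "dob", "value": "1984-03-01", "confidence": 0.98},
    {"field": "allergies", "value": "???", "confidence": 0.31},
]

# Acceptance rule: only high-confidence extractions become facts;
# everything else stays candidate evidence awaiting review.
accepted = promote_candidates(candidates, lambda f: f["confidence"] >= 0.9)
```

The load-time rejection described above is what makes this pattern mandatory rather than conventional: there is no rule shape that skips the acceptance step.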
What auditors get
`jacqos verify` produces a verification bundle — a single JSON
artifact that captures the evidence of a verification run. The bundle
is content-addressed, diff-stable across repeated runs, and designed
to be the document an auditor inspects. The full schema lives in the
Verification Bundle reference.
What the bundle evidences cryptographically:
- Evaluator identity. `evaluator_digest = hash(ontology IR, mapper semantics, helper digests)`. Two runs with the same digest are the same evaluator. A change to a rule, a mapper, or a helper changes the digest.
- World-state identity. Each fixture carries an `observation_digest` (the input sequence) and a `world_digest` (the derived state). Two bundles with the same evaluator digest and matching per-fixture world digests are semantically identical evaluators on that corpus.
- Replay determinism. The `replay_determinism` check confirms that running the same fixture twice from a clean database produced the same world digest. A failure here is a bug — non-deterministic mapper, helper, or evaluator behavior — and is treated as a blocker.
- Invariant satisfaction over the corpus. The `invariants` check records that every declared invariant held after every fixed point across every fixture and every property-tested generated scenario.
- Containment lints. The `candidate_authority_lints` check records that no accepted fact bypassed a `candidate.*` acceptance rule and no executable intent bypassed a `proposal.*` ratification relation.
- Redaction audit. The `secret_redaction` check records that no obvious secret material was found in exported fixtures, counterexamples, or the bundle itself.
- Shadow conformance. When configured, the `shadow_reference_evaluator` check records that an independent reference implementation produced bit-identical derived state.
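The digest fields make bundle comparison mechanical. Here is a sketch of what a diff tool over two bundles might do, using a deliberately simplified bundle shape (the real schema lives in the Verification Bundle reference).

```python
# Illustrative bundle diff -- the bundle shape here is a simplification,
# not the actual verification bundle schema.

def diff_bundles(a, b):
    """Report which identities differ between two bundles."""
    report = {}
    if a["evaluator_digest"] != b["evaluator_digest"]:
        report["evaluator"] = (a["evaluator_digest"], b["evaluator_digest"])
    for name in a["fixtures"]:
        wa = a["fixtures"][name]["world_digest"]
        wb = b["fixtures"].get(name, {}).get("world_digest")
        if wa != wb:
            report.setdefault("fixtures", []).append(name)
    return report

old = {"evaluator_digest": "e1", "fixtures": {"intake": {"world_digest": "w1"}}}
new = {"evaluator_digest": "e1", "fixtures": {"intake": {"world_digest": "w2"}}}

changed = diff_bundles(old, new)  # same evaluator, one fixture's world moved
```

Because the bundle is diff-stable across repeated runs, a non-empty report means a real behavior or input change, never formatting noise.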
What the bundle does not prove:
- It does not prove your invariants capture everything that should always be true. If you forgot to declare an invariant, the bundle will pass for a state that violates the rule you forgot to write.
- It does not prove your fixtures cover every meaningful scenario. Property testing helps, but coverage is bounded by the schemas and generators you declare.
- It does not prove your `.dh` rules are correct — only that they are consistent with the invariants and fixtures you declared.
- It does not prove the safety of code outside the ontology surface (helpers, custom Wasm modules, the host application embedding the evaluator).
The honest framing: the bundle is a machine-checkable claim about a specific evaluator against a specific corpus. The strength of that claim depends on how completely your invariants and fixtures characterize your domain.
Provenance for incident review
When something goes wrong in production, the question is rarely “what did the AI generate?” — it is “why did the system believe this?” Every derived fact and every intent in JacqOS carries structural provenance: explicit edges in a derivation graph that name the rule that fired, the atoms that satisfied the rule body, the observations those atoms came from, and the prior facts that contributed.
For an audit or incident review, this means every accepted fact, every executable intent, and every effect can be traced backward to the exact observation evidence that caused it. There is no hidden state to reconstruct, no audit log to cross-reference against — the provenance graph is part of the derived state itself, exported in every verification bundle, and rendered in Studio’s drill inspector.
A regulator asking “why did this customer get this offer?” or “on what basis did the model accept this medical-intake field?” can be answered by walking the provenance edges from the offer or the accepted fact back to the originating observations. The chain is mechanical, not narrative — it is what the evaluator already materialized to derive the fact in the first place.
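Walking the provenance edges is an ordinary graph traversal. Below is a sketch over a toy derivation graph; the node names and the parent-edge encoding are invented for illustration and do not reflect the exported graph format.

```python
# Illustrative provenance walk -- trace a derived fact back to the
# observations that caused it. Node names and edge shape are toy stand-ins
# for the derivation graph exported in the verification bundle.

def trace_to_observations(fact, parents):
    """Walk provenance edges backward; collect originating observations."""
    frontier, seen, origins = [fact], set(), []
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        seen.add(node)
        supports = parents.get(node, [])
        if not supports:          # no parents: this node is an observation
            origins.append(node)
        frontier.extend(supports)
    return sorted(origins)

# offer_approved <- rule over two atoms, each from one observation.
parents = {
    "offer_approved": ["atom_credit", "atom_consent"],
    "atom_credit": ["obs_17"],
    "atom_consent": ["obs_23"],
}
origins = trace_to_observations("offer_approved", parents)
```

The traversal terminates because observations have no incoming derivation edges: they are the leaves of every provenance chain.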
Effect capabilities
Anything the application does to the outside world flows through a
declared capability. Capabilities are listed in jacqos.toml and
checked at load time. Undeclared use is a hard load error — the
program will not start.
V1 declares five capabilities:
| Capability | Purpose |
|---|---|
| `http.fetch` | Declared outbound HTTP only |
| `llm.complete` | Explicit model call for LLM-assisted agents |
| `blob.put` / `blob.get` | Large raw body storage |
| `timer.schedule` | Request a future timer observation |
| `log.dev` | Developer diagnostics only, never canonical state |
This matters for two reasons. First, the set of external surfaces an application can touch is reviewable from a single text file — auditors do not need to read rule code to know what the agent might do. Second, mappers and helpers are sandboxed (no ambient I/O, capability-free pure functions) so the evaluator and rule layer cannot reach the outside world even by accident.
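The load-time check is essentially a set difference between the capabilities a program uses and the capabilities it declares. A sketch, with a hypothetical `DECLARED` set standing in for the list in `jacqos.toml`:

```python
# Illustrative load-time capability check. The real check runs inside
# the JacqOS loader; DECLARED stands in for a jacqos.toml declaration.

DECLARED = {"http.fetch", "llm.complete", "timer.schedule"}

def check_capabilities(used):
    """Return undeclared capabilities; any entry is a hard load error."""
    return sorted(set(used) - DECLARED)

# A program touching only declared surfaces loads cleanly...
assert check_capabilities({"http.fetch", "llm.complete"}) == []
# ...while an undeclared blob.put is refused before the program starts.
undeclared = check_capabilities({"http.fetch", "blob.put"})
```

The reviewable artifact is the declaration itself: an auditor reads one file, not the rule code, to bound what the agent can reach.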
What this is NOT
It is more important to be precise about what JacqOS does not provide than to oversell what it does.
- Not a SOC 2 attestation. SOC 2, ISO 27001, HIPAA, and similar frameworks attest to organizational controls — your access management, your incident response process, your vendor reviews. JacqOS is a runtime; it does not produce attestations and does not substitute for them. What it does provide is artifacts (the observation log, the verification bundle, capability declarations) that make the runtime portion of those frameworks easier to evidence.
- Not a substitute for security review of your `.dh` rules. The evaluator checks your rules against your invariants. It does not prove your rules implement the right business logic, and it does not catch invariants you forgot to declare. Domain experts must still review the invariant set against the policy it is meant to enforce.
- Not a guarantee against bugs in helper code or in the rules themselves. Helpers are capability-free pure functions, but pure functions can still implement the wrong logic. A helper that computes the wrong tax rate will compute it wrongly every time, deterministically. Determinism is a property of the runtime, not a proof of correctness.
- Not protection against malicious or compromised infrastructure. JacqOS runs on your hardware, in your environment. The usual operational controls — host hardening, network segmentation, key management, secret storage — are still your responsibility. The platform produces signed, content-addressed artifacts; how those artifacts are stored and distributed is up to you.
- Not a sandbox for arbitrary AI code. JacqOS contains AI agents by routing their output through the `candidate.*` and `proposal.*` relays before any rule can act on it. It does not run AI-generated Rust, Python, or shell scripts. The containment model only applies to evidence and proposals expressed through the observation pipeline.
If a stakeholder asks “is JacqOS safe?”, the honest answer is: “JacqOS makes specific properties structurally enforced and specific failure modes impossible within the evaluator boundary. Everything outside that boundary is the same engineering review you would do for any production system.”
Compliance posture
Three platform properties combine into the artifacts compliance reviews actually consume.
Reproducible local replay
Every observation log can be replayed deterministically on a clean database to produce bit-identical derived state. This is not “we have logs you can grep” — it is the runtime guarantee that the same observations always produce the same facts, intents, and effect receipts. For audit purposes, this means a question of the form “what would the system have done if we had received this observation at this point in the lineage?” has a single, mechanically reproducible answer.
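The check itself is simple to picture: replay the log twice from a clean state and compare digests. In this sketch the fold step and the hash are stand-ins for the real evaluator.

```python
# Illustrative replay-determinism check: replay the same observation log
# twice from a clean state and compare derived-state digests. The fold
# step and the hash are stand-ins for the real evaluator.
import hashlib
import json

def replay(observations):
    """Deterministically fold observations into a world digest."""
    state = list(observations)  # stand-in for the real derivation
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

log = [{"kind": "intake", "field": "dob", "value": "1984-03-01"}]
digest = replay(log)
assert digest == replay(log)  # bit-identical on re-run
```

Any dependence on wall-clock time, iteration order, or ambient state would break this equality, which is why replay determinism is treated as a blocking check.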
Content-addressed observations and bundles
Observations are append-only. The verification bundle’s
`observation_digest` is a hash of the observation sequence; its
`world_digest` is a hash of the derived state; its `evaluator_digest`
is a hash of the ontology IR, mapper semantics, and helper digests.
A diff between two bundles tells you exactly which fixtures, which
rules, or which mapper changes account for a behavior change. There
is no scenario where “the logs were edited” — the digests would
change.
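Content addressing is what makes tampering self-evident: any edit to the observation sequence changes the digest. A sketch with a stand-in hash and log shape:

```python
# Illustrative content-addressing sketch: the observation digest hashes
# the append-only sequence, so any edit to the log changes the digest.
# The hash and log shape are stand-ins, not the bundle schema.
import hashlib
import json

def observation_digest(observations):
    blob = json.dumps(observations, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

log = [{"seq": 1, "kind": "intake"}, {"seq": 2, "kind": "consent"}]
original = observation_digest(log)

# "The logs were edited" is detectable: the digest no longer matches.
edited = [dict(log[0], kind="tampered"), log[1]]
assert observation_digest(edited) != original
```

Storing only the digest alongside a build or release is enough to later prove whether a presented log is the one that actually ran.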
Verification digests as evidence
A verification bundle attached to a build, a release, or a regulator submission is a machine-checkable claim of the form: “this evaluator (`evaluator_digest`), against this corpus (per-fixture `observation_digest`), produced this derived state (per-fixture `world_digest`), and every declared invariant held.” The auditor does not have to trust the developer’s account of what the system does — they can recompute the bundle on a clean install and compare digests.
For frameworks that care about change control, the package boundary
makes this concrete: the evaluation package is a frozen runtime
handoff — a single content-addressed bundle of the ontology, mappers,
helpers, and prompts. A change to any of those changes the
`package_digest`. Promotion from a shadow build to the
effect-authoritative evaluator is an explicit, recorded transition,
not an ambient deploy.
None of this replaces the organizational controls a compliance framework requires. It does mean the runtime portion of those controls — “show us that the system that ran on this date is the same system that produced these test results” — has a one-line answer: compare the digests.
Where to go next
- Observation-First Thinking — the pipeline behind the satisfiability boundary
- Invariant Review — what humans declare versus what the evaluator checks
- Visual Provenance — tracing any derived fact back to the observations that caused it
- Verification Bundle — the full schema of the audit artifact
- What is JacqOS? — the introduction this page branches off