Visual Provenance

The Problem: Generated Code You Can’t Trace

When an AI agent misbehaves, the instinct is to open the code and trace the logic. With AI-generated rules, this breaks down fast.

The rules may be dense, unfamiliar, and optimized for correctness rather than readability. There could be dozens of derivation rules across multiple strata, with negation and aggregation interacting in ways that are hard to simulate mentally. Even if you understand Datalog, you’re reading someone else’s solution to a problem the AI interpreted from your constraints.

You don’t want to debug the implementation. You want to answer: “Why did the system believe this? What evidence led here?”

That’s what visual provenance gives you.

JacqOS Studio provides a provenance drill — a three-section inspector (Action, Timeline, Provenance) that traces any derived fact backward to the observations that produced it. The Provenance section unpacks the chain in five sub-stops — Decision, Facts, Observations, Rule, Ontology — so you can read the derivation top to bottom without ever opening a single generated rule.

Every fact in JacqOS carries structural provenance — not log entries, but edges in a derivation graph:

  • Which rule derived the fact
  • Which atoms satisfied the rule body
  • Which observations produced those atoms
  • Which prior facts contributed (for recursive or multi-step derivation)
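One way to picture this derivation graph is as facts that carry edges back to their rule, atoms, and parent facts. The sketch below is illustrative only — the class names and fields are assumptions, not JacqOS's actual data model:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Atom:
    """An atom extracted from an observation (hypothetical shape)."""
    observation_id: str   # e.g. "obs-3"
    key: str              # e.g. "reserve.succeeded"
    value: str

@dataclass
class DerivedFact:
    """A derived fact with structural provenance edges."""
    name: str                                    # e.g. "booking_confirmed"
    args: tuple                                  # e.g. ("req-1", "slot-42")
    rule: str                                    # rule name + source location
    atoms: list = field(default_factory=list)    # Atom edges that satisfied the body
    parents: list = field(default_factory=list)  # prior DerivedFact edges

def trace(fact, depth=0):
    """Walk the derivation graph backward, one line per edge."""
    indent = "  " * depth
    lines = [f"{indent}{fact.name}{fact.args} <- rule: {fact.rule}"]
    for atom in fact.atoms:
        lines.append(f"{indent}  atom({atom.observation_id}, {atom.key!r}, {atom.value!r})")
    for parent in fact.parents:
        lines.extend(trace(parent, depth + 1))
    return lines
```

Calling `trace` on a fact built this way reproduces the same backward-reading chain the drill inspector renders: the fact, then the rule, then the atoms and observations that fed it.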

Select any Activity row in Studio, and the drill inspector renders the chain in text form across three flat sections — Action, Timeline, and Provenance. The Provenance section is itself sub-divided (Decision, Facts, Observations, Rule, Ontology) so the same evidence chain can be read in two complementary orders: reverse-chronological in Timeline, and structural top-to-bottom in the Provenance section:

booking_confirmed("req-1", "slot-42")
← rule: assert booking_confirmed (rules.dh:12)
← atom(obs-3, "reserve.succeeded", "true")
← Observation obs-3: reserve.result
← atom(obs-3, "reserve.request_id", "req-1")
← atom(obs-3, "reserve.slot_id", "slot-42")

One click takes you from a derived action to the raw observation that caused it. No code reading required.

Studio can inspect a live jacqos serve session through the same HTTP and SSE surfaces that adapters use:

export JACQOS_STUDIO_SERVE_URL=http://127.0.0.1:8787
export JACQOS_STUDIO_LINEAGE=live-demo
jacqos-studio

In serve mode, Studio reads lineage status, observation tail, fact and intent deltas, effects, run records, provenance neighborhoods, and reconciliation.required events from the public serve endpoints. The live view is still observation-first: every row you inspect traces back to observations and rules, not to a hidden runtime object.
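Those SSE surfaces use the standard `text/event-stream` wire format. As a minimal, endpoint-agnostic sketch (the event name below comes from the `reconciliation.required` events mentioned above; nothing here depends on JacqOS-specific paths), a parser for that format looks like:

```python
def parse_sse(stream_text):
    """Parse a text/event-stream payload into (event, data) pairs.

    Per the SSE format, events are separated by blank lines and each
    line is "field: value". Only the event and data fields are handled.
    """
    events = []
    event_type, data_lines = "message", []
    for line in stream_text.splitlines():
        if line == "":  # blank line dispatches the accumulated event
            if data_lines:
                events.append((event_type, "\n".join(data_lines)))
            event_type, data_lines = "message", []
        elif line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
    return events

stream = 'event: reconciliation.required\ndata: {"lineage": "live-demo"}\n\n'
print(parse_sse(stream))
```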

The drill inspector answers three questions about every Activity row.

What’s in the ontology around this action

The Ontology destination groups every relation by stratum and color-codes reserved prefixes (atom, candidate., proposal., intent., observation.). Selecting a relation shows its stratum index and prefix kind. This is the “architecture” view: it answers “what relations exist, and how are they classified?” without reading any .dh source.

The visual rule graph — relations as nodes, derivation and negation edges, stratum boundaries, coverage overlays — ships in V1.1.
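The prefix classification the Ontology destination performs can be pictured as a simple lookup over the reserved prefixes listed above. The function below is an illustrative sketch, not Studio's implementation:

```python
# Reserved prefixes from the Ontology view; "atom" is a bare relation name.
RESERVED_PREFIXES = ("candidate.", "proposal.", "intent.", "observation.")

def classify_relation(name):
    """Classify a relation name the way the Ontology view color-codes it."""
    if name == "atom":
        return "atom"
    for prefix in RESERVED_PREFIXES:
        if name.startswith(prefix):
            return prefix.rstrip(".")
    return "derived"  # any non-reserved relation is ordinary derived state
```

For example, `intent.reserve_slot` classifies as `intent`, while `booking_confirmed` falls through to `derived`.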

Each Activity row carries the full derivation in its drill inspector. Inside the Provenance section, the Facts sub-stop lists which derived facts contributed; the Observations sub-stop lists the atoms that satisfied each rule body and the observations they came from. The Timeline section anchors on the receipt fact and walks backward through Effect → Intent → Decision → Proposal → Observation events.

This is the “runtime” view. It answers: “What did happen? Which evidence produced this action?”

The Decision section shows the ratifying decision (for proposal-gated intents) or the rule that fired (for direct derivations). The Rule section names the rule and source location today; inline .dh snippets join it in V1.1. The Ontology section shows relation and stratum context today, with the visual rule-graph surface joining it in V1.1.

intent.reserve_slot("req-2", "slot-42")
← rule: intent.reserve_slot (intents.dh:4)
← booking_request("req-2", "sam@example.com", "slot-42")
← atom(obs-2, "booking.email", "sam@example.com")
← Observation obs-2: booking.request
← NOT slot_reserved("slot-42")
(no matching fact at this evaluation point)

This is the “why” view. It answers: “Why does this specific tuple exist? What exact evidence chain produced it?”

Rule Debugging Through Effects, Not Mental Simulation

Traditional Datalog debugging asks you to simulate the fixed-point computation in your head: “What would this rule match? What about after that rule fires? What about the negation in stratum 3?” This is impractical with AI-generated rules you didn’t write.

JacqOS flips this. Instead of simulating what should happen, you inspect what did happen.

Select any Activity row whose derived fact you want to trace. The drill inspector shows exactly which rule fired, which atoms satisfied the body, and which observations produced those atoms. Every binding is concrete — not “this rule could match X,” but “this rule did match obs-7’s atom with value ‘slot-42’.”

When a fact you expected is missing, the verification bundle records why each candidate rule didn’t fire:

  • Rule A: body clause 2 failed — no atom matching reserve.succeeded("req-3", _)
  • Rule B: negation check succeeded — request_cancelled("req-3") exists, blocking derivation

You see the specific point where each candidate rule stopped matching. No mental simulation needed — the evaluator already did the work, and the bundle exposes the result. Querying for missing facts directly from a Studio surface ships in V1.1.
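Each "why not" record amounts to the first body clause a candidate rule could not satisfy. A toy illustration of that idea — the clause representation here is hypothetical, not the bundle's actual schema:

```python
def explain_missing(rule_name, body, facts):
    """Explain why a rule did not fire: report the first blocking clause.

    body is a list of (negated, fact) pairs; facts is the set of known facts.
    """
    for i, (negated, fact) in enumerate(body, start=1):
        present = fact in facts
        if negated and present:
            return (f"{rule_name}: negation check succeeded - "
                    f"{fact} exists, blocking derivation")
        if not negated and not present:
            return (f"{rule_name}: body clause {i} failed - "
                    f"no fact matching {fact}")
    return f"{rule_name}: all clauses satisfied - rule fires"

facts = {'booking_request("req-3")', 'request_cancelled("req-3")'}
# Rule A needs a positive fact that never arrived.
print(explain_missing("Rule A", [(False, 'reserve.succeeded("req-3")')], facts))
# Rule B is blocked by a negation over an existing fact.
print(explain_missing("Rule B",
                      [(False, 'booking_request("req-3")'),
                       (True, 'request_cancelled("req-3")')], facts))
```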

For intents that fired and produced effects, Studio shows the full lifecycle:

intent.send_confirmation("req-1", "pat@example.com")
→ Effect: http.fetch POST /api/send-email
→ Status: completed
→ Result observation: obs-8 (email.send_result)
→ Derived: confirmation_sent("req-1")

You can trace from intent to effect execution to the resulting observation and back into the next round of derivation. The entire loop is visible.

From Bad Fact to Exact Rule to Why It Fired

Here’s the debugging workflow when something goes wrong:

1. Spot the problem. You see a fact that shouldn’t exist, an intent that shouldn’t have fired, or an expected fact that’s missing.

2. Open the drill inspector. Click the Activity row for the bad action. The drill inspector shows the full derivation chain — every rule, every atom, every observation — across the Decision, Facts, and Observations sub-stops of the Provenance section.

3. Identify the rule. The Decision and Rule sections name the exact rule (with source location) that derived the bad fact. You don’t need to search — the inspector takes you there.

4. Understand why it fired. The drill inspector shows the concrete bindings. You can see which atoms matched, which observations they came from, and (in V1.1) which negation checks passed or failed.

5. Inspect the rule in context. Open the Ontology destination to see the rule’s stratum and prefix kind. Per-rule visual context — neighboring relations, derivation edges, negation edges — ships with the V1.1 visual rule graph.

6. Fix the invariant or fixture. Now you know what happened and why. Add an invariant that forbids this state, or add a fixture that exercises this scenario. The AI regenerates rules until the invariant holds and the fixture passes.

At no point did you need to read the generated rule syntax. You saw the rule’s effect — what it matched, what it produced — and traced the evidence chain. The generated code is an implementation detail.

Studio’s Compare lens chip lets you pin a comparison evaluator alongside the live one from the Activity bottom bar:

  • Fact diff — which facts exist in one version but not the other
  • Provenance diff — which derivation paths changed
  • Rule diff — which rules produced different results
  • New observations — which observations changed the derivation

The dual-pane render — both worldviews side by side in the Activity surface — ships in V1.1; in V1 the Compare lens chip surfaces the comparison evaluator’s identity but does not yet split the row stream. The same fact-diff data is exported in every verification bundle, so CI and tooling can already consume it.
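At its core, the fact diff exported in verification bundles is a set difference between two evaluators' derived facts. A minimal sketch of that computation (the dictionary keys are illustrative, not the bundle's field names):

```python
def fact_diff(live_facts, compare_facts):
    """Diff two evaluators' fact sets, as the Compare lens and bundles expose.

    Returns facts unique to the live evaluator, facts unique to the
    comparison evaluator, and the shared intersection.
    """
    live, compare = set(live_facts), set(compare_facts)
    return {
        "only_live": sorted(live - compare),
        "only_compare": sorted(compare - live),
        "shared": sorted(live & compare),
    }
```

Because the bundle carries this data today, CI can fail a build when `only_live` or `only_compare` is non-empty for facts that should be version-stable.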

Provenance Completes the Verification Surface

Visual provenance is the third leg of JacqOS’s verification model:

Surface             What it answers
Invariants          “Are the universal constraints satisfied?”
Golden fixtures     “Does the system produce the right output for known inputs?”
Visual provenance   “Why did this specific thing happen?”

Invariants catch violations. Fixtures prove correct behavior. Provenance explains why — both when things go right and when they go wrong.

Together, these three surfaces mean you can verify, debug, and understand AI agent behavior without ever reading the generated .dh rules. You review what the system must do (invariants), what it does do (fixtures), and why it does it (provenance).