Using Fallible Sensors Safely
Sometimes the outside world does not arrive as clean fact. It arrives as a voice transcript, an OCR parse, an LLM extraction, a vision label, or a heuristic guess. Those tools are useful, but they are not trustworthy enough to become shared system truth on their own.
JacqOS treats these systems as fallible sensors. They can observe and propose, but they cannot decide what becomes real. Their output stays behind the candidate-evidence boundary until explicit acceptance rules promote it into trusted fact.
Why This Matters
BBC reporting in 2025 described a viral Taco Bell drive-thru prank where a voice AI accepted an order for 18,000 waters. The important lesson is not the prank itself. It is the trust-boundary failure: an absurd interpretation crossed too directly from “the system thinks this is what the customer said” into “the system is ready to act on it.”
JacqOS is designed to prevent that class of failure.
Instead of letting a fallible interpretation drive execution immediately, JacqOS keeps three things separate:
- Evidence — what the outside world said or what a sensor returned
- Candidate evidence — the system’s current proposal for what that evidence means
- Accepted fact — what the system is actually willing to believe and use for downstream action
That separation is the difference between “the voice model heard 18,000 waters” and “the store is now committed to an impossible order.”
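To make the separation concrete, here is a minimal illustrative sketch in Python. The class names and fields are hypothetical, not JacqOS APIs; the point is that each tier is a distinct type, so a proposal cannot be handed to code that expects an accepted fact.

```python
from dataclasses import dataclass

# Illustrative sketch only: these class names are hypothetical, not JacqOS APIs.
# Each tier is a distinct type, so "the model heard 18,000 waters" cannot be
# passed where an accepted fact is required.

@dataclass(frozen=True)
class Evidence:
    """What the outside world said; append-only, never edited."""
    observation_id: str
    payload: str

@dataclass(frozen=True)
class Candidate:
    """The system's current proposal for what the evidence means."""
    source: Evidence
    predicate: str
    value: object

@dataclass(frozen=True)
class AcceptedFact:
    """What the system is willing to believe and use for downstream action."""
    predicate: str
    value: object

heard = Evidence("obs-1", '{"item": "water", "quantity": 18000}')
proposal = Candidate(heard, "candidate.quantity", 18000)
# No constructor here turns a Candidate into an AcceptedFact; only explicit
# acceptance logic (shown later on this page) may do that.
```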
What Counts As A Fallible Sensor?
A fallible sensor is any component that produces a semantic interpretation that may be wrong.
Common examples:
- LLMs extracting structure from free-form text
- Speech-to-text or voice ordering systems
- OCR reading handwritten or scanned input
- Vision models labeling scenes or objects
- External classifiers scoring fraud, urgency, or intent
- Heuristic parsers that guess at meaning from messy payloads
These are different technologies, but they create the same product problem: they generate useful proposals that should not be treated as trusted fact by default.
Three Terms To Keep Straight
The same pattern shows up in product docs, runtime specs, and ontology code under three different names. They describe the same boundary at different layers:
- Fallible sensor — product-language term for a component that interprets the world and can be wrong
- `requires_relay` — formal mapper-contract term for atoms that must first pass through a reserved trust-boundary namespace
- `candidate.*` — ontology namespace for non-authoritative proposals that have crossed the mapper boundary but have not been accepted yet
The implementation is not “LLM-specific safety.” It is a general trust-boundary mechanism that happens to apply cleanly to LLMs, speech, OCR, and vision.
The JacqOS Pattern
In product terms, the rule is simple:
A fallible sensor can propose. It cannot make truth.
The common JacqOS pipeline looks like this:
```
effect runtime -> observation -> atoms -> candidate.* -> accepted facts -> intent.* -> downstream effects
```

What this means in practice:

- Effect runtime performs world-facing work through declared capabilities (`llm.complete`, `http.fetch`, etc.).
- Observation records the result as append-only evidence.
- Mapper extracts deterministic atoms from that observation.
- Mapper contract marks selected atoms as `requires_relay`.
- Ontology derives `candidate.*` facts from those atoms.
- Acceptance rules promote candidates into trusted facts.
- Downstream intents derive only from trusted facts or explicit review flows.
When sensor output already arrives as an ingress observation, the pipeline simply starts at the observation step. The trust boundary stays the same.
This separation keeps world contact, evidence capture, semantic interpretation, and action derivation distinct.
Why Trust Marking Lives At The Mapper, Not The Effect
It would be tempting to say, “This effect produces candidates.” JacqOS does not model it that way, and that is the right choice.
The trust question is not “Which capability produced this data?” The trust question is “Which parts of this observation are safe to treat as authoritative?”
One observation often contains a mix of:
- trusted structural data
- untrusted semantic interpretation
For example, a voice-order parse might contain a stable order ID, a timestamp, a store ID, a guessed item, a guessed quantity, and a confidence score. The structural fields are often safe to use directly. The interpreted fields are not.
If candidate status were attached to the whole effect result, the model would be too coarse:
- you would over-quarantine trustworthy structural fields
- you could not express partial trust within one observation
- imports, fixtures, and replayed observations would need special-case behavior
JacqOS instead attaches trust marking at the mapper-output level. That lets one observation carry both ordinary atoms and relay-required atoms side by side.
Authoring A Candidate-Relay Mapper
The mapper contract declares which atom classes require explicit acceptance. Every Rhai mapper exposes a `mapper_contract()` function that the loader reads at startup, plus a `map_observation(obs)` function that runs per observation.
```rhai
fn mapper_contract() {
    #{
        requires_relay: [
            #{
                observation_class: "voice_parse",
                predicate_prefixes: ["parse."],
                relay_namespace: "candidate",
            }
        ],
    }
}

fn map_observation(obs) {
    let body = parse_json(obs.payload);

    [
        atom("order.id", body.order_id),
        atom("order.store_id", body.store_id),
        atom("parse.item", body.item),
        atom("parse.quantity", body.quantity),
        atom("parse.confidence", body.confidence),
    ]
}
```

In that example:

- `order.id` and `order.store_id` are ordinary atoms — safe to use directly.
- `parse.item`, `parse.quantity`, and `parse.confidence` are marked `requires_relay` — they must be promoted through `candidate.*` before any rule can rely on them.
The shell enforces this by matching the mapper contract’s `observation_class` and the configured `predicate_prefixes`, then setting `CanonicalAtom.relay_namespace` on the matching atoms in the canonical mapper export.
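The matching step itself is mechanical. Here is a minimal Python sketch, assuming simplified dict-shaped atoms and the contract from the Rhai example above; the real `CanonicalAtom` shape is richer, and the field names below are illustrative.

```python
def mark_relay_atoms(atoms, observation_class, contract):
    """Set relay_namespace on atoms whose predicate matches a configured
    prefix for this observation class. Atoms are plain dicts here for
    brevity; this is an illustration, not the JacqOS shell."""
    marked = []
    for atom in atoms:
        atom = dict(atom)  # copy so the caller's data is not mutated
        for entry in contract["requires_relay"]:
            if entry["observation_class"] != observation_class:
                continue
            if any(atom["predicate"].startswith(p)
                   for p in entry["predicate_prefixes"]):
                atom["relay_namespace"] = entry["relay_namespace"]
        marked.append(atom)
    return marked

contract = {
    "requires_relay": [
        {"observation_class": "voice_parse",
         "predicate_prefixes": ["parse."],
         "relay_namespace": "candidate"},
    ],
}
atoms = [
    {"predicate": "order.id", "value": "ord-17"},
    {"predicate": "parse.quantity", "value": 18000},
]
marked = mark_relay_atoms(atoms, "voice_parse", contract)
# order.id stays ordinary; parse.quantity is routed through "candidate"
```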
Partial Trust Within One Observation
This is the key design point. From a single `voice_parse` observation, JacqOS can trust:
- which order the payload belongs to
- which store emitted it
- when it was recorded
while still refusing to trust:
- what item the customer asked for
- how many units they asked for
- whether the system interpreted a correction correctly
That means the system can safely hang review workflows, provenance, and audit history off the same observation without treating the interpreted content as accepted truth.
Authoring An Acceptance Rule
The mapper marks atoms. The ontology decides what those atoms mean and when they cross from candidate into accepted fact.
```dh
relation candidate.requested_item(order_id: text, item: text)
relation candidate.quantity(order_id: text, quantity: int)
relation accepted_order_item(order_id: text, item: text)
relation accepted_quantity(order_id: text, quantity: int)
relation customer_confirmed(order_id: text)
relation order_requires_review(order_id: text)

rule assert candidate.requested_item(order, item) :-
    atom(obs, "order.id", order),
    atom(obs, "parse.item", item).

rule assert candidate.quantity(order, qty) :-
    atom(obs, "order.id", order),
    atom(obs, "parse.quantity", qty).

rule accepted_order_item(order, item) :-
    candidate.requested_item(order, item),
    customer_confirmed(order).

rule accepted_quantity(order, qty) :-
    candidate.quantity(order, qty),
    customer_confirmed(order),
    qty > 0,
    qty <= 8.

rule order_requires_review(order) :-
    candidate.quantity(order, qty),
    qty > 8.
```

Notice what happens here:

- `candidate.*` captures the proposal.
- `accepted_*` captures what the system is willing to believe.
- Suspicious proposals derive review paths instead of action paths.
This is exactly how you stop “18,000 waters” from going straight to POS.
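For readers more comfortable tracing imperative code, the promotion logic in the .dh rules above can be sketched in Python. This is an illustration of the rule semantics only, not how JacqOS evaluates .dh.

```python
def promote_quantity(candidate_qty, customer_confirmed,
                     min_qty=1, max_qty=8):
    """Mirror of the accepted_quantity / order_requires_review rules.
    Returns (accepted quantity or None, requires_review flag).
    Illustrative sketch; the bounds mirror the .dh example above."""
    if candidate_qty > max_qty:
        return None, True          # suspicious: derive review, not action
    if customer_confirmed and min_qty <= candidate_qty <= max_qty:
        return candidate_qty, False
    return None, False             # plausible but unconfirmed: stay a candidate

# "18,000 waters" cannot promote, even with a confirmation turn:
assert promote_quantity(18000, customer_confirmed=True) == (None, True)
# A plausible confirmed order promotes:
assert promote_quantity(2, customer_confirmed=True) == (2, False)
# An unconfirmed plausible order stays pending:
assert promote_quantity(2, customer_confirmed=False) == (None, False)
```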
Operator Reminder: = Versus ==
.dh reserves `=` for binding (aggregate binds and `helper.*` calls only) and uses `==`, `!=`, `<`, `<=`, `>`, `>=` for comparisons. The acceptance rule above uses `>` and `<=` to bound the quantity; an attempt to write `qty = 8` in clause position is rejected at load time. See the .dh Language Reference for the full grammar.
What Load-Time Validation Enforces
Once atoms are marked `requires_relay`, you do not get to skip the candidate boundary.
If you try to derive an accepted fact directly from those atoms, the validator rejects the ontology at load time with diagnostic E2401:
```
<relation> derives from requires_relay observations without a <namespace> relay
```

The derivation must pass through:

- a declared `candidate.*` relation, and
- an explicit acceptance rule that uses additional evidence (review events, thresholds, corroboration, confirmation turns).
That turns candidate-evidence from a convention into an enforced trust boundary. The check is implemented in `validate_relay_boundaries` and keys on mapper predicate configuration, not on observation class strings.
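The shape of that check can be sketched as a dependency walk. Everything below is illustrative: rules are flattened to head-to-body-predicate lists, which is far simpler than the parsed ontology the real validator operates on.

```python
def check_relay_boundary(rules, relay_prefixes, relay_namespace="candidate"):
    """Toy relay-boundary check. `rules` maps a rule head to the list of
    predicates in its body. Heads that consume relay-marked atom predicates
    without going through the relay namespace get an E2401-style diagnostic."""
    diagnostics = []
    for head, body in rules.items():
        if head.startswith(relay_namespace + "."):
            continue  # candidate.* relations are allowed to read relay atoms
        uses_relay_atoms = any(
            pred.startswith(prefix)
            for pred in body
            for prefix in relay_prefixes
        )
        relayed = any(pred.startswith(relay_namespace + ".") for pred in body)
        if uses_relay_atoms and not relayed:
            diagnostics.append(
                f"E2401: {head} derives from requires_relay observations "
                f"without a {relay_namespace} relay"
            )
    return diagnostics

rules = {
    "candidate.quantity": ["order.id", "parse.quantity"],
    "accepted_quantity": ["candidate.quantity", "customer_confirmed"],
    "shortcut_quantity": ["order.id", "parse.quantity"],  # illegal shortcut
}
errors = check_relay_boundary(rules, relay_prefixes=["parse."])
# only shortcut_quantity is rejected
```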
candidate.* Is Not Committed Truth
`candidate.*` relations are not just another accepted domain surface. They are non-authoritative ontology input. They can influence review, comparison, and promotion logic, but they are not committed worldview by themselves.
That is why the pattern is so useful:
- the system can remember what was proposed
- operators can inspect what was proposed
- invariants can reason about what was proposed
- downstream action can still be withheld until explicit acceptance happens
Worked Example: Drive-Thru Order
End-to-end, here is what happens when a voice ordering system hears:
“No, I said water.”
The voice parser produces an observation. The mapper emits trusted structural atoms (`order.id`, `order.store_id`) plus relay-required atoms (`parse.item = "water"`, `parse.quantity = 18000`, `parse.confidence = 0.41`).
The ontology derives:
- `candidate.requested_item(order, "water")`
- `candidate.quantity(order, 18000)`
- `candidate.parse_confidence(order, 0.41)`
In a workflow-first system, that parse might flow straight into a POS submission attempt.
In JacqOS, it does not. The acceptance rule for `accepted_quantity` requires `qty <= 8`, so 18000 cannot promote. Instead the ontology derives:
- `order_requires_review(order)`
- `order_requires_confirmation(order)`
And blocks:
`intent.submit_pos_order(order, ...)`
until the customer confirms or a human approves the interpretation.
The same pattern works for many other cases:
- OCR thinks an invoice total is `$80,000` instead of `$800.00`
- A vision model flags a production image as “unsafe”
- An LLM claims a patient has hypertension when the note is ambiguous
- A heuristic parser infers a cancellation request from a frustrated but non-cancelling message
Why Candidate-Evidence Is Valuable
This pattern gives you practical benefits immediately:
- Safer automation — absurd, low-confidence, or conflicting interpretations do not go straight to execution.
- Better auditability — you can inspect what the sensor proposed, why it was accepted or rejected, and which observation introduced it.
- Cleaner review paths — human confirmation and deterministic checks become explicit ontology surfaces instead of ad hoc application code.
- Broader reuse — the same boundary works for LLMs, speech, OCR, vision, and heuristics.
- Stronger testing — disagreement and contradiction fixtures prove the safety path, not just the happy path.
Design Checklist
When you use a fallible sensor in JacqOS, start with this checklist:
- Keep acquisition separate from acceptance logic. When JacqOS initiates the sensor call, world contact lives in the effect runtime; when sensor output arrives as ingress, start at observation and keep the same trust boundary.
- Keep mappers deterministic. They classify and flatten observations; they do not decide truth.
- Mark only the uncertain atom classes. Leave trusted structural fields ordinary when appropriate.
- Route sensor output through `candidate.*`, not directly into accepted domain facts.
- Promote through explicit rules. Human review, thresholds, corroboration, and contradiction checks all belong here.
- Write invariants for impossible or dangerous states. “18,000 waters” should be logically impossible to auto-accept.
- Derive review or confirmation intents from suspicious candidates.
- Keep downstream effects behind accepted facts. External actions should derive from trusted worldview, not raw proposals.
- Ship disagreement fixtures. Prove that bad sensor output stays contained.
Diagnostic Reference
| Code | Severity | Message Template | When You See It |
|---|---|---|---|
| `E2401` | Error | `<relation> derives from requires_relay observations without a <namespace> relay` | A rule head accepts `requires_relay`-marked atoms without going through the configured relay namespace (`candidate` for sensors, `proposal` for action suggestions). |
The full validator diagnostic inventory lives in the .dh Language Reference. E2401 is the only code reserved for relay-boundary violations in V1.
Reference Example
The Drive-Thru Ordering Walkthrough turns this pattern into a concrete app without depending on a specific brand.
Its shape is straightforward:
- Observations: captured audio, parsed voice order, customer confirmation, crew review, POS submission result
- Candidate surfaces: `candidate.requested_item`, `candidate.quantity`, `candidate.modifier`
- Accepted surfaces: `accepted_order_item`, `accepted_quantity`, `accepted_modifier`
- Review surfaces: `order_requires_confirmation`, `order_requires_review`
- Action surface: `intent.submit_pos_order`
The ontology proves that:
- absurd quantities cannot auto-promote
- low-confidence parses require confirmation or review
- correction turns can replace old candidates with new ones cleanly
- POS submission never derives from unaccepted candidates
Its fixture set includes:
- a happy path with clean speech and immediate confirmation
- a correction-turn path where the customer changes the item or quantity
- an impossible-order path such as “18,000 waters”
- a disagreement path where the crew rejects the parse
You can replay the impossible-order path with `jacqos replay fixtures/impossible-order.jsonl`, inspect the contradiction history from a correction turn, and verify with `jacqos verify` that no raw parse candidate can derive a POS submission.
Beyond LLMs
The phrase “candidate-evidence pattern” often appears in JacqOS docs through the LLM lens, because that is the clearest familiar example. The runtime mechanism is more general.
The same pattern works for:
- speech parsing
- OCR
- vision classifiers
- heuristic extractors
- vendor scoring systems
- imported non-deterministic model output
In all of these cases, the rule is the same:
If the observation contains fallible semantic interpretation, mark the relevant atoms `requires_relay` with `relay_namespace = "candidate"`, route them through `candidate.*`, and only then promote them into trusted facts.
JacqOS lets you build systems where sensors can be helpful without being authoritative, the platform records what was proposed and what was accepted, and humans review invariants and fixtures instead of reading generated glue code.
Going Deeper
- Drive-Thru Ordering Walkthrough — a concrete fallible-sensor app with correction turns and bounded POS submission
- Action Proposals — the sibling pattern for `proposal.*` (model-suggested actions, not interpretations)
- LLM Agents — candidate-evidence applied specifically to `llm.complete`
- Observation-First Thinking — why evidence and belief are separate in JacqOS
- Medical Intake Walkthrough — a concrete candidate-evidence example with clinician review
- .dh Language Reference — load-time rejection rules and candidate-evidence syntax