Using Fallible Sensors Safely

Sometimes the outside world does not arrive as clean fact. It arrives as a voice transcript, an OCR parse, an LLM extraction, a vision label, or a heuristic guess. Those tools are useful, but they are not trustworthy enough to become shared system truth on their own.

JacqOS treats these systems as fallible sensors. They can observe and propose, but they cannot decide what becomes real. Their output stays behind the candidate-evidence boundary until explicit acceptance rules promote it into trusted fact.

BBC reporting in 2025 described a viral Taco Bell drive-thru prank where a voice AI accepted an order for 18,000 waters. The important lesson is not the prank itself. It is the trust-boundary failure: an absurd interpretation crossed too directly from “the system thinks this is what the customer said” into “the system is ready to act on it.”

JacqOS is designed to prevent that class of failure.

Instead of letting a fallible interpretation drive execution immediately, JacqOS keeps three things separate:

  • Evidence — what the outside world said or what a sensor returned
  • Candidate evidence — the system’s current proposal for what that evidence means
  • Accepted fact — what the system is actually willing to believe and use for downstream action

That separation is the difference between “the voice model heard 18,000 waters” and “the store is now committed to an impossible order.”

A fallible sensor is any component that produces a semantic interpretation that may be wrong.

Common examples:

  • LLMs extracting structure from free-form text
  • Speech-to-text or voice ordering systems
  • OCR reading handwritten or scanned input
  • Vision models labeling scenes or objects
  • External classifiers scoring fraud, urgency, or intent
  • Heuristic parsers that guess at meaning from messy payloads

These are different technologies, but they create the same product problem: they generate useful proposals that should not be treated as trusted fact by default.

The same pattern shows up in product docs, runtime specs, and ontology code under three different names, each describing the same boundary at a different layer:

  • Fallible sensor — product-language term for a component that interprets the world and can be wrong
  • requires_relay — formal mapper-contract term for atoms that must first pass through a reserved trust-boundary namespace
  • candidate.* — ontology namespace for non-authoritative proposals that crossed the mapper boundary but have not been accepted yet

The implementation is not “LLM-specific safety.” It is a general trust-boundary mechanism that happens to apply cleanly to LLMs, speech, OCR, and vision.

In product terms, the rule is simple:

A fallible sensor can propose. It cannot make truth.

The common JacqOS pipeline looks like this:

effect runtime -> observation -> atoms -> candidate.* -> accepted facts -> intent.* -> downstream effects

What this means in practice:

  1. Effect runtime performs world-facing work through declared capabilities (llm.complete, http.fetch, etc.).
  2. Observation records the result as append-only evidence.
  3. Mapper extracts deterministic atoms from that observation.
  4. Mapper contract marks selected atoms as requires_relay.
  5. Ontology derives candidate.* facts from those atoms.
  6. Acceptance rules promote candidates into trusted facts.
  7. Downstream intents derive only from trusted facts or explicit review flows.

When sensor output already arrives as an ingress observation, the pipeline simply starts at the observation step. The trust boundary stays the same.

This separation keeps world contact, evidence capture, semantic interpretation, and action derivation distinct.
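The pipeline stages above can be sketched as plain functions. This is an illustrative simulation, not the JacqOS runtime API: every name and data shape below (`observe`, the tuple-based atoms, the dict of candidates) is a stand-in chosen to make the flow concrete.

```python
# Hypothetical sketch of the pipeline stages as plain functions.
# All names and shapes here are illustrative, not the real runtime API.

def observe(effect_result):
    """Step 2: record the raw sensor result as append-only evidence."""
    return {"class": "voice_parse", "payload": effect_result}

def map_observation(obs):
    """Steps 3-4: deterministic atoms; parse.* atoms are relay-required.
    Each atom is (predicate, value, relay_namespace_or_None)."""
    p = obs["payload"]
    return [
        ("order.id", p["order_id"], None),          # trusted structure
        ("parse.item", p["item"], "candidate"),      # fallible interpretation
        ("parse.quantity", p["quantity"], "candidate"),
    ]

def derive_candidates(atoms):
    """Step 5: candidate.* facts derived from relay-required atoms."""
    order = next(v for pred, v, _ in atoms if pred == "order.id")
    return {
        "candidate.quantity": [
            (order, v) for pred, v, relay in atoms
            if pred == "parse.quantity" and relay == "candidate"
        ]
    }

def accept(candidates, confirmed):
    """Step 6: promotion requires extra evidence plus sane bounds."""
    return [
        (order, qty) for order, qty in candidates["candidate.quantity"]
        if confirmed and 0 < qty <= 8
    ]

obs = observe({"order_id": "o-17", "item": "water", "quantity": 18000})
accepted = accept(derive_candidates(map_observation(obs)), confirmed=True)
print(accepted)  # 18000 exceeds the bound, so nothing promotes: []
```

Even with confirmation, the absurd quantity never becomes an accepted fact; step 7 (intent derivation) simply has nothing to derive from.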

Why Trust Marking Lives At The Mapper, Not The Effect

It would be tempting to say, “This effect produces candidates.” JacqOS does not model it that way, and that is the right choice.

The trust question is not “Which capability produced this data?” The trust question is “Which parts of this observation are safe to treat as authoritative?”

One observation often contains a mix of:

  • trusted structural data
  • untrusted semantic interpretation

For example, a voice-order parse might contain a stable order ID, a timestamp, a store ID, a guessed item, a guessed quantity, and a confidence score. The structural fields are often safe to use directly. The interpreted fields are not.
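A payload like that might look as follows. The field names are hypothetical, but the split is the point: structure and interpretation arrive together in one observation.

```python
# A hypothetical voice_parse payload mixing trusted structure with
# untrusted interpretation. Field names are illustrative.
payload = {
    # structural fields: safe to use directly
    "order_id": "o-20417",
    "store_id": "s-112",
    "recorded_at": "2025-06-01T12:03:44Z",
    # interpreted fields: the sensor's guess, not fact
    "item": "water",
    "quantity": 18000,
    "confidence": 0.41,
}
```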

If candidate status were attached to the whole effect result, the model would be too coarse:

  • you would over-quarantine trustworthy structural fields
  • you could not express partial trust within one observation
  • imports, fixtures, and replayed observations would need special-case behavior

JacqOS instead attaches trust marking at the mapper-output level. That lets one observation carry both ordinary atoms and relay-required atoms side by side.

The mapper contract declares which atom classes require explicit acceptance. Every Rhai mapper exposes a mapper_contract() function that the loader reads at startup, plus a map_observation(obs) function that runs per observation.

```rhai
fn mapper_contract() {
    #{
        requires_relay: [
            #{
                observation_class: "voice_parse",
                predicate_prefixes: ["parse."],
                relay_namespace: "candidate",
            }
        ],
    }
}

fn map_observation(obs) {
    let body = parse_json(obs.payload);
    [
        atom("order.id", body.order_id),
        atom("order.store_id", body.store_id),
        atom("parse.item", body.item),
        atom("parse.quantity", body.quantity),
        atom("parse.confidence", body.confidence),
    ]
}
```

In that example:

  • order.id and order.store_id are ordinary atoms — safe to use directly.
  • parse.item, parse.quantity, and parse.confidence are marked requires_relay — they must be promoted through candidate.* before any rule can rely on them.

The shell enforces this by matching the mapper contract’s observation_class and the configured predicate_prefixes, then setting CanonicalAtom.relay_namespace on the matching atoms in the canonical mapper export.
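That matching step can be sketched in a few lines. This is a simulation of the described behavior, not the shell's actual code; `apply_relay_marking` and the dict shapes are assumptions.

```python
# Illustrative sketch of the contract-matching step: for observations of the
# declared class, atoms whose predicate starts with a configured prefix get
# stamped with the relay namespace. Names and shapes are assumptions.

def apply_relay_marking(contract, observation_class, atoms):
    marked = []
    for predicate, value in atoms:
        relay = None
        for entry in contract["requires_relay"]:
            if (entry["observation_class"] == observation_class
                    and any(predicate.startswith(p)
                            for p in entry["predicate_prefixes"])):
                relay = entry["relay_namespace"]
        marked.append({"predicate": predicate, "value": value,
                       "relay_namespace": relay})
    return marked

contract = {"requires_relay": [{
    "observation_class": "voice_parse",
    "predicate_prefixes": ["parse."],
    "relay_namespace": "candidate",
}]}

atoms = [("order.id", "o-20417"), ("parse.quantity", 18000)]
for atom in apply_relay_marking(contract, "voice_parse", atoms):
    print(atom["predicate"], atom["relay_namespace"])
# order.id None
# parse.quantity candidate
```

The same observation yields both an ordinary atom and a relay-required atom, which is exactly the partial-trust behavior a per-effect marking could not express.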

This is the key design point. From a single voice_parse observation, JacqOS can trust:

  • which order the payload belongs to
  • which store emitted it
  • when it was recorded

while still refusing to trust:

  • what item the customer asked for
  • how many units they asked for
  • whether the system interpreted a correction correctly

That means the system can safely hang review workflows, provenance, and audit history off the same observation without treating the interpreted content as accepted truth.

The mapper marks atoms. The ontology decides what those atoms mean and when they cross from candidate into accepted fact.

```dh
relation candidate.requested_item(order_id: text, item: text)
relation candidate.quantity(order_id: text, quantity: int)
relation accepted_order_item(order_id: text, item: text)
relation accepted_quantity(order_id: text, quantity: int)
relation customer_confirmed(order_id: text)
relation order_requires_review(order_id: text)

rule assert candidate.requested_item(order, item) :-
    atom(obs, "order.id", order),
    atom(obs, "parse.item", item).

rule assert candidate.quantity(order, qty) :-
    atom(obs, "order.id", order),
    atom(obs, "parse.quantity", qty).

rule accepted_order_item(order, item) :-
    candidate.requested_item(order, item),
    customer_confirmed(order).

rule accepted_quantity(order, qty) :-
    candidate.quantity(order, qty),
    customer_confirmed(order),
    qty > 0,
    qty <= 8.

rule order_requires_review(order) :-
    candidate.quantity(order, qty),
    qty > 8.
```

Notice what happens here:

  • candidate.* captures the proposal.
  • accepted_* captures what the system is willing to believe.
  • Suspicious proposals derive review paths instead of action paths.

This is exactly how you stop “18,000 waters” from going straight to POS.

.dh reserves = for binding (aggregate binds and helper.* calls only) and uses ==, !=, <, <=, >, >= for comparisons. The acceptance rule above uses > and <= to bound the quantity; an attempt to write qty = 8 in clause position is rejected at load time. See the .dh Language Reference for the full grammar.

Once atoms are marked requires_relay, you do not get to skip the candidate boundary.

If you try to derive an accepted fact directly from those atoms, the validator rejects the ontology at load time with diagnostic E2401:

<relation> derives from requires_relay observations without a <namespace> relay

The derivation must pass through:

  1. a declared candidate.* relation, and
  2. an explicit acceptance rule that uses additional evidence (review events, thresholds, corroboration, confirmation turns).

That turns candidate-evidence from a convention into an enforced trust boundary. The check is implemented in validate_relay_boundaries and keys on mapper predicate configuration, not on observation class strings.
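A toy version of that check can make the enforcement concrete. This is a sketch in the spirit of the E2401 diagnostic, not the real `validate_relay_boundaries` implementation; the dict representation of rules and the `RELAY_PREFIXES` table are assumptions.

```python
# Minimal sketch of a relay-boundary check in the spirit of E2401.
# Rules are toy dicts: a head relation plus the atom predicates it reads.
# The real validate_relay_boundaries keys on mapper predicate configuration.

RELAY_PREFIXES = {"parse.": "candidate"}  # derived from the mapper contract

def check_rule(rule):
    head = rule["head"]
    for predicate in rule["body_atoms"]:
        for prefix, namespace in RELAY_PREFIXES.items():
            if predicate.startswith(prefix) and not head.startswith(namespace + "."):
                return (f"E2401: {head} derives from requires_relay "
                        f"observations without a {namespace} relay")
    return None  # rule is allowed

ok = check_rule({"head": "candidate.quantity",
                 "body_atoms": ["order.id", "parse.quantity"]})
bad = check_rule({"head": "accepted_quantity",
                  "body_atoms": ["order.id", "parse.quantity"]})
print(ok)   # None: the candidate.* head is itself the relay
print(bad)  # the E2401 diagnostic string
```

A `candidate.*` head is allowed to read relay-required atoms; any other head that does so is rejected before the ontology ever runs.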

candidate.* relations are not just another accepted domain surface. They are non-authoritative ontology input. They can influence review, comparison, and promotion logic, but they are not committed worldview by themselves.

That is why the pattern is so useful:

  • the system can remember what was proposed
  • operators can inspect what was proposed
  • invariants can reason about what was proposed
  • downstream action can still be withheld until explicit acceptance happens

End-to-end, here is what happens when a voice ordering system hears:

“No, I said water.”

The voice parser produces an observation. The mapper emits trusted structural atoms (order.id, order.store_id) plus relay-required atoms (parse.item = "water", parse.quantity = 18000, parse.confidence = 0.41).

The ontology derives:

  • candidate.requested_item(order, "water")
  • candidate.quantity(order, 18000)
  • candidate.parse_confidence(order, 0.41)

In a workflow-first system, that parse might flow straight into a POS submission attempt.

In JacqOS, it does not. The acceptance rule for accepted_quantity requires qty <= 8, so 18000 cannot promote. Instead the ontology derives:

  • order_requires_review(order)
  • order_requires_confirmation(order)

And blocks:

  • intent.submit_pos_order(order, ...)

until the customer confirms or a human approves the interpretation.
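The walkthrough's decision logic reduces to a small predicate. The `evaluate` helper below is an illustrative stand-in for replaying the acceptance rules, assuming the `qty <= 8` bound from the ontology example above; it is not a JacqOS API.

```python
# Toy replay of the walkthrough: the 18,000-water candidate cannot promote,
# so the review path derives and the POS intent never does. Illustrative only.

MAX_AUTO_QTY = 8  # the bound from the acceptance rule

def evaluate(candidate_qty, customer_confirmed):
    accepted = customer_confirmed and 0 < candidate_qty <= MAX_AUTO_QTY
    requires_review = candidate_qty > MAX_AUTO_QTY
    submit_pos = accepted  # the intent derives only from accepted facts
    return {"accepted": accepted, "requires_review": requires_review,
            "submit_pos": submit_pos}

print(evaluate(18000, customer_confirmed=False))
# {'accepted': False, 'requires_review': True, 'submit_pos': False}
print(evaluate(2, customer_confirmed=True))
# {'accepted': True, 'requires_review': False, 'submit_pos': True}
```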

The same pattern works for many other cases:

  • OCR thinks an invoice total is $80,000 instead of $800.00
  • A vision model flags a production image as “unsafe”
  • An LLM claims a patient has hypertension when the note is ambiguous
  • A heuristic parser infers a cancellation request from a frustrated but non-cancelling message

This pattern gives you practical benefits immediately:

  • Safer automation — absurd, low-confidence, or conflicting interpretations do not go straight to execution.
  • Better auditability — you can inspect what the sensor proposed, why it was accepted or rejected, and which observation introduced it.
  • Cleaner review paths — human confirmation and deterministic checks become explicit ontology surfaces instead of ad hoc application code.
  • Broader reuse — the same boundary works for LLMs, speech, OCR, vision, and heuristics.
  • Stronger testing — disagreement and contradiction fixtures prove the safety path, not just the happy path.

When you use a fallible sensor in JacqOS, start with this checklist:

  • Keep acquisition separate from acceptance logic. When JacqOS initiates the sensor call, world contact lives in the effect runtime; when sensor output arrives as ingress, start at observation and keep the same trust boundary.
  • Keep mappers deterministic. They classify and flatten observations; they do not decide truth.
  • Mark only the uncertain atom classes. Leave trusted structural fields ordinary when appropriate.
  • Route sensor output through candidate.*, not directly into accepted domain facts.
  • Promote through explicit rules. Human review, thresholds, corroboration, and contradiction checks all belong here.
  • Write invariants for impossible or dangerous states. “18,000 waters” should be logically impossible to auto-accept.
  • Derive review or confirmation intents from suspicious candidates.
  • Keep downstream effects behind accepted facts. External actions should derive from trusted worldview, not raw proposals.
  • Ship disagreement fixtures. Prove that bad sensor output stays contained.
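A disagreement fixture can be as small as a pair of assertions that pin the safety path, not just the happy path. The `evaluate` helper here is a hypothetical stand-in for replaying the ontology against a fixture; its shape is an assumption, not the JacqOS test API.

```python
# Sketch of disagreement fixtures: prove that bad sensor output stays
# contained. evaluate() stands in for replaying the acceptance rules.

def evaluate(candidate_qty, customer_confirmed):
    accepted = customer_confirmed and 0 < candidate_qty <= 8
    return {"accepted": accepted, "requires_review": candidate_qty > 8}

def test_impossible_order_is_contained():
    result = evaluate(18000, customer_confirmed=False)
    assert not result["accepted"]      # never auto-promotes
    assert result["requires_review"]   # the review path derives instead

def test_unconfirmed_parse_blocks_promotion():
    result = evaluate(3, customer_confirmed=False)
    assert not result["accepted"]      # no confirmation, no promotion

test_impossible_order_is_contained()
test_unconfirmed_parse_blocks_promotion()
```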
| Code | Severity | Message Template | When You See It |
| --- | --- | --- | --- |
| E2401 | Error | `<relation> derives from requires_relay observations without a <namespace> relay` | A rule head accepts requires_relay-marked atoms without going through the configured relay namespace (candidate for sensors, proposal for action suggestions). |

The full validator diagnostic inventory lives in the .dh Language Reference. E2401 is the only code reserved for relay-boundary violations in V1.

The Drive-Thru Ordering Walkthrough turns this pattern into a concrete app without depending on a specific brand.

Its shape is straightforward:

  • Observations: captured audio, parsed voice order, customer confirmation, crew review, POS submission result
  • Candidate surfaces: candidate.requested_item, candidate.quantity, candidate.modifier
  • Accepted surfaces: accepted_order_item, accepted_quantity, accepted_modifier
  • Review surfaces: order_requires_confirmation, order_requires_review
  • Action surface: intent.submit_pos_order

The ontology proves that:

  • absurd quantities cannot auto-promote
  • low-confidence parses require confirmation or review
  • correction turns can replace old candidates with new ones cleanly
  • POS submission never derives from unaccepted candidates

Its fixture set includes:

  • a happy path with clean speech and immediate confirmation
  • a correction-turn path where the customer changes the item or quantity
  • an impossible-order path such as “18,000 waters”
  • a disagreement path where the crew rejects the parse

You can replay the impossible-order path with jacqos replay fixtures/impossible-order.jsonl, inspect the contradiction history from a correction turn, and verify with jacqos verify that no raw parse candidate can derive a POS submission.

The phrase “candidate-evidence pattern” often appears in JacqOS docs through the LLM lens, because that is the clearest familiar example. The runtime mechanism is more general.

The same pattern works for:

  • speech parsing
  • OCR
  • vision classifiers
  • heuristic extractors
  • vendor scoring systems
  • imported non-deterministic model output

In all of these cases, the rule is the same:

If the observation contains fallible semantic interpretation, mark the relevant atoms requires_relay with relay_namespace = "candidate", route them through candidate.*, and only then promote them into trusted facts.

JacqOS lets you build systems where sensors can be helpful without being authoritative, the platform records what was proposed and what was accepted, and humans review invariants and fixtures instead of reading generated glue code.