
# contradict map

`research-os contradict map` detects contradiction candidates among a section’s candidate claims. As of v0.3.0, the detector is an explicit operator choice via `--detector <auto|heuristic|ollama-intern>`. The earlier env-var-driven pattern still works in `auto` mode, but the flag is now the canonical surface and is environment-independent.


Narrow-topic documentation section — heuristic, fast, no LLM:

```shell
research-os contradict map 01-token-surface-and-standards \
  --triaged-only \
  --detector heuristic
```

Default (env-var-driven) — LLM if `OLLAMA_INTERN_MODEL` is set and the model is available, otherwise heuristic fallback:

```shell
research-os contradict map 03-survey --triaged-only
```

Force LLM (refuses if the model is unavailable):

```shell
research-os contradict map 03-survey \
  --triaged-only \
  --detector ollama-intern
```

| Flag | Required | Default | Description |
| --- | --- | --- | --- |
| `<section>` | yes | — | Section id, e.g. `01-token-surface-and-standards` |
| `--triaged-only` | no | `false` | Only consider claims that triage marked `selected_for_review` |
| `--detector <mode>` | no | `auto` | One of `auto`, `heuristic`, or `ollama-intern` |
| `--pack <dir>` | no | cwd | Path to the pack root |

Exits 0 on success. Exits 2 on an invalid `--detector` value, or on `--detector ollama-intern` when the configured model is unavailable.
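That two-value contract can be sketched as a small wrapper helper; the function name and messages below are illustrative, not part of research-os:

```python
# Illustrative helper (not part of research-os): map the documented exit
# codes to an operator-facing action.
def interpret_exit_code(code: int) -> str:
    if code == 0:
        return "success"
    if code == 2:
        # Invalid --detector value, or --detector ollama-intern with the
        # configured model unavailable.
        return "detector error: fix the flag/model or rerun with --detector heuristic"
    return "unexpected failure"
```

In a CI wrapper, exit code 2 is the unambiguous signal to rerun with `--detector heuristic` rather than to retry the LLM path.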


**`auto`:** Uses the LLM detector when a configured Ollama model is available; falls through to the heuristic detector when it is not. This mirrors pre-v0.3.0 behavior. The chosen mode is announced visibly on every run — there are no silent shifts.

**`heuristic`:** Bypasses Ollama entirely. No model-availability check, no LLM calls. Always works and always completes quickly (CPU-only token-overlap classification). Use it for narrow-topic documentation sections (the `ollama-intern` detector’s Jaccard prefilter saturates when claims share vocabulary, stalling the LLM path for 20+ minutes), for any rig without the configured model installed, and for reproducible CI runs.
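A minimal sketch of why a Jaccard token-overlap prefilter saturates on shared vocabulary; the claim strings are invented examples, and the real detector’s tokenization and threshold may differ:

```python
# Jaccard similarity over whitespace tokens: |A ∩ B| / |A ∪ B|.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# Narrow-topic claims share vocabulary, so almost every pair scores high
# and gets forwarded to the (slow) LLM classifier.
narrow = ("install the node via the workflow json schema",
          "the workflow json schema requires the node install")

# Cross-domain claims diverge, so most pairs are filtered out cheaply.
broad = ("install the node via the workflow json schema",
         "token issuers must hold a reserve on the ledger")

print(jaccard(*narrow))  # high overlap: passes a typical prefilter threshold
print(jaccard(*broad))   # low overlap: filtered before any LLM call
```

When nearly every pair in a section clears the prefilter, the LLM call count grows with the square of the claim count, which is where the 20+ minute stalls come from.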

**`ollama-intern`:** Requires the configured Ollama model. If the model is unavailable, the command exits with a visible failure rather than silently falling back. The operator asked for LLM; the operator gets LLM or a refusal.


The CLI prints exactly one of these strings as the first line of every run. Operators can verify which detector ran without spelunking ledgers.

| Mode | Outcome | Announcement |
| --- | --- | --- |
| `--detector heuristic` | always | `contradict map: using heuristic detector` |
| `--detector auto` | LLM chosen | `contradict map: using ollama-intern detector with model <model>` |
| `--detector auto` | heuristic fallback | `contradict map: ollama-intern unavailable; using heuristic detector` |
| `--detector ollama-intern` | model available | `contradict map: using ollama-intern detector with model <model>` |
| `--detector ollama-intern` | model unavailable | `contradict map: ollama-intern detector requested but model <model> is unavailable; aborting (use --detector heuristic to bypass)` |
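Because the announcement is always the first line of output, a run’s detector can be checked mechanically. A sketch assuming the exact strings from the table above; the helper name is hypothetical:

```python
# Classify a run by the announcement on the first line of its output.
def detector_from_first_line(first_line: str) -> str:
    if "using heuristic detector" in first_line:
        return "heuristic"       # --detector heuristic, or auto's fallback
    if "using ollama-intern detector" in first_line:
        return "ollama-intern"   # auto's LLM path, or explicit ollama-intern
    if "aborting" in first_line:
        return "refused"         # explicit ollama-intern, model unavailable
    raise ValueError(f"unrecognized announcement: {first_line!r}")

print(detector_from_first_line(
    "contradict map: ollama-intern unavailable; using heuristic detector"
))  # prints: heuristic
```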

The choice is structural, not stylistic.

- Narrow-topic documentation sections → `heuristic`. Claims share vocabulary (“workflow,” “json,” “schema,” “install,” “node”); the prefilter passes a large fraction of pairs and LLM classification stalls. ComfyUI Sections 01–05 and the XRPL Section 01 are the canonical anchors.
- Cross-domain sections with naturally divergent vocabulary → `auto` or `ollama-intern`. Lower vocabulary overlap means fewer pairs reach the LLM and classification completes in reasonable time.
- CI / reproducibility / no-LLM rigs → `heuristic`.

The full operator-selection rule lives in the Operator Playbook under “Contradiction detection.”


`contradict map` was the chain blocker for Experiment 3 Session 1 (XRPL creator-token durability pack). Prior to v0.3.0, there was no environment-independent way to force the heuristic detector — clearing `OLLAMA_INTERN_MODEL` worked only when no default model was installed. Once `hermes3:8b` was installed, the clearing pattern silently stopped working: `auto` would re-acquire the default and stall on narrow-topic sections.

The --detector flag closes that gap. The detector choice is now an explicit input to the command, not a function of the surrounding shell.


- Does not change the schema of `contradictions.jsonl` or `contradictions.md`.
- Does not change `contradict resolve` or the closure-ledger flow.
- Does not deprecate `OLLAMA_INTERN_MODEL`. The env var still controls which model the LLM detector uses; the flag only chooses which detector runs.
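Putting the pieces together, the documented selection rule can be sketched as follows; this is an illustration of the behavior described above, not the real implementation:

```python
# --detector picks the detector; OLLAMA_INTERN_MODEL only picks which model
# the LLM detector uses.
def select_detector(flag: str, model_available: bool) -> str:
    if flag == "heuristic":
        return "heuristic"                 # never touches Ollama
    if flag == "ollama-intern":
        if not model_available:
            raise SystemExit(2)            # refuse; no silent fallback
        return "ollama-intern"
    if flag == "auto":
        return "ollama-intern" if model_available else "heuristic"
    raise SystemExit(2)                    # invalid --detector value

print(select_detector("auto", model_available=False))  # prints: heuristic
```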