# Common Tasks
Each task is a recipe: what you want to do, the commands, what success looks like, and what to check if it fails.
## Run a basic design audit

Goal: Find out which documented features are discoverable and which are buried.

```sh
ai-ui atlas   # Parse docs → feature catalog
ai-ui probe   # Crawl UI → trigger graph
ai-ui diff    # Match features ↔ triggers
```

Or all three at once:

```sh
ai-ui stage0
```

Success: `ai-ui-output/diff.md` exists and shows the coverage percentage, matched features, and must-surface items.

If it fails:

- `CONFIG_NOT_FOUND`: create `ai-ui.config.json` in your project root
- `PROBE_TIMEOUT`: make sure your dev server is running at the configured `baseUrl`
- `ATLAS_NO_DOCS`: check that your `docs.globs` patterns match actual files
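The errors above reference two config keys, `baseUrl` and `docs.globs`. A minimal `ai-ui.config.json` might look like the following sketch; the nesting and the placeholder values are assumptions, since this guide only names the keys:

```json
{
  "baseUrl": "http://localhost:3000",
  "docs": {
    "globs": ["docs/**/*.md"]
  }
}
```

Point `baseUrl` at your running dev server and adjust the globs to wherever your feature docs actually live.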
## Build the full design map

Goal: Get surface inventory, feature discoverability scores, task flows, and an IA proposal.

```sh
ai-ui stage0       # Run atlas + probe + diff first
ai-ui graph        # Build trigger graph
ai-ui design-map   # Generate design map
```

Success: `ai-ui-output/design-map.md` contains four sections: surface inventory, feature map, task flows, and IA proposal.

If it fails:

- `GRAPH_NO_INPUT`: run `stage0` first to generate the required input files
- Empty design map: your probe may not have found any triggers (check `probe.jsonl`)
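To sanity-check the probe output, you can count its entries from the shell. This is a sketch that assumes `probe.jsonl` is newline-delimited JSON and lives alongside the other outputs in `ai-ui-output/`:

```shell
# Count entries in the probe output; warn if the probe found nothing.
# Path assumed to be ai-ui-output/probe.jsonl, matching this guide's other outputs.
if [ -s ai-ui-output/probe.jsonl ]; then
  echo "probe entries: $(wc -l < ai-ui-output/probe.jsonl)"
else
  echo "probe.jsonl missing or empty: rerun ai-ui probe"
fi
```

Zero or missing entries usually means the probe never reached your UI; see the failure table at the end of this page.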
## Capture runtime evidence (SPAs)

Goal: For apps where URLs don’t change, capture what actually happens when buttons are clicked: storage writes, DOM mutations, fetch calls.

```sh
ai-ui runtime-effects        # Click triggers in a real browser
ai-ui graph --with-runtime   # Rebuild graph with evidence
ai-ui design-map             # Now evaluates goal rules
```

Success: Task flows in `design-map.md` show goal tags like `GOALS: Open Audio Settings [score: 2]`.

If it fails:

- No goals showing: make sure `goalRules` is configured in `ai-ui.config.json`
- All goals `(unknown)`: runtime-effects didn’t observe matching effects. Check that the UI actually writes to storage or fires fetches when clicked.
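This guide doesn’t show the `goalRules` schema, so treat the fragment below purely as an illustration of the idea: each rule names a goal and the runtime effects (storage writes, fetches) that count as evidence for it. Every key and value here is a guess, not the documented shape:

```json
{
  "goalRules": [
    {
      "goal": "Open Audio Settings",
      "effects": [
        { "type": "storage", "keyPattern": "audio.*" },
        { "type": "fetch", "urlPattern": "/api/settings" }
      ]
    }
  ]
}
```

Check your tool’s reference for the real schema before copying this.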
## Set up CI verification

Goal: Block PRs that reduce discoverability below a threshold.

```sh
ai-ui stage0
ai-ui graph
ai-ui verify --strict --gate minimum --min-coverage 60
```

Success: Exit code 0 means coverage is above the threshold. Exit code 1 means it dropped.

If it fails:

- Exit code 2: runtime error (missing files, broken config)
- Coverage dropped: check `diff.md` to see which features lost their triggers
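In a CI script you can branch on the exit code directly. A small sketch: the `gate` helper name is ours, but the 0/1/2 codes it interprets are the ones documented above, and the original failure status is preserved so CI still fails:

```shell
# Map the documented exit codes (0 = pass, 1 = coverage dropped, 2 = runtime error)
# to readable CI log lines. `gate` is a hypothetical wrapper, not part of ai-ui.
gate() {
  "$@"
  status=$?
  case $status in
    0) echo "coverage gate passed" ;;
    1) echo "coverage gate failed: discoverability dropped below threshold" ;;
    2) echo "runtime error: check config and input files" ;;
  esac
  return $status
}

# Usage in CI:
# gate ai-ui verify --strict --gate minimum --min-coverage 60
```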
## Save and compare baselines

Goal: Track discoverability across releases. See what improved and what regressed.

```sh
# Save a baseline after a known-good state
ai-ui baseline --write

# After making changes, run the pipeline again
ai-ui stage0 && ai-ui graph && ai-ui design-map

# Compare
ai-ui replay-pack
# ... make changes, run pipeline again ...
ai-ui replay-pack
ai-ui replay-diff
```

Success: `replay-diff` shows a before/after comparison of coverage, goals, and task flows.
## Generate a PR comment

Goal: Post a summary of design diagnostics as a PR comment.

```sh
ai-ui stage0 && ai-ui graph && ai-ui design-map
ai-ui pr-comment
```

Success: Markdown output suitable for pasting into a GitHub PR comment.
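To post the comment automatically instead of pasting it, a GitHub Actions step could pipe the output into the GitHub CLI. A sketch with several assumptions: `ai-ui` is already installed in the job, `pr-comment` writes its Markdown to stdout, and `gh` (preinstalled on hosted runners) has a token available:

```yaml
# Hypothetical workflow step; adapt install/setup to your project.
- name: Post design diagnostics to the PR
  if: github.event_name == 'pull_request'
  env:
    GH_TOKEN: ${{ github.token }}
  run: |
    ai-ui stage0 && ai-ui graph && ai-ui design-map
    ai-ui pr-comment > comment.md
    gh pr comment ${{ github.event.pull_request.number }} --body-file comment.md
```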
## Debug a failure

Goal: Understand why a command failed.

```sh
ai-ui <command> --verbose
```

Verbose mode prints detailed output, including:
- Which files were parsed (atlas)
- Which routes were crawled and which triggers were found (probe)
- Which features matched which triggers and why (diff)
- Graph construction details (graph)
Common failure patterns:
| Symptom | Likely cause | Fix |
|---|---|---|
| No triggers found | Dev server not running | Start your dev server first |
| 0% coverage | Wrong `baseUrl` in config | Check the URL matches your dev server |
| Missing features | Doc globs don’t match files | Check `docs.globs` patterns |
| Probe hangs | Route requires auth | Probe runs unauthenticated — use public routes |