Donchitos/Claude-Code-Game-Studios · updated Apr 16, 2026
---
name: smoke-check
description: "Run the critical path smoke test gate before QA hand-off. Executes the automated test suite, verifies core functionality, and produces a PASS/FAIL report. Run after a sprint's stories ar"
argument-hint: "[sprint | quick | --platform pc|console|mobile|all]"
allowed-tools: Read, Glob, Grep, Bash, Write, AskUserQuestion
---

# Smoke Check
This skill is the gate between "implementation done" and "ready for QA hand-off". It runs the automated test suite, checks for test coverage gaps, batch-verifies critical paths with the developer, and produces a PASS/FAIL report.
The rule is simple: a build that fails smoke check does not go to QA. Handing a broken build to QA wastes their time and demoralises the team.
**Output**: `production/qa/smoke-[date].md`
## Parse Arguments

Arguments can be combined: `/smoke-check sprint --platform console`

**Base mode** (first argument, default: `sprint`):

- `sprint` — full smoke check against the current sprint's stories
- `quick` — skip coverage scan (Phase 3) and Batch 3; use for rapid re-checks

**Platform flag** (`--platform`, default: none):

- `--platform pc` — add PC-specific checks (keyboard, mouse, windowed mode)
- `--platform console` — add console-specific checks (gamepad, TV safe zones, platform certification requirements)
- `--platform mobile` — add mobile-specific checks (touch, portrait/landscape, battery/thermal behaviour)
- `--platform all` — add all platform variants; output per-platform verdict table

If `--platform` is provided, Phase 4 adds platform-specific batches and Phase 5 outputs a per-platform verdict table in addition to the overall verdict.
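For concreteness, the argument grammar above can be sketched as a small parser. This is illustrative only — the mode names and platform values mirror this section, but the error handling is an assumption, not part of the skill's contract:

```python
def parse_args(args: list[str]) -> dict:
    """Parse /smoke-check arguments: a base mode (default 'sprint')
    plus an optional --platform flag, per the grammar above."""
    mode, platform = "sprint", None
    i = 0
    while i < len(args):
        if args[i] == "--platform" and i + 1 < len(args):
            platform = args[i + 1]
            if platform not in ("pc", "console", "mobile", "all"):
                raise ValueError(f"unknown platform: {platform}")
            i += 2
        elif args[i] in ("sprint", "quick"):
            mode = args[i]
            i += 1
        else:
            raise ValueError(f"unrecognised argument: {args[i]}")
    return {"mode": mode, "platform": platform}

print(parse_args(["sprint", "--platform", "console"]))
# → {'mode': 'sprint', 'platform': 'console'}
```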
## Phase 1: Detect Test Setup

Before running anything, understand the environment:

- **Test framework check**: verify the `tests/` directory exists. If it does not: "No test directory found at `tests/`. Run `/test-setup` to scaffold the testing infrastructure, or create the directory manually if tests live elsewhere." Then stop.
- **CI check**: check whether `.github/workflows/` contains a workflow file referencing tests. Note in the report whether CI is configured.
- **Engine detection**: read `.claude/docs/technical-preferences.md` and extract the `Engine:` value. Store this for test command selection in Phase 2.
- **Smoke test list**: check whether `production/qa/smoke-tests.md` or `tests/smoke/` exists. If a smoke test list is found, load it for use in Phase 4. If neither exists, smoke tests will be drawn from the current QA plan (Phase 4 fallback).
- **QA plan check**: glob `production/qa/qa-plan-*.md` and take the most recently modified file. If found, note the path — it will be used in Phase 3 and Phase 4. If not found, note: "No QA plan found. Run `/qa-plan sprint` before smoke-checking for best results."
Report findings before proceeding: "Environment: [engine]. Test directory: [found / not found]. CI configured: [yes / no]. QA plan: [path / not found]."
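The checks above can be sketched as a single detection pass. This is a minimal illustration assuming the file layout named in this phase; the CI check is simplified to "any workflow file exists" rather than grepping workflows for test references:

```python
from pathlib import Path

def detect_environment(root: Path) -> dict:
    """Sketch of the Phase 1 checks against the layout named above."""
    prefs = root / ".claude/docs/technical-preferences.md"
    engine = "unknown"
    if prefs.exists():
        for line in prefs.read_text().splitlines():
            if line.startswith("Engine:"):
                engine = line.split(":", 1)[1].strip()
    workflows = root / ".github/workflows"
    qa_plans = sorted(root.glob("production/qa/qa-plan-*.md"),
                      key=lambda p: p.stat().st_mtime, reverse=True)
    return {
        "engine": engine,
        "tests_dir": (root / "tests").is_dir(),
        # Simplified: presence of any workflow file, not a test reference.
        "ci": any(workflows.glob("*.y*ml")) if workflows.is_dir() else False,
        "smoke_list": (root / "production/qa/smoke-tests.md").exists()
                      or (root / "tests/smoke").is_dir(),
        "qa_plan": str(qa_plans[0]) if qa_plans else None,
    }

env = detect_environment(Path("."))
print(f"Environment: {env['engine']}. "
      f"Test directory: {'found' if env['tests_dir'] else 'not found'}. "
      f"CI configured: {'yes' if env['ci'] else 'no'}. "
      f"QA plan: {env['qa_plan'] or 'not found'}.")
```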
## Phase 2: Run Automated Tests

Attempt to run the test suite via Bash. Select the command based on the engine detected in Phase 1:

**Godot 4**:

```bash
godot --headless --script tests/gdunit4_runner.gd 2>&1
```

If the GDUnit4 runner script does not exist at that path, try:

```bash
godot --headless -s addons/gdunit4/GdUnitRunner.gd 2>&1
```

If neither path exists, note: "GDUnit4 runner not found — confirm the runner path for your test framework."
**Unity**: Unity tests require the editor and cannot be run headlessly via shell in most environments. Check for recent test result artifacts:

```bash
ls -t test-results/ 2>/dev/null | head -5
```

If test result files exist (XML or JSON), read the most recent one and parse PASS/FAIL counts. If no artifacts exist: "Unity tests must be run from the editor or CI pipeline. Please confirm test status manually before proceeding."
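Parsing the artifact can look like the sketch below, assuming the NUnit 3 XML format that Unity's Test Runner exports (counts live in `total`/`passed`/`failed` attributes on the root `<test-run>` element); adapt it if your pipeline emits JSON instead:

```python
import xml.etree.ElementTree as ET

def parse_nunit_results(xml_text: str) -> dict:
    """Extract pass/fail counts and failing test names from an
    NUnit 3 result file (the format Unity's Test Runner writes)."""
    root = ET.fromstring(xml_text)
    return {
        "total": int(root.get("total", 0)),
        "passed": int(root.get("passed", 0)),
        "failed": int(root.get("failed", 0)),
        "failing_tests": [
            case.get("fullname", case.get("name", "?"))
            for case in root.iter("test-case")
            if case.get("result") == "Failed"
        ],
    }

sample = '''<test-run total="3" passed="2" failed="1">
  <test-suite><test-case fullname="Combat.DamageTest" result="Failed"/></test-suite>
</test-run>'''
print(parse_nunit_results(sample))
```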
**Unreal Engine**:

```bash
ls -t Saved/Logs/ 2>/dev/null | grep -i "test\|automation" | head -5
```

If no matching log found: "UE automation tests must be run via the Session Frontend or CI pipeline. Please confirm test status manually."
**Unknown engine / not configured**:

"Engine not configured in `.claude/docs/technical-preferences.md`. Run `/setup-engine` to specify the engine, then re-run `/smoke-check`."
If the test runner is not available in this environment (engine binary not on PATH, runner script not found, etc.), report clearly:
"Automated tests could not be executed — engine binary not found on PATH. Status will be recorded as NOT RUN. Confirm test results from your local IDE or CI pipeline. Unconfirmed NOT RUN is treated as PASS WITH WARNINGS, not FAIL — the developer must manually confirm results."
Do not treat NOT RUN as an automatic FAIL. Record it as a warning. The developer's manual confirmation in Phase 4 can resolve it.
Parse runner output and extract:
- Total tests run
- Passing count
- Failing count
- Names of any failing tests (up to 10; if more, note the count)
- Any crash or error output from the runner itself
## Phase 3: Check Test Coverage

Draw the story list from, in priority order:

1. The QA plan found in Phase 1 (its Test Summary table lists expected test file paths per story)
2. The current sprint plan from `production/sprints/` (most recently modified file)

If the `quick` argument was passed, skip this phase entirely and note: "Coverage scan skipped — run `/smoke-check sprint` for full coverage analysis."
For each story in scope:

1. Extract the system slug from the story's file path (e.g., `production/epics/combat/story-001.md` → `combat`)
2. Glob `tests/unit/[system]/` and `tests/integration/[system]/` for files whose name contains the story slug or a closely related term
3. Check the story file itself for a `Test file:` header field or a "Test Evidence" section
Assign a coverage status to each story:
| Status | Meaning |
|---|---|
| COVERED | A test file was found matching this story's system and scope |
| MANUAL | Story type is Visual/Feel or UI; a test evidence document was found |
| MISSING | Logic or Integration story with no matching test file |
| EXPECTED | Config/Data story — no test file required; spot-check is sufficient |
| UNKNOWN | Story file missing or unreadable |
MISSING entries are advisory gaps. They do not cause a FAIL verdict but must appear prominently in the report and must be resolved before `/story-done` can fully close those stories.
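The status table can be sketched as a decision function. This is a simplification: "file name contains the slug" stands in for "slug or closely related term", the evidence path follows the report template's example, and the UNKNOWN status (unreadable story file) is omitted:

```python
from pathlib import Path

def coverage_status(story_type: str, system: str, slug: str,
                    root: Path = Path(".")) -> str:
    """Assign one Phase 3 coverage status per the table above."""
    if story_type == "Config/Data":
        return "EXPECTED"  # no test file required; spot-check suffices
    hits = [p for sub in ("unit", "integration")
            for p in (root / "tests" / sub / system).glob("*")
            if slug in p.name]
    if hits:
        return "COVERED"
    if story_type in ("Visual/Feel", "UI"):
        # Evidence path is an assumed convention from the report template.
        evidence = root / "tests" / "evidence" / f"{slug}-screenshots.md"
        if evidence.exists():
            return "MANUAL"
    return "MISSING"
```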
## Phase 4: Run Manual Smoke Checks

Draw the smoke test checklist from, in priority order:

1. The QA plan's "Smoke Test Scope" section (if QA plan was found in Phase 1)
2. `production/qa/smoke-tests.md` (if it exists)
3. `tests/smoke/` directory contents (if it exists)
4. The standard fallback list below (used only when none of the above exist)

Tailor batches 2 and 3 to the actual systems identified from the sprint or QA plan. Replace bracketed placeholders with real mechanic names from the current sprint's stories.

Use `AskUserQuestion` to batch-verify. Keep to at most 3 calls.
**Batch 1 — Core stability** (always run):
question: "Smoke check — Batch 1: Core stability. Please verify each:"
options:
- "Game launches to main menu without crash — PASS"
- "Game launches to main menu without crash — FAIL"
- "New game / session starts successfully — PASS"
- "New game / session starts successfully — FAIL"
- "Main menu responds to all inputs — PASS"
- "Main menu responds to all inputs — FAIL"
**Batch 2 — Sprint mechanic and regression** (always run):
question: "Smoke check — Batch 2: This sprint's changes and regression check:"
options:
- "[Primary mechanic this sprint] — PASS"
- "[Primary mechanic this sprint] — FAIL: [describe what broke]"
- "[Second notable change this sprint, if any] — PASS"
- "[Second notable change this sprint] — FAIL"
- "Previous sprint's features still work (no regressions) — PASS"
- "Previous sprint's features — regression found: [brief description]"
**Batch 3 — Data integrity and performance** (run unless `quick` argument):
question: "Smoke check — Batch 3: Data integrity and performance:"
options:
- "Save / load completes without data loss — PASS"
- "Save / load — FAIL: [describe what broke]"
- "Save / load — N/A (save system not yet implemented)"
- "No new frame rate drops or hitches observed — PASS"
- "Frame rate drops or hitches found — FAIL: [where]"
- "Performance — not checked in this session"
Record each response verbatim for the Phase 5 report.
**Platform batches** (run only if `--platform` argument was provided):

**PC platform** (`--platform pc` or `--platform all`):
question: "Smoke check — PC Platform: Verify platform-specific behaviour:"
options:
- "Keyboard controls work correctly across all menus and gameplay — PASS"
- "Keyboard controls — FAIL: [describe issue]"
- "Mouse input and cursor visibility correct in all states — PASS"
- "Mouse input — FAIL: [describe issue]"
- "Windowed and fullscreen modes function without graphical issues — PASS"
- "Windowed/fullscreen — FAIL: [describe issue]"
- "Resolution changes apply correctly — PASS"
- "Resolution changes — FAIL: [describe issue]"
**Console platform** (`--platform console` or `--platform all`):
question: "Smoke check — Console Platform: Verify platform-specific behaviour:"
options:
- "Gamepad input works correctly for all actions — PASS"
- "Gamepad input — FAIL: [describe issue]"
- "UI fits within TV safe zone margins (no text clipped) — PASS"
- "TV safe zone — FAIL: [describe what is clipped]"
- "No keyboard/mouse-only fallbacks shown to gamepad user — PASS"
- "Input prompt inconsistency — FAIL: [describe]"
- "Game boots correctly from cold start (no prior save) — PASS"
- "Cold start — FAIL: [describe issue]"
**Mobile platform** (`--platform mobile` or `--platform all`):
question: "Smoke check — Mobile Platform: Verify platform-specific behaviour:"
options:
- "Touch controls work correctly for all primary actions — PASS"
- "Touch controls — FAIL: [describe issue]"
- "Game handles orientation change (portrait ↔ landscape) correctly — PASS"
- "Orientation change — FAIL: [describe what breaks]"
- "Background / foreground transitions (home button) handled gracefully — PASS"
- "Background/foreground — FAIL: [describe issue]"
- "No visible performance issues on target device (no thermal throttling signs) — PASS"
- "Mobile performance — FAIL: [describe issue]"
## Phase 5: Generate Report
Assemble the full smoke check report:
## Smoke Check Report
**Date**: [date]
**Sprint**: [sprint name / number, or "Not identified"]
**Engine**: [engine]
**QA Plan**: [path, or "Not found — run /qa-plan first"]
**Arguments**: [sprint | quick | blank][, --platform value if provided]
---
### Automated Tests
**Status**: [PASS ([N] tests, [N] passing) | FAIL ([N] failures) |
NOT RUN ([reason])]
[If FAIL, list failing tests:]
- `[test name]` — [brief failure description from runner output]
[If NOT RUN:]
"Manual confirmation required: did tests pass in your local IDE or CI? This
will determine whether the automated test row contributes to a FAIL verdict."
---
### Test Coverage
| Story | Type | Test File | Coverage Status |
|-------|------|-----------|----------------|
| [title] | Logic | `tests/unit/[system]/[slug]_test.[ext]` | COVERED |
| [title] | Visual/Feel | `tests/evidence/[slug]-screenshots.md` | MANUAL |
| [title] | Logic | — | MISSING ⚠ |
| [title] | Config/Data | — | EXPECTED |
**Summary**: [N] covered, [N] manual, [N] missing, [N] expected.
---
### Manual Smoke Checks
- [x] Game launches without crash — PASS
- [x] New game starts — PASS
- [x] [Core mechanic] — PASS
- [ ] [Other check] — FAIL: [user's description]
- [x] Save / load — PASS
- [-] Performance — not checked this session
---
### Missing Test Evidence
Stories that must have test evidence before they can be marked COMPLETE via
`/story-done`:
- **[story title]** (`[path]`) — Logic story has no test file.
Expected location: `tests/unit/[system]/[story-slug]_test.[ext]`
[If none:] "All Logic and Integration stories have test coverage."
---
### Platform-Specific Results *(only if `--platform` was provided)*
| Platform | Checks Run | Passed | Failed | Platform Verdict |
|----------|-----------|--------|--------|-----------------|
| PC | [N] | [N] | [N] | PASS / FAIL |
| Console | [N] | [N] | [N] | PASS / FAIL |
| Mobile | [N] | [N] | [N] | PASS / FAIL |
**Platform notes**: [any platform-specific observations not captured in pass/fail]
Any platform with one or more FAIL checks contributes to the overall FAIL verdict.
---
### Verdict: [PASS | PASS WITH WARNINGS | FAIL]
[Verdict rules — first matching rule wins:]
**FAIL** if ANY of:
- Automated test suite ran and reported one or more test failures
- Any Batch 1 (core stability) check returned FAIL
- Any Batch 2 (primary sprint mechanic or regression check) returned FAIL
**PASS WITH WARNINGS** if ALL of:
- Automated tests PASS, or NOT RUN (pending developer confirmation)
- All Batch 1 and Batch 2 smoke checks PASS
- One or more Logic/Integration stories have MISSING test evidence, or automated tests are NOT RUN
**PASS** if ALL of:
- Automated tests PASS
- All smoke checks in all batches PASS or N/A
- No MISSING test evidence entries
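The verdict rules above reduce to a short first-match-wins function. This sketch is one reading of the rules, counting unconfirmed NOT RUN as a warning per the Collaborative Protocol; `other_checks_pass` covers the remaining batches (PASS or N/A):

```python
def smoke_verdict(auto_status: str, batch1_fail: bool, batch2_fail: bool,
                  missing_evidence: int, other_checks_pass: bool) -> str:
    """Compute the gate verdict; auto_status is 'PASS' | 'FAIL' | 'NOT RUN'."""
    # FAIL rules: any automated failure or core/sprint smoke failure.
    if auto_status == "FAIL" or batch1_fail or batch2_fail:
        return "FAIL"
    # Advisory conditions downgrade to warnings without blocking QA hand-off.
    if missing_evidence > 0 or auto_status == "NOT RUN" or not other_checks_pass:
        return "PASS WITH WARNINGS"
    return "PASS"
```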
## Phase 6: Write and Gate

Present the full report in conversation, then ask: "May I write this smoke check report to `production/qa/smoke-[date].md`?" Write only after approval.
After writing, deliver the gate verdict:
**If verdict is FAIL**:

"The smoke check failed. Do not hand off to QA until these failures are resolved:
[List each failing automated test or smoke check with a one-line description]
Fix the failures and run `/smoke-check` again to re-gate before QA hand-off."
**If verdict is PASS WITH WARNINGS**:

"Smoke check passed with warnings. The build is ready for manual QA. Advisory items to resolve before running `/story-done` on affected stories:
[list MISSING test evidence entries]
QA hand-off: share `production/qa/qa-plan-[sprint].md` with the qa-tester agent to begin manual verification."
**If verdict is PASS**:

"Smoke check passed cleanly. The build is ready for manual QA. QA hand-off: share `production/qa/qa-plan-[sprint].md` with the qa-tester agent to begin manual verification."
## Collaborative Protocol

- **Never treat NOT RUN as automatic FAIL** — record it as NOT RUN and let the developer confirm status manually. Unconfirmed NOT RUN contributes to PASS WITH WARNINGS, not FAIL.
- **Never auto-fix failures** — report them and state what must be resolved. Do not attempt to edit source code or test files.
- **PASS WITH WARNINGS does not block QA hand-off** — it records advisory gaps for `/story-done` to follow up on.
- **`quick` argument skips Phase 3** (coverage scan) and Phase 4 Batch 3. Use it for rapid re-checks after fixing a specific failure.
- **Use `AskUserQuestion`** for all manual smoke check verification.
- **Never write the report without asking** — Phase 6 requires explicit approval before any file is created.