
Pre-mortem agent skill: verified risk review before you ship

Pre-mortem agent skill (parcadei/continuous-claude-v3): verified risks for coding agents—two-pass workflow, npx skills install. Canonical ExplainX listing.

3 min read · ExplainX Team
Tags: Agent skills · Pre-mortem · Productivity · Risk management · parcadei



Pre-mortem review is a decades-old planning technique: assume failure, work backward, and surface risks early before you sink cost into the wrong design. Gary Klein described it for teams; in product engineering circles the tiger / paper tiger / elephant vocabulary—often associated with Shreyas Doshi’s writing on product risk—is a compact way to sort real threats from noise and taboo topics.

For coding agents, the hard part is not the metaphor—it is discipline. The premortem skill in parcadei/continuous-claude-v3 bakes in verification rules so the model does not treat every suspicious line as a crisis.

Canonical registry listing: premortem — ExplainX.

TL;DR

  • What is it? An agent skill that runs a structured pre-mortem with quick and deep depths and YAML-shaped outputs.
  • Why it matters: Forces two-pass reasoning: candidates → verified risks with mitigation_checked evidence.
  • Install: npx skills add https://github.com/parcadei/continuous-claude-v3 --skill premortem
  • Browse: explainx.ai/skills/.../premortem
  • Best for: PRs, RFCs, large refactors, and any workflow where false-positive “security theater” wastes time.

Why “verify before you flag” belongs in a skill

Large language models are good at sounding alarmed. Without guardrails, an agent can:

  • Flag a hardcoded path without checking for an exists() guard three lines later.
  • Call something “missing error handling” without tracing the call path.
  • Confuse out-of-scope work with an implementation bug.

The premortem skill encodes an explicit anti-pattern list and a verification checklist (context ±20 lines, fallback branches, scope, dev-only code). If a check is unknown, the instruction set tells the model not to promote the finding to a tiger. That is harness behavior—policy at the tooling layer, not vibes in a one-off chat.
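That promotion rule can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration: the field names (`verification`, `mitigation_found`) and the `promote` helper are invented here to mirror the article's vocabulary, not code from the actual SKILL.md.

```python
# Hypothetical sketch of the "verify before you flag" gate described above.
# Checklist names mirror the article (context ±20 lines, fallback branches,
# scope, dev-only code); the skill's real schema may differ.

CHECKS = ("context_20_lines", "fallback_branches", "scope", "dev_only_code")

def promote(candidate: dict) -> str:
    """Classify a candidate risk. Any unknown check blocks promotion to tiger."""
    results = candidate.get("verification", {})
    # If any checklist item is unknown, the finding stays a candidate.
    if any(results.get(check) is None for check in CHECKS):
        return "candidate"
    # A mitigation found nearby makes it a paper tiger, not a real threat.
    if candidate.get("mitigation_found"):
        return "paper_tiger"
    return "tiger"

hardcoded_path = {
    "risk": "hardcoded /tmp path",
    "verification": {check: True for check in CHECKS},
    "mitigation_found": "exists() guard three lines later",
}
print(promote(hardcoded_path))  # paper_tiger
```

The point of the gate is the first branch: an unknown verification result never escalates a finding, which is exactly the policy the skill encodes.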

The two-pass workflow (how to read the SKILL.md)

  1. Pass 1 — candidates: Collect potential_risks using normal scanning (pattern match, intuition, diff review).
  2. Pass 2 — verification: For each candidate, decide tiger · paper_tiger · false_alarm.

True tigers require a filled mitigation_checked field: what mitigations you looked for and did not find. If you cannot write that line with concrete evidence, the finding stays a candidate or becomes a false alarm.

Paper tigers get the opposite treatment: cite where the mitigation lives (file:lines).

Elephants capture the awkward, under-discussed risks—often process or political, not a missing try/catch.
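The asymmetry between tigers and paper tigers can be expressed as a small validator. Again a sketch under assumptions: the key names (`classification`, `mitigation_checked`, `mitigation_location`) follow the article's description of the YAML-shaped output, not the skill's published schema.

```python
# Hypothetical validator for the two-pass output shape described above.
# Tigers must carry evidence of absent mitigations; paper tigers must
# cite where the mitigation lives (file:lines).

def validate(finding: dict) -> list[str]:
    errors = []
    kind = finding.get("classification")
    if kind == "tiger" and not finding.get("mitigation_checked"):
        errors.append("tiger requires mitigation_checked evidence")
    if kind == "paper_tiger" and not finding.get("mitigation_location"):
        errors.append("paper_tiger requires mitigation_location")
    return errors

tiger = {
    "classification": "tiger",
    "risk": "retry loop has no backoff cap",
    "mitigation_checked": "searched for max_retries / timeout guards; none found",
}
print(validate(tiger))  # []
```

A tiger that cannot fill `mitigation_checked` fails validation, which is the mechanical version of "the finding stays a candidate or becomes a false alarm."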

Slash-style usage (from the upstream skill)

The packaged workflow expects intentful depth:

  • /premortem — auto-detect context; offer quick vs deep.
  • /premortem quick — plans, PRs, localized edits.
  • /premortem deep — before a big implementation push.
  • /premortem <file> — focus a plan or module.

Exact slash wiring depends on your agent host (Claude Code, Cursor, etc.); the value is the checklist + output schema, not the literal command prefix.

Install and pin

From the ExplainX listing:

npx skills add https://github.com/parcadei/continuous-claude-v3 --skill premortem

For team repos, pair installs with a committed skills-lock.json so everyone gets the same instruction pack revision—see our skills-lock.json primer.


Skill contents and CLI flags change over time. Confirm behavior against parcadei/continuous-claude-v3 and your installed npx skills version before relying on this in production workflows.
