Decide Like a Builder: A Practical Operating System for High-Stakes Choices

🧠 Made2Master Systems — Decision OS

Most failures are decision-process failures. This **Decision OS** gives you portable tools: base rates (outside view), expected value under uncertainty, pre-mortems, red-team rituals, kill-criteria, and decision logs — with worksheets and a **14-Day Execution Sprint**.

Key Takeaways

  • Outside view beats gut: Reference-class base rates reduce error more than any single tweak. Treat every choice as part of a class; start from frequencies, not vibes.
  • EV is a habit, not a formula: Even rough expected-value ranges (Best / Base / Worst) produce better deployment of time, money, and risk limits.
  • Pre-mortems surface blind spots early: Write the one-page “project failed” memo before you start; extract kill-criteria and monitoring signals.
  • Red-team rituals de-personalise critique: Assign a temporary “opposition” role; rotate it; reward falsification that improves the plan.
  • Decision logs compound insight: Timestamp hypotheses, reasons, and ranges. Review monthly; score **process**, not outcomes alone.

1) Executive Summary

Decision OS is a portable set of rituals and artifacts for high-stakes choices in money, health, and operations. It does not promise certainty; it standardises how you think: start with the outside view (reference-class frequencies), size bets with expected value ranges, design pre-mortems to catch failure modes early, deploy red-teams to attack your plan, define kill-criteria up front, and keep a decision log to compound learning.

What changes immediately?
  • You stop asking “Is this good?” and start asking “Compared to what base rate?”
  • You replace binary yes/no with EV ranges (Best/Base/Worst) and caps on downside.
  • You attach a kill switch to projects: explicit conditions that shut them down.
  • You measure process quality monthly, not just results.

Artifacts you will use

  • Reference-Class Sheet: a one-pager with frequencies for similar choices.
  • EV Triangle: Best/Base/Worst payoffs × probability bands × bankroll limits.
  • Pre-Mortem Memo: “It is 6 months later and this failed because…”.
  • Red-Team Brief: scope, constraints, and falsifiable claims to attack.
  • Kill-Criteria Card: objective triggers; auto-stop or auto-adjust.
  • Decision Log: timestamped hypotheses, ranges, and post-mortems.

Quickstart: 20-Minute Decision OS

  1. Define the reference class (3–5 similar past decisions).
  2. Write a base rate (frequency of success/failure; range, not point).
  3. Sketch EV triangle (Best/Base/Worst × rough probabilities).
  4. Draft a pre-mortem (top 3 failure modes + early signals).
  5. Set kill-criteria (objective triggers; date + metric).
  6. Assign a red-team (1 person, 48h deadline).
  7. Log the decision; schedule a 30-day review.

Guardrails (Educational Use)

Nothing here is financial, medical, or legal advice. Use this OS to improve process. Decisions remain yours; consult qualified professionals where appropriate.

Sources that inspired this OS: Tetlock (superforecasting and base rates), Kahneman/Tversky (biases, prospect theory), Mauboussin (untangling skill vs luck). Citations live in metadata.

2) Base Rates 101 (Outside View & Reference Classes)

Base rates answer: “How often does this kind of thing work out?” The inside view focuses on your plan’s unique details; the outside view starts with how similar projects fared in the wild. The OS is simple: define the reference class, measure frequencies, and only then adjust for specifics.

2.1 Reference-Class Forecasting (RCF)

  1. Identify the class: 3–10 comparable situations (industry, size, timeframe).
  2. Collect outcomes: success/failure %, median timelines, typical overruns.
  3. Anchor your forecast: start from the distribution (not your wish).
  4. Adjust slowly: nudge for genuine edge (document why it’s real and testable).
Rule of Two Anchors: Write (A) the base-rate prediction; then (B) your inside-view prediction. If they diverge a lot, you owe a quantified justification.
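
If your comparables already live in a log or spreadsheet, the anchoring step above is a few lines of code. A minimal Python sketch (illustrative function name; assumes at least five numeric outcomes in one consistent unit):

```python
from statistics import median, quantiles

def reference_class_anchors(outcomes: list[float]) -> dict:
    """Summarise comparable outcomes (one consistent unit, e.g. £/month at month 6)
    into base-rate anchors: median plus rough P10/P90 tails."""
    if len(outcomes) < 5:
        raise ValueError("Fewer than 5 comparables — widen the class or treat it as a proxy.")
    deciles = quantiles(outcomes, n=10, method="inclusive")  # 9 cut points: P10 ... P90
    return {"median": median(outcomes), "p10": deciles[0], "p90": deciles[8]}

# Example: six comparable launches, monthly revenue at month 6
print(reference_class_anchors([0, 150, 400, 600, 900, 2500]))
```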

2.2 From Stories to Frequencies

Humans love stories. The OS forces frequencies. Ask:

  • What was the median outcome in this class?
  • What are the tails (P10 / P90 outcomes)?
  • What predictors shifted outcomes (team size, run-rate, prior attempts)?
| Reference Class | Median Outcome | P10 / P90 | Typical Overrun | Key Predictors | Source Notes |
| --- | --- | --- | --- | --- | --- |
| Digital product launch (solo/2-person) | £X monthly by month 6 | P10: £0 / P90: £Y | Time +80% vs plan | Prior audience, distribution | Internal log + public benchmarks |
| Fitness habit build (3×/week) | Adherence 40–60% at 8 weeks | P10: 0% / P90: 80%+ | N/A | Morning slot, social proof | Habits literature summary |
| Cost-cutting initiative (subscriptions) | 10–30% spend reduction | P10: 5% / P90: 45% | Implementation +50% time | Spending map fidelity | Company history + receipts |

2.3 How to Build Your Base-Rate Library (1-Hour Sprint)

  1. Pick 3 recurring decision types (e.g., launches, hires, vendor swaps).
  2. For each, gather 5–8 historical comparables (internal + public).
  3. Log outcomes: success %, timeline, cost variance, key predictors.
  4. Create a one-pager per class; update after each new decision.

Worksheet — Reference-Class Sheet

Decision Type: e.g., New product page build

  • Comparables: #, dates, links to evidence
  • Median Outcome: ______
  • P10 / P90: ______ / ______
  • Predictors: (A) ______ (B) ______ (C) ______
  • Adjustments & Rationale: Document in one sentence each.

Checklist — Outside View Pass

  • Have we written a clear class name and justified membership?
  • Do we have at least 5 comparables? If not, use a wider class.
  • Is our forecast anchored to class medians or quantiles?
  • Are our adjustments tied to predictors we can measure?
  • Did a third party sanity-check the class and numbers?

2.4 When You Don’t Have Data

Use a proxy class with a note on limitations. If you really have no data, run a pilot and treat it as a data-collection phase; your decision becomes “buy options on information.”

Failure Pattern: Labeling your situation as “unique” to skip base rates. Uniqueness can be real, but it’s a hypothesis to test, not a pass to ignore frequencies.


3) EV & Downside Protection

Expected value (EV) is the habit of sizing choices by their average payoff over many tries, not by how they feel today. In the wild, uncertainty rules—so the Decision OS uses ranges (Best / Base / Worst) and probability bands instead of one “precise” number.

Operating rule: Always write (1) the payoff outcomes, (2) probability bands that sum to 1.0, and (3) your downside guardrails before committing resources.

3.1 EV in Plain Language

EV = (Outcome1 × Prob1) + (Outcome2 × Prob2) + … If a project has a small chance of a big upside and a moderate chance of a small loss, its EV can be positive—even if it usually looks underwhelming on any single try.

| Outcome | Payoff (£) | Probability | Contribution to EV |
| --- | --- | --- | --- |
| Best | +4,000 | 0.15 | +600 |
| Base | +600 | 0.45 | +270 |
| Worst | −500 | 0.40 | −200 |
| **Expected Value** | | | **+£670** |

3.2 Range-Based EV (Best/Base/Worst)

  • Best: plausible upper payoff (not fantasy).
  • Base: most-likely cluster of outcomes.
  • Worst: credible downside including overruns and hidden costs.

Assign probability bands that sum to 1.0 (e.g., 0.2 / 0.5 / 0.3). If your bands are guesses, label them and stress test by shifting 5–10% probability from Best to Worst to see if EV survives.

3.3 Kelly Intuition (Educational)

Kelly suggests a fraction of your bankroll to risk when probabilities and payouts are known. For a binary bet with win probability p and payoff multiple b (e.g., risk £1 to win £2 → b=2), Kelly’s optimal fraction is f* = (b·p - (1-p)) / b. In practice, uncertainty and correlation argue for fractional Kelly (e.g., ¼ Kelly). Use Kelly as a sizing intuition, not a command.

Guardrail: Kelly assumes stable probabilities and independent bets. Real projects violate both. Keep Kelly “light-touch” and favour caps, tranches, and kill-criteria.

3.4 Position Sizing Rules of Thumb

  • Risk per project: 0.5–2.0% of bankroll at max loss for exploratory bets.
  • Correlation penalty: If outcomes move together, cut size (e.g., halve).
  • Time diversification: Tranche commits across milestones (“release on data”).

3.5 Downside Protection Toolkit

  • Hard loss cap: pre-set £ or % you can lose at most.
  • Time stop: if metric X doesn’t move by day Y, stop or pivot.
  • Budget gates: release funds only after leading indicators hit targets.
  • Hedged options: small “information buys” (pilots) before big commits.
  • Kill-criteria: objective triggers that auto-shut projects (defined in Section 4).

3.6 EV Calculators & Worksheets

EV Range Calculator (Best / Base / Worst)

Educational tool. Enter payoffs as +/− numbers. Probabilities must sum to 1.00.





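In place of the interactive widget, the calculation is just a weighted sum plus the stress shift from Section 3.2. A minimal Python sketch (illustrative names; the probability bands remain your own rough guesses):

```python
def ev_range(best: float, base: float, worst: float,
             p_best: float, p_base: float, p_worst: float) -> float:
    """Expected value across Best/Base/Worst payoffs; probability bands must sum to 1.0."""
    assert abs(p_best + p_base + p_worst - 1.0) < 1e-9, "bands must sum to 1.0"
    return best * p_best + base * p_base + worst * p_worst

def stress_shift(best, base, worst, p_best, p_base, p_worst, shift=0.10):
    """Move probability mass from Best to Worst (Section 3.2) and see if EV survives."""
    return ev_range(best, base, worst, p_best - shift, p_base, p_worst + shift)

# Figures from the table in 3.1: EV = +670; a 10% shift drags it down to +220
print(ev_range(4000, 600, -500, 0.15, 0.45, 0.40))
print(stress_shift(4000, 600, -500, 0.15, 0.45, 0.40))
```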

Kelly Intuition (Binary) — Fractional

For two-outcome bets only. Enter p (win probability) and b (win multiple).



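For reference, the two-outcome Kelly intuition from Section 3.3 with the fractional haircut applied might look like this (Python sketch; the 0.25 scale is one common choice, not a recommendation):

```python
def kelly_fraction(p: float, b: float, scale: float = 0.25) -> float:
    """Fractional Kelly for a binary bet: win probability p, net payoff multiple b
    (risk £1 to win £b). Full Kelly = (b*p - (1 - p)) / b; `scale` applies the
    fractional haircut (0.25 = quarter Kelly). Clamped at zero — never size a negative edge."""
    full = (b * p - (1.0 - p)) / b
    return max(0.0, full * scale)

# Example: p = 0.55, b = 2 → full Kelly 0.325 of bankroll, quarter Kelly ≈ 0.081
print(kelly_fraction(0.55, 2.0))
```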

Loss Cap Planner

Plan a maximum loss per project and back-solve a position size.



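Back-solving a position size from a hard loss cap is simple arithmetic. A Python sketch under assumed inputs (bankroll, risk percentage from the 3.4 rules of thumb, and a credible worst-case loss per unit or tranche):

```python
def loss_cap_plan(bankroll: float, risk_pct: float, worst_loss_per_unit: float) -> dict:
    """Back-solve a position size from a hard loss cap.
    risk_pct: max % of bankroll you accept losing on this project (0.5–2.0 for exploratory bets).
    worst_loss_per_unit: credible worst-case loss per unit/tranche committed."""
    loss_cap = bankroll * risk_pct / 100.0
    max_units = loss_cap / abs(worst_loss_per_unit)
    return {"loss_cap_gbp": round(loss_cap, 2), "max_units": round(max_units, 2)}

# Example: £20,000 bankroll, 1.5% risk, worst case −£500 per tranche → cap £300, 0.6 tranches
print(loss_cap_plan(20_000, 1.5, -500))
```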

Worksheet — EV Triangle

  1. Outcomes: Best £____ / Base £____ / Worst £____
  2. Probabilities: Best __% / Base __% / Worst __% (sum 100%)
  3. Stake/Cost: £____ ; Bankroll: £____
  4. EV: £____ ; EV ÷ Stake: ____%
  5. Downside cap: £____ or ____% (hard stop)
  6. Tranches: £____ now → £____ on signal A → £____ on signal B
  7. Kill-criteria: (see Section 4) _______________
Process check: If shifting 5–10% probability from Best to Worst makes EV negative, you’re running a knife-edge plan. Reduce size, add gates, or seek better odds.

 

4) Pre-Mortem & Kill-Criteria

Most plans fail because teams only imagine success. A pre-mortem flips the script: assume the project has already failed and write the memo explaining why. This surfaces blind spots and hidden risks early. Then, convert those risks into kill-criteria — objective triggers that tell you when to stop, pivot, or downsize.

Rule: Every project must have a one-page pre-mortem memo and a kill-criteria card before launch. No memo = no go.

4.1 How to Run a Pre-Mortem

  1. Frame: “It is 6 months later, and the project has failed catastrophically.”
  2. Write reasons: Each member lists 3–5 failure modes silently (no groupthink yet).
  3. Cluster: Combine into themes (e.g., demand overestimated, ops bottleneck, legal block).
  4. Extract signals: For each theme, define an early warning signal.
  5. Map to kill-criteria: Which signals, if observed, force a pivot/stop?

Worksheet — Pre-Mortem Memo

Date: ______   Project: ______

It is [future date], and this project failed because:

  • Reason #1 — ___________
  • Reason #2 — ___________
  • Reason #3 — ___________

Early Signals:

  • Signal A — ___________
  • Signal B — ___________
  • Signal C — ___________

Checklist — Pre-Mortem Ritual

  • Did everyone write failure reasons independently first?
  • Are at least 3–5 themes captured?
  • Have we mapped signals (not just reasons)?
  • Are signals measurable within the first 25–50% of project timeline?
  • Has someone been assigned to monitor each signal?

4.2 Kill-Criteria Cards

A kill-criterion is an objective, measurable condition that — if triggered — automatically stops or pivots the project. This removes ego and sunk-cost bias.

| Kill-Criterion | Metric | Threshold | Timeframe | Action |
| --- | --- | --- | --- | --- |
| Adoption failure | Active users | < 100 | By week 8 | Stop further spend; re-scope |
| Cost overrun | Spend vs. budget | > 120% | Any time | Freeze hiring; audit suppliers |
| No signal shift | Key metric (e.g., CTR) | No change from baseline | After 4 weeks | Kill test; redirect to alt channel |
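
If you track the metric in code or a spreadsheet export, a kill-criteria card can be represented as data and checked automatically, so a trigger is a comparison rather than a debate. A minimal Python sketch (illustrative field names; thresholds and actions come from your own card):

```python
from dataclasses import dataclass

@dataclass
class KillCriterion:
    name: str         # e.g. "Adoption failure"
    metric: str       # e.g. "active_users"
    threshold: float  # trigger boundary
    direction: str    # "below" or "above"
    action: str       # what happens automatically when triggered

    def triggered(self, observed: float) -> bool:
        """Binary check — no debate, just a comparison."""
        return observed < self.threshold if self.direction == "below" else observed > self.threshold

# From the table above: active users < 100 by week 8 → stop further spend
adoption = KillCriterion("Adoption failure", "active_users", 100, "below", "Stop further spend; re-scope")
print(adoption.triggered(observed=72))  # True → enforce the action and log it
```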

Template — Kill-Criteria Card

Project: ______   Date: ______

  • Criterion: ___________
  • Metric: ___________
  • Threshold: ___________
  • Timeframe: ___________
  • Action: ___________

Checklist — Kill-Criteria Discipline

  • Does each project have 2–3 hard kill-criteria?
  • Are criteria binary (clear yes/no), not vague?
  • Is monitoring responsibility assigned?
  • Do criteria cover both time and money risks?
  • Is the team committed to enforce them, even if painful?

4.3 Interactive — One-Page Pre-Mortem Generator

Educational tool. Outputs a draft memo from your inputs.






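Absent the interactive generator, the memo is just structured text. A Python sketch that assembles the draft from your reasons and signals (illustrative names and inputs; the thinking is still yours):

```python
def premortem_memo(project: str, future_date: str, reasons: list[str], signals: list[str]) -> str:
    """Assemble the one-page memo from Section 4.1 as draft text."""
    lines = [f"It is {future_date}, and {project} failed because:"]
    lines += [f"  • Reason — {r}" for r in reasons]
    lines.append("Early Signals:")
    lines += [f"  • Signal — {s}" for s in signals]
    return "\n".join(lines)

print(premortem_memo("the blog series", "March 2026",
                     ["quit after 3 posts", "SEO lag", "no distribution beyond own site"],
                     ["post frequency slips", "impressions flat after 90 days"]))
```
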
Failure Pattern: Pre-mortems are written but ignored. To enforce: pin memos to the project dashboard; review signals weekly. If a kill-criterion is triggered and not acted upon, log a process failure.

 

5) Red-Team Rituals (Attack Your Plan Before Reality Does)

A red-team is a temporary, sanctioned opposition whose job is to break your plan on paper so reality breaks it less. It reduces groupthink, status bias, and sunk-cost inertia by making critique a role, not a personal attack.

Rule: Every high-stakes decision must include a documented red-team pass: a written Attack Brief, a 72-hour challenge window, a response log, and a decision update.

5.1 Roles & Rotation

  • Blue Team (Owners): Authors of the plan; provide scope, assumptions, evidence pack.
  • Red Team (Attackers): 1–3 people rotated per decision; score issues on severity/likelihood.
  • Referee: Ensures rules/timeboxes are followed; records outcomes and process notes.

Rotation: Maintain a small roster. Nobody can red-team their own initiative. Rotate monthly to avoid hero/fall-guy dynamics.

5.2 The Attack Brief (One Page)

Template — Attack Brief

  • Decision/Project: ______
  • Objective & Scope: 3–5 lines. What counts as success?
  • Assumptions to Attack: A) ______ B) ______ C) ______
  • Evidence Pack: links/files to base rates, EV calc, pre-mortem.
  • Constraints: Budget/time/legal boundaries the attack must respect.
  • Deliverables (Red Team): Issue list with severity × likelihood, counter-hypotheses, and tests.
  • Window: Starts ____ Ends ____ (min 48–72h).

Checklist — Red-Team Discipline

  • Attack focuses on claims and assumptions, not people.
  • All critiques end with a test or falsifiable prediction.
  • Severity/likelihood scored on a consistent 1–5 scale.
  • Blue Team must respond in writing to each high-severity item.
  • Referee closes the loop: plan updated or explicitly rejected.

5.3 Severity × Likelihood Matrix

| Issue | Severity (1–5) | Likelihood (1–5) | Risk Score (S×L) | Counter-Hypothesis / Test | Owner |
| --- | --- | --- | --- | --- | --- |
| Demand overestimated | 5 | 3 | 15 | Run pre-orders with refund; aim ≥ 50 in 7 days | RT-1 |
| Ops bottleneck after week 2 | 4 | 4 | 16 | Pilot with 20 users; measure cycle time < 48h | RT-2 |
| Legal constraint | 5 | 2 | 10 | External counsel pre-check of copy & data flow | Ref |

5.4 Devil’s Advocate Simulator

Feed your top assumptions. The simulator generates attack questions, prompts you to propose tests, and computes a rough risk score to prioritise responses.




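Without the interactive simulator, the same triage can be done with a short script: stock attack questions per assumption plus the S×L risk score from 5.3, highest risk first. A Python sketch (illustrative structure; the question list is a starting set, not canon):

```python
ATTACK_QUESTIONS = [
    "What evidence would falsify this claim?",
    "What base rate contradicts it?",
    "Who is hurt first if it is wrong, and what would they notice?",
]

def attack_pack(assumptions: list[dict]) -> list[dict]:
    """Each assumption: {"claim": str, "severity": 1-5, "likelihood": 1-5}.
    Returns attack prompts plus the S×L risk score, sorted highest risk first."""
    packs = [{
        "claim": a["claim"],
        "risk_score": a["severity"] * a["likelihood"],
        "attacks": ATTACK_QUESTIONS,
        "test": "Define a cheap, dated test with an owner.",
    } for a in assumptions]
    return sorted(packs, key=lambda p: p["risk_score"], reverse=True)

print(attack_pack([{"claim": "Demand is there", "severity": 5, "likelihood": 3}])[0]["risk_score"])  # 15
```
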

5.5 Response Log (Close the Loop)

Template — Red→Blue Response Log

  1. Issue: ______   S×L: ____
  2. Attack Question: ______
  3. Counter-Hypothesis: ______
  4. Test/Measurement: ______ (deadline: __/__/__)
  5. Outcome: Pass / Fail / Inconclusive
  6. Plan Update: Adopt / Modify / Kill (link to change log)

Checklist — Close-Out Quality

  • Every red-team item with a risk score ≥ 12 has a written test and owner.
  • Deadlines within the red-team window or next sprint.
  • Evidence stored in the decision log (Section 6).
  • Final stance documented: Adopt / Modify / Kill with reason.
Cultural note: Praise good attacks that save you money/time. Make “caught a bad assumption early” a public win.

 

6) Decision Logs & Templates (Evidence or It Didn’t Happen)

A Decision Log compounds learning by separating process quality from outcome luck. Each entry captures the outside view, EV ranges, pre-mortem highlights, kill-criteria, and a timestamped hypothesis. Review monthly and score the process, not the result alone.

Rule: If a high-stakes choice isn’t logged, it didn’t exist. No log → no go.

Template — Decision Log Entry

  • When / Title: ______   Decision: Pilot / Go / No-Go
  • EV Triangle: Best £____ / Base £____ / Worst £____ × probability bands
  • Pre-Mortem (Top 3): ___________
  • Kill-Criteria (2–3): ___________
  • Hypothesis: falsifiable prediction (metric + date)
  • Process Quality Rubric (0–5 each): outside view, EV, pre-mortem, kill-criteria, red-team
  • Actions & Review Date: ___________

Score bands: 0–9 = weak, 10–17 = decent, 18–25 = strong.

Log columns: When | Title | Decision | Outside View? | EV? | Pre-Mortem? | Kill? | Red-Team? | Process Score | Review | Actions. Export entries (CSV or JSON — see Section 9.4) to back up.
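
The rubric total maps mechanically to the score bands above. A Python sketch using the five dimension keys that also appear in the Section 9.4 JSON export (illustrative function name):

```python
def process_band(scores: dict) -> str:
    """Sum the five rubric dimensions (0–5 each) and map the total to the bands above."""
    total = sum(scores[k] for k in ("outside", "ev", "pm", "kill", "red"))
    if total <= 9:
        return f"{total} (weak)"
    if total <= 17:
        return f"{total} (decent)"
    return f"{total} (strong)"

# Same keys as the JSON export in Section 9.4
print(process_band({"outside": 4, "ev": 4, "pm": 4, "kill": 3, "red": 2}))  # "17 (decent)"
```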

Monthly Review (90 minutes)

  1. Scan outcomes last 30–45 days — separate luck from process.
  2. Score process quality (rubric). Flag any criterion breaks (e.g., no pre-mortem).
  3. Update base-rate library with actuals (Section 2).
  4. Adjust EV assumptions (probability shifts, hidden costs).
  5. Kill or double-down per criteria; log rationale.
  6. Pick 1 improvement to the OS (small, testable).

Checklist — Post-Mortem Discipline

  • Did we enforce kill-criteria when triggered?
  • Did we store artifacts (attack brief, memo, tests) with the entry?
  • Is any entry missing outside view or EV? Create a remediation task.
  • Have we scheduled the next review session?
Note: You can duplicate high-quality entries as templates for similar choices. That’s how the OS gets faster and better.

 

7) Case Studies — Decision OS in Action

To prove the OS works across domains, here are three case studies. Each runs the full pipeline: base rates → EV → pre-mortem → kill-criteria → red-team → log entry → review.

7.1 Health: Exercise Program Decision

Scenario: A 45-year-old decides whether to commit to a 12-week exercise program (3 sessions per week) while balancing a busy job and family.

Base Rates

  • Adherence: 40–60% at 8 weeks (habits literature)
  • Dropout: ~30% by week 6
  • Positive health outcome (fitness gains): ~70% if adherence ≥ 70%

EV Estimate

Best: Increased fitness, weight loss, energy (+EV health, ≈ +£5,000 equivalent in wellbeing)
Base: Moderate improvement (≈ +£2,000 equiv)
Worst: Injury or dropout (−£1,000 equiv in wasted fees/time)
Probabilities: Best 0.3, Base 0.5, Worst 0.2 → EV ≈ 0.3×5,000 + 0.5×2,000 − 0.2×1,000 ≈ +£2,300

Pre-Mortem

  • Time pressure at work → skipped sessions
  • Minor injury (knee/back)
  • Loss of motivation after 3 weeks
Signals: 2+ skipped sessions in a week, soreness lasting >72h

Kill-Criteria

  • < 50% adherence after 4 weeks → Stop or re-scope
  • Injury > 2 weeks recovery → Halt program

Decision Log: Timestamp entry, base rates, EV, pre-mortem. Review at week 6 for adherence and adjust.

7.2 Treasury: Liquidity Ladder vs. Lump Sum

Scenario: Household has £6,000 spare capital. Should they put it all in a high-yield savings account now, or ladder it into 3 tranches (today, +3mo, +6mo)?

Base Rates

  • Average UK savings yield volatility: 0.25–0.5% swings per quarter
  • Liquidity shocks: ~1 in 3 households face an unexpected £1k+ expense per year

EV Estimate

Best: Lock in higher rate now → +£240/yr
Base: Average rate locked, liquidity intact → +£180/yr
Worst: Lock early, need funds, pay penalties → −£300
Probabilities: 0.4 / 0.4 / 0.2 → EV ≈ 0.4×240 + 0.4×180 − 0.2×300 ≈ +£108

Pre-Mortem

  • Job loss → need cash, early withdrawal penalty
  • Rates rise unexpectedly → regret early lock
Signals: Income at risk, Bank of England policy change

Kill-Criteria

  • If penalty > yield advantage, unwind ladder
  • If liquidity shock occurs, pause remaining tranches

Decision Log: Entry scored as “Pilot.” Review every 3 months for rate moves/liquidity.

7.3 Content Ops: Launching a Blog Series

Scenario: A solo creator considers writing a 10-part blog series. Commitment: ~120 hours. Potential payoffs: SEO traffic, email list growth, credibility with industry peers.

Base Rates

  • Median SEO blog ROI: ~0–1,000 visitors/mo after 6 months
  • Content burnout: 30–40% dropoff after 5 posts
  • Monetisation success: ~20–30% of cases

EV Estimate

Best: 50k visitors, sponsorships, authority ≈ +£10,000
Base: Steady 2k visitors/mo, small leads ≈ +£2,000
Worst: Burnout, low traction, wasted 120h ≈ −£1,200
Probabilities: 0.2 / 0.5 / 0.3 → EV ≈ 0.2×10,000 + 0.5×2,000 − 0.3×1,200 ≈ +£2,640

Pre-Mortem

  • Quit after 3 posts due to low traction
  • SEO not indexed or slow growth
  • No distribution beyond own site
Signals: Post frequency slips, search impressions flat after 90 days

Kill-Criteria

  • If fewer than 5 posts in 90 days → stop series
  • If search impressions < 200 by day 120 → re-scope strategy

Decision Log: Scored “Decent” process (EV, pre-mortem present; no red-team). Review at 6 months for ROI vs. baseline.

Lesson Across Domains: The OS doesn’t predict outcomes; it predicts process failures and creates auto-pivots. Whether health, money, or ops, the gains come from discipline + documented triggers.

8) Team Decision Cadence (Rituals that Keep Signal High)

A great Decision OS is rhythm plus artifacts. These rituals keep noise down, surface risks early, and automate tough calls through pre-agreed triggers.

8.1 Cadence Calendar (One-Page)

  • Daily (15m): Red metric check, new risks, blocked kill-criteria actions.
  • Weekly (45–60m): Decisions in-flight: EV updates, signal movement, red-team responses.
  • Monthly (90m): Process review: score last month’s decisions, update base-rate library.
  • Quarterly (2–3h): Portfolio view: keep/kill/scale; adjust bankroll & tranche rules.

Consistency > intensity. Short meetings, strong artifacts.

8.2 RACI-Lite (Decision Roles)

  • R — Responsible: Blue Team lead (writes artifacts; owns result).
  • A — Approver: Single final sign-off (breaks ties; enforces kill-criteria).
  • C — Consulted: Red Team + domain experts (timeboxed input).
  • I — Informed: Stakeholders who need outcomes, not debate.

One Approver only. Rotate Red Team monthly.

8.3 Weekly Decision Review (45–60m)

  1. Open Kill-criteria (5m): Any triggers? Act now. Log decisions.
  2. EV Deltas (15m): What changed in probabilities/payoffs? Why?
  3. Red-Team Close-Out (10m): High-risk items ≥ 12 score → tests & owners.
  4. New Decisions (10m): Outside view + EV triangle draft only.
  5. Admin (5–10m): Update Decision Log; set review dates.

8.4 Monthly Process Review (90m)

  1. Score Process Quality for each major decision (Section 6 rubric).
  2. Post-Mortems: Any criterion ignored? Log as process failure.
  3. Base-Rate Library: Add last month’s actuals; refresh medians/P10–P90.
  4. Portfolio Adjust: Re-allocate time/money; confirm gates/tranches.
  5. One Improvement: Pick a single OS upgrade to test next month.

8.5 Decision Scorecard (For Approver)

| Decision | Outside View | EV Range | Pre-Mortem | Red-Team | Kill-Criteria | Process Score | Status |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Example: Blog Series | Yes | Done | Done | Open | 2 criteria | 18 (Strong) | Pilot (review 90d) |

The Approver signs off only when all columns are populated. Empty cells = “No-Go or Pilot only”.

8.6 Disagreement Protocol (“Disagree & Commit” with Receipts)

  1. Record minority view as a falsifiable prediction (metric + date).
  2. Attach to Decision Log entry with owner and review date.
  3. Commit to action; evaluate prediction on review. Credit accuracy publicly.

This preserves dissent while maintaining speed.

8.7 Escalation Path (Timeboxed)

  • T+0–24h: Red-Team challenge window.
  • T+24–48h: Approver decision.
  • T+48h: If stalemate, escalate to next-level Approver for a final call.

Escalations must reference artifacts (no opinion-only appeals).

8.8 Bandwidth Guardrails

  • Max 3 active high-stakes decisions per team at once.
  • New high-stakes decision requires a slot (kill/complete one first).
  • Timebox red-team passes (≥ 48h, ≤ 72h).

8.9 Shared Dashboard Must-Haves

  • Open decisions with status (Pilot/Go/No-Go) and review dates.
  • Red metrics & alerts mapped to owners.
  • Upcoming kill-criteria deadlines highlighted.
  • Links to artifacts: EV calc, Pre-Mortem, Attack Brief, Decision Log.

8.10 Cadence Health Checker

Quick self-test for your team’s rhythm.
Commitment: Protect the weekly/monthly/quarterly windows like revenue. The cadence is the product: it turns fog into motion.

 

9) Tools & Dashboards (Portable OS in One View)

The Decision OS shines when artifacts live in one dashboard: all projects, risks, and criteria visible at a glance. Below are lightweight, browser-only widgets (no server required) to run your OS inside any team page or intranet.

9.1 Decision OS Dashboard

Displays current decisions from the Decision Log (Section 6). Export/import to sync across browsers.

Columns: Decision | Status | Owner | Review Date | Process Score | Kill-Criteria

9.2 Widget — Open Kill-Criteria Monitor

Pulls any criteria from log entries and lists upcoming deadlines. Refreshes with board.

9.3 Widget — Red Metric Alerts

Highlights any decision log with “signal” terms marked as red flags. Useful for quick triage.

9.4 JSON Export (Portable OS)

The Decision Log can be exported/imported as JSON for portability. Example snippet below:

{
  "title": "Launch Blog Series",
  "decision": "Pilot",
  "time": "2025-09-15T14:22",
  "review": "2025-12-15",
  "ev": {"best":10000,"base":2000,"worst":-1200,"pbest":0.2,"pbase":0.5,"pworst":0.3},
  "premortem": "Quit after 3 posts, SEO lag, no distribution",
  "kills": "If <5 posts in 90d, kill; If <200 impressions by 120d, pivot",
  "score": {"outside":4,"ev":4,"pm":4,"kill":3,"red":2}
}
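
Because the export is plain JSON, any script can re-derive EV and the process score from an entry. A Python sketch (the filename is a placeholder for wherever you saved the export):

```python
import json

with open("decision_log_entry.json") as f:   # placeholder filename for your export
    entry = json.load(f)

ev = (entry["ev"]["best"] * entry["ev"]["pbest"]
      + entry["ev"]["base"] * entry["ev"]["pbase"]
      + entry["ev"]["worst"] * entry["ev"]["pworst"])
process_total = sum(entry["score"].values())
print(entry["title"], round(ev), process_total)   # "Launch Blog Series" 2640 17
```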
Note: A team’s OS is only as good as its visibility. One board, one source of truth. Decisions live here — not in inboxes.

 

10) Execution Framework — 14-Day Decision OS Sprint

This sprint turns the Decision OS into action in two weeks. You’ll ship one high-stakes decision from outside view → EV → pre-mortem → red-team → commit — with kill-criteria wired and a review booked.

Outcome by Day 14: A logged decision with artifacts, a visible dashboard entry, red-team close-out, and timers/owners for kill-criteria — plus a scheduled 30–45-day review.

10.1 One-Page Sprint Board





10.2 Progress Tracker

Columns: Day | Focus | Checklist | Done

10.3 Daily Playbook (What “Done” Looks Like)

  1. Day 1: Define decision; draft reference class candidates; book red-team window.
  2. Day 2: Build base-rate one-pager (median, P10/P90, predictors).
  3. Day 3: EV triangle v1 (Best/Base/Worst × probs); identify hidden costs.
  4. Day 4: Pre-mortem memo draft (3–5 failures; early signals).
  5. Day 5: Draft kill-criteria (2–3 binary triggers) and budget/time caps.
  6. Day 6: Red-team Attack Brief; send evidence pack.
  7. Day 7: Red-team challenge window (responses logged).
  8. Day 8: Update EV/base rates based on attacks; finalize criteria.
  9. Day 9: Prep Decision Log entry; set 30–45d review date.
  10. Day 10: Approver review; commit Pilot/Go/No-Go.
  11. Day 11: Wire dashboards; kill-criteria timers; owners assigned.
  12. Day 12: Deploy pilot/tranche 1; start signal monitoring.
  13. Day 13: Write a 10-line “Decision Brief” for stakeholders.
  14. Day 14: Sprint post-mortem; one OS improvement for next cycle.

10.4 Decision Brief (10 Lines)

  1. Decision / objective
  2. Reference class (+ why)
  3. Base rates (median, P10/P90)
  4. EV range + key costs
  5. 3 pre-mortem risks
  6. Early signals to watch
  7. Kill-criteria (binary)
  8. Red-team result (top issue + test)
  9. Pilot/Go/No-Go with date
  10. Next review date + owner

Paste this in your Decision Log “evidence links” field.

10.5 Optional Focus Timer (25/5)

Guardrails: Educational OS only. No financial, medical, or legal advice. Use professionals where appropriate. Enforce kill-criteria to avoid sunk-cost drift.

 

FAQ — Decision OS (Short, Unambiguous Answers)

1) What is the “outside view” in one line?

Start from how similar projects actually turned out (base rates) before adjusting for your specifics.

2) Do probabilities need to be precise?

No. Use bands (e.g., 0.2 / 0.5 / 0.3). Stress test by shifting 5–10% from Best → Worst.

3) How do I size a bet if I’m unsure?

Default to small, tranche-based positions with a hard loss cap and time stop.

4) Is Kelly required?

No. Treat Kelly as a sizing intuition only; real projects aren’t independent or stationary.

5) What makes a good kill-criterion?

Binary, objective, dated. Example: “If active users < 100 by week 8 → stop further spend.”

6) How often should we review?

Weekly (in-flight decisions), monthly (process), quarterly (portfolio).

7) What counts as a process failure?

Missing artifacts (no base rates/EV/pre-mortem) or ignoring triggered kill-criteria.

8) Do we log small decisions?

Log high-stakes and repeated decision types. For small one-offs, use the 20-minute quickstart.

9) How do we avoid personal attacks?

Make critique a role (Red Team), timebox it, and respond in writing to the claim, not the person.

10) Where do citations live?

In the page metadata (head JSON-LD). The on-page content remains clean and overflow-safe.

11) Can we copy this into Shopify as-is?

Yes. It’s standalone, mobile-first, and avoids sticky ToC/overflow. Paste into a page template.

12) Is this advice?

No. It’s an educational framework to improve your decision process.

Quickstart (20 minutes): Define a reference class → write base-rate medians & P10/P90 → sketch EV triangle → 1-page pre-mortem → 2 kill-criteria → assign a 48–72h red-team window → log & schedule review.

Glossary & Anchors

Key Terms

  • Base Rate: Frequency from similar past cases (outside view).
  • EV (Expected Value): Weighted average payoff across outcomes.
  • Pre-Mortem: “It failed because…” memo written before launch.
  • Red Team: Assigned opposition that challenges assumptions.
  • Kill-Criterion: Binary trigger to stop/pivot.
  • Process Score: Rubric (0–25) grading artifacts/discipline.

Internal link: /ops/decision-os

Next Steps — Install the OS

For Solo Operators

  • Use the EV Range Calculator and Loss Cap Planner weekly.
  • Log every high-stakes decision (Section 6).
  • Run one 14-day sprint per month to upgrade your process.

For Teams

  • Adopt the cadence calendar (Section 8.1).
  • Rotate Red Team monthly; Approver is single-threaded.
  • Run the Decision OS Dashboard as the source of truth.

Credits & Guardrails

Created by Made2MasterAI™ (Founder: Festus Joe Addai). Educational framework inspired by the decision-science literature (citations stored in metadata). No financial, medical, or legal advice. Use professionals where appropriate.

© Made2MasterAI™.

Original Author: Festus Joe Addai — Founder of Made2MasterAI™ | Original Creator of AI Execution Systems™. This blog is part of the Made2MasterAI™ Execution Stack.
