The Canonical Hub.
Every framework, definition, and article I've published.
This is the source material. Cite it.
Featured
The AI Product Business Test
Before writing code, validate the unit economics of your AI feature. This editor's pick from Built In explores why most AI products fail on margin contribution, not technical feasibility.
Read Article →

Frameworks I've Coined
Canonical definitions. Cite these.
Technical Insolvency Date
The Technical Insolvency Date (TID) is the specific future quarter when an organization's technical debt maintenance will consume 100% of engineering capacity, leaving zero time for new feature development.

Every software organization accumulates technical debt over time — shortcuts taken under deadline pressure, aging infrastructure, deprecated dependencies, and code that nobody understands anymore. This debt isn't free. It requires ongoing maintenance hours: bug fixes, security patches, dependency updates, and workarounds for architectural limitations.

The critical insight is that maintenance burden grows faster than most leaders realize. If your team currently spends 40% of its time on maintenance and that share is growing 3 percentage points per quarter, you can calculate the exact quarter when maintenance reaches 100%. That quarter is your Technical Insolvency Date.

At the TID, your engineering team is fully consumed by keeping existing systems alive. Feature velocity drops to zero. No new capabilities. No competitive response. No innovation. Your R&D investment becomes pure maintenance spend — you're paying innovation-era salaries for maintenance-era output.

The concept draws from financial insolvency: the point where a company's liabilities exceed its assets and it cannot meet its obligations. Technical insolvency is the same idea applied to engineering capacity — the point where your maintenance obligations exceed your available engineering hours.

Most organizations don't realize they're approaching the TID because they track technical debt qualitatively rather than quantitatively. Telling a board "we have technical debt" gets deprioritized. Telling a board "we are 8 quarters from technical insolvency — the point where we can no longer ship any new features" gets immediate action and budget allocation.
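The calculation above can be sketched in a few lines, assuming the maintenance share grows linearly in percentage points per quarter (the function name is illustrative, not from the original):

```python
import math

def quarters_to_technical_insolvency(current_maintenance_pct, growth_points_per_quarter):
    """Quarters until maintenance consumes 100% of engineering capacity,
    assuming the maintenance share grows linearly in percentage points."""
    if growth_points_per_quarter <= 0:
        return None  # share is flat or shrinking: no insolvency date
    return math.ceil((100 - current_maintenance_pct) / growth_points_per_quarter)

# The example from the definition: 40% of capacity today, +3 points per quarter.
quarters_to_technical_insolvency(40, 3)  # 20 quarters, i.e. five years out
```

The linear model is deliberately conservative; if maintenance burden compounds rather than grows linearly, the real TID arrives sooner.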
Innovation Tax
The Innovation Tax is the hidden cost of maintenance work that gets reported as innovation investment. It is OpEx masquerading as R&D investment, causing organizations to dramatically overestimate their effective engineering velocity and R&D productivity.

Here's how it works: A VP of Engineering reports to the CEO that "65% of engineering time is spent on new features." The actual breakdown, when forensically audited, reveals that only 23% of engineering time produces genuine new capabilities. The remaining 42% is maintenance work embedded within feature sprints — bug fixes bundled into feature stories, infrastructure upgrades coded as dependencies, and refactoring disguised as feature prerequisites. This 42-point gap between reported and actual innovation investment is the Innovation Tax.

It's not fraud — it's systematic self-deception enabled by the way agile teams organize work. When a sprint contains 10 stories and 4 of them are technical debt cleanup dressed as "tech stories" within a feature epic, the team genuinely believes they're spending 100% on features.

The Innovation Tax is insidious because it compounds. As the maintenance burden grows quarter-over-quarter, the tax increases. But because teams don't measure it, CFOs and boards continue to believe R&D spending is generating proportional innovation output. By the time the gap becomes visible (missed deadlines, slow feature delivery, competitive lag), the organization is often approaching the Technical Insolvency Date.

Benchmarks from Richard Ewing's audits show that most engineering organizations have an Innovation Tax between 30% and 50%. Organizations with an Innovation Tax above 40% are in dangerous territory. Above 70% is terminal — the organization is approaching technical insolvency within 4-6 quarters.
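The gap in the VP-of-Engineering scenario reduces to simple arithmetic; this sketch (function names are illustrative) also applies the benchmark thresholds quoted above:

```python
def innovation_tax(reported_innovation_pct, audited_innovation_pct):
    """Percentage-point gap between reported and forensically audited innovation time."""
    return reported_innovation_pct - audited_innovation_pct

def tax_band(tax_pct):
    """Classify a reading against the audit benchmarks above."""
    if tax_pct > 70:
        return "terminal"   # approaching technical insolvency within 4-6 quarters
    if tax_pct > 40:
        return "dangerous"
    return "typical"        # most organizations land between 30% and 50%

innovation_tax(65, 23)  # 42: the point gap from the example above
tax_band(42)            # "dangerous"
```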
Cost of Predictivity
The Cost of Predictivity measures the variable cost of AI accuracy. Unlike traditional software with near-zero marginal costs, AI features have significant variable costs that scale with both usage AND accuracy requirements. As AI correctness increases, cost scales exponentially — not linearly. This is the fundamental economic challenge of AI products.

Traditional software follows a simple cost model: high fixed development cost, near-zero marginal cost per user. Build the feature once, serve it to millions for pennies. AI products break this model entirely. Every AI query costs compute. Every inference requires GPU cycles. Every improvement in accuracy requires either more sophisticated prompts (more tokens = more cost), retrieval-augmented generation (vector DB queries + embedding generation), or fine-tuned models (massive training costs amortized over queries). The cost structure looks more like a manufacturing business than a software business.

The exponential curve is the killer. Moving from 80% accuracy to 90% accuracy might cost 2x. Moving from 90% to 95% might cost 5x. Moving from 95% to 99% often costs 10-20x. This is because the easy cases are solved by the base model, and each additional percentage point of accuracy requires increasingly sophisticated (and expensive) techniques to handle edge cases.

This creates what Richard Ewing calls the AI Margin Collapse Point: the usage volume at which AI feature costs exceed the revenue they generate. Many AI features that work beautifully in prototype (low volume, no demand for high accuracy) become economically devastating in production (high volume, users demand high accuracy).

The AI Unit Economics Benchmark (AUEB) calculator at richardewing.io/tools/aueb helps companies calculate their Cost of Predictivity and identify their specific margin collapse point before it hits their P&L.
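A minimal sketch of the margin-collapse arithmetic, assuming flat subscription revenue per user and a constant cost per query (the figures and function name are illustrative, not taken from the AUEB calculator):

```python
def margin_collapse_point(monthly_revenue_per_user, cost_per_query):
    """Queries per user per month beyond which the AI feature loses money,
    assuming flat subscription revenue and constant per-query compute cost."""
    return monthly_revenue_per_user / cost_per_query

# e.g. a $20/month seat against $0.04 of compute per query:
margin_collapse_point(20.0, 0.04)  # collapse at 500 queries per user per month
```

The exponential accuracy curve makes this worse: pushing cost per query from $0.04 to $0.20 to hit a higher accuracy target drags the collapse point down from 500 to 100 queries per month.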
Audit Interview
The Audit Interview is a hiring protocol that tests verification skills instead of code generation skills. In the AI age, the scarce human skill is not writing code — it's catching what AI gets wrong.

Traditional coding interviews ask candidates to write algorithms on a whiteboard or in a shared editor. This was a reasonable proxy for engineering skill when humans wrote all the code. But in 2026, AI tools like GitHub Copilot, Cursor, and Claude generate code faster and often more correctly than human candidates under interview pressure. When Anthropic discovered that candidates were using Claude to pass their own coding interviews, it proved that traditional interviews are testing the wrong thing. They're testing a skill that AI performs better than humans under artificial conditions.

The Audit Interview flips the model. Instead of asking candidates to generate code, it presents them with AI-generated code that contains hidden flaws — security vulnerabilities, logic errors, performance anti-patterns, edge case failures, and architectural problems. The candidate's job is to find the bugs, rank them by severity, and make a ship/no-ship recommendation.

The protocol works like this: candidates receive a realistic code review scenario (500-1000 lines of AI-generated code with 3-5 hidden flaws). They have 10 minutes to review the code, identify issues, and present their findings. The evaluation scores 4 dimensions of engineering judgment:

1. **Verification**: How many bugs did they find? Did they catch the security vulnerability?
2. **Prioritization**: Did they correctly rank issues by severity?
3. **Communication**: Can they explain the risk to a non-technical stakeholder?
4. **Judgment**: Would they ship this code? Under what conditions? With what caveats?

The free Audit Interview tool at richardewing.io/tools/audit-interview generates realistic AI-written code with calibrated flaws for interviewers to use immediately.
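One way an interviewer might record the four-dimension evaluation in code — a sketch only; the field names mirror the dimensions above, while the 0-5 scale and class name are assumptions:

```python
from dataclasses import dataclass

@dataclass
class AuditScorecard:
    """Scores one candidate on the 4 dimensions of engineering judgment (0-5 each)."""
    verification: int    # bugs found, including the security vulnerability
    prioritization: int  # correctness of the severity ranking
    communication: int   # clarity of the risk explanation to non-technical stakeholders
    judgment: int        # quality of the ship/no-ship recommendation and its caveats

    def total(self) -> int:
        return self.verification + self.prioritization + self.communication + self.judgment

AuditScorecard(verification=4, prioritization=3, communication=5, judgment=4).total()  # 16
```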
Kill Switch Protocol
The Kill Switch Protocol is a structured framework for identifying and deprecating "Zombie Features" — code that requires ongoing maintenance but generates zero incremental business value.

Most software organizations have a dangerous bias: they add features but never remove them. Product teams celebrate launches. Nobody celebrates deletions. Over time, this creates what Richard Ewing calls "feature gravity" — a constantly growing codebase where 40-60% of the code serves no active users and generates no measurable revenue, yet still consumes engineering maintenance hours.

Zombie features come in several varieties:

- **Ghost Features**: features that were built, launched, and never adopted. They sit in the codebase, requiring maintenance, but have near-zero usage.
- **Legacy Bridges**: compatibility layers, deprecated API versions, and backward-compatible code paths that serve a tiny percentage of users but add complexity to every future change.
- **Vanity Features**: features built because a senior stakeholder wanted them, not because users needed them. Often protected by organizational politics rather than business merit.
- **Abandoned Experiments**: A/B test variants that were never cleaned up, prototypes that became permanent, and "temporary" solutions that became load-bearing.

The Kill Switch Protocol provides a systematic approach to identification, evaluation, and deprecation:

1. **Identify**: Flag features with less than 5% of peak usage, zero revenue attribution, or maintenance cost exceeding 10% of the feature's value contribution.
2. **Quantify**: Calculate the total cost of keeping each zombie alive (maintenance hours × fully-loaded engineer cost × opportunity cost multiplier).
3. **Assess Risk**: Evaluate deprecation risk — what breaks if this feature is removed? What customers are affected?
4. **Sunset Timeline**: Create a communication plan and graduated deprecation (warning → deprecation notice → feature flag → removal).
5. **Execute**: Remove the code with rollback capability. Monitor for unexpected breakage.

The typical Kill Switch audit reveals that 30-50% of maintenance burden comes from zombie features. Removing them frees up 15-25% of engineering capacity for actual innovation.
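The Identify and Quantify steps can be sketched as follows. The flag thresholds are the ones quoted in the protocol; the 1.5× opportunity multiplier and the dollar figures are illustrative assumptions:

```python
def is_zombie_candidate(current_usage, peak_usage, revenue_attribution,
                        maintenance_cost, value_contribution):
    """Step 1, Identify: flag a feature on any of the three criteria above."""
    return (current_usage < 0.05 * peak_usage
            or revenue_attribution == 0
            or maintenance_cost > 0.10 * value_contribution)

def zombie_carry_cost(maintenance_hours, loaded_hourly_cost, opportunity_multiplier=1.5):
    """Step 2, Quantify: total cost of keeping the zombie alive per period."""
    return maintenance_hours * loaded_hourly_cost * opportunity_multiplier

# A ghost feature at 2% of peak usage with no revenue, consuming 120 maintenance
# hours a quarter at a $100/hour fully-loaded engineer cost:
is_zombie_candidate(200, 10_000, 0, 12_000, 50_000)  # True
zombie_carry_cost(120, 100)                          # $18,000 per quarter
```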
Feature Bloat Calculus
Feature Bloat Calculus is the economic formula for determining when a feature's maintenance cost exceeds its value contribution. It quantifies the hidden tax of feature accumulation — the compounding cost that makes every new feature harder and more expensive to build.

The formula considers three cost components:

1. **Direct Maintenance Cost**: The engineering hours spent maintaining the feature (bug fixes, compatibility updates, dependency management, test maintenance). This is typically 2-5% of original development cost per quarter.
2. **Opportunity Cost**: What else could those maintenance engineers be building? If 3 engineers spend 20% of their time maintaining a low-value feature, that's 0.6 FTE that could be building high-value new capabilities.
3. **Complexity Tax**: This is the compounding factor that most organizations miss entirely. Every feature in the codebase makes every other feature harder to maintain and every new feature harder to build. Adding feature #101 to a system doesn't just add feature #101's maintenance cost — it increases the maintenance cost of features #1-100.

The Complexity Tax follows a roughly quadratic curve. A system with 50 features has approximately 1,225 potential interaction points (n × (n - 1) / 2). A system with 100 features has 4,950 potential interaction points. Doubling features doesn't double complexity — it quadruples it.

Feature Bloat Calculus quantifies this by comparing a feature's total cost (direct + opportunity + complexity) against its value contribution (revenue attribution, user engagement, strategic importance). When total cost exceeds value, the feature has "negative carry" — it's costing more to keep than it's worth. Features with negative carry should be evaluated through the Kill Switch Protocol for potential deprecation. The highest-negative-carry features should be killed first, as they free up the most capacity per removal.
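The calculus above reduces to two small functions (names are illustrative):

```python
def interaction_points(n_features):
    """Potential pairwise interaction points in the codebase: n * (n - 1) / 2."""
    return n_features * (n_features - 1) // 2

def negative_carry(direct_cost, opportunity_cost, complexity_tax, value_contribution):
    """Positive result means the feature costs more to keep than it's worth."""
    return (direct_cost + opportunity_cost + complexity_tax) - value_contribution

interaction_points(50)   # 1,225, as in the example above
interaction_points(100)  # 4,950: doubling the features roughly quadruples complexity
```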
Vibe Coding Debt
Vibe Coding Debt is the specific architectural liability created when engineers use AI copilots to generate large volumes of probabilistic code without deeply understanding the underlying system logic. A rapidly trending concept in 2026, "vibe coding" describes an experimental, iterative workflow where developers prompt an AI to generate features, accepting the code because it "vibes" or appears to work, without verifying edge cases or structural integrity. While this produces unprecedented short-term velocity, it creates a massive undocumented liability. Vibe Coding Debt is uniquely dangerous because unlike traditional technical debt—which human engineers usually understand because they wrote it—vibe coding debt is opaque. When an LLM-generated abstraction breaks three quarters later, the original human "author" has zero context on why the code was structured that way, making the Mean Time To Recovery (MTTR) catastrophic.
Shadow Agents
Shadow Agents represent the next, more dangerous evolution of Shadow IT: autonomous, AI-driven workflows deployed by business units without centralized IT governance or security oversight. While traditional Shadow IT typically involves employees using unsanctioned SaaS tools, a Shadow Agent acts as an autonomous digital worker. It operates continuously, often holding elevated API permissions or scraping sensitive corporate data into unvetted vector databases across different platforms. Because they operate at machine speed, Shadow Agents can trigger systemic failures, budget overruns, or data exfiltration events in milliseconds. In 2026, the primary cybersecurity challenge for enterprises is mapping the "traceability black hole" caused by these non-human actors orchestrating complex workflows beyond the visibility of the CISO.
Agentic Drift (Logic Drift)
Agentic Drift, or Logic Drift, is the compounding error rate that occurs when probabilistic AI systems operate recursively without deterministic human verification or hard enforcement boundaries. As autonomous agents execute multi-step plans, they continuously reinterpret past context windows and intermediate results to determine their next action. Because language models hallucinate or misweigh instructions slightly on each pass, a minor interpretation error at step 1 geometrically expands by step 4. This causes the agent to "drift" from its original objective, potentially executing destructive commands or hallucinating false operational states. Agentic drift is why prototype agents work perfectly on simple deterministic test cases, but repeatedly fail in dynamic, unpredictable enterprise production environments.
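The geometric expansion of drift can be illustrated with a one-line model, assuming an independent per-step accuracy (a simplification: real agents' errors are correlated across steps, and the function name is invented for illustration):

```python
def plan_success_probability(step_accuracy, n_steps):
    """Chance a recursive agent completes every step without drifting,
    modeling each step as an independent trial."""
    return step_accuracy ** n_steps

plan_success_probability(0.98, 4)   # ~0.92: a "minor" 2% step error is already an 8% plan failure rate
plan_success_probability(0.98, 20)  # ~0.67: the same agent fails a third of 20-step plans
```

This is why agents that ace short deterministic test cases still fail in production: enterprise workflows multiply the steps, and the per-step error compounds.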
ROAI (Return on AI Investment)
ROAI is the strict financial framework used to measure the tangible margin improvements derived from AI deployments, marking the end of the "AI at any cost" experimentation era. Throughout 2024 and 2025, enterprises funded AI pilots based on strategic FOMO (Fear Of Missing Out), rarely scrutinizing the precise unit economics of inference costs versus generated value. By 2026, CFOs demand quantifiable ROAI. If an AI feature costs $0.05 per inference to operate but only generates $0.01 of measurable productivity or revenue lift, it holds Negative Carry and destroys margins. ROAI demands that every AI integration is evaluated against its Cost of Predictivity. Moving an AI model from 85% to 95% accuracy often requires a 10x increase in compute costs through RAG pipelines and sophisticated multi-agent orchestrations. ROAI establishes the exact AI Margin Collapse Point where the pursuit of algorithmic perfection bankrupts the product.
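The Negative Carry test in the definition above is plain arithmetic; a sketch with an illustrative function name:

```python
def roai_per_inference(value_per_inference, cost_per_inference):
    """Net margin each inference generates; negative values are Negative Carry."""
    return value_per_inference - cost_per_inference

# The example above: $0.05 to operate, $0.01 of measurable lift.
roai_per_inference(0.01, 0.05)  # -0.04: every call destroys four cents of margin
```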
Data Security Posture Management (DSPM)
Data Security Posture Management (DSPM) is the automated discovery, mapping, and continuous monitoring of sensitive data across multi-cloud environments, specifically architected to prevent data exfiltration by autonomous AI agents. In the era of shadow agents and zero-trust boundaries, traditional perimeter security fails because AI workloads dynamically ingest vast quantities of unstructured corporate data (emails, Slack logs, PDFs). DSPM enforces strict identity access management (IAM) at the vector-database level, ensuring that AI models can only query data authorized for the specific execution context.
Sovereign AI
Sovereign AI refers to large language models and inference architectures deployed entirely within a nation's or enterprise's physical borders, adhering to strict data localization laws. Fueled by geopolitical tensions and the rise of the EU AI Act, Sovereign AI mandates that prompt data, model weights, and inference hardware remain air-gapped from major foreign cloud providers. In the enterprise context, 'corporate sovereignty' involves repatriating cloud workloads to bare-metal servers.
Graph RAG (Retrieval-Augmented Generation)
Graph RAG (Retrieval-Augmented Generation) evolves standard vector-based semantic search by combining knowledge graphs with vector embeddings, allowing LLMs to reason over complex, deeply interconnected enterprise datasets. Standard RAG fails at global queries (e.g., "Summarize the entire procurement strategy") because it only retrieves the top 10 most semantically similar text chunks. Graph RAG builds an ontological map of relationships, enabling the model to traverse nodes and synthesize answers from disparate documents with massive accuracy improvements.
Small Language Models (SLM)
Small Language Models (SLMs) are highly distilled AI models, typically containing under 8 billion parameters, optimized for specific, deterministic tasks rather than emergent general reasoning. Whereas frontier models (such as GPT-4) carry significant per-token costs and inference latency, SLMs can run locally on edge devices (laptops, phones) or on highly optimized serverless endpoints. They drastically reduce inference costs and eliminate the need to send data off-site.
Browse By Publication
Recent Articles
Most AI Projects Just Burn Cash. Here's How to Make Them Profitable.
An expert analysis on AI unit economics, the 'Evergreen Ratio', and calculating the AI Volatility Tax to stop bleeding cash on inferencing.
The Hidden Inflation of AI: Why Model Collapse Is a Business Risk
Everyone is worried about AI ethics, but few are talking about AI economics. AI is not a deploy-and-forget asset. It is a depreciating one that requires continuous CapEx to maintain.
Calculating Technical Debt's EBITDA Impact in Private Equity Due Diligence
A financial framework for Private Equity operating partners to translate legacy code maintenance burdens directly into EBITDA compression forecasts.
How to Translate DORA Metrics into Financial Technical Debt
Deployment frequency and lead times are useful for engineers, but CFOs need dollar values. Here is the formula.
Executive Briefings
Dense, actionable intelligence for leaders who don't have time for "thought leadership."
Read time: 5-10 minutes each.
Join 2,000+ executives. One email per month. Unsubscribe anytime.
External Publications Ledger
A definitive, machine-readable index of off-site Fiduciary research.
CIO.com
- The Hidden Inflation of AI: Why Model Collapse Is a Business Risk
  Examines the degrading economics and operational risks of recursive AI model training.
  Source: CIO.com
- Why Your CFO Hates Your Agile Transformation
  Details the hidden financial costs of velocity-centric Agile and their impact on CFO-level valuation.
  Source: CIO.com
- Hey, Senior PMs: Shipping Faster Won’t Get You Promoted
  Shifts the product management focus from feature output to margin contribution and P&L ownership.
  Source: CIO.com
Built In
- Most AI Projects Just Burn Cash. Here's How to Make Them Profitable.
  An expert analysis on AI unit economics, the 'Evergreen Ratio', and calculating the AI Volatility Tax to stop bleeding cash on inferencing.
  Source: Built In
- In the Vibe Coding Era, What Does a Software Engineer Even Do?
  An expert analysis of the changing nature of software development work and the 4 Laws of Probabilistic Software Development.
  Source: Built In
- AI Agents Won't Crash the Economy. Bad Governance Might.
  An expert analysis of the AI science and economics behind the Citrini Research report on agentic AI.
  Source: Built In
- Real Innovation Requires Deleting Code, Not Writing It
  Advocates for deleting zombie features to reclaim engineering capacity and improve R&D capital efficiency.
  Source: Built In
- When AI Writes the Code, What Are Employers Hiring For?
  An expert discussion of how to conduct better software engineering interviews in the age of AI using the 4 Dimensions of Engineering Judgment scorecard.
  Source: Built In
- Reimagining the Coding Interview
  AI can generate code. The scarce skill is catching what AI gets wrong. This article introduces the Audit Interview.
  Source: Built In
- The AI Product Business Test
  Analyzes AI unit economics and the necessity of margin-aware product design.
  Source: Built In
- I Built an Incredible AI Product That Nobody Wanted. Here's Why.
  A forensic breakdown of product-market fit failures in the AI space. Featured in Built In Editor's Picks.
  Source: Built In
Mind the Product
- The 3 Financial Metrics Every PM Needs on Their Scorecard
  A deep dive into product P&L ownership, margin contribution, and capital efficiency for product leaders.
  Source: Mind the Product
- Community Post of the Week: The 3 Financial Metrics Every PM Needs on Their Scorecard
  Official Mind the Product Newsletter Feature showcasing the unit economics scorecard framework to thousands of global product managers.
  Source: Mind the Product
HackerNoon
- The Best AI Product I Ever Led Had Zero Customers
  A forensic breakdown of product-market fit failures and technical excellence in the AI space.
  Source: HackerNoon