You're In — Full Checklist Below

You've been added to The Product Economist briefing. Here's the complete diagnostic framework.

The Complete R&D Audit Checklist

The 38 questions used in every $7,500 diagnostic engagement. Organized across 6 domains with traffic-light scoring, remediation actions, and benchmark thresholds. This is the same framework used to audit engineering organizations at companies from Series A startups to Fortune 500 enterprises.

38 Questions · 6 Domains · 38 Scoring Rubrics · 38 Action Items

📖 How to Use This Checklist

Step 1: Self-Assessment

Score each question using the traffic-light rubric. Be honest — this is for your benefit, not anyone else's.

Step 2: Prioritize

Count your red scores; these are your highest-impact remediation opportunities. Start with the domain that has the most red scores.

Step 3: Execute

Use the action items for each question. Tackle 2-3 red items per quarter. Track progress with the free tools below.

Scoring legend: 🔴 Critical Risk · 🟡 Improvement Needed · 🟢 On Track

Domain 1: Engineering Velocity & Delivery

How fast and reliably does your engineering organization deliver value?

01

What percentage of engineering time is spent on maintenance vs. new features?

Why: If maintenance exceeds 40%, you may be approaching Technical Insolvency.

🎯 Action

Calculate Innovation Tax: maintenance hours ÷ total hours. Track monthly.

Scoring: 🔴 >60% · 🟡 40-60% · 🟢 <40%
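As a minimal sketch (the monthly figures and labels are hypothetical), the Innovation Tax from the Action above can be computed and trended in a few lines of Python:

```python
def innovation_tax(maintenance_hours: float, total_hours: float) -> float:
    """Fraction of engineering time consumed by maintenance."""
    if total_hours <= 0:
        raise ValueError("total_hours must be positive")
    return maintenance_hours / total_hours

# Hypothetical monthly log: (month, maintenance hours, total engineering hours)
monthly = [("2024-01", 620, 1400), ("2024-02", 700, 1380), ("2024-03", 790, 1420)]
trend = {month: round(innovation_tax(m, t), 2) for month, m, t in monthly}
# A rising trend (0.44 -> 0.51 -> 0.56 here) means maintenance is crowding
# out new-feature work.
```

Tracked monthly, this one number feeds directly into the insolvency-date question in Domain 2.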
02

What are your DORA metrics (deploy frequency, lead time, failure rate, MTTR)?

Why: DORA measures delivery speed. Pair with PDI to see if you're shipping fast toward insolvency.

🎯 Action

Instrument CI/CD pipeline. Track all 4 metrics weekly.

Scoring: 🔴 Monthly deploys, >1wk lead time · 🟡 Weekly deploys, 1-7d lead time · 🟢 Multiple/day, <1hr lead time
03

What is your cycle time from commit to production?

Why: Long cycle times compound delays and reduce feedback speed.

🎯 Action

Measure commit-to-production time. Target: <1 hour for elite teams.

Scoring: 🔴 >1 week · 🟡 1-7 days · 🟢 <1 day
04

How often do deployments cause incidents?

Why: Change failure rate directly measures deployment quality.

🎯 Action

Calculate: failed deployments ÷ total deployments.

Scoring: 🔴 >30% · 🟡 15-30% · 🟢 <15%
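The change-failure-rate arithmetic above can be sketched against a deployment log (the log here is made up):

```python
def change_failure_rate(deployments: list[dict]) -> float:
    """Share of deployments that caused an incident (DORA change failure rate)."""
    if not deployments:
        return 0.0
    failed = sum(1 for d in deployments if d["caused_incident"])
    return failed / len(deployments)

# Hypothetical log: 20 deployments, 3 of which triggered incidents
log = [{"caused_incident": False}] * 17 + [{"caused_incident": True}] * 3
cfr = change_failure_rate(log)  # 3 / 20 = 0.15, right at the 15% threshold
```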
05

What is your average sprint completion rate?

Why: Consistently missing sprint commitments signals estimation or capacity problems.

🎯 Action

Track: stories completed ÷ stories committed per sprint.

Scoring: 🔴 <60% · 🟡 60-80% · 🟢 >80%
06

Do you have feature flags for safe rollouts?

Why: Feature flags enable incremental releases, A/B testing, and instant rollback.

🎯 Action

Implement feature flag system. Target: all new features behind flags.

Scoring: 🔴 No flags · 🟡 Some features · 🟢 All features flagged
07

What is your code review turnaround time?

Why: Slow reviews create bottlenecks and context-switching costs.

🎯 Action

Measure: time from PR open to first review. Target: <4 hours.

Scoring: 🔴 >24 hours · 🟡 4-24 hours · 🟢 <4 hours
🏗️ Domain 2: Technical Debt & Architecture

What is the health of your technology capital, and where is value being destroyed?

01

Can you identify your 3 largest sources of technical debt and their financial impact?

Why: Most organizations cannot quantify debt in dollars. Without financial language, leadership ignores it.

🎯 Action

Run PDI assessment. Assign dollar values to top debt categories.

Scoring: 🔴 Cannot identify · 🟡 Identified but not quantified · 🟢 Quantified in dollars
02

What is your Technical Insolvency Date?

Why: Your Technical Insolvency Date is the projected quarter when maintenance costs consume 100% of engineering capacity, leaving zero room for new development.

🎯 Action

Plot Innovation Tax trend. Extrapolate to 100%. That's your insolvency date.

Scoring: 🔴 <6 months away · 🟡 6-18 months · 🟢 >18 months or improving
03

What percentage of your codebase has test coverage?

Why: Low coverage = high change failure rate = slow delivery = more rework costs.

🎯 Action

Measure line/branch coverage. Target: >70% for critical paths.

Scoring: 🔴 <30% · 🟡 30-70% · 🟢 >70%
04

When was your last architecture review?

Why: Architecture debt is the most expensive form of debt — it requires rewrites, not refactors.

🎯 Action

Establish quarterly Architecture Review Board. Document all decisions.

Scoring: 🔴 Never / >12 months · 🟡 6-12 months ago · 🟢 Within last quarter
05

How many services or modules have a single maintainer?

Why: Single points of failure. If that person leaves, the knowledge leaves with them.

🎯 Action

Audit: map each service to its maintainers. Cross-train where count = 1.

Scoring: 🔴 >30% of services · 🟡 10-30% · 🟢 <10%
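The maintainer audit can be sketched with a simple ownership map (service and people names here are made up; in practice you would pull this from CODEOWNERS files or commit history):

```python
# Hypothetical service -> maintainers map
ownership = {
    "billing": ["ana"],
    "auth": ["ben", "ana"],
    "search": ["carol"],
    "api-gateway": ["ben", "dev", "carol"],
}

# Services with exactly one maintainer are single points of failure
single_owner = [svc for svc, people in ownership.items() if len(people) == 1]
share = len(single_owner) / len(ownership)  # 2/4 = 50%, well into the red band
```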
06

What is the age distribution of your critical dependencies?

Why: Outdated dependencies = security vulnerabilities + compatibility issues + upgrade debt.

🎯 Action

Audit dependency ages. Flag anything >2 major versions behind.

Scoring: 🔴 >50% outdated · 🟡 20-50% outdated · 🟢 <20% outdated
07

Do you have automated security scanning in your CI/CD pipeline?

Why: Manual security reviews don't scale. Automated SAST/DAST catches vulnerabilities before production.

🎯 Action

Integrate SAST tool. Block merges with critical vulnerabilities.

Scoring: 🔴 No scanning · 🟡 Manual only · 🟢 Automated in CI/CD
Scored red on 3+ questions so far?

A 30-minute Gut-Check call identifies whether you have a real problem — or just technical anxiety.

Gut-Check — $450 →
🤖 Domain 3: AI & Emerging Technology Economics

Are your AI investments creating or destroying value?

01

What is the fully-loaded cost per AI inference request?

Why: AI features often have hidden variable costs that erode gross margins.

🎯 Action

Instrument per-request cost tracking: compute + tokens + storage + overhead.

Scoring: 🔴 Unknown · 🟡 Estimated · 🟢 Tracked per-request
02

Do you use model routing (different models for different query types)?

Why: Using frontier models for every query costs 10-50x more than necessary.

🎯 Action

Classify queries by complexity. Route 70% to smaller, cheaper models.

Scoring: 🔴 One model for all · 🟡 2-3 models · 🟢 Smart routing with 3+ tiers
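One possible shape for such a router, with hypothetical tier names and thresholds (illustrative only, not a real model API):

```python
# Cheapest tier first; each tuple is (complexity ceiling, model name).
# Tier names and thresholds are placeholders you would tune yourself.
TIERS = [
    (0.3, "small-fast-model"),   # classification, extraction, short answers
    (0.7, "mid-size-model"),     # summarization, moderate reasoning
    (1.0, "frontier-model"),     # complex multi-step reasoning
]

def route(complexity: float) -> str:
    """Pick the cheapest tier whose ceiling covers the query's estimated
    complexity score (0.0 = trivial, 1.0 = hardest)."""
    for ceiling, model in TIERS:
        if complexity <= ceiling:
            return model
    return TIERS[-1][1]
```

The hard part in practice is the complexity classifier itself, which can be a cheap model or a heuristic on query length and structure.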
03

What percentage of your AI features have positive unit economics?

Why: 40-60% of AI features launch unprofitable. Growth accelerates losses.

🎯 Action

Calculate per-feature P&L. Kill or optimize negative-margin features.

Scoring: 🔴 Unknown / <30% · 🟡 30-60% · 🟢 >60% profitable
04

How much of your production code was generated by AI, and what's its defect rate?

Why: Vibe-coded applications accumulate hallucination debt — debt no one on the team fully understands.

🎯 Action

Track AI-generated code percentage. Measure defect rate vs. human-written code.

Scoring: 🔴 >30% AI code, no quality tracking · 🟡 AI code tracked · 🟢 AI code tracked + quality monitored
05

Do you have a model right-sizing strategy?

Why: Running every task through a frontier model is like driving a Ferrari to the mailbox. Right-sizing cuts AI costs 60-80%.

🎯 Action

Benchmark: test smaller models against quality thresholds. Document findings.

Scoring: 🔴 No strategy · 🟡 Some right-sizing · 🟢 Systematic optimization
06

What guardrails exist for AI output quality?

Why: Without guardrails, hallucinations, bias, and harmful outputs reach users.

🎯 Action

Implement output validation, safety filters, and quality monitoring.

Scoring: 🔴 No guardrails · 🟡 Basic filters · 🟢 Comprehensive guardrail pipeline
💰 Domain 4: Product & Revenue Alignment

Is engineering investment aligned with revenue generation?

01

What is your Revenue Per Engineer (RPE), and how does it trend?

Why: Declining RPE signals engineering capital misallocation.

🎯 Action

Calculate: ARR ÷ engineering headcount. Track quarterly. Use APER calculator.

Scoring: 🔴 <$200K or declining · 🟡 $200K-500K, flat · 🟢 >$500K and growing
02

Can you identify which features generate revenue and which are zombie features?

Why: Most organizations maintain features that destroy value. 30-50% of features have <5% usage.

🎯 Action

Instrument feature usage. Identify features with <5% MAU. Run Kill Switch Protocol.

Scoring: 🔴 No feature-level tracking · 🟡 Some tracking · 🟢 Full feature-level P&L
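A minimal sketch of the zombie-feature filter, assuming a hypothetical MAU snapshot (the feature names and numbers are made up):

```python
# Hypothetical feature-usage snapshot: feature -> monthly active users
usage = {"export-pdf": 120, "dark-mode": 8400, "legacy-import": 45, "search": 9800}
total_mau = 10_000

# Any feature used by fewer than 5% of MAU is a Kill Switch candidate
zombies = [f for f, mau in usage.items() if mau / total_mau < 0.05]
# -> ["export-pdf", "legacy-import"]
```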
03

Do your PMs own a P&L, or just a backlog?

Why: PMs who don't understand their P&L make uninformed capital allocation decisions every sprint.

🎯 Action

Create per-product P&L. Train PMs on unit economics.

Scoring: 🔴 Backlog only · 🟡 Some financial awareness · 🟢 Full P&L ownership
04

Can you calculate the gross margin of each product line?

Why: AI features introduce variable COGS. Without margin visibility, you may be scaling losses.

🎯 Action

Allocate engineering + infrastructure costs per product. Calculate margins.

Scoring: 🔴 No margin tracking · 🟡 Aggregate only · 🟢 Per-product margins
05

What would happen if you removed your 10 least-used features tomorrow?

Why: The Kill Switch Protocol typically recovers 20-40% of engineering capacity from zombie features.

🎯 Action

List 10 lowest-usage features. Calculate maintenance cost of each. Draft removal plan.

Scoring: 🔴 Don't know usage · 🟡 Know usage, afraid to cut · 🟢 Regular feature pruning
06

What is your time-to-revenue for new features?

Why: Long time-to-revenue means engineering investment isn't generating returns fast enough.

🎯 Action

Track: feature release date → first revenue attribution. Target: <30 days.

Scoring: 🔴 >90 days or unknown · 🟡 30-90 days · 🟢 <30 days
Multiple red scores across domains?

The Insolvency Diagnostic quantifies your exposure and delivers a written Risk Report with prioritized remediation.

Full Diagnostic — $2,500 →
👥 Domain 5: Organization & People

Is your team structured for sustainable, scalable delivery?

01

What is your engineering attrition rate over the last 12 months?

Why: Each departure costs $150K-250K (recruiting + onboarding + lost productivity).

🎯 Action

Calculate: departures ÷ average headcount × 100.

Scoring: 🔴 >20% · 🟡 10-20% · 🟢 <10%
02

What is the average tenure on your engineering team?

Why: Low tenure means constant knowledge loss and ramp-up costs.

🎯 Action

Track average tenure. Flag teams with <18 month average.

Scoring: 🔴 <12 months avg · 🟡 12-24 months · 🟢 >24 months
03

Is your engineering org structured around products or projects?

Why: Project-based teams ship and move on. Product teams own outcomes.

🎯 Action

Evaluate: do teams own products end-to-end, or get assigned projects?

Scoring: 🔴 Project-based · 🟡 Mixed · 🟢 Product-based, end-to-end ownership
04

What is your span of control (direct reports per manager)?

Why: Below 5 direct reports per manager, management overhead is too high; above 8, each report gets insufficient coaching.

🎯 Action

Audit: count direct reports per manager. Restructure outliers.

Scoring: 🔴 <4 or >10 · 🟡 4-5 or 9-10 · 🟢 6-8
05

How many key-person dependencies exist?

Why: If one person's departure would halt a project, that's a critical risk.

🎯 Action

Map: for each critical system, who are the only people who understand it?

Scoring: 🔴 >5 single points of failure · 🟡 2-5 · 🟢 0-1
06

Do you have a documented career ladder with clear levels?

Why: Without clear progression, top engineers leave for companies that offer it.

🎯 Action

Publish engineering career ladder. Review annually.

Scoring: 🔴 No ladder · 🟡 Informal · 🟢 Published with clear criteria
📊 Domain 6: Strategic & Financial

Is your R&D investment being valued, reported, and optimized at the board level?

01

What percentage of your "R&D spend" is actually maintenance OpEx?

Why: The Innovation Tax — many companies report 50% R&D investment when 80% is actually maintenance.

🎯 Action

Audit: categorize every engineering hour as innovation vs. maintenance.

Scoring: 🔴 >70% maintenance · 🟡 40-70% · 🟢 <40% maintenance
02

If a PE firm audited your engineering organization today, what would they find?

Why: Technical Due Diligence reveals hidden liabilities. Better to find them yourself.

🎯 Action

Conduct an internal pre-diligence audit using this checklist + PDI tool.

Scoring: 🔴 Major undisclosed liabilities · 🟡 Known but unquantified issues · 🟢 Clean, documented assessment
03

What is the accuracy-cost curve for your critical AI features?

Why: Going from 80% to 95% accuracy often costs 10x more. The Cost of Predictivity must be modeled.

🎯 Action

For each AI feature: plot accuracy vs. cost. Find the diminishing returns inflection point.

Scoring: 🔴 Not modeled · 🟡 Partially modeled · 🟢 Fully modeled with trade-off analysis
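One way to locate the diminishing-returns point is to compare marginal accuracy gained per extra dollar between adjacent model configurations; the curve points below are hypothetical:

```python
# Hypothetical (cost per 1K requests, accuracy) points for one AI feature,
# ordered cheapest to most expensive
curve = [(0.50, 0.80), (1.50, 0.88), (5.00, 0.92), (15.00, 0.95)]

def marginal_gains(points):
    """Accuracy gained per extra dollar between adjacent configurations."""
    return [
        (c2, (a2 - a1) / (c2 - c1))
        for (c1, a1), (c2, a2) in zip(points, points[1:])
    ]

gains = marginal_gains(curve)
# Each step buys less accuracy per dollar than the last; pick the last
# configuration whose marginal gain still clears your business threshold.
```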
04

Can your engineering investment survive a 30% budget cut?

Why: Knowing your critical path vs. nice-to-have helps make tough decisions before they're forced.

🎯 Action

Create a tiered investment plan: must-have (70%), should-have (20%), nice-to-have (10%).

Scoring: 🔴 No prioritization framework · 🟡 Some prioritization · 🟢 Tiered investment plan documented
05

Do you report engineering health metrics to the board?

Why: Boards that see engineering metrics make better investment decisions.

🎯 Action

Create quarterly technology capital report: PDI, APER, DORA, Innovation Tax.

Scoring: 🔴 No engineering metrics to board · 🟡 Ad-hoc reporting · 🟢 Quarterly technology capital report
06

What is the total cost of ownership for your technology stack?

Why: Most companies underestimate TCO by 40-60%. Hidden costs: maintenance, integration, training, migration.

🎯 Action

Map TCO for each major platform: license + integration + maintenance + opportunity cost.

Scoring: 🔴 Unknown · 🟡 Partially calculated · 🟢 Fully mapped and reviewed annually

🛠️ Answer These Questions With Free Tools

Don't guess — use our calculators to get accurate scores for the metrics above.

Want These Questions Answered Professionally?

Book a diagnostic engagement and get a written executive summary with quantified findings, benchmarks, and a prioritized remediation roadmap.