2.2 Pod Composition: The Two-Pizza Rule for AI

An AI Innovation Pod is deliberately small, typically 6-10 people, following Amazon's "two-pizza rule": a team should be small enough to feed with two pizzas. This constraint isn't arbitrary; it is the foundation of the communication efficiency, accountability, and agility that make the model work. This section defines the roles, structures, and configurations that make up an effective AI pod.

Why Size Matters

Research consistently shows that small teams outperform large ones for complex, innovative work. A famous NASA study found that teams of 3-7 people achieved the highest performance on challenging tasks. In AI development—where problems are ambiguous, iteration is constant, and deep expertise matters—this effect is amplified. Adding people to an AI team often makes it slower, not faster.

The Math of Communication

As teams grow, communication overhead explodes. For a team of n people, the number of pairwise relationships is n(n-1)/2:

  • 6 people = 15 relationships
  • 10 people = 45 relationships
  • 15 people = 105 relationships
  • 20 people = 190 relationships
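These counts follow from the handshake formula n(n-1)/2. A minimal sketch (the function name is illustrative, not from any library):

```python
def pairwise_relationships(n: int) -> int:
    """Distinct communication channels in a team of n people: n(n-1)/2."""
    return n * (n - 1) // 2

# The team sizes discussed above:
for size in (6, 10, 15, 20):
    print(f"{size} people = {pairwise_relationships(size)} relationships")
```

Note that the relationship count grows quadratically: going from 6 to 10 people adds only 4 heads but triples the number of channels to maintain.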

Beyond communication overhead, larger teams suffer from diluted ownership, slower decision-making, and diffuse accountability.

The AI-Specific Constraint

AI work has characteristics that make small teams even more important:

AI Work Characteristic     Why Small Teams Win
High Uncertainty           Faster pivots when experiments fail; less wasted coordination
Deep Expertise Required    Quality over quantity; one excellent ML engineer beats three mediocre ones
Rapid Iteration            Tight feedback loops; less handoff overhead between experiments
Cross-functional Nature    Generalists who span boundaries; less specialization friction
Tacit Knowledge            Critical insights transfer through osmosis in small teams; lost in large ones

Core Roles in an AI Innovation Pod

While specific configurations vary by AI product type and complexity, most pods include these core roles:

Leadership

Single-Threaded Owner (STO)

The pod's leader with full product accountability. Owns vision, strategy, stakeholders, and business outcomes. Makes final decisions on scope and direction.

Typical background: Technical PM, senior data scientist, or engineering leader

AI Ethics Liaison

Embedded governance expert who ensures responsible AI practices are integrated throughout development. Reports to both the STO and the enterprise AI Ethics function.

Typical background: Ethics/compliance, legal, or technical policy

Technical Roles

ML Engineer(s)

Build, train, and optimize models. Responsible for model architecture, training pipelines, and performance optimization. Bridge between research and production.

Count: 1-3 depending on model complexity

Data Engineer

Owns data pipelines, quality, and infrastructure. Ensures training and inference data is available, clean, and properly governed. Critical for MLOps.

Count: 1-2 typically

Software Engineer(s)

Build the systems around the model: APIs, integrations, UI components, and infrastructure. Ensure the AI system works as part of larger applications.

Count: 1-3 depending on integration complexity

MLOps/Platform Engineer

Owns deployment, monitoring, and operational infrastructure. Ensures models can be deployed, scaled, and monitored reliably. May be shared across pods for smaller organizations.

Count: 0.5-1 (may be shared)

Product & Domain Roles

Product Designer/UX

Designs how users interact with AI features. Critical for AI explainability, feedback collection, and ensuring AI capabilities are actually usable.

Count: 0.5-1 (may be shared)

Domain Expert

Brings deep knowledge of the business domain the AI serves. Validates model outputs, provides training data guidance, and connects technical work to business value.

Count: 0.5-1 (may be embedded or consulting)

Role Combinations

In smaller pods, individuals often combine roles: for example, a data engineer may also cover MLOps duties, a software engineer may own platform infrastructure, and the product designer or domain expert may be shared across pods.

Pod Archetypes

Different AI products require different pod configurations. Here are common archetypes:

Archetype 1: The Prediction Engine Pod

For: Classification, regression, forecasting models

Role                     Count  Focus
Single-Threaded Owner    1      Business value, stakeholders
ML Engineer              2      Model development, feature engineering
Data Engineer            1      Training data, feature pipelines
Software Engineer        1      API, integration
AI Ethics Liaison        0.5    Fairness, bias testing
Domain Expert            0.5    Output validation, business rules
Total                    6

Archetype 2: The GenAI Application Pod

For: LLM-powered applications, chatbots, content generation

Role                           Count  Focus
Single-Threaded Owner          1      Product vision, stakeholders
ML Engineer / Prompt Engineer  1      Prompt optimization, fine-tuning, RAG
Software Engineer              2      Application, UI, orchestration
Data Engineer                  1      Knowledge bases, vector stores
Product Designer               1      UX, conversation design
AI Ethics Liaison              1      Content safety, misinformation
Total                          7

Archetype 3: The Computer Vision Pod

For: Image/video analysis, object detection, visual inspection

Role                         Count  Focus
Single-Threaded Owner        1      Business outcomes, operations integration
ML Engineer (CV Specialist)  2      Model architecture, training
Data Engineer                1      Image pipelines, annotation management
Software Engineer            2      Edge deployment, real-time processing
AI Ethics Liaison            0.5    Privacy, surveillance concerns
Domain Expert                0.5    Quality standards, edge cases
Total                        7

Archetype 4: The High-Risk AI Pod

For: Healthcare, finance, HR decisions—regulated domains

Role                        Count  Focus
Single-Threaded Owner       1      Compliance accountability, stakeholders
ML Engineer                 2      Explainable models, validation
Data Engineer               1      Data lineage, audit trails
Software Engineer           1      Secure deployment, logging
AI Ethics Liaison           1      Full-time governance, documentation
Domain Expert / Compliance  1      Regulatory requirements, validation
QA / Test Engineer          1      Validation, edge case testing
Total                       8
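The archetype rosters above can be treated as data, which makes the two-pizza constraint mechanically checkable. A minimal sketch (the constants, function names, and 6-10 bound encoding are illustrative, based on the size range stated earlier in this section):

```python
# Headcount check against the 6-10 person two-pizza range described above.
# Fractional counts represent shared or part-time roles.
TWO_PIZZA_MIN, TWO_PIZZA_MAX = 6, 10

def pod_size(roster: dict[str, float]) -> float:
    """Total headcount, counting shared roles fractionally."""
    return sum(roster.values())

def within_two_pizza_bound(roster: dict[str, float]) -> bool:
    return TWO_PIZZA_MIN <= pod_size(roster) <= TWO_PIZZA_MAX

# Archetype 1, the Prediction Engine Pod, from the table above:
prediction_engine = {
    "Single-Threaded Owner": 1,
    "ML Engineer": 2,
    "Data Engineer": 1,
    "Software Engineer": 1,
    "AI Ethics Liaison": 0.5,
    "Domain Expert": 0.5,
}

print(pod_size(prediction_engine))            # totals 6, the table's figure
print(within_two_pizza_bound(prediction_engine))
```

Encoding rosters this way also gives an early, automatable signal for the mitosis trigger discussed later: a roster that sums past 10 fails the bound.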

Pod Evolution Over the Lifecycle

Pod composition isn't static. It evolves as the AI product moves through its lifecycle:

Exploration Phase

Lean & Experimental (3-5 people)

Minimal viable pod focused on validating feasibility. Heavy on ML experimentation, light on production engineering. STO may be part-time from another pod if this is a spin-off exploration.

  • STO (may be part-time)
  • 1-2 ML Engineers
  • 1 Data Engineer
  • Ethics Liaison (consulting)

Development Phase

Full Pod Formation (6-8 people)

Complete cross-functional team for building production-ready AI. STO fully dedicated. All core roles filled, though some may still be shared.

  • Full STO
  • 2 ML Engineers
  • 1-2 Software Engineers
  • 1 Data Engineer
  • Ethics Liaison (embedded)
  • Product/Domain as needed

Operations Phase

Steady-State Operations (5-7 people)

May shed some development capacity as initial build completes, but retains full operational capability. Focus shifts to monitoring, iteration, and incident response.

  • Full STO
  • 1-2 ML Engineers (maintenance + improvement)
  • 1 Software Engineer
  • 1 Data Engineer / MLOps
  • Ethics Liaison (may reduce to part-time)

Expansion Trigger

Mitosis Consideration

If the pod reaches 10+ people or takes on distinct product lines, it's time to split. See Section 6.2 for the Mitosis Strategy.

Scaling Warning Signs

Watch for these indicators that a pod has grown too large:

Time to Consider Mitosis
  • Stand-ups take longer than 15 minutes
  • Team members don't know what others are working on
  • Decision-making has slowed noticeably
  • The STO is becoming a coordination bottleneck
  • Sub-groups have formed with distinct work streams
  • You need more than two pizzas