5.1 Risk-Tiered Autonomy
Not all AI products carry the same risk. A recommendation engine for internal document search deserves different governance than an algorithm making credit decisions. Risk-Tiered Autonomy matches the level of oversight to the level of potential harm, enabling pods to move fast on low-risk work while maintaining rigorous controls where stakes are high.
Traditional governance applies the same heavyweight process to every AI initiative, creating unnecessary friction for low-risk work while potentially under-scrutinizing high-risk deployments. Risk-Tiered Autonomy inverts this: governance intensity scales with potential harm. Low-risk pods move with startup speed; high-risk pods accept more oversight as the price of operating in consequential domains.
The Governance Philosophy
Speed Through Trust
Risk-Tiered Autonomy is built on a foundation of trust that is earned and verified:
- Pods are trusted by default to operate within guardrails appropriate to their risk tier
- Trust is verified through automated checks, periodic reviews, and outcome monitoring
- Trust can be revoked if pods violate guardrails or demonstrate poor judgment
- Higher-risk work requires stronger evidence of responsible practices before autonomy is granted
Aligned with Regulatory Thinking
The tier system aligns with emerging AI regulations, particularly the EU AI Act:
| EU AI Act Category | AI Innovation Tier | Alignment |
|---|---|---|
| Prohibited | Tier 4: Prohibited | Applications that cannot be built under any governance |
| High Risk | Tier 3: High Risk | Consequential decisions requiring extensive oversight |
| Limited Risk | Tier 2: Moderate Risk | Customer-facing AI with transparency obligations |
| Minimal Risk | Tier 1: Low Risk | Internal tools with minimal external impact |
Tier Definitions
Tier 1: Low Risk
- Internal tools and productivity aids
- Non-consequential recommendations
- Human always makes final decision
- No protected groups disproportionately affected
- Easily reversible impacts
Examples: Code completion for developers, internal document search, meeting scheduling assistance
Tier 2: Moderate Risk
- Customer-facing features
- Business process automation
- Decisions with moderate financial impact
- Some affected populations require fairness consideration
- Impacts generally reversible with some effort
Examples: Product recommendations, content moderation, customer support automation, pricing optimization
Tier 3: High Risk
- Decisions affecting fundamental rights or opportunities
- Health, safety, or legal implications
- Significant financial consequences for individuals
- Regulated domains with specific compliance requirements
- Difficult or impossible to reverse impacts
Examples: Credit scoring, hiring/screening, medical diagnosis support, insurance underwriting, predictive policing
Tier 4: Prohibited
- Social scoring by public authorities
- Real-time remote biometric identification for law enforcement
- Exploitation of vulnerable groups
- Subliminal manipulation causing harm
- Any use case that cannot be made ethical regardless of safeguards
Examples: Mass surveillance systems, behavioral manipulation targeting children, emotion recognition deployed in schools or workplaces to control behavior
Tier Requirements Matrix
Each tier has specific requirements across governance dimensions (a configuration sketch follows the table):
| Requirement | Tier 1 | Tier 2 | Tier 3 |
|---|---|---|---|
| Charter Approval | STO + Liaison | AI Council delegate | Full AI Council |
| Ethics Liaison | Part-time/shared | Dedicated or shared | Full-time dedicated |
| Model Card Review | Quarterly | Monthly | Every two weeks |
| Fairness Testing | Basic checks | Comprehensive testing | External audit option |
| Deployment Approval | STO decides | STO + Liaison sign-off | AI Council review |
| Monitoring | Standard | Enhanced + fairness | Real-time + alerting |
| Incident Escalation | STO handles | Liaison notified | AI Council notified |
| External Audit | Not required | On request | Annual required |
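To make these requirements checkable by tooling rather than by memory, the matrix can be encoded as configuration that chartering and deployment pipelines read. Below is a minimal Python sketch: the `TierRequirements` structure and `TIER_REQUIREMENTS` mapping are hypothetical names, and the field values simply mirror the table above. Tier 4 is deliberately absent, since prohibited use cases never reach governance configuration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TierRequirements:
    """Governance requirements for one risk tier (mirrors the matrix above)."""
    charter_approval: str
    ethics_liaison: str
    model_card_review_cadence: str
    fairness_testing: str
    deployment_approval: str
    monitoring: str
    incident_escalation: str
    external_audit: str

# Hypothetical encoding of the Tier Requirements Matrix.
TIER_REQUIREMENTS = {
    1: TierRequirements(
        charter_approval="STO + Liaison",
        ethics_liaison="Part-time/shared",
        model_card_review_cadence="Quarterly",
        fairness_testing="Basic checks",
        deployment_approval="STO decides",
        monitoring="Standard",
        incident_escalation="STO handles",
        external_audit="Not required",
    ),
    2: TierRequirements(
        charter_approval="AI Council delegate",
        ethics_liaison="Dedicated or shared",
        model_card_review_cadence="Monthly",
        fairness_testing="Comprehensive testing",
        deployment_approval="STO + Liaison sign-off",
        monitoring="Enhanced + fairness",
        incident_escalation="Liaison notified",
        external_audit="On request",
    ),
    3: TierRequirements(
        charter_approval="Full AI Council",
        ethics_liaison="Full-time dedicated",
        model_card_review_cadence="Every two weeks",
        fairness_testing="External audit option",
        deployment_approval="AI Council review",
        monitoring="Real-time + alerting",
        incident_escalation="AI Council notified",
        external_audit="Annual required",
    ),
}
```

Encoding the matrix this way also makes requirement changes auditable: tightening Tier 2 monitoring becomes a reviewed change to one file rather than an announcement pods may or may not see.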
Autonomy by Tier
What pods can decide on their own varies by tier (a policy-check sketch follows these lists):
Tier 1 Autonomy
Near-complete autonomy within minimal guardrails
- All technical decisions
- Deployment timing
- Feature scope changes
- Budget reallocation (within limits)
Tier 2 Autonomy
Significant autonomy with consultation requirements
- Most technical decisions
- Deployment with Liaison concurrence
- Scope within charter bounds
- Budget per approved plan
Tier 3 Autonomy
Bounded autonomy with approval checkpoints
- Technical decisions with review
- Deployment requires AI Council
- Scope changes require re-approval
- Budget changes require justification
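These decision rights lend themselves to a simple lookup that tooling can consult before a pod acts unilaterally. A minimal sketch, assuming a hypothetical `DECISION_RIGHTS` table and `required_approval()` helper; the approver descriptions mirror the lists above.

```python
# Hypothetical decision-rights table: maps (tier, decision category) to the
# approval a pod needs. "pod" means the pod decides on its own.
DECISION_RIGHTS = {
    1: {"technical": "pod", "deployment": "pod",
        "scope": "pod", "budget": "pod (within limits)"},
    2: {"technical": "pod", "deployment": "pod + Liaison concurrence",
        "scope": "pod (within charter bounds)", "budget": "pod (per approved plan)"},
    3: {"technical": "pod + review", "deployment": "AI Council",
        "scope": "re-approval required", "budget": "justification required"},
}

def required_approval(tier: int, decision: str) -> str:
    """Return the approval needed for a decision at the given risk tier."""
    if tier == 4:
        raise ValueError("Tier 4 use cases are prohibited outright")
    return DECISION_RIGHTS[tier][decision]

# Example: a Tier 2 pod checks whether it can ship without further sign-off.
assert required_approval(2, "deployment") == "pod + Liaison concurrence"
```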
Risk Classification Process
Initial Classification
Risk tier is determined during the Ideation & Chartering phase based on a structured assessment:
Impact Assessment
Evaluate potential harms: Who could be affected? How severely? How many people? Are the harms reversible?
Domain Classification
Is this in a regulated domain? Does it affect fundamental rights? Are there specific compliance requirements?
Automation Level
How much human oversight exists? Is AI recommending or deciding? What's the human-in-the-loop design?
Fairness Sensitivity
Are protected groups affected? Could the AI systematically disadvantage some populations?
Tier Assignment
Based on the assessment, assign the initial tier. The Ethics Liaison validates the classification; disputes escalate to the AI Council.
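The four-dimension assessment can be captured as a worksheet-style function: answer each dimension, then take the most severe tier any answer implies. The sketch below is illustrative only; the `RiskAssessment` fields and thresholds are assumptions, and the validation and escalation steps deliberately stay with humans rather than code.

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    """Answers gathered during Ideation & Chartering (fields are assumptions)."""
    harm_severity: int                # 1 = minor/reversible ... 3 = severe/irreversible
    regulated_domain: bool            # credit, hiring, health, insurance, policing...
    affects_fundamental_rights: bool
    ai_decides: bool                  # True if the AI decides; False if a human does
    protected_groups_affected: bool
    prohibited_use: bool              # social scoring, mass surveillance, etc.

def classify(a: RiskAssessment) -> int:
    """Assign an initial tier by taking the highest tier any dimension implies."""
    if a.prohibited_use:
        return 4  # cannot be built under any governance
    tier = 1
    if a.ai_decides or a.protected_groups_affected or a.harm_severity >= 2:
        tier = 2
    if a.regulated_domain or a.affects_fundamental_rights or a.harm_severity == 3:
        tier = 3
    return tier

# Example: customer-facing recommender; a human keeps the final call, but
# protected groups are affected -> Tier 2.
print(classify(RiskAssessment(2, False, False, False, True, False)))  # 2
```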
Reclassification
Risk tiers can change in either direction during the lifecycle.

Triggers for moving to a higher tier:
- Scope expansion into higher-risk domains
- Increased automation (reduced human oversight)
- Discovery of fairness issues
- Regulatory changes affecting the domain
- Incidents revealing higher risk than anticipated

Triggers for moving to a lower tier:
- Demonstrated track record of responsible operation
- Scope reduction to lower-risk use cases
- Addition of mitigating controls
- Strong fairness and performance metrics sustained over time

A lightweight way to prompt these reviews is sketched below.
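Continuing the hypothetical `classify()` sketch above, one possible flow re-runs the assessment whenever a trigger fires and flags any tier change; how changes are confirmed is a design choice, and the version here holds downgrades for human review rather than applying them automatically.

```python
def review_tier(current_tier: int, assessment: RiskAssessment) -> int:
    """Re-run classification after a trigger event and flag any tier change.

    Upgrades surface immediately so stricter controls can be applied; downgrades
    rest on evidence accumulated over time, so they are held for AI Council
    confirmation rather than taking effect automatically.
    """
    implied = classify(assessment)
    if implied > current_tier:
        print(f"Escalate: tier {current_tier} -> {implied}; stricter controls apply")
    elif implied < current_tier:
        print(f"Downgrade candidate: tier {current_tier} -> {implied}; "
              "hold for AI Council confirmation")
    return implied
```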