3.1 The AI Risk Tiering System

Not all AI systems require the same level of governance. A risk-based approach ensures that controls are proportionate to potential harm—applying rigorous oversight where it matters most while enabling innovation for lower-risk applications.

🇪🇺 EU AI Act Alignment

This risk classification framework aligns with the EU AI Act's four-tier system (Unacceptable/Prohibited, High Risk, Limited Risk, Minimal Risk), helping organizations meet regulatory obligations while scaling governance in proportion to risk.

Risk Tier Overview

| Risk Tier | Description | Governance Requirements | Examples |
| --- | --- | --- | --- |
| Prohibited | Clear threat to fundamental rights, safety, or democratic values | Cannot be deployed | Social scoring, subliminal manipulation, real-time biometric ID |
| High Risk | Significant potential impact on health, safety, or fundamental rights | Full framework compliance, third-party assessment, continuous monitoring | Hiring systems, credit scoring, medical diagnosis, biometric ID |
| Limited Risk | Moderate impact with transparency concerns | Transparency requirements, documentation, periodic review | Chatbots, content recommendation, sentiment analysis |
| Minimal Risk | Low potential for harm | Standard development practices, basic documentation | Spam filters, video game AI, inventory optimization |
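
For teams that track classifications in code or configuration, a minimal sketch of the tier taxonomy can help keep tooling consistent with this table. The Python below is illustrative only; `RiskTier` and `GOVERNANCE_REQUIREMENTS` are hypothetical names, and the requirement strings simply mirror the table above.

```python
# Illustrative sketch (not a standard schema): the four risk tiers and their
# baseline governance requirements as a simple lookup, mirroring the table above.
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


GOVERNANCE_REQUIREMENTS = {
    RiskTier.PROHIBITED: ["Cannot be deployed"],
    RiskTier.HIGH: [
        "Full framework compliance",
        "Third-party assessment",
        "Continuous monitoring",
    ],
    RiskTier.LIMITED: [
        "Transparency requirements",
        "Documentation",
        "Periodic review",
    ],
    RiskTier.MINIMAL: [
        "Standard development practices",
        "Basic documentation",
    ],
}

for tier in RiskTier:
    print(f"{tier.name}: {'; '.join(GOVERNANCE_REQUIREMENTS[tier])}")
```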

3.1.1 Prohibited AI: Unacceptable Risks

Certain AI applications pose unacceptable risks to individuals and society. These must not be developed or deployed, regardless of potential business value.

Prohibited AI Practices (EU AI Act, Effective Feb 2025)

🚫 The following AI applications are PROHIBITED:

  1. Social Scoring: AI that evaluates or classifies people based on social behavior or personal characteristics leading to detrimental treatment
  2. Subliminal Manipulation: AI using subliminal techniques to materially distort behavior in ways causing harm
  3. Exploitation of Vulnerabilities: AI targeting specific groups based on age, disability, or social/economic situation to distort behavior
  4. Biometric Categorization: AI inferring sensitive attributes (race, political opinions, religion, sexual orientation) from biometric data
  5. Untargeted Facial Recognition: Scraping facial images from the internet or CCTV footage to build or expand facial recognition databases
  6. Emotion Recognition at Work/School: AI inferring emotions in workplaces or educational institutions (except safety/medical purposes)
  7. Real-Time Remote Biometric ID: Law enforcement use in public spaces for identification (limited exceptions)
  8. Predictive Policing (Individual): AI predicting individual likelihood of committing crimes based solely on profiling

Internal Prohibited Use Cases

Beyond regulatory prohibitions, organizations should define their own internal list of prohibited use cases, reflecting their industry context, values, and risk appetite.

⚠️ Enforcement

Any AI project identified as prohibited must be halted immediately. The RAI Council should be notified, and a formal review conducted to understand how the project reached this stage. Penalties under the EU AI Act reach up to €35 million or 7% of global annual turnover, whichever is higher.

3.1.2 High-Risk AI: Critical Applications

High-risk AI systems can significantly impact people's lives, health, safety, or fundamental rights. They require the most stringent governance controls.

High-Risk Categories (EU AI Act Annex III)

| Domain | Specific Applications | Key Concerns |
| --- | --- | --- |
| Employment | Recruitment, resume screening, promotion decisions, task allocation, performance monitoring, termination | Discrimination, unfair treatment, privacy |
| Credit & Finance | Credit scoring, loan decisions, insurance pricing, fraud detection affecting individuals | Discrimination, financial exclusion |
| Education | Student assessment, admission decisions, learning personalization affecting opportunities | Equal access, developmental impact |
| Healthcare | Medical diagnosis, treatment recommendations, triage, resource allocation | Patient safety, accuracy, equity |
| Biometrics | Remote biometric identification, emotion recognition, categorization | Privacy, accuracy across demographics |
| Critical Infrastructure | Energy grid management, water systems, transportation safety | Safety, reliability, security |
| Law Enforcement | Risk assessments, evidence analysis, lie detection, crime prediction | Due process, accuracy, civil liberties |
| Border/Immigration | Document verification, risk assessment, asylum application evaluation | Human rights, accuracy, discrimination |
| Justice & Democracy | Judicial decision support, election influence analysis | Due process, democratic integrity |

High-Risk Compliance Requirements

Internal High-Risk Governance Process

Step 1: Algorithmic Impact Assessment. Complete a full algorithmic impact assessment (AIA) identifying risks, affected populations, and mitigation strategies.

Step 2: RAI Council Review. Present the system to the RAI Council for an ethics review and risk discussion.

Step 3: Independent Validation. Obtain third-party or internal audit validation of fairness, accuracy, and security.

Step 4: Executive Approval. Obtain final deployment approval from the designated executive.

Step 5: Continuous Monitoring. Monitor the system on an ongoing basis, with quarterly performance reviews.
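
If high-risk systems are tracked in an internal registry, the five steps above can be enforced in order. The sketch below is a hypothetical illustration; `HighRiskGovernanceRecord` and its fields are not part of any standard, and real workflows usually live in ticketing or GRC systems.

```python
# Hypothetical sketch: enforcing the five-step high-risk governance process in
# order for a single system. Step names mirror the list above.
from dataclasses import dataclass, field

GOVERNANCE_STEPS = [
    "Algorithmic Impact Assessment",
    "RAI Council Review",
    "Independent Validation",
    "Executive Approval",
    "Continuous Monitoring",
]


@dataclass
class HighRiskGovernanceRecord:
    system_name: str
    completed_steps: list = field(default_factory=list)

    def complete_step(self, step: str) -> None:
        # Steps must be completed in order; skipping ahead raises an error.
        expected = GOVERNANCE_STEPS[len(self.completed_steps)]
        if step != expected:
            raise ValueError(f"Expected '{expected}' next, got '{step}'")
        self.completed_steps.append(step)

    @property
    def approved_for_deployment(self) -> bool:
        # Deployment requires executive approval (step 4); monitoring continues after launch.
        return "Executive Approval" in self.completed_steps


record = HighRiskGovernanceRecord("resume-screening-model")
record.complete_step("Algorithmic Impact Assessment")
record.complete_step("RAI Council Review")
print(record.approved_for_deployment)  # False until steps 3 and 4 are complete
```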

3.1.3 Limited Risk: Transparency Focus

Limited-risk AI systems don't pose significant threats to safety or rights but require transparency to ensure users understand they're interacting with AI or that content is AI-generated.

Limited Risk Examples

Limited Risk Requirements

| Requirement | Description | Implementation |
| --- | --- | --- |
| AI Disclosure | Users must be informed they're interacting with AI | "You are chatting with an AI assistant" notices |
| Synthetic Content Labeling | AI-generated or manipulated content must be marked | Watermarking, metadata tags, visible labels |
| Documentation | Model cards and system documentation maintained | Standardized templates, version control |
| Periodic Review | Annual risk reassessment | Scheduled review calendar, documented assessments |
| Performance Monitoring | Track accuracy and user satisfaction | Dashboards, feedback mechanisms |
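
The disclosure and labeling rows above are straightforward to wire into an application. The sketch below is a hypothetical example of bundling generated text with a disclosure notice and machine-readable provenance metadata; the field names are illustrative, and formal provenance standards such as C2PA define their own schemas.

```python
# Hypothetical sketch: attach an AI disclosure notice and basic provenance
# metadata to generated content before it is shown to a user.
import json
from datetime import datetime, timezone

AI_DISCLOSURE_NOTICE = "You are chatting with an AI assistant."


def label_ai_content(text: str, model_id: str) -> dict:
    """Bundle generated text with a user-facing notice and metadata tags."""
    return {
        "content": text,
        "disclosure": AI_DISCLOSURE_NOTICE,
        "metadata": {
            "ai_generated": True,
            "model_id": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }


print(json.dumps(label_ai_content("Your order has shipped.", "support-bot-v2"), indent=2))
```
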
⚠️ Risk Escalation

Limited-risk systems can become high-risk if their scope expands. A customer service chatbot becomes high-risk if it starts making credit decisions. Implement change management processes to reassess risk when functionality changes.
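
One lightweight way to operationalize this is to compare a system's declared capabilities before and after each release and force a reclassification when they grow. The sketch below is hypothetical; the capability names and high-risk list are placeholders for whatever your change-management process tracks.

```python
# Hypothetical sketch: flag a reclassification review when a system gains new
# capabilities, escalating immediately if any added capability is high-risk.
HIGH_RISK_CAPABILITIES = {"credit_decision", "hiring_decision", "medical_triage"}


def reclassification_action(previous: set, current: set) -> str:
    """Return the follow-up action implied by a capability change."""
    added = current - previous
    if added & HIGH_RISK_CAPABILITIES:
        return "ESCALATE: likely high risk, rerun full classification before release"
    if added:
        return "REASSESS: new functionality, rerun the decision tree"
    return "NO_CHANGE"


chatbot_v1 = {"answer_faq", "route_ticket"}
chatbot_v2 = chatbot_v1 | {"credit_decision"}
print(reclassification_action(chatbot_v1, chatbot_v2))  # ESCALATE: ...
```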

3.1.4 Minimal Risk: Standard Practices

Minimal-risk AI systems pose little threat to individuals or society and can proceed with standard development practices.

Minimal Risk Examples

Minimal Risk Requirements

Risk Classification Decision Tree

Question 1: Does the system fall under any prohibited use case?

→ YES: PROHIBITED - Cannot proceed

→ NO: Continue to Question 2

Question 2: Does the system affect fundamental rights, health, safety, or access to essential services?

→ YES: HIGH RISK - Full compliance required

→ NO: Continue to Question 3

Question 3: Does the system interact directly with users or generate content presented as human-created?

→ YES: LIMITED RISK - Transparency requirements

→ NO: Continue to Question 4

Question 4: Could the system's failure cause significant business impact or reputational harm?

→ YES: Consider LIMITED RISK classification

→ NO: MINIMAL RISK - Standard practices
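
The decision tree translates directly into a checklist or a small utility. The function below is a minimal sketch under the assumption that each question can be answered yes or no; the names are illustrative, and the Question 4 outcome follows the "consider Limited Risk" guidance above.

```python
# Hypothetical sketch: the four-question decision tree as a function.
def classify_risk(
    prohibited_use: bool,
    affects_rights_health_safety: bool,
    user_facing_or_generates_content: bool,
    significant_business_impact: bool,
) -> str:
    if prohibited_use:
        return "PROHIBITED"  # Q1: cannot proceed
    if affects_rights_health_safety:
        return "HIGH RISK"  # Q2: full compliance required
    if user_facing_or_generates_content:
        return "LIMITED RISK"  # Q3: transparency requirements
    if significant_business_impact:
        return "LIMITED RISK"  # Q4: consider limited-risk treatment
    return "MINIMAL RISK"  # standard practices


# Example: a resume-screening tool affecting access to employment
print(classify_risk(False, True, True, True))  # HIGH RISK
```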

Implementation Steps

  1. Adopt the Risk Taxonomy: Formally adopt this four-tier risk classification system and customize the criteria for your industry and jurisdiction. Timeline: 1-2 weeks | Owner: RAI Council
  2. Classify Existing Systems: Apply the classification framework to all AI systems in your inventory and document the rationale for each classification (an example inventory record is sketched after this list). Timeline: 2-4 weeks | Owner: AI Governance Team + Model Owners
  3. Establish Classification Process: Create a repeatable process for classifying new AI projects at initiation and build it into the project intake workflow. Timeline: 2-3 weeks | Owner: PMO / AI Governance
  4. Implement Tier-Specific Controls: Deploy governance controls appropriate to each tier, giving high-risk systems immediate priority. Timeline: 4-8 weeks | Owner: Development Teams / Risk
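
To support step 2, each inventory entry can record the assigned tier alongside the rationale and the next review date. The sketch below is a hypothetical record format; the field names and example values are illustrative, not a prescribed schema.

```python
# Hypothetical sketch: one entry in an AI system inventory, documenting the
# classification rationale as required in step 2 above.
from dataclasses import dataclass
from datetime import date


@dataclass
class InventoryClassification:
    system_name: str
    owner: str
    risk_tier: str  # "PROHIBITED" | "HIGH RISK" | "LIMITED RISK" | "MINIMAL RISK"
    rationale: str
    next_review: date


entry = InventoryClassification(
    system_name="resume-screening-model",
    owner="Talent Acquisition",
    risk_tier="HIGH RISK",
    rationale=(
        "Employment use case listed in EU AI Act Annex III; affects access to "
        "work and carries discrimination risk."
    ),
    next_review=date(2026, 1, 15),
)
print(f"{entry.system_name}: {entry.risk_tier} (next review {entry.next_review})")
```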