3.1 The AI Risk Tiering System
Not all AI systems require the same level of governance. A risk-based approach ensures that controls are proportionate to potential harm—applying rigorous oversight where it matters most while enabling innovation for lower-risk applications.
This risk classification framework aligns with the EU AI Act's four-tier system (Unacceptable/Prohibited, High Risk, Limited Risk, Minimal Risk), allowing organizations to scale governance in proportion to risk while building toward compliance across jurisdictions.
Risk Tier Overview
| Risk Tier | Description | Governance Requirements | Examples |
|---|---|---|---|
| Prohibited | Clear threat to fundamental rights, safety, or democratic values | Cannot be deployed | Social scoring, subliminal manipulation, real-time biometric ID |
| High Risk | Significant potential impact on health, safety, or fundamental rights | Full framework compliance, third-party assessment, continuous monitoring | Hiring systems, credit scoring, medical diagnosis, biometric ID |
| Limited | Moderate impact with transparency concerns | Transparency requirements, documentation, periodic review | Chatbots, content recommendation, sentiment analysis |
| Minimal | Low potential for harm | Standard development practices, basic documentation | Spam filters, video game AI, inventory optimization |
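For teams that keep a machine-readable AI inventory, the tiers above can be encoded directly so that each system's classification and rationale travel with its record. The following Python sketch is illustrative only; the RiskTier enum and AISystemRecord fields are hypothetical conventions, not anything mandated by the EU AI Act.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Four-tier taxonomy mirroring the table above (hypothetical encoding)."""
    PROHIBITED = "prohibited"   # cannot be deployed
    HIGH = "high"               # full framework compliance
    LIMITED = "limited"         # transparency requirements
    MINIMAL = "minimal"         # standard development practices


@dataclass
class AISystemRecord:
    """One entry in the AI inventory, carrying its classification and rationale."""
    name: str
    owner: str
    tier: RiskTier
    rationale: str                                   # why this tier was assigned
    classified_on: date = field(default_factory=date.today)
    review_cadence: str = "annual"                   # e.g. quarterly for high-risk systems


# Example: registering a resume-screening tool, which falls under Employment (high risk)
screener = AISystemRecord(
    name="resume-screener-v2",
    owner="Talent Acquisition",
    tier=RiskTier.HIGH,
    rationale="Recruitment/resume screening is a high-risk domain (EU AI Act Annex III).",
    review_cadence="quarterly",
)
```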
3.1.1 Prohibited AI: Unacceptable Risks
Certain AI applications pose unacceptable risks to individuals and society. These must not be developed or deployed, regardless of potential business value.
Prohibited AI Practices (EU AI Act Article 5, effective February 2025)
🚫 The following AI applications are PROHIBITED:
- Social Scoring: AI that evaluates or classifies people based on social behavior or personal characteristics leading to detrimental treatment
- Subliminal Manipulation: AI using subliminal techniques to materially distort behavior in ways causing harm
- Exploitation of Vulnerabilities: AI targeting specific groups based on age, disability, or social/economic situation to distort behavior
- Biometric Categorization: AI inferring sensitive attributes (race, political opinions, religion, sexual orientation) from biometric data
- Untargeted Facial Recognition: Scraping facial images from internet or CCTV to build recognition databases
- Emotion Recognition at Work/School: AI inferring emotions in workplaces or educational institutions (except safety/medical purposes)
- Real-Time Remote Biometric ID: Law enforcement use in public spaces for identification (limited exceptions)
- Predictive Policing (Individual): AI predicting individual likelihood of committing crimes based solely on profiling
Internal Prohibited Use Cases
Beyond regulatory prohibitions, organizations should consider prohibiting:
- AI that makes fully autonomous decisions about employment termination
- Systems that deny essential services without human review
- AI creating synthetic media of real individuals without consent
- Surveillance systems that track employee behavior beyond job performance
- AI weapons or systems designed to cause physical harm
Any AI project identified as prohibited must be halted immediately. The RAI Council should be notified, and a formal review conducted to understand how the project reached this stage. Penalties under the EU AI Act for prohibited practices reach up to €35 million or 7% of global annual turnover, whichever is higher.
3.1.2 High-Risk AI: Critical Applications
High-risk AI systems can significantly impact people's lives, health, safety, or fundamental rights. They require the most stringent governance controls.
High-Risk Categories (EU AI Act Annex III)
| Domain | Specific Applications | Key Concerns |
|---|---|---|
| Employment | Recruitment, resume screening, promotion decisions, task allocation, performance monitoring, termination | Discrimination, unfair treatment, privacy |
| Credit & Finance | Credit scoring, loan decisions, insurance pricing, fraud detection affecting individuals | Discrimination, financial exclusion |
| Education | Student assessment, admission decisions, learning personalization affecting opportunities | Equal access, developmental impact |
| Healthcare | Medical diagnosis, treatment recommendations, triage, resource allocation | Patient safety, accuracy, equity |
| Biometrics | Remote biometric identification, emotion recognition, categorization | Privacy, accuracy across demographics |
| Critical Infrastructure | Energy grid management, water systems, transportation safety | Safety, reliability, security |
| Law Enforcement | Risk assessments, evidence analysis, lie detection, crime prediction | Due process, accuracy, civil liberties |
| Border/Immigration | Document verification, risk assessment, asylum application evaluation | Human rights, accuracy, discrimination |
| Justice & Democracy | Judicial decision support, election influence analysis | Due process, democratic integrity |
High-Risk Compliance Requirements
- Risk Management System: Identify, analyze, estimate, and evaluate risks throughout lifecycle
- Data Governance: Ensure training, validation, and testing data are relevant, representative, and, to the best extent possible, free of errors and complete
- Technical Documentation: Comprehensive documentation enabling conformity assessment
- Record-Keeping: Automatic logging of events for traceability (a minimal logging sketch follows this list)
- Transparency: Clear instructions for downstream deployers
- Human Oversight: Designed to be effectively overseen by humans
- Accuracy & Robustness: Appropriate levels of accuracy, resilience to errors
- Cybersecurity: Resilience against manipulation and attacks
- Conformity Assessment: Internal or third-party assessment before market placement
- EU Database Registration: Registration in the EU's public database of high-risk systems before deployment
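The record-keeping requirement in particular lends itself to a simple technical pattern: log every inference event with enough context to reconstruct it later. Below is a minimal sketch, assuming a generic model object with a predict method; the field names and JSON-lines format are illustrative, not prescribed by the regulation.

```python
import json
import time
import uuid


def log_inference(log_path, model_version, inputs, output):
    """Append one traceability record per inference as a JSON line (illustrative schema)."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,        # consider redacting personal data before logging
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def predict_with_logging(model, features, log_path="inference_log.jsonl"):
    """Wrap a model call so every decision is automatically recorded."""
    output = model.predict(features)
    log_inference(log_path, getattr(model, "version", "unknown"), features, output)
    return output
```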
Internal High-Risk Governance Process
1. Algorithmic Impact Assessment: Complete a full AIA identifying risks, affected populations, and mitigation strategies.
2. RAI Council Review: Present the system to the council for ethics review and risk discussion.
3. Independent Validation: Obtain third-party or internal audit validation of fairness, accuracy, and security.
4. Executive Approval: Secure final deployment approval from the designated executive.
5. Continuous Monitoring: Maintain ongoing monitoring with quarterly performance reviews.
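Some teams encode this stage-gate sequence so that a deployment request cannot skip ahead. The sketch below illustrates that idea; the gate names mirror the steps above, but the tracking logic is a hypothetical example rather than part of the framework.

```python
# Pre-deployment gates for high-risk systems; continuous monitoring begins after deployment
HIGH_RISK_GATES = [
    "algorithmic_impact_assessment",
    "rai_council_review",
    "independent_validation",
    "executive_approval",
]


def approve_deployment(completed_gates):
    """Allow deployment only when every pre-deployment gate has been completed."""
    for gate in HIGH_RISK_GATES:
        if gate not in completed_gates:
            raise RuntimeError(f"Cannot deploy: gate '{gate}' not completed")
    return True


# Example: independent validation has not run yet, so the request is blocked
try:
    approve_deployment({"algorithmic_impact_assessment", "rai_council_review"})
except RuntimeError as err:
    print(err)  # Cannot deploy: gate 'independent_validation' not completed
```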
3.1.3 Limited Risk: Transparency Focus
Limited-risk AI systems don't pose significant threats to safety or rights but require transparency to ensure users understand they're interacting with AI or that content is AI-generated.
Limited Risk Examples
- Chatbots & Virtual Assistants: Customer service bots, internal help desks, AI assistants
- Content Generation: AI writing tools, image generators, video creation
- Recommendation Systems: Product recommendations, content curation, search personalization
- Sentiment Analysis: Customer feedback analysis, social media monitoring
- Inventory & Operations: Demand forecasting, supply chain optimization
Limited Risk Requirements
| Requirement | Description | Implementation |
|---|---|---|
| AI Disclosure | Users must be informed they're interacting with AI | "You are chatting with an AI assistant" notices |
| Synthetic Content Labeling | AI-generated or manipulated content must be marked | Watermarking, metadata tags, visible labels |
| Documentation | Model cards and system documentation maintained | Standardized templates, version control |
| Periodic Review | Annual risk reassessment | Scheduled review calendar, documented assessments |
| Performance Monitoring | Track accuracy and user satisfaction | Dashboards, feedback mechanisms |
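The AI-disclosure and content-labeling rows above translate into small, concrete affordances in the product itself. Here is a minimal sketch of one approach; the notice text, metadata keys, and generator name are placeholders that legal and UX teams would refine.

```python
AI_DISCLOSURE_NOTICE = "You are chatting with an AI assistant."


def wrap_chat_reply(reply_text, first_turn):
    """Prepend the AI disclosure on the first turn of a conversation."""
    if first_turn:
        return f"{AI_DISCLOSURE_NOTICE}\n\n{reply_text}"
    return reply_text


def label_generated_content(content):
    """Attach provenance metadata to AI-generated content (illustrative fields only)."""
    return {
        "content": content,
        "metadata": {
            "ai_generated": True,
            "generator": "internal-genai-service",  # hypothetical system name
            "disclosure": "This content was generated or modified by AI.",
        },
    }


# Example usage
print(wrap_chat_reply("How can I help you today?", first_turn=True))
```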
Limited-risk systems can become high-risk if their scope expands. A customer service chatbot becomes high-risk if it starts making credit decisions. Implement change management processes to reassess risk when functionality changes.
3.1.4 Minimal Risk: Standard Practices
Minimal-risk AI systems pose little threat to individuals or society and can proceed with standard development practices.
Minimal Risk Examples
- Spam filters and email categorization
- Video game AI and entertainment NPCs
- Autocorrect and spell checking
- Basic search functionality
- Manufacturing quality control (non-safety)
- Internal analytics and reporting dashboards
Minimal Risk Requirements
- Standard software development lifecycle
- Basic model documentation (lightweight model card; a template sketch follows this list)
- Registration in AI inventory
- General awareness training for developers
- Standard security and privacy practices
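Even at this tier, a few structured fields keep the inventory searchable and make later reclassification easier. A lightweight model card might look like the sketch below; the field set is a suggestion, not a standard.

```python
def lightweight_model_card(name, owner, purpose, data_sources, known_limitations):
    """Return a minimal model card for a minimal-risk system (suggested fields only)."""
    return {
        "name": name,
        "owner": owner,
        "risk_tier": "minimal",
        "purpose": purpose,
        "data_sources": data_sources,
        "known_limitations": known_limitations,
    }


# Example: documenting a spam filter before registering it in the AI inventory
card = lightweight_model_card(
    name="email-spam-filter",
    owner="IT Operations",
    purpose="Classify inbound email as spam or not spam",
    data_sources=["internal email corpus (anonymized)"],
    known_limitations=["Lower accuracy on non-English messages"],
)
```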
Risk Classification Decision Tree
Question 1: Does the system fall under any prohibited use case?
→ YES: PROHIBITED - Cannot proceed
→ NO: Continue to Question 2
Question 2: Does the system affect fundamental rights, health, safety, or access to essential services?
→ YES: HIGH RISK - Full compliance required
→ NO: Continue to Question 3
Question 3: Does the system interact directly with users or generate content presented as human-created?
→ YES: LIMITED RISK - Transparency requirements
→ NO: Continue to Question 4
Question 4: Could the system's failure cause significant business impact or reputational harm?
→ YES: Consider LIMITED RISK classification
→ NO: MINIMAL RISK - Standard practices
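The decision tree maps directly onto a short classification helper that project-intake tooling can call, assuming the four yes/no questions are captured as booleans during intake. The tier labels follow the tree (with Question 4's "Consider LIMITED" rendered conservatively as LIMITED), but the function itself is an illustrative sketch.

```python
def classify_risk_tier(prohibited_use, affects_rights_or_safety,
                       user_facing_or_generative, high_business_impact):
    """Apply the four-question decision tree to assign a risk tier."""
    if prohibited_use:
        return "PROHIBITED"   # Question 1: cannot proceed
    if affects_rights_or_safety:
        return "HIGH"         # Question 2: full compliance required
    if user_facing_or_generative:
        return "LIMITED"      # Question 3: transparency requirements
    if high_business_impact:
        return "LIMITED"      # Question 4: conservative classification
    return "MINIMAL"          # standard practices


# Example: a customer-service chatbot (user-facing, no rights/safety impact)
print(classify_risk_tier(False, False, True, False))  # LIMITED
```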
Implementation Steps
1. Adopt the Risk Taxonomy: Formally adopt this four-tier risk classification system. Customize criteria for your industry and jurisdiction.
   Timeline: 1-2 weeks | Owner: RAI Council
2. Classify Existing Systems: Apply the classification framework to all AI systems in your inventory. Document the rationale for each classification.
   Timeline: 2-4 weeks | Owner: AI Governance Team + Model Owners
3. Establish Classification Process: Create a repeatable process for classifying new AI projects at initiation. Build it into the project intake workflow.
   Timeline: 2-3 weeks | Owner: PMO / AI Governance
4. Implement Tier-Specific Controls: Deploy governance controls appropriate to each tier, giving high-risk systems immediate priority.
   Timeline: 4-8 weeks | Owner: Development Teams / Risk