Appendix A: Algorithmic Impact Assessment (AIA) Template

Comprehensive Risk Assessment for AI Systems

Instructions for Use

Purpose of the Algorithmic Impact Assessment

This template provides a structured approach to assess the potential impacts of AI systems before deployment. It aligns with EU AI Act requirements, NIST AI RMF guidance, and industry best practices. Complete this assessment for all AI systems classified as High-Risk or above, and for any AI system that may significantly affect individuals or groups.

When to Complete This Assessment

Trigger                                    | Assessment Required               | Approvals Needed
New AI system development (High-Risk)      | Full assessment before deployment | RAI Council + Executive Sponsor
New AI system development (Limited Risk)   | Abbreviated assessment            | Model Owner + RAI Representative
Significant changes to existing system     | Updated assessment                | Based on change risk level
Third-party AI procurement (High-Risk use) | Deployment assessment             | RAI Council + Procurement
Annual review of High-Risk systems         | Reassessment                      | Model Owner

Section 1: System Identification

Assessment ID: unique identifier for this assessment (auto-generated or assigned)
System Name: official name of the AI system
System Version: version being assessed
Assessor: person responsible for completing this assessment
Model Owner: person accountable for the AI system
Executive Sponsor: senior executive accountable for business outcomes

Section 2: System Description & Purpose

Describe what the AI system does and the business problem it solves. Be specific about intended outcomes.
Select all that apply
Describe the technical architecture including key components, data flows, and integration points
List specific use cases and scenarios where the system will be deployed
List use cases for which the system is NOT intended and should not be used
Where will the system be deployed? Select all that apply.
Estimated number of users/subjects affected

Section 3: Risk Classification

Classify the system according to EU AI Act risk categories
If High-Risk, select the applicable EU AI Act Annex III category
Explain the reasoning behind the risk classification
List any sector-specific regulations that apply

Section 4: Data Assessment

List all data sources used for training
Describe the size and temporal scope of training data
What is the legal basis under GDPR/applicable law?
Document data rights, licenses, and any restrictions
Describe analysis of potential bias in training data
Describe data quality checks and known limitations
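As an illustrative aid for the data quality field above, a minimal sketch of automated checks for missing values and duplicate records; the function name, field names, and sample data are assumptions, not part of this template:

```python
# Hypothetical data quality check sketch; column names and thresholds
# are illustrative only and should be adapted to the actual dataset.

def quality_report(rows, required_fields):
    """Flag missing values and exact duplicate records in tabular data."""
    report = {"missing": 0, "duplicates": 0, "total": len(rows)}
    seen = set()
    for row in rows:
        # Count rows where any required field is absent or empty.
        if any(row.get(f) in (None, "") for f in required_fields):
            report["missing"] += 1
        # Detect exact duplicates by comparing sorted key/value pairs.
        key = tuple(sorted(row.items()))
        if key in seen:
            report["duplicates"] += 1
        seen.add(key)
    return report

rows = [{"age": 34, "income": 52000},
        {"age": None, "income": 61000},
        {"age": 34, "income": 52000}]
print(quality_report(rows, ["age", "income"]))
# {'missing': 1, 'duplicates': 1, 'total': 3}
```

Checks of this kind can be run as part of a training pipeline, with results summarized in the field above alongside any known limitations they reveal.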

Section 5: Stakeholder & Impact Analysis

Who are the direct users and subjects of this AI system?
Who else may be indirectly affected?
Identify any vulnerable populations who may be affected and how
What is the significance of decisions made or influenced by this system?
Identify potential negative impacts
Harm Category                      | Applicable? | Description & Severity
Physical Safety                    |             |
Financial Harm                     |             |
Discrimination / Civil Rights      |             |
Privacy Violation                  |             |
Psychological / Emotional          |             |
Reputational (to subjects)         |             |
Access to Services / Opportunities |             |
Autonomy / Manipulation            |             |
Identify positive impacts for stakeholders

Section 6: Fairness Assessment

Select characteristics that may be at risk of bias
What fairness criteria will be used to evaluate the system?
Describe how fairness will be tested
Document results of fairness testing
Describe measures implemented to address bias
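As an illustrative aid for the fairness criteria and testing fields above, a minimal sketch of one common criterion, demographic parity difference, assuming binary predictions and a single group attribute (all names and sample values are hypothetical):

```python
# Hypothetical fairness metric sketch: demographic parity difference.
# Assumes binary (0/1) predictions and a categorical group label per subject.

def demographic_parity_difference(predictions, groups):
    """Max gap in positive-prediction rate between any two groups."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5 for this toy data
```

Other criteria (e.g. equalized odds, predictive parity) weigh different error types and may conflict with one another; the criterion chosen should be justified in the fields above.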

Section 7: Transparency & Explainability

How will decisions be explained to affected individuals?
How will users be informed they are interacting with AI?
Confirm documentation is complete

Section 8: Human Oversight & Control

Explain why this level of automation is appropriate given the risks
Can human operators override AI decisions?
How can affected individuals challenge AI-influenced decisions?
What training/qualifications are required for human oversight?
Can the system be immediately halted if needed?

Section 9: Security & Robustness

Confirm security reviews completed
Assess ML-specific security risks
Risk                      | Applicable? | Mitigation
Adversarial Input Attacks |             |
Data Poisoning            |             |
Model Extraction          |             |
Model Inversion           |             |
Prompt Injection (LLMs)   |             |
What are the uptime and reliability requirements?

Section 10: Monitoring & Maintenance Plan

Describe ongoing performance monitoring
Describe ongoing fairness monitoring
How will data/concept drift be detected?
How will AI-related incidents be handled?
How often will the model be retrained?
When will this AIA be reviewed?
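As an illustrative aid for the drift detection field above, a minimal sketch of the population stability index (PSI), one widely used drift statistic; the function name, bin count, and threshold are assumptions, not prescribed by this template:

```python
# Hypothetical drift detection sketch using the population stability index.
# Compares the distribution of a numeric feature between a baseline sample
# and a live sample; PSI above roughly 0.2 is often treated as material drift.
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of one feature."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against zero-width range

    def bucket_fracs(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Floor empty buckets at a tiny value to avoid log(0).
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Identical samples yield a PSI of zero; the alert threshold and monitoring cadence should match the frequencies documented in the fields above.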

Section 11: Overall Risk Summary & Decision

LOW RISK: Minor potential for harm; standard controls sufficient
MEDIUM RISK: Moderate potential for harm; enhanced controls required
HIGH RISK: Significant potential for harm; extensive controls required

Summarize the key risks and mitigations
What risks remain after mitigations?
What conditions must be met before deployment?

Section 12: Approvals

Role: _______________________

Name: _______________________

Date: _______________________

Signature: ___________________

Role: _______________________

Name: _______________________

Date: _______________________

Signature: ___________________

Role: _______________________

Name: _______________________

Date: _______________________

Signature: ___________________

Role: _______________________

Name: _______________________

Date: _______________________

Signature: ___________________

Role: _______________________

Name: _______________________

Date: _______________________

Signature: ___________________

Role: _______________________

Name: _______________________

Date: _______________________

Signature: ___________________