4.1 Phase 1: Ideation & Design

The foundation of responsible AI begins before a single line of code is written. This phase establishes whether an AI solution should be built and how to design it responsibly from the outset.

This is the first of six lifecycle phases covered in this part: 1. Ideation, 2. Data, 3. Development, 4. Testing, 5. Deployment, 6. Monitoring.

Key Takeaways

  • The "Should we build this?" question is more important than "Can we build this?"
  • Early stakeholder mapping prevents costly redesigns and ethical failures
  • Vulnerable group analysis is mandatory under the EU AI Act for high-risk systems
  • Design decisions made in this phase determine the large majority of downstream ethical outcomes

4.1.1 Use Case Validity Check: "Should We Build This?"

The ideation phase is the most cost-effective point to identify and address ethical concerns. Research indicates that ethical issues identified post-deployment cost 10-100x more to remediate than those caught during design. The Use Case Validity Check provides a structured framework for evaluating whether an AI application should proceed.

The Five Gates Framework

Every proposed AI use case must pass through five evaluation gates before proceeding to development:

Gate 1: Legitimacy Gate

Core Question: Is this a legitimate use of AI technology?

Pass Criteria:
  • Use case does not fall into EU AI Act "prohibited" category
  • Use case aligns with organizational values and ethics principles
  • Primary purpose is not to deceive, manipulate, or harm
  • Legal review confirms no regulatory prohibitions
Red Flags:
  • Social scoring or trustworthiness ranking
  • Subliminal manipulation techniques
  • Exploitation of vulnerable groups
  • Real-time biometric identification (unless exempted)
Gate 2: Necessity Gate

Core Question: Is AI the right solution for this problem?

Evaluation Questions:
  • Could simpler, more explainable methods achieve similar results?
  • What is the marginal benefit of AI over traditional approaches?
  • Does the complexity of AI justify the risks introduced?
  • Is there sufficient quality data to support AI development?
Documentation Required:
  • Comparison analysis with non-AI alternatives
  • Quantified benefit assessment
  • Data availability assessment
Gate 3: Proportionality Gate

Core Question: Are the potential benefits proportional to the potential risks?

Assessment Framework:

  Benefits                      Risks
  Efficiency gains              Individual harm potential
  Accuracy improvements         Group discrimination risk
  Cost reduction                Privacy intrusion
  User experience enhancement   Autonomy reduction
  Social good contribution      Systemic risk introduction
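
One way to make this weighing auditable is to score each side and record the ratio. Below is a minimal Python sketch; the 1-to-5 scale, the equal weighting, and the review threshold are hypothetical calibration choices, not part of the gate itself:

```python
# Illustrative proportionality scoring sketch. The benefit/risk categories come
# from the table above; the 1-5 scores and the decision threshold are
# hypothetical placeholders a review team would calibrate for itself.
BENEFITS = ["efficiency", "accuracy", "cost", "user_experience", "social_good"]
RISKS = ["individual_harm", "discrimination", "privacy", "autonomy", "systemic"]

def proportionality_score(benefit_scores: dict, risk_scores: dict) -> float:
    """Return the benefit-to-risk ratio; each score is 1 (negligible) to 5 (major)."""
    total_benefit = sum(benefit_scores[b] for b in BENEFITS)
    total_risk = sum(risk_scores[r] for r in RISKS)
    return total_benefit / total_risk

# Example: strong benefits, but high discrimination and privacy risk.
ratio = proportionality_score(
    {"efficiency": 4, "accuracy": 4, "cost": 3, "user_experience": 3, "social_good": 2},
    {"individual_harm": 2, "discrimination": 4, "privacy": 4, "autonomy": 2, "systemic": 1},
)
print(f"benefit/risk ratio: {ratio:.2f}")  # a ratio below ~1.5 might trigger a REDESIGN review
```
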
Gate 4: Feasibility Gate

Core Question: Can we build this responsibly with available resources?

Capability Assessment:
  • Do we have expertise to build and maintain this system?
  • Can we achieve required accuracy without discrimination?
  • Do we have resources for ongoing monitoring and updates?
  • Can we provide meaningful human oversight?
  • Can we explain decisions to affected individuals?
Gate 5: Accountability Gate

Core Question: Can we take responsibility for this system's impacts?

Requirements:
  • Clear ownership assigned for system outcomes
  • Grievance mechanisms defined for affected parties
  • Remediation pathways established for errors
  • Insurance or liability coverage in place
  • Sunset criteria and decommissioning plan defined

Use Case Decision Matrix

Gate Results                Decision    Next Steps
All 5 gates passed          PROCEED     Advance to stakeholder mapping and design phase
Gate 1 failed               STOP        Do not proceed; fundamental ethical/legal barrier
Gates 2-4 failed            REDESIGN    Revise scope, approach, or resource allocation
Gate 5 failed               PAUSE       Establish accountability structures before proceeding
Multiple gates borderline   ESCALATE    Refer to RAI Council for executive decision
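
The matrix above can be encoded as a simple review aid. A minimal Python sketch, assuming each gate is recorded as pass/borderline/fail; the precedence among simultaneous failures (STOP, then ESCALATE, then PAUSE, then REDESIGN) is an illustrative choice, since the matrix does not specify one:

```python
# Minimal sketch of the decision matrix above. Gate names and GateResult values
# mirror the table; the function and enum names are hypothetical.
from enum import Enum

class GateResult(Enum):
    PASS = "pass"
    BORDERLINE = "borderline"
    FAIL = "fail"

GATES = ["legitimacy", "necessity", "proportionality", "feasibility", "accountability"]

def decide(results: dict[str, GateResult]) -> str:
    if results["legitimacy"] is GateResult.FAIL:
        return "STOP"          # Gate 1 failure: fundamental ethical/legal barrier
    if sum(r is GateResult.BORDERLINE for r in results.values()) >= 2:
        return "ESCALATE"      # multiple borderline gates: refer to RAI Council
    if results["accountability"] is GateResult.FAIL:
        return "PAUSE"         # establish accountability structures first
    if any(results[g] is GateResult.FAIL for g in ("necessity", "proportionality", "feasibility")):
        return "REDESIGN"      # revise scope, approach, or resources
    return "PROCEED"           # all five gates passed

print(decide({g: GateResult.PASS for g in GATES}))  # -> PROCEED
```

In practice the gate evaluation documentation, not the code, is the authoritative record; a helper like this only keeps the recorded results and the decision consistent.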

The "Just Because We Can" Trap

Technical capability should never be the primary driver of AI deployment decisions. Organizations must resist the temptation to deploy AI simply because it's technologically feasible. The question "Should we?" must always precede "Can we?"

4.1.2 Stakeholder Mapping & Vulnerable Group Analysis

Comprehensive stakeholder mapping ensures that all parties affected by an AI system are identified and their interests considered during design. This analysis is particularly critical for high-risk AI systems under the EU AI Act.

Stakeholder Categories

Primary Stakeholders (Direct Impact)

👤 End Users

Individuals who directly interact with the AI system

  • Customers using AI-powered services
  • Employees using AI tools
  • Citizens accessing public services
🎯 Subjects of Decisions

Individuals about whom AI makes decisions

  • Job applicants (hiring AI)
  • Loan applicants (credit AI)
  • Patients (diagnostic AI)
📊 Data Subjects

Individuals whose data trains or operates the system

  • Training data contributors
  • Individuals in datasets
  • Historical record subjects

Secondary Stakeholders (Indirect Impact)

👥 Affected Communities

Groups impacted by aggregated decisions

  • Neighborhoods affected by policing AI
  • Demographics affected by content algorithms
  • Market segments affected by pricing AI
⚙️ Operators

Personnel who operate and maintain the system

  • Customer service agents
  • Decision reviewers
  • System administrators
💼 Displaced Workers

Individuals whose roles may be automated

  • Current job holders
  • Related role holders
  • Industry participants

Tertiary Stakeholders (Systemic Impact)

🏛️ Regulators

Authorities responsible for oversight

🌍 Society

Broader societal impacts and norms

🌱 Environment

Ecological and resource impacts
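
A stakeholder map can be kept as a machine-readable register so later phases can check coverage. A minimal Python sketch; the `Stakeholder` dataclass and its fields are a hypothetical schema, not a mandated format:

```python
# Sketch of a machine-readable stakeholder register based on the categories
# above. The dataclass and field names are illustrative, not a required schema.
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    name: str                      # e.g. "Loan applicants"
    category: str                  # "primary" | "secondary" | "tertiary"
    role: str                      # e.g. "subject of decisions"
    interests: list[str] = field(default_factory=list)
    engagement_plan: str = ""      # how and when this group will be consulted

register = [
    Stakeholder("Loan applicants", "primary", "subject of decisions",
                ["fair assessment", "explanation of outcomes", "appeal route"]),
    Stakeholder("Credit officers", "secondary", "operator",
                ["usable interface", "override capability"]),
    Stakeholder("Financial regulator", "tertiary", "regulator",
                ["auditability", "compliance evidence"]),
]

# Quick completeness check: every stakeholder tier should be represented.
assert {s.category for s in register} == {"primary", "secondary", "tertiary"}
```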

Vulnerable Group Analysis

EU AI Act Article 9(4) requires that high-risk AI systems consider the "possible negative impact... on vulnerable groups of persons." Organizations must conduct systematic analysis of how AI systems may disproportionately affect protected and vulnerable populations.

Children & Minors
  Vulnerability factors: developmental stage; limited consent capacity; susceptibility to influence
  AI risk examples: recommendation algorithm addiction; inappropriate content exposure; data exploitation
  Mitigation considerations: age verification systems; parental controls; enhanced content filtering

Elderly
  Vulnerability factors: digital literacy gaps; cognitive changes; social isolation
  AI risk examples: fraud susceptibility; interface accessibility; over-reliance on AI companions
  Mitigation considerations: simplified interfaces; human fallback options; fraud detection safeguards

Persons with Disabilities
  Vulnerability factors: accessibility barriers; historical discrimination; underrepresentation in data
  AI risk examples: biometric recognition failures; inaccessible AI interfaces; employment algorithm bias
  Mitigation considerations: accessibility-first design; inclusive training data; alternative interaction modes

Economically Disadvantaged
  Vulnerability factors: limited resource access; power imbalances; dependency on services
  AI risk examples: credit scoring discrimination; benefits eligibility errors; insurance pricing bias
  Mitigation considerations: equity-focused testing; appeal mechanisms; human review for denials

Racial/Ethnic Minorities
  Vulnerability factors: historical discrimination; data underrepresentation; structural inequities
  AI risk examples: facial recognition disparities; language model bias; predictive policing targeting
  Mitigation considerations: disaggregated testing; diverse training data; impact monitoring by group
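
Disaggregated testing, listed above as a mitigation, is straightforward to operationalize. A minimal Python sketch, assuming a simple record format and using the common four-fifths rule of thumb as the review threshold (function names and data are illustrative, and the synthetic numbers are not real results):

```python
# Sketch of disaggregated testing: compare selection rates across groups and
# flag disparities below the four-fifths rule of thumb. Data is synthetic.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])          # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def disparate_impact(rates: dict) -> dict:
    """Ratio of each group's selection rate to the highest-rate group's."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

records = ([("group_a", True)] * 60 + [("group_a", False)] * 40
           + [("group_b", True)] * 42 + [("group_b", False)] * 58)
for group, ratio in disparate_impact(selection_rates(records)).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"      # four-fifths rule threshold
    print(f"{group}: ratio={ratio:.2f} [{flag}]")
```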

Stakeholder Engagement Methods

Responsible Design Principles

Once a use case passes validity checks and stakeholders are mapped, the following design principles guide responsible AI development:

1. Human-Centered Design

Design AI to augment human capabilities, not replace human judgment in consequential decisions.

  • Define human touchpoints throughout the workflow
  • Preserve meaningful human agency and override capability
  • Design for human understanding, not just efficiency
2. Explainability by Design

Build interpretability into the system architecture from the start.

  • Choose interpretable models where accuracy permits
  • Design explanation interfaces for different audiences
  • Document feature importance and decision factors
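
For illustration, a minimal sketch of an interpretable-by-construction scorer in Python, where the model's weights double as the per-decision explanation; the feature names and weights are hypothetical:

```python
# Sketch of "explainability by design": a linear scoring model whose weights
# are themselves the documented decision factors. Values are illustrative.
WEIGHTS = {"income_to_debt": 0.9, "payment_history": 1.4, "account_age_years": 0.3}
BIAS = -2.0

def score(features: dict) -> float:
    return BIAS + sum(WEIGHTS[f] * v for f, v in features.items())

def explain(features: dict) -> list[tuple[str, float]]:
    """Per-feature contributions, largest first: the audience-facing explanation."""
    contributions = [(f, WEIGHTS[f] * v) for f, v in features.items()]
    return sorted(contributions, key=lambda fc: abs(fc[1]), reverse=True)

applicant = {"income_to_debt": 1.2, "payment_history": 0.8, "account_age_years": 4.0}
print(f"score = {score(applicant):.2f}")
for feature, contribution in explain(applicant):
    print(f"  {feature}: {contribution:+.2f}")
```
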
3. Fail-Safe Design

Anticipate failure modes and design graceful degradation.

  • Define fallback procedures for system failures
  • Implement confidence thresholds for automated decisions
  • Preserve human override capabilities at all times
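
A minimal Python sketch of the confidence-threshold pattern; the threshold value and the `predict` stub are illustrative placeholders:

```python
# Sketch of a confidence-threshold fallback: decisions are automated only above
# a set confidence; everything else is routed to a human reviewer.
AUTO_DECISION_THRESHOLD = 0.90

def predict(case: dict) -> tuple[str, float]:
    """Stub for a model call returning (label, confidence)."""
    return "approve", case.get("model_confidence", 0.0)

def decide_with_fallback(case: dict) -> dict:
    label, confidence = predict(case)
    if confidence >= AUTO_DECISION_THRESHOLD:
        return {"decision": label, "route": "automated", "confidence": confidence}
    # Graceful degradation: below threshold, the system defers rather than guesses.
    return {"decision": "pending", "route": "human_review", "confidence": confidence}

print(decide_with_fallback({"model_confidence": 0.97}))  # automated
print(decide_with_fallback({"model_confidence": 0.71}))  # human_review
```
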
4. Privacy by Design

Embed privacy protections into system architecture.

  • Apply data minimization from the outset
  • Design for purpose limitation
  • Build in consent and preference management
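
Data minimization can also be enforced mechanically at intake. A minimal Python sketch, assuming a purpose-bound allowlist of fields (the field and purpose names are hypothetical):

```python
# Sketch of data minimization at intake: only fields on an approved,
# purpose-bound allowlist ever enter the pipeline. Names are illustrative.
APPROVED_FIELDS = {
    "credit_scoring": {"income", "existing_debt", "payment_history"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Drop every field not approved for the stated purpose."""
    allowed = APPROVED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"income": 52000, "existing_debt": 9000, "payment_history": "good",
       "marital_status": "single", "web_history": ["news", "shopping"]}
print(minimize(raw, "credit_scoring"))
# -> {'income': 52000, 'existing_debt': 9000, 'payment_history': 'good'}
```
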
5. Inclusive Design

Design for diverse users and edge cases, not just typical users.

  • Consider accessibility requirements from start
  • Test with diverse user populations
  • Avoid assumptions about "normal" users
6. Contestability by Design

Build in mechanisms for individuals to challenge decisions.

  • Design appeal pathways into workflows
  • Preserve audit trails for decision review
  • Enable human reconsideration processes
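
Contestability depends on decision records that a reviewer can retrieve later. A minimal Python sketch of an append-only decision log; the file path and field names are hypothetical:

```python
# Sketch of an append-only decision log that supports later appeals: each
# automated decision is stored with the inputs and factors a reviewer needs.
import json
import time
import uuid

AUDIT_LOG = "decisions.jsonl"   # hypothetical path; append-only by convention

def record_decision(inputs: dict, decision: str, factors: list[str]) -> str:
    entry = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "inputs": inputs,
        "decision": decision,
        "factors": factors,       # what a human reviewer sees on appeal
        "appealed": False,
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry["decision_id"]

ref = record_decision({"income": 52000}, "decline", ["high existing debt"])
print(f"Cite reference {ref} when filing an appeal.")
```

One line of JSON per decision keeps the log appendable and easy to audit; giving the affected individual the decision ID is what makes the appeal pathway usable in practice.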

Implementation Checklist

Ideation Phase Deliverables

Gate Review Timeline

Milestone                      Timing   Participants                      Output
Initial Screening              Week 1   Product Owner, RAI Lead           Preliminary risk tier, assessment scope
Stakeholder Mapping Workshop   Week 2   Cross-functional team             Stakeholder map, engagement plan
Five Gates Assessment          Week 3   RAI Lead, Legal, Business Owner   Gate evaluation documentation
Design Review                  Week 4   Technical Lead, RAI Lead          Responsible design requirements
RAI Council Gate (High-Risk)   Week 5   RAI Council                       Go/No-Go decision