Appendix B: Vendor AI Security Questionnaire

Comprehensive Due Diligence Assessment for Third-Party AI Systems

📋 How to Use This Questionnaire

This questionnaire is designed to assess third-party AI vendors before procurement. It should be used in conjunction with your standard vendor security assessment processes.

When to Use
  • Procuring any AI-powered software or service
  • Engaging vendors who use AI in their service delivery
  • Renewing contracts with existing AI vendors (periodic reassessment)
  • Evaluating AI components in larger technology procurements

Response Types

  • Y/N: Yes/No response required
  • TEXT: Free-text response required
  • SELECT: Choose from options
  • DOC: Documentation required

Critical Questions

Questions marked with CRITICAL are mandatory and may be disqualifying if answered unsatisfactorily.

Section 0: Vendor & Product Information

0.1 Vendor Details

0.1.1 Vendor Legal Name
0.1.2 Vendor Headquarters Location
0.1.3 Data Processing Locations (all jurisdictions where data may be processed)
0.1.4 Product/Service Name and Version
0.1.5 Brief Description of AI Capabilities

0.2 Assessment Context

0.2.1 Internal Use Case Description
0.2.2 Risk Classification (per your internal framework)
0.2.3 Assessment Date
0.2.4 Internal Assessor Name(s)

Section 1: Model & Algorithm Transparency

This section assesses the vendor's transparency about their AI/ML models, including training data, architecture, and decision-making processes.

1.1 Model Documentation

1.1.1 Does the vendor provide documentation (e.g., model cards, system cards) describing the AI system's purpose, capabilities, and limitations? Y/N CRITICAL
Look for: intended use cases, out-of-scope uses, known limitations, performance benchmarks
1.1.2 What type of AI/ML technology does the system use? SELECT
1.1.3 Is the model architecture disclosed? If so, describe. TEXT
1.1.4 Does the vendor use third-party foundation models (e.g., GPT, Claude, Llama)? If so, which? TEXT
Important for understanding supply chain dependencies
1.1.5 Does the vendor provide an AI Bill of Materials (AI BOM) listing all model components, datasets, and dependencies? Y/N DOC

1.2 Training Data

1.2.1 Can the vendor describe the sources of training data used for their AI models? Y/N CRITICAL
1.2.2 Does the training data include personal information? If so, what types? TEXT
1.2.3 What is the lawful basis for processing personal data in training (if applicable)? SELECT
1.2.4 Does the vendor have documented processes for ensuring training data quality and accuracy? Y/N
1.2.5 Does the vendor have rights/licenses to use all training data, including for commercial purposes? Y/N
Critical for intellectual property and copyright compliance
1.2.6 Is there any pending or threatened litigation related to training data? Y/N

1.3 Explainability

1.3.1 What level of explainability does the system provide? SELECT
1.3.2 Can explanations be provided at the individual decision level? Y/N
Required for many high-stakes applications (credit, employment, etc.)
1.3.3 Are explanations available in formats suitable for end users (not just technical staff)? Y/N

Section 2: Fairness & Bias Assessment

This section evaluates the vendor's practices for identifying, measuring, and mitigating bias in their AI systems.

2.1 Bias Testing

2.1.1 Has the vendor conducted bias testing on their AI system? Y/N CRITICAL
2.1.2 Which protected characteristics were tested for bias? TEXT
E.g., race, gender, age, disability, national origin, religion
2.1.3 What fairness metrics were used? TEXT
E.g., demographic parity, equalized odds, predictive parity, disparate impact ratio
2.1.4 Can the vendor provide bias testing results and methodology documentation? Y/N DOC
2.1.5 Were any bias issues identified? If so, describe remediation steps taken. TEXT
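
Question 2.1.3 names several fairness metrics. As a concrete illustration of one of them, the sketch below computes a disparate impact ratio from hypothetical selection counts; the group labels, numbers, and the four-fifths threshold shown are examples for assessors, not part of any vendor's actual methodology.

```python
# Illustrative computation of the disparate impact ratio (question 2.1.3).
# All counts below are hypothetical example data.

def selection_rate(selected, total):
    """Fraction of a group that received the favorable outcome."""
    return selected / total

def disparate_impact_ratio(rate_protected, rate_reference):
    """Ratio of selection rates. The common 'four-fifths rule' treats
    ratios below 0.8 as a signal of potential adverse impact."""
    return rate_protected / rate_reference

# Hypothetical outcomes from a screening model
rate_reference = selection_rate(30, 100)  # reference group: 0.30
rate_protected = selection_rate(18, 100)  # protected group: 0.18

ratio = disparate_impact_ratio(rate_protected, rate_reference)
print(f"Disparate impact ratio: {ratio:.2f}")  # Disparate impact ratio: 0.60
print("Flag for review" if ratio < 0.8 else "Within four-fifths threshold")
```

A vendor's bias-testing documentation (question 2.1.4) should state which metric definitions and thresholds were used, since different metrics can give conflicting verdicts on the same model.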

2.2 Ongoing Fairness Monitoring

2.2.1 Does the vendor conduct ongoing bias monitoring in production? Y/N
2.2.2 What is the frequency of bias monitoring? SELECT
2.2.3 Are bias monitoring reports available to customers? Y/N

2.3 Diverse & Inclusive Development

2.3.1 Does the vendor have diversity in their AI development and testing teams? Y/N
2.3.2 Has the vendor engaged external stakeholders or affected communities in AI development or testing? Y/N

Section 3: Security & Robustness

This section assesses the vendor's AI security practices, including protection against adversarial attacks and ensuring system reliability.

3.1 AI-Specific Security Testing

3.1.1 Has the vendor conducted adversarial testing (red teaming) on the AI system? Y/N CRITICAL
3.1.2 Which AI-specific attack vectors have been tested? TEXT
E.g., adversarial inputs, data poisoning, model inversion, membership inference, prompt injection
3.1.3 Does the vendor have defenses against prompt injection attacks? (For LLM/GenAI systems) Y/N
3.1.4 Does the vendor implement input validation and sanitization for AI inputs? Y/N
3.1.5 Does the vendor implement output filtering/guardrails? Y/N
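
To make questions 3.1.3 through 3.1.5 concrete, the toy sketch below shows what pattern-based input validation and output filtering might look like. Real guardrails are layered and typically model-based; the regex patterns and blocked terms here are simplistic assumptions for illustration only.

```python
# Toy illustration of input validation and output filtering
# (questions 3.1.3-3.1.5). Patterns and terms are illustrative only;
# production systems use layered, model-assisted defenses.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now",
]

def flag_suspicious_input(text):
    """Return True if the input matches a known injection phrase."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def redact_output(text, blocked_terms=("internal use only",)):
    """Replace blocked terms in model output before returning it."""
    for term in blocked_terms:
        text = re.sub(re.escape(term), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

print(flag_suspicious_input("Please ignore previous instructions"))  # True
```

When reviewing vendor answers, ask whether their defenses go beyond static pattern matching, which is easily bypassed by paraphrasing.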

3.2 Model Protection

3.2.1 What measures protect against model extraction attacks? TEXT
3.2.2 Does the vendor implement rate limiting on API calls? Y/N
3.2.3 Are confidence scores or probability distributions exposed in outputs? Y/N
Exposing detailed scores can enable model extraction
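
Question 3.2.2 asks about rate limiting, a standard defense against model extraction via high-volume querying. A minimal token-bucket sketch is shown below; the capacity and refill values are arbitrary assumptions for illustration.

```python
# Minimal token-bucket rate limiter (question 3.2.2).
# Capacity and refill rate are illustrative values.
import time

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        """Refill proportionally to elapsed time, then spend one token."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=5, refill_per_sec=1.0)
results = [bucket.allow() for _ in range(7)]
print(results)  # typically the first 5 allowed, the rest denied until refill
```

Per-client limits like this also cap the query budget available to an attacker attempting membership inference or extraction.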

3.3 Infrastructure Security

3.3.1 What security certifications does the vendor hold? TEXT
3.3.2 Is data encrypted at rest and in transit? Y/N
3.3.3 Does the vendor have a vulnerability disclosure program or bug bounty? Y/N
3.3.4 When was the last third-party penetration test? Provide date and summary if available. TEXT

3.4 Reliability & Availability

3.4.1 What is the vendor's committed SLA for availability? TEXT
3.4.2 Does the vendor have disaster recovery procedures for AI systems? Y/N
3.4.3 What is the Recovery Time Objective (RTO) and Recovery Point Objective (RPO)? TEXT

Section 4: Privacy & Data Protection

This section evaluates compliance with privacy regulations and data protection practices related to AI processing.

4.1 Data Processing

4.1.1 Does the vendor use customer data to train or improve their AI models? Y/N CRITICAL
4.1.2 Can customers opt out of having their data used for model training? Y/N
4.1.3 What data retention periods apply to data processed by the AI system? TEXT
4.1.4 Does the vendor implement data minimization principles? Y/N

4.2 Data Subject Rights

4.2.1 Can data subjects access data held about them in the AI system? Y/N
4.2.2 Can data subjects request deletion of their data from the system? Y/N
4.2.3 If data was used for training, can it be "unlearned" or removed from models? Y/N
4.2.4 Does the system support human review of automated decisions (GDPR Article 22)? Y/N

4.3 Privacy-Enhancing Technologies

4.3.1 Does the vendor implement any privacy-enhancing technologies? TEXT
E.g., differential privacy, federated learning, homomorphic encryption, secure multi-party computation
4.3.2 Is data anonymization or pseudonymization applied? Y/N
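
As context for question 4.3.1, the sketch below shows the Laplace mechanism, one of the differential-privacy techniques a vendor might name. The epsilon value and released statistic are illustrative assumptions; real deployments must also track cumulative privacy budget.

```python
# Toy sketch of the Laplace mechanism for differential privacy
# (question 4.3.1). Parameters are illustrative only.
import math
import random

def laplace_noise(scale, rng=random):
    """Sample from Laplace(0, scale) via inverse-transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with Laplace noise calibrated to sensitivity/epsilon.
    Smaller epsilon means stronger privacy and more noise."""
    return true_count + laplace_noise(sensitivity / epsilon)

print(private_count(1000, epsilon=0.5))  # noisy count; varies per run
```

A vendor claiming differential privacy should be able to state the epsilon used and how the privacy budget is accounted for across queries.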

4.4 Regulatory Compliance

4.4.1 Is the vendor GDPR compliant? Y/N
4.4.2 Does the vendor offer a Data Processing Agreement (DPA)? Y/N DOC
4.4.3 What mechanisms are in place for international data transfers? TEXT
4.4.4 Has the vendor completed a Data Protection Impact Assessment (DPIA) for this system? Y/N

Section 5: Human Oversight & Governance

This section evaluates the vendor's internal AI governance, human oversight mechanisms, and accountability structures.

5.1 Human Control

5.1.1 What level of automation does the system support? SELECT
5.1.2 Can customers configure the level of human oversight? Y/N
5.1.3 Is there an emergency stop or override mechanism? Y/N
5.1.4 What happens when the system cannot make a confident decision? TEXT

5.2 Vendor Governance

5.2.1 Does the vendor have a dedicated AI ethics or responsible AI function? Y/N
5.2.2 Does the vendor have published AI ethics principles or guidelines? Y/N DOC
5.2.3 Does the vendor conduct internal AI ethics reviews before product release? Y/N
5.2.4 Has the vendor published any AI transparency reports? Y/N

5.3 Audit & Compliance

5.3.1 Does the vendor support customer audits of AI systems? Y/N
5.3.2 Are comprehensive audit logs maintained? Y/N
5.3.3 How long are audit logs retained? TEXT
5.3.4 Has the vendor undergone any third-party AI audits? Y/N

Section 6: Monitoring & Incident Management

This section assesses the vendor's capabilities for ongoing monitoring and incident response.

6.1 Performance Monitoring

6.1.1 Does the vendor monitor model performance in production? Y/N
6.1.2 Does the vendor monitor for data drift and concept drift? Y/N
6.1.3 Are performance dashboards or reports available to customers? Y/N
6.1.4 What performance metrics are tracked? TEXT

6.2 Incident Response

6.2.1 Does the vendor have an AI-specific incident response plan? Y/N CRITICAL
6.2.2 Will the vendor notify customers of AI-related incidents? Within what timeframe? TEXT
6.2.3 Has the vendor experienced any significant AI-related incidents? If so, describe. TEXT
6.2.4 Can the vendor roll back to previous model versions if issues are detected? Y/N

6.3 Model Updates

6.3.1 How frequently is the model updated/retrained? SELECT
6.3.2 Are customers notified before model updates? Y/N
6.3.3 Can customers opt out of automatic model updates? Y/N

Section 7: Contractual & Legal Considerations

This section addresses liability, indemnification, and contractual protections related to AI systems.

7.1 Liability & Indemnification

7.1.1 Does the vendor accept liability for AI system errors or failures? Y/N CRITICAL
7.1.2 Does the vendor provide indemnification for IP infringement claims related to AI outputs? Y/N
7.1.3 Does the vendor provide indemnification for bias-related claims? Y/N
7.1.4 What is the vendor's limitation of liability cap? TEXT

7.2 Insurance

7.2.1 Does the vendor carry AI-specific professional liability insurance? Y/N
7.2.2 What is the coverage amount? TEXT

7.3 Regulatory Compliance

7.3.1 Is the vendor prepared to comply with the EU AI Act requirements? Y/N
7.3.2 Will the vendor support customer regulatory compliance requirements (documentation, audits, reporting)? Y/N
7.3.3 Does the contract include provisions for regulatory change? Y/N

7.4 Exit & Transition

7.4.1 What are the data portability options upon contract termination? TEXT
7.4.2 What is the data deletion timeline upon termination? TEXT
7.4.3 Will the vendor provide transition assistance? Y/N

📊 Assessment Scoring Guide

Use this guide to score vendor responses and determine overall risk level.

| Section | Weight | Key Criteria | Score Range |
|---|---|---|---|
| 1. Model & Algorithm Transparency | 15% | Documentation completeness, training data disclosure, explainability | 0-100 |
| 2. Fairness & Bias | 20% | Bias testing comprehensiveness, ongoing monitoring, remediation practices | 0-100 |
| 3. Security & Robustness | 20% | Adversarial testing, certifications, model protection, reliability | 0-100 |
| 4. Privacy & Data Protection | 15% | Data usage policies, subject rights, regulatory compliance | 0-100 |
| 5. Human Oversight & Governance | 15% | Control mechanisms, ethics function, audit support | 0-100 |
| 6. Monitoring & Incident Management | 10% | Performance monitoring, incident response, update procedures | 0-100 |
| 7. Contractual & Legal | 5% | Liability acceptance, indemnification, exit provisions | 0-100 |
Overall Risk Rating

| Weighted Score | Risk Level | Recommendation |
|---|---|---|
| 80-100 | Low Risk | Proceed with standard contract terms |
| 60-79 | Medium Risk | Proceed with enhanced contractual protections and monitoring |
| 40-59 | High Risk | Significant remediation required before approval |
| <40 | Critical Risk | Do not proceed; material gaps in vendor capabilities |
Critical Question Failures

Any "No" response to a CRITICAL question requires executive-level review and approval, regardless of overall score. Multiple critical failures may be disqualifying.
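
The scoring rules above can be sketched in a few lines. The weights and thresholds below mirror the tables in this guide, and the critical-failure override mirrors the rule above; the demo section scores are hypothetical.

```python
# Sketch of the weighted scoring and risk-rating logic in this guide.
# Weights and thresholds mirror the tables above; demo scores are hypothetical.

WEIGHTS = {
    "transparency": 0.15,  # 1. Model & Algorithm Transparency
    "fairness": 0.20,      # 2. Fairness & Bias
    "security": 0.20,      # 3. Security & Robustness
    "privacy": 0.15,       # 4. Privacy & Data Protection
    "oversight": 0.15,     # 5. Human Oversight & Governance
    "monitoring": 0.10,    # 6. Monitoring & Incident Management
    "contractual": 0.05,   # 7. Contractual & Legal
}

def overall_score(section_scores):
    """Weighted sum of per-section scores (each 0-100)."""
    return sum(section_scores[k] * w for k, w in WEIGHTS.items())

def risk_level(score, critical_failures=0):
    """Map a weighted score to a risk level; any CRITICAL failure
    forces executive-level review regardless of score."""
    if critical_failures > 0:
        return "Executive review required"
    if score >= 80:
        return "Low Risk"
    if score >= 60:
        return "Medium Risk"
    if score >= 40:
        return "High Risk"
    return "Critical Risk"

demo = {"transparency": 85, "fairness": 70, "security": 90, "privacy": 80,
        "oversight": 75, "monitoring": 60, "contractual": 50}
total = overall_score(demo)
print(f"{total:.1f} -> {risk_level(total)}")  # 76.5 -> Medium Risk
```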

Assessment Summary

Scoring Summary

| Section | Score (0-100) | Weight | Weighted Score |
|---|---|---|---|
| 1. Model & Algorithm Transparency | | 15% | - |
| 2. Fairness & Bias | | 20% | - |
| 3. Security & Robustness | | 20% | - |
| 4. Privacy & Data Protection | | 15% | - |
| 5. Human Oversight & Governance | | 15% | - |
| 6. Monitoring & Incident Management | | 10% | - |
| 7. Contractual & Legal | | 5% | - |
| Overall Weighted Score | | | - |

Critical Question Summary

Number of Critical Questions Failed:
List Critical Failures (if any):

Assessment Decision

Overall Risk Level:
Recommendation:
Conditions / Remediation Required (if applicable):
Assessment Summary / Rationale:

Approvals

| Role | Name | Date | Signature |
|---|---|---|---|
| Assessment Lead | | | |
| Procurement Lead | | | |
| RAI Council Representative | | | |
| Information Security | | | |
| Legal / Compliance | | | |
| Executive Sponsor (if High/Critical Risk) | | | |