Comprehensive Due Diligence Assessment for Third-Party AI Systems
📋 How to Use This Questionnaire
This questionnaire is designed to assess third-party AI vendors before procurement. It should be used in conjunction with your standard vendor security assessment processes.
When to Use
Procuring any AI-powered software or service
Engaging vendors who use AI in their service delivery
Renewing contracts with existing AI vendors (periodic reassessment)
Evaluating AI components in larger technology procurements
Response Types
Y/N Yes/No response required
TEXT Free-text response required
SELECT Choose from options
DOC Documentation required
Critical Questions
Questions marked with CRITICAL are mandatory and may be disqualifying if answered unsatisfactorily.
Section 0: Vendor & Product Information
0.1 Vendor Details
0.1.1Vendor Legal Name
0.1.2Vendor Headquarters Location
0.1.3Data Processing Locations (all jurisdictions where data may be processed)
0.1.4Product/Service Name and Version
0.1.5Brief Description of AI Capabilities
0.2 Assessment Context
0.2.1Internal Use Case Description
0.2.2Risk Classification (per your internal framework)
0.2.3Assessment Date
0.2.4Internal Assessor Name(s)
Section 1: Model & Algorithm Transparency
This section assesses the vendor's transparency about their AI/ML models, including training data, architecture, and decision-making processes.
1.1 Model Documentation
1.1.1Does the vendor provide documentation (e.g., model cards, system cards) describing the AI system's purpose, capabilities, and limitations?Y/NCRITICAL
Look for: intended use cases, out-of-scope uses, known limitations, performance benchmarks
1.1.2What type of AI/ML technology does the system use?SELECT
1.1.3Is the model architecture disclosed? If so, describe.TEXT
1.1.4Does the vendor use third-party foundation models (e.g., GPT, Claude, Llama)? If so, which?TEXT
Important for understanding supply chain dependencies
1.1.5Does the vendor provide an AI Bill of Materials (AI BOM) listing all model components, datasets, and dependencies?Y/NDOC
1.2 Training Data
1.2.1Can the vendor describe the sources of training data used for their AI models?Y/NCRITICAL
1.2.2Does the training data include personal information? If so, what types?TEXT
1.2.3What is the lawful basis for processing personal data in training (if applicable)?SELECT
1.2.4Does the vendor have documented processes for ensuring training data quality and accuracy?Y/N
1.2.5Does the vendor have rights/licenses to use all training data, including for commercial purposes?Y/N
Critical for intellectual property and copyright compliance
1.2.6Is there any pending or threatened litigation related to training data?Y/N
1.3 Explainability
1.3.1What level of explainability does the system provide?SELECT
1.3.2Can explanations be provided at the individual decision level?Y/N
Required for many high-stakes applications (credit, employment, etc.)
1.3.3Are explanations available in formats suitable for end users (not just technical staff)?Y/N
Section 2: Fairness & Bias Assessment
This section evaluates the vendor's practices for identifying, measuring, and mitigating bias in their AI systems.
2.1 Bias Testing
2.1.1Has the vendor conducted bias testing on their AI system?Y/NCRITICAL
2.1.2Which protected characteristics were tested for bias?TEXT
E.g., race, gender, age, disability, national origin, religion
2.1.3What fairness metrics were used?TEXT
E.g., demographic parity, equalized odds, predictive parity, disparate impact ratio
2.1.4Can the vendor provide bias testing results and methodology documentation?Y/NDOC
2.1.5Were any bias issues identified? If so, describe remediation steps taken.TEXT
2.2 Ongoing Fairness Monitoring
2.2.1Does the vendor conduct ongoing bias monitoring in production?Y/N
2.2.2What is the frequency of bias monitoring?SELECT
2.2.3Are bias monitoring reports available to customers?Y/N
2.3 Diverse & Inclusive Development
2.3.1Does the vendor have diversity in their AI development and testing teams?Y/N
2.3.2Has the vendor engaged external stakeholders or affected communities in AI development or testing?Y/N
Section 3: Security & Robustness
This section assesses the vendor's AI security practices, including protection against adversarial attacks and ensuring system reliability.
3.1 AI-Specific Security Testing
3.1.1Has the vendor conducted adversarial testing (red teaming) on the AI system?Y/NCRITICAL
3.1.2Which AI-specific attack vectors have been tested?TEXT
E.g., adversarial inputs, data poisoning, model inversion, membership inference, prompt injection
3.1.3Does the vendor have defenses against prompt injection attacks? (For LLM/GenAI systems)Y/N
3.1.4Does the vendor implement input validation and sanitization for AI inputs?Y/N
3.1.5Does the vendor implement output filtering/guardrails?Y/N
3.2 Model Protection
3.2.1What measures protect against model extraction attacks?TEXT
3.2.2Does the vendor implement rate limiting on API calls?Y/N
3.2.3Are confidence scores or probability distributions exposed in outputs?Y/N
Exposing detailed scores can enable model extraction
3.3 Infrastructure Security
3.3.1What security certifications does the vendor hold?TEXT
3.3.2Is data encrypted at rest and in transit?Y/N
3.3.3Does the vendor have a vulnerability disclosure program or bug bounty?Y/N
3.3.4When was the last third-party penetration test? Provide date and summary if available.TEXT
3.4 Reliability & Availability
3.4.1What is the vendor's committed SLA for availability?TEXT
3.4.2Does the vendor have disaster recovery procedures for AI systems?Y/N
3.4.3What is the Recovery Time Objective (RTO) and Recovery Point Objective (RPO)?TEXT
Section 4: Privacy & Data Protection
This section evaluates compliance with privacy regulations and data protection practices related to AI processing.
4.1 Data Processing
4.1.1Does the vendor use customer data to train or improve their AI models?Y/NCRITICAL
4.1.2Can customers opt out of having their data used for model training?Y/N
4.1.3What data retention periods apply to data processed by the AI system?TEXT
4.1.4Does the vendor implement data minimization principles?Y/N
4.2 Data Subject Rights
4.2.1Can data subjects access data held about them in the AI system?Y/N
4.2.2Can data subjects request deletion of their data from the system?Y/N
4.2.3If data was used for training, can it be "unlearned" or removed from models?Y/N
4.2.4Does the system support human review of automated decisions (GDPR Article 22)?Y/N
4.3 Privacy-Enhancing Technologies
4.3.1Does the vendor implement any privacy-enhancing technologies?TEXT
Proceed with enhanced contractual protections and monitoring
40-59
High Risk
Significant remediation required before approval
<40
Critical Risk
Do not proceed - material gaps in vendor capabilities
Critical Question Failures
Any "No" response to a CRITICAL question requires executive-level review and approval, regardless of overall score. Multiple critical failures may be disqualifying.