1.3 Scope of Application

This framework applies to all AI systems within your organization's sphere of influence—whether developed internally, procured from vendors, or adopted informally by employees. Understanding the full scope ensures no AI usage falls outside your governance perimeter.

🎯 The Build-Buy-Shadow Triad

Organizations typically encounter AI through three channels: systems they build internally, solutions they buy from third parties, and tools employees adopt without formal approval (Shadow AI). Comprehensive governance must address all three.

1.3.1 Internal Development (Build)

AI systems developed in-house represent your greatest opportunity for control, and your greatest responsibility.

Governance Requirements by Development Phase

  • Ideation. Activities: use case validity check, stakeholder mapping, preliminary risk assessment. Gate: project approved to proceed; risk tier assigned.
  • Data Collection. Activities: data lineage documentation, bias assessment, privacy review, IP clearance. Gate: data approved for use; DPIA completed if required.
  • Development. Activities: model card creation, fairness testing, security review, documentation. Gate: technical documentation complete; testing criteria met.
  • Validation. Activities: red teaming, adversarial testing, fairness audit, explainability check. Gate: independent validation passed; risks documented.
  • Deployment. Activities: user disclosure, monitoring setup, incident response readiness. Gate: deployment approval from governance body.
  • Operation. Activities: continuous monitoring, drift detection, bias monitoring, incident management. Gate: ongoing compliance; periodic re-certification.
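
Where gates like these feed an automated pipeline, they can be encoded as data and checked mechanically rather than tracked by hand. The Python sketch below is illustrative only: the PHASE_GATES mapping and check_gate helper are hypothetical names, and the evidence keys paraphrase the gate criteria above.

```python
# Minimal sketch: lifecycle gates as a machine-readable checklist.
# PHASE_GATES and check_gate are illustrative names, not a real tool's API.

PHASE_GATES = {
    "ideation": ["risk_tier_assigned", "stakeholder_map_complete"],
    "data_collection": ["data_lineage_documented", "dpia_complete_or_waived"],
    "development": ["model_card_complete", "fairness_tests_passed"],
    "validation": ["red_team_report_filed", "independent_validation_passed"],
    "deployment": ["governance_approval_recorded", "monitoring_configured"],
    "operation": ["drift_monitoring_active", "recertification_current"],
}

def check_gate(phase: str, evidence: dict[str, bool]) -> list[str]:
    """Return the gate criteria for `phase` that are not yet satisfied."""
    return [c for c in PHASE_GATES[phase] if not evidence.get(c, False)]

# Usage: block promotion to the next phase until the list is empty.
missing = check_gate("development", {"model_card_complete": True})
if missing:
    print(f"Gate failed; missing evidence: {missing}")
```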

Build-Specific Challenges

Technical Debt

Rushing to production without proper documentation creates long-term governance gaps. Enforce "documentation-complete" gates.

Experiment Sprawl

Research teams may create dozens of experimental models without tracking. Implement lightweight registration for all experiments; a minimal registration sketch appears after this list of challenges.

Feature Creep

AI capabilities added incrementally may bypass initial risk assessment. Require re-assessment when functionality changes significantly.

Open Source Risk

Using open-source models or training data brings inherited biases and license obligations. Conduct provenance review for all components.
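
As noted under Experiment Sprawl above, even a very small registration hook improves visibility over no tracking at all. The sketch below shows one hypothetical shape for such a hook: a Python decorator that appends run metadata to an append-only registry file. The decorator name, registry path, and fields are assumptions, not a prescribed tool.

```python
# Minimal sketch: lightweight experiment registration via a decorator.
# The registry path and field names are illustrative, not a standard.
import functools, getpass, json, time

REGISTRY = "experiment_registry.jsonl"  # append-only audit trail

def register_experiment(purpose: str, data_sources: list[str]):
    """Record who ran which experiment, when, and on what data."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            entry = {
                "experiment": fn.__name__,
                "purpose": purpose,
                "data_sources": data_sources,
                "owner": getpass.getuser(),
                "started_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            }
            with open(REGISTRY, "a") as f:
                f.write(json.dumps(entry) + "\n")
            return fn(*args, **kwargs)
        return inner
    return wrap

@register_experiment("churn model baseline", ["crm_exports_2024"])
def train_baseline():
    ...  # training code goes here
```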

1.3.2 Third-Party Procurement (Buy)

Procuring AI from vendors shifts some development risk but not accountability. Under most regulations, deployers remain responsible for how AI is used in their operations.

Categories of Procured AI

  • AI-as-a-Service (e.g., OpenAI API, Azure AI Services, Google Cloud AI, AWS AI Services). Governance focus: data handling, model updates, vendor stability, output monitoring.
  • Embedded AI Features (e.g., CRM AI assistants, ERP optimization, HR screening tools). Governance focus: understanding the AI functionality, assessing bias risk, validating vendor claims.
  • AI-Powered SaaS (e.g., AI writing assistants, automated marketing platforms, AI analytics). Governance focus: data privacy, output ownership, service continuity.
  • Custom Development (e.g., outsourced model development, consulting firm deliverables). Governance focus: full development lifecycle oversight, IP ownership, quality assurance.
  • Pre-Trained Models (e.g., commercial models, model marketplaces, industry-specific models). Governance focus: training data provenance, licensing, performance validation.
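
For AI-as-a-Service in particular, parts of output monitoring and change notification can be self-served by logging every call and watching the model identifier the vendor reports. The sketch below assumes a generic JSON-over-HTTP vendor endpoint; the URL, response fields, and alerting behavior are placeholders to adapt, not any specific vendor's API.

```python
# Minimal sketch: wrap a vendor AI call to log usage and flag
# unannounced model changes. Endpoint and field names are hypothetical.
import json, time, urllib.request

VENDOR_URL = "https://api.example-vendor.com/v1/generate"  # placeholder
LOG_PATH = "vendor_ai_calls.jsonl"
_last_model_seen = None

def call_vendor(prompt: str, api_key: str) -> str:
    global _last_model_seen
    req = urllib.request.Request(
        VENDOR_URL,
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    model = body.get("model", "unknown")  # assumes vendor echoes a model id
    if _last_model_seen and model != _last_model_seen:
        print(f"ALERT: vendor model changed {_last_model_seen} -> {model}")
    _last_model_seen = model
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({"ts": time.time(), "model": model,
                            "prompt_chars": len(prompt)}) + "\n")
    return body.get("output", "")
```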

Contractual Provisions to Include

⚖️ Essential Contract Language
  • Transparency Rights: Right to request documentation on model training, capabilities, and limitations
  • Audit Rights: Ability to conduct or commission third-party audits of AI systems
  • Change Notification: Advance notice of significant model updates or behavioral changes
  • Data Use Restrictions: Prohibition on using customer data for model training without explicit consent
  • Compliance Cooperation: Vendor support for regulatory audits and conformity assessments
  • Indemnification: Coverage for claims arising from vendor AI bias, errors, or security breaches
  • Exit Provisions: Data portability and transition support if relationship ends

1.3.3 Shadow AI & Employee Use of Public Tools

Shadow AI represents one of the fastest-growing and least-controlled sources of AI risk. Research indicates that 60% of employees have used AI at work, but only 18.5% are aware of company policies governing this use.

Common Shadow AI Scenarios

  • ChatGPT/Claude for content (medium risk): confidential information in prompts, brand voice inconsistency, accuracy.
  • AI coding assistants (high risk): proprietary code exposure, security vulnerabilities, license compliance.
  • AI meeting transcription (high risk): recording confidential discussions, data storage location, consent.
  • Personal AI assistants (medium risk): work data on personal accounts, lack of enterprise controls.
  • AI image generation (medium risk): copyright concerns, brand misuse, inappropriate content.
  • AI data analysis (high risk): PII exposure, customer data in third-party tools, regulatory violations.

Shadow AI Governance Framework

1. Acknowledge Reality

Accept that employees are using AI tools. Outright bans typically drive usage underground, reducing visibility and increasing risk. Focus on enabling safe use rather than prohibition.

2. Provide Sanctioned Alternatives

Deploy enterprise-grade AI tools with appropriate security and governance controls. If employees have access to secure options, they're less likely to use unsanctioned alternatives.

3. Establish Clear Policies

Create an Acceptable Use Policy for AI that clearly defines: what tools are approved, what data can be used with AI, required disclosures, and prohibited activities.

4. Implement Technical Controls

Deploy DLP solutions that detect sensitive data in AI prompts, classify and label data appropriately, and block unauthorized AI tool access where necessary. A minimal prompt-screening sketch follows the final step in this list.

5. Train and Communicate

Educate all employees on safe AI use, emphasizing real-world examples of data leakage incidents. Make the "why" behind policies clear, not just the rules.
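
As flagged in step 4, a first-pass technical control can be simple pattern screening of prompts before they leave your network. The Python sketch below is deliberately minimal and its patterns are illustrative; it is no substitute for a production DLP product with data classification and far broader coverage.

```python
# Minimal sketch: screen outbound AI prompts for obvious sensitive data.
# Patterns are illustrative; real DLP needs classification labels,
# context, and much wider coverage.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "internal_marker": re.compile(r"(?i)\b(confidential|internal only)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items()
            if rx.search(prompt)]

hits = screen_prompt("Summarize this CONFIDENTIAL memo for me...")
if hits:
    print(f"Blocked: prompt matched sensitive patterns {hits}")
```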

AI Acceptable Use Policy Elements

📋 Policy Template Components
  • Approved Tools List: Specific AI tools sanctioned for business use
  • Data Classification: What types of data may/may not be used with AI tools
  • Disclosure Requirements: When AI-generated content must be identified
  • Quality Assurance: Human review requirements for AI outputs
  • Account Usage: Requirement to use enterprise accounts, not personal
  • Reporting Obligations: How to report AI-related concerns or incidents
  • Consequences: Disciplinary actions for policy violations
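
Several of these components can also be captured in machine-readable form so the written policy and the enforcing tooling stay in sync. The sketch below is one hypothetical encoding in Python; the field names mirror the bullets above rather than any standard schema.

```python
# Minimal sketch: an AI acceptable-use policy as machine-readable data.
# Field names mirror the template components above; they are not a standard.
from dataclasses import dataclass, field

@dataclass
class AIAcceptableUsePolicy:
    approved_tools: list[str] = field(default_factory=list)
    allowed_data_classes: list[str] = field(default_factory=list)
    disclosure_required_for: list[str] = field(default_factory=list)
    human_review_required: bool = True
    enterprise_accounts_only: bool = True
    incident_contact: str = "ai-governance@example.com"  # placeholder

policy = AIAcceptableUsePolicy(
    approved_tools=["enterprise-chat", "approved-coding-assistant"],
    allowed_data_classes=["public", "internal"],
    disclosure_required_for=["customer-facing content"],
)

def tool_allowed(tool: str) -> bool:
    """Gate usable by, e.g., a proxy or DLP hook."""
    return tool in policy.approved_tools
```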

Scope Decision Matrix

Use this matrix to determine governance requirements based on AI source and risk level:

  • Build. Minimal risk: standard SDLC + model card. Limited risk: full lifecycle governance + AIA. High risk: complete framework + third-party audit.
  • Buy. Minimal risk: vendor questionnaire + monitoring. Limited risk: due diligence + contractual protections. High risk: full assessment + audit rights + continuous validation.
  • Shadow. Minimal risk: policy awareness + training. Limited risk: detection + sanctioned alternatives. High risk: technical controls + restricted access + strict monitoring.
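
If this matrix feeds an intake workflow, it can be encoded as a lookup so every registered system receives a deterministic set of requirements. A minimal sketch, assuming the source and risk labels above; the function name and returned strings are shorthand, not a schema.

```python
# Minimal sketch: the scope decision matrix as a deterministic lookup.
# Keys mirror the matrix above; the strings are shorthand descriptions.
REQUIREMENTS = {
    ("build", "minimal"):  "Standard SDLC + model card",
    ("build", "limited"):  "Full lifecycle governance + AIA",
    ("build", "high"):     "Complete framework + third-party audit",
    ("buy", "minimal"):    "Vendor questionnaire + monitoring",
    ("buy", "limited"):    "Due diligence + contractual protections",
    ("buy", "high"):       "Full assessment + audit rights + continuous validation",
    ("shadow", "minimal"): "Policy awareness + training",
    ("shadow", "limited"): "Detection + sanctioned alternatives",
    ("shadow", "high"):    "Technical controls + restricted access + strict monitoring",
}

def governance_requirements(source: str, risk: str) -> str:
    """Look up governance requirements for an (AI source, risk level) pair."""
    try:
        return REQUIREMENTS[(source.lower(), risk.lower())]
    except KeyError:
        raise ValueError(f"Unknown combination: {source!r}, {risk!r}")

print(governance_requirements("Buy", "High"))
```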