Advanced · 10 min read · Module 11, Lesson 6

🔐 Enterprise Security Audit Guide

Security checklist and compliance review for AI deployments


Deploying AI systems — particularly large language models like Claude — into enterprise environments demands rigorous security auditing before, during, and after production launch. A single misconfigured API key, an unvalidated output rendered as HTML, or an unmonitored endpoint can expose your organization to data breaches, compliance violations, and reputational damage.

This lesson provides a comprehensive, actionable security audit framework covering pre-deployment checklists, risk assessment, input/output security, access control, API key lifecycle, network hardening, logging and audit trails, incident response, vendor risk management, and regulatory compliance mapping.


1. Pre-Deployment Security Checklist

Before any AI system touches production traffic, walk through every item below. Each item should be signed off by the responsible team.

1.1 Identity & Access

| # | Check Item | Owner | Status |
|---|------------|-------|--------|
| 1 | All API keys are stored in a secrets manager (e.g., AWS Secrets Manager, HashiCorp Vault) | Platform | [ ] |
| 2 | No API keys are committed to version control | DevSecOps | [ ] |
| 3 | Service accounts follow the principle of least privilege | IAM | [ ] |
| 4 | Multi-factor authentication is enforced for all human users accessing AI dashboards | Security | [ ] |
| 5 | Role-based access control (RBAC) is configured for the AI platform | IAM | [ ] |

1.2 Network & Infrastructure

| # | Check Item | Owner | Status |
|---|------------|-------|--------|
| 6 | All API calls are made over TLS 1.2 or higher | Network | [ ] |
| 7 | Outbound traffic to AI vendor endpoints is restricted via allowlists | Network | [ ] |
| 8 | Internal services communicate through private subnets or VPN tunnels | Infrastructure | [ ] |
| 9 | Rate limiting is configured at the gateway level | Platform | [ ] |
| 10 | DDoS protection is enabled for public-facing AI endpoints | Network | [ ] |

1.3 Data Protection

| # | Check Item | Owner | Status |
|---|------------|-------|--------|
| 11 | PII is redacted or masked before sending to the AI provider | Data Engineering | [ ] |
| 12 | Data classification labels are applied to all AI training and inference data | Governance | [ ] |
| 13 | Data at rest is encrypted with AES-256 or equivalent | Infrastructure | [ ] |
| 14 | Data in transit is encrypted end-to-end | Network | [ ] |
| 15 | Zero Data Retention (ZDR) is enabled if required by policy | Compliance | [ ] |
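Checklist item 13 calls for AES-256 at rest. A minimal sketch with Node's built-in crypto module is below; key management is deliberately elided — in production the key comes from your secrets manager, never from code:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Sketch of AES-256-GCM for data at rest. GCM is authenticated encryption:
// tampering with the ciphertext causes decryption to throw.
function encrypt(plaintext: string, key: Buffer): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12); // Unique nonce per message — never reuse with the same key
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

function decrypt(enc: { iv: Buffer; tag: Buffer; data: Buffer }, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, enc.iv);
  decipher.setAuthTag(enc.tag);
  return Buffer.concat([decipher.update(enc.data), decipher.final()]).toString("utf8");
}
```

The same pattern applies whether you are encrypting stored prompts, cached responses, or audit archives.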

1.4 Application Security

| # | Check Item | Owner | Status |
|---|------------|-------|--------|
| 16 | Input validation and sanitization is implemented for all user prompts | Backend | [ ] |
| 17 | Output sanitization prevents XSS, injection, and code execution | Backend | [ ] |
| 18 | Prompt injection defenses are tested and documented | Security | [ ] |
| 19 | Content filtering policies are configured and active | AI/ML | [ ] |
| 20 | Error messages do not leak internal system details | Backend | [ ] |

1.5 Monitoring & Response

| # | Check Item | Owner | Status |
|---|------------|-------|--------|
| 21 | Centralized logging captures all AI API requests and responses | Observability | [ ] |
| 22 | Alerting thresholds are configured for anomalous usage patterns | SRE | [ ] |
| 23 | An incident response runbook exists for AI-specific failures | Security | [ ] |
| 24 | Regular penetration testing includes the AI integration surface | Security | [ ] |
| 25 | A rollback plan exists to disable AI features without downtime | Platform | [ ] |

2. Risk Assessment Framework

Use a structured risk matrix to evaluate every AI-related threat.

2.1 Risk Scoring Matrix

Likelihood (L): 1 = Rare, 2 = Unlikely, 3 = Possible, 4 = Likely, 5 = Almost Certain
Impact (I): 1 = Negligible, 2 = Minor, 3 = Moderate, 4 = Major, 5 = Critical
Risk Score = L x I

| Score | Level | Action |
|-------|-------|--------|
| 1-5 | Low | Accept with monitoring |
| 6-12 | Medium | Mitigate within 30 days |
| 13-19 | High | Mitigate within 7 days |
| 20-25 | Critical | Block deployment until resolved |
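The scoring bands translate directly into code. A minimal sketch (type and function names are illustrative, not from any library):

```typescript
type RiskBand = {
  score: number;
  level: "Low" | "Medium" | "High" | "Critical";
  action: string;
};

// Encodes the matrix above: multiply likelihood by impact, then map to a band.
function scoreRisk(likelihood: number, impact: number): RiskBand {
  const score = likelihood * impact; // Risk Score = L x I
  if (score <= 5) return { score, level: "Low", action: "Accept with monitoring" };
  if (score <= 12) return { score, level: "Medium", action: "Mitigate within 30 days" };
  if (score <= 19) return { score, level: "High", action: "Mitigate within 7 days" };
  return { score, level: "Critical", action: "Block deployment until resolved" };
}
```

For example, a threat with likelihood 4 and impact 4 scores 16, landing in the High band with a 7-day mitigation window.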

2.2 Common AI Threat Catalog

| Threat | Likelihood | Impact | Score | Mitigation |
|--------|------------|--------|-------|------------|
| API key leaked in source code | 3 | 5 | 15 | Secrets scanning in CI/CD |
| Prompt injection via user input | 4 | 4 | 16 | Input validation + guardrails |
| Model hallucination in critical path | 4 | 3 | 12 | Human-in-the-loop review |
| Excessive token spend from abuse | 3 | 3 | 9 | Rate limiting + budget alerts |
| PII exposure in logs | 3 | 5 | 15 | Log redaction pipeline |
| Vendor outage blocking core flow | 3 | 4 | 12 | Fallback and circuit breaker |
| Unauthorized model access | 2 | 5 | 10 | RBAC + audit logging |

2.3 Risk Register Template

Maintain a living risk register document. Review it monthly with stakeholders:

Risk ID: RISK-AI-001
Title: API Key Exposure
Description: API keys could be committed to public or internal repos
Category: Credential Management
Current Controls: Pre-commit hooks, CI secret scanning
Residual Risk: Medium (score 6)
Risk Owner: DevSecOps Lead
Review Date: Quarterly
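If the register is tracked in code rather than a document, the template above maps naturally to a typed record. A sketch (field names mirror the template; nothing here is a standard schema):

```typescript
// Illustrative shape for one risk-register entry.
interface RiskRegisterEntry {
  riskId: string; // e.g. "RISK-AI-001"
  title: string;
  description: string;
  category: string;
  currentControls: string[];
  residualRisk: { level: "Low" | "Medium" | "High" | "Critical"; score: number };
  riskOwner: string;
  reviewCadence: "Monthly" | "Quarterly" | "Annually";
}

const apiKeyExposure: RiskRegisterEntry = {
  riskId: "RISK-AI-001",
  title: "API Key Exposure",
  description: "API keys could be committed to public or internal repos",
  category: "Credential Management",
  currentControls: ["Pre-commit hooks", "CI secret scanning"],
  residualRisk: { level: "Medium", score: 6 },
  riskOwner: "DevSecOps Lead",
  reviewCadence: "Quarterly",
};
```

Typed entries make it trivial to generate review reminders or dashboards from the register.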

3. Input / Output Security

3.1 Input Security

Every piece of data sent to the AI model is a potential attack vector.

Sanitization pipeline:

TypeScript
function sanitizePrompt(raw: string): string {
  // 1. Strip control characters
  let cleaned = raw.replace(/[\u0000-\u001F\u007F-\u009F]/g, "");

  // 2. Enforce maximum length
  const MAX_PROMPT_LENGTH = 10_000;
  cleaned = cleaned.slice(0, MAX_PROMPT_LENGTH);

  // 3. Redact known PII patterns (e.g., SSN, credit card)
  cleaned = cleaned.replace(/\d{3}-\d{2}-\d{4}/g, "[REDACTED-SSN]");
  cleaned = cleaned.replace(/\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}/g, "[REDACTED-CC]");

  // 4. Check for prompt injection patterns
  if (detectsInjection(cleaned)) {
    throw new SecurityError("Potential prompt injection detected");
  }

  return cleaned;
}

Key principles:

  • Never trust user input — always validate and sanitize.
  • Enforce strict character and length limits.
  • Redact PII before it reaches the API.
  • Log rejected inputs for security review.
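The `detectsInjection` helper used in the pipeline above is not shown. A naive pattern-based sketch might look like the following; the phrase list is illustrative, easy to bypass, and should be treated as one layer alongside model-based classifiers, not a complete defense:

```typescript
// Hypothetical heuristic: flags common prompt-injection phrasings.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all|any|previous) instructions/i,
  /disregard (the|your) system prompt/i,
  /you are now in (developer|unrestricted) mode/i,
  /reveal (your|the) (system prompt|instructions)/i,
];

function detectsInjection(text: string): boolean {
  return INJECTION_PATTERNS.some((pattern) => pattern.test(text));
}
```

Rejected inputs should be logged (per the principle above) so the pattern list can be tuned against real attack attempts.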

3.2 Output Security

AI-generated content must be treated as untrusted by default.

TypeScript
function sanitizeOutput(modelResponse: string): string {
  // 1. Strip any HTML/script tags to prevent XSS
  let safe = modelResponse.replace(/<script[^>]*>[\s\S]*?<\/script>/gi, "");
  safe = safe.replace(/<[^>]+>/g, "");

  // 2. Validate structured output against schema
  if (expectingJSON) {
    const parsed = JSON.parse(safe);
    validateSchema(parsed, expectedSchema);
  }

  // 3. Check for sensitive data leakage
  if (containsSensitivePatterns(safe)) {
    auditLog.warn("Model output contained sensitive patterns");
    safe = redactSensitive(safe);
  }

  return safe;
}
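The `containsSensitivePatterns` and `redactSensitive` helpers are referenced but not defined above. A sketch of one possible implementation (the pattern list is illustrative and should be extended for your data):

```typescript
// Hypothetical leakage filters: scan model output for values that should
// never leave the service, and mask them if found.
const SENSITIVE_PATTERNS: Array<{ name: string; re: RegExp }> = [
  { name: "api-key", re: /sk-[A-Za-z0-9_-]{16,}/g },
  { name: "ssn", re: /\b\d{3}-\d{2}-\d{4}\b/g },
];

function containsSensitivePatterns(text: string): boolean {
  // Fresh RegExp per check avoids the stateful lastIndex of /g patterns
  return SENSITIVE_PATTERNS.some(({ re }) => new RegExp(re.source, re.flags).test(text));
}

function redactSensitive(text: string): string {
  return SENSITIVE_PATTERNS.reduce(
    (acc, { name, re }) => acc.replace(re, `[REDACTED-${name}]`),
    text,
  );
}
```

Keeping detection and redaction over the same pattern list guarantees that anything flagged is also masked.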

4. Access Control Architecture

4.1 RBAC Model for AI Platforms

Role                 Permissions
─────────────────────────────────────────────────────
AI Admin             Full access: keys, models, config, logs
AI Developer         Create/test prompts, view own usage
AI Reviewer          Read-only: review outputs, audit logs
AI Consumer (App)    Invoke model via service account only
Billing Admin        View cost reports, set budgets
Security Auditor     Read all logs, export compliance reports
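A role model like this is typically enforced with a deny-by-default permission check. A minimal sketch, with role and permission names invented for illustration:

```typescript
// Illustrative role-to-permission map mirroring the table above.
// "resource:*" grants every action on that resource.
const ROLE_PERMISSIONS: Record<string, Set<string>> = {
  "ai-admin": new Set(["keys:*", "models:*", "config:*", "logs:read"]),
  "ai-developer": new Set(["prompts:create", "prompts:test", "usage:read-own"]),
  "ai-reviewer": new Set(["outputs:read", "logs:read"]),
  "ai-consumer": new Set(["model:invoke"]),
};

function canPerform(role: string, permission: string): boolean {
  const perms = ROLE_PERMISSIONS[role];
  if (!perms) return false; // Unknown roles get nothing: deny by default
  const [resource] = permission.split(":");
  return perms.has(permission) || perms.has(`${resource}:*`);
}
```

The key property is the default: an unrecognized role or permission resolves to "denied", never "allowed".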

4.2 Service Account Best Practices

  • One service account per application or microservice.
  • Rotate credentials every 90 days (or less).
  • Bind service accounts to specific IP ranges or VPC endpoints.
  • Monitor for anomalous usage patterns per account.
  • Immediately revoke compromised credentials.
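Per-account anomaly monitoring can start very simply: compare today's usage against the account's recent baseline. A naive sketch (the 3x multiplier is an illustrative threshold, not a recommendation):

```typescript
// Flag a service account whose usage today far exceeds its recent daily average.
function isAnomalous(
  recentDailyTokens: number[],
  todayTokens: number,
  multiplier = 3,
): boolean {
  if (recentDailyTokens.length === 0) return false; // No baseline yet — nothing to compare
  const baseline =
    recentDailyTokens.reduce((sum, n) => sum + n, 0) / recentDailyTokens.length;
  return todayTokens > baseline * multiplier;
}
```

An alert from a check like this is often the first signal of a leaked credential, feeding directly into the incident response process in Section 8.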

5. API Key Lifecycle Management

5.1 The Complete Lifecycle

Phase          Actions
──────────────────────────────────────────────
Generation     Create key in Anthropic dashboard or API
Storage        Store in secrets manager, never in code
Distribution   Inject via environment variables at runtime
Rotation       Rotate every 60-90 days; automate if possible
Monitoring     Track usage per key; alert on anomalies
Revocation     Immediately revoke on compromise or personnel change
Destruction    Remove from all systems after revocation

5.2 Automated Key Rotation

TypeScript
async function rotateApiKey(currentKeyId: string): Promise<void> {
  // 1. Generate new key
  const newKey = await anthropic.admin.apiKeys.create({
    name: `production-${Date.now()}`,
    workspace_id: WORKSPACE_ID,
  });

  // 2. Update secrets manager
  await secretsManager.putSecret("ANTHROPIC_API_KEY", newKey.key);

  // 3. Verify new key works
  await anthropic.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 10,
    messages: [{ role: "user", content: "ping" }],
  });

  // 4. Revoke old key
  await anthropic.admin.apiKeys.disable(currentKeyId);

  // 5. Log rotation event
  auditLog.info("API key rotated", { oldKeyId: currentKeyId, newKeyId: newKey.id });
}

6. Network Security

6.1 Defense in Depth

Layer your network security controls:

| Layer | Control | Purpose |
|-------|---------|---------|
| Edge | WAF + DDoS protection | Block malicious traffic before it reaches your stack |
| Gateway | API gateway with rate limiting | Throttle requests and enforce authentication |
| Transport | TLS 1.2+ with certificate pinning | Prevent man-in-the-middle attacks |
| Application | Input validation + output sanitization | Prevent injection and data leakage |
| Internal | Private subnets + security groups | Restrict lateral movement |
| Monitoring | IDS/IPS + anomaly detection | Detect and respond to threats |
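The gateway layer's rate limiting is commonly implemented as a token bucket. A minimal in-process sketch (capacity and refill rate are illustrative; production gateways use distributed state):

```typescript
// Minimal token-bucket rate limiter: requests spend tokens, tokens refill over time.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  tryAcquire(cost = 1): boolean {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    // Refill proportionally to elapsed time, never above capacity
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens < cost) return false; // Over the limit: reject (HTTP 429 at a gateway)
    this.tokens -= cost;
    return true;
  }
}
```

A bucket per API key or per service account turns the "anomalous usage" checklist items into hard limits rather than after-the-fact alerts.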

6.2 Firewall Rules

# Allow outbound HTTPS to Anthropic API only
iptables -A OUTPUT -p tcp --dport 443 -d api.anthropic.com -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j DROP

# Allow inbound only from your application load balancer
iptables -A INPUT -p tcp --dport 8080 -s 10.0.0.0/16 -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP

7. Logging & Audit Trails

7.1 What to Log

| Event | Required Fields | Retention |
|-------|-----------------|-----------|
| API request sent | Timestamp, user ID, model, token count, request hash | 1 year |
| API response received | Timestamp, status code, latency, token count | 1 year |
| Authentication event | Timestamp, user/service, result, IP address | 2 years |
| Key rotation | Timestamp, old key ID, new key ID, operator | 2 years |
| Policy violation | Timestamp, type, input hash, action taken | 3 years |
| Configuration change | Timestamp, field changed, old/new value, operator | 2 years |

7.2 Structured Logging Example

TypeScript
const auditEntry = {
  timestamp: new Date().toISOString(),
  eventType: "ai.api.request",
  userId: context.userId,
  serviceAccount: context.serviceAccount,
  model: "claude-sonnet-4-20250514",
  inputTokens: usage.input_tokens,
  outputTokens: usage.output_tokens,
  latencyMs: endTime - startTime,
  status: "success",
  inputHash: sha256(prompt), // Hash, never raw content
  ipAddress: context.clientIP,
  traceId: context.traceId,
};

logger.info(auditEntry);

Critical rule: Never log raw prompts or responses in production. Log hashes for traceability, store raw content only in encrypted, access-controlled storage with automatic expiration.
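The `inputHash` field can be produced with Node's built-in crypto module; a sketch:

```typescript
import { createHash } from "node:crypto";

// Hash prompts for traceability so raw content never enters the log stream.
// The same input always yields the same hash, so entries remain correlatable.
function sha256(content: string): string {
  return createHash("sha256").update(content, "utf8").digest("hex");
}
```

Because the hash is deterministic, investigators can still answer "did this exact prompt appear before?" without ever storing the prompt itself in logs.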


8. Incident Response

8.1 AI-Specific Incident Categories

| Category | Severity | Example |
|----------|----------|---------|
| Credential Compromise | P1 - Critical | API key posted on public GitHub |
| Data Leakage | P1 - Critical | PII exposed in model output served to wrong user |
| Prompt Injection Attack | P2 - High | User bypasses guardrails to extract system prompt |
| Service Abuse | P2 - High | Token spend spike indicating unauthorized usage |
| Model Misbehavior | P3 - Medium | Consistent hallucinations in a critical workflow |
| Vendor Outage | P3 - Medium | Anthropic API returns 5xx for extended period |

8.2 Incident Response Runbook

STEP 1 — DETECT
→ Automated alert fires (PagerDuty, Opsgenie, etc.)
→ On-call engineer acknowledges within 15 minutes

STEP 2 — TRIAGE
→ Classify severity (P1 / P2 / P3)
→ Identify blast radius (which users, services affected)
→ Open incident channel and page relevant teams

STEP 3 — CONTAIN
→ P1: Immediately revoke compromised keys
→ P1: Enable kill switch to disable AI features
→ P2: Apply rate limiting or block offending IPs
→ P3: Route traffic to fallback logic

STEP 4 — ERADICATE
→ Root cause analysis
→ Patch vulnerability or misconfiguration
→ Rotate all potentially affected credentials

STEP 5 — RECOVER
→ Gradually re-enable AI features
→ Monitor for recurrence
→ Verify no data was exfiltrated

STEP 6 — POST-MORTEM
→ Document timeline, impact, root cause
→ Update risk register
→ Implement preventive controls
→ Share learnings with organization
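The P1 "kill switch" in the containment step is usually just a flag checked on every AI request path, so features can be disabled instantly without a deploy. A minimal sketch (the flag store, feature names, and `callModel` stub are all illustrative; real systems back this with a shared feature-flag service):

```typescript
// Hypothetical kill switch consulted before every model call.
class KillSwitch {
  private disabled = new Set<string>();

  disable(feature: string): void { this.disabled.add(feature); }
  enable(feature: string): void { this.disabled.delete(feature); }
  isEnabled(feature: string): boolean { return !this.disabled.has(feature); }
}

const aiFeatures = new KillSwitch();

// Stub standing in for the real model invocation.
const callModel = (prompt: string): string => `[model response to: ${prompt}]`;

function handleChatRequest(prompt: string): string {
  if (!aiFeatures.isEnabled("ai-chat")) {
    return "AI assistance is temporarily unavailable."; // Fallback path, no model call
  }
  return callModel(prompt);
}
```

Flipping the flag during a P1 incident degrades gracefully to the fallback message, satisfying checklist item 25 (disable AI features without downtime).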

9. Vendor Risk Management

9.1 Vendor Assessment Checklist

| Area | Question | Evidence Required |
|------|----------|-------------------|
| Data Handling | Does the vendor store request/response data? | Privacy policy, DPA |
| Certifications | SOC 2 Type II, ISO 27001? | Audit reports |
| Data Residency | Where is data processed and stored? | Infrastructure documentation |
| Incident Response | What is the vendor SLA for security incidents? | SLA agreement |
| Subprocessors | Who are the vendor's subprocessors? | Subprocessor list |
| Encryption | What encryption standards are used? | Technical documentation |
| Business Continuity | What is the vendor's disaster recovery plan? | BC/DR documentation |

9.2 Contractual Requirements

Ensure your vendor agreement includes:

  • Data Processing Agreement (DPA) with clear data handling terms.
  • Zero Data Retention clause if required by your policies.
  • Breach notification requirements (e.g., within 72 hours).
  • Right to audit the vendor's security controls.
  • Data deletion on contract termination.
  • Liability caps and indemnification for security breaches.

10. Compliance Mapping

10.1 GDPR (General Data Protection Regulation)

| GDPR Requirement | AI Implementation |
|------------------|-------------------|
| Lawful basis for processing | Document legal basis for sending user data to AI provider |
| Data minimization | Send only necessary data; redact PII where possible |
| Right to erasure | Ensure vendor supports deletion; enable ZDR |
| Data portability | Export AI interaction logs in machine-readable format |
| Privacy by design | Build PII detection into the data pipeline |
| DPIA required | Conduct Data Protection Impact Assessment for AI features |
| Cross-border transfers | Use inference_geo to control processing location |

10.2 CCPA (California Consumer Privacy Act)

| CCPA Requirement | AI Implementation |
|------------------|-------------------|
| Right to know | Disclose what data is sent to AI providers |
| Right to delete | Process deletion requests across AI vendor data |
| Right to opt-out | Provide opt-out mechanism for AI-processed features |
| Non-discrimination | Ensure AI features work equally for opt-out users |
| Service provider agreement | Execute CCPA-compliant DPA with AI vendor |

10.3 HIPAA (Health Insurance Portability and Accountability Act)

| HIPAA Requirement | AI Implementation |
|-------------------|-------------------|
| BAA required | Sign Business Associate Agreement with AI vendor |
| PHI safeguards | Never send raw PHI to AI; de-identify first |
| Access controls | RBAC for all systems handling health data + AI |
| Audit controls | Log every AI interaction involving health workflows |
| Transmission security | TLS 1.2+ for all AI API communication |
| Breach notification | Include AI systems in breach notification procedures |

10.4 Compliance Readiness Scorecard

Framework   Ready?   Score   Notes
──────────────────────────────────────────────
GDPR        [ ]      __/7    DPA signed, DPIA completed?
CCPA        [ ]      __/5    Opt-out mechanism in place?
HIPAA       [ ]      __/6    BAA signed, PHI de-identified?
SOC 2       [ ]      __/5    Audit logs, access controls?
ISO 27001   [ ]      __/4    ISMS updated for AI scope?

Summary

Enterprise AI security is not a one-time exercise — it is a continuous process of assessment, hardening, monitoring, and improvement. Use this guide as your living reference:

  1. Run the pre-deployment checklist before every production launch.
  2. Score every risk with the risk assessment framework.
  3. Validate input and output at every boundary.
  4. Enforce least-privilege access with RBAC and service accounts.
  5. Automate key rotation and monitor the full lifecycle.
  6. Harden your network with defense in depth.
  7. Log everything — but never log raw sensitive data.
  8. Prepare for incidents with documented runbooks.
  9. Assess your vendor regularly and maintain contractual protections.
  10. Map every compliance requirement and maintain audit-ready evidence.