IAM 2.0 Playbook
Strategic Patterns, Implementation Guidance, and Maturity-Aligned Roadmaps for Enterprise Identity Programs
Execution Companion to the IAM 2.0 Reference Architecture
TechVision Research
January 2026
How to Use This Playbook
This Playbook is the execution guide for the IAM 2.0 Reference Architecture. It provides:
- Maturity Model (Levels 1–5) with progression criteria and capability wiring
- 3 High-Value Patterns (Event-Driven Automation, Identity Graph, Continuous Risk-Driven Certification) with maturity-specific implementations
- Phase-by-Phase Implementation Roadmaps for Level 2→3 and Level 3→4 transitions (timelines, activities, success criteria, risks)
- Pattern Selection Framework to choose between patterns based on organizational context, pain points, and capabilities
- Tool Stacks & Vendor Guidance for each pattern
Use This Document If You:
- Are assessing your current IAM maturity (where are we on Levels 1–5?)
- Need a phased roadmap to progress from Level 2→3 or Level 3→4
- Are selecting between architectural patterns (which solves our pain point fastest?)
- Want to understand timelines, investment levels, and success criteria for each pattern
- Need tool recommendations and vendor evaluation guidance
Companion Documents:
IAM 2.0 Reference Architecture (Foundational)
- For vision, principles, and core concepts
- For detailed RA layer descriptions (Interact, Access, Change, Repositories, Analytics, Manage)
- For deployment pattern options (on-prem, hybrid, cloud-native)
IAM 2.0 Maturity Model (Assessment Framework)
- Where your IAM program stands
- What’s required to reach your target state
Playbook Overview
Purpose
This playbook bridges the IAM 2.0 Maturity Model and Reference Architecture layers to provide actionable, pattern-based guidance for identity program evolution. It combines strategic architectural patterns with maturity-level-specific implementations, enabling organizations to make deliberate technology and process choices at each progression level.
What This Playbook Delivers
3 High-Value Patterns: Event-Driven Identity Automation, Identity Graph & Entitlement Synthesis, and Continuous Risk-Driven Certification—each mapped to maturity progression.
Explicit Maturity Wiring: Which patterns activate at which maturity levels, with capability family connections.
Phased Implementation Playbooks: Step-by-step guides for Level 2→3 and Level 3→4 transitions with timeline, tooling, and success criteria.
Decision Framework: How to choose between patterns, tools, and architectural approaches based on organizational context.
How to Use This Playbook
- Assess Current State: Determine your organization’s current IAM maturity level (Levels 1–5) using the IAM 2.0 Maturity Model below
- Define Target State: Choose your target maturity level (typically Level 3 within 18–24 months, Level 4 within 24–36 months)
- Select Patterns: Review the High-Value Patterns section to understand which patterns enable your target level
- Execute Playbook: Follow the phase-specific playbook (Level 2→3 or Level 3→4) with step-by-step guidance, timelines, and success criteria
- Validate Progress: Use success criteria and maturity checkpoints to track advancement
IAM 2.0 Maturity Model Summary
Level 1: Initial
- Identity processes exist but are largely manual and undocumented
- No formal policies or governance; ad hoc decisions
- Minimal automation; reactive incident response
- Limited visibility into entitlements or access trends
- Maturity indicator: Joiner SLA 5–7 days; no access review process
Level 2: Basic
- Core IAM platforms deployed (IGA, IdP, directory)
- Manual processes starting to shift to automated workflows
- Basic provisioning for new hires; manual for movers/leavers
- Annual access reviews; manual evidence collection
- Deployment typically on-prem or hybrid
- Maturity indicator: Joiner SLA 48–72 hours; quarterly reviews at 50–75% completion
Level 3: Managed ← Typical target for mid-market, 18–24 months
- Event-driven automation for HR joiner/mover/leaver
- Fully automated joiner SLA <24 hours
- Identity graph with 95%+ entitlement visibility
- Quarterly access reviews 100% completion; SoD enforced
- Cloud IAM (AWS, Azure) governance started
- Maturity indicator: Joiner <24h, SoD enforced, quarterly reviews automated
Level 4: Advanced ← Target for large enterprise, 24–36 months
- Real-time event-driven provisioning; <1-hour joiner SLA
- Risk-driven continuous certification (weekly/monthly/quarterly by risk tier)
- 70%+ AI-powered recommendations with <2% false positives
- Full identity graph across on-prem, cloud, and SaaS
- Cloud entitlements governance (CIEM) mature
- Maturity indicator: Joiner <1h, 70%+ AI recommendations, zero standing privilege
Level 5: Optimized
- Fully autonomous operations with human oversight
- <1-hour anomaly detection and incident containment
- Predictive identity provisioning and risk forecasting
- Self-optimizing policies and automated remediation
- Maturity indicator: <1% SLA misses, <5min incident response, zero manual overhead
Pattern Selection Quick Reference
| Pattern | Primary Maturity Level | Key Benefit | Core RA Components | Timeline |
| Event-Driven Identity Automation | 2 → 3 → 4 | Real-time identity sync across all systems; eliminate batch delays | Interact (Event Listener), Change (Provisioning), Repositories | 6–9 months (L2→L3) |
| Identity Graph & Entitlement Synthesis | 3 → 4 | Unified view of all identities and permissions; role mining & conflict detection | Repositories (Graph), Analytics (Analysis), Change (Enforcement) | 8–12 months (L3→L4) |
| Continuous Risk-Driven Certification | 3 → 4 → 5 | Shift from annual reviews to continuous, AI-scored certification; reduce risk velocity | Analytics (Risk Scoring), Change (Workflows), Manage (Audit) | 9–12 months (L3→L4) |
Table 1: Pattern Selection Quick Reference
3 High-Value Architectural Patterns
Pattern 1: Event-Driven Identity Automation
Maturity Levels
Level 2 Foundation | Level 3 Enabler | Level 4 Core
Problem Solved
Batch-based identity sync (hourly, daily) creates SLA misses, security gaps, and cascading manual work. Manual joiner/mover/leaver processes slow onboarding and create orphan accounts.
Pattern Definition
Event Stream Architecture: HR system (Workday, SuccessFactors) publishes identity events (joiner, mover, termination) to a durable event broker (Kafka, AWS EventBridge, Azure Event Grid). IGA platform and downstream systems subscribe to these events in real time, triggering provisioning, directory updates, and app access changes within seconds to minutes.
Maturity Progression
- Level 2: Trigger manual provisioning tasks from HR events; basic logging
- Level 3: Fully automated provisioning via HR event API; 24-hour joiner SLA
- Level 4: Real-time event mesh with orchestration; <1-hour joiner SLA; anomaly detection triggers
- Level 5: Predictive event generation (terminate alert before resignation); autonomous self-healing
Reference Architecture Alignment
See IAM 2.0 Reference Architecture for detailed layer descriptions.
- Interact Layer: Event Listener, Event Transformation, APIs
- Change Layer: Provisioning Orchestration, Workflow Engine
- Repositories: Identity Correlation, HR Graph
Technology Stack
Producers: Workday, SAP SuccessFactors, BambooHR
Event Broker: Apache Kafka, AWS EventBridge, Azure Event Grid, RabbitMQ
Consumers: SailPoint, Okta, Saviynt, custom Lambda/Functions
Orchestration: HashiCorp Consul, Apache NiFi, MuleSoft
Flow Diagram
HR System (Event Producer)
↓ (Joiner: {"emp_id": "123", "start_date": "2025-01-15"})
Event Broker (Kafka Topic: "identity.lifecycle.joiner")
├→ IGA Platform (provision account)
├→ Directory Service (create AD user)
├→ Email System (create mailbox)
└→ App Provisioning Engine (SSO + SaaS apps)
↓
Joiner SSO-ready in <1 hour (vs. 48–72 hours manual)
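The fan-out above can be sketched with an in-memory stand-in for the broker (a minimal sketch; the `EventBroker` class and consumer handlers are illustrative, and a real deployment would use Kafka or EventBridge with durable topics):

```python
from collections import defaultdict

class EventBroker:
    """Minimal in-memory stand-in for Kafka/EventBridge: topic -> subscriber list."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fan the event out to every subscribed consumer (IGA, directory, email, apps).
        for handler in self.subscribers[topic]:
            handler(event)

broker = EventBroker()
provisioned = []

# Each downstream system registers a consumer on the joiner topic.
broker.subscribe("identity.lifecycle.joiner", lambda e: provisioned.append(("iga", e["emp_id"])))
broker.subscribe("identity.lifecycle.joiner", lambda e: provisioned.append(("ad", e["emp_id"])))
broker.subscribe("identity.lifecycle.joiner", lambda e: provisioned.append(("mailbox", e["emp_id"])))

# HR system publishes the joiner event from the diagram.
broker.publish("identity.lifecycle.joiner", {"emp_id": "123", "start_date": "2025-01-15"})
```

Because consumers are decoupled from the producer, adding a new downstream system is a new subscription, not a change to the HR integration.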
Implementation Challenges & Solutions
| Challenge | Solution |
| HR system doesn’t publish events natively | Implement polling adapter or custom webhook; transform into standard event schema |
| Event loss/duplication during broker failure | Use persistent event broker with at-least-once delivery; implement idempotent consumers |
| Downstream app can’t keep up with event velocity | Queue events in IGA; execute provisioning with backpressure control; retry with exponential backoff |
| Data quality issues in HR (bad email format, duplicate employee IDs) | Implement event validation and transformation layer; quarantine bad events; alert data stewards |
Table 2: Event-Driven Pattern Implementation Challenges & Solutions
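Two of the mitigations in Table 2 — event validation with quarantine, and idempotent consumers under at-least-once delivery — can be sketched together (field names, the email rule, and the in-memory dedupe store are illustrative; production systems would back `processed_ids` with a durable cache or database):

```python
import re

quarantine = []        # malformed events routed here for data stewards (per Table 2)
processed_ids = set()  # dedupe store; durable in production, in-memory here
accounts = []

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def handle_joiner(event):
    """Idempotent consumer: safe to receive the same event twice."""
    # Validation layer: quarantine bad data instead of provisioning it.
    if not EMAIL_RE.match(event.get("email", "")):
        quarantine.append(event)
        return
    # Idempotency: skip event IDs that were already processed.
    if event["event_id"] in processed_ids:
        return
    processed_ids.add(event["event_id"])
    accounts.append(event["emp_id"])

handle_joiner({"event_id": "e1", "emp_id": "123", "email": "a@corp.example"})
handle_joiner({"event_id": "e1", "emp_id": "123", "email": "a@corp.example"})  # broker redelivery
handle_joiner({"event_id": "e2", "emp_id": "456", "email": "not-an-email"})    # quarantined
```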
Key Metrics
- Joiner SLA achievement (>95%)
- Event-to-action latency (<30min)
- Orphan account reduction (>95%)
- Manual escalation rate (<5%)
Success Indicators (Level 3)
✓ HR events published & consumed in real-time (no batching)
✓ Joiner SSO-ready within 24 hours, SLA 95%+
✓ Mover role changes within 4 hours; leaver deprovision <1 hour
✓ Event audit trail complete & searchable
✓ <5% manual intervention rate
Pattern 2: Identity Graph & Entitlement Synthesis
Maturity Levels
Level 3 Enabler | Level 4 Core
Problem Solved
Siloed systems (directory, HR, IGA, cloud IAM, PAM) create blind spots: entitlements scattered across systems, orphan accounts undetected, conflict-of-duty violations missed, and role mining impossible without manual analysis. SoD rules exist but aren’t enforced; access reviews are incomplete because not all data is visible.
Pattern Definition
Unified Identity Graph: Aggregate all identity data (users, roles, entitlements, devices, resources, group memberships, permissions) into a graph database (Neo4j, Amazon Neptune, or cloud-native equivalent). Synthesize logical entitlements by traversing the graph, auto-detect SoD violations, identify unused permissions, and enable AI-driven role mining. Serve as the single source of truth for all downstream governance, policy, and analytics.
Maturity Progression
- Level 3: Aggregate AD, HR, IGA into basic data warehouse; manual SoD rule enforcement; 80% entitlement visibility
- Level 4: Full identity graph with real-time updates; auto-detect SoD violations; cloud IAM integrated; 99% entitlement visibility
- Level 5: AI-driven auto-remediation; predictive role evolution; self-optimizing role definitions
Reference Architecture Alignment
See IAM 2.0 Reference Architecture for detailed layer descriptions.
- Repositories: Identity Graph, Data Correlation, SoD Repository
- Analytics: Role Mining, Usage Analytics, Entitlement Analysis
- Change: SoD Enforcement, Conflict Resolution
Technology Stack
Graph Database: Neo4j (on-prem), Amazon Neptune (AWS), ArangoDB
Data Aggregation: SailPoint IdentityIQ, Okta Identity Governance, Saviynt
Role Mining: SailPoint Role Miner, Saviynt Joiner Booster, built-in IGA tools
Analytics: Exabeam, Splunk, Tableau (connect to graph database)
Data Model (Example)
Nodes: User, Role, Application, Permission, Device, Group, Department
Relationships:
User –[has_role]–> Role
User –[member_of]–> Group
Role –[contains]–> Permission
Permission –[grants_access_to]–> Application
User –[has_device]–> Device
Device –[can_access]–> Resource
Permission –[conflicts_with]–> Permission (SoD)
User –[reports_to]–> Manager
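The SoD-detection traversal over this model can be sketched with plain adjacency maps (role, permission, and user names are illustrative; a graph database such as Neo4j or Neptune would express the same traversal as a path query):

```python
# Adjacency-list sketch of the data model above.
has_role = {"alice": ["ap_clerk", "ap_approver"]}
contains = {"ap_clerk": ["create_invoice"], "ap_approver": ["approve_invoice"]}
conflicts_with = {frozenset({"create_invoice", "approve_invoice"})}  # SoD rule

def effective_permissions(user):
    """Traverse User -[has_role]-> Role -[contains]-> Permission."""
    return {p for role in has_role.get(user, []) for p in contains.get(role, [])}

def sod_violations(user):
    """Flag any conflicting permission pair reachable by the same user."""
    perms = effective_permissions(user)
    return [pair for pair in conflicts_with if pair <= perms]

violations = sod_violations("alice")  # the clerk/approver pair conflicts
```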
Implementation Challenges & Solutions
| Challenge | Solution |
| Multiple identity formats create correlation issues | Implement deduplication algorithm; establish matching rules; clean up data first |
| Graph traversal queries are slow as entitlement depth grows | Pre-compute role expansion; cache computed entitlements; use query optimization |
| SoD rules conflict with business need | Implement exception approval workflow; audit-log all exceptions; require attestation |
| Legacy apps don’t report permissions accurately | Implement targeted discovery; use connector-based permission crawling |
Table 3: Identity Graph Pattern Implementation Challenges & Solutions
Key Metrics
- Entitlement visibility (>95%)
- SoD violation detection rate
- Role coverage (% of users mapped to business roles)
- Orphan account count
Success Indicators (Level 4)
✓ Identity graph >95% complete (all users, roles, apps, permissions)
✓ All SoD conflicts detected within 24 hours of role assignment
✓ Role mining algorithm identifies & recommends consolidation of redundant roles
✓ Cloud IAM (AWS, Azure, GCP) permissions fully visible & correlated
✓ Access reviews auto-populated with complete entitlement snapshots
Pattern 3: Continuous Risk-Driven Certification
Maturity Levels
Level 3 Enabler | Level 4 Core | Level 5 Native
Problem Solved
Annual or quarterly access reviews are labor-intensive, slow to detect violations, and compliance-driven rather than risk-driven. By the time a review concludes, access has already changed. Managers rubber-stamp certifications. Exceptions accumulate. High-risk users (contractors, privileged admins, ex-employee transitions) get the same review frequency as low-risk employees.
Pattern Definition
Continuous Certification Engine: Shift from periodic, universal reviews to continuous, risk-scored micro-certifications. High-risk users (based on entitlements, location, peer comparison, behavior) are reviewed weekly; medium-risk monthly; low-risk quarterly. AI algorithms recommend certifications (“Approve”, “Revoke”, “Step-Up Review”) based on usage patterns, peer behavior, and policy. Approved low-risk recommendations auto-confirm; high-risk and suspicious access require manager attestation.
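The recommendation step can be sketched as a rule-based stand-in for the AI recommender described above (a minimal sketch; the thresholds and input signals are illustrative assumptions, and a production engine would use a trained model over usage, peer, and policy features):

```python
def recommend(risk_score, days_since_last_use, peer_has_entitlement):
    """Rule-based stand-in for the certification recommender.
    Thresholds are illustrative, not taken from the playbook."""
    if risk_score >= 20:
        return "Step-Up Review"      # high-risk access always gets human attention
    if days_since_last_use > 90 and not peer_has_entitlement:
        return "Revoke"              # unused entitlement that is also a peer-group outlier
    return "Approve"

decisions = [
    recommend(25, 10, True),    # high-risk admin
    recommend(5, 120, False),   # dormant, atypical entitlement
    recommend(5, 120, True),    # dormant but normal for the peer group
]
```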
Maturity Progression
- Level 3: Quarterly reviews, 100% manual, manager-driven via IGA workflows
- Level 4: Continuous risk-based reviews; weekly/monthly/quarterly by risk tier; 70%+ AI recommendations; <48h SLA
- Level 5: Fully autonomous; micro-certifications triggered by policy violations; <1h response; 99.9% exception-free
Reference Architecture Alignment
See IAM 2.0 Reference Architecture for detailed layer descriptions.
- Analytics: Risk Scoring Engine, UEBA, Peer Behavior Analysis
- Change: Continuous Certification Workflows, Auto-Remediation
- Manage: Audit Trail, Exception Tracking, SLA Monitoring
Technology Stack
Risk Scoring Engine: Exabeam, Splunk UEBA, CrowdStrike Falcon, custom ML pipeline
Certification Platform: SailPoint, Okta Identity Governance, Saviynt
Workflow Orchestration: ServiceNow, custom API + Lambda/Functions
Audit Trail: Splunk, ELK Stack, cloud-native logging
Risk Scoring Model (Example)
Risk Score = f(Entitlements, Usage, Behavior, Peer Comparison, Policy)
Entitlements: +5 per SoD conflict, +2 per privileged role, +1 per inactive app
Usage: -2 if all perms used weekly, +3 if 30%+ unused for 90 days
Behavior: +5 if impossible travel, +3 if off-hours access (for non-shift roles)
Peer Comparison: +4 if entitlements outlier vs. role peer group
Policy: +2 per policy violation (late review, unsecured device)
Risk Tier:
High Risk (Score 20+): Weekly review
Medium Risk (Score 10–19): Monthly review
Low Risk (Score <10): Quarterly review
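The example scoring model above translates directly into code (a minimal sketch; the function signature and input flags are illustrative, but the weights and tier thresholds follow the model as written):

```python
def risk_score(sod_conflicts, privileged_roles, inactive_apps,
               all_perms_used_weekly, pct_unused_90d,
               impossible_travel, off_hours_non_shift,
               peer_outlier, policy_violations):
    """Composite score using the example weights above."""
    score = 5 * sod_conflicts + 2 * privileged_roles + 1 * inactive_apps
    score += -2 if all_perms_used_weekly else (3 if pct_unused_90d >= 30 else 0)
    score += 5 if impossible_travel else 0
    score += 3 if off_hours_non_shift else 0
    score += 4 if peer_outlier else 0
    score += 2 * policy_violations
    return score

def risk_tier(score):
    if score >= 20:
        return "High"    # weekly review
    if score >= 10:
        return "Medium"  # monthly review
    return "Low"         # quarterly review

# 2 SoD conflicts, 1 privileged role, 40% unused perms, impossible travel,
# peer outlier, 1 policy violation: 10 + 2 + 3 + 5 + 4 + 2 = 26 -> High.
example = risk_score(2, 1, 0, False, 40, True, False, True, 1)
```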
Implementation Challenges & Solutions
| Challenge | Solution |
| Managers resist frequent reviews (perceived extra work) | Automate 70% of recommendations; show time savings; measure approval velocity |
| AI recommendations have high false-positive rate | Tune model with training data; require manual approval for riskiest users |
| Usage data is incomplete (some apps don’t report) | Implement app-side instrumentation; collect access logs via proxy servers |
| Legitimate high-risk access triggers false alerts | Implement context flags (contractor flag, temp access end date) |
Table 4: Continuous Certification Pattern Implementation Challenges & Solutions
Key Metrics
- Review completion SLA
- AI recommendation accuracy
- Exception rate
- False-positive rate
- Time-to-revoke for violations
Success Indicators (Level 4)
✓ 100% of high-risk users reviewed weekly; 100% of medium-risk monthly
✓ 70%+ of certifications AI-recommended; <2% AI false-positive rate
✓ Manager review time <5min per user (vs. 15–30min with no AI)
✓ SLA: <48 hours manager decision; <24h auto-revoke for critical violations
✓ Exception tracking & SLA (all exceptions must be audited & approved)
How Patterns Wire to Maturity Levels
Pattern Activation by Maturity Level
| Maturity Level | Active Patterns | Primary Capability Family Benefits | Key Success Criteria |
| Level 2 (Basic) | Event-Driven (foundational) | Identity Lifecycle Mgmt, Auth & Access, Entitlements (starter) | HR events published; manual joiner SLA <72h→48h |
| Level 3 (Managed) | Event-Driven (mature), Identity Graph (enabler), Continuous Cert (enabler) | All 7 capability families mature simultaneously | Joiner <24h, SoD enforced, quarterly reviews, 95% SLA |
| Level 4 (Advanced) | All 3 patterns (core) | Continuous Lifecycle, Proactive Risk, AI-Assisted Governance | Joiner <1h, 99% event-driven, continuous risk-scoring, 70%+ AI recommendations |
| Level 5 (Optimized) | All 3 patterns (autonomous) | Fully autonomous operations across all families | Zero SLA misses, <1h anomaly detection, <5min incident response |
Table 5: Pattern Activation by Maturity Level
Capability Family → Pattern Mapping
For detailed capability layer descriptions, see the IAM 2.0 Reference Architecture, Section 6: Architecture Layers and Capabilities Detailed.
Identity Lifecycle Management
Supported by: Event-Driven Identity Automation (core), Identity Graph (enhanced visibility)
- Level 2: HR events trigger manual provisioning tasks
- Level 3: Fully automated joiner/mover/leaver with <24h SLA
- Level 4+: Real-time <1h; predictive offboarding; self-healing
Authentication & Secure Access
Supported by: Event-Driven (re-provisioning on access change), Identity Graph (context-aware access)
- Level 2: Basic SSO; conditional access policies start
- Level 3: Enterprise SSO; 95% MFA; session management
- Level 4+: Passwordless default; continuous auth; risk-based step-up MFA
Authorization & Entitlements
Supported by: Identity Graph (complete visibility), Event-Driven (role changes), Continuous Cert (validation)
- Level 2: Basic role catalog; manual assignments
- Level 3: Business roles; SoD rules; role assignment automation
- Level 4+: ABAC; role mining; automated SoD conflict resolution
Access Governance & Compliance
Supported by: Continuous Risk-Driven Certification (core), Identity Graph (complete entitlement snapshot)
- Level 2: Annual manual reviews; 50% completion
- Level 3: Quarterly reviews; 100% completion; SoD monitoring
- Level 4+: Continuous risk-based reviews; 70%+ AI recommendations; <48h SLA
Privileged Access Management
Supported by: Identity Graph (PAM entitlements visible), Continuous Cert (PAM monitoring), Event-Driven (JIT approval triggers)
- Level 2: Manual elevation requests; basic logging
- Level 3: JIT approval workflow; 2–4h SLA; session recording
- Level 4+: Zero standing privilege; anomaly detection; auto-kill suspicious sessions
Non-Human Identity & IoT Governance
Supported by: Event-Driven (service account JML), Identity Graph (complete service account discovery)
- Level 2: Manual service account inventory; quarterly rotation
- Level 3: Automated rotation; service account registry; 90% coverage
- Level 4+: CIEM; auto-scope cloud IAM; agent governance; 100% coverage
Governance Model & Organization
Supported by: All 3 patterns (enable policy automation), Continuous Cert (policy-driven)
- Level 2: Basic policies; ad hoc decisions
- Level 3: Formal workflows; 95% policy compliance
- Level 4+: Policy-as-code; autonomous decisions; zero manual overhead
Level 2 → Level 3 Implementation Playbook
Overview
Timeline: 12–18 months
Investment: $500K–$2M
Pattern Focus: Event-Driven Automation (mature), Identity Graph (enabler)
Phase 1: Foundation & Assessment (Months 1–2)
Activity 1: Assess Current State
Map all identity systems (HR, AD, IGA, apps); identify integration gaps; document current joiner/mover/leaver SLAs (baseline: 48–72h for joiner).
Deliverables: Current-state architecture diagram, gap analysis, tool inventory, integration roadmap
Activity 2: Define Target Operating Model
Design future joiner/mover/leaver flows; define SLAs (Target: joiner <24h, mover <4h, leaver <1h); identify process changes needed.
Deliverables: Future-state process flows, SLA targets, org change plan
Activity 3: Secure Executive Sponsorship
Present business case (faster onboarding, lower risk, audit readiness); secure CISO/CIO approval and budget.
Deliverables: Signed charter, funding approval, executive steering committee established
Phase 2: HR-IAM Integration & Event Automation (Months 3–6)
Activity 1: Implement HR Event Publishing
Connect HR system (Workday, SAP SuccessFactors) to IGA. Enable joiner/mover/termination event publishing via API or webhook. Test event delivery reliability.
Technical: Workday event subscriptions configured; events flowing to event broker (Kafka, EventBridge) or IGA message queue
Success Metric: >99% event delivery; <5min event-to-system latency
Activity 2: Automate Joiner Provisioning
Configure IGA to trigger account creation in AD, email, and core SaaS apps (Office 365, Slack, etc.) upon joiner event. Implement approval workflow for manager oversight (required for Level 3).
Technical: IGA connectors for AD, Exchange, app SSO; workflow engine for manager approval (2-hour SLA)
Success Metric: Joiner SSO-ready within 24 hours in >95% of cases
Activity 3: Automate Mover Provisioning
When HR reports department or role change, IGA automatically updates AD groups, reassigns roles, and modifies app access. Require manager approval for SoD-critical moves.
Technical: IGA role lifecycle engine; SoD conflict detection in approval workflow
Success Metric: Mover changes within 4 hours of HR update
Activity 4: Automate Termination & Deprovision
On termination event, IGA immediately revokes all access (directory, apps, PAM, email). Implement grace period for data retrieval (1–7 days). Track deprovision completion.
Technical: IGA termination workflow; concurrent revocation across all systems; fallback/escalation for failures
Success Metric: 100% deprovisioning within 1 hour of termination event
Phase 3: Identity Repository & Data Quality (Months 4–8)
Activity 1: Establish Identity Deduplication Rules
Define how users are uniquely identified across HR, AD, and IGA (primary key: email or employee ID). Implement deduplication logic in IGA. Clean up orphan accounts.
Technical: IGA matching rules; data quality remediation; audit report
Success Metric: 100% of users uniquely correlated; <0.5% orphan accounts
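A minimal sketch of the matching rules, assuming employee ID is the primary key with normalized email as the fallback, and that the authoritative HR record is processed first to seed the email index (record shapes and source names are illustrative):

```python
def correlate(records):
    """Merge HR/AD/IGA accounts into one identity per person.
    Match on emp_id first, then on normalized email via the index
    seeded by earlier (authoritative) records."""
    by_email = {}    # normalized email -> emp_id
    identities = {}  # canonical key -> merged identity
    for rec in records:
        email = (rec.get("email") or "").strip().lower()
        key = rec.get("emp_id") or by_email.get(email) or email
        if rec.get("emp_id") and email:
            by_email[email] = rec["emp_id"]
        ident = identities.setdefault(key, {"sources": []})
        ident["sources"].append(rec["source"])
    return identities

records = [
    {"source": "hr",  "emp_id": "123", "email": "A.Smith@corp.example"},
    {"source": "ad",  "emp_id": "123", "email": "a.smith@corp.example"},
    {"source": "iga", "emp_id": None,  "email": "a.smith@corp.example"},
]
identities = correlate(records)
```

The IGA account has no employee ID, but the normalized email resolves it to the same canonical identity — the correlation outcome the success metric measures.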
Activity 2: Implement Identity Graph Foundation
Export identity data from AD, HR, and IGA to centralized repository (data lake, IGA analytics, or basic graph DB). Correlate users, roles, and permissions.
Technical: IGA reporting database; daily ETL sync; role aggregation
Success Metric: >95% entitlement visibility; ability to answer “who has what” in <5min
Activity 3: Establish Role Catalog
Create business-meaningful role definitions (Job Title + Department + Responsibilities = Role). Map current users to roles. Identify role conflicts and cleanup.
Technical: IGA role management; role mining tools; manual curator review
Success Metric: >80% of users assigned to a business role
Phase 4: Access Governance & Certification (Months 7–12)
Activity 1: Design Quarterly Access Review Process
Define review scope (all users, all apps), reviewer assignment (direct manager), review workflow (approve/deny/exception), and SLA (30 days to complete).
Deliverables: Access review SOP, manager communication plan, sample certification report
Activity 2: Automate Evidence Collection
IGA and app systems automatically collect entitlement snapshots for review. Enable usage reporting (active/inactive entitlements).
Technical: IGA analytics; app usage APIs; evidence database
Success Metric: 100% of users have complete entitlement snapshot <24h before review
Activity 3: Launch Q1 Access Review
Conduct first quarterly review with 100% manager participation. Track compliance, exceptions, and remediation SLA.
Success Metric: >95% manager completion within 30 days; <5% exception rate; 100% of denials executed within 48h
Phase 5: Privileged Access & Cloud Governance (Months 8–14)
Activity 1: Implement JIT Privileged Access
Deploy PAM platform (CyberArk, HashiCorp Vault, Delinea). Eliminate standing admin accounts. Implement request → approval → temporary credential → session recording flow. Target 2–4 hour approval SLA.
Technical: PAM platform; approval workflow integration; session recording indexing
Success Metric: 100% of admin access via JIT; zero standing privileged accounts; 100% session recording
Activity 2: Discover & Govern Cloud Entitlements
Deploy CIEM tool (Ermetic, Wiz, Lacework). Scan AWS, Azure, GCP for dangerous permissions. Create remediation backlog. Begin automated removal of unused permissions.
Technical: CIEM scanner; permission analysis engine; remediation workflow
Success Metric: 100% cloud entitlements discovered; 50%+ of unnecessary permissions flagged
Phase 6: Governance Model & Operations Maturity (Months 10–18)
Activity 1: Establish IAM Governance Structure
Appoint an Identity Owner; define role provisioning, exception approval, and review workflows with clear SLAs. Document policies (least privilege, separation of duties (SoD)).
Deliverables: IAM Policy document, governance charter, workflow definitions, SLA targets
Activity 2: Implement Exception Tracking & Audit
Formal exception approval workflow with business justification, expiration date, and periodic re-approval. 100% audit trail for all access changes.
Technical: IGA exception workflow; audit logging (Splunk, ELK); compliance dashboards
Success Metric: 100% exception tracking; zero unapproved exceptions; 100% audit trail completeness
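The exception workflow above — justification, approver, expiration date, and periodic re-approval — can be sketched as a small registry (a minimal sketch; the `ExceptionRegistry` class and field names are illustrative, and an IGA platform would persist and audit-log these entries):

```python
from datetime import date, timedelta

class ExceptionRegistry:
    """Every exception carries a justification, an approver, and an
    expiry; expired entries surface for re-approval."""
    def __init__(self):
        self.entries = []

    def grant(self, user, entitlement, justification, approver, days=90):
        self.entries.append({
            "user": user, "entitlement": entitlement,
            "justification": justification, "approver": approver,
            "expires": date.today() + timedelta(days=days),
        })

    def due_for_reapproval(self, on=None):
        on = on or date.today()
        return [e for e in self.entries if e["expires"] <= on]

registry = ExceptionRegistry()
registry.grant("alice", "prod_db_admin", "data migration", "ciso", days=0)
registry.grant("bob", "vpn_access", "contractor engagement", "ciso", days=90)
due = registry.due_for_reapproval()  # alice's expired exception surfaces
```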
Activity 3: Compliance Automation
Generate monthly/quarterly compliance reports (SOC 2, HIPAA, PCI, ISO 27001). Automate evidence collection. Enable on-demand auditor requests.
Technical: Compliance dashboard; automated report generation
Success Metric: Monthly compliance reports 100% automated; audit readiness <24h
Level 3 Success Criteria Checklist
Joiner/Mover/Leaver Automation:
- Joiner SSO-ready <24 hours; SLA >95%
- Mover role changes <4 hours; SLA >95%
- Leaver fully deprovisioned <1 hour; SLA >95%
- Event-to-action latency <30 min avg
- <5% manual intervention rate
Identity Repository:
- 95% entitlement visibility
- <0.5% orphan/stale accounts
- 100% user deduplication
- 80% users assigned to business roles
Access Governance:
- Quarterly reviews, 100% completion within 30 days
- 95% manager compliance with review SLA
- 100% entitlement snapshots collected pre-review
- All denials executed within 48 hours
- Audit evidence auto-generated & searchable
Privileged Access:
- Zero standing privileged accounts
- 100% admin access via JIT; approval SLA <4h
- 100% session recording & indexed
Cloud Governance:
- 100% cloud entitlements discovered
- 50%+ dangerous permissions identified
- Automated removal of unused permissions started
Governance Model:
- Named Identity Owner with authority & budget
- Formal IAM policies & workflows documented
- 100% exception tracking & approval
- 100% audit trail completeness
- Monthly/quarterly compliance reports automated
Level 3 Transition Risks & Mitigation
Risk: Data quality issues in HR/IGA derail automation
Mitigation: Implement data quality gates before automation; audit HR data; define remediation SLAs for bad records
Risk: Downstream app connectors fail, causing provisioning delays
Mitigation: Implement connector failure detection; escalate to IT Ops; define fallback manual process; test connector reliability
Risk: Manager resistance to frequent reviews/approvals
Mitigation: Show time savings with automation; measure adoption; communicate benefits; provide manager training
Risk: Compliance team requires historical data for audit (we don’t have 5 years of logs)
Mitigation: Document current audit-readiness; plan gradual log retention buildup; get auditor acceptance of baseline
Level 3 → Level 4 Implementation Playbook
Overview
Timeline: 18–30 months
Investment: $1M–$5M
Pattern Focus: All 3 (Event-Driven mature + Identity Graph core + Continuous Cert core)
Phase 1: Event-Driven Architecture Maturation (Months 1–4)
Activity 1: Deploy Event Mesh
Replace point-to-point HR→IGA→App integrations with event mesh (Kafka, AWS EventBridge, Azure Event Grid). Implement durable event storage; enable event replay.
Technical: Kafka cluster or EventBridge implementation; event schema registry (Avro/Protobuf); routing rules
Success Metric: All identity events flowing through mesh; <5min event-to-action latency; 99.99% uptime
Activity 2: Implement Real-Time Orchestration
Move from sequential provisioning (joiner waits for AD account → email account → SSO, one step after another) to parallel execution (all systems provisioned concurrently via event handlers). Target <1h joiner SLA.
Technical: Orchestration layer (HashiCorp Consul, MuleSoft, custom Lambda); concurrent provisioning handlers
Success Metric: >90% joiner provisioning completed in parallel; <1h SLA >95%
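The sequential-to-parallel shift can be sketched with a thread pool (a minimal sketch; `provision` is a hypothetical stand-in for a per-system connector call, with a short sleep simulating connector latency):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def provision(system, emp_id):
    """Stand-in for one connector call (AD, mailbox, SSO, SaaS)."""
    time.sleep(0.05)  # simulated connector latency
    return (system, emp_id, "ok")

systems = ["ad", "mailbox", "sso", "saas"]

# Sequential: total latency is the sum of every connector call.
start = time.perf_counter()
seq = [provision(s, "123") for s in systems]
seq_elapsed = time.perf_counter() - start

# Parallel: handlers run concurrently, so latency ~ the slowest connector.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    par = list(pool.map(lambda s: provision(s, "123"), systems))
par_elapsed = time.perf_counter() - start
```

Both paths provision the same systems; only the wall-clock latency changes, which is the lever behind the <1h joiner SLA.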
Activity 3: Enable Event-Driven Anomaly Triggers
Publish identity change events (role assignment, privilege escalation, SoD violation) to analytics/UEBA engines. Trigger security reviews immediately upon suspicious events.
Technical: Event routing to analytics/SIEM; real-time rule engine
Success Metric: <5min from event to anomaly detection; <30min from detection to security review
Phase 2: Full Identity Graph & Cloud IAM Integration (Months 2–8)
Activity 1: Deploy Graph Database
Implement Neo4j (on-prem), Amazon Neptune (AWS), or ArangoDB. Establish data correlation engine (entity matching, relationship inference).
Technical: Graph DB setup; ETL pipeline for data ingestion; query optimization
Success Metric: Graph operational; supports <500ms queries; >99% uptime
Activity 2: Aggregate All Identity Data into Graph
Integrate AD, HR, IGA, cloud IAM (AWS/Azure/GCP), PAM, cloud apps (SaaS entitlements), and devices. Establish real-time update feeds.
Technical: Connectors for all systems; daily/real-time sync cadence; data quality monitors
Success Metric: >99% entitlement visibility; <24h update latency
Activity 3: Implement Role Mining & SoD Conflict Detection
Use graph algorithms to identify access patterns, recommend business roles, and detect SoD violations. Auto-flag conflicts for review.
Technical: Graph traversal queries; clustering algorithms; conflict engine
Success Metric: >80% of users assigned to optimal business roles; 100% SoD violations detected within 24h
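The clustering step can be sketched in its simplest form — grouping users whose entitlement sets are identical into candidate roles (a minimal sketch with illustrative user and permission names; real role mining clusters on similarity rather than exact equality):

```python
from collections import defaultdict

user_perms = {
    "alice": frozenset({"crm_read", "crm_write"}),
    "bob":   frozenset({"crm_read", "crm_write"}),
    "carol": frozenset({"crm_read", "crm_write"}),
    "dave":  frozenset({"hr_read"}),
}

def mine_roles(user_perms, min_users=2):
    """Group users with identical entitlement sets into candidate
    business roles; singletons stay as individual assignments."""
    clusters = defaultdict(list)
    for user, perms in user_perms.items():
        clusters[perms].append(user)
    return {perms: users for perms, users in clusters.items() if len(users) >= min_users}

candidates = mine_roles(user_perms)  # one candidate role covering the CRM trio
```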
Activity 4: Enable Cloud Entitlement Governance (CIEM Integration)
Connect CIEM tool (Ermetic, Wiz, Lacework) to identity graph. Visualize cloud permissions alongside on-prem entitlements. Identify cloud least-privilege opportunities.
Technical: CIEM + Graph DB integration; permission correlation; remediation workflow
Success Metric: 100% AWS/Azure/GCP permissions visible in graph; 60%+ permission sprawl reduced
Phase 3: Risk Scoring Engine & Behavioral Analytics (Months 4–10)
Activity 1: Deploy UEBA / Behavioral Analytics
Implement Exabeam, Splunk UBA, or CrowdStrike Falcon. Establish baseline of normal user behavior (login patterns, app access, data exfiltration indicators). Detect deviations.
Technical: UEBA engine; baseline models; real-time scoring
Success Metric: Behavioral baseline established for 95%+ of users; anomaly detection with <5% false positive rate
Activity 2: Build Risk Scoring Algorithm
Create composite risk score combining: entitlement risk (SoD, privilege level), behavior risk (impossible travel, off-hours access), usage risk (inactive for 90 days), and policy risk (pending review, late approval). Update continuously.
Technical: Custom ML pipeline or UEBA algorithm; real-time scoring engine; risk dashboard
Success Metric: Risk scores updated hourly across all users; <2 hour latency from event to score update
Activity 3: Implement Risk-Tiered User Segmentation
Classify users into High/Medium/Low risk based on composite score. Use risk tier to drive review frequency, approval SLA, and monitoring intensity.
Technical: Risk tier assignment; dynamic policy rules
Success Metric: <20% of users classified High-risk; ~30% Medium-risk; >50% Low-risk
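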
Phase 4: Continuous Risk-Driven Certification Engine (Months 6–14)
Activity 1: Transition from Quarterly to Continuous Reviews
Move away from universal, time-based reviews toward risk-driven, continuous certification. High-risk users reviewed weekly; medium-risk monthly; low-risk quarterly.
Technical: IGA continuous certification engine; risk-based scheduling; workflow automation
Success Metric: 100% High-risk users reviewed weekly; 100% Medium-risk monthly; 100% Low-risk quarterly
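The risk-tiered scheduling above reduces to a simple mapping: composite score to tier, tier to review cadence. A sketch, with illustrative thresholds and cadences matching the weekly/monthly/quarterly targets stated above:

```python
# Sketch of risk-tiered certification scheduling. Score thresholds are
# illustrative assumptions; cadences mirror the targets in this playbook.
from datetime import timedelta

TIER_THRESHOLDS = [(0.7, "high"), (0.4, "medium"), (0.0, "low")]
REVIEW_CADENCE = {
    "high": timedelta(weeks=1),    # weekly review
    "medium": timedelta(days=30),  # monthly review
    "low": timedelta(days=90),     # quarterly review
}

def tier_for(score: float) -> str:
    # first threshold the score meets wins (list ordered high -> low)
    for threshold, tier in TIER_THRESHOLDS:
        if score >= threshold:
            return tier
    return "low"

def next_review_delta(score: float) -> timedelta:
    return REVIEW_CADENCE[tier_for(score)]

print(tier_for(0.82), next_review_delta(0.82))
```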
Activity 2: Implement AI-Driven Recommendations
Train ML model to recommend “Approve”, “Revoke”, or “Step-Up Review” based on entitlements, usage, risk, and peer behavior. Target 70%+ recommendation coverage with <2% false-positive rate.
Technical: ML model training; recommendation engine; feedback loop
Success Metric: 70%+ of certifications AI-recommended; <2% false positives; >90% manager adoption of recommendations
Activity 3: Enable Auto-Approval for Low-Risk Recommendations
Automatically approve “Approve” recommendations from low-risk users where usage supports retention. Require manager review only for high-risk or “Revoke” recommendations.
Technical: Auto-approval policy engine; exception tracking
Success Metric: >50% of certifications auto-approved; <5% exception rate
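The auto-approval gate described above is a conjunction of three conditions: an "Approve" recommendation, a low-risk user, and recent usage. A hedged sketch, with hypothetical field names standing in for whatever the IGA platform exposes:

```python
# Sketch of the auto-approval gate. Field names are illustrative; the
# 90-day usage window matches the usage-risk definition in this playbook.
def certification_decision(item: dict) -> str:
    """Auto-approve only when recommendation, risk, and usage all agree."""
    auto_ok = (item["recommendation"] == "approve"
               and item["risk_tier"] == "low"
               and item["days_since_last_use"] <= 90)
    return "auto-approve" if auto_ok else "manager-review"

print(certification_decision(
    {"recommendation": "approve", "risk_tier": "low", "days_since_last_use": 12}
))
```

Note the asymmetry by design: any "Revoke" recommendation or non-low risk tier always routes to a human, which is what keeps the exception rate low.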
Activity 4: Reduce SLA to <48 Hours
Managers review/approve high-risk certifications within 48 hours. Auto-remediation executes “Revoke” recommendations within 24 hours (with option to appeal).
Success Metric: >95% High-risk manager SLA <48h; 100% auto-revoke execution <24h
Phase 5: Autonomous Privilege & Workload Identity (Months 8–16)
Activity 1: Implement Anomaly Detection & Auto-Kill for PAM
Monitor privileged session behavior (keyboard patterns, command sequences, file access). Detect and kill sessions showing anomalous activity (mass deletion, config changes, lateral movement) in real-time.
Technical: PAM behavioral analytics; UEBA integration with PAM; auto-kill playbook
Success Metric: <2% false positive rate; <5min detection-to-kill latency
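The detect-and-kill decision can be sketched as a two-threshold policy on the session's anomaly score: kill outright above one threshold, flag for an analyst above a lower one. Thresholds and names are illustrative assumptions; real PAM/UEBA integrations score sessions with model-driven logic, not fixed constants.

```python
# Sketch of the auto-kill decision for privileged sessions.
# Thresholds are illustrative placeholders, not tuned values.
KILL_THRESHOLD = 0.9
REVIEW_THRESHOLD = 0.6

def session_action(anomaly_score: float) -> str:
    if anomaly_score >= KILL_THRESHOLD:
        return "kill"    # terminate session immediately, notify SOC
    if anomaly_score >= REVIEW_THRESHOLD:
        return "flag"    # keep session alive, route to an analyst
    return "allow"

print(session_action(0.95))
```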
Activity 2: Expand Workload Identity Governance
Extend continuous certification and risk scoring to non-human identities (service accounts, AI agents, containers, IoT). Implement workload identity federation (OIDC, Kubernetes).
Technical: Workload identity platform; OIDC provider; agent authorization framework
Success Metric: >90% of workload identities federated; zero hardcoded credentials
Phase 6: Zero Trust IdP & Passwordless Authentication (Months 4–12)
Activity 1: Deploy FIDO2 as Default
Transition from password + MFA to FIDO2 hardware keys (YubiKey, etc.) or platform authenticators (Windows Hello, Face ID). Target 95%+ FIDO2 enrollment.
Technical: Cloud IdP (Okta, Microsoft Entra ID) FIDO2 support; key management; backup options
Success Metric: 95%+ users on FIDO2; <1% phishing-susceptible auth methods
Activity 2: Implement Continuous Authentication
Monitor signals throughout session (device health, location, user behavior). Trigger step-up MFA or session termination if risk spikes.
Technical: Zero Trust IdP; continuous auth policy engine; risk-based step-up
Success Metric: <200ms auth latency; <5min step-up completion SLA
Phase 7: Policy-as-Code & Governance Automation (Months 10–20)
Activity 1: Convert Workflows to Policy Rules
Move from “approval gate” (a person says yes/no) to “policy engine” (policy rules decide). Implement a policy-as-code framework (Open Policy Agent/OPA, or a custom policy engine).
Technical: Policy language (Rego, YAML, or custom DSL); policy evaluation engine; versioning
Success Metric: 95%+ of governance decisions policy-driven; zero manual approval bottlenecks
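The shift from "approval gate" to "policy engine" means access decisions become declarative rules evaluated against a request context. A real deployment would express this in Rego under OPA; the Python sketch below only mimics the shape (first-match rules, a default fall-through to a human), and every rule name and field is a hypothetical example.

```python
# Illustrative sketch of policy-driven decisions replacing human approval
# gates. Rule names, fields, and decisions are hypothetical examples.
RULES = [
    # (name, predicate, decision) -- first matching rule wins
    ("deny_sod",         lambda r: r["creates_sod_conflict"],            "deny"),
    ("allow_birthright", lambda r: r["entitlement"] in r["birthright"],  "allow"),
    ("low_risk_allow",   lambda r: r["risk_tier"] == "low",              "allow"),
]

def evaluate(request: dict) -> tuple:
    """Return (decision, rule_name); unmatched requests go to a human."""
    for name, predicate, decision in RULES:
        if predicate(request):
            return decision, name
    return "manual-review", "default"

req = {"entitlement": "crm.read", "birthright": {"crm.read"},
       "creates_sod_conflict": False, "risk_tier": "medium"}
print(evaluate(req))
```

Returning the matched rule name alongside the decision is what makes the audit trail in Activity 2 cheap: every decision carries its own "why".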
Activity 2: Enable Policy Versioning & Audit
Track all policy changes; revert if needed; audit all policy-driven decisions (who, what, why, when). Support policy exceptions with explicit approval.
Technical: Policy git repos; policy audit trail; exception workflow
Success Metric: 100% policy audit trail; <5 min policy change deployment
Phase 8: Orchestration & Incident Response Automation (Months 14–24)
Activity 1: Implement Orchestration Layer
Deploy SOAR (Security Orchestration, Automation, and Response) platform. Define incident response playbooks (suspicious login → step-up MFA, impossible travel → session kill, SoD violation → exception approval).
Technical: SOAR platform (Splunk SOAR, Demisto, etc.); playbook library
Success Metric: >80% low-risk incidents auto-remediated; >95% high-risk incidents escalated within 5 min
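At its core, the playbook library is a dispatch table from incident type to an ordered list of response steps, with unknown incidents always escalating to a human. The incident names and steps below are illustrative, not tied to any specific SOAR product:

```python
# Sketch of SOAR playbook routing. Incident types and response steps are
# illustrative examples mirroring the playbooks described above.
PLAYBOOKS = {
    "suspicious_login":  ["step_up_mfa", "notify_user"],
    "impossible_travel": ["kill_session", "lock_account", "open_ticket"],
    "sod_violation":     ["request_exception_approval", "notify_owner"],
}

def respond(incident_type: str):
    # anything without a defined playbook escalates to an analyst
    return PLAYBOOKS.get(incident_type, ["escalate_to_analyst"])

print(respond("impossible_travel"))
```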
Activity 2: Enable Autonomous Low-Risk Remediation
Automatically execute “safe” remediations (revoke inactive access, rotate unused service account passwords, remove orphan accounts) with logging & audit trail.
Technical: Auto-remediation policy; rollback capability; audit logging
Success Metric: <2% false positive rate; <24h for low-risk remediation completion
Level 4 Success Criteria Checklist
Identity Lifecycle:
- Real-time, event-driven lifecycle; >95% of joiners provisioned in <1 hour
- Mover changes within 10 minutes
- Leaver fully deprovisioned <1 hour
Identity Repository & Graph:
- Full identity graph operational with <500ms query latency
- 99% entitlement visibility (on-prem + cloud)
- 100% SoD conflicts detected within 24 hours
- 80% users assigned to optimal business roles
Risk Scoring & Analytics:
- Behavioral analytics baseline established; <5% false positive rate
- Risk scores updated hourly across all users
- <20% High-risk, ~30% Medium-risk, >50% Low-risk segmentation
Continuous Access Governance:
- 100% continuous certification (no quarterly batches)
- Weekly reviews for High-risk; monthly for Medium; quarterly for Low
- 70%+ AI-recommended; <2% false positive rate
- 50% auto-approved; <48h manager SLA for escalations
- <5% exception rate; 100% exceptions audited
Privileged Access:
- Zero standing privilege; 100% JIT access with <1 hour approval SLA
- Real-time PAM anomaly detection; <5min kill latency
- <2% false positive anomaly rate
Authentication:
- 95%+ FIDO2 enrollment
- Continuous authentication with <200ms latency
- <1% phishing-susceptible authentication methods
Governance Model:
- 95%+ governance decisions policy-driven
- Policy versioning & audit 100% complete
- Zero manual governance bottlenecks
Level 4 Transition Risks & Mitigation
Risk: Machine learning models produce high false-positive recommendations, losing manager trust
Mitigation: Start with explainable recommendations; require manager review of all High-risk; tune model continuously; measure accuracy & adjust
Risk: Event mesh becomes bottleneck or point of failure
Mitigation: Design for 99.99% uptime; implement circuit breakers & fallback queues; test failure scenarios
Risk: Policy-as-code complexity leads to unintended denials of legitimate access
Mitigation: Start with audit-only policies; gradually enable enforcement; maintain rollback capability; comprehensive testing
Risk: Graph database becomes data quality bottleneck
Mitigation: Implement data quality gates upstream; monitor data freshness; establish SLAs for data updates
Risk: Autonomous remediation makes irreversible changes (e.g., revokes necessary access)
Mitigation: Limit auto-remediation to high-confidence, low-risk findings; implement appeal/undo workflows; 24–48h grace period before hard revoke
Pattern Selection & Decision Framework
Decision Tree: Which Pattern Should We Implement First?
Q1: What is your primary pain point?
- Slow joiner SLA; manual provisioning bottleneck
→ Implement Event-Driven Identity Automation first
- Timeline: 6–9 months
- Benefit: 48h→24h joiner SLA
- RA Focus: Interact, Change
- No visibility into who has what; SoD violations not detected
→ Implement Identity Graph & Entitlement Synthesis first
- Timeline: 8–12 months
- Benefit: 95%+ entitlement visibility
- RA Focus: Repositories, Analytics
- Annual access reviews are burden; risk not detected between reviews
→ Implement Continuous Risk-Driven Certification first
- Timeline: 9–12 months
- Benefit: Quarterly→continuous reviews
- RA Focus: Analytics, Change, Manage
- Multiple pain points; not sure where to start
→ See “Parallel vs Sequential” section below
Parallel vs Sequential Implementation
| Approach | Timing | Pros | Cons | Best For |
| --- | --- | --- | --- | --- |
| Sequential | 18–24 months (all 3) | Focus; lower risk; builds expertise; phased tooling spend | Slower time-to-value; forgoes integration synergies available when patterns are built together | Mid-market (500–5K employees) |
| Parallel | 18–24 months (faster convergence) | Faster time-to-value; Event-Driven feeds Identity Graph; synergies | Higher risk; requires more skilled team; higher concurrent spend | Large enterprise (10K+ employees) |
| Waterfall | 24–30 months | Lowest risk; easiest to manage stakeholders | Longest time-to-value; late patterns miss earlier investments | Risk-averse organizations |
Table 6: Parallel vs Sequential Implementation
Recommended Approach: Run Patterns 1 & 2 in overlap (Months 1–12), and start Pattern 3 at Month 6 (Months 6–18) so its early phases overlap the first two. This captures the integration synergies while keeping risk manageable.
Tool Selection Framework by Pattern
Pattern 1: Event-Driven Identity Automation
| Component | Option A (Preferred) | Option B (Lower Cost) | Option C (On-Prem) |
| --- | --- | --- | --- |
| Event Broker | AWS EventBridge (managed, scalable) | Azure Event Grid (cloud-native) | Apache Kafka (self-managed) |
| HR Connector | Workday REST API (native) | SAP SuccessFactors (native) | Custom polling adapter |
| IGA Platform | SailPoint IdentityIQ | Okta Identity Governance | Saviynt |
| Orchestration | IGA native workflows | Custom Lambda/Functions | HashiCorp Terraform (IaC) |
Table 7: Tool Selection for Event-Driven Pattern
Pattern 2: Identity Graph & Entitlement Synthesis
| Component | Option A (Preferred) | Option B (Lower Cost) | Option C (Analytics) |
| --- | --- | --- | --- |
| Graph Database | Amazon Neptune (AWS) | Neo4j (community/enterprise) | ArangoDB (flexible) |
| Data Aggregation | IGA native data warehouse | Custom ETL pipeline (Python/Spark) | Splunk / ELK data lake |
| Role Mining | SailPoint Role Miner | Manual role design + IGA assignment | Saviynt role analytics |
| SoD Enforcement | IGA SoD engine + conflict detection | Custom graph queries | Third-party SoD tool |
Table 8: Tool Selection for Identity Graph Pattern
Pattern 3: Continuous Risk-Driven Certification
| Component | Option A (Preferred) | Option B (Lower Cost) | Option C (Advanced) |
| --- | --- | --- | --- |
| Risk Scoring Engine | Exabeam UEBA | Custom ML pipeline (Python/TensorFlow) | Splunk UBA |
| Behavioral Analytics | CrowdStrike Falcon (endpoint) + Splunk (network) | Open-source tools (ELK + custom models) | Darktrace (autonomous AI) |
| Certification Engine | IGA continuous certification (SailPoint/Okta) | Custom workflow (ServiceNow + IGA API) | Dedicated certification platform |
| Auto-Remediation | SOAR platform (Splunk SOAR, Demisto) | Custom Lambda/Functions + webhook | Orchestration platform (HashiCorp) |
Table 9: Tool Selection for Continuous Certification Pattern
Decision Criteria Matrix: Org Profile → Pattern Recommendation
| Org Size | Maturity | Budget | Primary Pain | Recommended Pattern Sequence |
| --- | --- | --- | --- | --- |
| Startup (500–2K) | Level 1–2 | $200–500K | No tooling; high manual work | 1. Event-Driven (via Okta) |
| Mid-Market (2K–10K) | Level 2–3 | $500K–$2M | Slow joiner; no governance visibility | 1. Event-Driven → 2. Graph → 3. Cert |
| Enterprise (10K+) | Level 3–4 | $2M–$5M+ | Risk detection gaps; governance at scale | 1 & 2 (parallel) → 3 |
| Highly Regulated | Level 3 (min) | +30–50% budget | Compliance evidence; audit readiness | 1. Governance Model + Event-Driven |
Table 10: Decision Criteria Matrix: Org Profile to Pattern Recommendation
Risk Assessment: Pattern Adoption Challenges by Organizational Type
| Org Profile | Event-Driven Risks | Graph Risks | Continuous Cert Risks |
| --- | --- | --- | --- |
| HR Datacenter | HR system doesn’t publish events natively | HR data quality issues propagate to graph | HR employment status updates delayed |
| High-Compliance | Event loss/replay concerns | Graph becomes compliance-critical | AI recommendations require explainability |
| Legacy-Heavy | Legacy apps don’t have real-time provisioning APIs | Legacy app entitlements hard to discover | Legacy app usage data unavailable |
| Cloud-Native | Event volume high; scale challenges | Cloud IAM integration critical | Workload identity governance needed |
Table 11: Risk Assessment by Organizational Type
Tool Migration Path (Example)
SailPoint Organization Moving to Okta
- Phase 1 (Month 1): Run both in parallel; sync AD from SailPoint to Okta
- Phase 2 (Months 2–4): Move non-critical apps to Okta SSO; keep SailPoint provisioning
- Phase 3 (Months 5–6): Move SailPoint provisioning logic to Okta (rebuild rules)
- Phase 4 (Month 7): Decommission SailPoint; Okta operational
- Phase 5 (Month 8+): Leverage Okta Identity Governance for governance
Risk: Provisioning gaps during transition
Mitigation: Extensive testing; phased app cutover; fallback to manual for critical apps
When NOT to Implement a Pattern
❌ Don’t implement Event-Driven if:
- HR system is unreliable or data quality is <80%
- >30% of apps don’t support API-driven provisioning
- Team lacks event architecture expertise; prefer to hire
- Instead: Strengthen HR data quality first; implement basic automation
❌ Don’t implement Identity Graph if:
- Data correlation cannot be automated (too much manual matching)
- Graph database expertise unavailable; learning curve too high
- Entitlement visibility already >90% via IGA reporting
- Instead: Optimize IGA reporting; hire graph DB specialist
❌ Don’t implement Continuous Cert if:
- Manager adoption of quarterly reviews <50%
- No behavioral analytics baseline; can’t build risk models
- High false-positive concern + critical access (PAM, financial systems)
- Instead: Fix quarterly review completion first; build analytics baseline
Document Navigation Guide
When to Use This Playbook
Use this document if you:
- Are implementing a maturity progression (Level 2→3, Level 3→4)
- Need step-by-step roadmaps with phases, timelines, and success criteria
- Are selecting between Event-Driven, Identity Graph, or Continuous Certification patterns
- Want tool stacks and vendor recommendations for each pattern
- Need to understand when NOT to implement a pattern
When to Use the RA
Reference: IAM 2.0 Reference Architecture
Use when you:
- Need foundational concepts (vision, principles, core trust model)
- Are reviewing all RA layers and capabilities (Interact, Access, Change, Repositories, Analytics, Manage)
- Need deployment pattern options (on-prem, hybrid, cloud-native)
- Are new to the IAM 2.0 conceptual model
- Need a reference checklist of all identity capabilities
Quick Navigation by Question
| Your Question | Go To | Section |
| --- | --- | --- |
| Where are we today on maturity? | This Playbook | IAM 2.0 Maturity Model Summary |
| What patterns enable Level 3? | This Playbook | How Patterns Wire to Maturity Levels |
| What are the 6 RA layers? | The RA | Architecture Layers and Capabilities Detailed |
| How do we move from L2 to L3? | This Playbook | Level 2 → Level 3 Implementation Playbook |
| What tools support Pattern X? | This Playbook | Technology Stack (in each pattern section) |
| Does our current setup conform to L3? | Maturity Model | Comprehensive Maturity Matrix |
| What’s our timeline and investment? | This Playbook | Phase breakdown in L2→L3 or L3→L4 playbook |
| Should we use Event-Driven or Identity Graph first? | This Playbook | Pattern Selection & Decision Framework |
Table 12: Quick Navigation Guide
Conclusion
This Playbook provides the execution companion to the IAM 2.0 Reference Architecture. The three high-value patterns—Event-Driven Identity Automation, Identity Graph & Entitlement Synthesis, and Continuous Risk-Driven Certification—form the architectural backbone of modern IAM programs.
Key Takeaways:
- Patterns enable maturity levels. Don’t just adopt tools; adopt patterns aligned with your target maturity.
- Sequencing matters. Event-Driven typically comes first; Identity Graph amplifies it; Continuous Cert leverages both.
- Risk assessment is critical. Your org profile (size, legacy systems, regulatory burden, team expertise) determines which patterns to prioritize and how to sequence them.
- Success is measurable. Every phase has clear success criteria. Track them obsessively.
- Governance evolves. From workflow-based (approval gates) to policy-driven (rules engine) to autonomous (self-optimizing). Plan for that shift.
Three-Document Ecosystem:
| Document | Primary Use |
| --- | --- |
| IAM 2.0 Reference Architecture | Conceptual foundation (vision, layers, capabilities) |
| IAM 2.0 Maturity Model | Assessment (framework, scoring, mapping) |
| IAM 2.0 Playbook (this document) | Execution roadmaps (maturity, patterns, implementation) |
Table 13: Three-Document Ecosystem
For organizations ready to execute these playbooks, TechVision Research provides assessment, architecture design, vendor selection, and implementation guidance.