Why Access Reviews Fail in Real Companies

Access reviews are supposed to reduce risk. In most organizations, they produce audit screenshots instead. Here is why the system is designed to fail and what actually fixes it.


Somewhere right now, a manager is staring at a list of 400 entitlements they do not recognize, for people who may or may not still report to them, and they are about to click "Approve All" because their actual job starts in six minutes.

This is not a failure of character. It is a failure of system design.

Access reviews are the single most common governance control in enterprise IAM. Nearly every regulated organization runs them: SOX shops, HIPAA-covered entities, PCI merchants, SOC 2 companies. External auditors ask for evidence that they happen. Internal teams scramble to produce that evidence every quarter.

And in most organizations, they accomplish almost nothing.

Not because the people are lazy. Not because the tools are broken. Because the entire system is optimized to produce a compliance artifact instead of reducing access risk. The incentive structure rewards completion. It does not reward good decisions. So you get fast completions and bad decisions, and everyone involved knows it, and the cycle repeats next quarter.

This is a structural problem. Understanding why it exists is the first step toward making access reviews actually useful.

What Access Reviews Are Supposed to Do

The theoretical purpose is simple: validate that every user's current access is still appropriate. Catch entitlement creep. Find orphaned permissions. Remove access that no longer matches someone's job.

If it worked the way it was designed, a quarterly review would produce three outcomes:

  1. Confirmation that most access is still correct
  2. Identification and removal of access that drifted beyond what the role requires
  3. A clean audit trail showing that someone with authority validated the state of access in each system

The regulatory drivers are real. SOX Section 404 requires controls over access to financial systems. HIPAA requires periodic review of access to protected health information. PCI-DSS Requirement 7 requires restricting access to cardholder data on a need-to-know basis. SOC 2 CC6.1 requires logical access controls with periodic validation. These are not optional for the companies subject to them.

The IGA platforms that run these campaigns are real too. SailPoint runs access certifications. Saviynt runs certification campaigns. Microsoft Entra ID has built-in access reviews. Each one produces the same output: a campaign completion percentage, a list of decisions (approve/revoke), and a timestamp showing who signed off.

Here is the honest observation: in most organizations, the campaign closes, the completion rate hits 95%+, the audit team collects their screenshots, and the actual risk posture does not meaningfully change.

The entitlements that were inappropriate before the review are still there after it.

The Five Ways Access Reviews Actually Fail

1. Rubber-Stamping

This is the most visible failure mode and the one everyone in IGA has seen firsthand.

Managers approve everything. Not because they are negligent, but because rejecting something they do not understand might break a workflow, generate a help desk ticket, and create a visible problem they will own. Approving costs nothing visible. Rejecting creates immediate friction.

In SailPoint, the "Approve All" button exists. In every IGA platform that runs certifications, some version of it exists. And campaign analytics can show you the truth: if a reviewer processed 300 entitlements in 4 minutes, they did not review anything. They cleared a queue.

The rational calculus is: approving has zero visible cost. Rejecting has immediate, personal cost. Every time. Until that math changes, rubber-stamping is the dominant strategy.
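You can make this visible in your own data. Below is a minimal sketch that flags likely rubber-stamping from an exported decision log; the CSV columns, the "approve" label, and the thresholds are assumptions to adapt to whatever your platform actually exports.

```python
# Sketch: flag likely rubber-stampers from an exported campaign decision log.
# Assumes a CSV with hypothetical columns: reviewer, item_id, decision, decided_at
# (ISO 8601). Column names and thresholds are illustrative, not a vendor schema.
import csv
from collections import defaultdict
from datetime import datetime

def load_decisions(path):
    by_reviewer = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_reviewer[row["reviewer"]].append(
                (datetime.fromisoformat(row["decided_at"]), row["decision"])
            )
    return by_reviewer

def flag_rubber_stampers(by_reviewer, min_items=50, max_seconds_per_item=5.0):
    flagged = []
    for reviewer, decisions in by_reviewer.items():
        if len(decisions) < min_items:
            continue
        decisions.sort()
        elapsed = (decisions[-1][0] - decisions[0][0]).total_seconds()
        per_item = elapsed / len(decisions)
        approve_rate = sum(1 for _, d in decisions if d == "approve") / len(decisions)
        # A few seconds per item with a ~100% approve rate is queue-clearing, not review.
        if per_item < max_seconds_per_item and approve_rate > 0.98:
            flagged.append((reviewer, len(decisions), round(per_item, 1), approve_rate))
    return flagged

for result in flag_rubber_stampers(load_decisions("campaign_decisions.csv")):
    print(result)
```

Run it after every campaign. The reviewers it surfaces are not the problem; they are where the incentive problem becomes measurable.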

2. Wrong Reviewers

The default reviewer assignment in most IGA platforms is manager-based. The user's direct manager gets the review because the HR feed says the user reports to them.

This works fine for birthright access. A marketing manager can reasonably confirm that their direct report still needs access to the CRM and the content management system.

It falls apart for technical entitlements. That same marketing manager does not know what SAP_FI_POSTING_AUTH_LEVEL_3 means. They do not know what SG-APP-FIN-PROD-RW-EAST grants. They cannot make a meaningful decision about a database role or an API service account permission. But the IGA platform assigned them the review because the org chart says they own the person.

The person who actually understands the entitlement is the application owner or a technical lead. But they are not in the review loop, because the campaign was scoped by manager, not by entitlement ownership.

3. Bad Data and Unintelligible Entitlements

Even a well-intentioned reviewer cannot make good decisions about entitlements they cannot read.

Entitlement names in IGA platforms are often raw technical strings pulled from target systems. Active Directory group names like DL-CORP-FIN-REPORTING-RW-US-EAST. SAP authorization objects. Database role identifiers. AWS IAM policy ARNs. These are not human-readable descriptions of what access is being granted. They are system identifiers.

If the reviewer cannot understand what the entitlement actually allows the user to do, the review is theater. They will approve it because the alternative is rejecting something they cannot evaluate and potentially breaking a critical workflow.

This compounds with role explosion. When an organization has thousands of roles built by accumulation rather than design, many of those roles overlap, contradict each other, or have drifted far from their original intent. A reviewer seeing 15 roles assigned to a single user, with opaque names and no descriptions, is not performing governance. They are performing a ritual.

4. Timing Problems

Most reviews run quarterly or semi-annually. The schedule is driven by audit calendars, not by risk events.

This creates two problems. First, access that should have been removed in January does not get flagged until the March campaign, does not get reviewed until late March, and does not get remediated (maybe) until April. That is three months of inappropriate access that the review process was theoretically supposed to catch.

Second, the burst pattern. Three thousand review items land on managers at the same time, competing with quarter-end reporting, performance reviews, project deadlines, and everything else that happens on a schedule. Completion rates drop. The IGA team sends escalation emails. Managers batch-approve to clear the queue because the escalation emails create more urgency than any individual entitlement decision.

Continuous access reviews exist. SailPoint ISC supports event-triggered micro-certifications. Saviynt supports continuous compliance monitoring. But most organizations still run quarterly campaigns because that is what the audit program was built around, and changing the audit program requires convincing the external auditors that a different cadence still satisfies the control objective.

5. No Remediation Follow-Through

The review is half the control. The other half is removing the access that was flagged for revocation.

In many organizations, a reviewer clicks "Revoke" in the IGA platform, and one of several things happens:

  • The connector to the target system is misconfigured, and the revocation fails silently
  • The target system requires a manual step that routes to a ticket queue nobody actively monitors
  • The provisioning workflow has an exception handler that catches the revocation and parks it for "review" by an application owner who never sees it
  • The entitlement gets removed from the IGA's view of the user's access, but the actual permission persists in the target system because the sync is one-directional

Auditors rarely check whether revocations completed in the target system. They check whether the certification campaign was marked "completed" in the IGA platform. Those are different things.

The Incentive Structure That Keeps This Broken

Each failure mode above has its own local cause. But the reason they persist together, year after year, in otherwise competent organizations, is systemic.

The audit-compliance loop. External auditors need evidence that access reviews happen. The IAM team needs to show auditors that reviews happen. The metric everyone optimizes for is campaign completion rate, not "amount of inappropriate access removed" or "reduction in entitlement creep over time." The system produces completion. It does not produce security. And completion is what gets measured, reported, and rewarded.

The cost asymmetry. Approving access has no visible cost. The inappropriate entitlement sits there quietly. Nobody pages you at 2 AM because a user had too much access (unless there is a breach, and breaches are rare and usually attributed to other causes). Revoking access has immediate, visible cost: broken workflows, help desk tickets, escalations, angry users, business disruption, and a trail that leads directly back to the person who clicked "Revoke."

Every rational actor in the system is incentivized to approve.

The accountability gap. If a reviewer approves access that later contributes to a data breach, there is almost never a direct consequence for the reviewer. The incident response focuses on the attacker, the vulnerability, the detection gap. Nobody goes back to the quarterly access review and says "this manager approved this entitlement six months ago and that is why the breach succeeded." But if a reviewer revokes access that breaks payroll processing on Monday morning, the consequence is immediate and personal.

The error modes are asymmetric. One direction is invisible and diffuse. The other direction is loud and specific.

The budget problem. Running access reviews well requires sustained investment: data quality work, entitlement description maintenance, reviewer training, ownership models, remediation workflows, connector reliability, continuous improvement cycles. Running access reviews at all requires an IGA license and a quarterly email. Most organizations fund the minimum viable compliance artifact because the return on better reviews is invisible (risk that did not materialize) while the cost is concrete (headcount, time, political capital).

The vendor incentive. IGA vendors sell platforms. Platforms run certifications. Customers buy platforms to satisfy auditors. The vendor success metric is deployment and renewal, not whether the customer's access risk actually decreased. Nobody in that commercial chain is optimized to ask: "Did the review produce meaningful access changes?" They are optimized to ask: "Did the campaign complete? Will the customer renew?"

This is not conspiracy. It is just incentives doing what incentives do.

What Actually Working Looks Like

Fixing access reviews is not a single project. It is a set of operational changes that compound over time. None of them are conceptually difficult. All of them require sustained attention.

Make the Review Unit Meaningful

Do not dump 500 entitlements on a single reviewer in one campaign.

Scope campaigns by application, by risk tier, or by entitlement type. A reviewer should be able to make a genuine decision on every line item. If the volume makes that impossible, the campaign design is wrong.

SailPoint ISC supports campaign filters and custom populations. Saviynt supports flexible certification scopes. Use them. A focused campaign that reviews 30 high-risk entitlements with the right reviewer produces better outcomes than a sprawling campaign that technically covers 3,000 entitlements but gets rubber-stamped.
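If it helps to see the shape of that scoping logic, here is a rough platform-independent sketch. The item fields and the 30-item cap are illustrative assumptions, not vendor configuration.

```python
# Sketch: split one sprawling review into focused campaigns, assuming a list of
# entitlement review items with hypothetical keys: app, risk_tier, entitlement_id, user.
from collections import defaultdict

MAX_ITEMS_PER_CAMPAIGN = 30  # illustrative cap: a reviewer should read every line

def scope_campaigns(items):
    """Group review items by (app, risk_tier), then chunk each group so no
    campaign exceeds a size a reviewer can genuinely evaluate."""
    buckets = defaultdict(list)
    for item in items:
        buckets[(item["app"], item["risk_tier"])].append(item)
    campaigns = []
    for (app, tier), bucket in sorted(buckets.items()):
        for i in range(0, len(bucket), MAX_ITEMS_PER_CAMPAIGN):
            campaigns.append({
                "name": f"{app}-{tier}-part{i // MAX_ITEMS_PER_CAMPAIGN + 1}",
                "items": bucket[i:i + MAX_ITEMS_PER_CAMPAIGN],
            })
    return campaigns
```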

Fix the Data Before You Fix the Process

Entitlement descriptions must be human-readable. "Read/write access to the production financial reporting database" is reviewable. SG-APP-FIN-PROD-RW-EAST is not.

This is expensive. Someone has to map entitlements to plain-language descriptions and maintain that mapping as systems change. Most organizations skip it because it is unglamorous work with no immediate visible payoff. It is also the single highest-leverage improvement you can make to review quality.

In SailPoint, entitlement descriptions live on the entitlement object. In Saviynt, entitlement metadata can be enriched through connector configuration. Neither platform does this automatically. It is manual classification work that pays permanent dividends.

Start with your highest-risk entitlements. If you have 10,000 entitlements, describe the 500 that grant write access to production financial systems, sensitive data, or administrative capabilities. Let the low-risk ones wait.
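A small script can tell you exactly where that backlog is. This sketch joins a raw entitlement export against a maintained description mapping and lists the high-risk entitlements that still lack one. The file names, columns, and risk field are assumptions, not any platform's schema.

```python
# Sketch: enrich raw entitlement names with maintained plain-language descriptions
# and surface high-risk entitlements that still lack one.
import csv

def load_mapping(path):
    with open(path, newline="") as f:
        return {row["entitlement"]: row["description"] for row in csv.DictReader(f)}

def missing_high_risk(entitlements_path, mapping_path):
    mapping = load_mapping(mapping_path)
    gaps = []
    with open(entitlements_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["risk_tier"] == "high" and row["name"] not in mapping:
                gaps.append(row["name"])  # describe these first; let low-risk wait
    return gaps

for name in missing_high_risk("entitlement_export.csv", "descriptions.csv"):
    print(f"needs description: {name}")
```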

Assign the Right Reviewer

Manager-based review works for birthright access: the stuff that comes with the job and that the manager can reasonably evaluate. It fails for technical entitlements, application-specific permissions, and privileged access.

Build a tiered reviewer model:

  • Managers review birthright access and standard job-function entitlements
  • Application owners review application-specific entitlements and technical permissions
  • Security or PAM teams review privileged and sensitive entitlements

This requires maintained ownership data. If you do not know who owns each application's entitlements, start there. An ownership registry is a prerequisite for useful reviews, and it has value far beyond the review process (incident response, change management, decommissioning decisions all need the same data).
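The routing rule itself is simple once the ownership data exists. Here is a sketch, with an illustrative in-memory registry standing in for whatever system of record you maintain:

```python
# Sketch of the tiered routing rule. The registry, item keys, and team names
# are illustrative assumptions, not platform configuration.
OWNERS = {"sap-fi": "fin-app-owners", "crm": "mktg-app-owners"}  # app -> owning team

def assign_reviewer(item, owners=OWNERS):
    if item["type"] == "privileged":
        return "security-pam-team"            # privileged/sensitive -> security or PAM
    if item["type"] in ("application", "technical"):
        owner = owners.get(item["app"])
        # Unknown ownership is a data gap to fix, not a reason to default to the manager.
        return owner if owner else "ownership-gap-queue"
    return item["manager"]                    # birthright/standard -> manager

print(assign_reviewer({"type": "technical", "app": "sap-fi", "manager": "jdoe"}))
```

Note the fallback: an unresolvable owner routes to a visible gap queue instead of silently landing on the manager, which is how you find the holes in the registry.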

Trigger Reviews by Events, Not Just Calendars

Role change, department transfer, extended leave return, privilege escalation. These are the moments when access is most likely to be wrong. A user who moved from finance to marketing six weeks ago should not keep their financial system access until the next quarterly campaign discovers it.

Event-driven micro-certifications catch drift in real time. SailPoint ISC supports event-triggered certifications. Saviynt supports continuous compliance monitoring. Entra ID access reviews can be triggered through lifecycle workflows.

The quarterly campaign does not disappear. Auditors still want periodic comprehensive reviews. But it becomes a backstop that catches whatever the event-driven reviews missed, not the primary control surface.
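The trigger logic itself is not complicated; the hard part is wiring it to your HR feed and your platform's certification mechanism. A sketch of the rule, with the event shape and the callbacks as stand-ins for those integrations:

```python
# Sketch: open a micro-certification when an HR event lands, instead of waiting
# for the quarterly campaign. Event fields and callback signatures are assumptions.
TRIGGERING_EVENTS = {"department_transfer", "role_change", "leave_return",
                     "privilege_escalation"}

def handle_hr_event(event, user_entitlements, open_review):
    if event["type"] not in TRIGGERING_EVENTS:
        return
    # Scope the review to access tied to the user's *previous* context:
    # that is where drift is most likely after a transfer.
    stale = [e for e in user_entitlements(event["user"])
             if e.get("granted_for") == event.get("previous_department")]
    if stale:
        open_review(user=event["user"], items=stale,
                    reason=f"micro-cert: {event['type']}")

handle_hr_event(
    {"type": "department_transfer", "user": "jdoe", "previous_department": "finance"},
    user_entitlements=lambda u: [{"name": "SAP_FI_POSTING", "granted_for": "finance"}],
    open_review=lambda **kw: print("opened review:", kw),
)
```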

Close the Remediation Loop

A revocation decision is worthless if the access persists in the target system.

Automated provisioning with confirmation is the cleanest path. The IGA platform sends the revocation, the connector executes it, the target system confirms the permission is gone. Where automated revocation is not possible (legacy systems, manual-only applications), track manual remediation with SLAs, escalation paths, and verification steps.

Report on remediation completion rate as a distinct metric from campaign completion rate. Show both numbers to auditors. The sharp ones will care about the gap. And the gap will tell you exactly where your connectors, workflows, or ownership models need work.
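Computing that gap requires nothing exotic: take the campaign's revocation decisions, re-read access from the target system, and diff. A sketch, with the data shapes as assumptions:

```python
# Sketch: verify revocations against a fresh pull from the target system and
# report remediation completion as its own metric, separate from campaign completion.
def remediation_report(revocations, current_access):
    """revocations: set of (user, entitlement) the campaign decided to remove.
    current_access: set of (user, entitlement) as re-read from the target system."""
    still_present = revocations & current_access
    completed = len(revocations) - len(still_present)
    rate = completed / len(revocations) if revocations else 1.0
    return rate, sorted(still_present)

rate, gaps = remediation_report(
    revocations={("jdoe", "DL-CORP-FIN-REPORTING-RW-US-EAST")},
    current_access={("jdoe", "DL-CORP-FIN-REPORTING-RW-US-EAST")},  # revoke never landed
)
print(f"remediation completion: {rate:.0%}; outstanding: {gaps}")
```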

The Uncomfortable Truth About Access Reviews

Even well-run access reviews are a trailing indicator. They catch problems after the access was granted and after the user operated with that access for weeks or months. They do not prevent overprovisioning at the point of request.

The real fix is upstream:

  • Tighter birthright access definitions that do not grant everything on day one
  • Request-based access with automatic expiration for anything beyond the baseline (the expiration mechanic is sketched after this list)
  • Just-in-time access for sensitive entitlements that should not persist
  • Automated deprovisioning on role change, not just termination
  • Access request workflows that require business justification, not just a manager click
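As a sketch of the expiration piece, with illustrative names and an in-memory grant store standing in for your provisioning layer:

```python
# Sketch: time-bound grants with an expiration sweeper, so non-baseline access
# removes itself instead of waiting for a review to find it. All names are
# illustrative assumptions, not a platform API.
from datetime import datetime, timedelta, timezone

def grant(store, user, entitlement, justification, ttl_days=30):
    # No justification, no grant: the request workflow enforces it up front.
    if not justification.strip():
        raise ValueError("business justification required")
    store[(user, entitlement)] = datetime.now(timezone.utc) + timedelta(days=ttl_days)

def sweep_expired(store, revoke):
    now = datetime.now(timezone.utc)
    for key, expires_at in list(store.items()):
        if expires_at <= now:
            revoke(*key)      # deprovision in the target system
            del store[key]

grants = {}
grant(grants, "jdoe", "SG-APP-FIN-PROD-RW-EAST", "Q3 close support", ttl_days=0)  # expires immediately, for demo
sweep_expired(grants, revoke=lambda u, e: print(f"revoked {e} from {u}"))
```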

Access reviews are a compensating control for weak provisioning. If your provisioning is excellent, your reviews should be boring: mostly confirmations with occasional catches. If your reviews keep finding large volumes of inappropriate access, the problem is not the review process. The problem is how access got there in the first place.

This is not a reason to stop doing reviews. It is a reason to stop pretending reviews alone constitute governance. They are one control in a program. Treat them that way.

Where This Leaves You

Access reviews fail because the system is designed to produce compliance artifacts, not risk reduction. The audit-compliance loop rewards campaign completion. The cost asymmetry rewards approving. The data quality gap makes informed decisions impossible. The remediation gap means even good decisions do not execute.

Fixing this is not glamorous work. It is data quality, ownership registries, reviewer models, connector reliability, and metric changes. It compounds slowly. It does not produce a single dramatic before-and-after slide.

But organizations that do this work have IGA programs that actually reduce risk instead of just documenting its existence. And the people who understand why reviews fail, who can diagnose the structural causes, and who know how to build the operational patterns that make governance real instead of performative: those are exactly the practitioners that IGA and governance teams are hiring for.

If your current access review process is mostly theater and you want to work somewhere that takes governance seriously, or if you are the person trying to make it work at your current organization and want to see who else is solving these problems, browse identity governance roles and look at what the serious teams are building.
