
Auditing azure ad app permissions

How to see what apps can really do in your tenant

If you’ve ever opened Microsoft Entra ID (Azure AD) and clicked through Enterprise applications → Permissions, you’ve seen the comforting illusion of control: a list of “API permissions” that looks finite, reviewable, and mostly harmless.

In real incidents, that list is rarely the whole story.

The permissions you see (requested permissions on an app registration) are not always the permissions that are granted (consented in your tenant). The permissions that are granted are not always the permissions that are usable (because of conditional access, workload identity constraints, or missing assignments). And the permissions that are usable are not always the permissions that are used (because many apps are stale, abandoned, or over-scoped “just in case”).

Auditing Azure AD app permissions is the discipline of closing those gaps, so you can answer, with evidence: what apps exist in your tenant, what they are actually granted, which of those grants are risky, and whether each one is still justified.

That matters more now than even a year ago. OAuth-based compromise patterns keep evolving (consent phishing is still thriving), and recent research write-ups keep reminding defenders that attackers prefer access paths that don’t look like “password theft.” Microsoft’s own guidance focuses heavily on reducing consent risk and controlling app access.

This article goes beyond the “click here in the portal” version. You’ll get first-principles clarity, an expert-grade technical audit runbook, and a practical comparison of the best audit methods—portal, Graph, Defender app governance, and tool-based approaches.


what “app permissions” really are (and why most audits miss the point)

At the core, Entra app access is built from two primitives:

  1. An identity for the app in your tenant
    That’s the service principal (enterprise application). It’s the “instance” of an app in your directory, where assignments, consents, and policies land.
  2. An authorization grant that links that identity to resource access
    In Microsoft terms, this is mainly:
    • delegated permission grants (OAuth scopes) represented by oAuth2PermissionGrant objects
    • application permission assignments (app roles) represented by appRoleAssignments (to the service principal)
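To make the two primitives concrete, here is a minimal sketch of their shapes in Python. The field names follow the Graph oAuth2PermissionGrant and appRoleAssignment resource types; the ID values are hypothetical placeholders:

```python
# Delegated permission grant (OAuth scopes), attached to a client service principal.
# Field names match the Graph oAuth2PermissionGrant resource type; values are placeholders.
delegated_grant = {
    "clientId": "sp-objectid-of-the-app",    # service principal holding the grant
    "resourceId": "sp-objectid-of-the-api",  # API being accessed (e.g. Microsoft Graph)
    "consentType": "AllPrincipals",          # tenant-wide; "Principal" = one user
    "principalId": None,                     # set only when consentType == "Principal"
    "scope": "User.Read Mail.Read",          # space-separated delegated scopes
}

# Application permission (app role) assigned to the client service principal.
app_role_assignment = {
    "principalId": "sp-objectid-of-the-app",  # the SP that holds the role
    "resourceId": "sp-objectid-of-the-api",   # resource SP that defines the role
    "appRoleId": "role-guid",                 # maps to a role such as app-only Mail.Read
}

def delegated_scopes(grant):
    """Split a grant's scope string into individual permissions."""
    return grant["scope"].split()
```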

Most “surface audits” only look at the app registration and what permissions are configured to be requested. But attackers, risky SaaS tools, and “shadow IT” don’t care what was requested. They care what was granted in your tenant.

So the first-principles rule is simple:

Requested permissions are intent. Granted permissions are reality.

And your audit should be built around reality.


the comparison that actually matters: four ways to audit azure ad app permissions

You can audit in at least four credible ways. The difference is coverage and truthfulness, not convenience.

1) entra admin center (fastest, least complete)

Best for: quick spot checks, explaining to stakeholders, small tenants.
Weakness: hard to do tenant-wide rigor; doesn’t naturally produce a complete inventory with delegated + app-only grants and “who consented what”.

It can show granted permissions per app and allow revocation actions. Microsoft’s docs explicitly describe reviewing and revoking permissions via the admin center. (GitHub)

2) microsoft graph (most complete, most defensible)

Best for: real audits, repeatable reporting, CI-style governance.
Weakness: you must model the data correctly (service principals, grants, role assignments), and permission name mapping takes effort.

Graph has first-class resource types and endpoints for delegated grants (oAuth2PermissionGrant) and can enumerate service principals and relationships.
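If you work against the REST API directly, the three core reads can be sketched as URL builders. The paths are Microsoft Graph v1.0 endpoints; the helper functions themselves are illustrative, not a client library:

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def list_service_principals_url(top=999):
    # Enumerate every service principal; $top pages in larger batches.
    return f"{GRAPH}/servicePrincipals?$top={top}"

def delegated_grants_for_client_url(sp_id):
    # Delegated grants (oAuth2PermissionGrant) where this SP is the client.
    return f"{GRAPH}/oauth2PermissionGrants?$filter=clientId eq '{sp_id}'"

def app_role_assignments_url(sp_id):
    # Application permissions (app roles) held by this SP.
    return f"{GRAPH}/servicePrincipals/{sp_id}/appRoleAssignments"
```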

3) defender for cloud apps “app governance” (best operational signal)

Best for: ongoing monitoring, risk scoring, behavioral detection, remediation workflows.
Weakness: licensing/enablement, and it’s more “governance + detection” than “compliance-style inventory”.

Microsoft positions app governance as visibility and policy management for OAuth-enabled apps across Entra and others.

4) purpose-built tooling (fast reporting, opinionated risk views)

Examples range from community PowerShell reporting modules to commercial SaaS security platforms; the specific product matters less than what you verify underneath it.

Best for: speed, dashboards, stakeholder-friendly outputs.
Weakness: you still need to understand the primitives to validate tool output and avoid false confidence.

A mature program often uses Graph for truth, Defender for signal, and tools for workflow/reporting.


the irreducible truths behind app permission risk

To audit well, you need a few “physics laws” of Entra permissions.

truth 1: delegated vs application isn’t a detail—it’s the whole threat model

Delegated permissions let an app act on behalf of a signed-in user, bounded by what that user can access. Application permissions let the app act as itself, tenant-wide, with no user in the loop. That distinction is the difference between “user tricked into consenting” and “app can read every mailbox.”

Microsoft’s own identity platform documentation frames permissions and consent around these consented authorizations and scenarios.

truth 2: “admin consent” is a security boundary and a foot-gun

Admin consent is meant to prevent users from granting high-impact access. But many tenants accidentally convert a user-specific need into a tenant-wide grant by consenting “on behalf of the organization.”

That’s not hypothetical—practitioners complain about exactly this workflow friction and approval gating in the wild.

truth 3: grants persist longer than your memory

Even if an app is “no longer used,” its service principal and consent grants often remain. Those stale grants are a common source of quiet risk—especially if secrets/certs are also left active.

truth 4: least privilege is not a slogan; it’s an engineering constraint

Least privilege in Entra means requesting only the scopes an app actually needs, preferring delegated permissions over application permissions where a user context exists, preferring narrowly scoped permissions (such as Sites.Selected) over tenant-wide ones, and removing grants that are no longer used.

User consent settings and admin consent workflows are explicit controls Microsoft recommends configuring to reduce risk.


build a defensible tenant-wide app permission inventory

This section is deliberately technical and designed to be copied into your internal audit procedure. It focuses on the three questions that matter:

  1. what exists (inventory of service principals and apps)
  2. what’s granted (delegated grants + app role assignments)
  3. what’s risky or stale (prioritization + next actions)

step 0: decide what you’re auditing (scope that prevents nonsense)

Be explicit: which tenant(s) are in scope, whether Microsoft first-party apps are included, and whether you are covering delegated grants, application grants, or both.

A good default for most orgs: every service principal that holds at least one grant, delegated and application alike, with Microsoft first-party apps reported separately rather than silently excluded.

step 1: connect using microsoft graph powershell with appropriate read scopes

You need read access to applications/service principals and the permission grants. Start with least privilege, then expand.

Example baseline: Application.Read.All and Directory.Read.All (you may need more depending on what you pull).

Microsoft documents Graph PowerShell cmdlets like Get-MgServicePrincipal and required permissions.
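As a sketch, that baseline can be kept as data and rendered into the Graph PowerShell connection one-liner. Application.Read.All and Directory.Read.All are real Graph permissions that cover the reads described below, but verify the exact requirements of each cmdlet you end up calling:

```python
# A read-only baseline for this audit: Application.Read.All covers service
# principals and app role assignments; Directory.Read.All covers reading
# oAuth2PermissionGrants. Treat this as a starting point, not a guarantee.
BASELINE_SCOPES = ["Application.Read.All", "Directory.Read.All"]

def connect_command(scopes):
    """Render the Connect-MgGraph one-liner for the given scopes."""
    quoted = ", ".join(f'"{s}"' for s in scopes)
    return f"Connect-MgGraph -Scopes {quoted}"
```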

step 2: build the inventory spine: service principals are your ground truth

Your report should key on service principal id, not app name. Names change; IDs don’t.

Collect fields that support triage: id, displayName, appId, appOwnerOrganizationId, verifiedPublisher, servicePrincipalType, accountEnabled, and tags.

Why? Because your audit needs to answer “is this internal, marketplace, or unknown?” fast.
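A minimal sketch of that spine, assuming the raw service principal JSON from Graph. The field names come from the servicePrincipal resource type; the internal/external split via appOwnerOrganizationId is a suggested heuristic, not an official classification:

```python
def inventory_record(sp, tenant_id):
    """Reduce a service principal (Graph JSON) to triage fields,
    keyed by the immutable object id rather than the display name."""
    internal = sp.get("appOwnerOrganizationId") == tenant_id
    return sp["id"], {
        "displayName": sp.get("displayName"),
        "appId": sp.get("appId"),
        "origin": "internal" if internal else "external",
        "publisher": (sp.get("verifiedPublisher") or {}).get("displayName"),
        "enabled": sp.get("accountEnabled"),
        "type": sp.get("servicePrincipalType"),
    }
```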

step 3: collect application permissions (app roles assigned to the service principal)

Application permissions are typically represented as app role assignments to the resource service principal (for example: Microsoft Graph resource SP).

Conceptually, an appRoleAssignment says: service principal X holds app role Y defined by resource service principal Z.

You’ll pull assignments and then map role IDs to names by looking up the resource’s app roles.

A practical trick: for Microsoft Graph, the resource app id is the well-known value 00000003-0000-0000-c000-000000000000, which also appears in Azure CLI documentation examples.
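The mapping step can be sketched like this. The Microsoft Graph appId below is the well-known value; the helper itself is illustrative:

```python
GRAPH_APP_ID = "00000003-0000-0000-c000-000000000000"  # well-known Microsoft Graph appId

def role_name(resource_sp, app_role_id):
    """Map an appRoleId GUID to its human-readable value using the
    appRoles list published on the resource service principal."""
    for role in resource_sp.get("appRoles", []):
        if role["id"] == app_role_id:
            return role["value"]  # e.g. "Mail.Read" (app-only)
    return app_role_id            # fall back to the raw GUID if unknown
```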

step 4: collect delegated grants (oAuth2PermissionGrants)

Delegated grants are represented by oAuth2PermissionGrant objects. Graph supports listing them.

Key fields you must interpret correctly: clientId (the service principal holding the grant), resourceId (the API being accessed), scope (a space-separated list of delegated permissions), consentType (AllPrincipals means tenant-wide admin consent; Principal means a single user), and principalId (the consenting user when consentType is Principal).

This is where many audits fail: they list scopes but don’t distinguish tenant-wide vs per-user, which changes your risk profile dramatically.
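A sketch of that interpretation, exploding one grant into per-scope rows while preserving the tenant-wide vs per-user distinction. The row shape is this article's suggestion, not a Graph schema:

```python
def grant_rows(grant, user_lookup=None):
    """Explode one oAuth2PermissionGrant into per-scope report rows.
    consentType "AllPrincipals" = tenant-wide; "Principal" = one user."""
    tenant_wide = grant["consentType"] == "AllPrincipals"
    who = "ALL USERS" if tenant_wide else (user_lookup or {}).get(
        grant.get("principalId"), grant.get("principalId"))
    return [
        {"client": grant["clientId"], "scope": s,
         "tenantWide": tenant_wide, "grantedFor": who}
        for s in grant["scope"].split()
    ]
```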

step 5: map “ids to human names” so humans can review the report

Raw output is unreadable unless you translate: object IDs to display names, appRoleId GUIDs to role values (via the resource's appRoles), and scope strings to the permission descriptions reviewers recognize.

If you want a shortcut for Graph permission meaning, tools like Graph Permissions Explorer can help you understand what a permission implies, and Microsoft Identity Tools can generate reports that categorize privilege. (graphpermissions.merill.net)

step 6: classify risk using a simple, repeatable rubric

Avoid subjective “this feels scary.” Use a rubric that’s explainable.

Example classification:

critical (usually needs explicit business justification)
    • application permissions with tenant-wide write or impersonation, for example Mail.ReadWrite, Directory.ReadWrite.All, or RoleManagement.ReadWrite.Directory

high
    • any other application permission, especially tenant-wide read such as Mail.Read or Files.Read.All granted app-only

medium
    • delegated scopes consented tenant-wide (consentType AllPrincipals) beyond basic sign-in, for example Mail.Read or Files.ReadWrite.All

low
    • per-user delegated scopes and basic sign-in scopes such as User.Read, openid, profile, and offline_access

Microsoft’s own community Q&A shows how broad something like Directory.Read.All is perceived, and why admins question it. (Microsoft Learn)
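A rubric like that can be encoded so it runs the same way every audit. This version is illustrative only; tune the permission sets to your own risk appetite:

```python
# Illustrative rubric. "app" = application permission, "delegated" = OAuth scope.
# The permission sets below are examples, not an authoritative catalog.
CRITICAL = {"Mail.ReadWrite", "Directory.ReadWrite.All",
            "RoleManagement.ReadWrite.Directory", "AppRoleAssignment.ReadWrite.All"}
LOW_SCOPES = {"openid", "profile", "email", "offline_access", "User.Read"}

def classify(permission, kind, tenant_wide):
    """Return a risk tier for one (permission, kind, consent-breadth) tuple."""
    if kind == "app":
        return "critical" if permission in CRITICAL else "high"
    if tenant_wide and permission not in LOW_SCOPES:
        return "medium"
    return "low"
```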

step 7: add “governance fields” that decide what you do next

Your report should include: a named owner, the business justification, whether the app has signed in recently (from sign-in logs), when the grant was last reviewed, and a disposition (keep, reduce, or revoke).

This is how you avoid spending a week debating an app nobody uses.
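One way to turn those governance fields into a default disposition, so the review meeting starts with a proposal rather than a blank page. The field names and action labels here are this article's suggestions, not a standard:

```python
def next_action(record):
    """Propose a disposition from governance fields.
    Expects keys: usedRecently (bool), owner (str|None), riskTier (str)."""
    if not record["usedRecently"]:
        return "revoke-candidate: confirm with owner, then remove grants"
    if record["owner"] is None:
        return "block-review: assign an owner before re-certifying"
    if record["riskTier"] in ("critical", "high"):
        return "re-certify: owner must restate business justification"
    return "keep: re-review at next cycle"
```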

step 8: decide remediation actions that match how access was granted

Remediation is different for each case: revoke a tenant-wide delegated grant by deleting the oAuth2PermissionGrant; revoke a single user's consent by deleting that user's grant object; remove an application permission by deleting the appRoleAssignment; and for an app that should not exist at all, disable or delete the service principal (after confirming nothing depends on it).

Microsoft’s Entra guidance explicitly supports reviewing and revoking permissions and managing user consent settings and admin consent workflows.
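The revocation calls map onto Graph DELETE requests, sketched here as URL builders so you can dry-run the remediation plan before executing it. The paths are Microsoft Graph v1.0 endpoints:

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def revoke_delegated_grant(grant_id):
    # Deleting the oAuth2PermissionGrant revokes the delegated consent.
    return ("DELETE", f"{GRAPH}/oauth2PermissionGrants/{grant_id}")

def revoke_app_role(sp_id, assignment_id):
    # Removing the appRoleAssignment revokes the application permission.
    return ("DELETE",
            f"{GRAPH}/servicePrincipals/{sp_id}/appRoleAssignments/{assignment_id}")
```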

step 9: close the loop with preventative controls (otherwise your audit decays)

A one-time audit is a snapshot. You want drift control: restrict user consent to low-risk permissions from verified publishers, turn on the admin consent request workflow, alert on new tenant-wide grants, and re-run the inventory on a schedule so each run can be diffed against the last known-good state.

This is also where practitioners converge: many orgs don’t want to “turn off all consent,” they want a controlled request/approve pathway.
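Drift detection can be as simple as diffing grant snapshots between audit runs, with anything new triaged through the same rubric:

```python
def new_grants(previous, current):
    """Diff two snapshots of grant keys, e.g. (clientId, scope) tuples,
    and return what appeared since the last audit run."""
    return sorted(set(current) - set(previous))
```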


wrap-up: the point of the audit is control, not reporting

A real audit outcome isn't a spreadsheet. It's a tenant where user consent is constrained to low-risk, verified-publisher permissions; high-impact grants go through an approval workflow; stale grants and credentials get removed on a schedule; and new tenant-wide grants trigger review instead of silence.

If you want a practical next step that also helps with lead generation: publish (and offer as a download) a tenant app permission audit worksheet with the inventory fields from step 2, the grant fields from steps 3 and 4, the risk rubric from step 6, and the governance columns from step 7.

Add a short form: “Send me the worksheet + the PowerShell/Graph query pack.”

For readers on windows-active-directory.com, this topic pairs naturally with adjacent guides on consent configuration, conditional access, and service principal credential hygiene.

And if you’re using ManageEngine for identity governance or M365 security operations, this is a good place to reference their governance workflows for approvals, reviews, and reporting—while keeping Graph-based exports as your ground truth.


