Detecting stale accounts in Azure AD


A stale account is not “a user who hasn’t logged in for 90 days.” That definition is convenient, but it’s incomplete—and in Entra ID it can be dangerously misleading.

A stale account is an identity object whose continued existence creates risk or cost without delivering current business value. Login inactivity is just one signal. The real question is: does this identity still have an active purpose, and can it still be used to access something valuable?

This matters more today for three reasons:

  1. Cloud access never sleeps. Even if a human stops signing in, service tokens, app passwords, refresh tokens, and non-interactive flows can keep running—or be abused.
  2. B2B grows silently. Guest users accumulate via Teams, SharePoint, and ad-hoc collaboration, then rarely get revisited.
  3. Attackers love forgotten doors. Stale accounts are perfect for persistence, lateral access, and privilege reuse—especially if they retain roles, group memberships, or legacy auth paths.

Microsoft has improved the platform signals (notably signInActivity and related properties), and Entra Governance can automate the hygiene. But the winning approach is still evidence-first, staged, and reversible—not a one-shot deletion spree.

Below is a deep, practical, and comparison-style guide: portal vs Graph vs PowerShell vs access reviews, plus the real-world pitfalls practitioners keep hitting.


What “stale” really means in Entra ID

Most teams start with: “last interactive sign-in time older than N days → stale.”

That’s a reasonable first pass because Entra’s user list can show Last interactive sign-in time and lets you sort/filter quickly.

But that surface view fails in common scenarios:

  • Non-interactive activity continues. A user might stop logging in interactively but still generate non-interactive sign-ins through background clients or token refreshes (or vice versa).
  • Service accounts don’t behave like humans. Some never sign in interactively by design.
  • Guests often look “inactive” even when risk is high. They may never accept the invite, or their activity may not show the way you expect.
  • Sign-in activity can be delayed or inconsistent in exports/APIs. People repeatedly report gaps or lag between portal sign-ins and Graph-returned signInActivity.

So, instead of treating stale detection as a single threshold on one timestamp, treat it as an inference problem: build confidence using multiple signals, then act in stages.


The three signals you are actually measuring

At a foundational level, detecting stale accounts in Entra ID reduces to three measurable questions:

1) Can the identity authenticate?

If yes, it can be abused. Whether it has authenticated recently is secondary to whether it can authenticate today.

Key primitives:

  • accountEnabled (enabled/disabled)
  • authentication methods present (MFA methods, FIDO keys, app passwords, etc.)
  • conditional access coverage
  • risky user / risky sign-in indicators (when available)

2) Does the identity still hold authorization?

Even if it never signs in, membership and roles can remain:

  • directory roles
  • privileged access group memberships
  • app role assignments
  • Azure RBAC (outside Entra user object, but still tied)

This is why stale detection must always include “what can this account reach?”

3) Is there evidence of recent use?

This is where sign-in timestamps matter—but you need to interpret them correctly.

Microsoft Graph exposes signInActivity on the user object, providing last interactive and non-interactive sign-in attempt times, plus a newer last-successful-sign-in property.

And Microsoft Learn documents practical management patterns for inactive/obsolete accounts and emphasizes using the admin center + Graph.

These three signals—authenticate, authorize, evidence of use—create a robust mental model: stale is the intersection of “no recent evidence” + “still has power” + “still can authenticate.”
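That intersection can be expressed as a tiny predicate. The sketch below is illustrative, not Graph code: the field names (can_authenticate, holds_authorization, last_evidence_of_use) are made-up summaries of the primitives listed above, and "no evidence" is simplified to a single timestamp.

```python
# Illustrative sketch of the three-signal intersection; field names are
# made-up summaries of the primitives above, not Graph property names.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class IdentitySignals:
    can_authenticate: bool                    # accountEnabled + usable auth methods
    holds_authorization: bool                 # roles, groups, or app assignments remain
    last_evidence_of_use: Optional[datetime]  # most recent sign-in of any kind, if known

def is_stale_candidate(s: IdentitySignals, window_days: int = 90) -> bool:
    """Stale = no recent evidence AND can still authenticate AND still has power."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    no_recent_evidence = s.last_evidence_of_use is None or s.last_evidence_of_use < cutoff
    return no_recent_evidence and s.can_authenticate and s.holds_authorization
```

Note that a disabled account, or one stripped of all assignments, drops out of the candidate set: it may still be clutter, but it is no longer a stale-account *risk* in this model.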


Build a trustworthy stale-account detection pipeline (portal vs Graph vs PowerShell vs access reviews)

This section is intentionally implementation-heavy and designed to be copy/paste friendly. It also includes “gotchas” practitioners have surfaced in the field.

Step 0: pick your definition, but make it tiered

Use tiers instead of a single cutoff:

  • Tier A (attention): no interactive sign-in for 30–45 days
  • Tier B (likely stale): no interactive and no non-interactive sign-in for 90 days
  • Tier C (cleanup candidate): Tier B + no licenses assigned + no group/role/app assignments of consequence
  • Tier D (high risk stale): Tier B + privileged role/group membership (handle urgently)

This tiering prevents a classic failure: deleting “inactive” identities that are actually service principals in disguise, automation mailboxes, break-glass accounts, or edge-case admins.
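The tiering above can be sketched as a small classifier. The inputs are illustrative values you would derive from directory data (day counts since each sign-in type, plus license, assignment, and privilege flags); accounts with no recorded sign-in at all go to a separate "unknown" bucket rather than being auto-classified.

```python
# A sketch of the tier classifier; inputs are illustrative values derived
# from directory data. None means "no sign-in of that kind ever recorded".
from typing import Optional

def assign_tier(days_no_interactive: Optional[int],
                days_no_noninteractive: Optional[int],
                licensed: bool,
                has_assignments: bool,
                privileged: bool) -> Optional[str]:
    if days_no_interactive is None and days_no_noninteractive is None:
        return "unknown"  # no recorded sign-in at all: review separately, never auto-delete
    di = days_no_interactive if days_no_interactive is not None else 10**6
    dn = days_no_noninteractive if days_no_noninteractive is not None else 10**6
    if di >= 90 and dn >= 90:          # Tier B baseline: quiet on both channels
        if privileged:
            return "D"                 # high risk stale: privileged and quiet
        if not licensed and not has_assignments:
            return "C"                 # cleanup candidate: quiet and holds nothing
        return "B"                     # likely stale
    if di >= 30:
        return "A"                     # attention: interactively quiet only
    return None                        # active
```

Note the ordering: the privileged check wins over the "holds nothing" check, so a quiet admin account can never be filed as a routine cleanup candidate.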


Option 1: Entra admin center (fast triage, weakest automation)

Best for: quick audits, small tenants, and validating “does the data look sane?”

Microsoft documents the core flow:

  • Go to Identity → Users → All users
  • Edit columns and add Last interactive sign-in time
  • Requires appropriate permissions (least-privileged often includes Reports Reader)

What this gets you

  • Immediate visibility
  • Easy sorting/export

Where it fails

  • Hard to scale for recurring governance
  • Hard to incorporate non-interactive use, role exposure, or exception logic
  • Easy to turn into a “once-a-year cleanup” ritual (the worst-case pattern)

Use this UI path to validate your threshold assumptions, then move to Graph/PowerShell for repeatability.


Option 2: Microsoft Graph signInActivity (the canonical data plane, with real constraints)

Best for: reliable, repeatable reporting pipelines, integration with ticketing/governance, and automation.

The object that matters: signInActivity

Microsoft Graph defines signInActivity as last interactive/non-interactive sign-in attempt times, with newer “last successful” available (not backfilled).

Key properties you’ll typically use

  • signInActivity.lastSignInDateTime (interactive)
  • signInActivity.lastNonInteractiveSignInDateTime (non-interactive)
  • signInActivity.lastSuccessfulSignInDateTime (newer capability; commonly noted as not backfilled for older activity) (Microsoft Learn)

Licensing and permissions (this blocks many scripts)

A recurring issue: Graph calls that work in one tenant fail in another because the tenant lacks premium licensing or required permissions.

Microsoft troubleshooting guidance explicitly notes needing Entra ID Premium P1 or P2 and relevant Graph permissions for sign-in activity scenarios. (Microsoft Learn)

In practice, you will see people succeed with delegated permissions like:

  • AuditLog.Read.All
  • Directory.Read.All
  • User.Read.All

…but licensing still matters when the service decides whether to return the fields.

Graph query patterns

A) Pull users with sign-in activity fields

  • Use $select and request signInActivity along with identity fields.

B) Filter by inactivity
Some engineers use $filter with comparisons on signInActivity/lastSignInDateTime (careful: edge cases exist, and behavior can differ). An example pattern is discussed in practitioner write-ups. (Pariswells.com)
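Patterns (A) and (B) combine into one request URL. This sketch follows Microsoft's documented inactive-account query shape; verify exact filter support, permissions, and licensing against the current Graph docs before depending on it.

```python
# Sketch: build a Graph request URL that selects identity fields plus
# signInActivity and filters on lastSignInDateTime. The endpoint and filter
# shape follow Microsoft's documented inactive-account query; verify exact
# filter support and required permissions/licensing before relying on it.
from datetime import datetime, timedelta, timezone
from urllib.parse import quote

GRAPH_USERS = "https://graph.microsoft.com/v1.0/users"

def inactive_users_url(days: int) -> str:
    cutoff = (datetime.now(timezone.utc) - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    select = "id,userPrincipalName,userType,accountEnabled,signInActivity"
    flt = f"signInActivity/lastSignInDateTime le {cutoff}"
    return f"{GRAPH_USERS}?$select={select}&$filter={quote(flt)}"
```

Page through results with @odata.nextLink as usual; large tenants will not fit in one response.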

C) Handle missing signInActivity
It can be absent:

  • when a user never signed in
  • when invitation wasn’t redeemed (guests)
  • when data hasn’t propagated yet

This is discussed in community Q&A and aligns with practitioner observations. (Stack Overflow)

Operational rule: treat “missing signInActivity” as unknown, not “inactive.” In governance, unknown is a separate bucket.
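That bucketing rule can be sketched directly. The record keys mirror Graph's camelCase property names; the helper itself is an illustrative assumption, not a library function.

```python
# Sketch: bucket a user record by evidence of use, treating a missing
# signInActivity as "unknown" rather than "inactive". Keys mirror Graph's
# camelCase property names; the helper itself is illustrative.
from datetime import datetime

def activity_bucket(user: dict, cutoff: datetime) -> str:
    activity = user.get("signInActivity")
    if not activity:
        return "unknown"  # never signed in, unredeemed guest, or data not yet propagated
    stamps = [activity.get("lastSignInDateTime"),
              activity.get("lastNonInteractiveSignInDateTime")]
    parsed = [datetime.fromisoformat(s.replace("Z", "+00:00")) for s in stamps if s]
    if not parsed:
        return "unknown"
    return "active" if max(parsed) >= cutoff else "inactive"
```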


Option 3: Microsoft Graph PowerShell (most practical for most admins)

Best for: security/identity teams who want automation without building a full app.

A practical approach is:

  1. Connect with required scopes
  2. Pull all users including signInActivity
  3. Normalize timestamps and export to CSV for review

This pattern is widely used in the field. (Get Practical)

What to add to make it “production-grade”

1) Always export a “decision record”
Include:

  • UPN, objectId
  • userType (Member/Guest)
  • createdDateTime
  • accountEnabled
  • last interactive
  • last non-interactive
  • last successful (if present)
  • license count (or presence)
  • privileged role/group indicator
  • action recommendation tier (A/B/C/D)
  • reviewer and decision date fields

2) Expect data latency
Some admins report signInActivity appears delayed (hours) compared to portal sign-in logs. (Microsoft Learn)

So run the pipeline on a schedule, but avoid making “disable” decisions from a single run. Use two-pass confirmation (e.g., 48 hours apart) for borderline cases.
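Two-pass confirmation is just set intersection over run results. A minimal sketch, with account IDs as opaque strings:

```python
# Sketch of two-pass confirmation: act only on accounts flagged stale in
# both runs; anything that flips between runs is deferred to the next cycle.
def confirmed_stale(run1, run2):
    """IDs flagged in both runs (e.g. 48 hours apart): safe to move to the disable stage."""
    return set(run1) & set(run2)

def deferred(run1, run2):
    """IDs flagged in only one run: possible data latency or real activity; re-check."""
    return set(run1) ^ set(run2)
```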

3) Treat “privileged identities” as separate
If your stale account has admin roles or sensitive group memberships, your response is different. You can’t handle those like ordinary offboarding.

If you need a deeper security angle, link your stale-account program to privilege monitoring. (A related Entra privilege escalation discussion is covered on Windows Active Directory)


Option 4: Access reviews (the governance-native way to automate cleanup)

Best for: recurring, auditable, low-drama cleanup—especially for guests.

Access reviews can apply recommendations for inactive users. Microsoft’s documentation explains that “inactive” is evaluated from last sign-in, with a 30-day lookback used by the recommendation logic (and app-assignment reviews can use app activity instead).

This is important: Access Reviews turn stale detection into an organizational control, not an admin script.

Where access reviews shine

  • scheduled recurring reviews
  • owner/manager attestation
  • audit trail of decisions
  • automated remove/deny actions (depending on configuration)

Guest cleanup is a common best-fit
Multiple practitioners advocate using access reviews for stale guests as the “easiest” governance route.

How to use this in a mature program

  • Put guests into access-reviewable groups (Teams, SharePoint, app groups)
  • Run recurring reviews with inactive recommendations enabled
  • Auto-remove users marked “deny” after the review completes
  • Keep an exception process for high-trust partners

Implications and silent dependencies (what the system design forces you to accept)

Interactive vs non-interactive is not a “nice-to-have”

Interactive sign-ins reflect human presence. Non-interactive sign-ins reflect background activity and token behavior. If you ignore either:

  • you will misclassify real usage as inactivity, or
  • you will miss persistent access that still matters.

Microsoft explicitly distinguishes these sign-in types in its monitoring concepts. (Microsoft Learn)
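The misclassification is easy to demonstrate numerically. A minimal sketch with made-up timestamps: a human stopped signing in months ago, but token refreshes continued until two days ago.

```python
# Illustrative timestamps showing the misclassification an interactive-only
# view produces; the dates are made up.
from datetime import datetime, timezone

def days_inactive(now, *timestamps):
    seen = [t for t in timestamps if t is not None]
    return (now - max(seen)).days if seen else None

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
last_interactive = datetime(2025, 1, 1, tzinfo=timezone.utc)      # human stopped signing in
last_non_interactive = datetime(2025, 5, 30, tzinfo=timezone.utc) # token refreshes continued

interactive_only = days_inactive(now, last_interactive)                    # 151 days "inactive"
both_signals = days_inactive(now, last_interactive, last_non_interactive)  # 2 days: clearly in use
```

An interactive-only pipeline would file this account as Tier B material; the full picture shows live access that still needs governing.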

“Last sign-in” may not equal “last successful sign-in”

A key nuance: some fields track attempts, and newer fields aim to represent “successful” sign-ins, but they may not be backfilled. (Microsoft Learn)

That changes how you interpret “staleness” during a transition period.

API data quality is a real operational constraint

Forum and community reports include:

  • delayed availability
  • discrepancies vs portal
  • odd “phantom” objects/IDs in some filtering behaviors

These are not reasons to avoid the APIs. They’re reasons to design your pipeline with uncertainty handling. (Reddit)


Common misunderstandings, risks, and correctives

Misunderstanding 1: “inactive = safe”

Inactive accounts can still be the easiest way in if:

  • they have weak auth controls,
  • they keep group memberships,
  • they bypass modern controls (legacy auth, no CA coverage).

Corrective: treat inactivity as “low visibility,” not “low risk.”

Misunderstanding 2: “delete is the cleanup”

Deletion is the last step. Disabling first is safer because it is reversible, and it exposes dependencies you didn’t know existed.

A staged program is also recommended in mature cleanup playbooks (including hybrid contexts). (Windows Active Directory)

Misunderstanding 3: “guests don’t matter”

Guests often land in Teams, SharePoint, and app groups, then linger. They can retain access long after the business relationship ends.

Access reviews are designed for this exact failure mode.

Misunderstanding 4: “the timestamp is the truth”

Timestamps are signals, not truth. Data lag and inconsistencies happen.

Corrective: use two-pass verification and tiering; don’t build one-shot kill scripts.


Expert essentials checklist (non-negotiables)

  • Track both interactive and non-interactive sign-ins. (Microsoft Learn)
  • Treat missing signInActivity as unknown, not inactive. (Stack Overflow)
  • Separate privileged accounts into a different workflow. (Windows Active Directory)
  • Use a staged response: report → validate → disable → observe → delete. (Microsoft Learn)
  • Prefer Access Reviews for recurring guest and group access hygiene. (Microsoft Learn)
  • Expect licensing/permissions constraints for Graph sign-in activity. (Microsoft Learn)

Applications, consequences, and what comes next

License optimization is the easy win—but security is the real prize

Most organizations begin because unused licenses cost money. That’s fine. But stale-account cleanup becomes far more valuable when tied to:

  • conditional access coverage gaps
  • privileged access management
  • B2B lifecycle governance
  • incident response readiness (fewer unknown identities)

Hybrid reality: “source of truth” matters

If you sync identities from on-prem AD, cleanup must respect where the authoritative lifecycle lives. A cloud-only deletion might rehydrate, or break an expected sync flow.

For hybrid teams, a combined AD + Entra cleanup playbook is the safer posture. (Windows Active Directory)

Expect governance to become more continuous

Microsoft’s direction is clear: more identity governance controls, better activity signals, and more “recommendation” logic. Access review recommendations already incorporate inactivity logic in ways that are tenant-level or app-level depending on review type.

Your advantage is building a pipeline that:

  • measures consistently,
  • handles ambiguity,
  • and turns decisions into auditable controls.

Wrap-up: the cleanest way to think about detecting stale accounts

Detecting stale accounts in Azure AD / Microsoft Entra ID is not a reporting exercise. It’s a governance system.

If you remember one idea, make it this:

An account is stale when it lacks recent evidence of legitimate use, but still retains the ability to authenticate and the authorization to access something meaningful.

Use the portal for quick checks. Use Graph/PowerShell for repeatable measurement. Use Access Reviews to turn cleanup into a recurring control with human accountability.

If you want a ready-made operational template, or prefer a product-led workflow (delegation, reports, automation), it’s worth reviewing ManageEngine ADManager Plus guidance on inactive account controls and automation. (ManageEngine)
