Windows Active Directory

Monitoring risky sign-ins with Identity Protection in Entra ID

Picture this: a perfectly valid user signs in to Microsoft 365 at 9:02 AM. Same username. Correct password. Same app. Nothing “fails.” Yet the session originates from an anonymizing network, from a geography your tenant has never seen for that user, using an unfamiliar device and browser fingerprint. If you only watch failed sign-ins, you’ll miss it.

That gap is exactly what monitoring risky sign-ins with Identity Protection in Microsoft Entra ID is meant to close.

Here’s the clean definition you can lift into a snippet:

Monitoring risky sign-ins with Microsoft Entra ID Protection means continuously detecting and investigating suspicious authentication attempts (sign-in risk) and compromised accounts (user risk), then enforcing automated controls through Conditional Access—typically requiring MFA, forcing password reset, or blocking access—based on Microsoft’s risk signals.

We go beyond the usual “click here in the portal” overview. We’ll treat Entra ID Protection as a risk engine, break down what it’s measuring, and build a monitoring + response loop that stays useful after the first week—when false positives, service accounts, VPNs, and real attackers start blending together.


Why this matters now

Identity attacks are optimized for “successful sign-in.” Phishing kits and adversary-in-the-middle tooling aim to steal sessions, bypass basic MFA, and reuse tokens. That means you need controls that respond to risk context—not just credentials.

Microsoft Entra ID Protection exists because Microsoft can see patterns you cannot: global IP reputation, malware-linked infrastructure, leaked credential telemetry, and behavioral anomalies across a massive identity footprint. The output is a risk score at two levels: the sign-in and the user.

If you don’t operationalize those two signals, Entra ID Protection becomes a fancy report you check after an incident.


The surface view (and why it’s incomplete)

Most “AI overview” style explanations stop here:

  • Entra ID Protection detects risky sign-ins and risky users.
  • You review them in the Identity Protection reports.
  • You wire the risk levels into Conditional Access policies.

All true. But incomplete.

Because in real environments, the hard problems are these:

  • What do you actually do with Medium risk, where most of the noise lives?
  • How do you stop corporate VPNs and security proxies from looking like anonymous infrastructure?
  • How do you exclude service accounts and break-glass identities without creating blind spots?
  • How do you tell organizational change (new laptops, new ISP) apart from a real attacker blending in?

To answer those, we need first principles.


What “risk” actually means in Entra ID Protection

Risk scoring in Entra ID Protection is not a morality judgment. It’s a probability estimate: how likely is it that this sign-in was not performed by the legitimate user (sign-in risk), and how likely is it that this account itself is compromised (user risk)?

The irreducible model

At its core, risk-based identity defense reduces to four steps:

  1. Observe: collect authentication signals (IP, ASN, device, location, session traits, threat intel).
  2. Infer: compare those signals to known bad infrastructure and to that user’s baseline.
  3. Decide: classify the event/user into low/medium/high risk.
  4. Act: enforce step-up auth, force credential reset, or block.

Entra ID Protection covers steps 1–3 and feeds step 4 into Conditional Access.
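The four steps above can be sketched as a toy classifier. This is illustrative only: Microsoft does not publish its scoring model, and the signal names and rules below are invented for the sketch.

```python
# Toy sketch of the observe -> infer -> decide -> act loop.
# The rules here are illustrative; Entra's actual model is proprietary.
from dataclasses import dataclass

@dataclass
class SignInSignals:
    ip_is_anonymized: bool       # e.g. Tor or an anonymizing VPN egress
    ip_is_known_malicious: bool  # threat-intel match
    location_seen_before: bool   # part of this user's baseline
    device_seen_before: bool

def infer_risk(s: SignInSignals) -> str:
    """Steps 2-3: compare signals to threat intel and baseline, classify."""
    if s.ip_is_known_malicious:
        return "high"
    if s.ip_is_anonymized and not s.location_seen_before:
        return "high"
    if not (s.location_seen_before or s.device_seen_before):
        return "medium"
    return "low"

def act(risk: str) -> str:
    """Step 4: map the risk level to a Conditional Access-style control."""
    return {"high": "block_or_password_reset",
            "medium": "require_mfa",
            "low": "allow"}[risk]
```

The separation matters for the same reason it matters in Entra: detection (steps 1–3) can evolve independently of enforcement (step 4), which lives in Conditional Access.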

Why “risk” produces surprising behavior

From this model, a few non-obvious truths fall out:

  • Risk can arrive after the sign-in succeeds: offline detections evaluate later, so a session that looked clean at 9:02 AM can be flagged hours afterward.
  • Legitimate change looks risky: a new VPN, a new ISP, or a laptop rollout shifts the baseline and raises detections until the model relearns it.
  • Risk can clear itself: when a user passes strong authentication, sign-in risk is treated as self-remediated, so a quiet report doesn’t prove nothing happened.


What Entra actually detects (and how to think about detections)

Microsoft documents risk detections as a catalog, including which detections are real-time vs offline, and which require P2 for full detail.

A useful way to understand detections is to group them by what kind of claim they make:

1) Infrastructure risk (the network is suspicious)

Examples include anonymous IP and malicious IP intelligence. This category is strong because it doesn’t depend on the user’s baseline—only on whether the source network is known-bad or intentionally obfuscated.

Operational implication: infrastructure risk often deserves immediate friction (MFA at minimum), but it can also generate noise if your company routes traffic through anonymized egress or security proxies.

2) Baseline deviation (this user doesn’t do this)

Detections like unfamiliar sign-in properties are based on observed history: IP, ASN, location, device, browser, tenant subnet, and related properties.

Operational implication: baseline-based detections are powerful, but sensitive to organizational changes (new VPN, new ISP, new laptop rollout).

3) Credential compromise signals (the secret is likely exposed)

Detections such as leaked credentials map more directly to account takeover risk.

Operational implication: treat these as “assume breach until proven otherwise.” You usually want password reset + session revocation patterns, not just MFA.

4) Correlated behavior (this pattern looks like an attacker)

Password spray, atypical patterns, token anomalies—these are less about one attribute and more about attacker-shaped behavior at scale.

Operational implication: you want cross-user correlation, which is where Sentinel/SIEM shines.
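As an illustration of that correlation, here is a minimal sketch of spray detection over exported sign-in events. The field names (`ip`, `user`) are placeholders for whatever your export schema uses, not the Entra log schema.

```python
from collections import defaultdict

def spray_candidates(sign_ins, min_users=10):
    """Flag source IPs that touched many distinct accounts - the shape of
    a password spray that per-user portal views won't surface."""
    users_by_ip = defaultdict(set)
    for event in sign_ins:
        users_by_ip[event["ip"]].add(event["user"])
    return {ip: len(users) for ip, users in users_by_ip.items()
            if len(users) >= min_users}
```

Per-user views show one odd sign-in each; only the IP-level rollup reveals the shared source.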


The monitoring loop that actually works

A mature setup has three layers:

  1. Portal monitoring for quick investigation and human decisions.
  2. Automated enforcement to reduce time-to-containment.
  3. Telemetry + hunting to discover patterns that policy alone won’t catch.

Microsoft’s risk reports and investigation guidance give you the base mechanics.
Your job is to turn them into an operating rhythm.


How to monitor risky sign-ins (end-to-end)

This is the “pure technical” portion. Treat it like an implementation runbook you can adapt.

Prerequisites and access model

Before you build anything, confirm:

  • Licensing: Microsoft Entra ID P2 for the users you will enforce risk policies on; lower tiers expose limited detection detail.
  • Roles: least-privilege reads for investigation (e.g., Security Reader), remediation rights for responders, and a Conditional Access-capable role only for the engineers who change policy.
  • Conditional Access fundamentals, since risk policies are enforced through it.

If you need a refresher on Conditional Access mechanics and scoping, see WAD’s guide: How to use Azure AD Conditional Access to enforce access policies.


Step 1: Establish your baseline views (risk reports that matter)

In the Entra admin center, Identity Protection provides three key reports:

  • Risky users: accounts whose user risk is elevated.
  • Risky sign-ins: individual authentication attempts flagged with sign-in risk.
  • Risk detections: the underlying detection events (anonymous IP, unfamiliar sign-in properties, leaked credentials, and so on).

Microsoft describes how to access, filter, and use these reports (including marking events as confirmed compromised or dismissing them).

How to use these views in practice (not just “look at them”):

  1. Start in Risk detections to see which kinds of indicators are firing.
  2. Pivot to Risky sign-ins to validate individual sessions (IP, location, device, app).
  3. Finish in Risky users to make the account-level call: remediate, confirm compromised, or dismiss.

This sequence matches how incidents unfold: first you see an indicator (detection), then you validate a session, then you decide account-level remediation.
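The same reports are exposed through Microsoft Graph, which is useful once you automate the review. A hedged sketch, assuming an app granted `IdentityRiskEvent.Read.All` and a token obtained elsewhere:

```python
# Sketch: pull recent detections from the Graph riskDetections endpoint.
# Token acquisition (MSAL, client credentials, etc.) is out of scope here.
import json
import urllib.parse
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def detections_filter(risk_level: str, since_iso: str) -> str:
    """Build the OData $filter used to narrow the report."""
    return f"riskLevel eq '{risk_level}' and detectedDateTime ge {since_iso}"

def fetch_detections(token: str, risk_level: str = "high",
                     since_iso: str = "2025-01-01T00:00:00Z") -> list:
    qs = urllib.parse.urlencode(
        {"$filter": detections_filter(risk_level, since_iso)})
    req = urllib.request.Request(
        f"{GRAPH}/identityProtection/riskDetections?{qs}",
        headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("value", [])
```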


Step 2: Define your enforcement strategy as two policies (minimum viable, high value)

You need two Conditional Access policies, mapped to the two risk signals.

Policy A: sign-in risk policy (session containment)

Goal: if the sign-in is risky, require strong proof.

Microsoft’s risk policy guidance emphasizes that strong authentication (usually MFA/passwordless) is how sign-in risk self-remediation happens.

Practical tuning choice: enforcing on High only keeps friction low but misses attacks that score Medium; enforcing on Medium and above catches more but generates more prompts. Start narrow, measure, then widen.

A realistic rollout is: High (week 1) → Medium+High (after tuning).
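Expressed as configuration, Policy A is a conditionalAccessPolicy object you could POST to Graph’s `/identity/conditionalAccess/policies`. A sketch under assumptions: the display name is this example’s choice, the exclusion IDs are placeholders, and the report-only default keeps the rollout safe.

```python
def sign_in_risk_policy(risk_levels, exclude_users,
                        state="enabledForReportingButNotEnforced"):
    """Policy A: risky sign-in -> require MFA. Start report-only, then
    flip state to "enabled" once the measured impact looks right."""
    return {
        "displayName": "Policy A - sign-in risk: require MFA",
        "state": state,
        "conditions": {
            "signInRiskLevels": risk_levels,  # ["high"] week 1, later ["medium", "high"]
            "users": {"includeUsers": ["All"],
                      "excludeUsers": exclude_users},  # break-glass object IDs
            "applications": {"includeApplications": ["All"]},
            "clientAppTypes": ["all"],
        },
        "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
    }
```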

Policy B: user risk policy (account recovery)

Goal: if the account is likely compromised, rotate the secret.

Microsoft documents that user risk can be remediated by SSPR password reset, or by secure password change patterns, depending on your configuration.
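Policy B looks nearly identical at the Graph level; the notable difference is the grant control. A sketch, with the same placeholder exclusions:

```python
def user_risk_policy(exclude_users,
                     state="enabledForReportingButNotEnforced"):
    """Policy B: likely-compromised account -> rotate the secret."""
    return {
        "displayName": "Policy B - user risk: secure password change",
        "state": state,
        "conditions": {
            "userRiskLevels": ["high"],
            "users": {"includeUsers": ["All"],
                      "excludeUsers": exclude_users},  # break-glass object IDs
            "applications": {"includeApplications": ["All"]},
            "clientAppTypes": ["all"],
        },
        # Graph pairs passwordChange with mfa under AND: prove identity
        # first, then rotate the credential.
        "grantControls": {"operator": "AND",
                          "builtInControls": ["mfa", "passwordChange"]},
    }
```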


Step 3: Handle the hardest part: exclusions that don’t create holes

Every tenant needs exceptions. The trick is to keep exceptions explicit, small, and monitored.

Common exclusion categories:

  1. Break-glass accounts (emergency access)
    • Exclude them, but lock them down with long random passwords, restricted sign-in locations, and continuous alerting.
  2. Service accounts / non-interactive identities
    • Many don’t behave like humans. They break baseline detectors.
  3. High-privilege admins
    • Often you should tighten policies, not loosen them. But avoid locking yourself out.

A useful pattern is: exclude break-glass from risk policies, but apply separate strict controls to them (limited locations, monitored sign-ins).

We have already covered monitoring and exporting Entra logs to external systems, which matters because “excluded” accounts should be more monitored, not less.


Step 4: Build a repeatable investigation workflow (triage → decision → evidence)

Microsoft provides an investigation framework for risky users/sign-ins/detections.
The important part is turning it into a consistent checklist so two different engineers reach the same decision.

Triage checklist for a risky sign-in:

  • What fired? (detection type: infrastructure, baseline deviation, credential exposure, or correlated behavior)
  • Where from? (IP, ASN, location versus this user’s history)
  • What device and client? (known device? compliant? legacy protocol?)
  • Did authentication fully succeed, and did MFA complete?
  • Any related events? (other risky sign-ins for the same user, or from the same IP across users)

Decision outcomes (keep it binary):

  • Compromised: confirm compromised, reset the password, and revoke sessions.
  • Benign: dismiss the risk and record why, so the next engineer doesn’t re-investigate.

Microsoft supports giving risk feedback (confirm compromised / dismiss), which helps adjust risk and improves signal accuracy over time.
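That feedback step can be automated at the end of the triage workflow using the Graph `riskyUsers/confirmCompromised` and `riskyUsers/dismiss` actions. A sketch; the app would need `IdentityRiskyUser.ReadWrite.All`, and token handling is omitted:

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def risk_decision_request(token: str, user_ids: list, compromised: bool):
    """Build the POST that records the triage decision back in Entra."""
    action = "confirmCompromised" if compromised else "dismiss"
    return urllib.request.Request(
        f"{GRAPH}/identityProtection/riskyUsers/{action}",
        data=json.dumps({"userIds": user_ids}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# urllib.request.urlopen(risk_decision_request(token, ids, True)) sends it.
```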


Step 5: Export telemetry and hunt patterns (because attackers don’t attack one user)

Portal investigation is necessary, but insufficient.

To detect mass spraying, geo waves, or targeted admin attacks, you want correlation across sign-ins.

Two practical routes:

Route A: Microsoft Sentinel / Log Analytics

We already outlined the “send logs to Sentinel” direction in our monitoring article.
Once in Sentinel, you can:

  • correlate risky sign-ins across users to surface sprays and geo waves;
  • join risk detections with sign-in, device, and application telemetry;
  • build workbooks for the weekly review and analytics rules for automated response.
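For example, a spray-shaped hunt over the standard `SigninLogs` table might look like the KQL below (held here as a string; run it in the Logs blade or submit it through the Log Analytics query API). The threshold of 10 distinct targets is an arbitrary starting point to tune.

```python
# KQL hunting query: one source IP touching many distinct users at
# elevated sign-in risk within the last day.
SPRAY_HUNT_KQL = """
SigninLogs
| where TimeGenerated > ago(1d)
| where RiskLevelDuringSignIn in ("medium", "high")
| summarize targets = dcount(UserPrincipalName), attempts = count() by IPAddress
| where targets >= 10
| order by targets desc
"""
```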

Route B: third-party reporting and dashboards

If your goal is “prebuilt reporting, delegation, and M365-focused dashboards,” ManageEngine’s guide is a good complement.

Use it when you want clean reporting without building Sentinel workbooks.


Step 6: Validate user experience and avoid silent bypass

Two key realities:

  • Users can self-remediate: passing strong authentication clears sign-in risk, so a risky session can quietly become “safe” without anyone looking at it.
  • Attackers route around risk: legacy authentication protocols and stolen session tokens can sidestep interactive sign-in checks entirely.

So you should harden adjacent controls:

  • block legacy authentication;
  • require phishing-resistant MFA for administrators;
  • revoke sessions (not just reset passwords) when you confirm compromise.

This is where “monitoring risky sign-ins” becomes a broader identity assurance program, not a single report.


Comparison: portal-only monitoring vs risk-driven operations

A good long-form comparison is to ask: what breaks as you scale?

Portal-only approach (common, fragile)

Strengths

  • Zero build effort: the reports are already there.
  • Workable for small tenants with a handful of risky events per week.

Failure modes

  • No cross-user correlation, so sprays and geo waves stay invisible.
  • Review cadence slips, and risk becomes a report you check after an incident.
  • No automated containment between reviews.

Risk-driven operations (what you’re building)

Strengths

  • Automated enforcement (MFA, password reset, block) shrinks time-to-containment.
  • Exported telemetry enables hunting and cross-user correlation.
  • Checklist-driven investigations produce consistent decisions.

Trade-offs

  • Requires P2 licensing and ongoing tuning effort.
  • Exclusions and report-only rollouts need governance.
  • More moving parts: policies, exports, workbooks.


Practical tuning guidance (what usually causes pain)

These are the issues that repeatedly show up in real tenants:

Corporate VPNs and “anonymous IP” tension

If your company uses egress that resembles anonymization or shared exit nodes, you can see more anonymous-IP hits. Treat this as a design problem:

  • register corporate egress ranges as trusted named locations so known infrastructure stops looking anonymous;
  • where possible, give security proxies and VPN concentrators stable, dedicated exit IPs;
  • verify that hits actually drop after the change, rather than dismissing detections wholesale.
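One concrete lever is registering corporate egress as a trusted named location. A sketch of the Graph payload for `POST /identity/conditionalAccess/namedLocations`; the display name and CIDR range are placeholders:

```python
def trusted_egress_location(name: str, cidrs: list) -> dict:
    """Payload marking known corporate egress as trusted, so it stops
    resembling anonymous infrastructure in location-based detections."""
    return {
        "@odata.type": "#microsoft.graph.ipNamedLocation",
        "displayName": name,
        "isTrusted": True,
        "ipRanges": [{"@odata.type": "#microsoft.graph.iPv4CidrRange",
                      "cidrAddress": c} for c in cidrs],
    }
```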

Unfamiliar sign-in properties after infrastructure changes

Large changes create baseline churn: new laptops, new ISP, new proxy. The right response is not “turn off risk.” It’s:

  • announce the change window to whoever triages risk, so detections get context;
  • expect a temporary spike in unfamiliar-properties detections and staff for it;
  • dismiss confirmed-benign events individually so the baseline relearns, rather than blanket-excluding users.

Admin accounts deserve different treatment

Admins should not get “normal user” policies. They should get:

  • stricter risk thresholds (enforce at Medium, not just High);
  • phishing-resistant MFA rather than basic prompts;
  • shorter session lifetimes and heavier sign-in monitoring.


A short “do this first” checklist (if you want results this week)

  1. Confirm you have Entra ID P2 coverage for the users you’ll enforce on.
  2. Review Risk detections for the last 7–30 days and identify top detection types.
  3. Create Policy A: Sign-in risk = High → require MFA.
  4. Create Policy B: User risk = High → require password change.
  5. Exclude break-glass accounts, then separately monitor them heavily.
  6. Set a weekly review: top risky sign-ins, top risky users, top detection types, and policy impact.
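Item 6 is easy to script once detections are exported from Graph or your SIEM. A small sketch of the weekly rollup; the field names follow the Graph riskDetection resource (`riskEventType`, `riskLevel`):

```python
from collections import Counter

def weekly_summary(detections: list) -> dict:
    """Aggregate exported risk detections for the weekly review:
    top detection types and the distribution of risk levels."""
    return {
        "by_type": Counter(d["riskEventType"] for d in detections),
        "by_level": Counter(d["riskLevel"] for d in detections),
    }
```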

Key takeaways

Monitoring risky sign-ins with Identity Protection in Entra ID is not “a report you check.” It’s a control loop: observe the signals, infer risk, decide a level, and act automatically through Conditional Access—then feed your investigation decisions back in so the loop gets sharper over time.
