Trying to understand OAuth often feels like being trapped in a maze of specs, searching for the way out before you can finally do what you actually set out to do: build your application.
While this can be incredibly frustrating, it’s no accident that OAuth is made up of many different RFCs that build upon each other, each adding features in different ways.
In fact, the “core” OAuth spec, RFC 6749, isn’t even called a specification; it’s technically a “framework” you can use to build specifications from. Part of the reason for this is that it leaves a lot of things optional, requiring implementers to make decisions about things like which grant types to support, whether or not refresh tokens are one-time use, and even whether access tokens should be Bearer tokens or use some sort of signed token mechanism.
This has often been cited as OAuth’s biggest failure, but it is also a large part of the reason OAuth has been successfully deployed at scale at the largest companies over the last 10 years.
OAuth has been patched and extended many times over the last decade of deployment experience, in ways the original authors could never have seen coming. Keep in mind that when OAuth 2.0 was published in 2012, the iPhone 5 was brand new, the latest browser from Microsoft was Internet Explorer 9, single-page apps were called “AJAX apps”, and CORS was not yet an established W3C standard.
Since 2012, the web and mobile landscape has changed dramatically. More people access the internet from mobile devices than desktop devices, single-page apps are an extremely common way of creating web apps, and countless password database breaches have time and again demonstrated that storing passwords is dangerous.
It became apparent that a better solution was needed for mobile apps, so PKCE (RFC 7636) was created to provide a way to use the Authorization Code flow without a client secret.
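The idea behind PKCE is that the client generates a one-time secret (the code verifier) and sends only its hash (the code challenge) on the authorization request, so an intercepted authorization code is useless without the original verifier. Here’s a minimal sketch of the S256 verifier/challenge generation in Python; the helper name is mine, not from the spec:

```python
import base64
import hashlib
import secrets

def generate_pkce_pair():
    # The code_verifier is a high-entropy random string (43-128 characters,
    # per RFC 7636). 32 random bytes base64url-encode to 43 characters.
    code_verifier = base64.urlsafe_b64encode(
        secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # The S256 code_challenge is the base64url-encoded (unpadded)
    # SHA-256 hash of the verifier.
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return code_verifier, code_challenge

verifier, challenge = generate_pkce_pair()
# The client sends code_challenge (with code_challenge_method=S256) on the
# authorization request, then proves possession by sending code_verifier
# on the token request; the server recomputes the hash and compares.
```

Because only the hash travels in the front channel, a malicious app that intercepts the authorization code still can’t redeem it at the token endpoint.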
Later, “OAuth 2.0 for Native Apps” (RFC 8252) was published, which recommends that native apps use the Authorization Code flow with the PKCE extension.
A new class of device arose along with a need to use OAuth on them: devices that have no browser or lack a keyboard, such as an Apple TV or a YouTube streaming video encoder. An entirely new OAuth grant, the Device Grant, was created to address this, published as RFC 8628.
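In the Device Grant, the input-constrained device shows the user a short code to enter on a second device, then polls the token endpoint until the user approves. A rough sketch of that polling loop, where `token_request` is a stand-in for a real POST to the token endpoint (the function and parameter names here are mine, not from the RFC):

```python
import time

def poll_for_token(token_request, device_code, interval, timeout=300):
    """Poll the token endpoint until the user authorizes the device.

    token_request(device_code) should return the parsed JSON response
    from the token endpoint as a dict.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = token_request(device_code)
        if "access_token" in response:
            return response
        error = response.get("error")
        if error == "authorization_pending":
            # User hasn't approved yet; wait the advertised interval.
            time.sleep(interval)
        elif error == "slow_down":
            # RFC 8628 says to increase the polling interval by 5 seconds.
            interval += 5
            time.sleep(interval)
        else:
            # e.g. access_denied or expired_token: give up.
            raise RuntimeError(f"device grant failed: {error}")
    raise TimeoutError("user did not authorize in time")
```

The `authorization_pending` and `slow_down` error codes come straight from RFC 8628; everything else about this loop is one plausible way to structure the client.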
So what started out as a list of four grant types has had things added and removed, and now looks more like this.
Which, if you look closely, actually ends up distilling down to this:
So what we’ve effectively done is taken the core OAuth RFC, added and removed things, and turned it into an entirely different set of recommendations. The problem is that it requires reading far too many RFCs to understand this landscape.
If you want to implement a secure OAuth solution today, it requires reading:

- RFC 6749 (OAuth 2.0 Core)
- RFC 6750 (Bearer Tokens)
- RFC 6819 (Threat Model and Security Considerations)
- RFC 8252 (OAuth for Native Apps)
- RFC 8628 (Device Grant)
- OAuth for Browser-Based Apps
- OAuth 2.0 Security Best Current Practice
- RFC 7009 (Token Revocation)
- RFC 8414 (Authorization Server Metadata)

And if you’re also implementing an OAuth server, then you need to read RFC 7519 (JWT), the JWT Best Current Practice, the JWT Profile for Access Tokens, and probably some others that I forgot. That’s a lot of material.
So why am I suggesting an OAuth 2.1? Shouldn’t we instead scrap this existing work and create something simpler and more streamlined?
As it so happens, that effort is already under way as well, led by Justin Richer under the name Transactional Authorization, or TXAuth, and likely to end up as OAuth 3. That effort takes a completely greenfield approach, rethinking how OAuth and all its related specs and extensions, such as UMA, might look if everything were not tied to being an extension of the original OAuth 2.0 spec, RFC 6749. I’m definitely in support of this effort; there are a lot of nice patterns that emerge when you look at it this way. But as we all know, creating a new spec from scratch is not a quick process, much less getting broad adoption within the industry. So while the OAuth 3 effort is under way, and will take literal years to finish, there is room to tidy things up with OAuth 2 in the meantime.
At the OAuth working group meeting in Singapore last month (IETF 106), I led a discussion about OAuth 2.1 and what it should encompass.
My main goal with OAuth 2.1 is to capture the current best practices in OAuth 2.0, as well as its well-established extensions, under a single name. Specifically, that means this effort will not define any new behavior itself; it will only capture behavior already defined in other specs. It also won’t include anything considered experimental or still in progress.
There was a general agreement from folks in the room that something like this is needed, and following the official meeting, I led a two-hour breakout session where a dozen or so of us dove into more details and started making a plan.
Torsten, the author of the Security BCP, made a comment that I thought framed the discussion well: his goal for OAuth 2.1 is that it should make the Security BCP irrelevant, because OAuth 2.1 would already include everything the Security BCP says. In other words, there should be no need for a separate document describing the most secure way to implement OAuth, since that should be the only option available when you read the spec.
We still need to discuss the specifics about what form this document will take, whether that is going to be an entirely new RFC that replaces RFC 6749, or a BCP that references the other specs, or something else entirely. However, the overarching goals are:
The specifics of what will be included in OAuth 2.1 are still up for discussion within the group, and will be on the agenda of the upcoming IETF meetings. The starting point for these discussions is roughly the following.
If an authorization server intends to interoperate with arbitrary resource servers, such as OAuth services and open source projects, then there is an additional set of requirements that includes:
Of course all of these points are currently up for debate, so if you have feelings about them, you should definitely join the mailing list and discuss them!
Currently the biggest question for the group is whether or not OAuth 2.1 should make technically breaking changes. It definitely won’t define anything new itself, but if you look at what the Security BCP says, it requires PKCE for all authorization code grants, even for confidential clients. Since most current deployments of OAuth 2.0 only support PKCE for public clients, that means most current deployments will not be compliant with OAuth 2.1 out of the gate. There are arguments on both sides of this, which was a large part of the discussion during the breakout session last month, with no clear consensus.
These are the kinds of questions that will be discussed in the coming months within the group. If you have thoughts, I would be more than happy to hear them! Feel free to send an email to the list directly, or even write a blog post in response!
I’m excited for this work to kick off, and looking forward to many more discussions going forward!
Now is your chance to join and ask me your OAuth questions!
💻 Tuesday 11am-12pm PST: OktaDev Office Hours
📕 Wednesday 10am-1pm PST: Intro to OAuth 2.0 with O'Reilly
🔐 Thursday 11am-12pm PST: Protecting Your APIs with OAuth
Tue Dec 10 11am-12pm PST
Join me and my Okta coworkers for our latest edition of virtual office hours! We'll be streaming on YouTube and taking questions via the chat! Make sure to subscribe to OktaDev on YouTube and click the bell icon to be notified when we go live! You can ask us anything about OAuth or user management in .NET!
Wed Dec 11 10am-1pm PST
In this three-hour workshop hosted by O'Reilly, I'll be covering the basics of OAuth 2.0 and working through the most common grant types. We'll be doing some exercises to demonstrate the OAuth flows from scratch. Access to this workshop requires an O'Reilly Safari Online subscription.
Thu Dec 12 11am-12pm PST
In this free webinar, I'll be diving into how you can use OAuth to protect your APIs, covering things like how to use JWT access tokens and the tradeoffs that come with them, how to design scopes to allow granular access, and how to leverage a microservices architecture protected by OAuth at a gateway.
We'll be giving away a copy of my book to one lucky winner as well!