
Achieving an ATO via IDOR, then Reviewing the Fix

My first run in with cross-domain cookie leakage.

Welcome

Recently I've been pretty busy settling into a new role performing web application penetration testing, and I thought I'd go over an interesting series of events that occurred. It started with an account takeover (ATO) achieved by leveraging an insecure direct object reference (IDOR) in a client's web application. The flaw essentially let any user reset the password of any other user, then log in with the new password. Being a critical finding, it needed to be fixed ASAP; a fix was pushed the next day and had to be re-reviewed, which is where this gets interesting. I'll start from the top, though.


IDOR

The initial IDOR wasn't an insanely hard find. I was just doing my standard mapping of the application when I ran into it. This mapping involved things like going page to page, noting the elements, functions, parameters, potential places for vulnerabilities to appear, and examining each request that was sent out with each action. I was walking through the password reset flow when I saw a request that looked like this:

POST /api/reset-password
Cookie: Token="<random JWT here>";

{
	"username": "example1",
	"password": "password1"
}

Of course, seeing a POST request directly referencing a UUID or username always gets me excited, so I went to double-check that it was exploitable. I created a new user account, logged in with it, changed the "username" parameter to that new user's, and then logged in with the changed password, which confirmed it was indeed an IDOR. Why an IDOR and not an MFLAC, you may ask? Because of the state of the request itself: it was missing any kind of authentication check, along with any kind of authorization check. Since both were missing, the request was deemed not to be an escalation of privilege, since any unauthenticated user could send it anyway, and there was no documentation written to contradict that. I'm still leaning towards MFLAC on gut feel, but I don't need to split any more hairs.
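To make the gap concrete, here's a minimal Python sketch of what the vulnerable handler likely looked like versus a fixed version. The function names, user store, and request shape are my own invention for illustration, not the target's actual code:

```python
# Toy in-memory user store (illustration only).
USERS = {"example1": "oldpass", "victim": "hunter2"}

def reset_password_vulnerable(body: dict) -> bool:
    # The IDOR: the target account comes straight from the request body,
    # with no check that the caller is authenticated or authorized to
    # act on that account.
    user = body.get("username")
    if user in USERS:
        USERS[user] = body["password"]
        return True
    return False

def reset_password_fixed(session_user: str, body: dict) -> bool:
    # The fix: the target account is derived from the server-side session,
    # never from attacker-controlled input.
    USERS[session_user] = body["password"]
    return True
```

With the vulnerable version, `reset_password_vulnerable({"username": "victim", "password": "pwned"})` succeeds for any caller; the fixed version can only ever touch the caller's own account.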

At this point, it's reported as a critical and the dev team is given a full work-up of the finding. They wanted to get it fixed right away, so they set their dev team on it and the next day pushed a fix to a different environment and requested that it be checked. This is where a unique series of findings comes in.


Strange JWTs

Initially, while going through the app, I noted several JWTs stored in cookies, but they didn't seem important: the data inside them wasn't really used, and the JWTs themselves didn't appear to be used by the application for big-ticket things like session handling. These JWTs were initialized when the user first visited the site unauthenticated, and never changed throughout the entire interaction, even after the user authenticated. On top of that, they weren't validated correctly: their signing algorithm could be changed to "none" and their contents edited, and it wouldn't change the outcome of the vast majority of interactions with the application.

They needed to be there for the application to respond, but to my knowledge they weren't actually used for anything. Super weird, right? For example, if essentially the entire JWT payload was deleted and the request was sent, the app would still respond successfully; but if the entire cookie was removed, the app returned a 500 error. I chalked this up to the app having user tracking baked into its architecture, so if the tracking failed, the app's response failed.
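Here's a small stdlib-only Python sketch of the behavior described above: a decoder that, like the app, reads the payload without ever checking the signature, and so happily accepts a forged token. The helper names are mine, not the app's:

```python
import base64
import json

def b64url_decode(seg: str) -> bytes:
    # JWT segments are base64url without padding; restore padding first.
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_unverified(token: str) -> dict:
    # What the app effectively did: read the payload, never check
    # the signature (or anything else).
    _header, payload, _sig = token.split(".")
    return json.loads(b64url_decode(payload))

# Forging a token: any header, any payload, empty signature.
header = b64url_encode(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url_encode(json.dumps({"sub": "anyone"}).encode())
forged = f"{header}.{payload}."
```

Any token built this way sails straight through `decode_unverified`, which is exactly why an unvalidated JWT offers no security at all.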

For those who don't know what a JWT is, you can read more about them here, and if you want to see some good examples of exploiting them, check out the PortSwigger Academy sections on them. Here are a few big highlights.

JSON web tokens (JWTs) are a standardized format for sending cryptographically signed JSON data between systems. They can theoretically contain any kind of data, but are most commonly used to send information ("claims") about users as part of authentication, session handling, and access control mechanisms.

Unlike with classic session tokens, all of the data that a server needs is stored client-side within the JWT itself. This makes JWTs a popular choice for highly distributed websites where users need to interact seamlessly with multiple back-end servers.

The password reset flow in the application looked similar to this, just to give you an idea:

But I have to note, again:

  • There was some involvement with specific JWTs that were tied to session, but the password reset API call did not use them.

  • The only thing that might have been tied to session was initialized when the unauthenticated user first visited the site, and never changed. This could constitute session fixation if it were provably tied to the session, but that could not be proven in the test environment.

  • The only requirement for sending the password reset API call was a JWT structured in a way the app considered valid, stored in a specifically named cookie. The contents of this JWT were nothing special, and its signature was not validated.

  • Nothing about these JWTs was validated: not the signature, and not the expiration time.
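For contrast, here's a sketch of the checks that were missing, written with Python's standard library against an assumed symmetric HS256 key. This is an illustration of the principle; a real deployment should use a maintained JWT library rather than hand-rolled crypto:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"server-side-signing-key"  # assumption: a symmetric HS256 key

def b64url(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str) -> dict:
    # The two checks the app was missing: signature AND expiry.
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims
```

With these checks in place, the alg-none and payload-editing tricks from earlier fail immediately, and a token can't live forever.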


The Fix

When the app team reached back out, they wanted their solution to be looked at in a new environment to make sure it was secure. Approaching the newly updated password reset API endpoint, it used a request that looked like this:

POST /api/reset-password
Cookie: Token="<random JWT here>"; TrackingID="0191b30c-5885-74b6-a1b7-203a2edaf1ed";

{
	"password": "password1"
}

And the contents of the JWT looked like this:

{
  "sub": "1234567890",
  "trackingId": "0191b30c-5885-74b6-a1b7-203a2edaf1ed",
  "iat": 1516239022,
  "exp": 1516239024
}

Right off the bat, I tried to reinsert the username parameter, but got the same response, so the server appeared to be ignoring every body parameter other than "password" for this endpoint. But then I noticed something a little strange: the success of the request depended on the tracking ID cookie being present. That's generally fine, since they could legitimately have started using a different cookie for session tracking, but it was a little weird. If I altered a single character in that tracking ID cookie, the request failed.

Moving on to the JWT, it had a new claim inside it. As shown above, that claim is "trackingId", and it shares the same value as the cookie. The application was still not validating the JWT's signature, so I altered a character inside the JWT's trackingId claim and watched the request fail again... Weird. Swapping both the tracking ID in the JWT and the cookie with another user's tracking ID (that user also being on the password reset page) reset the password of that second user, confirming the tracking ID was being used for session handling.
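Here's a minimal sketch of what the server-side check appeared to be doing, reconstructed from observed behavior (my own names and logic, not their code): the trackingId claim is compared to the cookie, but since the signature is never verified, anyone who knows a victim's tracking ID can mint a matching token.

```python
import base64
import json

def b64url(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def unverified_claims(token: str) -> dict:
    body = token.split(".")[1]
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))

def reset_allowed(cookies: dict) -> bool:
    # The observed "fix": the JWT's trackingId claim must match the
    # TrackingID cookie -- but the signature is never checked, so both
    # values are fully attacker-controlled.
    claims = unverified_claims(cookies["Token"])
    return claims.get("trackingId") == cookies["TrackingID"]

# An attacker who learns a victim's tracking ID mints a matching token.
victim_id = "0191b30c-5885-74b6-a1b7-203a2edaf1ed"
forged = ".".join([
    b64url(json.dumps({"alg": "none"}).encode()),
    b64url(json.dumps({"trackingId": victim_id}).encode()),
    "",
])
```

The check passes for the forged pair, which is exactly the cross-user swap described above.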

Stepping back, I went looking for where that tracking ID first shows up. Ahhh, it's issued right when the unauthenticated user first visits the site. While it's usually okay to store a JWT in a cookie, that JWT should be validated, and the session identifier should change once the user authenticates. Otherwise you have session fixation.

This is where the good part comes in. Since the JWTs aren't validated (at all), and the tracking ID is what the session is tied to, these sessions potentially never expire, which made me pretty suspicious. There's no way the trackingId parameter is just a tracking ID, because these sessions could then be hijacked by any number of third parties. It would be like saving a user's password in plaintext and handing it over to an ad-tracking platform.

Using Burp's search feature, I searched for the tracking ID and widened my view to sites outside my scope, to see whether the login portal was sending that tracking ID anywhere. Indeed it was. The app was sending the tracking ID to about three third-party sites in various ways (headers, body data, etc.), plus two other domains the company owned, so this never-expiring session cookie was being sent all over the place. If it ever gets leaked, logged, or published, it's game over for that user: whoever gets their hands on that tracking ID has control over their account.

This type of vulnerability would normally be described as "cross-domain cookie leakage," but if you look at reports online, those usually require much more interaction and specific functionality between an application and an attacker-controlled server. In this instance, the session is simply tied to a cookie it shouldn't be tied to, since that cookie is also used for tracking and is sent to multiple third-party sites.


Review

  • If you are using JWTs, validate them, even if they aren't being used strictly for sessions.

  • If you see JWTs in a web app, even if they aren't being used for session handling, pay attention to them and test them for vulnerabilities.

  • Thoroughly identify what is being used for authentication and authorization on each request. It may change request-to-request, especially if those requests go to a different subdomain or endpoint.

  • Whatever you're using for user sessions, don't send it off to third-parties. If you need to track their movement through the site for analytics, use a separate cookie.

  • When a user authenticates, make sure they are issued a new session token.

  • Follow KISS (keep it simple!), don't overcomplicate authN and authZ in your app. Stick with a standard and apply it across the board.
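The point about issuing a new session token on authentication can be sketched in a few lines of Python. This is a generic illustration of the pattern, not the app's implementation:

```python
import secrets

SESSIONS = {}  # session token -> username; "" means unauthenticated

def new_session() -> str:
    # Issued on first visit, before any authentication.
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = ""
    return token

def login(old_token: str, username: str) -> str:
    # Invalidate the pre-auth token and issue a fresh one, so a token
    # fixed before authentication can never be replayed as an
    # authenticated session.
    SESSIONS.pop(old_token, None)
    fresh = secrets.token_urlsafe(32)
    SESSIONS[fresh] = username
    return fresh
```

Because the pre-auth token is destroyed at login, an attacker who planted or observed it gains nothing once the victim authenticates.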


Thanks for giving this a read! I thought it was a pretty interesting situation, as login portals are some of my favorite things to test. If I messed up any definitions / references, or you just have some thoughts on the post, feel free to reach out!

#web application#penetration testing#pentesting#wapen#ato#idor