Our Director of Cyber Threat Research, Kev Breen, recently discovered a vulnerability in a piece of stalkerware. What followed was a dilemma that has lasted months. Together, the Immersive Labs team has decided to help educate people on the dangers of stalkerware and how we approached our disclosure dilemma following the discovery. This is the second blog in our Stalkerware special series.
Responsible disclosure is usually clear cut, but there are certain situations where the decision is more nuanced. We recently faced one such dilemma after uncovering a vulnerability in a piece of stalkerware. The more we weighed our options, the more we realized we were well and truly dealing with a wicked problem.
How does disclosure usually work?
Before we get into the nitty-gritty of our dilemma, let’s take a look at how the disclosure process is supposed to work.
Researchers typically set out to look for vulnerabilities in applications or services to make them safer, collect a reward, or a combination of the two.
Responsible disclosure is the process of reporting findings privately to the software developer or service provider so they can fix the vulnerabilities and release updates before attackers can exploit them. In an ideal world, they work with the researcher to do exactly that within an agreed timeframe. This protects everyone.
As with all things in life, responsible disclosure doesn’t always go to plan. A lot of vendors will be responsive, but not all. Some drag their heels, others outright ignore the problem.
This is where public disclosure comes into play. When making the initial responsible disclosure to the vendor, researchers will set a timeline, typically 30, 60, or 90 days from the initial report. After this point, if the vendor has not responded, the researcher is free to make their findings public. This exerts pressure to ensure vendors patch vulnerabilities and make their service or application safer for everyone.
So what happens when a researcher finds a vulnerability in illegal software – malware or ransomware, for example? We go to the authorities. Do not pass Go, do not collect £200. As we found out last summer when I discovered a vulnerability in popular Android banker Anubis, vulnerabilities in malware could be used to help authorities with their investigations.
Responsible disclosure works to protect everyone involved – the application or service’s users, the organization that owns and runs it, and its customers – but only if we can trust that everyone will stick to the rules. Illegal services aren’t afforded the same level of trust because they are not bound by the law.
So what happens if a researcher finds a vulnerability in a ‘legitimate’ piece of software that is used for less than ethical purposes?
Stalkerware and fifty shades of ethical dilemma
Stalkerware operates in a legal grey zone and complicates the whole process of responsible disclosure. This is because, while it can be used for lawful purposes, it’s also employed by those looking to exploit vulnerable people.
In fact, stalkerware covers every shade of grey imaginable, as its users range from parents who want to track their children’s activities and whereabouts, to criminals and stalkers. It collects everything from text messages to GPS locations belonging to children, vulnerable spouses, partners, and unsuspecting victims. You can find more information on this in our previous post.
Wicked apps and wicked problems
Now you should be able to see the nature of our wicked problem. We found vulnerabilities in stalkerware used by a few different companies. One of them, in particular, is severe.
You remember we talked about all the data that these types of stalkerware can access on devices? Well, with this specific vulnerability, an attacker could access it all: every message, every application, every photo, video, call, geolocation – everything on every device with this piece of stalkerware installed. All live.
Before I go any further, I have to be clear on this. For every test of the vulnerability we performed, we only ever did so against our own accounts that we created for research purposes. At no point in time did we access or attempt to access the data for any user of the platform outside of these.
The Sophie’s choice of cybersecurity
So, we have a vulnerability in a platform that can be used for lawful purposes but can also facilitate highly questionable activity and puts a uniquely vulnerable set of people at risk.
What do we do about it?
Nothing: This doesn’t sit well with us. We can’t be sure that we’re the only ones to find this vulnerability – and we certainly can’t be confident that no one will uncover it in the future. An attacker could find it, with potentially severe consequences for victims.
Responsibly disclose to the vendor(s): The obvious choice, but ultimately this will be helping to improve a piece of software that is often used illegally. On the flip side, there are real people and real victims at risk, and it’s important their data is protected from exploitation.
Publicly disclose: This will put immense pressure on the service provider to fix it, but it will also open the floodgates to malicious actors, who will immediately begin to exploit the vulnerability before a patch can be implemented. If one ever is.
Report to Law Enforcement: If in doubt, talk to the police.
This was the decision we faced, and it was not an easy one. After making the initial discovery, we brought it to the whole team for discussion.
We agreed that our main goal was to protect the privacy of the data stored on or accessed via the servers. Due to the way the software is marketed, it is not in itself unlawful, even if some of the potential purposes its users put it to are illegal. As such, treating it like any other piece of software and reporting the vulnerability to the vendors seemed the most appropriate course of action.
However, this didn’t completely sit right with us because, ultimately, it’s not ‘just another piece of software’.
So, while I was drafting disclosure emails to send to the vendors, we reached out to several law enforcement agencies, working through partners and trusted sources to see if anyone could offer advice.
The end result was disappointing and only compounded our wicked problem. Unfortunately, there is no law that covers this situation, and there was nothing the law enforcement agencies we spoke to could do, especially as the service providers were not based in the UK, EU, or US. Sympathetic to our plight though they were, their hands were tied – they had no jurisdiction or legal basis to pursue it.
Back to the drawing board. The next step was to rally support from our legal and risk teams to talk about our options.
While we fundamentally oppose stalkerware and the ways it can be misused, this vulnerability could allow anyone, anywhere in the world, to access a database of private information on people who are already victims. We all agreed our priority was to prevent this from happening.
The sound of silence
So we reached out to responsibly disclose the vulnerabilities. As we would with any other disclosure, we sent emails outlining that we had identified an issue and wanted to report it.
And we were soundly ignored.
In one instance, our support account was actually deleted.
We waited another day or two to see if any of the vendors would respond. After resounding silence, we sent another round of emails, similar to the first but including our intention to go public with the information in 90 days if they did not resolve the issues. Public disclosure was not what we wanted to do, for fear of painting a target on the back of the software for attackers, but we hoped the threat of it would place enough pressure on the vendors to resolve the issue.
Sadly, there was still no response from the vendors. I continued to send emails and open support tickets over the course of the 90 days, all of which went unanswered.
Without support from law enforcement or confirmation that the vulnerabilities were patched, we hit a dead-end, left in the knowledge that a pretty scary vulnerability existed in a piece of software already used for nefarious purposes.
What to do when you’re being ignored
Of course, we couldn’t leave it there, and we’re still working through this dilemma. We continued engaging with law enforcement, as well as pushing the vendors for a fix. We also brought in people with communications expertise to understand the potential ramifications of making this information public, and to establish how much we should share.
Ultimately, we decided that while full public disclosure might force a software patch, the window of opportunity it would provide attackers to steal the data of vulnerable people was not something we could accept. Even with public disclosure, there was no guarantee of a fix.
Eventually, we concluded that we could use the narrative and our experience to educate as many people as possible on the danger and impact of stalkerware, without putting any of the specifics into the public domain and risking victims. Education, we reasoned, might encourage the removal of stalkerware, increase vigilance or stop it being installed in the first place. We also hoped that by capturing our experiences as a research team, we could help other members of the infosec community who might end up in a similar position.
And that is where we find ourselves now!
What would you do if you found yourself in this position? Let us know on Twitter @immersivelabs!
Get a guided demo from an expert on how Immersive Labs helps with Cyber Workforce Resilience. We’ll show you how to prepare for emerging threats with CTF-style challenges and playable cyber crisis simulations.