Dispatches From the Desert: Black Hat Day Two


Welcome to day two of Dispatches from the Desert! This week, Immersive’s Container 7 team is in the desert city of Las Vegas, sharing daily highlights from two of the world's largest security events: Black Hat and DEF CON.
Catch up with the previous blog here:
Dispatches from the Desert: Black Hat Day One
Kev Breen: Senior Director, Cyber Threat Research
As a Briefings Pass holder, I spent day one in keynotes and briefing sessions – but Black Hat events aren’t just about cybersecurity training or knowledge sharing.
Black Hat is a conference of two halves. On one side, you have briefings where companies and individuals share knowledge and research, interesting observations, or new attacks and tools. On the other side, you have vendors and the Business Hall, where hundreds of cybersecurity vendors vie for your attention (and your wallets!).
My day kicked off with the keynote, but was quickly followed by some customer meetings and a trip to the Business Hall. It's always interesting to talk to vendors and understand how they’re keeping pace with emerging threats and what innovations they’re bringing to help organizations defend themselves.
If you’ve attended a conference before, you’ll know these booths are led by marketing teams rather than technical teams. However, if you can peer through the flashy stands, the swag handouts, and the buzzword bingo, you can gain some fascinating insights.
GenAI chatbots and third-party data risks
It’s clear that GenAI is still a hot topic – nearly every vendor I met with had it front and center. New vendors had AI at their core, while existing vendors have been quick to fold new and emerging technologies into their tech stacks.
The most common AI implementation is a contextually aware chatbot embedded into the front-end user interface. A good example is CrowdStrike's Charlotte AI bot, which sits alongside your alerts and can help you understand what you’re looking at. It can explain what a captured PowerShell command is doing, or summarize an email and identify any traits that might suggest it’s a phishing email.
CrowdStrike is one example of many here. Something that could be easily overlooked is that many vendors aren’t running their own LLMs – they’re using one of the big providers like OpenAI, Google, or Anthropic.
There isn’t anything inherently wrong with this, but when a company talks about using GenAI, it may mean that customer data is going to a third party. It’s important that customers understand what data is going where, as it could contain sensitive information.
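To make the data-flow concern concrete, here’s a minimal sketch of how an embedded “explain this alert” assistant might work when a vendor delegates to a hosted model. The provider, model name, and prompt are illustrative assumptions rather than any specific vendor’s implementation – but note that the captured command, which may contain internal hostnames, usernames, or file paths, leaves your environment in that API call.

```python
# Minimal sketch of an alert-summary chatbot that delegates to a third-party LLM.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the
# model name and prompt are illustrative, not any particular vendor's design.
from openai import OpenAI

client = OpenAI()

# A command captured from an endpoint alert (placeholder content).
captured_command = "powershell -nop -w hidden -enc <base64 blob>"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a SOC assistant. Explain what this command does."},
        {"role": "user", "content": captured_command},
    ],
)

# The explanation shown to the analyst – generated outside your environment.
print(response.choices[0].message.content)
```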
The battle of the LLMs
There were also several talks on AI in the briefing sessions. One of my favorites was a session by Kyle Avery of Fortra’s Outflank, called Training Specialist Models: Automating Malware Development.
In the session, Kyle discussed the differences between local and commercial LLMs, specifically asking whether a small local LLM could outperform a large commercial model. The answer is “maybe” – if you train a small base model to achieve a specific goal, then a 7B local model can outperform large, generalist commercial models.
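As a rough illustration of the general approach – not Outflank’s actual pipeline – this is what fine-tuning a small open-weights base model for a single narrow task can look like with LoRA adapters. The model name, dataset file, and hyperparameters are all placeholder assumptions.

```python
# Sketch of training a "specialist" model: attach LoRA adapters to a ~7B base
# model and fine-tune on a narrow, domain-specific corpus. Model, dataset, and
# hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "mistralai/Mistral-7B-v0.1"  # any small open-weights base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Only the low-rank adapter weights are trained, keeping the run cheap.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Any narrow task-specific corpus works here; the filename is a placeholder.
dataset = load_dataset("json", data_files="my_domain_corpus.jsonl", split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024))

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="specialist-7b",
        num_train_epochs=3,
        per_device_train_batch_size=1,
        learning_rate=2e-4,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```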
The model they created and released is Dante, an LLM designed to create malicious payloads that can evade EDR tooling. From the research, it appears to be very successful at achieving this goal, compared to other models.
This research was aimed at red teams, whose primary role is to defeat the blue team and identify areas of weakness in organizations, so they can fix them before a real threat actor comes along. However, we know that threat actors themselves are leveraging AI in the same way, creating their own malicious LLMs in addition to using commercially available services.
It’s clear that GenAI isn’t going away anytime soon. As defenders of networks and data, it’s crucial that we keep pace with the threat actors. If understood and used properly and securely, GenAI can be a force multiplier; if you become complacent and fall behind the curve, it's one more capability the attacker has over you.
Rob Reeves: Principal Cyber Security Engineer
The second day opened with another memorable keynote from Nicole Perlroth, former lead cybersecurity reporter at The New York Times and author of This Is How They Tell Me The World Ends.
Nicole recounted her personal experiences reporting on the reality of zero-day exploit sales, changes in cybercrime and cyber-facilitated espionage, and the threat from China and Russia to the West during her 11 years at the Times.
She discussed outcomes that affected companies and nations, as well as individuals and families, such as the death of a woman in Germany who was unable to receive timely emergency care due to a ransomware attack against a hospital.
Messaging like this is especially important at technical conferences to remind people that it's not just the seven layers of the OSI model that are impacted, but Layer 8 and beyond.
Fake IT workers from North Korea
The day’s most interesting talk was on the identification and analysis of fake IT workers from North Korea. It examined how they operate and how security researchers have been able to identify aspects of their tradecraft.
The speaker, SttyK (referred to as Stick or Sticky), chose to remain anonymous due to the threat of North Korean retaliation.
The talk covered a wide range of topics, including the tools the fake IT workers use, the process researchers followed to identify fake documents and cover letters, and how they tracked the workers down on GitHub. There was also evidence of how researchers observed a team of fake IT workers operating out of a laptop farm, which the workers connected to remotely – and even of their love of playing volleyball!
A fascinating talk and insight into an unusual threat.
Gaz Lockwood: Principal Cyber Security Engineer
For the final day of Black Hat, I zeroed in on talks about user activity. First up was Google, which announced Facade, its in-house insider threat detection system.
Google has the mammoth challenge of spotting malicious or risky behavior across 180,000+ employees. The old “look for bad patterns in the logs” trick just doesn’t cut it at that scale.
Instead, Google has trained Facade to understand what “normal” looks like when someone accesses a resource and then to flag when something feels off. The magic happens through “embeddings”: dense, compact representations of the user and resource, which the system compares in context.
Here’s the clever bit. Insider threat cases are rare, so there isn’t a lot of real incident data to train on. Rather than wait for more examples, Google trains Facade entirely on normal behavior, then creates synthetic anomalies by swapping user and resource embeddings around.
This generates “this should never happen” scenarios on the fly. The result is a highly accurate, context-aware anomaly detection system that can pick the tiniest needles out of an enormous haystack.
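As a toy illustration of that swapping trick – with random vectors standing in for real embeddings, and nothing here drawn from Google’s actual implementation – you can manufacture “impossible” user–resource pairings and train a simple classifier against them:

```python
# Toy version of "swap embeddings to manufacture anomalies": real accesses are
# the positive class, shuffled user/resource pairings are the synthetic anomalies.
# Random vectors stand in for real embeddings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend we have dense embeddings for each observed (user, resource) access.
n, dim = 5000, 64
user_emb = rng.normal(size=(n, dim))
resource_emb = rng.normal(size=(n, dim))

# Normal class: user/resource pairs exactly as observed.
normal = np.hstack([user_emb, resource_emb])

# Synthetic anomalies: shuffle the resources so each user is paired with a
# resource they never actually touched – "this should never happen" examples.
swapped = np.hstack([user_emb, resource_emb[rng.permutation(n)]])

X = np.vstack([normal, swapped])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = anomalous pairing

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new access: a higher probability means the pairing looks "off".
new_access = np.hstack([user_emb[0], resource_emb[123]]).reshape(1, -1)
print(clf.predict_proba(new_access)[0, 1])
```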
DNS visibility and privacy issues
Right after that, there was a talk that hammered home just how revealing network traces can be, focusing entirely on DNS. The researchers spent two weeks collecting DNS traffic from a range of devices, clustering the activity, and scoring the results to see how patterns shifted from week to week.
Most devices barely changed, but the ones that did practically lit up like neon signs, with big swings tied to app installs, configuration tweaks, or even something as simple as a location change. The point is that DNS isn’t background noise – it’s a running diary of what you do and where you’ve been, and it’s trivial to profile you from it.
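As a simplified stand-in for the kind of week-over-week scoring the researchers described – their actual pipeline clustered full DNS traces, and the domains below are placeholders – even a basic set-overlap measure per device shows how profile drift becomes visible:

```python
# Score how much a device's set of queried domains drifts between two capture
# windows, using Jaccard distance. Domains and devices are illustrative.
def drift_score(week1_domains: set[str], week2_domains: set[str]) -> float:
    """1.0 = completely different query profile, 0.0 = identical."""
    union = week1_domains | week2_domains
    if not union:
        return 0.0
    return 1 - len(week1_domains & week2_domains) / len(union)

# Per-device query sets captured in each window (placeholder data).
week1 = {
    "thermostat": {"time.nist.gov", "fw.vendor.example"},
    "laptop": {"github.com", "slack.com", "news.example"},
}
week2 = {
    "thermostat": {"time.nist.gov", "fw.vendor.example"},
    "laptop": {"github.com", "slack.com", "travel.example", "hotel.example"},
}

for device in week1:
    print(device, round(drift_score(week1[device], week2[device]), 2))
```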
Their suggested mitigations – more aggressive MAC randomization, encrypted DNS, and query aggregation – will blunt the edges, but they won’t erase the core problem: modern security tooling still depends on DNS visibility.
The uncomfortable truth is that if your DNS is visible, your privacy is already leaking. The only real choice is to limit the exposure and demand honesty from those logging it.
Ben Hopkins: Cyber Threat Intelligence Researcher
The final day of Black Hat had some fascinating and insightful talks, which, like yesterday, leaned heavily towards AI.
I spent the lion's share of the day looking for talks that didn’t focus so much on AI. Instead, I took an interest in operational technology (OT) and the compromise of Windows failover clusters.
Chris Sistrunk of Mandiant gave a talk on the OT threat landscape, along with some history and case studies. Surprisingly, fewer than 10 known pieces of malware directly target and make changes to OT – Industroyer and FrostyGoop among them. Most malware used in OT attacks is actually IT malware, which compromises the IT assets the OT relies on (WannaCry comes to mind).
In threat intelligence, it’s commonly agreed that hacktivists have the lowest capability, nation-states have the highest capability, and cybercriminals are somewhere in the middle.
Sistrunk challenged this to some degree, pointing out that hacktivists are getting much more aggressive when attacking OT. Cybercriminals are also becoming more capable at deploying ransomware and extracting funds from organizations that use OT.
Systematic OT compromise
I was able to have a one-to-one conversation with Sistrunk, and he mentioned that telemetry and incident response engagements at Mandiant have underscored the risks of OT attacks. If an attacker can compromise the IT environment and the OT environment isn’t appropriately air-gapped, no technology can save you from a systematic OT compromise.
In these sorts of attacks, you’d usually have to pay thousands of dollars in ransom costs, with data destruction being a less frequent outcome because it simply isn’t profitable. That said, nation-states are inevitably embedded inside critical infrastructure, waiting for a war to start, and turning off the water will cause a country to surrender very quickly.
Windows failover clusters
The talk on Windows failover clusters gave some great insights into how attackers might move laterally across a Windows-only estate: compromise a single account on an Active Directory node, pivot to Entra ID, then drop onto other machines.
High-availability infrastructure is designed to keep services running in the event of a software failure. Cluster nodes communicate with each other over RPC and SMB, and if those cluster ports are open and an attacker manages to access one node, they can fairly trivially move across the estate.
Attackers can then enumerate the estate and elevate their privileges wherever best suits their operational requirements. In short, these clusters are designed to be highly available with little thought given to security, which could lead to a full cluster takeover should an attacker figure out an attack flow that abuses these flaws.
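From a defender’s point of view, a quick exposure check is a reasonable first step. The sketch below is a simple reachability test, not anything from the talk: the hostnames are placeholders, and the port list reflects common defaults (RPC endpoint mapper, SMB, and the cluster service).

```python
# Defensive sketch: test whether cluster-related ports are reachable from a
# given network position, as a quick exposure audit. Hostnames are placeholders.
import socket

CLUSTER_PORTS = {135: "RPC endpoint mapper", 445: "SMB", 3343: "Cluster service"}

def exposed_ports(host: str, timeout: float = 2.0) -> list[str]:
    findings = []
    for port, name in CLUSTER_PORTS.items():
        try:
            # A successful TCP connect means the port is reachable from here.
            with socket.create_connection((host, port), timeout=timeout):
                findings.append(f"{host}:{port} ({name}) reachable")
        except OSError:
            pass  # closed or filtered – what you want outside the cluster network
    return findings

for node in ["cluster-node-01.example.local", "cluster-node-02.example.local"]:
    for finding in exposed_ports(node):
        print(finding)
```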
That wraps up day two! If you’re an Immersive customer and weren’t able to make it out to Black Hat or DEF CON, keep an eye out. When we return, we’ll take some of the more interesting and novel tools and techniques we’ve seen here and turn them into practical labs.