Companies are increasingly moving to the cloud due to the wide range of benefits that cloud computing offers. According to McKinsey, most large enterprises aspire to have 60 percent of their environment in the cloud by 2025. As cloud adoption continues to grow, security teams face several challenges in the cloud, ranging from misconfiguration to a lack of knowledge and talent. Organizations must address these issues to ensure the security posture and integrity of their cloud environments.

Cloud security is a responsibility shared between the cloud service provider and the customers using their services. The customer is responsible for many tenets of security, including Identity and Access Management (IAM), network security, data protection, incident response, API security, and much more. This can overwhelm security teams, who are often fatigued by security-related alerts in the cloud.

AI in Cloud Security has been a hot topic recently, with massive growth and new capabilities being released by public Cloud providers to support organizations and their Cloud Security posture. There are a number of exciting areas in which AI can be used within Cloud Security.

Detecting misconfigurations

Misconfigurations in the cloud are a common occurrence. The 2022 IBM Security X-Force Cloud Threat Landscape Report revealed that cloud vulnerabilities have grown 28% since 2021, a figure likely to continue rising year on year as cloud adoption increases.

To combat this issue, many Cloud providers and Cloud Security Posture Management tools provide AI-driven services, which use advanced analytics and machine learning algorithms to detect misconfigured resources within cloud environments. These services continuously analyze the configuration settings and security policies of cloud resources against established best practices and industry standards. Many of these AI-based solutions can learn from historical data and security incidents to recognize patterns that might lead to misconfigurations.

These services can also provide recommendations and even automate the remediation process. This can be useful to ensure deployed resources stay secure, but it can lead to potential incidents whereby services fail due to other resources changing (networks, permissions, etc.).
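At their simplest, misconfiguration checks compare each resource's configuration against a set of best-practice rules. The sketch below illustrates the idea with hypothetical resource fields and rule names; real Cloud Security Posture Management tools work against provider APIs and far larger rule sets, often augmented by machine learning.

```python
# Minimal rule-based misconfiguration scanner sketch.
# Resource schemas and rule names here are illustrative, not any
# specific cloud provider's API.

RULES = {
    "public_storage": lambda r: r.get("type") == "storage" and r.get("public_access", False),
    "open_ssh": lambda r: r.get("type") == "firewall" and "0.0.0.0/0" in r.get("ssh_sources", []),
    "no_encryption": lambda r: r.get("type") == "storage" and not r.get("encrypted", True),
}

def scan(resources):
    """Return (resource_id, rule_name) pairs for every rule a resource violates."""
    findings = []
    for resource in resources:
        for name, check in RULES.items():
            if check(resource):
                findings.append((resource["id"], name))
    return findings

resources = [
    {"id": "bucket-1", "type": "storage", "public_access": True, "encrypted": True},
    {"id": "fw-1", "type": "firewall", "ssh_sources": ["0.0.0.0/0"]},
    {"id": "bucket-2", "type": "storage", "public_access": False, "encrypted": False},
]

print(scan(resources))
# [('bucket-1', 'public_storage'), ('fw-1', 'open_ssh'), ('bucket-2', 'no_encryption')]
```

AI-driven services extend this pattern by learning new "rules" from historical incident data rather than relying only on hand-written checks.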

Behavioral analysis

Whenever you hear about cloud breaches, the incident often starts with the leakage of some sort of credential, from simple passwords to keys used by service accounts.

Behavioral analysis (sometimes called User and Entity Behavior Analytics, or UEBA) techniques are extremely powerful, as they continuously monitor and analyze user behavior, application usage, and network activities to establish a baseline of normal behavior. When an attacker uses stolen credentials, they often don't act the way the actual owner would, and behavioral analysis can detect deviations from the baseline, which can then be flagged as potential security incidents or anomalies requiring further investigation.
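To make the baseline idea concrete, here is a deliberately simplified sketch: it learns the mean and spread of a user's historical login hours and flags logins that deviate sharply. Real UEBA systems model many more signals (geolocation, API call patterns, data volumes) with far richer statistical and machine learning models; the data and threshold below are purely illustrative.

```python
import statistics

def build_baseline(login_hours):
    """Learn a simple baseline: mean and standard deviation of login hours."""
    return statistics.mean(login_hours), statistics.stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a login hour that deviates more than `threshold` standard
    deviations from the user's baseline."""
    mean, stdev = baseline
    return abs(hour - mean) / stdev > threshold

history = [9, 9, 10, 8, 9, 10, 9, 8, 10, 9]  # habitual 8-10am logins
baseline = build_baseline(history)

print(is_anomalous(9, baseline))   # typical login hour -> False
print(is_anomalous(3, baseline))   # 3am login -> True, flag for investigation
```

The same pattern scales up: establish what "normal" looks like per user or entity, then score new activity against it.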

Generative AI in Cloud security

Generative AI for cybersecurity has seen massive growth over the past year, with many new tools and services such as Microsoft Security Copilot and Google Cloud Security AI Workbench, a platform powered by Sec-PaLM, Google's own large language model fine-tuned for security use cases.

These tools use a mixture of AI-powered tools and feeds to allow users to quickly respond to threats by generating responses to text-based queries. For example, a user could ask "when did the user [email address] last log in?" or, as part of a larger incident investigation, "show me which email addresses received the malware link." Not only does this massively speed up the triage of incidents, but it also allows security analysts who might not have the specific skills needed to investigate an incident to triage it by relying on the AI and its knowledge feeds.
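Behind a natural-language question like "when did this user last log in?", the assistant ultimately has to run a structured lookup against log data. The sketch below shows that underlying lookup in isolation; the log schema and helper function are hypothetical illustrations, not any product's API.

```python
from datetime import datetime

# Illustrative log records; real systems query SIEM or cloud audit logs.
logs = [
    {"user": "alice@example.com", "event": "login", "time": "2023-11-01T09:02:00"},
    {"user": "bob@example.com",   "event": "login", "time": "2023-11-02T14:30:00"},
    {"user": "alice@example.com", "event": "login", "time": "2023-11-03T08:55:00"},
]

def last_login(user, events):
    """Return the most recent login time for a user, or None if absent."""
    times = [datetime.fromisoformat(e["time"])
             for e in events
             if e["user"] == user and e["event"] == "login"]
    return max(times) if times else None

print(last_login("alice@example.com", logs))  # 2023-11-03 08:55:00
```

The generative layer's value is translating the analyst's plain-English question into queries like this, then summarizing the results in context.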

Sounds great, are there any catches?

AI can be a force for good in cloud security, but its implementation needs to be carefully considered. While one of the primary benefits of AI is reducing fatigue for security teams working in the cloud, the potential for false positives and false negatives from AI-based threat detection systems is still high. Many of the AI features available in the cloud can be turned on very easily, often without fine-tuning. This can lead to huge amounts of unnecessary alerts and noise. It's important that any AI tooling is deployed with a clearly defined context and properly tuned.
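One small but common noise-reduction tactic is collapsing repeated alerts of the same type on the same resource into a single finding with a count, so analysts see unique issues rather than raw alert volume. A minimal sketch, with illustrative alert fields:

```python
from collections import Counter

# Raw alert stream: (resource, alert_type) pairs, with duplicates.
alerts = [
    ("vm-1", "open_port"), ("vm-1", "open_port"), ("vm-1", "open_port"),
    ("db-1", "weak_password"), ("vm-1", "open_port"),
]

def deduplicate(raw_alerts):
    """Collapse duplicate (resource, type) alerts into counted findings."""
    counts = Counter(raw_alerts)
    return [{"resource": r, "type": t, "occurrences": n}
            for (r, t), n in counts.items()]

print(deduplicate(alerts))
# Five raw alerts collapse into two findings.
```

Deduplication doesn't fix a poorly tuned detector, but it keeps its output reviewable while the tuning happens.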

Privacy is also a large concern, especially in the cloud, where "where, and by whom, is my data being processed?" is a question organizations often ask. AI's reliance on well-structured, quality data can raise data privacy concerns, particularly when processing sensitive information in cloud environments. Remember, those large language models are essentially being trained on data and attacks that could be happening in your environment.

Here at Immersive Labs, we help organizations to continuously build and prove their cyber workforce resilience, including managing the potential impact of AI in the Cloud.

Visit our Resources Page to learn more.


Projecting the global value of cloud: $3 trillion is up for grabs for companies that go beyond adoption

2022 IBM Security X-Force Cloud Threat Landscape Report



November 9, 2023


Matt Parven