But against a backdrop of absent policy and unclear guidance on appropriate use of AI, some of those limitations are becoming increasingly clear. Cases have emerged of employees sharing proprietary or sensitive information with external AI systems, leading to organization-wide bans. While using systems such as ChatGPT to increase work efficiency, people can unintentionally share protected information.
Companies have rushed to create internal policies, lawsuits have emerged over the use of information to train these systems, and some governments have implemented all-out bans to ensure that data privacy is respected.
Despite these policy efforts, phishing emails and other social engineering attacks remain a persistent problem for organizations, tricking people into parting with money or information, or into downloading malicious files. These attacks have long been among the hardest to address from a security perspective, and they are now likely to get even harder.
Generative AI can learn from inputs to create more realistic, persuasive, and engaging phishing emails and other social engineering attacks. Such attacks can be executed more quickly and easily than ever before, producing communications that persuade people to share information or login credentials. These malicious strategies might include deploying flawless spelling and grammar, using manipulation tactics such as urgency or reciprocity, and exploiting information that has been shared online to increase relevance.