For anyone following the latest news, one thing is clear: whatever your sector, artificial intelligence (AI) is causing a stir, raising questions about what these tools mean for jobs and markets, as well as the benefits and risks of adoption.

Since the emergence of ChatGPT, the pace of these discussions has accelerated. It’s clear AI regulation has struggled to keep pace with developments. If you’re an AI expert, these conversations will be very familiar to you. But if you’re relatively new to AI, it might be hard to cut through the hype and understand what tools like these mean for you and your organization.

What is generative AI?

Generative AI learns from the information it's given to generate new outputs, such as composing an email (text), creating a piece of art (imagery), or producing code or new data. This means generative AI can have multiple uses and benefits in the workplace, helping people complete tasks more efficiently and effectively across a range of areas. Those keen to explore the boundaries and limitations of applying AI in their organization are already embracing these benefits.
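To make this concrete, here is a minimal sketch of generating text programmatically. It assumes the pre-1.0 openai Python package and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative placeholders, not recommendations.

```python
# A minimal sketch of text generation with a hosted large language model.
# Assumes the pre-1.0 openai Python package and an OPENAI_API_KEY environment
# variable; the model name and prompt are illustrative placeholders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Draft a short email inviting the team to a project kickoff."}
    ],
)

print(response.choices[0].message.content)
```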

Challenges associated with generative AI

But against a backdrop of little policy or clear guidance on the appropriate use of AI, some of those limitations are becoming increasingly clear. Cases have emerged of employees sharing proprietary or sensitive information with external AI systems, leading to organization-wide bans. While using systems such as ChatGPT to increase work efficiency, people can unintentionally provide or share protected information.
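One practical mitigation is to check and redact prompts before they leave the organization. Below is a simplified sketch of that idea; the patterns, and the PROJ- project-tag convention, are hypothetical examples, and real data loss prevention tooling is considerably more sophisticated.

```python
# An illustrative pre-submission check: redact obvious sensitive patterns
# before a prompt is sent to an external AI system. The patterns below are
# examples only; real data loss prevention is far more involved.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal project tag": re.compile(r"\bPROJ-\d{4}\b"),  # hypothetical naming scheme
}

def redact(prompt: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Ask PROJ-1234 owner jane.doe@example.com to review."))
# -> Ask [REDACTED internal project tag] owner [REDACTED email address] to review.
```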

Companies have rushed to create internal policies, lawsuits have emerged over the use of information to train these systems, and governments have implemented all-out bans to ensure that data privacy is respected.

Even with these policies in place, phishing emails and other social engineering attacks remain a persistent problem in organizations, tricking people into parting with money or information, or into downloading malicious files. These attacks have long been among the hardest to address from a security perspective, and they are now likely to get even harder.

Generative AI can learn from its inputs to create more realistic, persuasive, and engaging phishing emails and other social engineering attacks, and these attacks can be executed more quickly and easily than ever before, producing communications that persuade people to share information or login credentials. Such a strategy might include perfect spelling and grammar, a range of manipulation tactics such as urgency or reciprocity, and the exploitation of information shared online to increase relevance.
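To ground this, here is a deliberately naive sketch of the kind of keyword heuristic that phishing awareness training often leans on; the cue lists are illustrative only, and the fragility of this approach shows exactly why AI-polished messages, with flawless grammar and subtler manipulation, are harder to catch.

```python
# An illustrative (and deliberately simple) heuristic that flags common
# manipulation cues in an email. Keyword checks like this are easy to evade,
# which is why well-written AI-generated phishing is hard to detect;
# treat this as a teaching aid, not a detector.
URGENCY_CUES = ["immediately", "urgent", "within 24 hours", "account suspended"]
CREDENTIAL_CUES = ["verify your password", "confirm your login", "reset link"]

def manipulation_cues(email_body: str) -> list[str]:
    """Return the manipulation cues found in the email body."""
    text = email_body.lower()
    return [cue for cue in URGENCY_CUES + CREDENTIAL_CUES if cue in text]

sample = "URGENT: your account suspended. Verify your password within 24 hours."
print(manipulation_cues(sample))
# -> ['urgent', 'within 24 hours', 'account suspended', 'verify your password']
```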

Protect your organization against AI risks

So, what can you do to prepare your organization for both the benefits and risks that AI brings?

  • Develop an internal policy related to the use of AI within your organization. This should manage expectations around the use of AI by individuals and departments, as well as provide a documented process for future implementation and evaluation. Assign a key person or team to coordinate this area in your organization so that nothing slips through the cracks.
  • Emphasize appropriate data and information handling. It’s important that people understand what information is sensitive in your organization, whether that be personal data or proprietary information. Make sure it’s clear what information can and cannot be shared with external AI systems and provide an easily accessible point of contact for any questions.
  • Understand and assess the potential social engineering threat. Make sure your workforce is prepared and understands how AI may impact the realism of fraudulent and malicious communications. They can be your best defense if they are equipped to do so.
  • Stay tuned to the regulatory landscape as it continues to evolve. AI is a particularly fast-moving area from a regulatory perspective.
  • Finally, think beyond compliance and into ethics. Just because you can doesn't mean you should. Always consider whether you are using AI responsibly and whether you would feel confident justifying this externally if you needed to.

Here at Immersive Labs, we help organizations to continuously build and prove their cyber workforce resilience, including managing the potential impact of AI.

Visit our Resources Page to learn more.


Published: August 16, 2023

Written by Emma Walker, Principal Workforce SME