Artificial Intelligence in the Immersive One Platform
At Immersive, we believe that AI should enhance cybersecurity resiliency, not create new risks.
This document explains how and why we use generative AI in the Immersive One Platform, focusing on the data boundaries and control mechanisms that keep your information safe.
AI Overview
We categorize our AI into two areas:
- AI Features: productivity tools for users
- AI Content: AI integrated into lab environments
AI and Customer Data
You retain ownership of everything you put into our AI tooling (inputs) and everything the AI creates for you (outputs). We have agreements with our AI providers that ensure they do not use your data to train the underlying AI models.
For platform features such as our In-Lab Assistants, Lab Builders, and the Crisis Sim Scenario Generator, we use a Retrieval-Augmented Generation (RAG) process. This means the AI does not have free-roaming access to your entire library. Instead, the system only retrieves and provides the AI with the snippets of information required to complete your task.
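The retrieval step described above can be sketched in a few lines. This is a toy illustration only, assuming a keyword-overlap relevance score and a plain-text snippet store; the platform's actual retrieval pipeline, scoring, and prompt format are not shown here.

```python
# Minimal RAG retrieval sketch: the model is given only the
# top-scoring snippet(s) for the task, never the whole library.
def score(query: str, snippet: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(snippet.lower().split()))

def retrieve(query: str, library: list[str], k: int = 1) -> list[str]:
    """Return only the k most relevant snippets for this task."""
    return sorted(library, key=lambda s: score(query, s), reverse=True)[:k]

def build_prompt(query: str, library: list[str]) -> str:
    """The model sees just the retrieved context plus the user's task."""
    context = "\n".join(retrieve(query, library))
    return f"Context:\n{context}\n\nTask: {query}"

library = [
    "Lab 12 covers SQL injection basics.",
    "Crisis Sim scenarios model ransomware response.",
    "Billing settings are managed by org admins.",
]
prompt = build_prompt("generate a ransomware crisis scenario", library)
```

The key property is in `build_prompt`: unrelated library content (here, the billing snippet) is simply never sent to the model.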
We use enterprise-grade APIs, which act as a secure, private tunnel between our platform and the AI. We do not send your metadata, account history, or personal profile information to our AI tools.
When you engage with AI content in our Labs, we create a temporary lab account that has no link to your real identity. To the AI, you are an anonymous user. As soon as your lab session ends, we permanently delete the entire virtual environment.
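The ephemeral-account idea can be illustrated with a short sketch. The function names and field layout below are hypothetical, not the platform's implementation; the point is that the identity is random, carries no link to the real user, and is destroyed with the session.

```python
import secrets

def create_lab_account() -> dict:
    """Issue a throwaway identity with no reference to the real
    user, so the AI only ever interacts with an anonymous guest."""
    return {
        "username": f"lab-user-{secrets.token_hex(4)}",
        "password": secrets.token_urlsafe(16),
    }

def end_session(session: dict) -> None:
    """Tearing down the lab removes the credentials entirely."""
    session.clear()

session = create_lab_account()
# ... lab session runs ...
end_session(session)  # nothing about the session persists afterwards
```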
Customer Control & Opt-Outs
Our customers should have full agency over how their people use AI. We provide two pathways for managing AI in the Immersive One Platform based on the type of functionality:
1. Platform Settings for AI Features
For AI Features, administrative users can enable or disable AI at any time via a toggle in their settings menu.
Impact of Disabling: Disabling these features through the toggle removes the AI interface from all users’ view across the organization.
Feature Availability: While core platform functionality remains accessible, please note that an increasing number of advanced productivity tools and features will be AI-native. Disabling AI will make these features unavailable.
2. Managed Opt-Out for AI Content
The AI Content within our Labs is a fundamental component of our cybersecurity training. Because the content is deeply integrated into our catalogue, it cannot be toggled off via the standard settings menu.
Customers who wish to remove AI Content from their subscription can do so by contacting their Customer Success Manager.
Security and Safety
We work to protect the AI models we use against common attacks, such as prompt injection, and to keep their outputs appropriate and non-harmful.
Security - Labs
- Ephemeral Accounts: AI interactions within labs use ephemeral credentials and accounts. These guest accounts are not tied to the user’s identity. Once a lab session ends, all associated data is permanently deleted.
- Least Privilege Access: Cloud lab environments implement strict Role-Based Access Control (RBAC). This ensures that users and service accounts have only the minimal permissions necessary.
- IP Restrictions: API keys and credentials used to access AI models are locked to the specific IP address of the lab environment via network access rules, rendering the credentials unusable if extracted or accessed from outside the lab.
- Network Proxies: Lab internet access is strictly managed via HTTP proxies, restricting connections to only the minimum domains required for the lab.
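The egress controls above amount to an allowlist check at the proxy. A minimal sketch follows; the domain names and function name are illustrative assumptions, not the platform's actual configuration.

```python
from urllib.parse import urlparse

# Hypothetical per-lab allowlist: only the domains this lab needs.
ALLOWED_DOMAINS = {"api.example-model-provider.com", "pypi.org"}

def proxy_permits(url: str) -> bool:
    """Egress proxy rule: allow a request only when its host is on
    the lab's minimal allowlist; refuse everything else."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_DOMAINS

proxy_permits("https://pypi.org/simple/requests/")    # allowed
proxy_permits("https://evil.example.net/exfiltrate")  # refused
```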
Safety - All Modules
- Automated Content Filtering: The platform integrates robust third-party safety guardrails to preprocess user inputs and model outputs. Tools such as Amazon Bedrock Guardrails, Google Cloud Model Armor, and OpenAI's Moderation endpoint are used, where applicable, to automatically block hate speech, insults, sexual content, violence, exposure of personal data, and prompt attacks.
- System Prompts & Rate Limiting: Our AI tools are constrained by carefully crafted system prompts that keep the AI focused on its specific task and resist malicious counter-instructions or prompt injections. Token generation limits and API rate limiting are also used to prevent excessive consumption or abuse.
- Mandatory Review: Our content generators do not automatically publish content. All AI-generated content is placed in an edit mode, ensuring that a human author validates and approves it before publication.
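The rate-limiting control can be illustrated with a toy fixed-window limiter. The class name and parameters below are illustrative assumptions, not the platform's implementation.

```python
import time

class RateLimiter:
    """Toy fixed-window limiter: at most max_calls per window_seconds,
    capping how often a user can invoke an AI endpoint."""
    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls: list[float] = []

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the current window.
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True

limiter = RateLimiter(max_calls=3, window_seconds=60)
results = [limiter.allow() for _ in range(5)]  # first 3 allowed, rest refused
```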
Compliance Standards
The Immersive One Platform and our data handling processes are validated by the following independent certifications and standards:
- ISO 27001 & SOC 2 Type II: Our Information Security Management System (ISMS) and internal controls are independently audited to ensure data availability, integrity, and confidentiality.
- Cyber Essentials & Cyber Essentials Plus: These certifications confirm our technical defenses against common cyber threats, including the network-level controls used to secure our AI endpoints.
- AI Governance Framework: We are currently aligning our AI operations with ISO 42001, the international standard for AI governance.
Innovation & Strategic Adaptation
The AI landscape is evolving rapidly, and we are committed to ensuring the Immersive One Platform leverages the most advanced technology to drive cybersecurity resilience. We take a balanced approach to this innovation, recognizing the need to push the platform’s capabilities while maintaining the security guardrails our customers expect.
We are cautious and deliberate in our adoption of newly available AI tools; every model or feature is subject to continuous assessment and must meet our internal safety standards before deployment. As this technology develops, we will continue to update our documentation to ensure it accurately reflects the current state of the platform’s architecture and data controls.