Cyber readiness configurator
Step 1 — Your challenge
Outline who you are, what you need to prove, and why now.
~ 1 minute
Your business
i Optional, but helpful. This will appear in “Review” and in the copied/email summary.
Top drivers (pick up to 3)
i Used to tailor your recommendation and summary.
Pick up to three reasons driving urgency.
0 / 3 selected
First milestone to deliver
i Procurement and rollout timelines vary. Pick the first outcome you want the program to deliver — we’ll translate that into a plan and a starting tier.
Evidence outputs are mainly for… (select any)
i Selecting “Board” or “Regulators/Auditors” biases recommendations toward evidence-grade reporting and governance outputs.
Step 2 — Coverage
Who needs to be ready — and how do you run readiness today?
~ 1–2 minutes
Teams who need to be ready (select any)
i Coverage shapes the plan (and the tier). Select all groups you need to include in the readiness program.
Cyber resilience cadence today (best fit)
i Choose the option that best applies. Cadence and realism affect the operating rhythm we propose (and whether Advanced/Ultimate is justified).
How do you measure readiness today?
i Completion metrics are a starting point. Performance metrics (speed, accuracy, decisions) strengthen evidence and confidence.
Step 3 — Package fit
Add the remaining fit signals (realism, scope, delivery) before we recommend a starting tier.
~ 1–2 minutes
We already captured your objective, cadence, and evidence audience in Steps 1–2. These questions mirror the Core / Advanced / Ultimate definitions and refine the recommendation.
1) How realistic do exercises need to be?
i Realism increases trust and supports scrutiny, but changes the delivery model.
2) Scope of the program
i Scope is about the breadth of coverage needed to deliver ROI and defensible outcomes.
3) Current state (closest match)
i Be honest about the starting point; it improves the recommendation.
4) Delivery style they need
i Services signal how much support is needed to deliver outcomes.
5) How they describe the problem
i How teams frame the problem indicates maturity and evidence needs.
Step 4 — Context
Add just enough context for relevance — without turning into the master configurator.
~ 1–2 minutes
Business context
i Industry and operating region help tailor suggested regulations and examples. Optional, but useful.
Regulatory / assurance context (optional)
i Suggested based on your business and region. Used to tailor evidence language and stakeholder framing. Not legal advice.
Suggested list is tailored by industry and region. “All” shows the full library A–Z.
Security tools & platforms (optional)
i Pick the platforms you want hands‑on simulations and examples to align to. This helps make the plan feel like “your world”.
Select any tools or platforms you want to practice workflows for.
Security tools & SOC platforms
Cloud platforms
Cloud‑native & DevSecOps
Other domains
Step 5 — ROI estimate
A TEI (Total Economic Impact)‑based directional estimate, not a quote. Start with revenue or operating budget to size quickly, then tweak team sizes and spend if you want.
~ 1 minute
How this estimate works
i Directional estimate based on the Forrester Total Economic Impact (TEI) study (3‑year PV @ 10% discount). Baseline: $10B rev · 200 cyber · 1,000 dev · 3,000 workforce · $250k/yr → 327% ROI. Not a quote.
This is a TEI (Total Economic Impact)‑based sizing model to help you sanity‑check ROI. Start with annual revenue or operating budget to get a typical footprint and indicative spend, then fine‑tune team sizes and spend to match your reality. Not a quote.
i Revenue is used as a quick sizing proxy to suggest a typical starting footprint. You can override team sizes below.
$10B
i Used to scale cyber workforce benefits (attrition, hiring, training efficiency) in the preview.
200
i Used to scale developer productivity benefits in the preview.
1,000
i Used for suggested footprint context and workforce readiness planning (not a precise model input).
3,000
i Directional estimate based on revenue and team footprint. The preview is capped at roughly US$2M/yr equivalent for realism; if you model materially higher or lower spend, we apply diminishing returns to keep outputs sensible.
US$250,000
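One way such a diminishing-returns taper could work is linear up to the cap, then logarithmic beyond it. This is an illustrative assumption only — the actual curve, cap handling, and parameters of the model are not published here:

```python
import math

def effective_spend(spend: float, cap: float = 2_000_000) -> float:
    """Illustrative taper (assumption, not the actual model): spend counts
    linearly up to the ~US$2M/yr preview cap and logarithmically beyond it,
    so materially higher inputs still move the estimate, but less and less."""
    if spend <= cap:
        return spend
    # cap * (1 + ln(spend/cap)): equals `cap` at the cap, with slope 1 there,
    # so the curve and its first derivative are continuous at the boundary.
    return cap * (1 + math.log(spend / cap))
```

For example, a US$4M/yr input would yield about US$3.39M of effective spend under this sketch, keeping outputs sensible without hard-clipping the input.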
Estimated outcomes
Choose an assumption range for directional ROI, NPV and payback.
3‑year ROI (directional)
156%
3‑year NPV (directional)
+US$1,054,183
Payback (est.)
~8 mo
Indicative spend
US$250,000/yr
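For readers who want to sanity-check the arithmetic, Forrester-style TEI figures use three-year present values at a 10% discount rate, with ROI defined as NPV divided by the present value of costs. The sketch below reproduces only that standard arithmetic; the `annual_benefit` figure is a hypothetical placeholder, not an output of the actual model, so its results will not match the preview figures above:

```python
def present_value(cashflows, rate=0.10):
    """PV of year-end cashflows: year 1 is discounted once, year 2 twice, ..."""
    return sum(cf / (1 + rate) ** (i + 1) for i, cf in enumerate(cashflows))

annual_cost = 250_000        # indicative spend (US$/yr), from the preview
annual_benefit = 1_000_000   # hypothetical aggregate benefit (US$/yr)

pv_costs = present_value([annual_cost] * 3)
pv_benefits = present_value([annual_benefit] * 3)

npv = pv_benefits - pv_costs
roi = npv / pv_costs                                # TEI ROI = NPV / PV(costs)
payback_months = 12 * annual_cost / annual_benefit  # flat-flow breakeven

print(f"ROI {roi:.0%} · NPV US${npv:,.0f} · payback ~{payback_months:.0f} mo")
```

The real model presumably ramps benefits and front-loads some costs, which is why a flat-flow sketch like this cannot reproduce the exact ROI, NPV, and payback shown in the preview.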
Adjust assumptions
Salary inputs scale benefits directionally. FX defaults are placeholders for the prototype — editable here for your environment.
Step 6 — Review
Confirm our understanding of your business. If anything looks off, jump back to edit — then book a consultation.
Our understanding of your business
Review what we’ve captured across the configurator. Use “Edit” to adjust any section.
Organisation
—
ROI inputs
Complete Step 5 to generate a directional sizing estimate.
Challenge
Drivers
—
Milestone
—
Evidence audience
—
Coverage
Groups
—
Cadence
—
Measurement
—
Package fit
Exercise realism
—
Program scope
—
Current state
—
Delivery style
—
Problem framing
—
Context
Industry
—
Region
—
Regulatory
—
Security tools & platforms
—
Book your consultation
Share your contact details and we’ll follow up to tailor a consultation around your snapshot.
Booked (prototype)
In a production build, this button would route to your scheduling flow (calendar + CRM handoff) with the summary attached.