Security leaders entered 2026 with little expectation that uncertainty will ease … ever. Economic strain, geopolitical instability, accelerating artificial intelligence adoption, and renewed technology consolidation have turned volatility into a structural condition rather than a temporary disruption. This is life now, and CISOs are being asked to move faster, support aggressive AI initiatives, and protect trust, all while budgets, headcount, and pressure for assurance tighten.
Our latest report, Top Recommendations For Your Security Program, 2026, offers our team’s prioritized advice for security leaders navigating this reality. Rather than assuming stability will return, this year’s recommendations focus on building programs that can flex, rebalance, and endure as conditions change.
We’ve highlighted four of our 12 recommendations below to spotlight just some of what CISOs will face this year and, more importantly, what they should do about it. Our recommendations for 2026 fall into four themes:
- Changing budget dynamics
- AI-driven disruption
- Shifting security tech power
- Intensifying geopolitical risk
We design this annual guidance to help CISOs, CIOs, and technology leaders and their teams align security strategy with business priorities in an environment that refuses to stabilize.
Deal With Changing Budgets: Treat AI Security As A Business Cost, Not A CISO Tax
Budget predictability is gone. Inflation, trade friction, and executive enthusiasm for AI are forcing CISOs to make tradeoffs faster and more frequently than traditional planning cycles allow. Treating security as a fixed cost center leaves programs exposed when priorities shift midyear.
Our recommendation: Shift AI security costs out of the security budget.
AI security isn’t a niche control set. It’s a business risk that scales with AI adoption across marketing, operations, and product teams. Funding AI security solely from the security budget forces tradeoffs that weaken core defenses. CISOs should push to embed AI security costs directly into business AI investments, aligning funding with risk ownership and protecting foundational security programs.
Deal With AI Disruption: Put AI Governance At The Center Of Risk
AI governance has moved far beyond an ethics or compliance exercise. AI systems evolve continuously, regulations remain fragmented, and failures escalate quickly into trust, regulatory, or executive crises. What makes AI risk especially difficult is that many organizations still lack basic visibility into where AI is used, what data it touches, and who owns the risk.
Our recommendation: Identify, assess, and socialize AI risk.
You can’t govern what you can’t inventory or explain. CISOs should prioritize visibility into AI systems, embed AI risk management into existing governance processes, and communicate AI risk in business terms. Treat AI governance as a shared leadership responsibility to ensure that accountability keeps pace with AI adoption.
Deal With Changing Tech: Pressure Vendors And Plan For Their Failure
Technology consolidation has returned, but the market looks different in 2026. Power is concentrating among vendors that control data, identity, cloud platforms, and AI control surfaces. While consolidation can simplify operations, it also introduces concentration risk that many organizations underestimate.
Our recommendation: Protect your organization from security tech failures.
Recent vendor outages, delayed breach notifications, and supply chain compromises have shown how quickly provider failures become customer crises. CISOs must stop assuming resilience comes automatically with scale. Build resilience by avoiding overreliance on single platforms, demanding stronger vendor accountability, and planning for scenarios where security tooling itself is unavailable or compromised.
Deal With Changing Geopolitics: Rehearse For Disruption, Not Stability
Geopolitics is no longer background noise. Data sovereignty requirements, state-aligned cyber activity, and the collapse of distance between global events and business operations have made geopolitics a direct input into security strategy and continuity planning.
Our recommendation: Run high-impact geopolitical scenario planning.
CISOs should rehearse scenarios tied to real business dependencies such as regional cloud isolation, supplier compromise, or service shutdown decisions. The goal is not to predict the next disruption perfectly but to ensure that when it arrives, decision-making is deliberate rather than reactive.
For a deeper dive into these insights and the full set of recommendations, Forrester clients can read the full report, Top Recommendations For Your Security Program, 2026, and join our webinar on Wednesday, April 8. Forrester clients can also schedule an inquiry or guidance session to discuss how these recommendations apply to their organization.