Artificial intelligence has already transformed much of cybersecurity, automating triage, detection, and even parts of incident response.
In your Security Operations Center (SOC), tier 1 tasks (the entry-level work of sorting alerts, investigating anomalies, and escalating real threats) are now handled faster and more efficiently than ever before.
But this rapid evolution carries a hidden cost: a crisis of experience. When machines take over the repetitive foundational tasks, how will the next generation of analysts gain the skills they once learned by doing?
We sat down with guest speakers Jess Burn, Principal Analyst, Security & Risk at Forrester, and Graham McElroy, Global Cyber Defense Centre Leader at Capgemini, for a Fireside Chat to discuss this very topic.
We recently ran a survey among Cyberbit customers on their adoption of AI for tier 1 SOC tasks. The results were more surprising than you might think.
Around 40% of respondents said they weren't using AI in their SOC or didn't plan to do so in the next 12 months. Yet, as the experts pointed out during our chat, that statistic may say more about perceptions than reality.

AI has quietly powered cybersecurity for years through machine learning, behavioral analytics, and reinforcement learning. These traditional forms of AI detect anomalies, flag potential intrusions, and help analysts make better decisions. What’s new today is generative AI, systems capable of understanding natural language, generating reports, and assisting with complex investigations.
Generative AI is already assisting with alert summarization, threat intelligence analysis, vulnerability reporting, and even language translation for global teams. But full adoption will take time.
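To make the alert-summarization use case concrete, here is a minimal, purely illustrative sketch of how an analyst workflow might send a SIEM alert to an approved, internally hosted LLM endpoint for a plain-language summary. The endpoint URL, model name, response schema, and alert fields are all hypothetical stand-ins, not a description of any specific product.

```python
# Illustrative sketch only: the endpoint, model name, and alert fields are
# hypothetical stand-ins for an organization's approved, internally managed LLM.
import json
import requests

INTERNAL_LLM_URL = "https://ai-gateway.example.internal/v1/summarize"  # hypothetical

def summarize_alert(alert: dict) -> str:
    """Ask the managed LLM for a short, plain-language summary of a SIEM alert."""
    prompt = (
        "Summarize this security alert for a tier 1 analyst in three sentences, "
        "covering the affected host, the suspected technique, and a recommended next step:\n"
        + json.dumps(alert, indent=2)
    )
    response = requests.post(
        INTERNAL_LLM_URL,
        json={"model": "internal-soc-llm", "prompt": prompt, "max_tokens": 200},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["summary"]  # response schema is assumed

example_alert = {
    "rule": "Possible credential dumping",
    "host": "finance-ws-042",
    "process": "lsass.exe accessed by rundll32.exe",
    "severity": "high",
}
print(summarize_alert(example_alert))
```

The point of routing the call through an internal gateway, rather than a public chatbot, is exactly the governance concern discussed below: the organization controls where the alert data goes and how the model is managed.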
Organizations are understandably cautious, mindful of data governance, model transparency, and the question of trust: Can we rely on AI’s decisions when the stakes are so high?
AI in cybersecurity introduces an old dilemma in a new form: trust versus efficiency.
As Forrester’s Jess Burn and Capgemini’s Graham McElroy noted, CISOs remain conservative for a reason. They want to understand where their data goes, how models are trained, and who benefits from that data.
Governance, not just functionality, will determine how quickly organizations embrace AI-driven decision-making.
Most enterprises have now issued internal guidance restricting the use of public AI tools for corporate data. Analysts are warned not to paste sensitive information into platforms like ChatGPT. Instead, they’re encouraged to rely on approved, managed AI systems embedded within the organization’s cybersecurity stack.
This level of control ensures compliance, but it also slows experimentation.
Striking the right balance between innovation and governance will define the next phase of AI adoption in cybersecurity.
Despite the hype, no one on the panel of the “Generative AI Taking Over Tier-One SOC Tasks — Now What?” Fireside Chat believed that AI would replace security professionals.
So, I believe, it's safe to assume that the end-of-the-world scenario where AI takes over all our jobs isn't here yet!
As our very own Matthew Dobbs observed, analysts are evolving into AI overseers and trainers. They are transforming into professionals who understand both the technology and the business context of AI use. They’ll need to interpret AI outputs, validate results, and tune models for their specific environments. This is not an entry-level skill, as it demands deep domain knowledge, experience, and critical thinking.
That’s why organizations must invest in building the next generation of defenders now.
Without deliberate training pathways, we risk losing an entire layer of foundational expertise, the kind of intuition that can’t be programmed.
The old way of learning cybersecurity, through repetitive alert triage and long hours in front of dashboards, may be fading, but the solution isn’t to eliminate training. We simply need to reimagine it!
Generative AI can also help create realistic, controlled simulation environments that replicate real-world attack scenarios. These live-fire exercises let analysts build muscle memory, practice decision-making, and collaborate across teams without risking production systems.
The shift from “learning by doing” to “learning by simulating” can make cybersecurity education more engaging, scalable, and safe. It’s also a powerful way to combat burnout, a persistent problem in the industry.
As Jess Burn noted, “Automation can actually reduce fatigue by eliminating repetitive tasks and freeing analysts to focus on meaningful, creative challenges.” When analysts spend less time clicking through false positives and more time thinking strategically, they not only become better defenders but also stay longer in their roles.
One of the most insightful discussions in this Fireside Chat centered on how AI changes the cybersecurity career ladder.
Automation may remove the “grunt work,” but it also removes the training ground for early-career professionals. That means entry-level roles will now require a higher baseline of technical competence. Organizations must respond by offering structured career paths, supported by continuous learning and mentoring.
Graham McElroy described his organization’s approach. “At Capgemini, we are combining classroom learning, certification programs, and regular live-fire exercises. This blended model ensures that even as tools evolve, the people operating them stay sharp and connected to real-world context.”
And one more thing: training must happen on real commercial tools, not just theoretical platforms.
Every SIEM, SOAR, or EDR solution behaves differently, and mastering those subtleties can mean the difference between spotting an attack and missing it entirely.
Integrating commercial tools into training environments is crucial: it makes learning more relevant and enables analysts to challenge AI, improve its performance, and understand its blind spots.
It’s unfortunate, I would say, that some organizations invest in AI under the false assumption that it will reduce their security spending.
AI promises efficiency but not necessarily cost reduction. As both Jess and Graham pointed out during our chat, organizations may see shifts in spending, not outright savings.
Yes, AI can reduce headcount in some areas, but the people who remain will need to be more specialized, and therefore more expensive. New roles will emerge: AI engineers, data governors, security model auditors, AI ethics officers, and likely more.
Dependency on AI systems also creates new types of risk.
If a core AI system goes offline or delivers flawed results, analysts must have the skills and confidence to take over manually. The organizations that prepare for this dual-mode operation, both automated and human-driven, will be the ones that remain resilient.
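What dual-mode operation can look like in practice: the minimal sketch below (all function names, thresholds, and integrations are hypothetical placeholders) tries an AI triage service first and, if the service is offline or its confidence is low, routes the alert to a human analyst queue so the manual path stays exercised.

```python
# Illustrative sketch of "dual-mode" triage; the service clients, threshold,
# and queue integrations below are hypothetical placeholders.
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.85  # below this, a human analyst reviews the alert

@dataclass
class TriageVerdict:
    disposition: str   # e.g. "benign" or "escalate"
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def call_ai_triage_service(alert: dict) -> TriageVerdict:
    """Placeholder for the organization's real AI triage client."""
    raise ConnectionError("AI triage service unreachable")

def send_to_analyst_queue(alert: dict) -> None:
    """Placeholder for routing an alert to the human analyst queue."""
    print(f"Queued for manual review: {alert['rule']}")

def open_incident(alert: dict, verdict: TriageVerdict) -> None:
    """Placeholder for opening a case in the incident-management system."""
    print(f"Incident opened for {alert['host']} ({verdict.disposition})")

def triage(alert: dict) -> str:
    """Try the AI path first; fall back to the human-driven path when needed."""
    try:
        verdict: Optional[TriageVerdict] = call_ai_triage_service(alert)
    except Exception:
        verdict = None  # AI system offline or erroring: degrade gracefully

    if verdict is None or verdict.confidence < CONFIDENCE_THRESHOLD:
        send_to_analyst_queue(alert)
        return "manual_review"
    if verdict.disposition == "escalate":
        open_incident(alert, verdict)
        return "escalated"
    return "auto_closed"

print(triage({"rule": "Suspicious PowerShell", "host": "hr-ws-017", "severity": "medium"}))
```

The design choice worth noting is the explicit fallback branch: the human-driven path is part of the normal workflow, not an emergency afterthought, which is what keeps analysts' skills sharp enough to take over when the AI cannot.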
I know how intriguing (and often terrifying) it can be to fantasize about this cyberpunk version of the future, but the reality is quite different.
Artificial intelligence can reason, learn, and act, but it lacks the business intuition, contextual awareness, and creative problem-solving that define human defenders.
The future SOC isn’t fully automated; it’s collaborative. A SOC where humans and AI systems work side by side, each amplifying the other’s strengths, now that is a realistic version of the future.
Long story short, AI is not the ultimate bad guy or a magic bullet. It is a partner.
For the CISOs and security leaders reading this, I’m happy to summarize this entire article for you in two simple sentences: Don’t treat AI as just another tool in cybersecurity; it’s a transformative force reshaping the entire profession.
But transformation without preparation is risky.

If your SOC isn’t already training its people to work with, challenge, and improve AI, the time to start is now.
At Cyberbit, we believe in empowering defenders through hands-on, experiential learning, helping organizations close the readiness gap between technology and talent. Let us show you how.
