
Govind Rammurthy, CEO and Managing Director, eScan
Everyone talks about AI in cybersecurity as an arms race — attackers get smarter, defenders get smarter, and the cycle continues. But that narrative misses something crucial: AI fundamentally changes the economics of cyberattacks in a way that heavily favors attackers, and understanding this asymmetry is essential for building effective defenses.
Consider what AI does for a cybercriminal. A single attacker can now launch personalized phishing campaigns against thousands of targets simultaneously, with each message tailored to the recipient's job role, recent activity, and social connections. What used to require a team of researchers and weeks of preparation now happens automatically in minutes.
In early 2024, a deepfake video call convinced an employee at a Hong Kong-based company to transfer $25 million; the employee believed they were speaking with the company's CFO and other executives. The cost to create that fake? A few thousand dollars. The damage? $25 million. This is the new economic reality: attacks are becoming cheaper to execute, while defenses are becoming more expensive to maintain.
The Personalization Problem
The danger of AI-powered attacks lies not only in technical sophistication but in contextual manipulation. Traditional phishing emails were easy to spot — “Dear Customer, verify your account.” But AI now crafts messages referencing real colleagues, projects, and industry terms, making them almost impossible to distinguish from genuine communication.
A CISO recently described an incident where an AI-generated phishing email referenced a confidential internal project. The attacker pieced together details from LinkedIn, press releases, and conference talks, crafting a message so convincing that only luck — the recipient being on leave — prevented a breach.
AI can process and correlate enormous amounts of public data — from LinkedIn to patent filings — to build detailed profiles and execute highly personalized attacks. This level of contextual accuracy makes even well-trained employees vulnerable.
When Defense Becomes Detection, and Detection Becomes Too Late
Traditional cybersecurity operates on prevention, detection, and response. But AI collapses these stages — attackers move faster than organizations can react.
Consider AI-powered malware that adapts in real time, modifying its behavior to bypass controls and blending into legitimate traffic. By the time detection systems raise an alert, the damage is often done.
The Jaguar Land Rover (JLR) attack illustrates this. Despite £800 million invested in cybersecurity infrastructure, attackers compromised systems so deeply that global production was halted for over three weeks, costing £50 million per week and endangering 200,000 supply chain jobs. The issue wasn't weak technology but the sheer speed and adaptability of AI-enhanced attacks.
The False Promise of AI-Only Defense
Deploying AI-powered defenses is necessary but insufficient. AI can process massive datasets, detect anomalies, and identify unknown threats — but it’s not foolproof.
AI models depend on training data. If new attack techniques aren’t represented, detection fails. Attackers can test their AI-generated attacks against multiple defense systems until one works, while defenders must wait for incidents to update their models.
Moreover, adversarial machine learning allows attackers to manipulate AI models — tricking them into misclassifying threats or even “poisoning” training data. And most critically, AI lacks human context. It can flag anomalies, but only a human can interpret whether they’re malicious or legitimate.
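To make the evasion side concrete, here is a toy sketch in Python (NumPy only). The linear "scorer", its weights, and the feature values are all invented for illustration, and real detectors are far more complex, but the basic mechanic of steering a sample across a decision boundary is the same:

```python
# Toy illustration of adversarial evasion against a linear "malware scorer".
# Nothing here is a real detector; weights and features are invented.
import numpy as np

# Hypothetical model: positive weights push the score toward "malicious".
w = np.array([1.5, 2.0, -0.5, 1.0])   # e.g. entropy, packing, unsigned code, net calls
b = -2.0

def score(x: np.ndarray) -> float:
    """Sigmoid score in (0, 1); above 0.5 means 'flag as malicious'."""
    return float(1.0 / (1.0 + np.exp(-(w @ x + b))))

x0 = np.array([1.8, 1.2, 0.0, 1.5])   # a sample the model currently flags
x = x0.copy()
print(f"before evasion: {score(x):.2f}")   # ~0.99, confidently flagged

# Evasion: step against the weight vector, shaving down exactly the features
# the model relies on, until the score slips under the alert threshold.
step = 0.1
while score(x) >= 0.5:
    x -= step * w / np.linalg.norm(w)

print(f"after evasion:  {score(x):.2f}")                   # just under 0.5
print(f"perturbation size: {np.linalg.norm(x - x0):.2f}")  # small, targeted change
```

Poisoning works from the other direction: corrupt enough of the training data and the model learns a boundary with the attacker's blind spot already built in.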
Designing Security for Inevitable Human Failure
Most breaches still succeed because of human behavior — but that’s a design flaw, not carelessness. Technology often assumes perfect user behavior when it should anticipate mistakes.
For instance, employees frequently misdirect emails due to autofill errors. Instead of blaming users, one practical fix is introducing a brief delay before emails are sent, allowing time to recall mistakes. Small design changes like this prevent countless data leaks.
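As a sketch of that design change, here is the send-delay pattern in Python; the DelayedOutbox class and the send_now stand-in are hypothetical, not any particular mail system's API:

```python
# A "brief delay before sending" pattern: outgoing mail sits in a cancellable
# queue for a short grace period before it is actually handed to the mail
# server. send_now() is a stand-in for a real SMTP call.
import threading

GRACE_PERIOD_SECONDS = 30  # long enough to notice an autofill mistake

def send_now(message: dict) -> None:
    print(f"sent to {message['to']}: {message['subject']}")

class DelayedOutbox:
    def __init__(self, grace_seconds: float = GRACE_PERIOD_SECONDS):
        self.grace = grace_seconds
        self._pending: dict[str, threading.Timer] = {}

    def queue(self, msg_id: str, message: dict) -> None:
        """Schedule the send after the grace period; the user can still recall it."""
        timer = threading.Timer(self.grace, self._fire, args=(msg_id, message))
        self._pending[msg_id] = timer
        timer.start()

    def recall(self, msg_id: str) -> bool:
        """Cancel a queued message. Returns False if it already went out."""
        timer = self._pending.pop(msg_id, None)
        if timer is None:
            return False
        timer.cancel()
        return True

    def _fire(self, msg_id: str, message: dict) -> None:
        self._pending.pop(msg_id, None)
        send_now(message)

outbox = DelayedOutbox(grace_seconds=5)
outbox.queue("m1", {"to": "finance-all@example.com", "subject": "Q3 payroll"})
print(outbox.recall("m1"))  # True: caught the autofill mistake in time
```

The point is architectural: the recall window costs the sender a few seconds and removes an entire class of accidental disclosure.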
AI-powered attacks exploit human psychology — urgency, authority, helpfulness. Training helps but won’t eliminate errors. Security must be built with human imperfection in mind.
The Supply Chain Intelligence Problem
AI doesn’t just make direct attacks more effective — it reshapes supply chain risks. Attackers compromising a vendor aren’t merely stealing data; they’re gathering intelligence about everyone connected to that ecosystem.
The Collins Aerospace ransomware incident highlights this. When its MUSE check-in software was compromised, it disrupted operations across major European airports, stranding thousands of passengers. Despite prior breaches and strengthened defenses, attackers leveraged intelligence from earlier compromises to strike again.
A single breach now provides data that can be analyzed and weaponized for years. AI accelerates this process by mapping interconnections across entire industries.
What Actually Works: Lessons from Real Incidents
Given all these challenges, what actually works? From analysis of major incidents and work with organizations across various sectors, several principles emerge:
Assume breach, design for containment. Stop asking "how do we prevent all attacks?" Start asking "when we're breached, how do we limit the damage?" Network segmentation, micro-segmentation, audit trails, and least-privilege access architecture aren't just buzzwords; they're practical approaches that limit an attacker's ability to move laterally through your systems. When JLR was forced to shut down all production globally, it was because they couldn't be certain which systems were compromised. Better segmentation might have enabled more targeted containment.
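As an illustration of the containment mindset, here is a minimal default-deny segmentation check in Python; the segment names and allow rules are invented, and in practice enforcement lives in firewalls, SDN policy, or a service mesh rather than application code:

```python
# Default-deny micro-segmentation as a policy check: traffic between workload
# segments is dropped unless an explicit rule allows it, so a compromised
# host can't roam freely. Segments and rules are illustrative.

# Explicit allowlist: (source segment, destination segment, port).
ALLOW_RULES = {
    ("web", "app", 8443),
    ("app", "db", 5432),
    ("ops", "app", 22),
}

def is_allowed(src_segment: str, dst_segment: str, port: int) -> bool:
    """Default-deny: anything not explicitly allowed is blocked and logged."""
    allowed = (src_segment, dst_segment, port) in ALLOW_RULES
    if not allowed:
        print(f"BLOCKED {src_segment} -> {dst_segment}:{port} (possible lateral movement)")
    return allowed

is_allowed("web", "app", 8443)   # legitimate tiered traffic
is_allowed("web", "db", 5432)    # web tier must not reach the database directly
is_allowed("app", "ops", 22)     # a compromised app host probing the ops segment
```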
Long-term behavioral analysis matters more than point-in-time detection. Advanced persistent threats unfold over weeks or months: an employee's account slowly accessing data it never touched before, a system gradually increasing its external communications, small file transfers happening consistently at odd hours, logins by employees who are on leave. Each individual event looks normal, but the pattern over time reveals compromise. AI is valuable here, but only if your systems maintain long-term context rather than just analyzing recent activity.
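A minimal sketch of that kind of per-account baselining, in plain Python; the 90-day window, 30-day warm-up, and z-score threshold are illustrative assumptions, not recommendations:

```python
# Long-term behavioral baselining: keep a rolling history of each account's
# daily activity (e.g. distinct resources touched) and flag a day that sits
# far outside that account's own history.
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW_DAYS = 90
history = defaultdict(lambda: deque(maxlen=WINDOW_DAYS))  # account -> daily counts

def record_day(account: str, resources_touched: int) -> None:
    history[account].append(resources_touched)

def is_anomalous(account: str, todays_count: int, z_threshold: float = 3.0) -> bool:
    """Flag when today deviates strongly from this account's own baseline."""
    past = history[account]
    if len(past) < 30:          # not enough long-term context yet to judge
        return False
    mu, sigma = mean(past), stdev(past)
    if sigma == 0:
        return todays_count != mu
    return abs(todays_count - mu) / sigma > z_threshold

# An account that normally touches 10-12 resources a day...
for day in range(60):
    record_day("jdoe", 10 + (day % 3))
print(is_anomalous("jdoe", 11))   # False: within this account's normal range
print(is_anomalous("jdoe", 85))   # True: a slow-burn exfiltration pattern stands out
```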
Test your humans, not just your technology. Regular phishing simulations are valuable, but they need to reflect actual AI-powered attack techniques. Generic phishing tests don't prepare employees for personalized, contextually aware attacks. Your simulations should reference real projects, real colleagues, and real business situations. If your people can identify those more sophisticated attempts, they're genuinely prepared.
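A minimal sketch of what a context-aware simulation template might look like, assuming a hypothetical internal directory that knows each employee's current project and frequent contacts (all names and data below are invented):

```python
# Context-aware phishing simulation: the internal test message references a
# real colleague and a live project, so employees practice against the kind
# of personalized lure an AI-assisted attacker would craft.
from string import Template

TEMPLATE = Template(
    "Hi $first_name,\n\n"
    "$colleague asked me to share the updated $project budget before the "
    "review call. Can you approve access here: $tracking_link\n"
)

def build_simulation(employee: dict, tracking_link: str) -> str:
    """Fill the lure with this employee's actual working context."""
    return TEMPLATE.substitute(
        first_name=employee["first_name"],
        colleague=employee["frequent_contact"],
        project=employee["current_project"],
        tracking_link=tracking_link,
    )

employee = {
    "first_name": "Priya",
    "frequent_contact": "Daniel from Finance",
    "current_project": "Project Helios",
}
print(build_simulation(employee, "https://training.example.com/t/abc123"))
```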
Audit your supply chain's security, not just their compliance. Compliance certificates don't tell you whether a partner can actually detect and respond to attacks. You need to understand their security practices, incident response capabilities, and their own supply chain risks. After the JLR attack, how many of its suppliers went back and audited JLR's security posture? And, vice versa, how many suppliers' security postures did JLR audit? Probably not enough.
Design for human imperfection at every level. Multi-factor authentication, transaction verification through secondary channels, delayed execution of sensitive operations, automatic backups that can't be deleted by compromised accounts—these aren't sophisticated AI techniques, but they're effective precisely because they compensate for inevitable human errors.
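Here is a minimal sketch combining two of those controls, a delay window plus secondary-channel confirmation, around a sensitive operation; the notification call is a stand-in for a real push or SMS flow, and the four-hour hold is an invented example:

```python
# Minimal sketch: a sensitive operation is held for a delay window and only
# executes once a confirmation arrives over a secondary channel.
import time
from dataclasses import dataclass, field

HOLD_SECONDS = 4 * 60 * 60  # e.g. a four-hour hold on large transfers

@dataclass
class PendingOperation:
    description: str
    requested_at: float = field(default_factory=time.time)
    confirmed_out_of_band: bool = False

def notify_secondary_channel(op: PendingOperation) -> None:
    # In a real system this would trigger an authenticator push or SMS.
    print(f"push sent to registered device: approve '{op.description}'?")

def try_execute(op: PendingOperation) -> bool:
    """Runs only if the hold has elapsed AND the second channel confirmed."""
    held_long_enough = time.time() - op.requested_at >= HOLD_SECONDS
    if op.confirmed_out_of_band and held_long_enough:
        print(f"executing: {op.description}")
        return True
    print("held: waiting on delay window and/or out-of-band confirmation")
    return False

op = PendingOperation("wire $250,000 to a new beneficiary")
notify_secondary_channel(op)
try_execute(op)                   # blocked: no confirmation yet
op.confirmed_out_of_band = True
try_execute(op)                   # still blocked: hold window not elapsed
```

A deepfake call can rush one person; it is much harder for it to beat a mandatory delay plus a second channel the attacker does not control.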
The Uncomfortable Economics
AI has widened the economic gap between attackers and defenders. With minimal investment, attackers can execute campaigns that force defenders to spend millions on detection, recovery, and resilience.
Organizations can’t win by chasing every new security tool. Instead, they must strengthen fundamentals: network segmentation, patch management, access controls, and a culture of shared responsibility. Investments should focus on automation, rapid response, and process resilience — not just prevention.
Beyond the Arms Race
AI isn’t simply escalating an arms race; it’s redefining it. The future of cybersecurity lies not in outspending attackers but in building resilient systems that continue to function even when breached.
AI amplifies human capability — for both attackers and defenders. The organizations that thrive will combine AI’s analytical speed with human judgment and a design philosophy that assumes imperfection.
In an era of AI-powered threats, breaches are inevitable. The difference between temporary disruption and catastrophic failure depends not on your AI tools, but on how deeply resilience is woven into your organization’s design and culture.
