The AI Security Budget Gap: 93% Expect Daily AI Attacks But Only 4% Have Dedicated Teams

Ninety-three percent of security leaders expect daily AI-driven attacks. Four percent have dedicated AI security teams. The gap between threat awareness and resource allocation reveals an organizational failure, not a budget one.
The AI Security Budget Gap: 93% Expect Daily AI Attacks But Only 4% Have Dedicated Teams

In April 2024, Netacea published a research report titled Cyber Security in the Age of Offensive AI that landed a statistic security leaders should have found devastating: 93% of them expected their organizations to face daily AI-driven cyberattacks within the year. Sixty-five percent expected offensive AI to become standard tooling for cybercriminals. The urgency was unmistakable.

Then came the second finding, which the industry largely ignored: despite that near-universal expectation of AI-powered attacks, the respondents’ actual defensive investments were concentrated in traditional threat categories. DDoS protection received AI augmentation at 62% of organizations. Bot defense? Thirty-three percent. Dedicated AI security teams with ring-fenced budgets? The number was so small it barely registered in the survey data.

The enterprise security industry in early 2024 had achieved a peculiar distinction: near-perfect threat awareness coupled with near-zero structural response. Everyone knew AI attacks were coming. Almost nobody had reorganized to defend against them.

The survey data is unanimous and damning

The Netacea report wasn’t an outlier. Multiple surveys from different vendors, conducted independently with different methodologies, converged on the same conclusion between late 2023 and mid-2024.

Deep Instinct’s 2024 Voice of SecOps Report surveyed 500 senior cybersecurity professionals at companies with 1,000+ employees across financial services, technology, manufacturing, retail, healthcare, public sector, and critical infrastructure. Their findings: 97% were concerned their organization would suffer an AI-generated security incident. Seventy-five percent had already changed their cybersecurity strategy due to AI-powered threats. And 66% said AI was the direct cause of their burnout.

But here’s the telling number: 41% of those same organizations were still relying primarily on endpoint detection and response to stop AI attacks. EDR, a technology that Deep Instinct’s own previous research found over half of organizations considered ineffective against novel threat types.

Darktrace’s research showed that while 87% of IT professionals anticipated AI-generated threats would continue impacting their organizations for years, only 15% of stakeholders felt non-AI cybersecurity tools were capable of detecting and stopping AI-generated threats. Yet the budget allocation didn’t reflect this gap. Detection and response consumed an estimated 35% of cybersecurity budgets, while AI-specific security received a fraction of that.

The pattern across every survey was identical: executives could articulate the threat with precision. They could not articulate what, specifically, they were doing about it that differed from what they were doing before.

Why awareness isn’t translating into action

The gap between AI threat awareness and AI security investment has a structural explanation, and it’s not that security leaders are incompetent. It’s that the traditional budgeting model for cybersecurity doesn’t accommodate a category shift.

Security budgets are built around threat categories: network security, endpoint protection, identity management, application security, incident response. Each category has established vendors, known benchmarks, and quantifiable risk metrics. When a new threat emerges within an existing category, a new ransomware variant, a novel phishing technique, it gets absorbed into the existing budget line. The endpoint team handles it. The email security team handles it. The SOC handles it.

AI-powered attacks don’t fit this model. They cut across every category simultaneously. An AI-generated phishing email is an email security problem. An AI-powered credential stuffing campaign is an identity problem. A deepfake-enhanced social engineering attack is a human factors problem. An adversarial prompt injection is an application security problem. No single budget line owns “AI threats” because AI threats don’t respect budget lines.

The result is that AI security spending gets distributed across existing categories, where it gets diluted, deprioritized, and ultimately indistinguishable from general security spending. Andy Still, then CTO at Netacea, captured the dynamic in his assessment of the research: “The pressure is on security leaders to do more with less, and so the rise of the use of AI to enhance cyber attacks could not have come at a worse time.”

Doing more with less is the wrong framing. The issue isn’t efficiency. The issue is that “more of the same” doesn’t address a threat that is structurally different from what came before.

The tool mismatch problem

Consider what a typical enterprise security stack looked like in January 2024.

The SIEM ingested logs and correlated events based on pattern matching and statistical anomalies. The EDR monitored endpoints for known malicious behaviors. The email gateway flagged messages that matched phishing signatures. The web application firewall blocked requests that triggered rules. The identity platform enforced MFA and conditional access policies.

Now consider what an AI-powered attack looks like. The phishing email contains no known signatures; it was generated de novo for this specific target, grammatically perfect, contextually aware, and personalized using publicly available information. The credential attack doesn’t trigger rate limits because the AI is rotating timing, source IPs, and attempt patterns to stay below detection thresholds. The deepfake impersonation doesn’t trip any technical control because it appears on a legitimate video platform using a legitimate account.

The tool mismatch isn’t that the tools are bad. It’s that they were designed for a threat model in which attacks have detectable artifacts; signatures, patterns, anomalies. AI-generated attacks are specifically optimized to not have those artifacts. The tool assumes the attack will look different from legitimate traffic. The AI ensures it doesn’t.

According to industry statistics compiled from multiple sources, approximately 40% of all phishing emails targeting businesses were AI-generated by 2024. Research from Harvard Business Review found that 60% of recipients fell victim to AI-generated phishing: a success rate equivalent to human-crafted campaigns, but achieved at a fraction of the cost and at vastly greater scale.

This is the math that should be keeping CISOs awake: AI doesn’t need to be better than human attackers. It needs to be as good at a thousandth of the cost. The economics change everything, and the security budget hasn’t caught up.

What the 4% are doing differently

The small minority of organizations that had established dedicated AI security functions by early 2024 shared several characteristics that distinguished them from their peers.

They created explicit AI security budget lines. Not a subsection of “innovation” or “emerging threats,” but a named line item with dedicated funding, owned by a specific individual or team. The act of naming it forced the organization to scope it, staff it, and measure it.

They treated AI security as a discipline, not a project. The organizations that were farthest ahead had recognized that AI threats weren’t a temporary spike that would be absorbed by existing teams. They were a permanent expansion of the attack surface that required persistent, specialized attention. This meant dedicated headcount, not borrowed cycles from the existing SOC.

They invested in offensive AI capabilities. Not to attack others, but to understand what AI-powered attacks actually look like in their environment. Running AI-generated phishing simulations, testing their detection stack against AI-crafted payloads, and red-teaming their processes against deepfake scenarios. You can’t defend against what you haven’t seen, and most security teams in January 2024 had never seen a properly executed AI attack in their own environment.

They updated their tabletop exercises. The standard tabletop scenario in early 2024 was still a ransomware incident or a data breach. The organizations with AI security maturity had added scenarios involving AI-generated executive impersonation, AI-powered reconnaissance, and AI-enhanced supply chain compromise. The value of a tabletop isn’t the scenario itself, it’s the discovery of process gaps that only surface under pressure.

The view from inside the machine

I architect AI-powered enterprise systems for a living, and there’s something that bothers me about the industry conversation around AI security: it treats AI as an external threat vector. Something that happens to you. The attacker uses AI. You defend against it.

But that framing misses the reality that most large enterprises are simultaneously deploying AI internally while trying to defend against it externally. The same organization that’s worried about AI-generated phishing is also rolling out AI copilots, AI customer service agents, AI code generation, and AI-assisted decision-making. Each of those internal deployments creates a new surface. Each one introduces new trust assumptions. Each one requires security evaluation that most organizations are not performing.

Through my work with the Coalition for Secure AI (CoSAI) and IETF working groups, I see the standards community grappling with this dual nature. The threat model isn’t just “AI attacks your systems.” It’s “AI attacks your AI.” Prompt injection against your customer service bot. Data poisoning of your recommendation engine. Model manipulation of your fraud detection system.

The 4% of organizations with dedicated AI security teams are starting to think about this holistic picture. The other 96% are still treating AI defense and AI deployment as separate conversations happening in different conference rooms.

Five things that should happen before the next board meeting

The path from 4% to something more defensible doesn’t require revolutionary investment. It requires structural changes that force the organization to treat AI security as a category rather than a subcategory.

Create an AI security line item in your security budget. It can start at 5% of your total security spend. The number matters less than the act of creating the line. A named budget item creates accountability, visibility, and a forcing function for planning.

Assign AI threat monitoring to a named individual. Not “the SOC will handle it.” A specific person whose performance review includes AI security outcomes. If nobody owns it, nobody is doing it.

Add AI attack scenarios to your next tabletop exercise. Not “AI might be used” scenarios, but specific scenarios: an AI-generated deepfake of your CEO requests an emergency wire transfer. An AI-generated phishing campaign targets your engineering team with code repository links. An adversarial prompt injection against your customer-facing chatbot exfiltrates PII. Run these. See what breaks.

Brief your board on the gap between your AI threat perception and your AI security investment. Use the 93%/4% framing. If your organization is among the 93% that expects daily AI attacks, ask the board what percentage of the security budget is specifically allocated to AI defense. Watch the room.

Inventory your current tools’ AI detection capabilities, honestly. Not what the vendor says on the roadmap slide. What the tool can actually detect today against AI-generated attacks. Most organizations that conduct this audit discover that the answer is “very little, and we didn’t know that.”

The budget will move. The question is when.

The AI security budget gap in early 2024 is a structural problem that will eventually be solved; either proactively by organizations that reorganize, or reactively after a sufficiently expensive incident forces the issue. The trajectory of AI-powered attacks is not ambiguous. They are getting cheaper, more accessible, and more effective on a curve that existing defenses cannot match through incremental improvement.

The 93% of security leaders who expected daily AI attacks were right about the threat. The question for 2024 was whether they would be right about their organization’s preparedness. For 96% of them, the answer was no.

The gap between knowing and doing is the most expensive real estate in enterprise security. In the age of AI, the rent just went up.