
AI Cybersecurity Tools Face Hidden Limits

A new mathematical theory shows that AI cannot boost security operations past human bottlenecks, and that common assumptions about false alarms are flawed.

AI Research
April 01, 2026
4 min read

In cybersecurity operations, AI tools are often deployed with the promise of speeding up threat detection and response, but a new formal analysis shows their impact is tightly constrained by fundamental bottlenecks. Researchers have developed a mathematical theory that precisely defines when AI improvements actually increase system throughput, uncovering that many informal arguments about AI's benefits or drawbacks lack rigorous grounding. This work, published as a preprint, uses a pipeline model in which each stage represents a step like detection or investigation, throughput is determined by the slowest stage, and AI is modeled as a multiplier that can accelerate stages. The theory tempers optimistic views by proving that unless every bottleneck is improved, overall throughput remains unchanged, and that human stages impose an absolute ceiling no matter how much AI is applied elsewhere.

A key result is that throughput increases only if every original bottleneck stage is strictly improved by an AI multiplier greater than one. If at least one bottleneck retains its original capacity, the system's overall throughput stays the same, regardless of how much other stages are accelerated. This result sharpens informal claims from the Theory of Constraints, providing exact conditions rather than vague assertions. For example, in a pipeline with stage capacities of 3, 1, and 4, the bottleneck is the stage with capacity 1; improving only the other stages leaves throughput at 1, while improving the bottleneck raises it. The theory also handles cases with multiple tied bottlenecks, requiring all of them to be improved for any gain, a point informal reasoning often overlooks.
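
This is easy to check numerically. The sketch below is a minimal illustration of the model, assuming the stagewise-minimum definition above; the function names (and any numbers beyond the 3, 1, 4 example) are ours, not the paper's:

```python
# Throughput is the minimum stage capacity; AI multipliers (>= 1) act stage-locally.
def throughput(capacities):
    return min(capacities)

def apply_ai(capacities, multipliers):
    # Admissible multipliers scale each stage's capacity by at least 1.
    assert all(m >= 1 for m in multipliers)
    return [c * m for c, m in zip(capacities, multipliers)]

pipeline = [3, 1, 4]                                # bottleneck: the capacity-1 stage
print(throughput(pipeline))                         # 1
print(throughput(apply_ai(pipeline, [10, 1, 10])))  # still 1: bottleneck untouched
print(throughput(apply_ai(pipeline, [1, 2, 1])))    # 2: bottleneck strictly improved

tied = [2, 2, 5]                                    # two tied bottlenecks
print(throughput(apply_ai(tied, [3, 1, 1])))        # still 2: one tie left alone
print(throughput(apply_ai(tied, [3, 2, 1])))        # 4: every bottleneck improved
```

The last two calls show where informal reasoning slips: with tied bottlenecks, accelerating only one of the capacity-2 stages buys nothing.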

The methodology involves defining a pipeline as a finite set of stages with positive capacities, where throughput is the minimum capacity, and AI is represented by admissible multipliers that scale each stage's capacity by a factor of at least one. The researchers prove five theorems and one proposition using elementary mathematics, such as finite minima and real-number properties, without invoking complex machinery. They extend the model to include human authority stages, which cannot be accelerated, and adversarial scenarios with separate attacker and defender pipelines. The false positive model is treated separately, analyzing useful throughput under fixed and rate-dependent precision functions. All proofs are self-contained, relying on assumptions like stage-local multiplicative perturbation and throughput being the stagewise minimum.
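
In symbols, using our own notation to restate the definitions the paragraph summarizes:

```latex
% Throughput of a pipeline with stage capacities c_1, \dots, c_n > 0:
T(c) = \min_{1 \le i \le n} c_i
% Admissible AI multipliers m_i \ge 1 act stage-locally:
T(m \odot c) = \min_{1 \le i \le n} m_i c_i
% Throughput strictly increases iff every original bottleneck is strictly accelerated:
T(m \odot c) > T(c) \iff m_i > 1 \text{ for all } i \in \operatorname*{arg\,min}_j c_j
```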

Results from the paper show that under human authority constraints, throughput cannot exceed the smallest capacity among human stages, and this bound is tight, meaning it can be achieved with sufficient acceleration of the non-human stages. In adversarial settings, the attacker-defender throughput ratio worsens for the defender if and only if the attacker's relative throughput gain exceeds the defender's, highlighting that raw speedup matters less than relative improvement. For false positives, the analysis reveals a counterintuitive finding: under a fixed false-positive-fraction model, useful throughput plateaus rather than declines when alert rates exceed investigation capacity, contradicting common assertions. A decline occurs only if precision falls as alert rates rise, as shown in a repaired model with a strictly decreasing precision function.
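
The false positive finding is straightforward to reproduce under the stated models. In this sketch (variable names and the capacity figure are illustrative assumptions, not values from the paper), useful throughput is precision times the investigated alert rate, which is capped by analyst capacity:

```python
# Useful throughput = precision * min(alert rate, investigation capacity).
def useful_throughput(alert_rate, capacity, precision):
    # precision may be a constant or a function of the alert rate
    p = precision(alert_rate) if callable(precision) else precision
    return p * min(alert_rate, capacity)

CAPACITY = 100.0  # alerts/hour analysts can investigate (assumed figure)

# Fixed precision: useful throughput plateaus past capacity, never declines.
for rate in (50, 100, 200, 400):
    print(rate, useful_throughput(rate, CAPACITY, 0.3))
# -> 15.0, 30.0, 30.0, 30.0

# Strictly decreasing precision: useful throughput genuinely declines.
decreasing = lambda rate: 0.3 * CAPACITY / max(rate, CAPACITY)
for rate in (50, 100, 200, 400):
    print(rate, useful_throughput(rate, CAPACITY, decreasing))
# -> 15.0, 30.0, 15.0, 7.5
```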

These findings matter for real-world cybersecurity operations, where decisions about AI investment and human staffing are often based on informal slogans. The theory suggests that focusing AI improvements on non-bottleneck stages may be wasted effort, and that human analysts remain critical as the ultimate constraints. For organizations, this means that deploying AI without addressing the slowest human-led steps, like investigation or decision-making, will not improve overall security response times. The adversarial insights warn that if attackers improve their bottlenecks while defenders do not, defenders can fall behind despite other investments, emphasizing the need for strategic prioritization. The false positive results urge a reevaluation of models used to predict operational burden, as constant-precision assumptions may lead to inaccurate expectations of decline.
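
A toy instance of the adversarial condition, constructed by us rather than taken from the paper, makes the relative-gain point concrete:

```python
# Defender worsens iff the attacker's relative throughput gain exceeds the defender's.
def relative_gain(before, after):
    return min(after) / min(before)

attacker_before, attacker_after = [5, 2, 8], [5, 6, 8]   # bottleneck improved: 2 -> 6
defender_before, defender_after = [4, 3, 9], [8, 3, 18]  # bottleneck left untouched

print(relative_gain(attacker_before, attacker_after))    # 2.5
print(relative_gain(defender_before, defender_after))    # 1.0
print(min(attacker_before) / min(defender_before))       # ~0.67: attacker behind
print(min(attacker_after) / min(defender_after))         # ~1.67: attacker ahead
```

The defender doubled two of its stage capacities yet still lost ground, because none of that investment touched its bottleneck.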

Limitations of the theory include its deterministic and bufferless nature, which ignores the stochastic effects, queueing, and parallel structures present in real systems. The model does not quantify how much throughput changes or optimize multiplier allocation under budget constraints, and no empirical validation is offered. Future work could extend the framework to stochastic throughput functionals, which would require new proof techniques and better reflect the variability of actual security operations. The human authority assumption is also idealized, since AI might partially assist human stages, and the adversarial model lacks strategic interaction dynamics. Despite these limitations, the theory offers a foundational framework for more evidence-based analysis of AI in cybersecurity, moving beyond anecdotal claims toward precise mathematical understanding.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
