The Hacker NewsFeb 19, 2026Artificial Intelligence / DevSecOps
We’ve all seen this before: a developer deploys a new cloud workload and grants overly broad permissions just to keep the sprint moving. An engineer generates a “temporary” API key for testing and forgets to revoke it. In the past, these were minor operational risks, debts you’d eventually pay down during a slower cycle.
In 2026, “Eventually” is Now
But today, within minutes, AI-powered adversarial systems can find that over-permissioned workload, map its identity relationships, and calculate a viable route to your critical assets. Before your security team has even finished their morning coffee, AI agents have simulated thousands of attack sequences and moved toward execution.
AI compresses reconnaissance, simulation, and prioritization into a single automated sequence. The exposure you created this morning can be modeled, validated, and positioned inside a viable attack path before your team has lunch.
The Collapse of the Exploitation Window
Historically, the exploitation window favored the defender. A vulnerability was disclosed, teams assessed their exposure, and remediation followed a predictable patch cycle. AI has shattered that timeline.
In 2025, over 32% of vulnerabilities were exploited on or before the day the CVE was issued. The infrastructure powering this is massive, with AI-powered scan activity reaching 36,000 scans per second.
But it’s not just about speed; it’s about context. Only 0.47% of identified security issues are actually exploitable. While your team burns cycles reviewing the 99.5% of “noise,” AI is laser-focused on the 0.5% that matters, isolating the small fraction of exposures that can be chained into a viable route to your critical assets.
To understand the threat, we must look at it through two distinct lenses: how AI accelerates attacks on your infrastructure, and how your AI infrastructure itself introduces a new attack surface.
Scenario #1: AI as an Accelerator
AI attackers aren’t necessarily using “new” exploits. They are exploiting the exact same CVEs and misconfigurations they always have, but they are doing it with machine speed and scale.
Automated vulnerability chaining
Attackers no longer need a “Critical” vulnerability to breach you. They use AI to chain together “Low” and “Medium” issues, a stale credential here, a misconfigured S3 bucket there. AI agents can ingest identity graphs and telemetry to find these convergence points in seconds, doing work that used to take human analysts weeks.
Identity sprawl as a weapon
Machine identities now outnumber human employees 82 to 1. This creates a massive web of keys, tokens, and service accounts. AI-driven tools excel at “identity hopping”, mapping token exchange paths from a low-security dev container to an automated backup script, and finally to a high-value production database.
Social Engineering at scale
Phishing has surged 1,265% because AI allows attackers to mirror your company’s internal tone and operational “vibe” perfectly. These aren’t generic spam emails; they are context-aware messages that bypass the usual “red flags” employees are trained to spot.
Scenario #2: AI as the New Attack Surface
While AI accelerates attacks on legacy systems, your own AI adoption is creating entirely new vulnerabilities. Attackers aren’t just using AI; they are targeting it.
The Model Context Protocol and Excessive Agency
When you connect internal agents to your data, you introduce the risk that it will be targeted and turned into a “confused deputy.” Attackers can use prompt injection to trick your public-facing support agents into querying internal databases they should never access. Sensitive data surfaces and is exfiltrated by the very systems you trusted to protect it, all while looking like authorized traffic.
Poisoning the Well
The results of these attacks extend far beyond the moment of exploitation. By feeding false data into an agent’s long-term memory (Vector Store), attackers create a dormant payload. The AI agent absorbs this poisoned information and later serves it to users. Your EDR tools see only normal activity, but the AI is now acting as an insider threat.
Supply Chain Hallucinations
Finally, attackers can poison your supply chain before they ever touch your systems. They use LLMs to predict the “hallucinated” package names that AI coding assistants will suggest to developers. By registering these malicious packages first (slopsquatting), they ensure developers inject backdoors directly into your CI/CD pipeline.
Reclaiming the Response Window
Traditional defense cannot match AI speed because it measures success by the wrong metrics. Teams count alerts and patches, treating volume as progress, while adversaries exploit the gaps that accumulate from all this noise.
An effective strategy for staying ahead of attackers in the era of AI must focus on one simple, yet critical question: which exposures actually matter for an attacker moving laterally through your environment?
To answer this, organizations must shift from reactive patching to Continuous Threat Exposure Management (CTEM). It is an operational pivot designed to align security exposure with actual business risk.
AI-enabled attackers don’t care about isolated findings. They chain exposures together into viable paths to your most critical assets. Your remediation strategy needs to account for that same reality: focus on the convergence points where multiple exposures intersect, where one fix eliminates dozens of routes.
The ordinary operational decisions your teams made this morning can become a viable attack path before lunch. Close the paths faster than AI can compute them, and you reclaim the window of exploitation.
Note: This article was thoughtfully written and contributed for our audience by Erez Hasson, Director of Product Marketing at XM Cyber.