Agentic Systems
An AI Agent Just Pwned Trivy, Microsoft, and DataDog in One Week
An autonomous AI agent scanned 47,000+ repositories, identified vulnerable CI/CD configurations, and compromised projects from Microsoft, DataDog, and Aqua Security using five distinct techniques. The only target that survived was defended by another AI agent.
Agentic Systems
The npm Nightmare Just Repeated Itself in AI Agents. It’s Worse This Time.
1,184 malicious skills infiltrated ClawHub, the plugin marketplace for the OpenClaw agent framework, reaching 20% of the ecosystem in weeks. Unlike npm packages, agent plugins run with full system permissions. The supply chain playbook broke because the blast radius changed.
Trustworthy AI
Researchers Just Proved That Making AI Agents Collaborate Better Makes Them Leak More Data
Every connection between AI agents creates both capability and exposure. The trust-vulnerability paradox formalizes what practitioners have observed: without trust budgeting, multi-agent collaboration scales risk faster than it scales value.
Agentic Systems
The First AI-Orchestrated Cyberattack Changed Everything We Thought We Knew About Autonomous Threats
Anthropic disclosed the first documented case of AI autonomously orchestrating a cyberattack sequence. This isn't a theoretical risk anymore. The shift from AI-assisted to AI-orchestrated attacks changes the threat model fundamentally.
Agentic Systems
Two Critical CVEs Just Blew Open the MCP Ecosystem, And Developers Were the Target
Two critical vulnerabilities in MCP reference implementations confirm what the security community warned about: protocol-level design gaps become exploitable at scale. The CVEs aren't edge cases. They're structural consequences of shipping without security mandates.
Agentic Systems
OpenAI Just Adopted MCP, And the Protocol Still Doesn’t Mandate Authentication
OpenAI's adoption of MCP validates the protocol's trajectory but doesn't resolve its core security gap. Authentication remains optional in the specification, and adoption at scale amplifies the risk of every unauthenticated connection.
Trustworthy AI
The Prompt Injection Problem Is Getting Worse, Not Better: RAG Pipelines Are the New Attack Surface
Retrieval-augmented generation expanded AI's knowledge but also its attack surface. When external documents become part of the prompt, every data source becomes a potential injection vector. RAG didn't solve hallucination. It imported a new threat class.
Trustworthy AI
AI’s $4.88 Million Price Tag: When AI Deployments Create Breaches Instead of Preventing Them
The average cost of an AI-related data breach hit $4.88 million. AI systems don't just process sensitive data; they concentrate it, correlate it, and expose it through novel vectors that traditional security architectures weren't designed to handle.
Trustworthy AI
The Framework Nobody Asked For: Why NIST’s AI Risk Management Framework Was Already Obsolete
NIST did important work with the AI RMF 1.0. However, the AI landscape was moving at a speed that no standards development process, no matter how well-executed, could match.
Machine Identity
Deepfake Fraud Surged 3,000% This Year: Your Video Calls Are No Longer Proof of Identity
Deepfake fraud increased 3,000% in 2023, and the implications extend beyond social engineering. When video and voice can be synthesized in real time, visual confirmation of identity stops being a reliable authentication factor.
Trustworthy AI
56% More AI Security Incidents - And We're Still Calling This 'Early Days'
AI security incidents rose 56% in a year, and the industry response was to call it growing pains. At some point, the "early days" framing becomes a way to avoid accountability. The incidents aren't anomalies. They're the system working as designed.
Trustworthy AI
OWASP’s First LLM Top 10: The Vulnerability List That Predicted Everything We Got Wrong
OWASP published its first LLM vulnerability taxonomy, and the entries read like a roadmap of every incident that followed. Prompt injection, training data poisoning, insecure output handling. The threats were cataloged. The industry just wasn't ready to listen.
Enterprise Security
The 1,265% Problem: How ChatGPT Broke Every Phishing Training Program on the Planet
Phishing volumes surged 1,265% after ChatGPT's release. The problem isn't just volume. AI-generated phishing eliminates the grammatical errors that security awareness training taught employees to spot. The detection model is now broken.