Machine Identity
$25.5 Million in 12 Minutes: The Arup Deepfake Heist That Should Terrify Every CFO
An employee at engineering firm Arup transferred $25.5 million after a video call with deepfake recreations of senior executives. This wasn't a failure of awareness training. It was an architecture failure where visual identity was the only trust layer.
Trustworthy AI
The Framework Nobody Asked For: Why NIST’s AI Risk Management Framework Was Already Obsolete at Launch
NIST did important work with the AI RMF 1.0. However, the AI landscape was moving at a speed that no standards development process, no matter how well-executed, could match.
Machine Identity
Deepfake Fraud Surged 3,000% This Year: Your Video Calls Are No Longer Proof of Identity
Deepfake fraud increased 3,000% in 2023, and the implications extend beyond social engineering. When video and voice can be synthesized in real time, visual confirmation of identity stops being a reliable authentication factor.
AI in Production
The Enterprise AI Paradox: Why 65% Adoption and 74% Failure to Scale Are the Same Story
Sixty-five percent of enterprises adopted AI. Seventy-four percent of them failed to scale it beyond pilots. These aren't conflicting statistics. They're the same story: adoption without architecture produces experiments that never become infrastructure.
AI Governance
Three AI Governance Frameworks in One Week, And Zero Actionable Compliance Requirements
Three major AI governance frameworks launched in a single week. None included actionable compliance requirements. The gap between governance intent and operational implementation is where enterprise AI risk actually lives.
AI in Production
The Brilliant Intern Who Hallucinated: What Happened When We Plugged an LLM Into Support Data
If you’re deploying LLMs against structured enterprise data, start with the data model. Map the relationships first. Then design retrieval that preserves those relationships.
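A minimal sketch of what "retrieval that preserves relationships" can look like, assuming a hypothetical support schema (tickets, accounts, orders); this is an illustration, not the pipeline described in the post:

```python
# Sketch: build retrieval documents that carry their joins with them,
# instead of embedding each table row in isolation. Schema is hypothetical.
import sqlite3

def build_ticket_documents(conn: sqlite3.Connection) -> list[dict]:
    """Join each ticket to its account and any related order so the retrieval
    unit already encodes the relationships the LLM will be asked about."""
    rows = conn.execute(
        """
        SELECT t.id, t.subject, t.body, a.name AS account, a.tier,
               o.id AS order_id, o.status AS order_status
        FROM tickets t
        JOIN accounts a ON a.id = t.account_id
        LEFT JOIN orders o ON o.id = t.order_id
        """
    ).fetchall()
    docs = []
    for tid, subject, body, account, tier, order_id, order_status in rows:
        text = (
            f"Ticket {tid} for account {account} (tier: {tier}). "
            f"Related order: {order_id or 'none'} (status: {order_status or 'n/a'}). "
            f"Subject: {subject}. Body: {body}"
        )
        docs.append({"id": tid, "text": text})
    return docs

# Whatever gets embedded and indexed is this docs list; because each document
# already contains its joins, retrieval can't silently drop the relationships.
```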
Trustworthy AI
56% More AI Security Incidents - And We're Still Calling This 'Early Days'
AI security incidents rose 56% in a year, and the industry response was to call it growing pains. At some point, the "early days" framing becomes a way to avoid accountability. The incidents aren't anomalies. They're the system working as designed.
Enterprise Security
ChatGPT Enterprise Solved the Compliance Problem, But Created the Architecture Problem Nobody’s Talking About
OpenAI's enterprise tier addressed data privacy concerns with encryption and access controls. What it didn't address is the harder question: how AI integrates into existing enterprise architecture without creating a parallel decision-making system.
Trustworthy AI
OWASP’s First LLM Top 10: The Vulnerability List That Predicted Everything We Got Wrong
OWASP published its first LLM vulnerability taxonomy, and the entries read like a roadmap of every incident that followed. Prompt injection, training data poisoning, insecure output handling. The threats were cataloged. The industry just wasn't ready to listen.
AI Governance
The EU Just Voted on an AI Law That Was Obsolete Before the Ink Dried
The EU AI Act was drafted for a world of narrow AI classifiers. By the time it passed committee, foundation models had rewritten the risk calculus. Regulating AI by intended use breaks down when the same model serves a thousand use cases.
Enterprise Security
The 1,265% Problem: How ChatGPT Broke Every Phishing Training Program on the Planet
Phishing volumes surged 1,265% after ChatGPT's release. The problem isn't just volume. AI-generated phishing eliminates the grammatical errors that security awareness training taught employees to spot. The detection model is now broken.
AI Governance
Italy Just Proved GDPR Is the Only AI Law That Actually Works
While governments race to draft AI-specific legislation, Italy used existing GDPR authority to force OpenAI into compliance changes. The lesson: the most effective AI regulation may already exist. It just wasn't designed with AI in mind.
AI Governance
Samsung’s $50 Billion Lesson: Why Banning ChatGPT Is the Wrong Response to the Right Problem
Samsung banned ChatGPT after engineers leaked semiconductor data. The instinct to prohibit is understandable but architecturally naive. The real problem isn't the tool. It's the absence of governance infrastructure that makes safe adoption possible.
Enterprise Security
ChatGPT Just Turned 100 Million - And Your Data Loss Prevention Strategy Didn’t Notice
Every enterprise that deployed DLP in 2022 was implicitly betting that they knew all the exits. ChatGPT’s hundred-million-user February proved that the exits had multiplied faster than anyone could count them.
AI in Production
We Launched a Feature That Couldn’t Be Closed (And What Reddit Taught Us About QA)
If you’re building in-product experiences (overlays, guided workflows, contextual help), test the dismissal paths as rigorously as you test the happy paths. The feature that works perfectly but can’t be dismissed is worse than no feature at all.
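A minimal sketch of that principle, using a toy Overlay class rather than the actual feature from the post: parametrize every dismissal path so each exit gets the same coverage as the happy path.

```python
# Sketch: treat each dismissal path as its own test case. Overlay is a
# stand-in for whatever in-product surface you're shipping.
import pytest

class Overlay:
    """Toy in-product overlay with multiple ways to dismiss it."""
    def __init__(self):
        self.open = True

    def close_button(self):
        self.open = False

    def press_escape(self):
        self.open = False

    def click_outside(self):
        self.open = False

@pytest.mark.parametrize("dismiss", ["close_button", "press_escape", "click_outside"])
def test_overlay_can_always_be_dismissed(dismiss):
    overlay = Overlay()
    getattr(overlay, dismiss)()  # exercise every exit, not just the default one
    assert overlay.open is False
```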