According to Futurism, a hacker breached the entire backend of AI startup Doublespeed in late October, gaining access to its massive phone farm operation. The startup, which is backed by prominent venture capital firm Andreessen Horowitz, offers a service that runs hundreds of AI-generated social media accounts. The hacker, who reported the vulnerability to Doublespeed on October 31st, still had access as recently as the day the story was published, with visibility into active phones, their assigned TikTok accounts, and pending tasks. They shared a list of over 400 TikTok accounts run by the farm, about half of which were actively promoting products like language apps and supplements. Crucially, most of these promotional posts did not disclose that they were ads, directly violating both TikTok's terms and FTC guidelines. One example account, "Chloe Davis," had posted roughly 200 videos of an AI-generated woman promoting a massage roller.
The Bleak Reality Behind The Bots
So this isn’t just some theoretical misuse. We’re looking at a fully operational, VC-funded business model built on deception at scale. The 404 Media report shows these aren’t clumsy bots. They’re carefully crafted synthetic personas, like “Chloe Davis,” designed to look and feel human to build trust and then monetize it. And they’re doing it while flouting the most basic rules of advertising. That’s not an oversight; it’s the core feature. The engagement is fake, the influencers are fake, and the trust is manufactured. Here’s the thing: if they’re willing to brazenly ignore ad disclosure laws, what else are they willing to do? The article rightly flags this as a potential breeding ground for disinformation or scams. It’s a slippery slope from undisclosed supplement ads to more dangerous cons.
Why Isn’t Anyone Stopping This?
Now, the most frustrating part of this whole saga might be the apparent lack of consequences. The hacker found the flaw and reported it. The operation has been exposed by journalists. And yet, the phone farm seems to chug along. Where's the pushback from TikTok? The platforms are supposedly armed with AI to detect this stuff, yet a company selling this manipulation as a service operates in plain sight the moment you peek behind the curtain. It creates a perverse incentive: if enforcement is weak or slow, the financial reward for deploying hundreds of these accounts outweighs the negligible risk of having a few shut down. Basically, it's profitable to break the rules.
The Coming Scale Is The Real Nightmare
And this is just the beginning. Doublespeed is only on TikTok right now. They have stated plans to expand to Instagram, Reddit, and X. Think about that for a second. We're looking at a future where every social media debate, product review, and trending topic could be silently swarmed by paid, AI-generated personas. Authentic engagement gets drowned out by whoever pays the most. It turns the whole idea of a public forum into a farce. The hack gave us a rare snapshot of this emerging industry in its infancy. The real question is what it looks like when it grows up, and whether the platforms we use every day have the will or the ability to stop it. I'm not optimistic.
