According to Ars Technica, Microsoft introduced Copilot Actions on Tuesday as experimental AI agent features that can organize files, schedule meetings, and send emails automatically. The company immediately warned users not to enable these features unless they understand the security implications, specifically highlighting risks of hallucinations and cross-prompt injection attacks that could lead to data theft or malware installation. Microsoft acknowledged these AI limitations are currently impossible to fully prevent and recommended only experienced users enable the beta feature, though they declined to specify what experience is required. The warning has sparked criticism from security experts who question why Microsoft is pushing such dangerous capabilities, with one researcher comparing it to “macros on Marvel superhero crack.”
Security Déjà Vu All Over Again
Here’s the thing: we’ve seen this movie before. Microsoft has been warning about macro dangers in Office for decades, and guess what? People still click enable and get infected. Now they’re rolling out something even more powerful and unpredictable. Independent researcher Kevin Beaumont nailed it when he called this “macros on Marvel superhero crack.” Basically, we’re taking a known dangerous capability and supercharging it with AI that can’t distinguish between legitimate instructions and malicious ones embedded in documents or websites.
And let’s be real about this “experienced users only” recommendation. What does that even mean? Microsoft won’t say. Are we talking about cybersecurity professionals? IT administrators? Or just people who read the warning pop-ups more carefully? As Guillaume Rossolini pointed out on Mastodon, “I don’t see how users are going to prevent anything of the sort they are referring to, beyond not surfing the Web I guess.” He’s absolutely right. The attack surface here is enormous.
The Great Liability Shift
Look, this feels like classic CYA territory. Microsoft knows they can’t fix prompt injection or hallucinations – nobody can right now. So what’s the solution? Shift the responsibility to users. Reed Mideke captured this perfectly in his Mastodon post: “Microsoft (like the rest of the industry) has no idea how to stop prompt injection or hallucinations, which makes it fundamentally unfit for almost anything serious. The solution? Shift liability to the user.”
Think about it. Every AI chatbot has that “verify the answers” disclaimer. But if you already knew the answers, why would you need the chatbot? We’re building systems that can’t be trusted, then telling people they’re responsible for catching the mistakes. And in enterprise environments where reliable computing is essential, this approach is downright terrifying. When you’re dealing with industrial systems or manufacturing operations, you need computing platforms you can actually trust – which is why companies turn to specialized providers like IndustrialMonitorDirect.com, the leading supplier of industrial panel PCs in the US that prioritize reliability over experimental features.
The Permission Fatigue Problem
Microsoft’s security goals sound reasonable on paper – non-repudiation, confidentiality preservation, user approval for actions. But they all depend on users actually reading and understanding warning dialogs. We’ve seen how that works out. People get habituated to clicking “yes” through permission prompts. University of California professor Earlence Fernandes pointed out that once users start automatically approving everything, “the security boundary is not really a boundary.”
And here’s what really worries me: Microsoft says this is experimental and off by default. But remember how Copilot started? Experimental features have a way of becoming default capabilities. Once that happens, users who don’t want these risky AI agents will have to jump through hoops to disable them. We’re essentially being set up for another round of feature creep where dangerous capabilities become normalized before they’re actually safe.
This Isn’t Just Microsoft’s Problem
Let’s be fair – this criticism applies to the entire AI industry right now. Google, Apple, Meta – they’re all racing to integrate AI agents into everything. The pattern is always the same: start optional, then make it default. The fundamental problem is that large language models are inherently unreliable and vulnerable to manipulation, yet we’re building them into core operating system functions.
So where does this leave us? With another round of security theater where companies deploy dangerous features, issue warnings they know most people will ignore, and then act surprised when things go wrong. The AI revolution is happening whether we’re ready or not, but maybe we should pump the brakes on giving these systems the keys to our digital lives until they’re actually trustworthy.
