According to TheRegister.com, Amazon demanded this week that Perplexity stop its Comet browser from making automated purchases on Amazon’s website, creating the first major public clash over AI shopping agents. Perplexity responded by accusing Amazon of “bullying” and claiming the e-commerce giant “does not believe in your right to hire labor” by restricting AI assistants. The confrontation reveals fundamental legal and technical contradictions in the agentic commerce vision being promoted by Google Cloud and McKinsey, where AI agents would independently navigate shopping options and execute transactions. Meanwhile, real-world AI implementations like service robots at San Francisco’s China Live restaurant are causing practical problems, including collisions with human staff and disrupted workflow. The incident follows recent legal scholarship warning about unquantifiable liability risks when AI agents interact with third-party services without clear agreements.
The gatekeeping wars begin
Here’s the thing about this Amazon-Perplexity spat: it was completely predictable. Big tech companies have spent decades building walled gardens, and now AI startups want to just wander through the front gates? Amazon’s move isn’t really about “degrading customer experience” – that’s the polite corporate speak. It’s about control. They want to own the customer relationship from start to finish, and having some third-party AI bot making purchases means they lose visibility into shopping behavior and potentially miss out on upselling opportunities.
And Perplexity’s argument that AI should be treated like human labor? That’s just clever marketing talk. Software agents consume resources differently than people – they might hit APIs thousands of times faster, they don’t view ads the same way, and they generate different types of valuable data. When Perplexity says publishers “have no right to discriminate against users based on which AI they’ve chosen,” they’re basically asking for special treatment while avoiding any responsibility.
Who’s liable when AI screws up?
This gets to the core liability issue that legal scholars are warning about. AI providers don’t want to be held responsible for how their models interact with third-party services, but companies like Amazon absolutely don’t want to deal with the fallout when an AI bot makes mistaken purchases or violates terms of service. Think about it – if an AI assistant buys the wrong product or gets stuck in some infinite purchasing loop, who eats the cost? The customer blames Amazon, Amazon blames Perplexity, and Perplexity points to their terms of service.
We’ve seen this playbook before with web scraping and data collection. Tech disruptors move fast and ask for forgiveness later. But when you’re dealing with actual financial transactions and customer relationships, the stakes are much higher. Companies running mission-critical systems understand this risk calculus well – they don’t hand unsupervised decision-making to experimental AI agents, and retailers are unlikely to either.
The messy reality beyond the hype
While McKinsey is gushing about “seismic shifts” and Google Cloud is pushing agentic commerce, the actual implementation looks rather different. That restaurant robot at San Francisco’s China Live? It kept getting in the way of human staff and sometimes had to be manually disabled just so people could walk past. At one point it collided with an employee and sent drinks flying. Sounds like the future, right?
And let’s talk about McKinsey for a second. These are the same consultants who, according to recent reporting, specialize in recommending cost-cutting that boosts short-term stock prices at the expense of workers and safety. So when they’re hyping AI agents that “negotiate deals and execute transactions,” maybe we should question whose interests they’re really serving.
Where does this actually go?
The truth is, most businesses outside the software bubble aren’t ready for AI agents roaming freely through their systems. Klarna already learned this lesson the hard way – it cut customer service staff in favor of AI, then had to bring humans back when the AI wasn’t cutting it. Now imagine that dynamic playing out across retail, travel, and banking – anywhere that involves complex customer interactions.
So what’s the path forward? Probably not Perplexity’s approach of demanding unrestricted access. More likely, we’ll see negotiated APIs and formal partnerships – the kind of controlled environments that big tech companies prefer. The wild west phase of AI agents might be ending before it even really began. Companies want the efficiency benefits of automation, but they’re not willing to sacrifice control or accept unlimited liability to get there.
Basically, the agentic era might happen, but it’s going to look a lot more corporate and a lot less revolutionary than the AI startups are promising. And honestly, that’s probably for the best – unless you enjoy the idea of AI bots running into waitstaff and spilling drinks everywhere.
