According to Tech Digest, a major investigation by consumer group Which? found that popular AI tools including ChatGPT, Google Gemini, Microsoft Copilot, Meta AI, and Perplexity are dispensing inaccurate and potentially costly advice on critical consumer topics. The research tested 40 common questions across finance, law, health, diet, consumer rights, and travel under controlled lab conditions, with Meta AI scoring worst overall at just 55% and ChatGPT at only 64%. Approximately half of all UK adults now use AI for online searches, and 47% of the estimated 25 million users trust the information to a “great” or “reasonable” extent despite these deficiencies. The investigation revealed specific instances of risky advice, including ChatGPT and Copilot giving incorrect ISA allowance information that could breach HMRC rules, and Meta AI providing health advice contrary to NHS recommendations.
The alarming trust gap
Here’s the thing that really worries me about this research: we’re seeing a massive disconnect between how reliable these AI tools actually are and how much people trust them. Nearly half of users think this stuff is reasonably accurate, but the scores tell a completely different story. Even the “best” performer, Perplexity, only managed 71% – that’s a C-minus in school terms. And we’re talking about people using these tools for medical advice, financial decisions, legal questions? That’s terrifying.
What’s even more concerning is that a third of users wrongly believe AI exclusively draws on authoritative sources. But the investigation found these tools citing three-year-old Reddit threads for flight booking advice and using random social media posts for health recommendations. I mean, would you take medical advice from a Reddit thread from 2021? Probably not. But when it’s wrapped up in that slick AI interface, suddenly people treat it like gospel.
The real-world risks are serious
Let’s talk about the actual consequences here. When ChatGPT and Copilot gave advice based on a £25,000 ISA allowance instead of the correct £20,000, following it could cost someone real money in HMRC penalties. That’s not just “oops, the AI got it wrong” – that’s potentially hundreds of pounds down the drain. And when Copilot tells people they’re “always entitled to a full refund” for cancelled flights? That’s simply not true in many circumstances, and following that advice could leave travelers stranded without recourse.
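For a sense of scale, the arithmetic is simple. Here’s a back-of-the-envelope sketch in Python – the exact cost of an over-subscription depends on how HMRC handles the individual case, so treat the figures as illustrative:

```python
# Allowance figures from the Which? investigation (illustrative only).
correct_allowance = 20_000    # actual UK adult ISA allowance (£)
ai_quoted_allowance = 25_000  # figure ChatGPT and Copilot reportedly gave (£)

# A saver who pays in up to the AI's figure oversubscribes by this much –
# the excess amount that HMRC can require to be unwound:
excess = ai_quoted_allowance - correct_allowance
print(f"Over-subscription: £{excess:,}")  # Over-subscription: £5,000
```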
The health advice issues are particularly troubling. With 19% of survey respondents saying they always or often rely on AI for medical advice, we’re looking at a potential public health concern. Meta AI advising against vaping to quit smoking when the NHS actually recommends it? That could actively discourage people from using an effective smoking cessation method. We’re not talking about trivial stuff here – these are decisions that affect people’s wallets and wellbeing.
How the companies are responding
Interestingly, the companies that did respond basically acknowledged the limitations while pointing to their safety features. Google was pretty transparent, saying they build reminders directly into Gemini to prompt users to double-check information. Microsoft highlighted that Copilot includes linked citations so users can verify sources. OpenAI recommended using ChatGPT’s built-in search tool to see where information comes from.
But here’s my question: are these disclaimers and citations enough when people are treating AI like an all-knowing oracle? The research suggests probably not.
How to protect yourself from bad AI advice
So what should you actually do? Andrew Laughlin from Which? nailed it: for complex issues, always seek professional advice. But beyond that, there are some practical steps everyone should take. First, be ridiculously specific in your prompts – if you need legal advice for England and Wales, say exactly that. Second, always activate web search modes when available. Third, demand that the AI show its sources, and then actually check them yourself.
Most importantly, don’t rely on a single AI’s answer. Since most of these tools are free, it costs you nothing to put the same question to ChatGPT, Gemini, and Copilot and see whether you get a consensus – the sketch below shows what that habit looks like. And if the information could have real financial, legal, or health consequences? Just don’t use AI for that. Full stop. These tools are amazing for brainstorming, drafting emails, or explaining concepts – but they’re not ready to replace qualified professionals when the stakes are high.
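To make the cross-check concrete, here’s a minimal Python sketch. It deliberately calls no vendor APIs – you paste in the answers you got from each chatbot yourself, and the script only flags whether they agree. The example answers are made up for illustration, echoing the ISA figures from the article:

```python
def consensus_check(answers: dict[str, str]) -> bool:
    """Print each chatbot's answer and report whether they all agree."""
    for name, answer in answers.items():
        print(f"{name}: {answer}")
    # Normalize lightly so trivial formatting differences don't count.
    distinct = {a.strip().lower() for a in answers.values()}
    if len(distinct) > 1:
        print("No consensus – verify with an authoritative source (HMRC, NHS, etc.).")
        return False
    print("Answers agree – still worth spot-checking any cited sources.")
    return True

# Example usage, with made-up answers to the ISA question from the article:
consensus_check({
    "ChatGPT": "£25,000",  # the incorrect figure Which? reported
    "Gemini": "£20,000",
    "Copilot": "£25,000",
})
```

Agreement isn’t proof of correctness, of course – in this example two of the three models agree on the wrong number – but disagreement is a cheap, reliable signal that you haven’t got a settled answer.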
