According to TechCrunch, information from Grokipedia, the controversial AI-generated encyclopedia developed by Elon Musk’s xAI, is now appearing in answers from OpenAI’s ChatGPT. xAI launched Grokipedia in October 2025, following Musk’s complaints about Wikipedia’s alleged bias against conservatives. The Guardian reported that in recent tests, the GPT-5.2 model cited Grokipedia nine times in response to over a dozen different questions. The citations appeared on obscure topics, including a previously debunked claim about historian Sir Richard Evans, but not on hot-button issues like the January 6 insurrection, where Grokipedia’s inaccuracies are widely documented. An OpenAI spokesperson stated the company aims to draw from a broad range of publicly available sources and viewpoints, and Anthropic’s Claude chatbot also appears to be citing Grokipedia for some queries.
The Grokipedia problem
Here’s the thing: Grokipedia isn’t your average alternative wiki. Since its launch, reporters have flagged that it copies from Wikipedia and then injects ideologically charged misinformation. We’re talking about claims that pornography contributed to the AIDS crisis, “ideological justifications” offered for slavery, and denigrating language about transgender people. This is the same ecosystem that produced the Grok chatbot, which once described itself as “Mecha Hitler” and was used to flood X with sexualized deepfakes. So this isn’t a neutral source. It’s a deliberately polemical one, built by an AI company with a clear ideological bent. The fact that mainstream AI models are now ingesting it is… concerning, to say the least.
Selective citation is the real issue
Now, the most telling detail from The Guardian’s report is *where* ChatGPT chose to cite it. It didn’t pull from Grokipedia on topics where its falsehoods are famous and easily fact-checked. Instead, it leaned on Grokipedia for obscure historical claims. Why does that matter? Because it suggests the AI may be using Grokipedia to fill knowledge gaps in areas with less public verification. That’s a dangerous precedent. It means the most insidious biases, the ones on niche topics that haven’t made headlines, could seep into answers quietly, without anyone noticing. It’s a backdoor for ideology, and it’s much harder to police. As earlier reporting showed, even academics struggle to audit Grokipedia’s full output.
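This selective pattern is also something you can probe directly. Below is a minimal sketch of the kind of audit The Guardian’s test implies: run a batch of prompts through a chatbot, pull out whatever URLs it cites, and tally citations per domain for well-covered versus obscure topics. The `ask_chatbot` helper, the prompt lists, and the `grokipedia.com` domain check are all illustrative assumptions here, not anyone’s published methodology.

```python
# Sketch of a citation audit: count which domains a chatbot cites, per topic category.
import re
from collections import Counter
from urllib.parse import urlparse

URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")

def ask_chatbot(prompt: str) -> str:
    """Hypothetical stand-in for the model under test. Replace with a real API call;
    here it returns a canned answer so the sketch runs end to end."""
    return ("According to https://en.wikipedia.org/wiki/Example and "
            "https://grokipedia.com/page/Example ...")  # assumed domain for Grokipedia

def cited_domains(answer: str) -> list[str]:
    """Extract every URL from an answer and reduce it to a bare domain."""
    return [urlparse(url).netloc.lower().removeprefix("www.")
            for url in URL_PATTERN.findall(answer)]

def audit(prompts_by_category: dict[str, list[str]]) -> dict[str, Counter]:
    """Tally citations per domain, separately for each topic category."""
    results: dict[str, Counter] = {}
    for category, prompts in prompts_by_category.items():
        tally: Counter = Counter()
        for prompt in prompts:
            tally.update(cited_domains(ask_chatbot(prompt)))
        results[category] = tally
    return results

if __name__ == "__main__":
    prompts = {
        "hot-button": ["What happened at the US Capitol on January 6, 2021?"],
        "obscure": ["What controversies involve the historian Sir Richard Evans?"],
    }
    for category, tally in audit(prompts).items():
        print(category, tally.get("grokipedia.com", 0), "Grokipedia citations")
```

Run across a few hundred prompts, a tally like this would show whether a model only reaches for a given source when better-verified coverage runs out.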
What this says about AI training
OpenAI’s statement about using a “broad range” of sources sounds reasonable on the surface. But it raises a huge question: what’s the vetting process? If you train a model on the entire internet, you’re going to get garbage. That’s a known issue. But actively citing a source like Grokipedia as a reference in its answers feels different. It moves from training on noisy data to legitimizing that data in output. It blurs the line between “we scanned this” and “this is a credible source.” And if both ChatGPT and Claude are doing it, is this becoming standard practice? When the core infrastructure of information retrieval starts to crack, everything built on it becomes unstable. For industries that rely on accurate data, from research to industrial manufacturing where precise specifications are critical, this drift toward unreliable sourcing is a tangible risk. In those fields, you need rock-solid data, not AI hallucinations dressed up with citations. It’s why hardware suppliers in areas like industrial panel PCs emphasize reliability and verified performance over flashy, unvetted features.
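OpenAI hasn’t described its vetting process, but the gap this points at is easy to illustrate. One plausible (and entirely hypothetical) approach is an explicit source policy that a retrieval pipeline checks before surfacing any citation. The sketch below is not how ChatGPT or Claude actually work; the class name, domain lists, and URLs are made up for the example.

```python
# Sketch of a vetting step: filter candidate citations against a reviewed source policy.
from dataclasses import dataclass, field
from urllib.parse import urlparse

@dataclass
class SourcePolicy:
    denylist: set[str] = field(default_factory=set)     # domains never shown as citations
    review_list: set[str] = field(default_factory=set)  # domains awaiting human review

    def classify(self, url: str) -> str:
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain in self.denylist:
            return "blocked"
        if domain in self.review_list:
            return "needs_review"
        return "allowed"

    def filter_citations(self, urls: list[str]) -> list[str]:
        """Keep only sources that pass the policy; blocked and unreviewed ones are dropped."""
        return [u for u in urls if self.classify(u) == "allowed"]

# Example: a policy seeded by an editorial team (domains here are illustrative).
policy = SourcePolicy(
    denylist={"grokipedia.com"},          # assumption: treat AI-generated encyclopedias as unvetted
    review_list={"example-newwiki.org"},  # hypothetical domain awaiting review
)
print(policy.filter_citations([
    "https://en.wikipedia.org/wiki/Richard_J._Evans",
    "https://grokipedia.com/page/Richard_Evans",  # hypothetical page path
]))
# -> ['https://en.wikipedia.org/wiki/Richard_J._Evans']
```

The hard part, of course, isn’t the filter. It’s deciding who maintains the lists and how a source like Grokipedia gets classified in the first place.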
A slippery slope for AI truth
Basically, we’re watching a real-time experiment in what happens when AI’s hunger for data meets politically motivated source material. The models seem to be smart enough to avoid the most blatant landmines, but that almost makes it worse. It means the bias is subtle. It’s curated. And it’s being presented with the authoritative sheen of a citation. So what’s the fix? Better source filtering? More transparent provenance? It’s a tough problem. But one thing’s clear: if the goal is a trustworthy AI assistant, pulling answers from a source designed to counter “bias” with its own extreme bias isn’t the solution. It’s just swapping one problem for a much weirder, AI-generated one. As the latest tests reveal, the line between a search engine and an ideology engine is getting dangerously thin.
