
ChatGPT Is Starting to Pull Answers From Elon Musk’s Grokipedia

ChatGPT has begun citing Elon Musk’s AI-generated Grokipedia in some answers, according to The Guardian. The issue isn’t politics; it’s provenance. When ideologically charged, AI-written sources quietly slip into model outputs, they gain authority by default. This raises bigger questions about how AI decides what counts as “knowledge.”

January 26, 2026

Something odd is creeping into ChatGPT’s answers: Grokipedia — the AI-generated encyclopedia built by Elon Musk’s xAI.

Grokipedia launched in October 2025, after Musk repeatedly accused Wikipedia of being biased against conservatives. While parts of it appeared to mirror Wikipedia almost word-for-word, reporters quickly flagged darker additions, including claims that pornography fueled the AIDS crisis, “ideological justifications” for slavery, and derogatory language about transgender people.

That kind of content wasn’t shocking given Grok’s history. What is surprising is where it’s now showing up.

What’s happening

According to reporting by The Guardian, GPT-5.2 cited Grokipedia at least nine times across more than a dozen user queries. Notably, the citations didn’t turn up on high-profile political topics like January 6 or HIV/AIDS, where Grokipedia’s inaccuracies are already well documented.

Instead, the citations appeared in more obscure historical and biographical queries, including disputed claims about historian Sir Richard Evans that had previously been debunked.

Even more notable: Anthropic’s Claude appears to be citing Grokipedia in some responses too.

In other words, Grokipedia content is leaking beyond Musk’s ecosystem.

Why this matters

Most users don’t check sources — they trust the output. When an AI system quietly blends AI-generated, ideologically charged material into answers without clear context, it blurs the line between consensus knowledge and synthetic opinion.

This isn’t about “left vs right.” It’s about provenance:

  • Where did this information come from?

  • Who decided it was reliable?

  • And how does an AI weigh it against established sources?

When AI models ingest everything that’s public, “public” starts to look like a vulnerability.
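The provenance question can be made concrete from the outside: even without access to a model’s internals, a reader or a monitoring tool can audit the citations an answer surfaces. Below is a minimal Python sketch, assuming an answer arrives with a list of cited URLs; the watchlist of AI-generated domains, the `audit_citations` helper, and the Grokipedia page URL are illustrative assumptions, not anything OpenAI or Anthropic actually publishes or runs.

```python
from urllib.parse import urlparse

# Hypothetical watchlist of domains whose content is itself AI-generated.
# The entries and the flagging policy are assumptions for illustration only.
AI_GENERATED_SOURCES = {"grokipedia.com"}

def audit_citations(cited_urls):
    """Map each cited URL to its domain and flag AI-generated sources."""
    report = []
    for url in cited_urls:
        domain = urlparse(url).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]  # normalize away the www. prefix
        report.append((domain, domain in AI_GENERATED_SOURCES))
    return report

# Example with two hypothetical citations from a single answer.
for domain, flagged in audit_citations([
    "https://en.wikipedia.org/wiki/Richard_J._Evans",
    "https://grokipedia.com/page/Richard_Evans",  # hypothetical URL
]):
    print(f"{domain:25} {'flag: AI-generated source' if flagged else 'ok'}")
```

The point of a sketch like this isn’t that a domain list solves the problem; it’s that provenance checks are cheap and mechanical once citations are exposed, which makes it striking when they aren’t applied.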

OpenAI’s response

OpenAI told The Guardian that ChatGPT “aims to draw from a broad range of publicly available sources and viewpoints.”

That’s true — and also the core of the problem.

The bigger risk

AI doesn’t just reflect the internet anymore — it repackages it with authority. When controversial or low-quality sources slip into that pipeline, they gain legitimacy simply by being repeated calmly and confidently.

Today it’s obscure history. Tomorrow it could be economics, medicine, or elections.

Bottom line:
As AI models compete to absorb all human knowledge, the fight isn’t just about intelligence — it’s about which version of reality gets quietly normalized along the way.
