A new risk assessment from Common Sense Media concludes that Grok's age detection is weak, its safety guardrails are broken, and it routinely generates sexual, violent, and otherwise inappropriate content, making it unsafe for kids and teens.
The nonprofit didn’t mince words.
“We assess a lot of AI chatbots… but Grok is among the worst we’ve seen,” said Robbie Torney, head of AI and digital assessments at Common Sense Media.
After testing Grok across its mobile app, website, and X integration using teen accounts, Common Sense Media identified multiple failures:
Poor or ineffective identification of users under 18
Explicit sexual and violent content appearing frequently
A non-functional Kids Mode, despite being marketed as a safeguard
Easy sharing of harmful outputs to millions of users on X
The issues weren’t isolated — they compounded each other.
“Kids Mode doesn’t work, explicit material is pervasive, and everything can be instantly shared on X,” Torney said.
The report lands as xAI is already under scrutiny over allegations that Grok was used to create and spread nonconsensual explicit AI-generated images, including images of women and children.
After backlash from users and policymakers, xAI restricted Grok’s image tools to paying X subscribers — but testers found that:
Some free users could still access the tools
Paid users could still manipulate real photos to remove clothing or sexualize subjects
Common Sense Media criticized this response sharply, arguing that putting harmful features behind a paywall isn’t safety — it’s monetization.
“That’s not an oversight. That’s a business model that puts profits ahead of kids’ safety.”
Grok’s problems aren’t just about bad filters — they reflect a deeper tension in AI development:
Move fast vs. protect users
Engagement vs. responsibility
“Edgy” design vs. real-world harm
With features like “spicy mode,” AI companions aimed at entertainment, and viral sharing baked into X, Grok sits at the intersection of maximum reach and minimal friction — a dangerous combo when safeguards fail.
Bottom line:
This report reinforces a growing consensus among regulators and watchdogs: AI systems that scale harm this easily won’t be treated as experiments for much longer. For xAI, Grok’s child safety failures may become the company’s biggest liability yet.