
X restricts Grok AI from creating sexualized images of real people

X’s Grok AI will no longer create sexualized images of real people in regions where it’s illegal, following backlash over deepfake misuse. Regulators welcome the move, but victims say it comes too late to undo the harm.

January 16, 2026 13:19

After a wave of backlash, Elon Musk’s AI tool Grok will no longer be able to edit images of real people to show them in revealing clothing — at least in jurisdictions where it’s illegal. The move comes amid growing concern over AI-generated sexualized deepfakes, which campaigners say have caused real harm.

In a statement, X said: “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing.”

The change has drawn mixed reactions. The UK government called it a vindication of its warnings about Grok, while the regulator Ofcom welcomed the update but emphasized that its investigation into whether the platform broke UK law is still ongoing. Meanwhile, victims and campaigners argue the update does little to undo the damage already done, pointing to it as an example of the risks of delayed regulation.

Why it matters
This is another reminder that AI content moderation is still playing catch-up with rapidly evolving tools. Even as companies implement safeguards, the impact of harmful AI-generated media can be lasting. For users and regulators alike, Grok’s case underscores the need for proactive oversight, stronger technical controls, and clearer legal frameworks for AI-generated imagery.

The bigger picture: Tools like Grok show AI’s potential — but also its dark side — and how difficult it can be to undo harm once the technology is in the wild.
