Indonesia has “conditionally” lifted its ban on xAI’s chatbot Grok, joining Malaysia and the Philippines in easing restrictions after weeks of controversy over AI-generated deepfake content.
Several Southeast Asian countries had banned Grok after it was used to generate a massive wave of non-consensual, sexualized images, including images of real women and minors.
Between late December and January, at least 1.8 million such images were reportedly created, according to investigations by The New York Times and the Center for Countering Digital Hate.
Indonesia says it’s lifting the ban only after X promised concrete improvements to prevent misuse. But the government made one thing clear: the ban can return instantly if violations happen again.
This is bigger than Grok.
It’s one of the clearest signs yet that governments are no longer treating AI harms as theoretical risks — they’re ready to shut down tools in real time.
For AI companies, the message is brutal but simple:
Build powerful models, but fail at safety, and your product could disappear overnight.
While most governments have stopped short of outright bans, pressure is rising.
In the U.S., California’s Attorney General has launched an investigation into xAI and issued a cease-and-desist order, demanding immediate action.
Meanwhile, xAI has started tightening restrictions, including limiting Grok's image generation to paying users. Elon Musk says illegal content will be punished, and claims he is unaware of any underage images generated by Grok.
AI isn’t just competing on intelligence anymore — it’s competing on trust, control, and regulation.
And Grok’s case might become a blueprint for how governments worldwide handle dangerous AI tools going forward.
Indonesia didn’t fully forgive Grok.
It put it on probation.
In the AI race, raw capability is no longer enough — safety is becoming the real battleground.