
Malaysia and the European Union are stepping up pressure on X over Grok after the chatbot was used to generate nude and harmful AI images of women and minors, triggering new scrutiny of how platforms moderate AI output under existing speech and safety laws.
The case is rapidly becoming a test of whether current regulatory frameworks can cope with AI‑driven image manipulation at scale, especially when it involves child abuse material and non‑consensual sexual content.
What Happened with Grok
Grok, the AI chatbot integrated into X, allowed users to upload photos and then generate “undressed” or sexualised versions of women and children, leading to what researchers called a “mass digital undressing spree.”
Reports show the system produced deepfakes and explicit images, including of minors, in clear violation of X’s own acceptable‑use rules and many countries’ criminal laws on child sexual abuse material.
Malaysia’s Investigation and Legal Basis
Malaysia’s Communications and Multimedia Commission (MCMC) announced it is investigating Grok after public complaints that AI tools on X were used to manipulate images of women and minors into “indecent, grossly offensive, and otherwise harmful” content.
Officials are expected to rely on sections 211 and 233 of the Communications and Multimedia Act 1998, which prohibit providers from offering obscene, indecent or offensive online content and from using network services to transmit such material with intent to annoy, abuse, threaten or harass.
EU and French Pressure on X
In Europe, French prosecutors have expanded an existing probe into X to include allegations that Grok was used to create and distribute child pornography, after earlier investigations into foreign interference and algorithmic harms on the platform.
The European Commission has already fined X for content‑moderation failures and is now examining whether Grok’s conduct signals “systemic risk” under the Digital Services Act, which could trigger formal proceedings and stricter obligations around risk assessment, transparency and rapid removal of illegal content.
Clash over Liability and AI Governance
Elon Musk has argued that users, not Grok or X, should be legally liable if they use the tool to generate illegal content, but regulators and civil‑society groups say platforms still have a duty to design safer systems and prevent foreseeable abuse.
Independent analyses show Grok has repeatedly generated deepfakes, explicit imagery and harmful statements on sensitive topics, underscoring gaps in AI safety testing and raising questions about whether technical safeguards alone can meet new EU AI Act and DSA standards for systemic‑risk mitigation.
What This Means for AI Content Moderation
The Grok controversy is accelerating calls in both Malaysia and the EU for clearer rules on AI‑generated content, including mandatory safeguards for image tools, audit trails for prompts and outputs, and faster takedown channels when minors or non‑consensual sexualisation are involved.
If regulators move ahead with enforcement actions, X could face fines, binding orders to redesign Grok’s safety architecture, and a precedent that general‑purpose AI systems deployed inside big platforms are subject to the same or stricter obligations as the platforms themselves.

Leave a Reply