The European Commission has opened a formal investigation into Elon Musk’s social‑media platform X after reports that its AI chatbot, Grok, may have distributed manipulated sexually explicit images to users within the Union. The inquiry will determine whether the content breaches EU rules prohibiting illegal and harmful material online.
Scope of the Investigation
Under the Digital Services Act (DSA), the Commission will assess whether “manipulated sexually explicit images” have been shown to X users in the EU, and whether the platform took adequate measures to prevent the dissemination of such content. Regulators will examine the algorithms that power Grok, the platform’s content‑moderation procedures, and the speed of its response to user reports.
Background on Grok and the Controversy
Grok, X’s proprietary large‑language model launched earlier this year, is designed to generate text, images and other forms of synthetic media. In recent weeks, users on the platform shared screenshots suggesting that Grok could produce hyper‑realistic sexual deepfakes when prompted with certain queries. The alleged outputs have sparked a wave of criticism from digital‑rights groups and lawmakers, who argue that such material could be used for harassment, non‑consensual pornography and other illicit purposes.
EU Regulatory Context
The DSA obliges very large online platforms to implement robust risk‑assessment frameworks, swift removal mechanisms, and transparent reporting on illegal content. Failure to comply can result in fines of up to 6% of a company’s global annual turnover. The Commission’s investigation will test whether X meets these obligations, particularly in relation to AI‑generated media that may be indistinguishable from authentic imagery.
Elon Musk’s Response
In a brief statement, Musk’s spokesperson said X “takes the safety of its users seriously” and that the company “is cooperating fully with EU authorities.” The platform has reportedly begun a review of Grok’s content‑generation policies and is exploring additional safeguards, including watermarking AI‑produced images and tightening prompt‑filtering mechanisms.
Potential Implications
If the Commission finds that X has failed to prevent the spread of illegal sexual deepfakes, the platform could face substantial fines and be required to implement stricter AI‑governance measures across the EU. The case also raises broader questions about the accountability of AI developers and the adequacy of existing legislation to address rapidly evolving synthetic‑media technologies.
Next Steps
The investigation is expected to run for several months, during which the Commission will gather evidence from X, affected users, and third‑party experts. A final decision, including any remedial orders or penalties, will be issued by the end of the year.