Elon Musk’s artificial intelligence-powered chatbot, Grok, has generated sexualised, explicit images of people including minors on the social media platform X.
The Chair of the Oireachtas Committee on Artificial Intelligence has urged the Government to fast-track the Protection of Voice and Image Bill 2025 to tackle the problem of AI deepfakes and ‘nudification’. The bill is currently before Dáil Éireann at second stage, but a debate has not been scheduled.
The All Ireland Science Media Centre asked experts to comment.
Dr Luke Danagher, Associate Professor, School of Law, University of Limerick, comments:
“Reports that Grok is being used to generate sexually explicit “nudified” images, including of minors, are shocking, but they reflect a predictable pattern: generative tools turn identity-based abuse into a prompt and a click. Even where an image is synthetic, the harm is real and rapid: psychological trauma, reputational damage, coercion and extortion, and a profound loss of privacy and dignity. When minors are targeted, this is not “misinformation” in the abstract; it is an acute child-protection crisis.
“The Protection of Voice and Image Bill 2025 (initiated 1 April 2025) is a timely response. It seeks to translate these harms into a clear criminal rule: knowingly publishing or distributing a person’s name, photograph, voice or likeness without consent, where there is intent to cause harm or recklessness as to harm. It also looks upstream, targeting the distribution of tools whose primary purpose is replicating an individual’s image or voice. This recognises that the risk is built into the ease and scale of the technology’s deployment.
“The proposed penalties of up to seven years on indictment mirror those in Coco’s Law, signalling comparable seriousness. The Bill sends a simple message: your image and voice are part of your personal integrity, and misuse via AI systems can be a serious crime. One practical issue, if the text remains unchanged, is whether the Bill’s “primary purpose/function” wording will capture general-purpose AI systems, rather than only tools designed specifically for replication or “nudification”. Put simply: when a platform can generate ‘nudified’ images on demand, the law has to punish both the (ab)user and the system that made it so easy. If an AI tool can ‘nudify’ a child with a prompt, that is not innovation; it is industrial-scale sexual abuse by design.”
Declaration of interest: “No conflict of interest to declare. I have not received funding from these entities or related entities.”
