Children’s Commissioner says apps that create nude images of children should be banned.

The Children’s Commissioner for England is calling on the government to ban applications that use artificial intelligence (AI) to create sexually explicit images of children.

Dame Rachel de Souza said a total ban was needed on "nudification" apps, which use AI to edit photos of real people to make them appear naked, and which can be used to create sexually explicit deepfake images of children.

She said the government was allowing such apps to operate unchecked, with extreme real-world consequences.

A government spokesman said child sexual abuse material was illegal, and that there were plans to make it an offence to create, possess or distribute AI tools designed to produce such content.

Deepfakes are videos, pictures or audio clips made with AI to look or sound real.

In a report released on Monday, Dame Rachel said the technology disproportionately targets girls and young women, with many nudification apps appearing to work only on female bodies.

The report says girls are actively avoiding posting images or engaging online to reduce the risk of being targeted, in the same way that girls follow other rules to keep themselves safe in the offline world, such as not walking home alone at night.

Children are worried that "a stranger, a classmate, or even a friend" could target them using technologies that can be found on popular search and social media platforms.

Dame Rachel said: "The evolution of these tools is happening at such a scale and at such a rate that trying to grasp the dangers they pose can be overwhelming."

She added that society could not sit back and allow these bespoke AI apps to take such a dangerous hold over children’s lives.

Dame Rachel also made a series of further recommendations to the government.

Paul Whiteman, general secretary of the school leaders’ union NAHT, said members shared the commissioner’s concerns.

He said this was an area that urgently needed reviewing, because the technology risked outpacing the law and the education around it.

It is already illegal in England and Wales to share, or threaten to share, explicit deepfake images under the Online Safety Act.

In February, the government announced that it would criminalise AI-generated child sexual abuse images, including making it illegal to possess, create or distribute AI tools designed to produce such material.

At the time, the Internet Watch Foundation, a UK charity partly funded by tech companies, said it had confirmed 245 reports of AI-generated child sexual abuse imagery in 2024, compared with 51 in 2023, an increase of 380%.

On Friday, media regulator Ofcom published the final version of its children’s codes, which place legal requirements on platforms hosting pornography, or content encouraging self-harm, suicide or eating disorders, to take more action to prevent children from accessing it.

The regulator says websites must introduce enhanced age checks or face significant fines.

Dame Rachel criticised the codes, saying they put the business interests of technology companies before children’s safety.

A government spokesman said the creation or distribution of child sexual abuse material, including AI-generated images, was "abhorrent and illegal".

They added that under the Online Safety Act, platforms of all sizes must remove such content or face significant fines.

The spokesman also said the UK was the first country in the world to introduce specific AI child sexual abuse offences, making it illegal to possess, create or distribute AI tools designed to generate such material.