British Tech Firms and Child Safety Officials to Examine AI's Ability to Create Exploitation Images

Tech firms and child safety agencies will receive permission to assess whether AI systems can generate child exploitation images under new UK legislation.

Significant Rise in AI-Generated Harmful Material

The announcement came as figures from a protection monitoring body showed that cases of AI-generated CSAM have risen dramatically in the past year, from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, designated AI developers and child safety organizations will be permitted to examine AI models – the underlying systems behind conversational AI and image generators – to ensure they have adequate protective measures preventing them from producing depictions of child exploitation.

The measures are "ultimately about stopping exploitation before it happens," declared the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now detect the danger in AI systems promptly."

Addressing Regulatory Obstacles

The changes were needed because it is against the law to produce and possess CSAM, meaning that AI developers and other parties could not generate such content even as part of a testing regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before acting on it.

This law is designed to prevent that problem by helping to stop the creation of such material at source.

Legal Structure

The authorities are introducing the changes as revisions to criminal justice legislation, which also establishes a ban on possessing, producing or distributing AI systems designed to generate child sexual abuse material.

Real-World Consequences

Recently, the minister toured the London base of Childline and listened to a simulated call to counsellors featuring a report of AI-based exploitation. The call depicted an adolescent requesting help after facing extortion using an explicit deepfake of himself, constructed with AI.

"When I learn about young people experiencing extortion online, it fills me with intense anger, and rightly angers parents too," he said.

Alarming Statistics

A prominent internet monitoring foundation reported that cases of AI-generated abuse content – such as online pages that may include multiple images – had significantly increased so far this year.

Cases of category A material – the most serious form of exploitation – increased from 2,621 visual files to 3,086.

  • Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
  • Depictions of infants to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a vital step to guarantee AI tools are safe before they are released," commented the head of the online safety foundation.

"AI tools have made it so survivors can be victimised all over again with just a few clicks, providing criminals the capability to make possibly limitless quantities of advanced, lifelike exploitative content," she added. "Material which further commodifies survivors' trauma, and makes young people, particularly girls, more vulnerable both online and offline."

Counseling Session Data

The children's helpline also published details of counselling sessions where AI has been referenced. AI-related harms discussed in the sessions include:

  • Using AI to rate body size and appearance
  • Chatbots discouraging young people from consulting safe guardians about abuse
  • Facing harassment online with AI-generated content
  • Online extortion using AI-faked images

Between April and September this year, Childline delivered 367 counselling interactions where AI, conversational AI and related topics were mentioned, four times as many as in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy applications.
