British Tech Firms and Child Protection Agencies to Examine AI's Capability to Create Exploitation Content
Technology companies and child safety agencies will be granted authority to evaluate whether AI tools can generate child abuse material under new British laws.
Substantial Rise in AI-Generated Illegal Material
The announcement came as findings from a protection watchdog showed that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the changes, the authorities will allow designated AI developers and child protection organisations to examine AI systems – the foundational technology behind chatbots and image generators – and verify that they have adequate protective measures to prevent them from creating images of child sexual abuse.
The measures are "fundamentally about stopping abuse before it occurs," declared the minister for AI and online safety, adding: "Specialists, under rigorous conditions, can now detect the danger in AI models early."
Addressing Regulatory Obstacles
The changes address the fact that it is illegal to create and possess CSAM, meaning that AI developers and other parties could not generate such images as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.
The law is designed to prevent that problem by enabling the creation of such images to be stopped at source.
Legislative Framework
The government is introducing the changes as amendments to the criminal justice legislation, which also establishes a ban on possessing, producing or distributing AI systems designed to create exploitative content.
Real-World Consequences
Recently, the minister toured the London base of Childline and heard a simulated call to counsellors involving an account of AI-based abuse. The call depicted an adolescent requesting help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I learn about young people experiencing blackmail online, it causes intense anger in me and justified anger amongst parents," he said.
Alarming Data
A leading online safety foundation stated that cases of AI-generated abuse content – such as online pages that may contain multiple images – had significantly increased so far this year.
Instances of the most severe category of content – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.
- Female children were overwhelmingly the victims, making up 94% of prohibited AI images in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "constitute a vital step to ensure AI products are secure before they are released," commented the chief executive of the online safety organization.
"AI tools have made it possible for survivors to be victimised repeatedly with just a few simple actions, giving offenders the ability to create potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which further exploits victims' suffering, and makes young people, especially female children, less safe both online and offline."
Counseling Session Information
The children's helpline also published details of support sessions where AI has been referenced. AI-related harms discussed in the conversations include:
- Employing AI to evaluate body size, physique and appearance
- Chatbots dissuading young people from talking to trusted adults about harm
- Facing harassment online with AI-generated material
- Online blackmail using AI-faked images
Between April and September this year, the helpline conducted 367 support interactions in which AI, chatbots and associated topics were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.