UK Tech Companies and Child Safety Agencies to Examine AI's Capability to Generate Abuse Images
Technology companies and child safety organizations will be granted authority to assess whether AI tools can produce child abuse images under recently introduced UK legislation.
Substantial Rise in AI-Generated Illegal Material
The announcement came alongside figures from a safety monitoring body showing that reports of AI-generated child sexual abuse material have risen dramatically in the past twelve months, from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the amendments, designated AI companies and child protection groups will be allowed to inspect AI systems – the underlying models behind conversational and image-generation tools – to ensure they have sufficient safeguards to stop them from producing images of child sexual abuse.
The measure is "fundamentally about preventing exploitation before it happens," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the danger in AI systems early."
Tackling Legal Challenges
The changes address a legal obstacle: because producing and possessing CSAM is against the law, AI developers and others could not generate such content even as part of a testing regime. Until now, officials had to wait until AI-generated CSAM was published online before taking action against it.
This law is aimed at preventing that problem by helping to halt the production of those images at source.
Legal Structure
The government is introducing the changes as amendments to criminal justice legislation, which also implements a ban on possessing, creating or sharing AI models designed to generate exploitative content.
Real-World Consequences
This week, the minister visited the London headquarters of Childline and listened to a simulated call to counsellors involving a report of AI-based abuse. The mock call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I hear about children facing blackmail online, it provokes extreme frustration in me and justified anger amongst parents," he stated.
Alarming Statistics
A prominent internet monitoring foundation reported that cases of AI-generated exploitation content – where a single reported webpage may contain numerous files – had risen significantly so far this year.
- Instances of category A material – the gravest form of exploitation – increased from 2,621 visual files to 3,086
- Female children were predominantly victimized, accounting for 94% of prohibited AI depictions in 2025
- Depictions of children aged two and under increased from five in 2024 to 92 in 2025
Sector Response
The law change could "represent a crucial step to ensure AI products are secure before they are launched," commented the chief executive of the online safety organization.
"AI tools have made it so survivors can be targeted repeatedly with just a few simple actions, giving offenders the ability to make potentially endless amounts of sophisticated, lifelike child sexual abuse material," she added. "Material which further exploits survivors' trauma, and renders young people, particularly girls, more vulnerable both online and offline."
Support Interaction Information
The children's helpline also released details of counselling sessions where AI has been mentioned. AI-related risks discussed in the sessions include:
- Using AI to evaluate weight, body and appearance
- Chatbots discouraging children from talking to trusted adults about harm
- Being bullied online with AI-generated material
- Digital blackmail using AI-faked pictures
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and associated terms were mentioned – four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 interactions were related to mental health and wellbeing, encompassing using chatbots for support and AI therapy apps.