UK Technology Companies and Child Safety Officials to Examine AI's Capability to Create Abuse Images

Technology companies and child protection agencies will receive authority to assess whether artificial intelligence tools can produce child exploitation material under recently introduced UK legislation.

Significant Rise in AI-Generated Harmful Material

The announcement coincided with figures from a child protection watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the government will allow approved AI companies and child safety groups to inspect AI systems – the foundational technology behind conversational and image-generation tools – and verify that adequate safeguards are in place to prevent them from producing images of child exploitation.

"This is ultimately about stopping exploitation before it occurs," stated the minister for AI and online safety, adding: "Specialists, under strict conditions, can now detect the danger in AI models promptly."

Addressing Legal Obstacles

The amendments were needed because producing and possessing CSAM is illegal, which meant AI developers and other parties could not generate such images even as part of a testing process. Until now, officials could only act after AI-generated CSAM had been uploaded online.

This legislation aims to avert that problem by enabling approved bodies to stop the creation of those images at their origin.

Legislative Framework

The government is introducing the changes as amendments to the crime and policing bill, which also creates a prohibition on possessing, producing or sharing AI systems designed to generate exploitative content.

Real-World Consequences

Recently, the official toured the London base of a children's helpline and listened to a simulated call to advisors featuring an account of AI-based abuse. The interaction depicted an adolescent seeking help after being blackmailed with a sexualised AI-generated image of themselves.

"When I hear about young people facing blackmail online, it causes me extreme frustration and rightful concern amongst families," he stated.

Alarming Statistics

A prominent online safety organization reported that instances of AI-generated abuse material – each of which may be a webpage containing numerous images – had risen significantly so far this year.

Cases of category A material – the gravest form of abuse – rose from 2,621 visual files to 3,086.

  • Girls were predominantly targeted, making up 94% of illegal AI images in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Reaction

The legislative amendment could "represent a crucial step in guaranteeing AI tools are safe before they are released," commented the chief executive of the internet monitoring organization.

"AI tools have made it possible for victims to be victimised all over again with just a few clicks, giving offenders the ability to produce potentially limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which further commodifies victims' suffering and makes children, especially girls, more vulnerable both online and offline."

Counselling Session Data

Childline also published details of counselling sessions in which AI was mentioned. AI-related risks raised in the sessions include:

  • Using AI to evaluate body size, physique and appearance
  • Chatbots discouraging children from talking to trusted adults about harm
  • Being bullied online with AI-generated content
  • Digital extortion using AI-faked pictures

Between April and September this year, Childline conducted 367 counselling sessions in which AI, conversational AI and related topics were mentioned – four times as many as in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy apps.

Robert Byrd