Calls for child safety checks on generative AI products

Almost 4 in 5 adults want child safety checks on new generative artificial intelligence (Gen AI) products, according to a recent poll.

The NSPCC says polling reveals that 78% of the UK public would opt for child safety checks on the new technology, which uses AI to generate new content, including images and text.

Savanta surveyed 3,356 people from across the UK and found the vast majority (89%) had at least some level of concern that Gen AI technology could be unsafe for children. Meanwhile, 78% said they would prefer to have child safety checks carried out on new Gen AI products, even if this caused delays in releasing the products.

These fears are not unfounded; since 2019, the NSPCC’s confidential child helpline, Childline, has been receiving contacts from children about the harms they have experienced from AI.

The NSPCC is now calling on the UK government to adopt specific child safeguards in its legislation.

The 7 child safety risks posed by Gen AI

It comes as recent research conducted by the charity identifies seven key child safety risks associated with generative AI:

  • Sexual grooming
  • Sexual harassment
  • Bullying
  • Financially motivated extortion
  • Child sexual abuse and exploitation material
  • Harmful content
  • Harmful ads and recommendations

The charity's report, Viewing Generative AI and children's safety in the round, confirms that Gen AI is being used to groom, harass, manipulate and mislead children and young people.

It includes an analysis of research on Gen AI commissioned by the NSPCC and conducted by legal and technology consultancy AWO. The consultancy carried out qualitative interviews and a workshop to capture a wide range of professional voices from sectors including child safety, Gen AI development and policymaking.

The report combines this research with a consultation held with a panel of 11 young people aged 13-16 from the NSPCC's Voice of Online Youth taskforce, as well as insights from Childline about children's experiences with AI.

The research found that generative AI technology is being used to generate sexual abuse images of children, enable perpetrators to more effectively commit sexual extortion, groom children and provide misinformation or harmful advice to young people.

In the report’s executive summary, the NSPCC warns about how “Gen AI outputs can be used maliciously to target children, how children’s images can be exploited, the harms from consuming Gen AI content, and the potential for children to create harmful Gen AI content.”

Urgent action required to address child safety risks

As this new technology develops and becomes more accessible, the child safety risks it poses will grow.

The NSPCC report makes several recommendations for government and technology companies to help address child safety concerns.

These include stripping out child sexual abuse material from AI training data and conducting robust risk assessments on models to ensure they are safe before they are rolled out.

The government is exploring new legislation to help regulate AI technology and, this month, policymakers will meet with tech companies and third sector organisations, including the NSPCC, at a global summit in Paris to discuss the benefits and risks of AI.

The NSPCC will be fighting for the government to include specific child safety measures in its legislation.

They have set out 4 urgent actions for the government to take to make generative AI safe for children:

  1. Adopt a duty of care for children’s safety – Gen AI companies must prioritise child safety and protect the rights of children and young people in the design and development of their products and services
  2. Embed a statutory duty of care in legislation – the government must ensure that any legislation enacted to regulate AI includes a statutory duty of care placed on technology firms to ensure they are held accountable for child safety
  3. Place children at the heart of generative AI decisions – products and services must be designed, developed and deployed with the needs and experiences of children and young people in mind
  4. Expand and promote the research and evidence base for Gen AI and child safety – the government, academia and relevant regulatory bodies should invest in studying the risks and supporting the development of evidence-based policies

Generative AI a “double-edged sword”

The NSPCC’s CEO, Chris Sherwood, described generative AI as a “double-edged sword,” acknowledging that it “provides opportunities for innovation, creativity and productivity” for children and young people while warning that it can also have a “devastating and corrosive impact on their lives.”

He continued:

“We can’t continue with the status quo where tech platforms ‘move fast and break things’ instead of prioritising children’s safety. For too long, unregulated social media platforms have exposed children to appalling harms that could have been prevented. Now, the government must learn from these mistakes, move quickly to put safeguards in place and regulate generative AI, before it spirals out of control and damages more young lives.

“The NSPCC and the majority of the public want tech companies to do the right thing for children and make sure the development of AI doesn’t race ahead of child safety. We have the blueprints needed to ensure this technology has children’s wellbeing at its heart, now both government and tech companies must take the urgent action needed to make Generative AI safe for children and young people.”

Training to safeguard children and young people

First Response Training (FRT) is a leading national training provider delivering courses in subjects such as health and safety, first aid, fire safety, manual handling, food hygiene, mental health, health and social care, safeguarding and more.

They work with a large number of early years settings, schools and childcare providers, as well as colleges, youth groups and children’s services. Their courses include Safeguarding Children.

A trainer from FRT says:

“Safeguarding children means protecting them both offline and online. It means being aware of new and developing technologies, how children may be interacting with them, and how these intersect with issues of child safety and protection.

“It’s so important that we are mindful of the harms children and young people are exposed to when they use technology and that there are mechanisms in place to protect them, and to offer them help and support when they need it most. Children who are anxious about technology and things they have seen or experienced online need to feel they have a safe space where they can talk about their worries and experiences.

“It’s vital that anyone who works with children and young people is aware of their responsibility for safeguarding children and that they can recognise the signs that indicate a child may be experiencing abuse, including online grooming, harassment or sextortion, and know the correct action to take in response.”

For more information on the training that FRT can provide, please call them today on freephone 0800 310 2300 or send an e-mail to info@firstresponsetraining.com.