Artificial Intelligence and young people – Children’s Commissioner for England highlights her concerns


Dame Rachel de Souza, Children’s Commissioner for England, says she is ‘concerned’ about the risks that Artificial Intelligence (AI) poses for children and young people.

The commissioner believes that the incorporation of AI into online platforms used by youngsters brings with it increased risks of cyberbullying, breaches of privacy and discrimination.

But in a specially written article for the commissioner’s website, Dame Rachel does acknowledge the advantages that AI has the potential to bring, if harnessed in the right way.

Ultimately though, she is keen to stress that much more work is needed if society is to properly understand the ways in which children and young people ‘can safely interact’ with new technologies like AI, and ‘what strong safeguards should look like’.

“In recent years there has been a huge increase in the use of artificial intelligence (AI) tools, both in terms of the number of products available on the market and their use by consumers. It is clear there are advantages to the use of AI, but there is still so much that is not yet known about these tools,” she writes.

‘Still so much not yet known’

“Children using AI are potentially exposing themselves to new risks of harm online, and their lives may be reshaped more fundamentally by these tools in the future. Given its recent emergence, it is unsurprising that the actual impact of AI on children’s lives is still not fully understood. Ofcom tracks children’s online and media usage, and has found that 59 per cent of 7-17-year-old and 79 per cent of 13-17-year-old internet users in the UK have used a generative AI tool in the last year. Snapchat’s My AI was the most commonly used platform (51 per cent), and there was no difference by gender in the number of children using these tools.


“Concern about AI has not been a major theme for children in my youth voice work or in my work on children’s online lives. Where children have told me about AI, it has largely been to express pessimism about the future and their careers. For example, in The Big Ask, children said the following about artificial intelligence:

“I personally think that technology would take over and many jobs such as agriculture may be replaced with artificial intelligence. For example, a farmer’s job eliminated for a robot etc.” – Girl, 11, The Big Ask.

“Not enough jobs because of artificial intelligence taking over. No opportunities for people from poorer backgrounds. No help for people who aren’t academic.” – Girl, 14, The Big Ask.

“The Government has signalled it will take a pro-innovation approach that will focus on positioning the UK as a market to test and innovate on new AI tools, while using AI in combination with public datasets to improve public services. For example, the Department for Education has recently published its position on how generative AI could be used in the education sector.

“As Children’s Commissioner, I want to sound a note of caution on the risks that AI poses for child protection. The Government’s white paper largely does not address children or child protection, other than to note that AI tools are being deployed to identify child sexual abuse material (CSAM).

“I am concerned about the risks posed by generative AI platforms available to children and the incorporation of AI tools into platforms commonly used by children. These risks may include:

  • Cyberbullying and sexual harassment: The use of AI-generated text or images to bully or sexually harass children.
  • Generative child sexual abuse material (CSAM): The Internet Watch Foundation has published on the key risks in this area. These include: AI alteration of CSAM to evade the detection systems used by platforms and law enforcement; AI-generated photorealistic CSAM; AI tools that allow perpetrators to generate CSAM offline, where detection is not possible; and AI tools that can be used to generate CSAM images from images of real children (e.g., famous children, or children known to perpetrators).
  • Disinformation and fraud: The use of plausible-seeming AI-generated text in the service of disinformation or fraud.
  • Impacts on education: The use of AI tools that may undermine formal assessments and, ultimately, negatively impact on children’s learning.
  • Privacy concerns: AI tools rely on large datasets and there are important implications for how children’s data is used and their privacy protected.
  • Bias or discrimination: Bias in the design of systems or their underlying data leading to discrimination against some groups. Children are already aware of these issues: “The idea that it all depends on how rich you are, or dependant on your gender, race, sexual orientation, etc. It shouldn’t be! Especially with, unconscious and conscious, bias to minority groups coded into facial recognition technology and artificial intelligence, as that sector continues to grow in our lives.” – Girl, 15, The Big Ask.

“We are yet to understand the true impact of these tools on children’s lives. However, I consider that AI demonstrates the problem of emerging technologies that are not fully covered by the existing regulatory regime and how children can suffer as a result.

“I have been a strong proponent of the robust protections for children in the Online Safety Act, but it has taken us many years to get here, and many, many children have grown up in an online environment that was and is not safe or designed for them. I am very pleased that the Act is in law and that I have a statutory role to ensure that children’s voices are heard, but AI is not covered by the Act and I am concerned that we are once again lagging behind an issue.

‘We are once again lagging behind an issue’

“More work is needed to fully understand how children can safely interact with these new technologies, and what strong safeguards should look like. I will continue to raise these issues in the implementation of Ofcom’s Children’s Code under the Online Safety Act regime and in my engagement with Ministers on tackling child sexual abuse and exploitation in the UK.

“I also look forward to addressing them in my response to Baroness Bertin’s Pornography Review, which I am pleased will look at the issue of AI-generated pornography.


Author: Simon Weedy
