Examining artificial intelligence technologies through the lens of children’s rights


Researchers, policymakers and industry should involve children and their caregivers when designing new policies and initiatives dealing with artificial intelligence (AI)-based technologies.

That’s the overriding recommendation of a report by the European Commission on Artificial Intelligence and the Rights of the Child.

Artificial intelligence-based internet and digital technologies, says the commission’s Joint Research Centre (JRC), offer children many opportunities but, if not properly designed and used, they may also negatively affect some of their rights, such as their right to protection, participation, education and privacy.

Parents, teachers and children are rarely directly involved in policymaking or research that aims to mitigate the risks and augment the benefits of AI. But when designing policies, the more stakeholders interact, the more we can ensure that all sides' perspectives are considered. The JRC report on AI and the rights of the child seeks to shed light on these aspects. It also identifies the key requirements that should drive trustworthy AI, methods for enabling more effective engagement between key stakeholders, and knowledge gaps that need to be addressed to ensure that children's fundamental rights are respected when they interact with AI technology in their daily lives.

The JRC report includes reflections from invited experts in the field who participated in and contributed to the study. It also contributes to the science-for-policy research carried out at the JRC in the area of trustworthy AI for children.

What makes AI trustworthy?

According to the JRC report, the development of trustworthy AI-based tools used by children requires:

  1. Making strategic and systemic choices during the development of AI-based services and products intended for children to ensure their sustainability, as these tools use many natural and energy resources.
  2. Empowering children and their carers to have control over the way their personal data is used by AI technology.
  3. Explaining AI systems in child-friendly language and in a transparent manner, and holding AI actors accountable for the proper functioning of the systems they develop, operate or deploy.
  4. Ensuring the absence of discriminatory biases in the data and algorithms these tools rely on.

Methods for effective engagement

The report also proposes some tangible methods for researchers and policymakers to facilitate the participation of children and other relevant stakeholders in implementing the above recommendations.

Multi-stakeholder participatory approaches should be applied, involving children, researchers, policymakers, industry, parents and teachers to define common goals and to build child-friendly AI by design.

These approaches will be made more effective if they are based on communication and collaboration, and if they address conflicting priorities between the parties. The inclusion of under-represented populations would mitigate discrimination and promote fairness among children growing up in diverse cultural contexts.

Also, the creation of frameworks and toolkits, incorporating aspects such as personal data protection and risk assessment, would help guide the design and the evaluation of child-friendly AI systems in the short- and long-term.

Knowledge gaps to be addressed

Since limited scientific evidence on the impact of AI on children is available, JRC authors have identified some knowledge gaps that need to be addressed in research and policy agendas.

For example, more research is needed on the impact of AI technology on children's cognitive and socio-emotional capacities; schools should prepare children for a world transformed by AI technology and therefore develop their competences and literacy; and AI-based systems targeting children should be developed to fit their cognitive stage.

A mix of research approaches

To unearth these conclusions, JRC researchers used a mix of approaches.

They selected three AI applications for children and examined them through the lens of children's rights, identifying certain risks such as lack of respect for children's privacy, possible algorithmic discrimination and lack of fairness.

They organised two workshops with children and young people, and three workshops with policymakers and researchers in the field of AI and children's rights, which revealed that each group prioritised different concerns.

Moreover, current policy initiatives for AI and children’s rights of eight major international organisations were reviewed. They were found to be aligned, to a certain extent, in terms of identified AI risks and opportunities for children, although differing in terms of goals and priorities.

Source: European Commission JRC. Republished under Creative Commons Licensing. 

Author: Guest author
