Lead researcher Karolina La Fors of DesignLab stated, “The developments in the field of AI are moving so fast that we should not forget to consider the ethical limitations and especially how children define ethical and social values in their interactions with AI. Children contribute ethical standards that adults often don’t think about.”
KidsRights chairman Marc Dullaert calls on the Dutch government to involve children and young people in drawing up ethical standards for human-centric AI: “At the moment, everything that is technically possible seems to simply happen to us, including the harmful effects. Children and young people are extremely vulnerable in this respect. We must therefore protect them, involve them, and be inspired by them. With this study, children and young people are giving the government a lesson.”
Research offers new insights
A representative study[1] of 374 children aged 4 to 16, most of them between 6 and 13, yielded new and surprising insights that are highly relevant to the development of AI systems and to society. The researchers asked the children about their positive and negative experiences with AI systems in their daily lives, and about the different social roles AI could take over. When asked whether they would like to be helped by a robot shop assistant, 55.3% answered yes and 41% said no; the majority believed that robots could outperform humans as sellers. Asked whether a police officer can be a robot, 54.8% were against it and 41.5% in favor; the opposing group feared that robot police officers could threaten their safety. Most children (61.3%) thought a robot GP is a bad idea, compared to 35.3% who would like to interact with one.
[1] The children’s statements (made with parental consent) are reliable and representative of the population. No economic or health-related background information was requested, and any child was welcome to participate in the study. All data is anonymized. The spread of geographical locations and ages provides a diverse representation of children, and offering both online and offline ways to complete the survey helped reach a broader diversity of participants. The survey questions were approved by the Ethics Committee of the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS) of the University of Twente and were also reviewed by three academic reviewers.
“Can a robot be your friend?”
Children indicated that they consider human characteristics and values important and that these should not be lost in the development of AI systems. AI must serve people, and human checks are important to them. They see AI as a solution to socially relevant problems, such as a robot that helps children with dyslexia or one that retrieves plastic from the ocean. However, a robot is not seen as a friend, mainly because it lacks human literacy, comfort, humor, and empathy.
Children’s thoughts translated into ethical and social values:
- Human Literacy – “Robots lack human qualities.”
- Emotional Intelligence – “Robots have no emotions.”
- Love and Kindness – “Robots don’t give love.”
- Authenticity – “Robots don’t have opinions of their own.”
- Human Care and Protection – “Robots cannot comfort.”
- Autonomy – “Robots should not take over the world.”
- AI in Service – “Robots must help me.”
- Exuberance – “Robots should be able to play with me.”
Broad and early dialogue helps
While many children (70.6%) did not know the term AI before the survey, they came to realize that they interact with a wide variety of AI systems on a daily basis. Examples of smart systems that children recognized as AI include computers, smart TVs, thermostats, smartphones, smart doorbells, robotic lawnmowers, and vacuum cleaners. Children also associated AI with specific brands and products, such as Google, PlayStation, YouTube, TikTok, Netflix, the Apple Watch, the NS public transport chip card, and the ordering kiosks at McDonald’s.
These are some of the results from the research report “2022 AI Register of Children in the Netherlands: The Awareness, Ethical and Social Values and Ideas of Children about AI Systems.” La Fors and Dullaert explain that much is still unclear about the implications of AI systems for children’s development. A broad and early dialogue helps with this. Questioning children about their awareness of and ideas about AI is crucial not only for the development of more human-centric AI systems but also for the societal discussion on what it means to be human-centric, especially in a world where technological developments such as ChatGPT, GPT Force, and the Internet of Things present themselves as indispensable.
You can download the full report and the child-friendly version below!