|25 Mar 2022||13:30 – 17:15||Online|
This event has been postponed to 25 April due to UCU strike action.
Children and Artificial Intelligence is a timely workshop that brings together experts from academic disciplines such as sociology, psychology, computer science and linguistics, as well as leading figures from regulatory bodies, the charity sector, and child-focused agencies. The event aims to explore the future of online safety and how to harness the opportunities of language-based AI for children while ensuring that potential risks are minimised.
The workshop is convened by Giving Voice to Digital Democracies, a research project which is part of the Centre for the Humanities and Social Change, Cambridge, and organised by Shauna Concannon, Cambridge. It is funded by the Humanities and Social Change International Foundation.
The event is free but registration is required.
Queries: Una Yeung
Language-based artificial intelligence (AI) is already influencing the development of children around the world. Intelligent autonomous systems increasingly provide crucial functionality for toys, virtual assistants, chatbots, video games, smartphones and social media platforms, and the more children interact with these technologies, the more the technologies influence children's social and cognitive development. Recent research has shown that speaking frequently to devices such as Alexa, Cortana or Siri can change the way children communicate, and many parents remain uncertain about issues such as the appropriate age for children to be given their first smartphone and whether extended periods of screen time are potentially harmful. In addition, pre-teens and teenagers on social media are increasingly exposed to cyberbullying as well as content that is sexually explicit, racist, sexist, homophobic or transphobic. Although many countries impose an age limit that prohibits children under 13 from using social media, in practice the restriction is almost impossible to police. Moreover, the dominant corporations are constantly seeking new ways to attract pre-teens by developing platforms such as YouTube Kids and Messenger Kids. From a business perspective, the economic incentive is clear: if users become addicted early, they are likely to remain addicted as they pass through adolescence and into adulthood.
While concerns about such risks are widespread, there are more positive scenarios that should be considered too. For example, language-based AI techniques can protect young and vulnerable social media users, and data-driven anti-cyberbullying systems can either discourage the perpetrators or else support the victims. Further, the digitisation of learning has created numerous opportunities for intelligent autonomous systems to provide targeted personalised support for children who may be disadvantaged in more traditional classroom environments. Specifically, AI-guided learning has the potential to benefit those with conditions such as ADHD and dyslexia since automated tutorial platforms can adapt to a given student’s individual pace and style of learning.
The above summary suggests that while AI-based systems clearly pose serious potential risks for children, they also offer considerable potential benefits. The task of reflecting upon how children access and use such systems is therefore an urgent one. It is essential that children are supported to develop AI literacy and online safety practices (e.g. by limiting their own screen time, or by selecting sensible privacy settings), yet there are also various forms of digital protection that can provide effective supplementary safeguarding.
|13:30 – 15:00||
Session 1: Children’s interactions with AI
Introduction to the workshop
Radhika Garg (Syracuse University)
Katie Winkle (KTH Stockholm/Uppsala University)
Melanie Penagos (Independent Consultant and former Project Manager, AI for Children, UNICEF)
Q&A and discussion
|15:00 – 15:30||
|15:30 – 17:15||
Session 2: The future of online safety
Welcome to the session
Gordon Harold (University of Cambridge)
Dan Sexton (Internet Watch Foundation)
Sachin Jogia (Ofcom)
Q&A and discussion
- Radhika Garg – Assistant Professor, Syracuse University
- Katie Winkle – Postdoctoral Research Fellow, KTH Stockholm / Assistant Professor in Social Robotics, Uppsala University
- Melanie Penagos – Independent Consultant and former Project Manager, AI for Children, UNICEF
- Sachin Jogia – Chief Technology Officer, Ofcom
- Gordon Harold – Professor of the Psychology of Education and Mental Health, University of Cambridge
- Dan Sexton – Chief Technology Officer, Internet Watch Foundation