Giving Voice to Digital Democracies: The Social Impact of Artificially Intelligent Communications Technology
‘Hey Siri, how should I vote in the next national election?’
Using manifesto promises and gathered data, Siri (or Cortana, or Alexa, or any other virtual assistant) could determine which party championed her owner’s core socio-political and economic values – or she could name the party offering the most enticing tax breaks to the corporation that created her. And if her response was based on an ethically dubious pre-programmed agenda, who would know?
Automated conversational agents are prototypical examples of Artificially Intelligent Communications Technology (AICT): systems that make extensive use of speech technology, natural language processing, smart telecommunications, and social media. AICT is already rapidly transforming modern digital democracies by enabling unprecedentedly swift and diffuse language-based interactions. Yet this same capability creates alarming opportunities for distortion and deception: unbalanced data sets can covertly reinforce problematic social biases, while microtargeted messaging and the distribution of dis-, mis-, and malinformation can be exploited for malicious purposes.
Responding to these urgent concerns, this Humanities-led project brings together experts from linguistics, philosophy, speech technology, computer science, psychology, sociology, and political theory to develop design objectives that can guide the creation of more ethical and trustworthy AICT systems. Such systems have the potential to shape more positively the social changes that will define modern digital democracies in the very near future.
To this end, the activities undertaken as part of this project explore several key ethical and social issues relating to AICT, and its events are designed to establish a dialogue involving academia, industry, government, and the public. The central research questions that provide a primary focus for these interactions include:
- What form should an applied ethics of AICT take?
- To what extent can social biases be removed from AICT?
- How can the dangers of dis/mis/malinformation in AICT applications be reduced most effectively?
- How can ethical AICT have a greater positive impact on social change?
This project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation.
From left to right: Ian Roberts, Marcus Tomalin, Ann Copestake and Bill Byrne
- Professor Ian Roberts, Professor of Linguistics (Principal Investigator)
- Professor Bill Byrne, Professor of Electrical Engineering (Co-Investigator)
- Professor Ann Copestake, Professor of Computational Linguistics (Co-Investigator)
- Dr Marcus Tomalin, The Machine Intelligence Laboratory (Senior Research Associate)
- Dr Stefanie Ullmann (Postdoctoral Research Associate)
- Dr Shauna Concannon (Postdoctoral Research Associate)
- Una Yeung (Project Administrator)
Various workshops and an international conference will be organised as part of this project. Upcoming events: to be announced.
- Children and Artificial Intelligence: Risks, Opportunities and the Future, 25 March 2022
- Understanding and Automating Counterspeech, 29 September 2021
- Mindful of AI: Language, Technology and Mental Health, 1–2 October 2020
- Fact-Checking Hackathon, 10–12 January 2020
- The Future of Artificial Intelligence: Language, Society, Technology, 30 September 2019
- The Future of Artificial Intelligence: Language, Gender, Technology, 17 May 2019
- The Future of Artificial Intelligence: Language, Ethics, Technology, 25 March 2019
- Artificial Intelligence and Social Change, a Festival of Ideas talk, 19 October 2019
- Disempowering Hate Speech: How to Make Social Media Less Harmful, a Festival of Ideas talk, 19 October 2019
- 28 February 2018: Professor Steve Young (Apple), Professor David Runciman (POLIS), Dr Hugo Zaragoza (Amazon, Barcelona)
- 7 March 2018: Dr Eva von Redecker (Social Philosophy, Humboldt University), Dr Catherine Flick (Computing & Social Responsibility, De Montfort University), Mevan Babakar (Full Fact)
- 14 March 2018: Professor Ross Anderson (ICT, Computer Laboratory), Dr Sander van der Linden (Psychology), Professor Tobias Matzner (Media, Algorithms, Society, Paderborn University)
Chubb, J., Missaoui, S., Concannon, S., Maloney, L. and Walker, J. A., ‘Interactive storytelling for children: a case-study of design and development considerations for ethical conversational AI’, International Journal of Child-Computer Interaction, 100403 (2021)
Saunders, D. and Byrne, B., ‘Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem’, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (2020)
Saunders, D., Sallis, R., Byrne, B., ‘Neural Machine Translation Doesn’t Translate Gender Coreference Right Unless You Make It’, Proceedings of the Second Workshop on Gender Bias in Natural Language Processing (2020)
Tomalin, M., ‘Rethinking online friction in the information society’, Journal of Information Technology (2022)
Tomalin, M., Byrne, B., Concannon, S., Saunders, D., Ullmann, S., ‘The practical ethics of bias reduction in machine translation: why domain adaptation is better than data debiasing’, Ethics and Information Technology (6 March 2021)
Tomalin, M., and Ullmann, S., ‘AI could be a force for good – but we’re currently heading for a darker future’, The Conversation (14 October 2019)
Ullmann, S., Discourses of the Arab Revolutions in Media and Politics (Routledge, 2021)
Ullmann, S., ‘Cambridge Researcher: advice piece on how to engage with the public’ (5 May 2021)
Ullmann, S., ‘“Can I see your parts list?” What AI’s attempted chat-up lines tell us about computer-generated language’, The Conversation (28 April 2021)
Ullmann, S. and Saunders, D., ‘Online translators are sexist – here’s how we gave them a little gender sensitivity training’, The Conversation (30 March 2021)
Ullmann, S. and Tomalin, M., ‘Quarantining online hate speech: technical and ethical perspectives’, Ethics and Information Technology (14 October 2019)