- About
- People
- Events archive
- Selected publications
- Blogs and podcasts
About
Giving Voice to Digital Democracies: The Social Impact of Artificially Intelligent Communications Technology
Hey Siri, how should I vote in the next national election?
Using manifesto promises and gathered data, Siri (or Cortana, or Alexa, or any other virtual assistant) could determine which party championed her owner’s core socio-political and economic values – or she could name the party offering the most enticing tax breaks to the corporation that created her. And if her response was based on an ethically dubious pre-programmed agenda, who would know?
Automated conversational agents are prototypical examples of Artificially Intelligent Communications Technology (AICT), and such systems make extensive use of speech technology, natural language processing, smart telecommunications, and social media. AICT is already rapidly transforming modern digital democracies by enabling unprecedentedly swift and diffuse language-based interactions. At the same time, it offers alarming opportunities for distortion and deception: unbalanced data sets can covertly reinforce problematic social biases, while microtargeted messaging and the distribution of malinformation can be exploited for malicious purposes.
Responding to these urgent concerns, this Humanities-led project brings together experts from linguistics, philosophy, speech technology, computer science, psychology, sociology, and political theory to develop design objectives that can guide the creation of more ethical and trustworthy AICT systems. Such systems have the potential to influence more positively the kinds of social change that will shape modern digital democracies in the very near future.
To this end, the various activities undertaken as part of this project explore several key ethical and social issues relating to AICT, and these events are designed to establish a dialogue involving academia, industry, government, and the public. The central research questions that provide a primary focus for the interactions include:
- What form should an applied ethics of AICT take?
- To what extent can social biases be removed from AICT?
- How can the dangers of dis/mis/malinformation in AICT applications be reduced most effectively?
- How can ethical AICT have a greater positive impact on social change?
This project is part of the Centre for the Humanities and Social Change, Cambridge, funded by THE NEW INSTITUTE.
People
From left to right: Ian Roberts, Marcus Tomalin, Ann Copestake and Bill Byrne
- Professor Ian Roberts, Professor of Linguistics (Principal Investigator)
- Professor Bill Byrne, Professor of Electrical Engineering (Co-Investigator)
- Professor Ann Copestake, Professor of Computational Linguistics (Co-Investigator)
- Dr Marcus Tomalin, The Machine Intelligence Laboratory (Senior Research Associate)
- Dr Stefanie Ullmann (Postdoctoral Research Associate)
- Dr Shauna Concannon (Postdoctoral Research Associate)
- Una Yeung (Project Administrator)
Events archive
Event recordings
Events
Events and workshops

- The Future of Artificial Intelligence: Language, Ethics, Technology | 25 Mar 2019, 10:00am - 5:00pm, Room SG1, The Alison Richard Building, 7 West Road, Cambridge, CB3 9DT
- The Future of Artificial Intelligence: Language, Gender, Technology | 17 May 2019, 10:00am - 5:00pm, Room SG1, The Alison Richard Building, 7 West Road, Cambridge, CB3 9DT
- The Future of AI: Language, Society, Technology | 30 Sep 2019, 9:30am - 5:00pm, Room SG1, The Alison Richard Building, 7 West Road, Cambridge, CB3 9DT. Giving Voice to Digital Democracies project, Centre for the Humanities and Social Change.
- Artificial Intelligence and Social Change | 19 Oct 2019, 2:30pm - 3:30pm, Room SG1/2, Alison Richard Building, 7 West Road, Cambridge CB3 9DT. Part of the University of Cambridge Festival of Ideas 2019.
- Disempowering Hate Speech: How to Make Social Media Less Harmful | 19 Oct 2019, 4:00pm - 5:00pm, Faculty of Law, Sidgwick Site, 10 West Road, Cambridge CB3 9DZ. Part of the University of Cambridge Festival of Ideas 2019.
- Fact-Checking Hackathon | 10-12 Jan 2020, 10:00am, Room LR4, Baker Building, Department of Engineering, University of Cambridge, Trumpington Street, Cambridge CB2 1PZ
- Mindful of AI: Language, Technology and Mental Health | 1-2 Oct 2020, all day, online. Workshop convened by Giving Voice to Digital Democracies.
- Artificial Intelligence and Multimodality: From Semiotics to Intelligent Systems | 14 Jun 2021, all day, online. Giving Voice to Digital Democracies.
- Understanding and automating Counterspeech | 29 Sep 2021, all day, online. Giving Voice to Digital Democracies.
- POSTPONED: Children and artificial intelligence: risks, opportunities and the future | 25 Mar 2022, 13:30 - 17:15, online
- Combating harmful content online: the potential of Counterspeech | 4 Apr 2022, 16:00 - 17:00, online
- Online harms: how AI can protect us | 4 Apr 2022, 14:00 - 15:00, online
- Children and artificial intelligence: risks, opportunities and the future | 25 Apr 2022, 13:30 - 17:15, online
- Polarisation, hate speech, and the role of artificial intelligence | 23 Mar 2023, 17:30 - 18:30, online & Seminar Room S1, Alison Richard Building, 7 West Road, Cambridge CB3 9DP
- Global perspectives on teaching AI ethics | 30 Mar 2023, 14:00 - 18:00, online & Room SG1, Alison Richard Building, 7 West Road, Cambridge
- Artificial Intelligence: can systems like ChatGPT automate empathy? | 31 Mar 2023, 17:30 - 18:30, online & Room SG1, Alison Richard Building, 7 West Road, Cambridge CB3 9DP
- CANCELLED: Metaphors and AI: narratives in public discourse | 25 Mar 2024, 14:00 - 18:00 (tbc), online and Room SG1, Alison Richard Building, 7 West Road, Cambridge
Related events
- Artificial Intelligence and Its Discontents: Critiques from the Social Sciences and Humanities
  Research Associate Stefanie Ullmann was the discussant at a book launch with Ariane Hanemaayer
  Institute for Humanities in Africa (HUMA), University of Cape Town, 7 November 2022
- Combatting Hate Speech and Disinformation against Social Polarisation
  Stefanie Ullmann was on the panel with Handan Uslu, Kareem Darwish and Alex Mahadevan
  Hrant Dink Foundation, Istanbul, 7 October 2022
- (De)Polarization and the Role of Artificial Intelligence
  Stefanie Ullmann was on the online panel with Marco Guerini and Riccardo Gallotti
  Bruno Kessler Foundation, Centre for Religious Studies, 12 May 2022
- Artificial intelligence and hate speech: opportunities and risks
  Stefanie Ullmann was on the online panel with Claudia von Vacano and Berrin Yanıkoğlu
  Hrant Dink Foundation, 25 January 2022
- Disinformation and technologies
  Conference ‘Disinformation: Open Societies, Hidden Wars’
  Evangelische Akademie Tutzing, Germany, 10-12 September 2021
- Covid-19, digital democracy and fake news
  Stefanie Ullmann was on the online panel with Jon Roozenbeek and Nina Schick
  Hay Festival Winter Weekend, 27 November 2020
Selected publications
- Ullmann, S. and Tomalin, M. (eds), Counterspeech: Multidisciplinary Perspectives on Countering Dangerous Speech (Routledge, 2023).
- Chubb, J., Missaoui, S., Concannon, S., Maloney, L. and Walker, J. A., ‘Interactive storytelling for children: a case-study of design and development considerations for ethical conversational AI’, International Journal of Child-Computer Interaction, 100403 (2021).
- Saunders, D. and Byrne, B., ‘Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem’, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (2020).
- Saunders, D., Sallis, R., Byrne, B., ‘Neural Machine Translation Doesn’t Translate Gender Coreference Right Unless You Make It’, Proceedings of the Second Workshop on Gender Bias in Natural Language Processing (2020).
- Tomalin, M., ‘Meta’s AI chatbot hates Mark Zuckerberg – but why is it less bothered about racism?’ The Conversation (2023).
- Tomalin, M., ‘Rethinking online friction in the information society’, Journal of Information Technology (2022).
- Tomalin, M., Byrne, B., Concannon, S., Saunders, D., Ullmann, S., ‘The practical ethics of bias reduction in machine translation: why domain adaptation is better than data debiasing’, Ethics and Information Technology (6 March 2021).
- Tomalin, M. and Ullmann, S., ‘AI could be a force for good – but we’re currently heading for a darker future’, The Conversation (14 October 2019).
- Ullmann, S., ‘Gender Bias in Machine Translation Systems’, in Hanemaayer, A. (ed.), Artificial Intelligence and Its Discontents: Social and Cultural Studies of Robots and AI (Palgrave Macmillan, Cham, 2022). https://doi.org/10.1007/978-3-030-88615-8_7
- Ullmann, S., Discourses of the Arab Revolutions in Media and Politics (Routledge, 2021).
- Ullmann, S., ‘Cambridge Researcher: advice piece on how to engage with the public’ (5 May 2021).
- Ullmann, S., ‘“Can I see your parts list?” What AI’s attempted chat-up lines tell us about computer-generated language’, The Conversation (28 April 2021).
- Ullmann, S. and Saunders, D., ‘Online translators are sexist – here’s how we gave them a little gender sensitivity training’, The Conversation (30 March 2021).
- Ullmann, S. and Tomalin, M., ‘Quarantining online hate speech: technical and ethical perspectives’, Ethics and Information Technology (14 October 2019).
Blogs and podcasts
2021
- Discourses of the Arab Revolutions in Media and Politics: 5 questions to Stefanie Ullmann
- Workshop report for ‘Understanding and automating Counterspeech’, Stefanie Ullmann
- Artificial intelligence and multimodality: workshop reflections, Marcus Tomalin
- Can I see your parts list? What AI’s chat-up lines tell us about computer-generated language, Stefanie Ullmann
- Online translators are sexist – here’s how we gave them a little gender sensitivity training, Stefanie Ullmann and Danielle Saunders
- Thoughtlines podcast | Marcus Tomalin – We are what we code
2020
- Event summary – Mindful of AI: language, technology and mental health, Stefanie Ullmann
- Searching for the facts in a global pandemic, Shauna Concannon
- Tackling the problem of online hate speech, Marcus Tomalin and Stefanie Ullmann
- Fact-checking hackathon – a write-up, Marcus Tomalin, Stefanie Ullmann and Shauna Concannon
2019
- The future of AI: language, society, technology, Marcus Tomalin
- The future of artificial intelligence: language, gender, technology, Marcus Tomalin and Stefanie Ullmann
- The future of artificial intelligence: language, ethics, technology, Marcus Tomalin and Stefanie Ullmann