About

Giving Voice to Digital Democracies: The Social Impact of Artificially Intelligent Communications Technology  

‘Hey Siri, how should I vote in the next national election?’

Using manifesto promises and the data she has gathered about her owner, Siri (or Cortana, or Alexa, or any other virtual assistant) could determine which party championed that owner’s core socio-political and economic values – or she could simply name the party offering the most enticing tax breaks to the corporation that created her. And if her response were based on an ethically dubious pre-programmed agenda, who would know?

Automated conversational agents are prototypical examples of Artificially Intelligent Communications Technology (AICT): systems that make extensive use of speech technology, natural language processing, smart telecommunications, and social media. AICT is already rapidly transforming modern digital democracies by enabling unprecedentedly swift and diffuse language-based interactions, yet for the same reason it offers alarming opportunities for distortion and deception. Unbalanced data sets can covertly reinforce problematic social biases, while microtargeted messaging and the distribution of malinformation can be used for malicious purposes.

Responding to these urgent concerns, this Humanities-led project brings together experts from linguistics, philosophy, speech technology, computer science, psychology, sociology, and political theory to develop design objectives that can guide the creation of more ethical and trustworthy AICT systems. Such systems would have the potential to influence more positively the kinds of social change that will shape modern digital democracies in the very near future.

To this end, the various activities undertaken as part of this project explore several key ethical and social issues relating to AICT, and they are designed to establish a dialogue involving academia, industry, government, and the public. The central research questions include:

  • What form should an applied ethics of AICT take?
  • To what extent can social biases be removed from AICT?
  • How can the dangers of dis/mis/malinformation in AICT applications be reduced most effectively?
  • How can ethical AICT have a greater positive impact on social change?

This project is part of the Centre for the Humanities and Social Change, Cambridge, funded by the Humanities and Social Change International Foundation.

People

The project team, from left to right: Ian Roberts, Marcus Tomalin, Ann Copestake and Bill Byrne.


Events

Various workshops and an international conference are being organised as part of this project.

Forthcoming events

Global perspectives on teaching AI ethics, 30 March 2023

Previous events

Panel discussions

  • 28 February 2018: Professor Steve Young (Apple), Professor David Runciman (POLIS), Dr Hugo Zaragoza (Amazon, Barcelona)
  • 7 March 2018: Dr Eva von Redecker (Social Philosophy, Humboldt University), Dr Catherine Flick (Computing & Social Responsibility, De Montfort University), Mevan Babakar (Full Fact)
  • 14 March 2018: Professor Ross Anderson (ICT, Computer Laboratory), Dr Sander van der Linden (Psychology), Professor Tobias Matzner (Media, Algorithms, Society, Paderborn University)

Publications

Selected publications

Chubb, J., Missaoui, S., Concannon, S., Maloney, L. and Walker, J. A., ‘Interactive storytelling for children: a case-study of design and development considerations for ethical conversational AI’, International Journal of Child-Computer Interaction, 100403 (2021).

Saunders, D. and Byrne, B., ‘Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem’, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (2020).

Saunders, D., Sallis, R. and Byrne, B., ‘Neural Machine Translation Doesn’t Translate Gender Coreference Right Unless You Make It’, Proceedings of the Second Workshop on Gender Bias in Natural Language Processing (2020).

Tomalin, M., ‘Rethinking online friction in the information society’, Journal of Information Technology (2022).

Tomalin, M., Byrne, B., Concannon, S., Saunders, D. and Ullmann, S., ‘The practical ethics of bias reduction in machine translation: why domain adaptation is better than data debiasing’, Ethics and Information Technology (6 March 2021).

Tomalin, M. and Ullmann, S., ‘AI could be a force for good – but we’re currently heading for a darker future’, The Conversation (14 October 2019).

Ullmann, S., ‘Gender Bias in Machine Translation Systems’. In: Hanemaayer, A. (ed.), Artificial Intelligence and Its Discontents. Social and Cultural Studies of Robots and AI. Palgrave Macmillan, Cham (2022). https://doi.org/10.1007/978-3-030-88615-8_7

Ullmann, S., Discourses of the Arab Revolutions in Media and Politics (Routledge, 2021).

Ullmann, S., advice piece on how to engage with the public, Cambridge Researcher (5 May 2021).

Ullmann, S., ‘“Can I see your parts list?” What AI’s attempted chat-up lines tell us about computer-generated language’, The Conversation (28 April 2021).

Ullmann, S. and Saunders, D., ‘Online translators are sexist – here’s how we gave them a little gender sensitivity training’, The Conversation (30 March 2021).

Ullmann, S. and Tomalin, M., ‘Quarantining online hate speech: technical and ethical perspectives’, Ethics and Information Technology (14 October 2019).
