Published in the Goethe Institute's Expert Statements series, February 2022

Author: Stefanie Ullmann

When it comes to fairness or ethically relevant decisions, things become difficult for Artificial Intelligence. ‘An algorithm has no sense of tact’ is how Prof. Dr Katharina Zweig aptly puts it in the title of her SPIEGEL bestseller (‘Ein Algorithmus hat kein Taktgefühl’; Heyne Verlag, 2019). In the production of written information, AI offers enormous opportunities. Without human reflection, however, it also carries the risk of reproducing stereotypes and, through its choice of terminology (for example in relation to gender and ethnicity), of having a discriminatory effect. Ultimately, training AI through Deep Learning is much like raising a child: you have to teach it what it does not know. The problem is that the data with which AI is trained is itself tainted with prejudices. What kinds of bias can be found in texts created with the help of AI? And what solutions can be implemented to mitigate or even avoid such distortions of reality? We talked about this with five experts from the UK, Germany, the Netherlands, and Switzerland.
