Artificial Intelligence (AI) is having an ever-greater impact on how we communicate and interact. Over the last few years, smart speakers, virtual personal assistants and other forms of 'conversational AI' have become increasingly popular. In the context of health and social care, attention has begun to focus on whether an AI system can perform caring duties or offer companionship. Chatbots designed to support wellbeing and perform therapeutic functions are already available and widely used. But, for all their growing skills, machines remain machines, driven by algorithms and data. Can a chatbot ever really care, and can it, or indeed should it, empathise?
Empathy is seen as especially important in particular contexts. For example, delivering a diagnosis will likely require more empathy than ordering a pizza. Consequently, empathy in medical interactions has been extensively studied and forms part of the curriculum for trainee doctors. As chatbots are used in an increasing variety of contexts, researchers have begun to explore how to train them to be more 'emotionally intelligent' and empathetic. Virtual personal assistants such as Siri and Alexa are designed to respond to users in ways that create the illusion that they understand something of the user's psychological or emotional state. However, empathy is often thought of as a uniquely human trait, one that enables us to form connections with and understand one another. This talk examines what it means to be empathetic and the ethical implications that arise when positioning AI systems in roles that require them to communicate with empathy.
A talk by Shauna Concannon, Giving Voice to Digital Democracies Research Project.