On 1 and 2 October 2020, the Giving Voice to Digital Democracies project hosted a two-day virtual workshop on ‘Language, Technology and Mental Health’. The interdisciplinary event brought together researchers from academia and industry across many different disciplines to discuss the effects, both positive and negative, that AI-based communications technologies are currently having, and will have, on mental health and well-being.

Mental health conditions such as depression, anxiety, panic disorder and post-traumatic stress disorder are amongst the most commonly identified mental health problems in modern societies, and the number of medical diagnoses has risen noticeably in recent years, particularly since the beginning of the current Covid-19 pandemic. What is more, the rise in mental health problems has often been found to be connected to the increasing use of technology and social media, especially amongst young people. The first session of day one of the workshop looked at this issue more closely.

Session 1: Social media and mental health

Cambridge-based scholar Dr Amy Orben (Emmanuel College) talked about the link between screen time and adolescent well-being and identified a bidirectional relationship between social media use and life satisfaction. She also looked at the problem from a historical angle, comparing today’s omnipresence of social media to earlier concerns about children’s addiction to the radio. Once again, a technology has become too invasive and powerful to be shut out. Dr Orben pointed out that certain subgroups are more susceptible to the negative and addictive features of social media, and that individual differences need to be taken into account to fully understand these effects. She used the analogy of having to identify the ‘diabetics’ of modern social media use.

The second speaker in the session on social media and mental health was Dr Michelle O’Reilly (University of Leicester), who looked at the issue from practice and policy perspectives. O’Reilly pointed out that social media are not inherently good or bad, and that young people nowadays use technology for a wide variety of purposes. She also presented important work that she and her team have done in developing a “Digital Ethics of Care” framework, which focuses on empathy, adolescent responsibility and agency of care for others in digital spaces.

Session 2: AI and suicide risk detection

With the advance of technological solutions, internet- and mobile-based interventions have made their way into people’s lives, most commonly in the form of therapy apps. While these applications can be a helpful source of information and mental healthcare tips, criticism remains as to how well they perform in the crucial situation of a user being at risk of self-harm or even suicide. The second session of the first day of the workshop looked at this concern more closely.

As the first speaker of the session on AI and suicide risk detection, Dr Eileen Bendig (University of Ulm) looked at the most important qualities of an automated suicide risk detection system, as well as the different risk factors it should focus on. Most notably, such a system should be flexible, draw on wearables and ecological momentary assessment, allow for individualisation, identify and offer support contacts, provide the possibility of connecting to specialists, and be transparent. Overall, internet- and mobile-based interventions appear to be successful, showing effects in reducing depression that do not differ significantly from face-to-face interventions. However, very little empirical evidence exists so far on the effectiveness of AI- or chatbot-based therapy approaches.

Founder and CEO of Qntfy, Glen Coppersmith, talked about the pragmatic and ethical realities of using AI to improve mental health. Qntfy develops technological solutions to better understand human behaviour and wellbeing and to improve mental healthcare. Evidence suggests that mental healthcare patients are at the highest risk of self-harm and suicide in between treatments. Coppersmith and his team have turned to social media to supplement research from the traditional healthcare system. By analysing the language of social media posts written one to six months prior to a suicide attempt, algorithmic screening has been shown to achieve higher precision in assessing a patient’s suicide risk than traditional screening. Coppersmith showed that tools and techniques from computational linguistics and natural language processing (NLP) can help identify significant spikes in sentiments like sadness or anger before and after a suicide attempt.
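To give a flavour of the kind of signal such screening relies on, the sketch below flags posts whose sadness score is an outlier for a given user, using a simple z-score test. This is a minimal, hypothetical illustration: the toy lexicon, the threshold and all function names are assumptions made here for exposition, not Qntfy’s actual pipeline, which relies on far richer NLP models.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean, stdev

# Hypothetical illustration only: a toy lexicon-based sadness score and a
# z-score spike detector. Real systems use trained NLP models and
# clinically validated signals, not a hand-picked word list.

SADNESS_LEXICON = {"sad", "hopeless", "alone", "empty", "worthless", "tired"}

@dataclass
class Post:
    timestamp: datetime
    text: str

def sadness_score(text: str) -> float:
    """Fraction of whitespace tokens found in the (toy) sadness lexicon."""
    tokens = text.lower().split()
    return sum(t in SADNESS_LEXICON for t in tokens) / len(tokens) if tokens else 0.0

def find_spikes(posts: list[Post], z_threshold: float = 2.0) -> list[Post]:
    """Return posts whose sadness score is an upward z-score outlier
    relative to this user's own posting history."""
    scores = [sadness_score(p.text) for p in posts]
    if len(scores) < 2:
        return []
    mu, sigma = mean(scores), stdev(scores)
    if sigma == 0:
        return []
    return [p for p, s in zip(posts, scores) if (s - mu) / sigma > z_threshold]
```

One deliberate choice in this sketch is to compare each post against the individual’s own baseline rather than a single population-wide cut-off, since what counts as an unusual level of negative sentiment varies from person to person.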

The first day of the workshop thus ended with a look at internet- and mobile-based solutions that can contribute to, and possibly improve, traditional approaches to mental healthcare. Even more recently, conversational agents have become a novel addition to the list of smart technologies deployed in modern mental health therapy. These developments were discussed in detail, from both a technological and a conversation-analytic perspective, in the next session.

Session 3: Understanding and automating therapeutic dialogues

The second day of our workshop began with a talk by Dr Raymond Bond (Ulster University), who presented the ChatPal project and looked at ways in which novel chatbots can offer help to users that conventional mental health apps fail to provide. ChatPal is a therapy bot that focuses on supporting mental wellbeing in rural areas. According to Bond, the ChatPal chatbot goes beyond a solely user-centred design: it exists in an entire ecosystem surrounding the user. A chatbot can be a good first step for people who are not yet ready for direct face-to-face interaction with a therapist; the chatbot does not judge, so there is no stigma for people to worry about. However, Bond also pointed out that the limitations of NLP are a concern: the chatbot may not understand a user clearly enough, it lacks human empathy, and diagnostic bots in particular tend not to be supported by healthcare professionals.

In the next talk, Professor Rose McCabe (City, University of London) gave us an insight into her research on patient-doctor interaction, using conversation microanalysis to examine how healthcare professionals ask patients about potential self-harm and suicide. She discussed the differences between yes-inviting and no-inviting questions and their effects on patients’ responses. McCabe also pointed out concerns regarding patients’ disclosure of suicidality. Doctors report an institutional pressure to elicit a yes/no response from patients. But what does it actually mean to be suicidal? Patients may, for instance, worry about the consequences of their disclosure (e.g. hospitalisation). Unfortunately, it also remains the case that most patients do not show explicit suicidal ideation prior to death.

Session 4: The future of digital mental healthcare

The final session of the two-day workshop considered what the future of digital mental healthcare may hold. Our two speakers provided two essential perspectives: one from an industry and applications angle, the other offering insights into the scientific research.

The first speaker of this session was Dr Valentin Tablan, Chief AI Officer at IESO Digital Health, a company that offers cognitive behavioural therapy (CBT) treatments for NHS patients on a secure online platform. He presented recent research into how patient-therapist conversations are evaluated, and gave an insight into how patient engagement and symptom improvement are measured to best assess a patient’s progress. Tablan also talked about the importance of expert data annotators, as well as collaborative work, in developing the best possible solutions and models.

Finally, Professor Maria Liakata (Queen Mary University of London; The Alan Turing Institute) offered a fitting finish to the session. She presented her work at the Turing Institute on creating time-sensitive sensors from language data, using NLP tools in longitudinal studies of language use to examine how linguistic behaviour can reflect changes in mental health and wellbeing. Liakata pointed out that key challenges include data privacy, ethics and data sparsity in real-world datasets.
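As a rough sketch of what such longitudinal analysis might look like (an illustrative assumption of this write-up, not Liakata’s actual methods), the fragment below tracks one widely studied linguistic feature, the rate of first-person singular pronouns, across a rolling window of a user’s posts and flags sustained upward shifts; the window size and threshold are arbitrary.

```python
from collections import deque
from datetime import datetime
from typing import Iterable, Iterator, Tuple

# Hypothetical sketch: elevated first-person singular pronoun use is one
# linguistic marker the research literature has associated with depressive
# language. The windowing scheme and threshold here are illustrative only.

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def pronoun_rate(text: str) -> float:
    """Share of whitespace tokens that are first-person singular pronouns."""
    tokens = text.lower().split()
    return sum(t in FIRST_PERSON for t in tokens) / len(tokens) if tokens else 0.0

def sustained_shifts(
    posts: Iterable[Tuple[datetime, str]],  # (timestamp, text), oldest first
    window: int = 30,
    delta: float = 0.05,
) -> Iterator[datetime]:
    """Yield timestamps where the mean rate over the latest `window` posts
    exceeds the mean over the preceding `window` posts by more than `delta`."""
    rates: deque = deque(maxlen=2 * window)
    for timestamp, text in posts:
        rates.append(pronoun_rate(text))
        if len(rates) == 2 * window:
            older = sum(list(rates)[:window]) / window
            newer = sum(list(rates)[window:]) / window
            if newer - older > delta:
                yield timestamp
```

Comparing adjacent windows of a person’s own language, rather than single posts, is one simple way of separating a sustained behavioural change from day-to-day noise, which is the kind of signal longitudinal studies are after.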

• Stefanie Ullmann is a Postdoctoral Research Associate with Giving Voice to Digital Democracies.

