28 Nov 2016 12:00pm - 5:00pm Seminar Room SG1, Alison Richard Building

Description

This workshop is by application or invitation only. Limited places.
If you are interested in attending, please see information below.
Deadline: 30 October 2016

The Ethics of Using Machine Learning in Professional Practice: Perspectives from Journalism, Law and Medicine.

This workshop aims to bring practitioners from law, journalism and bio-medicine together with social scientists and computer scientists to explore the ethical questions raised by the growing use of machine learning in processes of information discovery, analysis and decision-making.
Recent examples include the deployment of machine learning methods in a proprietary digital tool that generates risk assessments informing judges in US parole hearings; the use of bots in international newsrooms to support editors' and journalists' selection of stories for publication; and Google DeepMind's partnership with the NHS to build an app for medical practitioners treating kidney disease. Are such cases indicative of a wider trend towards the delegation of decision-making to autonomous computer systems in areas of activity which were previously the preserve of human experts?

Presentations and discussions at the symposium will explore the implications for ethics and governance of integrating machine learning and other algorithms into wider computational systems and workflows, and how this process relates to evolving social processes of decision-making and accountability in professional practice in law, journalism and bio-medicine.

This workshop is by application or invitation only, and discussions will be conducted under the Chatham House Rule. Researchers or professional practitioners interested in attending should apply by email to Dr Anne Alexander (raa43@cam.ac.uk) before 30 October with a short statement explaining why they would like to participate in the event. The Ethics of Big Data group also welcomes proposals for short presentations; potential presenters should include an abstract of their proposed contribution.


Part of the Ethics of Big Data Research Group series
Organised by the Ethics of Big Data Research Group in collaboration with The Work Foundation and InformAll.

Administrative assistance: gradfac@crassh.cam.ac.uk

Programme

12:00 - 12:30

Sandwich lunch for participants

12:30 - 14:00

Session 1: Concepts

A round-table discussion led by the Ethics of Big Data Research group exploring the conceptual frameworks underpinning discussion of the ethics of machine learning.

14:00 - 14:30

Coffee break

14:30 - 16:30

Session 2: Applications

Presentation 1: Michael Veale (UCL)
How are public sector values entering today's public sector machine learning systems?

The use of machine learning for public sector decision support is growing, as are worries that these systems might exhibit characteristics of ethical concern. This talk will present a preliminary empirical landscape of the practices of responsibility found in deployments of public sector machine learning in the UK and further afield, and discuss how these emerging trends sit within the broader understanding of responsibility in machine learning and public services found in computer science and public administration scholarship.

Presentation 2: Elvira Perez Vallejos (Nottingham)
Editorial responsibilities arising from personalization algorithms

Summary: News feeds, search engines and content recommendations use increasingly sophisticated and personalized algorithms to cut through Big Data in the hope of providing content that is sufficiently relevant to keep users on the platform. Superficially, there seems to be nothing wrong with prioritizing information that users are likely to agree with; after all, people tend to select information that aligns with their own beliefs. However, these personalization algorithms risk amplifying a polarized climate and limiting exposure to attitude-challenging information. We argue that these sites have a corporate social responsibility to promote healthy democratic discourse by adopting a code of editorial-like responsibility.

In this presentation we will examine the question of editorial responsibility on social media platforms in the light of content recommendations generated by the platforms' service personalization algorithms. Specifically, we explore the position frequently taken by social media platforms that they are not media companies because they do not create content, but technology companies that merely produce tools. This distinction conveys legal protection against liability for hosting third-party content. To contextualise this argument, we will present preliminary data illustrating young people's opinions on personalised recommender systems, such as the Facebook news feed, and their reflections on the 'filter bubble' effect.

Please note that this is a closed workshop and discussions will be conducted under the Chatham House Rule.

