The Centre for the Study of Existential Risk


About CSER

Professor Huw Price, Academic Director, CSER
Dr Seán Ó hÉigeartaigh, Executive Director, CSER
 

The Centre for the Study of Existential Risk is an interdisciplinary research centre focused on the study of risks of human extinction that may emerge from technological advances. CSER aims to combine key insights from the best minds across disciplines to tackle the greatest challenge of the coming century: safely harnessing our rapidly developing technological power.

An existential risk is one that threatens the existence of our entire species. The Cambridge Centre for the Study of Existential Risk (CSER), a joint initiative between philosopher Huw Price, cosmologist Martin Rees and software entrepreneur Jaan Tallinn, was founded on the conviction that these risks require a great deal more scientific investigation than they presently receive. The Centre’s aim is to develop a new science of existential risk, together with protocols for the investigation and mitigation of technology-driven existential risks.

CSER is hosted within the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), under the management of Dr Seán Ó hÉigeartaigh. The Centre's Management Committee is supported by an international Advisory Board. See the CSER project website for more information.

CSER’s current research agenda comprises two main strands:

  1. Developing a general methodology for the management of extreme technological risk. The aim is to develop a set of methods and protocols specifically designed for the identification, assessment and mitigation of this class of risks: high-impact risks associated with technological developments. These methods will be applicable to complementary and subsequent projects on risks associated with specific technologies.

  2. Analysis of specific potential risks. Initial focus areas are likely to include artificial intelligence, synthetic biology, geoengineering, virus research, systemic risks with catastrophic consequences, and biodiversity loss.

Details of our current projects can be found on the adjoining pages.


 

 


Managing Extreme Technological Risk

Templeton World Charity Foundation 

October 2015 – September 2018

Executive Summary

We know of natural risks to our species, such as asteroid impacts.  We also know that we threaten our own existence, e.g., by nuclear war.  Such home-grown extreme risks have been with us for decades.  But many distinguished scientists are now concerned that developments in technology (e.g., in AI or biotechnology) may soon pose more direct catastrophic threats.  The new capabilities of such technologies might place the power to do catastrophic harm in dangerously few human hands, or (in the case of AI) take it out of our hands altogether.

The nature and level of such extreme technological risks (ETRs) are at present difficult to assess, because they have received little serious scientific attention. The need to consider them is a new one, but one we seem likely to face for the foreseeable future, as technology becomes even more powerful. Hence it is crucial that we investigate these risks seriously, and learn how they can best be managed in the long term. The purpose of the new Cambridge Centre for the Study of Existential Risk (CSER), in which this project is based, is to lead the international scientific community in conducting research on these issues.

This project begins with a preliminary model of the components needed for the study and management of ETR. We will refine the model by implementing parts of it, and by learning from our successes and failures. In this way, we will clarify both the present ETRs themselves and how risks of this class should be managed in the future. Our outputs will be both theoretical (e.g., scientific papers) and practical (e.g., establishing best practices for ETR mitigation). In the process, we will greatly expand the community of scholars, technologists and policy-makers who are aware of the task of managing ETRs, and do much to define their understanding of the nature of this task. By this means, in the long term, we hope to substantially reduce the risk that our species will not survive its own technological success.

Managing Extreme Technological Risk is a three-year project within the Centre for the Study of Existential Risk (CSER), funded by the Templeton World Charity Foundation. It is linked primarily to the first strand of CSER’s research agenda and comprises five subprojects, outlined below.

The first subproject aims to examine the limitations of standard cost-benefit analysis (CBA) as a means of assessing the importance of mitigating extreme technological risk (ETR); to develop a version of CBA better suited to this context; and to derive conclusions about the importance of mitigating ETR compared to other global priorities. Relevant disciplines include: Philosophy (especially moral philosophy, applied ethics, and formal decision theory) and Economics (e.g., the economics of sustainability, the theory of future discounting).
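The project description does not fix a formalism, but the core difficulty is easy to illustrate. Standard CBA discounts future benefits exponentially at a pure time-preference rate; in the minimal sketch below, the rate ρ = 0.03 per year and the 500-year horizon are assumptions chosen for the example, not figures from the project.

% Illustrative sketch only; rate and horizon are assumed, not the project's.
\[
  \mathrm{PV}(V, t) \;=\; V e^{-\rho t},
  \qquad
  \mathrm{PV}(V, 500) \;=\; V e^{-0.03 \times 500} \;=\; V e^{-15} \;\approx\; 3 \times 10^{-7}\, V .
\]

On this standard treatment, even extinction-scale stakes a few centuries away receive near-zero weight, which is one of the limitations of CBA that motivates developing a version better suited to ETR.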

Successful management of ETR is likely to require early detection. The second subproject aims to optimise the horizon-scanning and foresight techniques available for this task, and to understand the similarities and differences between the case of ETR and other horizon-scanning applications. Relevant disciplines include: Zoology and Ecology, Conservation Studies, Science and Technology Studies, and Psychology.

The third subproject asks what can be done to encourage risk-awareness and societal responsibility, without discouraging innovation, within the communities developing future technologies with transformative potential. Relevant disciplines include: Science and Technology Studies, Geography, Philosophy of Science, and relevant technological fields (e.g., AI, virology, synthetic biology).

The fourth subproject (a) explores the hypothesis that the culture of science is in some ways ill-adapted to successful long-term management of ETR; and (b) investigates the option of ‘tweaking’ scientific practice, so as to improve its suitability for this special task. It will examine topics including inductive risk, the use and limitations of the precautionary principle, and the case for scientific pluralism and ‘breakout thinking’ where extreme technological risk is concerned. Relevant disciplines include: Philosophy of Science, Ethics, Science and Technology Studies, and Sociology.

Unlike the previous subprojects, the fifth focuses on a key challenge relating to the long-term safe development of one of CSER’s core focus technologies: artificial intelligence. It will investigate technical and philosophical problems at the intersection of AI and philosophical decision theory, including a rigorous characterisation of the problem of decision-theoretic stability under algorithmic self-improvement. Relevant disciplines include: Mathematics, Computer Science, and Philosophical Decision Theory.
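The project materials leave the formal details open; the following is an illustrative framing of the stability question under assumed notation, not the project’s own definition. Suppose an agent runs a policy π chosen to maximise the expectation of a fixed utility function U, and evaluates candidate successor policies by that same criterion:

% Illustrative framing; notation assumed for this sketch.
\[
  \pi^{*} \;\in\; \arg\max_{\pi'} \, \mathbb{E}\bigl[\, U \mid \text{the agent replaces } \pi \text{ with } \pi' \,\bigr].
\]

Decision-theoretic stability then asks under what conditions such an agent endorses its own decision procedure, that is, selects π* = π or a successor that preserves the same criterion, rather than drifting to a different criterion with each round of self-modification.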