The Centre for the Study of Existential Risk (CSER) is an interdisciplinary research centre within CRASSH dedicated to the study and mitigation of existential risks.
Now is an important time for efforts to reduce existential risk. Powerful emerging technologies and the impacts of human activity pose new and largely unstudied risks, which in the worst case could threaten humanity's survival. CSER aims to help reap the enormous benefits of technological progress while safely navigating these catastrophic pitfalls. These threats affect everyone – we can only tackle them together.
Visit the Centre’s website for more information.
Managing Extreme Technological Risks
Risks associated with emerging and future technological advances, and with the impacts of human activity, threaten human extinction or civilisational collapse. Managing these extreme technological risks is an urgent task – but one that poses particular difficulties and has been comparatively neglected in academia. Our flagship research project, funded by the Templeton World Charity Foundation, develops, implements and refines a systematic approach to identifying, managing and mitigating this class of risks. Read more.
Global Catastrophic Biological Risks
We have begun to develop a research agenda for global catastrophic biological risks. Our work has included horizon-scanning for emerging issues in biotechnology, analysing gene drives, and debating gain-of-function research. On biosafety, we are developing strategies for promoting responsible research and innovation in collaboration with academics, biotech companies, and bio-hacker communities. On biosecurity, we have developed a collaborative strategy of next steps for the Biological Weapons Convention. Read more.
Extreme Risks and the Global Environment
We are developing collaborative strategies to address environmental risks with a global community of concerned academics, technologists, and policy-makers. This work includes research on biodiversity loss and ecological collapse, strategies for divestment and investment decisions by major institutional investors, and engagement with governments and other institutions – such as a 2014 workshop at the Vatican that influenced the landmark Papal Encyclical on climate change. Read more.
Risks from Artificial Intelligence
We are part of a community of technologists, academics and policy-makers with a shared interest in safe and beneficial artificial intelligence (AI). We are working in partnership with them to shape the public conversation productively, foster new talent, and launch new centres such as the Leverhulme Centre for the Future of Intelligence. Our research has addressed decision theory relevant to AI safety, as well as the near-term and long-term security implications of AI. Read more.