This week, leaders in artificial intelligence from academia and industry, along with legal, economic and risk experts worldwide, signed an open letter calling for the robust and beneficial development of artificial intelligence. The letter follows a recent private conference organised by the Future of Life Institute and the Centre for the Study of Existential Risk (CSER), and funded by CSER’s Jaan Tallinn, at which AI leaders and interdisciplinary researchers explored the future opportunities and societal challenges posed by artificial intelligence.

The conference resulted in a set of research priorities aimed at making progress on the technical, legal, and economic challenges posed by this rapidly developing field.

This conference, the research preceding it, and the support for the concerns raised in the letter may make this a pivotal moment in the development of this transformative field. But why is this happening now?

Why now?

An exciting new wave of progress in artificial intelligence is under way, driven by the success of a set of new approaches – “hot” areas include deep learning and other statistical learning methods. Advances in related fields such as probability theory, decision theory, neuroscience and control theory are also contributing. These have kickstarted rapid improvements on problems where progress had long been slow: image and speech recognition, perception and movement in robotics, and the performance of autonomous vehicles are just a few examples. As a result, impacts on society that once seemed far away now seem pressing.

Is society ready for the opportunities – and challenges – of AI?

Artificial intelligence is a general-purpose technology – one that will shape the development of many other technologies. As a result, it will affect society deeply and in many different ways. The near- and long-term benefits will be great: it will increase the world’s economic prosperity and enhance our ability to make progress on many important problems. In particular, any area where progress depends on analyzing and using huge amounts of data – climate change, health research, biotechnology – could be accelerated.

However, even impacts that are positive in the long run can create serious near-term challenges. What happens when swathes of the labour market become automated? Can our legal systems assign blame when there is an accident involving a self-driving car? Does the use of autonomous weapons in war conflict with basic human rights?

It’s no longer enough to ask “can we build it?” Now that it looks like we can, we have to ask: “How can we build it to provide the most benefit? And how must we update our own systems – legal, economic, ethical – so that the transition is smooth, and we make the most of the positives while minimizing the negatives?” These questions need careful analysis, with technical AI experts, legal experts, economists, policymakers and philosophers working together. And because this affects society at large, the public also needs to be represented in the discussions and decisions that are made.

Safe, predictable design of powerful systems

There are also deep technical challenges as these systems become more powerful and more complex. We have already seen unexpected behaviour from systems that weren’t thought through carefully enough – for example, the role of algorithms in the 2010 financial “flash crash”. It is essential that powerful AI systems don’t become black boxes operating in ways we can’t entirely understand or predict. This will require better ways to make systems transparent and easier to verify, better security so that systems can’t be hacked, and a deeper understanding of logic and decision theory so that we can predict the behaviour of our systems in the different situations they will act in. There are open questions to be answered: can we design these powerful systems with perfect confidence that they will always do exactly what we want them to do? And if not, how do we design them with limits that guarantee only safe actions?

Shaping the development of a transformative technology

The societal and technical challenges posed by AI are hard, and will become harder the longer we wait. They will need insights and cooperation from the best minds in computer science, but also from experts in all the domains that AI will impact. But by making progress now, we will lay the foundations we need for the bigger changes that lie ahead.

Some commentators have raised the prospect of human-level general artificial intelligence. As Stephen Hawking and others have said, this would be the most transformative invention in human history, and will need to be approached very carefully. Luckily, we’re still decades away, or possibly even centuries. But we need that time. We need to start work on today’s challenges – how to design AI so that we can understand and control it, and how to change our societal systems so we gain the great benefits AI offers – if we’re to be remotely ready for that. We can’t assume we’ll get it right by default.

The benefits of this technology cannot be overstated. Developed correctly, AI will allow us to make better progress on the hard scientific problems we will face in the coming decades, and might prove crucial to a more sustainable life for our world’s 7 billion inhabitants. It will change the world for the better – if we take the time to think and plan carefully. This is the motivation that has brought AI researchers – and experts from all the disciplines it impacts – together to sign this letter.

 

CENTRE FOR RESEARCH IN THE ARTS, SOCIAL SCIENCES AND HUMANITIES