2017: DeepMind's year in review
In July, the world number one Go player Ke Jie spoke after a streak of 20 wins. It was two months after he had played AlphaGo at the Future of Go Summit in Wuzhen, China.
“After my match against AlphaGo, I fundamentally reconsidered the game, and now I can see that this reflection has helped me greatly,” he said. “I hope all Go players can contemplate AlphaGo’s understanding of the game and style of thinking, all of which is deeply meaningful. Although I lost, I discovered that the possibilities of Go are immense and that the game has continued to progress.”
Ke Jie is a master of the game and we were honoured by his words. We were also inspired by them, because they hint at a future where society could use AI as a tool for discovery, uncovering new knowledge and increasing our understanding of the world. With machine-aided science in particular, we hope that AI systems could help make progress on challenges from climate change and drug discovery to finding complex new materials or helping ease the pressure on healthcare systems.
This potential for societal benefit is why we set up DeepMind, and we’re excited to have made continued progress on some of the fundamental scientific challenges as well as on AI safety and ethics.
The approach we take at DeepMind is inspired by neuroscience, helping to make progress in critical areas such as imagination, reasoning, memory and learning. Take imagination, for example: this distinctively human ability plays a crucial part in our daily lives, allowing us to plan and reason about the future, but is hugely challenging for computers. We continue to work hard on this problem, this year introducing imagination-augmented agents that are able to extract relevant information from an environment in order to plan what to do in the future.
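To give a flavour of the idea, here is a deliberately toy sketch of planning with a learned model: before committing to an action, the agent "imagines" a few rollouts using an internal model of the environment and prefers the action whose imagined futures look best. Everything here (EnvModel, imagine, act, the reward) is illustrative; our actual imagination-augmented agents learn the environment model and encode imagined trajectories with neural networks rather than averaging rewards like this.

```python
import random

class EnvModel:
    """Stand-in learned environment model; the real agent trains a network for this."""
    def predict(self, state, action):
        next_state = state + action
        return next_state, -abs(next_state)   # toy reward: staying near 0 is good

def imagine(model, state, actions, depth=3):
    """Roll the model forward a few steps to produce one imagined trajectory's return."""
    total = 0.0
    for _ in range(depth):
        state, reward = model.predict(state, random.choice(actions))
        total += reward
    return total

def act(model, state, actions, n_rollouts=5):
    """Score each action by its immediate model reward plus averaged imagined futures."""
    def score(a):
        next_state, reward = model.predict(state, a)
        return reward + sum(imagine(model, next_state, actions)
                            for _ in range(n_rollouts)) / n_rollouts
    return max(actions, key=score)

print(act(EnvModel(), state=5.0, actions=[-1, 0, 1]))   # tends to pick -1
```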
This neuroscience-inspired approach also created one of the most popular demonstrations of our work, when we trained a neural network to control a variety of simplified body shapes in a simulated environment. This kind of sophisticated motor control is a hallmark of physical intelligence, and is a crucial part of our research programme. Although the resulting movements were wild and - at times - ungainly, they were also surprisingly successful and made for entertaining viewing.
Separately, we made progress in the field of generative models. Just over a year ago we presented WaveNet, a deep neural network for generating raw audio waveforms that was capable of producing better and more realistic-sounding speech than existing techniques. At that time, the model was a research prototype and was too computationally intensive to work in consumer products. Over the last 12 months, our teams managed to create a new model that was 1000x faster. In October, we revealed that this new Parallel WaveNet is now being used in the real world, generating the Google Assistant voices for US English and Japanese.
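At the heart of the original WaveNet is a stack of dilated causal convolutions: each layer doubles its dilation, so the receptive field grows exponentially with depth while each output sample depends only on past samples. The minimal sketch below illustrates just that mechanism, with random placeholder weights; the real model adds gated activations, residual and skip connections, and Parallel WaveNet distils it into a student network that generates samples in parallel rather than one at a time.

```python
import numpy as np

def causal_dilated_conv(x, weights, dilation):
    """1-D causal convolution: output[t] mixes x[t] and x[t - dilation] only."""
    padded = np.concatenate([np.zeros(dilation), x])  # left-pad so no future sample leaks in
    return weights[0] * padded[:-dilation] + weights[1] * padded[dilation:]

rng = np.random.default_rng(0)
signal = rng.standard_normal(16000)      # one second of placeholder 16 kHz audio
h = signal
for layer in range(8):                   # dilations 1, 2, 4, ..., 128
    h = np.tanh(causal_dilated_conv(h, rng.standard_normal(2), 2 ** layer))

# Receptive field after the stack: 1 + sum of dilations = 256 past samples.
print(h.shape, 1 + sum(2 ** i for i in range(8)))
```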
This is an example of the effort we invest in making it easier to build, train and optimise AI systems. Other techniques we worked on this year, such as distributional reinforcement learning, population based training for neural networks and new neural architecture search methods, promise to make systems easier to build, more accurate and quicker to optimise. We have also dedicated significant time to creating new and challenging environments in which to test our systems, including our work with Blizzard to open up StarCraft II for research.
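To make one of these concrete, here is a minimal sketch of population based training: a population of workers trains in parallel, and at regular intervals each underperformer copies the parameters and hyperparameters of a stronger member (exploit) and then perturbs those hyperparameters (explore). The "training" below is a toy one-parameter problem; only the exploit/explore mechanics are meant to be representative of the method.

```python
import random

def train_step(theta, lr):
    # Toy objective: maximise -(theta - 3)^2 by gradient ascent.
    return theta + lr * (-2 * (theta - 3))

def evaluate(theta):
    return -(theta - 3) ** 2

population = [{"theta": random.uniform(-5, 5), "lr": random.uniform(0.001, 0.1)}
              for _ in range(8)]

for step in range(200):
    for worker in population:
        worker["theta"] = train_step(worker["theta"], worker["lr"])
    if step % 20 == 19:                  # periodic exploit/explore phase
        population.sort(key=lambda w: evaluate(w["theta"]))
        for weak, strong in zip(population[:2], population[-2:]):
            weak["theta"] = strong["theta"]                        # exploit: copy weights
            weak["lr"] = strong["lr"] * random.choice([0.8, 1.2])  # explore: perturb hypers

best = max(population, key=lambda w: evaluate(w["theta"]))
print(best["theta"], best["lr"])
```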
But we know that technology is not value neutral. We cannot simply make progress in fundamental research without also taking responsibility for the ethical and social impact of our work. This drives our research in critical areas such as interpretability, where we have been exploring novel methods to understand and explain how our systems work. It’s also why we have an established technical safety team that continued to develop practical ways to ensure that we can depend on future systems and that they remain under meaningful human control.
In October we took another step by launching DeepMind Ethics & Society, a research unit that will help us explore and understand the real-world impacts of AI in order to achieve social good. Our research will be guided by Fellows who are renowned experts in their fields - like philosopher Nick Bostrom, climate change specialist Christiana Figueres, leading researcher James Manyika, and economists Diane Coyle and Jeffrey Sachs.
AI must be shaped by society’s priorities and concerns, which is why we’re working with partner organisations on events aimed at opening up the conversation about how AI should be designed and deployed. For example, Joy Buolamwini, who leads the Algorithmic Justice League, and experts from Article 36, Human Rights Watch, and the British Armed Forces joined us for a session at Wired Live to discuss algorithmic bias and restricting the use of lethal autonomous weapons. As we’ve said regularly this year, these issues are too important and their effects too wide-ranging to ignore.
That’s also why we need new spaces, both within and outside AI companies, for conversations about anticipating and directing the impacts of the technology. One example is the Partnership on AI, which we co-chaired this year, and which has been charged with bringing together industry competitors, academia and civil society to discuss key ethical issues. Over the past year, PAI has welcomed 43 new nonprofit and for-profit members and a new Executive Director, Terah Lyons. And in the next few months, we’re looking forward to working with this group to examine a wide range of research themes, including bias and discrimination in algorithms, the impact of machine learning on automation and labour, and more.
We also believe in the importance of using our technology for practical social benefit, and continue to see amazing potential for real-world impact in health and energy. This year we agreed two new partnerships with NHS hospital trusts to deploy our Streams app, which supports NHS clinicians using digital technology. We’re also part of a consortium of leading research institutions that launched a groundbreaking study to determine if cutting-edge machine learning technology could help improve the detection of breast cancer.
In parallel, we've also worked hard on the oversight of our work in health. We wrote about the lessons learned from the Information Commissioner’s findings about our original partnership with the Royal Free, and DeepMind Health’s Independent Reviewers published their first open annual report on our work. Their scrutiny makes our work better. We’ve made major improvements to our engagement with patients and the public, including workshops with patients and carers, and we’re also exploring technical ways of building trust into our systems, such as the verifiable data audit, which we plan to release as an open-source tool.
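As a rough illustration of the property a verifiable data audit is designed to provide, consider a hash-chained, append-only log: each entry is hashed together with the hash of the entry before it, so any later tampering with a record breaks the chain. Our actual design is more sophisticated than this toy sketch, which shows only the tamper-evident principle; the record fields below are made up for illustration.

```python
import hashlib, json

def append(log, record):
    """Append a record, chaining its hash to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev_hash, "hash": entry_hash})

def verify(log):
    """Recompute every hash in order; any edited record breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append(log, {"actor": "clinician_1", "action": "read", "dataset": "obs_42"})
append(log, {"actor": "service_2", "action": "process", "dataset": "obs_42"})
print(verify(log))                           # True
log[0]["record"]["action"] = "delete"        # tamper with an old entry
print(verify(log))                           # False: tampering is detectable
```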
We are proud of all of our progress in 2017, but know there is still a long way to go.
Five months after we played Ke Jie in Wuzhen and retired AlphaGo from competitive play, we published our fourth Nature paper, describing a new version of the system known as AlphaGo Zero that uses no human knowledge. Over the course of millions of games, the system progressively learned the game of Go from scratch, accumulating thousands of years of knowledge in just a few days. In doing so, it also uncovered unconventional strategies and revealed new knowledge about this ancient game.
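For a toy flavour of what learning with no human knowledge means, here is a tabular learner that teaches itself the game of Nim purely by playing against itself, reinforcing the moves of whichever side wins. Nothing here is AlphaGo Zero's actual algorithm, which couples a deep neural network with Monte Carlo tree search; the point is only that a sensible strategy can emerge from self-play alone, with no human examples.

```python
import random

PILE = 10  # starting stones; players take 1-3 per turn, taking the last stone wins

# Preference table learned purely from self-play outcomes.
value = {(s, t): 0.0 for s in range(1, PILE + 1) for t in (1, 2, 3) if t <= s}

def choose(stones, eps):
    moves = [t for t in (1, 2, 3) if t <= stones]
    if random.random() < eps:                 # occasional exploration
        return random.choice(moves)
    return max(moves, key=lambda t: value[(stones, t)])

for game in range(20000):
    stones, history, player = PILE, [], 0
    while stones > 0:
        move = choose(stones, eps=0.2)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = history[-1][0]                   # whoever took the last stone wins
    for who, state, move in history:          # reinforce winner's moves, punish loser's
        value[(state, move)] += 0.01 if who == winner else -0.01

# With 10 stones the optimal opening is to take 2 (leaving a multiple of 4);
# this crude learner usually finds it after enough self-play games.
print(max((1, 2, 3), key=lambda t: value[(PILE, t)]))
```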
Our belief is that AI will be able to do the same for other complex problems, as a scientific tool and a multiplier for human ingenuity. The AlphaGo team are already working on the next set of grand challenges and we hope the moments of algorithmic inspiration they helped to create are just the beginning.