Google’s Jeff Dean believes that major advances in AI and machine learning can come from anywhere in the world.
Jeff Dean is a senior Google Fellow and the company’s artificial intelligence (AI) lead. He designed many of Google’s biggest infrastructure projects and is widely considered the brain behind its AI efforts.
Dean joined Google in 1999 and co-designed many generations of Google’s crawling, indexing and query-serving systems, as well as major pieces of Google’s initial advertising and AdSense for Content systems. He is also a co-designer and co-implementer of Google’s distributed computing infrastructure, including the MapReduce, BigTable and Spanner systems; protocol buffers; the open source TensorFlow system for machine learning; and a variety of internal and external libraries and developer tools.
The conversation below is taken from a stage interview with Dean, conducted by video link at the internet giant’s Making AI event in Amsterdam in November 2018.
What are you most excited about in terms of the opportunities of AI, and what do you see as some of the obstacles to realising this?
I think over the last eight or 10 years, we’ve seen that the ability of machines to do things they previously couldn’t has really grown. Computer vision now works, whereas eight or 10 years ago it was not working that well, and that means machines can see. Speech recognition and understanding have also really improved, so these basic capabilities have improved tremendously, and what that means is that many different fields and industries are thinking about how these new capabilities can be used to improve whatever they do.
But the one that I am most excited about is the potential of using AI in the field of healthcare. There is a tremendous opportunity to basically allow doctors to also get advice from systems that are trained on medical data to give instant second opinions, to make the diagnoses that we all rely on in our own healthcare more correct more often – and that is a tremendous opportunity.
Healthcare systems vary tremendously around the world in terms of how they are structured, with regulatory issues and very real privacy issues in healthcare data. But if you think about a large healthcare system with 10m patients and 20,000 physicians and 10 years of data about patients, that’s 200,000 years of doctor wisdom that is embedded in the electronic medical record systems of that healthcare system. It would be amazing if we could use AI – and I think we can – to get the opinions of all of your 20,000 doctor colleagues as an extra set of advice for every doctor when they are making critical decisions.
To do some of these amazing things in AI requires a lot of data. To what extent is data either a barrier to or a requirement for success when you are developing products?
The most successful kind of machine learning today is what is known as supervised learning, where you essentially have some dataset containing both the inputs to a problem you are trying to solve and the desired outputs, often produced by humans. Either you have expert doctors labelling images of retinas, saying ‘this one is diseased, this one is not’, or, through the course of operating a business, the interactions people have with that business produce the data that is useful for machine learning.
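The idea of supervised learning described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (not Google’s systems): a toy 1-nearest-neighbour model that memorises expert-labelled (input, output) pairs and labels a new input by its closest training example. The feature values and labels are invented for the example.

```python
# A minimal sketch of supervised learning on labelled examples.
# The retina-scan "features" and labels below are hypothetical.

def train(examples):
    """'Training' for a 1-nearest-neighbour model: memorise labelled pairs."""
    return list(examples)

def predict(model, x):
    """Return the label of the nearest training input (squared Euclidean distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = min(model, key=lambda pair: dist(pair[0], x))
    return nearest[1]

# Inputs paired with desired outputs produced by humans (expert labels).
labelled = [
    ((0.9, 0.8), "diseased"),
    ((0.2, 0.1), "healthy"),
    ((0.85, 0.7), "diseased"),
    ((0.15, 0.3), "healthy"),
]

model = train(labelled)
print(predict(model, (0.8, 0.75)))  # nearest labelled example is "diseased"
```

Real systems use far richer models than nearest-neighbour lookup, but the contract is the same: labelled input–output pairs in, a function that maps new inputs to predicted outputs.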
Right now, [with] some kinds of machine learning, we basically learn new models from scratch for every new problem we care about. These problems can sometimes be complicated, like translating English sentences into Japanese, but I think over time what we are going to see is that, more and more, we train models and AI systems to do many different things, and that will allow us to get those systems to do new things with less data, because they can leverage the expertise they have from solving many other kinds of tasks when figuring out the best way to solve a new one.
I think it is one important component, but it is far from the only component. You need lots of clever algorithms, you need work on research, you have to identify the important problems to tackle. But we do use data – for example, in our spelling correction system. When users type a query one way, don’t get the results they want, and then follow up with a related query that differs by only a small number of characters from what they typed a few moments before, we infer that the second query is a spelling correction.
That data can help improve our spelling correction systems for other users – we can predict you maybe mistyped this and you really meant something else. When you see the ‘Did you mean?’ prompt on Google, that is an example of the kind of thing that data is helpful with. And I think it is a really nice aspect of using the service; [it] creates some kinds of data that are useful for improving the service itself – that benefits all the users of the service.
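The inference Dean describes – treating a quick follow-up query that differs by only a few characters as a likely spelling correction – can be sketched with an edit-distance check. This is a hypothetical illustration, not Google’s actual system; the function names and the two-edit threshold are assumptions for the example.

```python
# Hypothetical sketch of inferring spelling corrections from consecutive
# queries: a follow-up that differs from the previous query by only a
# couple of characters is treated as a likely correction.

def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (one row at a time)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

def looks_like_correction(first_query, second_query, max_edits=2):
    """Infer that the second query corrects a misspelling in the first."""
    return (first_query != second_query
            and edit_distance(first_query, second_query) <= max_edits)

print(looks_like_correction("recieve payment", "receive payment"))  # True
print(looks_like_correction("cat videos", "weather tomorrow"))      # False
```

A production system would also weigh how quickly the second query followed the first and whether its results were clicked, but the core signal – a small edit distance between consecutive queries – is what makes ‘Did you mean?’ suggestions possible.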
How competitive do you think the AI landscape is? Can anybody compete with China and the US?
There are different places that are leaders in how this is playing out. On the academic research side, Europe has some of the best universities in the world, producing amazing PhD graduates in machine learning and AI, alongside great universities in the US.
The Chinese universities are definitely improving in their graduate work and their undergraduate population is very large.
The continent of Africa is seeing a tremendous upswing in interest in AI and machine learning, and has a powerful, organised community working to ensure Africans are leading some of the work in AI. I was just in Cape Town, South Africa, with 500 machine-learning experts from 44 African countries, and it was great to see the enthusiasm.
It is not a US-and-China-only game; there are lots of countries and participants around the world. At a governmental level, there are different degrees of involvement and sophistication. The Chinese government last year came out with a clear report setting out a national strategy for what it is going to be doing in AI and machine learning.
If you look at Canada, it has been investing in AI and machine learning for the better part of 30 years. Some of the initial work that sparked today’s deep-learning movement really came from CIFAR [Canadian Institute for Advanced Research] funding, which provided the early seed and research support in this space.
Governments are definitely thinking about these issues – some are further ahead than others – but I think every government should really be thinking about what AI could mean for their country, what are the opportunities there and how do they want to benefit from it.