The five-minute CIO: Tom Austin, Gartner

24 Mar 2017

Gartner fellow Tom Austin. Image: Gartner

‘It is too early to decide which AI platform to standardise on. Instead, play the field. Get married in five years,’ recommends Tom Austin, Gartner’s AI evangelist.

Tom Austin, vice-president at Gartner, has been a Gartner fellow since 1997. He drives the company’s research content incubator (the Maverick programme) and is leading a new research community on the emerging era of smart machines and AI.

The research includes AI, deep machine learning, natural-language processing, conversational systems and cognitive computing.

‘It is all math that is going on; it is not a mystical or magical occurrence’
– TOM AUSTIN

Smart machines are a set of new, revolutionary and disruptive technologies that mark the beginning of a multi-decade revolution in how technology is used.

Austin is also the Gartner lead analyst on Google.

How would you describe your role?

Internally at Gartner, I am the chief evangelist for AI core technologies.

My job is to be the whip to drive another 200 or 300 people at Gartner to pick up the responsibility for understanding what the impact of AI is going to be like in industries such as healthcare, marketing, legal affairs and so forth. This thing is going to be all-pervasive.

Externally, I am the curmudgeon. I identify the problems, the weaknesses, the issues that people should consider, because almost every vendor hypes the heck out of it and pumps a lot of marketing malarkey into the marketplace. I take a very hot position internally and a cool, advised, recommended approach externally.

So, is AI really happening in the enterprise?

AI is happening, but in different ways. Some 40pc of our clients – large enterprises – are experimenting with branches of what I call ‘amazing innovation’: deep neural networks, machine learning, natural-language processing and so forth.

But that is experimentation; they are sending a few bright people off to play with and learn not just the strengths, but also the weaknesses, of all the major companies such as Amazon, Google, Microsoft, IBM and 40 others.

The second cut is how many are really doing something, rather than just learning to understand strengths and weaknesses and build skills. We think around 5pc of large enterprises are truly investing in AI today, specifically for customer interaction and support: improving quality of support, reducing cost of delivery and improving satisfaction. That is 5pc of a very large market. We believe that’s where the biggest activity is occurring.

And just 1.5pc of the market is really investing in serious projects in this area and intending to deploy the technology.

On the experimentation side, 40pc is a huge number, and I think people are upskilling their most capable people so they can make intelligent decisions.

Some see AI as a kind of magic, others believe it will steal their jobs. What do you believe?

There are three ways to define AI other than as artificial intelligence.

In some cases, AI stands for ‘amazing innovation’; it can mean ‘always impossible’ or ‘aged innovation’.

What we mean by always impossible is that there will always be some things we think technology can’t do. For example, around 1900, people said you couldn’t fly anything heavier than air; then, in 1903, the Wright brothers came along.

Then we get an amazing innovation, and that is technology that does what technology couldn’t do before. We are surprised, it is big headline stuff.

And then there’s aged innovation, which represents what used to be amazing innovation; we were shocked by it and now we are used to it. Just like smartphones today.

I use those three terms: always impossible, amazing innovation and aged innovation; each has a different role in enterprises. Marketers tend to scare or enthuse people by saying AI is just like humans; it thinks like a human, it understands like you and me. Wrong.

I’ll accept the statement that an amazing innovation can be made to appear like it thinks. It doesn’t think. It is not human. I have never met a cognitive technology and, in fact, some of the people I’ve met don’t have cognitive capabilities.

So we use all these metaphors such as consciousness and awareness. I see no evidence that we can do any of that.

I used to have a teacher who said that computers only do what humans tell them to do. Is that right?

What we have learned with deep learning or deep training is that because we force-feed information to it – it doesn’t learn by itself generally – we can now get it to make decisions, if we have the right analytical model.

AI is a revolution in analytical models, combined with high-horsepower compute capability, graphics processing units and big data. Put the three together and we no longer have to write millions of lines of code for a system to identify all of the risky things in the environment, such as a car moving along.

Instead, you build an analytical model, take 100,000 hours of video and feed it into the analytical model until it learns what to see and how to see it.

It is all math that is going on; it is not a mystical or magical occurrence. We can now replace a lot of coding with a lot of data because it is far easier to do than before.

All these machines do is what we teach them to do. We either write code or feed them with data into a model that crunches the data and characterises it.

Your teacher was absolutely right – only now, all of these machines are trained with deep neural networks.
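The shift Austin describes – replacing hand-written rules with a model trained on data – can be sketched in a few lines. The example below is purely illustrative: rather than coding the rule ‘risky if the reading exceeds 0.5’ directly, a tiny logistic model learns that threshold from labelled examples by gradient descent. The data, threshold and parameters are all hypothetical.

```python
import math
import random

# Hand-coding the rule would be: risky = reading > 0.5.
# Instead, we let a tiny model learn that threshold from labelled data.

random.seed(0)

# Synthetic "sensor readings" in [0, 1], labelled risky if above 0.5.
readings = [random.random() for _ in range(200)]
labels = [1 if x > 0.5 else 0 for x in readings]

w, b = 0.0, 0.0   # model parameters, learned rather than hand-written
lr = 1.0          # learning rate

def predict(x):
    """Logistic model: probability that the reading is 'risky'."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Gradient-descent training: the threshold is inferred from the data.
for _ in range(2000):
    for x, y in zip(readings, labels):
        p = predict(x)
        w -= lr * (p - y) * x
        b -= lr * (p - y)

# The learned decision boundary (-b / w) ends up near the true 0.5.
print(round(-b / w, 2))
```

The point of the sketch is Austin’s: nobody wrote the 0.5 threshold into the model; it emerged from crunching the data, which is ‘all math that is going on’.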

How are businesses going to make practical use of AI?

You probably think, ‘My smartphone is dumb’ – we get bored with its capabilities. We know what it does well and we know what it doesn’t do well.

People actually don’t consider what their smartphones do as AI. In fact, there is a lot of AI going on; for example, speech to text, text to speech, Siri and more.

Just look at Alexa in Amazon Echo. That’s an amazing example of an AI technology. A local chip in the machine recognises the wake word, then the device pumps the audio up to an Amazon server (where the serious speech-to-text work happens) and does what you tell it to do. ‘Alexa, Capital One, what’s my credit card balance?’ sets off a chain reaction of instructions. There’s a tremendous amount of AI going on.

There is a company in France called In-side that diagnoses and provides prognoses on ear problems. It collected 12,000 pictures of the external ear canal and, using an iPhone with a special snorkel, it identifies problems such as tinnitus.

It works with a company called Clarif.ai, which has created deep neural networks that process the images to help discriminate between the different disease states.

They now provide a very valuable service in sub-Saharan Africa. This is a simple example of how AI can be harnessed.

AI has always been in the popular imagination. But this time round, would you say it is finally tangible?

Yes. AI has gone from enthusiasm to fear and failure, and there have been several AI winters.

I concluded – based on what I was seeing around 2011 in Gartner and academic research – that they had finally figured out a formula that would be responsible for sustainable improvements in AI capabilities.

In 2012, I convinced management to allow me to spend half my time being an internal evangelist and external cynic around AI, and build a practice around it.

But this follows 65 years of failure in AI.

There was a big bang moment in 1958, when The New York Times wrote about the ‘perceptron’, a machine that would accurately perceive the environment around it. But that went bust.

In 1987, we had an AI winter around neural networks that tried to pick up where the perceptron left off, but that also failed and funding was withdrawn.

But a few persistent people stuck with it and around 2011, Google said, ‘We know how to solve this problem’. It harnessed data centres with 16,000 CPUs to feed 2m pictures of cats into a neural network. The computer responded to pictures of cats, not dogs, firing up the cells each time it recognised a cat picture.

This taught us that all we needed was more compute cycles.

All of a sudden, people wanted to train machines to learn anything, starting with perception.

Combine the neural network with a rules engine, and machines can make decisions for you. It’s not perfect, but I am satisfied that this time round, AI is going to stick.
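One hypothetical reading of ‘combine the neural network with a rules engine’ is sketched below: a trained classifier produces a risk score, and a small rules engine maps that score to a business action. The classifier here is a toy stand-in for a real network, and the transactions, thresholds and actions are all invented for illustration.

```python
# Hypothetical setup: a trained classifier scores each transaction's
# fraud risk; a small rules engine turns that score into an action.

def classifier_score(transaction):
    # Stand-in for a trained neural network: returns a risk score in [0, 1].
    # A toy heuristic purely for illustration.
    return min(1.0, transaction["amount"] / 10_000)

# Ordered rules: the first condition that matches decides the action.
RULES = [
    (lambda t, s: s > 0.9, "block"),
    (lambda t, s: s > 0.5 and t["foreign"], "review"),
    (lambda t, s: True, "approve"),   # default rule
]

def decide(transaction):
    score = classifier_score(transaction)
    for condition, action in RULES:
        if condition(transaction, score):
            return action

print(decide({"amount": 12_000, "foreign": False}))  # high risk -> "block"
print(decide({"amount": 6_000, "foreign": True}))    # mid risk  -> "review"
print(decide({"amount": 100, "foreign": False}))     # low risk  -> "approve"
```

The design point matches Austin’s caveat: the network supplies a score, but the decisions it triggers remain explicit, inspectable rules – ‘not perfect’, but auditable.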

What advice would you give an enterprise that wanted to employ AI in its business?

There are five things I would say.

First, the technology is turbulent and keeps changing. The self-driving model for cars in 2012 is not applicable in 2017. By 2022, it will be a new paradigm or set of technologies. Do not get married to one vendor just yet.

Second, all of the big application system providers such as Oracle, SAP, Microsoft and Salesforce are building big AI systems that will come out this year. Watch out: this will be a 100-car train with every boxcar full of objects moving at 150kph down the track. It could get messy.

Third, you need real experts on your staff. Hire some master’s graduates who have done theses on natural-language processing and deep learning, and send them on a mission to experiment with AI and the different platforms, and identify their strengths and weaknesses.

Fourth, it is too early to decide which platform to standardise on. Instead, play the field.

Fifth, get married in about five years.


John Kennedy is a journalist who served as editor of Silicon Republic for 17 years

editorial@siliconrepublic.com