Don’t believe the hype: Today’s artificial intelligence is really not that clever

14 Jun 2017


Michael Olaye, CEO of Dare. Image: Dare


Are we getting a little ahead of ourselves when we start describing the wonders – and dangers – of AI? Michael Olaye, CEO at Dare, believes that we are.

If you listen to most technology commentators at the moment, you’d think that the robot uprising has already started. It’s as if superintelligent machines have already gained self-awareness and are preparing to take over and slaughter humanity.

Everyone from Stephen Hawking to Elon Musk has something to say about the existential threat posed by the smart speaker sitting on your kitchen table, and its inevitable successors.

Here’s something you probably haven’t heard: True artificial intelligence (AI), also known as strong AI, does not exist – not at the moment anyway.

What we’ve been calling AI for the last few years is a low level version of AI, known as weak AI, which is non-sentient and focuses on narrow tasks.

The machine that beat Lee Sedol at Go was not true AI, and neither are Alexa, Siri, Cortana or Google Assistant, much as their respective developers would like to position them as such.

For the most part, these are deep learning applications modelled on the way neurons in our brains work, but they are a million miles away from true AI in its world-ending, superhuman form.

Immense technological challenges stand in our way before we come anywhere close to something that resembles true AI.

Navigating these challenges is a huge task, one that brings about a wide range of opportunities and obstacles, but we can only meet these if we let go of the hype and see what’s actually going on with our technological progress.

How clever is AI today?

When we put aside the hype and put things in perspective, the current state of play for AI is much simpler than we think.

Make no mistake, we have some incredibly valuable and powerful tools at our disposal. The most recent and important innovation is deep learning, which has solved a huge number of technological challenges that seemed insurmountable less than a decade ago.

Take computer vision, for instance. Let’s say you wanted to scan a set of images and separate out all the cat pictures. Previously, you would have had a huge task ahead of you.

Firstly, you would have to figure out a way of identifying shapes and outlines in the image, most likely using sudden changes in colour or lighting to see where the different objects start and end.

Then you would analyse the different shapes in the image. You would have to manually code your program to identify two ears, four legs, a tail, whiskers and a face, and try to figure out how confident you are that this outline resembles the shape you’re looking for.

It’s an enormous amount of work, manually programming for every different scenario. No matter how much work you do, it would still be highly inaccurate. A puppy that looked sufficiently cat-like would trigger a false positive, while a very slim cat could get mistaken for a mongoose.

And when, after all that, you discover that the cat trend is over and dogs are the new craze, you’ve got to start all over again.
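The brittleness described above can be sketched in a few lines. This is a toy illustration, not real computer vision code: the feature names and the threshold are hypothetical, standing in for hundreds of hand-written shape rules.

```python
# Toy sketch of the old rule-based approach: every visual cue is a
# hand-coded rule, and anything that matches enough rules counts as a cat.
# A sufficiently cat-like puppy passes the same checks -- a false positive.

def looks_like_cat(features):
    """Hand-written rules; `features` is a dict of detected shape cues."""
    score = 0
    if features.get("pointed_ears") == 2:
        score += 1
    if features.get("legs") == 4:
        score += 1
    if features.get("has_tail"):
        score += 1
    if features.get("has_whiskers"):
        score += 1
    return score >= 3  # arbitrary confidence threshold

cat = {"pointed_ears": 2, "legs": 4, "has_tail": True, "has_whiskers": True}
puppy = {"pointed_ears": 2, "legs": 4, "has_tail": True, "has_whiskers": False}
print(looks_like_cat(cat))    # True
print(looks_like_cat(puppy))  # True -- the puppy slips through
```

Note that handling dogs instead would mean rewriting every rule, which is exactly the maintenance problem the article describes.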

Deep learning versus AI

This was a huge problem because computers simply couldn’t ‘see’.

Images meant nothing to them, unless they were tagged with words that could be understood, or they were manually coded to try and find a shape.

Deep learning gives computers vision in a remarkably clever way: instead of coding everything manually, it lets software ‘learn’ by analysing thousands (or millions) of tagged examples.

By feeding in a huge database of cat pictures, as well as images that are not cats, you train the algorithm to accurately determine which is which.

The key development here is the use of a neural network. Put simply, deep learning splits the analysis across a set of neurons – individual units of code that look at a fragment of the input and react in a certain way.
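A single artificial neuron of the kind described above can be sketched very simply. This is an illustrative minimum, with made-up weights, not a real deep learning layer: the neuron weights each part of its input fragment and ‘activates’ when the weighted sum crosses a threshold.

```python
# Minimal sketch of one artificial "neuron": it looks at a fragment of the
# input (a list of numbers), weights each value, and fires when the weighted
# sum plus a bias exceeds zero. Deep learning stacks millions of these.

def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # simple step activation

# Hypothetical weights that make this neuron fire only when both input
# features (say, "furry" and "whiskered") are present.
print(neuron([1.0, 1.0], weights=[0.6, 0.6], bias=-1.0))  # 1 (fires)
print(neuron([1.0, 0.0], weights=[0.6, 0.6], bias=-1.0))  # 0 (stays silent)
```

Training, in this picture, is the process of nudging the weights and bias until the right neurons fire for the right inputs.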

Machine learning at the cutting edge of weak AI

As the software is trained, overall patterns start to emerge from the network. Which images trigger more neurons than others? Which neurons tend to activate together?

From this, it can start to make predictions about which image fits into a given category, growing in accuracy as the training continues.

With simple applications, this process happens in less than a second and, if you need to switch from cat pictures to dog pictures, it is simply a case of switching the training data.
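The point about switching training data can be shown with a deliberately trivial learner. This sketch uses a nearest-centroid classifier rather than a real neural network, and the feature vectors are invented, but the principle carries over: moving from cats to dogs means swapping the labelled examples, not rewriting the code.

```python
# A trivial "learner": average the feature vectors for each label (the
# centroid), then classify new inputs by whichever centroid is closest.

def train(examples):
    """examples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for vec, label in examples:
        s = sums.setdefault(label, [0.0] * len(vec))
        sums[label] = [a + b for a, b in zip(s, vec)]
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in s] for lbl, s in sums.items()}

def predict(model, vec):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda lbl: dist(model[lbl], vec))

cats = [([1.0, 0.9], "cat"), ([0.9, 1.0], "cat"), ([0.1, 0.2], "not_cat")]
model = train(cats)
print(predict(model, [0.95, 0.95]))  # cat

# Same code, new task: just feed it different training data.
dogs = [([0.8, 0.1], "dog"), ([0.9, 0.2], "dog"), ([0.1, 0.9], "not_dog")]
model = train(dogs)
print(predict(model, [0.85, 0.15]))  # dog
```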

Deep learning and machine learning (its parent category) sit at the cutting edge of weak AI because they don’t just work with images. Whether it is the voice recognition and natural language processing that Amazon’s Alexa uses, or AlphaGo’s strategy for beating a world Go champion, this approach has unlocked a whole world of possibilities by eliminating the practical barriers of manual coding that existed before.

What deep learning can’t do, but we hope true AI will

Deep learning is undoubtedly a great achievement by some incredibly clever people, but that doesn’t make the technology clever. In the world of AI vision, this is small game.

Despite their sophistication, deep learning neural networks are nothing compared to a real brain. Their method of learning is brute force: thousands of examples are crammed in, in the hope that the network finds an accurate enough pattern.

It can’t achieve the same level of accuracy as someone who understands concepts and can think through what something really means or really is.

This becomes a real problem with language, where the meaning cannot be pinned down and categorised as easily as with cat pictures.

AlphaGo would make a dog’s dinner of your shopping list

A quick conversation with one of the many AI assistants is likely to disappoint if you’re looking to do more than make simple requests, and it would be a stretch to call it a conversation.

AI today also suffers from the specialisation problem. AlphaGo may be the best Go player in the world, but it would make a dog’s dinner of your shopping list.

While it is great having an AI that is brilliantly efficient at a single task, it’s still not intelligence; it’s just a very good tool.

Then there’s a wide range of unsolved challenges that lie ahead. What do you do when you can’t get enough training data on a particular concept? How do you teach an AI inference and logical reasoning? How do you extract tone from a piece of text, or understand the social context in which a conversation is occurring? Could an AI ever learn how to learn?

In short, AI today is a good tool for specific purposes, but it is not yet intelligent.

What does the future hold?

There’s good reason to believe AI will never attain the levels of consciousness and the full capabilities of human beings, no matter how advanced technology gets.

There is a world of difference between simulating an activity and that activity physically happening. Human beings are composed of complicated chemical reactions and real life experiences that can never be replicated in full. Machines will always be at a disadvantage.

But that doesn’t mean that we don’t face challenges, both technical and ethical, as we get closer to true AI. And it’s important that we approach them with clarity, ensuring that AI makes the world a better place and isn’t used for malicious purposes.

Algorithms can be used to perpetuate discrimination, as we saw with facial recognition technology designed to identify criminals, and deep learning can be used for the technology behind drone strikes, in the same way it can be used for innocent cat pictures.

‘Let’s stop obsessing over the science fiction’

All the focus on the prospect of AIs turning evil, or humans mistreating ‘conscious’ AIs, is a distraction from the real problem: humans doing harm to other humans, using AI.

We need appropriate safeguards to ensure that technology isn’t misused and that deep learning applications don’t learn harmful behaviours. This means ensuring that existing laws are not circumvented, and that new laws are created where necessary.

It also means we approach technological innovation with our eyes open, taking off the rose-tinted glasses and looking at the genuine potential and risks of the technology. The less we obsess over the AI uprising, the more we can have a proper conversation about our future.

Let’s stop obsessing over the science fiction fantasy scenario and start thinking about what’s happening in the here and now. After all, it affects all of us.

By Michael Olaye

Michael Olaye, CEO of Dare and CTO at Oliver Group, leads the long-term technology vision of the Oliver Group agencies (Oliver, Dare, Adjust Your Set, Marketing Matters and Aylesworth Fleming) and is responsible for the group’s technical collaboration, industry thought leadership and advanced technology incubations.

For the past 18 years, Olaye has climbed the technology ladder as an engineer (developer), creative technologist, lead architect, technical director, and head of technology and innovation at companies including Xplorer, Hyperlink-Interactive, U-dox, Creative Partnership and Havas.