Robot whisperers can save manufacturing in the West, says ex-Google VP

29 Jun 2017

Robot and human working together. Image: Bluskystudio/Shutterstock

Few people are as invested in AI as Andrew Moore, who fears we’ll all be too busy playing Candy Crush during the robot takeover.

Ever since the Terminator film franchise popularised the idea of a robot takeover in the 1980s, the general public has been somewhat fearful and sceptical of the mass adoption of artificial intelligence (AI).

The only problem is that AI is already here, and it is set to expand, with serious research now showing that it will influence nearly every aspect of our daily lives.

Even a simple action such as viewing a webpage involves some degree of AI, while devices such as the Amazon Echo or Google Home constantly listen to and learn about us in our own homes.

Many hold the opinion that companies such as Google and Facebook are working overtime on the latest AI and machine-learning technologies not to usher in a better future, but for their own financial gain, in collaboration with marketers and governments.

This is not entirely inaccurate, as Facebook and Google openly discuss how their algorithms constantly try to tell you what they think you should be looking at, including adverts.

But does this mean that these companies are conspiring to take over the world through AI, laughing maniacally like a James Bond villain?

Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University. Image: CMU

Silicon Valley isn’t evil

“I have never met a private corporation that was actually planning on doing evil or scary things.”

Those are the words of Andrew Moore, the dean of the School of Computer Science at Carnegie Mellon University (CMU), who knows more than most about the mindset inside the major tech giants.

From 2006 to 2014, Moore rose through the ranks at Google, eventually being named its vice-president of engineering and overseeing the organisation’s biggest money-spinner: e-commerce, powered by the suggestive algorithms behind its search results.

Speaking with Siliconrepublic.com, he added the caveat that companies such as Google can be misleading, if not a little disingenuous, when it comes to branding their AI.

“In my opinion, some of these companies can come across as naïve in describing their goals as being purely for humanity, without mentioning they absolutely need advertising revenue to continue to provide their services.”

He added that the real issue is these companies pumping billions of dollars into spoken dialogue systems to make the likes of Alexa, Siri and Google Assistant better at understanding your questions.

In the near future, products such as the Amazon Echo could be a ‘Google killer’, as more people move to voice search instead of typing out queries manually, but is this what we need as a society?

“The kind of questions people want answered are not necessarily the ones that make the most money through ads,” Moore said, “but what many of us in the academic world – and many in the search engine companies – want to see is people being able to ask questions and get high-quality answers on things like healthcare, emergency services or government regulations.”

‘Embarrassed and worried’ about AI ethics problems

One area he agrees could do with some regulation is the ethical side of widespread AI.

As we documented earlier this year, who – or what – is legally responsible for any harm brought to a person by AI is still a very murky area of law, with debate raging over whether it is possible to actually create ethical AI.

While supranational organisations such as the EU are formulating laws that would, in effect, give robots and AI basic rights, engineers are still creating algorithms that can’t read the face of a person of colour because their designers simply didn’t think about it.

In such a new field, no one appears to be sure about what to do, Moore said, and, unfortunately, he has yet to see anyone offer any real solution – a situation he described as leaving him “embarrassed and worried” about where AI is going.

Tesla accidentally creating an ‘AI winter’

Equally worrying, he added, is the attitude being taken by autonomous vehicle companies – Tesla in particular – that are pushing hard towards getting an artificially intelligent car on the road ahead of their competitors.

“I feel it is actually irresponsible for us to use the same mentality that I used when I was working with Google – where you throw something really cool out, see if it works, gather the data and improve it. You can’t do that with safety-critical systems.”

So far, Tesla has stood firm over news of accidents involving its autonomous vehicles, arguing that they have a much better safety record than the typical human driver.

This may be true, Moore argued, but if an enormous human tragedy were to occur due to a poorly tested vehicle on real roads, Tesla’s public goal of networks of autonomous cars saving lives could be jeopardised in the long term.

“I’m worried that a major AI winter would occur, with the potential for AI to save millions of lives through vastly safer infrastructure and healthcare suddenly getting pushed back a decade because there’s a big turn in public opinion when one of these disasters happens.”

Illustration of Tesla’s Autopilot sensors at work. Image: Tesla

Rethinking our priorities

It isn’t just software that we might have to worry about – in the event of a ‘robot takeover’, Moore believes that an unregulated world will ultimately be “disastrous”.

He said the truth is that manufacturing jobs are going to be lost to robotics, but their loss does not mean mass unemployment. Rather, we should focus our efforts on areas where human workers are in short supply, such as education and elderly care.

“The thing we can afford to do as a society is, if there are more profits than jobs created in retail or unskilled manufacturing [through automation], we can use that to spend on some professions that are absolutely not going to disappear.

“Imagine you had an education system with one teacher for every five kids and there was really in-depth, well-trained community policing?”

He continued: “We can spend our money doing this, or we can spend it on giving the ‘technology 1pc’ their own private islands while the rest of the world has a dreary life playing Candy Crush.”

The West needs to hire the ‘robot whisperers’

He went on to raise the interesting possibility that, even with the mass automation of manufacturing, nations such as the US and others in the West could remove up to 30 jobs from countries such as China, Vietnam or the Philippines, and create one job at home in their place.

To me, this sounded as if he was suggesting a conflict is brewing – not in a military sense, but rather a battle between eastern and western states for control of future commerce.

“I think we’re beyond that point,” he explained. “In fact, I wouldn’t paint it as east versus west, but as rich versus poor.”

He cited the example of China’s coastal regions – once the focus of much of the country’s manufacturing – from which factories are now moving inland to cheaper locations, as coastal costs rise with greater wage demands and increased wealth.

“Being a ‘robot whisperer’ and setting up the production plants for automated small batch production – this will happen and will locally create more jobs in manufacturing in the high-wage countries.”

US is ‘sucking’ at cybersecurity

With the likes of WannaCry and Petya/GoldenEye testing cybersecurity researchers to their limits, AI is now seen by some as the not-so-secret weapon for tackling the sheer quantity of cyberattacks that happen on a daily basis.

While major corporations such as IBM are already using AI to predict and fight hundreds of thousands of cyberattacks each day, governments are also pumping billions of dollars into AI research for cybersecurity.

For example, the US military’s advanced research division, DARPA, held the Cyber Grand Challenge last year, which pitted the country’s most advanced intelligent algorithms against one another to hunt for vulnerabilities in each other’s systems. The aim is to create AI capable of securing a future where billions of devices are connected as part of the internet of things – something that has already proven incredibly vulnerable, as the Mirai botnet meltdown of 2016 showed.
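At their simplest, such systems automate techniques like fuzzing: bombarding a program with generated inputs and watching for crashes. The Python sketch below is purely illustrative – a toy with a deliberately contrived target, not the competitors’ actual approach, which combined far more sophisticated fuzzing with symbolic execution.

import random
import string

def fragile_parser(data: str) -> None:
    """A deliberately buggy stand-in target: crashes on one magic input."""
    if data == "BUG":
        raise ValueError("parser crashed")

def fuzz(target, trials: int = 100_000) -> list[str]:
    """Throw random inputs at the target and record any that crash it."""
    crashes = []
    for _ in range(trials):
        candidate = "".join(random.choices(string.ascii_uppercase, k=3))
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)  # a crashing input is a lead on a bug
    return crashes

print(set(fuzz(fragile_parser)))  # very likely prints {'BUG'}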

Moore had a horse in the DARPA race: the winner of the event was CMU spin-out ForAllSecure, led by David Brumley, who is also the director of the university’s CyLab Security and Privacy Institute.

“[Brumley’s] team won this quite handily and it was the most boring competition you could ever see,” Moore joked, saying that people just stood around watching computer screens and racks of servers.

“But behind the scenes, it was amazing, as there was some game theory going on similar to playing poker where, even if you have an exploit, [the AI] might pretend it doesn’t because it’s better to bluff that you’re vulnerable, as it might be better to use it at a later point.”
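To make the poker analogy concrete, the choice can be framed as a simple expected-value comparison. The numbers and probabilities in this Python sketch are entirely hypothetical – a toy model of the trade-off, not the logic of any Cyber Grand Challenge system.

def expected_value(points_now: float, p_detected_if_used: float,
                   p_still_works_later: float, points_later: float) -> dict:
    """Compare firing an exploit now against bluffing and saving it."""
    # Using it now scores points, but reveals the exploit so the
    # opponent can patch; detection also erodes the gain.
    use_now = points_now * (1 - p_detected_if_used)
    # Holding it scores nothing yet, but keeps the option of a
    # higher-value strike later, if the hole is still unpatched.
    hold = points_later * p_still_works_later
    return {"use_now": use_now, "hold": hold}

# With a big enough late-game payoff, bluffing dominates:
print(expected_value(points_now=10, p_detected_if_used=0.3,
                     p_still_works_later=0.6, points_later=25))
# {'use_now': 7.0, 'hold': 15.0} -> better to hold and bluff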

For Moore, based in the US, this success is a welcome relief as, in his own words: “The US is sucking right now [in cybersecurity]. We have much smaller armies of both offensive and defensive cybersecurity people and we’re way behind China and Russia.”

One thing is for certain: these armies are only likely to increase in size in the decades to come.

Colm Gorey was a senior journalist with Silicon Republic