The AI future is now: 5 ways Google wowed at I/O 2018

9 May 2018


The main stage at Google I/O 2018. Image: Google


Google I/O was all about pushing the boat out on AI.

Google’s latest edition of its annual tech jamboree, I/O, saw the internet giant debut what has to be one of the most compelling visions for AI yet: a human-sounding technology called Duplex that can make calls and book appointments on your behalf.

It was one of a number of major reveals that show the innovation engine at Mountain View in Silicon Valley is operating at full throttle.

AI and machine learning were pretty much the order of the day as Google revealed new hardware and software as well as changes and features you can expect in Android P.

Here are some of the jaw-dropping announcements that Google made at I/O.

Duplex: The weirdly human voice of AI


Google CEO Sundar Pichai on stage at I/O 2018. Image: Google

Developed by engineers and product designers in Silicon Valley, New York and Tel Aviv, Duplex could be Google’s Siri-killer.

The company described Google Duplex as a new technology for conducting natural conversations to carry out real-world tasks over the phone.

CEO Sundar Pichai demonstrated to a wowed audience how the technology could book a hair appointment in what seemed like a natural, human voice, complete with ‘umm’ and ‘mm hmm’ utterances.

“The technology is directed towards completing specific tasks, such as scheduling certain types of appointments,” explained principal engineer Yaniv Leviathan and engineering VP Yossi Matias.

“For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.”

They added that the Google Duplex technology is built to sound natural, in order to make the conversation experience comfortable.

“It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.”

At the core of Duplex is a recurrent neural network (RNN) designed to cope with the challenges of natural conversation: understanding what is said, interacting naturally, getting the timing right and speaking in a lifelike way.

“To obtain its high precision, we trained Duplex’s RNN on a corpus of anonymised phone conversation data. The network uses the output of Google’s automatic speech recognition (ASR) technology, as well as features from the audio, the history of the conversation, the parameters of the conversation (eg the desired service for an appointment or the current time of day) and more. We trained our understanding model separately for each task, but leveraged the shared corpus across tasks. Finally, we used hyperparameter optimisation from TFX to further improve the model.”
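As a loose illustration of the inputs Leviathan and Matias describe, here is a toy recurrent step in Python that concatenates per-turn ASR, audio, conversation-history and task-parameter features into one hidden-state update. All dimensions and weights below are invented for the sketch; the real Duplex model's architecture and sizes are not public.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature sizes -- invented for illustration only.
ASR_DIM, AUDIO_DIM, HIST_DIM, PARAM_DIM, HIDDEN = 8, 4, 6, 3, 16
IN_DIM = ASR_DIM + AUDIO_DIM + HIST_DIM + PARAM_DIM

# Randomly initialised weights stand in for trained parameters.
W_in = rng.standard_normal((HIDDEN, IN_DIM)) * 0.1
W_h = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
b = np.zeros(HIDDEN)

def rnn_step(hidden, asr, audio, history, params):
    """One recurrent step: concatenate the per-turn inputs and update the state."""
    x = np.concatenate([asr, audio, history, params])
    return np.tanh(W_in @ x + W_h @ hidden + b)

# Run a short, made-up conversation of three turns.
h = np.zeros(HIDDEN)
for _ in range(3):
    h = rnn_step(h,
                 rng.standard_normal(ASR_DIM),    # ASR output features
                 rng.standard_normal(AUDIO_DIM),  # raw-audio features
                 rng.standard_normal(HIST_DIM),   # conversation history
                 rng.standard_normal(PARAM_DIM))  # task parameters (service, time of day)

print(h.shape)  # the hidden state carries conversation context between turns
```

The point of the sketch is only the fan-in: each turn's decision depends not just on the latest utterance but on everything the model has heard so far, plus the fixed parameters of the task.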

The way this is heading, we won’t know if the person on the other end of the line is real or a machine.

Google Photos adds AI-suggested actions


Image: Google

The next version of the Google Photos app will suggest quick fixes and other tweaks, such as brightness and colour, while you are viewing the photos, using the power of AI.

Nan Wang, a software engineer at Google Photos, explained that people look at 5bn pictures in the app every day but they want to do more than just view them.

“Today, you’ll start to see a range of suggested actions show up on your photos right as you’re viewing them, such as the option to brighten, share, rotate or archive a picture,” Wang said.

“These suggested actions are powered by machine learning, which means you only see them on relevant photos. You can easily tap the suggestions to complete the action.”
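A rough way to picture "suggested actions shown only on relevant photos" is a function that maps photo attributes to a list of actions. The rules and thresholds below are invented stand-ins for what is, per Wang, a learned model:

```python
def suggested_actions(photo: dict) -> list:
    """Map photo attributes to suggested actions (toy rules, not Google's model)."""
    actions = []
    if photo.get("brightness", 1.0) < 0.4:     # dark shot -> offer to brighten
        actions.append("brighten")
    if photo.get("rotation_degrees", 0) != 0:  # sideways shot -> offer to rotate
        actions.append("rotate")
    if photo.get("is_document"):               # receipt/document -> offer to archive
        actions.append("archive")
    if photo.get("faces_of_contacts"):         # friends in shot -> offer to share
        actions.append("share")
    return actions

print(suggested_actions({"brightness": 0.2, "faces_of_contacts": ["Ana"]}))
# -> ['brighten', 'share']
```

The real feature replaces these hand-written rules with machine learning, which is why irrelevant photos show no suggestions at all.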

Wang said that the company is also working on the ability for users to change black-and-white shots into colour with just a tap.

New chip for machine learning

At I/O, Google revealed that it is rolling out its third generation of silicon technology in the form of the Tensor Processing Unit (TPU) 3.0.

The TPU is Google’s custom application-specific processor, designed to accelerate machine learning and model training. The TensorFlow workloads it powers are used by researchers, developers and businesses to drive applications in big data and machine learning.

Pichai said that the TPU 3.0 pod, consisting of several TPUs grouped together, will be eight times more powerful than last year’s version, adding that it can handle up to 100 petaflops.

“These chips are so powerful that, for the first time, we had to introduce liquid cooling to our data centres,” Pichai said.

Smart Compose feature for Gmail


GIF: Google

Gmail already has some nifty machine-learning features. However, at I/O, Google followed up last month’s major revamp of Gmail with a new feature called Smart Compose, which suggests phrases as you type and lets users autocomplete them by hitting Tab.

“Smart Compose helps save you time by cutting back on repetitive writing, while reducing the chance of spelling and grammatical errors. It can even suggest relevant contextual phrases,” explained product manager Paul Lambert.

“For example, if it’s Friday, it may suggest ‘Have a great weekend!’ as a closing phrase.

“Over the next few weeks, Smart Compose will appear in the new Gmail for consumers, and will be made available for G Suite customers in the workplace in the coming months,” Lambert added.
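The Tab-to-complete interaction can be sketched as a prefix match against candidate phrases. In the real product the candidates come from a language model conditioned on context (such as the day of the week, per Lambert's example); the hard-coded phrase list here is purely illustrative:

```python
from typing import Optional

# Toy phrase table; Smart Compose uses a learned model, not a lookup.
PHRASES = [
    "Have a great weekend!",
    "Have a great day!",
    "Thanks for your help.",
]

def suggest(typed: str) -> Optional[str]:
    """Return the remainder of the first phrase that starts with what was typed."""
    for phrase in PHRASES:
        if phrase.lower().startswith(typed.lower()) and phrase.lower() != typed.lower():
            return phrase[len(typed):]
    return None

print(suggest("Have a gr"))  # -> "eat weekend!" -- hit Tab to accept
```

Pressing Tab simply appends the returned remainder to what the user has typed.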

Expect more gestures in Android P


GIF: Google

A new update to Android P could lend itself to a kind of digital detox, helping users spend less time on their phones.

Android P, which comes out later this year, will come with a new dashboard that tells you how often and for how long you are using your smartphone. It will allow users to set limits, such as a half hour of Facebook or Instagram each day. Once you’ve reached your limit, the app icon will change colour.

All of this is part of a focus on digital wellbeing.
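The limit-and-grey-icon behaviour described above boils down to comparing accumulated usage against a daily cap. A minimal sketch, assuming per-app limits in minutes (Android's real implementation lives in the Digital Wellbeing system service, not in app code):

```python
from dataclasses import dataclass, field

@dataclass
class AppTimer:
    """Tracks per-app usage against a daily limit (illustrative only)."""
    limits: dict                                  # app name -> daily limit in minutes
    used: dict = field(default_factory=dict)      # app name -> minutes used today

    def record(self, app: str, minutes: int) -> None:
        """Add a session's duration to today's running total for the app."""
        self.used[app] = self.used.get(app, 0) + minutes

    def icon_greyed(self, app: str) -> bool:
        """True once the app has reached its daily limit."""
        return self.used.get(app, 0) >= self.limits.get(app, float("inf"))

timer = AppTimer(limits={"Instagram": 30})
timer.record("Instagram", 20)
print(timer.icon_greyed("Instagram"))  # False: still under the 30-minute limit
timer.record("Instagram", 15)
print(timer.icon_greyed("Instagram"))  # True: limit reached, icon changes colour
```

Apps without a configured limit never trigger the colour change.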

Another core facet of Android P is, again, the use of machine learning and AI. Google has partnered with DeepMind to build Adaptive Battery, which prioritises battery power for the apps and services you use the most. Adaptive Brightness learns how you like to set the brightness, given your surroundings.
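To make "learns how you like to set the brightness, given your surroundings" concrete, here is a toy version that remembers the user's chosen level per ambient-light bucket and replays the average. The real feature uses on-device machine learning; this running average, and the lux bucketing, are invented for illustration:

```python
class AdaptiveBrightness:
    """Toy model: learn a preferred screen level per ambient-light bucket."""

    def __init__(self):
        self.samples = {}  # ambient-light bucket -> list of user-chosen levels

    def bucket(self, lux: float) -> int:
        """Coarse ambient-light bucket (100-lux bins, an arbitrary choice)."""
        return int(lux // 100)

    def observe(self, lux: float, chosen_level: float) -> None:
        """Record the level the user manually set under this lighting."""
        self.samples.setdefault(self.bucket(lux), []).append(chosen_level)

    def predict(self, lux: float, default: float = 0.5) -> float:
        """Suggest a level: the average of past choices, or a default."""
        levels = self.samples.get(self.bucket(lux))
        return sum(levels) / len(levels) if levels else default

ab = AdaptiveBrightness()
ab.observe(lux=50, chosen_level=0.2)  # user dims the screen in a dark room
ab.observe(lux=50, chosen_level=0.3)
print(ab.predict(50))                 # -> 0.25, learned from the two adjustments
```

Each manual adjustment the user makes refines the prediction for similar lighting conditions.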

Another feature, called Slices, allows you to surface parts of the specific apps that you use the most.

“With Android P, we put a special emphasis on simplicity,” explained product management VP Sameer Samat.

“The look and feel of Android is more approachable with a brand new system navigation. In Android P, we’re extending gestures to enable navigation right from your home screen. This is especially helpful as phones grow taller and it’s more difficult to get things done on your phone with one hand.

“With a single, clean home button, you can swipe up to see a newly designed Overview – the spot where, at a glance, you have full-screen previews of your recently used apps. Simply tap to jump back into one of them. If you find yourself constantly switching between apps, we’ve got good news for you: Smart Text Selection (which recognises the meaning of the text you’re selecting and suggests relevant actions) now works in Overview, making it easier to perform the action you want.”

Editor John Kennedy is an award-winning technology journalist.

editorial@siliconrepublic.com