Google and Movidius chip vision will make every machine see

27 Jan 2016

Google will use Movidius's chips to make devices that can understand their environment. In turn, Google will help the Dublin tech firm develop its neural technology roadmap

Technology giant Google has signed a deal with Dublin tech company Movidius that will see the Irish company’s MA2450 chip feature in forthcoming personal devices, like smartphones, that will be contextually aware.

The deal will see Google use Movidius processors alongside the entire Movidius software environment to run machine intelligence locally on devices.

This means smartphones will be able to perform advanced, complex tasks, such as understanding images and audio with remarkable accuracy, without being connected to the internet.

By marrying sophisticated software algorithms to a powerful, purpose-built Vision Processing Unit (VPU), Movidius brings new levels of intelligence to smart devices and enables a new wave of intelligent and contextually aware devices, including drones and AR/VR devices.

‘By working with Movidius, we’re able to expand this technology beyond the data centre and out into the real world, giving people the benefits of machine intelligence on their personal devices’
– BLAISE AGÜERA Y ARCAS, GOOGLE

It is understood the deal will see Google place a volume order for MA2450 chips and software. In turn, Google will contribute to Movidius’ neural network technology roadmap.

“What Google has been able to achieve with neural networks is providing us with the building blocks for machine intelligence, laying the groundwork for the next decade of how technology will enhance the way people interact with the world,” said Blaise Agüera y Arcas, head of Google’s machine intelligence group in Seattle.

“By working with Movidius, we’re able to expand this technology beyond the data centre and out into the real world, giving people the benefits of machine intelligence on their personal devices.”

Movidius’ chip technology has been at the centre of developments by Google to create Project Tango smartphones that can sense their immediate environment.

For example, a Project Tango smartphone with a Movidius chip will be able to use its sensors to map the full dimensions of a room, effectively enabling the device to see its surroundings.

A vision for the future of machines

Google will utilise Movidius’ MA2450, which is the only commercial solution on the market today with the performance and power-efficiency to perform complex neural network computations in ultra-compact form factors.

The MA2450 is the most powerful iteration of the Myriad 2 family of vision processors, providing a series of improvements over the first-generation Myriad 2 VPU announced last year, the MA2100.

“The technological advances Google has made in machine intelligence and neural networks are astounding,” explained Remi El-Ouazzane, CEO of Movidius.

“The challenge in embedding this technology into consumer devices boils down to the need for extreme power efficiency, and this is where a deep synthesis between the underlying hardware architecture and the neural compute comes in.

“Movidius’ mission is to bring visual intelligence to devices so that they can understand the world in a more natural way. This partnership with Google will allow us to accelerate that vision in a tangible way.”

Movidius, which was founded 10 years ago by David Moloney and Sean Mitchell, recently raised €38m in a move that will enable it to generate 100 new jobs in Dublin.

In the last two years, Movidius has established offices in Silicon Valley, continued to scale its R&D team, appointed new members to its technical advisory board, collaborated with new customers and partners, and launched the next generation of its vision processor for mobile and connected devices.

The company now has offices in Silicon Valley, Ireland, Romania and China.

Mitchell, who is chief operations officer at Movidius, explained: “We are envisioning a world with a range of different types of devices that will be able to see and think without being constantly connected to the cloud. What we are after is raising the level of what you consider intelligence in these devices, enabling machines to make autonomous decisions.

“This will result in machines that will be able to perceive their environment, extract the data to understand this and make decisions.

“Our goal is to be the de facto platform to bring visual intelligence across a range of markets.”


John Kennedy is a journalist who served as editor of Silicon Republic for 17 years

editorial@siliconrepublic.com