Apple’s new acqui-hire shows it’s still gearing up for autonomous driving

28 Jun 2019


News of Apple’s acquisition of Drive.ai, plus the development of ‘deepnudes’ and a new biometric detectable from a distance.

Apple confirmed its acquisition of Drive.ai, an autonomous driving start-up, on Tuesday (25 June).

For an undisclosed sum, Apple has gained Drive.ai’s existing autonomous vehicles and other assets, along with a number of Drive.ai staff, chiefly in engineering and product design, according to sources speaking to Axios.

The move has been seen as a sure sign that Apple’s ambition to enter the autonomous vehicles market has not subsided, though it has been widely documented that this is one area where the tech giant has significant catching up to do.

Drive.ai was valued at $200m in 2017 and had raised $77m in venture capital. Best known for its automated shuttle pilots in the Texan cities of Arlington and Frisco, the company was reportedly in discussions with a number of potential buyers. While some staff members have joined Apple’s operation, the San Francisco Chronicle reported that the company had already filed plans earlier this month to wind down the business, with the loss of 90 jobs.

From deepfakes to ‘deepnudes’

The rising panic over deepfakes spreading online has hardly peaked, and already the issue has spawned an ugly sister: ‘deepnudes’.

The $50 DeepNude app reportedly created a fake nude image of any woman from a photograph in seconds using artificial intelligence (AI). Created by an anonymous programmer going by ‘Alberto’, the app used generative adversarial networks (GANs) trained on thousands of images of naked women – which can be found in abundance online.

Following a report by Vice, the app’s server crashed and the creator later took the app offline, citing the backlash and a rethink of the kind of technology he wanted to put out into the world.

Did you train robots with the Mannequin Challenge?

Speaking of using readily available online material to train AI, a team at Google AI made use of the viral Mannequin Challenge to train a neural network in depth perception.

The 2016 video trend involved people frozen still in a tableau as the camera wove through it. We humans can easily interpret this 2D footage as a three-dimensional scene, but machines are still learning this skill.

Around 2,000 Mannequin Challenge videos were used to train the neural net, which became able to predict the depth of moving objects with improved accuracy.

The results of this training could help develop robots that can better navigate complex environments. But there are ethical questions around the practice of treating any online material as fair game for algorithmic training and application, and around the repercussions of that attitude (see above).

There’s a new biometric in town

The US Department of Defense has a new device that can identify people through a unique cardiac signature detected from a distance. The prototype device uses an infrared laser to pick up a heartbeat from up to 200 metres away. It has been developed on the back of technology used to detect vibrations in structures such as wind turbines.

As biometrics detectable from a distance go, your cardiac rhythm is likely more distinctive than your gait and tougher to disguise than your face, but the technology is limited by the lack of an extensive database of examples. However, it could be effective in confirming the identity of someone whose heartbeat had already been logged.


Elaine Burke is the host of For Tech’s Sake, a co-production from Silicon Republic and The HeadStuff Podcast Network. She was previously the editor of Silicon Republic.
