Google is working on ethical guidelines in the wake of military AI furore

31 May 2018


A Google sign at its Mountain View headquarters in California. Image: JHVEPhoto/Shutterstock


Google is grappling with employee unrest stemming from its Project Maven contract.

Disquiet around Google’s involvement with the US Pentagon programme Project Maven has been building for some months now.

Early in April, a petition from employees at the company emerged, imploring CEO Sundar Pichai to withdraw Google from the endeavour. Thousands of staff signed the petition, which stated: “Google should not be in the business of war.”

The company culture at Google permits open discussion of management decisions, enabling staff to voice their discomfort and ethical concerns around the firm’s part in Project Maven. Chagrined staff were also reported to have left Google over the issue.

Ethical rules

On Wednesday (30 May), The New York Times reported that Google is working on a set of guidelines aimed at managing decisions relating to defence and intelligence contracts.

Pichai spoke to employees last week and said that the company wanted to develop principles that “stood the test of time”. Google told The New York Times that these guidelines would preclude the use of AI in weaponry projects, but it is still unclear how this principle would apply in practice.

Google in a unique predicament

The debate around the project centres on Google’s corporate image – its unofficial slogan was ‘Don’t Be Evil’ for years – and, although some say AI could help reduce civilian casualties from drone strikes, others believe the company should not be engaging with the military at all on principle.

The petition signed by employees addressed this: “The argument that other firms, like Microsoft and Amazon, are also participating doesn’t make this any less risky for Google. Google’s unique history, its motto ‘Don’t Be Evil’ and its direct reach into the lives of billions of users set it apart.”

Fei-Fei Li, chief scientist at Google Cloud, told colleagues in emails that they should “avoid at ALL COSTS any mention or implication of AI” when announcing the Project Maven contract. “Weaponised AI is probably one of the most sensitised topics of AI – if not THE most. This is red meat to the media to find all ways to damage Google.”

Unrest will likely continue

While the guidelines have yet to take shape, there will likely be continued unrest around Google’s Pentagon contract. Even if the guidelines directly ban projects related to weaponised AI, there are concerns that ‘non-offensive’ involvement could still enable offensive actions such as drone strikes.

Google is not the only tech firm working with the US military. Amazon works closely with the US defence department, which uses its AWS Secret Region service, and its Rekognition machine-vision system is marketed to the defence sector. Meanwhile, Microsoft’s Azure Government cloud computing platform is rated for classified work in the US and provides cloud services to the UK Ministry of Defence.


Ellen Tannam is a writer covering all manner of business and tech subjects

editorial@siliconrepublic.com