Meredith Whittaker: ‘The tech industry at large has a culture of retaliation’

20 Sep 2019

Still from ‘Meredith Whittaker: We need to regulate, organize and have a counterweight against the tech industry’. Image: Ada - Heute das Morgen verstehen/YouTube

Meredith Whittaker helped lead a mass walkout of Google employees over the company’s handling of sexual harassment claims, and now she’s turning her focus to harmful AI.

Google once had a famous unofficial motto: ‘Don’t be evil.’ Then, to the surprise of many, the phrase was scaled back to a single reference in the code of conduct distributed to employees as of 2018.

One employee who would have noticed this apparent change in outlook at the time was Meredith Whittaker. Having joined the search giant during its meteoric rise in 2006, she went on to found the company’s Open Research Group – which used research to tackle global problems – and served as a Google Cloud programme manager.

She also went on to co-found the AI Now Institute with Microsoft Research principal researcher Kate Crawford in 2017, in order to produce “interdisciplinary research on the social implications of AI and act as a hub for the emerging field focused on these issues”.

However, while Whittaker was still at Google in 2018, her work at the AI Now Institute and her belief in a fair and equal workplace were brought to the forefront. A New York Times piece made the startling revelation that Google’s former head of Android, Andy Rubin, had received a $90m payout following allegations of sexual harassment. Likewise, another Google executive, Amit Singhal, was reportedly paid $45m in 2016 after being accused of groping an employee at a party.

But those who had accused the men of wrongdoing were required to go through arbitration, angering Whittaker and 20,000 other Google employees across the world, who downed tools and took part in one of the company’s first public walkouts.

Anger towards the company was already verging on revolt among thousands of Google employees over its involvement in Project Maven, a US government effort to develop AI for use in warfare – a contract Google has since cancelled after a number of employees resigned and many more signed a petition against the deal.

Certain people seen as ‘acceptable collateral damage’

“The Andy Rubin story was one story that really exposed, in no uncertain terms, the corruption at the heart of the company’s leadership,” Whittaker said in conversation with Siliconrepublic.com.

“[It was clear] the way in which some people’s lives were deemed as extremely valuable and their wellbeing and their enrichment was top priority for the company, while other people’s lives, careers and opportunities were seen as acceptable collateral damage.”

Following a “river of stories” documenting alleged abuse, Whittaker said she began to question how deeply this toxic culture was “built into the company’s DNA”, and how many people of colour and contractors had also experienced abuse and exploitation.

‘Google and the tech industry at large has a culture of retaliation that makes it very difficult for people to speak up’
– MEREDITH WHITTAKER

In the months that followed the Rubin scandal, Whittaker held the rare position of being one of her employer’s most outspoken critics, a tension that came to a head with the announcement of the members of Google’s AI ethics board in April of this year.

Among those named was Heritage Foundation president Kay Cole James, whom Whittaker and others accused of being anti-LGBTQ over her organisation’s opposition to the US Equality Act. The resulting uproar – much like the fallout over the Project Maven deal – saw the board scrapped for the time being.

In June, her walkout co-organiser and colleague Claire Stapleton announced she was leaving the company, and just a month later Whittaker decided that enough was enough and followed suit. In a blog post publishing the letter she shared internally with her soon-to-be former colleagues, she revealed she would be focusing full-time on her work with the AI Now Institute.

A culture of retaliation

In the months prior, Whittaker claimed to have faced retaliation – in the form of exclusion and threats of demotion – from Google executives for her role in the walkouts, and she added that she was not the only one.

“I think it was unfair and corrupt to retaliate against people who bring to light ethical transgressions – who ultimately bring to light the power that any company that wants to live by a set of core values should be grateful to have an opportunity to remedy – and that’s not what happened here,” she said.

“I’m not the only case. Google and the tech industry at large has a culture of retaliation that makes it very difficult for people to speak up.”

However, she believes that such high-profile protests have lit a fire under the highest echelons of Silicon Valley, forcing these companies to address a problem that has remained in place for decades – or, at least, inspired those working in the major tech companies not to take it lying down.

“I think the fight is not won and it’s unclear that you’ll ever win a fight like that once and for all,” Whittaker said.

“I think what you do is you build enough power among the workers – among the people who are at risk from these practices – so they can consistently and persistently push back when this behaviour emerges.”

A matter of public interest

So, will Silicon Valley ever change? Not really, if the tech world’s structure of late-stage capitalism driven by shareholder demands remains in place – a structure Whittaker said is “literally reshaping our geopolitical landscape, social institutions, our access to resources and opportunities”.

“The interests of these companies are not always a public interest,” she said.

“It’s going to be really important that we get [Big Tech] regulation in place so we can have much more robust public oversight and we begin to develop mechanisms where we can reject the use of these technologies and put public welfare above the profits of these large multinational companies.”

One of these technologies remains the focus of her current work at the AI Now Institute: facial recognition. While not the institute’s only focus, headlines about Amazon’s Rekognition technology claiming to detect fear, and about algorithmically biased AI failing to read the faces of people of colour, have brought it to the world’s attention.

Now, Whittaker and the institute are working to better understand these technologies before they are released into the wild simply to please shareholders.

“Our focus is on how are these technologies being deployed in high-stakes domains … and how do we ultimately tell a different story about AI?” she said.

“One that is not written by the marketing departments of these large corporations, but centres on the perspective of the people whose lives are being shaped and reshaped by these systems.”


Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com