Margaret Mitchell has taken up an opportunity to build ethics into an AI company from the ground up.
A researcher fired from Google during a tumultuous period for the company’s AI division has announced her next move in business.
Margaret Mitchell will join Hugging Face, a start-up setting out to become the definitive reference point for natural language processing (NLP) technologies. According to Bloomberg, Mitchell will take up her new role in October and will be working on a set of tools to ensure that datasets used to train AI models aren’t biased.
Hugging Face maintains an open-source library of pre-trained NLP models that can be deployed by users with minimal machine learning experience. Its platform is used by more than 5,000 organisations including Google, Facebook, Amazon and Microsoft and paying clients include Intel, Qualcomm and Bloomberg.
The company raised $40m in a Series B round announced in March. This followed a $15m round raised in late 2019.
Mitchell described the move as “exactly where I should be to move AI forward from its very foundations” in a tweet. “This is a really cool opportunity for me to help model builders better understand the harms and risks in the kind of models they are building,” she told Bloomberg reporter Dina Bass.
Mitchell also said she was eager to work at a company where AI ethics were being considered from the ground up. She previously led Google’s ethical AI group along with Dr Timnit Gebru, and both contributed to a paper that caused major controversy at the company.
Unrest at Google AI
The paper, co-written by Gebru, Mitchell and others, raised issues with training language models on data gleaned from broad swathes of the internet. Left unchecked, such sweeping datasets risk carrying racist, sexist and abusive language into the training data. An example of this was recently uncovered by Lero and UCD researcher Abeba Birhane, who found racist and misogynistic terms in an MIT image library that was used to train AI.
The unpublished paper was not well received at Google and Jeff Dean, head of Google AI, wrote that it “ignored too much relevant research”. According to The New York Times, Google demanded that Gebru either retract her name from the paper or pull it entirely, which she refused.
Both Gebru and Mitchell have since exited the Big Tech giant. Google maintains that Gebru resigned, though her dismissal was widely reported. Mitchell was fired for a breach of Google’s code of conduct and security policies, as she reportedly used a script to sift through company emails looking for examples of discrimination against Gebru.
The fallout from the departure of the ethical AI leaders has been widespread.
David Baker, an engineering director at the company, and software developer Vinesh Kannan both left Google, citing the treatment of Gebru as the reason (and, in Kannan’s case, the “mistreatment” of diversity recruiter April Christina Curley).
Later, Samy Bengio announced that he would leave Google Brain, Google’s deep learning AI research team. Though this followed a reorganisation of Bengio’s team that had cut some of his responsibilities, his departure was linked to the controversy at Google AI owing to his previous support of Gebru following her abrupt exit.
As well as causing turmoil internally, the problems at Google AI have affected work with researchers and partners outside the organisation.
Luke Stark, an assistant professor in AI ethics at Canada’s Western University, turned down a $60,000 Google Research Scholar award “to show his support for Gebru and Mitchell”. Prior to that, University of Texas computer science researcher Vijay Chidambaram tweeted that he would no longer apply for Google funding and Prof Hadas Kress-Gazit cancelled her participation in a machine learning and robot safety workshop.
In the meantime, Google vice-president Marian Croak was named head of responsible AI research at the company, taking leadership of a new centre of expertise on responsible AI within Google Research.