Google’s new appointment comes amid ongoing unrest in the company’s AI teams.
Google vice-president Dr Marian Croak has been appointed to oversee the company’s research on responsible artificial intelligence (AI).
Croak has been a VP at Google for the past six years. In this role she has worked across site reliability engineering and the roll-out of public Wi-Fi in India’s rail network. Her graduate background is in quantitative analysis and social psychology and her dissertation explored factors that influence bias.
A pioneer in VoIP (voice over internet protocol), with more than 200 patents to her name, Croak also invented technology that allowed people to securely donate to charity via text message following Hurricane Katrina.
In her new role, Croak will be tasked with ensuring that Google’s work in AI is responsible and has a positive impact. To that end, she will lead a new centre of expertise on responsible AI within Google Research.
According to Reuters, she will manage 10 teams, which include a dozen scientists researching topics such as accessibility, social good and the fairness of health algorithms.
“I’m excited to be able to galvanise the brilliant talent that we have at Google working on this,” Croak told Google Research senior programme manager Sepi Hejazi Moghadam for the Google blog.
“We have to make sure we have the frameworks and the software and the best practices designed by the researchers and the applied engineers … so we can proudly say that our systems are behaving in responsible ways.
“I am thrilled to support teams doing both pure research as well as applied research – both are valuable and absolutely necessary to ensure technology has a positive impact on the world.”
Google’s AI research unrest
Croak will report to Jeff Dean, the head of Google AI. Dean has been the subject of criticism recently following the abrupt dismissal of Dr Timnit Gebru as co-lead of the company’s ethical AI team.
Last December, Gebru claimed on Twitter that she had been fired by Dean. However, Dean wrote an email to staff saying that the company respected “her decision to resign”.
The discord in the ethical AI team is reported to have begun with a paper co-written by Gebru, which raised many issues around training language models on the data gleaned from the broadest segments of the internet. “Researchers have sought to collect all the data they can from the internet, so there’s a risk that racist, sexist and otherwise abusive language ends up in the training data,” wrote Karen Hao, senior AI reporter at MIT Technology Review, who has seen the paper co-authored by Gebru.
According to The New York Times, Google demanded that Gebru either remove her name from the paper or withdraw it entirely, which she refused to do. Dean also said that Gebru’s paper “ignored too much relevant research” that, he argued, demonstrated improvements made in AI in recent years.
Google is also currently investigating Gebru’s colleague Margaret Mitchell, who was reportedly sifting through emails looking for examples of discrimination against Gebru.
‘What I’d like to do is have people have the conversation in a more diplomatic way, perhaps, than we’re having it now’
– MARIAN CROAK
The appointment of Croak – notably one of Google’s few black executives – comes as Google is struggling to settle the unrest in its AI research division. However, the company has been criticised for the manner in which Croak has been appointed, with one ethical AI team member saying they were “the last to know about a massive reorganisation”.
In a meeting announcing her leadership of the team on Thursday (18 February), Croak reportedly told employees that she respected Gebru and that what happened to her was unfortunate.
In her interview for the Google blog, she attempted to address some of the thorny issues. She said that the field of responsible AI and ethics is new, with its high-level, abstract principles developed only in the last five years.
“There’s a lot of dissension, a lot of conflict in terms of trying to standardise on normative definitions of these principles. Whose definition of fairness, or safety, are we going to use? There’s quite a lot of conflict right now within the field, and it can be polarising at times. And what I’d like to do is have people have the conversation in a more diplomatic way, perhaps, than we’re having it now, so we can truly advance this field.”
Gebru responded to the appointment with the following statement: “Marian is a highly accomplished trailblazing scientist that I had admired and even confided in. It’s incredibly hurtful to see her legitimising what Jeff Dean and his subordinates have done to me and my team.”