Experts suggest ‘bias bounties’ could improve AI ethics

21 Apr 2020


A new paper has suggested that bias bounties could help turn AI ethics from principles into practice.

A group of researchers has proposed that bias and safety bounties, modelled on the bug bounties that offer a financial reward for reported software flaws, could help uncover bias and safety issues in artificial intelligence (AI) systems.

As reported by VentureBeat, the paper, written by experts from Google Brain, Intel, OpenAI and other top research labs in the US and Europe, looked at ways to turn AI ethics principles into practice.

“Bug bounties provide a legal and compelling way to report bugs directly to the institutions affected, rather than exposing the bugs publicly or selling the bugs to others,” the paper said.

“Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties.”

The researchers added that bounties could increase the amount of scrutiny applied to AI systems, and that bounties for security, privacy protection or interpretability could also be explored.
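To make the idea concrete, here is a minimal sketch in Python of the kind of automated check a bounty participant might submit as evidence, assuming a hypothetical model whose predictions and demographic groups are available for an evaluation set. The function names, the loan-approval scenario and the 80pc threshold (borrowed from the 'four-fifths rule' used in US employment law) are illustrative assumptions, not details from the paper.

```python
from typing import Dict, Iterable, List


def selection_rates(predictions: Iterable[int], groups: Iterable[str]) -> Dict[str, float]:
    """Share of favourable (positive) predictions per demographic group."""
    totals: Dict[str, int] = {}
    positives: Dict[str, int] = {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}


def disparate_impact_findings(predictions, groups, threshold: float = 0.8) -> List[str]:
    """List group pairs whose selection-rate ratio falls below the threshold.

    A bounty submission would pair output like this with the exact inputs
    that reproduce it, so the vendor can verify the finding independently.
    """
    rates = selection_rates(predictions, groups)
    findings = []
    for g, g_rate in rates.items():
        for h, h_rate in rates.items():
            if g != h and h_rate > 0 and g_rate / h_rate < threshold:
                findings.append(
                    f"group '{g}' selected at {g_rate:.0%}, "
                    f"below {threshold:.0%} of group '{h}' at {h_rate:.0%}"
                )
    return findings


# Made-up predictions from a hypothetical loan-approval model.
preds = [1, 0, 1, 1, 1, 0, 1, 0, 0, 0]
grps = ["a"] * 5 + ["b"] * 5
for finding in disparate_impact_findings(preds, grps):
    print(finding)  # group 'b' selected at 20%, below 80% of group 'a' at 80%
```

The point of the sketch is the workflow rather than the particular metric: like a bug report, a bounty submission needs to be reproducible by the institution that receives it.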

Discoveries of bias in AI

The paper referenced a discovery last year by Ziad Obermeyer and colleagues, who uncovered racial bias in a healthcare algorithm that affected millions of patients. The algorithm ranked patients by their predicted healthcare costs as a proxy for their health needs, and because less had historically been spent on Black patients at the same level of illness, it systematically underestimated how sick they were.
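The toy simulation below, a sketch using entirely made-up numbers, shows how ranking on a cost proxy can reproduce a historical disparity even when underlying need is identical across groups. The group names, the 0.7 access factor and the 10pc programme size are illustrative assumptions, not figures from the study.

```python
import random

random.seed(0)

def simulate_patient(group: str) -> dict:
    """Synthetic patient: equal need distribution in both groups, but one
    group historically incurs lower costs at the same level of need
    (e.g. due to unequal access to care). All numbers are made up."""
    need = random.gauss(50, 15)                # underlying health need
    access = 1.0 if group == "majority" else 0.7
    cost = need * access + random.gauss(0, 5)  # what the model actually sees
    return {"group": group, "need": need, "cost": cost}

patients = [simulate_patient(g) for g in ["majority"] * 5000 + ["minority"] * 5000]

# Rank by the cost proxy and enrol the top 10pc in a care programme.
patients.sort(key=lambda p: p["cost"], reverse=True)
enrolled = patients[: len(patients) // 10]

for group in ("majority", "minority"):
    share = sum(p["group"] == group for p in enrolled) / len(enrolled)
    print(f"{group}: {share:.1%} of programme places")
# Despite identical need on average, the lower-cost group receives far
# fewer places: the ranking faithfully optimises the proxy, not the need.
```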

Another high-profile case occurred in September 2019, when ImageNet Roulette, a viral selfie app, applied labels drawn from ImageNet, a crowdsourced database of more than 14m categorised images, to users' selfies. However, some users reported that the database associated racial slurs with images of people of colour.

Technology reporter Julia Carrie Wong wasn’t impressed with the results she got while using the app. She said: “People usually assume that I’m any ethnicity but Chinese. Having a piece of technology affirm my identity with a racist and dehumanising slur is strange.”

While bias bounties could incentivise people to weed out bias in AI and help ensure that all users have a fair and consistent experience, the paper's authors warned that bounties "are not sufficient for ensuring that a system is safe, secure or fair".

The paper, 'Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims', featured 10 recommendations on how AI ethics principles can be put into practice, including increased scrutiny of commercial AI models and increased government funding for researchers in academia.

Kelly Earley was a journalist with Silicon Republic

editorial@siliconrepublic.com