IBM reveals a toolkit for detecting and removing bias from AI

19 Sep 2018


Image: Charles Taylor/Shutterstock


Big Blue responds to concerns that algorithms used by tech giants for AI may not make fair decisions.

IBM has launched a tool that scans for bias in AI algorithms and recommends adjustments in real time. AI Fairness 360, released by the global tech giant as an open-source library, helps detect and remove bias in machine-learning models and datasets.

The AI Fairness 360 Python package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
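To make the idea of a dataset fairness metric concrete, here is a pure-Python sketch of two common group-fairness measures of the kind the package includes: disparate impact and statistical parity difference. This is an illustration of the underlying arithmetic, not the AI Fairness 360 API itself, and the hiring outcomes below are hypothetical.

```python
def selection_rate(outcomes):
    """Fraction of favorable outcomes (1s) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of selection rates between groups; 1.0 means parity.
    Values below roughly 0.8 are often flagged as adverse impact
    (the 'four-fifths rule')."""
    return selection_rate(unprivileged) / selection_rate(privileged)

def statistical_parity_difference(unprivileged, privileged):
    """Difference of selection rates between groups; 0.0 means parity."""
    return selection_rate(unprivileged) - selection_rate(privileged)

# Hypothetical hiring outcomes (1 = hired, 0 = rejected)
privileged = [1, 1, 1, 0, 1, 0, 1, 1]    # selection rate 0.75
unprivileged = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate 0.375

print(disparate_impact(unprivileged, privileged))             # 0.5
print(statistical_parity_difference(unprivileged, privileged))  # -0.375
```

A toolkit like AI Fairness 360 computes many such metrics at once over a dataset's protected attributes, alongside explanations of what each value implies.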

Containing more than 30 fairness metrics and nine state-of-the-art bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into practice across domains as wide-ranging as finance, human capital management, healthcare and education.
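One of the simplest mitigation techniques of the kind included in the library is reweighing: assigning each training example a weight so that group membership and outcome become statistically independent before a model is trained. The sketch below illustrates the idea in pure Python with hypothetical data; it is not the library's own implementation.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Instance weights w(g, y) = P(g) * P(y) / P(g, y), which make
    group membership and label independent under the weighted data."""
    n = len(groups)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical data: group 'a' receives favorable labels (1) more often
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
labels = [1, 1, 1, 0, 1, 0, 0, 0]

weights = reweighing_weights(groups, labels)
# Under-selected (group, label) pairs get weights above 1;
# over-selected pairs get weights below 1, so weighted selection
# rates are equal across groups.
print([round(w, 3) for w in weights])
```

After reweighing, the weighted selection rate is 0.5 for both groups here, so a model trained on the weighted data no longer sees a favorable-outcome gap between them.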

Can AI amplify human prejudice?

“As AI becomes more common, powerful, and able to make critical decisions in areas such as criminal justice and hiring, there’s a growing demand for AI to be fair, transparent and accountable for everyone,” IBM’s Animesh Singh and Michael Hind said in a blog post.

“Underrepresentation of datasets and misinterpretation of data can lead to major flaws and bias in critical decision-making for many industries,” they continued. “These flaws and biases may not be easy to detect without the right tool. We at IBM are deeply committed to delivering services that are unbiased, explainable, value-aligned and transparent.”

IBM’s launch is one of several moves by tech and consulting giants in recent weeks to address concerns about human bias seeping into the most advanced AI algorithms and amplifying human prejudice. For example, Accenture’s Applied Intelligence group recently introduced an AI Fairness Tool to its suite of offerings, aimed at helping organisations build responsible AI by integrating ethical assessments into the innovation process without slowing it down.

New research from Accenture Applied Intelligence, SAS and Intel revealed that 72pc of AI adopters conduct ethics training for their technologists.

John Kennedy is an award-winning technology journalist who served as editor of Siliconrepublic.com for 17 years.

editorial@siliconrepublic.com