Karen Conway of Fidelity Investments believes that companies need to pay attention to the ethical implications of the AI they create.

“There’s a saying: ‘You can’t be what you can’t see.’ Well, it’s the same for AI,” according to Karen Conway, senior software engineering manager at Fidelity Investments.

Conway recently sat down with SiliconRepublic.com to explain why it's important to have well-regulated and fair AI.

And she said that the efficacy of AI depends largely on who is involved in designing and developing it. If an AI product is made by only a narrow cohort of people, there's a high chance it will carry unintended biases.

“We expect technology to be neutral, we expect it to be unbiased and impartial. It doesn’t have feelings or emotions despite what you might actually hear.”

“At the end of the day, AI is just code. It learns patterns from the data supplied and the models designed by the development team. And although we have very talented and experienced designers and developers creating the AI, the problem is that if they’re all too similar, the data being used is built on bias, and that leads to unintended bias.”
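Conway's point, that a model simply reproduces whatever patterns sit in its training data, is easy to demonstrate. Below is a minimal, hypothetical Python sketch (not Fidelity's code; the scenario, names and numbers are invented for illustration) showing how a "neutral" model trained on skewed historical decisions learns the skew itself.

    # Minimal sketch of data-driven bias. All names and numbers are hypothetical.
    import random

    random.seed(0)

    # Hypothetical historical hiring records: group "A" candidates were
    # approved far more often than group "B" at the same qualification level.
    def historical_record():
        group = random.choice(["A", "B"])
        score = random.uniform(0, 1)                 # same skill distribution
        approve_rate = 0.8 if group == "A" else 0.3  # human bias baked in
        approved = score > 0.5 and random.random() < approve_rate
        return group, score, approved

    data = [historical_record() for _ in range(10_000)]

    # "Training": learn the approval rate per group among qualified candidates,
    # which is exactly the pattern a statistical model would pick up.
    def learned_approval_rate(group):
        outcomes = [a for g, s, a in data if g == group and s > 0.5]
        return sum(outcomes) / len(outcomes)

    print(f"learned approval rate, group A: {learned_approval_rate('A'):.2f}")
    print(f"learned approval rate, group B: {learned_approval_rate('B'):.2f}")
    # Equally qualified groups, yet the learned rates differ sharply:
    # the "impartial" code faithfully reproduces the bias in its data.

Nothing in the sketch is malicious; the code is neutral, but the data is not, which is exactly the failure mode Conway describes.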

The solution to unintended bias? A more diverse workforce for AI projects.

“We can’t see the biases unless we have that diversity in there. If we can get a more diverse base working in STEM, we have access to many more perspectives,” explained Conway.

Words by Blathnaid O’Dea