Do companies need to hire dedicated data science ethicists? Could a philosophy degree become a hot commodity in the world of data science?
A startling amount of data is generated every day – around 2.5 quintillion bytes, according to software company Domo. In 2017, 456,000 tweets were sent every minute.
This rate of generation will likely only rise in tandem with the breathless pace at which the internet is expanding. More and more users are coming online, and emerging technologies such as the internet of things (IoT) mean the internet is more ubiquitous than ever.
It is the role of the data scientist to convert these reams of data into actionable business insights, even predictions. Data scientists can perceive patterns in the flow of business that previously eluded enterprise leaders. So, now that enterprises have answered the question of what they can do with data, they have a new question to grapple with: the ethical question of ‘should’.
Is ethics being given enough of a spotlight?
A number of high-profile data privacy cases involving some of the biggest names in tech have certainly brought the issue of data ethics into the public consciousness. Earlier this year, we heard from data scientist Vin Vashishta, who asked the question: why don’t data scientists get ethics training?
Feeling that technologists need to embrace an ethics code similar to the medical profession’s, universities such as Harvard, MIT and the University of Texas at Austin have introduced ethics courses to provide guiding principles to budding data science and AI professionals. The idea is that such courses would eventually be required for all computer science majors.
Facebook’s Mark Zuckerberg once famously declared that the motto of his firm was “move fast and break things”. This thirst for expansion, for pushing the bounds of possibility at lightning pace, is a philosophy embraced by many at the helm of the tech industry. In many ways, these leaders probably never anticipated how vast their influence on society would become.
While academics, data science professionals and even the public have responded to the growing question of ethics in the technology industry in a timely and engaged manner, where is the response from leaders? Have companies made any substantial changes to the way they run and the way they hire, to respond to this pressing issue?
The trolley problem in the boardroom
As EY’s Laurence Buchanan astutely pointed out in an interview with Siliconrepublic.com, companies may soon find themselves debating classic ethical thought experiments such as the trolley problem. “Say you’re an automotive company, you’re moving into the area of self-driving cars – boards are going to face real ethical questions and dilemmas. Two driverless cars crash – how does the car choose what to crash into? That’s an ethical question; it has nothing to do with technology at all.” These questions are, without exaggeration, questions of life and death, but is it possible that the moral imperative isn’t as apparent in a boardroom?
When you’re a doctor or a psychologist, you interact daily with the people whose lives your actions could affect. Data scientists, not so much – they write most of their software behind a screen, converting complex moral problems into lines of code, which risks distancing them from the actual choices they are programming a machine-learning algorithm to make.
Data ethics culture
For one thing, is it not a little myopic to assume that ethics training automatically improves someone’s moral fibre? Who’s to say that reading the works of Immanuel Kant and Jeremy Bentham will do anything to help people develop a conscience? Is a conscience, especially in these contexts, something that can actually be taught? More to the point, is a conscience enough?
Many of the ethical problems that arise for data scientists manifest in instances of discrimination. The use of facial recognition by police forces has been a particular area of contention in light of mounting evidence that this technology frequently misidentifies people of colour.
Bharat Krish, CEO of RefineAI, highlighted an instance where he recognised this blind spot in his own facial recognition algorithms, remarking that he was quick to notice the error because he is dark-skinned himself. This raises the question: can a non-diverse data science team, even armed with ethics training, deliver data science that promotes equality?
These are the questions that those hiring must consider. Aside from the obvious moral imperative, the fallout from making the wrong decision when the stakes are so high could topple an organisation. Perhaps in future interviews, data scientists won’t be asked how well they can use TensorFlow, but probed about their instincts in a moral quandary.