AI appears regularly in the media, but are journalists distorting reality, both intentionally and unintentionally? Colm Gorey investigates.
With scrutiny of the media at an unprecedented level, how journalists cover particular topics has itself become a subject of debate. While politics, sport and lifestyle have been staples of newspapers for centuries – and featured in the earliest days of online journalism – technology coverage appears to be still finding its feet.
Nowhere is this more apparent than in artificial intelligence (AI), which has exploded into the mainstream through pop culture and, increasingly, billionaire celebrities with big ambitions and big platforms, such as Elon Musk. Will humanity be erased by autonomous ‘killer robots’, or will we co-exist with them in our homes, cars and workplaces?
However, away from the end-of-the-world scenarios or possible utopias, there is another pertinent question: where is this news coming from? And how does this skew how journalists report on the technology?
That was the purpose of research undertaken by the University of Oxford and the Reuters Institute for the Study of Journalism, published late last year. Looking specifically at the UK media, the authors of the report analysed 760 articles that referenced AI across six outlets of varied political leanings.
The report found that almost 60pc of news articles were tied to industry products, initiatives or announcements, and that 33pc of unique sources across all articles were affiliated with industry – almost twice as many as came from academia, and six times as many as came from government. Somewhat unsurprising was another finding: Musk was referenced in 12pc of all articles, thanks to his many public comments.
Hesitancy from academia
As a tech journalist, this doesn’t surprise me: most of my daily inbox is filled with companies, both large and small, bombarding me with comments on what the Amazon Echo has done now, or how a new multibillion-dollar corporation’s AI has defeated a human at an ancient board game.
Not only that, but for quite a number of journalists, I’m sure, knowledge of AI only goes so deep, resulting in sheepish looks the moment they’re quizzed on the finer details of, say, neural networks versus deep learning.
This makes it easier for a journalist to connect with a story about a product than with, say, one about how a new algorithm could be used to find potential new antibiotics.
For one of the authors of the Reuters report, Dr Scott Brennen, it is a sign that academia lacks an established, unified front for getting the word out about AI development independently of industry.
“On one hand, a lot of the scientists that I’ve spoken to have said it’s increasingly important to them that they’re expected to do public outreach generally – not only talking to journalists, but also other public communications,” Brennen said.
“Not only was it expected, but they wanted it to be part of their evolving mission. That being said, there were other factors pushing in the other direction. Some of the researchers were very concerned because, in the past, they had been misquoted.”
This sentiment is shared by one of Ireland’s leading academics in AI, Prof Barry O’Sullivan of University College Cork, who is also a board member of the AI4EU initiative. He cited the relatively recent rules regarding how academics converse with the media.
“Certainly for science in Ireland, there’s a new national research integrity policy, with [a researcher] supposed to state, when commenting in public, whether something is opinion or fact,” O’Sullivan said. On the issue of hype, he added that AI is not the only field that falls victim to sensationalism.
“Scientists, researchers, governments and industry are trying to make stories attractive to [journalists] to carry them as much as [journalists] are trying to find an angle on a story to make it interesting to the public, so there’s an amplification of hyping as well,” he said. “[Researchers] hype a little bit, [journalists] emphasise slightly differently. That’s not a criticism, it’s just how people tell each other stories.”
Public perception versus media narrative
This isn’t to say that the abundance of industry-led AI coverage has swayed the general public into simply accepting what the big corporations say. Another recent University of Oxford study, undertaken by the Future of Humanity Institute, found that among the US public, a university researcher was considered the most trustworthy source of information on AI, with technology companies much lower on the list.
At first glance, this would suggest that despite the media’s reliance on industry for AI content, the general public isn’t being overly convinced by it. So is the media’s influence on the topic of AI significantly less than we think?
One interesting point raised by the report authored by Brennen and his colleagues is that when it comes to the ethics of AI, the media are good at calling for a discussion about ethics, but not at facilitating one.
“Articles frequently identify ethical topics or questions, but then stop before going further,” the authors wrote. “Sometimes, this questioning without answering involves pushing the work of actual ethics off on others, such as academics, government or one of a number of new organisations meant to address the ethics of AI.”
Why should the conversation around AI trail off in the media when the political topics of the day – or even the goings-on in sport – are debated endlessly? Perhaps in the years to come, as AI permeates further into our lives, the academic voice will be just as strong as those coming from behind the desks of major corporations. Or perhaps that’s a question for another day.