How Europe is handling robo-journalists in the AI age

6 Jul 2021


The Council of Europe has recently adopted key resolutions concerning AI and its intersection with media and journalism.

Should computers write the news? What about AI-controlled newsfeeds? If AI tools remove deliberately misleading information, is that an infringement of freedom of expression, or does it protect public discourse?

During the recent Council of Europe Ministerial Conference (10 and 11 June), a final declaration and four resolutions were adopted to address these worries.

The resolution domains included digital technologies, safety of journalists, the changing media and information environment, and the impact of the Covid-19 pandemic on freedom of expression.

This short sentence belies the scope of the concerns and the magnitude of what they address.

The press release from the council specified the need for “regulatory conditions surrounding automated processes for creating and disseminating news, including natural language processing (NLP), robo-journalism and algorithmically prepared newsfeeds”.

The examples provided by the council aren’t arbitrary but represent three major areas of concern.

A 37-page document on the implications of AI-driven tools for the media served as a basis for much of the discussion at the two-day conference. Written by Prof Natalie Helberger, assistant professor Sarah Eskens and three other colleagues, the report addressed the implications of AI for freedom of speech.

In their paper, they discuss AI tools used by journalists (NLP), AI tools replacing journalists (robo-journalism) and AI tools that affect the spread of journalism (newsfeeds).

AI tools for journalists

First off are the AI-driven tools, which usually aim to help journalists sift through vast amounts of information.

Picture the prototypical political journalist, Robert Caro, writing his 1974 biography The Power Broker, which went on to win a Pulitzer Prize.

In researching Robert Moses, the infamous New York public official, Caro spent years trawling through documents at the New York Public Library to write his 1,162-page opus.

In the age of NLP and digital documents, AI aims to cut this process to an afternoon.

NLP sits at the intersection of computer science, linguistics and artificial intelligence. Because it lets computers analyse and process vast volumes of text, it is particularly suited to this investigative style of story.

It has already been used to some success, with a team of Reuters journalists using NLP tools to work through 10,300 legal documents relating to the US Supreme Court. While this process required oversight and modification of the tools, AI vastly reduced the legwork required in their journalism.
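As a rough illustration of the approach (and not the Reuters team's actual pipeline), the sketch below ranks a folder of plain-text documents against a reporter's query using TF-IDF and cosine similarity. The directory layout, file naming and query are invented for the example.

```python
# Minimal sketch: ranking a set of plain-text documents by relevance to a
# reporter's query with TF-IDF. Illustrative only; file layout and query
# are invented, and real investigations layer far more tooling on top.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def rank_documents(doc_dir: str, query: str, top_n: int = 10):
    """Return the top_n documents most similar to the query."""
    paths = sorted(Path(doc_dir).glob("*.txt"))            # hypothetical corpus layout
    texts = [p.read_text(errors="ignore") for p in paths]

    vectoriser = TfidfVectorizer(stop_words="english")
    doc_vectors = vectoriser.fit_transform(texts)           # one row per document
    query_vector = vectoriser.transform([query])

    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    ranked = sorted(zip(paths, scores), key=lambda x: x[1], reverse=True)
    return ranked[:top_n]


if __name__ == "__main__":
    for path, score in rank_documents("filings/", "recusal conflict of interest"):
        print(f"{score:.3f}  {path.name}")
```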

So what’s the problem?

Two issues are spotlighted in the comprehensive document from Helberger et al: an understanding of these tools, and who will get to use them.

On a purely practical level, journalists would need to understand these tools to properly engage with the data. This means understanding incomplete data, the limitations of modelling and sampling bias, as well as having the technical skills needed to run these programs.

But if the world is becoming increasingly digital and some professions require upskilling as a result, why should journalism be excluded?

It shouldn’t, but this leads to the second, and perhaps more consequential, issue. If there is a financial barrier between journalists and these tools, the report’s authors warn, small and local news organisations become less and less viable.

They fear this could lead to the monopolisation of the news industry by large organisations and the stamping out of smaller ones. If those smaller outlets aren’t around to provide local news, journalism may fail in its role as a public watchdog at anything but a national (or even international) level.

The report cites open-source software as one solution. This would level the playing field, giving small organisations access to the same software as bigger ones even if they lack some of the hardware.

Training and upskilling remain a concern, along with access to experienced individuals for mentorship. Online learning and free resources may only achieve so much, and so government supplementation may be necessary.

The report authors double down on this need for training and awareness, however, as using the software irresponsibly could constitute journalistic malpractice.

Robo-journalism risks

“From a copyright perspective, you can ask who is the author of a work and who should own the copyright over a fully automated produced work,” Eskens told Siliconrepublic.com.

“There will be legal issues regarding liability: who is liable when an automatically produced news article contains illegal content or causes harm?

“As we also discuss in the report, you could ask whether content produced by robot journalists is protected by freedom of expression. Such a question might matter when governments want to try to censor robot journalism content. At this moment, robot journalists do not have freedom of expression rights, because robots do not have legal personhood and cannot be holders of rights.”

Existing robo-journalism is somewhat convincing, if shallow. According to Eskens, think pieces or in-depth robo-journalism aren’t of immediate concern. Instead, robo-journalism would likely be used to present straightforward information such as match reports.
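To make the idea concrete, here is a minimal, hypothetical sketch of template-based robo-journalism turning structured match data into a short report. The data fields and phrasing are invented; real systems add variation, fact-checking and editorial review.

```python
# Hypothetical sketch of template-based robo-journalism: rendering a short
# match report from structured data. Field names and wording are invented.
from dataclasses import dataclass


@dataclass
class MatchResult:
    home: str
    away: str
    home_goals: int
    away_goals: int
    venue: str


def write_report(result: MatchResult) -> str:
    """Render a one-paragraph match report from structured data."""
    if result.home_goals > result.away_goals:
        outcome = f"{result.home} beat {result.away}"
    elif result.home_goals < result.away_goals:
        outcome = f"{result.away} won away against {result.home}"
    else:
        outcome = f"{result.home} and {result.away} played out a draw"
    return (f"{outcome} {result.home_goals}-{result.away_goals} "
            f"at {result.venue} on Saturday.")


print(write_report(MatchResult("Rovers", "United", 2, 1, "Park Lane")))
```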

If this is the case, the group highlight the importance of editorial responsibilities and oversight. The implementation of these tools requires that they be understood by all involved.

After all, if basic processes can be automated, the quality of journalism could improve by reducing a writer’s overall workload.

But this isn’t without its difficulties.

The particular example chosen in the report is the role of assessing newsworthiness. Should a paper run a piece because its subject matter is likely to generate clicks? Are there obligations to run certain stories? Are there obligations not to run others?

Despite journalism schools, countless textbooks and a broad academic literature on the matter, there are no definitive answers to the above questions. No two editors will ever be entirely alike. And so there is no one ethical guide to translate into code.

‘Editors, academics, practised journalists and programmers would all be needed to bring AI writers into the newsroom’

When robo-journalists are characterised by “limited to no human intervention beyond the initial programming choices”, it falls to individual programmers to decide which stories are automated, to what extent they are automated, and when AI is used.

Algorithmic transparency is another key element here: the audience should be aware of the code and how it operates, so that authorship can be attributed properly. How much of a story came from the editor and how much lies in the code could be key to an ethical automated news stream.

These concerns need to be addressed not just through the lens of computer science, but with an integrated awareness of journalistic ethics. Editors, academics, practised journalists and programmers would all be needed to bring AI writers into the newsroom.

Using AI to spread information

AI’s role in content recommendations and management is a big one.

‘Filter bubbles’ are a commonly flagged issue. As platforms learn more and more about us, they supply the information we want to see, rather than a balanced view. We become locked in a feed of information that confirms our existing beliefs. Or so the concerns go.
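The mechanism behind the worry is a feedback loop, which the toy simulation below makes explicit: a feed that weights topics by past clicks will, over time, show a sport-only clicker little else. The articles, topics and user behaviour are all invented for illustration.

```python
# Toy simulation of the feedback loop behind 'filter bubble' concerns.
# All data and behaviour here are invented; this is not any real platform's
# recommender, just the general reinforcement mechanism described above.
import random
from collections import Counter

ARTICLES = [("economy", f"economy story {i}") for i in range(50)] + \
           [("sport", f"sport story {i}") for i in range(50)]


def recommend(click_history: Counter, n: int = 5):
    """Weight each article by how often its topic was clicked before."""
    weights = [1 + click_history[topic] for topic, _ in ARTICLES]
    return random.choices(ARTICLES, weights=weights, k=n)


clicks = Counter()
for day in range(30):
    feed = recommend(clicks)
    # Simulate a user who only ever clicks sport stories.
    for topic, _ in feed:
        if topic == "sport":
            clicks[topic] += 1

print(clicks)  # over time the feed drifts towards sport, reinforcing the preference
```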

However, the report on AI in the media highlights that empirical evidence for these bubbles in the real world is thus far scarce. Laboratory experiments show it is possible, and that the exacerbation of existing human biases does happen, but perhaps not to the extent we are worried about.

If these filter bubbles are on the horizon, one particular area to address is how minority groups can overcome these selection algorithms. The datasets the algorithms are trained on may be biased and unrepresentative of the public, especially of marginalised groups.

“If you communicate in a minority language, the algorithms that are used to filter your content might be less well-equipped to find information in your own language, simply because the algorithms are trained on a dominant language such as American English or British English,” said Eskens.

On the other side of this potential problem is the use of AI to combat inappropriate content, such as hate speech and disinformation.

If certain fringe groups use online platforms to deliberately spread disinformation, the scope of the challenge may require an AI solution. Whether spreading unfounded views is an aspect of freedom of expression or its antithesis is a thorny issue, but the report authors lean towards the possible need for AI.
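Such moderation tools are typically built on supervised text classifiers. The sketch below shows the general shape of one, with invented training examples and labels; production systems rely on far larger datasets, ongoing retraining and human review.

```python
# Minimal sketch of the kind of supervised classifier content-moderation
# systems build on. Training examples and labels are invented; this is
# illustrative only, not any platform's actual moderation pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "vaccines cause X, share before it is deleted",     # invented examples
    "council meeting moved to thursday evening",
    "they are lying to you, the cure is being hidden",
    "local team wins weekend fixture 2-1",
]
train_labels = [1, 0, 1, 0]   # 1 = flagged as likely disinformation

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["the truth they do not want you to see"]))
```

Because such a classifier learns whatever patterns sit in its training labels, any bias in those labels propagates directly into what gets flagged.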

Eskens highlighted that research has raised additional problems.

“A study by Sap and others shows that tweets written in African-American English are more likely to be labelled as offensive compared to others. If such biases also exist in the algorithms that decide which content gets promoted on social media feeds, this might mean that content by minority groups [is] less promoted and therefore has less visibility.”

Addressing these concerns might involve allowing people to switch the algorithms that drive their news feed. Through control and transparency, users wouldn’t be told what to look at but could customise how they want their system to work.
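One way to picture that kind of control is a feed whose ranking strategy the user, rather than the platform, selects. The sketch below is illustrative only; the strategy names and article fields are invented, and the report does not prescribe this particular design.

```python
# Hypothetical sketch of user-selectable feed ranking. Strategy names and
# article fields are invented; the point is that the user picks the ranker.
from typing import Callable

Article = dict  # e.g. {"title": ..., "published": ..., "topic_score": float}


def chronological(articles: list[Article]) -> list[Article]:
    """Newest first, no personalisation."""
    return sorted(articles, key=lambda a: a["published"], reverse=True)


def personalised(articles: list[Article]) -> list[Article]:
    """Rank by a precomputed interest score for this user."""
    return sorted(articles, key=lambda a: a["topic_score"], reverse=True)


RANKERS: dict[str, Callable[[list[Article]], list[Article]]] = {
    "chronological": chronological,
    "personalised": personalised,
}


def build_feed(articles: list[Article], user_choice: str) -> list[Article]:
    # The user, not the platform, decides which ranker runs.
    return RANKERS[user_choice](articles)
```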

Each of these tools needs to be understood in its individual application to decide whether it threatens or realises freedom of expression.

This exemplifies the resolution’s “precautionary approach, based on ‘human rights by design’ and ‘safety by design’ models”.

With AI set to occupy such a large part of future markets and technology, journalism will need to train its critical eye on the software – both for what it can gain from its use and for the dangers it brings.

Sam Cox was a journalist at Silicon Republic covering sci-tech news

editorial@siliconrepublic.com