Big Bird is watching: Twitter pecks at trolls who spoil conversations

15 May 2018

Image: S1001/Shutterstock

Using machine learning and other data signals, Twitter has hatched a plan to stop trolls spoiling healthy debate.

Twitter has revealed that it is using new tools, including machine learning, to identify signals that indicate when trolls are about to ruin a conversation.

The tools ensure that while suspect trolls’ tweets remain on Twitter, they will not surface in communal areas such as search or conversations, where they could inflame debate.

‘We’re tackling issues of behaviours that distort and detract from the public conversation in those areas by integrating new behavioural signals into how tweets are presented’
– TWITTER

The move follows a promise made in March by Twitter co-founder and CEO Jack Dorsey to improve the health of public debate on Twitter and address the behaviour of trolls.

The result is that people contributing to healthy conversation will be more visible, while those who try to poison or undermine the debate with negativity will be digitally sidelined.

Twitter vice-president of trust and safety, Del Harvey, and director of product management and health, David Gasca, said in a blogpost today (15 May) that the company is building on Dorsey’s promise.

They said that while some Twitter accounts belonging to trolls have violated policy and the platform has taken action, other trolls don’t necessarily violate Twitter’s policies but have still managed to distort or ruin otherwise healthy conversations.

They said that while less than 1pc of Twitter accounts have been reported for abuse, other accounts have not been reported because they have not violated policy per se, yet have still managed to have a negative impact on people’s experience of Twitter.

Pecking up signals

Twitter said that it currently uses policies, human review processes and machine learning to determine how tweets are organised and presented in communal areas such as conversations and search.

“Now, we’re tackling issues of behaviours that distort and detract from the public conversation in those areas by integrating new behavioural signals into how tweets are presented,” Twitter said.

“By using new tools to address this conduct from a behavioural perspective, we’re able to improve the health of the conversation and everyone’s experience on Twitter, without waiting for people who use Twitter to report potential issues to us.”

Harvey and Gasca said that there are many new signals that Twitter is taking into account, most of which are not visible externally.

“Just a few examples include: if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that repeatedly tweet and mention accounts that don’t follow them, or behaviour that might indicate a coordinated attack. We’re also looking at how accounts are connected to those that violate our rules and how they interact with each other.”

“These signals will now be considered in how we organise and present content in communal areas like conversation and search. Because this content doesn’t violate our policies, it will remain on Twitter, and will be available if you click on ‘Show more replies’ or choose to see everything in your search setting.”
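
Twitter has not said how these signals are weighted or combined, but the mechanics can be illustrated with a short sketch. The Python below is a hypothetical example only: the signal names, weights and threshold are assumptions made for illustration, not Twitter’s actual model. The one detail taken from the company’s description is that flagged content is folded behind ‘Show more replies’ rather than removed.

# Hypothetical sketch: combining behavioural signals of the kind Twitter
# describes into a single score used to fold replies out of view.
# Signal names, weights and the threshold are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class AccountSignals:
    email_confirmed: bool        # has the account confirmed its email address?
    simultaneous_signups: int    # accounts the same person created at once
    unsolicited_mentions: int    # recent mentions of accounts that don't follow back
    linked_to_violators: bool    # connected to accounts that violate the rules


def behaviour_score(s: AccountSignals) -> float:
    """Return a score in [0, 1]; higher suggests conversation-distorting behaviour."""
    score = 0.0
    if not s.email_confirmed:
        score += 0.2
    # Several accounts registered simultaneously is treated as a stronger signal.
    if s.simultaneous_signups > 1:
        score += min(s.simultaneous_signups - 1, 3) * 0.15
    # Repeatedly tweeting at accounts that don't follow you back.
    score += min(s.unsolicited_mentions, 10) * 0.03
    if s.linked_to_violators:
        score += 0.25
    return min(score, 1.0)


def fold_reply(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Decide whether a reply stays visible or goes behind 'Show more replies'.

    Note: the tweet is never removed; it is only deprioritised in communal areas.
    """
    return behaviour_score(s) >= threshold


if __name__ == "__main__":
    suspect = AccountSignals(
        email_confirmed=False,
        simultaneous_signups=4,
        unsolicited_mentions=12,
        linked_to_violators=True,
    )
    print(behaviour_score(suspect))  # 1.0 in this illustrative weighting
    print(fold_reply(suspect))       # True: folded behind 'Show more replies'

The key design point the sketch preserves is that nothing is deleted: a high score only changes where a tweet is ranked in search and conversations, which is why the content remains reachable via ‘Show more replies’.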

Chirp chirp, beep beep

No doubt attention-seeking trolls will be hopping with rage and crying censorship over the latest development, but Twitter said that early testing of the new tools in various markets around the world shows that keeping the negative commentary out of sight is having a positive impact.

The results so far have been a 4pc decrease in abuse reports from search and an 8pc drop in abuse reports from conversations, as people see fewer tweets that disturb their experience on the platform.

“Our work is far from done,” said Harvey and Gasca.

“This is only one part of our work to improve the health of the conversation and to make everyone’s Twitter experience better. This technology and our team will learn over time and will make mistakes.

“There will be false positives and things that we miss. Our goal is to learn fast and make our processes and tools smarter. We’ll continue to be open and honest about the mistakes we make and the progress we are making.

“We’re encouraged by the results we’ve seen so far, but also recognise that this is just one step on a much longer journey to improve the overall health of our service and your experience on it,” they said.

John Kennedy is a journalist who served as editor of Silicon Republic for 17 years

editorial@siliconrepublic.com