Black women, along with female politicians and journalists, bear the brunt of harassment on social media.
A woman is abused on Twitter every 30 seconds, according to what Amnesty International has called the largest crowdsourced study into online abuse against women.
The report, compiled by Amnesty and Element AI using a combination of crowdsourcing and machine learning, found that 1.1m abusive or problematic tweets were sent to women last year, an average of one every 30 seconds.
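The headline rate follows directly from the annual total. A quick check of the arithmetic (the 1.1m figure is from the report; the seconds-per-year conversion is computed here):

```python
# Verify the "one every 30 seconds" figure against the report's annual total.
abusive_or_problematic_tweets = 1_100_000  # total reported by Amnesty/Element AI
seconds_per_year = 365 * 24 * 60 * 60      # 31,536,000 seconds

seconds_per_tweet = seconds_per_year / abusive_or_problematic_tweets
print(f"One abusive or problematic tweet every {seconds_per_tweet:.0f} seconds")
# roughly 29 seconds, which the report rounds to "every 30 seconds"
```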
‘Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalised voices’
– MILENA MARIN
More than 6,500 volunteers from 150 countries signed up to take part in Amnesty’s Troll Patrol, a unique crowdsourcing project designed to process large-scale data about online abuse.
As part of the study, volunteers sorted through 228,000 tweets sent to 778 women politicians and journalists in the UK and US in 2017. Software company Element AI then applied machine-learning techniques to this labelled data to extrapolate the scale of abuse that women face on Twitter.
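The extrapolation step can be illustrated with a toy calculation. The sketch below assumes, as a simplification of the study's actual machine-learning pipeline, that the labelled sample is representative, so the sample rate scales to the full tweet volume. Only the 228,000-tweet sample size comes from the study; the abusive-tweet count and total volume are hypothetical numbers chosen to be consistent with the reported 7.1pc rate and 1.1m total.

```python
import math

# Toy extrapolation from a human-labelled sample to the full tweet volume.
# The real study trained machine-learning models on the volunteers' labels;
# this sketch shows only the simpler idea of scaling a sample rate.
labelled_sample = 228_000   # tweets labelled by Troll Patrol volunteers
labelled_abusive = 16_188   # hypothetical count giving a ~7.1pc sample rate

sample_rate = labelled_abusive / labelled_sample

# 95pc confidence interval for the rate (normal approximation).
stderr = math.sqrt(sample_rate * (1 - sample_rate) / labelled_sample)
low, high = sample_rate - 1.96 * stderr, sample_rate + 1.96 * stderr

# Scale the rate to a hypothetical full year of tweets received by the group.
total_tweets = 15_500_000   # hypothetical total volume
estimated_abusive = sample_rate * total_tweets

print(f"sample rate: {sample_rate:.1%} (95% CI {low:.2%} to {high:.2%})")
print(f"estimated abusive/problematic tweets: {estimated_abusive:,.0f}")
```

With these illustrative inputs the estimate lands at about 1.1m tweets; the large labelled sample is what keeps the confidence interval on the rate narrow.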
Is Twitter a toxic place for women?
The results were shocking but also confirmed what most of us know about online abuse. “It’s clear that a staggering level of violence and abuse against women exists on Twitter,” said Kate Allen, Amnesty’s UK director. “These results back up what women have long been saying: that Twitter is endemic with racism, misogyny and homophobia.”
The study by Amnesty and Element AI looked at two types of tweets: abusive and problematic. Abusive content violates Twitter’s own rules and includes tweets that promote violence against or threaten people based on their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or serious disease. Problematic content is defined as content that is hurtful or hostile, especially if repeated to an individual on multiple or cumulative occasions, but not as intense as an abusive tweet.
In total, 7.1pc of tweets sent to the women in the study were abusive or problematic, amounting to 1.1m tweets over the year.
Black women were disproportionately targeted, being 84pc more likely than white women to be mentioned in abusive tweets. One in 10 tweets mentioning black women were abusive or problematic, compared to one in 15 for white women. The study also found that black and minority ethnic women were 34pc more likely to be mentioned in abusive tweets than white women.
It found that online abuse against women cuts across the political spectrum: politicians and journalists faced similar levels of abuse whether liberal or conservative, and left- and right-leaning media organisations were affected alike.
Politicians included in the sample came from across the political spectrum in both the US and UK. The journalists included were from a diverse range of US and UK publications, including The Daily Mail, The New York Times, The Guardian, The Sun, Gal-dem, PinkNews and Breitbart.
“With the help of technical experts and thousands of volunteers, we have built the world’s largest crowdsourced dataset about online abuse against women,” said Milena Marin, Amnesty International’s senior adviser for tactical research.
“We found that, although abuse is targeted at women across the political spectrum, women of colour were much more likely to be impacted, and black women are disproportionately targeted. Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalised voices.
“By crowdsourcing research, we were able to build up vital evidence in a fraction of the time it would take one Amnesty researcher, without losing the human judgement which is so essential when looking at context around tweets.
“Troll Patrol isn’t about policing Twitter or forcing it to remove content. We are asking it to be more transparent, and we hope that the findings from Troll Patrol will compel it to make that change. Crucially, Twitter must start being transparent about how exactly they are using machine learning to detect abuse, and publish technical information about the algorithms they rely on,” Marin said.