Twitter opens content moderation consortium to researchers globally

The social media platform will give successful applicants from all over the world access to its data to study online content moderation.

The Twitter Moderation Research Consortium (TMRC) was launched earlier this year, with a select group of experts dedicated to studying the platform’s governance issues.

Now, the social media company is offering researchers the opportunity to apply for membership of the consortium. This would give successful applicants access to an archive of data dating back to 2018 related to state-backed information operations.

In a blogpost yesterday (22 September), Twitter said membership is open to global researchers from across academia, civil society, NGOs and journalism.

Who is eligible?

Researchers from anywhere in the world can apply, but they must hold a primary institutional affiliation with an academic, journalistic, non-profit or civil society research organisation. Students must be at master’s degree or PhD level.

Applicants should also have prior experience and relevant skills for data-driven analysis and must have a specific public interest research use case for the data.

Those whose primary institutional affiliation is in industry or government will not be eligible.

According to Twitter, successful applicants will be researchers with “a demonstrable history of independent research” or who can show an ability to be entrusted with the data and to “pursue research for a qualified purpose”.

Once accepted, researchers will be provided access to the TMRC’s datasets to work independently.

Twitter said by providing researchers with access to “specific, granular data”, it aims to support “an unprecedented level of empirical research into state-backed attacks on the integrity of the conversation on Twitter”.

Moderation and misinformation

So far, the consortium has operated in a pilot capacity, sharing data from about 15 information operations with partners including the Stanford Internet Observatory, Cazadores de Fake News and the Australian Strategic Policy Institute.

The company said that by inviting researchers to apply for membership of the TMRC, it wants to "remain transparent" while addressing security and integrity challenges.

“Over time, we intend to share similarly comprehensive datasets about other content moderation policy areas and enforcement decisions, such as data about tweets labelled under our misinformation policies.”

In an attempt to fight misinformation on its platform, Twitter rolled out Birdwatch in 2021, a feature that allows participating users to flag tweets they believe contain misleading information.

However, the company has come under scrutiny recently, with whistleblower Peiter Zatko alleging that it deceived regulators, the public and its own board of directors about "extreme, egregious deficiencies" related to privacy, security and content moderation.

Twitter is one of several online platforms that have faced challenges tackling content moderation and misinformation, an issue that has caught the eye of policymakers. Earlier this year, an agreement was reached on the EU's Digital Services Act, a landmark piece of legislation that requires tech companies to take responsibility for content moderation.

Jenny Darmody is the deputy editor of Silicon Republic

editorial@siliconrepublic.com