New rules to clamp down on ‘toxic’ Wikipedia users ratified by board

26 May 2020


The Wikimedia Foundation Board, which oversees Wikipedia, has agreed to a new set of safety standards to clamp down on ‘toxic’ users.

A set of new rules for Wikipedia is to be finalised by the end of the year in a bid to tackle discrimination and promote inclusivity on the platform. The Wikimedia Foundation Board voted to ratify the plans, which include a universal code of conduct that will apply across all of Wikimedia’s services.

Users who fail to comply with the new rules could face a ban or restricted access to Wikimedia’s services. A retroactive review process will also be introduced for cases in which someone feels they have been targeted by another user.

Wikipedia relies on thousands of volunteer editors who give their spare time to review edits and changes made to articles by more than 39m English-language speakers alone. However, some of the most marginalised people in society have reported targeted harassment on the platform.

A piece in The New York Times last year reported that members of the LGBTQ community who volunteer to edit Wikipedia face persistent harassment that can last for months. In one example, an editor said that revealing themselves to be a feminist or a member of the LGBTQ community leads to increased harassment.

An urgent need

In its announcement, the Wikimedia Foundation Board said: “The board does not believe we have made enough progress toward creating welcoming, inclusive, harassment-free spaces in which people can contribute productively and debate constructively.

“In recognition of the urgency of these issues, the board is directing the Wikimedia Foundation to directly improve the situation in collaboration with our communities. This should include developing sustainable practices and tools that eliminate harassment, toxicity and incivility, promote inclusivity, cultivate respectful discourse, reduce harms to participants, protect the projects from disinformation and bad actors, and promote trust in our projects.”

Last February, researchers at MIT revealed that they had developed an AI designed to automatically correct inaccuracies on Wikipedia. While built to assist those editing the platform, the researchers envisioned that their AI could one day handle the entire process automatically.

Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com