Twitter considers new features including prompts against offensive language

2 Jul 2021

Image: © Julien Eichinger/Stock.adobe.com

The proposed changes give users greater control over their conversations and who can take part in them.

Twitter is considering new features for its platform after users complained about profanity in replies and about having to toggle between public and private accounts.

An unverified Twitter account associated with the company’s design team tweeted yesterday (1 July) about changes the social media company is considering in addressing these common complaints, including allowing users to decide who sees their tweets.

The proposed “trusted friends” feature would allow users to tweet to a chosen group of followers, similar to Instagram’s ‘close friends’ option.

Twitter user A Designer said this feature would address issues raised by users who toggle between public and protected accounts or “juggle alt accounts”.

“With trusted friends, you could tweet to a group of your choosing. Perhaps you could also see trusted friends’ tweets first,” they tweeted.

Another early idea floated in the Twitter thread from A Designer was “Facets”, which would let users tweet from distinct ‘personas’ meant for friends, family, work or public, all within a single account.

According to these tweets, others would be able to follow a whole account or just the facets they are interested in.


This idea builds on a feature Twitter introduced last year that lets users limit who can reply to their tweets, such as only their followers or accounts they mention.

The other proposed feature, called “reply language prompts”, would let users choose phrases they don’t want to see in their replies and enable automatic actions, such as moving violating replies to the bottom of the conversation.

When another user includes a specified phrase in a reply, Twitter would highlight the phrase and prompt them to reconsider their choice of language.

A Designer explained that this feature is just one proposal for how Twitter could guide users to be more conscientious about potentially hurtful language in replies, while letting users set their own boundaries for acceptable language.

“It’s like spellcheck, but for not accidentally sounding like a jerk in the replies,” they tweeted.

While a prompt would not stop a user from posting a reply, Twitter says such prompts have previously proven effective, as in the case of prompting users before they retweet unread articles.

“Perhaps the right prompt (in the right moment) can help everybody be their best selves,” proposed A Designer.

Vish Gain is a journalist and copywriter with Silicon Republic

editorial@siliconrepublic.com