Twitter takes the war to the trolls in attempt to wipe out online abuse
Twitter has announced a suite of new and improved tools to help tackle harassment, which will be rolled out over the coming weeks.
Firstly, the site will be using algorithms to identify people who are "engaging in abusive behaviour, even if this behaviour hasn't been reported to us" and penalising those accounts, a blog post written by VP of engineering Ed Ho explained.
That could mean, for example, someone who repeatedly tweets at a person who doesn't follow them.
As punishment, Twitter could impose a "time out", during which only that person's followers can see their tweets for a set period.
In response to feedback from users, the site is adding more options for filtering out certain kinds of accounts.
Accounts without a profile picture, a verified email address or a phone number are often created purely to "troll" others without detection, so it will be possible to stop them from appearing in your notifications.
The mute button, which was introduced last year to remove certain words, phrases or conversations from your notifications, will now be accessible from the home timeline, with an additional option to decide how long the chosen content stays muted.
Finally, Twitter promises to make its reporting process more transparent.
"Since these tools are new, we will sometimes make mistakes, but know that we are actively working to improve and iterate on them every day," Ho wrote.
Twitter has been accused in the past of not doing enough to stamp out abuse, so are these features another step in the right direction?
"Twitter, along with other social media services, needs to do more than just provide software tools to help deal with negative issues," says internet psychologist Graham Jones.
"What they are doing is useful and will help people manage things better and deal with abuse and other negative things. But the real issue is not tools and techniques, but education.
"Social media companies ought to be sponsoring events in schools to help children and parents understand how to behave online with their services and how to deal with issues should they arise."
Jones explains that the negativity seen on social media arises because online communication is devoid of the feedback systems we rely on in real life, like tone of voice, body language and facial expressions (no, emojis don't count).
"What this means is that the usual elements of communication that inhibit people from abusive comments, or being negative, are not there online, so it's much easier for anyone to be negative while on social media than they would be in the 'real world'."