It’s a staple slice of mom wisdom: “If you don’t have anything nice to say, don’t say anything at all.”
Now Instagram is finally heeding this bit of maternal wisdom, announcing it will automatically hide offensive comments in a bid to reduce bullying and harassment.
A global study released earlier this month found that nearly 60% of young women aged 15 to 25 have been victims of online harassment and abuse, with a staggering 39% of those saying they’ve been threatened with sexual violence while online.
In South Africa, things aren’t much better. We have the highest prevalence of cyberbullying out of 28 countries surveyed in a 2018 report by research company Ipsos Global Advisor.
More than half of the SA parents who took part in the survey said they know at least one child in their community who has been a victim of cyberbullying, an increase of 24% since 2011.
The most common type of online harassment, research suggests, is mean comments – and if you’ve ever scrolled through the comments section of a popular Instagram post, chances are you’ve seen something unsavoury written there.
Instagram, which has more than 1 billion users, has now introduced an algorithm to its app that will hide any comments that appear to be nasty or hurtful. The only way to see these comments will be to tap the “view hidden comments” option.
The photo-sharing app has also tweaked its comment-warning feature. After a user writes a potentially offensive comment – but before the comment is posted – a pop-up message will appear which reads: “This may go against our guidelines”.
The pop-up message will also tell users that if they post a negative comment, it will be hidden and Instagram may investigate whether to delete the user's account.
The app rolled out its “restrict” feature in October last year, which allowed users to hide abusive content, but its latest update will do so automatically.
In a blog post, the company explained that more than 35 million Insta accounts are using or have used the restrict function since it launched, allowing people to safely control their Instagram experience.
Explaining the latest move, the company wrote: “These new warnings let people take a moment to step back and reflect on their words and lay out the potential consequences should they proceed. We just started testing this feature in select languages.”
The announcement of Insta’s new anti-bullying measures is the latest attempt by a social-media company to show the public that it is working to moderate its content more effectively.
Facebook, which owns Instagram, recently came under fire for what critics say are lazy platform policies that allow hate and misinformation to spread.
A group called Stop Hate for Profit launched a boycott of Facebook advertising, resulting in more than 500 global companies announcing they would not work with the tech giant until “meaningful action” was taken.
The same group targeted Instagram last month and recruited popular users such as Kim Kardashian West – who boasts 190 million followers – to boycott the platform for a single day to bring attention to the issue.
“I love that I can connect directly with you through Instagram and Facebook, but I can’t sit by and stay silent while these platforms continue to allow the spreading of hate, propaganda and misinformation,” Kim tweeted.
Twitter has conducted similar tests. Earlier this year, the microblogging site began prompting users to reconsider replies containing potentially harmful language before posting them.
Sources: twitter.com, stophateforprofit.org, instagram.com, socialmediatoday.com, adespresso.com, firstsiteguide.com, ewn.co.za, plan-international.org