The major social media companies are responding to demands that they do something about the abuse and harassment that often takes place on their sites.
While online abuse and harassment have always been a problem, the splintering of public opinion over the US presidential election, the hacking of “Ghostbusters” actress Leslie Jones’ Twitter account and the so-called Gamergate controversy have helped shed light on the size and complexity of the issue.
To help reduce targeted hate speech, abuse and harassment, Instagram this week announced a way for users to protect themselves against such abuse, following Twitter’s introduction of its own anti-harassment tools last month.
A survey from Data & Society conducted by Princeton Survey Research Associates International (PSRAI) showed how widespread the problem is.
According to the findings, 72% of internet users have witnessed harassing behavior online, and 36% have experienced some type of online harassment directed at them.
Teens and millennials are much more likely than older generations to witness and experience online harassment, and women under 30 are much more likely than men in the same age group to report experiencing online abuse or harassment.
Since September, Instagram has let users filter their comment streams based on specific keywords, and this week the company said users will soon be able to turn off comments on any post.
People with private accounts will be able to remove unwanted followers, giving them more control over who can view their posts. Users will also be able to anonymously report friends who may be posting about self-harm, prompting Instagram to reach out to those friends with information about support organizations.
Twitter’s tools let users mute entire conversation threads, as well as mute conversations based on keyword filters. Twitter users can also report offensive tweets for conduct that “directs hate against a race, religion, gender or orientation,” which, according to the company, will improve its ability to process the reports.
While Facebook has gotten a lot of flak for its lack of response to its fake news issue, the platform does have a set of community standards in place, which were updated last year to provide clarity about how it defines content such as hate speech, bullying, harassment and self-harm.
“These efforts won’t fully eradicate the problem, but they show that the companies are taking online harassment seriously,” said Debra Aho Williamson, eMarketer principal analyst.
“Twitter has been criticized for its response to harassment and hate speech, and although Instagram hasn’t seen the same level of criticism, these have been issues for the image-sharing app as well.”