The usual response to offensive content is a platform ban, but a recent study suggests deplatforming doesn’t quite work the way it is supposed to.
The study found that users banned from mainstream social media platforms like Twitter or Reddit tend to migrate to more lax platforms, where they exhibit higher levels of toxicity than before. The standard way to deal with a user who posts material deemed offensive is to ban them from the platform, but the social media landscape has changed enough that the problem is no longer that simple.
Since the beginning of social media, users have been able to post whatever is on their minds. Content sharing started out mostly among friends and family within a user’s circle, but over time these services have evolved into mass communication networks capable of connecting people across the globe. As a result, the chances of encountering wildly different views from the broader population have grown dramatically. When those views are deemed toxic, the solution is usually to simply remove the user from the platform. The problem is that this solution has not evolved along with the social media landscape.
According to a peer-reviewed study conducted by an international collaboration of researchers, users banned from social websites and apps for acting offensively or posting inappropriate content tend to quickly find another platform. The new platform, often with fewer guidelines, then allows them to post more frequently and without fear of being shut down. The researchers collected 29 million posts from Gab and used machine learning (with some human assistance) to match them to recently banned Twitter and Reddit accounts. After correlating accounts across these very different social sites, they found that about 59 percent of recently banned Twitter users created a Gab account not long after being removed; for banned Reddit users, the figure was about 76 percent.
Moving The Problem Elsewhere
After these individuals made new accounts on sites like Gab or Parler, the study found that their toxicity increased dramatically. When comparing the same user’s content on Twitter and Gab, the Gab posts were deemed far more offensive. In addition, after moving to a more lax platform, the same users tended to post more frequently, likely because they had found a more concentrated group of like-minded individuals. The results suggest that, while banning can silence offensive content on a given platform, it does little to curb misinformation and toxic content overall.
Reddit and Twitter do have systems in place to defuse offensive content without banning the user. For instance, Twitter used content moderation labels to identify misinformation during the pandemic. Reddit also employs “shadow-banning,” which removes the visibility of certain users’ posts without their knowledge: the user believes their posts are visible to others when in fact they are visible only to the original poster. These measures can also prompt a user to move to a different platform, but that outcome is less likely than when a user is banned outright. While the answer isn’t entirely clear, the latest study suggests that banning individuals completely doesn’t stop misinformation and offensive content from spreading.