Instagram’s anti-bullying efforts now extend to discouraging abusive behavior in posts, not just comments. The company announced today that it will begin warning users when they’re about to publish a “potentially offensive” caption on a photo or video being uploaded to their main feed.
To do this, Instagram is turning to artificial intelligence. AI-powered tools will analyze captions as they’re written and flag ones that appear offensive. When a caption is flagged, a notification will pop up explaining that it “looks similar to others that have been reported.” The prompt encourages the user to edit the caption, but also gives them the option of posting it unchanged.
The new feature builds on a similar AI-powered tool that Instagram introduced for comments back in July. The company says nudging people to reconsider potentially hurtful comments has shown “promising” results in its fight against online bullying.
This is the latest in a series of steps Instagram has taken to tackle offensive content on its platform. In October, the service launched a “Restrict” feature that lets users shadow ban their bullies, and last year it started using AI to filter offensive comments and proactively detect bullying in photos and captions.
Unlike Instagram’s other moderation tools, this one leaves the final call to the user: the AI merely flags a caption, and the person posting decides whether it crosses the line. That’s unlikely to stop the platform’s more determined bullies, but it has a shot at protecting people from thoughtless insults.
Instagram says the new feature is rolling out in “select countries” for now, but it will expand globally in the coming months.