
Instagram Introduce Tough New Measures to Combat Online Abuse and Hate Speech


Instagram have recently announced tougher enforcement measures in an effort to curb the spread of abusive or hateful speech through direct messages. Currently, people who are reported for sending abuse via DMs on Instagram are prohibited from sending messages for a short period of time – but from now on, first-time offenders will have their accounts suspended and people who repeatedly send hateful messages will have their account deactivated.

This means that any user who has their account disabled for sending abusive messages won’t be able to make a new account to get around the ban. Instagram have also stated that they will cooperate with law enforcement on legal requests for data in cases of hate speech.

“Now, if someone continues to send violating messages, we’ll disable their account. We’ll also disable new accounts created to get around our messaging restrictions, and will continue to disable accounts we find that are created purely to send abusive messages.”

Instagram statement

The new enforcement update follows a number of recently introduced safety features that Instagram has developed to help users filter out unwanted or abusive interactions. These include comment filters that allow users to tailor conversations and filters to automatically block out specific harmful words or phrases.

There has also been encouraging progress with Instagram’s proactive AI safety feature, designed to automatically detect potentially offensive language that has previously been flagged or identified as abusive and intervene before a message is sent. Instagram have noted that the number of users who change or delete posts after receiving the alert shows it to be an effective addition to their suite of safety tools.

“It is encouraging to see Instagram continuing to protect users from people who are intent on spreading hate and abuse with these tougher enforcement measures in addition to their existing safety tools.”

Laura Lewandowski, Chief Policy Officer, The Cybersmile Foundation

If you are affected by anything touched on within this article, we can help you. Visit our Help Center or click on the blue logo icon at the bottom right of the screen to open Cybersmile Assistant, our smart AI support assistant.


What do you think about Instagram’s efforts to tackle hate speech and abuse? Let us know by contacting us or by tweeting us @CybersmileHQ.