Meta claims to have removed 13.6 million pieces of content from Facebook and 3.3 million from Instagram for violating the company’s policies on inciting violence. The company says it removed most of the prohibited posts before they were reported by users.
Most of the hateful content requiring action was posted on Facebook, although hate speech accounts for less than 1% of content on both platforms.
The news comes in the wake of accusations that Facebook has misled investors and the public about its measures to combat hate speech and false information.
Facebook prohibits posts that attack people because of their faith, race, ethnicity, sexual orientation, and other sensitive attributes. The company uses artificial intelligence to search for images and text that potentially violate its policies, before referring them to human reviewers for a final decision.
Meta Chief Technology Officer Mike Schroepfer said that the company has developed software which can analyze posts for potential violations across multiple categories and in several languages at once. “The problems we are dealing with are always evolving and so is the way we approach them,” he said.