The prevalence of hate speech on Facebook in the July-September period this year was 0.10 to 0.11 per cent, meaning that out of every 10,000 views of content on Facebook, 10 to 11 included hate speech, the company has said.
Revealing the prevalence of hate speech on its platform for the first time as it faces flak for an increase in such posts, Facebook said on Thursday that it proactively detects about 95 per cent of the hate speech content it removes.
“We calculate hate speech prevalence by selecting a sample of content seen on Facebook and then labeling how much of it violates our hate speech policies,” it said in a statement.
“Because hate speech depends on language and cultural context, we send these representative samples to reviewers across different languages and regions.”
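The arithmetic behind the prevalence figure described above can be sketched in a few lines. This is an illustrative reconstruction, not Facebook's actual code: the function name, sample data, and labels are all hypothetical, and a real measurement would involve weighted sampling and human review.

```python
# Hypothetical sketch of the prevalence metric: sample content views,
# label each one as violating or not, and report violations per 10,000
# views. All figures here are illustrative, not Facebook's data.

def prevalence_per_10k(labels):
    """labels: list of booleans, True if the sampled view showed hate speech."""
    if not labels:
        raise ValueError("empty sample")
    return 10_000 * sum(labels) / len(labels)

# Example: 11 violating views in a sample of 10,000 gives a prevalence
# of 11 per 10,000 views, i.e. 0.11 per cent.
sample = [True] * 11 + [False] * 9_989
print(prevalence_per_10k(sample))  # 11.0
```

A rate of 10 to 11 per 10,000 views therefore corresponds directly to the 0.10–0.11 per cent figure Facebook reported.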
Facebook admitted that defining hate speech isn’t simple, as there are many differing opinions on what constitutes hate speech.
“Nuance, history, language, religion and changing cultural norms are all important factors to consider as we define our policies,” it added.
Facebook first began reporting its hate speech metrics in Q4 2017, when its proactive detection rate was a mere 23.6 per cent.
“Today we proactively detect about 95% of hate speech content we remove. Whether content is proactively detected or reported by users, we often use AI to take action on the straightforward cases and prioritize the more nuanced cases, where context needs to be considered, for our reviewers,” Facebook said.
The company said it has invested billions of dollars in people and technology to enforce these rules, and that it has more than 35,000 people working on safety and security.