Facebook advances AI used to detect hate speech

Facebook can now proactively find and flag hate speech more effectively. Meanwhile, the social media giant's AI Research group made BlenderBot, an advanced conversational AI system, open source.

Facebook has been able to remove more hate speech from its platforms thanks to an AI-based detection system that identifies offensive speech more effectively and in multiple languages.

In a May 12 blog post, Facebook said its increased success in identifying hate speech is due to expanding its proactive detection technology for hate speech to new languages, as well as better detection for English.

The boost in Facebook's automated content curation comes after other social media giants, including Twitter and YouTube, ramped up AI for curating content in March due to the COVID-19 pandemic. The companies said at the time that they were giving a bigger role to AI because working at home due to the pandemic limited workers' ability to curate content manually.

AI-powered content curation

Social media firms have been slow to respond to the growing threat of malicious content and are now trying to catch up, said Alan Pelz-Sharpe, the founder of Deep Analysis, an advisory firm in Nashua, N.H.

While machine learning and AI are helping organizations respond to this threat, the technologies must be used in conjunction with humans.

"There is no doubt that a lot of content, if not the majority, can be processed and filtered automatically through the use of machine learning and AI," Pelz-Sharpe said. "However, it is naive to think that it can all be curated automatically."

"There is a mountain of past content that can be used to train AI to be more effective in the future, but capturing and identifying intent is like fighting with fog; every time you think you have a grasp of it, you find things have changed," Pelz-Sharpe continued.

Meanwhile, Facebook said it has clearly benefited from using more automation to find hate speech.

The social media giant took action on 9.6 million pieces of objectionable content in the first quarter of 2020. Facebook reported that it found about 88% of that content before users reported it.

That's a big jump compared to the previous quarter, in which Facebook acted on 5.7 million pieces of objectionable content. The company found about 80% of the content before it was reported by users.
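Taken together, those figures imply a sizable jump in the volume caught proactively. The back-of-the-envelope sketch below, written in Python and using only the numbers cited above rather than anything Facebook reported directly, shows the rough quarter-over-quarter comparison.

```python
# Rough, illustrative math based on the figures above: total pieces of
# content acted on per quarter and the share found proactively before
# any user report. Not numbers taken from the blog post itself.
quarters = {
    "Q4 2019": {"actioned": 5_700_000, "proactive_share": 0.80},
    "Q1 2020": {"actioned": 9_600_000, "proactive_share": 0.88},
}

for name, q in quarters.items():
    proactive = q["actioned"] * q["proactive_share"]
    print(f"{name}: ~{proactive / 1e6:.1f}M of {q['actioned'] / 1e6:.1f}M "
          f"pieces flagged before users reported them")
```

By that rough math, proactive systems accounted for about 8.4 million of the 9.6 million pieces acted on in the first quarter, up from roughly 4.6 million the quarter before.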

Facebook also restored far less content after appeals at the start of this year compared to the third and fourth quarters of 2019.

Facebook, however, said in the blog post it does not have an estimate of how much hate speech is on its platform, and so cannot determine how accurate its automated systems are.

"We will see these firms rely ever more heavily on AI to automate analysis and to flag and remove malicious content, but it that work will always require some human intervention," Pelz-Sharpe said.

Conversational AI

In a related development, Facebook AI Research made BlenderBot open source on April 29. BlenderBot is an advanced conversational AI chatbot that Facebook claims blends empathy, knowledge and personality to create a more human-like chatbot. Facebook has had problems with chatbots in the past, having to take two offline in 2017 after they began communicating with each other in an unintelligible English-like language.

The chatbot, trained on social media posts, including many from Reddit, is built on a model with 9.4 billion parameters, which Facebook claims is 3.6 times as many as the largest existing system. It can "talk" in up to 14-turn conversation flows and can discuss almost any topic.

On its own, the bot likely doesn't have much commercial value. Yet, using the open-source code, enterprises could theoretically make their commercial bots more conversational.
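For developers who want to experiment, the released models are distributed through Facebook's ParlAI framework. The snippet below is a minimal, hypothetical sketch of loading the smallest released Blender model and exchanging a single turn; the exact model-zoo path and agent API are assumptions based on the release, so check the official documentation before relying on it.

```python
# Hypothetical sketch: one conversational turn with a released Blender model
# via ParlAI (pip install parlai). The model-zoo path below is an assumption
# based on the release; larger 2.7B and 9.4B variants ship alongside it.
from parlai.core.agents import create_agent_from_model_file

agent = create_agent_from_model_file("zoo:blender/blender_90M/model")

# ParlAI agents work on an observe/act loop; repeating the loop carries the
# dialogue history forward, which is how multi-turn conversations build up.
agent.observe({"text": "Hi! What did you do over the weekend?", "episode_done": False})
reply = agent.act()
print(reply["text"])
```

Wrapping that observe/act loop behind an existing bot's front end is, in principle, how an enterprise could layer Blender-style small talk onto a task-oriented assistant.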

"The core idea is to build desirable conversational skills," said Forrester analyst Vasupradha Srinivasan.

An "example is the difference in experience between a bot that copies and pastes policy information versus a bot that understands the policy statement and generates human-like words, paraphrasing the policy document," she said. The feat sounds simple, she added, but in reality, it's highly complex.

Still, Srinivasan continued, for current commercial applications, "it's important that buyers not get swayed simply by the AI and conversational buzzwords and focus on understanding what a feature delivers in terms of experience."

By making BlenderBot open source, Facebook likely hopes that the community will further advance the bot's capabilities.

"As the community continues to experiment, Blender continues to learn, assimilate and apply," Srinivasan said.
