Facebook tackles coronavirus misinformation, hateful memes with AI

Facebook’s artificial intelligence helps the social network spot hate speech before users report it.

Angela Lang / CNET

Facebook has doubled down on artificial intelligence to detect coronavirus misinformation and hate speech, but the social network notes that machines can still find it difficult to identify offensive content online.

On Tuesday, the world’s largest social network detailed the challenges its AI systems face in finding copies of posts that contain coronavirus misinformation and in recognizing hateful memes. Like other social networks, Facebook uses a mix of human reviewers and technology to identify content that violates its rules before users report it. But even as AI has progressed, misinformation and hate speech keep cropping up on Facebook and other social networks.

There is a lot at stake: misinformation about COVID-19, the respiratory disease caused by the coronavirus, can put people’s health at risk. Hoaxes claiming that drinking bleach can cure the coronavirus, or that wearing a mask can make you sick, continue to show up on social media despite efforts to stop them from spreading. Online hate speech can also fuel real-world violence. Facebook has been criticized for not doing enough to combat hate speech tied to genocide in Myanmar against the Rohingya, a mostly Muslim group.

Facebook Chief Technology Officer Mike Schroepfer said in a press conference that he knows AI isn’t the answer to every problem.

“These problems are fundamentally human problems about life and communication,” said Schroepfer. “So we want humans in control and making the final decisions, especially when the problems are nuanced.”

With nearly 2.6 billion monthly active users, Facebook sees AI as a tool for handling the “drudgery” of tasks that would otherwise take humans a long time to complete.

Looking for copies of coronavirus misinformation

Facebook has been removing harmful coronavirus misinformation and is working with more than 60 fact-checking organizations, including the Associated Press and Reuters, to review content on the social network.

In April, Facebook flagged around 50 million posts related to COVID-19. Since March, Facebook has removed more than 2.5 million posts about the sale of masks, disinfectants, surface disinfectant wipes, and COVID-19 test kits – items the social network has temporarily banned to prevent price gouging and other forms of exploitation.

Detecting copies of posts that contain misinformation can be difficult because users sometimes alter an image, for example with augmented reality filters. The pixels that make up an image also change when a user takes a screenshot. And two pictures can look identical yet contain different words.

“These are difficult challenges, and our tools are far from perfect,” Facebook said in a blog post. “Furthermore, the adversarial nature of these challenges means the work is never done.”

In one example, Facebook showed three nearly identical pictures of toilet paper overlaid with a news headline. One is a screenshot, so its pixels differ from those of the original image. Another carries the headline “COVID-19 isn’t found in toilet paper,” while the other two carry the misinformation “COVID-19 is found in toilet paper.”

These pictures look the same, but one contains misinformation.

Facebook

If a fact-checker marks a post as false, Facebook shows it lower in users’ news feeds and attaches a warning label. But removing this content can be a game of whack-a-mole, as thousands of copies can reappear on the site.

Using a tool called SimSearchNet, Facebook can identify these copies by comparing them to a database of images that contain misinformation.
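Facebook hasn’t published SimSearchNet’s internals, but the general idea of matching near-duplicate images can be sketched with a much simpler, classic technique: a perceptual “average hash,” which downsamples an image, records which cells are brighter than the mean, and treats images with nearly identical bit patterns as copies. The toy images and 8x8 hash size below are illustrative assumptions, not Facebook’s method, which uses learned embeddings at far larger scale.

```python
def average_hash(pixels, size=8):
    """Compute a tiny perceptual hash from a 2D grid of grayscale values.

    Downsample to size x size by block averaging, then record one bit
    per cell: whether that cell is at least as bright as the mean.
    """
    h, w = len(pixels), len(pixels[0])
    down = []
    for r in range(size):
        row = []
        for c in range(size):
            block = [pixels[y][x]
                     for y in range(r * h // size, (r + 1) * h // size)
                     for x in range(c * w // size, (c + 1) * w // size)]
            row.append(sum(block) / len(block))
        down.append(row)
    mean = sum(sum(row) for row in down) / (size * size)
    return [1 if v >= mean else 0 for row in down for v in row]

def hamming(a, b):
    """Number of differing bits; small distance means 'likely a copy'."""
    return sum(x != y for x, y in zip(a, b))

# Two "images": the second uniformly brightened, as a screenshot or
# re-encode might do. Their pixels differ, but their hashes match.
img = [[(x * y) % 256 for x in range(32)] for y in range(32)]
shifted = [[v + 3 for v in row] for row in img]
```

Because the hash compares each cell against the image’s own mean brightness, a uniform shift leaves every bit unchanged, which is the property that lets copies be matched despite pixel-level differences.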

Facebook posts promoting the sale of items the social network has temporarily banned, such as masks and hand sanitizer, can be difficult to spot when an image is cropped or otherwise altered. Facebook says it has a separate database and system it uses to recognize ads that users modify to avoid detection.

On Marketplace, a Facebook feature that lets users buy and sell goods, people take pictures of items against unusual backgrounds, in unusual lighting and at odd angles. Facebook said it improved its detection of banned goods by using data such as public images of masks and hand sanitizer, along with photos of products that merely look like them. Facebook is trying to train its AI systems to pick out the key object in a photo even when the background changes, Schroepfer said.

The system isn’t perfect, however. According to a report by The New York Times, people sewing masks by hand have been flagged by Facebook’s automated content moderation systems.

Proactively recognizing hate speech

Facebook said it has made progress in detecting hate speech before users report it.

In the first three months of 2020, AI proactively detected nearly 88.8% of the hate speech removed from Facebook, up from 80.2% in the fourth quarter, according to the Community Standards Enforcement Report the social network released on Tuesday. The company took action on 9.6 million pieces of hate speech content in the first quarter, up from 3.9 million in the previous quarter.

The social network attributed this surge to new technologies that enable machines to develop a deeper understanding of the meaning of different words. Facebook defines hate speech as a direct attack on people based on “protected characteristics” such as race, sexual orientation and disability. The company has also developed a system that enables machines to better understand the relationship between images and words.

Facebook uses techniques that match images and text against copies of content already removed from the social network. It has also improved its “machine learning classifiers,” which assess whether text and reactions are likely to be hate speech. And the company relies on a technique called self-supervised training, so it doesn’t have to retrain its systems from scratch to recognize hate speech in different languages.

Hate speech can be difficult for AI to recognize because of nuance and cultural context. Some people have reclaimed slurs, and others use offensive language on Facebook to denounce its use. Users also try to evade detection by misspelling words or avoiding certain phrases. And videos and images on Facebook contain a “significant” amount of hate speech.
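The misspelling evasion mentioned above is often met with a normalization step before any matching or classification runs. The sketch below is a hypothetical illustration of that idea, not Facebook’s system: the substitution table, the regex rules and the `blockedword` placeholder are all made up for the example, and a real system would learn evasion patterns rather than hard-code them.

```python
import re

# Hypothetical character substitutions evaders commonly use.
LEET = str.maketrans({"0": "o", "1": "i", "3": "e",
                      "4": "a", "5": "s", "@": "a", "$": "s"})

def normalize(text):
    """Collapse simple obfuscations before matching against a blocklist."""
    text = text.lower().translate(LEET)
    text = re.sub(r"(.)\1{2,}", r"\1", text)  # collapse runs: "loooser" -> "loser"
    text = re.sub(r"[^a-z ]", "", text)       # drop punctuation used as filler
    return re.sub(r"\s+", " ", text).strip()

# "blockedword" stands in for a term a policy team has banned.
BLOCKLIST = {"blockedword"}

def is_flagged(text):
    # Also strip spaces so "b l o c k e d w o r d" style splitting fails.
    return any(term in normalize(text).replace(" ", "") for term in BLOCKLIST)
```

Rule-based normalization like this only raises the cost of evasion; as the article notes, determined users keep shifting to spellings and phrasings the rules don’t cover, which is one reason classifiers are retrained continually.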

“Even experienced human reviewers can sometimes struggle to distinguish a cruel remark from something that falls under the definition of hate speech, or overlook a phrase that isn’t widely used,” Facebook said in a blog post.

Combining the words and the image creates a hateful message that can be hard for AI to identify.

Facebook

Memes containing hate speech are particularly challenging because machines must grasp the connection between words and images. A hateful meme might pair a picture of gravestones with the words “everyone in your ethnic group belongs here.” Taken separately, the image and the words may not violate Facebook’s rules. Put together, they create a hateful message.
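The gravestones example is essentially an AND problem: each modality alone is benign, and only the combination crosses the line. The toy sketch below illustrates that fusion idea with made-up stand-in "encoders"; real multimodal systems learn image and text embeddings (for instance from a CNN and a transformer) and learn the interaction, rather than using hand-written rules like these.

```python
def image_signal(image_tags):
    """Stand-in for an image model; pretend it tagged the picture's contents."""
    return 1.0 if "gravestones" in image_tags else 0.0

def text_signal(caption):
    """Stand-in for a text model detecting a reference to a protected group."""
    return 1.0 if "your ethnic group" in caption.lower() else 0.0

def meme_score(image_tags, caption):
    """Fused score: high only when image and text combine into an attack.

    Multiplying the signals means neither modality alone can cross a
    review threshold, which mirrors why unimodal checks miss such memes.
    """
    return image_signal(image_tags) * text_signal(caption)
```

A classifier that looked only at `image_signal` or only at `text_signal` would clear both halves of the meme; the fused score is what captures the hateful combination.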

Schroepfer couldn’t say whether Facebook has seen an increase in hate speech against Asians because of the coronavirus pandemic.

However, the company has seen a huge change in behavior across the social network due to the pandemic.

“One of the challenges with hate speech in general is that it changes and is contextual, based on knowing what’s going on in the world,” he said.

On Tuesday, Facebook also released a data set of more than 10,000 examples of hateful memes so researchers can help the social network improve its hate speech detection.

The company also launched a competition called the Hateful Memes Challenge, which includes a $100,000 prize pool. Participants in the challenge, which is hosted by DrivenData, will build models trained on the hateful memes data set.
