More harm than good? Twitter struggles to label misleading COVID-19 tweets

Twitter uses labels to guide users to trusted information about COVID-19.

Image by Pixabay / Illustration by CNET

For the latest coronavirus pandemic news and information, visit the WHO website.

The automated technology Twitter started using this month to flag coronavirus misinformation is making mistakes, raising concerns about the company's reliance on artificial intelligence to review content.

On May 11, Twitter started labeling tweets that spread a conspiracy theory claiming 5G causes the coronavirus. Authorities believe the false theory has led some people to set cell towers on fire.

Twitter removes misleading tweets that encourage harmful behavior, such as damaging cell towers. Other tweets that don't pose the same danger but contain false or disputed claims get a label that redirects users to trusted information. The label reads "Get the facts about COVID-19" and leads users to a page with curated tweets debunking the 5G coronavirus conspiracy theory.


However, Twitter's technology has made numerous mistakes, labeling tweets that refute the conspiracy theory and provide accurate information. Tweets linking to stories from Reuters, the BBC, Wired and Voice of America about the 5G coronavirus conspiracy theory have been flagged. In one case, Twitter applied the label to tweets sharing a page the company itself published, titled "No, 5G doesn't cause coronavirus." Tweets containing words like 5G, coronavirus or COVID-19, or the hashtag #5GCoronavirus, were also incorrectly flagged.

Experts say the mislabeled tweets can confuse users, especially if they don't click the label. Because Twitter doesn't notify users when their tweets are labeled, they are unlikely to know it has happened. Twitter also gives users no way to appeal the labeling of their posts.

"Mislabeling does more harm than not labeling, because people rely on it and trust it," said Hany Farid, a computer science professor at the University of California, Berkeley. "Once you get it wrong a few times, it's over."

Making mistakes

Twitter declined to say how many 5G coronavirus tweets have been flagged or to provide an estimated error rate. The company said its trust and safety team is tracking labeled tweets related to the coronavirus. The incorrectly labeled tweets identified by CNET had not been fixed. The company said its automated systems are new and will improve over time.

"We are developing and testing new tools so we can scale our application of these labels appropriately. There will be mistakes," a Twitter spokesman said in a statement. "We appreciate your patience as we work to get this right, which is why we're taking an iterative approach so that we can learn and make adjustments."


The company is labeling tweets about the 5G coronavirus conspiracy theory first, but plans to tackle other hoaxes.

With 166 million monetizable daily active users, Twitter faces a major moderation challenge given the flood of tweets flowing through the site. The company said its automated tools help employees review reports more efficiently by surfacing the content most likely to cause harm and helping prioritize which tweets to review first.

Twitter's approach to coronavirus misinformation is similar to Facebook's efforts to combat inaccurate content, though the world's largest social network relies more on human reviewers. Facebook works with more than 60 third-party fact-checkers worldwide to verify the accuracy of posts. If a fact-checker rates a post as false, Facebook displays a warning and shows the content lower in a person's News Feed to reduce its distribution. Twitter flags content automatically, without human verification.

Farid, the UC Berkeley professor, said he wasn't surprised that Twitter's automated system made mistakes.

"The difference between a headline pushing a conspiracy theory and one debunking it is very subtle," he said. "It's literally the word 'not,' and you need a comprehensive understanding of language that we don't have today."
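The failure mode Farid describes can be illustrated with a short sketch. This is not Twitter's actual system, whose internals are not public; it is a hypothetical keyword-based flagger, assumed here only to show why matching on terms like "5G" and "coronavirus" labels a debunking tweet exactly the same as the conspiracy claim it refutes.

```python
# Hypothetical sketch, NOT Twitter's real pipeline: a naive flagger that
# labels any tweet mentioning a trigger keyword, regardless of stance.
KEYWORDS = {"5g", "coronavirus", "covid-19", "#5gcoronavirus"}

def should_label(tweet: str) -> bool:
    """Flag a tweet if it contains any trigger keyword."""
    words = tweet.lower().replace(",", " ").split()
    return any(word.strip(".!?") in KEYWORDS for word in words)

# The conspiracy claim and its rebuttal are flagged identically,
# because keyword matching ignores the word "not":
print(should_label("5G causes coronavirus"))              # True
print(should_label("No, 5G does not cause coronavirus"))  # True
print(should_label("Stay safe everyone"))                 # False
```

Distinguishing the two sentences requires modeling negation and stance, which is the "comprehensive understanding of language" Farid says current systems lack.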

Instead, Twitter could take action against users with large followings who spread coronavirus misinformation. Researchers at Oxford University published a study in April showing that high-profile social media users such as politicians, celebrities and other public figures made about 20 percent of the false claims but generated 69 percent of total social media engagement.


Fooling Twitter's automated system

Some Twitter users are also testing the system by tweeting the words 5G and coronavirus, flooding the site with incorrectly labeled tweets.


Ian Alexander, a 33-year-old YouTuber who publishes technology videos, said he first noticed the new label on May 11 on a tweet that had nothing to do with the 5G coronavirus conspiracy theory. He decided to test Twitter's system by tweeting: "If you enter 5G, COVID-19 or Coronavirus in a tweet, it will appear below it …" The label automatically appeared on the tweet.

Alexander said the labeled tweets "could do more harm than good," because someone may only see the notice on their timeline without clicking through.

Other tweets with misleading coronavirus information slip through the cracks. Actress Fran Drescher, who has more than 260,000 followers, tweeted on May 12: "I can't believe that all commercials for 5G. Gr8 4cancer harm birds, bees and Mor viruses like Corona. Choose it bac." Another user's tweet included comments from Judy Mikovits, who is featured in "Plandemic," a viral video pushing coronavirus conspiracy theories. She believes 5G plays a role in the coronavirus pandemic. Neither tweet had a label. (CNET isn't linking to these tweets because they contain false information.)

Other social networks say they have successfully flagged false content. In March, Facebook put warning labels on around 40 million COVID-19 posts. When people saw these warning labels, Facebook said, 95 percent of the time they didn't go on to view the original content.

However, a study by MIT found that labeling false news stories can lead users to believe stories that haven't received a label, even if they contain misinformation. The MIT researchers call this phenomenon the "implied truth effect."

David Rand, a professor at the MIT Sloan School of Management who co-authored the study, said one possible solution is for companies to ask social media users to rate content as trustworthy or untrustworthy.

"It would not only help inform the algorithms," Rand said, "but it could also make people more discerning about what they share, because it gets them thinking about accuracy."
