
Educating Social Media Users on Content Evaluation Aids in the Fight Against Misinformation

Social media platforms generally leave most users on the sidelines in the battle against false information. Platforms typically rely on machine-learning algorithms or human fact-checkers to flag content that is inaccurate or misleading.

“Just because this is the status quo doesn’t mean it is the correct way or the only way to do it,” says Farnaz Jahanbakhsh, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

She ran a study with her colleagues in which they gave social media users that authority instead.

First, they conducted a poll to find out how people avoid or filter out false information on social media. The researchers used their findings to create a prototype platform that allows users to rate the accuracy of content, specify which individuals they trust to rate accuracy, and filter items that show up in their feed based on those ratings.

In a field study, they found that users could accurately assess misleading posts without any prior training. Users also appreciated being able to rate posts and to view structured assessments from others. Participants used the content filters in a variety of ways: some blocked all inaccurate content, while others actively sought it out.

“This work shows that a decentralized approach to moderation can lead to higher content reliability on social media,” says Jahanbakhsh. “This approach is also more efficient and scalable than centralized moderation schemes, and may appeal to users who mistrust platforms,” she adds.

“A lot of research into misinformation assumes that users can’t decide what is true and what is not, and so we have to help them. We didn’t see that at all. We saw that people actually do treat content with scrutiny and they also try to help each other. But these efforts are not currently supported by the platforms,” she says.

Jahanbakhsh wrote the paper with Amy Zhang, assistant professor at the University of Washington Allen School of Computer Science and Engineering; and senior author David Karger, professor of computer science in CSAIL. The research will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing.

Fighting misinformation

The spread of online misinformation is a widespread problem, but the strategies social media companies currently use to flag or remove inaccurate content have drawbacks. When platforms rely on algorithms or fact-checkers to assess posts, for instance, some users see those interventions as infringing on their freedom of expression, among other objections.

“Sometimes users want misinformation to appear in their feed because they want to know what their friends or family are exposed to, so they know when and how to talk to them about it,” Jahanbakhsh adds.

Users often try to evaluate and flag false material on their own, and they try to help one another by asking friends and experts to explain what they are reading. But because these efforts lack platform support, they can fall flat.

Even when a person responds to an inaccurate post with a comment or an angry emoji, most platforms treat those actions as signals of engagement. On Facebook, for instance, that can mean more people, including the user’s friends and followers, will see the misleading content, the exact opposite of what the user intended.

To sidestep these pitfalls, the researchers set out to build a platform that lets users provide and view structured accuracy assessments on posts, nominate people they trust to evaluate posts, and apply filters to control what appears in their feed. Ultimately, they want to make it easier for users to help one another identify false content on social media, lightening the load for everyone.

To determine whether users would value these features, the researchers first surveyed 192 people recruited through Facebook and a mailing list. The survey showed that users are acutely aware of misinformation and try to track and report it, but they worry their assessments could be misinterpreted. They are skeptical of platforms’ efforts to assess content on their behalf. And while they would welcome filters that block unreliable content, they would not trust filters operated by a platform.

Using these insights, the researchers created Trustnet, a prototype platform modeled on Facebook. On Trustnet, users post actual, complete news articles and can follow one another to see what others share. But before a user can post content, they must rate it as accurate or inaccurate, or ask about its veracity; both the rating and the question are visible to other users.
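To make that workflow concrete, here is a minimal sketch in Python of how a rate-before-sharing rule could be modeled. This is an illustrative assumption, not the actual Trustnet implementation; the names (Post, Assessment, share) are invented for the example.

```python
from dataclasses import dataclass, field
from enum import Enum


class Assessment(Enum):
    ACCURATE = "accurate"
    INACCURATE = "inaccurate"
    QUESTION = "question"   # ask others about the content's veracity


@dataclass
class Post:
    author: str
    url: str
    # Assessments are attached to the post and visible to other users.
    assessments: dict[str, Assessment] = field(default_factory=dict)


def share(user: str, url: str, assessment: Assessment) -> Post:
    """A user must assess content (or ask about it) before sharing it."""
    post = Post(author=user, url=url)
    post.assessments[user] = assessment
    return post
```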

“The reason people share misinformation is usually not because they don’t know what is true and what is false. Rather, at the time of sharing, their attention is misdirected to other things. If you ask them to assess the content before sharing it, it helps them to be more discerning,” she says.

Users can also select trusted individuals whose content assessments they will see. They do this privately, in case they follow someone they are connected to socially (perhaps a friend or family member) but would not trust to evaluate content. The platform also offers filters that let users configure their feed based on how posts have been assessed and by whom.
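Continuing the hypothetical sketch above, these feed filters could plausibly be modeled as a function that combines a reader’s trusted-assessor list with the structured assessments attached to each post. Again, this is an assumed illustration rather than the platform’s actual code; the mode names are invented.

```python
def build_feed(posts: list[Post], trusted: set[str],
               mode: str = "hide_flagged") -> list[Post]:
    """Filter a feed using assessments from the reader's trusted raters.

    mode="hide_flagged": drop posts a trusted rater marked inaccurate.
    mode="only_flagged": show only such posts (some participants wanted
    to see what misinformation was circulating).
    """
    def flagged(post: Post) -> bool:
        return any(rater in trusted and verdict is Assessment.INACCURATE
                   for rater, verdict in post.assessments.items())

    if mode == "hide_flagged":
        return [p for p in posts if not flagged(p)]
    return [p for p in posts if flagged(p)]
```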

Testing Trustnet

Once the prototype was complete, they conducted a study in which 14 people used the platform for one week. Although the participants received no training, the researchers found that they could effectively assess content by drawing on their own knowledge, the content’s source, or the logic of the article. They were also able to manage their feeds with filters, though they used the filters in somewhat different ways.

“Even in such a small sample, it was interesting to see that not everybody wanted to read their news the same way. Sometimes people wanted to have misinforming posts in their feeds because they saw benefits to it. This points to the fact that this agency is now missing from social media platforms, and it should be given back to users,” she says.

“Users did sometimes struggle to assess content when it contained multiple claims, some true and some false, or if a headline and article were disjointed. This shows the need to give users more assessment options, perhaps by stating that an article is true-but-misleading or that it contains a political slant,” she says.

Since Trustnet users sometimes struggled to assess articles in which the content did not match the headline, Jahanbakhsh launched another research project to create a browser extension that lets users modify news headlines to be more aligned with the article’s content.

While these results show that users can play a more active role in the fight against misinformation, Jahanbakhsh warns that giving users this power is not a panacea. For one, this approach could create situations where users only see information from like-minded sources. “However, filters and structured assessments could be reconfigured to help mitigate that issue,” she says.

In addition to exploring Trustnet enhancements, Jahanbakhsh wants to study methods that could encourage people to read content assessments from those with differing viewpoints, perhaps through gamification.

She is also developing methods that let individuals publish and read content assessments through regular web browsing, rather than on a social media platform, because platforms can be hesitant to make such changes.

This work was supported, in part, by the National Science Foundation.
