Journalism is intended to check the power of government. Today, it appears as though journalism is failing in its ability to seek truth and report it, allowing an epidemic of fake news to engulf our screens.
Evidence of the effectiveness of fake news and its ability to spread can be seen in the 2016 presidential election.
The most circulated fake news stories concerning the election on Facebook produced more engagement than those from major news outlets, according to a BuzzFeed News analysis.
The most popular headlines included, “Pope Francis shocks world, endorses Donald Trump for president,” “Donald Trump sent his own plane to transport 200 stranded marines,” and “WikiLeaks confirms Hillary sold weapons to ISIS.”
Headlines dominating the fake news cycle during the 2016 election period. Photo courtesy of Sky News.
Some stories boasted nearly 2 million Facebook engagements in the three-month period leading up to the U.S. election, according to data from BuzzFeed.
The pervasiveness of social media cannot be overstated. Roughly 68 percent of U.S. adults are Facebook users, and roughly three-quarters of those users access Facebook on a daily basis, according to a study conducted by the Pew Research Center.
In a society where information has the potential to influence people’s perceptions and inform their decisions, organizations have developed various approaches to stop the spread of misinformation.
News Literacy
One of the ways people are working to counteract the effects of fake news is through increased news literacy.
A key component of news literacy is being able to judge the reliability of news and information, a skill that statistics show many people lack. Sixty-eight percent of people worldwide cannot distinguish reliable journalism from rumors or falsehoods, according to research from global communications firm Edelman. In addition, the Pew Research Center found that 88 percent of Americans feel the prevalence of fake news leaves them confused about what is accurate.
“News literacy is the only genuinely reliable antidote to the spread of fake news,” said Mira Cohen, director of education at the Ronald Reagan Library. “As fake news becomes easier to spread, consumers need to become more adept at asking important questions about their news sources.”
Statistics on news literacy rates around the world. Graphic by Josie Lionetti.
One of Cohen’s roles at the library is working on the Situation Room Experience, where students are assigned a fictitious role in a foreign policy crisis scenario.
The scenario requires students to make important decisions under pressure. The experience is an example of students fully engaging with the information they are given and using it to inform their decisions.
Yet 80 percent of American middle school students cannot tell the difference between “sponsored content” (advertising) and a news article, according to a 2016 study conducted by the Stanford History Education Group.
Cohen believes that education is the best way to increase news literacy skills for the next generation.
The News Literacy Project is an organization whose mission is to help students better discern fact from fiction.
Founded by Alan C. Miller, a former L.A. Times investigative reporter, the News Literacy Project is a non-partisan, nonprofit organization that helps teachers teach students how to navigate today’s information environment.
The idea for the organization came when Miller visited his daughter’s middle school for a career day. In talking with his daughter’s classmates, he began to see that they were unable to distinguish between different types of information online. The students also lacked appreciation for the role high-quality journalism plays in a democratic society.
Peter Adams, senior vice president of education for the News Literacy Project, is working on a news literacy e-learning platform called the Checkology Virtual Classroom.
Peter Adams, senior vice president of education for the News Literacy Project. Photo courtesy of News Literacy Project.
The platform is a free virtual learning hub that teachers can access, offering four core lessons on fundamental news literacy concepts.
The organization also runs news literacy camps in partnership with newsrooms and school districts.
Adams said he feels news literacy is essential in today’s media environment and is particularly important for helping future generations make informed decisions.
“Students today are inheriting the largest and most complex information environment in human history,” Adams said. “They have a lot of resources and access that no other generation had.”
News literacy matters not only because of how misinformation is presented but also because of what individuals do with it.
“If we are all voters in a democracy and I vote based on misinformation, I am actually voting in a way that can affect more than just me,” Adams said.
The goal of the News Literacy Project, Adams said, is not to change how people think or vote, but to ensure they do so based on accurate information.
“We want to educate people in order to allow them to respond with sharper, more nuanced and meaningful criticisms to the coverage they think is subpar,” Adams said. “News-literate consumers make journalism better and can provide nuanced feedback.”
Communications Professor Roslyn Satchel teaches multiple classes that deal with democracy and media literacy.
“It’s really becoming herding sheep for these sites who are in the business of manipulating public opinion,” Satchel said. “If lawmakers are looking to public opinion to decide what they should do in terms of lawmaking and the public is depending solely on misinformation, societies remain divided.”
Satchel’s research looks at the ways in which culture influences media, law and religion.
Before her time at Pepperdine, she worked with coalitions around the world that identified the media as a major source of harmful effects on audiences, including racism, sexism, classism and discrimination on the basis of ability or origin.
Satchel sees fake news as a creation of corporate interests looking to fragment vulnerable populations.
“Corporate interests have taken research intended for good about what moves and inspires people, and have used it to evoke certain emotions in audiences that make them act in certain ways politically,” Satchel said. “That fragmentation leads to political polarization in society.”
Even though polarization may not be the public’s fault, according to Adams, both he and Satchel believe it is still the public’s responsibility to stop fake news from spreading.
“People must take responsibility for themselves, speak up when they aren’t being represented adequately, and be active audiences,” Satchel said. “Ultimately it is the passivity that makes us vulnerable to manipulation as the public.”
Adams said that becoming an actively engaged reader is relatively simple and that 90 percent of misinformation can be caught in just a few steps.
Strategies for verifying information online. Graphic by Josie Lionetti.
According to Adams, the steps begin with distinguishing between standards-based and user-generated content, a distinction he said is surprisingly effective for detecting misinformation.
“The best thing anyone can do is search for the claim and the name of the publication,” Adams said.
Adams added that research conducted by the Stanford History Education Group found a difference in how students and professional fact-checkers go about evaluating sources.
“Students stay on the page, they begin to evaluate that article and don’t leave the page,” Adams said. “But professional fact checkers tend to immediately start opening new tabs and search the central claim and name of the organization that published it, and see if they can verify pieces of the article.”
Cohen also said there are ways to more effectively approach information that may be false.
“Check the website and see if it looks like a reliable source such as a newspaper in a city that you have heard of before,” Cohen said. “Do be careful as some fake news sites mimic actual news sites.”
Cohen warned users to be wary of any headlines that may seem outrageous.
“If it looks unusually incendiary, it may be fake news,” Cohen said.
Ultimately, news literacy is as important as ever, Cohen said.
“I think that with the advent of the internet and the constant availability of online content on smartphones, bad actors are more easily able to distribute misinformation to large numbers of people,” Cohen said.
Like Cohen, Adams said he does not believe news literacy is decreasing. Rather, there is a more consistent need for education in today’s world, especially for young adults.
“Educators are vulnerable to overestimating a student’s savviness when it comes to digital information,” Adams said. “This is because students grew up with digital technology and know how to use some aspects of social media or the web better than their parents or their teachers.”
Adams said that while students are good at using some tools online, they lack skills that are essential for identifying fake news.
“If I talk to 30 high schoolers, two of them maybe know how to do a reverse image search,” Adams said. “This is a survival skill right now and you absolutely have to be able to do that if you want to detect misinformation.”
A combination of news literacy skills and technology skills can be the most effective strategy in the fight against fake news, Adams said.
Technology
Sites like Facebook are coming up with their own ways to eliminate fake news and its effects, making it harder for foreign actors to manipulate audiences.
In response to claims that Facebook and its other apps — WhatsApp, Instagram, and Messenger — functioned as the chief propagators of fake news online, the company implemented changes.
These changes include investing millions into artificial intelligence and other technologies to combat the rise of misinformation.
Last year, Facebook revamped its News Feed algorithm to prioritize content shared by friends and family over posts from publisher pages. Additionally, Facebook enlisted third-party fact-checkers to review highly shared stories and published tips to help users detect and report fake news.
Tips from Facebook for flagging fake news. Graphic by Josie Lionetti.
On April 10, Facebook made a series of announcements about changes to the platform in hopes of regaining some of the integrity the company lost following the 2016 election.
Algorithms
“Click-Gap” is an algorithm that down-ranks links to news articles that receive large amounts of traffic from Facebook but have few links from other parts of the internet. Click-Gap is similar to “PageRank,” a system developed by Google’s founders when they first created the search engine.
PageRank analyzes linking patterns between websites to determine which ones deserve the most prominence in a web search.
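Facebook has not published Click-Gap’s inner workings, but the link-analysis idea behind PageRank can be sketched in a few lines of Python. The example below is a minimal, illustrative sketch only, and every site name in it is hypothetical; a page that draws traffic while earning no inbound links elsewhere, Click-Gap’s warning sign, lands at the bottom of the ranking.

# Illustrative sketch only: Facebook has not published Click-Gap's
# internals. This toy PageRank scores pages by the linking patterns
# between them. All site names below are hypothetical.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {
    "news.example": ["wiki.example", "gov.example"],
    "wiki.example": ["news.example", "gov.example"],
    "gov.example": ["news.example"],
    "clickbait.example": ["news.example"],  # links out, nothing links back
}
for site, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{site}: {score:.3f}")  # clickbait.example scores lowest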
However, some critics of Facebook and its new content moderation technologies feel the social network site is blocking free speech.
As Facebook attempts to reduce the growth of fake news, news sites across the internet have seen a decline in their Facebook traffic.
This has sparked outrage among website owners, particularly those of far-right sites who claim they are being disproportionately targeted. It is possible those sites could see their reach even further diminished by Click-Gap.
Mark Zuckerberg, Facebook CEO. Photo courtesy of AP.
Facebook walks a fine line, tackling fake news and maintaining community standards while also respecting people’s right to free speech.
“To what extent do we as a society want to shut down free speech?” Cohen said. “While we can all agree that Russian bots are an issue, we still want to ask, should Facebook be expected to shut out your uncle’s blog of false information and ranting opinions?”
In addition to the algorithmic technologies Facebook is working with, the company employs approximately 30,000 people for its content moderation team.
While Adams believes there is value in content moderation and in deleting or flagging inauthentic accounts, he said it is not a perfect solution.
“Algorithms can’t catch everything, and bad actors work to stay ahead of them,” Adams said.
The limits of Facebook’s algorithms became clear in March, when a shooter accused of killing 49 people at two mosques in New Zealand live-streamed the murders over the internet.
The company algorithmically flagged and pulled down 1.2 million uploads of the video, but because its algorithms identified the footage by matching video and audio, problems arose, according to Adams.
When people played the video on their computers and filmed the stream with their phones, the footage was altered enough to skirt the algorithm, allowing it to be reposted millions of times over.
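A toy fingerprint makes the weakness Adams described concrete. The sketch below is not Facebook’s actual system, which is not public; it applies a simple “average hash” to a hypothetical eight-pixel frame to show how re-filming a screen flips enough bits to break a match.

# Illustrative sketch only: Facebook's matching systems are not public.
# Glare, viewing angle and re-lighting from filming a screen change
# which pixels sit above the frame's average brightness, flipping hash
# bits until the copy no longer "matches" the original.

def average_hash(pixels):
    """Hash a grayscale frame (0-255 values): 1 if a pixel beats the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Count the bit positions where two hashes disagree."""
    return sum(a != b for a, b in zip(h1, h2))

original = [10, 200, 30, 220, 40, 210, 20, 230]    # toy 8-pixel "frame"
refilmed = [130, 120, 60, 180, 200, 90, 150, 100]  # filmed off a screen

distance = hamming(average_hash(original), average_hash(refilmed))
print(distance)  # 6 of 8 bits flipped: the re-filmed upload evades a match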
Adams said the same evasion tactics apply to hate speech online, such as when white nationalist and white supremacist groups convey messages by placing text on top of pictures.
“This type of approach gets around the algorithm unless a user flags it, so you need both technology and active audiences working together to stop the spread of misinformation and offensive content,” Adams said.
Despite some of the flaws in algorithmic technology to detect fake news, Adams said he is hopeful the innovation continues.
“I hope they never stop innovating automated technologies,” Adams said. “They need people on their side as well as consumers who need to be educated. It is up to consumers to catch that stuff on their own.”
Fact-Checking Sites
On its website, Facebook claimed it is committed to fighting the spread of false news. One way the company is working to do this is through fact-checking.
Facebook works with third-party fact checkers who are certified through the non-partisan International Fact-Checking Network to help identify and review false news.
For each piece of content up for review, the third-party fact-checker is asked: “How accurate is this story? Provide your rating below.”
Facebook’s third-party fact-checker product provides 9 rating options. Source: Facebook. Graphic by Josie Lionetti.
Additionally, Facebook lists more than 50 fact-checking affiliates on the company’s website.
“When fact-checkers rate an article as false, we show it lower in the News Feed — reducing future views by over 80 percent on average,” Facebook Product Manager Tessa Lyons wrote in a statement published on Facebook’s site.
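Lyons’ description suggests a simple scoring penalty, sketched below. The code is illustrative only; Facebook’s ranking system is not public, and every name and number in it is an assumption except the roughly 80 percent reduction she cited.

# Illustrative sketch only: Facebook's ranking code is not public. All
# names and numbers here are assumptions, except the "over 80 percent"
# view reduction, which comes from Lyons' statement.

RATED_FALSE_PENALTY = 0.2  # keep ~20% of reach, i.e. 80%+ fewer views

def feed_score(base_score, fact_check_rating):
    """Down-rank a story once third-party fact-checkers rate it false."""
    if fact_check_rating == "false":
        return base_score * RATED_FALSE_PENALTY
    return base_score

stories = [
    ("Miracle cure shared by a friend", 90.0, "false"),
    ("Local election results", 70.0, "true"),
]
ranked = sorted(stories, key=lambda s: feed_score(s[1], s[2]), reverse=True)
print([title for title, _, _ in ranked])  # the rated-false story sinks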
Despite the strides fact-checkers have made, Lyons wrote about the limitations of fact-checking in “Hard Questions,” a series from Facebook that addresses the impact of its products on society.
One of these issues is that fact-checking exists in only certain countries. As a result, many of the countries where fake news is created and distributed on Facebook lack the fact-checking capabilities to detect misinformation.
The problem is not limited to countries without fact-checking capabilities.
“Even where fact-checking organizations do exist, there aren’t enough to review all potentially false claims online,” Lyons wrote. “It can take hours or even days to review a single claim.”
However, Cohen believes fact-checking sites can be a helpful tool for users on their own, apart from platforms like Facebook.
“There are several fact-checking websites consumers can use, including snopes.com, politifact.com and factcheck.org,” Cohen said.
Government Regulation
More recently, Facebook CEO Mark Zuckerberg called for more government involvement to fight the spread of misinformation.
Zuckerberg wrote in an announcement made on his Facebook page on March 30: “I believe we need a more active role for governments and regulators. By updating the rules for the Internet, we can preserve what’s best about it — the freedom for people to express themselves and for entrepreneurs to build new things — while also protecting society from broader harms.”
Zuckerberg said in the announcement that Facebook needs regulation in four areas — harmful content, election integrity, privacy, and data portability.
Specifically, he believes it should not be up to Facebook to define what qualifies as a political advertisement and that the job should instead fall to the government.
Political advertisements that made inaccurate claims served to spread misinformation during the 2016 presidential election and have played a role in other elections around the world. These advertisements come from third parties working in foreign countries like Russia and Macedonia to influence various elections.
Zuckerberg highlighted some of the changes Facebook already made concerning political advertisements, such as their requirement for advertisers to verify their identities before purchasing political ads. Facebook also has a searchable archive which collects data about the advertisement and its owner, as well as the ad’s global reach.
“Deciding whether an ad is political isn’t always straightforward,” Zuckerberg wrote. “Our systems would be more effective if regulation created common standards for verifying political actors.”
In his statement, he noted that current online political advertising laws focus on candidates and elections rather than divisive political issues. Another hurdle is that some of these laws apply only during elections, even though misinformation is not confined to election seasons.
“We believe legislation should be updated to reflect the reality of the threats and set standards for the whole industry,” Zuckerberg wrote.
In his closing remarks, he noted that while he believes Facebook has an important part in addressing these problems, it must extend beyond the company.
“We’ve built advanced systems for finding harmful content, stopping election interference and making ads more transparent,” Zuckerberg wrote. “But people shouldn’t have to rely on individual companies addressing these issues by themselves.”
A New Landscape
Despite efforts by Facebook to combat fake news, the problem is encroaching on new territory.
The tactics foreign actors use to spread fake news are shifting toward private messaging services like WhatsApp, Signal and Telegram. This shift is based on the assumption that people are more likely to trust information shared by family or close friends.
The same assumption prompted Facebook to revamp its News Feed algorithm last year to prioritize content shared by friends and family over posts from publisher pages.
These services have become the front lines of digital disinformation. Because of the private nature of these applications, neither the companies that run them nor policymakers can review what is being posted, posing an obstacle to stopping the spread of misinformation.
Evidence of the effects of disinformation can be seen in Brazil’s most recent election, when WhatsApp was flooded with fake news about both candidates.
With another U.S. presidential election swiftly approaching, it is likely that interference from bad actors generating misinformation on these sites will play a role.
However, the various strategies available to combat fake news can begin to shrink misinformation’s effectiveness.
From furthering news literacy in the classroom to creating new algorithms, utilizing fact-checking and even increasing government regulation, steps are being taken. Whether those steps will stop the spread of fake news remains to be determined.
____________
Follow the Pepperdine Graphic on Twitter: @PeppGraphic