The State of Fake News on Social Media in 2019

By Toby Cox / 5 September 2019

“Fake news” is becoming both more widespread and convincing as social media’s reach expands and technology advances. Most people report seeing fake news on social media at least once a month, and although this negatively impacts people’s views of social media, it doesn’t deter them from using these platforms.

Collette McLafferty had barely started her day on June 9, 2014, when she saw a headline in the New York Post that claimed she was being sued for $10 million for being “too old, ugly, and untalented” for her job in a P!NK cover band. 

In a matter of hours, new headlines rolled in from Time, Yahoo! News, and CBS Radio, and McLafferty’s phone, email, and social media accounts were saturated with messages. The chaos settled after three days, but the damage had already been done. 

“There were pages and pages of search results on Google referring to me as a ‘bad, ugly singer,’  and the girl who ‘got sued for ruining the P!NK tribute band,’” McLafferty said. “It knocked [off] almost 20 years of career highlights.”

McLafferty was not actually being sued for being a “bad, ugly singer,” but it was too late. When anyone typed “bad, ugly singer” into Google, the search results featured McLafferty. This dealt a blow to her business’s reputation until she reclaimed the condemning keywords in her 2018 book, “Confessions of a Bad, Ugly Singer.”

McLafferty’s story shows how fake news and the virality of misinformation can damage individuals and businesses.

Fake news is more than a buzzy phrase. It represents a threat to the validity of information, causing people to question anything they read online and making "reality" seem less absolute. 

The Manifest surveyed 537 U.S. social media users to learn how people gauge their own ability to spot fake news and how fake news influences their use of social media, if at all. 

Our Findings

  • Fake news is a bipartisan issue: Nearly all liberals (92%), moderates (94%), and conservatives (94%) think fake news on social media is a problem.
  • Nearly all social media users (97%) say they are confident in their ability to recognize fake news on social media, but experts say this may change as fake news becomes more convincing.
  • Fake news remains widespread: More than half of people have seen fake news on Facebook (70%) and Twitter (54%) in the past month, and many have also seen it on YouTube (47%), Reddit (43%), and Instagram (40%).
  • Although 53% of consumers say fake news negatively impacts their opinion of social media, these same consumers are unlikely to change how often they use social media. Just 1% of people say they would cancel their Facebook account as a result of fake news.

Fake News Is a Bipartisan Issue

As the 2020 U.S. presidential election looms, perhaps the only thing people on opposite ends of the political spectrum can agree on is that fake news on social media is an issue. 

Nearly all liberals (92%), moderates (94%), and conservatives (94%) think fake news on social media is a problem, which means that fake news is not a “liberal” issue or a “conservative” issue — it’s an “everyone” issue.

More than 90% of liberals, moderates, and conservatives believe fake news on social media is a problem.

The 2016 U.S. presidential election elevated “fake news” as a buzz phrase that refers to misleading, inaccurate, or falsified information that is presented as truth. 

“Fake news became used as a weapon with a lot of tools behind it we’d never had before, such as social media, data mining, and artificial intelligence,” said Anna Liotta, founder of The Generational Institute and author of Unlocking Generational CODES.

After the 2016 election, people began wondering if fake news played a role in helping President Donald Trump win the election. 

Stanford researchers, The Guardian, NPR, Vox, and The New York Times all published reports on the impact of fake news on the election and concluded that although fake news is unlikely to be the reason President Trump won, it distorts reality and polarizes people with different political beliefs, feeding people what they want to hear instead of actual news. 

New technologies such as artificial intelligence and capabilities such as big data present challenges on the fake news front.

For example, the Cambridge Analytica scandal showed how seemingly innocent personality quizzes (e.g., “What type of pizza are you?”) could collect data on users, such as personality traits, beliefs, and interests. 

“Big data can be linked to fake news because of the way diverse data sets can be leveraged to make secondary inferences about people, including to figure out their personal beliefs, inclinations, and political leanings,” said Ray Walsh, a digital privacy expert at ProPrivacy, a privacy education and review site.

When collected en masse, big data like this can be used in targeted fake news and propaganda campaigns. 

Even something as simple as Facebook likes can reveal details about individual social media users. 

When Facebook expanded its reactions options in February 2016, it gave users the opportunity to respond emotively to content on a micro-level. 

On Facebook, people can like or love posts or indicate whether a post shocked them, made them angry, or made them laugh.

While this ability to react may seem innocent, it opens a window of opportunity to track users’ beliefs and infer what types of fake news might appeal to certain types of users.
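To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of how per-topic reaction counts could be rolled up into a crude “affinity” profile. The reaction names mirror Facebook’s public reaction types, but the weights, scoring, and topic labels are invented for illustration; this is not how Facebook or any ad platform actually models users.

```typescript
// Hypothetical illustration only: a crude way that per-topic reaction counts
// could be turned into an "affinity" profile. The weights and topics are
// invented for this sketch and do not reflect any real platform's model.

type Reaction = "like" | "love" | "haha" | "wow" | "sad" | "angry";

// Assumed weights: stronger emotional reactions count more toward engagement.
const WEIGHTS: Record<Reaction, number> = {
  like: 1,
  love: 2,
  haha: 1.5,
  wow: 1.5,
  sad: 2,
  angry: 2.5,
};

interface ReactionEvent {
  topic: string; // e.g., "election" or "recipes" (hypothetical topic labels)
  reaction: Reaction;
}

// Aggregate one user's reactions into a per-topic affinity score.
function affinityProfile(events: ReactionEvent[]): Map<string, number> {
  const profile = new Map<string, number>();
  for (const { topic, reaction } of events) {
    profile.set(topic, (profile.get(topic) ?? 0) + WEIGHTS[reaction]);
  }
  return profile;
}

// The topics a user reacts to most strongly hint at which content,
// accurate or fabricated, is most likely to engage them.
const example = affinityProfile([
  { topic: "election", reaction: "angry" },
  { topic: "election", reaction: "angry" },
  { topic: "recipes", reaction: "like" },
]);

console.log([...example.entries()].sort((a, b) => b[1] - a[1]));
// -> [ [ "election", 5 ], [ "recipes", 1 ] ]
```

Even a toy model like this shows why micro-level emotional signals are valuable to anyone trying to target content, whether it is accurate or not.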

Although people might be more aware of fake news than they were during the 2016 U.S. presidential election, the technology that supports fake news has advanced. In the months leading up to the 2020 U.S. presidential election, people will likely see more convincing fake news on their news feeds, catered to their interests and beliefs. 

Fake News Comes in Many Forms, and Some Are More Convincing Than Others

The phrase “fake news” encompasses a wide range of formats for presenting misinformation, from outrageous headlines to more intricate deep fakes, yet people think they can spot it.

Nearly all social media users (97%) are confident in their ability to recognize fake news on social media.

97% of people are confident they can recognize fake news on social media.

Although people believe they know when news is fake, fabricated stories still slip by people’s defenses. 

“A lot of people today just read the headline, and they believe it,” said Brandi Zatorski, marketing manager at LYFE Marketing, a social media marketing agency in Atlanta. “You need to read the whole article and then crosscheck it across multiple sources.” 

Sometimes, a headline can be a red flag that a story is fake, while other times, the whole article must be read to know if it is real. 

For example, Daria Serdiuk, marketing manager at Chanty, a SaaS company building a team communication and collaboration tool, knew a news story on Facebook about a lottery winner being arrested for dumping $200,000 worth of manure on his ex-boss’s lawn was fake.

“Right off the bat, I could tell it was fake because it seemed too comedic to be true,” Serdiuk said. “Another red flag was the way the website looked when I landed on it. The ads and the overall look were a dead giveaway that everything was fake.”

Serdiuk thought critically about the story and the quality of the website before coming to the conclusion that it was fake. 
 

In 2018, a fake news story about a lottery winner dumping manure on his ex-boss’s lawn circulated on the internet.

Serdiuk’s intuition was correct: The story was fake and was originally published on World News Daily Report, which boasts the motto “where facts don’t matter.” 

To read the disclaimer about the satirical nature of the articles, users must scroll all the way to the footer. 


“World News Daily assumes all responsibility for the satirical nature of its articles and for the fictional nature of their content. All characters appearing in the articles in this website – even those based on real people – are entirely fictional and any resemblance between them and any person, living, dead, or undead, is purely a miracle,” the footer reads. 

In the event its near-ridiculous headlines aren’t enough of a giveaway, World News Daily Report’s motto and disclaimer should be enough to let people know its stories are fake news. 

The problem is that these disclaimers do not appear in social media posts. While 94% of social media users post content on social media, another study found that 60% of people share posts after reading only the headline.

When shared on Twitter, for example, the story looks like any other news story, especially if Twitter users trust the person sharing it and read the headline only. 

For example, college football coach Mike Leach shared the story with his 182,000 Twitter followers in October 2018. 
 

Mike Leach shared the story about the lottery winner dumping manure on his ex-boss's lawn with his 182K Twitter followers in October 2018.

The tweet was posted without a disclaimer that it was a fictional story, yet it was retweeted 505 times and received nearly 2,000 likes.

Sensationalized stories are not the only form fake news takes, and experts say people’s confidence in detecting fake news may change as it becomes more convincing.

“I am confident that people creating fake news are constantly improving their game, and that’s why [people should] assume everything needs to be verified,” Liotta said. “I’m confident that they are going to get better and better at making it look more and more real.”

I am confident that people creating fake news are constantly improving their game, and that’s why [people should] assume everything needs to be verified.

Already, social media users see different kinds of fake news appear on their news feeds that look convincing — so convincing they’ve been shared by politicians and viewed by millions of social media users. 

Deep Fakes Present a Threat to Individuals, Politicians, Celebrities, and Society 

Technological advances often indicate progress, but what happens when these same technologies are applied maliciously to distort reality and blur the lines between fact and fiction to the point they are indistinguishable?

Deep fakes are photos or videos that have been manipulated or created to make it seem like someone is saying or doing something they’re not. 

“It’s like Photoshop on steroids,” said Justin Lavelle, chief communications officer for BeenVerified, a search engine that lets users look up people and businesses in public records.

Although deep fakes typically target people in high-power positions, new deep fake technologies can impact anyone. 

“From revenge-seeking exes to discontented employees, just about anyone with some digital know-how (or a small amount of cash to hire someone with that knowledge) could potentially manufacture a fake video,” Lavelle said.

Deep fakes take one of three main forms, according to Lavelle:  

1. Face swapping is when the face of someone in a video is replaced with someone else’s face. 

For example, in January 2019, a video of Jennifer Lawrence with Steve Buscemi’s face at the Golden Globe Awards circulated on the internet. 

In January 2019, a deep fake featuring Jennifer Lawrence with Steve Buscemi’s face circulated on the internet.

This video is meant to entertain, but face swapping technology is also being used maliciously by abusive exes or those seeking to tarnish someone else’s reputation.

2. Lip syncing is when an audio file is converted to “mouth points” on a video. 

This type of deep fake can make it seem like someone is saying something they didn’t actually say. 

For example, in June 2019, two artists released a deep fake on Instagram of Mark Zuckerberg appearing to boast about controlling people’s data and his partnership with Spectre, an ode to James Bond films. 

In June 2019, two artists released a deep fake on Instagram of Mark Zuckerberg.

The artists never intended for the video to pass for real but wanted to show how deep fakes could harm someone’s reputation and to see if Facebook would remove the video on Instagram (it didn’t). 

3. Puppeteering is when someone appears to do something he or she never did. 
 
For example, a Samsung lab demonstrated how puppeteering can be done using just one still image by creating a “living portrait” of the Mona Lisa.

Computer scientists created a deep fake of the Mona Lisa.

Mona Lisa appears to be talking and laughing in the video.

It is also possible for all three of these types of deep fakes to be combined. 
 
For example, a video of Tesla CEO Elon Musk singing David Bowie’s “Space Oddity” while in space used a video filmed in 2013 by Astronaut Chris Hadfield as the base and combined face swapping, lip syncing, and puppeteering technology. 

In August 2019, a deep fake of Elon Musk singing in space was released.

This video makes its fabricated nature explicit in the oxymoronic name of the channel (RealFakes) and by including “DeepFake” in the title. 

The technology used to create deep fakes provides entertainment today but also points to more severe implications as it becomes more sophisticated. 

Deep Fakes and Altered Videos Make It Difficult for People to Know What’s Real

Deep fakes could lead to instability and confusion about what is real and what isn’t.

“[Worst case scenarios include] political instability and chaos as a result of loss [of] faith in any media and the confusion about basic reality and a sense that truth is just your opinion,” said Aaron Lawson, Ph.D., assistant director of SRI International’s Speech Technology and Research (STAR) Laboratory. 

“It is likely people will simply learn to ignore images, video, or audio that they don’t already agree with based on the assumption that it has been faked,” Lawson said.

It is likely people will simply learn to ignore images, video, or audio that they don’t already agree with based on the assumption that it has been faked.

Lawson also said that people may start to assume that anything they don’t already agree with is “fake.” 

Although realistic deep fakes are outside of most organizations’ budgets, recent altered videos have shown that convincing fake news doesn’t require advanced technology and can be created with basic editing programs. 

For example, in May 2019, an altered video of Nancy Pelosi stammering her words and appearing drunk during a speech circulated on the internet and was even shared by President Trump with his large Twitter following.

The altered video of Nancy Pelosi shows how people can manipulate footage and damage people's credibility.

According to computer science professor Hany Farid in a CBS interview, this video was low-budget and low-tech, created using the functions of basic editing software, yet people still believed it.

The development of technology to make convincing deep fakes is considered a global threat: In the 2019 Worldwide Threat Assessment, experts predict that entities will create deep fakes to influence the results of the 2020 U.S. election. 

People should be careful as technology advances and leaves room for increasingly convincing deep fakes designed to misinform and manipulate viewers’ opinions. 

Fake News Disrupted the Media Industry and Inspired an Industry of Its Own

Fake news isn’t going away and is only becoming more of a problem as people with malicious intent use emerging tech to create realistic fake news, which has given rise to an industry dedicated to fact-checking. 

More than half of people have seen fake news on Facebook (70%) and Twitter (54%) in the past month.

People see fake news on Facebook (70%), Twitter (54%), YouTube (47%), Reddit (43%), and Instagram (40%).

Many social media users have also seen fake news on YouTube (47%), Reddit (43%), and Instagram (40%) in the past month.

In response, an industry dedicated to fact-checking and verifying “truth” has grown.

For example, TruthGuard is a website currently in beta that allows internet users to rate publications based on the validity of their content. 

“The idea of [TruthGuard] is simple: If you can rate doctors, teachers, restaurants, hotels, and other types of businesses, why not content producers?” its website reads.

TruthGuard depends on internet users to crowdsource reports of fake news. 

Internet users can use TruthGuard to see recent reports of fake news and read why these articles were flagged.

On its website, users can look up publications, see how other people rated different publications, and see instances of fake news reported.

TruthGuard also offers a free Google Chrome extension to make it easier for people to report fake news when they see it. 

TruthGuard offers a Chrome extension to make it easy for people to report fake news.

If users are reading something online they think is fake news, they can click the TruthGuard icon in their Chrome browser and fill out the form, which is sent to TruthGuard for verification. 
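As a rough sketch of how a reporting flow like this could work under the hood, the snippet below shows a browser-extension popup script that grabs the current tab’s URL and posts a report. The endpoint, payload shape, and form field names are hypothetical and are not TruthGuard’s actual implementation; the sketch also assumes a Manifest V3 Chrome extension with the “activeTab” permission.

```typescript
// Hypothetical sketch of a "report fake news" extension popup script.
// The endpoint and payload shape are invented for illustration and are
// not TruthGuard's actual API. Assumes a Manifest V3 Chrome extension.

const REPORT_ENDPOINT = "https://example.com/api/report"; // placeholder URL

async function submitReport(reason: string): Promise<void> {
  // Find the page the user is currently viewing.
  const [tab] = await chrome.tabs.query({ active: true, currentWindow: true });

  const response = await fetch(REPORT_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      url: tab?.url,
      reason,
      reportedAt: new Date().toISOString(),
    }),
  });

  if (!response.ok) {
    throw new Error(`Report failed with status ${response.status}`);
  }
}

// Wire the popup's form (a text area with id "reason") to the handler.
document.querySelector("form")?.addEventListener("submit", async (event) => {
  event.preventDefault();
  const reason = (document.querySelector("#reason") as HTMLTextAreaElement).value;
  await submitReport(reason);
  window.close(); // close the popup once the report has been sent
});
```

However TruthGuard implements it internally, the general pattern is the same: capture the page being viewed, attach the user’s explanation, and hand the report off for verification.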

The prevalence of fake news has inspired a wave of fact-checking companies dedicated to helping internet users separate fact from fiction and has encouraged internet users to report fake news. 

People Call Out Fake News When They See It

Fake news is more of an annoyance to social media users than a deterrent, and people don’t hesitate to call out those who share false information. 
 
Although 53% of consumers say that fake news negatively impacts their opinion of social media, these same consumers are unlikely to change how often they use social media.

People Aren't Likely to Change How They Use Social Media Because of Fake News

Of all the actions people can take after seeing fake news on social media, most said their use of social media would not change:

  • Facebook (53%)
  • YouTube (50%)
  • Twitter (49%)
  • Pinterest (49%)
  • Instagram (46%)
  • Snapchat (45%)
  • Reddit (45%)
  • LinkedIn (37%)

Furthermore, only 1% of people say they would delete their Facebook account after seeing fake news. 

Only 1% of people would delete Facebook because of fake news.

Some experts interpret this lack of change as an indication of indifference. 
 
“At the end of the day people really don’t truly care because if they did, they would need to make a change,” said Johnathan Dane, founder and CEO of KlientBoost, a digital advertising company. “The habits have already been formed.”
 
Just because people don’t change their social media use as a result of fake news, however, doesn’t mean they don’t care, and social media users are quick to point out fake news. 

For example, when a Twitter user tweeted a likely false story about her younger sister saying to a waitress, “First off, I ordered crab legs, not your attitude,” people took to the comments section to voice their opinion in true internet fashion: with snarky memes, infographics, and comments. 

People call out fake news using comments and memes.

Users commented with memes and Venn diagrams to discuss how this likely never happened. 

Social media users assume responsibility for the content that shows up on their newsfeeds by calling attention to fake news when they see it. 

For example, TruthGuard Chief Editor Sarah Bauder said she sees fake news several times per month. When Bauder sees fake news on Facebook, she chooses to not be an idle bystander. Instead, she: 

  1. Flags the post on Facebook
  2. Comments on the post explaining that it’s fake and why

Although people may not change their social media habits after seeing fake news, users exhibit a low tolerance for fake news by calling it out on social media through memes and comments. 

Fake News on Social Media Is Evolving as Technology Advances

Fake news is not unique to social media, but digital platforms provide the ideal breeding ground for cultivating and sharing fake news. Within hours, a fake news story can go viral, shared and seen by thousands, as proven by McLafferty’s story and the manipulated video of Nancy Pelosi. 
 
Most people use social media daily and are confident they can spot fake news, but fake news takes many forms, from obviously sensationalized headlines to deep fakes, which use artificial intelligence or skillful editing to produce realistic (but fake) “news” stories.
 
As a result, people should be skeptical of what they see on social media, including stories and advertisements. 

Social media marketers, digital marketers, and businesses that use social media ads should be aware of the red flags consumers look for when determining whether what they see on social media is trustworthy.
 
Businesses that fail to use due diligence when creating and sharing content or ads may harm their brand in the long run.

About the Survey

The Manifest surveyed 537 social media users in the U.S.

Most survey respondents are female (64%), and 36% are male.

About 42% of respondents are millennials (ages 18-34); 36% are Generation Xers (ages 35-54); and 22% are baby boomers and older (ages 55+).

Respondents’ political leanings are split among liberal (30%), moderate (41%), and conservative (29%). 
