Fake news, fact checks, and Facebook
Misinformation is a real problem. Meta's recent announcements may mean more of it.
It was 8pm in late October 1938, and as usual, Americans tuned into their radio stations before bed. On air the night before Halloween was the seventeenth episode of Mercury Theatre on the Air, a dramatic adaptation of a science fiction novel about a Martian invasion.
Produced as a fictional radio broadcast of the invasion, “War of the Worlds” was taken by many of its listeners as truth, not fiction. The show’s intensity, pacing, and unusual intermission led to many Americans believing that Earth was indeed being conquered by aliens. Listeners reported feeling terrified. (Aside: the original broadcast is worth a listen… they did those sound effects live).
The power of media remains very relevant. One well-produced radio drama persuaded people who should have known better that a far-fetched science fiction novel had come scarily true. The hordes of angry listeners sparked numerous newspaper articles and captured the national imagination.
“War of the Worlds” remains an example of how easily we humans start to believe things, even things that we should intuitively know aren’t real.
While propaganda is (probably) as old as humankind, it's especially concerning today because of the breadth of misinformation and our repeated exposure to it. Although optimistic views of the internet held that being so connected would correct myths and false information (and it can), frequent exposure to incorrect information is certainly a peril of the internet age.
And that matters.
When seeing is believing
Human psychology means that the more times we see or read something, the more likely we are to believe it—whether or not the information is actually true. This is called the “illusory truth effect.” We even start to like music more the more we’ve heard it—a phenomenon I can confirm personally.
This has certainly happened to me, and likely to you too. Remember those “facts” on the back of candy wrappers and cereal boxes about how humans swallow around 8 spiders each year in their sleep? I read variations of this idea repeatedly in childhood. This possibility haunted me for years (spiders are not my friends). I only found out in my thirties that this myth was never true. It was… fake.
Few of us, if any, are immune to falling prey to misinformation from well-meaning folks—and we’re even less immune to well-packaged disinformation from bad actors who purposefully anger and divide us. The more frequently we see that information, the more we start to believe it (and those beliefs are hard to undo—some effects of misinformation continue even after we understand it’s wrong!).
One study1 on the effects of repeatedly seeing false information noted:
Repetition can affect beliefs about truth. People tend to perceive claims as truer if they have been exposed to them before. This is known as the illusory truth effect, and it helps explain why advertisements and propaganda work, and also why people believe fake news to be true.
Even when we intuitively recognize something as false or unlikely, the more we are exposed to it, the more plausible this idea or “fact” becomes to us. And our emotions also influence whether we trust the information. This is particularly alarming when it comes to our political system:
The evidence suggests that global politics have already been strongly influenced by online propaganda campaigns, run by bad actors who understand that all they need to do to help a lie gain traction is to repeat it again and again.
Repetition leads to acceptance—and especially for those of us online, accepting false news as true can lead to real problems, because we see false information over and over. Today, over 20% of Americans—something like 37% of young people—get their news primarily from social media. And misinformation is everywhere on social media, from the left and the right. Neither political “side” is immune.
Most unfortunately, the more we see it, the more we tend to believe it.
Social media and false information
Let’s talk about social media. Now might be the time for me to admit that I worked for Facebook for over half a decade.2 And actually, I loved it. Loved getting to work on big problems. We reached literally half of the US population and over 3 billion users worldwide—more than a third of the world’s population.
Social media can be used for good (at least I believe so). During my final few years there, my job centered on writing nonpartisan information to encourage turnout in civic events and, occasionally, writing things to combat misinformation—things like: “If you’re in line when polls close on Election Day, stay in line because you can still vote.” (PS: That’s true). Our goal was simply to help people find clear, accurate information about common voting questions. The team I worked on helped register 4.4 million voters in the 2020 US election—and we didn’t care if they were Republicans or Democrats.3
We worked on other projects most people would think are unambiguously good. Our Facebook promotions helped solve a poll worker shortage in several states, sent 40% of the traffic to the Canadian census, roughly doubled the number of Filipinos who registered to vote in 2022, and helped millions of Indians find accurate info in a confusing bifurcated election. I remain very proud of that work.
In my heart, I continue to hope that social media can be used for good. But there’s clearly bad, too. Those billions of users have generated billions (trillions?) of pieces of content, some of it wholesome, some of it frankly awful. Content found on Facebook and Instagram has fueled terrorist attacks, led to ISIS recruiting young Westerners, and promoted real-world ethnic violence—and don’t get me started on revenge porn. These are all terrible.
Platform decisions, like when and how to moderate and demote content, can help mitigate some of the harms of social media. When I was employed at Facebook, tackling false information was a big focus. The tides have changed, it seems.
Meta’s announcement earlier this week included some notable tidbits:
Swapping official fact-checkers for community moderation.
No longer demoting content, even known false information.
Comments on censorship and free expression.
Other new initiatives, like limiting “frustrating enforcement actions”—in layman’s terms, putting people in “Facebook jail.”
I have thoughts. Let’s dive in.
The end of fact-checking
My feelings about moving from official third-party fact-checkers to community moderation are mixed. This could actually work out okay.4 What matters is how Community Notes works and how it is received by Facebook’s users. Will social media users believe that Community Notes are accurate and adequate?
Because, let’s be clear, there will continue to be a flood of compelling misinformation within Facebook’s products.
Can Community Notes serve as a robust fact-check? With the rise of AI (and it’s only just begun), will Community Notes appropriately identify digitally generated images, video, audio, and more that too many people believe are real? Will people of one ideological group dominate the Community Notes to the detriment of truth, or will people of good conscience on all political sides work together to robustly counter falsehoods, à la Wikipedia?
Only time will tell.
I do have concerns. Near the top: even if Community Notes are comprehensive, balanced, and accurate, will they work? Will people believe them?
Do fact-checks work?
Efficacy has always been my biggest concern about fact-checks on social media. If my memory is correct (caveat: it has been 2 full years), internal company research indicated that the perceived bias of fact-checkers often made Facebook’s third-party fact-checks less effective and, perhaps at times, counter-productive. External research has found the same.5
With this in mind… perhaps moving to a new method isn’t the end of the world, provided people are more likely to believe that corrections made via Community Notes are accurate.
Belief is the important thing, because the independent fact-checks weren’t necessarily working: liberals were usually more inclined to trust them than conservatives were.
Fact or fiction
Why did this happen? Because many prominent Republicans claimed they were specifically targeted by Facebook for their beliefs.
However, contrary to the claims of conservative talk-show hosts and lawmakers, the internal studies I remember clearly showed that Meta’s systems actually tended to flag, demote, or remove far more left-leaning content than right-wing content. Mother Jones was “censored” far more than Breitbart. This was especially true during the first Trump administration, when Facebook routinely made concessions to Republicans. (Also: yes, Facebook’s employees do lean left and were mad about this).
It’s the classic scenario we’ve been discussing—the more this “fact” is repeated, the more we believe it. The perception remains. Republicans still believe they were censored more than their left-leaning counterparts on the platform, even when the evidence doesn’t support that.
In fact, when I started at Facebook back in 2017, I fully expected to find the company censoring conservatives—and was pleasantly surprised that this phenomenon was more imagined than real. Facebook’s auto-detection censored at least as much left-leaning content, if not more. Now I’m passionate about correcting this misinformation. In reality, straight-up political censorship by Facebook was rare, and seldom intentional when it did happen.6
At the end of the day, Facebook’s independent fact-checking program wasn’t effectively ending misinformation. And pushback against the program may have occasionally backfired, causing false information to be shared more widely (“they don’t want you to see this!”). So perhaps it’s time for a change.
In defense of fact-checking
That said… I also believe fact-checking as a profession has a place. Even biased people can check their biases.
Because yes, fact-checkers, like most journalists, tend to be politically left-leaning. I certainly agree that it would be better to have more balance of personal political views among journalists (one reason why I’m writing!). However, this doesn’t mean that independent fact-checkers routinely make political decisions at the expense of the truth. Trained journalists focused on fact-checks have long committed to a robust code of ethics and nonpartisanship (things I clearly care about).
What’s more, independent fact-checkers pride themselves on their commitment to accuracy over partisanship. At the launch of Factcheck.org back in 2003, longtime journalist and political fact-checker Brooks Jackson wrote:
Our goal here can’t be to find truth — that’s a job for philosophers and theologians. What we can do here is sort through the factual claims being made between now and election day, using the best techniques of journalism and scholarship.
And I can think of no better job for a journalist than holding politicians accountable for getting the facts right, regardless of their party or political philosophy.
Just like my mostly left-leaning team of Facebook employees could very successfully run nonpartisan voter registration campaigns—and question our own biases—most fact-checkers can recognize and check theirs. Because there are objective facts out there. “Alternative facts” often don’t stand up to scrutiny. To quote Mr. Jackson again:
‘As the late Sen. Daniel Patrick Moynihan was fond of saying, “Everyone is entitled to their own opinion — but not their own facts.”’
Giving up the fight
In my view, Facebook’s move means it is essentially retreating from accountability for the false information it hosts. It’s not the first Big Tech firm to choose this route. Just another extraordinarily large, powerful company essentially abdicating responsibility for ensuring that its products do no (or at least less) harm to users.
This worries me.
Relying solely on community moderation means that malicious actors—think Russia, Iran, neo-Nazis, and more—will be better able to disseminate and amplify clearly false information. This misinformation has real-world consequences, like convincing parents not to vaccinate their at-risk children or causing victims of natural disasters to threaten federal disaster relief workers.
A lot of the programs that Meta is ending are the very things that expert fact-checkers believe can stem the tide of misinformation. From the earlier Harvard Kennedy School study:
“Experts agreed that social media platforms have worsened the problem of misinformation and were in favor of various actions that platforms could take against misinformation, such as platform design changes, algorithmic changes, content moderation, de-platforming prominent actors that spread misinformation, and crowdsourcing misinformation detection or removing it…. they should also factor in laypeople’s opinions about it and be vigilant about protecting democratic rights in the process.”
Now: Less moderation. Less deplatforming. Less removing of content. (Luckily, they are at least implementing crowdsourced misinformation detection).
Rereading “How is Facebook addressing false information through independent fact-checkers?”—an article that won’t be live for long—made me nostalgic for the time when Facebook thought it had a role in making sure patently fake information didn’t overtake the unsuspecting masses. It no longer seems to think it should have much of a role at all.
As new Chief Global Affairs Officer Joel Kaplan put it:
“For less severe policy violations, we’re going to rely on someone reporting an issue before we take any action.… We are in the process of getting rid of most of these demotions and requiring greater confidence that the content violates for the rest. And we’re going to tune our systems to require a much higher degree of confidence before a piece of content is taken down.”
I can see how that rhetoric might appeal to free-speech enthusiasts, especially those without much insight into what a de-regulated platform looks like. But I am someone who wants to make the world a better place. And in reality, waiting for users to report issues means more bullying and harassment left online, more propaganda forwarded around the site, and more nefarious actors.
Facebook and Instagram users, beware.
Free expression and censorship
Like Mr. Zuckerberg7, I do believe in free expression—just within bounds. Yes, people should be able to express opinions that are not “politically correct” and talk about real issues (I just think this already happens). I want a place to freely discuss and debate ideas, especially when people engage in good faith. With a heck of a lot of users and years of my own investment, I’d like that place to be Facebook.
We just need guardrails. Counter to some of the political rhetoric from the incoming administration, there has got to be a place for a private internet company to draw lines around what is acceptable behavior and what is not. To say “we want less bullying on our site.” Or for Congress to set such policies for all internet companies, instead of leaving them to the whims of a few powerful guys.
I do wonder how real Mr. Zuckerberg’s commitment to free speech is. Of late, he seems to be taking a page out of the playbook of another tech titan.
I’ve been seriously less than impressed by the “free speech” of Mr. Musk in action. When acquiring the company formerly known as Twitter, Mr. Musk promised free expression. In practice, he routinely demotes and blocks speech that he doesn’t agree with (especially tweets that criticize him personally). Perhaps more egregiously than his predecessors. Silencing critics… the opposite of free speech.
Mr. Musk’s approach to moderation also hasn’t worked out in a business sense: reducing moderation has made Twitter/X far less palatable to the advertisers it depended on. Financial analysts estimate that the now-private company is worth 80% less than when he bought it (this was, however, before the election). I expect that Facebook will lose ad revenue from this decision in the long term, especially if misinformation proliferates… but at least that’s a financial incentive to clamp down.
Incentivized disinformation
Unfortunately, if X’s move away from content moderation is an accurate warning, false info will ramp up and Community Notes will fail to address it. Election and political misinformation are rife on X—and apparently lucrative too.
Some users on X who spend their days sharing content that includes election misinformation, AI-generated images and unfounded conspiracy theories say they are being paid “thousands of dollars” by the social media site.
False information can be outrageous and sticky, begetting likes and shares and follows, a bigger audience, and sales. Money and power talk. So incentives to create misinformation continue.
X’s precedent makes me worried that Meta’s changes will just mean more misinformation and more real-world consequences. Still, they are different sites, with different users and different CEOs. Maybe, just maybe, Meta will implement its community-generated moderation better. I personally have confidence that Facebook’s employees will do their best with Community Notes to identify and counteract egregious misinformation—that’s a plus!
Now, not all the announcements Meta made are bad. In fact, Facebook desperately needs to improve its appeals process to restore content that has been unjustly removed. It could certainly stand to improve transparency about why content was removed in the first place; the experience can be jarring. I know this firsthand, having recently been put in temporary “Facebook jail” after a very benign, apolitical comment was removed. The experience was frustrating even with insider information… I once worked on a project to improve transparency around what’s colloquially known as Facebook’s “shadow bans.” Clearly it didn’t work.
So I’m here for that change. I hope they put more humans to work restoring accounts and Facebook privileges. Hacked and lost accounts are a real issue; so are shadow bans. I’m glad they’re addressing it.
If Meta manages to create a vigorous Community Notes product, if its users are more likely to accept those fact checks and therefore combat misinformation, and if it continues to fight obviously false information using other avenues, there is hope for a future with less egregious misinformation reaching millions of people around the globe.
May it be so.
Note: For other moderate perspectives on the recent announcements by Meta leadership, I recommend Nate Silver’s piece on the rise and fall of fact-checking, and “Meta runs back towards politics” by Katie Harbath, former Facebook public policy director and longtime Republican.
“The effects of repetition frequency on the illusory truth effect,” Aumyo Hassan and Sarah J. Barber.
My time at Facebook spanned mid-2017 to late 2022. During that time, I worked on diverse products, from civics, to donations, to the launch of Facebook Dating, to what happens to your Facebook account when you die, to simply getting people back into hacked accounts (a pet passion of mine as an employee). Super interesting stuff!
To my knowledge, helping register 4.4 million Americans in 2020 was the biggest and most successful voter registration campaign in US history, at least in terms of absolute voters registered. Pretty neat.
I have a bias towards clamping down on easily disproven myths—and I do think moderation is important. Zero moderation would make Facebook a cesspool for bad content. My perspective comes from knowing a fair bit about what kind of “violating” content, from ISIS recruitments to revenge porn, is found on the platform without moderation. And even benign (but easily disproven) misinformation can have real-world consequences. That said, I do agree that Facebook should have a far more robust system for appeals and restoring platform privileges for those in “Facebook jail.”
Some of the interventions used by Facebook were noted by external researchers on the subject to be of mixed effectiveness in combatting misinformation:
There are disagreements on which interventions are most effective against misinformation… and on the extent to which interventions against misinformation are effective…. in isolation, individual interventions have limited effects… it is only in combination that they can have a meaningful impact. (Note: I removed in-text citations to enhance readability.)
That’s not to say higher-ups at Facebook don’t make political concessions to the current party and administrations in power. They do.
In conversation, I usually refer to Meta’s CEO as “Cousin Mark.” In keeping with my style guide for Middling, “Mister” it is.