Meta’s new policies have been in the news. As a former employee, I keep thinking about what these seemingly small tweaks could mean for our society. This post builds on another I wrote a few weeks ago—read Fake News, Fact Checks, and Facebook first.
Imagine that our hometown friend Joe Schmo keeps posting outrageous things all over Facebook. We want the best for him… but we’re worried. First, he’s posting about what he does with his critically endangered pet orangutan (probably illegal). Next, he makes contemptuous asides about his neighbors on the other side of the tracks—worrisome. Later, he attempts to sell ghost guns on the platform.
Which means our friend Joe is violating the social media company’s policies again, and again, and again.
What happens, if anything? Is he blocked, banned, or suspended? Does the consequence depend on who is doing the violating—or does the same action beget the same consequences for all users, even the famous ones?
The consequences
For social media in general, and Facebook in particular, ordinary Joe Schmo has been highly likely to face the prescribed outcome for whichever policy he breaks. If he routinely posts hate speech, or even borderline hate speech, his posts may be removed or demoted—racist comments included. When he crosses the line enough times, his Facebook privileges get revoked.
This is true regardless of political persuasion.
Perhaps his ex-girlfriend, Jane Drain (very creative, I know), posts a left-wing rant about how all men are awful and should die. Some construe this to be a threat specifically against our friend Joe. She starts sharing conspiracy theories—because yes, these happen on the left as well. Jane, like her ex, suffers the appropriate consequence for violating Facebook’s Community Standards: a “shadow ban,” having the post removed, or—for severe policy violations—having her whole account suspended.
Facebook has generally worked using a cascading series of penalties for rule-breakers. Many of these posts are automatically detected and removed, sometimes with appeals and a human reviewer in cases where Meta gets it wrong. Then the resulting consequence occurs. These “penalties” run the gamut from a 10-minute block on posting anything, to an hour pause, to a block on running ads, to a full account suspension for repeated and egregious violations. (Don’t livestream illegal activities on Facebook, friends.)
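To make the escalation concrete, here’s a minimal sketch of how a strike ladder like this could work in principle. It’s purely illustrative: the penalty names come from the list above, but the strike counting, the UserRecord structure, and the jump-to-suspension rule for severe violations are my own simplifying assumptions, not Meta’s actual system or code.

```python
# Purely illustrative sketch of an escalating "strike" ladder.
# The thresholds, structure, and severe-violation shortcut are assumptions
# for illustration only; they are not Meta's actual values or implementation.

from dataclasses import dataclass

# Penalties ordered from mildest to most severe (hypothetical ordering).
PENALTY_LADDER = [
    "10-minute posting block",
    "1-hour posting pause",
    "block on running ads",
    "full account suspension",
]

@dataclass
class UserRecord:
    strikes: int = 0

def apply_violation(user: UserRecord, severe: bool = False) -> str:
    """Record a violation and return the resulting penalty.

    Severe violations (say, livestreaming illegal activity) jump straight
    to the harshest penalty; otherwise each strike escalates one step.
    """
    user.strikes += 1
    if severe:
        return PENALTY_LADDER[-1]
    # Cap the index at the top of the ladder for repeat offenders.
    step = min(user.strikes - 1, len(PENALTY_LADDER) - 1)
    return PENALTY_LADDER[step]

joe = UserRecord()
print(apply_violation(joe))               # 10-minute posting block
print(apply_violation(joe))               # 1-hour posting pause
print(apply_violation(joe, severe=True))  # full account suspension
```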

Not so well targeted
Like I wrote last week, the idea that Republicans are targeted more than Democrats by these automated systems is empirically false.1 Notably, a decade of research shows that conservative content often has an advantage on Facebook. It’s far more accurate to say that Facebook’s system just gets it wrong sometimes, and it’s wrong across the board—whether you’re liberal or conservative. And gasp, Facebook may even block more left-leaning content.
There’s plenty of evidence for getting it “wrong”: a recent company announcement actively asserts that Meta makes mistakes… which all Facebook users already knew. Mr. Kaplan, the company’s new Chief Global Affairs Officer, noted that last December, for every 10 pieces of content that were flagged, demoted, or removed for violating company policies, Meta thinks it got it wrong 1–2 times.
Wrongly actioning 10–20% of flagged posts is certainly worth addressing. With literally billions of users across its apps (counting Instagram and WhatsApp here), even a 10% failure rate adds up to a mountain of user content wrongly flagged and removed—like people trying to start a fundraiser for Syrian hamsters.2
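For a sense of scale, here’s the back-of-envelope arithmetic. The daily action volume below is a hypothetical placeholder I chose for illustration, not a figure from Meta; only the 10–20% error-rate range comes from the company’s own estimate.

```python
# Back-of-envelope arithmetic on moderation mistakes. The daily action volume
# is a hypothetical placeholder, NOT a real Meta figure; only the 10-20%
# error-rate range comes from the company's stated estimate.

actions_per_day = 1_000_000                    # hypothetical volume of actioned posts
error_rate_low, error_rate_high = 0.10, 0.20   # Meta's own estimate of mistaken actions

wrong_low = actions_per_day * error_rate_low
wrong_high = actions_per_day * error_rate_high

print(f"Wrongly actioned posts per day: {wrong_low:,.0f} to {wrong_high:,.0f}")
# -> Wrongly actioned posts per day: 100,000 to 200,000
```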
And yes: far too many accounts are unnecessarily flagged, suspended, or shut down. That’s one reason Facebook needs a much better appeals and customer support system: for all the times these systems and even its human reviewers get it wrong. I strongly support efforts in this direction.3
Although, author’s note: getting it “right” 80–90% of the time is really an astounding technological feat. That’s a ton of false, misleading, scammy, harassing, or downright horrific content that people never see, or at least see less of.
Standards for thee and not for me
What happens when a public figure posts the same thing our friends Joe or Jane have been reprimanded for posting? When high-profile political candidates—or even the leaders of countries—post content that normally violates the policies?
What should the company do in these cases? I can think of at least three distinct approaches:
Option 1: Declare a “newsworthy” exemption
The normal policies are still around, and they pertain to ordinary folk like you and me, but they are waived for people at the top. The words and actions of public figures—the “elites,” if you will—simply have fewer consequences. In my mind, this is the least “fair” option, since it treats some users preferentially (although in many ways it mimics real life).
Now, the company’s rationale for a newsworthy exception may be legitimate, for the corporation at least. Many people do want to know what their leaders are saying. Blocking beloved celebrities like Taylor Swift or entertainment hosts like Tucker Carlson is certain to alienate many users: if their following is big enough, such actions create blowback, a boycott, and/or negative PR. Moreover, it’s likely in the company’s best business interest to keep top celebrities, politicians, and other influencers on its platform, because theirs is content other users want to see.
However, creating a “newsworthy” exemption comes at an interesting cost: the company essentially cedes its rulemaking power to those who qualify for newsworthy exceptions. Instead of bright, uncrossable lines, its standards are now up for grabs. Once the line is crossed, the offender, if they’re a powerful enough public figure, can keep crossing further and further with few consequences. More on this later. (Also, this piece had an interesting look at the harms of newsworthy content.)
Option 2: Enforce the policies equally for all people, regardless of rank or popularity
This method enforces company standards (“rules”) consistently, no matter who is doing the posting. Everyone is treated equally. This approach is, in my opinion, the most just: clear rules with clear, consistent penalties for breaking them—even if you’re famous, rich, and powerful.
It is, of course, important that the standards are:
Created after significant research and thought, with care to make them as fair and unbiased as possible (personal note: for Facebook, this was certainly true by the second half of the 2010s),
Clear enough for the average user to understand, and
Consistently enforced across the political spectrum (i.e., if you target any specific group with a violent threat, your post is removed, regardless of your political affiliation)
Unfortunately, Option 2 carries negative real-world consequences for the company’s image. When very influential and powerful people run afoul of the policies, they’ll accuse your company of everything from unfair enforcement to outright political persecution.
Remember, by Facebook’s own calculations, up to 20% of posts taken down are taken down wrongly, so mistakenly removing a famous user’s content will happen. It’s only a matter of who and when. (Another plug for better appeals.)
Option 3: Change the policies for all users so that everyone can do whatever the powerful person can do
Instead of holding ordinary users accountable and making an exception for powerful public figures, the company can simply change the policies so that everyone is treated the same… just with lower standards for all posts. A race to the bottom, if you will.
Changing the standards for everyone is also a fairly egalitarian way of approaching the issue—but one with a tradeoff. This method allows significantly more obvious misinformation, hate speech, spam, and so forth to remain on Facebook.
Evolution or revolution?
Essentially, the three approaches I’ve outlined trace the evolution of Facebook’s Community Standards and policy enforcement over the past decade, although of course I’m oversimplifying the approaches and tradeoffs.
Facebook started as sort of a wild west. Policies? What policies? Twenty-something tech guys didn’t need policies.4
Then the user base exploded and the internal team grew bigger and more diverse—read: beyond male college-age software engineers. Et voilà! Community Standards were born. By the early 2010s, the Blue App, plus its new acquisition Instagram, had created policies and generally enforced them across the board (Option 2 above).
Along with the techno-optimism of the era, this was perhaps the golden age of Facebook: a News Feed that was mostly family and friends, not ads; viral campaigns and memes like Kony 2012; “feel-good” causes like the ALS ice bucket challenge. I’m nostalgic just thinking about it.
During this time, the company mostly held the power over its platform—not so subject to the whims of powerful figures. Public figures and politicians generally complied with Facebook’s “rules” or they got kicked off the platform. And who wanted to lose that kind of reach?
The cusp of change
Power in enforcing Community Standards shifted mid-decade when a certain political leader who shall not be named rose to the forefront of American political discourse. Unlike typical politicians of the era, he consistently violated Facebook’s policies—particularly on misinformation and hate speech (for example, targeting specific minorities with his comments on Mexicans).
Let’s be honest here: an average user would have been kicked off the platform after a mere handful of violations.
But banning this leader didn’t happen right away. Instead, in 2016, Mr. Zuckerberg and team declared a “newsworthy” exception to certain posts rather than risk the political fallout of removing offending content or suspending the account of popular candidates. The newsworthy exception, of course, benefitted other influential users and politicians around the world, not just in the United States (particularly but not exclusively on the political right).
Equal enforcement returns…
Around January 7, 2021, Facebook tried moving back to Option 2, after critique from its own Oversight Board. This meant essentially ending the newsworthy exception. Finally, after years of exemptions, the company actually suspended the account of a sitting U.S. president—for policy violations that would have gotten your account or mine suspended long before.
I cannot overstate that he had repeatedly violated the policies for years prior. In fact, during 2020, a journalist copied the then-president’s posts word for word. The journalist’s account was suspended for using the exact same wording.5
But bans are deeply unpopular. In this case, the political fallout was real, probably cementing the longstanding myth that conservatives were targeted unfairly by Facebook—most users not realizing how long this politician had been uniquely exempted from the consequences of his posts. Ironically, the most consistent application of the rules was perceived as being the most “unfair.”
The former (now current) president who had been banned was eventually reinstated with “guardrails”. To get an idea of what “guardrails” means, this is what the company posted at the time:
“When there is a clear risk of real world harm — a deliberately high bar for Meta to intervene in public discourse — we act…. Like any other Facebook or Instagram user, Mr. Trump is subject to our Community Standards. In light of his violations, he now also faces heightened penalties for repeat offenses — penalties which will apply to other public figures whose accounts are reinstated from suspensions related to civil unrest under our updated protocol. In the event that Mr. Trump posts further violating content, the content will be removed and he will be suspended for between one month and two years, depending on the severity of the violation.”
These guardrails are now gone. At Mr. Zuckerberg and team’s bidding, these Community Standards are now on their third iteration. From what they’ve said publicly (and internal friends have mentioned), this means Meta is removing or changing these policies and will enforce fewer of its previous standards. More, not less, of high-profile users’ content is marked as “newsworthy,” and thus free from the restraints of Community Standards.
Some celebrate this as a win for free speech. While I’m a firm believer in the value of free expression, I certainly worry about the effects of lowering the standards for more people. Real-world harm can ensue. It’s quite a tricky issue, and I don’t know that there’s a perfect method for enforcing this. Maybe we should all just delete Facebook and throw away our smartphones? (Only half-joking.)
On the policies
Are Facebook’s policies perfect? No. Of course not. Meta, if you’re listening, you can hire me to help fix them ;)
Again, to the claims that Facebook’s Community Standards or other policies have historically favored liberals and unduly demoted conservative content: even if this were true—and repeated data about the reach of conservative posts on the platform says otherwise—I’m confident that the balance has never been nearly as skewed toward the left as some would have you believe. However imperfect the Community Standards may be, they generally have nonpartisan outcomes.
Even when Facebook gets things wrong, they’re wrong across the board.
Let me tell you a bit about the process of creating the policies. A varied group of people reviews each Community Standard very thoroughly, weighing the risk of limiting discussion against the risk of harming groups of people. Each point of view is considered carefully. I speak from experience, having actually helped rewrite Facebook’s policy for personal fundraisers (RIP) and having later interviewed for several policy jobs.
Writing consistent policies that can even be applied requires wrestling with some tough questions. Two quick examples:
Meta’s Community Standards generally disallow female nudity. That’s good: there’s far too much porn on the site. But blocking all nudity created backlash for censoring innocuous posts of women nursing their infants. (While I don’t post such pictures, I have found breastfeeding images empowering—and frankly, these pictures helped me understand how to get my baby to latch properly.)
Quite honestly, it’s hard to ban sexualized images and solicitation without infringing on breastfeeding mothers’ rights to post non-sexual images. Moreover, when it comes to nudity, where do you draw the line? Lingerie? Bathing suits? Society tends to consider some images decent and others indecent for public purview—and it’s not always based on how much skin is showing. Then… do you treat topless men the same as topless women? Is it sexist not to? So many nuances to consider.

I’ll keep this one shorter: free speech reigns in the U.S., where we tend to prioritize the right to say anything. This perspective is fairly unique. For instance, in Germany, Holocaust denial is illegal. How does one write and enforce a policy that can be used for both countries? With a user base spanning literally a third of the globe, Facebook’s content policies need to be generally enforceable yet responsive to local concerns. It’s a gargantuan task.
The future
I wrote this more to think through this messy issue than to persuade anyone to one side. You can clearly see my bias towards fairness: rules that apply to everyone. Moreover, I believe that a private company should be able to apply its rules consistently, regardless of how powerful its users are.
However, I want to be clear that reasonable, intelligent people can disagree on which option is optimal and best for society. Each has tradeoffs. Treating “newsworthy” content differently may mean less headache for the company, at the expense of actually introducing bias. “More free speech” also means more known misinformation distributed far and wide, with everything that comes along with that—in my view, a major contributor to our increasingly fractured society and declining trust in media. “More moderation” means more high-profile accusations of bias, and requires rigor to ensure that the policies themselves remain unbiased.
All to say, no method is perfect—and the tradeoffs are real. Because silly as it seems, social media policies influence real people’s lives.
Today, millions of highly engaged social media users essentially live in their own political bubbles. Facebook hosts an increasing array of fake accounts, many of them run by American enemies abroad purposefully to cause division (I think they’re doing a good job of it), along with an army of AI bots intended to spread political propaganda. Because of Meta’s recent decision to reduce moderation and enforcement, I imagine more of this content now goes unchecked.
As someone who has seen the effects of foreign disinformation campaigns and bad actors, lowering the standards sets off alarm bells for me. I am working to lower my personal consumption of social media, and to try to educate those in my circle more about media literacy.
A lesson
My parting thought is about power. When companies make exceptions to treat the wealthy, famous, well-connected, or powerful better than they do the average Joe or Jane, they cede much of their power to hold those people accountable later.
The seeds of some of our issues today may lie in Facebook executives’ decision nearly a decade ago to give leeway to content that otherwise would have been booted off the platform. The same may go for individuals and the rest of us, too.
The moral of the story: we should be very thoughtful about how, and for whom, we bend the rules.
See CJR’s “The myth of social media anti-conservative bias refuses to die” and NBC’s “Facebook and Twitter don’t censor conservatives. They hire and promote them.” Although the second is an opinion piece, it links to numerous studies on how Facebook explicitly helped rather than censored conservatives. The Mother Jones demotion I specifically remember seeing in internal research. Note this line: “Ben Shapiro and conservative sites did indeed win from those algorithmic changes, and Mother Jones and progressive sites did indeed lose.”
Which, for the record, is a type of hamster and doesn’t necessarily mean raising money for Syria, a sanctioned country under U.S. law (and therefore illegal). Ask me how I know ;)
Believe you me, Facebook’s customer support operations need a huge lift. A subject for a different post someday, perhaps.
At least, this is what I was told about pre-Sheryl Sandberg Facebook. High school me was not working at Facebook back in 2006, nor indeed for the next decade.
As you can imagine, the newsworthiness exception announced in 2016 and its application was a big internal debate at the time, and especially in 2020.