There are a few things Facebook’s proven, time and again, that it can’t get quite right—like keeping its user data under wraps, curbing partisan political ads, and, as proved this week, blogging. Truly though, you need to give props to the multibillion-dollar tech giant for perfecting the art of the well-timed news dump in the midst of a major political event. It turns out the impeachment was no exception.
In an email to Gizmodo, a Facebook spokesperson confirmed that on December 19—the day after Donald Trump became the third president in U.S. history to be impeached, this time with a late-night House vote—the company last updated its standards surrounding hate speech. And boy, what an update it was—one that, even if the timing wasn’t intentional on Facebook’s part, you probably missed.
Here are a few of the “dehumanizing comparisons” that Facebook users aren’t allowed to post anymore, per the update:
– Black people and apes or ape-like creatures
– Black people and farm equipment
– Jewish people and rats
– Muslim people and pigs
– Muslim person and sexual relations with goats or pigs
– Mexican people and worm-like creatures
– Women as household objects or referring to women as property or ‘objects’
– Transgender or non-binary people referred to as ‘it’
“Statements denying existence” of these kinds of marginalized groups also received a blanket ban, meaning that statements like “trans people don’t exist” would also be struck down under these new guidelines. Naturally, all of these rules apply to all content—not just text—meaning that all those pesky memes would also be held accountable.
As with all of the company’s community standards, the consequences of posting these kinds of things, on paper, range from mild to severe:
The consequences for violating our Community Standards vary depending on the severity of the violation and the person’s history on the platform. For instance, we may warn someone for a first violation, but if they continue to violate our policies, we may restrict their ability to post on Facebook or disable their profile. We also may notify law enforcement when we believe there is a genuine risk of physical harm or a direct threat to public safety.
And again, as is often the case with Facebook’s standards, it’s likely that these comparisons were already considered hate speech beforehand, and spelling them out is a way to make the impossible task of moderating a deluge of content a little more manageable—especially because comparing a race to a particular animal means something extremely different depending on what that race is. (Also, maybe just stop generalizing about races on Facebook and everywhere else.)
Calling out language like the above shows just what people were getting away with—or trying to get away with—in the face of Facebook’s self-proclaimed AI prowess in stopping exactly that. Back in November, for example, the company boasted that its AI had flagged 7 million pieces of content as potential hate speech, and it has previously teased the idea of forming a specific moderator coalition dedicated to the task.
These weren’t the only updates that slipped under the radar. In fact, per Facebook’s spokesperson, every change to the company’s community standards this past December took place while impeachment was dominating everyone’s attention. And while some of these—like the company’s blanket ban on Census interference—made the newswires, others, such as the bans on livestreaming capital punishment and on mocking survivors of sexual abuse, were mysteriously absent from coverage. With no RSS feed or any sort of notification system on the Community Standards updates page, it’s likely that these changes were swept under the rug with barely anyone noticing.
Facebook declined to comment on whether these policies were publicized as much as the Census announcement—or whether they were publicized at all.
Aside from the hate speech updates, the company snuck in eight other changes to its community standards in the middle of one of the biggest stories of political hellfire in years.
- The “Violence and Incitement” policy was expanded to ban “misinformation that contributes to the risk of imminent violence or physical harm” (rather than just misinformation that immediately contributes to that harm).
- The “Coordinating Harm and Publicizing Crime” standard expanded to ban Census fraud, rather than just voter fraud.
- The “Fraud and Deception” standard expanded, now banning users from engaging with, promoting, or facilitating anything related to “fake or manipulated documents” like phony coupons or medical prescriptions. It did the same for “betting manipulation,” “fake fundraising campaigns,” and “debt relief or credit repair scam[s].” To top it off, recruiting a workforce to run these scams also got the ban.
- The policies for “Sexual Exploitation of Adults” expanded to include “forced stripping,” atop the already banned content surrounding “non-consensual sexual touching, crushing, necrophilia or bestiality.” Mocking the victims in any of those categories—or admitting to participating yourself—is verboten under the new ruleset.
While sharing revenge porn violated the standards before, this update adds that threatening to share it and “stating an intent to share” are both violations, as is “offering” these pictures, or asking for them at all.
Also (finally) banned: upskirts.
- The sections of the “Human Exploitation” category dealing with private citizens were amended to include “involuntary minor public figures.”
- The policies on “Violent and Graphic Content” now ban livestreams or pictures of “capital punishment.”
- The policy surrounding “Cruel and Insensitive” content spelled out a ban on “sadism towards animals”:
Imagery that depicts real animals that are visibly experiencing, and being laughed at, made fun of, or the subject of sadistic remarks for, any of the following (except staged animal vs. animal fights or animal fights in the wild):
– premature death
– serious physical injury (including mutilation)
– physical violence from a human
- In a grim expansion of the “Memorialization” policies, the company added that the living relatives of Facebook users who die by suicide can ask that pictures of the weapon used, or any content “related” to the death, be removed from the deceased’s profile photo, cover photo, and recent timeline posts. Family members of murdered Facebook users can have any pictures of the assailant (convicted or alleged) removed, too.
And just in case you were wondering:
For victims of murder, we will also remove the convicted or alleged murderer from the deceased’s profile if referenced in relationship status or among friends.
It’s worth noting that, in general, the company’s not particularly shy about touting updates to its litany of community standards, even going as far as to put out a semi-regular report detailing how it’s handling updates like these, and how well they’re being enforced.
But for updates like these—ones that not only show the holes constantly being punched in those automated systems but also shine a light on some of the worst sides of Facebook’s roughly 250 million-strong U.S. user base—a news dump will do.
Update 12:55 pm ET, Jan. 10: In a statement sent to Gizmodo on Thursday night, after publication, a Facebook spokesperson detailed how their process for updating community standards works. Further, the spokesperson said the timing of the update outlined above was due to the holidays.
We regularly update our Community Standards – we have a policy development meeting every other week where we consider changes to our Community Standards. There’s a number of different reasons we might consider a policy change – among them:

– Local or regional trends identified by our local and regional policy and communications teams
– Feedback from an external partner
– Shifts in language or social norms
– Feedback from our content reviewers that an existing policy is confusing or difficult to apply

There’s typically two types of presentations made in the policy development meeting – a heads up and a policy recommendation. The heads up is a flag to internal teams and people that the content policy team is going to look into updating an existing policy or adopting a new one. Once a heads up has been presented, the content policy team will kick off internal working groups with subject matter experts within the company. They’ll also work with our stakeholder engagement team to reach out to external experts and affected communities to solicit input. Importantly, internal and external working groups are global.

Once we’ve completed the working group process, content policy considers policy options, one of which they ultimately bring back to the aforementioned policy development meeting as the policy recommendation. We debate and discuss the recommendation at the meeting and it’s either adopted or it isn’t (if the latter, the team will do whatever due diligence is needed to revise / strengthen the policy recommendation).

There’s usually a bit of a lag between the policy being adopted and the start of enforcement. This gives us time to develop training materials and operational guidelines for content reviewers. In some cases, a policy will involve product work – e.g. a warning label that’s placed on content that is especially graphic – which also requires that we give ourselves time between adoption and enforcement.

We publish the minutes from the policy development meeting I’ve been referencing here: [link]

And we update our Community Standards monthly. It’s usually on the last Thursday of every month, but because of the holidays, we went out with December updates on December 19.
Correction: A previous version of this article incorrectly characterized Trump’s impeachment. While impeachment can result in the removal of a president, Trump has not been removed from office.