For the past year or so, Facebook has mandated that certain page owners and political ad-spenders on the platform verify their identities as a way to curb interference in the coming election. Now, these checks are being extended to certain people’s profiles, too. According to a post published on the company’s blog yesterday afternoon, Facebook is applying those same checks and balances to profiles that have been found to engage in what it determines to be a “pattern of inauthentic behavior,” and whose posts “rapidly go viral” in the United States.
According to the company, these changes are being made in the name of making “people feel confident that they understand who’s behind the content they’re seeing,” adding that “this is particularly important when it comes to content that’s reaching a lot of people.”
This new verification process is twofold. In addition to providing the company some sort of federal ID and proof of address, profiles that are running pages will need to go through a second verification step that involves (among other things) matching the location data from their device with the location data posted to their profile. Without this extra step, per Facebook, a user will be locked out of posting to that page. And if the user in question decides not to verify their identity at all, or if their federal ID doesn’t match the name on their account, the distribution of their formerly viral posts will be stunted.
On one hand, this new set of checks and balances is more than welcome. This month has seen Facebook hunkering down in an attempt to quash the types of misinformation that have become all too prevalent in the coronavirus era. And because the pandemic has substantially thinned out tech companies’ usual hordes of content moderators, Facebook is largely relying on a slew of less-than-perfect automated systems to do the dirty work of sniffing out these sorts of campaigns right now. Asking for IDs is, in a sense, asking bad actors to jump over another hurdle that doesn’t require human review, so it’s an easier lift on Facebook’s end at a time when the company sorely needs it.
On the other hand, these sorts of systems are too little, too late. Yesterday, the sitting president’s latest screed against social media companies writ large prompted Facebook CEO Mark Zuckerberg to launch into a tirade of his own on Fox News, where he said, in short, that private companies like Facebook “shouldn’t be the arbiter of truth,” echoing the stance we’ve seen him take over and over in the past. By stifling these sorts of viral posts (rather than, say, banning them outright or fact-checking them in real time), the company is taking that same half-assed approach: sure, it won’t tell you what you should and shouldn’t believe, but it might shield it from your line of sight.
It’s an attempt to placate both sides that leaves neither of them satisfied. When human rights groups or advocates come knocking, Facebook can point to its automated efforts to curb the spread of this sort of content, a point that ignores just how quickly these sorts of posts spread. Meanwhile, the owners of these posts and pages, whether they’re in office or otherwise, can be placated with the knowledge that they’re still getting eyeballs on their work, just… not as many, which will likely go over as well as anything else Facebook-related these days. Facebook makes choices about how its content is amplified; that’s just a fact. Maybe, just maybe, instead of trying to be a both-sides kind of guy, Zuckerberg should pick a side.