There’s a lot going on this summer. The presidential race is building steam, civil rights protestors are still in the streets, the pandemic is taking a nasty turn, Hamilton is on Disney+. Amid all those news events—and partly because of them—businesses, activists, and lawmakers are zeroing in on an issue that seems less dramatic but is still pretty important: digital advertising, the underlying financial model of the open internet.
The highest-profile example is the Stop Hate for Profit campaign, which has convinced some major advertisers, including the likes of Verizon and Unilever, to pause their spending on Facebook until the company takes dramatic steps to deal with the spread of hate speech on its platform. But how exactly does this stuff turn a profit? The answer goes far beyond Facebook’s content policies.
“A lot of those debates, when you track them down to their technical causes, it inevitably boils down to advertising technology,” said Aram Zucker-Scharff, the ad engineering director for The Washington Post’s research, experimentation, and development team. “So many of the problems that people are talking about on the web right now, these are problems that arise out of detailed and persistent third-party, cross-site user behavior tracking.”
There’s a lot to unpack there. Over the next few weeks, WIRED is going to be taking a look at the various ways in which the modern digital advertising market underwrites the proliferation of harmful, divisive, and misleading online content, while at the same time undermining real journalism. To start, we need to understand the three main categories of ad tech and the position they fill in the food chain of online garbage.
Social Media
Companies like Facebook and Twitter make almost all their money from ads. Hence the Stop Hate for Profit boycott: The loss of advertising revenue is the only thing, the thinking goes, that could make the world’s biggest social network change how it deals with racism and disinformation. But what exactly is the relationship between advertising and social media bad actors? It’s not as though white supremacists on Facebook are making money from their posts. The economics are a bit more complicated.
Critics of Facebook have long argued that while the platform doesn’t monetize hate or disinformation directly, its reliance on microtargeted advertising encourages that stuff to exist. A social network that’s free for users makes money in proportion to how much time those users spend on the platform. More time means more opportunities to serve ads and to collect data that can be used to help advertisers target the right people. And so for a long time, social media companies have designed their platforms to keep people engaged. One thing that tends to hold people’s attention really well, however, is polarizing and inflammatory content. This isn’t exactly surprising; consider the old journalistic mantra “If it bleeds, it leads.” An algorithm that prioritizes keeping users engaged might therefore prioritize content that gets people riled up—or that tells people what they want to hear, even if it’s false. Even if advertising isn’t directly funding divisive or false content, that stuff is keeping people on the platform. Facebook’s own internal review concluded, for example, that “64% of all extremist group joins are due to our recommendation tools.”
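The mechanics are simple enough to sketch in a few lines. Here's a purely illustrative toy ranker, assuming a feed scored only on predicted engagement—Facebook's real ranking system weighs thousands of signals and is not public:

```typescript
// Toy feed ranker: sorts posts purely by predicted engagement.
// Illustrative only -- real ranking systems use far more signals.

interface Post {
  id: string;
  // Hypothetical signal: fraction of viewers expected to react,
  // comment, or share. Outrage-bait tends to score high here.
  predictedEngagement: number;
}

function rankFeed(posts: Post[]): Post[] {
  // Highest predicted engagement first: more time on the platform,
  // more ads served, regardless of whether the content is true.
  return [...posts].sort(
    (a, b) => b.predictedEngagement - a.predictedEngagement
  );
}

const feed = rankFeed([
  { id: "local-news", predictedEngagement: 0.02 },
  { id: "outrage-bait", predictedEngagement: 0.11 },
  { id: "family-photo", predictedEngagement: 0.05 },
]);
console.log(feed.map((p) => p.id));
// ["outrage-bait", "family-photo", "local-news"]
```

Nothing in that loop cares what the content says—only that people can't look away from it.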
The other issue is with the substance of the ads themselves—particularly political ads. The same features of a platform built around engagement and microtargeting can make paid propaganda especially potent. In June, for example, Facebook took down a Trump campaign ad that featured an upside-down red triangle reminiscent of a Nazi symbol. Data from Facebook’s Ad Library shows that the campaign tested several variations of the ad, using different artwork; the triangle one appeared to perform the best. In other words, Facebook’s algorithm optimized for an ad that Facebook ultimately decided violated its own policies.
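That optimization loop is easy to picture in code. A minimal sketch, assuming a delivery system that steers impressions toward whichever creative draws the most clicks—the variant names and numbers here are invented, and Facebook's actual ad delivery auction is far more elaborate:

```typescript
// Toy creative optimizer: favor the variant with the best observed
// click-through rate. All data below is hypothetical.

interface Variant {
  name: string;
  impressions: number;
  clicks: number;
}

function clickThroughRate(v: Variant): number {
  return v.impressions === 0 ? 0 : v.clicks / v.impressions;
}

// Mostly exploit the best performer, occasionally explore the
// others (a basic epsilon-greedy strategy).
function pickVariant(variants: Variant[], epsilon = 0.1): Variant {
  if (Math.random() < epsilon) {
    return variants[Math.floor(Math.random() * variants.length)];
  }
  return variants.reduce((best, v) =>
    clickThroughRate(v) > clickThroughRate(best) ? v : best
  );
}

const variants: Variant[] = [
  { name: "plain-text", impressions: 10_000, clicks: 80 },
  { name: "flag-artwork", impressions: 10_000, clicks: 120 },
  { name: "provocative-art", impressions: 10_000, clicks: 210 },
];
// The most provocative creative ends up winning most of the traffic.
console.log(pickVariant(variants).name); // usually "provocative-art"
```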
“Facebook’s entire business model is an optimization of a robust data-mining operation extending across much of our lives to microtarget ads against the cheapest and most ‘engaging’ content possible,” said Jason Kint, the CEO of Digital Content Next, a trade organization representing publishers (including WIRED parent company Condé Nast), in an email. “Sadly, the content that tends to receive the most velocity and reach by Facebook’s algorithms often swims in the same pool with disinformation and hate.”
Facebook disputes this. In a recent blog post, the company’s vice president of global affairs and communication, Nick Clegg, insisted that “people use Facebook and Instagram because they have good experiences—they don’t want to see hateful content, our advertisers don’t want to see it, and we don’t want to see it. There is no incentive for us to do anything but remove it.”
Facebook might really want to remove hateful content. But it’s hard to keep track of billions of posts a day, and automated systems have a tougher time with things like hate speech. The boycott doesn’t really change that. It doesn’t even hurt the company’s bottom line that much, because most Facebook advertising comes not from giant corporations but from little-known small and medium-size businesses. (It’s also not clear how seriously companies are taking the boycott. HP, for example, added its name to the list but continued to buy new Facebook and Instagram ads in the first week of July.)
Facebook is not just a victim of its own massive success, though; the company makes policy decisions that facilitate the spread of disinformation too. Take its decision to exempt politicians from its fact-checking policies, including for ads—meaning elected officials and candidates are allowed to straight-up lie on the platform and target those lies to specific slices of the electorate. (Under pressure, Mark Zuckerberg recently announced that Facebook would remove politicians’ posts that incite violence or aim to suppress voting.) In response, a number of critics have urged Facebook to join Google and its subsidiary YouTube in banning the microtargeting of political ads. That way, false claims can at least be subject to scrutiny and not beamed directly to narrow audiences. Multiple House Democrats have introduced bills that would mandate just that. (Twitter, meanwhile, forbids political ads entirely.)
“What I think makes microtargeting so pernicious in the political context is, they can microtarget so granularly to individuals who are susceptible to believe it, without the benefit of the surrounding argument or the counterargument that exists if someone puts the ad on television for example,” said David Cicilline, the chair of the House Antitrust Subcommittee and author of one of the bills, in an interview in May.
Programmatic Display
Social media gets most of the attention, but if you really want to follow the money behind online hate and disinformation, you have to understand programmatic display advertising.
According to a new report by the Global Disinformation Index, tens of millions of advertising dollars will flow this year to sites that have published high volumes of coronavirus disinformation and conspiracy theories. The report includes screenshots showing jarring juxtapositions: an ad for Merck appearing on the right-wing fringe site World News Daily beneath the headline “Tony Fauci and the Trojan Horse of Tyranny”; a Dell ad running above a Gateway Pundit article blaming “faulty models, junk science, and Dr. Fauci” for destroying the economy; and even an ad for the British Medical Association next to a headline suggesting that “compulsory vaccination” will genetically modify people, turning them inhuman.
How does this happen? In a word: automation.
In the 1990s and early 2000s, digital display advertising—banner ads, pop-ups, and so on—was just the digital analogue of print advertising: A brand would buy space directly from a website. But today that is increasingly rare. What has risen in its place is something known as programmatic advertising. With programmatic, ads no longer target specific publications. Instead, they target specific types of users, based on things like age, sex, location—along with the creepier stuff, like what their browsing history reveals about their interests. Advertisers now put their ads into an automated system with instructions to reach a certain audience, wherever they are. They have some power to tell the system to keep their ads away from certain sites and content, but the results are spotty.
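In code terms, the shift is from "buy this page" to "buy this person." Here's a minimal sketch of audience matching, with invented attribute names—real bid requests carry far richer (and creepier) profiles:

```typescript
// Toy audience matcher: the campaign targets user attributes,
// not publications. All attribute names here are invented.

interface UserProfile {
  age: number;
  location: string;
  interests: string[]; // inferred from browsing history
}

interface Campaign {
  name: string;
  minAge: number;
  maxAge: number;
  locations: string[];
  interest: string;
  blockedSites: string[]; // the advertiser's (often empty) blocklist
}

function matches(c: Campaign, user: UserProfile, site: string): boolean {
  return (
    user.age >= c.minAge &&
    user.age <= c.maxAge &&
    c.locations.includes(user.location) &&
    user.interests.includes(c.interest) &&
    !c.blockedSites.includes(site) // the only check on the site itself
  );
}

const user: UserProfile = {
  age: 34,
  location: "US-OH",
  interests: ["camping", "conspiracy-videos"],
};
const campaign: Campaign = {
  name: "tent-sale",
  minAge: 25,
  maxAge: 54,
  locations: ["US-OH", "US-PA"],
  interest: "camping",
  blockedSites: [],
};
// The ad follows the user, whatever site they happen to be on.
console.log(matches(campaign, user, "fringe-news.example")); // true
```

Notice that the site only enters the picture as something to be blocked—and only if the advertiser bothers to maintain the blocklist.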
Programmatic advertising is the economic fuel of the “free” internet. Its rise has made it dramatically easier for anyone to create their own site and immediately make money from traffic. Unfortunately, the same convenience that allows a food blogger to turn their following into an income also allows anyone to set up a site pushing hate speech or propaganda and get it monetized without any advertiser explicitly choosing to pay them.
“Previously, without ad tech, it was much harder for them to make money,” says Augustine Fou, an ad fraud consultant. Automation changed everything. The key change, Fou explains, was “the ease with which you can copy and paste a few lines of code onto your site and start running ads and making money. Prior to programmatic, you’d have to get an advertiser or a media buying agency to give you money.” Ad-tech tools do allow brands to block certain sites and types of content, but advertisers frequently don’t take advantage of them.
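Those "few lines of code" are typically a small script tag. Here's a hypothetical sketch of what pasting one does—the network domain and slot ID are invented, not any real provider's snippet:

```typescript
// Roughly what "pasting an ad tag" accomplishes: load the ad
// network's script and mark a page element as sellable inventory.
// The domain and slot ID below are invented for illustration.

const tag = document.createElement("script");
tag.async = true;
tag.src = "https://ads.example-network.com/serve.js";
document.head.appendChild(tag);

const slot = document.createElement("div");
slot.dataset.adSlot = "site-12345-banner"; // publisher account + placement
document.body.appendChild(slot);

// From here on, the network auctions this space on every page load --
// no human advertiser ever has to approve the site itself.
```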
The paradox of programmatic advertising is that while it might be easy to tap into, the actual mechanism is absurdly complex: a series of real-time auctions mediated by layers of automated middlemen. Every time a page or app running programmatic ads is loaded, the publisher starts by sending its available ad space, along with whatever information it has on the user loading the page—essentially the inventory it’s selling—into its ad server. (The most popular ad server by far is run by Google.) The ad server beams out a bid request to advertisers looking to target that type of user. Brands put their ads into an ad-buying platform, along with their target audience and what they’re willing to pay. (Google also owns the biggest buying-side platform, which is particularly popular among smaller businesses.) The platform sends that bid to the ad exchange, where it competes against other bids for the target audience. The winning bid then competes against all the winners from all the other exchanges. Finally, the winner of winners appears on the publisher’s site. Believe it or not, this is a dramatically oversimplified account; the real thing is much more complicated. But, in a nutshell, that’s how a household name like Merck or Dell can end up sponsoring Covid denialism. (Two weeks ago, Google finally announced that it would begin blocking ads from running on stories promoting debunked coronavirus theories.)
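Stripped of the real-world complexity, the waterfall looks something like the sketch below—all names and prices are invented, and real systems add header bidding, price floors, and fees at every hop:

```typescript
// Toy programmatic waterfall: publisher -> ad server -> exchanges ->
// buy-side bids -> exchange winners -> winner of winners.
// All names and prices are invented.

interface BidRequest {
  site: string;
  user: { age: number; location: string; interests: string[] };
}

interface Bid {
  advertiser: string;
  cpm: number; // price per thousand impressions
}

type Exchange = (req: BidRequest) => Bid[];

// Each exchange collects bids from buy-side platforms and
// returns its own internal winner.
function exchangeWinner(exchange: Exchange, req: BidRequest): Bid | null {
  const bids = exchange(req).sort((a, b) => b.cpm - a.cpm);
  return bids[0] ?? null;
}

function runAuction(req: BidRequest, exchanges: Exchange[]): Bid | null {
  // The ad server pits each exchange's winner against the others.
  const winners = exchanges
    .map((ex) => exchangeWinner(ex, req))
    .filter((b): b is Bid => b !== null);
  return winners.sort((a, b) => b.cpm - a.cpm)[0] ?? null;
}

const req: BidRequest = {
  site: "fringe-news.example", // bidders target the user, not the site
  user: { age: 41, location: "US-TX", interests: ["autos", "health"] },
};
const exchanges: Exchange[] = [
  () => [
    { advertiser: "carmaker", cpm: 2.1 },
    { advertiser: "pharma", cpm: 3.4 },
  ],
  () => [{ advertiser: "retailer", cpm: 2.8 }],
];
console.log(runAuction(req, exchanges)); // { advertiser: "pharma", cpm: 3.4 }
```

The entire chain runs in the milliseconds it takes a page to load, and the winning brand never sees the headline its ad will sit beside.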
Technically, most social media advertising could also be described as programmatic display, in the sense that it’s targeted at users based on behavioral data through an automated auction. The difference is that social media ads appear in the closed system of a given platform, while what I’m calling programmatic advertising follows you all around the web. But the two share important similarities.
“So much of it is about creating a space in which users can be targeted when they are, for lack of a better term, vulnerable,” says the Post’s Zucker-Scharff. “They’re vulnerable to the right piece of fake news showing up at the right part of a feed or on a site at the right time. That sort of thing only happens because they can be targeted with this type of data.”
Search
The last big bucket of digital ads is search: Results that advertisers pay to have displayed above or below the actual results from a search engine. This is a much simpler system. You pick your search engine, specify which search terms you want to trigger your ad, and pay based on how many clicks the ad gets. There’s no complicated chain of intermediaries that makes other ad ecosystems so ripe for foul play. And Google, which accounts for some 90 percent of the global search engine market, has pretty robust policies concerning the ads that run on its platform.
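The model fits in a few lines of code. A minimal sketch, assuming plain keyword matching and per-click billing—the bids and keywords are invented, and Google's real auction also weighs ad quality, not just price:

```typescript
// Toy search ad auction: advertisers bid on keywords and pay per click.
// All bids and keywords are invented; real auctions also weigh quality.

interface SearchAd {
  advertiser: string;
  keywords: string[];
  maxCostPerClick: number; // dollars
}

function selectAd(query: string, ads: SearchAd[]): SearchAd | null {
  const terms = query.toLowerCase().split(/\s+/);
  const eligible = ads.filter((ad) =>
    ad.keywords.some((k) => terms.includes(k))
  );
  // Highest bidder wins the slot above the organic results.
  return (
    eligible.sort((a, b) => b.maxCostPerClick - a.maxCostPerClick)[0] ?? null
  );
}

const ads: SearchAd[] = [
  { advertiser: "tax-prep-co", keywords: ["stimulus", "taxes"], maxCostPerClick: 1.5 },
  { advertiser: "stimulus-scam", keywords: ["stimulus", "check"], maxCostPerClick: 4.0 },
];
// A scammer willing to outbid legitimate advertisers takes the top slot.
console.log(selectAd("get my stimulus check", ads)?.advertiser);
// "stimulus-scam"
```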
Still, even search ads can be used to further misinformation on crucial issues. Two recent reports by the Tech Transparency Project, an internet watchdog and persistent Google critic, illustrate how. In the first, the researchers scraped the results of thousands of Google searches for information on how to get money from the federal coronavirus relief bill. They found ads with text like “Get Your Stimulus Check – Claim Your Money Now” that, when clicked, sent users to several varieties of scams. Because Google recently changed the format of ads, less savvy users could easily mistake these for organic search results. Some were designed to get credit card numbers or other personal data; some prompted users to download browser extensions that would supposedly help them get their money, but in fact essentially turned their computers into click machines to deliver fake impressions for programmatic ads on scam websites. Some simply took users to other, lousier search engines that make money by flooding their own results with ads—a technique, known as search arbitrage, that shows how bad actors can combine different types of ad tech to finance their schemes.
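The arbitrage math is crude but effective. A back-of-the-envelope sketch, with invented figures, of why the scheme pays:

```typescript
// Search arbitrage, back of the envelope. All figures are invented.
// Buy cheap search clicks, land users on a page stuffed with
// programmatic ads, and pocket the spread.

const costPerBoughtClick = 0.05; // dollars paid to acquire one visitor
const adsPerLandingPage = 8;
const revenuePerAdImpression = 0.012; // dollars earned per ad shown

const revenuePerVisitor = adsPerLandingPage * revenuePerAdImpression; // $0.096
const marginPerVisitor = revenuePerVisitor - costPerBoughtClick; // ~$0.046

// At scale, a ~90 percent markup per visitor, paid out by the same
// programmatic pipes described above.
console.log(`margin per visitor: $${marginPerVisitor.toFixed(3)}`);
```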
The second report found similar skullduggery around search queries for how to register to vote. In particular, fraudsters are preying on people by offering to help them register for a hefty fee—even though US law requires voter registration to be free.
“Hopefully, moving forward, the company will increase its due diligence on ads like this that have predatory practices—either extracting money or installing malware or leading voters to scams—particularly when people are not only trying to be safe to avoid interacting with people during the pandemic, but trying to find vital information,” said Katie Paul, the director of the Tech Transparency Project.
Google has responded to the reports by emphasizing that it’s “constantly improving our enforcement to stay ahead of bad actors who are trying to take advantage of users.” A spokesperson said the company had taken down the stimulus ads and disabled the voter registration ads even before they were reported in the press. They also pointed out that Google announced in April that all advertisers would be required to verify their identities to run ads on any Google platforms.
Still, the recent reports show that even a platform with the resources and tech savvy of Google Search can struggle to stay ahead of bad actors.
“Tech companies need to evolve their due diligence efforts to keep up with these scammers,” says Paul. “Because every time a platform ends up cracking down on something, the scammers are just going to evolve slightly to avoid that crackdown.”