Google Admitted Advertisers Paid for Ads Served to Bots – What Does That Mean for Publishers and Advertisers?

September 15, 2017 | Reid Tatoris

Google made a major announcement on August 28, saying it will provide refunds to advertisers whose ads were served to bot traffic. It isn't offering a full refund, only a refund of its platform fee, but the move is still significant: Google is officially admitting that ads are shown to bots and is compensating its customers for it. For most advertisers the impact will be negligible, but for small publishers that rely mainly on ad networks to fill their inventory, the consequences could be extreme.

Google is not identifying individual bad impressions. Rather, it's identifying sites it deems fraudulent and refunding advertisers for the ads that ran there. Google hasn't made its criteria clear, but it appears to be flagging certain sites as fraudulent because they had high levels of bot traffic. Even then, Google is likely only finding the most egregious offenders. So if you're a publisher, especially a large one, it's unlikely that Google is flagging your site as bogus. Most of what it's flagging are likely fake sites set up specifically to spoof impressions.

What does this look like? Here is an example:

[Screenshot A]
 
This is an example site with high levels of bot traffic. It has no content—just a number of ads. A quick look reveals it’s clearly set up solely to siphon ad dollars from legitimate sites. While this is what we usually think of in relation to ad fraud, it can also look like this (Screenshot B):
 
[Screenshot B]
Nothing seems suspicious here. There is real content, it looks normal, and there are no brand safety concerns. However, if you analyze this site you’ll find that the traffic consists almost entirely of bots. Digging further, you would soon find the reason why. The content has been scraped from Android Central, a legitimate, quality site (Screenshot C):
 
[Screenshot C]

A bad actor wrote a bot to scrape content they didn't create, reposted that content on a bogus site, then directed fake bot traffic at it. The result is that ad dollars are diverted away from legitimate, deserving content creators.
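For publishers who suspect this is happening to them, a rough way to check is to compare the visible text of a suspect page against your own article and measure how much appears verbatim. The sketch below is only an illustration under stated assumptions: the URLs are hypothetical, the tag stripping is crude, and the 50 percent threshold is arbitrary; it is not how Google or any verification vendor actually detects scraped sites.

```typescript
// Minimal sketch: estimate how much of your article's text appears verbatim on
// a suspect page. URLs, the shingle size, and the 50% threshold are illustrative
// assumptions; a real check would use a proper HTML parser and broader crawling.

async function fetchVisibleText(url: string): Promise<string> {
  const html = await (await fetch(url)).text();
  // Crude tag stripping, good enough for a rough comparison.
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, " ")
    .replace(/<style[\s\S]*?<\/style>/gi, " ")
    .replace(/<[^>]+>/g, " ")
    .toLowerCase();
}

// Fraction of the original's 8-word shingles that also appear in the suspect text.
function shingleOverlap(original: string, suspect: string, size = 8): number {
  const shingles = (text: string): Set<string> => {
    const words = text.split(/\s+/).filter(Boolean);
    const set = new Set<string>();
    for (let i = 0; i + size <= words.length; i++) {
      set.add(words.slice(i, i + size).join(" "));
    }
    return set;
  };
  const a = shingles(original);
  const b = shingles(suspect);
  let shared = 0;
  a.forEach((s) => { if (b.has(s)) shared += 1; });
  return a.size === 0 ? 0 : shared / a.size;
}

async function checkForScrapedCopy(originalUrl: string, suspectUrl: string): Promise<void> {
  const [original, suspect] = await Promise.all([
    fetchVisibleText(originalUrl),
    fetchVisibleText(suspectUrl),
  ]);
  const overlap = shingleOverlap(original, suspect);
  console.log(`Verbatim overlap: ${(overlap * 100).toFixed(1)}%`);
  if (overlap > 0.5) {
    console.log("Large verbatim overlap: likely a scraped copy of your article.");
  }
}

// Hypothetical usage:
// checkForScrapedCopy("https://your-site.example/article", "https://suspect-site.example/post");
```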

This poses a big challenge to smaller sites, especially those that don't sell their own inventory. It appears that Google is only looking at the bot traffic coming to a site. So if someone directs a bot attack at your site on the day Google runs its evaluation, your site is at risk of being labeled a bad site. For smaller sites flagged by Google, there is really no way to fight back. (Large corporations can push back; a small blog simply cannot.)

Anyone using DoubleClick for Publishers (DFP) has, at some point, tried to contact Google about an issue and been unable to get a response. The recent 'Adpocalypse' on YouTube is a good example. Google received complaints from big advertisers that a few of their ads were shown next to YouTube videos they found offensive, such as someone yelling a racist rant. Google responded, but because everything is algorithm-driven, hundreds of small YouTube channels lost monetization over things few would really find offensive (e.g., light swearing or political conversations).

In general, small sites/creators that don’t have the resources to fight back are impacted any time Google changes a policy. This is mainly an issue of scale. It’s not that Google doesn’t care, but rather there is no possible way for it to review every site that shows an ad bought through DFA. Google does its best, updating its algorithms in an attempt to catch the bad stuff. The smaller your organization is, and the less information you have about the metrics Google uses to evaluate your bot traffic levels, the more vulnerable you are.

Google has a big incentive to keep its customers happy. If big advertisers are complaining about ads being shown to bots, it's going to take action. But ultimately it only wants to prove that it takes action. It wants to show it's doing something to alleviate customer concerns, but it doesn't really have an incentive to completely fix the problem and remove all bot traffic from DFP. And the disconcerting part is that its customers don't have much insight into whether or not the problem has been fixed.

Each advertiser uses a different company to estimate bot traffic, none of them agree, and advertisers can't sort out the differences because it's not their core function. The result is that Google claims to have taken action, most advertisers feel a little better and then don't do much more. Many legitimate sites are shut off or dinged by Google, with no way to fight back. Fake sites are shut down, only to start up again on a new domain. A year from now, another big advertiser will discover its ads have been shown to bots, Google will tweak its algorithm yet again, a few more fake sites will be eliminated, and more small publishers will be hurt.

As a publisher, the best way to protect yourself is to proactively identify and block bot traffic, so that a bot attack never lands your site on Google's list. At the very least, follow expert advice on minimizing the bot traffic that reaches your site: for example, don't source traffic and don't use audience extension programs.
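What "proactively identify and block" looks like in practice varies widely, but even coarse server-side filtering ahead of your ad slots helps. Below is a minimal sketch assuming a Node/Express site; the user-agent patterns, time window, and rate threshold are illustrative placeholders, not a complete bot-mitigation setup, and nothing here is part of Google's or DFP's tooling.

```typescript
// Minimal sketch of coarse, server-side bot filtering for an Express site.
// The user-agent patterns, window, and threshold are illustrative assumptions.
import express, { Request, Response, NextFunction } from "express";

const BOT_UA_PATTERNS = [/bot/i, /crawler/i, /spider/i, /headless/i, /phantomjs/i];

// In-memory per-IP counters, for illustration only (a real deployment would use
// something shared and bounded, e.g. a store with expiry).
const requestCounts = new Map<string, { count: number; windowStart: number }>();
const WINDOW_MS = 60_000;   // 1-minute window
const MAX_REQUESTS = 120;   // crude per-IP threshold

function basicBotFilter(req: Request, res: Response, next: NextFunction): void {
  // 1. Drop requests whose user-agent matches obvious automation strings.
  const ua = req.headers["user-agent"] ?? "";
  if (BOT_UA_PATTERNS.some((pattern) => pattern.test(ua))) {
    res.status(403).send("Automated traffic is not served here.");
    return;
  }

  // 2. Rate-limit by IP: sustained high request rates are a common bot signal.
  const ip = req.ip ?? "unknown";
  const now = Date.now();
  const entry = requestCounts.get(ip);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    requestCounts.set(ip, { count: 1, windowStart: now });
  } else if (++entry.count > MAX_REQUESTS) {
    res.status(429).send("Too many requests.");
    return;
  }

  next();
}

const app = express();
app.use(basicBotFilter);
app.get("/", (_req: Request, res: Response) => {
  res.send("Page that serves ads to filtered (likelier-human) traffic");
});
app.listen(3000);
```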

For marketers, the truth is that somewhere between 20 and 40 percent of your ad spend is being wasted. Goodwill gestures, such as Google refunding your serving costs, don't come close to matching your expenditure. Marketers should use multiple verification vendors for every campaign, cookie users to confirm they're human, and aggressively target only those deemed legitimate.
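On the "cookie users to confirm they're human" point, one lightweight approach is to set a first-party cookie only after the browser produces a real interaction event, and have your targeting or analytics layer treat uncookied traffic as suspect. The client-side sketch below is an assumption-laden illustration: the cookie name and lifetime are placeholders, and simple interaction signals can be faked by sophisticated bots, so treat this as one signal among several rather than a verification method any vendor prescribes.

```typescript
// Minimal client-side sketch: mark a visitor as "likely human" only after a real
// interaction event. The cookie name and lifetime are illustrative assumptions;
// the targeting/analytics layer would then treat uncookied traffic as suspect.

const HUMAN_EVENTS: (keyof WindowEventMap)[] = ["mousemove", "scroll", "keydown", "touchstart"];

function markAsLikelyHuman(): void {
  // First-party cookie the ad/targeting layer can check before counting the user.
  const maxAgeSeconds = 60 * 60 * 24; // 24 hours
  document.cookie = `human_signal=1; max-age=${maxAgeSeconds}; path=/; SameSite=Lax`;

  // One interaction is enough; stop listening after that.
  HUMAN_EVENTS.forEach((evt) => window.removeEventListener(evt, markAsLikelyHuman));
}

// Simple ad-fraud bots often fetch and render a page without ever generating
// pointer, scroll, key, or touch events; real users almost always do.
HUMAN_EVENTS.forEach((evt) =>
  window.addEventListener(evt, markAsLikelyHuman, { once: true, passive: true })
);
```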

Unfortunately, for most campaigns the number one metric is still fill rate. If you really want to eliminate wasted bot impressions, you’re going to end up cutting down on available inventory—there is no way around it. In reality, there are simply fewer real human impressions than advertisers would like to believe.

 
 