Ads Appearing on Google’s Network Alongside COVID-19 Misinformation

Search and social platforms have had a lot to contend with when it comes to COVID-19.

From monetization struggles to constantly evolving ad policies to blatantly false information slipping through the approval cracks, adapting quickly has been the name of the game.

There is another side of Google’s advertising web that has been flying somewhat under the radar until now: where display ads appear across its network of sites.

What Is Google’s Ad Network?

When a visitor lands on a website and sees advertising, it’s a symbiotic relationship: the brand pays to have its ad shown, the site owner makes money, and platforms like Google broker that relationship.

Sites that use Google to automatically supply ads from buyers are part of its ad network.

In the age of disinformation, this raises questions about the responsibility of ad networks for which sites, and what content, they power advertising on.

Sites vs. Page Content

A report this week from Bloomberg shows this gray area is rearing its head again.

A research group called the Global Disinformation Index found that, of 49 sites running disinformation, 84% carried Google ads.

Examples of the ad and website pairings include:

  • A Veeam ad appearing on an article claiming that Bill Gates’s pandemic-related charities are part of a plot for world domination
  • A Microsoft Teams ad appearing on an article claiming that Gates attempted to bribe Nigerian lawmakers over a COVID-19 vaccine
  • An O2 ad showing on an article claiming links between the virus and 5G networks

When questioned, Google responded by highlighting that it monitors specific page content, not necessarily a site at large.

“We are deeply committed to elevating quality content across Google products and that includes protecting our users from medical misinformation. Any time we find publishers that violate our policies, we take immediate action.” – Christa Muldoon, Google spokesperson

Brands that use Google’s network can create a blacklist of sites where they do not want their ads to appear, but that raises the question of where the responsibility truly falls.
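
For a sense of how that works in practice, here is a minimal sketch of a brand-side exclusion list, assuming the Google Ads API and its official Python client; the customer ID, credentials file, and domain list are placeholders for illustration, not details from the report.

```python
from google.ads.googleads.client import GoogleAdsClient

# Domains the brand never wants its display ads to appear on (placeholder list).
EXCLUDED_SITES = [
    "example-disinfo-site.com",
    "another-excluded-site.net",
]


def exclude_placements(client, customer_id):
    """Add account-level negative placement criteria for each excluded domain."""
    service = client.get_service("CustomerNegativeCriterionService")

    operations = []
    for url in EXCLUDED_SITES:
        operation = client.get_type("CustomerNegativeCriterionOperation")
        criterion = operation.create
        criterion.placement.url = url  # block ads from serving on this domain
        operations.append(operation)

    response = service.mutate_customer_negative_criteria(
        customer_id=customer_id, operations=operations
    )
    for result in response.results:
        print(f"Excluded placement: {result.resource_name}")


if __name__ == "__main__":
    # google-ads.yaml holds the developer token and OAuth credentials (placeholder path).
    ads_client = GoogleAdsClient.load_from_storage("google-ads.yaml")
    exclude_placements(ads_client, customer_id="1234567890")
```

In this sketch the exclusions are applied at the account level, so they would cover every campaign rather than needing to be repeated campaign by campaign.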

In a world with billions of sites, who owns the responsibility not only for the brand association, but for the monetization of sites that feature false information?

Brands vs. Information

This isn’t the first time Google has faced backlash for where its ads run, including a very public debate over its ads appearing on Breitbart.

As Google continues to expand monetization opportunities across both YouTube and websites, it faces a unique offshoot of this issue:

If a creator is banned, do you police where else they appear?

Case in point: David Icke, a British conspiracy theorist who had been on YouTube for over 14 years.

He was banned from the platform recently for spreading misinformation about COVID-19. Now, he can still be seen and heard on other YouTube channels, and in commercials for a yoga-focused streaming network called Gaia.

A Google spokesperson confirmed that, as long as the videos don’t break the rules, banned creators may appear in videos owned by other accounts.

As the world marches on, with creators joining platforms every day and more information than ever being pumped out over the internet, these questions will continue to permeate the landscape. Should there be a monetary reward for disinformation, and how does a platform choose when and how to enforce against it?

There may never be an easy and swift answer, but the debates will continue on.

Original Bloomberg coverage can be read here.

Image courtesy of Disinformation Index
