
Fake News, Inappropriate Content And The Rise Of The Self-Policing Platform


“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Vejay G. Lalla, a partner in the advertising, marketing and promotions group at Davis & Gilbert.

Over the last year, the online advertising industry has grappled with the threat that fake news and incendiary content pose to online placements. Although advertisers, their agency partners and publishers use a patchwork of tools to stem the flow of problematic content, the industry has not yet come together around a cohesive solution.

The answer may be to move away from after-the-fact remedies and advertiser-directed monitoring, and toward publishers and networks offering more robust automated brand-safety solutions that ensure ads appear only alongside brand-safe content.

The Problem

Regardless of politics, everyone can agree that the 2016 US presidential election was a game-changer. But while the political implications have been thoroughly explored, the implications for the advertising industry are still coming to light.

Just last month, we learned that Russia’s campaign of spreading content to influence the election potentially included hundreds of thousands of dollars in advertising on Facebook, Twitter and Google. Facebook has since revealed that Russia-linked content generated on its platform may have reached 126 million Facebook users, and representatives from Facebook, Google and Twitter have testified before Congress about the full extent of the problem. Meanwhile, advertisers continue to find their ads appearing alongside fake news and other incendiary content, including content from foreign actors.

The brand-safety implications are obvious, but the solution is less so. How does the advertising industry plot its way forward? How does a site, ad network, agency or brand distinguish legitimate news content from fake news? What separates political advocacy from deliberate foreign influence? And who bears the responsibility of policing that content?

The Solutions Thus Far

The standard terms used in most media buys require publishers and networks to use “commercially reasonable efforts” to comply with the advertiser’s “editorial adjacency” guidelines, which include restrictions on the type of content their placements can appear alongside. In the event that a placement violates those guidelines, the advertiser’s sole remedy is to request either a make-good or a refund from the media company.

Although advertisers and their agency partners have gotten better at defining the type of content that would trigger this remedy, the remedy itself is an after-the-fact one. Even though the advertiser, in many cases, does not have to pay for placements that violate its editorial adjacency guidelines, an advertiser’s reputation takes a hit each time its advertising appears alongside inappropriate content.

Many advertisers choose to negotiate additional terms governing their media buys. Blacklists, or lists of sites where placements cannot run, have been part of the equation for years, but they need constant updating and cannot reasonably catch every piece of questionable content. Other advertisers have instead chosen to run only “whitelist” campaigns, in which advertising runs exclusively on a pre-determined list of approved sites.
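
The mechanics of both approaches are simple; the operational burden is in maintaining the lists. Here is a minimal sketch in Python (all domains and list contents are hypothetical, and real systems apply these rules pre-bid, at exchange scale):

```python
# Minimal sketch of list-based placement filtering; the domains and list
# contents here are hypothetical.

BLACKLIST = {"fakenews-example.com", "incendiary-example.net"}
WHITELIST = {"trusted-news-example.com", "brand-safe-example.org"}

def allow_placement(domain: str, mode: str = "blacklist") -> bool:
    """Return True if an ad may run on the given domain.

    mode="blacklist": allow everything except known-bad sites (keeps the long tail).
    mode="whitelist": allow only pre-approved sites (safer, far smaller reach).
    """
    if mode == "whitelist":
        return domain in WHITELIST
    return domain not in BLACKLIST

# The trade-off in two lines: an unknown long-tail site passes a blacklist
# check but fails a whitelist check.
print(allow_placement("random-longtail-blog.com"))                    # True
print(allow_placement("random-longtail-blog.com", mode="whitelist"))  # False
```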

For example, after JPMorgan Chase’s ads appeared alongside incendiary fake news content, JPMorgan’s in-house team cut most of the roughly 400,000 sites it had been using for programmatic ad buys, many of which served only a few impressions, narrowing the list to a whitelist of 5,000 hand-selected sites and 1,000 pre-selected YouTube channels. JPMorgan has since reported that its costs and visibility have not materially changed as a result.

Whitelists present an obvious problem for ad networks. The cost and efficiency benefits of programmatic ad buying rely on a “long tail” of impressions served across hundreds of thousands of smaller sites. If more advertisers confine their buys to a small number of popular sites, inventory on those sites gets squeezed, driving up the cost of serving ads there.

The Rise Of The Self-Policing Platform

The answer may instead be for publishers and networks to offer technologically advanced brand-safety tracking as a part of their service.

Advertisers and their agency partners have been pushing for years to contractually require platforms and networks to implement third-party brand-safety tracking. In particular, many agreements require publishers and networks to implement third-party tracking technologies that monitor brand-safety incidents and either take down offending placements in real time or block the advertiser from making those placements in the first place. Publishers and networks can see their agreements terminated if they do not implement the contractually required tracking or if too many brand-safety incidents occur.
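
That contractual language maps to a simple decision point in code. Below is a hypothetical sketch of a pre-bid brand-safety gate; the risk-scoring function is a stub standing in for a third-party verification vendor’s call, which in practice would go through that vendor’s own SDK or API:

```python
# Hypothetical pre-bid brand-safety gate. The scoring call is a stub; the
# threshold would be set by the advertiser's editorial adjacency guidelines.

RISK_THRESHOLD = 0.7  # hypothetical tolerance on a 0-1 scale

def vendor_risk_score(page_url: str) -> float:
    """Stub for a third-party content-risk lookup (vendor API not shown)."""
    return 0.2  # placeholder score

def should_bid(page_url: str, incident_log: list) -> bool:
    """Block the placement before the bid if the page scores above threshold."""
    score = vendor_risk_score(page_url)
    if score > RISK_THRESHOLD:
        # Logged incidents feed the advertiser's reporting and, if they
        # accumulate, the termination rights described above.
        incident_log.append((page_url, score))
        return False
    return True

incidents: list = []
print(should_bid("https://example-news-site.com/story", incidents))  # True with the stub score
```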

The industry may be moving toward platforms and networks that implement their own brand-safety tracking to assuage advertisers’ concerns. Major social networks have responded to the fake news epidemic by partnering more closely with third-party organizations that offer brand-safety tracking. Because platforms like Google and Facebook have always resisted overtly censoring content, the focus has instead been on identifying and flagging bogus, incendiary, violent or sexually explicit content so that only brand-safe content is monetized.

For example, Snapchat in April began partnering with Integral Ad Science, DoubleVerify and Moat to filter sexually explicit content out of its advertising products. Facebook and Google have followed suit, recently announcing similar plans to use automated measures to stop monetizing inappropriate content.
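
The common thread is a “flag, don’t censor” design: content stays on the platform, but flagged categories are walled off from ad serving. A hypothetical sketch (the category names are invented for illustration):

```python
# Hypothetical monetization gate: flagged content remains published but is
# excluded from ad serving. Category names are invented for illustration.

UNSAFE_CATEGORIES = {"fake_news", "incendiary", "violent", "sexually_explicit"}

def can_monetize(content_flags: set) -> bool:
    """Allow ads only on content carrying no unsafe-category flags."""
    return not (content_flags & UNSAFE_CATEGORIES)

print(can_monetize({"news", "politics"}))    # True: no unsafe flags
print(can_monetize({"news", "fake_news"}))   # False: stays up, but unmonetized
```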

These measures are a step in the right direction, and they are likely to develop further as automation and artificial-intelligence tools become more sophisticated. Publishers and ad networks may similarly begin implementing their own tracking technologies to determine which placements are safe for advertisers.

Although the cost of implementing such technologies, combined with the squeeze on inventory, can be seen as a threat to the bottom lines of publishers and networks, those companies could reap many benefits from offering advertisers a reliable solution. Rather than adhering to a whitelist and thereby ignoring a sizable portion of available inventory, a network that can promise brand-safe placements could monetize a larger swath of the sites in its network. Moreover, in an age of fake news and incendiary content, reliable brand-safety solutions can be a powerful selling point for attracting advertising dollars from ever-more-cautious advertisers.

Regardless of who leads the charge, the advertising industry as a whole will benefit as reliable brand-safety tracking becomes the norm online.

Follow Davis & Gilbert (@dglaw) and AdExchanger (@adexchanger) on Twitter.

This post was syndicated from Ad Exchanger.