Getting To The Holy Grail: How Publishers Measure The Incremental Value Of Ad Tech Partners

If a new ad tech partner generates $50,000 in revenue for a publisher, is it safe to assume that gross revenue will increase by the same amount?

Publishers that unify their analytics and rigorously test those partners increasingly are finding that the answer is no.

“There is no such thing as purely incremental revenue,” said LittleThings Chief Digital Officer Justin Festa. “Everything you add is going to come at the expense of something else. Understanding what those things are will help you understand how well that partner is doing.”

Header bidding has enabled many publishers to cram a dozen or more partners into one setup, encouraged by the added revenue they see each partner bring. And outstream video, content recommendation engines and native partners promise to add more revenue to a publisher’s bottom line.

But now the pendulum is swinging away from these complicated setups. Publishers are finding they can remove ad tech partners with little effect on their bottom line.

Some even see their revenue increase when they remove a partner – for a variety of reasons.

Less clutter can lead to faster page load times, stronger engagement and higher viewability for the remaining ads on the page, publishers pointed out. And with fewer partners, a publisher's inventory is less likely to be resold, a practice that can drive down its value.

“We are collectively wising up as publishers,” said Paul Bannister, EVP of strategy at CafeMedia. “We want to simplify. We don’t want to deal with in-image ads and outstream video and social-sharing things, and users don’t either. While it’s possible I’m leaving a little bit of money on the table, by simplifying internal operations the team can focus on strategic things versus eking out an extra $50,000 here or there.”

How Publishers Test

To measure incremental revenue, publishers need two things: an A/B test for partners and a centralized place to store and analyze data. They must also pay close attention to the revenue metrics they use.

“People overvalue partners because they look purely at CPM,” Festa said.

Instead, looking at revenue per session, page-load time and CPM compared to the next highest bid can tease out the true value a partner contributes.
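
As a rough illustration, here is how those metrics might be computed from logged auction and session data. This is a sketch under assumed record shapes; the field names are hypothetical, not any publisher's actual schema.

```typescript
// Hypothetical record shapes; not any publisher's actual schema.
interface AuctionLog {
  winningBidCpm: number;     // winning bid, dollars per thousand impressions
  nextHighestBidCpm: number; // runner-up bid in the same auction
  partner: string;           // partner that won the auction
}

interface SessionLog {
  revenue: number;    // ad revenue attributed to the session
  pageLoadMs: number; // measured page-load time in milliseconds
}

// Revenue per session: total revenue divided by session count.
function revenuePerSession(sessions: SessionLog[]): number {
  const total = sessions.reduce((sum, s) => sum + s.revenue, 0);
  return total / sessions.length;
}

// A partner's real contribution is not its winning CPM but the gap over
// the next highest bid: the revenue that would vanish if it were removed.
function incrementalCpm(auction: AuctionLog): number {
  return auction.winningBidCpm - auction.nextHighestBidCpm;
}
```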

Publishers CafeMedia, Ranker, LittleThings and Chegg never run a partner on 100% of their inventory; each holds partners out of a percentage of inventory to measure their effect. New partners run ads on a small subset of inventory until they prove their worth.

CafeMedia, for example, created “header-bidder holdout,” a testing mechanism that sits atop its Prebid wrapper. The product withholds each ad tech partner 1% of the time.
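
CafeMedia hasn't published the tool, but the mechanism it describes could look roughly like the sketch below: before each auction, every bidder is independently excluded about 1% of the time, and the exclusions are logged so held-out traffic can be compared against the rest.

```typescript
// Hypothetical sketch of a header-bidder holdout. The 1% rate comes from
// the article; the data structures and logic are assumptions.
const HOLDOUT_RATE = 0.01;

interface BidderConfig {
  name: string;
  params: Record<string, unknown>;
}

// Splits the configured bidders into those allowed to bid on this page
// view and those withheld for measurement.
function applyHoldout(bidders: BidderConfig[]): { active: BidderConfig[]; heldOut: string[] } {
  const active: BidderConfig[] = [];
  const heldOut: string[] = [];
  for (const bidder of bidders) {
    if (Math.random() < HOLDOUT_RATE) {
      heldOut.push(bidder.name); // withheld roughly 1% of the time
    } else {
      active.push(bidder);
    }
  }
  return { active, heldOut };
}
```

Logging which bidders were held out on each page view is what makes it possible to compare page speed and revenue with and without a given partner, and the same split can be restricted to a country or device segment.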

“We can see if the partner is slowing down the page or sniping out an impression for a penny more,” Bannister said, adding that the tool can also test by country or device.

After testing, CafeMedia dropped three partners down to just 5% of its inventory in case their performance improved. So far, one partner has worked its way back into the stack: CafeMedia slowly scaled up how much inventory the partner could bid on – from 20% to 50% to 99% – to see whether it maintained its value.
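
A staged ramp-up like that might be enforced as in the sketch below, assuming session-level bucketing so a user sees consistent treatment; CafeMedia hasn't described its implementation beyond the percentages, and the hashing scheme here is an assumption.

```typescript
// Hypothetical ramp-up gate. The 20/50/99 stages come from the article;
// the session hashing is an assumption.
const RAMP_STAGES = [0.2, 0.5, 0.99];

function partnerEligible(sessionId: string, stage: number): boolean {
  // Hash the session ID into [0, 1) so each session gets consistent
  // treatment across page views instead of flipping per auction.
  let hash = 0;
  for (const ch of sessionId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash / 0x100000000 < RAMP_STAGES[stage];
}
```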

Ranker uses a similar approach, deploying its header-bidding wrapper, PubFood, to test different partners and evaluating the results using DoubleClick for Publishers (DFP). It will randomly turn off a bidder in its wrapper 5% of the time and send the data to DFP.

“A lot of people don’t realize that DFP is a free data warehouse,” said Ranker CTO Premesh Purayil. “If you are savvy, you can send a lot of information for each impression.”
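
In practice that means attaching key-values to each ad request with Google Publisher Tag. `setTargeting` is the standard GPT call; the specific keys below are illustrative, not Ranker's actual schema.

```typescript
// Sketch of reporting holdout state to DFP as key-value targeting.
// googletag is the Google Publisher Tag global loaded on the page.
declare const googletag: { cmd: Array<() => void>; pubads: () => any };

function reportHoldoutToDfp(heldOutBidders: string[]): void {
  googletag.cmd.push(() => {
    // Key-values ride along on the ad request and can be broken out in
    // DFP reporting: the "free data warehouse" Purayil describes.
    googletag.pubads().setTargeting('holdout', heldOutBidders.length > 0 ? 'true' : 'false');
    if (heldOutBidders.length > 0) {
      googletag.pubads().setTargeting('heldout_bidders', heldOutBidders);
    }
  });
}
```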

Ranker ramped up to as many as 10 partners after adopting header bidding, then removed a few based on its test results, Purayil said.

At Chegg, understanding incremental revenue was a year-long project for advertising VP Emry Downinghall. When Chegg acquired StudyBreakMedia in May 2016, the advertising team shifted its focus from adding partners to understanding the effects of the supply-side platforms already plugged into its inventory.

For the first time, the team gained access to a data scientist, which increased its understanding of how to run statistically significant tests.

“We’ve learned that if you have the ability to measure, you should throw away your preconceived notions of what is going to happen,” Downinghall said. “You might think something is intrusive, but the data paints an entirely different picture.”

The publisher focuses on the “added value” metric. For example, Chegg will use bidding data to compare a winning bid of $2.50 to the next highest losing bid of $2.25, for added value of a quarter. Chegg then weighs added value against page-load time.

“If we see a 4% lift in revenue, but an 8% increase in GPT [Google Publisher Tag] fire time, we will not add them,” Downinghall said.
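
Stitched together, the rule Downinghall describes might look like the sketch below. The $2.50/$2.25 example and the 4%-versus-8% rejection come from the article; treating "the revenue lift must at least match the latency cost" as the general threshold is an assumption.

```typescript
// Added value per auction: the winning bid minus the next highest bid.
// A $2.50 winner over a $2.25 runner-up adds $0.25, per Chegg's example.
function addedValue(winningBid: number, nextHighestBid: number): number {
  return winningBid - nextHighestBid;
}

interface PartnerTestResult {
  revenueLiftPct: number;     // e.g. 4 means revenue up 4%
  gptFireTimeLiftPct: number; // e.g. 8 means GPT fire time up 8%
}

// Assumed decision rule: reject a partner whose latency cost outweighs
// its revenue lift, as in the 4% lift / 8% fire-time example.
function shouldAddPartner(result: PartnerTestResult): boolean {
  return result.revenueLiftPct >= result.gptFireTimeLiftPct;
}
```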

Despite the testing, Chegg is the only publisher interviewed that didn't remove partners – except for one with large discrepancy issues.

LittleThings works with fewer than 10 partners now, the result of aggressive testing. Since it made a commitment to remove partners that added little or no value, it’s removed outstream video, its content recommendation engine and some display ad tech partners.

LittleThings can test partners both outside and inside its header bidding setup. The publisher passes all of its testing information into Google Analytics, which allows it to track bounce rate, share rate, pages per session and page load times. These metrics let LittleThings assess how ads affect the user experience and how much time users spend with content.
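
One way to wire that up, assuming the classic analytics.js API and an unused custom dimension slot; the dimension index and label values here are hypothetical.

```typescript
// Sketch of segmenting Google Analytics data by test cell. ga() and
// custom dimensions are standard analytics.js features; 'dimension1'
// and the label values are assumptions.
declare const ga: (...args: unknown[]) => void;

function tagTestCell(heldOutPartner: string | null): void {
  // Record which partner, if any, was withheld on this page view so
  // bounce rate, share rate and pages per session can be compared
  // between test and control traffic.
  ga('set', 'dimension1', heldOutPartner ?? 'control');
}

function recordPageLoad(loadMs: number): void {
  // User timings capture page-load time under the same segmentation.
  ga('send', 'timing', 'performance', 'page-load', Math.round(loadMs));
}
```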

Using this setup, LittleThings determined that using a content recommendation engine didn’t make sense.

When Festa tested the removal of the content widgets, the publisher recovered 72% of the lost revenue because more people watched a video once they navigated to the page, initiating a pre-roll ad.

Users also clicked more internal recommendations and shared more articles from the less cluttered pages. After taking all these factors into account, LittleThings determined that removing the module increased overall revenue by 7%.
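
The arithmetic works out as in the short illustration below, using an assumed baseline; only the 72% recovery rate and the 7% net lift come from the article.

```typescript
// Worked illustration with an assumed baseline of $100 in widget revenue.
const widgetRevenue = 100;                                // hypothetical
const recoveredFromVideo = 0.72 * widgetRevenue;          // $72 back via extra pre-roll views
const remainingGap = widgetRevenue - recoveredFromVideo;  // $28 still to make up
// Per the article, engagement gains (more internal clicks, shares and
// page views) more than closed that gap: total revenue ended 7% higher.
```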

When The Strong Survive

The publisher trend toward removing ad tech partners reflects the overall consolidation in the ad tech space. Many publishers once bought into ad tech pitches promising unique demand, but they view such claims with skepticism in the age of header bidding.

Ranker, for example, once worked with ad tech partners that promised 100% fill rates and created “new” ad units within images or content. It discovered these partners worked by refreshing tons of nonviewable ads.

“It devalued our inventory because there was a lot of volume at a lower rate than we would want on the open market, and the viewability metrics were horrible,” Purayil said.

Plus, working with these partners slowed down or crashed the site and led to a barrage of low-quality malware ads.

“We realized we needed to protect our inventory,” Purayil said. “As a general rule of thumb, we don’t work with anyone that is repackaging demand.”

Soon, publishers may have no choice but to remove ads from their pages. Google released an “Ad Experience Report” in June that gave publishers warnings or failing grades for their intrusive ad experiences, which its Chrome browser will soon block.

Could it be that an ad filter ends up increasing publisher revenue?

Publishers wouldn’t go so far as to make that conclusion. They want to control their own fate, which is why they’ve invested in testing frameworks to help make decisions about which ad tech partners to work with.

Downinghall of Chegg warns publishers that measuring incremental revenue, though transformative, requires investment. Creating the “added-value” metric involved extensive engineering work and the use of its data scientist.

“Not many publishers can answer the question of how much value a partner is providing,” he said. “It takes a serious commitment to data and engineering to understand performance.”

This post was syndicated from Ad Exchanger.