March 29, 2024


Frequency Management: Let’s Do Better Than Average


“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Steve Latham, global head of analytics at Flashtalking.

Frequency ranks among the most important factors in determining advertising effectiveness. Regardless of the quality of the placement, creative or context, too much exposure – or not enough – can lead to disappointing outcomes.

Years of intensive research indicate that the optimal frequency during the customer journey typically ranges between five and 15 impressions, depending on the brand, offer and purchase cycle.

Unfortunately, most frequency distribution curves are heavily polarized at the extreme highs and lows. Even if a campaign achieves an average frequency within the optimal range, it’s highly likely that a small percentage of users account for a disproportionately high share of impressions, while most users receive too few ads to make a difference.

It’s time to revisit the average frequency metric. Average frequency – total impressions divided by the number of exposed users – is only representative of the larger group if two underlying assumptions hold: first, that each publisher reaches a unique audience, and second, that each publisher actively manages toward the optimal frequency for that audience.

Unfortunately, this is almost always wishful thinking.
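To make that gap concrete, here is a minimal sketch in Python, using made-up user IDs and counts, of how an average inside the optimal range can coexist with a distribution in which almost no one actually sees an optimal number of ads:

```python
from collections import Counter

# Hypothetical impression log: one entry per impression served.
impressions = (
    ["user_0"] * 500                        # one cookie-bombed user
    + [f"user_{i}" for i in range(1, 100)]  # 99 users seen exactly once
)

per_user = Counter(impressions)             # impressions per exposed user
average = len(impressions) / len(per_user)  # total impressions / exposed users

print(f"average frequency: {average:.1f}")  # 6.0 -- comfortably in the 5-15 range
in_range = sum(5 <= n <= 15 for n in per_user.values())
print(f"users in the optimal 5-15 range: {in_range} of {len(per_user)}")  # 0 of 100
```

The average lands at six, yet not a single user is served within the optimal band. That is precisely the failure mode the average conceals.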

Assumption No. 1: unique audiences

Prior to programmatic buying and audience targeting, it was reasonable to think that media buys on ESPN.com, HGTV.com and NBC.com offered unique audiences, and that each publisher would seek to deliver the desired frequency across all users.

Today that paradigm no longer exists, as programmatic media vendors – demand-side platforms, aggregators and publishers alike – inadvertently buy the same audiences through multiple sources. Overlap and redundant targeting are constant issues.

For example, consider an advertiser that buys from one DSP and two programmatic aggregators. Each buy may be segmented into unique strategies or tactics based on audience demographics, behavior or other criteria. If each media vendor optimizes for average frequency, the advertiser still risks overserving ads to its target audience because each vendor is targeting the same small group of individuals. This is a costly scenario for advertisers, not only in wasted impressions but also in the potential to annoy that small and important set of targeted users.
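A rough sketch of the mechanics, using hypothetical per-vendor logs: each vendor holds its own per-user frequency to a sensible five, but because all three are bidding on the same people, the advertiser’s true frequency for the shared users is far higher.

```python
from collections import Counter

# Hypothetical per-user impression counts, as each vendor sees them.
# Each vendor individually caps frequency at a reasonable 5...
dsp          = Counter({"user_1": 5, "user_2": 5, "user_3": 5})
aggregator_a = Counter({"user_1": 5, "user_2": 5, "user_4": 5})
aggregator_b = Counter({"user_1": 5, "user_3": 5, "user_5": 5})

# ...but the advertiser's true frequency is the sum across all vendors.
combined = dsp + aggregator_a + aggregator_b
for user, freq in sorted(combined.items()):
    print(user, freq)  # user_1: 15, user_2: 10, user_3: 10, user_4: 5, user_5: 5
```

No single vendor did anything wrong by its own numbers; the overexposure only becomes visible when impressions are deduplicated at the user level across all buys.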

Assumption No. 2: frequency distribution

Compounding the overlapping audience issue, many programmatic media vendors seek to win the “last-touch” prize by allocating a large share of impressions to a relatively small subset of users who are likely to convert – a practice known as “cookie bombing.”

For example, an advertiser buys 10 million impressions from its programmatic vendor. The vendor serves 3 million impressions to 3 million users (a frequency of one) and another 2 million impressions to 1 million users (a frequency of two), saving the remaining 5 million impressions to retarget, at a frequency of 100, the 50,000 users who click or visit the advertiser’s site.

With the hope of winning the last-touch conversion, the vendor may serve each retargeted user (some of whom may be bots) 100 impressions over several weeks – many of them not viewable. Because those 50,000 retargeted users were already counted among the 4 million exposed, the overall average frequency looks unremarkable at roughly 2.5 impressions per user (10 million impressions across 4 million users), even though most users are underserved (one or two impressions) and a small group is wildly overserved (100 or more).
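Running the arithmetic on that example (round, hypothetical numbers) shows just how concentrated the spend really is:

```python
# Round, hypothetical numbers from the example above.
segments = {
    "3M users at frequency 1":      3_000_000,
    "1M users at frequency 2":      2_000_000,
    "50K retargeted at ~100 each":  5_000_000,
}
total_impressions = sum(segments.values())  # 10,000,000
unique_users = 3_000_000 + 1_000_000        # the retargeted 50K were already exposed

print(f"average frequency: {total_impressions / unique_users:.1f}")           # 2.5
print(f"impression share on 50K users: {5_000_000 / total_impressions:.0%}")  # 50%
print(f"those users as a share of reach: {50_000 / unique_users:.2%}")        # 1.25%
```

Half the budget lands on 1.25% of the reached audience, and the blended average gives no hint of it.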

While this practice is not nearly as prevalent as it was a few years ago, it still takes place more than it should.

The cumulative effect of overlapping audiences and cookie bombing is excessively high frequency for a subset of exposed users. This harms advertisers in the form of wasted spend and lost opportunities to find new customers – not to mention the risk of annoying retargeted users.

Follow Flashtalking (@flashtalking) and AdExchanger (@adexchanger) on Twitter.

This post was syndicated from Ad Exchanger.