March 28, 2024


Performance Advertisers Are Turning To Lift Tests To Defend The Spend


Two years ago, lift tests weren’t something OLX Group even talked about.

Today, lift tests, which measure the incrementality of a marketing channel or advertising tactic, are in heavy rotation at the Argentinian web company, which owns and operates 17 classified apps and sites around the world, including Craigslist competitor Letgo in the US.

Performance advertisers in particular are under increasing pressure to justify the return on ad spend of everything they do, said Federico Vazquez, CMO of OLX Group, which primarily uses tools from Facebook, Google and app marketing platform Jampp to lift test its digital media roughly once a quarter.

Although a growing number of advertisers believe their budgets are vulnerable to ad fraud, they’re still planning to spend more on mobile, which makes measurement that much more critical.

Robust lift testing can be a first line of defense.

“It can help you quickly select the right partners to work with,” Vazquez said.

Heavy Lifting

It’s easy to distort the results of a lift study by not asking the right questions or not constructing a solid testing model at the start.

For example, advertisers need a statistically significant number of users, and the division between their test and control groups must be truly random. Other than the ads being tested, each group must be exposed to exactly the same media.

Ideally, advertisers should also pause other campaigns.

And because most DSPs optimize delivery toward conversions and apply frequency caps, advertisers should opt for fixed-price auctions so they can better control what the two test groups see.
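The randomization and significance requirements above can be sketched in a few lines. This is an illustration only, not OLX's or any vendor's actual tooling; the salt, user counts, and conversion numbers are assumptions. A salted hash splits users into groups that are reproducible but effectively random, and a two-proportion z-test checks whether an observed difference could plausibly be noise:

```python
import hashlib

def assign_group(user_id: str, salt: str = "lift-test-q1") -> str:
    """Deterministically but pseudo-randomly assign a user to test or control.

    The same user always lands in the same group for a given salt, and the
    split is effectively 50/50 across a large user base."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return "test" if int(digest, 16) % 2 == 0 else "control"

def two_proportion_z(conv_test: int, n_test: int,
                     conv_control: int, n_control: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_t = conv_test / n_test
    p_c = conv_control / n_control
    p_pool = (conv_test + conv_control) / (n_test + n_control)
    se = (p_pool * (1 - p_pool) * (1 / n_test + 1 / n_control)) ** 0.5
    return (p_t - p_c) / se

# Hypothetical numbers: 1.3% vs. 1.0% conversion on 100,000 users per group.
z = two_proportion_z(1300, 100_000, 1000, 100_000)
significant = abs(z) > 1.96  # 95% confidence level
```

Changing the salt between experiments re-randomizes the split, so users don't carry their old group assignment into the next test.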

Seasonality will also skew the results, as will an app’s maturity in the market.

Newly launched apps usually don't have a large enough user base to test and are still in the hockey-stick growth phase. Mature apps with established names, meanwhile, have to be careful not to count organic traffic in their lift experiments.

But most basic of all is the need for advertisers to ask themselves what they even want to learn from the test. Most don’t know what they’re looking for or how to contextualize their findings, said Jampp co-founder Diego Meller.

“We constantly encounter advertisers at large companies that are unsophisticated in their testing,” Meller said. “Most are just comparing their results to what they see on Facebook, but that is not apples to apples.”

The Facebook Question

Like most app marketers, OLX counts Facebook among its primary media channels, which means it's both imperative to test it and painful to turn off for testing purposes.

Facebook, unsurprisingly, doesn’t recommend the practice. According to Toby Roessingh, a quantitative researcher in Facebook’s marketing science division, an incrementality test is scientifically kosher as long as the audience is appropriately split and randomized.

“The idea behind experiments is to hold all else equal so that there’s only one possible explanation for the difference in behavior you see between two groups of people,” Roessingh said. “Those groups of people may very well be exposed to other ads, but, by randomization, we can be sure it’s the same amount of other ads.”

In a lift test, Facebook determines whether a campaign or channel generated an increase in conversions by comparing test and control groups, capturing the holistic impact of the ads. This is different from app-install attribution, for which Facebook relies on last-click.
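At its core, that test-versus-control comparison is simple arithmetic: control-group conversions approximate what would have happened without the ads, and the difference is the incremental effect. A minimal sketch, with hypothetical numbers (Facebook's actual methodology is more involved, modeling opportunity to see, multi-cell tests, and so on):

```python
def conversion_lift(conv_test: int, n_test: int,
                    conv_control: int, n_control: int) -> float:
    """Relative conversion lift of the test group over the control group.

    The control group's conversion rate is the organic baseline; anything
    above it in the test group is attributed to the ads being tested."""
    rate_test = conv_test / n_test
    rate_control = conv_control / n_control
    return (rate_test - rate_control) / rate_control

# 1.3% vs. 1.0% conversion rate -> 30% relative lift
lift = conversion_lift(1300, 100_000, 1000, 100_000)
```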

Still, OLX questions Facebook’s tendency to self-attribute.

“When a platform self-attributes, it’s almost impossible to independently track what they provide to you,” Vazquez said. “And Facebook has a very particular way of seeing things. Their view-through attribution window is 24 hours, but it’s hard for me to believe that if someone sees an ad on Facebook and an ad on TV, that a day later it’s the Facebook ad that fully caused the install.”

A lack of standards in the app-install attribution space makes it difficult for advertisers to compare performance across channels.

To be fair, Facebook works with partners through its Lift API, including Oracle Data Cloud and Nielsen Catalina, to help measure offline sales, and with vendors like Visual IQ and MarketShare for multitouch attribution.

To date, advertisers have run more than 3,000 lift studies through Facebook measurement partners and over 20,000 tests through Facebook directly.

“We believe in building and supporting our own solutions, as well as partner solutions,” said Jonathan Lewis, a measurement manager at Facebook. “[We believe in] enabling measurement at both touch points so that our advertisers have a clear picture of the value they’re getting from their investment in Facebook.”

This post was syndicated from Ad Exchanger.