April 19, 2024

Data Quality: In Demand But Hard To Define

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Jason Downie, senior vice president and general manager of data solutions at Lotame.

Marketers can have all the data they could ever want, but if that data is of low quality, the only thing they’ll scale is the size of their mistakes.

Every business leader knows that data quality is important. What’s concerning is that 84% of CEOs worry about the quality of data [PDF] inside their organizations.

Unfortunately, data quality has so many definitions that it has no real meaning. One reason for this confusion is that ad tech has historically been an incredibly crowded and fragmented space.

But as consolidation continues and margins shrink, the remaining ad tech players have the opportunity to lead a more meaningful discussion around quality.

Here, a little common sense can go a long way. Consider the way we talk about ZIP+4 data, for example. For many advertisers, ZIP+4 – a five-digit ZIP code plus four digits that pinpoint a segment within the delivery area – is a mark of quality, but to what end? If marketers want income insights, ZIP+4 can yield quality results, but the same data set is useless for gender, because neighborhoods aren’t segregated by gender.
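
To make that point concrete, here is a minimal sketch in Python. The lookup table, its income figures and the segment thresholds are purely hypothetical; the point is that a delivery-segment code can stand in for a neighborhood-level attribute like household income, but not for an individual-level attribute like gender.

```python
import re

# Hypothetical neighborhood-level lookup: ZIP+4 -> median household income.
# Real figures would come from census or licensed third-party data.
INCOME_BY_ZIP4 = {
    "10001-4356": 92000,
    "60614-1201": 118000,
}

ZIP4_PATTERN = re.compile(r"^\d{5}-\d{4}$")  # five digits, hyphen, four digits

def income_segment(zip4: str) -> str:
    """Bucket a profile into an income segment using its ZIP+4."""
    if not ZIP4_PATTERN.match(zip4):
        raise ValueError(f"not a valid ZIP+4: {zip4!r}")
    income = INCOME_BY_ZIP4.get(zip4)
    if income is None:
        return "unknown"
    return "high income" if income >= 100_000 else "middle income"

# There is no analogous gender-by-ZIP+4 table to build: gender is an
# individual attribute, so a delivery-segment code can't act as a proxy for it.
print(income_segment("60614-1201"))  # -> "high income"
```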

Intellectually, we can all understand how the purpose for which we use a particular data set influences the way we think about the quality of that data. But as a practical matter, we often bypass common sense questions because we know that, above all else, we must scale.

In other words, marketers often miss out on the right data because they’re chasing scale.

Changing Marketers’ Mindsets About Data Quality

Common sense only gets you so far because most marketing challenges are so specific. Here, it’s helpful to think about quality as a process, rather than as a result.

Imagine you want to know the gender of an audience with 10 million profiles, sourced from multiple vendors. What accuracy rate counts as a quality outcome? Perfection is unrealistic at that scale, but 50% accuracy is no better than a guess.

The challenge in navigating that territory between 50/50 and perfection comes down to investigative prowess and the client’s needs. In addition to asking common sense questions about sourcing, collection and chain of custody, marketers need to test the data.

When marketers see wide variations in accuracy, they need to have discussions with their vendors about methodology. Quality is process above all else.
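
As a sketch of what "testing the data" might look like in practice, the snippet below scores each vendor's gender labels against a verified truth panel and compares accuracy rates. The panel, the vendor names and every label here are made up for illustration; the shape of the check is the point.

```python
# Hypothetical truth panel: profile_id -> verified gender (e.g., survey-based).
truth = {"p1": "F", "p2": "M", "p3": "F", "p4": "M", "p5": "F"}

# Hypothetical labels supplied by two vendors for the same profiles.
vendor_labels = {
    "vendor_a": {"p1": "F", "p2": "M", "p3": "M", "p4": "M", "p5": "F"},
    "vendor_b": {"p1": "M", "p2": "M", "p3": "F", "p4": "F", "p5": "M"},
}

def accuracy(labels: dict, truth: dict) -> float:
    """Share of overlapping profiles where the vendor label matches the truth set."""
    overlap = [pid for pid in labels if pid in truth]
    if not overlap:
        return 0.0
    correct = sum(1 for pid in overlap if labels[pid] == truth[pid])
    return correct / len(overlap)

for vendor, labels in vendor_labels.items():
    print(f"{vendor}: {accuracy(labels, truth):.0%} accurate")
# Wide gaps between vendors are the cue to start the methodology conversation.
```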

Emphasizing process requires more work than simply assembling the biggest data set, because quality and utility are tougher to gauge than scale. The more rigorous the standards and processes for verifying the integrity of the data, the better.

Quality Must Mean Something Specific To An Organization

Marketers allocate budget based on ROI, which forces us to put data quality in a larger context. That’s a good thing, because marketing is a lot more than the accuracy of data inputs. Having the right data isn’t the same thing as deploying it for maximum effect in the real world, where strategy, creative and media budget all contribute to the overall outcome.

Return to the gender example: a 70% accuracy rate isn’t great, especially compared to site targeting, which requires no audience data at all because content serves as a proxy for gender. But is 70% accuracy acceptable? The answer depends on the relative cost and efficacy of the alternatives. Targeting by site is accurate, but it’s also expensive. In some cases, it’s possible that 70% – what we might call lower quality data – performs well enough.
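
One way to frame that trade-off is cost per on-target impression. All of the CPMs and on-target rates below are invented for illustration, but they show how a cheaper, less accurate data segment can still beat a more accurate, more expensive alternative.

```python
def cost_per_on_target_impression(cpm: float, on_target_rate: float) -> float:
    """Effective cost of one impression that actually reaches the target audience."""
    cost_per_impression = cpm / 1000.0
    return cost_per_impression / on_target_rate

# Hypothetical scenario: 70%-accurate audience data bought at a $4 CPM
# vs. site targeting that is ~95% on-target but costs a $9 CPM.
data_driven = cost_per_on_target_impression(cpm=4.0, on_target_rate=0.70)
site_based = cost_per_on_target_impression(cpm=9.0, on_target_rate=0.95)

print(f"data-driven: ${data_driven:.4f} per on-target impression")
print(f"site-based:  ${site_based:.4f} per on-target impression")
# In this made-up case, the "lower quality" 70% data is still the cheaper
# way to reach the target audience.
```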

That’s not to suggest we should settle for low-quality data – we shouldn’t – but we do need to be pragmatic. There is no one-size-fits-all definition of data quality. Each organization has unique challenges, needs and goals.

Marketers who can embrace the idiosyncratic nature of the data quality question put themselves in the best possible position to deploy their data in a way that’s most meaningful to their organizations. If improvements in data quality aren’t leading to attributable increases in ROI, marketers are just spinning their wheels.

Follow Lotame (@Lotame) and AdExchanger (@adexchanger) on Twitter.

This post was syndicated from Ad Exchanger.