November 24, 2024

The Era Of Responsible Data

“Data-Driven Thinking” is written by members of the media community and contains fresh ideas on the digital revolution in media.

Today’s column is written by Kym Frank, president at Geopath.

Marketers have eagerly embraced big data. When I worked at a media agency, I so often hunted for new data, insights, optimization tools and differentiation that I could easily fill an entire week with meetings with promising data partners and vendors showcasing their latest measurement innovations.

There are a lot of shiny new toys out there. It is very easy to get sucked into a great sales pitch or a sexy new technology. As a result, I’ve begun urging marketers to embrace responsible data, rather than big data. There are a few questions that should be asked to avoid a few common hazards.

What’s the question? Are these the right data to answer this question?

The first step to any worthwhile data and research exploration is for marketers to truly understand the question they are trying to answer. The better, more focused the question is, the more valuable the findings will be.

In 1966, Abraham Maslow said, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”

It sometimes seems that all data providers are confident that they can answer marketers’ questions with their data sets, but that is simply not the case. Sure, you could pound a screw into a block of wood with a hammer but the result wouldn’t be as good as if you had just used a screwdriver.

Before even beginning the conversation with a data or research provider, it is a fantastic exercise to document the question that needs to be answered, along with the exact data specifications and requirements for answering it.

It is easy to be blinded by the science and get sucked into sexy new methodologies, data collection techniques and technologies out there. Marketers must ensure that they are selecting the best methodology to answer their question rather than the newest or most differentiated solution in the marketplace.

How many people does this represent? 

A brand may receive a report showing that its target audience is overindexing in a specific geography, and the number is huge!  Let’s say it’s an index of 400. You should shift your budget there, right?

Before you do, find out how many people that represents. It could be 2,000 people, but it could also be four people. As a rule, I’m always suspicious of any index without a sample size attached.
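To make the arithmetic concrete, here is a minimal Python sketch of how an audience index is computed and why the base count behind it matters. This is not Geopath’s or any provider’s actual methodology; all counts and the minimum-base threshold are hypothetical.

```python
# Minimal sketch, assuming a simple index definition:
# index = (target share in the geography / target share overall) * 100.
# All figures are hypothetical; the 100-respondent threshold is an arbitrary
# illustration of a minimum-base check, not an industry standard.

def audience_index(target_share_in_geo: float, target_share_overall: float) -> float:
    return (target_share_in_geo / target_share_overall) * 100

overall_share = 0.01  # hypothetical: target is 1% of the total population

# Two geographies with the identical index of 400, built on very different bases.
for name, target_count, sample_size in [("Geo A", 2000, 50000), ("Geo B", 4, 100)]:
    share = target_count / sample_size
    idx = audience_index(share, overall_share)
    note = "usable" if target_count >= 100 else "tiny base, directional at best"
    print(f"{name}: index={idx:.0f}, base={target_count} respondents ({note})")
```

Both rows print an index of 400, but one rests on 2,000 respondents and the other on four; the index alone cannot tell you which is which.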

What are the inherent biases in the data?

There are innumerable ways that a data set can be biased. The data may be from a non-representative subset of the population, it may be platform- or publisher-specific, or the collection methods could create patterns that are easily misinterpreted.

It is imperative to fully understand the biases in the data set being used and how they can impact the results.

Are these results statistically significant?

So, all the data have been loaded, the algorithms run and the results show a 5% lift in brand trust among those exposed to advertising. Up is good, right?

But if the results are not statistically significant, that finding is directional at best. Marketers must always make sure that their providers are applying significance testing to the data, or they run the risk of making decisions based on results that won’t be replicated in the actual population. They should also ask at what confidence level significance is measured; 95% is the best practice, and the lower the number, the less confidence there is in those findings.
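As an illustration of what such a check might look like, here is a minimal Python sketch of a two-proportion z-test on hypothetical exposed-versus-control brand-trust numbers; actual providers may use different tests, sample designs and corrections.

```python
# Minimal sketch of a two-sided, two-proportion z-test using only the
# standard library. The survey counts below are hypothetical.
import math

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Return (z, two-sided p-value) for H0: the two proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Exposed: 420 of 1,000 agree they trust the brand (42%);
# control: 400 of 1,000 (40%), a 5% relative lift.
z, p = two_proportion_z_test(x1=420, n1=1000, x2=400, n2=1000)
print(f"z = {z:.2f}, p = {p:.2f}")  # roughly z = 0.91, p = 0.36
```

At a 95% confidence level the lift is significant only if p is below 0.05; with these sample sizes, a two-point difference does not clear that bar, so the finding is directional at best.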

Does this make sense?

Before the research begins, marketers must be realistic about the potential outcomes. They should ask others, write it down, share it and be honest with themselves. Frustration is the gap between expectations and reality, so they should go into any analysis with grounded expectations.

As data and the potential uses of these data continue to grow exponentially, I challenge the industry to move from an era of big data to an era of responsible data – one where we use hammers only on nails and make sound business decisions based on actionable insights.

Follow Geopath (@GeopathOOH) and AdExchanger (@adexchanger) on Twitter.

This post was syndicated from Ad Exchanger.