
As AI Expands Across Advertising, Marketers Must Mitigate Unconscious Bias

“Brand Aware” explores the data-driven digital ad ecosystem from the marketer’s point of view.

Today’s column is written by Emily Ketchen, regional head of marketing, Americas, at HP.

Anyone who has raised a child knows that along with the pride and joy of watching them mature comes the sinking disappointment of seeing your own faults reincarnated in your offspring.

Unfortunately – but not surprisingly – this cycle extends to our technological progeny.

Artificial intelligence (AI) is a thrilling new frontier, packed with promise and possibility. PwC projects that AI will contribute up to $15.7 trillion to the global economy by 2030, more than the combined output of China and India.

For marketers, AI is manifesting in chatbot customer service integrations, social media analysis and myriad forms of data analysis. And if you’re like me, you’ve probably benefitted from AI marketing while online shopping, where AI takes the form of automatic recommendations based on what you’ve already purchased. 

But none of this is without risk. The notion of AI as a great equalizer serving up fair, unbiased facts is a myth. We have seen this many times already: Voice command software has struggled to understand accents; MIT’s “Norman” AI turned into a “psychopath” after consuming violent content from Reddit; and US courts have used AI to predict the likelihood of future crime during sentencing, falsely predicting future criminality among African Americans at twice the rate of Caucasians.

The reason for bias in AI is deceptively simple and, therefore, dangerous: AI is our own humanness, our own flaws and prejudices, staring back at us. With few regulations and accountability mechanisms, the responsible use of AI tools in marketing rests with a company’s strategy.

Expect bias

To be human is to be biased. It is something we cannot escape, but we can mitigate it with a thoughtful, candid and deliberate approach. An honest evaluation of where preconceptions and biases exist within an organization – and therefore in its data, which influences our outreach – is the first step.

The building blocks of AI follow rules. Those rules are learned from data, which feeds the algorithms. Because that data is collected and curated by humans, it inherits their biases. And because machine learning is designed to predict the results users want, that inherent bias is further validated and even deepened.

To mitigate this, marketers must ask hard questions of themselves and about their audiences, in terms of what preconceptions exist. Then, they can identify data that is directly relevant to their intended audience while stripping away any ancillary information that could morph into bias. Remember that even the concept of fairness depends on who you are talking to.
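To make that concrete, here is a minimal, purely illustrative Python sketch of both points: a model trained on historically skewed outcomes will simply reproduce that skew, and stripping an ancillary column that acts as a proxy is one first-pass mitigation, not a complete fix. The data and column names are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical campaign-response data: past targeting was skewed, so the
# "responded" labels already encode that history rather than true interest.
df = pd.DataFrame({
    "age":       [22, 25, 31, 45, 52, 60, 34, 28],
    "zip_code":  ["10001", "10001", "60629", "60629", "10001", "60629", "10001", "60629"],
    "responded": [1, 1, 0, 0, 1, 0, 1, 0],
})

# Naive model: zip_code has nothing to do with the product, but it correlates
# with who was shown the ad in the past, so it becomes a proxy that replays the old skew.
X_naive = pd.get_dummies(df[["age", "zip_code"]])
model_naive = LogisticRegression().fit(X_naive, df["responded"])

# First-pass mitigation: strip the ancillary column before training.
# (This removes one obvious proxy; it does not make the model "fair" on its own.)
X_lean = df[["age"]]
model_lean = LogisticRegression().fit(X_lean, df["responded"])
```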

Invite diversity

A dynamic result requires diverse input. This means that AI cannot be solely the domain of technologists and mathematicians. Marketing professionals also have a critical stake in AI’s potential. While successful marketers must establish a strong understanding of AI’s data capabilities, complicated problems can only be solved when hard findings are weighed against human perspectives. This blend of automation and human touch makes it more likely that a broad range of variables, including gender, race, sexual orientation and more, will be considered.

If marketing professionals embrace society’s diversity and invite it into their programs, bias is more likely to be illuminated and challenged. By drawing on a full spectrum of experiences and perspectives, checking assumptions and uncovering unconscious bias, teams can bring AI closer to delivering a truly impartial and authentic result.

Know your data

The data used to build AI either comes from collection (think of Facebook gathering personal information) or is purchased. Remarkably, emails from the disgraced energy company Enron remain one of the world’s most influential data sets for training AI systems, simply because, historically, the corpus was one of the largest compilations of human interaction available. One can imagine the biases likely present in those communications.

When personalizing campaigns or reaching new audiences, it is critical to understand where data comes from. Past behaviors that shape data sets can easily lead to mistakes and missed opportunities. Consider a university that inadvertently screens out an ethnic group because, historically, few students from that group applied. If those trends are not eliminated from the data, they will show up again in the AI model and reinforce the bias.
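One simple way to surface this kind of inherited screening bias is to compare selection rates across groups before trusting a model’s output. The sketch below is illustrative only; the data, group labels and the 80% threshold used as a flag are assumptions, not a formal compliance test.

```python
import pandas as pd

# Hypothetical qualification decisions produced by a model trained on past data.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   0,   1,   0,   0,   0,   0],
})

rates = decisions.groupby("group")["selected"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # rough four-fifths rule of thumb, used here only as a flag
    print("Warning: one group is selected far less often -- re-examine the training data.")
```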

Don’t overlook bias through omission, either. For instance, while thousands of people entered an AI-powered beauty contest, only one of the 44 winners selected had dark skin, revealing that the data sets used to train the AI were made up mostly of light-skinned people. If brands want to be fearless in reaching new audiences and empathetic in creating personalized outreach, they must continue to scrutinize data and assume accountability for bias.
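Auditing the composition of the training set itself is an equally basic check: if one group barely appears, the model cannot learn to serve it. A minimal sketch, assuming a simple skin_tone label exists in the image metadata (it often does not, which is part of the problem):

```python
import pandas as pd

# Hypothetical metadata for the images a scoring model was trained on.
train_meta = pd.DataFrame({"skin_tone": ["light"] * 95 + ["dark"] * 5})

shares = train_meta["skin_tone"].value_counts(normalize=True)
print(shares)

# Flag any group below an illustrative 10% share of the training data.
underrepresented = shares[shares < 0.10]
if not underrepresented.empty:
    print("Underrepresented in training data:", list(underrepresented.index))
```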

Ironically, AI has thus far been almost too human, revealing our shortcomings and faults. When we shift to a more intentional approach that balances data with a human touch, “artificial” marketing tools can be not only smarter and faster but also equitable and inclusive.

Follow HP (@HP) and AdExchanger (@adexchanger) on Twitter.
