Photo: piping liquid into vials (Louis Reed)

Marketing isn't famous for being overly concerned with facts. At best, marketing informs people of products and services that can elevate their potential and wellbeing. At worst, marketing spins facts and bamboozles the public. Less discussed, however, is the way that marketers can deceive themselves.

For several decades, there has been a push to make marketing more scientific, data-driven and measurable. Numerous agencies, software platforms and self-proclaimed gurus have rallied to the cause. It has been such a popular idea that CEOs and CFOs now believe that marketers actually control the metrics they measure.

The more that digital systems track every click, view, search and comment, the more confidence there seems to be that marketing can be done with scientific precision. I think this confidence is misplaced. While I’d like to see more scientific thinking introduced into marketing, I also want to be realistic about where it can help us, and how it can mislead us.

More Data, Not So Much Science

Marketing sits at an intersection between the quantitative and qualitative, the measurable and immeasurable, the systemic and creative. In practice, it’s hard to control our variables. It’s hard to interpret the meaning of everyday analytics. It’s hard to get statistically significant samples. It’s even harder to distinguish between correlation and causation.

Perhaps because of this complexity, marketers use data to justify what they did rather than to scrutinize their choices and assumptions. There’s pressure to prove we created revenue or value. That pressure is incompatible with the objectivity critical to a scientific method based on observations, hypotheses, predictions, experiments, and analysis.

In essence, the pressure to be more data-driven can make marketing less scientific. Of course, there are questions that we really want answers to, and maybe scientific methods can help.


What Marketers Want to Know

When I talk to marketers about their challenges with data and analysis, a few topics come up. I raise these to show just how hard it is to study the central blind spots in modern marketing.

  1. Attribution. Every marketer wants to trace the events that led to a customer acquisition. How did we create a lead or customer? If they interacted with our content and campaigns, how do we know those made a difference? How did they affect the viewer's beliefs and actions? What return on investment are we actually getting from our AdWords spending or Facebook ads?
  2. Resonance. Marketers also want to know how people react to branding, words and images. If we use one headline versus another in a blog, email or white paper, how does it affect the reader's actions? What about different color schemes? Cartoons versus live people? Questions versus statements? Are we even focusing on the right variables? What aspects of human psychology and neuroscience are we tapping into?
  3. Technology. Marketing departments now spend more on technology than on people. Do the systems make a difference? How do we really know that something has made us more efficient, productive or profitable? How do we attribute gains or losses to a software system versus, say, a new team member, a struggling competitor, macroeconomic conditions or a factor we haven’t considered?

Scientists question how we can know something. They exercise skepticism as a virtue, not as a weapon to win corporate battles. Marketers, generally, are not given the time, resources and freedom to figure out how they know things and why.


Interrogating Data

The big questions in marketing are resistant to controlled scientific study. So, we either avoid trying to answer them, or we make assumptions.

Let’s take a narrow case: email headlines. Maybe I observe that people frequently click on headlines with “5 Tips …” or “10 Things ….” My hypothesis is that more people on our email list will click on numbered headlines than on non-numbered headlines.

I could run a series of A/B tests to see how people respond to a numbered headline versus a non-numbered equivalent (while keeping the time of receipt, topic, frequency, etc., consistent). Maybe my analysis of the data shows that people click on numbered headlines significantly more often than non-numbered headlines.
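The kind of analysis described above can be made concrete with a standard two-proportion z-test, which asks whether the gap between two click-through rates is larger than chance would explain. The click and send counts below are purely hypothetical, chosen only to illustrate the calculation:

```python
import math
from statistics import NormalDist

def two_proportion_ztest(clicks_a: int, sends_a: int,
                         clicks_b: int, sends_b: int) -> tuple[float, float]:
    """Compare click rates of two email variants.

    Returns the z statistic and a two-sided p-value under the null
    hypothesis that both variants have the same true click rate.
    """
    rate_a = clicks_a / sends_a
    rate_b = clicks_b / sends_b
    # Pooled rate: the best single estimate if the null hypothesis is true.
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (rate_a - rate_b) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical A/B result: numbered headline vs. non-numbered headline.
z, p = two_proportion_ztest(clicks_a=220, sends_a=2000,
                            clicks_b=180, sends_b=2000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note what this does and doesn’t tell us: with these made-up numbers the difference is statistically significant at the conventional 0.05 level, but the test says nothing about why people clicked, or whether they were glad they did.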

But what do their clicks mean?

Does a click mean a person expects the information to be useful? Does it mean the numbering suggests a quick, light read that will help someone gain information with minimal effort? If someone does click, are they glad they did so, or do they feel disappointed by the content?

There are so many unknowns and renegade variables. For example, we probably can’t measure whether people read the blog post linked in the email unless we invade their privacy or enroll them in a more formal experiment (which, in itself, is likely to generate different results from an experiment in the wild). If we look at the comment section or social media to gauge reactions to the blog post, we might be misled, because people who comment on articles are a small, non-representative segment of the audience (though maybe that’s the important audience to us).

We go out trying to find the answer to what seems like a straightforward question, and we either admit our limits or bamboozle ourselves. At the end, maybe we conclude that, in this case, numbered headlines were clicked 10% more often than non-numbered headlines. And that’s all we know for sure.


Get Comfortable With the Unknown

Data-driven marketing is often about measuring customer interactions in a way that makes marketing look good. It’s not scientific at all. And that’s OK.

We should still use data and analytics to understand what happened, even if we can’t be certain about why it happened. Email opens, website conversions, and social metrics are useful feedback. Although analytics may not tell us which variable to change, they do tell us when it’s time to be creative and try something new.

In most cases, we are best off leaving science to business and marketing scholars who have the time, budgets and training to do it well. Maybe we even collaborate with them to identify questions and experiments they wouldn’t consider otherwise. I think collaboration between academia and marketers is an untapped opportunity.

As marketers, we have to go with our gut more often than we would like to admit. So instead of misleading ourselves with comforting data, let’s get comfortable with the unknown — or let’s conduct proper social science.