Did you see the article in the New York Times on January 5 about Pandora and big data?
It's all about how Pandora serves you particular ads or predicts which artists you'll want to vote for based on your choice of songs. These are the types of articles that make marketers' hearts sing about the promise of big data.
How Did Pandora Do That?
On the surface this sounds pretty impressive, but if you think through the data that Pandora uses and could use, it is pretty straightforward: what you listen to, your zip code, your device, when you listen, how often you listen. Overlay commercially available market data keyed on zip code, build models based on usage behavior and, voilà, you are ready to run some powerful targeted marketing tests.
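To make the "overlay" concrete, here is a minimal sketch of what joining usage behavior with zip-level market data might look like. Every field name and value here is invented for illustration; this is not Pandora's actual data or method.

```python
# Hypothetical sketch: join per-listener usage data with commercially
# available market data keyed on zip code, then flag a test segment.
# All names and values are invented.

listeners = [
    {"listener_id": 1, "zip": "10001", "weekend_hours": 6.5, "device": "mobile"},
    {"listener_id": 2, "zip": "60601", "weekend_hours": 0.5, "device": "desktop"},
]

# Zip-level market data purchased from a third party (invented values).
market_data = {
    "10001": {"median_income": 85000, "urban": True},
    "60601": {"median_income": 72000, "urban": True},
}

def enrich(listener):
    """Overlay zip-code market data and flag heavy weekend listeners
    as candidates for a vacation-ad test cell."""
    profile = dict(listener)
    profile.update(market_data.get(listener["zip"], {}))
    profile["travel_ad_candidate"] = listener["weekend_hours"] > 4
    return profile

enriched = [enrich(listener) for listener in listeners]
```

The point is not the code itself but how little is needed: once usage data and purchased market data share a key (here, zip code), a targeted test cell is a simple join and a rule.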
For example, are you more adventurous or relaxed on the weekend when you're listening to music, and more apt to respond to an ad for a vacation than you are on Monday morning at work, when you may be thinking about where to buy lunch?
Like a lot of the news about big data over the last few years, this article has a big "cool" factor, but there's not much about how they actually did it. That's the "blue collar" side of big data.
In an earlier article, Bringing Big Data Analytics into Focus for Marketers: 3 Principles to Simplify Your Life, I talked about how to bring big data down to size and make it manageable.
Today I’d like to share a story about what goes into getting powerful big data analysis.
One Company's Journey to Big Data Analysis
An international B2B company markets to its target audiences through multiple online and offline channels, and it has always been challenged to find just the right marketing mix. It spends many millions of dollars a year, and it has been hard to figure out how to correctly attribute credit to individual marketing programs.
The company has gone to great lengths to establish a data warehouse with a flexible and powerful visualization tool. It systematically identifies the critical data sources, works through the critical components of each data set (such as PPC, banner advertising, email, web data and call center data), discards the data not considered important, and breaks the rest out at the customer level. Fortunately, there are customer IDs that can link all of the data points together.
Building this Extract, Transform and Load (ETL) process took a few months and is now automated. New data feeds go through a similar cleansing procedure before being uploaded into the system. Creating a data architecture and data model to provide a stable, flexible querying framework also took time. The result: the firm can now calculate the number of touches, across all channels, that it needs to successfully reach and engage its target audience.
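The payoff of all that ETL work can be sketched in a few lines. Once every channel feed is cleansed and keyed on a shared customer ID, the touch count is a simple roll-up. The feed and field names below are invented for illustration, not the company's actual schema.

```python
# Illustrative roll-up: channel feeds already cleansed and keyed on a
# shared customer ID, combined into a per-customer touch count.
# Feed contents are invented.

from collections import Counter

ppc_touches = [{"customer_id": "C1"}, {"customer_id": "C2"}]
email_touches = [{"customer_id": "C1"}, {"customer_id": "C1"}]
web_touches = [{"customer_id": "C2"}]

def touch_counts(*feeds):
    """Count marketing touches per customer across all channel feeds."""
    counts = Counter()
    for feed in feeds:
        for record in feed:
            counts[record["customer_id"]] += 1
    return counts

counts = touch_counts(ppc_touches, email_touches, web_touches)
# counts["C1"] == 3, counts["C2"] == 2
```

The hard part is everything upstream of this snippet: the cleansing and the shared customer ID that make the join possible in the first place.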
Simple? No, but an example of how a strong data quality assurance process enables insights from big data. It is the same type of basic “blocking and tackling” that goes into any successful data analytics initiative.
To make big data work, you need to spend time on data cleansing, formatting, data quality and quality assurance. You need to determine the data models and queries that will answer your questions. Sure, you can get lots of value from big data, but don't forget that it's the up-front planning and less glamorous work that makes it possible.
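That less glamorous work looks something like this: a minimal, hypothetical cleansing step that drops records without a linkable customer ID and normalizes formats before anything is loaded into the warehouse. The records and field names are invented.

```python
# Hypothetical cleansing pass: exclude unlinkable records and
# normalize ID and email formats before loading. Data is invented.

raw = [
    {"customer_id": " c1 ", "email": "A@Example.com"},
    {"customer_id": "", "email": "orphan@example.com"},  # no linkable ID
    {"customer_id": "C2", "email": "b@example.com"},
]

def cleanse(records):
    """Keep only records with a customer ID; normalize casing and whitespace."""
    clean = []
    for record in records:
        cid = record["customer_id"].strip().upper()
        if not cid:
            continue  # unlinkable record: a real pipeline would log it for review
        clean.append({"customer_id": cid, "email": record["email"].lower()})
    return clean

rows = cleanse(raw)
```

Multiply this by every feed, every format quirk and every month of new data, and you have the "blocking and tackling" that the headlines skip over.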
Title image by Lonely Walker (Shutterstock)
Editor's Note: Read more of Phil's thoughts on analytics in Think Outside the Marketing Box for Digital Analytics