The last few years have been marked by dramatic increases in the volume and granularity of data available to marketers. User-level data now arrives in near real time, and to stay competitive, marketers, especially those catering to mobile audiences, have little choice but to use it.

But in a landscape where the same user is presented with thousands of offers, it becomes imperative for marketers to leverage their data to tailor propositions and control marketing costs. With media becoming increasingly expensive, the unit economics of bulk buying on a cost per thousand impressions (CPM) basis are rarely positive.

Maximize Results, Minimize Costs 

To maximize results while controlling costs, marketers must be ready to dynamically change their advertising strategies based on performance and learning from cohorts of similar users.

But tools traditionally used for generating insights from data, such as pivot tables in spreadsheets, are often ineffective when handling large volumes of data. In addition, granular data often holds hidden insights that are not accessible through high-level generalizations such as correlation analysis.

Statistical Models to the Rescue

Luckily, a wide array of statistical techniques and models are readily available to help marketers capture and express relationships between different data elements in insightful and effective ways. However, each model comes with its own range of applications, strengths and weaknesses.

Target, Recommend and Optimize

Understanding these models, even without going into the involved math behind them, equips marketers with a formidable set of tools. The three broadest use cases for data at scale are targeting the right audience, recommending the right product or offer to show, and optimizing advertising spend to maximize return.

Here is a brief overview of statistical tools that can be deployed in each of these use cases:

Targeting Methodologies

Targeting the right audience requires different strategies depending on how much the marketer knows about the ideal audience profile. Early on, while the right audience is still being identified, the marketer can use naive Bayesian models to find segments that over-index on performance.

These models compute the probability that a particular segment is genuinely over-performing, given its performance relative to the baseline and the size of the segment. This probabilistic treatment helps bootstrap campaigns by identifying promising segments early.
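
As an illustration, the sketch below uses a simple Beta-Binomial treatment, one common way to implement this kind of probabilistic segment comparison, to score segments by the posterior probability that their true conversion rate exceeds the baseline. All segment names and numbers are invented for the example.

```python
import numpy as np
from scipy.stats import beta

BASELINE_RATE = 0.02  # assumed 2% baseline conversion rate across all traffic

# Hypothetical per-segment observations.
segments = {
    "ios_gamers":      {"conversions": 45, "impressions": 1500},
    "android_casual":  {"conversions": 18, "impressions": 1200},
    "tablet_shoppers": {"conversions":  3, "impressions":  900},
}

# Weakly informative Beta prior centered on the baseline rate.
PRIOR_STRENGTH = 100  # pseudo-observations; tune to taste
alpha0 = BASELINE_RATE * PRIOR_STRENGTH
beta0 = (1 - BASELINE_RATE) * PRIOR_STRENGTH

for name, s in segments.items():
    # Posterior after observing the segment's conversions and misses.
    a = alpha0 + s["conversions"]
    b = beta0 + s["impressions"] - s["conversions"]
    # Probability that the segment's true rate exceeds the baseline.
    p_over = 1 - beta.cdf(BASELINE_RATE, a, b)
    print(f"{name}: P(rate > baseline) = {p_over:.2f}")
```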

Isolating High-Value Segments

Managing the tradeoff between going heavy on segments that work — “exploitation” — and finding new segments that can be meaningful — “exploration” — is a common dilemma.  A good way to manage that dilemma is to use what are known as “multi-armed bandit” models. The name is derived from a scenario in which a set of one-armed bandit slot machines have different payout probabilities. 

When players begin, they are unaware of the best option and have to find it while keeping in mind the opportunity cost of searching in the wrong direction. To do that, statistical modelers often employ a so-called "epsilon-greedy" approach, in which a small portion (epsilon) of the games are played on randomly chosen machines while the rest are played on the machine with the highest historical win rate.

Bottom line: Marketers can allocate a fixed portion of their spend for exploration while spending the rest on the channels yielding the highest return.
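
Here is a minimal epsilon-greedy sketch. The channel names, conversion rates and budget split are all illustrative, and conversions are simulated rather than pulled from real reporting.

```python
import random

# Hypothetical channels with true conversion rates unknown to the model.
TRUE_RATES = {"social": 0.030, "search": 0.045, "display": 0.015}
EPSILON = 0.1        # fraction of spend reserved for exploration
N_IMPRESSIONS = 10_000

trials = {ch: 0 for ch in TRUE_RATES}
wins = {ch: 0 for ch in TRUE_RATES}

def win_rate(ch):
    return wins[ch] / trials[ch] if trials[ch] else 0.0

for _ in range(N_IMPRESSIONS):
    if random.random() < EPSILON:
        channel = random.choice(list(TRUE_RATES))  # explore a random channel
    else:
        channel = max(TRUE_RATES, key=win_rate)    # exploit the best so far
    trials[channel] += 1
    if random.random() < TRUE_RATES[channel]:      # simulated conversion
        wins[channel] += 1

for ch in TRUE_RATES:
    print(f"{ch}: {trials[ch]} impressions, observed rate {win_rate(ch):.3f}")
```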

Maximizing Segment Potential

In the case where a marketer already has some data on the desired audience, two approaches can be adopted depending on the volume of data available. With low volumes, marketers can use regression analysis to fit a model to the conversion data. A regression model is akin to fitting a line to a set of data points, but can be extended to many dimensions, where each dimension corresponds to an audience feature.
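
As a sketch, the snippet below fits such a model with scikit-learn. Because conversion is a yes/no outcome, a logistic regression is used here; the feature names and data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical audience features per user: [age, sessions_per_week, is_ios]
X = np.array([
    [25, 3, 1],
    [40, 1, 0],
    [31, 5, 1],
    [22, 2, 0],
    [35, 4, 1],
    [50, 1, 0],
])
converted = np.array([1, 0, 1, 0, 1, 0])  # observed conversion outcomes

model = LogisticRegression().fit(X, converted)

# The fitted weights indicate how each feature relates to conversion.
for feature, weight in zip(["age", "sessions_per_week", "is_ios"],
                           model.coef_[0]):
    print(f"{feature}: {weight:+.3f}")
```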

The marketer can then use the fitted weights of these dimensions to adjust campaign targeting parameters. Where data volumes are higher, for example a high-value cohort of hundreds of thousands of users plus access to a large user pool via an exchange, marketers can instead leverage lookalike modelling to find more users similar to the high-value cohort.

Bottom line: Lookalike modelling fits a model (similar to a linear regression model but typically more sophisticated) to the user features of a cohort. This model can then be applied across a broader pool to pick out users that are the most similar to the cohort. These models need to be tuned carefully but can be very effective.
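
A minimal sketch of that idea: train a classifier to separate a known high-value cohort from the broader pool, then score the pool and keep the highest-scoring users. The model choice, features and data below are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical user feature vectors (e.g. activity, spend, recency scores).
cohort = rng.normal(loc=1.0, scale=0.5, size=(500, 3))  # known high-value users
pool = rng.normal(loc=0.0, scale=1.0, size=(5000, 3))   # broad exchange pool

# Train a classifier to separate the cohort from a sample of the pool.
X = np.vstack([cohort, pool[:500]])
y = np.array([1] * len(cohort) + [0] * 500)
model = GradientBoostingClassifier().fit(X, y)

# Score the whole pool and keep the users most similar to the cohort.
scores = model.predict_proba(pool)[:, 1]
lookalikes = np.argsort(scores)[::-1][:100]  # top 100 lookalike user indices
print(pool[lookalikes[:5]])
```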

Recommendation Methodologies

Recommendation entails selecting the right offering to advertise to each user, which is obviously important for marketers with a wide range of products to sell. In addition to promoting trending products, marketers can use two common statistical models to personalize recommendations: collaborative filtering and product similarity.

  • Collaborative filtering: In collaborative filtering, made popular by Netflix, users who like the same media offerings are grouped together. Then, within the group, the most popular offerings are shown to users who haven't seen or used them already. 
  • Product similarity: On the other hand, product similarity is lookalike modelling for products. Given that a user has expressed an interest in a set of products, product similarity helps identify other products that are similar on the basis of features such as price, category or reviews (a sketch of both approaches follows this list).
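
Below is a toy sketch of both approaches, using invented interaction and product-feature data; real systems would operate on far larger matrices with more carefully engineered features.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors (small epsilon avoids div-by-zero).
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# --- Collaborative filtering (user-based) on a toy interaction matrix ---
# Rows are users, columns are products; 1 = user engaged with the product.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [1, 0, 0, 0, 1],
])

target = 0  # recommend for the first user
similarities = np.array([cosine(interactions[target], interactions[u])
                         for u in range(len(interactions))])
similarities[target] = 0  # exclude the user themselves

# Score products by similarity-weighted popularity among similar users,
# then drop products the target user has already seen.
scores = similarities @ interactions
scores[interactions[target] == 1] = -np.inf
print("CF recommendation: product", int(np.argmax(scores)))

# --- Product similarity on hypothetical product feature vectors ---
# Columns: [normalized price, is_electronics, average rating]
products = np.array([
    [0.2, 1.0, 4.5],
    [0.3, 1.0, 4.2],
    [0.9, 0.0, 3.8],
])
liked = 0  # the user expressed interest in product 0
sims = [cosine(products[liked], p) for p in products]
sims[liked] = 0  # exclude the product itself
print("Most similar product:", int(np.argmax(sims)))
```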

Optimization Methodologies 

After identifying the right audience and the right product to advertise, marketers must purchase advertising inventory at the right price for the campaign's ROI to hold up. To do this, they constantly tweak campaigns by changing price, creative, messaging and targeting parameters, an optimization process that is still done largely by hand. When large volumes of data flood a system, control mechanisms such as dynamic pricing feedback loops and reinforcement learning-based models can help rein in the complexity.

  • Dynamic pricing feedback loops: These monitor performance metrics at a meaningful granularity and adjust price and creative selection to keep performance above a threshold (see the sketch after this list).
  • Reinforcement learning models: These models help achieve longer-term objectives when performance reporting is staggered and multiple criteria such as gross margins and user satisfaction are being tracked.
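
As an illustration of the first mechanism, the loop below nudges a bid up or down in proportion to how far the measured cost per acquisition (CPA) sits from a target. The target, learning rate and simulated CPA readings are all placeholders.

```python
import random

TARGET_CPA = 5.00    # desired cost per acquisition (illustrative)
LEARNING_RATE = 0.2  # how strongly the bid reacts to the error
bid = 1.00           # starting bid

def observed_cpa():
    # Stand-in for a measured CPA pulled from reporting at some chosen
    # granularity; here it is simulated with random noise.
    return random.uniform(3.0, 8.0)

for period in range(10):
    cpa = observed_cpa()
    # Positive error means we are beating the target and can bid up;
    # negative error means CPA is too high, so bid down.
    error = (TARGET_CPA - cpa) / TARGET_CPA
    bid = max(0.01, bid * (1 + LEARNING_RATE * error))
    print(f"period {period}: CPA ${cpa:.2f} -> new bid ${bid:.2f}")
```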

Marketing with the Models 

These are only a few of the many tools available to marketers looking to exploit the huge opportunities that large volumes of high-quality data are opening up. Becoming conversant with them will help when implementing campaigns in-house or when evaluating vendors who offer these capabilities.
