CLEVELAND, Ohio – When it comes to measuring the effectiveness of content, most marketers would love a simple formula ("X + Y = Success!" comes to mind).
Unfortunately, measuring content does not lend itself to such a simple equation. There are too many variables for any single formula to take into account.
Considerations About Measuring Content
That doesn't mean you can't measure it. You can.
But you have to start with the end in mind, said Andrea Ames, a senior enterprise content experience strategist, architect and designer with IBM, speaking at Content Marketing World in Cleveland last week. Sponsored by the Content Marketing Institute (CMI), a UBM company, the event attracted more than 3,500 attendees.
Ames' session, "Measuring the Effectiveness of Content," attracted an overflow crowd. It's not surprising: putting meaningful metrics around content is hard to do.
As the curator of technical documentation for IBM's products, Ames has two overarching goals for measuring content effectiveness. The first is purely selfish: she wants to show the marketing organization that she and her team are relevant " ... or we may not be around very long." The second: she wants to ensure that customers are obtaining value from IBM's products and services.
Defining 'Effective' Content
Ames defines effective content as content that moves a customer or a prospect through their journey, not IBM's journey.
In other words, effective content is all about the customer's needs, not yours. If you can figure out the metrics, models, and frameworks to tell if your content does this, then you will be measuring your content effectively.
"It's all about audience and purpose," she said. "That's the core of every communications exercise: Who's our audience and what will speak to them?"
How you measure this depends on the audience you are addressing. Business stakeholders will want to see a direct connection between your metrics and the metrics that drive growth, for example, while marketing executives will be interested in content that improves their customer acquisition costs or conversion metrics.
Using 'Closed Loop' Frameworks
To answer these questions, Ames uses "closed loop" frameworks. These frameworks are based on models that take into account quantitative and qualitative measures as well as heuristics (i.e., trial and error) to validate the models and improve results over time.
"Frameworks help you apply things consistently so you can take a baseline, make course corrections ... and know you're going in the right direction," she said.
Ames likes to use running as a simplified example of how frameworks can be used to measure the effectiveness of content over time. If you know a runner, then you know running involves a lot of metrics, both qualitative (How do I feel?) and quantitative (How far did I run?).
To build a framework, Ames first figures out what she is measuring, or, in her parlance, the "story" she is telling. So in the case of running, the story is fitness. To tell the story of fitness you first have to define it. So what is fitness? For Ames it is the "relationship across distance, time and heart rate".
If the goal of running is improved fitness, then what story must the data tell to know if fitness is being improved? The formula looks like this: Improved fitness = Increased distance + decreased time + decreased heart rate.
So, in English, if you can run farther in less time with less exertion, then the data is telling you a successful story.
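The story formula above can be sketched in code. This is a minimal illustration, not Ames' actual model: the metric names, units, and the simple all-three-must-improve rule are assumptions for the sake of the example.

```python
# Illustrative sketch of the "improved fitness" story:
# fitness improves when distance goes up while time and heart rate go down.
def fitness_improved(baseline, current):
    """Compare two runs, each a dict with distance (km), time (min),
    and average heart rate (bpm). Returns True if the data tells a
    'successful story': farther, faster, less exertion."""
    return (current["distance"] > baseline["distance"]
            and current["time"] < baseline["time"]
            and current["heart_rate"] < baseline["heart_rate"])

baseline = {"distance": 5.0, "time": 32.0, "heart_rate": 165}
current = {"distance": 5.5, "time": 30.0, "heart_rate": 158}
print(fitness_improved(baseline, current))  # True: farther, faster, lower HR
```

A real model would likely weight the three metrics rather than demand that all improve at once, which is exactly why the numbers need to be read in context, as the next section explains.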
Evaluate Multiple Metrics
To achieve a true picture of your "fitness," however, you have to understand how these numbers interrelate and interact in context.
To do this you have to add in other metrics such as time of day, the weather (temperature, wind speed, rain/snow, sunny, etc.), as well as subjective factors like: How did I feel while I was running? How much pain was I in afterwards?
You also need to factor in your starting fitness level (beginner, intermediate, or advanced) to know whether your improvements are merely incremental or outstanding and, very importantly, how they are changing over time.
Normalize the Data
Then you have to normalize all of this data: feeling "good" needs to mean the same thing every time so you can compare your performance over time.
Without normalization, validating the accuracy of the model the framework is built around becomes problematic. Once the model has been validated, you have a baseline from which to build out your framework.
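One minimal way to normalize mixed qualitative and quantitative measures, sketched here with assumed names and an assumed rating scale rather than anything from Ames' talk, is to pin subjective ratings to a fixed numeric scale and express each quantitative metric as a ratio to its baseline value:

```python
# Illustrative normalization: subjective "feel" maps to a fixed [0, 1] scale
# so "good" always means the same thing; quantitative metrics become ratios
# to the baseline run so runs of different lengths stay comparable.
FEEL_SCALE = {"bad": 0.0, "okay": 0.5, "good": 1.0}  # assumed mapping

def normalize_run(run, baseline):
    """Return a comparable record for one run against a baseline run."""
    return {
        "feel": FEEL_SCALE[run["feel"]],
        "distance": run["distance"] / baseline["distance"],
        "time": run["time"] / baseline["time"],
        "heart_rate": run["heart_rate"] / baseline["heart_rate"],
    }

baseline = {"feel": "okay", "distance": 5.0, "time": 32.0, "heart_rate": 165}
today = {"feel": "good", "distance": 5.5, "time": 30.0, "heart_rate": 158}
print(normalize_run(today, baseline))
# ratios above 1.0 mean more than baseline; below 1.0 mean less
```

With every run expressed on the same footing, comparisons over time, and across runners at different starting levels, become meaningful.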
The framework itself is a closed loop that starts with the baseline, instructs you to measure performance periodically, takes a final measure at some point to determine the overall impact of the project, and then uses that to set the baseline for the next project: wash, rinse, repeat.
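That loop can be sketched as a simple cycle. The function shape and the single-number "measure" here are illustrative assumptions, not Ames' framework; a real implementation would carry the full normalized record from the previous section.

```python
# Illustrative closed loop: baseline -> periodic measures -> final measure
# -> overall impact -> new baseline for the next project.
def run_closed_loop(baseline, periodic_measures, final_measure):
    """Walk one iteration of the loop; return (impact, next_baseline)."""
    for i, measure in enumerate(periodic_measures, 1):
        drift = measure - baseline
        print(f"checkpoint {i}: {measure} (change from baseline: {drift:+})")
        # a real framework would trigger course corrections here
    impact = final_measure - baseline
    next_baseline = final_measure  # wash, rinse, repeat
    return impact, next_baseline

impact, new_base = run_closed_loop(70, [72, 75], 78)
print(impact, new_base)  # 8 78
```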
Once the framework is in place, you can then use it to measure the effectiveness of your content, or "fitness", over time and make course corrections as needed. Ames uses surveys, for example, to measure effectiveness and then feeds that data into her models.
"What validation does is give you the ability to see what those correlations really are," said Ames. "You can take a framework like this and apply it in a small way to lots of different circumstances."
Title image by Jennifer Burk