It’s fair to say there’s an important difference between content used for marketing and content that is itself being marketed. When content is your product, you may very well apply a new scale of value to it.
And that could be the problem right there. Never mind for a moment the obvious ethical issues regarding the dividing line between promotional and informational content (although we at CMSWire think about that quite a lot).
If we treated the content we publish as our product, representing what our institutions stand for to the same extent as the products and services we provide our customers, then we may very well elevate its value to something above what some of my colleagues over the years have dismissed as “just marcom.”
But a new issue can and often does arise: When we place a greater value on content, in hopes that our customers or readers will do the same, we become more protective of it.
If your business depends on the cultivation of quality information, including the acquisition of some of that information from outside sources, then you, as a publisher (which we all become, to a certain extent), must consider the value of collaboration.
“For a couple of years, we had been thinking that the end game for the Web, for us and people like us, could be a Google-like experience,” said Mark Trenchard, who directs web services and customer experience for Stanford University School of Medicine.
“At the end of the day, it was not so much structure of content, but deep semantic understanding of what the content is, out there, and ways of pulling it together topically and semantically.”
Nearly every publisher with a significant online presence since the dawn of the Web has experimented with the science of organizing information — with one form or another of knowledge management (KM). It was a big industry at one point, and a big topic of this very publication, until folks started realizing there wasn’t a heartbeat.
The basic idea is for a publisher to network information together using some system of relationships, such as a taxonomy or a semantic network, all of which would make this information more accessible from a simple search portal.
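The basic idea above can be sketched in a few lines. This is a minimal illustration, not Stanford’s actual system: the topic names, relationship labels, and content entries are all hypothetical, standing in for whatever taxonomy or semantic network a publisher maintains.

```python
# Hypothetical knowledge-management index: topics linked by explicit
# relationships ("broader", "related"), resolved from one search query.

TAXONOMY = {
    "melanoma": {"broader": "cancer", "related": ["dermatology", "immunotherapy"]},
    "cancer": {"broader": "diseases", "related": ["oncology"]},
}

# Maps each topic to the pages the publisher has filed under it.
CONTENT_INDEX = {
    "melanoma": ["Melanoma Treatment Overview"],
    "immunotherapy": ["Immunotherapy Clinical Trials"],
    "oncology": ["Oncology Department Home"],
}

def search(query):
    """Return content for the query's topic, plus its related topics."""
    topic = query.lower().strip()
    results = list(CONTENT_INDEX.get(topic, []))
    for related in TAXONOMY.get(topic, {}).get("related", []):
        results.extend(CONTENT_INDEX.get(related, []))
    return results
```

The point of the structure is that a search for “melanoma” surfaces not just pages filed under that word, but pages the taxonomy says are semantically adjacent — which is what makes the information “more accessible from a simple search portal.”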
(You’ve probably already gathered that this idea came to fruition prior to the rise of big data.)
Trenchard gave an example of a visitor to Stanford Med’s web site searching for a particular disease type. The strategy behind Stanford’s index enables editors to determine the specific instances when and where it makes sense to resolve searches through outside sources, including collaborators.
“We want to assert our point of view; we want to make sure it’s curated,” he explained. “But I can see, in that kind of a model... the key is understanding the semantics of something.
“We can get at other content, but what is this? Is it an event, a person, what topic, is this legitimate?”
He offered a more general example of a web site that assembled an experience around the context of the information being searched for — that became instantly less generic when it ascertained the reader’s probable set of interests. It’s as easy, he said, as knowing the contextual difference between a search for “Kobe beef” and one for “Kobe Bryant.”
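A crude sketch of that “Kobe beef” vs. “Kobe Bryant” distinction: score the query’s words against small context vocabularies. The context names and word lists here are assumptions for illustration; a production system would use a real semantic model rather than hand-built sets.

```python
# Hypothetical context vocabularies for disambiguating an ambiguous query.
CONTEXTS = {
    "food": {"beef", "restaurant", "wagyu", "recipe"},
    "basketball": {"bryant", "lakers", "nba", "dunk"},
}

def classify(query):
    """Pick the context whose vocabulary overlaps the query most."""
    words = set(query.lower().split())
    scores = {ctx: len(words & vocab) for ctx, vocab in CONTEXTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

Here `classify("Kobe beef")` resolves to the food context and `classify("Kobe Bryant")` to the basketball one — the same one-word difference Trenchard cites as the hinge for assembling a less generic experience.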
Sites that present readers with what Trenchard calls “brochure-ware” are under the impression that the site itself is the result of the search — that the reader typed your name into Google and clicked on the top link from the resulting list.
Trenchard makes the point that publishers (which we should all admit we are) have an opportunity here to elevate the value of their brands by giving customers a means of input, and responding to that input with a personalized experience. This way, customers perceive the brands as providers of their solutions, rather than sources for “just marcom.”
(Why should Google get all the glory?)
Stanford Medical literally comprises hundreds of sites, many of which include blogs. Some of those blogs include curated content from elsewhere, and that very fact has multiplied traffic on these sites, to the point where Trenchard now refers to them as news channels.
In Part 1 of our discussion with Trenchard, we examined the problem of how to make the content you publish to customers produce returns on investment, in and of itself.
If you’re marketing your core product line properly, then you have targeted customers in mind. And I’ve always said, rule #1 for making a message make sense is for you to know your audience.
So how do we target information toward an ideal customer, if we know in advance that all the information we produce will attract only the subset of customers who would search for that information specifically?
“I come down to a basic model that is not at all artificial intelligence,” responded Trenchard. “I do believe that, ultimately, prediction is going to be the next generation of targeting; and Adobe is already starting down this path, but I think there’s a long way to go.”
For now, he said, it’s a matter of applying real intelligence, not the artificial variety.
With any anonymous user today, it’s possible to gather analytics about what she types into the inputs. “The question is,” he noted, “can we make sense of it?”
Here’s what he means: “Cancer” is so broad, in and of itself, as to imply a variety of possible meanings even in terms of just diseases. A library of all first-level topics in a publisher’s taxonomy could let a user browse under “C” to get a list of all conceivable subtopics.
That’s not exactly helpful. We could apply a certain personality to the search process (“Where do you want to go today?”) but the risk there is that the site looks intrusive or prodding.
“Can I help you with your cancer needs?” is not an appropriately sensitive way to approach anyone who truly needs information about one of the many things under the “cancer” umbrella.
This may seem prodding in and of itself, but it’s actually an effort at paying close attention: Trenchard suggests that publishers take note of how long visitors on a site linger on a particular page, and how they exit that page when they do exit.
Do they share the information? Do they seek something we already know to be directly related? What can we infer from what the user is doing?
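Those two signals — how long a visitor lingers and how she exits — can be folded into a simple per-topic interest score. The event shape, weights, and cap below are all assumptions for illustration, not anything Trenchard or Stanford has published.

```python
# Hedged sketch: combine dwell time with the exit action into an
# interest score per topic. Weights and field names are hypothetical.

def interest_score(events):
    """events: list of {'topic': str, 'dwell_sec': float,
                        'exit': 'share' | 'related' | 'bounce'}"""
    EXIT_WEIGHT = {"share": 3.0, "related": 2.0, "bounce": 0.5}
    scores = {}
    for e in events:
        # Cap dwell at two minutes so one idle tab can't dominate.
        signal = min(e["dwell_sec"] / 60.0, 2.0) * EXIT_WEIGHT.get(e["exit"], 1.0)
        scores[e["topic"]] = scores.get(e["topic"], 0.0) + signal
    return scores
```

A visitor who reads a page for two minutes and then shares it scores far higher than one who bounces after ten seconds — which is the kind of inference the questions above are asking for.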
“The third factor — which I don’t know how we get, and this is where maturity models need to come into play — is, what are people actually talking about in more social channels?” asked Trenchard.
He perceives a time when the tools that analyze a customer’s involvement with Twitter and Facebook bridge the semantic gap between what they’re talking about socially, and what subjects they’re exploring with Stanford directly.
“If there’s someone with a specific disease type, they are showing that pattern. If there’s someone just exploring various topics of stem cell research, I may not be able to understand what they’re looking for, and target it.”
From a data mining perspective, what Trenchard outlines sounds dangerous. It deals with the sensitivity of subject matter that typically extends no further than the doctor’s office walls.
Such customers are under the impression that, when they search for medical information online, they’re purely anonymous. Or, when they don’t believe they are anonymous, they don’t use the Web at all.
For this reason, Trenchard warns, “If you’re going to start targeting, you’re also going to need to accompany it with testing, because you don’t know. In our world, it is very painful if we go too far.
“We have to get to the point where we help people discover what they’re looking for. We can never go into the point of pushing them into anything.
“That’s where the marketing challenge comes for us,” he continued. “We start to lose credibility if we peddle our wares too hard, and it really has to stop at the door of help and value.”
In just a few weeks’ time, on November 3 and 4 at the W Hotel City Center in Chicago, CMSWire will sponsor two solid days of all-out discussion and introspection on the topics related to digital customer experience delivery.
DX Summit 2015 will feature Forrester’s Mark Grannan, plus numerous other industry leaders in the DX space including: Tony Byrne, founder and CEO of Real Story Group; Tami Cannizzaro, senior director, Marketing at eBay; Mike Gilpin, CTO of Siteworx; Bruno Herrmann, director of Globalization at The Nielsen Company; Deb Lavoy, founder and CEO of Narrative Builders; Meghan Walsh, senior director, Global Marketing at Hilton Worldwide; and Melissa Webster, VP, Content and Digital Media Tech at International Data Corporation (IDC).
For More Information:
- DX Digest: Mark Trenchard on Content for Customers
- DX Digest: Mike Hughes on the Customer’s Perception
- DX Digest: Mark Grannan on Digital Experience Integration
- DX Digest: Mike Hughes on Customer Journey Mapping
- DX Digest: Mark Grannan on Core Services Architecture
- DX Digest: Sheryl Pattek on the Customer Lifecycle
- Axis41 Presents at Adobe Summit 2014 [video features Mark Trenchard]