It’s autumn. For a lot of folks, that means it’s football season. For product teams, it’s planning-for-next-year-and-product-roadmap season.
Now’s a good time to remind ourselves to beware the Next Feature Fallacy and avoid falling into a moral panic when we inevitably reach a plateau in adoption and growth. Because when we panic, product teams find themselves chasing an elusive grail: the killer feature that’ll reverse the curve.
This sort of thinking rarely works. Worse, it often signals that a product roadmap has gone off the rails. Or that maybe it never was on the right track in the first place.
That’s because the Next Feature Fallacy represents a betrayal of sorts of a core tenet of modern product management: the critical role of the Minimum Viable Product (MVP). MVPs are a widely cited, but often misunderstood, part of building a product roadmap. An MVP is an early version of a product designed to ensure that product vision and strategy are aligned with market needs.
That last phrase, “aligned with market needs,” is what’s germane here: If a product team has to resort to a cavalry-charging, deus ex machina new feature to save the day, that’s a good indication they’ve actually reached the end of an entire string of fallacies.
Rationalizing a Fallacy
By fallacy we mean a mistaken belief, especially one based on unsound argument. Such arguments may sound perfectly reasonable at the point of inspiration, and might even seem to hold up throughout the design and development process. When a product fails, though, 20/20 hindsight reveals them as rationalizations. An extremely ambitious product plan may have been built on very thin ice, with no real understanding of the market or prospective users.
Does research cure rationalizations like this? There’s the rub: too often, user research and market validation are pursued in ways that reinforce our existing assumptions.
There’s an old adage that “somebody who tries to be their own lawyer has a fool for a client.” It holds up for software and technology product teams, too. Unless we’re willing to recognize the various cognitive biases that can creep into the feature development cycle and take the right steps to ensure we’re being objective, we may simply be using research to validate assumptions or ideas we’ve already embraced.
Bias Comes in Many Flavors
All of us have biases capable of distorting how we collect and interpret data and arrive at decisions about products and features. It’s simply human nature. Here are just a few of the suspects likely to trip up product teams:
Confirmation Bias: It’s the most common cognitive bias: we collect evidence — often subconsciously — that supports a hypothesis, and ignore or discount contrary data. It tends to crop up when there’s time pressure to make decisions, and it manifests as narrow test sets (“my friends love it!”), token questioning of users, and other assumptive leaps.
How do you prevent it? Test or discuss a new feature with a diverse group of users, ask objective and non-leading questions (even if they make you uncomfortable), and actively try to invalidate your own assumptions or hypotheses.
Hindsight Bias: It’s also called the knew-it-all-along effect or creeping determinism, and it’s our natural inclination to generate a false narrative to explain an unforeseen outcome — to make it seem predictable even though there was no objective basis for predicting it.
We want closure, not ambiguity, but constructing these false narratives just colors our ongoing decisions. The best thing a product manager can do is admit that she or he doesn’t have all the answers about why something happened, and there may never be a clear-cut reason.
The Ambiguity Effect: Again, we hate ambiguity, and in this case, that fear of the unknown leads us down the clearest available path: the option where the probability of a favorable outcome is known, rather than one where the outcome is a mystery. For a product manager or analyst, it’s tough to justify a course of action supported by very little information, so we run with an option where we can reassure ourselves with available data.
Yet we might be cutting ourselves off from real innovation. So we should investigate multiple designs, and work harder to uncover insights that illuminate them with hard data. (By the way, the “ambiguity effect” was first described in 1961 by Daniel Ellsberg, the very analyst who later leaked the Pentagon Papers. The more you know ....)
Loss Aversion: We all strive to avoid losing anything we value, even when giving it up would bring something of greater value. In product design, there are regular skirmishes between those who want to retain some element, feature, or function and those who want to cut it, convinced that removing it will actually improve product usability. Or they may want to add a feature that’ll supersede the older one. Sometimes we hang onto these features even though their value is largely sentimental.
The answer here is to keep your methodology lean and user-focused. There should be no sacred cows when it comes to maximizing value for the customer. If that means jettisoning old for new, so be it. Make sure any new features stand up to rigorous, objective testing.
The Framing Effect: It’s the “glass half full, or half empty?” example. If you’re viewing it through a positive frame, it’s half-full, but if the boss just chewed you out and your cat died, you’ll see it as half-empty.
When interpreting data, or reporting it to others, we may be prone to putting a positive spin on it, even unwittingly, to support our biases, and the way the data is framed then skews the analysis. Overcoming this demands we ask our ego to leave the room and intellectually divorce ourselves from our ideas so we can evaluate them objectively.
The Bandwagon Effect: Doing something the same familiar, safe and anxiety-reducing way others are already doing it can lead us far, far astray. One example? In mobile apps, the “Hamburger Menu” quickly escalated to the level of design cliché, but it created as many problems as it ostensibly solved.
What drove its initial popularity? Well, its popularity. When contemplating new features, we may find ourselves leaning toward solutions that have already gotten love elsewhere: “Since it worked for (INSERT COMPETITOR’S NAME HERE), it has to work for us.” That doesn’t mean they’re right for your customers, unless you validate that through the feedback and prototyping process. And even if a feature has already succeeded in your segment, that doesn’t mean lightning will strike twice.
User Requests Are Not a Roadmap
Most of us have a research bias of one kind or another, so how can we go about the process of roadmap planning in a way that’s truly open to hard truth? One essential piece is customer needs and feedback. But don’t substitute that input for strategy. Customers aren't always the best judge of what they really need (and will pay for), or of what will provide long-term value to your business.
In fact, they’ll often ask you to ladle on more features, which leads to another debacle: “feature fatigue.” An overload of feature choices is a recipe for disengagement.
To avoid this, the product team at Intercom prioritizes the features that will be used the most often by the most users.
(Chart: features plotted by how often they’re used against how many users use them. Source: Intercom)
They focus their feature development efforts in the upper right corner, where they’ll deliver maximum value to the greatest portion of customers.
In other words, to give your product actual staying power and engagement, you need to validate approaches that address genuine and prevalent needs. The features you land on may not be as sexy as some others, but they’re actually going to satisfy a broad demand. This is a classic example of knowing your customer better than they know themselves.
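To make that prioritization concrete, here’s a minimal sketch in Python. The feature names and usage numbers are hypothetical, and the scoring rule is an assumption on my part: it simply multiplies how many users touch a feature by how often they use it, so the chart’s “upper right corner” falls out as the highest scores.

    # Minimal sketch: rank features by reach (share of users) times
    # frequency (average uses per user per week). All data hypothetical.
    features = {
        # feature: (share of users who use it, avg uses per user per week)
        "search":        (0.90, 12.0),
        "export_to_pdf": (0.15, 0.5),
        "dark_mode":     (0.40, 1.0),
    }

    # The "upper right corner" of the chart is simply the set of
    # features with the highest reach * frequency product.
    scores = {name: reach * freq for name, (reach, freq) in features.items()}

    for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: {score:.2f}")

Even a toy model like this forces the right conversation: a niche feature has to be used very heavily to outrank something nearly everyone touches every day.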
Is Your Product Team Made of Soldiers or Scouts?
What’s a good metaphor for the two divergent mindsets that come into conflict, either visibly or invisibly, in nearly any product development process?
Julia Galef, a writer and speaker who focuses on issues of rationality, science, technology and design, nailed it in a question she says we each must answer: Are you a soldier, prone to defending your viewpoint at all costs, or a scout, spurred by curiosity?
Check out her TEDx talk on the subject. Putting our most rock-ribbed assumptions and comfortable notions to the test can be anathema, but the “scout” mentality is the one that will drive us toward a clear understanding of the right path to follow, not just the safest or most obvious. Otherwise, we may turn our backs on a feature or improvement that lays out a better — and more profitable — route.
Reach out to trusted partners and your user community. Or you can hire a “scout,” too. There are scores of SaaS product research and management consultancies that can bring professional impartiality to bear on your product roadmap.
Basing that roadmap on assumptions or unvalidated arguments is liable to get you lost. Or you’ll get where you’re going but, like the immortal Clark Griswold, you’ll find you’ve arrived at a place nobody else wants to visit.