Discussion Point: What's the Best Vendor Software Ranking Method?

If you're in the software world -- who isn't, right? -- you've heard of the Gartner Magic Quadrant and Forrester Wave reports that rank software vendors.

But they aren't the only ones measuring vendor performance. Crowdsourced review platforms rank vendors, too, and other research firms produce their own reports.

Each ranks software vendors differently. And each never fails to advance the industry conversation.

Vendor software rankings come with debate. They even come with lawsuits.

Today, we continue the conversation — without Forrester Research, which declined to participate in the discussion, and Gartner Research, which did not return a request to participate.

Instead, CMSWire caught up with some of the others who produce these rankings, as well as those who've used them as software buyers or users. Rather than ask each of them the same question, we tailored the questions to their experience and expertise.

The Question

As a long-time software user and now member of a research and analyst firm, how do you feel about the analyst model for providing software rankings, such as the Forrester Wave and Gartner Magic Quadrant? How can software users benefit from software rankings, whether they come from an analyst/research firm or a crowd-sourced platform?

The Answer

Chris Lanfear, senior analyst, Technology Business Research Inc.


Lanfear covers digital agencies and marketing technology platforms for TBR Inc. He has more than 15 years of marketing and technology industry experience as a marketer and industry analyst.

Crowdsourced technology ranking sites are a great disruptor to the traditional analyst model, especially for SMB customers that can’t afford the reports and advisory services of the big firms. As a past buyer of software, and marketing software in particular, I know you have to have some method of narrowing your demo list; there are literally hundreds of content management systems, hundreds of marketing automation tools and countless other platforms and tools.

Even after narrowing by technology stack, price, cloud versus on premises, you are still left with too many vendors to evaluate. As the feature sets in many technology categories are fairly standard, the magic is in the user experience. Getting to that level of understanding of a software product can be difficult.

Analyst reports are often written at a high level and don’t get down to the experience of the product. They can tell you about the features and stability of the company but not how functional the tool is. The real value is reserved for those with access to inquiry time where an analyst can provide more detailed advice and even a shortlist of solutions to demo.

The vendors and tools covered by the big analyst houses are usually the market leaders — enterprise solutions with prices to match. You are going to miss a lot of options from emerging or niche vendors that might be better suited to your needs if you stick with analyst reports.

Crowdsourcing sites collect hundreds of reviews from actual users at a variety of organizations (and maybe a few competitors), giving buyers lots of data to work with. Since most of the people writing these reviews have extensive hands-on experience with the products they are reviewing, there is often lots of user experience information, especially about faults, problems and poor usability; it’s the nature of the reviewing game.

Although one can be easily overwhelmed with so many reviews, it’s important to read them to find themes about a particular solution, including high- and low-rated reviews. Buyers should look for reviews that reflect their situation and organization including budget, staffing levels, marketing complexity, B2B versus B2C, size/type of organization and digital maturity. Buying the solution that best fits a given scenario and not automatically selecting the market leader is critical.

Ultimately, buyers should really look at multiple sources of information to find tools to demo if they have the time and access, and they should turn to one final group for advice: colleagues and industry connections. Some of the best technology advice I have received over the years has come from the marketing teams at partner companies, co-workers and people in my network.

The Question

As a long-time software user and a member of companies that received accolades in the Forrester Wave and Gartner Magic Quadrant, how do you feel about the analyst model for providing software rankings? In what ways are the rankings useful for the vendors in the running and for general software users, and what is your response to those who say only clients get attention in these analyst reports?

The Answer

Aaron Dun, chief marketing officer, Intronis


Dun leads marketing and strategy for leading cloud backup provider Intronis, which is used by more than 35,000 small businesses to protect their data. He has more than 15 years of experience in B2B technology, both as a buyer and as a vendor.

Ahhh, the dreaded analyst vendor ranking reports! I have been involved with analyst relations virtually my entire career (and I have the bruises to prove it). As both a vendor agonizing over every pica of positioning and as a buyer of technology using the rankings to help guide my buying decisions, I have seen both sides of this coin.

First, as a technology buyer: I have used a number of reports to help identify vendor shortlists, but increasingly I am using my various peer networks to identify new technologies or vendors to work with. The challenge is always that your unique set of requirements may not fit neatly into the published material. I have engaged with analysts to help work through requirements and “best fit” based on those requirements, and those conversations have been valuable up to a point. As with any decision, however, it’s good to triangulate across a number of different sources to make the best decision possible.

But as a vendor, the process is excruciating.

First, let me say that I am on record as not believing for a minute that analysts at the big analyst firms are influenced by whether or not you are a client. In my opinion, the only thing your client relationships buy you is access. And if you have more to spend on that access, you have a greater opportunity to influence.

I know the comments will flood in about how people “know for a fact they were left out of a report because they weren’t clients.” Sorry, not buying it. I have on at least two occasions been in ranking reports without actually being a paying client. For those who still insist there is client bias, I have a simple recommendation: Call the CEO of that analyst firm, tell the CEO the name of that analyst, and then call your lawyers to join in the current round of lawsuits to make it a class action case.

Now that we have dispensed with that nonsense, let me share the problems with these ranking reports as I see them.

  • Driving from the rear-view mirror: Most major rankings summaries are generated from a review of the prior year’s (or several years’) worth of data. This creates a natural bias against up-and-comers who don’t have the money to spend on greater access or do not yet have as much real market traction. Each firm tries to address this in its own way, but none of them has perfected it yet.
  • Market impact: Some analysts place a heavy emphasis on the number of inquiry calls they get about a particular vendor. Even without that focus, however, emerging vendors have a hard time showing up in a meaningful way because they haven’t fully “arrived” in the market yet. This creates a self-reinforcing cycle where the only vendors getting shortlisted are the ones on last year’s rankings report; it is hard for an emerging vendor to break through, show up on enough shortlists and generate enough inquiries to move the needle.
  • Subjective analysis of qualitative data: In the final reports, picas matter tremendously. How big your circle is, or how close your dot is to another vendor (or line), has a major impact, and that placement is not entirely driven by the hard quantitative data. This is where access comes into play. If one vendor can afford more access to an analyst (inviting the analyst on an office tour, having the analyst speak at a user conference or engaging the analyst on a webinar, for example), those touchpoints create the opportunity for a more rounded understanding of the vendor's business and will necessarily influence positioning. Some analyst firms include the point value that corresponds to the vendor's circle size and even let buyers manipulate the variables to create their own “leaders” to shortlist. While this approach is helpful, it devalues the analyst's insights by reducing everything to a point system that the buyer may or may not agree with.
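
To make that last point concrete, here is a minimal Python sketch of the kind of point system described above, where a buyer re-weights the published criteria and ends up with a different “leader” to shortlist. The vendors, criteria, weights and scores are all hypothetical, not drawn from any actual report.

```python
# Hypothetical sketch of a buyer re-weighting vendor scores to build a shortlist.
# The criteria, weights and scores below are illustrative only.

VENDOR_SCORES = {
    # criterion scores on a 1-5 scale, as a ranking report might publish them
    "Vendor A": {"product": 4.5, "strategy": 3.8, "market_presence": 4.9},
    "Vendor B": {"product": 4.8, "strategy": 4.2, "market_presence": 2.1},
    "Vendor C": {"product": 3.2, "strategy": 4.6, "market_presence": 3.5},
}

def shortlist(scores, weights, top_n=2):
    """Rank vendors by a buyer-chosen weighted sum of criterion scores."""
    def weighted_total(vendor):
        return sum(scores[vendor][criterion] * weight for criterion, weight in weights.items())
    return sorted(scores, key=weighted_total, reverse=True)[:top_n]

# A buyer who cares little about market presence gets a different "leader"
# than a buyer who weights market presence heavily.
print(shortlist(VENDOR_SCORES, {"product": 0.6, "strategy": 0.3, "market_presence": 0.1}))
print(shortlist(VENDOR_SCORES, {"product": 0.3, "strategy": 0.2, "market_presence": 0.5}))
```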

I still believe that the analysts have a large role to play in helping IT buyers sort through the myriad technology choices available to them, but I think the rankings summary is doing these analysts a disservice. It’s time for buyers to move away from relying on these reports for shortlists and instead get back to using analysts to frame a buying process, design requirements and shortlist vendors based on those requirements, and to stop relying on where a vendor sits on a “magically wavy” rankings report.

The Question

How are the crowdsourced methodology and rankings reports useful to software users, how do you arrive at your rankings, and how do you respond to people who are skeptical of crowdsourced sites because it may not always be clear what motivates users to publish their reviews?

The Answer

Tim Handorf, co-founder and president, G2 Crowd


Handorf runs G2 Crowd, a crowdsourced business software review site.

We now rely on sites like Yelp, Amazon and TripAdvisor in our personal lives. These sites feature peer reviews to help us make quick, easy purchase decisions. In addition, consumerization in the enterprise is a trend, as reported by IDG and others. Although purchasing software is usually not as simple as the decisions we make as consumers, we believe business software purchasers want a tool for selecting the right software for their business that is similar to their experience as consumers.

According to Nielsen, the most trusted form of advertising is a recommendation from someone you know. Because we require LinkedIn authentication for people to post reviews, site users are able to read reviews from trusted sources within their own networks. One size never fits all; getting feedback from people similar to you is quite valuable.

Furthermore, we believe we can replace most of the current research process for SMBs, and accelerate the process for enterprises. We can reduce the time needed to create and validate a vendor shortlist, whether done by a project manager at a large company, or a senior-level marketer at a smaller organization.

In addition, buyers that are further down the process can use G2 Crowd to validate what the vendors they have been talking to have been saying. Purchasers don’t have to take the vendor's word for it. They can take their customers' word for it.

It all adds up to buyers making better software selection decisions for their companies and setting better, more realistic project expectations based on information from peers.

How do we arrive at these rankings? Our category Grid℠ reports (using CRM as an example) are updated in real time, based on user ratings and reviews. The horizontal axis represents satisfaction, which is derived straight from user reviews, while the vertical axis represents each company’s market presence, as calculated from vendor size, market share based on publicly available data and social signals. Based on a combination of these scores, products can be grouped and compared against each other within our reports as “Leader,” “High Performer,” “Contender,” or “Niche.”
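
As a rough illustration of that two-axis grouping, the sketch below (in Python) places a product into one of the four categories from normalized satisfaction and market-presence scores. The cutoff, the scores and the product names are assumptions made for illustration; G2 Crowd's actual calculations are more involved.

```python
# Illustrative two-axis grouping: satisfaction versus market presence.
# Scores are assumed to be normalized to 0-1; the 0.5 cutoff is arbitrary.

def grid_category(satisfaction, market_presence, cutoff=0.5):
    """Assign a product to a quadrant based on its two normalized scores."""
    high_satisfaction = satisfaction >= cutoff
    high_presence = market_presence >= cutoff
    if high_satisfaction and high_presence:
        return "Leader"
    if high_satisfaction:
        return "High Performer"
    if high_presence:
        return "Contender"
    return "Niche"

# Hypothetical CRM products: (satisfaction, market presence)
products = {
    "CRM Product X": (0.82, 0.91),
    "CRM Product Y": (0.88, 0.30),
    "CRM Product Z": (0.41, 0.75),
}
for name, (satisfaction, presence) in products.items():
    print(name, "->", grid_category(satisfaction, presence))
```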

How do we respond to people who are skeptical of crowdsourced sites because it may not always be clear what motivates users to publish their reviews? In the B2B world, professionals have historically been more reluctant to share their true opinions of products than in the B2C world, where people feel much more comfortable reviewing, say, a hamburger or a hotel room. A B2B professional's career reputation might be at stake.

Until recently, social networking and online sharing were primarily a consumer phenomenon, but now they are becoming not only accepted in the business world but also required to drive successful B2B marketing efforts.

People usually discover G2 Crowd through one of several channels: most find us through search engine results, and some find us through vendors who have chosen to license our reports.

Vendors such as HubSpot, which recently cited us as the only third-party validation source in its IPO filing, choose to license our reports because we have credibility with software users and purchasers. If we’re not credible with them, first and foremost, then our reports have zero value to anyone, let alone vendors.

There are many reasons a user might be motivated to write a business software review. Most of those motivations are acceptable as long as the writers are real users of the software and aren't motivated to write a biased review for personal gain. Although we don't know the motivations behind most of the reviews, we have some ways to validate reviews that go beyond what is being done for consumer reviews. We use LinkedIn to verify all writers to ensure that they are not employees, competitors or business partners. In addition, we take it one step further and enable users to upload screenshots of their software while they’re logged in to validate that they are current users.
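
Here is a minimal sketch of that kind of screening, under assumed field names and rules (this is not G2 Crowd's actual implementation): reviews from employees, competitors, partners or consultants are dropped, and the remaining reviews must come from a verified identity, with an optional screenshot adding further validation.

```python
# Hypothetical review-screening sketch; field names and rules are assumptions.

EXCLUDED_RELATIONSHIPS = {"employee", "competitor", "partner", "consultant"}

def accept_review(review):
    """Keep a review only if the author is identity-verified and not affiliated with the vendor."""
    if review.get("relationship_to_vendor") in EXCLUDED_RELATIONSHIPS:
        return False
    # Identity confirmed via a professional-network login; an in-product
    # screenshot, when provided, further validates current usage.
    return review.get("identity_verified", False)

reviews = [
    {"author": "A", "relationship_to_vendor": "customer", "identity_verified": True, "screenshot_provided": True},
    {"author": "B", "relationship_to_vendor": "competitor", "identity_verified": True, "screenshot_provided": False},
    {"author": "C", "relationship_to_vendor": "customer", "identity_verified": False, "screenshot_provided": False},
]
print([r["author"] for r in reviews if accept_review(r)])  # ['A']
```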

We continuously refine our methodology, but the fact remains that the law of large numbers applies to our reports. This is why we don’t even include a product on a Grid℠ until it has 10 reviews.

Mind you, this isn’t a large number, but it displays the scrutiny we put all of our reviews through: we remove reviews written by employees, competitors, partners and consultants, and users can upload screenshots to validate their reviews.

The Question

How are your ranking methodology and rankings reports useful to software users, how do you arrive at your rankings, and how do you respond to people who are skeptical of models that rely on user reviews because it may not always be clear what motivates users to publish their reviews?

The Answer

Ian Michiels, principal and CEO, Gleanster Research


Michiels of Gleanster Research is an analyst, strategic consultant and business executive with a strong background in analytical and creative marketing. Over his career he has advised and guided hundreds of executives, from companies such as Nike, Sears Holdings, T. Rowe Price, Franklin Templeton and Adobe to hundreds of up-and-coming start-ups.

We publicly make our methodology and a comprehensive Q&A available for users to check out.

The FLASH chart is designed to help buyers answer two critical questions: Do customers of these solutions perceive them to be easy to adopt and use? Do customers believe the solutions deliver value? Eight users with current or past experience with one or more solutions from this vendor gave them an average score of “x” based on the criteria in the chart. This information should (1) be taken with a grain of salt given the sample size and (2) be married with other sources of rankings data available in the market research industry.

We cover EVERY vendor, large or small, client or not.

All relevant vendors are included in our FLASH vendor rankings and they pay nothing to be included. In fact, they cannot pay to influence their placement. We report raw data and share ALL the raw data online.

Software users have access to the full methodology and all raw data from our website.

How do we arrive at our rankings? Using a one to five point rating scale, survey respondents are asked to assess the solution provider(s) they are currently using or have had the experience of using within the past two years, across four different dimensions: ease of deployment, ease of use, features and functionality and overall value. To qualify for possible inclusion on one or more charts, vendors with less than $10 million in annual revenue must be rated by a minimum of five qualified survey responses and vendors with more than $10 million in annual revenue must be rated by a minimum of eight qualified survey responses. A mean class performance score is calculated for each vendor.
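
For readers who want to see that roll-up spelled out, here is a minimal Python sketch of the scoring just described: one-to-five ratings across the four dimensions, a minimum number of qualified responses tied to vendor revenue, and a mean performance score per vendor. The sample data and helper names are illustrative assumptions, not Gleanster's actual implementation.

```python
# Hypothetical sketch of the FLASH-style score roll-up described above.
from statistics import mean

DIMENSIONS = ("ease_of_deployment", "ease_of_use", "features_functionality", "overall_value")

def min_qualified_responses(annual_revenue_usd):
    """Vendors under $10M in revenue need 5 qualified responses; larger vendors need 8."""
    return 5 if annual_revenue_usd < 10_000_000 else 8

def mean_performance_score(responses, annual_revenue_usd):
    """Average the four 1-5 dimension ratings per response, then across responses."""
    if len(responses) < min_qualified_responses(annual_revenue_usd):
        return None  # not enough qualified responses to be charted
    per_response = [mean(r[d] for d in DIMENSIONS) for r in responses]
    return round(mean(per_response), 2)

# Example: a larger vendor rated by eight qualified respondents.
responses = [
    {"ease_of_deployment": 4, "ease_of_use": 5, "features_functionality": 4, "overall_value": 4}
    for _ in range(8)
]
print(mean_performance_score(responses, annual_revenue_usd=50_000_000))  # 4.25
```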

At first pass, you might think, “Hey, vendors can bias the data by having eight happy customers participate.”

Yes they can. And that’s the point.

We believe it’s valuable for buyers to know if a vendor has happy customers who are willing to literally bias the results across the board. That’s a vendor you should probably talk to. It’s not very easy getting user reviews these days. Keep in mind, many vendor comparison models ask for three customer references, which may or may not be contacted by the research firm. We have nearly tripled that by capturing eight reviews. More than that is just plain unlikely. For some vendors, eight reviews represent half of their customer base.

Vendor rankings are crowd-sourced by end users in Gleanster surveys. Respondents are asked to rank their current or past experience with relevant vendors. Eight reviews is certainly not a statistically valid sample size, but it’s quite difficult to get in front of actual users. Gleanster promotes this survey independently AND allows vendors to promote the survey link prior to publication to drive customer participation. The eight best survey responses are taken into account on the rankings. All vendors have equal ability to be covered on the rankings charts. Vendors do not pay Gleanster for placement on the chart and cannot influence placement with an analyst relationship.

How do we respond to people who are skeptical of models that rely on user reviews because it may not be always clear what motivates users to publish their reviews? Buyers should be skeptical. But that doesn't change the fact that the data they extract from ranking charts is still useful, even if biased. If it causes them to ask better questions during the demo process, it serves an extremely valuable purpose.

Software buyers have access to an abundance of data from analysts who provide context about vendors based on the solutions offered, market presence and analyst perspectives. It’s more difficult to capture user feedback based on the criteria buyers consider when investing in technology solutions. Our rankings are based on end-user feedback and should be used as a directionally relevant data point in a software decision, one of many. The data is not meant to be statistically valid, and frankly no vendor ranking model is perfect. It’s up to buyers to determine which data points merit weight in their decision process.

Gleanster survey participation is not biased by monetary or reward-based incentives. Participants get access to the final report PDF as a benchmark tool. In most cases they are motivated by a relationship with the vendor they are ranking. If the vendor can drive eight users to share their experience, good for them.