Researching and selecting a web content management system is often an arduous process. At the outset, few organizations realize what they are getting into, and most team reps and decision makers have other daily responsibilities competing for their attention. It's not surprising that simple-looking decision-making tools suddenly become attractive.

Once the requirements collection and product evaluation phases are complete, the most difficult part of a CMS selection process begins: the group must take all the information it has gathered and use it to make a sound decision. With project fatigue setting in, this is where things often go wrong.

From Requirements to Decisions

This translation phase is where people often get a bit wild with spreadsheets and scoring, all in the hope that math will heroically make a complicated and confusing (and, let's face it, subjective) decision obvious and irrefutable.

The process looks something like this: there are a bunch of selection criteria; people rate the products on each criterion; people weight the criteria; a spreadsheet performs some multiplication and addition, and voila! out comes a very quantitative-looking assessment.
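
For concreteness, here is a minimal sketch of the arithmetic these spreadsheets perform -- in Python, with invented criteria, weights, and ratings purely for illustration:

    # Toy weighted-scoring matrix: criterion weights (summing to 1.0) and
    # committee ratings (1-5) are all made up for this example.
    weights = {"usability": 0.4, "workflow": 0.3, "search": 0.2, "ssl_login": 0.1}

    ratings = {
        "Product A": {"usability": 4, "workflow": 3, "search": 5, "ssl_login": 2},
        "Product B": {"usability": 3, "workflow": 4, "search": 3, "ssl_login": 5},
    }

    # Multiply each rating by its criterion weight and sum -- the "voila!" step.
    for product, scores in ratings.items():
        total = sum(weights[c] * scores[c] for c in weights)
        print(f"{product}: {total:.2f}")   # Product A: 3.70, Product B: 3.50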

Numeric Simplicity Saves the Day?

Nothing looks more convincing than a score where one option has more points than another.

But users don’t necessarily want to use a system just because it has the highest cumulative, weighted score. They want a system that helps them get their jobs done efficiently while introducing the fewest annoyances.

If accuracy is measured by overall satisfaction with the chosen solution, this method is deeply flawed.

Matrix Scoring is Rarely Accurate

There are several reasons why the matrix scoring method fails to accurately select the right solution.

First, the ratings and weightings wind up being subjective and arbitrary. Veterans of this approach know this to be true when they remember the feeling of not knowing what to put down, or of wanting to change a score after seeing another product or having more coffee.

Second, the final score hides information that is important to the users. A typical example: a user finds a feature that is critical to him or her totally unusable, but that failure is overshadowed by excellent ratings on a majority of less important features.

Usually you can’t correct this with weightings -- especially when there are many selection criteria -- and you can’t discuss trade-offs and compromises if you are working only with total scores.
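
To see the masking effect in numbers, consider a hypothetical pair of products rated on four equally weighted criteria. Product A is unusable on the one task a given user depends on, yet still tops the matrix:

    # Illustrative ratings (1-5) with equal weights; the numbers are invented.
    weights = {"critical_task": 0.25, "templating": 0.25,
               "reporting": 0.25, "theming": 0.25}

    products = {
        "Product A": {"critical_task": 1, "templating": 5, "reporting": 5, "theming": 5},
        "Product B": {"critical_task": 5, "templating": 3, "reporting": 3, "theming": 3},
    }

    for name, r in products.items():
        print(name, sum(weights[c] * r[c] for c in weights))
    # Product A 4.0 -- wins despite the showstopper
    # Product B 3.5 -- loses, though its critical feature actually works

With only four criteria you could rescue this by inflating the weight on "critical_task", but with dozens of criteria no single weight can do that without distorting everything else.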

Lastly, criteria tend to be of unequal granularity. How can a criterion as broad as “usability” be compared with something as specific as “SSL on the login page?”

[Figure: A Bogus Selection Matrix]

Doubt Provokes Discussion, Casts Light on Key Concerns

I take a different approach to the decision-making process. Instead of forcing the selection committee to produce numerical ratings, I ask them to list their doubts about each solution.

Examples of doubts are:

  • a concern that a feature would not support a specific task
  • unnecessary complexity or awkward behavior in performing a specific task
  • an unsatisfactory explanation from the supplier about how a feature works
  • doubt about the vendor’s stability or ability to support the customer
  • a potential technical incompatibility with the legacy infrastructure

Each of these doubts is then investigated to determine whether it is valid -- that is, whether it stems from a misunderstanding or oversight, whether there is a suitable work-around, or whether there is a reasonable compromise.

Through a number of facilitated sessions, we compare the relative weaknesses of the competing solutions and determine what is tolerable.

Follow-up demos and calls with the vendors are scheduled and executed. Ultimately, the solution with the fewest legitimate and significant concerns wins.
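
As a sketch of how this bookkeeping might look -- the record structure, field names, and statuses here are my own invention, not a prescribed format -- each doubt is carried through investigation and the surviving ones are tallied:

    from dataclasses import dataclass

    # Hypothetical doubt log; the fields and statuses are illustrative only.
    @dataclass
    class Doubt:
        product: str
        description: str
        significant: bool      # would this materially hurt users?
        status: str            # "open", "resolved", or "confirmed"
        resolution: str = ""   # misunderstanding, work-around, compromise...

    doubts = [
        Doubt("Product A", "editors cannot crop images inline", True,
              "resolved", "work-around: crop in the media library"),
        Doubt("Product A", "vendor vague about the upgrade path", True, "confirmed"),
        Doubt("Product B", "workflow screen felt cluttered in the demo", False, "confirmed"),
    ]

    # After investigation, count the doubts that survived as both
    # legitimate and significant -- the fewest wins.
    for product in ("Product A", "Product B"):
        count = sum(1 for d in doubts
                    if d.product == product and d.significant and d.status == "confirmed")
        print(product, count)   # Product A 1, Product B 0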

Facilitating these sessions is not as easy as simply reporting matrix scores, but I think it is good that people put some real intellectual energy into such an important and complex choice.

Best = Set of Least Painful Compromises

At first glance, this system seems designed for selecting the lesser of evils, and to some extent that is true -- there is no such thing as a perfect content management system. There will always be compromises in the final decision (I should note that there is also the option of selecting nothing if no solution is good enough). But is this approach really any worse than a numerical system that decides a score of 5 out of 1,000 beats a 3 out of 1,000? I think not.

Three Key Benefits

A Clear Focus

The first benefit of the doubt technique is that it keeps the focus on things that have a real impact on the users of the CMS, forcing the group to think through the implications of specific aspects of the solution. This is better than having people register their concern in the sparse format of a low numerical score and then just move on.

Deeper Understanding

The second benefit is that selection committee members interactively learn more about their needs and about the software features as they watch demos and work with prototypes. As a result, their selection criteria become more sophisticated over time, and potentially critical information can enter the decision-making process at any point.

Post-Decision Posture

The third key benefit is that after the product is selected, the selection committee can clearly articulate the reasons behind the decision. If there is a complaint about the implemented solution, the committee can refer back to the analysis sessions, show that the problem was identified as a concern, and explain the plan to lessen its impact.

Overall, by making the process more discussion- and investigation-oriented, one encourages additional richness in the discovery process, and the team exits with the realistic perspective that it is investing in a tool that, while imperfect, is at least deficient in ways the group has consciously decided it can work with, or around.