Researching and selecting a web content management system is often an arduous process. At the outset, few organizations realize what they are getting into, and most team reps and decision makers have other daily responsibilities competing for their attention. It's not surprising that simple-looking decision-making tools suddenly become attractive.

Once the requirements collection and product evaluation phases are complete, the most difficult part of a CMS selection process begins. In this phase the group must take all the information that was gathered and use it to make a sound decision. With project fatigue setting in, this is where things often go wrong.

From Requirements to Decisions

This translation phase is where people often get a bit wild with spreadsheets and scoring, all in the hope that math will heroically make a complicated, confusing and (let's face it) subjective decision obvious and irrefutable.

The process looks something like this. There are a bunch of selection criteria. People rate the products on each criterion. People weight the criteria. A spreadsheet performs some multiplication and addition and, voila, out comes a very quantitative-looking assessment.
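For concreteness, here is a minimal sketch of the weighted-sum arithmetic such a spreadsheet typically performs; the products, criteria, weights and ratings are all hypothetical.

# Hypothetical ratings (1-5) for two products across three criteria
ratings = {
    "CMS A": {"usability": 3, "workflow": 4, "search": 5},
    "CMS B": {"usability": 5, "workflow": 3, "search": 2},
}

# Hypothetical weights expressing how much each criterion "matters"
weights = {"usability": 0.5, "workflow": 0.3, "search": 0.2}

# Weighted sum: multiply each rating by its weight, then add
for product, scores in ratings.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{product}: {total:.1f}")
# CMS A: 3.7, CMS B: 3.8 -- one tidy number per product

The output is a single score per product, which is exactly what makes the approach so seductive.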

Numeric Simplicity Saves the Day?

Nothing looks more convincing than a score where one option has more points than another.

But users don’t necessarily want to use a system just because it has the highest cumulative, weighted score. They want a system that helps them get their jobs done efficiently while introducing the fewest annoyances.

If the measure of accuracy is overall satisfaction with the chosen solution, this method is deeply flawed.

Matrix Scoring is Rarely Accurate

There are several reasons why the matrix scoring method fails to accurately select the right solution.

First, the ratings and weightings wind up being very subjective and arbitrary. Veterans of this approach know this to be true when they remember the feeling of not knowing what to put down, or wanting to change a score after seeing another product or having more coffee.

Second, the final score hides information that is important to the users. A typical example: a user finds a feature that is critical to him or her totally unusable, but that rating is overshadowed by excellent ratings on a majority of less important features.

Usually you can’t correct this with weightings -- especially if there are lots of selection criteria. You can’t discuss trade-offs and compromises if you are just working with total scores.
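To illustrate with hypothetical numbers, reusing the same weighted-sum arithmetic as above: a product that is unusable on the one criterion a user cares most about can still come out ahead when a handful of less important criteria rate well.

# Hypothetical ratings (1-5); the criterion names and weights are made up
weights   = {"critical": 0.2, "minor 1": 0.2, "minor 2": 0.2, "minor 3": 0.2, "minor 4": 0.2}
product_a = {"critical": 1, "minor 1": 5, "minor 2": 5, "minor 3": 5, "minor 4": 5}
product_b = {"critical": 5, "minor 1": 3, "minor 2": 3, "minor 3": 3, "minor 4": 3}

def score(p):
    # Same multiply-and-add the spreadsheet does
    return sum(weights[c] * p[c] for c in weights)

print(f"Product A: {score(product_a):.1f}")  # 4.2 -- "wins" despite failing the critical criterion
print(f"Product B: {score(product_b):.1f}")  # 3.4 -- "loses" despite excelling where it matters most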

Lastly, criteria tend to be of unequal granularity. How can a broad criterion like “usability” be compared with something as specific as “SSL on the login page?”

[Image: A Bogus Selection Matrix]

Doubt Provokes Discussion, Casts Light on Key Concerns

I take a different approach to the decision-making process. Instead of forcing the selection committee into making numerical ratings, I ask them to list their doubts about each solution.

Examples of doubts are: