You've decided it's time to get a new content management system. Out goes the RFP, in come all the vendor responses. Next step: scoring.  Here's how to do it right.

Whenever I become involved in a CMS selection process, the client’s first question is usually how to allocate scores to the proposals that come in from vendors and integrators. Views on the benefits of scoring tend to be polarised between those in favour (who need to keep procurement departments on-side) and those who think it is a complete waste of time.

In my experience, allocating a score to each of perhaps several hundred criteria and then summing the results to identify the ‘winner’ is a total waste of time. Used correctly, however, scoring can be a very useful input into the selection process. This is especially the case when there is a requirement to show that the selection has been rigorous and fair, for example in the public sector, but even when this is not the case the process has considerable value.

Slim Down the Selection Criteria

The first step is to avoid having a list of two hundred or more selection criteria. CMS products have matured to the point that all of them will meet at least 70% of your requirements. Let’s call these Level 1 requirements; they might include the way in which MS Office files (including tables and charts) are transformed into clean HTML. These are not ‘mandatory’, which is a very unhelpful adjective, but the RFP should make clear that any failure to meet them means the vendor may not proceed to the next stage. In the RFP, ask the vendor to provide an exception report listing those that cannot be met.

This should leave you with perhaps forty Level 2 requirements, which are critically important to the success of the implementation. This is where you can start allocating scores, but not on a linear basis. I use the following scale:

10: clearly and demonstrably meets the requirement
6: requires some customisation, but this is the case with all customers
2: requires customisation which is specific to the organisation, with the risk that it may not be possible or could be expensive
0: does not meet the requirement
-5: the requirement is ignored or the response is unintelligible

The -5 score is important because it penalises vendors who cannot be bothered to read the RFP and simply respond with a boilerplate tender.
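
As a minimal sketch of how this scale might be applied consistently across scorers (the criterion names, response categories and vendor assessments below are hypothetical, not drawn from any real RFP), the values can be captured in a simple lookup:

```python
# Sketch of the non-linear scoring scale described above.
# Criterion names and assessments are illustrative only.

SCALE = {
    "meets": 10,           # clearly and demonstrably meets the requirement
    "standard_custom": 6,  # customisation needed, but of the kind all customers need
    "specific_custom": 2,  # customisation specific to this organisation
    "not_met": 0,          # does not meet the requirement
    "ignored": -5,         # requirement ignored or response unintelligible
}

def score_response(category: str) -> int:
    """Translate a reviewer's assessment category into a score on the scale."""
    return SCALE[category]

# Example: one reviewer's assessment of a vendor against three Level 2 requirements.
vendor_a = {
    "office_to_html": "meets",
    "version_compare": "standard_custom",
    "dam_integration": "ignored",
}

scores = {criterion: score_response(category) for criterion, category in vendor_a.items()}
print(scores)  # {'office_to_html': 10, 'version_compare': 6, 'dam_integration': -5}
```

The point of the lookup is simply that every reviewer works from the same non-linear values rather than improvising a one-to-ten judgement.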

Allocating Scores

The next step is for the team to decide how they will allocate the scores. To earn a 6, for example, the requirement could be that the customisation has already been carried out for at least one of the reference sites, so that there is a benchmark. Mapping the range of possible responses to scores for each criterion is a very instructive exercise, and it often helps clarify the draft requirement, especially where the requirement turns out to be three criteria in one.

When the proposals are received, every member of the selection team should be able to score every criterion. That is a crucial requirement, because it means that everyone understands and buys in to the criteria. Having just IT score the IT elements is not remotely a sensible approach. The scoring should be done individually, with the team leader then compiling a matrix of the scores.
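
As an illustrative sketch of that matrix (the team roles, criteria and scores below are invented), the individual scores can be tabulated and the spread per criterion flagged for discussion:

```python
# Sketch: compile individual scores into a matrix and flag the criteria
# on which team members disagree most. All names and values are invented.
from statistics import mean, pstdev

# scores[team_member][criterion] = that member's score on the non-linear scale
scores = {
    "editor_lead": {"version_compare": 10, "dam": 2, "workflow": 6},
    "it_lead":     {"version_compare": 2,  "dam": 6, "workflow": 6},
    "comms_lead":  {"version_compare": 10, "dam": 0, "workflow": 2},
}

criteria = sorted({criterion for member_scores in scores.values() for criterion in member_scores})

for criterion in criteria:
    values = [member_scores[criterion] for member_scores in scores.values()]
    spread = pstdev(values)
    flag = "  <- discuss as a team" if spread > 3 else ""
    print(f"{criterion:16s} mean={mean(values):5.1f} spread={spread:4.1f}{flag}")
```

Criteria with a large spread are exactly the ones worth talking through as a team before any comparison of vendors is made.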

Dealing with Variations Between Team Member Scores

Now comes the really valuable aspect of scoring. There will inevitably be variations between team members’ scores. These become important when considering the relative importance of the criteria in comparing vendors. A simple scoring process, even with a weighted score of the kind suggested above, does not enable an organisation to decide whether a very effective way of visually comparing two different versions of a web page (for which one vendor scores 10 and another only 2) is more important to the business than a very good digital asset management application (for which the scores are reversed).
This is when it is invaluable to conduct a forced-pair analysis, in which each criterion is set against all the others in a matrix. There is a good description here.

This approach, though valuable, becomes tiring above a 20 by 20 matrix, but that limit has a benefit: it forces the organisation to decide what is really relevant to the business and to focus on a small set of Level 2 features.
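
As a rough sketch of the forced-pair idea described above (criterion names and the team’s verdicts are invented), each criterion is compared with every other, the more important of each pair wins a point, and the totals show where the business priorities really lie:

```python
# Sketch of a forced-pair (paired-comparison) analysis: every criterion is set
# against every other, and the one the team judges more important each time
# earns a point. Criterion names and verdicts are illustrative only.
from itertools import combinations

criteria = ["version_compare", "dam", "workflow", "office_to_html"]

# Hypothetical team verdicts: for each pair, the criterion judged more important.
winners = {
    ("version_compare", "dam"): "version_compare",
    ("version_compare", "workflow"): "workflow",
    ("version_compare", "office_to_html"): "version_compare",
    ("dam", "workflow"): "workflow",
    ("dam", "office_to_html"): "office_to_html",
    ("workflow", "office_to_html"): "workflow",
}

points = {criterion: 0 for criterion in criteria}
for pair in combinations(criteria, 2):
    points[winners[pair]] += 1

# Rank the criteria by the number of pairwise comparisons they won.
for criterion, wins in sorted(points.items(), key=lambda item: -item[1]):
    print(f"{criterion:16s} {wins} win(s)")
```

The win counts can then be used to weight, or simply to prune, the Level 2 criteria before the vendor scores are compared.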

Overall, the value of scoring is to focus the CMS project team on the features that are really important, not to make life easier through a simple sum of scores.

Score From Scratch for Each Evaluation Round

A final comment: the scores used for the initial round of filtering will not be appropriate for subsequent rounds, because the differences between the vendors will by then be smaller. Throw away the scores at each round. They have done their job in facilitating a discussion about what the business needs and how the CMS functionality will meet those needs. Never, ever sum the overall scores.

Now comes the entertainment, when the short-listed vendors turn up to do a show and tell. More about managing these events in my column next month.