Testing search applications brings with it a number of peculiarities specific to search.

Before an enterprise software application is released into the production environment, it must jump through a final hoop: User Acceptance Testing (UAT).

At this point, the application has already gone through a series of internal tests to ensure the system is bug-free. UAT gives end users the chance to test the system against its specification.

It is entirely possible, and indeed common, for a search application to meet specification requirements and then fail miserably during UAT because of an incorrect specification. 

The Complexities of Search Testing

Usually testing is conducted on a small collection of documents, for example invoices. In this case, users might check whether the data from the invoices is correctly tagged and stored so that the application can produce a list of invoices by date, an easily verified requirement. Just 10 invoices may be enough to confirm this. Box ticked, and off to the next task.

Search applications add a layer of complexity to this process. 

With search applications, it is usually possible to confirm that an autosuggest feature is working, but whether the suggestions it provides are sensible is an entirely different matter.

Only a user seeking an answer to a query can decide whether a suggestion is appropriate. As with so much of the search experience, it comes down to relevance: is the feature or piece of information relevant to that individual?

An external test team can easily carry out UAT using test scripts, but search requires subject experts to ascertain relevance.

Three Quirks of Search Acceptance Testing

Three aspects of acceptance testing are unique to search.

The first is that even if individual features work as expected, users will judge the quality of the search experience by the combination of features and usability of the search interface. 

Second: in principle, every employee will be a "user" of the search application, which creates a wide range of expectations. 

The third is that the complete repository needs to be tested to judge the ranking and performance of filters and facets. These could work well with 200 documents and fail totally with 2 million, let alone 20 million or more. 

This is where IT teams unfamiliar with search applications discover that crawling even a relatively small collection of documents may take a couple of weeks. And should the crawl go astray, they may have to start all over again once the problem is resolved. 
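To see why, consider a rough back-of-the-envelope estimate. The repository size and crawl rate below are purely illustrative assumptions, not benchmarks, but they show how quickly a full-repository crawl eats into a test schedule:

```python
# Back-of-the-envelope estimate of full-crawl duration.
# All figures are hypothetical assumptions for illustration only.

DOCS_IN_REPOSITORY = 2_000_000   # size of the full repository
AVG_DOCS_PER_SECOND = 2.0        # assumed sustained crawl/indexing rate

seconds = DOCS_IN_REPOSITORY / AVG_DOCS_PER_SECOND
days = seconds / (60 * 60 * 24)

print(f"Estimated crawl time: {days:.1f} days")  # roughly 11.6 days, before any re-crawls
```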

The outcome is a totally derailed UAT schedule.

UAT, Meet User Satisfaction Testing

Although UAT does have its uses, it assumes that the specification is correct and that meeting it will result in high user satisfaction.

The complex functionality of search applications makes writing a search specification challenging, so teams often reach the UAT stage only to discover that the core specification is suboptimal.

To mitigate this, add a further round of User Satisfaction Testing (UST). Sit people down in front of a terminal and monitor their progress through a set of queries whose outcomes they are qualified to judge as satisfactory or unsatisfactory.
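As a minimal sketch of how UST results might be captured and summarized (the queries, the 1-to-5 rating scale and the 80% satisfaction threshold are assumptions for illustration, not a standard), something like the following is often enough to flag problem queries:

```python
# Minimal sketch for summarizing User Satisfaction Testing (UST) results.
# Query texts, the 1-5 rating scale and the 80% threshold are illustrative assumptions.
from collections import defaultdict

# (tester, query, satisfaction rating 1-5) collected during the UST sessions
judgments = [
    ("tester_a", "latest invoice template", 4),
    ("tester_b", "latest invoice template", 2),
    ("tester_a", "holiday policy 2024", 5),
    ("tester_b", "holiday policy 2024", 4),
]

scores = defaultdict(list)
for _, query, rating in judgments:
    scores[query].append(rating)

for query, ratings in scores.items():
    satisfied = sum(r >= 4 for r in ratings) / len(ratings)
    print(f"{query!r}: {satisfied:.0%} of testers satisfied")
    if satisfied < 0.8:  # flag queries below the assumed acceptance threshold
        print("  -> investigate relevance ranking or content coverage")
```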

Testing can be done on either an on-premises or a remote basis, but these tests take a substantial amount of time to plan, undertake, analyze and then feed back into software changes. Any major changes then need to be retested. Your search vendor will look carefully at your proposed test procedures, not least because payment is usually tied to acceptance.

Continue these tests after the initial installation, carrying them out after every major upgrade, especially if the upgrade adds new content repositories. This is just one more reason (along with search log analysis, training and support) why you need a search team in place from the outset. For those who feel they still can't make a business case for a search team, remember: without one, the return on the application investment will be zero.

Search Testing Takes Time, So Plan Ahead

At the outset of planning for a new search application — including a SharePoint upgrade — carefully consider what is reasonable from a UAT perspective and what is desirable from a UST perspective. It comes down to acceptance criteria in which you consider both UAT and UST outcomes together.  

Incidentally, testing open source search software is even more challenging. Agree on staff requirements for the tests and allow time on the schedule for the initial crawls (which rarely work the first time out) and for resolving the UST outcomes, which may take several months to complete and analyze. Shortcuts at this stage will result in users ignoring the search application and calling a friend instead.

When it comes to winning users' approval for your search investment, there are no second chances.