You can always tell when a search-related project doesn't understand search use cases by the decisions its team makes about the search user interface (UI).
It becomes most obvious in cases where the search UI comes adorned with so many filters and facets that it makes the average Christmas tree look bare.
A great deal of research has been carried out into information behaviors, and information seeking in particular. For the purposes of this article, I'm going to put much of the research on information seeking to the side and instead focus on three pragmatic use cases, which will hopefully help anyone developing enterprise and website search applications.
3 Search Use Cases: Learning, Doing and Deciding
Expectations change according to the context in which the search is being performed. Let's see how this happens in three use cases: learning, doing and deciding.
Learning
In the learning case, you need to undertake a task or make a decision but are unclear how to go about it: “How do I start up a cross-divisional project?”
You may not even know how to frame the ‘best’ query. You expect the search application to guide you to the information you need through features such as auto-suggestion for queries.
The learning process is usually free of any time pressure, and the person may return to the search application a number of times to accumulate all the information they need. This is often called exploratory search. They expect the search application to provide a good list of relevant documents (high recall) and also information that has been promoted to the top of the results list as an introduction on the topic in question.
Doing
Here, you need to complete a task. You are searching either for a specific application (“How do I start the process to replace an employee who is leaving?”) or for a document that provides detailed guidance on the process. The process may differ according to country, but you expect that either the location-specific application or the guidance document will top the results list, even when you have used a very short query such as [replacement employee] and not included location in the query.
Doing queries demand very high precision. These are standard tasks, and people expect help to appear on the first page of results.
Deciding
You have done the research and asked the experts, but now you need to make a decision. Here the role of search is to ensure you have found not only the most relevant information but also, almost always, the most recent.
You have the sales figures for Q1 and Q2, you know the Q3 data is available, and you need to be sure it is included before making a decision. Based on your learning searches you are confident about the search terms, but the performance metric here is currency. You need to be confident that the index is no more than a day behind publication, and ideally updated in real time.
This scenario poses quite a challenge when there is extensive use of social media.
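The one-day currency requirement can be verified with a simple audit: compare each document's publication timestamp with the time it was last indexed. The sketch below is purely illustrative; the field names (`published`, `indexed`) and the function are assumptions, not part of any particular search platform, and a real check would pull these timestamps from your engine's index metadata.

```python
from datetime import datetime, timedelta

# Hypothetical audit: flag documents whose indexed copy lags publication
# by more than the allowed window (one day, per the currency requirement).
MAX_LAG = timedelta(days=1)

def stale_documents(docs):
    """Return IDs of docs whose index timestamp trails publication by more
    than MAX_LAG. Each doc is a dict with illustrative 'id', 'published'
    and 'indexed' keys; these names are assumptions for this sketch.
    """
    return [d["id"] for d in docs if d["indexed"] - d["published"] > MAX_LAG]

docs = [
    {"id": "q3-sales", "published": datetime(2016, 10, 1, 9, 0),
     "indexed": datetime(2016, 10, 3, 9, 0)},   # two days behind: stale
    {"id": "q2-sales", "published": datetime(2016, 7, 1, 9, 0),
     "indexed": datetime(2016, 7, 1, 12, 0)},   # same day: fine
]
print(stale_documents(docs))  # ['q3-sales']
```

Running such an audit over a sample of recently published documents gives you hard evidence of whether the index meets the currency expectation, rather than relying on vendor claims.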
Search is a Balancing Act of Recall, Precision and Currency
You cannot tune a search application to give both high recall and high precision: broadening the results list to capture more of the relevant documents inevitably admits marginally relevant ones as well. The best you can do is rely on promoted information and hope that it's the latest available. All too often no one in the business owns the promoted content, so you have no easy way of knowing how reliable it is.
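To make the recall/precision trade-off concrete, here is a minimal sketch of how the two metrics are computed for a single query, assuming you have relevance judgments for it. The function name and document IDs are illustrative, not from any particular search product.

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for one query.

    retrieved: ordered list of document IDs returned by the engine.
    relevant:  set of document IDs judged relevant for the query.
    """
    hits = [doc for doc in retrieved if doc in relevant]
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Illustrative judgments: four documents are relevant to the query.
relevant = {"doc1", "doc2", "doc3", "doc4"}

# A broad result list finds everything (high recall) but is diluted.
broad = ["doc1", "doc2", "doc3", "doc5", "doc6", "doc7", "doc8", "doc4"]
# A narrow list is all relevant (high precision) but misses documents.
narrow = ["doc1", "doc2"]

print(precision_recall(broad, relevant))   # (0.5, 1.0)
print(precision_recall(narrow, relevant))  # (1.0, 0.5)
```

The two example lists show the tension directly: tuning toward one metric pulls the result set away from the other, which is why you need to know which metric each use case actually demands.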
In November I highlighted the categories of information that employees were looking for. For each of those categories, I recommend you talk to users and find out the balance between the three search cases I have outlined above. In many cases the task completion/high precision search is a direct result of poor navigation or a lack of training.
Sort out the navigation and training and track the outcomes through the search logs. High precision is much more difficult to achieve than high recall.
Keep in mind that people search is the second most important category that employees look for. Finding people, and in particular their roles and expertise, fits into all three of the use cases.
Search, Survey, Deliver
Start off by doing some searches yourself, ideally with a few colleagues, just to reassure yourself that I know what I am talking about.
Then widen the research with some user surveys that ask only about the relative balance of searches undertaken across the three categories. The results will give you a mandate to prioritize the optimization of recall, precision and currency, and give you a good platform on which to improve search satisfaction in 2017.
If you need some encouragement, read the forecast by Deputy Managing Director of Microsoft Research Lab, Susan Dumais, for search progress in 2017.