Over the years I have been involved in many usability tests where employees are given a search task to perform, such as, "Find the technical support manager for air compressors in Argentina." While this is an apparently simple task, the diversity of approaches employees take becomes visible very quickly. Multiple starting points are immediately apparent, reflecting the experience and expertise of the employee performing the search.
An enormous amount of research has gone into information seeking over the last few decades. A survey of this research published in 2007 ran to over 400 pages, and the pace has accelerated since then.
Counting Clicks Is Only Part of the Search Story
When assessing enterprise search performance, the focus is almost always on counting clicks: worrying about "precision at k," mean reciprocal rank, and other formulae which assume users work their way sequentially through the ranked list of results. These click counts do not reveal a crucial element of the search process: the user's stopping strategy for the search.
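For readers less familiar with these formulae, the two metrics mentioned above can be sketched in a few lines of Python. This is an illustrative implementation only (function names and the set-based relevance judgments are my own choices, not drawn from any particular search platform):

```python
def precision_at_k(relevant, ranked, k):
    """Fraction of the top-k ranked results that are judged relevant."""
    top_k = ranked[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

def mean_reciprocal_rank(relevant_sets, ranked_lists):
    """Average over queries of 1/rank of the first relevant result
    (0 if no relevant result is returned for that query)."""
    total = 0.0
    for relevant, ranked in zip(relevant_sets, ranked_lists):
        for position, doc in enumerate(ranked, start=1):
            if doc in relevant:
                total += 1.0 / position
                break
    return total / len(ranked_lists)

# Example: two of the top three results are relevant.
print(precision_at_k({"d1", "d3"}, ["d1", "d2", "d3", "d4"], 3))  # 0.666...

# Example: first relevant result at rank 1 for query one, rank 2 for query two.
print(mean_reciprocal_rank([{"d1"}, {"d3"}],
                           [["d1", "d2"], ["d2", "d3"]]))  # 0.75
```

Note how both calculations assume the user scans the ranked list from the top and that relevance is a fixed property of each document. Neither captures where, or why, a user abandons the list.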
Relatively little research has been carried out into what might cause a user to stop a search. In enterprise search this could be something as simple as the date shown in the result snippet. One user may decide anything older than 2016 is not going to be relevant, while another user may stop at 2017.
Click traffic will not make this stopping strategy apparent, especially in cases where a session is halted and then resumed with a different query some time later.
The Scent of a SERP
Peter Pirolli and Stuart Card developed the information foraging model for information seeking while working at Xerox PARC in the early 1990s. Ed Chi, a fellow Xerox PARC employee, further developed the model in the late '90s. The concept of an "information scent" refers to the way (for example) pigs can find truffles even though they are well hidden.
So what's the connection between truffle hunting and Search Engine Results Pages (SERPs)? The answer is a search user's view of results pages is informed by a wide range of proximal clues, which together create an information scent in the mind of the searcher. For example, a glance at 10 PowerPoint files listed on the first page of results could bring a search to an abrupt halt before it has even started.
The Intersection of Scent and Stopping Strategies
David Maxwell, a PhD student in computer science at the University of Glasgow, and Leif Azzopardi, associate professor at the University of Strathclyde, presented a paper at the 40th European Conference on Information Retrieval in March which prompted this column. You can download the paper (along with many other interesting papers) from Maxwell's personal website. In their paper, Maxwell and Azzopardi hypothesize, model and then validate the impact the information scent of a SERP has on stopping strategies and, therefore, on search performance. In summary (and there is a substantial amount of data and analysis in the paper), they believe the role the quality of SERP presentation plays in search effectiveness and satisfaction has been significantly underestimated.
The paper goes on to discuss the search ability of users. Again, in the "click count" world, all users are assumed to have equal search proficiency and an equal command of the languages being used on the SERP. The paper shows search proficiency influences opinions about the usefulness of the page based on information clues from the SERP, and the authors set out some potential categories of user proficiency.
Implications for Enterprise Search
I have written previously on stopping strategies and on the quality of information snippets in results lists. As with any research, the outcomes presented in this paper should not be generalized without carefully considering the methodology and analysis. The authors rightly set out where further research is required to understand more clearly the impact of information scent on stopping point determination. This research will undoubtedly lead to a more reliable assessment of information seeking behaviors in an organization.
Even so, I believe all enterprise search managers can take away some lessons from the current research:
- Relying only on search click traffic analysis is rather like assessing a holiday beach from a monochrome print.
- Usability studies provide essential information about how the user is performing, not just how the system is performing.
- SERP presentation values are likely to have a significant impact on achieving high levels of search satisfaction. Further research (at an organization level) will be needed to assess the improvement in performance.
- If this proves to be the case, then using cognitive search applications to present a small number of highly personalized results could be counter-productive.
- Key performance indicators, such as "precision at k" calculations, may need to be completely reconsidered.
- Finally, academic research should not be ignored just because it is difficult to find and use.