During a recent presentation, a search vendor claimed that tuning ranking algorithms would dramatically improve search quality. This, in the vendor's view, would transform the accuracy of the results.
Luckily, the presentation was over the phone and I was on mute.
First, I have great difficulty understanding what an "accurate" search means, as "accurate" implies a definitive metric against which performance can be measured. No search vendor has ever been able to suggest what that metric might be.
Second, all search vendors live in a parallel universe where content quality is flawless. It never is, and probably never will be. No matter what rules you put in place for the production of quality content, they will always be very difficult (if not impossible) to apply to millions of legacy documents.
Information Quality Management
Search teams seem unaware of the significant amount of work already conducted in defining information quality standards and guidelines.
These efforts date back to the pioneering work at MIT in the early 1990s, which recognized that information had to be fit for purpose and not just "accurate." In 2006, John Stone wrote his PhD thesis on the impact of information quality on decision making. Every information manager should download and read Stone's work.
"The Philosophy of Information Quality," published by Springer in 2014, also provides a very good resource on the development of information quality management. The high quality contributions are slightly undermined by the lack of index in the book — Springer clearly does not have a commitment to information quality! Springer published "Data and Information Quality" only a few months ago as another resource on the topic.
MIT remains at the heart of information quality management. It organizes an annual conference, which in 2016 took place in Spain. The papers from previous conferences can be downloaded from the conference archive.
The International Association for Information and Data Quality (IAIDQ) also organizes an annual conference. In the context of work on information quality, there is no significant differentiation between data and information, though some initiatives, notably around ISO 8000:2011, emphasize master data management. The Association for Computing Machinery (ACM) publishes the Journal of Data and Information Quality, but access is limited to ACM members.
7 Metrics of Information Quality
Information quality covers seven generally accepted dimensions:
- Accessibility: is the information easily retrievable?
- Accuracy: is the information free from error and unambiguous?
- Believability: does the information come from reputable, trustworthy sources?
- Completeness: is the information comprehensive?
- Objectivity: is the information objective and free from personal bias?
- Relevance: is the information fit for purpose?
- Timeliness: is the information timely for use?
The level of user satisfaction with search depends on these metrics in two ways. The first is the extent to which poor-quality information lands at the top of a ranked list because the search algorithms cannot take any of these quality metrics into account. The software deems the information relevant; the user does not.
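As a rough illustration of that gap, the sketch below shows how a ranked list driven purely by an engine's relevance score changes once editorial scores for the seven dimensions are blended in. The field names, scores and weights are hypothetical and not drawn from any particular product.

```python
# Illustrative sketch only: blending an engine's relevance score with
# editorial quality scores so low-quality documents do not dominate the
# top of a ranked list. All field names and weights are assumptions.

# The seven dimensions discussed above, scored 0.0-1.0 by content owners
# or by automated checks (e.g. last-reviewed date for timeliness).
QUALITY_DIMENSIONS = (
    "accessibility", "accuracy", "believability",
    "completeness", "objectivity", "relevance", "timeliness",
)

def quality_score(doc):
    """Average of whatever dimension scores the document carries."""
    scores = [doc[d] for d in QUALITY_DIMENSIONS if d in doc]
    return sum(scores) / len(scores) if scores else 0.5  # neutral default

def blended_score(relevance, doc, quality_weight=0.3):
    """Combine the engine's relevance score with the quality score."""
    return (1 - quality_weight) * relevance + quality_weight * quality_score(doc)

# Example: two documents with identical engine relevance, re-ordered by quality.
results = [
    {"id": "doc-1", "relevance": 0.82, "accuracy": 0.4, "timeliness": 0.3},
    {"id": "doc-2", "relevance": 0.82, "accuracy": 0.9, "timeliness": 0.8},
]
results.sort(key=lambda d: blended_score(d["relevance"], d), reverse=True)
print([d["id"] for d in results])  # doc-2 now ranks above doc-1
```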
Dissatisfaction with search also arises when a user feels unable to rely on information. This has nothing to do with the technical performance of the search (speed of display, an effective UI); it arises when the user discovers, several hours if not days later, that the quality of the information makes it unfit for purpose.
Promoted Information Quality
Although improving the quality of millions of documents is not feasible, it is possible to work through promoted content ("best bets") to ensure that every item meets the seven quality criteria. That means someone has to own every piece of promoted information, and that person must be given the time and the incentive (support from their manager) to keep this content to as high a quality standard as possible.
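To make that ownership concrete, here is a minimal sketch of what a promoted-content record could look like, assuming a quarterly review cycle. The field names and the interval are illustrative assumptions, not a schema from any search product.

```python
# Illustrative sketch only: a minimal record for a promoted ("best bet")
# item, capturing an owner and a review cycle so someone is accountable
# for keeping it up to the seven quality criteria.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class BestBet:
    query_terms: list           # queries that trigger the promotion
    url: str                    # promoted document
    owner: str                  # person accountable for its quality
    last_reviewed: date         # date of the last quality check
    review_interval_days: int = 90  # assumed quarterly review cycle

    def review_due(self, today=None):
        """True if the item is overdue for a quality review."""
        today = today or date.today()
        return today - self.last_reviewed > timedelta(days=self.review_interval_days)

# Example: flag promoted items whose owners need a reminder.
best_bets = [
    BestBet(["expenses policy"], "https://intranet.example.com/finance/expenses",
            owner="a.owner@example.com", last_reviewed=date(2016, 1, 15)),
]
overdue = [b for b in best_bets if b.review_due()]
```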
Evaluation Metrics
Users' attitudes towards information quality are almost never taken into consideration in search evaluation. Search teams take pride in the volume of searches and the number of click-throughs, but rarely have the resources to see what happens to the information post-retrieval.
Extend search evaluation to follow through on the impact retrieved information has on decision making. Only then can you make a business case for further investment in search teams or in enhancing or replacing search applications.
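One lightweight way to start, sketched below on the assumption that you can ask searchers a single follow-up question, is to record whether the retrieved information turned out to be fit for purpose alongside the usual click-through data. The log format is hypothetical.

```python
# Illustrative sketch only: pairing each search log entry with a later
# follow-up answer ("was the information fit for purpose?") so evaluation
# extends beyond clicks. Log format and follow-up mechanism are assumptions.

search_log = [
    {"query": "expenses policy", "clicked": True,  "fit_for_purpose": True},
    {"query": "sales forecast",  "clicked": True,  "fit_for_purpose": False},
    {"query": "travel approval", "clicked": False, "fit_for_purpose": None},  # no follow-up yet
]

def click_through_rate(log):
    return sum(1 for e in log if e["clicked"]) / len(log)

def fit_for_purpose_rate(log):
    """Share of followed-up searches whose results supported the decision."""
    answered = [e for e in log if e["fit_for_purpose"] is not None]
    return sum(1 for e in answered if e["fit_for_purpose"]) / len(answered)

print(f"Click-through rate:   {click_through_rate(search_log):.0%}")   # 67%
print(f"Fit-for-purpose rate: {fit_for_purpose_rate(search_log):.0%}") # 50%
```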
Editor's Note: Martin White is working on a research project with Professor Paul Clough of the Information School, University of Sheffield, to develop an end-to-end evaluation framework for search which blends both qualitative and quantitative metrics. They hope to finish their work towards the end of the year. For those who would like to help, contact Paul Clough through the email address on the iSchool website.