We all like receiving a personalized service, whether it is being recognized by the Maitre d’ at our favorite restaurant or having a range of books offered to us by Amazon when we are looking for a specific title. Sometimes we may wonder just how these personalization algorithms work. I was just looking for a book on Notre Dame Cathedral in Paris and among the books I was offered was one on the history of Britain from 1945-1951 and one on the history of the Habsburg family.

The majority of enterprise search vendors seem inordinately proud of their personalization engines, which toil away out of sight. Vendors typically promise that these engines will deliver accurate, personalized results at the top of page one. What more could you want?

The information retrieval research community has devoted incredible effort to the pursuit of effective personalization of search results. Invariably this research focuses on web search, where a significant amount of prior search history is usually available to help define the intent behind the query. No attention has been paid to whether personalization of enterprise search query results meets user requirements.

Understanding Search Intent

The initial challenge is understanding the reason (usually referred to as the intent) for the search. I recently posted a list of potential intents to illustrate the range that an enterprise search application has to divine. Employees have multiple roles within an organization, as individuals and as members of a team. To further complicate matters, people are often members of a number of different teams. These roles may differ between geographic locations and business units.

The question that no vendor seems capable of answering is: what critical mass of usage data gives a reliable prediction of relevant content? These systems have to learn from past searches and potentially from the content of documents being compiled by team members. Given the current pandemic circumstances, there is also the question of to what extent the past is still a good indicator of intent.

Related Article: Reading Between the Lines of Enterprise Search

Artificial Intelligence, Transparency and Trust

Most vendors are proposing that AI, machine learning and knowledge graphs can, together, deliver highly relevant information. That sounds good, but how far along this road has the vendor actually travelled? This is where we come to the need for explainable AI (often abbreviated to XAI).

Explainable AI emerged as a response to the growing "black box" problem of AI, where models and their behavior are incomprehensible to humans. A critical element of search is whether the user feels confident enough to trust the results. If the black box is impenetrable, it will jeopardize this level of trust. Remember that enterprise search is often used when other information-seeking approaches have failed, so in effect the future of the business is being based on an algorithm that cannot be validated.
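To make the contrast with a black box concrete, here is a minimal sketch of what an explainable ranking signal could look like. The feature names, weights and scoring logic are all invented for illustration; the point is simply that a linear score can be decomposed feature by feature, so a user (or an administrator) can see why a result was ranked where it was.

```python
# Hypothetical linear relevance score that can be decomposed into
# per-feature contributions. All names and values are illustrative only.

def explain_score(features, weights):
    """Return the total score and each feature's contribution to it."""
    contributions = {name: features[name] * weights[name] for name in weights}
    total = sum(contributions.values())
    return total, contributions

# Invented weights and feature values for one document/query pair.
weights = {"text_match": 2.0, "recency": 0.5, "author_in_team": 1.5}
features = {"text_match": 0.8, "recency": 0.4, "author_in_team": 1.0}

score, why = explain_score(features, weights)
print(f"score = {score:.2f}")
for name, value in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {value:+.2f}")
```

A real learning-to-rank model is far more complex than this, but the principle stands: if no comparable decomposition can be surfaced, users are being asked to trust results on faith alone.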

Related Article: What Is Explainable AI?

Fairness and Ethics in AI

Related to the transparency discussion is the question of to what extent results are fair, unbiased and take account of ethical principles. A plethora of AI ethics checklists are available to gauge such factors. In my view these should not only be taken into account when the algorithms are being optimized for an organization; organizations should also ensure that the software developers at a vendor or in the open source community have signed up to a set of ascertainable AI checklists. I am especially impressed with The Box, developed by the AI Ethics Lab. The range of elements in the checklist illustrates the complexity of the problems to address.

Of course AI is not just a search issue, but applies across the organization. It would be good to see organizations setting up specialist AI Ethics teams to look at all aspects of AI development and use in the organization. See the UK Institute for Ethical AI and Machine Learning for further inspiration.

Related Article: Responsible AI Moves Into Focus at Microsoft's Data Science and Law Forum

Monitoring Personalization Success

One personalization issue that is often ignored is the impact that delivering personalized results will have on the overall performance of a search application. It is difficult enough to understand why enterprise search fails to meet expectations (see the excellent schematic from ClearBox Consulting), but now we also need to take into account the quality, consistency, scalability and extensibility of personalization. Personalization might work well with English queries and content, but what happens when German users query English content as against German content?

Another tricky issue arises when a user decides to change query terms after a less-than-optimal initial search. Here we return to transparency again. In my view users should be informed that their initial results were personalized and then given the opportunity to switch off the personalization.
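The transparency pattern suggested above can be sketched in a few lines. This is not any vendor's actual API; the function, the profile structure and the toy re-ranking are all hypothetical. What matters is the shape of the response: it tells the caller whether personalization was applied, and the same query can be rerun with personalization switched off.

```python
# Hypothetical search API illustrating a personalization opt-out.
# All names and the toy ranking logic are invented for this sketch.

def search(query, user_profile=None, personalize=True):
    results = ["doc-a", "doc-b", "doc-c"]  # placeholder result list
    applied = personalize and user_profile is not None
    if applied:
        # Toy re-ranking: float documents the profile marks as boosted
        # to the top, leaving the rest in their original order.
        boosted = user_profile.get("boost", [])
        results.sort(key=lambda d: (d not in boosted, d))
    # The response is explicit about whether personalization happened.
    return {"results": results, "personalized": applied}

profile = {"boost": ["doc-c"]}
first = search("quarterly report", user_profile=profile)
if first["personalized"]:
    # The UI can now tell the user the results were personalized and
    # offer to rerun the same query without personalization.
    second = search("quarterly report", user_profile=profile,
                    personalize=False)
```

The design point is that the `personalized` flag travels with the results rather than being buried in the engine, which is exactly the disclosure users need before deciding whether to reformulate their query.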

Related Article: SharePoint Syntex: The First Stop on the Road to Project Cortex

Asking the Right Questions

I'm not suggesting that AI-related support has no place in enterprise search. But discussions need to take place within an organization, with vendors and with implementers to ensure the level of performance, transparency and evaluation is appropriate, so that all concerned benefit from this powerful search ally.