Linda C. Smith, then a doctoral student at Syracuse University's School of Information Studies, published a landmark paper in 1976 on the role of AI in information retrieval. Forty years later, her thesis came to fruition.

Over the last few years, software vendors have positioned AI as the solution to every known enterprise search problem. Yet surveys carried out from 2016 to 2020 (and indeed as far back as 2007) all indicate that search is far from a solved problem: only in a minority of cases is search regarded as very satisfactory. Why are companies prepared to accept anything less?

The Rise of Explainable AI

Research papers on how to optimize search with AI continue to be published at a rapid clip, but alongside this deluge is a growing concern about the implications of handing over business processes, including search, to black box AI applications. Since the beginning of this year, I have added over 30 articles on the topic to my collection. This movement is called Explainable AI (xAI). Rather than seeking to undermine the value of AI, xAI initiatives are concerned about the potential loss of control of business-critical applications which might occur if we do not know how AI search (as just one example) is configured, applied and then modified in the light of new data. Another stream of research touches on fairness and bias in AI search results, though this issue dates back to around 2009 as "search neutrality."

Related Article: What Is Explainable AI (xAI)?

Vendor Policy Commitments to xAI

A movement is now starting among IT companies that realize the competitive advantage transparency in AI applications could offer. IBM is one of the leaders in this approach, along with Microsoft. IBM's commitment to transparency is summed up well in this statement:

“We envision that suppliers will voluntarily populate and release FactSheets for their services to remain competitive in the market. The evolution of the marketplace of AI services may eventually lead to an ecosystem of third party testing and verification laboratories, services, and tools. We also envision the automation of nearly the entire FactSheet as part of the build and runtime environments of AI services. Moreover, it is not difficult to imagine FactSheets being automatically posted to distributed, immutable ledgers such as those enabled by blockchain technologies.”

One challenge here is how to present the AI elements of a software package in an explainable way. A recent research paper that set out to present a taxonomy of explainable AI elements does a good job illustrating the challenge — it runs to 33 pages and 110 references. The scariest part is that the genie is already out of the bottle and we are now racing to keep up.

Related Article: Has Microsoft 365 Been Clinically Tested?

A Growing Push for Corporate AI Policy Commitments

Over the last couple of decades companies have struggled to achieve effective governance of personal data, which, compared with the complexity of AI, is at least well defined in national legislation, certainly in Europe. Recognition is spreading that an equivalent set of policy commitments on AI in the workplace will be expected, driven by international organizations such as the OECD, as well as by trade unions and, of course, customers.


Again, the scale and complexity of these policies, and in particular the question of who on the board owns them, are gradually becoming clear. Take a look at a recent paper by Michael Hilb if you want a neutral view of the corporate governance implications of AI.

Related Article: IBM and Microsoft Sign 'Rome Call for Ethics': What Happens Next?

Considerations for AI in Search

Enterprise search presents a special challenge when it comes to AI transparency. Most other enterprise processes are close to linear in execution, so the impact of AI on performance can be relatively easily assessed and monitored. In the case of search, every query is a new workflow as it is dependent on the knowledge of the individual and the intent behind their search.

What follows are a few, but by no means exhaustive, areas where you should work closely with your vendors and stakeholders.

  • AI needs to be trained on existing data and information sources. Are you confident that this data and information will remain relevant to future business directions after the COVID-19 pandemic?
  • Vendors tout personalization, where the first page of results presents only the most relevant ones to the user, as a core benefit of AI. Will an individual employee be able to validate the assumptions made on what their interests are?
  • Within any company there will be differences of approach and opinion on almost any topic. How can you be sure that search results contain no bias that might inadvertently result in the company not making the optimum decision about a future direction?
  • NLP models are typically trained on large collections of documents, usually in English. Can you be certain that the search results in other languages parallel the outcomes in English so that you have comparable sets of relevant results?
  • If AI algorithms are managing the ranking of results, the extent of this management may not be obvious by analyzing query logs. Do you have the ability to override and rewrite the algorithms to reflect operational requirements?
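To make the last point concrete, one way to keep ranking adjustments auditable is to record every boost applied to a result alongside its final score, so the ordering can be explained and, if necessary, overridden. The sketch below is purely illustrative — the function names, fields and boost values are hypothetical and do not reflect any vendor's actual API:

```python
# Hypothetical sketch of an "explainable" ranking step: each boost applied
# to a result is recorded, so the final order can be audited or overridden.
def rank_with_explanations(results, boosts):
    """results: list of dicts with 'id' and 'base_score';
    boosts: dict mapping a result field to a (value, multiplier) rule."""
    ranked = []
    for r in results:
        score = r["base_score"]
        reasons = [f"base_score={score:.2f}"]
        for field, (value, factor) in boosts.items():
            if r.get(field) == value:
                score *= factor
                reasons.append(f"{field}={value!r} boost x{factor}")
        ranked.append({"id": r["id"], "score": score, "why": reasons})
    # Sort by adjusted score, highest first
    ranked.sort(key=lambda r: r["score"], reverse=True)
    return ranked

results = [
    {"id": "doc-a", "base_score": 1.0, "language": "en"},
    {"id": "doc-b", "base_score": 0.9, "language": "en", "recency": "recent"},
]
boosts = {"recency": ("recent", 1.5)}
for r in rank_with_explanations(results, boosts):
    print(r["id"], round(r["score"], 2), r["why"])
```

The point is not the scoring formula itself but the "why" trail: a search team (or an individual employee querying the system) can see exactly which rules moved a result up the page, which is precisely the visibility a black box AI ranker denies you.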

These are just a few of the areas where a deep knowledge of how AI is playing out behind the scenes is going to be essential. The primary cause of search dissatisfaction is a lack of investment in a search team. AI will not be a solution to this under-investment. Are you prepared for the challenges?

Related Article: When Personalized Enterprise Search Results Are Hidden in a Black Box
