woman searching on smartphone next to robot graffiti
PHOTO: Shane Rounce

Last month, while speaking about implementing taxonomies in SharePoint and Office 365 at Taxonomy Bootcamp London, a broader trend caught my attention.

The event being a Taxonomy Bootcamp, it was great to discuss metadata, taxonomies and ontologies with fellow practitioners. When I said “metadata is the soul of search,” everyone understood what I meant. However, we all agreed we face a common challenge: as machine learning and artificial intelligence (AI) get more and more attention, more and more people and organizations assume these emerging technologies are smart enough to function without human intervention, and that as soon as you deploy them, magic will happen in your organization.

Related Article: Machine Learning Can Improve Enterprise Search, But You Still Need to Train It

Questions to Ask Before Jumping Into AI

Before getting caught up in the promises and possibilities of artificial intelligence, take a step back and see if you recognize yourself in any of the common questions I hear from organizations: 

  • Can search be smart enough to provide relevant results?
  • If we, for example, deploy a chatbot, can we eliminate the need for a well-configured search?
  • Does AI magic happen without any optimization of content or taxonomy?

These questions are cropping up more and more often as the promises of AI become more impressive. Some vendors aren't making this any easier, either, as each communicates that its tool is the best and the smartest. So what’s the truth behind the hype?

Related Article: Who Needs Cognitive Search When We Lack the Resources to Make it Work?

The Reality Behind 'Intelligent Search'

Whenever I have a client who wants to install a chatbot “instead of investing in search,” or meet someone who wants to rely on machine learning and/or artificial intelligence to “learn and help search,” or someone who expects relevant results from “intelligent search” without any forethought into content quality, I always ask them a few questions.

  • How will you know the results are “relevant” if you don’t know your content’s quality?
  • How will you know the answers that the “intelligent” chatbot gives are really what the users need? (Remember, what they want and what they need might be two different things.)
  • How do you know if or when the “intelligent” search is intelligent enough?
  • How do you measure its success if you don’t even understand how it works because it’s an “intelligent” black box?
  • Can an “intelligent” tool be intelligent enough to replace human content curation?

To be honest, I don't think tools will ever fully replace the need for human intervention. Dave Clarke, CEO of Synaptica, expressed it perfectly during his keynote in London:

"AI … will not replace the need for human curated taxonomies or ontologies. On the contrary, it is taxonomies and ontologies that will empower AI with the semantics and logic to improve search, categorization and perform machine reasoning.”

Related Article: How AI-Related Search Could Bring Us Closer to the Intelligent Workplace

Don't Expect Magic

What this means in practice is that we have to be smart enough ourselves, and do the required planning, to help these "intelligent" tools so they can help us and our users later. What does that help look like? Before introducing the tool, you need strong taxonomies, training sets, and a solid understanding of your content, users and requirements.

None of this is to say these emerging technologies are useless. They can be useful for many organizations, but only if deployed and used the right way. Don’t expect magic, because even these smart, intelligent tools don’t know what “magic” really means to you. And you don't want them to surprise you.