My favorite projects in my 20 years in the search industry have involved helping companies select, update or improve their search platforms.

These projects may be called a Search Workshop, Search Selection, Search Audit or simply a Search Review, but the technique I use to help companies decide which capabilities they want from their search platform remains the same. 

Set Realistic Expectations

A common element early in these engagements is gathering user requirements and setting expectations: the intranet is not the Internet. This usually involves meeting with representative users to teach them what great search looks like and to find out what they're looking for.

After a brief discussion about available capabilities in modern search platforms and the difference between intranet search and big Internet "search engines," I invite them to think about what capabilities they want in their updated (or new) platform.

Choose Your Dream Search

Next, I ask them to participate in a "mind game." They list the capabilities or features they want in their search platform while I write them down on a whiteboard. I record every suggestion, no matter how basic or advanced, realistic or a pipe dream (at least with today's search technology).

If the participants are shy, I'll seed the discussion by writing a few items on the board to get them started. Capabilities like:

  • Relevant results
  • Simple search form
  • Advanced search button 
  • Easy to read results 
  • Natural language input 
  • Suggestions/Best Bets

Pretty soon people will start to suggest their own ideas. These are usually more advanced, and sometimes less feasible, but not always:

  • Document preview 
  • People like me and/or popular results
  • Spelling correction 
  • No special or complex syntax 
  • Remember the results I previously clicked on 
  • Proactively email new documents as they show up
  • Answers to my question without having to open the document (for example, if you type another employee’s name, their phone number, email and location display above any results)
  • Spinning fireballs

OK, I made that last one up, but as you can see, once users become engaged, their creative juices start to flow. Public-facing search engines have set user expectations very high, so intranet search has to compete on features.

If the group leaves out any suggestions I would have made, I may write a few of them up on the board. However, I don't want to appear to be endorsing the ones I write over the ones they nominated.

Depending on how successful I was in getting the group to open up, I've seen as many as 25 to 30 reasonable suggestions — as well as some real doozies.

Reality Check

Once we have a good list, I remind participants of their budget and time constraints. This is when they prioritize the capabilities on the list.

Learning Opportunities

To start the process — and to add an element of reality to it — I give them each a budget of $100 “Dev Dollars” ($DD) that they can use to "pay for" the capabilities they prefer. As we move through each item on the list, I ask everyone how many $DD they are willing to invest to get that capability. 

It usually starts with $20DD allocated to each of the first two or three capabilities, and virtually every participant who suggested a feature will spend a large share of their $DD on it, at least in the first round. The rest of the currency is generally spread evenly across the remaining capabilities.

Now comes a reality check: I tell participants we can only deliver eight or ten features in the first release.

Pretty quickly people start to reallocate their money, with 20 percent of the features usually ending up with about 80 percent of the $DD. 

Even in the first round, we start to see some trends and have some really interesting discussions. In round two, users discuss the benefits of each capability, and after a few minutes of discussion, people change their minds. The really useful capabilities rise to the top, and the fringe ones drop off the funding list.

By the end of the second or third round, we usually have clear winners.
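The budgeting rounds above boil down to a simple tally: each participant spreads a $100DD budget across the capabilities they favor, the totals are summed, and only the top handful make the first-release cut. Here is a minimal sketch of that tally; the feature names and amounts are hypothetical examples, not data from a real workshop.

```python
from collections import Counter

def tally(allocations, max_features=10):
    """Sum each capability's Dev Dollars across participants and
    return the features that make the first-release cut."""
    totals = Counter()
    for participant in allocations:
        # Each participant gets at most $100DD to spend.
        assert sum(participant.values()) <= 100, "participant is over budget"
        totals.update(participant)
    # Rank by total $DD and keep only the top `max_features`.
    return totals.most_common(max_features)

# Hypothetical round-one allocations from three participants.
allocations = [
    {"Relevant results": 40, "Spelling correction": 30, "Document preview": 30},
    {"Relevant results": 50, "Suggestions/Best Bets": 25, "Document preview": 25},
    {"Spelling correction": 60, "Natural language input": 40},
]

for feature, dd in tally(allocations, max_features=3):
    print(f"{feature}: ${dd}DD")
```

Re-running the tally after each round of discussion makes the 80/20 reallocation visible: a few capabilities accumulate most of the currency while the rest fall below the cut line.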

In the course of an hour or two, we've engaged users, sparked a great exchange of ideas between the users and the search team, and gained a pretty good idea of which capabilities people want AND are willing to pay for, even when it's only Dev Dollars.

Users leave with a better understanding of search, and the search team leaves with a group of cooperative 'co-conspirators' for when it comes time to justify its budget and limitations. The team also gains a list of future enhancements, already prioritized, for updates after the rollout: overall, a real win-win.

Title image "telescope" (CC BY 2.0) by berlinrider
