I was reviewing a sample chapter from Lou Rosenfeld and Rich Wiggins’ upcoming book on search log analysis. The chapter covers Michigan State University’s steps in patching around an aging AltaVista engine. It is good history, but not very good advice. MSU’s first step was to build a manual Best Bets system to match individual queries to editorially chosen URLs.
Best Bets are very effective, but are usually a last resort, not the first. The strength of Best Bets is that the results are very, very good. The weakness of Best Bets is that the manual effort only improves the results for a single query. That had better be an important query! Most other kinds of tuning help all queries or at least a broad set, perhaps all results from one website or one web page template.
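At its core, a Best Bets system is just an editorial lookup consulted before the engine. A minimal sketch (the query strings and URLs here are invented for illustration, not taken from MSU’s system):

```python
# Hypothetical sketch of a Best Bets layer: a hand-curated map from an
# exact query to editorially chosen URLs, checked before the engine runs.
BEST_BETS = {
    "jobs": ["https://example.edu/careers"],
}

def search(query, engine_results):
    # Normalize the query, then put the editorial picks ahead of the
    # engine's ranked results, dropping any duplicates.
    picks = BEST_BETS.get(query.strip().lower(), [])
    return picks + [url for url in engine_results if url not in picks]
```

Note the weakness the sketch makes obvious: the table only fires on the exact queries someone thought to curate.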
Here is what I suggest for improving your search:
1. Get a better search engine. This will help all queries, even the ones you don’t measure. If you don’t already have a metric for “better”, use the relevance measure from step 4 combined with the required number of documents and query rate.
2. Look at the top few hundred queries and record the rank of the first relevant result.
3. For each query without a good hit in the top three (“above the fold”), find one or more documents (URLs) which would be good results.
4. If you want a single number for goodness, use the ranks from step 2 to calculate MRR (mean reciprocal rank). Invert each rank and average the reciprocals. You’ll get a number between 0 and 1, where 1 means the first hit was relevant every time. If you are above 0.5, your engine is doing a pretty good job — on average, a good result in the second position. You need at least 200 queries for MRR measurements to be statistically valid.
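The MRR calculation is a few lines of code. A minimal sketch, assuming you record ranks as 1-based positions and mark queries with no relevant result as `None`:

```python
# Mean reciprocal rank: ranks are 1-based positions of the first relevant
# result per query; None means no relevant result was found at all.
def mean_reciprocal_rank(ranks):
    # A query with no relevant result contributes 0 to the average.
    reciprocals = [1.0 / r if r else 0.0 for r in ranks]
    return sum(reciprocals) / len(reciprocals)

# Three queries: relevant hit at rank 1, at rank 2, and not found.
print(mean_reciprocal_rank([1, 2, None]))  # 0.5
```

Counting the misses as zero matters: dropping them instead would flatter the engine by averaging only the queries it already handles.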
Now you have a list of failed queries matched with good documents. Start at the top of that list, and try the following actions for each one. When one of your preferred documents is ranked above the fold, you are done with that query and should move on to the next query in your list.
- Are the preferred documents in the index at all? If not, get them in and recheck the ranking.
- Are the documents ranked above the preferred ones good quality or junk? If they are unlikely to be a good answer for a reasonable query, get them out of the index and recheck the ranking.
- Do the preferred documents have good titles (the <title> tag in HTML)? If not, fix that, reindex, and recheck the ranking.
- Take a critical look at the preferred documents and decide whether they really answer the query. If they don’t, add a page which does answer it. Index that page and recheck the ranking.
- Do the documents include lots of chrome, navigation, and other stuff which swamp the main content? If so, configure your search engine to selectively index the page (Ultraseek Page Expert) or use engine-specific markup for selective indexing in the page templates. Reindex and check the ranking.
- Do the terms in the preferred documents match the query? Perhaps the query is “jobs” but the page says “careers”. If so, consider adding a keywords meta tag or synonym support in your engine (or skip to the next step). Reindex and recheck the ranking.
- Add a manual Best Bet for this specific query pointing to the well-formatted, well-written document with the answer. Schedule a recheck in six months to catch site redesigns, hostname changes, etc. and hope that it doesn’t go stale before then.
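The workflow above is a loop with an early exit per query. A sketch of its shape, assuming hypothetical `rerun_query` and per-action `check` callables standing in for the manual steps:

```python
# Hypothetical sketch of the triage loop above. Each "check" stands in for
# one manual action (index the page, fix titles, strip chrome, add
# synonyms, add a Best Bet); rerun_query reruns the search and returns
# ranked URLs.
ABOVE_THE_FOLD = 3

def above_the_fold(preferred, results):
    # Done with a query when any preferred URL is in the top three.
    return any(url in results[:ABOVE_THE_FOLD] for url in preferred)

def triage(failed_queries, rerun_query, checks):
    # failed_queries: list of (query, preferred_urls) pairs from your list.
    for query, preferred in failed_queries:
        for check in checks:
            check(query, preferred)
            if above_the_fold(preferred, rerun_query(query)):
                break  # fixed; move on to the next query
```

The point of the ordering is cost: the cheap, broadly useful fixes come first, and the per-query Best Bet is the last resort at the bottom of the list.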
As you go through this process, you’ll find entire sites which are not indexed, have bad HTML, are heavy with nav and chrome, or are designed so that they just don’t answer queries (click for the next paragraph). Fixing those will tend to improve lots of things: WWW search rankings, web caching, accessibility, and bookmarkability.
Search matches questions to answers. It is really hard to improve the quality of the questions (get smarter customers?), and the matching algorithms are subtle and tweaky, so don’t be surprised when most of your time is spent improving the quality of the answers.