Susan Farrell on Decreasing Mozilla Support Costs

A redesign of Mozilla’s online help system drew on user-behavior data (e.g. search terms) and forum questions. The effort paid off by greatly reducing the number of support requests in a short time, and by allowing staff to respond to almost all support requests within 24 hours. A post by Susan Farrell, of usability consulting firm Nielsen Norman Group (NN/g), includes nice graphs of both payoffs.

If you’re not familiar with NN/g, that’s “Nielsen” as in the author of Designing Web Usability and Mobile Usability, and “Norman” as in The Design of Everyday Things and Turn Signals Are the Facial Expressions of Automobiles. In the spirit of under-promising, the partner roster also includes Bruce Tognazzini (ex-Apple, author of Tog on Interface and Tog on Software Design).

At the end of summer 2011, Mozilla staff received over 11,000 user questions per month. Why so many? Clearly because many users are asking these questions! But beyond that, why do users need to ask the questions they do? Farrell identifies several issues:

  • 400 pages of online documentation were difficult to search.
  • Fielding user questions took staff time away from writing new help files, or improving old ones…
  • …But on the other hand, the new articles staff did manage to write accumulated and “caused more findability problems.”

For reasons Farrell does not explain, the number of incoming questions had already dropped to just over 7,000 per month by the end of 2011. That’s when a three-person team began a 14-person-week program of discovery, iterative testing, and revision of Mozilla’s support site. The team tested seven versions of key pages in a two-week period, then followed up with prototype design (with more testing and revisions) over the next nine weeks. (Farrell asserts that this speed of execution proves that “usability can be agile.”)

This dramatic improvement comes against a backdrop of diminishing returns: “In recent years, we’ve seen a decline of the ROI for usability, most likely due to approaching a ceiling of usability improvements….”

Yes, all of us are minutes away from some website or other that will provoke hoots of derision at that observation. But overall, we clearly are not stuck in 1995. There is progress.

Important components of the effort were understanding user behavior, and the support site’s information architecture. How did users interact with the structure of the available information? For example, what did users search for? Farrell’s post breaks out the research methods in more detail, with background links to other NNGroup.com resources.
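
As a purely hypothetical sketch of the simplest such question (“what did users search for?”), the Python below tallies query strings from a web server access log. The log location and the /search?q= URL pattern are my assumptions for illustration, not anything taken from Farrell’s post or Mozilla’s actual setup.

    import re
    from collections import Counter
    from urllib.parse import unquote_plus

    # Hypothetical log file and search-URL shape; adjust both to your own help site.
    LOG_PATH = "access.log"
    SEARCH_QUERY = re.compile(r'GET /search\?q=([^ &"]+)')

    def top_search_terms(path: str, limit: int = 20) -> list[tuple[str, int]]:
        """Count the most frequent help-site search queries in an access log."""
        counts: Counter[str] = Counter()
        with open(path, encoding="utf-8", errors="replace") as log:
            for line in log:
                match = SEARCH_QUERY.search(line)
                if match:
                    query = unquote_plus(match.group(1)).strip().lower()
                    if query:
                        counts[query] += 1
        return counts.most_common(limit)

    if __name__ == "__main__":
        for term, count in top_search_terms(LOG_PATH):
            print(f"{count:6d}  {term}")

High-frequency terms that match no existing article are one rough signal of the findability problems Farrell describes.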

By April, user questions had dwindled from 7,000+ per month to a little over 2,000. The number rebounded later, but stayed below 3,000 per month. The smaller volume fueled further improvement, as staff were able to respond more quickly to the remaining issues.

Caveat: Given the decreasing number of questions before the project began, how can we know that further decreases can be attributed to the project? Farrell does not address this.

Can other organizations emulate this feat? Obvious problems are:

  • Cost justification
  • Political and accounting scope

Cost Justification

Farrell writes “Is it worth spending 14 weeks to become 3 times better? This depends…and thus cannot be answered in general.”

For Mozilla, sheer scale overcame a lot of hesitation. For more likely scenarios, I’d ask questions like:

  • If 14 weeks buys me a threefold improvement, is there any way I could get twice as good in a month? Or would a month only buy 5%?
  • Can some of the user interaction data be collected as a side effect of another activity?
  • Can some interpretation of user interaction data be input to existing design efforts?

…Steering clear of all the temptations to lie to oneself about how a little leverage and synergy will pave the streets with gold, of course.

But a deeper issue affecting many organizations is how return on investment is calculated. When the landscape is divided up among fiefdoms and silos, no single manager’s perspective will include enough return to justify the project. No employee in any one of these silos will get approval for such a project. There is no rational incentive. This is the scope problem.

Political and Accounting Scope

Take a group of dedicated help file authors, for example. In this scenario, help file authors get access to applications late in the development life cycle. They create complementary online help systems, usually overlapping with application testing. They occasionally see help systems in a pre-production environment. They hardly ever see help systems in a production environment, as their target audience sees them.

These help file authors are discouraged from interacting with their end users, if not actually forbidden to do so. Several silos separate help file creators from consumers. Reports of end user abilities are simplistic and ambiguous. (Are they idiots? Or geniuses?) Help file authors know their audience only through conjecture. Are help system improvements limited to adopting a particular vocabulary and optimizing for search? Does the system even address the right questions? What if no one uses it? (That would be the dirty little secret managers should know, even if help systems impress the client decision-makers who pay for them.)

End users may leave click trails as they interact with the application and the help system, but this is invisible to help file authors.

This is nothing but an issue of collecting and communicating data, plus a management decision to extend existing efforts. Of course there are existing efforts, even if they’re simple and largely ignored; collecting usage data is the bread and butter of the application server world. MadCap Software’s Pulse product also promises to reach out from the help file author’s world to gather user clickstream data, among other things.
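
To make the collection side concrete: the sketch below is a generic WSGI middleware, not Pulse and not anything Mozilla runs, that appends each help-page request to a plain-text log a help file author could actually read. The /help/ path prefix and the log file name are assumptions for illustration.

    import time

    class HelpClickLogger:
        """WSGI middleware: record requests to the help system in a text log."""

        def __init__(self, app, log_path="help-clicks.log"):
            self.app = app            # the wrapped WSGI application
            self.log_path = log_path  # hypothetical destination for the click trail

        def __call__(self, environ, start_response):
            path = environ.get("PATH_INFO", "")
            if path.startswith("/help/"):  # assumed mount point of the help system
                record = "\t".join([
                    time.strftime("%Y-%m-%dT%H:%M:%S"),
                    path,
                    environ.get("QUERY_STRING", ""),
                    environ.get("HTTP_REFERER", "-"),
                ])
                with open(self.log_path, "a", encoding="utf-8") as log:
                    log.write(record + "\n")
            return self.app(environ, start_response)

    # Usage: wrap the existing app, e.g. application = HelpClickLogger(application)

Nothing here is novel; the point is that the click trail already passes through infrastructure the organization owns, and making it visible to help file authors is a management decision, not a research project.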

Baffled end users do not communicate with help file authors, or anyone help file authors work with. Users talk to personnel in other departments, who are responsible for customer contact. Help file authors receive no reports of these interactions.

In this environment, help file authors might as well be ice fishing. Who knows what’s swimming around down there? All you can do is drop a line into the dark water and try not to freeze to death before something happens…except that help file authors are not allowed to cut holes in the ice.

Since help file authors have no user interaction data, they cannot form hypotheses about help system traffic and room for improvement. Since help file creation and user support budgets come from different buckets, there’s no incentive to evaluate how help system changes might affect support costs.

In this situation, the outcome of rational low-level decisions across the organization is not a rational decision for the organization as a whole. Return on investment for a project like this can only be evaluated at a level high enough to span help file authoring, server administration, and user support costs.

Is this where you live? Then I advise you not to hold your breath waiting for change. You may or may not want to buttonhole decision-makers, lobbying for support. But at the very least, keep the idea in your pocket until you get a chance to mention it to the right person.
