Extreme Usability at P-Camp08


This session was originally suggested by Mary Hodder, who could not attend. Rather than let the topic drop, three of us stepped in. Meghan Ede, Nancy Frishberg, and Daniela Busse all have extensive usability experience and varying amounts of experience working in agile or extreme programming environments. This session builds on at least 2 other usability-focused sessions earlier in the day at P-Camp08. (And in writing these notes I uncovered at least one other page in this wiki about a previous session on Extreme Usability.)

The questions posed from the audience included:

That audience (numbering roughly 30 people) consisted mostly of Product Managers, who expressed concerns about being "too close to the product", working on short releases, and sometimes working for small organizations ("start-ups where I do it all").

Our recap of definitions of Extreme Programming included:

(These notes make no judgment about the accuracy, relevance, or relative importance of any of the above definitions.)

Meghan's claim (made earlier in the day as well) is that there are about 6 kinds of competencies that might get called "usability" work, and that any individual is probably good at 1, 2, or at most 3 of them. Few of us can do it all. Projects need all of these skills, but in varying amounts; some can be brief interventions, while others will be useful over the full life of the project.

  1. Visual design (encompasses graphic design)
  2. Interaction design
  3. User Research (encompasses fieldwork, site visits, lab usability, and many other techniques)
  4. Prototyping (using a variety of tools)
  5. Information Architecture (IA)
  6. Documentation (including help, manuals, installation, setup guides...)

Following the session we remarked on two more specialties that deserve to be grouped with the 6 above:

  7. Accessibility
  8. Internationalization/Localization

The key thread that ties all these different competencies together is that they rely on directly observing users from the target group who are expected to use the product (software, hardware, etc.) as part of the work of designing and creating products. Unlike using a product manager (or a marketing person) as a surrogate for the user (as is common in agile development practices), usability folks actually listen to and watch the behavior of users (who may be ordinary consumers, system administrators, chocolate lovers, or rocket scientists, depending on the kind of product under development).

Note that the customer for a product is often not the same as the user: consider the person who makes the buying decision for an enterprise, who may or may not in fact be a user of the system. A shared calendar solution may be used by executives as well as administrators and people in all other functions; in this case the customer (the buyer, perhaps the CFO or a purchasing agent who reports to the CFO) and the users (various roles) may be the same. By contrast, a pharmacy inventory system for a hospital or hospital consortium may be selected for all sorts of reasons other than its usability by the pharmacists and pharmacy assistants. In this latter case the customer is likely distinct from the users, and may apply wildly different selection criteria than the users would (price per seat or the ability to network across several institutions, versus formatting options for printed reports or automatic fill-in of brand names from generic names, to cite just a couple of differences).

We described the 5-second page recall "test" that Jared Spool uses, an extremely inexpensive method that reveals whether the information you're hoping users will find easily is indeed available to them within the first 5 seconds. (You might not get another 5 seconds of their time and attention.) This method requires no huge upfront investment - it's cheap - and it can contribute to a project that's open to hearing from users.

Other methods, such as home visits, require several weeks to find prospective participants who meet the criteria, to coordinate the team's calendars, and to get all the recording equipment ready to go. The results from a specific home visit may not be immediately available, but the depth of understanding gained is far greater than from a 5-second test. Current and future versions of a product will continue to benefit from what's learned in home visits.

Examining customer call logs is a slightly more indirect usability method. The analysis of call logs can give great feedback about the specific difficulties customers and users are having with installing and using the product. Depending on the organization, this information may stay stuck in the customer support center, or it may make its way back to product development and inbound marketing as requirements or requests for enhancements.

One of the discussion points was how to fit a usability person (or more than one) into the development team. Should the usability person remain outside a 2- or 3-week sprint cycle, fitting longer activities (home visits, lab studies) into the schedule with periodic reports that align with the start of a sprint or the next release? Or should the usability person function like a partner in pair programming, creating Visio, PowerPoint, or HTML mockups for prototyping? And which of the 6 usability functions mentioned in the session are we talking about?

References (in roughly the order we mentioned them in the session)

Notes contributed by Nancy Frishberg (nancyf at acm.org)