Why you need to do more usability testing...

September 03, 2009

Usability testing has a hidden benefit that is not apparent unless you do a lot of usability testing: cultural absorption. By cultural absorption, I am not referring to culture as studied in a field study (contextual inquiry, ethnographic study); instead, I am talking about user culture.

User culture is what you gain from understanding how users think, walking in their shoes, and taking their perspective on the issues they face.

User culture, situated outside of companies, is usually abstracted, misunderstood and distorted (in that order!). User culture typically runs counter to your development culture. Regular usability testing helps bridge this culture gap.

Development cultures are technological sub-sets of company culture:

  • Dev groups typically assume that users know more than they actually do, that users can do what developers normally do (or most of it), that shortcuts are helpful, and that features are more useful than they usually are.
  • Company culture is typically characterized by big-picture-thinking CEOs who get user experience but don't terraform their company culture (see Why you can't innovate like Apple), plus a few, or a single, grassroots user experience champions. Middle management is usually too busy to get up to speed on usability research, and as a result contact with customers is limited or nonexistent.

Usability testing is usually thought of as a way to understand how users use your design. That's all good, but one of the under-noticed benefits of usability testing is exposure to user cognition. User cognition is characterized by rules, behaviors, habits, values, beliefs, attitudes and patterns. That is why I consider it, and call it, a culture. It's unique to your users, and it's typically different from the way your organization functions and thinks about its user experience problems.

Usability testing is typically believed to be good on a regular basis because it helps designers and developers vet usability problems. While this is true, it is rarely recognized that iterative, rapid usability testing is good because it puts you in closer contact with users, their culture and the cognitive requirements you design for. This is why I believe you need to do more usability testing!

Happy Usability Testing!
Frank Spillers, MS (Usability Consultant)

Announcing: Experience Capture Studio, new usability testing software (Beta)

June 16, 2009

After 6 months of hard work, we are pleased to announce that our latest usability testing software, Experience Capture Studio or "ECS", is officially available!

The software dramatically updates our previous LiveLogger product with integrated video viewing and note logging in one environment. This provides a more powerful solution, allowing observational logging to be placed in context with the video captured during a usability test.

This Beta release adds numerous improvements to the overall usability testing software solution: Video file sizes are dramatically smaller and export of video is very quick. The software can import up to 4 video feeds; ideal for mobile or consumer device usability testing. More enhancements aimed at making usability testing logging easier are planned for future releases.

Experience Dynamics & Usability Lab Rental partnered with New Zealand based Intranel to develop the software, utilizing their extensive ethnographic software package experience. 

New Features:


1) Live note taking: Take notes and flag "events" (usability metrics such as error rates) as markers in the live video stream.
2) Play back a session and find or edit flagged events. Metrics such as success rate can be corrected in real time if they were logged accidentally or if the user succeeds seconds after the fail button is hit.
3) Quick video exporting: Find incidents (observed metrics such as a user getting confused by navigation) and export an instant highlight clip.
4) Instant Reporting: Metrics from the observed test are logged as graphs and exportable tables. Common Excel or JPEG and WMP video formats are supported.

Learn more about key features in Experience Capture Studio

Usability Labs for Whatever Your Scenario

With the release of ECS, our labs are now suitable for specific usability testing scenarios such as mobile device (iPhone, tablet, BlackBerry, Nokia etc) or medical or healthcare devices (glucose monitoring; patient record-keeping) as well as Web/Software based usability testing.

Explore your usability testing scenario or get in touch and we can talk about how we can help you with your desired usability lab. 

Happy Usability Testing!
Frank Spillers, MS (Usability Consultant)

Do you have to sit next to the person to get the most "direct observation"?

February 26, 2009

Our usability lab rental customers often ask us whether they can or should moderate usability tests while sitting next to the user.

The most common questions include:

1. Can I sit next to the user during usability testing?

Yes, absolutely, though there is no methodological rule about sitting next to the user during the test. The reason you would sit next to them is to provide more intimacy, or "hands-on" moderation; in other words, to make the user feel more comfortable.

2. Do I have to sit next to the user during the usability test? Not necessarily. Our usability labs are designed so that you do not have to sit next to the user if you choose not to. This is really a moderator preference. We find it easier to use our intercom system to communicate with the user, leaving them to work alone on their tasks. There is less chance of moderator stress or participant bias this way.

3. What benefit is there from being next to the user vs. being behind the one-way mirror?
The one-way mirror (or observing from another room, another common configuration in our usability labs) is valuable because it gives moderators space to log notes or to observe, comment and do their own thinking out loud! On the flip side, the benefit of sitting chair-side is increased intimacy, or "bedside manner".

It will depend on what kind of test and what type of user you are dealing with. When I first started moderating usability tests over ten years ago, I would almost always sit next to the user. I used to think it allowed better observation. The more usability testing I conducted, the more awkward I found it to take notes and observe so close to the user.

These days I find it easier to moderate using our usability lab's intercom system, or to create a link with walkie-talkies or a conference call bridge. Sometimes, if I feel the user is anxious, I will sit chair-side. The bottom line is that there is no rule; it is up to you and depends on what usability lab setup you have.


Happy Usability Testing!
Frank Spillers, MS (Usability Consultant)

What makes live observation of a usability test a must?

January 28, 2009

Usability testing gains its strength from direct observation. Like listening to someone describe water vs. putting your hands under a tap to directly experience water, direct live observation of a usability test is more "real". Moreover, direct observation is critical for accelerating understanding and decision-making.

The value of live usability testing observation should not be underestimated. All too often stakeholders (and even clients who hire usability consultants) will skip the test in favor of reading the report or watching the videos later.

Bottom line: you are wasting a learning opportunity (and your company's money) if you do not attend a usability test for a site or product whose direction you have a stakeholder say in.

What makes live observation of a usability test so compelling?

Usability testing in person lets you see, hear and feel your users as they succeed and fail. This "direct experience" is what makes usability lab testing more powerful than other methods (see discussion below).

The dynamics of the user in person, with all their human quirks (twitches, sneezes, strained vision, in short body language), seem to be part of it. The direct contact with the user, as you clearly see their facial expressions, hear their sighs and breathing, and get a feel for who they are, all factors into a rich observation experience.

First-hand observation of your customer as they "think aloud" can save you a dozen meetings later. Understanding how they make sense of your design, in all the fidelity of their physical presence, can leave a powerful impression on you. There's just something very different about "being there" versus watching it on video later.

I often meet marketers, developers, project and program managers who have witnessed a usability test. They usually smile when they recall a usability test, with that "I get it" look and they usually want more. Rarely do I meet someone who has seen a real (professional) usability test who said, "oh that's boring, I think I will attend a meeting instead on testing day..." Yet I have worked with clients who do not show up to usability tests.

Top 10 Excuses for NOT Attending a Usability Test

1. Our senior managers can't make it to the usability test. If senior managers are involved in decision-making, which invariably they are, attending at least 2-3 sessions can save countless hours otherwise spent trying to understand issues and back-track on poorly informed decisions.

2. I'll watch the video highlights. Watching the game on TV and being in the stadium are two different things. If your company is paying to host a game, go to the game!

3. We can't bring our whole team out. 1-3 stakeholders at minimum should be in attendance.

4. It's too expensive to travel. Consider remote testing to save budgets. Remote testing gives you everything you need minus a few dynamics such as user facial expressions, and the "in-person" factor. Remote testing is great for iterative, rapid testing with a team of familiar players.

5. I'll just read the report. Test reports are important; however, the context gained from live observation will make the report many times more meaningful.

6. I'm in meetings and won't make all the sessions. If you are managing the project, you should be there for all or almost all sessions. If you are a peripheral stakeholder, try to make 2-3 sessions.

7. I'll wait to hear what the usability experts say. The behavioral interpretation from the usability expert is important; however, you will be able to understand decisions more readily by having been at the user sessions.

8. It's not my site (or direct responsibility) that's being tested. You can always learn from user behavior no matter what site they are using.

9. I've attended usability tests before and I don't need to be there. To keep current and avoid the known problems of generalization and assumption, attend a few sessions and then decide.

10. I'm not really sure how it works, so it's all a little intimidating... What better way to learn than from the experience itself!

Conclusion: Attending and getting as much live Usability Testing exposure as possible is crucial to making better decisions, understanding the context of user problems or issues and for strengthening your user empathy and advocacy skills.

Happy Usability Testing!
Frank Spillers, MS (Usability Consultant)

5 Things You Should Never Say or Do to Users (during usability tests)

January 23, 2008

How you ask a question during usability testing can color your data. At Experience Dynamics, it's important to us that we get *clean* data and that our usability testing for our clients runs smoothly. The subtleties of moderation are easily missed and usually come only from years of practice.

Here are 5 things you should never say or do to users during usability tests; I teach them regularly in my usability training courses...

1)  Praise

Example: "How am I doing?" "Good Job! You're doing great!"

Why is this bad?

Giving your users praise sets up an unhealthy relationship between the researcher and the subject. If the user makes a mistake, will you be there to tell them they are doing poorly? Will you provide "therapy" to the user, tell them it's okay and console them?

I have witnessed colleagues do this. I did it once, and realized I was stuck when the user got angry and insisted I tell them whether they were right. To do so would have embarrassed them; I may as well have said "You are a stupid user, don't worry".

The problem with praise during testing is that it violates one of the principles of solid usability testing: there is no right or wrong, and the user is not there to make you happy by "doing a good job".

Best Practice: Maintain a neutral interaction with the user. The user does not need to know whether they are making a mistake or whether they succeed or fail; that's for you to observe, not for the user to hear!

2) Feature Like/Dislike

Example: "Do you like this feature?"

Why is this bad?

If you ask users what they like or dislike, you have turned the usability test into a focus group. Focus groups elicit opinions, usability tests elicit behaviors. Margaret Mead once said, "what people say and what they actually do are two very different things". If you ask people what they like, you'll miss how they would actually use it when they got home with it.

Best Practice: Give users familiar tasks to perform and watch them! If users really hate a feature they will vocalize it. (Note: usability testing uses a verbalization technique called the "Think Aloud" protocol).


3) Asking about 'Ease of Use'

Example: "Is this easy to use?"

Why is this bad?

It is really difficult to gauge ease of use from a questionnaire, partly for the reason mentioned in #2 above, and partly because ease of use is relative. Humans are highly flexible and will internalize difficulty with machines, often blaming themselves. What's easy for some is mind-boggling for others.

How is it that all these years major software manufacturers have given us ease of engineering instead of ease of use?

Best Practice: Again, watch users, don't ask them. Remember ease of use is not the only usability metric that counts. More on usability metrics in another post.

4) Asking about expectations

Example: "Is this what you were expecting to be on this page?"

Why is this bad?

I once accompanied a usability lab rental customer on site, with a role of observing and acting as technical support. The client was a *major* ad and interactive agency that was conducting usability testing for its *major* financial services client. The financial services client was present, but had no idea that a "worst practices" usability test was being delivered by the agency! The facilitator sat with each user and on each screen asked, "Is this what you were expecting here?"... and each user said "I guess so, I don't know".

Best Practice: Let users vocalize their expectations by walking through your site or web application with the industry standard "Think Aloud" method.  Expectations do not need to be asked, users will tell you what they think should happen 90% of the time (either verbally or through their behavior, non-verbally).

5) Giving Instruction

Example: "Click on that button, scroll down, look at that in the top corner"

Why is this bad?

When a user is lost or confused, common sense tells us to help them. Forget about it! This is one of the cardinal usability testing rules I stress in my usability testing training with corporate teams. If you instruct or direct the user, as with praise, they will rely on you as their crutch when they need help again.

Another thing we have realized, in over 55 live usability reviews our Portland User Interface Special Interest Group has conducted since 2001, is this: let the user go off track if they need to; their confusion will teach you something about their expectations and problem-solving techniques.

Best Practice: Instruction should only be offered if you are consciously moderating and feel it is safe to "reel the user back in" (I usually leave users as long as 5-10 minutes on an off-track path).

Happy Usability Testing!
Frank Spillers, MS (Usability Consultant)

If you enjoyed this article and you are interested in refreshing your usability testing skills, you might check out my exclusive Usability Testing Skills refresher web seminar...

-or-

Join the Portland User Interface SIG, the group meets online and is open to anyone with an interest in learning more about usability!

Formal vs. Informal Usability Tests

April 09, 2007

What type of usability tests should you be conducting and why?

Formal Usability Testing

Also called "High Fidelity" usability testing.

Where it gets its name: Design concepts are typically more finalized. Formal testing can take place in pre-release design, but not always. Websites in their current state (before a re-design) are considered Hi-Fi tests.

How common is it? Very common, even more common than informal usability testing.

Advantages: Click-able prototypes are easier to follow (for stakeholders). Formal usability testing is often the test of choice for including developers, project managers, executives etc.

Drawbacks: A certain level of HTML "smoke and mirrors" design needs to be created (for websites). A greater level of coding complexity is involved in testing software applications; in this case informal testing is better.

Informal Usability Testing

Also called "Low Fidelity" usability testing or "paper prototype" testing.

Where it gets its name: Design concepts are tested in a draft "wireframe" or unpolished state. No coding or graphic design has occurred at this level. The focus is solely on testing the "information architecture" or the "interaction design".

How common is it? Very common, contrary to what most marketers might think. Usability guru Jakob Nielsen called this testing "Guerilla HCI" to refer to the fast and frequent use of the technique in corporate environments. However, low-fi testing is usually done by usability engineers, with users, behind closed doors or "in the trenches".

Advantages: Yes! You can get design feedback early on. Feedback can be acquired in less than two weeks and inserted into the development lifecycle rapidly, a benefit for quick-turn Agile development cycles.

Drawbacks: You can't always test dynamic page level interactions. This will become more of a problem as "Web 2.0" interface design elements become more mainstream (such as fading and hovering elements).

Mixed Fidelity Usability Testing

Mixed-Fi? Mixed fidelity tests are more common for us at Experience Dynamics. We typically test with low-fi concepts that are "taped together" with HTML and some JavaScript. This gives us a rapid "cut and paste" site that can be iterated and refined on the fly. Informal HTML prototypes allow us to prototype and test quickly, keeping costs down while still including stakeholders in our usability labs or remote testing sessions (more on Remote Usability Testing in a future post).

Conclusion: Usability testing can and should be done early on and throughout the product design lifecycle. It is very common for usability practitioners to test concepts that only exist on paper or as static PhotoShop files. Moreover, with basic HTML, a hybrid fidelity can be achieved bringing both the need for speed and user validation to a design.

Happy Usability Testing!
Frank Spillers, MS (Usability Consultant)

Usability from WWII to the present- the historical origins of usability testing

February 27, 2007

Where does usability testing come from? How long has it been around? Is it new or old?

If you are wondering where the methodologies you use come from, you ought to know that there is a very long history (and heaps of military,  academic and corporate research) behind usability techniques.

(Ivan Sutherland's Sketchpad 1963 pictured left)

Usability Comes of Age in the "Dot Com" Period

For many, the reference point for usability testing is the dot com boom circa 1998-2001. This is the first time usability testing was used on a wide-scale basis for commercial (e-commerce) purposes. Before this, usability testing was confined to academic or corporate R&D research (Apple, Sun, HP, Bell Labs, AT&T, Microsoft and others). It also marks the first time usability (aka "customer experience") figured significantly in an executive team's decision making process (e.g. Amazon, eTrade, Google, Dell etc).

"In our first year we didn't spend a single dollar on advertising... the best dollars spent are those we use to improve the customer experience."      - Jeff Bezos, Amazon.com

A Historical Timeline of Usability Research

For practical purposes we consider World War II the emergence point of usability research. Post-WWII is also when the interdisciplinary field of Cognitive Science was founded. Cognitive Science is the obscure field where usability engineering, or HCI (Human-Computer Interaction), aka Human Factors in the US or Ergonomics in Europe, is studied. As early computers and Artificial Intelligence emerged (think of cracking Nazi codes), so did the study of how humans process information and perform with computer-based interaction.

Historical Marker 1: 1930-1954

Colonel John C. Flanagan perfects the "Critical Incident Technique"

World War II was the starting point of electronics and electrical systems controlled by human operators through a "user interface". Industrial psychologists such as John Flanagan discovered that by reducing the number of buttons, knobs, switches and control panels in new fighter aircraft, they could dramatically improve operator performance. The P-51 Mustang fighter, for example, "became one of the conflict's most successful and recognizable aircraft".*

Developed by Flanagan, the Critical Incident Technique (or CIT) is a set of procedures used for collecting direct observations of human behavior that have critical significance and meet methodically defined criteria. These observations are then kept track of as incidents, which are then used to solve practical problems and develop broad psychological principles. [source: Wikipedia]

In today's parlance: you need to do usability testing because your customer is interacting with your company, brand, product or service through an informational display (website, software application), creating a self-service situation. Any self-service situation with a computer interface will likely cause errors, confusion and failure (if not designed to meet users' expectations). Observing users perform tasks helps find out what those errors are before it's too late.

For example: compare the Supermarine Spitfire to the P-51 Mustang fighter cockpit (using CIT, an early usability testing technique). [hat tip to Jurek Kirakowski for pointing out this example]


The period 1950-1969 saw an increase in usability research related to computer interfaces (as micro-electronics began its boom). IBM was active in this area early on, as were other more academic/R&D innovators such as Doug Engelbart at SRI, Ivan Sutherland at the University of Utah, and Alan Kay. (Note: Engelbart, Sutherland, Kay and others are known as the inventors of many of the user interface hardware and software designs that you use today.) Early pioneers such as Sutherland developed advanced interfaces in the late 1960s that are not yet in the public mainstream today, such as virtual reality technology and tablet PCs.

Historical Marker 2: 1970-1983

Xerox PARC (Palo Alto Research Center)

Xerox is largely responsible for much of the innovation in user interfaces (still in use today!). Many know these as WIMP (Windows, Icons, Menus, Pulldowns). Xerox R&D work and resulting usability and user interface innovations propelled the current age of corporate usability research.

[above: Xerox Star system. Hat tip to Bruce Damer, inventor of the related Xerox Elixir desktop UI]

Historical Marker 3: 1983-1992

Apple unleashes the Macintosh user interface

Apple built its design of the personal computer around a strong emotional connection with the user. This was also reflected in its advertising.


above: Apple II, early 1980s... usability takes root in research circles.

Usability research that began in the late 1970s exploded in the 1980s (in the R&D sense), with many great achievements in user interfaces adopted by the masses (e.g. Atari jump-started the video game industry with the innovations of usability pioneers Alan Kay and Brenda Laurel). Note: if you haven't read any of Brenda Laurel's work, you're missing out...

From the early to mid-1990s, usability research continued but was more of an R&D hang-over from the boom of portable and personal electronics spurred by the mass adoption of Microsoft's personal computer and Windows operating system. Why Microsoft won the PC battle and Mac did not is the subject of David Gelernter's book Machine Beauty.

Historical Marker 4: 1998-2003

Usability becomes recognized as a strategic win to Web site marketing efforts

The mad rush to build the Internet was triggered by the recognition that information could be "easily" indexed and edited with the new mark-up language (HTML). Furthermore, business could be conducted online and products sold through online catalogs or e-commerce sites.


above: Boo.com. Naive Web design tricks such as those epitomized by Boo.com were a wake-up call to usability in the early days of the Web.

Unfortunately, the ease of learning HTML meant that anyone could play on the Web. Likewise, anyone could run a usability test. Evangelism favoring "just do it" was promoted by experts like Jakob Nielsen, with his "discount or guerilla HCI", and authors such as Steve Krug, with his "going out of business usability testing". Nielsen's colleague Rolf Molich, however, showed that not all usability testing methods and approaches are conducted equally. More in a future post on his findings and the implications of best-practice usability testing techniques!

Usability testing solutions exploded with new "bots" like WebCriteria's Max and online panels like Vividence, billed as replacements for "old school usability testing", or at least that's how the Sales VPs at these companies positioned usability. (Disclosure: I worked for WebCriteria and got a bird's eye view into this piece of usability history.)


Historical Marker 5: 2004-2007

The Web gets a makeover with Web 2.0 and a focus on User Experience

New energy, new thinking and new players are starting to dominate how things are done on the Web (Yahoo, Google, Flickr, etc.). These new approaches signal a maturity never seen before.

New Web 2.0 start-ups, aimed at destabilizing the dominant position of traditional software companies and their applications and tools, dominate today's discussions (just ask a Venture Capitalist what they think about social video or mobile applications, two hot areas of development at the time of writing).



Above: the logos of Web 2.0 (interesting analysis of Web 2.0 logos by Stephen Coles)

Usability is now being recognized (in the USA in particular) as a strategic "win" for Web site marketing efforts. No new Web 2.0 start-up would be caught dead without considering user experience, it seems. How many are actually doing usability research? (Not all, and I am not sure of the answer to this... but many are serious about improving the usability of their tools and applications.) Clients, partners, VCs and end-users are all demanding high standards of usability in your design. I have been tracking this over the past 8 years of my own usability testing at Experience Dynamics. It is amazing to see the tides turning!

Happy Usability Testing!
Frank Spillers, MS (Usability Consultant)

For further interest see: David Meister's The History of Human Factors and Ergonomics (Lawrence Erlbaum, Mahwah, NJ, 1999). Chapter 4 covers The Formal History of HFE and Chapter 5 covers The Informal History of HFE.

Disclaimer: the above historical time line is my best effort at explaining what I believe are significant points and players in usability history. If I missed a major detail, please let me know! This history is how I teach it in my usability testing training based on my own understanding as a usability professional.

 

How many users should you test with in usability testing?

December 12, 2006

Question: How many users do you need to test with for a usability test?

Answer 1: 5 users (Jakob Nielsen and Thomas Landauer, 1993).

Answer 2: 15 users (Laurie Faulkner, 2004), PDF file.

So, which is it, 5 or 15? And why are we arguing about an extra 10 users, doesn't one need to test with at least 100 or more users for statistical significance, accuracy and validity?

Statistical Validity in Usability Testing

Usability research is largely qual-itative, or driven by insight (why users don't understand or why they are confused). Qual-itative research follows different research rules from quant-itative research, and it is typical that the sample size is low (i.e. 15 or 20 participants).

The end result of usability testing is not statistical validity per se (the outcome of quant-itative research) but verification of insights and assumptions based on behavioral observation (the outcome of qual-itative research).

Why don't we do large numbers in usability testing?

  1. We are looking for behavioral based insight (what they do).
  2. Statistics tell half the story and are often devoid of context (e.g. why did they fail?); this is also one of the major problems with gaining insight from web analytics (website traffic statistics).
  3. Our objective is to apply findings to fix design problems in a corporate setting (not academic analysis).
  4. Research shows that even with low numbers, you can gain valid data.
  5. Usability testing is used industry-wide and has been for the past 25 years. Experts, authors and academics put their reputations and credentials behind the methodology.

Behavior vs. Opinion

Usability research is behavior-driven: You observe what people do, not what they say.

In contrast, market research is largely opinion-driven: you ask people what they think, and what they think they think. You need big samples for market research because of this (though focus groups bend this rule because they are somewhat qualitative). This is why phone or web surveys require hundreds or thousands of responses. Behavior-driven research is more predictable: basically, if 10 of 15 users are confused, you can assume that many more will be confused as well.

Example: If you ask someone "what do you think of this homepage?", you will need several hundred responses to gain statistical validity in order to validate what will be opinion-driven data. Asking someone their opinion does not constitute usability requirements, since usability testing is about isolating "how they will actually use" the design not just "what they think" of the design.

If you give a small set of users a scenario that forces them to interact with home page elements and observe their behavior, and listen to their unsolicited reactions, you will get a better idea of what they think and need. The driver here is expectation (governed by cognitive factors) vs. opinion which can be driven solely by emotional, social or personal factors.

Suggested Sample Sizes for Research

Corporate Usability Research:

  • Surveys (phone and web) = ~240 to ~1,000+
  • Focus Groups = 15-20 (depends on audience segments involved and goals of study)
  • Usability Testing = 10-15 participants
  • Field Studies = 15-40 participants
  • Card Sorting = 15-30 (higher is better since card sorting uses the statistical method of cluster analysis)

Academic Usability Research:
Samples are usually larger depending on size and scope and research objectives (e.g. 15 users per segment or 40-100 users in a usability test).

Jakob Nielsen's "test with 5 users" assumption

I think it is important to understand that Jakob Nielsen was trying to promote usability testing as a regular usability research activity in corporate environments. I believe he conducted this research (using a call center software application in the early 1990s, rumor has it) in order to demystify the perceived complexity of setting up and running a usability test.

Remember, in the early 1990s, only the hard-core research and development labs at Apple, Bell Labs, Microsoft, IBM and Sun were doing usability testing. In Nielsen's much respected and equally criticized article "Why You Only Need to Test With 5 Users" (written in 2000), he recommends (based on the early 1990s analysis) that instead of opting for higher accuracy, you go for the "fast and dirty" approach of conducting three small tests instead of one "elaborate" study.

Later on in the article Nielsen says that the rule only applies if your users are comparable. If you have other segments or user types, you will need to test more users.

Translation: 5 users per audience segment or target user group; for a website with 3 diverse segments, you will need 15 users for the one test.
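Nielsen's recommendation rests on a simple probabilistic model of problem discovery. Here is a minimal sketch of that model, assuming the commonly cited average of about a 31% chance that any single user uncovers any given problem (that rate is an assumption from Nielsen's early data, not a measured value for your own product):

```python
def problems_found(n, detection_rate=0.31):
    """Expected fraction of usability problems found by n test users,
    assuming each user independently hits a given problem with
    probability detection_rate (Nielsen's oft-cited average)."""
    return 1 - (1 - detection_rate) ** n

for n in (1, 5, 10, 15):
    print(f"{n:2d} users -> {problems_found(n):.1%} of problems")
```

Under these assumptions, 5 users find roughly 84% of problems; the curve flattens quickly, which is the heart of the "5 users is enough" argument and also why the averages can hide a wide spread, as the Faulkner study below shows.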

Magic Number 15 for Usability Testing...or Why 5 Users is Not Enough

Laurie Faulkner (PDF: 2004) conducted empirical research showing the benefits of increased sample sizes. In her study, "Beyond the five-user assumption: Benefits of increased sample sizes in usability testing", she wrote:

It is widely assumed that 5 participants suffice for usability testing. In this study, 60 users were tested and random sets of 5 or more were sampled from the whole, to demonstrate the risks of using only 5 participants and the benefits of using more. Some of the randomly selected sets of 5 participants found 99% of the problems; other sets found only 55%. With 10 users, the lowest percentage of problems revealed by any one set was increased to 80%, and with 20 users, to 95%.
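Faulkner's resampling approach can be illustrated with a small Monte Carlo sketch. The detection matrix below is synthetic (invented rates, for illustration only; her study used observations from 60 real participants), but the mechanics are the same: draw many random sets of n users and see how badly the unluckiest set performs.

```python
import random

random.seed(42)

NUM_USERS, NUM_PROBLEMS = 60, 20
# Synthetic detection matrix: detections[u][p] is True if user u
# would hit problem p. Per-problem detection rates are made up.
rates = [random.uniform(0.1, 0.6) for _ in range(NUM_PROBLEMS)]
detections = [[random.random() < rates[p] for p in range(NUM_PROBLEMS)]
              for _ in range(NUM_USERS)]

def coverage(sample):
    """Fraction of all known problems found by this sample of users."""
    found = {p for u in sample
             for p in range(NUM_PROBLEMS) if detections[u][p]}
    return len(found) / NUM_PROBLEMS

def worst_coverage(sample_size, trials=2000):
    """Lowest problem coverage seen across many random samples."""
    return min(coverage(random.sample(range(NUM_USERS), sample_size))
               for _ in range(trials))

for n in (5, 10, 20):
    print(f"worst coverage with {n:2d} users: {worst_coverage(n):.0%}")
```

As in Faulkner's data, the worst-case floor rises as the sample grows: a random set of 5 can miss a large share of problems, while larger sets rarely do.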

At Experience Dynamics (a usability consultancy), we have found that the cost savings of using fewer users are negligible. In other words, after you spend the time and money to set up, facilitate and report on the test, adding a few more users does not add "that much" time and money to the overall project.

The benefit you get from adding a few more users to the total (or, in the case of 5 users, doubling the number) far outweighs what a small "quick and dirty" test gives you. If you are running a series of usability tests or iterating your testing process (recommended for refinements based on evolving design decisions), you may want to choose a smaller number of users: I recommend no fewer than 8.

Happy Usability Testing!
Frank Spillers, MS (Usability Consultant)

Special Event: Usability Testing Methods - What are we observing and why?

November 14, 2006

Note: If you missed this seminar, we will be running it again in the future. Stay tuned to Experience Dynamics usability seminars for details.

When conducting usability testing, what do you measure and why? How do you capture metrics, and what should you be measuring?

In this World Usability Day exclusive web seminar, we will discuss usability testing observation metrics and best practices.

Agenda:

1. Usability Testing metrics: What are the things you should be measuring? How to measure qualitative vs. quantitative data (e.g. satisfaction vs. effort).

2. Usability testing observation best practices: Do you measure time on task every time? What do you need to capture metrics well when you are doing "quick and dirty" discount usability or "guerrilla" testing, without undermining your own efforts?

3. New tool for usability testing logging: LiveLogger. Just released this week, LiveLogger is a new usability test logging application. We will review its interface and discuss what the tool does and how it captures and reports on usability testing metrics.

Summary: In this 1 hour live web seminar (held twice on World Usability Day), we will review usability testing observation best practices.

Length: 60 minutes

Who should attend: people new to usability testing, or those who want to conduct rapid usability testing; usability managers; user experience teams; anyone responsible for user advocacy or usability testing.


Here is an independent review of the seminar by Corey Bates at UseTube. Thanks for your comments Corey.

Happy Usability Testing!
Frank Spillers, MS (Usability Consultant)

Usability Lab Rental: new site launch, new labs, new software!

November 06, 2006

On World Usability Day (11.14.06) we launch the second version of Usability Lab Rental.com, an Experience Dynamics company, providing usability testing software and usability lab solutions.

Two exciting new products to announce with the site launch! The launch will coincide with World Usability Day, whose theme is "Making Life Easy".

1. New usability testing logging software, now available as a free 30-day download. Does LiveLogger make your life easier? It centralizes test logging notes and provides quick real-time reporting of user and task performance. Tell us what you think!

2. Next-generation digital portable usability labs are now shipping. Perfect if you are looking for a corporate solution to ramp up your internal usability efforts.

Future posts will provide more in-depth information to help you conduct better usability tests.

Happy Usability Testing!
Frank Spillers, MS (Usability Consultant)