Moderated usability testing: All you need to know

Moderated usability testing is one of the most powerful tools in a UX designer’s toolbox. It helps UX designers validate assumptions while learning more about users and how they interact with your product.

Yet these tests are often conducted based purely on intuition.

Just show the product to the user and ask them to perform a few tasks. Nothing too complicated, right?

But there’s quite a big difference between a poorly conducted moderated test and a truly well-executed one.

In this article, I’ll explain the basics of moderated usability testing and provide a step-by-step guide on designing and conducting tests that bring meaningful insights.

Moderated vs. unmoderated usability testing

This section is crucial, so don’t skip it. Of course, I’ll cover more than “moderated tests are moderated, and unmoderated tests are…unmoderated.”

Unmoderated usability tests are a series of tasks that users perform on a prototype or actual product without the presence of a UX researcher. They are very scalable yet don’t go deep into users’ thinking processes, making them a great quantitative research method.

So, although many designers treat unmoderated usability studies as qualitative research, that’s a rookie mistake. How would five or ten unmoderated tests help you? You can’t really assess overall usability with such a small sample, and the shallow nature of unmoderated tests doesn’t allow for deep, revealing insights.

Given all that, unmoderated tests are best when used:

  • As a quantitative method with numerous participants
  • As a way to test the discoverability of features
  • To assess the overall usability of a solution

Moderated tests, on the other hand, are much more revealing and informative thanks to the ability to follow up with questions. For the same reason, they’re not as scalable, so they fall under the umbrella of qualitative methods.

Use moderated tests:

  • As a qualitative method with a few participants
  • As a way to assess whether the solution delivers actual value to users
  • To capture discoverability and usability signals for further research

Moderated usability testing: Step-by-step guide

Step 1: Prepare the test

Define research questions

The first thing we should do is clearly map out the research questions we want our test to answer. It’s like preparing for a user interview. Clearly defining research questions will help us plan opening questions and usability tasks and choose the right audience to recruit.

Let’s imagine we’re building an AI-driven calorie counter that tells you how many calories and macros a meal contains based on a picture. Some of our research questions could be:

  • Will users understand how to take pictures and edit the results manually if needed?
  • Are they going to trust the results?
  • Is there any related feature that might be valuable for users so we can build on top of the current product?
  • How strong is the need to track calories easily?

Plan opening questions

Starting a moderated usability session with a few exploratory questions is good practice. These will give you early insights and help you interpret the user’s actions later in the test.

Refer to your research questions and plan opening questions as you would for any other user interview. If you need some inspiration, refer to our user interview questions library.

Some decent opening questions for our AI-calorie tracker case could include:

  1. What do you think about AI in general? — to gauge the trust in AI solutions
  2. What’s most important to you when tracking your calories and nutrition? — to discover the potential for new features
  3. Imagine there was an app that could perfectly calculate calories based on photos; what would be a fair price to pay for it? — to understand if they’d really value the product

Prepare tasks

We also need to define actual tasks that users will perform during the usability test.

You should have at least one task per category, that is:

  • Discoverability tasks — validating if users will be able to find a specific option
  • Usability tasks — validating if users will complete the user journey easily
  • Value-oriented tasks — validating if users would get value from the solution

For our calorie tracker, an example of each could be:

  • Discoverability — launch the app and try to take a photo of a meal (to see if they can easily find the option)
  • Usability — complete your profile with your nutrition needs (to see if the process is going to be easy and understandable for users)
  • Value — take a picture of the plate on your screen (to show users the value proposition and ask follow-up questions afterward)

Aim for up to three tasks for each category. Otherwise, you’ll make the test excessively long.

Design the prototype

The most common mistake when preparing a prototype for a usability test is covering only the journeys you plan to test.

Keep in mind that users will rarely follow the exact journey you envision. They might click different options and make mistakes along the way. You want to see what users do on those alternative paths and whether they’re able to recover. If your prototype reacts only to happy-path actions, you’ll miss that learning opportunity.

Set up the test

Lastly, you need to set up the actual test. Although you could simply share a prototype link, I strongly recommend using a professional user testing tool.

A good tool will not only help you handle scheduling, recording, and transcribing but will also automatically measure metrics such as completion rates for your tasks, deviations from the happy path, and time needed for each task. It’ll allow you to focus on the user rather than tracking everything yourself.

My go-to tools include UserZoom, UXCam, and UserTesting.

Recruit participants

Make sure the testers you recruit resemble the user persona you’re targeting. The most efficient way to ensure this is to use a screener.

There are two ways to recruit participants: you can do it yourself or pay a specialized agency. Do it yourself if you have a robust user base and want to recruit your own users.

Hire someone to do it if you don’t have many users and want to recruit people who haven’t used your product yet.

Step 2: Conduct the test

Opening interview

The opening interview has two objectives — one, to learn more about the user and discover new insights, and two, to build the context and put the participant in a proper state of mind. This way, the user will behave more naturally during the test.

Five to ten minutes is usually enough.

Some universal opening questions include:

  • What product do you use to [solve the problem] today?
  • Why do you use [product]?
  • What do you like about [product]?
  • What do you wish it did today?
  • How would that help you [solve the problem]?

Performing tasks

The most important thing during the test phase is to interact frequently with the participant. You want to know what’s going on in their head and how they perceive the actions they perform.

Ask participants to share their screen (or view it in your usability testing tool, if you’re using one) and go through the tasks one after another.

Ask them to think out loud while they perform each task. When you notice a pause, remind them to share their thinking process. Those moments of silence are often when the most important thoughts pass through the participant’s mind.

After each task, ask them to reflect on it and share thoughts about what happened.

Ask follow-up questions frequently:

  • Why did you press that button?
  • What do you expect to happen if you do X?
  • Would you use that particular feature? Why? Why not?

Closing interview

Reserve some time at the end of the test to ask a few questions about the experience as a whole. Zooming out helps uncover insights that are hard to get when asking about specific tasks.

My go-to closing questions are:

  • How was the experience overall?
  • What did you think of the interface?
  • What did you find most useful about this [solution]? What was most difficult?
  • How does this compare to [alternative mentioned at the start of the interview]?
  • What would make you use [solution] over [alternative]?

Step 3: Summarize the test

After the test concludes, don’t jump straight into another session. Properly summarizing moderated tests helps you retain and share insights more easily and allows you to improve subsequent tests if needed.

Analyze task performance

Your usability testing tool should summarize key metrics for you, like the completion rate, the time taken to finish each task, and deviations from the happy path.

Review these and look for worrying signals (like a task taking much longer than you expected).

Remember that these are only signals; you’d need to follow up with unmoderated usability tests to further validate whether a given task is problematic. That said, even a small sample of moderated tests is sometimes enough to spot critical problems in the prototype.
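If your tool lets you export raw per-task session data, you can also compute these metrics yourself. Here’s a minimal sketch in Python; the data and field names are hypothetical, invented for illustration rather than tied to any specific tool’s export format:

```python
from statistics import median

# Hypothetical export: one record per participant per task
# (field names are illustrative, not from any real testing tool)
sessions = [
    {"task": "take_photo", "completed": True, "seconds": 42},
    {"task": "take_photo", "completed": True, "seconds": 95},
    {"task": "take_photo", "completed": False, "seconds": 180},
    {"task": "edit_macros", "completed": True, "seconds": 60},
    {"task": "edit_macros", "completed": True, "seconds": 75},
]

def summarize(records, task):
    """Completion rate and median time-on-task for a single task."""
    rows = [r for r in records if r["task"] == task]
    return {
        "task": task,
        "completion_rate": sum(r["completed"] for r in rows) / len(rows),
        "median_seconds": median(r["seconds"] for r in rows),
    }

print(summarize(sessions, "take_photo"))
```

In this made-up sample, two of three participants completed the photo task, and the one who failed also took far longer than the others. That’s exactly the kind of worrying signal worth probing with a follow-up question in the next session.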

Summarize the session

Write a summary of the observations and learnings from the usability test. I usually go for quick snapshots:

The snapshot includes:

  • User name and photo — to quickly recall the actual interview
  • Quick facts — to frame the context
  • Key insights — the main lessons learned
  • Opportunities — next steps based on the interview
  • Memorable quote — the one thing to remember from the interview
  • Admin data — date, interviewer, links, etc.

Iterate on the test plan

You should revisit your test after every two to three runs. Look at the test results and ask yourself:

  • Are there any confusing tasks/questions that need to be reframed?
  • Should we add new tasks or questions to dive deeper into a specific subject?
  • Is it time to iterate on the prototype based on past learnings?
  • Would it be beneficial to recruit a different type of audience to learn more?

You might worry that iterating every two or three sessions is too extreme and will make it harder to analyze and compare overall results. I wouldn’t worry about that, though.

Moderated usability tests are qualitative in nature, and the main goal here is to explore as many diverse insights as possible.

You can always follow up with unmoderated tests to build confidence in specific areas.

When should we use moderated usability tests?

Moderated usability tests are powerful tools that can bring countless insights and learnings.

They’re also costly. You need a skilled researcher to conduct them, each test must be moderated manually, and wrapping them up properly takes time. Moderated tests are also usually longer and more stressful for participants, so you’ll need to offer a higher incentive to recruit them.

Given all that, when is the best time to use moderated tests?

They’re best suited for early validation. If you’re at a stage where you already have a developed idea for a feature that can be prototyped, jump into moderated testing right away.

The closer you are to the actual development, the less sense it makes to run moderated tests. Why invest time and energy if it’s too late to drastically pivot anyway?

Also, you can probably already see that I’m a big fan of combining user interviews with user tests. If you have already recruited a person and scheduled a time in your calendar to moderate a session, why not kill two birds with one stone?

I use moderated usability tests as a tool that delivers significant new insights about users and their attitudes toward the tested solution.

If you need the insights, go for moderated testing.

If you’re at a stage where you simply need solution validation, other research methods, including unmoderated tests, will be a better and often cheaper option.

Source: blog.logrocket.com
