UX Research Studies that Get Results

Erik Johnson
Aug 31, 2023

Product companies can be wary of investing in research, and product or technical teams are often skeptical of the value research can provide. One big frustration for everyone involved, including researchers, is when a research effort ends up having no meaningful impact.

It’s the type of situation that gives research a bad name: time and money are spent, data and learnings are generated, but ultimately nothing changes. Six months after the study concludes, the report is just another PDF buried in the company email system. Ugh.

Over the years, I’ve developed a research framework that helps prevent this problem, inspired by the “Understanding by Design” method for planning educational curricula. Under this method, rather than planning “forwards” (Day 1 first, then Day 2, and so on), instructors plan “backwards” by starting from what students should have mastered by the conclusion of the course.

UbD Design Process for education
  1. Identify what students should know by the end of the course
  2. Decide what will be accepted as evidence that students learned it
  3. Plan instruction and learning activities so that students can demonstrate that evidence

By keeping the ultimate goal in mind throughout the process, the aim is to create alignment between the learning activities and the ultimate assessment. This same principle should be followed in research studies.

Instead of starting with questions like “what type of study should we do?” or “how will we recruit/test plan/etc.?”, researchers should instead focus on what should be learned at the END of the study.

All other activities should flow from that ultimate purpose.

Research Design Process:
  1. Identify what we want to learn by the end of the study
  2. Decide what will be accepted as evidence of success/failure
  3. Plan study activities that will generate that evidence

Applying the UbD Design Process

1. Identify what we want to learn

To establish the ultimate goals for the study, it’s important to involve the whole team in a collaborative planning session. One useful activity is for the team to write out every assumption they have about the project. This should be a free-writing, “everyone participates” exercise using Post-Its (or, remotely, Trello), not a situation where one person dictates or dominates.

Once the list of assumptions is generated, read through it, remove duplicates, then have participants sort the assumptions into columns: “Definitely True”, “Likely to be True”, “Unsure if True”, and “False”. This creates space to talk about different perspectives and to refine assumptions that are unclear to some members of the team.

Finally, participants identify which assumptions are high or low risk, either with upvotes and labels or by physically sorting the cards along another axis.

A sorted list of assumptions on Trello

Using an activity like this to frame the discussion allows the team to uncover knowledge gaps and possible blind spots in a productive way. Having the assumptions written on cards and allowing people to freely move them around shows areas of conflict or disagreement without making people feel defensive.

It’s important to involve a diverse group in the collaborative planning session to make sure expectations for the study are clear, to avoid groupthink, and also to get buy-in from a broader audience early.

Often, the move from “assumptions” to “what we want to learn” is easy: the riskiest, least-certain assumptions are natural areas of focus for the study. Asking participants to individually write down responses to a prompt such as “If we only learn one thing from this study, I want to learn about [X]”, either pre-workshop or during it, is another good way to boil a long list of assumptions down to a few key learning goals.

Assumption: “We assume our users do not have a lot of technical knowledge about chemical hazards.”
Learning goal: “We want to learn whether the way we are presenting chemical hazards is easy to understand.”

2. Decide what counts as evidence

Now that assumptions are uncovered and learning goals are defined, it’s time to get tactical. What evidence can we collect that will help us reach our goals?

For each learning goal, define outcomes. Outcomes are what we expect to see people do during the actual study. I often frame these as “success” and “failure” (with “partial success” included if appropriate) and make sure they are measurable.

EX:
Learning Goal: “We want to learn whether the way we are presenting chemical hazards is easy to understand.”
Success:
“Participants will be able to identify the overall hazard level from our summary view and list the 3 most serious hazards from the hazard detail view.”
Partial Success: “Participants will be able to identify either overall hazard level or the 3 most serious hazards, but not both.”
Failure: “Participants will not be able to do either task.”
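To make “measurable” concrete, pre-agreed outcome labels like these can be expressed as a simple scoring rule applied to each participant’s session. This is only an illustrative sketch for the example learning goal above (the function and parameter names are hypothetical, not part of any study artifact):

```python
# Hypothetical scoring rule for the chemical-hazard example above.
# For each participant, the moderator records whether they identified
# the overall hazard level and whether they listed the 3 most serious
# hazards; the rule maps those observations to the agreed labels.

def score_outcome(identified_overall: bool, listed_top_3: bool) -> str:
    """Map observed behavior to the pre-agreed outcome labels."""
    if identified_overall and listed_top_3:
        return "success"
    if identified_overall or listed_top_3:
        return "partial success"
    return "failure"

# Example: a participant who found the overall hazard level but
# could not list the 3 most serious hazards.
print(score_outcome(True, False))  # partial success
```

The point is not the code itself but that the rule is unambiguous: anyone on the team can apply it to a session recording and get the same label.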

Again, this is critical to do ahead of the study and with input from multiple stakeholders. Otherwise, the results can be perceived as biased, or worse, as irrelevant.

For most learning goals, there will be MANY ways to measure “success,” and it’s a huge problem if your research team uses a measure that your product team thinks is invalid.

Have those conversations beforehand!

3. Plan study activities that will generate evidence

Now it’s time for the actual study design. Notice that this is the final step of the process: don’t commit to a focus group, 1:1 usability tests, or a qualitative survey until you’ve uncovered assumptions, defined learning goals, and decided what outcomes constitute success or failure. You won’t know which method is most appropriate until you’ve defined those things.

If the main learning goals are around usability, and the team views success as “time on task” and “ability to complete tasks without assistance”, then using a focus group will be a complete failure. On the other hand, if your riskiest assumptions are around brand presence, small-scale user testing is going to be hugely inefficient.

In either case, if the evidence your team wants to see doesn’t match with your study design, your results are not going to have the right impact.


Plan backwards, get results

Start with the end of the study — what you hope to learn and the impact you hope to have. Move backward to define what evidence your audience will find convincing. And finally, choose the methods and study design to get you there. At every stage, incorporate perspectives from a broad and diverse team using collaborative planning activities that reduce bias and keep everyone focused on the end goal.

Doing your planning “backwards”, starting with what you hope to learn and what evidence will convince your team you’ve learned it, will lead you to choose more appropriate methods, create better-defined test plans, and ultimately give your research impact throughout the organization.

