Back to Basics 6: Putting It All Together
Welcome back for another installment of our “Back to Basics” blog series, focused on the ins and outs of conducting impactful user research. So far, we have discussed a range of topics that can help you get the most out of your user testing efforts: from understanding what user testing is and is not, to selecting the right user testing method, recruiting the right users, considering where and how to execute the research, and fitting the research into the development cycle.
In this week’s article, we will share high-level considerations that ensure that all of these pieces fit together and that you walk away with impactful results for your product or interface. We will discuss:
- Key considerations in designing the research, including the importance of clear research objectives and getting stakeholder input and buy-in
- Logistics of executing the research sessions, including the components of a test plan and participant packet, as well as strategies for accurately capturing results
- Analysis and reporting of results, with a focus on building out a research story and communicating findings in ways that will make participant feedback more salient and give clear direction to designers or development teams
Designing the research
Here is an all too common scenario in the world of user testing: Having understood the importance of gathering user feedback in the creation of successful products, you or your client is committed to performing user testing. The project manager approaches you and says: “We’d like to do some user testing on our new whirligig”. Being a dedicated research professional, you can’t help but let out a long sigh at the blank looks you get when you ask, “Great, what would you like to know?”
All too often, we’ve seen (well-intentioned) research efforts lead to ineffectual results and disappointed stakeholders, largely due to the use of unsuitable or suboptimal methods. The problem…? A lack of clearly defined research questions and objectives.
In contrast, clearly formulated research objectives allow you to think through possible outcomes or results, thus generating testable hypotheses. Once these are in place, it becomes more straightforward to generate a meaningful test plan and collect data that can actually address or settle the questions at hand.
Once you have established the key research objectives and questions, the next step is to formulate a test plan. When it comes to user testing, key components of that plan include:
- Specific areas of the interface which will be evaluated
- Tasks or scenarios, which will be presented to participants, that can drive interactions with those areas of interest
- Identification of specific areas for behavioral observation as well as follow-up discussion
- Metrics, both objective and subjective, that will be assessed during testing
At its core, the goal of the test plan is to tie specific components of the user testing back to the overarching research objectives. In doing so, the test plan also serves as a communication mechanism with stakeholders, ensuring that everyone is on the same page and that the research has full buy-in. In the case of user testing with a prototype interface, the test plan also lets you confirm that the development team has built out the appropriate areas of the prototype (that all the “ducks are in a row”, so to speak).
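To make this more concrete, here is a minimal sketch of what a test plan might look like when captured as structured data, with each task scenario tied back to a research objective. The objective, scenario, interface areas, and metrics shown here are purely hypothetical placeholders, not a prescribed template.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    """One task scenario in the test plan, tied back to a research objective."""
    objective: str               # the research question this task addresses
    scenario: str                # what the participant is asked to accomplish
    interface_areas: List[str]   # areas of the interface this task exercises
    observations: List[str]      # behaviors to watch for and discuss afterwards
    metrics: List[str]           # objective and subjective measures collected

# Hypothetical example entry -- all names and areas are placeholders only.
plan = [
    Task(
        objective="Can first-time users locate their order status?",
        scenario="You placed an order last week and want to know when it will arrive.",
        interface_areas=["account dashboard", "order history"],
        observations=["navigation path taken", "hesitation or backtracking"],
        metrics=["task success", "time on task", "post-task ease-of-use rating (1-7)"],
    ),
]

for task in plan:
    print(f"{task.objective} -> {task.scenario}")
```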
When creating your task scenarios, there is a balance to be found: you want to structure the tasks so that participants interact with the areas of the interface that matter to your research objectives, without leading them so much that the interactions become artificial. Focusing on a description of what the user is trying to achieve, while avoiding overly specific terms, can help you find this balance. That is, you will typically want to avoid using the specific labels that appear in the interface, focusing instead on more generic descriptions of task goals (for example, asking a participant to “find out when your package will arrive” rather than to “click the Order Status link”).
Executing the research
Once you and your stakeholders have agreed on a test plan, the next thing to consider is what you will actually take into the test sessions (whether they are conducted in a usability lab, another ad hoc facility, or remotely). Assuming the test plan was constructed with due diligence, this step is fairly straightforward.
Obviously, you need to ensure that any necessary equipment or devices are present for interacting with the product or interface (e.g., a smartphone for testing a new mobile app or a suitable PC for interacting with a new website).
You will also want to ensure that you (or your session moderator) have a clear understanding of various elements of the interface, how those elements play a role in the task scenarios, and what a “successful” or “problematic” interaction outcome entails. This is another place where having a clear and detailed test plan helps out.
Next, you will need two packets: one for the moderator and one for the participant. For the moderator, you can simply turn the test plan into a “Moderator packet” that will be used to guide observations and discussions with the participant. This packet can also serve as a useful place to take detailed notes that can later be used during reporting (e.g., insightful quotes about a particular feature). Of course, you don’t have to use pen and paper. We have found spreadsheets to be an excellent tool for organizing searchable notes taken during the sessions.
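As one illustration of the spreadsheet approach, a note-taker could append timestamped observations to a simple CSV file that is easy to sort, filter, and search afterwards. The column layout and helper function below are just one possible arrangement, not a required format.

```python
import csv
from datetime import datetime

# One possible column layout for searchable session notes (illustrative only).
FIELDNAMES = ["timestamp", "participant", "task", "observation", "quote"]

def log_note(path, participant, task, observation, quote=""):
    """Append a single timestamped observation to the session notes file."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
        if f.tell() == 0:  # write the header row only when the file is new
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now().isoformat(timespec="seconds"),
            "participant": participant,
            "task": task,
            "observation": observation,
            "quote": quote,
        })

# Example: capture an insightful quote about a particular feature.
log_note("session_notes.csv", "P03", "Find order status",
         "Scrolled past the status link twice before noticing it",
         quote="I expected this to be at the top of the page.")
```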
At this point, you will want to grab a colleague to serve as a stand-in participant and go through a “dry run” of the session. This will help you identify any oversights in the Moderator Packet, get feedback from an outside source on the clarity of the material, and adjust any timing issues before the real thing. An even more precise way of getting feedback on testing material would be to run a pilot session with an actual (non-researcher) participant. Although this is more costly (you still have to pay your participant), it will provide more unbiased feedback from a third-party source.
For the participant, you can translate your test plan into a “Participant packet” that participants will use to provide their answers. Generally, this will involve stripping out any moderator notes, presenting just your task scenarios and relevant subjective metrics (such as task-based ease-of-use ratings) or overall evaluative metrics (such as the System Usability Scale).
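If you do include the System Usability Scale, its standard scoring rule is worth keeping handy: each odd-numbered item contributes (response − 1), each even-numbered item contributes (5 − response), and the sum is multiplied by 2.5 to yield a score from 0 to 100. A quick sketch of that calculation:

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 responses.

    Odd-numbered items (1, 3, 5, 7, 9) are positively worded and contribute
    (response - 1); even-numbered items are negatively worded and contribute
    (5 - response). The summed contributions are scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Example: a fairly positive participant.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # -> 80.0
```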
This brings us to the topic of capturing or recording the user test sessions. Regardless of how good a note-taker you may be, there will always be details or observations that slip through the cracks. Solutions such as Morae, which record on-screen interactions, can be invaluable for later review and analysis. Having a dedicated user testing facility can also make it much easier to record interactions. Beyond simply what happened, video recordings can also capture salient participant reactions, which can be used in the form of audio or video clips when communicating findings and results to stakeholders.
Communicating research results
Having clearly identified your research objectives, developed a test plan and executed your research sessions, it is now time to analyze and report your findings. This may be a good time to go back and read our thoughts on the importance of defining research objectives up front. That’s because reporting research findings is much more straightforward if the research sessions were driven by clearly articulated research questions.
In such a case, those research objectives, agreed upon by stakeholders, allow you to build a story around the outcomes of the user testing. A focus on the research story provides:
- The Lens: it allows you to focus on Big Q research questions, think through the possible outcomes of those questions, and honestly assess the implications of those potential findings
- The Language: it allows you to communicate with a wide range of stakeholders in terms that are understandable, and speak directly to key design challenges and decisions
- The Filter: it helps you keep extraneous noise out of the research results, focusing the analysis and communication of results on the key questions at hand
By focusing on the research story, the communication of results surfaces the most important findings from user testing. Typically, these findings are communicated in terms of issues and recommendations. Using screenshots or images to support the issue descriptions helps ensure that results aren’t misinterpreted. Providing direct, concrete recommendations gives development teams a path forward and can serve as the basis for iterative or follow-up user tests. Finally, going back to why you might want to record participant interactions with the product during testing, video or audio clips can help bring results to life, reinforce the need to implement improvements, lend credibility to the results in the eyes of stakeholders, and build empathy for real-life users.
Final thoughts
Whether you are a junior UX researcher trying to discover new ways to add value across your organization, or an executive trying to understand the ins and outs of integrating user testing into your overall development cycle, we hope you are finding this Back to Basics series useful. Our hope in creating these blog articles was to help UX practitioners and advocates recalibrate their user testing process so that, at the end of the day, they walk away with data-driven, actionable insights that increase product usability, satisfaction and, ultimately, adoption.
For the next part of our Back to Basics series, we will be taking a closer look at various methods and tools used in UX research. We will have some in-depth discussions about how to understand who your users are, what your users do, what problems they encounter, and how they react to your product design compared to alternative designs. Until then, let us know how your UX research is going and how you’ve been able to use this Back to Basics series!
At Human Interfaces, an expert team of UX professionals works with clients from a variety of industries to develop custom research solutions for any UX challenge. If you need help from a full-service UX research consultancy for a study, recruitment, or facility rental, visit our website, send us an email, or connect with us through LinkedIn.