Unchecked bias is one of the most significant threats to the user experience (UX) field today. In his article "The Fox Guarding the Usability Lab," Bill Albert (2015) claimed that bias stemming from a "conflict of interest," in which designers test their own designs, gets in the way of best practices and could irreparably stagnate the growth of the field over time. The issue of bias in modern user research and user-centered design could determine whether the field blossoms, as we know it should, or loses credibility among stakeholders. While we wholeheartedly agree that bias is a serious challenge, solving the problem by ensuring that designers never test their own designs is a worthy goal but often an unrealistic one. Situations arise where teams are small, budgets are tight, and UX professionals have to be generalists who test their own designs. In those cases, teams must be thoughtful and take care to mitigate the bias that testing one's own design can introduce. Ideally, the solution to bias is creating a culture of self-reflection and mutual accountability.
Successful UX research and design is about gaining an accurate understanding of people's experiences and applying that knowledge to create more usable products. Unchecked bias erodes our ability to accurately understand and interpret those experiences, and thus prevents us from designing the best possible experience improvements. Moreover, unchecked bias can produce outcomes that exclude entire sets of users. As a result, it is our duty as UX practitioners to check those biases and take the necessary steps to mitigate them.
While many people have suggested methods of mitigating bias within specific contexts, we contend that those solutions may solve short-term problems but miss the larger context that creates the bias UX practitioners experience in their work. Paul Sherman (2009), in his article "Testing Your Own Designs: Bad Idea?", maintained that, in reality, small teams often have to test their own designs, but that bias can be mitigated with proper planning and methodology. Through the description of a case study, we intend to demonstrate that the only way to be thorough about bias recognition and mitigation is to establish a culture of self-reflection and mutual accountability. This essay shows how, with proper planning and preparation, we mitigated bias through our teamwork and our implementation of a culture of self-reflection.
The Case Study
In the spring of 2017, the Municipal Securities Rulemaking Board (MSRB) contracted the Bentley University User Experience Center (UXC) to conduct a usability testing and redesign project for the MSRB’s website (full disclosure: all authors of this article work at the UXC).
The MSRB team consisted of 8–10 people, each with expertise in the content of the site as well as at least one other area. For example, one of the team members was a programmer in addition to his years of experience in municipal securities. The team’s central goal for this project was to compile user-driven design improvement ideas to kick off a 1- to 3-year website redesign project. Unfortunately, none of their team members had expertise in user research or user-centered design.
Our UXC team was tasked with filling that void. Our two main goals were to provide direct user data that could help the MSRB team make user-centered decisions throughout their redesign and to provide clickable Axure prototypes (wireframe prototypes) for the MSRB website. Each of our four team members had both research and design experience, but only one of our team members could accurately define the term “municipal security.”
Culture at the Kickoff: Setting the Tone
It was apparent from our initial meetings with the MSRB team that they were responsive to open and realistic communication and collaboration, which dramatically reduced the likelihood that the project would fail. During the kickoff meeting, the MSRB project manager bluntly stated, "We know very little about UX and want you to shout it out if our suggestions do not align with best practices." The message was clear: If we are all self-aware and transparent throughout the project, the project has a higher likelihood of being successful. That message was received as it was intended, and we reciprocated with the unpleasant truth about our lack of municipal securities expertise. As team members introduced themselves, each established exactly what his or her role and experience were. From that meeting forward, we knew what lens each person was looking through and what biases those lenses came with. Because both teams—our UXC team and the MSRB team—were open to feedback, we were not afraid to bring concerns and biases to the forefront.
Although sometimes difficult, this attitude of transparency and openness is possible in all newly formed project teams if the tone is set at the beginning of a project. Even if there are levels within an organization that have a lack of transparency and self-reflection, each person on a team has the opportunity to influence the tone of a first meeting by being clear about his or her background, skills, weaknesses, and biases. Making the decision to be transparent about who you are and what you can contribute is a powerful decision that is rarely regrettable and can influence the people around you.
Iterative Project Plan: Planning for Bias
Because we were a small group tasked with providing an array of user-centered design decisions for our client, we designed a plan to keep any biases we had in check. We wanted to give our clients the benefit of our expertise which included our abilities to realistically and objectively look at each other’s work. To accomplish our goals, we developed the following iterative project plan:
- Test the original MSRB website with actual users of the website. The tests were 1:1 and run in our labs at the UXC.
- Analyze that data.
- Create a first iteration prototype.
- Test the first iteration prototype.
- Analyze the new testing data.
- Apply this data to designing a second iteration of the prototype.
- Present and deliver the final prototype to the client.
If the project team is self-aware, a certain amount of bias mitigation can be built directly into the project plan. Our UXC team consisted of the following members (all of whom are authors of this paper): Elizabeth, Aaron, Nick, and TJ. First, we made our decisions about which research methods we would use and how frequently we would test before we began designing our prototypes, so that our biases about our designs would not cloud the frequency with which we chose to test.
We recognized that testing our own designs could present a bias problem. To deal with this problem, we split the team into two groups. Aaron designed half of the wireframes; TJ designed the other half. Nick reviewed both Aaron's and TJ's designs to ensure a common look and feel. Then Aaron tested TJ's wireframe designs, and TJ tested Aaron's. This process ensured that neither of us tested his own design, which, along with our team's culture of self-reflection, mitigated much of the bias.
One last measure to mitigate bias was to keep Elizabeth removed from the first round of design decisions so that she could more objectively analyze the data we received from the user tests of the wireframe designs and make objective recommendations. Nick, TJ, and Aaron recognized that after a two-week design sprint there would likely be bias coming from more than one point of view, so Elizabeth lent her fresh pair of eyes to tell us when we were testing or analyzing with bias. That perspective can be extraordinarily helpful after a design sprint, and it worked well for our team especially because we had worked to build a culture of self-reflection and mutual accountability.
Generalizing Our Experience
We recognize that not all clients are as open and self-aware as our client was in this case study, but we believe that the acceptance of a more self-reflective and mutually accountable culture can be cultivated and generalized to other situations. As many know, each situation presents different challenges, but setting this cultural tone with your clients at the beginning of a project will yield positive results for those involved.
With any consulting project, one of the largest challenges is the unpredictability of the client. While we were fortunate to have a transparent and strong working relationship with the MSRB team, we recognize that kind of relationship can be rare in UX consulting. Part of establishing this culture with a client is building expectations and communication directly into the proposal and the first interactions with the client. We established that this was going to be a challenging process, and we expected critical feedback to come from the MSRB team, our team, and all participants. We explained that being transparent with feedback would provide an environment for the most informed decisions. While that can be easier said than done, it can be the difference between success and failure in this field, especially when the UX team does not have expertise in the subject area (e.g., our team did not have deep experience with municipal securities) and/or the client does not have expertise in UX.
When the UX team is entirely in-house, we foresee two very different hurdles when trying to create this culture of self-reflection and mutual accountability: (a) maintaining the culture within a large pre-existing corporate structure and (b) minimizing tunnel vision bias within a team that works on the same product consistently over time. The first issue is a topic unto itself and is not addressed by our case study. For the second issue, the MSRB’s approach to this project presents a reasonable cultural solution to the tunnel vision: maintain a culture of team transparency and honest feedback and bring in consultants from time to time to provide a new perspective.
It is easy to let habit influence design and research decisions. If a single team works on a single project for years, their mental models for the project will begin to overlap and cloud their view of their own biases and background influences. Bringing in a fresh set of eyes can broaden the team’s perspective beyond the project to see those biases and receive feedback on them in a way they can address going forward. The team has to be open to feedback and change, but if they are open to it, consultants as well as internal teams can help increase creativity and refresh a team’s ability to be self-reflective and mutually accountable.
Next steps for this process would be to run a controlled study comparing the results of teams testing their own designs in the presence or absence of a self-reflective culture. Measures would have to be determined to evaluate the outcomes, perhaps by creating situations where one team was trained to be self-aware and the other team had no such training. An evaluation of this type would need to be run multiple times to determine whether consistent patterns emerged.
We would like to thank our partnering team at the MSRB who made this project possible.
Albert, B. (2015). The fox guarding the usability lab. Journal of Usability Studies, 10(3), 96–99.
Sherman, P. (2009). Testing your own designs: Bad idea? UXmatters. Retrieved from http://www.uxmatters.com/mt/archives/2009/09/testing-your-own-designs-bad-idea.php