Bias
All research is subject to bias, whether in our choice of who participates, which pieces of information we collect, or how we interpret what we’ve collected. Proactively engaging with bias helps us improve the credibility of our research. The following list of biases is our starting point.
Research design bias
What it is: When the team doesn’t acknowledge bias or designs their research to advance their existing beliefs. For example, if an agency executive believes that they already understand user needs, that executive might discourage the team from speaking directly to users—why learn what the executive already knows?
How to counter:
- Be mindful of the kinds of bias that can affect design research
- Surface the team’s assumptions
- Clearly identify the team’s research objectives (which should center on people’s needs, in accordance with the U.S. Digital Service’s Digital Services Playbook)
- Note the research’s inherent limitations
Sampling bias
What it is: When some members of the target population are less likely to be included in the study. For example, if a team leans too heavily on digital-first participant recruiting processes, it risks excluding members of the public who don’t interact with government online. We shared our experiences building a prototype of an online recruiting tool on our blog.
How to counter:
- Discuss research participant recruitment strategies
- Clarify the target population, which can involve clarifying the difference between stakeholders (usually public servants) and users (usually the public)
- Look for ways to encourage diversity or representativeness in the sample by asking, “Who haven’t we talked to yet?” For example, people who access this service via screen reader or people who have limited access to technology are groups to consider
- Document the plausible shortcomings of your participant recruitment strategy
- Be cautious about the conclusions drawn from any one study
Interviewer bias
What it is: When the interviewer’s own beliefs or assumptions influence how they lead a session. This can be especially apparent at the start of an interview — for example, if the interviewer expresses excitement about a particular aspect of the product or service (“Our team is really proud of the new search feature!”).
How to counter:
- Be mindful of your ability to prime participants
- Refrain from asking closed-ended questions
- Practice interviewing beforehand
- Periodically echo what you’ve heard during the interview back to interviewees (“Just to be sure I heard you correctly, you said…”)
- Conduct post-interview debriefs as described in the 18F methods
Social desirability bias
What it is: The tendency for people to respond in ways that paint themselves in the best possible light, regardless of whether that reflects reality.
How to counter:
- Build rapport with participants
- Emphasize the goals of the research, and how honest feedback is the best way to meet those goals
- Distance yourself from any proposed design solutions (“These are just a few ideas the team came up with, but you’re the expert here”; “I didn’t design this, so if you don’t like it, you won’t hurt my feelings”)
- Consider changing the research mode. For example, some research has shown that social desirability bias may be less likely when interviews are conducted over email compared to face-to-face
Confirmation bias
What it is: When you (or your team) interpret research in a way that conforms with your own beliefs or values.
How to counter:
- Bring attention to the team’s shared values and beliefs, for example by conducting a hopes and fears exercise [18F methods] before the research begins
- Emphasize diversity when recruiting research participants
- Invite the team to observe research in action, and hold post-interview debriefs [18F methods]
- Consider different perspectives that challenge your beliefs
- Use a variety of research methods when collecting data to triangulate your findings
- Collaboratively analyze data before synthesizing it, and note any plausible alternative interpretations. Involve your partners and stakeholders in the synthesis process. Stakeholders can include people who aren’t directly involved in the project but who may use your service or be affected by a change your work introduces to the agency, as well as community leaders or others who are representative of your user base. See our blog for more on how to get partners on board with research findings
The observer effect (the Hawthorne effect)
What it is: When the people who participate in research modify their behavior simply because they’re being observed. An example is when an office becomes unusually quiet while an interviewer conducts on-site interviews.
How to counter:
- Build rapport
- Ask for introductions from key stakeholders, and be mindful of your ability to prime participants while obtaining their informed consent
- Blend in by noticing and following participants’ social and cultural norms
- Pay attention to what people say they do, as well as what they actually do (you might ask participants to teach you their process)
- Use mixed methods as well as unmoderated research modes, like monitoring forum posts or web analytics (while being mindful of privacy)
- Be careful to avoid over-interpreting what you see or hear
Avoiding bias
In general, you can avoid bias and arrive at better solutions by intentionally including people throughout the design process. Help your team see research as a team activity, and understand why it’s better to talk to a few users throughout the design process than none at all (as Erika Hall says, “The most expensive [usability testing] of all is the kind your customers do for you after launch by way of customer service.”).
Engaging with bias is a starting point for improving the team’s research practice; everyone benefits when we share a commitment to asking better questions.