
Plan

Planning ensures that everyone’s time is respected throughout the research process, and helps the team adapt its approach in response to the real world.

Writing a research plan

A research plan (sometimes also called a research protocol) describes the design of your research. Typical 18F research plans include:

  • Background
  • Goals
  • Research questions
  • Methods
  • Team participation
  • Timeline
  • Participants and recruiting
  • Ethics considerations
  • Outputs and outcomes

18F maintains a research plan template (18F/GSA access only). Your research plans do not have to follow this template; what’s important is that you create a plan at all. Research planning helps you and your team:

  • Openly commit to learning more about the problem(s) at hand
  • Agree on which information is most useful for informing future decisions
  • Learn about design research itself
  • Encourage reflective practice (for example by reviewing how well the plan matched reality)

Background

Describe factors that the research will need to account for, including any shared beliefs or forces motivating the research itself. Review and summarize any relevant secondary sources (like websites, reports, case studies, presentations), or link to prior research plans or earlier versions of the concepts you’re testing.

Goals

Design research is fundamentally about reducing risk and informing decisions. When writing your goals, use verbs that specify the output like “describe,” “evaluate,” “quantify,” or “identify.” Avoid vague words like “understand” or “explore.” Example goals could include: “describe user goals and pain points,” or “identify and evaluate the hypothesis behind our proposed design.”

Research can also have subgoals. For example, some agencies choose to work with 18F to learn more about our approach. Explicitly stating these kinds of subgoals helps provide an honest account of the coaching work that the team will undertake alongside the research itself.

Everyone on the team should agree on the research goals. Clarifying research types is a useful starting point for this conversation.

Research questions

Research questions are high-level questions that reflect what you want to learn to make better evidence-based decisions. Research questions are different from interview questions. Research questions should be relevant, actionable, and practical. They should also be ethical: consider whether answering your research questions would put participants in a compromising position. For example, studying the degree to which participants adhere to a law or policy enforced by the researcher’s own office or institution could jeopardize participants’ careers and/or pose authority and coercion issues.

  • Bad question: How do we get unemployed adults interested in our website? (This question is bad because it isn’t directly focused on users and their goals; it also assumes that a website is the right solution for unemployed adults.)
  • Good question: How do unemployed adults navigate their job search in their first six months of unemployment? (This question is good because it seeks to gain a fuller picture of unemployed adults within the context of a specific activity in a specified period of time.)

Consider holding a research alignment workshop to help stakeholders share and discuss what they’re interested in learning. Regardless of how you build alignment, focus on the value of obtaining useful information.

Methods

Choose one or more methods appropriate for meeting your goals and answering your research questions. Multiple methods can help you challenge or verify information collected and create a more complete understanding. 18F’s Methods provide an overview of our preferred research and design methods. Use these as a starting point, not as a list of constraints.

Team participation

Good research is collaborative. People who help accomplish the research are more likely to agree with its outputs.

When planning your research, review with your partners the typical activities involved in 18F’s research, and determine which members of your partner agency’s team will help at each stage of the research process (that is, plan; do; analyze, synthesize, and share). Including partners in this process helps meet our team’s principles of designing together and training advocates.

Consider whose perspective might be missing from the planning process. Identify opportunities to include people who have direct experience using the product or service. Involve people who interact directly and regularly with end users. Ideally we are designing and building with the people who will be impacted by the outcomes of our research, not just for them.

Run a frames of reference bias identification workshop (18F/GSA access only) so the team can avoid influencing the evidence they gather based on the things they presume to be true. The team should also collectively review this guide’s bias and ethics pages to ensure these are accounted for in the research.

Timeline

Your timeline should provide a useful estimate of how your research process will unfold. Remind everyone that the timeline is just an estimate, and that the actual timeline will depend on a few things outside of your control, like your partners’ ability to participate, your participants’ availability (if applicable), etc.

Plan more time than you think you need, and consider especially:

  • If your research is meant to inform a decision, note when the team anticipates that it will make that decision (for example, is your research due before the next quarterly planning meeting?)
  • How you plan to involve the team in any level-setting exercises, such as hopes and fears [18F methods], provisional personas [18F blog], etc.
  • How you plan to handle any participant-related logistics (such as inviting participation, getting informed consent, and scheduling)
  • If your research involves workshops and/or fieldwork:
    • Who needs to be where and when?
    • What do they need to do?
    • When must they be done?
    • Where do they go from there?
  • How you plan to involve the team in analysis, synthesis, and sharing (a safe estimate for research analysis is about twice as long as the research itself)

Here’s a sample timeline, with the estimated time to complete each activity, for a contextual inquiry (on site) followed by eight one-on-one interviews (remote) with stakeholders:

  • Initial meeting: 1 day
  • Research design (research planning): 1 day
  • Contextual inquiry: 1 day
  • Session design: 0.5 day
  • Recruiting and scheduling: 1 week
  • In-depth interviews (remote): 1 week
  • Initial analysis: 4 days
  • Collaborative analysis: 2 days
  • Communicating the results: 2 days
  • Sharing: 1 day
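
If it helps to gut-check a plan, here’s a minimal sketch in Python that totals the sample estimates above and compares analysis time to session time against the rule of thumb that analysis takes about twice as long as the research itself. The activity names and durations come from the sample timeline; the five-working-days-per-week conversion and the helper names are assumptions made for illustration only.

```python
# A minimal sketch (not an 18F tool): total the sample estimates above and
# compare analysis time to session time against the rule of thumb that
# analysis takes about twice as long as the research itself.
# Assumption: "1 week" is treated as 5 working days.

WORK_DAYS_PER_WEEK = 5

# (activity, estimated working days), taken from the sample timeline above
timeline = [
    ("Initial meeting", 1),
    ("Research design (research planning)", 1),
    ("Contextual inquiry", 1),
    ("Session design", 0.5),
    ("Recruiting and scheduling", WORK_DAYS_PER_WEEK),     # "1 week"
    ("In-depth interviews (remote)", WORK_DAYS_PER_WEEK),  # "1 week"
    ("Initial analysis", 4),
    ("Collaborative analysis", 2),
    ("Communicating the results", 2),
    ("Sharing", 1),
]

estimates = dict(timeline)
total = sum(estimates.values())
sessions = estimates["Contextual inquiry"] + estimates["In-depth interviews (remote)"]
analysis = estimates["Initial analysis"] + estimates["Collaborative analysis"]

print(f"Total estimate: {total} working days (about {total / WORK_DAYS_PER_WEEK:.1f} weeks)")
print(f"Time with participants: {sessions} days; analysis: {analysis} days")
print(f"Analysis-to-research ratio: {analysis / sessions:.1f}x (plan for roughly 2x)")
```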

Participants and recruiting

Most of 18F’s design research depends on you directly interacting with people. Who those people are matters. Participants are the people you’ll recruit to take part in your research. For planning purposes, recruiting involves identifying participant groups and defining your recruitment criteria relative to your research question. At 18F we often design for the diverse U.S. public. It’s our responsibility to include and learn from people with a range of perspectives and a diversity of needs. We must ensure our products and services are accessible to everyone, regardless of their abilities. This means we need to consider the barriers various groups might face and include people from those groups so we can ensure access.

Identifying participant groups

Because of the time-limited nature of 18F engagements, participant groups can depend on the type of research you’re doing and where you are in the overall design process. For example, if you’re doing stakeholder interviews as part of a Path Analysis project, you’re likely to learn more about who you need to talk to with each interview you do. We recommend asking “Who else should we speak with?” in these discovery interviews. This can help you learn of user groups whose needs should be considered. You might focus future rounds of research on learning from people within these user groups.

Once you’ve framed a problem or research hypothesis, it’s important that your participant groups include people who represent the make-up of the public who may experience the problem or need to use the related service. User profiles and personas are a good place to start, if they are based on existing data. Revise them as you learn more about the users of your service.

You may need multiple research plans to account for the variety of ways people may experience or navigate your design. The audience you’re designing for may be very broad. It’s not always possible (or preferable) to design a single experience that meets the needs of the entire population. Focus on identifying the goals, behaviors, preferences, obstacles, and past experiences that might shape people’s interactions with the experience you’re designing. If you’re conducting usability tests, consider how to prototype an experience for someone who uses assistive technology.

Consider especially:

  • People who have disabilities or use assistive technologies
  • People who have limited digital skills or low literacy
  • People who may need help using the service in question
  • People who have limited internet access

The Access Board’s Section 508 standards require that our designs are accessible to people with disabilities. The best way to make sure our products and services are accessible is to design for these users from the start. Include people with disabilities in your user research and usability testing. To learn more about inclusive design, visit Digital.gov’s Accessibility for Teams, 18F’s Accessibility Guide, or the TTS Accessibility guild #g-accessibility (18F/GSA access only).

Defining recruitment criteria

Recruitment criteria specify the people you want to participate in your research; they follow from your research questions. How specific you are in defining your target audiences can differ at different stages of a project. When you’re just getting started with foundational research, your understanding of who you need to recruit might be pretty high-level, but you’ll develop a more nuanced understanding of the unique perspectives to include as the project progresses.

Example criteria might include:

  • A particular demographic (for example, young people aged 16 to 24)
  • A specific audience (for example, small business owners)
  • A particular experience (for example, veterans who’ve recently moved home)
  • A problematic situation (for example, people who suffer from opioid abuse)
  • Particular ways of accessing your service (for example, people who rely on a screen reader, use speech recognition software, or who only access the internet at a library or day center)

If you’re doing usability testing, consider the following questions as well:

  • What are the specific behaviors we’re looking for from participants?
  • What level of tool knowledge do participants need?
  • What level of domain knowledge do participants need?

Review your recruitment criteria with your team. Make sure you’re planning to recruit the right people to help answer your research questions.
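
Some teams find it useful to write the criteria down in a structured way while planning. The sketch below is a hypothetical Python illustration of recording recruitment criteria and screening a candidate pool against them; every field name and example value is invented, and your actual criteria should come from your research questions.

```python
# A hypothetical illustration of recording recruitment criteria and screening
# a candidate pool against them. All field names and example data are
# invented; adapt them to your own research questions.
from dataclasses import dataclass


@dataclass
class Candidate:
    participant_id: str        # e.g. "p1"; keep PII out of shared notes
    owns_small_business: bool
    uses_screen_reader: bool
    domain_knowledge: str      # "none", "some", or "expert"


def meets_criteria(c: Candidate) -> bool:
    """Example screener: small business owners with at least some domain knowledge."""
    return c.owns_small_business and c.domain_knowledge in ("some", "expert")


pool = [
    Candidate("p1", owns_small_business=True, uses_screen_reader=False, domain_knowledge="some"),
    Candidate("p2", owns_small_business=False, uses_screen_reader=True, domain_knowledge="expert"),
    Candidate("p3", owns_small_business=True, uses_screen_reader=True, domain_knowledge="some"),
]

eligible = [c for c in pool if meets_criteria(c)]
assistive_tech_users = [c for c in eligible if c.uses_screen_reader]

print("Eligible:", [c.participant_id for c in eligible])  # ['p1', 'p3']
print(f"{len(assistive_tech_users)} of {len(eligible)} eligible participants use a screen reader")
```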

Recruitment for demographic diversity

It’s important to consider the audiences for whom you’re designing products and services and to ensure those people are represented and included as you conduct your research. There are a few strategies you can use to include research participants who are representative of the users you are designing for:

  • Start recruitment early. Develop your participant pool sooner rather than later and tap into your agency partners’ resources to locate the appropriate participants. It’s often easiest to get access to people who already use a service, but sometimes your research goals involve learning from people who aren’t currently using an existing service, and they may be difficult to find. If there’s an existing website, you might be able to add a link to provide feedback with an option to sign up to speak with us.
  • Tap into organizations and networks that serve the populations you’re trying to engage. Personal networks are a fine place to start your recruitment efforts, depending on how representative your networks are. For example, asking family, friends, and colleagues to participate in research may contribute to the likelihood of socio-economic bias impacting the findings.
  • Intercept testing in government buildings. Doing in-person research in public buildings that are visited by a wide cross-section of the population, such as libraries and post offices, is one way to reach a diverse group of participants. If you realize you’ve excluded a specific set of users who will use your product or service, intercept testing can be a great save. It can also be a way to be intentional about testing with a specific set of diverse users. For example, setting up intercept testing at a library in a low-income neighborhood might increase a team’s chances of ensuring some of the feedback on a product or service comes directly from low-income users.

Recruitment for behavioral diversity

People who look different may behave similarly when using a product or service. Create behavioral personas that group target audiences by behaviors rather than demographics.

Recruitment within underserved communities

The Executive Order On Advancing Racial Equity and Support for Underserved Communities Through the Federal Government contains a list of communities historically underserved by the Federal government and defines underserved communities as “populations sharing a particular characteristic, as well as geographic communities, that have been systematically denied a full opportunity to participate in aspects of economic, social, and civic life, as exemplified by the list in the preceding definition of ‘equity.’”

Examples of underserved communities:

  • Black, Latino, Indigenous and Native American people, Asian Americans and Pacific Islanders, and other people of color
  • Members of religious minorities
  • LGBTQ+ people
  • People with disabilities
  • People who live in rural areas
  • People otherwise adversely affected by persistent poverty or inequality

Compensating research participants

GSA can compensate members of the public for participating in user research. We cannot compensate government employees. We must do research with people who will actually use our services. See the TTS Handbook for specifics on the process we use to compensate user research participants.

Why do we offer compensation?

Compensating participants helps us reduce bias in our research. Not compensating research participants can limit our participant pool to people who have the privilege and flexibility to donate their time. We compensate participants for more than just the time they spend speaking with us. There can be additional costs like transportation, time off from work, and child care. We also compensate to show we value participants’ lived experiences and expertise. Sometimes we ask participants to imagine or recall a painful personal experience, including previous difficulties that resulted from interactions with government services.

In addition to paying participants, there are other ways to recognize the value of participants’ knowledge and experience. These can include:

  • Resources for additional information on the topic
  • Offering to find out answers to questions they may have
  • Sharing outcomes of the research, and how the research impacted the final product

Ethical considerations

Research affords your team powerful opportunities to interact with people and to explore what’s possible. While 18F’s UX team agrees on our own ethical principles for design research, these are just our own. Discuss and clarify ethical principles with your team and your partners. Note any ethical dilemmas or concerns.

Next, engage your team in a conversation about bias. Bias is always present in research, but discussing the types of bias we actively work to mitigate can help the team account for it. Power dynamics are always at play when people interact with government. As a researcher in the federal government, be aware that people’s willingness to share may change depending on their level of trust in government, as we discuss further on our blog.

Outputs and outcomes

Before you get started, discuss with your team (including your agency partners) the desired outputs and outcomes of the research.

  • Outputs are the documents, diagrams, etc. you will make to share the research with a broad audience. Will you produce a report, useful insights, validated design hypotheses, or something else?
  • Outcomes are the changes you expect to see through doing the research. Outcomes should tie back to the goals and subgoals listed earlier. How will doing the research impact the product being developed, the people involved, etc.? How will you know?

We follow a lean, iterative process, which gives the team the flexibility to redefine outputs based on what the research finds. Avoid over-specifying your outputs, because you don’t know what you’ll find until the research is underway. For example, it’s safer to say “We’ll produce a persona” (a type of artifact) than it is to commit to “We’ll provide 10 useful insights,” because it’s difficult to know how many useful insights the research will produce. That said, discussing possible outputs is useful because it can directly affect how you choose to document the research.

Involving partners in research planning

Hold a meeting to bring the team — including your agency partners — together to agree on the research plan. Tailor the agenda to your project’s history and your partner’s design maturity. For example, if your partner doesn’t yet have personas, you might create provisional personas before the planning meeting; if your partner hasn’t ever planned research before, you might draft a plan for them to respond to. Be ready to educate your partners on the methods you chose and why you chose them, provide example outputs from prior research, etc.

Create an agenda and invite anyone who has an interest in the team’s research. Depending on where you are in the design process, you might begin the meeting with level-setting exercises such as hopes and fears or a knowledge inventory (see the sample agenda below).

Next, review and confirm elements listed in the research plan. It’s especially important to confirm:

  • The timeline
  • What you hope to learn or do (outcomes)
  • What you plan to produce (outputs)
  • How the team will participate in the research

An example agenda for a research planning meeting might include:

  • Introductions: 9:00 am
  • Hopes and fears: 9:30 am
  • Knowledge inventory: 10:00 am
  • Discuss research goals: 10:30 am
  • Review (or co-create) research plan: 11:00 am
  • Discuss participants and recruiting: 12:00 pm
  • Lunch: 12:30 pm
  • Review (or co-create) session materials (such as interview guides, wireframes, or prototypes): 1:30 pm
  • Discuss desired outputs and outcomes: 2:30 pm
  • Establish roles: 3:00 pm

Documenting research

Set up a roster

A roster is a spreadsheet for collecting participants’ names, titles, and contact information, and for tracking whether they’ve been contacted, interviewed, thanked, etc. The roster should note if specific people have opted out of the research.

Create a folder to contain your roster, interview guides, session recordings and notes, etc. This folder should be accessible only to the core team, as it will likely contain personally identifiable information (PII); see Privacy. A good way to share interview notes without jeopardizing PII is to assign each participant a participant number, e.g. “p1,” and refer to those numbers in calendar invitations and notes documents. Destroy the roster at the end of the engagement.
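
A roster can be as simple as a small spreadsheet or CSV. The sketch below (Python, with invented column names and example data) shows the general shape: assign each person a participant number like “p1,” keep the name-to-number mapping only in the roster itself, and track contact status and opt-outs so they can be respected.

```python
# A hypothetical sketch of a roster kept as a CSV. Column names and data are
# invented. Keep the real file in the access-restricted folder described
# above (it contains PII) and destroy it at the end of the engagement.
import csv

fieldnames = [
    "participant_id", "name", "title", "contact",
    "contacted", "interviewed", "thanked", "opted_out",
]

roster = [
    {"participant_id": "p1", "name": "Jane Doe", "title": "Caseworker",
     "contact": "jane.doe@example.gov", "contacted": "yes",
     "interviewed": "yes", "thanked": "yes", "opted_out": "no"},
    {"participant_id": "p2", "name": "John Roe", "title": "Office manager",
     "contact": "john.roe@example.gov", "contacted": "yes",
     "interviewed": "no", "thanked": "no", "opted_out": "yes"},  # do not contact again
]

with open("roster.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(roster)

# Calendar invites and notes documents should reference only the participant
# number (e.g. "p1"), never the name or contact details stored here.
```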

Documenting the sessions

Session documentation can take many forms. We often conduct research that may cover sensitive topics or information. Consider the following as you decide how you will document your sessions:

  • What is the lightest-weight way to document your sessions and still capture the information you need to create your desired outputs, conduct shared analysis, etc.?
  • What type of documentation will your participants be most comfortable with (see Privacy)?
  • Did you ask your participants for consent for this form of documentation?

Documentation methods

  • Verbatim notes - This is the most common type of note-taking by 18F researchers. Write down everything the participant says, to the extent possible, during each session. The goal is to capture as much as possible during the precious time we have with our participants, and avoid introducing cognitive biases that come into play when we are selective about what we write.

    Taking verbatim notes also curbs the natural tendency to want to understand and analyze what is being said. If you’re having trouble writing everything down, focus on capturing what the interviewee says, since you or the interviewer can always go back and clarify what questions the interviewer asked.
  • Interaction notes - Write down all of the actions people take and the reactions they have. For example, capturing a note such as “scrolled to top of page, re-read instructions, scrolled back down to input field and typed in name” would be sufficient. If conducting usability testing, consider flagging bugs or usability issues.

    Note: If there are two notetakers available for a session, consider having one person take verbatim notes, and the other take interaction notes. In this case, it’s best to work in separate documents, as working too close to each other in the same file can be distracting.
  • Spreadsheet notes - These are most commonly used for content audits to track insights and quality of existing content.
  • Sticky notes (digital or physical) - Frequently used in workshop and collaborative settings. We have a subscription to a digital tool for remote workshops and collaboration. Physical stickies will need to be documented via photos or transposed to digital tools.
  • Photography - Highly recommended for workshops! During workshops with government stakeholders you don’t need consent forms, but you should still ask for permission if you are taking photos of participants.
  • Video recording - Many of our interviews are done via video chat. You can record sessions from within the video conferencing apps themselves, or you can use video recording software to capture other types of recordings, provided you have participant consent. One of our video chat software options includes automated transcription. Transcription quality varies depending on the speaking style of the people being recorded.
  • Voice recording - You can also make an audio recording in lieu of video, which can be helpful if you need to review portions of a session. You can record interviews using the voice memo app on your work phone.

  • Transcripts - If you would like to obtain full transcripts of your recordings, you can do so by submitting a micropurchase (under $10,000) request for the service to the TTS Office of Acquisitions team. Consult with OA about whether an Open Market Justification form is needed.

Regardless of the method you choose, keep in mind the overall reasons why we document research as you proceed:

  • Team members who can’t attend the sessions can look at the notes and get a very clear sense of what the user said and did;
  • When even attendees’ memories eventually fade, we can refer back to the notes; and
  • We create a starting point for analysis and synthesis.

Additional reading