Research Resources

Design and iterate implementation strategy

Authors: Stephanie Lin
Contributors: Chloe Lesieur
Summary

Implementing partners and researchers should work closely together during the study design phase of a randomized evaluation to create a feasible implementation strategy. This resource is intended to provide a framework for researchers making study design decisions with their partners. The general approach covered here is outlined below:

  1. Identify stakeholders in the community
  2. Secure buy-in and gather feedback on study design decisions
  3. Iterate study design decisions with stakeholders

Identify stakeholders in the community

Before making study design decisions, identify which stakeholders should be aware of or involved in the decision-making process. Since stakeholders’ roles will vary by intervention and by study, partners are well placed to advise on who should be involved and what kind of input each stakeholder can provide.

Questions to ask a partner at the early stages to identify stakeholders1 might cover topics such as organizational background, program history, and the preparation necessary for an evaluation (Brown et al. 2015). Examples of questions might include:

  • Who is implementing and supporting the intervention? This might include program administrators and frontline staff, data providers and managers, enrollment specialists, and, in the case of technology-based interventions, the engineers in charge of the platform or program.
  • Are certain groups especially interested in bringing about change in the program? Have others raised significant concerns about prior efforts to make changes? These people may shape how the study is approached and how results are used within the organization.
  • Who has the ability to impact the study if their concerns about the evaluation are not addressed? This might include funders, leadership within the implementing organization, and partner organizations that refer participants to the intervention.
  • Who is participating in the study?
  • Who is the primary audience for study results, and how will they use them? This may include other implementing agencies, policymakers, or funders.

Secure buy-in

In meetings with new stakeholders, groups, or influential individuals, get buy-in before asking for feedback on the research design (Glennerster 2015).2 Consider meeting with stakeholders in small groups to allow for more candid responses. In these discussions, researchers should consider the provider’s typical program procedures, including enrollment, recruitment, and the unit of program delivery (Arnold Ventures 2016).3 Securing buy-in from stakeholders matters because 1) they need to understand the purpose of any disruptions to their typical processes, and 2) they can flag aspects of those processes that would make the proposed design infeasible, giving the research team time to adjust. While it may not be possible to ensure that all stakeholders are entirely enthusiastic about implementing the study design, researchers should make every effort to incorporate feedback, respond to ethical and practical considerations, and highlight the benefits of study results.

Case study: In an evaluation of a summer jobs program in Philadelphia, the implementing partner WorkReady, its providers, and the research team considered it essential to implement the lottery in a way that matched youth to appropriate jobs while retaining random assignment.4 Youth who received jobs would need a reasonable commute to their workplace, so assigning individuals to difficult-to-reach positions could not only create obstacles for the youth and the providers but also compromise the research. For example, far-flung job placements could lower compliance with the program (i.e., increase dropout), reducing the researchers’ ability to estimate the impact of the program.

To address this potential issue, researchers designed a randomization strategy that included geographic blocking based on the preferences of each provider. Applicants were subdivided into pools by the geographic catchment area appropriate for specific jobs and then randomized to either the treatment or control group for the positions.
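
The sketch below is a minimal Python illustration of this kind of blocked (stratified) randomization: applicants are grouped into pools by catchment area and assigned to treatment or control within each pool. The column names, pool labels, and fifty-fifty treatment split are assumptions made for illustration, not details of the Philadelphia design.

```python
# Minimal sketch of blocked (stratified) randomization by geographic pool.
# Column names, pool labels, and the 50/50 split are hypothetical and are not
# taken from the Philadelphia summer jobs evaluation.
import random

import pandas as pd


def randomize_within_pools(applicants: pd.DataFrame, seed: int = 2021) -> pd.DataFrame:
    """Assign treatment or control separately within each geographic pool."""
    rng = random.Random(seed)
    rows = []
    for pool, group in applicants.groupby("catchment_area"):
        ids = list(group["applicant_id"])
        rng.shuffle(ids)
        n_treated = len(ids) // 2  # assume half of each pool is offered a job
        treated = set(ids[:n_treated])
        for applicant_id in ids:
            rows.append({
                "applicant_id": applicant_id,
                "catchment_area": pool,
                "assignment": "treatment" if applicant_id in treated else "control",
            })
    return pd.DataFrame(rows)


# Example usage with made-up applicants.
applicants = pd.DataFrame({
    "applicant_id": list(range(1, 9)),
    "catchment_area": ["north"] * 4 + ["south"] * 4,
})
print(randomize_within_pools(applicants))
```

Randomizing within pools in this way ensures that each catchment area contributes both treatment and comparison youth, so geography itself cannot drive differences between the groups.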

Researchers or implementing partners may be nervous about speaking to service providers before the research questions and design are fully thought out. Get partners’ input on the appropriate timing for presenting the project, but also stress the importance of these early conversations. Plan to hold discussions with smaller groups of people and use them as opportunities to get feedback and learn more about program operations and needs, rather than to present how the study “will work.”

Below is a framework and questions to consider during these smaller meetings with stakeholders:

  • Using a similar approach to securing support from high-level organizational leadership, share the researchers’ backgrounds, major research questions, and responses to broader organizational concerns. Ideally, organizational leadership will be able to introduce the research team to the smaller group.5 Researchers should be aware of power dynamics among implementing partner staff members during these meetings, including any tensions between staff and organizational leadership that may arise from implementing the study. For example, researchers might ask whether staff are contractually obligated to participate in the study and what the implications are if they refuse.
  • Frame the meeting as an opportunity to share current thinking and options for a potential study design and to get information and feedback from different groups of stakeholders. Stress the importance of their involvement for the success of the study.
  • Ask the group for their perspectives and backgrounds related to research. What are their goals in taking on a randomized evaluation? What do they hope to learn?
  • Assess initial thoughts and reactions to the proposed evaluation. Does the group have any concerns about randomizing participants? Can the research team help address those concerns?
  • Consider the group’s reactions to determine the direction of the discussion. Use their input about how the program currently works to inform study design.

Knowledge from different stakeholders about the intervention and population will be essential for a successful study design. Researchers should engage in careful and consistent communication to ensure that study design plans make sense in the context of the intervention and adjust course when necessary.

Case study: Researchers in Ontario evaluated a program that guided students at high schools with low college transition rates through the college application and financial aid process. While outcomes were measured at the individual student level, the evaluation used a school-level randomization strategy. The researchers chose this approach rather than targeting individual students who were more likely to graduate because it was more in line with the inclusive mission of the program, which was to help all graduating seniors at schools with low college transition rates. Additionally, a school-level strategy was less burdensome to implement than targeting individual students because whole classes could be scheduled to participate at once (Oreopoulos and Ford 2016).
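
As a contrast with individual-level assignment, the hypothetical Python sketch below shows what school-level (cluster) randomization can look like: whole schools are assigned to treatment or control, and each student inherits their school’s assignment. School names and the even split are illustrative assumptions, not data from the Ontario evaluation.

```python
# Minimal sketch of school-level (cluster) randomization: entire schools are
# assigned to an arm, and students inherit their school's assignment.
# School names and the even split are illustrative, not the Ontario study's data.
import random


def randomize_schools(schools, seed=7):
    """Randomly assign half of the schools to treatment and half to control."""
    rng = random.Random(seed)
    shuffled = list(schools)
    rng.shuffle(shuffled)
    cutoff = len(shuffled) // 2
    return {school: ("treatment" if i < cutoff else "control")
            for i, school in enumerate(shuffled)}


school_arms = randomize_schools(["School A", "School B", "School C", "School D"])

# Outcomes are measured per student, but each student takes their school's arm.
students = [("student_001", "School A"), ("student_002", "School C")]
for student_id, school in students:
    print(student_id, school, school_arms[school])
```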

Iterate study design decisions with stakeholders

Engaging stakeholders in discussion is a crucial first step, but the process of designing the study will likely involve many subsequent discussions and revisions to the proposed experimental design. Researchers should expect to repeat the process of identifying stakeholders, securing buy-in, getting feedback, and proposing revised study design decisions as the team receives new information about new stakeholders and perspectives.

Case study: Sometimes the process of getting feedback and stakeholder buy-in reveals that a randomized evaluation will be infeasible. In one such case, J-PAL North America partnered with the Commonwealth of Pennsylvania to explore the possibility of conducting a randomized evaluation of the Centers of Excellence (COEs), a coordinated care initiative for individuals with opioid use disorders.6 Discussions with service providers and staff at the COEs revealed wide variation in care coordination practices. Because a randomized evaluation would estimate the average effect across different COEs, this variation would make it difficult to interpret the results. After working with J-PAL North America staff and researchers, Pennsylvania ultimately decided that a randomized evaluation of the COEs would not be feasible at that time. Even though a randomized evaluation was not launched, the initial work staff did to develop one was useful in thinking about how to measure the impact of the state’s many efforts to address the opioid epidemic. For example, in the process of scoping a randomized evaluation of the COEs, staff from Pennsylvania discussed how to measure outcomes such as persistence in treatment and health care utilization.

Last updated September 2021.

These resources are a collaborative effort. If you notice a bug or have a suggestion for additional content, please fill out this form.

Acknowledgments

We are grateful to Noreen Giga and Emma Rackstraw for their insight and advice. Chloe Lesieur copy-edited this document. This work was made possible by support from the Alfred P. Sloan Foundation and Arnold Ventures. Any errors are our own.

1. For more examples of questions to consider when making study design decisions with your partner, see Innovations for Poverty Action’s (IPA’s) evaluation toolkit, page 99, which includes a partnership development questionnaire and other helpful guidance for implementing randomized evaluations.
2. Rachel Glennerster’s blog provides insight on developing good relationships with implementing partners. This post discusses how to be a better research partner.
3. Arnold Ventures’ checklist guide to getting things right in an RCT includes helpful discussion on considering the level at which the program is delivered and the unit of randomization.
4. J-PAL North America’s guide to implementing randomized evaluations with governments includes several examples of researchers working closely with implementing partners to make study design decisions, along with a general framework for working with governments to develop research projects.
5. J-PAL North America’s guide to conducting background research and assessing the circumstances includes an overview of questions and considerations for preliminary conversations with an implementing partner.
6. J-PAL North America’s guide to implementing randomized evaluations with governments includes several examples of researchers working closely with implementing partners to make study design decisions, along with a general framework for working with governments to develop research projects.
Additional Resources
    1. "San Code of Research Ethics" | South African San Institute | Accessed August 30, 2018.

      Developed by the South African San Institute, this code of ethics for considering and implementing research projects discusses the ways in which researchers should respectfully engage with the San in South Africa. It details guidelines for respect, honesty, justice and fairness, and care that researchers wishing to engage with the San should follow to carry out a successful research project. This is a helpful overview on topics related to ensuring respectful engagement with communities participating in research and engaging communities in decisions about designing research studies.

    2. "The Politics of Random Assignment: Implementing Studies and Impacting Policy" | Journal of Children’s Services | Vol 3, No 1. Accessed August 30, 2018.

      This article discusses challenges related to implementing randomized evaluations drawing from years of experience at the Manpower Demonstration Research Corporation (MDRC). The “Lessons on How to Behave in the Field” section shares approaches to addressing particular questions and concerns that researchers may face when developing a randomized evaluation, including making language choices that are sensitive to the partner’s perspective, not giving evasive responses to questions about random assignment, and recognizing that different staff members throughout the organization will offer different perspectives.

    3. NCAI Policy Research Center and MSU Center for Native Health Partnerships. 2012. “‘Walk Softly and Listen Carefully’: Building Research Relationships with Tribal Communities.” Washington, DC, and Bozeman, MT.

      The Center for Native Health Partnerships (CNHP) and the National Congress of American Indians (NCAI) Policy Research Center created this guide as a resource for those engaging in research partnerships with tribal communities. The document presents values guiding research partnerships with tribes as well as helpful context and considerations related to research with Native communities.

    4. "Real-World Challenges to Randomization and Their Solutions" | J-PAL North America.

      This resource is intended for policymakers and practitioners generally familiar with randomization who want to learn more about how to address six common challenges. The document draws from Running Randomized Evaluations: A Practical Guide by Rachel Glennerster and Kudzai Takavarasha.

    Chabrier, Julia, Todd Hall, and Ben Struhl. 2017. “Implementing Randomized Evaluations in Government: Lessons from the J-PAL State and Local Innovation Initiative.” J-PAL North America. https://www.toolkit.povertyactionlab.org/file-research-resource/implementing-randomized-evaluations-government-lessons-state-and-local.

    Arnold Ventures. 2016. “Key Items to Get Right When Conducting Randomized Controlled Trials of Social Programs.” https://www.arnoldfoundation.org/wp-content/uploads/Key-Items-to-Get-Right-in-an-RCT

    Brown, Julia, Lucia Goin, Nora Gregory, Katherine Hoffman, and Kim Smith. 2015. “Evaluating Financial Products and Services in the US: A Toolkit for Running Randomized Controlled Trials.” IPA. https://www.poverty-action.org/publication/evaluating-financial-products-and-services-us-toolkit-running-randomized-controlled.

    Glennerster, Rachel. 2015. “What Can a Researcher Do to Foster a Good Partnership with an Implementing Organization?” Running Randomized Evaluations: A Practical Guide (blog). April 9, 2015. http://runningres.com/blog/2015/4/8/what-can-a-researcher-do-to-foster-a-good-partnership-with-an-implementing-organization.

    Oreopoulos, Philip, and Reuben Ford. 2016. “Keeping College Options Open: A Field Experiment to Help All High School Seniors Through the College Application Process.” Working Paper 22320. National Bureau of Economic Research. https://doi.org/10.3386/w22320.
