Research Resources

Assessing viability and building relationships

Authors
Elisabeth O’Toole
Kim Gannon
Contributors
Jacob Binder
Summary

This resource guides researchers through background research and early discussions with a program implementer who has expressed interest in a randomized evaluation, and with whom a partnership seems potentially viable. It provides guidelines for researchers to conduct early conversations with twin goals: to build strong working relationships and to assess whether a randomized evaluation is feasible and appropriate for the partner’s context. It focuses on general guidance for researchers, but J-PAL-specific resources for staff, affiliates, and invited researchers can be found at the bottom of this page.

Introduction

Early discussions with partners serve dual purposes: (1) gathering enough information to assess the practical and statistical feasibility of a randomized evaluation, and (2) establishing strong working relationships with key stakeholders. By using initial conversations to discuss research questions, outcome measures, goals, and priorities, researchers and partners can jointly assess whether a randomized evaluation is in their best interests.1 

Locate potential partners

Conducting independent due diligence on each potential project and implementing partner is critical to making an informed decision about (1) the feasibility of an RCT and (2) the compatibility of the two parties’ interests and motivations. Identifying the right partner can take time. Following the guidance in this document will help avoid pitfalls such as hastily selecting partners out of convenience, reputation, or timing. Conducting due diligence often means exploring more opportunities so that the best decision can be made for both the researcher and the implementing partner.
 
When entering a new thematic or geographic area, strategies such as desk research, exploring professional networks, and attending relevant regional and national events that convene stakeholders can be helpful starting points. Doing so will help researchers understand the various actors (NGOs, governments, and others) working in specific fields of interest.
 
Tips:

  • Access research databases of NGOs and government programs in the country or region of interest; take note of interesting and relevant programs.
  • Look for RFPs and funding opportunities (such as those from the World Bank) that often have matchmaking components for researchers and implementing partners.
  • Leverage networks and common connections for introductions, but don’t hesitate to make new connections. Be mindful that, while cold-call or cold-email approaches can be successful, they will likely require more time and commitment to initiate the conversation.
  • Stay up to date on upcoming events where practitioners and researchers meet, especially those that attract practitioners more inclined to participate in a rigorous impact evaluation. For example, J-PAL’s event website, IPA’s event website, 3IE’s event website, and CGDev’s event website are all useful resources to find relevant events.
  • When considering a partnership with a government, read Researchers Should Collaborate With Governments for Both Sides to Be More Effective.

Early conversations with a potential implementing partner

After determining where and how to engage with potential partners, the next step is to prepare the discussion topics needed to assess the possibility of working together. Generally, in the “getting to know you” and “early scoping” phases of communication, researchers should seek information about the potential partner’s organization (history, funding, trajectory), programs (objectives, capacity, monitoring systems), and appreciation for learning and evaluation. These discussions will indicate the potential partner’s willingness to learn about their programs, regardless of the outcome. Small points of preparation also matter: for example, being prepared to use the same language as the partner (e.g., understanding whether a partner refers to the people they serve as patients or clients, and using that term rather than participants) can help foster trust and mutual respect.

Consider the following suggestions for these first conversations:2

  • Generate trust by starting conversations from the partner’s needs and constraints, rather than from the “ideal” research design. Ask key stakeholders, program managers, and frontline service providers about their program, what they hope to learn, and what challenges they face. Such conversations provide the information needed to refine the research question and to assess study feasibility and design. For example, discussions about the number of individuals served over a certain timeline can serve the dual goals of informing power calculations (see the sketch following this list) and better understanding the program and partner.
  • Be sure the partner’s goals for an academic partnership align with what researchers can promise. For example, if a partner is hoping to “prove” that some aspect of their program works, clarify that you cannot guarantee positive results. If a partner places a high priority on an outcome for which the study would be underpowered, be sure to explain the dangers of running an underpowered evaluation.3 From this conversation, gauge how the partner might use or react to the results of an evaluation—whether those results are positive, negative, neutral, or inconclusive—and be sure they understand your intention to publish results regardless of the outcome. Note that there may be fruitful cases where researchers’ and partners’ goals are not exactly the same but the marginal cost of meeting both is very small. However, identifying a design that is feasible and addresses both the partner’s and the researchers’ goals may be an iterative process.
  • Explain the basic concept of a randomized evaluation, related studies, and potential opportunities for random assignment. This conversation will help assess the potential partner’s willingness to incorporate random assignment into their operations and identify potential challenges.4
  • Inquire about time or resource constraints the potential partner may be facing. Ensure that a potential partner understands the length of the research process and likely duration of their involvement. If the potential partner has never participated in academic research before, the timeline may be much longer than they expect. Furthermore, the financial runway for both research and program resources should be factored into the matchmaking decision.
    • Note: In most cases, at least 3–4 months are required for researcher and implementing partner matchmaking, project development, and piloting. The baseline survey may require several more months.
  • Establish an equal partnership. Demonstrating a willingness to design research that is appropriate to the context and minimally disruptive to the partner’s operations, while also demonstrating respect for the partner’s insight and contextual knowledge, may alleviate concerns about the effect of the research on the community and partner.5 This view of partnership is important for the full study team to internalize and return to throughout the study.
  • Identify a champion within the organization. This may be the initial point of contact or someone with a particular interest or background in research. For example, a senior-level official who is intrinsically motivated to understand whether and how their program works can assist in developing a high-level vision for the partnership, implementing the partnership, facilitating support at all levels (including lateral buy-in), and helping ensure project sustainability (Carter et al. 2018).6
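
As a concrete illustration of the power-calculation point above, the following is a minimal sketch in Python of how caseload figures gathered in early conversations can feed a back-of-the-envelope power calculation. The caseload numbers are hypothetical, and the statsmodels library is just one common choice for this kind of calculation; dedicated power calculation resources cover the topic in far greater depth.

```python
# Minimal power-calculation sketch using hypothetical caseload figures.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Suppose the partner serves roughly 600 people per year, split evenly
# between treatment and comparison groups (300 per arm).
n_per_arm = 300

# Minimum detectable effect (in standard deviations) at the conventional
# alpha = 0.05 and power = 0.80, two-sided test.
mde = analysis.solve_power(nobs1=n_per_arm, alpha=0.05, power=0.80,
                           ratio=1.0, alternative="two-sided")
print(f"Minimum detectable effect: {mde:.2f} standard deviations")

# Conversely: the sample size per arm needed to detect a 0.2 SD effect.
n_needed = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.80,
                                ratio=1.0, alternative="two-sided")
print(f"Sample per arm to detect a 0.2 SD effect: {n_needed:.0f}")
```

If the partner’s annual caseload falls well short of what such a calculation suggests, that is an early signal to discuss a longer enrollment period, a larger effect size of interest, or a different design.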

With this groundwork, researchers and partners can begin to discuss what a randomized evaluation might look like in their context, keeping in mind that random assignment can take many forms7 depending on the research question and the needs of all stakeholders (two common forms are sketched below). Prospective partners may not have the research or technical background to understand the benefits and costs of an evaluation design, nor which research design is most suitable for their goals.8 Researchers can provide information to support partners in making informed decisions about their participation in a randomized evaluation and the proposed study design.
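
Where helpful, researchers can make these forms concrete with simple illustrations. Below is a minimal sketch in Python, using a hypothetical participant roster, of two of the forms mentioned above: a simple treatment/comparison lottery and a phase-in design in which everyone eventually receives the program. It illustrates the logic only and is not a prescribed assignment procedure.

```python
# Minimal sketch of two common forms of random assignment,
# using a hypothetical roster of participant IDs.
import random

random.seed(42)  # fix the seed so the assignment is reproducible
participants = [f"id_{i:03d}" for i in range(12)]  # hypothetical roster
shuffled = random.sample(participants, len(participants))
half = len(shuffled) // 2

# Simple lottery: half receive the program, half form the comparison group.
assignment = {p: ("treatment" if p in set(shuffled[:half]) else "comparison")
              for p in participants}

# Phase-in design: everyone eventually receives the program; the random
# order determines who starts in phase 1 vs. phase 2, and phase-2 units
# serve as the comparison group in the interim.
phase = {p: (1 if i < half else 2) for i, p in enumerate(shuffled)}

print(assignment)
print(phase)
```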

Detailed background research

If initial discussions with the potential partner seem promising, researchers can begin to think in more detail about potential research questions and designs. In addition to a literature review, background research may include continued discussions with the potential partner.9 These discussions, desk research, site visits, or focus groups can illuminate:

  • How the program operates in practice. This includes whether services are delivered with fidelity to the design, the extent of variation (formal and informal) in program implementation, the history of changes to the program, and potential future changes. This also illuminates where random assignment could plausibly and ethically be carried out, which people can be recruited and consented into the study, and what potential threats to the research design exist. 
  • Program and partner details. This includes growth trajectory, sample size, time frame in which people are served, retention rates, and data capabilities. Being able to gather relevant programmatic facts within the first few conversations is an encouraging sign of a partner’s interest in collaborating on an evaluation and of their ability to provide relevant data. These details can inform whether a partner is likely to have the technical and logistical capacity to implement the program, reach enrollment targets, and provide data needed for an evaluation.
  • Key personnel and other stakeholders. These may be within the organization or external parties such as funders or government partners. Gather information on their roles, involvement in decision-making, familiarity with research, and relation to one another.
  • A timeline of major changes that may affect the program, such as:
    • A similar program that may be implemented in the same service area
    • Changes to data systems or processes
    • Changes in program funding
  • Potential implications for the community or individuals from study implementation, including:
    • Representativeness of the community context or study population. Communities or partners willing to adopt a program for a randomized evaluation may respond differently to the program than those who are not (Allcott 2015).
    • Potential sources of concern about research or a randomized evaluation. Understanding the community’s relationships with universities, research, and researchers—particularly any other ongoing research in the area, any perception of high fatigue from prior surveys, or any history of research that the community perceived as harmful—can help researchers plan for potential challenges or concerns from partners. Concerns may especially arise among groups whose circumstances have made them vulnerable to manipulation, or from pre-existing controversy over a program or political decision.

Equipped with this information, researchers and partners can assess the fit of a randomized evaluation. If the partnership seems viable, researchers can use this information to tailor their approach to communicating with the partner, to offer trainings or resources as appropriate, and to design a rigorous and practical randomized evaluation.

Last updated March 2021.

These resources are a collaborative effort. If you notice a bug or have a suggestion for additional content, please fill out this form.

Acknowledgments

We are grateful to Todd Hall, Emma Rackstraw, and Sophie Shank for their insight and advice. Jacob Binder copyedited this document. This work was made possible by support from the Alfred P. Sloan Foundation and Arnold Ventures. Any errors are our own.   

1.
A randomized evaluation—or any type of impact evaluation—may not be the right fit for the needs of some potential partners. To help these partners understand different types of evaluation, pages 12-13 of J-PAL’s Introduction to Evaluations and J-PAL’s What Is Evaluation? lecture provide approachable introductions to different types of research questions and evaluation methods. Additionally, Mary Kay Gugerty and Dean Karlan discuss the appropriateness of randomized evaluations in different situations in Ten Reasons Not to Measure Impact—and What to Do Instead.
2.
For more resources on establishing ongoing communications and sharing results, see J-PAL’s resources “Formalize research partnerships and establish roles and expectations” and “Communicating with a partner about results.”
3.
J-PAL’s resource “The risk of an underpowered randomized evaluation” lays out why an underpowered evaluation may consume substantial time and monetary resources while providing little useful information.
4.
J-PAL has developed resources to help facilitate these conversations, including publications titled Why Randomize?, Real-World Challenges to Randomization and Their Solutions, and Common Questions and Concerns about Randomized Evaluations.
5.
J-PAL North America’s "Design and iterate implementation strategy" outlines steps and questions for consideration after research partnerships have been formalized. In particular, it highlights ways to identify aspects of the partner’s program to consider in this design phase—many of which can be identified in these early conversations.
6.
This resource discusses qualities and benefits of identifying a strong senior-level champion in a partner organization. It details specific roles, conversations, and tasks to facilitate a successful partnership—specifically in government organizations.
7.
This resource is a partner-oriented guide illustrating different potential forms of randomization—including phase-in, rotational, and encouragement designs. It describes plans for evaluations in cases of entitlement programs and where resources exist to extend the program to everyone in the study area, as well as in cases where access to the program is guaranteed for a portion of the population.
8.
J-PAL offers many capacity-building resources to potential implementing partners.
9.
If already working with a partner such as a J-PAL or IPA office, engaging the office can help researchers understand the local research and project landscape. Local staff will be able to provide additional guidance on authorizations, logistics, cost implications, and other factors that will inform the project’s implementation prospects.
10.
This resource provides a nuanced discussion of ethics in randomized evaluations.
11.
This resource provides suggestions for alternative strategies in cases where partners have concerns about randomization, especially with specific populations. Researchers can demonstrate their recognition of ethical or practical challenges by presenting alternative randomization strategies.
Additional Resources
    1. “Evaluating Financial Products and Services in the US: A Toolkit for Running Randomized Controlled Trials.” 2015. Innovations for Poverty Action. November 10, 2015. https://www.poverty-action.org/publication/evaluating-financial-products-and-services-us-toolkit-running-randomized-controlled.

      Page 18 of IPA’s Evaluating Financial Products and Services in the US: A Toolkit for Running Randomized Controlled Trials provides additional questions to help determine whether a randomized evaluation is right in a particular context.

    2. Glennerster, Rachel. 2017. “Chapter 5 - The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency.” In Handbook of Economic Field Experiments, edited by Abhijit Vinayak Banerjee and Esther Duflo, 1:175–243. Handbook of Field Experiments. North-Holland.

      For more information about whether a randomized evaluation is right for a given program or partner, see Section 1.2 of Rachel Glennerster’s The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency (available as a part of a Handbook of Economic Field Experiments, published at Elsevier).

    3. “What is the Risk of an Underpowered Randomized Evaluation?” J-PAL North America.

      This resource outlines why an underpowered randomized evaluation may consume substantial time and monetary resources while providing little useful information.

    4. Gugerty, Mary Kay, and Dean Karlan. n.d. “Ten Reasons Not to Measure Impact—and What to Do Instead.” Stanford Social Innovation Review. Accessed October 2, 2018.

      In their book The Goldilocks Challenge, and summary article in the Stanford Social Innovation Review, Mary Kay Gugerty and Dean Karlan discuss the appropriateness of randomized evaluations in different situations and alternative strategies where appropriate.

    5. Glennerster, Rachel, and Shawn Powers. 2016. “Balancing Risk and Benefit: Ethical Tradeoffs in Running Randomized Evaluations.” The Oxford Handbook of Professional Economic Ethics, April. https://doi.org/10.1093/oxfordhb/9780199766635.013.017.

      See Rachel Glennerster and Shawn Powers’s Balancing Risk and Benefit: Ethical Tradeoffs in Running Randomized Evaluations for a detailed discussion about ethics in randomized evaluations.

    6. "Common Questions and Concerns about Randomized Evaluations." J-PAL North America.

      It is helpful to review this resource, along with Why Randomize? and Real-World Challenges to Randomization and Their Solutions (both listed below), before initial conversations with a potential partner, as a reminder of examples and ways to explain RCTs in non-technical language. These resources can also be sent to a potential partner after the first meeting or call to clarify or expand on RCT concepts.

    7. “Real-World Challenges to Randomization and Their Solutions.” J-PAL North America.

      Includes more information about solutions to practical challenges to randomization, as well as case studies.

    8. J-PAL video resource: Why Randomize?

    9. "Right-Fit Evidence," IPA

      This resource is helpful if the potential partner wants to learn more about monitoring and evaluation.

    10. Feasibility Checklist (Hoekman 2019) (J-PAL internal resource)

    11. J-PAL’s guide to early-stage engagement with partners (J-PAL internal resource)

References

Allcott, Hunt. 2015. “Site Selection Bias in Program Evaluation.” The Quarterly Journal of Economics 130 (3): 1117–65. https://doi.org/10.1093/qje/qjv015.

Carter, Samantha, Iqbal Dhaliwal, Julu Katticaran, Claudia Macías, and Claire Walsh. 2018. “Creating a Culture of Evidence Use: Lessons from J-PAL Government Partnerships in Latin America.” J-PAL LAC.
