Three key lessons for turning important research questions into successful research projects

Authors:
Mary Alice Doyle
Care management service providers conduct a home visit in Camden, New Jersey as part of a J-PAL evaluation.

May 21 marked my last day as a research manager at J-PAL North America. When people ask what working at J-PAL was like, I always tell them how much I learned.  

Before I joined J-PAL North America, I worked as an economic analyst for several years and had just finished a master’s degree in economics, where I learned about randomized evaluations and other impact evaluation methods. I felt confident I could get up to speed on the technical details of the J-PAL projects I was hired to manage, but I had never worked on a randomized evaluation. There was a lot I didn’t know about what, exactly, it took to implement one.

Over the next eighteen months, I supported three randomized evaluations at various stages of implementation: evaluations of health care hotspotting, SNAP take-up, and clinical decision support software. Aside from managing specific projects, a key part of my job focused on distilling our research team’s institutional knowledge of best practices for managing randomized evaluations, which resulted in the recently released J-PAL North America Evaluation Toolkit.

I could fill dozens of pages with what I learned from researchers and my colleagues at J-PAL, but I want to share my three key takeaways on what it takes to turn important research questions into successful research projects—and highlight the tools we developed to help researchers and research staff apply these (and other) lessons.

Lesson 1: Working well with stakeholders is central to a project’s success.

A typical randomized evaluation involves many stakeholders. The research team itself is often larger than on other studies. Then there are implementing partners, funders, data providers, research participants, and institutional review boards (IRBs). The list goes on.

Navigating all of these relationships can be tricky, especially if you are used to working on research projects independently or with just one or two colleagues, as I was. But I’ve seen so many successful research partnerships at J-PAL that I now know that developing research questions in collaboration with partners—and taking the time to seek their input at every stage—is worth it.

All stakeholders are important, but establishing a good relationship with implementing partners is essential. To help research teams navigate this relationship when starting a new project, we created a brief, "Assessing viability and building relationships," on working with partners to identify the building blocks of a strong research partnership. We also have a resource on designing and iterating an implementation strategy, which emphasizes engaging partners in the initial discussions that are essential to study design.

Obligations to stakeholders don’t end when the fieldwork is complete. I joined one project just as the team was preparing to submit the results to a journal. People would ask me questions like, "when should we share results with the funders?" and "whose permission do we need to publish the data?" As I was getting my head around these relationships, I found it helpful to map out our obligations to each stakeholder. This exercise fed into our more general guide on pre-publication planning and proofing, which we (and you!) can use for other projects.

Lesson 2: Everything is much harder than it needs to be if you don’t set up good systems from the start.

This is true of any project, but because a randomized evaluation involves so many people and so many moving parts, good project management is especially important.

What do I mean by "systems"? Have a clearly defined folder structure in a shared drive. Choose a task management system and agree within the team on how you will use it. Write your analysis code as though someone else is going to read it. Use version control. Ensure your email inbox is not the only home for important documents or information. Establish a system for documenting any decisions you make. And take time at team meetings to make sure everyone is aware of these systems and processes.
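To make a few of these practices concrete, here is a minimal sketch of what a readable, well-documented analysis script can look like. Everything in it (the file names, the folder layout, and the consent-date decision) is a hypothetical illustration of these habits, not something prescribed by the toolkit:

    # run_analysis.py -- a hypothetical sketch of a readable analysis script.
    # File names and the folder layout below are illustrative only.
    from pathlib import Path
    import csv

    # Keep raw inputs and outputs in clearly named, agreed-upon folders so
    # collaborators can find files without digging through email threads.
    RAW = Path("data/raw")
    OUTPUT = Path("output")

    def load_enrollment(path):
        """Read the (hypothetical) enrollment file. Documenting the expected
        columns here spares the next reader a trip through the raw data."""
        with path.open(newline="") as f:
            return list(csv.DictReader(f))

    def main():
        OUTPUT.mkdir(parents=True, exist_ok=True)
        rows = load_enrollment(RAW / "enrollment.csv")
        # Decision: drop records with no consent date rather than imputing
        # one. Recording choices like this in the code (and in a decision
        # log) is what "document your decisions" looks like in practice.
        consented = [r for r in rows if r.get("consent_date")]
        (OUTPUT / "consented_count.txt").write_text(f"{len(consented)}\n")

    if __name__ == "__main__":
        main()

Under version control, a script like this, plus a short README explaining the folder structure, lets a new team member reproduce the analysis without reconstructing past choices from memory.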

Our new resource on managing files, data, and documentation for randomized evaluations captures some of our practices for data and file management. For general project management that isn’t specific to randomized evaluations, the way our team works draws heavily on Gentzkow and Shapiro’s Code and Data for the Social Sciences: A Practitioner’s Guide, though other teams at J-PAL work differently. The important thing is to have a system in place.

Lesson 3: RCT-specific administrative steps can be confusing to navigate, but help is available!

Because most randomized evaluations involve working with human subjects, collaborating with implementing partners, and accessing administrative data, there are administrative steps that don’t arise with other types of research.

Research teams generally need IRB approval to approach human subjects and work with their data. Researchers need to define how they will work with implementing partners and set up agreements that allow them to access and analyze the data they need. It’s a lot.

Most of the challenge lies in following processes that may not be clearly documented; once you understand them, the steps themselves are not too onerous.

Based on our collective experiences, my colleagues developed primers on navigating each of these processes, which are included in the J-PAL North America Evaluation Toolkit. See "Formalizing research partnerships" and "Defining intake and consent processes" for guidance on individual steps, and "Administrative steps to launching a randomized evaluation in the United States" for an overview.

By distilling what I—and my colleagues—have learned into actionable resources, I hope the toolkit can help other researchers and research staff design and implement successful studies.

If you are managing a randomized evaluation or about to start one, I encourage you to take a look! Of course, keep in mind that this toolkit is just one of many tools and resources developed by J-PAL, and it reflects the broader support J-PAL can provide through my colleagues’ capacity-building services and resources.

Authored By

Mary Alice Doyle, former Research Manager, J-PAL North America