
Navigating hospital Institutional Review Boards (IRBs) when conducting US health care delivery research

Authors
Fatima Vakil
Contributors
Laura Feeney Jesse Gubb Sarah Margolis
Last updated
June 2023
In Collaboration With
MIT Roybal Center

Summary

This resource provides guidance for partnering with a hospital to implement a randomized evaluation and working with the hospital's institutional review board (IRB), which may be more familiar with medical research than with social science evaluations. Conducting randomized evaluations in these settings involves logistical considerations that researchers should understand before embarking on an experiment. Topics include how to write a protocol that avoids confusion, objections, and delay by thoroughly explaining the study design and demonstrating thoughtful approaches to randomization, to risks and benefits, and to how the research affects important hospital constituencies such as patients, providers, and vulnerable groups. The resource also addresses how to structure IRB review when work involves multiple institutions or intersects with quality improvement (QI), in order to clarify which institution is responsible for a study and to minimize administrative burden over its length. For a more general introduction to human subjects research regulations and IRBs, including responses to common IRB concerns not addressed here, please first consult the resource on institutional review board proposals.

The challenge

Randomized evaluations, also known as randomized controlled trials (RCTs), are increasingly being used to study important questions in health care delivery, although the methodology is still not as commonly used in the social sciences as it is in studies of drugs and other medical interventions (Finkelstein 2020; Finkelstein and Taubman 2015). Economists and other social scientists may seek to partner with hospitals to conduct randomized evaluations of health care delivery interventions. However, compared to conducting research at their home institutions, researchers may find that some hospital IRBs have a lengthier process for reviewing social science RCTs. While all IRBs follow the guidelines set forth in the Common Rule, differences in norms and expectations between disciplines can complicate reviews and require clear communication strategies.

Lack of familiarity with social science RCTs

Hospital IRBs are accustomed to reviewing clinical trials, research involving drugs or devices, and more intensive behavioral interventions, and may therefore review social science RCTs as if they were comparatively higher-risk clinical trials, even though researchers may consider social science RCTs generally less risky. As a result, a protocol that mentions randomization may prompt safety concerns and trigger review processes designed for trials that demand more scrutiny and risk mitigation.

Randomization 

Randomization invites scrutiny of decisions about who receives an intervention, as well as of the intervention itself. This can create a disconnect when the researcher tries to justify the research, or trigger assumptions that reviewers may have regarding ethics. Is the researcher denying someone something they should be able to receive? Unlike a drug or device that requires testing, it may not be clear to a hospital IRB why an intervention (for instance, one that increases social support or gives physicians feedback on their prescribing behavior) cannot be applied to everyone.

Researchers being external to the reviewing IRB organization

Although not a challenge inherent to the field of health care delivery, partnering with a hospital to conduct research may involve researchers interacting with a new IRB. This presents challenges when IRBs are unaware of, or unable to independently assess, the expertise of the researchers; may not understand the division of labor between investigators at the hospital and elsewhere; or may be unclear about where the hospital's responsibility for a project starts and ends. It may also pose logistical challenges if it limits who has access to IRB software systems.

Despite these challenges, researchers affiliated with J-PAL North America have successfully partnered with hospitals to conduct randomized evaluations. 

  • In Clinical Decision Support for High-cost Imaging: A Randomized Clinical Trial, J-PAL affiliated researchers partnered with Aurora Health Care, a large health care provider in Wisconsin and Illinois, to conduct a large-scale randomized evaluation of a clinical decision support (CDS) system's effect on the ordering of high-cost diagnostic imaging by health care providers. Providers were randomized to receive CDS in the form of an alert that appeared within the order entry system when they ordered scans meeting certain criteria. Aurora Health Care's IRB reviewed the study and approved a waiver of informed consent for patients, the details of which are well documented in the published paper. 
  • In Health Care Hotspotting, J-PAL affiliated researchers partnered with the Camden Coalition of Healthcare Providers in New Jersey and several Camden area hospitals (each with their own IRB) to evaluate the impact of a care transition program that provided assistance to high-cost, high-need patients with frequent hospitalizations and complex social needs, known as “super-utilizers.” The study recruited and randomized patients while in the hospital, although the bulk of the intervention was delivered outside of the hospital after discharge. Each recruiting hospital IRB reviewed the study.
  • In Prescribing Food as Medicine among Individuals Experiencing Diabetes and Food Insecurity, J-PAL affiliated researchers partnered with the Geisinger Health System in Pennsylvania to evaluate the impact of the Fresh Food Farmacy (FFF) “food-as-medicine” program on clinical outcomes and health care utilization for patients with diabetes and food insecurity. A reliance agreement was established between MIT and the Geisinger Clinic, with the Geisinger Clinic’s IRB serving as the reviewing IRB.
  • In Encouraging Abstinence Behavior in a Drug Epidemic: Does Age Matter? J-PAL affiliated researchers partnered with DynamiCare Health and Advocate Aurora Health to evaluate whether an app-based abstinence incentive program is effective for older adults with opioid use disorders. Advocate Aurora Health reviewed the study, and a reliance agreement was not needed, as it was determined that the co-PIs at the University of California, Santa Cruz and the University of Chicago were not involved in human subjects research. 

The following considerations are important to ensure positive relationships, faster reviews, and increased likelihood of approval when working with hospital IRBs:

Considerations for the institutional arrangement

Partner with a PI based at the IRB's institution

RCTs generally involve partnerships between investigators and implementing partners. Hospitals differ from typical implementing partners in that they are research institutions with their own IRBs, so if you want to work with a hospital to conduct your research, plan to partner with a PI based at that hospital. Beyond the substantive knowledge and expertise a hospital researcher brings, there are practical benefits for the IRB process, from institutional knowledge to the mundane task of clicking submit on an internal IRB website. Procedures for approving external researchers vary from hospital to hospital, and having an internal champion in this process can be invaluable.

Building a relationship with a local PI as your co-investigator is beneficial for several reasons: drawing on content expertise during research design, aligning the research goals with the hospital's mission, interests, and needs, and navigating IRB logistics. A local PI may already know the nuances to consider when designing a protocol and submitting it for IRB review.

  • In the evaluation of clinical decision support, J-PAL and MIT-based researchers partnered with Aurora-based PI Dr. Sarah Reimer, who played a key role in getting the study approved. Aurora requires all research activities to obtain Research Administrative Preauthorization (RAP) before the proposal is submitted to the IRB for approval. Dr. Reimer ensured the research design made it through the formal steps of both reviews. She also helped secure approval for the external J-PAL investigators to participate in the research, which included sending a memo to Aurora's IRB explaining their qualifications, experience, and role on the project. 
  • Challenges in collaboration are always a possibility. In Health Care Hotspotting, researchers initially partnered with Dr. Jeffrey Brenner, the founder of the Camden Coalition and a clinician at the local hospitals. When Dr. Brenner left the Coalition and the research project, the research team needed to find a hospital-based investigator in order to continue managing administrative tasks like submitting IRB amendments. 

Decide which IRB should review

Multi-site research studies (including those with one implementation site but researchers spread across institutions, if all researchers are engaged in human subjects research) can generally opt to have a single IRB review the study with the other IRBs relying on the main IRB. In fact, this is required for NIH-funded multi-site trials. Implementation outside of the NIH-required context is more varied. 

All else equal, it may be easier to submit and manage an IRB protocol at one’s home university, with partnering hospitals ceding review. However, not all institutions will agree to cede review. This seems to be the case particularly when most activities will take place at another institution (e.g., you are the lead PI based at a university but recruitment is taking place at a hospital and the hospital IRB does not want to cede review). Sometimes hospitals may require their own independent IRB review, which results in the same protocol being reviewed at two (or more) institutions. Understanding each investigator’s role and level of involvement in the research, along with the research activities happening at each site is crucial when thinking about a plan to submit to the IRB. Consult the IRBs and talk to your fellow investigators about how to proceed with submitting to the IRB and the options for single IRB review. For long-term studies, consider how study team turnover and the possibility of researchers changing institutions will affect these decisions.

  • In Health Care Hotspotting, all four recruiting hospitals required their own IRB review. MIT, where most researchers were based, ceded review to Cooper University Hospital, the primary hospital system where most recruitment occurred. Cooper has since developed standard operating procedures for working with external researchers and determining who reviews. Other institutions may have similar resources. Maintaining separate approvals proved labor intensive for research and implementing partner staff.
  • Reliance agreements can be easier with fewer institutions. In Prescribing Food as Medicine among Individuals Experiencing Diabetes and Food Insecurity, a reliance agreement was established between MIT (where the J-PAL affiliated researcher was based) and the Geisinger Health System. MIT ceded to Geisinger, since recruitment and the intervention took place at Geisinger affiliated clinics. 

We recommend single IRB review whenever possible: a single IRB drastically reduces the effort required for long-term compliance, such as submitting amendments and continuing reviews. While setting up single IRB review may not be straightforward initially, doing so will pay dividends over the length of the project.

If a researcher at a university wants to work with a co-investigator at a hospital and have only one IRB oversee the research, the IRBs will require a reliance agreement or institutional authorization agreement (IAA). Researchers select one IRB as the "reviewing IRB," which assumes all IRB oversight responsibilities. The other institutions sign the agreement as "relying institutions": their role is to disclose any conflicts of interest, attest to the qualifications of their study team members, and provide local context if applicable, but otherwise cede review.

SMART IRB is an online platform aimed at simplifying reliance agreements, funded through the NIH National Center for Advancing Translational Sciences. NIH now requires a single reviewing IRB for NIH-funded multisite research. Many institutions use SMART IRB; however, some use their own paper-based systems or a combination of both (e.g., SMART IRB for medical studies that require it, paper forms for social science experiments). An example of a reliance agreement form can be found here. The process of getting forms signed and exchanged between institutions can take time. If you plan to collaborate with a researcher at another institution, plan ahead by checking whether their institution uses SMART IRB and by talking to other researchers about their experiences obtaining a reliance agreement via traditional forms.

Anticipate roadblocks to reliance agreements to save yourself time in the future and avoid potential compliance issues. Explain your roles and responsibilities in the protocol, check to see if your institution(s) use SMART IRB, discuss with your co-investigator(s) which institution should be the relying institution, and talk to your IRB if you are unsure whether a reliance agreement will be needed. 

Understand who is “engaged” in research

It is fairly common for research teams to be spread across multiple universities; however, not all universities may be considered engaged in human subjects research. Considering which institutions must approve the research can cut down on the number of approvals needed, even in cases of single IRB review.

An institution is engaged in research if an investigator at that institution is substantively involved in human subjects research. Broadly, “engagement” involves activities that merit professional recognition (e.g., study design) or varying levels of interaction with human subjects, including direct interaction with subjects (e.g., consent, administering questionnaires) or interaction with identifiable private information (e.g., analysis of data provided by another institution). If you have co-investigators who are not involved in research activities with subjects and do not need access to the data, even if they are collaborators for authorship and publication purposes, reliance may not be necessary (See Engagement of Institutions in Human Subjects Research, Section B 11). To learn more about criteria for engagement, please review guidance from the Department of Health & Human Services. When in doubt, an IRB may issue a determination that an investigator is “not engaged” in human subjects research. 

  • In Treatment Choice and Outcomes for End Stage Renal Disease, the study team was based at MIT, Harvard, and Stanford. Data access was restricted such that only the MIT-based team could access individual data. The study design, analysis, and paper writing were collaborative efforts, but the IRBs determined that only MIT was engaged in human subjects research. 
  • One J-PAL affiliated researcher experienced a weeks-long exchange when trying to set up a reliance agreement between their academic institution and their co-PI's hospital. As co-PIs, they designed the research protocol together; the plan was for recruitment and enrollment of patients to take place at the hospital where the co-PI was based and for the affiliate to lead the data analyses. The affiliate initiated the reliance agreement, but because the study was taking place at the hospital and the affiliate would not be involved in any direct research activities (e.g., recruitment, enrollment), their IRB questioned whether their level of involvement even qualified as human subjects research and consequently did not see the need for a reliance agreement. The affiliate provided their IRB with justification of their involvement in the study design and planned data analyses, and an academic affiliation was set up for them at the co-PI's hospital for data access purposes. Because the study received federal funding, the investigators emphasized the need to comply with the NIH's single IRB policy. Ultimately, the reliance agreement was approved. 

Determine whether your study is Quality Improvement (QI)

Quality improvement (QI) is focused on evaluating and identifying ways to improve processes or procedures internally at an organization. Because the focus is local, the goals of QI differ from those of research, which is defined as "a systematic investigation… to develop or contribute to generalizable knowledge" (45 CFR 46.102(l)). QI instead has purposes that are typically clinical or administrative. QI projects are not human subjects research and are not subject to IRB review. However, hospitals typically have an oversight process for QI work, including making the determination that a project is QI and thus not human subjects research. If you are doing QI, it is crucial to follow the local site's procedures, so make sure to discuss your work with the IRB or, if separate, the office responsible for QI determinations.

Quality improvement efforts are common in health care. In some cases, researchers may be able to distinguish between the QI effort and the work to evaluate it. In such work, the QI would receive a determination of “not human subjects research” and would not be overseen by the IRB, whereas a non-QI evaluation would be considered research and would receive oversight from the IRB. If the evaluation only involved retrospective data analysis, it might be eligible for simplified IRB review processes, such as exemption or expedited review. 

Quality improvement projects are often published as academic articles. The intent to publish does not by itself mean your QI project fits the regulatory definition of research (45 CFR 46.102(l)) and should not, on its own, stop you from pursuing a QI project. If your QI project also has a research purpose, then regulations for the protection of human subjects in research may apply; in that case, discuss with the IRB and/or the office responsible for QI determinations. The OHRP maintains an FAQ page on Quality Improvement Activities that can help you better understand the differences between QI and research.

Understanding QI matters because hospital-based partners may feel that a project does not require IRB review. While a project may be intended to evaluate and identify ways to improve processes or procedures, there may be research elements embedded in the design. If so, then your project must be reviewed by an IRB, per federal regulations. If you are unsure if your project can be classified as QI, human subjects research, or both, talk to your IRB and/or the office that makes QI determinations. 

Additionally, there may be funder requirements to be mindful of when it comes to IRB review. For example, J-PAL’s Research Protocols require PIs to obtain IRB approval, exemption, or a non-human subjects research determination. 

  • The Rapid Randomized Controlled Trial Lab at NYU Langone, run by Dr. Leora Horwitz, does exclusively QI because the lab prioritizes speed. By limiting its portfolio to QI and certain types of evaluations, the lab is able to complete projects within weeks. Researchers have investigated non-clinical improvements to the health system, such as improving post-discharge telephone follow-up.
  • Lessons from Langone: QI projects may not need IRB review, but that does not mean ethical considerations should be taken lightly. Before getting started, the researchers discussed their approach with their IRB, which determined that their projects qualified as QI. Two hallmarks of this QI work: no collection of personal identifiers, since this is generally unnecessary for evaluating an effect, and the prioritization of oversubscribed interventions, to avoid denying patients access to a potentially beneficial program (Horwitz et al. 2019).

Quality improvement should not be undertaken to avoid ethical review of research. As the researcher, it is your obligation to be honest and thorough in designing your protocol and assessing the roles human subjects will play and how individuals may be affected. 

Do not outsource your ethics! Talk to your IRB about your options for review if you think your protocol involves both QI and research activities. It is always possible to submit a protocol, which may lead to a determination that a project is not human subjects research, or exempt from review. 

Consider splitting up your protocol

A research study may involve several components, some of which may qualify for exemption or expedited review while others will not. For example, researchers may be interested in accessing historical data (retrospective chart review, defined below) to inform the design of a prospective evaluation. In this case, the research using historical data could be submitted to the IRB as one protocol, likely qualifying for an exemption, while the prospective evaluation could later be submitted as a separate protocol. Removing some elements of the project from the requirements of ongoing IRB review may be administratively beneficial and doing so allows some of the research to get approved and commence before all details of an evaluation are finalized. Similar strategies may be used when projects involve elements of QI. An IRB may also request that projects be split into separate components. 

Considerations for writing the protocol

The following sections contain guidance for how to write the research protocol to avoid confusion and maintain a positive relationship with an IRB, but these points also relate to important design considerations for how to design an ethical and feasible randomized evaluation that fits a particular context. For more guidance on these points from a research design perspective consult the resource on randomization and the related download on real-world challenges to randomization and their solutions. 

Use clear language and default to overexplaining

IRB members reviewing your protocol are likely not experts in your field, so explain your project in a way that someone with a different background can understand. If you have co-investigators who have previously submitted to hospital IRBs, consult them when developing the language for your protocol, and consult implementing partners to describe the details of your intervention sufficiently. Because IRBs only know as much as you tell them, be diligent when explaining your research: being thorough and anticipating questions in the protocol leaves less room for confusion and for questions that could prolong review and approval. Questions from the IRB are not necessarily bad or a sign that you will not receive approval; however, ample explanation proactively answers questions that may come up during review, potentially saving time while waiting for the determination.

Be mindful of jargon and specific terminology

The IRB may imbue certain words with particular meaning. 

  • “Preparatory work,” for example, may be interpreted as a pilot or feasibility study and the IRB may instruct you to modify your application and specify that your study is a pilot in the protocol and informed consent. If you are not actually conducting a pilot, this can lead to confusion and delay. 
  • Similarly, mentioning “algorithms” may suggest an investigation of new algorithms to determine a patient’s treatment, prompting an IRB to review your work like it was designed to test a medical device. 
  • In the medical world, IRBs often refer to research with historical data as retrospective chart review. Chart review is research requiring only administrative data — materials, such as medical records, collected for non research purposes — and no contact with subjects. Chart reviews can be retrospective or prospective, although many IRBs will grant exemptions only to retrospective chart reviews. Prospective research raises the question of whether subjects should be consented, and prospective data collection may require more identifying information to extract data. Proposals therefore require additional scrutiny. 
  • Studies working only with data where the “identity of the human subjects cannot readily be ascertained” (including limited data sets subject to a DUA) can be determined exempt under category 4(ii) (45 CFR 46.104). A chart review involving identified data may still qualify for expedited review (category 5) if it poses minimal risk. In this case researchers trade a longer IRB process for greater freedom in how identifiers are coded.

IRBs may provide glossaries or lists of definitions that are helpful to consult when writing a protocol. NIA also maintains a glossary of clinical research terms. 

Carefully consider and explain the randomization approach

Typical justifications for randomization are oversubscription, where demand for a scarce resource exceeds supply, and equipoise, defined as a “state of genuine uncertainty on the relative value of two approaches being compared.” Under oversubscription, randomization ensures a fair allocation of a limited resource; under equipoise, randomization for the sake of generating new knowledge is ethically sound. 

Careful explanation of the motivations behind randomization and of the approach itself is crucial to making sure that both the IRB and participants understand they are not being deprived of care to which they would normally be entitled. 

  • In Health Care Hotspotting, eligible patients were consented and randomized into an intervention group that was enrolled in the Camden Core Model and a control group that received the usual standard of care (a printed discharge plan). Great care was taken to explain to patients that no one would be denied treatment that they were otherwise entitled to as a result of participating or not participating in the study. Recruiters emphasized that participation in the study would not affect their care at the hospital or from their regular doctors. However, the only way to access the Camden program was through the study and randomization. More detail on this intake process is in the resource designing intake and consent processes in health care contexts.

Different design decisions may reduce concerns about withholding potentially valuable treatments. Strategies such as phasing in the intervention, so all units are eventually treated but in a random order; randomizing encouragement to take up the intervention rather than randomizing the intervention itself; and randomizing among the newly eligible when expanding eligibility can all be considered during study design. For more details on these strategies, refer to this resource on randomization. 
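As a concrete illustration of the phase-in strategy, the sketch below randomly orders hypothetical units into treatment waves. The function name, unit labels, and seed are all invented for this example; it is a minimal sketch of the idea, not a prescribed randomization procedure.

```python
import random

def assign_phase_in(unit_ids, n_waves, seed):
    """Randomly order units into treatment waves for a phase-in design.

    Every unit is eventually treated; only the timing is randomized.
    A fixed seed makes the assignment reproducible and auditable.
    """
    rng = random.Random(seed)
    shuffled = list(unit_ids)
    rng.shuffle(shuffled)
    # Deal shuffled units into waves round-robin so wave sizes differ by at most one
    return {unit: i % n_waves + 1 for i, unit in enumerate(shuffled)}

# Twelve hypothetical clinics phased in over three waves
waves = assign_phase_in(["clinic_%02d" % i for i in range(1, 13)], n_waves=3, seed=20230601)
```

Because the wave order is random but every unit is eventually treated, this design can be easier to justify to an IRB than a pure control group.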

Explain risks and benefits

Behavioral interventions may not pose the risk of physical harm that drug or medical interventions do, but social scientists must still consider the risks participants face. Using data (whether administrative or collected by researchers) involves the risk of loss of privacy and confidentiality, including the possibility of reidentification if anonymized data is linked back to individuals. The consequences of these risks depend on the data content, but health data is generally considered sensitive. Risks beyond data-related ones also exist and must be considered: are there physical, psychological, or financial risks to which participants may be subjected?

Do not minimize the appearance of risks. Studies are judged on whether risks are commensurate with the expected benefits; they need not be risk-free. Thoughtful consideration of risks is more effective than asserting their absence.

Don’t outsource your ethics; as the researcher, you have an obligation to consider risks and whether they are appropriate for your study. Researchers should be satisfied that a study is ethical before they attempt to make the case to someone else, such as an IRB. Many but not all behavioral studies will be minimal risk. Minimal risk does not mean no risk, but rather a level of risk comparable to what one would face in everyday life (45 CFR 46.102(i)). Such risks may include, for example, taking medication that carries a risk of side effects. Minimal risk studies may be eligible for expedited review, as detailed in this guidance set forth by the Office for Human Research Protections (OHRP). If you do not plan to use a medical drug or device intervention in your research, clearly distinguishing your study from medical research may help ensure that it is reviewed commensurate with its risks. If you have a benign behavioral intervention* that poses no more than minimal risk to participants, use those terms and justify your reasoning (*see section 8 in Attachment B - Recommendations on Benign Behavioral Intervention for a more detailed definition of what constitutes a benign behavioral intervention). This will put your study in a shared framework of understanding. Check your IRB’s website for a definitions page and contact your IRB if you have any clarifying questions. 

  • In Encouraging Abstinence Behavior in a Drug Epidemic: Does Age Matter?, the intervention is administered using an app for a mobile device. Through the app, participants receive varying incentive amounts for drug-negative saliva tests. Participants must have a smartphone with a data plan in order to run the application. There is a potential risk of participants incurring extra charges on their phone plan if they exceed their data limit while uploading videos to the app. To address this risk, researchers made sure that participants knew about the app setting to “Upload only over Wi-Fi” so that they could upload their videos without using mobile data and risking extra charges. 

There is no requirement that your study pose no more than minimal risk, but studies that are greater than minimal risk will face full institutional review board review. You must also explain the protections you plan to put in place for the safety and best interests of participants.

All studies must report adverse or unanticipated events or risks. An example of an adverse event reporting form can be found here (under “Protocol Event Reporting Form”). However, for studies that are greater than minimal risk, your IRB may ask you to propose additional safety monitoring measures. It is possible that hospital IRBs with a history of reviewing proposals for clinical trials are more accustomed to the need for safety monitoring plans and may expect them even from social science trials that do not appear risky. 

  • In Health Care Hotspotting, where the only risk was a breach of data confidentiality, weekly project meetings involved a review of data management procedures with staff as well as discussion of any adverse events. The hospital IRB submission prompted researchers to detail these plans. 

Additionally, depending on the level of risk and funder requirements, you may encounter the need for a Data and Safety Monitoring Board (DSMB) to review your study. A DSMB is a committee of independent members responsible for reviewing study materials and data to ensure the safety of human subjects and validity and integrity of data. National Institutes of Health (NIH) policy requires DSMB review and possible oversight for multi-site clinical trials and Phase III clinical trials funded through the NIH and its Institutes and Centers. The policy states that monitoring must be commensurate with the risks, size, and complexity of the trial. For example, a DSMB may review your study if it involves a vulnerable population or if it has a particularly large sample size. The DSMB will make recommendations to ensure the safety of your subjects or possibly set forth a frequency for reporting on safety monitoring (e.g., submit a safety report every six months). Be sure to check your funder’s requirements and talk to your IRB about safety monitoring options if you think your study may be greater than minimal risk.

Address the inclusion of vulnerable populations

The Common Rule (45 CFR 46 Subparts B, C, and D) requires additional scrutiny of research on vulnerable populations, specifically pregnant people, prisoners, and children. This stems from concern around additional risks to the person and fetus (in the case of pregnancy) and the ability of the individual to voluntarily consent to research (in the case of those with diminished autonomy or comprehension).

The inclusion of vulnerable populations should not deter you from conducting research, whether they are the population of interest, a subgroup, or included incidentally. Although it may seem easier to exclude vulnerable individuals, relevant statuses such as pregnancy may be unknown to the researcher, and unnecessary exclusions may limit the generalizability of findings. Beyond generalizability, diversifying your research populations is important to avoid overburdening the same groups of participants. If you are considering the inclusion of vulnerable populations in your research, brainstorm safeguards that can be implemented so that they can participate safely if there are additional risks. In some cases (including if funders require it), you may need a Data and Safety Monitoring Board (DSMB) to review your research and provide safety recommendations.

  • One J-PAL affiliated researcher proposed a study with pregnant women as the population of interest. Although the researcher was working with a vulnerable population, the study itself was a benign behavioral intervention involving text messages and reminders of upcoming appointments. Thorough explanation of the research activities and how the vulnerable population faced no more than minimal risk led to the study being approved without a lengthy review process. 
  • In the evaluation of clinical decision support, researchers argued that it was important to understand the impact of CDS on advanced imaging ordering by providers for all patients, including vulnerable populations. Although not explicitly targeted for the study, vulnerable populations should not be excluded from having this benefit. Furthermore, the researchers noted that identifying vulnerable populations for the purposes of exclusion would result in an unnecessary loss of their privacy. 

Consider the role of patients even if they are not the research subjects

In a hospital setting, patient considerations will be paramount, even if patients are not the direct subject of the intervention, the unit of randomization, or the unit of analysis. It therefore behooves researchers to address patient welfare directly in the protocol, even when patients are not research subjects. 

  • In the evaluation of clinical decision support (CDS), healthcare providers were randomly assigned to receive the CDS. The provider was the unit of randomization and analysis, and the outcome of interest was the appropriateness of images ordered. The IRB considered potential risks to patients in terms of data use and patient welfare. The researchers proposed and the IRB approved a waiver of informed consent based on the following justification: the CDS could be overridden, ensuring providers maintained professional discretion in which images to order, which contributed to a determination of minimal risk to the patients. Further, obtaining consent from patients would not be practicable, and may have increased risk by requiring the collection of personally identifiable information that was not otherwise required for the study. You can learn more about activities requiring consent and waivers in this research resource.

These considerations boil down to communicating details and explaining them properly in the protocol.

Conclusion

While social scientists may be initially hesitant, collaborating with hospitals to conduct randomized evaluations is a great way to increase the number of RCTs conducted in health care delivery research. Understanding levels of risk, writing a thorough protocol, strategizing for effective collaboration, and considering the role of quality improvement (QI) are important when planning for a submission to a hospital IRB. 

 

Acknowledgments: This resource was a collaborative effort that would not have been possible without the help of everyone involved. A huge amount of thanks is owed to several folks. Thank you to the contributors, Laura Feeney, Jesse Gubb, and Sarah Margolis; thank you to Adam Sacarny for taking the time to share your experiences and expertise; and thank you to everyone who reviewed this resource: Catherine Darrow, Laura Feeney, Amy Finkelstein, Jesse Gubb, Sarah Margolis, and Adam Sacarny.  

Creation of this resource was supported by the National Institute On Aging of the National Institutes of Health under Award Number P30AG064190. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. 

Additional Resources
  • Research Resource on Institutional Review Boards (IRBs)
  • Research Resource on Ethical conduct of randomized evaluations
  • Research Resource on Define intake and consent process
  • The Common Rule - 45 CFR 46
  • The Belmont Report
  • OHRP guidance on considerations for regulatory issues with cluster randomized controlled trials
  • SACHRP Recommendations: Attachment B - Recommendations on Benign Behavioral Intervention
  • OHRP guidance on engagement of institutions in human subjects research
  • OHRP FAQs on quality improvement activities

References
Abramowicz, Michel, and Ariane Szafarz. 2020. “Ethics of RCT: Should Economists Care about Equipoise?” In Florent Bédécarrats, Isabelle Guérin, and François Roubaud (eds), Randomized Control Trials in the Field of Development: A Critical Perspective. https://doi.org/10.1093/oso/9780198865360.003.0012

Finkelstein, Amy. 2020. “A Strategy for Improving U.S. Health Care Delivery — Conducting More Randomized, Controlled Trials.” The New England Journal of Medicine 382 (16): 1485–88. https://doi.org/10.1056/nejmp1915762.

Finkelstein, Amy, and Sarah Taubman. 2015. “Randomize Evaluations to Improve Health Care Delivery.” Science 347 (6223): 720–22. https://doi.org/10.1126/science.aaa2362.

Horwitz, Leora I., Masha Kuznetsova, and Simon Jones. 2019. “Creating a Learning Health System through Rapid-Cycle, Randomized Testing.” The New England Journal of Medicine 381 (12): 1175–79. https://doi.org/10.1056/nejmsb1900856.

Checklist for social scientists publishing in medical journals: before, during, and after running an RCT

Authors
Anisha Sehgal
Contributors
Jesse Gubb
Last updated
June 2023
In Collaboration With
MIT Roybal Center

Summary

This resource is intended for researchers in the social sciences who are considering publishing their randomized evaluation in a medical journal. Publishing in a medical journal allows researchers to share their study with researchers and practitioners in different disciplines and expand the reach of their findings to those outside of their immediate professional network. This resource highlights journal guidelines and requirements that may be unfamiliar to those used to publishing in the social sciences. Most medical journals are members of the International Committee of Medical Journal Editors (ICMJE) and follow its recommendations and requirements for publishing.1 Some of the requirements for publishing necessitate action before a study has even started. While most of the content in this checklist is relevant across all medical journals, certain elements may differ from journal to journal. Researchers are encouraged to consult requirements listed in this resource in conjunction with the requirements of their specific journal(s) of interest, as well as requirements from funders and universities that could influence the publication process.  

Key steps for publishing a randomized evaluation in a medical journal:

  • Before participant enrollment begins, register a trial in an ICMJE-compliant registry such as clinicaltrials.gov. Failure to do so may prevent publication in most medical journals. Note that the American Economic Association (AEA) registry is not an ICMJE-compliant registry. 
  • Upload a statistical analysis plan (SAP) to the trial registry prior to the start of the study. Although not strictly a requirement for most medical journals, publicly archiving a SAP in advance will allow greater flexibility in presenting and interpreting causal results. In particular, ensure primary vs. secondary outcomes and analysis methods are prespecified in the SAP. 
  • Draft a manuscript according to journal requirements. Medical journals require papers to contain a limited number of figures and tables and adhere to specific formats and short word counts. Consult the Consolidated Standards of Reporting Trials (CONSORT) guidelines used by most journals as well as individual journal requirements before drafting your manuscript. 
  • Be aware of embargo and preprint policies. Some journals may de-prioritize study publication if study results are public or shared through a working paper or preprint. However, other journals publicly state they will not penalize such papers. Journals also impose restrictions about what can be shared after acceptance and prior to publication. Make sure to check the journal’s specific guidelines and adhere to these rules.

1. Before starting an RCT

Register the trial before enrollment begins

The ICMJE defines a clinical trial as “any research study that prospectively assigns human participants or groups of humans to one or more health-related interventions to evaluate the effects on health outcomes.” If a randomized evaluation meets the definition of a clinical trial, researchers must register it in an ICMJE-compliant registry before the first participant is enrolled. If researchers fail to meet this condition, they risk having their paper rejected by medical journals. Note that registration on the AEA’s RCT Registry (supported by J-PAL) does not meet this requirement. 

Not every randomized evaluation that researchers submit to a medical journal will meet the ICMJE definition of a clinical trial and require registration. However, if it is unclear whether a study meets the definition, follow the registration guidelines for clinical trials as a precautionary measure. That way, if a journal editor decides that a study is a clinical trial, researchers will already have met the registration requirement. 

Some scenarios where registration prior to enrollment may be unnecessary (although different journals may reach different conclusions) include the following: 

  1. Secondary analysis of a clinical trial does not need to be registered separately. Instead, researchers should reference the trial registration of the primary trial. 
    • Example: In Health Care Hotspotting, researchers conducted a randomized evaluation of a care transition program’s impact on hospital readmissions for those with complex needs. In a related secondary study, the same research team is analyzing the effect of the program on outpatient utilization. Researchers referenced the original study’s registration instead of separately registering the secondary analysis.  
  2. Analysis of a clinical trial implemented by someone else without researcher involvement is also considered a secondary analysis and does not need to be registered separately. Researchers may reference the existing clinical trial record, if one exists. As a best practice, researchers may also register a secondary analysis separately.
    • Example: In The Spillover Effects of a Nationwide Medicare Bundled Payment Reform, independent researchers conducted a secondary analysis of a randomized evaluation implemented by the Centers for Medicare and Medicaid Services (CMS). This trial did not have an existing registration, so the researchers created one for their analysis. The trial was registered prior to analysis but after enrollment had begun.
  3. Evaluations of health care providers, where the study population is made up solely of health care providers and the outcomes of interest include only effects on providers (rather than patients or community members), do not need to be registered. In this case, researchers may still consider registering the trial to avoid any barriers to publication if a journal editor disagrees with their assessment. Regardless, if the trial evaluates the effects of the provider intervention on patient outcomes, then the trial should be registered. 
    • Example: In a clinical trial evaluating an educational intervention's impact on physical therapists’ prescribing behavior, registration is not required because outcomes are measured at the provider level (see case study example three in Hinman 2015). 

More information on trial registration, including J-PAL requirements, can be found in this resource on trial registration. Information on labeling a journal submission that is not a clinical trial is discussed below. 

Develop a trial protocol

A trial protocol outlines the objectives, methodology, study procedures, statistical analysis plan (SAP), and ethical considerations of a study. This is the same protocol that researchers must submit to an institutional review board (IRB) prior to beginning human subjects research. Medical journals require the trial protocol to be submitted at the time of manuscript submission. Because IRBs may vary in what information they request, researchers should use the SPIRIT reporting guidelines when preparing protocols to ensure completeness. An example is the trial protocol (with an integrated SAP) for Health Care Hotspotting.

Researchers should plan to submit the original approved protocol along with all amendments and their rationale. Amendments should always be reviewed and approved by an IRB prior to taking effect. Significant changes to the original protocol should be described in any future publications. The trial protocol is typically included with a publication as an online supplement. 

Develop a statistical analysis plan (SAP)

A statistical analysis plan (also known as a pre-analysis plan) provides detailed information on how data will be coded and analyzed. A SAP can be integrated into a trial protocol or may exist as a separate document. Most medical journals require a SAP to be submitted along with the manuscript.

Unlike trial registration, journals differ on when a SAP must be completed. However, for some journals, completing a SAP prior to the launch of the intervention, or at least prior to receiving data and beginning analysis, can increase the credibility of findings and/or allow researchers greater freedom in using causal language when interpreting results. Defining analysis plans in advance also reduces concerns about specification searching (p-hacking), particularly when there is researcher discretion about what outcomes to consider and when and how they are measured. Most medical journals will require researchers to indicate which analyses are prespecified and which are post hoc. As with the trial protocol, the final SAP should be submitted to the journal, with any changes to original plans tracked and justified. 

J-PAL does not universally require SAPs, but many individual J-PAL initiatives and funders require that an analysis plan be completed prior to the launch of an evaluation. For example, all J-PAL North America funded projects must upload a SAP to the AEA RCT Registry. See this resource on pre-analysis plans for more detailed information and recommended practices. 

Guidelines for the Content of Statistical Analysis Plans in Clinical Trials provides a checklist that researchers can follow to help structure their SAPs. A detailed explanation of each section can be found in Appendix 2 of the article. Some important topics to cover in a SAP include:

Outcomes

Ensure primary and secondary outcomes are prespecified. Defining a single primary outcome aids in interpretation of the overall result of the research, particularly if multiple outcomes yield different results. The primary outcome should be one that the study is well powered to detect, whereas important outcomes for which researchers are less confident of detecting effects can be listed as secondary outcomes. For example, in medical research a short-term clinical endpoint may be selected as the primary outcome, with mortality selected as a secondary outcome. Include details on how and when outcomes will be measured, their units, and any calculations or transformations made to derive outcomes. 

  • Example: Prespecifying a primary outcome may be unfamiliar to social scientists. In Mandatory Medicare Bundled Payment Program for Lower Extremity Joint Replacement and Discharge to Institutional Postacute Care, researchers chose to measure discharges to postacute care as the primary outcome of a bundled payment model on lower extremity joint replacements. Prior observational studies suggested that discharges to postacute care would be the main outcome to change as a result of the bundled payment model, and power calculations confirmed that a reasonable effect size could be detected. Researchers analyzed secondary outcomes to measure additional impacts of the bundled payment model, including spending, the number of days in postacute care, changes to quality of care, and patient composition.
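For intuition on what such a power calculation involves, the sketch below uses the standard normal-approximation formula for the sample size needed per arm to detect a difference between two proportions. The function name, rates, and parameters are made up for illustration and are not taken from the study above.

```python
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm to detect a difference between two
    proportions (two-sided test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for significance level
    z_b = NormalDist().inv_cdf(power)          # quantile for desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# Illustrative rates: detecting a drop in a binary outcome from 20% to 15%
n = n_per_arm(0.20, 0.15)  # 903 participants per arm
```

Running the calculation for a range of plausible effect sizes before enrollment makes it easier to justify the choice of primary outcome in the SAP.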
     

Planned analyses

Include statistical models, any plans for covariate adjustment, approaches to noncompliance or partial take-up, and any analyses beyond the intention-to-treat effect, such as the local average treatment effect (LATE).

Missing Data

Outline how missing data will be dealt with and reported. Attrition due to imperfect matches in administrative data and survey non-response are both situations in which researchers will need a plan to address missing data. Medical journals often have strong opinions about how to handle missing data and may request sensitivity analysis or multiple imputation even when missingness is low. Defining an approach in advance will reduce concerns about specification searching. The plan may include flexibility depending on the severity of missing data. See the Journal of the American Medical Association's (JAMA) guidelines on missing data and the New England Journal of Medicine's (NEJM) recommendations for handling missing data.

  • Example: In Health Care Hotspotting, even though match rates to administrative data were greater than 95 percent, researchers were asked to include an analysis using multiple imputation as a sensitivity check.
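One simple sensitivity check that can be prespecified, sketched below with made-up numbers, is an extreme-value bounds analysis: fill missing outcomes with the extremes of their plausible range and see how far the estimate can move. This is only an illustration of planning for missingness, not the multiple imputation analysis a journal may specifically request.

```python
def bounds_sensitivity(treat, control, low, high):
    """Bound the difference in group means by filling missing outcomes
    (coded as None) with the extremes of the outcome's plausible range."""
    fill = lambda xs, v: [v if x is None else x for x in xs]
    mean = lambda xs: sum(xs) / len(xs)
    lower = mean(fill(treat, low)) - mean(fill(control, high))   # least favorable fill
    upper = mean(fill(treat, high)) - mean(fill(control, low))   # most favorable fill
    return lower, upper

# Hypothetical outcomes on a 0-5 scale, with missing observations as None
lower, upper = bounds_sensitivity([3.0, 2.0, None, 4.0, 1.0],
                                  [2.0, None, 3.0, 2.0, 1.0],
                                  low=0.0, high=5.0)
```

If the bounds are tight around the main estimate, missingness is unlikely to drive the conclusion; if they are wide, a more formal imputation strategy is worth specifying in advance.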

Multiplicity

Testing multiple hypotheses increases the probability of false positive results. Therefore, researchers should have a plan to address multiple testing when designing an evaluation. Absent a prespecified approach, journals may treat results for secondary outcomes as only exploratory. The multiple hypothesis testing section of this resource on data analysis has suggestions on statistical corrections when examining multiple outcomes. 

  • Example: In Health Care Hotspotting, because researchers did not address multiplicity in their SAP, they were asked to remove p-values from all but the primary outcome and include the following caveat about interpreting secondary outcomes: “There was no prespecified plan to adjust for multiple comparisons; therefore, we report P values only for the primary outcome and report 95 percent confidence intervals without P values for all secondary outcomes. The confidence intervals have not been adjusted for multiple comparisons, and inferences drawn from them may not be reproducible.”
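As one example of a prespecifiable correction, the sketch below implements the Holm step-down adjustment, which controls the family-wise error rate without assuming independence across tests. The p-values are hypothetical and purely illustrative.

```python
def holm_adjust(pvalues):
    """Holm step-down adjustment for multiple hypothesis testing.

    Returns adjusted p-values in the original order; controls the
    family-wise error rate without assuming independence of tests.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # indices from smallest p-value
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        # Step-down multiplier shrinks as larger p-values are reached
        running_max = max(running_max, (m - rank) * pvalues[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

# Hypothetical p-values: one primary and three secondary outcomes
adj = holm_adjust([0.004, 0.030, 0.020, 0.300])
```

Naming a specific correction like this in the SAP, before seeing the data, is what allows results on secondary outcomes to be presented as confirmatory rather than exploratory.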
     

2. While implementing an RCT

Medical journals typically require significant detail on how participants progress through an evaluation. During an evaluation’s implementation stage, researchers should keep track of participants’ study involvement, the mechanics of randomization, and fidelity to the study protocol.

Track participants throughout the evaluation

Researchers should track the flow of participants as they move through each stage of the trial and should report this in their paper. The Consolidated Standards of Reporting Trials (CONSORT) Statement, a minimum set of recommendations for reporting RCTs, recommends using this diagram (see Figure 2) to display participant flow. All trials published in medical journals include this CONSORT flow diagram, and one should always be included as a figure in your paper (typically the first exhibit). See an example of a CONSORT flow diagram below. 

At a minimum, researchers should track the number of individuals randomly assigned, the number who actually receive the intervention, and the number analyzed. Information on individuals assessed for eligibility prior to randomization can be included if available. Any additional information on exclusions or attrition (also known as loss to follow-up) should be included as well.
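Keeping a running tally of these counts during implementation makes assembling the flow diagram straightforward later. A minimal sketch, with invented stage names and numbers:

```python
# Running tally of CONSORT flow counts; stage names and numbers are illustrative
flow = {
    "assessed_for_eligibility": 0,
    "randomized": 0,
    "received_intervention": 0,
    "lost_to_follow_up": 0,
    "analyzed": 0,
}

def record(stage, n=1):
    """Increment the count for one stage of participant flow."""
    flow[stage] += n

record("assessed_for_eligibility", 1200)
record("randomized", 800)
record("received_intervention", 760)
record("lost_to_follow_up", 40)
record("analyzed", 800 - 40)
```

Reconciling these counts as the trial runs (e.g., randomized minus loss to follow-up should equal analyzed) catches tracking gaps long before manuscript preparation.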

Diagram of randomization flow

Source: Finkelstein et al. 2020

Track the mechanics of randomization

Researchers should keep track of the randomization process and the mechanics of implementation, as these details must be reported in detail in a manuscript. This typically includes the type of randomization procedure (e.g., whether randomization was stratified, done on the spot, or from a list), as well as who generated random assignments, how the information was concealed or protected from tampering, and how it was revealed to participants. Items 8-10 of the CONSORT checklist cover reporting on randomization processes. See Financial Support to Medicaid-Eligible Mothers Increases Caregiving for Preterm Infants for an example of reporting these details in a manuscript. 
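To make these mechanics concrete, the sketch below performs seeded, stratified 1:1 assignment; because the seed is fixed, a third party can regenerate and audit the full assignment list. All function, unit, and stratum names are hypothetical.

```python
import random

def stratified_assign(units, strata, seed):
    """Assign units to treatment or control, balanced 1:1 within each stratum.

    The fixed seed lets a third party regenerate and audit the full list.
    """
    rng = random.Random(seed)
    by_stratum = {}
    for unit, stratum in zip(units, strata):
        by_stratum.setdefault(stratum, []).append(unit)
    assignment = {}
    for members in by_stratum.values():
        rng.shuffle(members)
        half = len(members) // 2  # odd-sized strata place the extra unit in control
        for unit in members[:half]:
            assignment[unit] = "treatment"
        for unit in members[half:]:
            assignment[unit] = "control"
    return assignment

# Eight hypothetical providers in two clinic strata
units = [f"provider_{i}" for i in range(8)]
strata = ["clinic_A"] * 4 + ["clinic_B"] * 4
assignment = stratified_assign(units, strata, seed=42)
```

Archiving the code, the seed, and the input roster answers the manuscript questions of who generated assignments and how they were protected from tampering.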

Monitor implementation fidelity

The CONSORT statement notes that “the simple way to deal with any protocol deviations is to ignore them.” This is consistent with the intention-to-treat approach to randomized evaluation. However, providing detail is helpful for contextualizing and interpreting results. See the resource on implementation monitoring and real-time monitoring and response plans for guidance. Researchers approach monitoring implementation and reporting on it in different ways. 

  • In Health Care Hotspotting, researchers quantitatively tracked program metrics in the treatment group and reported them in a table. 
  • In Effects of a Workplace Wellness Program on Employee Health, Health Beliefs, and Medical Use, researchers reported adherence by presenting completion rates for screenings and other elements of the intervention, as well as control group awareness of the intervention.

3. After completing an RCT 

Clinical trial reporting in medical journals follows a specific style characterized by consistent structure, short word counts, and a small number of tables and figures. The Consolidated Standards of Reporting Trials (CONSORT) Statement provides a set of standardized recommendations, in the form of a checklist and flow diagram, for reporting randomized evaluations in medical journals. Researchers should consult the CONSORT Statement, as well as particular journal requirements, in order to ensure a manuscript is properly formatted and structured. This section highlights elements of analysis and manuscript preparation that may be less familiar to social scientists.

Adhere to medical journal analysis norms

CONSORT guidelines state that researchers should not report statistical tests of balance in participant characteristics, and many medical journals follow this recommendation. This practice differs from a common norm in economics to report balance tests in summary statistics tables. Researchers can find more information on the logic behind this rule in the CONSORT Explanation and Elaboration resource. 

Results should be reported for all primary and secondary outcomes, regardless of their statistical significance. Detailed results should be reported for each outcome, typically mean levels for each group, treatment effect, confidence interval, and p-value. Confidence intervals are preferred to standard errors. For binary outcomes, both absolute and relative effects should be reported. Researchers should make clear which analyses are prespecified, particularly in the case of subgroup analyses. 

Below is an example of a results table with multiple outcomes from Health Care Hotspotting. Each row highlights one outcome, with mean levels for each group, an unadjusted group difference (from a regression without controls), an adjusted group difference (from a regression with controls), and corresponding confidence intervals. In this case, all analyses were prespecified and p-values were omitted. 

Example of table in medical journal

Source: Finkelstein et al. 2020
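The unadjusted and adjusted group differences in a table like this come from regressions of the outcome on a treatment indicator, without and with baseline covariates. Below is a minimal sketch using simulated data; the data-generating process, the single covariate, and the true effect of -0.5 are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated trial: randomized treatment, one baseline covariate, outcome
treat = rng.integers(0, 2, size=n)
age = rng.normal(50, 10, size=n)
y = 2.0 - 0.5 * treat + 0.03 * age + rng.normal(0, 1, size=n)

def ols_treatment_effect(y, treat, covariates=None, z=1.96):
    """OLS coefficient on the treatment indicator with a 95% CI."""
    cols = [np.ones(len(y)), treat]
    if covariates is not None:
        cols.extend(covariates)
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], (beta[1] - z * se, beta[1] + z * se)

unadj, unadj_ci = ols_treatment_effect(y, treat)                # no controls
adj, adj_ci = ols_treatment_effect(y, treat, covariates=[age])  # with controls
print(f"unadjusted: {unadj:.2f} {unadj_ci}; adjusted: {adj:.2f} {adj_ci}")
```

Because treatment is randomized, both regressions target the same effect; adjusting for prognostic baseline covariates typically tightens the confidence interval.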

Construct tables and figures 

Journals typically cap the total number of exhibits (tables and figures) at five. Figure 1 should typically be the participant flow diagram. A summary statistics table, typically Table 1, should always describe baseline demographic and clinical characteristics for each group. The remaining figures and tables display the results of the evaluation.

For studies with multiple treatment arms or subgroup analyses, it may be helpful to use an exhibit with multiple panels or a hybrid exhibit that combines a table and a figure to provide more detail. Figure 2 in Effectiveness of Behaviorally Informed Letters on Health Insurance Marketplace Enrollment gives an example of a hybrid exhibit. Note, however, that some journals may not view this as a single exhibit. Additional exhibits may be included in a supplemental appendix.

Prepare your manuscript with the correct structure

Before preparing a manuscript for a medical journal, you should consult the specific word and exhibit limits of that journal as well as its requirements on how the manuscript should be structured. Compared to social science publications, reports of clinical trials in medical journals are consistently structured and short (e.g., 3000 words) with binding limits on the number of words and exhibits. Medical journals are generally much more prescriptive about what each section in the paper must accomplish. Medical journals do allow for supplemental appendices where researchers can include details that did not fit in the main body of the paper. Most journals follow CONSORT reporting requirements. Consult the CONSORT checklist as you write and make sure to include all sections. Some journals require submission of the checklist to ensure that all elements have been addressed.

Some key requirements from the CONSORT checklist have been highlighted below:

Title

Include ‘randomized’ in the manuscript title so that databases properly index the study and readers can easily identify the study as a randomized trial.

Abstract

The abstract should be framed as a mini paper, with clear headings, and include a sufficient summary of the work, including results and interpretation. The abstract is not a preview that raises questions to be answered in the main text. Effort should be focused here because this is the first section that will be reviewed and often the only section many readers will read. A full list of what to include can be found in Item 1b of the CONSORT Statement. Trial registration information (registry name, trial ID number) should be listed at the end of the abstract.

Methods

Despite the short length of medical journal articles, the methods section must include a high level of detail. Required details include mechanisms of random assignment, power calculations, intake and consent procedures, where IRB approval was granted, statistical analysis methods, and software packages used. 

Results

Participant flow and baseline characteristics should be described first, followed by the effects of the intervention. Results are presented in text with similar detail to the tables discussed above (i.e., control and treatment group means, effect sizes, confidence intervals). The results section presents results without interpretation, which is reserved for the discussion section.

Discussion

The discussion section offers the opportunity to discuss limitations and generalizability and to provide additional interpretation. This section is where researchers can weigh the sum total of the evidence if results across outcomes are mixed, or draw comparisons to results in other scholarship. It also offers an opportunity to summarize and conclude, though information provided in other parts of the paper should not be repeated in detail. If a journal allows a separate conclusion section, it is typically a single summary paragraph.

Researchers should ensure their paper is properly pitched for a medical or public health audience rather than an economics audience. Consulting a journal's stated aims and scope can help with framing study results and emphasizing their relevance to the field. Highlighting the clinical rationale and patient care implications of a study can be helpful, as medical journal readers are often interested in these factors. If possible, researchers should have someone who has frequently published in medical journals review the manuscript before submission to make sure it is properly framed for the intended audience.

Submit your manuscript for review and publication 

There are some additional considerations to keep in mind at the time of submission:

Cover letter

Journals may require submission of a cover letter along with a manuscript. There are conflicting opinions on whether or not the cover letter is important. One view is that it needs to say nothing more than “Thank you for considering the attached paper.” Another view is that the cover letter is an opportunity to pitch the paper and explain why the findings are important and relevant to the readers of this journal. It can also be used to provide context for information included elsewhere in the submission, such as whether there are preprints of the paper or potential conflicts of interest.

Study type label

When submitting a paper to a journal, researchers typically select what type of study is being submitted. If the study is labeled as a clinical trial, journals will expect it to have been registered prior to enrollment; if it was not, the paper may be automatically rejected. If the study is a secondary analysis of a clinical trial, make sure to specify this. Journals may list secondary analyses as a subcategory of clinical trials or as a separate type. A detailed definition of clinical trials and considerations for trial registration are discussed in Section 1 of this resource.

Preprints and working papers

Journals may be reluctant to publish studies or results that are already available online. Distributing a manuscript as a working paper or uploading it to a preprint server may lead journals to de-prioritize publication of a paper. Researchers should confirm the policies at their target journal(s) prior to releasing a working paper. The BMJ, The Lancet, NEJM, and JAMA do not penalize studies that are already available as preprints as long as researchers provide information about the preprint during the submission process. Other journals may accept manuscripts only if they have been significantly revised after being released as a working paper.

Embargoes

Most journals impose embargoes under which authors cannot disclose paper acceptance or share results until the work has been published. Most journals allow researchers to present at conferences but may impose restrictions, such as prohibiting researchers from sharing complete manuscripts, and may request that researchers inform the journal of planned presentations. Embargoes typically include a post-acceptance window during which the media has access to upcoming publications, in order to conduct interviews and prepare articles that can be released once the manuscript is published.

Researchers should consult individual journals to ensure they are adhering to their specific embargo rules. Researchers working with implementing partners should ensure that everyone understands and can abide by the embargo policy, and both parties should work together to develop communication plans to take advantage of media access periods. This resource on communicating with a partner about results has additional details on embargo policies and communication plans. While embargoes may seem restrictive, medical journals tend to publish frequently, so the publishing timeline and embargo period can often move along quickly. 

4. Examples of published work from J-PAL researchers 

A selection of J-PAL-affiliated studies published in medical journals is highlighted below.

  • Health Care Hotspotting — A Randomized, Controlled Trial (NEJM)
  • Mandatory Medicare Bundled Payment Program for Lower Extremity Joint Replacement and Discharge to Institutional Postacute Care (JAMA) 
  • Financial Support to Medicaid-Eligible Mothers Increases Caregiving for Preterm Infants (Maternal and Child Health Journal) 
  • Effect of a Workplace Wellness Program on Employee Health and Economic Outcomes: A Randomized Clinical Trial (JAMA)
  • Effects of a Workplace Wellness Program on Employee Health, Health Beliefs, and Medical Use: A Randomized Clinical Trial (JAMA Internal Medicine)

 

Acknowledgments: Thanks to Marcella Alsan, Catherine Darrow, Joseph Doyle, Laura Feeney, Amy Finkelstein, Ray Kluender, David Molitor, Hannah Reuter, and Adam Sacarny for their thoughtful contributions. Amanda Buechele copy-edited this document. Creation of this resource was supported by the National Institute On Aging of the National Institutes of Health under Award Number P30AG064190. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Endnotes

1. The full list of journals that follow ICMJE recommendations can be found on ICMJE.org.

Additional Resources

  • The CONSORT statement provides information on the CONSORT checklist and flow diagram, as well as the CONSORT Explanation and Elaboration Document, which contains the rationale for items included in the checklist. 
  • The ICMJE website includes information about:
    • Clinical Trial Registration
    • Member Journals
    • Preparing a Manuscript for Submission to a Medical Journal
  • JAMA has extensive instructions for authors, covering the requirements for reporting different types of research like clinical trials, including the handling of particular topics like missing data.
  • J-PAL’s Research Resources provide additional information on several topics discussed in this resource, including: 
    • Trial registration
    • Pre-analysis plans
    • Implementation monitoring
    • Real-time monitoring and response plans: Creating procedures
    • Data analysis, including a section on heterogeneity analysis and multiple hypothesis testing
    • Communicating with a partner about results
  • SPIRIT Checklist for Trial Protocols

References

Finkelstein, Amy, Annetta Zhou, Sarah Taubman, and Joseph Doyle. “Health Care Hotspotting — A Randomized, Controlled Trial.” New England Journal of Medicine 382, no. 2 (January 9, 2020): 152–62. https://doi.org/10.1056/NEJMsa1906848.

Finkelstein, Amy, Annetta Zhou, Sarah Taubman, Joseph Doyle, and Jeffrey Brenner. “Health Care Hotspotting: A Randomized Controlled Trial.” AEA RCT Registry (2014). https://doi.org/10.1257/rct.329.

Finkelstein, Amy, and Jeffrey Brenner. “Health Care Hotspotting: A Randomized Controlled Trial.” ClinicalTrials.gov (March 18, 2014). https://clinicaltrials.gov/ct2/show/NCT02090426.

Finkelstein, Amy, Yunan Ji, Neale Mahoney, and Jonathan Skinner. “Mandatory Medicare Bundled Payment Program for Lower Extremity Joint Replacement and Discharge to Institutional Postacute Care: Interim Analysis of the First Year of a 5-Year Randomized Trial.” JAMA 320, no. 9 (September 4, 2018): 892–900. https://doi.org/10.1001/jama.2018.12346.

Finkelstein, Amy, Yunan Ji, Neale Mahoney, and Jonathan Skinner. “The Impact of Medicare Bundled Payments.” ClinicalTrials.gov (January 23, 2018). https://clinicaltrials.gov/ct2/show/NCT03407885.

Gamble, Carrol, Ashma Krishan, Deborah Stocken, Steff Lewis, Edmund Juszczak, Caroline Doré, Paula R. Williamson, et al. “Guidelines for the Content of Statistical Analysis Plans in Clinical Trials.” JAMA 318, no. 23 (December 19, 2017): 2337–43. https://doi.org/10.1001/jama.2017.18556.

Hinman, Rana S., Rachelle Buchbinder, Rebecca L. Craik, Steven Z. George, Chris G. Maher, and Daniel L. Riddle. “Is This a Clinical Trial? And Should It Be Registered?” Physical Therapy 95, no. 6 (June 1, 2015): 810–14. https://doi.org/10.2522/ptj.2015.95.6.810.

J-PAL. “Health Care Hotspotting in the United States.” https://www.povertyactionlab.org/evaluation/health-care-hotspotting-united-states

J-PAL. “The Spillover Effects of a Nationwide Medicare Bundled Payment Reform.” https://www.povertyactionlab.org/evaluation/spillover-effects-nationwide-medicare-bundled-payment-reform

Reif, Julian, David Chan, Damon Jones, Laura Payne, and David Molitor. “Effects of a Workplace Wellness Program on Employee Health, Health Beliefs, and Medical Use: A Randomized Clinical Trial.” JAMA Internal Medicine 180, no. 7 (July 1, 2020): 952–60. https://doi.org/10.1001/jamainternmed.2020.1321.

Song, Zirui, and Katherine Baicker. “Effect of a Workplace Wellness Program on Employee Health and Economic Outcomes: A Randomized Clinical Trial.” JAMA 321, no. 15 (April 16, 2019): 1491–1501. https://doi.org/10.1001/jama.2019.3307.

Ware, James H., David Harrington, David J. Hunter, and Ralph B. D’Agostino. “Missing Data.” New England Journal of Medicine 367, no. 14 (October 4, 2012): 1353–54. https://doi.org/10.1056/NEJMsm1210043.

Yokum, David, Daniel J. Hopkins, Andrew Feher, Elana Safran, and Joshua Peck. “Effectiveness of Behaviorally Informed Letters on Health Insurance Marketplace Enrollment: A Randomized Clinical Trial.” JAMA Health Forum 3, no. 3 (March 4, 2022): e220034. https://doi.org/10.1001/jamahealthforum.2022.0034.
