
Define intake and consent process

Authors
Contributors
Kim Gannon
Chloe Lesieur
Summary

Far from a simple administrative step, decisions about a study’s intake and consent process are critical for the success of a study. This process can affect statistical power, bias, and the validity of the study through effects on the composition of the consented study sample, the intake/consent rate, and the attrition rate. Consent processes can also influence the behavior or morale of study participants, implementing partners, and key study staff. Recognizing that researchers have many demands on their time at the beginning of a study, we created this guide to highlight the importance of thoughtful design of consent procedures, particularly in areas where the Common Rule1 does not provide explicit guidance or regulation.

Introduction

We structured this resource to parallel our suggested order of operations for designing a recruitment and consent process.

  1. First, understand which activities require consent, and whether a waiver or alteration of consent may be appropriate.
  2. Next, determine who must provide consent (or assent) and what to consider for interventions with impacts beyond the individuals directly contacted or randomized.
  3. Next, assess existing intake processes.
  4. With this key information, researchers must decide whether to seek consent before or after revealing assignments to treatment or control, considering tradeoffs between statistical power, different sources of bias, and staff burden.
  5. The timing of consent relative to revealing treatment assignments may have consequences for who should administer informed consent and how to train personnel in consent and study recruitment. Personnel must be trained to describe the risks and benefits of an intervention or of study participation in a way that ensures true understanding among participants, maximizes take-up, and minimizes attrition. 
  6. Determine what level of compensation to offer in order to provide fair compensation without unduly influencing potential participants.

The US Department of Health and Human Services (HHS) provides decision charts as a guide for researchers to decide if an activity requires IRB review or informed consent. With information and justification provided by the research team, the IRB makes the ultimate determination of informed consent requirements.2

Consent may be required to administer a program or procedure, collect personal data, or both. Combining the consent process for both the program procedures and data collection can create efficiencies and minimize some types of bias (see "When should randomization and consent occur?").

However, knowledge of assignment to treatment or control, or of the nature of the intervention or research, may influence an individual’s decision to consent to share data. Separating the consent process for data collection from that for the experimental program or procedure may reduce this potential bias.3 For interventions or treatments that would be available in the absence of a research study, ethical standards may require that researchers allow participants to deny or withdraw consent to the use of their data without losing access to the treatment. 

Consent to the intervention

Consent to the program or intervention may be required under a variety of circumstances, including: 

  • If the investigators are involved in the design or implementation of the program or procedures.
  • If the program or procedure involves more than minimal risk.4
  • If consent to the program or procedure would be required even in the absence of a research study. For example, many health- or medical-related interventions obtain informed consent as standard practice. In this case, researchers may consider incorporating consent for the research itself—including for use of data—into this existing consent process. 

If the program, procedure, or intervention does not meet these criteria, it may not be necessary to obtain consent to administer the program as part of a research study. Researchers can work with their research partners to propose a consent process or request a waiver of consent from the IRB. An example of a study where the consent requirement was waived, and an example where consent was incorporated into existing procedures, are described in the case studies at the end of "Assess existing intake processes" in this resource.

Consent to the use of personally identifiable information5

Whether and how researchers must obtain consent to analyze data related to humans depends on whether personally identifiable information (PII) will be collected for study purposes; whether that PII will be accessible to researchers; whether the sharing or use of the data involves more than minimal risk to the subject; and whether it would be practicable to obtain consent from all participants.6
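One rough way to see how these factors interact is sketched below. This toy helper is our own illustration, not a legal or IRB standard; the function name, inputs, and messages are assumptions, and the IRB, with justification from the research team, makes the actual determination (as noted above).

```python
def consent_path_hint(collects_pii: bool,
                      pii_accessible_to_researchers: bool,
                      more_than_minimal_risk: bool,
                      consent_practicable: bool) -> str:
    """Rough, illustrative mapping of the factors above to a likely consent path.

    This is not a regulatory determination: researchers justify their proposed
    approach in the IRB application, and the IRB decides.
    """
    if not collects_pii and not pii_accessible_to_researchers:
        # De-identified or no identifiable data: consent may not be needed (possible exemption).
        return "consent may not be required; discuss a possible exemption with the IRB"
    if not consent_practicable and not more_than_minimal_risk:
        # Mirrors the waiver criteria discussed under "Waivers or alterations of informed consent."
        return "consider requesting a waiver or alteration of informed consent"
    return "plan to obtain informed consent for the collection and use of the data"

# Example: identifiable administrative records, minimal risk, and no practicable
# way to contact every individual (similar to the waiver case study below).
print(consent_path_hint(collects_pii=True, pii_accessible_to_researchers=True,
                        more_than_minimal_risk=False, consent_practicable=False))
```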

Detailed requirements for informed consent are listed in 45 CFR 46.116, and most IRBs provide template consent forms. Here, we highlight required elements of informed consent most relevant to data collection and additional requirements that may apply based on the type of data collection.

  • Primary data collection: Interviewers must obtain consent from the participant at the start of any interview or observation. 
  • Administrative or other secondary data: Researchers who know which data sources they intend to use should check the consent requirements of each data provider.7 Those who do not yet know which sources they will use can include in the consent form all categories, types, or specific data sources that they hope to collect, to reduce the need to re-consent participants. However, some data holders may not accept generic data descriptions in a consent form and may require researchers to re-obtain consent from each individual before releasing records. Data holders may require IRB approval or documentation of a waiver of consent prior to releasing data, even if such approval would not otherwise be required, and even for de-identified data. 
  • De-identified secondary data: Research involving only the use of secondary data that is already publicly-available and/or de-identified is exempt from ongoing IRB review (45 CFR 46.104(d)(4)(i)-(ii)), and does not require consent or a waiver. For studies that otherwise require IRB review, but where researchers only have access to secondary, de-identified data, researchers may seek a waiver of informed consent. Data custodians may require IRB approval or documentation of a waiver of consent prior to releasing data, even if such approval would not otherwise be required. 
  • Health-related data in the US: Researchers may also need to seek individual authorization or a waiver of authorization, per HIPAA requirements.
  • Future re-use of data: The revised Common Rule, effective January 2019, creates the possibility of seeking prospective consent to unspecified future research from subjects for secondary research use of their data. Broad consent is detailed in 45 CFR 46.116, and your Institutional Review Board may be able to provide additional information. 

Waivers or alterations of informed consent

An IRB may grant an alteration to specific part(s) of the informed consent and documentation requirements, or may waive the requirement entirely, based on conditions identified in 46.116(e)-(f) and 46.117 of the Common Rule. Researchers must request the waiver or alteration in an IRB application and justify it based on the Common Rule’s requirements. 

Researchers may request waivers or alterations to account for context-specific factors if doing so protects the populations included in the study. Examples of these factors include: if documentation of informed consent would be the only link between an individual and the research, if low literacy rates make written consent inappropriate, or if signing a consent form would conflict with cultural norms, as explained in 45 CFR 46.117. In Conducting Ethical Economic Research: Complications from the Field, the authors discuss these and other factors in more detail (Alderman, Das, and Rao 2016).

In other cases, a waiver or alteration may be necessary to ensure the scientific validity of a study. For example, if the aim of an evaluation is to determine which types of individuals respond to a letter, but only those participants who would respond to the letter would be able to provide consent, a requirement to obtain consent would bias the results. This case may meet the criterion in 45 CFR 46.116(f)(3)(ii), “The research could not practicably be carried out without the requested waiver or alteration.”

See the Appendix for example language used to apply for a waiver of informed consent.

Who must provide consent?

As described in the previous section, individuals must provide consent if researchers will use their private data and/or if they will receive a treatment or intervention, unless researchers receive a waiver of informed consent. In this section, we discuss special cases where additional consent requirements might be considered.

Children or minors

In the United States, the legal age at which an individual can provide consent to treatments or procedures varies by state and locality. In most localities, this age is 16 or 18 but may vary by type of research or procedure. For research involving minors, researchers must obtain permission from the parent or legal guardian and assent—an affirmative agreement to participate in research—from the minor. The assent procedure—including what information is conveyed, whether written documentation is required, etc.—should be tailored to the age and developmental stage of the minor. The IRB may require researchers to obtain consent from individuals who reach the age of majority during the study to supplement their original assent.  

In some cases, for example research on sexuality, abuse, or drug use, the process of obtaining parental consent may not protect the minors in a study, or may put minors at risk. In these cases, an IRB may waive the requirement of obtaining parental consent. If research involves solely treatments or procedures for which minors can give consent outside the research context (under applicable state and local laws, for example, research on sexually transmitted diseases or pregnancy), then parental permission (or waiver thereof) is not a consideration—minors may provide their own informed consent (“Research with Children FAQs” n.d.).  

The Common Rule and other federal regulations provide additional protections for minors beyond the assent requirement; see the HHS guidance listed in the Additional Resources section. Further discussion of this topic can be found in the blog post, “Research with adolescents: issues surrounding consent.”

Beyond the study population: other populations to consider

Some interventions have impacts beyond the unit of randomization or analysis. For example, an intervention may affect individuals within a cluster, or interventions may have spillover or general equilibrium effects. Interventions designed to affect the behavior of individuals or organizations that have influence over a large number of people—such as health care providers, politicians, judges, voters, or large employers—may have impacts on an unidentified, and potentially very large, number of other individuals. In these cases, it may not be possible or practicable to seek consent from each individual who may be affected: there may be too many individuals to track, no list of potentially affected individuals may exist, or there may be no credible alternative to receiving or being affected by the intervention.

The Common Rule does not provide clear guidance in these cases. While some IRBs with social science research experience may give guidance or instructions to research teams under their purview, many researchers must take responsibility for assessing whose consent may be required, or whether it is ethically justifiable to administer an intervention without full consent of all those who may be impacted.8

When consent from all impacted individuals is not possible, one option is to work with representatives familiar with or from the local community who can provide guidance on the context as well as local preferences and standards. For example, in a hospital-based study, these representatives may include the hospital’s medical board, community members of the hospital’s ethics committee, or lead physicians. In a school-based study, these may include members of the Parent Teacher Association, or the school principal. With guidance from these representatives, and approval by the relevant IRB(s), consent to participate in the intervention may not be required from all individual subjects.

Assess existing intake processes

Existing programs may already have recruitment, intake, and/or consent processes. A thorough understanding of existing intake processes can inform study design decisions—from the consent process to opportunities for random assignment or variations in treatment. Incorporating informed consent and random assignment into existing processes can leverage a partner’s expertise and minimize overall burden on study personnel and participants. 

Official protocols may be out of date, may not reflect the needs or realities of work on the ground, or otherwise may not match the actual recruitment and intake processes, which may differ substantially across sites. If possible, direct interaction with staff responsible for recruitment and intake, or observation of the intake process, can be helpful. Observing intake during the study period can help research teams identify and address challenges and gain insight into the partners’ perspective.

Case studies

Informed consent required for both the program and data collection. Nurse-Family Partnership (NFP) is a non-profit organization that provides low-income, first-time mothers with intensive support through regular home visits in order to improve pregnancy outcomes, child health and development, and the economic self-sufficiency of the family. Nurses inform mothers about the program and seek their consent to participate as part of the program enrollment process. J-PAL-affiliated researchers are collaborating with NFP on an evaluation of the NFP program in South Carolina. Rather than create a distinct process for seeking consent to data access as part of the evaluation, researchers worked with NFP to incorporate study consent and on-the-spot randomization into treatment or control groups into the enrollment process, occurring before program consent. This single process was the most logistically feasible option and helped to ensure balance between treatment and control groups (i.e., that each group would be equally likely to consent to program participation).

Waiver of informed consent. Benefits Data Trust (BDT) sends targeted outreach letters and provides person-centered application assistance to individuals who are likely eligible for benefits and services, such as elderly individuals who are likely eligible for the Supplemental Nutrition Assistance Program (SNAP). J-PAL-affiliated researchers collaborated with BDT to study the impact of BDT’s programs on take-up of SNAP and investigate the differences between those who respond to the letters, apply for SNAP, and enroll in SNAP, and those who do not reply, apply, or enroll. The research team requested and received a waiver of the informed consent requirement from the IRB. This waiver was necessary for the validity and practicability of the study because only those individuals who responded to the outreach letters would have had opportunity to provide or refuse consent, precluding an analysis of responders and non-responders. The waiver was justifiable because the study involved no more than minimal risk, and because the researchers were able to design the study such that they had no access to PII, nor any interaction with the study population.

Intervention has potential impacts beyond the unit of randomization or observation. J-PAL affiliated researchers partnered with Aurora Health Care, a large health system in Wisconsin and Illinois, to study the impact of clinical decision support on the number and clinical appropriateness of certain high-cost medical imaging orders (primarily MRIs and CT scans). All health care providers were included in the study sample; for treatment providers, a pop-up window appeared when signing off on an image order if the scan met certain criteria. The window would display alternative scans with a higher appropriateness rating. The primary outcome of interest was the number of “targeted” scan orders per physician. The IRB waived most components of informed consent; however, researchers were required to send the providers an email informing them of the study and providing an opportunity to opt out. Those who opted out were still required to use the decision support tool, but their data were not used as part of the evaluation. 

Because the intervention was designed to influence which images a physician orders for patients, the IRB and research team considered potential impacts on individual patients, though patients were neither the unit of observation nor randomization. The IRB did not require researchers to seek informed consent for patients. Rationale considered for this decision included: 1) there was no pre-existing list of patients to enroll; 2) it would not have been practicable to seek consent from the thousands of patients treated by Aurora Health Care in the year of the study; and 3) because providers maintained the ability to override the decision support tool and provide whatever care they deemed necessary, the study presented no more than minimal risk to the patients. 

When should randomization and consent occur?

We define two broad strategies for when to seek consent: consent before or after randomly assigning subjects to treatment or control conditions (or informing them of their assignments). Variations of these strategies may be used whether subjects are recruited from an existing list, on a rolling basis, on a referral basis, or on-the-spot.

Seeking consent before random assignment maximizes statistical power and may minimize bias from differential non-participation. Seeking consent after random assignment may minimize ethical challenges as well as the burden on recruitment staff. The decision of when to seek consent requires that research teams balance bias, statistical power, ethics, and staff burden, and consider feasibility and the preferences of implementing partners.

Consent before randomization 

Process: Individuals consent to the program, evaluation, and/or to the use of their personal information for research. After providing this consent, recruitment staff reveal the individual’s randomly assigned study condition (i.e., treatment or control). 

Reasons for use:

  • Bias: Obtaining consent from all subjects prior to random assignment minimizes bias introduced by individuals who might be more willing to consent if they view providing data as an exchange for receiving treatment. Although the informed consent process must assert that respondents will not lose any rights by consenting or refusing to participate, there is no guarantee that respondents will believe this assertion. Respondents may assume that researchers may retaliate if they do not consent, or that researchers may be more likely to provide a beneficial treatment if they consent or respond in a certain way (Alderman, Das, and Rao 2016; Roberts 2002). 
  • Statistical power: By randomizing only those willing to provide consent, obtaining consent prior to random assignment maximizes the effective take-up rate, and therefore the statistical power per individual who must be approached to request consent.9  If consent is required for the program, but not for data use, it may be possible to obtain consent only from those assigned to the treatment group (Zelen 1979). However, because provision of consent is neither universal nor random, it would not be possible to construct a control group of individuals who would have consented to treatment, requiring the use of an intent-to-treat (ITT) strategy. Obtaining consent from all potential subjects prior to random assignment ensures balance on willingness to provide consent. 

Challenges:

  • Ethics and bias: Knowledge of group assignment and the availability of a potentially beneficial alternative treatment may impact participants’ outlook or behavior. For example, awareness of a potentially life-saving health program and subsequent exclusion from this program may cause an individual to suffer enough stress that their health deteriorates. This behavior, termed “resentful demoralization,” presents an ethical challenge and a threat to the internal validity of the study (i.e., it may lead to differential attrition or non-compliance, or to a violation of the exclusion restriction if the impact is correlated with the outcomes of interest for the evaluation). 
  • Staff burden: This strategy may affect the morale of program staff or study recruiters who must communicate control group status to a participant. This may threaten the long-term viability of the study if staff do not comply with the study protocol, decrease their recruitment efforts, or resign their position. Providing support to staff and consistent refresher training on the importance of the research and random assignment may help to alleviate this concern. We present strategies to mitigate staff burden in the section “Training and motivating the enrollment team,” which appears at the end of "Who will conduct enrollment?"

Consent after randomization

Process: Recruitment staff reveal an individual’s randomly assigned study condition (i.e., treatment or control). Researchers then seek informed consent from the study sample. In a variant on this strategy, recruitment staff may seek consent to the treatment only from those assigned to the treatment, and seek consent for data use from all human subjects, whether assigned to treatment or to control. 

Reasons for use:

  • Ethics: Requesting consent and offering access to the treatment or program only to those assigned to the treatment group may alleviate the ethical challenges of informing individuals about their assignment to the control group. This may minimize the psychological burden on individuals assigned to the control group, as well as on study or program personnel. 
  • Staff time: If only the treatment group must consent (assuming a waiver of consent for data use), and depending on rates of consent and take-up, this strategy may minimize the number of individuals study staff must approach to seek consent. 

Challenges:

  • Bias and statistical power: If consent to the program is obtained only from the treatment group, there is no way to know which members of the control group would have consented to the program if assigned to treatment. This could produce an average difference between treatment and control groups unrelated to treatment assignment—a violation of the exclusion restriction. Eliminating this bias would require the use of an ITT strategy. The minimum detectable effect at a given level of power is inversely proportional to both the take-up rate and the square root of the sample size, so the required sample grows with the square of the inverse of take-up. This strategy may therefore require researchers to consent and treat many times more study participants than obtaining consent before randomization (see the sketch after this list).10
  • Bias and differential data attrition: Knowledge of or interaction with the intervention may influence the likelihood that individuals consent to data collection, introducing bias (McRae et al. 2011), particularly if individuals are aware of the connection between the intervention and the data collection. For example, individuals assigned to a treatment they perceive to be beneficial may be more likely to answer a survey or provide administrative data. Individuals assigned to a control group may refuse to answer a survey or refuse to allow the researcher to access their administrative data if they are unhappy with their placement. 
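To make the power tradeoff concrete, the short sketch below (our own illustration, not taken from the studies described here) uses the relationship stated above—the minimum detectable effect scales as one over the take-up rate times the square root of the sample size—to show how the required sample grows as take-up falls.

```python
# Minimal sketch (our own illustration) of the relationship described above,
# assuming a two-arm design in which take-up occurs only in the treatment group:
# MDE is proportional to 1 / (take_up * sqrt(N)), so holding the MDE and power
# fixed requires N to grow with 1 / take_up**2.

def sample_size_multiplier(take_up: float) -> float:
    """Factor by which the total sample must grow, relative to 100% take-up,
    to detect the same effect size at the same power."""
    if not 0 < take_up <= 1:
        raise ValueError("take-up must be in (0, 1]")
    return 1.0 / take_up ** 2

for take_up in (1.0, 0.5, 0.25, 0.10):
    print(f"take-up {take_up:>4.0%}: {sample_size_multiplier(take_up):>6.0f}x the sample")
# take-up 100%: 1x; 50%: 4x; 25%: 16x (the case in footnote 10); 10%: 100x
```

With 25 percent take-up, this reproduces the sixteen-fold increase in the number of people who must be offered treatment described in footnote 10.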

Who will conduct enrollment?

After assessing current intake procedures and determining when consent will be sought relative to random assignment, research teams must decide who will conduct the study recruitment and enrollment process. Enrollment staff may be hired specifically for the study, or existing program staff may take on this responsibility. 

A key principle in deciding who will conduct enrollment is to minimize participants’ perception of an association between the program/intervention and the data collection. Potential participants may feel pressure to consent to the use of their personal data if they believe doing so will result in better treatment by the program staff (even if the informed consent materials are clear that this is not the case). Participants who consent under such conditions may be more likely to drop out of the study or withdraw consent later, when they no longer feel in-person pressure from a service provider. Additionally, participants may answer intake or survey questions differently if they associate the interview with receipt of an intervention or services.   

Ideally, enrollment staff have two qualities:

  • Demonstrated sensitivity and familiarity in working with the community that comprises the study sample. This can facilitate understanding and connection between the enrollment staff and the potential participants, which may improve participants’ receptivity and understanding of the informed consent process. Existing staff of the implementing partner may have a level of training and familiarity with the community that would be difficult to replicate in newly hired enrollment specialists or research assistants. They may also have greater familiarity with the local language or dialect. 
  • Distinction from direct service providers. Whether enrollment staff are employed separately or sit in a different “division” of the implementing organization, using staff who are distinct from service providers facilitates specialization of labor and quality control of baseline data, and minimizes the association between the program/intervention and the data collection. Utilizing separate enrollment staff may also minimize the burden of the study on direct service providers. Retaining enrollment specialists requires identifying individuals who would be interested in and satisfied with a focus on outreach and initial engagement—rather than directly seeing clients progress through the intervention. These individuals may have less background in program implementation and/or an interest in research. 

Further suggestions for developing an enrollment team include:

  • Limit the number of staff conducting enrollment. This minimizes the number of staff who must be trained, and decreases the likelihood of variation in the enrollment process. While this strategy minimizes the number of staff who might experience morale challenges from communicating control group assignments, it increases the mental burden on enrollment staff, since each will communicate with more individuals than if enrollment were decentralized. 
  • Utilize service providers to conduct study enrollment and consent if logistical or financial barriers, such as a decentralized recruitment process or a large geographical catchment area, prevent the use of distinct enrollment specialists. This approach increases the association of the program with the data collection and, if recruitment must be decentralized, increases the number of staff involved in recruitment relative to the ideal scenario. The additional responsibility of enrollment is likely to increase these staff members’ workload, and teams should consider modifying expectations accordingly. Because they devote only a portion of their time to enrollment, such staff may develop fewer specialized recruitment skills or may not retain information from trainings as well as staff who specialize primarily in enrollment. An enrollment support specialist and frequent refresher training sessions can mitigate these challenges.
  • Hire an enrollment support specialist. If the recruitment and enrollment staff are not direct members of the research team, a designated specialist working with the researchers can maintain contact with field staff. This specialist can provide technical support for technology used, answer questions about study protocols, remind field staff of the importance of the randomization and the evaluation, and provide motivation to field and enrollment staff.

Training and motivating the enrollment team

Any individual involved in participant recruitment and consent should be well-trained in study intake processes (and, if necessary, survey enumeration) and be especially sensitive to and familiar with the community they intend to work with. Some steps to consider include:

  • Train enrollment staff in study intake and informed consent procedures. This training will also provide an opportunity to collect and respond to feedback from field staff. Allow time to modify the process in response to the feedback before the full project launch. This training may be combined with a survey training, if applicable.
    • Train staff on the dangers of over- or underselling the program, as detailed below in the section "Describing risks and benefits." 
    • Train enrollment staff in the mechanics of obtaining consent—including language prompts, comprehension checks, and the use of any associated forms or technology. 
  • Train enrollment staff in the importance of random assignment, and prepare them to address potential community or participant concerns about randomization. Enrollment staff are the front line in operationalizing random assignment, and may face difficult questions about denying treatment to eligible individuals. Ensuring they are prepared to have these conversations, understand their role in compliance with the research protocol, and are convinced of the rationale behind conducting a randomized evaluation, can help ensure a successful study. The "Resources" section at the end of this page links to non-technical resources that explain randomized evaluations.
  • Practice enrollment and consent processes. This can help ensure understanding and comfort with the form and process.  
  • Conduct refresher trainings regularly. These trainings will reiterate the content from the original training, train new staff, and provide an opportunity to discover necessary course corrections and receive regular input from enrollment staff.
  • Observe or check in on enrollment regularly. If possible, observing or shadowing the enrollment process can help to identify challenges and ensure adherence to enrollment protocols. Observation can help with the training of new staff, as well as help research staff better understand the process. Regular check-ins can help teams identify and address challenges in the enrollment process.
  • Provide motivation to enrollment staff. Researchers can also proactively work to support staff and maintain motivation and morale, before challenges become crises. Enrollment staff may spend many hours per day interacting with individuals who have experienced trauma and other difficult life experiences. Staff may need support—for example, training, mental health care, or encouragement—to remain motivated and engaged while working in such a context. Further, staff may feel particularly demoralized if, by chance, random assignment produces a long series of assignments to the control group. Some studies have implemented monitoring systems to notify the research team if an enrollment specialist has had a long (e.g., 3 or greater) series of control group assignments. The team then sends an encouraging note or checks in with the recruiter. 
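As one possible implementation of the monitoring check described in the last bullet, the sketch below is our own illustration; the threshold of three and the data format are assumptions, not study specifications. It flags any enrollment specialist whose most recent assignments form an unbroken run of control-group placements so the team can reach out.

```python
from collections import defaultdict

def flag_control_streaks(assignments, threshold=3):
    """Return {specialist_id: streak_length} for specialists whose most recent
    consecutive assignments were all to the control group, with the streak at
    least `threshold` long.

    `assignments` is a chronological list of (specialist_id, group) pairs,
    where group is "treatment" or "control".
    """
    history = defaultdict(list)
    for specialist, group in assignments:
        history[specialist].append(group)

    flagged = {}
    for specialist, groups in history.items():
        streak = 0
        for group in reversed(groups):  # count back from the most recent assignment
            if group != "control":
                break
            streak += 1
        if streak >= threshold:
            flagged[specialist] = streak
    return flagged

# Example: specialist "B" has three control assignments in a row and would be
# flagged so the team can check in and send an encouraging note.
log = [("A", "treatment"), ("B", "control"), ("B", "control"),
       ("A", "control"), ("B", "control")]
print(flag_control_streaks(log))  # {'B': 3}
```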

Case studies

Enrollment specialists separated from service providers. The Camden Coalition of Healthcare Providers and J-PAL-affiliated researchers collaborated on a randomized evaluation of a care management program serving individuals with complex medical and social needs. The study team utilized enrollment specialists who, while separate from other program staff, were sensitive to their concerns about research participation. Some specialists were hired specifically for enrollment; others were redeployed from other positions within the Coalition. While the Coalition hired specialists from the community as much as possible, to ensure staffing across the full study period, they were flexible in this criterion. Enrollment specialists approached potential participants to describe the program and seek consent prior to random assignment. Some patients were wary of participating in research in general, and both patients and recruitment specialists often felt discouraged when patients were assigned to the control group. By limiting enrollment to a small number of recruitment specialists, researchers and the Camden Coalition were able to provide specialized support and training to the specialists. For example, the Coalition places a high value on mental health, and provides funding to staff for therapy. The specialists also supported each other and developed best practices, including language to introduce the study without promising service, methods of preventing undue influence, and ways to support disappointed patients. To help maintain momentum and morale throughout the study period, the team celebrated enrollment milestones with a larger group of Coalition staff, which helped to illustrate that enrollment was a part of a broader organizational goal.

Service providers conducted enrollment with support from a recruitment support specialist. The Nurse-Family Partnership (NFP) program had a de-centralized method of identifying women to participate in their program prior to the incorporation of a randomized evaluation. Nurses who deliver the program are well-trained in working with low-income, first-time mothers from the local communities. NFP and local implementing partners believed their nurses were better equipped to work with the study population than external surveyors. They also believed that shifting the recruitment model to a centralized process with enrollment specialists was infeasible given the scale of the study and the program. Thus, the study informed consent process was incorporated into the pre-existing program recruitment and consent process, and the NFP nurses conduct the baseline survey, informed consent, and on-the-spot randomization. This approach also requires significant time and resources from the research team, which conducts quarterly in-person enrollment and fielding trainings for nurses and nurse supervisors. The team also hired a recruitment support specialist, who operates a phone line that nurses call for emotional and technical support, coordinates in-person and web-based trainings for new nurses, sends encouragement to the nurses, and monitors fidelity to the evaluation design in real time.

Intake process revised based on enrollment staff input. The Office of Supportive Housing in Santa Clara County, HomeFirst, and J-PAL-affiliated researchers are collaborating on a randomized evaluation of rapid rehousing for individuals experiencing homelessness. The study will enroll study subjects over a three-year period, with individuals randomized into treatment or control conditions at the point of intake. Initially, researchers implemented a process with independent random draws such that the number of individuals assigned to treatment and control would approximately balance out over a long period of time and many intake specialists. In practice, some intake specialists saw long streaks of assignments to the control (or treatment) group. Given the mental burden of communicating control group assignment to individuals, the study team revised the randomization process to ensure that study assignment balances out over smaller batches of individuals for each intake specialist, reducing the length of any control or treatment group streaks.
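One way to implement this kind of batched assignment is sketched below. It is a simplified illustration under our own assumptions (batch size, a 50/50 split, and in-memory state), not the study's actual protocol: each intake specialist draws assignments from a shuffled block containing equal numbers of treatment and control slots, so assignments balance within every batch and the length of any streak is capped.

```python
import random
from collections import defaultdict

class BatchedRandomizer:
    """Assign treatment/control in shuffled blocks per intake specialist, so
    that assignments balance exactly within every batch of `batch_size` intakes
    and no specialist can see a very long run of one study condition."""

    def __init__(self, batch_size=4, seed=None):
        if batch_size % 2 != 0:
            raise ValueError("batch_size must be even for a 50/50 split")
        self.batch_size = batch_size
        self.rng = random.Random(seed)
        self.queues = defaultdict(list)  # specialist_id -> remaining slots in batch

    def assign(self, specialist_id):
        queue = self.queues[specialist_id]
        if not queue:
            # Refill with equal numbers of treatment and control slots, shuffled.
            block = ["treatment", "control"] * (self.batch_size // 2)
            self.rng.shuffle(block)
            queue.extend(block)
        return queue.pop()

randomizer = BatchedRandomizer(batch_size=4, seed=42)
print([randomizer.assign("specialist_1") for _ in range(8)])
# Every consecutive batch of 4 assignments contains exactly 2 treatment and 2 control.
```

In practice, study teams usually pre-generate and document assignment lists and log every draw rather than randomizing in memory, but the balancing logic is the same.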

Describing risks and benefits

While the Common Rule has requirements for the topics covered in informed consent, and emphasizes that information must be presented in a way that facilitates understanding,11 minor variations in the language used to discuss these topics may affect subjects’ understanding, willingness to consent, and compliance. Ensuring understanding—and therefore ensuring truly informed consent—is much easier said than done and depends heavily on context. In a medical context, a review found that many patients did not understand key aspects of the informed consent process, including randomization, consent, and the procedures involved (Behrendt et al. 2011). A more detailed discussion of factors influencing understanding and consent can be found in “Informed Consent and the Capacity for Voluntarism” (Roberts 2002). Piloting the consent form and process, and seeking review from individuals familiar with the study population, can help researchers assess whether subjects will understand everything necessary to provide true consent, and can help assess take-up rates.

Further, enrollment staff may be tempted to under- or oversell the intervention to prospective subjects.

Enrollment staff may oversell the intervention (e.g., by overstating the benefits, understating the risks, or understating the time or effort required to participate) if they are under pressure to reach enrollment quotas. Overselling the program may cause disproportionate disappointment in the control group, and the mismatch between expectations and the true nature of the program may lead to higher rates of non-compliance or attrition in the treatment group.

Staff may be tempted to undersell the program (e.g., downplay the potential benefits or overstate potential risks) if they are concerned about disappointing potential control group members. However, underselling the program may lead to a low consent and take-up rate. 

Research teams may mitigate the challenges associated with describing the risks and benefits of a program and research study by training and supporting staff involved in recruitment throughout the research study, as described in the section "Who will conduct enrollment?" above.  

Compensation

Compensating research subjects to incentivize and offset the time and inconvenience of participation is a common practice and may be explicitly required by some IRBs. Compensation is distinct from reimbursement of direct expenses, and is not considered a potential benefit to research participation.

The Office of Human Research Protections states, “IRBs should be cautious that payments are not so high that they create an “undue influence” or offer undue inducement that could compromise a prospective subject’s examination and evaluation of the risks or affect the voluntariness of his or her choices” (Office of Human Research Protections n.d.). However, there is debate among ethicists, researchers, and IRBs about whether and when compensation moves from an acceptable inducement to “undue influence.” We recommend that researchers carefully consider their proposed compensation level, and present their own justification in the protocol submitted for review and approval by the relevant IRB. All information concerning compensation, including the amount and schedule of payment(s), should be explained in the informed consent document.

Amount of compensation

The amount of compensation offered has implications for the composition of the study sample, take-up rates, and ethics. Because each individual has a distinct opportunity cost of time and marginal utility of the compensation, any particular level of compensation leaves potential for distortions in the study sample. Determining an appropriate amount of compensation requires a careful balance between an amount that is so low as to be exploitative, and one that is so high as to be unduly influential. Further, researchers should consider how the compensation may affect the external validity of the study, and if the compensation could affect the study’s outcomes.

Researchers can attempt to benchmark compensation rates, assess potential distortions, and assess ethical implications, by:

  • Considering the prevailing wage rates of the relevant population. Some research involves distinct groups whose appropriate compensation rates may differ substantially. In these cases, it may be appropriate to consider offering differing amounts of compensation per participant group, with justification and approval from the IRB (University of Toronto 2011).
  • Considering the amount paid by other studies in the same setting. For example, there are many studies run on university campuses. A new study run on the same campus might benchmark their compensation against other similar studies. 
  • Discussing compensation with community experts, leaders, or representatives. This is particularly important if wage rates are not available or applicable to the population you are working with.

Levels of compensation that are too high may distort incentives to participate or to respond to survey questions. There is concern that high rates may compromise an individual’s assessment of the risks, particularly if the level of compensation would be transformative to their lifestyle. In addition, individuals may behave or answer questions differently in an attempt to feel they have “earned” the compensation, or may attempt to enter a study whose enrollment criteria they do not meet.

Levels of compensation that are too low may impair researchers’ ability to recruit or retain a sufficient number of participants. Further, compensation that is unfairly low may be exploitative, and may give the appearance of “outsourcing” research to lower income communities in order to lower the cost of research. This concern may arise in two scenarios: 1) compensation that is below the prevailing local wage rate, or 2) compensation that meets the local wage rate, but where the research is conducted in a lower-income country or community (Largent and Fernandez Lynch 2017; University of Toronto 2011). 

The IRB must weigh potential risks and benefits of research exclusive of compensation, and determine that for the average individual in the study population, participation would not be unreasonably risky (Emanuel 2005; Largent and Fernandez Lynch 2017). Based on this determination by the IRB, some bioethicists argue, “Given the rarity of possible undue influence in IRB-approved studies and the real risk of exploitation when payment is too low, we argue that the default rule for IRBs should be altered: rather than asking whether proposed offers of payment are too high, as IRBs are currently wont to do, they should instead start by asking whether payment is high enough to be fair” (Largent and Fernandez Lynch 2017). Relatedly, the University of Toronto issues this guidance with respect to compensation level and vulnerable groups: “Within the local context, researchers may conduct research with financially- and socially-vulnerable groups (e.g. homeless youth, intravenous drug users, gambling addicts). Determining appropriate compensation for these populations raises difficult issues, as [IRBs] try to balance ethical principles including respect for participants’ autonomy, prevention of undue influence and providing protection for vulnerable persons. It is essential that when determining acceptable amounts and kinds of compensation for vulnerable groups, researchers and the [IRB] not confuse protection with paternalism. Research participants who are competent to consent to research should be considered to be autonomous in how they utilize compensation. It is not within the purview of researchers or [IRBs] to set restrictions (directly or indirectly) on compensation beyond those that would normally be set for non-vulnerable populations” (University of Toronto 2011).

Compensation that is set appropriately may be able to “draw in a more diverse pool of research participants because a greater range of individuals is likely to find participation attractive as offers of payment increase. This could help ensure that the burdens of socially valuable research are spread more evenly over the population, rather than primarily among lower-income groups” (Emanuel 2005; Largent and Fernandez Lynch 2017). 

Finding this level of compensation is difficult, and an objectively “best” level of compensation may not exist.

Delivery of compensation

Compensation may take many forms, including cash, gifts, or lotteries.13 It may be paid at the time of study enrollment or pro-rated throughout the study period. The University of Toronto and the US Food and Drug Administration have thoughtful guidance on determining compensation methods that are appropriate given a study’s budget constraints, population, and ethical considerations.

This guidance includes:

  • For studies lasting more than a few days, compensation should not be contingent upon the subject completing the entire study. Rather, compensation should accrue as the study progresses or be pro-rated for individuals who withdraw prior to the completion of the study, though a small proportion of the compensation may be designated as a bonus for completing the study (a simple pro-rating sketch follows this list).
  • If budget constraints prevent the payment of compensation that would realistically and respectfully compensate the participant for their time, the University of Toronto recommends that, “tokens of appreciation, such as gift cards or gifts, may be more appropriate. They should be referred to as such as (sic) tokens (or honorariums) and not as compensation” (University of Toronto 2011). Some researchers also view gifts of equal value to cash compensation as having a lower likelihood of undue influence. 
  • Lotteries or draws are sometimes used rather than cash, particularly when the budget does not allow full compensation. These may be acceptable if the prize amount is not so large that it would create undue influence. Federal or local regulations may apply to lotteries.
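As a small illustration of the pro-rating guidance above (our own sketch; the per-visit amount, bonus, and visit counts are hypothetical, not recommendations), compensation can accrue per completed study visit, with a modest bonus reserved for completing the full study.

```python
def compensation_owed(visits_completed, total_visits,
                      per_visit_amount=20.0, completion_bonus=10.0):
    """Pro-rated compensation: pay for each completed study visit, plus a small
    bonus only if the participant completes the entire study."""
    if not 0 <= visits_completed <= total_visits:
        raise ValueError("visits_completed must be between 0 and total_visits")
    owed = visits_completed * per_visit_amount
    if visits_completed == total_visits:
        owed += completion_bonus
    return owed

print(compensation_owed(3, 5))  # withdrew after 3 of 5 visits: 60.0
print(compensation_owed(5, 5))  # completed the full study: 110.0
```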

Appendix: Example language for a waiver of informed consent

An example of the language submitted to an IRB to request a waiver of informed consent can be found below. In this study, researchers measured the impact of sending informative letters and postcards on take-up of a program.

Disclaimer: this is provided for information only. This is not intended as a template, nor can J-PAL guarantee that an IRB would approve similar language.

“It would not be practicable to obtain consent from all individuals for this study. Therefore, we are requesting a waiver of informed consent. Part of this study will analyze which individuals do or do not respond to our outreach. By limiting the study to individuals who actively consent, we would bias the results of the study, and it would no longer provide useful results on the effectiveness of our outreach efforts. 

Not obtaining consent poses a very minimal risk to these individuals. The information study participants will receive will contain accurate and potentially useful information on ____. As a result, the potential harms from the intervention are limited. This study poses only a very minimal risk of a breach of confidentiality to participants. Researchers will receive only a limited dataset, minimizing even this risk, and researchers will have no direct contact with individuals. The implementing partner will only receive the information necessary to perform their standard procedures, consistent with their current agreement with ____.” 

Last updated March 2021.

These resources are a collaborative effort. If you notice a bug or have a suggestion for additional content, please fill out this form

Acknowledgments

We are grateful to Mary-Alice Doyle, Amy Finkelstein, Louise Geraghty, Mary Pelak, Anna Spier, Aaron Truchil, and Annetta Zhou for their insight and advice. We thank Aurora Health Care, Benefits Data Trust, the Camden Coalition of Healthcare Providers, HomeFirst, the National Service Office of the Nurse-Family Partnership, and the Office of Supportive Housing in Santa Clara County, as well as the associated research teams, for allowing us to use case studies from their programs. Chloe Lesieur copyedited this document. This work was made possible by support from the Alfred P. Sloan Foundation and Arnold Ventures. Any errors are our own.

1.
In the United States, the Federal Policy for the Protection of Human Subjects (also known as the “Common Rule” or 45 CFR 46) regulates what information researchers must provide in informed consent, how they must document consent, and when an Institutional Review Board (IRB) may grant waivers to the requirements. This resource assumes a working knowledge of these basic legal and ethical requirements. For details on the requirements of the Common Rule, see the original text, your IRB’s website, and the resources listed at the end of this document. Requirements for consent may vary by country.
2.
A more detailed description of when ethical review is required, and when informed consent is required, begins on page 202 of “The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency” (Glennerster 2016).
3.
The rationale for this is similar to that of preventing the threat of social desirability bias by not using program staff to conduct surveys. See, for example, the discussion on page 196 of “The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency” (Glennerster 2016). In this case, we consider that it may be “socially desirable” to provide any data or consent if the individual perceives this as being related to the intervention they might receive.
4.
Minimal risk is defined by 45 CFR 46.102(j) as “the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests” (Protection of Human Subjects 2018).
5.
In addition to compliance requirements involving IRB oversight, use of data may be subject to locality-based laws or statutes, or to federal regulations such as HIPAA and FERPA. See the resource “Using administrative data for randomized evaluations” for more information.
6.
These criteria are derived from 45 CFR 46.116 – General requirements for informed consent, as well as 45 CFR 46.104 – Exempt research.
7.
Example requirements can be found in J-PAL North America’s Catalog of Administrative Data Sets, which documents access procedures for a variety of non-public data sets.
8.
“How to Make Field Experiments More Ethical” is a relatively short, accessible discussion of these topics (n.d.). McRae et al. provide an in-depth discussion of consent requirements for cluster-level interventions (2011). Additional relevant analysis and discussion of consent and other ethical implications of cluster randomized trials can be found in the literature, particularly in bioethics, biomedicine, and public health. See, for example, articles by Hutton (2001) or Sim and Dawson (2012).
9.
For instance, to be powered to detect the same effect size with 25% take-up, we would need to offer treatment to 16 times more people and provide treatment to 8 times more people (assuming equal numbers of treatment and control) than if we had 100% take-up. See Power Calculations 101: Dealing with Incomplete Take-up (McKenzie 2011) for a more complete illustration of the effect of the first stage on power.
10.
For instance, to be powered to detect the same effect size with 25% take-up, we would need to offer treatment to 16 times more people and provide treatment to 8 times more people (assuming equal numbers of treatment and control) than if we had 100% take-up. See Power Calculations 101: Dealing with Incomplete Take-up (McKenzie 2011) for a more complete illustration of the effect of the first stage on power. 
11.
Requirements are detailed in section 46.116 of the Common Rule, and many IRBs have template documents on their websites. For example, the MIT IRB provides templates and guidance on their website.
12.
Ensuring true understanding and truly informed consent is much easier said than done and depends heavily on context. A more detailed discussion can be found in “Informed Consent and the Capacity for Voluntarism” (Roberts 2002) and “The Ethics of Public Health Nudges” (Soled 2018).
13.
Federal or local regulations of lotteries or gambling may apply; researchers should consult with a local IRB.
    Additional Resources
    Resources for the Common Rule, IRB, and other regulatory requirements
    1. General requirements for informed consent. 2021. Code of Federal Regulations. Vol. 45 CFR 46.116.

      “General requirements for informed consent” (45 CFR § 46.116) falls under the “Basic HHS Policy for Protection of Human Research Subjects” within the Department of Health and Human Services’ Code of Federal Regulations. This is the official source on the manner in which a research subject or legally authorized representative must be “informed” and what information must be conveyed. It also describes when and how broad consent might be accepted in place of informed consent, and instances in which an IRB might allow informed consent to be altered or waived.

    2. Committee on the Use of Humans as Experimental Subjects. “Forms & Templates.” Massachusetts Institute of Technology. Accessed May 16, 2019. https://couhes.mit.edu/forms-templates

      Massachusetts Institute of Technology’s Committee on the Use of Humans as Experimental Subjects (COUHES) provides templates and forms for consent, assent, and requests for waivers or alterations of consent. COUHES is MIT’s Institutional Review Board, and these documents therefore exemplify the paperwork needed to obtain appropriate permissions for university-based research involving human subjects.

    3. Office of Human Research Protections, Department of Health and Human Services. “Informed Consent FAQs.” Text. HHS.gov. Accessed March 6, 2019. https://www.hhs.gov/ohrp/regulations-and-policy/guidance/faq/informed-consent/index.html.

      The Health and Human Services’ website provides detailed responses to frequently asked questions about the Common Rule and informed consent. These include clarifications to official language (e.g., “What is the meaning of ‘legally effective informed consent’” and “What does it mean to minimize the possibility of coercion or undue influence”). They also address how to uphold Common Rule requirements under various circumstances (e.g., “What happens if a child reaches the legal age of consent while enrolled in a study,” “What constitutes coercion or undue influence when students are involved in research in a college or university setting”), and specify logistical parameters (e.g., “How far in advance of research participation can consent be obtained,” and “How should child assent be documented”).

    4. Ozler, Berk. (2019). Research with adolescents: Issues surrounding consent. Retrieved July 2, 2019, from https://blogs.worldbank.org/impactevaluations/research-adolescents-issues-surrounding-consent.

      This blog post discusses the requirements of the Common Rule, variations in implementation by IRB, and defines circumstances where researchers and review boards might consider allowing the minor to provide informed consent herself.

    5. Office for Human Research Protections, Department of Health and Human Services. "Research with Children FAQs". Text. HHS.Gov. https://www.hhs.gov/ohrp/regulations-and-policy/guidance/faq/children-research/index.html.

      This list of FAQs provides information on consent and assent practices when the research subjects are children.

    6. Office for Human Research Protections, Department of Health and Human Services. "Special Protections for Children as Research Subjects." Text. HHS.Gov. https://www.hhs.gov/ohrp/regulations-and-policy/guidance/special-protections-for-children/index.html.

      This guide presents four categories of research involving children, and the determinations an IRB must make to approve research under each category.

    7. Office of Human Research Protections, Department of Health and Human Services. 2016. “Human Subject Regulations Decision Charts.” Text. HHS.Gov. https://www.hhs.gov/ohrp/regulations-and-policy/decision-charts/index.html.

      This guide provides a series of decision trees intended to help determine whether or not (1) an activity is considered research on human subjects and therefore must be reviewed by an IRB, (2) a necessary review qualifies for expedited procedures, and (3) informed consent or the documentation of informed consent can be waived.

    8. Social & Behavioral Sciences Institutional Review Board. “Research in Schools (FERPA, PPRA).” University of Chicago. Accessed May 16, 2019. https://sbsirb.uchicago.edu/investigatorguidance/.  

      The University of Chicago’s IRB provides guidance on the implications of the Family Educational Rights and Privacy Act (FERPA) for informed consent, which applies to research collecting student data from schools that receive federal funding. The guidance outlines when consent must be obtained for the release of personally identifiable student records, as well as exceptions in which consent is not required.

    9. “Using Administrative Data for Randomized Evaluations.” J-PAL North America Evaluation Toolkit.

      This guide focuses on the ethical and legal framework surrounding the use of administrative data in randomized evaluations. It discusses the implications for informed consent when administrative data are subject to the Health Insurance Portability and Accountability Act (HIPAA).

    10. Glennerster, R. “Chapter 5 - The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency.” In Handbook of Economic Field Experiments, edited by Abhijit Vinayak Banerjee and Esther Duflo, 1:175–243. Handbook of Field Experiments. North-Holland, 2017. https://doi.org/10.1016/bs.hefe.2016.10.002.

      Section 3.3, titled “Practical issues in complying with respect-for-human-subjects requirements,” discusses considerations for informed consent in field experiments. It addresses the documentation of consent among illiterate participants, identifies who must consent in evaluations randomized at the group level, and describes situations in which the burden of consent is large enough for the requirement to be waived. This section also provides guidance on balancing the concerns of partners and IRBs (e.g., if an IRB requires additional nuance in a consent form, the added complexity may make it harder for less educated participants to fully understand the form).

    11. Inter-university Consortium for Political and Social Research (ICPSR). “Recommended Informed Consent Language for Data Sharing.” University of Michigan. Accessed June 7, 2021. https://www.icpsr.umich.edu/web/pages/datamanagement/confidentiality/conf-language.html

      ICPSR provides guidance on the appropriate language for obtaining participants’ informed consent to share their data with researchers. It gives examples of language to avoid, language to include, and potential concerns along with possible solutions.

    12. National Bureau of Economic Research (NBER). “Moving to Opportunity Final Evaluation Consent Forms.” Accessed June 7, 2021.

      The Moving to Opportunity (MTO) experiment’s final evaluation consent forms are available online and offer examples of consent form language that researchers can reference.

    Resources for study design:
    1. Zelen, M. “A New Design for Randomized Clinical Trials.” The New England Journal of Medicine 300, no. 22 (May 31, 1979): 1242–45. https://doi.org/10.1056/NEJM197905313002203.

      Written in 1979, as modern US federal human subjects protections were being established, this paper assesses the ethical and statistical implications of various experimental designs for clinical trials and considers the tradeoffs of obtaining informed consent before versus after randomization.

    2. McRae, Andrew D, Charles Weijer, Ariella Binik, Jeremy M Grimshaw, Robert Boruch, Jamie C Brehaut, Allan Donner, et al. “When Is Informed Consent Required in Cluster Randomized Trials in Health Research?” Trials 12 (September 9, 2011): 202. https://doi.org/10.1186/1745-6215-12-202.

      This paper discusses consent requirements for cluster-level interventions and identifies who the “human subject” is in these cases (e.g., whether the doctor or patient is the subject in a research study that provides training to doctors). The rationale for which individuals are “subjects” of research and therefore must consent may be obvious to those familiar with randomized evaluations in the social sciences, and particularly in development economics. However, revisiting and clearly defining the rationale may be useful for researchers working with partners or IRBs from different fields and different perspectives.

    Primers on randomized evaluation:
    1. Abdul Latif Jameel Poverty Action Lab. “Why Randomize?” 2016.

      Abdul Latif Jameel Poverty Action Lab. Why Randomize? Video, 2017. https://www.youtube.com/watch?v=Uxqw2Pgm7s8&feature=youtu.be.

      J-PAL’s “Why Randomize?” resources, which include a one-page document and a video, summarize the rationale for using randomized evaluations to measure the impact of programs and policies. These resources could be useful to distribute among audiences that have not had much exposure to evaluation methods. They explain the need to construct a counterfactual and why randomly selecting study participants to form that counterfactual enables a high degree of confidence in study results.

    2. Abdul Latif Jameel Poverty Action Lab. “Common Questions and Concerns about Randomized Evaluations.” 2016. 

      J-PAL created this resource to help researchers anticipate and address partners’ concerns about randomized evaluations. It explores possible questions about the ethics and feasibility of conducting randomized evaluations, as well as the scope and generalizability of their results.

    3. Heard, Kenya, Elisabeth O’Toole, Rohit Naimpally, and Lindsey Bressler. 2017. “Real World Challenges to Randomization and Their Solutions.” Cambridge, MA: Abdul Latif Jameel Poverty Action Lab. https://www.povertyactionlab.org/sites/default/files/research-resources/2017.04.14-Real-World-Challenges-to-Randomization-and-Their-Solutions.pdf.

      This resource describes study intake and randomization procedures that account for ethical and practical challenges raised by implementing partners. It presents variations on the basic evaluation design that can be used to assess programs that do not lend themselves to a simple lottery (e.g., programs with enough resources to serve everyone who is eligible, or entitlement programs that cannot withhold services). It also suggests ways to avoid common implementation challenges (e.g., spillover, crossover, and attrition).

    References

    Abdul Latif Jameel Poverty Action Lab (J-PAL). "Health Care Hotspotting in the United States." J-PAL Evaluation Summary. https://www.povertyactionlab.org/evaluation/health-care-hotspotting-united-states.

    Abdul Latif Jameel Poverty Action Lab (J-PAL). "The Impact of a Nurse Home Visiting Program on Maternal and Child Health Outcomes in the United States." J-PAL Evaluation Summary. https://www.povertyactionlab.org/evaluation/impact-nurse-home-visiting-program-maternal-and-child-health-outcomes-united-states.

    Alderman, Harold, Jishnu Das, and Vijayendra Rao. 2016. “Conducting Ethical Economic Research.” In The Oxford Handbook of Professional Economic Ethics. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199766635.013.018.

    Behrendt, C., T. Golz, C. Roesler, H. Bertz, and A. Wunsch. 2011. “What Do Our Patients Understand about Their Trial Participation? Assessing Patients’ Understanding of Their Informed Consent Consultation about Randomised Clinical Trials.” Journal of Medical Ethics 37 (2): 74–80. https://doi.org/10.1136/jme.2010.035485.

    Doyle, J., S. Abraham, L. Feeney, S. Reimer, and A. Finkelstein. 2019. “Clinical Decision Support for High-Cost Imaging: A Randomized Clinical Trial.” PLoS ONE 14 (3): e0213373. https://doi.org/10.1371/journal.pone.0213373.

    Emanuel, Ezekiel J. 2005. “Undue Inducement: Nonsense on Stilts?” The American Journal of Bioethics 5 (5): 9–13. https://doi.org/10.1080/15265160500244959.

    Emanuel, Ezekiel J., David Wendler, and Christine Grady. 2000. “What Makes Clinical Research Ethical?” JAMA 283 (20): 2701–11.

    Feeney, Laura, Jason Bauman, Julia Chabrier, Geeti Mehra, and Michelle Woodford. 2017. “Administrative Data for Randomized Evaluations.” J-PAL North America.

    Finkelstein, Amy, and Matthew J. Notowidigdo. 2019. “Take-Up and Targeting: Experimental Evidence from SNAP.” The Quarterly Journal of Economics. https://doi.org/10.1093/qje/qjz013.

    Glennerster, Rachel. 2016. “The Practicalities of Running Randomized Evaluations: Partnerships, Measurement, Ethics, and Transparency.” In Handbook of Economic Field Experiments, edited by Abhijit Vinayak Banerjee and Esther Duflo, 1:175–243. North-Holland. https://doi.org/10.1016/bs.hefe.2016.10.002.

    “How to Make Field Experiments More Ethical.” 2014. Washington Post, Monkey Cage (blog), November 2, 2014. Accessed February 28, 2019. https://www.washingtonpost.com/news/monkey-cage/wp/2014/11/02/how-to-make-field-experiments-more-ethical/.

    Hutton, J. L. 2001. “Are Distinctive Ethical Principles Required for Cluster Randomized Controlled Trials?” Statistics in Medicine 20 (3): 473–88. https://doi.org/10.1002/1097-0258(20010215)20:3<473::AID-SIM805>3.0.CO;2-D

    Largent, Emily, and Holly Fernandez Lynch. 2017. “Paying Research Participants: The Outsized Influence of ‘Undue Influence.’” IRB: Ethics & Human Research 39 (4). The Hastings Center. https://www.thehastingscenter.org/irb_article/paying-research-participants-outsized-influence-undue-influence/.

    McKenzie, David. 2011. “Power Calculations 101: Dealing with Incomplete Take-Up.” Development Impact (blog). May 23, 2011. http://blogs.worldbank.org/impactevaluations/power-calculations-101-dealing-with-incomplete-take-up.

    McRae, Andrew D, Charles Weijer, Ariella Binik, Jeremy M Grimshaw, Robert Boruch, Jamie C Brehaut, Allan Donner, et al. 2011. “When Is Informed Consent Required in Cluster Randomized Trials in Health Research?” Trials 12 (September): 202. https://doi.org/10.1186/1745-6215-12-202.

    Office of Good Clinical Practice. 2018. “Payment and Reimbursement to Research Subjects - Information Sheet.” U.S. Food and Drug Administration. https://www.fda.gov/RegulatoryInformation/Guidances/ucm126429.htm.

    Office for Human Research Protections, Department of Health and Human Services. n.d. “Informed Consent FAQs.” HHS.gov. Accessed March 6, 2019. https://www.hhs.gov/ohrp/regulations-and-policy/guidance/faq/informed-consent/index.html.

    Protection of Human Subjects. 2018. Code of Federal Regulations. Vol. 45 CFR 46. https://www.ecfr.gov/cgi-bin/retrieveECFR?gp=&SID=83cd09e1c0f5c6937cd9d7513160fc3f&pitd=20180719&n=pt45.1.46&r=PART&ty=HTML.

    Roberts, Laura Weiss. 2002. “Informed Consent and the Capacity for Voluntarism.” American Journal of Psychiatry 159 (5): 705–12. https://www.ncbi.nlm.nih.gov/pubmed/11986120.

    Sim, Julius, and Angus Dawson. 2012. “Informed Consent and Cluster-Randomized Trials.” American Journal of Public Health 102 (3): 480–85. https://doi.org/10.2105/AJPH.2011.300389.

    University of Toronto, Research Ethics Policy and Advisory Committee. 2011. “Compensation and Reimbursement of Research Participants.” http://www.research.utoronto.ca/policies-and-procedures/compensation-and-reimbursement-of-research-participants/.

    Zelen, Marvin. 1979. “A New Design for Randomized Clinical Trials.” New England Journal of Medicine 300 (22): 1242–45. https://doi.org/10.1056/NEJM197905313002203.
