
Increasing response rates of mail surveys and mailings

Summary

Drawing on evidence and examples from literature on mail experiments and mail surveys, this resource suggests strategies for increasing responses to mail surveys and mailings targeted at a fixed pool of respondents in a randomized evaluation. 

We do not address tradeoffs between mailings versus other survey modalities, but some strategies presented in this resource may also apply to modalities such as text message or web-based surveys. Many of the principles for contacting and encouraging respondents to interact with the materials are also applicable to other mail-based interactions, such as outreach, information interventions, or nudge interventions. 

Introduction

A researcher may determine that relying on mailing to deliver study materials or to conduct surveys is the most appropriate strategy for the context and budget of their evaluation. Relative to in-person, phone, or other survey modes facilitated by an enumerator, mailings may be a relatively inexpensive and logistically simple way to reach a large number of individuals. Mailings may be contextually appropriate: for example, governmental or nongovernmental organizations may already communicate with individuals by mail. In many such cases, mailing is a first point of contact, and possibly the only means of communication with study participants.1 This resource assumes that researchers plan to use mail surveys in their study but does not exclude the use of other survey methods as well (as discussed later in this resource).

For any survey modality, low rates of survey response can threaten the statistical power of an evaluation. Additionally, non-response may threaten the interpretation of the analysis if those who are harder to reach have systematically different characteristics from those who respond. It is important to note, however, that low response rates do not necessarily equate to non-response bias. A recent paper (Dutz et al. 2021) demonstrates that participation rates alone can be a poor indicator of non-response bias and that non-response bias may even increase with participation rates if the remaining holdouts are increasingly different from survey respondents. For a randomized evaluation, differential non-response driven by treatment assignment may introduce bias into the analysis and impact evaluation results.
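To make the link between response rates and statistical power concrete, the minimal sketch below (purely illustrative numbers, assuming a two-arm, individually randomized design with equal variances in which only respondents contribute outcome data) shows how a lower response rate inflates the minimum detectable effect.

```python
# Illustrative sketch only: how the survey response rate shrinks the effective
# sample and inflates the minimum detectable effect (MDE) for a two-arm design.
from scipy.stats import norm

def mde(n_mailed, response_rate, sd=1.0, alpha=0.05, power=0.80, treat_share=0.5):
    """MDE in outcome units, counting only completed surveys toward the sample."""
    n_complete = n_mailed * response_rate
    n_treat = n_complete * treat_share
    n_control = n_complete * (1 - treat_share)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * sd * (1 / n_treat + 1 / n_control) ** 0.5

for rate in (0.05, 0.20, 0.45):  # plausible range of mail survey response rates
    print(f"response rate {rate:.0%}: MDE = {mde(10_000, rate):.3f} standard deviations")
```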

While mailings have been a staple method of many surveys and social science research studies in the United States, survey response rates by mail have been declining over the years, falling by roughly 35 percentage points since the 1970s from a baseline average of 77 percent (Stedman et al. 2019). This resource therefore discusses possible strategies for increasing responses to physical mailings that are sent as part of the outreach in a study.2 The appropriateness, effectiveness, and cost of each strategy may vary greatly based on the context of the evaluation (e.g., the specific population of interest, the research question, the sample size or response rate needed for adequate statistical power, and outside options for data collection). We therefore do not present specific recommendations and cannot provide guidance on cost effectiveness. 

Reasons for low response to mailings and surveys

Response rates for mail surveys used in randomized evaluations vary widely, and it is difficult to estimate “typical” response rates. In recent years, researchers have investigated various techniques and incentives for maximizing response rates to “mail-to-online panel” surveys (Yan et al. 2018). Individuals in these studies received survey invitations by mail and were directed to a website to complete the survey; response rates ranged from two to eleven percent. Although these are not mail surveys in the strict sense, the first point of contact was a mailing, so their response rates offer a useful comparison.

While researchers cannot resolve all the reasons for non-response (e.g., poor health or absence), there are several issues they can address. These include non-contacts (when mailed surveys do not reach the intended respondents), refusals (when mailed surveys reach the potential respondent but do not receive a response), break-offs (when mailed surveys receive partial responses due to a refusal partway through), and language barriers. 

Reasons for non-response may be impossible to determine, except in some cases of mail returned to sender for an incorrect address or if surveys are returned only partially completed.3 Given this uncertainty, researchers should attempt to address the challenges of non-response at the outset. They can do this by:

  1. Increasing efforts to get into contact with respondents;
  2. Reducing the survey burden and making it more convenient for respondents to respond; and 
  3. Encouraging cooperation from respondents and gaining compliance. 

Some suggestions for increasing mail survey response rates may be more important than others depending on what the survey is used for in the evaluation (e.g., measuring intervention outcomes or delivering nudges as part of the survey). For example, if an intervention is embedded within a survey, researchers should be careful to decouple the effect of the intervention itself from the joint effect of survey components that encourage response. Researchers may also consider adjusting for differential response rates between the treatment and control groups (a simple check for differential response is sketched below). 
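As one way to check for differential non-response between arms, the sketch below runs a two-sample test of response proportions on hypothetical counts; this is just one of several reasonable diagnostics, and the counts and the statsmodels dependency are assumptions made for illustration.

```python
# Illustrative check for differential non-response by treatment status.
from statsmodels.stats.proportion import proportions_ztest

responded = [420, 365]    # completed surveys in treatment and control (hypothetical)
mailed = [1000, 1000]     # surveys mailed to each arm (hypothetical)

stat, pval = proportions_ztest(count=responded, nobs=mailed)
print(f"treatment response rate: {responded[0] / mailed[0]:.1%}")
print(f"control response rate:   {responded[1] / mailed[1]:.1%}")
print(f"z = {stat:.2f}, p = {pval:.3f}")  # a small p-value flags differential response
```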

Getting into contact

  • Understand the target population. Delineate the demographic, geographic, and temporal characteristics (Lavrakas 2008) of the target population and decide what type of units will be included (e.g., individual or household level). This will assist with decisions about the timing of the survey, survey implementation mode, and survey instrument. For example, if the survey is aimed at working parents of school-age children, it may be best to avoid mailing at times when they are least likely to receive the mailings, such as school holidays. Furthermore, consider that response rates may correlate with demographic characteristics; for example, older populations may be less likely to move and/or may be more likely to respond to surveys or mailings. Personal interest or experience in the subject of the study is also likely to affect response rates; for example, individuals sampled from a list of registered voters may be more likely to respond to a survey related to political attitudes than individuals from a general population list, and individuals sampled from those who had visited a particular geographic area may be more likely to respond to a survey related to that area than individuals from the general population (Neher and Neher 2018).

  • Consider sampling only among respondents who have indicated willingness to respond. Researchers could sample from a group of respondents who had previously consented to participate, or they could sample and randomize among people who responded to an initial outreach. However, researchers might worry about the representativeness of the sample if the study is aimed at a more niche, harder-to-reach population.

Case study: In one study (Liebman and Luttmer 2012), researchers surveyed respondents from an internet panel about their understanding of Social Security benefits and the links they perceived between the benefits and labor supply. The panel was pre-recruited by a third-party organization, and the researchers were able to select a sample that was representative of the target population by filtering based on basic demographic characteristics of interest. Since members of the panel had already elected to participate in future surveys and polls, sampling from this pool was likely to give a high response rate and have lower fielding costs. A potential concern when using a pre-recruited panel is that professional respondents (i.e., experienced and frequent survey-takers) might produce biased responses due to the experience of having seen and answered many surveys before.

  • Get updated contact information. In addition to increasing contact rates, reducing undeliverable mail and re-mailings will also reduce associated material, labor, and mailing costs. Options for updating contact information include:

    • The National Change of Address (NCOA) database provides the most current addresses for all individuals and households who have registered address changes with the USPS.
    • Credit reporting agencies, such as Experian,4 maintain large repositories of credit profiles and property records that include records of telephone numbers and addresses of credit-active consumers.
    • Skip tracing agencies and commercial tools often used in debt recovery, e.g., LexisNexis Accurint and TransUnion TLO, can help locate hard-to-find individuals.5
  • Increase contact points. Giving respondents more opportunities to complete the survey increases the chance that they respond. A meta-analysis (P. Edwards et al. 2002; P. J. Edwards et al. 2009) of postal questionnaire response rates found multiple contact points to be effective, though there is currently no quantitative evidence on the size of the marginal impact of each additional contact point. The timing between contact points also matters: a gap of roughly two weeks can serve as a reminder while still allowing time for responses to earlier mailings to arrive.
    • Send pre-notification and advance letters. When participants have been pre-notified about a mail study, they seem more likely to reply (Rao et al. 2010). An advance letter can include information about the topic, purpose, and organization of the upcoming survey. The letter should be designed using a similar aesthetic style as the questionnaire that is to follow and should observe guidelines for ensuring legibility (see the point on aesthetics and legibility of mailing materials in the section on “encouraging cooperation and gaining compliance”) (Holbrook, Krosnick, and Pfent 2007). If researchers have other contact information for the respondents, such as phone numbers or email addresses, they can also send pre-notification texts or emails to alert respondents about the questionnaire packet they are about to receive. The U.S. Centers for Disease Control and Prevention (CDC) recommends that the pre-notification reach participants one week prior to the mailing of the questionnaire (CDC 2018a).
    • Follow up and send reminders, considering multiple modes of contact. Researchers might start with low-effort contact methods such as blast emails or texts, followed by more labor-intensive follow-up methods such as phone calls for respondents who are harder to reach. The number of attempts and contacts will likely depend on the budget and logistical capacity of the implementing organization. In the Oregon Health Insurance Experiment (OHIE; Finkelstein et al. 2010), the researchers had four points of contact with respondents for their initial survey, including an initial screener postcard, two survey mailings, and a telephone follow-up. Following this protocol, the researchers achieved an effective response rate of 45 percent. Postcard reminders can be a low-cost yet effective follow-up nudge as they are relatively cheap and do not need to be opened (Levere and Wittenburg 2019). However, since the contents of a postcard are exposed, researchers need to be cautious about including private or sensitive information. 
    • Send replacement questionnaires. Mailing replacement questionnaires to non-respondents has been shown to boost responses (P. Edwards et al. 2002; P. J. Edwards et al. 2009). However, this may not be the most cost-effective method, since questionnaires cost more than pre-notifications and reminders to print and mail. The CDC (2018a) suggests sending a reminder postcard one week after the questionnaire and a replacement questionnaire two weeks after that.
  • Maintain contact. If the survey has multiple waves of follow-up, researchers will need to recontact participants for each survey wave. Maintaining contact with survey respondents between survey rounds can increase their likelihood of responding to future surveys. Examples of maintaining contact include sending thank you cards to respondents or sending letters to answer respondents’ concerns. Researchers should also reflect on whether and how this additional contact might have spillover effects on the study’s treatment effect and potentially contribute to differential attrition.
  • Obtain alternate contact information. Researchers can keep track of respondents by obtaining the contact information of the respondents’ family and friends, e.g., names, addresses, and phone numbers. This tactic can be particularly helpful for populations that move or change phone numbers frequently. Social media details can also be especially helpful as they often remain the same even if people move.
  • Conduct real-time monitoring and obtain feedback. When possible, record the reason for non-response by tracking mailings until they reach the respondents. It is possible that potential respondents are not receiving the mailings in the first place: respondents could have moved or missed the mail when it was delivered. If working with an external organization to send mailings, researchers can also include one piece of mail addressed to the research team in each batch to confirm that the mailings were actually sent out and on the designated dates.
    • Use express delivery instead of standard mail delivery. Sending questionnaires by special delivery (P. J. Edwards et al. 2009), including recorded, registered, and certified delivery, rather than standard delivery, provides assurance that the mailing has actually reached the desired recipient, especially since these services require a signature upon delivery. Moreover, people appear more likely to open mailings from private shipping services such as FedEx or UPS, plausibly due to the higher perceived importance of the mail (Cantor and Cunningham 2002).
    • Track returned mailings. Be sure to provide a return address on the envelope so that undeliverable mailings are returned to the sender. If a significant number of pieces are returned, the researcher can decide how to address the issue based on the reasons for return. Researchers could send a placebo mailing to households before starting the experiment to gauge the number of mailings that are undeliverable and remove those addresses from the sample. Researchers may also request a new address from the postal service, if one is on file, and update the research team’s database for future mailings.

Increasing convenience and reducing survey burden

There are inherent burdens of participating in a survey that contribute to survey non-response, such as the time spent responding and the effort to complete and return the survey. As much as possible, researchers should reduce these barriers and inconveniences.

  • Reduce the length and complexity of the questionnaire; ensure clarity of questions. Bradburn (1978) suggests that some factors of respondent burden include survey length, required respondent effort, and respondent stress. Some guidance around designing simple, easy-to-understand surveys can be found in J-PAL’s resource on survey design.6 Online task crowdsourcing platforms such as Amazon Mechanical Turk allow researchers to test features of survey design including comprehensibility and length with similar populations.
  • Facilitate questionnaire return. Researchers should provide a return envelope that is prepaid and pre-addressed to make it as convenient as possible for respondents to return the questionnaire.
  • Provide questionnaires and materials in relevant languages. If a large percentage of the target population speaks a different language, providing translations or dual-language questionnaires may be a cost-effective option to overcome language barriers.

Encouraging cooperation and gaining compliance

Potential study participants might have unobservable attitudes and characteristics that make them unwilling to respond. Accordingly, researchers should make surveys and mailings as enticing and salient as possible to encourage people to respond. The literature on mail survey design and administration suggests that the appeal of survey design elements varies for different segments of the population. In this section, we outline common themes for gaining compliance that will engage most participants.

  • Establish a trustworthy message.  Include a cover letter plus a teaser with information about the identity of the researchers and organization, the purpose of the survey and materials, and the benefits of the survey. Use of recognizable graphics, such as logos of organizations or universities associated with the study and trusted by the target population, can help add credibility. Demonstrations of support from public figures or opinion leaders can lend further legitimacy, and an explicit assurance of confidentiality can help build trust.
  • Provide financial incentives. Incentives show researchers’ appreciation and respect for the participants’ time and effort. Providing incentives also conveys trust to the participants if they are provided prior to completion of the survey.
    • Weigh trade-offs of different incentive options. There are a few categories of incentives: monetary, in-kind, and lottery incentives. If researchers are considering using lottery-style incentives, they should confer with their IRB and check the local laws and regulations of where the survey is conducted, since lotteries may be illegal in certain jurisdictions.7 Studies looking into the effectiveness of incentives have found that monetary incentives (P. J. Edwards et al. 2009) seem to more than double response rates compared to non-monetary incentives, and fixed amounts (P. Edwards et al. 2005) seem to generate greater response rates than the promise of a lottery. Among monetary incentives, response rates to cash are greater than to checks or online codes (P. Edwards et al. 2005). Monetary incentives are easy to implement with mail surveys and are fairer to individuals who participate than lotteries. That said, having respondents redeem their incentives (e.g., through online gift cards or coupons) can be a way to track mail open rates and minimize costs associated with unopened incentives or non-responders. Ultimately, piloting different incentive options with the target population will help researchers understand which options work best for the target response rate.
    • Consider the timing of incentive delivery. Upfront, unconditional incentives are most effective for response rates (P. J. Edwards et al. 2009). When questionnaires are paired with financial incentives, the CDC suggests sending the incentive along with the questionnaire for easy implementation (CDC 2018b).
    • Decide on the incentive amount. Some participants would respond regardless of incentive amount, so researchers will want to provide an amount that encourages the marginal respondent to reply.8

Case study: In the OHIE (Finkelstein et al. 2010), the initial mailings included a $5 cash incentive plus the chance to enter a lottery to receive an additional $200. In the second round of mailings, the researchers included a $10 cash incentive. The survey was five pages long.

  • Personalize mailings. The CDC recommends some ways of personalization such as handwriting addresses on mailing and return envelopes and signing cover letters individually (CDC 2018a). However, some of these options may not be feasible for large-scale mailings. The cost-effectiveness of personalized mailings to improve response rates is debatable (Gendall 2005).
  • Take into account the aesthetics and legibility of mailing materials. To improve the aesthetics, consider printing the materials on glossy paper and folding them in a way that clearly differentiates them from advertisements. Create multiple versions of the material and conduct focus groups to test which design generates the highest response rates and most accurate responses for the target demographic. To improve legibility and readability, employ four basic design principles (Williams 2014):
    • Contrast (color or weight) can drive a reader’s attention to specific elements on a page, e.g., the CDC (2018a) recommends using lightly shaded background colors with white response boxes.
    • Repetition of elements (such as graphics or logos) helps to maintain consistency.
    • Alignment creates order on the page and helps the reader navigate information.
    • Proximity helps clarify which elements are related: elements that are associated with each other should be placed closely together. Avoid large blocks of text that can be difficult to read and process. Instead, use paragraph spaces to create white space between key points. 
  • Employ behavioral insights. Some applications of behavioral insights for survey design include the use of priming (including subtle messaging such as quotes), reframing the message in the loss domain (such as emphasizing lost opportunities or highlighting the cost of inaction), personalizing the information, and mentioning social norms and pressure (such as the number of people who have completed the survey) in the design of the questionnaire form (Behavioural Insights and Public Policy 2017). One particularly effective suggestion is to include a clear, simple call to action. Condense multiple steps of a call to action (e.g., call to participate, email with questions, visit a website for more information) into a single step (e.g., call this number to participate), and include a stated time frame. This can reduce confusion and help make participating the default choice (Johnson et al. 2017). It can also be helpful to emphasize to participants the value of their input and the benefits of their participation in the survey, but researchers should be careful that such language does not lead to biased responses or differential response rates.

Responsive design

Even with the above suggestions, there is always some level of uncertainty about the survey response rate that researchers will get. Responsive survey design is a method for handling this uncertainty by using the data collected in early stages of the survey to inform later stages of research and survey design. Piloting is essential for figuring out the most effective survey protocol, especially when deciding between different ways of increasing survey response rates. Pre-tests, focus groups, cognitive interviews, and real-time monitoring can help inform the design of the pilot, the final survey design, and implementation. To address non-response bias, researchers should consider opportunities to test and correct for it when fielding surveys, such as through validation against administrative data, modeling sources of non-response, and randomizing additional interventions (such as incentives and reminders) intended to mitigate non-response; a minimal example of such a randomization is sketched below. 
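As a minimal sketch of that last point, the hypothetical example below randomly assigns initial non-respondents to two incentive amounts so that the incentive's effect on response rates can later be estimated; the frame size, arm labels, and dollar amounts are all illustrative assumptions rather than recommendations.

```python
# Illustrative randomization of a follow-up incentive among initial non-respondents.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)  # fixed seed so the assignment is reproducible

# Hypothetical frame of people who did not respond to the first mailing.
nonrespondents = pd.DataFrame({"person_id": range(1, 2001)})

# Assign half to a $10 incentive and half to the standard $5 incentive.
arms = np.repeat(["usd_5", "usd_10"], len(nonrespondents) // 2)
nonrespondents["incentive_arm"] = rng.permutation(arms)

print(nonrespondents["incentive_arm"].value_counts())
```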

  • Adopt mixed-mode strategies. Many researchers use a mix of different survey modes (e.g., mail, telephone, web) in order to maximize survey response rates. Responsive survey design is suitable for evaluating the most cost-effective survey modes to use in follow-ups, as researchers can test which survey methods will help achieve the desired response rates among their target population. The evidence on mixed-mode strategies finds that sequential mixed-mode strategies (e.g., providing mail surveys followed by web survey access) yield higher response rates than concurrent mixed-mode strategies (e.g., providing mail survey and web survey access at the same time). In fact, concurrent mixed-mode strategies may not even increase response rates compared to solely using mail surveys (Millar and Dillman 2011; Medway and Fulton 2012). If researchers choose to provide the option to answer the survey using computer-assisted methods (such as online surveys) during subsequent follow-ups, this can reduce the survey burden on respondents and cater questions to the respondents by implementing skips. For members of the population who are comfortable using the internet and have internet access, online web surveys may be a better mode and can also help track response rates.
  • Alternatively, a mailing can serve as a gateway to a survey that is hosted elsewhere (e.g., an online web survey or a telephone survey). For example, researchers could provide a quick response (QR) code or web link to an online survey or a phone number to call for a telephone survey. The most practical design to implement is mail followed by web, as both are self-administered and do not require the additional involvement of an interviewer. The American Community Survey’s strategy as of 2016 is to contact households by mail and provide a link to a web survey to encourage internet response; upon non-response, the Census Bureau then sends a paper questionnaire (National Academies of Sciences, Engineering, and Medicine 2016). 
  • Consider multi-phase sampling. Since response rates are highly variable, responsive survey design lends itself readily to multi-phase sampling. For a particular survey round, the researcher divides the survey sample into multiple phases of follow-up based on response rates. Researchers should prioritize cases and samples that are important for the survey and vary the level of effort and incentive across phases. Assuming that researchers have a fixed budget, an important consideration is the bias-variance trade-off: increasing response rates for a subsample and up-weighting that subsample may reduce bias but increase variance.9 One recommendation is to pursue the least intensive survey methods with the whole sample first. Researchers can then follow up with a random sample of the non-respondents using more intensive survey methods and incentives in the next phase.

Case Study: In the twelve-month follow-up survey in the OHIE (Finkelstein et al. 2010), the researchers used a more intensive follow-up protocol for 30 percent of non-respondents. This protocol included an additional follow-up phone call to complete a phone survey and two additional mailings. The first mailing was a postcard that provided information for accessing the survey online, a $5 incentive, and multiple ways for respondents to update their contact information: an email address, an 800 toll-free number, and a detachable prepaid postcard. The second mailing was in the form of a letter containing the same information as the previous postcard and a $10 incentive, but without the detachable address update card. The researchers weighted the subsample respondents proportionally to the inverse of the probability of receiving additional follow-up. The basic survey protocol for the twelve-month survey yielded an initial response rate of 36 percent. With the intensive follow-up protocol, the researchers were able to increase the total effective response rate to 50 percent.
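The sketch below illustrates the weighting logic described in this case study on a toy dataset: respondents reached only through the intensive phase are up-weighted by the inverse of the 30 percent probability of receiving the intensive protocol. The data frame, column names, and outcome values are hypothetical; this is not the OHIE analysis code.

```python
# Illustrative inverse-probability weighting for an intensive follow-up subsample.
import pandas as pd

p_intensive = 0.30  # share of non-respondents randomly selected for intensive follow-up

# Hypothetical respondent-level data; "phase" records how each response was obtained.
df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5],
    "phase": ["basic", "basic", "basic", "intensive", "intensive"],
    "outcome": [0.8, 0.3, 0.6, 0.5, 0.9],
})

# Weight = 1 / probability of being offered the protocol that produced the response.
df["weight"] = df["phase"].map({"basic": 1.0, "intensive": 1.0 / p_intensive})

weighted_mean = (df["outcome"] * df["weight"]).sum() / df["weight"].sum()
print(f"weighted mean outcome: {weighted_mean:.3f}")
```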

Budget considerations

Many of the strategies presented in this resource are likely to increase the mail survey budget. Researchers should be prepared to weigh the tradeoffs between the desired response rate and the additional monetary costs, time, and logistics. Depending on the scale of the mail survey, the marginal cost of each additional mailing (e.g., postage and incentives) may vary. Some considerations when budgeting for a mail survey typically include the following (not in any particular order of importance; a simple, illustrative cost calculation follows the list):

  1. Printing of mailing materials and questionnaires. Printing in color and on glossy paper is usually more expensive. Printing and reprinting questionnaires can also be costly if the questionnaire is many pages long.
  2. Postage for mailing. The costs of using different postal methods are likely to depend on existing mailing processes and several other factors, such as the size, weight, and quantity of envelopes and the distance the mail will travel. Postage can be one of the biggest cost items of a mail survey operation (Grubert 2017), especially if researchers choose to use special delivery or private, premium shipping services such as FedEx or UPS.
  3. Incentives and compensation for participants. In addition to the raw amount, consider the logistics in terms of the number of bills and denominations required. For example, $2 bills have traditionally been favored in mail survey research since there are logistical advantages to including a single $2 bill rather than two $1 bills.10
  4. Cost of follow-ups. Remember to budget for additional staff time, additional resources, and potential extensions to the project timeline. If the project involves any intensive tracking, the cost for each additional response from the “intensive” phase would be greater than the cost of a regular response.
  5. Services for getting updated contact information. For example, the cost for the NCOA database may vary depending on the number of records, number of variables, and level of detail requested. This is especially important to consider if researchers anticipate updating contact information frequently over multiple mailings.
  6. Design of mailing materials and surveys. These costs include research staff time for in-house design or the costs of hiring an external design firm.
  7. Logistics of mail survey operations. Researchers should weigh the trade-offs between research staff time and financial costs (e.g., outsourcing some tasks) when planning and implementing the mail survey. One major consideration is whether to contract a third-party survey firm to manage survey logistics, particularly that of printing and mailing.
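To illustrate how these line items can be combined, the sketch below compares the expected cost per completed survey under two hypothetical mailing protocols. Every figure (postage, printing, incentive, and the assumed response rates) is made up for illustration, and reminders are costed for the full sample for simplicity.

```python
# Illustrative cost-per-completed-survey comparison for two mailing protocols.
def cost_per_complete(n_mailed, response_rate, postage, printing, incentive,
                      n_reminders=0, reminder_cost=0.0):
    # For simplicity, reminders are costed for everyone mailed, not just non-respondents.
    total_cost = n_mailed * (postage + printing + incentive + n_reminders * reminder_cost)
    completed = n_mailed * response_rate
    return total_cost / completed

# Protocol A: single mailing with a $5 incentive, assumed 10% response rate.
a = cost_per_complete(n_mailed=5000, response_rate=0.10,
                      postage=0.60, printing=1.50, incentive=5.00)
# Protocol B: adds two postcard reminders, assumed 16% response rate.
b = cost_per_complete(n_mailed=5000, response_rate=0.16,
                      postage=0.60, printing=1.50, incentive=5.00,
                      n_reminders=2, reminder_cost=0.50)
print(f"Protocol A: ${a:.2f} per completed survey")
print(f"Protocol B: ${b:.2f} per completed survey")
```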

Last updated June 2021. 

These resources are a collaborative effort. If you notice a bug or have a suggestion for additional content, please fill out this form.


 

Acknowledgments

We thank Amy Finkelstein, Daniela Gomez Trevino, Jesse Gubb, Amanda Kohn, Eliza Keller, Manasi Deshpande, Neale Mahoney, and Ray Kluender for helpful comments. Clare Sachsse copy-edited this document. This work was made possible by support from Arnold Ventures and the Alfred P. Sloan Foundation. Please send any comments, questions or feedback to [email protected]. Any errors are our own.

1.
Cold calling, a method frequently employed in the past, has fallen out of favor because it may seem intrusive. People in general answer their phones less frequently than they used to (Kennedy and Hartig 2019).
2.
In this resource, response refers to an action taken by the respondent in reply to mailed nudges, interventions, and surveys. When discussing surveys, the definition of response rates follows the American Association for Public Opinion Research's definition. 
3.
Other ways for examining the problem of non-response include cross-checking with administrative data for overlapping outcomes if available, and bounding the treatment effect (Lee 2009).
4.
A recent study (Miller, Wherry, and Foster 2018) successfully used Experian data to investigate the economic impacts for women who were denied abortions.
5.
Several J-PAL affiliated researchers have successfully used Accurint and TLO to validate and update the physical addresses and phone numbers of the study sample and found that Social Security Numbers were particularly useful for ensuring the fidelity of this data.
6.
In general, questionnaires should avoid complex words and sentences and follow plain language principles like those found at plainlanguage.gov. Dillman’s Tailored Design Method offers plenty of suggestions for designing an effective survey. The CDC also has some useful guidance on evaluating the quality of survey questions.
7.
For example, California law prohibits lotteries. The UCSD IRB and UC Berkeley IRB explain this in more detail. Other states or jurisdictions may have similar regulations.
8.
Researchers may want to consult with their local IRB for guidance on the appropriate incentive amount. This J-PAL resource on intake and consent lists some considerations for incentive payments.
9.
This J-PAL resource provides additional information on using weights to adjust for non-response in surveys.
10.
There is no evidence to suggest that either denomination of bills is more effective at inducing responses than the other (Mills 2019).

Additional Resources
    1. Survey design | J-PAL

      This resource includes an overview of survey development, practical tips, formatting suggestions, and quality control guides that are important for a well-designed survey.

    2. Working with a third-party survey firm | J-PAL

      This resource provides guidance on when to use an external survey firm and the process of identifying and contracting with a firm. It includes considerations for the study population, sample size, and survey monitoring processes that may be beneficial to review in regards to response rates.

    3. How to design effective communications | Office of Evaluation Sciences (OES)

      This resource lists recommendations for engaging with respondents through effective communications. They are based on insights and experience of the United States government with sending letters and emails.

References

    Behavioural Insights and Public Policy. 2017. Paris: OECD Publishing. doi:10.1787/9789264270480-en.

    Cantor, David, and Patricia Cunningham. 2002. “Studies of Welfare Populations: Data Collection and Research Issues.” In Studies of Welfare Populations, edited by Robert Moffitt, Constance Citro, and Michele Ver Ploeg, 55–85. Washington, D.C.: National Academies Press. doi:10.17226/10206.

    CDC. 2018a. “Evaluation Briefs: Increasing Questionnaire Response Rates.” https://www.cdc.gov/Healthyyouth/evaluation/. Last accessed June 2, 2021.

    CDC. 2018b. “Evaluation Briefs: Using Incentives to Boost Response Rates: When to Offer Incentives.” https://www.cdc.gov/healthyyouth/evaluation/. Last accessed June 2, 2021.

    Dutz, Deniz, Ingrid Huitfeldt, Santiago Lacouture, Magne Mogstad, Alexander Torgovitsky, and Winnie van Dijk. 2021. “Selection in Surveys.” National Bureau of Economic Research Working Paper No. 29549. doi:10.3386/w29549.

    Edwards, Phil, Ian Roberts, Mike Clarke, Carolyn DiGuiseppi, Sarah Pratap, Reinhard Wentz, and Irene Kwan. 2002. “Increasing Response Rates to Postal Questionnaires: Systematic Review.” BMJ (Clinical Research Ed.) 324 (7347): 1183. doi:10.1136/bmj.324.7347.1183.

    Edwards, Phil, Rachel Cooper, Ian Roberts, and Chris Frost. 2005. “Meta-Analysis of Randomised Trials of Monetary Incentives and Response to Mailed Questionnaires.” Journal of Epidemiology and Community Health 59: 987–99. doi:10.1136/jech.2005.034397.

    Edwards, Philip James, Ian Roberts, Mike J. Clarke, Carolyn DiGuiseppi, Reinhard Wentz, Irene Kwan, Rachel Cooper, Lambert M. Felix, and Sarah Pratap. 2009. “Methods to Increase Response to Postal and Electronic Questionnaires.” Cochrane Database of Systematic Reviews. John Wiley and Sons Ltd. doi:10.1002/14651858.MR000008.pub4.

    Fredrickson, Doren D., et al. 2005. "Optimal Design Features for Surveying Low-Income Populations." Journal of Health Care for the Poor and Underserved 16 (4): 677-690. doi:10.1353/hpu.2005.0096.

    Finkelstein, Amy, and Matthew J Notowidigdo. 2019. “Take-Up and Targeting: Experimental Evidence from SNAP.” The Quarterly Journal of Economics 134 (3): 1505–56. doi:10.1093/qje/qjz013.

    Finkelstein, Amy, Sarah Taubman, Heidi Allen, Jonathan Gruber, Joseph P. Newhouse, Bill Wright, Kate Baicker, and Oregon Health Study Group. 2010. “The Short-Run Impact of Extending Public Health Insurance to Low Income Adults: Evidence from the First Year of The Oregon Medicaid Experiment [Analysis Plan].” Working Paper.

    Gendall, Philip. 2005. “The Effect of Covering Letter Personalisation in Mail Surveys.” The International Journal of Market Research 47 (4): 365–380. doi:10.1177/147078530504700404.

    Grubert, Emily. 2017. “How to Do Mail Surveys in the Digital Age: A Practical Guide.” Survey Practice 10 (1): 1–8. doi:10.29115/sp-2017-0002.

    Holbrook, Allyson L., Jon A. Krosnick, and Alison Pfent. 2007. “The Causes and Consequences of Response Rates in Surveys by the News Media and Government Contractor Survey Research Firms.” Advances in Telephone Survey Methodology 60607: 499–528. doi:10.1002/9780470173404.ch23.

    Johnson, Amy, Ryan Callahan, Jesse Chandler, and Jason Markesich. 2017. “Using Behavioral Science to Improve Survey Response: An Experiment with the National Beneficiary Survey (In Focus Brief),” 1–2. Mathematica.

    Kennedy, Courtney, and Hannah Hartig. 2019. “Phone Survey Response Rates Decline Again | Pew Research Center.” Pew Research Center. https://www.pewresearch.org/fact-tank/2019/02/27/response-rates-in-telephone-surveys-have-resumed-their-decline/. Last accessed June 2, 2021.

    Lavrakas, Paul. 2008. “Encyclopedia of Survey Research Methods.” SAGE Publications. Thousand Oaks, California. doi:10.4135/9781412963947.

    Lee, David S. 2009. “Training, Wages, and Sample Selection: Estimating Sharp Bounds on Treatment Effects.” Review of Economic Studies 76 (3): 1071–1102. doi:10.1111/j.1467-937X.2009.00536.x.

    Levere, Michael, and David Wittenburg. 2019. “Lessons from Pilot Tests of Recruitment for the Promoting Opportunity Demonstration.” Mathematica.

    Liebman, Jeffrey B., and Erzo F.P. Luttmer. 2012. “The Perception of Social Security Incentives for Labor Supply and Retirement: The Median Voter Knows More than You’d Think.” Tax Policy and the Economy 26 (1): 1–42. doi:10.1086/665501.

    Mayer, Susan E., Ariel Kalil, Philip Oreopoulos, and Sebastian Gallegos. 2019. “Using Behavioral Insights to Increase Parental Engagement.” Journal of Human Resources 54 (4): 900–925. doi:10.3368/jhr.54.4.0617.8835r.

    Mcphee, Cameron, and Sarah Hastedt. 2012. “More Money? The Impact of Larger Incentives on Response Rates in a Two-Phase Mail Survey.” National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education. Washington, DC.

    Millar, Morgan M., and Don A. Dillman. 2011. “Improving Response to Web and Mixed-Mode Surveys.” Public Opinion Quarterly 75 (2): 249–69. doi:10.1093/poq/nfr003.

    Miller, Sarah, Laura R. Wherry, and Diana Greene Foster. 2018. “The Economic Consequences of Being Denied an Abortion.” NBER Working Paper 26662.

    Mills, Sarah. 2019. “A $2 Bill or Two $1 Bills: An Experiment That Challenges Standard Protocol.” Field Methods 31, no. 3: 230–40. doi:10.1177/1525822X19844307.

    National Academies of Sciences, Engineering, and Medicine. 2016. Reducing Response Burden in the American Community Survey: Proceedings of a Workshop. Edited by Thomas J. Plewes. Washington, DC: The National Academies Press. doi:10.17226/23639.

    Neher, Chris, and K. S. Neher. 2018. “Impact of Sample Frame on Survey Response Rates in Repeat-Contact Mail Surveys.” Park Science 34 (1): 43–46. https://www.nps.gov/articles/parkscience34-1_43-46_neher_neher_3877.htm. Last accessed June 2, 2021.

    Rao, Kumar, Olena Kaminska, Allan L. Mccutcheon, Darby Miller Steiger, Julie Curd, Larry Curd, Alison Hunter, Susan Sorenson, and Julie Zeplin. 2010. “Recruiting Probability Samples for a Multi-Mode Research Panel with Internet and Mail Components.” Public Opinion Quarterly 74 (1): 68–84. doi:10.1093/poq/nfp091.

    Stedman, Richard C., Nancy A. Connelly, Thomas A. Heberlein, Daniel J. Decker, and Shorna B. Allred. 2019. “The End of the (Research) World As We Know It? Understanding and Coping With Declining Response Rates to Mail Surveys.” Society & Natural Resources 32 (10): 1139–54. doi:10.1080/08941920.2019.1587127.

    Williams, Robin. 2014. The Non-Designer’s Design Book. 4th ed. San Francisco, CA: Peachpit Press.

    Yan, Alan, Joshua Kalla, and David E. Broockman. 2018. “Increasing Response Rates and Representativeness of Online Panels Recruited by Mail: Evidence from Experiments in 12 Original Surveys.” Stanford University Graduate School of Business Research Paper No. 18-12. doi:10.2139/ssrn.3136245.
