High response rates — are they worth the effort?

June 23, 2025

Declining response rates present challenges for data collection, but does a high response rate always equal higher quality? Learn how your organization can navigate this complexity.

Response rates on surveys are declining across invitation channels and countries. On the one hand, this is a problem: many see a high response rate as a mark of quality — at minimum a sign that nothing went wrong and perhaps even that the data collection went well! Additionally, more responses increase statistical power and reduce potential bias in a survey.

On the other hand, a low response rate does not necessarily indicate a flawed measurement. For example, the response rate in Epinion’s exit poll during the 2024 European Parliament elections increased rapidly from 6 p.m. to 8 p.m., but the accuracy of the poll did not improve during the same period. Similarly, the Pew Research Center in the U.S. finds no correlation between response rates and the accuracy of phone surveys in measuring party preference and religiosity.

The relationship between response rates and quality is therefore not straightforward. Most would probably prefer a representative sample with a 25% response rate over a 50% response rate achieved because all men responded while all women declined. However, it is certain that a minimum level of responses is necessary to conduct survey research. If no one answers, the method simply doesn’t work.
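To see why, consider a minimal sketch with made-up numbers, purely for illustration: if men and women differ on the outcome and only men respond, a 50% response rate reproduces the men's answer rather than the population's, while a smaller but representative sample lands close to the truth.

```python
# Illustrative sketch with hypothetical numbers (not real survey data): why a
# representative 25% response rate can beat a biased 50% response rate.
import random

random.seed(1)

# Hypothetical population: 10,000 men and 10,000 women with different support
# rates (60% vs 40%), so the true overall share is about 50%.
population = [("man", random.random() < 0.60) for _ in range(10_000)] + \
             [("woman", random.random() < 0.40) for _ in range(10_000)]

def share(sample):
    """Share of 'yes' answers in a list of (group, answer) tuples."""
    return sum(answer for _, answer in sample) / len(sample)

# Scenario A: 25% response rate, but representative (responses missing at random).
representative = random.sample(population, k=len(population) // 4)

# Scenario B: 50% response rate, but only men responded.
men_only = [person for person in population if person[0] == "man"]

print(f"True share:            {share(population):.3f}")      # ~0.50
print(f"Representative 25%:    {share(representative):.3f}")  # close to 0.50
print(f"Biased 50% (men only): {share(men_only):.3f}")        # ~0.60
```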

One size fits some

Survey research has identified several strategies to increase response rates. Keep the survey quick to complete — meaning fewer and simpler questions. Send more and faster reminders. Personalize the invitation. Offer respondents meaningful incentives, such as gift cards for those who participate.

The problem is that all these strategies come with their own challenges. Shorter surveys yield less information about respondents. Respondents may find frequent reminders annoying. Personalized invitations can lead to less honest answers because respondents may feel “watched.” And incentives are not only expensive — they may also make respondents expect financial rewards, creating a race to the bottom where only the most well-funded projects can secure participation.

There is no guaranteed path to the necessary response rate. What is certain, though, is that the survey design should be thought through holistically — considering whom we want to ask and what we want to ask them about. A purely web-based survey, for example, is a poor choice if we want to assess digital literacy among older adults — since we would miss those with the greatest digital challenges, leading to an overestimation of digital skills. A phone survey is equally problematic if we want to measure trust in strangers — since we would miss those who are least trusting and never answer calls from unknown numbers. Survey design should always align with the target group and the phenomenon we wish to study. For example, in collaboration with the ROCKWOOL Foundation, we developed a survey design that successfully achieved both a high response rate and broad representativeness in a notoriously difficult target group: young people.

Learn more through systematic experiments

To make the best design decisions, it’s important to expand the list of what works — and for whom! This requires randomized experiments where some respondents are invited in one way and others in another, so we can test which method works best. This approach requires a degree of sacrifice: it means A) making data collection more complex for ourselves, and B) testing various strategies rather than sticking to what we think will work best.

But this is how we learn. So, if your workplace, research project, or organization is planning a survey-based data collection, consider whether the project could also be used to test different invitation strategies! And if you decide to run tests, test multiple variations simultaneously. If you are offering gift cards, randomly vary the amounts. If sending invitations, randomly vary the subject line. If the survey is long, randomly vary the survey length. This prevents overly narrow findings.
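As a sketch of what such an experiment could look like in practice, the snippet below randomly assigns each invitee to one level of each factor (gift-card amount, subject line, survey length). The factor names, levels, and file name are hypothetical placeholders, and the randomization is a simple independent draw per factor rather than a full experimental-design tool.

```python
# Minimal sketch of a factorial invitation experiment: each invitee is independently
# randomized on every factor, so the effect of each factor on the response rate can
# be estimated separately and in combination. Factors and levels are hypothetical.
import csv
import random

random.seed(42)  # fixed seed so the assignment can be reproduced and audited

FACTORS = {
    "gift_card_amount": [0, 50, 100],                      # e.g. amounts in DKK
    "subject_line": ["neutral", "personal", "deadline"],
    "survey_length": ["short", "long"],
}

def assign_conditions(invitee_ids):
    """Return one row per invitee with a randomly drawn level for each factor."""
    rows = []
    for invitee_id in invitee_ids:
        row = {"invitee_id": invitee_id}
        for factor, levels in FACTORS.items():
            row[factor] = random.choice(levels)
        rows.append(row)
    return rows

if __name__ == "__main__":
    invitees = [f"resp_{i:05d}" for i in range(1, 1001)]  # placeholder IDs
    assignments = assign_conditions(invitees)

    # Write the assignment table so it can be merged with the response data later.
    with open("invitation_assignments.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["invitee_id", *FACTORS])
        writer.writeheader()
        writer.writerows(assignments)
```

Because every factor is randomized independently, the resulting assignment table lets you compare response rates across levels of each factor with a simple comparison of groups, rather than running a separate study for every idea.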

Epinion recommends

A low response rate can indeed indicate poor data quality — but it doesn’t have to! The relationship between response rate and quality is complex. It depends not just on how many respond, but also on who responds and what they are responding to. Three recommendations can help navigate this complexity:

  • Only ask the necessary questions — and make them as easy to answer as possible. Respondent willingness is a shared resource we should protect.

  • Always consider invitation setup, incentive structure, and accessibility in relation to the specific topic and target group of the survey.

  • Consider implementing an experiment to test which strategies can help increase response rates.

 

Thorkil Klint
SENIOR MANAGER
thkl@epinionglobal.com