This three part post will examine survey response rates from a CME perspective, helping you estimate how many responses you need, what the response rate will likely be, and how to improve the rate. Data is drawn from personal projects, as well as the CME, healthcare, and broader literature.
Today’s post, the last of three, discusses how to improve response rates, and summarizes the three posts.
Improving Response Rates
There are several reasons to improve response rates. Higher response rates improve the certainty of conclusions reached, particularly when drilling down into segments of the population. For example, it may be very useful to distinguish between the various impacts on physicians, nurses, and pharmacists, and higher response numbers help you slice the data finer. The higher the proportion of the population who respond, the lower the likelihood of a ‘selection bias’ that may skew results. (There are “modest differences” shown between responders and nonresponders, and between early and late respondents, on demographic and/or practice-related characteristics. (Cull, O'Connor, Sharp, & Tang, 2005) (Olmsted, Murphy, McFarlane, & Hill, 2006)) The more, and better, the textual responses, the richer and more nuanced the text used in the write-up can be. And even where the outcomes reporting would not be improved by more responses, a higher response rate means fewer people need to be surveyed to collect the same amount of data, saving time and money.
As made clear elsewhere in the article, the literature focused on CME is not large, and it is occupied with topics other than testing techniques for improving response rates. To discuss the different strategies adequately, we therefore draw on ideas that may not be supported by CME literature, or even by literature from elsewhere in healthcare.
Communication:
Clear communication is essential in any learning and information-based activities. Here are our suggestions on strategies for communication.
- Provide advance communication about the questionnaire. This seems intuitively important; research on the US Census indicates that notifying people that the survey was going to arrive improved response rates by six percentage points (Dillman, Clark, & Treat, 1994). However, not all research agrees on this point: one study of physicians showed no difference between the group that received a premailing and the control group (Shiono & Klebanoff, 1991), and Lockyer et al. did promote their survey in advance, yet their response rates are lower than those of any other informational survey listed in Table 1. (Lockyer, Horsley, Zeiter, & Campbell, 2015)
- Use one or two follow-up reminders. Written and telephone reminders are each associated with an improvement of roughly 13 percentage points. (Asch, Jedrziewski, & Christakis, 1997)
- Review the questionnaire at the end of the educational session. This helps explain the survey, and gives a chance to motivate the respondents better.
- Communicate the deadline for submitting responses. If people can put a specific time on their calendar, the survey is more likely to get done.
Motivation:
1. Give something away with the questionnaire. This has been shown to be a very good way to improve response rates, with some reservations. What does help? Cash, when given up front and in a form that can be spent. One study with physicians showed that, while both were effective, checks were more effective than gift cards at the same $25 amount (Hogan & LaForce, 2008). One systematic review found that even a dollar provided adequate incentive to improve response rates in physicians, and that the effect leveled off sharply after $1, even as the amount rose to $20. (VanGeest, Johnson, & Welch, 2007) Token items such as pencils (Kellerman & Herold, 2001) or candy (Burt & Woodwell, 2005) don’t seem to help response rates with physicians, nor do “lottery” chances to win some item in the future (Tamayo-Sarver & Baker, 2004). For internet-based surveys, the author has had good luck with giving away movie tickets in return for survey participation regarding technology use. (Capital Analytics, 2010) Comparative studies indicate that cash payments are more effective than charity inducements (Deehan, Templeton, Taylor, Drummond, & Strang, 1997) (Olson, Schneiderman, & Armstrong, 1993), monetary donations to the respondent's alma mater (Gattellari & Ward, 2001), non-monetary incentives (Easton, Price, Telljohann, & Boehm, 1997), or opportunities to win a cash prize through a lottery. (Tamayo-Sarver & Baker, 2004)
In the case of commercially supported CME activities, there’s an obstacle to using incentives for the physician learners who participate: the Sunshine Act (Health Policy Briefs, 2014). The Sunshine Act requires reporting of any payments or other transfers of value made to physicians or teaching hospitals from August 2013 onward.
2. Have an introductory message signed by an executive or other important person in the organization. One study compared surveys sent on the letterhead of an AMA Vice President with surveys sent by a marketing research firm; the surveys sent under the more prestigious letterhead had an 11.2 percentage point advantage. (Olson, Schneiderman, & Armstrong, 1993)
3. Clearly communicate why the questionnaire is important. Show how the data will be integrated with other data.
4. Let participants know what actions will be taken with the data, and who will see it. If appropriate, let the target audience know that they are part of a carefully selected sample. Add emotional appeal. The world is full of surveys that never get acted on, or whose results aren’t interpreted. Anecdotal evidence in the area of employee engagement suggests that giving an engagement survey, then not acting on the results, actually reduces employee engagement.
Design and Execution:
1. Distribute the questionnaire to a captive audience. This is far and away the best idea. For CME applications, judging from Academy of CME data, if a person has to fill out information to get credit, and the survey questions are in the same form, the chance that they will fill out only the minimal information needed for credit and walk away is low. In reviewing data from several thousand learners in the last year, all respondents who filled in information sufficient to obtain credit also answered the survey questions.
2. Keep the questionnaire simple and as brief as possible. It is widely believed among those who create and design surveys that the response rate falls dramatically as length increases, and this is borne out by research in the medical community. (Hing, Schappert, Burt, & Shimizu, 2005) (Jepson, Asch, Hershey, & Ubel, 2005) (Cartwright, 1978) Exactly what the response curve looks like is unknown, but every extra minute is thought to count. One study showed that closed-ended questionnaires resulted in 22% higher response rates. (Griffith, Cook, Guyatt, & Charles, 1999)
3. Keep questionnaire responses anonymous – or at least confidential. While CME surveys seem less likely than many others to make participants uneasy about being identified, this is always good practice. Keep in mind that confidential may be better than anonymous when analysis would be improved by linking multiple datasets together. For example, if a survey is given at a conference, being able to link it by name or email address to the demographic information contained in the registration data can make the outcomes reporting richer. Using a third party to collect and analyze the data can increase participants’ sense of privacy.
4. Make it easy to respond; consider alternative distribution channels, such as both regular mail and e-mail. If regular mail is used, include a self-addressed, stamped envelope; if an internet survey is used, test it rigorously on several different types of browsers and computers.
5. Allow completion of the survey during work or course hours, rather than requiring the participant to use personal time.
6. Design the questionnaire to attract attention, with a professional format. A census redesign featuring user-friendly graphic design – color coding, prominent question numbers, and clear category levels – improved response rates by 4 percentage points. (Dillman, Clark, & Treat, 1994)
Summary
Surveys are a keystone of CME evaluation processes, despite a new emphasis on operational data, such as charts. Surveys are easy to do, and provide information across a variety of levels, including practice change, knowledge gain, transfer of behaviors, enablers and barriers for new techniques, and outcomes estimation. The number of responses needs to be adequate to test the efficacy of the learning event. How many responses are required? That will depend on which question is the most crucial. The answer to that question should be evaluated based on the complexity of the answer required, the mean and variance of the response, the required effect size, and other factors. You won't have the exact answer, of course, but using similar studies gets you close enough for an intelligent estimate.
The number of expected responses is estimated from similar work done elsewhere, with the large caveat that no two studies are exactly alike. Factors to consider in comparing studies include the timing of the survey (intelligence gathering prior to other contact, event-time surveys, and post-event follow-up surveys), the populations surveyed, and the healthcare domain. Given the unpredictability of the response rate, the possibility of underestimating the required effect size, and the ever-present desire to slice the data into finer and finer segments to measure sub-populations, improving the response rate is always a good idea. Even if everything goes exactly as planned, and no further analysis is desired, higher response rates let the evaluator – and the participants – save time and money on distributing surveys.
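As a rough illustration of the arithmetic involved – not drawn from any of the cited studies – the textbook sample-size formula for estimating a proportion can be combined with an assumed response rate to size the invitation list. The 5% margin of error, 95% confidence level, and 35% response rate below are illustrative assumptions, not recommendations:

```python
import math

def sample_size_for_proportion(margin=0.05, p=0.5, z=1.96):
    """Responses needed to estimate a proportion within +/- margin
    at ~95% confidence (z = 1.96); p = 0.5 is the worst case."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

def invitations_needed(required_responses, expected_response_rate):
    """Scale up the invitation list to offset expected non-response."""
    return math.ceil(required_responses / expected_response_rate)

needed = sample_size_for_proportion()       # 385 responses
invites = invitations_needed(needed, 0.35)  # 1100 invitations at a 35% rate
```

The same calculation applied to a sub-population (say, only the pharmacists) shows why slicing the data demands a higher overall response count: each segment needs its own adequate n.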
Numerous ways to improve response rates have been proposed over the years; whether there is empirical evidence to support a method, and how applicable it is to your particular area, can be open questions. The methods with the strongest support are:
1. Distribute your survey to a captive audience – those sitting in a room at a presentation, or those filling out information to obtain CME credits.
2. Communicate clearly the need for the survey, how it will help improve the activity for everyone in the future, and the confidentiality of the responses. Communications should come from a person who is well-regarded by the audience. Follow up with non-responders promptly.
3. Compensate respondents, in advance, and in cash or some other generally useful good, if possible. Unfortunately, the Sunshine Act eliminates this option for many purposes.
4. Keep the survey short and relevant. Use closed-ended responses where an open-ended response would not add value.
Dillman (Dillman D. A., 2000) provides the following general advice that we like:
“One of the most common mistakes made in the design of mail surveys is to assume the existence of a ‘magic bullet,’ that is, one technique that will assure a high response rate regardless of how other aspects of the survey are designed.”
A hybrid of these techniques, mixed judiciously with common sense, will likely produce improvement in response rates. Keep in mind that, where we have noted percentage point increases, those are examples based on different situations, and it is highly unlikely you will be able to combine a number of different ones without diminishing returns.
Surveys will continue to be an important tool in measuring and improving medical education, and response rates are an important aspect of surveys: good response rates allow higher statistical certainty and deeper drill-downs into population segments, reduce the chance of selective response bias, and provide more textual material to enrich reporting.
Works Cited
Asch, D. A., Jedrziewski, M. K., & Christakis, N. A. (1997, Oct). Response rates to mail surveys published in medical journals. Journal of Clinical Epidemiology, 50(10), 1129-1136.
Buriak, S. E., Potter, J., & Bleckley, M. K. (2015). Using a Predictive Model of Clinician Intention to Improve Continuing Health Professional Education on Cancer Survivorship. Journal of Continuing Education in the Health Professions, 35(1), 57-74.
Burt, C., & Woodwell, D. (2005). Tests of methods to improve response to physician surveys. Arlington, VA: Paper presented at the 2005 Federal Committee on Statistical Methodology.
Capital Analytics. (2010, June 20). Sun Microsystems University Case Study. Retrieved April 18, 2012, from Capital Analytics: http://www.capanalytics.com/wp-content/uploads/2012/01/BRANDED-SunMentoring-Case-Study-2012.pdf
Cartwright, A. (1978). Professionals as responders: variations in and effects of response rates to questionnaires, 1966-77. Br Med J, 2, 1419-1421.
Cook, C. F., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in web or internet-based surveys. Educational and Psychological Measurement, 60(7), 821-836.
Cull, W. L., O'Connor, K. G., Sharp, S., & Tang, S. S. (2005). Response rates and response bias for 50 surveys of pediatricians. Health Services Research, 40, 213-226.
Deehan, A., Templeton, L., Taylor, C., Drummond, C., & Strang, J. (1997). The effect of cash and other financial inducements on the response rate of general practitioners in a national postal survey. British Journal of General Practice, 47, 87-90.
Dillman, D. A. (2000). Mail and Internet Surveys: The Tailored Design Method. New York, New York: Wiley and Sons.
Dillman, D. A., Clark, J. R., & Treat, J. (1994). The Influence of 13 Design Factors on Response Rates to Census Surveys. Annual Research Conference Proceedings, U.S. Bureau of the the Census, (pp. 137-159). Washington, D.C.
Easton, A. N., Price, J. H., Telljohann, S. K., & Boehm, K. (1997). An informational versus monetary incentive in increasing physicians' response rates. Psychological Reports, 81, 968-970.
Evans, J. A., Mazmanian, P. E., Dow, A. W., Lockeman, K. S., & Yanchick, V. A. (2014). Commitment to Change and Assessment of Confidence: Tools to Inform the Design and Evaluation of Interprofessional Education. Journal of Continuing Education in the Health Professions, 34(3), 155-163.
Field, T. S., Cadoret, C. A., Brown, M. L., Ford, M., Greene, S. M., & Hill, D. (2002). Surveying physicians: Do components of the "Total Design Approach" to optimizing survey response rates apply to physicians? Medical Care, 40, 596-606.
Garber, M. (2012, May 30). The Future Growth of the Internet in One Chart. Retrieved from The Atlantic: http://www.theatlantic.com/technology/archive/2012/05/the-future-growth-of-the-internet-in-one-chart-and-one-graph/257811/
Gattellari, M., & Ward, J. E. (2001). Will donations to their learned college increase surgeons' participation in surveys? A randomized trial. Journal of Clinical Epidemiology, 54, 645-650.
Grava-Gubins, I., & Scott, S. (2008, Oct). Effects of various methodologic strategies: Survey response rates among Canadian physicians and physicians-in-training. Can Fam Physician, 54(10), 1424-1430.
Graves, R. M. (2006). Nonresponse Rates and Nonresponse Bias in Household Surveys. Public Opin Q, 70(5), 646-675.
Griffith, L. E., Cook, D. J., Guyatt, G. H., & Charles, C. A. (1999). Comparison of open and closed questionnaire formats in obtaining demographic information from Canadian general internists. Journal of Clinical Epidemiology, 52, 977-1005.
Grzeskowiak, L. E., Thomas, A. E., To, J., Reeve, E., & Phillips, A. J. (2015). Enhancing Continuing Education Activities Using Audience Response Systems: A Single-Blind Controlled Trial. Journal of Continuing Education in the Health Professions, 35(1), 38-45.
Harris, W. A., Spencer, P., Winthrop, K., & Kravitz, J. (2014). Training Mid- to Late-Career Health Professionals for Clinical Work in Low-Income Regions Abroad. Journal of Continuing Education in the Health Professions, 34(3), 179-184.
Health Policy Briefs. (2014, Oct 2). Retrieved from Health Affairs.org: http://www.healthaffairs.org/healthpolicybriefs/brief.php?brief_id=127
Hing, E., Schappert, S. M., Burt, C. W., & Shimizu, I. M. (2005). Effects of form length and item format on response patterns and estimates of physician office and hospital outpatient department visits. National Ambulatory Medical Care Survey and National Hospital Ambulatory Medical Care Survey. Vital Health Statistics, 2(139), 1-32.
Hogan, S. O., & LaForce, M. (2008). Incentives in Physician Surveys: An Experiment Using Gift Cards and Checks. American Statistical Association Online Proceedings, (pp. 4179-84). Retrieved from http://www.amstat.org/sections/srms/proceedings/y2008/files/hogan.pdf
Jepson, C., Asch, D., Hershey, J. C., & Ubel, P. A. (2005). In a mailed physician survey, questionnaire length had a threshold effect on response rate. Journal of Clinical Epidemiology, 58, 103-105.
Kellerman, S. E., & Herold, J. (2001). Physician Response to Surveys: A Review of the Literature. Am J Prev Med, 20(1), 61-67.
Leece, P., Bhandari, M., Sprague, S., Swiontkowski, M. F., Schemitsch, E. H., & Tornetta, P. (2004). Internet versus mail questionnaires: A controlled comparison. Journal of Medical Internet Research, 39, e39.
Lockyer, J., Horsley, T., Zeiter, J., & Campbell, C. (2015). Role for Assessment in Maintenance of Certification: Physician Perceptions of Assessment. Journal of Continuing Education in the Health Professions, 35(1), 11-17.
Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998, August). Information about Barriers to Planned Change: A Randomized Controlled Trial Involving Continuing Medical Education Lectures and Commitment to Change. Academic Medicine, 73(8), 882-886.
McConnell, M. H., Azzam, K., Xenodemetropoulos, T., & Panju, A. (2015). Effectiveness of Test-Enhanced Learning in Continuing Health Sciences Education: A Randomized Controlled Trial. Journal of Continuing Education in the Health Professions, 35(2), 119-122.
Nulty, D. (2008). The adequacy of response rates to online and paper surveys: what can be done? Assessment & Evaluation in Higher Education, 33(3), 301-314. doi:10.1080/02602930701293231
Olivieri, J. (2012, June 25). Rule of Thumb: Number of Participants per Survey Item. Retrieved December 17, 2015, from assessCME: https://assesscme.wordpress.com/2012/06/25/rule-of-thumb-number-of-participants-per-survey-item/
Olmsted, M. G., Murphy, J., McFarlane, E., & Hill, C. (2006). Evaluating methods for increasing physician survey cooperation. Miami Beach, FL: Annual Conference of the American Association for Public Opinion Research.
Olson, C. A. (2014, Spring). Survey Burden, Response Rates, and the Tragedy of the Commons. Journal of Continuing Education in the Health Professions, 34(2), 93-95.
Olson, L., Schneiderman, M., & Armstrong, R. V. (1993). Increasing Physician Survey Response Rates Without Biasing Survey Rates. Proceedings of the section on survey research methods of the American Statistical Association (pp. 1036-1041). Alexandria, VA: American Statistical Association. Retrieved from http://www.amstat.org/sections/srms/Proceedings/papers/1993_177.pdf
Phillips, J. J. (1997). Return on Investment in Training and Performance Improvement Programs. Houston, Texas: Gulf Publishing Company.
Pololi, L. H., Evans, A. T., Civian, J. T., Vasiliou, V., Coplit, L. D., Gillum, L. H., . . . Robert, T. B. (2015). Mentoring Faculty: A US National Survey of Its Adequacy and Linkage to Culture in Academic Health Centers. Journal of Continuing Education in the Health Professions, 35(3), 176-184.
Sarayani, A., Naderi-Behdani, F., Hadavand, N., Javadi, M., Farsad, F., Hadjibabaie, M., & Gholami, K. (2015). A 3-Armed Randomized Trial of Nurses' Continuing Education Meetings on Adverse Drug Reactions. Journal of Continuing Education in the Health Professions, 35(2), 123-130.
Shiono, P. H., & Klebanoff, M. A. (1991). The effect of two mailing strategies on the response to a survey of physicians. Am J Epidemiol, 134, 539-542.
Singer, E. (2006). Introduction: Nonresponse Bias in Household Surveys. Public Opinion Q, 70(5), 637-645. doi:10.1093/poq/nfl034
Tamayo-Sarver, J. H., & Baker, D. W. (2004). Comparison of responses to a $2 bill versus a chance to win $250 in a mail survey of emergency physicians. Academic Emergency Medicine, 11, 888-892.
VanGeest, J. B., Johnson, T. P., & Welch, V. L. (2007). Methodologies for Improving Response Rates in Surveys of Physicians: A Systematic Review. Evaluation & the Health Professions, 30(4), 303-321. doi:10.1177/0163278707307899
Williams, B. W., Kessler, H. A., & Williams, M. V. (2015). Relationship Among Knowledge Acquisition, Motivation to Change, and Self-Efficacy in CME Participants. Journal of Continuing Education in the Health Professions, 35(S1), S13-S21.