Estimating and Improving Survey Response Rates in CME: What Response Rates Should You Expect?

This three-part post examines survey response rates from a CME perspective, helping you estimate how many responses you need, what response rate you can expect, and how to improve that rate. Data are drawn from personal projects as well as from the CME, healthcare, and broader literature.

Today’s post, the second of three, discusses what response rates you can expect from CME work, reviewing recent applicable literature. The next post will discuss how to improve response rates and summarize the series.

What Response Rates to Expect

Response rates are never as good as they could be, and when surveys are sent out well after an event, rates are even worse. For this article, we discuss three categories of surveys that cover most CME needs:

· Informational Surveys, defined as surveys sent to a broad population that has had no prior contact with the survey's authors, such as participation in a CME event. Surveying a broad population to ascertain skill gaps during the instructional design phase is an example.

· Event Questionnaires, defined as surveys given immediately upon completion of an educational event.

· Follow-Ups, defined as surveys sent within a 30- to 90-day window after an event.

Informational surveys seem to have the lowest completion rates. They are often used for needs and skill-gap assessments, to understand the current state of care and knowledge levels. As previously noted, calculating how many responses are appropriate requires you to estimate the total population of interest (e.g., nurses specializing in rheumatology); a sketch of that calculation follows below.
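This series doesn't prescribe a particular formula, but one common choice is Cochran's sample-size formula with a finite population correction. The sketch below assumes a hypothetical pool of 3,000 rheumatology nurses, and borrows the 17% informational response rate from Table 1 purely for illustration:

```python
import math

def required_responses(population, margin=0.05, z=1.96, p=0.5):
    """Responses needed to estimate a proportion within the given
    margin of error: Cochran's formula with a finite population
    correction. p = 0.5 is the most conservative assumption."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Hypothetical pool: 3,000 nurses specializing in rheumatology.
needed = required_responses(3000)
print(needed)  # 341 completed surveys

# At a 17% response rate (the low end in Table 1), you would need to
# invite roughly needed / 0.17 people.
print(math.ceil(needed / 0.17))  # 2006 invitations
```

The finite population correction matters for small specialty populations, where it can noticeably reduce the required sample; for very large pools it has almost no effect.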

Surveys given out at the time of a CME event, usually immediately afterwards, are the most common type of survey. Response rates are usually quite high, thanks to the ‘captive audience’ phenomenon. Where the survey is on the same form as the CME credit request, response rates approach 100%.

Post-event surveys are much more of a challenge. Their data is particularly meaningful, however: participants report on practice changes they have actually made, rather than those they merely intend to make, which is a stronger argument for impact. In a meta-evaluation of ten recent projects by Level 6 Analytics, where 30- to 90-day follow-ups were used after a learning event, only 12% of those contacted provided post-event data (n = 10 events; n = 2,707 learners). That percentage uses as its baseline the learners who filled out surveys at the event; when all learners attending an event are used as the denominator, the rate drops to only 9%, as the sketch below illustrates. And even when a survey is returned, participation on individual items may be lackluster: not every multiple-choice question is answered, and response rates on free-form text inputs, which often provide interesting color and background to results, are almost always below 50%.
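Here is a minimal sketch of that denominator effect. The single-event figures are hypothetical, chosen only to mirror the 12%-versus-9% pattern from the meta-evaluation:

```python
def response_rate(completed, denominator):
    """Response rate as a percentage of the chosen baseline."""
    return 100 * completed / denominator

# Hypothetical single-event figures; the rate you report depends
# entirely on which baseline you choose as the denominator.
attended = 400           # every learner at the event
surveyed_on_site = 300   # learners who completed the event questionnaire
followed_up = 36         # learners who returned the 30- to 90-day follow-up

print(f"{response_rate(followed_up, surveyed_on_site):.0f}%")  # 12% of on-site respondents
print(f"{response_rate(followed_up, attended):.0f}%")          # 9% of all attendees
```

When reporting follow-up results, it is worth stating which denominator you used, since the two baselines can tell noticeably different stories about the same data.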

With changes in technology, changes in privacy laws, and an increasingly saturated audience population, it seems prudent to review only the most recent literature. Table 1 reviews articles from recent issues of the Journal of Continuing Education in the Health Professions that used surveys and contained adequate information to calculate response rates. The first column cites the author and year; the second describes the participant pool; the third gives the response rate, first as a percentage and then as a ratio of completed/invited; the fourth gives the type of survey. The fifth column notes any extraordinary measures intended to improve the participation rate, and the last column describes the general topic area of the article reporting the survey. Note that few of these articles were simply about a CME program; most focus on other topics, such as audience response systems in a CME program (Grzeskowiak, Thomas, To, Reeve, & Phillips, 2015).


Table 1: Selective Review of Survey Response Rates in Recent Literature

| Study | Pool | Response Rate | Survey Type | Special measures used to affect participation | Topic |
|---|---|---|---|---|---|
| (Lockyer, Horsley, Zeiter, & Campbell, 2015) | Canadian physicians | 17% (5,259/31,158) | Informational Questionnaire | Promotion through email; promotional videos; announcements; gifts and drawings | Physicians' perceptions of assessment techniques |
| (Harris, Spencer, Winthrop, & Kravitz, 2014) | US physicians | 30% (624/2,099) | Informational Questionnaire | None reported | Physician retraining for overseas work |
| (Pololi et al., 2015) | US medical faculty | 52% (2,381/4,578) | Informational Questionnaire | None reported | Survey of culture among medical faculty |
| (Buriak, Potter, & Bleckley, 2015) | US HCPs | 90% (8,997/10,000) | Event Questionnaire | None reported | Study of cancer survivor support |
| (Evans, Mazmanian, Dow, Lockeman, & Yanchick, 2014) | US HCPs | 87% (120/138) for event; 55% (34/62) for post-event survey | Event Questionnaire and Follow-Up | None reported | Interprofessional education |
| (McConnell, Azzam, Xenodemetropoulos, & Panju, 2015) | US physicians | 67% (56/83) | Follow-Up | $50 gift card | Study of test-enhanced learning in constipation management |
| (Sarayani et al., 2015) | Iranian nurses at a single site | 79% (198/250) | Follow-Up | None reported | Pharmacovigilance; study of effects of education type |
| (Grzeskowiak, Thomas, To, Reeve, & Phillips, 2015) | Australian pharmacists | 78% (62/79) | Follow-Up | None reported | Study of the efficacy of audience response systems in CME |
| (Williams, Kessler, & Williams, 2015) | US medical faculty | 41% (51/123) | Follow-Up | None reported | Study of self-efficacy in an HIV/AIDS CME |
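To make the table easier to compare at a glance, here is a small illustrative script that tallies the completed/invited counts above by survey type (the grouping and summary statistics are ours, not the original studies'):

```python
from statistics import median

# (completed, invited) pairs from Table 1, grouped by survey type.
# The Evans et al. study contributes its event and follow-up arms separately.
studies = {
    "Informational": [(5259, 31158), (624, 2099), (2381, 4578)],
    "Event":         [(8997, 10000), (120, 138)],
    "Follow-Up":     [(34, 62), (56, 83), (198, 250), (62, 79), (51, 123)],
}

for survey_type, counts in studies.items():
    rates = [100 * c / i for c, i in counts]
    print(f"{survey_type}: median {median(rates):.0f}%, "
          f"range {min(rates):.0f}%-{max(rates):.0f}%")
```

Even in this small sample, the pattern matches the discussion above: event questionnaires dominate (median 88%) and informational surveys trail (median 30%). Note too that the published follow-up rates (median 67%) run far above the 12% seen in the Level 6 Analytics meta-evaluation.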

On Monday, the thrilling follow-up: how to improve your response rates.