Tuesday, October 6, 2015

Conversation-starter: SQUIRE tool standards as a connection between the medical communication organizations ACEhp and AMWA

There are many synergies and shared resources between the Alliance for Continuing Education in the Health Professions (ACEhp) and the American Medical Writers Association (AMWA), because medical educators and medical writers are all communicators ... which explains why so many of us are members of both. In today's ACEhp webinar (see www.acehp.org/p/cm/ld/fid=367), Jann Balmer replied to a question about the role of medical writers in publishing educational outcomes with the quality improvement reporting tool SQUIRE (as customized by ACEhp), saying that outcomes manuscripts can be enhanced by medical writers and editors because QIE project managers may not be strong writers. My colleague Donald Harting and I have been discussing potential organizational synergies between AMWA and ACEhp in this regard for over a year.

This is an opportune moment for our memberships to work together: AMWA celebrated 75 years at last week's Annual Conference, which began the very day ACEhp launched its custom SQUIRE reporting tool for clinical educational research at the annual Alliance Quality Symposium. This raises the question: What role will professional medical writers and communicators in AMWA play in publishing standardized, SQUIRE-compliant educational outcomes research using the Alliance for CE in the Health Professions custom tool?

Conversation-starter: ACEhp identified a top goal of 2015's Phase II of the Quality Improvement Education (QIE) Initiative as the "Assessment of SQUIRE to generate Case Studies that demonstrate successful integration of Education into QI." Professionals with capabilities in both medical education and writing will ideally combine those skills in authoring these case studies. Wouldn't you like to author a study that is eligible for later meta-analyses because it was SQUIRE-compliant?

Extension: An inaugural meeting on writing research with the SQUIRE tool will be held at Dartmouth in November 2015. MedBiquitous members, as experts in standardization of medical education technology and reporting, what can you contribute to this conversation? Members of the American Educational Research Association LinkedIn group, what are your thoughts on connecting health education research reporting to more global research agendas?

I am promoting the SQUIRE tool because of my roles as Co-Leader of the ACEhp QIE Initiative's Building Block on "Nomenclature and Its Adoption" and as Research Track faculty. In these roles, I see the need to develop consistent wording and reporting standards for medical education research. After all, how will we report our achievements and deliberate our challenges in developing and researching CE in the Health Professions if we do not have consistent language to use in the SQUIRE reporting tool? Let's not delay the start of COMMUNICATING among health communicators. Join the discussion! I believe this is a great opportunity for all health educators, education researchers, and writers to collaborate!

For additional information on these organizations, topics, and events, check out these links (some may require membership for access):
- SQUIRE (Standards for Quality Improvement Reporting Excellence) inaugural conference: squire-statement.org/news_events/squire_international_writing_conference/
- ACEhp Quality Symposium: www.acehp.org/p/cm/ld/fid=20 (see also the August 2015 issue of the Almanac: www.acehp.org/p/cm/ld/fid=52, p. 15)
- ACEhp Foundation QIE Initiative: www.acehp.org/p/cm/ld/fid=43
- ACEhp Foundation QIE Initiative's custom SQUIRE tool: see Webinars at www.acehp.org/p/cm/ld/fid=367
- AMWA 75th Anniversary: www.amwa.org/amwa_anniversary
- MedBiquitous: http://www.medbiq.org/ and new Performance Standard: www.medbiq.org/node/1001

Connect with me! SHBinford@FullCircleClinicalEducation.com or www.linkedin.com/company/full-circle-clinical-education-inc

Friday, September 18, 2015

CS2day: Award-Winning, 9-Collaborator, Performance-Improvement CME With an Outcomes-Based Evaluation Model

I saved the best for the last entry in the Back to School Tweet Fest. The Cease Smoking Today (CS2day) initiative cannot be ignored in a series about effective educational interventions in changing practice and improving quality of health care. An entire 2011 supplement of the Journal of Continuing Education in the Health Professions (JCEHP) reports on the complex CS2day educational program and its findings, with six research articles [1-6] and three forum articles [7-9] written by multi-institutional teams from among the nine initiative partners. The study received the Alliance for CME (now ACEhp) Award for Outstanding CME Collaboration in 2009 (see PDF pages 15-18 of www.acehp.org/d/do/150) and was presented in a 2012 CME Congress poster (P50: http://www.cmecongress.org/wp-content/uploads/2012/05/CME-Congress-2012-Abstracts.pdf). The study boasts collaboration among universities, professional societies, accredited CME providers, ACEhp presidents and conference chairs, CME directors at academic medical centers, the JCEHP Editor-in-Chief, and other published researchers [1,10], who carefully define the educational program's framework and collaboration model within the new quality improvement paradigm of CME called for by the Institute of Medicine in 2010 [11].

The CS2day initiative is so big that this blog post cannot feature just one of its articles. I will focus on the introductory editorial [10] and two study articles that focus on (a) developing competencies to assess needs and outcomes [3] and (b) the educational and patient health outcomes data themselves [4]. The medical education expert Donald Moore introduces the supplement, and one article therein reports the outcomes data. I hope you will do as Moore recommends when questioning what you can take from articles describing "a huge project with significant funding": ask, "What are the general principles that I can identify in these articles and how can I use them in my CME practice[?]" [10].

In my previous post, I noted the difficulties of using PI-CME to change patient health outcomes in a condition posing a major public health challenge: the COSEHC study addressed cardiometabolic risk factors and saw performance and patient health improvements. The CS2day initiative faced the same challenge, and happily also reported performance change and a change in patient health outcomes: smoking cessation. Moore nicely summarizes the challenge of connecting Level 5 performance changes among clinicians to Level 7 changes in public health outcomes: “All of us want to improve the health of the public in some way, but our approaches … may prevent us from having the impact that we wish to have. The [CS2day] articles … suggest there might be another approach that we should consider to address the important public health issues that surround but do not seem to be impacted by our CME programs” [10; emphasis added].

The articles in the JCEHP supplement are organized around 4 themes [10], to which I have added themes from the articles: 
a) Collaboration is challenging but worth doing if guidelines are set and a formative evaluation of the collaboration against known success factors is carried out [1,2,5]
b) Best-practice CME includes an outcomes orientation that connects learning and performance objectives from the needs assessment to the outcomes assessment in a valid framework to support content in all educational activities [3-6]
c) A public health focus can lead to development of CME/CEhp activities with a translational or implementation science function that transcends what can happen when education addresses only a practice gap [7]
d) Standards and competencies for CEhp and members of the CEhp profession help initiatives meet the principles and characteristics of the IOM report’s expectations [8,9,11] 

The two featured research articles [3,4] function together as the Methods and Results sections of a typical IMRAD-structured paper, but each is extensive enough to stand alone and inform CEhp professionals. McKeithen et al describe the following: the need to establish clinical competency statements related to supporting smoking cessation; the clinical guidelines that informed performance expectations; "the 5 A's" of support for smoking cessation (Ask, Advise, Assess, Assist, and Arrange); the 14 competencies and 8 performance outcome measures that fit into the 5 A's algorithm; and the collaboration of clinical and educational experts on outcomes tools to develop "a comprehensive set of measures at Levels 3 through 6" [3].
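To see how such a framework can hang measures off each step of a clinical algorithm, consider this minimal Python sketch. It is an illustration only: the measure texts and Moore-level tags below are invented, not McKeithen et al's actual 14 competencies or 8 measures.

```python
# Hypothetical illustration only: the measure texts and Moore-level tags are
# invented; McKeithen et al define the actual 14 competencies and 8 measures.
FIVE_AS_FRAMEWORK = {
    "Ask":     [("Documents tobacco-use status at every visit", "Level 5")],
    "Advise":  [("Delivers personalized quit advice to tobacco users", "Level 4")],
    "Assess":  [("Evaluates readiness to quit", "Level 3")],
    "Assist":  [("Selects appropriate cessation pharmacotherapy", "Level 3"),
                ("Refers to quitline or counseling", "Level 5")],
    "Arrange": [("Schedules a cessation follow-up contact", "Level 5"),
                ("Tracks patients' quit rates", "Level 6")],
}

for step, measures in FIVE_AS_FRAMEWORK.items():
    for text, level in measures:
        print(f"{step:<8} {level:<8} {text}")
```

The point of such a structure is traceability: every educational activity and outcomes instrument can be tied back to a step in the clinical algorithm and a level in the outcomes framework.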

The summative outcomes data are extensively reported by Shershneva et al, who present "evaluation of a collaborative program" as "translating" the outcomes framework into practice [3,4]. Defining desired outcomes of the program across Levels 1 to 6* was seen as useful for facilitating agreement among stakeholders; guiding the evaluation process; gathering data from multiple activities and collaborators in a central repository; and studying the effects of mechanisms that link education to outcomes [4]. Thanks to effective planning, the researchers were also able to add to the literature on instructional design in CEhp by distinguishing performance outcomes across two groups of activity types: a) live PI activities with either a collaborative or practice-facilitator model and b) self-directed learning PI activities.

Also worth reading are additional insights about using the Success Case Method (SCM) to determine whether and why educational interventions succeed [6]. In CS2day reporting, using the SCM allowed the research team to conclude remarkably confidently, stating, “the PI activities were a primary and proximal cause of improvement in clinical practice” [4]. Moore notes that “the results were impressive: physicians integrated a new guideline into their practices and many patients stopped smoking” [10]. The guideline integrated into practice through the CS2day initiative was a “heavily researched evidence-based practice guideline published by the U.S. Agency for Healthcare Research and Quality,” due to be updated in 2008, the year after this collaborative initiative was begun [1].

Finally, a comment: In CEhp, change data are often seen as valid only when educational and program interventions remain unchanged through activity expiration, even when a formative assessment shows changes to be necessary. This attitude can leave participating clinicians with suboptimal educational opportunities and leave stakeholders in the educational design frustrated. The formative program evaluations that improved the CS2day initiative, with changes acknowledged in the reporting, are in my opinion better than a pure pre/post comparison of an activity in which valuable investments are not updated when indicated. If the CME/CEhp profession is to help clinicians link medical care to public health through disease prevention, accountability to quality, and more, then educational design should respond to data collected during lengthy and large interventions.

The CS2day initiative is a model study in educational and performance improvement methods for a challenging public health problem. Please read the study articles if you have print or online access to JCEHP, for I have only scratched the surface of the initiative's methodology, results, and rationales in the limited confines of this space.

* Note: In this study, "Learning" was used as Level 3 and included knowledge and clinical skill (competence) measures, while "Performance," including commitment-to-change (CTC) queries, was used as Level 4. Thus Level 5 was "Patient Health Status" and Level 6 was "Population Health Status."

References cited: 
1. Olson CA, Balmer JT, Mejicano GC. Factors contributing to successful interorganizational collaboration: the case of CS2day. J Contin Educ Health Prof. 2011;31(Suppl 1):S3-S12.
2. Ales MW, Rodrigues SB, Snyder R, Conklin M. Developing and implementing an effective framework for collaboration: the experience of the CS2day collaborative. J Contin Educ Health Prof. 2011;31(Suppl 1):S13-S20.
3. McKeithen T, Robertson S, Speight M. Developing clinical competencies to assess learning needs and outcomes: the experience of the CS2day initiative. J Contin Educ Health Prof. 2011;31(Suppl 1):S21-S27. http://www.ncbi.nlm.nih.gov/pubmed/22190097. [Featured Article]
4. Shershneva MB, Larrison C, Robertson S, Speight M. Evaluation of a collaborative program on smoking cessation: translating outcomes framework into practice. J Contin Educ Health Prof. 2011;31(Suppl 1):S28-S36. http://www.ncbi.nlm.nih.gov/pubmed/22190098. [Featured Article]
5. Mullikin EA, Ales MW, Cho J, Nelson TM, Rodrigues SB, Speight M. Sharing collaborative designs of tobacco cessation performance improvement CME projects. J Contin Educ Health Prof. 2011;31(Suppl 1):S37-S49.
6. Olson CA, Shershneva MB, Brownstein MH. Peering inside the clock: using success case method to determine how and why practice-based educational interventions succeed. J Contin Educ Health Prof. 2011;31(Suppl 1):S50-S59.
7. Hudmon KS, Addleton RL, Vitale FM, Christiansen BA, Mejicano GC. Advancing public health through continuing education of health care professionals. J Contin Educ Health Prof. 2011;31(Suppl 1):S60-S66.
8. Balmer JT, Bellande BJ, Addleton RL, Havens CS. The relevance of the Alliance for CME competencies for planning, organizing, and sustaining an interorganizational educational collaborative. J Contin Educ Health Prof. 2011;31(Suppl 1):S67-S75.
9. Cervero RM, Moore DE. The Cease Smoking Today (CS2day) initiative: a guide to pursue the 2010 IOM report vision for CPD. J Contin Educ Health Prof. 2011;31(Suppl 1):S76-S82.
10. Moore DE. Collaboration, best-practice CME, public health focus, and the Alliance for CME competencies: a formula for the new CME? J Contin Educ Health Prof. 2011;31(Suppl 1):S1-S2. http://www.ncbi.nlm.nih.gov/pubmed/22190095. [Featured Editorial]
11. Institute of Medicine (IOM) Committee on Planning a Continuing Health Professional Education Institute. Redesigning Continuing Education in the Health Professions. Washington, DC: The National Academies Press; 2010. http://books.nap.edu/openbook.php?record_id=12704. Accessed September 17, 2015.

MeSH “Major” Terms for the 3 Featured Articles (note the terms common to two or more articles)
McKeithen et al [3]: Benchmarking; Clinical Competence; Education, Medical, Continuing/methods; Needs Assessment; Outcome and Process Assessment (Health Care)/organization & administration; Practice Guidelines as Topic/standards; Smoking Cessation/methods; Tobacco Use Disorder/prevention & control
Shershneva et al [4]: Benchmarking/methods; Clinical Competence/standards; Health Personnel/classification; Health Personnel/psychology; Health Personnel/statistics & numerical data; Interprofessional Relations; Outcome Assessment (Health Care)/organization & administration; Program Evaluation; Smoking Cessation/methods; Tobacco Use Disorder/prevention & control
Moore [10]: Benchmarking; Clinical Competence; Delivery of Health Care, Integrated; Education, Medical, Continuing/methods; Interinstitutional Relations; Public Health

Thursday, September 17, 2015

Patient-Health Effects of a Performance-Improvement CME Educational Intervention to Control Cardiometabolic Risk in the Southeastern U.S.

Many of you who know me might recall that I moved from the Northeast to the Southeast U.S. some years back. As I learned about the people and culture of the Southeast, I commonly saw many dietary and lifestyle factors that would confer increased risks for cardiovascular diseases and diabetes—indeed, this part of the United States is known as “The Stroke Belt.” The Consortium for Southeastern Hypertension Control (COSEHC) initiative reported by Joyner et al sought to improve the control of these risk factors through a performance-improvement continuing medical education (PI-CME) activity [1]. It somehow seems fated that I report this study, because the lead author, working at Wake Forest University, is based in the same North Carolina city where I have lived these many years. The PI-CME initiative itself was conducted with several primary care physician practices designated as a COSEHC Cardiovascular Center of Excellence in Charleston, South Carolina; a comparable practice group served as a control. Results were reported to Moore’s Level 6 (patient health outcomes) [2].

The intervention included many overlapping and reinforcing elements that we would expect in a major initiative on a major health concern: using the plan-do-study-act (PDSA) model, researchers worked to “improve practice gaps by integrating evidence-based clinical interventions, physician-patient education, processes of care, performance metrics, and patient outcomes.” The intervention design included an action plan involving medical assistants and nurses in patient-level tasks and education, patient chart reminders, patient risk stratification, and the sharing of physicians’ feedback on successful practice changes with other participating practices.

Because patient health outcome indicators were used to define the educational effectiveness of the PI-CME initiative, the selection of measures is important to our understanding of the study findings. The research team used cardiometabolic risk factor treatment targets for 7 lab values, as recommended by 3 sets of evidence-based guidelines (JNC-7, ATP-III, and ADA). The team set a more aggressive target for low-density lipoprotein cholesterol (LDL-C) because many patients had multiple risk factors for cardiometabolic diseases, and coronary heart disease risk “can exist even in the absence of other risk factors.” Researchers investigated changes in patient subgroups: “diabetic, African American, the elderly (> 65 years), and female patient subpopulations and in patients with uncontrolled risk factors at baseline.” The authors note that the average patient in both the intervention and control groups was clinically obese; other baseline health indicators were also similar.

Now to results, gathered at 6 months to assess changes in patients' cardiometabolic risk factor values and control rates from baseline. The abstract summarizes findings as follows [1]:
Only women receiving health care by intervention physicians showed a statistical improvement in their cardiometabolic risk factors as evidenced by a -3.0 mg/dL and a -3.5 mg/dL decrease in mean LDL cholesterol and non-HDL cholesterol, respectively, and a -7.0 mg/dL decrease in LDL cholesterol among females with uncontrolled baseline LDL cholesterol values. No other statistical differences were found.

I want to discuss some factors that could explain the little change seen in this study. First, the intervention was measured at just 6 months into the educational initiative; this is barely adequate for assessing clinicians’ performance change, and even performance changes were unlikely to produce significantly different lab values in patients whose years of health-related practices led to their higher risks. Interestingly, there was also less room for improvement: patients in both groups had higher baseline risk-control rates than is seen at the U.S. national level, and patients in the intervention group had even higher baseline risk-control rates than patients in the physician control group.

The study did appear to narrow noted performance gaps regarding gender disparities in care. The authors cite 4 studies pointing out suboptimal treatment intensification to control LDL-C in female vs. male patients, and even physician bias or inaction toward female patients. Thus the improved patient outcome data for LDL-C and non-HDL cholesterol among women treated by physicians in the intervention group indicate a narrowing of established gaps in attitude (Level 4) and/or performance (Level 5).

Here in “The Stroke Belt,” any effort to control cardiometabolic risk factors must include population-level initiatives and patient education, which I have seen state governments, public health departments, recreation centers, and schools undertake at many levels. Two items stand out as affecting the COSEHC report’s findings: the study tried to measure changed patient health indicators too soon after the intervention, and the researchers tied themselves to the high standard of measuring Level 6 for a health concern that requires interventions among patients and the public beyond those considered here. Indeed, because physicians’ feedback on successful changes during the initiative was shared across practices, we know that Level 4 and 5 competence and performance changes were achieved. The authors should be commended on their work to tackle this public health concern through a PI-CME initiative.

Finally, I want to mention that Joyner et al cite two studies by others I am humbled to name as colleagues. First, Sara Miller and others at Med-IQ (in a team often featured in Don Harting’s earlier posts in this Back to School campaign) published with PJ Boyle on improving diabetes care and patient outcomes in skilled-care (long-term-care) communities [3]. Second, Joyner et al cite the article featured in this blog on September 11, 2015—which itself came up in my reporting on that day’s release of the landmark SPRINT study results of the NHLBI [4]—by Shershneva, Olson, and others [5]. The Joyner article noted the Shershneva team’s finding that “process mapping led to improvement in [a majority of CVD] measures” [1].

References cited:
1. Joyner J, Moore MA, Simmons DR, et al. Impact of performance improvement continuing medical education on cardiometabolic risk factor control: the COSEHC initiative. J Contin Educ Health Prof. 2014;34(1):25-36. http://onlinelibrary.wiley.com/doi/10.1002/chp.21217/abstract. [Featured Article]
2. Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1-15.
3. Boyle PJ, O’Neil KW, Berry CA, Stowell SA, Miller SC. Improving diabetes care and patient outcomes in skilled-care communities: successes and lessons from a quality improvement initiative. J Am Med Dir Assoc. 2013;14(5):340-344.
4. NHLBI. Landmark NIH study shows intensive blood pressure management may save lives: lower blood pressure target greatly reduces cardiovascular complications and deaths in older adults [press release]. NHLBI Website. http://www.nih.gov/news/health/sep2015/nhlbi-11.htm. Accessed September 11, 2015.
5. Shershneva MB, Mullikin EA, Loose A-S, Olson CA. Learning to collaborate: a case study of performance improvement CME. J Contin Educ Health Prof. 2008;28(3):140-147. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2782606/. [See blog post on this previously featured article at http://fullcirclece.blogspot.com/2015/09/todays-landmark-nhlbi-sprint-study.html]
MeSH “Major” Terms of Featured Article [1]:
Education, Medical, Continuing/organization & administration; Metabolic Syndrome X/prevention & control; Models, Educational; Physicians, Family/education; Quality Improvement

Study Design and Paired Comparisons: Individualized Education Fails to Change Practice—Or Was It Only Poor Matching?

We should commend Malone et al for submitting this AHRQ-supported* study [1] for publication, when a flaw in its design or execution could be the main reason the authors concluded that “the current study was not able to demonstrate a significant beneficial effect of the educational outreach program on [the primary performance outcome measure].” This blog’s “Back-to-School” service campaign did not exclude studies reporting negative outcomes, because these studies can inform continuing education in the health professions (CEhp) as much as positive studies can.

CEhp/CME educational proposals, audience-generation strategies, and outcomes reports now specify relevant “target audiences,” recognizing that not all practitioners with a certain degree, specialty, or other professional demographic description would benefit from the same educational activity or design. With this recognition of the importance of targeting specific clinicians and learning about their needs has come greater recognition that many CE participants should not be included in aggregated data. This is even truer in studies with matched pairs, where the most important step is setting the match criteria. On September 15th, I discussed an opioids-education study whose matching criteria were so stringent that the authors could not match certain participants (physicians in the intervention group); these participants’ data and group assignments were handled nicely and reported clearly in the paper [2] (see post at http://fullcirclece.blogspot.com/2015/09/eight-year-canadian-study-on-opioid.html).

Conversely, the first result listed in this study’s abstract indicates a matching flaw for a study on education on drug-drug interactions (DDIs): “The 2 groups were significantly different with respect to age, profession, specialty, and geographic region.” This finding undermines the study’s other strengths, namely, that large samples (19,606 prescribers) were recruited to both groups (educational intervention vs. control) and matched on prescribing volume. Individualized education (also known as academic detailing) was delivered by trained pharmacists serving as clinical consultants, who met with prescribers to “provide one-on-one information … promote evidence-based knowledge, create trusting relationships, and induce practice change.” This study’s performance (behavioral) measure was the rate of prescribing potential DDIs, with a reduction as the desired outcome. Instead, the prescribing of 25 clinically important potential DDIs increased more in the intervention group than in the control group.
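A standard diagnostic for exactly this problem is the standardized mean difference (SMD) between groups on each baseline covariate. Below is a minimal Python sketch with hypothetical data; it is not code from the paper, and the rule of thumb (|SMD| > ~0.1 flags imbalance) is a common convention rather than the authors' criterion.

```python
import math
import statistics

def standardized_mean_difference(group_a, group_b):
    """SMD for one continuous baseline covariate between two groups;
    |SMD| greater than ~0.1 is a common flag for residual imbalance."""
    pooled_sd = math.sqrt((statistics.variance(group_a) +
                           statistics.variance(group_b)) / 2)
    return (statistics.fmean(group_a) - statistics.fmean(group_b)) / pooled_sd

# Hypothetical prescriber ages, intervention vs. control
intervention_ages = [44, 52, 39, 61, 48, 55, 50, 43]
control_ages      = [51, 58, 47, 66, 54, 62, 57, 49]
smd = standardized_mean_difference(intervention_ages, control_ages)
print(f"SMD for age: {smd:.2f}")  # ~ -0.95 here: far beyond the 0.1 threshold
```

Running such a check for each covariate (age, profession, specialty, region) before analysis would have quantified exactly how far the matched groups diverged.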

In conclusion, when we look at this presumably negative finding, we are left to wonder whether the educational intervention was ineffective, or whether a better matching process might have revealed different results on reducing potential DDIs and improving health care quality and utilization. One could argue that, with nearly 20,000 prescribers in both samples, more matching criteria could have been applied without sacrificing so many data points that results would be inconclusive. The study’s retrospective design could also explain its recruitment and matching practices. In social sciences research (including educational outcomes research), a core expectation is generalizability of a sample to a population of interest; when reasonably achieved, generalizability lets us apply findings to practical needs and future decisions.

Recall the study conclusion quoted above: “The current study was not able to demonstrate a significant beneficial effect …” (emphasis added). A secondary analysis with different pair-matching practices might yet inform national initiatives in improving quality while reducing costs through academic detailing, both of which help patients. Now let’s remember to thank Malone, Liberman, and Sun for sharing their data and methods with the healthcare quality and educational research communities in the Journal of Managed Care & Specialty Pharmacy.

* AHRQ = United States Agency for Healthcare Research and Quality

References cited:
1. Malone DC, Liberman JN, Sun D. Effect of an educational outreach program on prescribing potential drug-drug interactions. J Manag Care Pharm. 2013;19(7):549-557. http://www.ncbi.nlm.nih.gov/pubmed/23964616. [Featured Article]
2. Kahan M, Gomes T, Juurlink DN, et al. Effect of a course-based intervention and effect of medical regulation on physicians’ opioid prescribing. Can Fam Physician. 2013;59(5):e231-e239. http://www.cfp.ca/content/59/5/e231.full.pdf+html.
Free Full Text: http://www.amcp.org/JMCP/2013/September_2013/17103/1033.html
MeSH “Major” Terms: Drug Interactions; Drug Prescriptions; Education, Medical, Continuing; Health Education; Physician's Practice Patterns; Prescription Drugs/administration & dosage

Wednesday, September 16, 2015

Personalized MD Curriculum in Personalized NSCLC Treatment Produces High, “Clinically Significant” Educational Effect Size

In non-small cell lung cancer (NSCLC), evidence points to the benefits of tumor biopsy for biomarker analysis, which in turn may allow individually targeted therapy [e.g., 1-3]. In the last five years of this age of pharmacogenomics and prognostic markers, the clinical excitement for individualized medicine has produced a robust 256 review articles indexed in PubMed, found with a search on “non small cell lung cancer treatment biomarker review” even after filtering to “Abstract [available], English, [and] Humans.” But diagnostics in surgery and pathology, as well as personalized cancer treatment, are expensive, so the societal context of the Affordable Care Act, enacted five years ago (March 23rd, 2010) with its emphases on quality measures, patient-centered care, and accountability in care decisions, cannot be ignored.

Individualized intervention is not just important to cancer biology and treatment: it is important to clinical education as well. Not only do clinicians caring for patients with cancer have their own knowledge and competence gaps—mainly because of the discovery of new evidence in this rapidly changing therapeutic area—they also have the healthcare system context to work within, from local to national levels. The newly published featured article by Herrmann et al focuses on NSCLC education in the quality-driven system environment of the ACA and is titled “Educational Outcomes in the Era of the Affordable Care Act: Impact of Personalized Education About Non-Small Cell Lung Cancer.” The authors argue for timely opportunities for immediate, practical, and translatable education for individual clinicians: “Quality medical education must be available when the health care provider is ready to learn, provide feedback, and maximize translation of knowledge from desk to clinic” [4].

The educational methods and instructional design are of greatest interest. Oncologists completed a pre-intervention self-assessment of knowledge, skills, and attitudes. This was used to develop an individualized learning plan and a personalized curriculum, which included up to 5 distinct activities selected to address identified knowledge and practice gaps. The activities were distributed online, and learners received feedback at the completion of each activity. Learners were tested on 5 knowledge and decision-making areas relevant to NSCLC treatment.  
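The described workflow amounts to a gap-driven selection rule: score the self-assessment, find the topics below proficiency, and assign up to 5 activities. A minimal Python sketch follows; the topic names, activity catalog, 0-1 scoring scale, and threshold are all hypothetical, not the article's instrument.

```python
# All topic names, activity titles, scores, and the 0.7 threshold below are
# hypothetical illustrations, not the article's actual instrument.
ACTIVITY_CATALOG = {
    "biomarker_testing": "Case module: ordering molecular testing in NSCLC",
    "targeted_therapy":  "Interactive cases: matching therapy to mutation",
    "immunotherapy":     "Expert video: emerging immunotherapy evidence",
    "staging":           "Self-study: current staging criteria",
    "adverse_events":    "Case module: managing treatment toxicity",
    "guideline_updates": "Text review: latest NSCLC guideline changes",
}

def build_learning_plan(self_assessment, gap_threshold=0.7, max_activities=5):
    """Select up to `max_activities` activities for the topics on which the
    learner's self-assessed proficiency (0-1 scale) falls below the
    threshold, worst gaps first."""
    gaps = sorted((score, topic) for topic, score in self_assessment.items()
                  if score < gap_threshold)
    return [ACTIVITY_CATALOG[topic] for _, topic in gaps[:max_activities]]

plan = build_learning_plan({
    "biomarker_testing": 0.4, "targeted_therapy": 0.6, "immunotherapy": 0.9,
    "staging": 0.8, "adverse_events": 0.5, "guideline_updates": 0.75,
})
print("\n".join(plan))  # three activities here, ordered by size of the gap
```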

The results of education were dramatic: “Completion of the learning plan was associated with a high effect size (d = .70),” a Cohen’s d indicating that the educational intervention was far more meaningful than the statistically significant differences between learners’ pre- and post-intervention testing alone would suggest. (Remember that p values tell the statistician only how unlikely a difference this large would be by chance, not how large or important the difference is.) If one reviews the Effect Size (ES) lecture notes provided by Dr. Lee Becker on his University of Colorado webpages [5], d = .70 translates to what Cohen himself (reluctantly) defined as a medium-to-large effect, a benchmark that has become standard usage where research teams’ historical data are not published alongside current results. This effect size surpasses even what Wolf (1986) identified as the lowest benchmark for change results that are “clinically significant,” not just educationally meaningful, at d = .50.

Looking at this educational study’s effect size more simply, using the tables at Becker’s site [5], Cohen’s d = .70 corresponds to a 43.0% non-overlap between the participating learners’ (oncologists’) pretest and posttest score distributions, indicating learning that facilitates change. This is a big percentage when one considers that even an effect size of .20 (small) is difficult to achieve in one initiative. In other words, personalized education on NSCLC affected quality care. Kudos to the researchers.
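For readers who want to verify these figures, converting a Cohen's d into his overlap statistics (as tabulated on Becker's page [5]) takes only a few lines of Python. The d = .70 input comes from the featured article; the formulas are Cohen's standard U statistics under the usual equal-variance normal model.

```python
from scipy.stats import norm

def cohens_u(d):
    """Cohen's U overlap statistics for effect size d, assuming two
    normal distributions with equal variance (Cohen, 1988)."""
    u3 = norm.cdf(d)        # U3: share of the lower distribution below the upper mean
    u2 = norm.cdf(d / 2)    # U2: share of the upper group exceeding the same share of the lower
    u1 = (2 * u2 - 1) / u2  # U1: percent non-overlap of the two distributions
    return u1, u2, u3

u1, u2, u3 = cohens_u(0.70)            # d = .70 from the featured article
print(f"U1 (non-overlap) = {u1:.1%}")  # ~43.0%, the figure cited above
print(f"U2 = {u2:.1%}, U3 = {u3:.1%}") # ~63.8%, ~75.8%, matching Becker's table
```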

P.S. For additional reading on Cohen's d and effect sizes in CEhp, check out the AssessCME blog written by my outcomes colleague, Jason Oliveri: assesscme.wordpress.com/category/effect-size.

References cited: 
1. Remark R, Becker C, Gomez JE, et al. The non-small cell lung cancer immune contexture. A major determinant of tumor characteristics and patient outcome. Am J Respir Crit Care Med. 2015;191(4):377-390.
2. Cagle PT, Allen TC, Olsen RJ. Lung cancer biomarkers: present status and future developments. Arch Pathol Lab Med. 2013;137(9):1191-1198.
3. Raparia K, Villa C, DeCamp MM, Patel JD, Mehta MP. Molecular profiling in non-small cell lung cancer: a step toward personalized medicine. Arch Pathol Lab Med. 2013;137(4):481-491.
4. Herrmann T, Peters P, Williamson C, Rhodes E. Educational outcomes in the era of the Affordable Care Act: impact of personalized education about non-small cell lung cancer. J Contin Educ Health Prof. 2015;35(Suppl 1):S5-S12. [Featured Article]
5. Becker L. Effect size (ES). University of Colorado—Colorado Springs Website. http://www.uccs.edu/lbecker/effect-size.html. Accessed September 16, 2015.
MeSH *Major* terms: This study [4] is so new that NLM librarians have not yet assigned Medical Subject Headings. Check for updates at http://www.ncbi.nlm.nih.gov/pubmed/?term=26115247

Tuesday, September 15, 2015

Eight-year Canadian study on opioid prescribing among regulator- and self-referred physicians attending an intensive course

This educational study in a clinical journal, by Kahan et al at the University of Toronto, examined “the effects of an intensive 2-day course on physicians' prescribing of opioids” [1]. The most impressive feature of this study is its eight-year-plus data-gathering period for opioid-prescribing levels among participating physicians, most of whom were family physicians. Other interesting features of both the instructional design and the study design are worth mentioning.

The study design grouped participants into self-referred physicians vs. physicians referred by medical regulators, and added a control (nonparticipant) group. Undertaking a challenging matching procedure, researchers matched nonparticipants to participants on specific variables, including quarterly rates of opioid prescribing. Subgroups of participants with very high opioid-prescribing patterns were also identified; unfortunately, nonparticipants to match these participants were difficult to find. Yet this targeted approach to matching is appropriate and represents a significant investment of the researchers’ time, allowing the comparative group findings described below. Nonparticipants were added to the study concurrently with their matched participants, per an “index date” defined as “the date of course completion for participating physicians. Control physicians were assigned the same index date as their matched pair.” In one deviation from the primary outcome measure, which was expressed as milligrams of morphine equivalent, matching was done by number of opioid prescriptions. Another study design feature is the specific comparison of opioid-prescribing rates for the 2 years before vs. the 2 years after the educational intervention, again by group and subgroup vs. nonparticipants; participants who could not be matched were analyzed separately from participants with matched pairs.
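To make the pairing and index-date logic concrete, here is a rough Python sketch of a greedy nearest-neighbor match on prescribing volume. It illustrates the general approach, not the authors' code; the tolerance value and single matching variable are hypothetical simplifications (the study matched on additional variables).

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Physician:
    pid: str
    quarterly_rx: float                # baseline opioid prescriptions per quarter
    index_date: Optional[date] = None  # course-completion date (participants only)

def match_controls(participants, nonparticipants, tolerance=5.0):
    """Greedy 1:1 nearest-neighbor match on baseline prescribing volume.
    Participants with no control within `tolerance` remain unmatched,
    mirroring the high-prescriber subgroup the study analyzed separately."""
    pool = list(nonparticipants)
    pairs, unmatched = [], []
    # Match the highest prescribers first, since suitable controls are scarcest there
    for p in sorted(participants, key=lambda x: -x.quarterly_rx):
        best = min(pool, key=lambda c: abs(c.quarterly_rx - p.quarterly_rx),
                   default=None)
        if best is not None and abs(best.quarterly_rx - p.quarterly_rx) <= tolerance:
            best.index_date = p.index_date  # control inherits its pair's index date
            pairs.append((p, best))
            pool.remove(best)
        else:
            unmatched.append(p)
    return pairs, unmatched
```

A greedy pass like this is simpler than the optimal-matching algorithms used in formal epidemiology, but it shows why very high prescribers end up unmatched: no control in the pool falls within tolerance of them.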

The instructional design of the 2-day course incorporated several educational settings and modalities. Planners used didactic presentations but added problem-based case discussions and mock-interview learning interactions with standardized patients who offered feedback. Pros and cons of changing prescribing patterns were discussed in a session at the end of the course, featuring a faculty interview with a patient. The course also provided a detailed syllabus with notes and references before the course, as well as office materials. It should be noted that benzodiazepine-prescribing was also addressed in course content. Finally, each 2-day course enrolled up to 12 participants, a limit that would confer an individualized learning environment and some professional privacy in what might be a sensitive concern among participating physicians.

The authors noted in the introduction, “Medical education has been suggested as one strategy to improve opioid prescribing among physicians” [2,3] and “Educational interventions focused on opioid prescribing lead to positive improvement in physicians’ knowledge and self-reported practices” [4]. Let's look at results by reported subgroup.

Among physicians referred by medical regulators, “the rate of opioid prescribing decreased dramatically in the year before course participation compared with matched control physicians,” and “the course had no added effect on the rate of physicians' opioid prescribing in the subsequent 2 years.” It seems that these physicians might have changed their behavior by arbitrarily reducing prescribing rates in response to the regulatory investigation, even without an educational intervention to inform their clinical decision-making. In fact, the authors acknowledge this, noting, “We measured only the quantity of opioids prescribed, not the quality of opioid prescribing.” The regulatory concerns may thus have created a false baseline for an educational study that measured only the quantity of opioids prescribed rather than patient selection or other measures of competence.

Among the self-referred physicians who were matched to nonparticipants, “there was no statistically significant effect on the rate of opioid prescribing observed” from baseline to 2-year follow-up, although there had been a temporary decrease, particularly in prescribing for patients aged 15-64 (here’s a nice graph with patient ages: http://www.cfp.ca/content/59/5/e231/F4.expansion.html). On the other hand, “the rate of opioid prescribing decreased by 43.9% in the year following course completion” among self-referred physicians with high prescribing rates who could not be matched, suggesting that these physicians “might have responded to what was taught in the course.”

References cited:
1. Kahan M, Gomes T, Juurlink DN, et al. Effect of a course-based intervention and effect of medical regulation on physicians’ opioid prescribing. Can Fam Physician. 2013;59(5):e231-e239. http://www.cfp.ca/content/59/5/e231.full.pdf+html. [Featured Article]
2. College of Physicians and Surgeons of Ontario. Avoiding Abuse, Achieving a Balance: Tackling the Opioid Public Health Crisis. Toronto, ON: College of Physicians and Surgeons of Ontario; 2010.
3. National Opioid Use Guideline Group. Canadian Guideline for Safe and Effective Use of Opioids for Chronic Non-Cancer Pain. Hamilton, ON: National Opioid Use Guideline Group; 2010.
4. Midmer D, Kahan M, Marlow B. Effects of a distance learning program on physicians’ opioid- and benzodiazepine-prescribing skills. J Contin Educ Health Prof. 2006;26(4):294-301.
Free full text PDF: http://www.cfp.ca/content/59/5/e231.full.pdf.
MeSH *Major* terms:
Analgesics, Opioid/therapeutic use*; Drug Prescriptions/standards*; Education, Medical, Continuing*; Physician's Practice Patterns/standards* 

Saturday, September 12, 2015

Medical education with EMR-based reminders reduces antibiotic prescribing and dispensing for respiratory tract infections in Norway

It is known that British guidelines for otitis media support delayed antibiotic prescribing [1], and other countries, for example France, have guidelines to reduce certain antibiotic prescribing for otitis media [2]. Finnish guidelines, conversely, do not support delayed prescribing [3]. A 2013 Norwegian study published in the British Journal of General Practice compares the effectiveness of 2 interventions in promoting delayed antibiotic prescribing for respiratory tract infections, including otitis, in primary care [4].

Notwithstanding a complicated design for recruiting and assigning general practitioners across multiple sites, this article offers several interesting features. First, it compares an education-only intervention with the same education enhanced by pop-up reminders of a physician’s own prescribing patterns in the electronic medical record (EMR), a nice reinforcement of the educational intervention for participating physicians. While not a focus of this post, I would like to mention a new Penn study of adherence to guidelines on otitis media using EMRs for decision support at Children’s Hospital of Philadelphia [5]. This signals growing interest in combining implementation science with continuing medical education (CME) to change physicians’ practice patterns.

The Norwegian study featured here [4] collected and linked data on prescribed and dispensed antibiotics from (a) the year before and (b) the year during the intervention, which allowed prescribing practice patterns to be displayed to physicians in the EMR at the point of prescribing antibiotics for a respiratory tract infection. It also collected pharmacy fill rates by patients, which I find interesting because they may offer insights into patients’ (or parents’) agreement with the need for the prescription, net of any access barriers to medication adherence.

Both study arms showed slightly reduced antibiotic prescribing from baseline (pre-intervention) rates: a 1% reduction vs. a 4% reduction in “approximated risk” (risk ratio, RR) in the education-only vs. education-plus-EMR study arms, respectively. Both results report very tight 95% confidence intervals (CIs), increasing confidence in the findings. (It is also nice to see CIs reported instead of p values, since many authors hesitate to report CIs because readers are more familiar with p values.) While “risk ratio” may simply be a convenient and appropriate way of reporting epidemiological data, its use for reporting educational outcomes with practice data seems unusual to me, and perhaps a comment on antibiotic prescribing for these infections as itself a risk.
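For readers less used to risk ratios in educational outcomes, here is a minimal Python sketch of how an RR and its 95% CI are computed from two proportions via the standard log-transform (Katz) method. The prescription counts are hypothetical, chosen only to reproduce a 4% reduction like the one reported.

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs. group B with a 95% CI,
    via the standard log-transform (Katz) method."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: antibiotic prescriptions per RTI consultations,
# intervention year vs. baseline year
rr, lo, hi = risk_ratio_ci(events_a=960, n_a=2000, events_b=1000, n_b=2000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 0.96: a 4% reduction
```

The larger the samples, the tighter the CI around the RR, which is why the study's very tight intervals are reassuring.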

The authors find that upper respiratory tract infection, sinusitis, and otitis “gave highest odds for delayed prescribing and lowest odds for dispensing,” leading them to conclude that the potential for “savings” is greatest for these infections, a comment that brings this CME study with implementation science into the context of health utilization research. The article offers freely accessible full text, so enjoy reading the study.

References cited:
1. Centre for Clinical Practice at NICE (UK). Respiratory Tract Infections - Antibiotic Prescribing: Prescribing of Antibiotics for Self-Limiting Respiratory Tract Infections in Adults and Children in Primary Care. London: National Institute for Health and Clinical Excellence (UK); 2008 Jul. http://www.ncbi.nlm.nih.gov/pubmedhealth/PMH0010014/.
2. Levy C, Pereira M, Guedj R, et al. Impact of 2011 French guidelines on antibiotic prescription for acute otitis media in infants. Médecine Mal Infect. 2014;44(3):102-106. http://www.ncbi.nlm.nih.gov/pubmed/24630597.
3. [Update on current care guidelines: acute otitis media]. Duodecim. 2010;126(5):573-574. Finnish. http://www.ncbi.nlm.nih.gov/pubmed/20597310.
4. Hoye S, Gjelstad S, Lindbaek M. Effects on antibiotic dispensing rates of interventions to promote delayed prescribing for respiratory tract infections in primary care. Br J Gen Pract. 2013;63(616):e777-e786. http://bjgp.org/content/63/616/e777.full.pdf. [Featured Article]
5. Fiks AG, Zhang P, Localio AR, et al. Adoption of electronic medical record-based decision support for otitis media in children. Health Serv Res. 2015;50(2):489-513. http://www.ncbi.nlm.nih.gov/pubmed/25287670.  
MeSH *Major* terms: Anti-Bacterial Agents/therapeutic use*; Education, Medical, Continuing*; General Practice/statistics & numerical data*; Physician's Practice Patterns/statistics & numerical data*; Respiratory Tract Infections/drug therapy*