Tuesday, October 6, 2015

Conversation-starter: SQUIRE reporting standards as a connection between medical communication organizations ACEhp and AMWA

There are many synergies between the Alliance for Continuing Education in the Health Professions (ACEhp) and the American Medical Writers Association (AMWA), because medical educators and medical writers are all communicators ... which explains why so many of us are members of both. In today's ACEhp webinar (see www.acehp.org/p/cm/ld/fid=367), Jann Balmer answered a question about the role of medical writers in publishing educational outcomes with the quality improvement reporting tool SQUIRE (as customized by ACEhp), saying that outcomes manuscripts can be enhanced by medical writers and editors because QIE project managers may not be strong writers. My colleague Donald Harting and I have been discussing potential organizational synergies between AMWA and ACEhp in this regard for over a year.

This is an opportune moment for our memberships to work together: AMWA celebrated 75 years with last week's Annual Conference, which started the very day on which ACEhp launched its custom SQUIRE reporting tool for clinical educational research at the annual Alliance Quality Symposium. This raises the question: What role will professional medical writers and communicators in AMWA play in publishing standardized, SQUIRE-compliant educational outcomes research in the Alliance for CE in the Health Professions custom tool?

Conversation-starter: ACEhp identified a top goal of 2015's Phase II of the Quality Improvement Education (QIE) Initiative as the "Assessment of SQUIRE to generate Case Studies that demonstrate successful integration of Education into QI." Professionals with capabilities in medical education and writing will ideally combine their skills in writing these case studies. Wouldn't you like to author a study that is eligible for later meta-analyses because the study was SQUIRE-compliant?

Extension: An inaugural meeting on writing research with the SQUIRE tool will be held at Dartmouth this November 2015. MedBiquitous members, as experts in standardization in medical education technology and reporting, what can you contribute to this conversation? Members of the American Educational Research Association LinkedIn group, what are your thoughts for connecting health education research reporting to more global research agendas? 

I am promoting the SQUIRE tool because of my roles as Co-Leader of the ACEhp QIE Initiative's Building Block on "Nomenclature and Its Adoption" and Research Track faculty. In these roles, I see the need to develop consistent wording and reporting standards for medical education research. After all, how will we report our achievements and deliberate our challenges in developing and researching CE in the Health Professions if we do not have consistent language to use in the SQUIRE reporting tool? Let's not delay a start in COMMUNICATING among health communicators. Join the discussion! I believe that this is a great opportunity for all health educators, education researchers, and writers to collaborate!

For additional information on these organizations, topics, and events, check out these links (some may have membership firewalls):
- SQUIRE (Standards for Quality Improvement Reporting Excellence) inaugural conference: squire-statement.org/news_events/squire_international_writing_conference/
- ACEhp Quality Symposium: www.acehp.org/p/cm/ld/fid=20 (see also the August 2015 issue of the Almanac: www.acehp.org/p/cm/ld/fid=52, p. 15)
- ACEhp Foundation QIE Initiative: www.acehp.org/p/cm/ld/fid=43
- ACEhp Foundation QIE Initiative's custom SQUIRE tool:
---See Webinars at www.acehp.org/p/cm/ld/fid=367
- AMWA 75th Anniversary: www.amwa.org/amwa_anniversary
- MedBiquitous: http://www.medbiq.org/ and new Performance Standard: www.medbiq.org/node/1001

Connect with me! SHBinford@FullCircleClinicalEducation.com or www.linkedin.com/company/full-circle-clinical-education-inc

Friday, September 18, 2015

CS2day: Award-Winning, 9-Collaborator, Performance-Improvement CME With an Outcomes-Based Evaluation Model

I saved the best for the last entry in the Back to School Tweet Fest. The Cease Smoking Today (CS2day) initiative cannot be ignored in a series about effective educational interventions in changing practice and improving quality of health care. An entire 2011 supplement of the Journal of Continuing Education in the Health Professions (JCEHP) reports on the complex CS2day educational program and its findings, with six research articles [1-6] and three forum articles [7-9] written by multi-institutional teams among the nine initiative partners. The initiative was awarded the Alliance for CME (now ACEhp) Award for Outstanding CME Collaboration in 2009 (see PDF pages 15-18 of www.acehp.org/d/do/150) and was presented in a 2012 CME Congress poster (P50: http://www.cmecongress.org/wp-content/uploads/2012/05/CME-Congress-2012-Abstracts.pdf). The study boasts collaboration among universities, professional societies, accredited CME providers, ACEhp presidents and conference chairs, CME directors at academic medical centers, the JCEHP Editor-in-Chief, and other published researchers [1,10] who carefully define the educational program’s framework and collaboration model in the new quality improvement paradigm of CME called for by the Institute of Medicine in 2001 [11].

The CS2day initiative is so big that this blog post cannot feature just one article reporting it. I will focus on the introductory editorial [10] and 2 study articles that focus on (a) developing competencies to assess needs and outcomes [3] and (b) the educational and patient health outcomes data themselves [4]. The medical education expert Donald Moore introduces the supplement and one article therein reports the outcomes data. I hope you will do as Moore recommends, when you question what you can take from articles describing “a huge project with significant funding,” which is to ask, “What are the general principles that I can identify in these articles and how can I use them in my CME practice[?]” [10].

In my previous post, I noted the difficulties of using PI-CME to change patient health outcomes in a condition posing a major public health challenge: the COSEHC study addressed cardiometabolic risk factors and saw performance and patient health improvements. The CS2day initiative faced the same challenge, and happily also reported performance change and a change in patient health outcomes: smoking cessation. Moore nicely summarizes the challenge of connecting Level 5 performance changes among clinicians to Level 7 changes in public health outcomes: “All of us want to improve the health of the public in some way, but our approaches … may prevent us from having the impact that we wish to have. The [CS2day] articles … suggest there might be another approach that we should consider to address the important public health issues that surround but do not seem to be impacted by our CME programs” [10; emphasis added].

The articles in the JCEHP supplement are organized around 4 themes [10], to which I have added themes from the articles: 
a) Collaboration is challenging but worth doing if guidelines are set and a formative evaluation of the collaboration against known success factors is carried out [1,2,5]
b) Best-practice CME includes an outcomes orientation that connects learning and performance objectives from the needs assessment to the outcomes assessment in a valid framework to support content in all educational activities [3-6]
c) A public health focus can lead to development of CME/CEhp activities with a translational or implementation science function that transcends what can happen when education addresses only a practice gap [7]
d) Standards and competencies for CEhp and members of the CEhp profession help initiatives meet the principles and characteristics of the IOM report’s expectations [8,9,11] 

The two featured research articles [3,4] function together as the Methods and Results sections of a typical IMRAD-structured paper, but each is extensive enough to stand alone and inform CEhp professionals. McKeithen et al describe the following: the need for establishing clinical competency statements related to supporting smoking cessation; the clinical guidelines that informed performance expectations; “the 5 A’s” of support for smoking cessation (Ask, Advise, Assess, Assist, and Arrange); the 14 competencies and 8 performance outcome measures, fitted into the 5 A’s algorithm, that were assessed; and the collaboration of clinical and educational experts on outcomes tools to develop “a comprehensive set of measures at Levels 3 through 6” [3].

The summative outcomes data are extensively reported by Shershneva et al, who present the “evaluation of a collaborative program” as “translating” the outcomes framework into practice [3,4]. Defining desired outcomes of the program across Levels 1 to 6* was seen as useful in facilitating agreement among stakeholders; guiding the evaluation process; gathering data from multiple activities and collaborators in a central repository; and studying the effects of mechanisms that link education to outcomes [4]. Thanks to effective planning, the researchers were also able to add to the literature on instructional design in CEhp by distinguishing performance outcomes from two groups of activity types: a) live PI activities with either a collaborative or practice-facilitator model and b) self-directed learning PI activities.

Also worth reading are additional insights about using the Success Case Method (SCM) to determine whether and why educational interventions succeed [6]. In CS2day reporting, using the SCM allowed the research team to conclude remarkably confidently, stating, “the PI activities were a primary and proximal cause of improvement in clinical practice” [4]. Moore notes that “the results were impressive: physicians integrated a new guideline into their practices and many patients stopped smoking” [10]. The guideline integrated into practice through the CS2day initiative was a “heavily researched evidence-based practice guideline published by the U.S. Agency for Healthcare Research and Quality,” due to be updated in 2008, the year after this collaborative initiative was begun [1].

Finally, a comment: In CEhp, change data are often seen as valid only when educational and program interventions remain unchanged until activity expiration, even when a formative assessment shows that changes are needed. This attitude can leave participating clinicians with suboptimal educational opportunities and frustrate stakeholders in the educational design. The formative program evaluations that improved the CS2day initiative, with changes acknowledged in the reporting, are in my opinion better than a pure pre/post comparison on an activity whose valuable investments are not updated when indicated. If the CME/CEhp profession is to help clinicians link medical care to public health through disease prevention, accountability to quality, and more, then educational design should respond to data collected in lengthy and large interventions.

The CS2day initiative is a model study in educational and performance improvement methods for a challenging public health problem. Please read the study articles if you have print or online access to JCEHP, for I have only touched the surface of the initiative's methodology, results, and rationales in the limited confines of this space. 

* Note: In this study, “Learning” was used as Level 3 and included knowledge and clinical skill (competence) measures, while “Performance,” including commitment-to-change (CTC) queries, was used as Level 4. Thus Level 5 was “Patient Health Status,” and Level 6 was “Population Health Status.”

References cited: 
1. Olson CA, Balmer JT, Mejicano GC. Factors contributing to successful interorganizational collaboration: the case of CS2day. J Contin Educ Health Prof. 2011;31(Suppl 1):S3-S12.
2. Ales MW, Rodrigues SB, Snyder R, Conklin M. Developing and implementing an effective framework for collaboration: the experience of the CS2day collaborative. J Contin Educ Health Prof. 2011;31(Suppl 1): S13-S20.
3. McKeithen T, Robertson S, Speight M. Developing clinical competencies to assess learning needs and outcomes: the experience of the CS2day initiative. J Contin Educ Health Prof. 2011;31(Suppl 1):S21-S27. http://www.ncbi.nlm.nih.gov/pubmed/22190097. [Featured Article]
4. Shershneva MB, Larrison C, Robertson S, Speight M. Evaluation of a collaborative program on smoking cessation: translating outcomes framework into practice. J Contin Educ Health Prof. 2011;31(Suppl 1):S28-S36. http://www.ncbi.nlm.nih.gov/pubmed/22190098. [Featured Article]
5. Mullikin EA, Ales MW, Cho J, Nelson TM, Rodrigues SB, Speight M. Sharing collaborative designs of tobacco cessation performance improvement CME projects. J Contin Educ Health Prof. 2011;31(Suppl 1):S37-S49.
6. Olson CA, Shershneva MB, Brownstein MH. Peering inside the clock: using success case method to determine how and why practice-based educational interventions succeed. J Contin Educ Health Prof. 2011;31(Suppl 1):S50-S59.
7. Hudmon KS, Addleton RL, Vitale FM, Christiansen BA, Mejicano GC. Advancing public health through continuing education of health care professionals. J Contin Educ Health Prof. 2011;31(Suppl 1):S60-S66.
8. Balmer JT, Bellande BJ, Addleton RL, Havens CS. The relevance of the Alliance for CME competencies for planning, organizing, and sustaining an interorganizational educational collaborative. J Contin Educ Health Prof. 2011;31(Suppl 1):S67-S75.
9. Cervero RM, Moore DE. The Cease Smoking Today (CS2day) initiative: a guide to pursue the 2010 IOM report vision for CPD. J Contin Educ Health Prof. 2011;31(Suppl 1):S76-S82.
10. Moore DE. Collaboration, best-practice CME, public health focus, and the Alliance for CME competencies: a formula for the new CME? J Contin Educ Health Prof. 2011;31(Suppl 1):S1-S2. http://www.ncbi.nlm.nih.gov/pubmed/22190095. [Featured Editorial]
11. Institute of Medicine (IOM) Committee on Planning a Continuing Health Professional Education Institute. Redesigning Continuing Education in the Health Professions. Washington, DC: The National Academies Press; 2010. http://books.nap.edu/openbook.php?record_id=12704. Accessed September 17, 2015.

MeSH “Major” Terms for the 3 Featured Articles (common items italicized)
McKeithen et al [3]: Benchmarking; Clinical Competence; Education, Medical, Continuing/methods; Needs Assessment; Outcome and Process Assessment (Health Care)/organization & administration; Practice Guidelines as Topic/standards; Smoking Cessation/methods; Tobacco Use Disorder/prevention & control
Shershneva et al [4]: Benchmarking/methods; Clinical Competence/standards; Health Personnel/classification; Health Personnel/psychology; Health Personnel/statistics & numerical data; Interprofessional Relations; Outcome Assessment (Health Care)/organization & administration; Program Evaluation; Smoking Cessation/methods; Tobacco Use Disorder/prevention & control
Moore [10]: Benchmarking; Clinical Competence; Delivery of Health Care, Integrated; Education, Medical, Continuing/methods; Interinstitutional Relations; Public Health

Thursday, September 17, 2015

Patient-Health Effects of a Performance-Improvement CME Educational Intervention to Control Cardiometabolic Risk in the Southeastern U.S.

Many of you who know me might recall that I moved from the Northeast to the Southeast U.S. some years back. As I learned about the people and culture of the Southeast, I commonly saw many dietary and lifestyle factors that would confer increased risks for cardiovascular diseases and diabetes—indeed, this part of the United States is known as “The Stroke Belt.” The Consortium for Southeastern Hypertension Control (COSEHC) initiative reported by Joyner et al sought to improve the control of these risk factors through a performance-improvement continuing medical education (PI-CME) activity [1]. It somehow seems fated that I report this study, because the lead author works at Wake Forest University, in the same North Carolina city where I have lived these many years. The PI-CME initiative itself was conducted with several primary care physician practices with designation as a COSEHC Cardiovascular Center of Excellence in Charleston, South Carolina; a comparable practice group served as a control. Results were reported to Moore’s Level 6 (patient health outcomes) [2]. 

The intervention included many overlapping and reinforcing elements that we would expect to see in a major initiative on a major health concern: using the plan-do-study-act (PDSA) model, researchers worked to “improve practice gaps by integrating evidence-based clinical interventions, physician-patient education, processes of care, performance metrics, and patient outcomes.” The intervention design included an action plan to include medical assistants and nurses in patient-level tasks and education, patient chart reminders, patient risk stratification, and sharing of physicians’ feedback on successful practice changes with other participating practices. 

Because patient health outcome indicators were used to define educational effectiveness of the PI-CME initiative, the selection of measures is important to our understanding of study findings. The research team used cardiometabolic risk factor target treatment goals for 7 lab values as recommended by 3 sets of evidence-based guidelines (JNC-7, ATP-III, and ADA). The team set a more aggressive target for low-density lipoprotein cholesterol (LDL-C) because many patients had multiple risk factors for cardiometabolic diseases and coronary heart disease risk “can exist even in the absence of other risk factors.” Researchers investigated changes in patient subgroups: “diabetic, African American, the elderly (> 65 years), and female patient subpopulations and in patients with uncontrolled risk factors at baseline.” The authors note that the average patient in both intervention and control groups was clinically obese; other baseline health indicators were also similar. 
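To make the “risk-control rate” concrete, here is a minimal sketch of how a cohort can be scored against guideline targets. The thresholds and field names below are hypothetical stand-ins of my own, not the study’s actual 7 targets (which were drawn from JNC-7, ATP-III, and ADA, with a more aggressive LDL-C goal):

```python
# Hypothetical guideline targets; the COSEHC study's exact 7 targets differ.
TARGETS = {
    "sbp_mmHg": lambda v: v < 140,   # systolic blood pressure
    "ldl_mg_dL": lambda v: v < 100,  # aggressive LDL-C goal
    "a1c_pct": lambda v: v < 7.0,    # hemoglobin A1c
}

def control_rate(patients):
    """Fraction of patients with ALL of their measured risk factors at target."""
    controlled = sum(
        all(TARGETS[k](v) for k, v in p.items()) for p in patients
    )
    return controlled / len(patients)

cohort = [
    {"sbp_mmHg": 132, "ldl_mg_dL": 95, "a1c_pct": 6.8},  # all at target
    {"sbp_mmHg": 150, "ldl_mg_dL": 95},                  # BP uncontrolled
]
print(control_rate(cohort))  # 0.5
```

Comparing a rate like this at baseline and at follow-up, in intervention vs. control practices, is the shape of the Level 6 analysis the authors undertook.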

Now to results, gathered at 6 months to assess changes in patients' cardiometabolic risk factor values and control rates from baseline. The abstract summarizes findings as follows [1]:
Only women receiving health care by intervention physicians showed a statistical improvement in their cardiometabolic risk factors as evidenced by a -3.0 mg/dL and a -3.5 mg/dL decrease in mean LDL cholesterol and non-HDL cholesterol, respectively, and a -7.0 mg/dL decrease in LDL cholesterol among females with uncontrolled baseline LDL cholesterol values. No other statistical differences were found.

I want to discuss some factors that could explain the little change seen in this study. First, the intervention was measured at just 6 months into the educational initiative; this is known to be barely adequate for assessing clinicians’ performance change, and even performance changes were not likely to produce significantly different lab values in patients with years of health-related practices that led to their higher risks. Interestingly, there was less room for improvement because patients in both groups had higher baseline risk-control rates than is seen at the U.S. national level, and the patients in the intervention group had even higher baseline risk-control rates than patients in the physician control group had.

The study did appear to improve noted performance gaps regarding gender disparities in care. The authors note 4 studies pointing out suboptimal treatment intensification to control LDL-C in female vs. male patients and even physician bias or inaction toward female patients. Thus the improved patient outcome data for LDL-C and non-HDL cholesterol among women treated by physicians in the intervention group indicate a narrowing of established gaps in attitude (Level 4) and/or performance (Level 5).

Here in “The Stroke Belt,” any effort to control cardiometabolic risk factors must include population-level initiatives and patient education, which I have seen state governments, public health departments, recreation centers, and schools undertake at many levels. Two items stand out as affecting the COSEHC report’s findings: that the study tried to measure changed patient health indicators too soon after intervention, and that the researchers tied themselves to the high standard of measuring Level 6 for a health concern that needs interventions among patients and the public that were not considered here. Indeed, because physicians’ feedback on successful changes during the initiative was shared across practices, we know that Level 4-5 competence and performance changes were achieved. The authors should be commended on their work to tackle this public health concern through a PI-CME initiative.

Finally, I want to mention that Joyner et al cite two studies by others I am humbled to name as colleagues. First, Sara Miller and others at Med-IQ (in a team often featured in Don Harting’s earlier posts in this Back to School campaign) published with PJ Boyle on improving diabetes care and patient outcomes in skilled-care (long-term-care) communities [3]. Second, Joyner et al cite the article featured in this blog on September 11, 2015—which itself came up in my reporting on that day’s release of the landmark SPRINT study results of the NHLBI [4]—by Shershneva, Olson, and others [5]. The Joyner article noted the Shershneva team’s finding that “process mapping led to improvement in [a majority of CVD] measures” [1].

References cited:
1. Joyner J, Moore MA, Simmons DR, et al. Impact of performance improvement continuing medical education on cardiometabolic risk factor control: the COSEHC initiative. J Contin Educ Health Prof. 2014;34(1):25-36. http://onlinelibrary.wiley.com/doi/10.1002/chp.21217/abstract. [Featured Article]
2. Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1-15.
3. Boyle PJ, O’Neil KW, Berry CA, Stowell SA, Miller SC. Improving diabetes care and patient outcomes in skilled-care communities: successes and lessons from a quality improvement initiative. J Am Med Dir Assoc. 2013;14(5):340-344.
4. NHLBI. Landmark NIH study shows intensive blood pressure management may save lives: lower blood pressure target greatly reduces cardiovascular complications and deaths in older adults [press release]. NHLBI Website. http://www.nih.gov/news/health/sep2015/nhlbi-11.htm. Accessed September 11, 2015.
5. Shershneva MB, Mullikin EA, Loose A-S, Olson CA. Learning to collaborate: a case study of performance improvement CME. J Contin Educ Health Prof. 2008;28(3):140-147. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2782606/. [See blog post on this previously featured article at http://fullcirclece.blogspot.com/2015/09/todays-landmark-nhlbi-sprint-study.html]
MeSH “Major” Terms of Featured Article [1]:
Education, Medical, Continuing/organization & administration; Metabolic Syndrome X/prevention & control; Models, Educational; Physicians, Family/education; Quality Improvement

Study Design and Paired Comparisons: Individualized Education Fails to Change Practice—Or Was It Only Poor Matching?

We should commend Malone et al for submitting this AHRQ-supported* study [1] for publication when a flaw in its design or execution could be the authors’ main reason for concluding that “the current study was not able to demonstrate a significant beneficial effect of the educational outreach program on [the primary performance outcome measure].” This blog’s “Back-to-School” service campaign did not exclude studies reporting negative outcomes because these studies can potentially inform continuing education in the health professions (CEhp) as much as positive studies can.

CEhp/CME educational proposals, audience-generation strategies, and outcomes reports now specify relevant “target audiences,” recognizing that not all practitioners with a certain degree, specialty, or other professional demographic description would benefit from the same educational activity or design. With this more recent recognition of the importance of targeting specific clinicians and learning about their needs has come greater recognition that many CE participants should not be included in aggregated data. This is even truer in studies with matched pairs, where the step of greatest importance lies in setting match criteria. On September 15th, I discussed an opioids-education study where matching criteria were so stringent that the authors were not able to match certain participants (physicians in the intervention group), and these participants’ data and group assignments were handled nicely and reported clearly in the paper [2] (see post at http://fullcirclece.blogspot.com/2015/09/eight-year-canadian-study-on-opioid.html).

Conversely, the first result listed in this study’s abstract indicates a matching flaw for a study on education on drug-drug interactions (DDIs): “The 2 groups were significantly different with respect to age, profession, specialty, and geographic region.” This finding undermines other benefits to the study, namely, that large samples (19,606 prescribers) were recruited to both groups (educational intervention vs. control) and matched on prescribing volume. Individualized education (also known as academic detailing) was delivered by trained pharmacists as clinical consultants who met with prescribers to “provide one-on-one information … promote evidence-based knowledge, create trusting relationships, and induce practice change.” This study’s performance (behavioral) measure was a reduced rate of prescribing potential DDIs. The prescribing of 25 clinically important, potential DDIs increased more in the intervention group than it did in the control group.

In conclusion, when we look at this presumably negative finding, we are left to wonder whether the educational intervention was truly ineffective, or whether a better matching process might have revealed different results on reducing potential DDIs and improving health care quality and utilization. One could argue that with nearly 20,000 prescribers in both samples, more matching criteria could have been applied without sacrificing so many data points that results would be inconclusive. The study’s retrospective design could also explain its recruitment and matching practices. In social sciences research (including educational outcomes research), a core expectation is generalizability of a sample to a population of interest; when reasonably achieved, generalizability lets us apply findings to practical needs and future decisions. 
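The baseline imbalance Malone et al report is the kind of problem a simple balance diagnostic can flag before analysis. As a sketch (my illustration, not the authors’ method), the standardized mean difference between groups for a continuous covariate such as age, where values above roughly 0.1 are commonly read as meaningful imbalance:

```python
import math

def standardized_mean_difference(group_a, group_b):
    """Absolute difference in group means, scaled by the pooled SD.
    A common balance diagnostic for matched or compared cohorts."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    pooled_sd = math.sqrt((var(group_a) + var(group_b)) / 2)
    return abs(mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical ages from two matched samples of prescribers:
intervention_ages = [44, 51, 58, 62, 47]
control_ages = [39, 42, 55, 48, 41]
smd = standardized_mean_difference(intervention_ages, control_ages)
print(round(smd, 2))  # 1.05 -- far above 0.1, i.e., poorly balanced groups
```

Running a check like this on every matching covariate, and rematching until all SMDs are small, is one way a secondary analysis could address the flaw discussed above.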

Recall the study conclusion quoted above: “The current study was not able to demonstrate a significant beneficial effect …” (emphasis added). A secondary analysis with different pair-matching practices might yet inform national initiatives in improving quality while reducing costs through academic detailing, both of which help patients. Now let’s remember to thank Malone, Liberman, and Sun for sharing their data and methods with the healthcare quality and educational research communities in the Journal of Managed Care & Specialty Pharmacy.

* AHRQ = United States Agency for Healthcare Research and Quality

References cited:
1. Malone DC, Liberman JN, Sun D. Effect of an educational outreach program on prescribing potential drug-drug interactions. J Manag Care Pharm. 2013;19(7):549-557. http://www.ncbi.nlm.nih.gov/pubmed/23964616. [Featured Article]
2. Kahan M, Gomes T, Juurlink DN, et al. Effect of a course-based intervention and effect of medical regulation on physicians’ opioid prescribing. Can Fam Physician. 2013;59(5):e231-e239. http://www.cfp.ca/content/59/5/e231.full.pdf+html.
Free Full Text: http://www.amcp.org/JMCP/2013/September_2013/17103/1033.html
MeSH “Major” Terms: Drug Interactions; Drug Prescriptions; Education, Medical, Continuing; Health Education; Physician's Practice Patterns; Prescription Drugs/administration & dosage

Wednesday, September 16, 2015

Personalized MD Curriculum in Personalized NSCLC Treatment Produces High, “Clinically Significant” Educational Effect Size

In non-small cell lung cancer (NSCLC), evidence points to the benefits of tumor biopsy for biomarker analysis, which in turn may allow individually targeted therapy [e.g., 1-3]. In the last five years of this age of pharmacogenomics and prognostic markers, the clinical excitement for individualized medicine has produced a robust 256 review articles indexed in PubMed, found with a search on “non small cell lung cancer treatment biomarker review,” even with additional filtering to “Abstract [available], English, [and] Humans.” But diagnostics in surgery and pathology, as well as personalized treatment for cancer, are expensive, so the societal context of the Affordable Care Act enacted five years ago (March 23, 2010, with its emphases on quality measures, patient-centered care, and accountability in care decisions) cannot be ignored.

Individualized intervention is not just important to cancer biology and treatment: it is important to clinical education, as well. Not only do clinicians caring for patients with cancer have their own knowledge and competence gaps—mainly because of the discovery of new evidence in this rapidly changing therapeutic area—they have the healthcare system context to work within, from local to national levels. The newly published, featured article by Herrmann et al focuses on NSCLC education in the quality-driven system environment of the ACA, titled “Educational Outcomes in the Era of the Affordable Care Act: Impact of Personalized Education About Non-Small Cell Lung Cancer.” The authors argue for timely opportunities for immediate, practical, and translatable education for individual clinicians, as follows: “Quality medical education must be available when the health care provider is ready to learn, provide feedback, and maximize translation of knowledge from desk to clinic” [4].

The educational methods and instructional design are of greatest interest. Oncologists completed a pre-intervention self-assessment of knowledge, skills, and attitudes. This was used to develop an individualized learning plan and a personalized curriculum, which included up to 5 distinct activities selected to address identified knowledge and practice gaps. The activities were distributed online, and learners received feedback at the completion of each activity. Learners were tested on 5 knowledge and decision-making areas relevant to NSCLC treatment.  

The results of education were dramatic: “Completion of the learning plan was associated with a high effect size (d = .70),” a Cohen’s d indicating that the educational intervention was much more meaningful than the statistically significant differences between learners’ pre- and post-intervention testing would suggest. (Remember that a p value tells the statistician only how likely the observed difference would be if there were no true effect; it says nothing about the size of that effect.) If one reviews the Effect Size (ES) lecture notes provided by Dr. Lee Becker on his University of Colorado webpages, this translates to what Cohen himself (reluctantly) labeled a medium-to-large effect, a benchmark that has become standard usage when historical data from research teams are not published with current results. This effect size surpasses even what Wolf (1986) identified as the lowest benchmark for change results that are “clinically significant,” not just educationally meaningful, at d = .50.

Looking at this educational study’s effect size more simply at Becker’s site, Cohen’s d = .70 means that 43.0% of the combined area under the pretest and posttest score distributions does not overlap (Cohen’s U1), indicating learning that facilitates change. This is a big percentage when one considers that even an effect size of .20 (small) is difficult to achieve in one initiative. In other words, personalized education on NSCLC affected quality care. Kudos to the researchers.
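For readers who want to check the conversion themselves, the translation from Cohen’s d to Becker’s percent-nonoverlap figure is a short calculation. A minimal sketch in Python, assuming approximately normal score distributions (the function names are mine, not the article’s):

```python
from statistics import NormalDist
import math

def cohens_d(pre, post):
    """Cohen's d for two score samples, using the pooled standard deviation."""
    n1, n2 = len(pre), len(post)
    m1, m2 = sum(pre) / n1, sum(post) / n2
    v1 = sum((x - m1) ** 2 for x in pre) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in post) / (n2 - 1)
    sd_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / sd_pooled

def percent_nonoverlap(d):
    """Cohen's U1: percent of the two normal distributions that do not overlap."""
    u2 = NormalDist().cdf(abs(d) / 2)
    return 100 * (2 * u2 - 1) / u2

print(round(percent_nonoverlap(0.70), 1))  # 43.0
```

Feeding the study’s reported d = .70 into `percent_nonoverlap` reproduces the 43.0% figure cited from Becker’s tables.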

P.S. For additional reading on Cohen's d and effect sizes in CEhp, check out the AssessCME blog written by my outcomes colleague, Jason Oliveri: assesscme.wordpress.com/category/effect-size.

References cited: 
1. Remark R, Becker C, Gomez JE, et al. The non-small cell lung cancer immune contexture. A major determinant of tumor characteristics and patient outcome. Am J Respir Crit Care Med. 2015;191(4):377-90.
2. Cagle PT, Allen TC, Olsen RJ. Lung cancer biomarkers: present status and future developments. Arch Pathol Lab Med. 2013;137(9):1191-8.
3. Raparia K, Villa C, DeCamp MM, Patel JD, Mehta MP. Molecular profiling in non-small cell lung cancer: a step toward personalized medicine. Arch Pathol Lab Med. 2013;137(4):481-91.
4. Herrmann T, Peters P, Williamson C, Rhodes E. Educational outcomes in the era of the Affordable Care Act: impact of personalized education about non-small cell lung cancer. J Contin Educ Health Prof. 2015;35(Suppl 1):S5-S12. [Featured Article]
5. Becker L. Effect size (ES). University of Colorado—Colorado Springs Website. http://www.uccs.edu/lbecker/effect-size.html. Accessed September 16, 2015.
MeSH *Major* terms: This study [4] is so new, NLM librarians have not yet assigned Medical Subject Headings. Check for updates at http://www.ncbi.nlm.nih.gov/pubmed/?term=26115247

Tuesday, September 15, 2015

Eight-year Canadian study on opioid prescribing among regulator- and self-referred physicians to intensive course

This educational study in a clinical journal by Kahan et al at the University of Toronto examined “the effects of an intensive 2-day course on physicians' prescribing of opioids” [1]. The most impressive feature of this study is its eight-year-plus data-gathering period of opioid-prescribing levels among participating physicians, most of whom were family physicians. Other interesting features are worth mentioning, in both instructional design and study design.

The study design grouped participants into self-referred physicians vs. physicians who were referred by medical regulators, and added a control (nonparticipant) group. Undertaking a challenging matching procedure, researchers matched nonparticipants according to specific variables, including quarterly rates of opioid-prescribing, expressed as milligrams of morphine equivalent. Subgroups of participant groups with very high opioid-prescribing patterns were also identified; unfortunately, nonparticipants to match these participants were difficult to find. Yet this targeted approach to matching is appropriate and represents a significant investment of the researchers’ time, allowing the comparative group findings shown below. Nonparticipants were added to the study concurrently with their matched participants, per an “index date” defined as “the date of course completion for participating physicians. Control physicians were assigned the same index date as their matched pair.” In one deviation from the primary outcome measure, matching was done by number of opioid prescriptions rather than milligrams of morphine equivalent. Another study design feature is the specific comparison of opioid-prescribing rates for 2 years before vs. 2 years after the educational intervention, again by group and subgroup vs. nonparticipants; participants who could not be matched were analyzed separately from participants with matched pairs.

The instructional design of the 2-day course incorporated several educational settings and modalities. Planners used didactic presentations but added problem-based case discussions and mock-interview learning interactions with standardized patients who offered feedback. Pros and cons of changing prescribing patterns were discussed in a session at the end of the course, featuring a faculty interview with a patient. The course also provided a detailed syllabus with notes and references before the course, as well as office materials. It should be noted that benzodiazepine-prescribing was also addressed in course content. Finally, each 2-day course enrolled up to 12 participants, a limit that would confer an individualized learning environment and some professional privacy in what might be a sensitive concern among participating physicians.

The authors noted in the introduction, “Medical education has been suggested as one strategy to improve opioid prescribing among physicians” [2,3] and “Educational interventions focused on opioid prescribing lead to positive improvement in physicians’ knowledge and self-reported practices” [4]. Let's look at results by reported subgroup.

Among physicians referred by medical regulators, “the rate of opioid prescribing decreased dramatically in the year before course participation compared with matched control physicians,” and “the course had no added effect on the rate of physicians' opioid prescribing in the subsequent 2 years.” These physicians may have abruptly reduced their prescribing rates in response to the regulatory investigation itself, even without an educational intervention to inform their clinical decision-making. The authors acknowledge as much, noting, “We measured only the quantity of opioids prescribed, not the quality of opioid prescribing.” In other words, the regulatory scrutiny may have created a false baseline for a study that measured only the quantity of opioids prescribed rather than patient selection or any other measure of competence.

Among the self-referred physicians who were matched to nonparticipants, “there was no statistically significant effect on the rate of opioid prescribing observed” from baseline to 2-year follow-up, although there had been a temporary decrease, particularly in prescribing for patients aged 15 – 64 (here’s a nice graph with patient ages: http://www.cfp.ca/content/59/5/e231/F4.expansion.html). On the other hand, “the rate of opioid prescribing decreased by 43.9% in the year following course completion” among self-referred physicians with high prescribing rates who could not be matched, suggesting that these physicians “might have responded to what was taught in the course.”  

References cited:
1. Kahan M, Gomes T, Juurlink DN, et al. Effect of a course-based intervention and effect of medical regulation on physicians’ opioid prescribing. Can Fam Physician. 2013;59(5):e231-e239. http://www.cfp.ca/content/59/5/e231.full.pdf+html. [Featured Article]

2. College of Physicians and Surgeons of Ontario. Avoiding Abuse, Achieving a Balance: Tackling the Opioid Public Health Crisis. Toronto, ON: College of Physicians and Surgeons of Ontario; 2010.
3. National Opioid Use Guideline Group. Canadian Guideline for Safe and Effective Use of Opioids for Chronic Non-Cancer Pain. Hamilton, ON: National Opioid Use Guideline Group; 2010.
4. Midmer D, Kahan M, Marlow B. Effects of a distance learning program on physicians’ opioid- and benzodiazepine-prescribing skills. J Contin Educ Health Prof. 2006;26(4):294-301.
Free full text PDF: http://www.cfp.ca/content/59/5/e231.full.pdf.
MeSH *Major* terms:
Analgesics, Opioid/therapeutic use*; Drug Prescriptions/standards*; Education, Medical, Continuing*; Physician's Practice Patterns/standards* 

Saturday, September 12, 2015

Medical education with EMR-based reminders reduces antibiotic prescribing and dispensing for respiratory tract infections in Norway

It is known that British guidelines for otitis media support delayed antibiotic prescribing [1], and other countries, such as France, have guidelines to reduce certain antibiotic prescribing for otitis media [2]. Conversely, Finnish guidelines do not [3]. A 2013 Norwegian study published in the British Journal of General Practice compares the effectiveness of 2 interventions in promoting delayed primary care antibiotic prescribing for respiratory tract infections, including otitis [4].

Notwithstanding a complicated design for recruiting and assigning general practitioners across multiple sites, this article offers several interesting features. First, it compares an education-only intervention with the same education enhanced by pop-up reminders of a physician’s own prescribing patterns in the electronic medical record (EMR), a nice reinforcement of the educational intervention for participating physicians. While not a focus of this post, I would like to mention a new Penn study of adherence to guidelines on otitis media using EMRs for decision support at Children’s Hospital of Philadelphia [5]. This shows interest in implementation science combined with continuing medical education (CME) for changing physicians’ practice patterns.

The Norwegian study featured here [4] collected and linked data on prescribed and dispensed antibiotics from (a) 1 year before and (b) 1 year during the intervention, which allowed prescribing practice patterns to be displayed to physicians in the EMR at the point of prescribing antibiotics for a respiratory tract infection. It also collected pharmacy fill rates by patients, which I find interesting because it may offer insight into patients’ (or parents’) agreement with the need for the prescription, after any access barriers to medication adherence.

Both study arms showed slightly reduced antibiotic prescribing from baseline (pre-intervention) rates: a 1% reduction vs. a 4% reduction in “approximated risk” (risk ratio, RR) in the education-only vs. education-plus-EMR study arms, respectively. Both results report very tight 95% confidence intervals (CIs), increasing confidence in the findings. (It is also nice to see CIs reported instead of p values, given how often authors hesitate to report CIs because of many readers’ greater familiarity with the p value.) While “risk ratio” may be simply a convenient and appropriate way of reporting epidemiological data, its use for reporting educational outcomes with practice data seems unusual to me, and perhaps a comment on antibiotic prescribing for these infections as itself a risk.
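For readers unfamiliar with risk ratios, here is a minimal sketch of how an RR and its log-scale 95% CI are typically computed. The counts are hypothetical, invented to mimic a 4% relative reduction; they are not the study’s data, and `risk_ratio_ci` is my own illustrative helper using the standard epidemiological formula.

```python
from math import exp, log, sqrt

def risk_ratio_ci(events1, n1, events2, n2, z=1.96):
    """Risk ratio of group 1 vs group 2, with a Wald-type 95% CI
    computed on the log scale (standard epidemiological formula)."""
    rr = (events1 / n1) / (events2 / n2)
    se_log_rr = sqrt(1/events1 - 1/n1 + 1/events2 - 1/n2)
    lower = exp(log(rr) - z * se_log_rr)
    upper = exp(log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical counts (NOT the study's data): a 4% relative reduction
rr, lo, hi = risk_ratio_ci(96, 1000, 100, 1000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note how larger samples shrink the standard error and tighten the interval, which is why a “very tight” CI increases confidence in a finding.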

The authors find that upper respiratory tract infection, sinusitis, and otitis “gave highest odds for delayed prescribing and lowest odds for dispensing,” leading them to conclude that the potential for “savings” is greatest for these infections, a comment that brings this CME study with implementation science into the context of health utilization research. The article offers freely accessible full text, so enjoy reading the study.

References cited:
1. Centre for Clinical Practice at NICE (UK). Respiratory Tract Infections - Antibiotic Prescribing: Prescribing of Antibiotics for Self-Limiting Respiratory Tract Infections in Adults and Children in Primary Care. London: National Institute for Health and Clinical Excellence (UK); 2008 Jul. http://www.ncbi.nlm.nih.gov/pubmedhealth/PMH0010014/.
2. Levy C, Pereira M, Guedj R, et al. Impact of 2011 French guidelines on antibiotic prescription for acute otitis media in infants. MĂ©decine Mal Infect. 2014;44(3):102-106. http://www.ncbi.nlm.nih.gov/pubmed/24630597.
3. [Update on current care guidelines: acute otitis media]. Duodecim. 2010;126(5):573-4. Finnish. http://www.ncbi.nlm.nih.gov/pubmed/20597310.
4. Hoye S, Gjelstad S, Lindbaek M. Effects on antibiotic dispensing rates of interventions to promote delayed prescribing for respiratory tract infections in primary care. Br J Gen Pract. 2013;63(616):e777-e786. http://bjgp.org/content/63/616/e777.full.pdf. [Featured Article]
5. Fiks AG, Zhang P, Localio AR, et al. Adoption of electronic medical record-based decision support for otitis media in children. Health Serv Res. 2015;50(2):489-513. http://www.ncbi.nlm.nih.gov/pubmed/25287670.  
MeSH *Major* terms: Anti-Bacterial Agents/therapeutic use*; Education, Medical, Continuing*; General Practice/statistics & numerical data*; Physician's Practice Patterns/statistics & numerical data*; Respiratory Tract Infections/drug therapy* 

Friday, September 11, 2015

Today's Landmark NHLBI SPRINT study results relate to this 2008 PI-CME article by Shershneva, Olson, et al

Today the National Heart, Lung, and Blood Institute (NHLBI) of the United States National Institutes of Health announced the early completion of the landmark SPRINT study of recommended systolic blood pressure targets, which was led by researchers in my own town, at Wake Forest University School of Medicine. I am therefore featuring a performance-improvement CME study on hypertension, whose care is a quality measure in this country and elsewhere, by Marianna Shershneva, Curt Olson, and others.

This PI-CME study was published in 2008, before most educational providers who now have outcomes-reporting capacities were able to study educational outcomes within a clinical context. Marianna Shershneva, M.D., Ph.D., is Building Block Leader for Quality Metrics of the Alliance's Quality Improvement Education (ACEhp QIE) Initiative, and Curtis Olson, Ph.D., has been guiding our field through his influence and tenure as Editor-in-Chief of the Journal of Continuing Education in the Health Professions. With their coauthors, Elizabeth Mullikin and Anne-Sophie Loose, we have a nice study for historical review that might have escaped attention on this day because the article title does not specify hypertension.

Consider this excerpt from the abstract, which lays out the opportunities for quality improvement professionals and clinical educators to work together for better patient care: "Although QI practices and CME approaches have been recognized for years, what emerges from their integration is largely unfamiliar, because it requires the collaboration of CME providers and stakeholders within the health care systems who traditionally have not worked together and may not have the same understanding of QI issues to close performance gaps." This was an observational case study with nine clinicians completing the study, and while we could wish for a larger sample, we should agree with the authors that "PI CME required unprecedented collaboration between CME planners and QI stakeholders to enable change in clinical practice." Let's applaud the effort and enjoy the three core findings that you'll see if you access this article.

This is a FREE article in PubMed Central, and I encourage you to review it for findings on physicians' practice patterns that bear on today's news from NHLBI. Thanks to the National Library of Medicine indexers for assigning this article's medical subject headings (MeSH terms), without which it might not have surfaced for mention on the day of the NHLBI news. And, underscoring the national relevance of this sort of work, the study was also funded by two NIH grants.

References Cited: Shershneva MB, Mullikin EA, Loose A-S, Olson CA. Learning to collaborate: a case study of performance improvement CME. J Contin Educ Health Prof. 2008;28(3):140-147. doi:10.1002/chp.181.
NHLBI. Landmark NIH study shows intensive blood pressure management may save lives: lower blood pressure target greatly reduces cardiovascular complications and deaths in older adults [press release]. NHLBI Website. http://www.nih.gov/news/health/sep2015/nhlbi-11.htm. Accessed September 11, 2015.
Reboussin D, NHLBI, NIDDK, NINDS, and NIA. Systolic Blood Pressure Intervention Trial (SPRINT). NCT01206062. ClinicalTrials.gov Website. https://clinicaltrials.gov/ct2/show/NCT01206062. Accessed September 11, 2015.
MeSH *Major* Terms: Education, Medical, Continuing/standards; Hypertension/prevention & control; Physician's Practice Patterns; Quality Assurance, Health Care.

Thursday, September 10, 2015

Seven Days Remaining in CME TweetFest, Highlighting Clinical Education Outcomes Articles!

Check out this month’s service project featuring #Meded QI, PI, and #CME outcomes articles at the blog http://fullcirclece.blogspot.com/. Only 7 days remain in the $100 prize contest for retweeting at https://twitter.com/SHB_CMEedit. Blog comments earn prize entries too! This campaign, running August 17 - September 17, 2015 at #CMEtf, is led by Donald Harting and yours truly. Don’t miss Don’s video introduction to the series at https://www.youtube.com/embed/VlVY4KlkdmA (but I have to insist that he deserves at least as much credit for starting the campaign as he gives me)!

I wanted to mention some colleagues who have authored the featured articles or been mentioned in Don Harting's or my blogs thus far: Ed Dellert, William Mencia, Derek Dietze, Erik Brady, Alexandra Howson, Jason Olivieri, Wendy Turell, Sara Miller, Allison Gardner, and Kathleen Moreo. Support publication of Medical Education outcomes reports!

Featured articles so far:
  • Bekanich SJ, Wanner N, Junkins S, et al. A multifaceted initiative to improve clinician awareness of pain management disparities. Am J Med Qual. 2014;29(5):388-96.
  • Peterson ED, Heidarian S, Effinger S, et al. Outcomes of an interprofessional team learning and improvement project aimed at reducing postsurgical delirium in elderly patients admitted with hip fracture. CE Measure. 2014;8(1):3-7. http://dx.doi.org/10.1532/cemeasure.v8i1.134.
  • Adams SG, Pitts J, Wynne J, Yawn BP, Diamond EJ, Lee S, Dellert E, Hanania NA. Effect of a primary care continuing education program on clinical practice of chronic obstructive pulmonary disease: translating theory into practice. Mayo Clin Proc. 2012 Sep;87(9):862-70.
  • Stevens L-M, Cooper JB, Raemer DB, et al. Educational program in crisis management for cardiac surgery teams including high realism simulation. J Thorac Cardiovasc Surg. 2012;144(1):17-24.
  • Dobesh PP, Stacy ZA. Effect of a clinical pharmacy education program on improvement in the quantity and quality of venous thromboembolism prophylaxis for medically ill patients. J Manag Care Pharm. 2005;11(9):755-62.
  • Raffini L, Trimarchi T, Beliveau J, Davis D. Thromboprophylaxis in a pediatric hospital: a patient-safety and quality-improvement initiative. Pediatrics. 2011 May;127(5):e1326-32.
  • Lapolla J, Morrice A, Quinn S, et al. Diabetes management in the hospital setting: a performance improvement continuing medical education program. CE Meas. 2013;7(1):54-60. doi:10.1532/CEM08.12103. http://www.cardenjenningspublishing.com/journal/index.php/cem/article/view/116.

Our coverage of these articles goes deeper in this blog and in Don Harting’s at http://dvmw.blogspot.com/2015/08/rt-to-win-100-your-fun-guide-to-cme.html
Why is there a contest? Because the more clinical education professionals see this series, the more we can prove that CME matters! So RT for a chance to win up to $100! CME outcomes contest rules at http://cmetweetfest.blogspot.com/. Please make sure you follow Don or me on Twitter so that we can notify you of any Tweet Fest winnings!

Thanks for your many retweets thus far, and please tell your colleagues about the service campaign to feature CME outcomes successes. I look forward to hearing from you at shbinford@fullcircleclinicaleducation.com. At the end of the service campaign, we will be preparing a report of featured articles, which you can sign up to receive from me at the Alliance Quality Symposium in Chicago or via electronic mail.

Now start Tweeting and let’s have some fun promoting the challenging work that goes into measuring health education outcomes!


Best,
Sandra 

3-Hospital Quality- and Performance-Improvement CME Project With Systems Change Support for Diabetes Care, With CE Measure Editor, Derek Dietze

This performance-improvement/continuing medical education (PI-CME) study by Lapolla and colleagues at three mid-sized, regional hospitals in the United States focused on clinicians’ behaviors that can be documented as “validated metrics of diabetes care” in patient charts.

When you look at this example of a multi-center performance-improvement initiative with educational and institutional change supports, you’ll see that most of the intervention involved obvious reminders of the change initiative and its measures, graphs and data regarding performance trends, and a PI specialist as leader or champion of the campaign in each hospital. Even though the researchers chose hospitals that were willing to invest in change, the following implementation tools and collaborative approaches were incorporated as necessary for change: "In the design and implementation of this program, we applied recognized PI principles and developed a dedicated working group to evaluate, monitor, and disseminate data, provide timely feedback, monitor outliers, attend to project management details, and maintain support of institutional leadership. We encouraged physicians' engagement by minimizing their time requirements, soliciting their input throughout the initiative, sharing meaningful data, and taking an 'improvement-oriented' approach rather than 'mandating change.'” [Emphases added.]

I wanted to point out the reminder of the American Medical Association’s recommendation of using a three-stage PI-CME structure, “comprising assessment of current performance (Stage A), design and implementation of strategies to improve performance (Stage B), and evaluation of PI efforts (Stage C).” Any outcomes project or program evaluation that sets its goals after educational content and resources are nearly final faces great challenges for later measurement. Permit me to include related content from the May 2015 Almanac article that Erik Brady and I wrote for the series, “Beginner’s Guide to Measuring Educational Outcomes in CEhp”: "A common error in assessment item-writing is the construction of assessment items that focus on a minor or trivial data point found in the content. This practice is particularly common in two cases: first, when assessment items are written from finished content that offers too little material for assessment; and second, when the minimum score a learner needs to request educational credit dictates the number of items on a tool, causing planners to test trivial points in their desperation to hit an arbitrary quota."

Because assessment items are optimally designed to assess how well a learning objective has been met, aligning a learning objective with an assessment item should ensure that your items focus on the key points of the activity content and that activity content consistently supports learners’ achievement of the educational and performance objectives. Thus when Lapolla et al planned the educational content and reinforcing health-systems supports according to well-established metrics, they made certain that all desired outcomes were mapped to hospital-specific care gaps before they started. This allowed the research team to provide “timely feedback” and manage the project better at each institution by identifying outliers in the PI datasets. A previously featured article in this month’s “Back to School” Tweet Fest mentioned the effective incorporation of VTE prophylaxis therapies into order sets; notably, we see the same in this study: “the PI specialists at all 3 participating hospitals saw marked improvement once order sets included the metrics.”

Look to this article for its insights into quality improvement, implementation science, and educational methods. It is also a model of clear, concise research reporting, which is especially remarkable given the number of study authors. Finally, note that Derek Dietze, Editor-in-Chief of CE Measure, participated in this large PI-CME study on improving care practices for diabetes, one of the most challenging epidemiological issues in the United States.

References cited:
Lapolla J, Morrice A, Quinn S, et al. Diabetes management in the hospital setting: a performance improvement continuing medical education program. CE Meas. 2013;7(1):54-60. doi:10.1532/CEM08.12103. http://www.cardenjenningspublishing.com/journal/index.php/cem/article/view/116. Accessed September 10, 2015. [FREE full text]
Brady ED, Binford SH. How to write sound educational outcomes questions: a focus on knowledge and competence assessments [series: “Beginner’s Guide to Measuring Educational Outcomes in CEhp”]. The Almanac. 2015;37(5):4-9. http://www.acehp.org/p/do/sd/topic=216&sid=811. Accessed September 10, 2015. [Full text]
Raffini L, Trimarchi T, Beliveau J, Davis D. Thromboprophylaxis in a pediatric hospital: a patient-safety and quality-improvement initiative. Pediatrics. 2011 May;127(5):e1326-32. doi: 10.1542/peds.2010-3282. http://pediatrics.aappublications.org/content/127/5/e1326.full.pdf+html. Accessed September 8, 2015. [FREE full text (Pediatrics final version)] 

Tuesday, September 8, 2015

Implementation Science Extends Multidisciplinary Education on VTE Prophylaxis at World-Class Children’s Hospital

Let’s continue yesterday's thread on venous thromboembolism (VTE) education in a new patient population: children. In this quality improvement article, Raffini et al point out that while risks of VTE are far lower in children than in adults, children still need clinicians to maintain an appropriate level of suspicion and take action on VTE risk with prophylaxis.

What a nice way to phrase a practice gap and the need to narrow that gap: “VTE prophylaxis for patients at risk is often overlooked in pediatric health care institutions, which provides an opportunity to improve patient care.” Even at the world-class Children’s Hospital of Philadelphia (CHOP) at the University of Pennsylvania, children and adolescents were not receiving an appropriate level of risk assessment and prophylaxis, leading researchers to undertake this four-year quality improvement study.

CME/CEhp initiatives often do not extend into local facilities, but this study used reinforcing methods of implementation science to communicate and support the desired behaviors and tasks. These extended “multidisciplinary educational forums” into patient-care settings. Here’s a summary of the intervention’s rollout of CHOP’s locally established guidelines to “encourage timely initiation of thromboprophylaxis." These interventions included 1) VTE risk assessment in the nursing admission intake forms; 2) a VTE-prophylaxis order set implemented into the computerized ordering system; 3) ICU nurses assessing VTE risk and preventive practice daily during team rounds; 4) trials of various pneumatic compression devices with appropriate sizing for children; 5) acquisition of more compression devices with storage near high-risk areas; and 6) development of a protocol for perioperative nurses to initiate pneumatic compression before surgery for certain patients, later expanded to all inpatient settings.

So was the extension of multidisciplinary education into care settings, using methods of implementation science, effective? YES. The study’s primary outcome measure was “compliance with thromboprophylaxis guidelines in patients at risk for VTE.” Clinically meaningful improvements were seen: “Over the 4-year study period, the observed rate of VTE prophylaxis in patients at risk increased from a baseline of 22% to an average rate of 82%, and there were intermittent improvements up to 100%.” Implementation science methods, when added to multidisciplinary clinical education, nearly quadrupled (and at times quintupled) guidelines-based care in a patient population previously underserved with respect to prophylaxis.

There’s also a nice feature to point out to those watching IRB “requirements” that peer-reviewed journals increasingly expect of performance-change initiatives: “This project was a quality-improvement activity and exempt from review from the institutional review board at the Children's Hospital of Philadelphia.” The QI initiative was not seen as human subjects research (HSR).

Reference cited: Raffini L, Trimarchi T, Beliveau J, Davis D. Thromboprophylaxis in a pediatric hospital: a patient-safety and quality-improvement initiative. Pediatrics. 2011;127(5):e1326-32. PMID: 21464186.
See also the ACCP and ICSI guidelines linked from the September 7, 2015 post (http://fullcirclece.blogspot.com/2015/09/pharmacy-education-for-hospital.html)
PubMed:
http://www.ncbi.nlm.nih.gov/pubmed/21464186
Free full text (Pediatrics final version): http://pediatrics.aappublications.org/content/127/5/e1326.full.pdf+html
MeSH *Major* terms: Anticoagulants/administration & dosage*; Guideline Adherence*; Patient Selection*; Primary Prevention/organization & administration*; Venous Thromboembolism/prevention & control*
(And for those of you who love MeSH and UMLS, many quality-improvement and program change-related MeSH terms were assigned by NLM but not starred as “major” terms.) 

Monday, September 7, 2015

Pharmacy Education for Hospital Clinicians on VTE Prophylaxis Changed Performance, Bringing Guideline-Adherent Care To Most Patients

Earlier today, I wrote of interprofessional clinical education regarding team communication during cardiac surgery. Now I continue the theme of nonphysician education by highlighting the contributions of pharmacy education to patient care, one that particularly relates to (post)surgical care. While this month’s Back-to-School campaign (illustrating published educational outcomes) mainly features recent articles, this 2005 study by Dobesh and Stacy in the Journal of Managed Care Pharmacy (free full text available) is a worthy read for its contributions to quality care research from the pharmacy perspective and scope of practice.

Venous thromboembolism (VTE and/or DVT, PE) is a great concern among surgeons and other physicians. In fact, the VTE evidence-based guideline by the Institute for Clinical Systems Improvement (ICSI; Jobin et al 2012) names 10 stakeholder groups—including physicians and pharmacists—as “intended users.” The current article used the 2004 American College of Chest Physicians (ACCP) recommendations. Effectively preventing VTE can dictate the chances of successful outcomes and reduce patient readmission rates for many conditions. Because of the challenges of selecting the optimal anticoagulant agent and dosage for individual patients, pharmacists can clearly collaborate with physicians in making decisions about VTE prophylaxis. The 2012 guideline considered pharmacological thromboprophylaxis with unfractionated heparin (UFH), low-molecular-weight heparin (LMWH), fondaparinux, warfarin, aspirin, apixaban, dabigatran, and rivaroxaban—enough therapeutic options to suggest the need for consultation between physicians and pharmacists.

The pharmacy intervention for nurses, pharmacists, and physicians in the community hospital was traditional in instructional format, involving reinforcing in-service and quality-assurance presentations, as well as newsletters. The educational outcomes assessment method was more notable, using retrospective chart reviews with statistically similar patients before and after the educational intervention (15 months of patient charts before, and 6 months after). Patient chart reviews showed statistically significant and clinically meaningful change in VTE prophylaxis performance in practice. Specifically, both “suitable” and “optimal” prophylaxis increased (P = .006 and P < .0001 respectively), with a fourfold increase in the optimally treated percentage of patients associated with pharmacy education of physicians, nurses, and pharmacists.  

These data show that traditional educational initiatives developed by one health care profession for others can be effective in changing performance, especially when guidelines for practice and risk categories are presented in reinforcing text-based and live formats. This intervention brought guideline-adherent care to 93% of patients with risk, up from 49% before the intervention.  

References cited:
Dobesh PP, Stacy ZA. Effect of a clinical pharmacy education program on improvement in the quantity and quality of venous thromboembolism prophylaxis for medically ill patients. J Manag Care Pharm. 2005;11(9):755-62.
PMID: 16300419.
Geerts WH, Pineo GF, Heit JA, et al. Prevention of venous thromboembolism: the Seventh ACCP Conference on Antithrombotic and Thrombolytic Therapy. Chest. 2004;126:338S-400S. PMID: 15383478.

Jobin S, Kalliainen L, Adebayo L, et al. Venous thromboembolism prophylaxis. Bloomington (MN): Institute for Clinical Systems Improvement (ICSI); 2012. Available at: http://www.guideline.gov/content.aspx?id=39350. Accessed September 7, 2015. 


PubMed:  http://www.ncbi.nlm.nih.gov/pubmed/16300419
Journal Free Full Text: http://amcp.org/data/jmcp/contemporary_755-762.pdf
MeSH *Major* terms: Health Personnel/education; Heparin, Low-Molecular-Weight/therapeutic use; Inservice Training; Thromboembolism/prevention & control; Venous Thrombosis/prevention & control

Mixed-Methods Study Improves Team Communication After Non-Didactic Interprofessional Education on Cardiac Surgical Crisis

It would be hard to imagine a more crucial setting for effective interdisciplinary clinical interactions than the cardiac surgery operating theater. Stevens and colleagues published this 2012 pilot study on interprofessional education to “sharpen performance of experienced cardiac surgical teams in acute crisis management.”

The educational methods support existing effectiveness research for non-didactic education, incorporating both interactive workshops for an entire care unit and computer-based, crisis-case simulations (whose “high-realism” scenarios improved over time). Researchers found that 82% of the 79 participants recommended repetition of case simulations every 6–12 months. Workshop participants identified priorities in “encouraging speaking up about critical information and interprofessional information sharing,” particularly early communication of the surgical plan.

The mixed-methods outcomes assessment methodology is also noteworthy because of its appropriateness to this study of human communications and behaviors during a patient crisis: the structured interviews with study participants added context and insights to the quantitative data that could be gathered from periodic surveys. The surveys that were administered before, just after, and 6 months after the educational activities noted that the “concept of working as a team improved between surveys,” as well as “trends for improvement in gaining common understanding of the plan before a procedure and appropriate resolution of disagreements.” The qualitative arm of the study found that interviewees valued the initiative’s “positive effect on their personal behaviors and patient care, including speaking up more readily and communicating more clearly.”

In the continuing medical education field, we often see the Canadians leading educational research, standards, and innovative methods. In fact, looking only at the U.S. National Library of Medicine’s assignment of Medical Subject Headings (MeSH terms) to this indexed article shows the relevance of this study for medical education methods for promoting competence in decision-making, performance-in-practice change, and quality improvement (see “major” MeSH terms listed below, and others on the PubMed page). One hopes to see a follow-up on this pilot study at the Centre Hospitalier Universitaire de Montréal (Quebec, Canada).  

MeSH *Major* terms: Cardiac Surgical Procedures/education; Clinical Competence; Critical Care/standards; Education, Medical, Continuing/methods; Patient Care Team/organization & administration