
Tuesday, October 6, 2015

Conversation-starter: SQUIRE tool standards as a connection between medical communication organizations: ACEhp and AMWA

There are many synergies and shared resources between the Alliance for Continuing Education in the Health Professions (ACEhp) and the American Medical Writers Association (AMWA): medical educators and medical writers are all communicators, which explains why so many of us are members of both. In today's ACEhp webinar (see www.acehp.org/p/cm/ld/fid=367), Jann Balmer answered a question about the role of medical writers in publishing educational outcomes with the quality improvement reporting tool SQUIRE (as customized by ACEhp), saying that outcomes manuscripts can be enhanced by medical writers and editors because QIE project managers may not be strong writers. My colleague Donald Harting and I have been discussing potential organizational synergies between AMWA and ACEhp in this regard for over a year.

This is an opportune moment for our memberships to work together: AMWA celebrated its 75th anniversary at last week's Annual Conference, which opened the very day ACEhp launched its custom SQUIRE reporting tool for clinical educational research at the annual Alliance Quality Symposium. This raises the question: What role will professional medical writers and communicators in AMWA play in publishing standardized, SQUIRE-compliant educational outcomes research using the Alliance's custom tool?

Conversation-starter: ACEhp identified a top goal of 2015's Phase II of the Quality Improvement Education (QIE) Initiative as the "Assessment of SQUIRE to generate Case Studies that demonstrate successful integration of Education into QI." Professionals with capabilities in medical education and writing will ideally combine their skills in writing these case studies. Wouldn't you like to author a study that is eligible for later meta-analyses because it was SQUIRE-compliant?

Extension: An inaugural meeting on writing research with the SQUIRE tool will be held at Dartmouth in November 2015. MedBiquitous members, as experts in standardization in medical education technology and reporting, what can you contribute to this conversation? Members of the American Educational Research Association LinkedIn group, what are your thoughts on connecting health education research reporting to more global research agendas?

I am promoting the SQUIRE tool because of my roles as Co-Leader of the ACEhp QIE Initiative's Building Block on "Nomenclature and Its Adoption" and as Research Track faculty. In these roles, I see the need to develop consistent wording and reporting standards for medical education research. After all, how will we report our achievements and deliberate our challenges in developing and researching CE in the Health Professions if we do not have consistent language to use in the SQUIRE reporting tool? Let's not delay in COMMUNICATING among health communicators. Join the discussion! I believe this is a great opportunity for all health educators, education researchers, and writers to collaborate!

For additional information on these organizations, topics, and events, check out these links (some may have membership firewalls):
- SQUIRE (Standards for Quality Improvement Reporting Excellence) inaugural conference: squire-statement.org/news_events/squire_international_writing_conference/
- ACEhp Quality Symposium: www.acehp.org/p/cm/ld/fid=20 (see also the August 2015 issue of the Almanac: www.acehp.org/p/cm/ld/fid=52, p. 15)
- ACEhp Foundation QIE Initiative: www.acehp.org/p/cm/ld/fid=43
- ACEhp Foundation QIE Initiative's custom SQUIRE tool:
---See Webinars at www.acehp.org/p/cm/ld/fid=367
- AMWA 75th Anniversary: www.amwa.org/amwa_anniversary
- MedBiquitous: http://www.medbiq.org/ and new Performance Standard: www.medbiq.org/node/1001

Connect with me! SHBinford@FullCircleClinicalEducation.com or www.linkedin.com/company/full-circle-clinical-education-inc

Friday, September 18, 2015

CS2day: Award-Winning, 9-Collaborator, Performance-Improvement CME With an Outcomes-Based Evaluation Model

I saved the best for the last entry in the Back to School Tweet Fest. The Cease Smoking Today (CS2day) initiative cannot be ignored in a series about effective educational interventions in changing practice and improving quality of health care. An entire 2011 supplement of the Journal of Continuing Education in the Health Professions (JCEHP) reports the complex CS2day educational program and its findings, with six research articles [1-6] and three forum articles [7-9] written by multi-institutional teams among the nine initiative partners. This study was awarded the Alliance for CME (now ACEhp) Award for Outstanding CME Collaboration in 2009 (see PDF pages 15-18 of www.acehp.org/d/do/150), and was presented in a 2012 CME Congress poster (P50: http://www.cmecongress.org/wp-content/uploads/2012/05/CME-Congress-2012-Abstracts.pdf). The study boasts collaboration among universities, professional societies, accredited CME providers, ACEhp presidents and conference chairs, CME directors at academic medical centers, the JCEHP Editor-in-Chief, and other published researchers [1,10] who carefully define the educational program’s framework and collaboration model in the new quality improvement paradigm of CME called for by the Institute of Medicine [11].

The CS2day initiative is too big for this blog post to feature just one article reporting it. I will focus on the introductory editorial [10] and two study articles that focus on (a) developing competencies to assess needs and outcomes [3] and (b) the educational and patient health outcomes data themselves [4]. Medical education expert Donald Moore introduces the supplement. I hope you will do as Moore recommends, when you question what you can take from articles describing “a huge project with significant funding,” which is to ask, “What are the general principles that I can identify in these articles and how can I use them in my CME practice[?]” [10].

In my previous post, I noted the difficulties of using PI-CME to change patient health outcomes in a condition posing a major public health challenge: the COSEHC study addressed cardiometabolic risk factors and saw performance and patient health improvements. The CS2day initiative faced the same challenge, and happily also reported performance change and a change in patient health outcomes: smoking cessation. Moore nicely summarizes the challenge of connecting Level 5 performance changes among clinicians to Level 7 changes in public health outcomes: “All of us want to improve the health of the public in some way, but our approaches … may prevent us from having the impact that we wish to have. The [CS2day] articles … suggest there might be another approach that we should consider to address the important public health issues that surround but do not seem to be impacted by our CME programs” [10; emphasis added].

The articles in the JCEHP supplement are organized around 4 themes [10], to which I have added themes from the articles: 
a) Collaboration is challenging but worth doing if guidelines are set and a formative evaluation of the collaboration against known success factors is carried out [1,2,5]
b) Best-practice CME includes an outcomes orientation that connects learning and performance objectives from the needs assessment to the outcomes assessment in a valid framework to support content in all educational activities [3-6]
c) A public health focus can lead to development of CME/CEhp activities with a translational or implementation science function that transcends what can happen when education addresses only a practice gap [7]
d) Standards and competencies for CEhp and members of the CEhp profession help initiatives meet the principles and characteristics of the IOM report’s expectations [8,9,11] 

The two featured research articles [3,4] function together as the Methods and Results sections of a typical IMRAD-structured paper, but each is extensive enough to stand alone and inform CEhp professionals. McKeithen et al describe the following: the need for establishing clinical competency statements related to supporting smoking cessation; the clinical guidelines that informed performance expectations; “the 5 A’s” of support for smoking cessation (Ask, Advise, Assess, Assist, and Arrange); the 14 competencies and 8 performance outcomes measures, assessed as they fit into the 5 A’s algorithm; and collaboration of clinical and educational experts on outcomes tools to develop “a comprehensive set of measures at Levels 3 through 6” [3].
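
For readers who think in structures, here is a minimal sketch of how outcomes measures can be organized under the 5 A's algorithm. The measure names are hypothetical placeholders of my own, not the study's actual 14 competencies or 8 performance measures (see [3] for those):

```python
# Hypothetical mapping of chart-audit measures into the 5 A's algorithm.
measures_by_step = {
    "Ask": ["documented tobacco-use status"],
    "Advise": ["documented cessation advice"],
    "Assess": ["documented readiness to quit"],
    "Assist": ["cessation medication offered", "counseling referral made"],
    "Arrange": ["follow-up contact scheduled"],
}

def audit_chart(chart):
    """Per 5 A's step, the fraction of its measures met in one patient chart."""
    return {
        step: sum(chart.get(m, False) for m in measures) / len(measures)
        for step, measures in measures_by_step.items()
    }

example_chart = {"documented tobacco-use status": True,
                 "documented cessation advice": True}
print(audit_chart(example_chart))
# {'Ask': 1.0, 'Advise': 1.0, 'Assess': 0.0, 'Assist': 0.0, 'Arrange': 0.0}
```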

The summative outcomes data are extensively reported by Shershneva et al, where “evaluation of a collaborative program” is presented as “translating” the outcomes framework into practice [3,4]. Defining desired outcomes of the program across Levels 1 to 6* was seen as useful in facilitating agreement among stakeholders; guiding the evaluation process; gathering data from multiple activities and collaborators in a central repository; and studying the effects of mechanisms that link education to outcomes [4]. Thanks to effective planning, the researchers were also able to add to the literature on instructional design in CEhp by distinguishing performance outcomes between two groups of activity types: a) live PI activities with either a collaborative or practice-facilitator model and b) self-directed learning PI activities.

Also worth reading are additional insights about using the Success Case Method (SCM) to determine whether and why educational interventions succeed [6]. In CS2day reporting, using the SCM allowed the research team to conclude remarkably confidently, stating, “the PI activities were a primary and proximal cause of improvement in clinical practice” [4]. Moore notes that “the results were impressive: physicians integrated a new guideline into their practices and many patients stopped smoking” [10]. The guideline integrated into practice through the CS2day initiative was a “heavily researched evidence-based practice guideline published by the U.S. Agency for Healthcare Research and Quality,” due to be updated in 2008, the year after this collaborative initiative was begun [1].
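
For those new to the SCM, its quantitative selection step can be pictured in a few lines: score participants on an outcome measure, then invite the extremes to qualitative interviews. A minimal sketch, with invented names and change scores (Olson et al [6] describe the method as actually applied):

```python
def select_cases(change_scores, n=2):
    """Rank participants by outcome change; return (successes, non_successes)."""
    ranked = sorted(change_scores, key=change_scores.get, reverse=True)
    return ranked[:n], ranked[-n:]

change_scores = {"clinician_a": 0.42, "clinician_b": 0.05,
                 "clinician_c": 0.31, "clinician_d": -0.02}
successes, non_successes = select_cases(change_scores)
print("Interview as success cases:", successes)          # clinicians a and c
print("Interview as non-success cases:", non_successes)  # clinicians b and d
```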

Finally, a comment: In CEhp, change data are often seen as valid only when educational and program interventions remain unchanged through activity expiration, even when a formative assessment shows changes to be necessary. This attitude can leave participating clinicians with suboptimal educational opportunities, and it frustrates stakeholders in the educational design. The formative program evaluations that improved the CS2day initiative, with changes duly acknowledged, are in my opinion better than a pure pre/post comparison of an activity whose valuable investments are not updated when indicated. If the CME/CEhp profession helps clinicians link medical care to public health through disease prevention, accountability to quality, and more, then educational design should respond to data collected during lengthy and large interventions.

The CS2day initiative is a model study in educational and performance improvement methods for a challenging public health problem. Please read the study articles if you have print or online access to JCEHP, for I have only touched the surface of the initiative's methodology, results, and rationales in the limited confines of this space. 

* Note: In this study, “Learning” was used as Level 3 and included knowledge and clinical skill (competence) measures, while “Performance,” including commitment to change (CTC) queries, was used as Level 4. Thus Level 5 was “Patient Health Status” and Level 6 was “Population Health Status.”

References cited: 
1. Olson CA, Balmer JT, Mejicano GC. Factors contributing to successful interorganizational collaboration: the case of CS2day. J Contin Educ Health Prof. 2011;31(Suppl 1):S3-S12.
2. Ales MW, Rodrigues SB, Snyder R, Conklin M. Developing and implementing an effective framework for collaboration: the experience of the CS2day collaborative. J Contin Educ Health Prof. 2011;31(Suppl 1):S13-S20.
3. McKeithen T, Robertson S, Speight M. Developing clinical competencies to assess learning needs and outcomes: the experience of the CS2day initiative. J Contin Educ Health Prof. 2011;31(Suppl 1):S21-S27. http://www.ncbi.nlm.nih.gov/pubmed/22190097. [Featured Article]
4. Shershneva MB, Larrison C, Robertson S, Speight M. Evaluation of a collaborative program on smoking cessation: translating outcomes framework into practice. J Contin Educ Health Prof. 2011;31(Suppl 1):S28-S36. http://www.ncbi.nlm.nih.gov/pubmed/22190098. [Featured Article]
5. Mullikin EA, Ales MW, Cho J, Nelson TM, Rodrigues SB, Speight M. Sharing collaborative designs of tobacco cessation performance improvement CME projects. J Contin Educ Health Prof. 2011;31(Suppl 1):S37-S49.
6. Olson CA, Shershneva MB, Brownstein MH. Peering inside the clock: using success case method to determine how and why practice-based educational interventions succeed. J Contin Educ Health Prof. 2011;31(Suppl 1):S50-S59.
7. Hudmon KS, Addleton RL, Vitale FM, Christiansen BA, Mejicano GC. Advancing public health through continuing education of health care professionals. J Contin Educ Health Prof. 2011;31(Suppl 1):S60-S66.
8. Balmer JT, Bellande BJ, Addleton RL, Havens CS. The relevance of the Alliance for CME competencies for planning, organizing, and sustaining an interorganizational educational collaborative. J Contin Educ Health Prof. 2011;31(Suppl 1):S67-S75.
9. Cervero RM, Moore DE. The Cease Smoking Today (CS2day) initiative: a guide to pursue the 2010 IOM report vision for CPD. J Contin Educ Health Prof. 2011;31(Suppl 1):S76-S82.
10. Moore DE. Collaboration, best-practice CME, public health focus, and the Alliance for CME competencies: a formula for the new CME? J Contin Educ Health Prof. 2011;31(Suppl 1):S1-S2. http://www.ncbi.nlm.nih.gov/pubmed/22190095. [Featured Editorial]
11. Institute of Medicine (IOM) Committee on Planning a Continuing Health Professional Education Institute. Redesigning Continuing Education in the Health Professions. Washington, DC: The National Academies Press; 2010. http://books.nap.edu/openbook.php?record_id=12704. Accessed September 17, 2015.

MeSH “Major” Terms for the 3 Featured Articles (common items italicized)
McKeithen et al [3]: Benchmarking; Clinical Competence; Education, Medical, Continuing/methods; Needs Assessment; Outcome and Process Assessment (Health Care)/organization & administration; Practice Guidelines as Topic/standards; Smoking Cessation/methods; Tobacco Use Disorder/prevention & control
Shershneva et al [4]: Benchmarking/methods; Clinical Competence/standards; Health Personnel/classification; Health Personnel/psychology; Health Personnel/statistics & numerical data; Interprofessional Relations; Outcome Assessment (Health Care)/organization & administration; Program Evaluation; Smoking Cessation/methods; Tobacco Use Disorder/prevention & control
Moore [10]: Benchmarking; Clinical Competence; Delivery of Health Care, Integrated; Education, Medical, Continuing/methods; Interinstitutional Relations; Public Health

Thursday, September 17, 2015

Patient-Health Effects of a Performance-Improvement CME Educational Intervention to Control Cardiometabolic Risk in the Southeastern U.S.

Many of you who know me might recall that I moved from the Northeast to the Southeast U.S. some years back. As I learned about the people and culture of the Southeast, I commonly saw dietary and lifestyle factors that confer increased risks for cardiovascular diseases and diabetes; indeed, this part of the United States is known as “The Stroke Belt.” The Consortium for Southeastern Hypertension Control (COSEHC) initiative reported by Joyner et al sought to improve the control of these risk factors through a performance-improvement continuing medical education (PI-CME) activity [1]. It somehow seems fated that I report this study: the lead author works at Wake Forest University, in the same North Carolina city where I have lived these many years. The PI-CME initiative itself was conducted with several primary care physician practices designated as a COSEHC Cardiovascular Center of Excellence in Charleston, South Carolina; a comparable practice group served as a control. Results were reported to Moore’s Level 6 (patient health outcomes) [2].

The intervention included many overlapping and reinforcing elements that we would expect to see in a major initiative on a major health concern: using the plan-do-study-act (PDSA) model, researchers worked to “improve practice gaps by integrating evidence-based clinical interventions, physician-patient education, processes of care, performance metrics, and patient outcomes.” The intervention design included an action plan to include medical assistants and nurses in patient-level tasks and education, patient chart reminders, patient risk stratification, and sharing of physicians’ feedback on successful practice changes with other participating practices. 

Because patient health outcome indicators were used to define educational effectiveness of the PI-CME initiative, the selection of measures is important to our understanding of study findings. The research team used cardiometabolic risk factor target treatment goals for 7 lab values as recommended by 3 sets of evidence-based guidelines (JNC-7, ATP-III, and ADA). The team set a more aggressive target for low-density lipoprotein cholesterol (LDL-C) because many patients had multiple risk factors for cardiometabolic diseases and coronary heart disease risk “can exist even in the absence of other risk factors.” Researchers investigated changes in patient subgroups: “diabetic, African American, the elderly (> 65 years), and female patient subpopulations and in patients with uncontrolled risk factors at baseline.” The authors note that the average patient in both intervention and control groups was clinically obese; other baseline health indicators were also similar. 
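
To make “control rates” concrete, here is a minimal sketch of how such rates can be computed from patient lab values. The thresholds are illustrative stand-ins of my own, not the study's exact JNC-7/ATP-III/ADA targets (nor its more aggressive LDL-C goal):

```python
# Illustrative targets only; not the exact guideline thresholds used in [1].
TARGETS = {"ldl_c": 100.0, "hba1c": 7.0, "systolic_bp": 140.0}

def control_rate(patients, measure):
    """Fraction of patients whose lab value for `measure` is below target."""
    values = [p[measure] for p in patients if measure in p]
    return sum(v < TARGETS[measure] for v in values) / len(values)

cohort = [{"ldl_c": 92.0, "hba1c": 7.4}, {"ldl_c": 131.0, "hba1c": 6.8}]
print(f"LDL-C control rate: {control_rate(cohort, 'ldl_c'):.0%}")  # 50%
print(f"HbA1c control rate: {control_rate(cohort, 'hba1c'):.0%}")  # 50%
```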

Now to results, gathered at 6 months to assess changes in patients' cardiometabolic risk factor values and control rates from baseline. The abstract summarizes findings as follows [1]:
Only women receiving health care by intervention physicians showed a statistical improvement in their cardiometabolic risk factors as evidenced by a -3.0 mg/dL and a -3.5 mg/dL decrease in mean LDL cholesterol and non-HDL cholesterol, respectively, and a -7.0 mg/dL decrease in LDL cholesterol among females with uncontrolled baseline LDL cholesterol values. No other statistical differences were found.

I want to discuss some factors that could explain the small change seen in this study. First, the intervention was measured just 6 months into the educational initiative; this is barely adequate for assessing clinicians’ performance change, and even achieved performance changes were unlikely to produce significantly different lab values in patients whose higher risks stem from years of health-related practices. Second, there was less room for improvement: patients in both groups had higher baseline risk-control rates than is seen at the U.S. national level, and patients in the intervention group had even higher baseline risk-control rates than patients in the physician control group.

The study did appear to improve noted performance gaps regarding gender disparities in care. The authors note 4 studies pointing out suboptimal treatment intensification to control LDL-C in female vs. male patients, and even physician bias or inaction toward female patients. Thus the improved patient outcome data for LDL-C and non-HDL cholesterol among women treated by physicians in the intervention group indicate a narrowing of established gaps in attitude (Level 4) and/or performance (Level 5).

Here in “The Stroke Belt,” any effort to control cardiometabolic risk factors must include population-level initiatives and patient education, which I have seen state governments, public health departments, recreation centers, and schools undertake at many levels. Two items stand out as affecting the COSEHC report’s findings: the study tried to measure changed patient health indicators too soon after the intervention, and the researchers tied themselves to the high standard of measuring Level 6 for a health concern that needs interventions among patients and the public that were not considered here. Indeed, because physicians’ feedback on successful changes during the initiative was shared across practices, we know that Level 4-5 competence and performance changes were achieved. The authors should be commended on their work to tackle this public health concern through a PI-CME initiative.

Finally, I want to mention that Joyner et al cite two studies by others I am humbled to name as colleagues. First, Sara Miller and others at Med-IQ (in a team often featured in Don Harting’s earlier posts in this Back to School campaign) published with PJ Boyle on improving diabetes care and patient outcomes in skilled-care (long-term-care) communities [3]. Second, Joyner et al cite the article featured in this blog on September 11, 2015—which itself came up in my reporting on that day’s release of the landmark SPRINT study results of the NHLBI [4]—by Shershneva, Olson, and others [5]. The Joyner article noted the Shershneva team’s finding that “process mapping led to improvement in [a majority of CVD] measures” [1].

References cited:
1. Joyner J, Moore MA, Simmons DR, et al. Impact of performance improvement continuing medical education on cardiometabolic risk factor control: the COSEHC initiative. J Contin Educ Health Prof. 2014;34(1):25-36. http://onlinelibrary.wiley.com/doi/10.1002/chp.21217/abstract. [Featured Article]
2. Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1-15.
3. Boyle PJ, O’Neil KW, Berry CA, Stowell SA, Miller SC. Improving diabetes care and patient outcomes in skilled-care communities: successes and lessons from a quality improvement initiative. J Am Med Dir Assoc. 2013;14(5):340-344.
4. NHLBI. Landmark NIH study shows intensive blood pressure management may save lives: lower blood pressure target greatly reduces cardiovascular complications and deaths in older adults [press release]. NHLBI Website. http://www.nih.gov/news/health/sep2015/nhlbi-11.htm. Accessed September 11, 2015.
5. Shershneva MB, Mullikin EA, Loose A-S, Olson CA. Learning to collaborate: a case study of performance improvement CME. J Contin Educ Health Prof. 2008;28(3):140-147. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2782606/. [See blog post on this previously featured article at http://fullcirclece.blogspot.com/2015/09/todays-landmark-nhlbi-sprint-study.html]
MeSH “Major” Terms of Featured Article [1]:
Education, Medical, Continuing/organization & administration; Metabolic Syndrome X/prevention & control; Models, Educational; Physicians, Family/education; Quality Improvement

Study Design and Paired Comparisons: Individualized Education Fails to Change Practice—Or Was It Only Poor Matching?

We should commend Malone et al for submitting this AHRQ-supported* study [1] for publication when a flaw in its design or execution could be the authors’ main reason for concluding that “the current study was not able to demonstrate a significant beneficial effect of the educational outreach program on [the primary performance outcome measure].” This blog’s “Back-to-School” service campaign did not exclude studies reporting negative outcomes because these studies can potentially inform continuing education in the health professions (CEhp) as much as positive studies can.

CEhp/CME educational proposals, audience-generation strategies, and outcomes reports now specify relevant “target audiences,” recognizing that not all practitioners with a certain degree, specialty, or other professional demographic description would benefit from the same educational activity or design. With this more recent recognition of the importance of targeting specific clinicians and learning about their needs has come greater recognition that many CE participants should not be included in aggregated data. This is even truer in studies with matched pairs, where the step of greatest importance lies in setting match criteria. On September 15th, I discussed an opioids-education study where matching criteria were so stringent that the authors were not able to match certain participants (physicians in the intervention group), and these participants’ data and group assignments were handled nicely and reported clearly in the paper [2] (see post at http://fullcirclece.blogspot.com/2015/09/eight-year-canadian-study-on-opioid.html).

Conversely, the first result listed in the abstract of this study of education on drug-drug interactions (DDIs) indicates a matching flaw: “The 2 groups were significantly different with respect to age, profession, specialty, and geographic region.” This finding undermines the study’s other strengths, namely that large samples (19,606 prescribers) were recruited to both groups (educational intervention vs. control) and matched on prescribing volume. Individualized education (also known as academic detailing) was delivered by trained pharmacists acting as clinical consultants who met with prescribers to “provide one-on-one information … promote evidence-based knowledge, create trusting relationships, and induce practice change.” This study’s performance (behavioral) measure was a reduced rate of prescribing potential DDIs. Instead, the prescribing of 25 clinically important, potential DDIs increased more in the intervention group than in the control group.

In conclusion, when we look at this presumably negative finding, we are left to wonder whether the educational intervention was ineffective, or whether a better matching process might have revealed different results on reducing potential DDIs and improving health care quality and utilization. One could argue that with nearly 20,000 prescribers in both samples, more matching criteria could have been applied without sacrificing so many data points that results would be inconclusive. The study’s retrospective design could also explain its recruitment and matching practices. In social sciences research (including educational outcomes research), a core expectation is generalizability of a sample to a population of interest; when reasonably achieved, generalizability lets us apply findings to practical needs and future decisions.
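
To illustrate the matching point, here is a minimal sketch of greedy 1:1 matching that is exact on the covariates the authors found imbalanced and nearest-neighbor on prescribing volume. The field names are my own assumptions, not the study's variables:

```python
def match_pairs(cases, controls):
    """Greedy 1:1 match: exact on specialty/region, nearest on rx volume."""
    used, pairs = set(), []
    for case in cases:
        best, best_dist = None, float("inf")
        for i, ctrl in enumerate(controls):
            if i in used:
                continue
            # Require exact match on the covariates reported as imbalanced
            if (ctrl["specialty"], ctrl["region"]) != (case["specialty"], case["region"]):
                continue
            dist = abs(ctrl["rx_volume"] - case["rx_volume"])
            if dist < best_dist:
                best, best_dist = i, dist
        if best is not None:
            used.add(best)
            pairs.append((case, controls[best]))
    return pairs  # cases with no eligible control are dropped, as in [2]

cases = [{"specialty": "IM", "region": "West", "rx_volume": 320}]
controls = [{"specialty": "IM", "region": "West", "rx_volume": 290},
            {"specialty": "FM", "region": "West", "rx_volume": 321}]
print(len(match_pairs(cases, controls)))  # 1: matched to the first control
```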

Recall the study conclusion quoted above: “The current study was not able to demonstrate a significant beneficial effect …” (emphasis added). A secondary analysis with different pair-matching practices might yet inform national initiatives in improving quality while reducing costs through academic detailing, both of which help patients. Now let’s remember to thank Malone, Liberman, and Sun for sharing their data and methods with the healthcare quality and educational research communities in the Journal of Managed Care & Specialty Pharmacy.

* AHRQ = United States Agency for Healthcare Research and Quality

References cited:
1. Malone DC, Liberman JN, Sun D. Effect of an educational outreach program on prescribing potential drug-drug interactions. J Manag Care Pharm. 2013;19(7):549-557. http://www.ncbi.nlm.nih.gov/pubmed/23964616. [Featured Article]
2. Kahan M, Gomes T, Juurlink DN, et al. Effect of a course-based intervention and effect of medical regulation on physicians’ opioid prescribing. Can Fam Physician. 2013;59(5):e231-e239. http://www.cfp.ca/content/59/5/e231.full.pdf+html.
Free Full Text: http://www.amcp.org/JMCP/2013/September_2013/17103/1033.html
MeSH “Major” Terms: Drug Interactions; Drug Prescriptions; Education, Medical, Continuing; Health Education; Physician's Practice Patterns; Prescription Drugs/administration & dosage

Saturday, September 12, 2015

Medical education with EMR-based reminders reduces antibiotic prescribing and dispensing for respiratory tract infections in Norway

British guidelines for otitis media support delayed antibiotic prescribing [1], and other countries, such as France, have guidelines to reduce certain antibiotic prescribing for otitis media [2]; Finnish guidelines, conversely, do not [3]. A 2013 Norwegian study published in the British Journal of General Practice compares the effectiveness of 2 interventions in promoting delayed primary care antibiotic prescribing for respiratory tract infections, including otitis [4].

Notwithstanding a complicated design for recruiting and assigning general practitioners across multiple sites, this article offers several interesting features. First, it compares an education-only intervention with the same education enhanced by pop-up reminders of a physician’s own prescribing patterns in the electronic medical record (EMR), a nice reinforcement of the educational intervention for participating physicians. While not a focus of this post, I would like to mention a new Penn study of adherence to guidelines on otitis media using EMRs for decision support at Children’s Hospital of Philadelphia [5]. This shows interest in implementation science combined with continuing medical education (CME) for changing physicians’ practice patterns.

The Norwegian study featured here [4] collected and linked data on prescribed and dispensed antibiotics from (a) 1 year before and (b) 1 year during the intervention, which allowed prescribing practice patterns to be displayed to physicians in the EMR at the point of prescribing antibiotics for a respiratory tract infection. It also collected pharmacy fill rates by patients, which I find interesting because it may offer insights into patients’ (or parents’) agreement with the need for the prescription, after any access barriers to medication adherence.
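
At its core, such record linkage reduces to joining prescriptions to dispensings on a shared identifier. A minimal sketch with invented records and field names:

```python
prescriptions = [
    {"rx_id": 1, "drug": "amoxicillin", "delayed": True},
    {"rx_id": 2, "drug": "phenoxymethylpenicillin", "delayed": False},
    {"rx_id": 3, "drug": "amoxicillin", "delayed": True},
]
dispensings = [{"rx_id": 2}]  # only prescription 2 was filled at a pharmacy

filled = {d["rx_id"] for d in dispensings}
fill_rate = sum(p["rx_id"] in filled for p in prescriptions) / len(prescriptions)
print(f"Overall fill rate: {fill_rate:.0%}")  # 33%
```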

Both study arms showed slightly reduced antibiotic prescribing from baseline (pre-intervention) rates: a 1% reduction vs. a 4% reduction in “approximated risk” (risk ratio, RR) in the education-only vs. education-plus-EMR study arms, respectively. Both results are reported with very tight 95% confidence intervals (CIs), increasing confidence in the findings. (It is also nice to see CIs reported instead of p values, given that many authors hesitate to report CIs because readers are more familiar with p values.) While the risk ratio may be simply a convenient and appropriate way of reporting epidemiological data, its use for reporting educational outcomes with practice data seems unusual to me, and perhaps a comment on antibiotic prescribing for these infections as itself a risk.
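
For readers who want the arithmetic behind a risk ratio and its CI, here is a minimal sketch using the standard log-scale normal approximation. The counts are invented, chosen only to echo a 4% relative reduction; they are not the study's data:

```python
import math

def risk_ratio_ci(events_1, n_1, events_0, n_0, z=1.96):
    """Risk ratio with a 95% CI via the log-scale normal approximation."""
    rr = (events_1 / n_1) / (events_0 / n_0)
    se = math.sqrt(1/events_1 - 1/n_1 + 1/events_0 - 1/n_0)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Invented counts echoing a ~4% relative reduction in prescribing
rr, lo, hi = risk_ratio_ci(events_1=480, n_1=1000, events_0=500, n_0=1000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # RR = 0.96 (0.88 to 1.05)
```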

The authors find that upper respiratory tract infection, sinusitis, and otitis “gave highest odds for delayed prescribing and lowest odds for dispensing,” which led them to conclude that the potential for “savings” is greatest for these infections, a comment that brings this CME study with implementation science into the context of health utilization research. The article offers freely accessible full text, so enjoy reading the study.

References cited:
1. Centre for Clinical Practice at NICE (UK). Respiratory Tract Infections - Antibiotic Prescribing: Prescribing of Antibiotics for Self-Limiting Respiratory Tract Infections in Adults and Children in Primary Care. London: National Institute for Health and Clinical Excellence (UK); 2008 Jul. http://www.ncbi.nlm.nih.gov/pubmedhealth/PMH0010014/.
2. Levy C, Pereira M, Guedj R, et al. Impact of 2011 French guidelines on antibiotic prescription for acute otitis media in infants. Médecine Mal Infect. 2014;44(3):102-106. http://www.ncbi.nlm.nih.gov/pubmed/24630597.
3. [Update on current care guidelines: acute otitis media]. Duodecim. 2010;126(5):573-4. Finnish. http://www.ncbi.nlm.nih.gov/pubmed/20597310.
4. Hoye S, Gjelstad S, Lindbaek M. Effects on antibiotic dispensing rates of interventions to promote delayed prescribing for respiratory tract infections in primary care. Br J Gen Pract. 2013;63(616):e777-e786. http://bjgp.org/content/63/616/e777.full.pdf. [Featured Article]
5. Fiks AG, Zhang P, Localio AR, et al. Adoption of electronic medical record-based decision support for otitis media in children. Health Serv Res. 2015;50(2):489-513. http://www.ncbi.nlm.nih.gov/pubmed/25287670.  
MeSH *Major* terms: Anti-Bacterial Agents/therapeutic use*; Education, Medical, Continuing*; General Practice/statistics & numerical data*; Physician's Practice Patterns/statistics & numerical data*; Respiratory Tract Infections/drug therapy* 

Thursday, September 10, 2015

3-Hospital Quality- and Performance-Improvement CME Project With Systems Change Support for Diabetes Care, With CE Measure Editor, Derek Dietze

This performance-improvement/continuing medical education (PI-CME) study by Lapolla and colleagues at three mid-sized, regional hospitals in the United States focused on clinicians’ behaviors that can be documented as “validated metrics of diabetes care” in patient charts.

When you look at this example of a multi-center performance-improvement initiative with educational and institutional change supports, you’ll see that most of the intervention involved obvious reminders of the change initiative and its measures, graphs and data regarding performance trends, and a PI specialist as leader or champion of the campaign in each hospital. Even though the researchers chose hospitals that were willing to invest in change, the following implementation tools and collaborative approaches were incorporated as necessary for change: "In the design and implementation of this program, we applied recognized PI principles and developed a dedicated working group to evaluate, monitor, and disseminate data, provide timely feedback, monitor outliers, attend to project management details, and maintain support of institutional leadership. We encouraged physicians' engagement by minimizing their time requirements, soliciting their input throughout the initiative, sharing meaningful data, and taking an 'improvement-oriented' approach rather than 'mandating change.'” [Emphases added.]

I want to point out the reminder of the American Medical Association’s recommendation to use a three-stage PI-CME structure, “comprising assessment of current performance (Stage A), design and implementation of strategies to improve performance (Stage B), and evaluation of PI efforts (Stage C).” Any outcomes project or program evaluation that sets its goals after educational content and resources are nearly final faces great challenges for later measurement. Permit me to include related content from the May 2015 Almanac article that Erik Brady and I wrote for the series, “Beginner’s Guide to Measuring Educational Outcomes in CEhp”: "A common error in assessment item-writing is the construction of assessment items that focus on a minor or trivial data point found in the content. This practice is particularly common in two cases: first, when assessment items are written from finished content that offers too little material for assessment; and second, when the minimum score a learner needs to request educational credit dictates the number of items on a tool, causing planners to test trivial points in their desperation to hit an arbitrary quota."

Because assessment items are optimally designed to assess how well a learning objective has been met, aligning a learning objective with an assessment item should ensure that your items are focused on the key points of the activity content and that activity content consistently supports learners’ achievement of the educational and performance objectives. Thus when Lapolla et al planned the educational content and reinforcing health-systems supports according to well-established metrics, they made certain that they had all desired outcomes mapped to hospital-specific care gaps before they started. This allowed the research team to provide “timely feedback” and manage the project better at each institution by identifying outliers in the PI datasets. A previously featured article in this month’s “Back to School” Tweet Fest mentioned incorporation of VTE prophylaxis therapies into order sets, and how this was effective. Notably, we see the same in this study: “the PI specialists at all 3 participating hospitals saw marked improvement once order sets included the metrics.”
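
Monitoring outliers in PI datasets need not be elaborate. A minimal sketch, with invented site rates and an arbitrary z-score cutoff, shows the idea:

```python
from statistics import mean, stdev

def flag_outliers(site_rates, z_cut=2.0):
    """Flag sites whose metric rate sits more than z_cut SDs from the mean."""
    mu, sd = mean(site_rates.values()), stdev(site_rates.values())
    return [site for site, r in site_rates.items() if abs(r - mu) > z_cut * sd]

monthly_rates = {"hospital_a": 0.78, "hospital_b": 0.81, "hospital_c": 0.35}
print(flag_outliers(monthly_rates, z_cut=1.0))  # ['hospital_c']
```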

Look to this article for its insights into quality improvement, implementation science, and educational methods. It is also a model of clear, brief writing for a research report, especially remarkable given the number of study authors. Finally, note that Derek Dietze, Editor-in-Chief of CE Measure, participated in this large PI-CME study on improving care practices for diabetes, one of the most challenging epidemiological issues in the United States.

References cited:
Lapolla J, Morrice A, Quinn S, et al. Diabetes management in the hospital setting: a performance improvement continuing medical education program. CE Meas. 2013;7(1):54-60. doi:10.1532/CEM08.12103. http://www.cardenjenningspublishing.com/journal/index.php/cem/article/view/116. Accessed September 10, 2015. [FREE full text]
Brady ED, Binford SH. How to write sound educational outcomes questions: a focus on knowledge and competence assessments [series: “Beginner’s Guide to Measuring Educational Outcomes in CEhp”]. The Almanac. 2015;37(5):4-9. http://www.acehp.org/p/do/sd/topic=216&sid=811. Accessed September 10, 2015. [Full text]
Raffini L, Trimarchi T, Beliveau J, Davis D. Thromboprophylaxis in a pediatric hospital: a patient-safety and quality-improvement initiative. Pediatrics. 2011 May;127(5):e1326-32. doi: 10.1542/peds.2010-3282. http://pediatrics.aappublications.org/content/127/5/e1326.full.pdf+html. Accessed September 8, 2015. [FREE full text (Pediatrics final version)] 

Monday, September 7, 2015

Mixed-Methods Study Improves Team Communication After Non-Didactic Interprofessional Education on Cardiac Surgical Crisis

It would be hard to imagine a more crucial setting for effective interdisciplinary clinical interactions than the cardiac surgery operating theater. Stevens and colleagues published this 2012 pilot study on interprofessional education to “sharpen performance of experienced cardiac surgical teams in acute crisis management.”

The educational methods support existing effectiveness research for non-didactic education, incorporating both interactive workshops for an entire care unit and computer-based, crisis-case simulations (whose “high-realism” scenarios improved over time). Researchers found that 82% of the 79 participants recommended repeating the case simulations every 6-12 months. Workshop participants identified priorities in “encouraging speaking up about critical information and interprofessional information sharing,” particularly early communication of the surgical plan.

The mixed-methods outcomes assessment is also noteworthy for its appropriateness to this study of human communications and behaviors during a patient crisis: structured interviews with study participants added context and insights to the quantitative data gathered from periodic surveys. Surveys administered before, just after, and 6 months after the educational activities showed that the “concept of working as a team improved between surveys,” as well as “trends for improvement in gaining common understanding of the plan before a procedure and appropriate resolution of disagreements.” The qualitative arm of the study found that interviewees valued the initiative’s “positive effect on their personal behaviors and patient care, including speaking up more readily and communicating more clearly.”
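
The quantitative arm of such a design boils down to paired pre/post comparisons. A minimal sketch with invented Likert-style teamwork ratings (the study's actual instruments and statistics may differ):

```python
from math import sqrt
from statistics import mean, stdev

def paired_change(pre, post):
    """Mean paired change and its t statistic (df = n - 1)."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs), mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

pre  = [3.1, 3.4, 2.8, 3.0, 3.6]  # hypothetical 1-5 teamwork ratings, pre
post = [3.8, 3.9, 3.2, 3.5, 3.7]  # the same raters, post-activity
delta, t = paired_change(pre, post)
print(f"Mean change {delta:+.2f} (t = {t:.2f}, df = {len(pre) - 1})")
```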

In the continuing medical education field, we often see Canadians leading educational research, standards, and innovative methods. In fact, the U.S. National Library of Medicine’s assignment of Medical Subject Headings (MeSH terms) to this indexed article alone shows the relevance of this study to medical education methods for promoting competence in decision-making, performance-in-practice change, and quality improvement (see “major” MeSH terms listed below, and others on the PubMed page). One hopes to see a follow-up to this pilot study at the Centre Hospitalier Universitaire de Montréal (Quebec, Canada).

MeSH *Major* terms: Cardiac Surgical Procedures/education; Clinical Competence; Critical Care/standards; Education, Medical, Continuing/methods; Patient Care Team/organization & administration