Thursday, September 17, 2015

Study Design and Paired Comparisons: Individualized Education Fails to Change Practice—Or Was It Only Poor Matching?

We should commend Malone et al. for submitting this AHRQ-supported* study [1] for publication even though a flaw in its design or execution may be the main reason for the authors' conclusion that "the current study was not able to demonstrate a significant beneficial effect of the educational outreach program on [the primary performance outcome measure]." This blog's "Back-to-School" service campaign did not exclude studies reporting negative outcomes, because such studies can inform continuing education in the health professions (CEhp) as much as positive studies can.

CEhp/CME educational proposals, audience-generation strategies, and outcomes reports now specify relevant "target audiences," recognizing that not all practitioners with a given degree, specialty, or other professional demographic would benefit from the same educational activity or design. With this recognition of the importance of targeting specific clinicians and learning about their needs has come a parallel recognition that many CE participants should not be included in aggregated data. This is even truer in studies with matched pairs, where the most important step is setting the match criteria. On September 15th, I discussed an opioids-education study whose matching criteria were so stringent that the authors could not match certain participants (physicians in the intervention group); those participants' data and group assignments were handled transparently and reported clearly in the paper [2] (see post at http://fullcirclece.blogspot.com/2015/09/eight-year-canadian-study-on-opioid.html).

Conversely, the first result listed in this study's abstract points to a matching flaw in a study of education on drug-drug interactions (DDIs): "The 2 groups were significantly different with respect to age, profession, specialty, and geographic region." This finding undermines the study's other strengths, namely that large samples (19,606 prescribers) were recruited to both groups (educational intervention vs. control) and matched on prescribing volume. Individualized education (also known as academic detailing) was delivered by trained pharmacists acting as clinical consultants who met with prescribers to "provide one-on-one information … promote evidence-based knowledge, create trusting relationships, and induce practice change." The study's performance (behavioral) measure was the rate of prescribing 25 clinically important potential DDIs, which the intervention was expected to reduce. Instead, prescribing of those potential DDIs increased more in the intervention group than it did in the control group.
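
To make that abstract finding concrete: a common way to detect this kind of imbalance is to compute standardized mean differences (SMDs) between groups, where an SMD above roughly 0.1 is a frequently used flag. The sketch below is illustrative only, with synthetic data and hypothetical covariate names; it is not the authors' analysis.

```python
# Minimal sketch: checking covariate balance between intervention and
# control groups with standardized mean differences (SMDs).
# All data here are synthetic and for illustration only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 19606  # prescribers per group, as reported in the study

groups = pd.DataFrame({
    "group": ["intervention"] * n + ["control"] * n,
    # Hypothetical covariates: age is deliberately imbalanced,
    # prescribing volume (the matched variable) is not.
    "age": np.concatenate([rng.normal(48, 9, n), rng.normal(51, 9, n)]),
    "rx_volume": np.concatenate([rng.normal(320, 60, n),
                                 rng.normal(321, 60, n)]),
})

def smd(df, covariate):
    """Standardized mean difference between the two groups."""
    a = df.loc[df["group"] == "intervention", covariate]
    b = df.loc[df["group"] == "control", covariate]
    pooled_sd = np.sqrt((a.var() + b.var()) / 2)
    return (a.mean() - b.mean()) / pooled_sd

for cov in ["age", "rx_volume"]:
    print(f"{cov}: SMD = {smd(groups, cov):+.3f}")
# 'age' shows imbalance while the matched variable stays balanced --
# the same pattern the study's abstract describes.
```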

In conclusion, when we look at this presumably negative finding, we are left to wonder whether the educational intervention was ineffective, or whether a better matching process might have revealed different results on reducing potential DDIs and improving health care quality and utilization. One could argue that, with nearly 20,000 prescribers in each sample, more matching criteria could have been applied without sacrificing so many data points that the results would be inconclusive; the sketch below illustrates how that trade-off can be measured. The study's retrospective design could also explain its recruitment and matching practices. In social sciences research (including educational outcomes research), a core expectation is generalizability of a sample to a population of interest; when reasonably achieved, generalizability lets us apply findings to practical needs and future decisions.
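
As a rough illustration of that argument, here is a minimal sketch of 1:1 exact matching on several criteria at once. The data frames and column names are hypothetical, and this is not the study's actual matching procedure; the point is that counting the surviving pairs shows exactly how much data a stricter criterion set costs.

```python
# Minimal sketch: 1:1 exact matching without replacement on an
# arbitrary set of criteria. Assumed (hypothetical) inputs: two pandas
# DataFrames with one row per prescriber and shared covariate columns.
import pandas as pd

def exact_match(intervention, control, criteria):
    """Pair each intervention row with one unused control row sharing
    identical values on every matching criterion; rows with no
    available match are dropped, as in a matched-pair design."""
    # Index available controls by their combination of criterion values.
    pool = {}
    for idx, row in control.iterrows():
        pool.setdefault(tuple(row[c] for c in criteria), []).append(idx)
    pairs = []
    for idx, row in intervention.iterrows():
        key = tuple(row[c] for c in criteria)
        if pool.get(key):  # an unused control remains in this stratum
            pairs.append((idx, pool[key].pop()))
    return pd.DataFrame(pairs, columns=["intervention_id", "control_id"])

# Hypothetical usage: compare how many pairs survive as criteria
# tighten from prescribing volume alone to volume plus the covariates
# the abstract flags as imbalanced.
# loose  = exact_match(intervention_df, control_df, ["rx_volume_decile"])
# strict = exact_match(intervention_df, control_df,
#                      ["rx_volume_decile", "specialty", "region"])
# print(len(loose), "vs", len(strict), "matched pairs")
```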

Recall the study conclusion quoted above: "The current study was not able to demonstrate a significant beneficial effect …" (emphasis added). A secondary analysis with different pair-matching practices might yet inform national initiatives that use academic detailing to improve quality while reducing costs, both of which benefit patients. Now let's remember to thank Malone, Liberman, and Sun for sharing their data and methods with the healthcare quality and educational research communities in the Journal of Managed Care & Specialty Pharmacy.

* AHRQ = United States Agency for Healthcare Research and Quality

References cited:
1. Malone DC, Liberman JN, Sun D. Effect of an educational outreach program on prescribing potential drug-drug interactions. J Manag Care Pharm. 2013;19(7):549-557. http://www.ncbi.nlm.nih.gov/pubmed/23964616. [Featured Article]
2. Kahan M, Gomes T, Juurlink DN, et al. Effect of a course-based intervention and effect of medical regulation on physicians’ opioid prescribing. Can Fam Physician. 2013;59(5):e231-e239. http://www.cfp.ca/content/59/5/e231.full.pdf+html.
Free Full Text: http://www.amcp.org/JMCP/2013/September_2013/17103/1033.html
MeSH “Major” Terms: Drug Interactions; Drug Prescriptions; Education, Medical, Continuing; Health Education; Physician's Practice Patterns; Prescription Drugs/administration & dosage
