June 12, 2015 by Nicholas Spence
Yes, motivational interviewing is all around. In previous posts, I have highlighted its popularity across a cross section of clinical settings, research, and community interventions. A troubling trend, however, is the lack of attention paid to treatment fidelity—in other words, to examining the extent to which the intervention is being delivered in a manner consistent with the main tenets of motivational interviewing. In the absence of such information, any changes (or lack thereof) in outcomes cannot be attributed to a motivational interviewing intervention with much confidence. This also undermines our ability to scale the intervention appropriately, given that a lower- or higher-quality motivational interviewing intervention may be required to achieve a desired outcome.
This issue received some much needed consideration in a recent article by Jelsma et al. (2015), “How to Measure Motivational Interviewing Fidelity in Randomized Controlled Trials: Practical Recommendations” in the journal Contemporary Clinical Trials. The article provides an overview of motivational interviewing fidelity, followed up by some practical suggestions for those engaged in this area of behavior change. I will briefly discuss some of them.
Although there are a few measures available to assess motivational interviewing treatment fidelity, such as the Global Rating of Motivational Interviewing Therapist (GROMIT), the Sequential Code for Observing Process Exchanges (SCOPE) instrument, or the Motivational Interviewing Skill Code (MISC), the authors centered their attention on the new release of the Motivational Interviewing Treatment Integrity (MITI) Code version 4.1.
With a focus on the verbal behavior of the practitioner, the MITI provides a measure of good practice of motivational interviewing. This includes a broad range of issues, such as an empathy rating (how well the practitioner understands the client’s perspective, experiences and feelings), use of open and closed questions, cultivating change talk, softening sustain talk, and partnership.
In terms of data collection, it is suggested that all sessions be audio recorded, with a random sample chosen for analysis from different points in time throughout the study. In previous randomized controlled trials where treatment fidelity has been examined, 11-32% of the sessions were assessed, and the authors recommend analyzing 20% of the study sample. When, as typically occurs, the intervention is delivered by more than one interviewer, at least 4 conversations per interviewer or 20 conversations per intervention group should be assessed, with the overall fidelity measure reported as the average across interviewers.
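As a rough illustration of the sampling plan above (my own sketch, not from the article), the steps can be expressed in a few lines of Python. The session log, interviewer labels, and random seed here are all invented for the example:

```python
import random

# Hypothetical session log: (interviewer, week, session_id) tuples.
# Three interviewers, 12 weeks, 3 recorded sessions per week each.
sessions = [
    (interviewer, week, f"s-{interviewer}-{week}-{i}")
    for interviewer in ("A", "B", "C")
    for week in range(1, 13)
    for i in range(3)
]

random.seed(42)  # fixed seed so the draw is reproducible

# Draw a simple random sample of 20% of all recorded sessions,
# per the authors' recommendation.
sample_size = max(1, round(0.20 * len(sessions)))
sample = random.sample(sessions, sample_size)

# Check the suggested minimum of 4 assessed conversations
# per interviewer; interviewers below it would need a top-up draw.
per_interviewer = {}
for interviewer, _, _ in sample:
    per_interviewer[interviewer] = per_interviewer.get(interviewer, 0) + 1
short = [who for who, n in per_interviewer.items() if n < 4]
print(f"sampled {len(sample)} of {len(sessions)} sessions; under quota: {short}")
```

Because the draw spans the whole session log, it naturally covers different points in the study; a stratified draw (by interviewer or by study phase) would be one way to guarantee the per-interviewer minimum rather than merely checking it.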
Coding sessions with the tool to assess motivational interviewing fidelity is a time-consuming and rigorous process, which, ideally, is handled by a specialized coding lab founded specifically for assessing treatment fidelity in research. In most cases, this may not be feasible; instead, using local experts and/or training inexperienced individuals (40+ hours) to use the MITI is a way to proceed. How many people does this job require? There should be at least two qualified coders, so that the degree of agreement among raters (inter-rater reliability) can be assessed.
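To make the two-coder requirement concrete, here is a minimal sketch of one common agreement statistic, Cohen's kappa, computed over categorical behavior codes. The coder labels and code categories below are invented for illustration; note that for the MITI's continuous global scores, intraclass correlation is the more usual reliability statistic:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed proportion of utterances both coders labeled identically.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal code frequencies.
    counts1, counts2 = Counter(rater1), Counter(rater2)
    expected = sum(counts1[k] * counts2[k] for k in counts1) / (n * n)
    return (observed - expected) / (1 - expected)

# Two coders' labels for the same ten utterances (categories are made up).
coder_a = ["Q", "Q", "R", "R", "GI", "Q", "R", "GI", "Q", "R"]
coder_b = ["Q", "R", "R", "R", "GI", "Q", "R", "GI", "Q", "Q"]

print(round(cohens_kappa(coder_a, coder_b), 4))  # 0.6875
```

A pre-registered rule for the study—for example, re-coding any session where the two coders diverge beyond a set threshold—is the kind of discrepancy-handling procedure the reporting section below asks for.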
When reporting, the entire measure of motivational interviewing fidelity should be provided. Researchers must, however, decide on the focus of fidelity measurement—the intervention group, individual interviewers, or interviewers over time. As well, much thought should be directed towards the level of motivational interviewing required to be competent for the intervention to be effective. Finally, an account of the coding results is required, including reliability scores between coders and rules for handling large discrepancies in the ratings between them.
A point warranting serious consideration is that motivational interviewing skills can vary between interviewers, with some reaching recommended thresholds and others never doing so. Interviewer fidelity may therefore be a variable that needs to be taken into account at the analysis stage. Relatedly, the tool may be used to monitor interviewer skills during the course of a study and allow corrective measures to be taken, raising the efficacy of the intervention—its performance under ideal, controlled, and artificial conditions. This, however, may overstate the intervention's effectiveness, or what is achievable in real-life clinical practice, limiting the external validity or generalizability of the study.
Methodological papers like this may seem less thrilling than studies finding clinical applications of motivational interviewing for yet another health behavior; however, findings are only as useful as the study's adherence to the methodological principles that underscore the credibility and true value of the work.