What Evidence Should Guide Treatment?

June 15, 2016 at 6:45 PM

It’s an old lament: They don’t like us, and we don’t like them. Practitioners and researchers, that is; us and them being whichever one you are, and aren’t, respectively. So why don’t we get along? We all care about the same things, don’t we?

Researchers complain that practitioners don’t read the research. “That’s not responsible!” they say. “They should be using what works, not just whatever they feel like doing. Why don’t they listen to us?” Good point. Practitioners should be paying attention to the research, to improve practice based on what has been shown to work. So why don’t they?

Well... Practitioners complain that researchers are too focused on doing studies that are not relevant to practice. “So why bother to read that junk? It’s got nothing to do with my work!” Hmm. Good point. So why do they do those studies?

The academic culture has traditionally valued papers published in the highest-status journals, which generally have the highest scientific standards. Of course, the more trivial the subject of study, the easier it is to achieve the high level of control required for publication in those journals. And although the scholarly journals may also include some clinically relevant nuggets, many practitioners don’t have the time or patience to dig for them.

This state of affairs leaves everyone frustrated. Researchers really do want their work to be used to improve practice; and practitioners really would like to know about clinically relevant research – but only that, without all the clutter.

The field has moved beyond mere interest in bridging the research-practice gap to the necessity of doing so. This necessity is most clearly apparent in the dreaded and ever-growing pressure to provide empirically supported treatments. This is, on the whole, a positive development, though with a major caveat.

First, the positive. It is reasonable and beneficial to expect that mental health professionals should provide effective treatments when known-effective treatments are available. Otherwise the customer risks paying for an inferior product/service. Today plenty of people are receiving months or even years of treatment for a problem that could routinely be resolved in relatively few sessions, using a known-effective treatment. This is a scandal; it’s bad for the public and it’s bad for the field. This should be corrected, and the movement towards empirically supported treatments has the potential to effect the correction.

Now for the caveat. Too much enthusiasm to promote empirically supported treatments can actually do harm and prevent clients from receiving the most effective available treatments. This is because not all empirically supported treatments are actually effective, and not all actually-effective treatments have been established as empirically supported treatments.

What is empirically supported is not necessarily effective. That is, many so-called empirically supported treatments were tested in laboratory settings, with trained and supervised therapists using treatment manuals, as well as research participants who have only the problem under study and no other problem. This can result in a cherry-picking effect in which the treatment may be proven efficacious in the laboratory setting, for uniquely easy-to-treat clients. However, this does not tell us whether the same treatment, as used by therapists in field/practice settings, will work with their real-world clients.

For a proven-efficacious treatment to be embraced by clinicians, the treatment should be tested in the field. Unfortunately, this step is not always done, and those regulating and/or funding treatment may not know the difference between an efficacious (laboratory-tested) and effective (field-tested) treatment. But practitioners know the difference. This is why the laboratory studies, often called "efficacy" studies, although highly valued in the scholarly journals, tend to be scorned by practitioners.

What is effective is not necessarily empirically supported. This does not mean that it cannot be empirically supported – only that it has not yet been subject to the testing that would earn it that designation. Not every study has been done yet. Also, because of limited resources, psychotherapy research tends to be conducted on brief symptom-focused treatments that can be neatly and efficiently studied with a relatively limited expenditure. Thus, the literature on empirically-supported treatments tends to favor the cognitive-behavioral "procedure" treatments that strictly target specific symptoms.

Although such treatments may indeed be effective, or even superior, for certain symptoms or disorders, the reality is more complex. For example, the Consumer Reports study (Seligman, 1995), using a retrospective research design, found that most therapy clients reported greater benefit from longer treatments. This sharply contrasted with much other psychotherapy research that found greater benefit for the shorter, easier-to-study symptom-focused treatments (typically compared, in those studies, to other brief treatments only). The finding of greater benefit for longer-term treatment may reflect the effectiveness of certain treatment approaches, of longer-term treatment per se, and/or of the impact of the so-called non-specific factors (such as empathy, positive regard, therapeutic alliance) that have been shown to contribute to positive outcomes (Norcross, 2002). The Consumer Reports findings highlight the risk that the easy-to-study treatments may overshadow other treatments that may actually be, in the long run, more effective and more beneficial for clients.

Identifying what works. To determine which treatments are most worthy of doing (and paying for), the literature should be analyzed in a way that values effectiveness over mere efficacy. Such analysis might also look for presence/absence of proven-effective treatment components, as well as the non-specific factors which may be more present/potent in some treatment approaches, even if the particular treatment approach in question has not yet undergone formal testing.

In lieu of appropriate analysis, there is a real risk that a preference for empirically supported treatments could inappropriately lead to the funding/use of efficacious but ineffective treatments, saddling clinicians with requirements to use treatments that do not work well with their clients. For example, I’m currently collaborating with an agency whose therapists are all trained in the most-research-supported child trauma therapy method, and it is their primary treatment modality, yet only a tiny percentage of their clients complete the trauma resolution work. By contrast, on a project with another agency – similarly focused on child victims of crime – that used a less established method, nearly every client made it through the trauma work, with excellent outcomes (Descilo, Greenwald, Schmitt, & Reslan, 2010).

The preference for empirically supported treatment may also lead to refusal to fund/support actually-effective treatments that clinicians find useful but that have not yet been formally tested. For example, I worked with an agency that had been providing excellent therapy on a county contract, but then lost the contract to another agency that provided one of the evidence-based treatment approaches. Unfortunately, the new agency’s treatment was much less effective, and the clients suffered.

In conclusion, we don’t want to throw the baby out with the bathwater, but neither do we want to leave the baby soaking in that old bathwater if we have some better way to wash it. The movement to bring proven-effective methods into clinical practice (Norcross, Beutler, & Levant, 2006) has to balance what the research supports with what works with real clients. This means that treatment researchers should focus more on real clients in field/practice settings, and practitioners should systematically collect outcome data. When research and practice work together, we get better at helping our clients.

References

Descilo, T., Greenwald, R., Schmitt, T. A., & Reslan, S. (2010). Traumatic incident reduction for urban at-risk youth and unaccompanied minor refugees: Two open trials. Journal of Child & Adolescent Trauma, 3, 181-191.

Norcross, J. C. (Ed.). (2002). Psychotherapy relationships that work: Therapist contributions and responsiveness to patients. New York: Oxford University Press.

Norcross, J. C., Beutler, L. E., & Levant, R. F. (Eds.). (2006). Evidence-based practices in mental health. Washington, DC: American Psychological Association.

Seligman, M. E. P. (1995). The effectiveness of psychotherapy: The Consumer Reports study. American Psychologist, 50, 965-974.




Posted by laura on
Remove fatherhood funding to prevent it from happening. Are you looking for a band-aid or a cure?
Posted by Ricky Greenwald on
I take it that you're suggesting that if we stop using government funds to subsidize domestic abusers' lawyers who help them maintain access to and control of their victims, we'll have a lot less trauma to treat. I agree -- as you can guess from some of the other blog posts. But for what doesn't get prevented, there's treatment, which is the focus of this post.
Posted by Scott Petersen on
Well said Ricky. A measured, thoughtful critique of the need to integrate evidence-based practice (EBP) with practice-based evidence. I appreciate your differentiating empirically supported treatments and evidence-based practice, the latter of which can be noun or verb. EBP, as I've come to understand it, occurs at the intersection of empirically supported treatments, evidence-based assessment, and evidence-based relationships (i.e., empathy, collaboration, goal consensus, feedback, rupture & repair) - among other factors. The Institute of Medicine (2001) defines evidence-based practice as the “integration of best researched evidence AND clinical expertise WITH patient values” (p. 147).
Additionally, various trauma-specific treatments are identified as efficacious, with no one treatment having superiority (Najavits, 2015; Najavits & Anderson, 2015); both past- and present-focused treatments work, and neither consistently outperforms the other in terms of outcomes based on RCTs. And as you point out, there is the added consideration of the limitations of unrepresentative samples in many RCTs. As to treatment decision-making, there are the challenges of limited access to professional journals for many practitioners, in which much of the research is published. If we accept the finding that empirically supported trauma-specific treatments lead to similar outcomes, the decision re: treatment might also be informed by client presenting concern(s) and preference; practitioner preference, training, and skill; and the organizational setting when relevant.
Posted by Ricky Greenwald on
Scott, as to treatment decision-making... I'd like to see more focus not only on ecological validity, but on acceptability to clients (e.g., dropout rate) and treatment efficiency. I'm working on a meta-analysis that is finding that some trauma treatments are more equal than others on these factors. But it'll take a while to get published, as we have some serious re-doing to do.