Monday, October 5, 2009

Clinicians have good reason to ignore this "Evidence"

Posted in Uncategorized on 10/02/2009 03:14 pm by Dr. Barry Duncan, http://heartandsoulofchange.com/blog/
Rebecca just posted this article on the Heroicagency Listserv, and as she said, it begged a response.

Ignoring the Evidence: Why do psychologists reject science?
By Sharon Begley, NEWSWEEK
Published Oct 2, 2009

From the magazine issue dated Oct 12, 2009

It’s a good thing couches are too heavy to throw, because the fight brewing among therapists is getting ugly. For years, psychologists who conduct research have lamented what they see as an antiscience bias among clinicians, who treat patients. But now the gloves have come off. In a two-years-in-the-making analysis to be published in November in Perspectives on Psychological Science, psychologists led by Timothy B. Baker of the University of Wisconsin charge that many clinicians fail to “use the interventions for which there is the strongest evidence of efficacy” and “give more weight to their personal experiences than to science.” As a result, patients have no assurance that their “treatment will be informed by science.” Walter Mischel of Columbia University, who wrote an accompanying editorial, is even more scathing. “The disconnect between what clinicians do and what science has discovered is an unconscionable embarrassment,” he told me, and there is a “widening gulf between clinical practice and science.”
The “widening” reflects the substantial progress that psychological research has made in identifying the most effective treatments. Thanks to clinical trials as rigorous as those for, say, cardiology, we now know that cognitive and cognitive-behavior therapy (teaching patients to think about their thoughts in new, healthier ways and to act on those new ways of thinking) are effective against depression, panic disorder, bulimia nervosa, obsessive-compulsive disorder, and posttraumatic stress disorder, with multiple trials showing that these treatments—the tools of psychology—bring more durable benefits with lower relapse rates than drugs, which non-M.D. psychologists cannot prescribe. Studies have also shown that behavioral couples therapy helps alcoholics stay on the wagon, and that family therapy can help schizophrenics function. Neuroscience has identified the brain mechanisms by which these interventions work, giving them added credibility.
You wouldn’t know this if you sought help from a typical psychologist. Millions of patients are instead receiving chaotic meditation therapy, facilitated communication, dolphin-assisted therapy, eye-movement desensitization, and, well, “someone once stopped counting at 1,000 forms of psychotherapy in use,” says Baker. Although many treatments are effective, they “are used infrequently,” he and his coauthors point out. “Relatively few psychologists learn or practice” them.
Why in the world not? Earlier this year I wrote a column asking, facetiously, why doctors “hate science,” meaning why do many resist evidence-based medicine. The problem is even worse in psychology. For one thing, says Baker, clinical psychologists are “deeply ambivalent about the role of science” and “lack solid science training”—a result of science-lite curricula, especially in Psy.D. programs. Also, one third of patients get better no matter what therapy (if any) they have, “and psychologists remember these successes, attributing them, wrongly, to the treatment. It’s very threatening to think our profession is a charade.”
When confronted with evidence that treatments they offer are not supported by science, clinicians argue that they know better than some study what works. In surveys, they admit they value personal experience over research evidence, and a 2006 Presidential Task Force of the American Psychological Association—the 150,000-strong group dominated by clinicians—gave equal weight to the personal experiences of the clinician and to scientific evidence, a stance they defend as a way to avoid “cookbook medicine.” A 2008 survey of 591 psychologists in private practice found that they rely more on their own and colleagues’ experience than on science when deciding how to treat a patient. (This is less true of psychiatrists, since these M.D.s receive extensive scientific training.) If they keep on this path as insurers demand evidence-based medicine, warns Mischel, psychology will “discredit and marginalize itself.”
If public shaming doesn’t help, Baker’s team suggests a new accreditation system to “stigmatize ascientific training programs and practitioners.” (The APA says its current system does require scientific training and competence.) Two years ago the Association for Psychological Science launched such a system to compete with the APA’s.
That may produce a new generation of therapists who apply science, but it won’t do a thing about those now in practice.
Find this article at http://www.newsweek.com/id/216506

My Response:
There are many inaccuracies in this story—not the least of which is the distortion of APA’s definition of evidence-based practice, which unequivocally does not give equal weight to the personal experiences of the clinician and scientific evidence—but I will focus here on the “evidence” claiming that the noted approaches are the most effective. Perhaps clinicians are ignoring the researchers quoted in the article because the brand of evidence they are selling is not credible or relevant to their work. They fail to mention the most replicated finding in the psychological literature: namely, that no one treatment model, including the cognitive and cognitive-behavioral models canonized in the article, has reliably shown any superiority over other treatments. Moreover, treatment models account for a very small amount of the variance of change.

As just one example of these robustly demonstrated findings, consider the landmark NIMH study of depression, in which cognitive-behavioral therapy was compared to interpersonal therapy and antidepressants. No differences emerged between the treatments—they all worked about the same (although the talk therapies did better at follow-up). Treatment model differences accounted for only 2% of the variance of change. What did explain the changes achieved by the clients? The quality of the relationship/alliance between the clinician and the client accounted for 21% of the variance, and the person of the clinician, not the treatment delivered, explained another 8%. This is why clinicians don’t rally around the flag of different treatments making false claims of superior effectiveness. They know that other factors are far more important—psychotherapy is a richly nuanced interpersonal event that defies being reduced to a diagnosis and a treatment model.
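To put those percentages side by side, here is a back-of-the-envelope tally of the figures cited above; the unexplained remainder is my own arithmetic by subtraction, not a number reported in the study:

```latex
% Rough tally of the variance-of-change figures quoted in the post.
% The residual line is inferred by subtraction, not reported data.
\begin{align*}
  \text{treatment model}                 &\approx 2\%\\
  \text{client--clinician alliance}      &\approx 21\% \quad (\text{roughly ten times the model effect})\\
  \text{person of the clinician}         &\approx 8\%\\
  \text{left unexplained by these three} &\approx 100\% - (2 + 21 + 8)\% = 69\%
\end{align*}
```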
The much-ballyhooed models have only shown themselves to be better than sham treatments or no treatment at all, which is hardly news worth writing home to mom about. Think about it. What if one of your friends went out on a date with a new person, and when you asked about the guy, your friend replied, “He was better than nothing—he was unequivocally better than watching TV or washing my hair.” (Or, if your friend were a researcher: “…he was significantly better, at a 95% confidence level, than watching TV or washing my hair.”) How impressed would you be?
Finally, the success of any treatment is not guaranteed, regardless of its evidentiary support or the expertise of the therapist. As the APA Task Force noted, the response of the client is variable and must therefore be monitored, with treatment tailored accordingly to ensure a positive outcome. Monitoring outcomes with clients, what has been called practice-based evidence, has been shown to significantly improve treatment outcomes regardless of the treatment administered, a far more powerful influence on outcome than the specific approach delivered.
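As a minimal sketch of what “monitoring outcome and tailoring treatment” can look like operationally, consider the toy routine below; the scale, expected trajectory, and cutoff are hypothetical placeholders chosen for illustration, not the parameters of any published feedback measure:

```python
# Toy sketch of session-by-session outcome monitoring ("practice-based
# evidence"). All numbers here are illustrative assumptions.

def needs_course_change(session_scores, expected_gain_per_session=1.0):
    """Flag a case whose measured progress lags an expected trajectory.

    session_scores: outcome-scale scores, first entry taken at intake.
    Returns True when actual change falls well short of expectation,
    signaling that clinician and client should revisit the approach.
    """
    if len(session_scores) < 3:  # too early to judge a trend
        return False
    actual_gain = session_scores[-1] - session_scores[0]
    expected_gain = expected_gain_per_session * (len(session_scores) - 1)
    return actual_gain < 0.5 * expected_gain

# Example: essentially flat scores across four sessions raise a flag.
print(needs_course_change([20.0, 20.5, 19.8, 20.3]))  # True
```

Real feedback systems derive their expected trajectories from large normative samples rather than a fixed per-session constant, but the logic is the same: compare measured change to an expectation and revisit the approach with the client when the gap grows.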
