Post by Admin on Aug 23, 2022 17:16:33 GMT
Psychiatry, Fraud, and the Case for a Class-Action Lawsuit
www.madintheuk.com/2022/08/psychiatry-fraud-class-action-lawsuit/
When Mad in America received a notice this past June that Joanna Moncrieff, Mark Horowitz, and colleagues would soon publish a paper concluding that there were no research findings that supported the low-serotonin hypothesis of depression, I initially wondered whether we should bother to report on it. Mad in America readers know well that the low-serotonin theory had long ago been debunked, with numerous articles on our site telling of that fact, and so I quipped to other MIA staff that reviewing the article would be like “beating a dead horse.”
But such is our little cocoon here at Mad in America. For much of the mainstream media, the paper made for a stunning finding. In print, radio, and television, the paper has been described as a “landmark” finding, as a “game changer” and so forth, the media telling of how it has shaken up accepted wisdom about antidepressants and “how they work.”
This was rather amusing, I thought, as the exclamations of surprise revealed the media’s utter failure in its reporting on psychiatry over the past decades. Their surprise served as a tacit confession that they had been publishing propaganda for some time.
Then, as psychiatrists publicly commented on the paper, a second confession appeared, this one indeed of “landmark” importance. Their comments serve as an admission that, for the past several decades, their profession committed medical fraud. And I am using that term in its legal sense.
As Moncrieff and colleagues noted, there is a long line of research that failed to find evidence supporting the low-serotonin theory of depression. What was new about their work was that they performed a comprehensive review of this research, looking at the different “types” of studies that had been done, and finding that all had failed to produce evidence supporting the theory. In response, a number of prominent psychiatrists in the UK and the United States dismissed the paper as old news. Here is a sampling:
From UK psychiatrists:
“The findings from this review are really unsurprising. Depression has lots of different symptoms and I don’t think I’ve met any serious scientists or psychiatrists who think that all causes of depression are caused by a simple chemical imbalance in serotonin.” —Michael Bloomfield, University College London (UCL)
“This paper does not present any new findings but just reports results which have been published elsewhere and it is certainly not news that depression is not caused by ‘low serotonin levels.’” —David Curtis, UCL Genetics Institute
From US psychiatrists:
“Nothing is new here. And the fuss surrounding the paper reveals much ignorance about psychiatry. The serotonin hypothesis of depression, which became popular from the 1990s until now, is false, has been known to be false for a long time, and never was proven to begin with.” —Nassir Ghaemi, Tufts University School of Medicine
“When I was doing research for [my] book, I was reading the same studies that I am sure that Dr. Moncrieff and colleagues read, which were basically saying that there’s no direct evidence of a serotonin deficiency. So it’s not really new.” —Daniel Carlat, publisher of the Carlat Psychiatry Report
The psychiatrists making these comments are correct. The psychiatric research community has long known that the low-serotonin theory didn’t pan out and that, in fact, the field long ago moved on to new theories about the possible pathology that gives rise to depression. Yet, as is easy to show, the American Psychiatric Association, in concert with pharmaceutical companies, promoted the low-serotonin theory to the public long after it had been found to be without merit. Scientific advisory councils populated by professors of psychiatry at prestigious medical schools also signed off on such pronouncements by non-profit advocacy associations, and in that manner, share culpability for telling this “falsehood” to the public.
That fraudulent story-telling worked, in the sense of deluding the public. As Moncrieff and colleagues noted, surveys in recent years found that 85% to 90% of the public believed that low serotonin was the cause of depression, and that antidepressants helped fix that imbalance.
There you have the basis for a class action lawsuit: the psychiatric community long ago knew that the low-serotonin story of depression hadn’t panned out, yet the American Psychiatric Association, pharmaceutical companies, and scientific advisory councils told the public otherwise, and this created a societal belief in that false story. The surveys prove that many millions of patients acted upon that falsehood and incorporated it into their sense of self.
The Legal Standard for Medical Fraud
In the wake of World War II, the discovery of Nazi medical experiments on Jewish prisoners and the mentally ill led to the principle, codified in law in the United States, of the duty to provide volunteers in research studies with informed consent. Potential study subjects need to be informed about the risks of a study before they can give consent.
In the 1950s and 1960s, this principle of informed consent was extended to ordinary medical care. The principle is grounded in the concept of personal autonomy: the individual has a right to self-determination. A 1972 landmark case in federal court, Canterbury v. Spence, ruled that providing patients with informed consent was not just an ethical obligation, but a legal one. The court wrote:
“The patient’s right of self-decision shapes the boundaries of the duty to reveal. That right can be exercised only if the patient possesses enough information to enable an intelligent choice.”
The court also set forth a standard for assessing whether this legal obligation had been met: “What would a reasonable patient want to know with respect to the proposed therapy and the dangers that may be inherently or potentially involved?”
While it is the physician or medical caregiver who is required to obtain the informed consent of the patient, this legal standard clearly imposes an ethical duty, by proxy, on the medical specialty that provides individual physicians with the information that should be disclosed. The medical specialty must provide physicians with the best possible accounting of the risks and benefits of any proposed therapy, and in its communications to the public, do the same.
The diagnosis of a disease is obviously a first step in obtaining informed consent. What is the illness that needs to be treated? If the presenting symptoms do not lead to a diagnosis with a known pathology, that is okay—the absence of knowledge helps inform the patient’s decision-making. If it isn’t understood why a drug works, that is okay too. Once again, the absence of knowledge helps inform the patient’s decision-making. At that point, the patient can focus on the risks and benefits of the proposed treatment: what have clinical studies shown?
The chemical imbalance story violated those principles at every step. Patients were informed that they had a known pathology, and that an antidepressant fixed that pathology. That was a story of an antidote to a disease, and thus of a medically necessary treatment. If a patient didn’t take the antidepressant, he or she could expect to continue to suffer from depression.
This isn’t simply a failure to give patients the information needed to make an “informed choice.” Instead, from a legal standpoint, this is a case of a patient being told a lie.
Here is how one Arizona law firm describes the legal consequences for a doctor who lies to a patient:
“You can sue your doctor for lying, provided certain breaches of duty of care occur. A doctor’s duty of care is to be truthful about your diagnosis, treatment options, and prognosis. If a doctor has lied about any of this information, it could be proof of a medical malpractice claim. The law considers it medical negligence if a doctor fails to provide the truth for informed consent, which may also bring a battery lawsuit.”
Medical malpractice is the charge if the action was due to negligence; medical battery requires the action to be intentional. Here is how a Washington D.C. law firm describes medical battery:
“When you visit a doctor and they prescribe a treatment or procedure, an essential element is your consent. You have the right to know what will be done to you, to learn the risk or potential side effects of a procedure, and to be informed of any alternative treatment options available to you . . . Medical battery occurs when the doctor or other medical professional violates your right to decide what kinds of medical treatments you will receive and which you do not wish to receive.”
The FDA, of course, approved the prescribing of antidepressants for depression. And it may be that many individual prescribers who told their patients that antidepressants fixed a chemical imbalance thought that was true. They believed they were providing patients with “informed consent.”
As such, in this instance of the chemical-imbalance story, the medical malpractice and battery can be understood as not necessarily originating in the doctor-patient interaction, but rather in the telling of a false story to the public by the American Psychiatric Association (APA) and pharmaceutical companies that knowingly promoted this falsehood. The academic psychiatrists that served on the scientific advisory boards of non-profit advocacy organizations that peddled this story share in this collective guilt.
The Trail of Fraud
As is well-known, the low-serotonin theory of depression had its roots in findings, dating back to the 1960s, that the first generation of antidepressants, the tricyclics and monoamine oxidase inhibitors, both prevented the usual removal of neurotransmitters known as monoamines from the synaptic cleft between neurons. This led researchers in 1965 to hypothesize that a deficit in monoamines could be a cause of depression.
Once this hypothesis was floated, researchers sought to determine whether patients with depression actually suffered from a monoamine deficiency. It’s a history of one negative finding after another.
As early as 1974, researchers concluded that all such studies up to that time indicated that “the depletion of brain norepinephrine, dopamine, or serotonin is in itself not sufficient to account for the development of the clinical syndrome of depression.” This was the first round of findings, and after that there was speculation that a monoamine deficit might be present in a subset of depressed patients (as opposed to being a pathology common to all such patients). In 1984, the NIMH conducted a study to investigate that possibility. Once more, the bottom-line findings were negative, which led the NIMH researchers to conclude that “elevations or decrements in the functioning of serotonergic systems per se are not likely to be associated with depression.”
At that point, the hypothesis had been around for nearly two decades and found to be wanting. In the research community, there was a sense that the hypothesis had always presented an overly reductive picture of how the brain functioned, and thus it wasn’t a surprise that research had failed to support the hypothesis. Even so, after that 1984 report, investigators continued to study whether depressed patients suffered from low serotonin, with this research quickening after Prozac arrived on the market in 1988. Many different investigative methods were tried, but once again, the results were negative. The hypothesis was officially buried by the American Psychiatric Association in 1999, when it published the third edition of its Textbook of Psychiatry. The authors of a section on mood disorders even pointed out the faulty logic that had led to the chemical imbalance theory of depression in the first place. They wrote:
“The monoamine hypothesis, which was first proposed in 1965, holds that monoamines such as norepinephrine and 5-HT [serotonin] are deficient in depression and that the action of antidepressants depends on increasing the synaptic availability of these monoamines. The monoamine hypothesis was based on observations that antidepressants block reuptake inhibition of norepinephrine, 5-HT, and/or dopamine. However, inferring neurotransmitter pathophysiology from an observed action of a class of medications on neurotransmitter availability is similar to concluding that because aspirin causes gastrointestinal bleeding, headaches are caused by too much blood and the therapeutic action of aspirin in headaches involves blood loss. Additional experience has not confirmed the monoamine depletion hypothesis.”1
Other experts in the field echoed this point in the next few years. In his 2000 textbook Essential Psychopharmacology, psychiatrist Stephen Stahl wrote that “there is no clear and convincing evidence that monoamine deficiency accounts for depression; that is, there is no ‘real’ monoamine deficit.”2
More such confessions appeared in the research literature, and finally, in a 2010 paper, Eric Nestler, famous for his work on the biology of mental disorders, detailed how the many types of inquiries into the low-serotonin theory had all come to the same conclusion:
“After more than a decade of PET studies (positioned aptly to quantitatively measure receptor and transporter numbers and occupancy), monoamine depletion studies (which transiently and experimentally reduce brain monoamine levels), as well as genetic association analyses examining polymorphisms in monoaminergic genes, there is little evidence to implicate true deficits in serotonergic, noradrenergic, or dopaminergic neurotransmission in the pathophysiology of depression. This is not surprising, as there is no a priori reason that the mechanism of action of a treatment is the opposite of disease pathophysiology.”
This is the research history that psychiatrists today, when asked to comment about Moncrieff’s paper, are referring to when they state, “there is nothing new here.” They are right. The theory was abandoned long ago. In a 2011 blog post, Ronald Pies, editor of Psychiatric Times, a leading trade publication in psychiatry, put it this way: “In truth, the ‘chemical imbalance’ notion was always a kind of urban legend—never a theory seriously propounded by well-informed psychiatrists.”
From a legal standpoint, the APA’s publication of the third edition of its Textbook of Psychiatry in 1999 is the pivotal moment in this history. Up until that time, the argument could be made that while the biology of depression remained unknown, one hypothesis was that it was due to low serotonin, and that there were still efforts to see if that might be true. However, after that date, the APA, the pharmaceutical companies, and the academic psychiatrists that populated the scientific advisory councils had an obligation to inform the public that the low-serotonin theory had not panned out. If instead these three groups informed the public that depressed patients suffered from a chemical imbalance that could be fixed by a drug, they were knowingly telling the public a lie, and thus, by informed consent standards, they were abetting medical malpractice and the medical battery of patients.
And it’s easy to document that is exactly what the APA, the pharmaceutical companies, and the scientific advisory boards did.
The APA’s Promotion of the Chemical Imbalance Story
The APA’s promotion of the chemical imbalance theory of mental disorders can be traced back to 1980, when it published the third edition of its Diagnostic and Statistical Manual. That publication is regularly characterized as a transformative moment for American psychiatry, as this was when the APA adopted a “disease” model for diagnosing and treating psychiatric disorders.
There were no scientific findings that spurred this transformation. What scientific impulse there was arose from the failure of DSM II: the diagnoses in that edition were understood to “lack reliability and validity.” That led a team of researchers at Washington University in St. Louis to advocate that psychiatry should start fresh: it could develop categories for grouping patients with like symptoms, with the hope that subsequent research would “validate” the groupings as real diseases. DSM II would be abandoned, and new categories would be drawn up for research purposes.
However, during the 1970s, APA leaders spoke of how, in the face of various criticisms, psychiatry was fighting for its survival. Its diagnostic manual was understood to lack validity; psychologists and counselors were offering talk therapies that appeared to be as effective as psychoanalysis; One Flew Over the Cuckoo’s Nest depicted staff in mental hospitals as the truly crazy ones; and an “antipsychiatry” movement described psychiatry as an agency of social control.
The criticism that stung the most was that psychiatrists were not “real doctors.” There was an obvious solution that beckoned: if they adopted a disease model, they could present themselves as physicians who treated real diseases. This would enable them to don the “white coat”—both figuratively and literally—that society recognized as the garb of “real” doctors.
DSM III, said APA president Jack Weinberg in 1977, would “clarify to anyone who may be in doubt that we regard psychiatry as a specialty of medicine.”3
Once DSM III was published, the APA set out to market its new disease model to the public. In 1981, it established a “division of publications and marketing” to “deepen the medical identification of psychiatrists.” That same year it established a press to bring “psychiatry’s best talent and current knowledge before the reading public.” It developed a nationwide roster of experts to promote this disease model, and it set up a “public affairs institute” to run workshops that trained its members “in techniques for dealing with radio and television.”4
This PR effort told of a revolution in psychiatry, with the media informed that researchers were discovering the very “molecules” that caused psychiatric symptoms. The APA held “media days” to promote this understanding, with awards given to media that reported on this revolution, and soon newspapers and magazines were writing stories about extraordinary advances that heralded a day when mental disorders could be “cured.”
The Baltimore Sun, in a seven-part series titled “The Mind-Fixers,” which won a Pulitzer Prize for explanatory journalism in 1985, described the revolution in this way:
“For a decade and more, research psychiatrists have been working quietly in laboratories, dissecting the brains of mice and men and teasing out the chemical formulas that unlock the secrets of the mind. Now, in the 1980s, their work is paying off. They are rapidly identifying the interlocking molecules that produce human thought and emotion . . . As a result, psychiatry today stands on the threshold of becoming an exact science, as precise and quantifiable as molecular genetics. Ahead lies an era of psychic engineering, and the development of specialized drugs and therapies.”5
Pharmaceutical companies, of course, were thrilled with the APA’s adoption of a disease model, for they understood it would greatly expand the market for their drugs, and they began funneling money to the APA and to psychiatrists at academic medical centers to support this PR effort.
The chemical imbalance story served, in essence, as the soundbite that could best sell this disease model to the public. It was a claim that fit into a larger societal narrative about the march of medicine in the 20th century: insulin as a treatment for diabetes, antibiotics for infectious diseases, a vaccine for polio, and so forth. Now it was psychiatry’s turn to take its place at the head of this parade.
The public began hearing this soundbite immediately after DSM III was published. In 1981, an Associated Press article featuring an interview with University of Chicago psychiatrist Herbert Meltzer informed readers that “researchers believe clinical depression is caused by a chemical imbalance in the brain,” and that there were already two drugs in development that “restore the chemical imbalance” to normal.6
Three years later, Nancy Andreasen, who would soon become editor-in-chief of the American Journal of Psychiatry, published a best-selling book titled The Broken Brain: The Biological Revolution. The new understanding in psychiatry, she wrote, was that the “major psychiatric illnesses are diseases,” and that each “different illness has a different specific cause . . . there are many hints that mental illness is due to chemical imbalances in their brain and that treatment involves correcting these chemical imbalances.”7
Eli Lilly brought Prozac to market in 1988, and soon the public was hearing that this “selective serotonin reuptake inhibitor” restored serotonin to normal levels, and thus was like “insulin for diabetes.” New York magazine featured the pill on its cover: “Bye, Bye Blues” declared the headline.8 Newsweek did as well, with this headline atop the pill: “Prozac, A Breakthrough Drug for Depression.”9
Magazine and newspaper stories told of how patients were feeling better than ever. In the spring of 1990, the New York Times, in an article by Natalie Angier, arguably the nation’s best-known science writer, informed readers that “all antidepressants work by restoring the balance of neurotransmitter activity in the brain, correcting an abnormal excess or inhibition of the electrochemical signals that control mood, thoughts, appetite, pain, and other sensations.” This new drug, Dr. Francis Mondimore told Angier, “is not like alcohol or Valium. It’s like antibiotics.”
Television shows weighed in with a similar message, and on 60 Minutes, Lesley Stahl told the inspiring story of a woman, Maria Romero, who, after a decade of horrible depression, had been reborn on Prozac. “Somebody, something left my body and another person came in,” Romero said. Stahl explained the biological cure that was at work: “Most doctors believe that chronic depression like Romero’s is caused by a chemical imbalance in the brain. To correct it, the doctors prescribed Prozac.”10
Sales of Prozac soared, and as other drug companies brought new “SSRI” antidepressants to market—Zoloft, Paxil, Celexa, Lexapro, and so forth—they relied on the chemical imbalance soundbite to market their products. The National Alliance on Mental Illness grew in prominence during this period, and its core message was that psychiatric disorders were diseases caused by chemical imbalances in the brain, and that psychiatric drugs fixed those imbalances.
Rest in Link.
www.madintheuk.com/2022/08/psychiatry-fraud-class-action-lawsuit/
When Mad in America received a notice this past June that Joanna Moncrieff, Mark Horowitz, and colleagues would soon publish a paper concluding that there were no research findings that supported the low-serotonin hypothesis of depression, I initially wondered whether we should bother to report on it. Mad in America readers know well that the low-serotonin theory had long ago been debunked, with numerous articles on our site telling of that fact, and so I quipped to other MIA staff that reviewing the article would be like “beating a dead horse.”
But such is our little cocoon here at Mad in America. For much of the mainstream media, their paper made for a stunning finding. In print, radio, and television, the paper has been described as a “landmark” finding, as a “game changer” and so forth, the media telling of how it has shaken up accepted wisdom about antidepressants and “how they work.”
This was rather amusing, I thought, as the exclamations of surprise revealed the media’s utter failure regarding their reporting on psychiatry for the past decades. Their surprise served as a tacit confession that they had been publishing propaganda for some time.
Then, as psychiatrists publicly commented on the paper, a second confession appeared, this one indeed of “landmark” importance. Their comments serve as an admission that, for the past several decades, their profession committed medical fraud. And I am using that term in its legal sense.
As Moncrieff and colleagues noted, there is a long line of research that failed to find evidence supporting the low-serotonin theory of depression. What was new about their work was that they performed a comprehensive review of this research, looking at the different “types” of studies that had been done, and finding that all had failed to produce evidence supporting the theory. In response, a number of prominent psychiatrists in the UK and the United States dismissed the paper as old news. Here is a sampling:
From UK psychiatrists:
“The findings from this review are really unsurprising. Depression has lots of different symptoms and I don’t think I’ve met any serious scientists or psychiatrists who think that all causes of depression are caused by a simple chemical imbalance in serotonin.” —Michael Bloomfield, University College London (UCL)
“This paper does not present any new findings but just reports results which have been published elsewhere and it is certainly not news that depression is not caused by ‘low serotonin levels.’” —David Curtis, UCL Genetics Institute
From US psychiatrists:
“Nothing is new here. And the fuss surrounding the paper reveals much ignorance about psychiatry. The serotonin hypothesis of depression which became popular from the 1990s until now, is false, and has known to be false for a long time, and never was proven to begin with.” —Nassir Ghaemi, Tufts University School of Medicine
“When I was doing research for [my] book, I was reading the same studies that I am sure that Dr. Moncrieff and colleagues read, which were basically saying that there’s no direct evidence of a serotonin deficiency. So it’s not really new.” —Daniel Carlat, publisher of the Carlat Psychiatry Report
The psychiatrists making these comments are correct. The psychiatric research community has long known that the low-serotonin theory didn’t pan out and that, in fact, the field long ago moved on to new theories about the possible pathology that gives rise to depression. Yet, as is easy to show, the American Psychiatric Association, in concert with pharmaceutical companies, promoted the low-serotonin theory to the public long after the low-serotonin theory had been found to be without merit. Scientific advisory councils populated by professors of psychiatry at prestigious medical schools also signed off on such pronouncements by non-profit advocacy associations, and in that manner, share culpability for telling this “falsehood” to the public.
That fraudulent story-telling worked, in the sense of deluding the public. As Moncrieff and colleagues noted, surveys in recent years found that 85% to 90% of the public believed that low serotonin was the cause of depression, and that antidepressants helped fix that imbalance.
There you have the basis for a class action lawsuit: the psychiatric community long ago knew that the low-serotonin story of depression hadn’t panned out, yet the American Psychiatric Association, pharmaceutical companies, and scientific advisory councils told the public otherwise, and this created a societal belief in that false story. The surveys prove that many millions of patients acted upon that falsehood and incorporated it into their sense of self.
The Legal Standard for Medical Fraud
In the wake of World War II, the discovery of Nazi medical experiments on Jewish prisoners and the mentally ill led to the principle, codified in law in the United States, of the duty to provide volunteers in research studies with informed consent. Potential study subjects need to be informed about the risks of a study before they can give consent.
In the 1950s and 1960s, this principle of informed consent was extended to ordinary medical care. The principle is grounded in the concept of personal autonomy: the individual has a right to self-determination. A 1972 landmark case in federal court, Canterbury v. Spence, ruled that providing patients with informed consent was not just an ethical obligation, but a legal one. The court wrote:
“The patient’s right of self-decision shapes the boundaries of the duty to reveal. That right can be exercised only if the patient possesses enough information to enable an intelligent choice.”
The court also set forth a standard for assessing whether this legal obligation had been met: “What would a reasonable patient want to know with respect to the proposed therapy and the dangers that may be inherently or potentially involved?”
While it is the physician or medical caregiver who is required to obtain the informed consent of the patient, this legal standard clearly imposes an ethical duty, by proxy, on the medical specialty that provides individual physicians with the information that should be disclosed. The medical specialty must provide physicians with the best possible accounting of the risks and benefits of any proposed therapy, and in its communications to the public, do the same.
The diagnosis of a disease is obviously a first step in obtaining informed consent. What is the illness that needs to be treated? If the presenting symptoms do not lead to a diagnosis with a known pathology, that is okay—the absence of knowledge helps inform the patient’s decision-making. If it isn’t understood why a drug works, that is okay too. Once again, the absence of knowledge helps inform the patient’s decision-making. At that point, the patient can focus on the risks and benefits of the proposed treatment: what have clinical studies shown?
The chemical imbalance story violated those principles at every step. Patients were informed that they had a known pathology, and that an antidepressant fixed that pathology. That was a story of an antidote to a disease, and thus was a medically necessary treatment. If a patient didn’t take the antidepressant, he or she could expect to continue to suffer from depression.
This isn’t simply a failure to give patients the information needed to make an “informed choice.” Instead, from a legal standpoint, this is a case of a patient being told a lie.
Here is how one Arizona law firm describes the legal consequences for a doctor that lies to a patient:
“You can sue your doctor for lying, provided certain breaches of duty of care occur. A doctor’s duty of care is to be truthful about your diagnosis, treatment options, and prognosis. If a doctor has lied about any of this information, it could be proof of a medical malpractice claim. The law considers it medical negligence if a doctor fails to provide the truth for informed consent, which may also bring a battery lawsuit.”
Medical malpractice is the charge if the action was due to negligence; medical battery requires the action to be intentional. Here is how a Washington D.C. law firm describes medical battery:
“When you visit a doctor and they prescribe a treatment or procedure, an essential element is your consent. You have the right to know what will be done to you, to learn the risk or potential side effects of a procedure, and to be informed of any alternative treatment options available to you . . . Medical battery occurs when the doctor or other medical professional violates your right to decide what kinds of medical treatments you will receive and which you do not wish to receive.”
The FDA, of course, approved the prescribing of antidepressants for depression. And it may be that many individual prescribers who told their patients that antidepressants fixed a chemical imbalance thought that was true. They believed they were providing patients with “informed consent.”
As such, in this instance of the chemical-imbalance story, the medical malpractice and battery can be understood as not necessarily originating in the doctor-patient interaction, but rather in the telling of a false story to the public by the American Psychiatric Association (APA) and pharmaceutical companies that knowingly promoted this falsehood. The academic psychiatrists that served on the scientific advisory boards of non-profit advocacy organizations that peddled this story share in this collective guilt.
The Trail of Fraud
As is well-known, the low-serotonin theory of depression had its roots in findings, dating back to the 1960s, that the first generation of antidepressants (tricyclics and monoamine oxidase inhibitors) both prevented the usual removal of neurotransmitters known as monoamines from the synaptic cleft between neurons. This led researchers in 1965 to hypothesize that a deficit in monoamines could be a cause of depression.
Once this hypothesis was floated, researchers then sought to determine whether patients with depression actually suffered from a monoamine deficiency. It’s a history of one negative finding after another.
As early as 1974, researchers concluded that all such studies up to that time indicated that “the depletion of brain norepinephrine, dopamine, or serotonin is in itself not sufficient to account for the development of the clinical syndrome of depression.” This was the first round of findings, and after that there was speculation that a monoamine deficit might be present in a subset of depressed patients (as opposed to being a pathology common to all such patients). In 1984, the NIMH conducted a study to investigate that possibility. Once more, the bottom-line findings were negative, which led the NIMH researchers to conclude that “elevations or decrements in the functioning of serotonergic systems per se are not likely to be associated with depression.”
At that point, the hypothesis had been around for nearly two decades and found to be wanting. In the research community, there was a sense that the hypothesis had always presented an overly reductive picture of how the brain functioned, and thus it wasn’t a surprise that research had failed to support the hypothesis. Even so, after that 1984 report, investigators continued to study whether depressed patients suffered from low serotonin, with this research quickening after Prozac arrived on the market in 1988. Many different investigative methods were tried, but once again, the results were negative. The hypothesis was officially buried by the American Psychiatric Association in 1999, when it published the third edition of its Textbook of Psychiatry. The authors of a section on mood disorders even pointed out the faulty logic that had led to the chemical imbalance theory of depression in the first place. They wrote:
“The monoamine hypothesis, which was first proposed in 1965, holds that monoamines such as norepinephrine and 5-HT [serotonin] are deficient in depression and that the action of antidepressants depends on increasing the synaptic availability of these monoamines. The monoamine hypothesis was based on observations that antidepressants block reuptake inhibition of norepinephrine, 5-HT, and/or dopamine. However, inferring neurotransmitter pathophysiology from an observed action of a class of medications on neurotransmitter availability is similar to concluding that because aspirin causes gastrointestinal bleeding, headaches are caused by too much blood and the therapeutic action of aspirin in headaches involves blood loss. Additional experience has not confirmed the monoamine depletion hypothesis.”1
Other experts in the field echoed this point in the next few years. In his 2000 textbook Essential Psychopharmacology, psychiatrist Stephen Stahl wrote that “there is no clear and convincing evidence that monoamine deficiency accounts for depression; that is, there is no ‘real’ monoamine deficit.”2
More such confessions appeared in the research literature, and finally, in a 2010 paper, Eric Nestler, famous for his work on the biology of mental disorders, detailed how the many types of inquiries into the low-serotonin theory had all come to the same conclusion:
“After more than a decade of PET studies (positioned aptly to quantitatively measure receptor and transporter numbers and occupancy), monoamine depletion studies (which transiently and experimentally reduce brain monoamine levels), as well as genetic association analyses examining polymorphisms in monoaminergic genes, there is little evidence to implicate true deficits in serotonergic, noradrenergic, or dopaminergic neurotransmission in the pathophysiology of depression. This is not surprising, as there is no a priori reason that the mechanism of action of a treatment is the opposite of disease pathophysiology.”
This is the research history that psychiatrists today, when asked to comment on Moncrieff’s paper, are referring to when they state, “there is nothing new here.” They are right. The theory was abandoned long ago. In a 2011 blog, Ronald Pies, editor of Psychiatric Times, a leading trade publication for psychiatrists, put it this way: “In truth, the ‘chemical imbalance’ notion was always a kind of urban legend—never a theory seriously propounded by well-informed psychiatrists.”
From a legal standpoint, the APA’s publication of the third edition of its Textbook of Psychiatry in 1999 is the pivotal moment in this history. Up until that time, the argument could be made that while the biology of depression remained unknown, one hypothesis was that it was due to low serotonin, and that there were still efforts to see if that might be true. However, after that date, the APA, the pharmaceutical companies, and the academic psychiatrists who populated the scientific advisory councils had an obligation to inform the public that the low-serotonin theory had not panned out. If instead these three groups informed the public that depressed patients suffered from a chemical imbalance that could be fixed by a drug, they were knowingly telling the public a lie, and thus, by informed consent standards, they were abetting medical malpractice and the medical battery of patients.
And it’s easy to document that is exactly what the APA, the pharmaceutical companies, and the scientific advisory boards did.
The APA’s Promotion of the Chemical Imbalance Story
The APA’s promotion of the chemical imbalance theory of mental disorders can be traced back to 1980, when it published the third edition of its Diagnostic and Statistical Manual. That publication is regularly characterized as a transformative moment for American psychiatry, as this was when the APA adopted a “disease” model for diagnosing and treating psychiatric disorders.
There were no scientific findings that spurred this transformation. Such scientific impulse as existed arose from the failure of DSM II: the diagnoses in that edition were understood to “lack reliability and validity.” That led a team of researchers at Washington University in St. Louis to advocate that psychiatry should start fresh: it could develop categories for grouping patients with like symptoms, with the hope that subsequent research would “validate” the groupings as real diseases. DSM II would be abandoned, and new categories would be drawn up for research purposes.
However, during the 1970s, APA leaders spoke of how, in the face of various criticisms, psychiatry was fighting for its survival. Its diagnostic manual was understood to lack validity; psychologists and counselors were offering talk therapies that appeared to be as effective as psychoanalysis; One Flew Over the Cuckoo’s Nest depicted staff in mental hospitals as the truly crazy ones; and an “antipsychiatry” movement described psychiatry as an agency of social control.
The criticism that stung the most was that psychiatrists were not “real doctors.” There was an obvious solution that beckoned: if they adopted a disease model, they could present themselves as physicians who treated real diseases. This would enable them to don the “white coat”—both figuratively and literally—that society recognized as the garb of “real” doctors.
DSM III, said APA president Jack Weinberg in 1977, would “clarify to anyone who may be in doubt that we regard psychiatry as a specialty of medicine.”3
Once DSM III was published, the APA set out to market its new disease model to the public. In 1981, it established a “division of publications and marketing” to “deepen the medical identification of psychiatrists.” That same year it established a press to bring “psychiatry’s best talent and current knowledge before the reading public.” It developed a nationwide roster of experts to promote this disease model, and it set up a “public affairs institute” to run workshops that trained its members “in techniques for dealing with radio and television.”4
This PR effort told of a revolution in psychiatry, with the media informed that researchers were discovering the very “molecules” that caused psychiatric symptoms. The APA held “media days” to promote this understanding, with awards given to media that reported on this revolution, and soon newspapers and magazines were writing stories about extraordinary advances that heralded a day when mental disorders could be “cured.”
The Baltimore Sun, in a seven-part series titled “The Mind-Fixers,” which won a Pulitzer Prize for expository journalism in 1984, described the revolution in this way:
“For a decade and more, research psychiatrists have been working quietly in laboratories, dissecting the brains of mice and men and teasing out the chemical formulas that unlock the secrets of the mind. Now, in the 1980s, their work is paying off. They are rapidly identifying the interlocking molecules that produce human thought and emotion . . . As a result, psychiatry today stands on the threshold of becoming an exact science, as precise and quantifiable as molecular genetics. Ahead lies an era of psychic engineering, and the development of specialized drugs and therapies.”5
Pharmaceutical companies, of course, were thrilled with the APA’s adoption of a disease model, for they understood it would greatly expand the market for their drugs, and they began funneling money to the APA and to psychiatrists at academic medical centers to support this PR effort.
The chemical imbalance story served, in essence, as the soundbite that could best sell this disease model to the public. It was a claim that fit into a larger societal narrative about the march of medicine in the 20th century: insulin as a treatment for diabetes, antibiotics for infectious diseases, a vaccine for polio, and so forth. Now it was psychiatry’s turn to take its place at the head of this parade.
The public began hearing this soundbite immediately after DSM III was published. In 1981, an Associated Press article featuring an interview with University of Chicago psychiatrist Herbert Meltzer informed readers that “researchers believe clinical depression is caused by a chemical imbalance in the brain,” and that there were already two drugs in development that “restore the chemical imbalance” to normal.6
Three years later, Nancy Andreasen, who would soon become editor-in-chief of the American Journal of Psychiatry, published a best-selling book titled The Broken Brain: The Biological Revolution. The new understanding in psychiatry, she wrote, was that the “major psychiatric illnesses are diseases,” and that each “different illness has a different specific cause . . . there are many hints that mental illness is due to chemical imbalances in their brain and that treatment involves correcting these chemical imbalances.”7
Eli Lilly brought Prozac to market in 1988, and soon the public was hearing that this “selective serotonin reuptake inhibitor” restored serotonin to normal levels, and thus was like “insulin for diabetes.” New York magazine featured the pill on its cover: “Bye, Bye Blues” declared the headline.8 Newsweek did as well, with this headline atop the pill: “Prozac, A Breakthrough Drug for Depression.”9
Magazine and newspaper stories told of how patients were feeling better than ever. In the spring of 1990, the New York Times, in an article by Natalie Angier, who arguably was the nation’s most well-known science writer, informed readers that “all antidepressants work by restoring the balance of neurotransmitter activity in the brain, correcting an abnormal excess or inhibition of the electrochemical signals that control mood, thoughts, appetite, pain, and other sensations.” This new drug, Dr. Francis Mondimore told Angier, “is not like alcohol or Valium. It’s like antibiotics.”
Television shows weighed in with a similar message, and on 60 Minutes, Lesley Stahl told the inspiring story of a woman, Maria Romero, who, after a decade of horrible depression, had been reborn on Prozac. “Somebody, something left my body and another person came in,” Romero said. Stahl explained the biological cure that was at work: “Most doctors believe that chronic depression like Romero’s is caused by a chemical imbalance in the brain. To correct it, the doctors prescribed Prozac.”10
Sales of Prozac soared, and as other drug companies brought new “SSRI” antidepressants to market—Zoloft, Paxil, Celexa, Lexapro, and so forth—they relied on the chemical imbalance soundbite to market their products. The National Alliance on Mental Illness grew in prominence during this period, and its core message was that psychiatric disorders were diseases caused by chemical imbalances in the brain, and that psychiatric drugs fixed those imbalances.
Rest at the link.