Saturday, 20 August 2016

What are we getting wrong in neuroscience?

In 1935, an ambitious neurology professor named Egas Moniz sat in the audience at a symposium on the frontal lobes, enthralled by neuroscientist Carlyle F. Jacobsen's description of some experiments Jacobsen had conducted with fellow investigator John Fulton. Jacobsen and Fulton had damaged the frontal lobes of a chimpanzee named "Becky," and afterwards they had observed a considerable behavioral transformation. Becky had previously been stubborn, erratic, and difficult to train, but post-operation she became placid, imperturbable, and compliant. Moniz had already been thinking about the potential therapeutic value of frontal lobe surgery in humans after reading some papers about frontal lobe tumors and how they affected personality. He believed that some mental disorders were caused by static abnormalities in frontal lobe circuitry. By removing a portion of the frontal lobes, he hypothesized, he would also be removing the problematic neurons and pathways, in the process alleviating the patient's symptoms. Although Moniz had been pondering this possibility, Jacobsen's description of the changes seen in Becky was the impetus Moniz needed to try a similar approach with humans. He did so just three months after seeing Jacobsen's presentation, and the surgical procedure that would come to be known as the frontal lobotomy was born.

Moniz's procedure initially involved drilling two holes in a patient's skull, then injecting pure alcohol subcortically into the frontal lobes, with the hope of destroying the regions where the mental disorder resided. Moniz soon turned to another tool for ablation, however---a steel loop he called a leucotome (Greek for "white matter knife")---and began calling the procedure a prefrontal leucotomy. Although his means of assessing the effectiveness of the procedure were inadequate by today's standards---for example, he generally only monitored patients for a few days after the surgery---Moniz reported recovery or improvement in most of the patients who underwent the procedure. Soon, prefrontal leucotomies were being performed in a number of countries throughout the world. The operation attracted the interest of neurologist Walter Freeman and neurosurgeon James Watts. They modified the procedure again, this time entering the skull from the side using a large spatula. Once inside the cranium, the spatula was wiggled up and down in the hopes of severing connections between the thalamus and prefrontal cortex (based on the hypothesis that these connections were crucial for emotional responses, and could precipitate a disorder when not functioning properly). They also renamed the procedure the "frontal lobotomy," as "leucotomy" implied only white matter was being removed, which was not the case with their method. Several years later (in 1946), Freeman made one final modification to the procedure. He advocated for using the eye socket as an entry point to the frontal lobes (again to sever the connections between the thalamus and frontal areas). As his tool for the ablation, he chose an ice pick. The ice pick was inserted through the eye socket, wiggled around to do the cutting, and then removed. The procedure could be done in 10 minutes; the development of this new "transorbital lobotomy" brought about the real heyday of lobotomy.

The introduction of transorbital lobotomy led to a significant increase in the popularity of the operation---perhaps due to the ease and expediency of the procedure. Between 1949 and 1952, somewhere around 5,000 lobotomies were conducted each year in the United States (the total number of lobotomies done by the 1970s is thought to have been between 40,000 and 50,000). Watts strongly protested the transformation of lobotomy into a procedure that could be done in one quick office visit---and done by a psychiatrist instead of a surgeon, no less---which led him and Freeman to sever their partnership.

Freeman, however, was not discouraged; he became an ardent promoter of transorbital lobotomy. He traveled across the United States, stopping at mental asylums to perform the operation on any patients who seemed eligible and to train the staff to perform the surgery after he had moved on. Freeman himself is thought to have performed or supervised around 3,500 lobotomies; his patients included a number of minors, among them a 4-year-old child (who died 3 weeks after the procedure). Eventually, however, the popularity of transorbital lobotomy began to fade. One would like to think that this happened because people recognized how barbaric the procedure was (along with the fact that the approach was based on somewhat flimsy scientific rationale). The real reasons for abandoning the operation, however, were more pragmatic. The downfall of lobotomy began with questions about the effectiveness of the surgery, especially in treating certain conditions like schizophrenia. It was also recognized that faculties like motivation, spontaneity, and abstract thought suffered irreparably from the procedure. And the final nail in the coffin of lobotomy was the development of psychiatric drugs like chlorpromazine, which for the first time gave clinicians a pharmacological option for intractable cases of mental disorders.

It is easy for us now to look at the practice of lobotomy as nothing short of brutality, and to scoff at what seems like a tenuous scientific explanation for why the procedure should work. It's important, however, to view such issues in the history of science in the context of their time. In an age when effective psychiatric drugs were nonexistent, psychosurgical interventions were viewed as the "wave of the future." They offered a hopeful possibility for treating disorders that were often incurable and potentially debilitating. And while the approach of lobotomy seems far too non-selective to us now (such serious brain damage was not likely to affect just one mental faculty), the idea that decreasing frontal lobe activity might reduce mental agitation was actually based on the scientific literature available at the time.

Still, it's clear that the decision to attempt to treat psychiatric disorders by inflicting significant brain damage represented a failure of logic at multiple levels. When we discuss neuroscience today, we often assume that our days of such egregious mistakes are over. And while we have certainly progressed since the time of lobotomies (especially in the safeguards protecting patients from such untested and dangerous treatments), we are not that far removed in time from this sordid chapter in the history of neuroscience. Today, there is still more unknown about the brain than there is known, and thus it is to be expected that we continue to make significant mistakes in how we think about brain function, experimental methods in neuroscience, and more.

Some of these mistakes may be due simply to a natural human approach to understanding difficult problems. For example, when we encounter a complex problem, we often first attempt to simplify it by devising some straightforward way of describing it. Once a basic appreciation is reached, we add to this elementary knowledge to develop a more thorough understanding---and one that is more likely to be a better approximation of the truth. However, that overly simplistic conceptualization of the subject can give birth to countless erroneous hypotheses when used in an attempt to explain something as intricate as the brain. And in science, these types of errors can lead a field astray for years before it finds its way back on course.

Other mistakes involve research methodology. Due to the rapid technological advances of the past half-century, we have some truly amazing neuroscience research tools available to us that would have been pure science fiction 100 years ago. Excitement about these tools, however, has in some cases caused researchers to begin using them extensively before we are fully prepared to do so. This has resulted in the use of methods that cannot yet answer the questions we presume they can, and has provided us with results that we are sometimes unable to interpret accurately. In accepting the answers we obtain as legitimate and assuming our interpretations of the results are valid, we may commit errors that can confound hypothesis development for some time.

Advances in neuroscience in the 20th and into the 21st century have been nothing short of mind-boggling, and our successes in understanding the brain far outpace our failures. However, any scientific field is rife with mistakes, and neuroscience is no different. In this article, I will discuss just a few examples of how missteps and misconceptions continue to affect progress in the field of neuroscience.

The ________________ neurotransmitter

Nowadays, the fact that neurons use signaling molecules like neurotransmitters to communicate with one another is one of those pieces of scientific knowledge that is widely known even to non-scientists. Thus, it may be a bit surprising that this understanding is less than 100 years old. It was in 1921 that the...




