Posts tagged: psychology
DSM-5 as a dystopian novel:
If the novel has an overbearing literary influence, it’s undoubtedly Jorge Luis Borges. The American Psychiatric Association takes his technique of lifting quotes from or writing faux-serious reviews for entirely imagined books and pushes it to the limit: Here, we have an entire book, something that purports to be a kind of encyclopedia of madness, a Library of Babel for the mind, containing everything that can possibly be wrong with a human being. Perhaps as an attempt to ward off the uncommitted reader, the novel begins with a lengthy account of the system of classifications used – one with an obvious debt to the Borgesian Celestial Emporium of Benevolent Knowledge, in which animals are exhaustively classified according to such sets as “those belonging to the Emperor,” “those that, at a distance, resemble flies,” and “those that are included in this classification.”
Just as Borges’s system groups animals by seemingly aleatory characteristics entirely divorced from their actual biological attributes, DSM-5 arranges its various strains of madness solely in terms of the behaviors exhibited. This is a recurring theme in the novel, while any consideration of the mind itself is entirely absent. In its place we’re given diagnoses such as “frotteurism,” “oppositional defiant disorder,” and “caffeine intoxication disorder.” That said, these classifications aren’t arranged at random; rather, they follow a stately progression comparable to that of Dante’s Divine Comedy, rising from the infernal pit of the body and its weaknesses (intellectual disabilities, motor tics) through our purgatorial interactions with the outside world (tobacco use, erectile dysfunction, kleptomania) and finally arriving in the limpid-blue heavens of our libidinal selves (delirium, personality disorders, sexual fetishism). It’s unusual, and at times frustrating in its postmodern knowingness, but what is being told is first and foremost a story.
Full Story: The New Inquiry: Book of Lamentations
Tom Stafford wrote:
Psychologists have used this section of the book, or sentences taken from it or inspired by it, to induce feelings of determinism in experimental subjects. A typical study asks people to read and think about a series of sentences such as “Science has demonstrated that free will is an illusion”, or “Like everything else in the universe, all human actions follow from prior events and ultimately can be understood in terms of the movement of molecules”.
The effects on study participants are generally compared with those of other people asked to read sentences that assert the existence of free will, such as “I have feelings of regret when I make bad decisions because I know that ultimately I am responsible for my actions”, or texts on topics unrelated to free will.
And the results are striking. One study reported that participants who had their belief in free will diminished were more likely to cheat in a maths test. In another, US psychologists reported that people who read Crick’s thoughts on free will said they were less likely to help others. […]
This puts an extra burden of responsibility on philosophers, scientists, pundits and journalists who use evidence from psychology or neuroscience experiments to argue that free will is an illusion. We need to be careful about what stories we tell, given what we know about the likely consequences.
Fortunately, the evidence shows that most people have a sense of their individual freedom and responsibility that is resistant to being overturned by neuroscience. Those sentences from Crick’s book claim that most scientists believe free will to be an illusion. My guess is that most scientists would want to define what exactly is meant by free will, and to examine the various versions of free will on offer, before they agree whether it is an illusion or not.
Interesting stuff, especially when considered alongside the Milgram experiments, which turned out not to be very sound. It also brings to mind the Kitty Genovese myth. If this effect is real, it is important to be aware of it so that we can try to override it in ourselves.
The wrinkles in Milgram’s research kept revealing themselves. Perhaps most damningly, after Perry tracked down one of Milgram’s research analysts, she found reason to believe that most of his subjects had actually seen through the deception. They knew, in other words, that they were taking part in a low-stakes charade.
Gradually, Perry came to doubt the experiments at a fundamental level. Even if Milgram’s data were solid, it is unclear what, if anything, they prove about obedience. Even if 65 percent of Milgram’s subjects did go to the highest shock voltage, why did 35 percent refuse? Why might a person obey one order but not another? How do people and institutions come to exercise authority in the first place? Perhaps most importantly: How are we to conceptualize the relationship between, for example, a Yale laboratory and a Nazi death camp? Or, in the case of Vietnam, between a one-hour experiment and a multiyear, multifaceted war? On these questions, the Milgram experiments—however suggestive they may appear at first blush—are absolutely useless.
It is likely that no one understood this better than Milgram himself. In his notes and letters, Perry finds ample evidence that, privately, he had significant doubts about his work.
A recent paper published in the Journal of Medical Ethics warns of the dangers of DIY transcranial direct current stimulation (tDCS). The National Post reports:
Those risks include reversing the polarity of the electrodes to cause impairment instead of benefit, and triggering potentially long-lasting and negative changes to the brain’s biology, the researchers argue in the Journal of Medical Ethics. […]
In fact, Health Canada considers tDCS machines to be class-three devices — on a scale of risk ranging from one to four — and has yet to approve any for treating psychological illness — though they are licensed for pain and insomnia therapy, said Leslie Meerburg, a department spokeswoman. […]
One subtle but troubling risk could lie in the ability of the devices to change behaviour, with research by Prof. Fecteau and colleagues suggesting tDCS can actually make people better liars, or less empathetic, both qualities that could encourage unscrupulous conduct.
Amusingly, after citing a researcher who says tDCS could make people better liars and less empathetic, the Post quotes someone selling a home tDCS rig saying that it is “very safe.” But, despite the somewhat sordid tone of the story, the actual Journal of Medical Ethics paper does say that tDCS is “relatively safe.” You can find the full paper here.
Talking about something in a neurobiological way sends the message that this is a neurobiological issue. In this way, many fMRI papers serve to spread the idea that this is an issue that only neuroscience can solve and, therefore, create a demand for more fMRI studies. The authors of this paper are victims of this mentality, a widespread confusion about what neuroscience is for.
fMRI is a great way to approach neuroscientific questions. It’s a bad (and terribly expensive) way to do psychology. This study is about psychology, and should not have involved an MRI scanner.
I’ve previously linked to research casting doubt on the efficacy of “brain training” games and software (other than dual n-back). But some new research reported by the MIT Technology Review is more promising:
Cancer survivors sometimes suffer from a condition known as “chemo fog”—a cognitive impairment caused by repeated chemotherapy. A study hints at a controversial idea: that brain-training software might help lift this cognitive cloud.
Various studies have concluded that cognitive training can improve brain function in both healthy people and those with medical conditions, but the broader applicability of these results remains controversial in the field.
In a study published in the journal Clinical Breast Cancer, investigators report that those who used a brain-training program for 12 weeks were more cognitively flexible, more verbally fluent, and faster-thinking than survivors who did not train. […]
“This is a well-done study—they had not just one transfer test but several,” says Hambrick, who notes that many studies of cognitive training depend on a single test to measure results. “But an issue is the lack of activity within the control group.” Better would be to have the control group do another demanding cognitive task in lieu of Lumosity training—something analogous to a placebo, he says: “The issue is that maybe the improvement in the group that did the cognitive training doesn’t reflect enhancement of basic cognitive processes per se, but could be a motivational phenomenon.”
See also: Dual N-Back FAQ
Eric Horowitz on a recent study on scapegoating:
Rothschild and his team were interested in examining how the potential culpability of one’s own group influenced moral outrage and blame for a third party. They began their experiment by giving participants a survey that led them to categorize themselves as middle class rather than working class or upper class. Participants then read an article about the struggles of working-class Americans, but in the in-group condition the article blamed the middle class for the struggles, in the out-group condition the article blamed the upper class, and in the unknown condition the article stated that economists don’t know the cause of working-class struggles. Participants then read another article about the status of illegal immigrants. In the viable scapegoat condition the article described the rising fortunes of illegal immigrants, while in the non-viable scapegoat condition the article described how illegal immigrants were also struggling to find work.
As expected, when illegal immigrants were viable scapegoats, participants were more likely to blame them for the struggles of the working class when the cause of those struggles was unknown or attributed to their own group, the middle class.
From Ted Haggard to Larry Craig, some of the most vocal anti-gay crusaders have turned out to be some of the biggest hypocrites. Some researchers have finally decided to put it to the test: are homophobes really just repressed homosexuals? Skeptikai writes:
The researchers looked at six studies from the US and Germany involving 784 university students. The participants rated themselves from gay to straight on a 10-point scale. Then they took an implicit sexual orientation test via computer, where participants are shown images and words associated with heterosexuality or homosexuality (such as “gay”) and asked to sort them into the appropriate category as fast as possible. Their reaction times were measured.
But before each word came up, the word “me” or “other” was flashed on the screen for 35 milliseconds – just long enough to perceive the word subliminally, without being aware of it. The hypothesis was that when “me” precedes words that reflect their sexual orientation, those images will be sorted quicker. This is how researchers also try to determine things like implicit racist beliefs in individuals.
Over 20% of self-described highly straight people indicated some level of same-sex attraction – by which I mean they were faster at sorting “me” with pictures and words associated with homosexuality than with heterosexuality. I have my own reservations about these kinds of studies – because I hesitate to call someone gay or racist on the basis of simple matching and reaction-time methodologies. However, it’s extremely difficult to measure something like this, and the next part of the study regarding this 20%+ group is fascinating.
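The core of the methodology described above is just a reaction-time comparison: on trials primed with “me,” is a participant faster at sorting stimuli into one category than the other? Here’s a minimal sketch of that comparison in Python. Everything here is hypothetical for illustration — the trial data, the function name, and the category labels are invented; the actual studies used proper stimulus sets and statistical controls.

```python
# Hypothetical sketch of the reaction-time comparison described above.
# Trial data is invented for illustration only.

from statistics import mean

def implicit_preference(trials):
    """Compare mean reaction times (ms) on 'me'-primed trials.

    Each trial is (prime, category, reaction_ms). The logic of the
    study: a participant is faster at sorting 'me' with a category
    they implicitly associate with themselves.
    """
    me_trials = [t for t in trials if t[0] == "me"]
    rt = lambda cat: mean(t[2] for t in me_trials if t[1] == cat)
    # Positive value: faster pairing "me" with gay-associated stimuli.
    return rt("straight") - rt("gay")

trials = [
    ("me", "gay", 540), ("me", "gay", 560),
    ("me", "straight", 620), ("me", "straight", 600),
    ("other", "gay", 580), ("other", "straight", 590),
]
print(implicit_preference(trials))  # 60.0: faster on "me"+gay trials
```

The sketch also makes the blogger’s reservation concrete: the entire inference rests on a difference of mean reaction times, which is a thin basis for labeling anyone’s orientation or beliefs.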
The New York Times reports on Diederik Stapel, psychology’s most notorious fraudster. But the problems with psychology — and science — go far beyond Stapel’s deception:
At the end of November, the universities unveiled their final report at a joint news conference: Stapel had committed fraud in at least 55 of his papers, as well as in 10 Ph.D. dissertations written by his students. The students were not culpable, even though their work was now tarnished. The field of psychology was indicted, too, with a finding that Stapel’s fraud went undetected for so long because of “a general culture of careless, selective and uncritical handling of research and data.” If Stapel was solely to blame for making stuff up, the report stated, his peers, journal editors and reviewers of the field’s top journals were to blame for letting him get away with it. The committees identified several practices as “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be. […]
Fraud like Stapel’s — brazen and careless in hindsight — might represent a lesser threat to the integrity of science than the massaging of data and selective reporting of experiments. The young professor who backed the two student whistle-blowers told me that tweaking results — like stopping data collection once the results confirm a hypothesis — is a common practice. “I could certainly see that if you do it in more subtle ways, it’s more difficult to detect,” Ap Dijksterhuis, one of the Netherlands’ best known psychologists, told me. He added that the field was making a sustained effort to remedy the problems that have been brought to light by Stapel’s fraud.
Full Story: The New York Times: The Mind of a Con Man
This is the sort of thing that led to the foundation of the Reproducibility Project, which aims to verify studies published in 2008 in Psychological Science, the Journal of Personality and Social Psychology, and the Journal of Experimental Psychology: Learning, Memory, and Cognition.
One thing not really discussed is the difficulty of funding experiments. I have a story coming out on Wired.com tomorrow that deals in part with the state of scientific research, including how hard it is to get funding for research that has a chance of failing. Researchers are spending more time chasing funding than doing research, and the pressure to find positive results and publish is intense. Then there’s also, as alluded to above, the ever-present publication bias.
The primary problem E.P. experienced came in what we’d probably call conscious memory, or what professionals call declarative memory. This involves, as the names imply, the ability to be aware of something we know, and to state it, whether it’s a historic event or the term for an obscure object. For example, E.P. moved to San Diego shortly after his illness, but he was never able to consciously remember the layout of his apartment or where the Pacific Ocean was, even though it was two miles from home. And although he could relate stories about the events of his youth, he’d often get repetitive while doing so—after all, he couldn’t remember which parts of the stories he’d already related.
But that doesn’t mean he had no memory. We store short-term information (like the digits we’re carrying when we’re doing math) in a place called working memory—and E.P.’s working memory was just fine. In some tests, he was blindfolded and led along a path up to 15 meters in length. When it was over, he was able to remember his start position successfully. But wait a few minutes, and the entire test faded from his memory. When asked, he’d tell the researchers that he’d been “in conversation” a few minutes earlier.
(Remind anyone of Memento?)