Caryl Rivers and Rosalind C. Barnett, authors of The New Soft War on Women, once again debunk the idea that there are important neurological differences between men and women:
Baron-Cohen based his ideas on a study done in his laboratory of day-old infants, male and female. He claimed that boy babies looked at mobiles longer, while girl babies looked at faces longer. Based on this study, Parents magazine informed its readers, “Girls prefer dolls [to blocks and toys] because girls pay more attention to people while boys are more enthralled with mechanical objects.”
But Baron-Cohen’s study had major problems. It was an “outlier” study. No one else has replicated these findings, including Baron-Cohen himself. It is so flawed as to be almost meaningless.
Full Story: Re/code: Neurosexism: Brains, Gender and Tech
It started as a headache, but soon became much stranger. Simon Baker entered the bathroom to see if a warm shower could ease his pain. “I looked up at the shower head, and it was as if the water droplets had stopped in mid-air”, he says. “They came into hard focus rapidly, over the course of a few seconds”. Where you’d normally perceive the streams as more of a blur of movement, he could see each one hanging in front of him, distorted by the pressure of the air rushing past. The effect, he recalls, was very similar to the way the bullets travelled in the Matrix movies. “It was like a high-speed film, slowed down.” […]
What’s more, Valtteri Arstila at the University of Turku, Finland, points out that many of these subjects also report abnormally quick thinking. As one pilot, who’d faced a plane crash in the Vietnam War, put it: “when the nose-wheel strut collapsed I vividly recalled, in a matter of about three seconds, over a dozen actions necessary to successful recovery of flight attitude”. Reviewing the case studies and available scientific research on the matter, Arstila concludes that an automatic mechanism, triggered by stress hormones, might speed up the brain’s internal processing to help it handle the life-or-death situation. “Our thoughts and initiation of movements become faster – but because we are working faster, the external world appears to slow down,” he says. It is even possible that some athletes have deliberately trained themselves to create a time warp on demand: surfers, for instance, can often adjust their angle in the split second it takes to launch off steep waves, as the water rises overhead.
Full Story: BBC: The man who saw time stand still
The New York Times reports on fMRI studies on what exactly goes on in the brain while people write. The first version of the study was conducted with amateur writers. The second was conducted with experienced creative writers. The researchers found that there were differences between the brain regions used while brainstorming and actually writing, and between the amateurs and professionals. But not everyone is impressed with the research:
Steven Pinker, a Harvard psychologist, was skeptical that the experiments could provide a clear picture of creativity. “It’s a messy comparison,” he said.
Dr. Pinker pointed out that the activity that Dr. Lotze saw during creative writing could be common to writing in general — or perhaps to any kind of thinking that requires more focus than copying. A better comparison would have been between writing a fictional story and writing an essay about some factual information.
Full Story: New York Times: This Is Your Brain on Writing
Wired reports on DIY transcranial direct current stimulation, and why the science behind it might not be all it’s cracked up to be:
It’s a rare thing for a scientist to stand up in front of a roomful of his peers and rip apart a study from his own lab. But that’s exactly what Vincent Walsh did in September at a symposium on brain stimulation at the UC Davis Center for Mind and Brain. Walsh is a cognitive neuroscientist at University College London, and his lab has done some of the studies that first made a splash in the media. One, published in Current Biology in 2010, found that brain stimulation enhanced people’s ability to learn a new number system based on made-up symbols.
Only it didn’t really.
“It doesn’t show what we said it shows; it doesn’t show what people think it shows,” Walsh said before launching into a dissection of his paper’s flaws. They ranged from the technical (guesswork about whether parts of the brain are being excited or inhibited) to the practical (a modest effect with questionable impact on any actual learning outside the lab). When he finished this devastating critique, he tore into two more studies from other high-profile labs. And the problems aren’t limited to these few papers, Walsh said, they’re endemic in this whole subfield of neuroscience.
Neuroskeptic points to a recent meta-study of neuroimaging critiques conducted by Martha Farah at the University of Pennsylvania. The blog highlights Farah’s conclusion:
Inferences based on functional brain imaging, whether for basic science or applications, require scrutiny. As we apply such scrutiny, it is important to distinguish between specific criticisms of particular applications or specific studies and wholesale criticisms of the entire enterprise of functional neuroimaging.
In the first category are criticisms aimed at improving the ways in which imaging experiments are designed and the ways in which their results are interpreted. Uncontrolled multiple comparisons, circular analyses and unconstrained reverse inferences are serious problems that undermine the inferences made from brain imaging data. Although the majority of research is not compromised by any of these errors, a substantial minority of published research is, making such criticisms both valid and useful.
In contrast, the more sweeping criticisms of functional imaging concern the method itself and therefore cast doubt on the conclusions of any research carried out with imaging, no matter how well designed and carefully executed. These more wholesale criticisms invoke the hemodynamic nature of the signal being measured, the association of neuroimaging with modular theories of the mind, the statistical nature of brain images, and the color schemes used to make those images seductively alluring.
As mentioned earlier, each of these criticisms contains an element of truth, but overextends that element to mistakenly cast doubt on the validity or utility of functional neuroimaging research as a whole. None of the criticisms reviewed here constitute reasons to reject or even drastically curtail the use of neuroimaging.
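One of the valid criticisms Farah highlights, uncontrolled multiple comparisons, is easy to see in a toy simulation (this is my own illustration, not from her paper). An fMRI analysis tests thousands of voxels at once; if each is tested at the usual p < 0.05 threshold, pure noise will still produce hundreds of “activated” voxels. A simple correction such as Bonferroni (dividing the threshold by the number of tests) eliminates nearly all of them:

```python
import random

random.seed(0)
n_voxels = 10_000  # a modest number of voxels for a whole-brain analysis
alpha = 0.05

# Under the null hypothesis (no real activation anywhere),
# p-values are uniformly distributed on [0, 1].
p_values = [random.random() for _ in range(n_voxels)]

# Uncorrected: testing every voxel at alpha yields ~5% false positives.
uncorrected_hits = sum(p < alpha for p in p_values)

# Bonferroni correction: divide alpha by the number of tests.
bonferroni_hits = sum(p < alpha / n_voxels for p in p_values)

print("Uncorrected 'activations':", uncorrected_hits)  # roughly 500, all spurious
print("Bonferroni 'activations':", bonferroni_hits)    # usually 0
```

This is the statistical point behind the famous “dead salmon” fMRI demonstration: without correction, noise alone looks like brain activity.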
The full paper is here.
(via Boing Boing)
Researchers have known for some time that sleep is critical for weight maintenance and hormone balance. And too little sleep is linked to everything from diabetes to heart disease to depression. Recently, the research on sleep has been overwhelming, with mounting evidence that it plays a role in nearly every aspect of health. Beyond chronic illnesses, a child’s behavioral problems at school could be rooted in mild sleep apnea. And studies have shown children with ADHD are more likely to get insufficient sleep. A recent study published in the journal SLEEP found a link between poor sleep quality and cognitive decline in older men. Another study out this week shows sleep is essential in early childhood for development, learning, and the formation and retention of memories. Dr. Allan Rechtschaffen, a pioneer of sleep research at the University of Chicago, once said, “If sleep does not serve an absolutely vital function, then it is the biggest mistake the evolutionary process ever made.”
(via Alex Holmes)
Black box recorders are a common feature in aircraft. They sit there keeping track of everything that is happening. Then, if something goes wrong, the information can be reviewed to piece together exactly what happened and form a view of events that might otherwise have been lost.
Now the Pentagon is attempting to develop a similar system for use in humans, and in particular soldiers who have suffered brain damage. If they could be fitted with a black box in their brain, then it may be possible to trigger memories surrounding a traumatic event and overcome memory loss quickly and easily. […]
Memory loss is common in people suffering brain damage, and it can extend to personal details and skills, such as remembering their own name, who their family is, and even how to drive. As well as stimulating the brain to recover recent memories, it is hoped the implant would be able to recall common information and therefore help patients remember who they are.
A quick catch-up episode in which Chris Dancy talks about his trip to Japan and the effects of globalization, and I talk a bit about the cognitive experience of writing. Here’s a taste:
One of them being this writer Arnon Grünberg, who I think might have been on Wired. I’m not sure. No. Where was it? Actually, it was the New York Times. He is writing a book while he is connected to hundreds of sensors on his head and his body, and the book will be read by people wearing similar sensors. So, they have a bunch of volunteers to see if they can sync the feelings of what he wrote and what people experienced, and I thought it quite profound that we have almost a shared biological experience with the writing.
It was on November 29th. So, just a couple of weeks ago, in the New York Times. Thoughts?
KF: That’s really interesting. I’d be curious to see what they find. I find that writing and reading are radically different experiences for me. So, I wouldn’t really expect the writer and the reader to have synchronized experiences, but I’m definitely curious to see how that plays out.
CD: I’ve never written fiction. I’m sure a lot of our listeners aren’t professional writers, but when you’re doing science fiction and you’re in a really dramatic scene, do you get excited, or are you just seeing it and typing it, almost like you’re a court recorder? How does that work for you?
KF: Well, every writer’s different. For me, most of it is that I already know what I’m going to write before I start typing it. So, by the time I’m trying to describe it, I think I’m a little bit more detached from the emotion of it. The other thing to keep in mind is that, I don’t know what the saying is, “75% of writing is rewriting” or whatever. Most of the time that you spend on something is going to be revising it over and over again. So, by the time you’re done, a lot of the visceral or emotional impact that you would expect to get from reading something has kind of worn off, and you’re just sort of sick of reading the same sentence over and over again, trying to figure out how to improve it.
CD: That’s really interesting.
KF: I know that there are definitely a lot of writers who don’t really know what’s going to happen in a scene when they sit down and write it. I imagine they would be working in a very different state from me, but I would still expect most of their time to be spent on rewriting, and I would still expect that feeling of sort of channeling creativity to be different from just reading it. But again, we’ll have to see how it plays out.
Download and Full Transcript: Mindful Cyborgs: Episode 19 – Review, Musings, and Catch Up
My colleague Bob McMillan reports:
Conor Russomanno and Joel Murphy have a dream: They want to create an open-source brain scanner that you can print out at home, strap onto your head, and hook straight into your brainwaves.
This past week, they printed their first headset prototype on a 3-D printer, and WIRED has the first photos.
Bootstrapped with a little funding help from DARPA — the research arm of the Department of Defense — the device is known as OpenBCI. It includes a mini-computer that plugs into sensors on a black skull-grabbing piece of plastic called the “Spider Claw 3000,” which you print out on a 3-D printer. Put it all together, and it operates as a low-cost electroencephalography (EEG) brainwave scanner that connects to your PC.
I wrote for Wired about computer chips designed specifically for building neural networks:
Qualcomm is now preparing a line of computer chips that mimic the brain. Eventually, the chips could be used to power Siri or Google Now-style digital assistants, control robotic limbs, or pilot self-driving cars and autonomous drones, says Qualcomm director of product management Samir Kumar.
But don’t get too excited yet. The New York Times reported this week that Qualcomm plans to release a version of the chips in the coming year, and though that’s true, we won’t see any real hardware anytime soon. “We are going to be looking for a very small selection of partners to whom we’d make our hardware architecture available,” Kumar explains. “But it will just be an emulation of the architecture, not the chips themselves.”
Qualcomm calls the chips, which were first announced back in October, Zeroth, after Isaac Asimov’s zeroth law of robotics: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
The Zeroth chips are based on a new architecture that radically departs from the architectures that have dominated computing for the past few decades. Instead, it mimics the structure of the human brain, which consists of billions of cells called neurons that work in tandem. Kumar explains that although the human brain does its processing much more slowly than digital computers, it’s able to complete certain types of calculations much more quickly and efficiently than a standard computer, because it can do many calculations at once.
Even the world’s largest supercomputers are able to use “only” one million processing cores at a time.