Posts tagged: neuroscience
Neuroskeptic points to a recent meta-study of neuroimaging critiques conducted by Martha Farah at the University of Pennsylvania. The blog highlights Farah’s conclusion:
Inferences based on functional brain imaging, whether for basic science or applications, require scrutiny. As we apply such scrutiny, it is important to distinguish between specific criticisms of particular applications or specific studies and wholesale criticisms of the entire enterprise of functional neuroimaging.
In the first category are criticisms aimed at improving the ways in which imaging experiments are designed and the ways in which their results are interpreted. Uncontrolled multiple comparisons, circular analyses and unconstrained reverse inferences are serious problems that undermine the inferences made from brain imaging data. Although the majority of research is not compromised by any of these errors, a substantial minority of published research is, making such criticisms both valid and useful.
In contrast, the more sweeping criticisms of functional imaging concern the method itself and therefore cast doubt on the conclusions of any research carried out with imaging, no matter how well designed and carefully executed. These more wholesale criticisms invoke the hemodynamic nature of the signal being measured, the association of neuroimaging with modular theories of the mind, the statistical nature of brain images, and the color schemes used to make those images seductively alluring.
As mentioned earlier, each of these criticisms contains an element of truth, but overextends that element to mistakenly cast doubt on the validity or utility of functional neuroimaging research as a whole. None of the criticisms reviewed here constitute reasons to reject or even drastically curtail the use of neuroimaging.
The full paper is here.
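The "uncontrolled multiple comparisons" problem Farah names is easy to see with a quick simulation: test enough voxels of pure noise at an uncorrected p < .05 threshold, and hundreds of them will look "active" by chance. A minimal sketch in Python (the voxel counts and thresholds here are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 10,000 "voxels" with NO real effect: for each voxel,
# compare 20 "task" samples against 20 "rest" samples of pure noise.
n_voxels, n = 10_000, 20
task = rng.normal(size=(n_voxels, n))
rest = rng.normal(size=(n_voxels, n))

# Two-sample t statistic per voxel
diff = task.mean(axis=1) - rest.mean(axis=1)
se = np.sqrt(task.var(axis=1, ddof=1) / n + rest.var(axis=1, ddof=1) / n)
t = diff / se

# Uncorrected threshold: |t| > ~2.02 is roughly p < .05 at ~38 df,
# so about 5% of null voxels (~500 here) will cross it by chance.
hits = int(np.sum(np.abs(t) > 2.02))
print(f"{hits} of {n_voxels} noise voxels 'significant' uncorrected "
      f"({100 * hits / n_voxels:.1f}%)")

# A Bonferroni-style correction would instead demand p < .05 / n_voxels
# per voxel, which almost no pure-noise voxel survives.
```

This is the logic behind the famous "dead salmon" fMRI demonstration: with tens of thousands of comparisons, an uncorrected threshold all but guarantees false positives somewhere in the brain.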
(via Boing Boing)
Researchers have known for some time that sleep is critical for weight maintenance and hormone balance. And too little sleep is linked to everything from diabetes to heart disease to depression. Recently, the research on sleep has been overwhelming, with mounting evidence that it plays a role in nearly every aspect of health. Beyond chronic illnesses, a child’s behavioral problems at school could be rooted in mild sleep apnea. And studies have shown children with ADHD are more likely to get insufficient sleep. A recent study published in the journal SLEEP found a link between poor sleep quality and cognitive decline in older men. Another study out this week shows sleep is essential in early childhood for development, learning, and the formation and retention of memories. Dr. Allan Rechtschaffen, a pioneer of sleep research at the University of Chicago, once said, “If sleep does not serve an absolutely vital function, then it is the biggest mistake the evolutionary process ever made.”
(via Alex Holmes)
Black box recorders are a common feature in aircraft. They sit there keeping track of everything that is happening. Then, if something goes wrong, the information can be reviewed to piece together exactly what happened and form a view of events that might otherwise have been lost.
Now the Pentagon is attempting to develop a similar system for use in humans, in particular for soldiers who have suffered brain damage. If they could be fitted with a black box in the brain, it might be possible to trigger memories surrounding a traumatic event and overcome memory loss quickly and easily. […]
Memory loss is common in people suffering brain damage, but they can also forget personal details and skills, such as their own name, who their family members are, and even how to drive. As well as stimulating the brain to recover recent memories, it is hoped the implant would be able to recall common information and so help patients remember who they are.
A quick catch-up episode in which Chris Dancy talks about his trip to Japan and the effects of globalization, and I talk a bit about the cognitive experience of writing. Here’s a taste:
One of them being this writer Arnon Grünberg, who I think might have been on Wired. I’m not sure. No. Where was it? Actually, it was the New York Times. He is writing a book while he is connected to a bunch of sensors, hundreds of sensors, on his head and on his body, and the book will be read by people wearing similar sensors. So, they have a bunch of volunteers to see if they can sync the feelings of what he wrote and what people experienced, and I thought it was quite profound that we have almost a shared biological experience with the writing.
It was on November 29th. So, just a couple of weeks ago in the New York Times. Thoughts?
KF: That’s really interesting. I’d be curious to see what they find. I find that writing and reading are radically different experiences for me. So, I wouldn’t really expect the writer and the reader to have synchronized experiences, but I’m definitely curious to see how that plays out.
CD: I’ve never written fiction. So, when you’re typing out a scene, and I’m sure a lot of our listeners aren’t writers, or at least not professional writers, but when you’re doing science fiction and you’re in a really dramatic scene, do you get excited? Or are you just seeing it and typing how it feels, almost like you’re a court recorder? How does that work for you?
KF: Well, for me most of it … every writer’s different. I guess for me, most of it is I already know what I’m going to write before I start typing it. So, by the time I’m trying to describe it, I think I’m a little bit more detached from the emotion of it. And the other thing to keep in mind is that, I don’t know what the saying is, “75% of writing is rewriting” or whatever. Most of the time that you spend on something is going to be revising it over and over again. So, I don’t know, by the time you’re done, a lot of the visceral or emotional impact that you would expect to get from reading something has kind of worn off, and you’re just sort of sick of reading the same sentence over and over again trying to figure out how to improve it.
CD: That’s really interesting.
KF: I know that there are definitely a lot of writers who don’t really know what’s going to happen in a scene when they sit down to write it. I imagine they would be working in a very different state from me, but I would still expect most of their time to be spent on rewriting, and I would still expect that feeling of sort of channeling creativity to be different from just reading it. But again, we’ll have to see how it plays out.
Download and Full Transcript Mindful Cyborgs: Episode 19 – Review, Musings, and Catch Up
My colleague Bob McMillan reports:
Conor Russomanno and Joel Murphy have a dream: They want to create an open-source brain scanner that you can print out at home, strap onto your head, and hook straight into your brainwaves.
This past week, they printed their first headset prototype on a 3-D printer, and WIRED has the first photos.
Bootstrapped with a little funding help from DARPA — the research arm of the Department of Defense — the device is known as OpenBCI. It includes a mini-computer that plugs into sensors on a black skull-grabbing piece of plastic called the “Spider Claw 3000,” which you print out on a 3-D printer. Put it all together, and it operates as a low-cost electroencephalography (EEG) brainwave scanner that connects to your PC.
I wrote for Wired about computer chips designed specifically for building neural networks:
Qualcomm is now preparing a line of computer chips that mimic the brain. Eventually, the chips could be used to power Siri or Google Now-style digital assistants, control robotic limbs, or pilot self-driving cars and autonomous drones, says Qualcomm director of product management Samir Kumar.
But don’t get too excited yet. The New York Times reported this week that Qualcomm plans to release a version of the chips in the coming year, and though that’s true, we won’t see any real hardware anytime soon. “We are going to be looking for a very small selection of partners to whom we’d make our hardware architecture available,” Kumar explains. “But it will just be an emulation of the architecture, not the chips themselves.”
Qualcomm calls the chips, which were first announced back in October, Zeroth, after Isaac Asimov’s zeroth law of robotics: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
The Zeroth chips are based on a new architecture that radically departs from the architectures that have dominated computing for the past few decades. Instead, it mimics the structure of the human brain, which consists of billions of cells called neurons that work in tandem. Kumar explains that although the human brain does its processing much more slowly than digital computers, it’s able to complete certain types of calculations much more quickly and efficiently than a standard computer, because it can do many calculations at once.
Even the world’s largest supercomputers are able to use “only” one million processing cores at a time.
Vaughan Bell on the shift from psychiatric drugs that act on one specific neurotransmitter in favor of a “circuit” driven model of treating mental and neurological disorders:
In its place is a science focused on understanding the brain as a series of networks, each of which supports a different aspect of our experience and behaviour. By this analysis, the brain is a bit like a city: you can’t make sense of the bigger picture without knowing how everything interacts. Relatively few residents of Belfast who live in the Shankill spend their money in the Falls Road and this tells us much more about the city – as these are the key loyalist and republican areas – than knowing that the average income of each area is much the same. Similarly, knowing that key brain areas interact differently when someone gets depressed tells us something important that a measure of average brain activity would miss. […]
Perhaps more surprising for some is the explosion in deep brain stimulation procedures, where electrodes are implanted in the brains of patients to alter electronically the activity in specific neural circuits. Medtronic, just one of the manufacturers of these devices, claims that its stimulators have been used in more than 100,000 patients. Most of these involve well-tested and validated treatments for Parkinson’s disease, but increasingly they are being trialled for a wider range of problems. Recent studies have examined direct brain stimulation for treating pain, epilepsy, eating disorders, addiction, controlling aggression, enhancing memory and for intervening in a range of other behavioural problems.
Fast Company on the latest in neurotech:
While their roundtable discussion admittedly sounded like a master’s exercise in strange science, the kicker is that all three are engaged in preliminary efforts to make this happen. Last year, at the resolutely mainstream MIT Media Lab, I saw Dr. Berger speak about hacking the memories of rats. Berger’s lab at USC is actively working on prosthetic brain implants that both falsify memories and stimulate brain function in damaged neurons. The lab’s work recently received media attention when it successfully generated new memories in a rat that had its hippocampus chemically disabled. In the literature, Berger emphasizes his technology’s potential for treating Alzheimer’s and dementia through the possibility of “building spare parts for the brain”; on-stage in New York, he said it could also lead in the future to full-on brain transplants.
This would work in tandem with Kaplan’s and Lebedev’s specialties. The two Russian scientists research brain-computer interfaces (BCIs): plug-in interfaces that meld the human brain and nervous system to computer operating systems. While BCIs are most commonly found in toys that read brainwaves to detect stress or concentration, they have revolutionary potential to change the lives of stroke victims and the disabled.
A recent paper published in the Journal of Medical Ethics warns of the dangers of DIY transcranial direct current stimulation (tDCS). The National Post reports:
Those risks include reversing the polarity of the electrodes to cause impairment instead of benefit, and triggering potentially long-lasting and negative changes to the brain’s biology, the researchers argue in the Journal of Medical Ethics. […]
In fact, Health Canada considers tDCS machines to be class-three devices — on a scale of risk ranging from one to four — and has yet to approve any for treating psychological illness – though they are licensed for pain and insomnia therapy, said Leslie Meerburg, a department spokeswoman. […]
One subtle but troubling risk could lie in the ability of the devices to change behaviour, with research by Prof. Fecteau and colleagues suggesting tDCS can actually make people better liars, or less empathetic, both qualities that could encourage unscrupulous conduct.
Amusingly, after citing a researcher who says tDCS could make people better liars and less empathetic, the Post quotes someone selling a home tDCS rig saying that it is “very safe.” But, despite the somewhat sordid tone of the story, the actual Journal of Medical Ethics paper does say that tDCS is “relatively safe.” You can find the full paper here.
Talking about something in a neurobiological way sends the message that this is a neurobiological issue. In this way, many fMRI papers serve to spread the idea that this is an issue that only neuroscience can solve and, therefore, create a demand for more fMRI studies. The authors of this paper are victims of this mentality, a widespread confusion about what neuroscience is for.
fMRI is a great way to approach neuroscientific questions. It’s a bad (and terribly expensive) way to do psychology. This study is about psychology, and should not have involved an MRI scanner.