Science fiction writer Tim Maughan reports on the real science of making animals smarter:
In 2011, a research team led by Sam Deadwyler of Wake Forest University in Winston-Salem, North Carolina, used five rhesus monkeys to study the factors that lead people with diseases like Alzheimer’s to lose control of their thought processes. The researchers trained the monkeys in an intelligence task that involved learning and identifying images and symbols. They were then given doses of cocaine in order to dull their intelligence and made to repeat the test, with predictably less impressive results.
What happened in the next stage of the research was remarkable. The same monkeys were fitted with neural prosthetics – brain implants designed to monitor and correct the functions of the neurons disabled by the cocaine. These implants successfully restored normal brain function to the monkeys when they were drugged – but crucially, if they were activated before the monkeys had been drugged, they improved the primates’ performance beyond their original test results. The aim of the experiments was to see whether neural prosthetics could theoretically be used to restore decision-making in humans who have suffered trauma or diseases such as Alzheimer’s – but as far as these specific tests were concerned at least, the brain prosthetics appeared to make the monkeys smarter.
While the obvious answer to the question of whether we should make animals smarter is “hell no,” Tim points out that medical research motivations may make smarter animals an inevitability. He dives deeper into the ethical questions in the article.
Glenn Greenwald reports on more documents from Edward Snowden’s cache, this batch on how GCHQ uses online deception and other tactics to discredit hacktivists and possibly other political activists:
Among the core self-identified purposes of JTRIG are two tactics: (1) to inject all sorts of false material onto the internet in order to destroy the reputation of its targets; and (2) to use social sciences and other techniques to manipulate online discourse and activism to generate outcomes it considers desirable. To see how extremist these programs are, just consider the tactics they boast of using to achieve those ends: “false flag operations” (posting material to the internet and falsely attributing it to someone else), fake victim blog posts (pretending to be a victim of the individual whose reputation they want to destroy), and posting “negative information” on various forums. […]
Government plans to monitor and influence internet communications, and covertly infiltrate online communities in order to sow dissension and disseminate false information, have long been the source of speculation. Harvard Law Professor Cass Sunstein, a close Obama adviser and the White House’s former head of the Office of Information and Regulatory Affairs, wrote a controversial paper in 2008 proposing that the US government employ teams of covert agents and pseudo-”independent” advocates to “cognitively infiltrate” online groups and websites, as well as other activist groups.
Sunstein also proposed sending covert agents into “chat rooms, online social networks, or even real-space groups” which spread what he views as false and damaging “conspiracy theories” about the government. Ironically, the very same Sunstein was recently named by Obama to serve as a member of the NSA review panel created by the White House, one that – while disputing key NSA claims – proceeded to propose many cosmetic reforms to the agency’s powers (most of which were ignored by the President who appointed them).
What’s more, GCHQ admits in one of the documents that this activity has nothing to do with terrorism or even national security.
Here’s the description of a talk that happened at the Belfer Center for Science and International Affairs:
In today’s world, businesses are facing increasingly complex threats to infrastructure, finances, and information. The government is sometimes unable to share classified information about these threats. As a result, business leaders are creating their own intelligence capabilities within their companies.
This is not about time-honored spying by businesses on each other, or niche security firms, but about a completely new use of intelligence by major companies to support their global operations.
The panelists examine the reasons for private sector intelligence: how companies organize to obtain it, and how the government supports them. Is this a growing trend? How do companies collaborate on intelligence? How does the government view private intelligence efforts? How do private and government intelligence entities relate to one another? And what does this all mean for the future of intelligence work?
I’d love to find out more, or find a transcript or video of the talk.
(Thanks Tim Maly)
I’ve wondered for a long time how IQ studies controlled for the motivation level of the participants. Turns out they don’t:
New work, led by Angela Lee Duckworth, a psychologist at the University of Pennsylvania, and reported online today in the Proceedings of the National Academy of Sciences explores the effect of motivation on how well people perform on IQ tests. While subjects taking such tests are usually instructed to try as hard as they can, previous research has shown that not everyone makes the maximum effort. A number of studies have found that subjects who are promised monetary rewards for doing well on IQ and other cognitive tests score significantly higher.
To further examine the role of motivation on both IQ test scores and the ability of IQ tests to predict life success, Duckworth and her team carried out two studies, both reported in today’s paper. First, they conducted a “meta-analysis” that combined the results of 46 previous studies of the effect of monetary incentives on IQ scores, representing a total of more than 2000 test-taking subjects. The financial rewards ranged from less than $1 to $10 or more. The team calculated a statistical parameter called Hedges’ g to indicate how big an effect the incentives had on IQ scores; g values of less than 0.2 are considered small, 0.5 are moderate, and 0.7 or higher are large.
Duckworth’s team found that the average effect was 0.64 (which is equivalent to nearly 10 points on the IQ scale of 100), and remained higher than 0.5 even when three studies with unusually high g values were thrown out. Moreover, the effect of financial rewards on IQ scores increased dramatically the higher the reward: Thus rewards higher than $10 produced g values of more than 1.6 (roughly equivalent to more than 20 IQ points), whereas rewards of less than $1 were only one-tenth as effective.
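To make the effect-size statistic concrete, here’s a minimal Python sketch of how Hedges’ g is computed from two groups’ means and standard deviations. The numbers below are made up for illustration; they are not from Duckworth’s paper:

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled  # Cohen's d
    # Hedges' correction shrinks d slightly for small samples
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * correction

# Hypothetical example: an incentivized group averaging 105 vs. 100 for
# controls, SD 15, 30 subjects per group
g = hedges_g(105, 15, 30, 100, 15, 30)  # roughly 0.33, a small-to-moderate effect
```

Because IQ scales are normed to a standard deviation of about 15, a g of 0.64 corresponds to roughly 0.64 × 15 ≈ 10 IQ points, which is where the article’s conversion comes from.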
Full Story: What Does IQ Really Measure?
Full paper here.
(via Slate Star Codex)
Also of note: A study found that students who thought intelligence was malleable got better grades. “Convincing students that they could make themselves smarter by hard work led them to work harder and get higher grades. The intervention had the biggest effect for students who started out believing intelligence was genetic.” (paper).
I’ve linked to research before casting doubt on the efficacy of “brain training” games and software (other than dual n-back). But some new research reported by the MIT Technology Review is more promising:
Cancer survivors sometimes suffer from a condition known as “chemo fog”—a cognitive impairment caused by repeated chemotherapy. A study hints at a controversial idea: that brain-training software might help lift this cognitive cloud.
Various studies have concluded that cognitive training can improve brain function in both healthy people and those with medical conditions, but the broader applicability of these results remains controversial in the field.
In a study published in the journal Clinical Breast Cancer, investigators report that those who used a brain-training program for 12 weeks were more cognitively flexible, more verbally fluent, and faster-thinking than survivors who did not train. […]
“This is a well-done study—they had not just one transfer test but several,” says Hambrick, who notes that many studies of cognitive training depend on a single test to measure results. “But an issue is the lack of activity within the control group.” Better would be to have the control group do another demanding cognitive task in lieu of Lumosity training—something analogous to a placebo, he says: “The issue is that maybe the improvement in the group that did the cognitive training doesn’t reflect enhancement of basic cognitive processes per se, but could be a motivational phenomenon.”
See also: Dual N-Back FAQ
The Dual N-Back FAQ is a great resource compiling tons of research and advice on using the dual n-back test for improving working memory and/or general intelligence.
SHOULD I DO MULTIPLE DAILY SESSIONS, OR JUST ONE?
Most users seem to go for one long n-back session, pointing out that it exercises one’s focus. Others do one session in the morning and one in the evening so they can focus better on each one. There is some scientific support for the idea that evening sessions are better than morning sessions, though; see Kuriyama 2008, which found that practice before bedtime was more effective than practice after waking up.
If you break up sessions into more than 2, you’re probably wasting time due to overhead, and may not be getting enough exercise in each session to really strain yourself like you need to.
And as for frequency/spacing:
This study compared a high intensity working memory training (45 minutes, 4 times per week for 4 weeks) with a distributed training (45 minutes, 2 times per week for 8 weeks) in middle-aged, healthy adults…Our results indicate that the distributed training led to increased performance in all cognitive domains when compared to the high intensity training and the control group without training. The most significant differences revealed by interaction contrasts were found for verbal and visual working memory, verbal short-term memory and mental speed.
It also includes a meta-analysis of studies critical of n-back, which found evidence for only one flaw: the use of passive control groups, which accounts for about half of the IQ improvement in some studies.
I found it via Brain Workshop, an open source n-back application for Windows, OS X, and Linux.
Previously: History of the n-back training exercise.
Science Daily reports on a recent study that used magnetic resonance imaging and found that about 6.7% of the variation in individual intelligence can be predicted by the overall size of the brain, another 5% by the size of the lateral prefrontal cortex, and another 10% by the strength of the connection between the left lateral prefrontal cortex and the rest of the brain. From the story:
"This study suggests that part of what it means to be intelligent is having a lateral prefrontal cortex that does its job well; and part of what that means is that it can effectively communicate with the rest of the brain," says study co-author Todd Braver, PhD, professor of psychology in Arts & Sciences and of neuroscience and radiology in the School of Medicine. Braver is a co-director of the Cognitive Control and Psychopathology Lab at Washington University, in which the research was conducted.
One possible explanation of the findings, the research team suggests, is that the lateral prefrontal region is a “flexible hub” that uses its extensive brain-wide connectivity to monitor and influence other brain regions in a goal-directed manner.
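For readers unfamiliar with “percent of variance predicted,” here is a toy Python sketch of the underlying idea: the incremental R² a new predictor adds to a linear regression. The data below is synthetic, generated purely for illustration, and the coefficients are arbitrary, not the study’s:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for overall brain size and prefrontal connectivity
brain_size = rng.normal(size=n)
connectivity = rng.normal(size=n)
# "Intelligence" depends weakly on both, plus a lot of noise
iq = 0.3 * brain_size + 0.4 * connectivity + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_size = r_squared(brain_size.reshape(-1, 1), iq)
r2_both = r_squared(np.column_stack([brain_size, connectivity]), iq)
# Variance explained by connectivity over and above brain size
incremental = r2_both - r2_size
```

The study’s figures work the same way: each percentage is the additional variance in intelligence scores that a measure accounts for beyond the predictors already in the model.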
(via Ryan Yonck)
A high intake of fructose impairs the cognitive abilities of rats by interfering with insulin signaling, but omega-3 fatty acids (n-3) reduce those negative effects, according to a study from the Department of Integrative Biology and Physiology at UCLA published in the Journal of Physiology.
Although headlines today, including my own, emphasize the study’s findings regarding the impairing effects of high levels of fructose, the study also highlights the importance of n-3 acids, specifically DHA, to cognitive function. The authors of the study conclude: “In terms of public health, these results support the encouraging possibility that healthy diets can attenuate the action of unhealthy diets such that the right combination of foods is crucial for a healthy brain.”
The study, conducted by Rahul Agrawal and Fernando Gomez-Pinilla, consisted of four groups of six rats.
Each group was tested on a Barnes maze, a standard measure of spatial learning and memory in rodents. Prior to beginning their special diets, all of the rats had been trained in the maze for five days and were found to be of equal cognitive ability.
The study found that an n-3 deficient diet hampered the rats’ performance on the maze, and that adding high fructose intake to an n-3 deficient diet made things substantially worse. The rats with an n-3 sufficient diet but a high level of fructose did significantly better than those with an n-3 deficient diet and a high level of fructose, but still did worse than those with a deficient n-3 level and no fructose. Here’s an illustration of the latency in completing the maze (lower is better):
The study notes: “Although there was a preference towards fructose drinking in comparison to the food intake, no differences were observed in body weight and total caloric intake, thus suggesting that obesity is not a major contributor to altered memory functions in this model.”
This is a new study that has yet to be replicated, and so far its implications for human diets are unclear. “We’re not talking about naturally occurring fructose in fruits, which also contain important antioxidants,” Gomez-Pinilla said in a press release. “We’re concerned about high-fructose corn syrup that is added to manufactured food products as a sweetener and preservative.”
Although studies have found positive benefits in taking DHA supplements (see Wikipedia for an overview), a previous study by the Nutritional Sciences Division at King’s College London on DHA levels in vegans and vegetarians concluded that although those who don’t eat meat have significantly lower levels of DHA, “there is no evidence of adverse effects on health or cognitive function with lower DHA intake in vegetarians.” However, there are now a number of algae-based vegan DHA supplements.
Dan Hurley wrote a lengthy New York Times piece covering the origins of the n-back training exercise, which purportedly improves fluid intelligence in those who practice it daily:
The study, by a Swedish neuroscientist named Torkel Klingberg, involved just 14 children, all with A.D.H.D. Half participated in computerized tasks designed to strengthen their working memory, while the other half played less challenging computer games. After just five weeks, Klingberg found that those who played the working-memory games fidgeted less and moved about less. More remarkable, they also scored higher on one of the single best measures of fluid intelligence, the Raven’s Progressive Matrices. Improvement in working memory, in other words, transferred to improvement on a task the children weren’t training for. […]
When Klingberg’s study came out, both Jaeggi and Buschkuehl were doctoral candidates in cognitive psychology at the University of Bern, Switzerland. Since his high-school days as a Swiss national-champion rower, Buschkuehl had been interested in the degree to which skills — physical and mental — could be trained. Intrigued by Klingberg’s suggestion that training working memory could improve fluid intelligence, he showed the paper to Jaeggi, who was studying working memory with a test known as the N-back. “At that time there was pretty much no evidence whatsoever that you can train on one particular task and get transfer to another task that was totally different,” Jaeggi says. That is, while most skills improve with practice, the improvement is generally domain-specific: you don’t get better at Sudoku by doing crosswords. And fluid intelligence was not just another skill; it was the ultimate cognitive ability underlying all mental skills, and supposedly immune from the usual benefits of practice. To find that training on a working-memory task could result in an increase in fluid intelligence would be cognitive psychology’s equivalent of discovering particles traveling faster than light.
But really, the biggest drawback is probably that it’s hard to get people to start or stick with the n-back. I’ve known about it for years now and still haven’t done it.
It’s clear now that, much like HBGary before it (see: Inside the World of Wannabe Cyberspooks for Hire), private security research firm Stratfor is a joke.
But according to The Atlantic’s international editor Max Fisher, Stratfor was always a joke in the foreign policy community:
The group’s reputation among foreign policy writers, analysts, and practitioners is poor; they are considered a punchline more often than a source of valuable information or insight. As a former recipient of their “INTEL REPORTS” (I assume someone at Stratfor signed me up for a trial subscription, which appeared in my inbox unsolicited), what I found was typically some combination of publicly available information and bland “analysis” that had already appeared in the previous day’s New York Times. A friend who works in intelligence once joked that Stratfor is just The Economist a week later and several hundred times more expensive. As of 2001, a Stratfor subscription could cost up to $40,000 per year.
Fisher also chides Wikileaks for buying into Stratfor’s marketing hype:
It’s true that Stratfor employs on-the-ground researchers. They are not spies. On today’s Wikileaks release, one Middle East-based NGO worker noted on Twitter that when she met Stratfor’s man in Cairo, he spoke no Arabic, had never been to Egypt before, and had to ask her for directions to Tahrir Square. Stratfor also sometimes pays “sources” for information. Wikileaks calls this “secret cash bribes,” hints that this might violate the Foreign Corrupt Practices Act, and demands “political oversight.”
For comparison’s sake, The Atlantic often sends our agents into such dangerous locales as Iran or Syria. We call these men and women “reporters.” Much like Stratfor’s agents, they collect intelligence, some of it secret, and then relay it back to us so that we may pass it on to our clients, whom we call “subscribers.” Also like Stratfor, The Atlantic sometimes issues “secret cash bribes” to on-the-ground sources, whom we call “freelance writers.” We also prefer to keep their cash bribes (“writer’s fees”) secret, and sometimes these sources are even anonymous.
I suppose much of that depends on whether these payments were made, as Fisher suggests, to freelance researchers and writers, or, as Wikileaks implies, to government officials and employees. The Stratfor employee mentioned by that NGO worker may not be the only type of “informant” on the company’s payroll.
(via Alex Burns)