Happy 2010 to all! I’ve taken some time out as I have moved locations and it’s taken longer to settle down than I hoped. (sigh) I still have loads to do but can get back to posting regularly (I hope).
New Year’s Eve I came across a very interesting article about how scientists think thanks to a reference on the Foresight Institute website. The article, Accept Defeat: The Neuroscience of Screwing Up, by Jonah Lehrer for Wired Magazine uses a story about a couple of astronomers and their investigative frustrations to illustrate research on how scientists (and the rest of us, as it turns out) think.
Before going on about the article I’m going to arbitrarily divide beliefs about scientific thinking/processes into two schools. In the first there’s the scientific method, with its belief in objectivity and incontrovertible truths waiting to be discovered and validated. Later, in university, I was introduced to the second belief about scientific thinking: the notion that scientific facts are social creations and that objectivity does not exist. From the outside it appears that scientists tend to belong to the first school and social scientists to the second but, as the Wired article points out, things are a little more amorphous than that when you dig down into the neuroscience of it all.
From the article,
The reason we’re so resistant to anomalous information — the real reason researchers automatically assume that every unexpected result is a stupid mistake — is rooted in the way the human brain works. Over the past few decades, psychologists [and other social scientists] have dismantled the myth of objectivity. The fact is, we carefully edit our reality, searching for evidence that confirms what we already believe. Although we pretend we’re empiricists — our views dictated by nothing but the facts — we’re actually blinkered, especially when it comes to information that contradicts our theories. The problem with science, then, isn’t that most experiments fail — it’s that most failures are ignored.
The DLPFC [dorsolateral prefrontal cortex] is constantly censoring the world, erasing facts from our experience. If the ACC [anterior cingulate cortex, typically associated with errors and contradictions] is the “Oh shit!” circuit, the DLPFC is the Delete key. When the ACC and DLPFC “turn on together, people aren’t just noticing that something doesn’t look right,” [Kevin] Dunbar says. “They’re also inhibiting that information.”
Disregarding evidence is something I’ve noticed (in others more easily than in myself) and have wondered about the implications of. As noted in the article, ignoring scientific failure stymies research and, ultimately, the development of more effective applications of that research. For example, there’s been a lot of interest in a new surgical procedure (still being tested) for patients with multiple sclerosis (MS). The procedure was developed by an Italian surgeon who (after his wife was stricken with the disease) reviewed literature on the disease going back 100 years and found a line of research that wasn’t being pursued actively and was a radical departure from currently accepted beliefs about the nature of MS. (You can read more about the MS work here in the Globe and Mail story or here in the CBC story.) Btw, there are a couple of happy endings. The surgeon’s wife is much better and a promising new procedure is being examined.
Innovation and new research can be so difficult to pursue it’s amazing that anyone ever succeeds. Kevin Dunbar, the researcher mentioned previously, arrived at a rather interesting conclusion in his investigation of how scientists think and how they get around the ACC/DLPFC action: other people. He tells a story about two lab groups who each had a meeting,
Dunbar watched how each of these labs dealt with their protein problem. The E. coli group took a brute-force approach, spending several weeks methodically testing various fixes. “It was extremely inefficient,” Dunbar says. “They eventually solved it, but they wasted a lot of valuable time.”

The diverse lab, in contrast, mulled the problem at a group meeting. None of the scientists were protein experts, so they began a wide-ranging discussion of possible solutions. At first, the conversation seemed rather useless. But then, as the chemists traded ideas with the biologists and the biologists bounced ideas off the med students, potential answers began to emerge. “After another 10 minutes of talking, the protein problem was solved,” Dunbar says. “They made it look easy.”
When Dunbar reviewed the transcripts of the meeting, he found that the intellectual mix generated a distinct type of interaction in which the scientists were forced to rely on metaphors and analogies [my emphasis] to express themselves. (That’s because, unlike the E. coli group, the second lab lacked a specialized language that everyone could understand.) These abstractions proved essential for problem-solving, as they encouraged the scientists to reconsider their assumptions. Having to explain the problem to someone else forced them to think, if only for a moment, like an intellectual on the margins, filled with self-skepticism.
As Dunbar notes, we usually need more than an outsider to experience a Eureka moment (the story about the Italian surgeon notwithstanding, and it should be noted that he was an MS outsider); we need metaphors and analogies. (I’ve taken it a bit further than Dunbar likely would but I am a writer, after all.)
If you are interested in Dunbar’s work, he’s at the University of Toronto with more information here.