December 23, 2009 @ 10:56 pm
In a Wired article, Accept Defeat: The Neuroscience of Screwing Up, Jonah Lehrer writes about the ways our assumptions (and experience) blind us to new evidence, even when it’s staring us in the face.
Lehrer tells a layered story, rich with examples, such as Kevin Dunbar’s look at how scientists actually work.
[W]hen experiments were observed up close — and Dunbar interviewed the scientists about even the most trifling details — this idealized version of the lab fell apart, replaced by an endless supply of disappointing surprises. There were models that didn’t work and data that couldn’t be replicated and simple studies riddled with anomalies. “These weren’t sloppy people,” Dunbar says. “They were working in some of the finest labs in the world. But experiments rarely tell us what we think they’re going to tell us. That’s the dirty secret of science.”
This reminds me of my lower-division astronomy class at Caltech in the 1980s. Maarten Schmidt taught us about various methods for measuring Hubble’s Constant (for the expansion of the cosmos) involving redshift measurements and Type Ia supernovae. The problem was, different methods for measuring distance yielded different values for Hubble’s Constant (and for the amount of mass in the cosmos). As measurements and computations became more precise, the discrepancy only got more glaring.
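(A quick sketch of why distance methods matter so much here: Hubble’s law ties a galaxy’s recession velocity, which we read off the redshift fairly directly, to its distance, which we have to estimate. So any disagreement between distance ladders lands squarely on the constant. The numbers below are illustrative, not the historical values:)

```latex
% Hubble's law: recession velocity v is proportional to distance d
v = H_0 \, d \quad\Longrightarrow\quad H_0 = \frac{v}{d}
% Same measured v, two distance estimates that disagree by a factor k:
% d' = k\,d \;\Longrightarrow\; H_0' = \frac{v}{k\,d} = \frac{H_0}{k}
% e.g. if one method puts a galaxy 25% farther away (k = 1.25),
% its implied H_0 comes out 20% smaller -- the discrepancy we saw in class.
```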
Then in the 1990s (long after I had decided an astronomer’s life was not for me), Dark Energy and Dark Matter were finally offered as the least bizarre explanations for what at first had seemed like experimenter error.
Scientific experiments are an attempt to screen out the chaos of everyday existence – to shine a beam of light narrow enough to illuminate a single, repeatable fact of nature. But the teeming world is always right outside the door of our neat little experiment, and often the chaos finds a way inside. Even when it doesn’t – and here is Lehrer’s main point – we have a human tendency to discard new results as mere gibberish. Our expertise gets in our way.
We can’t escape this tendency (so suggests Dunbar’s experiment where students’ brains were observed while they watched videos of falling balls), but our predicament is not hopeless. Lehrer offers up some helpful strategies for escaping the trap of our own preconceptions.
How to Learn From Failure
Too often, we assume that a failed experiment is a wasted effort. But not all anomalies are useless. Here’s how to make the most of them. —J.L.
- Check Your Assumptions
Ask yourself why this result feels like a failure. What theory does it contradict? Maybe the hypothesis failed, not the experiment.
- Seek Out the Ignorant
Talk to people who are unfamiliar with your experiment. Explaining your work in simple terms may help you see it in a new light.
- Encourage Diversity
If everyone working on a problem speaks the same language, then everyone has the same set of assumptions.
- Beware of Failure-Blindness
It’s normal to filter out information that contradicts our preconceptions. The only way to avoid that bias is to be aware of it.
So it seems the illness is chronic and the prescription takes constant work. But at least there is some kind of remedy.