The 'Decline Effect' and Scientific Truth

Friday, June 29, 2012

Transcript

Surprising and exciting scientific findings capture our attention and captivate the press. But what if, at some point after a finding has been soundly established, it starts to disappear? In a special collaboration with Radiolab, we look at the 'decline effect': when more data tells us less, not more, about scientific truth.

Correction: An earlier version of this short incorrectly stated that Jonathan Schooler saw the effect size of his study fall by 30% on two different occasions. In fact, he saw it fall by that amount the first time he repeated the study and saw a general downward trend thereafter. The audio has been adjusted to reflect this fact.


Correction: An earlier version of this short incorrectly attributed a statement to Jonathan Schooler’s advisor. The statement was actually made by his colleague. The audio has been adjusted to reflect this fact.

Guests:

Jad Abumrad, Robert Krulwich, Jonah Lehrer and Jonathan Schooler

Hosted by:

Brooke Gladstone

Comments [11]

Allan Dodds from San Diego County

The reproducibility of published science depends to some extent on the details within the materials and methods section and on the ability or willingness to reproduce them the next time the study is conducted. I would guess that failure to adhere to the original study methods, a desire to do it "better" than the last time, or a tendency to take shortcuts plays a part in the topic at hand. This may already have been covered by the color-of-the-room theory for studies that require interviews of humans.

Jul. 03 2012 08:51 PM
Simon Rochester from Oakland, CA

As was pointed out in the comments on this story at the RadioLab website, this effect does occur in physics, and for well-understood reasons. In fact, it was memorably discussed in Richard Feynman's 1974 Caltech commencement address on pseudoscience:

"We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off because he had the incorrect value for the viscosity of air. It's interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher.

"Why didn't they discover the new number was higher right away? It's a thing that scientists are ashamed of--this history--because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong--and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We've learned those tricks nowadays, and now we don't have that kind of a disease."

Jul. 03 2012 05:17 AM
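The selection dynamic Feynman describes can be sketched as a toy simulation (every number below is an arbitrary illustration, not the actual measurement history): if each new result is trusted only when it lands near the current consensus, the accepted value creeps slowly toward the truth instead of jumping there.

```python
import random

random.seed(1)

TRUE_VALUE = 1.602   # illustrative "true" value (arbitrary units)
consensus = 1.50     # accepted value starts a little too low
TOLERANCE = 0.04     # results farther than this from consensus get explained away
NOISE_SD = 0.05      # honest measurement scatter

history = [consensus]
for _ in range(200):
    measurement = random.gauss(TRUE_VALUE, NOISE_SD)
    # The bias: only results close to the current consensus are believed and kept;
    # outliers are "checked" until a reason is found to discard them.
    if abs(measurement - consensus) <= TOLERANCE:
        consensus = (consensus + measurement) / 2
        history.append(consensus)

# The accepted value drifts gradually upward toward TRUE_VALUE.
print(f"start={history[0]:.3f} end={history[-1]:.3f}")
```

Remove the tolerance filter (accept every measurement) and the running average lands near the true value almost immediately; the filter alone produces the slow, stepwise creep Feynman describes.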
Ray Wood from Portland, OR

As a double air sign libra, I'm all in on this one. After all, the Greeks not only recognized the solid/liquid/fire/water factors, but also the ethereal, which is just the catch-all term for subtle substance that we have not yet learned to observe.

Jul. 03 2012 12:12 AM
josh morgan from cambridge ma

The most likely explanation for declining effects is that the initial finding was a false positive and that there was a confirmation bias in follow up studies. There is no need to invoke supernatural explanations or to question the foundation of inductive reasoning.

Jul. 01 2012 11:52 PM
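The false-positive-plus-selection account in the comment above can be made concrete with a quick simulation (the true effect size, noise level, and publication cutoff are all made-up numbers for illustration): when only studies with large observed effects get published, the published estimates start out inflated, and unfiltered replications regress back toward the small true effect.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # small true effect, in arbitrary standardized units
NOISE_SD = 1.0      # sampling noise per study
N_STUDIES = 50_000

# Many labs run the same study; only "impressive" results get published.
observed = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]
published = [x for x in observed if x > 1.0]  # crude publication filter

# Replications are fresh, unfiltered draws from the same process,
# so on average they land back near the true effect.
replications = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in published]

print(f"published mean:   {statistics.mean(published):.2f}")    # inflated
print(f"replication mean: {statistics.mean(replications):.2f}")  # near 0.2
```

The publication filter is the whole trick: nothing about the underlying effect changes between the original studies and the replications, yet the published effect "declines" on replication.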
Michael J. Maierle from Milwaukee, WI

Here's my take on the 'decline effect.' Basically, there is a tendency to find what we're looking for. The first researcher hypothesizes an effect. It's bold, it's big, it gets named after him or her. The next researcher is intrigued. The third researcher is interested. The fourth, dubious. Pretty soon researchers are looking for other factors that affect the outcome, such as the green room or the attractive grad student. I've heard this called "motivated reasoning" or "confirmation bias."

By the way, I'm all for the "rambling, chit-chatty style of this report." To me, education is often entertaining.

Jul. 01 2012 06:33 PM

There are any number of explanations for this effect that challenge the mainstream scientific paradigm.

It could be that there are just far too many unknown but significant variables to conduct any but the most elementary behavioral experiments.

It could be that the researcher's intentions or assumptions affect the quantum probability field in which the experiment takes place.

It could be that what we imagine to be an "objective" reality slowly accommodates itself to our unconscious conceptual patterns - when we no longer expect or hope for surprising outcomes we get more ordinary outcomes.

But, most importantly, this is further evidence that the most axiomatic assumptions of the scientific paradigm are not as rock solid as we've long assumed; and the lesson is that we should rely less on the scientific method as the one and only reliable path to "truth".

Jul. 01 2012 05:37 PM
Ann Cheves from Salt Lake CIty

I was surprised to not hear any mention of the Hawthorne Effect in this report.

Jul. 01 2012 04:24 PM
Jessie Henshaw from way uptown

Of course, the strong motive to replace missing information with "made up stuff" could explain the apparent "observer effect" too...

Jul. 01 2012 03:58 PM
Jessie Henshaw from way uptown

There's a whole range of "low hanging fruit" problems for search strategies.

Jul. 01 2012 03:55 PM
John Gallup from San Diego

Perhaps some of your listeners enjoy the rambling, chit-chatty style of this report, so reminiscent of Krulwich's science stories and "This American Life." I don't.

Jun. 30 2012 06:06 PM
Steve Jenkins from Holland, Michigan

Not to be too picky, but "data" doesn't actually mean "truth," as Jad Abumrad says in this report. "Datum," from the classical Latin, means "that which is given" or "present"; it is the noun use of the neuter past participle of "dare," to give (OED). However, Mr. Abumrad's point is well taken that we expect data to be accurate.

Great show as usual. I think I listened to this show when it aired before, but didn't notice this.

Steve Jenkins

Jun. 30 2012 12:27 PM
