Originally posted on March 11, 2016.
It was, as our NYC-bred presidential candidates would say, “yuge” news, both in and beyond psychological science: When 270 researchers in an “Open Science Collaboration” network redid 100 recent studies from three leading journals, only 36 percent of the findings replicated. Ouch!
But now another research team, led by Harvard social psychologist Daniel Gilbert, has reanalyzed the data and arrived at a radically different conclusion. The OSC group has since offered a rejoinder, the Gilbert group a rebuttal, and the conversation continues. Boiling the controversy down to the fewest possible words, the Gilbert group offers this elevator-speech synopsis:
OSC: “We have provided a credible estimate of the reproducibility of psychological science.”
US [Gilbert et al.]: “No you haven’t, because (1) you violated the basic rules of sampling when you selected studies to replicate, (2) you did unfaithful replications of many of the studies you selected, and (3) you made statistical errors.”
OSC (& OTHERS): “We didn’t make statistical errors.”
Stay tuned: this debate is in progress, conducted as a disagreement among mutually respectful colleagues. The exchanges bring to mind the words of David Hume: “The truth springs from arguments amongst friends.”
Whatever the outcome, the “reproducibility crisis” debate is the free marketplace of ideas in action as diverse scholars
1) aim to discern and give witness to truth,
2) contribute their findings and conclusions to the public sphere, while welcoming others doing the same, and then
3) debate their differences, confident that greater wisdom will ultimately emerge.