When science fails

November 13, 2011

I just read this short piece by Richard Van Noorden in Nature about the rising number of retractions in medical journals over the past five years, and it got me thinking about the different ways in which researchers fail to deal with failure (the visualizations that accompany the story are striking).

Photo: Esther Vargas, 2008 (CC BY-NC-SA)

The article identifies two potential causes behind the retraction boom: (1) increased access to data and results via the Internet, which facilitates error discovery; and (2) the creation of oversight organizations charged with identifying scientific fraud (Van Noorden points to the US Office of Research Integrity in the DHHS as an example). It occurred to me in reading this that a third, complementary cause could be the political pressure exerted on universities and funding agencies as a result of the growing hostility towards publicly funded research. In the face of such pressure, self-policing would seem more likely.

Apparently, the pattern goes further and deeper than Van Noorden is able to discuss within the confines of such a short piece. This Medill Reports story by Daniel Peake from last year has a graph of retractions that goes all the way back to 1990, showing that the upturn has been quite sudden.

All of these claims about the causes of retractions are empirical and could and should be tested, at least to some extent. The bigger question, of course, remains: what to do about the reality of failure in scientific research? As numerous people have already pointed out, in an environment where publication serves as the principal metric of production, the institutions, organizations & individuals that create research – universities, funding agencies, peer-reviewed journals, academics & publishers – have few (if any) reasons to identify and eliminate flawed work. The big money at stake in medical research probably compounds these issues, but that doesn’t mean the social sciences are immune. In fields like Sociology or Communication, where the stakes are sufficiently low (how many lives were lost in FDA trials because of the conclusions drawn by that recent AJS article on structural inequality?), the social cost of falsification, plagiarism, and fraud remains insufficient to spur either public outrage or formal oversight. Most flawed social scientific research probably remains undiscovered simply because, in the grand scheme of policy and social welfare, it does not have a clear impact.

Presumably, stronger norms around transparency can continue to provide enhanced opportunities for error discovery in quantitative work (and I should have underscored earlier that these debates are pretty much exclusively about quantitative work). In addition, however, I wonder if it might be worth coming up with some other early-detection and response mechanisms. Here are some ideas I started playing with after reading the article:

Adopt standardized practices for data collection on research failure and retractions. I understand that many researchers, editors, funders, and universities don’t want the word to get out that they produced/published/supported anything less than the highest quality work, but it really doesn’t seem like too much to ask that *somebody* collect some additional data about this stuff and that such data adhere to a set of standards. For example, it would be great to know if my loose allegations about the social sciences having higher rates of research failure and lower rates of error discovery are actually true. The only way that could happen would be through data collection and comparison across disciplines.
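
As a very rough illustration of what a standard record might look like, here is a minimal sketch in Python of a hypothetical retraction schema. Every field name below is my own assumption rather than part of any existing standard; the point is just that a handful of shared fields would make cross-disciplinary comparison a simple aggregation problem.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RetractionRecord:
    """Hypothetical minimal schema for a standardized retraction record."""
    doi: str                  # persistent identifier of the retracted article
    journal: str              # publishing venue
    discipline: str           # e.g. "medicine", "sociology", "communication"
    published: date           # original publication date
    retracted: date           # date the retraction notice appeared
    reason: str               # e.g. "error", "falsification", "plagiarism", "fraud"
    discovered_by: str        # e.g. "author", "editor", "reader", "oversight body"
    notice_url: Optional[str] = None  # link to the public retraction notice

# With records like these collected across fields, questions such as
# "do the social sciences have lower rates of error discovery?" become
# answerable by counting and comparing records grouped by discipline.
```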

Warning labels based on automated meta-analyses. Imagine if you read the following in the header of a journal article: “Caution! The findings in this study contradict 75% of published articles on similar topics.” In the case of medical studies in particular, a little bit of meta-data applied to each article could facilitate automated meta-analyses and simulations that could generate population statistics and distributions of results. This is probably only feasible for experimental work, where study designs are repeated with greater frequency than in observational data collection.
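
To make the idea concrete, here is a minimal Python sketch of how such a label might be generated, assuming each prior study's metadata carries a simple field recording the direction of its main effect. The field name, the directional coding, and the 75% threshold are all illustrative assumptions, not an existing system.

```python
from typing import List, Optional

def contradiction_warning(new_effect_direction: str,
                          prior_studies: List[dict],
                          threshold: float = 0.75) -> Optional[str]:
    """Return a warning label if the new finding contradicts most prior work.

    Each entry in `prior_studies` is assumed to be a metadata record with an
    'effect_direction' field ("positive", "negative", or "null") summarizing
    that study's main result on the same topic.
    """
    if not prior_studies:
        return None
    contradicting = sum(
        1 for s in prior_studies if s["effect_direction"] != new_effect_direction
    )
    rate = contradicting / len(prior_studies)
    if rate >= threshold:
        return ("Caution! The findings in this study contradict "
                "{:.0%} of published articles on similar topics.".format(rate))
    return None

# Example: a new study reporting a positive effect against a literature that
# mostly found null or negative effects.
prior = [{"effect_direction": d} for d in ("negative", "null", "negative", "positive")]
print(contradiction_warning("positive", prior))
```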

Create The Journal of Error Discovery (JEDi). If publications are the currency of academic exchange, why not create a sort of bounty for error discovery and meta-analyses by dedicating whole journals to them? At the moment, blogs like Retraction Watch are filling this gap, but there’s no reason the authors of the site shouldn’t get more formal recognition and credit for their work. Plus, the first discipline to have a journal that goes by the abbreviation JEDi clearly deserves some serious geek street cred. Existing journals could also treat error discoveries and meta-analyses as a separate category of submission and establish clear guidelines around the standards of evidence and evaluation that apply to such work. Maybe these sorts of practices already happen in the medical sciences, but they haven’t made it into my neighborhood of the social sciences yet.
