Though the goal of scientific research is to objectively follow evidence to advance our knowledge of the world we live in, it has become increasingly apparent that there are some substantial roadblocks in our way. For example, a number of recent articles have argued that (A) we get the wrong answer – a lot, (B) the hotter the area of research, the more likely we are to get it wrong, and (C) the higher the profile of the journal we publish in, the more likely we are to have got it wrong (Ioannidis 2005, Pfeiffer & Hoffmann 2009, Brembs et al. 2013). Ideally, science is a self-correcting process, allowing us to reach the correct answer over time, in spite of such misleading results. However, the authors of a recent Nature article argue that a phenomenon they refer to as “herding” can prevent or severely delay this self-correction, and their proposed solution is quite surprising: add more subjectivity to the peer review process (Park et al. 2013).
According to Park and colleagues at the University of Bristol, “herding” occurs in science when: “An individual … choose[s] a theme to advocate in their manuscript submission based entirely on what they have observed from others, independently of what they initially thought was true.” The authors argue that this tendency is exacerbated by the pressure on scientists to publish positive results in high-profile journals. As a result, when it comes to contentious (and hot) topics, an early series of published positive results supporting one side of the argument can create a positive feedback cycle via herding. Ultimately, this can lead to the false appearance of consensus within the literature.
Park and colleagues provide a couple of interesting examples to illustrate patterns of herding in the scientific literature. Of particular interest, they compared whether the data presented in a series of scientific articles statistically supported the claim made in the abstract. They found a surprising lack of correlation between what authors claim their papers demonstrate and what their data actually show (see figure below). This suggests that authors tend to overstate the significance of their findings, and that this overstatement is biased towards the more exciting answer.
Park and colleagues propose a striking solution to this problem. They argue that strict objectivity during the review process can actually perpetuate this phenomenon. Instead, we should ask reviewers to be moderately subjective when reviewing articles and to express their own opinions about the veracity of the results. Using a mathematical model, they demonstrate that their proposal could, indeed, curb herding and allow science to self-correct more rapidly.
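To build some intuition for how subjective review might break a herd, here is a minimal toy simulation. To be clear, this is my own sketch, not the model from the paper: authors sometimes advocate whichever side the published literature already favors, and reviewers either accept at a fixed rate regardless of the claim (“objective”) or lean on their own private read of the evidence (“subjective”). Every function name and parameter value here is made up for illustration.

```python
import random

def literature_accuracy(n_papers=2000, p_signal=0.6, herd_prob=0.7,
                        subjective=False, seed=0):
    """Toy information-cascade sketch of herding in publication.

    A hypothetical model (not the one in Park et al.): each author's own
    data point to the true state (+1) with probability p_signal, but once
    the published record leans one way by two or more papers, the author
    advocates the majority claim with probability herd_prob regardless of
    their data. An 'objective' reviewer accepts at a fixed rate, ignoring
    the claim; a 'subjective' reviewer consults an independent private
    signal and usually rejects claims that contradict it.
    Returns the fraction of published claims that are correct.
    """
    rng = random.Random(seed)
    truth = 1
    lead = 0                  # (correct publications) - (incorrect ones)
    n_pub = n_correct = 0
    for _ in range(n_papers):
        signal = truth if rng.random() < p_signal else -truth
        if abs(lead) >= 2 and rng.random() < herd_prob:
            claim = 1 if lead > 0 else -1   # herd on the apparent consensus
        else:
            claim = signal                  # report what the data showed
        if subjective:
            reviewer_signal = truth if rng.random() < p_signal else -truth
            accept = rng.random() < (0.95 if claim == reviewer_signal else 0.05)
        else:
            accept = rng.random() < 0.6     # verdict independent of content
        if accept:
            lead += claim
            n_pub += 1
            n_correct += (claim == truth)
    return n_correct / max(n_pub, 1)
```

Averaged over many random seeds, the subjective condition in this sketch tends to produce a literature whose published claims track the truth more often: an early run of false positives can still start a cascade, but reviewers who weigh their own evidence reject most of the bandwagon papers, so the wrong consensus accumulates more slowly and forms less often.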
As with all models, theirs relies on simplifications and assumptions concerning the actual process of peer review. For an interesting analysis of the pros and cons of their approach, I encourage you to check out this Nature News article: Peer reviewers urged to speak their minds.
I have to admit, the idea of including subjective opinions as part of the peer review process slightly terrifies me. What could be worse than getting the reviews back for a paper and reading “I’m sorry, your results just don’t agree with the opinion I held before I read your paper.” However, as a young scientist maybe I am just too concerned with getting my dissertation chapters published…
What is your opinion?
Here is a link to the original article published in Nature:
Park, I.-U., Peacey, M. & Munafò, M. R. (2013) Modelling the effects of subjective and objective decision making in scientific peer review. Nature.
And here is an additional link (open access) to the Nature News article that summarizes the work and provides some interesting critiques:
Van Noorden, R. (2013) Peer reviewers urged to speak their minds. Nature.
Ioannidis, J. P. A. Why most published research findings are false. PLoS Med. 2, e124 (2005)
Pfeiffer, T. & Hoffmann, R. Large-scale assessment of the effect of popularity on the reliability of research. PLoS ONE 4, e5996 (2009)
Brembs, B., Button, K. & Munafò, M. R. Deep impact: unintended consequences of journal rank. Front. Hum. Neurosci. 7, 291 (2013)
Cartoon via Nick D Kim, strange-matter.net