Finding bias in discussions of campus sexual assault data

When science is used to support proposed changes to public policy, it isn’t uncommon for opponents of the policy changes to question the legitimacy of the studies cited. This often leads to rejection of scientific studies for completely unscientific reasons, and can even devolve into outright scientific denialism.

Earlier this year, the Obama administration proposed controversial policy changes related to sexual assault prevention on college campuses. As evidence of the need for reform, the White House Task Force to Protect Students from Sexual Assault cited the statistic that one in five women attending college is sexually assaulted at some point during her time on campus. Unsurprisingly, those opposed to the sexual assault policy changes are questioning the legitimacy of both the statistic and the study that produced it.

Recently, Emily Yoffe published an article in which she argues that the statistics on sexual assault presented by the Obama administration are misleading. Yoffe describes herself as "bringing some healthy skepticism to the hard work of putting a number on the prevalence of campus rape." The trouble is, skepticism in and of itself isn't helpful unless it is paired with the ability to think critically about scientific studies. Yoffe's article is a good example of how misconceptions about research methodology and statistics can derail an otherwise productive conversation and steer it toward the territory of science denialism.
