Though the goal of scientific research is to objectively follow evidence to advance our knowledge of the world we live in, it has become increasingly apparent that there are some substantial roadblocks in our way. For example, a number of recent articles have argued that (A) we get the wrong answer – a lot, (B) the hotter the area of research, the more likely we are to get it wrong, and (C) the higher the profile of the journal we publish in, the more likely we are to have got it wrong (Ioannidis 2005, Pfeiffer & Hoffmann 2009, Brembs et al. 2013). Ideally, science is a self-correcting process, allowing us to reach the correct answer over time, in spite of such misleading results. However, the authors of a recent Nature article argue that a phenomenon they refer to as “herding” can prevent or severely delay the process of self-correction – and their proposed solution is quite surprising: add more subjectivity to the peer review process (Park et al. 2013).
Every Friday at Nothing in Biology Makes Sense! our contributors pass around links to new scientific results, or science-y news, or videos of adorable wildlife, that they’re most likely to bring up while waiting in line for a latte.
Spring officially started this week – at least, depending on where you are. As for me, I’m currently sitting in Ithaca, NY, where the high for the day is still only squeaking above freezing, which only makes me miss Richmond, VA all the more. So without further ado, which is apparently my new catch phrase, your links for the week.
To start things off on a light and happy note, Sarah has some wonderful news: she passed her dissertation defense!! She is so excited, as she should be, that her link this week is a ton of dancing GIFs. Of note, she thinks either Carlton or Ace Ventura matches her mood best. Congrats Sarah!
This week CJ wonders about the possibility of a gender gap in pain perception, as discussed in the NYTimes article. She also thought this article gave a good breakdown of the process of becoming tenured; it is indeed quite helpful (and makes me glad to be in the field that I am in). And finally, an opinion piece on why de-extinction would not work.
Finally, I’d like to end things with a video. I’m a big fan of TED talks and also of U2, so when I saw that Bono gave a TED talk about his passion for the fight to end poverty, I thought it was worth a look. I loved his analogy that poverty could end in as short a time as about three more Rolling Stones farewell tours.
Science kind of has a lady problem. While nearly equal numbers of men and women begin the path to a career in science/academia, more women than men drop out as they progress along the trajectory. This has been called a “leaky pipeline” – at each progressive career stage, there are fewer women. There are many publications about this and the surrounding causes/effects (I’ve included a non-exhaustive list at the end of the post). One recent “Spotlight” in the journal Trends in Ecology and Evolution by Cameron et al. caught my eye. In it, they summarize much recent research on the topic – including that women:
- publish fewer papers.
- have lower grant success and receive lower grant amounts.
- get promoted more slowly.
- have lower retention rates.
The reason I’m bringing up this topic now is that Cameron et al. raised an issue that really rubbed me the wrong way. They construct a flow chart of interacting factors that contribute to women choosing to leave science. Central to their diagram is “Lower self-confidence in women”*. The authors say the way women “experience the scientific community” lowers women’s self-confidence, which initiates a feedback loop through lower publication rate, lower grant success and lower professional success that inevitably spits a woman out at the bottom. In this framework, women are less competitive and therefore they don’t get hired or fail to get tenure. This may very well be true. But I don’t like it. I don’t like that the underlying reason women would leave science is low self-confidence.
How important is self-confidence in Science? How important is the generally unbearable stress of it all?
Science is difficult. Despite the belief that professors have low-stress desk jobs, people in academia have to work almost all the time because there is no upper limit on the job – there’s literally always more to do and it’s always up to you to do it. Relatively few job openings and relatively many people with doctorates up the stress and competition factors as well. You really have to want to stay on this career path. Like really, really. But there’s got to be a limit to how much any one person can take before the cons outweigh the pros and the reasonable thing to do is leave – the amount of straw that breaks the camel’s back, if you will. No matter how strong (i.e., self-confident) the camel is. Right? I wonder if it’s less about self-confidence and more about the sum of all the parts. I’ve reworked Cameron et al.’s flowchart into something I call: “Not a flowchart but instead a hand-drawn picture of a camel”:
All the above facts/observations make it seem (to me) that women may just have more straws on their backs – i.e., more reasons to leave academia. Maybe I’m splitting hairs because I like the framing a little better. But all of the ways and reasons that there is a gender bias in science add up to a (however slightly, and not in every case) worse environment, one that women may feel less loyal towards.
Cameron et al. conclude with “Enhancing self confidence and expectations may be the single most significant step in encouraging and retaining women in science.” I’m not sure how to do this – especially on an institutional scale. I think we should focus on reducing the number of straws for women, the biggest of which may be family-oriented. So maybe we should work on institutionalizing time off the tenure clock for maternity and paternity leave and increasing affordable childcare on campuses. Maybe actively recruiting female mentors/mentees in STEM disciplines will help (programs like this!). Maybe the fact that I’ve never been told women aren’t good at math is a sign that we’re growing out of an outdated way of thinking. We all need to apply for things we think might be out of our league, and we’re all susceptible to low self-confidence from time to time. A good social support system (of men and women and four-legged friends and beer) is invaluable to me personally when I begin to crack. For the record, I have no evidence – it’s just what I think. My opinion is that this is important, and to fix the leak, we need to keep talking about this subject.
One final point – they discuss these concepts under the title question: “Is publication rate an equal opportunity metric?” Apparently, the answer is “no”. Strictly looking at the number of someone’s publications doesn’t accurately summarize their publication history (or worth as a future colleague/grant recipient/whatever) and they argue this puts women at a disadvantage. Regardless of how realistic or useful a “quantity only” metric system is, this article has prompted me to action! How about including number of citations and/or journal impact factor on the publications section of a C.V.? Instead of a traditional citation, perhaps this?:
Hird SM and Sullivan JS. 2009. Assessment of gene flow across a hybrid zone in red-tailed chipmunks (Tamias ruficaudus). Molecular Ecology, 18: 3097-3109. Citations: 16. 2011 Journal Impact Factor: 5.522.
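For anyone maintaining a long publication list, tacking these metrics onto each entry could even be scripted. Here’s a minimal sketch – the function name and its interface are my own illustration, not an existing tool, and you’d still have to look up the citation counts and impact factors yourself:

```python
# Hypothetical helper for appending quality metrics to a CV entry.
# Citation counts and impact factors must be supplied by the user
# (e.g., looked up from a citation database by hand).

def format_cv_entry(citation, citations=None, impact_factor=None, if_year=None):
    """Append optional citation-count and impact-factor metrics to a citation string."""
    parts = [citation.rstrip()]
    if citations is not None:
        parts.append(f"Citations: {citations}.")
    if impact_factor is not None:
        label = f"{if_year} Journal Impact Factor" if if_year else "Journal Impact Factor"
        parts.append(f"{label}: {impact_factor}.")
    return " ".join(parts)

# Reproduce the example entry from the post:
entry = format_cv_entry(
    "Hird SM and Sullivan JS. 2009. Assessment of gene flow across a hybrid "
    "zone in red-tailed chipmunks (Tamias ruficaudus). Molecular Ecology, "
    "18: 3097-3109.",
    citations=16,
    impact_factor=5.522,
    if_year=2011,
)
print(entry)
```

Running this prints the augmented citation exactly as formatted above; leaving the keyword arguments off falls back to a traditional entry.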
Including these metrics makes sense for anyone – it allows your publication record to be most fairly evaluated. Well, that’s enough from me but I’d love to hear from you. What do you think of Cameron et al.’s flowchart? How important is self-confidence in science? Should we put quality metrics on our C.V.s? Please leave comments below!
* There is no hard evidence that I could find that women in science have lower self-confidence than men, which is central to the Cameron et al. argument. If you know of any studies regarding this – please let me know!
References and further reading (additional suggestions welcome):
Barres BA (2006). Does gender matter? Nature 442: 133-136.
Bedi G, Van Dam NT, Munafo M (2012). Gender inequality in awarded research grants. The Lancet 380: 474.
Cameron EZ, Gray ME, White AM (2013). Is publication rate an equal opportunity metric? Trends in Ecology & Evolution 28: 7-8.
Damschen EI, Rosenfeld KM, Wyer M, Murphy-Medley D, Wentworth TR, Haddad NM (2005). Visibility matters: increasing knowledge of women’s contributions to ecology. Frontiers in Ecology and the Environment 3: 212-219.
Holmes M, O’Connell S (2007). Leaks in the pipeline. Nature 446: 346-347.
Hutson SR (2006). Self-citation in archaeology: Age, gender, prestige, and the self. Journal of Archaeological Method and Theory 13: 1-18.
Martin LJ (2012). Where are the women in ecology? Frontiers in Ecology and the Environment 10: 177-178.
McGuire KL, Primack RB, Losos EC (2012). Dramatic Improvements and Persistent Challenges for Women Ecologists. BioScience 62: 189-196.
Moss-Racusin CA, Dovidio JF, Brescoll VL, Graham MJ, Handelsman J (2012). Science faculty’s subtle gender biases favor male students. Proceedings of the National Academy of Sciences 109: 16474-16479.
O’Brien K, Hapgood K (2012). The academic jungle: ecosystem modelling reveals why women are driven out of research. Oikos 121: 999-1004.
Symonds MRE, Gemmell NJ, Braisher TL, Gorringe KL, Elgar MA (2006). Gender differences in publication output: towards an unbiased metric of research performance. PLoS ONE 1: e127.
For much of the last week, I have been looking for a solid reason to either go ahead or not ahead with a manuscript I submitted to HOAJ Biology, a journal I later discovered made Beall’s List of Predatory Open-Access Publishers. There were many good reasons, in my mind, to just do it – it was peer-reviewed, my article and software are sound (though a minor contribution) and I would like closure on this project I finished a year ago. It turns out, there are some even better reasons not to publish with this journal. First and foremost, there is a FICTIONAL PERSON on the Editorial Board.
Shortly after this post, Dr. Todd Vision, with NESCent and UNC Chapel Hill, emailed me and asked if he might be of help. After sending him the manuscript with some additional details, he replied with this advice:
As for HOAJ Biology specifically, <an editor’s name deleted> may be legitimate, but that doesn’t mean he actually helps oversee peer review. I suggest you look up the credentials of another one of the editors, Peter Uhnemann, before you draw any conclusions about the involvement of the editorial board: http://phylogenomics.blogspot.ca/2012/01/scary-and-funny-functional-researcher.html
I highly recommend people read that whole post, but for those in a hurry: “Peter Uhnemann” from the “Daniel-Duesentrieb Institute” is a fictitious person from a fictitious institution. And he’s listed plain as day on HOAJ Biology’s Editorial Board! (I did contact one person on the Editorial Board via email to make sure he was actually affiliated with the journal, which he confirmed, but I guess that didn’t cut it.)
Dr. Vision also shared this advice with me for future manuscripts:
There are technical aids to finding the right journal in which to publish (like http://www.edanzediting.com/journal_advisor) but in the end one still needs to make a personal judgement about how to weight the many different factors. And while I am personally a strong advocate for gold OA journals, paying for publication upfront does require us as researchers to be more informed about the choices – library subscriptions no longer keep the low-quality publishers out of the market. In the future, if you are trying to decide among OA publishers, membership in the OASPA (http://oaspa.org/membership/members/) is generally a reliable indicator of being on the up and up.
So what to do if you say no? If you aren’t interested in sending it to a more reputable outlet for minor contributions (like, say, BMC Research Notes), you could simply post it as a technical note on your website (a very common thing in CS) or on a preprint server like Figshare.
We’ll support you whatever you decide.
I am grateful to Dr. Vision (and Dr. Jonathan Eisen for the original post about “Peter Uhnemann”) and to all the commenters here for advising me when I really wasn’t sure what to do. (Final note: I suppose I should say that my experience with HOAJ Biology doesn’t mean all the smaller, open-access journals out there are Bad. Doing my homework paid off, though!)
Last summer, I worked with NESCent and Google’s Summer of Code to write a small piece of software. I think it’s quite useful for the specific thing it does, and some researchers in my immediate peer group who have used it agree. I wrote up a short manuscript describing the program and very quickly got it rejected from Molecular Ecology Resources and Bioinformatics. It went on the back burner for several months, until I got a solicitation from a new open-access journal that was offering a discounted rate for articles received before a certain date. So I submitted to this journal, after looking up some of their papers and a few people who had published there and convincing myself it wasn’t a flat-out scam.
One day after I submitted, I got an email asking me to review my own article. I know, right? How could that ever happen with a legitimate journal? I declined, they sent it to others to review, and about a month later I got three reviews back that were short (0.5–1 page), but addressed real questions about my manuscript and included helpful suggestions. I incorporated the changes as best I could and resubmitted. About a week after the resubmission, I saw Beall’s List of Predatory Open-Access Publishers, which includes the aforementioned journal on the list of “questionable, scholarly open-access publishers”. The author of the list says: “I recommend that scholars not do any business with these publishers, including submitting articles, serving as editors or on editorial boards, or advertising with them. Also, articles published in these publishers’ journals should be given extra scrutiny in the process of evaluation for tenure and promotion.”
Then, this morning, I got final acceptance of the manuscript and I’m not sure what to do.
I’m not trying to pull one over on anyone and I don’t necessarily disagree with the above text, but I don’t think this paper will be able to go anywhere else and I’m not convinced this journal is Bad. Not a lot of places publish small pieces of discipline-specific software (if you know of any, let me know). I believe this would be a really useful tool for some biologists and in fact, there are a couple of people waiting to cite the manuscript. I don’t want to encourage predatory journals, but open-access articles that do not-super-important science might actually have a place in our field.
I would LOVE thoughts on this. I certainly don’t view this manuscript as equivalent to a Molecular Ecology or Evolution publication – but do all pubs have to be top (or middle) tier? Is there a solution here, like including impact factors on CVs? Or maybe newfangled software like Google Citations can alleviate this problem, since it shows the overall publishing record of an individual/article? Please weigh in!
(PS – I’m not usually a fan of baby pictures, but come on. It’s a fat, funny baby. He looks like he’s made of marshmallows.)
(PPS – There’s a follow-up post here.)
As J.B.S. Haldane put it, “I think … that the public has a right to know what is going on inside the laboratories, for some of which it pays.” He was referring to the need for scientists to explain their work in popular media—which, amen, brother Jack!—but the point holds with regard to access to original scientific articles, too.
It doesn’t make much sense that U.S. citizens, whose taxes fund most of the basic science in this country, are then expected to pay upwards of $50 for a single PDF copy of a journal article presenting government-funded research results. The National Institutes of Health already requires that research it funds be archived online and accessible to the general public free of charge – why not expand that to all government-funded research? And hey, there’s a way to suggest exactly that to the man in charge: a petition on WhiteHouse.gov.
We believe in the power of the Internet to foster innovation, research, and education. Requiring the published results of taxpayer-funded research to be posted on the Internet in human and machine readable form would provide access to patients and caregivers, students and their teachers, researchers, entrepreneurs, and other taxpayers who paid for the research. Expanding access would speed the research process and increase the return on our investment in scientific research.
The highly successful Public Access Policy of the National Institutes of Health proves that this can be done without disrupting the research process, and we urge President Obama to act now to implement open access policies for all federal agencies that fund scientific research.
It needs 25,000 virtual signatures within 30 days before it’ll get any meaningful attention, so sign this thing and then start badgering all your online “friends” about it, why don’t you? Especially the jerks who keep filling your update stream with branded product promotions and/or time-sucking adorable cat videos and/or news about how they’ve just spent real money for a virtual cow—post this directly on their “walls,” if those are even still a thing, with or without a witty and/or pleading comment appended.
I mean, it’s Monday morning; it’s not like you’re going to get to do anything else for the benefit of humanity in the next minute or two, you slacker.
Evidently they’re not willing to toot their own horns, so I’ll do it on their behalf: Two of our contributors, Simone Des Roches and Chris Smith, have brand-new publications in print, and both papers are open access, available to anyone who wants to take a look.
Simone’s paper makes the case that the gypsum sands of White Sands, New Mexico, create an “ecological release” for lizards living there, since reduced predator density and diversity on the white dunes lets the lizards use a wider range of habitat types, and achieve higher population density.
First, we provide evidence for ecological opportunity by demonstrating reduced species richness and abundance of potential competitors and predators at White Sands relative to nearby dark soils habitats. Second, we characterize ecological release at White Sands by demonstrating density compensation in the three White Sands lizard species and expanded resource use in White Sands Sceloporus undulatus.
Chris’s paper tests the hypothesis that Joshua trees have expanded their range northward since the last glacial maximum, drawing together many different data sets to find the same signal of population expansion.
Using a database of >5000 GPS records for Joshua trees, and multi-locus DNA sequence data from the Joshua tree and four species of yucca moth, we combined palaeodistribution modeling with coalescent-based analyses of demographic and phylogeographic history. We extensively evaluated the power of our methods to infer past population size and distributional changes by evaluating the effect of different inference procedures on our results, comparing our palaeodistribution models to Pleistocene-aged packrat midden records, and simulating DNA sequence data under a variety of alternative demographic histories.