Towards open, reliable, and transparent ecology and evolutionary biology
BMC Biology volume 19, Article number: 68 (2021)
Unreliable research programmes waste funds, time, and even the lives of the organisms we seek to help and understand. Reducing this waste and increasing the value of scientific evidence require changing the actions of both individual researchers and the institutions they depend on for employment and promotion. While ecologists and evolutionary biologists have somewhat improved research transparency over the past decade (e.g. more data sharing), major obstacles remain. In this commentary, we lift our gaze to the horizon to imagine how researchers and institutions can clear the path towards more credible and effective research programmes.
Opaque research practices make it impossible to evaluate whether evidence generated by research is reliable, thereby stifling scientific progress. As authors, peer reviewers, and readers of scientific papers, we simultaneously perpetuate, and are frustrated by, the information gap between producers and consumers of scientific studies. In the role of producers, we struggle to fastidiously document the long lifespan of research projects and are vulnerable to self-deception (e.g. rationalising statistical analyses that produce the most compelling results). In the role of consumers, too often we struggle to understand published articles, are left speculating about how and why particular conclusions were reached, and cannot build upon prior work. To help authors become reliable narrators of their own conduct and help readers better understand the published literature, proponents of reliable research have urged authors to document and report their research more transparently.
The lowest bar of academic reform is transparent reporting; with few exceptions (e.g. precise locations of endangered species), we can transparently share our research and ask the same of our community. In the past decade, most major ecology and evolutionary biology journals successfully mandated data sharing, with many journals signing onto the Transparency and Openness Promotion (TOP) Guidelines. Researchers are also extending transparency efforts beyond traditional journal formats. Preprint servers (e.g. EcoEvoRxiv and bioRxiv) provide a history of in-preparation manuscripts, and authors can use online repositories (e.g. Open Science Framework) to document their research throughout the lifespan of their projects. Having more of the research process made public would allow more people to learn from the mistakes and successes of others.
Changes in journals’ and funders’ policies can rapidly change researchers’ practices. Once leading journals in ecology and evolutionary biology started requiring open data upon publication, researchers complied, and more journals followed. Many journal guidelines now encourage authors to share their computer code too. While the usability and quality of shared materials can be considerably improved—for example, by sharing pre-processed data with complete descriptions, and fully annotated code—gradually, more papers are becoming computationally reproducible (meaning their results can be reproduced from open data and code). Community expectations for shared materials in ecology and evolutionary biology should continue to rise, with some journals (e.g. Ecology Letters, The American Naturalist) poised to enlist ‘data editors’ during peer review.
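Computational reproducibility is a property a reader can verify mechanically: re-running the deposited code on the deposited data should regenerate every reported number. The following minimal sketch illustrates the idea in Python; the data, file contents, and variable names are invented for illustration, not drawn from any real study.

```python
# A minimal sketch of a computationally reproducible analysis script.
# All names and numbers below are hypothetical illustrations.
import csv
import io
import statistics

# In a real project this would be the openly archived data file deposited
# alongside the paper; here we inline a toy CSV so the sketch is runnable.
RAW_DATA = """site,treatment,mass_g
A,control,10.1
A,warmed,11.3
B,control,9.8
B,warmed,11.0
"""

def load_records(text):
    """Parse the archived CSV exactly as deposited (no silent pre-processing)."""
    return list(csv.DictReader(io.StringIO(text)))

def mean_mass(records, treatment):
    """Mean body mass (g) for one treatment group."""
    values = [float(r["mass_g"]) for r in records if r["treatment"] == treatment]
    return statistics.mean(values)

if __name__ == "__main__":
    records = load_records(RAW_DATA)
    effect = mean_mass(records, "warmed") - mean_mass(records, "control")
    # Every number in the paper should be regenerable by re-running this script.
    print(f"Warming effect on mass: {effect:.2f} g")
```

Fully annotated scripts like this, archived with the raw data, let a reviewer or data editor confirm the results without guessing at undocumented processing steps.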
Preregistrations and registered reports
Transparent reporting requires transparent recording, because our current selves are often unreliable narrators of our past conduct. For example, perhaps our past-self tried multiple types of analyses before focusing on the clearest result, but our current-self, returning to these results months later, only remembers the statistically significant findings (‘p-hacking’ and ‘cherry picking’). Blinded by hindsight, our study could seemingly test a hypothesis that we had not planned to test (HARKing = ‘Hypothesizing After Results are Known’). These and other common biases result in information gaps between our current and former selves. Complete information is then inaccessible to research consumers, making it harder to assess research reliability.
Transparent recording begins by describing a planned study prior to key events (e.g. data collection, exploration, and modelling) in an unalterable and publicly available document. These ‘preregistration’ documents can be made for any type of study and reduce the potential for researcher self-deception while revealing the breadth of primary research before the filter of publication. For example, mandatory registries for planned clinical trials revealed publication bias in drug development research. Beyond controlled experiments, preregistration templates are expanding to include exploratory, descriptive, and theoretical work (e.g. guidance for preregistering modelling studies: https://osf.io/2qbkc/).
The benefits of preregistrations are amplified by ‘registered reports’, a style of publication first trialled in 2013 (a list of participating journals, such as BMC Biology and Conservation Biology, can be found at https://cos.io/rr). Registered reports are accepted on the strength of their study rationale, methods, and planned analyses, freeing researchers from results anxiety. Whereas traditional journal articles are reviewed (and revised) after a study is completed, registered reports are reviewed before and after the results are known (allowing studies to be critiqued and improved before it is too late to fix major flaws). While the delay in data collection might be difficult for researchers expected to generate results quickly (e.g. in many countries, doctoral students are expected to write three or more publishable chapters within 3–6 years), overcoming cultural and institutional barriers to registered reports could drastically reduce publication bias, while improving research quality, at the level of both researchers and publishers.
Replication
To build upon published research, we need to understand the conditions under which findings are expected to replicate (i.e. understand ‘context dependence’). Yet, despite multiple papers urging ecologists and evolutionary biologists to conduct more replications, and researchers generally agreeing that replications are worthwhile, ecology and evolutionary biology journals publish almost no close replications (< 0.05% of studies adhere as closely as possible to the methods of an original study). There is therefore a disconnect between researchers’ beliefs about replications and their behaviour.
Aligning the beliefs and behaviours of researchers requires a change in incentives (Fig. 1). Many researchers have suggested interventions to promote replication studies, including incorporating replications into trainee programmes (e.g. thesis chapters), dedicated funding and journal sections for replications, and requiring replications of short-term studies prior to publication. It would be easier to design close replication studies—and their results would be easier to interpret—if authors of primary studies specified when and where they would expect their findings to generalise (e.g. ‘Constraints on Generality’ statements). For studies that are logistically infeasible to closely replicate (e.g. isolated populations), attention can still be given to computational reproducibility and the robustness of results to alternative analysis decisions. Studies grounded in clearly defined theories can also be subject to conceptual replications, where the same hypotheses are tested in different biological contexts.
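Probing the robustness of results to alternative analysis decisions can itself be made transparent by reporting the effect under every defensible combination of choices, in the spirit of ‘multiverse’ or specification-curve analyses. A toy sketch, with invented data and two hypothetical analysis decisions (summary statistic, outlier handling):

```python
# A toy sketch of probing robustness to alternative analysis decisions
# (in the spirit of 'multiverse' or specification-curve analyses).
# The data and both analysis rules below are hypothetical.
import statistics

control = [9.8, 10.1, 10.4, 18.0]   # 18.0 is a possible outlier
treated = [11.0, 11.3, 11.6, 12.0]

def effect(summary, drop_outliers):
    """Treatment effect under one combination of analysis decisions."""
    def trim(xs):
        # One defensible rule among many: drop values > 5 g from the median.
        med = statistics.median(xs)
        return [x for x in xs if abs(x - med) <= 5.0]
    c, t = (trim(control), trim(treated)) if drop_outliers else (control, treated)
    return summary(t) - summary(c)

if __name__ == "__main__":
    # Report the effect under every combination of decisions, not just
    # the combination that happens to produce the most compelling result.
    for summary in (statistics.mean, statistics.median):
        for drop in (False, True):
            print(f"{summary.__name__:>6}, drop_outliers={drop}: "
                  f"{effect(summary, drop):+.2f} g")
```

In this invented example, the sign of the estimated effect depends on how the outlier is handled; reporting all four specifications, rather than the single most compelling one, is exactly the kind of self-deception check discussed above.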
Transparency is necessary but not sufficient
The information afforded by greater transparency only helps us discriminate between studies if we care to look (Fig. 2). Transparency alone does not prevent errors, nor does it guarantee that research helps to build and test strong theories. For example, methods might not measure what the authors claim to be measuring, and authors might not specify their claims precisely enough to be falsifiable. If preregistrations and supplementary materials are not read, data are not examined, analyses are not reproduced, and, crucially, close replications are not conducted or published, then our mistakes will not be identified. Researchers will always make mistakes, but changed incentives could encourage errors to be corrected and dissuade researchers from rushing into hypothesis testing, cutting corners, or fabricating results. Common wisdom within the scientific community is that fraud is so rare as to be ignorable, but we cannot really know, as we do not really check; mechanisms to detect, investigate, and prosecute cases of fraud and research misconduct are under-resourced and not standardised across institutions. The dearth of formalised error detection in ecology and evolutionary biology suggests that we do not live up to the scientific ideal of organised scepticism.
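Some formalised error detection is cheap to automate. As one concrete example, the GRIM test (Brown & Heathers 2017) checks whether a reported mean is arithmetically possible given the sample size: the mean of n integer-valued responses can only take certain values. A minimal sketch (the example numbers are invented):

```python
# A minimal sketch of an automated consistency check, modelled on the
# published GRIM test (Brown & Heathers 2017): the mean of n integer-valued
# observations, reported to a given number of decimals, can only take
# certain values, so some reported means are arithmetically impossible.
import math

def grim_consistent(reported_mean, n, decimals=2):
    """Could `reported_mean` arise from n integer-valued observations?"""
    target = round(reported_mean, decimals)
    total = reported_mean * n
    # The true (integer) total, divided by n and rounded, must reproduce
    # the reported mean; check the integers on either side of the product.
    return any(round(t / n, decimals) == target
               for t in (math.floor(total), math.ceil(total)))

if __name__ == "__main__":
    print(grim_consistent(5.20, n=10))  # True: an integer total of 52 fits
    print(grim_consistent(5.19, n=10))  # False: no integer total fits
```

Checks like this do not prove misconduct, but routinely running them during peer review would make honest errors visible early, which is the organised scepticism the paragraph above finds lacking.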
Changing incentives to relieve researcher strain
Many current institutional incentives foster irreproducible research. Major employers and funding agencies generally reward researchers for high publication outputs that attract a lot of attention, often without much regard for their reliability, selecting for productivity and hyperbole at the expense of rigour. Researchers in insecure employment might feel compelled to exaggerate the importance of their work to accrue more citations, and hastily publish papers that contribute little, if anything, to addressing specific research questions. Those who feel stifled by these publication pressures often simply leave academia, reducing the diversity of perspectives amongst people who stay. Even those with secure employment are not spared from researcher strain. Tenured researchers can feel invested in their academic offspring, beholden to the expectations of their institution, or simply driven to match the outputs of their peers.
The problems described above sound bleak, but incentives are not immutable. Reform is possible and has already begun. For example, there are international efforts to change the way researchers are evaluated (e.g. the San Francisco Declaration on Research Assessment – DORA, and the Hong Kong Principles), which would relieve some of the strain on researchers (Fig. 1). Critics of reform might argue that there are inevitable inefficiencies in a complex system, science has always been a flawed human endeavour, and too many regulations risk stifling creativity. But while negative consequences of any regulations should be carefully monitored with meta-research, there is ample evidence that academic research could often be better than it currently is. Rather than being mere naysayers, advocates for reform are optimistically working towards a better research landscape.
A community for change
Researchers can make progress along the road to more credible research by uniting to educate our communities in more transparent and reliable research practices, and advocating for these practices to be valued by journals and funders. All of us (the authors) are founding members of the newly formed Society for Open, Reliable, and Transparent Ecology and Evolutionary biology (SORTEE: http://sortee.org, and @SORTEcoEvo on Twitter). SORTEE will absorb the previous efforts of the Tools for Transparency in Ecology and Evolutionary biology (https://osf.io/g65cb/) and is inspired by other researcher-driven organisations, such as the Center for Open Science, Society for the Improvement of Psychological Science, and the UK Reproducibility Network. As well as promoting transparent research practices, SORTEE aspires to foster communities of researchers who are passionate about improving research and institutional incentives in ecology and evolutionary biology.
At the time of writing, the COVID-19 pandemic has strained the university sector and funding agencies, but many researchers—caught between the accelerating demands of their profession and a desire to generate reliable results—were already feeling strained. This strain hurts individuals and threatens trust in science, especially on topics that are politically charged, and the disconnect between our ideals and actions could be worsened by the fiscal worries of our institutions. We can attempt to relieve this strain by reconsidering the type of research we want to be doing with scarce resources and by advocating for institutional change. Let us emerge from this period of uncertainty with renewed determination to conduct, and be valued for, open, reliable, and transparent research.
Availability of data and materials
No data or materials were presented in this manuscript.
Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, du Sert NP, et al. A manifesto for reproducible science. Nat Hum Behav. 2017;1(1):1–9. https://doi.org/10.1038/s41562-016-0021.
Forstmeier W, Wagenmakers E-J, Parker TH. Detecting and avoiding likely false-positive findings - a practical guide. Biol Rev Camb Philos Soc. 2017;92(4):1941–68. https://doi.org/10.1111/brv.12315.
Parker TH, Forstmeier W, Koricheva J, Fidler F, Hadfield JD, Chee YE, Kelly CD, Gurevitch J, Nakagawa S. Transparency in ecology and evolution: real problems, real solutions. Trends Ecol Evol. 2016;31(9):711–9. https://doi.org/10.1016/j.tree.2016.07.002.
Culina A, van den Berg I, Evans S, Sánchez-Tójar A. Low availability of code in ecology: a call for urgent action. PLoS Biol. 2020;18(7):e3000763. https://doi.org/10.1371/journal.pbio.3000763.
Fraser H, Parker T, Nakagawa S, Barnett A, Fidler F. Questionable research practices in ecology and evolution. PLoS One. 2018;13:e0200303. https://doi.org/10.1371/journal.pone.0200303.
Barto EK, Rillig MC. Dissemination biases in ecology: effect sizes matter more than quality. Oikos. 2012;121(2):228–35. https://doi.org/10.1111/j.1600-0706.2011.19401.x.
Moher D, Glasziou P, Chalmers I, Nasser M, Bossuyt PMM, Korevaar DA, Graham ID, Ravaud P, Boutron I. Increasing value and reducing waste in biomedical research: who’s listening? Lancet. 2016;387(10027):1573–86. https://doi.org/10.1016/S0140-6736(15)00307-4.
Kelly CD. Rate and success of study replication in ecology and evolution. PeerJ. 2019;7:e7654. https://doi.org/10.7717/peerj.7654.
Smaldino PE, McElreath R. The natural selection of bad science. R Soc Open Sci. 2016;3(9):160384. https://doi.org/10.1098/rsos.160384.
Scheel AM, Tiokhin L, Isager PM, Lakens D. Why hypothesis testers should spend less time testing hypotheses. Perspect Psychol Sci. 2020; https://doi.org/10.1177/1745691620966795.
The views expressed in this article are those of the authors and are not meant to represent the official position of the Society for Open, Reliable, and Transparent Ecology and Evolutionary biology (SORTEE). We thank Wolfgang Forstmeier for constructive comments on an earlier draft. We acknowledge that, due to format restrictions on this short comment, only some relevant references are cited.
The majority of the time spent on this comment was funded by an ARC (Australian Research Council) Discovery grant (DP200100367) awarded to SN.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
O’Dea, R.E., Parker, T.H., Chee, Y.E. et al. Towards open, reliable, and transparent ecology and evolutionary biology. BMC Biol 19, 68 (2021). https://doi.org/10.1186/s12915-021-01006-3