Science is, by nature, a collaborative venture; nonetheless, a ‘closed’ culture of science has developed. Data are often hoarded for extended periods of time, paywalls prevent access to articles and unnecessary hurdles continue to slow the rate of progress. An ‘open’ culture of science would provide the most benefit to the public, to the scientists involved and to science itself, and it is best to begin practicing open science as early as possible in your scientific career.
Sharing of information is fundamental for science to progress and for new discoveries to benefit the public. The invention of scientific journals in 1665 helped drive the scientific revolution, using the best technology of the time to critique research, disseminate knowledge rapidly and to begin to foster communities of like-minded researchers.
However, the format of scientific journals, though a massive aid to science, still has room for improvement. For example, most scientific articles require payment to access, excluding many researchers and members of the public who cannot meet the costs. Many journals suffer from publication bias, not only in terms of gender and race, but also in terms of results: articles that show statistically significant findings are more likely to be published than those that report often equally important, non-significant ones. Authors have also been known to hoard data, sometimes for years, in order to publish in journals with a high impact factor. To compound matters, the current culture of science incentivizes poor practice. Inadequate, though often well-intentioned, study design combined with inappropriate data analysis can produce false-positive findings. Combined with the proclivity for publication bias, this means that the scientific literature is not a faithful representation of the work that scientists conduct, and it raises the question of how ontologically ‘true’ published findings actually are.
These, and other, phenomena contribute to a culture of ‘closed’ science, where knowledge or data is unavailable or unaffordable to many interested parties.
But it is not all doom and gloom. There is growing consensus that something should be done to remedy these issues. Whilst fundamental changes will need to be ingrained into the system, many initiatives and developments are beginning to shift the current state of science from ‘closed’ to ‘open’.
What is open science?
Open science (OS) is the movement to increase the transparency and reliability of research, accelerating the pace of discovery. It consists of principles and practices to make science more reproducible, including making information available faster and allowing the public to benefit more from accessible scientific knowledge. A current example of OS done well is the use of preprint servers, which host publicly available scientific articles prior to peer review. For instance, in combating the coronavirus, a wealth of data is being rapidly generated, and through these preprint servers, teams around the world can immediately publish their data and access others’ to help inform their work. Through this rapid access to relevant information, efforts to understand, and combat the spread of, the virus are being informed and coordinated much faster than traditional publishing formats would allow.
Helpfully, OS has benefits for researchers themselves. Publishing through open-access or hybrid journals is associated with higher citation rates and media coverage than not doing so, and, unlike traditional journals, open-access journals typically publish under Creative Commons licenses, meaning that authors retain almost all rights to their manuscript and materials. The need for an OS culture is becoming recognized, and universities such as Cardiff, LMU Munich and Cologne have recently asked candidates applying for psychology positions to provide a track record of OS methods. In light of this, it makes sense to adopt open publication practices as early as possible in your career.
A large number of publishers themselves understand the benefit of OS and support a transition to open practices, and many are members of the Open Access Scholarly Publishers Association, an organization dedicated to making open access the predominant model of publication. The culture of science requires change from the ground up and is being pushed by funders, publishers and others – this can most easily be achieved by early career researchers who do not yet have bad publishing habits established over many years.
Problems and solutions: how OS can help improve science
Most research is locked behind paywalls, preventing access to a proportion of the public.
Often the groups that funded the research, such as charities and taxpayers, are not the ones who profit from these paywalls, and as such the legitimacy of withholding data and knowledge for third-party profit is being increasingly challenged. This challenge is largely made on ethical or utilitarian grounds: the right of taxpayers to access knowledge from publicly funded work, the need to minimize the number of animals required to undergo necessary experiments, or the ability to combine various datasets to address previously unanswered questions, saving time and resources in the process. Thankfully, this is improving.
Open-access publishing makes research free at the point of access, predominantly via the authors (or, rather, their funders) paying a fee upfront. This allows interested parties unfettered access to the knowledge within, and thus allows the most people possible to work on that topic.
Preprint servers also help here. They are free to access and make content available immediately, prior to publication; however, because preprints have not yet been thoroughly peer reviewed, this should be taken into account when reading them.
Manuscripts which show a statistically significant difference are more likely to be published than those which do not, and high-profile journals are often insistent on a ‘good novel story’, which can result in data being omitted if they do not support this narrative. This raises concern that some published articles are not entirely accurate.
A rising format to counter this is the registered report (RR), now accepted by more than 200 journals. With RRs, peer review is split in two: first, researchers propose a hypothesis and detail the experimental plan to address it, including how the data will be analysed. If reviewers think the hypothesis is important, and that the methods and analysis are appropriate, then the journal agrees to publish the final manuscript before any data are collected. This means the manuscript will be published regardless of statistical significance or resulting narrative. The authors then pre-register the study in a recognized repository, collect the data and compile the manuscript. The final product is sent for peer review once more, and the article is published.
Publication of research should not depend on whether results are ‘positive’ or ‘negative’, but on whether they are conclusive. Data are important simply by being data, as long as they have been obtained through rigorous means. Has the question under investigation been addressed with an adequate design, and have the experiments been executed appropriately? If so, the results are important, regardless of whether there is a statistically significant difference.
Additionally, RRs guard against two questionable practices: hypothesizing after the results are known (HARKing), in which hypotheses are generated after the results have been seen so as to match statistical significance, and p-hacking, in which various statistical tests are applied until one is found that produces a statistically significant result, even if that test is inappropriate.
Data are summarized and reported in publications, but the raw datasets themselves are harder to access. Sharing them can be difficult, especially for large datasets or computational code that are too large to include with the manuscript. Datasets can also be withheld, sometimes indefinitely, which reduces collaboration with other groups that could use the data to inform their own work and better guide the research questions of interest. Allowing access can therefore save resources and reduce the time spent unnecessarily repeating the work of others. Sharing your data can even benefit you, since credit is attributed to those who first generated the data, increasing your citations and opening up new collaborative opportunities. The emergence of data repositories – servers that hold data and allow others to access them freely – is helping to facilitate data sharing. With data repositories, one can upload a dataset for storage while simultaneously allowing other scientists around the world to interpret the data as they see fit to further their own research.
An increasing number of funders now mandate data sharing, and research articles with openly accessible data have been shown to accrue more citations than those without. One common fear around data sharing, especially early in a research project, is the possibility of being scooped. However, most repositories generate a Digital Object Identifier (DOI) for a dataset as soon as it is deposited, attributing the work to your group and guarding against scooping.
Challenges with OS
With the potential of OS to ameliorate some of the problems afflicting modern science, it may seem glaringly obvious that we should all transition to practicing it. But as with most new things, this is easier said than done and comes with its own challenges. For example, there are time costs associated with the correct documentation and organization of data, as well as the time taken to produce RRs. Understandably, researchers might worry that the time spent organizing data for sharing reduces the time available to actually collect data. Even considering the increased citation count and coverage that come with OS practices, it is difficult to predict whether these benefits will ultimately outweigh the costs. Until incentive structures change to support OS practices, researchers will need to be bold and lead the way in demonstrating that OS can bring many benefits within the scientific community and beyond.
Another concern is that opening up one’s data to the scientific community pre-publication can lead to the identification of errors. Yet this is actually an upside. Errors are unavoidable in all human endeavours, and research is no exception. In fact, it is the mark of a good scientist to seek out errors in their work, to improve their methods and practices, and to strive to make the results of their efforts ontologically true. As such, we should seek for our work to be scrutinized and errors to be pointed out. No serious scientist should chastise another for making an error – quite the opposite: the humility to admit one’s own mistakes is generally commended and seen as a sign of a competent scientist.
Where to go from here?
The concept of OS is tightly linked with traditional scientific norms, as science has always relied on researchers sharing their work, so that it can be tested and built upon. OS is just good science, and as such, most people, when they are made aware of it, are interested in practicing OS themselves. But it can be difficult to know how to begin.
Reading as much as possible on OS is one of the best ways to start, with a good point to begin being the Further reading list below and/or doing an online search. Thankfully, there is a wealth of information relating to OS, so much that you won’t get bored! Once various issues related to the culture of science are brought to your attention in a meaningful way (e.g., understanding the flaws with impact factors), it is hard to forget them.
Additionally, bringing up OS in a lab meeting and/or getting together with a few like-minded people to form an OS journal club is a useful avenue too – forming groups of people interested in the same thing can be a major step to shifting the culture in one’s institution.
To improve the current state of scientific research, the culture of science requires change from the bottom up, which can be best led by those involved in it. Though OS has its challenges, it is rewarding to those who practice it and makes science more efficient. Funders, publishers and other bodies are already mandating many practices of OS, with more aspects to follow. Speak to your fellow scientists, your supervisors and other students – science is becoming open, and the sooner we jump on the bandwagon, the more we all get out of it.
Further reading
HARKing: Murphy, K.R. and Aguinis, H. (2019) HARKing: How Badly Can Cherry-Picking and Question Trolling Produce Bias in Published Results? J. Bus. Psychol. 34, 1–17
Open science advantages: McKiernan, E.C., Bourne, P.E., Brown, C.T., Buck, S., Kenall, A., Lin, J. et al. (2016) How open science helps researchers succeed. eLife 5, e16800
Open science and Coronavirus: https://www.sciencemag.org/news/2020/02/completely-new-culture-doing-research-coronavirus-outbreak-changes-how-scientists [Accessed 1 March 2020]
Open science recognised repository: https://osf.io/ [Accessed 1 March 2020]
Orion MOOC: https://www.open.edu/openlearncreate/course/view.php?id=3980 [Accessed 1 March 2020]
P-hacking: Head, M.L., Holman, L., Lanfear, R., Kahn, A.T. and Jennions, M.D. (2015) The Extent and Consequences of P-Hacking in Science. PLoS Biol. 13, e1002106
Plan S and COalition S: https://www.coalition-s.org/ [Accessed 1 March 2020]
Preprints: Hoy, M.B. (2020) Rise of the Rxivs: how preprint servers are changing the publishing process. Med. Ref. Serv. Q. 39, 84–89
Publication bias: Nissen, S.B., Magidson, T., Gross, K. and Bergstrom, C.T. (2016) Publication bias and the canonization of false facts. eLife 5, e21451
Reproducibility issues in science: Chawla, D.S. (2020) Software searches out reproducibility issues in scientific papers. Nat. News https://www.nature.com/articles/d41586-020-00104-6 [Accessed 1 March 2020]
Reproducibility issues in science: Challenges in irreproducible research (2018) https://www.nature.com/collections/prbfkwmwvz [Accessed 1 March 2020]
Replication crisis: Ioannidis, J.P. (2005) Why most published research findings are false. PLoS Med. 2, e124
Scientific journals: Mack, C.A. (2015) 350 Years of Scientific Journals. J. Micro/Nanolith. MEMS MOEMS 14, 010101
The UK Reproducibility Network (UKRN): http://www.bristol.ac.uk/psychology/research/ukrn/ [Accessed 1 March 2020]
The author would like to acknowledge discussions with W. Cawthorn, R. Madsen, R. Semple, and collectively with members of the Edinburgh Open Science Initiative, for informing this article.
Ben Thomas obtained a BSc in biochemistry from King’s College London in 2016, and is currently completing his BHF-funded PhD at the University of Edinburgh, from which he first obtained an MSc by Research. He researches metabolic and cardiovascular sex differences, and is the co-founder of the Edinburgh Open Science Initiative (twitter: https://twitter.com/edinburgh_open). Feel free to email: email@example.com