Abstract

Clinical Science is proud to launch a new translational meta-research collection. Meta-research, or the science of science, applies the scientific method to study science itself. Meta-research is a powerful tool for identifying common problems in scientific papers, assessing their impact, and testing solutions to improve the transparency, rigor, trustworthiness, and usefulness of biomedical research. The collection welcomes science of science studies that link basic science to disease mechanisms, as well as meta-research articles highlighting opportunities to improve transparency, rigor, and reproducibility among the types of papers published in Clinical Science. Submissions might include science of science studies that explore factors linked to successful translation, or meta-research on experimental methods or study designs that are often used in translational research. We hope that this collection will encourage scientists to think critically about current practices and take advantage of opportunities to make their own research more transparent, rigorous, and reproducible.

Clinical Science is proud to launch a new translational meta-research collection. Meta-research, or the science of science, applies the scientific method to study science itself [1]. These studies are powerful tools for identifying common problems in scientific papers, assessing their impact, and testing solutions to improve the rigor, transparency, trustworthiness, and usefulness of biomedical research. Meta-research may also examine other aspects of science, including the impact of educational programs, journal and funding agency policies, or institutional tenure and promotion criteria. Examples of meta-research include studies examining the prevalence of statistical errors in published papers, randomized controlled trials assessing whether pre-submission checklists improve reporting, research determining whether studies with rigorous design features (e.g. blinding, randomization) or open data are cited more often than studies lacking these features, and studies examining gender bias in peer review. While scientists often confuse meta-research with meta-analysis, the two are quite different. Meta-analyses systematically identify and assess all studies addressing a particular research question, then pool the results to estimate the size and direction of the ‘average’ effect across all studies.

Meta-research can benefit many stakeholders within the scientific community [1]. Science of science studies allow us to assess the prevalence and impact of rigorous design and transparent reporting practices, and provide authors with guidance on how to improve their own papers. Box 1 highlights selected meta-research papers that help readers to identify and fix common problems in translational research. Meta-research papers can also inform policies for journals, funding agencies, and institutions. A meta-research article highlighting the problems with the common practice of using bar graphs to present continuous data [2], for example, prompted many journals to introduce policies that encouraged authors to replace bar graphs with more informative graphics (e.g. [3,4]). Science of science studies can also inform stakeholders about the effectiveness of policy changes and other interventions to improve scientific rigor and reporting. Meta-research examining changes in research practices or methods over time may be valuable for setting educational priorities or updating training programs to incorporate new skills.

Box 1
Science of science reading list for authors and peer reviewers
  • Study design

    • Interpret underpowered studies cautiously: This study [7] examines the problems with underpowered studies, and explains why low power is an issue even when a statistically significant difference is found.

    • Report attrition: This study [8] illustrates why reporting all excluded observations and the reasons for exclusion is essential to help readers evaluate the risk of bias. Use a flow chart to efficiently report attrition at each stage of the study [9].

    • Avoid misusing the term ‘case–control’ study: This paper [10] explores the inappropriate use of the term ‘case–control study’ to describe studies that include cases with a disease or condition of interest and controls without it.

    • Account for non-independent observations in the study design and statistical analysis: This article [11] explains why related observations cannot be treated as though they are independent (pseudoreplication) and discusses considerations for designing and analyzing experiments with clusters of non-independent data (e.g. replicates, rodents from the same litter, etc.).

  • Statistical analysis

    • Clearly report what type of t test or ANOVA was used: This paper [12] explains why it is important to report more than ‘Data were analyzed by t tests or ANOVA, as appropriate’ and specifies what additional information is needed.

    • Test for interactions: This study [13] explains why one cannot conclude that two experimental effects differ simply because one is statistically significant and the other is not, when the two effects were never statistically compared.

    • Avoid misreported P-values: Misreported P-values are common and may alter study conclusions in 13% of psychology papers [14]. This paper [14] shows how reporting the test statistic and degrees of freedom makes it easier to detect misreported P-values.

  • Reporting

    • Acknowledge study limitations: Study limitations sections are essential to acknowledge potential biases and assess uncertainty [15].

    • Include research resource identifiers: Research resource identifiers (RRIDs) improve reproducibility by specifying exactly what materials (antibodies, plasmids, cell lines, model organisms, software) were used [16]. Use the RRID Portal to look up or create new RRIDs (https://scicrunch.org/resources).

    • Follow reporting guidelines: When writing, follow guidelines for the appropriate study design (e.g. ARRIVE 2.0 for animal studies [17,18], STROBE for observational studies [19], etc.) to improve transparency and ensure that essential details needed to assess the risk of bias are reported. The EQUATOR network lists guidelines for many study designs (https://www.equator-network.org).

  • Data visualization

    • Replace bar graphs of continuous data with more informative graphs: These studies [2,9] illustrate why you should not use bar graphs to display continuous data, and what to use instead. The latter paper [9] also highlights solutions to other common visualization problems.

    • Create informative image-based figures: This study [20] identifies common problems that affect the transparency and interpretability of image-based figures. Visualizations illustrate strategies for creating more informative figures.
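The interaction point in Box 1 [13] can be made concrete: to claim that two experimental effects differ, the difference itself must be tested, not the significance labels. The sketch below is a minimal illustration, assuming two independent effect estimates with known standard errors; the function name `compare_effects` is illustrative, not from any of the cited papers.

```python
import math

from scipy import stats  # assumes SciPy is installed


def compare_effects(b1, se1, b2, se2):
    """Test whether two independent effect estimates differ,
    instead of merely comparing their significance labels."""
    z = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
    p = 2 * stats.norm.sf(abs(z))  # two-tailed p-value for the difference
    return z, p


# One effect is 'significant' on its own (b = 0.25, se = 0.10, z = 2.5),
# the other is not (b = 0.15, se = 0.10, z = 1.5) -- yet the difference
# between them is far from significant.
z, p = compare_effects(0.25, 0.10, 0.15, 0.10)
```

Here the two effects do not differ significantly (p ≈ 0.48) even though only one reaches p < 0.05 on its own, which is exactly the fallacy described in [13].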

Publishing meta-research alongside traditional research articles helps to disseminate meta-research results and engage scientists in discussions about how to increase value and reduce waste in science. Unfortunately, meta-research studies are difficult to publish despite their value to the scientific community. Many journals consider meta-research to be out of scope, as these studies do not typically provide insight into biology or medicine. Furthermore, meta-research articles may be rejected from field-specific journals as they are often written for a general audience and address problems that affect many fields. Most scientists are unfamiliar with the science of science; hence, editors and reviewers may not recognize meta-research as research. Training on how to conduct meta-research is rare; therefore journals that do consider meta-research may have difficulty identifying editors and reviewers with the expertise needed to evaluate the study methods. The two journals that offer meta-research collections [5,6] are generally more focused on the biological sciences, leaving a gap for translational meta-research.

The Clinical Science meta-research collection will focus on applied translational science of science studies that link basic science to disease mechanisms. Given this focus on applied meta-research, the collection will not include purely theoretical studies. The collection also welcomes meta-research articles highlighting opportunities to improve transparency, rigor, and reproducibility among the types of papers published in Clinical Science. The collection might include meta-research that explores factors linked to successful translation, or science of science studies focused on study designs or experimental techniques that are often used in translational research.

Meta-analyses would only be eligible for the collection if they address a meta-research question. Most meta-analyses focus on biological or medical research questions unrelated to meta-research (e.g. does a particular treatment reduce infarct size in patients experiencing a myocardial infarction?). However, researchers may occasionally use meta-analysis to conduct meta-research. For example, meta-researchers might perform a meta-analysis of interventional studies using a particular stroke treatment to determine whether blinded studies find larger effects than non-blinded studies.

Articles in the Clinical Science translational meta-research collection should incorporate two elements that may be uncommon in traditional research articles. First, meta-research papers typically include an educational component, as these papers often aim to convince authors, journals, or other stakeholders to adopt more transparent and reproducible practices. Authors are encouraged to think creatively about how they can engage readers in exploring the problem and its potential impact and aid scientists in implementing solutions. Strategies may include visualizations, infographics, interactive graphics, writing templates, or checklists. Articles may also link to outside resources, such as online tools, simulators, or reading lists. Extended educational content, including video lectures or slides for teaching, may assist readers in sharing information about good scientific practice with colleagues. Second, articles should offer solutions. Meta-research provides detailed data about common problems and the context in which those problems occur, allowing authors to offer targeted recommendations for improvement. Authors should consider their own data, as well as data from previous studies, when discussing solutions. Common problems often require more attention than rare problems. Similarly, solutions are urgently needed for practices that have a major impact on reproducibility, the risk of bias, or other important metrics, whereas practices that have a minimal impact on rigor and reproducibility may merit less attention.

Meta-research is most impactful when the scientific community acts on new findings. Publishing meta-research alongside translational studies encourages scientists to explore and benefit from this growing field. We hope that the Clinical Science translational meta-research collection will encourage scientists to think critically about current publishing practices and take advantage of opportunities to make their own research more rigorous, transparent, and reproducible.

Competing Interests

The author declares that there are no competing interests associated with the manuscript.

References

1. Ioannidis, J.P.A. (2018) Meta-research: why research on research matters. PLoS Biol. 16, e2005468
2. Weissgerber, T. et al. (2015) Beyond bar and line graphs: time for a new data presentation paradigm. PLoS Biol. 13, e1002128
3. Teare, M.D. (2016) Transparent reporting of research results in eLife. eLife 5, e21070
4. Fosang, A.J. and Colbran, R.J. (2015) Transparency is the key to quality. J. Biol. Chem. 290, 29692–29694
5. Kousta, S., Ferguson, C. and Ganley, E. (2016) Meta-research: broadening the scope of PLoS Biology. PLoS Biol. 14, e1002334
7. Button, K.S. et al. (2013) Power failure: why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci. 14, 365
8. Holman, C. et al. (2016) Where have all the rodents gone? The effects of attrition in experimental research on cancer and stroke. PLoS Biol. 14, e1002331
9. Weissgerber, T.L. et al. (2019) Reveal, don’t conceal: transforming data visualization to improve transparency. Circulation 140, 1506–1518
10. Mayo, N.E. and Goldberg, M.S. (2009) When is a case-control study not a case-control study? J. Rehabil. Med. 41, 209–216
11. Lazic, S.E., Clarke-Williams, C.J. and Munafo, M.R. (2018) What exactly is ‘N’ in cell culture and animal experiments? PLoS Biol. 16, e2005282
12. Weissgerber, T.L. et al. (2018) Why we need to report more than ‘Data were Analyzed by t-tests or ANOVA’. eLife 7, e36163
13. Nieuwenhuis, S., Forstmann, B.U. and Wagenmakers, E.J. (2011) Erroneous analyses of interactions in neuroscience: a problem of significance. Nat. Neurosci. 14, 1105–1107
14. Nuijten, M.B. et al. (2015) The prevalence of statistical reporting errors in psychology (1985–2013). Behav. Res. Methods 48, 1205–1226
15. Ter Riet, G. et al. (2013) All that glitters isn’t gold: a survey on acknowledgment of limitations in biomedical studies. PLoS ONE 8, e73623
16. Bandrowski, A. et al. (2016) The resource identification initiative: a cultural shift in publishing. J. Comp. Neurol. 524, 8–22
17. Percie du Sert, N. et al. (2020) The ARRIVE guidelines 2.0: updated guidelines for reporting animal research. PLoS Biol. 18, e3000410
18. Percie du Sert, N. et al. (2020) Reporting animal research: explanation and elaboration for the ARRIVE guidelines 2.0. PLoS Biol. 18, e3000411
19. von Elm, E. et al. (2007) The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med. 4, e296
20. Jambor, H. et al. (2021) Creating clear and informative image-based figures for scientific publications. PLoS Biol. 19, e3001161