Understandably, you want the first crack at the data you have painstakingly collected. This is no problem: you can arrange a data embargo period (say, one year). Your data are uploaded along with the first publication arising from them, and you know that, from that point, you have one year of exclusive opportunity for further analysis. The advantage of this system is that we continue to reward hard experimental work while reducing the incidence of valuable data languishing on someone’s hard drive for years (“I’ll get to that other analysis sometime”).
Encouraging others to adopt these practices
Changing the way you do science is hard. There is a lot of inertia, and it is greater for those who have been doing it longer. Students are at an advantage in this respect, but they also have to abide by their supervisors’ wishes. Even if you agree with my suggestions above, it is probably not possible for you to adopt them all right now. That is fine. Start by telling others about them and encouraging them to do the same. Slowly, best practice is changing.
What can you do right now? Consider signing the Peer Reviewers’ Openness Initiative (PRO) [8]. As a signatory to PRO, you ask authors to make their data and materials available as a condition of peer review.
Imagine you accept a review request. You check whether the authors have made their materials available; if not, you write back to the editor asking the authors either to (a) make their data and materials available or (b) state in the manuscript why they do not. If the authors refuse to do either (a) or (b), your recommendation is to reject the manuscript because it does not meet the minimum standards for a scientific paper. This might seem harsh, but consider that you are not judging the justification for (b): it could be more legitimate (“our raw data contain identifiable patient information, but we make anonymised summary data available”) or less legitimate (“we’re too lazy to clean up our code and upload our data”). You don’t care, so long as it appears in the main text of the manuscript.
As a reviewer, you can give open science and transparent practices a push along.
Conclusion
Science is in a replication crisis, but thankfully people are becoming increasingly aware of the issues and implementing ways to address them. Consider adopting the following practices in your scientific work:
• Clearly discriminate between exploratory and confirmatory analyses
• Pre-register your next experiment
• Make data and materials openly available
• Encourage others to do the same
While these measures will not fix the underlying problem (the incentives for scientific career advancement), they will help to improve the quality of scientific output.
* I’m not saying that high-impact papers are generally less likely to be true than papers in other outlets, but there is some worrying evidence that, in some fields, high-impact papers tend to feature lower statistical power and larger bias in effect-size estimates.
Tom Wallis is a project leader
in the CRC 1233 "Robust Vision"
at the Centre for Integrative
Neuroscience in Tübingen.
He blogs infrequently at
www.tomwallis.info
Author’s note
Parts of this article were adapted from the author’s earlier blog post, found at tomwallis.info. I use the term “irreplicability” rather than “irreproducibility” because research is “reproducible” when the same data and analysis yield the same result (i.e. there is no silly error in the analysis script or reporting), whereas a finding is “replicable” when it can be found repeatedly in independent (but as-far-as-possible identical) experiments [9, 10].
[1] www.en.wikipedia.org/wiki/Replication_crisis
[2] www.nymag.com/scienceofus/2016/09/a-helpful-rundown-of-psychologys-replication-crisis.html
[3] Brembs B. et al. (2013). Deep impact: unintended consequences of journal rank. Frontiers in Human Neuroscience, 7(291), 1-12.
[4] www.neuromag.wordpress.com/2016/01/12/why-science-is-broken/
[5] www.nature.com/news/how-scientists-fool-themselves-and-how-they-can-stop-1.18517
[6] www.stat.columbia.edu/~gelman/research/unpublished/p_hacking.pdf
[7] www.cos.io/our-services/registered-reports/
[8] www.opennessinitiative.org/
[9] www.replicability.tau.ac.il/index.php/replicability-in-science/replicability-vs-reproducibility.html
[10] www.languagelog.ldc.upenn.edu/nll/?p=21956