Neuromag July 2016 | Page 19

not literally correct itself; it will have to be corrected by the people conducting it. As the next generation of scientists, this will be our job – we shall see how we do.
Update: In March 2016, the journal Science published a comment on the aforementioned large reproducibility study, in which Gilbert et al. took a critical look at how the Open Science Collaboration (OSC) conducted their replications. Apparently, at least in some cases, they were far from being 'exact replications'. Gilbert et al. also looked at other replication initiatives and used their results to estimate the statistical error in the OSC data. They conclude that, in fact, "the reproducibility of psychological science is quite high". Let's wait for the OSC's reply.
* Reproduction success was measured not only by significant test statistics in the replication study (which would have yielded only 35 successful reproductions). The authors used a total of five indicators, including combining the original and the replication data, effect size, and the subjective assessment "did it replicate?".
** People tend to call the type I error rate alpha the false positive rate. This can be misleading: among studies that used a particular alpha value and claim an effect, the rate of false positives is given by the false discovery rate = 1 − PPV (positive predictive value), not by the alpha value itself.
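This distinction can be made concrete with a few lines of arithmetic. The sketch below is illustrative only (the function name and the example values for power and prior probability are my own assumptions, not from the article, though they resemble the scenario discussed by Colquhoun, ref. 14):

```python
def false_discovery_rate(alpha, power, prior):
    """Fraction of 'significant' results that are actually false positives.

    alpha: significance threshold (type I error rate)
    power: probability of detecting a true effect (1 - type II error rate)
    prior: proportion of tested hypotheses that are actually true
    """
    false_positives = alpha * (1 - prior)   # true nulls wrongly declared significant
    true_positives = power * prior          # true effects correctly detected
    return false_positives / (false_positives + true_positives)

# Illustrative assumption: alpha = 0.05, 80% power, and only 10% of
# tested hypotheses are actually true. Despite alpha being 5%, about
# a third of all "discoveries" are false:
print(round(false_discovery_rate(alpha=0.05, power=0.8, prior=0.1), 2))  # 0.36
```

So even with a conventional alpha of 0.05, the share of claimed effects that are spurious depends heavily on power and on how many tested hypotheses are true to begin with.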
Jens Klinzing, Germany
Neural and Behavioural Sciences Master's Program '13
Currently a GTC doctoral student at the Institute for Medical Psychology and Behavioural Neurobiology in the lab of Prof. Dr. Jan Born and Dr. Susanne Diekelmann
1. Estimating the reproducibility of psychological science (2015). Science, 349(6251).
2. neuromag.wordpress.com/2015/11/25/bar-graphs-anyone Accessed on Jan 10, 2016
3. Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(May).
4. Ioannidis, J. P., Ntzani, E. E., Trikalinos, T. A., & Contopoulos-Ioannidis, D. G. (2001). Replication validity of genetic association studies. Nature Genetics, 29(3), 306–9.
5. Vandenbroucke, J. P. (2004). When are observational studies as credible as randomised trials? Lancet, 363, 1728–1731.
6. Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), 0696–0701.
7. en.wikipedia.org/wiki/Sch%C3%B6n_scandal Accessed on Jan 10, 2016
8. www.nature.com/news/us-vaccine-researcher-sentenced-to-prison-for-fraud-1.17660 Accessed on Jan 10, 2016
9. www.nature.com/news/stem-cell-scientist-found-guilty-of-misconduct-1.14974 Accessed on Jan 10, 2016
10. journals.plos.org/plosone/article?id=10.1371/journal.pone.0005738 Accessed on Jan 10, 2016
11. en.wikipedia.org/wiki/Publication_bias Accessed on Jan 10, 2016
12. www.timeshighereducation.com/news/journal-warned-about-pg-data/200228.article?sectioncode=26&storycode=200228 Accessed on Jan, 2016
13. www.telegraph.co.uk/news/health/8360667/Millions-of-surgery-patients-at-risk-in-drug-research-fraud-scandal.html Accessed on Jan 10, 2016
14. Colquhoun, D. (2014). An investigation of the false discovery rate and the misinterpretation of P values. Royal Society Open Science, 1, 1–15.
15. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22(11), 1359–1366.
16. Dance of the p-values: www.youtube.com/watch?v=5OL1RqHrZQ8 Accessed on Jan 10, 2016
17. theness.com/neurologicablog/index.php/registering-studies-reduces-positive-outcomes/ Accessed on Jan 10, 2016
18. biorxiv.org
19. en.wikipedia.org/wiki/Scientific_misconduct Accessed on Jan 10, 2016