Written by Mallorie M. Smith
Although not a readily celebrated type of research, reproducibility has become exceedingly important to the progress of psychological science. If you are unfamiliar with the term, reproducibility (or, in this case, direct replication) is the attempt to duplicate a research experiment and its results (Open Science Collaboration, 2015). To break it down, an experiment typically consists of an antecedent, or predictor variable, that experimenters attempt to link to an outcome, with the intention of eventually finding the formula that leads to the desired result. If the predictor yields a significant effect, that supports the idea that the predictor leads to the outcome in some way. However, even when an experiment yields a significant effect, there is still a chance that the effect arose by random chance alone (a false positive); under the conventional .05 significance threshold, roughly 1 in 20 experiments testing a true null effect will appear significant anyway. It is therefore essential to replicate experiments, providing stronger support for or against an effect by determining whether it keeps occurring. Now, this brings up several questions, such as, "How closely do the experiments need to be alike?" and "How do you know when the antecedent really does lead to an outcome?" I intend to discuss these questions, and more, in a later post (so stay tuned!).
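If you like to see ideas in action, the false-positive logic above can be demonstrated with a quick simulation. The sketch below (illustrative only, using Python's standard library) runs thousands of fake two-group experiments in which the predictor truly has no effect, and counts how often a naive significance test comes out "significant" anyway:

```python
import random

random.seed(42)

def fake_experiment(n=30):
    """Simulate one two-group experiment where the predictor truly has NO effect.

    Returns True if a two-sample t-style comparison looks 'significant' anyway,
    i.e., a false positive.
    """
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    mean_a = sum(group_a) / n
    mean_b = sum(group_b) / n
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n - 1)
    # Welch-style t statistic; |t| > 2.0 approximates the two-tailed .05
    # cutoff for samples of this size
    t = (mean_a - mean_b) / ((var_a / n + var_b / n) ** 0.5)
    return abs(t) > 2.0

num_sims = 10_000
false_positives = sum(fake_experiment() for _ in range(num_sims))
print(f"False-positive rate: {false_positives / num_sims:.3f}")  # close to 0.05
```

Even though no real effect exists in any of these simulated experiments, about 5% of them still come out "significant," which is exactly why a single significant result is not proof and why replication matters.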
Further Evidence of the Importance of Reproducibility and Transparent Research
Reproducibility research helps to protect the ethics and reputation of psychological research. The case of Diederik Stapel illustrates this best. An interim report published by Tilburg University (2011) noted that Stapel had been fabricating a large amount of his own data, as well as several datasets belonging to his colleagues and graduate students (unbeknownst to them), since 2004. He was a renowned scientist at the time; this information indicated, however, that his extensive body of work was built on false data. According to Retraction Watch, as of 2015 his research had accumulated 58 retractions from major psychology journals, including the Journal of Personality and Social Psychology and the Journal of Experimental Social Psychology (Palus, 2015). Repeated replications of Stapel's work would likely have revealed consistently non-significant findings, exposing the misleading results of his studies. Likewise, his refusal to share his data and the secrecy surrounding his methods likely helped him continue fabricating data for so long. This case, along with others (see Carpenter, 2017, and Enserink, 2012, for examples), led the psychological community to place higher importance on reproducibility and transparent science. Organizations such as the Center for Open Science and the Association for Psychological Science now work to promote these practices in psychological research.
Estimating the Reproducibility of Psychological Science
The Center for Open Science, or COS, is also responsible for a major contribution to reproducibility with its 2015 article, "Estimating the Reproducibility of Psychological Science" (also called The Reproducibility Project). The Social Relations Collaborative, in conjunction with over 250 other contributing researchers from around the world, replicated one of the 100 studies drawn from three different psychology journals (for our specific contribution, see Sinclair, Hood, & Wright, 2014). One important aspect of the project is that it was preregistered, meaning that aspects of each study, such as hypotheses and analyses, were listed publicly before the study began. All protocols, reviews, and write-ups associated with it are publicly available, promoting the transparency and open research strongly advocated by the COS.
Synopsis and (Some) Implications of The Reproducibility Project
The Open Science Collaboration (2015) sought to address some of the uncertainties surrounding reproducibility, such as the lack of an established rate of replication success. Using fixed-effect meta-analyses and estimated effect sizes, the researchers were able to compare the results of the original and replication studies. The replications showed markedly weaker evidence than the originals, with only 36% (just over a third!) of the original studies being successfully replicated. As you can imagine, these results made a large impact on the psychological community, leading many to question the validity of psychological research. The publication was the topic of 300+ blogs and news articles and was placed in the top 10 scientific breakthroughs of 2015 by both Science and Discover magazines (to see the full impact, see Altmetric, 2017).
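The Open Science Collaboration actually evaluated replication success in several ways (significance, effect-size comparisons, subjective ratings), but the simplest criterion, "did the replication also find a significant effect in the same direction?", can be sketched in a few lines. The study records below are invented purely for illustration; they are not data from the project:

```python
# Each toy record: (original_p, original_direction, replication_p, replication_direction)
# Directions are +1 or -1, indicating which way the effect pointed.
# These numbers are made up for illustration only.
studies = [
    (0.01, +1, 0.03, +1),  # significant again, same direction -> success
    (0.04, +1, 0.20, +1),  # replication not significant       -> failure
    (0.02, -1, 0.01, +1),  # significant, but direction flipped -> failure
    (0.03, +1, 0.04, +1),  # success
]

def replicated(orig_p, orig_dir, rep_p, rep_dir, alpha=0.05):
    """One simplified criterion: the replication is significant at alpha
    AND its effect points the same way as the original."""
    return rep_p < alpha and rep_dir == orig_dir

successes = sum(replicated(*study) for study in studies)
rate = successes / len(studies)
print(f"Replication success rate: {rate:.0%}")  # 50% in this toy set
```

Applying a tally like this across 100 real replications is how a headline figure such as "36% replicated" is produced, though the real analyses also compared effect sizes rather than relying on significance alone.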
While this is a significant study in the progression of psychological research, The Reproducibility Project (2015) is merely a starting point for further research into the complexities of scientific reproducibility. It is for this reason that my thesis and dissertation will focus on answering important questions such as, "What is a standard rate of reproducibility?" and "Is there a publication bias against reproducibility in psychological research?" If the topic of reproducibility interests you as much as it does me, I urge you to check back to this blog regularly for monthly discussions of topics in replication research, as well as updates on the progress of my own project on reproducibility (and several other posts from my colleagues on relationship and bullying research)!
Altmetric. (2017). Estimating the reproducibility of psychological science. Altmetric.com. Retrieved from https://www.altmetric.com/details/4443094
Carpenter, S. (2017). Harvard psychology researcher committed fraud, U.S. investigation concludes. Science | AAAS. Retrieved from http://www.sciencemag.org/news/2012/09/harvard-psychology-researcher-committed-fraud-us-investigation-concludes
Enserink, M. (2012). Rotterdam marketing psychologist resigns after university investigates his data. Science. Retrieved from http://www.sciencemag.org/news/2012/06/rotterdam-marketing-psychologist-resigns-after-university-investigates-his-data
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. http://dx.doi.org/10.1126/science.aac4716
Palus, S. (2015). Diederik Stapel now has 58 retractions. Retraction Watch. Retrieved from http://retractionwatch.com/2015/12/08/diederik-stapel-now-has-58-retractions/
Sinclair, H., Hood, K., & Wright, B. (2014). Revisiting the Romeo and Juliet Effect (Driscoll, Davis, & Lipetz, 1972). Social Psychology, 45(3), 170-178. http://dx.doi.org/10.1027/1864-9335/a000181
Tilburg University. (2011). Flawed science: The fraudulent research practices of social psychologist Diederik Stapel. Retrieved from https://errorstatistics.files.wordpress.com/2015/01/tilberg-report-stapel-final-report-levelt1.pdf
Verfaellie, M., & McGwin, J. (2011). The case of Diederik Stapel. American Psychological Association. Retrieved January 2017, from http://www.apa.org/science/about/psa/2011ca/12/diederik-stapel.aspx
Dr. H. Colleen Sinclair
Social Psychologist, Relationships Researcher
Ms. Chelsea Ellithorpe
Lab Manager of the Social Relations Collaborative and Blog Editor