Marjan Bakker

University of Amsterdam
FMG, Department of Psychology
Psychological Methodology

– prof.dr. H.L.J. Van der Maas (University of Amsterdam)
– dr. J.M. Wicherts (University of Amsterdam)

On April 24th, 2014, Marjan Bakker defended her thesis entitled

Good science, bad science. Questioning research practices in psychological research

In this dissertation we have questioned the current research practices in psychological science and thereby contributed to the ongoing discussion about the credibility of psychological research. We focused specifically on problems with the reporting of statistical results and showed that reporting errors are rather common in the psychological literature and related to other questionable research practices (QRPs), such as not sharing data with other researchers. Moreover, we investigated the consequences of applying several commonly used QRPs. The use of QRPs, like the ad hoc exclusion of outliers to obtain a significant result, will increase the probability of publishing false positive results, will result in biased effect size estimates, and will distort meta-analytical results. Furthermore, we investigated the power paradox, or the question of why the psychological literature contains so many significant results based on underpowered studies. We showed that running multiple underpowered studies with small samples, combined with the use of QRPs, represents the “optimal” strategy for a researcher whose goal is to find a significant p value in the hypothesized direction. For science, however, this strategy is disastrous. Another reason for the power paradox might be many researchers’ flawed intuitions about power. Specifically, we showed that researchers strongly overestimated the power of typical studies in their field. We also discuss the current directions and initiatives that are already improving, or will hopefully improve, research practices in psychological science in the future.
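The inflation of false positives through ad hoc outlier exclusion can be illustrated with a small simulation (a sketch, not the dissertation's actual analysis code): under a true null effect, a researcher first runs an honest test and, only if it is non-significant, removes "outliers" beyond 2 SD and tests again. The sample size, trimming rule, and the normal approximation to the t distribution are illustrative assumptions.

```python
import math
import random

def p_value(sample):
    """Two-sided one-sample test of mean 0, using a normal
    approximation to the t distribution (for illustration only)."""
    n = len(sample)
    m = sum(sample) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    t = m / (sd / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

def simulate(n_sims=5000, n=20, alpha=0.05, seed=1):
    random.seed(seed)
    honest = qrp = 0
    for _ in range(n_sims):
        # true null: population mean is 0, so any rejection is a false positive
        data = [random.gauss(0, 1) for _ in range(n)]
        p = p_value(data)
        honest += p < alpha
        if p >= alpha:
            # QRP: ad hoc exclusion of observations beyond 2 SD, then retest
            m = sum(data) / n
            sd = math.sqrt(sum((x - m) ** 2 for x in data) / (n - 1))
            trimmed = [x for x in data if abs(x - m) <= 2 * sd]
            if len(trimmed) < len(data):
                p = p_value(trimmed)
        qrp += p < alpha
    return honest / n_sims, qrp / n_sims

honest_rate, qrp_rate = simulate()
print(f"honest false positive rate: {honest_rate:.3f}")
print(f"with outlier QRP:           {qrp_rate:.3f}")
```

Because the exclusion step gives the researcher a second chance at significance whenever the first test fails, the QRP rate is necessarily at least as high as the nominal rate, and in practice clearly above it.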


Expectancy effects on the analysis of behavioral research data
Behavioral researchers normally try to avoid expectancy effects during data collection, but they typically perform the statistical analysis of their study themselves. In this project we study whether researchers’ expectations can bias their statistical results. We propose that researchers may suffer from confirmation bias, which may result in a failure to notice statistical errors that are in line with their hypotheses. Moreover, we hypothesize that researchers may resort to alternative analyses when the planned analysis fails to support their hypothesis. Expectancy effects on statistical outcomes will be studied by means of re-analyses and by employing correlational, experimental, and meta-analytical methods.

This project was financed by NWO.