Bayesian Evidence Synthesis for informative hypotheses: Aggregating evidence from conceptual replications
Scientific theories are essential for the advancement of science, as they underpin its main objectives: description, explanation, and prediction. It is therefore necessary to have
appropriate tools to evaluate the extent to which a theory is in line with reality. Because no single study can be considered a sufficient test of a theory, accumulating evidence across studies is crucial. The
proposed research provides a tool for aggregating evidence from studies that investigate a common theory but are too diverse to meta-analyze with traditional approaches.
Awareness of the risk of drawing strong conclusions based on a single study is high, partly due to the large-scale replication attempt of numerous psychological studies (Open Science Collaboration, 2015) and subsequent debates in both science (e.g., Etz & Vandekerckhove, 2016; Baker & Penny, 2016; Gilbert et al., 2016) and the media (Carey, 2015; Achenbach, 2015; Feldman, 2015). Many of the studies failed to replicate, providing a strong warning that more attention to replication is needed to arrive at solid conclusions about tested effects or theories. Indeed, in recent years, a meta-analytic way of thinking has been advocated in the scientific community. However, this renewed interest in replication
and subsequent aggregation is mostly directed at studies that are highly similar, while, in fact, there are different types of replications, serving different purposes (Hudson, 2021). Through Bayesian evidence synthesis (BES), we aim to aggregate evidence from multiple studies that do not meet the bar of similarity required for pooling as in traditional meta-analysis.
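The core aggregation step in BES can be sketched as follows: each study yields a Bayes factor for the informative hypothesis against a reference (e.g., its complement), and evidence is combined across independent studies by multiplying these Bayes factors and converting the result to posterior model probabilities. The sketch below is a minimal illustration in Python, using hypothetical Bayes-factor values and assuming equal prior model probabilities; it is not the project's implementation.

```python
import math

def combine_bayes_factors(bfs):
    """Combine per-study Bayes factors (each hypothesis vs. the same
    reference) by multiplication, assuming independent studies."""
    return math.prod(bfs)

def posterior_model_probs(combined_bfs):
    """Convert combined Bayes factors for competing hypotheses into
    posterior model probabilities, assuming equal prior probabilities."""
    total = sum(combined_bfs)
    return [bf / total for bf in combined_bfs]

# Hypothetical example: three conceptual replications each yield a Bayes
# factor for the informative hypothesis H1 against its complement Hc.
study_bfs_h1 = [3.2, 1.8, 4.5]                 # H1 vs. Hc, per study
study_bfs_hc = [1 / bf for bf in study_bfs_h1]  # Hc vs. H1, per study

bf_h1 = combine_bayes_factors(study_bfs_h1)
bf_hc = combine_bayes_factors(study_bfs_hc)
pmp_h1, pmp_hc = posterior_model_probs([bf_h1, bf_hc])
```

Because the per-study Bayes factors may come from designs that differ in population, instruments, and analysis model, this multiplicative update is what allows conceptually diverse studies to be synthesized without pooling their raw data.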
Three objectives of the proposed research are:
Objective 1: Further developing and investigating BES and disseminating resulting guidelines for its use. The core of this research project entails the methodological investigation of the performance of BES. With simulation studies and direct applications to relevant empirical examples, BES will be further developed. Important factors affecting the estimated level of evidence, both in single studies and in the aggregated result, are the sample sizes of studies, the effect sizes under investigation, and the specificity of the hypotheses tested. In the next section, an elaboration of the current state of knowledge is given, followed by an outline of challenges to be tackled with the proposed research.
Objective 2: Implementing the proposed methodology and in this way contributing directly to substantive research goals as well as to the dissemination of BES to a wider audience. Implementation of BES in psychology research enables validation of the developed methods on real
data and theories, may inspire new methodological questions and subsequent developments, and is a useful way to disseminate BES to a wider community. We will collaborate with Dr. A.M. Krypotos on synthesizing research results using BES in his field of research. Dr. Krypotos investigates fundamental
processes of avoidance and fear learning and the relation of those processes to psychopathology. The data that we will use are valuable both for testing BES and for clinical purposes, given that conditioning research has provided significant insights into the basic mechanisms of anxiety-related disorders (Duits et al., 2015), providing stable groundwork for testing relevant treatment programs (Craske et al., 2014).
Objective 3: Increasing awareness of the need for study variation (i.e., conceptual replication) as a robustness check for the many ancillary choices made in single studies. The third objective originates from the assumption that researchers may underestimate the impact of choices made throughout the research process. Starting from a specific research question or theory, there are many researcher degrees of freedom in devising the specifics of a study, e.g., the choice of design and context, the targeted population, the measurement instruments used, and data-processing decisions (Wicherts et al., 2016; Simmons et al., 2011). Stated differently, any other researcher might make different choices for the investigation of the same theory. Such researcher degrees of freedom lead to the so-called garden of forking paths (Gelman & Loken, 2014; Steegen et al., 2016), referring to the many paths one can choose when designing and executing a study. These choices may seem minor but can greatly affect the results and conclusions. Increasing awareness of the potential impact of study-specific choices on results, and therefore of the need for conceptual replications, is another contribution of this project.
Prof. dr. Irene Klugkist
Dr. D. Veen
1 September 2023 – 1 August 2027