Faculty of Social and Behavioral Sciences
Bayesian Evaluation of Informative Hypotheses Using bain
Scientific Background of the Research
There is increasing interest in the Bayesian evaluation of informative hypotheses using bain (https://informative-hypotheses.sites.uu.nl/software/bain/). Key contributors to the development of bain are Xin Gu, Qianrao Fu, Joris Mulder, Caspar van Lissa, and Herbert Hoijtink, and key references are Gu et al. (2014), Gu (2016), Gu et al. (2017), Hoijtink (2012), Hoijtink et al. (2018), and Hoijtink et al. (2019). bain is available as an R package and in the open-source statistics package JASP (https://jasp-stats.org/).
Informative hypotheses are formal representations of researchers' expectations, expressed as inequality constraints among the parameters of the statistical model of interest. A simple example is:
Hi: μ1 > μ2 > μ3 > μ4
that is, four means are expected to be ordered from largest to smallest. Using the Bayes factor (Kass and Raftery, 1995), the evidence in the data for Hi and its complement "not Hi" can be quantified. This enables a researcher to make statements like: after observing the data, "my expectation" Hi is ten times as likely (that is, the Bayes factor is 10) as "not my expectation", that is, not Hi.
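The translation from a Bayes factor to posterior hypothesis probabilities can be sketched as follows. This is a minimal Python illustration of the general relation, not bain itself; equal prior probabilities for Hi and not Hi are assumed.

```python
# Sketch: turning a Bayes factor BF(Hi vs not-Hi) into posterior
# probabilities for the two hypotheses (assuming equal priors).

def posterior_probs(bf_i_vs_not, prior_i=0.5):
    """Posterior probabilities of Hi and not-Hi given the Bayes factor."""
    prior_not = 1.0 - prior_i
    odds = bf_i_vs_not * (prior_i / prior_not)  # posterior odds for Hi
    p_i = odds / (1.0 + odds)
    return p_i, 1.0 - p_i

# The Bayes factor of 10 from the text: Hi receives posterior
# probability 10/11, not-Hi receives 1/11.
p_hi, p_not = posterior_probs(10.0)
print(round(p_hi, 3), round(p_not, 3))  # 0.909 0.091
```

With equal priors the posterior odds equal the Bayes factor, which is why a Bayes factor of 10 can be read directly as "ten times as likely".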
bain is increasingly being used. One example is Zondervan-Zwijnenburg et al. (2019), which was published in Child Development, a top journal in developmental psychology, and received a lot of attention in the media (for example, Daily Mail https://www.dailymail.co.uk/health/article-7302253/Children-born-older-parents-better-behaved-aggressive.html; Earth https://www.earth.com/news/children-older-parents-behavioral-problems/; and US News https://www.usnews.com/news/health-news/articles/2019-07-31/study-children-of-older-parents-have-fewer-behavioral-problems?src=usn_tw). Another example is the paper by Dogge et al. (2019) in Scientific Reports (Nature Publishing Group).
Description of Problem and Research Objectives
Researchers (psychologists, medical scientists, biologists) using bain regularly send us questions and requests concerning the use of bain. Some of these questions are easily answered; others, however, require further research. Three of these questions require the further development of Bayesian statistical methods, and one question has an applied nature.
Question 1. A number of models can be used for the analysis of repeated measures, for example, repeated measures ANOVA, multilevel analysis, autoregressive models, and multivariate analysis of variance. Researchers have repeatedly asked which model they should use for which kind of repeated measures data, and what the best manner is to evaluate informative hypotheses with respect to the parameters of repeated measures models using bain.
Question 2. The main goal of null-hypothesis significance testing (NHST) is to control the Type I and Type II error rates. The main goal of Bayesian hypothesis evaluation is to control the Bayesian error probabilities, that is, the probability of each hypothesis under consideration given the data that are available. This implies that sequential analysis (collect data, evaluate the hypotheses, collect more data, re-evaluate the hypotheses, etc.) needs careful planning in the classical framework (otherwise the Type I error level will be inflated; see, for example, Demets and Lan, 1994), whereas it can be done "on the fly" in the Bayesian framework (Bayesian error probabilities are conditional on the available data, irrespective of whether these data were collected sequentially; see, for example, Rouder, 2014). Researchers have repeatedly asked how to do Bayesian sequential analysis with bain and what the properties of Bayesian sequential analysis with bain are.
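The sequential workflow described above can be sketched as follows. This is an illustration only, assuming a simple normal model with known standard deviation, a vague prior, and the two hypotheses H: mu > 0 versus H': mu < 0; bain handles far more general models, and the function name and numbers below are invented for the sketch.

```python
import math
import random

def bf_mu_positive(xs, sigma=1.0):
    """Bayes factor for H: mu > 0 versus H': mu < 0.

    Under a flat prior the posterior of mu is approximately
    Normal(mean(xs), sigma / sqrt(n)); since both hypotheses have
    equal prior probability, BF = P(mu > 0 | data) / P(mu < 0 | data).
    """
    n = len(xs)
    xbar = sum(xs) / n
    z = xbar / (sigma / math.sqrt(n))
    p_pos = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # normal CDF at z
    return p_pos / (1.0 - p_pos)

# Sequential analysis: collect a batch, evaluate, collect more, re-evaluate.
random.seed(1)
data = []
for batch in range(5):
    data += [random.gauss(0.3, 1.0) for _ in range(20)]  # true mu = 0.3
    print(len(data), round(bf_mu_positive(data), 2))
```

Because the Bayes factor is conditional on whatever data are available, the analysis can simply be repeated after every batch without any multiplicity penalty; typically the evidence for the true hypothesis accumulates as n grows.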
Question 3. When NHST is applied multiple times in the evaluation of a research question, a correction for "multiple testing" has to be applied in order to control the Type I error level. The counterpart of "multiple testing" in the context of Bayesian hypothesis evaluation is "the combination of Bayesian error probabilities over multiple tests". Researchers have repeatedly asked us how this can be done.
Question 4. Zondervan-Zwijnenburg et al. (2019) has already attracted a lot of attention. It uses data from four independently executed research projects (multiple cohort studies in which children are tracked from birth to adolescence) to answer one and the same research question. However, important questions remain: how to evaluate the repeated measures that are obtained for each child, and how to correct for multiple testing if multiple outcome measures are evaluated for each child.
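For the multiple-cohort setting, one route can be sketched as follows: because the studies are independent, their study-specific Bayes factors for Hi versus not-Hi can be multiplied into one combined Bayes factor. This is a hedged illustration of that idea; the per-study numbers below are invented and do not come from the paper.

```python
from math import prod

# Hypothetical Bayes factors for Hi versus not-Hi from four independent
# cohort studies (made-up numbers; real values would come from bain).
study_bfs = [3.2, 1.8, 5.0, 2.5]

# Independent studies: the evidence multiplies.
combined_bf = prod(study_bfs)

# With equal prior odds, the posterior probability of Hi follows directly.
posterior_hi = combined_bf / (1.0 + combined_bf)
print(round(combined_bf, 1), round(posterior_hi, 3))
```

In this made-up example four individually modest Bayes factors combine into strong joint evidence, which is the appeal of pooling evidence over cohorts rather than over raw data.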
Prof. Dr. Herbert Hoijtink, Dr. Caspar van Lissa
China Scholarship Council
1 September 2020 – 31 August 2024