Ingrid Arts

Methodology and Statistics
Faculty of Social and Behavioural Sciences
Utrecht University

Academic webpage:

Project

Increasing MI by introducing web probing into BSEM

There has been a tremendous increase in cross-national data production in social science research in recent decades. Since various factors potentially threaten the comparability of data, a precondition for drawing substantive conclusions based on such data is to verify that the measures are indeed comparable and to assess “whether or not, under different conditions of observing and studying phenomena, measurement operations yield measures of the same attribute” (Horn & McArdle, 1992, p. 117).
Comparability of measures can be assessed using measurement invariance tests. There are three levels of measurement invariance: configural, metric, and scalar. Only when all three levels of measurement invariance are achieved can means be compared across countries.
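As a rough sketch in generic notation (the symbols below are illustrative and not taken from the project itself), write a one-factor model for item j in country g as \( y_{jg} = \tau_{jg} + \lambda_{jg}\,\eta_{g} + \varepsilon_{jg} \), with intercept \( \tau_{jg} \) and factor loading \( \lambda_{jg} \). The three levels then correspond to increasingly strict, nested constraints:

\begin{align*}
\text{configural:} &\quad \text{the same pattern of zero and non-zero loadings } \lambda_{jg} \text{ holds in every country},\\
\text{metric:}     &\quad \lambda_{jg} = \lambda_{j} \quad \text{for all countries } g \text{ (equal loadings)},\\
\text{scalar:}     &\quad \lambda_{jg} = \lambda_{j} \ \text{and} \ \tau_{jg} = \tau_{j} \quad \text{for all countries } g \text{ (equal loadings and intercepts)}.
\end{align*}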
However, researchers often do not find scalar measurement invariance (see, e.g., Lommen, van de Schoot, & Engelhard, 2014). As a potential solution, approximate approaches such as measurement invariance tests using Bayesian structural equation modeling (BSEM) have been proposed. Measurement invariance testing with BSEM replaces the requirement of exact equality constraints with the requirement that parameters are approximately equal (Muthén & Asparouhov, 2013; van de Schoot et al., 2013).
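A minimal sketch of how this relaxation works, again in the illustrative notation above: instead of requiring exact equality of loadings and intercepts across countries, zero-mean normal priors with a small variance are placed on the cross-country differences, for example for each pair of countries g and g′,

\begin{align*}
\lambda_{jg} - \lambda_{jg'} &\sim N(0, \sigma^{2}),\\
\tau_{jg} - \tau_{jg'} &\sim N(0, \sigma^{2}),
\end{align*}

so that parameters may deviate slightly between countries, with the prior variance \( \sigma^{2} \) determining how much deviation is tolerated.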
In the BSEM approach, the allowed cross-country differences are governed by prior distributions. The larger the prior variance, the larger the “wiggle room” (van de Schoot et al., 2013) and the larger the differences in parameters (e.g., factor loadings and intercepts) that are allowed between countries. In practice, researchers applying the BSEM approach usually either adopt the prior variance of 0.05 mentioned in one of the first papers on BSEM (van de Schoot et al., 2013) or choose another, essentially arbitrary value (e.g., .001 in Kim et al., 2017) and apply it to all factor loadings. A more informed choice would use substantive evidence about which items are (in)comparable across countries to set these prior variances. One possibility to obtain such information is web probing.

Web probing is “the implementation of probing techniques from cognitive interviewing in web surveys with the goal to assess the validity of survey questions” (Behr, Meitinger, Braun, & Kaczmirek, 2017, p. 1). When implemented in cross-national surveys, it is also a valuable tool for assessing the comparability of respondents’ associations. In contrast to most quantitative approaches, it can also reveal the reasons for a lack of comparability. Web probing is particularly useful for detecting construct bias (the construct is not identical across countries) and item bias (e.g., bias due to ambiguous source items, poor item translation, or inapplicability of item content; van de Vijver & Leung, 2011). Web probing has been combined with multigroup confirmatory factor analysis (MGCFA) in mixed-methods approaches to assess the equivalence of measures (Meitinger, 2017), but its qualitative insights have not yet been used to inform the prior variances in BSEM.

Supervisors
Prof. dr. R. van de Schoot, dr. K. Meitinger

Financed by
Utrecht University

Period
1 September 2019 – 31 August 2025