Modeling Response Style Behaviors in a Cross-cultural Context
It has frequently been shown that response styles, tendencies of participants to respond to questionnaire items in a systematic way regardless of item content, decrease the validity of questionnaire results (Van Vaerenbergh & Thomas, 2013). Extensive research has been done to model and correct for these response styles (Böckenholt & Meiser, 2017; Bolt et al., 2014; Cho, 2013; Falk & Cai, 2016; Jin & Wang, 2014).
Response styles become especially relevant in cross-cultural research, as much research indicates a relationship between culture, language, and response styles (Harzing, 2006; Hui & Triandis, 1989; Marin et al., 1992). Not taking response styles into account in a cross-cultural context may lead to misleading conclusions about the trait of interest (Morren et al., 2011). Although this is a well-known phenomenon, current studies on response style modeling often focus on individual trait estimates and recovery of item parameters rather than on group-level inferences (see Cho, 2013; Falk & Cai, 2016; Jin & Wang, 2014; Leventhal, 2017, for examples). This dissertation therefore aims to apply and develop response style models in the context of cross-cultural research.
Project 1: Which response style model is best for modeling extreme response style in a cross-cultural context?
This project examines the literature to identify existing methods for modeling response styles. After relevant models have been identified (e.g., the IRTree model, the mixture partial credit model), data will be generated under each of these models, and each generated dataset will then be analyzed with each of the collected models, including models other than the one that generated it. For example, data may be generated under a mixture partial credit model and then evaluated with an IRTree model. The main question is: what happens to the estimated distribution of the substantive trait in multigroup comparisons when data are generated under one model but analyzed with another? Various simulation conditions will be considered in this study.
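To make the generation side of such a study concrete, the sketch below simulates responses under a simple two-node IRTree model (direction node driven by the substantive trait, extremity node driven by an extreme response style trait) for two groups that differ only in their mean ERS level. This is an illustrative sketch, not the dissertation's actual design: the scale length, node structure, parameter values, and group difference are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_irtree(n_persons, n_items, theta, eta, b_dir, b_ext):
    """Simulate 4-point responses under a two-node IRTree:
    node 1 (agree vs. disagree) driven by the substantive trait theta,
    node 2 (extreme vs. mild) driven by the ERS trait eta."""
    # Rasch-type probabilities for each node, persons x items
    p_dir = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_dir[None, :])))
    p_ext = 1.0 / (1.0 + np.exp(-(eta[:, None] - b_ext[None, :])))
    agree = rng.random((n_persons, n_items)) < p_dir
    extreme = rng.random((n_persons, n_items)) < p_ext
    # Map node outcomes to categories 1..4:
    # 1 = disagree-extreme, 2 = disagree-mild, 3 = agree-mild, 4 = agree-extreme
    return np.where(agree, np.where(extreme, 4, 3), np.where(extreme, 1, 2))

n, J = 500, 10
b_dir = rng.normal(0.0, 1.0, J)   # direction-node item difficulties (assumed)
b_ext = rng.normal(0.0, 1.0, J)   # extremity-node item thresholds (assumed)

# Two groups with identical substantive-trait distributions but different ERS means
theta_a, eta_a = rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n)
theta_b, eta_b = rng.normal(0.0, 1.0, n), rng.normal(0.8, 1.0, n)  # group B: more ERS

resp_a = simulate_irtree(n, J, theta_a, eta_a, b_dir, b_ext)
resp_b = simulate_irtree(n, J, theta_b, eta_b, b_dir, b_ext)

# Group B endorses extreme categories (1 and 4) more often even though the
# substantive trait means are equal, which is exactly what a naive multigroup
# comparison of sum scores or a misspecified model may misattribute to the trait.
```

Fitting such data with a different model (e.g., a mixture partial credit model) would then show how the estimated group-level trait distributions shift under model misspecification.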
Project 2: To be determined further. This project will likely compare the effects of various methods of compensating for extreme responding (e.g., using true values as a baseline, using IRT model-estimated values, collapsing categories, and doing nothing) on reliability and possible bias.
Project 3: To be determined further.
Project 4: To be determined further.
Prof. Dr. Jeroen Vermunt
Dr. Maria Bolsinova
Dr. Jesper Tijmstra
Department of Methodology and Statistics
15 September 2021 – 15 September 2025