Academic webpage Damiano D’Urso
Unraveling measurement non-invariance in multilevel data in the Social Sciences
Psychological researchers often measure unobservable psychological attributes (e.g., personality traits or emotions) using observable variables such as questionnaire items. However, valid comparison of the measured attributes across groups, subjects, and/or time points requires measurement invariance (MI): the same measurement model (MM) must hold across the compared units. It is therefore of great importance to test for MI, also when the number of units to be compared is large and when the units are clustered, that is, when dealing with multilevel data structures. The importance of the problem is underlined by the fact that it is now common practice to compare numerous groups at once; examples include the comparison of schools in scholastic surveys, countries in cross-cultural studies, and individuals in longitudinal studies.
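To make the MI requirement concrete, it can be sketched in the standard multi-group factor-analytic notation (a textbook formulation, not specific to this project; the symbols below are the conventional ones):

```latex
% Measurement model for the item vector of a unit (e.g., group) g:
\[
\mathbf{x}_{g} \;=\; \boldsymbol{\tau}_{g} \;+\; \boldsymbol{\Lambda}_{g}\,\boldsymbol{\eta}_{g} \;+\; \boldsymbol{\varepsilon}_{g},
\qquad g = 1, \dots, G,
\]
% Metric (weak) invariance:  \Lambda_1 = \dots = \Lambda_G   (equal loadings)
% Scalar (strong) invariance: additionally \tau_1 = \dots = \tau_G (equal intercepts),
% which is what licenses valid comparisons of latent means across the units.
```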
On the one hand, several modelling frameworks and tools are currently available to compare groups and to detect violations of MI. However, for specific types of data it is not yet clear which methodologies perform best at detecting those violations, nor under which conditions.
For example, for polytomously scored items with a limited number of ordered categories, one may use either a factor-analytic approach or an item response theory approach to investigate the extent to which invariance holds and to study how the compared MMs differ.
However, the two approaches differ notably, both in their chances of detecting various types of MI violations and in the procedures and tools available within each framework, and it is not yet clear which one should be preferred.
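The two frameworks are in fact closely related for such items: under a probit link with a unit-variance latent factor, the graded response model parameters correspond to the ordinal factor model parameters via a well-known reparameterization (e.g., Takane &amp; de Leeuw, 1987). A sketch, with conventional notation:

```latex
% Graded response model (normal-ogive metric) for item j, category c:
%   P(x_j >= c | \eta) = \Phi( a_j \eta - b_{jc} )
% Correspondence with the ordinal factor model (standardized loading
% \lambda_j, threshold \nu_{jc}):
\[
a_{j} = \frac{\lambda_{j}}{\sqrt{1 - \lambda_{j}^{2}}},
\qquad
b_{jc} = \frac{\nu_{jc}}{\sqrt{1 - \lambda_{j}^{2}}}.
\]
% Hence non-invariant loadings/thresholds map onto non-invariant
% discriminations/difficulties, yet the fit measures and testing
% procedures of the two frameworks need not flag them equally often.
```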
On the other hand, these existing methods generally examine violations of MI only when comparing small numbers of groups, are mostly confirmatory in nature, are suited only to specific types of data, or provide insufficient information on the sources of non-invariance. New methods for comparing MMs across many units in multilevel data are therefore necessary to indicate for which units MI holds (so that valid comparisons can be made) and for which units MI is violated. Such methods for exploring MM differences may point to sources of non-invariance that one can try to address (e.g., response styles, differential item functioning), but may also reveal substantively interesting structural differences, such as differences in the structure of personality or emotional experience.
The goal of this project is two-fold: on the one hand, we aim to evaluate the performance of current tools at detecting violations of MI in the context of multilevel data structures; on the other hand, we aim to overcome the limitations of these techniques by developing new models and methods that allow for a fine-grained and flexible evaluation of MI.
Supervisors: prof. dr. J.K. Vermunt, dr. K. de Roover
Project duration: 1 October 2018 – 1 October 2022