Karel Veldkamp



University of Amsterdam
Psychological Methods

Project
Towards Psychometrically Interpretable Neural Networks: Bridging the Gap between Latent Variable Models from Psychometrics and Neural Networks from Deep Learning.

Digital transformation has dramatically changed both the amount and the nature of the data that scientists can collect. It is now relatively easy to gather large-scale datasets from thousands of respondents using online questionnaire administration services such as Mechanical Turk. In addition, covariates such as response times, mouse movements, and keystrokes can be collected to better understand respondents’ online behavior or to predict future behavior. Due to the size and high dimensionality of these data, conventional statistical approaches are unsuitable for their analysis, while more contemporary approaches lack interpretability.
This calls for innovative data science tools that make the analysis of large-scale, high-dimensional datasets both feasible and interpretable. An important class of data science tools suitable for large-scale and multidimensional datasets is that of neural networks from the field of deep learning. In neural networks, complex mathematical relations between dependent and independent variables are specified using intermediate feature detectors that can be conceived of as (non-linear) components or latent variables. Using efficient algorithms, deep neural networks are relatively straightforward to fit to large amounts of multidimensional data. However, deep neural networks have characteristics that hamper their application to substantive scientific data problems. First, classical statistical estimation theory does not apply to neural networks, resulting in a lack of theory about identifiability, parameter consistency, and parameter efficiency. Second, neural networks are considered black boxes, which obscures their interpretation in terms of the theoretical variables under study. In this project, these two challenges are addressed by focusing on the relation between neural networks and latent variable models from psychometrics.
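The idea of intermediate feature detectors acting as latent variables can be made concrete with a minimal sketch. The following code is a hypothetical illustration in PyTorch; the data, layer sizes, and settings are assumptions and not part of the project. It fits a small autoencoder whose low-dimensional bottleneck plays the role of a set of non-linear latent variables:

# Minimal, hypothetical sketch: an autoencoder whose bottleneck acts as a set of
# non-linear latent variables for high-dimensional (item response) data.
import torch
from torch import nn

n_items, n_latent = 50, 3                 # assumed numbers of observed and latent variables

encoder = nn.Sequential(nn.Linear(n_items, 32), nn.ReLU(), nn.Linear(32, n_latent))
decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(), nn.Linear(32, n_items))
autoencoder = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(1000, n_items)            # placeholder data standing in for real responses
for _ in range(200):                      # a few gradient steps to illustrate fitting
    optimizer.zero_grad()
    loss = loss_fn(autoencoder(x), x)     # reconstruct the input through the bottleneck
    loss.backward()
    optimizer.step()

scores = encoder(x)                       # one row of bottleneck ("latent variable") scores per respondent

If the non-linearities are dropped and a squared-error loss is used, a classical result is that the optimal bottleneck spans the same subspace as the leading principal components, which is the kind of relation between autoencoders and PCA taken up in Part 1 below.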

Neural networks and latent variable models are similar in that both assume latent variables to underlie the data, but, contrary to neural networks, 1) latent variable models are explicitly defined within classical statistical estimation theory, so that their identifiability, parameter consistency, and parameter efficiency are well understood; and 2) latent variable models are more interpretable, as they are explicitly formulated in terms of a theory about the concepts under study. However, applying latent variable models to large-scale, high-dimensional data is challenging, whereas this is not an issue for neural networks.
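As a concrete example of such an explicit formulation, the multidimensional two-parameter item response theory model (one of the latent variable models appearing in Part 1 below) specifies the probability that respondent p gives a correct response to item i as

\[ P(X_{pi} = 1 \mid \boldsymbol{\theta}_p) = \frac{1}{1 + \exp\{ -(\mathbf{a}_i^{\top} \boldsymbol{\theta}_p + b_i) \}}, \]

where theta_p is the vector of latent traits of respondent p, a_i is the vector of discrimination parameters of item i, and b_i is the item's easiness parameter; each of these quantities has a direct interpretation within the measurement theory.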

Therefore, in this project, we combine the strengths of neural networks and latent variable models to arrive at an innovative class of data science tools that is both explainable and applicable in large-scale, high-dimensional data settings. To this end, we establish the basis for combining neural networks and latent variable models in five subprojects:
In Part 1 (Subprojects 1, 2, and 3), we will establish the theoretical relations between Restricted Boltzmann Machines (RBM), Autoencoders (AE), and Variational Autoencoders (VAE) from the field of deep learning, and Multidimensional Item Response Theory (MIRT), Principal Component Analysis (PCA), and Parallel Factor Analysis (PARAFAC) from psychometrics; one such relation is sketched below.
In Part 2 (Subprojects 4 and 5), we will apply these results to two important psychometric problems (item selection in Computerized Adaptive Testing, CAT, and testing for Differential Item Functioning, DIF) to demonstrate the impact of the present work and to ensure visibility of the resulting methods.
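To give a flavour of the relations studied in Part 1, consider the following hypothetical sketch (written in PyTorch under simplifying assumptions; it is an illustration, not a result of the project). A variational autoencoder for binary item responses with a purely linear decoder followed by a sigmoid can be read as an estimation method for the MIRT model given earlier: the decoder weights and biases play the roles of the discrimination and easiness parameters, while the encoder amortizes the estimation of the latent traits.

# Hypothetical sketch: a VAE for binary item responses whose linear decoder
# mirrors the MIRT response function; decoder.weight ~ discriminations (a_i),
# decoder.bias ~ easiness parameters (b_i). Assumed setup, not a project result.
import torch
from torch import nn

n_items, n_latent = 50, 3

encoder = nn.Sequential(nn.Linear(n_items, 32), nn.ReLU())   # shared encoder trunk
mu_layer = nn.Linear(32, n_latent)                           # posterior means of the latent traits
logvar_layer = nn.Linear(32, n_latent)                       # posterior log-variances
decoder = nn.Linear(n_latent, n_items)                       # linear decoder: logits = a'theta + b

def vae_loss(x):
    # Negative evidence lower bound: reconstruction term plus KL to a standard normal prior.
    h = encoder(x)
    mu, logvar = mu_layer(h), logvar_layer(h)
    theta = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
    logits = decoder(theta)                                       # MIRT-style response function
    rec = nn.functional.binary_cross_entropy_with_logits(logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

params = (list(encoder.parameters()) + list(mu_layer.parameters())
          + list(logvar_layer.parameters()) + list(decoder.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)

x = torch.bernoulli(torch.full((1000, n_items), 0.5))   # placeholder binary "responses"
for _ in range(200):
    optimizer.zero_grad()
    loss = vae_loss(x)
    loss.backward()
    optimizer.step()

Under these assumptions, decoder.weight corresponds to the matrix of item discriminations and decoder.bias to the item easiness parameters, which illustrates the sense in which the VAE and MIRT can be related.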

This project is an IOPS project because we study new ways to estimate psychometric models and transfer psychometric concepts to the machine learning domain.

Supervisors
Dr. R.P.P.P.M. Grasman
Dr. D.M. Molenaar

Financed by
University of Amsterdam

Period
2022 – 2027