Rasoul Norouzi

Methodology and Statistics
Social and Behavioral Sciences
Tilburg University

Email
Website

Project
Reasoning machine in social science

Introduction- The replication crisis in developmental psychology and other scientific fields has prompted reforms to improve research practices, including preregistration and replication. However, the realization that many hypotheses are not supported by adequately powered replication studies has led some to propose a lack of "good theories" as another explanation (van Lissa, 2022). Theory construction is a process that involves identifying empirical phenomena, developing a proto-theory, constructing a model to examine that proto-theory, and evaluating the process. Phenomena are stable features of the world that scientists aim to explain through theory construction. Literature reviews can shed light on undertheorized phenomena by synthesizing inductive insights from the empirical literature. However, traditional narrative reviews are limited by small convenience samples, confirmation bias, and an emphasis on positive results. A systematic text mining review uses natural language processing techniques to systematically and objectively extract and analyze the content of a large number of research papers, providing a comprehensive and unbiased overview of the research in a particular field. This review method can be used to identify trends, generate new research hypotheses, compare findings, and assess the quality and consistency of research (van Lissa, 2021).
Problem Definition- Although systematic text mining reviews have achieved acceptable results, several challenges limit their use in the social sciences. In the behavioral and social sciences, the detection of construct identity has remained a challenge for over a century: identically named constructs sometimes represent different real-world phenomena (the jingle fallacy), and differently named constructs often represent the same phenomenon (the jangle fallacy) (O'Mara-Eves et al., 2015). Additionally, psychological theories are often weak theories, which hinders theory development, informative failure, and reform. The term "weak theories" refers to narrative and imprecise accounts of hypotheses, which are vulnerable to hidden assumptions and other unknowns: they do not indicate what functional form the relation between two variables takes, under what conditions an effect should occur, or what size an effect should be (Scheel, 2022). In light of this, it is imperative to shift to a new paradigm that considers the context of the text and extracts and reasons about the entities, relationships, and phenomena described in research papers. In particular, BERT (Bidirectional Encoder Representations from Transformers) and graph neural networks (GNNs) have shown great promise in this domain.
BERT is a transformer-based NLP model that has achieved state-of-the-art results on a wide range of NLP tasks, including language understanding and text classification. Its ability to model context and the relationships between words in a sentence makes it particularly well suited for extracting phenomena and hypotheses from research papers (Devlin et al., 2018). GNNs, in contrast, are a class of neural networks designed to operate on graph-structured data, such as the relationships between entities in a research paper. By using GNNs to represent and analyze these relationships, it is possible to identify logical connections between phenomena and hypotheses and to generate new hypotheses based on these connections (Kipf & Welling, 2016).
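To make the message-passing idea behind GNNs concrete, the following is a minimal sketch of one aggregation step in plain Python. The graph, construct names, and feature values are hypothetical toy data, not part of the proposed system.

```python
# One GNN message-passing step with mean aggregation: each node's new
# representation is the element-wise mean of its own features and the
# features of its incoming neighbors.

def gnn_layer(features, edges):
    """features: dict node -> list[float]; edges: (src, dst) pairs,
    with messages flowing src -> dst."""
    updated = {}
    for node, feat in features.items():
        # Collect feature vectors of incoming neighbors.
        incoming = [features[s] for (s, d) in edges if d == node]
        msgs = incoming + [feat]  # include the node's own features
        dim = len(feat)
        # Element-wise mean over self + neighbors.
        updated[node] = [sum(m[i] for m in msgs) / len(msgs) for i in range(dim)]
    return updated

# Toy construct graph: "stress" -> "burnout", "workload" -> "burnout".
feats = {"stress": [1.0, 0.0], "workload": [0.0, 1.0], "burnout": [0.0, 0.0]}
edges = [("stress", "burnout"), ("workload", "burnout")]
out = gnn_layer(feats, edges)
# "burnout" now blends information from both of its causes.
```

Real GNN layers (e.g., graph convolutional networks) replace the plain mean with learned weight matrices and nonlinearities, but the neighbor-aggregation structure is the same.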
Proposed Method- We propose to use BERT and GNNs to extract phenomena and hypotheses from behavioral science papers and to identify logical relationships between these entities. The aim of this research is to develop a system that can automatically generate novel hypotheses about the mechanisms and factors that influence behavior, based on insights derived from existing research. The proposed approach uses BERT to extract constructs and other entities from papers and builds a directed acyclic graph (DAG) from these entities. GNNs can then use this network to reason, answer user queries, and generate new hypotheses.
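The graph stage of this pipeline can be sketched in a few lines of plain Python: extracted constructs become DAG nodes, reported relations become directed edges, and a reachability query surfaces candidate hypotheses (A → C when A → B and B → C are reported but A → C is not). The construct names below are hypothetical examples, not extractions from real papers.

```python
from collections import defaultdict

def build_dag(relations):
    """Turn (cause, effect) pairs into an adjacency-set DAG."""
    graph = defaultdict(set)
    for cause, effect in relations:
        graph[cause].add(effect)
    return graph

def reachable(graph, start):
    """All constructs reachable from `start` via directed edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def candidate_hypotheses(relations):
    """Indirect links not yet reported directly: candidate new hypotheses."""
    graph = build_dag(relations)
    direct = set(relations)
    return {(a, c) for a in list(graph) for c in reachable(graph, a)
            if (a, c) not in direct}

reported = [("job demands", "stress"), ("stress", "burnout")]
new = candidate_hypotheses(reported)
# suggests ("job demands", "burnout") as an untested indirect link
```

A trained GNN would replace this transitive-closure heuristic with learned link prediction, but the DAG representation it operates on is the same.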
Research Questions- The proposed approach of extracting phenomena, hypotheses, and logical relationships from behavioral science papers with BERT and GNNs raises the following research questions:

  1. Can BERT and GNNs be used to accurately extract phenomena and hypotheses from behavioral science papers?
  2. How does the performance of BERT and GNNs for hypothesis generation compare to that of other NLP and machine learning approaches?
  3. Can the use of BERT and GNNs for hypothesis generation be extended to other domains beyond behavioral science?
  4. How can the insights gleaned from the extracted phenomena and hypotheses be used to generate new and novel hypotheses about behavior and psychological processes?
  5. How can the extracted phenomena and hypotheses be visualized in a way that is easily understandable and interpretable by researchers and practitioners in the field?
Work Packages- The proposed research will be conducted over a period of four years and will involve the following tasks:
Year 1:
The first year focuses on reviewing the literature, learning the prerequisites for the research, and developing the hypothesis extraction module. The first step in extracting constructs and other entities from hypotheses and causal relationships is to determine which sentences contain these elements. The sentences of each article are therefore provided to the model, which identifies those that state a hypothesis. A small amount of labeled data from our dataset will be used to fine-tune pre-trained BERT models; the steps are as follows:
• Data collection: A corpus of behavioral science papers will be collected and annotated with entities, relationships, and phenomena. This could be done through manual annotation by researchers or through the use of existing annotated datasets.
• Preprocessing: The collected papers will be preprocessed and formatted for input into the BERT and GNN models. This could involve tasks such as tokenization, stemming, and lemmatization.
• Training and evaluation: The BERT model will be trained on the annotated dataset, and its performance will be evaluated using standard evaluation metrics for NLP tasks, such as precision, recall, and F1 score.
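The evaluation step above can be illustrated with a short sketch computing precision, recall, and F1 for a binary "contains a hypothesis" sentence classifier. The gold and predicted labels are toy values, not real model output.

```python
def precision_recall_f1(gold, pred):
    """Standard binary classification metrics over parallel label lists."""
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

gold = [1, 0, 1, 1, 0, 0]   # 1 = sentence states a hypothesis
pred = [1, 1, 1, 0, 0, 0]   # toy classifier output
p, r, f = precision_recall_f1(gold, pred)
```

In practice a library implementation (e.g., scikit-learn's metrics module) would be used, but the definitions are exactly these.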
Years 2-4:
Model refinement: Based on the evaluation results from year 1, the BERT and GNN models will be fine-tuned and refined until satisfactory performance is achieved.
Hypothesis generation: Once the models are trained and performing well, they will be used to extract entities, relationships, and phenomena from new, unseen behavioral science papers. These extracted entities will then be used to generate new hypotheses about behavior and psychological processes.
Visualization: The extracted entities, relationships, and phenomena, as well as the generated hypotheses, will be visualized in a way that is easily understandable and interpretable by researchers and practitioners in the field.
Case studies: To demonstrate the utility and practicality of the proposed approach, case studies will be conducted in which the generated hypotheses are tested and validated through empirical research.
The expected outcomes of this research are:
• A system that can accurately extract and reason about entities, relationships, and phenomena in behavioral science papers using BERT and GNNs.
• New hypotheses about the mechanisms and factors that influence behavior, generated automatically from insights gleaned from existing research.
• Case studies demonstrating the utility and practicality of the proposed approach in generating new insights about behavior and psychological processes.

Supervisors
dr. Caspar van Lissa
dr. Bennett Kleinberg
prof. dr. Jeroen Vermunt

Financed by

Period
1 January 2023 – 1 January 2027