Hadi Mohammadi

Methodology & Statistics
Faculty of Social Sciences
Utrecht University

Project
Explainable NLP with Human-AI Collaboration in Social Science

Explainable AI (XAI) is concerned with generating explanations for AI models and their predictions (Adadi & Berrada, 2018). Researchers have investigated the benefits that explanations offer to humans, such as aiding human decision-making (Liu et al., 2023), improving human trust in AI (Jacovi & Goldberg, 2020), and educating people to undertake complicated tasks (Lai et al., 2020). Due to the complexity of the AI field, it is not easy to provide generally applicable explainability solutions for every subfield, such as data mining, computer vision, planning, optimization, and robotics. We focus on building explainable AI for Natural Language Processing (NLP), an area of AI that develops machine learning models to handle text inputs for specific purposes (e.g., classification, question answering, and information extraction).
NLP is commonly required in the medical domain because a substantial portion of the medical record consists of unstructured (free-text) clinical documentation that can contain any text. Moreover, human-AI collaboration is crucial to the success of many AI applications, and explanations can facilitate efficient collaboration between AI systems and humans. In addition, NLP, particularly text classification, has numerous beneficial applications, many of which can profit from explainability and human-AI collaboration (Salimeh et al., 2022).
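To make the idea of explainable text classification concrete, the following is a minimal sketch of token-level attribution using Naive Bayes log-odds scores. The data, labels, and function names are illustrative assumptions, not part of the project's actual pipeline; the point is only that each predicted label can be traced back to the contribution of individual words.

```python
# Minimal sketch: token-level explanations for a binary text classifier
# via Naive Bayes log-odds scores. Toy data and labels are illustrative only.
import math
from collections import Counter

def train_log_odds(docs, labels, alpha=1.0):
    """Return per-token log-odds scores: positive -> class 1, negative -> class 0."""
    counts = {0: Counter(), 1: Counter()}
    for text, y in zip(docs, labels):
        counts[y].update(text.lower().split())
    vocab = set(counts[0]) | set(counts[1])
    total = {y: sum(counts[y].values()) for y in (0, 1)}
    return {
        w: math.log((counts[1][w] + alpha) / (total[1] + alpha * len(vocab)))
         - math.log((counts[0][w] + alpha) / (total[0] + alpha * len(vocab)))
        for w in vocab
    }

def explain(text, scores):
    """Attribute a prediction to its tokens: (token, contribution) pairs,
    sorted by absolute contribution."""
    return sorted(((w, scores.get(w, 0.0)) for w in text.lower().split()),
                  key=lambda p: -abs(p[1]))

# Hypothetical toy corpus (1 = positive class, 0 = negative class)
docs = ["this model is bad", "a great clear result",
        "bad and unclear output", "great readable report"]
labels = [1, 0, 1, 0]
scores = train_log_odds(docs, labels)
print(explain("a bad report", scores))
```

The per-token contributions produced by `explain` are the kind of explanation a human collaborator can inspect and contest, which is the core loop this project studies at a much larger scale.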
These points motivated us to study Human-AI Collaboration based on Explainable Natural Language Processing (XNLP) in the medical domain and to develop a new intelligent and interpretable text generation model. An overview of the current study is shown in Figure 1.

Figure 1. Human-AI Collaboration based on Explainable NLP in Social Science
This proposal consists of four main topics, which fill gaps in the literature on Explainable NLP and Human-AI Collaboration in Social Science. The remainder of this report provides an overview of the background information needed to understand these topics.
• Project 1: Explainable NLP (XNLP): A Survey
• Project 2: Explainable Sexism Detection in Social Media
• Project 3: Explainability in Generative AI
• Project 4: Optimization of XNLP system

Supervisors
Dr. A.B. Bagheri
Dr. A.G. Giachanou
Prof. dr. D.O. Oberski

Period
February 2022 – February 2027