Zizhou Liu
2026
A Review of Incorporating Psychological Theories in LLMs
Zizhou Liu | Ziwei Gong | Lin Ai | Zheng Hui | Run Chen | Colin Wayne Leach | Michelle R. Greene | Julia Hirschberg
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Psychological insights have long shaped pivotal NLP breakthroughs, from attention mechanisms to reinforcement learning and social modeling. As Large Language Models (LLMs) develop, there is a rising consensus that psychology is essential for capturing human-like cognition, behavior, and interaction. This paper reviews how psychological theories can inform and enhance the stages of LLM development. Our review integrates insights from six subfields of psychology: cognitive, developmental, behavioral, social, and personality psychology, as well as psycholinguistics. Through stage-wise analysis, we highlight current trends and gaps in how psychological theories are applied. By examining both cross-domain connections and points of tension, we aim to bridge disciplinary divides and promote more thoughtful integration of psychology into NLP research.
2025
PropaInsight: Toward Deeper Understanding of Propaganda in Terms of Techniques, Appeals, and Intent
Jiateng Liu | Lin Ai | Zizhou Liu | Payam Karisani | Zheng Hui | Yi Fung | Preslav Nakov | Julia Hirschberg | Heng Ji
Proceedings of the 31st International Conference on Computational Linguistics
Propaganda plays a critical role in shaping public opinion and fueling disinformation. While existing research primarily focuses on identifying propaganda techniques, it lacks the ability to capture the broader motives and the impacts of such content. To address these challenges, we introduce PropaInsight, a conceptual framework grounded in foundational social science research, which systematically dissects propaganda into techniques, arousal appeals, and underlying intent. PropaInsight offers a more granular understanding of how propaganda operates across different contexts. Additionally, we present PropaGaze, a novel dataset that combines human-annotated data with high-quality synthetic data generated through a meticulously designed pipeline. Our experiments show that off-the-shelf LLMs struggle with propaganda analysis, but PropaGaze significantly improves performance. Fine-tuned Llama-7B-Chat achieves 203.4% higher text span IoU in technique identification and 66.2% higher BertScore in appeal analysis compared to 1-shot GPT-4-Turbo. Moreover, PropaGaze complements limited human-annotated data in data-sparse and cross-domain scenarios, demonstrating its potential for comprehensive and generalizable propaganda analysis.
2024
Defending Against Social Engineering Attacks in the Age of LLMs
Lin Ai | Tharindu Sandaruwan Kumarage | Amrita Bhattacharjee | Zizhou Liu | Zheng Hui | Michael S. Davinroy | James Cook | Laura Cassani | Kirill Trapeznikov | Matthias Kirchner | Arslan Basharat | Anthony Hoogs | Joshua Garland | Huan Liu | Julia Hirschberg
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Enhancing Pre-Trained Generative Language Models with Question Attended Span Extraction on Machine Reading Comprehension
Lin Ai | Zheng Hui | Zizhou Liu | Julia Hirschberg
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
2021
Quantifying the Effects of COVID-19 on Restaurant Reviews
Ivy Cao | Zizhou Liu | Giannis Karamanolakis | Daniel Hsu | Luis Gravano
Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media
The COVID-19 pandemic has implications beyond physical health, affecting society and economies. Government efforts to slow the spread of the virus have had a severe impact on many businesses, including restaurants. Mandatory policies such as restaurant closures, bans on social gatherings, and social distancing restrictions have affected restaurant operations as well as customer preferences (e.g., prompting demand for stricter hygiene standards). As of now, however, it is not clear how and to what extent the pandemic has affected restaurant reviews, an analysis of which could potentially inform policies for addressing this ongoing situation. In this work, we present our efforts to understand the effects of COVID-19 on restaurant reviews, with a focus on Yelp reviews produced during the pandemic for New York City and Los Angeles County restaurants. Overall, we make the following contributions. First, we assemble a dataset of 600 reviews with manual annotations of fine-grained COVID-19 aspects related to restaurants (e.g., hygiene practices, service changes, sympathy and support for local businesses). Second, we address COVID-19 aspect detection using supervised classifiers, weakly-supervised approaches based on keywords, and unsupervised topic modeling approaches, and experimentally show that classifiers based on pre-trained BERT representations achieve the best performance (F1=0.79). Third, we analyze the number and evolution of COVID-related aspects over time and show that the resulting time series have substantial correlation (Spearman’s ρ=0.84) with critical statistics related to the COVID-19 pandemic, including the number of new COVID-19 cases. To our knowledge, this is the first work analyzing the effects of COVID-19 on Yelp restaurant reviews, and it could potentially inform policies by public health departments, for example, regarding resource utilization.
Co-authors
- Lin Ai 4
- Julia Hirschberg 4
- Zheng Hui 4
- Arslan Basharat 1
- Amrita Bhattacharjee 1
- Ivy Cao 1
- Laura Cassani 1
- Run Chen 1
- James Cook 1
- Michael S. Davinroy 1
- Yi Fung 1
- Joshua Garland 1
- Ziwei Gong 1
- Luis Gravano 1
- Michelle R. Greene 1
- Anthony Hoogs 1
- Daniel Hsu 1
- Heng Ji 1
- Giannis Karamanolakis 1
- Payam Karisani 1
- Matthias Kirchner 1
- Tharindu Sandaruwan Kumarage 1
- Colin Wayne Leach 1
- Huan Liu (刘欢) 1
- Jiateng Liu 1
- Preslav Nakov 1
- Kirill Trapeznikov 1