Alex Wilf


2024

Think Twice: Perspective-Taking Improves Large Language Models’ Theory-of-Mind Capabilities
Alex Wilf | Sihyun Lee | Paul Pu Liang | Louis-Philippe Morency
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Human interactions are deeply rooted in the interplay of thoughts, beliefs, and desires made possible by Theory of Mind (ToM): our cognitive ability to understand the mental states of ourselves and others. Although ToM may come naturally to us, emulating it presents a challenge to even the most advanced Large Language Models (LLMs). Recent improvements to LLMs’ reasoning capabilities from simple yet effective prompting techniques such as Chain-of-Thought (CoT) have seen limited applicability to ToM. In this paper, we turn to the prominent cognitive science theory “Simulation Theory” to bridge this gap. We introduce SimToM, a novel two-stage prompting framework inspired by Simulation Theory’s notion of perspective-taking. To implement this idea on current ToM benchmarks, SimToM first filters context based on what the character in question knows before answering a question about their mental state. Our approach, which requires no additional training and minimal prompt-tuning, shows substantial improvement over existing methods, and our analysis reveals the importance of perspective-taking to Theory-of-Mind capabilities. Our findings suggest perspective-taking as a promising direction for future research into improving LLMs’ ToM capabilities.
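Since the abstract describes SimToM as a two-stage prompting framework (first filter the story to what the character knows, then answer the question from that filtered view), a minimal sketch of that idea is given below. It assumes a generic chat-completion helper ask_llm and invented prompt wording for illustration; it is not the paper's exact prompt or implementation.

# Minimal sketch of two-stage perspective-taking prompting, in the spirit of SimToM.
# `ask_llm` is a hypothetical wrapper around any chat-completion API, and the prompt
# text is illustrative only.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM chat-completion endpoint."""
    raise NotImplementedError("wire this to your LLM client of choice")

def simtom_answer(story: str, character: str, question: str) -> str:
    # Stage 1 (perspective-taking): keep only the events the character has witnessed.
    filtered_story = ask_llm(
        f"The following is a sequence of events:\n{story}\n\n"
        f"Which of these events does {character} know about? "
        f"Rewrite the story, keeping only those events."
    )
    # Stage 2 (question answering): answer from the character's filtered perspective.
    return ask_llm(
        f"{filtered_story}\n\n"
        f"Answer the following question about {character}'s mental state:\n{question}"
    )

The key design point carried over from the abstract is that no additional training is involved: both stages are ordinary prompts, and the second stage never sees the events the character could not have observed.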

2023

Difference-Masking: Choosing What to Mask in Continued Pretraining
Alex Wilf | Syeda Akter | Leena Mathur | Paul Liang | Sheryl Mathew | Mengrou Shou | Eric Nyberg | Louis-Philippe Morency
Findings of the Association for Computational Linguistics: EMNLP 2023

The self-supervised objective of masked prediction has led to promising performance gains on a variety of downstream tasks. However, while most approaches randomly mask tokens, there is strong intuition that deciding what to mask can substantially improve learning outcomes. We investigate this in the continued pretraining setting, in which pretrained models continue to pretrain on domain-specific data before performing some downstream task. We introduce Difference-Masking, a masking strategy that automatically chooses what to mask during continued pretraining by considering what makes a task domain different from the pretraining domain. Empirically, we find that Difference-Masking outperforms baselines in continued pretraining settings across four diverse language-only and multimodal video tasks.
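To make the "mask what is domain-different" idea concrete, here is a small hedged sketch. The domain-specificity score below (a smoothed relative-frequency ratio between a target-domain corpus and a general pretraining corpus) is an illustrative assumption, not the paper's exact scoring procedure; the function names are invented for this example.

# Hedged sketch of choosing what to mask by domain-specificity, in the spirit of
# Difference-Masking. The scoring heuristic is an assumption for illustration.

from collections import Counter

def domain_specificity(domain_tokens, general_tokens):
    """Score each token by how much more frequent it is in-domain than in general text."""
    dom = Counter(domain_tokens)
    gen = Counter(general_tokens)
    dom_total, gen_total = sum(dom.values()), sum(gen.values())
    return {
        tok: (dom[tok] / dom_total) / ((gen[tok] + 1) / (gen_total + 1))
        for tok in dom
    }

def choose_masks(sequence, scores, mask_budget=0.15):
    """Return the indices of the most domain-specific tokens, up to the mask budget."""
    k = max(1, int(len(sequence) * mask_budget))
    ranked = sorted(
        range(len(sequence)),
        key=lambda i: scores.get(sequence[i], 0.0),
        reverse=True,
    )
    return set(ranked[:k])  # positions to replace with [MASK] during continued pretraining

In use, the selected positions would replace the uniform-random mask sampling of standard masked-prediction pretraining, biasing the objective toward tokens that distinguish the task domain from the original pretraining data.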