Leena Mathur
2024
Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions
Leena Mathur | Paul Pu Liang | Louis-Philippe Morency
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal that involves creating agents that can sense, perceive, reason about, learn from, and respond to affect, behavior, and cognition of other agents (human or artificial). Progress towards Social-AI has accelerated in the past decade across several computing communities, including natural language processing, machine learning, robotics, human-machine interaction, computer vision, and speech. Natural language processing, in particular, has been prominent in Social-AI research, as language plays a key role in constructing the social world. In this position paper, we identify a set of underlying technical challenges and open questions for researchers across computing communities to advance Social-AI. We anchor our discussion in the context of social intelligence concepts and prior progress in Social-AI research.
Proceedings of the Sixth Workshop on Teaching NLP
Sana Al-azzawi | Laura Biester | György Kovács | Ana Marasović | Leena Mathur | Margot Mieskes | Leonie Weissweiler
Proceedings of the Sixth Workshop on Teaching NLP
2023
Difference-Masking: Choosing What to Mask in Continued Pretraining
Alex Wilf | Syeda Akter | Leena Mathur | Paul Liang | Sheryl Mathew | Mengrou Shou | Eric Nyberg | Louis-Philippe Morency
Findings of the Association for Computational Linguistics: EMNLP 2023
The self-supervised objective of masked prediction has led to promising performance gains on a variety of downstream tasks. However, while most approaches randomly mask tokens, there is strong intuition that deciding what to mask can substantially improve learning outcomes. We investigate this in the continued pretraining setting, in which pretrained models continue to pretrain on domain-specific data before performing some downstream task. We introduce Difference-Masking, a masking strategy that automatically chooses what to mask during continued pretraining by considering what makes a task domain different from the pretraining domain. Empirically, we find that Difference-Masking outperforms baselines in continued pretraining settings across four diverse language-only and multimodal video tasks.
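To make the idea of domain-aware masking concrete, below is a minimal sketch of one way a difference-based masking criterion could look: tokens are scored by a simple relative-frequency contrast between a task-domain corpus and a reference pretraining corpus, and the highest-scoring tokens are masked instead of masking uniformly at random. The function names, the frequency-ratio heuristic, and the toy corpora are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Illustrative sketch only: a frequency-contrast heuristic for choosing what
# to mask, assumed here for exposition; it is not the Difference-Masking
# algorithm from the paper.
from collections import Counter

def domain_difference_scores(task_docs, pretrain_docs, smoothing=1.0):
    """Score each token by how much more frequent it is in the task-domain
    corpus than in the reference pretraining corpus."""
    task_counts = Counter(tok for doc in task_docs for tok in doc.split())
    pre_counts = Counter(tok for doc in pretrain_docs for tok in doc.split())
    task_total = sum(task_counts.values()) or 1
    pre_total = sum(pre_counts.values()) or 1
    return {
        tok: ((task_counts[tok] + smoothing) / task_total)
             / ((pre_counts[tok] + smoothing) / pre_total)
        for tok in task_counts
    }

def choose_mask_positions(tokens, scores, mask_ratio=0.15):
    """Mask the positions whose tokens are most distinctive of the task
    domain, rather than sampling positions uniformly at random."""
    n_mask = max(1, int(len(tokens) * mask_ratio))
    ranked = sorted(range(len(tokens)),
                    key=lambda i: scores.get(tokens[i], 0.0),
                    reverse=True)
    return set(ranked[:n_mask])

# Toy usage: clinical-style task text contrasted with generic text.
task_corpus = ["patient presents with acute dyspnea and tachycardia"]
pretrain_corpus = ["the cat sat on the mat", "she went to the store"]
scores = domain_difference_scores(task_corpus, pretrain_corpus)
tokens = task_corpus[0].split()
masked = choose_mask_positions(tokens, scores)
print([("[MASK]" if i in masked else t) for i, t in enumerate(tokens)])
```

Under this toy heuristic, domain-specific terms (e.g., "dyspnea") receive higher scores than common words and are therefore masked preferentially, which is the general intuition the abstract describes.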