Scott Fujimoto
2026
Imbalanced Gradients in RL Post-Training of Multi-Task LLMs
Runzhe Wu | Ankur Samanta | Ayush Jain | Scott Fujimoto | Jeongyeol Kwon | Ben Kretzu | Youliang Yu | Kaveh Hassani | Boris Vidolov | Yonathan Efroni
Findings of the Association for Computational Linguistics: EACL 2026
Multi-task post-training of large language models (LLMs) is typically performed by mixing datasets from different tasks and optimizing them jointly. This approach implicitly assumes that all tasks contribute gradients of similar magnitudes. In this paper, we show that this assumption fails in RL post-training: certain tasks produce significantly larger gradients, biasing updates toward those tasks. Such gradient imbalance would be justified only if larger gradients implied larger learning gains on the corresponding tasks (i.e., larger performance improvements), but we find this is not the case: large-gradient tasks can achieve similar or even much lower learning gains than small-gradient ones. Further analysis reveals that these gradient imbalances cannot be explained by typical training statistics such as training rewards or advantages, suggesting that they arise from inherent differences between tasks. This cautions against naive dataset mixing and calls for future work on principled gradient-level corrections for LLMs.
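A minimal sketch of the kind of per-task gradient-norm comparison the abstract describes, assuming a PyTorch-style setup; the toy model, the synthetic task batches, and the input scaling used to induce imbalance are illustrative stand-ins, not the paper's actual training pipeline:

```python
# Hypothetical sketch: backprop each task's loss separately on a shared
# model and compare gradient magnitudes. All names and scales below are
# illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(16, 4)  # toy stand-in for an LLM
loss_fn = nn.CrossEntropyLoss()

def task_batch(scale):
    """Fake per-task data; `scale` mimics task-dependent input statistics."""
    x = torch.randn(32, 16)
    y = torch.randint(0, 4, (32,))
    return x * scale, y

# Larger input scale -> larger logits -> typically larger gradients,
# so "task_b" dominates a naively mixed update.
for name, scale in [("task_a", 1.0), ("task_b", 5.0)]:
    x, y = task_batch(scale)
    model.zero_grad()
    loss_fn(model(x), y).backward()
    grad_norm = torch.norm(
        torch.cat([p.grad.flatten() for p in model.parameters()])
    ).item()
    print(f"{name}: loss-gradient norm = {grad_norm:.3f}")
```

In a mixed-batch update, the task with the larger per-task gradient norm contributes disproportionately to the parameter step, which is the bias the abstract cautions about.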
2018
Sentiment Analysis: It’s Complicated!
Kian Kenyon-Dean | Eisha Ahmed | Scott Fujimoto | Jeremy Georges-Filteau | Christopher Glasz | Barleen Kaur | Auguste Lalande | Shruti Bhanderi | Robert Belfer | Nirmal Kanagasabai | Roman Sarrazin-Gendron | Rohit Verma | Derek Ruths
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
Sentiment analysis is used as a proxy to measure human emotion, where the objective is to categorize text according to some predefined notion of sentiment. Sentiment analysis datasets are typically constructed with gold-standard sentiment labels, assigned based on the results of manual annotations. When working with such annotations, it is common for dataset constructors to discard "noisy" or "controversial" data where there is significant disagreement on the proper label. In datasets constructed for the purpose of Twitter sentiment analysis (TSA), these controversial examples can constitute over 30% of the originally annotated data. We argue that the removal of such data is a problematic trend because, when performing real-time sentiment classification of short texts, an automated system cannot know a priori which samples would fall into this category of disputed sentiment. We therefore propose the notion of a "complicated" class of sentiment to categorize such text, and argue that its inclusion in the short-text sentiment analysis framework will improve the quality of automated sentiment analysis systems as they are deployed in real-world settings. We motivate this argument by building and analyzing a new publicly available TSA dataset of over 7,000 tweets annotated with 5x coverage, named MTSA. Our analysis of classifier performance on this dataset offers insights into the design of sentiment analysis datasets and models, how current techniques would perform in the real world, and how researchers should handle difficult data.
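A minimal sketch of how a "complicated" class could be derived from 5x-coverage annotations, as the abstract proposes; the agreement threshold, the label set, and the aggregate_label helper are hypothetical choices for illustration, not MTSA's documented annotation protocol:

```python
# Hypothetical sketch: aggregate five annotator labels per tweet into a
# gold label, falling back to "complicated" when annotators disagree too
# much to trust a single label. Threshold and labels are assumptions.
from collections import Counter

def aggregate_label(annotations, min_agreement=4):
    """Return the majority sentiment if at least `min_agreement` of the
    annotators agree; otherwise assign the 'complicated' class instead
    of discarding the example."""
    label, count = Counter(annotations).most_common(1)[0]
    return label if count >= min_agreement else "complicated"

print(aggregate_label(["pos", "pos", "pos", "pos", "neg"]))   # -> pos
print(aggregate_label(["pos", "neg", "neg", "neu", "pos"]))   # -> complicated
```

Keeping disagreement-heavy examples as an explicit class, rather than dropping them, is what lets a deployed classifier flag disputed sentiment instead of forcing a confident but unreliable label.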