Steven Lu
2024
Academics Can Contribute to Domain-Specialized Language Models
Mark Dredze | Genta Indra Winata | Prabhanjan Kambadur | Shijie Wu | Ozan Irsoy | Steven Lu | Vadim Dabravolski | David S Rosenberg | Sebastian Gehrmann
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Commercially available models dominate academic leaderboards. While impressive, this has concentrated research on creating and adapting general-purpose models to improve NLP leaderboard standings for large language models. However, leaderboards collect many individual tasks and general-purpose models often underperform in specialized domains; domain-specific or adapted models yield superior results. This focus on large general-purpose models excludes many academics and draws attention away from areas where they can make important contributions. We advocate for a renewed focus on developing and evaluating domain- and task-specific models, and highlight the unique role of academics in this endeavor.
2023
MixCE: Training Autoregressive Language Models by Mixing Forward and Reverse Cross-Entropies
Shiyue Zhang | Shijie Wu | Ozan Irsoy | Steven Lu | Mohit Bansal | Mark Dredze | David Rosenberg
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Autoregressive language models are trained by minimizing the cross-entropy of the model distribution Q relative to the data distribution P – that is, minimizing the forward cross-entropy, which is equivalent to maximum likelihood estimation (MLE). We have observed that models trained in this way may “over-generalize”, in the sense that they produce non-human-like text. Moreover, we believe that reverse cross-entropy, i.e., the cross-entropy of P relative to Q, is a better reflection of how a human would evaluate text generated by a model. Hence, we propose learning with MixCE, an objective that mixes the forward and reverse cross-entropies. We evaluate models trained with this objective on synthetic data settings (where P is known) and real data, and show that the resulting models yield better generated text without complex decoding strategies.
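In symbols (a sketch based only on the abstract; the mixing weight $\eta$ is notation introduced here, and the paper replaces the intractable reverse term with a tractable approximation, since $P$ is not known in practice): the MixCE objective can be written as $\mathcal{L}_{\mathrm{MixCE}}(Q) = \eta\, H(P, Q) + (1 - \eta)\, H(Q, P)$, where $H(P, Q) = -\mathbb{E}_{x \sim P}[\log Q(x)]$ is the forward cross-entropy minimized by MLE and $H(Q, P) = -\mathbb{E}_{x \sim Q}[\log P(x)]$ is the reverse cross-entropy.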