Rashi Rungta
2026
Aligning Paralinguistic Understanding and Generation in Speech LLMs via Multi-Task Reinforcement Learning
Minseok Kim | Jingxiang Chen | Seong-Gyun Leem | Yin Huang | Rashi Rungta | Zhicheng Ouyang | Haibin Wu | Surya Teja Appini | Ankur Bansal | Yang Bai | Yue Liu | Florian Metze | Ahmed A Aly | Anuj Kumar | Ariya Rastrow | Zhaojiang Lin
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 5: Industry Track)
Speech large language models (LLMs) perceive paralinguistic cues such as prosody, emotion, and non-verbal sounds, which are crucial for intent understanding. However, leveraging these cues faces challenges: limited training data, annotation difficulty, and models exploiting lexical shortcuts over paralinguistic signals. We propose multi-task reinforcement learning (RL) with chain-of-thought prompting that elicits explicit affective reasoning. To address data scarcity, we introduce a paralinguistics-aware speech LLM (PALLM) that jointly optimizes sentiment classification from audio and paralinguistics-aware response generation via a two-stage pipeline. Experiments demonstrate that our approach improves paralinguistic understanding over both supervised baselines and strong proprietary models (Gemini-2.5-Pro, GPT-4o-audio) by 8-12% on Expresso, IEMOCAP, and RAVDESS. The results show that modeling paralinguistic reasoning with multi-task RL is crucial for building emotionally intelligent speech LLMs.
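As a rough illustration of the multi-task setup the abstract describes, a minimal sketch of how an RL reward could combine an emotion-recognition term with a response-quality term follows. The function name, weights, and scorer interface are assumptions chosen for illustration, not the paper's actual reward design.

```python
# Illustrative sketch only: a multi-task reward of the general kind an RL stage
# might use, combining an emotion-label match term with a response-quality term.
# Names, weights, and the scorer interface are assumptions, not the paper's design.
def multi_task_reward(predicted_emotion: str,
                      gold_emotion: str,
                      response: str,
                      response_scorer,
                      w_emotion: float = 0.5,
                      w_response: float = 0.5) -> float:
    # Reward correct affect recognition so the model cannot rely on lexical shortcuts alone.
    emotion_reward = 1.0 if predicted_emotion == gold_emotion else 0.0
    # Reward paralinguistics-aware responses, e.g. via a judge returning a score in [0, 1].
    response_reward = response_scorer(response)
    return w_emotion * emotion_reward + w_response * response_reward

# Toy usage with a trivial stand-in scorer.
print(multi_task_reward("happy", "happy", "Glad to hear that!", lambda r: 0.8))  # 0.9
```

Weighting the two terms keeps the classification signal from being drowned out by the generation objective, which is one way to discourage purely lexical solutions.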
2024
Effective Long-Context Scaling of Foundation Models
Wenhan Xiong | Jingyu Liu | Igor Molybog | Hejia Zhang | Prajjwal Bhargava | Rui Hou | Louis Martin | Rashi Rungta | Karthik Abinav Sankararaman | Barlas Oguz | Madian Khabsa | Han Fang | Yashar Mehdad | Sharan Narang | Kshitiz Malik | Angela Fan | Shruti Bhosale | Sergey Edunov | Mike Lewis | Sinong Wang | Hao Ma
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
We present an effective recipe to train strong long-context LLMs that are capable of utilizing massive context windows of up to 32,000 tokens. Our models are built through continual pretraining from Llama 2 checkpoints with longer text sequences and on a dataset where long texts are upsampled. We perform extensive evaluation using language modeling, synthetic context probing tasks, and a wide range of downstream benchmarks. Across all evaluations, our models achieve consistent improvements on most regular-context tasks and significant improvements on long-context tasks over Llama 2. Moreover, with a cost-effective instruction tuning procedure that is free of expensive annotation, the presented models can already surpass gpt-3.5-turbo-16k's overall performance on long-context benchmarks. Alongside these results, we provide an in-depth analysis of each individual component of our method. We delve into Llama's position encodings and discuss their key limitation in modeling long data. We examine the impact of various design choices in the pretraining process, including the data mix and the training curriculum of sequence lengths; ablation results suggest that having abundant long texts in the pretraining dataset is not the key to achieving strong performance, and we empirically verify that long-context continual pretraining is more efficient and similarly effective compared to pretraining from scratch with long sequences.
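One way the position-encoding limitation mentioned above can be made concrete is by looking at how rotary (RoPE) angles grow with token distance, since raising the RoPE base frequency is a common adaptation for longer contexts in related work. The sketch below compares the slowest-varying rotary angle at a 32,000-token offset for two base values; the specific numbers are assumptions for illustration, not the paper's hyperparameters.

```python
# Illustrative sketch only: larger RoPE base frequencies rotate the
# slowest-varying dimensions less over long distances, one mechanism behind
# adapting position encodings to longer contexts. Base values are illustrative.
import math

def rope_inv_freq(dim: int, base: float):
    """Inverse frequencies for rotary position embeddings (pairs of dims)."""
    return [base ** (-2 * i / dim) for i in range(dim // 2)]

dim = 128
for base in (10_000.0, 500_000.0):
    inv = rope_inv_freq(dim, base)
    # Rotation angle of the slowest-varying dimension at position 32,000.
    angle = 32_000 * inv[-1]
    print(f"base={base:>9.0f}  slowest angle at pos 32k = {angle:.4f} rad")
```

With the larger base, distant positions are rotated far less in the low-frequency dimensions, so attention between far-apart tokens is not attenuated as aggressively.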
2019
Sieg at MEDIQA 2019: Multi-task Neural Ensemble for Biomedical Inference and Entailment
Sai Abishek Bhaskar | Rashi Rungta | James Route | Eric Nyberg | Teruko Mitamura
Proceedings of the 18th BioNLP Workshop and Shared Task
This paper presents a multi-task learning approach to natural language inference (NLI) and recognizing question entailment (RQE) in the biomedical domain. Recognizing textual inference relations and question similarity can address the issue of answering new consumer health questions by mapping them to Frequently Asked Questions on reputable websites like the NIH. We show that leveraging information from parallel tasks across domains, along with medical knowledge integration, allows our model to learn better biomedical feature representations. Our final models for the NLI and RQE tasks achieve the 4th and 2nd rank on the shared-task leaderboard, respectively.
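For a sense of the generic shape such a multi-task setup can take, here is a minimal shared-encoder, two-head sketch. The layer sizes, class counts, and names are assumptions for illustration, not the ensemble the paper describes.

```python
# Illustrative sketch only: a shared encoder feeding separate NLI and RQE heads,
# the generic structure of multi-task learning over related sentence-pair tasks.
# Sizes and names are assumptions, not the paper's system.
import torch
import torch.nn as nn

class SharedMultiTaskModel(nn.Module):
    def __init__(self, hidden: int = 256, nli_classes: int = 3, rqe_classes: int = 2):
        super().__init__()
        # Stand-in for a shared sentence-pair encoder (e.g. a pretrained transformer).
        self.encoder = nn.Sequential(nn.Linear(768, hidden), nn.ReLU())
        self.nli_head = nn.Linear(hidden, nli_classes)  # entailment / neutral / contradiction
        self.rqe_head = nn.Linear(hidden, rqe_classes)  # question entails / does not entail

    def forward(self, pooled_pair_repr: torch.Tensor, task: str) -> torch.Tensor:
        h = self.encoder(pooled_pair_repr)
        return self.nli_head(h) if task == "nli" else self.rqe_head(h)

# Toy usage with a random pooled sentence-pair representation.
model = SharedMultiTaskModel()
print(model(torch.randn(2, 768), task="rqe").shape)  # torch.Size([2, 2])
```

Sharing the encoder across both tasks is what lets signal from one task (and from out-of-domain parallel tasks) improve the feature representations used by the other.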
Co-authors
- Ahmed A Aly 1
- Surya Teja Appini 1
- Yang Bai 1
- Ankur Bansal 1
- Prajjwal Bhargava 1
- Sai Abishek Bhaskar 1
- Shruti Bhosale 1
- Jingxiang Chen 1
- Sergey Edunov 1
- Angela Fan 1
- Han Fang 1
- Rui Hou 1
- Yin Huang 1
- Madian Khabsa 1
- Minseok Kim 1
- Anuj Kumar 1
- Seong-Gyun Leem 1
- Mike Lewis 1
- Zhaojiang Lin 1
- Jingyu Liu 1
- Yue Liu 1
- Hao Ma 1
- Kshitiz Malik 1
- Louis Martin 1
- Yashar Mehdad 1
- Florian Metze 1
- Teruko Mitamura 1
- Igor Molybog 1
- Sharan Narang 1
- Eric Nyberg 1
- Barlas Oguz 1
- Zhicheng Ouyang 1
- Ariya Rastrow 1
- James Route 1
- Karthik Abinav Sankararaman 1
- Sinong Wang 1
- Haibin Wu 1
- Wenhan Xiong 1
- Hejia Zhang 1