Jie Cao
Utah, Oklahoma
2025
Enhancing Talk Moves Analysis in Mathematics Tutoring through Classroom Teaching Discourse
Jie Cao | Abhijit Suresh | Jennifer Jacobs | Charis Clevenger | Amanda Howard | Chelsea Brown | Brent Milne | Tom Fischaber | Tamara Sumner | James H. Martin
Proceedings of the 31st International Conference on Computational Linguistics
Human tutoring interventions play a crucial role in supporting student learning, improving academic performance, and promoting personal growth. This paper focuses on analyzing mathematics tutoring discourse using talk moves—a framework of dialogue acts grounded in Accountable Talk theory. However, scaling the collection, annotation, and analysis of extensive tutoring dialogues to develop machine learning models is a challenging and resource-intensive task. To address this, we present SAGA22, a compact dataset, and explore various modeling strategies, including dialogue context, speaker information, pretraining datasets, and further fine-tuning. By leveraging existing datasets and models designed for classroom teaching, our results demonstrate that supplementary pretraining on classroom data enhances model performance in tutoring settings, particularly when incorporating longer context and speaker information. Additionally, we conduct extensive ablation studies to underscore the challenges in talk move modeling.
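As a rough illustration of how the dialogue context and speaker information described above can be folded into a talk-move classifier's input, here is a minimal Python sketch; the speaker tags, window size, and example turns are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of context- and speaker-aware input construction for a
# talk-move classifier. Tags and window size are illustrative assumptions.
def build_classifier_input(dialogue, index, context_window=3):
    """Flatten the target utterance plus its preceding turns into a single
    string, tagging each turn with its speaker so the encoder can condition
    on who is talking (tutor vs. student)."""
    start = max(0, index - context_window)
    window = dialogue[start:index + 1]
    return " ".join(f"[{speaker.upper()}] {text}" for speaker, text in window)

dialogue = [
    ("tutor", "What do we know about these two triangles?"),
    ("student", "They have the same angles."),
    ("tutor", "Can you say more about why the angles matter?"),
]
print(build_classifier_input(dialogue, index=2))
# [TUTOR] What do we know ... [STUDENT] They have ... [TUTOR] Can you say ...
```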
Do LLMs Encode Frame Semantics? Evidence from Frame Identification
Jayanth Krishna Chundru | Rudrashis Poddar | Jie Cao | Tianyu Jiang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
We investigate whether large language models encode latent knowledge of frame semantics, focusing on frame identification, a core challenge in frame semantic parsing that involves selecting the appropriate semantic frame for a target word in context. Using the FrameNet lexical resource, we evaluate models under prompt-based inference and observe that they can perform frame identification effectively even without explicit supervision. To assess the impact of task-specific training, we fine-tune the models on FrameNet data, which substantially improves in-domain accuracy while generalizing well to out-of-domain benchmarks. Further analysis shows that the models can generate semantically coherent frame definitions, highlighting their internalized understanding of frame semantics.
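The prompt-based inference setup can be pictured with a small sketch like the following; the prompt wording is an assumption for illustration, not the template evaluated in the paper, though Commerce_buy and Getting are genuine FrameNet frames.

```python
# A minimal sketch of prompt-based frame identification: the model is asked
# to pick one FrameNet frame for a target word in context.
def frame_id_prompt(sentence, target, candidate_frames):
    """Format a zero-shot frame-identification query from candidate frames."""
    options = "\n".join(f"- {name}: {gloss}" for name, gloss in candidate_frames)
    return (f"Sentence: {sentence}\n"
            f"Target word: {target}\n"
            f"Candidate frames:\n{options}\n"
            f"Answer with the single best frame name.")

candidates = [
    ("Commerce_buy", "A Buyer acquires Goods from a Seller in exchange for Money."),
    ("Getting", "A Recipient comes into possession of a Theme."),
]
print(frame_id_prompt("She bought a used car last week.", "bought", candidates))
```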
OUNLP at TSAR 2025 Shared Task: Multi-Round Text Simplifier via Code Generation
Cuong Huynh | Jie Cao
Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025)
This paper describes the system submission of our team OUNLP to the TSAR-2025 shared task on readability-controlled text simplification. Based on an analysis of prompt-based text simplification methods, we found that simplification performance is highly related to the gap between the source CEFR level and the target CEFR level. Inspired by this finding, we propose two multi-round simplification methods: GPT-4o rule-based simplification (MRS-Rule) and joint rule-based LLM simplification (MRS-Joint). Our submitted systems ranked 7th out of 20 teams. Later improvements with MRS-Joint show that taking the LLM-simplified candidates as the starting point can further boost multi-round simplification performance.
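The multi-round idea can be sketched as a loop that keeps simplifying until the estimated CEFR level reaches the target or a round budget runs out. In the sketch below, simplify and estimate_cefr are hypothetical stand-ins for an LLM call and a readability classifier; they are not the submitted system.

```python
# A minimal sketch of CEFR-gap-driven multi-round simplification.
CEFR = ["A1", "A2", "B1", "B2", "C1", "C2"]

def simplify(text, target_level):
    return text.replace("utilize", "use")   # placeholder for an LLM call

def estimate_cefr(text):
    return "B1"                             # placeholder readability model

def multi_round_simplify(text, source_level, target_level, max_rounds=5):
    level = source_level
    for _ in range(max_rounds):
        if CEFR.index(level) <= CEFR.index(target_level):
            break                           # target level reached: stop early
        text = simplify(text, target_level)
        level = estimate_cefr(text)         # re-estimate the level each round
    return text

print(multi_round_simplify("Please utilize the apparatus.", "C1", "B1"))
```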
2023
Mind the Gap between the Application Track and the Real World
Ananya Ganesh | Jie Cao | E. Margaret Perkoff | Rosy Southwell | Martha Palmer | Katharina Kann
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Recent advances in NLP have led to a rise in inter-disciplinary and application-oriented research. While this demonstrates the growing real-world impact of the field, research papers frequently feature experiments that do not account for the complexities of realistic data and environments. To explore the extent of this gap, we investigate the relationship between the real-world motivations described in NLP papers and the models and evaluation which comprise the proposed solution. We first survey papers from the NLP Applications track of ACL 2020 and EMNLP 2020, asking which papers have differences between their stated motivation and their experimental setting and, if so, whether those differences are acknowledged. We find that many papers fall short of considering real-world input and output conditions because they adopt simplified modeling or evaluation settings. As a case study, we then empirically show that the performance of an educational dialog understanding system deteriorates when used in a realistic classroom environment.
Comparing Neural Question Generation Architectures for Reading Comprehension
E. Margaret Perkoff | Abhidip Bhattacharyya | Jon Cai | Jie Cao
Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023)
In recent decades, there has been a significant push to leverage technology to aid both teachers and students in the classroom. Language processing advancements have been harnessed to provide better tutoring services, automated feedback to teachers, improved peer-to-peer feedback mechanisms, and measures of student comprehension for reading. Automated question generation systems have the potential to significantly reduce teachers’ workload in the latter. In this paper, we compare three different neural architectures for question generation across two types of reading material: narratives and textbooks. For each architecture, we explore the benefits of including question attributes in the input representation. Our models show that a T5 architecture has the best overall performance, with a RougeL score of 0.536 on a narrative corpus and 0.316 on a textbook corpus. We break down the results by attribute and discover that the attribute can improve the quality of some types of generated questions, including Action and Character, but this is not true for all models.
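Conditioning on question attributes in a text-to-text model can be pictured as prepending a control field to the input, as in the minimal sketch below; the field names ("attribute:", "generate question:") are illustrative assumptions, not the exact templates used in the paper.

```python
# A minimal sketch of attribute-conditioned input formatting for a T5-style
# question generator.
def qg_input(context, attribute=None):
    """Compose a text-to-text input; an optional question attribute such as
    Action or Character is prepended as a control field."""
    prefix = f"attribute: {attribute} " if attribute else ""
    return f"{prefix}generate question: {context}"

print(qg_input("The fox leapt over the sleeping dog.", attribute="Action"))
# attribute: Action generate question: The fox leapt over the sleeping dog.
```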
2021
A Comparative Study on Schema-Guided Dialogue State Tracking
Jie Cao | Yi Zhang
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Frame-based state representation is widely used in modern task-oriented dialog systems to model user intentions and slot values. However, a fixed design of domain ontology makes it difficult to extend to new services and APIs. Recent work proposed to use natural language descriptions to define the domain ontology instead of tag names for each intent or slot, thus offering a dynamic set of schemas. In this paper, we conduct in-depth comparative studies to understand the use of natural language descriptions for schemas in dialog state tracking. Our discussion mainly covers three aspects: encoder architectures, impact of supplementary training, and effective schema description styles. We introduce a set of newly designed benchmarking descriptions and reveal model robustness on both homogeneous and heterogeneous description styles in training and evaluation.
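The core description-driven idea can be sketched as a dual encoder that scores each slot's natural-language description against the utterance, so new slots only require new descriptions. The hashing tokenizer, sizes, and shared encoder below are toy stand-ins, not the architectures compared in the paper.

```python
import torch
import torch.nn as nn

# A toy dual-encoder sketch of description-driven dialog state tracking.
class SchemaScorer(nn.Module):
    def __init__(self, vocab_size=10_000, dim=64):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)  # mean-pooled bag of words

    def encode(self, texts):
        ids = [[hash(w) % 10_000 for w in t.lower().split()] for t in texts]
        maxlen = max(map(len, ids))
        ids = [row + [0] * (maxlen - len(row)) for row in ids]  # pad rows
        return self.emb(torch.tensor(ids))

    def forward(self, utterance, slot_descriptions):
        u = self.encode([utterance])            # (1, dim)
        s = self.encode(slot_descriptions)      # (num_slots, dim)
        return (s @ u.t()).squeeze(-1)          # one relevance score per slot

scorer = SchemaScorer()
scores = scorer("book a table for two at seven",
                ["number of people for the reservation",
                 "time of the restaurant booking",
                 "name of the city"])
print(scores)  # untrained scores; training would align slots with mentions
```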
2019
Amazon at MRP 2019: Parsing Meaning Representations with Lexical and Phrasal Anchoring
Jie Cao | Yi Zhang | Adel Youssef | Vivek Srikumar
Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning
This paper describes the system submission of our team Amazon to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference on Computational Natural Language Learning (CoNLL). Via extensive analysis of implicit alignments in AMR, we recategorize five meaning representations (MRs) into two classes: Lexical-Anchoring and Phrasal-Anchoring. We then propose a unified graph-based parsing framework for the lexical-anchoring MRs and a phrase-structure parsing approach for one of the phrasal-anchoring MRs, UCCA. Our system submission ranked 1st in the AMR subtask, and later improvements show promising results on other frameworks as well.
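The generic mechanism behind graph-based parsing of lexical-anchoring MRs can be pictured as scoring every (head, dependent) token pair, as in the minimal biaffine sketch below; the dimensions and greedy decoding are simplified stand-ins, not the submitted system.

```python
import torch
import torch.nn as nn

# A minimal biaffine arc scorer of the kind used in graph-based parsing:
# every (head, dependent) token pair receives an edge score.
class BiaffineArcScorer(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) * 0.01)

    def forward(self, H):            # H: (seq_len, dim) token encodings
        return H @ self.W @ H.t()    # (seq_len, seq_len) edge scores

H = torch.randn(6, 128)              # e.g. contextual encoder output
scores = BiaffineArcScorer()(H)
heads = scores.argmax(dim=0)         # greedy head choice per token
print(heads)                         # a real parser would decode a valid graph
```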
Rhetorically Controlled Encoder-Decoder for Modern Chinese Poetry Generation
Zhiqiang Liu | Zuohui Fu | Jie Cao | Gerard de Melo | Yik-Cheung Tam | Cheng Niu | Jie Zhou
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Rhetoric is a vital element in modern poetry, and plays an essential role in improving its aesthetics. However, to date, it has not been considered in research on automatic poetry generation. In this paper, we propose a rhetorically controlled encoder-decoder for modern Chinese poetry generation. Our model relies on a continuous latent variable as a rhetoric controller to capture various rhetorical patterns in an encoder, and then incorporates rhetoric-based mixtures while generating modern Chinese poetry. For metaphor and personification, an automated evaluation shows that our model outperforms state-of-the-art baselines by a substantial margin, while human evaluation shows that our model generates better poems than baseline methods in terms of fluency, coherence, meaningfulness, and rhetorical aesthetics.
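The controller idea can be rendered as mixing mode-specific output layers according to an inferred distribution over rhetorical modes, as sketched below. The real model uses a continuous latent variable; this discrete mixture, and all sizes, are only illustrative assumptions about the mechanism.

```python
import torch
import torch.nn as nn

# A toy rhetoric-mixed decoder head: infer soft weights over rhetorical
# modes (say metaphor, personification, none) from the decoder state and
# blend mode-specific next-token distributions accordingly.
class RhetoricMixedDecoderHead(nn.Module):
    def __init__(self, dim=64, vocab=500, n_modes=3):
        super().__init__()
        self.mode_head = nn.Linear(dim, n_modes)
        self.out = nn.ModuleList(nn.Linear(dim, vocab) for _ in range(n_modes))

    def forward(self, h):                         # h: (dim,) decoder state
        w = torch.softmax(self.mode_head(h), dim=-1)             # mode weights
        logits = torch.stack([layer(h) for layer in self.out])   # (modes, vocab)
        return (w.unsqueeze(-1) * logits).sum(dim=0)  # rhetoric-mixed logits

head = RhetoricMixedDecoderHead()
next_token_logits = head(torch.randn(64))
print(next_token_logits.shape)  # torch.Size([500])
```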
Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral Codes
Jie Cao | Michael Tanana | Zac Imel | Eric Poitras | David Atkins | Vivek Srikumar
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Automatically analyzing dialogue can help understand and guide behavior in domains such as counseling, where interactions are largely mediated by conversation. In this paper, we study modeling behavioral codes used to assess a psychotherapy treatment style called Motivational Interviewing (MI), which is effective for addressing substance abuse and related problems. Specifically, we address the problem of providing real-time guidance to therapists with a dialogue observer that (1) categorizes therapist and client MI behavioral codes and (2) forecasts codes for upcoming utterances to help guide the conversation and potentially alert the therapist. For both tasks, we define neural network models that build upon recent successes in dialogue modeling. Our experiments demonstrate that our models can outperform several baselines for both tasks. We also report the results of a careful analysis that reveals the impact of the various network design tradeoffs for modeling therapy dialogue.
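The key contrast between the two tasks is what the model conditions on: categorization sees the utterance it labels, while forecasting must guess the code of an utterance not yet spoken. The sketch below makes that difference concrete; the shared GRU encoder and all sizes are illustrative assumptions, not the paper's models.

```python
import torch
import torch.nn as nn

# A minimal sketch contrasting categorization and forecasting of MI codes.
class MICodeModel(nn.Module):
    def __init__(self, dim=64, n_codes=8):
        super().__init__()
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.categorize = nn.Linear(dim, n_codes)  # label the seen utterance
        self.forecast = nn.Linear(dim, n_codes)    # guess the upcoming code

    def forward(self, history, current=None):
        # history: (1, n_turns, dim) utterance vectors; current: (1, 1, dim)
        if current is not None:                    # categorization task
            seq = torch.cat([history, current], dim=1)
            _, h = self.gru(seq)
            return self.categorize(h[-1])
        _, h = self.gru(history)                   # forecasting task
        return self.forecast(h[-1])

model = MICodeModel()
hist, cur = torch.randn(1, 5, 64), torch.randn(1, 1, 64)
label_logits = model(hist, cur)   # code of the current utterance
next_logits = model(hist)         # code of the not-yet-spoken next utterance
```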
Co-authors
- E. Margaret Perkoff 2
- Vivek Srikumar 2
- Yi Zhang 2
- David Atkins 1
- Abhidip Bhattacharyya 1
- Chelsea Brown 1
- Jon Cai 1
- Jayanth Krishna Chundru 1
- Charis Clevenger 1
- Gerard de Melo 1
- Tom Fischaber 1
- Zuohui Fu 1
- Ananya Ganesh 1
- Amanda Howard 1
- Cuong Huynh 1
- Zac Imel 1
- Jennifer Jacobs 1
- Tianyu Jiang 1
- Zhiqiang Liu (刘志强) 1
- James H. Martin 1
- Brent Milne 1
- Cheng Niu 1
- Martha Palmer 1
- Rudrashis Poddar 1
- Eric Poitras 1
- Rosy Southwell 1
- Tamara Sumner 1
- Abhijit Suresh 1
- Yik-Cheung Tam 1
- Michael Tanana 1
- Adel Youssef 1
- Jie Zhou 1
- Katharina von der Wense 1