2024
Dual Process Masking for Dialogue Act Recognition
Yeo Jin Kim | Halim Acosta | Wookhee Min | Jonathan Rowe | Bradford Mott | Snigdha Chaturvedi | James Lester
Findings of the Association for Computational Linguistics: EMNLP 2024
Dialogue act recognition is the task of classifying conversational utterances based on their communicative intent or function. To address this problem, we propose a novel two-phase processing approach called Dual-Process Masking. This approach streamlines the task by masking less important tokens in the input, identified through retrospective analysis of their estimated contribution during training. The masks applied during classification learning also serve to enhance interpretability. Dual-Process Masking significantly improves performance over strong baselines for dialogue act recognition on a collaborative problem-solving dataset and three public dialogue benchmarks.
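To make the masking idea concrete, the sketch below shows one plausible reading in PyTorch: token importance is estimated from a gradient-times-embedding saliency heuristic, and the lowest-scoring tokens are replaced with a mask token. The saliency measure, the 15% mask ratio, and all function names are illustrative assumptions, not details taken from the paper.

import torch

def attribution_scores(embeddings, loss):
    # Estimate each token's contribution as the norm of
    # d(loss)/d(embedding) * embedding (a common saliency heuristic;
    # embeddings must have requires_grad=True).
    grads = torch.autograd.grad(loss, embeddings, retain_graph=True)[0]
    return (grads * embeddings).norm(dim=-1)  # (batch, seq_len)

def mask_least_important(input_ids, scores, mask_token_id, mask_ratio=0.15):
    # Replace the lowest-scoring tokens with the mask token.
    k = max(1, int(scores.size(1) * mask_ratio))
    idx = scores.topk(k, dim=1, largest=False).indices
    masked = input_ids.clone()
    masked.scatter_(1, idx, mask_token_id)
    return masked

In this reading, scores would be accumulated over training steps and the masked inputs fed to a second classification pass, which is where the "dual process" framing comes in.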
Assessing Student Explanations with Large Language Models Using Fine-Tuning and Few-Shot Learning
Dan Carpenter | Wookhee Min | Seung Lee | Gamze Ozogul | Xiaoying Zheng | James Lester
Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024)
The practice of soliciting self-explanations from students is widely recognized for its pedagogical benefits. However, the labor-intensive effort required to manually assess students’ explanations makes it impractical for classroom settings. As a result, many current solutions to gauge students’ understanding during class are often limited to multiple choice or fill-in-the-blank questions, which are less effective at exposing misconceptions or helping students to understand and integrate new concepts. Recent advances in large language models (LLMs) present an opportunity to assess student explanations in real time, making explanation-based classroom response systems feasible to implement. In this work, we investigate LLM-based approaches for assessing the correctness of students’ explanations in response to undergraduate computer science questions. We investigate alternative prompting approaches for multiple LLMs (i.e., Llama 2, GPT-3.5, and GPT-4) and compare their performance to fine-tuned FLAN-T5 models. The results suggest that the highest accuracy and weighted F1 score were achieved by fine-tuned FLAN-T5, while an in-context learning approach with GPT-4 achieved the highest macro F1 score.
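The in-context learning approach amounts to prompting an LLM with a few labeled examples before the explanation to be graded. The sketch below shows one minimal way to construct such a few-shot prompt; the prompt wording, the correct/incorrect label set, and the example items are illustrative assumptions, not the study's actual materials.

FEW_SHOT_EXAMPLES = [
    {"question": "Why does binary search run in O(log n) time?",
     "explanation": "Each comparison halves the remaining search space.",
     "label": "correct"},
    {"question": "Why does binary search run in O(log n) time?",
     "explanation": "It checks every element once.",
     "label": "incorrect"},
]

def build_prompt(question, explanation):
    # Prepend labeled examples, then append the item to be assessed.
    lines = ["Decide whether each student explanation is correct or incorrect.", ""]
    for ex in FEW_SHOT_EXAMPLES:
        lines += [f"Question: {ex['question']}",
                  f"Explanation: {ex['explanation']}",
                  f"Assessment: {ex['label']}", ""]
    lines += [f"Question: {question}",
              f"Explanation: {explanation}",
              "Assessment:"]
    return "\n".join(lines)

The resulting string can be sent to any chat or completion endpoint (e.g., GPT-3.5 or GPT-4), with the first token of the reply parsed as the predicted label.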
2022
Disruptive Talk Detection in Multi-Party Dialogue within Collaborative Learning Environments with a Regularized User-Aware Network
Kyungjin Park | Hyunwoo Sohn | Wookhee Min | Bradford Mott | Krista Glazewski | Cindy E. Hmelo-Silver | James Lester
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Accurate detection and appropriate handling of disruptive talk in multi-party dialogue is essential for users to achieve shared goals. In collaborative game-based learning environments, detecting and attending to disruptive talk holds significant potential, since such talk can distract students and produce negative learning experiences. We present a novel attention-based user-aware neural architecture for disruptive talk detection that uses a sequence dropout-based regularization mechanism. The disruptive talk detection models are evaluated with multi-party dialogue collected from 72 middle school students who interacted with a collaborative game-based learning environment. Our proposed disruptive talk detection model significantly outperforms competitive baseline approaches and shows significant potential for supporting effective collaborative learning experiences.
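The sequence dropout mechanism can be read as randomly dropping whole utterances from a dialogue sequence during training so the model does not over-rely on any single turn. The PyTorch module below is a minimal sketch of that reading; the drop rate, the zero-and-rescale strategy, and the module name are assumptions, not the paper's specification.

import torch
import torch.nn as nn

class SequenceDropout(nn.Module):
    def __init__(self, p=0.1):
        super().__init__()
        self.p = p

    def forward(self, utterances):
        # utterances: (batch, num_utterances, hidden_dim)
        if not self.training or self.p == 0.0:
            return utterances
        keep = (torch.rand(utterances.shape[:2], device=utterances.device)
                > self.p).float().unsqueeze(-1)
        # Zero out dropped utterances and rescale the rest,
        # as in standard inverted dropout.
        return utterances * keep / (1.0 - self.p)

Unlike elementwise dropout, this masks entire utterance vectors at once, which matches the sequence-level regularization the abstract describes.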
2015
NCSU_SAS_WOOKHEE: A Deep Contextual Long-Short Term Memory Model for Text Normalization
Wookhee Min | Bradford Mott
Proceedings of the Workshop on Noisy User-generated Text