2023
Synthetic Dialogue Dataset Generation using LLM Agents
Yelaman Abdullin | Diego Molla | Bahadorreza Ofoghi | John Yearwood | Qingyang Li
Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
Linear programming (LP) problems are pervasive in real-life applications. However, despite their apparent simplicity, an untrained user may find it difficult to determine the linear model of their specific problem. We envisage the creation of a goal-oriented conversational agent that will engage in conversation with the user to elicit all information required so that a subsequent agent can generate the linear model. In this paper, we present an approach for the generation of sample dialogues that can be used to develop and train such a conversational agent. Using prompt engineering, we develop two agents that “talk” to each other, one acting as the conversational agent, and the other acting as the user. Using a set of text descriptions of linear problems from NL4Opt available to the user only, the agent and the user engage in conversation until the agent has retrieved all key information from the original problem description. We also propose an extrinsic evaluation of the dialogues by assessing how well the summaries generated from the dialogues match the original problem descriptions. We conduct human and automatic evaluations, including an evaluation approach that uses GPT-4 to mimic the human evaluation metrics. The evaluation results show an overall good quality of the dialogues, though research is still needed to improve the quality of the GPT-4 evaluation metrics. The resulting dialogues, including the human annotations of a subset, are available to the research community. The conversational agent used for the generation of the dialogues can be used as a baseline.
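The following is a minimal sketch of the two-agent dialogue loop the abstract describes: two prompted LLM instances, one playing the elicitation agent and one playing the user who alone sees the NL4Opt problem text. The use of an OpenAI-style chat-completion API, the model name, the prompts, and the DONE stop condition are illustrative assumptions, not the paper's actual setup.

```python
# Sketch of two prompted agents conversing, as described in the abstract.
# API choice, model name, prompts, and stop condition are all assumptions.
from openai import OpenAI

client = OpenAI()

def to_messages(system_prompt: str, history: list[tuple[str, str]],
                self_name: str) -> list[dict]:
    """Map the shared transcript into one agent's point of view: its own
    turns become 'assistant' messages, the other agent's become 'user'."""
    msgs = [{"role": "system", "content": system_prompt}]
    for speaker, text in history:
        msgs.append({"role": "assistant" if speaker == self_name else "user",
                     "content": text})
    return msgs

def next_turn(system_prompt: str, history: list[tuple[str, str]],
              self_name: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed model
        messages=to_messages(system_prompt, history, self_name),
    )
    return resp.choices[0].message.content

# NL4Opt-style problem description, visible to the simulated user only.
problem = "A farmer has 100 acres to divide between wheat and corn ..."
agent_sys = ("You are an assistant gathering a linear programming problem. "
             "Ask one question per turn; reply DONE when nothing is missing.")
user_sys = ("You are a user with the following problem; answer the "
            f"assistant's questions without pasting it verbatim: {problem}")

history: list[tuple[str, str]] = []
while len(history) < 40:  # safety cap on dialogue length
    question = next_turn(agent_sys, history, "agent")
    history.append(("agent", question))
    if "DONE" in question:
        break
    history.append(("user", next_turn(user_sys, history, "user")))
```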
2022
PIE-QG: Paraphrased Information Extraction for Unsupervised Question Generation from Small Corpora
Dinesh Nagumothu | Bahadorreza Ofoghi | Guangyan Huang | Peter Eklund
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the resulting question-answer pairs to train a BERT-based, state-of-the-art QA system. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed with subjects (or objects) and predicates while objects (or subjects) are considered as answers. Experiments on five extractive QA datasets demonstrate that our technique achieves performance on par with existing state-of-the-art QA systems, with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
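A minimal sketch of the triple-to-question step the abstract describes follows: given an OpenIE triple, one question is formed whose answer is the subject and one whose answer is the object. The Triple type, the wh-templates, and the example triple are naive stand-ins assumed for illustration; PIE-QG's actual extraction and question-forming rules differ.

```python
# Sketch of forming synthetic QA pairs from an OpenIE triple.
# The templates below are deliberately naive placeholders.
from typing import NamedTuple

class Triple(NamedTuple):
    subject: str
    predicate: str
    object: str

def qa_pairs(t: Triple) -> list[tuple[str, str]]:
    """Form one question answered by the subject and one by the object."""
    return [
        (f"What {t.predicate} {t.object}?", t.subject),  # subject is the answer
        (f"{t.subject} {t.predicate} what?", t.object),  # object is the answer
    ]

# Example: a triple an OpenIE system might extract from a passage.
for question, answer in qa_pairs(Triple("Marie Curie", "discovered", "radium")):
    print(f"{question} -> {answer}")
# What discovered radium? -> Marie Curie
# Marie Curie discovered what? -> radium
```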
2016
Syndromic Surveillance through Measuring Lexical Shift in Emergency Department Chief Complaint Texts
Hafsah Aamer | Bahadorreza Ofoghi | Karin Verspoor
Proceedings of the Australasian Language Technology Association Workshop 2016
2009
From Lexical Entailment to Recognizing Textual Entailment Using Linguistic Resources
Bahadorreza Ofoghi | John Yearwood
Proceedings of the Australasian Language Technology Association Workshop 2009
2007
Two-Step Comprehensive Open Domain Text Annotation with Frame Semantics
Bahadorreza Ofoghi | John Yearwood | Liping Ma
Proceedings of the Australasian Language Technology Workshop 2007