Ingrid Zukerman


2024

Let’s Negotiate! A Survey of Negotiation Dialogue Systems
Haolan Zhan | Yufei Wang | Zhuang Li | Tao Feng | Yuncheng Hua | Suraj Sharma | Lizhen Qu | Zhaleh Semnani Azad | Ingrid Zukerman | Reza Haf
Findings of the Association for Computational Linguistics: EACL 2024

Negotiation is a crucial ability in human communication. Recently, there has been a resurgent research interest in negotiation dialogue systems, whose goal is to create intelligent agents that can assist people in resolving conflicts or reaching agreements. Although there have been many explorations into negotiation dialogue systems, a systematic review of this task has not been performed to date. We aim to fill this gap by investigating recent studies in the field of negotiation dialogue systems, and covering benchmarks, evaluations and methodologies within the literature. We also discuss potential future directions, including multi-modal, multi-party and cross-cultural negotiation scenarios. Our goal is to provide the community with a systematic overview of negotiation dialogue systems and to inspire future research.

RENOVI: A Benchmark Towards Remediating Norm Violations in Socio-Cultural Conversations
Haolan Zhan | Zhuang Li | Xiaoxi Kang | Tao Feng | Yuncheng Hua | Lizhen Qu | Yi Ying | Mei Rianto Chandra | Kelly Rosalin | Jureynolds Jureynolds | Suraj Sharma | Shilin Qu | Linhao Luo | Ingrid Zukerman | Lay-Ki Soon | Zhaleh Semnani Azad | Reza Haf
Findings of the Association for Computational Linguistics: NAACL 2024

Norm violations occur when individuals fail to conform to culturally accepted behaviors, which may lead to potential conflicts. Remediating norm violations requires social awareness and cultural sensitivity to the nuances at play. To equip interactive AI systems with a remediation ability, we offer ReNoVi, a large-scale corpus of 9,258 multi-turn dialogues annotated with social norms, and define a sequence of tasks to help understand and remediate norm violations step by step. ReNoVi consists of two parts: 512 human-authored dialogues (real data) and 8,746 synthetic conversations generated by ChatGPT through prompt learning. While collecting sufficient human-authored data is costly, synthetic conversations provide a suitable amount of data to help mitigate the scarcity of training data, as well as the chance to assess the alignment between LLMs and humans in the awareness of social norms. We thus harness the power of ChatGPT to generate synthetic training data for our task. To ensure the quality of both human-authored and synthetic data, we follow a quality control protocol during data collection. Our experimental results demonstrate the importance of remediating norm violations in socio-cultural conversations, as well as the improvement in performance obtained from synthetic data.

Going beyond Imagination! Enhancing Multi-modal Dialogue Agents with Synthetic Visual Descriptions
Haolan Zhan | Sameen Maruf | Ingrid Zukerman | Gholamreza Haffari
Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Building a dialogue agent that can seamlessly interact with humans in multi-modal regimes requires two fundamental abilities: (1) understanding emotion and dialogue acts within situated user scenarios, and (2) grounding perceived visual cues to dialogue contexts. However, recent works have uncovered shortcomings of existing dialogue agents in understanding emotions and dialogue acts, and in grounding visual cues effectively. In this work, we investigate whether additional dialogue data with only visual descriptions can help dialogue agents effectively align visual and textual features, and enhance the ability of dialogue agents to ground perceived visual cues to dialogue contexts. To this end, in the absence of a suitable dataset, we propose a synthetic visual description generation pipeline, and contribute a large-scale synthetic visual description dataset. In addition, we propose a general training procedure for effectively leveraging these synthetic data. We conduct comprehensive analyses to evaluate the impact of synthetic data on two benchmarks: MELD and IEMOCAP. Our findings suggest that synthetic visual descriptions can serve as an effective way to enhance a dialogue agent’s grounding ability, and that the training scheme affects the extent to which these descriptions improve the agent’s performance.

Communicating Uncertainty in Explanations of the Outcomes of Machine Learning Models
Ingrid Zukerman | Sameen Maruf
Proceedings of the 17th International Natural Language Generation Conference

We consider two types of numeric representations for conveying the uncertainty of predictions made by Machine Learning (ML) models: confidence-based (e.g., “the AI is 90% confident”) and frequency-based (e.g., “the AI was correct in 180 (90%) out of 200 cases”). We conducted a user study to determine which factors influence users’ acceptance of predictions made by ML models, and how the two types of uncertainty representations affect users’ views about explanations. Our results show that users’ acceptance of ML model predictions depends mainly on the models’ confidence, and that explanations that include uncertainty information are deemed better in several respects than explanations that omit it, with frequency-based representations being deemed better than confidence-based representations.
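
As an illustration only (the wording and function names below are ours, not taken from the paper), the two numeric representations can be rendered along these lines:

    def confidence_based(confidence: float) -> str:
        # e.g., "the AI is 90% confident"
        return f"the AI is {confidence:.0%} confident"

    def frequency_based(correct: int, total: int) -> str:
        # e.g., "the AI was correct in 180 (90%) out of 200 cases"
        return f"the AI was correct in {correct} ({correct / total:.0%}) out of {total} cases"

    print(confidence_based(0.9))      # the AI is 90% confident
    print(frequency_based(180, 200))  # the AI was correct in 180 (90%) out of 200 cases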

Generating Simple, Conservative and Unifying Explanations for Logistic Regression Models
Sameen Maruf | Ingrid Zukerman | Xuelin Situ | Cecile Paris | Gholamreza Haffari
Proceedings of the 17th International Natural Language Generation Conference

In this paper, we generate and compare three types of explanations of Machine Learning (ML) predictions: simple, conservative and unifying. Simple explanations are concise, conservative explanations address the surprisingness of a prediction, and unifying explanations convey the extent to which an ML model’s predictions are applicable. The results of our user study show that (1) conservative and unifying explanations are liked equally and considered largely equivalent in terms of completeness, helpfulness for understanding the AI, and enticement to act, and both are deemed better than simple explanations; and (2) users’ views about explanations are influenced by the (dis)agreement between the ML model’s predictions and users’ estimations of these predictions, and by the inclusion/omission of features users expect to see in explanations.

2023

Turning Flowchart into Dialog: Augmenting Flowchart-grounded Troubleshooting Dialogs via Synthetic Data Generation
Haolan Zhan | Sameen Maruf | Lizhen Qu | Yufei Wang | Ingrid Zukerman | Gholamreza Haffari
Proceedings of the 21st Annual Workshop of the Australasian Language Technology Association

Flowchart-grounded troubleshooting dialogue (FTD) systems, which follow the instructions of a flowchart to diagnose users’ problems in specific domains (e.g., vehicle, laptop), have been gaining research interest in recent years. However, collecting sufficient dialogues that are naturally grounded on flowcharts is costly; thus, FTD systems are impeded by scarce training data. To mitigate the data sparsity issue, we propose a plan-based synthetic data generation (PlanSDG) approach that generates diverse synthetic dialogue data at scale by transforming concise flowcharts into dialogues. Specifically, its generative model employs a variational framework with a hierarchical planning strategy that includes global and local latent planning variables. Experiments on the FloDial dataset show that the synthetic dialogues produced by PlanSDG improve the performance of downstream tasks, including flowchart path retrieval and response generation, in particular in the Out-of-Flowchart setting. In addition, further analysis demonstrates the quality of the synthetic data generated by PlanSDG, both on paths that are covered by the existing sample dialogues and on paths that are not.

2021

Learning to Explain: Generating Stable Explanations Fast
Xuelin Situ | Ingrid Zukerman | Cecile Paris | Sameen Maruf | Gholamreza Haffari
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

The importance of explaining the outcome of a machine learning model, especially a black-box model, is widely acknowledged. Recent approaches explain an outcome by identifying the contributions of input features to this outcome. In environments involving large black-box models or complex inputs, this leads to computationally demanding algorithms. Further, these algorithms often suffer from low stability, with explanations varying significantly across similar examples. In this paper, we propose a Learning to Explain (L2E) approach that learns the behaviour of an underlying explanation algorithm simultaneously from all training examples. Once the explanation algorithm is distilled into an explainer network, it can be used to explain new instances. Our experiments on three classification tasks, which compare our approach to six explanation algorithms, show that L2E is between 5 and 7.5×10^4 times faster than these algorithms, while generating more stable explanations, and having comparable faithfulness to the black-box model.
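
A minimal sketch of the distillation idea, assuming a generic feature-attribution teacher (e.g., a LIME-style attributor); the names and hyperparameters are hypothetical, not the authors' implementation:

    import torch
    import torch.nn as nn

    class Explainer(nn.Module):
        # Student network that maps an input to per-feature attribution scores.
        def __init__(self, dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

        def forward(self, x):          # x: (batch, dim)
            return self.net(x)

    def distill(explainer, teacher_explain, loader, epochs=3, lr=1e-3):
        # Train the student to reproduce the (slow) teacher's attributions over
        # all training examples; afterwards a single forward pass explains a new instance.
        opt = torch.optim.Adam(explainer.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for x in loader:
                target = teacher_explain(x).detach()
                opt.zero_grad()
                loss_fn(explainer(x), target).backward()
                opt.step()
        return explainer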

Curriculum Learning Effectively Improves Low Data VQA
Narjes Askarian | Ehsan Abbasnejad | Ingrid Zukerman | Wray Buntine | Gholamreza Haffari
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association

Visual question answering (VQA) models, in particular modular ones, are commonly trained on large-scale datasets to achieve state-of-the-art performance. However, such datasets are sometimes not available. Further, it has been shown that training these models on small datasets significantly reduces their accuracy. In this paper, we propose a curriculum-based learning (CL) regime to increase the accuracy of VQA models trained on small datasets. Specifically, we offer three criteria to rank the samples in these datasets and propose a training strategy for each criterion. Our results show that, for small datasets, our CL approach yields more accurate results than those obtained when training with no curriculum.
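
A generic curriculum regime of this kind can be sketched as follows (the difficulty criterion and the linear pacing schedule are illustrative assumptions, not the paper's exact criteria):

    def curriculum_batches(samples, difficulty, num_epochs, batch_size=32):
        # Rank samples from easy to hard, then grow the training pool each epoch.
        ranked = sorted(samples, key=difficulty)
        for epoch in range(1, num_epochs + 1):
            frac = epoch / num_epochs                           # linear pacing function
            pool = ranked[: max(batch_size, int(frac * len(ranked)))]
            for i in range(0, len(pool), batch_size):
                yield pool[i:i + batch_size]                    # batch to train on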

Explaining Decision-Tree Predictions by Addressing Potential Conflicts between Predictions and Plausible Expectations
Sameen Maruf | Ingrid Zukerman | Ehud Reiter | Gholamreza Haffari
Proceedings of the 14th International Conference on Natural Language Generation

We offer an approach to explain Decision Tree (DT) predictions by addressing potential conflicts between aspects of these predictions and plausible expectations licensed by background information. We define four types of conflicts, operationalize their identification, and specify explanatory schemas that address them. Our human evaluation focused on the effect of explanations on users’ understanding of a DT’s reasoning and their willingness to act on its predictions. The results show that (1) explanations that address potential conflicts are considered at least as good as baseline explanations that just follow a DT path; and (2) the conflict-based explanations are deemed especially valuable when users’ expectations disagree with the DT’s predictions.

Lifelong Explainer for Lifelong Learners
Xuelin Situ | Sameen Maruf | Ingrid Zukerman | Cecile Paris | Gholamreza Haffari
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Lifelong Learning (LL) black-box models are dynamic in that they keep learning from new tasks and constantly update their parameters. Owing to the need to utilize information from previously seen tasks, and capture commonalities in potentially diverse data, it is hard for automatic explanation methods to explain the outcomes of these models. In addition, existing explanation methods, e.g., LIME, which are computationally expensive when explaining a static black-box model, are even more inefficient in the LL setting. In this paper, we propose a novel Lifelong Explanation (LLE) approach that continuously trains a student explainer under the supervision of a teacher – an arbitrary explanation algorithm – on different tasks undertaken in LL. We also leverage the Experience Replay (ER) mechanism to prevent catastrophic forgetting in the student explainer. Our experiments comparing LLE to three baselines on text classification tasks show that LLE can enhance the stability of the explanations for all seen tasks and maintain the same level of faithfulness to the black-box model as the teacher, while being up to 10^2 times faster at test time. Our ablation study shows that the ER mechanism in our LLE approach enhances the learning capabilities of the student explainer. Our code is available at https://github.com/situsnow/LLE.
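
The replay mechanism can be sketched roughly as follows (the buffer size, eviction and mixing policy are illustrative assumptions, not the authors' code):

    import random

    class ReplayBuffer:
        # Stores (example, teacher_explanation) pairs from earlier tasks so the
        # student explainer can rehearse them and avoid catastrophic forgetting.
        def __init__(self, capacity=1000):
            self.capacity, self.data = capacity, []

        def add(self, example, teacher_explanation):
            self.data.append((example, teacher_explanation))
            if len(self.data) > self.capacity:
                self.data.pop(random.randrange(len(self.data)))   # random eviction keeps the buffer bounded

        def sample(self, k):
            return random.sample(self.data, min(k, len(self.data)))

    # For each new task: train the student on current-task pairs mixed with a
    # sample() from the buffer, then add() the current-task pairs to the buffer.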

2019

Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue
Satoshi Nakamura | Milica Gasic | Ingrid Zukerman | Gabriel Skantze | Mikio Nakano | Alexandros Papangelis | Stefan Ultes | Koichiro Yoshino
Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue

Influence of Time and Risk on Response Acceptability in a Simple Spoken Dialogue System
Andisheh Partovi | Ingrid Zukerman
Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue

We describe a longitudinal user study conducted in the context of a Spoken Dialogue System for a household robot, where we examined the influence of time displacement and situational risk on users’ preferred responses. To this effect, we employed a corpus of spoken requests that asked a robot to fetch or move objects in a room. In the first stage of our study, participants selected among four response types to these requests under two risk conditions: low and high. After some time, the same participants rated several responses to the previous requests — these responses were instantiated from the four response types. Our results show that participants did not rate their own response types highly; moreover, they rated other response types similarly to their own. This suggests that, at least in this context, people’s preferences at a particular point in time may not reflect their general attitudes, and that various reasonable response types may be equally acceptable. Our study also reveals that situational risk influences the acceptability of some response types.

2018

Exploring Textual and Speech information in Dialogue Act Classification with Speaker Domain Adaptation
Xuanli He | Quan Tran | William Havard | Laurent Besacier | Ingrid Zukerman | Gholamreza Haffari
Proceedings of the Australasian Language Technology Association Workshop 2018

In spite of the recent success of Dialogue Act (DA) classification, the majority of prior work focuses on text-based classification with oracle transcriptions, i.e., human transcriptions, rather than Automatic Speech Recognition (ASR) transcriptions. In spoken dialogue systems, however, the agent would only have access to noisy ASR transcriptions, and performance may further degrade due to domain shift. In this paper, we explore the effectiveness of using both acoustic and textual signals, with either oracle or ASR transcriptions, and investigate speaker domain adaptation for DA classification. Our multimodal model proves to be superior to the unimodal models, particularly when oracle transcriptions are not available. We also propose an effective method for speaker domain adaptation, which achieves competitive results.

The Context-Dependent Additive Recurrent Neural Net
Quan Hung Tran | Tuan Lai | Gholamreza Haffari | Ingrid Zukerman | Trung Bui | Hung Bui
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

Contextual sequence mapping is one of the fundamental problems in Natural Language Processing (NLP). Here, instead of relying solely on the information presented in the text, the learning agent has access to a strong external signal provided to assist the learning process. In this paper, we propose a novel family of Recurrent Neural Network units: the Context-dependent Additive Recurrent Neural Network (CARNN), designed specifically to address this type of problem. Experimental results on public datasets for dialogue (bAbI dialog Task 6 and Frames), contextual language modelling (Switchboard and Penn Treebank) and question answering (TREC QA) show that our novel CARNN-based architectures outperform previous methods.

2017

Preserving Distributional Information in Dialogue Act Classification
Quan Hung Tran | Ingrid Zukerman | Gholamreza Haffari
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

This paper introduces a novel training/decoding strategy for sequence labeling. Instead of greedily choosing a label at each time step, and using it for the next prediction, we retain the probability distribution over the current label, and pass this distribution to the next prediction. This approach allows us to avoid the effect of label bias and error propagation in sequence learning/decoding. Our experiments on dialogue act classification demonstrate the effectiveness of this approach. Even though our underlying neural network model is relatively simple, it outperforms more complex neural models, achieving state-of-the-art results on the MapTask and Switchboard corpora.
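
A minimal sketch of this idea (the architecture below is hypothetical, not the paper's exact model): the previous step's softmax distribution is turned into an expected label embedding and fed to the next step, instead of the embedding of the argmax label.

    import torch
    import torch.nn as nn

    class SoftLabelTagger(nn.Module):
        def __init__(self, utt_dim, num_labels, hidden=128):
            super().__init__()
            self.label_emb = nn.Embedding(num_labels, hidden)
            self.cell = nn.GRUCell(utt_dim + hidden, hidden)
            self.out = nn.Linear(hidden, num_labels)

        def forward(self, utterances):                 # (seq_len, batch, utt_dim)
            batch = utterances.size(1)
            h = utterances.new_zeros(batch, self.cell.hidden_size)
            num_labels = self.label_emb.num_embeddings
            prev_dist = utterances.new_full((batch, num_labels), 1.0 / num_labels)
            logits_seq = []
            for u in utterances:
                soft_prev = prev_dist @ self.label_emb.weight   # expected label embedding
                h = self.cell(torch.cat([u, soft_prev], dim=-1), h)
                logits = self.out(h)
                prev_dist = torch.softmax(logits, dim=-1)       # pass the distribution, not the argmax
                logits_seq.append(logits)
            return torch.stack(logits_seq)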

A Hierarchical Neural Model for Learning Sequences of Dialogue Acts
Quan Hung Tran | Ingrid Zukerman | Gholamreza Haffari
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers

We propose a novel hierarchical Recurrent Neural Network (RNN) for learning sequences of Dialogue Acts (DAs). The input in this task is a sequence of utterances (i.e., conversational contributions) comprising a sequence of tokens, and the output is a sequence of DA labels (one label per utterance). Our model leverages the hierarchical nature of dialogue data by using two nested RNNs that capture long-range dependencies at the dialogue level and the utterance level. This model is combined with an attention mechanism that focuses on salient tokens in utterances. Our experimental results show that our model outperforms strong baselines on two popular datasets, Switchboard and MapTask; and our detailed empirical analysis highlights the impact of each aspect of our model.
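
An illustrative sketch of the two-level structure (dimensions and details are our assumptions, not the authors' code): a token-level RNN with attention builds one vector per utterance, and a dialogue-level RNN consumes the resulting sequence to predict one DA label per utterance.

    import torch
    import torch.nn as nn

    class HierarchicalDATagger(nn.Module):
        def __init__(self, vocab_size, num_labels, emb=100, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb)
            self.utt_rnn = nn.GRU(emb, hidden, batch_first=True)      # token level
            self.attn = nn.Linear(hidden, 1)                          # attention over tokens
            self.dial_rnn = nn.GRU(hidden, hidden, batch_first=True)  # dialogue level
            self.out = nn.Linear(hidden, num_labels)

        def forward(self, dialogue):                                  # (num_utts, utt_len) token ids
            tok_states, _ = self.utt_rnn(self.embed(dialogue))        # (num_utts, utt_len, hidden)
            weights = torch.softmax(self.attn(tok_states), dim=1)     # salient-token weights
            utt_vecs = (weights * tok_states).sum(dim=1)              # (num_utts, hidden)
            dial_states, _ = self.dial_rnn(utt_vecs.unsqueeze(0))     # long-range dialogue context
            return self.out(dial_states.squeeze(0))                   # one logit vector per utterance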

A Generative Attentional Neural Network Model for Dialogue Act Classification
Quan Hung Tran | Gholamreza Haffari | Ingrid Zukerman
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

We propose a novel generative neural network architecture for Dialogue Act classification. Building upon the Recurrent Neural Network framework, our model incorporates a novel attentional technique and a label-to-label connection for sequence learning, akin to Hidden Markov Models. The experiments show that both of these innovations lead our model to outperform strong baselines for dialogue act classification on the MapTask and Switchboard corpora. We further empirically analyse the effectiveness of each of these innovations.

2016

Inter-document Contextual Language model
Quan Hung Tran | Ingrid Zukerman | Gholamreza Haffari
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

A Corpus of Tables in Full-Text Biomedical Research Publications
Tatyana Shmanina | Ingrid Zukerman | Ai Lee Cheam | Thomas Bochynek | Lawrence Cavedon
Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM2016)

The development of text mining techniques for biomedical research literature has received increased attention in recent times. However, most of these techniques focus on prose, while much important biomedical data reside in tables. In this paper, we present a corpus created to serve as a gold standard for the development and evaluation of techniques for the automatic extraction of information from biomedical tables. We describe the guidelines used for corpus annotation and the manner in which they were developed. The high inter-annotator agreement achieved on the corpus, and the generic nature of our annotation approach, suggest that the developed guidelines can serve as a general framework for table annotation in biomedical and other scientific domains. The annotated corpus and the guidelines are available at http://www.csse.monash.edu.au/research/umnl/data/index.shtml.

2014

A Comparative Study of Weighting Schemes for the Interpretation of Spoken Referring Expressions
Su Nam Kim | Ingrid Zukerman | Thomas Kleinbauer | Masud Moshtaghi
Proceedings of the Australasian Language Technology Association Workshop 2014

Challenges in Information Extraction from Tables in Biomedical Research Publications: a Dataset Analysis
Tatyana Shmanina | Lawrence Cavedon | Ingrid Zukerman
Proceedings of the Australasian Language Technology Association Workshop 2014

Authorship Attribution with Topic Models
Yanir Seroussi | Ingrid Zukerman | Fabian Bohnert
Computational Linguistics, Volume 40, Issue 2 - June 2014

2013

Evaluation of the Scusi? Spoken Language Interpretation System – A Case Study
Thomas Kleinbauer | Ingrid Zukerman | Su Nam Kim
Proceedings of the Sixth International Joint Conference on Natural Language Processing

A Noisy Channel Approach to Error Correction in Spoken Referring Expressions
Su Nam Kim | Ingrid Zukerman | Thomas Kleinbauer | Farshid Zavareh
Proceedings of the Sixth International Joint Conference on Natural Language Processing

Impact of Corpus Diversity and Complexity on NER Performance
Tatyana Shmanina | Ingrid Zukerman | Antonio Jimeno Yepes | Lawrence Cavedon | Karin Verspoor
Proceedings of the Australasian Language Technology Association Workshop 2013 (ALTA 2013)

Error Detection in Automatic Speech Recognition
Farshid Zavareh | Ingrid Zukerman | Su Nam Kim | Thomas Kleinbauer
Proceedings of the Australasian Language Technology Association Workshop 2013 (ALTA 2013)

2012

Authorship Attribution with Author-aware Topic Models
Yanir Seroussi | Fabian Bohnert | Ingrid Zukerman
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Experimental Evaluation of a Lexicon- and Corpus-based Ensemble for Multi-way Sentiment Analysis
Minh Duc Cao | Ingrid Zukerman
Proceedings of the Australasian Language Technology Association Workshop 2012

2011

In Situ Text Summarisation for Museum Visitors
Timothy Baldwin | Patrick Ye | Fabian Bohnert | Ingrid Zukerman
Proceedings of the 25th Pacific Asia Conference on Language, Information and Computation

Authorship Attribution with Latent Dirichlet Allocation
Yanir Seroussi | Ingrid Zukerman | Fabian Bohnert
Proceedings of the Fifteenth Conference on Computational Natural Language Learning

2010

A Hierarchical Classifier Applied to Multi-way Sentiment Detection
Adrian Bickerstaffe | Ingrid Zukerman
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

Interpreting Pointing Gestures and Spoken Requests – A Probabilistic, Salience-based Approach
Ingrid Zukerman | Gideon Kowadlo | Patrick Ye
Coling 2010: Posters

2009

An Empirical Study of Corpus-Based Response Automation Methods for an E-mail-Based Help-Desk Domain
Yuval Marom | Ingrid Zukerman
Computational Linguistics, Volume 35, Number 4, December 2009

Towards the Interpretation of Utterance Sequences in a Dialogue System
Ingrid Zukerman | Patrick Ye | Kapil Kumar Gupta | Enes Makalic
Proceedings of the SIGDIAL 2009 Conference

2006

Proceedings of the Australasian Language Technology Workshop 2006
Lawrence Cavedon | Ingrid Zukerman
Proceedings of the Australasian Language Technology Workshop 2006

Automating Help-desk Responses: A Comparative Study of Information-gathering Approaches
Yuval Marom | Ingrid Zukerman
Proceedings of the Workshop on Task-Focused Summarization and Question Answering

Balancing Conflicting Factors in Argument Interpretation
Ingrid Zukerman | Michael Niemann | Sarah George
Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue

2005

Book Review: Argumentation Machines: New Frontiers in Argumentation and Computation, edited by Chris Reed and Timothy J. Norman
Ingrid Zukerman
Computational Linguistics, Volume 31, Number 1, March 2005

Exploring and Exploiting the Limited Utility of Captions in Recognizing Intention in Information Graphics
Stephanie Elzer | Sandra Carberry | Daniel Chester | Seniz Demir | Nancy Green | Ingrid Zukerman | Keith Trnka
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)

2004

Filtering Speaker-Specific Words from Electronic Discussions
Ingrid Zukerman | Yuval Marom
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

2003

Lexical Paraphrasing for Document Retrieval and Node Identification
Ingrid Zukerman | Sarah George | Yingying Wen
Proceedings of the Second International Workshop on Paraphrasing

An Information-theoretic Approach for Argument Interpretation
Sarah George | Ingrid Zukerman
Proceedings of the Fourth SIGdial Workshop of Discourse and Dialogue

2002

A Minimum Message Length Approach for Argument Interpretation
Ingrid Zukerman | Sarah George
Proceedings of the Third SIGdial Workshop on Discourse and Dialogue

Towards a Noise-Tolerant, Representation-Independent Mechanism for Argument Interpretation
Ingrid Zukerman | Sarah George
COLING 2002: The 19th International Conference on Computational Linguistics

Lexical Query Paraphrasing for Document Retrieval
Ingrid Zukerman | Bhavani Raskutti
COLING 2002: The 19th International Conference on Computational Linguistics

2001

Using Machine Learning Techniques to Interpret WH-questions
Ingrid Zukerman | Eric Horvitz
Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics

2000

Towards the Generation of Rebuttals in a Bayesian Argumentation System
Nathalie Jitnah | Ingrid Zukerman | Richard McConachy | Sarah George
INLG’2000 Proceedings of the First International Conference on Natural Language Generation

Using Argumentation Strategies in Automated Argument Generation
Ingrid Zukerman | Richard McConachy | Sarah George
INLG’2000 Proceedings of the First International Conference on Natural Language Generation

1998

A Bayesian Approach to Automating Argumentation
Richard McConachy | Kevin B. Korb | Ingrid Zukerman
New Methods in Language Processing and Computational Natural Language Learning

Extracting Phoneme Pronunciation Information from Corpora
Ian Thomas | Ingrid Zukerman | Bhavani Raskutti
New Methods in Language Processing and Computational Natural Language Learning

Attention During Argument Generation and Presentation
Ingrid Zukerman | Richard McConachy | Kevin B. Korb
Natural Language Generation

1994

Discourse Planning as an Optimization Process
Ingrid Zukerman | Richard McConachy
Proceedings of the Seventh International Workshop on Natural Language Generation

1991

Current Research in Natural Language Generation
Ingrid Zukerman
Computational Linguistics, Volume 17, Number 3, September 1991

1990

Generating Peripheral Rhetorical Devices by Consulting a User Model
Ingrid Zukerman
Proceedings of the Fifth International Workshop on Natural Language Generation