2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence

Jose M. Alonso, Alejandro Catala (Editors)


Anthology ID:
2020.nl4xai-1
Month:
November
Year:
2020
Address:
Dublin, Ireland
Venue:
NL4XAI
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
URL:
https://aclanthology.org/2020.nl4xai-1
PDF:
https://aclanthology.org/2020.nl4xai-1.pdf

2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence
Jose M. Alonso | Alejandro Catala

Automatically explaining health information
Emiel Krahmer

Modern AI systems automatically learn from data using sophisticated statistical models. Explaining how these systems work and how they make their predictions therefore increasingly involves producing descriptions of how different probabilities are weighted and which uncertainties underlie these numbers. But what is the best way to (automatically) present such probabilistic explanations, do people actually understand them, and what is the potential impact of such information on people’s wellbeing? In this talk, I address these questions in the context of systems that automatically generate personalised health information. The emergence of large national health registries, such as the Dutch cancer registry, now makes it possible to automatically generate descriptions of treatment options for new cancer patients based on data from comparable patients, including health and quality-of-life predictions following different treatments. I describe a series of studies in which our team has investigated to what extent this information is currently provided to people, and under which conditions people actually want to have access to these kinds of data-driven explanations. Additionally, we have studied whether there are different profiles in information needs, and what the best way is to provide probabilistic information and the associated uncertainties to people.

Bias in AI-systems: A multi-step approach
Eirini Ntoutsi

Algorithmic decision making powered by AI and (big) data has already penetrated almost all spheres of human life, from content recommendation and healthcare to predictive policing and autonomous driving, deeply affecting everyone, anywhere, anytime. While technology allows previously unthinkable optimizations in the automation of expensive human decision making, the risks it can pose are also high, leading to ever-increasing public concern about the impact of the technology on our lives. The area of responsible AI has recently emerged in an attempt to put humans at the center of AI-based systems by considering aspects such as fairness, reliability, and privacy of decision-making systems. In this talk, we will focus on the fairness aspect. We will start with understanding the many sources of bias and how biases can enter at each step of the learning process and even get propagated or amplified from previous steps. We will continue with methods for mitigating bias, which typically focus on a single step of the pipeline (data, algorithms, or results), and why it is important to target bias at each step and collectively in the whole (machine) learning pipeline. We will conclude the talk by discussing accountability issues in connection to bias, in particular proactive consideration via bias-aware data collection, processing, and algorithmic selection, and retroactive consideration via explanations.

Content Selection for Explanation Requests in Customer-Care Domain
Luca Anselma | Mirko Di Lascio | Dario Mana | Alessandro Mazzei | Manuela Sanguinetti

This paper describes a content selection module for the generation of explanations in a dialogue system designed for the customer-care domain. First, we describe the construction of a corpus of dialogues containing explanation requests from customers to a virtual agent of a telco; second, we study and formalize the importance of specific information content for the generated message. In particular, we adapt the notions of importance and relevance to the case of schematic knowledge bases.
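
The abstract does not include the authors' formalization; as a rough, purely hypothetical sketch of what content selection over a schematic knowledge base can look like, the snippet below ranks the slots of an invented customer-care record by a combined importance/relevance score. The slot names, weights, and scoring function are all assumptions for illustration, not the paper's model.

```python
# Illustrative sketch only: a toy content-selection step that ranks slots of a
# schematic knowledge base by importance and relevance. The weights, scoring
# function, and slot names are hypothetical, not the paper's formalization.

def select_content(record, importance, relevance, k=2):
    """Return the k slots with the highest combined score."""
    scored = {
        slot: importance.get(slot, 0.0) * relevance.get(slot, 0.0)
        for slot in record
    }
    return sorted(scored, key=scored.get, reverse=True)[:k]

# Toy customer-care record about a billing complaint (invented example).
record = {"plan": "Basic 10GB", "last_invoice": "42.50 EUR", "data_used": "9.8GB"}
importance = {"plan": 0.4, "last_invoice": 0.9, "data_used": 0.7}   # static weights
relevance = {"plan": 0.2, "last_invoice": 0.8, "data_used": 0.9}    # query-dependent

print(select_content(record, importance, relevance))
# ['last_invoice', 'data_used']
```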

ExTRA: Explainable Therapy-Related Annotations
Mat Rawsthorne | Tahseen Jilani | Jacob Andrews | Yunfei Long | Jeremie Clos | Samuel Malins | Daniel Hunt

In this paper we report progress on a novel explainable artificial intelligence (XAI) initiative applying Natural Language Processing (NLP) with elements of co-design to develop a text classifier for application in psychotherapy training. The task is to produce a tool that helps therapists review their sessions by automatically labelling transcript text with levels of interaction for patient activation in known psychological processes, using XAI to increase their trust in the model’s suggestions and client trajectory predictions. After pre-processing the language features extracted from professionally annotated therapy session transcripts, we apply a supervised machine learning approach (CHAID) to classify interaction labels (negative, neutral, positive). Weighted samples are used to overcome the class imbalance in the data. The results show that this initial model can make useful distinctions among the three labels of patient activation with 74% accuracy and provide insight into its reasoning. This ongoing project will additionally evaluate which XAI approaches can be used to increase the transparency of the tool to end users, exploring whether direct involvement of stakeholders improves the usability of the XAI interface and therefore trust in the solution.
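
As a loose illustration of the classification setup described above (supervised learning with weighted samples over three imbalanced labels), here is a minimal sketch. scikit-learn does not ship CHAID, so a decision tree with balanced class weights stands in for it, and the features and labels are synthetic noise; none of this is the authors' code or data, and the printed score is not comparable to the reported 74%.

```python
# Illustrative sketch only: stand-in for the paper's CHAID classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))            # stand-in for extracted language features
y = rng.choice([0, 1, 2], size=600, p=[0.7, 0.2, 0.1])  # negative/neutral/positive, imbalanced

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" plays the role of the weighted samples used in the
# paper to counter class imbalance.
clf = DecisionTreeClassifier(max_depth=4, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
print("accuracy on synthetic data:", accuracy_score(y_test, clf.predict(X_test)))
```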

The Natural Language Pipeline, Neural Text Generation and Explainability
Juliette Faille | Albert Gatt | Claire Gardent

End-to-end encoder-decoder approaches to data-to-text generation are often black boxes whose predictions are difficult to explain. Breaking up the end-to-end model into submodules is a natural way to address this problem, and the traditional pre-neural Natural Language Generation (NLG) pipeline provides a framework for doing so. We survey recent papers that integrate traditional NLG submodules into neural approaches and analyse their explainability. Our survey is a first step towards building explainable neural NLG models.
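
For readers unfamiliar with the traditional pipeline the survey refers to, the following toy sketch spells out a few of its stages (content selection, text structuring, lexicalisation, surface realisation) as separate, inspectable functions. The stage names follow the classic NLG literature; the toy data and implementation are invented for illustration and are not from the survey.

```python
# Illustrative sketch of a modular, pre-neural-style NLG pipeline. Each
# intermediate result can be inspected, which is what makes this architecture
# easier to explain than an end-to-end encoder-decoder.

def content_selection(data):
    return [fact for fact in data if fact.get("important")]

def text_structuring(facts):
    return sorted(facts, key=lambda f: f.get("order", 0))

def lexicalisation(facts):
    return [f"{f['subject']} {f['predicate']} {f['object']}" for f in facts]

def surface_realisation(clauses):
    return ". ".join(clauses) + "."

def pipeline(data):
    return surface_realisation(lexicalisation(text_structuring(content_selection(data))))

data = [
    {"subject": "Alan Turing", "predicate": "was born in", "object": "London",
     "important": True, "order": 0},
    {"subject": "Alan Turing", "predicate": "worked at", "object": "Bletchley Park",
     "important": True, "order": 1},
]
print(pipeline(data))
# Alan Turing was born in London. Alan Turing worked at Bletchley Park.
```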

Towards Harnessing Natural Language Generation to Explain Black-box Models
Ettore Mariotti | Jose M. Alonso | Albert Gatt

The opaque nature of many machine learning techniques prevents the wide adoption of powerful information processing tools in high-stakes scenarios. The emerging field of eXplainable Artificial Intelligence (XAI) aims to provide justifications for automatic decision-making systems in order to ensure their reliability and foster users' trust. To achieve this vision, we emphasize the importance of a natural language textual modality as a key component of a future intelligent interactive agent. We outline the challenges of XAI and review a set of publications that work in this direction.

Explaining Bayesian Networks in Natural Language: State of the Art and Challenges
Conor Hennessy | Alberto Bugarín | Ehud Reiter

In order to increase trust in the use of Bayesian Networks and to cement their role as a model that can aid critical decision making, the challenge of explainability must be faced. Previous attempts at explaining Bayesian Networks have largely focused on graphical or visual aids. In this paper we aim to highlight the importance of a natural language approach to explanation and to discuss some previous and state-of-the-art attempts at textual explanation of Bayesian Networks. We outline several challenges that remain to be addressed in the generation and validation of natural language explanations of Bayesian Networks. This can serve as a reference for future work on natural language explanations of Bayesian Networks.
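
As a minimal, invented illustration of what a textual explanation of Bayesian-network inference can look like (not a method from the surveyed work), the sketch below computes a posterior for a two-node network by Bayes' rule and verbalises it with a fixed template; the probabilities, variable names, and wording are all hypothetical.

```python
# Illustrative sketch only: a two-node Bayesian network (invented numbers)
# whose posterior is computed by hand and verbalised with a simple template.

# Prior and conditional probability table (hypothetical values).
p_smoker = 0.3
p_cancer_given = {True: 0.20, False: 0.05}   # P(cancer | smoker)

def posterior_smoker_given_cancer():
    """P(smoker | cancer) via Bayes' rule."""
    p_cancer = (p_cancer_given[True] * p_smoker
                + p_cancer_given[False] * (1 - p_smoker))
    return p_cancer_given[True] * p_smoker / p_cancer

def verbalise(prob):
    """Template-based textual explanation of the inference result."""
    strength = "likely" if prob > 0.5 else "possible but not certain"
    return (f"Given that the patient has cancer, the probability that they are a "
            f"smoker rises from {p_smoker:.0%} to {prob:.0%}, so smoking is a "
            f"{strength} contributing factor in this model.")

print(verbalise(posterior_smoker_given_cancer()))
```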

Explaining data using causal Bayesian networks
Jaime Sevilla

I introduce Causal Bayesian Networks as a formalism for representing and explaining probabilistic causal relations, review the state of the art on learning Causal Bayesian Networks, and suggest and illustrate a research avenue for studying pairwise identification of causal relations inspired by graphical causality criteria.

Towards Generating Effective Explanations of Logical Formulas: Challenges and Strategies
Alexandra Mayn | Kees van Deemter

While the problem of natural language generation from logical formulas has a long tradition, thus far little attention has been paid to ensuring that the generated explanations are optimally effective for the user. We discuss issues related to deciding what such output should look like, and strategies for addressing those issues. We stress the importance of informing the generation of NL explanations of logical formulas with reader studies and with findings on the comprehension of logic from Pragmatics and Cognitive Science. We then illustrate the discussed issues, and potential ways of addressing them, using the output of a simple demo system generated from a propositional logic formula.
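
The paper's demo system is not reproduced here; as a hypothetical stand-in, the following sketch shows the general idea of template-based verbalisation of a propositional formula, rendering a nested-tuple formula into English with fixed connective templates. The templates, lexicon, and formula encoding are invented.

```python
# Illustrative sketch only, not the demo system described in the paper: a tiny
# recursive verbaliser that turns a propositional-logic formula (as a nested
# tuple) into an English paraphrase using fixed templates.

TEMPLATES = {
    "and": "{} and {}",
    "or": "either {} or {}",
    "implies": "if {}, then {}",
    "not": "it is not the case that {}",
}

def verbalise(formula, lexicon):
    """Recursively render a formula such as ('implies', 'p', ('not', 'q'))."""
    if isinstance(formula, str):                      # atomic proposition
        return lexicon[formula]
    op, *args = formula
    return TEMPLATES[op].format(*(verbalise(a, lexicon) for a in args))

lexicon = {"p": "it rains", "q": "the match takes place"}
print(verbalise(("implies", "p", ("not", "q")), lexicon))
# if it rains, then it is not the case that the match takes place
```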

Argumentation Theoretical Frameworks for Explainable Artificial Intelligence
Martijn Demollin | Qurat-Ul-Ain Shaheen | Katarzyna Budzynska | Carles Sierra

This paper discusses four major argumentation theoretical frameworks with respect to their use in support of explainable artificial intelligence (XAI). We consider these frameworks as useful tools for both system-centred and user-centred XAI. The former is concerned with the generation of explanations for decisions taken by AI systems, while the latter is concerned with the way explanations are given to users and received by them.

Toward Natural Language Mitigation Strategies for Cognitive Biases in Recommender Systems
Alisa Rieger | Mariët Theune | Nava Tintarev

Cognitive biases in the context of consuming online information filtered by recommender systems may lead to sub-optimal choices. One approach to mitigating such biases is through interface and interaction design. This survey reviews studies focused on cognitive bias mitigation for recommender system users during two processes: 1) item selection and 2) preference elicitation. It highlights a number of promising directions for Natural Language Generation research on mitigating cognitive biases, including the need for personalization as well as for transparency and control.

When to explain: Identifying explanation triggers in human-agent interaction
Lea Krause | Piek Vossen

With more agents deployed than ever, users need to be able to interact and cooperate with them in an effective and comfortable manner. Explanations have been shown to increase a user's understanding and trust in human-agent interaction. There have been numerous studies investigating this effect, but they rely on the user explicitly requesting an explanation. We propose a first overview of when an explanation should be triggered and show that there are many instances that would be missed if the agent relied solely on direct questions. To this end, we differentiate between direct triggers, such as commands or questions, and introduce indirect triggers, such as detected confusion or uncertainty.
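
As a rough sketch of the distinction drawn above, the snippet below separates direct triggers (explicit questions or commands) from indirect triggers (cues of confusion or uncertainty) with simple rules. The cue lists and rules are invented for illustration; the paper's indirect triggers would typically require trained detectors rather than keyword matching.

```python
# Illustrative sketch only: rule-based detection of the two trigger classes.
from typing import Optional

DIRECT_CUES = ("why", "how", "explain", "what do you mean")
INDIRECT_CUES = ("i'm confused", "i am not sure", "that seems wrong", "huh")

def detect_trigger(utterance: str) -> Optional[str]:
    text = utterance.lower()
    if text.endswith("?") or any(cue in text for cue in DIRECT_CUES):
        return "direct"      # explicit request: question or command
    if any(cue in text for cue in INDIRECT_CUES):
        return "indirect"    # implicit cue: confusion or uncertainty
    return None              # no explanation needed

for u in ["Why did you turn left?", "Hmm, I'm confused by that.", "Thanks!"]:
    print(u, "->", detect_trigger(u))
```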

Learning from Explanations and Demonstrations: A Pilot Study
Silvia Tulli | Sebastian Wallkötter | Ana Paiva | Francisco S. Melo | Mohamed Chetouani

AI has become prominent in a growing number of systems, and, as a direct consequence, the desire for explainability in such systems has become prominent as well. To build explainable systems, a large portion of existing research uses various kinds of natural language technologies, e.g., text-to-speech mechanisms or string visualizations. Here, we provide an overview of the challenges associated with natural language explanations by reviewing existing literature. Additionally, we discuss the relationship between explainability and knowledge transfer in reinforcement learning. We argue that explainability methods, in particular methods that model the recipient of an explanation, might help increase sample efficiency. To this end, we present a computational approach to optimizing the learner's performance using explanations from another agent and discuss our results in light of effective natural language explanations for humans.
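
As a toy illustration of the claim that explanations can improve sample efficiency (not the pilot study's actual setup), the sketch below runs tabular Q-learning on a tiny corridor task with and without an explainer's advice that masks an unhelpful action; the task, the form of advice, and the hyperparameters are all invented.

```python
# Illustrative sketch only: an explainer's advice ("moving left never helps")
# prunes the action set, so the advised learner needs fewer environment steps.
import random

N_STATES, GOAL = 6, 5            # corridor of states 0..5, reward 1 at the goal
ACTIONS = [-1, +1]               # step left / step right

def run(episodes, use_explanation, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    steps = 0
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            allowed = [+1] if use_explanation else ACTIONS
            if rng.random() < 0.2:                       # epsilon-greedy exploration
                a = rng.choice(allowed)
            else:                                        # greedy, random tie-breaking
                a = max(allowed, key=lambda x: (q[(s, x)], rng.random()))
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0
            q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s, steps = s2, steps + 1
    return steps

print("environment steps without advice:", run(30, use_explanation=False))
print("environment steps with advice:   ", run(30, use_explanation=True))
```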

Generating Explanations of Action Failures in a Cognitive Robotic Architecture
Ravenna Thielstrom | Antonio Roque | Meia Chita-Tegmark | Matthias Scheutz

We describe an approach to generating explanations about why robot actions fail, focusing on the considerations of robots that are run by cognitive robotic architectures. We define a set of Failure Types and Explanation Templates, motivating them by the needs and constraints of cognitive architectures that use action scripts and interpretable belief states, and describe content realization and surface realization in this context. We then describe an evaluation that can be extended to further study the effects of varying the explanation templates.
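
As a hypothetical sketch of the template-based approach described above, the snippet below maps a few invented failure types to explanation templates and fills them with slot values; the actual Failure Types and Explanation Templates defined in the paper differ.

```python
# Illustrative sketch only: failure types and template wording are invented,
# not the paper's inventory.
from enum import Enum

class FailureType(Enum):
    PRECONDITION_UNMET = "precondition_unmet"
    EXECUTION_ERROR = "execution_error"
    GOAL_UNREACHABLE = "goal_unreachable"

TEMPLATES = {
    FailureType.PRECONDITION_UNMET:
        "I could not {action} because the precondition '{precondition}' does not hold.",
    FailureType.EXECUTION_ERROR:
        "I started to {action}, but {error} occurred during execution.",
    FailureType.GOAL_UNREACHABLE:
        "I cannot {action} because no plan reaches the goal '{goal}'.",
}

def explain_failure(failure_type, **slots):
    """Surface realisation: fill the template for the given failure type."""
    return TEMPLATES[failure_type].format(**slots)

print(explain_failure(FailureType.PRECONDITION_UNMET,
                      action="pick up the mug",
                      precondition="the gripper is empty"))
# I could not pick up the mug because the precondition 'the gripper is empty' does not hold.
```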