Ana Brassard


2023

Empirical Investigation of Neural Symbolic Reasoning Strategies
Yoichi Aoki | Keito Kudo | Tatsuki Kuribayashi | Ana Brassard | Masashi Yoshikawa | Keisuke Sakaguchi | Kentaro Inui
Findings of the Association for Computational Linguistics: EACL 2023

Neural reasoning accuracy improves when generating intermediate reasoning steps. However, the source of this improvement remains unclear. Here, we investigate and factorize the benefit of generating intermediate steps for symbolic reasoning. Specifically, we decompose reasoning strategies with respect to step granularity and chaining strategy. Using a purely symbolic numerical reasoning dataset (e.g., A=1, B=3, C=A+3, C?), we found that the choice of reasoning strategy significantly affects performance, with the gap growing even larger as the extrapolation length increases. Surprisingly, we also found that certain configurations lead to nearly perfect performance, even under length extrapolation. Our results indicate the importance of further exploring effective strategies for neural reasoning models.
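
A minimal sketch of how such purely symbolic instances might be generated, together with one possible set of intermediate steps (our own illustration; the paper's construction, operators, and value ranges may differ):

import random
import string

def make_instance(num_vars=3, seed=0):
    rng = random.Random(seed)
    names = list(string.ascii_uppercase[:num_vars])
    values, assignments = {}, []
    for i, name in enumerate(names):
        if i == 0 or rng.random() < 0.5:
            values[name] = rng.randint(1, 9)
            assignments.append(f"{name}={values[name]}")
        else:
            ref = rng.choice(names[:i])            # reference an earlier variable
            add = rng.randint(1, 9)
            values[name] = values[ref] + add
            assignments.append(f"{name}={ref}+{add}")
    target = names[-1]
    question = ", ".join(assignments) + f", {target}?"
    steps = [f"{n}={values[n]}" for n in names]    # fine-grained resolution chain
    return question, steps, values[target]

question, steps, answer = make_instance(seed=1)
print(question)   # e.g. "A=3, B=A+5, C=B+2, C?"
print(steps)      # e.g. ["A=3", "B=8", "C=10"]
print(answer)     # e.g. 10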

Prompting for explanations improves Adversarial NLI. Is this true? Yes it is true because it weakens superficial cues
Pride Kavumba | Ana Brassard | Benjamin Heinzerling | Kentaro Inui
Findings of the Association for Computational Linguistics: EACL 2023

Explanation prompts ask language models not only to assign a particular label to a given input, such as true, entailment, or contradiction in the case of natural language inference, but also to generate a free-text explanation that supports this label. For example: “This is label because explanation.” While this type of prompt was originally introduced with the aim of improving model interpretability, we show here that explanation prompts also improve robustness to adversarial perturbations in natural language inference benchmarks. Compared to prompting for labels only, explanation prompting consistently yields stronger performance on adversarial benchmarks, outperforming the state of the art on the Adversarial Natural Language Inference, Counterfactually-Augmented Natural Language Inference, and SNLI-Hard datasets. We argue that the increase in robustness is due to the fact that prompting for explanations weakens superficial cues. Specifically, single tokens that are highly predictive of the correct answer in the label-only setting become uninformative when the model also has to generate explanations.
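
To make the contrast concrete, a minimal sketch of the two prompt styles (the exact templates here are our assumption, not necessarily those used in the paper):

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

# Label-only prompt: a single label token can answer it.
label_only = (
    f"Premise: {premise}\nHypothesis: {hypothesis}\n"
    "Question: entailment, neutral, or contradiction?\nAnswer:"
)

# Explanation prompt: the model must produce "This is <label> because
# <explanation>.", so a lone cue-predicted token is no longer a complete answer.
with_explanation = (
    f"Premise: {premise}\nHypothesis: {hypothesis}\n"
    "Question: entailment, neutral, or contradiction? Answer in the form "
    "\"This is <label> because <explanation>.\"\nAnswer:"
)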

Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?
Keito Kudo | Yoichi Aoki | Tatsuki Kuribayashi | Ana Brassard | Masashi Yoshikawa | Keisuke Sakaguchi | Kentaro Inui
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

Compositionality is a pivotal property of symbolic reasoning. However, how well recent neural models capture compositionality remains underexplored in symbolic reasoning tasks. This study empirically addresses this question by systematically examining recently published pre-trained seq2seq models with a carefully controlled dataset of multi-hop arithmetic symbolic reasoning. We introduce a skill tree on compositionality in arithmetic symbolic reasoning that defines hierarchical levels of complexity along three compositionality dimensions: systematicity, productivity, and substitutivity. Our experiments revealed that among the three types of composition, the models struggled most with systematicity, performing poorly even on relatively simple compositions. This difficulty was not resolved even after training the models with intermediate reasoning steps.
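
As a rough illustration of the three dimensions over multi-hop arithmetic (our own toy probes, not the paper's skill tree):

train_example = "A=2, B=A+3, B?"                      # depth-2 chain seen in training

# Productivity: generalize to longer chains than those seen in training.
productivity_probe = "A=2, B=A+3, C=B+1, D=C+4, D?"

# Systematicity: recombine familiar primitives in a novel configuration,
# e.g. addition over two variables rather than a variable and a constant.
systematicity_probe = "A=2, B=5, C=A+B, C?"

# Substitutivity: replace a sub-expression with an equivalent one and
# expect the answer to stay the same.
substitutivity_probe = "A=1+1, B=A+3, B?"             # A=2 rewritten as 1+1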

2022

Context Limitations Make Neural Language Models More Human-Like
Tatsuki Kuribayashi | Yohei Oseki | Ana Brassard | Kentaro Inui
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Language models (LMs) have been used in cognitive modeling as well as engineering studies: they compute information-theoretic complexity metrics that simulate humans’ cognitive load during reading. This study highlights a limitation of modern neural LMs as the model of choice for this purpose: there is a discrepancy between their context access capacities and those of humans. Our results showed that constraining the LMs’ context access improved their simulation of human reading behavior. We also showed that LM-human gaps in context access were associated with specific syntactic constructions; incorporating syntactic biases into LMs’ context access might enhance their cognitive plausibility.
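
A minimal sketch of the core idea, assuming a GPT-2-style causal LM from Hugging Face transformers; truncating the left context to the last k tokens is one simple way to constrain context access (the paper's constraints may differ):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def limited_context_surprisal(text, k=5):
    ids = tok(text, return_tensors="pt").input_ids[0]
    surprisals = []
    for i in range(1, len(ids)):
        ctx = ids[max(0, i - k):i].unsqueeze(0)        # keep only the last k tokens
        with torch.no_grad():
            logits = model(ctx).logits[0, -1]
        logprob = torch.log_softmax(logits, dim=-1)[ids[i]]
        surprisals.append(-logprob.item())             # -log p(w_i | truncated context)
    return surprisals

print(limited_context_surprisal("The cat sat on the mat.", k=3))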

Chat Translation Error Detection for Assisting Cross-lingual Communications
Yunmeng Li | Jun Suzuki | Makoto Morishita | Kaori Abe | Ryoko Tokuhisa | Ana Brassard | Kentaro Inui
Proceedings of the 3rd Workshop on Evaluation and Comparison of NLP Systems

COPA-SSE: Semi-structured Explanations for Commonsense Reasoning
Ana Brassard | Benjamin Heinzerling | Pride Kavumba | Kentaro Inui
Proceedings of the Thirteenth Language Resources and Evaluation Conference

We present Semi-Structured Explanations for COPA (COPA-SSE), a new crowdsourced dataset of 9,747 semi-structured, English common sense explanations for Choice of Plausible Alternatives (COPA) questions. The explanations are formatted as a set of triple-like common sense statements with ConceptNet relations but freely written concepts. This semi-structured format strikes a balance between the high quality but low coverage of structured data and the lower quality but high coverage of free-form crowdsourcing. Each explanation also includes a set of human-given quality ratings. With their familiar format, the explanations are geared towards commonsense reasoners operating on knowledge graphs and serve as a starting point for ongoing work on improving such systems. The dataset is available at https://github.com/a-brassard/copa-sse.
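
For illustration, a hypothetical entry in this format (the field names are our guesses; consult the repository for the actual schema):

example_entry = {
    "copa_id": 1,
    "premise": "My body cast a shadow over the grass.",
    "question": "cause",
    "choice_1": "The sun was rising.",
    "choice_2": "The grass was cut.",
    "label": 1,
    # Triple-like statements: ConceptNet relations with freely written concepts.
    "explanation": [
        ["the sun rising", "Causes", "light to hit my body"],
        ["light hitting a body", "Causes", "a shadow"],
    ],
    "quality_ratings": [4, 5, 3],   # human-given quality ratings
}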

2021

Learning to Learn to be Right for the Right Reasons
Pride Kavumba | Benjamin Heinzerling | Ana Brassard | Kentaro Inui
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Improving model generalization on held-out data is one of the core objectives in commonsense reasoning. Recent work has shown that models trained on datasets containing superficial cues tend to perform well on easy test sets that contain those cues but poorly on hard test sets without them. Previous approaches have resorted to manual methods of encouraging models not to overfit to superficial cues. While some of these methods have improved performance on hard instances, they have also led to degraded performance on easy instances. Here, we propose to explicitly learn a model that does well on both the easy test set, with superficial cues, and the hard test set, without superficial cues. Using a meta-learning objective, we learn such a model that improves performance on both the easy test set and the hard test set. By evaluating our models on Choice of Plausible Alternatives (COPA) and Commonsense Explanation, we show that our proposed method leads to improved performance on both test sets, with up to a 16.5 percentage point improvement over the baseline.
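
One heavily simplified sketch of such a bi-level objective in PyTorch (our own MAML-style rendering of the general idea, not the authors' exact formulation; the linear model and random batches are placeholders):

import torch
import torch.nn.functional as F
from torch.func import functional_call

model = torch.nn.Linear(16, 2)                 # stand-in for a real COPA model
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.1

def loss_on(params, x, y):
    return F.cross_entropy(functional_call(model, params, (x,)), y)

easy_x, easy_y = torch.randn(8, 16), torch.randint(0, 2, (8,))   # with cues
hard_x, hard_y = torch.randn(8, 16), torch.randint(0, 2, (8,))   # without cues

params = dict(model.named_parameters())
easy_loss = loss_on(params, easy_x, easy_y)
grads = torch.autograd.grad(easy_loss, params.values(), create_graph=True)
adapted = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}

# Meta-objective: parameters adapted on the easy set must also fit the hard
# set, penalizing solutions that rely only on superficial cues.
meta_loss = easy_loss + loss_on(adapted, hard_x, hard_y)
meta_opt.zero_grad()
meta_loss.backward()
meta_opt.step()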

2019

Diamonds in the Rough: Generating Fluent Sentences from Early-Stage Drafts for Academic Writing Assistance
Takumi Ito | Tatsuki Kuribayashi | Hayato Kobayashi | Ana Brassard | Masato Hagiwara | Jun Suzuki | Kentaro Inui
Proceedings of the 12th International Conference on Natural Language Generation

The writing process consists of several stages, such as drafting, revising, editing, and proofreading. Studies on writing assistance, such as grammatical error correction (GEC), have mainly focused on sentence editing and proofreading, where surface-level issues such as typographical, spelling, or grammatical errors should be corrected. We broaden this focus to include the earlier revising stage, where sentences require adjustment of the information they include or major rewriting, and propose Sentence-level Revision (SentRev) as a new writing assistance task. Well-performing systems in this task can help inexperienced authors by producing fluent, complete sentences from their rough, incomplete drafts. For developing and evaluating SentRev models, we build a new, freely available crowdsourced evaluation dataset consisting of incomplete sentences authored by non-native writers paired with their final versions extracted from published academic papers. We also establish baseline performance on SentRev using this newly built evaluation dataset.
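
A hypothetical draft/final pair of the kind the task targets (our own illustration, not drawn from the released dataset), with a crude surface-similarity check as a stand-in for proper evaluation:

import difflib

draft = "we proposed new method for sentence revision , outperform baseline"
final = "We propose a new method for sentence-level revision that outperforms the baselines."

# Any seq2seq model could serve as a SentRev baseline; its output would be
# compared against the published final sentence.
print(difflib.SequenceMatcher(None, draft, final).ratio())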

2018

TakeLab at SemEval-2018 Task 12: Argument Reasoning Comprehension with Skip-Thought Vectors
Ana Brassard | Tin Kuculo | Filip Boltužić | Jan Šnajder
Proceedings of the 12th International Workshop on Semantic Evaluation

This paper describes our system for SemEval-2018 Task 12: the Argument Reasoning Comprehension Task. We utilize skip-thought vectors: sentence-level distributional vectors inspired by popular word embeddings and the skip-gram model. We encode preprocessed sentences from the dataset into vectors, then perform binary supervised classification of the warrant that justifies the use of the reason as support for the claim. We explore a few variations of the model, reaching 54.1% accuracy on the test set, which placed us 16th out of 22 teams participating in the task.
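
A sketch of the classification setup, treating the skip-thought encoder as a black box that yields 4800-dimensional sentence vectors (the dimensionality, feature concatenation, and logistic-regression classifier here are illustrative assumptions):

import numpy as np
from sklearn.linear_model import LogisticRegression

def featurize(claim_vec, reason_vec, warrant_vec):
    # Concatenate the encoded claim, reason, and candidate warrant.
    return np.concatenate([claim_vec, reason_vec, warrant_vec])

# Stand-in features: one row per (claim, reason, warrant) candidate;
# y = 1 if the warrant justifies using the reason as support for the claim.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3 * 4800))
y = rng.integers(0, 2, size=100)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))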