Parth Anand Jawale
2020
Linguist vs. Machine: Rapid Development of Finite-State Morphological Grammars
Sarah Beemer | Zak Boston | April Bukoski | Daniel Chen | Princess Dickens | Andrew Gerlach | Torin Hopkins | Parth Anand Jawale | Chris Koski | Akanksha Malhotra | Piyush Mishra | Saliha Muradoglu | Lan Sang | Tyler Short | Sagarika Shreevastava | Elizabeth Spaulding | Testumichi Umada | Beilei Xiang | Changbing Yang | Mans Hulden
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
Sequence-to-sequence models have proven to be highly successful in learning morphological inflection from examples, as the series of SIGMORPHON/CoNLL shared tasks has shown. It is usually assumed, however, that a linguist working with inflectional examples could in principle develop a gold standard-level morphological analyzer and generator that would surpass a trained neural network model in prediction accuracy, but that doing so may require significant amounts of human labor. In this paper, we discuss an experiment in which a group of people with some linguistic training developed 25+ grammars as part of the shared task, and we weigh the cost/benefit ratio of developing grammars by hand. We also present tools that can help linguists triage complex morphophonological phenomena within a language and hypothesize inflectional class membership. We conclude that a significant development effort by trained linguists to analyze and model morphophonological patterns is required in order to surpass the accuracy of neural models.
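As a rough illustration of the kind of triage the abstract mentions, the minimal Python sketch below (not the authors' tool; all names are hypothetical) groups lemma/inflected-form pairs by the suffix rewrite they imply, which is one simple way a linguist might hypothesize candidate inflectional class membership before writing finite-state rules.

```python
# Illustrative sketch only: cluster lemmas by the suffix alternation observed
# between lemma and inflected form, as a first pass at inflection classes.
from collections import defaultdict


def suffix_rule(lemma: str, form: str) -> tuple:
    """Return (lemma_suffix, form_suffix) after stripping the longest common
    prefix, e.g. ("cry", "cried") -> ("y", "ied")."""
    i = 0
    while i < min(len(lemma), len(form)) and lemma[i] == form[i]:
        i += 1
    return lemma[i:], form[i:]


def hypothesize_classes(pairs):
    """Map each observed suffix rewrite to the set of lemmas exhibiting it."""
    classes = defaultdict(set)
    for lemma, form in pairs:
        classes[suffix_rule(lemma, form)].add(lemma)
    return classes


if __name__ == "__main__":
    data = [("walk", "walked"), ("talk", "talked"),
            ("cry", "cried"), ("try", "tried")]
    for rule, lemmas in hypothesize_classes(data).items():
        print(rule, sorted(lemmas))
```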
Structured Tuning for Semantic Role Labeling
Tao Li | Parth Anand Jawale | Martha Palmer | Vivek Srikumar
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Recent neural network-driven semantic role labeling (SRL) systems have shown impressive improvements in F1 scores. These improvements are due to expressive input representations, which, at least on the surface, are orthogonal to the knowledge-rich constrained decoding mechanisms that helped linear SRL models. Introducing the benefits of structure to inform neural models presents a methodological challenge. In this paper, we present a structured tuning framework that improves models using softened constraints only at training time. Our framework leverages the expressiveness of neural networks and provides supervision with structured loss components. We start with a strong baseline (RoBERTa) to validate the impact of our approach, and show that our framework outperforms the baseline by learning to comply with declarative constraints. Additionally, our experiments with smaller training sizes show that we can achieve consistent improvements in low-resource scenarios.
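To make the idea of a softened constraint added only at training time concrete, here is a hedged sketch (not the paper's actual loss or constraints): a differentiable relaxation of the common SRL constraint that each core role appears at most once per predicate, added as a penalty on top of the usual tagging loss. All function and parameter names are illustrative.

```python
# Hedged sketch of a structured loss component: a soft "unique core role"
# penalty combined with the standard cross-entropy tagging loss.
import torch
import torch.nn.functional as F


def soft_unique_role_penalty(logits: torch.Tensor, role_ids) -> torch.Tensor:
    """logits: (seq_len, num_labels) scores for one predicate's arguments.
    Penalize expected token counts above 1 for each listed core role label,
    a differentiable relaxation of 'each core role appears at most once'."""
    probs = F.softmax(logits, dim=-1)            # (seq_len, num_labels)
    expected_counts = probs[:, role_ids].sum(0)  # expected count per core role
    return F.relu(expected_counts - 1.0).sum()


def training_loss(logits, gold_labels, role_ids, constraint_weight=0.1):
    """Supervised loss plus the softened constraint, used at training time only."""
    ce = F.cross_entropy(logits, gold_labels)
    return ce + constraint_weight * soft_unique_role_penalty(logits, role_ids)
```

At inference time no constrained decoding is needed under this scheme; the model has been nudged toward constraint-satisfying outputs during training.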