Matthias Boehm
2024
Version Control for Speech Corpora
Vlad Dumitru | Matthias Boehm | Martin Hagmüller | Barbara Schuppler
Proceedings of the 20th Conference on Natural Language Processing (KONVENS 2024)
2020
Learning Explainable Linguistic Expressions with Neural Inductive Logic Programming for Sentence Classification
Prithviraj Sen | Marina Danilevsky | Yunyao Li | Siddhartha Brahma | Matthias Boehm | Laura Chiticariu | Rajasekar Krishnamurthy
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Interpretability of predictive models is becoming increasingly important with growing adoption in the real world. We present RuleNN, a neural network architecture for learning transparent models for sentence classification. The models are in the form of rules expressed in first-order logic, a dialect with well-defined, human-understandable semantics. More precisely, RuleNN learns linguistic expressions (LE) built on top of predicates extracted using shallow natural language understanding. Our experimental results show that RuleNN outperforms statistical relational learning and other neuro-symbolic methods, and performs comparably with black-box recurrent neural networks. Our user studies confirm that the learned LEs are explainable and capture domain semantics. Moreover, allowing domain experts to modify LEs and instill more domain knowledge leads to human-machine co-creation of models with better performance.
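The abstract gives no implementation details, so the following is only a minimal sketch of the general idea of a linguistic expression as a conjunction of predicates over a shallow parse of a sentence. The Token structure, predicate names, and the example rule below are illustrative assumptions, not RuleNN's actual API or learned rules.

```python
# Toy sketch (not the RuleNN implementation): a "linguistic expression"
# modeled as a conjunction of predicates evaluated on a shallow parse.
# All names below (Token, contains_lemma, contains_pos, refund_rule)
# are hypothetical and chosen only for illustration.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Token:
    text: str
    pos: str      # part-of-speech tag from a shallow NLU pipeline
    lemma: str

Sentence = List[Token]
Predicate = Callable[[Sentence], bool]

def contains_lemma(lemma: str) -> Predicate:
    """True if any token in the sentence has the given lemma."""
    return lambda s: any(t.lemma == lemma for t in s)

def contains_pos(pos: str) -> Predicate:
    """True if any token carries the given POS tag."""
    return lambda s: any(t.pos == pos for t in s)

def conjunction(*preds: Predicate) -> Predicate:
    """A linguistic expression here is simply an AND over predicates."""
    return lambda s: all(p(s) for p in preds)

# Hypothetical rule for a 'refund request' class:
# the sentence mentions the lemma "refund" and contains a modal verb (MD).
refund_rule = conjunction(contains_lemma("refund"), contains_pos("MD"))

sentence = [
    Token("I", "PRP", "I"),
    Token("would", "MD", "would"),
    Token("like", "VB", "like"),
    Token("a", "DT", "a"),
    Token("refund", "NN", "refund"),
]
print(refund_rule(sentence))  # True
```

In the paper's setting, such rules are not hand-written as above but learned end-to-end by the neural architecture, and can then be inspected and edited by domain experts.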