Ulme Wennberg
2024
Exploring Internal Numeracy in Language Models: A Case Study on ALBERT
Ulme Wennberg | Gustav Eje Henter
Proceedings of the 2nd Workshop on Mathematical Natural Language Processing @ LREC-COLING 2024
Learned Transformer Position Embeddings Have a Low-Dimensional Structure
Ulme Wennberg | Gustav Henter
Proceedings of the 9th Workshop on Representation Learning for NLP (RepL4NLP-2024)
2021
The Case for Translation-Invariant Self-Attention in Transformer-Based Language Models
Ulme Wennberg | Gustav Eje Henter
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
2019
Entity, Relation, and Event Extraction with Contextualized Span Representations
David Wadden | Ulme Wennberg | Yi Luan | Hannaneh Hajishirzi
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)