Joe Davison


2021

Datasets: A Community Library for Natural Language Processing
Quentin Lhoest | Albert Villanova del Moral | Yacine Jernite | Abhishek Thakur | Patrick von Platen | Suraj Patil | Julien Chaumond | Mariama Drame | Julien Plu | Lewis Tunstall | Joe Davison | Mario Šaško | Gunjan Chhablani | Bhavitvya Malik | Simon Brandeis | Teven Le Scao | Victor Sanh | Canwen Xu | Nicolas Patry | Angelina McMillan-Major | Philipp Schmid | Sylvain Gugger | Clément Delangue | Théo Matussière | Lysandre Debut | Stas Bekman | Pierric Cistac | Thibault Goehringer | Victor Mustar | François Lagunas | Alexander Rush | Thomas Wolf
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

The scale, variety, and quantity of publicly-available NLP datasets have grown rapidly as researchers propose new tasks, larger models, and novel benchmarks. Datasets is a community library for contemporary NLP designed to support this ecosystem. Datasets aims to standardize end-user interfaces, versioning, and documentation, while providing a lightweight front-end that behaves the same way for small datasets as for internet-scale corpora. The design of the library incorporates a distributed, community-driven approach to adding datasets and documenting usage. After a year of development, the library now includes more than 650 unique datasets, has more than 250 contributors, and has helped support a variety of novel cross-dataset research projects and shared tasks. The library is available at https://github.com/huggingface/datasets.
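A minimal sketch of the standardized end-user interface the abstract describes, assuming the public datasets package; the dataset name "squad" is only an illustrative choice of one hosted dataset, not a prescribed one.

```python
# Illustrative use of the standardized interface; "squad" is one example dataset.
from datasets import load_dataset

# Download (or reuse a cached copy of) a dataset split and its metadata.
squad = load_dataset("squad", split="train")

# Every dataset exposes the same lightweight, memory-mapped table interface.
print(squad.features)        # typed schema (question, context, answers, ...)
print(squad[0]["question"])  # row access without loading the full set into RAM

# Batched, cached preprocessing uses the same API regardless of dataset size.
with_lengths = squad.map(lambda ex: {"context_len": len(ex["context"])})
```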

2020

Transformers: State-of-the-Art Natural Language Processing
Thomas Wolf | Lysandre Debut | Victor Sanh | Julien Chaumond | Clement Delangue | Anthony Moi | Pierric Cistac | Tim Rault | Remi Louf | Morgan Funtowicz | Joe Davison | Sam Shleifer | Patrick von Platen | Clara Ma | Yacine Jernite | Julien Plu | Canwen Xu | Teven Le Scao | Sylvain Gugger | Mariama Drame | Quentin Lhoest | Alexander Rush
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

Recent progress in natural language processing has been driven by advances in both model architecture and model pretraining. Transformer architectures have facilitated building higher-capacity models, and pretraining has made it possible to effectively utilize this capacity for a wide variety of tasks. Transformers is an open-source library with the goal of opening up these advances to the wider machine learning community. The library consists of carefully engineered state-of-the-art Transformer architectures under a unified API. Backing this library is a curated collection of pretrained models made by and available for the community. Transformers is designed to be extensible by researchers, simple for practitioners, and fast and robust in industrial deployments. The library is available at https://github.com/huggingface/transformers.
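A brief sketch of the unified API mentioned above, assuming the public transformers package; the checkpoint name below is an illustrative example of a hub model, not a required choice.

```python
# The same Auto* classes load any supported architecture from the model hub.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

# Tokenize, run the model, and map the top logit back to a label.
inputs = tokenizer("Transformers makes pretrained models easy to use.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # e.g. "POSITIVE"
```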

2019

Commonsense Knowledge Mining from Pretrained Models
Joe Davison | Joshua Feldman | Alexander Rush
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Inferring commonsense knowledge is a key challenge in machine learning. Due to the sparsity of training data, previous work has shown that supervised methods for commonsense knowledge mining underperform when evaluated on novel data. In this work, we develop a method for generating commonsense knowledge using a large, pre-trained bidirectional language model. By transforming relational triples into masked sentences, we can use this model to rank a triple’s validity by the estimated pointwise mutual information between the two entities. Since we do not update the weights of the bidirectional model, our approach is not biased by the coverage of any one commonsense knowledge base. Though we do worse on a held-out test set than models explicitly trained on a corresponding training set, our approach outperforms these methods when mining commonsense knowledge from new sources, suggesting that our unsupervised technique generalizes better than current supervised approaches.
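A simplified illustration of the masking idea, not the paper's exact scoring procedure: the sentence template and the single masked-token log-probability below are stand-ins for the paper's pointwise mutual information estimate, and the model and template choices are assumptions made for the example.

```python
# Sketch: turn a relational triple into a masked sentence and score the tail
# with a pretrained bidirectional LM (simplification of the PMI-based ranking).
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def tail_log_prob(head, relation_template, tail):
    """Log-probability of a single-token tail filling the mask,
    e.g. ("bread", "{h} is typically used for [MASK].", "eating")."""
    text = relation_template.format(h=head)
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    log_probs = torch.log_softmax(logits, dim=-1)
    return log_probs[tokenizer.convert_tokens_to_ids(tail)].item()

# Higher scores should indicate more plausible triples; no weights are updated.
print(tail_log_prob("bread", "{h} is typically used for [MASK].", "eating"))
print(tail_log_prob("bread", "{h} is typically used for [MASK].", "flying"))
```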