Janusz Tracz
2020
KLEJ: Comprehensive Benchmark for Polish Language Understanding
Piotr Rybak | Robert Mroczkowski | Janusz Tracz | Ireneusz Gawlik
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
In recent years, a series of Transformer-based models unlocked major improvements in general natural language understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which allow for a fair comparison of the proposed methods. However, such benchmarks are available only for a handful of languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for Polish language understanding, accompanied by an online leaderboard. It consists of a diverse set of tasks, adopted from existing datasets for named entity recognition, question answering, textual entailment, and others. We also introduce a new sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and applications. Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language, which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an extensive evaluation, including several standard baselines and recently proposed multilingual Transformer-based models.
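The abstract mentions that models are ranked on the leaderboard by their average performance across the benchmark's tasks. A minimal sketch of such an aggregation follows; the task names and scores are purely illustrative placeholders, not official KLEJ results.

```python
# Sketch of a multi-task leaderboard score: average the per-task metrics
# into one number. Task names and values below are illustrative only.

def average_benchmark_score(task_scores: dict) -> float:
    """Average per-task metrics into a single leaderboard score."""
    if not task_scores:
        raise ValueError("no task scores provided")
    return sum(task_scores.values()) / len(task_scores)

scores = {
    "NKJP-NER": 92.0,  # named entity recognition (hypothetical score)
    "CDSC-E": 91.0,    # textual entailment (hypothetical score)
    "AR": 85.0,        # Allegro Reviews sentiment (hypothetical score)
}
print(round(average_benchmark_score(scores), 2))  # → 89.33
```

A single averaged number makes submissions directly comparable while still rewarding models that generalize rather than excel on one task.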
BERT-based similarity learning for product matching
Janusz Tracz | Piotr Iwo Wójcik | Kalina Jasinska-Kobus | Riccardo Belluzzo | Robert Mroczkowski | Ireneusz Gawlik
Proceedings of Workshop on Natural Language Processing in E-Commerce
Product matching, i.e., being able to infer the product being sold for a merchant-created offer, is crucial for any e-commerce marketplace, enabling product-based navigation, price comparisons, product reviews, etc. This is a challenging task, mostly due to the extent of the product catalog, data heterogeneity, missing product representatives, and varying levels of data quality. Moreover, new products are introduced every day, making it difficult to cast the problem as a classification task. In this work, we apply BERT-based models in a similarity learning setup to solve the product matching problem. We provide a thorough ablation study, showing the impact of architecture and training objective choices. Application of Transformer-based architectures and proper sampling techniques significantly boosts performance for a range of e-commerce domains, allowing for production deployment.
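The similarity learning setup described above can be sketched with a triplet margin loss, a common training objective in this setting: an offer embedding (anchor) is pulled toward its matching product (positive) and pushed away from an unrelated one (negative). This is a hedged illustration, not the paper's exact objective; the random vectors stand in for BERT-encoded offer and product titles.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.5):
    """Hinge loss: penalize when the anchor is not closer to the
    positive than to the negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)                    # stand-in offer embedding
positive = anchor + 0.01 * rng.normal(size=8)  # near-duplicate: matching product
negative = rng.normal(size=8)                  # unrelated product

# Matching pair sits close, so the hinge is driven by the margin term;
# the loss is always non-negative by construction.
print(triplet_margin_loss(anchor, positive, negative) >= 0.0)  # → True
```

Framing matching as similarity learning, rather than classification over a fixed product set, is what lets new products be handled at inference time: a new catalog entry only needs an embedding, not a new output class.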