Lele Cao


2023

A Scalable and Adaptive System to Infer the Industry Sectors of Companies: Prompt + Model Tuning of Generative Language Models
Lele Cao | Vilhelm von Ehrenheim | Astrid Berghult | Cecilia Henje | Richard Anselmo Stahl | Joar Wandborg | Sebastian Stan | Armin Catovic | Erik Ferm | Hannes Ingelhag
Proceedings of the Fifth Workshop on Financial Technology and Natural Language Processing and the Second Multimodal AI For Financial Forecasting

Using Deep Learning to Find the Next Unicorn: A Practical Synthesis on Optimization Target, Feature Selection, Data Split and Evaluation Strategy
Lele Cao | Vilhelm von Ehrenheim | Sebastian Stan | Xiaoxue Li | Alexandra Lutz
Proceedings of the Fifth Workshop on Financial Technology and Natural Language Processing and the Second Multimodal AI For Financial Forecasting

2021

PAUSE: Positive and Annealed Unlabeled Sentence Embedding
Lele Cao | Emil Larsson | Vilhelm von Ehrenheim | Dhiana Deva Cavalcanti Rocha | Anna Martin | Sonja Horn
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Sentence embedding refers to a set of effective and versatile techniques for converting raw text into numerical vector representations that can be used in a wide range of natural language processing (NLP) applications. The majority of these techniques are either supervised or unsupervised. Compared to the unsupervised methods, the supervised ones make fewer assumptions about optimization objectives and usually achieve better results. However, their training requires a large number of labeled sentence pairs, which are not available in many industrial scenarios. To address this, we propose a generic and end-to-end approach – PAUSE (Positive and Annealed Unlabeled Sentence Embedding), capable of learning high-quality sentence embeddings from a partially labeled dataset. We experimentally show that PAUSE achieves, and sometimes surpasses, state-of-the-art results using only a small fraction of labeled sentence pairs on various benchmark tasks. When applied to a real industrial use case where labeled samples are scarce, PAUSE enables us to extend our dataset without the burden of extensive manual annotation work.
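To make the positive-and-unlabeled (PU) idea in the abstract concrete, below is a minimal sketch of training a sentence encoder on partially labeled pairs, where known-positive pairs are pulled together and unlabeled pairs are pushed apart with a weight that is annealed over training. The encoder, loss form, annealing schedule, and all names and dimensions are illustrative assumptions, not the authors' actual PAUSE implementation.

```python
# Illustrative PU-style sentence-embedding training loop (assumed, not PAUSE itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanPoolEncoder(nn.Module):
    """Toy sentence encoder: embed tokens and mean-pool them."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        return self.emb(token_ids).mean(dim=1)    # (batch, dim)

def pu_pair_loss(z_a, z_b, is_positive, anneal):
    """Positive pairs are pushed toward cosine similarity 1; unlabeled pairs
    are treated as presumed negatives pushed toward 0, scaled by `anneal`."""
    sim = F.cosine_similarity(z_a, z_b)           # (batch,)
    pos_loss = ((1.0 - sim) ** 2) * is_positive
    unl_loss = (sim ** 2) * (1.0 - is_positive)
    return (pos_loss + anneal * unl_loss).mean()

encoder = MeanPoolEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

num_steps = 100
for step in range(num_steps):
    # Fake batch: token ids for both sides of each pair, plus a label that is
    # 1 for known-positive pairs and 0 for unlabeled pairs.
    a = torch.randint(0, 1000, (32, 12))
    b = torch.randint(0, 1000, (32, 12))
    is_positive = (torch.rand(32) < 0.2).float()

    # Assumed schedule: the unlabeled (presumed-negative) term is weighted
    # more heavily as training progresses.
    anneal = step / num_steps
    loss = pu_pair_loss(encoder(a), encoder(b), is_positive, anneal)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch, annealing keeps noisy "unlabeled as negative" gradients small early in training and only trusts them more once the embeddings have started to organize; the real method's loss and schedule are described in the paper.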