Mahmoud Khalil
2022
An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels
Taylor Sorensen | Joshua Robinson | Christopher Rytting | Alexander Shaw | Kyle Rogers | Alexia Delorey | Mahmoud Khalil | Nancy Fulda | David Wingate
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Pre-trained language models derive substantial linguistic and factual knowledge from the massive corpora on which they are trained, and prompt engineering seeks to align these models to specific tasks. Unfortunately, existing prompt engineering methods require significant amounts of labeled data, access to model parameters, or both. We introduce a new method for selecting prompt templates without labeled examples and without direct access to the model. Specifically, over a set of candidate templates, we choose the template that maximizes the mutual information between the input and the corresponding model output. Across 8 datasets representing 7 distinct NLP tasks, we show that when a template has high mutual information, it also has high accuracy on the task. On the largest model, selecting prompts with our method gets 90% of the way from the average prompt accuracy to the best prompt accuracy and requires no ground truth labels.
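The selection criterion described in the abstract can be sketched compactly. Below is a minimal, illustrative Python implementation of mutual-information-based template selection, assuming the model exposes a probability distribution over candidate answers for each input; the entropy-based estimate I(X;Y) = H(Y) − H(Y|X) over the model's output distributions and all function names are assumptions for illustration, not the paper's released code.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (in nats) of a probability distribution."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + eps)))

def mutual_information(output_dists):
    """Estimate I(X; Y) from per-input output distributions.

    output_dists: array of shape (n_inputs, n_labels); row i is the
    model's distribution over answers for input i under one template.
    I(X; Y) = H(Y) - H(Y | X), with H(Y) computed on the marginal
    (mean) output distribution and H(Y | X) as the mean per-input entropy.
    """
    dists = np.asarray(output_dists, dtype=float)
    h_y = entropy(dists.mean(axis=0))
    h_y_given_x = float(np.mean([entropy(d) for d in dists]))
    return h_y - h_y_given_x

def select_template(dists_by_template):
    """Pick the template with the highest mutual information.

    dists_by_template: list of (n_inputs, n_labels) arrays, one per
    candidate template. No ground-truth labels are required.
    """
    scores = [mutual_information(d) for d in dists_by_template]
    return int(np.argmax(scores))
```

Note that the criterion rewards templates whose outputs are confident for each individual input (low conditional entropy) yet diverse across inputs (high marginal entropy), which is why no labels are needed.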
2020
ASU_OPTO at OSACT4 - Offensive Language Detection for Arabic text
Amr Keleg | Samhaa R. El-Beltagy | Mahmoud Khalil
Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection
In recent years, toxic comments and offensive speech have been polluting the internet, and manually inspecting these comments has become an unmanageable task. A machine learning model that can filter offensive Arabic content is therefore in high demand. In this paper, we describe the model we submitted to the Shared Task on Offensive Language Detection organized as part of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools. Our model uses a transformer-based model (BERT) to detect offensive content. We placed fourth in subtask A (detecting Offensive Speech) and third in subtask B (detecting Hate Speech).
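As a rough illustration of the approach the abstract describes, here is a minimal inference sketch using a BERT-style sequence classifier from the Hugging Face `transformers` library. The checkpoint name and label mapping are placeholders, not the submission's actual fine-tuned model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint: the paper only states that a BERT-style
# transformer was fine-tuned for offensive-content classification.
MODEL_NAME = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

def classify(text):
    """Return 1 if the comment is predicted offensive, else 0 (assumed labels)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))
```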